hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0a2b80b362719ff5517a62a53d696cdd95e530dd | 17,262 | py | Python | prepare_data.py | whut2962575697/gat_sementic_segmentation | 3e280163b373a564462c5816578cb1cd0ba8ed32 | [
"MIT"
] | null | null | null | prepare_data.py | whut2962575697/gat_sementic_segmentation | 3e280163b373a564462c5816578cb1cd0ba8ed32 | [
"MIT"
] | null | null | null | prepare_data.py | whut2962575697/gat_sementic_segmentation | 3e280163b373a564462c5816578cb1cd0ba8ed32 | [
"MIT"
] | null | null | null | # -*- encoding: utf-8 -*-
'''
@File : prepare_data.py
@Contact : whut.hexin@foxmail.com
@License : (C)Copyright 2017-2020, HeXin
@Modify Time @Author @Version @Description
------------ ------- -------- -----------
2020/7/15 14:20 xin 1.0 None
'''
import numpy as np
from skimage.io import imread, imsave
import pickle
import os
import json
import random
import shutil
from PIL import Image
import cv2
# get node features and edge adj matrix
def calculate_feature(filename, save_path, small_roi, large_roi, gt, img):
small_roi_img = imread(small_roi)
large_roi_img = imread(large_roi)
    gt_img = imread(gt)
rs_img = imread(img)
obj_map = {}
node_num = 0
feature_dim = 3
# n_cls = 12
for i, small_roi_row, large_roi_row, gt_row, rs_row in zip(range(small_roi_img.shape[0]), small_roi_img, large_roi_img, gt_img, rs_img):
for j, small_roi_cell, large_roi_cell, gt_cell, rs_cell in zip(range(small_roi_img.shape[1]), small_roi_row, large_roi_row, gt_row, rs_row):
if large_roi_cell not in obj_map:
obj_map[large_roi_cell] = {}
if small_roi_cell not in obj_map[large_roi_cell]:
node_num = node_num + 1
obj_map[large_roi_cell][small_roi_cell] = {'feature_idx':[(i, j)], 'x_min': i, 'y_min': j, 'x_max': i, 'y_max': j, 'gt': {gt_cell: 1}, 'features':[rs_cell]}
else:
obj_map[large_roi_cell][small_roi_cell]['feature_idx'].append((i, j))
obj_map[large_roi_cell][small_roi_cell]['features'].append(rs_cell)
if i > obj_map[large_roi_cell][small_roi_cell]['x_max']:
obj_map[large_roi_cell][small_roi_cell]['x_max'] = i
if j > obj_map[large_roi_cell][small_roi_cell]['y_max']:
obj_map[large_roi_cell][small_roi_cell]['y_max'] = j
if gt_cell not in obj_map[large_roi_cell][small_roi_cell]['gt']:
obj_map[large_roi_cell][small_roi_cell]['gt'][gt_cell] = 1
else:
obj_map[large_roi_cell][small_roi_cell]['gt'][gt_cell] = obj_map[large_roi_cell][small_roi_cell]['gt'][gt_cell] + 1
adj_mat = np.zeros((node_num, node_num)).astype(np.uint8)
feature_mat = np.zeros((node_num, feature_dim)).astype(np.float32)
label_mat = np.zeros((node_num)).astype(np.uint8)
roi_mat = np.zeros((node_num, 5)).astype(np.uint8)
n_d = 0
mask_json = []
for large_obj_id, large_obj in obj_map.items():
n_id_list = []
for small_obj_id, small_obj in large_obj.items():
mask_json.append(small_obj['feature_idx'])
n_id_list.append(n_d)
fea = [0, 0, 0]
for feature in small_obj['features']:
fea[0] = fea[0] + feature[0]/ 255.0
fea[1] = fea[1] + feature[1]/ 255.0
fea[2] = fea[2] + feature[2]/ 255.0
fea[0] = fea[0] / len(small_obj['features'])
fea[1] = fea[1] / len(small_obj['features'])
fea[2] = fea[2] / len(small_obj['features'])
feature_mat[n_d] = fea
roi_mat[n_d] = [0, small_obj['x_min'], small_obj['y_min'], small_obj['x_max'], small_obj['y_max']]
main_cls = [0, 0]
for _cls, count in small_obj['gt'].items():
if count>main_cls[1]:
main_cls[0] = _cls
main_cls[1] = count
label_mat[n_d] = main_cls[0]-1
n_d = n_d + 1
for n_id_1 in n_id_list:
for n_id_2 in n_id_list:
adj_mat[n_id_1, n_id_2] = 1
print(adj_mat)
print(feature_mat)
print(label_mat)
print(roi_mat)
if not os.path.exists(os.path.join(save_path, 'imgs')):
os.mkdir(os.path.join(save_path, 'imgs'))
if not os.path.exists(os.path.join(save_path, 'node_features')):
os.mkdir(os.path.join(save_path, 'node_features'))
if not os.path.exists(os.path.join(save_path, 'roi')):
os.mkdir(os.path.join(save_path, 'roi'))
if not os.path.exists(os.path.join(save_path, 'edge_adjs')):
os.mkdir(os.path.join(save_path, 'edge_adjs'))
if not os.path.exists(os.path.join(save_path, 'obj_masks')):
os.mkdir(os.path.join(save_path, 'obj_masks'))
if not os.path.exists(os.path.join(save_path, 'labels')):
os.mkdir(os.path.join(save_path, 'labels'))
shutil.copy(img, os.path.join(save_path, 'imgs', filename+'.tif'))
    with open(os.path.join(save_path, 'node_features', filename+'.pkl'), 'wb') as f:
        pickle.dump(feature_mat, f)  # serialize
    with open(os.path.join(save_path, 'roi', filename+'.pkl'), 'wb') as f:
        pickle.dump(roi_mat, f)  # serialize
    with open(os.path.join(save_path, 'edge_adjs', filename+'.pkl'), 'wb') as f:
        pickle.dump(adj_mat, f)  # serialize
    with open(os.path.join(save_path, 'labels', filename+'.pkl'), 'wb') as f:
        pickle.dump(label_mat, f)  # serialize
with open(os.path.join(save_path, 'obj_masks', filename+'.json'), 'w') as f:
json.dump(mask_json, f)
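# Sanity-check sketch (not part of the original script; reuses the same
# save_path/filename values passed to calculate_feature above):
#   with open(os.path.join(save_path, 'edge_adjs', filename + '.pkl'), 'rb') as f:
#       adj = pickle.load(f)      # (node_num, node_num) uint8; 1 links objects in one large region
#   with open(os.path.join(save_path, 'node_features', filename + '.pkl'), 'rb') as f:
#       feats = pickle.load(f)    # (node_num, 3) float32 mean RGB, scaled to [0, 1]
#   assert adj.shape[0] == feats.shape[0]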
def calculate_obj(filename, save_path, small_roi, large_roi, gt, img):
small_roi_img = imread(small_roi)
large_roi_img = imread(large_roi)
gt_img = imread(gt)
rs_img = imread(img)
obj_map = {}
node_num = 0
feature_dim = 3
# n_cls = 12
for i, small_roi_row, large_roi_row, gt_row, rs_row in zip(range(small_roi_img.shape[0]), small_roi_img,
large_roi_img, gt_img, rs_img):
for j, small_roi_cell, large_roi_cell, gt_cell, rs_cell in zip(range(small_roi_img.shape[1]), small_roi_row,
large_roi_row, gt_row, rs_row):
if large_roi_cell not in obj_map:
obj_map[large_roi_cell] = {}
if small_roi_cell not in obj_map[large_roi_cell]:
node_num = node_num + 1
# if small_roi_cell == 25897:
# print(i, j)
obj_map[large_roi_cell][small_roi_cell] = {'feature_idx': [(i, j)], 'x_min': j, 'y_min': i, 'x_max': j,
'y_max': i, 'gt': {gt_cell: 1}, 'features': [rs_cell]}
else:
obj_map[large_roi_cell][small_roi_cell]['feature_idx'].append((i, j))
obj_map[large_roi_cell][small_roi_cell]['features'].append(rs_cell)
if j < obj_map[large_roi_cell][small_roi_cell]['x_min']:
obj_map[large_roi_cell][small_roi_cell]['x_min'] = j
if j > obj_map[large_roi_cell][small_roi_cell]['x_max']:
obj_map[large_roi_cell][small_roi_cell]['x_max'] = j
if i < obj_map[large_roi_cell][small_roi_cell]['y_min']:
obj_map[large_roi_cell][small_roi_cell]['y_min'] = i
if i > obj_map[large_roi_cell][small_roi_cell]['y_max']:
obj_map[large_roi_cell][small_roi_cell]['y_max'] = i
if gt_cell not in obj_map[large_roi_cell][small_roi_cell]['gt']:
obj_map[large_roi_cell][small_roi_cell]['gt'][gt_cell] = 1
else:
obj_map[large_roi_cell][small_roi_cell]['gt'][gt_cell] = \
obj_map[large_roi_cell][small_roi_cell]['gt'][gt_cell] + 1
for large_obj_id, large_obj in obj_map.items():
for small_obj_id, small_obj in large_obj.items():
print(small_obj['x_min'], small_obj['y_min'], small_obj['x_max'], small_obj['y_max'])
if small_obj['x_max'] - small_obj['x_min'] == 0 or small_obj['y_max'] - small_obj['y_min'] == 0:
node_num = node_num - 1
adj_mat = np.zeros((node_num, node_num)).astype(np.uint8)
feature_mat = np.zeros((node_num, feature_dim)).astype(np.float32)
label_mat = np.zeros((node_num)).astype(np.uint8)
roi_mat = np.zeros((node_num, 5)).astype(np.uint8)
n_d = 0
mask_json = []
mask_objs = np.zeros((node_num, 224, 224)).astype(np.uint8)
resized_mask_objs = []
for large_obj_id, large_obj in obj_map.items():
n_id_list = []
for small_obj_id, small_obj in large_obj.items():
if small_obj['x_max'] - small_obj['x_min'] == 0 or small_obj['y_max'] - small_obj['y_min'] == 0:
continue
mask_json.append(small_obj['feature_idx'])
print(len(small_obj['feature_idx']))
print(small_obj['x_min'], small_obj['y_min'], small_obj['x_max'], small_obj['y_max'])
for (i_x, j_y) in small_obj['feature_idx']:
# print(i_x, j_y)
mask_objs[n_d, i_x, j_y] = 1
print(np.sum(mask_objs[n_d]))
cv2.imwrite(r'D:\new_dataset\new_dataset\gat\temp/'+filename+'_0_'+str(n_d)+'.jpg', mask_objs[n_d])
# scipy.misc.toimage(mask_objs[n_d], cmin=0.0, cmax=...).save('outfile.jpg')
# scipy.misc.imsave(r'D:\new_dataset\new_dataset\gat\temp/'+filename+'_'+str(n_d)+'.jpg', mask_objs[n_d])
# imsave(r'D:\new_dataset\new_dataset\gat\temp/'+filename+'_'+str(n_d)+'.jpg', mask_objs[n_d])
new_mask_obj = mask_objs[n_d, small_obj['y_min']:small_obj['y_max'], small_obj['x_min']:small_obj['x_max']]
print(n_d, new_mask_obj.shape)
# new_img = Image.fromarray(new_mask_obj).resize((7, 7))
new_img = cv2.resize(new_mask_obj, (7, 7))
# tt = Image.fromarray(mask_objs[n_d]).save(r'D:\new_dataset\new_dataset\gat\temp/'+filename+'_'+str(n_d)+'.jpg')
cv2.imwrite(r'D:\new_dataset\new_dataset\gat\temp/' + filename + '_' + str(n_d) + '.jpg', new_img)
# with open(r'D:\new_dataset\new_dataset\gat\temp/'+filename+'_'+str(n_d)+'.jpg', 'w') as f:
# tt.save(f)
resized_mask_objs.append(np.array(new_img))
n_id_list.append(n_d)
fea = [0, 0, 0]
for feature in small_obj['features']:
fea[0] = fea[0] + feature[0] / 255.0
fea[1] = fea[1] + feature[1] / 255.0
fea[2] = fea[2] + feature[2] / 255.0
fea[0] = fea[0] / len(small_obj['features'])
fea[1] = fea[1] / len(small_obj['features'])
fea[2] = fea[2] / len(small_obj['features'])
feature_mat[n_d] = fea
roi_mat[n_d] = [0, small_obj['x_min'], small_obj['y_min'], small_obj['x_max'], small_obj['y_max']]
main_cls = [0, 0]
for _cls, count in small_obj['gt'].items():
if count > main_cls[1]:
main_cls[0] = _cls
main_cls[1] = count
label_mat[n_d] = main_cls[0] - 1
n_d = n_d + 1
for n_id_1 in n_id_list:
for n_id_2 in n_id_list:
adj_mat[n_id_1, n_id_2] = 1
resized_mask_objs = np.array(resized_mask_objs)
print(adj_mat)
print(resized_mask_objs)
print(feature_mat)
print(label_mat)
print(roi_mat)
if not os.path.exists(os.path.join(save_path, 'imgs')):
os.mkdir(os.path.join(save_path, 'imgs'))
if not os.path.exists(os.path.join(save_path, 'node_features')):
os.mkdir(os.path.join(save_path, 'node_features'))
if not os.path.exists(os.path.join(save_path, 'mask_objs')):
os.mkdir(os.path.join(save_path, 'mask_objs'))
if not os.path.exists(os.path.join(save_path, 'roi')):
os.mkdir(os.path.join(save_path, 'roi'))
if not os.path.exists(os.path.join(save_path, 'edge_adjs')):
os.mkdir(os.path.join(save_path, 'edge_adjs'))
if not os.path.exists(os.path.join(save_path, 'obj_masks')):
os.mkdir(os.path.join(save_path, 'obj_masks'))
if not os.path.exists(os.path.join(save_path, 'labels')):
os.mkdir(os.path.join(save_path, 'labels'))
shutil.copy(img, os.path.join(save_path, 'imgs', filename + '.tif'))
    with open(os.path.join(save_path, 'node_features', filename + '.pkl'), 'wb') as f:
        pickle.dump(feature_mat, f)  # serialize
    with open(os.path.join(save_path, 'mask_objs', filename + '.pkl'), 'wb') as f:
        pickle.dump(resized_mask_objs, f)  # serialize
    with open(os.path.join(save_path, 'roi', filename + '.pkl'), 'wb') as f:
        pickle.dump(roi_mat, f)  # serialize
    with open(os.path.join(save_path, 'edge_adjs', filename + '.pkl'), 'wb') as f:
        pickle.dump(adj_mat, f)  # serialize
    with open(os.path.join(save_path, 'labels', filename + '.pkl'), 'wb') as f:
        pickle.dump(label_mat, f)  # serialize
with open(os.path.join(save_path, 'obj_masks', filename + '.json'), 'w') as f:
json.dump(mask_json, f)
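# Sketch of the extra artifact this variant saves (assumes the same
# save_path/filename as above): 'mask_objs' stacks every non-degenerate
# object mask after resizing to 7x7.
#   with open(os.path.join(save_path, 'mask_objs', filename + '.pkl'), 'rb') as f:
#       masks = pickle.load(f)    # array of shape (n, 7, 7), uint8 binary masks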
def main(roi_small_path, roi_large_path, gt_path, rs_img_path, save_path):
filenames = [x for x in os.listdir(rs_img_path) if x.endswith('.tif')]
for filename in filenames:
        # note: str.strip('.tif') removes any of the characters '.', 't', 'i', 'f'
        # from both ends (e.g. 'fit0.tif' -> '0'), not the suffix; splitext is safe
        calculate_obj(os.path.splitext(filename)[0], save_path,
os.path.join(roi_small_path, filename),
os.path.join(roi_large_path, filename),
os.path.join(gt_path, filename),
os.path.join(rs_img_path, filename))
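# Assumed input layout (inferred from the joins above): roi_small_path,
# roi_large_path, gt_path and rs_img_path each contain .tif tiles with
# identical filenames, e.g. '0.tif' present in all four directories.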
def split_trainval(roi_small_path, roi_large_path, gt_path, rs_img_path, save_path):
filenames = [x for x in os.listdir(rs_img_path) if x.endswith('.tif')]
random.shuffle(filenames)
os.mkdir(os.path.join(save_path, 'train'))
os.mkdir(os.path.join(save_path, 'train', 'roi_small'))
os.mkdir(os.path.join(save_path, 'train', 'roi_large'))
os.mkdir(os.path.join(save_path, 'train', 'gt'))
os.mkdir(os.path.join(save_path, 'train', 'rs'))
os.mkdir(os.path.join(save_path, 'val'))
os.mkdir(os.path.join(save_path, 'val', 'roi_small'))
os.mkdir(os.path.join(save_path, 'val', 'roi_large'))
os.mkdir(os.path.join(save_path, 'val', 'gt'))
os.mkdir(os.path.join(save_path, 'val', 'rs'))
for filename in filenames[:int(0.7*len(filenames))]:
shutil.copy(os.path.join(roi_small_path, filename), os.path.join(save_path, 'train', 'roi_small', filename))
shutil.copy(os.path.join(roi_large_path, filename), os.path.join(save_path, 'train', 'roi_large', filename))
shutil.copy(os.path.join(gt_path, filename), os.path.join(save_path, 'train', 'gt', filename))
shutil.copy(os.path.join(rs_img_path, filename), os.path.join(save_path, 'train', 'rs', filename))
for filename in filenames[int(0.7*len(filenames)):]:
shutil.copy(os.path.join(roi_small_path, filename), os.path.join(save_path, 'val', 'roi_small', filename))
shutil.copy(os.path.join(roi_large_path, filename), os.path.join(save_path, 'val', 'roi_large', filename))
shutil.copy(os.path.join(gt_path, filename), os.path.join(save_path, 'val', 'gt', filename))
shutil.copy(os.path.join(rs_img_path, filename), os.path.join(save_path, 'val', 'rs', filename))
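# Note (suggestion, not in the original script): the shuffle above is unseeded,
# so the 70/30 train/val split differs on every run; calling
# random.seed(<fixed int>) before random.shuffle(filenames) makes it reproducible.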
if __name__ == "__main__":
# a = imread(r'C:\Users\xin\Pictures/4cee953dc58bff6f31fef61e58cd92cc.png')
# print(a.shape)
# calculate_obj('0.tif'.strip('.tif'), r'',
# os.path.join(r'D:\new_dataset\new_dataset\roi_small1\roi_small1\raster_output_16', '0.tif'),
# os.path.join(r'D:\new_dataset\new_dataset\roi_large\raster_output_16', '0.tif'),
# os.path.join(r'D:\new_dataset\new_dataset\gt\raster_output_8', '0.tif'),
# os.path.join(r'D:\new_dataset\new_dataset\img\raster_output_8', '0.tif'))
# main(r'D:\new_dataset\new_dataset\trainval_datatset\train\roi_small', r'D:\new_dataset\new_dataset\trainval_datatset\train\roi_large',
# r'D:\new_dataset\new_dataset\trainval_datatset\train\gt', r'D:\new_dataset\new_dataset\trainval_datatset\train\rs',
# r'D:\new_dataset\new_dataset\gat\train')
#
# main(r'D:\new_dataset\new_dataset\trainval_datatset\val\roi_small',
# r'D:\new_dataset\new_dataset\trainval_datatset\val\roi_large',
# r'D:\new_dataset\new_dataset\trainval_datatset\val\gt',
# r'D:\new_dataset\new_dataset\trainval_datatset\val\rs',
# r'D:\new_dataset\new_dataset\gat\val')
main(r'D:\trainval_datatset\train\roi_small',
r'D:\trainval_datatset\train\roi_large',
r'D:\trainval_datatset\train\gt',
r'D:\trainval_datatset\train\rs',
r'D:\gat_dataset\train')
main(r'D:\trainval_datatset\val\roi_small',
r'D:\trainval_datatset\val\roi_large',
r'D:\trainval_datatset\val\gt',
r'D:\trainval_datatset\val\rs',
r'D:\gat_dataset\val')
# split_trainval(r'D:\new_dataset\new_dataset\roi_small\raster_output_16', r'D:\new_dataset\new_dataset\roi_large\raster_output_16',
# r'D:\new_dataset\new_dataset\gt\raster_output_8', r'D:\new_dataset\new_dataset\img\raster_output_8', r'D:\new_dataset\new_dataset\trainval_datatset')
# calculate_feature('0', r'D:\new_dataset\new_dataset\test', r'D:\new_dataset\new_dataset\roi_small\raster_output_16/0.tif',
# r'D:\new_dataset\new_dataset\roi_large\raster_output_16/0.tif', r'D:\new_dataset\new_dataset\gt\raster_output_8/0.tif',
# r'D:\new_dataset\new_dataset\img\raster_output_8/0.tif')
| 51.222552 | 172 | 0.608215 | 2,733 | 17,262 | 3.542993 | 0.064032 | 0.053289 | 0.07539 | 0.082412 | 0.88423 | 0.872457 | 0.855313 | 0.825984 | 0.802024 | 0.764433 | 0 | 0.016808 | 0.234851 | 17,262 | 336 | 173 | 51.375 | 0.716308 | 0.158441 | 0 | 0.624506 | 0 | 0 | 0.094517 | 0.022402 | 0 | 0 | 0 | 0 | 0 | 1 | 0.01581 | false | 0 | 0.035573 | 0 | 0.051383 | 0.055336 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0a37fe50c29b10e452d19a5dcef8050b55faa538 | 32 | py | Python | datacatalog/formats/duke_haase/__init__.py | SD2E/python-datacatalog | 51ab366639505fb6e8a14cd6b446de37080cd20d | [
"CNRI-Python"
] | null | null | null | datacatalog/formats/duke_haase/__init__.py | SD2E/python-datacatalog | 51ab366639505fb6e8a14cd6b446de37080cd20d | [
"CNRI-Python"
] | 2 | 2019-07-25T15:39:04.000Z | 2019-10-21T15:31:46.000Z | datacatalog/formats/duke_haase/__init__.py | SD2E/python-datacatalog | 51ab366639505fb6e8a14cd6b446de37080cd20d | [
"CNRI-Python"
] | 1 | 2019-10-15T14:33:44.000Z | 2019-10-15T14:33:44.000Z | from .convert import Duke_Haase
| 16 | 31 | 0.84375 | 5 | 32 | 5.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 32 | 1 | 32 | 32 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0a70c9234b951494fb4209c7c05f1e9b5a1e39ec | 14,771 | py | Python | item_engine/bnf_2/v_0_0_9/engine/materials.py | GabrielAmare/ItemEngine | 10277626c3724ad9ae7b934f53e11e305dc34da5 | [
"MIT"
] | null | null | null | item_engine/bnf_2/v_0_0_9/engine/materials.py | GabrielAmare/ItemEngine | 10277626c3724ad9ae7b934f53e11e305dc34da5 | [
"MIT"
] | null | null | null | item_engine/bnf_2/v_0_0_9/engine/materials.py | GabrielAmare/ItemEngine | 10277626c3724ad9ae7b934f53e11e305dc34da5 | [
"MIT"
] | null | null | null | from __future__ import annotations
from item_engine.textbase.items.lemmas import Lemma
from item_engine.textbase.items.tokens import Token
from typing import List  # used by the List[...] annotations below (STR and VAR are assumed to come from the generating context)
# this module has been auto-generated by ItemEngine
__all__ = ['P_Any_', 'P_All_', 'P_Skip_', 'Any_', 'P_Inv_', 'All_', 'P_Atom_', 'CharsetArg', 'PatternArg', 'GrammarArg', 'Atom_', 'P_Inv', 'P_Optional', 'P_Repeat', 'P_RepeatP', 'P_All', 'P_Any', 'Str', 'Var', 'Match', 'MatchAs', 'MatchIn', 'All', 'Any', 'Optional', 'Repeat', 'Enum', 'EnumP', 'Charset', 'Pattern', 'Operator', 'Group', 'Grammar', 'build']
class P_Any_:
pass
class P_All_(P_Any_):
pass
class P_Skip_(P_All_):
pass
class Any_:
pass
class P_Inv_(P_Skip_):
pass
class All_(Any_):
pass
class P_Atom_(P_Inv_):
pass
class CharsetArg:
pass
class PatternArg:
pass
class GrammarArg:
pass
class Atom_(All_):
pass
class P_Inv(P_Inv_):
def __init__(self, arg: Var):
self.arg: Var = arg
def __str__(self):
return 'not ' + str(self.arg)
def __repr__(self):
return f'{self.__class__.__qualname__}({self.arg!r})'
def __eq__(self, other):
if type(self) is type(other):
return self.arg == other.arg
else:
return NotImplemented
__hash__ = None
class P_Optional(P_Skip_):
def __init__(self, arg: P_Inv_):
self.arg: P_Inv_ = arg
def __str__(self):
return 'optional ' + str(self.arg)
def __repr__(self):
return f'{self.__class__.__qualname__}({self.arg!r})'
def __eq__(self, other):
if type(self) is type(other):
return self.arg == other.arg
else:
return NotImplemented
__hash__ = None
class P_Repeat(P_Skip_):
def __init__(self, arg: P_Inv_):
self.arg: P_Inv_ = arg
def __str__(self):
return 'repeat ' + str(self.arg)
def __repr__(self):
return f'{self.__class__.__qualname__}({self.arg!r})'
def __eq__(self, other):
if type(self) is type(other):
return self.arg == other.arg
else:
return NotImplemented
__hash__ = None
class P_RepeatP(P_Skip_):
def __init__(self, arg: P_Inv_):
self.arg: P_Inv_ = arg
def __str__(self):
return '+' + str(self.arg)
def __repr__(self):
return f'{self.__class__.__qualname__}({self.arg!r})'
def __eq__(self, other):
if type(self) is type(other):
return self.arg == other.arg
else:
return NotImplemented
__hash__ = None
class P_All(P_All_):
def __init__(self, args: List[P_Skip_]):
self.args: List[P_Skip_] = args
def __str__(self):
return ' '.join(map(str, self.args))
def __repr__(self):
return f'{self.__class__.__qualname__}({self.args!r})'
def __eq__(self, other):
if type(self) is type(other):
return self.args == other.args
else:
return NotImplemented
__hash__ = None
class P_Any(P_Any_):
def __init__(self, args: List[P_All_]):
self.args: List[P_All_] = args
def __str__(self):
return ' | '.join(map(str, self.args))
def __repr__(self):
return f'{self.__class__.__qualname__}({self.args!r})'
def __eq__(self, other):
if type(self) is type(other):
return self.args == other.args
else:
return NotImplemented
__hash__ = None
class Str(Atom_, CharsetArg, P_Atom_, PatternArg):
def __init__(self, expr: STR):
self.expr: STR = expr
def __str__(self):
return str(self.expr)
def __repr__(self):
return f'{self.__class__.__qualname__}({self.expr!r})'
def __eq__(self, other):
if type(self) is type(other):
return self.expr == other.expr
else:
return NotImplemented
__hash__ = None
class Var(CharsetArg, P_Atom_, PatternArg):
def __init__(self, name: VAR):
self.name: VAR = name
def __str__(self):
return str(self.name)
def __repr__(self):
return f'{self.__class__.__qualname__}({self.name!r})'
def __eq__(self, other):
if type(self) is type(other):
return self.name == other.name
else:
return NotImplemented
__hash__ = None
class Match(Atom_):
def __init__(self, name: VAR):
self.name: VAR = name
def __str__(self):
return '{' + str(self.name) + '}'
def __repr__(self):
return f'{self.__class__.__qualname__}({self.name!r})'
def __eq__(self, other):
if type(self) is type(other):
return self.name == other.name
else:
return NotImplemented
__hash__ = None
class MatchAs(Atom_):
def __init__(self, name: VAR, key: VAR):
self.name: VAR = name
self.key: VAR = key
def __str__(self):
return '{' + str(self.name) + ' as ' + str(self.key) + '}'
def __repr__(self):
return f'{self.__class__.__qualname__}({self.name!r}, {self.key!r})'
def __eq__(self, other):
if type(self) is type(other):
return self.name == other.name and self.key == other.key
else:
return NotImplemented
__hash__ = None
class MatchIn(Atom_):
def __init__(self, name: VAR, key: VAR):
self.name: VAR = name
self.key: VAR = key
def __str__(self):
return '{' + str(self.name) + ' in ' + str(self.key) + '}'
def __repr__(self):
return f'{self.__class__.__qualname__}({self.name!r}, {self.key!r})'
def __eq__(self, other):
if type(self) is type(other):
return self.name == other.name and self.key == other.key
else:
return NotImplemented
__hash__ = None
class All(All_):
def __init__(self, args: List[Atom_]):
self.args: List[Atom_] = args
def __str__(self):
return ' '.join(map(str, self.args))
def __repr__(self):
return f'{self.__class__.__qualname__}({self.args!r})'
def __eq__(self, other):
if type(self) is type(other):
return self.args == other.args
else:
return NotImplemented
__hash__ = None
class Any(Any_):
def __init__(self, args: List[All_]):
self.args: List[All_] = args
def __str__(self):
return ' | '.join(map(str, self.args))
def __repr__(self):
return f'{self.__class__.__qualname__}({self.args!r})'
def __eq__(self, other):
if type(self) is type(other):
return self.args == other.args
else:
return NotImplemented
__hash__ = None
class Optional(Atom_):
def __init__(self, child: Any_):
self.child: Any_ = child
def __str__(self):
return '[' + str(self.child) + ']'
def __repr__(self):
return f'{self.__class__.__qualname__}({self.child!r})'
def __eq__(self, other):
if type(self) is type(other):
return self.child == other.child
else:
return NotImplemented
__hash__ = None
class Repeat(Atom_):
def __init__(self, child: Any_):
self.child: Any_ = child
def __str__(self):
return '(' + str(self.child) + ')'
def __repr__(self):
return f'{self.__class__.__qualname__}({self.child!r})'
def __eq__(self, other):
if type(self) is type(other):
return self.child == other.child
else:
return NotImplemented
__hash__ = None
class Enum(Atom_):
def __init__(self, separator: Str, child: MatchIn):
self.separator: Str = separator
self.child: MatchIn = child
def __str__(self):
return str(self.separator) + '.' + str(self.child)
def __repr__(self):
return f'{self.__class__.__qualname__}({self.separator!r}, {self.child!r})'
def __eq__(self, other):
if type(self) is type(other):
return self.separator == other.separator and self.child == other.child
else:
return NotImplemented
__hash__ = None
class EnumP(Atom_):
def __init__(self, separator: Str, child: MatchIn):
self.separator: Str = separator
self.child: MatchIn = child
def __str__(self):
return str(self.separator) + '^' + str(self.child)
def __repr__(self):
return f'{self.__class__.__qualname__}({self.separator!r}, {self.child!r})'
def __eq__(self, other):
if type(self) is type(other):
return self.separator == other.separator and self.child == other.child
else:
return NotImplemented
__hash__ = None
class Charset(GrammarArg):
def __init__(self, name: VAR, args: List[CharsetArg]):
self.name: VAR = name
self.args: List[CharsetArg] = args
def __str__(self):
return 'c:' + str(self.name) + ' = ' + ' '.join(map(str, self.args))
def __repr__(self):
return f'{self.__class__.__qualname__}({self.name!r}, {self.args!r})'
def __eq__(self, other):
if type(self) is type(other):
return self.name == other.name and self.args == other.args
else:
return NotImplemented
__hash__ = None
class Pattern(GrammarArg):
def __init__(self, name: VAR, arg: P_Any_):
self.name: VAR = name
self.arg: P_Any_ = arg
def __str__(self):
return 'p:' + str(self.name) + ' = ' + str(self.arg)
def __repr__(self):
return f'{self.__class__.__qualname__}({self.name!r}, {self.arg!r})'
def __eq__(self, other):
if type(self) is type(other):
return self.name == other.name and self.arg == other.arg
else:
return NotImplemented
__hash__ = None
class Operator(GrammarArg):
def __init__(self, name: VAR, rule: Any_):
self.name: VAR = name
self.rule: Any_ = rule
def __str__(self):
return 'o:' + str(self.name) + ' = ' + str(self.rule)
def __repr__(self):
return f'{self.__class__.__qualname__}({self.name!r}, {self.rule!r})'
def __eq__(self, other):
if type(self) is type(other):
return self.name == other.name and self.rule == other.rule
else:
return NotImplemented
__hash__ = None
class Group(GrammarArg):
def __init__(self, name: VAR, names: List[VAR]):
self.name: VAR = name
self.names: List[VAR] = names
def __str__(self):
return 'g:' + str(self.name) + ' = ' + ' | '.join(map(str, self.names))
def __repr__(self):
return f'{self.__class__.__qualname__}({self.name!r}, {self.names!r})'
def __eq__(self, other):
if type(self) is type(other):
return self.name == other.name and self.names == other.names
else:
return NotImplemented
__hash__ = None
class Grammar:
def __init__(self, lang: STR, version: STR, whitespace: STR, args: List[GrammarArg]):
self.lang: STR = lang
self.version: STR = version
self.whitespace: STR = whitespace
self.args: List[GrammarArg] = args
def __str__(self):
return '@lang:' + str(self.lang) + '\n' + '@version:' + str(self.version) + '\n' + '@whitespace:' + str(self.whitespace) + '\n' + '\n'.join(map(str, self.args))
def __repr__(self):
return f'{self.__class__.__qualname__}({self.lang!r}, {self.version!r}, {self.whitespace!r}, {self.args!r})'
def __eq__(self, other):
if type(self) is type(other):
return self.lang == other.lang and self.version == other.version and self.whitespace == other.whitespace and self.args == other.args
else:
return NotImplemented
__hash__ = None
def build(obj):
if isinstance(obj, Lemma):
if obj.value == 'P_Inv':
return P_Inv(arg=build(obj.data['arg']))
elif obj.value == 'P_Optional':
return P_Optional(arg=build(obj.data['arg']))
elif obj.value == 'P_Repeat':
return P_Repeat(arg=build(obj.data['arg']))
elif obj.value == 'P_RepeatP':
return P_RepeatP(arg=build(obj.data['arg']))
elif obj.value == 'P_All':
return P_All(args=list(map(build, obj.data['args'])))
elif obj.value == 'P_Any':
return P_Any(args=list(map(build, obj.data['args'])))
elif obj.value == 'Str':
return Str(expr=build(obj.data['expr']))
elif obj.value == 'Var':
return Var(name=build(obj.data['name']))
elif obj.value == 'Match':
return Match(name=build(obj.data['name']))
elif obj.value == 'MatchAs':
return MatchAs(name=build(obj.data['name']), key=build(obj.data['key']))
elif obj.value == 'MatchIn':
return MatchIn(name=build(obj.data['name']), key=build(obj.data['key']))
elif obj.value == 'All':
return All(args=list(map(build, obj.data['args'])))
elif obj.value == 'Any':
return Any(args=list(map(build, obj.data['args'])))
elif obj.value == 'Optional':
return Optional(child=build(obj.data['child']))
elif obj.value == 'Repeat':
return Repeat(child=build(obj.data['child']))
elif obj.value == 'Enum':
return Enum(separator=build(obj.data['separator']), child=build(obj.data['child']))
elif obj.value == 'EnumP':
return EnumP(separator=build(obj.data['separator']), child=build(obj.data['child']))
elif obj.value == 'Charset':
return Charset(name=build(obj.data['name']), args=list(map(build, obj.data['args'])))
elif obj.value == 'Pattern':
return Pattern(name=build(obj.data['name']), arg=build(obj.data['arg']))
elif obj.value == 'Operator':
return Operator(name=build(obj.data['name']), rule=build(obj.data['rule']))
elif obj.value == 'Group':
return Group(name=build(obj.data['name']), names=list(map(build, obj.data['names'])))
elif obj.value == 'Grammar':
return Grammar(lang=build(obj.data['lang']), version=build(obj.data['version']), whitespace=build(obj.data['whitespace']), args=list(map(build, obj.data['args'])))
else:
raise ValueError(obj.value)
elif isinstance(obj, Token):
return obj.content
else:
raise TypeError(type(obj))
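# Hypothetical usage sketch (build() and the AST classes are defined in this
# module; how the Lemma/Token instances are produced is assumed to be the
# generated parser, not shown here):
#   ast = build(some_lemma)   # -> a Grammar, Operator, Pattern, ... instance
#   print(str(ast))           # __str__ re-emits the corresponding grammar source
#   repr(ast)                 # __repr__ round-trips the constructor call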
| 27.506518 | 356 | 0.572744 | 1,830 | 14,771 | 4.215301 | 0.052459 | 0.050817 | 0.051335 | 0.045631 | 0.763158 | 0.714934 | 0.662302 | 0.646098 | 0.625097 | 0.603967 | 0 | 0 | 0.287591 | 14,771 | 536 | 357 | 27.557836 | 0.733061 | 0.003317 | 0 | 0.621333 | 1 | 0.002667 | 0.118682 | 0.066304 | 0 | 0 | 0 | 0 | 0 | 1 | 0.237333 | false | 0.029333 | 0.008 | 0.117333 | 0.688 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
6a908709c66dda57c971b69db2cd057e858d644b | 21,802 | py | Python | shoptimizer_api/optimizers_builtin/image_link_optimizer_test.py | alex-berish/shoptimizer | 3d8837352c0ae52dee2ac804750866a2b93809f1 | [
"Apache-2.0"
] | 27 | 2020-08-21T05:59:29.000Z | 2022-03-30T17:26:44.000Z | shoptimizer_api/optimizers_builtin/image_link_optimizer_test.py | alex-berish/shoptimizer | 3d8837352c0ae52dee2ac804750866a2b93809f1 | [
"Apache-2.0"
] | null | null | null | shoptimizer_api/optimizers_builtin/image_link_optimizer_test.py | alex-berish/shoptimizer | 3d8837352c0ae52dee2ac804750866a2b93809f1 | [
"Apache-2.0"
] | 20 | 2020-09-14T08:38:11.000Z | 2022-03-13T22:37:40.000Z | # coding=utf-8
# Copyright 2021 Google LLC.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for image_link_optimizer.py."""
import json
import time
from typing import Any, Dict, Iterable, List
from unittest import mock
import urllib.error
from absl.testing import absltest
import constants
import flask
from optimizers_builtin import image_link_optimizer
from test_data import requests_bodies
from util import app_util
from util import image_util
from util import networking
def _build_list_of_image_links(num_links: int,
file_type: str = 'jpg') -> List[str]:
return [f'https://examples.com/image{n}.{file_type}'
for n in list(range(num_links))]
def _request_body_from_image_links(links: Iterable[str]) -> Dict[str, Any]:
return requests_bodies.build_request_body(properties_to_be_updated={
'imageLink': links[0],
'additionalImageLink': links[1:]
})
def _setup_flask_with_configs_only():
app = flask.Flask(__name__)
app.config['CONFIGS'] = app_util._load_all_configs()
app.app_context().push()
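# For reference (structure inferred from the assertions below, which read
# optimized_data['entries'][0]['product']), _request_body_from_image_links(links)
# yields a request body whose product carries:
#   {'imageLink': links[0], 'additionalImageLink': links[1:]}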
@mock.patch.object(image_link_optimizer, '_CONFIG_FILE_NAME',
new='image_link_optimizer_config_test')
class ImageLinkOptimizerTest(absltest.TestCase):
def setUp(self):
super().setUp()
_setup_flask_with_configs_only()
# By default, mock load_bytes_at_url to return empty bytes
self.mock_urlopen = self.enter_context(
mock.patch.object(networking, 'load_bytes_at_url', return_value=b'',
autospec=True))
# By default, mock the ML model to avoid scoring each image
self.mock_model = self.enter_context(
mock.patch.object(image_util, 'score_image', return_value=float('inf'),
autospec=True))
self.optimizer = image_link_optimizer.ImageLinkOptimizer(
image_link_optimizer.CONFIGURATION_DEFAULTS)
def test_config_uses_defaults_if_no_config_file_or_assignment(self):
with mock.patch.object(image_link_optimizer, '_CONFIG_FILE_NAME', 'file'):
optimizer = image_link_optimizer.ImageLinkOptimizer()
self.assertEqual(
image_link_optimizer
.CONFIGURATION_DEFAULTS['require_image_can_be_downloaded'],
optimizer.require_image_can_be_downloaded)
self.assertEqual(
image_link_optimizer
.CONFIGURATION_DEFAULTS['require_image_score_quality_better_than'],
optimizer.require_image_score_quality_better_than)
def test_config_uses_config_file_if_no_assignment(self):
with open(f'config/{image_link_optimizer._CONFIG_FILE_NAME}.json') as f:
file_config = json.load(f)
optimizer = image_link_optimizer.ImageLinkOptimizer()
self.assertEqual(
file_config['require_image_can_be_downloaded'],
optimizer.require_image_can_be_downloaded)
self.assertEqual(
file_config['require_image_score_quality_better_than'],
optimizer.require_image_score_quality_better_than)
def test_config_uses_assignment_if_available(self):
assignments = {
'require_image_can_be_downloaded': True,
'require_image_score_quality_better_than': float('inf')
}
optimizer = image_link_optimizer.ImageLinkOptimizer(assignments)
self.assertEqual(
assignments['require_image_can_be_downloaded'],
optimizer.require_image_can_be_downloaded)
self.assertEqual(
assignments['require_image_score_quality_better_than'],
optimizer.require_image_score_quality_better_than)
def test_negative_require_image_score_quality_better_than_set_to_zero(self):
optimizer = image_link_optimizer.ImageLinkOptimizer({
'require_image_score_quality_better_than': -1
})
self.assertEqual(0, optimizer.require_image_score_quality_better_than)
def test_raises_if_invalid_require_image_score_quality_better_than(self):
with self.assertRaises(ValueError):
image_link_optimizer.ImageLinkOptimizer({
'require_image_score_quality_better_than': 'some string'
})
def test_optimizer_does_nothing_when_alternate_image_links_missing(self):
original_data = requests_bodies.build_request_body(
properties_to_be_removed=['additionalImageLink'])
optimized_data, optimization_result = self.optimizer.process(original_data)
product = optimized_data['entries'][0]['product']
self.assertNotIn('additionalImageLink', product)
self.assertEqual(0, optimization_result.num_of_products_optimized)
def test_optimizer_does_nothing_when_alternate_image_links_valid(self):
image_links = _build_list_of_image_links(3)
original_data = requests_bodies.build_request_body(
properties_to_be_updated={'additionalImageLink': image_links})
optimized_data, optimization_result = self.optimizer.process(original_data)
product = optimized_data['entries'][0]['product']
self.assertEqual(image_links, product['additionalImageLink'])
self.assertEqual(0, optimization_result.num_of_products_optimized)
def test_optimizer_does_not_remove_image_links_when_not_above_maximum(self):
image_links = _build_list_of_image_links(constants.MAX_ALTERNATE_IMAGE_URLS)
original_data = requests_bodies.build_request_body(
properties_to_be_updated={'additionalImageLink': image_links})
optimized_data, optimization_result = self.optimizer.process(original_data)
product = optimized_data['entries'][0]['product']
self.assertEqual(image_links, product['additionalImageLink'])
self.assertEqual(0, optimization_result.num_of_products_optimized)
def test_optimizer_truncates_additional_images_above_maximum(self):
image_links = _build_list_of_image_links(
constants.MAX_ALTERNATE_IMAGE_URLS + 1)
original_data = requests_bodies.build_request_body(
properties_to_be_updated={'additionalImageLink': image_links})
optimized_data, optimization_result = self.optimizer.process(original_data)
product = optimized_data['entries'][0]['product']
self.assertEqual(image_links[:constants.MAX_ALTERNATE_IMAGE_URLS],
product['additionalImageLink'])
self.assertEqual(1, optimization_result.num_of_products_optimized)
def test_optimizer_requests_data_from_all_image_urls(self):
image_links = _build_list_of_image_links(3)
self.optimizer.process(_request_body_from_image_links(image_links))
self.mock_urlopen.assert_has_calls(
[mock.call(image_links[0]),
mock.call(image_links[1]),
mock.call(image_links[2])],
any_order=True)
def test_doesnt_download_urls_if_not_require_image_can_be_downloaded(self):
image_links = _build_list_of_image_links(3)
optimizer = image_link_optimizer.ImageLinkOptimizer({
'require_image_can_be_downloaded': False
})
optimizer.process(_request_body_from_image_links(image_links))
self.mock_urlopen.assert_not_called()
def test_doesnt_attempt_scoring_if_not_require_image_can_be_downloaded(self):
image_links = _build_list_of_image_links(3)
optimizer = image_link_optimizer.ImageLinkOptimizer({
'require_image_can_be_downloaded': False
})
optimizer.process(_request_body_from_image_links(image_links))
self.mock_model.assert_not_called()
def test_optimizer_does_not_request_from_nonhttp_urls(self):
image_links = _build_list_of_image_links(2)
image_links[0] = 'ftp://google.com/image.jpg'
self.optimizer.process(_request_body_from_image_links(image_links))
self.assertNotIn(
mock.call(image_links[0]), self.mock_urlopen.call_args_list)
def test_optimizer_does_not_request_from_long_urls(self):
image_links = _build_list_of_image_links(2)
many_zeros = '0' * constants.MAX_IMAGE_URL_LENGTH
image_links[0] = f'https://google.com/image{many_zeros}.jpg'
self.optimizer.process(_request_body_from_image_links(image_links))
self.assertNotIn(
mock.call(image_links[0]), self.mock_urlopen.call_args_list)
def test_does_not_remove_additional_images_with_errors_below_max(self):
image_links = _build_list_of_image_links(3)
responses = [b''] * len(image_links)
responses[1] = urllib.error.HTTPError(image_links[1], 500, 'Internal Error',
{}, None)
with mock.patch.object(networking, 'load_bytes_at_url') as mock_request:
mock_request.side_effect = responses
optimized_data, optimization_result = self.optimizer.process(
_request_body_from_image_links(image_links))
product = optimized_data['entries'][0]['product']
self.assertEqual(image_links[0], product['imageLink'])
self.assertEqual(image_links[1:], product['additionalImageLink'])
self.assertEqual(0, optimization_result.num_of_products_optimized)
def test_scores_all_valid_images(self):
image_links = _build_list_of_image_links(3)
responses = bytearray('ABCDEF', 'ASCII')
with mock.patch.object(networking, 'load_bytes_at_url') as mock_request:
mock_request.side_effect = responses
self.optimizer.process(_request_body_from_image_links(image_links))
self.mock_model.assert_has_calls([
mock.call(responses[0]),
mock.call(responses[1]),
mock.call(responses[2])
], any_order=True)
def test_does_not_score_images_with_no_content(self):
image_links = _build_list_of_image_links(3)
responses = [b''] * len(image_links)
with mock.patch.object(networking, 'load_bytes_at_url') as mock_request:
mock_request.side_effect = responses
self.optimizer.process(_request_body_from_image_links(image_links))
self.mock_model.assert_not_called()
def test_does_not_score_images_if_minimum_score_is_infinite(self):
image_links = _build_list_of_image_links(3)
assignments = {
'require_image_can_be_downloaded': True,
'require_image_score_quality_better_than': float('inf')
}
optimizer = image_link_optimizer.ImageLinkOptimizer(assignments)
responses = bytearray('ABCDEF', 'ASCII')
with mock.patch.object(networking, 'load_bytes_at_url') as mock_request:
mock_request.side_effect = responses
optimizer.process(_request_body_from_image_links(image_links))
self.mock_model.assert_not_called()
def test_does_not_score_images_with_url_errors(self):
image_links = _build_list_of_image_links(3)
responses = [urllib.error.HTTPError(link, 500, 'Internal Error', {}, None)
for link in image_links]
with mock.patch.object(networking, 'load_bytes_at_url') as mock_request:
mock_request.side_effect = responses
self.optimizer.process(_request_body_from_image_links(image_links))
self.mock_model.assert_not_called()
def test_preferentially_removes_images_with_invalid_urls(self):
image_links = _build_list_of_image_links(
constants.MAX_ALTERNATE_IMAGE_URLS + 2)
image_links[1] = 'ftp://google.com/image.jpg'
responses = [b''] * len(image_links)
with mock.patch.object(networking, 'load_bytes_at_url') as mock_request:
mock_request.side_effect = responses
optimized_data, optimization_result = self.optimizer.process(
_request_body_from_image_links(image_links))
product = optimized_data['entries'][0]['product']
# Expect to remove the 1st additional image link
expected_links = image_links[2:]
self.assertEqual(image_links[0], product['imageLink'])
self.assertEqual(expected_links, product['additionalImageLink'])
self.assertEqual(1, optimization_result.num_of_products_optimized)
def test_preferentially_removes_images_above_size_limit(self):
image_links = _build_list_of_image_links(
constants.MAX_ALTERNATE_IMAGE_URLS + 2)
responses = [b''] * len(image_links)
responses[1] = b'0' * (constants.MAX_IMAGE_FILE_SIZE_BYTES + 1)
with mock.patch.object(networking, 'load_bytes_at_url') as mock_request:
mock_request.side_effect = responses
optimized_data, optimization_result = self.optimizer.process(
_request_body_from_image_links(image_links))
product = optimized_data['entries'][0]['product']
# Expect to remove the 1st additional image link
expected_links = image_links[2:]
self.assertEqual(image_links[0], product['imageLink'])
self.assertEqual(expected_links, product['additionalImageLink'])
self.assertEqual(1, optimization_result.num_of_products_optimized)
def test_preferentially_removes_images_with_errors_above_max(self):
image_links = _build_list_of_image_links(13)
responses = [b''] * len(image_links)
responses[4] = urllib.error.HTTPError(image_links[4], 500,
'Internal Error', {}, None)
responses[8] = urllib.error.HTTPError(image_links[8], 500,
'Internal Error', {}, None)
with mock.patch.object(networking, 'load_bytes_at_url') as mock_request:
mock_request.side_effect = responses
optimized_data, optimization_result = self.optimizer.process(
_request_body_from_image_links(image_links))
product = optimized_data['entries'][0]['product']
# Expect to remove the 4th and 8th image due to errors
expected_links = image_links[1:4] + image_links[5:8] + image_links[9:]
self.assertEqual(image_links[0], product['imageLink'])
self.assertEqual(expected_links, product['additionalImageLink'])
self.assertEqual(1, optimization_result.num_of_products_optimized)
def test_first_removes_errors_above_max_then_truncates_at_max(self):
image_links = _build_list_of_image_links(13)
responses = [b''] * len(image_links)
    responses[4] = urllib.error.HTTPError(image_links[4], 500,
'Internal Error', {}, None)
with mock.patch.object(networking, 'load_bytes_at_url') as mock_request:
mock_request.side_effect = responses
optimized_data, optimization_result = self.optimizer.process(
_request_body_from_image_links(image_links))
product = optimized_data['entries'][0]['product']
# Expect to remove the 4th image due to error and the last from truncation
expected_links = image_links[1:4] + image_links[5:-1]
self.assertEqual(image_links[0], product['imageLink'])
self.assertEqual(expected_links, product['additionalImageLink'])
self.assertEqual(1, optimization_result.num_of_products_optimized)
def test_swaps_on_primary_image_error_with_alternate_available(self):
image_links = _build_list_of_image_links(3)
responses = [b''] * len(image_links)
responses[0] = urllib.error.HTTPError(image_links[0], 500,
'Internal Error', {}, None)
with mock.patch.object(networking, 'load_bytes_at_url') as mock_request:
mock_request.side_effect = responses
optimized_data, optimization_result = self.optimizer.process(
_request_body_from_image_links(image_links))
product = optimized_data['entries'][0]['product']
self.assertEqual(image_links[1], product['imageLink'])
expected_links = [image_links[0]] + image_links[2:]
self.assertEqual(expected_links, product['additionalImageLink'])
self.assertEqual(1, optimization_result.num_of_products_optimized)
def test_swaps_on_primary_image_error_with_any_alternate_available(self):
image_links = _build_list_of_image_links(3)
responses = [b''] * len(image_links)
responses[0] = urllib.error.HTTPError(image_links[0], 500,
'Internal Error', {}, None)
responses[1] = urllib.error.HTTPError(image_links[1], 500,
'Internal Error', {}, None)
with mock.patch.object(networking, 'load_bytes_at_url') as mock_request:
mock_request.side_effect = responses
optimized_data, optimization_result = self.optimizer.process(
_request_body_from_image_links(image_links))
product = optimized_data['entries'][0]['product']
self.assertEqual(image_links[2], product['imageLink'])
# Ensure imageLink swapped with 2nd alternate, since the 1st is an error
expected_links = [image_links[1], image_links[0]]
self.assertEqual(expected_links, product['additionalImageLink'])
self.assertEqual(1, optimization_result.num_of_products_optimized)
def test_preferentially_chooses_lowest_scoring_image(self):
image_links = _build_list_of_image_links(5)
image_responses = [b'101010'] * len(image_links)
image_responses[0] = urllib.error.HTTPError(image_links[0], 500,
'Internal Error', {}, None)
score_responses = [0.75, 0.5, 0.25, 1.0]
with mock.patch.object(networking, 'load_bytes_at_url') as mock_network:
mock_network.side_effect = image_responses
with mock.patch.object(image_util, 'score_image') as mock_model:
mock_model.side_effect = score_responses
optimized_data, optimization_result = self.optimizer.process(
_request_body_from_image_links(image_links))
product = optimized_data['entries'][0]['product']
# Ensure imageLink swapped with 3rd alternate; that has the lowest score
self.assertEqual(image_links[3], product['imageLink'])
expected_links = [image_links[1], image_links[2],
image_links[0], image_links[4]]
self.assertEqual(expected_links, product['additionalImageLink'])
self.assertEqual(1, optimization_result.num_of_products_optimized)
def test_images_scoring_below_threshold_are_considered_invalid(self):
image_links = _build_list_of_image_links(3)
image_responses = [b'101010'] * len(image_links)
score_responses = [0.75, 0.25, 1.0]
assignments = {
'require_image_can_be_downloaded': True,
'require_image_score_quality_better_than': 0.5
}
optimizer = image_link_optimizer.ImageLinkOptimizer(assignments)
with mock.patch.object(networking, 'load_bytes_at_url') as mock_network:
mock_network.side_effect = image_responses
with mock.patch.object(image_util, 'score_image') as mock_model:
mock_model.side_effect = score_responses
optimized_data, optimization_result = optimizer.process(
_request_body_from_image_links(image_links))
product = optimized_data['entries'][0]['product']
# Ensure imageLink swapped with 1st alternate; that has the lowest score
self.assertEqual(image_links[1], product['imageLink'])
expected_links = [image_links[0], image_links[2]]
self.assertEqual(expected_links, product['additionalImageLink'])
self.assertEqual(1, optimization_result.num_of_products_optimized)
def test_do_not_swap_images_if_better_alternates_score_below_threshold(self):
image_links = _build_list_of_image_links(3)
image_responses = [b'101010'] * len(image_links)
score_responses = [0.75, 0.6, 0.7]
assignments = {
'require_image_can_be_downloaded': True,
'require_image_score_quality_better_than': 0.5
}
optimizer = image_link_optimizer.ImageLinkOptimizer(assignments)
with mock.patch.object(networking, 'load_bytes_at_url') as mock_network:
mock_network.side_effect = image_responses
with mock.patch.object(image_util, 'score_image') as mock_model:
mock_model.side_effect = score_responses
optimized_data, optimization_result = optimizer.process(
_request_body_from_image_links(image_links))
product = optimized_data['entries'][0]['product']
self.assertEqual(image_links[0], product['imageLink'])
self.assertEqual(image_links[1:], product['additionalImageLink'])
self.assertEqual(0, optimization_result.num_of_products_optimized)
def test_does_not_swap_on_primary_image_error_if_no_alternate_available(self):
image_links = _build_list_of_image_links(3)
responses = [urllib.error.HTTPError(link, 500, 'Internal Error', {}, None)
for link in image_links]
with mock.patch.object(networking, 'load_bytes_at_url') as mock_request:
mock_request.side_effect = responses
optimized_data, optimization_result = self.optimizer.process(
_request_body_from_image_links(image_links))
product = optimized_data['entries'][0]['product']
self.assertEqual(image_links[0], product['imageLink'])
self.assertEqual(image_links[1:], product['additionalImageLink'])
self.assertEqual(0, optimization_result.num_of_products_optimized)
def test_downloads_images_in_parallel(self):
sleep_amount_secs = 0.25
image_links = _build_list_of_image_links(3)
def _wait_before_responding(*_args):
time.sleep(sleep_amount_secs)
return b''
with mock.patch.object(networking, 'load_bytes_at_url') as mock_request:
mock_request.side_effect = _wait_before_responding
start_time = time.time()
self.optimizer.process(_request_body_from_image_links(image_links))
end_time = time.time()
# Elapsed time < sum of the sleep times iff requests are in parallel
self.assertLess(end_time - start_time,
len(image_links) * sleep_amount_secs)
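# Standard absltest entry point (an assumption: the file appears to end without one):
if __name__ == '__main__':
  absltest.main()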
| 42.170213 | 80 | 0.735254 | 2,776 | 21,802 | 5.362752 | 0.106268 | 0.10882 | 0.02922 | 0.026869 | 0.817223 | 0.793645 | 0.771949 | 0.74508 | 0.737959 | 0.699805 | 0 | 0.012366 | 0.172874 | 21,802 | 516 | 81 | 42.251938 | 0.813176 | 0.055591 | 0 | 0.634409 | 0 | 0 | 0.103497 | 0.037255 | 0 | 0 | 0 | 0 | 0.158602 | 1 | 0.094086 | false | 0 | 0.034946 | 0.005376 | 0.139785 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6a9d44308ac56300c8eda323d8b3f13a0ea41d16 | 80 | py | Python | SeleniumCookies/__init__.py | L04DB4L4NC3R/Selenium-Cookie-Injector | 1c381d56e7f885cf744a394fadca5827a4feca8c | [
"MIT"
] | null | null | null | SeleniumCookies/__init__.py | L04DB4L4NC3R/Selenium-Cookie-Injector | 1c381d56e7f885cf744a394fadca5827a4feca8c | [
"MIT"
] | null | null | null | SeleniumCookies/__init__.py | L04DB4L4NC3R/Selenium-Cookie-Injector | 1c381d56e7f885cf744a394fadca5827a4feca8c | [
"MIT"
] | null | null | null | from SeleniumCookies import wrapper
from SeleniumCookies import cookie_injector
| 26.666667 | 43 | 0.9 | 9 | 80 | 7.888889 | 0.666667 | 0.535211 | 0.704225 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 80 | 2 | 44 | 40 | 0.986111 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
6adf0747b8c46bf64fb51bc34c6661195d5fb9d7 | 37 | py | Python | tests/__init__.py | armohamm/xhtml2pdf | d591b8ac1ebf5454eccf773d718f06f9b483b345 | [
"Apache-2.0"
] | null | null | null | tests/__init__.py | armohamm/xhtml2pdf | d591b8ac1ebf5454eccf773d718f06f9b483b345 | [
"Apache-2.0"
] | null | null | null | tests/__init__.py | armohamm/xhtml2pdf | d591b8ac1ebf5454eccf773d718f06f9b483b345 | [
"Apache-2.0"
] | 1 | 2022-03-04T22:06:09.000Z | 2022-03-04T22:06:09.000Z | from .runtests import buildTestSuite
| 18.5 | 36 | 0.864865 | 4 | 37 | 8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108108 | 37 | 1 | 37 | 37 | 0.969697 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0a7b2c2d9c6bee3b00a2d6241527ef752264be13 | 169,812 | py | Python | avaliador_de_frames_coral.py | carlosjuniorcosta1/avaliador_de_frames_lexico | c3e641b6e6998874ebf3e7b8f91dc733c5c5713a | [
"MIT"
] | null | null | null | avaliador_de_frames_coral.py | carlosjuniorcosta1/avaliador_de_frames_lexico | c3e641b6e6998874ebf3e7b8f91dc733c5c5713a | [
"MIT"
] | null | null | null | avaliador_de_frames_coral.py | carlosjuniorcosta1/avaliador_de_frames_lexico | c3e641b6e6998874ebf3e7b8f91dc733c5c5713a | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Sat Mar 26 00:39:10 2022
@author: Usuario
"""
import pandas as pd
import re
# IPython shell magics from the original notebook; they are not valid Python
# in a .py module, so they are kept here as comments. Run the equivalent
# commands in a terminal or notebook before executing this script:
# !pip install spacy
# !python3 -m spacy download pt
# !pip install --upgrade plotly
import spacy
nlp = spacy.load('pt_core_news_sm')
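# Portuguese spaCy pipeline; token.lemma_ from this model feeds the 'lema'
# column that all the frame regexes below are matched against.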
import plotly.graph_objects as go
import plotly.express as px
import numpy as np
import os
file1 = pd.read_csv(input('Filename (C-ORAL-ESQ/BRASIL, csv): '))  # input() already returns a str
file2 = pd.read_csv('frame_net_dados.csv')
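# Plot label: every csv in the working directory except the FrameNet data
# file, with the '.csv' suffix stripped.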
file_plot = ' '.join([x[:-4] for x in os.listdir() if not x.startswith('frame_net_dados') and x.endswith('csv')])
def coral_framenet():
df = file1.copy()
df_frame = file1.merge(file2, how='left')
df_frame = df_frame.fillna(' ')
df_frame['normalized_utterances'] = df_frame['normalized_utterances'].str.lower()
df_frame['lema'] = df_frame['normalized_utterances'].apply(lambda x: ' '.join([token.lemma_ for token in nlp(x)]))
#creates the 698 frame columns and counts the lexemes of the utterances
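# A minimal refactoring sketch (hypothetical helper, not in the original
# script): every assignment below repeats the same pattern, which could be
# written once as
#
#     def count_frame(frame, pattern):
#         df_frame[frame] = df_frame['lema'].apply(
#             lambda x: len(re.findall(pattern, str(x))))
#
# and then called per frame, e.g.
# count_frame("Abandono", r"\babandonado\b|\babandonar\b|...").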
df_frame["Abundância_distribuída"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcobrir\b|\brevestir\b", str(x))))
df_frame["Abundância_distribuída"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcobrir\b|\brevestir\b", str(x))))
df_frame["Abandono"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\babandonado\b|\babandonar\b|\babandono\b|\bdeixar\b|\besquecer\b|\besquecido\b|\bnegligenciar\b", str(x))))
df_frame["Abertura"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\baberto\b|\bfechado\b", str(x))))
df_frame["Absorção_de_calor"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bassar\b|\bbranquear\b|\bcozinhar\b|\bdourar\b|\bferventar\b|\bferver\b|\bfritar\b|\bgrelhar\b|\brefogar\b", str(x))))
df_frame["Abundância"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\babundante\b|\babundar\b|\brico\b", str(x))))
df_frame["Abundância_distribuída"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcobrir\b|\brevestir\b", str(x))))
df_frame["Abundar_com"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\babarrotado\b|\babundante\b|\badornado\b|\baglomerado\b|\baglomerar\b|\bamanteigado\b|\bamontoado\b|\basfaltado\b|\baspergido\b|\bborrifado\b|\bcheio\b|\bcoberto\b|\bdecorado\b|\bdesarrumado\b|\bdourado\b|\bdrapeado\b|\bembelezado\b|\bemperrado\b|\bempilhado\b|\bempoeirado\b|\bencapotado\b|\bencasacado\b|\bencoberto\b|\benfeitado\b|\bengatinhar\b|\benvernizado\b|\bescovado\b|\besmaltado\b|\bespalhado\b|\bforrado\b|\binjetado\b|\blacado\b|\bladrilhado\b|\blotado\b|\bmanchado\b|\bornamentado\b|\bpavimentado\b|\bpendurado\b|\bpintado\b|\bpolvilhado\b|\bpontilhado\b|\bpopulacional\b|\bpreenchido\b|\bproliferar\b|\brastejante\b|\brebocado\b|\brecheado\b|\bregado\b|\brepleto\b|\brespingado\b|\bsalpicado\b|\bsuperlotado\b", str(x))))
df_frame["Abusar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\babusar\b|\babuso\b", str(x))))
df_frame["Acabar_de_descobrir"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bchocado\b", str(x))))
df_frame["Ação_sucedida"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bbem\ssucedido\b|\bbem-sucedido\b|\bbombar\b|\bdesandar\b", str(x))))
df_frame["Aceitar_ou_recusar_a_agir"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\brecusar\b|\bresistir\b", str(x))))
df_frame["Acessórios_de_vestuário"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bfita\b|\bmáscara\b", str(x))))
df_frame["Ações_do_árbitro"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bapitar\sfim\b|\bapitar\sinício\b|\bapitar\b|\bconceder\b|\bdecidir\b|\bdecisão\b|\bdesclassificar\b|\bdesqualificar\b|\bencerrar\b|\bexpulsar\b|\biniciar\b|\binterromper\b|\bmarcar\sfalta\b|\bmarcar\b|\bmostrar\b|\bparalisar\b|\bparar\b|\breiniciar\b|\bsuspender\b|\bterminar\b", str(x))))
df_frame["Acomodação"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bacampamento\b|\bacomodação\b|\balbergue\b|\balojamento\b|\bapart-hotel\b|\bapartamento\b|\bbangalô\b|\bcafofo\b|\bcamping\b|\bcasa\sde\sférias\b|\bcasa\b|\bchácara\b|\bchalé\b|\bcomplexo\sde\scondomínio\b|\bcomplexo\sresidencial\b|\bestância\b|\bgranja\b|\bhospedagem\sdomiciliar\b|\bhospedagem\b|\bhóspede\b|\bhostel\b|\bhotel\sfazenda\b|\bhotel\b|\bhotelaria\b|\bmotel\b|\bpensão\b|\bpousada\b|\brancho\b|\bresort\b|\bsítio\b", str(x))))
df_frame["Acompanhamento"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\ba\ssós\b|\bacompanhar\b|\bcom\b|\bcom\b|\bcompanhia\b|\bindividual\b|\bjunto\b|\bsozinho\b|\bunido\b", str(x))))
df_frame["Acordar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bacordar\b", str(x))))
df_frame["Adequação"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\badequação\b|\badequado\b|\badequar\b|\bambientar\b|\bapropriado\b|\bbom\ssenso\b|\bbom\b|\bcerto\b|\bclimatização\b|\bclimatizar\b|\bcorreto\b|\binadequação\b|\binadequado\b|\binapropriado\b|\bindicado\b|\bprestar\b|\bpróprio\b|\bservir\b", str(x))))
df_frame["Adição"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bacrescentar\b|\badicionar\b|\bmais\b|\bsomar\b", str(x))))
df_frame["Adjacência"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\badjacência\b|\badjacente\b|\bcontiguidade\b|\bcontíguo\b|\bestar\sjunto\b|\bjuntar\b|\blimitante\b|\blimitar\b|\bvizinho\b", str(x))))
df_frame["Adotar_seleção"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\badoção\b|\badotar\b|\bassumir\b|\bseguir\b", str(x))))
df_frame["Adquirir"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bconseguir\b|\bganhar\b|\bobtido\b|\breconquistar\b|\brecuperar\b", str(x))))
df_frame["Afetar_pelo_evento"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bacontecer\b|\bassolar\b|\batingir\b|\blevar\b|\bsofrer\b|\bver\b", str(x))))
df_frame["Afirmar_ou_negar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\binegável\b|\bnegar\b|\bnegativo\b", str(x))))
df_frame["Agir_intencionalmente"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bação\b|\bagente\b|\bagir\b|\batitude\b|\batividade\b|\bato\b|\bator\b|\batuar\b|\bconduzir\b|\bcoordenação\b|\bdesempenho\b|\bempenhar\b|\bempreender\b|\bengajar\b|\bexecução\b|\bexecutar\b|\bfase\b|\bfazer\b|\bfeito\b|\bgesto\b|\bmedida\b|\bmissão\b|\bmovimento\b|\bobra\b|\bpasso\b|\bperfazer\b|\bpromover\b|\brealizar\b", str(x))))
df_frame["Agregado"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bacervo\b|\baglomerado\b|\bamontoado\b|\banfitrião\b|\bassembléia\b|\bbancada\b|\bbanda\b|\bbando\b|\bbatalhão\b|\bcacho\b|\bcaravana\b|\bcardume\b|\bcasal\b|\bcírculo\ssocial\b|\bcírculo\b|\bclasse\b|\bcoleção\b|\bcolônia\b|\bcombinação\b|\bcombo\b|\bcomunidade\b|\bconjunto\b|\bcorja\b|\bcorpo\b|\bcorporação\b|\bdupla\b|\benxame\b|\bequipe\b|\bescola\b|\besquadra\b|\besquadrão\b|\bexército\b|\bfacção\b|\bfamília\b|\bfardo\b|\bfeixe\b|\bforça\b|\bfornada\b|\bfrota\b|\bgaláxia\b|\bgame\b|\bgangue\b|\bgentalha\b|\bgrupo\b|\bharém\b|\bhorda\b|\bjogo\b|\blegião\b|\blivro\b|\bmaço\b|\bmáfia\b|\bmaioria\b|\bmanada\b|\bmassa\b|\bmatilha\b|\bmonte\b|\bmultidão\b|\bmultiplicidade\b|\bmultiplicidade\b|\bmuvuca\b|\bninhada\b|\bpacote\b|\bpanelinha\b|\bpartido\b|\bpelotão\b|\bpenca\b|\bpilha\b|\bplebe\b|\bpopulação\b|\bpopulacho\b|\bpunhado\b|\bquarteto\b|\bquinteto\b|\bralé\b|\brebanho\b|\brepertório\b|\bsafra\b|\bsexteto\b|\bsortimento\b|\btime\b|\btribo\b|\btrio\b|\btripulação\b|\btropel\b|\bturma\b|\buniverso\b|\bvariedade\b", str(x))))
df_frame["Agricultura"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcultivo\b", str(x))))
df_frame["Agrupar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bencontrar\b", str(x))))
df_frame["Ajustar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\badaptar\b|\badequar\b", str(x))))
df_frame["Alcance"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdistância\b|\bvista\b", str(x))))
df_frame["Alimentação"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\baçaiteria\b|\bbar-restaurante\b|\bbar\b|\bbarraca\b|\bbarraquinha\b|\bbirosca\b|\bbistrô\b|\bbonbonnière\b|\bboteco\b|\bbotequim\b|\bbufê\b|\bbuffet\b|\bcafé\b|\bcafeteria\b|\bcervejaria\b|\bchampanharia\b|\bchampanheria\b|\bchocolateria\b|\bchoperia\b|\bchurrascaria\b|\bdrinkeria\b|\bfast-food\b|\bfood\struck\b|\bhamburgueria\b|\blanchonete\b|\bloja\sde\sbebidas\salcoólicas\b|\bloja\sde\sbebidas\b|\bloja\sde\scervejas\b|\bmercado\b|\bmercearia\b|\bpadaria\b|\bpastelaria\b|\bpé-sujo\b|\bpesque-pague\b|\bpesqueiro\b|\bpizzaria\b|\bpodrão\b|\bpub\b|\brestaurante\sárabe\b|\brestaurante\sbrasileiro\b|\brestaurante\schinês\b|\brestaurante\seuropeu\b|\brestaurante\sfrancês\b|\brestaurante\sitaliano\b|\brestaurante\sjaponês\b|\brestaurante\smexicano\b|\brestaurante\smineiro\b|\brestaurante\sportuguês\b|\brestaurante\sself-service\b|\brestaurante\svegano\b|\brestaurante\b|\brodízio\b|\bself-service\b|\bsorveteria\b|\bsupermercado\b|\btaberna\b|\btacacazeira\b|\btemakeria\b|\btrailer\b|\bvegetariano\b", str(x))))
df_frame["Alimentos_e_bebidas"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bacarajé\b|\bagnolini\b|\bágua\sde\scoco\b|\balimento\b|\bamor\sperfeito\b|\barroz-doce\b|\bbaguete\b|\bbanana\sfrita\b|\bbarreado\b|\bbatata-frita\b|\bbebida\salcoólica\b|\bbebida\b|\bbiscoito\b|\bbisteca\b|\bbobó\b|\bbolinho\b|\bbolo\b|\bbreja\b|\bbrigadeiro\b|\bbruschetta\b|\bbuchada\sde\sbode\b|\bburguer\b|\bburrata\b|\bburrito\b|\bcafé\b|\bcaipirinha\b|\bcaipiríssima\b|\bcaipiroska\b|\bcaipisaquê\b|\bcaipivodka\b|\bcajuína\b|\bcajuzinho\b|\bcalda\b|\bcaldeirada\b|\bcaldo\sde\scana\b|\bcaldo\b|\bcanjica\b|\bcanjiquinha\b|\bcapuccino\b|\bcarioca\b|\bcarpaccio\b|\bcaruru\b|\bcatchup\b|\bcereal\b|\bchampagne\b|\bchampanhe\b|\bcheeseburger\b|\bchimarrão\b|\bchope\b|\bchopp\b|\bchouriço\b|\bchurrasco\b|\bchurro\b|\bcocada\b|\bcomida\scaiçara\b|\bcomida\b|\bcompota\b|\bcoquetel\b|\bcroquete\b|\bcuca\b|\bcurau\b|\bdobradinha\b|\bdoce\b|\bdrink\b|\bdrinque\b|\beinsbein\b|\bempada\b|\bespeciaria\b|\bespresso\b|\bexpresso\b|\bfarofa\b|\bfeijão-tropeiro\b|\bfeijoada\b|\bfrutos\sdo\smar\b|\bgalinha\sao\smolho\spardo\b|\bgalinha\sensopada\b|\bgelato\b|\bgeleia\b|\bgengibre\b|\bgoiabada\b|\bgordice\b|\bguloseima\b|\bhambúrguer\b|\bhummus\b|\biguaria\b|\bkafta\b|\bkibe\b|\bleitão\sà\spururuca\b|\blicor\b|\blimão\b|\blimonada\b|\bmaniçoba\b|\bmarguerita\b|\bmilkshake\b|\bmolho\b|\bmoqueca\scapixaba\b|\bmoqueca\b|\bmousse\b|\bmozzarela\b|\bnachos\b|\bnoz-moscada\b|\bovo\b|\bpaella\sde\smariscos\b|\bpaella\b|\bpamonha\b|\bpanqueca\b|\bpastel\b|\bpé-de-moleque\b|\bpicadinho\b|\bpipoca\b|\bpirão\b|\bpirarucu\sde\scasaca\b|\bpizza\b|\bpodrão\b|\bpolenta\b|\bprato\stípico\b|\bprato\b|\bpudim\b|\bquentão\b|\bquibe\b|\brabada\b|\brefeição\b|\brefrigerante\b|\brisoto\b|\brosca\b|\bsaideira\b|\bsalada\b|\bsalgado\b|\bsalpicão\b|\bsanduíche\sde\spernil\scom\sabacaxi\b|\bsanduíche\b|\bsarapatel\b|\bsashimi\b|\bsobrecoxa\b|\bsonho\b|\bsopa\sagnolini\b|\bsopa\b|\bsorvete\b|\bsuco\b|\bsushi\b|\btacacá\b|\btangerina\b|\btapa\b|\btererê\b|\btorta\b|\buísque\b|\bvaca\satolada\b|\bvatapá\b|\bwhisky\b", str(x))))
df_frame["Alternatividade"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bem\svez\sde\b", str(x))))
df_frame["Alugar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\balugar\b", str(x))))
df_frame["Alvo"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\ba\b|\bpara\b", str(x))))
df_frame["Amalgamação"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bentrelaçar\b|\bmisto\b|\bmistura\b|\bmisturar\b|\bunificado\b", str(x))))
df_frame["Amigável_ou_hostil"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\badversário\b|\binimigo\b", str(x))))
df_frame["Andar_de_veículo"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bandar\b|\bcruzeiro\b|\bfazer\smochilão\b|\bnavegação\b|\bnavegar\b|\bpegar\b|\bvelejar\b|\bvoar\b|\bvoo\b", str(x))))
df_frame["Anexação_incoativa"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bencontrar\b", str(x))))
df_frame["Anexar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\banexar\b|\binterligar\b|\bligar\b", str(x))))
df_frame["Animais"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\babelha\b|\bácaro\b|\banimal\b|\baraponga\b|\bbesouro\b|\bboi\b|\bborboleta\b|\bcachorro\b|\bcão\b|\bcarneiro\b|\bcavalo\b|\bcavaquinha\b|\bchimpanzé\b|\bcordeiro\b|\belefante\b|\bfauna\b|\bfilhote\b|\bfrango\b|\bgalinha\b|\bgalo\b|\bgato\b|\bgirafa\b|\binseto\b|\bjoaninha\b|\bleão\b|\bleitão\b|\blobo\b|\blouva-a-deus\b|\bmacaco\b|\bmosquito\b|\bovelha\b|\bpássaro\b|\bpeixe\b|\bpeixinho\b|\bpet\b|\bpolvo\b|\bporco\b|\braia\b|\braposa\b|\bserpente\b|\bsiri\b|\btigre\b|\burubu\b|\bvaca\b|\bvaga-lume\b|\bzebra\b", str(x))))
df_frame["Aparato"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bandroid\b|\baplicativo\b|\bar-condicionado\b|\bequipamento\b|\binformática\b|\bredes\ssociais\b|\bserra\b|\btecnologia\b|\busuário\b", str(x))))
df_frame["Aparecer_em"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\baparecer\b", str(x))))
df_frame["Aplicar_calor"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdourar\b", str(x))))
df_frame["Apoiar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bapoiar\b|\bapoio\b", str(x))))
df_frame["Apostar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bapostar\b", str(x))))
df_frame["Área_biológica"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcerrado\b|\bdeserto\b|\bfloresta\b|\bmato\b|\boásis\b|\bpântano\b|\bpradaria\b|\bselva\b", str(x))))
df_frame["Arma"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\barma\b|\bmaça\b|\btesoura\b|\btorpedo\b", str(x))))
df_frame["Armadilha"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\barmadilha\b", str(x))))
df_frame["Armazenar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bguardado\b|\bmanter\b|\breservar\b", str(x))))
df_frame["Arquitetura_de_conexão"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdegrau\b|\bjanela\b|\bporta\b", str(x))))
df_frame["Arrumar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\barrumado\b|\barrumar\b|\bcaprichado\b|\bequipar\b|\borganizado\b", str(x))))
df_frame["Artefato"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\barrastão\b|\basa-delta\b|\bbalão\b|\bbandeja\b|\bbebedouro\b|\bbico\sde\smamadeira\b|\bbolsa\b|\bbrinquedo\b|\bcadeira\b|\bcatálogo\b|\bcelular\b|\bchupeta\b|\bchuveiro\b|\bcoberta\b|\bcobertor\b|\bcomputador\b|\bconcha\b|\bcontrole\b|\bespelho\b|\bestilete\b|\bfio\b|\bfrigideira\b|\bimpressora\b|\binternet\b|\blâmina\b|\blençol\b|\blente\b|\bluva\scirúrgica\b|\bmala\b|\bmesa\b|\bmesa\b|\bmochila\b|\bmontanha\srussa\b|\borigami\b|\bpano\b|\bpapel\stoalha\b|\bpipa\b|\bplaca\b|\bpneu\b|\bprato\b|\bprisma\b|\brádio\b|\btecnologia\b|\btelefone\b|\btelevisão\b|\btesouro\b|\btoboágua\b|\btúmulo\b|\bvideo\sgame\b|\bvideo-game\b", str(x))))
df_frame["Artesanato"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\barte\b|\bciência\b|\bcrochê\b|\bofício\b", str(x))))
df_frame["Artes_performáticas"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\barte\b|\bballet\b|\bcantar\b|\bencenação\b|\bensaiar\b|\bFazer\b|\bjazz\b|\bmusical\b|\bpantomima\b|\bpeça\sde\steatro\b|\bpeça\b|\bperformance\b|\bperformar\b|\bsapateado\b|\bteatral\b|\bteatro\b|\btocar\b", str(x))))
df_frame["Artificialidade"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\baparentemente\b|\benganosa\b", str(x))))
df_frame["Assear"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\banti-higiênico\b|\bbanho\b|\benxaguar\b|\blavar\b|\blavável\b|\blimpar\b", str(x))))
df_frame["Assistência"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bacudir\b|\bajudar\b|\batendimento\b|\bauxílio\b|\bcuidar\b", str(x))))
df_frame["Assistir_a_evento"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bassistir\b|\bcomparecer\b|\bfrequentar\b|\bir\b|\bver\b", str(x))))
df_frame["Associação"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\brelativo\b", str(x))))
df_frame["Atacar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bagressor\b", str(x))))
df_frame["Atenção"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\batenção\b|\batender\b|\bchamar\satenção\b|\bdar\sbola\b|\bligar\b", str(x))))
df_frame["Atividade"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\batividade\b|\bbrincadeira\b|\bbrincar\b|\bdivertimento\b|\bguerrinha\b|\bjogar\b|\bpique-esconde\b|\bsensação\b", str(x))))
df_frame["Atividades_do_turista"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bartesanato\b|\barvorismo\b|\bbanhar\b|\bbrincar\b|\bcapoeira\b|\bfrevo\b|\bpatinar\b|\bpintura\b|\bsinuca\b|\bsurfar\b|\btirolesa\b", str(x))))
df_frame["Atividade_em_andamento"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcontinuar\b|\bdecorrer\b|\bficar\b|\bpassar\b|\bprosseguir\b|\bviver\b", str(x))))
df_frame["Atividade_iniciar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcair\b|\bcomeçar\b|\bdesencadear\b|\bentrar\b|\bestrear\b|\bgerar\b|\binauguração\b|\binaugurar\b|\biniciante\b|\biniciar\b|\binstituir\b|\bpassar\b|\bprincipiar\b", str(x))))
df_frame["Atividade_interromper"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdeixar\b|\bparar\b", str(x))))
df_frame["Atividade_pausar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcortar\b|\bencerrar\b|\bimobilizar\b|\bparar\b|\breter\b", str(x))))
df_frame["Atividade_preparada"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdisponível\b|\bpreparado\b|\bpreparo\b|\bpronto\b", str(x))))
df_frame["Atividade_preparar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bestruturar\b|\borganizar\b|\bpreparar\b|\bpreparo\b", str(x))))
df_frame["Atividade_terminar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\babdicar\b|\bacabar\b|\bconcluir\b|\bdesistir\b|\bformar\b", str(x))))
df_frame["Atletas"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\badversário\b|\batleta\b|\bbateria\b|\bclube\b|\bcompetidor\b|\bdesafiante\b|\bdesportista\b|\bdueto\b|\bdupla\b|\bequipe\b|\besportista\b|\bjogador\b|\boponente\b|\bparaolímpico\b|\bparticipante\b|\bpelotão\b|\brival\b|\bseleção\b|\btime\b|\btrio\b", str(x))))
df_frame["Atletas_por_esporte"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bamazona\b|\barqueiro\b|\barremessador\b|\batirador\b|\bboleiro\b|\bboxeador\b|\bcaiaquista\b|\bcanoísta\b|\bcarateca\b|\bcavaleiro\b|\bciclista\b|\bcorredor\b|\bdecatleta\b|\bescalador\b|\besgrimista\b|\bfundista\b|\bginasta\b|\bgolfista\b|\bgrequista\b|\bhalterofilista\b|\bheptatleta\b|\bjogador\sde\sbadminton\b|\bjogador\sde\sbasquete\b|\bjogador\sde\sbeisebol\b|\bjogador\sde\sfutebol\b|\bjogador\sde\shandball\b|\bjogador\sde\shóquei\ssobre\sgrama\b|\bjogador\sde\spólo\b|\bjogador\sde\srúgbi\b|\bjogador\sde\ssoftbol\b|\bjogador\sde\svôlei\b|\bjudoca\b|\blançador\b|\blevantador\b|\blutador\b|\bmaratonista\b|\bmarchador\b|\bmeio-fundista\b|\bmesatenista\b|\bnadador\b|\bpentatleta\b|\bpesista\b|\bpugilista\b|\bremador\b|\bsaltador\b|\bskatista\b|\bsurfista\b|\btenista\b|\btrampoliner\b|\btrampolinista\b|\btriatleta\b|\bvelejador\b|\bvelocista\b", str(x))))
df_frame["Atletas_por_posição"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\babertura\b|\bala-armador\b|\bala-pivô\b|\bala\b|\bapanhador\b|\barmador\scentral\b|\barmador\b|\barremessador\b|\bartilheiro\b|\basa\b|\batacante\b|\bataque\b|\bavançado\b|\bbatedor\b|\bcabeça\sde\sárea\b|\bcapitão\b|\bcentral\sarmador\b|\bcentro\b|\bcentroavante\b|\bcontra-proa\b|\bcontra-voga\b|\bcraque\b|\bdefensor\sexterno\b|\bdefensor\sinterno\b|\bdefensor\b|\bdefesa\scentral\b|\bdefesa\sdireita\b|\bdefesa\sesquerda\b|\bdefesa\b|\bentrada\sde\srede\b|\bextremo\b|\bflanqueador\b|\bfly\shalf\b|\bfull\sback\b|\bgoleiro\b|\bhooker\b|\blançador\b|\blateral\b|\bleme\b|\blevantador\b|\blíbero\b|\bmédio\scentral\b|\bmédio\b|\bmeia\sarmador\b|\bmeia\sdireita\b|\bmeia\sesquerda\b|\bmeia\b|\bmeio\sde\scampo\b|\bmeio\sde\srede\b|\bmeio\sscrum\b|\bmeio-campista\b|\bmeio-campo\b|\bmeio\b|\bnúmero\scinco\b|\bnúmero\sdois\b|\bnúmero\soito\b|\bnúmero\squatro\b|\bnúmero\sseis\b|\bnúmero\ssete\b|\bnúmero\strês\b|\boitavo\b|\bpassador\b|\bpilar\saberto\b|\bpilar\sfechado\b|\bpivô\b|\bponta\sdireita\b|\bponta\sesquerda\b|\bponta\b|\bponteiro\b|\bposição\b|\bprimeira\slinha\b|\bprimeiro\scentro\b|\bprimeiro\slateral\b|\bprimeiro\sponta\b|\bproa\b|\brebatedor\b|\brecebedor\b|\breceptor\b|\breserva\b|\bsacador\b|\bsaída\sde\srede\b|\bsegunda\slinha\b|\bsegundo\scentro\b|\bsegundo\slateral\b|\bsegundo\sponta\b|\bservidor\b|\bsota-proa\b|\bsota-voga\b|\btalonador\b|\bterceira\slinha\b|\btimoneiro\b|\btitular\b|\bvoga\b|\bvolante\b|\bzaga\b|\bzagueiro\b", str(x))))
df_frame["Atrair_turistas"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bapresentar\b|\batração\b|\batrair\b|\batrativo\b|\batrativo\b|\bconvidar\b|\bdestacar-se\b|\bdestino\b|\blevar\b|\boferecer\b|\breservar\b|\bsurpreender\b", str(x))))
df_frame["Atravessar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bascender\b|\bascensão\b|\batravessar\b|\bcircular\b|\bcruzamento\b|\bcruzar\b|\bdecida\b|\bdescer\b|\bmontar\b|\bpassar\b|\bpular\b|\brodear\b|\bsaltar\b", str(x))))
df_frame["Atribuição_de_nome"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bchamar-se\b|\bdublado\b", str(x))))
df_frame["Atributos"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\batributo\b|\bqualidade\b", str(x))))
df_frame["Atributos_graduáveis"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bsuper\b", str(x))))
df_frame["Atributos_mensuráveis"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\balto\b|\bamplo\b|\bapertado\b|\bbaixo\b|\bcaloso\b|\bcurto\b|\belevado\b|\bespesso\b|\bestreito\b|\bfino\b|\bfundo\b|\bgrosso\b|\bleve\b|\blongo\b|\bmurcho\b|\bpesado\b|\bprofundo\b", str(x))))
df_frame["Auto_movimento"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\balongamento\b|\bandar\b|\bcambalhota\b|\bcaminhada\b|\bcaminhar\b|\bcircular\b|\bcorrer\b|\bdança\b|\bdançar\b|\bdesfilar\b|\besquentar\b|\bir\b|\bmergulhar\b|\bmovimento\b|\bnadar\b|\bpisar\b|\bvoar\b", str(x))))
df_frame["Avaliação_de_moralidade"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\babsurdamente\b|\babsurdo\b|\bantiético\b|\bbaixo\b|\bbom\b|\bcanalha\b|\bcandongueiro\b|\bcerto\b|\bdegenerado\b|\bdepravação\b|\bdepravado\b|\bdescente\b|\bdesonroso\b|\bdoloso\b|\berrado\b|\berrar\b|\berro\b|\bescuro\b|\bético\b|\bgeneroso\b|\bhorroroso\b|\bimoral\b|\bimpróprio\b|\binescrupuloso\b|\biníquo\b|\binsidioso\b|\bíntegro\b|\bjusto\b|\bmaldoso\b|\bmau\b|\bmelhor\b|\bmenos\b|\bmoral\b|\bnefasto\b|\bobsceno\b|\bpecaminoso\b|\bpecar\b|\bperverso\b|\bpior\b|\bréprobo\b|\brepulsivo\b|\bvil\b|\bvirtuoso\b", str(x))))
df_frame["Avaliar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bachar\b|\bavaliação\b|\bavaliar\b|\bbom\b|\bbom\b|\bimportar\b|\bjulgamento\b|\bjulgar\b|\blamentável\b|\bmaravilhoso\b|\bmelhor\b", str(x))))
df_frame["Boa_vontade"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bboa-vontade\b|\bdispor\b", str(x))))
df_frame["Caçar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcaça\b|\bcaçada\b|\bcaçador\b|\bcaçar\b|\bpescar\b", str(x))))
df_frame["Cair_no_sono"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\badormecer\b|\bdesmaiar\b", str(x))))
df_frame["Campos"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bagropecuária\b|\bâmbito\b|\barquitetura\b|\barte\b|\bartes\svisuais\b|\bartístico\b|\bastrofísica\b|\bastrofísico\b|\bastrologia\b|\bastronomia\b|\baviação\b|\bcampo\b|\bciência\b|\bcientífico\b|\bcosmológico\b|\bcrítica\b|\bculinária\b|\bcultura\b|\bdança\b|\bdemografia\b|\bdesenho\b|\bdisciplina\b|\bdomínio\b|\bdrama\b|\becologia\b|\beconomia\b|\bfilosofia\b|\bfinança\b|\bfísica\b|\bgastronomia\b|\bgeografia\b|\bhistória\b|\bhumanas\b|\bindustrialização\b|\binglês\b|\blazer\b|\blíngua\b|\bmatemática\b|\bmorfologia\b|\bmúsica\b|\bpoesia\b|\bquântico\b|\brubrica\b|\bsemântica\b|\btelecomunicação\b", str(x))))
df_frame["Caos"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bbagunça\b|\bbagunçado\b|\bturbulento\b", str(x))))
df_frame["Capacidade_ação"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\baptidão\b|\bapto\b|\bcapacidade\b|\bcapacitar\b|\bcapaz\b|\bcompetência\b|\bconseguir\b|\bdar\b|\bdom\b|\bhabilidade\b|\bimpotente\b|\bincapaz\b|\bpoder\b|\btalento\b", str(x))))
df_frame["Careza"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bacessibilidade\b|\bacessível\b|\bbaixo\scusto\b|\bbarato\b|\bcaro\b|\bcustar\b|\bcusto\b|\bdespesa\b|\bexorbitante\b|\bgratuito\b|\boneroso\b|\bsuperfaturado\b|\bvaler\b", str(x))))
df_frame["Catástrofe"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcrise\b|\bfatalidade\b|\bincidente\b", str(x))))
df_frame["Categorização"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bclassificação\b|\bclassificado\b|\bclassificar\b|\bconsiderado\b|\bconsiderar\b|\bdeclarar\b|\binterpretar\b|\breconhecer\b", str(x))))
df_frame["Causalidade"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bassim\b|\bcausa\b|\bcausar\b|\bconsequência\b|\bconsequentemente\b|\bculminar\b|\bdar\b|\bde\smodo\sque\b|\bdeixar\b|\bdesencadear\b|\bdespertar\b|\bdever\b|\befeito\b|\bentão\b|\bfazer\scom\sque\b|\bfazer\b|\bmedida\b|\bpor\b|\bporque\b|\bportanto\b|\bprovocar\b|\brender\b|\bresponsável\b|\bresultado\b|\bresultado\b|\bresultar\b|\btornar\b", str(x))))
df_frame["Causar_acordar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bacordar\b", str(x))))
df_frame["Causar_continuar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bacalentar\b|\bpreservar\b", str(x))))
df_frame["Causar_dano"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bacertar\b|\bapedrejar\b|\barranhar\b|\bbater\b|\bferir\b|\bmachucar\b|\btorcer\b", str(x))))
df_frame["Causar_emoção"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdeixar\b", str(x))))
df_frame["Causar_estar_incluído"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bincluir\b", str(x))))
df_frame["Causar_expansão"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bampliar\b|\baumentar\b|\bcaprichar\b|\bcrescimento\b|\bdiminuir\b|\besticar\b|\bexpandir\b|\bminimizar\b", str(x))))
df_frame["Causar_fazer_progresso"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdedicar\b|\besmerar\b|\binvestimento\b|\binvestir\b|\bsofisticar\b", str(x))))
df_frame["Causar_ficar_afiado"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bafiar\b", str(x))))
df_frame["Causar_ficar_molhado"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bmolho\b", str(x))))
df_frame["Causar_ficar_seco"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\benxugar\b|\bsecar\b", str(x))))
df_frame["Causar_fragmentar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\barrebentar\b|\bquebrar\b|\bromper\b", str(x))))
df_frame["Causar_fundir"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcombinar\b|\bgrupo\b|\bjuntar\b|\breunir\b", str(x))))
df_frame["Causar_movimento"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bagitar\b|\barejar\b|\batrair\b|\bempurrar\b|\bjogar\b|\blançar\b|\blargar\b|\blevantar\b|\bmovimentar\b|\bsacar\b|\bsubir\b|\btampar\b", str(x))))
df_frame["Causar_movimento_fluídico"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bentornar\b", str(x))))
df_frame["Causar_mudança"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\balterar\b|\bcustomizado\b|\bmodificador\b|\bmudar\b|\btransformar\b|\btrocar\b", str(x))))
df_frame["Causar_mudança_de_força"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\breforçar\b", str(x))))
df_frame["Causar_mudança_de_posição_em_uma_escala"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\breduzir\b|\bvalorizar\b", str(x))))
df_frame["Causar_mudança_de_temperatura"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\brefrigerar\b", str(x))))
df_frame["Causar_mudar_de_lugar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bchacoalhar\b", str(x))))
df_frame["Causar_perceber"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bapontar\b|\bapresentação\b|\bapresentar\b|\bassinalar\b|\bdemonstrar\b|\besbanjar\b|\bexpor\b|\bexposição\b|\biluminar\b|\blançar\b|\bmostrar\b|\bpublicar\b|\brepresentar\b|\brevelar\b", str(x))))
df_frame["Causar_retomar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\breviver\b", str(x))))
df_frame["Causar_terminar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdissipar\b|\bterminar\b", str(x))))
df_frame["Ceder"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bentregar\b|\bimplacavelmente\b", str(x))))
df_frame["Cenário_da_história"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\brelembrar\b", str(x))))
df_frame["Cenário_de_aquisição"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\baquisição\b", str(x))))
df_frame["Cenário_de_doação"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcontribuir\b|\bcortesia\b", str(x))))
df_frame["Cenário_de_importação_e_exportação"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bexportador\b", str(x))))
df_frame["Cenário_de_interação_médica"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bacidentado\b|\bcirurgia\b|\bcirúrgico\b|\bdegenerativo\b|\bdificuldade\b|\bponto\b|\bvítima\b", str(x))))
df_frame["Cenário_de_obrigação"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdever\b", str(x))))
df_frame["Cenário_do_comércio"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcobrar\b|\bcomércio\b|\bdesconto\b|\bpreço\b|\bserviço\b|\btarifa\b", str(x))))
df_frame["Cenário_do_turismo"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bturismo\b", str(x))))
df_frame["Cenário_do_turismo_estada"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bestada\b|\bestadia\b|\bestar\b", str(x))))
df_frame["Cenário_do_turismo_partida"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bida\b|\bir\sembora\b|\bpartida\b|\bpartir\b|\bsair\b", str(x))))
df_frame["Cenário_visita_chegada"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bChega\b", str(x))))
df_frame["Cercanias"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bao\sredor\sde\b|\barredor\b|\bcercar\b|\bcircundar\b|\benvolto\b|\bpor\b|\bredondeza\b|\bredor\b|\brodeado\b", str(x))))
df_frame["Cerimônias"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\babertura\b|\bcerimônia\b|\bencerramento\b|\bmedalha\b", str(x))))
df_frame["Certeza"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bassegurar\b|\bcertamente\b|\bdecerto\b|\bdúvida\b|\benigmático\b|\bexatamente\b|\bincerteza\b|\bmistério\b|\bmisterioso\b", str(x))))
df_frame["Chance"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bimpossível\b|\bpossível\b|\btalvez\b|\btender\b", str(x))))
df_frame["Chegada"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\baportar\b|\bchegada\b|\bchegar\b", str(x))))
df_frame["Chegada_ao_alojamento"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcheck-in\b", str(x))))
df_frame["Chegada_ao_destino"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdesembarcar\b|\bdesembarque\b", str(x))))
df_frame["Chegar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\baparecer\b|\baportar\b|\baproximar\b|\bchegar\b|\bentrar\b|\bregressar\b|\bretornar\b|\bvir\b|\bvoltar\b", str(x))))
df_frame["Chegar_a_acreditar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bchutar\b|\bconclusão\b", str(x))))
df_frame["Circunstâncias_contrárias"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bapesar\sde\b|\bmesmo\sque\b", str(x))))
df_frame["Classificação"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bgraduação\b", str(x))))
df_frame["Classificação_biológica"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bespécie\b", str(x))))
df_frame["Clima"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bártico\b|\bavalanche\b|\bcerração\b|\bclima\b|\bdilúvio\b|\benchente\b|\benxurrada\b|\bgeada\b|\binundação\b|\bnévoa\b|\bonda\b|\bressaca\b|\bseco\b|\bsol\b|\btempestade\b|\btropical\b|\búmido\b|\bvendaval\b", str(x))))
df_frame["Codificar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bexpressão\b|\bfrase\b|\bpalavra\b", str(x))))
df_frame["Cogitação"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcismar\b|\bconcentrar\b|\bcontemplação\b|\bcontemplativo\b|\blevar\sem\sconta\b|\bpensamento\b|\bpensar\b|\bponderar\b|\brepensar\b|\bvir\sà\smente\b", str(x))))
df_frame["Coincidência"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcasualidade\b|\bcoincidentemente\b", str(x))))
df_frame["Colaboração"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcolaborar\b|\binteração\b", str(x))))
df_frame["Colocação_espacial"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\blá\b", str(x))))
df_frame["Colocação_temporal"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bà\smedida\sque\b|\ba\b|\bagora\b|\bantigamente\b|\bantigo\b|\bao\slongo\sde\b|\batual\b|\batualmente\b|\bdentro\sde\b|\bdurante\b|\bem\b|\benquanto\b|\bentão\b|\bfuturo\b|\bfuturo\b|\bhoje\sem\sdia\b|\bhoje\b|\bimediatamente\b|\bmais\b|\bmoderno\b|\bpor\svolta\sde\b|\bpor\b|\bpré-histórico\b|\bquando\b|\bquando\b|\brecentemente\b|\btão\slogo\b|\búltimamente\b", str(x))))
df_frame["Colocar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\balinhamento\b|\baplicar\b|\bcolocar\b|\bestacionar\b|\blevar\b|\bmergulhar\b|\bparar\b|\bpendurar\b|\bpõem\b|\bpôr\b", str(x))))
df_frame["Colonização"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcolonizar\b|\binstalar\b", str(x))))
df_frame["Comércio_comprar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\badquirir\b|\bcliente\b|\bcompra\b|\bcomprar\b|\bconsumidor\b", str(x))))
df_frame["Comércio_pagar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcouvert\b|\bimposto\b|\bpagamento\b", str(x))))
df_frame["Comércio_receber"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcobrar\b", str(x))))
df_frame["Comércio_vender"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcomercializar\b|\bleilão\b|\bpromoção\b|\bvenda\b|\bvender\b", str(x))))
df_frame["Comissão_técnica"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\banalista\sde\sdesempenho\b|\bauxiliar\stécnico\b|\bauxiliar\b|\bchef\b|\bcomissão\stécnica\b|\bcoordenador\b|\bcozinheiro\b|\bdiretor\b|\bfisiologista\b|\bfisioterapeuta\b|\bfotógrafo\b|\bgerente\b|\binstrutor\b|\bmassagista\b|\bmédico\b|\bnutricionista\b|\bobservador\stécnico\b|\bolheiro\b|\bpreparador\sde\sgoleiro\b|\bpreparador\sfísico\b|\bpsicólogo\b|\broupeiro\b|\bsegurança\b|\bsupervisor\b|\btécnico\b|\btreinador\sassistente\b|\btreinador\b|\bveterinário\b", str(x))))
df_frame["Comparação_avaliativa"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcomparar\b|\bdo\sque\b|\bequivaler\b|\bigualmente\b|\bincomparável\b|\blonge\b|\bmais\b|\bmelhor\b|\bmelhorar\b|\bmenor\b|\bpiorar\b", str(x))))
df_frame["Comparecer"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bauto-atendimento\b|\bir\b", str(x))))
df_frame["Compatibilidade"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcondizente\b|\bcondizer\b|\bconsistência\b|\bharmonia\b", str(x))))
df_frame["Competição"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bbrigar\b|\bcampeonato\b|\bcombate\b|\bcompetição\b|\bcompetidor\b|\bcompetir\b|\bcompetitivo\b|\bconcorrência\b|\bdesafio\b|\bdisputa\b|\bdisputar\b|\bencarar\b|\bgame\b|\bjogar\b|\bjogo\b|\bliga\b|\brivalidade\b|\btorneio\b", str(x))))
df_frame["Completude"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcomplementar\b|\bcompletar\b|\bcompleto\b|\btotal\b|\btotalidade\b", str(x))))
df_frame["Complexidade_sistêmica"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcomplexidade\b|\bsimples\b", str(x))))
df_frame["Comprar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcomprar\b|\bcompras\b|\bcusto\sbenefício\b", str(x))))
df_frame["Comprometimento"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bameaçar\b|\bjuramento\b|\bjurar\b|\bprometer\b", str(x))))
df_frame["Comunicação"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcomunicar\b|\btransmitir\b", str(x))))
df_frame["Comunicação_de_julgamento"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcelebrar\b|\bcrítica\b|\bcriticar\b|\bcrítico\b", str(x))))
df_frame["Comunicação_direta_de_julgamento"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bobrigado\b|\bobrigado\b|\bparabéns\b", str(x))))
df_frame["Comunicação_resposta"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\breplicar\b|\bresponder\b|\btornar\b", str(x))))
df_frame["Comunicar_categorização"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdefinição\b|\bdefinir\b|\bdeterminação\b|\bdeterminado\b|\bretratar\b|\bsimbolizar\b", str(x))))
df_frame["Concessão"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bainda\sassim\b|\bainda\sque\b|\bapesar\sde\b|\bexceção\b|\bmas\b|\bna\srealidade\b|\bna\sverdade\b|\bno\sentanto\b", str(x))))
df_frame["Condições_médicas"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\badoecer\b|\balérgico\b|\bárea\b|\bcadeirante\b|\bcadeirante\b|\bcâncer\b|\bcardíaco\b|\bcirúrgico\b|\bdeficiência\b|\bderrame\b|\bdiagnosticado\b|\bdistrofia\b|\bdoença\b|\bdoente\b|\bdoer\b|\bdor\b|\bepidemia\b|\besclerose\slateral\samiotrófica\b|\bfratura\b|\bgrávida\b|\bhemorragia\b|\binternado\b|\blesão\b|\bnervoso\b|\bpaciente\b|\bparada\srespiratória\b|\bpassar\smal\b|\bpatogénico\b|\bportador\b|\bproblema\srespiratório\b|\bproblema\b|\breceber\salta\b|\bsaúde\b|\bvítima\b|\bvômito\b", str(x))))
df_frame["Conduta"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcomportamento\b", str(x))))
df_frame["Conectores"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcabo\b|\bcorda\b|\bfilamento\b|\bfita\sadesiva\b|\bgancho\b|\bluva\b", str(x))))
df_frame["Conexão_cognitiva"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bassociação\b|\bassociado\b|\bconectar\b|\benvolver\b|\bligar\b|\bremontar\b|\bter\sa\sver\b", str(x))))
df_frame["Confiar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bconfiar\b", str(x))))
df_frame["Confrontar_problema"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bencarar\b|\benfrentar\b|\bpassar\b", str(x))))
df_frame["Conhecimento"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bacessível\b|\bachar\b|\bcompreender\b|\bconcepção\b|\bconhecer\b|\bconhecimento\b|\bconsiderar\b|\bcrer\b|\bdesavisado\b|\bdiscernimento\b|\bentender\b|\bfazer\sideia\b|\bideia\b|\bimaginação\b|\bimaginar\b|\binacessível\b|\binalcançável\b|\bnoção\b|\bpensamento\b|\bpensar\b|\brepensar\b|\bsabedoria\b|\bsaber\b|\bsuspeitar\b|\bter\b", str(x))))
df_frame["Conquistar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bconquistar\b|\btomar\sconta\b|\btomar\b", str(x))))
df_frame["Construir"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bconceber\b|\bconstruir\b|\berguer\b|\binaugurar\b|\breforma\b|\breformado\b|\breformar\b|\bresidência\b", str(x))))
df_frame["Contatar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bchamar\b|\bcontactar\b|\bcontato\b|\bcorresponder\b|\bligar\b|\btelefonar\b", str(x))))
df_frame["Conter"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\balojar\b|\bter\b", str(x))))
df_frame["Contingência"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdependência\b|\bdepender\b|\bindependente\b", str(x))))
df_frame["Contra-atacar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcontra-atacar\b|\bcontra-ataque\b", str(x))))
df_frame["Contratar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bterceirizar\b", str(x))))
df_frame["Contrição"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\barrepender-se\b|\barrepender\b|\barrependido\b|\barrependimento\b|\bcontrição\b|\bcontrito\b|\bculpa\b|\bculpado\b|\bdesculpa\b|\bdesculpar\b|\bimpenitente\b|\bpenalizado\b|\bpenitência\b|\bpenitente\b|\bremorso\b|\bremorso\b", str(x))))
df_frame["Controlar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcondicionar\b|\bdeterminar\b|\bregulamentação\b|\bregulamentar\b|\bregulamento\b", str(x))))
df_frame["Conversar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bbate-papo\b|\bcontar\b|\bconversar\b|\bpiada\b|\bzoar\b", str(x))))
df_frame["Convidado_e_anfitrião"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bconvida\b|\bconvidado\b", str(x))))
df_frame["Cor"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\balaranjado\b|\bamarelado\b|\bamarelo\b|\bazul\b|\bbranco\b|\bcolorido\b|\bcor\b|\bpreto\b|\bverde-clara\b|\bverde\b|\bvermelho\b|\bvioleta\b", str(x))))
df_frame["Cortar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcortar\b|\bcorte\b|\bpicadinha\b|\bpicado\b|\btosa\b", str(x))))
df_frame["Costume"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bacostumar\b|\bclássico\b|\bcostumar\b|\bcostume\b|\bparadigma\b|\btradição\b|\btradicional\b", str(x))))
df_frame["Cotema"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bconduzir\b|\bguiar\b|\bseguir\b", str(x))))
df_frame["Crença_religiosa"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcredo\b|\bcrença\b|\bcrer\b|\bdevoto\b|\bfé\b|\bfiel\b|\breligião\b", str(x))))
df_frame["Criação_culinária"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bacrescentar\b|\badicionar\b|\bassado\b|\bassar\b|\bbater\b|\bcolocar\b|\bconsertar\b|\bcozinhar\b|\bcozinheiro\b|\bculinária\b|\bculinário\b|\bdecorar\b|\bdegustação\b|\bdeixar\b|\bdespejar\b|\bdourar\b|\bfazer\b|\bfeito\b|\bfritar\b|\bfrito\b|\bfritura\b|\bgratinar\b|\bgrelhar\b|\binventar\b|\bmexer\b|\bmilanesa\b|\bparmegiana\b|\bpiamontese\b|\bpicado\b|\bpolvilhar\b|\bpreparação\b|\bpreparar\b|\bsalgar\b|\btemperar\b", str(x))))
df_frame["Criar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bconceber\b|\bconsistir\b|\bcriar\b|\bformação\b|\bformar\b|\binovação\b|\binovar\b|\binstituir\b|\bproduzir\b", str(x))))
df_frame["Criar_arte_física"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bartista\b|\bdesenhar\b|\bescalar\b|\besculpir\b|\bpintado\b|\bpintar\b|\btirar\sfoto\b", str(x))))
df_frame["Criar_intencionalmente"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\barmação\b|\barmar\b|\bconfigurar\b|\bcriar\b|\bdar\sorigem\b|\belaborar\b|\bestabelecer\b|\bfazer\b|\bfundado\b|\bfundar\b|\bideia\b|\bInventa\b|\bpreparar\b|\bprodutor\b|\bproduzir\b|\brealizar\b|\bter\b", str(x))))
df_frame["Criar_representação"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdesenhar\b|\besboçar\b|\besculpir\b|\bfoto\b|\bfotografar\b|\bfotografia\b|\bilustrado\b|\bpintar\b", str(x))))
df_frame["Criminalidade"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcrime\b", str(x))))
df_frame["Cultivar_alimentos"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcultivar\b", str(x))))
df_frame["Cumprimento_de_normas"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcontrariar\b|\bfiel\b|\bmandar\b|\bobedecer\b|\bseguir\b", str(x))))
df_frame["Cura"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\btratamento\b", str(x))))
df_frame["Dançar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bsambar\b", str(x))))
df_frame["Danificar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\barrebentar\b|\bfurar\b|\brasgar\b|\btrincar\b", str(x))))
df_frame["Dar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bbrinde\b|\bceder\b|\bdádiva\b|\bdar\b|\bdoação\b|\bprenda\b|\bpresente\b|\bsouvenir\b", str(x))))
df_frame["Dar_à_luz"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdar\sorigem\b", str(x))))
df_frame["Dar_forma"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\btorcer\b", str(x))))
df_frame["Dar_impressão"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\baparentar\b|\baparente\b|\bcheirar\b|\bfeder\b|\bimpressão\b|\blembrar\b|\bparecer\b|\bprovar\b|\bsoar\b", str(x))))
df_frame["Data_comemorativa"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\baniversário\b|\bcarnaval\b|\bNatal\b", str(x))))
df_frame["Decidir"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdecidir\b|\bdecisão\b|\bdecisiva\b|\bestabelecer\b|\bresolver\b", str(x))))
df_frame["Declaração"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\badmitir\b|\bafirmação\b|\bafirmar\b|\balegação\b|\balegar\b|\bamuar\b|\banunciar\b|\banúncio\b|\barriscar\b|\batestar\b|\bcitar\b|\bcomentar\b|\bcomentário\b|\bcompletar\b|\bcomprovar\b|\bconcessão\b|\bconfessar\b|\bconfirmar\b|\bconfissão\b|\bconjetura\b|\bconjeturar\b|\bcontar\b|\bcontar\b|\bconversa\b|\bconversar\b|\bdeclaração\b|\bdeclarar\b|\bdescrever\b|\bdetalhar\b|\bdizer\b|\besclarecimento\b|\bescrever\b|\bexclamação\b|\bexclamar\b|\bexplicação\b|\bexplicar\b|\bexplicar\b|\bexpressar\b|\bexultar\b|\bfala\b|\bfalar\b|\binsistência\b|\binsistir\b|\bmanter\b|\bmenção\b|\bmencionar\b|\bmensagem\b|\bnegação\b|\bnotar\b|\bobservar\b|\borar\b|\bousar\b|\bpremissa\b|\bprestar\sconta\b|\bproclamação\b|\bproclamar\b|\bprofessar\b|\bpromulgação\b|\bpronunciamento\b|\bpronunciar\b|\bpropor\b|\bproposição\b|\bproposta\b|\breafirmar\b|\breclamar\b|\brefutar\b|\breiterar\b|\brelacionar\b|\brelatar\b|\brelato\b|\brelatório\b|\brepetir\b|\breproduzir\b|\bser\scomo\b|\bsermão\b|\bsorrir\b|\bsugerir\b", str(x))))
df_frame["Degustar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdegustar\b|\bdeliciar-se\b|\bexperimentar\b|\bprovar\b", str(x))))
df_frame["Deixado_por_fazer"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdeixado\b|\bdeixar\b|\brestante\b|\brestar\b|\bsobrar\b", str(x))))
df_frame["Deixar_de_ser"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdesaparecer\b", str(x))))
df_frame["Delegação"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bconfederação\b|\bdelegação\b", str(x))))
df_frame["Delitos"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bfraude\b", str(x))))
df_frame["Desastre_natural"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bavalanche\b|\bciclone\b|\bdesastre\b|\bdesertificação\b|\bmaremoto\b|\bseca\b|\bterremoto\b", str(x))))
df_frame["Descrição_corporal_holística"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bfeminino\b", str(x))))
df_frame["Descrição_de_duração"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bbreve\b|\bcontínuo\b|\bcrônico\b|\bcurto\b|\bduradouro\b|\bdurável\b|\befêmero\b|\bestendido\b|\beternamente\b|\beterno\b|\bfase\b|\binterino\b|\blongo\b|\bmomentâneo\b|\bperpétuo\b|\bpersistente\b|\brápido\b|\bsustentável\b", str(x))))
df_frame["Descrição_parte_do_corpo"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bemagrecer\b|\bgorda\b|\bliso\b", str(x))))
df_frame["Descrição_químico-sensorial"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcheiro\b|\bcheiroso\b|\bcrocante\b|\bdelicioso\b|\bdoce\b|\bsalgado\b|\btorrada\b", str(x))))
df_frame["Desejabilidade"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\badmirável\b|\baprovado\b|\barrasar\b|\bbabaca\b|\bbem-cuidado\b|\bbem\b|\bbenigno\b|\bbobo\b|\bbom\b|\bchato\b|\bchato\b|\bcorrupto\b|\bdaora\b|\bdecente\b|\bdeprimente\b|\bdesagradável\b|\bdesejável\b|\bdeslumbrante\b|\bdespojado\b|\bdespreparado\b|\bdigno\b|\bdisponível\b|\bdoce\b|\beclético\b|\beficiente\b|\belitizado\b|\bespetacular\b|\besplêndido\b|\bestupendo\b|\bexcelência\b|\bexcelente\b|\bexcepcional\b|\bexecrável\b|\bextraordinário\b|\bextremo\b|\bexuberante\b|\bfabuloso\b|\bfantástico\b|\bfavorável\b|\bfenomenal\b|\bferrado\b|\bformidável\b|\bhorrível\b|\bidílico\b|\bimundo\b|\bincrível\b|\bindescritível\b|\bIndistinguível\b|\binfeliz\b|\binferior\b|\binútil\b|\binvasivo\b|\birresistível\b|\bjoia\b|\bjulgar\b|\bjusto\b|\blamentável\b|\bleve\b|\blimpo\b|\blixo\b|\bmagnífico\b|\bmaravilha\b|\bmaravilhoso\b|\bmedíocre\b|\bmeia-boca\b|\bmelhor\b|\bmerda\b|\bmetido\b|\bmiserável\b|\bnormal\b|\bnovo\b|\bótimo\b|\bouro\b|\bpatético\b|\bperdido\b|\bperito\b|\bpéssimo\b|\bpior\b|\bpobre\b|\bpodre\b|\bpopular\b|\bporcaria\b|\bprimoroso\b|\brazoável\b|\bruim\b|\bsaudável\b|\bsensacional\b|\bsimples\b|\bsofisticado\b|\bsofrível\b|\bsujo\b|\bsuper\b|\bsupremo\b|\bsurreal\b|\bterrível\b|\btolerável\b|\btremendo\b|\bvelho\b|\bverdadeiramente\b|\bverdadeiro\b|\bviolento\b", str(x))))
df_frame["Desejar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\balmejar\b|\bambição\b|\bambicionar\b|\bambicioso\b|\banseio\b|\bânsia\b|\bansiar\b|\bansioso\b|\baspiração\b|\baspirar\b|\bcobiça\b|\bcobiçar\b|\bdefinhar\b|\bdesejado\b|\bdesejar\b|\bdesejo\b|\bdesejoso\b|\bdeterminação\b|\besperança\b|\besperar\b|\bfenômeno\b|\bimpaciente\b|\bimperativo\b|\bimpulso\b|\binteressado\b|\bluxúria\b|\bprocurar\b|\bquerer\b|\bquerer\b|\brelutante\b|\bsaudade\b|\bsede\b|\bsedento\b|\bvontade\b", str(x))))
df_frame["Desembarcar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdesembarcação\b|\bdesembarcar\b|\bdesmontar\b|\bpousar\b", str(x))))
df_frame["Deslocamento_intencional"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\baceder\b|\bescalar\b|\bsubir\b", str(x))))
df_frame["Deslocar-se"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bbotecar\b|\bpassar\b|\bpassear\b|\bpasseio\b", str(x))))
df_frame["Despedaçar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bquebrar\b", str(x))))
df_frame["Destacar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdescolar\b", str(x))))
df_frame["Destruir"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdestruição\b", str(x))))
df_frame["Diferenciação"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdiferente\b|\bdistinção\b|\bdistinguir\b", str(x))))
df_frame["Dificuldade"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcomplexo\b|\bcrítico\b|\bdifícil\b|\bdificuldade\b|\bfácil\b|\bfacilidade\b|\bfacilmente\b|\bimpenetrável\b|\bimpossível\b|\bproblema\b", str(x))))
df_frame["Dificultar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\batraso\b|\bdemorar\b|\bdificultar\b", str(x))))
df_frame["Dimensão"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\baltura\b|\bárea\b|\bcomprimento\b|\bnível\b", str(x))))
df_frame["Dinamismo"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdinâmico\b|\bintensidade\b|\bintenso\b|\bpreguiçoso\b|\bvibrante\b", str(x))))
df_frame["Dinheiro"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcartão\sde\scrédito\b|\bcartão\b|\bdinheiro\b|\bnota\b", str(x))))
df_frame["Direção"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\badiante\b|\balto\b|\balto\b|\bbaixo\b|\bcaminho\b|\bcima\b|\bdireção\b|\bdireita\b|\besquerda\b|\bfora\b|\bleste\b|\bleste\b|\bnorte\b|\bnorte\b|\boeste\b|\boeste\b|\bpara\scima\b|\bpara\scima\b|\bpara\sfrente\b|\brumo\b|\bsul\b|\bsul\b", str(x))))
df_frame["Discussão"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bconferência\b|\bconvenção\b|\bconversa\b|\bdebate\b|\bdiscurso\b|\bpainel\b|\bpalestra\b|\bplenária\b|\breunião\b|\bseminário\b", str(x))))
df_frame["Discutir"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bargumento\b|\blutar\b|\bprotesto\b", str(x))))
df_frame["Dispersão"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdifundido\b|\bdifundir\b|\bdifuso\b|\bdispersão\b|\bdispersar\b|\bdissolver\b|\bdistribuição\b|\bdistribuir\b|\bespalhar\b", str(x))))
df_frame["Distinção"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\baparência\b|\baspecto\b|\bcaracterístico\b|\bdiferenciar\b|\bdistinção\b|\bgarantir\b|\bmarcado\b|\bmarcar\b|\bter\b", str(x))))
df_frame["Diversidade"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bamplo\b|\bdiversidade\b|\bdiversificado\b|\bdiverso\b|\bextensão\b|\bheterogeneidade\b|\bheterogêneo\b|\bhomogeneidade\b|\bhomogêneo\b|\blargura\b|\bmistura\b|\bmultifacetada\b|\bmultifacetado\b|\bmultiplicidade\b|\bmúltiplo\b|\bpuro\b|\bsortido\b|\bsortimento\b|\buniforme\b|\buniformidade\b|\bvariabilidade\b|\bvariação\b|\bvariado\b|\bvariedade\b|\bvário\b", str(x))))
df_frame["Divisão_temporal_do_esporte"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bacréscimo\b|\bassalto\b|\bdisputa\sde\spênaltis\b|\bfinal\b|\bgolden\sscore\b|\binício\b|\bintervalo\b|\bprorrogação\b|\bquarto\b|\brodada\b|\brotina\b|\bround\b|\bsérie\b|\bset\b|\btempo\sregulamentar\b|\btempo\b|\btentativa\b|\bvolta\b", str(x))))
df_frame["Dizer"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bavisar\b|\bcontar\b|\bdesabafar\b|\bdizer\b|\bfalar\b|\bnarrar\b", str(x))))
df_frame["Documentos"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bacordo\b|\bautorização\b|\bcarta\b|\bcomprovante\sde\svacinação\b|\bconcessão\b|\bconfirmação\b|\bcontrato\b|\bcontratual\b|\bconvocação\b|\bcupom\sfiscal\b|\bdecisão\b|\bdeclaração\b|\bdepoimento\b|\bdescoberta\b|\bdiploma\b|\bdireito\b|\bdocumentação\b|\bdocumento\b|\bescritura\b|\bgarantia\b|\bidentificação\b|\bintimação\b|\blei\b|\blicença\b|\bnota\b|\bopinião\b|\bordem\b|\bpapéis\b|\bpassaporte\b|\bpermissão\b|\bsumário\b|\btestamento\b|\btestemunho\b|\btítulo\b|\btratado\b|\bvisto\b", str(x))))
df_frame["Doença"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcâncer\b|\bdoença\b|\bhérnia\b|\bzika\b", str(x))))
df_frame["Dominar_situação"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdominar\b|\bpredominar\b", str(x))))
df_frame["Domínio"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\barquitetônico\b|\bcientificamente\b|\bcultural\b|\bem\stermos\b|\bhistoricamente\b|\bhistórico\b|\bmusical\b|\bpsicológico\b|\bsocial\b", str(x))))
df_frame["Dormir"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdormir\b|\binconsciente\b|\bsono\b", str(x))))
df_frame["Duplicação"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bclonado\b", str(x))))
df_frame["Eclipse"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bamortalhado\b|\bamortalhar\b|\bapagar\b|\bblindado\b|\bblindar\b|\bbloquear\b|\bcoberto\b|\bcobrir\b|\beclipse\b|\beclipse\b|\bencoberto\b|\bencobrir\b|\besconder\b|\bescondido\b|\bmascarado\b|\bmascarar\b|\bobscurecer\b|\bobscurecido\b|\bobstruir\b|\boclusão\b|\bocultação\b|\bocultar\b|\bproteger\b|\bprotegido\b|\bvelado\b|\bvelar\b", str(x))))
df_frame["Economia"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\beconomia\b|\beconômico\b", str(x))))
df_frame["Educação_ensino"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bacadêmico\b|\balfabetização\b|\baluno\b|\baprender\b|\baprendizado\b|\baula\b|\bbacharelado\b|\bcursar\b|\bcurso\b|\bdiplomar\b|\bdisciplina\b|\bdoutorado\b|\beducação\b|\beducacional\b|\beducado\b|\beducar\b|\bensinamento\b|\bensinar\b|\bentender\b|\blecionar\b|\bmagistério\b|\bmatemática\b|\bmestrado\b|\bnormalista\b|\bprofessor\b|\bregistro\b|\buniversitário\b", str(x))))
df_frame["Eletroeletrônicos"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bar\scondicionado\b|\bfotocopiadora\b|\bmáquina\sde\slavar\b|\bmáquina\b|\bprancha\b", str(x))))
df_frame["Emergência"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bemergência\b", str(x))))
df_frame["Emitir"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bre-emitir\b", str(x))))
df_frame["Emoção_com_foco_no_experienciador"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\babominar\b|\badoração\b|\badorar\b|\badorável\b|\bagradecer\b|\balegre\b|\balegremente\b|\bamar\b|\bamor\b|\bantipatia\b|\bapaixonar\b|\bapiedar\b|\bapreensivo\b|\barrepender\b|\barrependimento\b|\baversão\b|\bboquiaberto\b|\bcalmo\b|\bcarinho\b|\bcarinhosamente\b|\bchateado\b|\bcheio\b|\bcompaixão\b|\bconforto\b|\bconsolação\b|\bdeliciar\b|\bdesconforto\b|\bdesesperado\b|\bdesesperar\b|\bdesespero\b|\bdesgostar\b|\bdesgosto\b|\bdetestar\b|\bempatia\b|\bentusiasmado\b|\bexaltado\b|\bfebril\b|\bfebrilmente\b|\bfelizmente\b|\bfrancamente\b|\bgostar\b|\bhomofobia\b|\bhomofóbico\b|\bimpressionado\b|\binabalado\b|\binfelizmente\b|\binsatisfeito\b|\binteressado\b|\bintimidado\b|\binveja\b|\binvejar\b|\birritado\b|\blamentar\b|\blastimar\b|\blastimar\b|\bmedo\b|\bmenosprezar\b|\bnervoso\b|\bodiar\b|\bódio\b|\bpaciente\b|\bpena\b|\bperturbado\b|\bprantear\b|\bprazer\b|\bpreocupado\b|\bressentimento\b|\bressentir\b|\bsatisfação\b|\bsatisfeito\b|\bsentir\saversão\b|\bsossegar\b|\bsurtar\b|\btemer\b|\btomado\b|\btranquilidade\b|\btranquilo\b", str(x))))
df_frame["Emoção_direcionada"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\babalado\b|\babatido\b|\babatimento\b|\baborrecido\b|\baborrecimento\b|\badmirado\b|\baflição\b|\baflito\b|\bafobado\b|\bagitado\b|\bagonia\b|\bagonizado\b|\balarmado\b|\balegria\b|\bamargura\b|\bamargurado\b|\bambicioso\b|\bamedrontado\b|\bamor\b|\bangústia\b|\bangustiado\b|\bansioso\b|\bantipático\b|\barara\b|\bassutado\b|\batordoado\b|\batormentado\b|\bbem\b|\bbravo\b|\bchateação\b|\bchateado\b|\bchocado\b|\bcondoído\b|\bcontente\b|\bcordialidade\b|\bcurioso\b|\bdecadente\b|\bdecepcionante\b|\bdeleite\b|\bdemolido\b|\bdepressivo\b|\bdesagradável\b|\bdesagrado\b|\bdesanimado\b|\bdesânimo\b|\bdesapontado\b|\bdesapontamento\b|\bdesconcertado\b|\bdesconfiança\b|\bdesconforto\b|\bdesconsolado\b|\bdescontentamento\b|\bdescontraído\b|\bdesencorajado\b|\bdesencorajamento\b|\bdesespero\b|\bdesgastante\b|\bdesgosto\b|\bdesgostoso\b|\bdesolado\b|\bdesorientação\b|\bdesorientado\b|\bdevastado\b|\bdiversão\b|\bdoloroso\b|\bdor\b|\bembaraçado\b|\bembaraço\b|\bemocionado\b|\bempolgado\b|\bencantado\b|\benfurecido\b|\benjoado\b|\bentediado\b|\bentretido\b|\bentristecido\b|\benvergonhado\b|\besmagado\b|\bespanto\b|\bestressado\b|\bestupefação\b|\bestupefato\b|\beuforia\b|\beufórico\b|\bexasperação\b|\bexasperado\b|\bexausto\b|\bexcitação\b|\bexcitado\b|\bextasiado\b|\bfarto\b|\bfascinado\b|\bfelicidade\b|\bfeliz\b|\bferido\b|\bfúria\b|\bfurioso\b|\bgraça\b|\bgratificação\b|\bhorror\b|\bhorrorizado\b|\bhumilhação\b|\bhumilhado\b|\binconsolável\b|\bindignado\b|\binquietação\b|\binquieto\b|\binsípido\b|\binteressar\b|\binteresse\b|\birado\b|\birritação\b|\birritado\b|\bjubiloso\b|\blívido\b|\blúgrube\b|\bluto\b|\bmaravilhado\b|\bmau\b|\bmelancólico\b|\bmiserável\b|\bmistificado\b|\bnervoso\b|\bofendido\b|\bofensa\b|\bperplexidade\b|\bperplexo\b|\bperturbado\b|\bpetrificado\b|\bpreocupação\b|\bpreocupado\b|\bradiante\b|\braiva\b|\brelaxado\b|\brepulsa\b|\bressentido\b|\brevoltado\b|\bsaqueado\b|\bsatisfação\b|\bsatisfeito\b|\bsimpatia\b|\bsimpático\b|\bsimpatizar\b|\bsofrimento\b|\bsombrio\b|\bsurpreendido\b|\bsurpreso\b|\btranstornado\b|\btraumatizado\b|\btriste\b|\btristemente\b|\btristeza\b|\bvexação\b|\bzangado\b", str(x))))
df_frame["Emoções_de_atividade_mental"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdesfrutar\b|\bdistrair\b|\bdivertir\b", str(x))))
df_frame["Emoções_por_estímulo"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\balegria\b|\bconfundir\b|\bdesanimado\b|\bdeslumbrar\b|\bintrigado\b|\bpreocupar\b|\bsurpreender\b", str(x))))
df_frame["Empregar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdespedido\b|\bempregado\b", str(x))))
df_frame["Encontrar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdar\sde\scara\b", str(x))))
df_frame["Encontro_hostil"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bbatalha\b|\bbriga\b|\bbrigaiada\b|\bbrigar\b|\bconflito\b|\bconfronto\b|\bdesentendimento\b|\bdiscussão\b|\bdisputa\b|\bguerra\b|\binsultar\b|\bluta\b|\blutar\b|\bmorder\b|\btiro\b|\btumultuar\b|\bxingar\b", str(x))))
df_frame["Enfatizar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bênfase\b|\bfocar\b|\bfoco\b|\bprestar\b", str(x))))
df_frame["Enterrar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\benterrado\b|\benterrar\b", str(x))))
df_frame["Entidade"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\balgo\b|\bcama\b|\bcobertor\b|\bcoisa\b|\bdeus\b|\bentidade\b|\bfigura\b|\bfogão\sà\slenha\b|\bfogão\b|\bindivíduo\b|\bitem\b|\blápis\b|\blençol\b|\bmaterial\b|\bmonstro\b|\bnada\b|\bobjeto\b|\bsofá\b|\btirolesa\b|\btravesseiro\b|\bvasilha\b|\bvela\b", str(x))))
df_frame["Entidade_biológica"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bbacilo\b|\bbactéria\b|\bcoco\b|\bcogumelo\b|\bespirilo\b|\bforma\sde\svida\b|\bhumano\b|\blivre\b|\bmicrorganismo\b|\bmofo\b|\borganismo\b|\bparasita\b|\bprocariota\b|\bunicelular\b|\bvida\b", str(x))))
df_frame["Entidade_física"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bátomo\b|\bburaco\snegro\b|\bestelar\b|\bestrela\b|\bpartícula\b|\bsol\b", str(x))))
df_frame["Entregar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bentrega\b|\bentregar\b", str(x))))
df_frame["Entretenimento"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bclube\b|\bentretenimento\b|\bentreter\b", str(x))))
df_frame["Envelhecimento"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bamadurecer\b|\bcrescer\b|\benvelhecer\b|\benvelheimento\b", str(x))))
df_frame["Enviar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bendereçar\b|\benviar\b|\bmandado\b|\bmandar\b", str(x))))
df_frame["Equipamentos_esportivos"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\baparelho\b|\bapito\b|\barco\b|\bargola\b|\barma\b|\bbandeira\b|\bbarco\b|\bbarra\sfixa\b|\bbarra\b|\bbarras\sassimétricas\b|\bbarras\sparalelas\b|\bbastão\b|\bbicicleta\b|\bbike\b|\bbola\b|\bcaiaque\b|\bcâmera\sdigital\b|\bcaneleira\b|\bcanoa\b|\bcapacete\b|\bcartão\b|\bcavalo\scom\salças\b|\bcavalo\b|\bclipe\snasal\b|\bcoquilha\b|\bcorda\b|\bcotoveleira\b|\bdardo\b|\bdisco\b|\bembarcação\b|\bequipamento\b|\bespada\b|\bfita\b|\bflecha\b|\bflorete\b|\bjoelheira\b|\bmaça\b|\bmartelo\b|\bmáscara\sfacial\b|\bmáscara\b|\bmesa\sde\ssalto\b|\bmesa\b|\bóculos\sde\snatação\b|\bóculos\b|\bpena\b|\bpeso\b|\bpeteca\b|\bpistola\sde\spartida\b|\bpistola\b|\bprancha\b|\bprotetor\sbucal\b|\bprotetor\sde\scabeça\b|\bprotetor\sde\sgarganta\b|\bprotetor\sde\sorelha\b|\bprotetor\sde\souvido\b|\bprotetor\snasal\b|\braquete\b|\bremo\b|\bsabre\b|\bskate\b|\bsolo\b|\btaco\b|\btrampolim\b|\btrave\b|\bvara\b|\bvela\b|\bvolante\b", str(x))))
df_frame["Escapar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bfugir\b", str(x))))
df_frame["Escolher"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdiscotecagem\b|\beleger\b|\beleição\b|\bescolha\b|\bescolher\b|\boptar\b|\bselecionar\b|\bvotação\b|\bvotar\b", str(x))))
df_frame["Esconder_objetos"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\besconder\b|\bescondido\b", str(x))))
df_frame["Escrutínio"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\banalisar\b|\banálise\b|\banalista\b|\banalítico\b|\bbusca\b|\bbuscar\b|\bchecar\b|\bencarar\b|\bescanear\b|\bescrutinar\b|\bescrutínio\b|\bestudar\b|\bestudo\b|\bexaminar\b|\bexplorado\b|\bexplorar\b|\bfolhear\b|\binspeção\b|\binspecionar\b|\binspetor\b|\bintrometer-se\b|\binvestigação\b|\binvestigar\b|\bmonitoração\b|\bmonitorar\b|\bnão\smonitorado\b|\bobservar\b|\bpeneirar\b|\bprocura\b|\bprocurar\b|\breconhecer\b|\breconhecimento\b|\brevisar\b|\brevistar\b|\bsondar\b|\bvarredura\b|\bvarrer\b|\bvasculhar\b|\bver\b|\bverificar\b|\bvigilância\b", str(x))))
df_frame["Especialidade"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\badepto\b|\badepto\b|\bamador\b|\bamador\b|\bás\b|\bás\b|\bbem\sversado\b|\bbom\b|\bcompetência\b|\bcompetente\b|\bconhecedor\b|\bcraque\b|\bdesqualificado\b|\bespecialista\b|\bespecializado\b|\besplêndido\b|\bestupêndo\b|\bexcelente\b|\bexperiência\b|\bexperiente\b|\bexpert\b|\bfã\b|\bfamiliar\b|\bfantástico\b|\bforte\b|\bfraco\b|\bguru\b|\bhabilidade\b|\bhabilidoso\b|\bhorrível\b|\bignorante\b|\binacreditável\b|\bincompetência\b|\bincompetente\b|\binépcia\b|\binepto\b|\binexperiente\b|\bleigo\b|\bmaestria\b|\bmago\b|\bmaravilhoso\b|\bmediano\b|\bmedíocre\b|\bmestre\b|\bmestre\b|\bnotável\b|\bnovato\b|\bnovo\b|\bótimo\b|\bpró\b|\bproeza\b|\bproficiência\b|\bproficiente\b|\bruim\b|\bsoberbo\b|\bsobressair\b|\bsuperlativo\b|\btécnica\b|\bterrível\b|\btremendo\b|\bversado\b|\bvirtuosidade\b|\bvirtuoso\b", str(x))))
df_frame["Especificação_individual"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bespecífico\b", str(x))))
df_frame["Esperar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\baguardar\b|\besperar\b", str(x))))
df_frame["Esportes"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\batletismo\b|\bbadminton\b|\bbaseball\b|\bbasquete\b|\bbasquetebol\b|\bbeisebol\b|\bboxe\b|\bcanoagem\b|\bcaratê\b|\bciclismo\b|\bcrossfit\b|\bescalada\b|\besgrima\b|\besporte\b|\besportivo\b|\besqueite\b|\besqueitismo\b|\bfutebol\b|\bfutsal\b|\bginástica\b|\bgolfe\b|\bhalterofilismo\b|\bhandebol\b|\bhipismo\b|\bhóquei\ssobre\sgrama\b|\bjudô\b|\bkaratê\b|\blevantamento\sde\speso\b|\bluta\solímpica\b|\bnado\ssincronizado\b|\bnatação\b|\bpentatlo\smoderno\b|\bpólo\saquático\b|\bremo\b|\brugbi\b|\brugby\b|\bsalto\sornamental\b|\bskate\b|\bsoftball\b|\bsoftbol\b|\bsurfe\b|\btaekwondo\b|\btênis\sde\smesa\b|\btênis\b|\btiro\scom\sarco\b|\btiro\sesportivo\b|\btriatlo\b|\bvela\b|\bvôlei\sde\spraia\b|\bvôlei\b|\bvoleibol\b", str(x))))
df_frame["Estado"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bestar\b", str(x))))
df_frame["Estado_continuar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdeixar\b|\bdescansar\b|\bestar\b|\bficar\b|\bmanter\b|\bpermanecer\b|\bprevalecer\b", str(x))))
df_frame["Estado_da_entidade"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcomplexo\b|\bcondição\b|\bestado\sde\schoque\b|\bestado\sde\sconsciência\b|\bestado\b|\bestar\b", str(x))))
df_frame["Estágio_de_progresso"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\balta\stecnologia\b|\bantigo\b|\bavançado\b|\bbaixa\stecnologia\b|\bcontemporâneo\b|\bde\sponta\b|\bde\súltima\sgeração\b|\bdesenvolvido\b|\bgeração\b|\bmaduro\b|\bmaturidade\b|\bmodernizar\b|\bmoderno\b|\bpróxima\sgeração\b|\bsofisticação\b|\bsofisticado\b|\búltima\sgeração\b", str(x))))
df_frame["Estar_anexado"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\belo\b|\bligado\b|\bponto\sde\sintegração\b|\bsolto\b", str(x))))
df_frame["Estar_de_acordo_sobre_a_avaliação"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bconcordar\b", str(x))))
df_frame["Estar_em_cativeiro"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bpreso\b", str(x))))
df_frame["Estar_em_risco"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bsegurança\b|\bseguro\b", str(x))))
df_frame["Estar_em_vigor"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bválido\b", str(x))))
df_frame["Estar_molhado"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bmolhado\b", str(x))))
df_frame["Estar_no_controle"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\badministrado\b|\badministrar\b|\bconseguir\b|\bcontrolar\b", str(x))))
df_frame["Estar_separado"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdesligamento\b", str(x))))
df_frame["Estética"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bamável\b|\bbarroco\b|\bbeleza\b|\bbelo\b|\bbonito\b|\bbucólico\b|\belegante\b|\besportivo\b|\besteticamente\b|\bestiloso\b|\bfeio\b|\bformosura\b|\bfrescura\b|\bhorrendo\b|\blindo\b|\bpitoresco\b|\bplástico\b|\brequintar\b|\brequinte\b|\brústico\b|\bsaboroso\b", str(x))))
df_frame["Estimar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\badivinha\b|\badivinhar\b|\bestimativa\b", str(x))))
df_frame["Estimular_emoção"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bencantar\b|\birritar\b|\bprazeroso\b|\bsurpresa\b", str(x))))
df_frame["Estragar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bequivocar\b|\berrar\b", str(x))))
df_frame["Estudar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcursar\b|\bdiplomar\b|\bestudar\b", str(x))))
df_frame["Esvaziar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\benxaguar\b|\besponja\b|\blavável\b|\blimpar\b|\blimpeza\b|\bpolido\b|\bsujeira\b", str(x))))
df_frame["Evento"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bacontecer\b|\bacontecimento\b|\bassolar\b|\bcongresso\b|\bdesenvolvimento\b|\bepisódio\b|\bevento\b|\bfato\b|\bincidente\b|\bjogo\b|\blotar\b|\bmissa\b|\bocorrer\b|\bprosseguir\b|\bquadro\b|\brealizado\b|\brealizar\b|\bretiro\b|\bser\b|\bshow\b|\bsituação\b|\bsuceder\b", str(x))))
df_frame["Evento_desejável"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bboa\sideia\b|\bdever\b|\bmá\sideia\b|\bpoder\b", str(x))))
df_frame["Evento_esportivo"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bapresentação\b|\bbrasileirão\b|\bcombate\b|\bCopa\sAmérica\b|\bCopa\sdo\sMundo\b|\bCopa\b|\bcorrida\b|\bduelo\b|\bevento\b|\bgame\b|\bjogo\sde\sida\b|\bjogo\sde\svolta\b|\bjogo\sem\scasa\b|\bjogo\sfora\sde\scasa\b|\bjogo\b|\bjogos\solímpicos\b|\bluta\b|\bmundial\b|\bolimpíada\b|\bolímpico\b|\bparaolimpíada\b|\bpartida\sde\sida\b|\bpartida\sde\svolta\b|\bpartida\sem\scasa\b|\bpartida\sfora\sde\scasa\b|\bpartida\b|\bprova\b|\bregata\b|\btemporada\b", str(x))))
df_frame["Evento_histórico"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bmomento\b", str(x))))
df_frame["Evento_social"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bbadalar\b|\bbaile\b|\bbalada\b|\bbanquete\b|\bceia\b|\bcelebração\b|\bcelebrar\b|\bchurrasco\b|\bcomemoração\b|\bconselho\b|\bencontro\b|\bfeira\b|\bfesta\sbeneficente\b|\bfesta\b|\bfestejar\b|\bfestival\b|\bhappy\shour\b|\bjantar\b|\bnoitada\b|\bpiquenique\b|\bpromover\b|\brave\b|\brecepção\b|\breunião\b|\bsamba\b|\bsocial\b|\bvelório\b", str(x))))
df_frame["Evidências"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bbasear\b|\bevidência\b|\bevidenciar\b|\bindicar\b|\bindicativo\b|\bindício\b|\bmanifestar\b|\bpostulado\b|\bprova\b", str(x))))
df_frame["Evitar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bevitar\b", str(x))))
df_frame["Exatidão"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcerto\b|\bcorreto\b|\bcorrigir\b|\bdireito\b|\berrar\b|\bexato\b|\bpreciso\b", str(x))))
df_frame["Exemplar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bmodelo\b|\bpadrão\b|\bparadigma\b", str(x))))
df_frame["Exercitar-se"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bexercício\b", str(x))))
df_frame["Existência"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\babstrato\b|\bconcreto\b|\bencontrar\b|\bestar\b|\bexistência\b|\bexistente\b|\bexistir\b|\bhaver\b|\bpermanecer\b|\breal\b|\brealidade\b|\bter\b", str(x))))
df_frame["Existência_circunscrita"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bsurgir\b", str(x))))
df_frame["Expansão"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcrescer\b|\bexpansão\b|\bextensão\b|\binflação\b", str(x))))
df_frame["Expectativa"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdever\b|\bespera\b|\besperar\b|\bexpectativa\b|\bimprevisibilidade\b|\bsonhar\b", str(x))))
df_frame["Expectativa_classificada"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bapenas\b|\bdimensão\b|\bgeral\b|\binteiro\b|\bmero\b|\btamanho\b|\btodo\b", str(x))))
df_frame["Experenciar_ferimento_corporal"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\barrebentar\b|\bferimento\b|\bfurar\b|\bmachucar\b|\bquebrar\b|\bsangrar\b|\btorcer\b", str(x))))
df_frame["Experiência_de_percepção"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcheirar\b|\bcompreender\b|\bdelirar\b|\bdelírio\b|\bdetectar\b|\bescutar\b|\bexperiência\b|\bexperimentar\b|\binvisível\b|\bouvir\b|\bperceber\b|\bpercepção\b|\bpesadelo\b|\bsaborear\b|\bsentir\b|\bsonhar\b|\bsonho\b|\btestemunhar\b|\bver\b|\bvivenciar\b", str(x))))
df_frame["Experimentação"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\btratamento\b", str(x))))
df_frame["Experimentar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bexperimentar\b|\bvivenciar\b", str(x))))
df_frame["Expressão_facial"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcareta\b|\bsorriso\b", str(x))))
df_frame["Expressar_publicamente"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bexpressar\b|\bmanifestar\b|\bpassar\b|\bvoz\b", str(x))))
df_frame["Extensão_linear_de_medidas"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bano-luz\b|\bjarda\b|\bkm\b|\bmetro\b|\bmilha\b|\bmilímetro\b|\bpolegada\b|\bquilômetro\b", str(x))))
df_frame["Fama"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcelebridade\b|\bconhecido\b|\bépico\b|\bestatura\b|\bfama\b|\bfamoso\b|\bfamoso\b|\bfazer\snome\spara\salguém\b|\bgrande\snome\b|\binfame\b|\blendário\b|\bnotoriedade\b|\bnotório\b|\bovelha\snegra\b|\brenomado\b|\brenome\b|\breputação\b", str(x))))
df_frame["Familiaridade"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bconhecer\b|\bconhecido\b|\bdesconhecido\b|\bfamiliar\b|\bintimista\b|\bíntimo\b|\bnovo\b", str(x))))
df_frame["Fase_preliminar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bclassificação\b|\bconquistar\svaga\b|\beliminatórias\b|\bfase\sclassificatória\b|\bfase\sde\sgrupos\b|\bfase\spreliminar\b|\bgrupo\b|\bpreliminares\b|\bvaga\b", str(x))))
df_frame["Fazedores_de_barulho"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\batabaque\b|\bcandongueiro\b|\bchocalho\b|\bgaita\sde\sfole\b|\btambor\b|\btrombeta\b", str(x))))
df_frame["Fazer_barulho"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\balgazarra\b|\bbarulho\b|\bcanto\b|\bchorar\b|\bgargalhada\b|\bgritar\b|\bguincho\b|\bressoar\b|\brir\b|\bsoluçar\b|\btrovejar\b|\bzoar\b", str(x))))
df_frame["Fazer_câmbio"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcâmbio\b|\btroca\b|\btrocar\b", str(x))))
df_frame["Fazer_compras"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcompras\b", str(x))))
df_frame["Fazer_turismo"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bacampar\b|\bapreciar\b|\baproveitar\b|\bconhecer\b|\bcurtir\b|\bdesfrutar\b|\bfazer\sturismo\b|\bpaisagem\b|\breceber\b|\btour\b|\bturismo\sferroviário\b|\bturismo\sgastronômico\b|\bturismo\b|\bturístico\b|\bver\b|\bvisita\b|\bvisitação\b|\bvisitar\b|\bvista\b|\bvisual\b", str(x))))
df_frame["Fechamento"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\babrir\b|\bfechar\b|\btampar\b", str(x))))
df_frame["Fechamento_de_locais"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bfechar\b", str(x))))
df_frame["Fenômenos_naturais"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bamanhecer\b|\bamanhecer\b", str(x))))
df_frame["Final"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bconquistar\b|\bentregar\b|\bfinal\b|\bganhar\b|\bperder\b|\btítulo\b|\bvencer\b", str(x))))
df_frame["Finalidade"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\balvo\b|\bde\smodo\sa\b|\bde\b|\bdeterminado\b|\bfinalidade\b|\bintenção\b|\bintuito\b|\bmotivo\b|\bobjetivo\b|\bobjeto\b|\bpara\sque\b|\bpara\b|\bplanejar\b|\bplano\b|\bpra\b|\bpretender\b|\bpretendido\b|\bpropósito\b|\bproposta\b|\bresolvido\b|\broteiro\b|\buso\b|\bvisar\b", str(x))))
df_frame["Finalidade_do_utensílio"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bfunção\b|\brecomendar\b|\buso\b", str(x))))
df_frame["Financiamento"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bfundar\b", str(x))))
df_frame["Fingir"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcavar\b|\bfingir\b|\bsimular\b", str(x))))
df_frame["Foco_de_estímulo"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\babominável\b|\baconchegante\b|\bagonizante\b|\bagradável\b|\bagravação\b|\bagravante\b|\balarmante\b|\balucinante\b|\bameno\b|\bangustiante\b|\banimado\b|\banimador\b|\bapaixonante\b|\bapaziguador\b|\bapetitoso\b|\baprazível\b|\bapreciável\b|\bapresentável\b|\barrebatador\b|\barrepiante\b|\barrepio\b|\bassustador\b|\baterrorizante\b|\batormentador\b|\bbacana\b|\bbem-humorado\b|\bcalmante\b|\bcansativo\b|\bcativante\b|\bcharmoso\b|\bchato\b|\bcheio\b|\bchocante\b|\bcômico\b|\bcomodidade\b|\bcomovente\b|\bconfortante\b|\bconfortável\b|\bconfuso\b|\bconsolador\b|\bconstrangedor\b|\bdelícia\b|\bdelicioso\b|\bdepressivo\b|\bdesagradável\b|\bdesapontador\b|\bdesbaratado\b|\bdescanso\b|\bdesconcertante\b|\bdesconfortável\b|\bdesencorajador\b|\bdesmotivante\b|\bdesorientante\b|\bdevastador\b|\bdivertido\b|\beletrizante\b|\bemocionante\b|\bempolgante\b|\bencantador\b|\bencorajador\b|\benfadonho\b|\benfurecedor\b|\bengraçado\b|\benlouquecedor\b|\bentristecedor\b|\benvolvente\b|\bespantoso\b|\bestimulante\b|\bestremecedor\b|\bestressante\b|\bestupeficante\b|\bexasperador\b|\bfascinante\b|\bformidável\b|\bfrio\b|\bglamour\b|\bgostoso\b|\bgratificante\b|\bhilário\b|\bhumilhante\b|\bimpressionante\b|\bincitador\b|\bincômodo\b|\bincrível\b|\binquietante\b|\binsatisfatório\b|\binsultante\b|\bintimidador\b|\bintrigante\b|\birritação\b|\birritante\b|\birritante\b|\blamentável\b|\blegal\b|\bmarcante\b|\bmistificante\b|\bmonótono\b|\bmortificante\b|\bnojeira\b|\bnojento\b|\bofensivo\b|\bpacificador\b|\bpatético\b|\bperturbador\b|\bperturbar\b|\bpreocupante\b|\bproblemático\b|\bproveito\b|\bquerido\b|\brecreação\b|\brelaxamento\b|\brelaxante\b|\brelaxar\b|\brepugnante\b|\brepulsivo\b|\brevigorante\b|\brevoltante\b|\brico\b|\bsatisfatório\b|\bsério\b|\bsinistro\b|\bsolene\b|\bsuculento\b|\bsurpreendente\b|\bsuspense\b|\btedioso\b|\bterrível\b|\btocante\b|\btranquilizador\b|\btraumático\b|\btraumatizante\b|\btriste\b|\bvazio\b", str(x))))
df_frame["Formar_relações"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bazaração\b|\bboda\b|\bcasar\b|\bnamorar\b|\bperto\b|\bseparar\b|\bunir\b", str(x))))
df_frame["Formas"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bforma\b|\bformar\b|\binclinado\b|\bíngreme\b|\blinha\b|\bperfil\b|\bredondo\b", str(x))))
df_frame["Fornecimento"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bfornecer\b|\bproporcionar\b|\bservido\b|\bservir\b", str(x))))
df_frame["Fracasso_de_empreendimento"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bfalir\b", str(x))))
df_frame["Frequência"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bagora\b|\banual\b|\banualmente\b|\bàs\svezes\b|\bbianual\b|\bbimestral\b|\bcomum\b|\bconstantemente\b|\bcotidiano\b|\bcotidiano\b|\bde\stempos\sem\stempos\b|\bde\svez\sem\squando\b|\bdesta\svez\b|\bdiariamente\b|\bdiário\b|\bdiário\b|\besporádico\b|\bfrequência\b|\bfrequente\b|\bfrequentemente\b|\bgeralmente\b|\binfrequente\b|\binfrequentemente\b|\bintermintente\b|\bintervalo\b|\bmensalmente\b|\bnormal\b|\bnormalmente\b|\bnoturno\b|\bnunca\smais\b|\bnunca\b|\bo\stempo\stodo\b|\bocasional\b|\bocasionalmente\b|\bordináriamente\b|\bperiódico\b|\bperíodo\b|\bquinzenalmente\b|\bquotidiano\b|\bquotidiano\b|\braramente\b|\braro\b|\brecorrência\b|\brecorrente\b|\bregular\b|\bregularmente\b|\brepetir\b|\bsemanalmente\b|\bsemestre\b|\bsempre\b|\bsomente\b", str(x))))
df_frame["Frugalidade"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdesperdiçar\b", str(x))))
df_frame["Função"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bservir\b", str(x))))
df_frame["Ganhar_um_prêmio"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bvitória\b", str(x))))
df_frame["Ganhos_e_perdas"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcusto-benefício\b|\bganhar\b|\brender\b", str(x))))
df_frame["Grau"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\babsolutamente\b|\bdeveras\b|\bem\sparte\b|\benorme\b|\bestupidamente\b|\bextremamente\b|\bextremo\b|\bgrande\b|\bligeiramente\b|\bmais\b|\bmenos\b|\bmuito\b|\brealmente\b|\btanto\b|\btão\b|\btotalmente\b", str(x))))
df_frame["História"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bhistória\b", str(x))))
df_frame["Hospedar-se"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bficar\b|\bhospedar\b", str(x))))
df_frame["Hospital"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bhospital\b", str(x))))
df_frame["Hospitalidade"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bacolhedor\b", str(x))))
df_frame["Idade"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\badolescência\b|\badulto\b|\bantigo\b|\bcom\b|\bde\b|\bidade\b|\binfância\b|\binfantil\b|\bjovem\b|\bmaduro\b|\bmeninice\b|\bnovo\b|\bter\b|\bvelhice\b|\bvelho\b", str(x))))
df_frame["Identidade"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bidentidade\b", str(x))))
df_frame["Idiossincrasia"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bpeculiar\b|\bpessoal\b|\bprivativo\b|\búnico\b", str(x))))
df_frame["Impacto"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcolidir\b|\bpaulada\b|\bporrada\b", str(x))))
df_frame["Impedir_ou_permitir"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\baprovar\b|\bdar-se\sao\sluxo\b|\bdeixar\b|\binadimissível\b|\binviabilizar\b|\bpermitir\b", str(x))))
df_frame["Importância"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcentral\b|\bconhecido\b|\bcrítico\b|\bdominar\b|\bgravemente\b|\bimperdível\b|\bimportância\b|\bimportante\b|\bmarco\b|\bprimário\b|\bprincipal\b|\bprivilegiar\b|\bqualificar\b|\bsecundário\b|\bselo\b|\bsimbolo\b", str(x))))
df_frame["Impor_obrigação"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bexigir\b|\bobrigar\b", str(x))))
df_frame["Impressão"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\baparência\b|\bimagem\b|\bimpressionar\b", str(x))))
df_frame["Impulso_biológico"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bfaminto\b|\bfome\b", str(x))))
df_frame["Inclinação"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bvocação\b", str(x))))
df_frame["Inclusão"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\babranger\b|\bagrupar\b|\baté\b|\bcom\b|\bcontar\b|\bconter\b|\benglobar\b|\benvolver\b|\bincluir\b|\bincorporar\b|\bjuntar\b|\bmisturar\b|\bpossuir\b|\breunir\b", str(x))))
df_frame["Incremento"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\balém\sde\b|\bmais\b|\boutro\b|\bsomar\b", str(x))))
df_frame["Indicar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bacusar\b", str(x))))
df_frame["Inefabilidade"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bmagia\b|\bmágica\b|\bmágico\b", str(x))))
df_frame["Influência_objetiva"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bafetar\b|\befeito\b|\bimpactar\b|\bimpacto\b|\binfluência\b|\binfluenciar\b|\bpoder\b|\bprejudicar\b|\bprejuízo\b|\bprocurar\b", str(x))))
df_frame["Influência_subjetiva"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bconvidativo\b|\bdesestimular\b|\bembriagado\b|\binfluenciar\b|\binspiração\b|\binspirador\b|\binspirar\b|\bmusa\b|\btrazer\b|\bvaler\sa\spena\b", str(x))))
df_frame["Informação"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdado\b|\bdados\b|\bdica\b|\binformação\b|\binformar\b|\bnoticiar\b", str(x))))
df_frame["Informação_atribuída"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bde\sacordo\scom\b|\bsegundo\b", str(x))))
df_frame["Informação_não_atribuída"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bsupostamente\b", str(x))))
df_frame["Infrações"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bfalha\b|\bfalta\b|\binfração\b|\bmarcar\sfalta\b", str(x))))
df_frame["Infrações_diretas"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcarrinho\b|\bderrubar\b|\bentrada\b|\bsplashing\b", str(x))))
df_frame["Infrações_indiretas"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcarregada\b|\bcarregar\b|\bcavar\b|\bcondução\b|\bconduzir\b|\bdois\stoques\b|\bdupla\sfalta\b|\bfalta\sde\spé\b|\bimpedimento\b|\binvadir\b|\binvasão\b|\bjogo\sperigoso\b|\bmão\b|\bqueimar\sa\slargada\b|\bqueimar\b|\bsimulação\b|\bsimular\b", str(x))))
df_frame["Infraestrutura"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bbase\b|\binfraestrutura\b", str(x))))
df_frame["Ingestão"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\balimentação\b|\balimentar\b|\balmoçar\b|\balmoço\b|\bbeber\b|\bbrocar\b|\bcomer\b|\bcomida\b|\bconsumir\b|\bjantar\b|\blanchar\b|\blanche\b|\bpetiscar\b|\btomar\b", str(x))))
df_frame["Ingredientes"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\babacaxi\b|\baçaí\b|\baçúcar\sde\sconfeiteiro\b|\baçúcar\b|\bágua\b|\baguardente\b|\baipim\b|\bálcool\b|\balho\b|\banchova\b|\barroz\b|\bazeite\sde\sdendê\b|\bazeite\sde\soliva\b|\bazeite\b|\bbacalhau\b|\bbacon\b|\bbacuri\b|\bbadejo\b|\bbanana-da-terra\b|\bbanana\b|\bbanha\sde\sporco\b|\bbatata-baroa\b|\bbatata\b|\bbife\b|\bburiti\b|\bcação\b|\bcacau\b|\bcachaça\b|\bcafé\b|\bcajá\b|\bcaju\b|\bcaldo\sde\scarne\b|\bcamarão\b|\bcana-de-açúcar\b|\bcana\b|\bcanela\b|\bcapivara\b|\bcaranguejo\b|\bcarne-seca\b|\bcarne\b|\bcarneiro\b|\bcatupiry\b|\bcavaquinha\b|\bcebola\b|\bcereal\b|\bcerveja\b|\bchantili\b|\bchantilly\b|\bcharque\b|\bcheddar\b|\bchocolate\sem\spó\b|\bchocolate\sgranulado\b|\bchocolate\b|\bchuchu\b|\bcoalhada\b|\bcoco\b|\bcoentro\b|\bcontra-filé\b|\bcostela\b|\bcravo\b|\bcrustáceo\b|\bcupuaçu\b|\bdendê\b|\bdoce\sde\sleite\b|\berva-mate\b|\bfarinha\sde\smandioca\b|\bfarinha\b|\bfécula\b|\bfeijão\b|\bfermento\b|\bfilé\b|\bfrango\b|\bfruta\b|\bfruto\b|\bgalinha\b|\bgorgonzola\b|\bguaraná\b|\bhortelã\b|\bingrediente\b|\biogurte\b|\bjaca\b|\bjambu\b|\bjavali\b|\bjoelho\sde\sporco\b|\bketchup\b|\blagarto\b|\blagosta\b|\blagostim\b|\blegume\b|\bleite\scondensado\b|\bleite\b|\blinguiça\scalabresa\b|\blinguiça\b|\blombo\b|\bmacaxeira-brava\b|\bmacaxeira\b|\bmaionese\b|\bmandioca-brava\b|\bmandioca\b|\bmanga\b|\bmangaba\b|\bmaniva\b|\bmanteiga\b|\bmarisco\b|\bmassa\b|\bmel\b|\bmilho\b|\bmoranga\b|\bmorango\b|\bmozzarela\b|\bmurici\b|\bnoz\b|\bnutella\b|\bóleo\b|\bora-pro-nóbis\b|\bovo\b|\bpaçoca\b|\bpacu\b|\bpaio\b|\bpão\b|\bpeito\sde\sfrango\b|\bpeixe\b|\bpequi\b|\bpernil\b|\bperu\b|\bpicanha\b|\bpimenta\b|\bpinhão\b|\bpiranha\b|\bpirarucu\b|\bpolvilho\b|\bqueijo\b|\bquiabo\b|\bquirera\b|\brepolho\b|\bsal\b|\bsalmão\b|\bsalsicha\b|\bsalsichão\b|\bsapoti\b|\bsardinha\b|\bshitake\b|\bsobrecoxa\b|\bsteak\b|\btapioca\b|\btempero\b|\btomate\b|\btorresmo\b|\btortelli\b|\btucumã\b|\btucunaré\b|\btucupi\b|\bumbu\b|\bvegetal\b|\bvinho\b|\bwasabi\b|\bwurst\b", str(x))))
df_frame["Instalações_esportivas"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bárea\spara\scanoagem\b|\barena\b|\bcampo\sde\satletismo\b|\bcampo\sde\sbeisebol\b|\bcampo\sde\sfutebol\b|\bcampo\sde\sgolfe\b|\bcampo\sde\shóquei\b|\bcampo\sde\srúgbi\b|\bcampo\spara\sequitação\b|\bcampo\b|\bcentro\saquático\b|\bcentro\sde\sginástica\solímpica\b|\bcircuito\b|\bestádio\sde\sfutebol\b|\bestádio\b|\bginásio\spoliesportivo\b|\bginásio\b|\binstalação\b|\blagoa\b|\bmar\b|\bpavilhão\b|\bpista\sde\satletismo\b|\bpista\sde\sciclismo\b|\bpista\b|\bpraia\b|\bquadra\sde\sbadminton\b|\bquadra\sde\sbasquete\b|\bquadra\sde\shandebol\b|\bquadra\sde\stênis\b|\bquadra\sde\svôlei\b|\bquadra\b|\brua\b|\bsambódromo\b|\bvelódromo\b", str(x))))
df_frame["Instância"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcomo\b|\bexemplo\b", str(x))))
df_frame["Instância_de_evento"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bciclo\b|\bde\snovo\b|\bfase\b|\bnovamente\b|\bocasião\b|\brepetido\b|\buma\svez\b|\bvez\b", str(x))))
df_frame["Instância_única"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcada\b|\bsimplesmente\b|\bsó\b|\búnico\b", str(x))))
df_frame["Instituições"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\binstituição\b", str(x))))
df_frame["Intérpretes_e_papéis"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bapresentação\b|\bapresentar\b|\bassistir\b|\batuar\b|\bbrincar\b|\bensaiar\b|\bespetáculo\b|\besquete\b|\bestrela\b|\bestrelar\b|\bfazer\b|\bfilme\b|\bpalco\b|\bpapel\b|\bpeça\b|\bprotagonizar\b|\bser\b|\bteatro\b|\btreinar\b", str(x))))
df_frame["Intervenção_médica"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bapresentar\b|\bexame\b|\bmedicar\b|\breceitar\b|\bremédio\b|\btraqueotomia\b|\bvítima\b", str(x))))
df_frame["Jogadas"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bestilo\b|\bjogada\b|\bjogar\b|\blance\b|\bmanobra\b|\btécnica\b", str(x))))
df_frame["Jogadas_individuais"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bacertar\b|\bafundar\b|\bagachamento\b|\bagachar\b|\bagarramento\b|\bagarrar\b|\balinhamento\b|\bamorti\b|\bapproach\b|\baproveitar\srebote\b|\baproximação\b|\barranque\b|\barremessar\b|\barremesso\b|\batacar\b|\bataque\b|\batirar\b|\bavançar\b|\bavanço\b|\bback\sswing\b|\bbackhand\sclear\b|\bbackhand\b|\bbate-pronto\b|\bbater\b|\bbatida\b|\bbicicleta\b|\bboggey\b|\bborboleta\b|\bbraçada\b|\bcabeceada\b|\bcabecear\b|\bcabeceio\b|\bcaminhar\b|\bchina\b|\bchop\b|\bchutar\b|\bchute\b|\bcobrança\b|\bcobrar\b|\bconcha\b|\bcorrer\b|\bcorrida\b|\bcortada\b|\bcortar\b|\bcostas\b|\bcrawl\b|\bcrol\b|\bcruzada\b|\bcruzar\b|\bdeixadinha\b|\bdisparar\b|\bdouble\sboggey\b|\bdrive\b|\bdrop\sgoal\b|\bdrop\sshot\b|\bdrop\b|\beagle\b|\bempurrão\b|\berguer\b|\bescalar\b|\bescanteio\b|\bespalmar\b|\bestilo\slivre\b|\bflick\b|\bforehand\b|\bfresh\sair\b|\bfuga\b|\bgirar\b|\bgiro\b|\blançamento\b|\blançar\b|\blance\slivre\b|\blance-livre\b|\blateral\b|\blevantamento\b|\blevantar\b|\blineout\b|\blivre\b|\bmarco\b|\bmedley\b|\bmeio\spasso\b|\bnadar\b|\bnado\slivre\b|\bnado\b|\bobstrução\b|\bpancada\sleve\b|\bparalela\b|\bpassada\b|\bpegar\srebote\b|\bpegar\b|\bpeito\b|\bpeixinho\b|\bpênalti\b|\bpenalty\sgoal\b|\bpernada\b|\bpiaffe\b|\bpontapé\sde\spenalidade\b|\bpontapé\sde\sressalto\b|\bpontapé\b|\bprogressão\b|\bpular\b|\bpulo\b|\bpush\sand\spump\b|\bpush-hit\b|\bquicar\b|\bquique\b|\brebote\b|\bremada\b|\bremar\b|\brolamento\b|\brolar\b|\bsacar\b|\bsaltar\b|\bsalto\stesoura\b|\bsalto\b|\bsaque\b|\bsegurar\b|\bserviço\b|\bservir\b|\bshot\b|\bsmash\b|\bsoltar\b|\bsprint\b|\bswing\b|\btacada\b|\btacar\b|\btesoura\b|\btiro\sde\scanto\b|\btiro\sde\sgol\b|\btiro\sde\smeta\b|\btiro\slivre\b|\btocar\b|\btopspin\b|\btoque\b|\bv\b|\bvelejar\b|\bvoleio\b", str(x))))
df_frame["Jogadas_interativas"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bagarramento\b|\bagarrar\b|\barremessar\b|\barremesso\b|\batacar\b|\bataque\b|\bbater\b|\bblock\spass\b|\bblock\b|\bbloquear\b|\bbloqueio\b|\bbola\salta\b|\bcarretilha\b|\bchapéu\b|\bchutar\b|\bchute\b|\bclear\b|\bclinche\b|\bcombinar\b|\bcontra-atacar\b|\bcontra-ataque\b|\bcruzado\b|\bcruzamento\b|\bcruzar\b|\bdefender\b|\bdefesa\b|\bdefletir\b|\bdeflexão\b|\bderrubada\b|\bderrubar\b|\bdevolução\b|\bdevolver\b|\bdireto\b|\bdriblar\b|\bdrible\sda\svaca\b|\bdrible\b|\berguer\b|\bescalão\b|\besquiva\b|\besquivar\b|\bestabilização\b|\bestrangulamento\b|\bfinta\b|\bgancho\b|\bgolpe\b|\bgolpear\b|\bimobilização\b|\bimobilizar\b|\binterceptação\b|\binterceptar\b|\bjab\b|\bknockdown\b|\blambreta\b|\blançamento\b|\blançar\b|\blençol\b|\bleque\b|\blevantamento\b|\blevantar\b|\blivrar\b|\blob\b|\blutar\b|\bmarcação\b|\bmarcar\b|\bmaul\b|\bmeia-lua\b|\bparada\b|\bpassagem\sdo\sbastão\b|\bpassagem\b|\bpassar\b|\bpasse\smolhado\b|\bpasse\sseco\b|\bpasse\b|\bpontapé\b|\bpressionar\b|\bqueda\b|\breceber\b|\brecepção\b|\broubo\sde\sbola\b|\bruck\b|\bsocar\b|\bsoco\b|\bswing\b|\btabela\b|\btabelar\b|\btackle\b|\btocar\b|\btoco\b|\btoque\b|\btroca\b|\btrocar\b|\bultrapassar\b|\buppercut\b", str(x))))
df_frame["Jogadas_pontuadas"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\babertura\b|\babrir\smarcador\b|\babrir\splacar\b|\bace\b|\bacertar\b|\badolph\b|\balbatross\b|\bampliar\b|\barco\b|\barremesso\b|\bassistência\b|\bback\shalf\stwist\b|\bback\sthree\squarter\b|\bback\b|\bbalançar\b|\bbalanço\b|\bball\sout\b|\bbandeja\b|\bbarani\sout\b|\bbarani\b|\bbirdie\b|\bbombeiro\b|\bbreak-point\b|\bcan\scan\b|\bcarpado\b|\bcesta\b|\bchave\b|\bcody\b|\bcompletar\b|\bcompulsory\b|\bconcluir\b|\bconclusão\b|\bconner\sspin\b|\bconverter\b|\bcravada\b|\bcravar\b|\bcruzado\b|\bdecolagem\b|\bdiamidov\b|\bdireto\b|\bdouble\sback\b|\bdouble\sfull\b|\bdouble\smini\stramp\b|\bduplo\stwist\scarpado\b|\bduplo-duplo\b|\bempunhaduras\b|\benterrada\b|\benterrar\b|\bequilíbrio\b|\bespacate\b|\bespacato\b|\bespargata\b|\bestabilização\b|\bestendida\b|\besticada\b|\bfinalização\b|\bfinalizar\b|\bflic-flac\b|\bfliffis\b|\bflutuador\b|\bfront\sfull\b|\bfront\sthree\squarter\b|\bfront\b|\bfull\b|\bgame\spoint\b|\bgirar\b|\bgiro\sgigante\b|\bgiro\b|\bgol\scontra\b|\bgol\solímpico\b|\bgol\b|\bgolden\sscore\b|\bgrupado\b|\bguindaste\b|\bgut\swrench\b|\bhalf\sin\shalf\sout\b|\bhalf\snelson\b|\bhalf\b|\bhandspring\b|\bhole\sin\sone\b|\bhypolito\strês\b|\bin\b|\bippon\b|\bjanz\b|\bkoka\b|\bkorbut\b|\blargada\b|\blargar\b|\bmão\saberta\b|\bmarcar\sgol\b|\bmarcar\b|\bmatch\spoint\b|\bmidle\b|\bmiller\b|\bmoinho\b|\bmortal\b|\bmorte\ssúbita\b|\bnocaute\b|\bonda\b|\bout\b|\bparada\sde\smãos\b|\bparada\b|\bparar\b|\bpegada\b|\bpegar\b|\bpike\b|\bpirueta\b|\bpivot\b|\bpivote\b|\bponte\saérea\b|\bponto\b|\bpontuação\b|\bpullover\b|\brandolph\b|\brandy\b|\bretomada\b|\bretomar\b|\brolê\b|\bround-off\b|\brudolph\b|\brudy\sout\b|\brudy\b|\bsaída\b|\bsaltar\b|\bsalto\spak\b|\bsalto\b|\bset\spoint\b|\bside\b|\bstützkehre\b|\bsuple\b|\bsuplê\b|\btakedown\b|\btkachev\b|\btocar\b|\btoque\b|\btriffis\b|\btriple\sback\b|\btriplo-duplo\b|\btuck\b|\bvertical\b|\bvéu\b|\bvoar\b|\bvoluntary\b|\bvoo\b|\bwazari\b|\bwhipback\b|\bwipe-out\b|\byuko\b", str(x))))
df_frame["Julgamento"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\badmirar\b|\bapaixonado\b|\bdelicioso\b|\berrar\b|\bestigmatizar\b|\bhonesto\b|\bicônico\b|\bimpecável\b|\brespeito\b|\bvalioso\b|\bvalorizar\b", str(x))))
df_frame["Julgamento_de_intensidade"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bforte\b", str(x))))
df_frame["Legalidade"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcertificado\b|\bdireito\b|\blegal\b", str(x))))
df_frame["Lembrar_experiência"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\besquecer\b|\binesquecível\b|\blembrança\b|\blembrar\b|\bmemória\b", str(x))))
df_frame["Ler"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bleitor\b|\bleitura\b|\bler\b", str(x))))
df_frame["Levar_tempo"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bágil\b|\bdevagar\b|\bdevagarzinho\b|\bem\b|\bgradualmente\b|\blentamente\b|\blento\b|\blevar\b|\bligeiramente\b|\bprestatividade\b|\bpresteza\b|\brapidamente\b|\brápido\b", str(x))))
df_frame["Level_of_force_exertion"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bforça\b|\bforte\b|\bimpotente\b|\bpoderoso\b|\bpotência\b|\bsuave\b", str(x))))
df_frame["Level_of_force_resistance"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdureza\b|\bduro\b|\belástico\b|\bmais\b|\bresistente\b|\bsensível\b", str(x))))
df_frame["Libertar_prisioneiro"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\blibertar\b", str(x))))
df_frame["Licença_temportária"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bférias\b", str(x))))
df_frame["Liderança"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\badministrar\b|\bcapitão\b|\bgovernante\b|\blíder\b|\bliderado\b|\bprincesa\b|\breger\b|\brei\b", str(x))))
df_frame["Limitação"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bbarreira\b", str(x))))
df_frame["Limiting"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bapenas\b|\blimitação\b|\blimitar\b|\bsó\b", str(x))))
df_frame["Locais_naturais"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bar\slivre\b|\barquipélago\b|\barrecife\b|\bbacia\b|\bbaía\b|\bbalnear\b|\bbalneário\b|\bbarra\b|\bbarragem\b|\bbeleza\snatural\b|\bbosque\b|\bcachoeira\b|\bcampo\b|\bcanal\b|\bcascata\b|\bcatarata\b|\bcaudal\b|\bcaverna\b|\bchapada\b|\bcordilheira\b|\bcórrego\b|\bdeserto\b|\bduna\b|\benseada\b|\bestreito\b|\bfloresta\b|\bgaláxia\b|\bgruta\b|\bhidrotermal\b|\bilha\b|\bjardim\b|\blago\b|\blagoa\b|\blençol\b|\bmangue\b|\bmar\b|\bmargem\b|\bmata\b|\bmirante\b|\bmontanha\b|\bmontão\b|\bmonte\b|\bmorro\b|\bmundo\b|\bnatural\b|\bnatureza\b|\boceano\b|\borla\b|\bpantanal\b|\bparadisíaco\b|\bparque\secológico\b|\bparque\smunicipal\b|\bparque\snacional\b|\bparque\b|\bpasto\b|\bpenínsula\b|\bpico\sde\smontanha\b|\bpiscina\snatural\b|\bponto\spanorâmico\b|\bpraia\b|\bqueda\sd\ságua\b|\bqueda\sde\ságua\b|\brecife\b|\breserva\snatural\b|\breserva\b|\briacho\b|\bribeira\b|\bribeirão\b|\brio\b|\bsertão\b|\btrilha\sde\scaminhada\b|\bvale\b", str(x))))
df_frame["Locais_políticos"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\baldeia\b|\barquidiocese\b|\bbairro\b|\bcapital\b|\bcidade\b|\bcongresso\b|\bcontinente\b|\bdiocese\b|\bdistrito\b|\bestado\b|\beuropa\b|\bexterior\b|\bfavela\b|\bgoverno\b|\binternacionalmente\b|\bmetrópole\b|\bmundo\b|\bmunicipal\b|\bmunicípio\b|\bpaís\b|\bparóquia\b|\bplanalto\b|\bpovoado\b|\bprincipado\b|\btaba\b|\bterra\b|\bvila\b|\bvilarejo\b", str(x))))
df_frame["Locais_por_colocação"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\blocalização\b|\bposição\b", str(x))))
df_frame["Locais_por_entidade_características"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bbairro\b|\bcinturão\b|\benclave\b", str(x))))
df_frame["Locais_por_evento"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcampo\sde\sbatalha\b|\bcampo\b|\bcena\b|\bcenário\b|\bespaço\b|\blocal\b|\bpicadeiro\b|\bteatro \b", str(x))))
df_frame["Locais_por_propriedade"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bpropriedade\b|\bterreno\b", str(x))))
df_frame["Locais_por_uso"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bárea\sde\srecreação\b|\bárea\sindustrial\b|\bassociação\ssocial\b|\bassociação\b|\bbar\b|\bcadeia\b|\bcanto\sdo\ssilêncio\b|\bcárcere\b|\bcasa\sde\sshow\b|\bcemitério\b|\bcentro\seducacional\b|\bchafariz\b|\bcidade\sbase\b|\bcidade\ssede\b|\bcomplexo\b|\bescola\sde\sartes\b|\bescola\sde\sbalé\b|\bescola\sde\smúsica\b|\bescola\stécnica\b|\bescola\b|\bfaculdade\sde\sdireito\b|\bfaculdade\sde\sodontologia\b|\bfaculdade\b|\bfazenda\b|\bfundação\b|\bigreja\b|\bindústria\b|\binstituição\sde\sensino\b|\binstituição\seducacional\b|\binstituição\b|\binterior\b|\bmarco\shistórico\b|\bmeca\b|\bmonumento\b|\borganização\sde\sproteção\sdos\sanimais\b|\borganização\sde\sserviço\ssocial\b|\borganização\ssem\sfins\slucrativos\b|\borganização\b|\bporto\b|\bpraça\b|\bprisão\b|\bpub\b|\bquarto\b|\bquintal\b|\bsantuário\b|\bsede\b|\bseminário\b|\bsindicato\b|\buniversidade\sparticular\b|\buniversidade\b|\bUTI\b", str(x))))
df_frame["Local"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bambiente\b|\bárea\b|\bcentral\b|\bcentro\sda\scidade\b|\bcentro\b|\bespaço\b|\blocal\b|\blocalidade\b|\blocalização\b|\blugar\b|\bmancha\b|\bnúcleo\b|\bperiferia\b|\bplaneta\b|\bponto\b|\bregião\b|\bregional\b|\bsuperfície\b|\bTerra\b|\bterreno\b|\bterritório\b|\burbano\b|\bzona\b", str(x))))
df_frame["Localização_da_luz"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bacender\b|\bbrilhante\b|\bbrilhar\b|\bbrilho\spálido\b|\bbrilho\b|\bcentelha\b|\bchamejar\b|\bcintilação\b|\bcintilante\b|\bcintilar\b|\bclaro\b|\bcoruscação\b|\bcoruscar\b|\besplendor\b|\bflamejar\b|\bflash\b|\biluminado\b|\biluminar\b|\bluminosidade\b|\bluminoso\b|\blustroso\b|\bluz\b|\bpiscante\b|\bpiscar\b|\brefulgência\b|\brefulgente\b|\brefulgir\b|\breluzir\b|\bresplandecente\b|\bresplandecer\b|\bresplendor\b|\bsolar\b|\bvislumbrar\b|\bvislumbre\b", str(x))))
df_frame["Localização_esperada_da_pessoa"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcasa\b", str(x))))
df_frame["Localização_na_trajetória"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bpassar\b", str(x))))
df_frame["Localização_no_tempo"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bano\b|\bdia\b|\bem\b|\bhora\b|\btempo\b", str(x))))
df_frame["Localizar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bachar\b|\bencontrar\b", str(x))))
df_frame["Louvabilidade"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bhonra\b", str(x))))
df_frame["Malfeitoria"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bpecar\b", str(x))))
df_frame["Maneira"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\baltamente\b|\bapaixonadamente\b|\batravés\sde\b|\batravés\b|\bauditivamente\b|\bcinestesicamente\b|\bcomo\b|\bconforme\b|\bcuriosamente\b|\bde\sum\sjeito\b|\bde\sverdade\b|\bdiretamente\b|\bdireto\b|\bincontrolavelmente\b|\bintencionalmente\b|\bjeito\b|\blevemente\b|\bliteralmente\b|\bmaneira\b|\bmaravilhosamente\b|\bmedida\b|\bnormalmente\b|\bobsessivamente\b|\bpoeticamente\b|\bprofundamente\b|\bprogressivo\b|\bradicalmente\b|\btranquilamente\b|\bvisualmente\b", str(x))))
df_frame["Manipulação"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bapertar\b|\bsegurar\b|\btocar\b", str(x))))
df_frame["Marca_corporal"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcicatriz\b", str(x))))
df_frame["Massa_movimento Mass_motion"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bafluir\b|\binundar\b", str(x))))
df_frame["Massa_quantificada"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bmedida\b|\bmuito\b|\bnenhum\b|\bnúmero\b|\bpeso\b|\btodo\b|\btudo\b", str(x))))
df_frame["Matar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bmatar\b|\bmorto\b", str(x))))
df_frame["Medida_por_ação"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bbocado\b|\bpitada\b|\bpunhado\b", str(x))))
df_frame["Medida_volume"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcolher\sde\ssopa\b|\bfio\b|\bgota\b", str(x))))
df_frame["Medir_duração"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bano\b|\bdia\b|\bhora\b|\bmês\b|\bmilênio\b|\bminuto\b|\bnanossegundo\b|\bquinzena\b|\bsegundo\b|\bsemana\b|\btempo\b", str(x))))
df_frame["Meio"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\batravés\b|\bcanal\b|\bcomo\b|\bcomo\b|\bem\b|\bforma\b|\bjeito\b|\bmecanismo\b|\bmeio\b|\bmétodo\b|\bmídia\b|\bmodo\sde\soperação\b|\bpor\b|\bprocedimento\b|\bprocesso\b|\breceita\b|\btática\b|\btécnica\b", str(x))))
df_frame["Meios_de_comunicação"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcelular\b|\btelefone\b", str(x))))
df_frame["Meios_de_transporte"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bavião\b|\bbalão\b|\bbalsa\b|\bbarca\b|\bbarco\sde\spasseio\b|\bbarco\b|\bbicicleta\b|\bbonde\b|\bcarro\b|\bfrescão\b|\bhelicóptero\b|\bmetrô\b|\bmotocicleta\b|\bnavio\sde\scruzeiro\b|\bnavio\b|\bônibus\sde\spasseio\b|\bônibus\b|\bparador\b|\btáxi\saéreo\b|\btáxi\b|\bteleférico\b|\btrailer\b|\btrem\b|\bvagão\sleito\b|\bveículo\b|\bveleiro\b", str(x))))
df_frame["Membro_das_forças_armadas"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bartilheiro\b|\bcapitão\b|\bnavegador\b", str(x))))
df_frame["Memória"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\blembrar\b|\brecordação\b", str(x))))
df_frame["Mirar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdirigido\b|\bmira\b", str(x))))
df_frame["Modalidades_esportivas"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\barremesso\b|\bborboleta\b|\bcanoagem\sslalom\b|\bcanoagem\svelocidade\b|\bciclismo\sBMX\scorrida\b|\bciclismo\sBMX\sfreestyle\b|\bciclismo\sBMX\smanobras\b|\bciclismo\sBMX\sracing\b|\bciclismo\sBMX\b|\bciclismo\sde\sestrada\b|\bciclismo\sde\spista\b|\bciclismo\smountain\sbike\b|\bconcurso\scompleto\sde\sequitação\b|\bcorrida\scom\sobstáculos\b|\bcorrida\sde\sfundo\b|\bcorrida\sde\slonga\sdistância\b|\bcorrida\sde\svelocidade\b|\bcorrida\b|\bcostas\b|\bcrawl\b|\bcrol\b|\bdecatlo\b|\bespada\b|\bestilo\slivre\b|\bflorete\b|\bginástica\sartística\b|\bginástica\sde\strampolim\b|\bginástica\srítmica\b|\bheptatlo\b|\bhipismo\sadestramento\b|\bhipismo\sCCE\b|\bhipismo\ssaltos\b|\blançamento\b|\bluta\sestilo\slivre\b|\bluta\sgreco-romana\b|\bmaratona\b|\bmarcha\satlética\b|\bmedley\b|\bmeio-fundo\b|\bmodalidade\b|\bnado\sborboleta\b|\bnado\scostas\b|\bnado\slivre\b|\bnado\speito\b|\bpark\b|\bpeito\b|\brevezamento\b|\bsabre\b|\bsalto\scom\svara\b|\bsalto\sem\saltura\b|\bsalto\sem\sdistância\b|\bsalto\striplo\b|\bsalto\b|\bstreet\b|\btrampolim\sacrobático\b", str(x))))
df_frame["Modo_de_viver"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\baventureiro\b|\bboemia\b|\bboêmio\b|\bdeficiente\b|\bdeficiente\b|\bhippie\b|\bnatureba\b|\bnaturismo\b|\bvegano\b|\bvida\b|\bviver\b", str(x))))
df_frame["Morrer"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bafogar\b|\baguentar\b|\bfalecer\b|\bresistir\b", str(x))))
df_frame["Morte"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bmorrer\b|\bmorte\b|\bperda\b", str(x))))
df_frame["Morto_ou_vivo"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bmortal\b|\bmortal\b|\bvida\b", str(x))))
df_frame["Móveis"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bbanco\b|\bcadeira\b|\bcama\b|\bcarteira\b|\bcolchão\b|\bguarda-roupa\b|\bmesa\b|\bmóvel\b|\bpoltrona\b|\bprateleira\b|\bsofá-cama\b", str(x))))
df_frame["Movimento"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\ba\b|\balterar\b|\bavançar\b|\bbalançar\b|\bbater\b|\bderrubar\b|\bdeslizar\b|\bdesviar\b|\bdirigir\b|\bdisparar\b|\bempurrar\b|\benrolar\b|\bespiralar\b|\bir\b|\bmover\b|\bmovimento\b|\bmudar\b|\bondular\b|\bpercorrer\b|\bpuxar\b|\bremover\b|\brodopiar\b|\brolar\b|\bsair\b|\bseguir\b|\bserpear\b|\bserpentear\b|\btrançar\b|\bviajar\b|\bvoar\b|\bvolta\b|\bvoltar\b|\bziguezaguear\b", str(x))))
df_frame["Movimento_corporal"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\baberto\b|\bcontorcer\b|\bestender\b|\bfechar\b|\bmexer\b|\bmorder\b|\bmover\b|\bsentar\b|\bvirar\b", str(x))))
df_frame["Movimento_direcional"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcair\b|\bpor\b|\bsubmergir\b|\btombo\b", str(x))))
df_frame["Movimento_fluídico"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcorrente\b|\bfluido\b|\bgota\b", str(x))))
df_frame["Mudança_de_estado_operacional"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bapagar\b|\bligar\b|\bligar\b|\bligar\b", str(x))))
df_frame["Mudança_de_fase"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bgelar\b", str(x))))
df_frame["Mudança_de_temperatura_incoativa"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcalor\b", str(x))))
df_frame["Mudar_direção"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bvirar\b", str(x))))
df_frame["Mudar_duração_do_evento"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bextensão\b|\bperpetuar\b|\bprolongar\b", str(x))))
df_frame["Mudar_posição_em_uma_escala"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\batingir\b|\bchegar\b|\belevar\b|\bexplosão\b|\btriplicar\b", str(x))))
df_frame["Mudar_postura"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdeitar\b", str(x))))
df_frame["Mudar_tempo_do_evento"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdemora\b", str(x))))
df_frame["Nascer"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bnascer\b|\bnascimento\b", str(x))))
df_frame["Negação"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bnão\b|\bnunca\b|\bsem\b", str(x))))
df_frame["Negar_ou_conceder_permissão"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\baprovado\b|\baprovar\b", str(x))))
df_frame["Negócios"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bacademia\sde\sdança\sdo\sventre\b|\bacademia\sde\sdança\b|\bacademia\sde\sginástica\b|\bacademia\b|\badega\b|\bagência\sde\sentretenimento\b|\bagência\sde\sturismo\b|\bagência\sde\sviagens\sa\spontos\sturísticos\b|\bagência\b|\bagropecuária\b|\bantiquário\b|\bassistência\smédica\b|\batacadista\b|\bbanco\b|\bbazar\b|\bboate\b|\bboite\b|\bbomboniere\b|\bbookstore\b|\bboutique\b|\bbutique\b|\bcaixa\seletrônico\b|\bcasa\sde\sdança\b|\bcasa\sde\sshow\b|\bcasa\snoturna\b|\bclub\b|\bcomércio\sde\spneu\b|\bconcessionária\b|\bconfecção\b|\bconfeitaria\b|\bconsultoria\sde\srecursos\shumanos\b|\bconsultório\b|\bcorporação\b|\bdelivery\b|\bdesenvolvedora\sde\simóveis\b|\bdestilaria\b|\bdistribuidor\sde\sbebidas\b|\bdoceria\b|\bdrogaria\b|\beditora\sde\sjornais\b|\beditora\b|\bempreendimento\b|\bempresa\sde\slembrancinhas\sde\sfesta\b|\bempresa\sde\sorganização\sde\seventos\b|\bempresa\sde\svigilância\b|\bempresa\b|\bentrega\sde\srefeições\sprontas\b|\bestabelecimento\b|\bfábrica\b|\bfarmácia\b|\bfeira\sde\sartesanato\b|\bfirma\b|\bfloricultura\b|\bfornecedor\sde\sartigos\shospitalares\b|\bfranquia\b|\bfrutaria\b|\bhamburgueria\b|\binvestimento\b|\bjoalheria\b|\bjornal\b|\bkaraokê\b|\blava-rápido\b|\blivraria\b|\blocal\scom\smúsica\sao\svivo\b|\blocal\spara\seventos\b|\bloja\sde\sacessórios\sautomotivos\b|\bloja\sde\sacessórios\sde\smoda\b|\bloja\sde\sartigos\spara\scama\smesa\se\sbanho\b|\bloja\sde\sartigos\spara\sdança\b|\bloja\sde\sartigos\spara\sfestas\b|\bloja\sde\sazulejos\b|\bloja\sde\sbrinquedos\b|\bloja\sde\scalçado\b|\bloja\sde\sCDs\susados\b|\bloja\sde\scolchões\b|\bloja\sde\sconveniência\b|\bloja\sde\scostura\b|\bloja\sde\sdecoração\b|\bloja\sde\sdepartamento\b|\bloja\sde\sdiscos\b|\bloja\sde\seletrodomésticos\b|\bloja\sde\seletrônicos\b|\bloja\sde\sjogos\b|\bloja\sde\slingerie\b|\bloja\sde\smadeiras\b|\bloja\sde\smateriais\sartísticos\b|\bloja\sde\smateriais\sde\sconstrução\b|\bloja\sde\smateriais\spara\sartesanato\b|\bloja\sde\smoda\sfeminina\b|\bloja\sde\smoda\sinfantil\b|\bloja\sde\smoda\smasculina\b|\bloja\sde\smóveis\sinfantis\b|\bloja\sde\smúsica\b|\bloja\sde\spresentes\b|\bloja\sde\sprodutos\snaturais\b|\bloja\sde\sração\b|\bloja\sde\sroupa\b|\bloja\sde\sroupas\sde\sbanho\b|\bloja\sde\sroupas\sde\scama\b|\bloja\sde\sroupas\sde\spraia\b|\bloja\sde\sroupas\spara\sbebês\b|\bloja\sde\svideogame\b|\bloja\spara\sbebê\b|\bloja\b|\bmercadinho\b|\bmercado\b|\bmercearia\b|\bmultinacional\b|\bnegociação\b|\bnegócio\b|\boficina\sde\scarroceria\b|\boperadora\b|\bótica\b|\bperfumaria\b|\bpet\sshop\b|\bpetshop\b|\bposto\sde\scombustível\b|\bposto\sde\sgasolina\b|\bprodutora\sde\scine\se\svídeo\b|\bpromoção\b|\bprovedor\sde\sinternet\b|\bsalão\sde\sbeleza\b|\bserviço\sde\sajuste\sde\sroupas\b|\bserviço\sde\salinhamento\se\sbalanceamento\b|\bserviço\spúblico\b|\bserviço\sveterinário\sde\semergência\b|\bspa\b|\bsupermercado\b|\bvenda\b|\bvinícola\b|\bwi-fi\b", str(x))))
df_frame["Nível_de_luz"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bescuridão\b|\bescuro\b|\bluminoso\b", str(x))))
df_frame["Nomeação_simples"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bchamar\b", str(x))))
df_frame["Nomear"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bnomear\b", str(x))))
df_frame["Nome_simples"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\boração\b|\bpalavra\b|\bsigla\b|\btermo\b|\bverbo\b|\bvocábulo\b", str(x))))
df_frame["Notabilidade"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdestacar-se\b|\bganhar\b|\bgrande\b|\bmaior\b|\bpequeno\b", str(x))))
df_frame["Números_cardinais"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\b16\b|\b21\b|\bambos\b|\bbilhão\b|\bcatorze\b|\bcem\b|\bcento\b|\bcinco\b|\bcinquenta\se\sdois\b|\bcinquenta\b|\bdez\b|\bdezenove\b|\bdezesseis\b|\bdezessete\b|\bdezoito\b|\bdois\b|\bdoze\b|\bdual\b|\bdupla\b|\bduzentos\b|\bmeio\b|\bmil\b|\bmilhão\b|\bmilhar\b|\bnove\b|\bnoventa\b|\bnúmero\b|\boitenta\b|\boito\b|\bonze\b|\bpar\b|\bquarenta\b|\bquatorze\b|\bquatro\b|\bquinhentos\b|\bquinze\b|\bseis\b|\bsessenta\b|\bsete\b|\bsetenta\se\squatro\b|\bsetenta\b|\btrês\b|\btreze\b|\btrinta\se\scinco\b|\btrinta\se\sdois\b|\btrinta\se\snove\b|\btrinta\se\soito\b|\btrinta\se\squatro\b|\btrinta\se\sseis\b|\btrinta\se\ssete\b|\btrinta\se\strês\b|\btrinta\se\sum\b|\btrinta\b|\bum\b|\buma\b|\bvinte\se\scinco\b|\bvinte\se\sdois\b|\bvinte\se\snove\b|\bvinte\se\soito\b|\bvinte\se\squatro\b|\bvinte\se\sseis\b|\bvinte\se\ssete\b|\bvinte\se\strês\b|\bvinte\se\sum\b|\bvinte\b|\bzero\b|\bzilhão\b", str(x))))
df_frame["Números_ordinais"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdécimo\snono\b|\bdécimo\ssétimo\b|\bdécimo\ssexto\b|\bdécimo\sterceiro\b|\bdécimo\b|\bdécimo\b|\bnono\b|\bnono\b|\boitavo\b|\boitavo\b|\bprimeiro\b|\bprimeiro\b|\bquarto\b|\bquarto\b|\bquinto\b|\bsegundo\b|\bsegundo\b|\bsétimo\b|\bsexto\b|\bsexto\b|\bterceiro\b|\bterceiro\b|\búltimo\b", str(x))))
df_frame["Obter_documento"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdocumento\b|\bobter\b|\brenovar\b|\btirar\b", str(x))))
df_frame["Obviedade"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bclaro\b|\bclaro\b|\bdisponível\b|\bevidente\b|\bimperceptível\b", str(x))))
df_frame["Ocorrência_condicional"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcondicionado\b|\bse\b", str(x))))
df_frame["Oferecer"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\boferecer\b|\boferta\b|\bservir\b", str(x))))
df_frame["Operar_um_sistema"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bfuncionamento\b|\bfuncionar\b|\boperar\b", str(x))))
df_frame["Operar_veículo"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bandar\b|\bmontar\b|\bpilotar\b|\bteleguiado\b", str(x))))
df_frame["Opinião"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bachar\b|\bacreditar\b|\bcrer\b|\bopinião\b|\bpensar\b|\bteoria\b|\bteoricamente\b|\bvisão\b", str(x))))
df_frame["Oportunidade"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bchance\b|\boportunidade\b|\boportuno\b", str(x))))
df_frame["Organização"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bagência\sde\snotícias\b|\bassociação\b|\bcartel\b|\bclube\b|\bcomitê\b|\bconselho\b|\bcorporação\b|\bdelegação\b|\bdesorganização\b|\bdesorganizar\b|\bempresa\b|\bfraternidade\b|\bgoverno\b|\bgrupo\b|\binteligência\b|\bjudiciário\b|\bjuntar\b|\bliga\b|\bmultinacional\b|\bordem\b|\borganização\b|\borganizar\b|\bórgão\b|\bparlamento\b|\bsociedade\b|\bunião\b", str(x))))
df_frame["Órgão_judicial"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bvara\b", str(x))))
df_frame["Origem"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bafricano\b|\bamericano\b|\bárabe\b|\bargentino\b|\basiático\b|\bassírio\b|\bbizantino\b|\bbrasileiro\b|\bbritânico\b|\bcanadense\b|\bcapixaba\b|\bchinês\b|\bcolombiano\b|\bcubano\b|\bdatar\b|\bde\b|\begípcio\b|\bescocês\b|\bespanhol\b|\beuropeu\b|\bfinlandês\b|\bfrancês\b|\bgrego\b|\bholândes\b|\bindiano\b|\bindígena\b|\binternacional\b|\biraniano\b|\biraquiano\b|\birlandês\b|\bitaliano\b|\bjamaicano\b|\bjaponês\b|\bjordaniano\b|\blocal\b|\bmineiro\b|\bnacional\b|\bnacional\b|\boriental\b|\borigem\b|\botomano\b|\bportuguês\b|\bqueniano\b|\bromano\b|\brusso\b|\bsaudita\b|\bsírio\b|\bsuíço\b|\btupinambá\b|\bturco\b|\bvietnamita\b|\bvir\sde\b", str(x))))
df_frame["Origem_indígena"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bnativo\b", str(x))))
df_frame["Padrão_temporal"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\britmo\b", str(x))))
df_frame["Parcialidade"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bneutro\b", str(x))))
df_frame["Parentesco"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bagregado\b|\bavó\b|\bfilha\b|\bfilho\b|\birmã\b|\birmão\b|\bmadrinha\b|\bmãe\b|\bmamãe\b|\bmaterno\b|\bneto\b|\bpadrasto\b|\bpai\b|\bpapai\b|\bparente\b|\bprimo\b|\btio\b|\bvó\b", str(x))))
df_frame["Partes_de_roupas"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bbolso\b|\bbotão\b|\bcapuz\b|\bfita\b|\bsola\b", str(x))))
df_frame["Partes_do_corpo"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bbarriga\b|\bboca\b|\bbraço\b|\bcabeça\b|\bcabelo\b|\bcélula\b|\bcérebro\b|\bcintura\b|\bcolo\b|\bcoluna\b|\bcoração\b|\bcorpo\b|\bcostas\b|\bcostela\b|\bcotovelo\b|\bdedo\b|\bmão\b|\bmente\b|\bnariz\b|\bolho\b|\bombro\b|\bosso\b|\bpeito\b|\bperna\b|\brosto\b|\btesta\b", str(x))))
df_frame["Parte_como_segmentos_ordenados"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcamada\b", str(x))))
df_frame["Parte_elemento"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\baglomerado\b|\bbocado\b|\bcaco\b|\bchip\b|\bfarrapo\b|\bfatia\b|\bfragmento\b|\bgalho\b|\blâmina\b|\bmigalha\b|\bnaco\b|\bnódulo\b|\bpedaço\b|\bplaca\b|\btorrão\b|\btrecho\b", str(x))))
df_frame["Parte_interior_exterior"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bexterior\b|\bexterno\b|\binterior\b|\binterno\b", str(x))))
df_frame["Parte_moldada"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\balça\b|\bboca\b|\bborda\b|\bbraço\b|\bcasca\b|\bgraveto\b|\bperna\b", str(x))))
df_frame["Parte_orientacional"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bápice\b|\bbaixo-ventre\b|\bbase\b|\bcanto\b|\bcimeira\b|\bcrista\b|\bdireita\b|\bdireito\b|\besquerda\b|\besquerdo\b|\bface\b|\bfrente\b|\bfrente\b|\bfrontal\b|\bfundo\b|\binferior\b|\binferior\b|\blado\b|\bleste\b|\bleste\b|\bnoroeste\b|\bnorte-sul\b|\bnorte\b|\bnorte\b|\bocidental\b|\boeste\b|\boeste\b|\boriental\b|\bpé\b|\bpico\b|\bposterior\b|\bretaguarda\b|\bsubdimensionado\b|\bsul\b|\bsul\b|\bsulista\b|\bsuperior\b|\btopo\b|\btraseiro\b|\bverso\b", str(x))))
df_frame["Parte_todo"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcentésimo\b|\bcentro\b|\bcompleta\b|\bcompor\b|\bconstituir\b|\bdecomposição\b|\bdedo\b|\bdividir\b|\bfazer\sparte\b|\bfiapo\b|\bformar\b|\bfragmento\b|\bgancho\b|\bgota\b|\bintegrar\b|\binteiro\b|\binterno\b|\bmetade\b|\boitavo\b|\bparte\b|\bpertencer\b|\bpingo\b|\bponta\b|\bporção\b|\bprefixo\b|\bquinto\b|\bseção\b|\bsegmento\b|\bter\b|\bterceiro\b|\btrimestre\b", str(x))))
df_frame["Participação"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bbaladar\b|\bcelebrar\b|\bcomprometimento\b|\bintegrante\b|\bjogar\b|\bparticipação\b|\bparticipante\b|\bparticipar\b", str(x))))
df_frame["Partida_do_turista_alojamento"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcheck-out\b|\bsaída\b", str(x))))
df_frame["Partida_do_turista_localidade"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcheck-in\b|\bembarcar\b|\bembarque\b", str(x))))
df_frame["Partir"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bafastar\b|\bdebandar\b|\bdeixar\b|\bdesaparecer\b|\bdesaparecimento\b|\bemergir\b|\bescapar\b|\bexôdo\b|\bfuga\b|\bir\sembora\b|\bir\b|\bpartir\b|\bsaída\b|\bsair\b|\bsumir\b", str(x))))
df_frame["Partitivo"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bde\b|\bfora\b|\bparte\b", str(x))))
df_frame["Peça_arquitetônica"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\barco\b|\bazulejo\b|\bbalcão\b|\bchão\b|\bcornija\b|\bfachada\b|\bfundação\b|\bjanela\b|\blaje\b|\blance\b|\blareira\b|\bmurada\b|\bmureta\b|\bparapeito\b|\bparede\b|\bpatamar\b|\bpiso\b|\bporta\b|\btelhado\b|\bteto\b|\btrave\b", str(x))))
df_frame["Pedir"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bchamar\b|\bconvidar\b|\bconvite\b|\bdemanda\b|\bdemandar\b|\bmandar\b|\bordem\b|\bpedido\b|\bpedir\b|\bsolicitar\b", str(x))))
df_frame["Pegar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bapanhar\b|\bapossar\b|\bapreender\b|\bapreensão\b|\bcomandar\b|\blevar\b|\bpegar\b", str(x))))
df_frame["Pegar_fogo"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bqueimar\b", str(x))))
df_frame["Percepção_ativa"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\badmirar\b|\barregalar\b|\bassistir\b|\bavistar\b|\bcheirar\b|\bcontemplar\b|\bdeparar\b|\bembasbacar\b|\bencarar\b|\benxergar\b|\bescutar\b|\bespiar\b|\bespionar\b|\bespreitada\b|\bespreitar\b|\bfungada\b|\bfungar\b|\bgosto\b|\bobservação\b|\bobservar\b|\bolhar\b|\bolhar\b|\bouvir\b|\bpalpar\b|\bprovar\b|\brelançar\b|\brelance\b|\bsaborear\b|\bsentir\b|\bver\b", str(x))))
df_frame["Período_de_tempo"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\banoitecer\b|\bdia\sa\sdia\b|\bdia\b|\bhorário\b|\btempo\b|\bvida\b", str(x))))
df_frame["Persuasão"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bconvencer\b|\bmotivar\b", str(x))))
df_frame["Pessoas"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\balguém\b|\balguém\b|\bcara\b|\bcaráter\b|\bcavalheiro\b|\bcidadão\b|\bcolega\b|\bcompanheiro\b|\bdama\b|\bgalera\b|\bgaroto\b|\bgente\b|\bhomem\b|\bhumanidade\b|\bhumano\b|\bindivíduo\b|\bmenino\b|\bmoço\b|\bmortal\b|\bmulher\b|\bnenhum\b|\bninguém\b|\bninguém\b|\bpersonagem\b|\bpessoa\b|\bpessoal\b|\bpovo\b|\bpúblico\b|\bquem\b|\bquem\b|\brapaz\b|\bser\shumano\b|\bser\svivo\b|\btodo\smundo\b|\btodos\b|\bum\b|\bvida\b", str(x))))
df_frame["Pessoas_por_atividade_de_lazer"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\baventureiro\b|\bbackpacker\b|\bbanhista\b|\bfolião\b|\bfrequentador\b|\bgamer\b|\bgeek\b|\bjogador\b|\bmotoqueiro\b|\bnaturista\b|\bturista\b|\bviajante\b|\bvisitante\b", str(x))))
df_frame["Pessoas_por_atividade_transitória"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bobservador\b", str(x))))
df_frame["Pessoas_por_enquadramento_social"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcaipira\b|\bescravidão\b|\bescravo\b|\bmendigo\b|\bpedinte\b|\bsenhor\b", str(x))))
df_frame["Pessoas_por_etnia"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bafro-brasileiro\b|\bbranco\b|\bcigano\b|\bíndio\b|\bnegro\b", str(x))))
df_frame["Pessoas_por_origem"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\balagoano\b|\balemão\b|\bamericano\b|\baustríaco\b|\bboliviano\b|\bbrasileiro\b|\bbrasileiro\b|\bbritânico\b|\bcaliforniano\b|\bcarioca\b|\bescocês\b|\bespanhol\b|\bestrangeiro\b|\bET\b|\bfrancês\b|\bfrancesa\b|\bgrego\b|\bgringo\b|\bholandês\b|\binca\b|\bíndio\b|\binglês\b|\binglesa\b|\biraniano\b|\birlandês\b|\bitaliano\b|\bmexicano\b|\bnova\siorquino\b|\botomano\b|\bpersa\b|\bportuguês\b|\bturco\b", str(x))))
df_frame["Pessoas_por_religião"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bbatista\b|\bbudismo\b|\bbudista\b|\bcandomblé\b|\bcatolicismo\b|\bcatólico\b|\bcristão\b|\bespírita\b|\bespiritismo\b|\bfanático\b|\bfiel\b|\binfiel\b|\bislamismo\b|\bjudaísmo\b|\bjudeu\b|\blaico\b|\bmórmon\b|\bmulçumano\b|\bpagão\b|\bprotestante\b|\bprotestantismo\b|\bumbanda\b", str(x))))
df_frame["Pessoas_por_residência"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bvizinho\b", str(x))))
df_frame["Pessoas_por_vocação"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\babade\b|\bacadêmico\b|\badvogado\b|\bagente\sduplo\b|\bagente\b|\balfaiate\b|\baluno\b|\bambientalista\b|\bantropólogo\b|\bapóstolo\b|\barqueólogo\b|\barquiteto\b|\bartesão\b|\bartista\b|\bassistente\b|\bastrólogo\b|\bastronauta\b|\batendente\b|\batleta\b|\bator\b|\batriz\b|\bautor\b|\bbancário\b|\bbarman\b|\bbeato\b|\bbibliotecário\b|\bbiólogo\b|\bbispo\b|\bbombeiro\b|\bcabeleireiro\b|\bcaçador\b|\bcamareiro\b|\bcantor\b|\bcapitão\b|\bcardeal\b|\bcarpinteiro\b|\bcartógrafo\b|\bchefe\sde\scozinha\b|\bchefe\b|\bcientista\b|\bcirurgião\splástico\b|\bcomerciante\b|\bcomissário\sde\sbordo\b|\bconcursado\b|\bconsultor\b|\bcontador\b|\bcoreógrafo\b|\bcorrespondente\b|\bcostureiro\b|\bcoveiro\b|\bcozinheiro\b|\bcriado\b|\bdançarino\b|\bdedetizador\b|\bdelegado\b|\bdentista\b|\bdeputado\b|\bdesenhista\b|\bdesenvolvedor\sde\ssoftware\b|\bdesenvolvedor\sweb\b|\bdesigner\b|\bdetetive\sparticular\b|\bdetetive\b|\bdiácono\b|\bdiretor\b|\bdocente\b|\bdono\sde\scasa\b|\beditor\b|\beducador\b|\bempregado\sdoméstico\b|\bempresário\b|\benfermeiro\b|\bengenheiro\b|\bescritor\b|\bescriturário\b|\bespecialista\b|\bespeculador\b|\bespião\b|\besteticista\b|\bestudante\b|\bexecutivo\b|\bexplorador\b|\bextrativista\b|\bfabricante\b|\bfarmacêutico\b|\bfaxineiro\b|\bfazendeiro\b|\bfísico\b|\bfisioterapeuta\b|\bfotógrafo\b|\bfreira\b|\bfrentista\b|\bfuncionário\b|\bgaitista\b|\bgandula\b|\bgarçom\b|\bgarçonete\b|\bgarimpeiro\b|\bgerente\b|\bgovernador\b|\bguarda-costas\b|\bguia\sturístico\b|\bguia\b|\bhistoriador\b|\binstrumentista\b|\bjardineiro\b|\bjoalheiro\b|\bjornalista\b|\bjuiz\b|\blançador\b|\blinguista\b|\bmágico\b|\bmagistrado\b|\bmagnata\sdo\spetróleo\b|\bmalabarista\b|\bmanobrista\b|\bmaqueiro\b|\bmaquinista\b|\bmatemático\b|\bmecânico\b|\bmédico\b|\bmedium\b|\bmergulhador\b|\bmineiro\b|\bministro\b|\bmissionário\b|\bmonge\b|\bmonsenhor\b|\bmotoboy\b|\bmotorista\b|\bmúsico\b|\bneurocientista\b|\boficial\b|\boperador\sde\sturismo\b|\boperário\b|\bpadeiro\b|\bpadre\b|\bpalestrante\b|\bpalhaço\b|\bparaquedista\b|\bpastor\b|\bpedreiro\b|\bpesquisador\b|\bpiloto\b|\bpintor\b|\bpirata\b|\bpoeta\b|\bpolícia\scivil\b|\bpolícia\b|\bpolicial\sà\spaisana\b|\bpolicial\b|\bpolítico\b|\bporta-voz\b|\bprefeito\b|\bpresbítero\b|\bprodutor\b|\bprofessor\sde\sdança\sde\ssalão\b|\bprofessor\b|\bprofeta\b|\bprofissional\b|\bprofissional\b|\bprogramador\b|\bpsicólogo\b|\bpsiquiatra\b|\bquímico\b|\bradialista\b|\brecepcionista\b|\brecreador\b|\brecreador\b|\brei\smago\b|\brepórter\b|\bsacerdote\b|\bsecretário\b|\bsegurança\b|\bsenador\b|\bseringueiro\b|\bservente\b|\bservidor\b|\bsociologista\b|\bsocorrista\b|\bsoldado\b|\bsolista\b|\btabelião\b|\btaxista\b|\btécnico\b|\btoxicologista\b|\btrabalhador\b|\buniversitário\b|\bvendedor\b|\bveterinário\b|\bvoluntário\b|\bzelador\b", str(x))))
df_frame["Planejamento_do_turista"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bplanejamento\b|\bplanejar\b|\bpreparação\b", str(x))))
df_frame["Plantar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\barborizar\b", str(x))))
df_frame["Plantas"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bangiosperma\b|\bárvore\b|\bcoqueiro\b|\berva\sdaninha\b|\bflor\b|\bflora\b|\bfolha\b|\bfruto\b|\bgavinha\b|\bpalmeira\b|\bpau-brasil\b|\bpoligonáceo\b|\brosa\b|\btrepadeira\b|\btronco\b|\bvara\b", str(x))))
df_frame["Plenitude"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bvácuo\b", str(x))))
df_frame["Poder_aquisitivo"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bmultimilionário\b|\bpobre\b|\brico\b|\briqueza\b", str(x))))
df_frame["Polícia"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdelegacia\b|\bpolícia\b|\bpoliciamento\b", str(x))))
df_frame["Popularidade"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\blegal\b|\bmaneiro\b|\bpopular\b|\bpopularizar\b|\bquente\b", str(x))))
df_frame["Posição_distribuída"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\badornar\b|\balinhar\b|\bcercar\b|\bcobrir\b|\bdecoração\b|\bdecorar\b|\bdesarrumar\b|\bencapotar\b|\bencher\b|\benfeitar\b|\benvolver\b|\bincrustar\b|\blotar\b|\bornamentar\b|\bpavimentar\b|\bpontilhar\b|\brecobrir\b|\brevestir\b|\bsobre\b", str(x))))
df_frame["Posição_em_uma_escala"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\briqueza\b", str(x))))
df_frame["Posse"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bapresentar\b|\bbem\b|\bcontar\b|\bconter\b|\bcustódia\b|\bde\b|\bdesejo\b|\bdeter\b|\bdono\b|\bfalta\b|\bfaltar\b|\bfalto\b|\bficar\b|\bfruir\b|\binsuficiente\b|\bmanter\b|\bobter\b|\bpatenteado\b|\bpatentear\b|\bpertencer\b|\bpertences\b|\bposse\b|\bpossessão\b|\bpossuir\b|\bpropriedade\b|\bproprietário\b|\bpróprio\b|\bquerer\b|\bter\b", str(x))))
df_frame["Possibilidade"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdar\spara\b|\bdever\b|\bpoder\b|\bprovavelmente\b", str(x))))
df_frame["Possibilidades"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\balternativa\b|\bchance\b|\bdever\b|\bescolha\b|\bfuturo\b|\bmaneira\b|\bopção\b|\bou\b|\bpoder\b|\bpossível\b|\buso\b", str(x))))
df_frame["Prática"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bprática\b|\bpraticante\b|\bpraticar\b|\btreinamento\b|\btreinar\b|\btreino\b", str(x))))
df_frame["Precisão"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bexato\b|\bicônico\b|\bprecisão\b", str(x))))
df_frame["Precisar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bnecessidade\b|\bprecisar\b|\bter\sque\b", str(x))))
df_frame["Prédios"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\babrigo\b|\bacrópole\b|\baeroporto\b|\balojamento\b|\bambulatório\b|\banexo\b|\baquário\b|\barena\b|\barmazém\b|\barquivo\sestadual\b|\barranha-céu\b|\barranha-céus\b|\bauditório\b|\bbasílica\b|\bbiblioteca\b|\bcabana\b|\bcâmara\smunicipal\b|\bcampanário\b|\bcanil\b|\bcaravançará\b|\bcarvoaria\b|\bcasa\sda\sfazenda\b|\bcasa\sde\scampo\b|\bcasa\sde\scultura\b|\bcasa\sde\sjogos\b|\bcasa\sfluvial\b|\bcasa\b|\bcasebre\b|\bcastelo\b|\bcatedral\b|\bceleiro\b|\bcentro\scultural\b|\bcentro\sde\sarte\b|\bcentro\sde\sconferências\b|\bcentro\sde\sconvenções\b|\bcentro\sde\sdiversões\sinfantil\b|\bcentro\sde\seventos\b|\bcentro\sespírita\b|\bcentro\smédico\spúblico\b|\bcentro\smédico\b|\bcentro\stecnológico\b|\bchalé\b|\bcidadela\b|\bcine-theatro\b|\bcinema\b|\bcirco\b|\bclinica\sde\sreabilitação\b|\bclube\sde\sfutebol\b|\bclube\sde\stiro\b|\bcobertura\b|\bcompanhia\sde\ssaneamento\b|\bcompanhia\steatral\b|\bcondomínio\b|\bconservatório\b|\bconstrução\b|\bdelegacia\sde\spolícia\b|\bdepartamento\sde\spassaporte\b|\bdepartamento\sde\spolícia\sdo\sestado\b|\bdepartamento\sde\ssegurança\spública\b|\bdepartamento\suniversitário\b|\bdepartamento\b|\bdependência\b|\bdiscoteca\b|\bdomiciliar\b|\bdomicílio\b|\bdormitório\b|\bduplex\b|\bedifício\b|\bemergência\b|\bescola\sde\ssamba\b|\bescritório\sde\sempresa\b|\besquadrão\sde\sresgate\b|\bestábulo\b|\bestação\sde\srádio\b|\bestação\sde\stratamento\sde\ságua\b|\bestação\sferroviária\b|\bestádio\b|\bestrutura\b|\bestufa\b|\bfábrica\b|\bfarol\b|\bfazenda\b|\bfortaleza\b|\bforte\b|\bfortificação\b|\bgaleria\sde\sarte\b|\bgaleria\b|\bgalpão\b|\bgaragem\b|\bgazebo\b|\bguarda\smunicipal\b|\bhabitação\b|\bherdade\b|\bhipódromo\b|\bhospital\sgeral\b|\bhospital\sinfantil\b|\bhospital\smilitar\b|\bhospital\smunicipal\b|\bhospital\sparticular\b|\bhospital\spsiquiátrico\b|\bhospital\b|\biglu\b|\bigreja\sbatista\b|\bigreja\b|\bimobiliária\b|\bjardim\sbotânico\b|\bjardim\szoológico\b|\blar\b|\blivraria\b|\bmansão\b|\bmaternidade\b|\bmesquita\b|\bmosteiro\b|\bmuseu\sde\sarte\smoderna\b|\bmuseu\sde\sarte\b|\bmuseu\sdo\spatrimônio\b|\bmuseu\shistórico\slocal\b|\bmuseu\shistórico\b|\bmuseu\smarítimo\b|\bmuseu\smilitar\b|\bmuseu\b|\bpagode\b|\bpalácio\b|\bparque\sde\sdiversão\b|\bparque\stemático\b|\bpavilhão\sde\seventos\b|\bpavilhão\b|\bpensão\b|\bpetshop\b|\bpinacoteca\b|\bpirâmide\b|\bpoliclínica\b|\bposto\sde\ssaúde\scomunitário\b|\bpraça\b|\bprédio\b|\bprefeitura\b|\bpronto\satendimento\b|\bquartel\b|\bquiosque\b|\brepartição\spública\smunicipal\b|\bresidência\b|\brotunda\b|\bruína\b|\bsala\sde\sconcertos\b|\bsalão\sde\sdança\b|\bsalão\sde\sfesta\b|\bsalão\b|\bsauna\sgay\b|\bsauna\b|\bsecretaria\smunicipal\sde\ssegurança\b|\bsecretaria\smunicipal\sdo\smeio\sambiente\b|\bserviço\sde\ssaúde\smental\b|\bshopping\scenter\b|\bshopping\b|\bsinagoga\b|\bsolar\b|\bsupermercado\b|\bteatro\b|\btemplo\b|\btenda\síndia\b|\btenda\b|\btermas\b|\bterminal\b|\bteto\b|\btorre\b|\btriplex\b|\bvila\b|\bzoo\b|\bzoológico\b", str(x))))
df_frame["Preencher"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\babastecer\b|\blotar\b|\bpintar\b", str(x))))
df_frame["Preferência"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bpreferência\b|\bpreferir\b|\bpreterir\b", str(x))))
df_frame["Preferred_alternative_scenario"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bfavorito\b|\bpreferido\b", str(x))))
df_frame["Preliminares"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\baclimatação\b|\baclimatar\b|\bconcentração\b|\bconcentrar\b", str(x))))
df_frame["Prendedor"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\badesivo\b|\blacre\b", str(x))))
df_frame["Prender"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdomiciliar\b|\bprender\b|\bprisão\b", str(x))))
df_frame["Presença"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\barraigado\b|\bfaltar\b|\bmanifesto\b|\bpresente\b", str(x))))
df_frame["Presságio"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bprenunciar\b|\bprenúncio\b", str(x))))
df_frame["Prevaricação"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bmentir\b", str(x))))
df_frame["Primeiro_na_classificação"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bprincipalmente\b", str(x))))
df_frame["Probabilidade"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bprobabilidade\b", str(x))))
df_frame["Processo_continuar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcorrer\b|\bficar\b|\bproceder\b", str(x))))
df_frame["Processo_estado_completo"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcompleto\b", str(x))))
df_frame["Processo_iniciar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcomeçar\b|\berupção\b|\bestrear\b|\binaugurar\b|\bincipiente\b|\biniciar\b|\binício\b|\birromper\b|\bnascente\b|\bpassar\b|\bprincipiar\b|\bsurgimento\b", str(x))))
df_frame["Processo_nuclear"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bradioatividade\b", str(x))))
df_frame["Processo_parar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcessar\b", str(x))))
df_frame["Procurar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bbusca\b|\bbuscar\b|\bprocurado\b", str(x))))
df_frame["Profissionais_médicos"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\benfermeiro\b", str(x))))
df_frame["Progression"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdesenvolver\b|\bprogressivamente\b", str(x))))
df_frame["Prohibiting_or_licensing"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\badmitir\b|\baprovar\b|\bdeixar\b|\bpermitir\b|\bproibir\b", str(x))))
df_frame["Projeto"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bestratégia\b|\bplanejar\b|\bprograma\b|\bprojeto\b", str(x))))
df_frame["Propor_ideia"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bprojetar\b", str(x))))
df_frame["Propriedade_mental"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\babstrato\b|\bartista\b|\bbagunceiro\b|\bbom\b|\bbrilhante\b|\bcriatividade\b|\bcriativo\b|\bcuidado\b|\bcuidadoso\b|\bcurioso\b|\bdoido\b|\bexcepcional\b|\bfilosófico\b|\bgenial\b|\bgenialidade\b|\bhumorado\b|\binteligência\b|\blouco\b|\bresponsável\b|\bsensível\b|\bsolícito\b|\btalentoso\b|\btímido\b|\bvergonha\b", str(x))))
df_frame["Prosperar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcrescer\b", str(x))))
df_frame["Prova"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bteste\b", str(x))))
df_frame["Proximidade_graduável"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bafastado\b|\blonge\b|\bproximidade\b|\bpróximo\b|\bpróximo\b|\brente\b", str(x))))
df_frame["Proximidade_não_graduável"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\badiante\b|\bao\slado\sde\b|\batrás\sde\b|\bdebaixo\sde\b|\bem\sfrente\sde\b|\bembaixo\sde\b|\bperto\sde\b|\brente\sa\b|\bsob\b", str(x))))
df_frame["Publicar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\blançamento\b|\bpublicar\b", str(x))))
df_frame["Quadro_de_horários"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bagendamento\b|\bprogramação\b|\broteiro\b", str(x))))
df_frame["Qualidades_de_cor"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bmonocromático\b|\bpálido\b|\bvibrante\b", str(x))))
df_frame["Quantidade"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcento\b|\bdiverso\b|\bmais\sou\smenos\b|\bmais\b|\bmenos\b|\bmuito\b", str(x))))
df_frame["Quantidade_proporcional"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\baproximadamente\b|\baté\b|\bcerca\sde\b|\bmais\b|\bmuito\b|\bpouco\b|\bpouquinho\b|\bpraticamente\b|\bquase\b|\bvários\b", str(x))))
df_frame["Quebrar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\barrebentar\b|\bdescolar\b|\bquebrar\b|\bsoltar\b", str(x))))
df_frame["Queimar_com_fogo"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bfogo\b|\bfogueira\b", str(x))))
df_frame["Questionar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdúvida\b|\bperguntar\b|\bquestionamento\b|\bquestionar\b", str(x))))
df_frame["Razão"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bpor\sisso\b|\bpor\sque\b|\bpor\b|\bporquê\b|\bprincípio\b|\brazão\b|\bsentido\b", str(x))))
df_frame["Reações_da_torcida"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\baplaudir\b|\bapoiar\b|\bassistir\b|\bchorar\b|\bcomemoração\b|\bcomemorar\b|\bdespertar\b|\bexplodir\b|\bfrustração\b|\bgritar\b|\bobservar\b|\bovacionar\b|\btorcer\b|\bvaiar\b|\bver\b|\bvibrar\b", str(x))))
df_frame["Realização"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\balcançar\b|\bconquista\b|\brealização\b", str(x))))
df_frame["Recipientes"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bburaco\b|\bcaixa\b|\bcesta\b|\bchopeira\b|\bcompartimento\b|\bcopo\b|\bcumbuca\b|\bembalagem\b|\bgaiola\b|\bgarrafa\b|\bpá\b|\bpoço\b|\bpote\b|\bsacola\b|\bvaso\b", str(x))))
df_frame["Reclamar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bqueixa\b|\breclamação\b", str(x))))
df_frame["Rede"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\brede\b|\bweb\b", str(x))))
df_frame["Referir-se_pelo_nome"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bchamar\b|\bdesignação\b|\bendereçar\b|\bnome\b|\breferir\b", str(x))))
df_frame["Registro"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcópia\b|\bedição\b|\bobra\b", str(x))))
df_frame["Relação"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\be\b|\bligação\b|\brelação\b", str(x))))
df_frame["Relação_de_duração"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdurante\b|\bdurar\b|\bperdurar\b|\bpor\b", str(x))))
df_frame["Relação_de_perfilamento_interior"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\ba\b|\bdentre\b|\bdentro\sde\b|\bdentro\b|\bem\smeio\sa\b|\bem\b|\bentre\b|\bexterno\b|\bfora\b|\binterno\b|\bno\sinterior\sde\b|\bno\smeio\sde\b", str(x))))
df_frame["Relação_locativa"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bà\sfrente\sde\b|\bacima\sdo\ssolo\b|\bacima\b|\badjacente\b|\balém\sde\b|\balhures\b|\bali\b|\bao\slongo\sde\b|\baonde\b|\baqui\b|\baté\b|\batravés\sde\b|\bcá\b|\bcontinental\b|\bcontinental\b|\bdepois\b|\bdistante\b|\bem\stoda\sparte\b|\bem\stodo\b|\bem\b|\bembaixo\b|\bencontrar\b|\bentre\b|\benvolver\b|\bfora\b|\bfronteirar\b|\blá\b|\blonge\b|\bno\sar\b|\bno\stopo\b|\bonde\b|\bonipresente\b|\bpara\scima\b|\bpara\b|\bparalelo\sa\b|\bperto\b|\bremoto\b|\bsobre\b|\bsubterrâneo\b", str(x))))
df_frame["Relação_locativa_direcional"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\babaixo\b|\bacima\b|\bem\scima\b|\bem\sfrente\b|\bfora\sde\b|\bleste\b|\bleste\b|\bnordeste\b|\bnordeste\b|\bnoroeste\b|\bnoroeste\b|\bnorte\b|\bnorte\b|\boeste\b|\boeste\b|\bsudeste\b|\bsudeste\b|\bsudoeste\b|\bsudoeste\b|\bsul\b|\bsul\b", str(x))))
df_frame["Relações_pessoais"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bacompanhante\b|\badultério\b|\bafastado\b|\bamado\b|\bamante\b|\bamigar-se\b|\bamigo\b|\bamizade\b|\bamoroso\b|\barrumar\b|\bcamarada\b|\bcasado\b|\bcasal\b|\bcasamento\b|\bcaso\b|\bcoabitação\b|\bcoabitar\b|\bcolega\b|\bcompanheirismo\b|\bcompanheiro\b|\bconjugal\b|\bcortejar\b|\bdivorciado\b|\bdivorciado\b|\bdormir\scom\b|\benamorada\b|\bencontrar\b|\benviuvar\b|\besposa\b|\besposo\b|\besposo\b|\bfamília\b|\bfamiliar\b|\bhomoafetivo\b|\bíntimo\b|\bmarido\b|\bnamorado\b|\bnamorado\b|\bnamorador\b|\bnamoro\b|\bnoiva\b|\bnoivado\b|\bnoivo\b|\bnoivo\b|\bpaquera\b|\bparceiro\b|\bparceria\b|\bpegação\b|\bpretendente\b|\brameira\b|\brelação\b|\brelacionamento\b|\bromance\b|\bsolteirão\b|\bsolteiro\b|\bsolteirona\b|\btérmino\b|\btraição\b|\bviúva\b|\bviúvo\b|\bviúvo\b", str(x))))
df_frame["Remover"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdedetização\b|\bextrair\b|\blavagem\b|\blavar\b|\bpré-lavagem\b|\bremoção\b|\bremover\b|\bretirar\b|\btirar\b", str(x))))
df_frame["Reparação"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcompensar\b", str(x))))
df_frame["Representação"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bpontuar\b|\bselo\b|\bsimbolismo\b|\bsimbolo\b", str(x))))
df_frame["Representantes"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\brepresentante\b", str(x))))
df_frame["Request_entity"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bpedido\b|\bpedir\b|\bsolicitação\b|\bsolicitar\b", str(x))))
df_frame["Resgatar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bresgatar\b|\bsalvar\b", str(x))))
df_frame["Residência"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bacampado\b|\bacampamento\b|\bacampar\b|\bcampista\b|\bcolega\sde\squarto\b|\bficar\b|\bhabitado\b|\bhabitante\b|\bhabitar\b|\bhospedar\b|\blocatário\b|\bmorador\b|\bmorar\b|\bocupante\b|\bocupar\b|\bradicar\b|\bresidente\b|\bresidir\b|\bviver\b", str(x))))
df_frame["Resolver_problema"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bconserto\b|\bresolver\b", str(x))))
df_frame["Respirar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bofegar\b|\brespiração\b|\bsuspirar\b", str(x))))
df_frame["Responsibility"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bresponsável\b", str(x))))
df_frame["Resto"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bresto\b", str(x))))
df_frame["Restringir_movimento"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcativar\b|\breclusão\b", str(x))))
df_frame["Resumir"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bresumir\b", str(x))))
df_frame["Retaining"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bguardar\b|\brealizar\b", str(x))))
df_frame["Retirar_da_participação"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bwithdraw\b", str(x))))
df_frame["Reunir-se"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bconcentrar\b|\benglobar\b|\breencontro\b|\breunir\b", str(x))))
df_frame["Revelar_secredo"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bconfessar\b|\bdesvendar\b", str(x))))
df_frame["Roubo"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\babstração\b|\babstrair\b|\bapropriação\sindevida\b|\barrastão\b|\bassalto\b|\bbatedor\sde\scarteira\b|\bbater\scarteira\b|\bde\sdedos\sleves\b|\bdesviar\b|\bdesvio\b|\bfurtar\b|\bfurto\b|\bladrão\b|\bpropina\b|\broubado\b|\broubar\b|\broubo\b", str(x))))
df_frame["Sair_de_um_lugar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\babandonar\b|\bdeixar\b|\bdemitir\b|\bdeserção\b|\bdesertar\b|\bdesocupar\b|\bemigração\b|\bemigrante\b|\bemigrar\b|\bexilado\b|\bfugir\b|\bfugitivo\b|\binvestir\b|\bremover\b|\bretirada\b|\bretirar\b|\bseparar\b", str(x))))
df_frame["Sair_do_emprego"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\baposentar\b", str(x))))
df_frame["Sanções"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\badvertência\b|\bcartão\samarelo\b|\bcartão\spreto\b|\bcartão\svermelho\b|\bcartão\b|\bdesclassificação\b|\bdesqualificação\b|\bexclusão\b|\bexpulsão\b|\bimpedimento\b|\binelegibilidade\b|\blance-livre\b|\blateral\b|\bman-up\b|\bpasse\sà\sfrente\b|\bpassividade\b|\bpena\b|\bpenalidade\b|\bpênalti\b|\bsanção\b|\bsuspensão\b|\btiro\slivre\b|\bvantagem\b", str(x))))
df_frame["Satisfazer"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\batender\b|\bsatisfazer\b", str(x))))
df_frame["Scheduling"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bagendamento\b|\bagendar\b", str(x))))
df_frame["Sediar_evento"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\borganizar\b|\bpaís\ssede\b|\breceber\b|\bsede\b|\bsediar\b", str(x))))
df_frame["Semelhança"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcomum\b", str(x))))
df_frame["Sensação"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bmau-cheiro\b|\bperfume\b|\bsabor\b|\bsensação\b|\bsentir\b|\bvista\b", str(x))))
df_frame["Sentir"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bastral\b|\bauto-estima\b|\bcalma\b|\bconsolo\b|\bemoção\b|\bentusiasmo\b|\bexperienciar\b|\bira\b|\borgulho\b|\bpaz\b|\bprazer\b|\bsensação\b|\bsentimento\b|\bsentir\b|\btranquilizar\b", str(x))))
df_frame["Sequência"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bordem\b|\bseguido\b|\bsequência\b|\bsérie\b|\búltimo\b", str(x))))
df_frame["Serviço_em_alimentação"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bacepipe\b|\bacompanhamento\b|\bacompanhar\b|\baperitivo\b|\bcafé\sda\smanhã\b|\bcafé\b|\bcoffee\sbreak\b|\bcoffee-break\b|\bentrada\b|\bpetisco\b|\bpetit-déjeuner\b|\bprato\sprincipal\b|\bserviço\b|\bservir\b|\bsobremesa\b|\btira-gosto\b", str(x))))
df_frame["Serviço_turístico"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\balmoço\b|\bcafé\sda\smanhã\b|\bcity\stour\b|\bdispor\b|\bjantar\b|\blanche\b|\bmeia\spensão\b|\bpacote\sturístico\b|\bpacote\b|\bpensão\scompleta\b|\bpetit-déjeuner\b|\brefeição\b|\bserviço\b|\btraslado\b", str(x))))
df_frame["Serviço_turístico_comprar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcomprar\b|\bcontratar\b", str(x))))
df_frame["Serviço_turístico_pagar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bpagamento\b|\bpagar\b", str(x))))
df_frame["Serviço_turístico_receber"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcobrar\b", str(x))))
df_frame["Serviço_turístico_reservar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\breserva\b|\breservar\b", str(x))))
df_frame["Serviço_turístico_vender"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bagência\b|\bdisponibilizar\b|\boferecer\b|\boperador\b|\bproporcionar\b|\bter\b|\bvender\b", str(x))))
df_frame["Ser_afetado"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bafetar\b|\bter\b", str(x))))
df_frame["Ser_apto"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bapto\b|\bsuficiente\b", str(x))))
df_frame["Ser_empregado"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\btrabalhar\b", str(x))))
df_frame["Ser_localizado"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bencontrar\b|\bestar\b|\bficar\b|\blocalizado\b|\blocalizar\b|\bparadeiro\b|\bsituado\b|\bsituar\b", str(x))))
df_frame["Ser_necessário"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bessencial\b|\bexigido\b|\bindispensável\b|\bnecessário\b|\bnecessidade\b|\bnecessitar\b|\brequerido\b|\brequerimento\b", str(x))))
df_frame["Ser_nomeado"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bapelido\b|\bchamado\b|\bconhecido\scomo\b|\bconhecido\b", str(x))))
df_frame["Ser_obrigado"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bafazer\b|\bdever\b|\bobrigar\b|\btarefa\b", str(x))))
df_frame["Ser_obrigatório"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcumprir\b|\bdever\b|\bexigência\b|\bfundamental\b|\bimprescindível\b|\bindispensável\b|\bmandatório\b|\bobrigar\b|\bobrigatoriamente\b|\bobrigatório\b|\brequisito\b|\bvital\b", str(x))))
df_frame["Ser_operacional"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\badiantar\b|\bfunção\b|\bfuncional\b|\bfuncionar\b|\boperacional\b|\bquebrado\b", str(x))))
df_frame["Ser_relevante"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\brelevante\b", str(x))))
df_frame["Sex"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcruzamento\b|\bcruzar\b", str(x))))
df_frame["Sharing"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcompartilhar\b|\bcompartilhável\b|\bdividido\b|\bdividir\b", str(x))))
df_frame["Simultaneidade"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bco-ocorrência\b|\bco-ocorrer\b|\bcoincidir\b|\bconcorrência\b|\bconcorrente\b|\bconjunção\b|\bsimultaneamente\b|\bsimultaneidade\b|\bsimultâneo\b", str(x))))
df_frame["Sinal"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bindicar\b", str(x))))
df_frame["Sinceridade"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdissimulado\b|\boblíquo\b|\bsincero\b", str(x))))
df_frame["Sistema"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\besquema\b|\bestrutura\b|\bsistema\b", str(x))))
df_frame["Sobreviver"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bviver\b", str(x))))
df_frame["Sofrer_mudança"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\balternar-se\b|\bdesabar\b|\bguinar\b|\binexorável\b|\binstável\b|\bir\b|\bmudança\b|\bmudar\b|\boscilar\b|\btransição\b|\btransição\b|\btroca\b|\btrocar\b|\bvirar\b|\bvoltar\b", str(x))))
df_frame["Sofrer_transformação"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bconversão\b|\bconverter\b|\bdeixar\b|\btornar\b|\btransformar\b|\btransição\b|\btransmutação\b|\btransmutar\b|\btransubstanciação\b|\btransubstanciar\b", str(x))))
df_frame["Sons"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bacústica\b|\bbarulhento\b|\bestalo\b|\bsom\b|\bvoz\b", str(x))))
df_frame["Spatial_contact"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bao\slado\b|\bcontato\b|\bcontra\b|\bem\scima\b|\bem\b|\bfazer\scontato\b|\bíngreme\b|\bno\stopo\b|\bsobre\b|\btangente\b|\btocar\b", str(x))))
df_frame["Status_de_sigilo"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bsegredo\b", str(x))))
df_frame["Sub-região_temporal"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcomeço\b|\bfim\b|\bfinal\b|\binício\b|\binício\b|\bintermediário\b|\bmarço\b|\bmeio\b|\bposterior\b|\bprévio\b|\bprincípio\b|\btardio\b|\bvirada\b", str(x))))
df_frame["Subordinados_e_superiores"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bhierarquia\b", str(x))))
df_frame["Subpartes_de_artefato"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcabo\b|\bHD\b|\btela\b", str(x))))
df_frame["Subpartes_de_instalações_esportivas"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\balto-falante\b|\balvo\b|\barco\sde\strês\spontos\b|\bárea\sde\saterrissagem\b|\bárea\sde\slançamento\b|\bárea\sde\slance\slivre\b|\bárea\sde\squeda\b|\bárea\sde\ssaque\b|\bárea\sinterna\sdo\sgol\b|\bárea\srestritiva\b|\bárea\stécnica\b|\bárea\b|\baro\b|\barquibancada\b|\bassento\b|\bbaliza\b|\bbanco\sde\sreservas\b|\bbanco\b|\bbandeira\sde\sescanteio\b|\bbandeira\b|\bbanheiro\b|\bbarra\b|\bbarreira\b|\bbase\sdo\srebatedor\b|\bbilheteria\b|\bbloco\sde\spartida\b|\bbloco\sinicial\b|\bcabine\sde\stransmissão\b|\bcaixa\sde\sareia\b|\bcaixa\sde\saterrissagem\b|\bcaixa\sdo\stécnico\b|\bcamarote\b|\bcesta\b|\bcírculo\scentral\b|\bcírculo\sde\sdisparo\b|\bcírculo\sde\slançamento\b|\bcolchão\b|\bencaixe\b|\bescanteio\b|\bfaixa\b|\bfosso\scom\ságua\b|\bfosso\sde\ságua\b|\bgaiola\b|\bgarrafão\b|\bgol\b|\bgramado\b|\bgrande\sárea\b|\bgrande-área\b|\blateral\b|\blimite\sde\scampo\sexterno\b|\blimite\sde\scampo\sinterno\b|\blinha\scentral\b|\blinha\sda\sgrande\sárea\b|\blinha\sda\spequena\sárea\b|\blinha\sde\sarremesso\slateral\b|\blinha\sde\sarremesso\slivre\b|\blinha\sde\sataque\b|\blinha\sde\sdez\smetros\b|\blinha\sde\sfalta\b|\blinha\sde\sfundo\b|\blinha\sde\sgol\b|\blinha\sde\slance\slivre\b|\blinha\sde\smeio-campo\b|\blinha\sde\srestrição\sdo\sgoleiro\b|\blinha\sde\ssaque\b|\blinha\sde\sseis\smetros\b|\blinha\sde\ssete\smetros\b|\blinha\sde\stiro\slivre\b|\blinha\sde\strês\spontos\b|\blinha\sde\svinte\se\sdois\smetros\b|\blinha\sde\svinte\se\strês\smetros\b|\blinha\sde\szona\smorta\b|\blinha\slateral\b|\bmarca\scentral\b|\bmarca\sde\spênalti\b|\bmastro\sde\sfalta\b|\bmeia-lua\b|\bmeio\sde\scampo\b|\bmeio-campo\b|\bmesa\b|\bmonte\sdo\slançador\b|\bobstáculo\b|\bpequena\sárea\b|\bpiscina\b|\bpista\sde\salerta\b|\bpista\b|\bplacar\seletrônico\b|\bplacar\b|\bplataforma\b|\bposte\b|\bprimeira\sbase\b|\bquarta\sbase\b|\braia\b|\brampa\b|\brede\b|\bringue\b|\bsarrafo\b|\bsegunda\sbase\b|\bsetor\b|\bstriking\scircle\b|\btabela\b|\btablado\b|\btábua\sde\simpulsão\b|\btábua\sde\ssalto\b|\btapete\b|\btatame\b|\btelão\b|\bterceira\sbase\b|\btoalete\b|\btoilette\b|\btrave\b|\btravessão\b|\btry\sline\b|\bzona\sde\saterrissagem\b|\bzona\sde\sdois\spontos\b|\bzona\sde\slançamento\b|\bzona\sde\spassagem\b|\bzona\sde\sserviço\b|\bzona\sde\strás\b|\bzona\sfrontal\b|\bzona\slivre\b", str(x))))
df_frame["Subpartes_de_prédios"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bacademia\b|\badega\b|\bala\b|\baltar\b|\bandar\b|\bante-sala\b|\bantecâmara\b|\bapartamento\b|\bárea\sde\slazer\b|\bárea\sde\sserviço\b|\bárea\b|\batelier\b|\bbanheiro\sprivativo\b|\bbanheiro\b|\bbar\b|\bberçário\b|\bbrinquedoteca\b|\bcâmara\b|\bcampanário\b|\bcantina\b|\bcapela\b|\bcarrinho\b|\bcatacumba\b|\bcela\b|\bcerca\b|\bchancelaria\b|\bchão\b|\bcloset\b|\bcômodo\b|\bcopa\b|\bcorredor\b|\bcozinha\b|\bdepósito\sde\sbagagem\b|\bdepósito\b|\bdespensa\b|\belevador\spanorâmico\b|\belevador\b|\bescada\b|\bescritório\b|\bestúdio\b|\blavabo\b|\blavanderia\b|\blavatório\b|\bludoteca\b|\boficina\b|\bpiscina\b|\bporão\b|\bpresbitério\b|\bquadra\sde\stênis\b|\bquadra\b|\bquarto\sde\shóspedes\b|\bquarto\sprincipal\b|\bquarto\b|\bquitinete\b|\brefeitório\b|\brefúgio\b|\brestaurante\b|\bsacada\b|\bsacristia\b|\bsaguão\b|\bsala\sde\sestar\b|\bsala\sde\sestudos\b|\bsala\sde\sjantar\b|\bsala\sde\sTV\b|\bsala\b|\bsalão\b|\bsauna\b|\bsolário\b|\bsótão\b|\bspa\b|\bsubsolo\b|\bterraço\b|\btérreo\b|\bteto\b|\btoalete\b|\btoilette\b|\btorre\b|\bvaranda\b|\bvestiário\b|\bvestíbulo\b", str(x))))
df_frame["Subpartes_de_veículos"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bmotor\b|\bvolante\b", str(x))))
df_frame["Substâncias"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bágua\b|\bamaciante\b|\bar\b|\bareia\b|\bátomo\b|\bborracha\b|\bbronze\b|\bcascalho\b|\bdetergente\b|\bdiamante\b|\belétron\b|\bferro\b|\bincenso\b|\blágrima\b|\blátex\b|\bmadeira\b|\bmassinha\b|\bmatéria\b|\bmaterial\b|\bmetal\b|\bmineral\b|\bminério\b|\bmirra\b|\bmolécula\b|\bmonazite\b|\borgânico\b|\bouro\b|\bpapel\b|\bpedra\b|\bpirita\b|\bplástico\b|\bpoeira\b|\bpoluição\b|\bsangue\b|\bsubstância\b|\bterra\b", str(x))))
df_frame["Substância_por_fase"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\blíquido\b|\bsólido\b|\bviscoso\b", str(x))))
df_frame["Sucesso_ou_falha"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bbem\ssucedido\b|\bconseguir\b|\bfracassar\b|\bperder\sgol\b|\bperder\b|\brodar\b", str(x))))
df_frame["Suficiência"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\babundante\b|\badequação\b|\badequadamente\b|\badequado\b|\bamplo\b|\bbastante\b|\bbastar\b|\bdemais\b|\bfartura\b|\binadequação\b|\binadequadamente\b|\binadequado\b|\binsuficiência\b|\binsuficiente\b|\binsuficientemente\b|\bsem\b|\bser\ssuficiente\b|\bservir\b|\bsuficiente\b|\bsuficientemente\b|\btanto\b", str(x))))
df_frame["Superar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bpassar\b|\bultrapassar\b", str(x))))
df_frame["Tamanho"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\balto\b|\bamplo\b|\bcolossal\b|\bdiminuto\b|\benorme\b|\bespaçoso\b|\bestrondoso\b|\bgigante\b|\bgigantesco\b|\bgrande\b|\bimenso\b|\bimensurável\b|\bínfimo\b|\binfinitesimal\b|\bjumbo\b|\bligeiro\b|\bliliputiano\b|\bmaior\b|\bmassivo\b|\bmediano\b|\bmédio\b|\bmeio-metro\b|\bmenor\b|\bmini\b|\bminiatura\b|\bminúsculo\b|\bpequeno\b|\bsubstancial\b|\bvolumoso\b", str(x))))
df_frame["Temer"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bamedrontado\b|\bapreensão\b|\bassustado\b|\baterrorizado\b|\blevar\ssusto\b|\bmedo\b|\bnervoso\b|\bpavor\b|\bsurtado\b|\bterror\b|\bviver\scom\smedo\b", str(x))))
df_frame["Temeridade"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bpaciente\b", str(x))))
df_frame["Temperatura"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcongelante\b|\bescaldante\b|\bfresco\b|\bfrio\b|\bgelado\b|\bmorno\b|\bquente\b|\btemperatura\b|\btépido\b", str(x))))
df_frame["Temperatura_ambiente"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\babafado\b|\bcongelante\b|\bfresco\b|\bfrio\b|\bfrio\b|\bmorno\b|\bquente\b|\btemperatura\b", str(x))))
df_frame["Tempo_período_de_ação"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdemorado\b|\bdemorar\b|\bdia\sa\sdia\b|\bdia\sa\sdia\b", str(x))))
df_frame["Tempo_relativo"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\banterior\b|\bantigamente\b|\bantiguidade\b|\batrasado\b|\batualidade\b|\bcedo\b|\bconsecutivo\b|\bdepois\b|\benquanto\b|\bpassado\b|\bpróximo\b|\brecente\b|\bseguido\b|\btarde\b|\búltimo\b", str(x))))
df_frame["Tentar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdedicação\b|\bentregar\b|\besforçar\b|\btentar\b|\btentativa\b", str(x))))
df_frame["Tentar_persuadir"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\baconselhar\b|\bconselho\b|\bpor\scontra\sa\sparede\b|\bpressionar\b|\brecomendação\b|\brecomendado\b|\brecomendar\b", str(x))))
df_frame["Terminar_competição"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bperdedor\b|\bvencedor\b|\bvitória\b", str(x))))
df_frame["Ter_associado"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcom\b", str(x))))
df_frame["Ter_ou_carecer_de_acesso"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bacesso\b", str(x))))
df_frame["Ter_visita"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bconvidar\b", str(x))))
df_frame["Teste_de_operação"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\btestar\b", str(x))))
df_frame["Texto"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bapresentação\b|\bartigo\b|\bcardápio\b|\bcartão\b|\bcartaz\b|\bcatálogo\b|\bcriação\splástica\b|\bdesenho\b|\bebook\b|\bfrase\b|\bguia\b|\bhistória\b|\binstrução\b|\bjornal\b|\blenda\b|\blinguagem\b|\blista\b|\bliteratura\b|\blivro\b|\bmapa\b|\bmenu\b|\bnarrativa\b|\bobra\b|\bpoema\b|\bpost\b|\bpostal\b|\bprosa\b|\bprovérbio\b|\bquadrinho\b|\brascunho\b|\breportagem\b|\bsentença\b|\bteatro\b|\btexto\b", str(x))))
df_frame["Texto_criação"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bautobiografia\b|\bbiografia\b|\bcriação\stextual\b|\bcrônica\b|\bescrever\b|\blegendar\b|\bpoesia\b|\breportagem\b|\btexto\b", str(x))))
df_frame["Tipicalidade"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcurioso\b|\bespecífico\b|\bestranho\b|\bparticular\b|\bprecioso\b|\btipicamente\b|\btípico\b", str(x))))
df_frame["Tipo"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bde\b|\bespécie\b|\bmodo\b|\btipo\b", str(x))))
df_frame["Tomar_forma"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcurvar\b|\bdobrar\b|\benrolar\b|\btorcer\b", str(x))))
df_frame["Tomar_partido"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\ba\sfavor\b|\bcontra\b|\bcontra\b|\bluta\b", str(x))))
df_frame["Tópico"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\ba\srespeito\sde\b|\babordar\b|\bassunto\b|\bponto\b|\bsobre\b|\btema\b|\btópico\b", str(x))))
df_frame["Torcida"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bespectador\b|\bfã\b|\bplateia\b|\btelespectador\b|\btorcedor\b|\btorcida\b", str(x))))
df_frame["Tornar-se"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bficar\b", str(x))))
df_frame["Tornar-se_consciente"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bachar\b|\bciente\b|\bdescoberta\b|\bdescoberto\b|\bdescobrimento\b|\bdescobrir\b|\bdesmascarar\b|\bdetectar\b|\bdiscernir\b|\bdizer\b|\bencontrar\b|\bespionar\b|\bnotar\b|\bobservar\b|\bolhar\b|\bperceber\b|\breconhecer\b|\breconhecimento\b|\bregistrar\b", str(x))))
df_frame["Tornar-se_membro"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bingresso\b", str(x))))
df_frame["Tornar-se_não-operacional"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bfurar\b|\bquebrar\b|\bqueimar\b|\brasgar\b|\btrincar\b", str(x))))
df_frame["Tornar-se_separado"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcompor\b|\bdividido\b|\bdividir\b|\bespalhar\b|\bseparar\b", str(x))))
df_frame["Tornar-se_solto"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bdescolar\b", str(x))))
df_frame["Tornar-se_visível"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\baparição\b|\bdespontar\b", str(x))))
df_frame["Torneio_de_eliminação"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcabeça\sde\schave\b|\bchave\b|\bconquistar\svaga\b|\bdisputa\sdo\sterceiro\slugar\b|\bdisputar\b|\beliminação\b|\beliminar\b|\beliminatórias\b|\bfase\b|\bfinal\b|\bgrupo\b|\bmata-mata\b|\boitavas\sde\sfinal\b|\boitavas\b|\bpassar\b|\bquartas\sde\sfinal\b|\bquartas\b|\brepescagem\b|\bseguir\b|\bsemi\b|\bsemifinais\b|\bsistema\seliminatório\b|\btirar\b", str(x))))
df_frame["Totalizar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcompletar\b|\bno\stotal\b|\btotalizar\b", str(x))))
df_frame["Trabalhar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcarreira\b|\bdar\sduro\b|\bemprego\b|\btrabalhar\b|\btrabalho\b", str(x))))
df_frame["Traços_de_personalidade"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcachorro\b|\bcompetitividade\b|\bcoragem\b|\bcurioso\b|\bdescompromisso\b|\bforte\b|\bganancioso\b|\bhipócrita\b|\bmodesto\b|\bpersonalidade\b|\bpretensioso\b|\bresponsável\b|\bsensibilidade\b|\bteimoso\b|\btímido\b|\bvaidade\b|\bvalente\b", str(x))))
df_frame["Traduzir"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\btraduzido\b", str(x))))
df_frame["Trajar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcom\b|\broupa\b|\busar\b", str(x))))
df_frame["Transferir"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\brepassar\b|\btransferir\b", str(x))))
df_frame["Transição_para_uma_qualidade"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bficar\b", str(x))))
df_frame["Transição_para_uma_situação"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bficar\b|\bvir\b", str(x))))
df_frame["Transição_para_um_estado"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bcrescer\b|\bficar\b|\bvir\b", str(x))))
df_frame["Transportar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\barrastar\b|\bbuscar\b|\bcarregar\b|\bconduzir\b|\blevar\b|\bmóvel\b|\bpassageiro\b|\bpegar\b|\bportátil\b|\btaxímetro\b|\btransportar\b|\btransporte\b|\btrazer\b", str(x))))
df_frame["Transporte"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\baéreo\b|\baeroporto\sregional\b|\baeroporto\b|\bagência\sde\saluguel\sde\scarros\b|\bagência\sde\sviagens\sde\shelicóptero\b|\bbicicletaria\b|\bcadeira\sde\srodas\b|\bestação\sde\smetrô\b|\bestação\b|\bestacionamento\b|\bitinerário\b|\blinha\b|\blocadora\sde\sveículos\b|\bmarina\b|\bparada\b|\bpassagem\b|\bponto\sde\sônibus\b|\bponto\sde\stáxi\b|\bponto\b|\bporto\b|\brodoviária\b|\brodoviário\b|\brota\b|\bserviço\sde\stransporte\b|\btrânsito\b|\btransporte\spúblico\b|\btransporte\b|\bvoo\b", str(x))))
df_frame["Tratar_e_maltratar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\btratar\b", str(x))))
df_frame["Trocar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bescambo\b", str(x))))
df_frame["Turismo_de_atração"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bagroturismo\b|\batração\sturística\b|\batração\b|\batrativo\b|\becoturismo\b|\bexibição\b|\bingresso\b|\bmeia-entrada\b", str(x))))
df_frame["Turismo_de_evento"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bacontecimento\b|\bevento\b|\bingresso\b|\bmeia-entrada\b", str(x))))
df_frame["Unidade_calêndrica"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bà\snoite\b|\babril\b|\bagosto\b|\bano\b|\bdécada\b|\bdezembro\b|\bdia\b|\bépoca\b|\bferiado\b|\bfim\sde\ssemana\b|\bfim\sde\starde\b|\bfinal\sdo\sdia\b|\bhoje\b|\bhoje\b|\bjaneiro\b|\bjunho\b|\bmadrugada\b|\bmaio\b|\bmanhã\b|\bnoite\b|\bnoturno\b|\bnovembro\b|\bontem\b|\boutono\b|\boutubro\b|\bperíodo\b|\bpernoite\b|\bpôr-do-sol\b|\bquinta-feira\b|\bsábado\b|\bséculo\b|\bsegunda-feira\b|\bsemana\b|\bsexta-feira\b|\btarde\b|\btemporada\b|\bterça-feira\b|\bverão\b", str(x))))
df_frame["Usar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\baplicar\b|\baproveitar\b|\bempregar\b|\bexploração\b|\breutilizar\b|\busado\b|\busar\b|\buso\b|\butilizado\b|\butilizar\b", str(x))))
df_frame["Usar_recurso"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bgastar\b|\busar\b", str(x))))
df_frame["Utensílios"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bbandeja\b|\bcolher\b|\bespátula\b|\bpanela\b|\bporcelana\b|\btigela\b|\butensílio\b", str(x))))
df_frame["Utilidade"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bbom\b|\befetivo\b|\bespetacular\b|\besplêndido\b|\bexcelente\b|\bfantástico\b|\bideal\b|\binefetivo\b|\bmaravilhoso\b|\bótimo\b|\bperfeito\b|\bpreciso\b|\bprestativo\b|\brecurso\b|\bservir\b|\bsoberbo\b|\bútil\b|\butilidade\b|\bvaler\b|\bvalioso\b|\bvalor\b", str(x))))
df_frame["Valor_extremo"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\binsignificante\b|\bmínimo\b", str(x))))
df_frame["Veículo"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\baeronave\b|\bambulância\b|\bautomóvel\b|\bavião\b|\bbalsa\b|\bbarco\b|\bbicicleta\b|\bbonde\b|\bbuggy\b|\bcaiaque\b|\bcaminhão\spipa\b|\bcaminhão\b|\bcanoa\b|\bcaravela\b|\bcarro\b|\bcarroça\b|\bcarruagem\b|\bcomboio\b|\bconversível\b|\bcruzeiro\b|\bescuna\b|\bhelicóptero\b|\biate\b|\blimusine\b|\bminivan\b|\bmoto\b|\bnau\b|\bnavio\b|\bônibus\b|\bpatinete\b|\bpedalinho\b|\bpicape\b|\bquadricicleta\b|\bquadriciclo\b|\bscooter\b|\bsedan\b|\bsubmarino\b|\btanque\b|\btáxi\b|\btobogã\b|\btrem\b|\btricíclo\b|\bvalsa\b|\bvan\b|\bveículo\b", str(x))))
df_frame["Veículo_aterrissagem"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\baterrissar\b|\bpousar\b", str(x))))
df_frame["Vencer_o_oponente"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bperder\b", str(x))))
df_frame["Veredito"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bconvicção\b", str(x))))
df_frame["Verificação Verification"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bidentificar\b", str(x))))
df_frame["Versão_sequência"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\báspero\b|\bfinal\b|\bfuncional\b|\bgrosseiro\b|\binicial\b|\boriginal\b|\bpreliminar\b|\brascunho\b", str(x))))
df_frame["Vestuário"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bbermuda\b|\bbermudas\b|\bblusa\b|\bbraçadeira\b|\bcalça\b|\bcalçado\b|\bcalção\b|\bcamisa\b|\bcamiseta\b|\bcasaco\b|\bchapéu\b|\bchinelo\b|\bchuteira\b|\bcolete\b|\bfaixa\b|\bjaqueta\b|\bluva\b|\bmaiô\b|\bmalha\b|\bmeia\b|\bmeião\b|\bmoletom\b|\bnudismo\b|\bpaletó\b|\bquimono\b|\broupa\b|\bsaia\b|\bsamba-canção\b|\bsapatilha\b|\bsapato\b|\bseda\b|\bshort\b|\bsunga\b|\btecido\b|\btênis\b|\btoalha\b|\btouca\b|\buniforme\b|\buwagi\b|\bzubon\b", str(x))))
df_frame["Vetor_tempo"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\ba\spartir\sde\b|\ba\spropósito\b|\bainda\b|\banteriormente\b|\bantes\b|\bapós\b|\bassim\spor\sdiante\b|\baté\b|\batrás\b|\bdepois\b|\bdesde\b|\bem\sseguida\b|\benfim\b|\beventualmente\b|\bfinalmente\b|\bjá\b|\blogo\b|\bna\shora\b|\bpor\súltimo\b", str(x))))
df_frame["Viagem"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bexcursão\b|\bexpedição\b|\bfazer\sum\stour\b|\bfuga\b|\bitinerante\b|\bjornada\b|\bodisseia\b|\bperegrinação\b|\bsafari\b|\btour\b|\bviagem\b|\bviajante\b|\bviajar\b", str(x))))
df_frame["Vias"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\balameda\b|\bartéria\b|\bauto-estrada\b|\bavenida\b|\bbeco\ssem\ssaída\b|\bbulevar\b|\bcalçada\b|\bcalçadão\b|\bcaminho\sde\sacesso\b|\bcaminho\b|\bcurso\b|\besquina\b|\bestrada\b|\bfaixa\b|\bferrovia\b|\bfila\b|\bgaleria\b|\blinha\b|\bpassagem\ssubterrânea\b|\bpercurso\b|\bpista\b|\bponte\b|\bramal\b|\brodovia\b|\brota\b|\brua\b|\btrajeto\b|\btrilha\b|\btrilho\b|\btúnel\b|\bvereda\b|\bvia\sexpressa\b|\bviaduto\b", str(x))))
df_frame["Vício"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bviciado\b|\bviciado\b", str(x))))
df_frame["Violência"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bbrutalidade\b|\bselvageria\b|\bviolência\b", str(x))))
df_frame["Vir_a_existir"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\baparecer\b|\bdesenvolver\b|\bemergir\b|\bfeito\b|\bflorescer\b|\bmaterializar\b|\bnascer\b|\brealizar\b|\breaparecer\b", str(x))))
df_frame["Visitar"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\brevisitar\b|\bvisitação\b|\bvisitante\b|\bvisitar\b", str(x))))
df_frame["Volubilidade"] = df_frame["lema"].apply(lambda x: len(re.findall(r"\bacomodado\somisso\ssilencioso\b", str(x))))
df_frame.drop('Cenário_do_turismo_estada', axis=1, inplace=True)
df_frame['frame_pred_sum'] = df_frame.loc[:, 'Abundância_distribuída':'Volubilidade'].sum(axis=1)
df_frame = df_frame.query('frame_pred_sum > 0')
df_frame['frame_pred'] = df_frame.loc[:, 'Abundância_distribuída':'Volubilidade'].idxmax(axis=1)
return df_frame
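# Run the pipeline and count, per tonal unit, how often each frame is predicted.
# For reference, the counting pattern used throughout (hypothetical lemma value):
#   len(re.findall(r"\bviajar\b|\bviagem\b", "viajar"))  # -> 1 trigger-lemma hit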
coral_frames = coral_framenet()
coral_frames_f = pd.DataFrame(coral_frames.groupby(['tonal_units'])['frame_pred'].value_counts())
coral_frames_f.columns = ['Frequência']
coral_frames_f.reset_index(inplace=True)
coral_frames_f = coral_frames_f.query('frame_pred != "Cenário_do_turismo_estada"')
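# Keep only frames predicted more than 10 times and draw them as a sunburst,
# colored by the tonal unit they come from.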
fig_final = px.sunburst(coral_frames_f.query('Frequência > 10'), path=['frame_pred'], color='tonal_units', values='Frequência',
color_continuous_scale='BuPu')
fig_final.update_layout(width=700, height=700, margin=dict(t=130, l=50, r=10, b=10), title_text='Principais cenas associadas ao léxico do C-ORAL-ESQ',
title_x=0.5, title_y=0.899, title_font_size=20)
fig_final.update_traces(insidetextorientation='tangential')
fig_final.show() | 222.850394 | 3,060 | 0.726374 | 27,988 | 169,812 | 4.333679 | 0.245034 | 0.081548 | 0.063484 | 0.092208 | 0.344527 | 0.297565 | 0.267885 | 0.238732 | 0.235376 | 0.233397 | 0 | 0.000355 | 0.038996 | 169,812 | 762 | 3,061 | 222.850394 | 0.742895 | 0.00073 | 0 | 0.004071 | 0 | 0.377205 | 0.673682 | 0.602855 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.033921 | 0.013569 | null | null | 0.001357 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0a957e5eb3595197af64a7e2318942b5f3173e91 | 46 | py | Python | d3rlpy/online/__init__.py | jamartinh/d3rlpy | 87f478451674ef769eb8ce74e3663c4d3b1c325d | [
"MIT"
] | null | null | null | d3rlpy/online/__init__.py | jamartinh/d3rlpy | 87f478451674ef769eb8ce74e3663c4d3b1c325d | [
"MIT"
] | 1 | 2020-11-17T22:35:50.000Z | 2020-11-17T22:35:50.000Z | d3rlpy/online/__init__.py | jamartinh/d3rlpy | 87f478451674ef769eb8ce74e3663c4d3b1c325d | [
"MIT"
] | null | null | null | from . import buffers
from . import explorers
| 15.333333 | 23 | 0.782609 | 6 | 46 | 6 | 0.666667 | 0.555556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173913 | 46 | 2 | 24 | 23 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
0aa46ffe759f67eeb931946675b7a8cc55c88cd1 | 205 | py | Python | tests/wikipedia.py | elihschiff/ranking-h | f636fb1c9e2d41ad26e0508c269719bfecfdf7a7 | [
"MIT"
] | null | null | null | tests/wikipedia.py | elihschiff/ranking-h | f636fb1c9e2d41ad26e0508c269719bfecfdf7a7 | [
"MIT"
] | null | null | null | tests/wikipedia.py | elihschiff/ranking-h | f636fb1c9e2d41ad26e0508c269719bfecfdf7a7 | [
"MIT"
] | null | null | null | import util
def test_returns_wikipedia():
"""
Tests that querying with wikipedia returns the correct result.
"""
assert any("wikipedia" in x["url"] for x in util.send_query("wikipedia"))
| 22.777778 | 77 | 0.687805 | 28 | 205 | 4.928571 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 205 | 8 | 78 | 25.625 | 0.841463 | 0.302439 | 0 | 0 | 0 | 0 | 0.165354 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0ad5fd8d025feb1d6f09ee273f5250a3c2b99307 | 45 | py | Python | python/dataingest/grammar/bp/__init__.py | jiportilla/ontology | 8a66bb7f76f805c64fc76cfc40ab7dfbc1146f40 | [
"MIT"
] | null | null | null | python/dataingest/grammar/bp/__init__.py | jiportilla/ontology | 8a66bb7f76f805c64fc76cfc40ab7dfbc1146f40 | [
"MIT"
] | null | null | null | python/dataingest/grammar/bp/__init__.py | jiportilla/ontology | 8a66bb7f76f805c64fc76cfc40ab7dfbc1146f40 | [
"MIT"
] | null | null | null | from .python_parse_api import PythonParseAPI
| 22.5 | 44 | 0.888889 | 6 | 45 | 6.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088889 | 45 | 1 | 45 | 45 | 0.926829 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0afe130f8759b0cf68e7ed4ce079651692c4df37 | 30,089 | py | Python | tests/checks/mock/test_kubernetes.py | WPMedia/dd-agent | 94c9ea0dc13037c1d413847d7c9a401e226a608e | [
"BSD-3-Clause"
] | 1 | 2019-12-22T22:14:24.000Z | 2019-12-22T22:14:24.000Z | tests/checks/mock/test_kubernetes.py | WPMedia/dd-agent | 94c9ea0dc13037c1d413847d7c9a401e226a608e | [
"BSD-3-Clause"
] | 3 | 2021-02-08T20:55:47.000Z | 2022-03-29T22:04:12.000Z | tests/checks/mock/test_kubernetes.py | WPMedia/dd-agent | 94c9ea0dc13037c1d413847d7c9a401e226a608e | [
"BSD-3-Clause"
] | null | null | null | # (C) Datadog, Inc. 2010-2016
# All rights reserved
# Licensed under Simplified BSD License (see LICENSE)
# stdlib
import mock
import unittest
import os
# 3p
import simplejson as json
# project
from tests.checks.common import AgentCheckTest, Fixtures
from checks import AgentCheck
from utils.kubernetes import KubeUtil
from utils.platform import Platform
CPU = "CPU"
MEM = "MEM"
FS = "fs"
NET = "net"
NET_ERRORS = "net_errors"
DISK = "disk"
DISK_USAGE = "disk_usage"
PODS = "pods"
LIM = "limits"
REQ = "requests"
CAP = "capacity"
METRICS = [
('kubernetes.memory.usage', MEM),
('kubernetes.filesystem.usage', FS),
('kubernetes.filesystem.usage_pct', FS),
('kubernetes.cpu.usage.total', CPU),
('kubernetes.network.tx_bytes', NET),
('kubernetes.network.rx_bytes', NET),
('kubernetes.network_errors', NET_ERRORS),
('kubernetes.diskio.io_service_bytes.stats.total', DISK),
('kubernetes.filesystem.usage_pct', DISK_USAGE),
('kubernetes.filesystem.usage', DISK_USAGE),
('kubernetes.pods.running', PODS),
('kubernetes.cpu.limits', LIM),
('kubernetes.cpu.requests', REQ),
('kubernetes.cpu.capacity', CAP),
('kubernetes.memory.limits', LIM),
('kubernetes.memory.requests', REQ),
('kubernetes.memory.capacity', CAP),
]
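# Stand-in for KubeUtil.retrieve_json_auth: serves the namespaces and events
# fixtures for those endpoints and returns an empty dict for anything else.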
def KubeUtil_fake_retrieve_json_auth(url, auth_token, timeout=10):
if url.endswith("/namespaces"):
return json.loads(Fixtures.read_file("namespaces.json", string_escape=False))
if url.endswith("/events"):
return json.loads(Fixtures.read_file("events.json", string_escape=False))
return {}
class TestKubernetes(AgentCheckTest):
CHECK_NAME = 'kubernetes'
@mock.patch('utils.kubernetes.KubeUtil.retrieve_json_auth')
@mock.patch('utils.kubernetes.KubeUtil.retrieve_machine_info')
@mock.patch('utils.kubernetes.KubeUtil.retrieve_metrics',
side_effect=lambda: json.loads(Fixtures.read_file("metrics_1.1.json")))
@mock.patch('utils.kubernetes.KubeUtil.retrieve_pods_list',
side_effect=lambda: json.loads(Fixtures.read_file("pods_list_1.1.json", string_escape=False)))
def test_fail_1_1(self, *args):
        # To avoid the disappearance of some gauges during the second check
config = {
"instances": [{"host": "foo"}]
}
# Can't use run_check_twice due to specific metrics
self.run_check(config, force_reload=True)
self.assertServiceCheck("kubernetes.kubelet.check", status=AgentCheck.CRITICAL, tags=None, count=1)
@mock.patch('utils.kubernetes.KubeUtil.retrieve_json_auth')
@mock.patch('utils.kubernetes.KubeUtil.retrieve_machine_info')
@mock.patch('utils.kubernetes.KubeUtil.retrieve_metrics',
side_effect=lambda: json.loads(Fixtures.read_file("metrics_1.1.json")))
@mock.patch('utils.kubernetes.KubeUtil.retrieve_pods_list',
side_effect=lambda: json.loads(Fixtures.read_file("pods_list_1.1.json", string_escape=False)))
def test_metrics_1_1(self, *args):
        # To avoid the disappearance of some gauges during the second check
mocks = {
'_perform_kubelet_checks': lambda x: None,
}
config = {
"instances": [
{
"host": "foo",
"enable_kubelet_checks": False
}
]
}
        # Run the check twice; some metrics need two runs before they are reported
self.run_check_twice(config, mocks=mocks, force_reload=True)
expected_tags = [
(['container_name:/kubelet', 'pod_name:no_pod'], [MEM, CPU, NET, DISK]),
(['kube_replication_controller:propjoe', 'kube_namespace:default', 'container_name:k8s_POD.e4cc795_propjoe-dhdzk_default_ba151259-36e0-11e5-84ce-42010af01c62_ef0ed5f9', 'pod_name:default/propjoe-dhdzk'], [MEM, CPU, FS, NET, NET_ERRORS]),
(['container_name:/kube-proxy', 'pod_name:no_pod'], [MEM, CPU, NET]),
(['kube_replication_controller:kube-dns-v8', 'kube_namespace:kube-system', 'container_name:k8s_POD.2688308a_kube-dns-v8-smhcb_kube-system_b80ffab3-3619-11e5-84ce-42010af01c62_295f14ff', 'pod_name:kube-system/kube-dns-v8-smhcb'], [MEM, CPU, FS, NET, NET_ERRORS]),
(['container_name:/docker-daemon', 'pod_name:no_pod'], [MEM, CPU, DISK, NET]),
(['kube_replication_controller:kube-dns-v8', 'kube_namespace:kube-system', 'container_name:k8s_etcd.2e44beff_kube-dns-v8-smhcb_kube-system_b80ffab3-3619-11e5-84ce-42010af01c62_e3e504ad', 'pod_name:kube-system/kube-dns-v8-smhcb'], [MEM, CPU, FS, NET, NET_ERRORS, DISK]),
(['kube_replication_controller:fluentd-cloud-logging-kubernetes-minion', 'kube_namespace:kube-system', 'container_name:k8s_POD.e4cc795_fluentd-cloud-logging-kubernetes-minion-mu4w_kube-system_d0feac1ad02da9e97c4bf67970ece7a1_49dd977d', 'pod_name:kube-system/fluentd-cloud-logging-kubernetes-minion-mu4w'], [MEM, CPU, FS, NET, NET_ERRORS, DISK]),
(['kube_replication_controller:kube-dns-v8', 'kube_namespace:kube-system', 'container_name:k8s_skydns.1e752dc0_kube-dns-v8-smhcb_kube-system_b80ffab3-3619-11e5-84ce-42010af01c62_7c1345a1', 'pod_name:kube-system/kube-dns-v8-smhcb'], [MEM, CPU, FS, NET, NET_ERRORS]),
(['container_name:/', 'pod_name:no_pod'], [MEM, CPU, FS, NET, NET_ERRORS, DISK]),
(['container_name:/system/docker', 'pod_name:no_pod'], [MEM, CPU, DISK, NET]),
(['kube_replication_controller:propjoe', 'kube_namespace:default', 'container_name:k8s_propjoe.21f63023_propjoe-dhdzk_default_ba151259-36e0-11e5-84ce-42010af01c62_19879457', 'pod_name:default/propjoe-dhdzk'], [MEM, CPU, FS, NET, NET_ERRORS]),
(['container_name:/system', 'pod_name:no_pod'], [MEM, CPU, NET, DISK]),
(['kube_replication_controller:kube-ui-v1', 'kube_namespace:kube-system', 'container_name:k8s_POD.3b46e8b9_kube-ui-v1-sv2sq_kube-system_b7e8f250-3619-11e5-84ce-42010af01c62_209ed1dc', 'pod_name:kube-system/kube-ui-v1-sv2sq'], [MEM, CPU, FS, NET, NET_ERRORS]),
(['kube_replication_controller:kube-dns-v8', 'kube_namespace:kube-system', 'container_name:k8s_kube2sky.1afa6a47_kube-dns-v8-smhcb_kube-system_b80ffab3-3619-11e5-84ce-42010af01c62_624bc34c', 'pod_name:kube-system/kube-dns-v8-smhcb'], [MEM, CPU, FS, NET, NET_ERRORS]),
(['kube_replication_controller:propjoe', 'kube_namespace:default', 'container_name:k8s_POD.e4cc795_propjoe-lkc3l_default_3a9b1759-4055-11e5-84ce-42010af01c62_45d1185b', 'pod_name:default/propjoe-lkc3l'], [MEM, CPU, FS, NET, NET_ERRORS]),
(['kube_replication_controller:haproxy-6db79c7bbcac01601ac35bcdb18868b3', 'kube_namespace:default', 'container_name:k8s_POD.e4cc795_haproxy-6db79c7bbcac01601ac35bcdb18868b3-rr7la_default_86527bf8-36cd-11e5-84ce-42010af01c62_5ad59bf3', 'pod_name:default/haproxy-6db79c7bbcac01601ac35bcdb18868b3-rr7la'], [MEM, CPU, FS, NET, NET_ERRORS]),
(['kube_replication_controller:haproxy-6db79c7bbcac01601ac35bcdb18868b3', 'kube_namespace:default', 'container_name:k8s_haproxy.69b6303b_haproxy-6db79c7bbcac01601ac35bcdb18868b3-rr7la_default_86527bf8-36cd-11e5-84ce-42010af01c62_a35b9731', 'pod_name:default/haproxy-6db79c7bbcac01601ac35bcdb18868b3-rr7la'], [MEM, CPU, FS, NET, NET_ERRORS]),
(['kube_replication_controller:kube-ui-v1','kube_namespace:kube-system', 'container_name:k8s_kube-ui.c17839c_kube-ui-v1-sv2sq_kube-system_b7e8f250-3619-11e5-84ce-42010af01c62_d2b9aa90', 'pod_name:kube-system/kube-ui-v1-sv2sq'], [MEM, CPU, FS, NET, NET_ERRORS]),
(['kube_replication_controller:propjoe','kube_namespace:default', 'container_name:k8s_propjoe.21f63023_propjoe-lkc3l_default_3a9b1759-4055-11e5-84ce-42010af01c62_9fe8b7b0', 'pod_name:default/propjoe-lkc3l'], [MEM, CPU, FS, NET, NET_ERRORS]),
(['kube_replication_controller:kube-dns-v8','kube_namespace:kube-system', 'container_name:k8s_healthz.4469a25d_kube-dns-v8-smhcb_kube-system_b80ffab3-3619-11e5-84ce-42010af01c62_241c34d1', 'pod_name:kube-system/kube-dns-v8-smhcb'], [MEM, CPU, FS, NET, NET_ERRORS, DISK]),
(['kube_replication_controller:fluentd-cloud-logging-kubernetes-minion','kube_namespace:kube-system', 'container_name:k8s_fluentd-cloud-logging.7721935b_fluentd-cloud-logging-kubernetes-minion-mu4w_kube-system_d0feac1ad02da9e97c4bf67970ece7a1_2c3c0879', 'pod_name:kube-system/fluentd-cloud-logging-kubernetes-minion-mu4w'], [MEM, CPU, FS, NET, NET_ERRORS, DISK]),
(['container_name:dd-agent', 'pod_name:no_pod'], [MEM, CPU, FS, NET, NET_ERRORS, DISK]),
(['kube_replication_controller:l7-lb-controller', 'kube_namespace:kube-system'], [PODS]),
(['kube_replication_controller:redis-slave', 'kube_namespace:default'], [PODS]),
(['kube_replication_controller:frontend', 'kube_namespace:default'], [PODS]),
(['kube_replication_controller:heapster-v11', 'kube_namespace:kube-system'], [PODS]),
            ([], [LIM, REQ, CAP])  # container from the Kubernetes API doesn't have a corresponding entry in cAdvisor
]
for m, _type in METRICS:
for tags, types in expected_tags:
if _type in types:
self.assertMetric(m, count=1, tags=tags)
self.coverage_report()
@mock.patch('utils.kubernetes.KubeUtil.retrieve_json_auth')
@mock.patch('utils.kubernetes.KubeUtil.retrieve_machine_info')
@mock.patch('utils.kubernetes.KubeUtil.retrieve_metrics',
side_effect=lambda: json.loads(Fixtures.read_file("metrics_1.1.json")))
@mock.patch('utils.kubernetes.KubeUtil.retrieve_pods_list',
side_effect=lambda: json.loads(Fixtures.read_file("pods_list_1.1.json", string_escape=False)))
def test_historate_1_1(self, *args):
        # To avoid the disappearance of some gauges during the second check
mocks = {
'_perform_kubelet_checks': lambda x: None,
}
config = {
"instances": [
{
"host": "foo",
"enable_kubelet_checks": False,
"use_histogram": True,
}
]
}
        # Run the check twice; some metrics need two runs before they are reported
self.run_check_twice(config, mocks=mocks, force_reload=True)
metric_suffix = ["count", "avg", "median", "max", "95percentile"]
expected_tags = [
(['pod_name:no_pod'], [MEM, CPU, NET, DISK, DISK_USAGE, NET_ERRORS]),
(['kube_replication_controller:propjoe', 'kube_namespace:default', 'pod_name:default/propjoe-dhdzk'], [MEM, CPU, FS, NET, NET_ERRORS]),
(['kube_replication_controller:kube-dns-v8', 'kube_namespace:kube-system', 'pod_name:kube-system/kube-dns-v8-smhcb'], [MEM, CPU, FS, NET, NET_ERRORS, DISK]),
(['kube_replication_controller:fluentd-cloud-logging-kubernetes-minion', 'kube_namespace:kube-system', 'pod_name:kube-system/fluentd-cloud-logging-kubernetes-minion-mu4w'], [MEM, CPU, FS, NET, NET_ERRORS, DISK]),
(['kube_replication_controller:kube-dns-v8', 'kube_namespace:kube-system', 'pod_name:kube-system/kube-dns-v8-smhcb'], [MEM, CPU, FS, NET, NET_ERRORS]),
(['kube_replication_controller:propjoe', 'kube_namespace:default', 'pod_name:default/propjoe-dhdzk'], [MEM, CPU, FS, NET, NET_ERRORS]),
(['kube_replication_controller:kube-ui-v1','kube_namespace:kube-system', 'pod_name:kube-system/kube-ui-v1-sv2sq'], [MEM, CPU, FS, NET, NET_ERRORS]),
(['kube_replication_controller:propjoe', 'kube_namespace:default', 'pod_name:default/propjoe-lkc3l'], [MEM, CPU, FS, NET, NET_ERRORS]),
(['kube_replication_controller:haproxy-6db79c7bbcac01601ac35bcdb18868b3', 'kube_namespace:default', 'pod_name:default/haproxy-6db79c7bbcac01601ac35bcdb18868b3-rr7la'], [MEM, CPU, FS, NET, NET_ERRORS]),
(['kube_replication_controller:l7-lb-controller', 'kube_namespace:kube-system'], [PODS]),
(['kube_replication_controller:redis-slave', 'kube_namespace:default'], [PODS]),
(['kube_replication_controller:frontend', 'kube_namespace:default'], [PODS]),
(['kube_replication_controller:heapster-v11', 'kube_namespace:kube-system'], [PODS]),
            ([], [LIM, REQ, CAP])  # container from the Kubernetes API doesn't have a corresponding entry in cAdvisor
]
for m, _type in METRICS:
for m_suffix in metric_suffix:
for tags, types in expected_tags:
if _type in types:
self.assertMetric("{0}.{1}".format(m, m_suffix), count=1, tags=tags)
self.coverage_report()
@mock.patch('utils.kubernetes.KubeUtil.retrieve_json_auth')
@mock.patch('utils.kubernetes.KubeUtil.retrieve_machine_info',
side_effect=lambda: json.loads(Fixtures.read_file("machine_info_1.2.json")))
@mock.patch('utils.kubernetes.KubeUtil.retrieve_metrics',
side_effect=lambda: json.loads(Fixtures.read_file("metrics_1.2.json")))
@mock.patch('utils.kubernetes.KubeUtil.retrieve_pods_list',
side_effect=lambda: json.loads(Fixtures.read_file("pods_list_1.2.json", string_escape=False)))
def test_fail_1_2(self, *args):
        # To avoid the disappearance of some gauges during the second check
config = {
"instances": [{"host": "foo"}]
}
# Can't use run_check_twice due to specific metrics
self.run_check(config, force_reload=True)
self.assertServiceCheck("kubernetes.kubelet.check", status=AgentCheck.CRITICAL)
@mock.patch('utils.kubernetes.KubeUtil.retrieve_json_auth')
@mock.patch('utils.kubernetes.KubeUtil.retrieve_machine_info',
side_effect=lambda: json.loads(Fixtures.read_file("machine_info_1.2.json")))
@mock.patch('utils.kubernetes.KubeUtil.retrieve_metrics',
side_effect=lambda: json.loads(Fixtures.read_file("metrics_1.2.json")))
@mock.patch('utils.kubernetes.KubeUtil.retrieve_pods_list',
side_effect=lambda: json.loads(Fixtures.read_file("pods_list_1.2.json", string_escape=False)))
def test_metrics_1_2(self, *args):
mocks = {
'_perform_kubelet_checks': lambda x: None,
}
config = {
"instances": [
{
"host": "foo",
"enable_kubelet_checks": False
}
]
}
        # Run the check twice; some metrics need two runs before they are reported
self.run_check_twice(config, mocks=mocks, force_reload=True)
expected_tags = [
(['container_name:/kubelet', 'pod_name:no_pod'], [MEM, CPU, NET, DISK]),
(['container_name:k8s_POD.35220667_dd-agent-1rxlh_default_12c7be82-33ca-11e6-ac8f-42010af00003_f5cf585f',
'container_image:gcr.io/google_containers/pause:2.0', 'pod_name:default/dd-agent-1rxlh',
'kube_namespace:default', 'kube_app:dd-agent', 'kube_foo:bar','kube_bar:baz',
'kube_replication_controller:dd-agent'],
[MEM, CPU, FS, NET, NET_ERRORS]),
(['container_name:/', 'pod_name:no_pod'], [MEM, CPU, FS, NET, NET_ERRORS, DISK]),
(['container_name:/system', 'pod_name:no_pod'], [MEM, CPU, NET, DISK]),
(['container_name:k8s_dd-agent.7b520f3f_dd-agent-1rxlh_default_12c7be82-33ca-11e6-ac8f-42010af00003_321fecb4',
'container_image:datadog/docker-dd-agent:massi_ingest_k8s_events', 'pod_name:default/dd-agent-1rxlh',
'kube_namespace:default', 'kube_app:dd-agent', 'kube_foo:bar',
'kube_bar:baz', 'kube_replication_controller:dd-agent'], [LIM, REQ, MEM, CPU, NET, DISK, DISK_USAGE]),
(['kube_replication_controller:dd-agent', 'kube_namespace:default'], [PODS]),
            ([], [LIM, REQ, CAP])  # container from the Kubernetes API doesn't have a corresponding entry in cAdvisor
]
for m, _type in METRICS:
for tags, types in expected_tags:
if _type in types:
self.assertMetric(m, count=1, tags=tags)
# Verify exact capacity values read from machine_info_1.2.json fixture.
self.assertMetric('kubernetes.cpu.capacity', value=2)
self.assertMetric('kubernetes.memory.capacity', value=8391204864)
self.coverage_report()
@mock.patch('utils.kubernetes.KubeUtil.retrieve_json_auth')
@mock.patch('utils.kubernetes.KubeUtil.retrieve_machine_info',
side_effect=lambda: json.loads(Fixtures.read_file("machine_info_1.2.json")))
@mock.patch('utils.kubernetes.KubeUtil.retrieve_metrics',
side_effect=lambda: json.loads(Fixtures.read_file("metrics_1.2.json")))
@mock.patch('utils.kubernetes.KubeUtil.retrieve_pods_list',
side_effect=lambda: json.loads(Fixtures.read_file("pods_list_1.2.json", string_escape=False)))
def test_historate_1_2(self, *args):
        # To avoid the disappearance of some gauges during the second check
mocks = {
'_perform_kubelet_checks': lambda x: None,
}
config = {
"instances": [
{
"host": "foo",
"enable_kubelet_checks": False,
"use_histogram": True,
}
]
}
        # Run the check twice; some metrics need two runs before they are reported
self.run_check_twice(config, mocks=mocks, force_reload=True)
metric_suffix = ["count", "avg", "median", "max", "95percentile"]
expected_tags = [
(['container_image:datadog/docker-dd-agent:massi_ingest_k8s_events', 'pod_name:default/dd-agent-1rxlh',
'kube_namespace:default', 'kube_app:dd-agent', 'kube_foo:bar','kube_bar:baz',
'kube_replication_controller:dd-agent'], [MEM, CPU, NET, DISK, DISK_USAGE, LIM, REQ]),
(['container_image:gcr.io/google_containers/pause:2.0', 'pod_name:default/dd-agent-1rxlh',
'kube_namespace:default', 'kube_app:dd-agent', 'kube_foo:bar','kube_bar:baz',
'kube_replication_controller:dd-agent'], [MEM, CPU, NET, NET_ERRORS, DISK_USAGE]),
(['pod_name:no_pod'], [MEM, CPU, FS, NET, NET_ERRORS, DISK]),
(['kube_replication_controller:dd-agent', 'kube_namespace:default'], [PODS]),
            ([], [LIM, REQ, CAP])  # container from the Kubernetes API doesn't have a corresponding entry in cAdvisor
]
for m, _type in METRICS:
for m_suffix in metric_suffix:
for tags, types in expected_tags:
if _type in types:
self.assertMetric("{0}.{1}".format(m, m_suffix), count=1, tags=tags)
self.coverage_report()
@mock.patch('utils.kubernetes.KubeUtil.get_node_info',
side_effect=lambda: ('Foo', 'Bar'))
@mock.patch('utils.kubernetes.KubeUtil.filter_pods_list',
side_effect=lambda x, y: x)
@mock.patch('utils.kubernetes.KubeUtil.retrieve_json_auth',
side_effect=KubeUtil_fake_retrieve_json_auth)
@mock.patch('utils.kubernetes.KubeUtil.retrieve_machine_info')
@mock.patch('utils.kubernetes.KubeUtil.retrieve_metrics')
@mock.patch('utils.kubernetes.KubeUtil.retrieve_pods_list',
side_effect=lambda: json.loads(Fixtures.read_file("pods_list_1.2.json", string_escape=False)))
def test_events(self, *args):
# default value for collect_events is False
config = {'instances': [{'host': 'foo'}]}
self.run_check(config, force_reload=True)
self.assertEvent('hello-node-47289321-91tfd Scheduled on Bar', count=0, exact_match=False)
# again, with the feature enabled
config = {'instances': [{'host': 'bar', 'collect_events': True}]}
self.run_check(config, force_reload=True)
self.assertEvent('hello-node-47289321-91tfd Scheduled on Bar', count=1, exact_match=False)
        # with no namespaces configured, only catch events from the 'default' namespace
self.assertEvent('dd-agent-a769 SuccessfulDelete on Bar', count=0, exact_match=False)
        # again, now the timestamp is set and the event is discarded because it is too old
self.run_check(config)
self.assertEvent('hello-node-47289321-91tfd Scheduled on Bar', count=0, exact_match=False)
@mock.patch('utils.kubernetes.KubeUtil.get_node_info',
side_effect=lambda: ('Foo', 'Bar'))
@mock.patch('utils.kubernetes.KubeUtil.filter_pods_list')
@mock.patch('utils.kubernetes.KubeUtil.retrieve_json_auth',
side_effect=KubeUtil_fake_retrieve_json_auth)
@mock.patch('utils.kubernetes.KubeUtil.retrieve_machine_info')
@mock.patch('utils.kubernetes.KubeUtil.retrieve_metrics')
@mock.patch('utils.kubernetes.KubeUtil.retrieve_pods_list')
def test_namespaced_events(self, *args):
# reset last event pulling time
KubeUtil().last_event_collection_ts = 0
        # Verify that we are backward compatible with the old 'namespace' configuration key
config = {'instances': [{'host': 'bar', 'collect_events': True, 'namespace': 'test-namespace-1'}]}
self.run_check(config, force_reload=True)
self.assertEvent('dd-agent-a769 SuccessfulDelete on Bar', count=1, exact_match=False)
self.assertEvent('hello-node-47289321-91tfd Scheduled on Bar', count=1, exact_match=False)
# reset last event pulling time
KubeUtil().last_event_collection_ts = 0
# Using 'namespaces' list
config = {'instances': [{'host': 'bar', 'collect_events': True, 'namespaces': ['test-namespace-1', 'test-namespace-2']}]}
self.run_check(config, force_reload=True)
self.assertEvent('dd-agent-a769 SuccessfulDelete on Bar', count=1, exact_match=False)
self.assertEvent('hello-node-47289321-91tfd Scheduled on Bar', count=0, exact_match=False)
# reset last event pulling time
KubeUtil().last_event_collection_ts = 0
        # Using 'namespace_name_regexp' (since 'namespaces' is not set, it should
        # fall back to ['default'] and add any namespaces that match the regexp)
config = {'instances': [{'host': 'bar', 'collect_events': True, 'namespace_name_regexp': 'test-namespace.*'}]}
self.run_check(config, force_reload=True)
self.assertEvent('dd-agent-a769 SuccessfulDelete on Bar', count=1, exact_match=False)
self.assertEvent('hello-node-47289321-91tfd Scheduled on Bar', count=1, exact_match=False)
# reset last event pulling time
KubeUtil().last_event_collection_ts = 0
# muting the 'default' namespace
config = {'instances': [{'host': 'bar', 'collect_events': True, 'namespaces': [], 'namespace_name_regexp': 'test-namespace.*'}]}
self.run_check(config, force_reload=True)
self.assertEvent('dd-agent-a769 SuccessfulDelete on Bar', count=1, exact_match=False)
self.assertEvent('hello-node-47289321-91tfd Scheduled on Bar', count=0, exact_match=False)
class TestKubeutil(unittest.TestCase):
def setUp(self):
self.kubeutil = KubeUtil()
@mock.patch('utils.kubernetes.KubeUtil.retrieve_pods_list', side_effect=['foo'])
@mock.patch('utils.kubernetes.KubeUtil.extract_kube_labels')
def test_get_kube_labels(self, extract_kube_labels, retrieve_pods_list):
self.kubeutil.get_kube_labels(excluded_keys='bar')
retrieve_pods_list.assert_called_once()
extract_kube_labels.assert_called_once_with('foo', excluded_keys='bar')
def test_extract_kube_labels(self):
"""
Test with both 1.1 and 1.2 version payloads
"""
res = self.kubeutil.extract_kube_labels({}, ['foo'])
self.assertEqual(len(res), 0)
pods = json.loads(Fixtures.read_file("pods_list_1.1.json", string_escape=False))
res = self.kubeutil.extract_kube_labels(pods, ['foo'])
labels = set(inn for out in res.values() for inn in out)
self.assertEqual(len(labels), 8)
res = self.kubeutil.extract_kube_labels(pods, ['k8s-app'])
labels = set(inn for out in res.values() for inn in out)
self.assertEqual(len(labels), 6)
pods = json.loads(Fixtures.read_file("pods_list_1.2.json", string_escape=False))
res = self.kubeutil.extract_kube_labels(pods, ['foo'])
labels = set(inn for out in res.values() for inn in out)
self.assertEqual(len(labels), 3)
res = self.kubeutil.extract_kube_labels(pods, ['k8s-app'])
labels = set(inn for out in res.values() for inn in out)
self.assertEqual(len(labels), 3)
def test_extract_meta(self):
"""
Test with both 1.1 and 1.2 version payloads
"""
res = self.kubeutil.extract_meta({}, 'foo')
self.assertEqual(len(res), 0)
pods = json.loads(Fixtures.read_file("pods_list_1.1.json", string_escape=False))
res = self.kubeutil.extract_meta(pods, 'foo')
self.assertEqual(len(res), 0)
res = self.kubeutil.extract_meta(pods, 'uid')
self.assertEqual(len(res), 6)
pods = json.loads(Fixtures.read_file("pods_list_1.2.json", string_escape=False))
res = self.kubeutil.extract_meta(pods, 'foo')
self.assertEqual(len(res), 0)
res = self.kubeutil.extract_meta(pods, 'uid')
self.assertEqual(len(res), 4)
@mock.patch('utils.kubernetes.kubeutil.retrieve_json')
def test_retrieve_pods_list(self, retrieve_json):
self.kubeutil.retrieve_pods_list()
retrieve_json.assert_called_once_with(self.kubeutil.pods_list_url)
@mock.patch('utils.kubernetes.kubeutil.retrieve_json')
def test_retrieve_machine_info(self, retrieve_json):
self.kubeutil.retrieve_machine_info()
retrieve_json.assert_called_once_with(self.kubeutil.machine_info_url)
@mock.patch('utils.kubernetes.kubeutil.retrieve_json')
def test_retrieve_metrics(self, retrieve_json):
self.kubeutil.retrieve_metrics()
retrieve_json.assert_called_once_with(self.kubeutil.metrics_url)
def test_filter_pods_list(self):
"""
Test with both 1.1 and 1.2 version payloads
"""
res = self.kubeutil.filter_pods_list({}, 'foo')
self.assertEqual(len(res.get('items')), 0)
pods = json.loads(Fixtures.read_file("pods_list_1.1.json", string_escape=False))
res = self.kubeutil.filter_pods_list(pods, '10.240.0.9')
self.assertEqual(len(res.get('items')), 5)
pods = json.loads(Fixtures.read_file("pods_list_1.1.json", string_escape=False))
res = self.kubeutil.filter_pods_list(pods, 'foo')
self.assertEqual(len(res.get('items')), 0)
pods = json.loads(Fixtures.read_file("pods_list_1.2.json", string_escape=False))
res = self.kubeutil.filter_pods_list(pods, '10.240.0.5')
self.assertEqual(len(res.get('items')), 1)
pods = json.loads(Fixtures.read_file("pods_list_1.2.json", string_escape=False))
res = self.kubeutil.filter_pods_list(pods, 'foo')
self.assertEqual(len(res.get('items')), 0)
@mock.patch('utils.kubernetes.kubeutil.requests')
def test_retrieve_json_auth(self, r):
self.kubeutil.retrieve_json_auth('url', 'foo_tok')
r.get.assert_called_once_with('url', verify=False, timeout=10, headers={'Authorization': 'Bearer foo_tok'})
self.kubeutil.CA_CRT_PATH = __file__
self.kubeutil.retrieve_json_auth('url', 'foo_tok')
r.get.assert_called_with('url', verify=__file__, timeout=10, headers={'Authorization': 'Bearer foo_tok'})
def test_get_node_info(self):
with mock.patch('utils.kubernetes.KubeUtil._fetch_host_data') as f:
self.kubeutil.get_node_info()
f.assert_called_once()
f.reset_mock()
self.kubeutil._node_ip = 'foo'
self.kubeutil._node_name = 'bar'
ip, name = self.kubeutil.get_node_info()
self.assertEqual(ip, 'foo')
self.assertEqual(name, 'bar')
f.assert_not_called()
def test__fetch_host_data(self):
"""
Test with both 1.1 and 1.2 version payloads
"""
with mock.patch('utils.kubernetes.KubeUtil.retrieve_pods_list') as mock_pods:
self.kubeutil.host_name = 'dd-agent-1rxlh'
mock_pods.return_value = json.loads(Fixtures.read_file("pods_list_1.2.json", string_escape=False))
self.kubeutil._fetch_host_data()
self.assertEqual(self.kubeutil._node_ip, '10.240.0.9')
self.assertEqual(self.kubeutil._node_name, 'kubernetes-massi-minion-k23m')
self.kubeutil.host_name = 'heapster-v11-l8sh1'
mock_pods.return_value = json.loads(Fixtures.read_file("pods_list_1.1.json", string_escape=False))
self.kubeutil._fetch_host_data()
self.assertEqual(self.kubeutil._node_ip, '10.240.0.9')
self.assertEqual(self.kubeutil._node_name, 'gke-cluster-1-8046fdfa-node-ld35')
def test_get_auth_token(self):
KubeUtil.AUTH_TOKEN_PATH = '/foo/bar'
self.assertIsNone(KubeUtil.get_auth_token())
KubeUtil.AUTH_TOKEN_PATH = Fixtures.file('events.json') # any file could do the trick
self.assertIsNotNone(KubeUtil.get_auth_token())
def test_is_k8s(self):
        os.environ.pop('KUBERNETES_PORT', None)  # os.unsetenv would leave os.environ unchanged
self.assertFalse(Platform.is_k8s())
os.environ['KUBERNETES_PORT'] = '999'
self.assertTrue(Platform.is_k8s())
def test_extract_event_tags(self):
events = json.loads(Fixtures.read_file("events.json", string_escape=False))['items']
for ev in events:
tags = KubeUtil().extract_event_tags(ev)
# there should be 4 tags except for some events where source.host is missing
self.assertTrue(len(tags) >= 3)
tag_names = [tag.split(':')[0] for tag in tags]
self.assertIn('reason', tag_names)
self.assertIn('namespace', tag_names)
self.assertIn('object_type', tag_names)
if len(tags) == 4:
self.assertIn('node_name', tag_names)
| 56.771698 | 375 | 0.673602 | 3,823 | 30,089 | 5.070102 | 0.102537 | 0.034824 | 0.03178 | 0.054481 | 0.818913 | 0.809524 | 0.793427 | 0.786256 | 0.752618 | 0.72708 | 0 | 0.043142 | 0.191897 | 30,089 | 529 | 376 | 56.879017 | 0.75402 | 0.066237 | 0 | 0.489157 | 0 | 0.019277 | 0.389316 | 0.31673 | 0 | 0 | 0 | 0 | 0.142169 | 1 | 0.055422 | false | 0 | 0.019277 | 0 | 0.089157 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e401b8e844e41239858430860bf3d61d585b26e7 | 57 | py | Python | shcollector/cfg/__init__.py | mwerlen/smart-home-collector | 083aa53fd4b7f3a9392ab0cbafc383ea69ea6315 | [
"MIT"
] | null | null | null | shcollector/cfg/__init__.py | mwerlen/smart-home-collector | 083aa53fd4b7f3a9392ab0cbafc383ea69ea6315 | [
"MIT"
] | 4 | 2021-01-04T07:34:00.000Z | 2021-03-01T20:06:18.000Z | shcollector/cfg/__init__.py | mwerlen/smart-home-collector | 083aa53fd4b7f3a9392ab0cbafc383ea69ea6315 | [
"MIT"
] | null | null | null | from cfg.config import Config
config: Config = Config()
| 14.25 | 29 | 0.754386 | 8 | 57 | 5.375 | 0.5 | 0.837209 | 0.837209 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.157895 | 57 | 3 | 30 | 19 | 0.895833 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
7c311caa8a7ec26c95d477c6042a309b370673ef | 3,214 | py | Python | allennlp/tests/training/metrics/covariance_test.py | tianjianjiang/allennlp | 0839f5c263911ec5ff04a2ebe575493c7e0436ef | [
"Apache-2.0"
] | 2 | 2021-04-27T19:56:28.000Z | 2021-08-19T05:34:37.000Z | allennlp/tests/training/metrics/covariance_test.py | tianjianjiang/allennlp | 0839f5c263911ec5ff04a2ebe575493c7e0436ef | [
"Apache-2.0"
] | 5 | 2021-05-03T14:40:33.000Z | 2021-05-03T14:40:34.000Z | allennlp/tests/training/metrics/covariance_test.py | tianjianjiang/allennlp | 0839f5c263911ec5ff04a2ebe575493c7e0436ef | [
"Apache-2.0"
] | 2 | 2019-12-04T16:55:13.000Z | 2019-12-06T18:47:15.000Z | import torch
import numpy as np
from numpy.testing import assert_allclose
from allennlp.common.testing import AllenNlpTestCase
from allennlp.training.metrics import Covariance
class CovarianceTest(AllenNlpTestCase):
def test_covariance_unmasked_computation(self):
covariance = Covariance()
batch_size = 100
num_labels = 10
predictions = np.random.randn(batch_size, num_labels).astype("float32")
labels = 0.5 * predictions + np.random.randn(batch_size, num_labels).astype("float32")
stride = 10
for i in range(batch_size // stride):
timestep_predictions = torch.FloatTensor(predictions[stride * i : stride * (i + 1), :])
timestep_labels = torch.FloatTensor(labels[stride * i : stride * (i + 1), :])
# Flatten the predictions and labels thus far, so numpy treats them as
# independent observations.
expected_covariance = np.cov(
predictions[: stride * (i + 1), :].reshape(-1),
labels[: stride * (i + 1), :].reshape(-1),
)[0, 1]
covariance(timestep_predictions, timestep_labels)
assert_allclose(expected_covariance, covariance.get_metric(), rtol=1e-5)
# Test reset
covariance.reset()
covariance(torch.FloatTensor(predictions), torch.FloatTensor(labels))
assert_allclose(
np.cov(predictions.reshape(-1), labels.reshape(-1))[0, 1],
covariance.get_metric(),
rtol=1e-5,
)
def test_covariance_masked_computation(self):
covariance = Covariance()
batch_size = 100
num_labels = 10
predictions = np.random.randn(batch_size, num_labels).astype("float32")
labels = 0.5 * predictions + np.random.randn(batch_size, num_labels).astype("float32")
# Random binary mask
mask = np.random.randint(0, 2, size=(batch_size, num_labels)).astype("float32")
stride = 10
for i in range(batch_size // stride):
timestep_predictions = torch.FloatTensor(predictions[stride * i : stride * (i + 1), :])
timestep_labels = torch.FloatTensor(labels[stride * i : stride * (i + 1), :])
timestep_mask = torch.FloatTensor(mask[stride * i : stride * (i + 1), :])
# Flatten the predictions, labels, and mask thus far, so numpy treats them as
# independent observations.
expected_covariance = np.cov(
predictions[: stride * (i + 1), :].reshape(-1),
labels[: stride * (i + 1), :].reshape(-1),
fweights=mask[: stride * (i + 1), :].reshape(-1),
)[0, 1]
covariance(timestep_predictions, timestep_labels, timestep_mask)
assert_allclose(expected_covariance, covariance.get_metric(), rtol=1e-5)
# Test reset
covariance.reset()
covariance(
torch.FloatTensor(predictions), torch.FloatTensor(labels), torch.FloatTensor(mask)
)
assert_allclose(
np.cov(predictions.reshape(-1), labels.reshape(-1), fweights=mask.reshape(-1))[0, 1],
covariance.get_metric(),
rtol=1e-5,
)
| 43.432432 | 99 | 0.610143 | 354 | 3,214 | 5.412429 | 0.189266 | 0.054802 | 0.041754 | 0.046973 | 0.802714 | 0.802714 | 0.798539 | 0.798539 | 0.768789 | 0.768789 | 0 | 0.028205 | 0.271935 | 3,214 | 73 | 100 | 44.027397 | 0.790598 | 0.07374 | 0 | 0.631579 | 0 | 0 | 0.011788 | 0 | 0 | 0 | 0 | 0 | 0.087719 | 1 | 0.035088 | false | 0 | 0.087719 | 0 | 0.140351 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7c570a1ae985391f88639d366330ef7140dfe874 | 143 | py | Python | src/masonite/commands/Command.py | cercos/masonite | f7f220efa7fae833683e9f07ce13c3795a87d3b8 | [
"MIT"
] | 1,816 | 2018-02-14T01:59:51.000Z | 2022-03-31T17:09:20.000Z | src/masonite/commands/Command.py | cercos/masonite | f7f220efa7fae833683e9f07ce13c3795a87d3b8 | [
"MIT"
] | 340 | 2018-02-11T00:27:26.000Z | 2022-03-21T12:00:24.000Z | src/masonite/commands/Command.py | cercos/masonite | f7f220efa7fae833683e9f07ce13c3795a87d3b8 | [
"MIT"
] | 144 | 2018-03-18T00:08:16.000Z | 2022-02-26T01:51:58.000Z | from cleo import Command as BaseCommand
from ..utils.console import AddCommandColors
class Command(BaseCommand, AddCommandColors):
pass
| 17.875 | 45 | 0.804196 | 16 | 143 | 7.1875 | 0.6875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.146853 | 143 | 7 | 46 | 20.428571 | 0.942623 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.25 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
7cad48be242058723b88c72cb3aadfabe9f40d34 | 3,949 | py | Python | benchmarks/Evolution/both/evo_tests/test_cases/test_situation_flag.py | nuprl/retic_performance | 621211c2f40251ce5364c33e72e4067e34a32013 | [
"MIT"
] | 3 | 2018-08-03T02:41:29.000Z | 2021-03-19T03:18:47.000Z | benchmarks/Evolution/both/evo_tests/test_cases/test_situation_flag.py | nuprl/retic_performance | 621211c2f40251ce5364c33e72e4067e34a32013 | [
"MIT"
] | 3 | 2018-02-04T17:53:56.000Z | 2018-11-10T17:06:57.000Z | benchmarks/Evolution/both/evo_tests/test_cases/test_situation_flag.py | nuprl/retic_performance | 621211c2f40251ce5364c33e72e4067e34a32013 | [
"MIT"
] | 1 | 2018-08-04T00:14:12.000Z | 2018-08-04T00:14:12.000Z | __author__ = 'Edwin Cowart, Kevin McDonough'
import unittest
from evolution.situation_flag import *
class TestSituationFlag(unittest.TestCase):
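    # Exercises flag identity plus each classification predicate against all four flags.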
def test_situation_flag(self):
self.assertFalse(SituationFlag.ATTACKER is SituationFlag.DEFENDER)
self.assertTrue(SituationFlag.ATTACKER is SituationFlag.ATTACKER)
self.assertFalse(SituationFlag.ATTACKER is SituationFlag.DEFENDER_L_NEIGHBOR)
self.assertFalse(SituationFlag.ATTACKER is SituationFlag.DEFENDER_R_NEIGHBOR)
self.assertTrue(SituationFlag.DEFENDER is SituationFlag.DEFENDER)
self.assertFalse(SituationFlag.DEFENDER is SituationFlag.ATTACKER)
self.assertFalse(SituationFlag.DEFENDER is SituationFlag.DEFENDER_L_NEIGHBOR)
self.assertFalse(SituationFlag.DEFENDER is SituationFlag.DEFENDER_R_NEIGHBOR)
self.assertFalse(SituationFlag.DEFENDER_L_NEIGHBOR is SituationFlag.DEFENDER)
self.assertFalse(SituationFlag.DEFENDER_L_NEIGHBOR is SituationFlag.ATTACKER)
self.assertTrue(SituationFlag.DEFENDER_L_NEIGHBOR is SituationFlag.DEFENDER_L_NEIGHBOR)
self.assertFalse(SituationFlag.DEFENDER_L_NEIGHBOR is SituationFlag.DEFENDER_R_NEIGHBOR)
self.assertFalse(SituationFlag.DEFENDER_R_NEIGHBOR is SituationFlag.DEFENDER)
self.assertFalse(SituationFlag.DEFENDER_R_NEIGHBOR is SituationFlag.ATTACKER)
self.assertFalse(SituationFlag.DEFENDER_R_NEIGHBOR is SituationFlag.DEFENDER_L_NEIGHBOR)
self.assertTrue(SituationFlag.DEFENDER_R_NEIGHBOR is SituationFlag.DEFENDER_R_NEIGHBOR)
def test_is_belligerent(self):
self.assertTrue(SituationFlag.is_belligerent(SituationFlag.DEFENDER))
self.assertTrue(SituationFlag.is_belligerent(SituationFlag.ATTACKER))
self.assertFalse(SituationFlag.is_belligerent(SituationFlag.DEFENDER_L_NEIGHBOR))
self.assertFalse(SituationFlag.is_belligerent(SituationFlag.DEFENDER_R_NEIGHBOR))
def test_is_defender(self):
self.assertTrue(SituationFlag.is_defender(SituationFlag.DEFENDER))
self.assertFalse(SituationFlag.is_defender(SituationFlag.ATTACKER))
self.assertFalse(SituationFlag.is_defender(SituationFlag.DEFENDER_L_NEIGHBOR))
self.assertFalse(SituationFlag.is_defender(SituationFlag.DEFENDER_R_NEIGHBOR))
def test_is_attacker(self):
self.assertFalse(SituationFlag.is_attacker(SituationFlag.DEFENDER))
self.assertTrue(SituationFlag.is_attacker(SituationFlag.ATTACKER))
self.assertFalse(SituationFlag.is_attacker(SituationFlag.DEFENDER_L_NEIGHBOR))
self.assertFalse(SituationFlag.is_attacker(SituationFlag.DEFENDER_R_NEIGHBOR))
def test_is_defender_neighbor(self):
self.assertFalse(SituationFlag.is_defender_neighbor(SituationFlag.DEFENDER))
self.assertFalse(SituationFlag.is_defender_neighbor(SituationFlag.ATTACKER))
self.assertTrue(SituationFlag.is_defender_neighbor(SituationFlag.DEFENDER_L_NEIGHBOR))
self.assertTrue(SituationFlag.is_defender_neighbor(SituationFlag.DEFENDER_R_NEIGHBOR))
def test_is_defender_l_neighbor(self):
self.assertFalse(SituationFlag.is_defender_left_neighbor(SituationFlag.DEFENDER))
self.assertFalse(SituationFlag.is_defender_left_neighbor(SituationFlag.ATTACKER))
self.assertTrue(SituationFlag.is_defender_left_neighbor(SituationFlag.DEFENDER_L_NEIGHBOR))
self.assertFalse(SituationFlag.is_defender_left_neighbor(SituationFlag.DEFENDER_R_NEIGHBOR))
def test_is_defender_r_neighbor(self):
self.assertFalse(SituationFlag.is_defender_right_neighbor(SituationFlag.DEFENDER))
self.assertFalse(SituationFlag.is_defender_right_neighbor(SituationFlag.ATTACKER))
self.assertFalse(SituationFlag.is_defender_right_neighbor(SituationFlag.DEFENDER_L_NEIGHBOR))
self.assertTrue(SituationFlag.is_defender_right_neighbor(SituationFlag.DEFENDER_R_NEIGHBOR))
if __name__ == '__main__':
unittest.main()
| 58.940299 | 101 | 0.808306 | 417 | 3,949 | 7.33813 | 0.076739 | 0.288235 | 0.256209 | 0.156863 | 0.935294 | 0.890196 | 0.822222 | 0.554902 | 0.326471 | 0.098693 | 0 | 0 | 0.119017 | 3,949 | 66 | 102 | 59.833333 | 0.879563 | 0 | 0 | 0 | 0 | 0 | 0.009369 | 0 | 0 | 0 | 0 | 0 | 0.754717 | 1 | 0.132075 | false | 0 | 0.037736 | 0 | 0.188679 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7cce5aa62c86bae573151e21cc580b05d2b4a392 | 91 | py | Python | authlib/oauth2/rfc8414/__init__.py | danielfv/authlib | 7b11dd7d262574009ac10298ace6c48d6054057e | [
"BSD-3-Clause"
] | 1 | 2019-10-26T20:23:28.000Z | 2019-10-26T20:23:28.000Z | authlib/oauth2/rfc8414/__init__.py | danielfv/authlib | 7b11dd7d262574009ac10298ace6c48d6054057e | [
"BSD-3-Clause"
] | null | null | null | authlib/oauth2/rfc8414/__init__.py | danielfv/authlib | 7b11dd7d262574009ac10298ace6c48d6054057e | [
"BSD-3-Clause"
] | null | null | null | from .models import AuthorizationServerMetadata
from .well_known import get_well_known_url
| 30.333333 | 47 | 0.89011 | 12 | 91 | 6.416667 | 0.666667 | 0.233766 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.087912 | 91 | 2 | 48 | 45.5 | 0.927711 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6b023534d0a5a01ce52fa3957bbd78fc0dd3506c | 204 | py | Python | calc.py | SashaPoraiko/academy-storage | 387f236971085fde605c2a12b53b1734a925759a | [
"Unlicense",
"MIT"
] | null | null | null | calc.py | SashaPoraiko/academy-storage | 387f236971085fde605c2a12b53b1734a925759a | [
"Unlicense",
"MIT"
] | 7 | 2020-06-05T23:54:27.000Z | 2022-02-10T10:36:29.000Z | calc.py | SashaPoraiko/academy-storage | 387f236971085fde605c2a12b53b1734a925759a | [
"Unlicense",
"MIT"
] | null | null | null | def add(a, b):
return a + b
def sub(a, b):
return a - b
def mul(a, b):
return a * b
def div(a, b):
return a / b
def sqrt(a):
return a ** 0.5
def pow(a, b):
return a ** b
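# Notes on the helpers above: div raises ZeroDivisionError when b == 0, and this
# pow shadows Python's builtin pow(). A quick usage sketch with made-up values:
#   add(2, 3)   -> 5
#   sqrt(9)     -> 3.0
#   pow(2, 10)  -> 1024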
| 8.869565 | 19 | 0.47549 | 42 | 204 | 2.309524 | 0.285714 | 0.206186 | 0.412371 | 0.463918 | 0.639175 | 0.536082 | 0 | 0 | 0 | 0 | 0 | 0.015504 | 0.367647 | 204 | 22 | 20 | 9.272727 | 0.736434 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
6b25c3e46a2c5a685df82c41f801e807fc075428 | 115 | py | Python | tools/__init__.py | okcd00/BertBasedCorrectionModels | 79297c36c64eaff6c4f3c316bc4110f442210991 | [
"Apache-2.0"
] | null | null | null | tools/__init__.py | okcd00/BertBasedCorrectionModels | 79297c36c64eaff6c4f3c316bc4110f442210991 | [
"Apache-2.0"
] | null | null | null | tools/__init__.py | okcd00/BertBasedCorrectionModels | 79297c36c64eaff6c4f3c316bc4110f442210991 | [
"Apache-2.0"
] | null | null | null | """
@Time : 2021-07-27 17:21:07
@File   :   __init__.py
@Author : okcd00
@Email : okcd00{at}qq.com
"""
| 16.428571 | 31 | 0.556522 | 18 | 115 | 3.333333 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.206897 | 0.243478 | 115 | 6 | 32 | 19.166667 | 0.482759 | 0.921739 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
86355947caf970a71e437f06ad7cd8ca9af86447 | 242 | py | Python | pshape/__init__.py | sam1902/pshape | b94b474ecd528284307907d85455e6252946fb95 | [
"BSD-3-Clause"
] | null | null | null | pshape/__init__.py | sam1902/pshape | b94b474ecd528284307907d85455e6252946fb95 | [
"BSD-3-Clause"
] | null | null | null | pshape/__init__.py | sam1902/pshape | b94b474ecd528284307907d85455e6252946fb95 | [
"BSD-3-Clause"
] | null | null | null | # The module is called pshape
# the file containing the function is called pshape
# and the function itself is called pshape.
# For package structure design, see tqdm, which follows a similar layout.
from pshape.pshape import pshape
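# A minimal sketch of that layout (the pshape.py body shown is illustrative):
#   pshape/
#       __init__.py   -- this file, re-exporting the function
#       pshape.py     -- def pshape(*arrays): ...
# With the re-export above, `from pshape import pshape` resolves to the
# function rather than the submodule, just like `from tqdm import tqdm`.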
| 40.333333 | 82 | 0.789256 | 40 | 242 | 4.775 | 0.675 | 0.125654 | 0.219895 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173554 | 242 | 5 | 83 | 48.4 | 0.955 | 0.826446 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
86591d66e635dd08905393e315d83c162edc2b91 | 8,503 | py | Python | tests/test_benchmark_tf.py | abufadl/transformers | c84bb6eb92b654e04a82fada26417fcdab45f3af | [
"Apache-2.0"
] | 5 | 2020-12-05T12:10:34.000Z | 2021-03-04T19:01:25.000Z | tests/test_benchmark_tf.py | abufadl/transformers | c84bb6eb92b654e04a82fada26417fcdab45f3af | [
"Apache-2.0"
] | 2 | 2020-09-03T13:54:34.000Z | 2020-09-25T19:01:29.000Z | tests/test_benchmark_tf.py | abufadl/transformers | c84bb6eb92b654e04a82fada26417fcdab45f3af | [
"Apache-2.0"
] | 3 | 2020-10-10T10:56:18.000Z | 2020-12-04T20:54:39.000Z | import os
import tempfile
import unittest
from pathlib import Path
from transformers import AutoConfig, is_tf_available
from transformers.testing_utils import require_tf
if is_tf_available():
import tensorflow as tf
from transformers import TensorFlowBenchmark, TensorFlowBenchmarkArguments
@require_tf
class TFBenchmarkTest(unittest.TestCase):
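    # Shared assertion helper: walks each model's nested result dict
    # (batch size -> sequence length) and checks every benchmarked entry exists.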
def check_results_dict_not_empty(self, results):
for model_result in results.values():
for batch_size, sequence_length in zip(model_result["bs"], model_result["ss"]):
result = model_result["result"][batch_size][sequence_length]
self.assertIsNotNone(result)
def test_inference_no_configs_eager(self):
MODEL_ID = "sshleifer/tiny-gpt2"
benchmark_args = TensorFlowBenchmarkArguments(
models=[MODEL_ID],
training=False,
no_inference=False,
sequence_lengths=[8],
batch_sizes=[1],
eager_mode=True,
no_multi_process=True,
)
benchmark = TensorFlowBenchmark(benchmark_args)
results = benchmark.run()
self.check_results_dict_not_empty(results.time_inference_result)
self.check_results_dict_not_empty(results.memory_inference_result)
def test_inference_no_configs_only_pretrain(self):
MODEL_ID = "sshleifer/tiny-distilbert-base-uncased-finetuned-sst-2-english"
benchmark_args = TensorFlowBenchmarkArguments(
models=[MODEL_ID],
training=False,
no_inference=False,
sequence_lengths=[8],
batch_sizes=[1],
no_multi_process=True,
only_pretrain_model=True,
)
benchmark = TensorFlowBenchmark(benchmark_args)
results = benchmark.run()
self.check_results_dict_not_empty(results.time_inference_result)
self.check_results_dict_not_empty(results.memory_inference_result)
def test_inference_no_configs_graph(self):
MODEL_ID = "sshleifer/tiny-gpt2"
benchmark_args = TensorFlowBenchmarkArguments(
models=[MODEL_ID],
training=False,
no_inference=False,
sequence_lengths=[8],
batch_sizes=[1],
no_multi_process=True,
)
benchmark = TensorFlowBenchmark(benchmark_args)
results = benchmark.run()
self.check_results_dict_not_empty(results.time_inference_result)
self.check_results_dict_not_empty(results.memory_inference_result)
def test_inference_with_configs_eager(self):
MODEL_ID = "sshleifer/tiny-gpt2"
config = AutoConfig.from_pretrained(MODEL_ID)
benchmark_args = TensorFlowBenchmarkArguments(
models=[MODEL_ID],
training=False,
no_inference=False,
sequence_lengths=[8],
batch_sizes=[1],
eager_mode=True,
no_multi_process=True,
)
benchmark = TensorFlowBenchmark(benchmark_args, [config])
results = benchmark.run()
self.check_results_dict_not_empty(results.time_inference_result)
self.check_results_dict_not_empty(results.memory_inference_result)
def test_inference_with_configs_graph(self):
MODEL_ID = "sshleifer/tiny-gpt2"
config = AutoConfig.from_pretrained(MODEL_ID)
benchmark_args = TensorFlowBenchmarkArguments(
models=[MODEL_ID],
training=False,
no_inference=False,
sequence_lengths=[8],
batch_sizes=[1],
no_multi_process=True,
)
benchmark = TensorFlowBenchmark(benchmark_args, [config])
results = benchmark.run()
self.check_results_dict_not_empty(results.time_inference_result)
self.check_results_dict_not_empty(results.memory_inference_result)
def test_train_no_configs(self):
MODEL_ID = "sshleifer/tiny-gpt2"
benchmark_args = TensorFlowBenchmarkArguments(
models=[MODEL_ID],
training=True,
no_inference=True,
sequence_lengths=[8],
batch_sizes=[1],
no_multi_process=True,
)
benchmark = TensorFlowBenchmark(benchmark_args)
results = benchmark.run()
self.check_results_dict_not_empty(results.time_train_result)
self.check_results_dict_not_empty(results.memory_train_result)
def test_train_with_configs(self):
MODEL_ID = "sshleifer/tiny-gpt2"
config = AutoConfig.from_pretrained(MODEL_ID)
benchmark_args = TensorFlowBenchmarkArguments(
models=[MODEL_ID],
training=True,
no_inference=True,
sequence_lengths=[8],
batch_sizes=[1],
no_multi_process=True,
)
benchmark = TensorFlowBenchmark(benchmark_args, [config])
results = benchmark.run()
self.check_results_dict_not_empty(results.time_train_result)
self.check_results_dict_not_empty(results.memory_train_result)
def test_inference_encoder_decoder_with_configs(self):
MODEL_ID = "patrickvonplaten/t5-tiny-random"
config = AutoConfig.from_pretrained(MODEL_ID)
benchmark_args = TensorFlowBenchmarkArguments(
models=[MODEL_ID],
training=False,
no_inference=False,
sequence_lengths=[8],
batch_sizes=[1],
no_multi_process=True,
)
benchmark = TensorFlowBenchmark(benchmark_args, configs=[config])
results = benchmark.run()
self.check_results_dict_not_empty(results.time_inference_result)
self.check_results_dict_not_empty(results.memory_inference_result)
@unittest.skipIf(is_tf_available() and len(tf.config.list_physical_devices("GPU")) == 0, "Cannot do xla on CPU.")
def test_inference_no_configs_xla(self):
MODEL_ID = "sshleifer/tiny-gpt2"
benchmark_args = TensorFlowBenchmarkArguments(
models=[MODEL_ID],
training=False,
no_inference=False,
sequence_lengths=[8],
batch_sizes=[1],
use_xla=True,
no_multi_process=True,
)
benchmark = TensorFlowBenchmark(benchmark_args)
results = benchmark.run()
self.check_results_dict_not_empty(results.time_inference_result)
self.check_results_dict_not_empty(results.memory_inference_result)
def test_save_csv_files(self):
MODEL_ID = "sshleifer/tiny-gpt2"
with tempfile.TemporaryDirectory() as tmp_dir:
benchmark_args = TensorFlowBenchmarkArguments(
models=[MODEL_ID],
no_inference=False,
save_to_csv=True,
sequence_lengths=[8],
batch_sizes=[1],
inference_time_csv_file=os.path.join(tmp_dir, "inf_time.csv"),
inference_memory_csv_file=os.path.join(tmp_dir, "inf_mem.csv"),
env_info_csv_file=os.path.join(tmp_dir, "env.csv"),
no_multi_process=True,
)
benchmark = TensorFlowBenchmark(benchmark_args)
benchmark.run()
self.assertTrue(Path(os.path.join(tmp_dir, "inf_time.csv")).exists())
self.assertTrue(Path(os.path.join(tmp_dir, "inf_mem.csv")).exists())
self.assertTrue(Path(os.path.join(tmp_dir, "env.csv")).exists())
def test_trace_memory(self):
MODEL_ID = "sshleifer/tiny-gpt2"
def _check_summary_is_not_empty(summary):
self.assertTrue(hasattr(summary, "sequential"))
self.assertTrue(hasattr(summary, "cumulative"))
self.assertTrue(hasattr(summary, "current"))
self.assertTrue(hasattr(summary, "total"))
with tempfile.TemporaryDirectory() as tmp_dir:
benchmark_args = TensorFlowBenchmarkArguments(
models=[MODEL_ID],
no_inference=False,
sequence_lengths=[8],
batch_sizes=[1],
log_filename=os.path.join(tmp_dir, "log.txt"),
log_print=True,
trace_memory_line_by_line=True,
eager_mode=True,
no_multi_process=True,
)
benchmark = TensorFlowBenchmark(benchmark_args)
result = benchmark.run()
_check_summary_is_not_empty(result.inference_summary)
self.assertTrue(Path(os.path.join(tmp_dir, "log.txt")).exists())
| 39.920188 | 117 | 0.648595 | 914 | 8,503 | 5.676149 | 0.141138 | 0.035081 | 0.058597 | 0.069584 | 0.80185 | 0.774287 | 0.758867 | 0.738049 | 0.701041 | 0.688126 | 0 | 0.005445 | 0.265671 | 8,503 | 212 | 118 | 40.108491 | 0.825432 | 0 | 0 | 0.673575 | 0 | 0 | 0.047513 | 0.010937 | 0 | 0 | 0 | 0 | 0.046632 | 1 | 0.067358 | false | 0 | 0.041451 | 0 | 0.11399 | 0.005181 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8679efb1052dc965214aaea2ded1b11a4fe5be92 | 62 | py | Python | api/app/config/development.py | rdkap42/caedus-covid | f64a833bdf386708fcb9394f94026c48f8d474ee | [
"MIT"
] | 10 | 2020-03-17T21:21:50.000Z | 2020-04-30T02:30:47.000Z | api/app/config/production.py | rdkap42/caedus-covid | f64a833bdf386708fcb9394f94026c48f8d474ee | [
"MIT"
] | 5 | 2020-03-17T04:39:03.000Z | 2021-04-30T21:11:14.000Z | api/app/config/production.py | rdkap42/caedus-covid | f64a833bdf386708fcb9394f94026c48f8d474ee | [
"MIT"
] | null | null | null | from .base import BaseConfig
class Config(BaseConfig):
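    # Per-environment config: development.py and production.py both ship this
    # empty subclass; environment-specific overrides would be added here.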
pass | 15.5 | 28 | 0.790323 | 8 | 62 | 6.125 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.145161 | 62 | 4 | 29 | 15.5 | 0.924528 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
867c62d072329752d8493634548419a768eb7c28 | 77,879 | py | Python | backend/api/tests/test_compliance_reporting.py | amichard/tfrs | ed3973016cc5c2ae48999d550a23b41a5ddad807 | [
"Apache-2.0"
] | 18 | 2017-05-10T21:55:11.000Z | 2021-03-01T16:41:32.000Z | backend/api/tests/test_compliance_reporting.py | amichard/tfrs | ed3973016cc5c2ae48999d550a23b41a5ddad807 | [
"Apache-2.0"
] | 1,167 | 2017-03-04T00:18:43.000Z | 2022-03-03T22:31:51.000Z | backend/api/tests/test_compliance_reporting.py | amichard/tfrs | ed3973016cc5c2ae48999d550a23b41a5ddad807 | [
"Apache-2.0"
] | 48 | 2017-03-09T17:19:39.000Z | 2022-02-24T16:38:17.000Z | # -*- coding: utf-8 -*-
# pylint: disable=no-member,invalid-name
"""
    REST API Documentation for the NRS TFRS Credit Trading Application
The Transportation Fuels Reporting System is being designed to streamline
compliance reporting for transportation fuel suppliers in accordance with
the Renewable & Low Carbon Fuel Requirements Regulation.
OpenAPI spec version: v1
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import json
from django.utils import timezone
from rest_framework import status
from api.models import OrganizationBalance
from api.models.CompliancePeriod import CompliancePeriod
from api.models.ComplianceReport import ComplianceReport, ComplianceReportStatus, ComplianceReportType, \
ComplianceReportWorkflowState
from api.models.NotificationMessage import NotificationMessage
from api.models.Organization import Organization
from .base_test_case import BaseTestCase
class TestComplianceReporting(BaseTestCase):
"""Tests for the compliance reporting endpoint"""
extra_fixtures = [
'test/test_compliance_reporting.json',
'test/test_fuel_codes.json',
'test/test_unit_of_measures.json',
'test/test_carbon_intensity_limits.json',
'test/test_default_carbon_intensities.json',
'test/test_energy_densities.json',
'test/test_energy_effectiveness_ratio.json',
'test/test_petroleum_carbon_intensities.json',
'test/test_transaction_types.json'
]
def _create_compliance_report(self, report_type="Compliance Report"):
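        """Create and save a draft compliance report for Test Org 1 (2018 period) and return its id."""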
report = ComplianceReport()
report.status = ComplianceReportWorkflowState.objects.create(
fuel_supplier_status=ComplianceReportStatus.objects.get_by_natural_key('Draft')
)
report.organization = Organization.objects.get_by_natural_key(
"Test Org 1")
report.compliance_period = CompliancePeriod.objects.get_by_natural_key('2018')
report.type = ComplianceReportType.objects.get_by_natural_key(report_type)
report.create_timestamp = timezone.now()
report.update_timestamp = timezone.now()
report.save()
report.refresh_from_db()
return report.id
def test_list_compliance_reports_fs1(self):
response = self.clients['fs_user_1'].get('/api/compliance_reports')
self.assertEqual(response.status_code, status.HTTP_200_OK)
compliance_reports = response.json()
self.assertEqual(len(compliance_reports), 3)
def test_list_compliance_reports_unauthorized(self):
response = self.clients['fs_user_2'].get('/api/compliance_reports')
self.assertEqual(response.status_code, status.HTTP_200_OK)
def test_list_compliance_gov(self):
response = self.clients['gov_analyst'].get('/api/compliance_reports')
self.assertEqual(response.status_code, status.HTTP_200_OK)
compliance_reports = response.json()
self.assertEqual(len(compliance_reports), 1)
def test_get_compliance_report_details_authorized(self):
rid = self._create_compliance_report()
response = self.clients['fs_user_1'].get('/api/compliance_reports/{id}'.format(id=rid))
self.assertEqual(response.status_code, status.HTTP_200_OK)
def test_get_compliance_report_details_unauthorized(self):
rid = self._create_compliance_report()
response = self.clients['fs_user_2'].get('/api/compliance_reports/{id}'.format(id=rid))
self.assertEqual(response.status_code, status.HTTP_404_NOT_FOUND)
def test_get_compliance_report_details_gov_authorized(self):
response = self.clients['gov_analyst'].get('/api/compliance_reports/2')
self.assertEqual(response.status_code, status.HTTP_200_OK)
def test_get_compliance_report_details_gov_unauthorized(self):
response = self.clients['gov_analyst'].get('/api/compliance_reports/3')
self.assertEqual(response.status_code, status.HTTP_404_NOT_FOUND)
def test_create_draft_compliance_report_authorized(self):
payload = {
'status': {'fuelSupplierStatus': 'Draft'},
'type': 'Compliance Report',
'compliance_period': '2017'
}
response = self.clients['fs_user_1'].post(
'/api/compliance_reports',
content_type='application/json',
data=json.dumps(payload)
)
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
response = self.clients['fs_user_1'].get('/api/compliance_reports')
self.assertEqual(response.status_code, status.HTTP_200_OK)
compliance_reports = response.json()
self.assertEqual(len(compliance_reports), 4)
def test_row_ordering(self):
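        """Each schedule's rows should come back in the order they were submitted."""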
payload = {
'scheduleB': {
'records': [
{
'fuelType': 'CNG',
'fuelClass': 'Diesel',
'quantity': 10,
'provisionOfTheAct': 'Section 6 (5) (d) (ii) (B)',
'fuelCode': None,
'intensity': 12
},
{
'fuelType': 'CNG',
'fuelClass': 'Diesel',
'quantity': 5,
'provisionOfTheAct': 'Section 6 (5) (d) (ii) (B)',
'fuelCode': None,
'intensity': 13
}
]
},
'scheduleC': {
'records': [
{
'fuelType': 'LNG',
'fuelClass': 'Diesel',
'quantity': 10,
'expectedUse': 'Other',
'rationale': 'Test rationale 1'
},
{
'fuelType': 'LNG',
'fuelClass': 'Diesel',
'quantity': 20,
'expectedUse': 'Other',
'rationale': 'Test rationale 2 '
},
{
'fuelType': 'LNG',
'fuelClass': 'Diesel',
'quantity': 30,
'expectedUse': 'Other',
'rationale': 'Test rationale 3'
},
{
'fuelType': 'LNG',
'fuelClass': 'Diesel',
'quantity': 40,
'expectedUse': 'Other',
'rationale': 'Test rationale 4 '
}
]
},
'scheduleA': {
'records': [
{
'tradingPartner': 'CD',
'postalAddress': '123 Main St\nVictoria, BC',
'fuelClass': 'Diesel',
'transferType': 'Received',
'quantity': 98
},
{
'tradingPartner': 'AB',
'postalAddress': '123 Main St\nVictoria, BC',
'fuelClass': 'Diesel',
'transferType': 'Received',
'quantity': 99
},
{
'tradingPartner': 'EF',
'postalAddress': '123 Main St\nVictoria, BC',
'fuelClass': 'Diesel',
'transferType': 'Received',
'quantity': 100
}
]
},
'scheduleD': {
'sheets': [
{
'fuelType': 'LNG',
'fuelClass': 'Diesel',
'feedstock': 'Corn',
'inputs': [
{
'worksheet_name': 'GHG Inputs',
'cell': 'A1',
'value': '10',
'units': 'tonnes',
'description': 'test',
},
{
'worksheet_name': 'GHG Inputs',
'cell': 'A1',
'value': '20',
'units': 'percent',
}
],
'outputs': [
{'description': 'Fuel Dispensing', 'intensity': '1.3'},
{'description': 'Fuel Distribution and Storage', 'intensity': '1.3'},
{'description': 'Fuel Production', 'intensity': '1.3'},
{'description': 'Feedstock Transmission', 'intensity': '1.3'},
{'description': 'Feedstock Recovery', 'intensity': '1.3'},
{'description': 'Feedstock Upgrading', 'intensity': '1.3'},
{'description': 'Land Use Change', 'intensity': '1.3'},
{'description': 'Fertilizer Manufacture', 'intensity': '1.3'},
{'description': 'Gas Leaks and Flares', 'intensity': '1.3'},
{'description': 'CO₂ and H₂S Removed', 'intensity': '1.3'},
{'description': 'Emissions Displaced', 'intensity': '1.3'},
{'description': 'Fuel Use (High Heating Value)', 'intensity': '1.3'}
]
},
{
'fuelType': 'CNG',
'fuelClass': 'Diesel',
'feedstock': 'Corn',
'inputs': [
{
'worksheet_name': 'GHG Inputs',
'cell': 'B1',
'value': '10',
'units': 'tonnes',
'description': 'test',
},
{
'worksheet_name': 'GHG Inputs',
'cell': 'B1',
'value': '20',
'units': 'percent',
}
],
'outputs': [
{'description': 'Fuel Dispensing', 'intensity': '1.3'},
{'description': 'Fuel Distribution and Storage', 'intensity': '1.3'},
{'description': 'Fuel Production', 'intensity': '1.3'},
{'description': 'Feedstock Transmission', 'intensity': '1.3'},
{'description': 'Feedstock Recovery', 'intensity': '1.3'},
{'description': 'Feedstock Upgrading', 'intensity': '1.3'},
{'description': 'Land Use Change', 'intensity': '1.3'},
{'description': 'Fertilizer Manufacture', 'intensity': '1.3'},
{'description': 'Gas Leaks and Flares', 'intensity': '1.3'},
{'description': 'CO₂ and H₂S Removed', 'intensity': '1.3'},
{'description': 'Emissions Displaced', 'intensity': '1.3'},
{'description': 'Fuel Use (High Heating Value)', 'intensity': '1.3'}
]
},
{
'fuelType': 'CNG',
'fuelClass': 'Diesel',
'feedstock': 'Wheat',
'inputs': [
{
'worksheet_name': 'GHG Inputs',
'cell': 'B1',
'value': '10',
'units': 'tonnes',
'description': 'test',
},
{
'worksheet_name': 'GHG Inputs',
'cell': 'B1',
'value': '20',
'units': 'percent',
}
],
'outputs': [
{'description': 'Fuel Dispensing', 'intensity': '1.3'},
{'description': 'Fuel Distribution and Storage', 'intensity': '1.3'},
{'description': 'Fuel Production', 'intensity': '1.3'},
{'description': 'Feedstock Transmission', 'intensity': '1.3'},
{'description': 'Feedstock Recovery', 'intensity': '1.3'},
{'description': 'Feedstock Upgrading', 'intensity': '1.3'},
{'description': 'Land Use Change', 'intensity': '1.3'},
{'description': 'Fertilizer Manufacture', 'intensity': '1.3'},
{'description': 'Gas Leaks and Flares', 'intensity': '1.3'},
{'description': 'CO₂ and H₂S Removed', 'intensity': '1.3'},
{'description': 'Emissions Displaced', 'intensity': '1.3'},
{'description': 'Fuel Use (High Heating Value)', 'intensity': '1.3'}
]
}
]
},
}
rid = self._create_compliance_report()
response = self.clients['fs_user_1'].patch(
'/api/compliance_reports/{id}'.format(id=rid),
content_type='application/json',
data=json.dumps(payload)
)
response_data = json.loads(response.content.decode("utf-8"))
self.assertEqual(response_data['scheduleA']['records'][0]['tradingPartner'], 'CD')
self.assertEqual(response_data['scheduleA']['records'][1]['tradingPartner'], 'AB')
self.assertEqual(response_data['scheduleA']['records'][2]['tradingPartner'], 'EF')
self.assertEqual(response_data['scheduleB']['records'][0]['quantity'], '10.00')
self.assertEqual(response_data['scheduleB']['records'][1]['quantity'], '5.00')
self.assertEqual(response_data['scheduleC']['records'][0]['quantity'], '10.00')
self.assertEqual(response_data['scheduleC']['records'][1]['quantity'], '20.00')
self.assertEqual(response_data['scheduleC']['records'][2]['quantity'], '30.00')
self.assertEqual(response_data['scheduleC']['records'][3]['quantity'], '40.00')
self.assertEqual(response_data['scheduleD']['sheets'][0]['fuelType'], 'LNG')
self.assertEqual(response_data['scheduleD']['sheets'][0]['feedstock'], 'Corn')
self.assertEqual(response_data['scheduleD']['sheets'][0]['inputs'][0]['value'], '10')
self.assertEqual(response_data['scheduleD']['sheets'][0]['inputs'][1]['value'], '20')
self.assertEqual(response_data['scheduleD']['sheets'][1]['fuelType'], 'CNG')
self.assertEqual(response_data['scheduleD']['sheets'][1]['feedstock'], 'Corn')
self.assertEqual(response_data['scheduleD']['sheets'][1]['inputs'][0]['value'], '10')
self.assertEqual(response_data['scheduleD']['sheets'][1]['inputs'][1]['value'], '20')
self.assertEqual(response_data['scheduleD']['sheets'][2]['fuelType'], 'CNG')
self.assertEqual(response_data['scheduleD']['sheets'][2]['feedstock'], 'Wheat')
self.assertEqual(response_data['scheduleD']['sheets'][2]['inputs'][0]['value'], '10')
self.assertEqual(response_data['scheduleD']['sheets'][2]['inputs'][1]['value'], '20')
self.assertEqual(response.status_code, status.HTTP_200_OK)
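# schedule B provision rules: 'Section 6 (5) (d) (ii) (B)' (alternative method) requires an intensity and no fuel code;
# 'Section 6 (5) (c)' (fuel code method) is the reverse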
def test_schedule_b_alternative_method(self):
payload = {
'scheduleB': {
'records': [
{
'fuelType': 'LNG',
'fuelClass': 'Diesel',
'quantity': 10,
'provisionOfTheAct': 'Section 6 (5) (d) (ii) (B)',
'intensity': '23.50'
}
]
}
}
rid = self._create_compliance_report()
response = self.clients['fs_user_1'].patch(
'/api/compliance_reports/{id}'.format(id=rid),
content_type='application/json',
data=json.dumps(payload)
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
response_data = json.loads(response.content.decode("utf-8"))
self.assertEqual(response_data['scheduleB']['records'][0]['intensity'], '23.50')
def test_schedule_b_alternative_method_no_intensity(self):
payload = {
'scheduleB': {
'records': [
{
'fuelType': 'LNG',
'fuelClass': 'Diesel',
'quantity': 10,
'provisionOfTheAct': 'Section 6 (5) (d) (ii) (B)'
# no intensity
}
]
}
}
rid = self._create_compliance_report()
response = self.clients['fs_user_1'].patch(
'/api/compliance_reports/{id}'.format(id=rid),
content_type='application/json',
data=json.dumps(payload)
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
def test_schedule_b_alternative_method_fuel_code(self):
payload = {
'scheduleB': {
'records': [
{
'fuelType': 'LNG',
'fuelClass': 'Diesel',
'quantity': 10,
'provisionOfTheAct': 'Section 6 (5) (d) (ii) (B)',
'intensity': 1,
'fuelCode': 1 # invalid to supply fuel code
}
]
}
}
rid = self._create_compliance_report()
response = self.clients['fs_user_1'].patch(
'/api/compliance_reports/{id}'.format(id=rid),
content_type='application/json',
data=json.dumps(payload)
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
def test_schedule_b_fuel_code_method_intensity(self):
payload = {
'scheduleB': {
'records': [
{
'fuelType': 'LNG',
'fuelClass': 'Diesel',
'quantity': 10,
'provisionOfTheAct': 'Section 6 (5) (c)',
'intensity': 1, # invalid to supply intensity
'fuelCode': 1
}
]
}
}
rid = self._create_compliance_report()
response = self.clients['fs_user_1'].patch(
'/api/compliance_reports/{id}'.format(id=rid),
content_type='application/json',
data=json.dumps(payload)
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
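# provision 'Section 6 (5) (d) (ii) (A)' links a schedule B row to a schedule D sheet via scheduleDSheetIndex; a null index should 400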
def test_schedule_b_d_integration_valid(self):
payload = {
'scheduleB': {
'records': [
{
'fuelType': 'LNG',
'fuelClass': 'Diesel',
'quantity': 10,
'provisionOfTheAct': 'Section 6 (5) (d) (ii) (A)',
'fuelCode': None,
'scheduleDSheetIndex': 1
}
]
},
'scheduleD': {
'sheets': [
{
'fuelType': 'LNG',
'fuelClass': 'Diesel',
'feedstock': 'Corn',
'inputs': [
{
'worksheet_name': 'GHG Inputs',
'cell': 'A1',
'value': '10',
'units': 'tonnes',
'description': 'test',
},
{
'worksheet_name': 'GHG Inputs',
'cell': 'A1',
'value': '20',
'units': 'percent',
}
],
'outputs': [
{'description': 'Fuel Dispensing', 'intensity': '1.3'},
{'description': 'Fuel Distribution and Storage', 'intensity': '1.3'},
{'description': 'Fuel Production', 'intensity': '1.3'},
{'description': 'Feedstock Transmission', 'intensity': '1.3'},
{'description': 'Feedstock Recovery', 'intensity': '1.3'},
{'description': 'Feedstock Upgrading', 'intensity': '1.3'},
{'description': 'Land Use Change', 'intensity': '1.3'},
{'description': 'Fertilizer Manufacture', 'intensity': '1.3'},
{'description': 'Gas Leaks and Flares', 'intensity': '1.3'},
{'description': 'CO₂ and H₂S Removed', 'intensity': '1.3'},
{'description': 'Emissions Displaced', 'intensity': '1.3'},
{'description': 'Fuel Use (High Heating Value)', 'intensity': '1.3'}
]
},
{
'fuelType': 'CNG',
'fuelClass': 'Diesel',
'feedstock': 'Corn',
'inputs': [
{
'worksheet_name': 'GHG Inputs',
'cell': 'B1',
'value': '10',
'units': 'tonnes',
'description': 'test',
},
{
'worksheet_name': 'GHG Inputs',
'cell': 'B1',
'value': '20',
'units': 'percent',
}
],
'outputs': [
{'description': 'Fuel Dispensing', 'intensity': '1.3'},
{'description': 'Fuel Distribution and Storage', 'intensity': '1.3'},
{'description': 'Fuel Production', 'intensity': '1.3'},
{'description': 'Feedstock Transmission', 'intensity': '1.3'},
{'description': 'Feedstock Recovery', 'intensity': '1.3'},
{'description': 'Feedstock Upgrading', 'intensity': '1.3'},
{'description': 'Land Use Change', 'intensity': '1.3'},
{'description': 'Fertilizer Manufacture', 'intensity': '1.3'},
{'description': 'Gas Leaks and Flares', 'intensity': '1.3'},
{'description': 'CO₂ and H₂S Removed', 'intensity': '1.3'},
{'description': 'Emissions Displaced', 'intensity': '1.3'},
{'description': 'Fuel Use (High Heating Value)', 'intensity': '1.3'}
]
}
]
},
}
rid = self._create_compliance_report()
response = self.clients['fs_user_1'].patch(
'/api/compliance_reports/{id}'.format(id=rid),
content_type='application/json',
data=json.dumps(payload)
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
response_data = json.loads(response.content.decode("utf-8"))
# I don't understand why the Django serializer doesn't call it scheduleDSheetIndex
self.assertEqual(response_data['scheduleB']['records'][0]['scheduleD_sheetIndex'], 1)
self.assertEqual(response_data['scheduleB']['records'][0]['intensity'], None)
def test_schedule_b_d_integration_invalid_null(self):
payload = {
'scheduleB': {
'records': [
{
'fuelType': 'LNG',
'fuelClass': 'Diesel',
'quantity': 10,
'provisionOfTheAct': 'Section 6 (5) (d) (ii) (A)',
'fuelCode': None,
'scheduleDSheetIndex': None
}
]
},
'scheduleD': {
'sheets': [
{
'fuelType': 'LNG',
'fuelClass': 'Diesel',
'feedstock': 'Corn',
'inputs': [
{
'worksheet_name': 'GHG Inputs',
'cell': 'A1',
'value': '10',
'units': 'tonnes',
'description': 'test',
},
{
'worksheet_name': 'GHG Inputs',
'cell': 'A1',
'value': '20',
'units': 'percent',
}
],
'outputs': [
{'description': 'Fuel Dispensing', 'intensity': '1.3'},
{'description': 'Fuel Distribution and Storage', 'intensity': '1.3'},
{'description': 'Fuel Production', 'intensity': '1.3'},
{'description': 'Feedstock Transmission', 'intensity': '1.3'},
{'description': 'Feedstock Recovery', 'intensity': '1.3'},
{'description': 'Feedstock Upgrading', 'intensity': '1.3'},
{'description': 'Land Use Change', 'intensity': '1.3'},
{'description': 'Fertilizer Manufacture', 'intensity': '1.3'},
{'description': 'Gas Leaks and Flares', 'intensity': '1.3'},
{'description': 'CO₂ and H₂S Removed', 'intensity': '1.3'},
{'description': 'Emissions Displaced', 'intensity': '1.3'},
{'description': 'Fuel Use (High Heating Value)', 'intensity': '1.3'}
]
},
{
'fuelType': 'CNG',
'fuelClass': 'Diesel',
'feedstock': 'Corn',
'inputs': [
{
'worksheet_name': 'GHG Inputs',
'cell': 'B1',
'value': '10',
'units': 'tonnes',
'description': 'test',
},
{
'worksheet_name': 'GHG Inputs',
'cell': 'B1',
'value': '20',
'units': 'percent',
}
],
'outputs': [
{'description': 'Fuel Dispensing', 'intensity': '1.3'},
{'description': 'Fuel Distribution and Storage', 'intensity': '1.3'},
{'description': 'Fuel Production', 'intensity': '1.3'},
{'description': 'Feedstock Transmission', 'intensity': '1.3'},
{'description': 'Feedstock Recovery', 'intensity': '1.3'},
{'description': 'Feedstock Upgrading', 'intensity': '1.3'},
{'description': 'Land Use Change', 'intensity': '1.3'},
{'description': 'Fertilizer Manufacture', 'intensity': '1.3'},
{'description': 'Gas Leaks and Flares', 'intensity': '1.3'},
{'description': 'CO₂ and H₂S Removed', 'intensity': '1.3'},
{'description': 'Emissions Displaced', 'intensity': '1.3'},
{'description': 'Fuel Use (High Heating Value)', 'intensity': '1.3'}
]
}
]
},
}
rid = self._create_compliance_report()
response = self.clients['fs_user_1'].patch(
'/api/compliance_reports/{id}'.format(id=rid),
content_type='application/json',
data=json.dumps(payload)
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
def test_create_submitted_compliance_report_authorized(self):
payload = {
'status': {'fuelSupplierStatus': 'Submitted'},
'type': 'Compliance Report',
'compliancePeriod': '2019'
}
response = self.clients['fs_user_1'].post(
'/api/compliance_reports',
content_type='application/json',
data=json.dumps(payload)
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
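# each PATCH replaces the schedules it names wholesale; untouched schedules and the summary persist across later patches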
def test_patch_compliance_report(self):
payload = {
'scheduleC': {
'records': [
{
'fuelType': 'LNG',
'fuelClass': 'Diesel',
'quantity': 88,
'expectedUse': 'Other',
'rationale': 'Patched'
}
]
},
'summary': {
'dieselClassRetained': '100',
'dieselClassDeferred': '200',
'gasolineClassRetained': '300',
'gasolineClassDeferred': '400'
}
}
rid = self._create_compliance_report()
response = self.clients['fs_user_1'].patch(
'/api/compliance_reports/{id}'.format(id=rid),
content_type='application/json',
data=json.dumps(payload)
)
response_data = json.loads(response.content.decode("utf-8"))
self.assertIsNotNone(response_data['scheduleC'])
self.assertEqual(len(response_data['scheduleC']['records']), 1)
self.assertIsNotNone(response_data['summary'])
self.assertEqual(response_data['summary']['dieselClassRetained'], '100.00')
self.assertEqual(response.status_code, status.HTTP_200_OK)
payload = {
'scheduleA': {
'records': [
{
'tradingPartner': 'Test 2',
'postalAddress': '123 Main St\nVictoria, BC',
'fuelClass': 'Diesel',
'transferType': 'Received',
'quantity': 4
}
]
},
'scheduleB': {
'records': [
{
'fuelType': 'LNG',
'fuelClass': 'Diesel',
'quantity': 11,
'provisionOfTheAct': 'Section 6 (5) (d) (ii) (B)',
'intensity': 33.2,
},
{
'fuelType': 'LNG',
'fuelClass': 'Diesel',
'quantity': 44,
'provisionOfTheAct': 'Section 6 (5) (d) (ii) (B)',
'intensity': 77.6,
}
]
},
'scheduleC': {
'records': [
{
'fuelType': 'LNG',
'fuelClass': 'Diesel',
'quantity': 89,
'expectedUse': 'Other',
'rationale': 'Patched'
},
{
'fuelType': 'LNG',
'fuelClass': 'Diesel',
'quantity': 88,
'expectedUse': 'Other',
'rationale': 'Patched Again'
}
]
},
'scheduleD': {
'sheets': [
{
'fuelType': 'LNG',
'fuelClass': 'Diesel',
'feedstock': 'Corn',
'inputs': [
{
'worksheet_name': 'GHG Inputs',
'cell': 'A2',
'value': '12.04',
'units': 'tonnes',
'description': 'test',
},
{
'worksheet_name': 'GHG Inputs',
'cell': 'ZZ9ZZA',
'value': 'about 98',
'units': 'percent',
}
],
'outputs': [
{'description': 'Fuel Dispensing', 'intensity': '1.3'},
{'description': 'Fuel Distribution and Storage', 'intensity': '1.3'},
{'description': 'Fuel Production', 'intensity': '1.3'},
{'description': 'Feedstock Transmission', 'intensity': '1.3'},
{'description': 'Feedstock Recovery', 'intensity': '1.3'},
{'description': 'Feedstock Upgrading', 'intensity': '1.3'},
{'description': 'Land Use Change', 'intensity': '1.3'},
{'description': 'Fertilizer Manufacture', 'intensity': '1.3'},
{'description': 'Gas Leaks and Flares', 'intensity': '1.3'},
{'description': 'CO₂ and H₂S Removed', 'intensity': '1.3'},
{'description': 'Emissions Displaced', 'intensity': '1.3'},
{'description': 'Fuel Use (High Heating Value)', 'intensity': '1.3'}
]
}
]
},
}
response = self.clients['fs_user_1'].patch(
'/api/compliance_reports/{id}'.format(id=rid),
content_type='application/json',
data=json.dumps(payload)
)
response_data = json.loads(response.content.decode("utf-8"))
self.assertIsNotNone(response_data['scheduleC'])
self.assertEqual(len(response_data['scheduleC']['records']), 2)
self.assertIsNotNone(response_data['scheduleA'])
self.assertEqual(len(response_data['scheduleA']['records']), 1)
self.assertIsNotNone(response_data['scheduleD'])
self.assertEqual(len(response_data['scheduleD']['sheets']), 1)
self.assertEqual(len(response_data['scheduleD']['sheets'][0]['inputs']), 2)
self.assertEqual(len(response_data['scheduleD']['sheets'][0]['outputs']), 12)
self.assertIsNotNone(response_data['summary'])
self.assertEqual(response.status_code, status.HTTP_200_OK)
response = self.clients['fs_user_1'].get('/api/compliance_reports/{id}'
.format(id=rid))
self.assertEqual(response.status_code, status.HTTP_200_OK)
response_data = json.loads(response.content.decode("utf-8"))
self.assertIsNotNone(response_data['scheduleC'])
self.assertEqual(len(response_data['scheduleC']['records']), 2)
self.assertIsNotNone(response_data['scheduleA'])
self.assertEqual(len(response_data['scheduleA']['records']), 1)
self.assertIsNotNone(response_data['scheduleD'])
self.assertEqual(len(response_data['scheduleD']['sheets']), 1)
self.assertEqual(len(response_data['scheduleD']['sheets'][0]['inputs']), 2)
self.assertEqual(len(response_data['scheduleD']['sheets'][0]['outputs']), 12)
payload = {
'scheduleC': {
'records': [
{
'fuelType': 'LNG',
'fuelClass': 'Diesel',
'quantity': 88,
'expectedUse': 'Other',
'rationale': 'Patched'
},
{
'fuelType': 'LNG',
'fuelClass': 'Diesel',
'quantity': 88,
'expectedUse': 'Other',
'rationale': 'Patched Again'
}
]
}
}
response = self.clients['fs_user_1'].patch(
'/api/compliance_reports/{id}'.format(id=rid),
content_type='application/json',
data=json.dumps(payload)
)
response_data = json.loads(response.content.decode("utf-8"))
self.assertIsNotNone(response_data['scheduleC'])
self.assertEqual(len(response_data['scheduleC']['records']), 2)
self.assertIsNotNone(response_data['scheduleA'])
self.assertEqual(len(response_data['scheduleA']['records']), 1)
self.assertIsNotNone(response_data['scheduleD'])
self.assertEqual(len(response_data['scheduleD']['sheets']), 1)
self.assertEqual(len(response_data['scheduleD']['sheets'][0]['inputs']), 2)
self.assertEqual(len(response_data['scheduleD']['sheets'][0]['outputs']), 12)
self.assertEqual(response.status_code, status.HTTP_200_OK)
def test_update_draft_compliance_report_authorized(self):
payload = {
'status': {'fuelSupplierStatus': 'Submitted'},
}
rid = self._create_compliance_report()
response = self.clients['fs_user_1'].patch(
'/api/compliance_reports/{id}'.format(id=rid),
content_type='application/json',
data=json.dumps(payload)
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
def test_revert_submitted_compliance_report_fails(self):
payload = {
'status': {'fuelSupplierStatus': 'Submitted'},
}
rid = self._create_compliance_report()
response = self.clients['fs_user_1'].patch(
'/api/compliance_reports/{id}'.format(id=rid),
content_type='application/json',
data=json.dumps(payload)
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
payload = {
'status': {'fuelSupplierStatus': 'Draft'},
}
response = self.clients['fs_user_1'].patch(
'/api/compliance_reports/{id}'.format(id=rid),
content_type='application/json',
data=json.dumps(payload)
)
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
def test_patch_submitted_fails(self):
payload = {
'status': {'fuelSupplierStatus': 'Submitted'},
}
rid = self._create_compliance_report()
response = self.clients['fs_user_1'].patch(
'/api/compliance_reports/{id}'.format(id=rid),
content_type='application/json',
data=json.dumps(payload)
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
payload = {
'scheduleB': {
'records': [
{
'fuelType': 'LNG',
'fuelClass': 'Diesel',
'quantity': 211,
'provisionOfTheAct': 'Section 6 (5) (d) (ii) (B)',
'intensity': 88.8,
},
{
'fuelType': 'LNG',
'fuelClass': 'Diesel',
'quantity': 500,
'provisionOfTheAct': 'Section 6 (5) (c)',
'fuelCode': 1
}
]
},
'scheduleC': {
'records': [
{
'fuelType': 'LNG',
'fuelClass': 'Diesel',
'quantity': 400,
'expectedUse': 'Other',
'rationale': 'Patched'
},
{
'fuelType': 'LNG',
'fuelClass': 'Diesel',
'quantity': 200,
'expectedUse': 'Other',
'rationale': 'Patched Again'
}
]
},
}
response = self.clients['fs_user_1'].patch(
'/api/compliance_reports/{id}'.format(id=rid),
content_type='application/json',
data=json.dumps(payload)
)
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
def test_create_draft_compliance_report_unauthorized(self):
payload = {
'status': {'fuelSupplierStatus': 'Draft'},
'type': 'Compliance Report',
'compliance_period': '2019'
}
response = self.clients['fs_user_2'].post(
'/api/compliance_reports',
content_type='application/json',
data=json.dumps(payload)
)
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
def test_create_draft_compliance_report_gov_unauthorized(self):
payload = {
'status': {'fuelSupplierStatus': 'Draft'},
'type': 'Compliance Report',
'compliance_period': '2019'
}
response = self.clients['gov_analyst'].post(
'/api/compliance_reports',
content_type='application/json',
data=json.dumps(payload)
)
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
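# full signing workflow: supplier submits, analyst and manager recommend, director accepts;
# acceptance settles the report against the organization's validated credit balance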
def test_happy_signing_path_results_in_reduction(self):
initial_balance = self.users['fs_user_1'].organization.organization_balance['validated_credits']
rid = self._create_compliance_report()
payload = {
'status': {
'fuelSupplierStatus': 'Submitted'
},
'scheduleB': {
'records': [
{
'fuelType': 'LNG',
'fuelClass': 'Diesel',
'quantity': 10000,
'provisionOfTheAct': 'Section 6 (5) (d) (ii) (B)',
'intensity': 100,
},
]
},
'summary': {
'creditsOffset': 3,
}
}
response = self.clients['fs_user_1'].patch(
'/api/compliance_reports/{id}'.format(id=rid),
content_type='application/json',
data=json.dumps(payload)
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
payload = {
'status': {
'analystStatus': 'Recommended'
}
}
response = self.clients['gov_analyst'].patch(
'/api/compliance_reports/{id}'.format(id=rid),
content_type='application/json',
data=json.dumps(payload)
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
payload = {
'status': {
'managerStatus': 'Recommended'
}
}
response = self.clients['gov_manager'].patch(
'/api/compliance_reports/{id}'.format(id=rid),
content_type='application/json',
data=json.dumps(payload)
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
payload = {
'status': {
'directorStatus': 'Accepted'
}
}
response = self.clients['gov_director'].patch(
'/api/compliance_reports/{id}'.format(id=rid),
content_type='application/json',
data=json.dumps(payload)
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
response = self.clients['fs_user_1'].get(
'/api/compliance_reports/{id}'.format(id=rid)
)
response_data = json.loads(response.content.decode("utf-8"))
self.assertEqual(response_data['status']['fuelSupplierStatus'], 'Submitted')
self.assertEqual(response_data['status']['analystStatus'], None) # hidden
self.assertEqual(response_data['status']['managerStatus'], None) # hidden
self.assertEqual(response_data['status']['directorStatus'], 'Accepted')
self.assertEqual(response_data['actor'], 'FUEL_SUPPLIER')
self.assertListEqual(response_data['actions'], ['CREATE_SUPPLEMENTAL'])
self.assertEqual(response.status_code, status.HTTP_200_OK)
final_balance = self.users['fs_user_1'].organization.organization_balance['validated_credits']
self.assertLess(final_balance, initial_balance)
def test_happy_signing_path_results_in_validation(self):
initial_balance = self.users['fs_user_1'].organization.organization_balance['validated_credits']
rid = self._create_compliance_report()
payload = {
'status': {
'fuelSupplierStatus': 'Submitted'
},
'scheduleB': {
'records': [
{
'fuelType': 'LNG',
'fuelClass': 'Diesel',
'quantity': 1000000,
'provisionOfTheAct': 'Section 6 (5) (d) (ii) (A)',
'fuelCode': None,
'scheduleDSheetIndex': 0
}
]
},
'scheduleD': {
'sheets': [
{
'fuelType': 'LNG',
'fuelClass': 'Diesel',
'feedstock': 'Corn',
'inputs': [
{
'worksheet_name': 'GHG Inputs',
'cell': 'A1',
'value': '10',
'units': 'tonnes',
'description': 'test',
},
{
'worksheet_name': 'GHG Inputs',
'cell': 'A1',
'value': '20',
'units': 'percent',
}
],
'outputs': [
{'description': 'Fuel Dispensing', 'intensity': '1.3'},
{'description': 'Fuel Distribution and Storage', 'intensity': '1.3'},
{'description': 'Fuel Production', 'intensity': '1.3'},
{'description': 'Feedstock Transmission', 'intensity': '1.3'},
{'description': 'Feedstock Recovery', 'intensity': '1.3'},
{'description': 'Feedstock Upgrading', 'intensity': '1.3'},
{'description': 'Land Use Change', 'intensity': '1.3'},
{'description': 'Fertilizer Manufacture', 'intensity': '1.3'},
{'description': 'Gas Leaks and Flares', 'intensity': '1.3'},
{'description': 'CO₂ and H₂S Removed', 'intensity': '1.3'},
{'description': 'Emissions Displaced', 'intensity': '1.3'},
{'description': 'Fuel Use (High Heating Value)', 'intensity': '1.3'}
]
}
]
},
'summary': {
'creditsOffset': 0,
}
}
response = self.clients['fs_user_1'].patch(
'/api/compliance_reports/{id}'.format(id=rid),
content_type='application/json',
data=json.dumps(payload)
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
payload = {
'status': {
'analystStatus': 'Recommended'
}
}
response = self.clients['gov_analyst'].patch(
'/api/compliance_reports/{id}'.format(id=rid),
content_type='application/json',
data=json.dumps(payload)
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
payload = {
'status': {
'managerStatus': 'Recommended'
}
}
response = self.clients['gov_manager'].patch(
'/api/compliance_reports/{id}'.format(id=rid),
content_type='application/json',
data=json.dumps(payload)
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
payload = {
'status': {
'directorStatus': 'Accepted'
}
}
response = self.clients['gov_director'].patch(
'/api/compliance_reports/{id}'.format(id=rid),
content_type='application/json',
data=json.dumps(payload)
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
response = self.clients['fs_user_1'].get(
'/api/compliance_reports/{id}'.format(id=rid)
)
response_data = json.loads(response.content.decode("utf-8"))
self.assertEqual(response_data['status']['fuelSupplierStatus'], 'Submitted')
self.assertEqual(response_data['status']['analystStatus'], None) # hidden
self.assertEqual(response_data['status']['managerStatus'], None) # hidden
self.assertEqual(response_data['status']['directorStatus'], 'Accepted')
self.assertEqual(response_data['actor'], 'FUEL_SUPPLIER')
self.assertListEqual(response_data['actions'], ['CREATE_SUPPLEMENTAL'])
self.assertEqual(response.status_code, status.HTTP_200_OK)
final_balance = self.users['fs_user_1'].organization.organization_balance['validated_credits']
self.assertGreater(final_balance, initial_balance)
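# same workflow, then a supplemental report that replaces the original and nets out to a credit validation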
def test_happy_signing_path_with_supplemental_results_in_validation(self):
initial_balance = self.users['fs_user_1'].organization.organization_balance['validated_credits']
rid = self._create_compliance_report()
payload = {
'status': {
'fuelSupplierStatus': 'Submitted'
},
'scheduleB': {
'records': [
{
'fuelType': 'LNG',
'fuelClass': 'Diesel',
'quantity': 20,
'provisionOfTheAct': 'Section 6 (5) (d) (ii) (A)',
'fuelCode': None,
'scheduleDSheetIndex': 0
},
{
'fuelType': 'LNG',
'fuelClass': 'Diesel',
'quantity': 3000000,
'provisionOfTheAct': 'Section 6 (5) (d) (ii) (B)',
'intensity': 120,
}
]
},
'scheduleD': {
'sheets': [
{
'fuelType': 'LNG',
'fuelClass': 'Diesel',
'feedstock': 'Corn',
'inputs': [
{
'worksheet_name': 'GHG Inputs',
'cell': 'A1',
'value': '10',
'units': 'tonnes',
'description': 'test',
},
{
'worksheet_name': 'GHG Inputs',
'cell': 'A1',
'value': '20',
'units': 'percent',
}
],
'outputs': [
{'description': 'Fuel Dispensing', 'intensity': '1.3'},
{'description': 'Fuel Distribution and Storage', 'intensity': '1.3'},
{'description': 'Fuel Production', 'intensity': '1.3'},
{'description': 'Feedstock Transmission', 'intensity': '1.3'},
{'description': 'Feedstock Recovery', 'intensity': '1.3'},
{'description': 'Feedstock Upgrading', 'intensity': '1.3'},
{'description': 'Land Use Change', 'intensity': '1.3'},
{'description': 'Fertilizer Manufacture', 'intensity': '1.3'},
{'description': 'Gas Leaks and Flares', 'intensity': '1.3'},
{'description': 'CO₂ and H₂S Removed', 'intensity': '1.3'},
{'description': 'Emissions Displaced', 'intensity': '1.3'},
{'description': 'Fuel Use (High Heating Value)', 'intensity': '1.3'}
]
}
]
},
'summary': {
'creditsOffset': 5,
}
}
response = self.clients['fs_user_1'].patch(
'/api/compliance_reports/{id}'.format(id=rid),
content_type='application/json',
data=json.dumps(payload)
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
payload = {
'status': {
'analystStatus': 'Recommended'
}
}
response = self.clients['gov_analyst'].patch(
'/api/compliance_reports/{id}'.format(id=rid),
content_type='application/json',
data=json.dumps(payload)
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
payload = {
'status': {
'managerStatus': 'Recommended'
}
}
response = self.clients['gov_manager'].patch(
'/api/compliance_reports/{id}'.format(id=rid),
content_type='application/json',
data=json.dumps(payload)
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
payload = {
'status': {
'directorStatus': 'Accepted'
}
}
response = self.clients['gov_director'].patch(
'/api/compliance_reports/{id}'.format(id=rid),
content_type='application/json',
data=json.dumps(payload)
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
response = self.clients['fs_user_1'].get(
'/api/compliance_reports/{id}'.format(id=rid)
)
response_data = json.loads(response.content.decode("utf-8"))
self.assertEqual(response_data['status']['fuelSupplierStatus'], 'Submitted')
self.assertEqual(response_data['status']['analystStatus'], None) # hidden
self.assertEqual(response_data['status']['managerStatus'], None) # hidden
self.assertEqual(response_data['status']['directorStatus'], 'Accepted')
self.assertEqual(response_data['actor'], 'FUEL_SUPPLIER')
self.assertListEqual(response_data['actions'], ['CREATE_SUPPLEMENTAL'])
self.assertEqual(response.status_code, status.HTTP_200_OK)
intermediate_balance = self.users['fs_user_1'].organization.organization_balance['validated_credits']
self.assertLess(intermediate_balance, initial_balance)
# create a supplemental
payload = {
'supplements': rid,
'status': {'fuelSupplierStatus': 'Draft'},
'type': 'Compliance Report',
'compliancePeriod': '2019'
}
response = self.clients['fs_user_1'].post(
'/api/compliance_reports',
content_type='application/json',
data=json.dumps(payload)
)
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
sid = response.json()['id']
payload = {
'status': {
'fuelSupplierStatus': 'Submitted'
},
'scheduleB': {
'records': [
{
'fuelType': 'LNG',
'fuelClass': 'Diesel',
'quantity': 40000000,
'provisionOfTheAct': 'Section 6 (5) (d) (ii) (A)',
'fuelCode': None,
'scheduleDSheetIndex': 0
},
{
'fuelType': 'LNG',
'fuelClass': 'Diesel',
'quantity': 30,
'provisionOfTheAct': 'Section 6 (5) (d) (ii) (B)',
'intensity': 120,
}
]
},
'summary': {
'creditsOffset': 0,
},
'supplementalNote': 'Forgot a railcar or two'
}
response = self.clients['fs_user_1'].patch(
'/api/compliance_reports/{id}'.format(id=sid),
content_type='application/json',
data=json.dumps(payload)
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
payload = {
'status': {
'analystStatus': 'Recommended'
}
}
response = self.clients['gov_analyst'].patch(
'/api/compliance_reports/{id}'.format(id=sid),
content_type='application/json',
data=json.dumps(payload)
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
payload = {
'status': {
'managerStatus': 'Recommended'
}
}
response = self.clients['gov_manager'].patch(
'/api/compliance_reports/{id}'.format(id=sid),
content_type='application/json',
data=json.dumps(payload)
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
payload = {
'status': {
'directorStatus': 'Accepted'
}
}
response = self.clients['gov_director'].patch(
'/api/compliance_reports/{id}'.format(id=sid),
content_type='application/json',
data=json.dumps(payload)
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
response = self.clients['fs_user_1'].get(
'/api/compliance_reports/{id}'.format(id=sid)
)
response_data = json.loads(response.content.decode("utf-8"))
self.assertEqual(response_data['status']['fuelSupplierStatus'], 'Submitted')
self.assertEqual(response_data['status']['analystStatus'], None) # hidden
self.assertEqual(response_data['status']['managerStatus'], None) # hidden
self.assertEqual(response_data['status']['directorStatus'], 'Accepted')
self.assertEqual(response_data['actor'], 'FUEL_SUPPLIER')
self.assertListEqual(response_data['actions'], ['CREATE_SUPPLEMENTAL'])
self.assertEqual(response.status_code, status.HTTP_200_OK)
final_balance = self.users['fs_user_1'].organization.organization_balance['validated_credits']
self.assertGreater(final_balance, initial_balance)
self.assertGreater(final_balance, intermediate_balance)
def test_create_supplemental(self):
rid = self._create_compliance_report()
payload = {
'status': {
'fuelSupplierStatus': 'Submitted'
},
'scheduleC': {
'records': [
{
'fuelType': 'LNG',
'fuelClass': 'Diesel',
'quantity': 10,
'expectedUse': 'Other',
'rationale': 'Test rationale 1'
},
{
'fuelType': 'LNG',
'fuelClass': 'Diesel',
'quantity': 20,
'expectedUse': 'Other',
'rationale': 'Test rationale 2 '
},
{
'fuelType': 'LNG',
'fuelClass': 'Diesel',
'quantity': 30,
'expectedUse': 'Other',
'rationale': 'Test rationale 3'
},
{
'fuelType': 'LNG',
'fuelClass': 'Diesel',
'quantity': 40,
'expectedUse': 'Other',
'rationale': 'Test rationale 4 '
}
]
},
'scheduleA': {
'records': [
{
'tradingPartner': 'CD',
'postalAddress': '123 Main St\nVictoria, BC',
'fuelClass': 'Diesel',
'transferType': 'Received',
'quantity': 98
},
{
'tradingPartner': 'AB',
'postalAddress': '123 Main St\nVictoria, BC',
'fuelClass': 'Diesel',
'transferType': 'Received',
'quantity': 99
},
{
'tradingPartner': 'EF',
'postalAddress': '123 Main St\nVictoria, BC',
'fuelClass': 'Diesel',
'transferType': 'Received',
'quantity': 100
}
]
},
'scheduleB': {
'records': [
{
'fuelType': 'LNG',
'fuelClass': 'Diesel',
'quantity': 1000000,
'provisionOfTheAct': 'Section 6 (5) (d) (ii) (A)',
'fuelCode': None,
'scheduleDSheetIndex': 0
}
]
},
'scheduleD': {
'sheets': [
{
'fuelType': 'LNG',
'fuelClass': 'Diesel',
'feedstock': 'Corn',
'inputs': [
{
'worksheet_name': 'GHG Inputs',
'cell': 'A1',
'value': '10',
'units': 'tonnes',
'description': 'test',
},
{
'worksheet_name': 'GHG Inputs',
'cell': 'A1',
'value': '20',
'units': 'percent',
}
],
'outputs': [
{'description': 'Fuel Dispensing', 'intensity': '1.3'},
{'description': 'Fuel Distribution and Storage', 'intensity': '1.3'},
{'description': 'Fuel Production', 'intensity': '1.3'},
{'description': 'Feedstock Transmission', 'intensity': '1.3'},
{'description': 'Feedstock Recovery', 'intensity': '1.3'},
{'description': 'Feedstock Upgrading', 'intensity': '1.3'},
{'description': 'Land Use Change', 'intensity': '1.3'},
{'description': 'Fertilizer Manufacture', 'intensity': '1.3'},
{'description': 'Gas Leaks and Flares', 'intensity': '1.3'},
{'description': 'CO₂ and H₂S Removed', 'intensity': '1.3'},
{'description': 'Emissions Displaced', 'intensity': '1.3'},
{'description': 'Fuel Use (High Heating Value)', 'intensity': '1.3'}
]
}
]
},
'summary': {
'creditsOffset': 0,
}
}
response = self.clients['fs_user_1'].patch(
'/api/compliance_reports/{id}'.format(id=rid),
content_type='application/json',
data=json.dumps(payload)
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
payload = {
'supplements': rid,
'status': {'fuelSupplierStatus': 'Draft'},
'type': 'Compliance Report',
'compliancePeriod': '2019'
}
response = self.clients['fs_user_1'].post(
'/api/compliance_reports',
content_type='application/json',
data=json.dumps(payload)
)
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
def test_create_draft_exclusion_report_authorized(self):
payload = {
'status': {'fuelSupplierStatus': 'Draft'},
'type': 'Exclusion Report',
'compliance_period': '2019'
}
response = self.clients['fs_user_1'].post(
'/api/compliance_reports',
content_type='application/json',
data=json.dumps(payload)
)
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
response = self.clients['fs_user_1'].get('/api/compliance_reports')
self.assertEqual(response.status_code, status.HTTP_200_OK)
compliance_reports = response.json()
self.assertEqual(len(compliance_reports), 4)
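# exclusion reports carry an exclusionAgreement section instead of schedules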
def test_patch_exclusion_report(self):
payload = {
'exclusionAgreement': {
'records': [{
'fuelType': "LNG",
'postalAddress':
"P.O. Box 294 Harrison Hot Springs, BC V0M 1K0",
'quantity': 1000,
'quantityNotSold': 500,
'transactionPartner': "Burden Propane Inc.",
'transactionType': "Purchased"
}]
}
}
compliance_report_id = self._create_compliance_report("Exclusion Report")
response = self.clients['fs_user_1'].patch(
'/api/compliance_reports/{id}'.format(id=compliance_report_id),
content_type='application/json',
data=json.dumps(payload)
)
response_data = json.loads(response.content.decode("utf-8"))
self.assertIsNotNone(response_data['exclusionAgreement'])
self.assertEqual(len(response_data['exclusionAgreement']['records']), 1)
self.assertEqual(response.status_code, status.HTTP_200_OK)
payload = {
'exclusionAgreement': {
'records': [{
'fuelType': "LNG",
'postalAddress':
"P.O. Box 294 Harrison Hot Springs, BC V0M 1K0",
'quantity': 1000,
'quantityNotSold': 500,
'transactionPartner': "Burden Propane Inc.",
'transactionType': "Purchased"
}, {
'fuelType': "Ethanol",
'postalAddress':
"1375 Hastings Street Victoria, BC V8Z 2W5",
'quantity': 2000,
'quantityNotSold': 750,
'transactionPartner': "Vancouver Island Propane Services Ltd.",
'transactionType': "Sold"
}]
}
}
response = self.clients['fs_user_1'].patch(
'/api/compliance_reports/{id}'.format(id=compliance_report_id),
content_type='application/json',
data=json.dumps(payload)
)
response_data = json.loads(response.content.decode("utf-8"))
self.assertIsNotNone(response_data['exclusionAgreement'])
self.assertEqual(len(response_data['exclusionAgreement']['records']), 2)
self.assertEqual(response.status_code, status.HTTP_200_OK)
response = self.clients['fs_user_1'].get(
'/api/compliance_reports/{id}'.format(id=compliance_report_id))
self.assertEqual(response.status_code, status.HTTP_200_OK)
response_data = json.loads(response.content.decode("utf-8"))
self.assertIsNotNone(response_data['exclusionAgreement'])
self.assertEqual(len(response_data['exclusionAgreement']['records']), 2)
self.assertEqual(response.status_code, status.HTTP_200_OK)
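# build one report per workflow state, then check the actor label and allowed actions each role sees (404 where the report is hidden from them)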
def test_actions(self):
compliance_report_id = self._create_compliance_report()
reports_to_check = {
'Draft': compliance_report_id
}
compliance_report_id = self._create_compliance_report()
report = ComplianceReport.objects.get(id=compliance_report_id)
report.status.fuel_supplier_status = ComplianceReportStatus.objects.get_by_natural_key('Deleted')
report.status.save()
reports_to_check['Deleted'] = compliance_report_id
compliance_report_id = self._create_compliance_report()
report = ComplianceReport.objects.get(id=compliance_report_id)
report.status.fuel_supplier_status = ComplianceReportStatus.objects.get_by_natural_key('Submitted')
report.status.save()
reports_to_check['Submitted'] = compliance_report_id
compliance_report_id = self._create_compliance_report()
report = ComplianceReport.objects.get(id=compliance_report_id)
report.status.fuel_supplier_status = ComplianceReportStatus.objects.get_by_natural_key('Submitted')
report.status.analyst_status = ComplianceReportStatus.objects.get_by_natural_key('Recommended')
report.status.save()
reports_to_check['Approved1'] = compliance_report_id
compliance_report_id = self._create_compliance_report()
report = ComplianceReport.objects.get(id=compliance_report_id)
report.status.fuel_supplier_status = ComplianceReportStatus.objects.get_by_natural_key('Submitted')
report.status.analyst_status = ComplianceReportStatus.objects.get_by_natural_key('Recommended')
report.status.manager_status = ComplianceReportStatus.objects.get_by_natural_key('Recommended')
report.status.save()
reports_to_check['Approved2'] = compliance_report_id
compliance_report_id = self._create_compliance_report()
report = ComplianceReport.objects.get(id=compliance_report_id)
report.status.fuel_supplier_status = ComplianceReportStatus.objects.get_by_natural_key('Submitted')
report.status.analyst_status = ComplianceReportStatus.objects.get_by_natural_key('Recommended')
report.status.manager_status = ComplianceReportStatus.objects.get_by_natural_key('Recommended')
report.status.director_status = ComplianceReportStatus.objects.get_by_natural_key('Accepted')
report.status.save()
reports_to_check['ApprovedFinal'] = compliance_report_id
expected_actions = {
'Draft': {
'fs_user_1': {
'status': 200,
'actor': 'FUEL_SUPPLIER',
'actions': ['SUBMIT', 'DELETE']
},
'gov_analyst': {
'status': 404,
},
'gov_manager': {
'status': 404,
},
'gov_director': {
'status': 404,
}
},
'Deleted': {
'fs_user_1': {
'status': 404,
},
'gov_analyst': {
'status': 404,
},
'gov_manager': {
'status': 404,
},
'gov_director': {
'status': 404,
}
},
'Submitted': {
'fs_user_1': {
'status': 200,
'actor': 'FUEL_SUPPLIER',
'actions': ['CREATE_SUPPLEMENTAL']
},
'gov_analyst': {
'status': 200,
'actor': 'ANALYST',
'actions': ['RECOMMEND', 'DISCOMMEND', 'REQUEST_SUPPLEMENTAL']
},
'gov_manager': {
'status': 200,
'actor': 'MANAGER',
'actions': ['REQUEST_SUPPLEMENTAL']
},
'gov_director': {
'status': 200,
'actor': 'DIRECTOR',
'actions': []
}
},
'Approved1': {
'fs_user_1': {
'status': 200,
'actor': 'FUEL_SUPPLIER',
'actions': ['CREATE_SUPPLEMENTAL']
},
'gov_analyst': {
'status': 200,
'actor': 'ANALYST',
'actions': ['RETRACT', 'REQUEST_SUPPLEMENTAL']
},
'gov_manager': {
'status': 200,
'actor': 'MANAGER',
'actions': ['RECOMMEND', 'DISCOMMEND', 'RETURN', 'REQUEST_SUPPLEMENTAL']
},
'gov_director': {
'status': 200,
'actor': 'DIRECTOR',
'actions': []
}
},
'Approved2': {
'fs_user_1': {
'status': 200,
'actor': 'FUEL_SUPPLIER',
'actions': ['CREATE_SUPPLEMENTAL']
},
'gov_analyst': {
'status': 200,
'actor': 'ANALYST',
'actions': ['REQUEST_SUPPLEMENTAL']
},
'gov_manager': {
'status': 200,
'actor': 'MANAGER',
'actions': ['RETRACT', 'REQUEST_SUPPLEMENTAL']
},
'gov_director': {
'status': 200,
'actor': 'DIRECTOR',
'actions': ['ACCEPT', 'REJECT', 'RETURN']
}
},
'ApprovedFinal': {
'fs_user_1': {
'status': 200,
'actor': 'FUEL_SUPPLIER',
'actions': ['CREATE_SUPPLEMENTAL']
},
'gov_analyst': {
'status': 200,
'actor': 'ANALYST',
'actions': ['REQUEST_SUPPLEMENTAL']
},
'gov_manager': {
'status': 200,
'actor': 'MANAGER',
'actions': ['REQUEST_SUPPLEMENTAL']
},
'gov_director': {
'status': 200,
'actor': 'DIRECTOR',
'actions': []
}
},
}
for state, report_id in reports_to_check.items():
users_to_check = expected_actions[state]
for user, expected_result in users_to_check.items():
with self.subTest("Check actions for report in state {} with client {}".format(state, user)):
response = self.clients[user].get('/api/compliance_reports/{id}'.format(id=report_id))
response_data = json.loads(response.content.decode("utf-8"))
self.assertEqual(response.status_code, expected_result['status'])
if response.status_code == 200:
self.assertEqual(response_data['actor'], expected_result['actor'])
self.assertListEqual(response_data['actions'], expected_result['actions'])
| 41.27133 | 109 | 0.45685 | 5,866 | 77,879 | 5.897886 | 0.071429 | 0.038732 | 0.041969 | 0.076943 | 0.893921 | 0.881001 | 0.870278 | 0.849958 | 0.824811 | 0.804925 | 0 | 0.024014 | 0.416094 | 77,879 | 1,886 | 110 | 41.293213 | 0.736795 | 0.014869 | 0 | 0.67029 | 0 | 0 | 0.260837 | 0.024648 | 0 | 0 | 0 | 0 | 0.091787 | 1 | 0.018116 | false | 0 | 0.005435 | 0 | 0.025362 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
86ab25e659ad96f51c333e9d7a29bc5c81610815 | 231 | py | Python | strimadec/models/utils/__init__.py | borea17/StrImaDec | 711e14d50ff816585b43c1509355983738b45ecb | [
"MIT"
] | null | null | null | strimadec/models/utils/__init__.py | borea17/StrImaDec | 711e14d50ff816585b43c1509355983738b45ecb | [
"MIT"
] | null | null | null | strimadec/models/utils/__init__.py | borea17/StrImaDec | 711e14d50ff816585b43c1509355983738b45ecb | [
"MIT"
] | null | null | null | from strimadec.models.utils.LossModels import DVAE_LossModel, DVAEST_LossModel
from strimadec.models.utils.accuracy import compute_accuracy
from strimadec.models.utils.kl_divergences import gaussian_kl, bernoulli_kl, categorical_kl | 77 | 91 | 0.887446 | 31 | 231 | 6.387097 | 0.516129 | 0.19697 | 0.287879 | 0.363636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.060606 | 231 | 3 | 91 | 77 | 0.912442 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
813e070b2103d765e526cefb9028dea8987382b7 | 615 | py | Python | docker/pycef/verify.py | ThisIsNotTheUserYouAreLookingFor/dockerfiles | f92673b0d15c457e4abe215cf260afbb5b25cf2e | [
"MIT"
] | 48 | 2018-12-12T12:18:09.000Z | 2022-03-05T02:23:42.000Z | docker/pycef/verify.py | ThisIsNotTheUserYouAreLookingFor/dockerfiles | f92673b0d15c457e4abe215cf260afbb5b25cf2e | [
"MIT"
] | 7,201 | 2018-12-24T17:14:17.000Z | 2022-03-31T13:39:12.000Z | docker/pycef/verify.py | ThisIsNotTheUserYouAreLookingFor/dockerfiles | f92673b0d15c457e4abe215cf260afbb5b25cf2e | [
"MIT"
] | 94 | 2018-12-17T10:59:21.000Z | 2022-03-29T12:59:30.000Z | import pycef
cef = "Jul 14 2020 00:49:42 myvxkp.manage.trendmicro.com CEF:0|Trend Micro|Apex Central|2019|WB:36|36|3|deviceExternalId=1 rt=Jun 21 2020 07:56:09 GMT+00:00 app=5 cnt=1 dpt=80 act=2 src=10.128.0.11 cs1Label=SLF_PolicyName cs1=Internal User Policy deviceDirection=2 cat=36 dvchost=CU-PRO1-8254-2 request=http://www.eicar.org/download/eicar.com.txt duser=TRENDMICROAPEX-\\admin shost=TRENDMICROAPEX- deviceProcessName=C:\\Program Files (x86)\\Google\\Chrome\\Application\\chrome.exe cn3Label=Web_Reputation_Rating cn3=49 deviceFacility=Apex One cn2Label=SLF_SeverityLevel cn2=100 "
a = pycef.parse(cef)
| 123 | 579 | 0.796748 | 107 | 615 | 4.542056 | 0.82243 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130053 | 0.074797 | 615 | 4 | 580 | 153.75 | 0.724077 | 0 | 0 | 0 | 0 | 0.333333 | 0.928455 | 0.479675 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
d4a50b6f2ecaa3e5fd52e64cccb6579d7baa8c90 | 275 | py | Python | doublebeamforming/__init__.py | eileenrmartin/doubleBeamforming | affb2cf1815550d9d4c377d094eb154f84c3b30b | [
"MIT"
] | 9 | 2020-04-10T16:47:55.000Z | 2022-03-31T14:11:52.000Z | doublebeamforming/__init__.py | eileenrmartin/doubleBeamforming | affb2cf1815550d9d4c377d094eb154f84c3b30b | [
"MIT"
] | 1 | 2021-02-25T07:59:14.000Z | 2021-02-25T07:59:14.000Z | doublebeamforming/__init__.py | eileenrmartin/doubleBeamforming | affb2cf1815550d9d4c377d094eb154f84c3b30b | [
"MIT"
] | 4 | 2020-05-11T00:10:08.000Z | 2022-03-31T06:45:40.000Z | from .arrays import arrayPatch
from .newDBFFuncs import shiftFrqData
from .newDBFFuncs import phase1
from .newDBFFuncs import phase2
from .distFromAvg import calcDistFromAvg
from .traditionalXcorrsDBF import xCorrsAcrossArrays
from .traditionalXcorrsDBF import DBFAfterXcorrs | 39.285714 | 52 | 0.876364 | 28 | 275 | 8.607143 | 0.464286 | 0.186722 | 0.261411 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008065 | 0.098182 | 275 | 7 | 53 | 39.285714 | 0.96371 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d4b5b49bbe3c75f462fd39cf4947ed7cb75513a0 | 43 | py | Python | bootcamp/chapter-1/hello-world.py | pushkar2112/Python-practice | 75f88eaa2b4f3c47570b1a11e0e221436551ce89 | [
"Apache-2.0"
] | 1 | 2021-11-23T08:36:43.000Z | 2021-11-23T08:36:43.000Z | bootcamp/chapter-1/hello-world.py | pushkar2112/Python-practice | 75f88eaa2b4f3c47570b1a11e0e221436551ce89 | [
"Apache-2.0"
] | 1 | 2021-07-18T12:39:40.000Z | 2021-09-08T09:48:16.000Z | bootcamp/chapter-1/hello-world.py | pushkar2112/Python-practice | 75f88eaa2b4f3c47570b1a11e0e221436551ce89 | [
"Apache-2.0"
] | null | null | null | print('hello world')
print('hello pushkar') | 21.5 | 22 | 0.744186 | 6 | 43 | 5.333333 | 0.666667 | 0.625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.069767 | 43 | 2 | 22 | 21.5 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0.545455 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
d4cc961ac28848939fccfdb2a9032a2fc1d0fa3a | 205 | py | Python | lib/env/reward/__init__.py | devas123/Bitcoin-Trader-RL | 097cb0ba7428b2c4f997bdb0425a6153c23f9c83 | [
"MIT"
] | null | null | null | lib/env/reward/__init__.py | devas123/Bitcoin-Trader-RL | 097cb0ba7428b2c4f997bdb0425a6153c23f9c83 | [
"MIT"
] | null | null | null | lib/env/reward/__init__.py | devas123/Bitcoin-Trader-RL | 097cb0ba7428b2c4f997bdb0425a6153c23f9c83 | [
"MIT"
] | null | null | null | from lib.env.reward.IncrementalProfit import IncrementalProfit
from lib.env.reward.WeightedUnrealizedProfit import WeightedUnrealizedProfit
from lib.env.reward.BaseRewardStrategy import BaseRewardStrategy
| 51.25 | 76 | 0.897561 | 21 | 205 | 8.761905 | 0.380952 | 0.11413 | 0.163043 | 0.26087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.058537 | 205 | 3 | 77 | 68.333333 | 0.953368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
07d9e8fd7666559e84504e2ba7b0f07a06b626fb | 22 | py | Python | tests/sublime/__init__.py | percevalw/Term | 461be68e1755b2184778c2bc8e28ffa89a6043d5 | [
"MIT"
] | 4 | 2017-05-11T01:05:35.000Z | 2017-05-31T14:42:42.000Z | tests/sublime/__init__.py | percevalw/Term | 461be68e1755b2184778c2bc8e28ffa89a6043d5 | [
"MIT"
] | 1 | 2017-06-06T17:17:02.000Z | 2018-03-13T22:14:11.000Z | tests/sublime/__init__.py | percevalw/Term | 461be68e1755b2184778c2bc8e28ffa89a6043d5 | [
"MIT"
] | null | null | null | from .sublime import * | 22 | 22 | 0.772727 | 3 | 22 | 5.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136364 | 22 | 1 | 22 | 22 | 0.894737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
07f9a893764c34d9604c8baebfbc5d1d6247d1bc | 71,941 | py | Python | bot.py | meme8383/school-bot-demo | 54a08fd5ed1c21dafe814a6a67ef91883ad33b46 | [
"MIT"
] | null | null | null | bot.py | meme8383/school-bot-demo | 54a08fd5ed1c21dafe814a6a67ef91883ad33b46 | [
"MIT"
] | null | null | null | bot.py | meme8383/school-bot-demo | 54a08fd5ed1c21dafe814a6a67ef91883ad33b46 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# school-bot-demo
# All doxxing information has been removed.
#Image-------------------------------------------------------------------------
import re
#try:
# from PIL import Image
#except ImportError:
# import Image
#import pytesseract
#
#pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe'
#
#def readimage(imagepath):
# return(pytesseract.image_to_string(Image.open(imagepath)))
#
#
#def findclasses(theschedule):
# person = []
# for i in range(len(classdata)):
# try:
# m = re.search(classdata['Key'][i], theschedule.lower())
# if m:
# person.append(i)
# except AttributeError:
# continue
# if 7 in person and 18 in person:
# person.remove(7)
# return person
#Data--------------------------------------------------------------------------
import pandas as pd
botpath = ''
#botpath = './'
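# classes.csv maps regex keys to class names; users.csv holds one row per registered member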
classdata = pd.read_csv(botpath + 'classes.csv')
classdata = classdata.set_index('ID')
usrdata = pd.read_csv(botpath + 'users.csv')
graderole = {'6': '6th Grade', '7': '7th Grade', '8': '8th Grade', '9': 'Freshman', '10': 'Sophomore', '11': 'Junior', '12': 'Senior', '13': 'Graduate', '14': 'Teacher'}
guestStatus = {0 : "Not in SCHOOL", 1 : "SCHOOL 1", 2 : "SCHOOL 2", 3 : "Other SCHOOL", '0' : "Not in SCHOOL", '1' : "SCHOOL 1", '2' : "SCHOOL 2", '3' : "Other SCHOOL"}
#Register----------------------------------------------------------------------
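# DM-based registration: guest status via a reaction menu, then name, grade, and (for SCHOOL 1) a class list; the result is appended to users.csv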
async def Register(user):
global usrdata
issues = 0
print(datetime.datetime.now(), "Registering", user.name)
await user.send("Welcome to the SCHOOL 1 discord (unofficial)! You may say 'cancel' at any point to exit and '" + prefix + "register' to retry.")
embed = discord.Embed(title = "Are you currently in SCHOOL? (Graduates included)", description = "0: Not in SCHOOL\n1: In SCHOOL 1\n2: SCHOOL 2\n3: Other SCHOOL School", color = discord.Color.dark_purple())
chooseGuest = await user.send(embed = embed)
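# build keycap emoji 0-3 by appending U+20E3 (combining enclosing keycap) to each digit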
emojilist = [str(i) + "\N{combining enclosing keycap}" for i in range(0,4)]
for i in emojilist:
await chooseGuest.add_reaction(i)
def check2(reaction, person):
nonlocal emojilist
return person == user and str(reaction) in emojilist
try:
reaction, _ = await client.wait_for('reaction_add', timeout = 600.0, check = check2)
except asyncio.TimeoutError:
print(datetime.datetime.now(), "Registration for", user.name, "failed: Timed out at choose from list")
await user.send("Registration failed. You may do " + prefix + "register to retry.")
return None
guest = str(reaction)[0]
await user.send("What is your real name? (First and last, if you would not like to give your name say 'Anonymous')")
print(datetime.datetime.now(), user.name, "on step name")
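# check() limits wait_for to DM replies from the registering user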
while True:
def check(m):
return m.guild == None and m.author == user
try:
msg = await client.wait_for('message', timeout = 300.0, check = check)
except asyncio.TimeoutError:
print(datetime.datetime.now(), "Registration for", user.name, "failed: Timed out at name")
await user.send("Registration failed. You may do " + prefix + "register to retry.")
return None
if msg.content.lower() == "cancel":
await user.send("Cancelled registration. You may do " + prefix + "register to retry.")
print(datetime.datetime.now(), "User", user.name, "cancelled registration with", issues, "issues at name")
return None
elif ''.join(re.split(' |-|,', msg.content)).isalpha():
irlname = msg.content.lower()
break
else:
await user.send("Please only use letters a-z in your name. Enter your name again and contact an admin if you continue having issues.")
issues += 1
print(datetime.datetime.now(), "User", user.name, "had issue", issues, "with register at name")
continue
await user.send("Now, please say your grade (number 6-12, graduate = 13, teacher = 14)")
print(datetime.datetime.now(), user.name, "on step grade")
while True:
try:
msg2 = await client.wait_for('message', timeout = 300.0, check = check)
except asyncio.TimeoutError:
print(datetime.datetime.now(), "Registration for", user.name, "failed: Timed out at grade")
await user.send("Registration failed. You may do " + prefix + "register to retry.")
return None
if msg2.content in graderole:
grade = msg2.content
break
elif msg2.content.lower() == "cancel":
await user.send("Cancelled registration. You may do " + prefix + "register to retry.")
print(datetime.datetime.now(), "User", user.name, "cancelled registration with", issues, "issues at grade")
return None
else:
await user.send("Please only use numbers 6-14 in your grade. Enter your grade again and contact an admin if you continue having issues.")
issues += 1
print(datetime.datetime.now(), "User", user.name, "had issue", issues, "with register at grade")
continue
if guest == "1":
await user.send("Great, now begin to list your classes one by one (most abbreviations are allowed) or send a picture of your schedule (Coming soon!) and say 'done' when you are done. (Say done now to skip) (For precalc use 'pre-calc')")
print(datetime.datetime.now(), user.name, "on step classes")
listofclasses = []
while True:
if listofclasses:
embed = discord.Embed(title = "Classes for " + user.name + ":", description = ''.join([classdata.loc[i]['Name'] + "\n" for i in listofclasses]), color = discord.Color.dark_purple())
embed.set_footer(text = "Continue listing your classes and say 'done' when all of your classes are on this list")
embed.set_thumbnail(url = user.avatar_url)
await user.send(embed = embed)
try:
msg3 = await client.wait_for('message', timeout = 300.0, check = check)
except asyncio.TimeoutError:
print(datetime.datetime.now(), "Registration for", user.name, "failed: Timed out at classes")
await user.send("Registration failed. You may do " + prefix + "register to retry.")
return None
if msg3.attachments:
await user.send("Feature not implemented yet, please list your classes through text.")
continue
# await user.send("Reading schedule...")
# await msg3.attachments[0].save(botpath + 'Saved/sched_' + user.name + '.png')
# print(datetime.datetime.now(), "Saved schedule from", user.name, "as sched_" + user.name + ".png")
# classes = pytesseract.image_to_string(Image.open(botpath + 'Saved/sched_' + user.name + '.png'))
# listofclasses.append(findclasses(classes))
# if len(listofclasses) >= 7:
# embed = discord.Embed(title = "Classes for " + user.name + ":", description = ''.join([classdata.loc[i]['Name'] + "\n" for i in listofclasses]), color = discord.Color.dark_purple())
# embed.set_thumbnail(url = user.avatar_url)
# await user.send(embed = embed)
# await user.send("Is this correct?")
# try:
# msg4 = await client.wait_for('message', timeout = 60.0, check = check)
# except asyncio.TimeoutError:
# print(datetime.datetime.now(), "Registration for", user.name, "failed: Timed out at check classes")
# await user.send("Registration failed. You may do " + prefix + "register to retry.")
# return None
# if msg4.content.lower().startswith("y"):
# listofclasses.sort()
# usrdata = usrdata.append(pd.DataFrame({'User':['a' + str(user.id)], 'Classes':[str(listofclasses)], 'IRL' : [irlname], 'Grade' : [grade]}), sort = False, ignore_index = True)
# usrdata.to_csv(botpath + 'users.csv', index = False, encoding = 'utf8')
# usrdata = pd.read_csv(botpath + 'users.csv')
# print(datetime.datetime.now(), "Registered", user.name, "with classes in users.csv and", issues, "issues")
# break
# elif msg4.content.lower() == "cancel":
# await user.send("Cancelled registration. You may do " + prefix + "register to retry.")
# print(datetime.datetime.now(), "User", user.name, "cancelled registration with", issues, "issues at image (Check classes)")
# return None
# else:
# await user.send("Please send a better image or say no to skip adding classes. You may contact an admin if you continue having issues.")
# issues += 1
# print(datetime.datetime.now(), "User", user.name, "had issue", issues, "with register at image (incorrect classes)")
# continue
# else:
# await user.send("Only found " + str(len(listofclasses)) + " classes, please send a better image or say no to skip adding classes. You may contact an admin if you continue having issues.")
# issues += 1
# print(datetime.datetime.now(), "User", user.name, "had issue", issues, "with register at image (too few classes - " + str(len(listofclasses)) + ")")
# continue
elif msg3.content.lower() == "cancel":
await user.send("Cancelled registration. You may do " + prefix + "register to retry.")
print(datetime.datetime.now(), "User", user.name, "cancelled registration with", issues, "issues at classes (send)")
return None
elif msg3.content.lower() == "done":
if len(listofclasses) >= 7:
listofclasses.sort()
usrdata = usrdata.append(pd.DataFrame({'User':['a' + str(user.id)], 'Classes':[str(listofclasses)], 'IRL' : [irlname], 'Grade' : [grade], 'Guest' : [guest]}), sort = False, ignore_index = True)
usrdata.to_csv(botpath + 'users.csv', index = False, encoding = 'utf8')
usrdata = pd.read_csv(botpath + 'users.csv')
print(datetime.datetime.now(), "Registered", user.name, "with classes in users.csv and", issues, "issues")
break
elif listofclasses:
await user.send("You have only added " + str(len(listofclasses)) + " classes, are you sure?")
try:
msg4 = await client.wait_for('message', timeout = 300.0, check = check)
except asyncio.TimeoutError:
print(datetime.datetime.now(), "Registration for", user.name, "failed: Timed out at check classes")
await user.send("Registration failed. You may do " + prefix + "register to retry.")
return None
if msg4.content.lower().startswith("y"):
listofclasses.sort()
usrdata = usrdata.append(pd.DataFrame({'User':['a' + str(user.id)], 'Classes':[str(listofclasses)], 'IRL' : [irlname], 'Grade' : [grade], 'Guest' : [guest]}), sort = False, ignore_index = True)
usrdata.to_csv(botpath + 'users.csv', index = False, encoding = 'utf8')
usrdata = pd.read_csv(botpath + 'users.csv')
print(datetime.datetime.now(), "Registered", user.name, "with classes in users.csv and", issues, "issues")
break
elif msg4.content.lower() == "cancel":
await user.send("Cancelled registration. You may do " + prefix + "register to retry.")
print(datetime.datetime.now(), "User", user.name, "cancelled registration with", issues, "issues at classes (Check classes)")
return None
else:
await user.send("Please continue listing classes one by one and say 'done' when all of your classes are added.")
continue
else:
await user.send("No classes added. Are you sure you would like to continue without adding your classes?")
try:
msg4 = await client.wait_for('message', timeout = 300.0, check = check)
except asyncio.TimeoutError:
print(datetime.datetime.now(), "Registration for", user.name, "failed: Timed out at check classes")
await user.send("Registration failed. You may do " + prefix + "register to retry.")
return None
if msg4.content.lower().startswith("y"):
listofclasses = [0]
usrdata = usrdata.append(pd.DataFrame({'User':['a' + str(user.id)], 'Classes':['[0]'], 'IRL' : [irlname], 'Grade' : [grade], 'Guest' : [guest]}), sort = False, ignore_index = True)
usrdata.to_csv(botpath + 'users.csv', index = False, encoding = 'utf8')
usrdata = pd.read_csv(botpath + 'users.csv')
print(datetime.datetime.now(), "Registered", user.name, "without classes in users.csv and", issues, "issues")
break
elif msg4.content.lower() == "cancel":
await user.send("Cancelled registration. You may do " + prefix + "register to retry.")
print(datetime.datetime.now(), "User", user.name, "cancelled registration with", issues, "issues at classes (Check classes)")
return None
else:
await user.send("Please continue listing classes one by one and say 'done' when all of your classes are added.")
continue
else:
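# Fuzzy class matching: normalize roman numerals I/II/III to digits on both sides, then treat a class as a match when every word of the user's message appears in its name.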
classmatches = []
for i in range(len(classdata)):
matches = 0
for word in msg3.content.lower().split(" "):
if word == "i":
word = "1"
elif word == "ii":
word = "2"
elif word == "iii":
word = "3"
classname = classdata['Name'][i].lower().split(" ")
for part in range(len(classname)):
if classname[part] == "i":
classname[part] = "1"
elif classname[part] == "ii":
classname[part] = "2"
elif classname[part] == "iii":
classname[part] = "3"
classname = " ".join(classname)
if word in classname:
matches += 1
if matches == len(msg3.content.split(" ")):
classmatches.append(i)
if len(classmatches) == 0:
await user.send("Class " + msg3.content + " not found, please try again. Write the class as it is written on the schedule, but abbreviations such as 'honors chem' and 'ap lang' are allowed.")
issues += 1
print(datetime.datetime.now(), "User", user.name, "had issue", issues, "with register at listclasses (class not found - " + msg3.content + ")")
continue
elif len(classmatches) == 1:
await user.send("Found class " + classdata['Name'][classmatches[0]] + ", is this correct?")
try:
msg4 = await client.wait_for('message', timeout = 300.0, check = check)
except asyncio.TimeoutError:
print(datetime.datetime.now(), "Registration for", user.name, "failed: Timed out at choose from list")
await user.send("Registration failed. You may do " + prefix + "register to retry.")
return None
if msg4.content.lower().startswith("y"):
listofclasses.append(classmatches[0])
await user.send("Class " + classdata['Name'][classmatches[0]] + " added to your schedule.")
continue
else:
await user.send("Please try again. Write the class as it is written on the schedule, but abbreviations such as 'honors chem' and 'ap lang' are allowed.")
issues += 1
print(datetime.datetime.now(), "User", user.name, "had issue", issues, "with register at listclasses (incorrect classes)")
continue
elif len(classmatches) > 8:
await user.send("Found " + str(len(classmatches)) + " matches, please be more specific.")
else:
embed = discord.Embed(title = "Multiple classes found, please select the correct one by number:", description = "0: None of these\n" + ''.join([str(j + 1) + ": " + classdata['Name'][classmatches[j]] + "\n" for j in range(len(classmatches))]), color = discord.Color.dark_purple())
chooseclass = await user.send(embed = embed)
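# Build number keycap reactions ('0'-'N' plus the combining enclosing keycap) so the user can pick a class by reacting; '0' means none of these.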
emojilist = ['0\N{combining enclosing keycap}'] + [str(i + 1) + '\N{combining enclosing keycap}' for i in range(len(classmatches))]
for i in emojilist:
await chooseclass.add_reaction(i)
def check2(reaction, person):
nonlocal emojilist
return person == user and str(reaction) in emojilist
try:
reaction, _ = await client.wait_for('reaction_add', timeout = 300.0, check = check2)
except asyncio.TimeoutError:
print(datetime.datetime.now(), "Registration for", user.name, "failed: Timed out at choose from list")
await user.send("Registration failed. You may do " + prefix + "register to retry.")
return None
if str(reaction)[0] == "0":
await user.send("Please try again. Write the class as it is written on the schedule, but abbreviations such as 'honors chem' and 'ap lang' are allowed.")
issues += 1
print(datetime.datetime.now(), "User", user.name, "had issue", issues, "with register at listclasses (incorrect classes)")
continue
else:
listofclasses.append(classmatches[int(str(reaction)[0]) - 1])
await user.send("Class " + classdata['Name'][classmatches[int(str(reaction)[0]) - 1]] + " added to your schedule.")
continue
else:
listofclasses = [0]
usrdata = usrdata.append(pd.DataFrame({'User':['a' + str(user.id)], 'Classes':['[0]'], 'IRL' : [irlname], 'Grade' : [grade], 'Guest' : [guest]}), sort = False, ignore_index = True)
usrdata.to_csv(botpath + 'users.csv', index = False, encoding = 'utf8')
usrdata = pd.read_csv(botpath + 'users.csv')
print(datetime.datetime.now(), "Registered", user.name, "without classes in users.csv and", issues, "issues")
if guest == "0":
await discord.utils.find(lambda m: m.id == user.id, schoolserver.members).add_roles(discord.utils.get(schoolserver.roles, name = "Not in SCHOOL"))
elif guest == "2":
await discord.utils.find(lambda m: m.id == user.id, schoolserver.members).add_roles(discord.utils.get(schoolserver.roles, name = "SCHOOL 2"))
elif guest == "3":
await discord.utils.find(lambda m: m.id == user.id, schoolserver.members).add_roles(discord.utils.get(schoolserver.roles, name = "Other SCHOOL"))
elif guest == "1":
await discord.utils.find(lambda m: m.id == user.id, schoolserver.members).add_roles(discord.utils.get(schoolserver.roles, name = graderole[grade]))
await user.send("Thank you for registering! Your info is now visible through the .userinfo (user) command and you will be given access to the proper channels")
await editwhois()
#Discord-----------------------------------------------------------------------
import asyncio
#import nest_asyncio
#nest_asyncio.apply()
import datetime
import discord
from discord.ext import commands
prefix = "."
client = commands.Bot(command_prefix = prefix)
client.remove_command('help')
schoolserver = ''
whoischannel = ''
@client.event
async def on_ready():
print(datetime.datetime.now(), "Connected as", client.user)
await client.change_presence(activity = discord.Game(".register to be added!"))
global schoolserver, whoischannel
schoolserver = client.get_guild(InsertID)
whoischannel = schoolserver.get_channel(InsertID)
global teacherlist, graduatelist, seniorlist, juniorlist, sophomorelist, freshmanlist, eighthlist, seventhlist, sixthlist, school2list, otherschoollist, notinschoollist
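# Fetch the who-is roster messages once at startup so editwhois() can edit them in place; the InsertID placeholders must be replaced with real guild/channel/message IDs.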
teacherlist = await whoischannel.fetch_message(InsertID)
graduatelist = await whoischannel.fetch_message(InsertID)
seniorlist = await whoischannel.fetch_message(InsertID)
juniorlist = await whoischannel.fetch_message(InsertID)
sophomorelist = await whoischannel.fetch_message(InsertID)
freshmanlist = await whoischannel.fetch_message(InsertID)
eighthlist = await whoischannel.fetch_message(InsertID)
seventhlist = await whoischannel.fetch_message(InsertID)
sixthlist = await whoischannel.fetch_message(InsertID)
school2list = await whoischannel.fetch_message(InsertID)
otherschoollist = await whoischannel.fetch_message(InsertID)
notinschoollist = await whoischannel.fetch_message(InsertID)
@client.event
async def on_member_join(member):
print(datetime.datetime.now(), member.name, "joined, attempting to register")
if 'a' + str(member.id) in usrdata.values:
print(datetime.datetime.now(), "Not registering", member.name + ", already registered")
else:
await Register(member)
@client.event
async def on_member_remove(member):
print(datetime.datetime.now(), member.name, "left, attempting to remove from data")
global usrdata
if 'a' + str(member.id) in usrdata.values:
usrdata = usrdata.set_index('User')
usrdata = usrdata.drop('a' + str(member.id), axis = 0)
usrdata.to_csv(botpath + 'users.csv', encoding = 'utf8')
usrdata = pd.read_csv(botpath + 'users.csv')
print(datetime.datetime.now(), "Deleted info for", member.name, "from users.csv")
await editwhois()
else:
print(datetime.datetime.now(), member.name, "was not registered")
@client.command()
async def ping(ctx):
await ctx.send("Pong! (Latency: " + str(round(client.latency * 1000, 1)) + " ms)")
print(datetime.datetime.now(), "Pinged by", ctx.author.name, ", latency was", str(round(client.latency * 1000, 1)), "ms")
@client.command()
async def reloadclasses(ctx):
print(datetime.datetime.now(), ctx.author.name, "did command reloadclasses")
global classdata
if ctx.message.author.guild_permissions.administrator:
classdata = pd.read_csv(botpath + 'classes.csv')
classdata = classdata.set_index('ID')
await ctx.send("Reloaded classes.csv")
print(datetime.datetime.now(), "Reloaded classes.csv")
else:
print(datetime.datetime.now(), "Didn't reload, insufficient permissions")
await ctx.send("You do not have permissions for this command!")
@client.command()
async def reloadusers(ctx):
print(datetime.datetime.now(), ctx.author.name, "did command reloadusers")
global usrdata
if ctx.message.author.guild_permissions.administrator:
usrdata = pd.read_csv(botpath + 'users.csv')
await ctx.send("Reloaded users.csv")
print(datetime.datetime.now(), "Reloaded users.csv")
else:
print(datetime.datetime.now(), "Didn't reload, insufficient permissions")
await ctx.send("You do not have permissions for this command!")
@client.command()
async def register(ctx, args = ''):
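# Admins may mention another user to start that user's registration; everyone else registers themselves.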
if args and ctx.message.author.guild_permissions.administrator:
try:
user = ctx.message.mentions[0]
await ctx.send("Messaged " + user.name)
except IndexError:
user = ctx.message.author
else:
user = ctx.message.author
print(datetime.datetime.now(), ctx.author.name, "did command register for", user.name)
if 'a' + str(user.id) in usrdata.values:
if user == ctx.message.author:
await ctx.send("Your info has already been saved! Use " + prefix + "delinfo to change it.")
else:
await ctx.send(user.name + " has already been registered!")
print(datetime.datetime.now(), "Not registering", user.name + ", already registered")
else:
if ctx.guild:
if user == ctx.message.author:
await ctx.send("You have been messaged, please answer the messages through DM")
elif user != ctx.message.author:
await ctx.send(user.name + " has been messaged.")
await Register(user)
@client.command()
async def delinfo(ctx, args = ''):
if ctx.message.author.guild_permissions.administrator:
try:
user = ctx.message.mentions[0]
except IndexError:
user = ctx.message.author
global usrdata
print(datetime.datetime.now(), ctx.author.name, "did command delinfo for", user)
if 'a' + str(user.id) in usrdata.values:
if user == ctx.message.author:
await ctx.send("Are you sure you want to delete your info? This cannot be undone, and you will have to re-do .register")
else:
await ctx.send("Are you sure you want to delete info for " + user.name + "? This cannot be undone.")
def check(m):
return m.channel == ctx.channel and m.author == ctx.author
try:
msg = await client.wait_for('message', check = check, timeout = 60.0)
except asyncio.TimeoutError:
print(datetime.datetime.now(), "Delinfo for", user.name, "failed: Timed out")
await ctx.send("Delinfo failed. You may do " + prefix + "delinfo to retry.")
return None
if msg.content.lower().startswith("y"):
await ctx.send("Deleting info...")
usrdata = usrdata.set_index('User')
usrdata = usrdata.drop('a' + str(user.id), axis = 0)
usrdata.to_csv(botpath + 'users.csv', encoding = 'utf8')
usrdata = pd.read_csv(botpath + 'users.csv')
await ctx.send("Deleted info.")
print(datetime.datetime.now(), "Deleted info for", user.name, "from users.csv")
await editwhois()
else:
if user == ctx.message.author:
await ctx.send("Alright, I won't delete your info.")
else:
await ctx.send("Alright, I won't delete " + user.name + "'s info.")
else:
if user == ctx.message.author:
await ctx.send("You don't have your info saved! Use " + prefix + "register to add your info.")
else:
await ctx.send(user.name + " doesn't have their info saved!")
else:
print(datetime.datetime.now(), ctx.author.name, "did command delinfo, no permissions")
await ctx.send("You do not have permissions for this command!")
@client.command()
async def userinfo(ctx, arg = ""):
if arg:
try:
user = ctx.message.mentions[0]
except IndexError:
user = ctx.message.author
else:
user = ctx.message.author
print(datetime.datetime.now(),ctx.author.name, "did command userinfo for", user.name)
if 'a' + str(user.id) in usrdata.values:
for i in range(len(usrdata)):
if usrdata['User'][i] == 'a' + str(user.id):
embed = discord.Embed(color = discord.Color.dark_purple())
embed.set_author(name = "Info for " + user.name + ":", icon_url = user.avatar_url)
embed.add_field(name = "Name:", value = usrdata['IRL'][i].title(), inline = True)
embed.add_field(name = "Grade:", value = usrdata['Grade'][i], inline = True)
embed.add_field(name = "SCHOOL Status:", value = guestStatus[usrdata['Guest'][i]], inline = False)
embed.add_field(name = "Classes:", value = ''.join([classdata.loc[int(j)]['Name'] + "\n" for j in usrdata['Classes'][i][1:-1].split(', ')]), inline = False)
embed.set_thumbnail(url = user.avatar_url)
await ctx.send(embed = embed)
else:
if user == ctx.message.author:
await ctx.send("You are not registered! Use " + prefix + "register to add your info.")
else:
await ctx.send(user.name + " is not registered! Use " + prefix + "info to add your info.")
@client.command()
async def rawuserinfo(ctx, arg = ""):
if arg:
try:
user = ctx.message.mentions[0]
except IndexError:
user = ctx.message.author
else:
user = ctx.message.author
print(datetime.datetime.now(),ctx.author.name, "did command rawuserinfo for", user.name)
if 'a' + str(user.id) in usrdata.values:
for i in range(len(usrdata)):
if usrdata['User'][i] == 'a' + str(user.id):
await ctx.send(usrdata['User'][i] + ", " + str(usrdata['Guest'][i]) + ", " + str(usrdata['Grade'][i]) + ", " + str(usrdata['Classes'][i]) + ", "+ usrdata['IRL'][i])
else:
if user == ctx.message.author:
await ctx.send("You are not registered! Use " + prefix + "register to add your info.")
else:
await ctx.send(user.name + " is not registered! Use " + prefix + "info to add your info.")
@client.command()
async def getroles(ctx):
print(datetime.datetime.now(), ctx.author.name, "did command getroles")
if 'a' + str(ctx.author.id) in usrdata.values:
for i in range(len(usrdata)):
if usrdata['User'][i] == 'a' + str(ctx.author.id):
if int(usrdata['Guest'][i]) == 1:
await ctx.author.add_roles(discord.utils.get(ctx.author.guild.roles, name = graderole[str(usrdata['Grade'][i])]))
else:
await ctx.author.add_roles(discord.utils.get(ctx.author.guild.roles, name = guestStatus[usrdata['Guest'][i]]))
else:
await ctx.send("You are not registered! Use " + prefix + "register to add your info.")
# @client.command()
# async def listusers(ctx):
# print(datetime.datetime.now(), ctx.author.name, "did command listusers")
# users = []
# for i in range(len(usrdata)):
# users.append(discord.utils.find(lambda m: m.id == int(usrdata['User'][i][1:]), schoolserver.members).mention + " - " + usrdata['IRL'][i].title())
# embed = discord.Embed(title = "Registered Users:", description = ''.join([j + "\n" for j in users]), color = discord.Color.dark_purple())
# embed.set_footer(text = "Total number of users: " + str(len(usrdata)))
# await ctx.send(embed = embed)
@client.command()
async def listclasses(ctx):
if ctx.message.author.guild_permissions.administrator:
print(datetime.datetime.now(), ctx.author.name, "did command listclasses")
classes = []
for i in range(1, int(len(classdata)/2)):
classes.append(classdata['Name'][i])
embed = discord.Embed(title = "Classes:", description = ''.join([", " + j for j in classes])[2:], color = discord.Color.dark_purple())
embed.set_footer(text = "Total number of classes: " + str(len(classdata) - 1))
await ctx.send(embed = embed)
classes = []
for i in range(int(len(classdata)/2), len(classdata)):
classes.append(classdata['Name'][i])
embed = discord.Embed(title = "Classes:", description = ''.join([", " + j for j in classes])[2:], color = discord.Color.dark_purple())
embed.set_footer(text = "Total number of classes: " + str(len(classdata) - 1))
await ctx.send(embed = embed)
else:
print(datetime.datetime.now(), ctx.author.name, "did command listclasses, no permissions")
await ctx.send("You do not have permissions for this command")
@client.command()
async def edit(ctx, name = '', change = '', *args):
if ctx.message.author.guild_permissions.administrator:
print(datetime.datetime.now(), ctx.author.name, "did command edit")
if name and change and args:
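# Map the requested field name to its index in the person record [User, Classes, IRL, Grade, Guest].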
if change.lower() == "classes":
to_change = 1
elif change.lower() == "irl" or change.lower() == "name":
to_change = 2
elif change.lower() == "grade":
to_change = 3
elif change.lower() == "guest":
to_change = 4
else:
await ctx.send("Invalid syntax: use " + prefix + "edit (user) (field) (value)")
print(datetime.datetime.now(), ctx.author.name, "did command edit, invalid syntax")
return None
try:
user = ctx.message.mentions[0]
except IndexError:
await ctx.send("Invalid syntax: use " + prefix + "edit (user) (field) (value)")
print(datetime.datetime.now(), ctx.author.name, "did command edit, invalid syntax")
return None
global usrdata
for i in range(len(usrdata)):
if 'a' + str(user.id) == usrdata['User'][i]:
person = [usrdata['User'][i], usrdata['Classes'][i], usrdata['IRL'][i], usrdata['Grade'][i], usrdata['Guest'][i]]
await user.remove_roles(discord.utils.get(schoolserver.roles, name = graderole[str(person[3])]))
await user.remove_roles(discord.utils.get(schoolserver.roles, name = guestStatus[str(person[4])]))
if to_change == 2 or to_change == 1:
person[to_change] = "".join([" " + i for i in args])[1:]
else:
person[to_change] = args[0]
usrdata = usrdata.set_index('User')
usrdata = usrdata.drop('a' + str(user.id), axis = 0)
usrdata.to_csv(botpath + 'users.csv', encoding = 'utf8')
usrdata = pd.read_csv(botpath + 'users.csv')
usrdata = usrdata.append(pd.DataFrame({'User' : [person[0]], 'Classes' : [person[1]], 'IRL' : [person[2]], 'Grade' : [person[3]], 'Guest' : [person[4]]}), sort = False, ignore_index = True)
usrdata.to_csv(botpath + 'users.csv', index = False, encoding = 'utf8')
usrdata = pd.read_csv(botpath + 'users.csv')
if person[4] == "0":
await user.add_roles(discord.utils.get(schoolserver.roles, name = "Not in SCHOOL"))
elif person[4] == "2":
await user.add_roles(discord.utils.get(schoolserver.roles, name = "SCHOOL 2"))
elif person[4] == "3":
await user.add_roles(discord.utils.get(schoolserver.roles, name = "Other SCHOOL"))
elif person[4] == "1":
await user.add_roles(discord.utils.get(schoolserver.roles, name = graderole[str(person[3])]))
print(datetime.datetime.now(), "Updated", user.name, "in users.csv")
embed = discord.Embed(color = discord.Color.dark_purple())
embed.set_author(name = "Info for " + user.name + ":", icon_url = user.avatar_url)
embed.add_field(name = "Name:", value = person[2].title(), inline = True)
embed.add_field(name = "Grade:", value = person[3], inline = True)
embed.add_field(name = "SCHOOL Status:", value = guestStatus[person[4]], inline = False)
embed.add_field(name = "Classes:", value = ''.join([classdata.loc[int(j)]['Name'] + "\n" for j in person[1][1:-1].split(', ')]), inline = False)
embed.set_thumbnail(url = user.avatar_url)
await ctx.send("Updated info for " + user.name, embed = embed)
break
await editwhois()
else:
await ctx.send("Invalid syntax: use " + prefix + "edit (user) (field) (value)")
print(datetime.datetime.now(), ctx.author.name, "did command edit, invalid syntax")
else:
print(datetime.datetime.now(), ctx.author.name, "did command edit, no permissions")
await ctx.send("You do not have permissions for this command")
@client.command()
async def addclasses(ctx):
print(datetime.datetime.now(), ctx.author.name, "did command addclasses")
await ctx.send("You have been messaged, please answer the messages through DM")
user = ctx.message.author
await user.send("Begin to list your classes one by one (most abbreviations are allowed) or send a picture of your schedule (Coming soon!) and say 'done' when you are done. (For precalc use 'pre-calc')")
listofclasses = []
issues = 0
global usrdata
while True:
if listofclasses:
embed = discord.Embed(title = "Classes for " + user.name + ":", description = ''.join([classdata.loc[i]['Name'] + "\n" for i in listofclasses]), color = discord.Color.dark_purple())
embed.set_footer(text = "Continue listing your classes and say 'done' when all of your classes are on this list")
embed.set_thumbnail(url = user.avatar_url)
await user.send(embed = embed)
def check(m):
return m.guild is None and m.author == user
try:
msg3 = await client.wait_for('message', timeout = 300.0, check = check)
except asyncio.TimeoutError:
print(datetime.datetime.now(), "Addclasses for", user.name, "failed: Timed out at classes")
await user.send("Addclasses failed. You may do " + prefix + "addclasses to retry.")
return None
if msg3.attachments:
await user.send("Feature not implemented yet, please list your classes through text.")
continue
# await user.send("Reading schedule...")
# await msg3.attachments[0].save(botpath + 'Saved/sched_' + user.name + '.png')
# print(datetime.datetime.now(), "Saved schedule from", user.name, "as sched_" + user.name + ".png")
# classes = pytesseract.image_to_string(Image.open(botpath + 'Saved/sched_' + user.name + '.png'))
# listofclasses.append(findclasses(classes))
# if len(listofclasses) >= 7:
# embed = discord.Embed(title = "Classes for " + user.name + ":", description = ''.join([classdata.loc[i]['Name'] + "\n" for i in listofclasses]), color = discord.Color.dark_purple())
# embed.set_thumbnail(url = user.avatar_url)
# await user.send(embed = embed)
# await user.send("Is this correct?")
#
# try:
# msg4 = await client.wait_for('message', timeout = 60.0, check = check)
# except asyncio.TimeoutError:
# print(datetime.datetime.now(), "Registration for", user.name, "failed: Timed out at check classes")
# await user.send("Registration failed. You may do " + prefix + "register to retry.")
# return None
# if msg4.content.lower().startswith("y"):
# listofclasses.sort()
# usrdata = usrdata.append(pd.DataFrame({'User':['a' + str(user.id)], 'Classes':[str(listofclasses)], 'IRL' : [irlname], 'Grade' : [grade]}), sort = False, ignore_index = True)
# usrdata.to_csv(botpath + 'users.csv', index = False, encoding = 'utf8')
# usrdata = pd.read_csv(botpath + 'users.csv')
# print(datetime.datetime.now(), "Registered", user.name, "with classes in users.csv and", issues, "issues")
# break
# elif msg4.content.lower() == "cancel":
# await user.send("Cancelled registration. You may do " + prefix + "register to retry.")
# print(datetime.datetime.now(), "User", user.name, "cancelled registration with", issues, "issues at image (Check classes)")
# return None
# else:
# await user.send("Please send a better image or say no to skip adding classes. You may contact an admin if you continue having issues.")
# issues += 1
# print(datetime.datetime.now(), "User", user.name, "had issue", issues, "with register at image (incorrect classes)")
# continue
# else:
# await user.send("Only found " + str(len(listofclasses)) + " classes, please send a better image or say no to skip adding classes. You may contact an admin if you continue having issues.")
# issues += 1
# print(datetime.datetime.now(), "User", user.name, "had issue", issues, "with register at image (too few classes - " + str(len(listofclasses)) + ")")
# continue
elif msg3.content.lower() == "cancel":
await user.send("Cancelled addclasses. You may do " + prefix + "addclasses to retry.")
print(datetime.datetime.now(), "User", user.name, "cancelled addclasses with", issues, "issues")
return None
elif msg3.content.lower() == "done":
if len(listofclasses) >= 7:
listofclasses.sort()
for i in range(len(usrdata)):
if 'a' + str(user.id) == usrdata['User'][i]:
person = [usrdata['User'][i], usrdata['Classes'][i], usrdata['IRL'][i], usrdata['Grade'][i], usrdata['Guest'][i]]
person[1] = listofclasses
usrdata = usrdata.set_index('User')
usrdata = usrdata.drop('a' + str(user.id), axis = 0)
usrdata.to_csv(botpath + 'users.csv', encoding = 'utf8')
usrdata = pd.read_csv(botpath + 'users.csv')
usrdata = usrdata.append(pd.DataFrame({'User' : [person[0]], 'Classes' : [person[1]], 'IRL' : [person[2]], 'Grade' : [person[3]], 'Guest' : [person[4]]}), sort = False, ignore_index = True)
usrdata.to_csv(botpath + 'users.csv', index = False, encoding = 'utf8')
usrdata = pd.read_csv(botpath + 'users.csv')
print(datetime.datetime.now(), "Added classes for", user.name, "in users.csv")
embed = discord.Embed(color = discord.Color.dark_purple())
embed.set_author(name = "Info for " + user.name + ":", icon_url = user.avatar_url)
embed.add_field(name = "Name:", value = person[2].title(), inline = True)
embed.add_field(name = "Grade:", value = person[3], inline = True)
embed.add_field(name = "SCHOOL Status:", value = guestStatus[person[4]], inline = False)
embed.add_field(name = "Classes:", value = ''.join([classdata.loc[int(j)]['Name'] + "\n" for j in str(person[1])[1:-1].split(', ')]), inline = False)
embed.set_thumbnail(url = user.avatar_url)
await user.send("Updated info for " + user.name, embed = embed)
break
print(datetime.datetime.now(), "Added classes for", user.name, "in users.csv with", issues, "issues")
break
elif listofclasses:
await user.send("You have only added " + str(len(listofclasses)) + " classes, are you sure?")
try:
msg4 = await client.wait_for('message', timeout = 60.0, check = check)
except asyncio.TimeoutError:
print(datetime.datetime.now(), "Addclasses for", user.name, "failed: Timed out at check classes")
await user.send("Addclasses failed. You may do " + prefix + "register to retry.")
return None
if msg4.content.lower().startswith("y"):
listofclasses.sort()
for i in range(len(usrdata)):
if 'a' + str(user.id) == usrdata['User'][i]:
person = [usrdata['User'][i], usrdata['Classes'][i], usrdata['IRL'][i], usrdata['Grade'][i], usrdata['Guest'][i]]
person[1] = listofclasses
usrdata = usrdata.set_index('User')
usrdata = usrdata.drop('a' + str(user.id), axis = 0)
usrdata.to_csv(botpath + 'users.csv', encoding = 'utf8')
usrdata = pd.read_csv(botpath + 'users.csv')
usrdata = usrdata.append(pd.DataFrame({'User' : [person[0]], 'Classes' : [person[1]], 'IRL' : [person[2]], 'Grade' : [person[3]], 'Guest' : [person[4]]}), sort = False, ignore_index = True)
usrdata.to_csv(botpath + 'users.csv', index = False, encoding = 'utf8')
usrdata = pd.read_csv(botpath + 'users.csv')
print(datetime.datetime.now(), "Added classes for", user.name, "in users.csv")
embed = discord.Embed(color = discord.Color.dark_purple())
embed.set_author(name = "Info for " + user.name + ":", icon_url = user.avatar_url)
embed.add_field(name = "Name:", value = person[2].title(), inline = True)
embed.add_field(name = "Grade:", value = person[3], inline = True)
embed.add_field(name = "SCHOOL Status:", value = guestStatus[person[4]], inline = False)
embed.add_field(name = "Classes:", value = ''.join([classdata.loc[int(j)]['Name'] + "\n" for j in str(person[1])[1:-1].split(', ')]), inline = False)
embed.set_thumbnail(url = user.avatar_url)
await user.send("Updated info for " + user.name, embed = embed)
break
print(datetime.datetime.now(), "Added classes for", user.name, "with", issues, "issues")
break
elif msg4.content.lower() == "cancel":
await user.send("Cancelled addclasses. You may do " + prefix + "addclasses to retry.")
print(datetime.datetime.now(), "User", user.name, "cancelled addclasses with", issues, "issues at classes (Check classes)")
return None
else:
await user.send("Please continue listing classes one by one and say 'done' when all of your classes are added.")
continue
else:
await user.send("No classes added. Are you sure you would like to continue without adding your classes?")
try:
msg4 = await client.wait_for('message', timeout = 60.0, check = check)
except asyncio.TimeoutError:
print(datetime.datetime.now(), "Addclasses for", user.name, "failed: Timed out at check classes")
await user.send("Registration failed. You may do " + prefix + "register to retry.")
return None
if msg4.content.lower().startswith("y"):
listofclasses = [0]
for i in range(len(usrdata)):
if 'a' + str(user.id) == usrdata['User'][i]:
person = [usrdata['User'][i], usrdata['Classes'][i], usrdata['IRL'][i], usrdata['Grade'][i], usrdata['Guest'][i]]
person[1] = listofclasses
usrdata = usrdata.set_index('User')
usrdata = usrdata.drop('a' + str(user.id), axis = 0)
usrdata.to_csv(botpath + 'users.csv', encoding = 'utf8')
usrdata = pd.read_csv(botpath + 'users.csv')
usrdata = usrdata.append(pd.DataFrame({'User' : [person[0]], 'Classes' : [person[1]], 'IRL' : [person[2]], 'Grade' : [person[3]], 'Guest' : [person[4]]}), sort = False, ignore_index = True)
usrdata.to_csv(botpath + 'users.csv', index = False, encoding = 'utf8')
usrdata = pd.read_csv(botpath + 'users.csv')
print(datetime.datetime.now(), "Added classes for", user.name, "in users.csv")
embed = discord.Embed(color = discord.Color.dark_purple())
embed.set_author(name = "Info for " + user.name + ":", icon_url = user.avatar_url)
embed.add_field(name = "Name:", value = person[2].title(), inline = True)
embed.add_field(name = "Grade:", value = person[3], inline = True)
embed.add_field(name = "SCHOOL Status:", value = guestStatus[person[4]], inline = False)
embed.add_field(name = "Classes:", value = ''.join([classdata.loc[int(j)]['Name'] + "\n" for j in str(person[1])[1:-1].split(', ')]), inline = False)
embed.set_thumbnail(url = user.avatar_url)
await user.send("Updated info for " + user.name, embed = embed)
break
print(datetime.datetime.now(), "Registered", user.name, "with classes in users.csv and", issues, "issues")
break
elif msg4.content.lower() == "cancel":
await user.send("Cancelled registration. You may do " + prefix + "register to retry.")
print(datetime.datetime.now(), "User", user.name, "cancelled registration with", issues, "issues at classes (Check classes)")
return None
else:
await user.send("Please continue listing classes one by one and say 'done' when all of your classes are added.")
continue
else:
classmatches = []
for i in range(len(classdata)):
matches = 0
for word in msg3.content.lower().split(" "):
if word == "i":
word = "1"
elif word == "ii":
word = "2"
elif word == "iii":
word = "3"
classname = classdata['Name'][i].lower().split(" ")
for part in range(len(classname)):
if classname[part] == "i":
classname[part] = "1"
elif classname[part] == "ii":
classname[part] = "2"
elif classname[part] == "iii":
classname[part] = "3"
classname = " ".join(classname)
if word in classname:
matches += 1
if matches == len(msg3.content.split(" ")):
classmatches.append(i)
if len(classmatches) == 0:
await user.send("Class " + msg3.content + " not found, please try again. Write the class as it is written on the schedule, but abbreviations such as 'honors chem' and 'ap lang' are allowed.")
issues += 1
print(datetime.datetime.now(), "User", user.name, "had issue", issues, "with register at listclasses (class not found - " + msg3.content + ")")
continue
elif len(classmatches) == 1:
await user.send("Found class " + classdata['Name'][classmatches[0]] + ", is this correct?")
try:
msg4 = await client.wait_for('message', timeout = 60.0, check = check)
except asyncio.TimeoutError:
print(datetime.datetime.now(), "Registration for", user.name, "failed: Timed out at choose from list")
await user.send("Registration failed. You may do " + prefix + "register to retry.")
return None
if msg4.content.lower().startswith("y"):
listofclasses.append(classmatches[0])
await user.send("Class " + classdata['Name'][classmatches[0]] + " added to your schedule.")
continue
else:
await user.send("Please try again. Write the class as it is written on the schedule, but abbreviations such as 'honors chem' and 'ap lang' are allowed.")
issues += 1
print(datetime.datetime.now(), "User", user.name, "had issue", issues, "with register at listclasses (incorrect classes)")
continue
elif len(classmatches) > 8:
await user.send("Found " + str(len(classmatches)) + " matches, please be more specific.")
else:
embed = discord.Embed(title = "Multiple classes found, please select the correct one by number:", description = "0: None of these\n" + ''.join([str(j + 1) + ": " + classdata['Name'][classmatches[j]] + "\n" for j in range(len(classmatches))]), color = discord.Color.dark_purple())
chooseclass = await user.send(embed = embed)
emojilist = ['0\N{combining enclosing keycap}'] + [str(i + 1) + '\N{combining enclosing keycap}' for i in range(len(classmatches))]
for i in emojilist:
await chooseclass.add_reaction(i)
def check2(reaction, person):
nonlocal emojilist
return person == user and str(reaction) in emojilist
try:
reaction, _ = await client.wait_for('reaction_add', timeout = 60.0, check = check2)
except asyncio.TimeoutError:
print(datetime.datetime.now(), "Registration for", user.name, "failed: Timed out at choose from list")
await user.send("Registration failed. You may do " + prefix + "register to retry.")
return None
if str(reaction)[0] == "0":
await user.send("Please try again. Write the class as it is written on the schedule, but abbreviations such as 'honors chem' and 'ap lang' are allowed.")
issues += 1
print(datetime.datetime.now(), "User", user.name, "had issue", issues, "with register at listclasses (incorrect classes)")
continue
else:
listofclasses.append(classmatches[int(str(reaction)[0]) - 1])
await user.send("Class " + classdata['Name'][classmatches[int(str(reaction)[0]) - 1]] + " added to your schedule.")
continue
@client.command()
async def manregister(ctx, usermen = '', guest = '', grade = '', classes = '', *name):
if ctx.message.author.guild_permissions.administrator:
print(datetime.datetime.now(), ctx.author.name, "did command manregister")
if usermen and name and grade and classes and guest:
try:
user = ctx.message.mentions[0]
except IndexError:
await ctx.send("Invalid syntax: use " + prefix + "manregister (user) (grade) (classes, without spaces) (name)")
print(datetime.datetime.now(), ctx.author.name, "did command manregister, invalid syntax")
return None
global usrdata
if 'a' + str(user.id) in usrdata.values:
await ctx.send("User is already registered! Use " + prefix + "edit to edit their info.")
print(datetime.datetime.now(), ctx.author.name, "did command manregister, user registered")
return None
name = " ".join(name)
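# Parse the classes argument, e.g. "[1,2,3]" with no spaces, into a list of class IDs.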
classes = [int(i) for i in classes[1:-1].split(",")]
usrdata = usrdata.append(pd.DataFrame({'User': ["a" + str(user.id)], 'Classes' : [classes], 'IRL' : [name], 'Grade' : [grade], 'Guest' : [guest]}), sort = False, ignore_index = True)
usrdata.to_csv(botpath + 'users.csv', index = False, encoding = 'utf8')
usrdata = pd.read_csv(botpath + 'users.csv')
if int(guest) == 1:
await user.add_roles(discord.utils.get(schoolserver.roles, name = graderole[str(grade)]))
else:
await user.add_roles(discord.utils.get(schoolserver.roles, name = guestStatus[str(guest)]))
print(datetime.datetime.now(), "Updated", user.name, "in users.csv")
embed = discord.Embed(color = discord.Color.dark_purple())
embed.set_author(name = "Info for " + user.name + ":", icon_url = user.avatar_url)
embed.add_field(name = "Name:", value = name.title(), inline = True)
embed.add_field(name = "Grade:", value = grade, inline = True)
embed.add_field(name = "SCHOOL Status:", value = guestStatus[str(guest)], inline = False)
embed.add_field(name = "Classes:", value = ''.join([classdata.loc[int(j)]['Name'] + "\n" for j in classes[1:-1].split(', ')]), inline = False)
embed.set_thumbnail(url = user.avatar_url)
await ctx.send("Updated info for " + user.name, embed = embed)
await editwhois()
else:
await ctx.send("Invalid syntax: use " + prefix + "manregister (user) (grade) (classes, without spaces) (name)")
print(datetime.datetime.now(), ctx.author.name, "did command manregister, invalid syntax")
else:
print(datetime.datetime.now(), ctx.author.name, "did command manregister, no permissions")
await ctx.send("You do not have permissions for this command")
@client.command()
async def classinfo(ctx, *classn):
if not classn:
print(datetime.datetime.now(), ctx.author.name, "did command classinfo, no class specified")
await ctx.send("Invalid syntax: use " + prefix + "classinfo (class)")
return None
classn = " ".join(classn)
print(datetime.datetime.now(), ctx.author.name, "did command classinfo for", classn)
classmatches = []
for i in range(len(classdata)):
matches = 0
for word in classn.lower().split(" "):
if word == "i":
word = "1"
elif word == "ii":
word = "2"
elif word == "iii":
word = "3"
classname = classdata['Name'][i].lower().split(" ")
for part in range(len(classname)):
if classname[part] == "i":
classname[part] = "1"
elif classname[part] == "ii":
classname[part] = "2"
elif classname[part] == "iii":
classname[part] = "3"
classname = " ".join(classname)
if word in classname:
matches += 1
if matches == len(classn.split(" ")):
classmatches.append(i)
if len(classmatches) == 0:
await ctx.send("Class " + classn + " not found, please try again. Write the class as it is written on the schedule, but abbreviations such as 'honors chem' and 'ap lang' are allowed.")
return None
elif len(classmatches) == 1:
classn = classmatches[0]
elif len(classmatches) > 8:
await ctx.send("Found " + str(len(classmatches)) + " matches, please be more specific.")
return None
else:
embed = discord.Embed(title = "Multiple classes found, please select the correct one by number:", description = "0: None of these\n" + ''.join([str(j + 1) + ": " + classdata['Name'][classmatches[j]] + "\n" for j in range(len(classmatches))]), color = discord.Color.dark_purple())
chooseclass = await ctx.send(embed = embed)
emojilist = ['0\N{combining enclosing keycap}'] + [str(i + 1) + '\N{combining enclosing keycap}' for i in range(len(classmatches))]
for i in emojilist:
await chooseclass.add_reaction(i)
def check2(reaction, person):
nonlocal emojilist
return person == ctx.author and str(reaction) in emojilist
try:
reaction, _ = await client.wait_for('reaction_add', timeout = 60.0, check = check2)
except asyncio.TimeoutError:
print(datetime.datetime.now(), "Classinfo by", ctx.author.name, "failed: Timed out at choose from list")
await ctx.send("You took too long to choose, please do " + prefix + "classinfo to retry")
return None
if str(reaction)[0] == "0":
await ctx.send("Please try again. Write the class as it is written on the schedule, but abbreviations such as 'honors chem' and 'ap lang' are allowed. (For precalc use 'pre-calc')")
return None
else:
classn = classmatches[int(str(reaction)[0]) - 1]
users = []
for i in range(len(usrdata)):
usrclasses = usrdata['Classes'][i][1:-1].split(', ')
if str(classn) in usrclasses:
users.append(discord.utils.find(lambda m: m.id == int(usrdata['User'][i][1:]), schoolserver.members).mention + " - " + usrdata['IRL'][i].title())
embed = discord.Embed(title = "Info for " + classdata['Name'][classn] + ":", color = discord.Color.dark_purple())
if users:
embed.add_field(name = "Users in class:", value = ''.join([i + "\n" for i in users]), inline = True)
else:
embed.add_field(name = "Users in class:", value = "No users found", inline = True)
embed.set_footer(text = "ID: " + str(classn))
await ctx.send(embed = embed)
@client.command()
async def help(ctx):
print(datetime.datetime.now(), ctx.author.name, "did command help")
embed = discord.Embed(title = "SCHOOL Bot Commands:", description = "**.ping**: Pings the bot and returns the bot's latency\n**.register**: Register yourself in the SCHOOL Bot system\n**.addclasses**: Add your classes in the SCHOOL Bot system\n**.getroles**: Get your grade role if you do not have it already\n**.userinfo (user)**: Get information about a user, such as name, grade, and classes\n**.classinfo (class)**: Get a list of users in a specific class\n", color = discord.Color.dark_purple())
embed.set_footer(text = "Use .adminhelp for help with admin commands")
embed.set_thumbnail(url = client.user.avatar_url)
await ctx.send(embed = embed)
@client.command()
async def adminhelp(ctx):
print(datetime.datetime.now(), ctx.author.name, "did command adminhelp")
embed = discord.Embed(title = "SCHOOL Bot Admin Commands:", description = "**.register (user)**: Begin a user's registration process\n**.manregister (user) (grade) (classes) (name)**: Manually input a user's information\n**.delinfo (user)**: Delete a user's information\n**.edit (user) (field) (value)**: Edit a specific field in a user's info\n**.rawuserinfo (user)**: Get a user's information as it is in the system\n**.reloadclasses**: Reload the class database\n**.reloadusers**: Reload the user database\n**.whois**: Send the who-is messages (DON'T USE)\n**.reloadwhois**: Reload the who-is embeds", color = discord.Color.dark_purple())
embed.set_thumbnail(url = client.user.avatar_url)
if not ctx.author.guild_permissions.administrator:
embed.set_footer(text = "You do not have permissions to use these commands! Use .help for the commands you can use")
embed.set_author(name = "You do not have permissions to use these commands!")
await ctx.send(embed = embed)
#Who-is------------------------------------------------------------------------
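# The who-is feature maintains one editable embed per grade/status listing registered members; whois posts them once and editwhois() refreshes them after every change.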
@client.command()
async def whois(ctx):
print(datetime.datetime.now(), ctx.author.name, "did command whois")
if ctx.message.author.guild_permissions.administrator:
global teacherlist, graduatelist, seniorlist, juniorlist, sophomorelist, freshmanlist, eighthlist, seventhlist, sixthlist, school2list, otherschoollist, notinschoollist
teacherlist = await ctx.send(embed = await gradeusers(14))
graduatelist = await ctx.send(embed = await gradeusers(13))
seniorlist = await ctx.send(embed = await gradeusers(12))
juniorlist = await ctx.send(embed = await gradeusers(11))
sophomorelist = await ctx.send(embed = await gradeusers(10))
freshmanlist = await ctx.send(embed = await gradeusers(9))
eighthlist = await ctx.send(embed = await gradeusers(8))
seventhlist = await ctx.send(embed = await gradeusers(7))
sixthlist = await ctx.send(embed = await gradeusers(6))
school2list = await ctx.send(embed = await guestusers(2))
otherschoollist = await ctx.send(embed = await guestusers(3))
notinschoollist = await ctx.send(embed = await guestusers(0))
else:
print(datetime.datetime.now(), ctx.author.name, "did command whois, no permissions")
await ctx.send("You do not have permissions for this command")
async def gradeusers(grade):
gradename = {14 : "Teachers", 13 : "Graduates", 12 : "Seniors", 11 : "Juniors" , 10 : "Sophomores", 9 : "Freshmen", 8 : "8th Grade", 7 : "7th Grade", 6 : "6th Grade"}
gradecolors = {14 : discord.Color.magenta(), 13 : discord.Color.green(), 12 : discord.Color.red(), 11 : discord.Color.purple(), 10 : discord.Color.gold(), 9 : discord.Color.teal(), 8 : discord.Color.blue(), 7 : discord.Color.dark_magenta(), 6 : discord.Color.dark_gold()}
users = []
global usrdata
for i in range(len(usrdata)):
if usrdata['Grade'][i] == grade and int(usrdata['Guest'][i]) == 1:
users.append(i)
if users:
embed = discord.Embed(title = gradename[grade], description = ''.join([discord.utils.find(lambda m: m.id == int(usrdata['User'][i][1:]), schoolserver.members).mention + " - " + usrdata['IRL'][i].title() + "\n" for i in users]), color = gradecolors[grade])
else:
embed = discord.Embed(title = gradename[grade], description = "None :)", color = gradecolors[grade])
embed.set_footer(text = "Length: " + str(len(users)))
return embed
async def guestusers(guest):
guestname = {0 : "Not in SCHOOL", 2 : "SCHOOL 2", 3 : "Other SCHOOL"}
guestcolors = {0 : discord.Color.darker_grey(), 2 : discord.Color.dark_blue(), 3 : discord.Color.light_grey()}
users = []
global usrdata
for i in range(len(usrdata)):
if usrdata['Guest'][i] == guest:
users.append(i)
if users:
embed = discord.Embed(title = guestname[guest], description = ''.join([discord.utils.find(lambda m: m.id == int(usrdata['User'][i][1:]), schoolserver.members).mention + " - " + usrdata['IRL'][i].title() + "\n" for i in users]), color = guestcolors[guest])
else:
embed = discord.Embed(title = guestname[guest], description = "None :)", color = guestcolors[guest])
embed.set_footer(text = "Length: " + str(len(users)))
return embed
async def editwhois():
print(datetime.datetime.now(), "Refreshing who-is")
global teacherlist, graduatelist, seniorlist, juniorlist, sophomorelist, freshmanlist, eighthlist, seventhlist, sixthlist, school2list, otherschoollist, notinschoollist
await teacherlist.edit(embed = await gradeusers(14))
await graduatelist.edit(embed = await gradeusers(13))
await seniorlist.edit(embed = await gradeusers(12))
await juniorlist.edit(embed = await gradeusers(11))
await sophomorelist.edit(embed = await gradeusers(10))
await freshmanlist.edit(embed = await gradeusers(9))
await eighthlist.edit(embed = await gradeusers(8))
await seventhlist.edit(embed = await gradeusers(7))
await sixthlist.edit(embed = await gradeusers(6))
await school2list.edit(embed = await guestusers(2))
await otherschoollist.edit(embed = await guestusers(3))
await notinschoollist.edit(embed = await guestusers(0))
print(datetime.datetime.now(), "Refreshed who-is")
@client.command()
async def refreshwhois(ctx):
print(datetime.datetime.now(), ctx.author.name, "did command refreshwhois")
if ctx.message.author.guild_permissions.administrator:
await ctx.send("Refreshing who-is...")
try:
await editwhois()
except Exception as e:
print(datetime.datetime.now(), "Error refreshing who-is:", e)
await ctx.send("Error refreshing who-is. Check the log for details.")
else:
await ctx.send("Refreshed who-is.")
else:
print(datetime.datetime.now(), ctx.author.name, "did command refreshwhois, no permissions")
await ctx.send("You do not have permissions for this command")
#------------------------------------------------------------------------------
token = open(botpath + 'token.txt').read().strip()
client.run(token)
| 59.65257 | 646 | 0.553676 | 8,088 | 71,941 | 4.892804 | 0.057122 | 0.034165 | 0.055189 | 0.063073 | 0.832714 | 0.80747 | 0.780659 | 0.758573 | 0.746266 | 0.729336 | 0 | 0.009621 | 0.31227 | 71,941 | 1,205 | 647 | 59.702075 | 0.790222 | 0.101361 | 0 | 0.69765 | 0 | 0.020299 | 0.214494 | 0.003695 | 0.003205 | 0 | 0 | 0 | 0 | 1 | 0.007479 | false | 0 | 0.00641 | 0.003205 | 0.056624 | 0.097222 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ed1bd21b57b985c2055e23111d56c9bbda8e3957 | 8,099 | py | Python | ltr/model/head/atom.py | DeepBrainsMe/PyDoctor_Final | 49ecfc64b2a2866e7f37cc79c1f32a817975f064 | [
"MIT"
] | 1 | 2021-05-19T06:46:05.000Z | 2021-05-19T06:46:05.000Z | ltr/model/head/atom.py | DeepBrainsMe/PyDoctor_Final | 49ecfc64b2a2866e7f37cc79c1f32a817975f064 | [
"MIT"
] | null | null | null | ltr/model/head/atom.py | DeepBrainsMe/PyDoctor_Final | 49ecfc64b2a2866e7f37cc79c1f32a817975f064 | [
"MIT"
] | null | null | null | import torch.nn as nn
import ltr.model.backbone as backbones
import ltr.model.head as headmodels
from ltr import model_constructor
class ATOMnet(nn.Module):
""" ATOM network module"""
def __init__(self, feature_extractor, kp_regressor, kp_regressor_layer, extractor_grad=True):
super(ATOMnet, self).__init__()
self.feature_extractor = feature_extractor
self.kp_regressor = kp_regressor
self.kp_regressor_layer = kp_regressor_layer
if not extractor_grad:
for p in self.feature_extractor.parameters():
p.requires_grad_(False)
def forward(self, train_imgs):
""" Forward pass
Note: If the training is done in sequence mode, that is, test_imgs.dim() == 5, then the batch dimension
corresponds to the first dimensions. test_imgs is thus of the form [sequence, batch, feature, row, col]
"""
# Extract backbone features
train_feat = self.extract_backbone_features(train_imgs.reshape(-1, *train_imgs.shape[-3:]))
train_feat_kpreg = self.get_backbone_kpreg_feat(train_feat)
# Obtain keypoint prediction
kp_pred = self.kp_regressor(train_feat_kpreg)
return kp_pred
def extract_backbone_features(self, im, layers=None):
if layers is None:
layers = self.kp_regressor_layer
return self.feature_extractor(im, layers)
def extract_features(self, im, layers):
return self.feature_extractor(im, layers)
def get_backbone_kpreg_feat(self, backbone_feat):
return [backbone_feat[l] for l in self.kp_regressor_layer]
class Classnet(nn.Module):
""" ATOM network module"""
def __init__(self, feature_extractor, cls_regressor, cls_regressor_layer, extractor_grad=True):
super(Classnet, self).__init__()
self.feature_extractor = feature_extractor
self.cls_regressor = cls_regressor
self.cls_regressor_layer = cls_regressor_layer
if not extractor_grad:
for p in self.feature_extractor.parameters():
p.requires_grad_(False)
def forward(self, train_imgs):
""" Forward pass
Note: If the training is done in sequence mode, that is, test_imgs.dim() == 5, then the batch dimension
corresponds to the first dimensions. test_imgs is thus of the form [sequence, batch, feature, row, col]
"""
# Extract backbone features
train_feat = self.extract_backbone_features(train_imgs.reshape(-1, *train_imgs.shape[-3:]))
train_feat_reg = self.get_backbone_reg_feat(train_feat)
# Obtain class prediction
pred = self.cls_regressor(train_feat_reg[0])
return pred
def extract_backbone_features(self, im, layers=None):
if layers is None:
layers = self.cls_regressor_layer
return self.feature_extractor(im, layers)
def extract_features(self, im, layers):
return self.feature_extractor(im, layers)
def get_backbone_reg_feat(self, backbone_feat):
return [backbone_feat[l] for l in self.cls_regressor_layer]
class Siamesenet(nn.Module):
""" ATOM network module"""
def __init__(self, sag_feature_extractor,ax_feature_extractor, cls_regressor, cls_regressor_layer, extractor_grad=True):
super(Siamesenet, self).__init__()
self.sag_feature_extractor = sag_feature_extractor
self.ax_feature_extractor = ax_feature_extractor
self.cls_regressor = cls_regressor
self.cls_regressor_layer = cls_regressor_layer
if not extractor_grad:
for p in self.sag_feature_extractor.parameters():
p.requires_grad_(False)
for p in self.ax_feature_extractor.parameters():
p.requires_grad_(False)
def forward(self, train_imgs_sag,train_imgs_ax):
""" Forward pass
Note: If the training is done in sequence mode, that is, test_imgs.dim() == 5, then the batch dimension
corresponds to the first dimensions. test_imgs is thus of the form [sequence, batch, feature, row, col]
"""
# Extract backbone features
train_backbone_feat_sag = self.extract_sag_backbone_features(train_imgs_sag.reshape(-1, *train_imgs_sag.shape[-3:]))
train_backbone_feat_ax = self.extract_ax_backbone_features(train_imgs_ax.reshape(-1, *train_imgs_ax.shape[-3:]))
train_feat_sag = self.get_backbone_feat(train_backbone_feat_sag)
train_feat_ax = self.get_backbone_feat(train_backbone_feat_ax)
# Obtain class prediction
pred = self.cls_regressor(train_feat_sag,train_feat_ax)
return pred
def extract_sag_backbone_features(self, im, layers=None):
if layers is None:
layers = self.cls_regressor_layer
return self.sag_feature_extractor(im, layers)
def extract_ax_backbone_features(self, im, layers=None):
if layers is None:
layers = self.cls_regressor_layer
return self.ax_feature_extractor(im, layers)
def extract_sag_features(self, im, layers):
return self.sag_feature_extractor(im, layers)
def extract_ax_features(self, im, layers):
return self.ax_feature_extractor(im, layers)
def get_backbone_feat(self, backbone_feat):
return [backbone_feat[l] for l in self.cls_regressor_layer]
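# @model_constructor presumably registers these factories with the ltr framework so a network can be built by name from a training settings file.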
@model_constructor
def atom_resnet18(backbone_pretrained=True,num_cls=2):
# backbone
backbone_net = backbones.resnet18(pretrained=backbone_pretrained)
# Bounding box regressor
predictor = headmodels.Classifier(num_classes=num_cls)
net = Classnet(feature_extractor=backbone_net, cls_regressor=predictor, cls_regressor_layer=['layer4'],
extractor_grad=True)
return net
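# Minimal usage sketch (hypothetical shapes; assumes torch and the ltr packages are importable):
# net = atom_resnet18(backbone_pretrained=True, num_cls=2)
# logits = net(torch.rand(8, 3, 224, 224)) # -> [8, num_cls]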
@model_constructor
def atom_resnet50_cls(backbone_pretrained=True,num_cls=2):
# backbone
backbone_net = backbones.resnet50(pretrained=backbone_pretrained)
# Bounding box regressor
predictor = headmodels.Classifier_50(num_classes=num_cls)
net = Classnet(feature_extractor=backbone_net, cls_regressor=predictor, cls_regressor_layer=['layer4'],
extractor_grad=True)
return net
@model_constructor
def atom_resnet50(segm_input_dim=(64, 256, 512, 1024), segm_inter_dim=(4, 16, 32, 64), segm_dim=(64, 64),
backbone_pretrained=True):
# backbone
backbone_net = backbones.resnet50(pretrained=backbone_pretrained)
    # Keypoint prediction head
kp_predictor = headmodels.KeyPointNet(segm_input_dim=segm_input_dim, segm_inter_dim=segm_inter_dim,
segm_dim=segm_dim)
net = ATOMnet(feature_extractor=backbone_net, kp_regressor=kp_predictor, kp_regressor_layer=['conv1', 'layer1','layer2', 'layer3'],
extractor_grad=True)
return net
@model_constructor
def siamese_res18(backbone_pretrained=True, num_cls=2):
    # backbone
    backbone_net_sag = backbones.ournet18(pretrained=backbone_pretrained)
    backbone_net_ax = backbones.ournet18(pretrained=backbone_pretrained)
    # Siamese classification head
    predictor = headmodels.SiamClassifier(num_classes=num_cls)
    net = Siamesenet(sag_feature_extractor=backbone_net_sag, ax_feature_extractor=backbone_net_ax,
                     cls_regressor=predictor, cls_regressor_layer=['layer4'], extractor_grad=True)
return net
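# A hypothetical two-view call into the Siamese constructor above (input shapes
# are assumptions for illustration; sequence mode would prepend a sequence dim):
#
#   net = siamese_res18(backbone_pretrained=False, num_cls=2)
#   sag = torch.randn(4, 3, 224, 224)   # sagittal view
#   ax = torch.randn(4, 3, 224, 224)    # axial view
#   logits = net(sag, ax)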
@model_constructor
def ours_res50(backbone_pretrained=True, num_cls=2):
    # backbone
    backbone_net_sag = backbones.ournet50(pretrained=backbone_pretrained)
    # Classification head
    predictor = headmodels.Classifier_50(num_classes=num_cls)
net = Classnet(feature_extractor=backbone_net_sag, cls_regressor=predictor, cls_regressor_layer=['layer4'],
extractor_grad=True)
return net
@model_constructor
def ours_res18(backbone_pretrained=True, num_cls=2):
    # backbone
    backbone_net_sag = backbones.ournet18(pretrained=backbone_pretrained)
    # Classification head
    predictor = headmodels.Classifier(num_classes=num_cls)
net = Classnet(feature_extractor=backbone_net_sag, cls_regressor=predictor, cls_regressor_layer=['layer4'],
extractor_grad=True)
return net | 37.322581 | 135 | 0.713792 | 1,037 | 8,099 | 5.249759 | 0.118611 | 0.08817 | 0.049963 | 0.02939 | 0.820353 | 0.788391 | 0.764879 | 0.749633 | 0.708303 | 0.690669 | 0 | 0.012282 | 0.205828 | 8,099 | 217 | 136 | 37.322581 | 0.834111 | 0.130757 | 0 | 0.572581 | 0 | 0 | 0.007668 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.185484 | false | 0 | 0.032258 | 0.056452 | 0.403226 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ed1d932ccc902fe347cbae1b80f41d1f1429f748 | 66 | py | Python | src/core_pipeline.py | Tpool1/Asclepius | 760ab31a8933772faa76064a42b11ab6e12d6c9a | [
"MIT"
] | null | null | null | src/core_pipeline.py | Tpool1/Asclepius | 760ab31a8933772faa76064a42b11ab6e12d6c9a | [
"MIT"
] | null | null | null | src/core_pipeline.py | Tpool1/Asclepius | 760ab31a8933772faa76064a42b11ab6e12d6c9a | [
"MIT"
] | null | null | null | from plugins import *
from packages import *
from models import *
| 16.5 | 22 | 0.772727 | 9 | 66 | 5.666667 | 0.555556 | 0.392157 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 66 | 3 | 23 | 22 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ed34f35bdb57bf10a67421ab18a7355d085695f9 | 32,322 | py | Python | pymatgen/electronic_structure/tests/test_dos.py | naik-aakash/pymatgen | 394e0d71bf1d1025fcf75498cbb16aa3f41ce78c | [
"MIT"
] | null | null | null | pymatgen/electronic_structure/tests/test_dos.py | naik-aakash/pymatgen | 394e0d71bf1d1025fcf75498cbb16aa3f41ce78c | [
"MIT"
] | null | null | null | pymatgen/electronic_structure/tests/test_dos.py | naik-aakash/pymatgen | 394e0d71bf1d1025fcf75498cbb16aa3f41ce78c | [
"MIT"
] | null | null | null | # Copyright (c) Pymatgen Development Team.
# Distributed under the terms of the MIT License.
import json
import os
import unittest
import numpy as np
from monty.serialization import loadfn
from pymatgen.core.periodic_table import Element
from pymatgen.core.structure import Structure
from pymatgen.electronic_structure.core import Orbital, OrbitalType, Spin
from pymatgen.electronic_structure.dos import (
DOS,
CompleteDos,
FermiDos,
LobsterCompleteDos,
)
from pymatgen.util.testing import PymatgenTest
class DosTest(unittest.TestCase):
def setUp(self):
with open(os.path.join(PymatgenTest.TEST_FILES_DIR, "complete_dos.json")) as f:
self.dos = CompleteDos.from_dict(json.load(f))
def test_get_gap(self):
dos = self.dos
self.assertAlmostEqual(dos.get_gap(), 2.0589, 4)
self.assertEqual(len(dos.energies), 301)
self.assertAlmostEqual(
dos.get_interpolated_gap(tol=0.001, abs_tol=False, spin=None)[0],
2.16815942458015,
7,
)
self.assertAlmostEqual(dos.get_cbm_vbm(), (3.8729, 1.8140000000000001))
self.assertAlmostEqual(dos.get_interpolated_value(9.9)[Spin.up], 1.744588888888891, 7)
self.assertAlmostEqual(dos.get_interpolated_value(9.9)[Spin.down], 1.756888888888886, 7)
self.assertRaises(ValueError, dos.get_interpolated_value, 1000)
def test_get_smeared_densities(self):
dos = self.dos
smeared = dos.get_smeared_densities(0.2)
dens = dos.densities
for spin in Spin:
self.assertAlmostEqual(sum(dens[spin]), sum(smeared[spin]))
def test_as_dict(self):
dos_dict = self.dos.as_dict()
self.assertIsInstance(dos_dict["energies"], list)
self.assertIsInstance(dos_dict["energies"][0], float)
self.assertNotIsInstance(dos_dict["energies"][0], np.float64)
self.assertIsInstance(dos_dict["densities"]["1"], list)
self.assertIsInstance(dos_dict["densities"]["1"][0], float)
self.assertNotIsInstance(dos_dict["densities"]["1"][0], np.float64)
class FermiDosTest(unittest.TestCase):
def setUp(self):
with open(os.path.join(PymatgenTest.TEST_FILES_DIR, "complete_dos.json")) as f:
self.dos = CompleteDos.from_dict(json.load(f))
self.dos = FermiDos(self.dos)
def test_doping_fermi(self):
T = 300
fermi0 = self.dos.efermi
frange = [fermi0 - 0.5, fermi0, fermi0 + 2.0, fermi0 + 2.2]
dopings = [self.dos.get_doping(fermi_level=f, temperature=T) for f in frange]
ref_dopings = [3.48077e21, 1.9235e18, -2.6909e16, -4.8723e19]
for i, c_ref in enumerate(ref_dopings):
self.assertLessEqual(abs(dopings[i] / c_ref - 1.0), 0.01)
calc_fermis = [self.dos.get_fermi(concentration=c, temperature=T) for c in ref_dopings]
for j, f_ref in enumerate(frange):
self.assertAlmostEqual(calc_fermis[j], f_ref, 4)
sci_dos = FermiDos(self.dos, bandgap=3.0)
self.assertEqual(sci_dos.get_gap(), 3.0)
old_cbm, old_vbm = self.dos.get_cbm_vbm()
old_gap = old_cbm - old_vbm
new_cbm, new_vbm = sci_dos.get_cbm_vbm()
self.assertAlmostEqual(new_cbm - old_cbm, (3.0 - old_gap) / 2.0)
self.assertAlmostEqual(old_vbm - new_vbm, (3.0 - old_gap) / 2.0)
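        # A sketch of the symmetric "scissor" shift these two checks encode, assuming
        # FermiDos(bandgap=3.0) splits the correction evenly between the band edges:
        #   new_cbm = old_cbm + (3.0 - old_gap) / 2
        #   new_vbm = old_vbm - (3.0 - old_gap) / 2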
for i, c_ref in enumerate(ref_dopings):
if c_ref < 0:
self.assertAlmostEqual(sci_dos.get_fermi(c_ref, temperature=T) - frange[i], 0.47, places=2)
else:
self.assertAlmostEqual(sci_dos.get_fermi(c_ref, temperature=T) - frange[i], -0.47, places=2)
self.assertAlmostEqual(sci_dos.get_fermi_interextrapolated(-1e26, 300), 7.5108, 4)
self.assertAlmostEqual(sci_dos.get_fermi_interextrapolated(1e26, 300), -1.4182, 4)
self.assertAlmostEqual(sci_dos.get_fermi_interextrapolated(0.0, 300), 2.9071, 4)
def test_as_dict(self):
dos_dict = self.dos.as_dict()
self.assertIsInstance(dos_dict["energies"], list)
self.assertIsInstance(dos_dict["energies"][0], float)
self.assertNotIsInstance(dos_dict["energies"][0], np.float64)
self.assertIsInstance(dos_dict["densities"]["1"], list)
self.assertIsInstance(dos_dict["densities"]["1"][0], float)
self.assertNotIsInstance(dos_dict["densities"]["1"][0], np.float64)
class CompleteDosTest(unittest.TestCase):
def setUp(self):
with open(os.path.join(PymatgenTest.TEST_FILES_DIR, "complete_dos.json")) as f:
self.dos = CompleteDos.from_dict(json.load(f))
def test_get_gap(self):
dos = self.dos
self.assertAlmostEqual(dos.get_gap(), 2.0589, 4, "Wrong gap from dos!")
self.assertEqual(len(dos.energies), 301)
self.assertAlmostEqual(
dos.get_interpolated_gap(tol=0.001, abs_tol=False, spin=None)[0],
2.16815942458015,
7,
)
spd_dos = dos.get_spd_dos()
self.assertEqual(len(spd_dos), 3)
el_dos = dos.get_element_dos()
self.assertEqual(len(el_dos), 4)
sum_spd = spd_dos[OrbitalType.s] + spd_dos[OrbitalType.p] + spd_dos[OrbitalType.d]
sum_element = None
for pdos in el_dos.values():
if sum_element is None:
sum_element = pdos
else:
sum_element += pdos
        # The sum of the SPD-projected DOS and the sum of the element-projected DOS should be the same.
self.assertTrue((abs(sum_spd.energies - sum_element.energies) < 0.0001).all())
self.assertTrue((abs(sum_spd.densities[Spin.up] - sum_element.densities[Spin.up]) < 0.0001).all())
self.assertTrue((abs(sum_spd.densities[Spin.down] - sum_element.densities[Spin.down]) < 0.0001).all())
site = dos.structure[0]
self.assertIsNotNone(dos.get_site_dos(site))
self.assertAlmostEqual(sum(dos.get_site_dos(site).get_densities(Spin.up)), 2.0391)
self.assertAlmostEqual(sum(dos.get_site_dos(site).get_densities(Spin.down)), 2.0331999999999995)
self.assertIsNotNone(dos.get_site_orbital_dos(site, Orbital.s))
egt2g = dos.get_site_t2g_eg_resolved_dos(site)
self.assertAlmostEqual(sum(egt2g["e_g"].get_densities(Spin.up)), 0.0)
self.assertAlmostEqual(sum(egt2g["t2g"].get_densities(Spin.up)), 0.0)
egt2g = dos.get_site_t2g_eg_resolved_dos(dos.structure[4])
self.assertAlmostEqual(sum(egt2g["e_g"].get_densities(Spin.up)), 15.004399999999997)
self.assertAlmostEqual(sum(egt2g["t2g"].get_densities(Spin.up)), 22.910399999999999)
self.assertAlmostEqual(dos.get_cbm_vbm(), (3.8729, 1.8140000000000001))
self.assertAlmostEqual(dos.get_interpolated_value(9.9)[Spin.up], 1.744588888888891, 7)
self.assertAlmostEqual(dos.get_interpolated_value(9.9)[Spin.down], 1.756888888888886, 7)
self.assertRaises(ValueError, dos.get_interpolated_value, 1000)
def test_to_from_dict(self):
d = self.dos.as_dict()
dos = CompleteDos.from_dict(d)
el_dos = dos.get_element_dos()
self.assertEqual(len(el_dos), 4)
spd_dos = dos.get_spd_dos()
sum_spd = spd_dos[OrbitalType.s] + spd_dos[OrbitalType.p] + spd_dos[OrbitalType.d]
sum_element = None
for pdos in el_dos.values():
if sum_element is None:
sum_element = pdos
else:
sum_element += pdos
        # The sum of the SPD-projected DOS and the sum of the element-projected DOS should be the same.
self.assertTrue((abs(sum_spd.energies - sum_element.energies) < 0.0001).all())
def test_str(self):
self.assertIsNotNone(str(self.dos))
def test_as_dict(self):
dos_dict = self.dos.as_dict()
self.assertIsInstance(dos_dict["energies"], list)
self.assertIsInstance(dos_dict["energies"][0], float)
self.assertNotIsInstance(dos_dict["energies"][0], np.float64)
self.assertIsInstance(dos_dict["densities"]["1"], list)
self.assertIsInstance(dos_dict["densities"]["1"][0], float)
self.assertNotIsInstance(dos_dict["densities"]["1"][0], np.float64)
class DOSTest(PymatgenTest):
def setUp(self):
with open(os.path.join(PymatgenTest.TEST_FILES_DIR, "complete_dos.json")) as f:
d = json.load(f)
y = list(zip(d["densities"]["1"], d["densities"]["-1"]))
self.dos = DOS(d["energies"], y, d["efermi"])
def test_get_gap(self):
dos = self.dos
self.assertAlmostEqual(dos.get_gap(), 2.0589, 4)
self.assertEqual(len(dos.x), 301)
self.assertAlmostEqual(
dos.get_interpolated_gap(tol=0.001, abs_tol=False, spin=None)[0],
2.16815942458015,
7,
)
self.assertArrayAlmostEqual(dos.get_cbm_vbm(), (3.8729, 1.8140000000000001))
self.assertAlmostEqual(dos.get_interpolated_value(9.9)[0], 1.744588888888891, 7)
self.assertAlmostEqual(dos.get_interpolated_value(9.9)[1], 1.756888888888886, 7)
self.assertRaises(ValueError, dos.get_interpolated_value, 1000)
self.assertArrayAlmostEqual(dos.get_cbm_vbm(spin=Spin.up), (3.8729, 1.2992999999999999))
self.assertArrayAlmostEqual(dos.get_cbm_vbm(spin=Spin.down), (4.645, 1.8140000000000001))
class SpinPolarizationTest(unittest.TestCase):
def test_spin_polarization(self):
dos_path = os.path.join(PymatgenTest.TEST_FILES_DIR, "dos_spin_polarization_mp-865805.json")
dos = loadfn(dos_path)
self.assertAlmostEqual(dos.spin_polarization, 0.6460514663341762)
class LobsterCompleteDosTest(unittest.TestCase):
def setUp(self):
with open(os.path.join(PymatgenTest.TEST_FILES_DIR, "LobsterCompleteDos_spin.json")) as f:
data_spin = json.load(f)
self.LobsterCompleteDOS_spin = LobsterCompleteDos.from_dict(data_spin)
with open(os.path.join(PymatgenTest.TEST_FILES_DIR, "LobsterCompleteDos_nonspin.json")) as f:
data_nonspin = json.load(f)
self.LobsterCompleteDOS_nonspin = LobsterCompleteDos.from_dict(data_nonspin)
with open(os.path.join(PymatgenTest.TEST_FILES_DIR, "structure_KF.json")) as f:
data_structure = json.load(f)
self.structure = Structure.from_dict(data_structure)
with open(os.path.join(PymatgenTest.TEST_FILES_DIR, "LobsterCompleteDos_MnO.json")) as f:
data_MnO = json.load(f)
self.LobsterCompleteDOS_MnO = LobsterCompleteDos.from_dict(data_MnO)
with open(os.path.join(PymatgenTest.TEST_FILES_DIR, "LobsterCompleteDos_MnO_nonspin.json")) as f:
data_MnO_nonspin = json.load(f)
self.LobsterCompleteDOS_MnO_nonspin = LobsterCompleteDos.from_dict(data_MnO_nonspin)
with open(os.path.join(PymatgenTest.TEST_FILES_DIR, "structure_MnO.json")) as f:
data_MnO = json.load(f)
self.structure_MnO = Structure.from_dict(data_MnO)
def test_get_site_orbital_dos(self):
# with spin polarization
energies_spin = [-11.25000, -7.50000, -3.75000, 0.00000, 3.75000, 7.50000]
fermi = 0.0
PDOS_F_2s_up = [0.00000, 0.00159, 0.00000, 0.00011, 0.00000, 0.00069]
PDOS_F_2s_down = [0.00000, 0.00159, 0.00000, 0.00011, 0.00000, 0.00069]
PDOS_F_2py_up = [0.00000, 0.00160, 0.00000, 0.25801, 0.00000, 0.00029]
PDOS_F_2py_down = [0.00000, 0.00161, 0.00000, 0.25819, 0.00000, 0.00029]
PDOS_F_2pz_up = [0.00000, 0.00161, 0.00000, 0.25823, 0.00000, 0.00029]
PDOS_F_2pz_down = [0.00000, 0.00160, 0.00000, 0.25795, 0.00000, 0.00029]
PDOS_F_2px_up = [0.00000, 0.00160, 0.00000, 0.25805, 0.00000, 0.00029]
PDOS_F_2px_down = [0.00000, 0.00161, 0.00000, 0.25814, 0.00000, 0.00029]
self.assertListEqual(
self.LobsterCompleteDOS_spin.get_site_orbital_dos(site=self.structure[0], orbital="2s").energies.tolist(),
energies_spin,
)
self.assertAlmostEqual(
self.LobsterCompleteDOS_spin.get_site_orbital_dos(site=self.structure[0], orbital="2s").efermi,
fermi,
)
self.assertListEqual(
self.LobsterCompleteDOS_spin.get_site_orbital_dos(site=self.structure[0], orbital="2s")
.densities[Spin.up]
.tolist(),
PDOS_F_2s_up,
)
self.assertListEqual(
self.LobsterCompleteDOS_spin.get_site_orbital_dos(site=self.structure[0], orbital="2s")
.densities[Spin.down]
.tolist(),
PDOS_F_2s_down,
)
self.assertListEqual(
self.LobsterCompleteDOS_spin.get_site_orbital_dos(site=self.structure[0], orbital="2p_z").energies.tolist(),
energies_spin,
)
self.assertAlmostEqual(
self.LobsterCompleteDOS_spin.get_site_orbital_dos(site=self.structure[0], orbital="2p_z").efermi,
fermi,
)
self.assertListEqual(
self.LobsterCompleteDOS_spin.get_site_orbital_dos(site=self.structure[0], orbital="2p_y")
.densities[Spin.up]
.tolist(),
PDOS_F_2py_up,
)
self.assertListEqual(
self.LobsterCompleteDOS_spin.get_site_orbital_dos(site=self.structure[0], orbital="2p_y")
.densities[Spin.down]
.tolist(),
PDOS_F_2py_down,
)
self.assertListEqual(
self.LobsterCompleteDOS_spin.get_site_orbital_dos(site=self.structure[0], orbital="2p_y").energies.tolist(),
energies_spin,
)
self.assertAlmostEqual(
self.LobsterCompleteDOS_spin.get_site_orbital_dos(site=self.structure[0], orbital="2p_y").efermi,
fermi,
)
self.assertListEqual(
self.LobsterCompleteDOS_spin.get_site_orbital_dos(site=self.structure[0], orbital="2p_z")
.densities[Spin.up]
.tolist(),
PDOS_F_2pz_up,
)
self.assertListEqual(
self.LobsterCompleteDOS_spin.get_site_orbital_dos(site=self.structure[0], orbital="2p_z")
.densities[Spin.down]
.tolist(),
PDOS_F_2pz_down,
)
self.assertAlmostEqual(
self.LobsterCompleteDOS_spin.get_site_orbital_dos(site=self.structure[0], orbital="2p_z").efermi,
fermi,
)
self.assertListEqual(
self.LobsterCompleteDOS_spin.get_site_orbital_dos(site=self.structure[0], orbital="2p_x").energies.tolist(),
energies_spin,
)
self.assertListEqual(
self.LobsterCompleteDOS_spin.get_site_orbital_dos(site=self.structure[0], orbital="2p_x")
.densities[Spin.up]
.tolist(),
PDOS_F_2px_up,
)
self.assertListEqual(
self.LobsterCompleteDOS_spin.get_site_orbital_dos(site=self.structure[0], orbital="2p_x")
.densities[Spin.down]
.tolist(),
PDOS_F_2px_down,
)
self.assertAlmostEqual(
self.LobsterCompleteDOS_spin.get_site_orbital_dos(site=self.structure[0], orbital="2p_x").efermi,
fermi,
)
# without spin polarization
energies_nonspin = [-11.25000, -7.50000, -3.75000, 0.00000, 3.75000, 7.50000]
PDOS_F_2s = [0.00000, 0.00320, 0.00000, 0.00017, 0.00000, 0.00060]
PDOS_F_2py = [0.00000, 0.00322, 0.00000, 0.51635, 0.00000, 0.00037]
PDOS_F_2pz = [0.00000, 0.00322, 0.00000, 0.51636, 0.00000, 0.00037]
PDOS_F_2px = [0.00000, 0.00322, 0.00000, 0.51634, 0.00000, 0.00037]
self.assertListEqual(
self.LobsterCompleteDOS_nonspin.get_site_orbital_dos(
site=self.structure[0], orbital="2s"
).energies.tolist(),
energies_nonspin,
)
self.assertAlmostEqual(
self.LobsterCompleteDOS_nonspin.get_site_orbital_dos(site=self.structure[0], orbital="2s").efermi,
fermi,
)
self.assertListEqual(
self.LobsterCompleteDOS_nonspin.get_site_orbital_dos(site=self.structure[0], orbital="2s")
.densities[Spin.up]
.tolist(),
PDOS_F_2s,
)
self.assertListEqual(
self.LobsterCompleteDOS_nonspin.get_site_orbital_dos(
site=self.structure[0], orbital="2p_y"
).energies.tolist(),
energies_nonspin,
)
self.assertAlmostEqual(
self.LobsterCompleteDOS_nonspin.get_site_orbital_dos(site=self.structure[0], orbital="2p_y").efermi,
fermi,
)
self.assertListEqual(
self.LobsterCompleteDOS_nonspin.get_site_orbital_dos(site=self.structure[0], orbital="2p_y")
.densities[Spin.up]
.tolist(),
PDOS_F_2py,
)
self.assertListEqual(
self.LobsterCompleteDOS_nonspin.get_site_orbital_dos(
site=self.structure[0], orbital="2p_z"
).energies.tolist(),
energies_nonspin,
)
self.assertAlmostEqual(
self.LobsterCompleteDOS_nonspin.get_site_orbital_dos(site=self.structure[0], orbital="2p_z").efermi,
fermi,
)
self.assertListEqual(
self.LobsterCompleteDOS_nonspin.get_site_orbital_dos(site=self.structure[0], orbital="2p_z")
.densities[Spin.up]
.tolist(),
PDOS_F_2pz,
)
self.assertListEqual(
self.LobsterCompleteDOS_nonspin.get_site_orbital_dos(
site=self.structure[0], orbital="2p_x"
).energies.tolist(),
energies_nonspin,
)
self.assertAlmostEqual(
self.LobsterCompleteDOS_nonspin.get_site_orbital_dos(site=self.structure[0], orbital="2p_x").efermi,
fermi,
)
self.assertListEqual(
self.LobsterCompleteDOS_nonspin.get_site_orbital_dos(site=self.structure[0], orbital="2p_x")
.densities[Spin.up]
.tolist(),
PDOS_F_2px,
)
def test_get_site_t2g_eg_resolved_dos(self):
# with spin polarization
energies = [-11.25000, -7.50000, -3.75000, 0.00000, 3.75000, 7.50000]
efermi = 0.0
PDOS_Mn_3dxy_up = [0.00000, 0.00001, 0.10301, 0.16070, 0.00070, 0.00060]
PDOS_Mn_3dxy_down = [0.00000, 0.00000, 0.00380, 0.00996, 0.03012, 0.21890]
PDOS_Mn_3dyz_up = [0.00000, 0.00001, 0.10301, 0.16070, 0.00070, 0.00060]
PDOS_Mn_3dyz_down = [0.00000, 0.00000, 0.00380, 0.00996, 0.03012, 0.21890]
PDOS_Mn_3dz2_up = [0.00000, 0.00001, 0.09608, 0.16941, 0.00028, 0.00028]
PDOS_Mn_3dz2_down = [0.00000, 0.00000, 0.00433, 0.00539, 0.06000, 0.19427]
PDOS_Mn_3dxz_up = [0.00000, 0.00001, 0.09746, 0.16767, 0.00036, 0.00034]
PDOS_Mn_3dxz_down = [0.00000, 0.00000, 0.00422, 0.00630, 0.05402, 0.19919]
PDOS_Mn_3dx2_up = [0.00000, 0.00001, 0.09330, 0.17289, 0.00011, 0.00015]
PDOS_Mn_3dx2_down = [0.00000, 0.00000, 0.00454, 0.00356, 0.07195, 0.18442]
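        # Octahedral crystal-field grouping used for the sums below: e_g collects the
        # d_z2 and d_x2-y2 orbitals, while t2g collects d_xy, d_xz and d_yz.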
PDOS_Mn_eg_up = (np.array(PDOS_Mn_3dx2_up) + np.array(PDOS_Mn_3dz2_up)).tolist()
PDOS_Mn_eg_down = (np.array(PDOS_Mn_3dx2_down) + np.array(PDOS_Mn_3dz2_down)).tolist()
PDOS_Mn_t2g_up = (np.array(PDOS_Mn_3dxy_up) + np.array(PDOS_Mn_3dxz_up) + np.array(PDOS_Mn_3dyz_up)).tolist()
PDOS_Mn_t2g_down = (
np.array(PDOS_Mn_3dxy_down) + np.array(PDOS_Mn_3dxz_down) + np.array(PDOS_Mn_3dyz_down)
).tolist()
for iel, el in enumerate(
self.LobsterCompleteDOS_MnO.get_site_t2g_eg_resolved_dos(self.structure_MnO[1])["e_g"]
.densities[Spin.up]
.tolist()
):
self.assertAlmostEqual(el, PDOS_Mn_eg_up[iel])
for iel, el in enumerate(
self.LobsterCompleteDOS_MnO.get_site_t2g_eg_resolved_dos(self.structure_MnO[1])["e_g"]
.densities[Spin.down]
.tolist()
):
self.assertAlmostEqual(el, PDOS_Mn_eg_down[iel])
for iel, el in enumerate(
self.LobsterCompleteDOS_MnO.get_site_t2g_eg_resolved_dos(self.structure_MnO[1])["t2g"]
.densities[Spin.up]
.tolist()
):
self.assertAlmostEqual(el, PDOS_Mn_t2g_up[iel])
for iel, el in enumerate(
self.LobsterCompleteDOS_MnO.get_site_t2g_eg_resolved_dos(self.structure_MnO[1])["t2g"]
.densities[Spin.down]
.tolist()
):
self.assertAlmostEqual(el, PDOS_Mn_t2g_down[iel])
self.assertListEqual(
energies,
self.LobsterCompleteDOS_MnO.get_site_t2g_eg_resolved_dos(self.structure_MnO[1])["e_g"].energies.tolist(),
)
self.assertListEqual(
energies,
self.LobsterCompleteDOS_MnO.get_site_t2g_eg_resolved_dos(self.structure_MnO[1])["t2g"].energies.tolist(),
)
self.assertEqual(
efermi,
self.LobsterCompleteDOS_MnO.get_site_t2g_eg_resolved_dos(self.structure_MnO[1])["e_g"].efermi,
)
self.assertEqual(
efermi,
self.LobsterCompleteDOS_MnO.get_site_t2g_eg_resolved_dos(self.structure_MnO[1])["t2g"].efermi,
)
# without spin polarization
energies_nonspin = [-11.25000, -7.50000, -3.75000, 0.00000, 3.75000, 7.50000]
PDOS_Mn_3dxy = [0.00000, 0.00000, 0.02032, 0.16094, 0.33659, 0.01291]
PDOS_Mn_3dyz = [0.00000, 0.00000, 0.02032, 0.16126, 0.33628, 0.01290]
PDOS_Mn_3dz2 = [0.00000, 0.00000, 0.02591, 0.31460, 0.18658, 0.00509]
PDOS_Mn_3dxz = [0.00000, 0.00000, 0.02484, 0.28501, 0.21541, 0.00663]
PDOS_Mn_3dx2 = [0.00000, 0.00000, 0.02817, 0.37594, 0.12669, 0.00194]
PDOS_Mn_eg = (np.array(PDOS_Mn_3dx2) + np.array(PDOS_Mn_3dz2)).tolist()
PDOS_Mn_t2g = (np.array(PDOS_Mn_3dxy) + np.array(PDOS_Mn_3dxz) + np.array(PDOS_Mn_3dyz)).tolist()
for iel, el in enumerate(
self.LobsterCompleteDOS_MnO_nonspin.get_site_t2g_eg_resolved_dos(self.structure_MnO[1])["e_g"]
.densities[Spin.up]
.tolist()
):
self.assertAlmostEqual(el, PDOS_Mn_eg[iel])
for iel, el in enumerate(
self.LobsterCompleteDOS_MnO_nonspin.get_site_t2g_eg_resolved_dos(self.structure_MnO[1])["t2g"]
.densities[Spin.up]
.tolist()
):
self.assertAlmostEqual(el, PDOS_Mn_t2g[iel])
self.assertListEqual(
energies_nonspin,
self.LobsterCompleteDOS_MnO_nonspin.get_site_t2g_eg_resolved_dos(self.structure_MnO[1])[
"e_g"
].energies.tolist(),
)
self.assertListEqual(
energies_nonspin,
self.LobsterCompleteDOS_MnO_nonspin.get_site_t2g_eg_resolved_dos(self.structure_MnO[1])[
"t2g"
].energies.tolist(),
)
self.assertEqual(
efermi,
self.LobsterCompleteDOS_MnO_nonspin.get_site_t2g_eg_resolved_dos(self.structure_MnO[1])["e_g"].efermi,
)
self.assertEqual(
efermi,
self.LobsterCompleteDOS_MnO_nonspin.get_site_t2g_eg_resolved_dos(self.structure_MnO[1])["t2g"].efermi,
)
def test_get_spd_dos(self):
# with spin polarization
energies_spin = [-11.25000, -7.50000, -3.75000, 0.00000, 3.75000, 7.50000]
fermi = 0.0
PDOS_F_2s_up = [0.00000, 0.00159, 0.00000, 0.00011, 0.00000, 0.00069]
PDOS_F_2s_down = [0.00000, 0.00159, 0.00000, 0.00011, 0.00000, 0.00069]
PDOS_F_2py_up = [0.00000, 0.00160, 0.00000, 0.25801, 0.00000, 0.00029]
PDOS_F_2py_down = [0.00000, 0.00161, 0.00000, 0.25819, 0.00000, 0.00029]
PDOS_F_2pz_up = [0.00000, 0.00161, 0.00000, 0.25823, 0.00000, 0.00029]
PDOS_F_2pz_down = [0.00000, 0.00160, 0.00000, 0.25795, 0.00000, 0.00029]
PDOS_F_2px_up = [0.00000, 0.00160, 0.00000, 0.25805, 0.00000, 0.00029]
PDOS_F_2px_down = [0.00000, 0.00161, 0.00000, 0.25814, 0.00000, 0.00029]
PDOS_K_3s_up = [0.00000, 0.00000, 0.00000, 0.00008, 0.00000, 0.00007]
PDOS_K_3s_down = [0.00000, 0.00000, 0.00000, 0.00008, 0.00000, 0.00007]
PDOS_K_4s_up = [0.00000, 0.00018, 0.00000, 0.02035, 0.00000, 0.02411]
PDOS_K_4s_down = [0.00000, 0.00018, 0.00000, 0.02036, 0.00000, 0.02420]
PDOS_K_3py_up = [0.00000, 0.26447, 0.00000, 0.00172, 0.00000, 0.00000]
PDOS_K_3py_down = [0.00000, 0.26446, 0.00000, 0.00172, 0.00000, 0.00000]
PDOS_K_3pz_up = [0.00000, 0.26446, 0.00000, 0.00172, 0.00000, 0.00000]
PDOS_K_3pz_down = [0.00000, 0.26447, 0.00000, 0.00172, 0.00000, 0.00000]
PDOS_K_3px_up = [0.00000, 0.26447, 0.00000, 0.00172, 0.00000, 0.00000]
PDOS_K_3px_down = [0.00000, 0.26446, 0.00000, 0.00172, 0.00000, 0.00000]
PDOS_s_up = (np.array(PDOS_F_2s_up) + np.array(PDOS_K_3s_up) + np.array(PDOS_K_4s_up)).tolist()
PDOS_s_down = (np.array(PDOS_F_2s_down) + np.array(PDOS_K_3s_down) + np.array(PDOS_K_4s_down)).tolist()
PDOS_p_up = (
np.array(PDOS_F_2py_up)
+ np.array(PDOS_F_2pz_up)
+ np.array(PDOS_F_2px_up)
+ np.array(PDOS_K_3py_up)
+ np.array(PDOS_K_3pz_up)
+ np.array(PDOS_K_3px_up)
).tolist()
PDOS_p_down = (
np.array(PDOS_F_2py_down)
+ np.array(PDOS_F_2pz_down)
+ np.array(PDOS_F_2px_down)
+ np.array(PDOS_K_3py_down)
+ np.array(PDOS_K_3pz_down)
+ np.array(PDOS_K_3px_down)
).tolist()
self.assertListEqual(
self.LobsterCompleteDOS_spin.get_spd_dos()[OrbitalType(0)].energies.tolist(),
energies_spin,
)
self.assertEqual(self.LobsterCompleteDOS_spin.get_spd_dos()[OrbitalType(0)].efermi, fermi)
for ilistel, listel in enumerate(
self.LobsterCompleteDOS_spin.get_spd_dos()[OrbitalType(0)].densities[Spin.up].tolist()
):
self.assertAlmostEqual(listel, PDOS_s_up[ilistel])
for ilistel, listel in enumerate(
self.LobsterCompleteDOS_spin.get_spd_dos()[OrbitalType(0)].densities[Spin.down].tolist()
):
self.assertAlmostEqual(listel, PDOS_s_down[ilistel])
for ilistel, listel in enumerate(
self.LobsterCompleteDOS_spin.get_spd_dos()[OrbitalType(1)].densities[Spin.up].tolist()
):
self.assertAlmostEqual(listel, PDOS_p_up[ilistel])
for ilistel, listel in enumerate(
self.LobsterCompleteDOS_spin.get_spd_dos()[OrbitalType(1)].densities[Spin.down].tolist()
):
self.assertAlmostEqual(listel, PDOS_p_down[ilistel])
# without spin polarization
energies_nonspin = [-11.25000, -7.50000, -3.75000, 0.00000, 3.75000, 7.50000]
PDOS_F_2s = [0.00000, 0.00320, 0.00000, 0.00017, 0.00000, 0.00060]
PDOS_F_2py = [0.00000, 0.00322, 0.00000, 0.51635, 0.00000, 0.00037]
PDOS_F_2pz = [0.00000, 0.00322, 0.00000, 0.51636, 0.00000, 0.00037]
PDOS_F_2px = [0.00000, 0.00322, 0.00000, 0.51634, 0.00000, 0.00037]
PDOS_K_3s = [0.00000, 0.00000, 0.00000, 0.00005, 0.00000, 0.00004]
PDOS_K_4s = [0.00000, 0.00040, 0.00000, 0.04039, 0.00000, 0.02241]
PDOS_K_3py = [0.00000, 0.52891, 0.00000, 0.00345, 0.00000, 0.00000]
PDOS_K_3pz = [0.00000, 0.52891, 0.00000, 0.00345, 0.00000, 0.00000]
PDOS_K_3px = [0.00000, 0.52891, 0.00000, 0.00345, 0.00000, 0.00000]
PDOS_s = (np.array(PDOS_F_2s) + np.array(PDOS_K_3s) + np.array(PDOS_K_4s)).tolist()
PDOS_p = (
np.array(PDOS_F_2py)
+ np.array(PDOS_F_2pz)
+ np.array(PDOS_F_2px)
+ np.array(PDOS_K_3py)
+ np.array(PDOS_K_3pz)
+ np.array(PDOS_K_3px)
).tolist()
self.assertListEqual(
self.LobsterCompleteDOS_nonspin.get_spd_dos()[OrbitalType(0)].energies.tolist(),
energies_nonspin,
)
for ilistel, listel in enumerate(
self.LobsterCompleteDOS_nonspin.get_spd_dos()[OrbitalType(0)].densities[Spin.up].tolist()
):
self.assertAlmostEqual(listel, PDOS_s[ilistel])
for ilistel, listel in enumerate(
self.LobsterCompleteDOS_nonspin.get_spd_dos()[OrbitalType(1)].densities[Spin.up].tolist()
):
self.assertAlmostEqual(listel, PDOS_p[ilistel])
def test_get_element_spd_dos(self):
# with spin polarization
energies_spin = [-11.25000, -7.50000, -3.75000, 0.00000, 3.75000, 7.50000]
fermi = 0.0
PDOS_F_2s_up = [0.00000, 0.00159, 0.00000, 0.00011, 0.00000, 0.00069]
PDOS_F_2s_down = [0.00000, 0.00159, 0.00000, 0.00011, 0.00000, 0.00069]
PDOS_F_2py_up = [0.00000, 0.00160, 0.00000, 0.25801, 0.00000, 0.00029]
PDOS_F_2py_down = [0.00000, 0.00161, 0.00000, 0.25819, 0.00000, 0.00029]
PDOS_F_2pz_up = [0.00000, 0.00161, 0.00000, 0.25823, 0.00000, 0.00029]
PDOS_F_2pz_down = [0.00000, 0.00160, 0.00000, 0.25795, 0.00000, 0.00029]
PDOS_F_2px_up = [0.00000, 0.00160, 0.00000, 0.25805, 0.00000, 0.00029]
PDOS_F_2px_down = [0.00000, 0.00161, 0.00000, 0.25814, 0.00000, 0.00029]
self.assertListEqual(
self.LobsterCompleteDOS_spin.get_element_spd_dos(el=Element("F"))[OrbitalType(0)].energies.tolist(),
energies_spin,
)
self.assertListEqual(
self.LobsterCompleteDOS_spin.get_element_spd_dos(el=Element("F"))[OrbitalType(0)]
.densities[Spin.up]
.tolist(),
PDOS_F_2s_up,
)
self.assertListEqual(
self.LobsterCompleteDOS_spin.get_element_spd_dos(el=Element("F"))[OrbitalType(0)]
.densities[Spin.down]
.tolist(),
PDOS_F_2s_down,
)
for ilistel, listel in enumerate(
self.LobsterCompleteDOS_spin.get_element_spd_dos(el=Element("F"))[OrbitalType(1)]
.densities[Spin.up]
.tolist()
):
self.assertAlmostEqual(
listel,
(np.array(PDOS_F_2px_up) + np.array(PDOS_F_2py_up) + np.array(PDOS_F_2pz_up)).tolist()[ilistel],
)
for ilistel, listel in enumerate(
self.LobsterCompleteDOS_spin.get_element_spd_dos(el=Element("F"))[OrbitalType(1)]
.densities[Spin.down]
.tolist()
):
self.assertAlmostEqual(
listel,
(np.array(PDOS_F_2px_down) + np.array(PDOS_F_2py_down) + np.array(PDOS_F_2pz_down)).tolist()[ilistel],
)
self.assertAlmostEqual(
self.LobsterCompleteDOS_spin.get_element_spd_dos(el=Element("F"))[OrbitalType(0)].efermi,
fermi,
)
# without spin polarization
energies_nonspin = [-11.25000, -7.50000, -3.75000, 0.00000, 3.75000, 7.50000]
efermi = 0.0
PDOS_F_2s = [0.00000, 0.00320, 0.00000, 0.00017, 0.00000, 0.00060]
PDOS_F_2py = [0.00000, 0.00322, 0.00000, 0.51635, 0.00000, 0.00037]
PDOS_F_2pz = [0.00000, 0.00322, 0.00000, 0.51636, 0.00000, 0.00037]
PDOS_F_2px = [0.00000, 0.00322, 0.00000, 0.51634, 0.00000, 0.00037]
self.assertListEqual(
self.LobsterCompleteDOS_nonspin.get_element_spd_dos(el=Element("F"))[OrbitalType(0)].energies.tolist(),
energies_nonspin,
)
self.assertListEqual(
self.LobsterCompleteDOS_nonspin.get_element_spd_dos(el=Element("F"))[OrbitalType(0)]
.densities[Spin.up]
.tolist(),
PDOS_F_2s,
)
for ilistel, listel in enumerate(
self.LobsterCompleteDOS_nonspin.get_element_spd_dos(el=Element("F"))[OrbitalType(1)]
.densities[Spin.up]
.tolist()
):
self.assertAlmostEqual(
listel,
(np.array(PDOS_F_2px) + np.array(PDOS_F_2py) + np.array(PDOS_F_2pz)).tolist()[ilistel],
)
self.assertAlmostEqual(
self.LobsterCompleteDOS_nonspin.get_element_spd_dos(el=Element("F"))[OrbitalType(0)].efermi,
efermi,
)
if __name__ == "__main__":
unittest.main()
| 44.216142 | 120 | 0.629664 | 4,356 | 32,322 | 4.431818 | 0.070707 | 0.061538 | 0.065631 | 0.027299 | 0.855944 | 0.809531 | 0.784719 | 0.775654 | 0.746128 | 0.721005 | 0 | 0.144364 | 0.243054 | 32,322 | 730 | 121 | 44.276712 | 0.644691 | 0.012561 | 0 | 0.545166 | 0 | 0 | 0.020374 | 0.004921 | 0 | 0 | 0 | 0 | 0.207607 | 1 | 0.031696 | false | 0 | 0.015848 | 0 | 0.057052 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ed4abcd9f46a2612b41f900dd184f4d3e896891c | 12,266 | py | Python | genonets/test/test_epistasis.py | fkhalid/genonets | 0dcd2e35ebf6957b8d0934e6033e2c962938c18a | [
"MIT"
] | 4 | 2016-03-01T10:43:40.000Z | 2021-07-17T14:53:04.000Z | genonets/test/test_epistasis.py | fkhalid/genonets | 0dcd2e35ebf6957b8d0934e6033e2c962938c18a | [
"MIT"
] | 15 | 2016-04-13T10:54:49.000Z | 2020-11-07T16:17:34.000Z | genonets/test/test_epistasis.py | fkhalid/genonets | 0dcd2e35ebf6957b8d0934e6033e2c962938c18a | [
"MIT"
] | 1 | 2016-03-01T10:46:44.000Z | 2016-03-01T10:46:44.000Z |
import tempfile
import genonets.test.utils as utils
import genonets.test.compare_result_files as comparator
from genonets.cmdl_handler import CmdParser
from genonets.interface import Genonets
from genonets.constants import AnalysisConstants as Ac
class TestEpistasis:
@staticmethod
def run_test(cmd_args, ground_truth_dir, data_dir):
args = CmdParser(arguments=cmd_args).get_args()
gn = Genonets(args)
gn.create()
gn.analyze(analyses=[Ac.EPISTASIS])
gn.save_network_results()
gn.save_genotype_results()
assert utils.num_files_matches(ground_truth_dir, data_dir)
assert comparator.compare_genotype_set_measures(
ground_truth_dir, data_dir
)
@staticmethod
def test_1():
ground_truth_dir = 'genonets/test/data/ground_truth/epistasis/test_1'
with tempfile.TemporaryDirectory(prefix='test_epistasis_') as data_dir:
cmd_args = [
'--alphabet=Protein',
'--tau=0.0',
'--input-file=genonets/test/data/inputs/epistasis/test1_input.tsv',
'--codon-alphabet=RNA',
'--genetic-code-file=genonets/test/data/inputs/epistasis/code_standard.tsv',
f'--output-path={data_dir}'
]
TestEpistasis.run_test(cmd_args, ground_truth_dir, data_dir)
@staticmethod
def test_2():
ground_truth_dir = 'genonets/test/data/ground_truth/epistasis/test_2'
with tempfile.TemporaryDirectory(prefix='test_epistasis_') as data_dir:
cmd_args = [
'--alphabet=Protein',
'--tau=0.0',
'--input-file=genonets/test/data/inputs/epistasis/test2_input.tsv',
'--codon-alphabet=RNA',
'--genetic-code-file=genonets/test/data/inputs/epistasis/code_standard.tsv',
f'--output-path={data_dir}'
]
TestEpistasis.run_test(cmd_args, ground_truth_dir, data_dir)
@staticmethod
def test_3():
ground_truth_dir = 'genonets/test/data/ground_truth/epistasis/test_3'
with tempfile.TemporaryDirectory(prefix='test_epistasis_') as data_dir:
cmd_args = [
'--alphabet=Protein',
'--tau=0.0',
'--input-file=genonets/test/data/inputs/epistasis/test3_input.tsv',
'--codon-alphabet=RNA',
'--genetic-code-file=genonets/test/data/inputs/epistasis/code_standard.tsv',
f'--output-path={data_dir}'
]
TestEpistasis.run_test(cmd_args, ground_truth_dir, data_dir)
@staticmethod
def test_4():
ground_truth_dir = 'genonets/test/data/ground_truth/epistasis/test_4'
with tempfile.TemporaryDirectory(prefix='test_epistasis_') as data_dir:
cmd_args = [
'--alphabet=Protein',
'--tau=0.0',
'--input-file=genonets/test/data/inputs/epistasis/test4_input.tsv',
f'--output-path={data_dir}'
]
TestEpistasis.run_test(cmd_args, ground_truth_dir, data_dir)
@staticmethod
def test_5():
ground_truth_dir = 'genonets/test/data/ground_truth/epistasis/test_5'
with tempfile.TemporaryDirectory(prefix='test_epistasis_') as data_dir:
cmd_args = [
'--alphabet=Protein',
'--tau=0.0',
'--input-file=genonets/test/data/inputs/epistasis/test5_input.tsv',
f'--output-path={data_dir}'
]
TestEpistasis.run_test(cmd_args, ground_truth_dir, data_dir)
@staticmethod
def test_6():
ground_truth_dir = 'genonets/test/data/ground_truth/epistasis/test_6'
with tempfile.TemporaryDirectory(prefix='test_epistasis_') as data_dir:
cmd_args = [
'--alphabet=Protein',
'--tau=0.0',
'--input-file=genonets/test/data/inputs/epistasis/test6_input.tsv',
f'--output-path={data_dir}'
]
TestEpistasis.run_test(cmd_args, ground_truth_dir, data_dir)
@staticmethod
def test_7():
ground_truth_dir = 'genonets/test/data/ground_truth/epistasis/test_7'
with tempfile.TemporaryDirectory(prefix='test_epistasis_') as data_dir:
cmd_args = [
'--alphabet=Protein',
'--tau=0.0',
'--input-file=genonets/test/data/inputs/epistasis/test7_input.tsv',
f'--output-path={data_dir}'
]
TestEpistasis.run_test(cmd_args, ground_truth_dir, data_dir)
@staticmethod
def test_8():
ground_truth_dir = 'genonets/test/data/ground_truth/epistasis/test_8'
with tempfile.TemporaryDirectory(prefix='test_epistasis_') as data_dir:
cmd_args = [
'--alphabet=Protein',
'--tau=0.0',
'--input-file=genonets/test/data/inputs/epistasis/test8_input.tsv',
f'--output-path={data_dir}'
]
TestEpistasis.run_test(cmd_args, ground_truth_dir, data_dir)
@staticmethod
def test_9():
ground_truth_dir = 'genonets/test/data/ground_truth/epistasis/test_9'
with tempfile.TemporaryDirectory(prefix='test_epistasis_') as data_dir:
cmd_args = [
'--alphabet=Protein',
'--tau=0.0',
'--input-file=genonets/test/data/inputs/epistasis/test9_input.tsv',
f'--output-path={data_dir}'
]
TestEpistasis.run_test(cmd_args, ground_truth_dir, data_dir)
@staticmethod
def test_10():
ground_truth_dir = 'genonets/test/data/ground_truth/epistasis/test_10'
with tempfile.TemporaryDirectory(prefix='test_epistasis_') as data_dir:
cmd_args = [
'--alphabet=Protein',
'--tau=0.0',
'--input-file=genonets/test/data/inputs/epistasis/test10_input.tsv',
f'--output-path={data_dir}'
]
TestEpistasis.run_test(cmd_args, ground_truth_dir, data_dir)
@staticmethod
def test_11():
ground_truth_dir = 'genonets/test/data/ground_truth/epistasis/test_11'
with tempfile.TemporaryDirectory(prefix='test_epistasis_') as data_dir:
cmd_args = [
'--alphabet=Protein',
'--tau=0.0',
'--input-file=genonets/test/data/inputs/epistasis/test11_input.tsv',
f'--output-path={data_dir}'
]
TestEpistasis.run_test(cmd_args, ground_truth_dir, data_dir)
@staticmethod
def test_12():
ground_truth_dir = 'genonets/test/data/ground_truth/epistasis/test_12'
with tempfile.TemporaryDirectory(prefix='test_epistasis_') as data_dir:
cmd_args = [
'--alphabet=Protein',
'--tau=0.0',
'--input-file=genonets/test/data/inputs/epistasis/test12_input.tsv',
f'--output-path={data_dir}'
]
TestEpistasis.run_test(cmd_args, ground_truth_dir, data_dir)
@staticmethod
def test_13():
ground_truth_dir = 'genonets/test/data/ground_truth/epistasis/test_13'
with tempfile.TemporaryDirectory(prefix='test_epistasis_') as data_dir:
cmd_args = [
'--alphabet=Protein',
'--tau=0.0',
'--input-file=genonets/test/data/inputs/epistasis/test13_input.tsv',
f'--output-path={data_dir}'
]
TestEpistasis.run_test(cmd_args, ground_truth_dir, data_dir)
@staticmethod
def test_14():
ground_truth_dir = 'genonets/test/data/ground_truth/epistasis/test_14'
with tempfile.TemporaryDirectory(prefix='test_epistasis_') as data_dir:
cmd_args = [
'--alphabet=Protein',
'--tau=0.0',
'--input-file=genonets/test/data/inputs/epistasis/test14_input.tsv',
f'--output-path={data_dir}'
]
TestEpistasis.run_test(cmd_args, ground_truth_dir, data_dir)
@staticmethod
def test_15():
ground_truth_dir = 'genonets/test/data/ground_truth/epistasis/test_15'
with tempfile.TemporaryDirectory(prefix='test_epistasis_') as data_dir:
cmd_args = [
'--alphabet=Protein',
'--tau=0.0',
'--input-file=genonets/test/data/inputs/epistasis/test15_input.tsv',
f'--output-path={data_dir}'
]
TestEpistasis.run_test(cmd_args, ground_truth_dir, data_dir)
@staticmethod
def test_16():
ground_truth_dir = 'genonets/test/data/ground_truth/epistasis/test_16'
with tempfile.TemporaryDirectory(prefix='test_epistasis_') as data_dir:
cmd_args = [
'--alphabet=Protein',
'--tau=0.0',
'--input-file=genonets/test/data/inputs/epistasis/test16_input.tsv',
f'--output-path={data_dir}'
]
TestEpistasis.run_test(cmd_args, ground_truth_dir, data_dir)
@staticmethod
def test_17():
ground_truth_dir = 'genonets/test/data/ground_truth/epistasis/test_17'
with tempfile.TemporaryDirectory(prefix='test_epistasis_') as data_dir:
cmd_args = [
'--alphabet=Protein',
'--tau=0.0',
'--input-file=genonets/test/data/inputs/epistasis/test17_input.tsv',
f'--output-path={data_dir}'
]
TestEpistasis.run_test(cmd_args, ground_truth_dir, data_dir)
@staticmethod
def test_18():
ground_truth_dir = 'genonets/test/data/ground_truth/epistasis/test_18'
with tempfile.TemporaryDirectory(prefix='test_epistasis_') as data_dir:
cmd_args = [
'--alphabet=Protein',
'--tau=0.0',
'--input-file=genonets/test/data/inputs/epistasis/test18_input.tsv',
f'--output-path={data_dir}'
]
TestEpistasis.run_test(cmd_args, ground_truth_dir, data_dir)
@staticmethod
def test_19():
ground_truth_dir = 'genonets/test/data/ground_truth/epistasis/test_19'
with tempfile.TemporaryDirectory(prefix='test_epistasis_') as data_dir:
cmd_args = [
'--alphabet=Protein',
'--tau=0.0',
'--input-file=genonets/test/data/inputs/epistasis/test19_input.tsv',
f'--output-path={data_dir}'
]
TestEpistasis.run_test(cmd_args, ground_truth_dir, data_dir)
@staticmethod
def test_20():
ground_truth_dir = 'genonets/test/data/ground_truth/epistasis/test_20'
with tempfile.TemporaryDirectory(prefix='test_epistasis_') as data_dir:
cmd_args = [
'--alphabet=Protein',
'--tau=0.0',
'--input-file=genonets/test/data/inputs/epistasis/test20_input.tsv',
f'--output-path={data_dir}'
]
TestEpistasis.run_test(cmd_args, ground_truth_dir, data_dir)
@staticmethod
def test_21():
ground_truth_dir = 'genonets/test/data/ground_truth/epistasis/test_21'
with tempfile.TemporaryDirectory(prefix='test_epistasis_') as data_dir:
cmd_args = [
'--alphabet=Protein',
'--tau=0.0',
'--input-file=genonets/test/data/inputs/epistasis/test21_input.tsv',
f'--output-path={data_dir}'
]
TestEpistasis.run_test(cmd_args, ground_truth_dir, data_dir)
@staticmethod
def test_22():
ground_truth_dir = 'genonets/test/data/ground_truth/epistasis/test_22'
with tempfile.TemporaryDirectory(prefix='test_epistasis_') as data_dir:
cmd_args = [
'--alphabet=Protein',
'--tau=0.0',
'--input-file=genonets/test/data/inputs/epistasis/test22_input.tsv',
f'--output-path={data_dir}'
]
TestEpistasis.run_test(cmd_args, ground_truth_dir, data_dir)
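# Note (not part of the original file): the near-identical cases above could be
# collapsed with test parametrization. A hedged sketch, assuming pytest is the
# runner for this suite and that cases 4-22 share the same argument pattern:
#
#   import pytest
#
#   @pytest.mark.parametrize('i', range(4, 23))
#   def test_epistasis_case(i):
#       ground_truth_dir = f'genonets/test/data/ground_truth/epistasis/test_{i}'
#       with tempfile.TemporaryDirectory(prefix='test_epistasis_') as data_dir:
#           cmd_args = [
#               '--alphabet=Protein',
#               '--tau=0.0',
#               f'--input-file=genonets/test/data/inputs/epistasis/test{i}_input.tsv',
#               f'--output-path={data_dir}',
#           ]
#           TestEpistasis.run_test(cmd_args, ground_truth_dir, data_dir)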
| 35.865497 | 92 | 0.59563 | 1,367 | 12,266 | 5.066569 | 0.079005 | 0.109587 | 0.095004 | 0.064973 | 0.908461 | 0.905429 | 0.905429 | 0.905429 | 0.899653 | 0.8946 | 0 | 0.017011 | 0.285912 | 12,266 | 341 | 93 | 35.970674 | 0.773718 | 0 | 0 | 0.602996 | 0 | 0 | 0.34415 | 0.263922 | 0 | 0 | 0 | 0 | 0.007491 | 1 | 0.086142 | false | 0 | 0.022472 | 0 | 0.11236 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ed552265a7647f2a260306c434e014749837af8f | 24 | py | Python | setup.py | UnicycleDumpTruck/VetRFID | a679bf231cda1011692c92c476fda7c540a12687 | [
"MIT"
] | null | null | null | setup.py | UnicycleDumpTruck/VetRFID | a679bf231cda1011692c92c476fda7c540a12687 | [
"MIT"
] | null | null | null | setup.py | UnicycleDumpTruck/VetRFID | a679bf231cda1011692c92c476fda7c540a12687 | [
"MIT"
] | null | null | null | # TODO this whole file!
| 12 | 23 | 0.708333 | 4 | 24 | 4.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.208333 | 24 | 1 | 24 | 24 | 0.894737 | 0.875 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 1 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ed670a0a56338ac3d01e739c93eb089c51b71a83 | 41 | py | Python | tdasampling/__init__.py | P-Edwards/tdasampling | 6c5d3683ca920b32f8bdb997ea2aa47f81158bd6 | [
"MIT"
] | 2 | 2019-03-20T11:06:32.000Z | 2020-04-05T23:52:11.000Z | tdasampling/__init__.py | P-Edwards/tdasampling | 6c5d3683ca920b32f8bdb997ea2aa47f81158bd6 | [
"MIT"
] | 1 | 2020-04-24T08:39:33.000Z | 2020-04-24T15:49:23.000Z | tdasampling/__init__.py | P-Edwards/tdasampling | 6c5d3683ca920b32f8bdb997ea2aa47f81158bd6 | [
"MIT"
] | null | null | null | from .algorithm import sampling_algorithm | 41 | 41 | 0.902439 | 5 | 41 | 7.2 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.073171 | 41 | 1 | 41 | 41 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ed6e723a5cb39f265b06f8494849397744ced038 | 54 | py | Python | addc/version.py | carsonfarmer/AddC | 175829cafbf852b4106d4290d6fdd67a7ba57dcd | [
"MIT"
] | null | null | null | addc/version.py | carsonfarmer/AddC | 175829cafbf852b4106d4290d6fdd67a7ba57dcd | [
"MIT"
] | null | null | null | addc/version.py | carsonfarmer/AddC | 175829cafbf852b4106d4290d6fdd67a7ba57dcd | [
"MIT"
] | null | null | null | version = '0.1.0.dev-5c375d3'
short_version = '0.1.0'
| 18 | 29 | 0.666667 | 11 | 54 | 3.181818 | 0.545455 | 0.457143 | 0.514286 | 0.571429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.229167 | 0.111111 | 54 | 2 | 30 | 27 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0.407407 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ed6eb21af826866fd0eb0c86f468b6ec4ffd4397 | 183 | py | Python | pymbolic/sympy_interface.py | thomasgibson/pymbolic | a4a873f10bfc4c17dec92fe047a4638298cd63fc | [
"MIT"
] | 70 | 2015-08-10T20:24:24.000Z | 2022-03-31T04:08:35.000Z | pymbolic/sympy_interface.py | thomasgibson/pymbolic | a4a873f10bfc4c17dec92fe047a4638298cd63fc | [
"MIT"
] | 48 | 2015-04-22T16:13:07.000Z | 2022-03-25T04:27:13.000Z | pymbolic/sympy_interface.py | thomasgibson/pymbolic | a4a873f10bfc4c17dec92fe047a4638298cd63fc | [
"MIT"
] | 20 | 2015-11-20T18:47:11.000Z | 2021-09-28T23:44:21.000Z | from pymbolic.interop.sympy import * # noqa
from warnings import warn
warn("pymbolic.sympy_interface is deprecated. Use pymbolic.interop.sympy instead",
DeprecationWarning)
| 30.5 | 82 | 0.781421 | 22 | 183 | 6.454545 | 0.636364 | 0.211268 | 0.28169 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.147541 | 183 | 5 | 83 | 36.6 | 0.910256 | 0.021858 | 0 | 0 | 0 | 0 | 0.418079 | 0.259887 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
71ec3cdf27f852eff07099ee6c9b4a1b95a260c8 | 37 | py | Python | translator/__init__.py | UST-MICO/msg_translator_prototype | 4c15fe526168ea1e284ce467de44f2f452197d21 | [
"Apache-2.0"
] | null | null | null | translator/__init__.py | UST-MICO/msg_translator_prototype | 4c15fe526168ea1e284ce467de44f2f452197d21 | [
"Apache-2.0"
] | null | null | null | translator/__init__.py | UST-MICO/msg_translator_prototype | 4c15fe526168ea1e284ce467de44f2f452197d21 | [
"Apache-2.0"
] | null | null | null | from translator.translator import *
| 12.333333 | 35 | 0.810811 | 4 | 37 | 7.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.135135 | 37 | 2 | 36 | 18.5 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9c2e02f5923dd28a09feefbb7758d4a8e8e7249e | 4,129 | py | Python | examples/09-interpreter.py | Tomas1861/bijou | 8db9a42a138c7480385c752c8106e35dd067a493 | [
"MIT"
] | 1 | 2020-02-04T15:16:58.000Z | 2020-02-04T15:16:58.000Z | examples/09-interpreter.py | Tomas1861/bijou | 8db9a42a138c7480385c752c8106e35dd067a493 | [
"MIT"
] | null | null | null | examples/09-interpreter.py | Tomas1861/bijou | 8db9a42a138c7480385c752c8106e35dd067a493 | [
"MIT"
] | null | null | null | import sys
sys.path.append('..')
import torch.nn as nn
import torch.nn.functional as F
from torch import optim
from bijou.learner import Learner
from bijou.data import Dataset, DataLoader, DataBunch
from bijou.metrics import accuracy
from bijou.callbacks import Interpreter
from bijou.datasets import mnist
import numpy as np
import matplotlib.pyplot as plt
x_train, y_train, x_valid, y_valid, x_test, y_test = mnist()
train_ds, valid_ds, test_ds = Dataset(x_train, y_train), Dataset(x_valid, y_valid), Dataset(x_test, y_test)
bs = 128
train_dl = DataLoader(train_ds, batch_size=bs, shuffle=True)
valid_dl = DataLoader(valid_ds, batch_size=bs)
test_dl = DataLoader(test_ds, batch_size=bs)
data = DataBunch(train_dl, valid_dl)
in_dim = data.train_ds.x.shape[1]
h_dim = 128
model = nn.Sequential(nn.Linear(in_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, 10))
opt = optim.SGD(model.parameters(), lr=0.35)
loss_func = F.cross_entropy
learner = Learner(model, opt, loss_func, data, metrics=[accuracy], callbacks=Interpreter())
learner.fit(3)
learner.test(test_dl)
def loss_noreduction(pred, target):
return F.cross_entropy(pred, target, reduction='none')
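# top_data ranks individual samples, so the metric must keep the batch dimension:
# reduction='none' yields one cross-entropy value per sample instead of the mean.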
scores, xs, ys, preds, indecies = learner.interpreter.top_data(metric=loss_noreduction,
k=10, target='train', largest=True)
print(scores)
print(indecies)
plt.figure(figsize=[12, 6])
for i in range(10):
plt.subplot(2, 5, i+1)
plt.imshow(xs[i].view([28, -1]))
plt.title(f'{ys[i]} --> {np.argmax(preds[i])}')
# m = learner.interpreter.confusion_matrix()
learner.interpreter.plot_confusion(target='train', class_names=range(10))
learner.interpreter.plot_confusion(target='val', class_names=range(10))
learner.interpreter.plot_confusion(target='test', class_names=range(10))
mcfs = learner.interpreter.most_confused()
print([[c[0], len(c[1])] for c in mcfs])
plt.show()
# import sys
# sys.path.append('..')
# import torch
# import torch.nn as nn
# import torch.nn.functional as F
# from torch import optim
# from bijou.learner import Learner
# from bijou.data import Dataset, DataLoader, DataBunch
# from bijou.metrics import accuracy
# from bijou.callbacks import Interpreter
# from datasets import mnist_data
# import matplotlib.pyplot as plt
# import numpy as np
# if torch.cuda.is_available():
# torch.cuda.manual_seed_all(1)
# else:
# torch.manual_seed(1)
# # 1. ------ data
# x_train, y_train, x_valid, y_valid = mnist_data()
# x_test = x_valid[:500]
# y_test = y_valid[:500]
# train_ds, valid_ds, test_ds = Dataset(x_train, y_train), Dataset(x_valid, y_valid), Dataset(x_test, y_test)
# bs = 128
# train_dl = DataLoader(train_ds, batch_size=bs, shuffle=True)
# valid_dl = DataLoader(valid_ds, batch_size=bs, shuffle=True)
# test_dl = DataLoader(test_ds, batch_size=bs, shuffle=True)
# data = DataBunch(train_dl, valid_dl)
# # 2. ------ model and optimizer
# in_dim = data.train_ds.x.shape[1]
# out_dim = y_train.max().item()+1
# h_dim = 50
# model = nn.Sequential(nn.Linear(in_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, out_dim))
# opt = optim.SGD(model.parameters(), lr=0.35)
# # 3. ------ learner
# loss_func = F.cross_entropy
# learner = Learner(model, opt, loss_func, data, metrics=[accuracy], callbacks=Interpreter())
# # 4. ------ fit
# learner.fit(1)
# # 5. ------ test
# learner.test(test_dl)
# def loss(pred, target):
# return F.cross_entropy(pred, target, reduction='none')
# scores, xs, ys, preds, indecies = learner.interpreter.top_data(loss, k=10, target='train', largest=True)
# print(scores)
# print(indecies)
# # print(xs)
# plt.figure(figsize=[12, 6])
# for i in range(10):
# plt.subplot(2, 5, i+1)
# plt.imshow(xs[i].view([28, -1]))
# plt.title(f'{ys[i]} --> {np.argmax(preds[i])}')
# # m = learner.interpreter.confusion_matrix()
# learner.interpreter.plot_confusion(target='train', class_names=range(10))
# learner.interpreter.plot_confusion(target='val', class_names=range(10))
# learner.interpreter.plot_confusion(target='test', class_names=range(10))
# mcfs = learner.interpreter.most_confused()
# print([[c[0], len(c[1])]for c in mcfs])
# plt.show()
| 29.92029 | 109 | 0.708888 | 648 | 4,129 | 4.354938 | 0.191358 | 0.076541 | 0.023388 | 0.02764 | 0.884125 | 0.864989 | 0.821049 | 0.801559 | 0.722892 | 0.722892 | 0 | 0.022086 | 0.133689 | 4,129 | 137 | 110 | 30.138686 | 0.766844 | 0.517317 | 0 | 0 | 0 | 0 | 0.029076 | 0.010903 | 0 | 0 | 0 | 0 | 0 | 1 | 0.022727 | false | 0 | 0.25 | 0.022727 | 0.295455 | 0.068182 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9c42c0bcfdf14afa04d92a0a99943790fc91ca79 | 52,302 | py | Python | tests/src/OneLogin/saml2_tests/auth_test.py | cerebro-data/python-saml | 3bda379bf4cf893f0cd2727a67a5656bda24dae9 | [
"MIT"
] | null | null | null | tests/src/OneLogin/saml2_tests/auth_test.py | cerebro-data/python-saml | 3bda379bf4cf893f0cd2727a67a5656bda24dae9 | [
"MIT"
] | null | null | null | tests/src/OneLogin/saml2_tests/auth_test.py | cerebro-data/python-saml | 3bda379bf4cf893f0cd2727a67a5656bda24dae9 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Copyright (c) 2014, OneLogin, Inc.
# All rights reserved.
from base64 import b64decode, b64encode
import json
from os.path import dirname, join, exists
import unittest
from teamcity import is_running_under_teamcity
from teamcity.unittestpy import TeamcityTestRunner
from urlparse import urlparse, parse_qs
from onelogin.saml2.auth import OneLogin_Saml2_Auth
from onelogin.saml2.constants import OneLogin_Saml2_Constants
from onelogin.saml2.settings import OneLogin_Saml2_Settings
from onelogin.saml2.utils import OneLogin_Saml2_Utils
from onelogin.saml2.logout_request import OneLogin_Saml2_Logout_Request
from onelogin.saml2.errors import OneLogin_Saml2_Error
class OneLogin_Saml2_Auth_Test(unittest.TestCase):
data_path = join(dirname(dirname(dirname(dirname(__file__)))), 'data')
settings_path = join(dirname(dirname(dirname(dirname(__file__)))), 'settings')
def loadSettingsJSON(self, name='settings1.json'):
filename = join(self.settings_path, name)
if exists(filename):
stream = open(filename, 'r')
settings = json.load(stream)
stream.close()
return settings
else:
raise Exception('Settings json file does not exist')
def file_contents(self, filename):
f = open(filename, 'r')
content = f.read()
f.close()
return content
def get_request(self):
return {
'http_host': 'example.com',
'script_name': '/index.html',
'get_data': {}
}
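    # request_data mirrors the dict shape the toolkit expects: http_host,
    # script_name, and either get_data or post_data carrying the SAML message.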
def testGetSettings(self):
"""
Tests the get_settings method of the OneLogin_Saml2_Auth class
Build a OneLogin_Saml2_Settings object with a setting array
and compare the value returned from the method of the
auth object
"""
settings_info = self.loadSettingsJSON()
settings = OneLogin_Saml2_Settings(settings_info)
auth = OneLogin_Saml2_Auth(self.get_request(), old_settings=settings_info)
auth_settings = auth.get_settings()
self.assertEqual(settings.get_sp_data(), auth_settings.get_sp_data())
def testGetSSOurl(self):
"""
Tests the get_sso_url method of the OneLogin_Saml2_Auth class
"""
settings_info = self.loadSettingsJSON()
auth = OneLogin_Saml2_Auth(self.get_request(), old_settings=settings_info)
sso_url = settings_info['idp']['singleSignOnService']['url']
self.assertEqual(auth.get_sso_url(), sso_url)
def testGetSLOurl(self):
"""
Tests the get_slo_url method of the OneLogin_Saml2_Auth class
"""
settings_info = self.loadSettingsJSON()
auth = OneLogin_Saml2_Auth(self.get_request(), old_settings=settings_info)
slo_url = settings_info['idp']['singleLogoutService']['url']
self.assertEqual(auth.get_slo_url(), slo_url)
def testGetSessionIndex(self):
"""
Tests the get_session_index method of the OneLogin_Saml2_Auth class
"""
settings_info = self.loadSettingsJSON()
auth = OneLogin_Saml2_Auth(self.get_request(), old_settings=settings_info)
self.assertIsNone(auth.get_session_index())
request_data = self.get_request()
message = self.file_contents(join(self.data_path, 'responses', 'valid_response.xml.base64'))
del request_data['get_data']
request_data['post_data'] = {
'SAMLResponse': message
}
auth2 = OneLogin_Saml2_Auth(request_data, old_settings=self.loadSettingsJSON())
self.assertIsNone(auth2.get_session_index())
auth2.process_response()
self.assertEqual('_6273d77b8cde0c333ec79d22a9fa0003b9fe2d75cb', auth2.get_session_index())
def testGetSessionExpiration(self):
"""
Tests the get_session_expiration method of the OneLogin_Saml2_Auth class
"""
settings_info = self.loadSettingsJSON()
auth = OneLogin_Saml2_Auth(self.get_request(), old_settings=settings_info)
self.assertIsNone(auth.get_session_expiration())
request_data = self.get_request()
message = self.file_contents(join(self.data_path, 'responses', 'valid_response.xml.base64'))
del request_data['get_data']
request_data['post_data'] = {
'SAMLResponse': message
}
auth2 = OneLogin_Saml2_Auth(request_data, old_settings=self.loadSettingsJSON())
self.assertIsNone(auth2.get_session_expiration())
auth2.process_response()
self.assertEqual(1392802621, auth2.get_session_expiration())
def testGetLastErrorReason(self):
"""
Tests the get_last_error_reason method of the OneLogin_Saml2_Auth class
Case Invalid Response
"""
request_data = self.get_request()
message = self.file_contents(join(self.data_path, 'responses', 'response1.xml.base64'))
del request_data['get_data']
request_data['post_data'] = {
'SAMLResponse': message
}
auth = OneLogin_Saml2_Auth(request_data, old_settings=self.loadSettingsJSON())
auth.process_response()
self.assertEqual(auth.get_last_error_reason(), 'Signature validation failed. SAML Response rejected')
def testProcessNoResponse(self):
"""
Tests the process_response method of the OneLogin_Saml2_Auth class
        Case No Response: an exception is thrown
"""
auth = OneLogin_Saml2_Auth(self.get_request(), old_settings=self.loadSettingsJSON())
with self.assertRaisesRegexp(OneLogin_Saml2_Error, 'SAML Response not found'):
auth.process_response()
self.assertEqual(auth.get_errors(), ['invalid_binding'])
def testProcessResponseInvalid(self):
"""
Tests the process_response method of the OneLogin_Saml2_Auth class
        Case Invalid Response. After processing the response the user
        is not authenticated, attributes are not returned, there is no nameID, and
        the error array is not empty; it contains 'invalid_response'.
"""
request_data = self.get_request()
message = self.file_contents(join(self.data_path, 'responses', 'response1.xml.base64'))
del request_data['get_data']
request_data['post_data'] = {
'SAMLResponse': message
}
auth = OneLogin_Saml2_Auth(request_data, old_settings=self.loadSettingsJSON())
auth.process_response()
self.assertFalse(auth.is_authenticated())
self.assertEqual(len(auth.get_attributes()), 0)
self.assertEqual(auth.get_nameid(), None)
self.assertEqual(auth.get_attribute('uid'), None)
self.assertEqual(auth.get_errors(), ['invalid_response'])
def testProcessResponseInvalidRequestId(self):
"""
Tests the process_response method of the OneLogin_Saml2_Auth class
Case Invalid Response, Invalid requestID
"""
request_data = self.get_request()
message = self.file_contents(join(self.data_path, 'responses', 'unsigned_response.xml.base64'))
plain_message = b64decode(message)
current_url = OneLogin_Saml2_Utils.get_self_url_no_query(request_data)
plain_message = plain_message.replace('http://stuff.com/endpoints/endpoints/acs.php', current_url)
del request_data['get_data']
request_data['post_data'] = {
'SAMLResponse': b64encode(plain_message)
}
auth = OneLogin_Saml2_Auth(request_data, old_settings=self.loadSettingsJSON())
request_id = 'invalid'
auth.process_response(request_id)
self.assertEqual('No Signature found. SAML Response rejected', auth.get_last_error_reason())
auth.set_strict(True)
auth.process_response(request_id)
self.assertEqual(auth.get_errors(), ['invalid_response'])
self.assertEqual('The InResponseTo of the Response: _57bcbf70-7b1f-012e-c821-782bcb13bb38, does not match the ID of the AuthNRequest sent by the SP: invalid', auth.get_last_error_reason())
valid_request_id = '_57bcbf70-7b1f-012e-c821-782bcb13bb38'
auth.process_response(valid_request_id)
self.assertEqual('No Signature found. SAML Response rejected', auth.get_last_error_reason())
def testProcessResponseValid(self):
"""
Tests the process_response method of the OneLogin_Saml2_Auth class
Case Valid Response, After processing the response the user
is authenticated, attributes are returned, also has a nameID and
the error array is empty
"""
request_data = self.get_request()
message = self.file_contents(join(self.data_path, 'responses', 'valid_response.xml.base64'))
del request_data['get_data']
request_data['post_data'] = {
'SAMLResponse': message
}
auth = OneLogin_Saml2_Auth(request_data, old_settings=self.loadSettingsJSON())
auth.process_response()
self.assertTrue(auth.is_authenticated())
self.assertEqual(len(auth.get_errors()), 0)
self.assertEqual('492882615acf31c8096b627245d76ae53036c090', auth.get_nameid())
attributes = auth.get_attributes()
self.assertNotEqual(len(attributes), 0)
self.assertEqual(auth.get_attribute('mail'), attributes['mail'])
session_index = auth.get_session_index()
self.assertEqual('_6273d77b8cde0c333ec79d22a9fa0003b9fe2d75cb', session_index)
def testRedirectTo(self):
"""
Tests the redirect_to method of the OneLogin_Saml2_Auth class
(phpunit raises an exception when a redirect is executed, the
exception is caught and we check that the targetURL is correct)
Case redirect without url parameter
"""
request_data = self.get_request()
relay_state = 'http://sp.example.com'
request_data['get_data']['RelayState'] = relay_state
auth = OneLogin_Saml2_Auth(request_data, old_settings=self.loadSettingsJSON())
target_url = auth.redirect_to()
self.assertEqual(target_url, relay_state)
def testRedirectTowithUrl(self):
"""
Tests the redirect_to method of the OneLogin_Saml2_Auth class
(phpunit raises an exception when a redirect is executed, the
exception is caught and we check that the targetURL is correct)
Case redirect with url parameter
"""
request_data = self.get_request()
relay_state = 'http://sp.example.com'
url_2 = 'http://sp2.example.com'
request_data['get_data']['RelayState'] = relay_state
auth = OneLogin_Saml2_Auth(request_data, old_settings=self.loadSettingsJSON())
target_url = auth.redirect_to(url_2)
self.assertEqual(target_url, url_2)
def testProcessNoSLO(self):
"""
Tests the process_slo method of the OneLogin_Saml2_Auth class
Case No Message: an exception is thrown
"""
auth = OneLogin_Saml2_Auth(self.get_request(), old_settings=self.loadSettingsJSON())
with self.assertRaisesRegexp(OneLogin_Saml2_Error, 'SAML LogoutRequest/LogoutResponse not found'):
auth.process_slo(True)
def testProcessSLOResponseInvalid(self):
"""
Tests the process_slo method of the OneLogin_Saml2_Auth class
Case Invalid Logout Response
"""
request_data = self.get_request()
message = self.file_contents(join(self.data_path, 'logout_responses', 'logout_response_deflated.xml.base64'))
request_data['get_data']['SAMLResponse'] = message
auth = OneLogin_Saml2_Auth(request_data, old_settings=self.loadSettingsJSON())
auth.process_slo(True)
self.assertEqual(len(auth.get_errors()), 0)
auth.set_strict(True)
auth.process_slo(True)
# The Destination fails
self.assertEqual(auth.get_errors(), ['invalid_logout_response'])
auth.set_strict(False)
auth.process_slo(True)
self.assertEqual(len(auth.get_errors()), 0)
def testProcessSLOResponseNoSuccess(self):
"""
Tests the process_slo method of the OneLogin_Saml2_Auth class
Case Logout Response without success status
"""
request_data = self.get_request()
message = self.file_contents(join(self.data_path, 'logout_responses', 'invalids', 'status_code_responder.xml.base64'))
# In order to avoid the destination problem
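# (in strict mode the Destination of the message must match the URL where
# it was received, so the canned fixture is rewritten to point at this test)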
plain_message = OneLogin_Saml2_Utils.decode_base64_and_inflate(message)
current_url = OneLogin_Saml2_Utils.get_self_url_no_query(request_data)
plain_message = plain_message.replace('http://stuff.com/endpoints/endpoints/sls.php', current_url)
message = OneLogin_Saml2_Utils.deflate_and_base64_encode(plain_message)
request_data['get_data']['SAMLResponse'] = message
auth = OneLogin_Saml2_Auth(request_data, old_settings=self.loadSettingsJSON())
auth.set_strict(True)
auth.process_slo(True)
self.assertEqual(auth.get_errors(), ['logout_not_success'])
def testProcessSLOResponseRequestId(self):
"""
Tests the process_slo method of the OneLogin_Saml2_Auth class
Case Logout Response with valid and invalid Request ID
"""
request_data = self.get_request()
message = self.file_contents(join(self.data_path, 'logout_responses', 'logout_response_deflated.xml.base64'))
# In order to avoid the destination problem
plain_message = OneLogin_Saml2_Utils.decode_base64_and_inflate(message)
current_url = OneLogin_Saml2_Utils.get_self_url_no_query(request_data)
plain_message = plain_message.replace('http://stuff.com/endpoints/endpoints/sls.php', current_url)
message = OneLogin_Saml2_Utils.deflate_and_base64_encode(plain_message)
request_data['get_data']['SAMLResponse'] = message
auth = OneLogin_Saml2_Auth(request_data, old_settings=self.loadSettingsJSON())
request_id = 'wrongID'
auth.set_strict(True)
auth.process_slo(True, request_id)
self.assertEqual(auth.get_errors(), ['invalid_logout_response'])
request_id = 'ONELOGIN_21584ccdfaca36a145ae990442dcd96bfe60151e'
auth.process_slo(True, request_id)
self.assertEqual(len(auth.get_errors()), 0)
def testProcessSLOResponseValid(self):
"""
Tests the process_slo method of the OneLogin_Saml2_Auth class
Case Valid Logout Response
"""
request_data = self.get_request()
message = self.file_contents(join(self.data_path, 'logout_responses', 'logout_response_deflated.xml.base64'))
# In order to avoid the destination problem
plain_message = OneLogin_Saml2_Utils.decode_base64_and_inflate(message)
current_url = OneLogin_Saml2_Utils.get_self_url_no_query(request_data)
plain_message = plain_message.replace('http://stuff.com/endpoints/endpoints/sls.php', current_url)
message = OneLogin_Saml2_Utils.deflate_and_base64_encode(plain_message)
request_data['get_data']['SAMLResponse'] = message
auth = OneLogin_Saml2_Auth(request_data, old_settings=self.loadSettingsJSON())
# FIXME
# if (!isset($_SESSION)) {
# $_SESSION = array();
# }
# $_SESSION['samltest'] = true;
auth.set_strict(True)
auth.process_slo(True)
self.assertEqual(len(auth.get_errors()), 0)
# FIXME
# // Session keep alive
# $this->assertTrue(isset($_SESSION['samltest']));
# $this->assertTrue($_SESSION['samltest']);
def testProcessSLOResponseValidDeletingSession(self):
"""
Tests the process_slo method of the OneLogin_Saml2_Auth class
Case Valid Logout Response, validating deleting the local session
"""
request_data = self.get_request()
message = self.file_contents(join(self.data_path, 'logout_responses', 'logout_response_deflated.xml.base64'))
# FIXME
# if (!isset($_SESSION)) {
# $_SESSION = array();
# }
# $_SESSION['samltest'] = true;
# In order to avoid the destination problem
plain_message = OneLogin_Saml2_Utils.decode_base64_and_inflate(message)
current_url = OneLogin_Saml2_Utils.get_self_url_no_query(request_data)
plain_message = plain_message.replace('http://stuff.com/endpoints/endpoints/sls.php', current_url)
message = OneLogin_Saml2_Utils.deflate_and_base64_encode(plain_message)
request_data['get_data']['SAMLResponse'] = message
auth = OneLogin_Saml2_Auth(request_data, old_settings=self.loadSettingsJSON())
auth.set_strict(True)
auth.process_slo(False)
self.assertEqual(len(auth.get_errors()), 0)
# FIXME
# $this->assertFalse(isset($_SESSION['samltest']));
def testProcessSLORequestInvalidValid(self):
"""
Tests the process_slo method of the OneLogin_Saml2_Auth class
Case Invalid Logout Request
"""
settings_info = self.loadSettingsJSON()
request_data = self.get_request()
message = self.file_contents(join(self.data_path, 'logout_requests', 'logout_request_deflated.xml.base64'))
request_data['get_data']['SAMLRequest'] = message
auth = OneLogin_Saml2_Auth(request_data, old_settings=settings_info)
target_url = auth.process_slo(True)
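# urlparse(...)[4] is the query component of the redirect URL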
parsed_query = parse_qs(urlparse(target_url)[4])
self.assertEqual(len(auth.get_errors()), 0)
slo_url = settings_info['idp']['singleLogoutService']['url']
self.assertIn(slo_url, target_url)
self.assertIn('SAMLResponse', parsed_query)
# self.assertNotIn('RelayState', parsed_query)
auth.set_strict(True)
auth.process_slo(True)
# Fails due to Destination mismatch
self.assertEqual(auth.get_errors(), ['invalid_logout_request'])
auth.set_strict(False)
target_url_2 = auth.process_slo(True)
parsed_query_2 = parse_qs(urlparse(target_url_2)[4])
self.assertEqual(len(auth.get_errors()), 0)
slo_url = settings_info['idp']['singleLogoutService']['url']
self.assertIn(slo_url, target_url_2)
self.assertIn('SAMLResponse', parsed_query_2)
# self.assertNotIn('RelayState', parsed_query_2)
def testProcessSLORequestNotOnOrAfterFailed(self):
"""
Tests the process_slo method of the OneLogin_Saml2_Auth class
Case Logout Request NotOnOrAfter failed
"""
request_data = self.get_request()
message = self.file_contents(join(self.data_path, 'logout_requests', 'invalids', 'not_after_failed.xml.base64'))
# In order to avoid the destination problem
plain_message = OneLogin_Saml2_Utils.decode_base64_and_inflate(message)
current_url = OneLogin_Saml2_Utils.get_self_url_no_query(request_data)
plain_message = plain_message.replace('http://stuff.com/endpoints/endpoints/sls.php', current_url)
message = OneLogin_Saml2_Utils.deflate_and_base64_encode(plain_message)
request_data['get_data']['SAMLRequest'] = message
auth = OneLogin_Saml2_Auth(request_data, old_settings=self.loadSettingsJSON())
auth.set_strict(True)
auth.process_slo(True)
self.assertEqual(auth.get_errors(), ['invalid_logout_request'])
def testProcessSLORequestDeletingSession(self):
"""
Tests the process_slo method of the OneLogin_Saml2_Auth class
Case Valid Logout Request, validating that the local session is deleted,
a LogoutResponse is created and a redirection executed
"""
settings_info = self.loadSettingsJSON()
request_data = self.get_request()
message = self.file_contents(join(self.data_path, 'logout_requests', 'logout_request_deflated.xml.base64'))
# In order to avoid the destination problem
plain_message = OneLogin_Saml2_Utils.decode_base64_and_inflate(message)
current_url = OneLogin_Saml2_Utils.get_self_url_no_query(request_data)
plain_message = plain_message.replace('http://stuff.com/endpoints/endpoints/sls.php', current_url)
message = OneLogin_Saml2_Utils.deflate_and_base64_encode(plain_message)
request_data['get_data']['SAMLRequest'] = message
# FIXME
# if (!isset($_SESSION)) {
# $_SESSION = array();
# }
# $_SESSION['samltest'] = true;
auth = OneLogin_Saml2_Auth(request_data, old_settings=settings_info)
auth.set_strict(True)
target_url = auth.process_slo(True)
parsed_query = parse_qs(urlparse(target_url)[4])
slo_url = settings_info['idp']['singleLogoutService']['url']
self.assertIn(slo_url, target_url)
self.assertIn('SAMLResponse', parsed_query)
# self.assertNotIn('RelayState', parsed_query)
# FIXME // Session is not alive
# $this->assertFalse(isset($_SESSION['samltest']));
# $_SESSION['samltest'] = true;
auth.set_strict(True)
target_url_2 = auth.process_slo(True)
parsed_query_2 = parse_qs(urlparse(target_url_2)[4])
slo_url = settings_info['idp']['singleLogoutService']['url']
self.assertIn(slo_url, target_url_2)
self.assertIn('SAMLResponse', parsed_query_2)
# self.assertNotIn('RelayState', parsed_query_2)
# FIXME // Session is alive
# $this->assertTrue(isset($_SESSION['samltest']));
# $this->assertTrue($_SESSION['samltest']);
def testProcessSLORequestRelayState(self):
"""
Tests the process_slo method of the OneLogin_Saml2_Auth class
Case Valid Logout Request, validating the relayState,
a LogoutResponse is created and a redirection executed
"""
settings_info = self.loadSettingsJSON()
request_data = self.get_request()
message = self.file_contents(join(self.data_path, 'logout_requests', 'logout_request_deflated.xml.base64'))
# In order to avoid the destination problem
plain_message = OneLogin_Saml2_Utils.decode_base64_and_inflate(message)
current_url = OneLogin_Saml2_Utils.get_self_url_no_query(request_data)
plain_message = plain_message.replace('http://stuff.com/endpoints/endpoints/sls.php', current_url)
message = OneLogin_Saml2_Utils.deflate_and_base64_encode(plain_message)
request_data['get_data']['SAMLRequest'] = message
request_data['get_data']['RelayState'] = 'http://relaystate.com'
auth = OneLogin_Saml2_Auth(request_data, old_settings=settings_info)
auth.set_strict(True)
target_url = auth.process_slo(False)
parsed_query = parse_qs(urlparse(target_url)[4])
slo_url = settings_info['idp']['singleLogoutService']['url']
self.assertIn(slo_url, target_url)
self.assertIn('SAMLResponse', parsed_query)
self.assertIn('RelayState', parsed_query)
self.assertIn('http://relaystate.com', parsed_query['RelayState'])
def testProcessSLORequestSignedResponse(self):
"""
Tests the process_slo method of the OneLogin_Saml2_Auth class
Case Valid Logout Request, validating the relayState,
a signed LogoutResponse is created and a redirection executed
"""
settings_info = self.loadSettingsJSON()
settings_info['security']['logoutResponseSigned'] = True
request_data = self.get_request()
message = self.file_contents(join(self.data_path, 'logout_requests', 'logout_request_deflated.xml.base64'))
# In order to avoid the destination problem
plain_message = OneLogin_Saml2_Utils.decode_base64_and_inflate(message)
current_url = OneLogin_Saml2_Utils.get_self_url_no_query(request_data)
plain_message = plain_message.replace('http://stuff.com/endpoints/endpoints/sls.php', current_url)
message = OneLogin_Saml2_Utils.deflate_and_base64_encode(plain_message)
request_data['get_data']['SAMLRequest'] = message
request_data['get_data']['RelayState'] = 'http://relaystate.com'
auth = OneLogin_Saml2_Auth(request_data, old_settings=settings_info)
auth.set_strict(True)
target_url = auth.process_slo(False)
parsed_query = parse_qs(urlparse(target_url)[4])
slo_url = settings_info['idp']['singleLogoutService']['url']
self.assertIn(slo_url, target_url)
self.assertIn('SAMLResponse', parsed_query)
self.assertIn('RelayState', parsed_query)
self.assertIn('SigAlg', parsed_query)
self.assertIn('Signature', parsed_query)
self.assertIn('http://relaystate.com', parsed_query['RelayState'])
self.assertIn(OneLogin_Saml2_Constants.RSA_SHA1, parsed_query['SigAlg'])
def testLogin(self):
"""
Tests the login method of the OneLogin_Saml2_Auth class
Case Login with no parameters. An AuthnRequest is built and a redirect executed
"""
settings_info = self.loadSettingsJSON()
request_data = self.get_request()
auth = OneLogin_Saml2_Auth(request_data, old_settings=settings_info)
target_url = auth.login()
parsed_query = parse_qs(urlparse(target_url)[4])
sso_url = settings_info['idp']['singleSignOnService']['url']
self.assertIn(sso_url, target_url)
self.assertIn('SAMLRequest', parsed_query)
self.assertIn('RelayState', parsed_query)
hostname = OneLogin_Saml2_Utils.get_self_host(request_data)
self.assertIn(u'http://%s/index.html' % hostname, parsed_query['RelayState'])
def testLoginWithUnicodeSettings(self):
"""
Tests the login method of the OneLogin_Saml2_Auth class
Case Login with unicode settings. An AuthnRequest is built and a redirect executed
"""
settings_info = self.loadSettingsJSON('settings6.json')
request_data = self.get_request()
auth = OneLogin_Saml2_Auth(request_data, old_settings=settings_info)
target_url = auth.login()
parsed_query = parse_qs(urlparse(target_url)[4])
hostname = OneLogin_Saml2_Utils.get_self_host(request_data)
self.assertIn(u'http://%s/index.html' % hostname, parsed_query['RelayState'])
def testLoginWithRelayState(self):
"""
Tests the login method of the OneLogin_Saml2_Auth class
Case Login with relayState. An AuthnRequest with the RelayState
is built and a redirect executed
"""
settings_info = self.loadSettingsJSON()
auth = OneLogin_Saml2_Auth(self.get_request(), old_settings=settings_info)
relay_state = 'http://sp.example.com'
target_url = auth.login(relay_state)
parsed_query = parse_qs(urlparse(target_url)[4])
sso_url = settings_info['idp']['singleSignOnService']['url']
self.assertIn(sso_url, target_url)
self.assertIn('SAMLRequest', parsed_query)
self.assertIn('RelayState', parsed_query)
self.assertIn(relay_state, parsed_query['RelayState'])
def testLoginSigned(self):
"""
Tests the login method of the OneLogin_Saml2_Auth class
Case Login signed. A signed AuthnRequest is built and a redirect executed
"""
settings_info = self.loadSettingsJSON()
settings_info['security']['authnRequestsSigned'] = True
auth = OneLogin_Saml2_Auth(self.get_request(), old_settings=settings_info)
return_to = u'http://example.com/returnto'
target_url = auth.login(return_to)
parsed_query = parse_qs(urlparse(target_url)[4])
sso_url = settings_info['idp']['singleSignOnService']['url']
self.assertIn(sso_url, target_url)
self.assertIn('SAMLRequest', parsed_query)
self.assertIn('RelayState', parsed_query)
self.assertIn('SigAlg', parsed_query)
self.assertIn('Signature', parsed_query)
self.assertIn(return_to, parsed_query['RelayState'])
self.assertIn(OneLogin_Saml2_Constants.RSA_SHA1, parsed_query['SigAlg'])
def testLoginForceAuthN(self):
"""
Tests the login method of the OneLogin_Saml2_Auth class
Case Login with no parameters. An AuthnRequest is built with ForceAuthn and a redirect executed
"""
settings_info = self.loadSettingsJSON()
return_to = u'http://example.com/returnto'
sso_url = settings_info['idp']['singleSignOnService']['url']
auth = OneLogin_Saml2_Auth(self.get_request(), old_settings=settings_info)
target_url = auth.login(return_to)
parsed_query = parse_qs(urlparse(target_url)[4])
sso_url = settings_info['idp']['singleSignOnService']['url']
self.assertIn(sso_url, target_url)
self.assertIn('SAMLRequest', parsed_query)
request = OneLogin_Saml2_Utils.decode_base64_and_inflate(parsed_query['SAMLRequest'][0])
self.assertNotIn('ForceAuthn="true"', request)
auth_2 = OneLogin_Saml2_Auth(self.get_request(), old_settings=settings_info)
target_url_2 = auth_2.login(return_to, False, False)
parsed_query_2 = parse_qs(urlparse(target_url_2)[4])
self.assertIn(sso_url, target_url_2)
self.assertIn('SAMLRequest', parsed_query_2)
request_2 = OneLogin_Saml2_Utils.decode_base64_and_inflate(parsed_query_2['SAMLRequest'][0])
self.assertNotIn('ForceAuthn="true"', request_2)
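# login(return_to, True, False) toggles the second positional argument
# (force_authn), so the request below should carry ForceAuthn="true"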
auth_3 = OneLogin_Saml2_Auth(self.get_request(), old_settings=settings_info)
target_url_3 = auth_3.login(return_to, True, False)
parsed_query_3 = parse_qs(urlparse(target_url_3)[4])
self.assertIn(sso_url, target_url_3)
self.assertIn('SAMLRequest', parsed_query_3)
request_3 = OneLogin_Saml2_Utils.decode_base64_and_inflate(parsed_query_3['SAMLRequest'][0])
self.assertIn('ForceAuthn="true"', request_3)
def testLoginIsPassive(self):
"""
Tests the login method of the OneLogin_Saml2_Auth class
Case Login with no parameters. An AuthnRequest is built with IsPassive and a redirect executed
"""
settings_info = self.loadSettingsJSON()
return_to = u'http://example.com/returnto'
sso_url = settings_info['idp']['singleSignOnService']['url']
auth = OneLogin_Saml2_Auth(self.get_request(), old_settings=settings_info)
target_url = auth.login(return_to)
parsed_query = parse_qs(urlparse(target_url)[4])
sso_url = settings_info['idp']['singleSignOnService']['url']
self.assertIn(sso_url, target_url)
self.assertIn('SAMLRequest', parsed_query)
request = OneLogin_Saml2_Utils.decode_base64_and_inflate(parsed_query['SAMLRequest'][0])
self.assertNotIn('IsPassive="true"', request)
auth_2 = OneLogin_Saml2_Auth(self.get_request(), old_settings=settings_info)
target_url_2 = auth_2.login(return_to, False, False)
parsed_query_2 = parse_qs(urlparse(target_url_2)[4])
self.assertIn(sso_url, target_url_2)
self.assertIn('SAMLRequest', parsed_query_2)
request_2 = OneLogin_Saml2_Utils.decode_base64_and_inflate(parsed_query_2['SAMLRequest'][0])
self.assertNotIn('IsPassive="true"', request_2)
auth_3 = OneLogin_Saml2_Auth(self.get_request(), old_settings=settings_info)
target_url_3 = auth_3.login(return_to, False, True)
parsed_query_3 = parse_qs(urlparse(target_url_3)[4])
self.assertIn(sso_url, target_url_3)
self.assertIn('SAMLRequest', parsed_query_3)
request_3 = OneLogin_Saml2_Utils.decode_base64_and_inflate(parsed_query_3['SAMLRequest'][0])
self.assertIn('IsPassive="true"', request_3)
def testLoginSetNameIDPolicy(self):
"""
Tests the login method of the OneLogin_Saml2_Auth class
Case Login with no parameters. An AuthnRequest is built with and without NameIDPolicy
"""
settings_info = self.loadSettingsJSON()
return_to = u'http://example.com/returnto'
sso_url = settings_info['idp']['singleSignOnService']['url']
auth = OneLogin_Saml2_Auth(self.get_request(), old_settings=settings_info)
target_url = auth.login(return_to)
parsed_query = parse_qs(urlparse(target_url)[4])
sso_url = settings_info['idp']['singleSignOnService']['url']
self.assertIn(sso_url, target_url)
self.assertIn('SAMLRequest', parsed_query)
request = OneLogin_Saml2_Utils.decode_base64_and_inflate(parsed_query['SAMLRequest'][0])
self.assertIn('<samlp:NameIDPolicy', request)
auth_2 = OneLogin_Saml2_Auth(self.get_request(), old_settings=settings_info)
target_url_2 = auth_2.login(return_to, False, False, True)
parsed_query_2 = parse_qs(urlparse(target_url_2)[4])
self.assertIn(sso_url, target_url_2)
self.assertIn('SAMLRequest', parsed_query_2)
request_2 = OneLogin_Saml2_Utils.decode_base64_and_inflate(parsed_query_2['SAMLRequest'][0])
self.assertIn('<samlp:NameIDPolicy', request_2)
auth_3 = OneLogin_Saml2_Auth(self.get_request(), old_settings=settings_info)
target_url_3 = auth_3.login(return_to, False, False, False)
parsed_query_3 = parse_qs(urlparse(target_url_3)[4])
self.assertIn(sso_url, target_url_3)
self.assertIn('SAMLRequest', parsed_query_3)
request_3 = OneLogin_Saml2_Utils.decode_base64_and_inflate(parsed_query_3['SAMLRequest'][0])
self.assertNotIn('<samlp:NameIDPolicy', request_3)
def testLogout(self):
"""
Tests the logout method of the OneLogin_Saml2_Auth class
Case Logout with no parameters. A Logout Request is built and a redirect
executed
"""
settings_info = self.loadSettingsJSON()
request_data = self.get_request()
auth = OneLogin_Saml2_Auth(request_data, old_settings=settings_info)
target_url = auth.logout()
parsed_query = parse_qs(urlparse(target_url)[4])
slo_url = settings_info['idp']['singleLogoutService']['url']
self.assertIn(slo_url, target_url)
self.assertIn('SAMLRequest', parsed_query)
self.assertIn('RelayState', parsed_query)
hostname = OneLogin_Saml2_Utils.get_self_host(request_data)
self.assertIn(u'http://%s/index.html' % hostname, parsed_query['RelayState'])
def testLogoutWithRelayState(self):
"""
Tests the logout method of the OneLogin_Saml2_Auth class
Case Logout with relayState. A Logout Request with the RelayState
is built and a redirect executed
"""
settings_info = self.loadSettingsJSON()
auth = OneLogin_Saml2_Auth(self.get_request(), old_settings=settings_info)
relay_state = 'http://sp.example.com'
target_url = auth.logout(relay_state)
parsed_query = parse_qs(urlparse(target_url)[4])
slo_url = settings_info['idp']['singleLogoutService']['url']
self.assertIn(slo_url, target_url)
self.assertIn('SAMLRequest', parsed_query)
self.assertIn('RelayState', parsed_query)
self.assertIn(relay_state, parsed_query['RelayState'])
def testLogoutSigned(self):
"""
Tests the logout method of the OneLogin_Saml2_Auth class
Case Logout signed. A signed Logout Request is built
and a redirect executed
"""
settings_info = self.loadSettingsJSON()
settings_info['security']['logoutRequestSigned'] = True
auth = OneLogin_Saml2_Auth(self.get_request(), old_settings=settings_info)
return_to = u'http://example.com/returnto'
target_url = auth.logout(return_to)
parsed_query = parse_qs(urlparse(target_url)[4])
slo_url = settings_info['idp']['singleLogoutService']['url']
self.assertIn(slo_url, target_url)
self.assertIn('SAMLRequest', parsed_query)
self.assertIn('RelayState', parsed_query)
self.assertIn('SigAlg', parsed_query)
self.assertIn('Signature', parsed_query)
self.assertIn(return_to, parsed_query['RelayState'])
self.assertIn(OneLogin_Saml2_Constants.RSA_SHA1, parsed_query['SigAlg'])
def testLogoutNoSLO(self):
"""
Tests the logout method of the OneLogin_Saml2_Auth class
Case IdP no SLO endpoint.
"""
settings_info = self.loadSettingsJSON()
del settings_info['idp']['singleLogoutService']
auth = OneLogin_Saml2_Auth(self.get_request(), old_settings=settings_info)
with self.assertRaisesRegexp(OneLogin_Saml2_Error, 'The IdP does not support Single Log Out'):
# The Header of the redirect produces an Exception
auth.logout('http://example.com/returnto')
def testLogoutNameIDandSessionIndex(self):
"""
Tests the logout method of the OneLogin_Saml2_Auth class
Case nameID and sessionIndex as parameters.
"""
settings_info = self.loadSettingsJSON()
request_data = self.get_request()
auth = OneLogin_Saml2_Auth(request_data, old_settings=settings_info)
name_id = 'name_id_example'
session_index = 'session_index_example'
target_url = auth.logout(name_id=name_id, session_index=session_index)
parsed_query = parse_qs(urlparse(target_url)[4])
slo_url = settings_info['idp']['singleLogoutService']['url']
self.assertIn(slo_url, target_url)
self.assertIn('SAMLRequest', parsed_query)
logout_request = OneLogin_Saml2_Utils.decode_base64_and_inflate(parsed_query['SAMLRequest'][0])
name_id_from_request = OneLogin_Saml2_Logout_Request.get_nameid(logout_request)
sessions_index_in_request = OneLogin_Saml2_Logout_Request.get_session_indexes(logout_request)
self.assertIn(session_index, sessions_index_in_request)
self.assertEqual(name_id, name_id_from_request)
def testLogoutNameID(self):
"""
Tests the logout method of the OneLogin_Saml2_Auth class
Case nameID loaded after process SAML Response
"""
request_data = self.get_request()
message = self.file_contents(join(self.data_path, 'responses', 'valid_response.xml.base64'))
del request_data['get_data']
request_data['post_data'] = {
'SAMLResponse': message
}
auth = OneLogin_Saml2_Auth(request_data, old_settings=self.loadSettingsJSON())
auth.process_response()
name_id_from_response = auth.get_nameid()
name_id_format_from_response = auth.get_nameid_format()
target_url = auth.logout()
parsed_query = parse_qs(urlparse(target_url)[4])
self.assertIn('SAMLRequest', parsed_query)
logout_request = OneLogin_Saml2_Utils.decode_base64_and_inflate(parsed_query['SAMLRequest'][0])
name_id_from_request = OneLogin_Saml2_Logout_Request.get_nameid(logout_request)
name_id_format_from_request = OneLogin_Saml2_Logout_Request.get_nameid_format(logout_request)
self.assertEqual(name_id_from_response, name_id_from_request)
self.assertEqual(name_id_format_from_response, name_id_format_from_request)
new_name_id = "new_name_id"
new_name_id_format = "urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress"
target_url_2 = auth.logout(name_id=new_name_id, name_id_format=new_name_id_format)
parsed_query = parse_qs(urlparse(target_url_2)[4])
self.assertIn('SAMLRequest', parsed_query)
logout_request = OneLogin_Saml2_Utils.decode_base64_and_inflate(parsed_query['SAMLRequest'][0])
name_id_from_request = OneLogin_Saml2_Logout_Request.get_nameid(logout_request)
name_id_format_from_request = OneLogin_Saml2_Logout_Request.get_nameid_format(logout_request)
self.assertEqual(new_name_id, name_id_from_request)
self.assertEqual(new_name_id_format, name_id_format_from_request)
def testSetStrict(self):
"""
Tests the set_strict method of the OneLogin_Saml2_Auth
"""
settings_info = self.loadSettingsJSON()
settings_info['strict'] = False
auth = OneLogin_Saml2_Auth(self.get_request(), old_settings=settings_info)
settings = auth.get_settings()
self.assertFalse(settings.is_strict())
auth.set_strict(True)
settings = auth.get_settings()
self.assertTrue(settings.is_strict())
auth.set_strict(False)
settings = auth.get_settings()
self.assertFalse(settings.is_strict())
with self.assertRaises(AssertionError):
auth.set_strict('42')
def testIsAuthenticated(self):
"""
Tests the is_authenticated method of the OneLogin_Saml2_Auth
"""
request_data = self.get_request()
del request_data['get_data']
message = self.file_contents(join(self.data_path, 'responses', 'response1.xml.base64'))
request_data['post_data'] = {
'SAMLResponse': message
}
auth = OneLogin_Saml2_Auth(request_data, old_settings=self.loadSettingsJSON())
auth.process_response()
self.assertFalse(auth.is_authenticated())
message = self.file_contents(join(self.data_path, 'responses', 'valid_response.xml.base64'))
request_data['post_data'] = {
'SAMLResponse': message
}
auth = OneLogin_Saml2_Auth(request_data, old_settings=self.loadSettingsJSON())
auth.process_response()
self.assertTrue(auth.is_authenticated())
def testGetNameId(self):
"""
Tests the get_nameid method of the OneLogin_Saml2_Auth
"""
settings = self.loadSettingsJSON()
request_data = self.get_request()
del request_data['get_data']
message = self.file_contents(join(self.data_path, 'responses', 'response1.xml.base64'))
request_data['post_data'] = {
'SAMLResponse': message
}
auth = OneLogin_Saml2_Auth(request_data, old_settings=settings)
auth.process_response()
self.assertFalse(auth.is_authenticated())
self.assertEqual(auth.get_nameid(), None)
message = self.file_contents(join(self.data_path, 'responses', 'valid_response.xml.base64'))
request_data['post_data'] = {
'SAMLResponse': message
}
auth = OneLogin_Saml2_Auth(request_data, old_settings=settings)
auth.process_response()
self.assertTrue(auth.is_authenticated())
self.assertEqual("492882615acf31c8096b627245d76ae53036c090", auth.get_nameid())
settings_2 = self.loadSettingsJSON('settings2.json')
message = self.file_contents(join(self.data_path, 'responses', 'signed_message_encrypted_assertion2.xml.base64'))
request_data['post_data'] = {
'SAMLResponse': message
}
auth = OneLogin_Saml2_Auth(request_data, old_settings=settings_2)
auth.process_response()
self.assertTrue(auth.is_authenticated())
self.assertEqual("25ddd7d34a7d79db69167625cda56a320adf2876", auth.get_nameid())
def testGetNameIdFormat(self):
"""
Tests the get_nameid_format method of the OneLogin_Saml2_Auth
"""
settings = self.loadSettingsJSON()
request_data = self.get_request()
del request_data['get_data']
message = self.file_contents(join(self.data_path, 'responses', 'response1.xml.base64'))
request_data['post_data'] = {
'SAMLResponse': message
}
auth = OneLogin_Saml2_Auth(request_data, old_settings=settings)
auth.process_response()
self.assertFalse(auth.is_authenticated())
self.assertEqual(auth.get_nameid_format(), None)
message = self.file_contents(join(self.data_path, 'responses', 'valid_response.xml.base64'))
request_data['post_data'] = {
'SAMLResponse': message
}
auth = OneLogin_Saml2_Auth(request_data, old_settings=settings)
auth.process_response()
self.assertTrue(auth.is_authenticated())
self.assertEqual("urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress", auth.get_nameid_format())
settings_2 = self.loadSettingsJSON('settings2.json')
message = self.file_contents(join(self.data_path, 'responses', 'signed_message_encrypted_assertion2.xml.base64'))
request_data['post_data'] = {
'SAMLResponse': message
}
auth = OneLogin_Saml2_Auth(request_data, old_settings=settings_2)
auth.process_response()
self.assertTrue(auth.is_authenticated())
self.assertEqual("urn:oasis:names:tc:SAML:2.0:nameid-format:unspecified", auth.get_nameid_format())
def testBuildRequestSignature(self):
"""
Tests the build_request_signature method of the OneLogin_Saml2_Auth
"""
settings = self.loadSettingsJSON()
message = self.file_contents(join(self.data_path, 'logout_requests', 'logout_request_deflated.xml.base64'))
relay_state = 'http://relaystate.com'
auth = OneLogin_Saml2_Auth(self.get_request(), old_settings=settings)
signature = auth.build_request_signature(message, relay_state)
valid_signature = 'Pb1EXAX5TyipSJ1SndEKZstLQTsT+1D00IZAhEepBM+OkAZQSToivu3njgJu47HZiZAqgXZFgloBuuWE/+GdcSsRYEMkEkiSDWTpUr25zKYLJDSg6GNo6iAHsKSuFt46Z54Xe/keYxYP03Hdy97EwuuSjBzzgRc5tmpV+KC7+a0='
self.assertEqual(signature, valid_signature)
settings['sp']['privatekey'] = ''
settings['custom_base_path'] = u'invalid/path/'
auth2 = OneLogin_Saml2_Auth(self.get_request(), old_settings=settings)
with self.assertRaisesRegexp(OneLogin_Saml2_Error, "Trying to sign the SAMLRequest but can't load the SP private key"):
auth2.build_request_signature(message, relay_state)
def testBuildResponseSignature(self):
"""
Tests the build_response_signature method of the OneLogin_Saml2_Auth
"""
settings = self.loadSettingsJSON()
message = self.file_contents(join(self.data_path, 'logout_responses', 'logout_response_deflated.xml.base64'))
relay_state = 'http://relaystate.com'
auth = OneLogin_Saml2_Auth(self.get_request(), old_settings=settings)
signature = auth.build_response_signature(message, relay_state)
valid_signature = 'IcyWLRX6Dz3wHBfpcUaNLVDMGM3uo6z2Z11Gjq0/APPJaHboKGljffsgMVAGBml497yckq+eYKmmz+jpURV9yTj2sF9qfD6CwX2dEzSzMdRzB40X7pWyHgEJGIhs6BhaOt5oXEk4T+h3AczERqpVYFpL00yo7FNtyQkhZFpHFhM='
self.assertEqual(signature, valid_signature)
settings['sp']['privatekey'] = ''
settings['custom_base_path'] = u'invalid/path/'
auth2 = OneLogin_Saml2_Auth(self.get_request(), old_settings=settings)
with self.assertRaisesRegexp(OneLogin_Saml2_Error, "Trying to sign the SAMLResponse but can't load the SP private key"):
auth2.build_response_signature(message, relay_state)
def testGetLastSAMLResponse(self):
settings = self.loadSettingsJSON()
message = self.file_contents(join(self.data_path, 'responses', 'signed_message_response.xml.base64'))
message_wrapper = {'post_data': {'SAMLResponse': message}}
auth = OneLogin_Saml2_Auth(message_wrapper, old_settings=settings)
auth.process_response()
expected_message = self.file_contents(join(self.data_path, 'responses', 'pretty_signed_message_response.xml'))
self.assertEqual(auth.get_last_response_xml(True), expected_message)
# with encrypted assertion
message = self.file_contents(join(self.data_path, 'responses', 'valid_encrypted_assertion.xml.base64'))
message_wrapper = {'post_data': {'SAMLResponse': message}}
auth = OneLogin_Saml2_Auth(message_wrapper, old_settings=settings)
auth.process_response()
decrypted_response = self.file_contents(join(self.data_path, 'responses', 'decrypted_valid_encrypted_assertion.xml'))
self.assertEqual(auth.get_last_response_xml(False), decrypted_response)
pretty_decrypted_response = self.file_contents(join(self.data_path, 'responses', 'pretty_decrypted_valid_encrypted_assertion.xml'))
self.assertEqual(auth.get_last_response_xml(True), pretty_decrypted_response)
def testGetLastAuthnRequest(self):
settings = self.loadSettingsJSON()
auth = OneLogin_Saml2_Auth({'http_host': 'localhost', 'script_name': 'thing'}, old_settings=settings)
auth.login()
expectedFragment = (
'Destination="http://idp.example.com/SSOService.php"\n'
' ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST"\n'
' AssertionConsumerServiceURL="http://stuff.com/endpoints/endpoints/acs.php"\n'
' >\n'
' <saml:Issuer>http://stuff.com/endpoints/metadata.php</saml:Issuer>\n'
' <samlp:NameIDPolicy\n'
' Format="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified"\n'
' AllowCreate="true" />\n'
' <samlp:RequestedAuthnContext Comparison="exact">\n'
' <saml:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport</saml:AuthnContextClassRef>\n'
' </samlp:RequestedAuthnContext>\n</samlp:AuthnRequest>'
)
self.assertIn(expectedFragment, auth.get_last_request_xml())
def testGetLastLogoutRequest(self):
settings = self.loadSettingsJSON()
auth = OneLogin_Saml2_Auth({'http_host': 'localhost', 'script_name': 'thing'}, old_settings=settings)
auth.logout()
expectedFragment = (
' Destination="http://idp.example.com/SingleLogoutService.php">\n'
' <saml:Issuer>http://stuff.com/endpoints/metadata.php</saml:Issuer>\n'
' <saml:NameID Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity" SPNameQualifier="http://stuff.com/endpoints/metadata.php">http://idp.example.com/</saml:NameID>\n'
' \n </samlp:LogoutRequest>'
)
self.assertIn(expectedFragment, auth.get_last_request_xml())
request = self.file_contents(join(self.data_path, 'logout_requests', 'logout_request.xml'))
message = OneLogin_Saml2_Utils.deflate_and_base64_encode(request)
message_wrapper = {'get_data': {'SAMLRequest': message}}
auth = OneLogin_Saml2_Auth(message_wrapper, old_settings=settings)
auth.process_slo()
self.assertEqual(request, auth.get_last_request_xml())
def testGetLastLogoutResponse(self):
settings = self.loadSettingsJSON()
request = self.file_contents(join(self.data_path, 'logout_requests', 'logout_request.xml'))
message = OneLogin_Saml2_Utils.deflate_and_base64_encode(request)
message_wrapper = {'get_data': {'SAMLRequest': message}}
auth = OneLogin_Saml2_Auth(message_wrapper, old_settings=settings)
auth.process_slo()
expectedFragment = (
'Destination="http://idp.example.com/SingleLogoutService.php"\n'
' InResponseTo="ONELOGIN_21584ccdfaca36a145ae990442dcd96bfe60151e"\n>\n'
' <saml:Issuer>http://stuff.com/endpoints/metadata.php</saml:Issuer>\n'
' <samlp:Status>\n'
' <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success" />\n'
' </samlp:Status>\n'
'</samlp:LogoutResponse>'
)
self.assertIn(expectedFragment, auth.get_last_response_xml())
response = self.file_contents(join(self.data_path, 'logout_responses', 'logout_response.xml'))
message = OneLogin_Saml2_Utils.deflate_and_base64_encode(response)
message_wrapper = {'get_data': {'SAMLResponse': message}}
auth = OneLogin_Saml2_Auth(message_wrapper, old_settings=settings)
auth.process_slo()
self.assertEqual(response, auth.get_last_response_xml())
if __name__ == '__main__':
if is_running_under_teamcity():
runner = TeamcityTestRunner()
else:
runner = unittest.TextTestRunner()
unittest.main(testRunner=runner)
# --- datamart/unit_tests/test_index_builder.py (repo: cybergla/datamart, license: MIT) ---
from datamart.index_builder import IndexBuilder
import unittest
import pandas as pd
class TestIndexBuilder(unittest.TestCase):
def setUp(self):
self.ib = IndexBuilder()
self.global_datamart_id = 10000
self.df_for_global = pd.DataFrame({
"city": ["abu dhabi", "ajman", "dubai", "sharjah"],
'date': ["2018-10-05", "2014-02-23", "2020-09-23T00:10:00", "2023213"]
})
self.df_for_variable = pd.DataFrame({
'date': ["2018-10-05", "2014-02-23", "2020-09-23T00:10:00", "2023213"]
})
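# note: "2023213" is not a parseable date, so the profiler is expected to
# skip it when computing temporal coverage (see the expected start/end below)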
@Utils.test_print
def test_construct_variable_metadata_with_empty_variable(self):
variable_metadata = self.ib.construct_variable_metadata(
description={},
global_datamart_id=self.global_datamart_id,
col_offset=0,
data=self.df_for_variable
)
expected = {
'datamart_id': 10001,
'semantic_type': [],
'name': 'date',
'description': 'column name: date, dtype: object',
'temporal_coverage': {'start': '2014-02-23T00:00:00', 'end': '2020-09-23T00:10:00'}
}
self.assertEqual(variable_metadata.value, expected)
@Utils.test_print
def test_construct_variable_metadata_1(self):
variable_description = {
"name": "date",
"description": "the date of data",
"semantic_type": [
"https://metadata.datadrivendiscovery.org/types/Time"
],
"temporal_coverage": {
"start": "1874-10-13",
"end": "2018-10-01"
}
}
variable_metadata = self.ib.construct_variable_metadata(
description=variable_description,
global_datamart_id=self.global_datamart_id,
col_offset=0
)
expected = {
'datamart_id': 10001,
'name': 'date',
'description': 'the date of data',
'semantic_type': ['https://metadata.datadrivendiscovery.org/types/Time'],
'temporal_coverage': {
'start': '1874-10-13T00:00:00',
'end': '2018-10-01T00:00:00'
}
}
self.assertEqual(variable_metadata.value, expected)
@Utils.test_print
def test_construct_variable_metadata_1_with_data(self):
variable_description = {
"description": "the date of data",
"semantic_type": [
"https://metadata.datadrivendiscovery.org/types/Time"
],
"temporal_coverage": {
"start": None,
"end": None
}
}
variable_metadata = self.ib.construct_variable_metadata(
description=variable_description,
global_datamart_id=self.global_datamart_id,
col_offset=0,
data=self.df_for_variable
)
expected = {
'datamart_id': 10001,
'name': 'date',
'description': 'the date of data',
'semantic_type': ['https://metadata.datadrivendiscovery.org/types/Time'],
'temporal_coverage': {
'start': '2014-02-23T00:00:00',
'end': '2020-09-23T00:10:00'
}
}
self.assertEqual(variable_metadata.value, expected)
@Utils.test_print
def test_construct_variable_metadata_2(self):
variable_description = {
"name": "city",
"description": "the city data belongs to",
"semantic_type": [
"https://metadata.datadrivendiscovery.org/types/Location"
],
"named_entity": [
"abu dhabi",
"ajman",
"dubai",
"sharjah",
"kabul",
"kandahar",
"algiers",
"annaba",
"batna"
]
}
variable_metadata = self.ib.construct_variable_metadata(
description=variable_description,
global_datamart_id=self.global_datamart_id,
col_offset=0
)
expected = {
'datamart_id': 10001,
'name': 'city',
'description': 'the city data belongs to',
'semantic_type': ['https://metadata.datadrivendiscovery.org/types/Location'],
'named_entity': ['abu dhabi', 'ajman', 'dubai', 'sharjah', 'kabul', 'kandahar', 'algiers', 'annaba',
'batna']
}
self.assertEqual(variable_metadata.value, expected)
@Utils.test_print
def test_construct_variable_metadata_2_with_data(self):
data = {
"city": [
"abu dhabi",
"ajman",
"dubai",
"sharjah",
"kabul",
"kandahar",
"algiers",
"annaba",
"batna"
]
}
df = pd.DataFrame(data)
variable_description = {
"name": "city",
"semantic_type": [
"https://metadata.datadrivendiscovery.org/types/Location"
],
"named_entity": None
}
variable_metadata = self.ib.construct_variable_metadata(
description=variable_description,
global_datamart_id=self.global_datamart_id,
col_offset=0,
data=df
)
expected = {
'datamart_id': 10001,
'name': 'city',
'description': 'column name: city, dtype: object',
'semantic_type': ['https://metadata.datadrivendiscovery.org/types/Location'],
'named_entity': ['abu dhabi', 'ajman', 'dubai', 'sharjah', 'kabul', 'kandahar', 'algiers', 'annaba',
'batna']
}
self.assertEqual(variable_metadata.value, expected)
@Utils.test_print
def test_construct_global_metadata(self):
self.ib.current_global_index = 10000
description = {
"title": "TAVG",
"description": "Average temperature (tenths of degrees C)[Note that TAVG from source 'S' corresponds to an average for the period ending at 2400 UTC rather than local midnight]",
"url": "https://www1.ncdc.noaa.gov/pub/data/ghcn/daily/readme.txt",
"keywords": [
"Average Temperature."
],
"provenance": {"resource": "noaa.org"},
"materialization": {
"python_path": "noaa_materializer",
"arguments": {
"type": "TAVG"
}
},
"variables": [
{
"name": "date",
"description": "the date of data",
"semantic_type": [
"https://metadata.datadrivendiscovery.org/types/Time"
],
"temporal_coverage": {
"start": "1874-10-13",
"end": "2018-10-01"
}
},
{
"name": "city",
"description": "the city data belongs to",
"semantic_type": [
"https://metadata.datadrivendiscovery.org/types/Location"
],
"named_entity": [
"abu dhabi",
"ajman",
"dubai",
"sharjah"
]
}
],
"date_updated": "2018-09-28"
}
global_metadata = self.ib.construct_global_metadata(
description=description
)
expected = {
'datamart_id': 20000,
'title': 'TAVG',
'description': "Average temperature (tenths of degrees C)[Note that TAVG from source 'S' corresponds to an average for the period ending at 2400 UTC rather than local midnight]",
'url': 'https://www1.ncdc.noaa.gov/pub/data/ghcn/daily/readme.txt',
'keywords': ['Average Temperature.'],
'date_updated': '2018-09-28T00:00:00',
'provenance': {"resource": "noaa.org"},
'materialization': {
'python_path': 'noaa_materializer',
'arguments': {'type': 'TAVG'}
},
'variables': [
{
'datamart_id': 20001,
'name': 'date',
'description': 'the date of data',
'semantic_type': ['https://metadata.datadrivendiscovery.org/types/Time'],
'temporal_coverage': {'start': '1874-10-13T00:00:00', 'end': '2018-10-01T00:00:00'}
},
{
'datamart_id': 20002,
'name': 'city',
'description': 'the city data belongs to',
'semantic_type': ['https://metadata.datadrivendiscovery.org/types/Location'],
'named_entity': ['abu dhabi', 'ajman', 'dubai', 'sharjah']
}
]
}
self.assertEqual(global_metadata.value, expected)
@Utils.test_print
def test_construct_global_metadata_with_data(self):
self.ib.current_global_index = 10000
description = {
"url": "https://www1.ncdc.noaa.gov/pub/data/ghcn/daily/readme.txt",
"keywords": [
"Average Temperature."
],
"provenance": {"resource": "noaa.org"},
"materialization": {
"python_path": "noaa_materializer",
"arguments": {
"type": "TAVG"
}
},
"variables": [
{
"name": "city",
"description": "the city data belongs to",
"semantic_type": [
"https://metadata.datadrivendiscovery.org/types/Location"
],
"named_entity": None
},
{
"name": "date",
"description": "the date of data",
"semantic_type": [
"https://metadata.datadrivendiscovery.org/types/Time"
],
"temporal_coverage": None
}
],
"date_updated": "2018-09-28"
}
global_metadata = self.ib.construct_global_metadata(
description=description,
data=self.df_for_global
)
expected = {
'datamart_id': 20000,
'title': 'city date',
'description': 'city : object, date : object',
'url': 'https://www1.ncdc.noaa.gov/pub/data/ghcn/daily/readme.txt',
'keywords': ['Average Temperature.'],
'date_updated': '2018-09-28T00:00:00',
'provenance': {"resource": "noaa.org"},
'materialization': {'python_path': 'noaa_materializer', 'arguments': {'type': 'TAVG'}},
'variables': [
{
'datamart_id': 20001,
'name': 'city',
'description': 'the city data belongs to',
'semantic_type': ['https://metadata.datadrivendiscovery.org/types/Location'],
'named_entity': ['abu dhabi', 'ajman', 'dubai', 'sharjah']
},
{
'datamart_id': 20002,
'name': 'date',
'description': 'the date of data',
'semantic_type': ['https://metadata.datadrivendiscovery.org/types/Time'],
'temporal_coverage': {'start': '2014-02-23T00:00:00', 'end': '2020-09-23T00:10:00'}
}
]
}
self.assertEqual(global_metadata.value, expected)
@Utils.test_print
def test_construct_global_metadata_with_basic_fields(self):
self.ib.current_global_index = 10000
description = {
"materialization": {
"python_path": "noaa_materializer"
}
}
global_metadata = self.ib.construct_global_metadata(
description=description,
data=self.df_for_global
)
expected = {
'datamart_id': 20000,
'materialization': {'python_path': 'noaa_materializer', 'arguments': None},
'variables': [
{
'datamart_id': 20001,
'semantic_type': [],
'name': 'city',
'description': 'column name: city, dtype: object'
},
{
'datamart_id': 20002,
'semantic_type': [],
'name': 'date',
'description': 'column name: date, dtype: object',
'temporal_coverage': {'start': '2014-02-23T00:00:00', 'end': '2020-09-23T00:10:00'}
}
],
'title': 'city date',
'description': 'city : object, date : object',
'keywords': ['city', 'date']
}
self.assertEqual(global_metadata.value, expected)
# --- demo/fib/test_fib.py (repo: joshkel/automated-testing-with-pytest, license: MIT) ---
def test_fib():
assert fib(0) == 0
assert fib(1) == 1
assert fib(3) == 2
assert fib(4) == 3
assert fib(5) == 5
def test_negative_fib():
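# placeholder: the expected behavior of fib() for negative n is not
# specified by this suite yet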
pass
def test_big_fib():
assert fib(30) == 832040
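# For reference, a minimal implementation satisfying every assertion above
# (hypothetical -- the actual fib module under test lives in fib.py):
#
#     def fib(n):
#         a, b = 0, 1
#         for _ in range(n):
#             a, b = b, a + b
#         return a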
# --- tests/puzzle/test_solver.py (repo: robin92/poetry, license: MIT) ---
from typing import TYPE_CHECKING
from typing import Any
from typing import Dict
from typing import List
from typing import Optional
from typing import Type
import pytest
from cleo.io.null_io import NullIO
from poetry.core.packages.dependency import Dependency
from poetry.core.packages.package import Package
from poetry.core.packages.project_package import ProjectPackage
from poetry.core.packages.vcs_dependency import VCSDependency
from poetry.core.version.markers import parse_marker
from poetry.factory import Factory
from poetry.puzzle import Solver
from poetry.puzzle.exceptions import SolverProblemError
from poetry.puzzle.provider import Provider as BaseProvider
from poetry.repositories.installed_repository import InstalledRepository
from poetry.repositories.pool import Pool
from poetry.repositories.repository import Repository
from poetry.utils.env import MockEnv
from tests.helpers import get_dependency
from tests.helpers import get_package
from tests.repositories.test_legacy_repository import (
MockRepository as MockLegacyRepository,
)
from tests.repositories.test_pypi_repository import MockRepository as MockPyPIRepository
if TYPE_CHECKING:
import httpretty
from poetry.installation.operations import OperationTypes
from poetry.puzzle.transaction import Transaction
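# The VCS dependency below is created without an explicit branch, so
# DEFAULT_SOURCE_REF falls back to "HEAD".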
DEFAULT_SOURCE_REF = (
VCSDependency("poetry", "git", "git@github.com:python-poetry/poetry.git").branch
or "HEAD"
)
class Provider(BaseProvider):
def set_package_python_versions(self, python_versions: str) -> None:
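"""Test helper: constrain the root package to the given Python versions."""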
self._package.python_versions = python_versions
self._python_constraint = self._package.python_constraint
@pytest.fixture()
def io() -> NullIO:
return NullIO()
@pytest.fixture()
def package() -> ProjectPackage:
return ProjectPackage("root", "1.0")
@pytest.fixture()
def installed() -> InstalledRepository:
return InstalledRepository()
@pytest.fixture()
def locked() -> Repository:
return Repository()
@pytest.fixture()
def repo() -> Repository:
return Repository()
@pytest.fixture()
def pool(repo: Repository) -> Pool:
return Pool([repo])
@pytest.fixture()
def solver(
package: ProjectPackage,
pool: Pool,
installed: InstalledRepository,
locked: Repository,
io: NullIO,
) -> Solver:
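# wire in the test Provider above so tests can tweak the root package's
# Python constraint via set_package_python_versions()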
return Solver(
package, pool, installed, locked, io, provider=Provider(package, pool, io)
)
def check_solver_result(
transaction: "Transaction",
expected: List[Dict[str, Any]],
synchronize: bool = False,
) -> List["OperationTypes"]:
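"""Assert that the transaction's operations match ``expected``.

Each entry in ``expected`` is a dict with a ``job`` key ("install",
"update" or "remove"), the package(s) involved, and an optional
``skipped`` flag (defaulting to False).
"""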
for e in expected:
if "skipped" not in e:
e["skipped"] = False
result = []
ops = transaction.calculate_operations(synchronize=synchronize)
for op in ops:
if op.job_type == "update":
result.append(
{
"job": "update",
"from": op.initial_package,
"to": op.target_package,
"skipped": op.skipped,
}
)
else:
job = "install"
if op.job_type == "uninstall":
job = "remove"
result.append({"job": job, "package": op.package, "skipped": op.skipped})
assert result == expected
return ops
def test_solver_install_single(
solver: Solver, repo: Repository, package: ProjectPackage
):
package.add_dependency(Factory.create_dependency("A", "*"))
package_a = get_package("A", "1.0")
repo.add_package(package_a)
transaction = solver.solve([get_dependency("A")])
check_solver_result(transaction, [{"job": "install", "package": package_a}])
def test_solver_remove_if_no_longer_locked(
solver: Solver, locked: Repository, installed: InstalledRepository
):
package_a = get_package("A", "1.0")
installed.add_package(package_a)
locked.add_package(package_a)
transaction = solver.solve()
check_solver_result(transaction, [{"job": "remove", "package": package_a}])
def test_remove_non_installed(solver: Solver, repo: Repository, locked: Repository):
package_a = get_package("A", "1.0")
locked.add_package(package_a)
repo.add_package(package_a)
request = []
transaction = solver.solve(request)
check_solver_result(transaction, [])
def test_install_non_existing_package_fail(
solver: Solver, repo: Repository, package: ProjectPackage
):
package.add_dependency(Factory.create_dependency("B", "1"))
package_a = get_package("A", "1.0")
repo.add_package(package_a)
with pytest.raises(SolverProblemError):
solver.solve()
def test_solver_with_deps(solver: Solver, repo: Repository, package: ProjectPackage):
package.add_dependency(Factory.create_dependency("A", "*"))
package_a = get_package("A", "1.0")
package_b = get_package("B", "1.0")
new_package_b = get_package("B", "1.1")
repo.add_package(package_a)
repo.add_package(package_b)
repo.add_package(new_package_b)
package_a.add_dependency(get_dependency("B", "<1.1"))
transaction = solver.solve()
check_solver_result(
transaction,
[
{"job": "install", "package": package_b},
{"job": "install", "package": package_a},
],
)
def test_install_honours_not_equal(
solver: Solver, repo: Repository, package: ProjectPackage
):
package.add_dependency(Factory.create_dependency("A", "*"))
package_a = get_package("A", "1.0")
package_b = get_package("B", "1.0")
new_package_b11 = get_package("B", "1.1")
new_package_b12 = get_package("B", "1.2")
new_package_b13 = get_package("B", "1.3")
repo.add_package(package_a)
repo.add_package(package_b)
repo.add_package(new_package_b11)
repo.add_package(new_package_b12)
repo.add_package(new_package_b13)
package_a.add_dependency(get_dependency("B", "<=1.3,!=1.3,!=1.2"))
transaction = solver.solve()
check_solver_result(
transaction,
[
{"job": "install", "package": new_package_b11},
{"job": "install", "package": package_a},
],
)
def test_install_with_deps_in_order(
solver: Solver, repo: Repository, package: ProjectPackage
):
package.add_dependency(Factory.create_dependency("A", "*"))
package.add_dependency(Factory.create_dependency("B", "*"))
package.add_dependency(Factory.create_dependency("C", "*"))
package_a = get_package("A", "1.0")
package_b = get_package("B", "1.0")
package_c = get_package("C", "1.0")
repo.add_package(package_a)
repo.add_package(package_b)
repo.add_package(package_c)
package_b.add_dependency(get_dependency("A", ">=1.0"))
package_b.add_dependency(get_dependency("C", ">=1.0"))
package_c.add_dependency(get_dependency("A", ">=1.0"))
transaction = solver.solve()
check_solver_result(
transaction,
[
{"job": "install", "package": package_a},
{"job": "install", "package": package_c},
{"job": "install", "package": package_b},
],
)
def test_install_installed(
solver: Solver,
repo: Repository,
installed: InstalledRepository,
package: ProjectPackage,
):
package.add_dependency(Factory.create_dependency("A", "*"))
package_a = get_package("A", "1.0")
installed.add_package(package_a)
repo.add_package(package_a)
transaction = solver.solve()
check_solver_result(
transaction, [{"job": "install", "package": package_a, "skipped": True}]
)
def test_update_installed(
solver: Solver,
repo: Repository,
installed: InstalledRepository,
package: ProjectPackage,
):
package.add_dependency(Factory.create_dependency("A", "*"))
installed.add_package(get_package("A", "1.0"))
package_a = get_package("A", "1.0")
new_package_a = get_package("A", "1.1")
repo.add_package(package_a)
repo.add_package(new_package_a)
transaction = solver.solve()
check_solver_result(
transaction, [{"job": "update", "from": package_a, "to": new_package_a}]
)
def test_update_with_use_latest(
solver: Solver,
repo: Repository,
installed: InstalledRepository,
package: ProjectPackage,
locked: Repository,
):
package.add_dependency(Factory.create_dependency("A", "*"))
package.add_dependency(Factory.create_dependency("B", "*"))
installed.add_package(get_package("A", "1.0"))
package_a = get_package("A", "1.0")
new_package_a = get_package("A", "1.1")
package_b = get_package("B", "1.0")
new_package_b = get_package("B", "1.1")
repo.add_package(package_a)
repo.add_package(new_package_a)
repo.add_package(package_b)
repo.add_package(new_package_b)
locked.add_package(package_a)
locked.add_package(package_b)
transaction = solver.solve(use_latest=[package_b.name])
check_solver_result(
transaction,
[
{"job": "install", "package": package_a, "skipped": True},
{"job": "install", "package": new_package_b},
],
)
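# Transitive dependencies inherit the group of the dependency that pulled them
# in: below, C ends up in the "dev" category because it is only reachable
# through the dev-group dependency B.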
def test_solver_sets_groups(solver: Solver, repo: Repository, package: ProjectPackage):
package.add_dependency(Factory.create_dependency("A", "*"))
package.add_dependency(Factory.create_dependency("B", "*", groups=["dev"]))
package_a = get_package("A", "1.0")
package_b = get_package("B", "1.0")
package_c = get_package("C", "1.0")
package_b.add_dependency(Factory.create_dependency("C", "~1.0"))
repo.add_package(package_a)
repo.add_package(package_b)
repo.add_package(package_c)
transaction = solver.solve()
ops = check_solver_result(
transaction,
[
{"job": "install", "package": package_c},
{"job": "install", "package": package_a},
{"job": "install", "package": package_b},
],
)
    assert ops[0].package.category == "dev"
    assert ops[1].package.category == "main"
    assert ops[2].package.category == "dev"
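# The root package's python constraint filters candidate versions: with a root
# requirement of ~3.4, C 1.1 (which needs ^3.6) is rejected in favour of
# C 1.0 (^3.4).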
def test_solver_respects_root_package_python_versions(
solver: Solver, repo: Repository, package: ProjectPackage
):
solver.provider.set_package_python_versions("~3.4")
package.add_dependency(Factory.create_dependency("A", "*"))
package.add_dependency(Factory.create_dependency("B", "*"))
package_a = get_package("A", "1.0")
package_b = get_package("B", "1.0")
package_b.python_versions = "^3.3"
package_c = get_package("C", "1.0")
package_c.python_versions = "^3.4"
package_c11 = get_package("C", "1.1")
package_c11.python_versions = "^3.6"
package_b.add_dependency(Factory.create_dependency("C", "^1.0"))
repo.add_package(package_a)
repo.add_package(package_b)
repo.add_package(package_c)
repo.add_package(package_c11)
transaction = solver.solve()
check_solver_result(
transaction,
[
{"job": "install", "package": package_c},
{"job": "install", "package": package_a},
{"job": "install", "package": package_b},
],
)
def test_solver_fails_if_mismatch_root_python_versions(
solver: Solver, repo: Repository, package: ProjectPackage
):
solver.provider.set_package_python_versions("^3.4")
package.add_dependency(Factory.create_dependency("A", "*"))
package.add_dependency(Factory.create_dependency("B", "*"))
package_a = get_package("A", "1.0")
package_b = get_package("B", "1.0")
package_b.python_versions = "^3.6"
package_c = get_package("C", "1.0")
package_c.python_versions = "~3.3"
package_b.add_dependency(Factory.create_dependency("C", "~1.0"))
repo.add_package(package_a)
repo.add_package(package_b)
repo.add_package(package_c)
with pytest.raises(SolverProblemError):
solver.solve()
def test_solver_solves_optional_and_compatible_packages(
solver: Solver, repo: Repository, package: ProjectPackage
):
solver.provider.set_package_python_versions("~3.4")
package.extras["foo"] = [get_dependency("B")]
package.add_dependency(
Factory.create_dependency("A", {"version": "*", "python": "^3.4"})
)
package.add_dependency(
Factory.create_dependency("B", {"version": "*", "optional": True})
)
package_a = get_package("A", "1.0")
package_b = get_package("B", "1.0")
package_b.python_versions = "^3.3"
package_c = get_package("C", "1.0")
package_c.python_versions = "^3.4"
package_b.add_dependency(Factory.create_dependency("C", "^1.0"))
repo.add_package(package_a)
repo.add_package(package_b)
repo.add_package(package_c)
transaction = solver.solve()
check_solver_result(
transaction,
[
{"job": "install", "package": package_c},
{"job": "install", "package": package_a},
{"job": "install", "package": package_b},
],
)
def test_solver_does_not_return_extras_if_not_requested(
solver: Solver, repo: Repository, package: ProjectPackage
):
package.add_dependency(Factory.create_dependency("A", "*"))
package.add_dependency(Factory.create_dependency("B", "*"))
package_a = get_package("A", "1.0")
package_b = get_package("B", "1.0")
package_c = get_package("C", "1.0")
package_b.extras = {"foo": [get_dependency("C", "^1.0")]}
repo.add_package(package_a)
repo.add_package(package_b)
repo.add_package(package_c)
transaction = solver.solve()
check_solver_result(
transaction,
[
{"job": "install", "package": package_a},
{"job": "install", "package": package_b},
],
)
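# When an extra is requested, its optional dependency is resolved and the
# solved packages carry no residual 'extra == ...' marker, hence the
# marker.is_any() assertions below.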
def test_solver_returns_extras_if_requested(
solver: Solver, repo: Repository, package: ProjectPackage
):
package.add_dependency(Factory.create_dependency("A", "*"))
package.add_dependency(
Factory.create_dependency("B", {"version": "*", "extras": ["foo"]})
)
package_a = get_package("A", "1.0")
package_b = get_package("B", "1.0")
package_c = get_package("C", "1.0")
dep = get_dependency("C", "^1.0", optional=True)
dep.marker = parse_marker("extra == 'foo'")
package_b.extras = {"foo": [dep]}
package_b.add_dependency(dep)
repo.add_package(package_a)
repo.add_package(package_b)
repo.add_package(package_c)
transaction = solver.solve()
ops = check_solver_result(
transaction,
[
{"job": "install", "package": package_c},
{"job": "install", "package": package_a},
{"job": "install", "package": package_b},
],
)
assert ops[-1].package.marker.is_any()
assert ops[0].package.marker.is_any()
@pytest.mark.parametrize("enabled_extra", ["one", "two", None])
def test_solver_returns_extras_only_requested(
solver: Solver,
repo: Repository,
package: ProjectPackage,
    enabled_extra: Optional[str],
):
extras = [enabled_extra] if enabled_extra is not None else []
package.add_dependency(Factory.create_dependency("A", "*"))
package.add_dependency(
Factory.create_dependency("B", {"version": "*", "extras": extras})
)
package_a = get_package("A", "1.0")
package_b = get_package("B", "1.0")
package_c10 = get_package("C", "1.0")
package_c20 = get_package("C", "2.0")
dep10 = get_dependency("C", "1.0", optional=True)
dep10._in_extras.append("one")
dep10.marker = parse_marker("extra == 'one'")
dep20 = get_dependency("C", "2.0", optional=True)
dep20._in_extras.append("two")
dep20.marker = parse_marker("extra == 'two'")
package_b.extras = {"one": [dep10], "two": [dep20]}
package_b.add_dependency(dep10)
package_b.add_dependency(dep20)
repo.add_package(package_a)
repo.add_package(package_b)
repo.add_package(package_c10)
repo.add_package(package_c20)
transaction = solver.solve()
expected = [
{"job": "install", "package": package_a},
{"job": "install", "package": package_b},
]
if enabled_extra is not None:
expected.insert(
0,
{
"job": "install",
"package": package_c10 if enabled_extra == "one" else package_c20,
},
)
ops = check_solver_result(
transaction,
expected,
)
assert ops[-1].package.marker.is_any()
assert ops[0].package.marker.is_any()
@pytest.mark.parametrize("enabled_extra", ["one", "two", None])
def test_solver_returns_extras_when_multiple_extras_use_same_dependency(
solver: Solver,
repo: Repository,
package: ProjectPackage,
    enabled_extra: Optional[str],
):
package.add_dependency(Factory.create_dependency("A", "*"))
package_a = get_package("A", "1.0")
package_b = get_package("B", "1.0")
package_c = get_package("C", "1.0")
dep = get_dependency("C", "*", optional=True)
dep._in_extras.append("one")
dep._in_extras.append("two")
package_b.extras = {"one": [dep], "two": [dep]}
package_b.add_dependency(dep)
extras = [enabled_extra] if enabled_extra is not None else []
package_a.add_dependency(
Factory.create_dependency("B", {"version": "*", "extras": extras})
)
repo.add_package(package_a)
repo.add_package(package_b)
repo.add_package(package_c)
transaction = solver.solve()
expected = [
{"job": "install", "package": package_b},
{"job": "install", "package": package_a},
]
if enabled_extra is not None:
expected.insert(0, {"job": "install", "package": package_c})
ops = check_solver_result(
transaction,
expected,
)
assert ops[-1].package.marker.is_any()
assert ops[0].package.marker.is_any()
@pytest.mark.parametrize("enabled_extra", ["one", "two", None])
def test_solver_returns_extras_only_requested_nested(
solver: Solver,
repo: Repository,
package: ProjectPackage,
    enabled_extra: Optional[str],
):
package.add_dependency(Factory.create_dependency("A", "*"))
package_a = get_package("A", "1.0")
package_b = get_package("B", "1.0")
package_c10 = get_package("C", "1.0")
package_c20 = get_package("C", "2.0")
dep10 = get_dependency("C", "1.0", optional=True)
dep10._in_extras.append("one")
dep10.marker = parse_marker("extra == 'one'")
dep20 = get_dependency("C", "2.0", optional=True)
dep20._in_extras.append("two")
dep20.marker = parse_marker("extra == 'two'")
package_b.extras = {"one": [dep10], "two": [dep20]}
package_b.add_dependency(dep10)
package_b.add_dependency(dep20)
extras = [enabled_extra] if enabled_extra is not None else []
package_a.add_dependency(
Factory.create_dependency("B", {"version": "*", "extras": extras})
)
repo.add_package(package_a)
repo.add_package(package_b)
repo.add_package(package_c10)
repo.add_package(package_c20)
transaction = solver.solve()
expected = [
{"job": "install", "package": package_b},
{"job": "install", "package": package_a},
]
if enabled_extra is not None:
expected.insert(
0,
{
"job": "install",
"package": package_c10 if enabled_extra == "one" else package_c20,
},
)
ops = check_solver_result(transaction, expected)
assert ops[-1].package.marker.is_any()
assert ops[0].package.marker.is_any()
def test_solver_returns_prereleases_if_requested(
solver: Solver, repo: Repository, package: ProjectPackage
):
package.add_dependency(Factory.create_dependency("A", "*"))
package.add_dependency(Factory.create_dependency("B", "*"))
package.add_dependency(
Factory.create_dependency("C", {"version": "*", "allow-prereleases": True})
)
package_a = get_package("A", "1.0")
package_b = get_package("B", "1.0")
package_c = get_package("C", "1.0")
package_c_dev = get_package("C", "1.1-beta.1")
repo.add_package(package_a)
repo.add_package(package_b)
repo.add_package(package_c)
repo.add_package(package_c_dev)
transaction = solver.solve()
check_solver_result(
transaction,
[
{"job": "install", "package": package_a},
{"job": "install", "package": package_b},
{"job": "install", "package": package_c_dev},
],
)
def test_solver_does_not_return_prereleases_if_not_requested(
solver: Solver, repo: Repository, package: ProjectPackage
):
package.add_dependency(Factory.create_dependency("A", "*"))
package.add_dependency(Factory.create_dependency("B", "*"))
package.add_dependency(Factory.create_dependency("C", "*"))
package_a = get_package("A", "1.0")
package_b = get_package("B", "1.0")
package_c = get_package("C", "1.0")
package_c_dev = get_package("C", "1.1-beta.1")
repo.add_package(package_a)
repo.add_package(package_b)
repo.add_package(package_c)
repo.add_package(package_c_dev)
transaction = solver.solve()
check_solver_result(
transaction,
[
{"job": "install", "package": package_a},
{"job": "install", "package": package_b},
{"job": "install", "package": package_c},
],
)
def test_solver_sub_dependencies_with_requirements(
solver: Solver, repo: Repository, package: ProjectPackage
):
package.add_dependency(Factory.create_dependency("A", "*"))
package.add_dependency(Factory.create_dependency("B", "*"))
package_a = get_package("A", "1.0")
package_b = get_package("B", "1.0")
package_c = get_package("C", "1.0")
package_d = get_package("D", "1.0")
package_c.add_dependency(
Factory.create_dependency("D", {"version": "^1.0", "python": "<4.0"})
)
package_a.add_dependency(Factory.create_dependency("C", "*"))
package_b.add_dependency(Factory.create_dependency("D", "^1.0"))
repo.add_package(package_a)
repo.add_package(package_b)
repo.add_package(package_c)
repo.add_package(package_d)
transaction = solver.solve()
ops = check_solver_result(
transaction,
[
{"job": "install", "package": package_d},
{"job": "install", "package": package_c},
{"job": "install", "package": package_a},
{"job": "install", "package": package_b},
],
)
op = ops[1]
assert op.package.marker.is_any()
def test_solver_sub_dependencies_with_requirements_complex(
solver: Solver, repo: Repository, package: ProjectPackage
):
package.add_dependency(
Factory.create_dependency("A", {"version": "^1.0", "python": "<5.0"})
)
package.add_dependency(
Factory.create_dependency("B", {"version": "^1.0", "python": "<5.0"})
)
package.add_dependency(
Factory.create_dependency("C", {"version": "^1.0", "python": "<4.0"})
)
package_a = get_package("A", "1.0")
package_b = get_package("B", "1.0")
package_c = get_package("C", "1.0")
package_d = get_package("D", "1.0")
package_e = get_package("E", "1.0")
package_f = get_package("F", "1.0")
package_a.add_dependency(
Factory.create_dependency("B", {"version": "^1.0", "python": "<4.0"})
)
package_a.add_dependency(
Factory.create_dependency("D", {"version": "^1.0", "python": "<4.0"})
)
package_b.add_dependency(
Factory.create_dependency("E", {"version": "^1.0", "platform": "win32"})
)
package_b.add_dependency(
Factory.create_dependency("F", {"version": "^1.0", "python": "<5.0"})
)
package_c.add_dependency(
Factory.create_dependency("F", {"version": "^1.0", "python": "<4.0"})
)
package_d.add_dependency(Factory.create_dependency("F", "*"))
repo.add_package(package_a)
repo.add_package(package_b)
repo.add_package(package_c)
repo.add_package(package_d)
repo.add_package(package_e)
repo.add_package(package_f)
transaction = solver.solve()
check_solver_result(
transaction,
[
{"job": "install", "package": package_e},
{"job": "install", "package": package_f},
{"job": "install", "package": package_b},
{"job": "install", "package": package_d},
{"job": "install", "package": package_a},
{"job": "install", "package": package_c},
],
)
def test_solver_sub_dependencies_with_not_supported_python_version(
solver: Solver, repo: Repository, package: Package
):
solver.provider.set_package_python_versions("^3.5")
package.add_dependency(Factory.create_dependency("A", "*"))
package_a = get_package("A", "1.0")
package_b = get_package("B", "1.0")
package_b.python_versions = "<2.0"
package_a.add_dependency(
Factory.create_dependency("B", {"version": "^1.0", "python": "<2.0"})
)
repo.add_package(package_a)
repo.add_package(package_b)
transaction = solver.solve()
check_solver_result(transaction, [{"job": "install", "package": package_a}])
def test_solver_sub_dependencies_with_not_supported_python_version_transitive(
solver: Solver, repo: Repository, package: Package
):
solver.provider.set_package_python_versions("^3.4")
package.add_dependency(
Factory.create_dependency("httpx", {"version": "^0.17.1", "python": "^3.6"})
)
httpx = get_package("httpx", "0.17.1")
httpx.python_versions = ">=3.6"
httpcore = get_package("httpcore", "0.12.3")
httpcore.python_versions = ">=3.6"
sniffio_1_1_0 = get_package("sniffio", "1.1.0")
sniffio_1_1_0.python_versions = ">=3.5"
sniffio = get_package("sniffio", "1.2.0")
sniffio.python_versions = ">=3.5"
httpx.add_dependency(
Factory.create_dependency("httpcore", {"version": ">=0.12.1,<0.13"})
)
httpx.add_dependency(Factory.create_dependency("sniffio", {"version": "*"}))
httpcore.add_dependency(Factory.create_dependency("sniffio", {"version": "==1.*"}))
repo.add_package(httpx)
repo.add_package(httpcore)
repo.add_package(sniffio)
repo.add_package(sniffio_1_1_0)
transaction = solver.solve()
check_solver_result(
transaction,
[
{"job": "install", "package": sniffio, "skipped": False},
{"job": "install", "package": httpcore, "skipped": False},
{"job": "install", "package": httpx, "skipped": False},
],
)
def test_solver_with_dependency_in_both_default_and_dev_dependencies(
solver: Solver, repo: Repository, package: Package
):
solver.provider.set_package_python_versions("^3.5")
package.add_dependency(Factory.create_dependency("A", "*"))
package.add_dependency(
Factory.create_dependency(
"A", {"version": "*", "extras": ["foo"]}, groups=["dev"]
)
)
package_a = get_package("A", "1.0")
package_a.extras["foo"] = [get_dependency("C")]
package_a.add_dependency(
Factory.create_dependency("C", {"version": "^1.0", "optional": True})
)
package_a.add_dependency(Factory.create_dependency("B", {"version": "^1.0"}))
package_b = get_package("B", "1.0")
package_c = get_package("C", "1.0")
package_c.add_dependency(Factory.create_dependency("D", "^1.0"))
package_d = get_package("D", "1.0")
repo.add_package(package_a)
repo.add_package(package_b)
repo.add_package(package_c)
repo.add_package(package_d)
transaction = solver.solve()
ops = check_solver_result(
transaction,
[
{"job": "install", "package": package_d},
{"job": "install", "package": package_b},
{"job": "install", "package": package_c},
{"job": "install", "package": package_a},
],
)
d = ops[0].package
b = ops[1].package
c = ops[2].package
a = ops[3].package
assert d.category == "dev"
assert b.category == "main"
assert c.category == "dev"
assert a.category == "main"
def test_solver_with_dependency_in_both_main_and_dev_dependencies_with_one_more_dependent( # noqa: E501
solver: Solver, repo: Repository, package: Package
):
package.add_dependency(Factory.create_dependency("A", "*"))
package.add_dependency(Factory.create_dependency("E", "*"))
package.add_dependency(
Factory.create_dependency(
"A", {"version": "*", "extras": ["foo"]}, groups=["dev"]
)
)
package_a = get_package("A", "1.0")
package_a.extras["foo"] = [get_dependency("C")]
package_a.add_dependency(
Factory.create_dependency("C", {"version": "^1.0", "optional": True})
)
package_a.add_dependency(Factory.create_dependency("B", {"version": "^1.0"}))
package_b = get_package("B", "1.0")
package_c = get_package("C", "1.0")
package_c.add_dependency(Factory.create_dependency("D", "^1.0"))
package_d = get_package("D", "1.0")
package_e = get_package("E", "1.0")
package_e.add_dependency(Factory.create_dependency("A", "^1.0"))
repo.add_package(package_a)
repo.add_package(package_b)
repo.add_package(package_c)
repo.add_package(package_d)
repo.add_package(package_e)
transaction = solver.solve()
ops = check_solver_result(
transaction,
[
{"job": "install", "package": package_b},
{"job": "install", "package": package_d},
{"job": "install", "package": package_a},
{"job": "install", "package": package_c},
{"job": "install", "package": package_e},
],
)
b = ops[0].package
d = ops[1].package
a = ops[2].package
c = ops[3].package
e = ops[4].package
assert b.category == "main"
assert d.category == "dev"
assert a.category == "main"
assert c.category == "dev"
assert e.category == "main"
def test_solver_with_dependency_and_prerelease_sub_dependencies(
solver: Solver, repo: Repository, package: ProjectPackage
):
package.add_dependency(Factory.create_dependency("A", "*"))
package_a = get_package("A", "1.0")
package_a.add_dependency(Factory.create_dependency("B", ">=1.0.0.dev2"))
repo.add_package(package_a)
repo.add_package(get_package("B", "0.9.0"))
repo.add_package(get_package("B", "1.0.0.dev1"))
repo.add_package(get_package("B", "1.0.0.dev2"))
repo.add_package(get_package("B", "1.0.0.dev3"))
package_b = get_package("B", "1.0.0.dev4")
repo.add_package(package_b)
transaction = solver.solve()
check_solver_result(
transaction,
[
{"job": "install", "package": package_b},
{"job": "install", "package": package_a},
],
)
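# Cycles in the dependency graph (A -> B -> A, and longer chains) must resolve
# without infinite recursion.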
def test_solver_circular_dependency(
solver: Solver, repo: Repository, package: ProjectPackage
):
package.add_dependency(Factory.create_dependency("A", "*"))
package_a = get_package("A", "1.0")
package_a.add_dependency(Factory.create_dependency("B", "^1.0"))
package_b = get_package("B", "1.0")
package_b.add_dependency(Factory.create_dependency("A", "^1.0"))
package_b.add_dependency(Factory.create_dependency("C", "^1.0"))
package_c = get_package("C", "1.0")
repo.add_package(package_a)
repo.add_package(package_b)
repo.add_package(package_c)
transaction = solver.solve()
ops = check_solver_result(
transaction,
[
{"job": "install", "package": package_c},
{"job": "install", "package": package_b},
{"job": "install", "package": package_a},
],
)
assert ops[0].package.category == "main"
def test_solver_circular_dependency_chain(
solver: Solver, repo: Repository, package: ProjectPackage
):
package.add_dependency(Factory.create_dependency("A", "*"))
package_a = get_package("A", "1.0")
package_a.add_dependency(Factory.create_dependency("B", "^1.0"))
package_b = get_package("B", "1.0")
package_b.add_dependency(Factory.create_dependency("C", "^1.0"))
package_c = get_package("C", "1.0")
package_c.add_dependency(Factory.create_dependency("D", "^1.0"))
package_d = get_package("D", "1.0")
package_d.add_dependency(Factory.create_dependency("B", "^1.0"))
repo.add_package(package_a)
repo.add_package(package_b)
repo.add_package(package_c)
repo.add_package(package_d)
transaction = solver.solve()
ops = check_solver_result(
transaction,
[
{"job": "install", "package": package_d},
{"job": "install", "package": package_c},
{"job": "install", "package": package_b},
{"job": "install", "package": package_a},
],
)
assert ops[0].package.category == "main"
def test_solver_dense_dependencies(
solver: Solver, repo: Repository, package: ProjectPackage
):
    # The root package depends on packages a0..a(n-1), and each package ai
    # depends on packages a0..a(i-1), so the dependency graph is a transitive
    # tournament. For n = 3, for example: a2 depends on a0 and a1, and a1
    # depends on a0.
packages = []
n = 22
for i in range(n):
package_ai = get_package("a" + str(i), "1.0")
repo.add_package(package_ai)
packages.append(package_ai)
package.add_dependency(Factory.create_dependency("a" + str(i), "^1.0"))
for j in range(i):
package_ai.add_dependency(Factory.create_dependency("a" + str(j), "^1.0"))
transaction = solver.solve()
check_solver_result(
transaction, [{"job": "install", "package": packages[i]} for i in range(n)]
)
def test_solver_duplicate_dependencies_same_constraint(
solver: Solver, repo: Repository, package: ProjectPackage
):
package.add_dependency(Factory.create_dependency("A", "*"))
package_a = get_package("A", "1.0")
package_a.add_dependency(
Factory.create_dependency("B", {"version": "^1.0", "python": "2.7"})
)
package_a.add_dependency(
Factory.create_dependency("B", {"version": "^1.0", "python": ">=3.4"})
)
package_b = get_package("B", "1.0")
repo.add_package(package_a)
repo.add_package(package_b)
transaction = solver.solve()
check_solver_result(
transaction,
[
{"job": "install", "package": package_b},
{"job": "install", "package": package_a},
],
)
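# Duplicate declarations of B with non-overlapping python markers lock both
# B 1.0 and B 2.0, each guarded by its own interpreter range.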
def test_solver_duplicate_dependencies_different_constraints(
solver: Solver, repo: Repository, package: ProjectPackage
):
package.add_dependency(Factory.create_dependency("A", "*"))
package_a = get_package("A", "1.0")
package_a.add_dependency(
Factory.create_dependency("B", {"version": "^1.0", "python": "<3.4"})
)
package_a.add_dependency(
Factory.create_dependency("B", {"version": "^2.0", "python": ">=3.4"})
)
package_b10 = get_package("B", "1.0")
package_b20 = get_package("B", "2.0")
repo.add_package(package_a)
repo.add_package(package_b10)
repo.add_package(package_b20)
transaction = solver.solve()
check_solver_result(
transaction,
[
{"job": "install", "package": package_b10},
{"job": "install", "package": package_b20},
{"job": "install", "package": package_a},
],
)
def test_solver_duplicate_dependencies_different_constraints_same_requirements(
solver: Solver, repo: Repository, package: Package
):
package.add_dependency(Factory.create_dependency("A", "*"))
package_a = get_package("A", "1.0")
package_a.add_dependency(Factory.create_dependency("B", {"version": "^1.0"}))
package_a.add_dependency(Factory.create_dependency("B", {"version": "^2.0"}))
package_b10 = get_package("B", "1.0")
package_b20 = get_package("B", "2.0")
repo.add_package(package_a)
repo.add_package(package_b10)
repo.add_package(package_b20)
with pytest.raises(SolverProblemError) as e:
solver.solve()
expected = """\
Because a (1.0) depends on both B (^1.0) and B (^2.0), a is forbidden.
So, because no versions of a match !=1.0
and root depends on A (*), version solving failed."""
assert str(e.value) == expected
def test_solver_duplicate_dependencies_different_constraints_merge_no_markers(
solver: Solver, repo: Repository, package: Package
):
package.add_dependency(Factory.create_dependency("A", "*"))
package.add_dependency(Factory.create_dependency("B", "1.0"))
package_a10 = get_package("A", "1.0")
package_a10.add_dependency(Factory.create_dependency("C", {"version": "^1.0"}))
package_a20 = get_package("A", "2.0")
package_a20.add_dependency(
Factory.create_dependency("C", {"version": "^2.0"}) # incompatible with B
)
package_a20.add_dependency(
Factory.create_dependency("C", {"version": "!=2.1", "python": "3.10"})
)
package_b = get_package("B", "1.0")
package_b.add_dependency(Factory.create_dependency("C", {"version": "<2.0"}))
package_c10 = get_package("C", "1.0")
package_c20 = get_package("C", "2.0")
package_c21 = get_package("C", "2.1")
repo.add_package(package_a10)
repo.add_package(package_a20)
repo.add_package(package_b)
repo.add_package(package_c10)
repo.add_package(package_c20)
repo.add_package(package_c21)
transaction = solver.solve()
check_solver_result(
transaction,
[
{"job": "install", "package": package_c10},
{"job": "install", "package": package_a10}, # only a10, not a20
{"job": "install", "package": package_b},
],
)
def test_solver_duplicate_dependencies_sub_dependencies(
solver: Solver, repo: Repository, package: ProjectPackage
):
package.add_dependency(Factory.create_dependency("A", "*"))
package_a = get_package("A", "1.0")
package_a.add_dependency(
Factory.create_dependency("B", {"version": "^1.0", "python": "<3.4"})
)
package_a.add_dependency(
Factory.create_dependency("B", {"version": "^2.0", "python": ">=3.4"})
)
package_b10 = get_package("B", "1.0")
package_b20 = get_package("B", "2.0")
package_b10.add_dependency(Factory.create_dependency("C", "1.2"))
package_b20.add_dependency(Factory.create_dependency("C", "1.5"))
package_c12 = get_package("C", "1.2")
package_c15 = get_package("C", "1.5")
repo.add_package(package_a)
repo.add_package(package_b10)
repo.add_package(package_b20)
repo.add_package(package_c12)
repo.add_package(package_c15)
transaction = solver.solve()
check_solver_result(
transaction,
[
{"job": "install", "package": package_c12},
{"job": "install", "package": package_c15},
{"job": "install", "package": package_b10},
{"job": "install", "package": package_b20},
{"job": "install", "package": package_a},
],
)
def test_solver_fails_if_dependency_name_does_not_match_package(
solver: Solver, repo: Repository, package: ProjectPackage
):
package.add_dependency(
Factory.create_dependency(
"my-demo", {"git": "https://github.com/demo/demo.git"}
)
)
with pytest.raises(RuntimeError):
solver.solve()
def test_solver_does_not_get_stuck_in_recursion_on_circular_dependency(
solver: Solver, repo: Repository, package: Package
):
package_a = get_package("A", "1.0")
package_a.add_dependency(Factory.create_dependency("B", "^1.0"))
package_b = get_package("B", "1.0")
package_b.add_dependency(Factory.create_dependency("C", "^1.0"))
package_c = get_package("C", "1.0")
package_c.add_dependency(Factory.create_dependency("B", "^1.0"))
repo.add_package(package_a)
repo.add_package(package_b)
repo.add_package(package_c)
package.add_dependency(Factory.create_dependency("A", "^1.0"))
transaction = solver.solve()
check_solver_result(
transaction,
[
{"job": "install", "package": package_c},
{"job": "install", "package": package_b},
{"job": "install", "package": package_a},
],
)
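# Git dependencies resolve to a pinned commit: source_reference keeps the
# requested ref (branch, tag or rev) while source_resolved_reference records
# the full commit SHA.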
def test_solver_can_resolve_git_dependencies(
solver: Solver, repo: Repository, package: ProjectPackage
):
pendulum = get_package("pendulum", "2.0.3")
cleo = get_package("cleo", "1.0.0")
repo.add_package(pendulum)
repo.add_package(cleo)
package.add_dependency(
Factory.create_dependency("demo", {"git": "https://github.com/demo/demo.git"})
)
transaction = solver.solve()
demo = Package(
"demo",
"0.1.2",
source_type="git",
source_url="https://github.com/demo/demo.git",
source_reference=DEFAULT_SOURCE_REF,
source_resolved_reference="9cf87a285a2d3fbb0b9fa621997b3acc3631ed24",
)
ops = check_solver_result(
transaction,
[{"job": "install", "package": pendulum}, {"job": "install", "package": demo}],
)
op = ops[1]
assert op.package.source_type == "git"
assert op.package.source_reference == DEFAULT_SOURCE_REF
assert op.package.source_resolved_reference.startswith("9cf87a2")
def test_solver_can_resolve_git_dependencies_with_extras(
solver: Solver, repo: Repository, package: ProjectPackage
):
pendulum = get_package("pendulum", "2.0.3")
cleo = get_package("cleo", "1.0.0")
repo.add_package(pendulum)
repo.add_package(cleo)
package.add_dependency(
Factory.create_dependency(
"demo", {"git": "https://github.com/demo/demo.git", "extras": ["foo"]}
)
)
transaction = solver.solve()
demo = Package(
"demo",
"0.1.2",
source_type="git",
source_url="https://github.com/demo/demo.git",
source_reference=DEFAULT_SOURCE_REF,
source_resolved_reference="9cf87a285a2d3fbb0b9fa621997b3acc3631ed24",
)
check_solver_result(
transaction,
[
{"job": "install", "package": cleo},
{"job": "install", "package": pendulum},
{"job": "install", "package": demo},
],
)
@pytest.mark.parametrize(
"ref",
[{"branch": "a-branch"}, {"tag": "a-tag"}, {"rev": "9cf8"}],
ids=["branch", "tag", "rev"],
)
def test_solver_can_resolve_git_dependencies_with_ref(
solver: Solver, repo: Repository, package: Package, ref: Dict[str, str]
):
pendulum = get_package("pendulum", "2.0.3")
cleo = get_package("cleo", "1.0.0")
repo.add_package(pendulum)
repo.add_package(cleo)
demo = Package(
"demo",
"0.1.2",
source_type="git",
source_url="https://github.com/demo/demo.git",
source_reference=ref[list(ref.keys())[0]],
source_resolved_reference="9cf87a285a2d3fbb0b9fa621997b3acc3631ed24",
)
git_config = {demo.source_type: demo.source_url}
git_config.update(ref)
package.add_dependency(Factory.create_dependency("demo", git_config))
transaction = solver.solve()
ops = check_solver_result(
transaction,
[{"job": "install", "package": pendulum}, {"job": "install", "package": demo}],
)
op = ops[1]
assert op.package.source_type == "git"
assert op.package.source_reference == ref[list(ref.keys())[0]]
assert op.package.source_resolved_reference.startswith("9cf87a2")
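# A dependency's "python" marker must only admit interpreter versions that the
# resolved package itself supports; a marker wider than the package's
# python_versions (e.g. ^3.5 against >=3.6) makes solving fail.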
def test_solver_does_not_trigger_conflict_for_python_constraint_if_python_requirement_is_compatible( # noqa: E501
solver: Solver, repo: Repository, package: Package
):
solver.provider.set_package_python_versions("~2.7 || ^3.4")
package.add_dependency(
Factory.create_dependency("A", {"version": "^1.0", "python": "^3.6"})
)
package_a = get_package("A", "1.0.0")
package_a.python_versions = ">=3.6"
repo.add_package(package_a)
transaction = solver.solve()
check_solver_result(transaction, [{"job": "install", "package": package_a}])
def test_solver_does_not_trigger_conflict_for_python_constraint_if_python_requirement_is_compatible_multiple( # noqa: E501
solver: Solver, repo: Repository, package: Package
):
solver.provider.set_package_python_versions("~2.7 || ^3.4")
package.add_dependency(
Factory.create_dependency("A", {"version": "^1.0", "python": "^3.6"})
)
package.add_dependency(
Factory.create_dependency("B", {"version": "^1.0", "python": "^3.5.3"})
)
package_a = get_package("A", "1.0.0")
package_a.python_versions = ">=3.6"
package_a.add_dependency(Factory.create_dependency("B", "^1.0"))
package_b = get_package("B", "1.0.0")
package_b.python_versions = ">=3.5.3"
repo.add_package(package_a)
repo.add_package(package_b)
transaction = solver.solve()
check_solver_result(
transaction,
[
{"job": "install", "package": package_b},
{"job": "install", "package": package_a},
],
)
def test_solver_triggers_conflict_for_dependency_python_not_fully_compatible_with_package_python( # noqa: E501
solver: Solver, repo: Repository, package: Package
):
solver.provider.set_package_python_versions("~2.7 || ^3.4")
package.add_dependency(
Factory.create_dependency("A", {"version": "^1.0", "python": "^3.5"})
)
package_a = get_package("A", "1.0.0")
package_a.python_versions = ">=3.6"
repo.add_package(package_a)
with pytest.raises(SolverProblemError):
solver.solve()
def test_solver_finds_compatible_package_for_dependency_python_not_fully_compatible_with_package_python( # noqa: E501
solver: Solver, repo: Repository, package: Package
):
solver.provider.set_package_python_versions("~2.7 || ^3.4")
package.add_dependency(
Factory.create_dependency("A", {"version": "^1.0", "python": "^3.5"})
)
package_a101 = get_package("A", "1.0.1")
package_a101.python_versions = ">=3.6"
package_a100 = get_package("A", "1.0.0")
package_a100.python_versions = ">=3.5"
repo.add_package(package_a100)
repo.add_package(package_a101)
transaction = solver.solve()
check_solver_result(transaction, [{"job": "install", "package": package_a100}])
def test_solver_does_not_trigger_new_resolution_on_duplicate_dependencies_if_only_extras( # noqa: E501
solver: Solver, repo: Repository, package: Package
):
dep1 = Dependency.create_from_pep_508('B (>=1.0); extra == "foo"')
dep1.activate()
dep2 = Dependency.create_from_pep_508('B (>=2.0); extra == "bar"')
dep2.activate()
package.add_dependency(
Factory.create_dependency("A", {"version": "^1.0", "extras": ["foo", "bar"]})
)
package_a = get_package("A", "1.0.0")
package_a.extras = {"foo": [dep1], "bar": [dep2]}
package_a.add_dependency(dep1)
package_a.add_dependency(dep2)
package_b2 = get_package("B", "2.0.0")
package_b1 = get_package("B", "1.0.0")
repo.add_package(package_a)
repo.add_package(package_b1)
repo.add_package(package_b2)
transaction = solver.solve()
ops = check_solver_result(
transaction,
[
{"job": "install", "package": package_b2},
{"job": "install", "package": package_a},
],
)
assert str(ops[0].package.marker) == ""
assert str(ops[1].package.marker) == ""
def test_solver_does_not_raise_conflict_for_locked_conditional_dependencies(
solver: Solver, repo: Repository, package: Package
):
solver.provider.set_package_python_versions("~2.7 || ^3.4")
package.add_dependency(
Factory.create_dependency("A", {"version": "^1.0", "python": "^3.6"})
)
package.add_dependency(Factory.create_dependency("B", "^1.0"))
package_a = get_package("A", "1.0.0")
package_a.python_versions = ">=3.6"
package_a.marker = parse_marker(
'python_version >= "3.6" and python_version < "4.0"'
)
package_b = get_package("B", "1.0.0")
repo.add_package(package_a)
repo.add_package(package_b)
solver._locked = Repository([package_a])
transaction = solver.solve(use_latest=[package_b.name])
check_solver_result(
transaction,
[
{"job": "install", "package": package_a},
{"job": "install", "package": package_b},
],
)
def test_solver_returns_extras_if_requested_in_dependencies_and_not_in_root_package(
solver: Solver, repo: Repository, package: Package
):
package.add_dependency(Factory.create_dependency("A", "*"))
package.add_dependency(Factory.create_dependency("B", "*"))
package.add_dependency(Factory.create_dependency("C", "*"))
package_a = get_package("A", "1.0")
package_b = get_package("B", "1.0")
package_c = get_package("C", "1.0")
package_d = get_package("D", "1.0")
package_b.add_dependency(
Factory.create_dependency("C", {"version": "^1.0", "extras": ["foo"]})
)
package_c.add_dependency(
Factory.create_dependency("D", {"version": "^1.0", "optional": True})
)
package_c.extras = {"foo": [Factory.create_dependency("D", "^1.0")]}
repo.add_package(package_a)
repo.add_package(package_b)
repo.add_package(package_c)
repo.add_package(package_d)
transaction = solver.solve()
check_solver_result(
transaction,
[
{"job": "install", "package": package_d},
{"job": "install", "package": package_c},
{"job": "install", "package": package_a},
{"job": "install", "package": package_b},
],
)
def test_solver_should_not_resolve_prerelease_version_if_not_requested(
solver: Solver, repo: Repository, package: Package
):
package.add_dependency(Factory.create_dependency("A", "~1.8.0"))
package.add_dependency(Factory.create_dependency("B", "^0.5.0"))
package_a185 = get_package("A", "1.8.5")
package_a19b1 = get_package("A", "1.9b1")
package_b = get_package("B", "0.5.0")
package_b.add_dependency(Factory.create_dependency("A", ">=1.9b1"))
repo.add_package(package_a185)
repo.add_package(package_a19b1)
repo.add_package(package_b)
with pytest.raises(SolverProblemError):
solver.solve()
def test_solver_ignores_dependencies_with_incompatible_python_full_version_marker(
solver: Solver, repo: Repository, package: Package
):
solver.provider.set_package_python_versions("^3.6")
package.add_dependency(Factory.create_dependency("A", "^1.0"))
package.add_dependency(Factory.create_dependency("B", "^2.0"))
package_a = get_package("A", "1.0.0")
package_a.add_dependency(
Dependency.create_from_pep_508(
'B (<2.0); platform_python_implementation == "PyPy" and python_full_version'
' < "2.7.9"'
)
)
package_b200 = get_package("B", "2.0.0")
package_b100 = get_package("B", "1.0.0")
repo.add_package(package_a)
repo.add_package(package_b100)
repo.add_package(package_b200)
transaction = solver.solve()
check_solver_result(
transaction,
[
{"job": "install", "package": package_a},
{"job": "install", "package": package_b200},
],
)
def test_solver_git_dependencies_update(
solver: Solver, repo: Repository, package: Package, installed: InstalledRepository
):
pendulum = get_package("pendulum", "2.0.3")
cleo = get_package("cleo", "1.0.0")
repo.add_package(pendulum)
repo.add_package(cleo)
demo_installed = Package(
"demo",
"0.1.2",
source_type="git",
source_url="https://github.com/demo/demo.git",
source_reference=DEFAULT_SOURCE_REF,
source_resolved_reference="123456",
)
demo = Package(
"demo",
"0.1.2",
source_type="git",
source_url="https://github.com/demo/demo.git",
source_reference=DEFAULT_SOURCE_REF,
source_resolved_reference="9cf87a285a2d3fbb0b9fa621997b3acc3631ed24",
)
installed.add_package(demo_installed)
package.add_dependency(
Factory.create_dependency("demo", {"git": "https://github.com/demo/demo.git"})
)
transaction = solver.solve()
ops = check_solver_result(
transaction,
[
{"job": "install", "package": pendulum},
{"job": "update", "from": demo_installed, "to": demo},
],
)
op = ops[1]
assert op.job_type == "update"
assert op.package.source_type == "git"
assert op.package.source_reference == DEFAULT_SOURCE_REF
assert op.package.source_resolved_reference.startswith("9cf87a2")
assert op.initial_package.source_resolved_reference == "123456"
def test_solver_git_dependencies_update_skipped(
solver: Solver, repo: Repository, package: Package, installed: InstalledRepository
):
pendulum = get_package("pendulum", "2.0.3")
cleo = get_package("cleo", "1.0.0")
repo.add_package(pendulum)
repo.add_package(cleo)
demo = Package(
"demo",
"0.1.2",
source_type="git",
source_url="https://github.com/demo/demo.git",
source_reference="master",
source_resolved_reference="9cf87a285a2d3fbb0b9fa621997b3acc3631ed24",
)
installed.add_package(demo)
package.add_dependency(
Factory.create_dependency("demo", {"git": "https://github.com/demo/demo.git"})
)
transaction = solver.solve()
check_solver_result(
transaction,
[
{"job": "install", "package": pendulum},
{"job": "install", "package": demo, "skipped": True},
],
)
def test_solver_git_dependencies_short_hash_update_skipped(
solver: Solver, repo: Repository, package: Package, installed: InstalledRepository
):
pendulum = get_package("pendulum", "2.0.3")
cleo = get_package("cleo", "1.0.0")
repo.add_package(pendulum)
repo.add_package(cleo)
demo = Package(
"demo",
"0.1.2",
source_type="git",
source_url="https://github.com/demo/demo.git",
source_reference="9cf87a285a2d3fbb0b9fa621997b3acc3631ed24",
source_resolved_reference="9cf87a285a2d3fbb0b9fa621997b3acc3631ed24",
)
installed.add_package(demo)
package.add_dependency(
Factory.create_dependency(
"demo", {"git": "https://github.com/demo/demo.git", "rev": "9cf87a2"}
)
)
transaction = solver.solve()
check_solver_result(
transaction,
[
{"job": "install", "package": pendulum},
{
"job": "install",
"package": Package(
"demo",
"0.1.2",
source_type="git",
source_url="https://github.com/demo/demo.git",
source_reference="9cf87a285a2d3fbb0b9fa621997b3acc3631ed24",
source_resolved_reference=(
"9cf87a285a2d3fbb0b9fa621997b3acc3631ed24"
),
),
"skipped": True,
},
],
)
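# Path dependencies resolve in place: directories are installed with
# source_type "directory" and archives (sdists, wheels) with source_type
# "file", the local path serving as source_url.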
def test_solver_can_resolve_directory_dependencies(
solver: Solver, repo: Repository, package: ProjectPackage
):
pendulum = get_package("pendulum", "2.0.3")
repo.add_package(pendulum)
path = (
Path(__file__).parent.parent
/ "fixtures"
/ "git"
/ "github.com"
/ "demo"
/ "demo"
).as_posix()
package.add_dependency(Factory.create_dependency("demo", {"path": path}))
transaction = solver.solve()
demo = Package("demo", "0.1.2", source_type="directory", source_url=path)
ops = check_solver_result(
transaction,
[{"job": "install", "package": pendulum}, {"job": "install", "package": demo}],
)
op = ops[1]
assert op.package.name == "demo"
assert op.package.version.text == "0.1.2"
assert op.package.source_type == "directory"
assert op.package.source_url == path
def test_solver_can_resolve_directory_dependencies_nested_editable(
repo: Repository,
pool: Pool,
installed: InstalledRepository,
locked: Repository,
io: NullIO,
):
base = Path(__file__).parent.parent / "fixtures" / "project_with_nested_local"
poetry = Factory().create_poetry(cwd=base)
package = poetry.package
solver = Solver(
package, pool, installed, locked, io, provider=Provider(package, pool, io)
)
transaction = solver.solve()
ops = check_solver_result(
transaction,
[
{
"job": "install",
"package": Package(
"quix",
"1.2.3",
source_type="directory",
source_url=(base / "quix").as_posix(),
),
"skipped": False,
},
{
"job": "install",
"package": Package(
"bar",
"1.2.3",
source_type="directory",
source_url=(base / "bar").as_posix(),
),
"skipped": False,
},
{
"job": "install",
"package": Package(
"foo",
"1.2.3",
source_type="directory",
source_url=(base / "foo").as_posix(),
),
"skipped": False,
},
],
)
for op in ops:
assert op.package.source_type == "directory"
assert op.package.develop is True
def test_solver_can_resolve_directory_dependencies_with_extras(
solver: Solver, repo: Repository, package: ProjectPackage
):
pendulum = get_package("pendulum", "2.0.3")
cleo = get_package("cleo", "1.0.0")
repo.add_package(pendulum)
repo.add_package(cleo)
path = (
Path(__file__).parent.parent
/ "fixtures"
/ "git"
/ "github.com"
/ "demo"
/ "demo"
).as_posix()
package.add_dependency(
Factory.create_dependency("demo", {"path": path, "extras": ["foo"]})
)
transaction = solver.solve()
demo = Package("demo", "0.1.2", source_type="directory", source_url=path)
ops = check_solver_result(
transaction,
[
{"job": "install", "package": cleo},
{"job": "install", "package": pendulum},
{"job": "install", "package": demo},
],
)
op = ops[2]
assert op.package.name == "demo"
assert op.package.version.text == "0.1.2"
assert op.package.source_type == "directory"
assert op.package.source_url == path
def test_solver_can_resolve_sdist_dependencies(
solver: Solver, repo: Repository, package: ProjectPackage
):
pendulum = get_package("pendulum", "2.0.3")
repo.add_package(pendulum)
path = (
Path(__file__).parent.parent
/ "fixtures"
/ "distributions"
/ "demo-0.1.0.tar.gz"
).as_posix()
package.add_dependency(Factory.create_dependency("demo", {"path": path}))
transaction = solver.solve()
demo = Package("demo", "0.1.0", source_type="file", source_url=path)
ops = check_solver_result(
transaction,
[{"job": "install", "package": pendulum}, {"job": "install", "package": demo}],
)
op = ops[1]
assert op.package.name == "demo"
assert op.package.version.text == "0.1.0"
assert op.package.source_type == "file"
assert op.package.source_url == path
def test_solver_can_resolve_sdist_dependencies_with_extras(
solver: Solver, repo: Repository, package: ProjectPackage
):
pendulum = get_package("pendulum", "2.0.3")
cleo = get_package("cleo", "1.0.0")
repo.add_package(pendulum)
repo.add_package(cleo)
path = (
Path(__file__).parent.parent
/ "fixtures"
/ "distributions"
/ "demo-0.1.0.tar.gz"
).as_posix()
package.add_dependency(
Factory.create_dependency("demo", {"path": path, "extras": ["foo"]})
)
transaction = solver.solve()
demo = Package("demo", "0.1.0", source_type="file", source_url=path)
ops = check_solver_result(
transaction,
[
{"job": "install", "package": cleo},
{"job": "install", "package": pendulum},
{"job": "install", "package": demo},
],
)
op = ops[2]
assert op.package.name == "demo"
assert op.package.version.text == "0.1.0"
assert op.package.source_type == "file"
assert op.package.source_url == path
def test_solver_can_resolve_wheel_dependencies(
solver: Solver, repo: Repository, package: ProjectPackage
):
pendulum = get_package("pendulum", "2.0.3")
repo.add_package(pendulum)
path = (
Path(__file__).parent.parent
/ "fixtures"
/ "distributions"
/ "demo-0.1.0-py2.py3-none-any.whl"
).as_posix()
package.add_dependency(Factory.create_dependency("demo", {"path": path}))
transaction = solver.solve()
demo = Package("demo", "0.1.0", source_type="file", source_url=path)
ops = check_solver_result(
transaction,
[{"job": "install", "package": pendulum}, {"job": "install", "package": demo}],
)
op = ops[1]
assert op.package.name == "demo"
assert op.package.version.text == "0.1.0"
assert op.package.source_type == "file"
assert op.package.source_url == path
def test_solver_can_resolve_wheel_dependencies_with_extras(
solver: Solver, repo: Repository, package: ProjectPackage
):
pendulum = get_package("pendulum", "2.0.3")
cleo = get_package("cleo", "1.0.0")
repo.add_package(pendulum)
repo.add_package(cleo)
path = (
Path(__file__).parent.parent
/ "fixtures"
/ "distributions"
/ "demo-0.1.0-py2.py3-none-any.whl"
).as_posix()
package.add_dependency(
Factory.create_dependency("demo", {"path": path, "extras": ["foo"]})
)
transaction = solver.solve()
demo = Package("demo", "0.1.0", source_type="file", source_url=path)
ops = check_solver_result(
transaction,
[
{"job": "install", "package": cleo},
{"job": "install", "package": pendulum},
{"job": "install", "package": demo},
],
)
op = ops[2]
assert op.package.name == "demo"
assert op.package.version.text == "0.1.0"
assert op.package.source_type == "file"
assert op.package.source_url == path
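# Legacy (PEP 503 "simple") repositories tag resolved packages with
# source_type "legacy" plus the repository's URL and name.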
def test_solver_can_solve_with_legacy_repository_using_proper_dists(
package: ProjectPackage,
installed: InstalledRepository,
locked: Repository,
io: NullIO,
):
repo = MockLegacyRepository()
pool = Pool([repo])
solver = Solver(package, pool, installed, locked, io)
package.add_dependency(Factory.create_dependency("isort", "4.3.4"))
transaction = solver.solve()
ops = check_solver_result(
transaction,
[
{
"job": "install",
"package": Package(
"futures",
"3.2.0",
source_type="legacy",
source_url=repo.url,
source_reference=repo.name,
),
},
{
"job": "install",
"package": Package(
"isort",
"4.3.4",
source_type="legacy",
source_url=repo.url,
source_reference=repo.name,
),
},
],
)
futures = ops[0].package
assert futures.python_versions == ">=2.6, <3"
def test_solver_can_solve_with_legacy_repository_using_proper_python_compatible_dists(
package: ProjectPackage,
installed: InstalledRepository,
locked: Repository,
io: NullIO,
):
package.python_versions = "^3.7"
repo = MockLegacyRepository()
pool = Pool([repo])
solver = Solver(package, pool, installed, locked, io)
package.add_dependency(Factory.create_dependency("isort", "4.3.4"))
transaction = solver.solve()
check_solver_result(
transaction,
[
{
"job": "install",
"package": Package(
"isort",
"4.3.4",
source_type="legacy",
source_url=repo.url,
source_reference=repo.name,
),
}
],
)
def test_solver_skips_invalid_versions(
package: ProjectPackage,
installed: InstalledRepository,
locked: Repository,
io: NullIO,
):
package.python_versions = "^3.7"
repo = MockPyPIRepository()
pool = Pool([repo])
solver = Solver(package, pool, installed, locked, io)
package.add_dependency(Factory.create_dependency("trackpy", "^0.4"))
transaction = solver.solve()
check_solver_result(
transaction, [{"job": "install", "package": get_package("trackpy", "0.4.1")}]
)
def test_multiple_constraints_on_root(
package: ProjectPackage, solver: Solver, repo: Repository
):
package.add_dependency(
Factory.create_dependency("foo", {"version": "^1.0", "python": "^2.7"})
)
package.add_dependency(
Factory.create_dependency("foo", {"version": "^2.0", "python": "^3.7"})
)
foo15 = get_package("foo", "1.5.0")
foo25 = get_package("foo", "2.5.0")
repo.add_package(foo15)
repo.add_package(foo25)
transaction = solver.solve()
check_solver_result(
transaction,
[{"job": "install", "package": foo15}, {"job": "install", "package": foo25}],
)
def test_solver_chooses_most_recent_version_amongst_repositories(
package: ProjectPackage,
installed: InstalledRepository,
locked: Repository,
io: NullIO,
):
package.python_versions = "^3.7"
package.add_dependency(Factory.create_dependency("tomlkit", {"version": "^0.5"}))
repo = MockLegacyRepository()
pool = Pool([repo, MockPyPIRepository()])
solver = Solver(package, pool, installed, locked, io)
transaction = solver.solve()
ops = check_solver_result(
transaction, [{"job": "install", "package": get_package("tomlkit", "0.5.3")}]
)
assert ops[0].package.source_type is None
assert ops[0].package.source_url is None
def test_solver_chooses_from_correct_repository_if_forced(
package: ProjectPackage,
installed: InstalledRepository,
locked: Repository,
io: NullIO,
):
package.python_versions = "^3.7"
package.add_dependency(
Factory.create_dependency("tomlkit", {"version": "^0.5", "source": "legacy"})
)
repo = MockLegacyRepository()
pool = Pool([repo, MockPyPIRepository()])
solver = Solver(package, pool, installed, locked, io)
transaction = solver.solve()
ops = check_solver_result(
transaction,
[
{
"job": "install",
"package": Package(
"tomlkit",
"0.5.2",
source_type="legacy",
source_url=repo.url,
source_reference=repo.name,
),
}
],
)
assert ops[0].package.source_url == "http://legacy.foo.bar"
def test_solver_chooses_from_correct_repository_if_forced_and_transitive_dependency(
package: ProjectPackage,
installed: InstalledRepository,
locked: Repository,
io: NullIO,
):
package.python_versions = "^3.7"
package.add_dependency(Factory.create_dependency("foo", "^1.0"))
package.add_dependency(
Factory.create_dependency("tomlkit", {"version": "^0.5", "source": "legacy"})
)
repo = Repository()
foo = get_package("foo", "1.0.0")
foo.add_dependency(Factory.create_dependency("tomlkit", "^0.5.0"))
repo.add_package(foo)
pool = Pool([MockLegacyRepository(), repo, MockPyPIRepository()])
solver = Solver(package, pool, installed, locked, io)
transaction = solver.solve()
ops = check_solver_result(
transaction,
[
{
"job": "install",
"package": Package(
"tomlkit",
"0.5.2",
source_type="legacy",
source_url="http://legacy.foo.bar",
source_reference="legacy",
),
},
{"job": "install", "package": foo},
],
)
assert ops[0].package.source_url == "http://legacy.foo.bar"
assert ops[1].package.source_type is None
assert ops[1].package.source_url is None
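# The solver prefers primary repositories: a secondary repository only serves
# packages the primaries lack, or dependencies that pin it explicitly via
# "source".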
def test_solver_does_not_choose_from_secondary_repository_by_default(
package: ProjectPackage,
installed: InstalledRepository,
locked: Repository,
io: NullIO,
):
package.python_versions = "^3.7"
package.add_dependency(Factory.create_dependency("clikit", {"version": "^0.2.0"}))
pool = Pool()
pool.add_repository(MockPyPIRepository(), secondary=True)
pool.add_repository(MockLegacyRepository())
solver = Solver(package, pool, installed, locked, io)
transaction = solver.solve()
ops = check_solver_result(
transaction,
[
{
"job": "install",
"package": Package(
"pastel",
"0.1.0",
source_type="legacy",
source_url="http://legacy.foo.bar",
source_reference="legacy",
),
},
{"job": "install", "package": get_package("pylev", "1.3.0")},
{
"job": "install",
"package": Package(
"clikit",
"0.2.4",
source_type="legacy",
source_url="http://legacy.foo.bar",
source_reference="legacy",
),
},
],
)
assert ops[0].package.source_url == "http://legacy.foo.bar"
assert ops[1].package.source_type is None
assert ops[1].package.source_url is None
assert ops[2].package.source_url == "http://legacy.foo.bar"
def test_solver_chooses_from_secondary_if_explicit(
package: ProjectPackage,
installed: InstalledRepository,
locked: Repository,
io: NullIO,
):
package.python_versions = "^3.7"
package.add_dependency(
Factory.create_dependency("clikit", {"version": "^0.2.0", "source": "PyPI"})
)
pool = Pool()
pool.add_repository(MockPyPIRepository(), secondary=True)
pool.add_repository(MockLegacyRepository())
solver = Solver(package, pool, installed, locked, io)
transaction = solver.solve()
ops = check_solver_result(
transaction,
[
{
"job": "install",
"package": Package(
"pastel",
"0.1.0",
source_type="legacy",
source_url="http://legacy.foo.bar",
source_reference="legacy",
),
},
{"job": "install", "package": get_package("pylev", "1.3.0")},
{"job": "install", "package": get_package("clikit", "0.2.4")},
],
)
assert ops[0].package.source_url == "http://legacy.foo.bar"
assert ops[1].package.source_type is None
assert ops[1].package.source_url is None
assert ops[2].package.source_type is None
assert ops[2].package.source_url is None
def test_solver_discards_packages_with_empty_markers(
package: ProjectPackage,
installed: InstalledRepository,
locked: Repository,
io: NullIO,
pool: Pool,
repo: Repository,
):
package.python_versions = "~2.7 || ^3.4"
package.add_dependency(
Factory.create_dependency(
"a", {"version": "^0.1.0", "markers": "python_version >= '3.4'"}
)
)
package_a = get_package("a", "0.1.0")
package_a.add_dependency(
Factory.create_dependency(
"b", {"version": "^0.1.0", "markers": "python_version < '3.2'"}
)
)
package_a.add_dependency(Factory.create_dependency("c", "^0.2.0"))
package_b = get_package("b", "0.1.0")
package_c = get_package("c", "0.2.0")
repo.add_package(package_a)
repo.add_package(package_b)
repo.add_package(package_c)
solver = Solver(package, pool, installed, locked, io)
transaction = solver.solve()
check_solver_result(
transaction,
[
{"job": "install", "package": package_c},
{"job": "install", "package": package_a},
],
)
def test_solver_does_not_raise_conflict_for_conditional_dev_dependencies(
solver: Solver, repo: Repository, package: Package
):
solver.provider.set_package_python_versions("~2.7 || ^3.5")
package.add_dependency(
Factory.create_dependency(
"A", {"version": "^1.0", "python": "~2.7"}, groups=["dev"]
)
)
package.add_dependency(
Factory.create_dependency(
"A", {"version": "^2.0", "python": "^3.5"}, groups=["dev"]
)
)
package_a100 = get_package("A", "1.0.0")
package_a200 = get_package("A", "2.0.0")
repo.add_package(package_a100)
repo.add_package(package_a200)
transaction = solver.solve()
check_solver_result(
transaction,
[
{"job": "install", "package": package_a100},
{"job": "install", "package": package_a200},
],
)
def test_solver_does_not_loop_indefinitely_on_duplicate_constraints_with_extras(
solver: Solver, repo: Repository, package: Package
):
solver.provider.set_package_python_versions("~2.7 || ^3.5")
package.add_dependency(
Factory.create_dependency(
"requests", {"version": "^2.22.0", "extras": ["security"]}
)
)
requests = get_package("requests", "2.22.0")
requests.add_dependency(Factory.create_dependency("idna", ">=2.5,<2.9"))
requests.add_dependency(
Factory.create_dependency(
"idna", {"version": ">=2.0.0", "markers": "extra == 'security'"}
)
)
requests.extras["security"] = [get_dependency("idna", ">=2.0.0")]
idna = get_package("idna", "2.8")
repo.add_package(requests)
repo.add_package(idna)
transaction = solver.solve()
check_solver_result(
transaction,
[{"job": "install", "package": idna}, {"job": "install", "package": requests}],
)
def test_solver_does_not_fail_with_locked_git_and_non_git_dependencies(
repo: Repository,
package: Package,
locked: Repository,
pool: Pool,
installed: InstalledRepository,
io: NullIO,
):
package.add_dependency(
Factory.create_dependency("demo", {"git": "https://github.com/demo/demo.git"})
)
package.add_dependency(Factory.create_dependency("a", "^1.2.3"))
git_package = Package(
"demo",
"0.1.2",
source_type="git",
source_url="https://github.com/demo/demo.git",
source_reference=DEFAULT_SOURCE_REF,
source_resolved_reference="9cf87a285a2d3fbb0b9fa621997b3acc3631ed24",
)
installed.add_package(git_package)
locked.add_package(get_package("a", "1.2.3"))
locked.add_package(git_package)
repo.add_package(get_package("a", "1.2.3"))
repo.add_package(Package("pendulum", "2.1.2"))
solver = Solver(package, pool, installed, locked, io)
transaction = solver.solve()
check_solver_result(
transaction,
[
{"job": "install", "package": get_package("a", "1.2.3")},
{"job": "install", "package": git_package, "skipped": True},
],
)
def test_ignore_python_constraint_no_overlap_dependencies(
solver: Solver, repo: Repository, package: ProjectPackage
):
    demo = get_package("demo", "1.0.0")
    demo.add_dependency(
Factory.create_dependency(
"configparser", {"version": "^1.2.3", "python": "<3.2"}
)
)
package.add_dependency(
Factory.create_dependency("demo", {"version": "^1.0.0", "python": "^3.6"})
)
    repo.add_package(demo)
repo.add_package(get_package("configparser", "1.2.3"))
transaction = solver.solve()
check_solver_result(
transaction,
[{"job": "install", "package": pytest}],
)
def test_solver_should_not_go_into_an_infinite_loop_on_duplicate_dependencies(
solver: Solver, repo: Repository, package: Package
):
solver.provider.set_package_python_versions("~2.7 || ^3.5")
package.add_dependency(Factory.create_dependency("A", "^1.0"))
package_a = get_package("A", "1.0.0")
package_a.add_dependency(Factory.create_dependency("B", "*"))
package_a.add_dependency(
Factory.create_dependency(
"B", {"version": "^1.0", "markers": "implementation_name == 'pypy'"}
)
)
package_b20 = get_package("B", "2.0.0")
package_b10 = get_package("B", "1.0.0")
repo.add_package(package_a)
repo.add_package(package_b10)
repo.add_package(package_b20)
transaction = solver.solve()
check_solver_result(
transaction,
[
{"job": "install", "package": package_b10},
{"job": "install", "package": package_b20},
{"job": "install", "package": package_a},
],
)
def test_solver_synchronize_single(
package: ProjectPackage,
pool: Pool,
installed: InstalledRepository,
locked: Repository,
io: NullIO,
):
solver = Solver(package, pool, installed, locked, io)
package_a = get_package("a", "1.0")
installed.add_package(package_a)
transaction = solver.solve()
check_solver_result(
transaction, [{"job": "remove", "package": package_a}], synchronize=True
)
@pytest.mark.skip(reason="Poetry no longer has critical package requirements")
def test_solver_with_synchronization_keeps_critical_package(
package: ProjectPackage,
pool: Pool,
installed: InstalledRepository,
locked: Repository,
io: NullIO,
):
solver = Solver(package, pool, installed, locked, io)
package_pip = get_package("setuptools", "1.0")
installed.add_package(package_pip)
transaction = solver.solve()
check_solver_result(transaction, [])
def test_solver_cannot_choose_another_version_for_directory_dependencies(
solver: Solver, repo: Repository, package: Package
):
pendulum = get_package("pendulum", "2.0.3")
demo = get_package("demo", "0.1.0")
foo = get_package("foo", "1.2.3")
foo.add_dependency(Factory.create_dependency("demo", "<0.1.2"))
repo.add_package(foo)
repo.add_package(demo)
repo.add_package(pendulum)
path = (
Path(__file__).parent.parent
/ "fixtures"
/ "git"
/ "github.com"
/ "demo"
/ "demo"
).as_posix()
package.add_dependency(Factory.create_dependency("demo", {"path": path}))
package.add_dependency(Factory.create_dependency("foo", "^1.2.3"))
# This is not solvable since the demo version is pinned
# via the directory dependency
with pytest.raises(SolverProblemError):
solver.solve()
def test_solver_cannot_choose_another_version_for_file_dependencies(
solver: Solver, repo: Repository, package: Package
):
pendulum = get_package("pendulum", "2.0.3")
demo = get_package("demo", "0.0.8")
foo = get_package("foo", "1.2.3")
foo.add_dependency(Factory.create_dependency("demo", "<0.1.0"))
repo.add_package(foo)
repo.add_package(demo)
repo.add_package(pendulum)
path = (
Path(__file__).parent.parent
/ "fixtures"
/ "distributions"
/ "demo-0.1.0-py2.py3-none-any.whl"
).as_posix()
package.add_dependency(Factory.create_dependency("demo", {"path": path}))
package.add_dependency(Factory.create_dependency("foo", "^1.2.3"))
# This is not solvable since the demo version is pinned
# via the file dependency
with pytest.raises(SolverProblemError):
solver.solve()
def test_solver_cannot_choose_another_version_for_git_dependencies(
solver: Solver, repo: Repository, package: Package
):
pendulum = get_package("pendulum", "2.0.3")
demo = get_package("demo", "0.0.8")
foo = get_package("foo", "1.2.3")
foo.add_dependency(Factory.create_dependency("demo", "<0.1.0"))
repo.add_package(foo)
repo.add_package(demo)
repo.add_package(pendulum)
package.add_dependency(
Factory.create_dependency("demo", {"git": "https://github.com/demo/demo.git"})
)
package.add_dependency(Factory.create_dependency("foo", "^1.2.3"))
# This is not solvable since the demo version is pinned
    # via the git dependency
with pytest.raises(SolverProblemError):
solver.solve()
def test_solver_cannot_choose_another_version_for_url_dependencies(
solver: Solver,
repo: Repository,
package: Package,
http: Type["httpretty.httpretty"],
):
path = (
Path(__file__).parent.parent
/ "fixtures"
/ "distributions"
/ "demo-0.1.0-py2.py3-none-any.whl"
)
http.register_uri(
"GET",
"https://foo.bar/demo-0.1.0-py2.py3-none-any.whl",
body=path.read_bytes(),
streaming=True,
)
pendulum = get_package("pendulum", "2.0.3")
demo = get_package("demo", "0.0.8")
foo = get_package("foo", "1.2.3")
foo.add_dependency(Factory.create_dependency("demo", "<0.1.0"))
repo.add_package(foo)
repo.add_package(demo)
repo.add_package(pendulum)
package.add_dependency(
Factory.create_dependency(
"demo",
{"url": "https://foo.bar/distributions/demo-0.1.0-py2.py3-none-any.whl"},
)
)
package.add_dependency(Factory.create_dependency("foo", "^1.2.3"))
# This is not solvable since the demo version is pinned
    # via the URL dependency
with pytest.raises(SolverProblemError):
solver.solve()
def test_solver_should_not_update_same_version_packages_if_installed_has_no_source_type(
solver: Solver, repo: Repository, package: Package, installed: InstalledRepository
):
package.add_dependency(Factory.create_dependency("foo", "1.0.0"))
foo = Package(
"foo",
"1.0.0",
source_type="legacy",
source_url="https://foo.bar",
source_reference="custom",
)
repo.add_package(foo)
installed.add_package(get_package("foo", "1.0.0"))
transaction = solver.solve()
check_solver_result(
transaction, [{"job": "install", "package": foo, "skipped": True}]
)
def test_solver_should_use_the_python_constraint_from_the_environment_if_available(
solver: Solver, repo: Repository, package: Package, installed: InstalledRepository
):
solver.provider.set_package_python_versions("~2.7 || ^3.5")
package.add_dependency(Factory.create_dependency("A", "^1.0"))
a = get_package("A", "1.0.0")
a.add_dependency(
Factory.create_dependency(
"B", {"version": "^1.0.0", "markers": 'python_version < "3.2"'}
)
)
b = get_package("B", "1.0.0")
b.python_versions = ">=2.6, <3"
repo.add_package(a)
repo.add_package(b)
with solver.use_environment(MockEnv((2, 7, 18))):
transaction = solver.solve()
check_solver_result(
transaction,
[{"job": "install", "package": b}, {"job": "install", "package": a}],
)
def test_solver_should_resolve_all_versions_for_multiple_duplicate_dependencies(
solver: Solver, repo: Repository, package: Package
):
package.python_versions = "~2.7 || ^3.5"
package.add_dependency(
Factory.create_dependency(
"A", {"version": "^1.0", "markers": "python_version < '3.5'"}
)
)
package.add_dependency(
Factory.create_dependency(
"A", {"version": "^2.0", "markers": "python_version >= '3.5'"}
)
)
package.add_dependency(
Factory.create_dependency(
"B", {"version": "^3.0", "markers": "python_version < '3.5'"}
)
)
package.add_dependency(
Factory.create_dependency(
"B", {"version": "^4.0", "markers": "python_version >= '3.5'"}
)
)
package_a10 = get_package("A", "1.0.0")
package_a20 = get_package("A", "2.0.0")
package_b30 = get_package("B", "3.0.0")
package_b40 = get_package("B", "4.0.0")
repo.add_package(package_a10)
repo.add_package(package_a20)
repo.add_package(package_b30)
repo.add_package(package_b40)
transaction = solver.solve()
check_solver_result(
transaction,
[
{"job": "install", "package": package_a10},
{"job": "install", "package": package_a20},
{"job": "install", "package": package_b30},
{"job": "install", "package": package_b40},
],
)
def test_solver_should_not_raise_errors_for_irrelevant_python_constraints(
solver: Solver, repo: Repository, package: Package
):
package.python_versions = "^3.6"
solver.provider.set_package_python_versions("^3.6")
package.add_dependency(
Factory.create_dependency("dataclasses", {"version": "^0.7", "python": "<3.7"})
)
dataclasses = get_package("dataclasses", "0.7")
dataclasses.python_versions = ">=3.6, <3.7"
repo.add_package(dataclasses)
transaction = solver.solve()
check_solver_result(transaction, [{"job": "install", "package": dataclasses}])
def test_solver_can_resolve_transitive_extras(
solver: Solver, repo: Repository, package: ProjectPackage
):
package.add_dependency(Factory.create_dependency("requests", "^2.24.0"))
package.add_dependency(Factory.create_dependency("PyOTA", "^2.1.0"))
requests = get_package("requests", "2.24.0")
requests.add_dependency(Factory.create_dependency("certifi", ">=2017.4.17"))
dep = get_dependency("PyOpenSSL", ">=0.14")
requests.add_dependency(
Factory.create_dependency("PyOpenSSL", {"version": ">=0.14", "optional": True})
)
requests.extras["security"] = [dep]
pyota = get_package("PyOTA", "2.1.0")
pyota.add_dependency(
Factory.create_dependency(
"requests", {"version": ">=2.24.0", "extras": ["security"]}
)
)
repo.add_package(requests)
repo.add_package(pyota)
repo.add_package(get_package("certifi", "2017.4.17"))
repo.add_package(get_package("pyopenssl", "0.14"))
transaction = solver.solve()
check_solver_result(
transaction,
[
{"job": "install", "package": get_package("certifi", "2017.4.17")},
{"job": "install", "package": get_package("pyopenssl", "0.14")},
{"job": "install", "package": requests},
{"job": "install", "package": pyota},
],
)
def test_solver_can_resolve_for_packages_with_missing_extras(
solver: Solver, repo: Repository, package: ProjectPackage
):
package.add_dependency(
Factory.create_dependency(
"django-anymail", {"version": "^6.0", "extras": ["postmark"]}
)
)
django_anymail = get_package("django-anymail", "6.1.0")
django_anymail.add_dependency(Factory.create_dependency("django", ">=2.0"))
django_anymail.add_dependency(Factory.create_dependency("requests", ">=2.4.3"))
django_anymail.add_dependency(
Factory.create_dependency("boto3", {"version": "*", "optional": True})
)
django_anymail.extras["amazon_ses"] = [Factory.create_dependency("boto3", "*")]
django = get_package("django", "2.2.0")
boto3 = get_package("boto3", "1.0.0")
requests = get_package("requests", "2.24.0")
repo.add_package(django_anymail)
repo.add_package(django)
repo.add_package(boto3)
repo.add_package(requests)
transaction = solver.solve()
check_solver_result(
transaction,
[
{"job": "install", "package": django},
{"job": "install", "package": requests},
{"job": "install", "package": django_anymail},
],
)
def test_solver_can_resolve_python_restricted_package_dependencies(
solver: Solver, repo: Repository, package: Package, locked: Repository
):
package.add_dependency(
Factory.create_dependency("futures", {"version": "^3.3.0", "python": "~2.7"})
)
package.add_dependency(
Factory.create_dependency("pre-commit", {"version": "^2.6", "python": "^3.6.1"})
)
futures = Package("futures", "3.3.0")
futures.python_versions = ">=2.6, <3"
pre_commit = Package("pre-commit", "2.7.1")
pre_commit.python_versions = ">=3.6.1"
locked.add_package(futures)
locked.add_package(pre_commit)
repo.add_package(futures)
repo.add_package(pre_commit)
transaction = solver.solve(use_latest=["pre-commit"])
check_solver_result(
transaction,
[
{"job": "install", "package": futures},
{"job": "install", "package": pre_commit},
],
)
def test_solver_should_not_raise_errors_for_irrelevant_transitive_python_constraints(
solver: Solver, repo: Repository, package: Package
):
package.python_versions = "~2.7 || ^3.5"
solver.provider.set_package_python_versions("~2.7 || ^3.5")
package.add_dependency(Factory.create_dependency("virtualenv", "^20.4.3"))
package.add_dependency(
Factory.create_dependency("pre-commit", {"version": "^2.6", "python": "^3.6.1"})
)
virtualenv = get_package("virtualenv", "20.4.3")
virtualenv.python_versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,>=2.7"
virtualenv.add_dependency(
Factory.create_dependency(
"importlib-resources", {"version": "*", "markers": 'python_version < "3.7"'}
)
)
pre_commit = Package("pre-commit", "2.7.1")
pre_commit.python_versions = ">=3.6.1"
pre_commit.add_dependency(
Factory.create_dependency(
"importlib-resources", {"version": "*", "markers": 'python_version < "3.7"'}
)
)
importlib_resources = get_package("importlib-resources", "5.1.2")
importlib_resources.python_versions = ">=3.6"
importlib_resources_3_2_1 = get_package("importlib-resources", "3.2.1")
importlib_resources_3_2_1.python_versions = (
"!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7"
)
repo.add_package(virtualenv)
repo.add_package(pre_commit)
repo.add_package(importlib_resources)
repo.add_package(importlib_resources_3_2_1)
transaction = solver.solve()
check_solver_result(
transaction,
[
{"job": "install", "package": importlib_resources_3_2_1},
{"job": "install", "package": pre_commit},
{"job": "install", "package": virtualenv},
],
)
| 29.487995 | 123 | 0.623589 | 10,720 | 90,882 | 5.035541 | 0.038246 | 0.081436 | 0.055242 | 0.097293 | 0.850873 | 0.820011 | 0.798614 | 0.761583 | 0.720198 | 0.701469 | 0 | 0.028346 | 0.224038 | 90,882 | 3,081 | 124 | 29.497566 | 0.737114 | 0.006063 | 0 | 0.650518 | 0 | 0.001656 | 0.124636 | 0.007751 | 0 | 0 | 0 | 0 | 0.033126 | 1 | 0.04058 | false | 0 | 0.015735 | 0.002899 | 0.060041 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
131823b7fef12579fb2e78d85e90a13d00cc8f0c | 60 | py | Python | app/check.py | zevaverbach/binary_quiz | cd230a60e71191e984336fc31b0ce9cee8932615 | [
"MIT"
] | null | null | null | app/check.py | zevaverbach/binary_quiz | cd230a60e71191e984336fc31b0ce9cee8932615 | [
"MIT"
] | null | null | null | app/check.py | zevaverbach/binary_quiz | cd230a60e71191e984336fc31b0ce9cee8932615 | [
"MIT"
] | null | null | null |
def check(binary_string):
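    """Return the integer value of a binary (base-2) string, e.g. "1010" -> 10."""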
return int(binary_string, 2)
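# Editor's usage sketch (not part of the original app/check.py); a few
# self-checking examples of the conversion above:
assert check("1010") == 10
assert check("0") == 0
assert check("11111111") == 255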
| 15 | 32 | 0.733333 | 9 | 60 | 4.666667 | 0.777778 | 0.571429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02 | 0.166667 | 60 | 3 | 33 | 20 | 0.82 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
132879e39dac8130b423e45a1caae3dc5e260952 | 8,460 | py | Python | model.py | akkaze/tf2-unet | 552fba0d234a69a40c11447aff59fde2ddd11d29 | [
"MIT"
] | 1 | 2020-02-16T05:32:06.000Z | 2020-02-16T05:32:06.000Z | model.py | akkaze/tf2-unet | 552fba0d234a69a40c11447aff59fde2ddd11d29 | [
"MIT"
] | null | null | null | model.py | akkaze/tf2-unet | 552fba0d234a69a40c11447aff59fde2ddd11d29 | [
"MIT"
] | 2 | 2020-02-16T05:32:07.000Z | 2020-05-05T10:14:25.000Z | import numpy as np
from tensorflow.keras.models import Model
from tensorflow.keras.layers import (Activation, BatchNormalization, Concatenate,
                                     Conv2D, Conv2DTranspose, DepthwiseConv2D,
                                     Input, MaxPooling2D, UpSampling2D)
def unet(input_size=(64, 80, 1), num_classes=10, use_sep_conv=False, use_deconv=False):
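    """Build a small U-Net that outputs a per-pixel softmax over `num_classes`.

    `use_sep_conv` replaces each 3x3 convolution with a depthwise 3x3 followed
    by a pointwise 1x1 convolution; `use_deconv` swaps the decoder's bilinear
    upsampling for Conv2DTranspose layers.
    """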
inputs = Input(input_size)
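    # Encoder block 1: two conv-BN-ReLU stages, followed by 2x2 max-pooling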
if use_sep_conv:
conv1 = Conv2D(8, 1, padding='same')(inputs)
conv1 = Conv2D(16, 1, padding='same',
kernel_initializer='he_normal')(DepthwiseConv2D(3,
padding='same',
kernel_initializer='he_normal')(conv1))
conv1 = BatchNormalization()(conv1)
conv1 = Activation('relu')(conv1)
conv1 = Conv2D(16, 1, padding='same',
kernel_initializer='he_normal')(DepthwiseConv2D(3,
padding='same',
kernel_initializer='he_normal')(conv1))
conv1 = BatchNormalization()(conv1)
conv1 = Activation('relu')(conv1)
else:
conv1 = Conv2D(8, 3, padding='same', kernel_initializer='he_normal')(inputs)
conv1 = BatchNormalization()(conv1)
conv1 = Activation('relu')(conv1)
conv1 = Conv2D(8, 3, padding='same', kernel_initializer='he_normal')(conv1)
conv1 = BatchNormalization()(conv1)
conv1 = Activation('relu')(conv1)
pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
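    # Encoder block 2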
if use_sep_conv:
conv2 = Conv2D(20, 1, padding='same',
kernel_initializer='he_normal')(DepthwiseConv2D(3,
padding='same',
kernel_initializer='he_normal')(pool1))
conv2 = BatchNormalization()(conv2)
conv2 = Activation('relu')(conv2)
conv2 = Conv2D(20, 1, padding='same',
kernel_initializer='he_normal')(DepthwiseConv2D(3,
padding='same',
kernel_initializer='he_normal')(conv2))
conv2 = BatchNormalization()(conv2)
conv2 = Activation('relu')(conv2)
else:
conv2 = Conv2D(12, 3, padding='same', kernel_initializer='he_normal')(pool1)
conv2 = BatchNormalization()(conv2)
conv2 = Activation('relu')(conv2)
conv2 = Conv2D(12, 3, padding='same', kernel_initializer='he_normal')(conv2)
conv2 = BatchNormalization()(conv2)
conv2 = Activation('relu')(conv2)
pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
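    # Encoder block 3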
if use_sep_conv:
conv3 = Conv2D(32, 1, padding='same',
kernel_initializer='he_normal')(DepthwiseConv2D(3,
padding='same',
kernel_initializer='he_normal')(pool2))
conv3 = BatchNormalization()(conv3)
conv3 = Activation('relu')(conv3)
conv3 = Conv2D(32, 1, padding='same',
kernel_initializer='he_normal')(DepthwiseConv2D(3,
padding='same',
kernel_initializer='he_normal')(conv3))
conv3 = BatchNormalization()(conv3)
conv3 = Activation('relu')(conv3)
else:
conv3 = Conv2D(16, 3, padding='same', kernel_initializer='he_normal')(pool2)
conv3 = BatchNormalization()(conv3)
conv3 = Activation('relu')(conv3)
conv3 = Conv2D(16, 3, padding='same', kernel_initializer='he_normal')(conv3)
conv3 = BatchNormalization()(conv3)
conv3 = Activation('relu')(conv3)
pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)
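    # Bottleneck: deepest conv stack, no further pooling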
if use_sep_conv:
conv4 = Conv2D(32, 1, padding='same',
kernel_initializer='he_normal')(DepthwiseConv2D(3,
padding='same',
kernel_initializer='he_normal')(pool3))
conv4 = BatchNormalization()(conv4)
conv4 = Activation('relu')(conv4)
conv4 = Conv2D(32, 1, padding='same',
kernel_initializer='he_normal')(DepthwiseConv2D(3,
padding='same',
kernel_initializer='he_normal')(conv4))
conv4 = BatchNormalization()(conv4)
conv4 = Activation('relu')(conv4)
else:
conv4 = Conv2D(16, 3, padding='same', kernel_initializer='he_normal')(pool3)
conv4 = BatchNormalization()(conv4)
conv4 = Activation('relu')(conv4)
conv4 = Conv2D(16, 3, padding='same', kernel_initializer='he_normal')(conv4)
conv4 = BatchNormalization()(conv4)
conv4 = Activation('relu')(conv4)
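    # Decoder block 1: 2x upsample, concatenate with encoder block 3 skip (conv3)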
if use_sep_conv:
up5 = Conv2D(48, 1, padding='same', kernel_initializer='he_normal')(DepthwiseConv2D(
3, padding='same', kernel_initializer='he_normal')(UpSampling2D(size=(2, 2),
interpolation='bilinear')(conv4)))
elif use_deconv:
up5 = Conv2DTranspose(12, 3, 2, activation='relu', padding='same', kernel_initializer='he_normal')((conv4))
else:
up5 = Conv2D(12, 3, activation='relu', padding='same',
kernel_initializer='he_normal')(UpSampling2D(size=(2, 2), interpolation='bilinear')(conv4))
up5 = BatchNormalization()(up5)
up5 = Activation('relu')(up5)
merge5 = Concatenate(axis=3)([conv3, up5])
conv5 = Conv2D(12, 3, padding='same', kernel_initializer='he_normal')(merge5)
conv5 = BatchNormalization()(conv5)
conv5 = Activation('relu')(conv5)
conv5 = Conv2D(12, 3, padding='same', kernel_initializer='he_normal')(conv5)
conv5 = BatchNormalization()(conv5)
conv5 = Activation('relu')(conv5)
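    # Decoder block 2: 2x upsample, concatenate with encoder block 2 skip (conv2)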
if use_sep_conv:
up6 = Conv2D(36, 1, padding='same', kernel_initializer='he_normal')(DepthwiseConv2D(
3, padding='same', kernel_initializer='he_normal')(UpSampling2D(size=(2, 2),
interpolation='bilinear')(conv5)))
elif use_deconv:
up6 = Conv2DTranspose(12, 3, 2, padding='same', kernel_initializer='he_normal')((conv5))
else:
up6 = Conv2D(12, 3, padding='same',
kernel_initializer='he_normal')(UpSampling2D(size=(2, 2), interpolation='bilinear')(conv5))
up6 = BatchNormalization()(up6)
up6 = Activation('relu')(up6)
merge6 = Concatenate(axis=3)([conv2, up6])
conv6 = Conv2D(12, 3, padding='same', kernel_initializer='he_normal')(merge6)
conv6 = BatchNormalization()(conv6)
conv6 = Activation('relu')(conv6)
conv6 = Conv2D(12, 3, padding='same', kernel_initializer='he_normal')(conv6)
conv6 = BatchNormalization()(conv6)
conv6 = Activation('relu')(conv6)
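    # Decoder block 3: 2x upsample, concatenate with encoder block 1 skip (conv1)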
if use_sep_conv:
up7 = Conv2D(24, 1, padding='same', kernel_initializer='he_normal')(DepthwiseConv2D(
3, padding='same', kernel_initializer='he_normal')(UpSampling2D(size=(2, 2),
interpolation='bilinear')(conv6)))
elif use_deconv:
up7 = Conv2DTranspose(8, 3, 2, padding='same', kernel_initializer='he_normal')((conv6))
else:
up7 = Conv2D(8, 3, padding='same',
kernel_initializer='he_normal')(UpSampling2D(size=(2, 2), interpolation='bilinear')(conv6))
up7 = BatchNormalization()(up7)
up7 = Activation('relu')(up7)
merge7 = Concatenate(axis=3)([conv1, up7])
conv7 = Conv2D(8, 3, padding='same', kernel_initializer='he_normal')(merge7)
conv7 = BatchNormalization()(conv7)
conv7 = Activation('relu')(conv7)
conv7 = Conv2D(8, 3, padding='same', kernel_initializer='he_normal')(conv7)
conv7 = BatchNormalization()(conv7)
conv7 = Activation('relu')(conv7)
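    # Head: 1x1 convolution + softmax gives per-pixel class probabilities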
conv8 = Conv2D(num_classes, 1, activation='softmax')(conv7)
model = Model(inputs=inputs, outputs=conv8)
return model | 54.935065 | 115 | 0.542553 | 795 | 8,460 | 5.633962 | 0.099371 | 0.105604 | 0.159411 | 0.262559 | 0.80777 | 0.793034 | 0.793034 | 0.717794 | 0.686537 | 0.671578 | 0 | 0.064419 | 0.33026 | 8,460 | 154 | 116 | 54.935065 | 0.726085 | 0 | 0 | 0.612245 | 0 | 0 | 0.084269 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.006803 | false | 0 | 0.027211 | 0 | 0.040816 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
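# Editor's usage sketch (not part of the original model.py): one way the
# builder above might be used. The optimizer, loss, and metrics below are
# illustrative assumptions, not settings taken from the repository.
model = unet(input_size=(64, 80, 1), num_classes=10, use_sep_conv=True)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])  # assumes integer class labels per pixel
model.summary()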
135acf8fa84bc4acdada8d3618edd7b78229ed1b | 153 | py | Python | pdfbuilder/utils.py | VadimShmatov/pdfbuilder | a7db707dcf8979d123d35cbcbeaf7e7de37ca8aa | [
"MIT"
] | null | null | null | pdfbuilder/utils.py | VadimShmatov/pdfbuilder | a7db707dcf8979d123d35cbcbeaf7e7de37ca8aa | [
"MIT"
] | null | null | null | pdfbuilder/utils.py | VadimShmatov/pdfbuilder | a7db707dcf8979d123d35cbcbeaf7e7de37ca8aa | [
"MIT"
] | null | null | null | import random
import string
def random_string(length):
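    """Return `length` random characters drawn from A-Z and 0-9.

    Uses the `random` module, so the result is not cryptographically secure.
    """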
return ''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(length)) | 25.5 | 96 | 0.771242 | 21 | 153 | 5.47619 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.124183 | 153 | 6 | 96 | 25.5 | 0.858209 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.5 | 0.25 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
135fbf3da152dfa9377b2bc2ce0947176245fbb5 | 112 | py | Python | zisan/FileTools/__init__.py | JintuZheng/zisan | 84b30d1ee91754d4351841a2077c78146028adfc | [
"MIT"
] | 40 | 2020-02-14T07:03:16.000Z | 2022-03-07T10:52:18.000Z | zisan/FileTools/__init__.py | EpsilionJT/zisan | 84b30d1ee91754d4351841a2077c78146028adfc | [
"MIT"
] | 1 | 2021-09-04T07:40:26.000Z | 2021-09-04T14:51:03.000Z | zisan/FileTools/__init__.py | EpsilionJT/zisan | 84b30d1ee91754d4351841a2077c78146028adfc | [
"MIT"
] | 9 | 2020-02-24T01:08:11.000Z | 2021-12-15T07:35:14.000Z | from .tools import pngToJpg,plot_one_box,getFiles,add_roi,scaleTransfrom,get_random_files,newMatUC3,roi_cutPoint | 112 | 112 | 0.901786 | 17 | 112 | 5.588235 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009174 | 0.026786 | 112 | 1 | 112 | 112 | 0.862385 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
13cde924f804823c64751daca990f64c3697601e | 67 | py | Python | python_version/lessons/__init__.py | bojanbg/orbital-academy | e9c262dfb1681cd877855723a94dae58a57e34c5 | [
"MIT"
] | 1 | 2019-09-14T13:29:54.000Z | 2019-09-14T13:29:54.000Z | python_version/lessons/__init__.py | bojanbg/orbital-academy | e9c262dfb1681cd877855723a94dae58a57e34c5 | [
"MIT"
] | null | null | null | python_version/lessons/__init__.py | bojanbg/orbital-academy | e9c262dfb1681cd877855723a94dae58a57e34c5 | [
"MIT"
] | null | null | null | from lesson import *
from lesson_2 import *
from lesson_5 import *
| 16.75 | 22 | 0.776119 | 11 | 67 | 4.545455 | 0.454545 | 0.6 | 0.64 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.036364 | 0.179104 | 67 | 3 | 23 | 22.333333 | 0.872727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
13e6ced01722c11a8f9623ce3340360c143a43a0 | 33 | py | Python | rmtplot/__init__.py | schifzt/rmtplot-package | ad75d5ac8a3666e710ee46dfd06d3567c60c86e4 | [
"MIT"
] | null | null | null | rmtplot/__init__.py | schifzt/rmtplot-package | ad75d5ac8a3666e710ee46dfd06d3567c60c86e4 | [
"MIT"
] | null | null | null | rmtplot/__init__.py | schifzt/rmtplot-package | ad75d5ac8a3666e710ee46dfd06d3567c60c86e4 | [
"MIT"
] | null | null | null | from rmtplot.core import RMTplot
| 16.5 | 32 | 0.848485 | 5 | 33 | 5.6 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121212 | 33 | 1 | 33 | 33 | 0.965517 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
13f2048c465d7bf78d346f1aab6effeebdc1e2c6 | 49 | py | Python | backintime/timeframes_candle/__init__.py | akim-mukhtarov/backtesting | 2d0491b919885eeddd62c4079c9c7292381cb4f9 | [
"MIT"
] | null | null | null | backintime/timeframes_candle/__init__.py | akim-mukhtarov/backtesting | 2d0491b919885eeddd62c4079c9c7292381cb4f9 | [
"MIT"
] | null | null | null | backintime/timeframes_candle/__init__.py | akim-mukhtarov/backtesting | 2d0491b919885eeddd62c4079c9c7292381cb4f9 | [
"MIT"
] | null | null | null | from .timeframes_candle import TimeframesCandle
| 24.5 | 48 | 0.877551 | 5 | 49 | 8.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102041 | 49 | 1 | 49 | 49 | 0.954545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b91d655a943929cca05ea72fa7a51a350c064ef9 | 101 | py | Python | src/big_torch/train/__init__.py | Denchidlo/big-torch | f5a65e6216e46e6d4fe98670c52618e4cccc8163 | [
"MIT"
] | null | null | null | src/big_torch/train/__init__.py | Denchidlo/big-torch | f5a65e6216e46e6d4fe98670c52618e4cccc8163 | [
"MIT"
] | 1 | 2021-11-21T13:11:31.000Z | 2021-11-22T00:18:29.000Z | src/big_torch/train/__init__.py | Denchidlo/big-torch | f5a65e6216e46e6d4fe98670c52618e4cccc8163 | [
"MIT"
] | null | null | null | from . import fabric
from . import frame_generators
from . import optimizers
from . import callbacks
| 20.2 | 30 | 0.80198 | 13 | 101 | 6.153846 | 0.538462 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.158416 | 101 | 4 | 31 | 25.25 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b94092451fbed8406989b4697eb5567b18a44218 | 16,074 | py | Python | components/efuse/test_efuse_host/efuse_tests.py | 123swk123/esp-idf | a117c94a27de3c4a49bf4b6bbc19b8eab7c9f972 | [
"Apache-2.0"
] | 12 | 2021-04-15T14:15:27.000Z | 2022-01-17T03:40:35.000Z | components/efuse/test_efuse_host/efuse_tests.py | 123swk123/esp-idf | a117c94a27de3c4a49bf4b6bbc19b8eab7c9f972 | [
"Apache-2.0"
] | 5 | 2020-04-30T03:47:19.000Z | 2021-03-31T02:10:11.000Z | components/efuse/test_efuse_host/efuse_tests.py | 123swk123/esp-idf | a117c94a27de3c4a49bf4b6bbc19b8eab7c9f972 | [
"Apache-2.0"
] | 13 | 2019-12-31T21:22:09.000Z | 2022-03-07T15:55:27.000Z | #!/usr/bin/env python
from __future__ import print_function, division
import unittest
import sys
try:
import efuse_table_gen
except ImportError:
sys.path.append("..")
import efuse_table_gen
'''
To run the tests on a local PC:
cd ~/esp/esp-idf/components/efuse/test_efuse_host/
./efuse_tests.py
'''
class Py23TestCase(unittest.TestCase):
def __init__(self, *args, **kwargs):
super(Py23TestCase, self).__init__(*args, **kwargs)
try:
self.assertRaisesRegex
except AttributeError:
            # assertRaisesRegexp is deprecated in Python 3, but assertRaisesRegex
            # does not exist in Python 2; aliasing it here avoids a dependency on six
self.assertRaisesRegex = self.assertRaisesRegexp
class CSVParserTests(Py23TestCase):
def test_general(self):
csv = """
# field_name, efuse_block(EFUSE_BLK0..EFUSE_BLK3), bit_start(0..255), bit_count, comment
name1, EFUSE_BLK3, 0, 5, Use for test name 1
name2, EFUSE_BLK3, 5, 4, Use for test name 2
"""
t = efuse_table_gen.FuseTable.from_csv(csv)
t.verify()
self.assertEqual(t[0].field_name, 'name1')
self.assertEqual(t[0].efuse_block, 'EFUSE_BLK3')
self.assertEqual(t[0].bit_start, 0)
self.assertEqual(t[0].bit_count, 5)
self.assertEqual(t[0].comment, 'Use for test name 1')
self.assertEqual(t[1].field_name, 'name2')
self.assertEqual(t[1].efuse_block, 'EFUSE_BLK3')
self.assertEqual(t[1].bit_start, 5)
self.assertEqual(t[1].bit_count, 4)
self.assertEqual(t[1].comment, 'Use for test name 2')
def test_seq_bit_start1_fill(self):
csv = """
# field_name, efuse_block(EFUSE_BLK0..EFUSE_BLK3), bit_start(0..255), bit_count, comment
name1, EFUSE_BLK3, , 5,
name2, EFUSE_BLK3, , 4,
"""
t = efuse_table_gen.FuseTable.from_csv(csv)
t.verify()
self.assertEqual(t[0].field_name, 'name1')
self.assertEqual(t[0].bit_start, 0)
self.assertEqual(t[0].bit_count, 5)
self.assertEqual(t[1].field_name, 'name2')
self.assertEqual(t[1].bit_start, 5)
self.assertEqual(t[1].bit_count, 4)
def test_seq_bit_start2_fill(self):
csv = """
# field_name, efuse_block(EFUSE_BLK0..EFUSE_BLK3), bit_start(0..255), bit_count, comment
name1, EFUSE_BLK3, , 5,
name2, EFUSE_BLK2, , 4,
"""
t = efuse_table_gen.FuseTable.from_csv(csv)
t.verify()
self.assertEqual(t[0].field_name, 'name1')
self.assertEqual(t[0].bit_start, 0)
self.assertEqual(t[0].bit_count, 5)
self.assertEqual(t[1].field_name, 'name2')
self.assertEqual(t[1].bit_start, 0)
self.assertEqual(t[1].bit_count, 4)
def test_seq_bit_start3_fill(self):
csv = """
# field_name, efuse_block(EFUSE_BLK0..EFUSE_BLK3), bit_start(0..255), bit_count, comment
name1, EFUSE_BLK3, , 5,
name2, EFUSE_BLK2, , 4,
name3, EFUSE_BLK2, 5, 4,
"""
t = efuse_table_gen.FuseTable.from_csv(csv)
t.verify()
self.assertEqual(t[0].field_name, 'name1')
self.assertEqual(t[0].bit_start, 0)
self.assertEqual(t[0].bit_count, 5)
self.assertEqual(t[1].field_name, 'name2')
self.assertEqual(t[1].bit_start, 0)
self.assertEqual(t[1].bit_count, 4)
self.assertEqual(t[2].field_name, 'name3')
self.assertEqual(t[2].bit_start, 5)
self.assertEqual(t[2].bit_count, 4)
def test_seq_bit_start4_fill(self):
csv = """
# field_name, efuse_block(EFUSE_BLK0..EFUSE_BLK3), bit_start(0..255), bit_count, comment
name1, EFUSE_BLK3, , 5,
name2, EFUSE_BLK2, , 4,
, EFUSE_BLK2, , 4,
name1, EFUSE_BLK3, , 5,
"""
with self.assertRaisesRegex(efuse_table_gen.InputError, "Field names must be unique"):
efuse_table_gen.FuseTable.from_csv(csv)
def test_seq_bit_start5_fill(self):
csv = """
# field_name, efuse_block(EFUSE_BLK0..EFUSE_BLK3), bit_start(0..255), bit_count, comment
name1, EFUSE_BLK3, , 5,
name2, EFUSE_BLK2, , 4,
, EFUSE_BLK2, , 4,
name3, EFUSE_BLK3, 5, 5,
"""
t = efuse_table_gen.FuseTable.from_csv(csv)
t.verify()
self.assertEqual(t[0].field_name, 'name1')
self.assertEqual(t[0].bit_start, 0)
self.assertEqual(t[0].bit_count, 5)
self.assertEqual(t[1].field_name, 'name2')
self.assertEqual(t[1].bit_start, 0)
self.assertEqual(t[1].bit_count, 4)
self.assertEqual(t[2].field_name, 'name2')
self.assertEqual(t[2].bit_start, 4)
self.assertEqual(t[2].bit_count, 4)
self.assertEqual(t[3].field_name, 'name3')
self.assertEqual(t[3].bit_start, 5)
self.assertEqual(t[3].bit_count, 5)
def test_overlapping_bit_start_fail(self):
csv = """
# field_name, efuse_block(EFUSE_BLK0..EFUSE_BLK3), bit_start(0..255), bit_count, comment
name1, EFUSE_BLK3, 1, 5, Use for test name 1
name2, EFUSE_BLK3, 5, 4, Use for test name 2
"""
t = efuse_table_gen.FuseTable.from_csv(csv)
with self.assertRaisesRegex(efuse_table_gen.InputError, "overlap"):
t.verify()
def test_empty_field_name_fail(self):
csv = """
# field_name, efuse_block(EFUSE_BLK0..EFUSE_BLK3), bit_start(0..255), bit_count, comment
, EFUSE_BLK3, , 5,
name2, EFUSE_BLK2, , 4,
"""
with self.assertRaisesRegex(efuse_table_gen.InputError, "missing field name"):
efuse_table_gen.FuseTable.from_csv(csv)
def test_unique_field_name_fail(self):
csv = """
# field_name, efuse_block(EFUSE_BLK0..EFUSE_BLK3), bit_start(0..255), bit_count, comment
name1, EFUSE_BLK3, 0, 5, Use for test name 1
name1, EFUSE_BLK3, 5, 4, Use for test name 2
"""
with self.assertRaisesRegex(efuse_table_gen.InputError, "Field names must be unique"):
efuse_table_gen.FuseTable.from_csv(csv)
def test_bit_count_empty_fail(self):
csv = """
# field_name, efuse_block(EFUSE_BLK0..EFUSE_BLK3), bit_start(0..255), bit_count, comment
name1, EFUSE_BLK3, 0, , Use for test name 1
name2, EFUSE_BLK3, 5, 4, Use for test name 2
"""
with self.assertRaisesRegex(efuse_table_gen.InputError, "empty"):
efuse_table_gen.FuseTable.from_csv(csv)
def test_bit_start_num_fail(self):
csv = """
# field_name, efuse_block(EFUSE_BLK0..EFUSE_BLK3), bit_start(0..255), bit_count, comment
name1, EFUSE_BLK3, k, 5, Use for test name 1
name2, EFUSE_BLK3, 5, 4, Use for test name 2
"""
with self.assertRaisesRegex(efuse_table_gen.InputError, "Invalid field value"):
efuse_table_gen.FuseTable.from_csv(csv)
def test_join_entry(self):
csv = """
# field_name, efuse_block(EFUSE_BLK0..EFUSE_BLK3), bit_start(0..255), bit_count, comment
name1, EFUSE_BLK2, 0, 6, Use for test name 1
name2, EFUSE_BLK2, 6, 5, Use for test name 2
name3, EFUSE_BLK3, 20, 5, Use for test name 3
, EFUSE_BLK3, 30, 5, Use for test name 3
name4, EFUSE_BLK2, 30, 5, Use for test name 4
"""
t = efuse_table_gen.FuseTable.from_csv(csv)
t.verify()
self.assertEqual(t[0].field_name, 'name1')
self.assertEqual(t[0].efuse_block, 'EFUSE_BLK2')
self.assertEqual(t[0].bit_start, 0)
self.assertEqual(t[0].bit_count, 6)
self.assertEqual(t[1].field_name, 'name2')
self.assertEqual(t[1].efuse_block, 'EFUSE_BLK2')
self.assertEqual(t[1].bit_start, 6)
self.assertEqual(t[1].bit_count, 5)
self.assertEqual(t[2].field_name, 'name3')
self.assertEqual(t[2].efuse_block, 'EFUSE_BLK3')
self.assertEqual(t[2].bit_start, 20)
self.assertEqual(t[2].bit_count, 5)
self.assertEqual(t[3].field_name, 'name3')
self.assertEqual(t[3].efuse_block, 'EFUSE_BLK3')
self.assertEqual(t[3].bit_start, 30)
self.assertEqual(t[3].bit_count, 5)
self.assertEqual(t[4].field_name, 'name4')
self.assertEqual(t[4].efuse_block, 'EFUSE_BLK2')
self.assertEqual(t[4].bit_start, 30)
self.assertEqual(t[4].bit_count, 5)
def test_block_fail(self):
csv = """
# field_name, efuse_block(EFUSE_BLK0..EFUSE_BLK3), bit_start(0..255), bit_count, comment
name1, EFUSE_BLK5, 0, 5, Use for test name 1
name2, EFUSE_BLK3, 5, 4, Use for test name 2
"""
with self.assertRaisesRegex(efuse_table_gen.InputError, "'efuse_block' should consist from EFUSE_BLK0..EFUSE_BLK3"):
efuse_table_gen.FuseTable.from_csv(csv)
def test_field_size_is_ok(self):
csv = """
# field_name, efuse_block(EFUSE_BLK0..EFUSE_BLK3), bit_start(0..255), bit_count, comment
name1, EFUSE_BLK0, 0, 224, Use for test name 1
name2, EFUSE_BLK1, 0, 256, Use for test name 2
"""
efuse_table_gen.max_blk_len = 256
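        # NOTE: this mutates module-level state in efuse_table_gen, so the
        # value persists into later tests unless they set it themselves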
t = efuse_table_gen.FuseTable.from_csv(csv)
t.verify()
def test_field_blk3_size_is_more(self):
csv = """
# field_name, efuse_block(EFUSE_BLK0..EFUSE_BLK3), bit_start(0..255), bit_count, comment
name1, EFUSE_BLK3, 190, 1, Use for test name 1
name2, EFUSE_BLK3, 191, 5, Use for test name 2
"""
efuse_table_gen.max_blk_len = 192
t = efuse_table_gen.FuseTable.from_csv(csv)
with self.assertRaisesRegex(efuse_table_gen.InputError, "The field is outside the boundaries"):
t.verify()
def test_field_blk1_size_is_more(self):
csv = """
# field_name, efuse_block(EFUSE_BLK0..EFUSE_BLK3), bit_start(0..255), bit_count, comment
name1, EFUSE_BLK0, 0, 224, Use for test name 1
name2, EFUSE_BLK1, 1, 256, Use for test name 2
"""
t = efuse_table_gen.FuseTable.from_csv(csv)
with self.assertRaisesRegex(efuse_table_gen.InputError, "The field is outside the boundaries"):
t.verify()
class VerificationTests(Py23TestCase):
def test_general(self):
csv = """
# field_name, efuse_block(EFUSE_BLK0..EFUSE_BLK3), bit_start(0..255), bit_count, comment
name1, EFUSE_BLK3, 0, 5, Use for test name 1
name2, EFUSE_BLK3, 5, 4, Use for test name 2
name1_1, EFUSE_BLK2, 0, 5, Use for test name 1_1
name2_1, EFUSE_BLK2, 5, 4, Use for test name 2_1
"""
t = efuse_table_gen.FuseTable.from_csv(csv)
t.verify()
self.assertEqual(t[0].field_name, 'name1')
self.assertEqual(t[0].efuse_block, 'EFUSE_BLK3')
self.assertEqual(t[0].bit_start, 0)
self.assertEqual(t[0].bit_count, 5)
self.assertEqual(t[1].field_name, 'name2')
self.assertEqual(t[1].efuse_block, 'EFUSE_BLK3')
self.assertEqual(t[1].bit_start, 5)
self.assertEqual(t[1].bit_count, 4)
self.assertEqual(t[2].field_name, 'name1_1')
self.assertEqual(t[2].efuse_block, 'EFUSE_BLK2')
self.assertEqual(t[2].bit_start, 0)
self.assertEqual(t[2].bit_count, 5)
self.assertEqual(t[3].field_name, 'name2_1')
self.assertEqual(t[3].efuse_block, 'EFUSE_BLK2')
self.assertEqual(t[3].bit_start, 5)
self.assertEqual(t[3].bit_count, 4)
def test_custom_use_only_BLK3(self):
csv = """
# field_name, efuse_block(EFUSE_BLK0..EFUSE_BLK3), bit_start(0..255), bit_count, comment
name1, EFUSE_BLK3, 0, 5, Use for test name 1
name2, EFUSE_BLK2, 5, 4, Use for test name 2
"""
t = efuse_table_gen.FuseTable.from_csv(csv)
with self.assertRaisesRegex(efuse_table_gen.ValidationError, "custom_table should use only EFUSE_BLK3"):
t.verify("custom_table")
def test_common_and_custom_table_use_the_same_bits(self):
csv_common = """
# field_name, efuse_block(EFUSE_BLK0..EFUSE_BLK3), bit_start(0..255), bit_count, comment
name1, EFUSE_BLK3, 0, 5, Use for test name 1
name2, EFUSE_BLK2, 5, 4, Use for test name 2
"""
common_table = efuse_table_gen.FuseTable.from_csv(csv_common)
common_table.verify("common_table")
two_tables = common_table
csv_custom = """
# field_name, efuse_block(EFUSE_BLK0..EFUSE_BLK3), bit_start(0..255), bit_count, comment
name3, EFUSE_BLK3, 20, 5, Use for test name 1
name4, EFUSE_BLK3, 4, 1, Use for test name 2
"""
custom_table = efuse_table_gen.FuseTable.from_csv(csv_custom)
custom_table.verify("custom_table")
two_tables += custom_table
with self.assertRaisesRegex(efuse_table_gen.InputError, "overlaps"):
two_tables.verify()
if __name__ == "__main__":
unittest.main()
| 47 | 124 | 0.513438 | 1,874 | 16,074 | 4.15048 | 0.080043 | 0.152353 | 0.16251 | 0.062998 | 0.834276 | 0.816148 | 0.789663 | 0.732193 | 0.715222 | 0.688866 | 0 | 0.054992 | 0.386836 | 16,074 | 341 | 125 | 47.13783 | 0.734172 | 0.011385 | 0 | 0.649819 | 0 | 0 | 0.509852 | 0.047013 | 0 | 0 | 0 | 0 | 0.33213 | 1 | 0.072202 | false | 0 | 0.021661 | 0 | 0.104693 | 0.00361 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b9472508d4a7d71d06e12e198a2c8284021dd5f7 | 154 | py | Python | 32-inheritance/Chef.py | davwheat-bhasvic/btec-summer-work | fbf7fed6cb852fe72cbc55bb571aafbf7d34e13c | [
"MIT"
] | null | null | null | 32-inheritance/Chef.py | davwheat-bhasvic/btec-summer-work | fbf7fed6cb852fe72cbc55bb571aafbf7d34e13c | [
"MIT"
] | null | null | null | 32-inheritance/Chef.py | davwheat-bhasvic/btec-summer-work | fbf7fed6cb852fe72cbc55bb571aafbf7d34e13c | [
"MIT"
] | null | null | null | class Chef:
def make_chicken(self):
print("The chef makes chicken")
def make_special(self):
print("The chef makes a cottage pie") | 25.666667 | 45 | 0.642857 | 22 | 154 | 4.409091 | 0.590909 | 0.14433 | 0.247423 | 0.329897 | 0.43299 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25974 | 154 | 6 | 45 | 25.666667 | 0.850877 | 0 | 0 | 0 | 0 | 0 | 0.322581 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0 | 0 | 0.6 | 0.4 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
b950b4f97f3b15db7782b54c4df8a84bb2ec8fba | 294 | py | Python | agronet_be/AgronetApp/serializers/__init__.py | lauraC4MP0/Prueba-github | 291fc266fc0a8efc80ab36dd6eb4bff3e98e7c1f | [
"MIT"
] | 1 | 2021-10-06T00:39:08.000Z | 2021-10-06T00:39:08.000Z | agronet_be/AgronetApp/serializers/__init__.py | lauraC4MP0/Prueba-github | 291fc266fc0a8efc80ab36dd6eb4bff3e98e7c1f | [
"MIT"
] | null | null | null | agronet_be/AgronetApp/serializers/__init__.py | lauraC4MP0/Prueba-github | 291fc266fc0a8efc80ab36dd6eb4bff3e98e7c1f | [
"MIT"
] | 1 | 2021-10-03T13:39:31.000Z | 2021-10-03T13:39:31.000Z | from .citySerializer import CitySerializer
from .departamentSerializer import DepartamentSerializer
from .orderDetailSerializer import OrderDetailSerializer
from .orderSerializer import orderSerializer
from .productSerializer import ProductSerializer
from .userSerializer import UserSerializer
| 42 | 56 | 0.897959 | 24 | 294 | 11 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081633 | 294 | 6 | 57 | 49 | 0.977778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b9b97f103c772ef8fd9d8893f33b5d6829cac358 | 41 | py | Python | fan_tools/django/contrib/postgres/fields/__init__.py | micro-fan/fan_tools | 6e146ac4bf6fbe5119a03eb931498c45776a8928 | [
"MIT"
] | 1 | 2021-12-29T19:27:34.000Z | 2021-12-29T19:27:34.000Z | fan_tools/django/contrib/postgres/fields/__init__.py | micro-fan/fan_tools | 6e146ac4bf6fbe5119a03eb931498c45776a8928 | [
"MIT"
] | 1 | 2021-10-30T18:47:05.000Z | 2021-10-30T18:47:05.000Z | fan_tools/django/contrib/postgres/fields/__init__.py | micro-fan/fan_tools | 6e146ac4bf6fbe5119a03eb931498c45776a8928 | [
"MIT"
] | 8 | 2016-10-18T09:22:52.000Z | 2020-02-05T15:10:07.000Z | from .ltree import * # noqa: F401, F403
| 20.5 | 40 | 0.658537 | 6 | 41 | 4.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1875 | 0.219512 | 41 | 1 | 41 | 41 | 0.65625 | 0.390244 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b9c9059d2b85055a43e81e610a003cdf0c85477f | 65,749 | py | Python | test/integration/component/test_portable_ip.py | schubergphilis/cloudstack | c4a69c27b127d503ae91a64aab45d7f954d3ca89 | [
"Apache-2.0"
] | 2 | 2015-02-10T07:21:58.000Z | 2021-05-07T08:52:17.000Z | test/integration/component/test_portable_ip.py | schubergphilis/cloudstack | c4a69c27b127d503ae91a64aab45d7f954d3ca89 | [
"Apache-2.0"
] | 2 | 2015-06-11T02:17:06.000Z | 2015-06-22T20:46:42.000Z | test/integration/component/test_portable_ip.py | schubergphilis/cloudstack | c4a69c27b127d503ae91a64aab45d7f954d3ca89 | [
"Apache-2.0"
] | 4 | 2015-05-25T15:53:52.000Z | 2018-05-23T14:08:07.000Z | # Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
""" Tests for Portable public IP Ranges feature
Test Plan: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Portable+IP+Test+Execution
Feature Specifications: https://cwiki.apache.org/confluence/display/CLOUDSTACK/portable+public+IP
"""
from marvin.cloudstackTestCase import cloudstackTestCase
from marvin.lib.utils import cleanup_resources
from marvin.lib.base import (VirtualMachine,
PublicIPAddress,
Network,
NetworkOffering,
ServiceOffering,
NATRule,
Account,
PortablePublicIpRange,
StaticNATRule,
FireWallRule)
from marvin.lib.common import (get_zone,
get_template,
get_domain,
get_region,
get_pod,
isIpInDesiredState,
getPortableIpRangeServices)
from netaddr import IPAddress
from marvin.sshClient import SshClient
from marvin.codes import FAILED
from nose.plugins.attrib import attr
class Services:
"""Test Multiple IP Ranges
"""
def __init__(self):
self.services = {
"account": {
"email": "test@test.com",
"firstname": "Test",
"lastname": "User",
"username": "test",
# Random characters are appended for unique
# username
"password": "password",
},
"service_offering": {
"name": "Tiny Instance",
"displaytext": "Tiny Instance",
"cpunumber": 1,
"cpuspeed": 200, # in MHz
"memory": 256, # In MBs
},
"network_offering": {
"name": 'Network offering portable ip',
"displaytext": 'Network offering-VR services',
"guestiptype": 'Isolated',
"supportedservices": 'Dhcp,Dns,SourceNat,PortForwarding,Vpn,Firewall,Lb,UserData,StaticNat',
"traffictype": 'GUEST',
"availability": 'Optional',
"serviceProviderList": {
"Dhcp": 'VirtualRouter',
"Dns": 'VirtualRouter',
"SourceNat": 'VirtualRouter',
"PortForwarding": 'VirtualRouter',
"Vpn": 'VirtualRouter',
"Firewall": 'VirtualRouter',
"Lb": 'VirtualRouter',
"UserData": 'VirtualRouter',
"StaticNat": 'VirtualRouter',
},
},
"network": {
"name": "Test Network - Portable IP",
"displaytext": "Test Network - Portable IP",
},
"network1": {
"name": "Test Network 1 - Portable IP",
"displaytext": "Test Network 1 - Portable IP",
},
"network2": {
"name": "Test Network 2 - Portable IP",
"displaytext": "Test Network 2 - Portable IP",
},
"disk_offering": {
"displaytext": "Small Disk",
"name": "Small Disk",
"disksize": 1
},
"natrule": {
"privateport": 22,
"publicport": 22,
"protocol": "TCP",
"cidr" : '0.0.0.0/0',
},
"small":
# Create a small virtual machine instance with disk offering
{
"displayname": "testserver",
"username": "root", # VM creds for SSH
"password": "password",
"ssh_port": 22,
"hypervisor": 'XenServer',
"privateport": 22,
"publicport": 22,
"protocol": 'TCP',
},
"vm1":
# Create a small virtual machine instance with disk offering
{
"displayname": "vm1",
"username": "root", # VM creds for SSH
"password": "password",
"ssh_port": 22,
"hypervisor": 'XenServer',
"privateport": 22,
"publicport": 22,
"protocol": 'TCP',
},
"vm2":
# Create a small virtual machine instance with disk offering
{
"displayname": "vm2",
"username": "root", # VM creds for SSH
"password": "password",
"ssh_port": 22,
"hypervisor": 'XenServer',
"privateport": 22,
"publicport": 22,
"protocol": 'TCP',
},
"ostype": 'CentOS 5.3 (64-bit)'
}
class TestCreatePortablePublicIpRanges(cloudstackTestCase):
"""Test Create Portable IP Ranges
"""
@classmethod
def setUpClass(cls):
cls.testClient = super(TestCreatePortablePublicIpRanges, cls).getClsTestClient()
cls.api_client = cls.testClient.getApiClient()
cls.services = Services().services
# Get Zone, Domain and templates
cls.region = get_region(cls.api_client)
cls.domain = get_domain(cls.api_client)
cls.zone = get_zone(cls.api_client, cls.testClient.getZoneForTests())
cls.pod = get_pod(cls.api_client, cls.zone.id)
cls.services['mode'] = cls.zone.networktype
cls.services["domainid"] = cls.domain.id
cls.services["zoneid"] = cls.zone.id
cls.services["regionid"] = cls.region.id
cls._cleanup = []
return
@classmethod
def tearDownClass(cls):
try:
#Cleanup resources used
cleanup_resources(cls.api_client, cls._cleanup)
except Exception as e:
raise Exception("Warning: Exception during cleanup : %s" % e)
return
def setUp(self):
self.apiclient = self.testClient.getApiClient()
self.dbclient = self.testClient.getDbConnection()
self.cleanup = []
return
def tearDown(self):
try:
#Clean up, terminate the resources created
cleanup_resources(self.apiclient, self.cleanup)
except Exception as e:
raise Exception("Warning: Exception during cleanup : %s" % e)
return
@attr(tags=["advanced", "selfservice"])
def test_create_portable_ip_range(self):
"""Test create new portable ip range
"""
# 1. Create new portable ip range with root admin api
# 2. Portable ip range should be created successfully
portable_ip_range_services = getPortableIpRangeServices(self.config)
if portable_ip_range_services is FAILED:
self.skipTest('Failed to read config values related to portable ip range')
portable_ip_range_services["regionid"] = self.region.id
try:
#create new portable ip range
new_portable_ip_range = PortablePublicIpRange.create(self.apiclient,
portable_ip_range_services)
self.cleanup.append(new_portable_ip_range)
except Exception as e:
self.fail("Failed to create portable IP range: %s" % e)
return
@attr(tags=["advanced", "selfservice"])
def test_create_portable_ip_range_non_root_admin(self):
"""Test create new portable ip range with non admin root account
"""
# 1. Create new portable ip range with non root admin api client
# 2. Portable ip range should not be created
portable_ip_range_services = getPortableIpRangeServices(self.config)
if portable_ip_range_services is FAILED:
self.skipTest('Failed to read config values related to portable ip range')
try:
self.account = Account.create(
self.apiclient,
self.services["account"],
domainid=self.domain.id
)
self.cleanup.append(self.account)
self.api_client_user = self.testClient.getUserApiClient(
UserName=self.account.name,
DomainName=self.account.domain
)
portable_ip_range_services["regionid"] = self.region.id
self.debug("Trying to create portable ip range with non root-admin api client, should raise exception")
with self.assertRaises(Exception):
portable_ip_range = PortablePublicIpRange.create(self.api_client_user,
portable_ip_range_services)
self.cleanup.append(portable_ip_range)
except Exception as e:
self.fail(e)
return
@attr(tags=["advanced", "selfservice"])
def test_create_portable_ip_range_invalid_region(self):
"""Test create portable ip range with invalid region id"""
# 1. Try to create new portable ip range with invalid region id
# 2. Portable ip range creation should fail
portable_ip_range_services = getPortableIpRangeServices(self.config)
if portable_ip_range_services is FAILED:
self.skipTest('Failed to read config values related to portable ip range')
portable_ip_range_services["regionid"] = -1
#create new portable ip range
self.debug("Trying to create portable ip range with wrong region id")
with self.assertRaises(Exception):
portable_ip_range = PortablePublicIpRange.create(self.apiclient,
portable_ip_range_services)
self.cleanup.append(portable_ip_range)
return
class TestDeletePortablePublicIpRanges(cloudstackTestCase):
"""Test delete Portable IP Ranges
"""
@classmethod
def setUpClass(cls):
cls.testClient = super(TestDeletePortablePublicIpRanges, cls).getClsTestClient()
cls.api_client = cls.testClient.getApiClient()
cls.services = Services().services
# Get Zone, Domain and templates
cls.region = get_region(cls.api_client)
cls.domain = get_domain(cls.api_client)
cls.zone = get_zone(cls.api_client, cls.testClient.getZoneForTests())
cls.pod = get_pod(cls.api_client, cls.zone.id)
cls.services['mode'] = cls.zone.networktype
cls.services["domainid"] = cls.domain.id
cls.services["zoneid"] = cls.zone.id
cls.services["regionid"] = cls.region.id
cls._cleanup = []
return
@classmethod
def tearDownClass(cls):
try:
#Cleanup resources used
cleanup_resources(cls.api_client, cls._cleanup)
except Exception as e:
raise Exception("Warning: Exception during cleanup : %s" % e)
return
def setUp(self):
self.apiclient = self.testClient.getApiClient()
self.dbclient = self.testClient.getDbConnection()
portable_ip_range_services = getPortableIpRangeServices(self.config)
if portable_ip_range_services is FAILED:
self.skipTest('Failed to read config values related to portable ip range')
portable_ip_range_services["regionid"] = self.region.id
#create new portable ip range
self.portable_ip_range = PortablePublicIpRange.create(self.apiclient,
portable_ip_range_services)
self.cleanup = []
return
def tearDown(self):
try:
#Clean up, terminate the resources created
cleanup_resources(self.apiclient, self.cleanup)
except Exception as e:
raise Exception("Warning: Exception during cleanup : %s" % e)
return
@attr(tags=["advanced", "selfservice"])
def test_delete_portable_ip_range(self):
"""Test delete ip range
"""
# 1. Try to delete the created range with root admin api client
# 2. Portable range should be deleted successfully
self.portable_ip_range.delete(self.apiclient)
return
@attr(tags=["advanced", "selfservice"])
def test_delete_portable_ip_range_non_root_admin(self):
"""Test delete ip range - non admin root
"""
# 1. Try to delete the created range with non root admin api client
# 2. Portable range deletion should fail
try:
self.account = Account.create(
self.apiclient,
self.services["account"],
domainid=self.domain.id
)
self.cleanup.append(self.account)
self.api_client_user = self.testClient.getUserApiClient(
UserName=self.account.name,
DomainName=self.account.domain
)
except Exception as e:
self.fail(e)
try:
with self.assertRaises(Exception):
self.portable_ip_range.delete(self.api_client_user)
except Exception as e:
self.fail(e)
finally:
self.portable_ip_range.delete(self.apiclient)
return
@attr(tags=["advanced", "selfservice"])
def test_delete_portable_ip_range_in_use(self):
"""Test delete ip range
"""
# 1. Associate a portable ip
# 2. Try to delete the portable ip range with root admin api client
# 3. Portable ip range should not be deleted unless currently used ip is disassociated
try:
self.account = Account.create(
self.apiclient,
self.services["account"],
domainid=self.domain.id
)
self.cleanup.append(self.account)
self.network_offering = NetworkOffering.create(
self.apiclient,
self.services["network_offering"],
conservemode=False
)
# Enable Network offering
self.network_offering.update(self.apiclient, state='Enabled')
self.network = Network.create(
self.apiclient,
self.services["network"],
accountid=self.account.name,
domainid=self.account.domainid,
networkofferingid=self.network_offering.id,
zoneid=self.zone.id
)
portableip = PublicIPAddress.create(
self.apiclient,
accountid=self.account.name,
zoneid=self.zone.id,
domainid=self.account.domainid,
networkid=self.network.id,
isportable=True
)
except Exception as e:
self.fail(e)
try:
with self.assertRaises(Exception):
self.debug("Trying to Delete portable ip range with root-admin api, this should fail")
self.portable_ip_range.delete(self.apiclient)
except Exception as e:
self.fail(e)
finally:
self.debug("Disassociating portable ip")
portableip.delete(self.apiclient)
self.debug("Deleting portable ip range")
self.portable_ip_range.delete(self.apiclient)
return
class TestListPortablePublicIpRanges(cloudstackTestCase):
"""Test List Portable IP Ranges
"""
@classmethod
def setUpClass(cls):
cls.testClient = super(TestListPortablePublicIpRanges, cls).getClsTestClient()
cls.api_client = cls.testClient.getApiClient()
cls.services = Services().services
# Get Zone, Domain and templates
cls.region = get_region(cls.api_client)
cls.domain = get_domain(cls.api_client)
cls.zone = get_zone(cls.api_client, cls.testClient.getZoneForTests())
cls.pod = get_pod(cls.api_client, cls.zone.id)
cls.services['mode'] = cls.zone.networktype
cls.services["domainid"] = cls.domain.id
cls.services["zoneid"] = cls.zone.id
cls.services["regionid"] = cls.region.id
cls._cleanup = []
return
@classmethod
def tearDownClass(cls):
try:
#Cleanup resources used
cleanup_resources(cls.api_client, cls._cleanup)
except Exception as e:
raise Exception("Warning: Exception during cleanup : %s" % e)
return
def setUp(self):
self.apiclient = self.testClient.getApiClient()
self.dbclient = self.testClient.getDbConnection()
#create new portable ip range
self.portable_ip_range_services = getPortableIpRangeServices(self.config)
if self.portable_ip_range_services is FAILED:
self.skipTest('Failed to read config values related to portable ip range')
self.portable_ip_range_services["regionid"] = self.region.id
self.debug("Creating new portable IP range with startip:%s and endip:%s" %
(str(self.portable_ip_range_services["startip"]),
str(self.portable_ip_range_services["endip"])))
#create new portable ip range
self.portable_ip_range = PortablePublicIpRange.create(self.apiclient,
self.portable_ip_range_services)
self.debug("Created new portable IP range with startip:%s and endip:%s and id:%s" %
(self.portable_ip_range.startip,
self.portable_ip_range.endip,
self.portable_ip_range.id))
self.cleanup = [self.portable_ip_range, ]
return
def tearDown(self):
try:
#Clean up, terminate the resources created
cleanup_resources(self.apiclient, self.cleanup)
except Exception as e:
raise Exception("Warning: Exception during cleanup : %s" % e)
return
@attr(tags=["advanced", "selfservice"])
def test_list_portable_ip_range(self):
"""Test list portable ip ranges
"""
# 1. Create new portable ip range
# 2. Try to list ip ranges with root admin api client
# 3. Portable ip ranges should list properly
list_portable_ip_range = PortablePublicIpRange.list(self.apiclient,
id=self.portable_ip_range.id)
self.assertEqual(
isinstance(list_portable_ip_range, list),
True,
"List portable IP ranges should not return an empty response"
)
portable_ip_range = list_portable_ip_range[0]
self.assertEqual(str(portable_ip_range.startip), str(self.portable_ip_range_services["startip"]),
"Listed startip not matching with the startip of created public ip range")
self.assertEqual(str(portable_ip_range.endip), str(self.portable_ip_range_services["endip"]),
"Listed endip not matching with the endip of created public ip range")
self.assertEqual(str(portable_ip_range.gateway), str(self.portable_ip_range_services["gateway"]),
"Listed gateway not matching with the gateway of created public ip range")
self.assertEqual(str(portable_ip_range.netmask), str(self.portable_ip_range_services["netmask"]),
"Listed netmask not matching with the netmask of created public ip range")
return
@attr(tags=["advanced","swamy", "selfservice"])
def test_list_portable_ip_range_non_root_admin(self):
"""Test list portable ip ranges with non admin root account
"""
# 1. Create new portable ip range
# 2. Try to list ip ranges with non root admin api client
# 3. Portable ip ranges listing should fail
self.account = Account.create(
self.apiclient,
self.services["account"],
domainid=self.domain.id
)
self.cleanup.append(self.account)
self.api_client_user = self.testClient.getUserApiClient(
UserName=self.account.name,
DomainName=self.account.domain
)
self.debug("Trying to list portable ip ranges with non root-admin api, should raise exception")
with self.assertRaises(Exception):
PortablePublicIpRange.list(self.api_client_user,
id=self.portable_ip_range.id)
return
class TestAssociatePublicIp(cloudstackTestCase):
"""Test associate Portable IP/ non portable public ip
"""
@classmethod
def setUpClass(cls):
cls.testClient = super(TestAssociatePublicIp, cls).getClsTestClient()
cls.api_client = cls.testClient.getApiClient()
cls.services = Services().services
# Get Zone, Domain and templates
cls.region = get_region(cls.api_client)
cls.domain = get_domain(cls.api_client)
cls.zone = get_zone(cls.api_client, cls.testClient.getZoneForTests())
cls.pod = get_pod(cls.api_client, cls.zone.id)
cls.services['mode'] = cls.zone.networktype
cls.services["domainid"] = cls.domain.id
cls.services["zoneid"] = cls.zone.id
cls.services["regionid"] = cls.region.id
template = get_template(
cls.api_client,
cls.zone.id,
cls.services["ostype"]
)
# Set Zones and disk offerings
cls.services["small"]["zoneid"] = cls.zone.id
cls.services["small"]["template"] = template.id
cls.account = Account.create(
cls.api_client,
cls.services["account"],
domainid=cls.domain.id,
admin=True
)
cls._cleanup = [cls.account, ]
cls.network_offering = NetworkOffering.create(
cls.api_client,
cls.services["network_offering"],
conservemode=False
)
# Enable Network offering
cls.network_offering.update(cls.api_client, state='Enabled')
cls.network = Network.create(
cls.api_client,
cls.services["network"],
accountid=cls.account.name,
domainid=cls.account.domainid,
networkofferingid=cls.network_offering.id,
zoneid=cls.zone.id
)
return
@classmethod
def tearDownClass(cls):
try:
# Disable Network offering
cls.network_offering.update(cls.api_client, state='Disabled')
#Cleanup resources used
cleanup_resources(cls.api_client, cls._cleanup)
except Exception as e:
raise Exception("Warning: Exception during cleanup : %s" % e)
return
def setUp(self):
self.apiclient = self.testClient.getApiClient()
self.dbclient = self.testClient.getDbConnection()
self.cleanup = []
portable_ip_range_services = getPortableIpRangeServices(self.config)
if portable_ip_range_services is FAILED:
self.skipTest('Failed to read config values related to portable ip range')
portable_ip_range_services["regionid"] = self.region.id
#create new portable ip range
self.portable_ip_range = PortablePublicIpRange.create(self.apiclient,
portable_ip_range_services)
self.cleanup.append(self.portable_ip_range)
return
def tearDown(self):
try:
#Clean up, terminate the resources created
self.network_offering.update(self.apiclient, state='Disabled')
cleanup_resources(self.apiclient, self.cleanup)
except Exception as e:
raise Exception("Warning: Exception during cleanup : %s" % e)
return
@attr(tags=["advanced", "selfservice"])
def test_associate_ip_address(self):
""" Test assocoate public ip address
"""
# 1. Create new portable ip range
# 2. Create a network and associate public ip without specifying isportable
# 3. Create a network and associate public ip with isportable=False
# 4. Create a network and associate public ip with isPortable=True
# 5. All three public ip associations should succeed
self.debug("Associating default public ip address with network: %s" % self.network.id)
publicipaddress = PublicIPAddress.create(
self.apiclient,
accountid=self.account.name,
zoneid=self.zone.id,
domainid=self.account.domainid,
networkid=self.network.id
)
self.debug("Associated default public ip address: %s" % publicipaddress.ipaddress.ipaddress)
self.debug("Associating public ip address with network: %s with isportable=False" % self.network.id)
publicipaddressnotportable = PublicIPAddress.create(
self.apiclient,
accountid=self.account.name,
zoneid=self.zone.id,
domainid=self.account.domainid,
networkid=self.network.id,
isportable=False
)
self.debug("Associated public ip address (not portable): %s" % publicipaddressnotportable.ipaddress.ipaddress)
publicipaddressnotportable.delete(self.apiclient)
self.debug("Associating public ip address with network: %s with isportable=True" % self.network.id)
publicipaddressportable = PublicIPAddress.create(
self.apiclient,
accountid=self.account.name,
zoneid=self.zone.id,
domainid=self.account.domainid,
networkid=self.network.id,
isportable=True
)
self.debug("Associated public ip address (portable): %s" % publicipaddressportable.ipaddress.ipaddress)
publicipaddressportable.delete(self.apiclient)
return
@attr(tags=["advanced", "selfservice"])
def test_associate_ip_address_invalid_zone(self):
""" Test Associate IP with invalid zone id
"""
# 1. Create new portable ip range
# 2. Try to associate a portable ip with an invalid zone id
# 3. IP association should fail
self.debug("Trying to associate portable public ip with invalid zone id, this should fail")
with self.assertRaises(Exception):
publicipaddress = PublicIPAddress.create(
self.apiclient,
accountid=self.account.name,
zoneid = -1,
domainid=self.account.domainid,
regionid = self.region.id,
isportable=True
)
publicipaddress.delete(self.apiclient)
return
@attr(tags=["advanced", "provisioning"])
def test_associate_ip_address_services_enable_disable(self):
""" Test enabling and disabling NAT, Firewall services on portable ip
"""
# 1. Create new portable ip range
# 2. Associate a portable ip
# 3. Enable NAT and Firewall rules on this portable ip
# 4. Disable NAT and Firewall rules created
# 5. Enabling and disabling of the rules should be successful
self.service_offering = ServiceOffering.create(
self.apiclient,
self.services["service_offering"]
)
self.cleanup.append(self.service_offering)
try:
self.debug("DeployingVirtual Machine")
self.virtual_machine = VirtualMachine.create(
self.apiclient,
self.services["small"],
accountid=self.account.name,
domainid=self.account.domainid,
serviceofferingid=self.service_offering.id,
networkids = [self.network.id],
mode=self.services['mode']
)
self.debug("Created virtual machine instance: %s with ssh_ip: %s" %
(self.virtual_machine.id, self.virtual_machine.ssh_ip))
except Exception as e:
self.fail("Exception while deploying vm : %s" % e)
portableip = PublicIPAddress.create(
self.apiclient,
accountid=self.account.name,
zoneid=self.zone.id,
domainid=self.account.domainid,
networkid=self.network.id,
isportable=True
)
self.debug("created public ip address (portable): %s" % portableip.ipaddress.ipaddress)
response = isIpInDesiredState(self.apiclient, portableip.ipaddress.id, state="allocated")
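# isIpInDesiredState returns an (exception_occurred, in_desired_state, message) tuple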
exceptionOccured = response[0]
ipInDesiredState = response[1]
exceptionMessage = response[2]
if (exceptionOccured or (not ipInDesiredState)):
portableip.delete(self.apiclient)
self.fail(exceptionMessage)
try:
# Open up firewall port for SSH
self.debug("Opening firewall on the portable public ip")
fw_rule = FireWallRule.create(
self.apiclient,
ipaddressid=portableip.ipaddress.id,
protocol=self.services["natrule"]["protocol"],
cidrlist=[self.services["natrule"]["cidr"]],
startport=self.services["natrule"]["publicport"],
endport=self.services["natrule"]["publicport"]
)
#Create NAT rule
self.debug("Creating NAT rule on the portable public ip")
nat_rule = NATRule.create(
self.apiclient,
self.virtual_machine,
self.services["natrule"],
portableip.ipaddress.id
)
except Exception as e:
portableip.delete(self.apiclient)
self.fail("Error: %s" % e)
try:
self.debug("Trying to SSH to ip: %s" % portableip.ipaddress.ipaddress)
SshClient(portableip.ipaddress.ipaddress,
self.services['natrule']["publicport"],
self.virtual_machine.username,
self.virtual_machine.password
)
except Exception as e:
self.fail("Exception while SSHing : %s" % e)
finally:
self.debug("Deleting firewall rule")
fw_rule.delete(self.apiclient)
self.debug("Deleting NAT rule")
nat_rule.delete(self.apiclient)
self.debug("disassocoating portable ip: %s" % portableip.ipaddress.ipaddress)
portableip.delete(self.apiclient)
return
@attr(tags=["advanced", "selfservice"])
def test_associate_ip_address_no_free_ip(self):
""" Test assocoate public ip address
"""
# 1. Create new portable ip range
# 2. Create a network and associate all available portable public ips
# 3. Try to associate one more portable ip, it should fail
associatedipaddresses = []
startip_int = int(IPAddress(self.portable_ip_range.startip))
endip_int = int(IPAddress(self.portable_ip_range.endip))
totalportableips = ((endip_int - startip_int) + 1)
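# e.g. a hypothetical configured range 10.1.1.2 - 10.1.1.5 yields (5 - 2) + 1 = 4 portable ips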
self.debug(totalportableips)
for x in range(0, totalportableips):
self.debug("Associating public ip address with network: %s with isportable=True" % self.network.id)
portableip = PublicIPAddress.create(
self.apiclient,
accountid=self.account.name,
zoneid=self.zone.id,
domainid=self.account.domainid,
networkid=self.network.id,
isportable=True
)
associatedipaddresses.append(portableip)
self.debug("Associated public ip address (portable): %s" % portableip.ipaddress.ipaddress)
self.debug("Trying to associate portable public ip when no free ips available, this should fail")
with self.assertRaises(Exception):
portableipaddress = PublicIPAddress.create(
self.apiclient,
accountid=self.account.name,
zoneid=self.zone.id,
domainid=self.account.domainid,
networkid=self.network.id,
isportable=True
)
portableipaddress.delete(self.apiclient)
self.debug("Associating portable ip address failed")
self.debug("Disassociating previously associated ip addresses")
for x in range(0, totalportableips):
associatedipaddresses[x].delete(self.apiclient)
return
class TestDisassociatePublicIp(cloudstackTestCase):
"""Test Disassociate Portable IP/ non portable IP
"""
@classmethod
def setUpClass(cls):
cls.testClient = super(TestDisassociatePublicIp, cls).getClsTestClient()
cls.api_client = cls.testClient.getApiClient()
cls.services = Services().services
# Get Zone, Domain and templates
cls.region = get_region(cls.api_client)
cls.domain = get_domain(cls.api_client)
cls.zone = get_zone(cls.api_client, cls.testClient.getZoneForTests())
cls.pod = get_pod(cls.api_client, cls.zone.id)
cls.services['mode'] = cls.zone.networktype
cls.services["domainid"] = cls.domain.id
cls.services["zoneid"] = cls.zone.id
cls.services["regionid"] = cls.region.id
template = get_template(
cls.api_client,
cls.zone.id,
cls.services["ostype"]
)
# Set Zones and disk offerings
cls.services["small"]["zoneid"] = cls.zone.id
cls.services["small"]["template"] = template.id
cls._cleanup = []
cls.account = Account.create(
cls.api_client,
cls.services["account"],
domainid=cls.domain.id,
admin=True
)
cls._cleanup.append(cls.account)
cls.service_offering = ServiceOffering.create(
cls.api_client,
cls.services["service_offering"]
)
cls._cleanup.append(cls.service_offering)
cls.network_offering = NetworkOffering.create(
cls.api_client,
cls.services["network_offering"],
conservemode=False
)
# Enable Network offering
cls.network_offering.update(cls.api_client, state='Enabled')
cls._cleanup.append(cls.network_offering)
cls.network = Network.create(
cls.api_client,
cls.services["network"],
accountid=cls.account.name,
domainid=cls.account.domainid,
networkofferingid=cls.network_offering.id,
zoneid=cls.zone.id
)
cls.virtual_machine = VirtualMachine.create(
cls.api_client,
cls.services["small"],
accountid=cls.account.name,
domainid=cls.account.domainid,
serviceofferingid=cls.service_offering.id,
networkids = [cls.network.id],
mode=cls.services['mode']
)
return
@classmethod
def tearDownClass(cls):
try:
# Disable Network offering
cls.network_offering.update(cls.api_client, state='Disabled')
#Cleanup resources used
cleanup_resources(cls.api_client, cls._cleanup)
except Exception as e:
raise Exception("Warning: Exception during cleanup : %s" % e)
return
def setUp(self):
self.apiclient = self.testClient.getApiClient()
self.dbclient = self.testClient.getDbConnection()
self.cleanup = []
portable_ip_range_services = getPortableIpRangeServices(self.config)
if portable_ip_range_services is FAILED:
self.skipTest('Failed to read config values related to portable ip range')
portable_ip_range_services["regionid"] = self.region.id
#create new portable ip range
new_portable_ip_range = PortablePublicIpRange.create(self.apiclient,
portable_ip_range_services)
self.cleanup.append(new_portable_ip_range)
return
def tearDown(self):
try:
#Clean up, terminate the resources created
self.network_offering.update(self.apiclient, state='Disabled')
cleanup_resources(self.apiclient, self.cleanup)
except Exception as e:
raise Exception("Warning: Exception during cleanup : %s" % e)
return
@attr(tags=["advanced", "selfservice"])
def test_disassociate_ip_address_no_services(self):
""" Test disassociating portable ip
"""
# 1. Create new portable ip range
# 2. Associate a portable ip
# 3. Disassociate the portable ip with root admin api client
# 4. Disassociating should be successful
try:
portableip = PublicIPAddress.create(
self.apiclient,
accountid=self.account.name,
zoneid=self.zone.id,
domainid=self.account.domainid,
networkid=self.network.id,
isportable=True
)
portableip.delete(self.apiclient)
except Exception as e:
raise Exception("Exception occured: %s" % e)
return
@attr(tags=["advanced", "selfservice"])
def test_disassociate_ip_address_services_enabled(self):
""" Test disassociating portable ip
"""
# 1. Create new portable ip range
# 2. Associate a portable ip
# 3. Enable NAT and Firewall services on this portable IP
# 4. Disassociate the portable ip with root admin api client
# 5. Disassociating should be successful
portableip = PublicIPAddress.create(
self.apiclient,
accountid=self.account.name,
zoneid=self.zone.id,
domainid=self.account.domainid,
networkid=self.network.id,
isportable=True
)
response = isIpInDesiredState(self.apiclient, portableip.ipaddress.id, state="allocated")
exceptionOccured = response[0]
ipInDesiredState = response[1]
exceptionMessage = response[2]
if (exceptionOccured or (not ipInDesiredState)):
portableip.delete(self.apiclient)
self.fail(exceptionMessage)
try:
# Open up firewall port for SSH
self.debug("Opening firewall on the portable public ip")
FireWallRule.create(
self.apiclient,
ipaddressid=portableip.ipaddress.id,
protocol=self.services["natrule"]["protocol"],
cidrlist=[self.services["natrule"]["cidr"]],
startport=self.services["natrule"]["publicport"],
endport=self.services["natrule"]["publicport"]
)
#Create NAT rule
self.debug("Creating NAT rule on the portable public ip")
NATRule.create(
self.apiclient,
self.virtual_machine,
self.services["natrule"],
portableip.ipaddress.id
)
except Exception as e:
portableip.delete(self.apiclient)
self.fail("Error: %s" % e)
try:
portableip.delete(self.apiclient)
except Exception as e:
raise Exception("Exception while disassociating portable ip: %s" % e)
return
@attr(tags=["advanced", "selfservice"])
def test_disassociate_ip_address_other_account(self):
""" Test disassociating portable IP with non-owner account
"""
# 1. Create new portable ip range
# 2. Associate a portable ip
# 3. Try to disassociate the portable ip with an account which is not owner of portable ip
# 4. Disassociating should fail
try:
portableip = PublicIPAddress.create(
self.apiclient,
accountid=self.account.name,
zoneid=self.zone.id,
domainid=self.account.domainid,
networkid=self.network.id,
isportable=True
)
except Exception as e:
self.fail("Failed to create portable ip: %s" % e)
try:
self.otherAccount = Account.create(
self.apiclient,
self.services["account"],
domainid=self.domain.id
)
self.cleanup.append(self.otherAccount)
self.apiclientOtherAccount = self.testClient.getUserApiClient(
UserName=self.otherAccount.name,
DomainName=self.otherAccount.domain
)
# Trying to disassociate portable ip using
# api client of other account than the one
# used to create portable ip
with self.assertRaises(Exception):
portableip.delete(self.apiclientOtherAccount)
# Disassociate IP using api client of account used to create it
portableip.delete(self.apiclient)
except Exception as e:
self.fail("Exception while disassociating portable ip: %s" % e)
return
class TestDeleteAccount(cloudstackTestCase):
""" Test Delete Account having portable ip
"""
@classmethod
def setUpClass(cls):
cls.testClient = super(TestDeleteAccount, cls).getClsTestClient()
cls.api_client = cls.testClient.getApiClient()
cls.services = Services().services
# Get Zone, Domain and templates
cls.region = get_region(cls.api_client)
cls.domain = get_domain(cls.api_client)
cls.zone = get_zone(cls.api_client, cls.testClient.getZoneForTests())
cls.pod = get_pod(cls.api_client, cls.zone.id)
cls.services['mode'] = cls.zone.networktype
cls.services["domainid"] = cls.domain.id
cls.services["zoneid"] = cls.zone.id
cls.services["regionid"] = cls.region.id
template = get_template(
cls.api_client,
cls.zone.id,
cls.services["ostype"]
)
# Set Zones and disk offerings
cls.services["small"]["zoneid"] = cls.zone.id
cls.services["small"]["template"] = template.id
cls._cleanup = []
return
@classmethod
def tearDownClass(cls):
try:
#Cleanup resources used
cleanup_resources(cls.api_client, cls._cleanup)
except Exception as e:
raise Exception("Warning: Exception during cleanup : %s" % e)
return
def setUp(self):
self.apiclient = self.testClient.getApiClient()
self.dbclient = self.testClient.getDbConnection()
portable_ip_range_services = getPortableIpRangeServices(self.config)
if portable_ip_range_services is FAILED:
self.skipTest('Failed to read config values related to portable ip range')
self.cleanup = []
try:
self.account = Account.create(
self.apiclient,
self.services["account"],
domainid=self.domain.id,
admin=True
)
self.cleanup.append(self.account)
portable_ip_range_services["regionid"] = self.region.id
#create new portable ip range
new_portable_ip_range = PortablePublicIpRange.create(self.apiclient,
portable_ip_range_services)
self.cleanup.append(new_portable_ip_range)
self.network_offering = NetworkOffering.create(
self.apiclient,
self.services["network_offering"],
conservemode=False
)
# Enable Network offering
self.network_offering.update(self.apiclient, state='Enabled')
self.network = Network.create(
self.apiclient,
self.services["network"],
accountid=self.account.name,
domainid=self.account.domainid,
networkofferingid=self.network_offering.id,
zoneid=self.zone.id
)
self.cleanup.append(self.network_offering)
except Exception as e:
self.fail("Exception in setupClass: %s" % e)
return
def tearDown(self):
try:
#Clean up, terminate the resources created
cleanup_resources(self.apiclient, self.cleanup)
except Exception as e:
raise Exception("Warning: Exception during cleanup : %s" % e)
return
@attr(tags=["advanced", "selfservice"])
def test_delete_account_services_disabled(self):
""" test delete account with portable ip with no services enabled
"""
# 1. Associate a portable ip to an account
# 2. Delete account
# 3. Account should get deleted successfully
try:
portableip = PublicIPAddress.create(
self.apiclient,
accountid=self.account.name,
zoneid=self.zone.id,
domainid=self.account.domainid,
networkid=self.network.id,
isportable=True
)
self.account.delete(self.apiclient)
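# Deleting the account should release its portable ip, so listing it by id must fail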
with self.assertRaises(Exception):
PublicIPAddress.list(self.apiclient,
id=portableip.ipaddress.id)
except Exception as e:
self.fail(e)
return
@attr(tags=["advanced", "selfservice"])
def test_delete_account_services_enabled(self):
""" test delete account with portable ip with PF and firewall services enabled
"""
# 1. Associate a portable ip to an account
# 2. Enabled PF and Firewall rules on this IP
# 3. Delete account
# 4. Account should get deleted successfully
self.service_offering = ServiceOffering.create(
self.apiclient,
self.services["service_offering"]
)
self.cleanup.append(self.service_offering)
self.debug("Deploying Virtual Machine")
self.virtual_machine = VirtualMachine.create(
self.apiclient,
self.services["small"],
accountid=self.account.name,
domainid=self.account.domainid,
serviceofferingid=self.service_offering.id,
mode=self.services['mode']
)
self.debug("Created virtual machine instance: %s" % self.virtual_machine.id)
portableip = PublicIPAddress.create(
self.apiclient,
accountid=self.account.name,
zoneid=self.zone.id,
domainid=self.account.domainid,
networkid=self.network.id,
isportable=True
)
self.debug("created public ip address (portable): %s" % portableip.ipaddress.ipaddress)
response = isIpInDesiredState(self.apiclient, portableip.ipaddress.id, state="allocated")
exceptionOccured = response[0]
ipInDesiredState = response[1]
exceptionMessage = response[2]
if (exceptionOccured or (not ipInDesiredState)):
portableip.delete(self.apiclient)
self.account.delete(self.apiclient)
self.fail(exceptionMessage)
try:
# Open up firewall port for SSH
self.debug("Opening firewall on the portable public ip")
FireWallRule.create(
self.apiclient,
ipaddressid=portableip.ipaddress.id,
protocol=self.services["natrule"]["protocol"],
cidrlist=[self.services["natrule"]["cidr"]],
startport=self.services["natrule"]["publicport"],
endport=self.services["natrule"]["publicport"]
)
#Create NAT rule
self.debug("Creating NAT rule on the portable public ip")
NATRule.create(
self.apiclient,
self.virtual_machine,
self.services["natrule"],
portableip.ipaddress.id
)
except Exception as e:
portableip.delete(self.apiclient)
self.account.delete(self.apiclient)
self.fail("Error %s" % e)
self.debug("Deleting account: %s :" % self.account.name)
self.account.delete(self.apiclient)
self.debug("Trying to list the ip address associated with deleted account, \
should throw exception")
with self.assertRaises(Exception):
PublicIPAddress.list(self.apiclient,
id=portableip.ipaddress.id)
return
class TestPortableIpTransferAcrossNetworks(cloudstackTestCase):
"""Test Transfer Portable IP Across Networks
"""
@classmethod
def setUpClass(cls):
cls.testClient = super(TestPortableIpTransferAcrossNetworks, cls).getClsTestClient()
cls.api_client = cls.testClient.getApiClient()
cls.services = Services().services
# Get Zone, Domain and templates
cls.region = get_region(cls.api_client)
cls.domain = get_domain(cls.api_client)
cls.zone = get_zone(cls.api_client, cls.testClient.getZoneForTests())
cls.pod = get_pod(cls.api_client, cls.zone.id)
cls.services['mode'] = cls.zone.networktype
cls.services["domainid"] = cls.domain.id
cls.services["zoneid"] = cls.zone.id
cls.services["regionid"] = cls.region.id
template = get_template(
cls.api_client,
cls.zone.id,
cls.services["ostype"]
)
# Set Zones and disk offerings
cls.services["vm1"]["zoneid"] = cls.zone.id
cls.services["vm1"]["template"] = template.id
cls.services["vm2"]["zoneid"] = cls.zone.id
cls.services["vm2"]["template"] = template.id
cls._cleanup = []
# Set Zones and Network offerings
cls.account = Account.create(
cls.api_client,
cls.services["account"],
domainid=cls.domain.id,
admin=True
)
cls._cleanup.append(cls.account)
cls.network_offering = NetworkOffering.create(
cls.api_client,
cls.services["network_offering"],
conservemode=False
)
cls._cleanup.append(cls.network_offering)
# Enable Network offering
cls.network_offering.update(cls.api_client, state='Enabled')
cls.service_offering = ServiceOffering.create(
cls.api_client,
cls.services["service_offering"]
)
cls.network1 = Network.create(
cls.api_client,
cls.services["network1"],
accountid=cls.account.name,
domainid=cls.account.domainid,
networkofferingid=cls.network_offering.id,
zoneid=cls.zone.id
)
cls.virtual_machine1 = VirtualMachine.create(
cls.api_client,
cls.services["vm1"],
accountid=cls.account.name,
domainid=cls.account.domainid,
serviceofferingid=cls.service_offering.id,
networkids = [cls.network1.id],
)
cls.network2 = Network.create(
cls.api_client,
cls.services["network2"],
accountid=cls.account.name,
domainid=cls.account.domainid,
networkofferingid=cls.network_offering.id,
zoneid=cls.zone.id
)
cls.virtual_machine2 = VirtualMachine.create(
cls.api_client,
cls.services["vm2"],
accountid=cls.account.name,
domainid=cls.account.domainid,
serviceofferingid=cls.service_offering.id,
networkids = [cls.network2.id],
)
return
@classmethod
def tearDownClass(cls):
try:
#Cleanup resources used
cleanup_resources(cls.api_client, cls._cleanup)
except Exception as e:
raise Exception("Warning: Exception during cleanup : %s" % e)
return
def setUp(self):
self.apiclient = self.testClient.getApiClient()
self.dbclient = self.testClient.getDbConnection()
#create new portable ip range
self.portable_ip_range_services = getPortableIpRangeServices(self.config)
if self.portable_ip_range_services is FAILED:
self.skipTest('Failed to read config values related to portable ip range')
self.portable_ip_range_services["regionid"] = self.region.id
#create new portable ip range
self.portable_ip_range = PortablePublicIpRange.create(self.apiclient,
self.portable_ip_range_services)
self.cleanup = [self.portable_ip_range, ]
return
def tearDown(self):
try:
#Clean up, terminate the resources created
cleanup_resources(self.apiclient, self.cleanup)
except Exception as e:
raise Exception("Warning: Exception during cleanup : %s" % e)
return
@attr(tags=["advanced","swamy", "selfservice"])
def test_transfer_portable_ip_across_networks(self):
"""Test transferring a portable IP across networks
"""
# 1. Create new network 1 and associate portable IP 1
# 2. Have at least 1 VM in network 1
# 3. Create a new network 2 and at least 1 VM in network 2
# 4. Enable static NAT on portable IP 1 with a VM in network 2
# 5. SSH to the VM in network 2
portableip = PublicIPAddress.create(
self.apiclient,
accountid=self.account.name,
zoneid=self.zone.id,
domainid=self.account.domainid,
networkid=self.network1.id,
isportable=True
)
response = isIpInDesiredState(self.apiclient, portableip.ipaddress.id, state="allocated")
exceptionOccured = response[0]
ipInDesiredState = response[1]
exceptionMessage = response[2]
if (exceptionOccured or (not ipInDesiredState)):
portableip.delete(self.apiclient)
self.fail(exceptionMessage)
self.debug("created public ip address (portable): %s" % portableip.ipaddress.ipaddress)
#Create NAT rule
self.debug("Creating NAT rule on the portable public ip")
try:
# Enable Static NAT for VM
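# Note: the portable ip was associated with network1; enabling static NAT
# with networkid=network2 is what transfers it across networks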
StaticNATRule.enable(
self.apiclient,
portableip.ipaddress.id,
self.virtual_machine2.id,
networkid=self.network2.id
)
# Open up firewall port for SSH
self.debug("Opening firewall on the portable public ip")
FireWallRule.create(
self.apiclient,
ipaddressid=portableip.ipaddress.id,
protocol=self.services["natrule"]["protocol"],
cidrlist=[self.services["natrule"]["cidr"]],
startport=self.services["natrule"]["publicport"],
endport=self.services["natrule"]["publicport"]
)
except Exception as e:
portableip.delete(self.apiclient)
self.fail("Error: %s" % e)
static_nat_list = PublicIPAddress.list(
self.apiclient,
associatednetworkid=self.network2.id,
listall=True,
isstaticnat=True,
ipaddress=portableip.ipaddress.ipaddress,
)
self.assertEqual(
isinstance(static_nat_list, list),
True,
"List Public IP should return a valid static NAT info that was created on portable ip"
)
self.assertTrue(
static_nat_list[0].ipaddress == portableip.ipaddress.ipaddress and static_nat_list[0].virtualmachineid==self.virtual_machine2.id,
"There is some issue in transferring portable ip {} across networks".format(portableip.ipaddress.ipaddress)
)
try:
self.debug("Trying to SSH to ip: %s" % portableip.ipaddress.ipaddress)
SshClient(portableip.ipaddress.ipaddress,
self.services['natrule']["publicport"],
self.virtual_machine2.username,
self.virtual_machine2.password
)
except Exception as e:
self.fail("Exception while SSHing : %s" % e)
finally:
self.debug("disassociating portable ip: %s" % portableip.ipaddress.ipaddress)
portableip.delete(self.apiclient)
| 42.861147 | 153 | 0.517088 | 5,845 | 65,749 | 5.723867 | 0.069461 | 0.05709 | 0.060079 | 0.027349 | 0.808136 | 0.780189 | 0.75553 | 0.722143 | 0.699546 | 0.680536 | 0 | 0.004194 | 0.405314 | 65,749 | 1,533 | 154 | 42.889106 | 0.851458 | 0.105948 | 0 | 0.715309 | 0 | 0 | 0.111411 | 0.001163 | 0 | 0 | 0 | 0 | 0.015219 | 1 | 0.042077 | false | 0.005372 | 0.007162 | 0 | 0.096688 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6a47f952111e4a2d0058c8565ccc47c85c688791 | 87 | py | Python | WEEKS/CD_Sata-Structures/_RESOURCES/python-prac/mini-scripts/Python_Variables_single_or_double_quotes.txt.py | webdevhub42/Lambda | b04b84fb5b82fe7c8b12680149e25ae0d27a0960 | [
"MIT"
] | 5 | 2021-06-02T23:44:25.000Z | 2021-12-27T16:21:57.000Z | WEEKS/CD_Sata-Structures/_RESOURCES/python-prac/mini-scripts/Python_Variables_single_or_double_quotes.txt.py | webdevhub42/Lambda | b04b84fb5b82fe7c8b12680149e25ae0d27a0960 | [
"MIT"
] | 22 | 2021-05-31T01:33:25.000Z | 2021-10-18T18:32:39.000Z | WEEKS/CD_Sata-Structures/_RESOURCES/python-prac/mini-scripts/Python_Variables_single_or_double_quotes.txt.py | webdevhub42/Lambda | b04b84fb5b82fe7c8b12680149e25ae0d27a0960 | [
"MIT"
] | 3 | 2021-06-19T03:37:47.000Z | 2021-08-31T00:49:51.000Z | x = 'Sanu'
print(x)
# double quotes are the same as single quotes:
x = "Sanu"
print(x)
| 14.5 | 46 | 0.666667 | 16 | 87 | 3.625 | 0.625 | 0.172414 | 0.344828 | 0.37931 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.195402 | 87 | 5 | 47 | 17.4 | 0.828571 | 0.505747 | 0 | 1 | 0 | 0 | 0.195122 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
dbe5089cb402351335353878ecd16c097727254b | 36 | py | Python | pyxeljs/hello.py | cie/python | b953f5e9d159abe9bd865c9642595a37ac43661b | [
"CC-BY-4.0"
] | 1 | 2019-11-19T01:06:36.000Z | 2019-11-19T01:06:36.000Z | pyxeljs/hello.py | cie/python | b953f5e9d159abe9bd865c9642595a37ac43661b | [
"CC-BY-4.0"
] | 1 | 2020-05-07T22:09:11.000Z | 2020-05-08T06:52:10.000Z | pyxeljs/hello.py | cie/python | b953f5e9d159abe9bd865c9642595a37ac43661b | [
"CC-BY-4.0"
] | null | null | null | def __getattr__(sg):
return 12
| 9 | 20 | 0.666667 | 5 | 36 | 4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.074074 | 0.25 | 36 | 3 | 21 | 12 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
e02d4de961f703db1e4fe2be448354b10ce728d8 | 60,132 | py | Python | modules/unit_tests/s3/s3validators.py | aeturnum/new_eden | 01b603b2797dc5b3fa82d9ae32c23016c07c0f44 | [
"MIT"
] | null | null | null | modules/unit_tests/s3/s3validators.py | aeturnum/new_eden | 01b603b2797dc5b3fa82d9ae32c23016c07c0f44 | [
"MIT"
] | null | null | null | modules/unit_tests/s3/s3validators.py | aeturnum/new_eden | 01b603b2797dc5b3fa82d9ae32c23016c07c0f44 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
#
# Validators Unit Tests
#
# To run this script use:
# python web2py.py -S eden -M -R applications/eden/modules/unit_tests/s3/s3validators.py
#
import unittest
from gluon import current
from s3.s3datetime import S3Calendar, S3DefaultTZ
from s3.s3fields import *
from s3.s3validators import *
from s3compat import PY2
from unit_tests import run_suite
# =============================================================================
class EAST5(datetime.tzinfo):
""" Dummy time zone for tests """
def utcoffset(self, dt):
return datetime.timedelta(hours=5)
class WEST6(datetime.tzinfo):
""" Dummy time zone for tests """
def utcoffset(self, dt):
return datetime.timedelta(hours=-6)
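# These dummies stand in for real tzinfo objects wherever the tests below need
# a fixed offset, e.g. datetime.datetime(2011, 11, 19, 14, 0, 0, tzinfo=EAST5())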
# =============================================================================
class ISLatTest(unittest.TestCase):
"""
Latitude has to be in decimal degrees between -90 & 90
We can convert D/M/S or D°M'S" format into decimal degrees:
Zero padded, separated by spaces or : or (d, m, s) or (°, ', ") or run
together and followed by cardinal direction initial (N,S). Only seconds
can have decimals places. A decimal point with no trailing digits is invalid.
"""
# -------------------------------------------------------------------------
def testValid(self):
""" Test valid latitude expressions """
assertEqual = self.assertEqual
validator = IS_LAT()
# Accepts numeric values inside limit
value, error = validator(56.75)
assertEqual(error, None)
assertEqual(value, 56.75)
# Accepts decimal degrees as string
value, error = validator("32.9975")
assertEqual(error, None)
assertEqual(value, 32.9975)
# Accepts correctly formatted DMS strings
value, error = validator("40:23:15N")
assertEqual(error, None)
assertEqual(value, 40.3875)
value, error = validator(u"81°16'42.348\"N")
assertEqual(error, None)
assertEqual(value, 81.27843)
value, error = validator("40d 023m 15s S")
assertEqual(error, None)
assertEqual(value, -40.3875)
value, error = validator("90 00 00.0")
assertEqual(error, None)
assertEqual(value, 90.0)
value, error = validator("89 59 50.4141 S")
assertEqual(error, None)
assertEqual(value, -89.99733725)
value, error = validator("00 00 00.0")
assertEqual(error, None)
assertEqual(value, 0.0)
value, error = validator("43 23 15S")
assertEqual(error, None)
assertEqual(value, -43.3875)
# -------------------------------------------------------------------------
def testInvalid(self):
""" Test invalid latitude expressions """
assertNotEqual = self.assertNotEqual
validator = IS_LAT()
# Doesn't accept None or empty string
value, error = validator(None)
assertNotEqual(error, None)
value, error = validator("")
assertNotEqual(error, None)
# Doesn't accept syntactically incorrect strings
value, error = validator(" ")
assertNotEqual(error, None)
value, error = validator("invalid")
assertNotEqual(error, None)
value, error = validator("-43 17 11")
assertNotEqual(error, None)
# Doesn't accept invalid cardinal direction
value, error = validator("43 23 15W")
assertNotEqual(error, None)
# Doesn't accept values outside of limits
value, error = validator(101)
assertNotEqual(error, None)
value, error = validator(u"91°16'42.348\"N")
assertNotEqual(error, None)
value, error = validator("90 00 00.001 S")
assertNotEqual(error, None)
value, error = validator("89 61 50.4121 S") # Minutes excess
assertNotEqual(error, None)
value, error = validator("89 59 78.4141") # Seconds excess
assertNotEqual(error, None)
# =============================================================================
class ISLonTest(unittest.TestCase):
"""
Longitude has to be in decimal degrees between -180 & 180
We can convert D/M/S or D°M'S" format into decimal degrees:
Zero padded, separated by spaces or : or (d, m, s) or (°, ', ") or run
together and followed by cardinal direction initial (E,W). Only seconds
can have decimals places. A decimal point with no trailing digits is invalid.
"""
# -------------------------------------------------------------------------
def testValid(self):
""" Test valid latitude expressions """
assertEqual = self.assertEqual
validator = IS_LON()
# Accepts numeric values inside limit
value, error = validator(116.75)
assertEqual(error, None)
assertEqual(value, 116.75)
# Accepts decimal degrees as string
value, error = validator("132.9975")
assertEqual(error, None)
assertEqual(value, 132.9975)
# Accepts correctly formatted DMS strings
value, error = validator("99:23:15E")
assertEqual(error, None)
assertEqual(value, 99.3875)
value, error = validator(u"121°16'42.348\"E")
assertEqual(error, None)
assertEqual(value, 121.27843)
value, error = validator("40d 023m 15s W")
assertEqual(error, None)
assertEqual(value, -40.3875)
value, error = validator("180 00 00.0")
assertEqual(error, None)
assertEqual(value, 180.0)
value, error = validator("179 59 50.4141 W")
assertEqual(error, None)
assertEqual(value, -179.99733725)
value, error = validator("00 00 00.0")
assertEqual(error, None)
assertEqual(value, 0.0)
value, error = validator("143 23 15W")
assertEqual(error, None)
assertEqual(value, -143.3875)
# -------------------------------------------------------------------------
def testInvalid(self):
""" Test invalid latitude expressions """
assertNotEqual = self.assertNotEqual
validator = IS_LON()
# Doesn't accept None or empty string
value, error = validator(None)
assertNotEqual(error, None)
value, error = validator("")
assertNotEqual(error, None)
# Doesn't accept syntactically incorrect strings
value, error = validator(" ")
assertNotEqual(error, None)
value, error = validator("invalid")
assertNotEqual(error, None)
value, error = validator("-43 17 11")
assertNotEqual(error, None)
# Doesn't accept invalid cardinal direction
value, error = validator("43 23 15S")
assertNotEqual(error, None)
# Doesn't accept values outside of limits
value, error = validator(201)
assertNotEqual(error, None)
value, error = validator(u"181°16'42.348\"E")
assertNotEqual(error, None)
value, error = validator("180 00 00.001 W")
assertNotEqual(error, None)
value, error = validator("179 61 50.4121 W") # Minutes excess
assertNotEqual(error, None)
value, error = validator("179 59 78.4141") # Seconds excess
assertNotEqual(error, None)
# =============================================================================
class ISONEOFLazyRepresentationTests(unittest.TestCase):
def setUp(self):
s3db = current.s3db
settings = current.deployment_settings
current.auth.override = True
self.org_branches = settings.get_org_branches()
settings.org.branches = True
# Generate some organisation records
orgs = [Storage(name="ISONEOF%s" % i, acronym="IOO%s" % i) for i in range(5)]
table = s3db.org_organisation
ids = []
for org in orgs:
org_id = table.insert(**org)
org["id"] = org_id
s3db.update_super(table, org)
ids.append(org_id)
self.ids = ids
self.orgs = orgs
# -------------------------------------------------------------------------
def tearDown(self):
current.deployment_settings.org.branches = self.org_branches
current.auth.override = False
current.db.rollback()
# -------------------------------------------------------------------------
def testIsOneOfBuildSet(self):
""" Test building of options set """
assertEqual = self.assertEqual
assertIn = self.assertIn
db = current.db
table = current.s3db.org_organisation
renderer = S3Represent(lookup="org_organisation")
validator = IS_ONE_OF(db(table.id.belongs(self.ids)),
"org_organisation.id",
renderer,
)
# Verify the options set
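# validator.options() yields (value, label) tuples, hence the dict() below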
options = dict(validator.options())
for org in self.orgs:
assertIn(str(org.id), options)
assertEqual(options[str(org.id)], org.name)
# IS_ONE_OF passes all rows, no lookups inside renderer
assertEqual(renderer.queries, 0)
# -------------------------------------------------------------------------
def testOrgOrganisationRepresent(self):
""" Test IS_ONE_OF in combination with org_OrganisationRepresent """
# @todo: move into s3db/org tests?
s3db = current.s3db
assertTrue = self.assertTrue
assertEqual = self.assertEqual
db = current.db
table = s3db.org_organisation
renderer = s3db.org_OrganisationRepresent()
validator = IS_ONE_OF(db(table.id.belongs(self.ids)),
"org_organisation.id",
renderer,
)
options = dict(validator.options())
for org in self.orgs:
assertTrue(str(org.id) in options)
assertEqual(options[str(org.id)], "%s (%s)" % (org.name, org.acronym))
assertEqual(renderer.queries, 1) # using custom query
renderer = s3db.org_OrganisationRepresent(parent=False)
validator = IS_ONE_OF(db(table.id.belongs(self.ids)),
"org_organisation.id",
renderer,
)
options = dict(validator.options())
for org in self.orgs:
assertTrue(str(org.id) in options)
assertEqual(options[str(org.id)],
"%s (%s)" % (org.name, org.acronym))
assertEqual(renderer.queries, 0) # using default query
renderer = s3db.org_OrganisationRepresent(parent=False, acronym=False)
validator = IS_ONE_OF(db(table.id.belongs(self.ids)),
"org_organisation.id",
renderer,
)
options = dict(validator.options())
for org in self.orgs:
assertTrue(str(org.id) in options)
assertEqual(options[str(org.id)], org.name)
assertEqual(renderer.queries, 0) # using default query
# =============================================================================
class IS_PHONE_NUMBER_Tests(unittest.TestCase):
""" Test IS_PHONE_NUMBER single phone number validator """
def setUp(self):
settings = current.deployment_settings
self.intl = settings.get_msg_require_international_phone_numbers()
def tearDown(self):
settings = current.deployment_settings
settings.msg.require_international_phone_numbers = self.intl
# -------------------------------------------------------------------------
def testStandardNotationRequirement(self):
""" Test phone number validation with standard notation requirement """
assertEqual = self.assertEqual
assertNotEqual = self.assertNotEqual
validate = IS_PHONE_NUMBER(international=False)
number = "(021) 3847589"
value, error = validate(number)
assertEqual(error, None)
assertEqual(value, "(021) 3847589")
number = "0049-681-5049321"
value, error = validate(number)
assertEqual(error, None)
assertEqual(value, "0049-681-5049321")
number = " 1-992-883742"
value, error = validate(number)
assertEqual(error, None)
assertEqual(value, "1-992-883742")
number = "1.123.736489"
value, error = validate(number)
assertEqual(error, None)
assertEqual(value, "1.123.736489")
number = "+44848958493 "
value, error = validate(number)
assertEqual(error, None)
assertEqual(value, "+44848958493")
number = "(021) 3ADF589"
value, error = validate(number)
assertNotEqual(error, None)
number = "Test"
value, error = validate(number)
assertNotEqual(error, None)
# @todo: this is still recognized as valid, as is "-1"
#number = "1"
#value, error = validate(number)
#assertNotEqual(error, None)
number = "+44848958493/+44736282167"
value, error = validate(number)
assertNotEqual(error, None)
number = None
value, error = validate(number)
assertNotEqual(error, None)
number = ""
value, error = validate(number)
assertNotEqual(error, None)
# -------------------------------------------------------------------------
def testInternationalFormat(self):
""" Test phone number validation with international notation requirement """
settings = current.deployment_settings
assertEqual = self.assertEqual
assertNotEqual = self.assertNotEqual
validate = IS_PHONE_NUMBER(international=True)
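# In international mode the validator also normalizes the number by stripping
# separators, e.g. "+46-73-3847589" -> "+46733847589" (see assertions below)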
# Turn on notation requirement globally
settings.msg.require_international_phone_numbers = True
number = "+46-73-3847589"
value, error = validate(number)
assertEqual(error, None)
assertEqual(value, "+46733847589")
number = "+49.681.5049321"
value, error = validate(number)
assertEqual(error, None)
assertEqual(value, "+496815049321")
number = "+1992883742"
value, error = validate(number)
assertEqual(error, None)
assertEqual(value, "+1992883742")
number = "(021) 36374589"
value, error = validate(number)
assertNotEqual(error, None)
assertEqual(error, "Enter phone number in international format like +46783754957")
number = "Test"
value, error = validate(number)
assertNotEqual(error, None)
number = "1-364-283745"
value, error = validate(number)
assertNotEqual(error, None)
number = None
value, error = validate(number)
assertNotEqual(error, None)
number = ""
value, error = validate(number)
assertNotEqual(error, None)
# Turn off notation requirement globally
settings.msg.require_international_phone_numbers = False
number = "1-364-283745"
value, error = validate(number)
assertEqual(error, None)
assertEqual(value, "1-364-283745")
# =============================================================================
class IS_UTC_DATETIME_Tests(unittest.TestCase):
""" Test IS_UTC_DATETIME validator """
# -------------------------------------------------------------------------
def setUp(self):
settings = current.deployment_settings
# Make sure date and time formats are standard
self.date_format = settings.get_L10n_date_format()
self.time_format = settings.get_L10n_time_format()
settings.L10n.date_format = "%Y-%m-%d"
settings.L10n.time_format = "%H:%M:%S"
# Save time zone settings (defaults are UTC)
self.tzinfo = current.response.s3.tzinfo
self.tzname = current.session.s3.tzname
self.utc_offset = current.session.s3.utc_offset
# Set current calendar to Gregorian
self.calendar = current.calendar
current.calendar = S3Calendar("Gregorian")
# -------------------------------------------------------------------------
def tearDown(self):
settings = current.deployment_settings
# Reset date and time format settings
settings.L10n.date_format = self.date_format
settings.L10n.time_format = self.time_format
# Reset time zone
current.response.s3.tzinfo = self.tzinfo
current.session.s3.tzname = self.tzname
current.session.s3.utc_offset = self.utc_offset
# Restore current calendar
current.calendar = self.calendar
# -------------------------------------------------------------------------
def testValidation(self):
""" Test validation with valid datetime string """
response = current.response
session = current.session
response.s3.tzinfo = None
session.s3.tzname = "America/Detroit"
validate = IS_UTC_DATETIME()
assertEqual = self.assertEqual
# Test timezone-naive string (winter)
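# America/Detroit is UTC-5 (EST) in November, so 14:03 local -> 19:03 UTC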
dtstr = "2011-11-19 14:03:00"
value, error = validate(dtstr)
assertEqual(error, None)
assertEqual(value, datetime.datetime(2011, 11, 19, 19, 3, 0))
# Test timezone-naive string (summer)
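# America/Detroit is UTC-4 (EDT) in June, so 14:00 local -> 18:00 UTC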
dtstr = "2011-06-11 14:00:00"
value, error = validate(dtstr)
assertEqual(error, None)
assertEqual(value, datetime.datetime(2011, 6, 11, 18, 0, 0))
# Test timezone-aware string
dtstr = "2011-11-19 14:28:22+0500"
value, error = validate(dtstr)
assertEqual(error, None)
assertEqual(value, datetime.datetime(2011, 11, 19, 9, 28, 22))
# Fall back to offset
response.s3.tzinfo = None
session.s3.tzname = None
session.s3.utc_offset = -8
# Test timezone-naive string
dtstr = "2011-11-19 14:00:00"
value, error = validate(dtstr)
assertEqual(error, None)
assertEqual(value, datetime.datetime(2011, 11, 19, 22, 0, 0))
# Test timezone-aware string
dtstr = "2011-11-19 14:00:00+0500"
value, error = validate(dtstr)
assertEqual(error, None)
assertEqual(value, datetime.datetime(2011, 11, 19, 9, 0, 0))
# -------------------------------------------------------------------------
def testValidationWithDateTime(self):
""" Test validation with datetime """
response = current.response
session = current.session
response.s3.tzinfo = None
session.s3.tzname = "Australia/Tasmania"
session.s3.utc_offset = "+0200"
validate = IS_UTC_DATETIME()
assertEqual = self.assertEqual
# Test timezone-naive datetime (November: DST in Tasmania, UTC+11, to UTC)
dt = datetime.datetime(2011, 11, 19, 14, 0, 0)
value, error = validate(dt)
assertEqual(error, None)
assertEqual(value, datetime.datetime(2011, 11, 19, 3, 0, 0))
# Test timezone-naive datetime (June: standard time in Tasmania, UTC+10)
dt = datetime.datetime(2011, 6, 8, 5, 0, 0)
value, error = validate(dt)
assertEqual(error, None)
assertEqual(value, datetime.datetime(2011, 6, 7, 19, 0, 0))
# Test timezone-aware datetime (UTC+5 to UTC)
dt = datetime.datetime(2011, 11, 19, 14, 0, 0, tzinfo=EAST5())
value, error = validate(dt)
assertEqual(error, None)
assertEqual(value, datetime.datetime(2011, 11, 19, 9, 0, 0))
# Fall back to fixed offset
response.s3.tzinfo = None
session.s3.tzname = None
session.s3.utc_offset = -8
# Test timezone-naive datetime
dt = datetime.datetime(2011, 11, 19, 14, 0, 0)
value, error = validate(dt)
assertEqual(error, None)
assertEqual(value, datetime.datetime(2011, 11, 19, 22, 0, 0))
# Test timezone-aware datetime
dt = datetime.datetime(2011, 11, 19, 14, 0, 0, tzinfo=EAST5())
value, error = validate(dt)
assertEqual(error, None)
assertEqual(value, datetime.datetime(2011, 11, 19, 9, 0, 0))
# -------------------------------------------------------------------------
def testValidationWithDate(self):
""" Test validation with date """
response = current.response
session = current.session
response.s3.tzinfo = None
session.s3.tzname = "UTC"
session.s3.utc_offset = "+0200"
validate = IS_UTC_DATETIME()
assertEqual = self.assertEqual
# Check that date defaults to 8:00 hours (UTC)
dt = datetime.date(2011, 11, 19)
value, error = validate(dt)
assertEqual(error, None)
assertEqual(value, datetime.datetime(2011, 11, 19, 8, 0, 0))
# Change time zone (far West, fixed offset)
response.s3.tzinfo = None
session.s3.tzname = None
session.s3.utc_offset = -8
# Check that date defaults to 08:00 hours
dt = datetime.date(2011, 11, 19)
value, error = validate(dt)
assertEqual(error, None)
assertEqual(value, datetime.datetime(2011, 11, 19, 16, 0, 0))
# Change time zone (extreme East, with DST-awareness)
response.s3.tzinfo = None
session.s3.tzname = "Australia/Tasmania"
session.s3.utc_offset = -2
# Check that date defaults to 08:00 hours
dt = datetime.date(2011, 11, 19)
value, error = validate(dt)
assertEqual(error, None)
assertEqual(value, datetime.datetime(2011, 11, 18, 21, 0, 0))
# Check that date defaults to 08:00 hours
dt = datetime.date(2011, 5, 11)
value, error = validate(dt)
assertEqual(error, None)
assertEqual(value, datetime.datetime(2011, 5, 10, 22, 0, 0))
# -------------------------------------------------------------------------
def testValidationDestructive(self):
""" Test validation with invalid input """
validate = IS_UTC_DATETIME()
assertEqual = self.assertEqual
# Test with invalid datetime string
dtstr = "Invalid Value"
value, error = validate(dtstr)
assertEqual(error, validate.error_message)
assertEqual(value, dtstr)
# Test with invalid type
dtstr = 33
value, error = validate(dtstr)
assertEqual(error, validate.error_message)
assertEqual(value, dtstr)
# Test with None
dtstr = None
value, error = validate(dtstr)
assertEqual(error, validate.error_message)
assertEqual(value, dtstr)
# Test invalid UTC offset
dtstr = "2011-11-19 14:00:00+3600"
value, error = validate(dtstr)
assertEqual(error, validate.offset_error)
assertEqual(value, dtstr)
# -------------------------------------------------------------------------
def testValidationWithAlternativeCalendar(self):
""" Test validation with calendar-override """
assertEqual = self.assertEqual
# Test default=Gregorian, override=Persian
current.calendar = S3Calendar("Gregorian")
validate = IS_UTC_DATETIME(calendar="Persian")
dtstr = "1390-08-28 14:00:00"
value, error = validate(dtstr)
assertEqual(error, None)
assertEqual(value, datetime.datetime(2011, 11, 19, 14, 0, 0))
dtstr_ = validate.formatter(value)
assertEqual(dtstr_, dtstr)
# Test default=Persian, override=Gregorian
current.calendar = S3Calendar("Persian")
validate = IS_UTC_DATETIME(calendar="Gregorian")
dtstr = "2011-11-19 14:00:00"
value, error = validate(dtstr)
assertEqual(error, None)
assertEqual(value, datetime.datetime(2011, 11, 19, 14, 0, 0))
dtstr_ = validate.formatter(value)
assertEqual(dtstr_, dtstr)
# -------------------------------------------------------------------------
def testDefaultFormat(self):
""" Test validation with default format """
# Set default format
current.deployment_settings.L10n.date_format = "%d/%m/%Y"
current.deployment_settings.L10n.time_format = "%H:%M"
# Instantiate with default format
validate = IS_UTC_DATETIME()
assertEqual = self.assertEqual
# Test valid string
dtstr = "19/11/2011 14:00"
value, error = validate(dtstr)
assertEqual(error, None)
assertEqual(value, datetime.datetime(2011, 11, 19, 14, 0, 0))
# Test invalid string
dtstr = "2011-11-19 14:00:00"
value, error = validate(dtstr)
assertEqual(error, validate.error_message)
assertEqual(value, dtstr)
# -------------------------------------------------------------------------
def testCustomFormat(self):
""" Test validation with custom format (overriding settings) """
# Set default format
current.deployment_settings.L10n.date_format = "%d/%m/%Y"
current.deployment_settings.L10n.time_format = "%H:%M:%S"
# Instantiate with override format
validate = IS_UTC_DATETIME(format="%d.%m.%Y %I:%M %p")
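# %I:%M %p is a 12-hour clock, so 14:00 is rendered as "02:00 PM"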
assertEqual = self.assertEqual
# Test valid string
dtstr = "19.11.2011 02:00 PM"
value, error = validate(dtstr)
assertEqual(error, None)
assertEqual(value, datetime.datetime(2011, 11, 19, 14, 0, 0))
# Test invalid string
dtstr = "2011-11-19 14:00:00"
value, error = validate(dtstr)
assertEqual(error, validate.error_message)
assertEqual(value, dtstr)
# -------------------------------------------------------------------------
def testFormatter(self):
""" Test formatter """
response = current.response
session = current.session
validate = IS_UTC_DATETIME()
assertEqual = self.assertEqual
# Test with None
dt = None
dtstr = validate.formatter(dt)
assertEqual(dtstr, current.messages["NONE"])
# Test without UTC offset
dt = datetime.datetime(2011, 11, 19, 14, 0, 0)
dtstr = validate.formatter(dt)
assertEqual(dtstr, "2011-11-19 14:00:00")
# Change time zone
response.s3.tzinfo = None
session.s3.tzname = "Canada/Eastern"
session.s3.utc_offset = +5
# Test with the session time zone (offset depends on DST)
dt = datetime.datetime(2011, 11, 19, 14, 0, 0)
dtstr = validate.formatter(dt)
assertEqual(dtstr, "2011-11-19 09:00:00")
dt = datetime.datetime(2011, 6, 8, 14, 0, 0)
dtstr = validate.formatter(dt)
assertEqual(dtstr, "2011-06-08 10:00:00")
# Test format override
validate = IS_UTC_DATETIME(format="%d.%m.%Y %I:%M %p",
)
dt = datetime.datetime(2011, 11, 19, 14, 0, 0)
dtstr = validate.formatter(dt)
assertEqual(dtstr, "19.11.2011 09:00 AM")
# -------------------------------------------------------------------------
def testLocalizedErrorMessages(self):
""" Test localized date/time in default error messages """
response = current.response
session = current.session
assertEqual = self.assertEqual
assertTrue = self.assertTrue
# Set default format
current.deployment_settings.L10n.date_format = "%d/%m/%Y"
current.deployment_settings.L10n.time_format = "%I:%M %p"
# Change time zone
response.s3.tzinfo = None
session.s3.tzname = "US/Pacific"
session.s3.utc_offset = +3
# Minimum/maximum
mindt = datetime.datetime(2011, 11, 19, 14, 0, 0)
maxdt = datetime.datetime(2011, 11, 20, 22, 0, 0)
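# US/Pacific is UTC-8 (PST) in November: 14:00 UTC -> 06:00 AM local and
# 22:00 UTC -> 02:00 PM local (tzname takes precedence over utc_offset)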
# Test minimum error
validate = IS_UTC_DATETIME(minimum=mindt)
msg = validate.error_message
assertEqual(validate.minimum, mindt)
assertTrue(msg.find("19/11/2011 06:00 AM") != -1)
# Test maximum error
validate = IS_UTC_DATETIME(maximum=maxdt)
msg = validate.error_message
assertEqual(validate.maximum, maxdt)
assertTrue(msg.find("20/11/2011 02:00 PM") != -1)
# Test minimum error with custom format
validate = IS_UTC_DATETIME(minimum=mindt,
format="%Y-%m-%d %H:%M",
)
msg = validate.error_message
assertEqual(validate.minimum, mindt)
assertTrue(msg.find("2011-11-19 06:00") != -1)
# Test maximum error with custom format
validate = IS_UTC_DATETIME(maximum=maxdt,
format="%Y-%m-%d %H:%M",
)
msg = validate.error_message
assertEqual(validate.maximum, maxdt)
assertTrue(msg.find("2011-11-20 14:00") != -1)
# =============================================================================
class IS_UTC_DATE_Tests(unittest.TestCase):
""" Test IS_CALENDAR_DATE validator """
# -------------------------------------------------------------------------
def setUp(self):
settings = current.deployment_settings
# Set default calendar to Gregorian
self.calendar = current.calendar
current.calendar = S3Calendar("Gregorian")
# Make sure date format is standard
self.date_format = settings.get_L10n_date_format()
settings.L10n.date_format = "%Y-%m-%d"
# Set timezone to UTC
self.tzinfo = current.response.s3.tzinfo
self.tzname = current.session.s3.tzname
self.utc_offset = current.session.s3.utc_offset
# -------------------------------------------------------------------------
def tearDown(self):
settings = current.deployment_settings
# Reset date and time format settings
settings.L10n.date_format = self.date_format
# Reset time zone
current.response.s3.tzinfo = self.tzinfo
current.session.s3.tzname = self.tzname
current.session.s3.utc_offset = self.utc_offset
# Reset calendar
current.calendar = self.calendar
# -------------------------------------------------------------------------
def testValidation(self):
""" Test validation with valid datetime string """
response = current.response
validate = IS_UTC_DATE()
assertEqual = self.assertEqual
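# NB: "same day"/"next day" in the comments below describe the local
# date relative to the resulting UTC date (e.g. at UTC+11, local
# 2011-11-19 is already the day after UTC 2011-11-18).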
# Test UTC
dtstr = "2011-11-19"
value, error = validate(dtstr)
assertEqual(error, None)
assertEqual(value, datetime.date(2011, 11, 19))
# Change time zone
response.s3.tzinfo = S3DefaultTZ(-6)
# Test western time zone (6 hours West, same day)
dtstr = "2011-11-19"
value, error = validate(dtstr)
assertEqual(error, None)
assertEqual(value, datetime.date(2011, 11, 19))
# Change time zone
response.s3.tzinfo = S3DefaultTZ(+5)
# Test eastern time zone (5 hours East, same day)
dtstr = "2011-11-19"
value, error = validate(dtstr)
assertEqual(error, None)
assertEqual(value, datetime.date(2011, 11, 19))
# Change time zone
response.s3.tzinfo = S3DefaultTZ(+11)
# Test eastern time zone (11 hours East, next day)
dtstr = "2011-11-19"
value, error = validate(dtstr)
assertEqual(error, None)
assertEqual(value, datetime.date(2011, 11, 18))
# -------------------------------------------------------------------------
def testValidationWithDateTime(self):
""" Test validation with datetime """
response = current.response
validate = IS_UTC_DATE()
assertEqual = self.assertEqual
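# NB: "previous day"/"next day" in the comments below describe the
# local date relative to the resulting UTC date (e.g. local 19 Nov
# at UTC-6 converts to UTC 20 Nov).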
# Test timezone-naive datetime (UTC, same day)
dt = datetime.datetime(2011, 11, 19, 2, 0, 0)
value, error = validate(dt)
assertEqual(error, None)
assertEqual(value, datetime.date(2011, 11, 19))
# Test timezone-aware datetime (6 hours West, previous day)
dt = datetime.datetime(2011, 11, 19, 19, 0, 0, tzinfo=WEST6())
value, error = validate(dt)
assertEqual(error, None)
assertEqual(value, datetime.date(2011, 11, 20))
# Change time zone
response.s3.tzinfo = S3DefaultTZ(-8)
# Test timezone-naive datetime (8 hours West, previous day)
dt = datetime.datetime(2011, 11, 19, 18, 0, 0)
value, error = validate(dt)
assertEqual(error, None)
assertEqual(value, datetime.date(2011, 11, 20))
# Test timezone-aware datetime (5 hours East, next day)
dt = datetime.datetime(2011, 11, 19, 2, 0, 0, tzinfo=EAST5())
value, error = validate(dt)
assertEqual(error, None)
assertEqual(value, datetime.date(2011, 11, 18))
# -------------------------------------------------------------------------
def testParseRepresent(self):
""" Parsing-Representation consistency test """
# Representation of a parsed string must give the same string
response = current.response
assertEqual = self.assertEqual
validate = IS_UTC_DATE()
represent = S3DateTime.date_represent
response.s3.tzinfo = S3DefaultTZ(-10)
dtstr = "1998-03-21"
value, error = validate(dtstr)
assertEqual(error, None)
representation = validate.formatter(value)
assertEqual(representation, dtstr)
representation = represent(value, utc=True)
assertEqual(representation, dtstr)
response.s3.tzinfo = S3DefaultTZ(0)
dtstr = "1998-03-21"
value, error = validate(dtstr)
assertEqual(error, None)
representation = validate.formatter(value)
assertEqual(representation, dtstr)
representation = represent(value, utc=True)
assertEqual(representation, dtstr)
response.s3.tzinfo = S3DefaultTZ(+6)
dtstr = "1998-03-21"
value, error = validate(dtstr)
assertEqual(error, None)
representation = validate.formatter(value)
assertEqual(representation, dtstr)
representation = represent(value, utc=True)
assertEqual(representation, dtstr)
response.s3.tzinfo = S3DefaultTZ(+12)
dtstr = "1998-03-21"
value, error = validate(dtstr)
assertEqual(error, None)
representation = validate.formatter(value)
assertEqual(representation, dtstr)
representation = represent(value, utc=True)
assertEqual(representation, dtstr)
# -------------------------------------------------------------------------
def testValidationWithDate(self):
""" Test validation with date """
response = current.response
validate = IS_UTC_DATE()
assertEqual = self.assertEqual
# Test UTC
dt = datetime.date(2011, 11, 19)
value, error = validate(dt)
assertEqual(error, None)
assertEqual(value, datetime.date(2011, 11, 19))
# Test western time zone (5 hours West, same day)
response.s3.tzinfo = S3DefaultTZ(-5)
dt = datetime.date(2011, 11, 19)
value, error = validate(dt)
assertEqual(error, None)
assertEqual(value, datetime.date(2011, 11, 19))
# Test eastern time zone (5 hours East, same day)
response.s3.tzinfo = S3DefaultTZ(+5)
dt = datetime.date(2011, 11, 19)
value, error = validate(dt)
assertEqual(error, None)
assertEqual(value, datetime.date(2011, 11, 19))
# Test eastern time zone (9 hours East, next day)
response.s3.tzinfo = S3DefaultTZ(+9)
dt = datetime.date(2011, 11, 19)
value, error = validate(dt)
assertEqual(error, None)
assertEqual(value, datetime.date(2011, 11, 18))
# -------------------------------------------------------------------------
def testValidationDestructive(self):
""" Test validation with invalid input """
validate = IS_UTC_DATE()
assertEqual = self.assertEqual
# Test with invalid datetime string
dtstr = "Invalid Value"
value, error = validate(dtstr)
assertEqual(error, validate.error_message)
assertEqual(value, dtstr)
# Test with invalid type
dtstr = 33
value, error = validate(dtstr)
assertEqual(error, validate.error_message)
assertEqual(value, dtstr)
# Test with None
dtstr = None
value, error = validate(dtstr)
assertEqual(error, validate.error_message)
assertEqual(value, dtstr)
# -------------------------------------------------------------------------
def testValidationWithAlternativeCalendar(self):
""" Test validation with calendar-override """
assertEqual = self.assertEqual
# Test default=Gregorian, override=Persian
current.calendar = S3Calendar("Gregorian")
validate = IS_UTC_DATE(calendar="Persian")
dtstr = "1390-08-28"
value, error = validate(dtstr)
assertEqual(error, None)
assertEqual(value, datetime.date(2011, 11, 19))
dtstr_ = validate.formatter(value)
assertEqual(dtstr_, dtstr)
# Test default=Persian, override=Gregorian
current.calendar = S3Calendar("Persian")
validate = IS_UTC_DATE(calendar="Gregorian")
dtstr = "2011-11-19"
value, error = validate(dtstr)
assertEqual(error, None)
assertEqual(value, datetime.date(2011, 11, 19))
dtstr_ = validate.formatter(value)
assertEqual(dtstr_, dtstr)
# -------------------------------------------------------------------------
def testDefaultFormat(self):
""" Test validation with default format """
# Set default format
current.deployment_settings.L10n.date_format = "%d/%m/%Y"
# Instantiate with default format
validate = IS_UTC_DATE()
assertEqual = self.assertEqual
# Test valid string
dtstr = "19/11/2011"
value, error = validate(dtstr)
assertEqual(error, None)
assertEqual(value, datetime.date(2011, 11, 19))
# Test invalid string
dtstr = "2011-11-19"
value, error = validate(dtstr)
assertEqual(error, validate.error_message)
assertEqual(value, dtstr)
# -------------------------------------------------------------------------
def testCustomFormat(self):
""" Test validation with custom format (overriding settings) """
# Set default format
current.deployment_settings.L10n.date_format = "%d/%m/%Y"
# Instantiate with override format
validate = IS_UTC_DATE(format="%d.%m.%Y")
assertEqual = self.assertEqual
# Test valid string
dtstr = "19.11.2011"
value, error = validate(dtstr)
assertEqual(error, None)
assertEqual(value, datetime.date(2011, 11, 19))
# Test invalid string
dtstr = "2011-11-19"
value, error = validate(dtstr)
assertEqual(error, validate.error_message)
assertEqual(value, dtstr)
# -------------------------------------------------------------------------
def testFormatter(self):
""" Test formatter """
response = current.response
session = current.session
validate = IS_UTC_DATE()
assertEqual = self.assertEqual
# Test with None
dt = None
dtstr = validate.formatter(dt)
assertEqual(dtstr, current.messages["NONE"])
# Test without UTC offset
dt = datetime.date(2011, 11, 19)
dtstr = validate.formatter(dt)
assertEqual(dtstr, "2011-11-19")
# Change time zone
response.s3.tzinfo = S3DefaultTZ(-6)
# Test with default UTC offset (6 hours West, same day)
dt = datetime.date(2011, 11, 19)
dtstr = validate.formatter(dt)
assertEqual(dtstr, "2011-11-19")
# Change time zone
response.s3.tzinfo = S3DefaultTZ(+6)
# Test with default UTC offset (6 hours East, same day)
dt = datetime.date(2011, 11, 19)
dtstr = validate.formatter(dt)
assertEqual(dtstr, "2011-11-19")
# Change time zone
response.s3.tzinfo = S3DefaultTZ(+12)
# Test with default UTC offset (12 hours East, next day)
dt = datetime.date(2011, 11, 19)
dtstr = validate.formatter(dt)
assertEqual(dtstr, "2011-11-20")
response.s3.tzinfo = None
session.s3.tzname = "Australia/Melbourne"
session.s3.utc_offset = +1
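# NB: tzname takes precedence over the conflicting utc_offset here;
# Australia/Melbourne is UTC+11 (AEDT) in November and UTC+10 (AEST)
# in May, which explains the day shifts in the assertions below.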
# Test format override
validate = IS_UTC_DATE(format="%d.%m.%Y",
)
dt = datetime.datetime(2011, 11, 19, 8, 0, 0)
dtstr = validate.formatter(dt)
assertEqual(dtstr, "19.11.2011")
dt = datetime.datetime(2011, 11, 19, 18, 0, 0)
dtstr = validate.formatter(dt)
assertEqual(dtstr, "20.11.2011")
dt = datetime.date(2011, 11, 19)
dtstr = validate.formatter(dt)
assertEqual(dtstr, "20.11.2011")
dt = datetime.date(2011, 5, 19)
dtstr = validate.formatter(dt)
assertEqual(dtstr, "20.05.2011")
# -------------------------------------------------------------------------
def testLocalizedErrorMessages(self):
""" Test localized date/time in default error messages """
response = current.response
assertEqual = self.assertEqual
assertTrue = self.assertTrue
# Set default format
current.deployment_settings.L10n.date_format = "%d/%m/%Y"
# Change time zone
response.s3.tzinfo = S3DefaultTZ(+3)
# Minimum/maximum
mindt = datetime.date(2011, 11, 16)
maxdt = datetime.date(2011, 11, 20)
# Test minimum error
validate = IS_UTC_DATE(minimum=mindt)
msg = validate.error_message
assertEqual(validate.minimum, mindt)
assertTrue(msg.find("16/11/2011") != -1)
dtstr = "13/11/2011"
value, error = validate(dtstr)
assertEqual(value, dtstr)
assertEqual(error, msg)
# Test maximum error
validate = IS_UTC_DATE(maximum=maxdt)
msg = validate.error_message
assertEqual(validate.maximum, maxdt)
assertTrue(msg.find("20/11/2011") != -1)
# Test minimum error with custom format
validate = IS_UTC_DATE(minimum=mindt,
format="%Y-%m-%d",
)
msg = validate.error_message
assertEqual(validate.minimum, mindt)
assertTrue(msg.find("2011-11-16") != -1)
# Test maximum error with custom format
validate = IS_UTC_DATE(maximum=maxdt,
format="%Y-%m-%d",
)
msg = validate.error_message
assertEqual(validate.maximum, maxdt)
assertTrue(msg.find("2011-11-20") != -1)
# =============================================================================
class IS_JSONS3_Tests(unittest.TestCase):
""" Testing IS_JSONS3 validator """
# -------------------------------------------------------------------------
@classmethod
def setUpClass(cls):
db = current.db
# Create a test table
db.define_table("test_jsons3",
Field("value", "json",
requires = IS_JSONS3(),
),
)
# -------------------------------------------------------------------------
@classmethod
def tearDownClass(cls):
db = current.db
# Drop the test table
db.test_jsons3.drop()
# -------------------------------------------------------------------------
def testCompatibility(self):
""" Verify consistency of native JSON implementation """
db = current.db
table = db.test_jsons3
# PyDAL with native JSON support consistently accepts and
# returns a Python object for "json" fields. Older versions
# of web2py DAL may raise an exception here:
record_id = table.insert(value={"a": 1})
row = db(table.id == record_id).select(table.value,
limitby=(0, 1),
).first()
self.assertTrue(isinstance(row.value, dict))
# -------------------------------------------------------------------------
def testValidation(self):
""" Verify correct validation and conversion of JSON strings """
assertEqual = self.assertEqual
assertNotEqual = self.assertNotEqual
validator = IS_JSONS3()
jsonstr = """{"a": 1}"""
value, error = validator(jsonstr)
assertEqual(error, None)
assertEqual(value, {"a": 1})
jsonstr = """not valid"""
value, error = validator(jsonstr)
assertNotEqual(error, None)
assertEqual(value, jsonstr)
# None is not valid JSON (must use IS_EMPTY_OR to allow it)
jsonstr = None
value, error = validator(jsonstr)
assertNotEqual(error, None)
assertEqual(value, jsonstr)
# -------------------------------------------------------------------------
def testValidationNative(self):
""" Verify correct validation of JSON strings without conversion """
assertEqual = self.assertEqual
assertNotEqual = self.assertNotEqual
validator = IS_JSONS3(native_json=True)
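# With native_json=True, the validator only checks syntax and returns
# the JSON string itself rather than converting it to a Python object.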
jsonstr = """{"a":1}"""
value, error = validator(jsonstr)
assertEqual(error, None)
assertEqual(value, jsonstr)
jsonstr = """not valid"""
value, error = validator(jsonstr)
assertNotEqual(error, None)
assertEqual(value, jsonstr)
# None is not valid JSON (must use IS_EMPTY_OR to allow it)
jsonstr = None
value, error = validator(jsonstr)
assertNotEqual(error, None)
assertEqual(value, jsonstr)
# -------------------------------------------------------------------------
def testValidationCSVSyntax(self):
""" Verify correct validation and conversion of CSV strings """
assertEqual = self.assertEqual
assertNotEqual = self.assertNotEqual
# Pretend CSV import
current.response.s3.bulk = True
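# The bulk flag signals a CSV import, in which the validator also
# repairs single-quoted pseudo-JSON into valid JSON, as the cases
# below show.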
try:
validator = IS_JSONS3()
# Invalid syntax (single quotes)
jsonstr = """{'a': 1}"""
value, error = validator(jsonstr)
assertEqual(error, None)
assertEqual(value, {"a": 1})
# Invalid syntax (single quotes with nested quotes)
jsonstr = """{'a': 'this ain\\'t a good "example"'}"""
value, error = validator(jsonstr)
assertEqual(error, None)
assertEqual(value, {"a": "this ain't a good \"example\""})
# Valid syntax should work too
jsonstr = """{"a": 1}"""
value, error = validator(jsonstr)
assertEqual(error, None)
assertEqual(value, {"a": 1})
# Some stuff is just...
jsonstr = """not valid"""
value, error = validator(jsonstr)
assertNotEqual(error, None)
assertEqual(value, jsonstr)
finally:
current.response.s3.bulk = False
# -------------------------------------------------------------------------
def testValidationCSVSyntaxNative(self):
""" Verify correct validation and JSON syntax conversion of CSV strings """
assertEqual = self.assertEqual
assertNotEqual = self.assertNotEqual
# Pretend CSV import
current.response.s3.bulk = True
try:
validator = IS_JSONS3(native_json=True)
# Invalid syntax (single quotes) => returns a valid JSON string
jsonstr = """{'a': 1}"""
value, error = validator(jsonstr)
assertEqual(error, None)
assertEqual(value, """{"a":1}""")
# Invalid syntax (single quotes with nested quotes)
jsonstr = """{'a': 'this ain\\'t a good "example"'}"""
value, error = validator(jsonstr)
assertEqual(error, None)
assertEqual(value, """{"a":"this ain't a good \\"example\\""}""")
# Valid syntax should work too
jsonstr = """{"a": 1}"""
value, error = validator(jsonstr)
assertEqual(error, None)
assertEqual(value, """{"a":1}""")
# Some stuff is just...
jsonstr = """not JSON at all"""
value, error = validator(jsonstr)
assertNotEqual(error, None)
assertEqual(value, jsonstr)
finally:
current.response.s3.bulk = False
# -------------------------------------------------------------------------
def testFormatter(self):
""" Verify correct formatting of data with conversion """
assertEqual = self.assertEqual
validator = IS_JSONS3()
data = {"a": 1}
formatted = validator.formatter(data)
assertEqual(formatted, """{"a":1}""")
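# Note: serialization is compact (no whitespace after separators).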
# Exception: None gives None
# (would give "null" normally, but forms need to know there is no value)
data = None
formatted = validator.formatter(data)
assertEqual(formatted, None)
# -------------------------------------------------------------------------
def testFormatterNative(self):
""" Verify correct formatting of data without conversion """
assertEqual = self.assertEqual
validator = IS_JSONS3(native_json=True)
data = {"a": 1}
formatted = validator.formatter(data)
assertEqual(formatted, """{"a":1}""")
data = """{"a":1}"""
formatted = validator.formatter(data)
assertEqual(formatted, data)
# Exception: None gives None
# (would give "null" normally, but forms need to know there is no value)
data = None
formatted = validator.formatter(data)
assertEqual(formatted, None)
# =============================================================================
class IS_DYNAMIC_FIELDNAME_Test(unittest.TestCase):
""" Test cases for IS_DYNAMIC_FIELDNAME validator """
# -------------------------------------------------------------------------
def testPass(self):
""" Test IS_DYNAMIC_FIELDNAME with valid field names """
assertEqual = self.assertEqual
requires = IS_DYNAMIC_FIELDNAME()
value, error = requires("example")
assertEqual(value, "example")
assertEqual(error, None)
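# Valid field names are normalized to lowercase: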
value, error = requires("Another_Example")
assertEqual(value, "another_example")
assertEqual(error, None)
# -------------------------------------------------------------------------
def testFail(self):
""" Test IS_DYNAMIC_FIELDNAME with invalid field names """
assertNotEqual = self.assertNotEqual
requires = IS_DYNAMIC_FIELDNAME()
# Must not be None
value, error = requires(None)
assertNotEqual(error, None)
# Must not be empty
value, error = requires("")
assertNotEqual(error, None)
# Must not contain blanks
value, error = requires("must not contain blanks")
assertNotEqual(error, None)
# Must start with a letter
value, error = requires("_must_start_with_letter")
assertNotEqual(error, None)
# Must not contain invalid characters
value, error = requires("invalid#characters")
assertNotEqual(error, None)
# Must not be "id"
value, error = requires("id")
assertNotEqual(error, None)
# Must not be meta-field name
value, error = requires("modified_by")
assertNotEqual(error, None)
# =============================================================================
class IS_DYNAMIC_FIELDTYPE_Test(unittest.TestCase):
""" Test cases for IS_DYNAMIC_FIELDTYPE validator """
# -------------------------------------------------------------------------
def testPass(self):
""" Test IS_DYNAMIC_FIELDTYPE with valid field types """
assertEqual = self.assertEqual
requires = IS_DYNAMIC_FIELDTYPE()
value, error = requires("boolean")
assertEqual(value, "boolean")
assertEqual(error, None)
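# Field types are normalized (lowercased, surrounding whitespace stripped):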
value, error = requires("String")
assertEqual(value, "string")
assertEqual(error, None)
value, error = requires(" Integer ")
assertEqual(value, "integer")
assertEqual(error, None)
value, error = requires("reference org_organisation")
assertEqual(value, "reference org_organisation")
assertEqual(error, None)
# -------------------------------------------------------------------------
def testFail(self):
""" Test IS_DYNAMIC_FIELDTYPE with invalid field types """
assertNotEqual = self.assertNotEqual
requires = IS_DYNAMIC_FIELDTYPE()
# Must not be None
value, error = requires(None)
assertNotEqual(error, None)
# Must not be empty
value, error = requires("")
assertNotEqual(error, None)
# Must be a supported field type
value, error = requires("nonsense")
assertNotEqual(error, None)
# Must not be "id"
value, error = requires("id")
assertNotEqual(error, None)
# Referenced tables must be resolvable
value, error = requires("reference nonexistent_table")
assertNotEqual(error, None)
# =============================================================================
class IS_FLOAT_AMOUNT_Tests(unittest.TestCase):
"""
Tests for the IS_FLOAT_AMOUNT validator
"""
# -------------------------------------------------------------------------
def setUp(self):
settings = current.deployment_settings
self.dot = settings.get_L10n_decimal_separator()
self.sep = settings.get_L10n_thousands_separator()
self.grp = settings.get_L10n_thousands_grouping()
settings.L10n.decimal_separator = ","
settings.L10n.thousands_separator = " "
settings.L10n.thousands_grouping = 3
def tearDown(self):
settings = current.deployment_settings
settings.L10n.decimal_separator = self.dot
settings.L10n.thousands_separator = self.sep
settings.L10n.thousands_grouping = self.grp
# -------------------------------------------------------------------------
def test_representation(self):
""" Test the IS_FLOAT_AMOUNT representation function """
represent = IS_FLOAT_AMOUNT.represent
samples = ((None, "", None, True),
(0.0, "0", 0, True),
(0.00325, "0,00", 2, True),
(198.05, "198,05", 2, True),
(1305.0, "1 305", 0, True),
(123456789012.0, "123 456 789 012,000", 3, True),
(0, "0", None, True),
(1305, "1 305,00", 2, True),
(987654321098, "987 654 321 098,00", 2, True),
(-0, "0,00", 2, True),
(-1305.730, "-1 305,73", None, True),
(-123456789012345.0, "-123 456 789 012 345", 2, False),
)
assertEqual = self.assertEqual
for number, expected, precision, fixed in samples:
assertEqual(represent(number,
precision = precision,
fixed = fixed,
),
expected,
)
# -------------------------------------------------------------------------
def test_validation(self):
""" Test the IS_FLOAT_AMOUNT validation function """
validate = IS_FLOAT_AMOUNT()
samples = (("123 456 789 012,00", 123456789012.0),
("0,00", 0.0),
("1 305,00", 1305.0),
(12.345, 12.345),
)
assertEqual = self.assertEqual
for inputstr, expected in samples:
value, error = validate(inputstr)
assertEqual(value, expected)
assertEqual(error, None)
# -------------------------------------------------------------------------
def test_ambiguous_validation(self):
""" Test the ambiguous validation """
settings = current.deployment_settings
settings.L10n.decimal_separator = ","
settings.L10n.thousands_separator = "."
settings.L10n.thousands_grouping = 3
validate = IS_FLOAT_AMOUNT()
samples = (("123.456.789.012,00", 123456789012.0),
("0,00", 0.0),
(u"1,305.234", 1.305234),
(12.345, 12.345),
)
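# NB: with "," as decimal and "." as thousands separator, the thousands
# separators are stripped first, so the ambiguous "1,305.234" parses
# as 1.305234 rather than 1305.234.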
assertEqual = self.assertEqual
for inputstr, expected in samples:
value, error = validate(inputstr)
assertEqual(value, expected)
assertEqual(error, None)
# =============================================================================
class IS_INT_AMOUNT_Tests(unittest.TestCase):
"""
Tests for the IS_INT_AMOUNT validator
"""
# -------------------------------------------------------------------------
def setUp(self):
settings = current.deployment_settings
self.sep = settings.get_L10n_thousands_separator()
self.grp = settings.get_L10n_thousands_grouping()
settings.L10n.thousands_separator = ","
settings.L10n.thousands_grouping = 3
def tearDown(self):
settings = current.deployment_settings
settings.L10n.thousands_separator = self.sep
settings.L10n.thousands_grouping = self.grp
# -------------------------------------------------------------------------
def test_representation(self):
""" Test the IS_INT_AMOUNT representation function """
represent = IS_INT_AMOUNT.represent
samples = ((None, ""),
(0, "0"),
(-0, "0"),
(-12555, "-12,555"),
(1305, "1,305"),
(1234567.89, "1,234,567"),
(123456789012, "123,456,789,012"),
(1234567890123456789, "1,234,567,890,123,456,789"),
)
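# NB: float inputs are truncated to whole numbers
# (1234567.89 -> "1,234,567").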
for number, expected in samples:
self.assertEqual(represent(number), expected)
# -------------------------------------------------------------------------
def test_validation(self):
""" Test the IS_INT_AMOUNT validation function """
validate = IS_INT_AMOUNT()
samples = (("123,456,789,012", 123456789012),
("0", 0),
("993667", 993667),
)
assertEqual = self.assertEqual
for inputstr, expected in samples:
value, error = validate(inputstr)
assertEqual(value, expected)
assertEqual(error, None)
# =============================================================================
if __name__ == "__main__":
run_suite(
ISLatTest,
ISLonTest,
ISONEOFLazyRepresentationTests,
IS_PHONE_NUMBER_Tests,
IS_UTC_DATETIME_Tests,
IS_UTC_DATE_Tests,
IS_JSONS3_Tests,
IS_DYNAMIC_FIELDNAME_Test,
IS_DYNAMIC_FIELDTYPE_Test,
IS_FLOAT_AMOUNT_Tests,
IS_INT_AMOUNT_Tests,
)
# END ========================================================================
| 33.369589 | 90 | 0.54377 | 5,968 | 60,132 | 5.416555 | 0.090985 | 0.044856 | 0.050733 | 0.058003 | 0.819959 | 0.790355 | 0.742436 | 0.711594 | 0.670699 | 0.630607 | 0 | 0.057211 | 0.279984 | 60,132 | 1,801 | 91 | 33.388118 | 0.689209 | 0.170891 | 0 | 0.689623 | 0 | 0.003774 | 0.058272 | 0.00158 | 0 | 0 | 0 | 0.000555 | 0.335849 | 0 | null | null | 0.001887 | 0.006604 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e02faf90fbc7015e776997b2d40047ee3838ed1d | 47 | py | Python | python/testData/copyPaste/LineToPrev.src.py | truthiswill/intellij-community | fff88cfb0dc168eea18ecb745d3e5b93f57b0b95 | [
"Apache-2.0"
] | 2 | 2019-04-28T07:48:50.000Z | 2020-12-11T14:18:08.000Z | python/testData/copyPaste/LineToPrev.src.py | truthiswill/intellij-community | fff88cfb0dc168eea18ecb745d3e5b93f57b0b95 | [
"Apache-2.0"
] | 173 | 2018-07-05T13:59:39.000Z | 2018-08-09T01:12:03.000Z | python/testData/copyPaste/LineToPrev.src.py | truthiswill/intellij-community | fff88cfb0dc168eea18ecb745d3e5b93f57b0b95 | [
"Apache-2.0"
] | 2 | 2020-03-15T08:57:37.000Z | 2020-04-07T04:48:14.000Z | print 1<selection>
print 2</selection>
print 3
| 11.75 | 19 | 0.765957 | 8 | 47 | 4.5 | 0.625 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.073171 | 0.12766 | 47 | 3 | 20 | 15.666667 | 0.804878 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 1 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
e04e04f347be90c95bb75cf3d26e0ba0c7745cda | 99 | py | Python | survtrace/__init__.py | RyanWangZf/SurvTRACE | d55299a28629d233f49ad1feaea7ed00835f0dd0 | [
"MIT"
] | 8 | 2021-10-01T22:39:41.000Z | 2022-03-30T05:46:40.000Z | survtrace/__init__.py | RyanWangZf/SurvTRACE | d55299a28629d233f49ad1feaea7ed00835f0dd0 | [
"MIT"
] | 4 | 2021-10-07T17:40:36.000Z | 2022-03-29T04:18:47.000Z | survtrace/__init__.py | RyanWangZf/SurvTRACE | d55299a28629d233f49ad1feaea7ed00835f0dd0 | [
"MIT"
] | 3 | 2022-03-09T13:46:36.000Z | 2022-03-16T16:11:54.000Z | from .evaluate_utils import Evaluator
from .train_utils import Trainer
from .config import STConfig | 33 | 37 | 0.858586 | 14 | 99 | 5.928571 | 0.642857 | 0.26506 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 99 | 3 | 38 | 33 | 0.943182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e0622b6aa86f10086394e43f322f5f5e7b51e23c | 172 | py | Python | tensorkit/gnn/adj/__init__.py | lizeyan/tensorkit | 2997a5914ec3c3ec72f91eb5906b5ee878fdc020 | [
"MIT"
] | null | null | null | tensorkit/gnn/adj/__init__.py | lizeyan/tensorkit | 2997a5914ec3c3ec72f91eb5906b5ee878fdc020 | [
"MIT"
] | null | null | null | tensorkit/gnn/adj/__init__.py | lizeyan/tensorkit | 2997a5914ec3c3ec72f91eb5906b5ee878fdc020 | [
"MIT"
] | null | null | null | """GCN utilities based on adjacency matrix graph."""
from .gcn_layers import *
from .tensor_ops import *
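# graph-tool based ops are optional; ignore them if the dependency
# is not installed.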
try:
from ._graph_tool import *
except ImportError:
pass
| 17.2 | 52 | 0.72093 | 23 | 172 | 5.217391 | 0.73913 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.19186 | 172 | 9 | 53 | 19.111111 | 0.863309 | 0.267442 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.166667 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
0ec227416b9f2627f795a361836d2eb08586f1b7 | 76 | py | Python | vaetc/evaluation/metrics/predictor/__init__.py | ganmodokix/vaetc | 866b79677b4f06603203376d967989dedadbffae | [
"MIT"
] | null | null | null | vaetc/evaluation/metrics/predictor/__init__.py | ganmodokix/vaetc | 866b79677b4f06603203376d967989dedadbffae | [
"MIT"
] | null | null | null | vaetc/evaluation/metrics/predictor/__init__.py | ganmodokix/vaetc | 866b79677b4f06603203376d967989dedadbffae | [
"MIT"
] | null | null | null | from .ridgeway import ridgeway_explicitness
from .sap_score import sap_score | 38 | 43 | 0.881579 | 11 | 76 | 5.818182 | 0.545455 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.092105 | 76 | 2 | 44 | 38 | 0.927536 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0eea4e011a3373d0a7d6712a702011d962e7de0f | 19,299 | py | Python | src/genie/libs/parser/nxos/show_rip.py | Drey/genieparser | f16649efabf1f3c892bcaad340ae24ce5403ba6b | [
"Apache-2.0"
] | null | null | null | src/genie/libs/parser/nxos/show_rip.py | Drey/genieparser | f16649efabf1f3c892bcaad340ae24ce5403ba6b | [
"Apache-2.0"
] | 1 | 2019-04-02T16:51:56.000Z | 2019-04-02T16:51:56.000Z | src/genie/libs/parser/nxos/show_rip.py | Drey/genieparser | f16649efabf1f3c892bcaad340ae24ce5403ba6b | [
"Apache-2.0"
] | 1 | 2021-01-29T17:31:33.000Z | 2021-01-29T17:31:33.000Z | """show_rip.py
NXOS parser class for below command(s):
show ip rip vrf all
"""
import re
try:
from ats import tcl
except Exception:
pass
from genie.metaparser import MetaParser
from genie.metaparser.util.schemaengine import Any, Optional
def regexp(expression):
def match(value):
if re.match(expression,value):
return value
else:
raise TypeError("Value '%s' doesnt match regex '%s'"
%(value,expression))
return match
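# Example (sketch): regexp("rip-(.*)") returns a validator callable
# that accepts "rip-1" unchanged and raises TypeError for anything
# else; the schemas below use it to match dynamic dictionary keys.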
class ShowIpRipSchema(MetaParser):
"""Schema for show ip rip vrf all"""
schema = {'process':
{regexp('rip-(.*)'):
{'vrf':
{Any():
{'adminDistance': str,
'defaultMetric': str,
'expiryTime': str,
'garbageCollectorTime': str,
'maxPaths': str,
'multicastGroup': str,
Optional('ripInterfaceList'): str,
Optional('ripPort'): str,
'state': str,
'status': str,
'updateTime': str,}
}
}
}
}
class ShowIpRipVrfAll(ShowIpRipSchema, MetaParser):
"""Parser for:
show ip rip vrf all
parser class implements detail parsing mechanisms for cli and xml output.
"""
#*************************
# schema - class variable
#
# Purpose is to make sure the parser always return the output
# (nested dict) that has the same data structure across all supported
# parsing mechanisms (cli(), yang(), xml()).
def cli(self):
''' parsing mechanism: cli
Function cli() defines the cli type output parsing mechanism which
typically contains 3 steps: executing, transforming, returning
'''
result = tcl.q.caas.abstract(device=self.device.handle,
exec='show ip rip vrf all')
# # To leverage router_show parsers:
# result = tcl.q.router_show(device=device, cmd='show version')
return tcl.cast_any(result[1])
def xml(self):
''' parsing mechanism: xml
Function xml() defines the xml type output parsing mechanism which
typically contains 3 steps: executing, transforming, returning
'''
output = tcl.q.caas.abstract(device=self.device.handle,
exec='show ip rip vrf all | xml')
result = tcl.cast_any(output[1])
return result
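# Usage sketch (hypothetical, assuming a connected pyATS device handle):
#   parser = ShowIpRipVrfAll(device=device)
#   parsed = parser.cli()   # or parser.xml() for "| xml" output
# Both mechanisms return the nested dict defined by ShowIpRipSchema.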
class ShowIpv6RipVrfAll(MetaParser):
"""Parser for:
show ipv6 rip vrf all
parser class implements detail parsing mechanisms for cli and xml output.
"""
#*************************
# schema - class variable
#
# Purpose is to make sure the parser always return the output
# (nested dict) that has the same data structure across all supported
# parsing mechanisms (cli(), yang(), xml()).
def cli(self):
''' parsing mechanism: cli
Function cli() defines the cli type output parsing mechanism which
typically contains 3 steps: executing, transforming, returning
'''
result = tcl.q.caas.abstract(device=self.device.handle,
exec='show ipv6 rip vrf all')
# # To leverage router_show parsers:
# result = tcl.q.router_show(device=device, cmd='show version')
return tcl.cast_any(result[1])
def xml(self):
''' parsing mechanism: xml
Function xml() defines the xml type output parsing mechanism which
typically contains 3 steps: executing, transforming, returning
'''
output = tcl.q.caas.abstract(device=self.device.handle,
exec='show ipv6 rip vrf all | xml')
result = tcl.cast_any(output[1])
return result
class ShowRunRip(MetaParser):
"""Parser for:
show running-config rip
parser class implements detail parsing mechanisms for cli and xml output.
"""
#*************************
# schema - class variable
#
# Purpose is to make sure the parser always return the output
# (nested dict) that has the same data structure across all supported
# parsing mechanisms (cli(), yang(), xml()).
def cli(self):
''' parsing mechanism: cli
Function cli() defines the cli type output parsing mechanism which
typically contains 3 steps: executing, transforming, returning
'''
result = tcl.q.caas.abstract(device=self.device.handle,
exec='show running-config rip')
# # To leverage router_show parsers:
# result = tcl.q.router_show(device=device, cmd='show version')
return tcl.cast_any(result[1])
def xml(self):
''' parsing mechanism: xml
Function xml() defines the xml type output parsing mechanism which
typically contains 3 steps: executing, transforming, returning
'''
output = tcl.q.caas.abstract(device=self.device.handle,
exec='show running-config rip | xml')
result = tcl.cast_any(output[1])
return result
class ShowIpRipNeighborSchema(MetaParser):
"""Schema for show ip rip neighbor vrf all"""
schema = {'interfaces': str,
'process_id':
{regexp('rip-(.*)'):
{'vrf':
{Any():
{'neighbors':
{Any():
{'bad_pkts_received': str,
'bad_routes_received': str,
'last_request_received': str,
'last_request_sent': str,
'last_response_received': str,
'last_response_sent': str,
'neighbor': str
}
},
Optional('number_of_neighbors'): str
}
}
}
}
}
class ShowIpRipNeighborVrfAll(ShowIpRipNeighborSchema, MetaParser):
"""Parser for:
show ip rip neighbor vrf all
parser class implements detail parsing mechanisms for cli and xml output.
"""
#*************************
# schema - class variable
#
# Purpose is to make sure the parser always return the output
# (nested dict) that has the same data structure across all supported
# parsing mechanisms (cli(), yang(), xml()).
def cli(self):
''' parsing mechanism: cli
Function cli() defines the cli type output parsing mechanism which
typically contains 3 steps: executing, transforming, returning
'''
result = tcl.q.caas.abstract(device=self.device.handle,
exec='show ip rip neighbor vrf all')
# # To leverage router_show parsers:
# result = tcl.q.router_show(device=device, cmd='show version')
return tcl.cast_any(result[1])
def xml(self):
''' parsing mechanism: xml
Function xml() defines the xml type output parsing mechanism which
typically contains 3 steps: executing, transforming, returning
'''
output = tcl.q.caas.abstract(device=self.device.handle,
exec='show ip rip neighbor vrf all | xml')
result = tcl.cast_any(output[1])
return result
class ShowIpv6RipNeighborVrfAll(ShowIpRipNeighborSchema, MetaParser):
"""Parser for:
show ipv6 rip neighbor vrf all
parser class implements detail parsing mechanisms for cli and xml output.
"""
#*************************
# schema - class variable
#
# Purpose is to make sure the parser always return the output
# (nested dict) that has the same data structure across all supported
# parsing mechanisms (cli(), yang(), xml()).
def cli(self):
''' parsing mechanism: cli
Function cli() defines the cli type output parsing mechanism which
typically contains 3 steps: executing, transforming, returning
'''
result = tcl.q.caas.abstract(device=self.device.handle,
exec='show ipv6 rip neighbor vrf all')
# # To leverage router_show parsers:
# result = tcl.q.router_show(device=device, cmd='show version')
return tcl.cast_any(result[1])
def xml(self):
''' parsing mechanism: xml
Function xml() defines the xml type output parsing mechanism which
typically contains 3 steps: executing, transforming, returning
'''
output = tcl.q.caas.abstract(device=self.device.handle,
exec='show ipv6 rip neighbor vrf all | xml')
result = tcl.cast_any(output[1])
return result
class ShowIpRipInterfaceSchema(MetaParser):
"""Schema for show ip rip interface vrf all"""
schema = {regexp('rip-(.*)'):
{Any():
{Any():
{'address': str,
'admin': str,
'link': str,
'mask': str,
'metric': str,
'protocol': str,
'rip_state': str,
'split_horizon': str}
}
}
}
class ShowIpRipInterfaceVrfAll(ShowIpRipInterfaceSchema,MetaParser):
"""Parser for:
show ip rip interface vrf all
parser class implements detail parsing mechanisms for cli and xml output.
"""
#*************************
# schema - class variable
#
# Purpose is to make sure the parser always return the output
# (nested dict) that has the same data structure across all supported
# parsing mechanisms (cli(), yang(), xml()).
def cli(self):
''' parsing mechanism: cli
Function cli() defines the cli type output parsing mechanism which
typically contains 3 steps: executing, transforming, returning
'''
result = tcl.q.caas.abstract(device=self.device.handle,
exec='show ip rip interface vrf all')
# # To leverage router_show parsers:
# result = tcl.q.router_show(device=device, cmd='show version')
return tcl.cast_any(result[1])
def xml(self):
''' parsing mechanism: xml
Function xml() defines the xml type output parsing mechanism which
typically contains 3 steps: executing, transforming, returning
'''
output = tcl.q.caas.abstract(device=self.device.handle,
exec='show ip rip interface vrf all | xml')
result = tcl.cast_any(output[1])
return result
class ShowIpv6RipInterfaceVrfAll(ShowIpRipInterfaceSchema,MetaParser):
"""Parser for:
show ipv6 rip interface vrf all
parser class implements detail parsing mechanisms for cli and xml output.
"""
#*************************
# schema - class variable
#
# Purpose is to make sure the parser always return the output
# (nested dict) that has the same data structure across all supported
# parsing mechanisms (cli(), yang(), xml()).
def cli(self):
''' parsing mechanism: cli
Function cli() defines the cli type output parsing mechanism which
typically contains 3 steps: executing, transforming, returning
'''
result = tcl.q.caas.abstract(device=self.device.handle,
exec='show ipv6 rip interface vrf all')
# # To leverage router_show parsers:
# result = tcl.q.router_show(device=device, cmd='show version')
return tcl.cast_any(result[1])
def xml(self):
''' parsing mechanism: xml
Function xml() defines the xml type output parsing mechanism which
typically contains 3 steps: executing, transforming, returning
'''
output = tcl.q.caas.abstract(device=self.device.handle,
exec='show ipv6 rip interface vrf all | xml')
result = tcl.cast_any(output[1])
return result
class ShowIpRipStatisticsSchema(MetaParser):
"""Schema for show ip rip statistics"""
schema = {'process':
{regexp('rip-(.*)'):
{'multicast_update_periodic': str,
'multicast_update_triggered': str,
'recv_bad_pkts': str,
'recv_bad_routes': str,
'recv_multi_request': str,
'recv_multicast_updates': str,
'recv_uni_requests': str,
'recv_uni_updates': str,
'sent_multicast_request': str,
'sent_uni_updates': str
}
}
}
class ShowIpRipStatistics(ShowIpRipStatisticsSchema, MetaParser):
"""Parser for:
show ip rip statistics
parser class implements detail parsing mechanisms for cli and xml output.
"""
#*************************
# schema - class variable
#
# Purpose is to make sure the parser always return the output
# (nested dict) that has the same data structure across all supported
# parsing mechanisms (cli(), yang(), xml()).
def cli(self):
''' parsing mechanism: cli
Function cli() defines the cli type output parsing mechanism which
typically contains 3 steps: executing, transforming, returning
'''
result = tcl.q.caas.abstract(device=self.device.handle,
exec='show ip rip statistics')
# # To leverage router_show parsers:
# result = tcl.q.router_show(device=device, cmd='show version')
return tcl.cast_any(result[1])
def xml(self):
''' parsing mechanism: xml
Function xml() defines the xml type output parsing mechanism which
typically contains 3 steps: executing, transforming, returning
'''
output = tcl.q.caas.abstract(device=self.device.handle,
exec='show ip rip statistics | xml')
result = tcl.cast_any(output[1])
return result
class ShowIpv6RipStatistics(ShowIpRipStatisticsSchema, MetaParser):
"""Parser for:
show ipv6 rip statistics
parser class implements detail parsing mechanisms for cli and xml output.
"""
#*************************
# schema - class variable
#
# Purpose is to make sure the parser always return the output
# (nested dict) that has the same data structure across all supported
# parsing mechanisms (cli(), yang(), xml()).
def cli(self):
''' parsing mechanism: cli
Function cli() defines the cli type output parsing mechanism which
typically contains 3 steps: executing, transforming, returning
'''
result = tcl.q.caas.abstract(device=self.device.handle,
exec='show ipv6 rip statistics')
# # To leverage router_show parsers:
# result = tcl.q.router_show(device=device, cmd='show version')
return tcl.cast_any(result[1])
def xml(self):
''' parsing mechanism: xml
Function xml() defines the xml type output parsing mechanism which
typically contains 3 steps: executing, transforming, returning
'''
output = tcl.q.caas.abstract(device=self.device.handle,
exec='show ipv6 rip statistics | xml')
result = tcl.cast_any(output[1])
return result
# class ShowIpRipRouteVrfAll(MetaParser):
# """ parser class - implements detail parsing mechanisms for cli, xml, and
# yang output.
# """
# #*************************
# # schema - class variable
# #
# # Purpose is to make sure the parser always return the output
# # (nested dict) that has the same data structure across all supported
# # parsing mechanisms (cli(), yang(), xml()).
#
#
# def cli(self):
# ''' parsing mechanism: cli
#
# Function cli() defines the cli type output parsing mechanism which
# typically contains 3 steps: executing, transforming, returning
# '''
# result = tcl.q.caas.abstract(device=self.device.handle,
# exec='show ip rip route vrf all')
#
# # # To leverage router_show parsers:
# # result = tcl.q.router_show(device=device, cmd='show version')
#
# return tcl.cast_any(result[1])
#
# def xml(self):
# ''' parsing mechanism: xml
#
# Function xml() defines the xml type output parsing mechanism which
# typically contains 3 steps: executing, transforming, returning
# '''
# output = tcl.q.caas.abstract(device=self.device.handle,
# exec='show ip rip route vrf all | xml')
# result = tcl.cast_any(output[1])
#
# return result
#
# class ShowIpv6RipRouteVrfAll(MetaParser):
# """ parser class - implements detail parsing mechanisms for cli, xml, and
# yang output.
# """
# #*************************
# # schema - class variable
# #
# # Purpose is to make sure the parser always return the output
# # (nested dict) that has the same data structure across all supported
# # parsing mechanisms (cli(), yang(), xml()).
#
#
# def cli(self):
# ''' parsing mechanism: cli
#
# Function cli() defines the cli type output parsing mechanism which
# typically contains 3 steps: executing, transforming, returning
# '''
# result = tcl.q.caas.abstract(device=self.device.handle,
# exec='show ipv6 rip route vrf all')
#
# # # To leverage router_show parsers:
# # result = tcl.q.router_show(device=device, cmd='show version')
#
# return tcl.cast_any(result[1])
#
# def xml(self):
# ''' parsing mechanism: xml
#
# Function xml() defines the xml type output parsing mechanism which
# typically contains 3 steps: executing, transforming, returning
# '''
# output = tcl.q.caas.abstract(device=self.device.handle,
# exec='show ipv6 rip route vrf all | xml')
# result = tcl.cast_any(output[1])
#
# return result | 36.005597 | 91 | 0.549407 | 1,980 | 19,299 | 5.313636 | 0.082323 | 0.066914 | 0.041821 | 0.054367 | 0.860755 | 0.844692 | 0.817698 | 0.81095 | 0.81095 | 0.81095 | 0 | 0.005014 | 0.348982 | 19,299 | 536 | 92 | 36.005597 | 0.832378 | 0.499041 | 0 | 0.416667 | 0 | 0 | 0.13055 | 0.015915 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0.005556 | 0.027778 | 0 | 0.344444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0ef9887264b04fa677070e72589d7cb0be5361b5 | 12,102 | py | Python | tests/test_patterns.py | pji/imggen | 173bd9e6aeba208d1e0f1ef74857c0d6d28530c7 | [
"MIT"
] | null | null | null | tests/test_patterns.py | pji/imggen | 173bd9e6aeba208d1e0f1ef74857c0d6d28530c7 | [
"MIT"
] | null | null | null | tests/test_patterns.py | pji/imggen | 173bd9e6aeba208d1e0f1ef74857c0d6d28530c7 | [
"MIT"
] | null | null | null | """
test_patterns
~~~~~~~~~~~~~
Unit tests for the imggen.patterns module.
"""
import numpy as np
from imggen import patterns as p
from tests.common import ArrayTestCase, SourceTestCase
# Test cases.
class BoxTestCase(SourceTestCase):
def test_fill(self):
"""Given a size, Solid.fill should return a volume filled with
a box of the origin, dimensions, and color given when the
object was created.
"""
# Expected values.
exp = np.array([
[
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x80, 0x80, 0x80, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x80, 0x80, 0x80, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
],
[
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
],
], dtype=np.uint8)
# Set up test data and state.
kwargs = {
'origin': (0, 1, 1),
'dimensions': (1, 2, 3),
'color': 0x80 / 0xff,
}
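# Axes in origin/dimensions are ordered (plane, row, column); color
# is given normalized to the 0..1 range (0x80/0xff -> grey value 0x80).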
pattern = p.Box
# Run test and determine result.
self.fill_test(exp, pattern, kwargs)
class GradientTestCase(SourceTestCase):
def test_gradient_fill(self):
"""Given the size of a space to fill with noise, return an
array of that size filled with noise.
"""
# Expected values.
exp = np.array([
[
[0x00, 0x00, 0x00, 0x00],
[0x7f, 0x7f, 0x7f, 0x7f],
[0xff, 0xff, 0xff, 0xff],
[0x7f, 0x7f, 0x7f, 0x7f],
[0x00, 0x00, 0x00, 0x00],
],
[
[0x00, 0x00, 0x00, 0x00],
[0x7f, 0x7f, 0x7f, 0x7f],
[0xff, 0xff, 0xff, 0xff],
[0x7f, 0x7f, 0x7f, 0x7f],
[0x00, 0x00, 0x00, 0x00],
],
], dtype=np.uint8)
# Set up test data and state.
kwargs = {
'direction': 'v',
'stops': [0., 0., .5, 1., 1., 0.],
}
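# stops is a flat list of (position, value) pairs: (0.0, 0.0),
# (0.5, 1.0), (1.0, 0.0) -> a black-white-black vertical gradient.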
pattern = p.Gradient
# Run test and determine result.
self.fill_test(exp, pattern, kwargs)
class LinesTestCase(SourceTestCase):
def test_lines_fill(self):
"""Given the size of a space to fill with noise, return an
array of that size filled with noise.
"""
# Expected values.
exp = np.array([
[
[0x00, 0x00, 0x00, 0x00],
[0x7f, 0x7f, 0x7f, 0x7f],
[0xff, 0xff, 0xff, 0xff],
[0x7f, 0x7f, 0x7f, 0x7f],
],
[
[0x7f, 0x7f, 0x7f, 0x7f],
[0xff, 0xff, 0xff, 0xff],
[0x7f, 0x7f, 0x7f, 0x7f],
[0x00, 0x00, 0x00, 0x00],
],
], dtype=np.uint8)
# Set up test data and state.
kwargs = {
'direction': 'h',
'length': 5,
}
pattern = p.Lines
# Run test and determine results.
self.fill_test(exp, pattern, kwargs)
class RaysTestCase(SourceTestCase):
def test_rays_fill(self):
"""Given a size and location, Ray.fill should return a
volume filled with rays emanating from a central point.
"""
# Expected value.
exp = np.array([
[
[0x89, 0x60, 0x2c, 0x13, 0x58, 0x98, 0xcd, 0xf5],
[0xb1, 0x89, 0x4d, 0x06, 0x66, 0xb9, 0xf5, 0xe0],
[0xe5, 0xc4, 0x89, 0x18, 0x84, 0xf5, 0xcc, 0xab],
[0xd8, 0xe5, 0xf9, 0x89, 0xf5, 0x97, 0x79, 0x6b],
[0x93, 0x85, 0x67, 0x09, 0x75, 0x05, 0x19, 0x26],
[0x53, 0x32, 0x09, 0x7a, 0xe6, 0x75, 0x3a, 0x19],
[0x1e, 0x09, 0x45, 0x98, 0xf8, 0xb1, 0x75, 0x4d],
[0x09, 0x31, 0x66, 0xa6, 0xeb, 0xd2, 0x9e, 0x75],
],
], dtype=np.uint8)
# Set up test data and state.
kwargs = {
'count': 3,
'offset': np.pi / 2,
}
pattern = p.Rays
# Run test and determine results.
self.fill_test(exp, pattern, kwargs)
class RingsTestCase(SourceTestCase):
def test_ring_fill(self):
"""Given a size and location, Ring.fill should return a
volume filled with concentric rings.
"""
# Expected value.
exp = np.array([
[
[0x4f, 0x00, 0x0e, 0xc0, 0xff, 0xc0, 0x0e, 0x00],
[0x00, 0x83, 0x35, 0x00, 0x00, 0x00, 0x35, 0x83],
[0x0e, 0x35, 0x00, 0x86, 0xff, 0x86, 0x00, 0x35],
[0xc0, 0x00, 0x86, 0x00, 0x00, 0x00, 0x86, 0x00],
[0xff, 0x00, 0xff, 0x00, 0x00, 0x00, 0xff, 0x00],
[0xc0, 0x00, 0x86, 0x00, 0x00, 0x00, 0x86, 0x00],
[0x0e, 0x35, 0x00, 0x86, 0xff, 0x86, 0x00, 0x35],
[0x00, 0x83, 0x35, 0x00, 0x00, 0x00, 0x35, 0x83],
],
], dtype=np.uint8)
# Set up test data and state.
kwargs = {
'radius': 2,
'width': 1,
'gap': 2,
'count': 3,
}
pattern = p.Rings
# Run test and determine results.
self.fill_test(exp, pattern, kwargs)
class SolidTestCase(SourceTestCase):
def test_fill(self):
"""Given a size and location, Solid.fill should return a
volume filled with a single color.
"""
# Expected values.
exp = np.array([
[
[0x40, 0x40, 0x40, 0x40],
[0x40, 0x40, 0x40, 0x40],
[0x40, 0x40, 0x40, 0x40],
[0x40, 0x40, 0x40, 0x40],
],
[
[0x40, 0x40, 0x40, 0x40],
[0x40, 0x40, 0x40, 0x40],
[0x40, 0x40, 0x40, 0x40],
[0x40, 0x40, 0x40, 0x40],
],
], dtype=np.uint8)
# Test data and state.
kwargs = {
'color': 0x40 / 0xff,
}
pattern = p.Solid
# Run test and determine results.
self.fill_test(exp, pattern, kwargs)
class SpheresTestCase(SourceTestCase):
def test_spheres_fill_x(self):
"""Given a size and location, Spheres.fill should return a
volume filled a radial gradient.
"""
# Expected values.
exp = np.array([
[
[0x2e, 0x42, 0x53, 0x60, 0x68, 0x6b, 0x68, 0x60],
[0x42, 0x58, 0x6b, 0x7b, 0x85, 0x89, 0x85, 0x7b],
[0x53, 0x6b, 0x82, 0x94, 0xa1, 0xa6, 0xa1, 0x94],
[0x60, 0x7b, 0x94, 0xab, 0xbd, 0xc4, 0xbd, 0xab],
[0x68, 0x85, 0xa1, 0xbd, 0xd5, 0xe1, 0xd5, 0xbd],
[0x6b, 0x89, 0xa6, 0xc4, 0xe1, 0xff, 0xe1, 0xc4],
[0x68, 0x85, 0xa1, 0xbd, 0xd5, 0xe1, 0xd5, 0xbd],
[0x60, 0x7b, 0x94, 0xab, 0xbd, 0xc4, 0xbd, 0xab],
],
], dtype=np.uint8)
# Set up test data and state.
kwargs = {
'radius': 5,
'offset': 'x',
}
pattern = p.Spheres
# Run test and determine results.
self.fill_test(exp, pattern, kwargs)
def test_spheres_fill_y(self):
"""Given a size and location, Spheres.fill should return a
volume filled a radial gradient.
"""
# Expected values.
exp = np.array([
[
[0x6b, 0x89, 0xa6, 0xc4, 0xe1, 0xff, 0xe1, 0xc4],
[0x68, 0x85, 0xa1, 0xbd, 0xd5, 0xe1, 0xd5, 0xbd],
[0x60, 0x7b, 0x94, 0xab, 0xbd, 0xc4, 0xbd, 0xab],
[0x53, 0x6b, 0x82, 0x94, 0xa1, 0xa6, 0xa1, 0x94],
[0x42, 0x58, 0x6b, 0x7b, 0x85, 0x89, 0x85, 0x7b],
[0x2e, 0x42, 0x53, 0x60, 0x68, 0x6b, 0x68, 0x60],
[0x42, 0x58, 0x6b, 0x7b, 0x85, 0x89, 0x85, 0x7b],
[0x53, 0x6b, 0x82, 0x94, 0xa1, 0xa6, 0xa1, 0x94],
],
], dtype=np.uint8)
# Set up test data and state.
kwargs = {
'radius': 5,
'offset': 'y',
}
pattern = p.Spheres
# Run test and determine results.
self.fill_test(exp, pattern, kwargs)
class SpotTestCase(SourceTestCase):
def test_spot_fill(self):
"""Given a size and location, Spot.fill should return a
volume filled with a spot of color.
"""
# Expected values.
exp = np.array([
[
[0x32, 0x4a, 0x5d, 0x6a, 0x6e, 0x6a, 0x5d, 0x4a],
[0x4a, 0x66, 0x7c, 0x8c, 0x92, 0x8c, 0x7c, 0x66],
[0x5d, 0x7c, 0x99, 0xae, 0xb6, 0xae, 0x99, 0x7c],
[0x6a, 0x8c, 0xae, 0xcc, 0xda, 0xcc, 0xae, 0x8c],
[0x6e, 0x92, 0xb6, 0xda, 0xff, 0xda, 0xb6, 0x92],
[0x6a, 0x8c, 0xae, 0xcc, 0xda, 0xcc, 0xae, 0x8c],
[0x5d, 0x7c, 0x99, 0xae, 0xb6, 0xae, 0x99, 0x7c],
[0x4a, 0x66, 0x7c, 0x8c, 0x92, 0x8c, 0x7c, 0x66],
],
], dtype=np.uint8)
# Set up test data and state.
kwargs = {
'radius': 5,
}
pattern = p.Spot
# Run test and determine results.
self.fill_test(exp, pattern, kwargs)
class TextTestCase(SourceTestCase):
def test_text_fill(self):
"""Given a size and location, Text.fill should return a
volume with the configured text.
"""
# Expected values.
exp = np.array([
[
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x0b, 0x50, 0x2c, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x8e, 0x33, 0x3c, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x29, 0x8a, 0x74, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x61, 0x6f, 0x8a, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x0c, 0x00, 0x00, 0x00],
],
], dtype=np.uint8)
# Set up test data and state.
kwargs = {
'text': 's',
'size': 6,
'origin': (3, 0),
}
pattern = p.Text
# Run test and determine results.
self.fill_test(exp, pattern, kwargs)
class WaveTestCase(SourceTestCase):
def test_waves_fill(self):
"""Waves.fill should return a series of concentric rings."""
# Expected value.
exp = np.array([
[
[0x4c, 0x21, 0x75, 0xa3, 0xa3, 0x75, 0x21, 0x4c],
[0x21, 0xa3, 0xf0, 0xb2, 0xb2, 0xf0, 0xa3, 0x21],
[0x75, 0xf0, 0x69, 0x0d, 0x0d, 0x69, 0xf0, 0x75],
[0xa3, 0xb2, 0x0d, 0x86, 0x86, 0x0d, 0xb2, 0xa3],
[0xa3, 0xb2, 0x0d, 0x86, 0x86, 0x0d, 0xb2, 0xa3],
[0x75, 0xf0, 0x69, 0x0d, 0x0d, 0x69, 0xf0, 0x75],
[0x21, 0xa3, 0xf0, 0xb2, 0xb2, 0xf0, 0xa3, 0x21],
[0x4c, 0x21, 0x75, 0xa3, 0xa3, 0x75, 0x21, 0x4c],
],
], dtype=np.uint8)
# Set up test data and state.
pattern = p.Waves
kwargs = {
'length': 3,
'growth': 'l',
}
# Run test and determine results.
self.fill_test(exp, pattern, kwargs)
| 33.710306 | 70 | 0.484465 | 1,355 | 12,102 | 4.301845 | 0.154982 | 0.266255 | 0.358209 | 0.425459 | 0.760851 | 0.738892 | 0.724824 | 0.678676 | 0.527706 | 0.508664 | 0 | 0.237908 | 0.390101 | 12,102 | 358 | 71 | 33.804469 | 0.551822 | 0.160304 | 0 | 0.615702 | 0 | 0 | 0.014867 | 0 | 0 | 0 | 0.276699 | 0 | 0 | 1 | 0.045455 | false | 0 | 0.012397 | 0 | 0.099174 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1632e1fc69e55b6db6eca54d617779b428d114cc | 124 | py | Python | test_import.py | momocus/narou-recommender | 178e8a7cb5da9b5b3cfdbc473ce529d50a0bba5b | [
"Apache-2.0"
] | null | null | null | test_import.py | momocus/narou-recommender | 178e8a7cb5da9b5b3cfdbc473ce529d50a0bba5b | [
"Apache-2.0"
] | 3 | 2019-12-30T17:37:44.000Z | 2020-01-02T09:45:44.000Z | test_import.py | momocus/narou-recommender | 178e8a7cb5da9b5b3cfdbc473ce529d50a0bba5b | [
"Apache-2.0"
] | null | null | null | import bookmark # noqa
import narou # noqa
def test_success() -> None:
assert True
| 17.714286 | 38 | 0.5 | 12 | 124 | 5.083333 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.443548 | 124 | 6 | 39 | 20.666667 | 0.884058 | 0.072581 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25 | 1 | 0.25 | true | 0 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
164b9892315c61fcdb4489b2dc30de0e1cf3dcf2 | 10,072 | py | Python | tests/test_github.py | locriandev/ocp-build-data-validator | 66c8e7a37fc48af1bdb125c000e842b5c6ed536d | [
"Apache-2.0"
] | 1 | 2020-05-20T10:08:10.000Z | 2020-05-20T10:08:10.000Z | tests/test_github.py | locriandev/ocp-build-data-validator | 66c8e7a37fc48af1bdb125c000e842b5c6ed536d | [
"Apache-2.0"
] | 51 | 2019-10-08T09:55:38.000Z | 2022-03-28T08:08:15.000Z | tests/test_github.py | locriandev/ocp-build-data-validator | 66c8e7a37fc48af1bdb125c000e842b5c6ed536d | [
"Apache-2.0"
] | 18 | 2019-10-07T11:59:48.000Z | 2021-12-10T11:00:57.000Z | import unittest
from flexmock import flexmock
from validator import github
class TestGitHub(unittest.TestCase):
def setUp(self):
(flexmock(github.support)
.should_receive('resource_exists')
.and_return(True))
def test_no_declared_repository(self):
(url, err) = github.validate({}, {})
self.assertIsNone(url)
self.assertIsNone(err)
def test_repository_doesnt_exist(self):
(flexmock(github.support)
.should_receive('resource_exists')
.with_args('https://github.com/myorg/myrepo')
.and_return(False))
data = {
'content': {
'source': {
'git': {
'url': 'https://github.com/myorg/myrepo',
}
}
}
}
(url, err) = github.validate(data, {})
self.assertEqual(url, 'https://github.com/myorg/myrepo')
self.assertEqual(err, ('GitHub repository '
"https://github.com/myorg/myrepo doesn't "
'exist'))
def test_no_declared_branches(self):
data = {
'content': {
'source': {
'git': {
'url': 'https://github.com/myorg/myrepo',
}
}
}
}
(url, err) = github.validate(data, {})
self.assertEqual(url, 'https://github.com/myorg/myrepo')
self.assertEqual(err, ('No branches specified under '
'content > source > git'))
def test_target_branch_doesnt_exist(self):
(flexmock(github)
.should_receive('branch_exists')
.with_args('release-4.2', 'https://github.com/myorg/myrepo')
.and_return(False))
(flexmock(github)
.should_receive('branch_exists')
.with_args('fallback-branch', 'https://github.com/myorg/myrepo')
.and_return(True))
data = {
'content': {
'source': {
'git': {
'branch': {
'target': 'release-{MAJOR}.{MINOR}',
'fallback': 'fallback-branch',
},
'url': 'https://github.com/myorg/myrepo',
}
}
}
}
(url, err) = github.validate(data, {'vars': {'MAJOR': 4, 'MINOR': 2}})
self.assertEqual(url, 'https://github.com/myorg/myrepo')
self.assertEqual(err, None)
def test_target_nor_fallback_branches_exist(self):
(flexmock(github)
.should_receive('branch_exists')
.with_args('release-4.2', 'https://github.com/myorg/myrepo')
.and_return(False))
(flexmock(github)
.should_receive('branch_exists')
.with_args('fallback-branch', 'https://github.com/myorg/myrepo')
.and_return(False))
data = {
'content': {
'source': {
'git': {
'branch': {
'target': 'release-{MAJOR}.{MINOR}',
'fallback': 'fallback-branch',
},
'url': 'https://github.com/myorg/myrepo',
}
}
}
}
(url, err) = github.validate(data, {'vars': {'MAJOR': 4, 'MINOR': 2}})
self.assertEqual(url, 'https://github.com/myorg/myrepo')
self.assertEqual(err, ('At least one of the following branches '
'should exist: release-4.2 or fallback-branch'))
def test_declared_dockerfile_doesnt_exist(self):
(flexmock(github.support)
.should_receive('resource_exists')
.with_args('https://github.com/org/repo/blob/xyz/Dockerfile.rhel7')
.and_return(False))
data = {
'content': {
'source': {
'dockerfile': 'Dockerfile.rhel7',
'git': {
'branch': {
'target': 'xyz',
'fallback': 'fallback-branch',
},
'url': 'https://github.com/org/repo',
}
}
}
}
(url, err) = github.validate(data, {'vars': {'MAJOR': 4, 'MINOR': 2}})
self.assertEqual(url, 'https://github.com/org/repo')
self.assertEqual(err, ('dockerfile Dockerfile.rhel7 '
'not found on branch xyz'))
def test_declared_dockerfile_on_custom_path(self):
bad_file_url = 'https://github.com/org/repo/blob/xyz/Dockerfile.rhel7'
(flexmock(github.support)
.should_receive('resource_exists')
.with_args(bad_file_url)
.and_return(False))
good_file_url = ('https://github.com/org/repo/blob/xyz/my/custom/path/'
'Dockerfile.rhel7')
(flexmock(github.support)
.should_receive('resource_exists')
.with_args(good_file_url)
.and_return(True))
data = {
'content': {
'source': {
'dockerfile': 'Dockerfile.rhel7',
'git': {
'branch': {
'target': 'xyz',
'fallback': 'fallback-branch',
},
'url': 'https://github.com/org/repo',
},
'path': 'my/custom/path',
}
}
}
(url, err) = github.validate(data, {'vars': {'MAJOR': 4, 'MINOR': 2}})
self.assertEqual(url, 'https://github.com/org/repo')
self.assertIsNone(err)
def test_declared_manifest_doesnt_exist(self):
(flexmock(github.support)
.should_receive('resource_exists')
.with_args('https://github.com/org/repo/blob/xyz/my-manifests')
.and_return(False))
data = {
'content': {
'source': {
'git': {
'branch': {
'target': 'xyz',
'fallback': 'fallback-branch',
},
'url': 'https://github.com/org/repo',
}
}
},
'update-csv': {
'manifests-dir': 'my-manifests',
},
}
(url, err) = github.validate(data, {'vars': {'MAJOR': 4, 'MINOR': 2}})
self.assertEqual(url, 'https://github.com/org/repo')
self.assertEqual(err, 'manifests my-manifests not found on branch xyz')
def test_declared_manifest_on_custom_path(self):
bad_file_url = 'https://github.com/org/repo/blob/xyz/my-manifests'
(flexmock(github.support)
.should_receive('resource_exists')
.with_args(bad_file_url)
.and_return(False))
good_file_url = ('https://github.com/org/repo/blob/xyz/my/custom/path/'
'my-manifests')
(flexmock(github.support)
.should_receive('resource_exists')
.with_args(good_file_url)
.and_return(True))
data = {
'content': {
'source': {
'git': {
'branch': {
'target': 'xyz',
'fallback': 'fallback-branch',
},
'url': 'https://github.com/org/repo',
},
'path': 'my/custom/path',
}
},
'update-csv': {
'manifests-dir': 'my-manifests',
},
}
(url, err) = github.validate(data, {'vars': {'MAJOR': 4, 'MINOR': 2}})
self.assertEqual(url, 'https://github.com/org/repo')
self.assertIsNone(err)
def test_translate_private_upstreams_to_public(self):
data = {
'content': {
'source': {
'dockerfile': 'Dockerfile.rhel7',
'git': {
'branch': {
'target': 'xyz',
'fallback': 'fallback-branch',
},
'url': 'https://github.com/openshift-priv/repo',
}
}
}
}
group_cfg = {
'vars': {'MAJOR': 4, 'MINOR': 2},
'public_upstreams': [
{
'private': 'https://github.com/openshift-priv',
'public': 'https://github.com/openshift',
},
{
'private': 'https://github.com/openshift/ose',
'public': 'https://github.com/openshift/origin',
},
],
}
(url, err) = github.validate(data, group_cfg)
self.assertEqual(url, 'https://github.com/openshift/repo')
self.assertIsNone(err)
def test_translate_private_upstreams_to_public_no_match(self):
data = {
'content': {
'source': {
'dockerfile': 'Dockerfile.rhel7',
'git': {
'branch': {
'target': 'xyz',
'fallback': 'fallback-branch',
},
'url': 'https://github.com/org/repo',
}
}
},
'update-csv': {
'manifests-dir': 'my-manifests',
},
}
(url, err) = github.validate(data, {'vars': {'MAJOR': 4, 'MINOR': 2}})
self.assertEqual(url, 'https://github.com/org/repo')
self.assertIsNone(err)
| 34.493151 | 79 | 0.437649 | 834 | 10,072 | 5.148681 | 0.116307 | 0.092222 | 0.117373 | 0.095016 | 0.867024 | 0.810899 | 0.800186 | 0.800186 | 0.77224 | 0.761993 | 0 | 0.00516 | 0.422756 | 10,072 | 291 | 80 | 34.611684 | 0.733402 | 0 | 0 | 0.599222 | 0 | 0 | 0.2639 | 0.004567 | 0 | 0 | 0 | 0 | 0.085603 | 1 | 0.046693 | false | 0 | 0.011673 | 0 | 0.062257 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
169c93c563140a35181abe42e594070aeb1f8236 | 16,288 | py | Python | tests/integration/test_pr_comment.py | nikromen/packit-service | 04be15478e79504c2e408d9bc65667182ffa2801 | [
"MIT"
] | null | null | null | tests/integration/test_pr_comment.py | nikromen/packit-service | 04be15478e79504c2e408d9bc65667182ffa2801 | [
"MIT"
] | null | null | null | tests/integration/test_pr_comment.py | nikromen/packit-service | 04be15478e79504c2e408d9bc65667182ffa2801 | [
"MIT"
] | null | null | null | # Copyright Contributors to the Packit project.
# SPDX-License-Identifier: MIT
import json
from typing import List
import pytest
from celery.canvas import Signature
from flexmock import flexmock
from github import Github
from ogr.services.github import GithubProject
from packit.config import JobConfigTriggerType
from packit.local_project import LocalProject
from packit_service.config import ServiceConfig
from packit_service.constants import (
SANDCASTLE_WORK_DIR,
TASK_ACCEPTED,
)
from packit_service.models import PullRequestModel
from packit_service.service.db_triggers import AddPullRequestDbTrigger
from packit_service.worker.build.copr_build import CoprBuildJobHelper
from packit_service.worker.build.koji_build import KojiBuildJobHelper
from packit_service.worker.jobs import SteveJobs, get_packit_commands_from_comment
from packit_service.worker.result import TaskResults
from packit_service.worker.tasks import (
run_copr_build_handler,
run_koji_build_handler,
run_testing_farm_handler,
)
from packit_service.worker.testing_farm import TestingFarmJobHelper
from packit_service.worker.allowlist import Allowlist
from packit_service.worker.reporting import BaseCommitStatus
from tests.spellbook import DATA_DIR, first_dict_value, get_parameters_from_results
@pytest.fixture(scope="module")
def pr_copr_build_comment_event():
return json.loads(
(DATA_DIR / "webhooks" / "github" / "pr_comment_copr_build.json").read_text()
)
@pytest.fixture(scope="module")
def pr_build_comment_event():
return json.loads(
(DATA_DIR / "webhooks" / "github" / "pr_comment_build.json").read_text()
)
@pytest.fixture(scope="module")
def pr_production_build_comment_event():
return json.loads(
(
DATA_DIR / "webhooks" / "github" / "pr_comment_production_build.json"
).read_text()
)
@pytest.fixture(scope="module")
def pr_embedded_command_comment_event():
return json.loads(
(
DATA_DIR / "webhooks" / "github" / "pr_comment_embedded_command.json"
).read_text()
)
@pytest.fixture(scope="module")
def pr_empty_comment_event():
return json.loads(
(DATA_DIR / "webhooks" / "github" / "pr_comment_empty.json").read_text()
)
@pytest.fixture(scope="module")
def pr_packit_only_comment_event():
return json.loads(
(
DATA_DIR / "webhooks" / "github" / "issue_comment_packit_only.json"
).read_text()
)
@pytest.fixture(scope="module")
def pr_wrong_packit_comment_event():
return json.loads(
(
DATA_DIR / "webhooks" / "github" / "issue_comment_wrong_packit_command.json"
).read_text()
)
@pytest.fixture(
params=[
[
{
"trigger": "pull_request",
"job": "copr_build",
"metadata": {"targets": "fedora-rawhide-x86_64"},
}
],
[
{
"trigger": "pull_request",
"job": "tests",
"metadata": {"targets": "fedora-rawhide-x86_64"},
}
],
[
{
"trigger": "pull_request",
"job": "copr_build",
"metadata": {"targets": "fedora-rawhide-x86_64"},
},
{
"trigger": "pull_request",
"job": "tests",
"metadata": {"targets": "fedora-rawhide-x86_64"},
},
],
]
)
def mock_pr_comment_functionality(request):
packit_yaml = (
"{'specfile_path': 'the-specfile.spec', 'synced_files': [], 'jobs': "
+ str(request.param)
+ "}"
)
flexmock(
GithubProject,
full_repo_name="packit-service/hello-world",
get_file_content=lambda path, ref: packit_yaml,
get_files=lambda ref, filter_regex: ["the-specfile.spec"],
get_web_url=lambda: "https://github.com/the-namespace/the-repo",
get_pr=lambda pr_id: flexmock(head_commit="12345"),
)
flexmock(Github, get_repo=lambda full_name_or_id: None)
config = ServiceConfig()
config.command_handler_work_dir = SANDCASTLE_WORK_DIR
flexmock(ServiceConfig).should_receive("get_service_config").and_return(config)
trigger = flexmock(
job_config_trigger_type=JobConfigTriggerType.pull_request, id=123
)
flexmock(AddPullRequestDbTrigger).should_receive("db_trigger").and_return(trigger)
flexmock(PullRequestModel).should_receive("get_by_id").with_args(123).and_return(
trigger
)
flexmock(LocalProject, refresh_the_arguments=lambda: None)
flexmock(Allowlist, check_and_report=True)
def one_job_finished_with_msg(results: List[TaskResults], msg: str):
for value in results:
assert value["success"]
if value["details"]["msg"] == msg:
break
else:
raise AssertionError(f"None of the jobs finished with {msg!r}")
def test_pr_comment_copr_build_handler(
mock_pr_comment_functionality, pr_copr_build_comment_event
):
flexmock(PullRequestModel).should_receive("get_or_create").with_args(
pr_id=9,
namespace="packit-service",
repo_name="hello-world",
project_url="https://github.com/packit-service/hello-world",
).and_return(
flexmock(id=9, job_config_trigger_type=JobConfigTriggerType.pull_request)
)
flexmock(CoprBuildJobHelper).should_receive("run_copr_build").and_return(
TaskResults(success=True, details={})
).once()
flexmock(GithubProject).should_receive("get_files").and_return(["foo.spec"])
flexmock(GithubProject).should_receive("get_web_url").and_return(
"https://github.com/the-namespace/the-repo"
)
flexmock(GithubProject).should_receive("is_private").and_return(False)
flexmock(CoprBuildJobHelper).should_receive("report_status_to_all").with_args(
description=TASK_ACCEPTED,
state=BaseCommitStatus.pending,
url="",
).once()
flexmock(Signature).should_receive("apply_async").once()
processing_results = SteveJobs().process_message(pr_copr_build_comment_event)
event_dict, job, job_config, package_config = get_parameters_from_results(
processing_results
)
results = run_copr_build_handler(
package_config=package_config,
event=event_dict,
job_config=job_config,
)
assert first_dict_value(results["job"])["success"]
def test_pr_comment_build_handler(
mock_pr_comment_functionality, pr_build_comment_event
):
flexmock(PullRequestModel).should_receive("get_or_create").with_args(
pr_id=9,
namespace="packit-service",
repo_name="hello-world",
project_url="https://github.com/packit-service/hello-world",
).and_return(
flexmock(id=9, job_config_trigger_type=JobConfigTriggerType.pull_request)
)
flexmock(CoprBuildJobHelper).should_receive("run_copr_build").and_return(
TaskResults(success=True, details={})
)
flexmock(GithubProject, get_files="foo.spec")
flexmock(GithubProject).should_receive("is_private").and_return(False)
flexmock(CoprBuildJobHelper).should_receive("report_status_to_all").with_args(
description=TASK_ACCEPTED,
state=BaseCommitStatus.pending,
url="",
).once()
flexmock(Signature).should_receive("apply_async").once()
processing_results = SteveJobs().process_message(pr_build_comment_event)
event_dict, job, job_config, package_config = get_parameters_from_results(
processing_results
)
results = run_copr_build_handler(
package_config=package_config,
event=event_dict,
job_config=job_config,
)
assert first_dict_value(results["job"])["success"]
def test_pr_comment_production_build_handler(pr_production_build_comment_event):
packit_yaml = str(
{
"specfile_path": "the-specfile.spec",
"synced_files": [],
"jobs": [
{
"trigger": "pull_request",
"job": "production_build",
"metadata": {"targets": "fedora-rawhide-x86_64", "scratch": "true"},
}
],
}
)
flexmock(
GithubProject,
full_repo_name="packit-service/hello-world",
get_file_content=lambda path, ref: packit_yaml,
get_files=lambda ref, filter_regex: ["the-specfile.spec"],
get_web_url=lambda: "https://github.com/the-namespace/the-repo",
get_pr=lambda pr_id: flexmock(head_commit="12345"),
)
flexmock(Github, get_repo=lambda full_name_or_id: None)
config = ServiceConfig()
config.command_handler_work_dir = SANDCASTLE_WORK_DIR
flexmock(ServiceConfig).should_receive("get_service_config").and_return(config)
trigger = flexmock(
job_config_trigger_type=JobConfigTriggerType.pull_request, id=123
)
flexmock(AddPullRequestDbTrigger).should_receive("db_trigger").and_return(trigger)
flexmock(PullRequestModel).should_receive("get_by_id").with_args(123).and_return(
trigger
)
flexmock(LocalProject, refresh_the_arguments=lambda: None)
flexmock(Allowlist, check_and_report=True)
flexmock(PullRequestModel).should_receive("get_or_create").with_args(
pr_id=9,
namespace="packit-service",
repo_name="hello-world",
project_url="https://github.com/packit-service/hello-world",
).and_return(
flexmock(id=9, job_config_trigger_type=JobConfigTriggerType.pull_request)
)
flexmock(KojiBuildJobHelper).should_receive("run_koji_build").and_return(
TaskResults(success=True, details={})
)
flexmock(GithubProject, get_files="foo.spec")
flexmock(GithubProject).should_receive("is_private").and_return(False)
flexmock(KojiBuildJobHelper).should_receive("report_status_to_all").with_args(
description=TASK_ACCEPTED,
state=BaseCommitStatus.pending,
url="",
).once()
flexmock(Signature).should_receive("apply_async").once()
processing_results = SteveJobs().process_message(pr_production_build_comment_event)
event_dict, job, job_config, package_config = get_parameters_from_results(
processing_results
)
results = run_koji_build_handler(
package_config=package_config,
event=event_dict,
job_config=job_config,
)
assert first_dict_value(results["job"])["success"]
@pytest.mark.parametrize(
"comment",
(
"",
" ",
" ",
"some unrelated",
"some\nmore\nunrelated\ntext",
"even\nsome → unicode",
" stuff",
" \n ",
"x ",
"""comment with embedded /packit build not recognized
unless /packit command is on line by itself""",
"\n2nd line\n\n4th line",
"1st line\n\t\n\t\t\n4th line\n",
),
)
def test_pr_comment_invalid(comment):
commands = get_packit_commands_from_comment(comment)
assert len(commands) == 0
@pytest.mark.parametrize(
"comments_list",
(
"/packit build",
"/packit build ",
"/packit build ",
" /packit build",
" /packit build ",
"asd\n/packit build\n",
"asd\n /packit build \n",
"Should be fixed now, let's\n /packit build\n it.",
),
)
def test_pr_embedded_command_handler(
mock_pr_comment_functionality, pr_embedded_command_comment_event, comments_list
):
flexmock(PullRequestModel).should_receive("get_or_create").with_args(
pr_id=9,
namespace="packit-service",
repo_name="hello-world",
project_url="https://github.com/packit-service/hello-world",
).and_return(
flexmock(id=9, job_config_trigger_type=JobConfigTriggerType.pull_request)
)
pr_embedded_command_comment_event["comment"]["body"] = comments_list
flexmock(CoprBuildJobHelper).should_receive("run_copr_build").and_return(
TaskResults(success=True, details={})
)
flexmock(GithubProject, get_files="foo.spec")
flexmock(GithubProject).should_receive("is_private").and_return(False)
flexmock(CoprBuildJobHelper).should_receive("report_status_to_all").with_args(
description=TASK_ACCEPTED,
state=BaseCommitStatus.pending,
url="",
).once()
flexmock(Signature).should_receive("apply_async").once()
processing_results = SteveJobs().process_message(pr_embedded_command_comment_event)
event_dict, job, job_config, package_config = get_parameters_from_results(
processing_results
)
results = run_copr_build_handler(
package_config=package_config,
event=event_dict,
job_config=job_config,
)
assert first_dict_value(results["job"])["success"]
def test_pr_comment_empty_handler(
mock_pr_comment_functionality, pr_empty_comment_event
):
flexmock(GithubProject).should_receive("is_private").and_return(False)
flexmock(GithubProject).should_receive("can_merge_pr").and_return(True)
results = SteveJobs().process_message(pr_empty_comment_event)
assert results == []
def test_pr_comment_packit_only_handler(
mock_pr_comment_functionality, pr_packit_only_comment_event
):
flexmock(GithubProject).should_receive("is_private").and_return(False)
flexmock(GithubProject).should_receive("can_merge_pr").and_return(True)
results = SteveJobs().process_message(pr_packit_only_comment_event)
assert results == []
def test_pr_comment_wrong_packit_command_handler(
mock_pr_comment_functionality, pr_wrong_packit_comment_event
):
flexmock(GithubProject).should_receive("is_private").and_return(False)
flexmock(GithubProject).should_receive("can_merge_pr").and_return(True)
results = SteveJobs().process_message(pr_wrong_packit_comment_event)
assert results == []
def test_pr_test_command_handler(pr_embedded_command_comment_event):
jobs = [
{
"trigger": "pull_request",
"job": "tests",
"metadata": {"targets": "fedora-rawhide-x86_64"},
}
]
packit_yaml = (
"{'specfile_path': 'the-specfile.spec', 'synced_files': [], 'jobs': "
+ str(jobs)
+ "}"
)
flexmock(
GithubProject,
full_repo_name="packit-service/hello-world",
get_file_content=lambda path, ref: packit_yaml,
get_files=lambda ref, filter_regex: ["the-specfile.spec"],
get_web_url=lambda: "https://github.com/the-namespace/the-repo",
get_pr=lambda pr_id: flexmock(head_commit="12345"),
)
flexmock(Github, get_repo=lambda full_name_or_id: None)
config = ServiceConfig()
config.command_handler_work_dir = SANDCASTLE_WORK_DIR
flexmock(ServiceConfig).should_receive("get_service_config").and_return(config)
trigger = flexmock(
job_config_trigger_type=JobConfigTriggerType.pull_request, id=123
)
flexmock(AddPullRequestDbTrigger).should_receive("db_trigger").and_return(trigger)
flexmock(PullRequestModel).should_receive("get_by_id").with_args(123).and_return(
trigger
)
flexmock(LocalProject, refresh_the_arguments=lambda: None)
flexmock(Allowlist, check_and_report=True)
flexmock(PullRequestModel).should_receive("get_or_create").with_args(
pr_id=9,
namespace="packit-service",
repo_name="hello-world",
project_url="https://github.com/packit-service/hello-world",
).and_return(
flexmock(id=9, job_config_trigger_type=JobConfigTriggerType.pull_request)
)
pr_embedded_command_comment_event["comment"]["body"] = "/packit test"
flexmock(GithubProject, get_files="foo.spec")
flexmock(GithubProject).should_receive("is_private").and_return(False)
flexmock(Signature).should_receive("apply_async").once()
flexmock(TestingFarmJobHelper).should_receive("run_testing_farm_on_all").and_return(
TaskResults(success=True, details={})
)
processing_results = SteveJobs().process_message(pr_embedded_command_comment_event)
event_dict, job, job_config, package_config = get_parameters_from_results(
processing_results
)
run_testing_farm_handler(
package_config=package_config,
event=event_dict,
job_config=job_config,
)
| 34.508475 | 88 | 0.689342 | 1,867 | 16,288 | 5.668988 | 0.1173 | 0.050359 | 0.019652 | 0.041761 | 0.815382 | 0.777211 | 0.756708 | 0.727891 | 0.710979 | 0.699358 | 0 | 0.005511 | 0.197937 | 16,288 | 471 | 89 | 34.581741 | 0.804577 | 0.004543 | 0 | 0.57767 | 0 | 0 | 0.161212 | 0.028256 | 0 | 0 | 0 | 0 | 0.024272 | 1 | 0.043689 | false | 0 | 0.053398 | 0.01699 | 0.114078 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
bc56526248fde2e38ac876fa499f8e2666966c90 | 50 | py | Python | routesimilarity/__init__.py | shkr/routesimilarity | e9e2a974b67e5b9f1482fee0ed3853691feac2d1 | [
"MIT"
] | null | null | null | routesimilarity/__init__.py | shkr/routesimilarity | e9e2a974b67e5b9f1482fee0ed3853691feac2d1 | [
"MIT"
] | null | null | null | routesimilarity/__init__.py | shkr/routesimilarity | e9e2a974b67e5b9f1482fee0ed3853691feac2d1 | [
"MIT"
] | null | null | null | from .directed_hausdorff import directed_hausdorff | 50 | 50 | 0.92 | 6 | 50 | 7.333333 | 0.666667 | 0.772727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.06 | 50 | 1 | 50 | 50 | 0.93617 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bc718ff04af724e984c2adf24b6cb10b1e3df682 | 41 | py | Python | ssd/modeling/detector/__init__.py | tkhe/ssd-family | a797ec36fda59549aff54419c105813c33d8cdd3 | [
"MIT"
] | 1 | 2019-07-12T02:21:24.000Z | 2019-07-12T02:21:24.000Z | ssd/modeling/detector/__init__.py | tkhe/ssd-family | a797ec36fda59549aff54419c105813c33d8cdd3 | [
"MIT"
] | 3 | 2021-06-08T21:36:05.000Z | 2022-03-12T00:30:57.000Z | ssd/modeling/detector/__init__.py | tkhe/ssd-family | a797ec36fda59549aff54419c105813c33d8cdd3 | [
"MIT"
] | 1 | 2020-08-12T15:02:17.000Z | 2020-08-12T15:02:17.000Z | from .build import build_detection_model
| 20.5 | 40 | 0.878049 | 6 | 41 | 5.666667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.097561 | 41 | 1 | 41 | 41 | 0.918919 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bcbc611be495cd0413484dbb1b376e3b5fa533f3 | 30 | py | Python | trial_launchpad/__init__.py | aierh/autoML | 8e31966edf6de2c223d5eeb6cd4b4dbd6ddbbf77 | [
"MIT"
] | 185 | 2019-12-26T12:41:53.000Z | 2020-09-18T06:22:32.000Z | trial_launchpad/__init__.py | aierh/autoML | 8e31966edf6de2c223d5eeb6cd4b4dbd6ddbbf77 | [
"MIT"
] | 8 | 2020-02-25T19:32:22.000Z | 2020-09-18T06:17:48.000Z | trial_launchpad/__init__.py | aierh/autoML | 8e31966edf6de2c223d5eeb6cd4b4dbd6ddbbf77 | [
"MIT"
] | 27 | 2019-12-26T15:02:47.000Z | 2020-09-08T21:24:54.000Z | from .launcher import Launcher | 30 | 30 | 0.866667 | 4 | 30 | 6.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 30 | 1 | 30 | 30 | 0.962963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4c0a6e8b6c59e7c55ae4125a471635061df67e54 | 272 | py | Python | pylgmath/__init__.py | utiasASRL/pylgmath | b392f9960c2b12758bd05a639966f161240282cb | [
"BSD-3-Clause"
] | 3 | 2021-11-11T17:54:35.000Z | 2021-12-09T01:44:16.000Z | pylgmath/__init__.py | utiasASRL/pylgmath | b392f9960c2b12758bd05a639966f161240282cb | [
"BSD-3-Clause"
] | null | null | null | pylgmath/__init__.py | utiasASRL/pylgmath | b392f9960c2b12758bd05a639966f161240282cb | [
"BSD-3-Clause"
] | null | null | null | from .common import operations as cmnop
from .so3 import operations as so3op
from .se3 import operations as se3op
from .so3.rotation import Rotation
from .se3.transformation import Transformation
from .se3.transformation_with_covariance import TransformationWithCovariance | 45.333333 | 76 | 0.860294 | 35 | 272 | 6.628571 | 0.428571 | 0.206897 | 0.232759 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028807 | 0.106618 | 272 | 6 | 76 | 45.333333 | 0.925926 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4c4a6147125c8df46b7a768ee1c85eb44c9f27ee | 11,961 | py | Python | basis_pursuit/algorithms.py | ymalitsky/coo-pda | 8b604c1b2927d3f0f9adb49f2d09f88481e5d734 | [
"MIT"
] | null | null | null | basis_pursuit/algorithms.py | ymalitsky/coo-pda | 8b604c1b2927d3f0f9adb49f2d09f88481e5d734 | [
"MIT"
] | null | null | null | basis_pursuit/algorithms.py | ymalitsky/coo-pda | 8b604c1b2927d3f0f9adb49f2d09f88481e5d734 | [
"MIT"
] | null | null | null | # This module contains implementation of the primal-dual algorithm and itc coordinate extensions for the basis pursuit problem.
import numpy as np
import scipy.linalg as LA
from time import process_time, time
from numba import jit, vectorize
from prox_numba import prox_l1
from utils import subdif_gap
def pd_basis_pursuit(A, b, x0, sigma, tau, numb_iter=100, tol=1e-6):
"""
    Implementation of the primal-dual algorithm of Chambolle and Pock for the basis pursuit problem:
\min |x|_1 s.t. Ax = b
A : 2-dimensional array
sigma: positive number, the step for the dual variable
tau: positive number, the step for the primal variable
    The algorithm runs for at most numb_iter iterations, stopping early
    once the criteria reach tol accuracy. The stopping criteria are the
    primal gap (based on the first-order optimality condition) and the
    feasibility gap ||Ax - b||.
"""
m,n = A.shape
x = x0
#y = A.dot(x0) - b
y = np.zeros(m)
STOP = False
for i in range(numb_iter):
ATy = A.T.dot(y)
x1 = prox_l1(x - tau * ATy, tau)
z = x1 + (x1 - x)
        # residual of the extrapolated point z = 2*x1 - x
res = A.dot(z) - b
y += sigma * res
x = x1
# compute the distance between subdifferential and a current point
gap1 = subdif_gap(-ATy, x)
        ### Change to a plain formula in the noise-free case
#gap2 = LA.norm(A.T.dot(res))
gap2 = LA.norm(res, ord=np.inf)
if gap1 <= tol and gap2 <= tol:
STOP = True
break
if STOP:
output = [i, gap1, gap2]
else:
output = [-1, gap1, gap2]
return x, y, output
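# A minimal usage sketch (illustrative random instance, not from the original
# code): Chambolle-Pock converges when sigma * tau * ||A||_2^2 <= 1, so we
# take sigma = tau = 1 / ||A||_2 here.
def _demo_pd_basis_pursuit(m=50, n=200, k=5):
    rng = np.random.RandomState(0)
    A = rng.randn(m, n)
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.randn(k)
    b = A.dot(x_true)
    step = 1.0 / LA.norm(A, 2)  # spectral norm of A
    return pd_basis_pursuit(A, b, np.zeros(n), sigma=step, tau=step,
                            numb_iter=20000)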
# ------------------------------------------------------------------------------------
# ------------------------ Coordinate primal-dual algorithm --------------------------
# ------------------------------------------------------------------------------------
@jit(nopython=True, nogil=True, cache=True)
def coo_pd_update_numba(x, y, u, AT, n, steps, sigma, ik):
"""
Update for the coordinate primal-dual method for basis pursuit
"""
a = AT[ik]
tau = steps[ik] / sigma
t = prox_l1(x[ik] - tau / n * np.dot(a, y), tau / n)
h = t - x[ik]
y += u + sigma * (n + 1) * h * a
u += sigma * h * a
x[ik] = t
return x, y, u
def coo_pd_numba(AT, b, x0, steps, sigma, numb_iter=100, tol=1e-6):
"""
    Coordinate version of the primal-dual algorithm of Pock and Chambolle
    for the problem
    min_x |x|_1 s.t. Ax = b
    AT equals A.T; this layout is more convenient for the algorithm.
    Note that AT must be C-contiguous, so the view A.T will not work;
    make a copy with A.T.copy().
    Instead of running a random generator in each iteration, we shuffle the
    indices in advance.
    The algorithm runs for at most numb_iter epochs, stopping early once the
    criteria reach tol accuracy. The stopping criteria are the
    primal gap (based on the first-order optimality condition) and the
    feasibility gap ||Ax - b||.
"""
n, m = AT.shape
x = x0.copy()
u = sigma * (np.dot(AT.T, x0) - b)
y = u.copy()
STOP = False
np.random.seed(0)
permut = np.arange(n)
for epoch in range(numb_iter):
np.random.shuffle(permut)
for ik in permut:
x, y, u = coo_pd_update_numba(x, y, u, AT, n, steps, sigma, ik)
f_gap = 1 / sigma * LA.norm(u, ord=np.inf)
# we don't want to compute s_gap in every iteration, since it
# requires computing A.T.dot(y). We compute it only if the
# feasibility gap is already small.
if f_gap <= tol:
s_gap = subdif_gap(-np.dot(AT, y), x)
if s_gap <= tol:
STOP = True
break
if STOP:
output = [epoch, s_gap, f_gap]
else:
f_gap = 1 / sigma * np.sqrt(np.dot(u, u))
s_gap = subdif_gap(-np.dot(AT, y), x)
output = [-1, s_gap, f_gap]
return x, y, output
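# A minimal usage sketch for the coordinate solver. AT must be C-contiguous,
# hence A.T.copy(). The per-coordinate steps below use 1 / ||a_i||_2^2, which
# keeps tau * sigma * ||a_i||^2 = 1 for every coordinate; this particular
# choice is an assumption on our part, the solver accepts any valid steps array.
def _demo_coo_pd(m=50, n=200, k=5):
    rng = np.random.RandomState(0)
    A = rng.randn(m, n)
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.randn(k)
    b = A.dot(x_true)
    AT = A.T.copy()  # C-contiguous; rows of AT are the columns a_i of A
    steps = 1.0 / np.sum(AT ** 2, axis=1)
    return coo_pd_numba(AT, b, np.zeros(n), steps, sigma=1.0, numb_iter=200)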
# ------------------------------------------------------------------------------------
# ------------------------ Block-coordinate primal-dual algorithm --------------------
# ------------------------------------------------------------------------------------
# block-coordinate update
@jit(nopython=True, nogil=True, cache=True)
def coo_block_pd_update_numba(x, y, u, AT, n_block, dim_block, steps, sigma, ik):
"""
Update for block-coordinate primal-dual method for basis pursuit problem
n_block : number of blocks
dim_block: dimension of one block (we assume that all blocks have the same dimension)
steps: array of inverse operator norms for blocks A[i]
    sigma: dual stepsize. This is the only parameter that influences convergence
    ik: integer from 0 to n_block - 1; selects which block to update.
"""
block0 = ik * dim_block
block1 = (ik + 1) * dim_block
x_block = x[block0: block1].copy()
# Ai = A[:, block0: block1]
Ai = AT[block0:block1]
# corresponds to the block of the size dim_block x m
tau = steps[ik] / sigma
block_update = prox_l1(
x_block - tau / n_block * np.dot(Ai, y), tau / n_block)
h = block_update - x_block
Aih = np.dot(Ai.T, h)
y += u + sigma * (n_block + 1) * Aih
u += sigma * Aih
x[block0:block1] = block_update
return x, y, u
def coo_block_pd_numba(AT, b, x0, steps, sigma, numb_iter=100, tol=1e-6):
"""
Block-coordinate version of primal-dual algorithm of Pock and Chambolle for problem
min_x |x|_1 s.t. Ax =b
AT equals to A.T. This is more convenient for the
algorithm. Notice that AT should have C-contiguous flag. This
means that A.T will not work, it is better to make a copy
A.T.copy()
The number of blocks equals to n diveded over the size of the array steps.
Algorithm runs either for numb_iter iteration or when the stopping
criteria reaches tol accuracy. The stopping criteria include:
primal gap (based on the first order condition) and the
feasibility gap ||Ax-b||.
"""
n, m = AT.shape
x = x0.copy()
u = sigma * (np.dot(AT.T, x0) - b)
y = u.copy()
n_block = len(steps)
dim_block = n // n_block
STOP = False
np.random.seed(0)
permut = np.arange(n_block)
for epoch in range(numb_iter):
np.random.shuffle(permut)
for i in range(n_block):
ik = permut[i]
x, y, u = coo_block_pd_update_numba(
x, y, u, AT, n_block, dim_block, steps, sigma, ik)
f_gap = 1 / sigma * LA.norm(u, ord=np.inf)
# we don't want to compute s_gap in every iteration, since it
# requires computing A.T.dot(y). We compute it only if the
# feasibility gap is already small.
if f_gap <= tol:
s_gap = subdif_gap(-np.dot(AT, y), x)
if s_gap <= tol:
STOP = True
break
if STOP:
# n_epoch = i // n_block
output = [epoch, s_gap, f_gap]
else:
f_gap = 1 / sigma * LA.norm(u, ord=np.inf)
s_gap = subdif_gap(-np.dot(AT, y), x)
        # epoch = -1 signals that the algorithm did not converge within
        # numb_iter epochs
epoch = -1
output = [epoch, s_gap, f_gap]
return x, y, output
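# A minimal usage sketch for the block variant. The length of `steps` fixes
# the number of blocks, and n must be divisible by it. Following the update's
# docstring, steps should hold inverse operator norms of the blocks; squaring
# them, as below, matches the usual tau * sigma * ||A_i||^2 <= 1 rule and is
# an assumption on our part.
def _demo_coo_block_pd(m=50, n=200, n_block=10):
    rng = np.random.RandomState(0)
    A = rng.randn(m, n)
    b = A.dot(rng.randn(n))
    AT = A.T.copy()  # C-contiguous, as required
    dim = n // n_block
    steps = np.array([1.0 / LA.norm(AT[i * dim:(i + 1) * dim], 2) ** 2
                      for i in range(n_block)])
    return coo_block_pd_numba(AT, b, np.zeros(n), steps, sigma=1.0,
                              numb_iter=200)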
# ------------------------------------------------------------------------------------
# ------ Full variants of the coordinate algorithms. Useful for line profiling -------
# ------------------------------------------------------------------------------------
def coo_block_pd_full(AT, b, x0, steps, sigma, numb_iter=100, tol=1e-6):
"""
    Block-coordinate version of the primal-dual algorithm of
    Chambolle and Pock for the problem min_x |x|_1 s.t. Ax = b. The number of
    blocks equals the length of the steps array.
    AT equals A.T; this layout is more convenient for the algorithm.
    Note that AT must be C-contiguous, so the view A.T will not work;
    make a copy with A.T.copy().
    Instead of running a random generator in each iteration, we shuffle the
    indices in advance.
    The algorithm runs for at most numb_iter epochs, stopping early once the
    criteria reach tol accuracy. The stopping criteria are the
    primal gap (based on the first-order optimality condition) and the
    feasibility gap ||Ax - b||.
"""
n, m = AT.shape
x = x0.copy()
u = sigma * (AT.T.dot(x0) - b)
y = u.copy()
n_block = len(steps)
dim_block = n // n_block
STOP = False
np.random.seed(0)
# make permutation of all blocks
permut = np.arange(n_block)
for epoch in range(numb_iter):
np.random.shuffle(permut)
for i in range(n_block):
ik = permut[i]
block0 = ik * dim_block
block1 = (ik + 1) * dim_block
x_block = x[block0: block1].copy()
Ai = AT[block0: block1]
tau = steps[ik] / sigma
AiTy = np.dot(Ai, y)
tmp1 = x_block - (tau / n_block) * AiTy
block_update = prox_l1(tmp1, tau / n_block)
h = block_update - x_block
Aih = np.dot(Ai.T, h)
y += u + sigma * (n_block + 1) * Aih
u += sigma * Aih
x[block0:block1] = block_update
f_gap = 1 / sigma * LA.norm(u, ord=np.inf)
# we don't want to compute s_gap in every iteration, since it
# requires computing A.T.dot(y). We compute it only if the
# feasibility gap is already small.
if f_gap <= tol:
s_gap = subdif_gap(-np.dot(AT, y), x)
if s_gap <= tol:
STOP = True
break
if STOP:
# n_epoch = i // n_block
output = [epoch, s_gap, f_gap]
else:
f_gap = 1 / sigma * np.sqrt(np.dot(u, u))
s_gap = subdif_gap(-np.dot(AT, y), x)
        # epoch = -1 signals that the algorithm did not converge within
        # numb_iter epochs
epoch = -1
output = [epoch, s_gap, f_gap]
return x, y, output
def coo_pd_full(AT, b, x0, steps, sigma, numb_iter=100, tol=1e-6):
"""
    Coordinate version of the primal-dual algorithm of Pock and Chambolle
    for the problem min_x |x|_1 s.t. Ax = b
    AT equals A.T; this layout is more convenient for the algorithm.
    Note that AT must be C-contiguous, so the view A.T will not work;
    make a copy with A.T.copy().
    Instead of running a random generator in each iteration, we shuffle the
    indices in advance.
    The algorithm runs for at most numb_iter epochs, stopping early once the
    criteria reach tol accuracy. The stopping criteria are the
    primal gap (based on the first-order optimality condition) and the
    feasibility gap ||Ax - b||.
"""
n, m = AT.shape
x = x0.copy()
u = sigma * (np.dot(AT.T, x0) - b)
y = u.copy()
STOP = False
np.random.seed(0)
#make permutation of all blocks
permut = np.arange(n)
for epoch in range(numb_iter):
np.random.shuffle(permut)
for ik in permut:
a = AT[ik]
tau = steps[ik] / sigma
ay = np.dot(a, y)
t = prox_l1(x[ik] - (tau / n) * ay, tau / n)
h = t - x[ik]
u += (sigma * h) * a
y += u + (sigma * n * h) * a
x[ik] = t
        f_gap = 1 / sigma * LA.norm(u, ord=np.inf)
# we don't want to compute s_gap in every iteration, since it
# requires computing A.T.dot(y). We compute it only if the
# feasibility gap is already small.
if f_gap <= tol:
s_gap = subdif_gap(-np.dot(AT, y), x)
if s_gap <= tol:
STOP = True
break
if STOP:
output = [epoch, s_gap, f_gap]
else:
f_gap = 1 / sigma * np.sqrt(np.dot(u, u))
s_gap = subdif_gap(-np.dot(AT, y), x)
output = [-1, s_gap, f_gap]
return x, y, output
| 31.229765 | 127 | 0.559569 | 1,800 | 11,961 | 3.630556 | 0.122778 | 0.01469 | 0.011783 | 0.012242 | 0.786534 | 0.777353 | 0.742464 | 0.708493 | 0.704361 | 0.693344 | 0 | 0.013166 | 0.295126 | 11,961 | 382 | 128 | 31.311518 | 0.76195 | 0.446869 | 0 | 0.756906 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.038674 | false | 0 | 0.033149 | 0 | 0.110497 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4c6a875246e66d3b5b7255029e8040bb99befebc | 97 | py | Python | podium/experimental/model_selection/__init__.py | TakeLab/podium | 11ef32d889e483d4d77a44b61e0b5da956ee3a54 | [
"BSD-3-Clause"
] | 51 | 2021-03-19T14:14:31.000Z | 2022-02-18T00:42:51.000Z | podium/experimental/model_selection/__init__.py | TakeLab/podium | 11ef32d889e483d4d77a44b61e0b5da956ee3a54 | [
"BSD-3-Clause"
] | 9 | 2021-03-31T15:39:28.000Z | 2021-04-16T13:28:15.000Z | podium/experimental/model_selection/__init__.py | TakeLab/podium | 11ef32d889e483d4d77a44b61e0b5da956ee3a54 | [
"BSD-3-Clause"
] | 1 | 2021-07-26T04:54:18.000Z | 2021-07-26T04:54:18.000Z | """
This package contains model selection methods.
"""
from .model_selection import grid_search
| 16.166667 | 46 | 0.783505 | 12 | 97 | 6.166667 | 0.833333 | 0.378378 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.134021 | 97 | 5 | 47 | 19.4 | 0.880952 | 0.474227 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4c6fa9025f800fa55b1eb5f94cb0a045d6fe5157 | 37 | py | Python | dev/pyposmat/analysis/GaussianMixtureModel/dev__manifold_analysis.py | eragasa/pypospack | 21cdecaf3b05c87acc532d992be2c04d85bfbc22 | [
"MIT"
] | 4 | 2018-01-18T19:59:56.000Z | 2020-08-25T11:56:52.000Z | dev/pyposmat/analysis/GaussianMixtureModel/dev__manifold_analysis.py | eragasa/pypospack | 21cdecaf3b05c87acc532d992be2c04d85bfbc22 | [
"MIT"
] | 1 | 2018-04-22T23:02:13.000Z | 2018-04-22T23:02:13.000Z | dev/pyposmat/analysis/GaussianMixtureModel/dev__manifold_analysis.py | eragasa/pypospack | 21cdecaf3b05c87acc532d992be2c04d85bfbc22 | [
"MIT"
] | 1 | 2019-09-14T07:04:42.000Z | 2019-09-14T07:04:42.000Z | import os
import manifold_analysis
| 7.4 | 24 | 0.837838 | 5 | 37 | 6 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.162162 | 37 | 4 | 25 | 9.25 | 0.967742 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d5ef5642b91defe0662565c1816c2b66293b045c | 111 | py | Python | chapter_2/strings/name.py | ieonsii/python-crash-course.2nd | 88e345ed26603c750c1d632da2b2e72fdddc26b7 | [
"MIT"
] | null | null | null | chapter_2/strings/name.py | ieonsii/python-crash-course.2nd | 88e345ed26603c750c1d632da2b2e72fdddc26b7 | [
"MIT"
] | null | null | null | chapter_2/strings/name.py | ieonsii/python-crash-course.2nd | 88e345ed26603c750c1d632da2b2e72fdddc26b7 | [
"MIT"
] | null | null | null | name = "chirstal quioco"
print(name.title())
name = "Chirstal Quioco"
print(name.lower())
print(name.upper())
| 15.857143 | 24 | 0.702703 | 15 | 111 | 5.2 | 0.466667 | 0.346154 | 0.461538 | 0.589744 | 0.692308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108108 | 111 | 6 | 25 | 18.5 | 0.787879 | 0 | 0 | 0 | 0 | 0 | 0.27027 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.6 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
912fe428f22923327e62a0b9bb87cb7d637a4ccc | 24,963 | py | Python | im2scene/giraffe/rendering.py | JoseMAbril/Proyecto_AML | b1ef319bf9e27e70be6e6424a0d1ba4790a45a3a | [
"MIT"
] | null | null | null | im2scene/giraffe/rendering.py | JoseMAbril/Proyecto_AML | b1ef319bf9e27e70be6e6424a0d1ba4790a45a3a | [
"MIT"
] | null | null | null | im2scene/giraffe/rendering.py | JoseMAbril/Proyecto_AML | b1ef319bf9e27e70be6e6424a0d1ba4790a45a3a | [
"MIT"
] | null | null | null | import torch
import numpy as np
from im2scene.common import interpolate_sphere
from torchvision.utils import save_image, make_grid
import imageio
from math import sqrt
from os import makedirs
from os.path import join
class Renderer(object):
''' Render class for GIRAFFE.
It provides functions to render the representation.
Args:
model (nn.Module): trained GIRAFFE model
device (device): pytorch device
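    Example (a minimal sketch; assumes a trained GIRAFFE model object is
    already available, e.g. loaded with the repo's config utilities):
        renderer = Renderer(model, device=torch.device('cuda'))
        renderer.render_full_visualization(
            'out/rendering', render_program=['object_rotation'])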
'''
def __init__(self, model, device=None):
self.model = model.to(device)
gen = self.model.generator_test
if gen is None:
gen = self.model.generator
gen.eval()
self.generator = gen
        # sample temperature; only used for visualizations
self.sample_tmp = 0.65
def set_random_seed(self):
torch.manual_seed(0)
np.random.seed(0)
def render_full_visualization(self, img_out_path,
render_program=['object_rotation']):
for rp in render_program:
if rp == 'object_rotation':
self.set_random_seed()
self.render_object_rotation(img_out_path)
if rp == 'object_translation_horizontal':
self.set_random_seed()
self.render_object_translation_horizontal(img_out_path)
if rp == 'object_translation_vertical':
self.set_random_seed()
self.render_object_translation_depth(img_out_path)
if rp == 'interpolate_app':
self.set_random_seed()
self.render_interpolation(img_out_path)
if rp == 'interpolate_app_bg':
self.set_random_seed()
self.render_interpolation_bg(img_out_path)
if rp == 'interpolate_shape':
self.set_random_seed()
self.render_interpolation(img_out_path, mode='shape')
if rp == 'object_translation_circle':
self.set_random_seed()
self.render_object_translation_circle(img_out_path)
if rp == 'render_camera_elevation':
self.set_random_seed()
self.render_camera_elevation(img_out_path)
if rp == 'render_add_cars':
self.set_random_seed()
self.render_add_objects_cars5(img_out_path)
if rp == 'render_add_clevr10':
self.set_random_seed()
self.render_add_objects_clevr10(img_out_path)
if rp == 'render_add_clevr6':
self.set_random_seed()
self.render_add_objects_clevr6(img_out_path)
def render_object_rotation(self, img_out_path, batch_size=15, n_steps=32):
gen = self.generator
bbox_generator = gen.bounding_box_generator
n_boxes = bbox_generator.n_boxes
# Set rotation range
is_full_rotation = (bbox_generator.rotation_range[0] == 0
and bbox_generator.rotation_range[1] == 1)
n_steps = int(n_steps * 2) if is_full_rotation else n_steps
r_scale = [0., 1.] if is_full_rotation else [0.1, 0.9]
# Get Random codes and bg rotation
latent_codes = gen.get_latent_codes(batch_size, tmp=self.sample_tmp)
bg_rotation = gen.get_random_bg_rotation(batch_size)
# Set Camera
camera_matrices = gen.get_camera(batch_size=batch_size)
s_val = [[0, 0, 0] for i in range(n_boxes)]
t_val = [[0.5, 0.5, 0.5] for i in range(n_boxes)]
r_val = [0. for i in range(n_boxes)]
s, t, _ = gen.get_transformations(s_val, t_val, r_val, batch_size)
out = []
for step in range(n_steps):
# Get rotation for this step
r = [step * 1.0 / (n_steps - 1) for i in range(n_boxes)]
r = [r_scale[0] + ri * (r_scale[1] - r_scale[0]) for ri in r]
r = gen.get_rotation(r, batch_size)
# define full transformation and evaluate model
transformations = [s, t, r]
with torch.no_grad():
out_i = gen(batch_size, latent_codes, camera_matrices,
transformations, bg_rotation, mode='val')
out.append(out_i.cpu())
out = torch.stack(out)
out_folder = join(img_out_path, 'rotation_object')
makedirs(out_folder, exist_ok=True)
self.save_video_and_images(
out, out_folder, name='rotation_object',
is_full_rotation=is_full_rotation,
add_reverse=(not is_full_rotation))
def render_object_rotationDemo(self, img_out_path, batch_size=1, n_steps=32, latent_codes=None):
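        # Variant of render_object_rotation that renders externally provided
        # latent codes (e.g. to reproduce a specific sample) instead of
        # sampling new ones; otherwise the logic mirrors the method above.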
gen = self.generator
bbox_generator = gen.bounding_box_generator
n_boxes = bbox_generator.n_boxes
# Set rotation range
is_full_rotation = (bbox_generator.rotation_range[0] == 0
and bbox_generator.rotation_range[1] == 1)
n_steps = int(n_steps * 2) if is_full_rotation else n_steps
r_scale = [0., 1.] if is_full_rotation else [0.1, 0.9]
# Get Random codes and bg rotation
        if latent_codes is None:
            # Fall back to sampling fresh codes when none are provided.
            latent_codes = gen.get_latent_codes(batch_size, tmp=self.sample_tmp)
bg_rotation = gen.get_random_bg_rotation(batch_size)
# Set Camera
camera_matrices = gen.get_camera(batch_size=batch_size)
s_val = [[0, 0, 0] for i in range(n_boxes)]
t_val = [[0.5, 0.5, 0.5] for i in range(n_boxes)]
r_val = [0. for i in range(n_boxes)]
s, t, _ = gen.get_transformations(s_val, t_val, r_val, batch_size)
out = []
for step in range(n_steps):
# Get rotation for this step
r = [step * 1.0 / (n_steps - 1) for i in range(n_boxes)]
r = [r_scale[0] + ri * (r_scale[1] - r_scale[0]) for ri in r]
r = gen.get_rotation(r, batch_size)
# define full transformation and evaluate model
transformations = [s, t, r]
with torch.no_grad():
out_i = gen(batch_size, latent_codes, camera_matrices,
transformations, bg_rotation, mode='val')
out.append(out_i.cpu())
out = torch.stack(out)
out_folder = join(img_out_path, 'rotation_object')
makedirs(out_folder, exist_ok=True)
self.save_video_and_images(
out, out_folder, name='rotation_object',
is_full_rotation=is_full_rotation,
add_reverse=(not is_full_rotation))
def render_object_translation_horizontal(self, img_out_path, batch_size=15,
n_steps=32):
gen = self.generator
# Get values
latent_codes = gen.get_latent_codes(batch_size, tmp=self.sample_tmp)
bg_rotation = gen.get_random_bg_rotation(batch_size)
camera_matrices = gen.get_camera(batch_size=batch_size)
n_boxes = gen.bounding_box_generator.n_boxes
s = [[0., 0., 0.]
for i in range(n_boxes)]
r = [0.5 for i in range(n_boxes)]
if n_boxes == 1:
t = []
x_val = 1
elif n_boxes == 2:
t = [[0.5, 0.5, 0.]]
x_val = 2.
out = []
for step in range(n_steps):
i = step * 1.0 / (n_steps - 1)
ti = t + [[x_val, i, 0.]]
transformations = gen.get_transformations(s, ti, r, batch_size)
with torch.no_grad():
out_i = gen(batch_size, latent_codes, camera_matrices,
transformations, bg_rotation, mode='val')
out.append(out_i.cpu())
out = torch.stack(out)
out_folder = join(img_out_path, 'translation_object_horizontal')
makedirs(out_folder, exist_ok=True)
self.save_video_and_images(
out, out_folder, name='translation_horizontal',
add_reverse=True)
def render_object_translation_depth(self, img_out_path, batch_size=15,
n_steps=32):
gen = self.generator
# Get values
latent_codes = gen.get_latent_codes(batch_size, tmp=self.sample_tmp)
bg_rotation = gen.get_random_bg_rotation(batch_size)
camera_matrices = gen.get_camera(batch_size=batch_size)
n_boxes = gen.bounding_box_generator.n_boxes
s = [[0., 0., 0.]
for i in range(n_boxes)]
r = [0.5 for i in range(n_boxes)]
if n_boxes == 1:
t = []
y_val = 0.5
elif n_boxes == 2:
t = [[0.4, 0.8, 0.]]
y_val = 0.2
out = []
for step in range(n_steps):
i = step * 1.0 / (n_steps - 1)
ti = t + [[i, y_val, 0.]]
transformations = gen.get_transformations(s, ti, r, batch_size)
with torch.no_grad():
out_i = gen(batch_size, latent_codes, camera_matrices,
transformations, bg_rotation, mode='val')
out.append(out_i.cpu())
out = torch.stack(out)
out_folder = join(img_out_path, 'translation_object_depth')
makedirs(out_folder, exist_ok=True)
self.save_video_and_images(
out, out_folder, name='translation_depth', add_reverse=True)
def render_interpolation(self, img_out_path, batch_size=15, n_samples=6,
n_steps=32, mode='app'):
gen = self.generator
n_boxes = gen.bounding_box_generator.n_boxes
# Get values
z_shape_obj_1, z_app_obj_1, z_shape_bg_1, z_app_bg_1 = \
gen.get_latent_codes(batch_size, tmp=self.sample_tmp)
z_i = [
gen.sample_z(
z_app_obj_1.shape,
tmp=self.sample_tmp) for j in range(n_samples)
]
bg_rotation = gen.get_random_bg_rotation(batch_size)
camera_matrices = gen.get_camera(batch_size=batch_size)
if n_boxes == 1:
t_val = [[0.5, 0.5, 0.5]]
transformations = gen.get_transformations(
[[0., 0., 0.] for i in range(n_boxes)],
t_val,
[0.5 for i in range(n_boxes)],
batch_size
)
out = []
for j in range(n_samples):
z_i1 = z_i[j]
z_i2 = z_i[(j+1) % (n_samples)]
for step in range(n_steps):
w = step * 1.0 / ((n_steps) - 1)
z_ii = interpolate_sphere(z_i1, z_i2, w)
if mode == 'app':
latent_codes = [z_shape_obj_1, z_ii, z_shape_bg_1,
z_app_bg_1]
else:
latent_codes = [z_ii, z_app_obj_1, z_shape_bg_1,
z_app_bg_1]
with torch.no_grad():
out_i = gen(batch_size, latent_codes, camera_matrices,
transformations, bg_rotation, mode='val')
out.append(out_i.cpu())
out = torch.stack(out)
# Save Video
out_folder = join(img_out_path, 'interpolate_%s' % mode)
makedirs(out_folder, exist_ok=True)
self.save_video_and_images(
out, out_folder, name='interpolate_%s' % mode,
is_full_rotation=True)
def render_interpolation_bg(self, img_out_path, batch_size=15, n_samples=6,
n_steps=32, mode='app'):
gen = self.generator
n_boxes = gen.bounding_box_generator.n_boxes
# Get values
z_shape_obj_1, z_app_obj_1, z_shape_bg_1, z_app_bg_1 = \
gen.get_latent_codes(batch_size, tmp=self.sample_tmp)
z_i = [
gen.sample_z(
z_app_bg_1.shape,
tmp=self.sample_tmp) for j in range(n_samples)
]
bg_rotation = gen.get_random_bg_rotation(batch_size)
camera_matrices = gen.get_camera(batch_size=batch_size)
if n_boxes == 1:
t_val = [[0.5, 0.5, 0.5]]
transformations = gen.get_transformations(
[[0., 0., 0.] for i in range(n_boxes)],
t_val,
[0.5 for i in range(n_boxes)],
batch_size
)
out = []
for j in range(n_samples):
z_i1 = z_i[j]
z_i2 = z_i[(j+1) % (n_samples)]
for step in range(n_steps):
w = step * 1.0 / ((n_steps) - 1)
z_ii = interpolate_sphere(z_i1, z_i2, w)
if mode == 'app':
latent_codes = [z_shape_obj_1, z_app_obj_1, z_shape_bg_1,
z_ii]
else:
latent_codes = [z_shape_obj_1, z_app_obj_1, z_ii,
z_app_bg_1]
with torch.no_grad():
out_i = gen(batch_size, latent_codes, camera_matrices,
transformations, bg_rotation, mode='val')
out.append(out_i.cpu())
out = torch.stack(out)
# Save Video
out_folder = join(img_out_path, 'interpolate_bg_%s' % mode)
makedirs(out_folder, exist_ok=True)
self.save_video_and_images(
out, out_folder, name='interpolate_bg_%s' % mode,
is_full_rotation=True)
def render_object_translation_circle(self, img_out_path, batch_size=15,
n_steps=32):
gen = self.generator
# Disable object sampling
sample_object_existance = gen.sample_object_existance
gen.sample_object_existance = False
# Get values
latent_codes = gen.get_latent_codes(batch_size, tmp=self.sample_tmp)
bg_rotation = gen.get_random_bg_rotation(batch_size)
camera_matrices = gen.get_camera(batch_size=batch_size)
n_boxes = gen.bounding_box_generator.n_boxes
s = [[0, 0, 0, ]
for i in range(n_boxes)]
r = [0 for i in range(n_boxes)]
s10, t10, r10 = gen.get_random_transformations(batch_size)
out = []
for step in range(n_steps):
i = step * 1.0 / (n_steps - 1)
cos_i = (np.cos(2 * np.pi * i) * 0.5 + 0.5).astype(np.float32)
sin_i = (np.sin(2 * np.pi * i) * 0.5 + 0.5).astype(np.float32)
if n_boxes <= 2:
t = [[0.5, 0.5, 0.] for i in range(n_boxes - 1)] + [
[cos_i, sin_i, 0]
]
transformations = gen.get_transformations(s, t, r, batch_size)
else:
cos_i, sin_i = cos_i * 1.0 - 0.0, sin_i * 1. - 0.
_, ti, _ = gen.get_transformations(
val_t=[[cos_i, sin_i, 0]], batch_size=batch_size)
t10[:, -1:] = ti
transformations = [s10, t10, r10]
with torch.no_grad():
out_i = gen(batch_size, latent_codes, camera_matrices,
transformations, bg_rotation, mode='val')
out.append(out_i.cpu())
out = torch.stack(out)
gen.sample_object_existance = sample_object_existance
# Save Video
out_folder = join(img_out_path, 'translation_circle')
makedirs(out_folder, exist_ok=True)
self.save_video_and_images(out, out_folder, name='translation_circle',
is_full_rotation=True)
def render_camera_elevation(self, img_out_path, batch_size=15, n_steps=32):
gen = self.generator
n_boxes = gen.bounding_box_generator.n_boxes
r_range = [0.1, 0.9]
# Get values
latent_codes = gen.get_latent_codes(batch_size, tmp=self.sample_tmp)
bg_rotation = gen.get_random_bg_rotation(batch_size)
transformations = gen.get_transformations(
[[0., 0., 0.] for i in range(n_boxes)],
[[0.5, 0.5, 0.5] for i in range(n_boxes)],
[0.5 for i in range(n_boxes)],
batch_size,
)
out = []
for step in range(n_steps):
v = step * 1.0 / (n_steps - 1)
r = r_range[0] + v * (r_range[1] - r_range[0])
camera_matrices = gen.get_camera(val_v=r, batch_size=batch_size)
with torch.no_grad():
out_i = gen(
batch_size, latent_codes, camera_matrices, transformations,
bg_rotation, mode='val')
out.append(out_i.cpu())
out = torch.stack(out)
out_folder = join(img_out_path, 'camera_elevation')
makedirs(out_folder, exist_ok=True)
self.save_video_and_images(out, out_folder, name='elevation_camera',
is_full_rotation=False)
def render_add_objects_cars5(self, img_out_path, batch_size=15):
gen = self.generator
# Get values
z_shape_obj, z_app_obj, z_shape_bg, z_app_bg = gen.get_latent_codes(
batch_size, tmp=self.sample_tmp)
z_shape_obj = gen.sample_z(
z_shape_obj[:, :1].repeat(1, 6, 1).shape, tmp=self.sample_tmp)
z_app_obj = gen.sample_z(
z_app_obj[:, :1].repeat(1, 6, 1).shape, tmp=self.sample_tmp)
bg_rotation = gen.get_random_bg_rotation(batch_size)
camera_matrices = gen.get_camera(val_v=0., batch_size=batch_size)
s = [
[-1., -1., -1.],
[-1., -1., -1.],
[-1., -1., -1.],
[-1., -1., -1.],
[-1., -1., -1.],
[-1., -1., -1.],
]
t = [
[-0.7, -.8, 0.],
[-0.7, 0.5, 0.],
[-0.7, 1.8, 0.],
[1.5, -.8, 0.],
[1.5, 0.5, 0.],
[1.5, 1.8, 0.],
]
r = [
0.5,
0.5,
0.5,
0.5,
0.5,
0.5,
]
outs = []
for i in range(1, 7):
transformations = gen.get_transformations(
s[:i], t[:i], r[:i], batch_size)
latent_codes = [z_shape_obj[:, :i], z_app_obj[:, :i], z_shape_bg,
z_app_bg]
with torch.no_grad():
out = gen(
batch_size, latent_codes, camera_matrices, transformations,
bg_rotation, mode='val').cpu()
outs.append(out)
outs = torch.stack(outs)
idx = torch.arange(6).reshape(-1, 1).repeat(1, (128 // 6)).reshape(-1)
outs = outs[[idx]]
out_folder = join(img_out_path, 'add_cars')
makedirs(out_folder, exist_ok=True)
self.save_video_and_images(outs, out_folder, name='add_cars',
is_full_rotation=False, add_reverse=True)
def render_add_objects_clevr10(self, img_out_path, batch_size=15):
gen = self.generator
# Disable object sampling
sample_object_existance = gen.sample_object_existance
gen.sample_object_existance = False
n_steps = 6
n_objs = 12
# Get values
z_shape_obj, z_app_obj, z_shape_bg, z_app_bg = gen.get_latent_codes(
batch_size, tmp=self.sample_tmp)
z_shape_obj = gen.sample_z(
z_shape_obj[:, :1].repeat(1, n_objs, 1).shape, tmp=self.sample_tmp)
z_app_obj = gen.sample_z(
z_app_obj[:, :1].repeat(1, n_objs, 1).shape, tmp=self.sample_tmp)
bg_rotation = gen.get_random_bg_rotation(batch_size)
camera_matrices = gen.get_camera(val_v=0., batch_size=batch_size)
s = [
[0, 0, 0] for i in range(n_objs)
]
t = []
for i in range(n_steps):
if i % 3 == 0:
x = 0.0
elif i % 3 == 1:
x = 0.5
else:
x = 1
if i in [0, 1, 2]:
y = 0.
else:
y = 0.8
t = t + [[x, y, 0], [x, y + 0.4, 0]]
r = [
0 for i in range(n_objs)
]
out_total = []
for i in range(2, n_objs + 1, 2):
transformations = gen.get_transformations(
s[:i], t[:i], r[:i], batch_size)
latent_codes = [z_shape_obj[:, :i], z_app_obj[:, :i], z_shape_bg,
z_app_bg]
with torch.no_grad():
out = gen(
batch_size, latent_codes, camera_matrices, transformations,
bg_rotation, mode='val').cpu()
out_total.append(out)
out_total = torch.stack(out_total)
idx = torch.arange(6).reshape(-1, 1).repeat(1, (128 // 6)).reshape(-1)
outs = out_total[[idx]]
gen.sample_object_existance = sample_object_existance
out_folder = join(img_out_path, 'add_clevr_objects10')
makedirs(out_folder, exist_ok=True)
self.save_video_and_images(outs, out_folder, name='add_clevr10',
is_full_rotation=False, add_reverse=True)
def render_add_objects_clevr6(self, img_out_path, batch_size=15):
gen = self.generator
# Disable object sampling
sample_object_existance = gen.sample_object_existance
gen.sample_object_existance = False
n_objs = 6
# Get values
z_shape_obj, z_app_obj, z_shape_bg, z_app_bg = gen.get_latent_codes(
batch_size, tmp=self.sample_tmp)
z_shape_obj = gen.sample_z(
z_shape_obj[:, :1].repeat(1, n_objs, 1).shape, tmp=self.sample_tmp)
z_app_obj = gen.sample_z(
z_app_obj[:, :1].repeat(1, n_objs, 1).shape, tmp=self.sample_tmp)
bg_rotation = gen.get_random_bg_rotation(batch_size)
camera_matrices = gen.get_camera(val_v=0., batch_size=batch_size)
s = [
[0, 0, 0] for i in range(n_objs)
]
t = []
for i in range(n_objs):
if i % 2 == 0:
x = 0.2
else:
x = 0.8
if i in [0, 1]:
y = 0.
elif i in [2, 3]:
y = 0.5
else:
y = 1.
t = t + [[x, y, 0]]
r = [
0 for i in range(n_objs)
]
out_total = []
for i in range(1, n_objs + 1):
transformations = gen.get_transformations(
s[:i], t[:i], r[:i], batch_size)
latent_codes = [z_shape_obj[:, :i], z_app_obj[:, :i], z_shape_bg,
z_app_bg]
with torch.no_grad():
out = gen(
batch_size, latent_codes, camera_matrices, transformations,
bg_rotation, mode='val').cpu()
out_total.append(out)
out_total = torch.stack(out_total)
idx = torch.arange(6).reshape(-1, 1).repeat(1, (128 // 6)).reshape(-1)
outs = out_total[[idx]]
gen.sample_object_existance = sample_object_existance
out_folder = join(img_out_path, 'add_clevr_objects6')
makedirs(out_folder, exist_ok=True)
self.save_video_and_images(outs, out_folder, name='add_clevr6',
is_full_rotation=False, add_reverse=True)
##################
# Helper functions
def write_video(self, out_file, img_list, n_row=5, add_reverse=False,
write_small_vis=True):
n_steps, batch_size = img_list.shape[:2]
nrow = n_row if (n_row is not None) else int(sqrt(batch_size))
img = [(255*make_grid(img, nrow=nrow, pad_value=1.).permute(
1, 2, 0)).cpu().numpy().astype(np.uint8) for img in img_list]
if add_reverse:
img += list(reversed(img))
imageio.mimwrite(out_file, img, fps=30, quality=8)
if write_small_vis:
img = [(255*make_grid(img, nrow=batch_size, pad_value=1.).permute(
1, 2, 0)).cpu().numpy().astype(
np.uint8) for img in img_list[:, :9]]
if add_reverse:
img += list(reversed(img))
imageio.mimwrite(
(out_file[:-4] + '_sm.mp4'), img, fps=30, quality=4)
def save_video_and_images(self, imgs, out_folder, name='rotation_object',
is_full_rotation=False, img_n_steps=6,
add_reverse=False):
# Save video
out_file_video = join(out_folder, '%s.mp4' % name)
self.write_video(out_file_video, imgs, add_reverse=add_reverse)
# Save images
n_steps, batch_size = imgs.shape[:2]
if is_full_rotation:
idx_paper = np.linspace(
0, n_steps - n_steps // img_n_steps, img_n_steps
).astype(int)
else:
idx_paper = np.linspace(0, n_steps - 1, img_n_steps).astype(int)
for idx in range(batch_size):
img_grid = imgs[idx_paper, idx]
save_image(make_grid(
img_grid, nrow=img_n_steps, pad_value=1.), join(
out_folder, '%04d_%s.jpg' % (idx, name)))
| 38.642415 | 100 | 0.548412 | 3,378 | 24,963 | 3.736827 | 0.062463 | 0.06203 | 0.025351 | 0.027014 | 0.83229 | 0.817238 | 0.798384 | 0.757031 | 0.72566 | 0.72154 | 0 | 0.029658 | 0.34491 | 24,963 | 645 | 101 | 38.702326 | 0.742249 | 0.033129 | 0 | 0.632887 | 0 | 0 | 0.028222 | 0.00744 | 0 | 0 | 0 | 0 | 0 | 1 | 0.030593 | false | 0 | 0.015296 | 0 | 0.047801 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
913b5adf5329077dfc7ae8a5ebdcc74959753ddc | 182 | py | Python | tests/foo.py | DimaOrekhov/hydra-slayer | cfa084c44a1ee5f01ff68445660c1ba137333cb8 | [
"Apache-2.0"
] | null | null | null | tests/foo.py | DimaOrekhov/hydra-slayer | cfa084c44a1ee5f01ff68445660c1ba137333cb8 | [
"Apache-2.0"
] | null | null | null | tests/foo.py | DimaOrekhov/hydra-slayer | cfa084c44a1ee5f01ff68445660c1ba137333cb8 | [
"Apache-2.0"
] | null | null | null | # flake8: noqa
__all__ = ["foo"]
def foo(a, b):
"""Docs? Contribution is welcome."""
return {"a": a, "b": b}
def bar():
"""Docs? Contribution is welcome."""
pass
| 14 | 40 | 0.538462 | 24 | 182 | 3.916667 | 0.583333 | 0.042553 | 0.382979 | 0.531915 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007299 | 0.247253 | 182 | 12 | 41 | 15.166667 | 0.678832 | 0.412088 | 0 | 0 | 0 | 0 | 0.052083 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0.2 | 0 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
e68e333ff3dcae392dde1cbcde35c3163a43f629 | 96 | py | Python | venv/lib/python3.8/site-packages/urllib3/packages/ssl_match_hostname/__init__.py | GiulianaPola/select_repeats | 17a0d053d4f874e42cf654dd142168c2ec8fbd11 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/pip/_vendor/urllib3/packages/ssl_match_hostname/__init__.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/pip/_vendor/urllib3/packages/ssl_match_hostname/__init__.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/65/53/30/0a41f1fa9cbc111b31c4cdc897e322444664b55fbc88b06609f4511c8e | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.447917 | 0 | 96 | 1 | 96 | 96 | 0.447917 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e68fc7e7dc1a4be2e53b3ad34a7fe24fab7de783 | 26,519 | py | Python | obsidion/core/utils/predicates.py | Darkflame72/Minecraft-Discord | 4e701df9820d18c9f2b8a4863145e2af36729505 | [
"MIT"
] | 1 | 2020-02-29T22:37:01.000Z | 2020-02-29T22:37:01.000Z | obsidion/core/utils/predicates.py | Darkflame72/Minecraft-Discord | 4e701df9820d18c9f2b8a4863145e2af36729505 | [
"MIT"
] | 1 | 2020-03-27T05:49:37.000Z | 2020-03-27T05:51:25.000Z | obsidion/core/utils/predicates.py | Darkflame72/Minecraft-Discord | 4e701df9820d18c9f2b8a4863145e2af36729505 | [
"MIT"
] | 1 | 2020-03-27T05:53:17.000Z | 2020-03-27T05:53:17.000Z | import re
from typing import Any
from typing import Callable
from typing import cast
from typing import Optional
from typing import Pattern
from typing import Sequence
from typing import Union
import discord
from discord.ext import commands
_ID_RE = re.compile(r"([0-9]{15,21})$")
_USER_MENTION_RE = re.compile(r"<@!?([0-9]{15,21})>$")
_CHAN_MENTION_RE = re.compile(r"<#([0-9]{15,21})>$")
_ROLE_MENTION_RE = re.compile(r"<@&([0-9]{15,21})>$")
class MessagePredicate:
"""A simple collection of predicates for message events.
These predicates are intended to simplify checks in message events
and reduce boilerplate code.
This class should be created through the provided classmethods.
Instances of this class are callable message predicates, i.e. they
return ``True`` if a message matches the criteria.
All predicates are combined with :meth:`MessagePredicate.same_context`.
Examples
--------
Waiting for a response in the same channel and from the same
author::
await bot.wait_for("message", check=MessagePredicate.same_context(ctx))
Waiting for a response to a yes or no question::
pred = MessagePredicate.yes_or_no(ctx)
await bot.wait_for("message", check=pred)
if pred.result is True:
# User responded "yes"
...
Getting a member object from a user's response::
pred = MessagePredicate.valid_member(ctx)
await bot.wait_for("message", check=pred)
member = pred.result
Attributes
----------
result : Any
The object which the message content matched with. This is
dependent on the predicate used - see each predicate's
documentation for details, not every method will assign this
attribute. Defaults to ``None``.
"""
def __init__(
self, predicate: Callable[["MessagePredicate", discord.Message], bool]
) -> None:
self._pred: Callable[["MessagePredicate", discord.Message], bool] = predicate
self.result: Any = None
def __call__(self, message: discord.Message) -> bool:
return self._pred(self, message)
@classmethod
def same_context(
cls,
ctx: Optional[commands.Context] = None,
channel: Optional[discord.TextChannel] = None,
user: Optional[discord.abc.User] = None,
) -> "MessagePredicate":
"""Match if the message fits the described context.
Parameters
----------
ctx : Optional[Context]
The current invocation context.
channel : Optional[discord.TextChannel]
The channel we expect a message in. If unspecified,
defaults to ``ctx.channel``. If ``ctx`` is unspecified
too, the message's channel will be ignored.
user : Optional[discord.abc.User]
The user we expect a message from. If unspecified,
defaults to ``ctx.author``. If ``ctx`` is unspecified
too, the message's author will be ignored.
Returns
-------
MessagePredicate
The event predicate.
"""
if ctx is not None:
channel = channel or ctx.channel
user = user or ctx.author
return cls(
lambda self, m: (user is None or user.id == m.author.id)
and (channel is None or channel.id == m.channel.id)
)
@classmethod
def cancelled(
cls,
ctx: Optional[commands.Context] = None,
channel: Optional[discord.TextChannel] = None,
user: Optional[discord.abc.User] = None,
) -> "MessagePredicate":
"""Match if the message is ``[p]cancel``.
Parameters
----------
ctx : Optional[Context]
Same as ``ctx`` in :meth:`same_context`.
channel : Optional[discord.TextChannel]
Same as ``channel`` in :meth:`same_context`.
user : Optional[discord.abc.User]
Same as ``user`` in :meth:`same_context`.
Returns
-------
MessagePredicate
The event predicate.
"""
if ctx is None:
# Without a context there is no prefix to compare against, so never match.
return cls(lambda self, m: False)
_ctx: commands.Context = ctx
same_context = cls.same_context(_ctx, channel, user)
return cls(
lambda self, m: (
same_context(m) and m.content.lower() == f"{_ctx.prefix}cancel"
)
)
@classmethod
def yes_or_no(
cls,
ctx: Optional[commands.Context] = None,
channel: Optional[discord.TextChannel] = None,
user: Optional[discord.abc.User] = None,
) -> "MessagePredicate":
"""Match if the message is "yes"/"y" or "no"/"n".
This will assign ``True`` for *yes*, or ``False`` for *no* to
the `result` attribute.
Parameters
----------
ctx : Optional[Context]
Same as ``ctx`` in :meth:`same_context`.
channel : Optional[discord.TextChannel]
Same as ``channel`` in :meth:`same_context`.
user : Optional[discord.abc.User]
Same as ``user`` in :meth:`same_context`.
Returns
-------
MessagePredicate
The event predicate.
"""
same_context = cls.same_context(ctx, channel, user)
def predicate(self: MessagePredicate, m: discord.Message) -> bool:
if not same_context(m):
return False
content = m.content.lower()
if content in ("yes", "y"):
self.result = True
elif content in ("no", "n"):
self.result = False
else:
return False
return True
return cls(predicate)
@classmethod
def valid_int(
cls,
ctx: Optional[commands.Context] = None,
channel: Optional[discord.TextChannel] = None,
user: Optional[discord.abc.User] = None,
) -> "MessagePredicate":
"""Match if the response is an integer.
Assigns the response to `result` as an `int`.
Parameters
----------
ctx : Optional[Context]
Same as ``ctx`` in :meth:`same_context`.
channel : Optional[discord.TextChannel]
Same as ``channel`` in :meth:`same_context`.
user : Optional[discord.abc.User]
Same as ``user`` in :meth:`same_context`.
Returns
-------
MessagePredicate
The event predicate.
"""
same_context = cls.same_context(ctx, channel, user)
def predicate(self: MessagePredicate, m: discord.Message) -> bool:
if not same_context(m):
return False
try:
self.result = int(m.content)
except ValueError:
return False
else:
return True
return cls(predicate)
@classmethod
def valid_float(
cls,
ctx: Optional[commands.Context] = None,
channel: Optional[discord.TextChannel] = None,
user: Optional[discord.abc.User] = None,
) -> "MessagePredicate":
"""Match if the response is a float.
Assigns the response to `result` as a `float`.
Parameters
----------
ctx : Optional[Context]
Same as ``ctx`` in :meth:`same_context`.
channel : Optional[discord.TextChannel]
Same as ``channel`` in :meth:`same_context`.
user : Optional[discord.abc.User]
Same as ``user`` in :meth:`same_context`.
Returns
-------
MessagePredicate
The event predicate.
"""
same_context = cls.same_context(ctx, channel, user)
def predicate(self: MessagePredicate, m: discord.Message) -> bool:
if not same_context(m):
return False
try:
self.result = float(m.content)
except ValueError:
return False
else:
return True
return cls(predicate)
@classmethod
def positive(
cls,
ctx: Optional[commands.Context] = None,
channel: Optional[discord.TextChannel] = None,
user: Optional[discord.abc.User] = None,
) -> "MessagePredicate":
"""Match if the response is a positive number.
Assigns the response to `result` as a `float`.
Parameters
----------
ctx : Optional[Context]
Same as ``ctx`` in :meth:`same_context`.
channel : Optional[discord.TextChannel]
Same as ``channel`` in :meth:`same_context`.
user : Optional[discord.abc.User]
Same as ``user`` in :meth:`same_context`.
Returns
-------
MessagePredicate
The event predicate.
"""
same_context = cls.same_context(ctx, channel, user)
def predicate(self: MessagePredicate, m: discord.Message) -> bool:
if not same_context(m):
return False
try:
number = float(m.content)
except ValueError:
return False
else:
if number > 0:
self.result = number
return True
else:
return False
return cls(predicate)
@classmethod
def valid_role(
cls,
ctx: Optional[commands.Context] = None,
channel: Optional[discord.TextChannel] = None,
user: Optional[discord.abc.User] = None,
) -> "MessagePredicate":
"""Match if the response refers to a role in the current guild.
Assigns the matching `discord.Role` object to `result`.
This predicate cannot be used in DM.
Parameters
----------
ctx : Optional[Context]
Same as ``ctx`` in :meth:`same_context`.
channel : Optional[discord.TextChannel]
Same as ``channel`` in :meth:`same_context`.
user : Optional[discord.abc.User]
Same as ``user`` in :meth:`same_context`.
Returns
-------
MessagePredicate
The event predicate.
"""
same_context = cls.same_context(ctx, channel, user)
guild = cls._get_guild(ctx, channel, cast(discord.Member, user))
def predicate(self: MessagePredicate, m: discord.Message) -> bool:
if not same_context(m):
return False
role = self._find_role(guild, m.content)
if role is None:
return False
self.result = role
return True
return cls(predicate)
@classmethod
def valid_member(
cls,
ctx: Optional[commands.Context] = None,
channel: Optional[discord.TextChannel] = None,
user: Optional[discord.abc.User] = None,
) -> "MessagePredicate":
"""Match if the response refers to a member in the current guild.
Assigns the matching `discord.Member` object to `result`.
This predicate cannot be used in DM.
Parameters
----------
ctx : Optional[Context]
Same as ``ctx`` in :meth:`same_context`.
channel : Optional[discord.TextChannel]
Same as ``channel`` in :meth:`same_context`.
user : Optional[discord.abc.User]
Same as ``user`` in :meth:`same_context`.
Returns
-------
MessagePredicate
The event predicate.
"""
same_context = cls.same_context(ctx, channel, user)
guild = cls._get_guild(ctx, channel, cast(discord.Member, user))
def predicate(self: MessagePredicate, m: discord.Message) -> bool:
if not same_context(m):
return False
match = _ID_RE.match(m.content) or _USER_MENTION_RE.match(m.content)
if match:
result = guild.get_member(int(match.group(1)))
else:
result = guild.get_member_named(m.content)
if result is None:
return False
self.result = result
return True
return cls(predicate)
@classmethod
def valid_text_channel(
cls,
ctx: Optional[commands.Context] = None,
channel: Optional[discord.TextChannel] = None,
user: Optional[discord.abc.User] = None,
) -> "MessagePredicate":
"""Match if the response refers to a text channel in the current guild.
Assigns the matching `discord.TextChannel` object to `result`.
This predicate cannot be used in DM.
Parameters
----------
ctx : Optional[Context]
Same as ``ctx`` in :meth:`same_context`.
channel : Optional[discord.TextChannel]
Same as ``channel`` in :meth:`same_context`.
user : Optional[discord.abc.User]
Same as ``user`` in :meth:`same_context`.
Returns
-------
MessagePredicate
The event predicate.
"""
same_context = cls.same_context(ctx, channel, user)
guild = cls._get_guild(ctx, channel, cast(discord.Member, user))
def predicate(self: MessagePredicate, m: discord.Message) -> bool:
if not same_context(m):
return False
match = _ID_RE.match(m.content) or _CHAN_MENTION_RE.match(m.content)
if match:
result = guild.get_channel(int(match.group(1)))
else:
result = discord.utils.get(guild.text_channels, name=m.content)
if not isinstance(result, discord.TextChannel):
return False
self.result = result
return True
return cls(predicate)
@classmethod
def has_role(
cls,
ctx: Optional[commands.Context] = None,
channel: Optional[discord.TextChannel] = None,
user: Optional[discord.abc.User] = None,
) -> "MessagePredicate":
"""Match if the response refers to a role which the author has.
Assigns the matching `discord.Role` object to `result`.
One of ``user`` or ``ctx`` must be supplied. This predicate
cannot be used in DM.
Parameters
----------
ctx : Optional[Context]
Same as ``ctx`` in :meth:`same_context`.
channel : Optional[discord.TextChannel]
Same as ``channel`` in :meth:`same_context`.
user : Optional[discord.abc.User]
Same as ``user`` in :meth:`same_context`.
Returns
-------
MessagePredicate
The event predicate.
"""
same_context = cls.same_context(ctx, channel, user)
guild = cls._get_guild(ctx, channel, cast(discord.Member, user))
if user is None:
if ctx is None:
raise TypeError(
"One of `user` or `ctx` must be supplied to "
"`MessagePredicate.has_role`."
)
user = ctx.author
_user: discord.User = user
def predicate(self: MessagePredicate, m: discord.Message) -> bool:
if not same_context(m):
return False
role = self._find_role(guild, m.content)
if role is None or role not in _user.roles:
return False
self.result = role
return True
return cls(predicate)
@classmethod
def equal_to(
cls,
value: str,
ctx: Optional[commands.Context] = None,
channel: Optional[discord.TextChannel] = None,
user: Optional[discord.abc.User] = None,
) -> "MessagePredicate":
"""Match if the response is equal to the specified value.
Parameters
----------
value : str
The value to compare the response with.
ctx : Optional[Context]
Same as ``ctx`` in :meth:`same_context`.
channel : Optional[discord.TextChannel]
Same as ``channel`` in :meth:`same_context`.
user : Optional[discord.abc.User]
Same as ``user`` in :meth:`same_context`.
Returns
-------
MessagePredicate
The event predicate.
"""
same_context = cls.same_context(ctx, channel, user)
return cls(lambda self, m: same_context(m) and m.content == value)
@classmethod
def lower_equal_to(
cls,
value: str,
ctx: Optional[commands.Context] = None,
channel: Optional[discord.TextChannel] = None,
user: Optional[discord.abc.User] = None,
) -> "MessagePredicate":
"""Match if the response *as lowercase* is equal to the specified value.
Parameters
----------
value : str
The value to compare the response with.
ctx : Optional[Context]
Same as ``ctx`` in :meth:`same_context`.
channel : Optional[discord.TextChannel]
Same as ``channel`` in :meth:`same_context`.
user : Optional[discord.abc.User]
Same as ``user`` in :meth:`same_context`.
Returns
-------
MessagePredicate
The event predicate.
"""
same_context = cls.same_context(ctx, channel, user)
return cls(lambda self, m: same_context(m) and m.content.lower() == value)
@classmethod
def less(
cls,
value: Union[int, float],
ctx: Optional[commands.Context] = None,
channel: Optional[discord.TextChannel] = None,
user: Optional[discord.abc.User] = None,
) -> "MessagePredicate":
"""Match if the response is less than the specified value.
Parameters
----------
value : Union[int, float]
The value to compare the response with.
ctx : Optional[Context]
Same as ``ctx`` in :meth:`same_context`.
channel : Optional[discord.TextChannel]
Same as ``channel`` in :meth:`same_context`.
user : Optional[discord.abc.User]
Same as ``user`` in :meth:`same_context`.
Returns
-------
MessagePredicate
The event predicate.
"""
valid_int = cls.valid_int(ctx, channel, user)
valid_float = cls.valid_float(ctx, channel, user)
return cls(
lambda self, m: (valid_int(m) or valid_float(m))
and float(m.content) < value
)
@classmethod
def greater(
cls,
value: Union[int, float],
ctx: Optional[commands.Context] = None,
channel: Optional[discord.TextChannel] = None,
user: Optional[discord.abc.User] = None,
) -> "MessagePredicate":
"""Match if the response is greater than the specified value.
Parameters
----------
value : Union[int, float]
The value to compare the response with.
ctx : Optional[Context]
Same as ``ctx`` in :meth:`same_context`.
channel : Optional[discord.TextChannel]
Same as ``channel`` in :meth:`same_context`.
user : Optional[discord.abc.User]
Same as ``user`` in :meth:`same_context`.
Returns
-------
MessagePredicate
The event predicate.
"""
valid_int = cls.valid_int(ctx, channel, user)
valid_float = cls.valid_float(ctx, channel, user)
return cls(
lambda self, m: (valid_int(m) or valid_float(m))
and float(m.content) > value
)
@classmethod
def length_less(
cls,
length: int,
ctx: Optional[commands.Context] = None,
channel: Optional[discord.TextChannel] = None,
user: Optional[discord.abc.User] = None,
) -> "MessagePredicate":
"""Match if the response's length is less than the specified length.
Parameters
----------
length : int
The value to compare the response's length with.
ctx : Optional[Context]
Same as ``ctx`` in :meth:`same_context`.
channel : Optional[discord.TextChannel]
Same as ``channel`` in :meth:`same_context`.
user : Optional[discord.abc.User]
Same as ``user`` in :meth:`same_context`.
Returns
-------
MessagePredicate
The event predicate.
"""
same_context = cls.same_context(ctx, channel, user)
return cls(lambda self, m: same_context(m) and len(m.content) <= length)
@classmethod
def length_greater(
cls,
length: int,
ctx: Optional[commands.Context] = None,
channel: Optional[discord.TextChannel] = None,
user: Optional[discord.abc.User] = None,
) -> "MessagePredicate":
"""Match if the response's length is greater than the specified length.
Parameters
----------
length : int
The value to compare the response's length with.
ctx : Optional[Context]
Same as ``ctx`` in :meth:`same_context`.
channel : Optional[discord.TextChannel]
Same as ``channel`` in :meth:`same_context`.
user : Optional[discord.abc.User]
Same as ``user`` in :meth:`same_context`.
Returns
-------
MessagePredicate
The event predicate.
"""
same_context = cls.same_context(ctx, channel, user)
return cls(lambda self, m: same_context(m) and len(m.content) >= length)
@classmethod
def contained_in(
cls,
collection: Sequence[str],
ctx: Optional[commands.Context] = None,
channel: Optional[discord.TextChannel] = None,
user: Optional[discord.abc.User] = None,
) -> "MessagePredicate":
"""Match if the response is contained in the specified collection.
The index of the response in the ``collection`` sequence is
assigned to the `result` attribute.
Parameters
----------
collection : Sequence[str]
The collection containing valid responses.
ctx : Optional[Context]
Same as ``ctx`` in :meth:`same_context`.
channel : Optional[discord.TextChannel]
Same as ``channel`` in :meth:`same_context`.
user : Optional[discord.abc.User]
Same as ``user`` in :meth:`same_context`.
Returns
-------
MessagePredicate
The event predicate.
"""
same_context = cls.same_context(ctx, channel, user)
def predicate(self: MessagePredicate, m: discord.Message) -> bool:
if not same_context(m):
return False
try:
self.result = collection.index(m.content)
except ValueError:
return False
else:
return True
return cls(predicate)
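# Illustrative usage sketch (hypothetical option list, not part of the original code):
# pred = MessagePredicate.contained_in(["red", "green", "blue"], ctx)
# await bot.wait_for("message", check=pred)
# choice = ["red", "green", "blue"][pred.result]  # result holds the index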
@classmethod
def lower_contained_in(
cls,
collection: Sequence[str],
ctx: Optional[commands.Context] = None,
channel: Optional[discord.TextChannel] = None,
user: Optional[discord.abc.User] = None,
) -> "MessagePredicate":
"""Same as :meth:`contained_in`, but the response is set to lowercase b
efore matching.
Parameters
----------
collection : Sequence[str]
The collection containing valid lowercase responses.
ctx : Optional[Context]
Same as ``ctx`` in :meth:`same_context`.
channel : Optional[discord.TextChannel]
Same as ``channel`` in :meth:`same_context`.
user : Optional[discord.abc.User]
Same as ``user`` in :meth:`same_context`.
Returns
-------
MessagePredicate
The event predicate.
"""
same_context = cls.same_context(ctx, channel, user)
def predicate(self: MessagePredicate, m: discord.Message) -> bool:
if not same_context(m):
return False
try:
self.result = collection.index(m.content.lower())
except ValueError:
return False
else:
return True
return cls(predicate)
@classmethod
def regex(
cls,
pattern: Union[Pattern[str], str],
ctx: Optional[commands.Context] = None,
channel: Optional[discord.TextChannel] = None,
user: Optional[discord.abc.User] = None,
) -> "MessagePredicate":
"""Match if the response matches the specified regex pattern.
This predicate will use `re.search` to find a match. The
resulting `match object <match-objects>` will be assigned
to `result`.
Parameters
----------
pattern : Union[`pattern object <re-objects>`, str]
The pattern to search for in the response.
ctx : Optional[Context]
Same as ``ctx`` in :meth:`same_context`.
channel : Optional[discord.TextChannel]
Same as ``channel`` in :meth:`same_context`.
user : Optional[discord.abc.User]
Same as ``user`` in :meth:`same_context`.
Returns
-------
MessagePredicate
The event predicate.
"""
same_context = cls.same_context(ctx, channel, user)
def predicate(self: MessagePredicate, m: discord.Message) -> bool:
if not same_context(m):
return False
if isinstance(pattern, str):
pattern_obj = re.compile(pattern)
else:
pattern_obj = pattern
match = pattern_obj.search(m.content)
if match:
self.result = match
return True
return False
return cls(predicate)
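# Illustrative usage sketch (hypothetical pattern, not part of the original code):
# pred = MessagePredicate.regex(r"\d{4}-\d{2}-\d{2}", ctx)
# await bot.wait_for("message", check=pred)
# date_text = pred.result.group(0)  # pred.result is the re match object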
@staticmethod
def _find_role(guild: discord.Guild, argument: str) -> Optional[discord.Role]:
match = _ID_RE.match(argument) or _ROLE_MENTION_RE.match(argument)
if match:
result = guild.get_role(int(match.group(1)))
else:
result = discord.utils.get(guild.roles, name=argument)
return result
@staticmethod
def _get_guild(
ctx: commands.Context, channel: discord.TextChannel, user: discord.Member
) -> discord.Guild:
if ctx is not None:
return ctx.guild
elif channel is not None:
return channel.guild
elif user is not None:
return user.guild
| 31.383432 | 85 | 0.561974 | 2,857 | 26,519 | 5.143507 | 0.073154 | 0.078598 | 0.036747 | 0.06247 | 0.792651 | 0.766451 | 0.758149 | 0.758149 | 0.7131 | 0.707996 | 0 | 0.001578 | 0.330744 | 26,519 | 844 | 86 | 31.420616 | 0.826403 | 0.381877 | 0 | 0.700549 | 0 | 0 | 0.036533 | 0.002026 | 0 | 0 | 0 | 0 | 0 | 1 | 0.093407 | false | 0 | 0.027473 | 0.002747 | 0.282967 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e6a404f833dde7d088518986d96fb8e10223c4b6 | 23 | py | Python | __init__.py | gil-cohen/portfolio | b0b53cbed4cc09430be1827cc3cc28837daab1a4 | [
"MIT"
] | null | null | null | __init__.py | gil-cohen/portfolio | b0b53cbed4cc09430be1827cc3cc28837daab1a4 | [
"MIT"
] | null | null | null | __init__.py | gil-cohen/portfolio | b0b53cbed4cc09430be1827cc3cc28837daab1a4 | [
"MIT"
] | null | null | null | from . import portfolio | 23 | 23 | 0.826087 | 3 | 23 | 6.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130435 | 23 | 1 | 23 | 23 | 0.95 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e6becb5b90c0bd1324c67bb3d2147465aa94d75f | 24 | py | Python | pydoku/__init__.py | marstr/pydoku | 205652355f07b88660b1c26ba18b8c17573f5699 | [
"MIT"
] | 1 | 2020-07-31T16:00:14.000Z | 2020-07-31T16:00:14.000Z | pydoku/__init__.py | marstr/pydoku | 205652355f07b88660b1c26ba18b8c17573f5699 | [
"MIT"
] | 5 | 2021-03-19T04:38:06.000Z | 2021-09-22T19:10:42.000Z | pydoku/__init__.py | marstr/pydoku | 205652355f07b88660b1c26ba18b8c17573f5699 | [
"MIT"
] | null | null | null | from .board import Board | 24 | 24 | 0.833333 | 4 | 24 | 5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 24 | 1 | 24 | 24 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e6cf3bf7688a03911344891204e3e195c48e672a | 119 | py | Python | src/Other/job.py | FashtimeDotCom/hongkong.marksix | 6e62f79ae556e8c35ec145443e646b5082c68cc5 | [
"Apache-2.0"
] | 8 | 2020-12-13T10:27:20.000Z | 2022-03-21T08:22:07.000Z | src/Other/job.py | FashtimeDotCom/hongkong.marksix | 6e62f79ae556e8c35ec145443e646b5082c68cc5 | [
"Apache-2.0"
] | null | null | null | src/Other/job.py | FashtimeDotCom/hongkong.marksix | 6e62f79ae556e8c35ec145443e646b5082c68cc5 | [
"Apache-2.0"
] | 12 | 2020-12-15T07:49:00.000Z | 2022-03-06T15:52:59.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
class Job:
def __init__(self, job_id):
self.job_id = job_id
| 17 | 31 | 0.588235 | 19 | 119 | 3.315789 | 0.684211 | 0.238095 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010989 | 0.235294 | 119 | 6 | 32 | 19.833333 | 0.681319 | 0.352941 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
e6da265b393e2f84cb4affd69a0516b47754b7ef | 340 | py | Python | src/UQpy/surrogates/kriging/regression_models/__init__.py | SURGroup/UncertaintyQuantification | a94c8db47d07134ea2b3b0a3ca53ca818532c3e6 | [
"MIT"
] | null | null | null | src/UQpy/surrogates/kriging/regression_models/__init__.py | SURGroup/UncertaintyQuantification | a94c8db47d07134ea2b3b0a3ca53ca818532c3e6 | [
"MIT"
] | null | null | null | src/UQpy/surrogates/kriging/regression_models/__init__.py | SURGroup/UncertaintyQuantification | a94c8db47d07134ea2b3b0a3ca53ca818532c3e6 | [
"MIT"
] | null | null | null | from UQpy.surrogates.kriging.regression_models.baseclass import *
from UQpy.surrogates.kriging.regression_models.ConstantRegression import ConstantRegression
from UQpy.surrogates.kriging.regression_models.LinearRegression import LinearRegression
from UQpy.surrogates.kriging.regression_models.QuadraticRegression import QuadraticRegression
| 68 | 93 | 0.902941 | 35 | 340 | 8.657143 | 0.314286 | 0.105611 | 0.237624 | 0.330033 | 0.541254 | 0.541254 | 0 | 0 | 0 | 0 | 0 | 0 | 0.047059 | 340 | 4 | 94 | 85 | 0.935185 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fc226fae8a1520ac7eae24cb7db328cd939c2aa8 | 32 | py | Python | dblab/__init__.py | CampusJob/dblab | 81d59e5c298a85c210aee35e88c276247583d429 | [
"Apache-2.0"
] | null | null | null | dblab/__init__.py | CampusJob/dblab | 81d59e5c298a85c210aee35e88c276247583d429 | [
"Apache-2.0"
] | null | null | null | dblab/__init__.py | CampusJob/dblab | 81d59e5c298a85c210aee35e88c276247583d429 | [
"Apache-2.0"
] | null | null | null |
from .dblab import DatabaseLab
| 10.666667 | 30 | 0.8125 | 4 | 32 | 6.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15625 | 32 | 2 | 31 | 16 | 0.962963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fc67a8c8306a4b6e3af5548b78d10840975e3822 | 80 | py | Python | cursoemvideo/aula17.py | victorcunha94/curso_em_video_python | ba1673d506a983f8630c88abf4845aa2bd1a81ea | [
"MIT"
] | null | null | null | cursoemvideo/aula17.py | victorcunha94/curso_em_video_python | ba1673d506a983f8630c88abf4845aa2bd1a81ea | [
"MIT"
] | null | null | null | cursoemvideo/aula17.py | victorcunha94/curso_em_video_python | ba1673d506a983f8630c88abf4845aa2bd1a81ea | [
"MIT"
] | null | null | null | a = [2, 4, 6, 8]
b = a[:]
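# a[:] creates a shallow copy, so mutating b below does not change a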
b[2] = 10
print(f'List A:{a}')
print(f'List B:{b}')
| 13.333333 | 21 | 0.4625 | 20 | 80 | 1.85 | 0.5 | 0.324324 | 0.594595 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.109375 | 0.2 | 80 | 5 | 22 | 16 | 0.46875 | 0 | 0 | 0 | 0 | 0 | 0.275 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.4 | 1 | 0 | 1 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5db227b0182a12715da888f0d2bb94c7dcb9c538 | 1,882 | py | Python | audi/apps/common/handlers.py | sangwonl/audi | 92dd28fc39a81d9aa623501547db586050af844e | [
"MIT"
] | null | null | null | audi/apps/common/handlers.py | sangwonl/audi | 92dd28fc39a81d9aa623501547db586050af844e | [
"MIT"
] | 3 | 2015-11-01T15:22:18.000Z | 2015-11-01T15:25:33.000Z | audi/apps/common/handlers.py | sangwonl/audi | 92dd28fc39a81d9aa623501547db586050af844e | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from functools import reduce
from audi.core.handlers.base import BaseHandler
class RobotsHandler(BaseHandler):
def get(self):
params = {
'scheme': self.request.scheme,
'host': self.request.host,
}
self.response.headers['Content-Type'] = 'text/plain'
def set_variables(text, key):
return text.replace('{{ %s }}' % key, params[key])
self.response.write(reduce(set_variables, params, open('audi/apps/common/templates/seo/robots.txt').read()))
class HumansHandler(BaseHandler):
def get(self):
params = {
'scheme': self.request.scheme,
'host': self.request.host,
}
self.response.headers['Content-Type'] = 'text/plain'
def set_variables(text, key):
return text.replace('{{ %s }}' % key, params[key])
self.response.write(reduce(set_variables, params, open('audi/apps/common/templates/seo/humans.txt').read()))
class SitemapHandler(BaseHandler):
def get(self):
params = {
'scheme': self.request.scheme,
'host': self.request.host,
}
self.response.headers['Content-Type'] = 'application/xml'
def set_variables(text, key):
return text.replace('{{ %s }}' % key, params[key])
self.response.write(reduce(set_variables, params, open('audi/apps/common/templates/seo/sitemap.xml').read()))
class CrossDomainHandler(BaseHandler):
def get(self):
params = {
'scheme': self.request.scheme,
'host': self.request.host,
}
self.response.headers['Content-Type'] = 'application/xml'
def set_variables(text, key):
return text.replace('{{ %s }}' % key, params[key])
self.response.write(reduce(set_variables, params, open('audi/apps/common/templates/seo/crossdomain.xml').read()))
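# Note (added): the seo templates use double-brace placeholders that
# set_variables substitutes, e.g. a sitemap.xml line (illustrative):
#   <loc>{{ scheme }}://{{ host }}/</loc>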
| 31.898305 | 121 | 0.599894 | 210 | 1,882 | 5.338095 | 0.233333 | 0.078501 | 0.06066 | 0.074933 | 0.833185 | 0.833185 | 0.833185 | 0.833185 | 0.833185 | 0.833185 | 0 | 0.000702 | 0.242827 | 1,882 | 58 | 122 | 32.448276 | 0.785965 | 0.011158 | 0 | 0.682927 | 0 | 0 | 0.182894 | 0.091447 | 0 | 0 | 0 | 0 | 0 | 1 | 0.195122 | false | 0 | 0.02439 | 0.097561 | 0.414634 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5dbee893346c418babd061587efcbb35644947c3 | 42 | py | Python | mc/db/__init__.py | aspuru-guzik-group/mission_control | bfe930e1038e9e0d6c4bb327474766e85b2190cb | [
"Apache-2.0"
] | 3 | 2017-09-01T19:49:59.000Z | 2018-06-04T10:30:01.000Z | mc/db/__init__.py | aspuru-guzik-group/mission_control | bfe930e1038e9e0d6c4bb327474766e85b2190cb | [
"Apache-2.0"
] | null | null | null | mc/db/__init__.py | aspuru-guzik-group/mission_control | bfe930e1038e9e0d6c4bb327474766e85b2190cb | [
"Apache-2.0"
] | 1 | 2018-12-13T19:48:27.000Z | 2018-12-13T19:48:27.000Z | """ Database/persistence abstractions."""
| 21 | 41 | 0.738095 | 3 | 42 | 10.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.071429 | 42 | 1 | 42 | 42 | 0.794872 | 0.809524 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5dcc2a510888f4d06e9072bbe4168eaaafb792cf | 17,285 | py | Python | fetcher/apr_fetchers/snowball_fetcher.py | Avalanche-FR-community/apr-fetcher | 25b12e8fe3da4a7ee678017b80dabc07990144f8 | [
"MIT"
] | null | null | null | fetcher/apr_fetchers/snowball_fetcher.py | Avalanche-FR-community/apr-fetcher | 25b12e8fe3da4a7ee678017b80dabc07990144f8 | [
"MIT"
] | null | null | null | fetcher/apr_fetchers/snowball_fetcher.py | Avalanche-FR-community/apr-fetcher | 25b12e8fe3da4a7ee678017b80dabc07990144f8 | [
"MIT"
] | null | null | null | from typing import Dict, List, Tuple, Union
import requests
from web3.main import Web3
from .pangolinv2_fetcher import PangolinV2APRFetcher
from .traderjoe_fetcher import TraderjoeAPRFetcher
from .lydia_fetcher import LydiaAPRFetcher
from .axial_fetcher import AxialAPRFetcher
from ..utils.utils import calculate_lp_token_price, get_block_average_time, open_contract, blockchain_urls, get_token_price_from_dexs, decimals_mapping
from ..dapp_apr_fetcher import DappAPRFetcher
from pprint import pprint
import json
from web3.middleware import geth_poa_middleware
defaultABI = '[{"inputs":[{"internalType":"address","name":"_token","type":"address"},{"internalType":"address","name":"_governance","type":"address"},{"internalType":"address","name":"_timelock","type":"address"},{"internalType":"address","name":"_controller","type":"address"}],"stateMutability":"nonpayable","type":"constructor"},{"anonymous":false,"inputs":[{"indexed":true,"internalType":"address","name":"owner","type":"address"},{"indexed":true,"internalType":"address","name":"spender","type":"address"},{"indexed":false,"internalType":"uint256","name":"value","type":"uint256"}],"name":"Approval","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"internalType":"address","name":"from","type":"address"},{"indexed":true,"internalType":"address","name":"to","type":"address"},{"indexed":false,"internalType":"uint256","name":"value","type":"uint256"}],"name":"Transfer","type":"event"},{"inputs":[{"internalType":"address","name":"owner","type":"address"},{"internalType":"address","name":"spender","type":"address"}],"name":"allowance","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"address","name":"spender","type":"address"},{"internalType":"uint256","name":"amount","type":"uint256"}],"name":"approve","outputs":[{"internalType":"bool","name":"","type":"bool"}],"stateMutability":"nonpayable","type":"function"},{"inputs":[],"name":"available","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"balance","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"address","name":"account","type":"address"}],"name":"balanceOf","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"controller","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"decimals","outputs":[{"internalType":"uint8","name":"","type":"uint8"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"address","name":"spender","type":"address"},{"internalType":"uint256","name":"subtractedValue","type":"uint256"}],"name":"decreaseAllowance","outputs":[{"internalType":"bool","name":"","type":"bool"}],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"uint256","name":"_amount","type":"uint256"}],"name":"deposit","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[],"name":"depositAll","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[],"name":"earn","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[],"name":"getRatio","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"governance","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"address","name":"reserve","type":"address"},{"internalType":"uint256","name":"amount","type":"uint256"}],"name":"harvest","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"address","name":"spender","type":"address"},{"internalType":"uint256","name":"addedValue","type":"uint256"}],"name":"increaseAllowance","outputs":[{"internalType":"bool","name":"","type":"bool"}],"stateMutability":"nonpaya
ble","type":"function"},{"inputs":[],"name":"max","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"min","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"name","outputs":[{"internalType":"string","name":"","type":"string"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"address","name":"_controller","type":"address"}],"name":"setController","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"address","name":"_governance","type":"address"}],"name":"setGovernance","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"uint256","name":"_min","type":"uint256"}],"name":"setMin","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"address","name":"_timelock","type":"address"}],"name":"setTimelock","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[],"name":"symbol","outputs":[{"internalType":"string","name":"","type":"string"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"timelock","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"token","outputs":[{"internalType":"contract IERC20","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"totalSupply","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"address","name":"recipient","type":"address"},{"internalType":"uint256","name":"amount","type":"uint256"}],"name":"transfer","outputs":[{"internalType":"bool","name":"","type":"bool"}],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"address","name":"sender","type":"address"},{"internalType":"address","name":"recipient","type":"address"},{"internalType":"uint256","name":"amount","type":"uint256"}],"name":"transferFrom","outputs":[{"internalType":"bool","name":"","type":"bool"}],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"uint256","name":"_shares","type":"uint256"}],"name":"withdraw","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[],"name":"withdrawAll","outputs":[],"stateMutability":"nonpayable","type":"function"}]'
defaultABI2 = '[{"inputs":[{"internalType":"address","name":"_token","type":"address"},{"internalType":"address","name":"_governance","type":"address"}],"stateMutability":"nonpayable","type":"constructor"},{"anonymous":false,"inputs":[{"indexed":false,"internalType":"uint256","name":"reward","type":"uint256"}],"name":"RewardAdded","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"internalType":"address","name":"user","type":"address"},{"indexed":false,"internalType":"uint256","name":"reward","type":"uint256"}],"name":"RewardPaid","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"internalType":"address","name":"user","type":"address"},{"indexed":false,"internalType":"uint256","name":"amount","type":"uint256"}],"name":"Staked","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"internalType":"address","name":"user","type":"address"},{"indexed":false,"internalType":"uint256","name":"amount","type":"uint256"}],"name":"Withdrawn","type":"event"},{"inputs":[],"name":"DISTRIBUTION","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"DURATION","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"SNOWBALL","outputs":[{"internalType":"contract IERC20","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"SNOWCONE","outputs":[{"internalType":"contract IERC20","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"TOKEN","outputs":[{"internalType":"contract IERC20","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"TREASURY","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"acceptGovernance","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"address","name":"account","type":"address"}],"name":"balanceOf","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"address","name":"_distribution","type":"address"}],"name":"changeDistribution","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"uint256","name":"amount","type":"uint256"}],"name":"deposit","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[],"name":"depositAll","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"uint256","name":"amount","type":"uint256"},{"internalType":"address","name":"account","type":"address"}],"name":"depositFor","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"address","name":"account","type":"address"}],"name":"derivedBalance","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"address","name":"","type":"address"}],"name":"derivedBalances","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"derivedSupply","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"address","name":"account","type":"address"}],"name":"earned","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"in
puts":[],"name":"exit","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[],"name":"getReward","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[],"name":"getRewardForDuration","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"governance","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"address","name":"account","type":"address"}],"name":"kick","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[],"name":"lastTimeRewardApplicable","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"lastUpdateTime","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"uint256","name":"reward","type":"uint256"}],"name":"notifyRewardAmount","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[],"name":"pendingGovernance","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"periodFinish","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"rewardPerToken","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"rewardPerTokenStored","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"rewardRate","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"address","name":"","type":"address"}],"name":"rewards","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"address","name":"_governance","type":"address"}],"name":"setGovernance","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[],"name":"totalSupply","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"address","name":"","type":"address"}],"name":"userRewardPerTokenPaid","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"uint256","name":"amount","type":"uint256"}],"name":"withdraw","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[],"name":"withdrawAll","outputs":[],"stateMutability":"nonpayable","type":"function"}]'
class SnowballAPRFetcher(DappAPRFetcher):
"""
APR fetcher for Snowball gauges on the Avalanche blockchain
"""
def __init__(self):
super().__init__("avalanche", Web3(Web3.HTTPProvider(blockchain_urls["avalanche"])))
self._gauges_contract = open_contract(self._web3, self._blockchain, "0x215D5eDEb6A6a3f84AE9d72962FEaCCdF815BF27")
self._token_contract = open_contract(self._web3, self._blockchain, self.dapp_token_address(self._web3))
# Map each pool token to its gauge contract (deprecated gauges are dropped below)
lst_tokens = self._gauges_contract.functions.tokens().call()
self._pools = {token_address: None for token_address in lst_tokens}
keys = list(self._pools.keys())
self._total_weight = 0
j = 0
url = 'https://api.snowapi.net/graphql'
myobj = {"query": 'query { SnowglobeContracts { pair, snowglobeAddress, gaugeAddress }}'}
not_deprecated_gauge_addresses = json.loads(requests.post(url, json=myobj).text)
not_deprecated_gauge_addresses = [d["gaugeAddress"].lower() for d in not_deprecated_gauge_addresses["data"]["SnowglobeContracts"]]
for p in keys:
weight = self._gauges_contract.functions.weights(self._web3.toChecksumAddress(p)).call()
pool_contract = open_contract(self._web3, self._blockchain, p, providedABI=defaultABI)
gauge_address = self._gauges_contract.functions.gauges(self._web3.toChecksumAddress(p)).call()
gauge_contract = open_contract(self._web3, self._blockchain, gauge_address, providedABI=defaultABI2)
if (
gauge_address.lower() not in not_deprecated_gauge_addresses
):
self._pools.pop(p)
else:
self._pools[p] = gauge_contract
self._total_weight += weight
def dapp_pools_infos(self, web3) -> List[Dict[str, Union[str, float]]]:
pools_infos = []
for p, p_contract in self._pools.items():
weight = self._gauges_contract.functions.weights(self._web3.toChecksumAddress(p)).call()
pool_contract = open_contract(self._web3, self._blockchain, p)
decimals_supply = pool_contract.functions.decimals().call()
pools_infos.append(
{
"total_staked": pool_contract.functions.balance().call() * 10**-decimals_supply,
"pool_address": pool_contract.functions.token().call(),
"alloc_point": weight,
}
)
return pools_infos
def dapp_token_address(self, web3) -> str:
return self._gauges_contract.functions.SNOWBALL().call()
def dapp_token_per_year(self, web3) -> float:
decimals = self._token_contract.functions.decimals().call()
token_per_year = sum([p_contract.functions.rewardRate().call() for p_contract in self._pools.values()]) * 10**(-decimals) * 3600 * 24 * 365
return token_per_year
def dapp_token_total_alloc(self, web3) -> int:
return self._total_weight
def dapp_token_price(self, web3) -> float:
return get_token_price_from_dexs(web3, self._blockchain, self.dapp_token_address(web3))
def additional_aprs(self, i: int, pool_info: Dict[str, Union[float, int, str]]) -> List[Tuple[str, float]]:
"""
keys = list(self._pools.keys())
p = keys[i]
pool_contract = open_contract(self._web3, self._blockchain, p)
gauge_address = self._gauges_contract.functions.gauges(self._web3.toChecksumAddress(p)).call()
traderjoe_fetch = TraderjoeAPRFetcher()
pangolin_fetch = PangolinV2APRFetcher()
lydia_fetch = LydiaAPRFetcher()
axial_fetch = AxialAPRFetcher()
"""
"""
print(p)
print(pool_contract.functions.balanceOf(self._web3.toChecksumAddress(gauge_address)).call())
print(pool_contract.functions.decimals().call())
print((pool_contract.functions.getRatio().call() * 10**-18))
print(self._gauges_contract.functions.gauges(self._web3.toChecksumAddress(keys[i])).call())
"""
return []
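# Illustrative usage sketch (not part of the original module); assumes a
# reachable Avalanche RPC endpoint and the DappAPRFetcher base interface:
# fetcher = SnowballAPRFetcher()
# print(fetcher.dapp_token_price(fetcher._web3))     # SNOB price via DEX quotes
# print(fetcher.dapp_token_per_year(fetcher._web3))  # yearly SNOB emissions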
| 163.066038 | 6,346 | 0.662308 | 1,697 | 17,285 | 6.639364 | 0.129641 | 0.069229 | 0.100648 | 0.104553 | 0.752197 | 0.725126 | 0.706399 | 0.640543 | 0.629981 | 0.598651 | 0 | 0.021192 | 0.063639 | 17,285 | 105 | 6,347 | 164.619048 | 0.674946 | 0.023199 | 0 | 0.054795 | 0 | 0.027397 | 0.76611 | 0.754233 | 0 | 0 | 0.002558 | 0 | 0 | 1 | 0.09589 | false | 0 | 0.164384 | 0.041096 | 0.356164 | 0.013699 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5df52dfede6cd41f53ace70176bc285faccfe8cc | 88,400 | py | Python | code/plots.py | Sandalmoth/dual-adaptation | 1052b47dbd3c473c406bb72d9ecd0693ca0c1f80 | [
"Zlib"
] | null | null | null | code/plots.py | Sandalmoth/dual-adaptation | 1052b47dbd3c473c406bb72d9ecd0693ca0c1f80 | [
"Zlib"
] | null | null | null | code/plots.py | Sandalmoth/dual-adaptation | 1052b47dbd3c473c406bb72d9ecd0693ca0c1f80 | [
"Zlib"
] | null | null | null | """
All kinds of plotting
- abcdiag: abc diagnostics
"""
import copy
import csv
import click
import h5py
import matplotlib.cm as cm
import matplotlib.gridspec as gridspec
from matplotlib import pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages
import numpy as np
from pyabc import History
from scipy.integrate import simps
from scipy.interpolate import PchipInterpolator as pchip
import toml
import simtools
# import cm_xml_to_matplotlib as cmx
# BUOR = cmx.make_cmap('blue-orange-div.xml')
class Rate:
def __init__(self, s, c, w, u, m):
"""
:param s float: shape parameter in R
:param c float: center parameter in (0, 1)
:param w float: width between function ends in R
:param u float: mode of function in R
:param m float: function maximum in R > 0
"""
self.u = u
self.w = w
self.m = m
self.c = c
self.a = s*c
self.b = s - self.a
self.factor = self.a**self.a * self.b**self.b * (self.a + self.b)**(-self.a - self.b)
def __call__(self, x):
y = (x/self.w - self.u/self.w + self.c)**self.a * (1 - (x/self.w - self.u/self.w + self.c))**self.b
y = self.m * y / self.factor
y[x <= self.u - self.c*self.w] = 0
y[x >= self.u - (self.c - 1)*self.w] = 0
return y
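# Illustrative sketch (not part of the original module): Rate is a shifted,
# rescaled beta-like bump with support (u - c*w, u + (1 - c)*w) and peak m at x = u:
# r = Rate(s=4.0, c=0.5, w=2.0, u=0.0, m=1.0)
# r(np.array([-1.5, 0.0, 1.5]))  # -> array([0., 1., 0.])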
class Noise:
def __init__(self, s):
"""
:param s float: standard deviation of normal distribution
"""
self.s = s
def __call__(self, x):
return 1/np.sqrt(2*np.pi*self.s**2) * \
np.exp(-x**2/(2*self.s**2))
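# Illustrative check (not part of the original module): Noise(s) is the
# zero-mean normal pdf, so it integrates to ~1 over a wide grid:
# x = np.linspace(-1, 1, 1001); simps(Noise(0.1)(x), x)  # -> ~1.0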
class Observation:
"""
Interpolated observation data: means and standard deviations over time
"""
def __init__(self):
pass
def __str__(self):
return str(self.obs)
def parse_observations(self, obsfile_up, obsfile_down):
"""
An observation file holds a probability density function
specified by a mean and sigma at a number of time coordinates.
The sigmas are used for a weighted least squares fit of the means from simulation.
:param obsfile_up path: path to csv of observations for parameter increase
:param obsfile_down path: path to csv of observations for parameter decrease
"""
self.obs = {
'up': {'t': [], 'x': [], 's': []},
'down': {'t': [], 'x': [], 's': []}
}
with open(obsfile_up, 'r') as obs_up:
rdr = csv.DictReader(obs_up)
for line in rdr:
self.obs['up']['t'].append(float(line['time']))
self.obs['up']['x'].append(float(line['param']))
self.obs['up']['s'].append(float(line['stdev']))
with open(obsfile_down, 'r') as obs_down:
rdr = csv.DictReader(obs_down)
for line in rdr:
self.obs['down']['t'].append(float(line['time']))
self.obs['down']['x'].append(float(line['param']))
self.obs['down']['s'].append(float(line['stdev']))
self.interpolators = {
'up': {
'x': pchip(self.obs['up']['t'], self.obs['up']['x'], extrapolate=True),
's': pchip(self.obs['up']['t'], self.obs['up']['s'], extrapolate=True)
},
'down': {
'x': pchip(self.obs['down']['t'], self.obs['down']['x'], extrapolate=True),
's': pchip(self.obs['down']['t'], self.obs['down']['s'], extrapolate=True)
}
}
def get_instance(self, time_up, time_down):
"""
Get means and sigmas at specified times.
:param time_up np.array: time axis for realization of up trend
:param time_down np.array: time axis for realization of down trend
:returns: dict with keys 'x_up', 's_up', 'x_down' and 's_down'
"""
instance = {}
for time, obs_set in zip([time_up, time_down], ['up', 'down']):
obs_t = np.array(self.obs[obs_set]['t'])
obs_s = np.array(self.obs[obs_set]['s'])
obs_x = np.array(self.obs[obs_set]['x'])
            # are we outside observations?
            # if so, pad with boundary values to improve interpolation and warn user
            outside = False
            if time[0] < obs_t[0]:
                print('Warning: requesting observation interpolation outside observations')
                obs_t = np.insert(obs_t, 0, time[0])
                obs_x = np.insert(obs_x, 0, obs_x[0])
                obs_s = np.insert(obs_s, 0, obs_s[0])
                outside = True
            if time[-1] > obs_t[-1]:
                print('Warning: requesting observation interpolation outside observations')
                obs_t = np.append(obs_t, time[-1])
                obs_x = np.append(obs_x, obs_x[-1])
                obs_s = np.append(obs_s, obs_s[-1])
                outside = True
            interp_x = self.interpolators[obs_set]['x']
            interp_s = self.interpolators[obs_set]['s']
            if outside:
                # rebuild the interpolators locally so the padded data takes effect
                interp_x = pchip(obs_t, obs_x, extrapolate=True)
                interp_s = pchip(obs_t, obs_s, extrapolate=True)
            instance['x_' + obs_set] = interp_x(time)
            instance['s_' + obs_set] = interp_s(time)
return instance
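# Illustrative sketch (not used by the pipeline; file names are hypothetical):
#     obs = Observation()
#     obs.parse_observations('obs_up.csv', 'obs_down.csv')
#     inst = obs.get_instance(np.linspace(0, 80, 100), np.linspace(0, 80, 100))
#     inst['x_up']  # interpolated means on the requested time axis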
@click.group()
def main():
"""
Plotting and data generation tools
"""
pass
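# The commands below are click subcommands; a typical invocation might look
# like this (a sketch with hypothetical file names):
#     python plots.py abcdiag -p params.toml -u obs_up.csv -d obs_down.csv \
#         -b abc.db --save abcdiag.pdf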
@main.command()
@click.option('-p', '--paramfile', type=click.Path())
@click.option('-u', '--obsfile-up', type=click.Path())
@click.option('-d', '--obsfile-down', type=click.Path())
@click.option('-b', '--dbfile', type=click.Path())
@click.option('--save', type=click.Path(), default=None)
@click.option('-i', '--history-id', type=int, default=1)
def abcdiag(paramfile, obsfile_up, obsfile_down, dbfile, save, history_id):
"""
Diagnostic plots for examining how abc fitting worked
"""
db_path = 'sqlite:///' + dbfile
abc_history = History(db_path)
abc_history.id = history_id
simtools.PARAMS = toml.load(paramfile)
if save is not None:
pdf_out = PdfPages(save)
### ABC SIMULATION PARAMETERS ###
fig, axs = plt.subplots(nrows=3, sharex=True)
t_axis = list(range(abc_history.max_t + 1))
populations = abc_history.get_all_populations()
populations = populations[populations.t >= 0]
axs[0].plot(t_axis, populations['particles'])
axs[1].plot(t_axis, populations['epsilon'])
axs[2].plot(t_axis, populations['samples'])
axs[0].set_title('ABC parameters per generation')
axs[0].set_ylabel('Particles')
axs[1].set_ylabel('Epsilon')
axs[2].set_ylabel('Samples')
axs[-1].set_xlabel('Generation (t)')
fig.set_size_inches(8, 5)
if save is not None:
pdf_out.savefig()
else:
plt.show()
    ### PLOT SHOWING PARAMETERS WITH CONFIDENCE OVER GENERATIONS ###
fig, axs = plt.subplots(nrows=6, ncols=2)
t_axis = np.arange(abc_history.max_t + 1)
quartile1 = []
medians = []
quartile3 = []
parameters = ['s', 'c', 'w', 'n', 'm', 'r']
for i, generation in enumerate(t_axis):
abc_data, __ = abc_history.get_distribution(m=0, t=generation)
data = [abc_data[x] for x in parameters]
t_quartile1, t_medians, t_quartile3 = np.percentile(
data, [25, 50, 75], axis=1
)
quartile1.append(t_quartile1)
medians.append(t_medians)
quartile3.append(t_quartile3)
last_distro = data
if i == 0:
first_distro = data
quartile1 = np.array(quartile1)
medians = np.array(medians)
quartile3 = np.array(quartile3)
for i, param in enumerate(parameters):
axs[i][0].plot(t_axis, medians[:, i])
axs[i][0].fill_between(t_axis, quartile1[:, i], quartile3[:, i],
alpha=0.3, color='gray')
axs[i][0].set_ylabel(param)
axs[i][1].hist(first_distro[i], bins=32, density=True)
axs[i][1].hist(last_distro[i], bins=32, density=True)
axs[-1][0].set_xlabel('Generation (t)')
fig.set_size_inches(8, 8)
plt.tight_layout()
if save is not None:
pdf_out.savefig()
else:
plt.show()
if save is not None:
pdf_out.close()
@main.command()
@click.option('-p', '--paramfile', type=click.Path())
@click.option('-u', '--obsfile-up', type=click.Path())
@click.option('-d', '--obsfile-down', type=click.Path())
@click.option('-b', '--dbfile', type=click.Path())
@click.option('--save', type=click.Path(), default=None)
@click.option('-i', '--history-id', type=int, default=1)
def abcfit(paramfile, obsfile_up, obsfile_down, dbfile, save, history_id):
"""
Plots showing off the fit from abc
"""
db_path = 'sqlite:///' + dbfile
abc_history = History(db_path)
abc_history.id = history_id
simtools.PARAMS = toml.load(paramfile)
if save is not None:
pdf_out = PdfPages(save)
    ### PLOT OF RATE ###
abc_data, __ = abc_history.get_distribution(m=0,
t=abc_history.max_t)
parameters = ['s', 'c', 'w', 'n', 'm', 'r']
params = {k: np.median(abc_data[k]) for k in parameters}
f_rate_1 = Rate(params['s'], params['c'], params['w'], simtools.PARAMS['optimum_normal'], params['m'])
f_rate_2 = Rate(params['s'], params['c'], params['w'], simtools.PARAMS['optimum_treatment'], params['m']*params['r'])
f_noise = Noise(params['n'])
# x_width = simtools.PARAMS['parameter_range'][1] - \
# simtools.PARAMS['parameter_range'][0]
# x_axis = np.linspace(-x_width/2, x_width/2, simtools.PARAMS['parameter_points'])
x_axis = np.linspace(*simtools.PARAMS['parameter_range'], simtools.PARAMS['parameter_points'])
fig, axs = plt.subplots()
axs.plot(x_axis, f_rate_1(x_axis), color='k', linestyle='-', linewidth='1.0', label='Mutant or untreated normal cell')
axs.plot(x_axis, f_rate_2(x_axis), color='k', linestyle='--', linewidth='1.0', label='Normal cell with treatment')
axs.legend(frameon=False)
axs.set_xlabel('$x$')
    axs.set_ylabel(r'$\lambda(x)$')
axs.set_ylim(axs.get_ylim()[0], axs.get_ylim()[1]*1.2)
fig.set_size_inches(3.8, 3.8)
plt.tight_layout()
if save is not None:
pdf_out.savefig()
else:
plt.show()
### HEATMAP OF RISE AND FALL WITH MEAN AND OBSERVATION ###
fig, axs = plt.subplots(nrows=2)
sim = {}
f_rate_up = Rate(params['s'], params['c'], params['w'],
simtools.PARAMS['optimum_treatment'], params['m']*params['r'])
f_rate_down = Rate(params['s'], params['c'], params['w'],
simtools.PARAMS['optimum_normal'], params['m'])
parameter_range = simtools.PARAMS['parameter_range'][1] - \
simtools.PARAMS['parameter_range'][0]
observation = Observation()
observation.parse_observations(obsfile_up, obsfile_down)
obs = observation.get_instance(
simtools.get_time_axis(simtools.PARAMS['time_range_up'][1],
simtools.PARAMS['time_points_up']),
simtools.get_time_axis(simtools.PARAMS['time_range_down'][1],
simtools.PARAMS['time_points_down'])
)
f_initial = simtools.get_stationary_distribution_function(
f_rate_down,
f_noise,
simtools.PARAMS['parameter_range'],
simtools.PARAMS['parameter_points']
)
time_axis, parameter_axis, parameters = simtools.simulate_pde(
f_initial,
f_rate_up,
f_noise,
simtools.PARAMS['time_range_up'][1],
simtools.PARAMS['time_points_up'],
simtools.PARAMS['parameter_range'],
simtools.PARAMS['parameter_points'],
simtools.PARAMS['abc_convolution_method']
)
sim['x_up'] = np.array(
[np.sum(parameters[:, i]*parameter_axis) / \
parameter_axis.size*parameter_range \
for i in range(parameters.shape[1])]
)
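    # the list comprehension above approximates the mean E[x](t_i) as a
    # Riemann sum: sum(p_j * x_j) * dx with dx = parameter_range / parameter_axis.size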
axs[0].plot(sim['x_up'], time_axis, color='k',
linewidth=1.0)
axs[0].imshow(
np.transpose(parameters),
aspect=parameter_range/simtools.PARAMS['time_range_up'][1],
extent=[np.min(parameter_axis), np.max(parameter_axis), 0,
simtools.PARAMS['time_range_up'][1]],
cmap=cm.viridis,
origin='lower'
)
axs[0].plot(obs['x_up'], time_axis, linewidth=1.0, color='r')
f_initial = simtools.get_stationary_distribution_function(
f_rate_up,
f_noise,
simtools.PARAMS['parameter_range'],
simtools.PARAMS['parameter_points']
)
time_axis, parameter_axis, parameters = simtools.simulate_pde(
f_initial,
f_rate_down,
f_noise,
simtools.PARAMS['time_range_down'][1],
simtools.PARAMS['time_points_down'],
simtools.PARAMS['parameter_range'],
simtools.PARAMS['parameter_points']
)
sim['x_down'] = np.array(
[np.sum(parameters[:, i]*parameter_axis) / \
parameter_axis.size*parameter_range \
for i in range(parameters.shape[1])]
)
axs[1].plot(sim['x_down'], time_axis, color='k',
linewidth=1.0)
axs[1].imshow(
np.transpose(parameters),
aspect=parameter_range/simtools.PARAMS['time_range_down'][1],
extent=[np.min(parameter_axis), np.max(parameter_axis), 0,
simtools.PARAMS['time_range_down'][1]],
cmap=cm.viridis,
origin='lower'
)
axs[1].plot(obs['x_down'], time_axis, linewidth=1.0, color='r')
fig.set_size_inches(5, 8)
plt.tight_layout()
if save is not None:
pdf_out.savefig()
else:
plt.show()
### HEATMAP OF RISE AND FALL WITH MEAN AND OBSERVATION ###
### HORIZONTAL NICE VERSION ###
fig, axs = plt.subplots(ncols=2, sharey=True)
sim = {}
f_rate_up = Rate(params['s'], params['c'], params['w'],
simtools.PARAMS['optimum_treatment'], params['m']*params['r'])
f_rate_down = Rate(params['s'], params['c'], params['w'],
simtools.PARAMS['optimum_normal'], params['m'])
parameter_range = simtools.PARAMS['parameter_range'][1] - \
simtools.PARAMS['parameter_range'][0]
extra_width = simtools.PARAMS['optimum_treatment'] - simtools.PARAMS['optimum_normal']
narrow_range = simtools.PARAMS['optimum_treatment'] - simtools.PARAMS['optimum_normal'] + 2*extra_width
# parameter_middle = (simtools.PARAMS['optimum_treatment'] + simtools.PARAMS['optimum_normal'])/2
parameter_bottom = int(simtools.PARAMS['parameter_points']*(simtools.PARAMS['optimum_normal'] - extra_width - simtools.PARAMS['parameter_range'][0])/parameter_range)
parameter_top = int(simtools.PARAMS['parameter_points']*(simtools.PARAMS['optimum_treatment'] + extra_width - simtools.PARAMS['parameter_range'][0])/parameter_range)
print(parameter_bottom, parameter_top)
observation = Observation()
observation.parse_observations(obsfile_up, obsfile_down)
obs = observation.get_instance(
simtools.get_time_axis(simtools.PARAMS['time_range_up'][1],
simtools.PARAMS['time_points_up']),
simtools.get_time_axis(simtools.PARAMS['time_range_down'][1],
simtools.PARAMS['time_points_down'])
)
f_initial = simtools.get_stationary_distribution_function(
f_rate_down,
f_noise,
simtools.PARAMS['parameter_range'],
simtools.PARAMS['parameter_points']
)
time_axis, parameter_axis, parameters = simtools.simulate_pde(
f_initial,
f_rate_up,
f_noise,
simtools.PARAMS['time_range_up'][1],
simtools.PARAMS['time_points_up'],
simtools.PARAMS['parameter_range'],
simtools.PARAMS['parameter_points'],
simtools.PARAMS['abc_convolution_method']
)
sim['x_up'] = np.array(
[np.sum(parameters[:, i]*parameter_axis) / \
parameter_axis.size*parameter_range \
for i in range(parameters.shape[1])]
)
parameter_axis = parameter_axis[parameter_bottom:parameter_top]
parameters = parameters[parameter_bottom:parameter_top, :]
print(np.min(parameter_axis), np.max(parameter_axis))
axs[0].plot(time_axis, sim['x_up'], color='k',
linewidth=1.0)
axs[0].imshow(
parameters,
# aspect=simtools.PARAMS['time_range_up'][1]/parameter_range,
# aspect=80/parameter_range,
aspect=80/narrow_range,
extent=[0, simtools.PARAMS['time_range_up'][1],
np.min(parameter_axis), np.max(parameter_axis)],
# extent=[0, simtools.PARAMS['time_range_up'][1],
# -1, 4],
cmap=cm.magma,
origin='lower'
)
axs[0].plot(time_axis, obs['x_up'], linewidth=1.0, color='k',
linestyle='--')
axs[0].set_ylim(np.min(parameter_axis), np.max(parameter_axis))
f_initial = simtools.get_stationary_distribution_function(
f_rate_up,
f_noise,
simtools.PARAMS['parameter_range'],
simtools.PARAMS['parameter_points']
)
time_axis, parameter_axis, parameters = simtools.simulate_pde(
f_initial,
f_rate_down,
f_noise,
simtools.PARAMS['time_range_down'][1],
simtools.PARAMS['time_points_down'],
simtools.PARAMS['parameter_range'],
simtools.PARAMS['parameter_points']
)
sim['x_down'] = np.array(
[np.sum(parameters[:, i]*parameter_axis) / \
parameter_axis.size*parameter_range \
for i in range(parameters.shape[1])]
)
parameter_axis = parameter_axis[parameter_bottom:parameter_top]
parameters = parameters[parameter_bottom:parameter_top, :]
axs[1].plot(time_axis, sim['x_down'], color='k',
linewidth=1.0, label="Mean (Simulated)")
img = axs[1].imshow(
parameters,
# aspect=simtools.PARAMS['time_range_up'][1]/parameter_range,
# aspect=80/parameter_range,
aspect=80/narrow_range,
extent=[0, simtools.PARAMS['time_range_down'][1],
np.min(parameter_axis), np.max(parameter_axis)],
cmap=cm.magma,
origin='lower'
)
axs[1].plot(time_axis, obs['x_down'], linewidth=1.0, color='k',
label="Mean (Reference)", linestyle='--')
axs[1].set_ylim(np.min(parameter_axis), np.max(parameter_axis))
cbr = fig.colorbar(img, ax=axs[1], fraction=0.046, pad=0.04)
cbr.set_label('Parameter density', labelpad=-15)
cbr.set_ticks([np.min(parameters), np.max(parameters)])
cbr.set_ticklabels(['Low', 'High'])
axs[0].set_ylabel('$x$')
axs[0].set_xlabel('Time [days]')
axs[1].set_xlabel('Time [days]')
axs[1].legend(loc='center left', bbox_to_anchor=(1.6, 0.5), frameon=False)
fig.set_size_inches(6.2, 2.5)
plt.tight_layout()
if save is not None:
pdf_out.savefig()
else:
plt.show()
if save is not None:
pdf_out.close()
@main.command()
@click.option('-p', '--paramfile', type=click.Path())
@click.option('-b', '--dbfile', type=click.Path())
@click.option('-o', '--outfile', type=click.Path())
@click.option('-i', '--history-id', type=int, default=1)
def generate_dataset_mpi(paramfile, dbfile, outfile, history_id):
"""
Generate a field using the pde for further c++ mpi simulation
"""
db_path = 'sqlite:///' + dbfile
abc_history = History(db_path)
abc_history.id = history_id
simtools.PARAMS = toml.load(paramfile)
abc_data, __ = abc_history.get_distribution(m=0, t=abc_history.max_t)
parameters = ['s', 'c', 'w', 'n', 'm', 'r']
params = {k: np.median(abc_data[k]) for k in parameters}
f_noise = Noise(params['n'])
simtools.PARAMS = toml.load(paramfile)
f_rate_up = Rate(params['s'], params['c'], params['w'],
simtools.PARAMS['optimum_treatment'], params['m']*params['r'])
f_rate_down = Rate(params['s'], params['c'], params['w'],
simtools.PARAMS['optimum_normal'], params['m'])
f_initial = simtools.get_stationary_distribution_function(
f_rate_down,
f_noise,
simtools.PARAMS['parameter_range'],
simtools.PARAMS['parameter_points']
)
time_axis, parameter_axis, parameter_density = simtools.simulate_pde(
f_initial,
f_rate_up,
f_noise,
simtools.PARAMS['time_range_up'][1],
simtools.PARAMS['time_points_up'],
simtools.PARAMS['parameter_range'],
simtools.PARAMS['parameter_points']
)
# find the child distribution at each point in time
child_density = np.zeros(shape=parameter_density.shape)
for i in range(parameter_density.shape[1]):
child_density[:, i] = simtools.get_child_distribution(parameter_density[:, i],
f_rate_up, f_noise,
simtools.PARAMS['parameter_range'])
# find growth rate at each point in time
growth_rate = np.zeros(shape=time_axis.shape)
for i in range(parameter_density.shape[1]):
growth_rate[i] = simps(parameter_density[:, i]*f_rate_up(parameter_axis), x=parameter_axis)
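    # i.e. growth_rate[i] approximates the integral of p(x, t_i) * lambda(x) dx,
    # the population-averaged division rate at time t_i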
# write parameter density hdf5
out = h5py.File(outfile, 'w')
gp_pd = out.create_group('parameter_density')
gp_pd['time_axis'] = time_axis
gp_pd['parameter_axis'] = parameter_axis
# gp_pd['parameter_density'] = parameter_density
gp_pd['parameter_density'] = child_density
gp_pd['growth_rate'] = growth_rate
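    # a quick way to inspect the written file (a sketch; the name is hypothetical):
    #     with h5py.File('dataset.h5', 'r') as f:
    #         print(f['parameter_density/growth_rate'][:])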
# write rate function data to simulation config toml
simtools.PARAMS['mpi_noise_function_sigma'] = params['n']
simtools.PARAMS['mpi_rate_function_width'] = params['w']
simtools.PARAMS['mpi_rate_function_center'] = params['c']
simtools.PARAMS['mpi_rate_function_shape'] = params['s']
simtools.PARAMS['mpi_rate_function_max'] = params['m']
simtools.PARAMS['mpi_rate_function_ratio'] = params['r']
simtools.PARAMS['mpi_death_rate'] = growth_rate[-1]
with open(paramfile, 'w') as params_toml:
toml.dump(simtools.PARAMS, params_toml)
@main.command()
@click.option('-i', '--infile', type=click.Path())
@click.option('--save', type=click.Path(), default=None)
def plot_dataset(infile, save):
"""
Plots for examining input to mpi simulator
"""
def lr(x):
return abs(x[-1] - x[0])
data = h5py.File(infile, 'r')
gp_pd = data['parameter_density']
if save is not None:
pdf_out = PdfPages(save)
parameter_density = np.array(gp_pd['parameter_density'])
parameter_axis = np.array(gp_pd['parameter_axis'])
time_axis = np.array(gp_pd['time_axis'])
# child density plot
fig, axs = plt.subplots()
fig.set_size_inches(4, 4)
img = axs.imshow(
np.transpose(parameter_density),
extent=(np.min(parameter_axis), np.max(parameter_axis),
np.min(time_axis), np.max(time_axis)),
aspect=lr(parameter_axis)/lr(time_axis),
cmap=cm.viridis,
origin='lower'
)
cbr = fig.colorbar(img, ax=axs, fraction=0.046, pad=0.04)
cbr.set_label('Parameter density', labelpad=-15)
cbr.set_ticks([np.min(parameter_density), np.max(parameter_density)])
cbr.set_ticklabels(['Low', 'High'])
axs.set_ylabel('Time')
axs.set_xlabel('Parameter')
axs.grid()
plt.tight_layout()
if save is not None:
pdf_out.savefig()
else:
plt.show()
# child density first vs last
fig, axs = plt.subplots()
fig.set_size_inches(4, 3)
axs.plot(parameter_axis, parameter_density[:, 0], color='k', linewidth=1.0, label='t = 0')
axs.plot(parameter_axis, parameter_density[:, -1], color='k', linewidth=1.0, linestyle='--',
label='t = ' + str(time_axis[-1]))
    axs.set_xlabel('Parameter')
axs.set_ylabel('Parameter density')
axs.legend()
plt.tight_layout()
if save is not None:
pdf_out.savefig()
else:
plt.show()
# growth rate over time
growth_rate = np.array(gp_pd['growth_rate'])
fig, axs = plt.subplots()
fig.set_size_inches(4, 3)
axs.plot(time_axis, growth_rate, color='k', linewidth=1.0)
axs.set_xlabel('Time')
axs.set_ylabel('Growth rate')
plt.tight_layout()
if save is not None:
pdf_out.savefig()
else:
plt.show()
if save is not None:
pdf_out.close()
def moving_mean(vector, window):
"""
Calculate moving mean of array-like object
Reduces window size near edges
"""
extent = (window - 1) / 2
average = []
for i, __ in enumerate(vector):
local_extent = extent
while not (i - local_extent >= 0 and i + local_extent + 1 <= len(vector)):
local_extent -= 1
imin = int(i - local_extent) if i - local_extent > 0 else 0
imax = int(i + local_extent + 1) if i + local_extent + 1 < len(vector) else len(vector)
        sample = vector[imin:imax]  # no sorting needed for a mean
average.append(sum(sample) / len(sample))
return np.array(average)
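# Illustrative sketch: moving_mean([1, 2, 3, 4, 5], 3) averages over a 3-wide
# window that shrinks to a single point at the edges, so this already-linear
# input comes back unchanged as array([1., 2., 3., 4., 5.])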
@main.command()
@click.option('-p', '--paramfile', type=click.Path())
@click.option('-i', '--infile', type=click.Path())
@click.option('-o', '--outfile', type=click.Path())
@click.option('--save', type=click.Path(), default=None)
def mpiout(paramfile, infile, outfile, save):
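    """
    Plots for examining output of the mpi simulator
    """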
data = h5py.File(outfile, 'r')
gp_result = data['result']
indata = h5py.File(infile, 'r')
gp_input = indata['parameter_density']
simtools.PARAMS = toml.load(paramfile)
if save is not None:
pdf_out = PdfPages(save)
# escape probability as a function of time of mutation
fig, axs = plt.subplots()
time_axis = simtools.get_time_axis(simtools.PARAMS['time_range_up'][1],
simtools.PARAMS['time_points_up'])
escaped_sum = np.sum(gp_result['escaped'], axis=0) / \
simtools.PARAMS['mpi_simulations_per_time_point']
axs.plot(time_axis, escaped_sum, color='lightgrey', linewidth='0.5')
axs.plot(time_axis, moving_mean(escaped_sum, 101), color='k', linewidth='1.0')
axs.set_xlabel('Time of mutation')
axs.set_ylabel('Probability of a mutant reaching ' + \
str(simtools.PARAMS['mpi_max_population_size']) + ' cells')
if save is not None:
pdf_out.savefig()
else:
plt.show()
# mutation vulnerability as a function of time of mutation
fig, axs = plt.subplots()
time_axis = simtools.get_time_axis(simtools.PARAMS['time_range_up'][1],
simtools.PARAMS['time_points_up'])
escaped_sum = np.sum(gp_result['escaped'], axis=0) / \
simtools.PARAMS['mpi_simulations_per_time_point']
growth_rate = gp_input['growth_rate']
axs.plot(time_axis, escaped_sum*growth_rate, color='lightgrey', linewidth='0.5')
axs.plot(time_axis, moving_mean(escaped_sum*growth_rate, 101), color='k',
linewidth='1.0', label='Mutation risk')
axs_cum = axs.twinx()
axs_cum.plot(time_axis, np.cumsum(escaped_sum*growth_rate), color='k',
linestyle='--', linewidth='1.0')
# empty curve drawn on first axis for legend purposes
axs.plot([], [], color='k',
linewidth='1.0', linestyle='--', label='Cumulative mutation risk')
axs.set_xlabel('Time of mutation')
axs.set_ylim(0, axs.get_ylim()[1])
axs.set_yticks([0])
axs_cum.set_ylim(0, axs_cum.get_ylim()[1])
axs_cum.set_yticks([0])
axs.legend()
if save is not None:
pdf_out.savefig()
else:
plt.show()
# plot of growth rate, escape probability and mutation vulnerability all in one
fig, axs = plt.subplots()
time_axis = simtools.get_time_axis(simtools.PARAMS['time_range_up'][1],
simtools.PARAMS['time_points_up'])
escaped_sum = np.sum(gp_result['escaped'], axis=0) / \
simtools.PARAMS['mpi_simulations_per_time_point']
growth_rate = gp_input['growth_rate']
axs.plot(time_axis, escaped_sum, color='orange', linewidth='0.5', alpha=0.5)
axs.plot(time_axis, moving_mean(escaped_sum, 101), color='orange', linewidth='1.0')
axs_rate = axs.twinx()
axs_rate.plot(time_axis, growth_rate, color='blue', linewidth=1.0)
axs_risk = axs.twinx()
axs_risk.plot(time_axis, escaped_sum*growth_rate, color='lightgrey', linewidth='0.5')
axs_risk.plot(time_axis, moving_mean(escaped_sum*growth_rate, 101), color='k',
linewidth='1.0', label='Mutation risk')
axs_cum = axs.twinx()
axs_cum.plot(time_axis, np.cumsum(escaped_sum*growth_rate), color='k',
linestyle='--', linewidth='1.0')
# empty curves drawn on first axis for legend purposes
axs.plot([], [], color='orange',
linewidth='1.0', linestyle='-', label='Probability of reaching ' + str(simtools.PARAMS['mpi_max_population_size']) + ' cells')
axs.plot([], [], color='blue',
linewidth='1.0', linestyle='-', label='Normal cell average growth rate')
axs.plot([], [], color='k',
linewidth='1.0', linestyle='-', label='Mutation risk')
axs.plot([], [], color='k',
linewidth='1.0', linestyle='--', label='Cumulative mutation risk')
axs.set_ylabel('Probability of a mutant reaching ' + \
str(simtools.PARAMS['mpi_max_population_size']) + ' cells')
axs_rate.set_ylabel('Normal cell growth rate')
axs.set_xlabel('Time of mutation')
axs.set_ylim(0, axs.get_ylim()[1])
axs_rate.set_ylim(0, axs_rate.get_ylim()[1])
axs_risk.set_ylim(0, axs_risk.get_ylim()[1])
axs_risk.set_yticks([0])
axs_cum.set_ylim(0, axs_cum.get_ylim()[1])
axs_cum.set_yticks([0])
axs.legend(loc='lower right', frameon=False)
if save is not None:
pdf_out.savefig()
else:
plt.show()
# plot of growth rate, escape probability and mutation vulnerability all in one
# small multiples version
# fig, axs = plt.subplots(nrows=3)
fig = plt.figure(constrained_layout=True)
fig.set_size_inches(7, 4)
gs = gridspec.GridSpec(ncols=2, nrows=2, figure=fig)
axs = []
axs.append(fig.add_subplot(gs[:, 0]))
axs.append(fig.add_subplot(gs[0, 1]))
axs.append(fig.add_subplot(gs[1, 1]))
time_axis = simtools.get_time_axis(simtools.PARAMS['time_range_up'][1],
simtools.PARAMS['time_points_up'])
escaped_sum = np.sum(gp_result['escaped'], axis=0) / \
simtools.PARAMS['mpi_simulations_per_time_point']
growth_rate = gp_input['growth_rate']
axs[0].plot(time_axis, escaped_sum, color='orange', linewidth='0.4', alpha=0.5)
axs[0].plot(time_axis, moving_mean(escaped_sum, 101), color='orange', linewidth='1.0')
axs_rate = axs[0].twinx()
axs_rate.plot(time_axis, growth_rate, color='blue', linewidth=1.0)
axs[1].plot(time_axis, escaped_sum*growth_rate, color='lightgrey', linewidth='0.5')
axs[1].plot(time_axis, moving_mean(escaped_sum*growth_rate, 101), color='k',
linewidth='1.0', label='Mutation risk')
axs[2].plot(time_axis, np.cumsum(escaped_sum*growth_rate), color='k',
linestyle='-', linewidth='1.0')
# empty curves drawn on first axis for legend purposes
axs[0].plot([], [], color='orange',
linewidth='1.0', linestyle='-', label='Probability of reaching ' + str(simtools.PARAMS['mpi_max_population_size']) + ' cells')
axs[0].plot([], [], color='blue',
linewidth='1.0', linestyle='-', label='Normal cell average growth rate')
# axs.plot([], [], color='k',
# linewidth='1.0', linestyle='-', label='Mutation risk')
# axs.plot([], [], color='k',
# linewidth='1.0', linestyle='--', label='Cumulative mutation risk')
axs[0].set_ylabel('Probability of a new mutant reaching ' + \
str(simtools.PARAMS['mpi_max_population_size']) + ' cells')
axs_rate.set_ylabel('Normal cell growth rate')
axs[1].set_ylabel('Mutation risk')
axs[2].set_ylabel('Cumulative risk')
for i in range(3):
axs[i].set_xlabel('Time')
axs[0].set_ylim(0, axs[0].get_ylim()[1])
axs_rate.set_ylim(0, axs_rate.get_ylim()[1])
axs[1].set_ylim(0, axs[1].get_ylim()[1])
axs[1].set_yticks([0])
axs[2].set_ylim(0, axs[2].get_ylim()[1])
axs[2].set_yticks([0])
axs[0].tick_params(axis='y', colors='orange')
axs_rate.tick_params(axis='y', colors='blue')
# axs[0].legend(frameon=False)
if save is not None:
pdf_out.savefig()
else:
plt.show()
    # plot of growth rate, escape probability, mutation risk and survival function
# small multiples version
# fig, axs = plt.subplots(nrows=3)
mutation_probability = 1e-7
fig = plt.figure(constrained_layout=True)
fig.set_size_inches(7, 4)
gs = gridspec.GridSpec(ncols=2, nrows=2, figure=fig)
axs = []
axs.append(fig.add_subplot(gs[:, 0]))
axs.append(fig.add_subplot(gs[0, 1]))
axs.append(fig.add_subplot(gs[1, 1]))
time_axis = simtools.get_time_axis(simtools.PARAMS['time_range_up'][1],
simtools.PARAMS['time_points_up'])
escaped_sum = np.sum(gp_result['escaped'], axis=0) / \
simtools.PARAMS['mpi_simulations_per_time_point']
growth_rate = gp_input['growth_rate']
axs[0].plot(time_axis, escaped_sum, color='orange', linewidth='0.4', alpha=0.5)
axs[0].plot(time_axis, moving_mean(escaped_sum, 101), color='orange', linewidth='1.0')
axs_rate = axs[0].twinx()
axs_rate.plot(time_axis, growth_rate, color='blue', linewidth=1.0)
axs[1].plot(time_axis, escaped_sum*growth_rate*mutation_probability, color='lightgrey', linewidth='0.5')
axs[1].plot(time_axis, moving_mean(escaped_sum*growth_rate*mutation_probability, 101), color='k',
linewidth='1.0', label='Mutation risk')
# calculate survivor function
# rate = lambda x: 0.01/(1 + np.exp(-0.1*(x - 20)))
    dt = (np.max(time_axis) - np.min(time_axis))/len(time_axis)
print(time_axis[100], dt*100)
print(time_axis[500], dt*500)
# time = np.arange(0, 300, 1)
effective_population_size = 5e6
event_times = []
time_risk = escaped_sum*growth_rate*mutation_probability*effective_population_size
for __ in range(10000):
# if __%100 == 0:
# print(__)
i = 0
while True:
if i < simtools.PARAMS['time_points_up']:
if np.random.random() < time_risk[i]*dt:
event_times.append(time_axis[i])
break
elif np.random.random() < time_risk[-1]*dt:
event_times.append(i*dt)
break
i += 1
if i*dt > 600:
event_times.append(i*dt)
break
# if i == simtools.PARAMS['time_points_up']:
# event_times.append(time_axis[-1] + dt)
# break
# print(event_times)
event_times = np.array(event_times)
long_time_axis = np.linspace(0, 500, 50)
surv = np.array([np.sum(event_times > x)/event_times.size for x in long_time_axis])
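    # each trial above draws an event time by Bernoulli thinning: at step i
    # the event fires with probability time_risk[i]*dt (hazard times step),
    # and the survival function is the fraction of trials whose event time
    # exceeds x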
axs[2].plot(long_time_axis, surv, color='k', linestyle='-', linewidth=1.0)
# axs[2].plot(time_axis, np.cumsum(escaped_sum*growth_rate), color='k',
# linestyle='-', linewidth='1.0')
# empty curves drawn on first axis for legend purposes
axs[0].plot([], [], color='orange',
linewidth='1.0', linestyle='-', label='Probability of reaching ' + str(simtools.PARAMS['mpi_max_population_size']) + ' cells')
axs[0].plot([], [], color='blue',
linewidth='1.0', linestyle='-', label='Normal cell average growth rate')
# axs.plot([], [], color='k',
# linewidth='1.0', linestyle='-', label='Mutation risk')
# axs.plot([], [], color='k',
# linewidth='1.0', linestyle='--', label='Cumulative mutation risk')
axs[0].set_ylabel('Probability of a new mutant reaching ' + \
str(simtools.PARAMS['mpi_max_population_size']) + ' cells')
axs_rate.set_ylabel('Normal cell growth rate')
    axs[1].set_ylabel('Prob. of a mutant that reaches\n' + str(simtools.PARAMS['mpi_max_population_size']) + ' cells being born')
axs[2].set_ylabel('Mutation free\nsurvival function')
for i in range(3):
axs[i].set_xlabel('Time')
axs[0].set_ylim(0, axs[0].get_ylim()[1])
axs_rate.set_ylim(0, axs_rate.get_ylim()[1])
axs[1].set_ylim(0, axs[1].get_ylim()[1])
# axs[1].set_yticks([0])
# axs[2].set_ylim(0, axs[2].get_ylim()[1])
axs[2].set_ylim(axs[2].get_ylim()[0], 1)
# axs[2].set_yticks([0])
axs[0].tick_params(axis='y', colors='orange')
axs_rate.tick_params(axis='y', colors='blue')
# axs[0].legend(frameon=False)
if save is not None:
pdf_out.savefig()
else:
plt.show()
# plot of growth rate, escape probability and mutation vulnerability all in one
# death time and escape time distribution as a function of time of mutation
fig, axs = plt.subplots(ncols=2)
fig.set_size_inches(6, 3)
escaped = np.array(gp_result['escaped'])
time = np.array(gp_result['time'])
quantiles_death = [[], [], [], []]
quantiles_escaped = [[], [], [], []]
colors = ['grey', 'black', 'grey', 'lightgrey']
for i in range(escaped.shape[1]):
death_times = time[:, i][escaped[:, i] == 0]
escaped_times = time[:, i][escaped[:, i] == 1]
q_death = np.percentile(death_times, (25, 50, 75, 95))
q_escaped = np.percentile(escaped_times, (25, 50, 75, 95)) if escaped_times.size != 0 else (None, None, None, None)
for j in range(4):
quantiles_death[j].append(q_death[j])
quantiles_escaped[j].append(q_escaped[j])
for i, color in enumerate(colors):
axs[0].plot(
simtools.get_time_axis(simtools.PARAMS['time_range_up'][1],
simtools.PARAMS['time_points_up']),
quantiles_death[i],
linewidth=1.0,
color=color
)
for i, color in enumerate(colors):
axs[1].plot(
simtools.get_time_axis(simtools.PARAMS['time_range_up'][1],
simtools.PARAMS['time_points_up']),
quantiles_escaped[i],
linewidth=1.0,
color=color
)
axs[0].set_xlabel('Time of mutation')
axs[1].set_xlabel('Time of mutation')
axs[0].set_ylabel('Time of death')
axs[1].set_ylabel('Time of escape')
plt.tight_layout()
if save is not None:
pdf_out.savefig()
else:
plt.show()
# histogram of aggregate death/escape time distributions
fig, axs = plt.subplots(ncols=2)
fig.set_size_inches(6, 3)
axs[0].hist(time[escaped == 0], color='lightgrey',
range=(0, np.percentile(time[escaped == 0], 99)), bins=100,
density=True)
    axs[1].hist(time[escaped == 1], color='lightgrey',
                range=(0, np.percentile(time[escaped == 1], 99) if np.any(escaped == 1) else 1),
                bins=100, density=True)
x0 = np.linspace(0, np.percentile(time[escaped == 0], 99), 100)
death_rate = simtools.PARAMS['mpi_death_rate']
    axs[0].plot(x0, death_rate*np.exp(-death_rate*x0), color='k', linewidth=1.0,
                label='Exponential dist.\n$\\lambda$ = Death rate')
axs[0].legend()
axs[0].set_xlabel('Time of death')
axs[1].set_xlabel('Time of escape')
axs[0].set_ylabel('Probability density')
axs[1].set_ylabel('Probability density')
plt.tight_layout()
if save is not None:
pdf_out.savefig()
else:
plt.show()
# histogram of max # cells in populations that did not escape
# fig, axs = plt.subplots()
# fig.set_size_inches(4, 4)
# max_cells = np.array(gp_result['max_cells'])
# axs.hist(max_cells[escaped == 0], color='k',
# range=(0.5, 5.5), bins=5)
# axs.set_xlabel('Maximum number of cells achieved')
# axs.set_ylabel('Frequency')
# plt.tight_layout()
# if save is not None:
# pdf_out.savefig()
# else:
# plt.show()
# histogram of first parameter in dead/escaped lines
fig, axs = plt.subplots(ncols=2)
fig.set_size_inches(6, 3)
first_parameter = np.array(gp_result['first_parameter'])
axs[0].hist(first_parameter[escaped == 0], color='lightgrey',
bins=100, density=True)
axs[1].hist(first_parameter[escaped == 1], color='lightgrey',
bins=100, density=True)
f_rate_down = Rate(
simtools.PARAMS['mpi_rate_function_shape'],
simtools.PARAMS['mpi_rate_function_center'],
simtools.PARAMS['mpi_rate_function_width'],
simtools.PARAMS['optimum_normal'], 1)
x0 = np.linspace(axs[0].get_xlim()[0], axs[0].get_xlim()[1], 1000)
axs[0].plot(x0, f_rate_down(x0)*axs[0].get_ylim()[1], color='k', linewidth=1.0, label='Rate function')
x1 = np.linspace(axs[1].get_xlim()[0], axs[1].get_xlim()[1], 1000)
axs[1].plot(x1, f_rate_down(x1)*axs[1].get_ylim()[1], color='k', linewidth=1.0, label='Rate function')
axs[1].legend()
for i in range(2):
axs[i].set_xlabel('Parameter of first cell')
axs[i].set_ylabel('Probability density')
axs[0].set_title('Mutants that did not survive')
axs[1].set_title('Mutants that reached ' + \
str(simtools.PARAMS['mpi_max_population_size']) + ' cells')
plt.tight_layout()
if save is not None:
pdf_out.savefig()
else:
plt.show()
if save is not None:
pdf_out.close()
@main.command()
@click.option('-p', '--paramfile', type=click.Path())
@click.option('-b', '--dbfile', type=click.Path())
@click.option('-o', '--outfile', type=click.Path())
@click.option('-i', '--history-id', type=int, default=1)
def generate_dataset_verify(paramfile, dbfile, outfile, history_id):
"""
Generate start and end distribution for c++ mpi verification simulations
"""
db_path = 'sqlite:///' + dbfile
abc_history = History(db_path)
abc_history.id = history_id
simtools.PARAMS = toml.load(paramfile)
abc_data, __ = abc_history.get_distribution(m=0, t=abc_history.max_t)
parameters = ['s', 'c', 'w', 'n', 'm', 'r']
params = {k: np.median(abc_data[k]) for k in parameters}
f_noise = Noise(params['n'])
simtools.PARAMS = toml.load(paramfile)
f_rate_up = Rate(params['s'], params['c'], params['w'],
simtools.PARAMS['optimum_treatment'], params['m']*params['r'])
f_rate_down = Rate(params['s'], params['c'], params['w'],
simtools.PARAMS['optimum_normal'], params['m'])
f_initial_up = simtools.get_stationary_distribution_function(
f_rate_down,
f_noise,
simtools.PARAMS['parameter_range'],
simtools.PARAMS['parameter_points']
)
f_initial_down = simtools.get_stationary_distribution_function(
f_rate_up,
f_noise,
simtools.PARAMS['parameter_range'],
simtools.PARAMS['parameter_points']
)
time_axis_up, parameter_axis_up, __ = simtools.simulate_pde(
f_initial_up,
f_rate_up,
f_noise,
simtools.PARAMS['time_range_up'][1],
simtools.PARAMS['time_points_up'],
simtools.PARAMS['parameter_range'],
simtools.PARAMS['parameter_points']
)
time_axis_down, parameter_axis_down, __ = simtools.simulate_pde(
f_initial_down,
f_rate_down,
f_noise,
simtools.PARAMS['time_range_down'][1],
simtools.PARAMS['time_points_down'],
simtools.PARAMS['parameter_range'],
simtools.PARAMS['parameter_points']
)
assert all(parameter_axis_up == parameter_axis_down)
# write parameter density hdf5
out = h5py.File(outfile, 'w')
gp_pd = out.create_group('parameter_density')
gp_pd['time_axis_up'] = time_axis_up
gp_pd['time_axis_down'] = time_axis_down
gp_pd['parameter_axis'] = parameter_axis_up
gp_pd['parameter_density_up'] = f_initial_up(parameter_axis_up)
gp_pd['parameter_density_down'] = f_initial_down(parameter_axis_up)
# write rate function data to simulation config toml
simtools.PARAMS['mpi_noise_function_sigma'] = params['n']
simtools.PARAMS['mpi_rate_function_width'] = params['w']
simtools.PARAMS['mpi_rate_function_center'] = params['c']
simtools.PARAMS['mpi_rate_function_shape'] = params['s']
simtools.PARAMS['mpi_rate_function_max'] = params['m']
simtools.PARAMS['mpi_rate_function_ratio'] = params['r']
with open(paramfile, 'w') as params_toml:
toml.dump(simtools.PARAMS, params_toml)
@main.command()
@click.option('-p', '--paramfile', type=click.Path())
@click.option('-i', '--infile', type=click.Path())
@click.option('-o', '--outfile', type=click.Path())
@click.option('--save', type=click.Path(), default=None)
def verification_plots(paramfile, infile, outfile, save):
"""
    Plot comparing exact verification data to the pde solution
"""
if save is not None:
pdf_out = PdfPages(save)
simtools.PARAMS = toml.load(paramfile)
params = {}
params['n'] = simtools.PARAMS['mpi_noise_function_sigma']
params['w'] = simtools.PARAMS['mpi_rate_function_width']
params['c'] = simtools.PARAMS['mpi_rate_function_center']
params['s'] = simtools.PARAMS['mpi_rate_function_shape']
params['m'] = simtools.PARAMS['mpi_rate_function_max']
params['r'] = simtools.PARAMS['mpi_rate_function_ratio']
f_noise = Noise(params['n'])
f_rate_up = Rate(params['s'], params['c'], params['w'],
simtools.PARAMS['optimum_treatment'], params['m']*params['r'])
f_rate_down = Rate(params['s'], params['c'], params['w'],
simtools.PARAMS['optimum_normal'], params['m'])
f_initial_up = simtools.get_stationary_distribution_function(
f_rate_down,
f_noise,
simtools.PARAMS['parameter_range'],
simtools.PARAMS['parameter_points']
)
f_initial_down = simtools.get_stationary_distribution_function(
f_rate_up,
f_noise,
simtools.PARAMS['parameter_range'],
simtools.PARAMS['parameter_points']
)
time_axis_up, parameter_axis_up, parameter_density_up = simtools.simulate_pde(
f_initial_up,
f_rate_up,
f_noise,
simtools.PARAMS['time_range_up'][1],
simtools.PARAMS['time_points_up'],
simtools.PARAMS['parameter_range'],
simtools.PARAMS['parameter_points'],
convolve_method='fft'
)
time_axis_down, parameter_axis_down, parameter_density_down = simtools.simulate_pde(
f_initial_down,
f_rate_down,
f_noise,
simtools.PARAMS['time_range_down'][1],
simtools.PARAMS['time_points_down'],
simtools.PARAMS['parameter_range'],
simtools.PARAMS['parameter_points'],
convolve_method='fft'
)
def lr(x):
return x[1] - x[0]
data = h5py.File(outfile, 'r')
gp_result = data['result']
inpt = h5py.File(infile, 'r')
gp_input = inpt['parameter_density']
# plot of expected density (pde)
fig, axs = plt.subplots(ncols=2, nrows=2)
fig.set_size_inches(6, 5)
img = axs[0][0].imshow(
np.transpose(parameter_density_up),
extent=(np.min(parameter_axis_up), np.max(parameter_axis_up),
np.min(time_axis_up), np.max(time_axis_up)),
aspect=lr(parameter_axis_up)/lr(time_axis_up),
cmap=cm.viridis,
origin='lower'
)
# cbr = fig.colorbar(img, ax=axs, fraction=0.046, pad=0.04)
# cbr.set_label('Parameter density', labelpad=-15)
# cbr.set_ticks([np.min(parameter_density_up), np.max(parameter_density_up)])
# cbr.set_ticklabels(['Low', 'High'])
axs[0][0].set_ylabel('Time')
axs[0][0].set_xlabel('Parameter')
# axs[0].grid()
img = axs[0][1].imshow(
np.transpose(parameter_density_down),
extent=(np.min(parameter_axis_down), np.max(parameter_axis_down),
np.min(time_axis_down), np.max(time_axis_down)),
aspect=lr(parameter_axis_down)/lr(time_axis_down),
cmap=cm.viridis,
origin='lower'
)
# cbr = fig.colorbar(img, ax=axs, fraction=0.046, pad=0.04)
# cbr.set_label('Parameter density', labelpad=-15)
# cbr.set_ticks([np.min(parameter_density_down), np.max(parameter_density_down)])
# cbr.set_ticklabels(['Low', 'High'])
axs[0][1].set_ylabel('Time')
axs[0][1].set_xlabel('Parameter')
# axs[0][1].grid()
axs[1][0].plot(gp_input['parameter_axis'][:], gp_input['parameter_density_up'][:],
label='Starting density (up)', color='k', linewidth=1.0)
axs[1][1].plot(gp_input['parameter_axis'][:], gp_input['parameter_density_down'][:],
label='Starting density (down)', color='k', linewidth=1.0)
axs[1][0].set_xlabel('Parameter ($x$)')
axs[1][0].set_ylabel('Parameter density (up)')
axs[1][1].set_xlabel('Parameter ($x$)')
axs[1][1].set_ylabel('Parameter density (down)')
plt.tight_layout()
if save is not None:
pdf_out.savefig()
else:
plt.show()
# define some shorthand names for upcoming calculations
parameter_range = lr(simtools.PARAMS['parameter_range'])
time_points_up = simtools.PARAMS['time_points_up']
pdu = parameter_density_up
pau = parameter_axis_up
time_points_down = simtools.PARAMS['time_points_down']
pdd = parameter_density_down
pad = parameter_axis_down
# plot of mean over time pde vs exact
fig, axs = plt.subplots(ncols=2, nrows=2)
fig.set_size_inches(6, 6)
for i in range(simtools.PARAMS['mpi_statics_number_of_simulations']):
axs[0][0].plot(
gp_input['time_axis_up'][:], gp_result['mean_up'][:, i], color='k', linewidth=1.0, alpha=0.2)
axs[1][0].plot(
gp_input['time_axis_down'][:], gp_result['mean_down'][:, i], color='k', linewidth=1.0, alpha=0.2)
axs[0][0].plot(time_axis_up, [np.sum(pdu[:, i]*pau)/pau.size*parameter_range
for i in range(time_points_up)],
color='r', linewidth=2.0)
axs[1][0].plot(time_axis_down, [np.sum(pdd[:, i]*pad)/pad.size*parameter_range
for i in range(time_points_down)],
color='r', linewidth=2.0)
for i in range(2):
axs[i][0].set_xlabel('Time [days]')
axs[i][0].set_ylabel('Mean $x$')
for i in range(simtools.PARAMS['mpi_statics_number_of_simulations']):
axs[0][1].plot(
gp_input['time_axis_up'][:], gp_result['stdev_up'][:, i], color='k', linewidth=1.0, alpha=0.2)
axs[1][1].plot(
gp_input['time_axis_down'][:], gp_result['stdev_down'][:, i], color='k', linewidth=1.0, alpha=0.2)
axs[0][1].plot(
time_axis_up,
[np.sqrt(np.sum(pdu[:, i]*pau**2)/pau.size*parameter_range - \
(np.sum(pdu[:, i]*pau)/pau.size*parameter_range)**2)
for i in range(time_points_up)],
color='r', linewidth=2.0)
axs[1][1].plot(
time_axis_down,
[np.sqrt(np.sum(pdd[:, i]*pad**2)/pad.size*parameter_range - \
(np.sum(pdd[:, i]*pad)/pad.size*parameter_range)**2)
for i in range(time_points_down)],
color='r', linewidth=2.0)
for i in range(2):
axs[i][1].set_xlabel('Time [days]')
axs[i][1].set_ylabel('Standard deviation of $x$')
# axs[0][0].plot([], [], linewidth=1.0, color='k',
# label="Simulation", linestyle='-')
# axs[0][0].plot([], [], linewidth=2.0, color='k',
# label="Reference", linestyle='-')
# axs[0][0].legend(loc='center left', bbox_to_anchor=(4, -10), frameon=False, ncol=2)
plt.tight_layout()
if save is not None:
pdf_out.savefig()
else:
plt.show()
if save is not None:
pdf_out.close()
@main.command()
@click.option('-p', '--paramfile', type=click.Path())
@click.option('-b', '--dbfile', type=click.Path())
@click.option('-o', '--outfile', type=click.Path())
@click.option('-i', '--history-id', type=int, default=1)
def generate_dataset_holiday(paramfile, dbfile, outfile, history_id):
"""
Generate a field using the pde for further c++ mpi simulation
"""
db_path = 'sqlite:///' + dbfile
abc_history = History(db_path)
abc_history.id = history_id
simtools.PARAMS = toml.load(paramfile)
abc_data, __ = abc_history.get_distribution(m=0, t=abc_history.max_t)
parameters = ['s', 'c', 'w', 'n', 'm', 'r']
params = {k: np.median(abc_data[k]) for k in parameters}
f_noise = Noise(params['n'])
simtools.PARAMS = toml.load(paramfile)
f_rate_up = Rate(params['s'], params['c'], params['w'],
simtools.PARAMS['optimum_treatment'], params['m']*params['r'])
f_rate_down = Rate(params['s'], params['c'], params['w'],
simtools.PARAMS['optimum_normal'], params['m'])
f_initial = simtools.get_stationary_distribution_function(
f_rate_down,
f_noise,
simtools.PARAMS['parameter_range'],
simtools.PARAMS['parameter_points']
)
time_points_full = int(simtools.PARAMS['time_points_up']* \
simtools.PARAMS['holiday_time_up_factor'])
time_step = simtools.PARAMS['time_range_up'][1]*simtools.PARAMS['holiday_time_up_factor']/ \
time_points_full
# set up outfile
out = h5py.File(outfile, 'w')
gp_pd = out.create_group('parameter_density')
gp_pd.create_dataset('time_axis', (1, time_points_full),
maxshape=(None, time_points_full))
gp_pd.create_dataset('parameter_density', (1, simtools.PARAMS['parameter_points'], time_points_full),
maxshape=(None, simtools.PARAMS['parameter_points'], time_points_full))
gp_pd.create_dataset('growth_rate', (1, time_points_full),
maxshape=(None, time_points_full))
holiday_times = []
n_trials = 0
capacity = 1
# lead simulation can be shared
time_axis_lead, parameter_axis_lead, parameter_density_lead = simtools.simulate_pde(
f_initial,
f_rate_up,
f_noise,
simtools.PARAMS['time_range_up'][1]*simtools.PARAMS['holiday_time_up_factor'],
time_points_full,
simtools.PARAMS['parameter_range'],
simtools.PARAMS['parameter_points']
)
for i in range(*simtools.PARAMS['holiday_start_range']):
for j in range(*simtools.PARAMS['holiday_duration_range']):
if i + j > time_points_full - 1:
continue
print(i, j)
n_trials += 1
holiday_times.append((i, j))
lead_length = i
holiday_length = j + 1
tail_length = time_points_full - i - j + 1
tail_length = max(0, tail_length)
time_range_lead = simtools.PARAMS['time_range_up'][1]*lead_length/ \
simtools.PARAMS['time_points_up']
time_range_holiday = simtools.PARAMS['time_range_up'][1]*holiday_length/ \
simtools.PARAMS['time_points_up']
time_range_tail = simtools.PARAMS['time_range_up'][1]*tail_length/ \
simtools.PARAMS['time_points_up']
time_axis_holiday, parameter_axis_holiday, parameter_density_holiday = simtools.simulate_pde(
simtools.distribution_to_function(parameter_axis_lead, parameter_density_lead[:, lead_length]),
f_rate_down,
f_noise,
time_range_holiday,
holiday_length,
simtools.PARAMS['parameter_range'],
simtools.PARAMS['parameter_points']
)
time_axis_tail, parameter_axis_tail, parameter_density_tail = simtools.simulate_pde(
simtools.distribution_to_function(parameter_axis_holiday, parameter_density_holiday[:, -1]),
f_rate_up,
f_noise,
time_range_tail,
tail_length,
simtools.PARAMS['parameter_range'],
simtools.PARAMS['parameter_points']
)
growth_rate_lead = np.zeros(shape=time_axis_lead.shape)
for k in range(lead_length):
growth_rate_lead[k] = simps(parameter_density_lead[:, k]*f_rate_up(parameter_axis_lead),
x=parameter_axis_lead)
growth_rate_holiday = np.zeros(shape=time_axis_holiday.shape)
for k in range(parameter_density_holiday.shape[1]):
growth_rate_holiday[k] = simps(parameter_density_holiday[:, k]*f_rate_down(parameter_axis_holiday),
x=parameter_axis_holiday)
growth_rate_tail = np.zeros(shape=time_axis_tail.shape)
for k in range(parameter_density_tail.shape[1]):
growth_rate_tail[k] = simps(parameter_density_tail[:, k]*f_rate_up(parameter_axis_tail),
x=parameter_axis_tail)
child_density_lead = np.zeros(shape=(parameter_density_lead.shape[0], lead_length))
for k in range(lead_length):
child_density_lead[:, k] = simtools.get_child_distribution(parameter_density_lead[:, k],
f_rate_up, f_noise,
simtools.PARAMS['parameter_range'])
child_density_holiday = np.zeros(shape=parameter_density_holiday.shape)
for k in range(parameter_density_holiday.shape[1]):
child_density_holiday[:, k] = simtools.get_child_distribution(parameter_density_holiday[:, k],
f_rate_down, f_noise,
simtools.PARAMS['parameter_range'])
child_density_tail = np.zeros(shape=parameter_density_tail.shape)
for k in range(parameter_density_tail.shape[1]):
child_density_tail[:, k] = simtools.get_child_distribution(parameter_density_tail[:, k],
f_rate_up, f_noise,
simtools.PARAMS['parameter_range'])
time_axis = np.concatenate([time_axis_lead[:lead_length],
time_axis_holiday[:-1] + time_range_lead - time_step,
time_axis_tail[:-1] + time_range_lead + time_range_holiday - time_step*2])
# time_axis2 = simtools.get_time_axis(simtools.PARAMS['time_range_up'][1]* \
# simtools.PARAMS['holiday_time_up_factor'], time_points_full) # same for all
parameter_axis = parameter_axis_lead # same for all
parameter_density = np.concatenate([parameter_density_lead[:, :lead_length],
parameter_density_holiday[:, :-1],
parameter_density_tail[:, :-1]], axis=1)
growth_rate = np.concatenate([growth_rate_lead[:lead_length],
growth_rate_holiday[:-1],
growth_rate_tail[:-1]])
child_density = np.concatenate([child_density_lead[:, :lead_length],
child_density_holiday[:, :-1],
child_density_tail[:, :-1]], axis=1)
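            # amortized append: double the first axis of the resizable hdf5
            # datasets whenever capacity is exceeded, then trim to n_trials
            # once the loops finish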
if n_trials > capacity:
gp_pd['time_axis'].resize(gp_pd['time_axis'].shape[0] * 2, 0)
gp_pd['parameter_density'].resize(gp_pd['parameter_density'].shape[0] * 2, 0)
gp_pd['growth_rate'].resize(gp_pd['growth_rate'].shape[0] * 2, 0)
capacity *= 2
gp_pd['time_axis'][n_trials - 1] = time_axis
gp_pd['parameter_density'][n_trials - 1] = child_density
gp_pd['growth_rate'][n_trials - 1] = growth_rate
gp_pd['time_axis'].resize(n_trials, 0)
gp_pd['parameter_density'].resize(n_trials, 0)
gp_pd['growth_rate'].resize(n_trials, 0)
# gp_pd['time_axis'] = time_axis
gp_pd['parameter_axis'] = parameter_axis
gp_pd['holiday_parameters'] = holiday_times
# gp_pd['parameter_density'] = parameter_density
# gp_pd['parameter_density'] = child_density
# gp_pd['growth_rate'] = growth_rate
# # write rate function data to simulation config toml
simtools.PARAMS['mpi_noise_function_sigma'] = params['n']
simtools.PARAMS['mpi_rate_function_width'] = params['w']
simtools.PARAMS['mpi_rate_function_center'] = params['c']
simtools.PARAMS['mpi_rate_function_shape'] = params['s']
simtools.PARAMS['mpi_rate_function_max'] = params['m']
simtools.PARAMS['mpi_rate_function_ratio'] = params['r']
simtools.PARAMS['mpi_death_rate'] = growth_rate[-1]
# simulation needs to know number of timelines
simtools.PARAMS['mpi_holiday_timelines'] = len(holiday_times)
with open(paramfile, 'w') as params_toml:
toml.dump(simtools.PARAMS, params_toml)
@main.command()
@click.option('-p', '--paramfile', type=click.Path())
@click.option('-i', '--infile', type=click.Path())
@click.option('--save', type=click.Path(), default=None)
def plot_dataset_holiday(paramfile, infile, save):
"""
Plots for examining input to drug holiday simulator
"""
simtools.PARAMS = toml.load(paramfile)
def lr(x):
return abs(x[-1] - x[0])
data = h5py.File(infile, 'r')
gp_pd = data['parameter_density']
if save is not None:
pdf_out = PdfPages(save)
# growth rate over time
fig, axs = plt.subplots()
growth_rate = np.array(gp_pd['growth_rate'])
time_axis = np.array(gp_pd['time_axis'])
rate_range = np.max(growth_rate) - np.min(growth_rate)
for i in range(growth_rate.shape[0]):
plt.plot(time_axis[i, :], growth_rate[i, :] + i*rate_range*1.2,
color='k', linewidth=0.5)
axs.set_xlabel('Time [days]')
axs.set_yticks([])
fig.set_size_inches(4, 8)
plt.tight_layout()
if save is not None:
pdf_out.savefig()
else:
plt.show()
# cumulative growth heatmap
fig, axs = plt.subplots()
fig.set_size_inches(6, 4)
ts_start_axis = np.array(sorted(set(gp_pd['holiday_parameters'][:, 0])))
ts_duration_axis = np.array(sorted(set(gp_pd['holiday_parameters'][:, 1])))
start_axis = ts_start_axis \
/(simtools.PARAMS['time_points_up']) \
*simtools.PARAMS['time_range_up'][1]
duration_axis = ts_duration_axis \
/(simtools.PARAMS['time_points_up']) \
*simtools.PARAMS['time_range_up'][1]
coordinates = [(np.where(ts_start_axis==x)[0][0],
np.where(ts_duration_axis==y)[0][0])
for x, y in gp_pd['holiday_parameters'][:, ]]
cumulative_map = np.empty(shape=(start_axis.size, duration_axis.size))
cumulative_map[:] = np.nan
for i in range(gp_pd['parameter_density'].shape[0]):
time_axis = gp_pd['time_axis'][i, :]
growth_rate = gp_pd['growth_rate'][i, :]
cumulative_map[coordinates[i]] = np.sum(growth_rate)
print(cumulative_map)
print(np.nanmax(cumulative_map))
print(np.nanmin(cumulative_map))
print(np.where(cumulative_map == np.nanmax(cumulative_map)))
print(np.where(cumulative_map == np.nanmin(cumulative_map)))
cum_min = np.where(cumulative_map == np.nanmin(cumulative_map))
print(ts_start_axis[cum_min[0]])
print(ts_duration_axis[cum_min[1]])
print(cumulative_map[0, :])
print(cumulative_map[:, 0])
zero_effect = np.mean(cumulative_map[:, 0])
print("zero_effect", zero_effect, np.std(cumulative_map[:, 0]))
effect_range = max(abs(np.min(cumulative_map)), abs(np.max(cumulative_map)))
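    # vmin = 2*zero_effect - effect_range, so the diverging colormap below is
    # centered on the zero-holiday baseline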
img = axs.imshow(
np.transpose(cumulative_map),
extent=(np.min(start_axis), np.max(start_axis),
np.min(duration_axis), np.max(duration_axis)),
aspect=lr(start_axis)/lr(duration_axis),
cmap=cm.RdBu_r,
origin='lower',
vmin=effect_range - (effect_range - zero_effect)*2,
vmax=effect_range
)
cbr = fig.colorbar(img, ax=axs, fraction=0.046, pad=0.04)
cbr.set_label('Average divisions per surviving cell')
axs.set_xlabel('Holiday start day')
axs.set_ylabel('Holiday duration [days]')
if save is not None:
pdf_out.savefig()
else:
plt.show()
# holiday effect (if repeated)
fig, axs = plt.subplots()
ts_start_axis = np.array(sorted(set(gp_pd['holiday_parameters'][:, 0])))
ts_duration_axis = np.array(sorted(set(gp_pd['holiday_parameters'][:, 1])))
start_axis = ts_start_axis \
/(simtools.PARAMS['time_points_up']) \
*simtools.PARAMS['time_range_up'][1]
duration_axis = ts_duration_axis \
/(simtools.PARAMS['time_points_up']) \
*simtools.PARAMS['time_range_up'][1]
coordinates = [(np.where(ts_start_axis==x)[0][0],
np.where(ts_duration_axis==y)[0][0])
for x, y in gp_pd['holiday_parameters'][:, ]]
cumulative_map = np.empty(shape=(start_axis.size, duration_axis.size))
cumulative_map[:] = np.nan
for i in range(gp_pd['parameter_density'].shape[0]):
time_axis = gp_pd['time_axis'][i, :]
growth_rate = gp_pd['growth_rate'][i, :]
cumulative_map[coordinates[i]] = np.sum(growth_rate)
print(cumulative_map)
print(np.nanmax(cumulative_map))
print(np.nanmin(cumulative_map))
print(np.where(cumulative_map == np.nanmax(cumulative_map)))
print(np.where(cumulative_map == np.nanmin(cumulative_map)))
cum_min = np.where(cumulative_map == np.nanmin(cumulative_map))
print(ts_start_axis[cum_min[0]])
print(ts_duration_axis[cum_min[1]])
    zero_effect = np.mean(cumulative_map[:, 0])
print("zero_effect", zero_effect, np.std(cumulative_map[:, 0]))
cumulative_map = zero_effect - cumulative_map
effect_range = max(abs(np.min(cumulative_map)), abs(np.max(cumulative_map)))
img = axs.imshow(
np.transpose(cumulative_map),
extent=(np.min(start_axis), np.max(start_axis),
np.min(duration_axis), np.max(duration_axis)),
aspect=lr(start_axis)/lr(duration_axis),
cmap=cm.BrBG,
origin='lower',
vmin=-effect_range,
vmax=effect_range
)
cbr = fig.colorbar(img, ax=axs, fraction=0.046, pad=0.04)
# cbr.set_ticklabels(['Low', 'High'])
if save is not None:
pdf_out.savefig()
else:
plt.show()
# # mean child density over time
# fig, axs = plt.subplots()
# child_density = np.array(gp_pd['parameter_density'])
# parameter_axis = np.array(gp_pd['parameter_axis'])
# time_axis = np.array(gp_pd['time_axis'])
# time_points = time_axis.shape[0]
# parameter_range = np.max(parameter_axis) - np.min(parameter_axis)
# print(child_density.shape)
# print(time_points)
# average_child_density = \
# np.array([[np.sum(parameter_axis*child_density[i, :, j]/ \
# parameter_axis.size*parameter_range)
# for i in range(time_points)]
# for j in range(child_density.shape[2])])
# print(average_child_density.shape)
# density_range = np.max(average_child_density) - np.min(average_child_density)
# for i in range(time_axis.shape[0]):
# print(time_axis[i, :].shape, average_child_density[:, i].shape)
# plt.plot(time_axis[i, :], average_child_density[:, i] + i*density_range*1.2,
# color='k', linewidth=0.5)
# if save is not None:
# pdf_out.savefig()
# else:
# plt.show()
# # mean child density# over time heatmap
# # fig, axs = plt.subplots()
# # child_density = np.array(gp_pd['parameter_density'])
# # parameter_axis = np.array(gp_pd['parameter_axis'])
# # time_axis = np.mean(np.array(gp_pd['time_axis']), axis=0)
# # time_points = time_axis.shape[0]
# # parameter_range = np.max(parameter_axis) - np.min(parameter_axis)
# # print(child_density.shape)
# # print(time_points)
# # average_child_density = \
# # np.array([[np.sum(parameter_axis*child_density[j, :, i]/ \
# # parameter_axis.size*parameter_range)
# # for i in range(time_points)]
# # for j in range(child_density.shape[0])])
# # fig.set_size_inches(4, 4)
# # img = axs.imshow(
# # np.transpose(average_child_density),
# # extent=(np.min(parameter_axis), np.max(parameter_axis),
# # np.min(time_axis), np.max(time_axis)),
# # aspect=lr(parameter_axis)/lr(time_axis),
# # vmin=np.min(parameter_axis), vmax=np.max(parameter_axis),
# # cmap=cm.viridis,
# # origin='lower'
# # )
# # if save is not None:
# # pdf_out.savefig()
# # else:
# # plt.show()
    # # time axis homogeneity
# fig, axs = plt.subplots()
# time_axis = np.array(gp_pd['time_axis'])
# average_time_axis = np.mean(np.array(gp_pd['time_axis']), axis=0)
# for i in range(time_axis.shape[0]):
# axs.plot(time_axis[i, :] - average_time_axis, alpha=0.5, linewidth=1.0, color='k')
# if save is not None:
# pdf_out.savefig()
# else:
# plt.show()
if save is not None:
pdf_out.close()
@main.command()
@click.option('-p', '--paramfile', type=click.Path())
@click.option('-i', '--infile', type=click.Path())
@click.option('-o', '--outfile', type=click.Path())
@click.option('-t', '--interfile', type=click.Path(), default=None)
def process_holiday(paramfile, infile, outfile, interfile):
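    """
    Aggregate drug holiday simulation output into a cumulative risk map
    """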
data = h5py.File(outfile, 'r')
gp_result = data['result']
indata = h5py.File(infile, 'r')
gp_input = indata['parameter_density']
parameter_density = gp_input['parameter_density']
simtools.PARAMS = toml.load(paramfile)
def lr(x):
return x[1] - x[0]
ts_start_axis = np.array(sorted(set(gp_input['holiday_parameters'][:, 0])))
ts_duration_axis = np.array(sorted(set(gp_input['holiday_parameters'][:, 1])))
start_axis = ts_start_axis \
/(simtools.PARAMS['time_points_up']) \
*simtools.PARAMS['time_range_up'][1]
duration_axis = ts_duration_axis \
/(simtools.PARAMS['time_points_up']) \
*simtools.PARAMS['time_range_up'][1]
coordinates = [(np.where(ts_start_axis==x)[0][0],
np.where(ts_duration_axis==y)[0][0])
for x, y in gp_input['holiday_parameters'][:, ]]
cumulative_map = np.zeros(shape=(start_axis.size, duration_axis.size))
for i in range(parameter_density.shape[0]):
print(i, parameter_density.shape[0])
growth_rate = gp_input['growth_rate'][i, :]
escaped_sum = np.sum(gp_result['escaped'][:, :, i], axis=0) / \
simtools.PARAMS['mpi_holiday_simulations_per_timeline']
cumulative_map[coordinates[i]] = np.sum(escaped_sum*growth_rate)
print(cumulative_map)
inter = h5py.File(interfile, 'w')
gp_proc = inter.create_group('processed_output')
# gp_proc.create_dataset('cumulative_risk', cumulative_map.shape)
gp_proc['cumulative_risk'] = cumulative_map
@main.command()
@click.option('-p', '--paramfile', type=click.Path())
@click.option('-i', '--infile', type=click.Path())
@click.option('-o', '--outfile', type=click.Path())
@click.option('-t', '--interfile', type=click.Path(), default=None)
@click.option('--save', type=click.Path(), default=None)
def plot_processed_holiday(paramfile, infile, outfile, interfile, save):
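    """
    Plots of the processed (cumulative risk) drug holiday data
    """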
data = h5py.File(outfile, 'r')
gp_result = data['result']
indata = h5py.File(infile, 'r')
gp_input = indata['parameter_density']
inter = h5py.File(interfile, 'r')
gp_proc = inter['processed_output']
simtools.PARAMS = toml.load(paramfile)
if save is not None:
pdf_out = PdfPages(save)
# heatmap
fig, axs = plt.subplots()
def lr(x):
return x[-1] - x[0]
ts_start_axis = np.array(sorted(set(gp_input['holiday_parameters'][:, 0])))
ts_duration_axis = np.array(sorted(set(gp_input['holiday_parameters'][:, 1])))
start_axis = ts_start_axis \
/(simtools.PARAMS['time_points_up']) \
*simtools.PARAMS['time_range_up'][1]
duration_axis = ts_duration_axis \
/(simtools.PARAMS['time_points_up']) \
*simtools.PARAMS['time_range_up'][1]
coordinates = [(np.where(ts_start_axis==x)[0][0],
np.where(ts_duration_axis==y)[0][0])
for x, y in gp_input['holiday_parameters'][:, ]]
cumulative_map = np.array(gp_proc['cumulative_risk'])
img = axs.imshow(
np.transpose(cumulative_map),
extent=(np.min(start_axis), np.max(start_axis),
np.min(duration_axis), np.max(duration_axis)),
aspect=lr(start_axis)/lr(duration_axis),
cmap=cm.magma_r,
origin='lower'
)
print(lr(start_axis), lr(duration_axis))
cbr = fig.colorbar(img, ax=axs, fraction=0.046, pad=0.04)
cbr.set_label('Cumulative mutation risk [multiples of baseline]')
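# Tick the colorbar at integer multiples of the smallest map value (the
# "baseline" in the label); this assumes a strictly positive minimum.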
max_c = np.max(cumulative_map)/np.min(cumulative_map)
cbr.set_ticks([(x + 1)*np.min(cumulative_map) for x in range(int(max_c + 1))])
cbr.set_ticklabels([(x + 1) for x in range(int(max_c + 1))])
axs.set_xlabel('Holiday start day')
axs.set_ylabel('Holiday duration [days]')
plt.tight_layout()
if save is not None:
pdf_out.savefig()
else:
plt.show()
# linearity comparison
final_risk = np.array(gp_proc['cumulative_risk'][-1, :])
cumulative_growth = np.empty(shape=(start_axis.size, duration_axis.size))
cumulative_growth[:] = np.nan
for i in range(gp_input['parameter_density'].shape[0]):
time_axis = gp_input['time_axis'][i, :]
growth_rate = gp_input['growth_rate'][i, :]
cumulative_growth[coordinates[i]] = np.sum(growth_rate)
cum_min = np.where(cumulative_growth == np.nanmin(cumulative_growth))
zero_effect = np.mean(cumulative_growth[:, 0])
effect_range = max(abs(np.min(cumulative_growth)), abs(np.max(cumulative_growth)))
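# NOTE: cum_min, zero_effect and effect_range are computed but never used
# below; they look like leftovers from an earlier version of this plot.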
final_rate = cumulative_growth[-1, :]
fig, axs = plt.subplots()
ax2 = axs.twinx()
axs.plot(duration_axis, final_rate, color='blue')
ax2.plot(duration_axis, final_risk, color='orange')
axs.set_xlabel('Holiday duration [days]')
axs.set_ylabel('Average divisions per surviving cell')
ax2.set_ylabel('Cumulative mutation risk')
if save is not None:
pdf_out.savefig()
else:
plt.show()
if save is not None:
pdf_out.close()
@main.command()
@click.option('-p', '--paramfile', type=click.Path())
@click.option('-i', '--infile', type=click.Path())
@click.option('-o', '--outfile', type=click.Path())
@click.option('--save', type=click.Path(), default=None)
def holiday_plots(paramfile, infile, outfile, save):
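# Diagnostic plots taken straight from the raw simulation output: per-timeline
# escape probabilities, mutation vulnerability, and the cumulative-risk
# heatmap (full and percentile-masked variants).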
data = h5py.File(outfile, 'r')
gp_result = data['result']
indata = h5py.File(infile, 'r')
gp_input = indata['parameter_density']
simtools.PARAMS = toml.load(paramfile)
if save is not None:
pdf_out = PdfPages(save)
# escape probability as a function of time of mutation
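# `moving_mean` is defined earlier in this module. As a rough sketch only
# (an assumption about its behavior, not necessarily the exact
# implementation), a centred running mean over a window w could be:
#
#     def moving_mean(x, w):
#         return np.convolve(x, np.ones(w) / w, mode='same')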
fig, axs = plt.subplots()
parameter_density = gp_input['parameter_density']
for i in range(parameter_density.shape[0]):
time_axis = gp_input['time_axis'][i, :]
escaped_sum = np.sum(gp_result['escaped'][:, :, i], axis=0) / \
simtools.PARAMS['mpi_holiday_simulations_per_timeline']
axs.plot(time_axis, escaped_sum + i/20,
color='lightgrey', linewidth=0.5, zorder=1, alpha=0.5)
axs.plot(time_axis, moving_mean(escaped_sum, 101) + i/20,
color='k', linewidth=0.5, zorder=2)
axs.set_xlabel('Time of mutation')
axs.set_ylabel('Probability of a mutant reaching ' + \
str(simtools.PARAMS['mpi_max_population_size']) + ' cells')
if save is not None:
pdf_out.savefig()
else:
plt.show()
# mutation vulnerability as a function of time of mutation
fig, axs = plt.subplots()
for i in range(parameter_density.shape[0]):
time_axis = gp_input['time_axis'][i, :]
growth_rate = gp_input['growth_rate'][i, :]
escaped_sum = np.sum(gp_result['escaped'][:, :, i], axis=0) / \
simtools.PARAMS['mpi_holiday_simulations_per_timeline']
axs.plot(time_axis, escaped_sum*growth_rate + i/2000,
color='lightgrey', linewidth=0.5, zorder=1, alpha=0.5)
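# NOTE: the raw trace above is offset by i/2000 per timeline, while the
# smoothed trace below uses i/20; the two curves for a given timeline
# therefore do not share a baseline, which may or may not be intended.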
axs.plot(time_axis, moving_mean(escaped_sum*growth_rate, 101) + i/20,
color='k', linewidth=0.5, zorder=2)
axs.set_xlabel('Time of mutation')
if save is not None:
pdf_out.savefig()
else:
plt.show()
# cumulative mutation vulnerability as a function of time of mutation
fig, axs = plt.subplots()
for i in range(parameter_density.shape[0]):
time_axis = gp_input['time_axis'][i, :]
growth_rate = gp_input['growth_rate'][i, :]
escaped_sum = np.sum(gp_result['escaped'][:, :, i], axis=0) / \
simtools.PARAMS['mpi_holiday_simulations_per_timeline']
axs.plot(time_axis, np.cumsum(escaped_sum*growth_rate) + i/20,
color='k', linewidth=0.5, zorder=1)
axs.set_xlabel('Time of mutation')
if save is not None:
pdf_out.savefig()
else:
plt.show()
# cumulative mutation vulnerability heatmap
fig, axs = plt.subplots()
def lr(x):
return x[1] - x[0]
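# NOTE: here lr() returns the grid step (x[1] - x[0]), whereas
# plot_processed_holiday uses the full axis range (x[-1] - x[0]) for the
# same aspect computation; one of the two is presumably unintended.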
ts_start_axis = np.array(sorted(set(gp_input['holiday_parameters'][:, 0])))
ts_duration_axis = np.array(sorted(set(gp_input['holiday_parameters'][:, 1])))
start_axis = ts_start_axis \
/(simtools.PARAMS['time_points_up']) \
*simtools.PARAMS['time_range_up'][1]
duration_axis = ts_duration_axis \
/(simtools.PARAMS['time_points_up']) \
*simtools.PARAMS['time_range_up'][1]
coordinates = [(np.where(ts_start_axis==x)[0][0],
np.where(ts_duration_axis==y)[0][0])
for x, y in gp_input['holiday_parameters'][:, ]]
cumulative_map = np.zeros(shape=(start_axis.size, duration_axis.size))
for i in range(parameter_density.shape[0]):
time_axis = gp_input['time_axis'][i, :]
growth_rate = gp_input['growth_rate'][i, :]
escaped_sum = np.sum(gp_result['escaped'][:, :, i], axis=0) / \
simtools.PARAMS['mpi_holiday_simulations_per_timeline']
cumulative_map[coordinates[i]] = np.sum(escaped_sum*growth_rate)
img = axs.imshow(
np.transpose(cumulative_map),
extent=(np.min(start_axis), np.max(start_axis),
np.min(duration_axis), np.max(duration_axis)),
aspect=lr(start_axis)/lr(duration_axis),
cmap=cm.viridis,
origin='lower'
)
if save is not None:
pdf_out.savefig()
else:
plt.show()
# cumulative mutation vulnerability heatmap, masked variant:
# hide the top mask_amount percent of values to bring out low-risk structure
mask_amount = 50  # in %
fig, axs = plt.subplots()
mask_limit = np.percentile(cumulative_map, 100 - mask_amount)
masked_cumulative_map = copy.deepcopy(cumulative_map)
masked_cumulative_map[cumulative_map > mask_limit] = np.nan  # NaN cells render blank in imshow
img = axs.imshow(
np.transpose(masked_cumulative_map),
extent=(np.min(start_axis), np.max(start_axis),
np.min(duration_axis), np.max(duration_axis)),
aspect=lr(start_axis)/lr(duration_axis),
cmap=cm.viridis,
origin='lower'
)
if save is not None:
pdf_out.savefig()
else:
plt.show()
# fig, axs = plt.subplots()
# time_axis = simtools.get_time_axis(simtools.PARAMS['time_range_up'][1],
# simtools.PARAMS['time_points_up'])
# escaped_sum = np.sum(gp_result['escaped'], axis=0) / \
# simtools.PARAMS['mpi_simulations_per_time_point']
# growth_rate = gp_input['growth_rate']
# axs.plot(time_axis, escaped_sum*growth_rate, color='lightgrey', linewidth='0.5')
# axs.plot(time_axis, moving_mean(escaped_sum*growth_rate, 101), color='k',
# linewidth='1.0', label='Mutation risk')
# axs_cum = axs.twinx()
# axs_cum.plot(time_axis, np.cumsum(escaped_sum*growth_rate), color='k',
# linestyle='--', linewidth='1.0')
# # empty curve drawn on first axis for legend purposes
# axs.plot([], [], color='k',
# linewidth='1.0', linestyle='--', label='Cumulative mutation risk')
# axs.set_xlabel('Time of mutation')
# axs.set_ylim(0, axs.get_ylim()[1])
# axs.set_yticks([0])
# axs_cum.set_ylim(0, axs_cum.get_ylim()[1])
# axs_cum.set_yticks([0])
# axs.legend()
# if save is not None:
# pdf_out.savefig()
# else:
# plt.show()
# # plot of growth rate, escape probability and mutation vulnerability all in one
# fig, axs = plt.subplots()
# time_axis = simtools.get_time_axis(simtools.PARAMS['time_range_up'][1],
# simtools.PARAMS['time_points_up'])
# escaped_sum = np.sum(gp_result['escaped'], axis=0) / \
# simtools.PARAMS['mpi_simulations_per_time_point']
# growth_rate = gp_input['growth_rate']
# axs.plot(time_axis, escaped_sum, color='orange', linewidth='0.5', alpha=0.5)
# axs.plot(time_axis, moving_mean(escaped_sum, 101), color='orange', linewidth='1.0')
# axs_rate = axs.twinx()
# axs_rate.plot(time_axis, growth_rate, color='blue', linewidth=1.0)
# axs_risk = axs.twinx()
# axs_risk.plot(time_axis, escaped_sum*growth_rate, color='lightgrey', linewidth='0.5')
# axs_risk.plot(time_axis, moving_mean(escaped_sum*growth_rate, 101), color='k',
# linewidth='1.0', label='Mutation risk')
# axs_cum = axs.twinx()
# axs_cum.plot(time_axis, np.cumsum(escaped_sum*growth_rate), color='k',
# linestyle='--', linewidth='1.0')
# # empty curves drawn on first axis for legend purposes
# axs.plot([], [], color='orange',
# linewidth='1.0', linestyle='-', label='Probability of reaching ' + str(simtools.PARAMS['mpi_max_population_size']) + ' cells')
# axs.plot([], [], color='blue',
# linewidth='1.0', linestyle='-', label='Normal cell average growth rate')
# axs.plot([], [], color='k',
# linewidth='1.0', linestyle='-', label='Mutation risk')
# axs.plot([], [], color='k',
# linewidth='1.0', linestyle='--', label='Cumulative mutation risk')
# axs.set_ylabel('Probability of a mutant reaching ' + \
# str(simtools.PARAMS['mpi_max_population_size']) + ' cells')
# axs_rate.set_ylabel('Normal cell growth rate')
# axs.set_xlabel('Time of mutation')
# axs.set_ylim(0, axs.get_ylim()[1])
# axs_rate.set_ylim(0, axs_rate.get_ylim()[1])
# axs_risk.set_ylim(0, axs_risk.get_ylim()[1])
# axs_risk.set_yticks([0])
# axs_cum.set_ylim(0, axs_cum.get_ylim()[1])
# axs_cum.set_yticks([0])
# axs.legend(loc='lower right', frameon=False)
# if save is not None:
# pdf_out.savefig()
# else:
# plt.show()
# # death time and escape time distribution as a function of time of mutation
# fig, axs = plt.subplots(ncols=2)
# fig.set_size_inches(6, 3)
# escaped = np.array(gp_result['escaped'])
# time = np.array(gp_result['time'])
# quantiles_death = [[], [], [], []]
# quantiles_escaped = [[], [], [], []]
# colors = ['grey', 'black', 'grey', 'lightgrey']
# for i in range(escaped.shape[1]):
# death_times = time[:, i][escaped[:, i] == 0]
# escaped_times = time[:, i][escaped[:, i] == 1]
# q_death = np.percentile(death_times, (25, 50, 75, 95))
# q_escaped = np.percentile(escaped_times, (25, 50, 75, 95)) if escaped_times.size != 0 else (None, None, None, None)
# for j in range(4):
# quantiles_death[j].append(q_death[j])
# quantiles_escaped[j].append(q_escaped[j])
# for i, color in enumerate(colors):
# axs[0].plot(
# simtools.get_time_axis(simtools.PARAMS['time_range_up'][1],
# simtools.PARAMS['time_points_up']),
# quantiles_death[i],
# linewidth=1.0,
# color=color
# )
# for i, color in enumerate(colors):
# axs[1].plot(
# simtools.get_time_axis(simtools.PARAMS['time_range_up'][1],
# simtools.PARAMS['time_points_up']),
# quantiles_escaped[i],
# linewidth=1.0,
# color=color
# )
# axs[0].set_xlabel('Time of mutation')
# axs[1].set_xlabel('Time of mutation')
# axs[0].set_ylabel('Time of death')
# axs[1].set_ylabel('Time of escape')
# plt.tight_layout()
# if save is not None:
# pdf_out.savefig()
# else:
# plt.show()
# # histogram of aggregate death/escape time distributions
# fig, axs = plt.subplots(ncols=2)
# fig.set_size_inches(6, 3)
# axs[0].hist(time[escaped == 0], color='lightgrey',
# range=(0, np.percentile(time[escaped == 0], 99)), bins=100,
# density=True)
# axs[1].hist(time[escaped == 1], color='lightgrey',
# range=(0, np.percentile(time[escaped == 1], 99) if escaped_times.size != 0 else 1), bins=100,
# density=True)
# x0 = np.linspace(0, np.percentile(time[escaped == 0], 99), 100)
# death_rate = simtools.PARAMS['mpi_death_rate']
# axs[0].plot(x0, death_rate*np.exp(-death_rate*x0), color='k', linewidth=1.0,
# label='Exponential dist.\n$\lambda$ = Death rate')
# axs[0].legend()
# axs[0].set_xlabel('Time of death')
# axs[1].set_xlabel('Time of escape')
# axs[0].set_ylabel('Probability density')
# axs[1].set_ylabel('Probability density')
# plt.tight_layout()
# if save is not None:
# pdf_out.savefig()
# else:
# plt.show()
# # histogram of first parameter in dead/escaped lines
# fig, axs = plt.subplots(ncols=2)
# fig.set_size_inches(6, 3)
# first_parameter = np.array(gp_result['first_parameter'])
# axs[0].hist(first_parameter[escaped == 0], color='lightgrey',
# bins=100, density=True)
# axs[1].hist(first_parameter[escaped == 1], color='lightgrey',
# bins=100, density=True)
# f_rate_down = Rate(
# simtools.PARAMS['mpi_rate_function_shape'],
# simtools.PARAMS['mpi_rate_function_center'],
# simtools.PARAMS['mpi_rate_function_width'],
# simtools.PARAMS['optimum_normal'], 1)
# x0 = np.linspace(axs[0].get_xlim()[0], axs[0].get_xlim()[1], 1000)
# axs[0].plot(x0, f_rate_down(x0)*axs[0].get_ylim()[1], color='k', linewidth=1.0, label='Rate function')
# x1 = np.linspace(axs[1].get_xlim()[0], axs[1].get_xlim()[1], 1000)
# axs[1].plot(x1, f_rate_down(x1)*axs[1].get_ylim()[1], color='k', linewidth=1.0, label='Rate function')
# axs[1].legend()
# for i in range(2):
# axs[i].set_xlabel('Parameter of first cell')
# axs[i].set_ylabel('Probability density')
# axs[0].set_title('Mutants that did not survive')
# axs[1].set_title('Mutants that reached ' + \
# str(simtools.PARAMS['mpi_max_population_size']) + ' cells')
# plt.tight_layout()
# if save is not None:
# pdf_out.savefig()
# else:
# plt.show()
if save is not None:
pdf_out.close()
if __name__ == '__main__':
main()
| 36.363636 | 169 | 0.609751 | 11,803 | 88,400 | 4.346014 | 0.045751 | 0.072598 | 0.031932 | 0.011365 | 0.833476 | 0.796534 | 0.759338 | 0.729219 | 0.704948 | 0.673581 | 0 | 0.018003 | 0.239072 | 88,400 | 2,430 | 170 | 36.378601 | 0.744581 | 0.183235 | 0 | 0.622355 | 0 | 0 | 0.122503 | 0.02023 | 0 | 0 | 0 | 0 | 0.000661 | 1 | 0.018519 | false | 0.001323 | 0.009259 | 0.005291 | 0.037037 | 0.018519 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5d080b09c9aa6ae426f0d0873389df83219ecebc | 5,451 | py | Python | test/test_em_framework/test_outcomes.py | brodderickrodriguez/EMAworkbench | 90031223a4b6feb49633d45816e20981dc9415a0 | [
"BSD-3-Clause"
] | 75 | 2015-01-14T20:39:14.000Z | 2022-03-31T09:28:15.000Z | test/test_em_framework/test_outcomes.py | brodderickrodriguez/EMAworkbench | 90031223a4b6feb49633d45816e20981dc9415a0 | [
"BSD-3-Clause"
] | 92 | 2015-01-15T16:12:38.000Z | 2022-03-23T20:46:37.000Z | test/test_em_framework/test_outcomes.py | brodderickrodriguez/EMAworkbench | 90031223a4b6feb49633d45816e20981dc9415a0 | [
"BSD-3-Clause"
] | 64 | 2015-02-16T15:07:12.000Z | 2022-03-23T16:17:16.000Z | '''
Created on Jul 28, 2015
.. codeauthor:: jhkwakkel <j.h.kwakkel (at) tudelft (dot) nl>
'''
from __future__ import (division, print_function, absolute_import,
unicode_literals)
import unittest
import unittest.mock as mock
from ema_workbench.em_framework.outcomes import ScalarOutcome,\
TimeSeriesOutcome
class TestScalarOutcome(unittest.TestCase):
outcome_class = ScalarOutcome
outcome_klass = "ScalarOutcome"
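# Subclasses only swap these two attributes; every assertion below then
# re-runs against the subclass's outcome type (see TestTimeSeriesOutcome).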
def test_outcome(self):
name = 'test'
outcome = self.outcome_class(name)
self.assertEqual(outcome.name, name)
self.assertEqual(outcome.variable_name, [name])
self.assertIsNone(outcome.function)
self.assertEqual(repr(outcome), self.outcome_klass+'(\'test\')')
name = 'test'
var_name = 'something else'
outcome = self.outcome_class(name, variable_name=var_name)
self.assertEqual(outcome.name, name)
self.assertEqual(outcome.variable_name, [var_name])
self.assertIsNone(outcome.function)
name = 'test'
var_name = 'something else'
function = mock.Mock()
outcome = self.outcome_class(name, variable_name=var_name,
function=function)
self.assertEqual(outcome.name, name)
self.assertEqual(outcome.variable_name, [var_name])
self.assertIsNotNone(outcome.function)
with self.assertRaises(ValueError):
name = 'test'
var_name = 'something else'
function = 'not a function'
outcome = self.outcome_class(name, variable_name=var_name,
function=function)
with self.assertRaises(ValueError):
name = 'test'
var_name = 1
outcome = self.outcome_class(name, variable_name=var_name,
function=function)
with self.assertRaises(ValueError):
name = 'test'
var_name = ['a variable', 1]
outcome = self.outcome_class(name, variable_name=var_name,
function=function)
name = 'test'
var_name = 'something else'
function = lambda x: x
outcome1 = self.outcome_class(name, variable_name=var_name,
function=function)
outcome2 = self.outcome_class(name, variable_name=var_name,
function=function)
self.assertEqual(outcome1, outcome2)
def test_process(self):
name = 'test'
outcome = self.outcome_class(name)
outputs = [1]
self.assertEqual(outcome.process(outputs), outputs[0])
name = 'test'
function = mock.Mock()
function.return_value = 2
outcome = self.outcome_class(name, function=function)
outputs = [1]
self.assertEqual(outcome.process(outputs), 2)
function.assert_called_once()
name = 'test'
function = mock.Mock()
function.return_value = 2
variable_name = ['a', 'b']
outcome = self.outcome_class(name, function=function,
variable_name=variable_name)
outputs = [1, 2]
self.assertEqual(outcome.process(outputs), 2)
function.assert_called_once()
function.assert_called_with(1, 2)
with self.assertRaises(ValueError):
name = 'test'
function = mock.Mock()
function.return_value = 2
variable_name = ['a', 'b']
outcome = self.outcome_class(name, function=function,
variable_name=variable_name)
outcome.process([1])
class TestTimeSeriesOutcome(TestScalarOutcome):
outcome_class = TimeSeriesOutcome
outcome_klass = "TimeSeriesOutcome"
def test_process(self):
name = 'test'
outcome = self.outcome_class(name)
outputs = [[1]]
self.assertEqual(outcome.process(outputs), outputs[0])
name = 'test'
function = mock.Mock()
function.return_value = [2]
outcome = self.outcome_class(name, function=function)
outputs = [1]
self.assertEqual(outcome.process(outputs), [2])
function.assert_called_once()
name = 'test'
function = mock.Mock()
function.return_value = [2]
variable_name = ['a', 'b']
outcome = self.outcome_class(name, function=function,
variable_name=variable_name)
outputs = [1, 2]
self.assertEqual(outcome.process(outputs), [2])
function.assert_called_once()
function.assert_called_with(1, 2)
with self.assertRaises(ValueError):
name = 'test'
function = mock.Mock()
function.return_value = [2]
variable_name = ['a', 'b']
outcome = self.outcome_class(name, function=function,
variable_name=variable_name)
outcome.process([1])
if __name__ == "__main__":
unittest.main() | 33.237805 | 72 | 0.549807 | 512 | 5,451 | 5.666016 | 0.142578 | 0.091003 | 0.088245 | 0.110307 | 0.794209 | 0.775595 | 0.765943 | 0.758704 | 0.719062 | 0.686315 | 0 | 0.010842 | 0.356999 | 5,451 | 164 | 73 | 33.237805 | 0.816833 | 0.015777 | 0 | 0.773109 | 0 | 0 | 0.035274 | 0 | 0 | 0 | 0 | 0 | 0.235294 | 1 | 0.02521 | false | 0 | 0.033613 | 0 | 0.109244 | 0.008403 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5d0da1efd7badc48cd44d9b7eb9bba157b0225d0 | 186 | py | Python | anaconda-verify/run_test.py | nikicc/anaconda-recipes | 9c611a5854bf41bbc5e7ed9853dc71c0851a62ef | [
"BSD-3-Clause"
] | 130 | 2015-07-28T03:41:21.000Z | 2022-03-16T03:07:41.000Z | anaconda-verify/run_test.py | nikicc/anaconda-recipes | 9c611a5854bf41bbc5e7ed9853dc71c0851a62ef | [
"BSD-3-Clause"
] | 119 | 2015-08-01T00:54:06.000Z | 2021-01-05T13:00:46.000Z | anaconda-verify/run_test.py | nikicc/anaconda-recipes | 9c611a5854bf41bbc5e7ed9853dc71c0851a62ef | [
"BSD-3-Clause"
] | 72 | 2015-07-29T02:35:56.000Z | 2022-02-26T14:31:15.000Z | from anaconda_verify import __version__
from anaconda_verify.package import CondaPackageCheck
assert CondaPackageCheck.no_easy_install_script
assert __version__ == '1.3.8', __version__
| 31 | 53 | 0.865591 | 23 | 186 | 6.26087 | 0.652174 | 0.166667 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017647 | 0.086022 | 186 | 5 | 54 | 37.2 | 0.829412 | 0 | 0 | 0 | 0 | 0 | 0.026882 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
5d3438adbed4db78768e733fb59850d35b267f17 | 2,652 | py | Python | test/test_vsql_unop_not.py | LivingLogic/LivingApps.Python.LivingAPI | 70bb71d7f582535a4c52e1f00d9ed070f3f2cc4f | [
"MIT"
] | 2 | 2017-09-15T15:28:23.000Z | 2019-01-25T09:23:53.000Z | test/test_vsql_unop_not.py | LivingLogic/LivingApps.Python.LivingAPI | 70bb71d7f582535a4c52e1f00d9ed070f3f2cc4f | [
"MIT"
] | 1 | 2019-01-28T08:06:23.000Z | 2019-01-28T14:45:52.000Z | test/test_vsql_unop_not.py | LivingLogic/LivingApps.Python.LivingAPI | 70bb71d7f582535a4c52e1f00d9ed070f3f2cc4f | [
"MIT"
] | 1 | 2019-01-25T21:20:55.000Z | 2019-01-25T21:20:55.000Z | """
Tests for the vSQL unary logical "not" operator ``not``.
The tests are done via the Python DB interface.
To run the tests, :mod:`pytest` is required.
"""
from conftest import *
###
### Tests
###
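# Pattern shared by all tests below: `not` over a None or falsy field must
# evaluate to True, `not` over a field holding a real value to False; each
# vSQL expression is evaluated through check_vsql from conftest.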
def test_bool1(config_persons):
check_vsql(config_persons, "repr(not app.p_bool_none.value) == 'True'")
def test_bool2(config_persons):
check_vsql(config_persons, "repr(not app.p_bool_false.value) == 'True'")
def test_bool3(config_persons):
check_vsql(config_persons, "repr(not app.p_bool_true.value) == 'False'")
def test_int1(config_persons):
check_vsql(config_persons, "repr(not app.p_int_none.value) == 'True'")
def test_int2(config_persons):
check_vsql(config_persons, "repr(not app.p_int_value.value) == 'False'")
def test_number1(config_persons):
check_vsql(config_persons, "repr(not app.p_number_none.value) == 'True'")
def test_number2(config_persons):
check_vsql(config_persons, "repr(not app.p_number_value.value) == 'False'")
def test_str1(config_persons):
check_vsql(config_persons, "repr(not app.p_str_none.value) == 'True'")
def test_str2(config_persons):
check_vsql(config_persons, "repr(not app.p_str_value.value) == 'False'")
def test_date1(config_persons):
check_vsql(config_persons, "repr(not app.p_date_none.value) == 'True'")
def test_date2(config_persons):
check_vsql(config_persons, "repr(not app.p_date_value.value) == 'False'")
def test_datetime1(config_persons):
check_vsql(config_persons, "repr(not app.p_datetime_none.value) == 'True'")
def test_datetime2(config_persons):
check_vsql(config_persons, "repr(not app.p_datetime_value.value) == 'False'")
def test_datedelta1(config_persons):
check_vsql(config_persons, "repr(not app.p_datedelta_none.value) == 'True'")
def test_datedelta2(config_persons):
check_vsql(config_persons, "repr(not app.p_datedelta_value.value) == 'False'")
def test_datetimedelta1(config_persons):
check_vsql(config_persons, "repr(not app.p_datetimedelta_none.value) == 'True'")
def test_datetimedelta2(config_persons):
check_vsql(config_persons, "repr(not app.p_datetimedelta_value.value) == 'False'")
def test_monthdelta1(config_persons):
check_vsql(config_persons, "repr(not app.p_monthdelta_none.value) == 'True'")
def test_monthdelta2(config_persons):
check_vsql(config_persons, "repr(not app.p_monthdelta_value.value) == 'False'")
def test_color1(config_persons):
check_vsql(config_persons, "repr(not app.p_color_none.value) == 'True'")
def test_color2(config_persons):
check_vsql(config_persons, "repr(not app.p_color_value.value) == 'False'")
def test_geo(config_persons):
check_vsql(config_persons, "repr(not geo(49, 11, 'Here')) == 'False'")
| 32.740741 | 83 | 0.763952 | 403 | 2,652 | 4.704715 | 0.173697 | 0.301688 | 0.208861 | 0.255274 | 0.812236 | 0.602321 | 0.602321 | 0.602321 | 0.580169 | 0.580169 | 0 | 0.010365 | 0.090498 | 2,652 | 80 | 84 | 33.15 | 0.775705 | 0.059201 | 0 | 0 | 0 | 0 | 0.39169 | 0.210569 | 0 | 0 | 0 | 0 | 0 | 1 | 0.488889 | false | 0 | 0.022222 | 0 | 0.511111 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
5d4208a6d1a720541899384f0132a66ef698e768 | 183 | py | Python | src_old/tests/scripts/core/ex12.py | toddrme2178/pyccel | deec37503ab0c5d0bcca1a035f7909f7ce8ef653 | [
"MIT"
] | null | null | null | src_old/tests/scripts/core/ex12.py | toddrme2178/pyccel | deec37503ab0c5d0bcca1a035f7909f7ce8ef653 | [
"MIT"
] | null | null | null | src_old/tests/scripts/core/ex12.py | toddrme2178/pyccel | deec37503ab0c5d0bcca1a035f7909f7ce8ef653 | [
"MIT"
] | null | null | null |
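# array and zeros are not imported by the original script; assuming they are
# meant to be the numpy versions (which pyccel translates for compilation),
# the script needs:
from numpy import array, zeros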
a = array((1, 2, 3, 5, 8, 5), int)
b = array((5, 8, 6, 9, 8, 2), int)
k = zeros((len(a), len(a)), int)
d = array(((5, 8, 6, 9, 8, 2), (5, 8, 6, 9, 8, 2), (5, 8, 6, 9, 8, 2), (5, 8, 6, 9, 8, 2), (5, 8, 6, 9, 8, 2), (5, 8, 6, 9, 8, 2)), int)
| 30.5 | 98 | 0.502732 | 64 | 183 | 1.4375 | 0.234375 | 0.173913 | 0.228261 | 0.304348 | 0.630435 | 0.630435 | 0.630435 | 0.391304 | 0.391304 | 0.391304 | 0 | 0.269663 | 0.027322 | 183 | 5 | 99 | 36.6 | 0.247191 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5376b32079eb4338c65d1a7a70fa2b571b5796e4 | 187 | py | Python | Strategy/BeforeStrategy/before_strategy/__init__.py | Tomvictor/python-design-patterns | 6b99607d721bbe03d26a0a451a10e88cd1c1d112 | [
"MIT"
] | null | null | null | Strategy/BeforeStrategy/before_strategy/__init__.py | Tomvictor/python-design-patterns | 6b99607d721bbe03d26a0a451a10e88cd1c1d112 | [
"MIT"
] | null | null | null | Strategy/BeforeStrategy/before_strategy/__init__.py | Tomvictor/python-design-patterns | 6b99607d721bbe03d26a0a451a10e88cd1c1d112 | [
"MIT"
] | null | null | null | __all__ = ['order','shipper','shipping_cost']
from before_strategy.order import Order
from before_strategy.shipper import Shipper
from before_strategy.shipping_cost import ShippingCost | 46.75 | 54 | 0.834225 | 24 | 187 | 6.125 | 0.416667 | 0.204082 | 0.367347 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 187 | 4 | 54 | 46.75 | 0.864706 | 0 | 0 | 0 | 0 | 0 | 0.135135 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.75 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
538187ec6a1e14ced0d23c2fbd512a9c1ac007f2 | 42 | py | Python | models/__init__.py | Tuckle/biospace | bdc1b859ee4abc82734227b9e0bf533491e2ac1f | [
"Apache-2.0"
] | null | null | null | models/__init__.py | Tuckle/biospace | bdc1b859ee4abc82734227b9e0bf533491e2ac1f | [
"Apache-2.0"
] | null | null | null | models/__init__.py | Tuckle/biospace | bdc1b859ee4abc82734227b9e0bf533491e2ac1f | [
"Apache-2.0"
] | null | null | null | from .postgres import *
from .neo import * | 21 | 23 | 0.738095 | 6 | 42 | 5.166667 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 42 | 2 | 24 | 21 | 0.885714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
53b7fb4c1cfa36b1d868197797e04ce180f137fc | 107 | py | Python | strix/__init__.py | HFM3/strix | 94bbc568f614bbb0f525d8ce17de4c64ef3b46d2 | [
"MIT"
] | null | null | null | strix/__init__.py | HFM3/strix | 94bbc568f614bbb0f525d8ce17de4c64ef3b46d2 | [
"MIT"
] | null | null | null | strix/__init__.py | HFM3/strix | 94bbc568f614bbb0f525d8ce17de4c64ef3b46d2 | [
"MIT"
] | null | null | null | from strix.base_functions import *
from strix.gca import GCA
from strix.file_formats import kml_gca as kml
| 26.75 | 45 | 0.831776 | 19 | 107 | 4.526316 | 0.526316 | 0.313953 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130841 | 107 | 3 | 46 | 35.666667 | 0.924731 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
54d0e2e1911470813b3f442b917aefa02b3b326a | 173 | py | Python | config.py | aantonop/wifiportal21 | 73c6e1120eeeb2b4a1c1684bc61062ab77fde85f | [
"MIT"
] | 171 | 2015-12-10T23:30:03.000Z | 2021-11-23T15:03:35.000Z | config.py | Othello1111/wifiportal21 | 73c6e1120eeeb2b4a1c1684bc61062ab77fde85f | [
"MIT"
] | 2 | 2016-06-30T03:59:02.000Z | 2021-09-06T00:43:46.000Z | config.py | Othello1111/wifiportal21 | 73c6e1120eeeb2b4a1c1684bc61062ab77fde85f | [
"MIT"
] | 27 | 2015-12-12T00:29:02.000Z | 2020-10-07T15:35:00.000Z | receiving_key = "xpub6F8dWKbomfy7qmQ9Ma16SAwL3H9xMyaEjAfsEhtRjt5Bx3MFHTgDjvp4eZfUZES4i4AgaVGzVPyCKbSufdVsFvfR4wNjKRGraJrv5nLVs4h" # m/44'/0'/0'/0
SATOSHIS_PER_MINUTE = 2000
| 57.666667 | 145 | 0.884393 | 12 | 173 | 12.5 | 0.833333 | 0.026667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.151515 | 0.046243 | 173 | 2 | 146 | 86.5 | 0.757576 | 0.075145 | 0 | 0 | 0 | 0 | 0.702532 | 0.702532 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
54dba918f0c22d58c41b7d0af96890fc7bebade2 | 257 | py | Python | test/test_pychrone.py | seizethedata/pychrone | 49b0ab306822602bf8d205e85459f485d79fb199 | [
"MIT"
] | 3 | 2018-07-02T13:06:13.000Z | 2020-11-10T22:57:19.000Z | test/test_pychrone.py | seizethedata/pychrone | 49b0ab306822602bf8d205e85459f485d79fb199 | [
"MIT"
] | 2 | 2020-03-18T11:02:22.000Z | 2020-08-26T12:33:20.000Z | test/test_pychrone.py | seizethedata/pychrone | 49b0ab306822602bf8d205e85459f485d79fb199 | [
"MIT"
] | 1 | 2020-08-06T16:39:12.000Z | 2020-08-06T16:39:12.000Z | import pytest
import pychrone
import geojson
def test_none():
assert (pychrone.Create_isochrone(37.847591, 55.920284, 5) !=None)
def test_geojson():
assert (isinstance(pychrone.Create_isochrone(37.847591, 55.920284, 5), geojson.geometry.Polygon)) | 25.7 | 101 | 0.762646 | 35 | 257 | 5.485714 | 0.514286 | 0.072917 | 0.239583 | 0.260417 | 0.416667 | 0.416667 | 0.416667 | 0.416667 | 0 | 0 | 0 | 0.14978 | 0.116732 | 257 | 10 | 101 | 25.7 | 0.696035 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.285714 | 1 | 0.285714 | true | 0 | 0.428571 | 0 | 0.714286 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ab118ef3bcfe591a4e804c45b5a0c105921abb31 | 1,755 | py | Python | tests/test_1_core_sample_locs.py | JosePazNoguera/pam | afb580c57223acd01466938eea8dc3d83097d5dd | [
"MIT"
] | 29 | 2020-04-10T23:24:26.000Z | 2021-05-21T12:30:03.000Z | tests/test_1_core_sample_locs.py | JosePazNoguera/pam | afb580c57223acd01466938eea8dc3d83097d5dd | [
"MIT"
] | 63 | 2020-04-29T19:02:11.000Z | 2022-03-29T14:02:04.000Z | tests/test_1_core_sample_locs.py | JosePazNoguera/pam | afb580c57223acd01466938eea8dc3d83097d5dd | [
"MIT"
] | 13 | 2020-04-16T19:00:18.000Z | 2022-03-18T14:42:48.000Z | import pytest
from random import random
from pam.core import Population, Household, Person
from pam.activity import Plan, Activity, Leg
from .fixtures import *
def test_assign_same_locs_to_household(SmithHousehold):
population = Population()
population.add(SmithHousehold)
class FakeSampler:
def sample(self, location_idx, activity):
return random()
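# FakeSampler is a stub for pam's location sampler: sample() ignores its
# arguments and returns a random float, so equal locations in the
# assertions below can only come from pam reusing a single draw.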
population.sample_locs(FakeSampler())
home_location = population[1].location
for pid, person in SmithHousehold:
assert person.home == home_location
def test_assign_same_locs_to_person_activity_in_same_area(SmithHousehold):
population = Population()
population.add(SmithHousehold)
class FakeSampler:
def sample(self, location_idx, activity):
return random()
population.sample_locs(FakeSampler())
assert SmithHousehold[3].plan[2].location == SmithHousehold[3].plan[6].location
def test_assign_same_locs_to_household_activity_in_same_area(SmithHousehold):
population = Population()
population.add(SmithHousehold)
class FakeSampler:
def sample(self, location_idx, activity):
return random()
population.sample_locs(FakeSampler())
assert SmithHousehold[3].plan[2].location == SmithHousehold[4].plan[2].location
def test_assign_same_locs_to_household_escort_activity_in_same_area(SmithHousehold):
population = Population()
population.add(SmithHousehold)
class FakeSampler:
def sample(self, location_idx, activity):
return random()
population.sample_locs(FakeSampler())
assert SmithHousehold[2].plan[2].location == SmithHousehold[2].plan[8].location
assert SmithHousehold[2].plan[2].location == SmithHousehold[4].plan[2].location
| 29.25 | 84 | 0.727635 | 199 | 1,755 | 6.201005 | 0.20603 | 0.12966 | 0.063209 | 0.055105 | 0.8047 | 0.8047 | 0.769854 | 0.718801 | 0.615883 | 0.615883 | 0 | 0.011822 | 0.180627 | 1,755 | 59 | 85 | 29.745763 | 0.846314 | 0 | 0 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025 | 1 | 0.2 | false | 0 | 0.125 | 0.1 | 0.525 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |