hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
76307d9e639e13e77016f78213872c2e3bc839c8 | 3,115 | py | Python | test/functions/lambda7.py | kylebarron/MagicPython | da6fa0793e2c85d3bf7709ff1d4f65ccf468db11 | [
"MIT"
] | 1,482 | 2015-10-16T21:59:32.000Z | 2022-03-30T11:44:40.000Z | test/functions/lambda7.py | kylebarron/MagicPython | da6fa0793e2c85d3bf7709ff1d4f65ccf468db11 | [
"MIT"
] | 226 | 2015-10-15T15:53:44.000Z | 2022-03-25T03:08:27.000Z | test/functions/lambda7.py | kylebarron/MagicPython | da6fa0793e2c85d3bf7709ff1d4f65ccf468db11 | [
"MIT"
] | 129 | 2015-10-20T02:41:49.000Z | 2022-03-22T01:44:36.000Z | anon = lambda a, c={'key':
555}, e=fff: None
anon : source.python
: source.python
= : keyword.operator.assignment.python, source.python
: source.python
lambda : meta.lambda-function.python, source.python, storage.type.function.lambda.python
: meta.function.lambda.parameters.python, meta.lambda-function.python, source.python
a : meta.function.lambda.parameters.python, meta.lambda-function.python, source.python, variable.parameter.function.language.python
, : meta.function.lambda.parameters.python, meta.lambda-function.python, punctuation.separator.parameters.python, source.python
: meta.function.lambda.parameters.python, meta.lambda-function.python, source.python
c : meta.function.lambda.parameters.python, meta.lambda-function.python, source.python, variable.parameter.function.language.python
= : keyword.operator.python, meta.function.lambda.parameters.python, meta.lambda-function.python, source.python
{ : meta.function.lambda.parameters.python, meta.lambda-function.python, punctuation.definition.dict.begin.python, source.python
' : meta.function.lambda.parameters.python, meta.lambda-function.python, punctuation.definition.string.begin.python, source.python, string.quoted.single.python
key : meta.function.lambda.parameters.python, meta.lambda-function.python, source.python, string.quoted.single.python
' : meta.function.lambda.parameters.python, meta.lambda-function.python, punctuation.definition.string.end.python, source.python, string.quoted.single.python
: : meta.function.lambda.parameters.python, meta.lambda-function.python, punctuation.separator.dict.python, source.python
: meta.function.lambda.parameters.python, meta.lambda-function.python, source.python
: meta.function.lambda.parameters.python, meta.lambda-function.python, source.python
555 : constant.numeric.dec.python, meta.function.lambda.parameters.python, meta.lambda-function.python, source.python
} : meta.function.lambda.parameters.python, meta.lambda-function.python, punctuation.definition.dict.end.python, source.python
, : meta.function.lambda.parameters.python, meta.lambda-function.python, punctuation.separator.parameters.python, source.python
: meta.function.lambda.parameters.python, meta.lambda-function.python, source.python
e : meta.function.lambda.parameters.python, meta.lambda-function.python, source.python, variable.parameter.function.language.python
= : keyword.operator.python, meta.function.lambda.parameters.python, meta.lambda-function.python, source.python
fff : meta.function.lambda.parameters.python, meta.lambda-function.python, source.python
: : meta.lambda-function.python, punctuation.section.function.lambda.begin.python, source.python
: source.python
None : constant.language.python, source.python
| 89 | 171 | 0.713965 | 349 | 3,115 | 6.372493 | 0.100287 | 0.161871 | 0.218525 | 0.23741 | 0.899281 | 0.872302 | 0.842176 | 0.823291 | 0.823291 | 0.823291 | 0 | 0.002328 | 0.172713 | 3,115 | 34 | 172 | 91.617647 | 0.860691 | 0 | 0 | 0.4 | 0 | 0.066667 | 0.000963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
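The record above is a MagicPython grammar test fixture: the source snippet comes first, followed by one annotation line per token listing the TextMate scopes assigned to it. Below is a minimal sketch of splitting such a fixture into the source and (token, scopes) pairs; it assumes the `token : scope, scope` annotation layout visible above, and `parse_scope_fixture` is a hypothetical helper, not part of MagicPython's own tooling:

```python
# Hypothetical parser for a MagicPython-style scope fixture (illustrative only).
# Assumes annotation lines use " : " between the token column and a
# comma-separated list of dotted TextMate scope names.
def parse_scope_fixture(text):
    source_lines, tokens = [], []
    for line in text.splitlines():
        head, sep, tail = line.partition(" : ")
        if sep and "." in tail:  # looks like "token : scope.a, scope.b"
            # note: ambiguous for punctuation tokens such as ":" itself
            tokens.append((head.strip(), [s.strip() for s in tail.split(",")]))
        else:
            source_lines.append(line)
    return source_lines, tokens
```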
521f153f9a85fa8b732da1a1718004f001b08395 | 6,472 | py | Python | test_gt_image_overlapping.py | mksarker/data_preprocessing | dabdb7f3dbf1c4bf5ee49a39aef2cb258539b027 | [
"MIT"
] | null | null | null | test_gt_image_overlapping.py | mksarker/data_preprocessing | dabdb7f3dbf1c4bf5ee49a39aef2cb258539b027 | [
"MIT"
] | null | null | null | test_gt_image_overlapping.py | mksarker/data_preprocessing | dabdb7f3dbf1c4bf5ee49a39aef2cb258539b027 | [
"MIT"
] | null | null | null |
import cv2
import os
import numpy as np
from matplotlib import pyplot as plt
import png
from skimage.morphology import erosion, square, dilation
# Tint a flat binary 3-channel mask with an RGB colour by elementwise multiplication
def imgcolor(img,color,shape):
img=img.reshape((-1,3))
img=np.multiply(img, color)
img=img.reshape((shape[0],shape[1],3))
return img
# Read the image from the directory
dir_out='/media/mostafa/RESEARCH/MICCAI2019/Results/SKIN_MICCAI2019_PAPER_RESULTS/output/mobilegan-blend/'
img_list=os.listdir('data/2016/Test/GT/') #data/2017/Test/GT/
for filename in img_list:
if filename.endswith('.png'):
print(filename)
filename=filename.split('.')[0]
img_gt=cv2.imread('data/2016/Test/GT/'+filename+'.png')
#img_gt=cv2.resize(img_gt,(56,96))
org_img=cv2.imread('data/2016/Test/OR/'+filename+'.jpg')
#org_img=cv2.resize(org_img,(96,96))
img_gt=img_gt/255
img_gt=np.array(img_gt,dtype=np.uint8)
img_gt[np.where(img_gt<1)]=0
img_predict=cv2.imread('predict/mobilegan/2016_Test/'+filename+'.jpg')
# kernel = np.ones((5,5),np.uint8)
# img_predict = cv2.erode(img_predict,kernel,iterations = 2)
#img_predict=cv2.resize(img_predict,(565,584))
kernel = np.ones((5,5),np.uint8)
img_predict = cv2.morphologyEx(img_predict, cv2.MORPH_CLOSE, kernel)
img_predict=np.array(img_predict,dtype=np.uint8)
# img_predict = cv2.erosion(img_predict,)
img_predict=img_predict/255
img_predict=np.array(img_predict,dtype=np.uint8)
img_predict[np.where(img_predict<1)]=0
        result = img_predict.astype(np.int16) - img_gt.astype(np.int16)  # signed diff: uint8 subtraction wraps around
# Compute the FP, TP, FN, TN *****************************************
FP=0*img_predict
FP[np.where(result>0)]=1
FN=0*img_predict
FN[np.where(result<0)]=1
TP=0*img_predict
TP=cv2.bitwise_and(img_predict,img_gt)
TN=0*img_predict
TN=cv2.bitwise_and(1-img_predict,1-img_gt)
aa=cv2.bitwise_or(img_predict,img_gt)
#np.multiply(matrix, color)
# Fill the colors into a mask ********************************************
colors=[ [231, 76, 60] , [248, 196, 113] , [ 46, 204, 113 ], [ 250, 51, 212 ]]
# colors=[ [0, 0, 255] , [ 0, 255,0] , [255, 255, 0], [255, 0, 0]] # Red, Yellow, Green,Blue
colors=np.array(colors,dtype=np.uint8 )
shape=img_gt.shape
img_gt=imgcolor(img_gt,colors[0],shape)
img_predict=imgcolor(img_predict,colors[1],shape)
FP=imgcolor(FP,colors[2],shape)
TP=imgcolor(TP,colors[3],shape)
        # Image Blending operation ********************************************
dst1 = cv2.addWeighted(FP,0.5,TP,0.5,0)
# Blend_org = cv2.addWeighted(img_gt,0.8, img_predict,0.5,0)
Blend_org= cv2.addWeighted(org_img,0.8 ,dst1,0.5,0)
cv2.imwrite(dir_out+filename+'.jpg', Blend_org)
# cv2.imshow('color',img_gt)
# cv2.imshow('FP',FP)
# cv2.imshow('TP',TP)
# cv2.imshow('img_prediorg_imgct',img_predict)
# cv2.imshow('dst1',dst1)
# cv2.imshow('dst2',dst2)
# cv2.imshow('Blend_org',Blend_org)
# cv2.waitKey(0)
# org_img
| 28.764444 | 106 | 0.595797 | 940 | 6,472 | 3.951064 | 0.129787 | 0.156166 | 0.049004 | 0.036618 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0.073413 | 0.221261 | 6,472 | 224 | 107 | 28.892857 | 0.663492 | 0.282602 | 0 | 1 | 0 | 0 | 0.08671 | 0.054031 | 0 | 0 | 0 | 0 | 0 | 1 | 0.020408 | false | 0 | 0.122449 | 0 | 0.163265 | 0.020408 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
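The script in the record above builds per-pixel confusion masks: false positives and false negatives come from the signed difference of the binary prediction and ground-truth masks, true positives and negatives from bitwise operations. A self-contained NumPy sketch of the same bookkeeping; `confusion_masks` and its argument names are illustrative, not part of the original repository:

```python
import numpy as np

def confusion_masks(pred, gt):
    """Per-pixel confusion masks for binary 0/1 arrays of equal shape."""
    diff = pred.astype(np.int16) - gt.astype(np.int16)  # signed difference
    fp = (diff > 0).astype(np.uint8)                    # predicted 1, truth 0
    fn = (diff < 0).astype(np.uint8)                    # predicted 0, truth 1
    tp = ((pred == 1) & (gt == 1)).astype(np.uint8)
    tn = ((pred == 0) & (gt == 0)).astype(np.uint8)
    return fp, fn, tp, tn
```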
8701b7e07ef822304704c6006eee623551212cfb | 5,881 | py | Python | main/migrations/0003_auto_20210618_1415.py | ashkantaravati/thesis-survey-app-back | c0f8bf77bafd43a28f891624ee87ab3d56d7349c | [
"MIT"
] | 1 | 2021-07-12T19:13:17.000Z | 2021-07-12T19:13:17.000Z | main/migrations/0003_auto_20210618_1415.py | ashkantaravati/thesis-survey-app-back | c0f8bf77bafd43a28f891624ee87ab3d56d7349c | [
"MIT"
] | null | null | null | main/migrations/0003_auto_20210618_1415.py | ashkantaravati/thesis-survey-app-back | c0f8bf77bafd43a28f891624ee87ab3d56d7349c | [
"MIT"
] | 1 | 2021-08-08T11:14:22.000Z | 2021-08-08T11:14:22.000Z | # Generated by Django 3.2.4 on 2021-06-18 14:15
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('main', '0002_auto_20210618_1247'),
]
operations = [
migrations.AlterField(
model_name='organization',
name='name',
field=models.CharField(max_length=50, verbose_name="Organization's Name"),
),
migrations.AlterField(
model_name='participantteammember',
name='age',
field=models.IntegerField(blank=True, null=True),
),
migrations.AlterField(
model_name='participantteammember',
name='overconfidence_question_10_max',
field=models.IntegerField(blank=True, null=True),
),
migrations.AlterField(
model_name='participantteammember',
name='overconfidence_question_10_min',
field=models.IntegerField(blank=True, null=True),
),
migrations.AlterField(
model_name='participantteammember',
name='overconfidence_question_1_max',
field=models.IntegerField(blank=True, null=True),
),
migrations.AlterField(
model_name='participantteammember',
name='overconfidence_question_1_min',
field=models.IntegerField(blank=True, null=True),
),
migrations.AlterField(
model_name='participantteammember',
name='overconfidence_question_2_max',
field=models.IntegerField(blank=True, null=True),
),
migrations.AlterField(
model_name='participantteammember',
name='overconfidence_question_2_min',
field=models.IntegerField(blank=True, null=True),
),
migrations.AlterField(
model_name='participantteammember',
name='overconfidence_question_3_max',
field=models.IntegerField(blank=True, null=True),
),
migrations.AlterField(
model_name='participantteammember',
name='overconfidence_question_3_min',
field=models.IntegerField(blank=True, null=True),
),
migrations.AlterField(
model_name='participantteammember',
name='overconfidence_question_4_max',
field=models.IntegerField(blank=True, null=True),
),
migrations.AlterField(
model_name='participantteammember',
name='overconfidence_question_4_min',
field=models.IntegerField(blank=True, null=True),
),
migrations.AlterField(
model_name='participantteammember',
name='overconfidence_question_5_max',
field=models.IntegerField(blank=True, null=True),
),
migrations.AlterField(
model_name='participantteammember',
name='overconfidence_question_5_min',
field=models.IntegerField(blank=True, null=True),
),
migrations.AlterField(
model_name='participantteammember',
name='overconfidence_question_6_max',
field=models.IntegerField(blank=True, null=True),
),
migrations.AlterField(
model_name='participantteammember',
name='overconfidence_question_6_min',
field=models.IntegerField(blank=True, null=True),
),
migrations.AlterField(
model_name='participantteammember',
name='overconfidence_question_7_max',
field=models.IntegerField(blank=True, null=True),
),
migrations.AlterField(
model_name='participantteammember',
name='overconfidence_question_7_min',
field=models.IntegerField(blank=True, null=True),
),
migrations.AlterField(
model_name='participantteammember',
name='overconfidence_question_8_max',
field=models.IntegerField(blank=True, null=True),
),
migrations.AlterField(
model_name='participantteammember',
name='overconfidence_question_8_min',
field=models.IntegerField(blank=True, null=True),
),
migrations.AlterField(
model_name='participantteammember',
name='overconfidence_question_9_max',
field=models.IntegerField(blank=True, null=True),
),
migrations.AlterField(
model_name='participantteammember',
name='overconfidence_question_9_min',
field=models.IntegerField(blank=True, null=True),
),
migrations.AlterField(
model_name='participantteammember',
name='sex',
field=models.CharField(blank=True, choices=[('male', 'آقا'), ('female', 'خانم')], max_length=10, null=True),
),
migrations.AlterField(
model_name='participantteammember',
name='team_coordination_question_1',
field=models.IntegerField(blank=True, null=True),
),
migrations.AlterField(
model_name='participantteammember',
name='team_coordination_question_2',
field=models.IntegerField(blank=True, null=True),
),
migrations.AlterField(
model_name='participantteammember',
name='team_coordination_question_3',
field=models.IntegerField(blank=True, null=True),
),
migrations.AlterField(
model_name='participantteammember',
name='team_coordination_question_4',
field=models.IntegerField(blank=True, null=True),
),
migrations.AlterField(
model_name='participantteammember',
name='team_coordination_question_5',
field=models.IntegerField(blank=True, null=True),
),
]
| 38.188312 | 120 | 0.60993 | 508 | 5,881 | 6.846457 | 0.120079 | 0.161012 | 0.201265 | 0.233468 | 0.905118 | 0.905118 | 0.889592 | 0.878091 | 0.878091 | 0.878091 | 0 | 0.014772 | 0.286346 | 5,881 | 153 | 121 | 38.437909 | 0.813915 | 0.007652 | 0 | 0.741497 | 1 | 0 | 0.235516 | 0.224889 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.006803 | 0 | 0.027211 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
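The migration above spells out the same `AlterField` twenty times for the `overconfidence_question_{1..10}_{min,max}` columns. When writing such a migration by hand, the operations list can be generated instead; a sketch assuming the same model and field names as above:

```python
from django.db import migrations, models

# Generate the twenty identical AlterField operations programmatically.
overconfidence_ops = [
    migrations.AlterField(
        model_name="participantteammember",
        name=f"overconfidence_question_{n}_{bound}",
        field=models.IntegerField(blank=True, null=True),
    )
    for n in range(1, 11)
    for bound in ("min", "max")
]
```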
5e6562998c8fe85528afc476041898f7bdfc2ebe | 34 | py | Python | model/__init__.py | voegtlel/ldap-admin-backend | 4b57ea867799d2af73a7550f306f4f1e2bf4f938 | [
"MIT"
] | 1 | 2019-09-03T07:21:59.000Z | 2019-09-03T07:21:59.000Z | model/__init__.py | voegtlel/ldap-admin-backend | 4b57ea867799d2af73a7550f306f4f1e2bf4f938 | [
"MIT"
] | null | null | null | model/__init__.py | voegtlel/ldap-admin-backend | 4b57ea867799d2af73a7550f306f4f1e2bf4f938 | [
"MIT"
] | null | null | null | import model.db
import model.view
| 11.333333 | 17 | 0.823529 | 6 | 34 | 4.666667 | 0.666667 | 0.785714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 34 | 2 | 18 | 17 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
5ece2b1e9a1520ee9883d339da2d7a66409cd444 | 33,608 | py | Python | crispy/inversions.py | bionic-toucan/crisPy2 | b84482bba7ead44a26576c4a7f6c5ee4d4392809 | [
"MIT"
] | 5 | 2020-02-13T17:36:50.000Z | 2021-01-28T12:52:39.000Z | crispy/inversions.py | bionic-toucan/crispy2 | b84482bba7ead44a26576c4a7f6c5ee4d4392809 | [
"MIT"
] | null | null | null | crispy/inversions.py | bionic-toucan/crispy2 | b84482bba7ead44a26576c4a7f6c5ee4d4392809 | [
"MIT"
] | 1 | 2020-10-28T12:51:12.000Z | 2020-10-28T12:51:12.000Z | import numpy as np
import matplotlib.pyplot as plt
import yaml, zarr
from matplotlib.colors import SymLogNorm
from astropy.wcs import WCS
import astropy.units as u
from astropy.coordinates import SkyCoord
from sunpy.coordinates import Helioprojective
from .mixin import InversionSlicingMixin
from .utils import ObjDict
class Inversion(InversionSlicingMixin):
"""
Class for transporting and using the inversions obtained from RADYNVERSION.
    :param filename: The file of the inversion. Can be either a zarr file path or an ObjDict.
:type filename: str or ObjDict
    :param z: The height grid that the atmospheric parameters are calculated at. This can be either a zarr file path or a numpy.ndarray.
:type z: str or numpy.ndarray
:param header: The header information of the associated observation.
:type header: dict or None
:cvar ne: The electron number density estimated by RADYNVERSION. This is the median solution for a certain number of draws from the latent space.
:cvar temp: The electron temperature estimated by RADYNVERSION. This is the median solution for a certain number of draws from the latent space.
:cvar vel: The bulk velocity flow estimated by RADYNVERSION. This is the median solution for a certain number of draws from the latent space.
:cvar err: This contains the median absolute deviation (MAD, standard error on the median) for each estimated parameter giving a sense of confidence intervals.
    :cvar wcs: The WCS from the observation associated with the inversion.
:cvar z: The height grid the inversions are estimated on.
:cvar header: The header information from the observation associated with the inversion.
"""
def __init__(self, filename, z, header, wcs=None):
if type(filename) == str:
self.f = zarr.open(filename, mode="r")
if type(z) == str:
self.z = zarr.open(z, mode="r")["z"][:]
else:
self.z = z
if wcs == None:
self.wcs = self._inversion_wcs(header)
else:
self.wcs = wcs
self.header = header
elif type(filename) == ObjDict:
self.f = filename
self.wcs = wcs
self.z = z
self.header = header
@property
def ne(self):
if type(self.f) == ObjDict:
return self.f["ne"]
else:
return self.f["/atmos/ne"]
@property
def temp(self):
if type(self.f) == ObjDict:
return self.f["temperature"]
else:
return self.f["/atmos/temperature"]
@property
def vel(self):
if type(self.f) == ObjDict:
return self.f["vel"]
else:
return self.f["/atmos/vel"]
@property
def ne_err(self):
if type(self.f) == ObjDict:
return self.f["ne_err"]
else:
return self.f["/atmos/ne_err"]
@property
def temp_err(self):
if type(self.f) == ObjDict:
return self.f["temperature_err"]
else:
return self.f["/atmos/temperature_err"]
@property
def vel_err(self):
if type(self.f) == ObjDict:
return self.f["vel_err"]
else:
return self.f["/atmos/vel_err"]
def __str__(self):
        try:
time = self.header["DATE-AVG"][-12:]
date = self.header["DATE-AVG"][:-13]
pointing_x = str(self.header["CRVAL1"])
pointing_y = str(self.header["CRVAL2"])
except KeyError:
time = self.header["time_obs"]
date = self.header["date_obs"]
pointing_x = str(self.header["crval"][-1])
pointing_y = str(self.header["crval"][-2])
return f"""Inversion
------------------
{date} {time}
Pointing: ({pointing_x}, {pointing_y})"""
def _inversion_wcs(self, header):
wcs_dict = {}
try:
wcs_dict["NAXIS1"] = header["NAXIS1"]
wcs_dict["NAXIS2"] = header["NAXIS2"]
wcs_dict["NAXIS3"] = self.z.shape[0]
wcs_dict["CTYPE1"] = "HPLN-TAN"
wcs_dict["CTYPE2"] = "HPLT-TAN"
wcs_dict["CTYPE3"] = "HEIGHT"
wcs_dict["CUNIT1"] = "arcsec"
wcs_dict["CUNIT2"] = "arcsec"
wcs_dict["CUNIT3"] = "Mm"
wcs_dict["CRPIX1"] = header["CRPIX1"]
wcs_dict["CRPIX2"] = header["CRPIX2"]
wcs_dict["CRPIX3"] = self.z.shape[0] // 2
wcs_dict["CRVAL1"] = header["CRVAL1"]
wcs_dict["CRVAL2"] = header["CRVAL2"]
wcs_dict["CRVAL3"] = self.z[self.z.shape[0] // 2]
wcs_dict["CDELT1"] = header["CDELT1"]
wcs_dict["CDELT2"] = header["CDELT2"]
wcs_dict["CDELT3"] = 1.0 # z is sampled non-uniformly
except KeyError:
wcs_dict["NAXIS1"] = header["dimensions"][-1]
wcs_dict["NAXIS2"] = header["dimensions"][-2]
wcs_dict["NAXIS3"] = self.z.shape[0]
wcs_dict["CTYPE1"] = "HPLN-TAN"
wcs_dict["CTYPE2"] = "HPLT-TAN"
wcs_dict["CTYPE3"] = "HEIGHT"
wcs_dict["CUNIT1"] = "arcsec"
wcs_dict["CUNIT2"] = "arcsec"
wcs_dict["CUNIT3"] = "Mm"
wcs_dict["CRPIX1"] = header["crpix"][-1]
wcs_dict["CRPIX2"] = header["crpix"][-2]
wcs_dict["CRPIX3"] = self.z.shape[0] // 2
wcs_dict["CRVAL1"] = header["crval"][-1]
wcs_dict["CRVAL2"] = header["crval"][-2]
wcs_dict["CRVAL3"] = self.z[self.z.shape[0] // 2]
wcs_dict["CDELT1"] = header["pixel_scale"]
wcs_dict["CDELT2"] = header["pixel_scale"]
wcs_dict["CDELT3"] = 1.0 # z is sampled non-uniformly
return WCS(wcs_dict)
def plot_ne(self, eb=False):
"""
Class method to plot the electron number density for a given location within the field-of-view. This works by slicing the ``Inversion`` object.
Parameters
----------
eb : bool, optional
Whether or not to plot the median absolute deviation (MAD) for the electron number density as errorbars. Default is False.
"""
if self.header is not None:
try:
datetime = self.header["DATE-AVG"]
except KeyError:
datetime = self.header["date_obs"] + "T" + self.header["time_obs"]
title = f"{datetime}"
fig = plt.figure()
ax1 = fig.gca()
if eb:
ax1.errorbar(self.z, self.ne, yerr=self.mad[0], capsize=3)
else:
ax1.plot(self.z, self.ne)
ax1.set_ylabel(r"log$_{10}$ n$_{\text{e}}$ \[cm$^{-3}$\]")
ax1.set_xlabel("z [Mm]")
ax1.set_title(f"Electron Number Density {title}")
ax1.tick_params(direction="in")
fig.show()
def plot_temp(self, eb=False):
"""
Class method to plot the electron temperature for a given point in the field-of-view. This is done by slicing the ``Inversion`` object.
Parameters
----------
eb : bool, optional
Whether or not to plot the median absolute deviation (MAD) of the estimated electron temperatures as errorbars. Default is False.
"""
if self.header is not None:
try:
datetime = self.header["DATE-AVG"]
except KeyError:
datetime = self.header["date_obs"] + "T" + self.header["time_obs"]
title = f"{datetime}"
fig = plt.figure()
ax1 = fig.gca()
if eb:
ax1.errorbar(self.z, self.temp, yerr=self.mad[1], capsize=3)
else:
ax1.plot(self.z, self.temp)
ax1.set_ylabel(r"log$_{10}$ T \[K\]")
ax1.set_xlabel("z [Mm]")
ax1.set_title(f"Electron Temperature {title}")
ax1.tick_params(direction="in")
fig.show()
def plot_vel(self, eb=False):
"""
Class method to plot the bulk velocity for a certain point within the field-of-view. This is done using a slice of the ``Inversion`` instance.
Parameters
----------
eb : bool, optional
Whether or not to plot the median absolute deviation (MAD) of the bulk velocity as errorbars. Default is False.
"""
if self.header is not None:
try:
datetime = self.header["DATE-AVG"]
except KeyError:
datetime = self.header["date_obs"] + "T" + self.header["time_obs"]
title = f"{datetime}"
fig = plt.figure()
ax1 = fig.gca()
if eb:
ax1.errorbar(self.z, self.vel, yerr=self.mad[2], capsize=3)
else:
ax1.plot(self.z, self.vel)
ax1.set_ylabel(r"Bulk Plasma Flow \[km s$^{-1}$\]")
ax1.set_xlabel("z [Mm]")
ax1.set_title(f"Bulk Plasma Flow {title}")
ax1.tick_params(direction="in")
fig.show()
def plot_params(self, eb=False):
"""
Class method to plot the electron number density, electron temperature, and bulk velocity for a certain point within the field-of-view. This is done using a slice of the ``Inversion`` instance.
Parameters
----------
eb : bool, optional
Whether or not to plot the median absolute deviation (MAD) for each estimated quantity as errorbars. Default is False.
"""
if self.header is not None:
try:
datetime = self.header["DATE-AVG"]
except KeyError:
datetime = self.header["date_obs"] + "T" + self.header["time_obs"]
title = f"{datetime}"
fig = plt.figure()
fig.suptitle(title)
ax1 = fig.add_subplot(1, 3, 1)
        if eb:
ax1.errorbar(self.z, self.ne, yerr=self.mad[0], capsize=3)
else:
ax1.plot(self.z, self.ne)
ax1.set_ylabel(r"log$_{10}$ n$_{e}$ \[cm$^{-3}$\]")
ax1.set_xlabel("z [Mm]")
ax1.set_title("Electron Number Density")
ax1.tick_params(direction="in")
ax2 = fig.add_subplot(1, 3, 2)
        if eb:
ax2.errorbar(self.z, self.temp, yerr=self.mad[1], capsize=3)
else:
ax2.plot(self.z, self.temp)
ax2.set_ylabel(r"log$_{10}$ T \[K\]")
ax2.set_xlabel("z [Mm]")
ax2.set_title("Electron Temperature")
ax2.tick_params(direction="in")
ax3 = fig.add_subplot(1, 3, 3)
        if eb:
ax3.errorbar(self.z, self.vel, yerr=self.mad[2], capsize=3)
else:
ax3.plot(self.z, self.vel)
ax3.set_ylabel(r"Bulk Plasma Flow \[km s$^{-1}\]")
ax3.set_xlabel("z [Mm]")
ax3.set_title("Bulk Plasma Flow")
ax3.tick_params(direction="in")
fig.show()
def ne_map(self, frame=None):
"""
Creates an electron density map at a specified height denoted in the ``Inversion`` slice.
Parameters
----------
frame : str, optional
The frame to plot the map in. Default is None therefore uses the WCS frame. Other option is "pix" to plot in the pixel frame.
"""
if type(self.ind) == int:
idx = self.ind
else:
idx = self.ind[-1]
height = np.round(self.z[idx], decimals=4)
if self.header is not None:
try:
datetime = self.header["DATE-AVG"]
except KeyError:
datetime = self.header["date_obs"] + "T" + self.header["time_obs"]
else:
datetime = ""
if frame is None:
fig = plt.figure()
ax1 = fig.add_subplot(1, 1, 1, projection=self.wcs.low_level_wcs)
im1 = ax1.imshow(self.ne, cmap="cividis")
ax1.set_ylabel("Helioprojective Latitude [arcsec]")
ax1.set_xlabel("Helioprojective Longitude [arcsec]")
ax1.set_title(f"Electron Number Density {datetime} z={height}Mm")
fig.colorbar(im1, ax=ax1, label=r"log$_{10}$n$_{e}$ [cm$^{-3}$]")
fig.show()
else:
fig = plt.figure()
ax1 = fig.gca()
im1 = ax1.imshow(self.ne, cmap="cividis")
ax1.set_ylabel("y [pixels]")
ax1.set_xlabel("x [pixels]")
ax1.set_title(f"Electron Number Density {datetime} z={height}Mm")
fig.colorbar(im1, ax=ax1, label=r"log$_{10}$n$_{e}$ [cm$^{-3}$]")
fig.show()
def temp_map(self, frame=None):
"""
Creates an electron temperature map at a specified height denoted in the ``Inversion`` slice.
Parameters
----------
frame : str, optional
The frame to plot the map in. Default is None therefore uses the WCS frame. Other option is "pix" to plot in the pixel frame.
"""
if type(self.ind) == int:
idx = self.ind
else:
idx = self.ind[-1]
height = np.round(self.z[idx], decimals=4)
if self.header is not None:
try:
datetime = self.header["DATE-AVG"]
except KeyError:
datetime = self.header["date_obs"] + "T" + self.header["time_obs"]
else:
datetime = ""
if frame is None:
fig = plt.figure()
ax1 = fig.add_subplot(1, 1, 1, projection=self.wcs.low_level_wcs)
im1 = ax1.imshow(self.temp, cmap="hot")
ax1.set_ylabel("Helioprojective Latitude [arcsec]")
ax1.set_xlabel("Helioprojective Longitude [arcsec]")
ax1.set_title(f"Electron Temperature {datetime} z={height}Mm")
fig.colorbar(im1, ax=ax1, label=r"log$_{10}$T [K]")
fig.show()
else:
fig = plt.figure()
ax1 = fig.gca()
im1 = ax1.imshow(self.temp, cmap="cividis")
ax1.set_ylabel("y [pixels]")
ax1.set_xlabel("x [pixels]")
ax1.set_title(f"Electron Temperature {datetime} z={height}Mm")
fig.colorbar(im1, ax=ax1, label=r"log$_{10}$T [K]")
fig.show()
def vel_map(self, frame=None):
"""
Creates a bulk velocity map at a specified height denoted in the ``Inversion`` slice.
Parameters
----------
frame : str, optional
The frame to plot the map in. Default is None therefore uses the WCS frame. Other option is "pix" to plot in the pixel frame.
"""
if type(self.ind) == int:
idx = self.ind
else:
idx = self.ind[-1]
height = np.round(self.z[idx], decimals=4)
if self.header is not None:
try:
datetime = self.header["DATE-AVG"]
except KeyError:
datetime = self.header["date_obs"] + "T" + self.header["time_obs"]
else:
datetime = ""
if frame is None:
fig = plt.figure()
ax1 = fig.add_subplot(1, 1, 1, projection=self.wcs.low_level_wcs)
im1 = ax1.imshow(self.vel, cmap="RdBu", norm=SymLogNorm(1))
ax1.set_ylabel("Helioprojective Latitude [arcsec]")
ax1.set_xlabel("Helioprojective Longitude [arcsec]")
ax1.set_title(f"Bulk Velocity Flow {datetime} z={height}Mm")
fig.colorbar(im1, ax=ax1, label=r"v [kms$^{-1}$]")
fig.show()
else:
fig = plt.figure()
ax1 = fig.gca()
im1 = ax1.imshow(self.vel, cmap="RdBu", norm=SymLogNorm(1))
ax1.set_ylabel("y [pixels]")
ax1.set_xlabel("x [pixels]")
ax1.set_title(f"Bulk Velocity Flow {datetime} z={height}Mm")
fig.colorbar(im1, ax=ax1, label=r"v [kms$^{-1}$]")
fig.show()
def params_map(self, frame=None):
"""
Creates maps of electron number density, electron temperature, and bulk velocity at a specified height denoted in the ``Inversion`` slice.
Parameters
----------
frame : str, optional
The frame to plot the map in. Default is None therefore uses the WCS frame. Other option is "pix" to plot in the pixel frame.
"""
if type(self.ind) == int:
idx = self.ind
else:
idx = self.ind[-1]
height = np.round(self.z[idx], decimals=4)
if self.header is not None:
try:
datetime = self.header["DATE-AVG"]
except KeyError:
datetime = self.header["date_obs"] + "T" + self.header["time_obs"]
else:
datetime = ""
if frame is None:
fig = plt.figure()
fig.suptitle(f"{datetime} z={np.round(height,3)}Mm")
ax1 = fig.add_subplot(1, 3, 1, projection=self.wcs.low_level_wcs)
im1 = ax1.imshow(self.ne, cmap="cividis")
ax1.set_ylabel("Helioprojective Latitude [arcsec]")
ax1.set_xlabel("Helioprojective Longitude [arcsec]")
ax1.set_title("Electron Number Density")
fig.colorbar(im1, ax=ax1, orientation="horizontal", label=r"log$_{10}$n$_{e}$ [cm$^{-3}$]")
ax2 = fig.add_subplot(1, 3, 2, projection=self.wcs.low_level_wcs)
im2 = ax2.imshow(self.temp, cmap="hot")
ax2.set_ylabel("Helioprojective Latitude [arcsec]")
ax2.set_xlabel("Helioprojective Longitude [arcsec]")
ax2.set_title("Electron Temperature")
fig.colorbar(im2, ax=ax2, orientation="horizontal", label=r"log$_{10}$T [K]")
ax3 = fig.add_subplot(1, 3, 3, projection=self.wcs.low_level_wcs)
im3 = ax3.imshow(self.vel, cmap="RdBu", norm=SymLogNorm(1))
ax3.set_ylabel("Helioprojective Latitude [arcsec]")
ax3.set_xlabel("Helioprojective Longitude [arcsec]")
ax3.set_title("Bulk Velocity Flow")
fig.colorbar(im3, ax=ax3, orientation="horizontal", label=r"v [kms$^{-1}$]")
fig.show()
else:
fig = plt.figure()
ax1 = fig.add_subplot(1, 3, 1)
im1 = ax1.imshow(self.ne, cmap="cividis")
ax1.set_ylabel("y [pixels]")
ax1.set_xlabel("x [pixels]")
ax1.set_title("Electron Number Density")
fig.colorbar(im1, ax=ax1, orientation="horizontal", label=r"log$_{10}$n$_{e}$ [cm$^{-3}$]")
ax2 = fig.add_subplot(1, 3, 2)
im2 = ax2.imshow(self.temp, cmap="hot")
ax2.set_ylabel("y [pixels]")
ax2.set_xlabel("x [pixels]")
ax2.set_title("Electron Temperature")
fig.colorbar(im2, ax=ax2, orientation="horizontal", label=r"log$_{10}$T [K]")
ax3 = fig.add_subplot(1, 3, 3)
im3 = ax3.imshow(self.vel, cmap="RdBu", norm=SymLogNorm(1))
ax3.set_ylabel("y [pixels]")
ax3.set_xlabel("x [pixels]")
ax3.set_title("Bulk Velocity Flow")
fig.colorbar(im3, ax=ax3, orientation="horizontal", label=r"v [kms$^{-1}$]")
fig.show()
def to_lonlat(self, y, x, coord=False, unit=False):
"""
This function will take a y, x coordinate in pixel space and map it to Helioprojective Longitude, Helioprojective Latitude according to the transform in the WCS. This will return the Helioprojective coordinates in units of arcseconds. Note this function takes arguments in the order of numpy indexing (y,x) but returns a pair longitude/latitude which is Solar-X, Solar-Y.
Parameters
----------
y : int
The y-index to be converted to Helioprojective Latitude.
x : int
The x-index to be converted to Helioprojective Longitude.
"""
if coord:
if len(self.wcs.low_level_wcs.array_shape) == 4:
if hasattr(self, "ind"):
if type(self.ind[-2]) == slice and type(self.ind[-1]) == slice:
return self.wcs.low_level_wcs._wcs[0,0,self.ind[-2],self.ind[-1]].array_index_to_world(y,x)
elif type(self.ind[-2]) == slice and type(self.ind[-1]) != slice:
return self.wcs.low_level_wcs._wcs[0,0,self.ind[-2]].array_index_to_world(y,x)
elif type(self.ind[-2]) != slice and type(self.ind[-1]) == slice:
return self.wcs.low_level_wcs._wcs[0,0,:,self.ind[-1]].array_index_to_world(y,x)
else:
return self.wcs.low_level_wcs._wcs[0,0].array_index_to_world(y,x)
else:
return self.wcs[0,0].array_index_to_world(y,x)
elif len(self.wcs.low_level_wcs.array_shape) == 3:
if hasattr(self, "ind") and self.wcs.low_level_wcs._wcs.naxis == 4:
if type(self.ind[-2]) == slice and type(self.ind[-1]) == slice:
return self.wcs.low_level_wcs._wcs[0,0,self.ind[-2],self.ind[-1]].array_index_to_world(y,x)
elif type(self.ind[-2]) == slice and type(self.ind[-1]) != slice:
return self.wcs.low_level_wcs._wcs[0,0,self.ind[-2]].array_index_to_world(y,x)
elif type(self.ind[-2]) != slice and type(self.ind[-1]) == slice:
return self.wcs.low_level_wcs._wcs[0,0,:,self.ind[-1]].array_index_to_world(y,x)
else:
return self.wcs.low_level_wcs._wcs[0,0].array_index_to_world(y,x)
else:
if hasattr(self, "ind"):
if type(self.ind[-2]) == slice and type(self.ind[-1]) == slice:
return self.wcs.low_level_wcs._wcs[0,self.ind[-2],self.ind[-1]].array_index_to_world(y,x)
elif type(self.ind[-2]) == slice and type(self.ind[-1]) != slice:
return self.wcs.low_level_wcs._wcs[0,self.ind[-2]].array_index_to_world(y,x)
elif type(self.ind[-2]) != slice and type(self.ind[-1]) == slice:
return self.wcs.low_level_wcs._wcs[0,:,self.ind[-1]].array_index_to_world(y,x)
else:
return self.wcs.low_level_wcs._wcs[0].array_index_to_world(y,x)
else:
return self.wcs[0].array_index_to_world(y,x)
elif len(self.wcs.low_level_wcs.array_shape) == 2:
return self.wcs.array_index_to_world(y,x)
else:
raise NotImplementedError("Too many or too little dimensions.")
else:
if unit:
if len(self.wcs.low_level_wcs.array_shape) == 4:
if hasattr(self, "ind"):
if type(self.ind[-2]) == slice and type(self.ind[-1]) == slice:
sc = self.wcs.low_level_wcs._wcs[0,0,self.ind[-2],self.ind[-1]].array_index_to_world(y,x)
return sc.Tx, sc.Ty
elif type(self.ind[-2]) == slice and type(self.ind[-1]) != slice:
sc = self.wcs.low_level_wcs._wcs[0,0,self.ind[-2]].array_index_to_world(y,x)
return sc.Tx, sc.Ty
elif type(self.ind[-2]) != slice and type(self.ind[-1]) == slice:
sc = self.wcs.low_level_wcs._wcs[0,0,:,self.ind[-1]].array_index_to_world(y,x)
return sc.Tx, sc.Ty
else:
sc = self.wcs.low_level_wcs._wcs[0,0].array_index_to_world(y,x)
return sc.Tx, sc.Ty
else:
sc = self.wcs[0,0].array_index_to_world(y,x)
return sc.Tx, sc.Ty
elif len(self.wcs.low_level_wcs.array_shape) == 3:
if hasattr(self, "ind") and self.wcs.low_level_wcs._wcs.naxis == 4:
if type(self.ind[-2]) == slice and type(self.ind[-1]) == slice:
sc = self.wcs.low_level_wcs._wcs[0,0,self.ind[-2],self.ind[-1]].array_index_to_world(y,x)
return sc.Tx, sc.Ty
elif type(self.ind[-2]) == slice and type(self.ind[-1]) != slice:
sc = self.wcs.low_level_wcs._wcs[0,0,self.ind[-2]].array_index_to_world(y,x)
return sc.Tx, sc.Ty
elif type(self.ind[-2]) != slice and type(self.ind[-1]) == slice:
sc = self.wcs.low_level_wcs._wcs[0,0,:,self.ind[-1]].array_index_to_world(y,x)
return sc.Tx, sc.Ty
else:
sc = self.wcs.low_level_wcs._wcs[0,0].array_index_to_world(y,x)
return sc.Tx, sc.Ty
else:
if hasattr(self, "ind"):
if type(self.ind[-2]) == slice and type(self.ind[-1]) == slice:
sc = self.wcs.low_level_wcs._wcs[0,self.ind[-2],self.ind[-1]].array_index_to_world(y,x)
return sc.Tx, sc.Ty
elif type(self.ind[-2]) == slice and type(self.ind[-1]) != slice:
sc = self.wcs.low_level_wcs._wcs[0,self.ind[-2]].array_index_to_world(y,x)
return sc.Tx, sc.Ty
elif type(self.ind[-2]) != slice and type(self.ind[-1]) == slice:
sc = self.wcs.low_level_wcs._wcs[0,:,self.ind[-1]].array_index_to_world(y,x)
return sc.Tx, sc.Ty
else:
sc = self.wcs.low_level_wcs._wcs[0].array_index_to_world(y,x)
return sc.Tx, sc.Ty
else:
sc = self.wcs[0].array_index_to_world(y,x)
return sc.Tx, sc.Ty
elif len(self.wcs.low_level_wcs.array_shape) == 2:
sc = self.wcs.array_index_to_world(y,x)
return sc.Tx, sc.Ty
else:
raise NotImplementedError("Too many or too little dimensions.")
else:
if len(self.wcs.low_level_wcs.array_shape) == 4:
if hasattr(self, "ind"):
if type(self.ind[-2]) == slice and type(self.ind[-1]) == slice:
sc = self.wcs.low_level_wcs._wcs[0,0,self.ind[-2],self.ind[-1]].array_index_to_world(y,x)
return sc.Tx.value, sc.Ty.value
elif type(self.ind[-2]) == slice and type(self.ind[-1]) != slice:
sc = self.wcs.low_level_wcs._wcs[0,0,self.ind[-2]].array_index_to_world(y,x)
return sc.Tx.value, sc.Ty.value
elif type(self.ind[-2]) != slice and type(self.ind[-1]) == slice:
sc = self.wcs.low_level_wcs._wcs[0,0,:,self.ind[-1]].array_index_to_world(y,x)
return sc.Tx.value, sc.Ty.value
else:
sc = self.wcs.low_level_wcs._wcs[0,0].array_index_to_world(y,x)
return sc.Tx.value, sc.Ty.value
else:
sc = self.wcs[0,0].array_index_to_world(y,x)
return sc.Tx.value, sc.Ty.value
elif len(self.wcs.low_level_wcs.array_shape) == 3:
if hasattr(self, "ind") and self.wcs.low_level_wcs._wcs.naxis == 4:
if type(self.ind[-2]) == slice and type(self.ind[-1]) == slice:
sc = self.wcs.low_level_wcs._wcs[0,0,self.ind[-2],self.ind[-1]].array_index_to_world(y,x)
return sc.Tx.value, sc.Ty.value
elif type(self.ind[-2]) == slice and type(self.ind[-1]) != slice:
sc = self.wcs.low_level_wcs._wcs[0,0,self.ind[-2]].array_index_to_world(y,x)
return sc.Tx.value, sc.Ty.value
elif type(self.ind[-2]) != slice and type(self.ind[-1]) == slice:
sc = self.wcs.low_level_wcs._wcs[0,0,:,self.ind[-1]].array_index_to_world(y,x)
return sc.Tx.value, sc.Ty.value
else:
sc = self.wcs.low_level_wcs._wcs[0,0].array_index_to_world(y,x)
return sc.Tx.value, sc.Ty.value
else:
if hasattr(self, "ind"):
if type(self.ind[-2]) == slice and type(self.ind[-1]) == slice:
sc = self.wcs.low_level_wcs._wcs[0,self.ind[-2],self.ind[-1]].array_index_to_world(y,x)
return sc.Tx.value, sc.Ty.value
elif type(self.ind[-2]) == slice and type(self.ind[-1]) != slice:
sc = self.wcs.low_level_wcs._wcs[0,self.ind[-2]].array_index_to_world(y,x)
return sc.Tx.value, sc.Ty.value
elif type(self.ind[-2]) != slice and type(self.ind[-1]) == slice:
sc = self.wcs.low_level_wcs._wcs[0,:,self.ind[-1]].array_index_to_world(y,x)
return sc.Tx.value, sc.Ty.value
else:
sc = self.wcs.low_level_wcs._wcs[0].array_index_to_world(y,x)
return sc.Tx.value, sc.Ty.value
else:
sc = self.wcs[0].array_index_to_world(y,x)
return sc.Tx.value, sc.Ty.value
elif len(self.wcs.low_level_wcs.array_shape) == 2:
sc = self.wcs.array_index_to_world(y,x)
return sc.Tx.value, sc.Ty.value
else:
raise NotImplementedError("Too many or too little dimensions.")
def from_lonlat(self,lon,lat):
"""
This function takes a Helioprojective Longitude, Helioprojective Latitude pair and converts them to the y, x indices to index the object correctly. The function takes its arguments in the order Helioprojective Longitude, Helioprojective Latitude but returns the indices in the (y,x) format so that the output of this function can be used to directly index the object.
Parameters
----------
lon : float
The Helioprojective Longitude in arcseconds.
lat : float
The Helioprojective Latitude in arcseconds.
"""
lon, lat = lon << u.arcsec, lat << u.arcsec
sc = SkyCoord(lon, lat, frame=Helioprojective)
if len(self.wcs.low_level_wcs.array_shape) == 4:
if hasattr(self, "ind"):
if type(self.ind[-2]) == slice and type(self.ind[-1]) == slice:
return self.wcs.low_level_wcs._wcs[0,0,self.ind[-2],self.ind[-1]].world_to_array_index(sc)
elif type(self.ind[-2]) == slice and type(self.ind[-1]) != slice:
return self.wcs.low_level_wcs._wcs[0,0,self.ind[-2]].world_to_array_index(sc)
elif type(self.ind[-2]) != slice and type(self.ind[-1]) == slice:
return self.wcs.low_level_wcs._wcs[0,0,:,self.ind[-1]].world_to_array_index(sc)
else:
return self.wcs.low_level_wcs._wcs[0,0].world_to_array_index(sc)
else:
return self.wcs[0,0].world_to_array_index(lon,lat)
elif len(self.wcs.low_level_wcs.array_shape) == 3:
if hasattr(self, "ind") and self.wcs.low_level_wcs._wcs.naxis == 4:
if type(self.ind[-2]) == slice and type(self.ind[-1]) == slice:
return self.wcs.low_level_wcs._wcs[0,0,self.ind[-2],self.ind[-1]].world_to_array_index(sc)
elif type(self.ind[-2]) == slice and type(self.ind[-1]) != slice:
return self.wcs.low_level_wcs._wcs[0,0,self.ind[-2]].world_to_array_index(sc)
elif type(self.ind[-2]) != slice and type(self.ind[-1]) == slice:
return self.wcs.low_level_wcs._wcs[0,0,:,self.ind[-1]].world_to_array_index(sc)
else:
return self.wcs.low_level_wcs._wcs[0,0].world_to_array_index(sc)
else:
if hasattr(self, "ind"):
if type(self.ind[-2]) == slice and type(self.ind[-1]) == slice:
return self.wcs.low_level_wcs._wcs[0,self.ind[-2],self.ind[-1]].world_to_array_index(sc)
elif type(self.ind[-2]) == slice and type(self.ind[-1]) != slice:
return self.wcs.low_level_wcs._wcs[0,self.ind[-2]].world_to_array_index(sc)
elif type(self.ind[-2]) != slice and type(self.ind[-1]) == slice:
return self.wcs.low_level_wcs._wcs[0,:,self.ind[-1]].world_to_array_index(sc)
else:
return self.wcs.low_level_wcs._wcs[0].world_to_array_index(sc)
else:
return self.wcs[0].world_to_array_index(sc)
elif len(self.wcs.low_level_wcs.array_shape) == 2:
return self.wcs.world_to_array_index(sc)
else:
raise NotImplementedError("Too many or too little dimensions.") | 48.080114 | 379 | 0.534337 | 4,466 | 33,608 | 3.893641 | 0.069189 | 0.057968 | 0.048076 | 0.060383 | 0.817816 | 0.79165 | 0.76997 | 0.753752 | 0.740526 | 0.727299 | 0 | 0.023969 | 0.333373 | 33,608 | 699 | 380 | 48.080114 | 0.752187 | 0.14193 | 0 | 0.754545 | 0 | 0 | 0.097914 | 0.001632 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034545 | false | 0 | 0.018182 | 0 | 0.189091 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
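`to_lonlat` and `from_lonlat` in the record above are thin wrappers around astropy's `array_index_to_world` / `world_to_array_index`, applied after slicing away any leading WCS axes. A minimal round-trip sketch with a stand-alone 2-D helioprojective WCS; the header values are illustrative, not taken from any real observation:

```python
from astropy.wcs import WCS
import sunpy.coordinates  # noqa: F401 -- registers the Helioprojective frame

wcs = WCS({
    "CTYPE1": "HPLN-TAN", "CTYPE2": "HPLT-TAN",
    "CUNIT1": "arcsec", "CUNIT2": "arcsec",
    "CRPIX1": 50, "CRPIX2": 50,
    "CRVAL1": -720, "CRVAL2": -300,
    "CDELT1": 0.04, "CDELT2": 0.04,
    "NAXIS1": 100, "NAXIS2": 100,
})

sc = wcs.array_index_to_world(10, 20)   # (y, x) indices -> SkyCoord
print(sc.Tx, sc.Ty)                     # helioprojective lon/lat in arcsec
y, x = wcs.world_to_array_index(sc)     # SkyCoord -> (y, x) indices
```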
0d5da505d4e15d21cef86943c0bd85c70e70e8cb | 4,322 | py | Python | pinaxcon/proposals/migrations/0007_knowledgeproposal_lawproposal_testingproposal.py | n6151h/pyconau2017 | 092de5fd60d2b0dd207242cf2585e16ec6843392 | [
"MIT"
] | null | null | null | pinaxcon/proposals/migrations/0007_knowledgeproposal_lawproposal_testingproposal.py | n6151h/pyconau2017 | 092de5fd60d2b0dd207242cf2585e16ec6843392 | [
"MIT"
] | null | null | null | pinaxcon/proposals/migrations/0007_knowledgeproposal_lawproposal_testingproposal.py | n6151h/pyconau2017 | 092de5fd60d2b0dd207242cf2585e16ec6843392 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by Django 1.9.7 on 2016-09-27 07:58
from __future__ import unicode_literals
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('symposion_proposals', '0001_initial'),
('proposals', '0006_auto_20160925_0551'),
]
operations = [
migrations.CreateModel(
name='KnowledgeProposal',
fields=[
('proposalbase_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='symposion_proposals.ProposalBase')),
('target_audience', models.IntegerField(choices=[(1, b'User'), (2, b'Business'), (3, b'Community'), (4, b'Developer')])),
('recording_release', models.BooleanField(default=True, help_text=b"I allow Linux Australia to release any recordings of presentations covered by this proposal, under the <a href='https://creativecommons.org/licenses/by-sa/3.0/au/deed.en'> Creative Commons Attribution-Share Alike Australia 3.0 Licence</a>")),
('materials_release', models.BooleanField(default=True, help_text=b"I allow Linux Australia to release any other material (such as slides) from presentations covered by this proposal, under the <a href='https://creativecommons.org/licenses/by-sa/3.0/au/deed.en'> Creative Commons Attribution-Share Alike Australia 3.0 Licence</a>")),
],
options={
'verbose_name': 'Open Knowledge Australia Miniconf Proposal',
},
bases=('symposion_proposals.proposalbase',),
),
migrations.CreateModel(
name='LawProposal',
fields=[
('proposalbase_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='symposion_proposals.ProposalBase')),
('target_audience', models.IntegerField(choices=[(1, b'User'), (2, b'Business'), (3, b'Community'), (4, b'Developer')])),
('recording_release', models.BooleanField(default=True, help_text=b"I allow Linux Australia to release any recordings of presentations covered by this proposal, under the <a href='https://creativecommons.org/licenses/by-sa/3.0/au/deed.en'> Creative Commons Attribution-Share Alike Australia 3.0 Licence</a>")),
('materials_release', models.BooleanField(default=True, help_text=b"I allow Linux Australia to release any other material (such as slides) from presentations covered by this proposal, under the <a href='https://creativecommons.org/licenses/by-sa/3.0/au/deed.en'> Creative Commons Attribution-Share Alike Australia 3.0 Licence</a>")),
],
options={
'verbose_name': 'Open Law and Policy Miniconf Proposal',
},
bases=('symposion_proposals.proposalbase',),
),
migrations.CreateModel(
name='TestingProposal',
fields=[
('proposalbase_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='symposion_proposals.ProposalBase')),
('target_audience', models.IntegerField(choices=[(1, b'User'), (2, b'Business'), (3, b'Community'), (4, b'Developer')])),
('recording_release', models.BooleanField(default=True, help_text=b"I allow Linux Australia to release any recordings of presentations covered by this proposal, under the <a href='https://creativecommons.org/licenses/by-sa/3.0/au/deed.en'> Creative Commons Attribution-Share Alike Australia 3.0 Licence</a>")),
('materials_release', models.BooleanField(default=True, help_text=b"I allow Linux Australia to release any other material (such as slides) from presentations covered by this proposal, under the <a href='https://creativecommons.org/licenses/by-sa/3.0/au/deed.en'> Creative Commons Attribution-Share Alike Australia 3.0 Licence</a>")),
],
options={
'verbose_name': 'Testing/Automation Miniconf Proposal',
},
bases=('symposion_proposals.proposalbase',),
),
]
| 75.824561 | 349 | 0.677696 | 526 | 4,322 | 5.475285 | 0.230038 | 0.008333 | 0.0625 | 0.066667 | 0.873264 | 0.873264 | 0.855556 | 0.855556 | 0.855556 | 0.802778 | 0 | 0.02069 | 0.194817 | 4,322 | 56 | 350 | 77.178571 | 0.806897 | 0.015502 | 0 | 0.612245 | 1 | 0.122449 | 0.524694 | 0.050564 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.061224 | 0 | 0.122449 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
0d7cab08132c0d93750428cecae2f2982a66f854 | 23,773 | py | Python | pymazda/sensordata/android_builds.py | bdr99/pymazda | aa05b9414a8111f9381bbf425f5cb2d75da53e2c | [
"MIT"
] | 22 | 2021-01-02T17:50:05.000Z | 2022-02-26T15:48:19.000Z | pymazda/sensordata/android_builds.py | bdr99/pymazda | aa05b9414a8111f9381bbf425f5cb2d75da53e2c | [
"MIT"
] | 21 | 2021-03-04T02:52:47.000Z | 2022-03-12T03:53:11.000Z | pymazda/sensordata/android_builds.py | bdr99/pymazda | aa05b9414a8111f9381bbf425f5cb2d75da53e2c | [
"MIT"
] | 5 | 2021-03-04T19:57:07.000Z | 2022-03-08T21:11:38.000Z | import json
ANDROID_BUILDS_JSON = '{"Pixel 3":{"codename":"blueline","builds":[{"buildId":"RQ3A.210605.005","version":"11"},{"buildId":"RQ2A.210505.002","version":"11"},{"buildId":"RQ2A.210405.006","version":"11"},{"buildId":"RQ2A.210405.005","version":"11"},{"buildId":"RQ2A.210305.006","version":"11"},{"buildId":"RQ1D.210205.004","version":"11"},{"buildId":"RQ1A.210205.004","version":"11"},{"buildId":"RQ1D.210105.003","version":"11"},{"buildId":"RQ1A.210105.003","version":"11"},{"buildId":"RQ1A.201205.003.A1","version":"11"},{"buildId":"RQ1A.201205.003","version":"11"},{"buildId":"RP1A.201105.002","version":"11"},{"buildId":"RP1A.201005.004","version":"11"},{"buildId":"RP1A.200720.009","version":"11"},{"buildId":"QQ3A.200805.001","version":"10"},{"buildId":"QQ3A.200705.002","version":"10"},{"buildId":"QQ3A.200605.002.A1","version":"10"},{"buildId":"QQ3A.200605.001","version":"10"},{"buildId":"QQ2A.200501.001.B2","version":"10"},{"buildId":"QQ2A.200501.001.A3","version":"10"},{"buildId":"QQ2A.200405.005","version":"10"},{"buildId":"QQ2A.200305.002","version":"10"},{"buildId":"QQ1A.200205.002","version":"10"},{"buildId":"QQ1A.200105.003","version":"10"},{"buildId":"QQ1A.200105.002","version":"10"},{"buildId":"QQ1A.191205.008","version":"10"},{"buildId":"QP1A.191105.003","version":"10"},{"buildId":"QP1A.191005.007","version":"10"},{"buildId":"QP1A.190711.020.C3","version":"10"},{"buildId":"QP1A.190711.020","version":"10"},{"buildId":"QP1A.190711.019","version":"10"},{"buildId":"PQ3A.190801.002","version":"9"},{"buildId":"PQ3A.190705.003","version":"9"},{"buildId":"PQ3A.190605.004.A1","version":"9"},{"buildId":"PQ3A.190605.003","version":"9"},{"buildId":"PQ3A.190505.002","version":"9"},{"buildId":"PQ2A.190405.003","version":"9"},{"buildId":"PQ2A.190305.002","version":"9"},{"buildId":"PQ2A.190205.001","version":"9"},{"buildId":"PQ1A.190105.004","version":"9"},{"buildId":"PQ1A.181205.006.A1","version":"9"},{"buildId":"PQ1A.181205.006","version":"9"},{"buildId":"PQ1A.181105.017.A1","version":"9"},{"buildId":"PD1A.180720.031","version":"9"},{"buildId":"PD1A.180720.030","version":"9"}]},"Pixel 
3a":{"codename":"sargo","builds":[{"buildId":"RQ3A.210605.005","version":"11"},{"buildId":"RQ2A.210505.002","version":"11"},{"buildId":"RQ2A.210405.005","version":"11"},{"buildId":"RQ2A.210305.006","version":"11"},{"buildId":"RQ1A.210205.004","version":"11"},{"buildId":"RQ1A.210105.002","version":"11"},{"buildId":"RQ1A.201205.003","version":"11"},{"buildId":"RP1A.201105.002","version":"11"},{"buildId":"RP1A.201005.004","version":"11"},{"buildId":"RP1A.200720.009","version":"11"},{"buildId":"QQ3A.200805.001","version":"10"},{"buildId":"QQ3A.200705.002","version":"10"},{"buildId":"QQ3A.200605.002.A1","version":"10"},{"buildId":"QQ3A.200605.002","version":"10"},{"buildId":"QQ2A.200501.001.B2","version":"10"},{"buildId":"QQ2A.200501.001.A3","version":"10"},{"buildId":"QQ2A.200405.005","version":"10"},{"buildId":"QQ2A.200305.002","version":"10"},{"buildId":"QQ1A.200205.002","version":"10"},{"buildId":"QQ1A.200105.002","version":"10"},{"buildId":"QQ1A.191205.011","version":"10"},{"buildId":"QP1A.191105.003","version":"10"},{"buildId":"QP1A.191005.007","version":"10"},{"buildId":"QP1A.190711.020.C3","version":"10"},{"buildId":"QP1A.190711.020","version":"10"},{"buildId":"QP1A.190711.019","version":"10"},{"buildId":"PQ3B.190801.002","version":"9"},{"buildId":"PQ3B.190705.003","version":"9"},{"buildId":"PQ3B.190605.006","version":"9"},{"buildId":"PD2A.190115.032","version":"9"},{"buildId":"PD2A.190115.029","version":"9"}]},"Pixel 3a XL":{"codename":"bonito","builds":[{"buildId":"RQ3A.210605.005","version":"11"},{"buildId":"RQ2A.210505.002","version":"11"},{"buildId":"RQ2A.210405.005","version":"11"},{"buildId":"RQ2A.210305.006","version":"11"},{"buildId":"RQ1A.210205.004","version":"11"},{"buildId":"RQ1A.210105.002","version":"11"},{"buildId":"RQ1A.201205.003","version":"11"},{"buildId":"RP1A.201105.002","version":"11"},{"buildId":"RP1A.201005.004","version":"11"},{"buildId":"RP1A.200720.009","version":"11"},{"buildId":"QQ3A.200805.001","version":"10"},{"buildId":"QQ3A.200705.002","version":"10"},{"buildId":"QQ3A.200605.002.A1","version":"10"},{"buildId":"QQ3A.200605.002","version":"10"},{"buildId":"QQ2A.200501.001.B2","version":"10"},{"buildId":"QQ2A.200501.001.A3","version":"10"},{"buildId":"QQ2A.200405.005","version":"10"},{"buildId":"QQ2A.200305.002","version":"10"},{"buildId":"QQ1A.200205.002","version":"10"},{"buildId":"QQ1A.200105.002","version":"10"},{"buildId":"QQ1A.191205.011","version":"10"},{"buildId":"QP1A.191105.003","version":"10"},{"buildId":"QP1A.191005.007","version":"10"},{"buildId":"QP1A.190711.020.C3","version":"10"},{"buildId":"QP1A.190711.020","version":"10"},{"buildId":"QP1A.190711.019","version":"10"},{"buildId":"PQ3B.190801.002","version":"9"},{"buildId":"PQ3B.190705.003","version":"9"},{"buildId":"PQ3B.190605.006","version":"9"},{"buildId":"PD2A.190115.032","version":"9"},{"buildId":"PD2A.190115.029","version":"9"}]},"Pixel 3 
XL":{"codename":"crosshatch","builds":[{"buildId":"RQ3A.210605.005","version":"11"},{"buildId":"RQ2A.210505.002","version":"11"},{"buildId":"RQ2A.210405.006","version":"11"},{"buildId":"RQ2A.210405.005","version":"11"},{"buildId":"RQ2A.210305.006","version":"11"},{"buildId":"RQ1D.210205.004","version":"11"},{"buildId":"RQ1A.210205.004","version":"11"},{"buildId":"RQ1D.210105.003","version":"11"},{"buildId":"RQ1A.210105.003","version":"11"},{"buildId":"RQ1A.201205.003.A1","version":"11"},{"buildId":"RQ1A.201205.003","version":"11"},{"buildId":"RP1A.201105.002","version":"11"},{"buildId":"RP1A.201005.004","version":"11"},{"buildId":"RP1A.200720.009","version":"11"},{"buildId":"QQ3A.200805.001","version":"10"},{"buildId":"QQ3A.200705.002","version":"10"},{"buildId":"QQ3A.200605.002.A1","version":"10"},{"buildId":"QQ3A.200605.001","version":"10"},{"buildId":"QQ2A.200501.001.B2","version":"10"},{"buildId":"QQ2A.200501.001.A3","version":"10"},{"buildId":"QQ2A.200405.005","version":"10"},{"buildId":"QQ2A.200305.002","version":"10"},{"buildId":"QQ1A.200205.002","version":"10"},{"buildId":"QQ1A.200105.003","version":"10"},{"buildId":"QQ1A.200105.002","version":"10"},{"buildId":"QQ1A.191205.008","version":"10"},{"buildId":"QP1A.191105.003","version":"10"},{"buildId":"QP1A.191005.007","version":"10"},{"buildId":"QP1A.190711.020.C3","version":"10"},{"buildId":"QP1A.190711.020","version":"10"},{"buildId":"QP1A.190711.019","version":"10"},{"buildId":"PQ3A.190801.002","version":"9"},{"buildId":"PQ3A.190705.003","version":"9"},{"buildId":"PQ3A.190605.004.A1","version":"9"},{"buildId":"PQ3A.190605.003","version":"9"},{"buildId":"PQ3A.190505.002","version":"9"},{"buildId":"PQ2A.190405.003","version":"9"},{"buildId":"PQ2A.190305.002","version":"9"},{"buildId":"PQ2A.190205.001","version":"9"},{"buildId":"PQ1A.190105.004","version":"9"},{"buildId":"PQ1A.181205.006.A1","version":"9"},{"buildId":"PQ1A.181205.006","version":"9"},{"buildId":"PQ1A.181105.017.A1","version":"9"},{"buildId":"PD1A.180720.031","version":"9"},{"buildId":"PD1A.180720.030","version":"9"}]},"Pixel 
4":{"codename":"flame","builds":[{"buildId":"RQ3A.210605.005","version":"11"},{"buildId":"RQ2A.210505.002","version":"11"},{"buildId":"RQ2A.210405.005","version":"11"},{"buildId":"RQ2A.210305.006","version":"11"},{"buildId":"RQ1A.210205.004","version":"11"},{"buildId":"RQ1A.210105.003","version":"11"},{"buildId":"RQ1A.201205.008.A1","version":"11"},{"buildId":"RQ1A.201205.008","version":"11"},{"buildId":"RP1A.201105.002","version":"11"},{"buildId":"RP1A.201005.004","version":"11"},{"buildId":"RP1A.200720.009","version":"11"},{"buildId":"QQ3A.200805.001","version":"10"},{"buildId":"QQ3A.200705.002","version":"10"},{"buildId":"QQ3A.200605.002.A1","version":"10"},{"buildId":"QQ3A.200605.001","version":"10"},{"buildId":"QQ2A.200501.001.B2","version":"10"},{"buildId":"QQ2A.200501.001.A3","version":"10"},{"buildId":"QQ2A.200405.005","version":"10"},{"buildId":"QQ2A.200305.004.A1","version":"10"},{"buildId":"QQ2A.200305.003","version":"10"},{"buildId":"QQ1D.200205.002","version":"10"},{"buildId":"QQ1C.200205.002","version":"10"},{"buildId":"QQ1B.200205.002","version":"10"},{"buildId":"QQ1D.200105.002","version":"10"},{"buildId":"QQ1C.200105.004","version":"10"},{"buildId":"QQ1B.200105.004","version":"10"},{"buildId":"QQ1C.191205.016.A1","version":"10"},{"buildId":"QQ1B.191205.012.A1","version":"10"},{"buildId":"QQ1B.191205.011","version":"10"},{"buildId":"QD1A.190821.014.C2","version":"10"},{"buildId":"QD1A.190821.014","version":"10"},{"buildId":"QD1A.190821.007.A3","version":"10"},{"buildId":"QD1A.190821.011.C4","version":"10"},{"buildId":"QD1A.190821.011","version":"10"},{"buildId":"QD1A.190821.007","version":"10"}]},"Pixel 4 XL":{"codename":"coral","builds":[{"buildId":"RQ3A.210605.005","version":"11"},{"buildId":"RQ2A.210505.002","version":"11"},{"buildId":"RQ2A.210405.005","version":"11"},{"buildId":"RQ2A.210305.006","version":"11"},{"buildId":"RQ1A.210205.004","version":"11"},{"buildId":"RQ1A.210105.003","version":"11"},{"buildId":"RQ1A.201205.008.A1","version":"11"},{"buildId":"RQ1A.201205.008","version":"11"},{"buildId":"RP1A.201105.002","version":"11"},{"buildId":"RP1A.201005.004","version":"11"},{"buildId":"RP1A.200720.009","version":"11"},{"buildId":"QQ3A.200805.001","version":"10"},{"buildId":"QQ3A.200705.002","version":"10"},{"buildId":"QQ3A.200605.002.A1","version":"10"},{"buildId":"QQ3A.200605.001","version":"10"},{"buildId":"QQ2A.200501.001.B2","version":"10"},{"buildId":"QQ2A.200501.001.A3","version":"10"},{"buildId":"QQ2A.200405.005","version":"10"},{"buildId":"QQ2A.200305.004.A1","version":"10"},{"buildId":"QQ2A.200305.003","version":"10"},{"buildId":"QQ1D.200205.002","version":"10"},{"buildId":"QQ1C.200205.002","version":"10"},{"buildId":"QQ1B.200205.002","version":"10"},{"buildId":"QQ1D.200105.002","version":"10"},{"buildId":"QQ1C.200105.004","version":"10"},{"buildId":"QQ1B.200105.004","version":"10"},{"buildId":"QQ1C.191205.016.A1","version":"10"},{"buildId":"QQ1B.191205.012.A1","version":"10"},{"buildId":"QQ1B.191205.011","version":"10"},{"buildId":"QD1A.190821.014.C2","version":"10"},{"buildId":"QD1A.190821.014","version":"10"},{"buildId":"QD1A.190821.007.A3","version":"10"},{"buildId":"QD1A.190821.011.C4","version":"10"},{"buildId":"QD1A.190821.011","version":"10"},{"buildId":"QD1A.190821.007","version":"10"}]},"Pixel 
4a":{"codename":"sunfish","builds":[{"buildId":"RQ3A.210605.005","version":"11"},{"buildId":"RQ2A.210505.002","version":"11"},{"buildId":"RQ2A.210405.005","version":"11"},{"buildId":"RQ2A.210305.007","version":"11"},{"buildId":"RQ2A.210305.006","version":"11"},{"buildId":"RQ1A.210205.004","version":"11"},{"buildId":"RQ1A.210105.002","version":"11"},{"buildId":"RQ1A.201205.008","version":"11"},{"buildId":"RP1A.201105.002","version":"11"},{"buildId":"RP1A.201005.006","version":"11"},{"buildId":"RP1A.200720.011","version":"11"},{"buildId":"RP1A.200720.010","version":"11"},{"buildId":"QD4A.200805.003","version":"10"},{"buildId":"QD4A.200805.001","version":"10"},{"buildId":"QD4A.200317.027","version":"10"},{"buildId":"QD4A.200317.024.A1","version":"10"}]},"Pixel 5":{"codename":"redfin","builds":[{"buildId":"RQ3A.210605.005","version":"11"},{"buildId":"RQ2A.210505.003","version":"11"},{"buildId":"RQ2A.210405.005","version":"11"},{"buildId":"RQ2A.210305.007","version":"11"},{"buildId":"RQ2A.210305.006","version":"11"},{"buildId":"RQ1D.210205.004","version":"11"},{"buildId":"RQ1C.210205.006","version":"11"},{"buildId":"RQ1A.210205.004","version":"11"},{"buildId":"RQ1D.210105.003","version":"11"},{"buildId":"RQ1A.210105.003","version":"11"},{"buildId":"RQ1D.201205.012.A1","version":"11"},{"buildId":"RQ1A.201205.011","version":"11"},{"buildId":"RQ1A.201205.010","version":"11"},{"buildId":"RD1B.201105.010","version":"11"},{"buildId":"RD1A.201105.003.C1","version":"11"},{"buildId":"RD1A.201105.003.B1","version":"11"},{"buildId":"RD1A.201105.003.A1","version":"11"},{"buildId":"RD1A.201105.003","version":"11"},{"buildId":"RD1A.200810.022.A4","version":"11"},{"buildId":"RD1A.200810.021.B3","version":"11"},{"buildId":"RD1A.200810.020.A1","version":"11"},{"buildId":"RD1A.200810.021.A1","version":"11"},{"buildId":"RD1A.200810.020","version":"11"}]},"Pixel 
2":{"codename":"walleye","builds":[{"buildId":"RP1A.201005.004.A1","version":"11"},{"buildId":"RP1A.201005.004","version":"11"},{"buildId":"RP1A.200720.009","version":"11"},{"buildId":"QQ3A.200805.001","version":"10"},{"buildId":"QQ3A.200705.002","version":"10"},{"buildId":"QQ3A.200605.002.A1","version":"10"},{"buildId":"QQ3A.200605.001","version":"10"},{"buildId":"QQ2A.200501.001.B3","version":"10"},{"buildId":"QQ2A.200501.001.A3","version":"10"},{"buildId":"QQ2A.200405.005","version":"10"},{"buildId":"QQ2A.200305.002","version":"10"},{"buildId":"QQ1A.200205.002","version":"10"},{"buildId":"QQ1A.200105.002","version":"10"},{"buildId":"QQ1A.191205.008","version":"10"},{"buildId":"QP1A.191105.004","version":"10"},{"buildId":"QP1A.191005.007.A1","version":"10"},{"buildId":"QP1A.190711.020","version":"10"},{"buildId":"QP1A.190711.019","version":"10"},{"buildId":"PQ3A.190801.002","version":"9"},{"buildId":"PQ3A.190705.001","version":"9"},{"buildId":"PQ3A.190605.003","version":"9"},{"buildId":"PQ3A.190505.001","version":"9"},{"buildId":"PQ2A.190405.003","version":"9"},{"buildId":"PQ2A.190305.002","version":"9"},{"buildId":"PQ2A.190205.002","version":"9"},{"buildId":"PQ1A.190105.004","version":"9"},{"buildId":"PQ1A.181205.002","version":"9"},{"buildId":"PQ1A.181105.017.A1","version":"9"},{"buildId":"PPR2.181005.003","version":"9"},{"buildId":"PPR2.180905.005","version":"9"},{"buildId":"PPR1.180610.011","version":"9"},{"buildId":"PPR1.180610.009","version":"9"},{"buildId":"OPM4.171019.021.Q1","version":"8.1.0"},{"buildId":"OPM2.171026.006.G1","version":"8.1.0"},{"buildId":"OPM4.171019.021.E1","version":"8.1.0"},{"buildId":"OPM2.171026.006.C1","version":"8.1.0"},{"buildId":"OPM4.171019.016.B1","version":"8.1.0"},{"buildId":"OPM2.171019.029.B1","version":"8.1.0"},{"buildId":"OPM4.171019.015.A1","version":"8.1.0"},{"buildId":"OPM2.171019.029","version":"8.1.0"},{"buildId":"OPM1.171019.021","version":"8.1.0"},{"buildId":"OPM1.171019.019","version":"8.1.0"},{"buildId":"OPM2.171019.016","version":"8.1.0"},{"buildId":"OPM1.171019.014","version":"8.1.0"},{"buildId":"OPM1.171019.013","version":"8.1.0"},{"buildId":"OPM2.171019.012","version":"8.1.0"},{"buildId":"OPM1.171019.011","version":"8.1.0"},{"buildId":"OPD3.170816.023","version":"8.1.0"},{"buildId":"OPD1.170816.025","version":"8.1.0"},{"buildId":"OPD3.170816.016","version":"8.1.0"},{"buildId":"OPD2.170816.015","version":"8.1.0"},{"buildId":"OPD1.170816.018","version":"8.1.0"},{"buildId":"OPD3.170816.012","version":"8.1.0"},{"buildId":"OPD1.170816.012","version":"8.1.0"},{"buildId":"OPD1.170816.011","version":"8.1.0"},{"buildId":"OPD1.170816.010","version":"8.1.0"}]},"Pixel 2 
XL":{"codename":"taimen","builds":[{"buildId":"RP1A.201005.004.A1","version":"11"},{"buildId":"RP1A.201005.004","version":"11"},{"buildId":"RP1A.200720.009","version":"11"},{"buildId":"QQ3A.200805.001","version":"10"},{"buildId":"QQ3A.200705.002","version":"10"},{"buildId":"QQ3A.200605.002.A1","version":"10"},{"buildId":"QQ3A.200605.001","version":"10"},{"buildId":"QQ2A.200501.001.B3","version":"10"},{"buildId":"QQ2A.200501.001.A3","version":"10"},{"buildId":"QQ2A.200405.005","version":"10"},{"buildId":"QQ2A.200305.002","version":"10"},{"buildId":"QQ1A.200205.002","version":"10"},{"buildId":"QQ1A.200105.002","version":"10"},{"buildId":"QQ1A.191205.008","version":"10"},{"buildId":"QP1A.191105.004","version":"10"},{"buildId":"QP1A.191005.007.A1","version":"10"},{"buildId":"QP1A.190711.020","version":"10"},{"buildId":"QP1A.190711.019","version":"10"},{"buildId":"PQ3A.190801.002","version":"9"},{"buildId":"PQ3A.190705.001","version":"9"},{"buildId":"PQ3A.190605.003","version":"9"},{"buildId":"PQ3A.190505.001","version":"9"},{"buildId":"PQ2A.190405.003","version":"9"},{"buildId":"PQ2A.190305.002","version":"9"},{"buildId":"PQ2A.190205.002","version":"9"},{"buildId":"PQ1A.190105.004","version":"9"},{"buildId":"PQ1A.181205.002","version":"9"},{"buildId":"PQ1A.181105.017.A1","version":"9"},{"buildId":"PPR2.181005.003","version":"9"},{"buildId":"PPR2.180905.005","version":"9"},{"buildId":"PPR1.180610.011","version":"9"},{"buildId":"PPR1.180610.009","version":"9"},{"buildId":"OPM4.171019.021.R1","version":"8.1.0"},{"buildId":"OPM2.171026.006.H1","version":"8.1.0"},{"buildId":"OPM4.171019.021.E1","version":"8.1.0"},{"buildId":"OPM2.171026.006.C1","version":"8.1.0"},{"buildId":"OPM4.171019.016.B1","version":"8.1.0"},{"buildId":"OPM2.171019.029.B1","version":"8.1.0"},{"buildId":"OPM4.171019.015.A1","version":"8.1.0"},{"buildId":"OPM2.171019.029","version":"8.1.0"},{"buildId":"OPM1.171019.021","version":"8.1.0"},{"buildId":"OPM1.171019.018","version":"8.1.0"},{"buildId":"OPM1.171019.014","version":"8.1.0"},{"buildId":"OPM1.171019.013","version":"8.1.0"},{"buildId":"OPM2.171019.012","version":"8.1.0"},{"buildId":"OPM1.171019.011","version":"8.1.0"},{"buildId":"OPD3.170816.023","version":"8.1.0"},{"buildId":"OPD1.170816.025","version":"8.1.0"},{"buildId":"OPD3.170816.012","version":"8.1.0"},{"buildId":"OPD1.170816.012","version":"8.1.0"},{"buildId":"OPD1.170816.011","version":"8.1.0"},{"buildId":"OPD1.170816.010","version":"8.1.0"}]},"Pixel 
XL":{"codename":"marlin","builds":[{"buildId":"QP1A.191005.007.A3","version":"10"},{"buildId":"QP1A.191005.007.A1","version":"10"},{"buildId":"QP1A.190711.020","version":"10"},{"buildId":"QP1A.190711.019","version":"10"},{"buildId":"PQ3A.190801.002","version":"9"},{"buildId":"PQ3A.190705.001","version":"9"},{"buildId":"PQ3A.190605.003","version":"9"},{"buildId":"PQ3A.190505.001","version":"9"},{"buildId":"PQ2A.190405.003","version":"9"},{"buildId":"PQ2A.190305.002","version":"9"},{"buildId":"PQ2A.190205.003","version":"9"},{"buildId":"PQ1A.190105.004","version":"9"},{"buildId":"PQ1A.181205.002.A1","version":"9"},{"buildId":"PPR2.181005.003.A1","version":"9"},{"buildId":"PPR1.181005.003.A1","version":"9"},{"buildId":"PPR2.181005.003","version":"9"},{"buildId":"PPR1.181005.003","version":"9"},{"buildId":"PPR2.180905.006.A1","version":"9"},{"buildId":"PPR2.180905.006","version":"9"},{"buildId":"PPR1.180905.003","version":"9"},{"buildId":"PPR1.180610.010","version":"9"},{"buildId":"PPR1.180610.009","version":"9"},{"buildId":"OPM4.171019.021.P1","version":"8.1.0"},{"buildId":"OPM4.171019.021.D1","version":"8.1.0"},{"buildId":"OPM4.171019.016.B1","version":"8.1.0"},{"buildId":"OPM2.171019.029","version":"8.1.0"},{"buildId":"OPM1.171019.021","version":"8.1.0"},{"buildId":"OPM1.171019.016","version":"8.1.0"},{"buildId":"OPM1.171019.014","version":"8.1.0"},{"buildId":"OPM1.171019.012","version":"8.1.0"},{"buildId":"OPM1.171019.011","version":"8.1.0"},{"buildId":"OPR3.170623.013","version":"8.1.0"},{"buildId":"OPR1.170623.032","version":"8.1.0"},{"buildId":"OPR3.170623.008","version":"8.1.0"},{"buildId":"OPR1.170623.027","version":"8.1.0"},{"buildId":"OPR3.170623.007","version":"8.1.0"},{"buildId":"OPR1.170623.026","version":"8.1.0"},{"buildId":"OPR6.170623.012","version":"8.1.0"},{"buildId":"OPR6.170623.011","version":"8.1.0"},{"buildId":"NZH54D","version":"7.1"},{"buildId":"NKG47S","version":"7.1"},{"buildId":"NHG47Q","version":"7.1"},{"buildId":"NJH47F","version":"7.1"},{"buildId":"NZH54B","version":"7.1"},{"buildId":"NKG47M","version":"7.1"},{"buildId":"NJH47D","version":"7.1"},{"buildId":"NHG47O","version":"7.1"},{"buildId":"NJH47B","version":"7.1"},{"buildId":"NJH34C","version":"7.1"},{"buildId":"NKG47L","version":"7.1"},{"buildId":"NHG47N","version":"7.1"},{"buildId":"NHG47L","version":"7.1"},{"buildId":"N2G47T","version":"7.1"},{"buildId":"N2G47O","version":"7.1"},{"buildId":"NHG47K","version":"7.1"},{"buildId":"N2G47J","version":"7.1"},{"buildId":"N2G47E","version":"7.1"},{"buildId":"NOF27D","version":"7.1"},{"buildId":"NOF27C","version":"7.1"},{"buildId":"NOF27B","version":"7.1"},{"buildId":"NOF26W","version":"7.1"},{"buildId":"NOF26V","version":"7.1"},{"buildId":"NMF26V","version":"7.1"},{"buildId":"NMF26U","version":"7.1"},{"buildId":"NMF26Q","version":"7.1"},{"buildId":"NMF26O","version":"7.1"},{"buildId":"NDE63X","version":"7.1"},{"buildId":"NDE63V","version":"7.1"},{"buildId":"NDE63U","version":"7.1"},{"buildId":"NDE63P","version":"7.1"},{"buildId":"NDE63L","version":"7.1"},{"buildId":"NDE63H","version":"7.1"}]},"Pixel":{"codename":"sailfish","builds":[{"buildId":"QP1A.191005.007.A3","version":"10"},{"buildId":"QP1A.191005.007.A1","version":"10"},{"buildId":"QP1A.190711.020","version":"10"},{"buildId":"QP1A.190711.019","version":"10"},{"buildId":"PQ3A.190801.002","version":"9"},{"buildId":"PQ3A.190705.001","version":"9"},{"buildId":"PQ3A.190605.003","version":"9"},{"buildId":"PQ3A.190505.001","version":"9"},{"buildId":"PQ2A.190405.003","version":"9"},{"buildId":"PQ2A.190305.002","versi
on":"9"},{"buildId":"PQ2A.190205.003","version":"9"},{"buildId":"PQ1A.190105.004","version":"9"},{"buildId":"PQ1A.181205.002.A1","version":"9"},{"buildId":"PPR2.181005.003.A1","version":"9"},{"buildId":"PPR1.181005.003.A1","version":"9"},{"buildId":"PPR2.181005.003","version":"9"},{"buildId":"PPR1.181005.003","version":"9"},{"buildId":"PPR2.180905.006.A1","version":"9"},{"buildId":"PPR2.180905.006","version":"9"},{"buildId":"PPR1.180905.003","version":"9"},{"buildId":"PPR1.180610.010","version":"9"},{"buildId":"PPR1.180610.009","version":"9"},{"buildId":"OPM4.171019.021.P1","version":"8.1.0"},{"buildId":"OPM4.171019.021.D1","version":"8.1.0"},{"buildId":"OPM4.171019.016.B1","version":"8.1.0"},{"buildId":"OPM2.171019.029","version":"8.1.0"},{"buildId":"OPM1.171019.021","version":"8.1.0"},{"buildId":"OPM1.171019.016","version":"8.1.0"},{"buildId":"OPM1.171019.014","version":"8.1.0"},{"buildId":"OPM1.171019.012","version":"8.1.0"},{"buildId":"OPM1.171019.011","version":"8.1.0"},{"buildId":"OPR3.170623.013","version":"8.1.0"},{"buildId":"OPR1.170623.032","version":"8.1.0"},{"buildId":"OPR3.170623.008","version":"8.1.0"},{"buildId":"OPR1.170623.027","version":"8.1.0"},{"buildId":"OPR3.170623.007","version":"8.1.0"},{"buildId":"OPR1.170623.026","version":"8.1.0"},{"buildId":"OPR6.170623.012","version":"8.1.0"},{"buildId":"OPR6.170623.011","version":"8.1.0"},{"buildId":"NZH54D","version":"7.1"},{"buildId":"NKG47S","version":"7.1"},{"buildId":"NHG47Q","version":"7.1"},{"buildId":"NJH47F","version":"7.1"},{"buildId":"NZH54B","version":"7.1"},{"buildId":"NKG47M","version":"7.1"},{"buildId":"NJH47D","version":"7.1"},{"buildId":"NHG47O","version":"7.1"},{"buildId":"NJH47B","version":"7.1"},{"buildId":"NJH34C","version":"7.1"},{"buildId":"NKG47L","version":"7.1"},{"buildId":"NHG47N","version":"7.1"},{"buildId":"NHG47L","version":"7.1"},{"buildId":"N2G47T","version":"7.1"},{"buildId":"N2G47O","version":"7.1"},{"buildId":"NHG47K","version":"7.1"},{"buildId":"N2G47J","version":"7.1"},{"buildId":"N2G47E","version":"7.1"},{"buildId":"NOF27D","version":"7.1"},{"buildId":"NOF27C","version":"7.1"},{"buildId":"NOF27B","version":"7.1"},{"buildId":"NOF26W","version":"7.1"},{"buildId":"NOF26V","version":"7.1"},{"buildId":"NMF26V","version":"7.1"},{"buildId":"NMF26U","version":"7.1"},{"buildId":"NMF26Q","version":"7.1"},{"buildId":"NMF26O","version":"7.1"},{"buildId":"NDE63X","version":"7.1"},{"buildId":"NDE63V","version":"7.1"},{"buildId":"NDE63U","version":"7.1"},{"buildId":"NDE63P","version":"7.1"},{"buildId":"NDE63L","version":"7.1"},{"buildId":"NDE63H","version":"7.1"}]}}'
class AndroidBuilds:
    def __init__(self):
        self.builds = None

    def get_builds(self):
        # Parse the JSON catalog lazily on first access and cache the result.
        if self.builds is None:
            self.builds = json.loads(ANDROID_BUILDS_JSON)
        return self.builds
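
# A minimal usage sketch (an assumption, not part of the original module; it
# also assumes `json` is imported earlier in this file). The catalog is parsed
# on the first get_builds() call and cached, so later lookups skip re-parsing.
if __name__ == '__main__':
    catalog = AndroidBuilds().get_builds()
    print(catalog['Pixel 5']['codename'])    # -> redfin
    print(catalog['Pixel 5']['builds'][0])   # newest entry: buildId + version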

# --- stdlib_tests/test_date.py (repo: nguyenluc99/MaintenanceProgramming, license: BSD-3-Clause) ---
from datetime import date
d = date.max
e = date.min
print(d == e)
print(e == d)
print(d == d)
print(e == e)
print(e != d)
print(date(2020, 9, 30))
print(date(2020, 9, 30).__str__())
print(date(2020, 9, 30).ctime())
print(date(2020, 9, 30).weekday())
print(date(2020, 9, 30).year)
print(date(2020, 9, 30).month)
print(date(2020, 9, 30).day)
print(date(9999, 12, day=31))
print(date(9999, 12, day=31).__str__())
print(date(9999, 12, day=31).ctime())
print(date(9999, 12, day=31).weekday())
print(date(9999, 12, day=31).year)
print(date(9999, 12, day=31).month)
print(date(9999, 12, day=31).day)
print(date(1999, month=12, day=31))
print(date(1999, month=12, day=31).__str__())
print(date(1999, month=12, day=31).ctime())
print(date(1999, month=12, day=31).weekday())
print(date(1999, month=12, day=31).year)
print(date(1999, month=12, day=31).month)
print(date(1999, month=12, day=31).day)
print(date(True, True, True))
print(date(True, True, True).__str__())
print(date(True, True, True).ctime())
print(date(True, True, True).weekday())
print(date(True, True, True).year)
print(date(True, True, True).month)
print(date(True, True, True).day)
print(date(year=2400, month=2, day=29))
print(date(year=2400, month=2, day=29).__str__())
print(date(year=2400, month=2, day=29).ctime())
print(date(year=2400, month=2, day=29).weekday())
print(date(year=2400, month=2, day=29).year)
print(date(year=2400, month=2, day=29).month)
print(date(year=2400, month=2, day=29).day)
d = date.min
print(d)
print(d.__str__())
print(d.ctime())
print(d.weekday())
print(d.year)
print(d.month)
print(d.day)
d = date.max
print(d)
print(d.__str__())
print(d.ctime())
print(d.weekday())
print(d.year)
print(d.month)
print(d.day)
print(date.today())
print(date.today().__str__())
print(date.today().ctime())
print(date.today().weekday())
print(date.today().year)
print(date.today().month)
print(date.today().day)
print(date.fromisoformat("2020-09-30"))
print(date.fromisoformat("2020-09-30").__str__())
print(date.fromisoformat("2020-09-30").ctime())
print(date.fromisoformat("2020-09-30").weekday())
print(date.fromisoformat("2020-09-30").year)
print(date.fromisoformat("2020-09-30").month)
print(date.fromisoformat("2020-09-30").day)
d = date.today()
print(d)
print(d.replace(9, month=5))
print(d.year)
print(d.month)
print(d.day)
print(d.weekday())
print(d.ctime())
print(d.__str__())
print(d.replace(month=1))
print(d.year)
print(d.month)
print(d.day)
print(d.weekday())
print(d.ctime())
print(d.__str__())
print(d.replace(year=31))
print(d.year)
print(d.month)
print(d.day)
print(d.weekday())
print(d.ctime())
print(d.__str__())
print(d.replace(day=12))
print(d.year)
print(d.month)
print(d.day)
print(d.weekday())
print(d.ctime())
print(d.__str__())
print(d.replace(8999, day=12))
print(d.year)
print(d.month)
print(d.day)
print(d.weekday())
print(d.ctime())
print(d.__str__())
print(d.replace(1700, 5, day=15))
print(d.year)
print(d.month)
print(d.day)
print(d.weekday())
print(d.ctime())
print(d.__str__())
print(d.replace(1066, month=7, day=28))
print(d.year)
print(d.month)
print(d.day)
print(d.weekday())
print(d.ctime())
print(d.__str__())
print(d.replace(year=1, month=2, day=3))
print(d.year)
print(d.month)
print(d.day)
print(d.weekday())
print(d.ctime())
print(d.__str__())
print(d.replace(year=4646, day=20))
print(d.year)
print(d.month)
print(d.day)
print(d.weekday())
print(d.ctime())
print(d.__str__())
print(d.replace(month=3, day=7))
print(d.year)
print(d.month)
print(d.day)
print(d.weekday())
print(d.ctime())
print(d.__str__())
print(d.replace(month=3, year=9696))
print(d.year)
print(d.month)
print(d.day)
print(d.weekday())
print(d.ctime())
print(d.__str__())
print(d.replace())
print(d.year)
print(d.month)
print(d.day)
print(d.weekday())
print(d.ctime())
print(d.__str__())
print(date.fromisoformat("2020-10-01"))
d = date.max
print(date.fromisoformat(d.__str__()))
try:
    print(date(10000, month=12, day=31))
except Exception as e:
    print(e)
try:
    print(date(2020, 9, 31))
except Exception as e:
    print(e)
try:
    print(date(2020, 2, 30))
except Exception as e:
    print(e)
try:
    print(date(2019, 2, 29))
except Exception as e:
    print(e)
try:
    print(date(2400, 2, 30))
except Exception as e:
    print(e)
try:
    print(date(2100, 2, 30))
except Exception as e:
    print(e)
try:
    print(date(9999, 13, 31))
except Exception as e:
    print(e)
try:
    print(date(0, 12, 31))
except Exception as e:
    print(e)
try:
    print(date(9999, 0, 31))
except Exception as e:
    print(e)
try:
    print(date(9999, 12, 0))
except Exception as e:
    print(e)
try:
    print(date(1000, year=12, day=31))
except Exception as e:
    print(e)
try:
    print(date(10, 10, year=12))
except Exception as e:
    print(e)
try:
    print(date(1000, 10, month=12))
except Exception as e:
    print(e)
try:
    print(date())
except Exception as e:
    print(e)
try:
    print(date(1, 2, 3, 4))
except Exception as e:
    print(e)
try:
    print(date(1, 1, None))
except Exception as e:
    print(e)
try:
    print(date(9999, "str", 31))
except Exception as e:
    print(e)
try:
    print(date(2020.0, 9, 30))
except Exception as e:
    print(e)
try:
    print(date(1000, 10, False))
except Exception as e:
    print(e)
try:
    print(date(1, 1))
except Exception as e:
    print(e)
try:
    print(date(1, day=31))
except Exception as e:
    print(e)
try:
    print(date(1, month=12))
except Exception as e:
    print(e)
try:
    print(date(1, year=9999))
except Exception as e:
    print(e)
try:
    print(date(year=1, month=12))
except Exception as e:
    print(e)
try:
    print(date(year=9999, day=22))
except Exception as e:
    print(e)
try:
    print(date(year=99999, day=22))
except Exception as e:
    print(e)
try:
    print(date(year=1, month=12.0))
except Exception as e:
    print(e)
try:
    print(date(year=9999, day=None))
except Exception as e:
    print(e)
try:
    print(date(year=1, day="str"))
except Exception as e:
    print(e)
try:
    print(date(year=1.0, month=12))
except Exception as e:
    print(e)
try:
    print(date(year=None, day=1))
except Exception as e:
    print(e)
try:
    print(date(year="str", day=2))
except Exception as e:
    print(e)
try:
    print(date(1, 40))
except Exception as e:
    print(e)
try:
    print(date(True, 2))
except Exception as e:
    print(e)
try:
    print(date(True, day=4))
except Exception as e:
    print(e)
try:
    print(date(False, day=4))
except Exception as e:
    print(e)
try:
    print(date(False, None))
except Exception as e:
    print(e)
try:
    print(date(None, False))
except Exception as e:
    print(e)
try:
    print(date(9.0, year=1))
except Exception as e:
    print(e)
try:
    print(date(None, year=1))
except Exception as e:
    print(e)
try:
    print(date("str", year=1))
except Exception as e:
    print(e)
try:
    print(date("str", month=1))
except Exception as e:
    print(e)
try:
    print(date(1.0, month=1))
except Exception as e:
    print(e)
try:
    print(date(None, month=1))
except Exception as e:
    print(e)
try:
    print(date(0, 99999999999))
except Exception as e:
    print(e)
try:
    print(date(9, day=32))
except Exception as e:
    print(e)
try:
    print(date(month=13, year=99))
except Exception as e:
    print(e)
try:
    print(date(1))
except Exception as e:
    print(e)
try:
    print(date(day=1))
except Exception as e:
    print(e)
try:
    print(date(month=1))
except Exception as e:
    print(e)
try:
    print(date(year=1))
except Exception as e:
    print(e)
try:
    print(date(0))
except Exception as e:
    print(e)
try:
    print(date(day=0))
except Exception as e:
    print(e)
try:
    print(date(month=0))
except Exception as e:
    print(e)
try:
    print(date(year=0))
except Exception as e:
    print(e)
try:
    print(date(9999999))
except Exception as e:
    print(e)
try:
    print(date(day=32))
except Exception as e:
    print(e)
try:
    print(date(month=13))
except Exception as e:
    print(e)
try:
    print(date(year=10000))
except Exception as e:
    print(e)
try:
    print(date(True))
except Exception as e:
    print(e)
try:
    print(date(day=False))
except Exception as e:
    print(e)
try:
    print(date(month=4.0))
except Exception as e:
    print(e)
try:
    print(date(year=None))
except Exception as e:
    print(e)
try:
    print(date("str"))
except Exception as e:
    print(e)
try:
    print(date(year="str"))
except Exception as e:
    print(e)
try:
    print(date.fromisoformat("10000-10-01"))
except Exception as e:
    print(e)
try:
    print(date.fromisoformat("1000-10-01 "))
except Exception as e:
    print(e)
try:
    print(date.fromisoformat("1000,10,01"))
except Exception as e:
    print(e)
try:
    print(date.fromisoformat("1000, 10, 01"))
except Exception as e:
    print(e)
try:
    print(date.fromisoformat("any string"))
except Exception as e:
    print(e)
try:
    print(date.fromisoformat(None))
except Exception as e:
    print(e)
try:
    print(date.fromisoformat(2020, 10, 10))
except Exception as e:
    print(e)
try:
    print(date.fromisoformat(2020.0))
except Exception as e:
    print(e)
try:
    print(date.fromisoformat(True))
except Exception as e:
    print(e)
try:
    print(date.fromisoformat(False))
except Exception as e:
    print(e)
try:
    print(date.fromisoformat("0000-00-00"))
except Exception as e:
    print(e)
try:
    print(date.fromisoformat("0001-13-01"))
except Exception as e:
    print(e)
try:
    print(date.fromisoformat("0001-01-32"))
except Exception as e:
    print(e)
d = date.today()
try:
    print(date.replace(1, 2, 3, 4))
except Exception as e:
    print(e)
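
# The blocks above all repeat one try/print pattern; a small helper
# (hypothetical, not part of the original test) expresses the same check:
def print_or_exception(fn, *args, **kwargs):
    """Print fn(*args, **kwargs) if it succeeds, else print the exception."""
    try:
        print(fn(*args, **kwargs))
    except Exception as exc:
        print(exc)

# e.g. print_or_exception(date, 10000, month=12, day=31)  # year out of range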

# --- utils_methods.py (repo: venkatesh-saligrama/Personalized-Federated-Learning, license: MIT) ---
from utils_libs import *
from utils_dataset import *
from utils_models import *
from utils_general import *
# fast_exec disables training statistics
### Methods
def train_FedAvg(data_obj, act_prob, learning_rate, batch_size, K, com_amount, print_per, weight_decay, lr_decay,
                 model_func, init_model, save_period, meta_learning_rate_list=False,
                 num_grad_step_list=False, do_proto=False, do_plain=False,
                 rand_seed=0, save_models=False, fast_exec=False):
    suffix = 'FedAvg_S%d_F%f_Lr%f_B%d_K%d_W%f_lrdecay%f_seed%d' %(save_period, act_prob, learning_rate, batch_size, K, weight_decay, lr_decay, rand_seed)
    if meta_learning_rate_list != False:
        l1_str = [str(elem) for elem in meta_learning_rate_list]
        suffix += '_MetaLr_[' + ', '.join(l1_str) + ']'
        l2_str = [str(elem) for elem in num_grad_step_list]
        suffix += '_GS_[' + ', '.join(l2_str) + ']'
    if do_proto:
        suffix += '_Proto'
    if do_plain:
        suffix += '_Plain'

    n_clnt = data_obj.n_client
    clnt_x = data_obj.clnt_x; clnt_y = data_obj.clnt_y
    cent_x = np.concatenate(clnt_x, axis=0)
    cent_y = np.concatenate(clnt_y, axis=0)

    if not os.path.exists('Model/%s/%s' %(data_obj.name, suffix)):
        os.mkdir('Model/%s/%s' %(data_obj.name, suffix))
    n_save_instances = int(com_amount / save_period)
    fed_mdls_sel = list(range(n_save_instances)); fed_mdls_all = list(range(n_save_instances))

    metaLr_numGrad = []
    if meta_learning_rate_list != False:
        for meta_learning_rate in meta_learning_rate_list:
            for num_grad_step in num_grad_step_list:
                metaLr_numGrad.append([meta_learning_rate, num_grad_step])
    n_cases = len(metaLr_numGrad)
    n_cases = n_cases + 1 if do_proto else n_cases
    n_cases = n_cases + 1 if do_plain else n_cases

    trn_perf_sel = np.zeros((n_cases, com_amount, 4))
    trn_perf_all = np.zeros((n_cases, com_amount, 4))
    tst_perf_sel = np.zeros((n_cases, com_amount, 4))
    tst_perf_all = np.zeros((n_cases, com_amount, 5))

    n_par = len(get_mdl_params([model_func()])[0])
    init_par_list = get_mdl_params([init_model], n_par)[0].cpu().numpy()
    clnt_params_list = np.ones(n_clnt).astype('float32').reshape(-1, 1) * init_par_list.reshape(1, -1) # n_clnt X n_par

    saved_itr = -1
    # Check if there are past saved iterates
    for i in range(com_amount):
        if os.path.exists('Model/%s/%s/%dcom_sel.pt' %(data_obj.name, suffix, i+1)):
            saved_itr = i
            if save_models:
                ###
                fed_model = model_func()
                fed_model.load_state_dict(torch.load('Model/%s/%s/%dcom_sel.pt' %(data_obj.name, suffix, i+1)))
                fed_model.eval()
                fed_model = fed_model.to(device)
                # Freeze model
                for params in fed_model.parameters():
                    params.requires_grad = False
                fed_mdls_sel[saved_itr//save_period] = fed_model
                ###
                fed_model = model_func()
                fed_model.load_state_dict(torch.load('Model/%s/%s/%dcom_all.pt' %(data_obj.name, suffix, i+1)))
                fed_model.eval()
                fed_model = fed_model.to(device)
                # Freeze model
                for params in fed_model.parameters():
                    params.requires_grad = False
                fed_mdls_all[saved_itr//save_period] = fed_model
            if os.path.exists('Model/%s/%s/%dcom_trn_perf_sel.npy' %(data_obj.name, suffix, (i+1))):
                trn_perf_sel[:,:i+1] = np.load('Model/%s/%s/%dcom_trn_perf_sel.npy' %(data_obj.name, suffix, (i+1)))
                trn_perf_all[:,:i+1] = np.load('Model/%s/%s/%dcom_trn_perf_all.npy' %(data_obj.name, suffix, (i+1)))
                tst_perf_sel[:,:i+1] = np.load('Model/%s/%s/%dcom_tst_perf_sel.npy' %(data_obj.name, suffix, (i+1)))
                tst_perf_all[:,:i+1] = np.load('Model/%s/%s/%dcom_tst_perf_all.npy' %(data_obj.name, suffix, (i+1)))
                if save_models:
                    clnt_params_list = np.load('Model/%s/%s/%d_clnt_params_list.npy' %(data_obj.name, suffix, i+1))

    if not os.path.exists('Model/%s/%s/%dcom_tst_perf_all.npy' %(data_obj.name, suffix, com_amount)):
        avg_model = model_func().to(device)
        if saved_itr == -1:
            avg_model.load_state_dict(copy.deepcopy(dict(init_model.named_parameters())))
        else:
            # Load the most recent saved model
            avg_model.load_state_dict(torch.load('Model/%s/%s/%dcom_sel.pt'%(data_obj.name, suffix, saved_itr+1)))

        for i in range(saved_itr+1, com_amount):
            ### Fix randomness
            np.random.seed(i + rand_seed)
            clnt_list = np.arange(n_clnt)
            np.random.shuffle(clnt_list)
            selected_clnts = clnt_list[:int(act_prob*n_clnt)]
            print('Selected Clients: %s' %(', '.join(['%2d' %item for item in selected_clnts])))

            for clnt in selected_clnts:
                print('---- Training client %d' %clnt)
                trn_x = clnt_x[clnt]; trn_y = clnt_y[clnt]; tst_x = False; tst_y = False
                cur_model = model_func().to(device)
                cur_model.load_state_dict(copy.deepcopy(dict(avg_model.named_parameters())))
                for params in cur_model.parameters():
                    params.requires_grad = True
                cur_model = train_model(cur_model, trn_x, trn_y, tst_x, tst_y, learning_rate * (lr_decay ** i),
                                        batch_size, K, print_per, weight_decay, data_obj.dataset)
                is_diverged = is_model_NaN(cur_model)
                if is_diverged:
                    # If the model has NaN parameters, keep the average model instead and count the failure
                    clnt_params_list[clnt] = get_mdl_params([avg_model], n_par)[0].cpu().numpy()
                    tst_perf_all[0][i][-1] += 1
                else:
                    clnt_params_list[clnt] = get_mdl_params([cur_model], n_par)[0].cpu().numpy()

            # Scale with weights
            avg_selected = np.mean(clnt_params_list[selected_clnts], axis=0)
            avg_model = set_client_from_params(model_func().to(device),
                                               torch.tensor(avg_selected, dtype=torch.float32).to(device))
            avg_all = np.mean(clnt_params_list, axis=0)
            all_model = set_client_from_params(model_func().to(device), torch.tensor(avg_all, dtype=torch.float32).to(device))

            for idx_, [meta_learning_rate, num_grad_step] in enumerate(metaLr_numGrad):
                [list_1, list_2, list_3, list_4] = get_all_results_maml(meta_learning_rate,
                    num_grad_step, data_obj.clnt_x, data_obj.clnt_y, data_obj.tst_x,
                    data_obj.tst_y, data_obj.dataset, model_func, avg_model, all_model, fast_exec, i)
                tst_perf_sel[idx_,i,:] = list_1; tst_perf_all[idx_,i,:len(list_2)] = list_2
                trn_perf_sel[idx_,i,:] = list_3; trn_perf_all[idx_,i,:] = list_4

            # Proto results occupy the row right after the MAML configurations
            offset_ = len(metaLr_numGrad)
            if do_proto:
                [list_1, list_2, list_3, list_4] = get_all_results_proto(
                    data_obj.clnt_x, data_obj.clnt_y, data_obj.tst_x, data_obj.tst_y, data_obj.dataset,
                    model_func, avg_model, all_model, fast_exec, i)
                tst_perf_sel[offset_,i,:] = list_1; tst_perf_all[offset_,i,:len(list_2)] = list_2
                trn_perf_sel[offset_,i,:] = list_3; trn_perf_all[offset_,i,:] = list_4

            # Plain results occupy the next row (shifted by one only if proto ran)
            offset_ = len(metaLr_numGrad) + (1 if do_proto else 0)
            if do_plain:
                [list_1, list_2, list_3, list_4] = get_all_results_plain(data_obj.clnt_x,
                    data_obj.clnt_y, data_obj.tst_x, data_obj.tst_y, data_obj.dataset,
                    avg_model, all_model, fast_exec, i)
                tst_perf_sel[offset_,i,:] = list_1; tst_perf_all[offset_,i,:len(list_2)] = list_2
                trn_perf_sel[offset_,i,:] = list_3; trn_perf_all[offset_,i,:] = list_4

            # Freeze model
            for params in avg_model.parameters():
                params.requires_grad = False

            if (i+1) % save_period == 0:
                if save_models:
                    torch.save(avg_model.state_dict(), 'Model/%s/%s/%dcom_sel.pt' %(data_obj.name, suffix, (i+1)))
                    torch.save(all_model.state_dict(), 'Model/%s/%s/%dcom_all.pt' %(data_obj.name, suffix, (i+1)))
                    np.save('Model/%s/%s/%d_clnt_params_list.npy' %(data_obj.name, suffix, (i+1)), clnt_params_list)
                np.save('Model/%s/%s/%dcom_trn_perf_sel.npy'%(data_obj.name, suffix, (i+1)), trn_perf_sel[:,:i+1])
                np.save('Model/%s/%s/%dcom_tst_perf_sel.npy'%(data_obj.name, suffix, (i+1)), tst_perf_sel[:,:i+1])
                np.save('Model/%s/%s/%dcom_trn_perf_all.npy'%(data_obj.name, suffix, (i+1)), trn_perf_all[:,:i+1])
                np.save('Model/%s/%s/%dcom_tst_perf_all.npy'%(data_obj.name, suffix, (i+1)), tst_perf_all[:,:i+1])
                if (i+1) > save_period:
                    if os.path.exists('Model/%s/%s/%dcom_trn_perf_sel.npy' %(data_obj.name, suffix, i+1-save_period)):
                        # Delete the previously saved arrays
                        os.remove('Model/%s/%s/%dcom_trn_perf_sel.npy' %(data_obj.name, suffix, i+1-save_period))
                        os.remove('Model/%s/%s/%dcom_tst_perf_sel.npy' %(data_obj.name, suffix, i+1-save_period))
                        os.remove('Model/%s/%s/%dcom_trn_perf_all.npy' %(data_obj.name, suffix, i+1-save_period))
                        os.remove('Model/%s/%s/%dcom_tst_perf_all.npy' %(data_obj.name, suffix, i+1-save_period))
                        if save_models:
                            os.remove('Model/%s/%s/%d_clnt_params_list.npy' %(data_obj.name, suffix, i+1-save_period))

            if (i+1) % save_period == 0:
                fed_mdls_sel[i//save_period] = avg_model
                fed_mdls_all[i//save_period] = all_model

            # If all clients diverged in int(1/act_prob) consecutive rounds, stop execution.
            failure_arr = tst_perf_all[0,:,-1]
            total_fails = failure_arr[np.max([0, i-int(1/act_prob)]):i].sum()
            print('Total failures in this round: %d' %tst_perf_all[0, i, -1])
            if total_fails == int(act_prob*n_clnt)*int(1/act_prob):
                break

    return fed_mdls_sel, trn_perf_sel, tst_perf_sel, fed_mdls_all, trn_perf_all, tst_perf_all
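
# A call sketch under assumed values (hypothetical hyperparameters; data_obj,
# model_func and init_model come from utils_dataset / utils_models):
#
#   outs = train_FedAvg(data_obj, act_prob=0.15, learning_rate=0.1,
#                       batch_size=10, K=5, com_amount=100, print_per=5,
#                       weight_decay=1e-3, lr_decay=0.998, model_func=model_func,
#                       init_model=init_model, save_period=20,
#                       meta_learning_rate_list=[0.01], num_grad_step_list=[1],
#                       do_proto=True, do_plain=True)
#   fed_sel, trn_sel, tst_sel, fed_all, trn_all, tst_all = outs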
###
def train_Meta_FedAvg_MAML(data_obj, act_prob, learning_rate, batch_size, meta_learning_rate, K, com_amount, print_per,
                           weight_decay, model_func, init_model, save_period, lr_decay, num_grad_step,
                           rand_seed=0, save_models=False, fast_exec=False):
    suffix = 'PerAvg_MAML_S%d_F%f_Lr%f_B%d_K%d_W%f_MetaLr%f_GS%d_lrdecay%f_seed%d' %(save_period, act_prob, learning_rate, batch_size, K, weight_decay, meta_learning_rate, num_grad_step, lr_decay, rand_seed)

    n_clnt = data_obj.n_client
    clnt_x = data_obj.clnt_x; clnt_y = data_obj.clnt_y

    if not os.path.exists('Model/%s/%s' %(data_obj.name, suffix)):
        os.mkdir('Model/%s/%s' %(data_obj.name, suffix))
    n_save_instances = int(com_amount / save_period)
    fed_mdls_sel = list(range(n_save_instances)); fed_mdls_all = list(range(n_save_instances))

    trn_perf_sel = np.zeros((com_amount, 4)); trn_perf_all = np.zeros((com_amount, 4))
    tst_perf_sel = np.zeros((com_amount, 4)); tst_perf_all = np.zeros((com_amount, 5))

    n_par = len(get_mdl_params([model_func()])[0])
    init_par_list = get_mdl_params([init_model], n_par)[0].cpu().numpy()
    clnt_params_list = np.ones(n_clnt).astype('float32').reshape(-1, 1) * init_par_list.reshape(1, -1) # n_clnt X n_par

    saved_itr = -1
    # Check if there are past saved iterates
    for i in range(com_amount):
        if os.path.exists('Model/%s/%s/%dcom_sel.pt' %(data_obj.name, suffix, i+1)):
            saved_itr = i
            if save_models:
                ###
                fed_model = model_func()
                fed_model.load_state_dict(torch.load('Model/%s/%s/%dcom_sel.pt' %(data_obj.name, suffix, i+1)))
                fed_model.eval()
                fed_model = fed_model.to(device)
                # Freeze model
                for params in fed_model.parameters():
                    params.requires_grad = False
                fed_mdls_sel[saved_itr//save_period] = fed_model
                ###
                fed_model = model_func()
                fed_model.load_state_dict(torch.load('Model/%s/%s/%dcom_all.pt' %(data_obj.name, suffix, i+1)))
                fed_model.eval()
                fed_model = fed_model.to(device)
                # Freeze model
                for params in fed_model.parameters():
                    params.requires_grad = False
                fed_mdls_all[saved_itr//save_period] = fed_model
            if os.path.exists('Model/%s/%s/%dcom_trn_perf_sel.npy' %(data_obj.name, suffix, (i+1))):
                trn_perf_sel[:i+1] = np.load('Model/%s/%s/%dcom_trn_perf_sel.npy' %(data_obj.name, suffix, (i+1)))
                trn_perf_all[:i+1] = np.load('Model/%s/%s/%dcom_trn_perf_all.npy' %(data_obj.name, suffix, (i+1)))
                tst_perf_sel[:i+1] = np.load('Model/%s/%s/%dcom_tst_perf_sel.npy' %(data_obj.name, suffix, (i+1)))
                tst_perf_all[:i+1] = np.load('Model/%s/%s/%dcom_tst_perf_all.npy' %(data_obj.name, suffix, (i+1)))
                if save_models:
                    clnt_params_list = np.load('Model/%s/%s/%d_clnt_params_list.npy' %(data_obj.name, suffix, i+1))

    if not os.path.exists('Model/%s/%s/%dcom_tst_perf_all.npy' %(data_obj.name, suffix, com_amount)):
        avg_model = model_func().to(device)
        if saved_itr == -1:
            avg_model.load_state_dict(copy.deepcopy(dict(init_model.named_parameters())))
        else:
            avg_model.load_state_dict(torch.load('Model/%s/%s/%dcom_sel.pt'%(data_obj.name, suffix, saved_itr+1)))

        for i in range(saved_itr+1, com_amount):
            ### Fix randomness
            np.random.seed(i + rand_seed)
            clnt_list = np.arange(n_clnt)
            np.random.shuffle(clnt_list)
            selected_clnts = clnt_list[:int(act_prob*n_clnt)]
            print('Selected Clients: %s' %(', '.join(['%2d' %item for item in selected_clnts])))

            for clnt in selected_clnts:
                print('---- Training client %d' %clnt)
                trn_x = clnt_x[clnt]
                trn_y = clnt_y[clnt]
                cur_model = model_func().to(device)
                cur_model.load_state_dict(copy.deepcopy(dict(avg_model.named_parameters())))
                for params in cur_model.parameters():
                    params.requires_grad = True
                cur_model = train_meta_model_MAML(model_func, cur_model, trn_x, trn_y, num_grad_step, meta_learning_rate,
                                                  learning_rate * (lr_decay ** i), batch_size, K, print_per,
                                                  weight_decay, data_obj.dataset)
                is_diverged = is_model_NaN(cur_model)
                if is_diverged:
                    # If the model has NaN parameters, keep the average model instead and count the failure
                    clnt_params_list[clnt] = get_mdl_params([avg_model], n_par)[0].cpu().numpy()
                    tst_perf_all[i][-1] += 1
                else:
                    clnt_params_list[clnt] = get_mdl_params([cur_model], n_par)[0].cpu().numpy()

            # Scale with weights
            avg_selected = np.mean(clnt_params_list[selected_clnts], axis=0)
            avg_model = set_client_from_params(model_func().to(device),
                                               torch.tensor(avg_selected, dtype=torch.float32).to(device))
            avg_all = np.mean(clnt_params_list, axis=0)
            all_model = set_client_from_params(model_func().to(device), torch.tensor(avg_all, dtype=torch.float32).to(device))

            [list_1, list_2, list_3, list_4] = get_all_results_maml(meta_learning_rate,
                num_grad_step, data_obj.clnt_x, data_obj.clnt_y, data_obj.tst_x,
                data_obj.tst_y, data_obj.dataset, model_func, avg_model, all_model, fast_exec, i)
            tst_perf_sel[i] = list_1; tst_perf_all[i,:len(list_2)] = list_2
            trn_perf_sel[i] = list_3; trn_perf_all[i] = list_4

            # Freeze model
            for params in avg_model.parameters():
                params.requires_grad = False

            if (i+1) % save_period == 0:
                if save_models:
                    torch.save(avg_model.state_dict(), 'Model/%s/%s/%dcom_sel.pt' %(data_obj.name, suffix, (i+1)))
                    torch.save(all_model.state_dict(), 'Model/%s/%s/%dcom_all.pt' %(data_obj.name, suffix, (i+1)))
                    np.save('Model/%s/%s/%d_clnt_params_list.npy' %(data_obj.name, suffix, (i+1)), clnt_params_list)
                np.save('Model/%s/%s/%dcom_trn_perf_sel.npy' %(data_obj.name, suffix, (i+1)), trn_perf_sel[:i+1])
                np.save('Model/%s/%s/%dcom_tst_perf_sel.npy' %(data_obj.name, suffix, (i+1)), tst_perf_sel[:i+1])
                np.save('Model/%s/%s/%dcom_trn_perf_all.npy' %(data_obj.name, suffix, (i+1)), trn_perf_all[:i+1])
                np.save('Model/%s/%s/%dcom_tst_perf_all.npy' %(data_obj.name, suffix, (i+1)), tst_perf_all[:i+1])
                if (i+1) > save_period:
                    if os.path.exists('Model/%s/%s/%dcom_trn_perf_sel.npy' %(data_obj.name, suffix, i+1-save_period)):
                        # Delete the previously saved arrays
                        os.remove('Model/%s/%s/%dcom_trn_perf_sel.npy' %(data_obj.name, suffix, i+1-save_period))
                        os.remove('Model/%s/%s/%dcom_tst_perf_sel.npy' %(data_obj.name, suffix, i+1-save_period))
                        os.remove('Model/%s/%s/%dcom_trn_perf_all.npy' %(data_obj.name, suffix, i+1-save_period))
                        os.remove('Model/%s/%s/%dcom_tst_perf_all.npy' %(data_obj.name, suffix, i+1-save_period))
                        if save_models:
                            os.remove('Model/%s/%s/%d_clnt_params_list.npy' %(data_obj.name, suffix, i+1-save_period))

            if (i+1) % save_period == 0:
                fed_mdls_sel[i//save_period] = avg_model
                fed_mdls_all[i//save_period] = all_model

            # If all clients diverged in int(1/act_prob) consecutive rounds, stop execution.
            failure_arr = tst_perf_all[:, -1]
            total_fails = failure_arr[np.max([0, i-int(1/act_prob)]):i].sum()
            print('Total failures in this round: %d' %tst_perf_all[i, -1])
            if total_fails == int(act_prob*n_clnt)*int(1/act_prob):
                break

    return fed_mdls_sel, trn_perf_sel, tst_perf_sel, fed_mdls_all, trn_perf_all, tst_perf_all
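
# Illustrative only: the personalization that Per-FedAvg (MAML) evaluates boils
# down to a few local gradient steps from the broadcast model. A generic,
# hypothetical sketch of such an adaptation step; train_meta_model_MAML and
# get_all_results_maml (in utils_general) implement the actual procedure:
def _adapt_sketch(model, loss_fn, batch, meta_lr, num_grad_step):
    # Take num_grad_step plain SGD steps on one local batch at rate meta_lr.
    for _ in range(num_grad_step):
        loss = loss_fn(model, batch)
        grads = torch.autograd.grad(loss, model.parameters())
        with torch.no_grad():
            for p, g in zip(model.parameters(), grads):
                p -= meta_lr * g
    return model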
###
def train_Meta_FedAvg_Proto(data_obj, act_prob, learning_rate, batch_size, K, com_amount, print_per, weight_decay,
                            model_func, init_model, save_period, lr_decay,
                            rand_seed=0, save_models=False, fast_exec=False):
    suffix = 'PerAvg_Proto_S%d_F%f_Lr%f_B%d_K%d_W%f_lrdecay%f_seed%d' %(save_period, act_prob, learning_rate, batch_size, K, weight_decay, lr_decay, rand_seed)

    n_clnt = data_obj.n_client
    clnt_x = data_obj.clnt_x; clnt_y = data_obj.clnt_y

    if not os.path.exists('Model/%s/%s' %(data_obj.name, suffix)):
        os.mkdir('Model/%s/%s' %(data_obj.name, suffix))
    n_save_instances = int(com_amount / save_period)
    fed_mdls_sel = list(range(n_save_instances)); fed_mdls_all = list(range(n_save_instances))

    trn_perf_sel = np.zeros((com_amount, 4)); trn_perf_all = np.zeros((com_amount, 4))
    tst_perf_sel = np.zeros((com_amount, 4)); tst_perf_all = np.zeros((com_amount, 5))

    n_par = len(get_mdl_params([model_func()])[0])
    init_par_list = get_mdl_params([init_model], n_par)[0].cpu().numpy()
    clnt_params_list = np.ones(n_clnt).astype('float32').reshape(-1, 1) * init_par_list.reshape(1, -1) # n_clnt X n_par

    saved_itr = -1
    # Check if there are past saved iterates
    for i in range(com_amount):
        if os.path.exists('Model/%s/%s/%dcom_sel.pt' %(data_obj.name, suffix, i+1)):
            saved_itr = i
            if save_models:
                ###
                fed_model = model_func()
                fed_model.load_state_dict(torch.load('Model/%s/%s/%dcom_sel.pt' %(data_obj.name, suffix, i+1)))
                fed_model.eval()
                fed_model = fed_model.to(device)
                # Freeze model
                for params in fed_model.parameters():
                    params.requires_grad = False
                fed_mdls_sel[saved_itr//save_period] = fed_model
                ###
                fed_model = model_func()
                fed_model.load_state_dict(torch.load('Model/%s/%s/%dcom_all.pt' %(data_obj.name, suffix, i+1)))
                fed_model.eval()
                fed_model = fed_model.to(device)
                # Freeze model
                for params in fed_model.parameters():
                    params.requires_grad = False
                fed_mdls_all[saved_itr//save_period] = fed_model
            if os.path.exists('Model/%s/%s/%dcom_trn_perf_sel.npy' %(data_obj.name, suffix, (i+1))):
                trn_perf_sel[:i+1] = np.load('Model/%s/%s/%dcom_trn_perf_sel.npy' %(data_obj.name, suffix, (i+1)))
                trn_perf_all[:i+1] = np.load('Model/%s/%s/%dcom_trn_perf_all.npy' %(data_obj.name, suffix, (i+1)))
                tst_perf_sel[:i+1] = np.load('Model/%s/%s/%dcom_tst_perf_sel.npy' %(data_obj.name, suffix, (i+1)))
                tst_perf_all[:i+1] = np.load('Model/%s/%s/%dcom_tst_perf_all.npy' %(data_obj.name, suffix, (i+1)))
                if save_models:
                    clnt_params_list = np.load('Model/%s/%s/%d_clnt_params_list.npy' %(data_obj.name, suffix, i+1))

    if not os.path.exists('Model/%s/%s/%dcom_tst_perf_all.npy' %(data_obj.name, suffix, com_amount)):
        avg_model = model_func().to(device)
        if saved_itr == -1:
            avg_model.load_state_dict(copy.deepcopy(dict(init_model.named_parameters())))
        else:
            # Load the most recent saved model
            avg_model.load_state_dict(torch.load('Model/%s/%s/%dcom_sel.pt' %(data_obj.name, suffix, saved_itr+1)))

        for i in range(saved_itr+1, com_amount):
            ### Fix randomness
            np.random.seed(i + rand_seed)
            clnt_list = np.arange(n_clnt)
            np.random.shuffle(clnt_list)
            selected_clnts = clnt_list[:int(act_prob*n_clnt)]
            print('Selected Clients: %s' %(', '.join(['%2d' %item for item in selected_clnts])))

            for clnt in selected_clnts:
                print('---- Training client %d' %clnt)
                trn_x = clnt_x[clnt]
                trn_y = clnt_y[clnt]
                cur_model = model_func().to(device)
                cur_model.load_state_dict(copy.deepcopy(dict(avg_model.named_parameters())))
                for params in cur_model.parameters():
                    params.requires_grad = True
                cur_model = train_proto_model(cur_model, trn_x, trn_y, learning_rate*(lr_decay**i),
                                              batch_size, K, print_per, weight_decay, data_obj.dataset)
                is_diverged = is_model_NaN(cur_model)
                if is_diverged:
                    # If the model has NaN parameters, keep the average model instead and count the failure
                    clnt_params_list[clnt] = get_mdl_params([avg_model], n_par)[0].cpu().numpy()
                    tst_perf_all[i][-1] += 1
                else:
                    clnt_params_list[clnt] = get_mdl_params([cur_model], n_par)[0].cpu().numpy()

            # Scale with weights
            avg_selected = np.mean(clnt_params_list[selected_clnts], axis=0)
            avg_model = set_client_from_params(model_func().to(device),
                                               torch.tensor(avg_selected, dtype=torch.float32).to(device))
            avg_all = np.mean(clnt_params_list, axis=0)
            all_model = set_client_from_params(model_func().to(device), torch.tensor(avg_all, dtype=torch.float32).to(device))

            [list_1, list_2, list_3, list_4] = get_all_results_proto(
                data_obj.clnt_x, data_obj.clnt_y, data_obj.tst_x,
                data_obj.tst_y, data_obj.dataset, model_func, avg_model, all_model, fast_exec, i)
            tst_perf_sel[i] = list_1; tst_perf_all[i,:len(list_2)] = list_2
            trn_perf_sel[i] = list_3; trn_perf_all[i] = list_4

            # Freeze model
            for params in avg_model.parameters():
                params.requires_grad = False

            if (i+1) % save_period == 0:
                if save_models:
                    torch.save(avg_model.state_dict(), 'Model/%s/%s/%dcom_sel.pt' %(data_obj.name, suffix, (i+1)))
                    torch.save(all_model.state_dict(), 'Model/%s/%s/%dcom_all.pt' %(data_obj.name, suffix, (i+1)))
                    np.save('Model/%s/%s/%d_clnt_params_list.npy' %(data_obj.name, suffix, (i+1)), clnt_params_list)
                np.save('Model/%s/%s/%dcom_trn_perf_sel.npy' %(data_obj.name, suffix, (i+1)), trn_perf_sel[:i+1])
                np.save('Model/%s/%s/%dcom_tst_perf_sel.npy' %(data_obj.name, suffix, (i+1)), tst_perf_sel[:i+1])
                np.save('Model/%s/%s/%dcom_trn_perf_all.npy' %(data_obj.name, suffix, (i+1)), trn_perf_all[:i+1])
                np.save('Model/%s/%s/%dcom_tst_perf_all.npy' %(data_obj.name, suffix, (i+1)), tst_perf_all[:i+1])
                if (i+1) > save_period:
                    if os.path.exists('Model/%s/%s/%dcom_trn_perf_sel.npy' %(data_obj.name, suffix, i+1-save_period)):
                        # Delete the previously saved arrays
                        os.remove('Model/%s/%s/%dcom_trn_perf_sel.npy' %(data_obj.name, suffix, i+1-save_period))
                        os.remove('Model/%s/%s/%dcom_tst_perf_sel.npy' %(data_obj.name, suffix, i+1-save_period))
                        os.remove('Model/%s/%s/%dcom_trn_perf_all.npy' %(data_obj.name, suffix, i+1-save_period))
                        os.remove('Model/%s/%s/%dcom_tst_perf_all.npy' %(data_obj.name, suffix, i+1-save_period))
                        if save_models:
                            os.remove('Model/%s/%s/%d_clnt_params_list.npy' %(data_obj.name, suffix, i+1-save_period))

            if (i+1) % save_period == 0:
                fed_mdls_sel[i//save_period] = avg_model
                fed_mdls_all[i//save_period] = all_model

            # If all clients diverged in int(1/act_prob) consecutive rounds, stop execution.
            failure_arr = tst_perf_all[:, -1]
            total_fails = failure_arr[np.max([0, i-int(1/act_prob)]):i].sum()
            print('Total failures in this round: %d' %tst_perf_all[i, -1])
            if total_fails == int(act_prob*n_clnt)*int(1/act_prob):
                break

    return fed_mdls_sel, trn_perf_sel, tst_perf_sel, fed_mdls_all, trn_perf_all, tst_perf_all
###
# FedDyn methods..
###
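
# FedDyn server-side bookkeeping, paraphrasing the update rules implemented in
# train_FedDyn below: each client k carries a state (dual) vector lambda_k,
# updated after local training as
#     lambda_k <- lambda_k - alpha * (theta_k - theta_cloud),
# and the cloud model broadcast in the next round is recomputed as
#     theta_cloud <- mean_over_selected(theta_k) - (1/alpha) * mean_over_all(lambda_k).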
def train_FedDyn(data_obj, act_prob, alpha, learning_rate, batch_size, K, com_amount, print_per, weight_decay,
model_func, init_model, save_period, lr_decay, meta_learning_rate_list=False,
num_grad_step_list=False, do_proto=False, do_plain=False, rand_seed=0, save_models=False, fast_exec=False):
suffix = 'FedDy_S%d_F%f_Lr%f_B%d_alpha%f_K%d_W%f_lrdecay%f_seed%d' %(save_period, act_prob, learning_rate, batch_size, alpha, K, weight_decay, lr_decay,rand_seed)
n_clnt=data_obj.n_client
clnt_x = data_obj.clnt_x; clnt_y=data_obj.clnt_y
if (not os.path.exists('Model/%s/%s' %(data_obj.name, suffix))):
os.mkdir('Model/%s/%s' %(data_obj.name, suffix))
n_save_instances = int(com_amount / save_period)
fed_mdls_sel = list(range(n_save_instances)); fed_mdls_all = list(range(n_save_instances))
metaLr_numGrad = []
if meta_learning_rate_list != False:
for meta_learning_rate in meta_learning_rate_list:
for num_grad_step in num_grad_step_list:
metaLr_numGrad.append([meta_learning_rate, num_grad_step])
n_cases = len(metaLr_numGrad)
n_cases = n_cases + 1 if do_proto else n_cases
n_cases = n_cases + 1 if do_plain else n_cases
trn_perf_sel = np.zeros((n_cases, com_amount, 4));
trn_perf_all = np.zeros((n_cases, com_amount, 4));
tst_perf_sel = np.zeros((n_cases, com_amount, 4));
tst_perf_all = np.zeros((n_cases, com_amount, 5));
n_par = len(get_mdl_params([model_func()])[0])
lambda_model_list=np.zeros((n_clnt, n_par)).astype('float32')
init_par_list=get_mdl_params([init_model], n_par)[0].cpu().numpy()
clnt_params_list=np.ones(n_clnt).astype('float32').reshape(-1, 1) * init_par_list.reshape(1, -1) # n_clnt X n_par
saved_itr = -1
# Check if there are past saved iterates
for i in range(com_amount):
if os.path.exists('Model/%s/%s/%dcom_sel.pt' %(data_obj.name, suffix, i+1)):
saved_itr = i
if save_models:
###
fed_model = model_func()
fed_model.load_state_dict(torch.load('Model/%s/%s/%dcom_sel.pt' %(data_obj.name, suffix, i+1)))
fed_model.eval()
fed_model = fed_model.to(device)
# Freeze model
for params in fed_model.parameters():
params.requires_grad = False
fed_mdls_sel[saved_itr//save_period] = fed_model
###
fed_model = model_func()
fed_model.load_state_dict(torch.load('Model/%s/%s/%dcom_all.pt' %(data_obj.name, suffix, i+1)))
fed_model.eval()
fed_model = fed_model.to(device)
# Freeze model
for params in fed_model.parameters():
params.requires_grad = False
fed_mdls_all[saved_itr//save_period] = fed_model
if os.path.exists('Model/%s/%s/%dcom_trn_perf_sel.npy' %(data_obj.name, suffix, (i+1))):
trn_perf_sel[:,:i+1] = np.load('Model/%s/%s/%dcom_trn_perf_sel.npy' %(data_obj.name, suffix, (i+1)))
trn_perf_all[:,:i+1] = np.load('Model/%s/%s/%dcom_trn_perf_all.npy' %(data_obj.name, suffix, (i+1)))
tst_perf_sel[:,:i+1] = np.load('Model/%s/%s/%dcom_tst_perf_sel.npy' %(data_obj.name, suffix, (i+1)))
tst_perf_all[:,:i+1] = np.load('Model/%s/%s/%dcom_tst_perf_all.npy' %(data_obj.name, suffix, (i+1)))
if save_models:
clnt_params_list = np.load('Model/%s/%s/%d_clnt_params_list.npy' %(data_obj.name, suffix, i+1))
lambda_model_list= np.load('Model/%s/%s/%d_lambda_params_list.npy' %(data_obj.name, suffix, i+1))
if (not os.path.exists('Model/%s/%s/%dcom_tst_perf_all.npy' %(data_obj.name, suffix, com_amount))):
cld_model = model_func().to(device)
avg_selected = model_func().to(device)
if saved_itr == -1:
cld_model.load_state_dict(copy.deepcopy(dict(init_model.named_parameters())))
avg_selected = get_mdl_params([init_model], n_par)[0].cpu().numpy()
else:
cld_model.load_state_dict(torch.load('Model/%s/%s/%dcom_cld.pt' %(data_obj.name, suffix, (saved_itr+1))))
avg_selected.load_state_dict(torch.load('Model/%s/%s/%dcom_sel.pt' %(data_obj.name, suffix, (saved_itr+1))))
avg_selected = get_mdl_params([avg_selected], n_par)[0].cpu().numpy()
cld_mdl_param = get_mdl_params([cld_model], n_par)[0].cpu().numpy()
for i in range(saved_itr+1, com_amount):
### Fix randomness
np.random.seed(i + rand_seed)
clnt_list = np.arange(n_clnt)
np.random.shuffle(clnt_list)
selected_clnts = clnt_list[:int(act_prob*n_clnt)]
print('Selected Clients: %s' %(', '.join(['%2d' %item for item in selected_clnts])))
server_model = torch.tensor(cld_mdl_param, dtype=torch.float32, device=device)
server_model_object = set_client_from_params(model_func().to(device),server_model)
for clnt in selected_clnts:
print('---- Training client %d' %clnt)
trn_x = clnt_x[clnt]
trn_y = clnt_y[clnt]
cur_model = model_func().to(device)
cur_model.load_state_dict(copy.deepcopy(dict(server_model_object.named_parameters())))
for params in cur_model.parameters():
params.requires_grad = True
lambda_model = torch.tensor(lambda_model_list[clnt], dtype=torch.float32, device=device)
cur_model = train_dyn_model(alpha, lambda_model, server_model, cur_model, trn_x, trn_y,
learning_rate * (lr_decay ** i), batch_size, K, print_per,
weight_decay, data_obj.dataset)
is_diverged = is_model_NaN(cur_model)
if is_diverged:
# If model has NaN do not update the list put the average model, do not update the lambda model.
clnt_params_list[clnt] = np.copy(avg_selected)
tst_perf_all[0][i][-1] += 1
else:
clnt_params_list[clnt] = get_mdl_params([cur_model], n_par)[0].cpu().numpy()
lambda_model_list[clnt] = lambda_model_list[clnt] - alpha * (clnt_params_list[clnt] - cld_mdl_param)
# Scale with weights
avg_selected = np.mean(clnt_params_list[selected_clnts], axis = 0)
avg_model = set_client_from_params(model_func().to(device),
torch.tensor(avg_selected, dtype=torch.float32).to(device))
avg_all = np.mean(clnt_params_list, axis = 0)
all_model = set_client_from_params(model_func().to(device), torch.tensor(avg_all, dtype=torch.float32).to(device))
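# FedDyn server update: cloud params = mean of selected client params minus (1/alpha) * mean of the dual variables.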
cld_mdl_param = avg_selected - 1/alpha*np.mean(lambda_model_list, axis=0)
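# Personalized evaluation: one row of the perf arrays per (meta learning rate, gradient steps) MAML case, followed by the optional ProtoNet and plain rows.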
for idx_, [meta_learning_rate, num_grad_step] in enumerate(metaLr_numGrad):
[list_1, list_2, list_3, list_4] = get_all_results_maml(meta_learning_rate,
num_grad_step, data_obj.clnt_x, data_obj.clnt_y, data_obj.tst_x,
data_obj.tst_y, data_obj.dataset, model_func, avg_model, all_model, fast_exec, i)
tst_perf_sel[idx_,i,:] = list_1; tst_perf_all[idx_,i,:len(list_2)] = list_2
trn_perf_sel[idx_,i,:] = list_3; trn_perf_all[idx_,i,:] = list_4
# Optional ProtoNet and plain evaluations occupy the rows right after the MAML cases.
offset_ = len(metaLr_numGrad)
if do_proto:
[list_1, list_2, list_3, list_4] = get_all_results_proto(
data_obj.clnt_x, data_obj.clnt_y, data_obj.tst_x, data_obj.tst_y, data_obj.dataset,
model_func, avg_model, all_model, fast_exec, i)
tst_perf_sel[offset_,i,:] = list_1; tst_perf_all[offset_,i,:len(list_2)] = list_2
trn_perf_sel[offset_,i,:] = list_3; trn_perf_all[offset_,i,:] = list_4
offset_ = len(metaLr_numGrad) + (1 if do_proto else 0)
if do_plain:
[list_1, list_2, list_3, list_4] = get_all_results_plain(data_obj.clnt_x,
data_obj.clnt_y, data_obj.tst_x, data_obj.tst_y, data_obj.dataset,
avg_model, all_model, fast_exec, i)
tst_perf_sel[offset_,i,:] = list_1; tst_perf_all[offset_,i,:len(list_2)] = list_2
trn_perf_sel[offset_,i,:] = list_3; trn_perf_all[offset_,i,:] = list_4
# Freeze model
for params in avg_model.parameters():
params.requires_grad = False
if ((i+1) % save_period == 0):
if save_models:
torch.save(avg_model.state_dict(), 'Model/%s/%s/%dcom_sel.pt' %(data_obj.name, suffix, (i+1)))
torch.save(all_model.state_dict(), 'Model/%s/%s/%dcom_all.pt' %(data_obj.name, suffix, (i+1)))
torch.save(cld_model.state_dict(), 'Model/%s/%s/%dcom_cld.pt' %(data_obj.name, suffix, (i+1)))
np.save('Model/%s/%s/%d_clnt_params_list.npy' %(data_obj.name, suffix, (i+1)), clnt_params_list)
np.save('Model/%s/%s/%d_lambda_params_list.npy' %(data_obj.name, suffix, (i+1)), lambda_model_list)
np.save('Model/%s/%s/%dcom_trn_perf_sel.npy'%(data_obj.name, suffix, (i+1)), trn_perf_sel[:,:i+1])
np.save('Model/%s/%s/%dcom_tst_perf_sel.npy'%(data_obj.name, suffix, (i+1)), tst_perf_sel[:,:i+1])
np.save('Model/%s/%s/%dcom_trn_perf_all.npy'%(data_obj.name, suffix, (i+1)), trn_perf_all[:,:i+1])
np.save('Model/%s/%s/%dcom_tst_perf_all.npy'%(data_obj.name, suffix, (i+1)), tst_perf_all[:,:i+1])
if (i+1) > save_period:
if os.path.exists('Model/%s/%s/%dcom_trn_perf_sel.npy' %(data_obj.name, suffix, i+1-save_period)):
# Delete the previous saved arrays
os.remove('Model/%s/%s/%dcom_trn_perf_sel.npy' %(data_obj.name, suffix, i+1-save_period))
os.remove('Model/%s/%s/%dcom_tst_perf_sel.npy' %(data_obj.name, suffix, i+1-save_period))
os.remove('Model/%s/%s/%dcom_trn_perf_all.npy' %(data_obj.name, suffix, i+1-save_period))
os.remove('Model/%s/%s/%dcom_tst_perf_all.npy' %(data_obj.name, suffix, i+1-save_period))
if save_models:
os.remove('Model/%s/%s/%d_clnt_params_list.npy' %(data_obj.name, suffix, i+1-save_period))
os.remove('Model/%s/%s/%d_lambda_params_list.npy' %(data_obj.name, suffix, i+1-save_period))
if ((i+1) % save_period == 0):
fed_mdls_sel[i//save_period] = avg_model
fed_mdls_all[i//save_period] = all_model
# Stop if every selected client diverged in each of the last int(1/act_prob) consecutive rounds, including this one.
failure_arr = tst_perf_all[0,:,-1]
total_fails = failure_arr[max(0, i + 1 - int(1/act_prob)):i + 1].sum()
print('Total failures in this round: %d' %tst_perf_all[0,i, -1])
if total_fails == int(act_prob*n_clnt)*int(1/act_prob):
break
return fed_mdls_sel, trn_perf_sel, tst_perf_sel, fed_mdls_all, trn_perf_all, tst_perf_all
####
def train_Meta_FedDyn_MAML(data_obj, act_prob, alpha, learning_rate, batch_size, meta_learning_rate, K, com_amount, print_per,
weight_decay, model_func, init_model, save_period, num_grad_step, lr_decay,
rand_seed=0, save_models=False, fast_exec=False):
suffix = 'PFLDyn_MAML_S%d_F%f_Lr%f_B%d_alpha%f_K%d_W%f_MetaLr%f_GS%d_lrdecay%f_seed%d' %(save_period, act_prob, learning_rate, batch_size, alpha, K, weight_decay, meta_learning_rate, num_grad_step, lr_decay,rand_seed)
n_clnt=data_obj.n_client
clnt_x = data_obj.clnt_x; clnt_y=data_obj.clnt_y
if (not os.path.exists('Model/%s/%s' %(data_obj.name, suffix))):
os.mkdir('Model/%s/%s' %(data_obj.name, suffix))
n_save_instances = int(com_amount / save_period)
fed_mdls_sel = list(range(n_save_instances)); fed_mdls_all = list(range(n_save_instances))
trn_perf_sel = np.zeros((com_amount, 4)); trn_perf_all = np.zeros((com_amount, 4))
tst_perf_sel = np.zeros((com_amount, 4)); tst_perf_all = np.zeros((com_amount, 5))
n_par = len(get_mdl_params([model_func()])[0])
lambda_model_list=np.zeros((n_clnt, n_par)).astype('float32')
init_par_list=get_mdl_params([init_model], n_par)[0].cpu().numpy()
clnt_params_list=np.ones(n_clnt).astype('float32').reshape(-1, 1) * init_par_list.reshape(1, -1) # n_clnt X n_par
saved_itr = -1
# Check if there are past saved iterates
for i in range(com_amount):
if os.path.exists('Model/%s/%s/%dcom_sel.pt' %(data_obj.name, suffix, i+1)):
saved_itr = i
if save_models:
###
fed_model = model_func()
fed_model.load_state_dict(torch.load('Model/%s/%s/%dcom_sel.pt' %(data_obj.name, suffix, i+1)))
fed_model.eval()
fed_model = fed_model.to(device)
# Freeze model
for params in fed_model.parameters():
params.requires_grad = False
fed_mdls_sel[saved_itr//save_period] = fed_model
###
fed_model = model_func()
fed_model.load_state_dict(torch.load('Model/%s/%s/%dcom_all.pt' %(data_obj.name, suffix, i+1)))
fed_model.eval()
fed_model = fed_model.to(device)
# Freeze model
for params in fed_model.parameters():
params.requires_grad = False
fed_mdls_all[saved_itr//save_period] = fed_model
if os.path.exists('Model/%s/%s/%dcom_trn_perf_sel.npy' %(data_obj.name, suffix, (i+1))):
trn_perf_sel[:i+1] = np.load('Model/%s/%s/%dcom_trn_perf_sel.npy' %(data_obj.name, suffix, (i+1)))
trn_perf_all[:i+1] = np.load('Model/%s/%s/%dcom_trn_perf_all.npy' %(data_obj.name, suffix, (i+1)))
tst_perf_sel[:i+1] = np.load('Model/%s/%s/%dcom_tst_perf_sel.npy' %(data_obj.name, suffix, (i+1)))
tst_perf_all[:i+1] = np.load('Model/%s/%s/%dcom_tst_perf_all.npy' %(data_obj.name, suffix, (i+1)))
if save_models:
clnt_params_list = np.load('Model/%s/%s/%d_clnt_params_list.npy' %(data_obj.name, suffix, i+1))
lambda_model_list= np.load('Model/%s/%s/%d_lambda_params_list.npy' %(data_obj.name, suffix, i+1))
if (not os.path.exists('Model/%s/%s/%dcom_tst_perf_all.npy' %(data_obj.name, suffix, com_amount))):
cld_model = model_func().to(device)
avg_selected = model_func().to(device)
if saved_itr == -1:
cld_model.load_state_dict(copy.deepcopy(dict(init_model.named_parameters())))
avg_selected = get_mdl_params([init_model], n_par)[0].cpu().numpy()
else:
cld_model.load_state_dict(torch.load('Model/%s/%s/%dcom_cld.pt' %(data_obj.name, suffix, (saved_itr+1))))
avg_selected.load_state_dict(torch.load('Model/%s/%s/%dcom_sel.pt' %(data_obj.name, suffix, (saved_itr+1))))
avg_selected = get_mdl_params([avg_selected], n_par)[0].cpu().numpy()
cld_mdl_param = get_mdl_params([cld_model], n_par)[0].cpu().numpy()
for i in range(saved_itr+1, com_amount):
### Fix randomness
np.random.seed(i + rand_seed)
clnt_list = np.arange(n_clnt)
np.random.shuffle(clnt_list)
selected_clnts = clnt_list[:int(act_prob*n_clnt)]
print('Selected Clients: %s' %(', '.join(['%2d' %item for item in selected_clnts])))
server_model = torch.tensor(cld_mdl_param, dtype=torch.float32, device=device)
server_model_object = set_client_from_params(model_func().to(device),server_model)
for clnt in selected_clnts:
print('---- Training client %d' %clnt)
trn_x = clnt_x[clnt]
trn_y = clnt_y[clnt]
cur_model = model_func().to(device)
cur_model.load_state_dict(copy.deepcopy(dict(server_model_object.named_parameters())))
for params in cur_model.parameters():
params.requires_grad = True
lambda_model = torch.tensor(lambda_model_list[clnt], dtype=torch.float32, device=device)
cur_model = train_dyn_meta_model_MAML(alpha, lambda_model, server_model, model_func, cur_model, trn_x, trn_y,
num_grad_step, meta_learning_rate,
learning_rate * (lr_decay ** i), batch_size, K, print_per,
weight_decay, data_obj.dataset)
is_diverged = is_model_NaN(cur_model)
if is_diverged:
# If the client model diverged (NaN), keep the averaged model as this client's update and leave the lambda model unchanged.
clnt_params_list[clnt] = np.copy(avg_selected)
tst_perf_all[i][-1] += 1
else:
clnt_params_list[clnt] = get_mdl_params([cur_model], n_par)[0].cpu().numpy()
lambda_model_list[clnt] = lambda_model_list[clnt] - alpha * (clnt_params_list[clnt] - cld_mdl_param)
# Aggregate with uniform weights: simple mean over the selected clients' parameters.
avg_selected = np.mean(clnt_params_list[selected_clnts], axis = 0)
avg_model = set_client_from_params(model_func().to(device),
torch.tensor(avg_selected, dtype=torch.float32).to(device))
avg_all = np.mean(clnt_params_list, axis = 0)
all_model = set_client_from_params(model_func().to(device), torch.tensor(avg_all, dtype=torch.float32).to(device))
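# FedDyn server update, as above: averaged selected parameters shifted by -(1/alpha) * mean dual variable.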
cld_mdl_param = avg_selected - 1/alpha*np.mean(lambda_model_list, axis=0)
[list_1, list_2, list_3, list_4] = get_all_results_maml(meta_learning_rate,
num_grad_step, data_obj.clnt_x, data_obj.clnt_y, data_obj.tst_x,
data_obj.tst_y, data_obj.dataset, model_func, avg_model, all_model, fast_exec, i)
tst_perf_sel[i] = list_1; tst_perf_all[i,:len(list_2)] = list_2
trn_perf_sel[i] = list_3; trn_perf_all[i] = list_4
# Freeze model
for params in avg_model.parameters():
params.requires_grad = False
if ((i+1) % save_period == 0):
if save_models:
torch.save(avg_model.state_dict(), 'Model/%s/%s/%dcom_sel.pt' %(data_obj.name, suffix, (i+1)))
torch.save(all_model.state_dict(), 'Model/%s/%s/%dcom_all.pt' %(data_obj.name, suffix, (i+1)))
torch.save(cld_model.state_dict(), 'Model/%s/%s/%dcom_cld.pt' %(data_obj.name, suffix, (i+1)))
np.save('Model/%s/%s/%d_clnt_params_list.npy' %(data_obj.name, suffix, (i+1)), clnt_params_list)
np.save('Model/%s/%s/%d_lambda_params_list.npy' %(data_obj.name, suffix, (i+1)), lambda_model_list)
np.save('Model/%s/%s/%dcom_trn_perf_sel.npy' %(data_obj.name, suffix, (i+1)), trn_perf_sel[:i+1])
np.save('Model/%s/%s/%dcom_tst_perf_sel.npy' %(data_obj.name, suffix, (i+1)), tst_perf_sel[:i+1])
np.save('Model/%s/%s/%dcom_trn_perf_all.npy' %(data_obj.name, suffix, (i+1)), trn_perf_all[:i+1])
np.save('Model/%s/%s/%dcom_tst_perf_all.npy' %(data_obj.name, suffix, (i+1)), tst_perf_all[:i+1])
if (i+1) > save_period:
if os.path.exists('Model/%s/%s/%dcom_trn_perf_sel.npy' %(data_obj.name, suffix, i+1-save_period)):
# Delete the previous saved arrays
os.remove('Model/%s/%s/%dcom_trn_perf_sel.npy' %(data_obj.name, suffix, i+1-save_period))
os.remove('Model/%s/%s/%dcom_tst_perf_sel.npy' %(data_obj.name, suffix, i+1-save_period))
os.remove('Model/%s/%s/%dcom_trn_perf_all.npy' %(data_obj.name, suffix, i+1-save_period))
os.remove('Model/%s/%s/%dcom_tst_perf_all.npy' %(data_obj.name, suffix, i+1-save_period))
if save_models:
os.remove('Model/%s/%s/%d_clnt_params_list.npy' %(data_obj.name, suffix, i+1-save_period))
os.remove('Model/%s/%s/%d_lambda_params_list.npy' %(data_obj.name, suffix, i+1-save_period))
if ((i+1) % save_period == 0):
fed_mdls_sel[i//save_period] = avg_model
fed_mdls_all[i//save_period] = all_model
# Stop if every selected client diverged in each of the last int(1/act_prob) consecutive rounds, including this one.
failure_arr = tst_perf_all[:, -1]
total_fails = failure_arr[max(0, i + 1 - int(1/act_prob)):i + 1].sum()
print('Total failures in this round: %d' %tst_perf_all[i, -1])
if total_fails == int(act_prob*n_clnt)*int(1/act_prob):
break
return fed_mdls_sel, trn_perf_sel, tst_perf_sel, fed_mdls_all, trn_perf_all, tst_perf_all
####
def train_Meta_FedDyn_Proto(data_obj, act_prob, alpha, learning_rate, batch_size, K, com_amount, print_per, weight_decay,
model_func, init_model, save_period, lr_decay,
rand_seed=0, save_models=False, fast_exec=False):
suffix = 'PFLDyn_Proto_S%d_F%f_Lr%f_B%d_alpha%f_K%d_W%f_lrdecay%f_seed%d' %(save_period, act_prob, learning_rate, batch_size, alpha, K, weight_decay, lr_decay, rand_seed)
n_clnt=data_obj.n_client
clnt_x = data_obj.clnt_x; clnt_y=data_obj.clnt_y
if (not os.path.exists('Model/%s/%s' %(data_obj.name, suffix))):
os.mkdir('Model/%s/%s' %(data_obj.name, suffix))
n_save_instances = int(com_amount / save_period)
fed_mdls_sel = list(range(n_save_instances)); fed_mdls_all = list(range(n_save_instances))
trn_perf_sel = np.zeros((com_amount, 4)); trn_perf_all = np.zeros((com_amount, 4))
tst_perf_sel = np.zeros((com_amount, 4)); tst_perf_all = np.zeros((com_amount, 5))
n_par = len(get_mdl_params([model_func()])[0])
lambda_model_list=np.zeros((n_clnt, n_par)).astype('float32')
init_par_list=get_mdl_params([init_model], n_par)[0].cpu().numpy()
clnt_params_list=np.ones(n_clnt).astype('float32').reshape(-1, 1) * init_par_list.reshape(1, -1) # n_clnt X n_par
saved_itr = -1
# Check if there are past saved iterates
for i in range(com_amount):
if os.path.exists('Model/%s/%s/%dcom_sel.pt' %(data_obj.name, suffix, i+1)):
saved_itr = i
if save_models:
###
fed_model = model_func()
fed_model.load_state_dict(torch.load('Model/%s/%s/%dcom_sel.pt' %(data_obj.name, suffix, i+1)))
fed_model.eval()
fed_model = fed_model.to(device)
# Freeze model
for params in fed_model.parameters():
params.requires_grad = False
fed_mdls_sel[saved_itr//save_period] = fed_model
###
fed_model = model_func()
fed_model.load_state_dict(torch.load('Model/%s/%s/%dcom_all.pt' %(data_obj.name, suffix, i+1)))
fed_model.eval()
fed_model = fed_model.to(device)
# Freeze model
for params in fed_model.parameters():
params.requires_grad = False
fed_mdls_all[saved_itr//save_period] = fed_model
if os.path.exists('Model/%s/%s/%dcom_trn_perf_sel.npy' %(data_obj.name, suffix, (i+1))):
trn_perf_sel[:i+1] = np.load('Model/%s/%s/%dcom_trn_perf_sel.npy' %(data_obj.name, suffix, (i+1)))
trn_perf_all[:i+1] = np.load('Model/%s/%s/%dcom_trn_perf_all.npy' %(data_obj.name, suffix, (i+1)))
tst_perf_sel[:i+1] = np.load('Model/%s/%s/%dcom_tst_perf_sel.npy' %(data_obj.name, suffix, (i+1)))
tst_perf_all[:i+1] = np.load('Model/%s/%s/%dcom_tst_perf_all.npy' %(data_obj.name, suffix, (i+1)))
if save_models:
clnt_params_list = np.load('Model/%s/%s/%d_clnt_params_list.npy' %(data_obj.name, suffix, i+1))
lambda_model_list= np.load('Model/%s/%s/%d_lambda_params_list.npy' %(data_obj.name, suffix, i+1))
if not os.path.exists('Model/%s/%s/%dcom_tst_perf_all.npy' %(data_obj.name, suffix, com_amount)):
cld_model = model_func().to(device)
avg_selected = model_func().to(device)
if saved_itr == -1:
cld_model.load_state_dict(copy.deepcopy(dict(init_model.named_parameters())))
avg_selected = get_mdl_params([init_model], n_par)[0].cpu().numpy()
else:
cld_model.load_state_dict(torch.load('Model/%s/%s/%dcom_cld.pt' %(data_obj.name, suffix, (saved_itr+1))))
avg_selected.load_state_dict(torch.load('Model/%s/%s/%dcom_sel.pt' %(data_obj.name, suffix, (saved_itr+1))))
avg_selected = get_mdl_params([avg_selected], n_par)[0].cpu().numpy()
cld_mdl_param = get_mdl_params([cld_model], n_par)[0].cpu().numpy()
for i in range(saved_itr+1, com_amount):
### Fix randomness
np.random.seed(i + rand_seed)
clnt_list = np.arange(n_clnt)
np.random.shuffle(clnt_list)
selected_clnts = clnt_list[:int(act_prob*n_clnt)]
print('Selected Clients: %s' %(', '.join(['%2d' %item for item in selected_clnts])))
server_model = torch.tensor(cld_mdl_param, dtype=torch.float32, device=device)
server_model_object = set_client_from_params(model_func().to(device),server_model)
for clnt in selected_clnts:
print('---- Training client %d' %clnt)
trn_x = clnt_x[clnt]
trn_y = clnt_y[clnt]
cur_model = model_func().to(device)
cur_model.load_state_dict(copy.deepcopy(dict(server_model_object.named_parameters())))
for params in cur_model.parameters():
params.requires_grad = True
lambda_model = torch.tensor(lambda_model_list[clnt], dtype=torch.float32, device=device)
cur_model = train_dyn_proto_model(alpha, lambda_model, server_model, cur_model, trn_x, trn_y,
learning_rate*(lr_decay**i), batch_size, K, print_per,
weight_decay, data_obj.dataset)
is_diverged = is_model_NaN(cur_model)
if is_diverged:
# If the client model diverged (NaN), keep the averaged model as this client's update and leave the lambda model unchanged.
clnt_params_list[clnt] = np.copy(avg_selected)
tst_perf_all[i][-1] += 1
else:
clnt_params_list[clnt] = get_mdl_params([cur_model], n_par)[0].cpu().numpy()
lambda_model_list[clnt] = lambda_model_list[clnt] - alpha * (clnt_params_list[clnt] - cld_mdl_param)
# Aggregate with uniform weights: simple mean over the selected clients' parameters.
avg_selected = np.mean(clnt_params_list[selected_clnts], axis = 0)
avg_model = set_client_from_params(model_func().to(device),
torch.tensor(avg_selected, dtype=torch.float32).to(device))
avg_all = np.mean(clnt_params_list, axis = 0)
all_model = set_client_from_params(model_func().to(device), torch.tensor(avg_all, dtype=torch.float32).to(device))
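# FedDyn server update, as above.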
cld_mdl_param = avg_selected - 1/alpha*np.mean(lambda_model_list, axis=0)
[list_1, list_2, list_3, list_4] = get_all_results_proto(
data_obj.clnt_x, data_obj.clnt_y, data_obj.tst_x,
data_obj.tst_y, data_obj.dataset, model_func, avg_model, all_model, fast_exec, i)
tst_perf_sel[i] = list_1; tst_perf_all[i,:len(list_2)] = list_2
trn_perf_sel[i] = list_3; trn_perf_all[i] = list_4
# Freeze model
for params in avg_model.parameters():
params.requires_grad = False
if ((i+1) % save_period == 0):
if save_models:
torch.save(avg_model.state_dict(), 'Model/%s/%s/%dcom_sel.pt' %(data_obj.name, suffix, (i+1)))
torch.save(all_model.state_dict(), 'Model/%s/%s/%dcom_all.pt' %(data_obj.name, suffix, (i+1)))
torch.save(cld_model.state_dict(), 'Model/%s/%s/%dcom_cld.pt' %(data_obj.name, suffix, (i+1)))
np.save('Model/%s/%s/%d_clnt_params_list.npy' %(data_obj.name, suffix, (i+1)), clnt_params_list)
np.save('Model/%s/%s/%d_lambda_params_list.npy' %(data_obj.name, suffix, (i+1)), lambda_model_list)
np.save('Model/%s/%s/%dcom_trn_perf_sel.npy' %(data_obj.name, suffix, (i+1)), trn_perf_sel[:i+1])
np.save('Model/%s/%s/%dcom_tst_perf_sel.npy' %(data_obj.name, suffix, (i+1)), tst_perf_sel[:i+1])
np.save('Model/%s/%s/%dcom_trn_perf_all.npy' %(data_obj.name, suffix, (i+1)), trn_perf_all[:i+1])
np.save('Model/%s/%s/%dcom_tst_perf_all.npy' %(data_obj.name, suffix, (i+1)), tst_perf_all[:i+1])
if (i+1) > save_period:
if os.path.exists('Model/%s/%s/%dcom_trn_perf_sel.npy' %(data_obj.name, suffix, i+1-save_period)):
# Delete the previous saved arrays
os.remove('Model/%s/%s/%dcom_trn_perf_sel.npy' %(data_obj.name, suffix, i+1-save_period))
os.remove('Model/%s/%s/%dcom_tst_perf_sel.npy' %(data_obj.name, suffix, i+1-save_period))
os.remove('Model/%s/%s/%dcom_trn_perf_all.npy' %(data_obj.name, suffix, i+1-save_period))
os.remove('Model/%s/%s/%dcom_tst_perf_all.npy' %(data_obj.name, suffix, i+1-save_period))
if save_models:
os.remove('Model/%s/%s/%d_clnt_params_list.npy' %(data_obj.name, suffix, i+1-save_period))
os.remove('Model/%s/%s/%d_lambda_params_list.npy' %(data_obj.name, suffix, i+1-save_period))
if ((i+1) % save_period == 0):
fed_mdls_sel[i//save_period] = avg_model
fed_mdls_all[i//save_period] = all_model
# Stop if every selected client diverged in each of the last int(1/act_prob) consecutive rounds, including this one.
failure_arr = tst_perf_all[:, -1]
total_fails = failure_arr[max(0, i + 1 - int(1/act_prob)):i + 1].sum()
print('Total failures in this round: %d' %tst_perf_all[i, -1])
if total_fails == int(act_prob*n_clnt)*int(1/act_prob):
break
return fed_mdls_sel, trn_perf_sel, tst_perf_sel, fed_mdls_all, trn_perf_all, tst_perf_all
def train_SCAFFOLD(data_obj, act_prob, learning_rate, batch_size, K, com_amount, print_per, weight_decay, model_func,
init_model, save_period, lr_decay,
meta_learning_rate_list=False, num_grad_step_list=False, do_proto=False, do_plain=False,
rand_seed=0, save_models=False, fast_exec=False):
suffix = 'SCAFFOLD_S%d_F%f_Lr%f_B%d_K%d_W%f_lrdecay%f_seed%d' %(save_period, act_prob, learning_rate, batch_size, K, weight_decay, lr_decay,rand_seed)
n_clnt=data_obj.n_client
clnt_x = data_obj.clnt_x; clnt_y=data_obj.clnt_y
if (not os.path.exists('Model/%s/%s' %(data_obj.name, suffix))):
os.mkdir('Model/%s/%s' %(data_obj.name, suffix))
n_save_instances = int(com_amount / save_period)
fed_mdls_sel = list(range(n_save_instances)); fed_mdls_all = list(range(n_save_instances))
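# Build the grid of MAML evaluation cases as the cross product of meta learning rates and gradient-step counts.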
metaLr_numGrad = []
if meta_learning_rate_list:
for meta_learning_rate in meta_learning_rate_list:
for num_grad_step in num_grad_step_list:
metaLr_numGrad.append([meta_learning_rate, num_grad_step])
n_cases = len(metaLr_numGrad)
n_cases = n_cases + 1 if do_proto else n_cases
n_cases = n_cases + 1 if do_plain else n_cases
trn_perf_sel = np.zeros((n_cases, com_amount, 4));
trn_perf_all = np.zeros((n_cases, com_amount, 4));
tst_perf_sel = np.zeros((n_cases, com_amount, 4));
tst_perf_all = np.zeros((n_cases, com_amount, 5));
n_par = len(get_mdl_params([model_func()])[0])
c_state_list=np.zeros((n_clnt, n_par)).astype('float32')
init_par_list=get_mdl_params([init_model], n_par)[0].cpu().numpy()
clnt_params_list=np.ones(n_clnt).astype('float32').reshape(-1, 1) * init_par_list.reshape(1, -1) # n_clnt X n_par
saved_itr = -1
# Check if there are past saved iterates
for i in range(com_amount):
if os.path.exists('Model/%s/%s/%dcom_sel.pt' %(data_obj.name, suffix, i+1)):
saved_itr = i
if save_models:
###
fed_model = model_func()
fed_model.load_state_dict(torch.load('Model/%s/%s/%dcom_sel.pt' %(data_obj.name, suffix, i+1)))
fed_model.eval()
fed_model = fed_model.to(device)
# Freeze model
for params in fed_model.parameters():
params.requires_grad = False
fed_mdls_sel[saved_itr//save_period] = fed_model
###
fed_model = model_func()
fed_model.load_state_dict(torch.load('Model/%s/%s/%dcom_all.pt' %(data_obj.name, suffix, i+1)))
fed_model.eval()
fed_model = fed_model.to(device)
# Freeze model
for params in fed_model.parameters():
params.requires_grad = False
fed_mdls_all[saved_itr//save_period] = fed_model
if os.path.exists('Model/%s/%s/%dcom_trn_perf_sel.npy' %(data_obj.name, suffix, (i+1))):
trn_perf_sel[:,:i+1] = np.load('Model/%s/%s/%dcom_trn_perf_sel.npy' %(data_obj.name, suffix, (i+1)))
trn_perf_all[:,:i+1] = np.load('Model/%s/%s/%dcom_trn_perf_all.npy' %(data_obj.name, suffix, (i+1)))
tst_perf_sel[:,:i+1] = np.load('Model/%s/%s/%dcom_tst_perf_sel.npy' %(data_obj.name, suffix, (i+1)))
tst_perf_all[:,:i+1] = np.load('Model/%s/%s/%dcom_tst_perf_all.npy' %(data_obj.name, suffix, (i+1)))
if save_models:
clnt_params_list = np.load('Model/%s/%s/%d_clnt_params_list.npy' %(data_obj.name, suffix, i+1))
c_state_list= np.load('Model/%s/%s/%d_c_state_list.npy' %(data_obj.name, suffix, i+1))
if (not os.path.exists('Model/%s/%s/%dcom_tst_perf_all.npy' %(data_obj.name, suffix, com_amount))):
avg_model = model_func().to(device)
if saved_itr == -1:
avg_model.load_state_dict(copy.deepcopy(dict(init_model.named_parameters())))
else:
avg_model.load_state_dict(torch.load('Model/%s/%s/%dcom_sel.pt' %(data_obj.name, suffix, (saved_itr+1))))
server_params = get_mdl_params([avg_model], n_par)[0].cpu().numpy()
for i in range(saved_itr+1, com_amount):
### Fix randomness
np.random.seed(i + rand_seed)
clnt_list = np.arange(n_clnt)
np.random.shuffle(clnt_list)
selected_clnts = clnt_list[:int(act_prob*n_clnt)]
print('Selected Clients: %s' %(', '.join(['%2d' %item for item in selected_clnts])))
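# SCAFFOLD: the server control variate is the mean of the client control variates.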
server_c_state = np.mean(c_state_list, axis=0)
for clnt in selected_clnts:
print('---- Training client %d' %clnt)
trn_x = clnt_x[clnt]
trn_y = clnt_y[clnt]
cur_model = model_func().to(device)
cur_model.load_state_dict(copy.deepcopy(dict(avg_model.named_parameters())))
for params in cur_model.parameters():
params.requires_grad = True
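# Gradient correction applied during local steps: c_server - c_client.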
curr_state_params_diff = torch.tensor(-c_state_list[clnt] + server_c_state, dtype=torch.float32, device=device)
cur_model = train_SCAF_model(curr_state_params_diff, cur_model, trn_x, trn_y,
learning_rate * (lr_decay ** i), batch_size, K, print_per,
weight_decay, data_obj.dataset)
is_diverged = is_model_NaN(cur_model)
if is_diverged:
# If the client model diverged (NaN), keep the server model as this client's update and leave the control state unchanged.
clnt_params_list[clnt] = np.copy(server_params)
tst_perf_all[0][i][-1] += 1
else:
clnt_params_list[clnt] = get_mdl_params([cur_model], n_par)[0].cpu().numpy()
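# Control-variate update (cf. SCAFFOLD's Option II): c_i <- c_i - c_server + (server params - client params) / (K * lr).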
c_state_list[clnt] += (-server_c_state + 1/K/learning_rate * (server_params - clnt_params_list[clnt]))
server_params = np.mean(clnt_params_list[selected_clnts], axis = 0)
avg_model = set_client_from_params(model_func().to(device),
torch.tensor(server_params, dtype=torch.float32).to(device))
avg_all = np.mean(clnt_params_list, axis = 0)
all_model = set_client_from_params(model_func().to(device), torch.tensor(avg_all, dtype=torch.float32).to(device))
for idx_, [meta_learning_rate, num_grad_step] in enumerate(metaLr_numGrad):
[list_1, list_2, list_3, list_4] = get_all_results_maml(meta_learning_rate,
num_grad_step, data_obj.clnt_x, data_obj.clnt_y, data_obj.tst_x,
data_obj.tst_y, data_obj.dataset, model_func, avg_model, all_model, fast_exec, i)
tst_perf_sel[idx_,i,:] = list_1; tst_perf_all[idx_,i,:len(list_2)] = list_2
trn_perf_sel[idx_,i,:] = list_3; trn_perf_all[idx_,i,:] = list_4
# Optional ProtoNet and plain evaluations occupy the rows right after the MAML cases.
offset_ = len(metaLr_numGrad)
if do_proto:
[list_1, list_2, list_3, list_4] = get_all_results_proto(
data_obj.clnt_x, data_obj.clnt_y, data_obj.tst_x, data_obj.tst_y, data_obj.dataset,
model_func, avg_model, all_model, fast_exec, i)
tst_perf_sel[offset_,i,:] = list_1; tst_perf_all[offset_,i,:len(list_2)] = list_2
trn_perf_sel[offset_,i,:] = list_3; trn_perf_all[offset_,i,:] = list_4
offset_ = len(metaLr_numGrad) + (1 if do_proto else 0)
if do_plain:
[list_1, list_2, list_3, list_4] = get_all_results_plain(data_obj.clnt_x,
data_obj.clnt_y, data_obj.tst_x, data_obj.tst_y, data_obj.dataset,
avg_model, all_model, fast_exec, i)
tst_perf_sel[offset_,i,:] = list_1; tst_perf_all[offset_,i,:len(list_2)] = list_2
trn_perf_sel[offset_,i,:] = list_3; trn_perf_all[offset_,i,:] = list_4
# Freeze model
for params in avg_model.parameters():
params.requires_grad = False
if ((i+1) % save_period == 0):
if save_models:
torch.save(avg_model.state_dict(), 'Model/%s/%s/%dcom_sel.pt' %(data_obj.name, suffix, (i+1)))
torch.save(all_model.state_dict(), 'Model/%s/%s/%dcom_all.pt' %(data_obj.name, suffix, (i+1)))
np.save('Model/%s/%s/%d_clnt_params_list.npy' %(data_obj.name, suffix, (i+1)), clnt_params_list)
np.save('Model/%s/%s/%d_c_state_list.npy' %(data_obj.name, suffix, (i+1)), c_state_list)
np.save('Model/%s/%s/%dcom_trn_perf_sel.npy'%(data_obj.name, suffix, (i+1)), trn_perf_sel[:,:i+1])
np.save('Model/%s/%s/%dcom_tst_perf_sel.npy'%(data_obj.name, suffix, (i+1)), tst_perf_sel[:,:i+1])
np.save('Model/%s/%s/%dcom_trn_perf_all.npy'%(data_obj.name, suffix, (i+1)), trn_perf_all[:,:i+1])
np.save('Model/%s/%s/%dcom_tst_perf_all.npy'%(data_obj.name, suffix, (i+1)), tst_perf_all[:,:i+1])
if (i+1) > save_period:
if os.path.exists('Model/%s/%s/%dcom_trn_perf_sel.npy' %(data_obj.name, suffix, i+1-save_period)):
# Delete the previous saved arrays
os.remove('Model/%s/%s/%dcom_trn_perf_sel.npy' %(data_obj.name, suffix, i+1-save_period))
os.remove('Model/%s/%s/%dcom_tst_perf_sel.npy' %(data_obj.name, suffix, i+1-save_period))
os.remove('Model/%s/%s/%dcom_trn_perf_all.npy' %(data_obj.name, suffix, i+1-save_period))
os.remove('Model/%s/%s/%dcom_tst_perf_all.npy' %(data_obj.name, suffix, i+1-save_period))
if save_models:
os.remove('Model/%s/%s/%d_clnt_params_list.npy' %(data_obj.name, suffix, i+1-save_period))
os.remove('Model/%s/%s/%d_c_state_list.npy' %(data_obj.name, suffix, i+1-save_period))
if ((i+1) % save_period == 0):
fed_mdls_sel[i//save_period] = avg_model
fed_mdls_all[i//save_period] = all_model
# Stop if every selected client diverged in each of the last int(1/act_prob) consecutive rounds, including this one.
failure_arr = tst_perf_all[0,:,-1]
total_fails = failure_arr[max(0, i + 1 - int(1/act_prob)):i + 1].sum()
print('Total failures in this round: %d' %tst_perf_all[0,i, -1])
if total_fails == int(act_prob*n_clnt)*int(1/act_prob):
break
return fed_mdls_sel, trn_perf_sel, tst_perf_sel, fed_mdls_all, trn_perf_all, tst_perf_all
####
def train_Meta_SCAFFOLD_MAML(data_obj, act_prob, learning_rate, batch_size, meta_learning_rate, K, com_amount, print_per,
weight_decay, model_func, init_model, save_period, num_grad_step, lr_decay,
rand_seed=0, save_models=False, fast_exec=False):
suffix = 'PFLSCAF_MAML_S%d_F%f_Lr%f_B%d_K%d_W%f_MetaLr%f_GS%d_lrdecay%f_seed%d' %(save_period, act_prob, learning_rate, batch_size, K, weight_decay, meta_learning_rate, num_grad_step, lr_decay,rand_seed)
n_clnt=data_obj.n_client
clnt_x = data_obj.clnt_x; clnt_y=data_obj.clnt_y
if (not os.path.exists('Model/%s/%s' %(data_obj.name, suffix))):
os.mkdir('Model/%s/%s' %(data_obj.name, suffix))
n_save_instances = int(com_amount / save_period)
fed_mdls_sel = list(range(n_save_instances)); fed_mdls_all = list(range(n_save_instances))
trn_perf_sel = np.zeros((com_amount, 4)); trn_perf_all = np.zeros((com_amount, 4))
tst_perf_sel = np.zeros((com_amount, 4)); tst_perf_all = np.zeros((com_amount, 5))
n_par = len(get_mdl_params([model_func()])[0])
c_state_list=np.zeros((n_clnt, n_par)).astype('float32')
init_par_list=get_mdl_params([init_model], n_par)[0].cpu().numpy()
clnt_params_list=np.ones(n_clnt).astype('float32').reshape(-1, 1) * init_par_list.reshape(1, -1) # n_clnt X n_par
saved_itr = -1
# Check if there are past saved iterates
for i in range(com_amount):
if os.path.exists('Model/%s/%s/%dcom_sel.pt' %(data_obj.name, suffix, i+1)):
saved_itr = i
if save_models:
###
fed_model = model_func()
fed_model.load_state_dict(torch.load('Model/%s/%s/%dcom_sel.pt' %(data_obj.name, suffix, i+1)))
fed_model.eval()
fed_model = fed_model.to(device)
# Freeze model
for params in fed_model.parameters():
params.requires_grad = False
fed_mdls_sel[saved_itr//save_period] = fed_model
###
fed_model = model_func()
fed_model.load_state_dict(torch.load('Model/%s/%s/%dcom_all.pt' %(data_obj.name, suffix, i+1)))
fed_model.eval()
fed_model = fed_model.to(device)
# Freeze model
for params in fed_model.parameters():
params.requires_grad = False
fed_mdls_all[saved_itr//save_period] = fed_model
if os.path.exists('Model/%s/%s/%dcom_trn_perf_sel.npy' %(data_obj.name, suffix, (i+1))):
trn_perf_sel[:i+1] = np.load('Model/%s/%s/%dcom_trn_perf_sel.npy' %(data_obj.name, suffix, (i+1)))
trn_perf_all[:i+1] = np.load('Model/%s/%s/%dcom_trn_perf_all.npy' %(data_obj.name, suffix, (i+1)))
tst_perf_sel[:i+1] = np.load('Model/%s/%s/%dcom_tst_perf_sel.npy' %(data_obj.name, suffix, (i+1)))
tst_perf_all[:i+1] = np.load('Model/%s/%s/%dcom_tst_perf_all.npy' %(data_obj.name, suffix, (i+1)))
if save_models:
clnt_params_list = np.load('Model/%s/%s/%d_clnt_params_list.npy' %(data_obj.name, suffix, i+1))
c_state_list= np.load('Model/%s/%s/%d_c_state_list.npy' %(data_obj.name, suffix, i+1))
if (not os.path.exists('Model/%s/%s/%dcom_tst_perf_all.npy' %(data_obj.name, suffix, com_amount))):
avg_model = model_func().to(device)
if saved_itr == -1:
avg_model.load_state_dict(copy.deepcopy(dict(init_model.named_parameters())))
else:
avg_model.load_state_dict(torch.load('Model/%s/%s/%dcom_sel.pt' %(data_obj.name, suffix, (saved_itr+1))))
server_model_param = get_mdl_params([avg_model], n_par)[0].cpu().numpy()
for i in range(saved_itr+1, com_amount):
### Fix randomness
np.random.seed(i + rand_seed)
clnt_list = np.arange(n_clnt)
np.random.shuffle(clnt_list)
selected_clnts = clnt_list[:int(act_prob*n_clnt)]
print('Selected Clients: %s' %(', '.join(['%2d' %item for item in selected_clnts])))
server_c_state = np.mean(c_state_list, axis=0)
for clnt in selected_clnts:
print('---- Training client %d' %clnt)
trn_x = clnt_x[clnt]
trn_y = clnt_y[clnt]
cur_model = model_func().to(device)
cur_model.load_state_dict(copy.deepcopy(dict(avg_model.named_parameters())))
for params in cur_model.parameters():
params.requires_grad = True
curr_state_params_diff = torch.tensor(-c_state_list[clnt] + server_c_state, dtype=torch.float32, device=device)
cur_model = train_SCAF_meta_model_MAML(curr_state_params_diff,model_func, cur_model, trn_x, trn_y,
num_grad_step, meta_learning_rate,
learning_rate * (lr_decay ** i), batch_size, K, print_per,
weight_decay, data_obj.dataset)
is_diverged = is_model_NaN(cur_model)
if is_diverged:
# If the client model diverged (NaN), keep the server model as this client's update and leave the control state unchanged.
clnt_params_list[clnt] = np.copy(server_model_param)
tst_perf_all[i][-1] += 1
else:
clnt_params_list[clnt] = get_mdl_params([cur_model], n_par)[0].cpu().numpy()
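# SCAFFOLD control-variate update, as in train_SCAFFOLD above.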
c_state_list[clnt] += (-server_c_state + 1/K/learning_rate * (server_model_param - clnt_params_list[clnt]))
server_model_param = np.mean(clnt_params_list[selected_clnts], axis = 0)
avg_model = set_client_from_params(model_func().to(device),
torch.tensor(server_model_param, dtype=torch.float32).to(device))
avg_all = np.mean(clnt_params_list, axis = 0)
all_model = set_client_from_params(model_func().to(device), torch.tensor(avg_all, dtype=torch.float32).to(device))
[list_1, list_2, list_3, list_4] = get_all_results_maml(meta_learning_rate,
num_grad_step, data_obj.clnt_x, data_obj.clnt_y, data_obj.tst_x,
data_obj.tst_y, data_obj.dataset, model_func, avg_model, all_model, fast_exec, i)
tst_perf_sel[i] = list_1; tst_perf_all[i,:len(list_2)] = list_2
trn_perf_sel[i] = list_3; trn_perf_all[i] = list_4
# Freeze model
for params in avg_model.parameters():
params.requires_grad = False
if ((i+1) % save_period == 0):
if save_models:
torch.save(avg_model.state_dict(), 'Model/%s/%s/%dcom_sel.pt' %(data_obj.name, suffix, (i+1)))
torch.save(all_model.state_dict(), 'Model/%s/%s/%dcom_all.pt' %(data_obj.name, suffix, (i+1)))
np.save('Model/%s/%s/%d_clnt_params_list.npy' %(data_obj.name, suffix, (i+1)), clnt_params_list)
np.save('Model/%s/%s/%d_c_state_list.npy' %(data_obj.name, suffix, (i+1)), c_state_list)
np.save('Model/%s/%s/%dcom_trn_perf_sel.npy' %(data_obj.name, suffix, (i+1)), trn_perf_sel[:i+1])
np.save('Model/%s/%s/%dcom_tst_perf_sel.npy' %(data_obj.name, suffix, (i+1)), tst_perf_sel[:i+1])
np.save('Model/%s/%s/%dcom_trn_perf_all.npy' %(data_obj.name, suffix, (i+1)), trn_perf_all[:i+1])
np.save('Model/%s/%s/%dcom_tst_perf_all.npy' %(data_obj.name, suffix, (i+1)), tst_perf_all[:i+1])
if (i+1) > save_period:
if os.path.exists('Model/%s/%s/%dcom_trn_perf_sel.npy' %(data_obj.name, suffix, i+1-save_period)):
# Delete the previous saved arrays
os.remove('Model/%s/%s/%dcom_trn_perf_sel.npy' %(data_obj.name, suffix, i+1-save_period))
os.remove('Model/%s/%s/%dcom_tst_perf_sel.npy' %(data_obj.name, suffix, i+1-save_period))
os.remove('Model/%s/%s/%dcom_trn_perf_all.npy' %(data_obj.name, suffix, i+1-save_period))
os.remove('Model/%s/%s/%dcom_tst_perf_all.npy' %(data_obj.name, suffix, i+1-save_period))
if save_models:
os.remove('Model/%s/%s/%d_clnt_params_list.npy' %(data_obj.name, suffix, i+1-save_period))
os.remove('Model/%s/%s/%d_c_state_list.npy' %(data_obj.name, suffix, i+1-save_period))
if ((i+1) % save_period == 0):
fed_mdls_sel[i//save_period] = avg_model
fed_mdls_all[i//save_period] = all_model
# Stop if every selected client diverged in each of the last int(1/act_prob) consecutive rounds, including this one.
failure_arr = tst_perf_all[:, -1]
total_fails = failure_arr[max(0, i + 1 - int(1/act_prob)):i + 1].sum()
print('Total failures in this round: %d' %tst_perf_all[i, -1])
if total_fails == int(act_prob*n_clnt)*int(1/act_prob):
break
return fed_mdls_sel, trn_perf_sel, tst_perf_sel, fed_mdls_all, trn_perf_all, tst_perf_all
####
def train_Meta_SCAFFOLD_Proto(data_obj, act_prob, learning_rate, batch_size, K, com_amount, print_per, weight_decay,
model_func, init_model, save_period, lr_decay,
rand_seed=0, save_models=False, fast_exec=False):
suffix = 'PFLSCAFD_Proto_S%d_F%f_Lr%f_B%d_K%d_W%f_lrdecay%f_seed%d' %(save_period, act_prob, learning_rate, batch_size, K, weight_decay, lr_decay,rand_seed)
n_clnt=data_obj.n_client
clnt_x = data_obj.clnt_x; clnt_y=data_obj.clnt_y
if (not os.path.exists('Model/%s/%s' %(data_obj.name, suffix))):
os.mkdir('Model/%s/%s' %(data_obj.name, suffix))
n_save_instances = int(com_amount / save_period)
fed_mdls_sel = list(range(n_save_instances)); fed_mdls_all = list(range(n_save_instances))
trn_perf_sel = np.zeros((com_amount, 4)); trn_perf_all = np.zeros((com_amount, 4))
tst_perf_sel = np.zeros((com_amount, 4)); tst_perf_all = np.zeros((com_amount, 5))
n_par = len(get_mdl_params([model_func()])[0])
c_state_list=np.zeros((n_clnt, n_par)).astype('float32')
init_par_list=get_mdl_params([init_model], n_par)[0].cpu().numpy()
clnt_params_list=np.ones(n_clnt).astype('float32').reshape(-1, 1) * init_par_list.reshape(1, -1) # n_clnt X n_par
saved_itr = -1
# Check if there are past saved iterates
for i in range(com_amount):
if os.path.exists('Model/%s/%s/%dcom_sel.pt' %(data_obj.name, suffix, i+1)):
saved_itr = i
if save_models:
###
fed_model = model_func()
fed_model.load_state_dict(torch.load('Model/%s/%s/%dcom_sel.pt' %(data_obj.name, suffix, i+1)))
fed_model.eval()
fed_model = fed_model.to(device)
# Freeze model
for params in fed_model.parameters():
params.requires_grad = False
fed_mdls_sel[saved_itr//save_period] = fed_model
###
fed_model = model_func()
fed_model.load_state_dict(torch.load('Model/%s/%s/%dcom_all.pt' %( data_obj.name, suffix, i+1)))
fed_model.eval()
fed_model = fed_model.to(device)
# Freeze model
for params in fed_model.parameters():
params.requires_grad = False
fed_mdls_all[saved_itr//save_period] = fed_model
if os.path.exists('Model/%s/%s/%dcom_trn_perf_sel.npy' %(data_obj.name, suffix, (i+1))):
trn_perf_sel[:i+1] = np.load('Model/%s/%s/%dcom_trn_perf_sel.npy' %(data_obj.name, suffix, (i+1)))
trn_perf_all[:i+1] = np.load('Model/%s/%s/%dcom_trn_perf_all.npy' %(data_obj.name, suffix, (i+1)))
tst_perf_sel[:i+1] = np.load('Model/%s/%s/%dcom_tst_perf_sel.npy' %(data_obj.name, suffix, (i+1)))
tst_perf_all[:i+1] = np.load('Model/%s/%s/%dcom_tst_perf_all.npy' %(data_obj.name, suffix, (i+1)))
if save_models:
clnt_params_list = np.load('Model/%s/%s/%d_clnt_params_list.npy' %(data_obj.name, suffix, i+1))
c_state_list= np.load('Model/%s/%s/%d_c_state_list.npy' %(data_obj.name, suffix, i+1))
if not os.path.exists('Model/%s/%s/%dcom_tst_perf_all.npy' %(data_obj.name, suffix, com_amount)):
avg_model = model_func().to(device)
if saved_itr == -1:
avg_model.load_state_dict(copy.deepcopy(dict(init_model.named_parameters())))
else:
avg_model.load_state_dict(torch.load('Model/%s/%s/%dcom_sel.pt' %(data_obj.name, suffix, (saved_itr+1))))
server_model_param = get_mdl_params([avg_model], n_par)[0].cpu().numpy()
for i in range(saved_itr+1, com_amount):
### Fix randomness
np.random.seed(i + rand_seed)
clnt_list = np.arange(n_clnt)
np.random.shuffle(clnt_list)
selected_clnts = clnt_list[:int(act_prob*n_clnt)]
print('Selected Clients: %s' %(', '.join(['%2d' %item for item in selected_clnts])))
server_c_state = np.mean(c_state_list, axis=0)
for clnt in selected_clnts:
print('---- Training client %d' %clnt)
trn_x = clnt_x[clnt]
trn_y = clnt_y[clnt]
cur_model = model_func().to(device)
cur_model.load_state_dict(copy.deepcopy(dict(avg_model.named_parameters())))
for params in cur_model.parameters():
params.requires_grad = True
curr_state_params_diff = torch.tensor(-c_state_list[clnt] + server_c_state, dtype=torch.float32, device=device)
cur_model = train_SCAF_proto_model(curr_state_params_diff, cur_model, trn_x, trn_y,
learning_rate*(lr_decay**i), batch_size, K, print_per,
weight_decay, data_obj.dataset)
is_diverged = is_model_NaN(cur_model)
if is_diverged:
# If the client model diverged (NaN), keep the server model as this client's update and leave the control state unchanged.
clnt_params_list[clnt] = np.copy(server_model_param)
tst_perf_all[i][-1] += 1
else:
clnt_params_list[clnt] = get_mdl_params([cur_model], n_par)[0].cpu().numpy()
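# SCAFFOLD control-variate update, as in train_SCAFFOLD above.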
c_state_list[clnt] += (-server_c_state + 1/K/learning_rate * (server_model_param - clnt_params_list[clnt]))
server_model_param = np.mean(clnt_params_list[selected_clnts], axis = 0)
avg_model = set_client_from_params(model_func().to(device),
torch.tensor(server_model_param, dtype=torch.float32).to(device))
avg_all = np.mean(clnt_params_list, axis = 0)
all_model = set_client_from_params(model_func().to(device), torch.tensor(avg_all, dtype=torch.float32).to(device))
[list_1, list_2, list_3, list_4] = get_all_results_proto(
data_obj.clnt_x, data_obj.clnt_y, data_obj.tst_x,
data_obj.tst_y, data_obj.dataset, model_func, avg_model, all_model, fast_exec, i)
tst_perf_sel[i] = list_1; tst_perf_all[i,:len(list_2)] = list_2
trn_perf_sel[i] = list_3; trn_perf_all[i] = list_4
# Freeze model
for params in avg_model.parameters():
params.requires_grad = False
if ((i+1) % save_period == 0):
if save_models:
torch.save(avg_model.state_dict(), 'Model/%s/%s/%dcom_sel.pt' %(data_obj.name, suffix, (i+1)))
torch.save(all_model.state_dict(), 'Model/%s/%s/%dcom_all.pt' %(data_obj.name, suffix, (i+1)))
np.save('Model/%s/%s/%d_clnt_params_list.npy' %(data_obj.name, suffix, (i+1)), clnt_params_list)
np.save('Model/%s/%s/%d_c_state_list.npy' %(data_obj.name, suffix, (i+1)), c_state_list)
np.save('Model/%s/%s/%dcom_trn_perf_sel.npy' %(data_obj.name, suffix, (i+1)), trn_perf_sel[:i+1])
np.save('Model/%s/%s/%dcom_tst_perf_sel.npy' %(data_obj.name, suffix, (i+1)), tst_perf_sel[:i+1])
np.save('Model/%s/%s/%dcom_trn_perf_all.npy' %(data_obj.name, suffix, (i+1)), trn_perf_all[:i+1])
np.save('Model/%s/%s/%dcom_tst_perf_all.npy' %(data_obj.name, suffix, (i+1)), tst_perf_all[:i+1])
if (i+1) > save_period:
if os.path.exists('Model/%s/%s/%dcom_trn_perf_sel.npy' %(data_obj.name, suffix, i+1-save_period)):
# Delete the previous saved arrays
os.remove('Model/%s/%s/%dcom_trn_perf_sel.npy' %(data_obj.name, suffix, i+1-save_period))
os.remove('Model/%s/%s/%dcom_tst_perf_sel.npy' %(data_obj.name, suffix, i+1-save_period))
os.remove('Model/%s/%s/%dcom_trn_perf_all.npy' %(data_obj.name, suffix, i+1-save_period))
os.remove('Model/%s/%s/%dcom_tst_perf_all.npy' %(data_obj.name, suffix, i+1-save_period))
if save_models:
os.remove('Model/%s/%s/%d_clnt_params_list.npy' %(data_obj.name, suffix, i+1-save_period))
os.remove('Model/%s/%s/%d_c_state_list.npy' %(data_obj.name, suffix, i+1-save_period))
if ((i+1) % save_period == 0):
fed_mdls_sel[i//save_period] = avg_model
fed_mdls_all[i//save_period] = all_model
# Stop if every selected client diverged in each of the last int(1/act_prob) consecutive rounds, including this one.
failure_arr = tst_perf_all[:, -1]
total_fails = failure_arr[max(0, i + 1 - int(1/act_prob)):i + 1].sum()
print('Total failures in this round: %d' %tst_perf_all[i, -1])
if total_fails == int(act_prob*n_clnt)*int(1/act_prob):
break
return fed_mdls_sel, trn_perf_sel, tst_perf_sel, fed_mdls_all, trn_perf_all, tst_perf_all | 58.084023 | 221 | 0.571693 | 13,188 | 91,250 | 3.61973 | 0.015999 | 0.055429 | 0.037832 | 0.091878 | 0.989798 | 0.987808 | 0.986447 | 0.985315 | 0.985315 | 0.984729 | 0 | 0.013272 | 0.296471 | 91,250 | 1,571 | 222 | 58.084023 | 0.73033 | 0.033666 | 0 | 0.954064 | 0 | 0.00265 | 0.104303 | 0.092312 | 0.001767 | 0 | 0 | 0 | 0 | 1 | 0.007951 | false | 0 | 0.003534 | 0 | 0.019435 | 0.039753 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
0d968203434504fbb34d039a02af7fe818d37c76 | 62 | py | Python | learning_python/modules/collateral/module_basics/use_module2b.py | fallenfuzz/pynet | 9624d83cca160fd325a34e838e4474c9b80fe2ab | [
"Apache-2.0"
] | 528 | 2015-01-07T15:28:51.000Z | 2022-03-27T09:45:37.000Z | learning_python/modules/collateral/module_basics/use_module2b.py | fallenfuzz/pynet | 9624d83cca160fd325a34e838e4474c9b80fe2ab | [
"Apache-2.0"
] | 19 | 2015-07-01T23:52:27.000Z | 2021-09-22T04:30:34.000Z | learning_python/modules/collateral/module_basics/use_module2b.py | fallenfuzz/pynet | 9624d83cca160fd325a34e838e4474c9b80fe2ab | [
"Apache-2.0"
] | 555 | 2015-01-18T07:21:43.000Z | 2022-03-20T21:25:22.000Z | from my_module2 import dns_ip
dns_ip()
dns_ip(dns="1.1.1.1")
| 12.4 | 29 | 0.725806 | 15 | 62 | 2.733333 | 0.466667 | 0.365854 | 0.585366 | 0.487805 | 0.439024 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 0.112903 | 62 | 4 | 30 | 15.5 | 0.654545 | 0 | 0 | 0 | 0 | 0 | 0.112903 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
0dac4d35123d8727e3cb1b07d809960edbde692e | 4,664 | py | Python | CodingInterview2/40_KLeastNumbers/test_kleast_numbers.py | hscspring/TheAlgorithms-Python | 5c2faea1d2d25a9a81a4786e053b0cc58ab46c6f | [
"MIT"
] | 10 | 2020-07-06T11:00:58.000Z | 2022-01-29T09:25:24.000Z | CodingInterview2/40_KLeastNumbers/test_kleast_numbers.py | hscspring/TheAlgorithms-Python | 5c2faea1d2d25a9a81a4786e053b0cc58ab46c6f | [
"MIT"
] | null | null | null | CodingInterview2/40_KLeastNumbers/test_kleast_numbers.py | hscspring/TheAlgorithms-Python | 5c2faea1d2d25a9a81a4786e053b0cc58ab46c6f | [
"MIT"
] | 3 | 2020-07-13T06:39:23.000Z | 2020-08-15T16:29:48.000Z | from kleast_numbers import get_kleast_partition
from kleast_numbers import get_kleast
from kleast_numbers import get_kleast_heap
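# The three variants return the same k smallest elements but arrange them differently, so the expected lists below encode each implementation's output order.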
def test_normal():
lst = [4, 5, 1, 3, 2]
assert get_kleast(lst, 0) == []
assert get_kleast(lst, 1) == [1]
assert get_kleast(lst, 2) == [2,1]
assert get_kleast(lst, 3) == [2,3,1]
assert get_kleast(lst, 5) == [4,5,1,3,2]
assert get_kleast(lst, 6) == []
def test_part_repeat():
lst = [4, 5, 1, 6, 2, 7, 2, 8]
assert get_kleast(lst, 0) == []
assert get_kleast(lst, 1) == [1]
assert get_kleast(lst, 2) == [2,1]
assert get_kleast(lst, 3) == [2,2,1]
assert get_kleast(lst, 5) == [4,5,1,2,2]
assert get_kleast(lst, 9) == []
def test_multi_repeat():
lst = [2,2,1,1,3,3]
assert get_kleast(lst, 0) == []
assert get_kleast(lst, 1) == [1]
assert get_kleast(lst, 2) == [1,1]
assert get_kleast(lst, 3) == [1,2,1]
assert get_kleast(lst, 5) == [2,2,1,1,3]
assert get_kleast(lst, 7) == []
def test_all_repeat():
lst = [2,2,2,2,2,2]
assert get_kleast(lst, 0) == []
assert get_kleast(lst, 1) == [2]
assert get_kleast(lst, 2) == [2,2]
assert get_kleast(lst, 3) == [2,2,2]
assert get_kleast(lst, 7) == []
def test_one():
lst = [1]
assert get_kleast(lst, 0) == []
assert get_kleast(lst, 1) == [1]
assert get_kleast(lst, 2) == []
def test_none():
lst = []
assert get_kleast(lst, 0) == []
assert get_kleast(lst, 1) == []
def test_normal_heap():
lst = [4, 5, 1, 3, 2]
assert get_kleast_heap(lst, 0) == []
assert get_kleast_heap(lst, 1) == [1]
assert get_kleast_heap(lst, 2) == [2,1]
assert get_kleast_heap(lst, 3) == [3,2,1]
assert get_kleast_heap(lst, 5) == [5,4,3,2,1]
assert get_kleast_heap(lst, 6) == []
def test_part_repeat_heap():
lst = [4, 5, 1, 6, 2, 7, 2, 8]
assert get_kleast_heap(lst, 0) == []
assert get_kleast_heap(lst, 1) == [1]
assert get_kleast_heap(lst, 2) == [2,1]
assert get_kleast_heap(lst, 3) == [2,2,1]
assert get_kleast_heap(lst, 5) == [5,4,2,2,1]
assert get_kleast_heap(lst, 9) == []
def test_multi_repeat_heap():
lst = [2,2,1,1,3,3]
assert get_kleast_heap(lst, 0) == []
assert get_kleast_heap(lst, 1) == [1]
assert get_kleast_heap(lst, 2) == [1,1]
assert get_kleast_heap(lst, 3) == [2,1,1]
assert get_kleast_heap(lst, 5) == [3,2,2,1,1]
assert get_kleast_heap(lst, 7) == []
def test_all_repeat_heap():
lst = [2,2,2,2,2,2]
assert get_kleast_heap(lst, 0) == []
assert get_kleast_heap(lst, 1) == [2]
assert get_kleast_heap(lst, 2) == [2,2]
assert get_kleast_heap(lst, 3) == [2,2,2]
assert get_kleast_heap(lst, 7) == []
def test_one_heap():
lst = [1]
assert get_kleast_heap(lst, 0) == []
assert get_kleast_heap(lst, 1) == [1]
assert get_kleast_heap(lst, 2) == []
def test_none_heap():
lst = []
assert get_kleast_heap(lst, 0) == []
assert get_kleast_heap(lst, 1) == []
def test_normal_recursion():
lst = [4, 5, 1, 3, 2]
assert get_kleast_partition(lst, 0) == []
assert get_kleast_partition(lst, 1) == [1]
assert get_kleast_partition(lst, 2) == [1,2]
assert get_kleast_partition(lst, 3) == [1,3,2]
assert get_kleast_partition(lst, 5) == [4,1,3,2,5]
assert get_kleast_partition(lst, 6) == []
def test_part_repeat_recursion():
lst = [4, 5, 1, 6, 2, 7, 2, 8]
assert get_kleast_partition(lst, 0) == []
assert get_kleast_partition(lst, 1) == [1]
assert get_kleast_partition(lst, 2) == [1,2]
assert get_kleast_partition(lst, 3) == [1,2,2]
assert get_kleast_partition(lst, 5) == [4,1,2,2,5]
assert get_kleast_partition(lst, 9) == []
def test_multi_repeat_recursion():
lst = [2,2,1,1,3,3]
assert get_kleast_partition(lst, 0) == []
assert get_kleast_partition(lst, 1) == [1]
assert get_kleast_partition(lst, 2) == [1,1]
assert get_kleast_partition(lst, 3) == [1,1,2]
assert get_kleast_partition(lst, 5) == [2,2,1,1,3]
assert get_kleast_partition(lst, 7) == []
def test_all_repeat_recursion():
lst = [2,2,2,2,2,2]
assert get_kleast_partition(lst, 0) == []
assert get_kleast_partition(lst, 1) == [2]
assert get_kleast_partition(lst, 2) == [2,2]
assert get_kleast_partition(lst, 3) == [2,2,2]
assert get_kleast_partition(lst, 7) == []
def test_one_recursion():
lst = [1]
assert get_kleast_partition(lst, 0) == []
assert get_kleast_partition(lst, 1) == [1]
assert get_kleast_partition(lst, 2) == []
def test_none_recursion():
lst = []
assert get_kleast_partition(lst, 0) == []
assert get_kleast_partition(lst, 1) == []
| 30.48366 | 54 | 0.609134 | 787 | 4,664 | 3.3723 | 0.036849 | 0.295026 | 0.474755 | 0.186888 | 0.974755 | 0.925772 | 0.821402 | 0.729842 | 0.614167 | 0.558026 | 0 | 0.076797 | 0.212693 | 4,664 | 152 | 55 | 30.684211 | 0.64597 | 0 | 0 | 0.487805 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.682927 | 1 | 0.146341 | false | 0 | 0.02439 | 0 | 0.170732 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
0ddf5443a63b199e2e9716b90a54605e5ea2bfd4 | 1,685 | py | Python | examples/rd2cd_example.py | PolarisRisingWar/cogdl | ebe1e839de1b04bc0e677cb7412c91f3c65a85d6 | [
"MIT"
] | null | null | null | examples/rd2cd_example.py | PolarisRisingWar/cogdl | ebe1e839de1b04bc0e677cb7412c91f3c65a85d6 | [
"MIT"
] | null | null | null | examples/rd2cd_example.py | PolarisRisingWar/cogdl | ebe1e839de1b04bc0e677cb7412c91f3c65a85d6 | [
"MIT"
] | null | null | null | import sys
sys.path.insert(0,'whj_code2/cogdl_fork/cogdl')
from cogdl import experiment
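# One experiment call per rd2cd dataset: GCN node classification with cogdl's experiment API.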
experiment(task="node_classification", dataset="rd2cd_Github", model="gcn")
experiment(task="node_classification", dataset="rd2cd_Elliptic", model="gcn")
experiment(task="node_classification", dataset="rd2cd_Film", model="gcn")
experiment(task="node_classification", dataset="rd2cd_Wiki", model="gcn")
experiment(task="node_classification", dataset="rd2cd_Clothing", model="gcn")
experiment(task="node_classification", dataset="rd2cd_Electronics", model="gcn")
experiment(task="node_classification", dataset="rd2cd_Dblp", model="gcn")
experiment(task="node_classification", dataset="rd2cd_Yelpchi", model="gcn")
experiment(task="node_classification", dataset="rd2cd_Alpha", model="gcn")
experiment(task="node_classification", dataset="rd2cd_Weibo", model="gcn")
experiment(task="node_classification", dataset="rd2cd_bgp", model="gcn")
experiment(task="node_classification", dataset="rd2cd_ssn5", model="gcn")
experiment(task="node_classification", dataset="rd2cd_ssn7", model="gcn")
experiment(task="node_classification", dataset="rd2cd_chameleon", model="gcn")
experiment(task="node_classification", dataset="rd2cd_squirrel", model="gcn")
experiment(task="node_classification", dataset="rd2cd_Aids", model="gcn")
experiment(task="node_classification", dataset="rd2cd_Nba", model="gcn")
experiment(task="node_classification", dataset="rd2cd_Wisconsin", model="gcn")
experiment(task="node_classification", dataset="rd2cd_Texas", model="gcn")
experiment(task="node_classification", dataset="rd2cd_Cornell", model="gcn")
experiment(task="node_classification", dataset="rd2cd_Pokec_z", model="gcn")
| 62.407407 | 80 | 0.789911 | 205 | 1,685 | 6.273171 | 0.195122 | 0.228616 | 0.293935 | 0.522551 | 0.842924 | 0.842924 | 0.808709 | 0.808709 | 0 | 0 | 0 | 0.015499 | 0.04273 | 1,685 | 26 | 81 | 64.807692 | 0.781773 | 0 | 0 | 0 | 0 | 0 | 0.438576 | 0.01543 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.083333 | 0 | 0.083333 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
21d1c7d3e00dece999c03e8f3963b6aae8285403 | 222 | py | Python | gluon/packages/dal/pydal/representers/mysql.py | guadaltech/web2py-ruben | 45e0f4f316774e707a3075f23e3f8b9fed00c387 | [
"BSD-3-Clause"
] | null | null | null | gluon/packages/dal/pydal/representers/mysql.py | guadaltech/web2py-ruben | 45e0f4f316774e707a3075f23e3f8b9fed00c387 | [
"BSD-3-Clause"
] | null | null | null | gluon/packages/dal/pydal/representers/mysql.py | guadaltech/web2py-ruben | 45e0f4f316774e707a3075f23e3f8b9fed00c387 | [
"BSD-3-Clause"
] | null | null | null | from ..adapters.mysql import MySQL
from .base import SQLRepresenter, JSONRepresenter
from . import representers
@representers.register_for(MySQL)
class MySQLRepresenter(SQLRepresenter, JSONRepresenter):
pass
| 24.666667 | 57 | 0.792793 | 22 | 222 | 7.954545 | 0.590909 | 0.331429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.144144 | 222 | 8 | 58 | 27.75 | 0.921053 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.166667 | 0.5 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 7 |
df57d18b646060b5069edcdd4221dc401abab921 | 74 | py | Python | fisher_exact/__init__.py | kunc/fisher_exact | 4364cff20e2276d26115b39ff1496c31d7fcea1c | [
"MIT"
] | null | null | null | fisher_exact/__init__.py | kunc/fisher_exact | 4364cff20e2276d26115b39ff1496c31d7fcea1c | [
"MIT"
] | null | null | null | fisher_exact/__init__.py | kunc/fisher_exact | 4364cff20e2276d26115b39ff1496c31d7fcea1c | [
"MIT"
] | null | null | null | from .fisher_exact import _fisher_exact
from .backend import fisher_exact
| 24.666667 | 39 | 0.864865 | 11 | 74 | 5.454545 | 0.454545 | 0.55 | 0.566667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108108 | 74 | 2 | 40 | 37 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
df8516c3639ff4b2b1b219a92fbdf04f4a7f353c | 45,894 | py | Python | vmware_nsxlib/tests/unit/v3/test_policy_resources.py | mail2nsrajesh/vmware-nsxlib | 3163126c450a092a5720e59a8443d52adfbe0610 | [
"Apache-2.0"
] | null | null | null | vmware_nsxlib/tests/unit/v3/test_policy_resources.py | mail2nsrajesh/vmware-nsxlib | 3163126c450a092a5720e59a8443d52adfbe0610 | [
"Apache-2.0"
] | null | null | null | vmware_nsxlib/tests/unit/v3/test_policy_resources.py | mail2nsrajesh/vmware-nsxlib | 3163126c450a092a5720e59a8443d52adfbe0610 | [
"Apache-2.0"
] | null | null | null | # Copyright 2017 VMware, Inc.
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
import mock
import unittest
from vmware_nsxlib.tests.unit.v3 import nsxlib_testcase
from vmware_nsxlib import v3
from vmware_nsxlib.v3 import policy_constants
from vmware_nsxlib.v3 import policy_defs
TEST_TENANT = 'test'
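# Each test builds the expected policy_defs definition and asserts that the mocked policy API was invoked with a matching definition.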
class NsxPolicyLibTestCase(unittest.TestCase):
def setUp(self, *args, **kwargs):
super(NsxPolicyLibTestCase, self).setUp()
nsxlib_config = nsxlib_testcase.get_default_nsxlib_config()
self.policy_lib = v3.NsxPolicyLib(nsxlib_config)
self.policy_api = self.policy_lib.policy_api
self.maxDiff = None
def _compare_def(self, expected_def, actual_def):
# verify the resource definition class
self.assertEqual(expected_def.__class__, actual_def.__class__)
# verify the resource definition tenant
self.assertEqual(expected_def.tenant, actual_def.tenant)
# verify the resource definition values
self.assertEqual(expected_def.get_obj_dict(),
actual_def.get_obj_dict())
def assert_called_with_def(self, mock_api, expected_def, call_num=0):
# verify the api was called
mock_api.assert_called()
actual_def = mock_api.call_args_list[call_num][0][0]
self._compare_def(expected_def, actual_def)
def assert_called_with_defs(self, mock_api, expected_defs, call_num=0):
# verify the api & first resource definition
self.assert_called_with_def(mock_api, expected_defs[0],
call_num=call_num)
# compare the 2nd resource definition class & values
actual_def = mock_api.call_args_list[call_num][0][1]
expected_def = expected_defs[1]
self._compare_def(expected_def, actual_def)
def assert_called_with_def_and_dict(self, mock_api,
expected_def, expected_dict,
call_num=0):
# verify the api & resource definition
self.assert_called_with_def(mock_api, expected_def,
call_num=call_num)
        # compare the body dictionary attached to the resource definition
actual_dict = mock_api.call_args_list[call_num][0][0].body
self.assertEqual(expected_dict, actual_dict)
class TestPolicyDomain(NsxPolicyLibTestCase):
def setUp(self, *args, **kwargs):
super(TestPolicyDomain, self).setUp()
self.resourceApi = self.policy_lib.domain
def test_create_with_id(self):
name = 'd1'
description = 'desc'
id = '111'
with mock.patch.object(self.policy_api,
"create_or_update") as api_call:
self.resourceApi.create_or_overwrite(name,
domain_id=id,
description=description,
tenant=TEST_TENANT)
expected_def = policy_defs.DomainDef(domain_id=id,
name=name,
description=description,
tenant=TEST_TENANT)
self.assert_called_with_def(api_call, expected_def)
def test_create_without_id(self):
name = 'd1'
description = 'desc'
with mock.patch.object(self.policy_api,
"create_or_update") as api_call:
self.resourceApi.create_or_overwrite(name, description=description,
tenant=TEST_TENANT)
expected_def = policy_defs.DomainDef(domain_id=mock.ANY,
name=name,
description=description,
tenant=TEST_TENANT)
self.assert_called_with_def(api_call, expected_def)
def test_delete(self):
id = '111'
with mock.patch.object(self.policy_api, "delete") as api_call:
self.resourceApi.delete(id, tenant=TEST_TENANT)
expected_def = policy_defs.DomainDef(domain_id=id,
tenant=TEST_TENANT)
self.assert_called_with_def(api_call, expected_def)
def test_get(self):
id = '111'
with mock.patch.object(self.policy_api, "get") as api_call:
self.resourceApi.get(id, tenant=TEST_TENANT)
expected_def = policy_defs.DomainDef(domain_id=id,
tenant=TEST_TENANT)
self.assert_called_with_def(api_call, expected_def)
def test_get_by_name(self):
name = 'd1'
with mock.patch.object(
self.policy_api, "list",
return_value={'results': [{'display_name': name}]}) as api_call:
obj = self.resourceApi.get_by_name(name, tenant=TEST_TENANT)
self.assertIsNotNone(obj)
expected_def = policy_defs.DomainDef(tenant=TEST_TENANT)
self.assert_called_with_def(api_call, expected_def)
def test_list(self):
with mock.patch.object(self.policy_api, "list") as api_call:
self.resourceApi.list(tenant=TEST_TENANT)
expected_def = policy_defs.DomainDef(tenant=TEST_TENANT)
self.assert_called_with_def(api_call, expected_def)
def test_update(self):
id = '111'
name = 'new name'
description = 'new desc'
with mock.patch.object(self.policy_api,
"create_or_update") as update_call:
self.resourceApi.update(id,
name=name,
description=description,
tenant=TEST_TENANT)
expected_def = policy_defs.DomainDef(domain_id=id,
tenant=TEST_TENANT)
expected_dict = {'display_name': name,
'description': description}
self.assert_called_with_def_and_dict(
update_call, expected_def, expected_dict)
class TestPolicyGroup(NsxPolicyLibTestCase):
def setUp(self, *args, **kwargs):
super(TestPolicyGroup, self).setUp()
self.resourceApi = self.policy_lib.group
def test_create_with_id(self):
domain_id = '111'
name = 'g1'
description = 'desc'
id = '222'
with mock.patch.object(self.policy_api,
"create_or_update") as api_call:
self.resourceApi.create_or_overwrite(name,
domain_id,
group_id=id,
description=description,
tenant=TEST_TENANT)
expected_def = policy_defs.GroupDef(domain_id=domain_id,
group_id=id,
name=name,
description=description,
conditions=[],
tenant=TEST_TENANT)
self.assert_called_with_def(api_call, expected_def)
def test_create_without_id(self):
domain_id = '111'
name = 'g1'
description = 'desc'
with mock.patch.object(self.policy_api,
"create_or_update") as api_call:
self.resourceApi.create_or_overwrite(name, domain_id,
description=description,
tenant=TEST_TENANT)
expected_def = policy_defs.GroupDef(domain_id=domain_id,
group_id=mock.ANY,
name=name,
description=description,
conditions=[],
tenant=TEST_TENANT)
self.assert_called_with_def(api_call, expected_def)
def test_create_with_condition(self):
domain_id = '111'
name = 'g1'
description = 'desc'
cond_val = '123'
cond_op = policy_constants.CONDITION_OP_EQUALS
cond_member_type = policy_constants.CONDITION_MEMBER_NET
cond_key = policy_constants.CONDITION_KEY_TAG
with mock.patch.object(self.policy_api,
"create_or_update") as api_call:
self.resourceApi.create_or_overwrite(
name, domain_id, description=description,
cond_val=cond_val,
cond_op=cond_op,
cond_member_type=cond_member_type,
cond_key=cond_key,
tenant=TEST_TENANT)
exp_cond = policy_defs.Condition(value=cond_val,
key=cond_key,
operator=cond_op,
member_type=cond_member_type)
expected_def = policy_defs.GroupDef(domain_id=domain_id,
group_id=mock.ANY,
name=name,
description=description,
conditions=[exp_cond],
tenant=TEST_TENANT)
self.assert_called_with_def(api_call, expected_def)
def test_delete(self):
domain_id = '111'
id = '222'
with mock.patch.object(self.policy_api, "delete") as api_call:
self.resourceApi.delete(domain_id, id, tenant=TEST_TENANT)
expected_def = policy_defs.GroupDef(domain_id=domain_id,
group_id=id,
tenant=TEST_TENANT)
self.assert_called_with_def(api_call, expected_def)
def test_get(self):
domain_id = '111'
id = '222'
with mock.patch.object(self.policy_api, "get") as api_call:
self.resourceApi.get(domain_id, id, tenant=TEST_TENANT)
expected_def = policy_defs.GroupDef(domain_id=domain_id,
group_id=id,
tenant=TEST_TENANT)
self.assert_called_with_def(api_call, expected_def)
def test_get_by_name(self):
domain_id = '111'
name = 'g1'
with mock.patch.object(
self.policy_api, "list",
return_value={'results': [{'display_name': name}]}) as api_call:
obj = self.resourceApi.get_by_name(domain_id, name,
tenant=TEST_TENANT)
self.assertIsNotNone(obj)
expected_def = policy_defs.GroupDef(domain_id, tenant=TEST_TENANT)
self.assert_called_with_def(api_call, expected_def)
def test_list(self):
domain_id = '111'
with mock.patch.object(self.policy_api, "list") as api_call:
self.resourceApi.list(domain_id, tenant=TEST_TENANT)
expected_def = policy_defs.GroupDef(domain_id=domain_id,
tenant=TEST_TENANT)
self.assert_called_with_def(api_call, expected_def)
def test_update(self):
domain_id = '111'
id = '222'
name = 'new name'
description = 'new desc'
with mock.patch.object(self.policy_api,
"create_or_update") as update_call:
self.resourceApi.update(domain_id, id,
name=name,
description=description,
tenant=TEST_TENANT)
expected_def = policy_defs.GroupDef(domain_id=domain_id,
group_id=id,
tenant=TEST_TENANT)
expected_dict = {'display_name': name,
'description': description}
self.assert_called_with_def_and_dict(
update_call, expected_def, expected_dict)
def test_update_condition(self):
domain_id = '111'
id = '222'
cond_val = '123'
with mock.patch.object(self.policy_api, "get",
return_value={}) as get_call,\
mock.patch.object(self.policy_api,
"create_or_update") as update_call:
self.resourceApi.update_condition(domain_id, id,
cond_val=cond_val,
tenant=TEST_TENANT)
expected_def = policy_defs.GroupDef(domain_id=domain_id,
group_id=id,
tenant=TEST_TENANT)
exp_cond = {'resource_type': 'Condition',
'member_type': policy_constants.CONDITION_MEMBER_PORT,
'key': policy_constants.CONDITION_KEY_TAG,
'value': cond_val,
'operator': policy_constants.CONDITION_OP_EQUALS}
expected_dict = {'expression': [exp_cond]}
self.assert_called_with_def(get_call, expected_def)
self.assert_called_with_def_and_dict(
update_call, expected_def, expected_dict)
def test_remove_condition(self):
domain_id = '111'
id = '222'
old_cond = {'resource_type': 'Condition',
'member_type': policy_constants.CONDITION_MEMBER_PORT,
'key': policy_constants.CONDITION_KEY_TAG,
'value': 'abc',
'operator': policy_constants.CONDITION_OP_EQUALS}
with mock.patch.object(self.policy_api, "get",
return_value={'expression': [old_cond]}) as get_call,\
mock.patch.object(self.policy_api,
"create_or_update") as update_call:
self.resourceApi.update_condition(domain_id, id,
cond_val=None,
tenant=TEST_TENANT)
expected_def = policy_defs.GroupDef(domain_id=domain_id,
group_id=id,
tenant=TEST_TENANT)
expected_dict = {'expression': []}
self.assert_called_with_def(get_call, expected_def)
self.assert_called_with_def_and_dict(
update_call, expected_def, expected_dict)
class TestPolicyService(NsxPolicyLibTestCase):
def setUp(self, *args, **kwargs):
super(TestPolicyService, self).setUp()
self.resourceApi = self.policy_lib.service
def test_create(self):
name = 's1'
description = 'desc'
protocol = policy_constants.TCP
dest_ports = [81, 82]
with mock.patch.object(self.policy_api,
"create_with_parent") as api_call:
self.resourceApi.create_or_overwrite(name,
description=description,
protocol=protocol,
dest_ports=dest_ports,
tenant=TEST_TENANT)
exp_srv_def = policy_defs.ServiceDef(service_id=mock.ANY,
name=name,
description=description,
tenant=TEST_TENANT)
exp_entry_def = policy_defs.L4ServiceEntryDef(
service_id=mock.ANY,
name=name,
description=description,
protocol=protocol,
dest_ports=dest_ports,
tenant=TEST_TENANT)
self.assert_called_with_defs(
api_call, [exp_srv_def, exp_entry_def])
def test_delete(self):
id = '111'
with mock.patch.object(self.policy_api, "delete") as api_call:
self.resourceApi.delete(id, tenant=TEST_TENANT)
expected_def = policy_defs.ServiceDef(service_id=id,
tenant=TEST_TENANT)
self.assert_called_with_def(api_call, expected_def)
def test_get(self):
id = '111'
with mock.patch.object(self.policy_api, "get") as api_call:
self.resourceApi.get(id, tenant=TEST_TENANT)
expected_def = policy_defs.ServiceDef(service_id=id,
tenant=TEST_TENANT)
self.assert_called_with_def(api_call, expected_def)
def test_get_by_name(self):
name = 's1'
with mock.patch.object(
self.policy_api, "list",
return_value={'results': [{'display_name': name}]}) as api_call:
obj = self.resourceApi.get_by_name(name, tenant=TEST_TENANT)
self.assertIsNotNone(obj)
expected_def = policy_defs.ServiceDef(tenant=TEST_TENANT)
self.assert_called_with_def(api_call, expected_def)
def test_list(self):
with mock.patch.object(self.policy_api, "list") as api_call:
self.resourceApi.list(tenant=TEST_TENANT)
expected_def = policy_defs.ServiceDef(tenant=TEST_TENANT)
self.assert_called_with_def(api_call, expected_def)
def test_update(self):
id = '111'
name = 'new name'
description = 'new desc'
with mock.patch.object(self.policy_api, "get",
return_value={}) as get_call,\
mock.patch.object(self.policy_api,
"create_or_update") as update_call:
self.resourceApi.update(id,
name=name,
description=description,
tenant=TEST_TENANT)
expected_def = policy_defs.ServiceDef(service_id=id,
tenant=TEST_TENANT)
expected_dict = {'display_name': name,
'description': description,
'service_entries': []}
self.assert_called_with_def(get_call, expected_def)
self.assert_called_with_def_and_dict(
update_call, expected_def, expected_dict)
def test_update_entry(self):
id = '111'
protocol = 'udp'
dest_ports = [555]
service_entry_id = '222'
service_entry = {'id': service_entry_id}
with mock.patch.object(
self.policy_api, "get",
return_value={'service_entries': [service_entry]}) as get_call,\
mock.patch.object(self.policy_api,
"create_or_update") as update_call:
self.resourceApi.update(id,
protocol=protocol,
dest_ports=dest_ports,
tenant=TEST_TENANT)
# get will be called for the entire service
expected_def = policy_defs.ServiceDef(service_id=id,
tenant=TEST_TENANT)
self.assert_called_with_def(get_call, expected_def)
# update will be called for the service entry only
expected_entry_def = policy_defs.L4ServiceEntryDef(
service_id=id,
service_entry_id=service_entry_id,
tenant=TEST_TENANT)
expected_entry_dict = {'id': service_entry_id,
'l4_protocol': protocol.upper(),
'destination_ports': dest_ports}
self.assert_called_with_def_and_dict(
update_call, expected_entry_def, expected_entry_dict)
def test_update_all(self):
id = '111'
name = 'new name'
description = 'new desc'
protocol = 'udp'
dest_ports = [555]
service_entry_id = '222'
service_entry = {'id': service_entry_id}
with mock.patch.object(
self.policy_api, "get",
return_value={'service_entries': [service_entry]}) as get_call,\
mock.patch.object(self.policy_api,
"create_or_update") as update_call,\
mock.patch.object(self.policy_api, "list",
return_value={'results': []}):
self.resourceApi.update(id,
name=name,
description=description,
protocol=protocol,
dest_ports=dest_ports,
tenant=TEST_TENANT)
# get will be called for the entire service
expected_def = policy_defs.ServiceDef(service_id=id,
tenant=TEST_TENANT)
self.assert_called_with_def(get_call, expected_def)
# update will be called for the service and entry (2 calls)
expected_dict = {'display_name': name,
'description': description,
'service_entries': []}
self.assert_called_with_def_and_dict(
update_call, expected_def, expected_dict)
expected_entry_def = policy_defs.L4ServiceEntryDef(
service_id=id,
service_entry_id=service_entry_id,
tenant=TEST_TENANT)
expected_entry_dict = {'id': service_entry_id,
'display_name': name,
'description': description,
'l4_protocol': protocol.upper(),
'destination_ports': dest_ports}
self.assert_called_with_def_and_dict(
update_call, expected_entry_def, expected_entry_dict,
call_num=1)
class TestPolicyCommunicationProfile(NsxPolicyLibTestCase):
def setUp(self, *args, **kwargs):
super(TestPolicyCommunicationProfile, self).setUp()
self.resourceApi = self.policy_lib.comm_profile
def test_create(self):
name = 'c1'
description = 'desc'
service_id = '333'
action = 'DENY'
with mock.patch.object(self.policy_api,
"create_with_parent") as api_call:
self.resourceApi.create_or_overwrite(name, description=description,
services=[service_id],
action=action,
tenant=TEST_TENANT)
exp_srv_def = policy_defs.CommunicationProfileDef(
profile_id=mock.ANY,
name=name,
description=description,
tenant=TEST_TENANT)
exp_entry_def = policy_defs.CommunicationProfileEntryDef(
profile_id=mock.ANY,
name=name,
description=description,
services=[service_id],
action=action,
tenant=TEST_TENANT)
self.assert_called_with_defs(
api_call, [exp_srv_def, exp_entry_def])
def test_delete(self):
id = '111'
with mock.patch.object(self.policy_api, "delete") as api_call:
self.resourceApi.delete(id, tenant=TEST_TENANT)
expected_def = policy_defs.CommunicationProfileDef(
profile_id=id, tenant=TEST_TENANT)
self.assert_called_with_def(api_call, expected_def)
def test_get(self):
id = '111'
with mock.patch.object(self.policy_api, "get") as api_call:
self.resourceApi.get(id, tenant=TEST_TENANT)
expected_def = policy_defs.CommunicationProfileDef(
profile_id=id, tenant=TEST_TENANT)
self.assert_called_with_def(api_call, expected_def)
def test_get_by_name(self):
name = 'c1'
with mock.patch.object(
self.policy_api, "list",
return_value={'results': [{'display_name': name}]}) as api_call:
obj = self.resourceApi.get_by_name(name, tenant=TEST_TENANT)
self.assertIsNotNone(obj)
expected_def = policy_defs.CommunicationProfileDef(
tenant=TEST_TENANT)
self.assert_called_with_def(api_call, expected_def)
def test_list(self):
with mock.patch.object(self.policy_api, "list") as api_call:
self.resourceApi.list(tenant=TEST_TENANT)
expected_def = policy_defs.CommunicationProfileDef(
tenant=TEST_TENANT)
self.assert_called_with_def(api_call, expected_def)
def test_update(self):
id = '111'
name = 'new name'
description = 'new desc'
with mock.patch.object(self.policy_api, "get",
return_value={}) as get_call,\
mock.patch.object(self.policy_api,
"create_or_update") as update_call:
self.resourceApi.update(id,
name=name,
description=description,
tenant=TEST_TENANT)
expected_def = policy_defs.CommunicationProfileDef(
profile_id=id, tenant=TEST_TENANT)
expected_dict = {'display_name': name,
'description': description,
'communication_profile_entries': []}
self.assert_called_with_def(get_call, expected_def)
self.assert_called_with_def_and_dict(
update_call, expected_def, expected_dict)
def test_update_entry(self):
id = '111'
service_id = '333'
action = 'deny'
entry_id = '222'
profile_entry = {'id': entry_id}
entries_dict = {'communication_profile_entries': [profile_entry]}
with mock.patch.object(
self.policy_api, "get", return_value=entries_dict) as get_call,\
mock.patch.object(self.policy_api,
"create_or_update") as update_call:
self.resourceApi.update(id,
services=[service_id],
action=action,
tenant=TEST_TENANT)
            # get will be called for the entire profile
expected_def = policy_defs.CommunicationProfileDef(
profile_id=id, tenant=TEST_TENANT)
self.assert_called_with_def(get_call, expected_def)
            # update will be called for the profile entry only
expected_entry_def = policy_defs.CommunicationProfileEntryDef(
profile_id=id,
profile_entry_id=entry_id,
tenant=TEST_TENANT)
expected_entry_dict = {'id': entry_id,
'action': action.upper(),
'services': [service_id]}
self.assert_called_with_def_and_dict(
update_call, expected_entry_def, expected_entry_dict)
def test_update_all(self):
id = '111'
name = 'new name'
description = 'new desc'
service_id = '333'
action = 'deny'
entry_id = '222'
profile_entry = {'id': entry_id}
entries_dict = {'communication_profile_entries': [profile_entry]}
with mock.patch.object(
self.policy_api, "get", return_value=entries_dict) as get_call,\
mock.patch.object(self.policy_api,
"create_or_update") as update_call:
self.resourceApi.update(id,
name=name,
description=description,
services=[service_id],
action=action,
tenant=TEST_TENANT)
            # get will be called for the entire profile
expected_def = policy_defs.CommunicationProfileDef(
profile_id=id, tenant=TEST_TENANT)
self.assert_called_with_def(get_call, expected_def)
            # update will be called for the profile and entry (2 calls)
expected_dict = {'display_name': name,
'description': description,
'communication_profile_entries': []}
self.assert_called_with_def_and_dict(
update_call, expected_def, expected_dict)
expected_entry_def = policy_defs.CommunicationProfileEntryDef(
profile_id=id,
profile_entry_id=entry_id,
tenant=TEST_TENANT)
expected_entry_dict = {'id': entry_id,
'display_name': name,
'description': description,
'action': action.upper(),
'services': [service_id]}
self.assert_called_with_def_and_dict(
update_call, expected_entry_def, expected_entry_dict,
call_num=1)
class TestPolicyCommunicationMap(NsxPolicyLibTestCase):
def setUp(self, *args, **kwargs):
super(TestPolicyCommunicationMap, self).setUp()
self.resourceApi = self.policy_lib.comm_map
def test_create(self):
domain_id = '111'
name = 'cm1'
description = 'desc'
source_group = 'g1'
dest_group = 'g2'
seq_num = 7
profile_id = 'c1'
list_return_value = {'results': [{'sequence_number': 1}]}
with mock.patch.object(self.policy_api,
"create_or_update") as api_call,\
mock.patch.object(self.policy_api, "list",
return_value=list_return_value):
self.resourceApi.create_or_overwrite(name, domain_id,
description=description,
sequence_number=seq_num,
profile_id=profile_id,
source_groups=[source_group],
dest_groups=[dest_group],
tenant=TEST_TENANT)
expected_def = policy_defs.CommunicationMapEntryDef(
domain_id=domain_id,
map_id=mock.ANY,
name=name,
description=description,
sequence_number=seq_num,
profile_id=profile_id,
source_groups=[source_group],
dest_groups=[dest_group],
tenant=TEST_TENANT)
self.assert_called_with_def(api_call, expected_def)
def test_create_first_seqnum(self):
domain_id = '111'
name = 'cm1'
description = 'desc'
source_group = 'g1'
dest_group = 'g2'
profile_id = 'c1'
with mock.patch.object(self.policy_api,
"create_or_update") as api_call, \
mock.patch.object(self.resourceApi, "list", return_value=[]):
self.resourceApi.create_or_overwrite(name, domain_id,
description=description,
profile_id=profile_id,
source_groups=[source_group],
dest_groups=[dest_group],
tenant=TEST_TENANT)
expected_def = policy_defs.CommunicationMapEntryDef(
domain_id=domain_id,
map_id=mock.ANY,
name=name,
description=description,
sequence_number=1,
profile_id=profile_id,
source_groups=[source_group],
dest_groups=[dest_group],
tenant=TEST_TENANT)
self.assert_called_with_def(api_call, expected_def)
def test_create_without_seqnum(self):
domain_id = '111'
name = 'cm1'
description = 'desc'
source_group = 'g1'
dest_group = 'g2'
profile_id = 'c1'
with mock.patch.object(self.policy_api,
"create_with_parent") as api_call, \
mock.patch.object(self.resourceApi, "_get_last_seq_num",
return_value=-1):
self.resourceApi.create_or_overwrite(name, domain_id,
description=description,
profile_id=profile_id,
source_groups=[source_group],
dest_groups=[dest_group],
tenant=TEST_TENANT)
expected_map_def = policy_defs.CommunicationMapDef(
domain_id=domain_id,
tenant=TEST_TENANT)
expected_entry_def = policy_defs.CommunicationMapEntryDef(
domain_id=domain_id,
map_id=mock.ANY,
name=name,
description=description,
sequence_number=1,
profile_id=profile_id,
source_groups=[source_group],
dest_groups=[dest_group],
tenant=TEST_TENANT)
self.assert_called_with_defs(
api_call,
[expected_map_def, expected_entry_def])
def test_delete(self):
domain_id = '111'
id = '222'
with mock.patch.object(self.policy_api, "delete") as api_call:
self.resourceApi.delete(domain_id, id, tenant=TEST_TENANT)
expected_def = policy_defs.CommunicationMapEntryDef(
domain_id=domain_id,
map_id=id,
tenant=TEST_TENANT)
self.assert_called_with_def(api_call, expected_def)
def test_get(self):
domain_id = '111'
id = '222'
with mock.patch.object(self.policy_api, "get") as api_call:
self.resourceApi.get(domain_id, id, tenant=TEST_TENANT)
expected_def = policy_defs.CommunicationMapEntryDef(
domain_id=domain_id,
map_id=id,
tenant=TEST_TENANT)
self.assert_called_with_def(api_call, expected_def)
def test_get_by_name(self):
domain_id = '111'
name = 'cm1'
with mock.patch.object(
self.policy_api, "list",
return_value={'results': [{'display_name': name}]}) as api_call:
obj = self.resourceApi.get_by_name(domain_id, name,
tenant=TEST_TENANT)
self.assertIsNotNone(obj)
expected_def = policy_defs.CommunicationMapEntryDef(
domain_id,
tenant=TEST_TENANT)
self.assert_called_with_def(api_call, expected_def)
def test_list(self):
domain_id = '111'
with mock.patch.object(self.policy_api, "list") as api_call:
self.resourceApi.list(domain_id, tenant=TEST_TENANT)
expected_def = policy_defs.CommunicationMapEntryDef(
domain_id=domain_id,
tenant=TEST_TENANT)
self.assert_called_with_def(api_call, expected_def)
def test_update(self):
domain_id = '111'
id = '222'
name = 'new name'
description = 'new desc'
source_group = 'ng1'
dest_group = 'ng2'
profile_id = 'nc1'
with mock.patch.object(self.policy_api, "get",
return_value={}) as get_call,\
mock.patch.object(self.policy_api,
"create_or_update") as update_call:
self.resourceApi.update(domain_id, id,
name=name,
description=description,
profile_id=profile_id,
source_groups=[source_group],
dest_groups=[dest_group],
tenant=TEST_TENANT)
expected_def = policy_defs.CommunicationMapEntryDef(
domain_id=domain_id,
map_id=id,
tenant=TEST_TENANT)
sgroup_path = "/%s/domains/%s/groups/%s" % (
TEST_TENANT, domain_id, source_group)
dgroup_path = "/%s/domains/%s/groups/%s" % (
TEST_TENANT, domain_id, dest_group)
profile_path = "/%s/communication-profiles/%s" % (
TEST_TENANT, profile_id)
expected_dict = {'display_name': name,
'description': description,
'communication_profile_path': profile_path,
'source_groups': [sgroup_path],
'destination_groups': [dgroup_path]}
self.assert_called_with_def(get_call, expected_def)
self.assert_called_with_def_and_dict(
update_call, expected_def, expected_dict)
class TestPolicyEnforcementPoint(NsxPolicyLibTestCase):
def setUp(self, *args, **kwargs):
super(TestPolicyEnforcementPoint, self).setUp()
self.resourceApi = self.policy_lib.enforcement_point
def test_create(self):
name = 'ep'
description = 'desc'
ip_address = '1.1.1.1'
username = 'admin'
password = 'zzz'
with mock.patch.object(self.policy_api,
"create_or_update") as api_call:
self.resourceApi.create_or_overwrite(
name, description=description,
ip_address=ip_address,
username=username,
password=password,
tenant=TEST_TENANT)
expected_def = policy_defs.EnforcementPointDef(
ep_id=mock.ANY,
name=name,
description=description,
ip_address=ip_address,
username=username,
password=password,
tenant=TEST_TENANT)
self.assert_called_with_def(api_call, expected_def)
def test_delete(self):
id = '111'
with mock.patch.object(self.policy_api, "delete") as api_call:
self.resourceApi.delete(id, tenant=TEST_TENANT)
expected_def = policy_defs.EnforcementPointDef(ep_id=id,
tenant=TEST_TENANT)
self.assert_called_with_def(api_call, expected_def)
def test_get(self):
id = '111'
with mock.patch.object(self.policy_api, "get") as api_call:
self.resourceApi.get(id, tenant=TEST_TENANT)
expected_def = policy_defs.EnforcementPointDef(ep_id=id,
tenant=TEST_TENANT)
self.assert_called_with_def(api_call, expected_def)
def test_get_by_name(self):
name = 'ep1'
with mock.patch.object(
self.policy_api, "list",
return_value={'results': [{'display_name': name}]}) as api_call:
obj = self.resourceApi.get_by_name(name, tenant=TEST_TENANT)
self.assertIsNotNone(obj)
expected_def = policy_defs.EnforcementPointDef(tenant=TEST_TENANT)
self.assert_called_with_def(api_call, expected_def)
def test_list(self):
with mock.patch.object(self.policy_api, "list") as api_call:
self.resourceApi.list(tenant=TEST_TENANT)
expected_def = policy_defs.EnforcementPointDef(tenant=TEST_TENANT)
self.assert_called_with_def(api_call, expected_def)
def test_update(self):
id = '111'
name = 'new name'
username = 'admin'
password = 'zzz'
with mock.patch.object(self.policy_api,
"create_or_update") as update_call:
self.resourceApi.update(id,
name=name,
username=username,
password=password,
tenant=TEST_TENANT)
expected_def = policy_defs.EnforcementPointDef(ep_id=id,
tenant=TEST_TENANT)
expected_dict = {'display_name': name,
'username': username,
'password': password}
self.assert_called_with_def_and_dict(
update_call, expected_def, expected_dict)
class TestPolicyDeploymentMap(NsxPolicyLibTestCase):
def setUp(self, *args, **kwargs):
super(TestPolicyDeploymentMap, self).setUp()
self.resourceApi = self.policy_lib.deployment_map
def test_create(self):
name = 'map1'
description = 'desc'
domain_id = 'domain1'
ep_id = 'ep1'
with mock.patch.object(self.policy_api,
"create_or_update") as api_call:
self.resourceApi.create_or_overwrite(name,
description=description,
ep_id=ep_id,
domain_id=domain_id,
tenant=TEST_TENANT)
expected_def = policy_defs.DeploymentMapDef(
map_id=mock.ANY,
name=name,
description=description,
ep_id=ep_id,
domain_id=domain_id,
tenant=TEST_TENANT)
self.assert_called_with_def(api_call, expected_def)
def test_delete(self):
id = '111'
with mock.patch.object(self.policy_api, "delete") as api_call:
self.resourceApi.delete(id, tenant=TEST_TENANT)
expected_def = policy_defs.DeploymentMapDef(map_id=id,
tenant=TEST_TENANT)
self.assert_called_with_def(api_call, expected_def)
def test_get(self):
id = '111'
with mock.patch.object(self.policy_api, "get") as api_call:
self.resourceApi.get(id, tenant=TEST_TENANT)
expected_def = policy_defs.DeploymentMapDef(map_id=id,
tenant=TEST_TENANT)
self.assert_called_with_def(api_call, expected_def)
def test_get_by_name(self):
name = 'ep1'
with mock.patch.object(
self.policy_api, "list",
return_value={'results': [{'display_name': name}]}) as api_call:
obj = self.resourceApi.get_by_name(name, tenant=TEST_TENANT)
self.assertIsNotNone(obj)
expected_def = policy_defs.DeploymentMapDef(tenant=TEST_TENANT)
self.assert_called_with_def(api_call, expected_def)
def test_list(self):
with mock.patch.object(self.policy_api, "list") as api_call:
self.resourceApi.list(tenant=TEST_TENANT)
expected_def = policy_defs.DeploymentMapDef(tenant=TEST_TENANT)
self.assert_called_with_def(api_call, expected_def)
def test_update(self):
id = '111'
name = 'new name'
domain_id = 'domain2'
ep_id = 'ep2'
with mock.patch.object(self.policy_api,
"create_or_update") as update_call:
self.resourceApi.update(id,
name=name,
ep_id=ep_id,
domain_id=domain_id,
tenant=TEST_TENANT)
expected_def = policy_defs.DeploymentMapDef(map_id=id,
tenant=TEST_TENANT)
domain_path = "/%s/domains/%s" % (TEST_TENANT, domain_id)
ep_path = ("/%s/deploymentzones/default/"
"enforcementpoints/%s" % (TEST_TENANT, ep_id))
expected_dict = {'display_name': name,
'enforcement_point_paths': [ep_path],
'domain_path': domain_path}
self.assert_called_with_def_and_dict(
update_call, expected_def, expected_dict)
| 44.950049 | 85 | 0.531834 | 4,566 | 45,894 | 5.019711 | 0.053657 | 0.05192 | 0.078883 | 0.057592 | 0.891841 | 0.879188 | 0.855497 | 0.821117 | 0.807068 | 0.799956 | 0 | 0.00917 | 0.391729 | 45,894 | 1,020 | 86 | 44.994118 | 0.811864 | 0.028348 | 0 | 0.852477 | 0 | 0 | 0.047328 | 0.006059 | 0 | 0 | 0 | 0 | 0.091216 | 1 | 0.073198 | false | 0.006757 | 0.006757 | 0 | 0.088964 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
805597939eb4b9795418a5265468416a12040ace | 168 | py | Python | classic_gym/envs/__init__.py | Chachay/ClassicGym | 929b885114723ae4da53c4d12ccc24a829d0ecdd | [
"MIT"
] | 1 | 2020-11-17T12:30:01.000Z | 2020-11-17T12:30:01.000Z | classic_gym/envs/__init__.py | Chachay/ClassicGym | 929b885114723ae4da53c4d12ccc24a829d0ecdd | [
"MIT"
] | null | null | null | classic_gym/envs/__init__.py | Chachay/ClassicGym | 929b885114723ae4da53c4d12ccc24a829d0ecdd | [
"MIT"
] | null | null | null | from classic_gym.envs.cartpole_swing_up import CartPoleSwingUp
from classic_gym.envs.evaporator import Evaporator
from classic_gym.envs.mobile_robot import MobileRobot
| 42 | 62 | 0.892857 | 24 | 168 | 6 | 0.541667 | 0.229167 | 0.291667 | 0.375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.071429 | 168 | 3 | 63 | 56 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
80563428ede2ba1c2c68753748573012ad656de3 | 477 | py | Python | Keras_tensorflow_nightly/source2.7/tensorflow/tools/api/generator/api/keras/applications/nasnet/__init__.py | Con-Mi/lambda-packs | b23a8464abdd88050b83310e1d0e99c54dac28ab | [
"MIT"
] | 3 | 2019-04-01T11:03:04.000Z | 2019-12-31T02:17:15.000Z | Keras_tensorflow_nightly/source2.7/tensorflow/tools/api/generator/api/keras/applications/nasnet/__init__.py | Con-Mi/lambda-packs | b23a8464abdd88050b83310e1d0e99c54dac28ab | [
"MIT"
] | 1 | 2021-04-15T18:46:45.000Z | 2021-04-15T18:46:45.000Z | Keras_tensorflow_nightly/source2.7/tensorflow/tools/api/generator/api/keras/applications/nasnet/__init__.py | Con-Mi/lambda-packs | b23a8464abdd88050b83310e1d0e99c54dac28ab | [
"MIT"
] | 1 | 2021-09-23T13:43:07.000Z | 2021-09-23T13:43:07.000Z | """Imports for Python API.
This file is MACHINE GENERATED! Do not edit.
Generated by: tensorflow/tools/api/generator/create_python_api.py script.
"""
from tensorflow.python.keras._impl.keras.applications import NASNetLarge
from tensorflow.python.keras._impl.keras.applications import NASNetMobile
from tensorflow.python.keras._impl.keras.applications.densenet import decode_predictions
from tensorflow.python.keras._impl.keras.applications.inception_v3 import preprocess_input | 53 | 90 | 0.851153 | 64 | 477 | 6.203125 | 0.515625 | 0.141058 | 0.201511 | 0.251889 | 0.493703 | 0.493703 | 0.493703 | 0.261965 | 0 | 0 | 0 | 0.002252 | 0.069182 | 477 | 9 | 90 | 53 | 0.891892 | 0.29979 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
1d1b8a3fafcd840a428d81b63fac2e63e0d4c5dd | 1,828 | py | Python | tests/test_name.py | ofek/pyproject-validate | 7417874ed092770c076b44b57458135b32a2044d | [
"MIT"
] | 2 | 2022-02-21T18:04:50.000Z | 2022-02-22T04:03:46.000Z | tests/test_name.py | ofek/pyproject-validate | 7417874ed092770c076b44b57458135b32a2044d | [
"MIT"
] | null | null | null | tests/test_name.py | ofek/pyproject-validate | 7417874ed092770c076b44b57458135b32a2044d | [
"MIT"
] | null | null | null | class TestInvalidCharacters:
BEFORE = """\
[build-system]
requires = [
"hatchling",
]
build-backend = "hatchling.build"
[project]
name = "foo bar"
version = "0.0.1"
"""
def test_error(self, project_file, invoke):
project_file.write(self.BEFORE)
result = invoke()
assert result.code == 1, result.output
assert (
result.output
== """\
<<< naming >>>
error: must only contain ASCII letters/digits, underscores, hyphens, and periods
"""
)
def test_cannot_fix(self, project_file, invoke):
project_file.write(self.BEFORE)
result = invoke("--fix")
assert result.code == 1, result.output
assert (
result.output
== """\
<<< naming >>>
error: must only contain ASCII letters/digits, underscores, hyphens, and periods
"""
)
class TestNormalization:
BEFORE = """\
[build-system]
requires = [
"hatchling",
]
build-backend = "hatchling.build"
[project]
name = "Foo.bAr"
version = "0.0.1"
"""
AFTER = """\
[build-system]
requires = [
"hatchling",
]
build-backend = "hatchling.build"
[project]
name = "foo-bar"
version = "0.0.1"
"""
def test_error(self, project_file, invoke):
project_file.write(self.BEFORE)
result = invoke()
assert result.code == 1, result.output
assert (
result.output
== """\
<<< naming >>>
error: should be foo-bar
"""
)
def test_fix(self, project_file, invoke):
project_file.write(self.BEFORE)
result = invoke("--fix")
assert result.code == 0, result.output
assert not result.output
assert project_file.read() == self.AFTER
result = invoke()
assert result.code == 0, result.output
assert not result.output
| 19.446809 | 80 | 0.583151 | 199 | 1,828 | 5.286432 | 0.226131 | 0.114068 | 0.102662 | 0.079848 | 0.888783 | 0.877376 | 0.877376 | 0.877376 | 0.877376 | 0.877376 | 0 | 0.010574 | 0.275711 | 1,828 | 93 | 81 | 19.655914 | 0.783988 | 0 | 0 | 0.783784 | 0 | 0 | 0.347374 | 0 | 0 | 0 | 0 | 0 | 0.148649 | 1 | 0.054054 | false | 0 | 0 | 0 | 0.121622 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
1d82ca17cffba6ba683f6a716ecdeb32898c391c | 8,845 | py | Python | semantic-segmentation/lib/SegNet.py | bcrafton/icsrl-deep-learning | e3616982d1dda5f978d61d6591c91cb0da76ab02 | [
"MIT"
] | 1 | 2019-11-21T21:15:59.000Z | 2019-11-21T21:15:59.000Z | semantic-segmentation/lib/SegNet.py | bcrafton/icsrl-deep-learning | e3616982d1dda5f978d61d6591c91cb0da76ab02 | [
"MIT"
] | null | null | null | semantic-segmentation/lib/SegNet.py | bcrafton/icsrl-deep-learning | e3616982d1dda5f978d61d6591c91cb0da76ab02 | [
"MIT"
] | null | null | null |
import keras
import tensorflow as tf
import numpy as np
np.set_printoptions(threshold=1000)
from lib.Model import Model
from lib.Layer import Layer
from lib.ConvToFullyConnected import ConvToFullyConnected
from lib.FullyConnected import FullyConnected
from lib.Convolution import Convolution
from lib.MaxPool import MaxPool
from lib.AvgPool import AvgPool
from lib.Dropout import Dropout
from lib.FeedbackFC import FeedbackFC
from lib.FeedbackConv import FeedbackConv
from lib.Activation import Relu
from lib.ConvBlock import ConvBlock
from lib.VGGBlock import VGGBlock
from lib.MobileBlock import MobileBlock
from lib.BatchNorm import BatchNorm
from lib.DecodeBlock import DecodeBlock
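# The triple-quoted block below is an earlier 224x224 variant of SegNet, kept
# commented out for reference; the active 480x480 definition (with optional
# pretrained, frozen encoder weights) follows it.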
'''
def SegNet(batch_size, init='alexnet'):
###########################################################################################
l0 = BatchNorm(input_size=[batch_size, 224, 224, 3], name='bn0')
l1 = ConvBlock(input_shape=[batch_size, 224, 224, 3], filter_shape=[3, 3, 3, 32], strides=[1,2,2,1], init=init, name='block1')
l2 = MobileBlock(input_shape=[batch_size, 112, 112, 32], filter_shape=[32, 64], strides=[1,1,1,1], init=init, name='block2')
l3 = MobileBlock(input_shape=[batch_size, 112, 112, 64], filter_shape=[64, 128], strides=[1,2,2,1], init=init, name='block3')
l4 = MobileBlock(input_shape=[batch_size, 56, 56, 128], filter_shape=[128, 128], strides=[1,1,1,1], init=init, name='block4')
l5 = MobileBlock(input_shape=[batch_size, 56, 56, 128], filter_shape=[128, 256], strides=[1,2,2,1], init=init, name='block5')
l6 = MobileBlock(input_shape=[batch_size, 28, 28, 256], filter_shape=[256, 256], strides=[1,1,1,1], init=init, name='block6')
l7 = MobileBlock(input_shape=[batch_size, 28, 28, 256], filter_shape=[256, 512], strides=[1,2,2,1], init=init, name='block7')
l8 = MobileBlock(input_shape=[batch_size, 14, 14, 512], filter_shape=[512, 512], strides=[1,1,1,1], init=init, name='block8')
l9 = MobileBlock(input_shape=[batch_size, 14, 14, 512], filter_shape=[512, 512], strides=[1,1,1,1], init=init, name='block9')
l10 = MobileBlock(input_shape=[batch_size, 14, 14, 512], filter_shape=[512, 512], strides=[1,1,1,1], init=init, name='block10')
l11 = MobileBlock(input_shape=[batch_size, 14, 14, 512], filter_shape=[512, 512], strides=[1,1,1,1], init=init, name='block11')
l12 = MobileBlock(input_shape=[batch_size, 14, 14, 512], filter_shape=[512, 512], strides=[1,1,1,1], init=init, name='block12')
l13 = MobileBlock(input_shape=[batch_size, 14, 14, 512], filter_shape=[512, 1024], strides=[1,2,2,1], init=init, name='block13')
l14 = MobileBlock(input_shape=[batch_size, 7, 7, 1024], filter_shape=[1024, 1024], strides=[1,1,1,1], init=init, name='block14')
###########################################################################################
l15 = DecodeBlock(input_shape=[batch_size, 7, 7, 1024], filter_shape=[1024, 1024], ksize=1, init=init, name='block15')
l16 = DecodeBlock(input_shape=[batch_size, 7, 7, 1024], filter_shape=[1024, 512], ksize=2, init=init, name='block16')
l17 = DecodeBlock(input_shape=[batch_size, 14, 14, 512], filter_shape=[512, 512], ksize=1, init=init, name='block17')
l18 = DecodeBlock(input_shape=[batch_size, 14, 14, 512], filter_shape=[512, 256], ksize=2, init=init, name='block18')
l19 = DecodeBlock(input_shape=[batch_size, 28, 28, 256], filter_shape=[256, 256], ksize=1, init=init, name='block19')
l20 = DecodeBlock(input_shape=[batch_size, 28, 28, 256], filter_shape=[256, 128], ksize=2, init=init, name='block20')
l21 = DecodeBlock(input_shape=[batch_size, 56, 56, 128], filter_shape=[128, 128], ksize=1, init=init, name='block21')
l22 = DecodeBlock(input_shape=[batch_size, 56, 56, 128], filter_shape=[128, 64], ksize=2, init=init, name='block22')
l23 = DecodeBlock(input_shape=[batch_size, 112, 112, 64], filter_shape=[64, 64], ksize=1, init=init, name='block23')
l24 = DecodeBlock(input_shape=[batch_size, 112, 112, 64], filter_shape=[64, 64], ksize=2, init=init, name='block24')
l25 = ConvBlock(input_shape=[batch_size, 224, 224, 64], filter_shape=[3, 3, 64, 30], strides=[1,1,1,1], init=init, name='block25')
###########################################################################################
layers = [l0, l1, l2, l3, l4, l5, l6, l7, l8, l9, l10, l11, l12, l13, l14, l15, l16, l17, l18, l19, l20, l21, l22, l23, l24, l25]
model = Model(layers=layers)
return model
###########################################################################################
'''
def SegNet(batch_size, init='alexnet', load=None):
###########################################################################################
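    # Encoder: a MobileNet-v1-style feature extractor. Five stride-2 stages
    # (blocks 1, 3, 5, 7 and 13) reduce the 480x480 input to 15x15x1024; when a
    # weight file is passed via `load`, these layers are initialized from it and
    # frozen (train=False).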
l0 = BatchNorm(input_size=[batch_size, 480, 480, 3], name='bn0')
l1 = ConvBlock(input_shape=[batch_size, 480, 480, 3], filter_shape=[3, 3, 3, 32], strides=[1,2,2,1], init=init, name='block1', load=load, train=False)
l2 = MobileBlock(input_shape=[batch_size, 240, 240, 32], filter_shape=[32, 64], strides=[1,1,1,1], init=init, name='block2', load=load, train=False)
l3 = MobileBlock(input_shape=[batch_size, 240, 240, 64], filter_shape=[64, 128], strides=[1,2,2,1], init=init, name='block3', load=load, train=False)
l4 = MobileBlock(input_shape=[batch_size, 120, 120, 128], filter_shape=[128, 128], strides=[1,1,1,1], init=init, name='block4', load=load, train=False)
l5 = MobileBlock(input_shape=[batch_size, 120, 120, 128], filter_shape=[128, 256], strides=[1,2,2,1], init=init, name='block5', load=load, train=False)
l6 = MobileBlock(input_shape=[batch_size, 60, 60, 256], filter_shape=[256, 256], strides=[1,1,1,1], init=init, name='block6', load=load, train=False)
l7 = MobileBlock(input_shape=[batch_size, 60, 60, 256], filter_shape=[256, 512], strides=[1,2,2,1], init=init, name='block7', load=load, train=False)
l8 = MobileBlock(input_shape=[batch_size, 30, 30, 512], filter_shape=[512, 512], strides=[1,1,1,1], init=init, name='block8', load=load, train=False)
l9 = MobileBlock(input_shape=[batch_size, 30, 30, 512], filter_shape=[512, 512], strides=[1,1,1,1], init=init, name='block9', load=load, train=False)
l10 = MobileBlock(input_shape=[batch_size, 30, 30, 512], filter_shape=[512, 512], strides=[1,1,1,1], init=init, name='block10', load=load, train=False)
l11 = MobileBlock(input_shape=[batch_size, 30, 30, 512], filter_shape=[512, 512], strides=[1,1,1,1], init=init, name='block11', load=load, train=False)
l12 = MobileBlock(input_shape=[batch_size, 30, 30, 512], filter_shape=[512, 512], strides=[1,1,1,1], init=init, name='block12', load=load, train=False)
l13 = MobileBlock(input_shape=[batch_size, 30, 30, 512], filter_shape=[512, 1024], strides=[1,2,2,1], init=init, name='block13', load=load, train=False)
l14 = MobileBlock(input_shape=[batch_size, 15, 15, 1024], filter_shape=[1024, 1024], strides=[1,1,1,1], init=init, name='block14', load=load, train=False)
###########################################################################################
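    # Decoder: DecodeBlocks alternate ksize=1 (refinement at the current
    # resolution) and ksize=2 (a 2x upsample, judging by the block shapes),
    # carrying 15x15 back up to 480x480; the final ConvBlock emits 30 channels
    # of per-pixel class scores.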
l15 = DecodeBlock(input_shape=[batch_size, 15, 15, 1024], filter_shape=[1024, 1024], ksize=1, init=init, name='block15')
l16 = DecodeBlock(input_shape=[batch_size, 15, 15, 1024], filter_shape=[1024, 512], ksize=2, init=init, name='block16')
l17 = DecodeBlock(input_shape=[batch_size, 30, 30, 512], filter_shape=[512, 512], ksize=1, init=init, name='block17')
l18 = DecodeBlock(input_shape=[batch_size, 30, 30, 512], filter_shape=[512, 256], ksize=2, init=init, name='block18')
l19 = DecodeBlock(input_shape=[batch_size, 60, 60, 256], filter_shape=[256, 256], ksize=1, init=init, name='block19')
l20 = DecodeBlock(input_shape=[batch_size, 60, 60, 256], filter_shape=[256, 128], ksize=2, init=init, name='block20')
l21 = DecodeBlock(input_shape=[batch_size, 120, 120, 128], filter_shape=[128, 128], ksize=1, init=init, name='block21')
l22 = DecodeBlock(input_shape=[batch_size, 120, 120, 128], filter_shape=[128, 64], ksize=2, init=init, name='block22')
l23 = DecodeBlock(input_shape=[batch_size, 240, 240, 64], filter_shape=[64, 64], ksize=1, init=init, name='block23')
l24 = DecodeBlock(input_shape=[batch_size, 240, 240, 64], filter_shape=[64, 64], ksize=2, init=init, name='block24')
l25 = ConvBlock(input_shape=[batch_size, 480, 480, 64], filter_shape=[3, 3, 64, 30], strides=[1,1,1,1], init=init, name='block25')
###########################################################################################
layers = [l0, l1, l2, l3, l4, l5, l6, l7, l8, l9, l10, l11, l12, l13, l14, l15, l16, l17, l18, l19, l20, l21, l22, l23, l24, l25]
model = Model(layers=layers)
return model
###########################################################################################
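# A minimal construction sketch (the batch size and weight-file name here are
# hypothetical; training and inference entry points live in lib.Model and are
# not shown in this file):
#   model = SegNet(batch_size=8, init='alexnet', load='mobilenet_weights.npy')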
| 66.503759 | 158 | 0.631204 | 1,301 | 8,845 | 4.170638 | 0.093005 | 0.022116 | 0.138223 | 0.175083 | 0.843347 | 0.841135 | 0.80317 | 0.781423 | 0.781423 | 0.767416 | 0 | 0.138198 | 0.123007 | 8,845 | 132 | 159 | 67.007576 | 0.561299 | 0 | 0 | 0 | 0 | 0 | 0.03842 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.02 | false | 0 | 0.38 | 0 | 0.42 | 0.02 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 8 |
d53054d912c86f79b34886d79ef1b0619781e92a | 40,707 | py | Python | symphony/bdk/gen/group_api/group_api.py | SymphonyOSF/symphony-api-client-python | 70137a893f4385381a3158ef80e1be156e0fc4bd | [
"Apache-2.0"
] | null | null | null | symphony/bdk/gen/group_api/group_api.py | SymphonyOSF/symphony-api-client-python | 70137a893f4385381a3158ef80e1be156e0fc4bd | [
"Apache-2.0"
] | null | null | null | symphony/bdk/gen/group_api/group_api.py | SymphonyOSF/symphony-api-client-python | 70137a893f4385381a3158ef80e1be156e0fc4bd | [
"Apache-2.0"
] | null | null | null | """
Symphony Profile Manager
    Profile Manager is a microservice to manage user profiles and groups # noqa: E501
The version of the OpenAPI document: 1.0.0
Generated by: https://openapi-generator.tech
"""
import re # noqa: F401
import sys # noqa: F401
from symphony.bdk.gen.api_client import ApiClient, Endpoint as _Endpoint
from symphony.bdk.gen.model_utils import ( # noqa: F401
check_allowed_values,
check_validations,
date,
datetime,
file_type,
none_type,
validate_and_convert_types
)
from symphony.bdk.gen.group_model.add_member import AddMember
from symphony.bdk.gen.group_model.create_group import CreateGroup
from symphony.bdk.gen.group_model.error import Error
from symphony.bdk.gen.group_model.group_list import GroupList
from symphony.bdk.gen.group_model.read_group import ReadGroup
from symphony.bdk.gen.group_model.sort_order import SortOrder
from symphony.bdk.gen.group_model.status import Status
from symphony.bdk.gen.group_model.update_group import UpdateGroup
from symphony.bdk.gen.group_model.upload_avatar import UploadAvatar
class GroupApi(object):
"""NOTE: This class is auto generated by OpenAPI Generator
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
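        # Each *_endpoint attribute below declaratively describes one REST call:
        # 'settings' (response type, auth, path, HTTP method), 'params_map'
        # (accepted/required parameters), 'root_map' (parameter types, wire names
        # and locations) and 'headers_map' (accept/content-type headers).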
self.add_member_to_group_endpoint = _Endpoint(
settings={
'response_type': (ReadGroup,),
'auth': [
'bearerAuth'
],
'endpoint_path': '/v1/groups/{groupId}/member',
'operation_id': 'add_member_to_group',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
'x_symphony_host',
'group_id',
'add_member',
],
'required': [
'x_symphony_host',
'group_id',
'add_member',
],
'nullable': [
],
'enum': [
],
'validation': [
'x_symphony_host',
]
},
root_map={
'validations': {
('x_symphony_host',): {
'min_length': 1,
},
},
'allowed_values': {
},
'openapi_types': {
'x_symphony_host':
(str,),
'group_id':
(str,),
'add_member':
(AddMember,),
},
'attribute_map': {
'x_symphony_host': 'X-Symphony-Host',
'group_id': 'groupId',
},
'location_map': {
'x_symphony_host': 'header',
'group_id': 'path',
'add_member': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client
)
self.delete_all_groups_endpoint = _Endpoint(
settings={
'response_type': (GroupList,),
'auth': [
'bearerAuth'
],
'endpoint_path': '/v1/groups/deleteAll',
'operation_id': 'delete_all_groups',
'http_method': 'DELETE',
'servers': None,
},
params_map={
'all': [
'x_symphony_host',
],
'required': [
'x_symphony_host',
],
'nullable': [
],
'enum': [
],
'validation': [
'x_symphony_host',
]
},
root_map={
'validations': {
('x_symphony_host',): {
'min_length': 1,
},
},
'allowed_values': {
},
'openapi_types': {
'x_symphony_host':
(str,),
},
'attribute_map': {
'x_symphony_host': 'X-Symphony-Host',
},
'location_map': {
'x_symphony_host': 'header',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.get_group_endpoint = _Endpoint(
settings={
'response_type': (ReadGroup,),
'auth': [
'bearerAuth'
],
'endpoint_path': '/v1/groups/{groupId}',
'operation_id': 'get_group',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'x_symphony_host',
'group_id',
],
'required': [
'x_symphony_host',
'group_id',
],
'nullable': [
],
'enum': [
],
'validation': [
'x_symphony_host',
]
},
root_map={
'validations': {
('x_symphony_host',): {
'min_length': 1,
},
},
'allowed_values': {
},
'openapi_types': {
'x_symphony_host':
(str,),
'group_id':
(str,),
},
'attribute_map': {
'x_symphony_host': 'X-Symphony-Host',
'group_id': 'groupId',
},
'location_map': {
'x_symphony_host': 'header',
'group_id': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.insert_group_endpoint = _Endpoint(
settings={
'response_type': (ReadGroup,),
'auth': [
'bearerAuth'
],
'endpoint_path': '/v1/groups',
'operation_id': 'insert_group',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
'x_symphony_host',
'create_group',
],
'required': [
'x_symphony_host',
'create_group',
],
'nullable': [
],
'enum': [
],
'validation': [
'x_symphony_host',
]
},
root_map={
'validations': {
('x_symphony_host',): {
'min_length': 1,
},
},
'allowed_values': {
},
'openapi_types': {
'x_symphony_host':
(str,),
'create_group':
(CreateGroup,),
},
'attribute_map': {
'x_symphony_host': 'X-Symphony-Host',
},
'location_map': {
'x_symphony_host': 'header',
'create_group': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client
)
self.list_groups_endpoint = _Endpoint(
settings={
'response_type': (GroupList,),
'auth': [
'bearerAuth'
],
'endpoint_path': '/v1/groups/type/{typeId}',
'operation_id': 'list_groups',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'x_symphony_host',
'type_id',
'status',
'before',
'after',
'limit',
'sort_order',
],
'required': [
'x_symphony_host',
'type_id',
],
'nullable': [
],
'enum': [
],
'validation': [
'x_symphony_host',
]
},
root_map={
'validations': {
('x_symphony_host',): {
'min_length': 1,
},
},
'allowed_values': {
},
'openapi_types': {
'x_symphony_host':
(str,),
'type_id':
(str,),
'status':
(Status,),
'before':
(str,),
'after':
(str,),
'limit':
(int,),
'sort_order':
(SortOrder,),
},
'attribute_map': {
'x_symphony_host': 'X-Symphony-Host',
'type_id': 'typeId',
'status': 'status',
'before': 'before',
'after': 'after',
'limit': 'limit',
'sort_order': 'sortOrder',
},
'location_map': {
'x_symphony_host': 'header',
'type_id': 'path',
'status': 'query',
'before': 'query',
'after': 'query',
'limit': 'query',
'sort_order': 'query',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.update_avatar_endpoint = _Endpoint(
settings={
'response_type': (ReadGroup,),
'auth': [
'bearerAuth'
],
'endpoint_path': '/v1/groups/{groupId}/avatar',
'operation_id': 'update_avatar',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
'x_symphony_host',
'group_id',
'upload_avatar',
],
'required': [
'x_symphony_host',
'group_id',
'upload_avatar',
],
'nullable': [
],
'enum': [
],
'validation': [
'x_symphony_host',
]
},
root_map={
'validations': {
('x_symphony_host',): {
'min_length': 1,
},
},
'allowed_values': {
},
'openapi_types': {
'x_symphony_host':
(str,),
'group_id':
(str,),
'upload_avatar':
(UploadAvatar,),
},
'attribute_map': {
'x_symphony_host': 'X-Symphony-Host',
'group_id': 'groupId',
},
'location_map': {
'x_symphony_host': 'header',
'group_id': 'path',
'upload_avatar': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client
)
self.update_group_endpoint = _Endpoint(
settings={
'response_type': (ReadGroup,),
'auth': [
'bearerAuth'
],
'endpoint_path': '/v1/groups/{groupId}',
'operation_id': 'update_group',
'http_method': 'PUT',
'servers': None,
},
params_map={
'all': [
'x_symphony_host',
'if_match',
'group_id',
'update_group',
],
'required': [
'x_symphony_host',
'if_match',
'group_id',
'update_group',
],
'nullable': [
],
'enum': [
],
'validation': [
'x_symphony_host',
]
},
root_map={
'validations': {
('x_symphony_host',): {
'min_length': 1,
},
},
'allowed_values': {
},
'openapi_types': {
'x_symphony_host':
(str,),
'if_match':
(str,),
'group_id':
(str,),
'update_group':
(UpdateGroup,),
},
'attribute_map': {
'x_symphony_host': 'X-Symphony-Host',
'if_match': 'If-Match',
'group_id': 'groupId',
},
'location_map': {
'x_symphony_host': 'header',
'if_match': 'header',
'group_id': 'path',
'update_group': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client
)
def add_member_to_group(
self,
x_symphony_host,
group_id,
add_member,
**kwargs
):
"""Add a new user to a an existing group # noqa: E501
Add a new user to a an existing group # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = group_api.add_member_to_group(x_symphony_host, group_id, add_member, async_req=True)
>>> result = thread.get()
Args:
x_symphony_host (str):
group_id (str):
            add_member (AddMember): JSON object containing the user member information and the group to which they will be added
Keyword Args:
            _return_http_data_only (bool): response data only, without the
                status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
                should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
                should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
ReadGroup
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['x_symphony_host'] = \
x_symphony_host
kwargs['group_id'] = \
group_id
kwargs['add_member'] = \
add_member
return self.add_member_to_group_endpoint.call_with_http_info(**kwargs)
def delete_all_groups(
self,
x_symphony_host,
**kwargs
):
"""Delete all data related to the current pod (extracted from JWT). This endpoint is for maintenance/test and it is usually disabled or restricted # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = group_api.delete_all_groups(x_symphony_host, async_req=True)
>>> result = thread.get()
Args:
x_symphony_host (str):
Keyword Args:
            _return_http_data_only (bool): response data only, without the
                status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
                should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
                should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
GroupList
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['x_symphony_host'] = \
x_symphony_host
return self.delete_all_groups_endpoint.call_with_http_info(**kwargs)
def get_group(
self,
x_symphony_host,
group_id,
**kwargs
):
"""Retrieve a group # noqa: E501
Retrieve a group # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = group_api.get_group(x_symphony_host, group_id, async_req=True)
>>> result = thread.get()
Args:
x_symphony_host (str):
group_id (str): Group id
Keyword Args:
_return_http_data_only (bool): return the response data only, without
the status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
ReadGroup
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['x_symphony_host'] = \
x_symphony_host
kwargs['group_id'] = \
group_id
return self.get_group_endpoint.call_with_http_info(**kwargs)
def insert_group(
self,
x_symphony_host,
create_group,
**kwargs
):
"""Insert a new group # noqa: E501
Insert a new group into the database # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = group_api.insert_group(x_symphony_host, create_group, async_req=True)
>>> result = thread.get()
Args:
x_symphony_host (str):
create_group (CreateGroup): JSON object containing Group info
Keyword Args:
_return_http_data_only (bool): return the response data only, without
the status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
ReadGroup
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['x_symphony_host'] = \
x_symphony_host
kwargs['create_group'] = \
create_group
return self.insert_group_endpoint.call_with_http_info(**kwargs)
def list_groups(
self,
x_symphony_host,
type_id,
**kwargs
):
"""List all groups of specified type # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = group_api.list_groups(x_symphony_host, type_id, async_req=True)
>>> result = thread.get()
Args:
x_symphony_host (str):
type_id (str): Group type id
Keyword Args:
status (Status): filter by status, active or deleted. If not specified, both are returned. [optional]
before (str): NOT SUPPORTED YET, currently ignored. Cursor that points to the start of the current page of data. If not present, the current page is the first page. [optional]
after (str): cursor that points to the end of the current page of data. If not present, the current page is the last page. [optional]
limit (int): number of items to return. [optional]
sort_order (SortOrder): sorting direction of items (ordered by createdDate). [optional]
_return_http_data_only (bool): return the response data only, without
the status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
GroupList
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['x_symphony_host'] = \
x_symphony_host
kwargs['type_id'] = \
type_id
return self.list_groups_endpoint.call_with_http_info(**kwargs)
def update_avatar(
self,
x_symphony_host,
group_id,
upload_avatar,
**kwargs
):
"""Update the group avatar # noqa: E501
Update the group account avatar # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = group_api.update_avatar(x_symphony_host, group_id, upload_avatar, async_req=True)
>>> result = thread.get()
Args:
x_symphony_host (str):
group_id (str): Group id
upload_avatar (UploadAvatar): JSON object containing Group avatar
Keyword Args:
_return_http_data_only (bool): return the response data only, without
the status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
ReadGroup
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['x_symphony_host'] = \
x_symphony_host
kwargs['group_id'] = \
group_id
kwargs['upload_avatar'] = \
upload_avatar
return self.update_avatar_endpoint.call_with_http_info(**kwargs)
def update_group(
self,
x_symphony_host,
if_match,
group_id,
update_group,
**kwargs
):
"""Update a group # noqa: E501
Update an existing group # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = group_api.update_group(x_symphony_host, if_match, group_id, update_group, async_req=True)
>>> result = thread.get()
Args:
x_symphony_host (str):
if_match (str):
group_id (str): Group id
update_group (UpdateGroup): JSON object containing Group info
Keyword Args:
_return_http_data_only (bool): return the response data only, without
the status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
ReadGroup
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['x_symphony_host'] = \
x_symphony_host
kwargs['if_match'] = \
if_match
kwargs['group_id'] = \
group_id
kwargs['update_group'] = \
update_group
return self.update_group_endpoint.call_with_http_info(**kwargs)
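# All methods above share the same kwargs-defaulting pattern, so any of the
# documented per-request options can be overridden at the call site. An
# illustrative sketch with hypothetical values:
#
#   group = group_api.get_group(
#       x_symphony_host, group_id,
#       _request_timeout=(3.05, 27),  # (connect, read) timeouts in seconds
#       _preload_content=False,       # return the raw urllib3.HTTPResponse
#   )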
| 36.443151 | 187 | 0.490137 | 3,855 | 40,707 | 4.916472 | 0.069261 | 0.043212 | 0.062418 | 0.019944 | 0.871946 | 0.854799 | 0.832058 | 0.814172 | 0.794544 | 0.784731 | 0 | 0.00308 | 0.425774 | 40,707 | 1,116 | 188 | 36.475806 | 0.807743 | 0.35699 | 0 | 0.649867 | 1 | 0 | 0.240462 | 0.034748 | 0 | 0 | 0 | 0 | 0 | 1 | 0.01061 | false | 0 | 0.017241 | 0 | 0.038462 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
d550aeb58909ab267bee2c338a726384338c3f68 | 23,062 | py | Python | tests/EntrypointScriptBuilderTest.py | codefresh-contrib/cfstep-helm | 762b7d53ff95d091a286149b232f9f58bd26d905 | [
"MIT"
] | 11 | 2018-03-07T14:32:56.000Z | 2022-01-14T12:37:52.000Z | tests/EntrypointScriptBuilderTest.py | codefresh-contrib/cfstep-helm | 762b7d53ff95d091a286149b232f9f58bd26d905 | [
"MIT"
] | 18 | 2018-03-18T09:17:56.000Z | 2020-10-25T16:37:18.000Z | tests/EntrypointScriptBuilderTest.py | codefresh-contrib/cfstep-helm | 762b7d53ff95d091a286149b232f9f58bd26d905 | [
"MIT"
] | 25 | 2018-02-25T11:01:17.000Z | 2021-09-06T13:24:17.000Z | import unittest
import os
import sys
import urllib.request
import urllib.error
import json
parent_dir_name = os.path.dirname(os.path.dirname(os.path.realpath(__file__)))
sys.path.append(parent_dir_name)
from lib.EntrypointScriptBuilder import EntrypointScriptBuilder
from unittest.mock import patch, MagicMock
class ResponseMock(object):
def __init__(self, headers):
self.headers = headers
@property
def _headers(self):
return self.headers
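# ResponseMock stands in for the object returned by urlopen().info(); the
# code under test only inspects its headers (to infer the repository type),
# so exposing a `headers` attribute is sufficient here.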
class EntrypointScriptBuilderTest(unittest.TestCase):
def test_custom_variables(self):
env = {
'KUBE_CONTEXT': 'local',
'CHART_NAME': 'tomcat',
'RELEASE_NAME': 'tomcat',
'NAMESPACE': 'default',
'CHART_VERSION': '0.4.3',
'CHART_REPO_URL': 'https://charts.helm.sh/stable',
'HELM_VERSION': '3',
'CUSTOM_containers_node_env_secret_VALUE1': 'value1,',
'CUSTOM_containers_node_env_secret_VALUE2': 'foo:bar;baz:qux;',
'CUSTOM_containers_node_env_secret_VALUE3': 'value3',
'CUSTOM_containers_node_env_secret_VALUE4': 'value4'
}
expect = '#!/bin/bash -e\n'
expect += 'export HELM_REPO_ACCESS_TOKEN=$CF_API_KEY\n'
expect += 'export HELM_REPO_AUTH_HEADER=Authorization\n'
expect += 'kubectl config use-context "local"\n'
expect += 'helm version --short -c\n'
expect += 'helm upgrade tomcat tomcat --install --reset-values --repo https://charts.helm.sh/stable/ '
expect += '--version 0.4.3 --namespace default --set containers.node.env.secret.VALUE1=value1, '
expect += '--set containers.node.env.secret.VALUE2="foo:bar;baz:qux;" '
expect += '--set containers.node.env.secret.VALUE3=value3 --set containers.node.env.secret.VALUE4=value4 '
builder = EntrypointScriptBuilder(env)
script_source = builder.build()
self.assertEqual(script_source, expect)
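# Note how each CUSTOM_a_b_c_NAME env var becomes a `--set a.b.c.NAME=...`
# Helm flag in the expected command, with underscores mapped to dots.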
def test_helm_behind_firewall(self):
env = {
'KUBE_CONTEXT': 'local',
'CHART_NAME': 'tomcat',
'RELEASE_NAME': 'tomcat',
'NAMESPACE': 'default',
'CHART_VERSION': '0.4.3',
'CHART_REPO_URL': 'azsp://test.azure.io',
'HELM_VERSION': '3',
'HELM_REPO_TOKEN': 'helmRepoToken',
'CUSTOM_containers_node_env_secret_VALUE1': 'value1,',
'CUSTOM_containers_node_env_secret_VALUE2': 'foo:bar;baz:qux;',
'CUSTOM_containers_node_env_secret_VALUE3': 'value3',
'CUSTOM_containers_node_env_secret_VALUE4': 'value4'
}
expect = '#!/bin/bash -e\n'
expect += 'export HELM_REPO_ACCESS_TOKEN=$CF_API_KEY\n'
expect += 'export HELM_REPO_AUTH_HEADER=Authorization\n'
expect += 'kubectl config use-context "local"\n'
expect += 'helm version --short -c\n'
expect += 'helm upgrade tomcat tomcat --install --reset-values --repo https://00000000-0000-0000-0000-000000000000:helmRepoToken@test.azure.io/helm/v1/repo/ '
expect += '--version 0.4.3 --namespace default --set containers.node.env.secret.VALUE1=value1, '
expect += '--set containers.node.env.secret.VALUE2="foo:bar;baz:qux;" '
expect += '--set containers.node.env.secret.VALUE3=value3 --set containers.node.env.secret.VALUE4=value4 '
builder = EntrypointScriptBuilder(env)
script_source = builder.build()
self.assertEqual(script_source, expect)
def test_helm_behind_firewall_mi(self):
env = {
'KUBE_CONTEXT': 'local',
'CHART_NAME': 'tomcat',
'RELEASE_NAME': 'tomcat',
'NAMESPACE': 'default',
'CHART_VERSION': '0.4.3',
'CHART_REPO_URL': 'azmi://test2.azure.io',
'HELM_VERSION': '3',
'HELM_REPO_TOKEN': 'helmRepoToken2',
'CUSTOM_containers_node_env_secret_VALUE1': 'value1,',
'CUSTOM_containers_node_env_secret_VALUE2': 'foo:bar;baz:qux;',
'CUSTOM_containers_node_env_secret_VALUE3': 'value3',
'CUSTOM_containers_node_env_secret_VALUE4': 'value4'
}
expect = '#!/bin/bash -e\n'
expect += 'export HELM_REPO_ACCESS_TOKEN=$CF_API_KEY\n'
expect += 'export HELM_REPO_AUTH_HEADER=Authorization\n'
expect += 'kubectl config use-context "local"\n'
expect += 'helm version --short -c\n'
expect += 'helm upgrade tomcat tomcat --install --reset-values --repo https://00000000-0000-0000-0000-000000000000:helmRepoToken2@test2.azure.io/helm/v1/repo/ '
expect += '--version 0.4.3 --namespace default --set containers.node.env.secret.VALUE1=value1, '
expect += '--set containers.node.env.secret.VALUE2="foo:bar;baz:qux;" '
expect += '--set containers.node.env.secret.VALUE3=value3 --set containers.node.env.secret.VALUE4=value4 '
builder = EntrypointScriptBuilder(env)
script_source = builder.build()
self.assertEqual(script_source, expect)
@patch.dict(os.environ, {'CF_API_KEY': 'apiKey',
'CF_HOST_IP': 'local.codefresh.io', 'CF_BUILD_URL': 'local.codefresh.io'}, clear=True)
@patch('urllib.request.urlopen')
def test_helm_multiple_sp(self, mock_urlopen):
cm = MagicMock()
cm.getcode.return_value = 200
cm.read.return_value = '{"access_token": "accessToken"}'
mock_urlopen.return_value = cm
env = {
'KUBE_CONTEXT': 'local',
'CHART_NAME': 'tomcat',
'RELEASE_NAME': 'tomcat',
'NAMESPACE': 'default',
'CHART_VERSION': '0.4.3',
'CHART_REPO_URL': 'azsp://test2.azure.io',
'HELM_VERSION': '3',
'CLIENT_ID': 'clientId',
'CLIENT_SECRET': 'clientSecret',
'TENANT': 'tenant',
'CUSTOM_containers_node_env_secret_VALUE1': 'value1,',
'CUSTOM_containers_node_env_secret_VALUE2': 'foo:bar;baz:qux;',
'CUSTOM_containers_node_env_secret_VALUE3': 'value3',
'CUSTOM_containers_node_env_secret_VALUE4': 'value4'
}
expect = '#!/bin/bash -e\n'
expect += 'export HELM_REPO_ACCESS_TOKEN=$CF_API_KEY\n'
expect += 'export HELM_REPO_AUTH_HEADER=Authorization\n'
expect += 'kubectl config use-context "local"\n'
expect += 'helm version --short -c\n'
expect += 'helm upgrade tomcat tomcat --install --reset-values --repo https://00000000-0000-0000-0000-000000000000:accessToken@test2.azure.io/helm/v1/repo/ '
expect += '--version 0.4.3 --namespace default --set containers.node.env.secret.VALUE1=value1, '
expect += '--set containers.node.env.secret.VALUE2="foo:bar;baz:qux;" '
expect += '--set containers.node.env.secret.VALUE3=value3 --set containers.node.env.secret.VALUE4=value4 '
builder = EntrypointScriptBuilder(env)
script_source = builder.build()
args = mock_urlopen.call_args
self.assertEqual(str(args[0][0].full_url), 'http://local.codefresh.io/api/clusters/aks-sp/helm/repos/test2.azure.io/token')
self.assertEqual(str(args[0][0].headers['Authorization']), 'apiKey')
self.assertEqual(str(args[0][0].data), 'b\'clientId=clientId&clientSecret=clientSecret&tenant=tenant\'')
self.assertEqual(script_source, expect)
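# For azsp:// repos the builder exchanges the service-principal credentials
# (CLIENT_ID/CLIENT_SECRET/TENANT) for an ACR access token via the Codefresh
# API, then embeds that token in the generated repo URL (asserted above).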
@patch.dict(os.environ, {'CF_API_KEY': 'apiKey'}, clear=True)
@patch('urllib.request.urlopen')
def test_helm_sp(self, mock_urlopen):
cm = MagicMock()
cm.getcode.return_value = 200
cm.read.return_value = '{"access_token": "accessToken"}'
mock_urlopen.return_value = cm
env = {
'KUBE_CONTEXT': 'local',
'CHART_NAME': 'tomcat',
'RELEASE_NAME': 'tomcat',
'NAMESPACE': 'default',
'CHART_VERSION': '0.4.3',
'CHART_REPO_URL': 'azsp://test2.azure.io',
'HELM_VERSION': '3',
'CUSTOM_containers_node_env_secret_VALUE1': 'value1,',
'CUSTOM_containers_node_env_secret_VALUE2': 'foo:bar;baz:qux;',
'CUSTOM_containers_node_env_secret_VALUE3': 'value3',
'CUSTOM_containers_node_env_secret_VALUE4': 'value4'
}
expect = '#!/bin/bash -e\n'
expect += 'export HELM_REPO_ACCESS_TOKEN=$CF_API_KEY\n'
expect += 'export HELM_REPO_AUTH_HEADER=Authorization\n'
expect += 'kubectl config use-context "local"\n'
expect += 'helm version --short -c\n'
expect += 'helm upgrade tomcat tomcat --install --reset-values --repo https://00000000-0000-0000-0000-000000000000:accessToken@test2.azure.io/helm/v1/repo/ '
expect += '--version 0.4.3 --namespace default --set containers.node.env.secret.VALUE1=value1, '
expect += '--set containers.node.env.secret.VALUE2="foo:bar;baz:qux;" '
expect += '--set containers.node.env.secret.VALUE3=value3 --set containers.node.env.secret.VALUE4=value4 '
builder = EntrypointScriptBuilder(env)
script_source = builder.build()
args = mock_urlopen.call_args
self.assertEqual(str(args[0][0].full_url), 'https://g.codefresh.io/api/clusters/aks-sp/helm/repos/test2.azure.io/token')
self.assertEqual(str(args[0][0].headers['Authorization']), 'apiKey')
self.assertIsNone(args[0][0].data)
self.assertEqual(script_source, expect)
@patch.dict(os.environ, {'CF_BUILD_URL': 'local.codefresh.io', 'CF_HOST_IP': 'local.codefresh.io', 'CF_API_KEY': 'apiKey'}, clear=True)
@patch('urllib.request.urlopen')
def test_helm_cf_ctx_context(self, mock_urlopen):
cm = MagicMock()
cm.getcode.return_value = 200
cm.read.return_value = '{"access_token": "accessToken"}'
mock_urlopen.return_value = cm
env = {
'KUBE_CONTEXT': 'local',
'CHART_NAME': 'tomcat',
'RELEASE_NAME': 'tomcat',
'NAMESPACE': 'default',
'CHART_VERSION': '0.4.3',
#'CHART_REPO_URL': 'azsp://test2.azure.io',
'HELM_VERSION': '3',
#'HELM_REPOSITORY_CONTEXT': 'helmSP',
'CF_CTX_test_URL': 'azsp://test3.azure.io',
'CF_CTX_test2_URL': 'azsp://test4.azure.io',
'CUSTOM_containers_node_env_secret_VALUE1': 'value1,',
'CUSTOM_containers_node_env_secret_VALUE2': 'foo:bar;baz:qux;',
'CUSTOM_containers_node_env_secret_VALUE3': 'value3',
'CUSTOM_containers_node_env_secret_VALUE4': 'value4',
'CLIENT_ID': 'clientId',
'CLIENT_SECRET': 'clientSecret',
'TENANT': 'tenant',
}
expect = '#!/bin/bash -e\n'
expect += 'export HELM_REPO_ACCESS_TOKEN=$CF_API_KEY\n'
expect += 'export HELM_REPO_AUTH_HEADER=Authorization\n'
expect += 'kubectl config use-context "local"\n'
expect += 'helm version --short -c\n'
expect += 'helm repo add test https://00000000-0000-0000-0000-000000000000:accessToken@test3.azure.io/helm/v1/repo\n'
expect += 'helm repo add test2 https://00000000-0000-0000-0000-000000000000:accessToken@test4.azure.io/helm/v1/repo\n'
expect += 'helm upgrade tomcat tomcat --install --reset-values --repo https://00000000-0000-0000-0000-000000000000:accessToken@test3.azure.io/helm/v1/repo/ '
expect += '--version 0.4.3 --namespace default --set containers.node.env.secret.VALUE1=value1, '
expect += '--set containers.node.env.secret.VALUE2="foo:bar;baz:qux;" '
expect += '--set containers.node.env.secret.VALUE3=value3 --set containers.node.env.secret.VALUE4=value4 '
builder = EntrypointScriptBuilder(env)
script_source = builder.build()
args = mock_urlopen.call_args
self.assertEqual(str(args[0][0].full_url), 'http://local.codefresh.io/api/clusters/aks-sp/helm/repos/test4.azure.io/token')
self.assertEqual(str(args[0][0].headers['Authorization']), 'apiKey')
self.assertEqual(str(args[0][0].data), 'b\'clientId=clientId&clientSecret=clientSecret&tenant=tenant\'')
self.assertEqual(script_source, expect)
@patch.dict(os.environ, {'CF_BUILD_URL': 'local.codefresh.io', 'CF_HOST_IP': 'local.codefresh.io', 'CF_API_KEY': 'apiKey'}, clear=True)
@patch('urllib.request.urlopen')
def test_helm_repository_integration(self, mock_urlopen):
cm = MagicMock()
cm.getcode.return_value = 200
cm.read.side_effect = [ '{"metadata":{"name":"helmSP"},"spec": {"data":{ "repositoryUrl": "azsp://test.azure.io", "variables": {"CLIENT_ID": "client", "CLIENT_SECRET": "secret", "TENANT": "mytenant"} }}}', '{"access_token": "accessToken"}' ]
mock_urlopen.return_value = cm
env = {
'KUBE_CONTEXT': 'local',
'CHART_NAME': 'tomcat',
'RELEASE_NAME': 'tomcat',
'NAMESPACE': 'default',
'CHART_VERSION': '0.4.3',
'HELM_VERSION': '3',
'HELM_REPOSITORY_CONTEXT': 'helmSP',
'CUSTOM_containers_node_env_secret_VALUE1': 'value1,',
'CUSTOM_containers_node_env_secret_VALUE2': 'foo:bar;baz:qux;',
'CUSTOM_containers_node_env_secret_VALUE3': 'value3',
'CUSTOM_containers_node_env_secret_VALUE4': 'value4'
}
expect = '#!/bin/bash -e\n'
expect += 'export HELM_REPO_ACCESS_TOKEN=$CF_API_KEY\n'
expect += 'export HELM_REPO_AUTH_HEADER=Authorization\n'
expect += 'kubectl config use-context "local"\n'
expect += 'helm version --short -c\n'
expect += 'helm repo add helmsp https://00000000-0000-0000-0000-000000000000:accessToken@test.azure.io/helm/v1/repo\n'
expect += 'helm upgrade tomcat tomcat --install --reset-values --repo https://00000000-0000-0000-0000-000000000000:accessToken@test.azure.io/helm/v1/repo/ '
expect += '--version 0.4.3 --namespace default --set containers.node.env.secret.VALUE1=value1, '
expect += '--set containers.node.env.secret.VALUE2="foo:bar;baz:qux;" '
expect += '--set containers.node.env.secret.VALUE3=value3 --set containers.node.env.secret.VALUE4=value4 '
builder = EntrypointScriptBuilder(env)
script_source = builder.build()
args = mock_urlopen.call_args
self.assertEqual(str(args[0][0].full_url), 'http://local.codefresh.io/api/clusters/aks-sp/helm/repos/test.azure.io/token')
self.assertEqual(str(args[0][0].headers['Authorization']), 'apiKey')
self.assertEqual(str(args[0][0].data), 'b\'clientId=client&clientSecret=secret&tenant=mytenant\'')
self.assertEqual(script_source, expect)
@patch('urllib.request.urlopen')
def test_jfrog_repo(self, mock_urlopen):
cm = MagicMock()
cm.getcode.return_value = 200
cm.read.return_value = 'contents'
cm.info.return_value = ResponseMock({'X-Artifactory-Id'})
mock_urlopen.return_value = cm
env = {
'ACTION': 'push',
'KUBE_CONTEXT': 'local',
'CHART_NAME': 'tomcat',
'RELEASE_NAME': 'tomcat',
'NAMESPACE': 'default',
'CHART_VERSION': '0.4.3',
'CHART_REPO_URL': 'https://my-cm-repo.jfrog.io/',
'HELM_VERSION': '3',
'CREDENTIALS_IN_ARGUMENTS': 'true',
'HELMREPO_USERNAME': 'user',
'HELMREPO_PASSWORD': 'pass'
}
expect = '#!/bin/bash -e\n'
expect += 'export HELM_REPO_ACCESS_TOKEN=$CF_API_KEY\n'
expect += 'export HELM_REPO_AUTH_HEADER=Authorization\n'
expect += 'helm version --short -c\n'
expect += 'helm repo add remote https://my-cm-repo.jfrog.io/ --username user --password pass \n'
expect += 'helm dependency build tomcat || helm dependency update tomcat || echo "dependencies cannot be updated"\n'
expect += 'PACKAGE="$(helm package tomcat --version 0.4.3 --destination /tmp | cut -d " " -f 8)"\n'
expect += 'curl -u $HELMREPO_USERNAME:$HELMREPO_PASSWORD -T $PACKAGE https://my-cm-repo.jfrog.io/$(basename $PACKAGE)'
builder = EntrypointScriptBuilder(env)
script_source = builder.build()
self.assertEqual(script_source, expect)
@patch('urllib.request.urlopen')
def test_jfrog_repo_http_2(self, mock_urlopen):
cm = MagicMock()
cm.getcode.return_value = 200
cm.read.return_value = 'contents'
cm.info.return_value = ResponseMock({('server', 'artifactory')})
mock_urlopen.return_value = cm
env = {
'ACTION': 'push',
'KUBE_CONTEXT': 'local',
'CHART_NAME': 'tomcat',
'RELEASE_NAME': 'tomcat',
'NAMESPACE': 'default',
'CHART_VERSION': '0.4.3',
'CHART_REPO_URL': 'https://my-cm-repo.jfrog.io/',
'HELM_VERSION': '3',
'CREDENTIALS_IN_ARGUMENTS': 'true',
'HELMREPO_USERNAME': 'user',
'HELMREPO_PASSWORD': 'pass'
}
expect = '#!/bin/bash -e\n'
expect += 'export HELM_REPO_ACCESS_TOKEN=$CF_API_KEY\n'
expect += 'export HELM_REPO_AUTH_HEADER=Authorization\n'
expect += 'helm version --short -c\n'
expect += 'helm repo add remote https://my-cm-repo.jfrog.io/ --username user --password pass \n'
expect += 'helm dependency build tomcat || helm dependency update tomcat || echo "dependencies cannot be updated"\n'
expect += 'PACKAGE="$(helm package tomcat --version 0.4.3 --destination /tmp | cut -d " " -f 8)"\n'
expect += 'curl -u $HELMREPO_USERNAME:$HELMREPO_PASSWORD -T $PACKAGE https://my-cm-repo.jfrog.io/$(basename $PACKAGE)'
builder = EntrypointScriptBuilder(env)
script_source = builder.build()
self.assertEqual(script_source, expect)
cm.info.return_value = ResponseMock({'x-artifactory-id'})
script_source = builder.build()
self.assertEqual(script_source, expect)
def test_jfrog_repo_with_skip_repo_credentials_validation(self):
env = {
'ACTION': 'push',
'KUBE_CONTEXT': 'local',
'CHART_NAME': 'tomcat',
'RELEASE_NAME': 'tomcat',
'NAMESPACE': 'default',
'CHART_VERSION': '0.4.3',
'CHART_REPO_URL': 'https://my-cm-repo.jfrog.io/',
'HELM_VERSION': '3',
'CREDENTIALS_IN_ARGUMENTS': 'true',
'SKIP_REPO_CREDENTIALS_VALIDATION': 'true',
'HELMREPO_USERNAME': 'user',
'HELMREPO_PASSWORD': 'pass'
}
expect = '#!/bin/bash -e\n'
expect += 'export HELM_REPO_ACCESS_TOKEN=$CF_API_KEY\n'
expect += 'export HELM_REPO_AUTH_HEADER=Authorization\n'
expect += 'helm version --short -c\n'
expect += 'helm repo add remote https://my-cm-repo.jfrog.io/ --username user --password pass \n'
expect += 'helm dependency build tomcat || helm dependency update tomcat || echo "dependencies cannot be updated"\n'
expect += 'PACKAGE="$(helm package tomcat --version 0.4.3 --destination /tmp | cut -d " " -f 8)"\n'
expect += 'curl -u $HELMREPO_USERNAME:$HELMREPO_PASSWORD -T $PACKAGE https://my-cm-repo.jfrog.io/$(basename $PACKAGE)'
builder = EntrypointScriptBuilder(env)
script_source = builder.build()
self.assertEqual(script_source, expect)
@patch('urllib.request.urlopen')
def test_jfrog_repo_exception(self, mock_urlopen):
cm = MagicMock()
cm.getcode.return_value = 200
cm.read.return_value = 'contents'
cm.info.return_value = ResponseMock({'Server': 'Test'})
mock_urlopen.return_value = cm
env = {
'ACTION': 'push',
'KUBE_CONTEXT': 'local',
'CHART_NAME': 'tomcat',
'RELEASE_NAME': 'tomcat',
'NAMESPACE': 'default',
'CHART_VERSION': '0.4.3',
'CHART_REPO_URL': 'https://my-cm-repo.jfrog.io/',
'HELM_VERSION': '3',
'CREDENTIALS_IN_ARGUMENTS': 'true',
'HELMREPO_USERNAME': 'user',
'HELMREPO_PASSWORD': 'pass'
}
builder = EntrypointScriptBuilder(env)
with self.assertRaises(Exception) as exc:
script_source = builder.build()
self.assertEqual(str(exc.exception), "\033[91mFailed to infer the Helm repository type\033[0m")
@patch('urllib.request.urlopen')
def test_jfrog_repo_url_validation(self, mock_urlopen):
cm = MagicMock()
cm.getcode.return_value = 302
cm.read.return_value = 'contents'
cm.info.return_value = ResponseMock({'Server': 'Test'})
mock_urlopen.return_value = cm
env = {
'ACTION': 'push',
'KUBE_CONTEXT': 'local',
'CHART_NAME': 'tomcat',
'RELEASE_NAME': 'tomcat',
'NAMESPACE': 'default',
'CHART_VERSION': '0.4.3',
'CHART_REPO_URL': 'https://my-cm-repo.jfrog.io/',
'HELM_VERSION': '3',
'CREDENTIALS_IN_ARGUMENTS': 'true',
'HELMREPO_USERNAME': 'user',
'HELMREPO_PASSWORD': 'pass'
}
builder = EntrypointScriptBuilder(env)
with self.assertRaises(Exception) as exc:
script_source = builder.build()
self.assertEqual(str(exc.exception), "\033[91mFailed to infer the Helm repository type\033[0m")
@patch('urllib.request.urlopen')
def test_jfrog_repo_url_validation_exception(self, mock_urlopen):
mock_urlopen.side_effect = Exception('test')
env = {
'ACTION': 'push',
'KUBE_CONTEXT': 'local',
'CHART_NAME': 'tomcat',
'RELEASE_NAME': 'tomcat',
'NAMESPACE': 'default',
'CHART_VERSION': '0.4.3',
'CHART_REPO_URL': 'https://my-cm-repo.jfrog.io/',
'HELM_VERSION': '3',
'CREDENTIALS_IN_ARGUMENTS': 'true',
'HELMREPO_USERNAME': 'user',
'HELMREPO_PASSWORD': 'pass'
}
builder = EntrypointScriptBuilder(env)
with self.assertRaises(SystemExit) as cm:
script_source = builder.build()
self.assertEqual(cm.exception.code, 1)
@patch('urllib.request.urlopen')
def test_jfrog_repo_url_validation_url_error(self, mock_urlopen):
err = urllib.error.URLError('test')
err.code = 401
mock_urlopen.side_effect = err
env = {
'ACTION': 'push',
'KUBE_CONTEXT': 'local',
'CHART_NAME': 'tomcat',
'RELEASE_NAME': 'tomcat',
'NAMESPACE': 'default',
'CHART_VERSION': '0.4.3',
'CHART_REPO_URL': 'https://my-cm-repo.jfrog.io/',
'HELM_VERSION': '3',
'CREDENTIALS_IN_ARGUMENTS': 'true',
'HELMREPO_USERNAME': 'user',
'HELMREPO_PASSWORD': 'pass'
}
builder = EntrypointScriptBuilder(env)
with self.assertRaises(SystemExit) as cm:
script_source = builder.build()
self.assertEqual(cm.exception.code, 1)
| 48.654008 | 249 | 0.617986 | 2,680 | 23,062 | 5.112313 | 0.080597 | 0.030144 | 0.069484 | 0.094008 | 0.927451 | 0.921977 | 0.920225 | 0.909642 | 0.892417 | 0.885775 | 0 | 0.032632 | 0.235929 | 23,062 | 473 | 250 | 48.756871 | 0.744907 | 0.003382 | 0 | 0.814815 | 0 | 0.087963 | 0.48105 | 0.164484 | 0 | 0 | 0 | 0 | 0.071759 | 1 | 0.037037 | false | 0.030093 | 0.016204 | 0.002315 | 0.060185 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
63cf2f8b315aba88e82138e8a2d7470eae0ac9c8 | 582 | py | Python | utils/embeds.py | Nirlep5252/SynTech | 955cf600800f0cf0f03e6b1932ac2923d6beb2bf | [
"MIT"
] | 2 | 2021-12-12T03:17:10.000Z | 2022-03-28T08:04:07.000Z | utils/embeds.py | Nirlep5252/SynTech | 955cf600800f0cf0f03e6b1932ac2923d6beb2bf | [
"MIT"
] | null | null | null | utils/embeds.py | Nirlep5252/SynTech | 955cf600800f0cf0f03e6b1932ac2923d6beb2bf | [
"MIT"
] | null | null | null | from discord import Embed
from config import ERROR_COLOR, MAIN_COLOR
def error_embed(title: str, description: str) -> Embed:
return Embed(
title=title,
description=description,
color=ERROR_COLOR
)
def success_embed(title: str, description: str) -> Embed:
return Embed(
title=title,
description=description,
color=MAIN_COLOR
)
def custom_embed(title: str, description: str) -> Embed:
return Embed(
title=title,
description=description,
color=MAIN_COLOR
)
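# Minimal usage sketch (illustrative; assumes a discord.py Messageable such
# as a command context `ctx`):
#
#   await ctx.send(embed=error_embed("Error", "Something went wrong."))
#   await ctx.send(embed=success_embed("Success", "Operation completed."))
#
# Note: custom_embed currently mirrors success_embed (both use MAIN_COLOR).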
| 21.555556 | 58 | 0.618557 | 62 | 582 | 5.677419 | 0.241935 | 0.170455 | 0.119318 | 0.204545 | 0.732955 | 0.732955 | 0.732955 | 0.732955 | 0.732955 | 0.732955 | 0 | 0 | 0.302406 | 582 | 26 | 59 | 22.384615 | 0.866995 | 0 | 0 | 0.55 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.15 | false | 0 | 0.1 | 0.15 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 7 |
89749db3441260e998c2ec96391a2d1a20c04163 | 6,498 | py | Python | dataloaders/dataloader_LQ_HQ_diff_content_HQ.py | guanghaoyin/CVRKD-IQA | b596a53c064d5472feb63fc61abe0b100e40606f | [
"MIT"
] | 25 | 2021-12-09T10:01:16.000Z | 2022-03-25T03:10:27.000Z | dataloaders/dataloader_LQ_HQ_diff_content_HQ.py | guanghaoyin/CVRKD-IQA | b596a53c064d5472feb63fc61abe0b100e40606f | [
"MIT"
] | 1 | 2022-03-07T08:33:20.000Z | 2022-03-08T08:44:38.000Z | dataloaders/dataloader_LQ_HQ_diff_content_HQ.py | guanghaoyin/CVRKD-IQA | b596a53c064d5472feb63fc61abe0b100e40606f | [
"MIT"
] | 5 | 2022-03-02T08:12:29.000Z | 2022-03-17T05:22:19.000Z | import torch
import torchvision
import folders.folders_LQ_HQ_diff_content_HQ as folders
class DataLoader(object):
"""Dataset class for IQA databases"""
def __init__(self, dataset, path, ref_path, img_indx, patch_size, patch_num, batch_size=1, istrain=True, self_patch_num=10, use_HQref=True):
self.batch_size = batch_size
self.istrain = istrain
if dataset in ('live', 'csiq', 'tid2013', 'livec', 'kadid10k'):
# Train transforms
if istrain:
HQ_diff_content_transform = torchvision.transforms.Compose([
torchvision.transforms.RandomCrop(size=patch_size),
torchvision.transforms.RandomHorizontalFlip(),
torchvision.transforms.RandomVerticalFlip(),
torchvision.transforms.RandomRotation(degrees=180),
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(mean=(0.485, 0.456, 0.406),
std=(0.229, 0.224, 0.225))
])
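# The mean/std values above are the standard ImageNet normalization
# statistics expected by torchvision's pretrained backbones.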
# Test transforms
else:
HQ_diff_content_transform = torchvision.transforms.Compose([
torchvision.transforms.RandomCrop(size=patch_size),
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(mean=(0.485, 0.456, 0.406),
std=(0.229, 0.224, 0.225))
])
elif dataset == 'koniq-10k':
if istrain:
HQ_diff_content_transform = torchvision.transforms.Compose([
torchvision.transforms.RandomCrop(size=patch_size),
torchvision.transforms.RandomHorizontalFlip(),
torchvision.transforms.RandomVerticalFlip(),
torchvision.transforms.RandomRotation(degrees=180),
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(mean=(0.485, 0.456, 0.406),
std=(0.229, 0.224, 0.225))])
else:
HQ_diff_content_transform = torchvision.transforms.Compose([
torchvision.transforms.RandomCrop(size=patch_size),
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(mean=(0.485, 0.456, 0.406),
std=(0.229, 0.224, 0.225))])
elif dataset == 'bid':
if istrain:
HQ_diff_content_transform = torchvision.transforms.Compose([
torchvision.transforms.Resize((512, 512)),
torchvision.transforms.RandomCrop(size=patch_size),
torchvision.transforms.RandomHorizontalFlip(),
torchvision.transforms.RandomVerticalFlip(),
torchvision.transforms.RandomRotation(degrees=180),
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(mean=(0.485, 0.456, 0.406),
std=(0.229, 0.224, 0.225))])
else:
HQ_diff_content_transform = torchvision.transforms.Compose([
torchvision.transforms.Resize((512, 512)),
torchvision.transforms.RandomCrop(size=patch_size),
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(mean=(0.485, 0.456, 0.406),
std=(0.229, 0.224, 0.225))])
else:
HQ_diff_content_transform = torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(mean=(0.485, 0.456, 0.406),
std=(0.229, 0.224, 0.225))])
transforms = torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(mean=(0.485, 0.456, 0.406),
std=(0.229, 0.224, 0.225))])
if dataset == 'live':
self.data = folders.LIVEFolder(
root=path, HQ_diff_content_root=ref_path, index=img_indx, transform=transforms, HQ_diff_content_transform=HQ_diff_content_transform, patch_num=patch_num, patch_size = patch_size, self_patch_num=self_patch_num)
elif dataset == 'csiq':
self.data = folders.CSIQFolder(
root=path, HQ_diff_content_root=ref_path, index=img_indx, transform=transforms, HQ_diff_content_transform=HQ_diff_content_transform, patch_num=patch_num, patch_size = patch_size, self_patch_num=self_patch_num)
elif dataset == 'kadid10k':
self.data = folders.Kadid10kFolder(
root=path, HQ_diff_content_root=ref_path, index=img_indx, transform=transforms, HQ_diff_content_transform=HQ_diff_content_transform, patch_num=patch_num, patch_size = patch_size, self_patch_num=self_patch_num)
elif dataset == 'tid2013':
self.data = folders.TID2013Folder(
root=path, HQ_diff_content_root=ref_path, index=img_indx, transform=transforms, HQ_diff_content_transform=HQ_diff_content_transform, patch_num=patch_num, patch_size = patch_size, self_patch_num=self_patch_num)
elif dataset == 'koniq-10k':
self.data = folders.Koniq_10kFolder(
root=path, HQ_diff_content_root=ref_path, index=img_indx, transform=transforms, HQ_diff_content_transform=HQ_diff_content_transform, patch_num=patch_num, patch_size = patch_size, self_patch_num=self_patch_num)
elif dataset == 'livec':
self.data = folders.LIVEChallengeFolder(
root=path, HQ_diff_content_root=ref_path, index=img_indx, transform=transforms, HQ_diff_content_transform=HQ_diff_content_transform, patch_num=patch_num, patch_size = patch_size, self_patch_num=self_patch_num)
def get_dataloader(self):
if self.istrain:
dataloader = torch.utils.data.DataLoader(
self.data, batch_size=self.batch_size, shuffle=True)
else:
dataloader = torch.utils.data.DataLoader(
self.data, batch_size=self.batch_size, shuffle=False)
return dataloader
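# Usage sketch (illustrative only; paths, indices and sizes are placeholders):
#
#   loader = DataLoader('live', path='/data/LIVE', ref_path='/data/HQ_refs',
#                       img_indx=list(range(20)), patch_size=224,
#                       patch_num=1, batch_size=8, istrain=True)
#   train_loader = loader.get_dataloader()
#   for batch in train_loader:
#       ...  # patches, HQ references and quality labels, per the folder class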
| 62.480769 | 225 | 0.596337 | 671 | 6,498 | 5.527571 | 0.122206 | 0.232138 | 0.09113 | 0.112699 | 0.823672 | 0.823672 | 0.823672 | 0.823672 | 0.823672 | 0.823672 | 0 | 0.053262 | 0.306556 | 6,498 | 103 | 226 | 63.087379 | 0.769862 | 0.010003 | 0 | 0.723404 | 0 | 0 | 0.011983 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.021277 | false | 0 | 0.031915 | 0 | 0.074468 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
89a258555bd577eac28d4d95ec3b44ed3686b533 | 133 | py | Python | ezwrappers/__init__.py | jrminter/ezwrappers | 89da5bb0f555901813a4da0e1c60a193c3c77d65 | [
"MIT"
] | null | null | null | ezwrappers/__init__.py | jrminter/ezwrappers | 89da5bb0f555901813a4da0e1c60a193c3c77d65 | [
"MIT"
] | null | null | null | ezwrappers/__init__.py | jrminter/ezwrappers | 89da5bb0f555901813a4da0e1c60a193c3c77d65 | [
"MIT"
] | null | null | null | from .map_tools import *
from .plotting_tools import *
from .peak_detect import *
from .savitzky_golay import *
from .utils import *
| 22.166667 | 29 | 0.774436 | 19 | 133 | 5.210526 | 0.526316 | 0.40404 | 0.30303 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.150376 | 133 | 5 | 30 | 26.6 | 0.876106 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
987c65c912783b7cff6007c2840271d123d580ea | 13,507 | py | Python | tests/test_events.py | rwhitt2049/nimble | e50587d6d8e38449e496a870f460e723f0f595bd | [
"MIT"
] | null | null | null | tests/test_events.py | rwhitt2049/nimble | e50587d6d8e38449e496a870f460e723f0f595bd | [
"MIT"
] | 24 | 2016-07-22T03:42:49.000Z | 2016-10-21T04:11:09.000Z | tests/test_events.py | rwhitt2049/nimble | e50587d6d8e38449e496a870f460e723f0f595bd | [
"MIT"
] | null | null | null | import numpy as np
import numpy.testing as npt
import pandas as pd
from unittest import TestCase, main
from nimble import Events
class EvTestCase(TestCase):
@staticmethod
def assertStartStops(events, vstarts, vstops):
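# Shared assertion helper: detected start/stop sample indices must match
# the expected arrays exactly.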
npt.assert_array_equal(events._starts, vstarts)
npt.assert_array_equal(events._stops, vstops)
class TestDebouncing(EvTestCase):
def setUp(self):
condarr = np.array([0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1])
self.cond = condarr > 0
def test_adeb(self):
vstarts = np.array([2, 7])
vstops = np.array([4, 10])
events = Events(self.cond, period=1, adeb=2).find()
self.assertStartStops(events, vstarts, vstops)
def test_ddeb(self):
vstarts = np.array([2, 7])
vstops = np.array([4, 12])
events = Events(self.cond, period=1, ddeb=2).find()
self.assertStartStops(events, vstarts, vstops)
def test_adeb_ddeb(self):
vstarts = np.array([2])
vstops = np.array([12])
events = Events(self.cond, period=1, adeb=2, ddeb=3.1).find()
self.assertStartStops(events, vstarts, vstops)
def test_nonint_deb(self):
vstarts = np.array([2, 7, 11])
vstops = np.array([4, 10, 12])
events = Events(self.cond, period=1, adeb=float(0.00000001),
ddeb=float(0.99999999)).find()
self.assertStartStops(events, vstarts, vstops)
def test_period_100ms(self):
vstarts = np.array([2, 7])
vstops = np.array([4, 12])
events = Events(self.cond, period=0.1, adeb=0.15, ddeb=0.2).find()
self.assertStartStops(events, vstarts, vstops)
def test_period_120ms(self):
vstarts = np.array([2, 7])
vstops = np.array([4, 12])
events = Events(self.cond, period=0.12, adeb=0.15, ddeb=0.2).find()
self.assertStartStops(events, vstarts, vstops)
def test_no_events_found(self):
vstarts = np.array([])
vstops = np.array([])
x = np.array([0, 0, 0, 0, 0, 0, 0, 0])
events = Events(x > 0, period=1, adeb=0.15, ddeb=0.2).find()
self.assertStartStops(events, vstarts, vstops)
def test_event_always_active(self):
vstarts = np.array([0])
vstops = np.array([8])
x = np.array([0, 0, 0, 0, 0, 0, 0, 0])
events = Events(x == 0, period=1, adeb=0.15, ddeb=0.2).find()
self.assertStartStops(events, vstarts, vstops)
def test_end_conditions(self):
vstarts = np.array([0, 6])
vstops = np.array([2, 8])
x = np.array([1, 1, 0, 0, 0, 0, 1, 1])
events = Events(x == 1, period=1, adeb=2, ddeb=2).find()
self.assertStartStops(events, vstarts, vstops)
class TestDurationFilter(EvTestCase):
def setUp(self):
condarr = np.array([0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1])
self.cond = condarr > 0
def test_mindur(self):
vstarts = np.array([2, 7])
vstops = np.array([4, 10])
events = Events(self.cond, period=1, mindur=2).find()
self.assertStartStops(events, vstarts, vstops)
def test_maxdur(self):
vstarts = np.array([2, 11])
vstops = np.array([4, 12])
events = Events(self.cond, period=1, maxdur=2).find()
self.assertStartStops(events, vstarts, vstops)
def test_mindur_maxdur(self):
vstarts = np.array([2])
vstops = np.array([4])
events = Events(self.cond, period=1, mindur=2, maxdur=2.5).find()
self.assertStartStops(events, vstarts, vstops)
def test_nonint_durs(self):
vstarts = np.array([2])
vstops = np.array([4])
events = Events(self.cond, period=1, mindur=float(1.00000001),
maxdur=float(2.99999999)).find()
self.assertStartStops(events, vstarts, vstops)
def test_period_100ms(self):
vstarts = np.array([2])
vstops = np.array([4])
events = Events(self.cond, period=0.1, mindur=0.15, maxdur=0.2).find()
self.assertStartStops(events, vstarts, vstops)
def test_period_120ms(self):
vstarts = np.array([2])
vstops = np.array([4])
events = Events(self.cond, period=0.12, mindur=0.15, maxdur=0.35).find()
self.assertStartStops(events, vstarts, vstops)
def test_no_events_found(self):
vstarts = np.array([])
vstops = np.array([])
x = np.array([0, 0, 0, 0, 0, 0, 0, 0])
events = Events(x > 0, period=1, mindur=0.15, maxdur=0.2).find()
self.assertStartStops(events, vstarts, vstops)
def test_event_always_active(self):
vstarts = np.array([0])
vstops = np.array([8])
x = np.array([0, 0, 0, 0, 0, 0, 0, 0])
events = Events(x == 0, period=1, mindur=0.15, maxdur=20).find()
self.assertStartStops(events, vstarts, vstops)
def test_end_conditions(self):
vstarts = np.array([0, 6])
vstops = np.array([2, 8])
x = np.array([1, 1, 0, 0, 0, 0, 1, 1])
events = Events(x == 1, period=1, mindur=2, maxdur=2).find()
self.assertStartStops(events, vstarts, vstops)
class TestEventOffset(EvTestCase):
def setUp(self):
condarr = np.array([0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1])
self.cond = condarr > 0
def test_startoffset(self):
vstarts = np.array([1, 6, 10])
vstops = np.array([4, 10, 12])
events = Events(self.cond, period=1, startoffset=-1).find()
self.assertStartStops(events, vstarts, vstops)
def test_stopoffset(self):
vstarts = np.array([2, 7, 11])
vstops = np.array([5, 11, 12])
events = Events(self.cond, period=1, stopoffset=1).find()
self.assertStartStops(events, vstarts, vstops)
def test_startoffset_stopoffset(self):
vstarts = np.array([1, 6, 10])
vstops = np.array([5, 11, 12])
events = Events(self.cond, period=1, startoffset=-1, stopoffset=1).find()
self.assertStartStops(events, vstarts, vstops)
def test_period_100ms(self):
vstarts = np.array([1, 6, 10])
vstops = np.array([5, 11, 12])
events = Events(self.cond, period=0.1, startoffset=-0.1, stopoffset=0.1).find()
self.assertStartStops(events, vstarts, vstops)
def test_period_120ms(self):
vstarts = np.array([1, 6, 10])
vstops = np.array([5, 11, 12])
events = Events(self.cond, period=0.12, startoffset=-0.1, stopoffset=0.1).find()
self.assertStartStops(events, vstarts, vstops)
def test_no_events_found(self):
vstarts = np.array([])
vstops = np.array([])
x = np.array([0, 0, 0, 0, 0, 0, 0, 0])
events = Events(x > 0, period=1, startoffset=-1, stopoffset=1).find()
self.assertStartStops(events, vstarts, vstops)
def test_event_always_active(self):
vstarts = np.array([0])
vstops = np.array([8])
x = np.array([0, 0, 0, 0, 0, 0, 0, 0])
events = Events(x == 0, period=1, startoffset=-1, stopoffset=1).find()
self.assertStartStops(events, vstarts, vstops)
def test_end_conditions(self):
vstarts = np.array([0, 5])
vstops = np.array([3, 8])
x = np.array([1, 1, 0, 0, 0, 0, 1, 1])
events = Events(x == 1, period=1, startoffset=-1, stopoffset=1).find()
self.assertStartStops(events, vstarts, vstops)
class TestAsArrayMethod(TestCase):
def setUp(self):
conditional_array = np.array([0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1])
condition = (conditional_array > 0)
self.events = Events(condition, period=1).find()
def test_default_parameters(self):
"""Test as_array() with default settings"""
validation_array = np.array([0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1])
npt.assert_array_equal(validation_array, self.events.as_array())
def test_as_array_false_value(self):
"""Test as_array() with low value"""
validation_array = np.array([-1, 1, 1, 1, -1, -1, -1, 1, 1, -1, 1, 1])
npt.assert_array_equal(validation_array, self.events.as_array(
false_values=-1))
def test_as_array_true_value(self):
"""Test as_array() with high value"""
validation_array = np.array([0, 5, 5, 5, 0, 0, 0, 5, 5, 0, 5, 5])
npt.assert_array_equal(validation_array, self.events.as_array(
true_values=5))
def test_as_array_false_and_true_value(self):
"""Test as_array() with low and high values"""
validation_array = np.array([-1, 5, 5, 5, -1, -1, -1, 5, 5, -1, 5, 5])
npt.assert_array_equal(validation_array, self.events.as_array(
false_values=-1,
true_values=5))
def test_type(self):
typ = type(self.events.as_array(false_values=-1, true_values=5))
self.assertEqual(typ, np.ndarray)
class TestAsSeries(TestCase):
def setUp(self):
conditional_array = np.array([0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1])
condition = (conditional_array > 0)
self.events = Events(condition, period=1).find()
def test_default_parameters(self):
"""Test as_array() with default settings"""
validation_series = pd.Series([0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1])
npt.assert_array_equal(validation_series, self.events.as_series())
def test_as_series_false_value(self):
"""Test as_series() with low value"""
validation_series = np.array([-1, 1, 1, 1, -1, -1, -1, 1, 1, -1, 1, 1])
npt.assert_array_equal(validation_series, self.events.as_series(
false_values=-1))
def test_as_series_true_value(self):
"""Test as_series() with high value"""
validation_series = np.array([0, 5, 5, 5, 0, 0, 0, 5, 5, 0, 5, 5])
npt.assert_array_equal(validation_series, self.events.as_series(
true_values=5))
def test_as_series_false_and_true_value(self):
"""Test as_series() with low and high values"""
validation_series = np.array([-1, 5, 5, 5, -1, -1, -1, 5, 5, -1, 5, 5])
npt.assert_array_equal(validation_series, self.events.as_series(
false_values=-1,
true_values=5))
def test_type(self):
typ = type(self.events.as_series(false_values=-1, true_values=5))
self.assertEqual(typ, pd.core.series.Series)
class TestDurations(TestCase):
def setUp(self):
condition_array = np.array([1, 0, 1, 1, 1, 1, 0, 0, 1, 1,
0, 0, 0, 1, 0, 0, 0, 1, 0, 0])
condition = (condition_array > 0)
self.events = Events(condition, period=1/3,
adeb=0.5, ddeb=1).find()
def test_durations(self):
# validation_array = np.array([0, 0, 1, 1, 1, 1, 1, 1, 1, 1,
# 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
validation_durations = [(8 / 3)]
npt.assert_array_equal(validation_durations, self.events.durations)
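# With period = 1/3 s, the single debounced event spans 8 samples, i.e.
# 8/3 s (see the commented validation_array above).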
class TestEventDetection(TestCase):
def test_default_parameters(self):
"""Test event detection with only a supplied condition"""
np.random.seed(10)
validation_array = np.random.randint(0, 2, 100)  # 0 or 1; replaces deprecated random_integers
condition = (validation_array > 0)
events = Events(condition, period=1).find()
npt.assert_array_equal(validation_array, events.as_array())
def test_multi_input_condition_event(self):
"""Test arrays that have multi-input conditions"""
x = np.array([0, 1, 1, 1, 0, 0, 0, 1, 1, 0])
y = np.array([0, 0, 1, 1, 1, 0, 0, 1, 0, 1])
validation_array = np.array([0, 0, 1, 1, 0, 0, 0, 1, 0, 0])
condition = ((x > 0) & (y > 0))
events = Events(condition, period=1).find()
npt.assert_array_equal(validation_array, events.as_array())
class TestSpecialMethods(TestCase):
def setUp(self):
condition_array = np.array([1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1])
self.condition = (condition_array > 0)
self.events = Events(self.condition, period=1).find()
def test__len__(self):
self.assertEqual(4, len(self.events))
def test__eq__(self):
other = Events(self.condition, period=1).find()
self.assertEqual(self.events, other)
class TestAttributes(TestCase):
def setUp(self):
condition_array = np.array([1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1])
self.condition = (condition_array > 0)
def test_period(self):
self.assertRaises(ValueError, Events, self.condition, period=0)
def test_startoffset(self):
self.assertRaises(ValueError, Events, self.condition,
period=1, startoffset=1)
def test_stopoffset(self):
self.assertRaises(ValueError, Events, self.condition, period=1, stopoffset=-1)
class TestProperties(TestCase):
def setUp(self):
self.events = Events(np.array([False, False]), period=0.12,
adeb=1, ddeb=1,
mindur=1, maxdur=1,
startoffset=-1, stopoffset=1)
def test_adeb(self):
self.assertEqual(self.events._adeb, 9)
def test_ddeb(self):
self.assertEqual(self.events._ddeb, 9)
def test_mindur(self):
self.assertEqual(self.events._mindur, 9)
def test_maxdur(self):
self.assertEqual(self.events._maxdur, 8)
def test_startoffset(self):
self.assertEqual(self.events._startoffset, -9)
def test_stopoffset(self):
self.assertEqual(self.events._stopoffset, 9)
if __name__ == '__main__':
main()
| 37.005479 | 88 | 0.593544 | 1,914 | 13,507 | 4.07419 | 0.064786 | 0.026161 | 0.024237 | 0.020518 | 0.85663 | 0.806874 | 0.788023 | 0.783021 | 0.73506 | 0.655296 | 0 | 0.066979 | 0.256089 | 13,507 | 364 | 89 | 37.107143 | 0.709096 | 0.037018 | 0 | 0.608696 | 0 | 0 | 0.000618 | 0 | 0 | 0 | 0 | 0 | 0.192029 | 1 | 0.217391 | false | 0 | 0.018116 | 0 | 0.275362 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
98a328a35000d881332bd7f9378e4b8aa5fe6443 | 182 | py | Python | examples/multitag_web_scraper.py | dimanil/fast_request | 39f6769e15474aea1aa3aced6bb07a817a2df3ba | [
"MIT"
] | 857 | 2018-11-18T17:55:01.000Z | 2022-03-31T23:39:10.000Z | examples/multitag_web_scraper.py | dimanil/fast_request | 39f6769e15474aea1aa3aced6bb07a817a2df3ba | [
"MIT"
] | 181 | 2018-12-08T18:31:05.000Z | 2022-03-29T01:40:02.000Z | examples/multitag_web_scraper.py | dimanil/fast_request | 39f6769e15474aea1aa3aced6bb07a817a2df3ba | [
"MIT"
] | 92 | 2018-11-22T03:53:31.000Z | 2022-03-21T10:54:24.000Z | from faster_than_requests import scraper2
print(scraper2(["https://nim-lang.org", "https://nim-lang.org"], list_of_tags=["h1", "a"], case_insensitive=False, deduplicate_urls=False))
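# Scrapes the <h1> and <a> tags from both URLs; matching is case-sensitive
# and duplicate URLs are not collapsed with the flags chosen here.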
| 60.666667 | 139 | 0.758242 | 27 | 182 | 4.888889 | 0.777778 | 0.121212 | 0.181818 | 0.227273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017442 | 0.054945 | 182 | 2 | 140 | 91 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0.236264 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 7 |
98a64cef7270f3bb0f99d197ced3f26a528d479e | 183 | py | Python | split_data.py | ece324-2020/Monumentum | cb52b9d8e19dd922f044a761d6523400d274709e | [
"MIT"
] | null | null | null | split_data.py | ece324-2020/Monumentum | cb52b9d8e19dd922f044a761d6523400d274709e | [
"MIT"
] | null | null | null | split_data.py | ece324-2020/Monumentum | cb52b9d8e19dd922f044a761d6523400d274709e | [
"MIT"
] | null | null | null | import splitfolders
import os
splitfolders.ratio('data_main'+os.sep+'dataset_delf_filtered_augmented', output="dataset_delf_filtered_augmented_split", seed=1337, ratio=(.8, 0.1,0.1))
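# Hedged note (added): `splitfolders.ratio` copies the class subfolders of the
# input dataset into train/val/test splits under the output directory using the
# (.8, 0.1, 0.1) proportions; seed=1337 keeps the shuffle reproducible.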
| 45.75 | 152 | 0.814208 | 28 | 183 | 5.035714 | 0.642857 | 0.156028 | 0.269504 | 0.397163 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.051724 | 0.04918 | 183 | 3 | 153 | 61 | 0.758621 | 0 | 0 | 0 | 0 | 0 | 0.420765 | 0.371585 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
98ba667d127f9a119a835ba9a8a3536cce251498 | 1,099 | py | Python | operaciones.py | Javierhidalgo95/Hidalgo-Lopez---PC-PYTHON | 52f08b9e1d40584491c28b685c6ffafdf38d06e1 | [
"Apache-2.0"
] | null | null | null | operaciones.py | Javierhidalgo95/Hidalgo-Lopez---PC-PYTHON | 52f08b9e1d40584491c28b685c6ffafdf38d06e1 | [
"Apache-2.0"
] | null | null | null | operaciones.py | Javierhidalgo95/Hidalgo-Lopez---PC-PYTHON | 52f08b9e1d40584491c28b685c6ffafdf38d06e1 | [
"Apache-2.0"
] | null | null | null |
# Solution 10
i = 1
while i == 1:
    try:
        a = float(input("Enter the first number: "))
        b = float(input("Enter the second number: "))
        print(f"The result of the addition is: {a + b}")
    except ValueError:
        print("Invalid data type")
    try:
        a = float(input("Enter the first number: "))
        b = float(input("Enter the second number: "))
        print(f"The result of the subtraction is: {a - b}")
    except ValueError:
        print("Invalid data type")
    try:
        a = float(input("Enter the first number: "))
        b = float(input("Enter the second number: "))
        print(f"The result of the multiplication is: {a * b}")
    except ValueError:
        print("Invalid data type")
    try:
        a = float(input("Enter the first number: "))
        b = float(input("Enter the second number: "))
        print(f"The result of the division is: {a / b}")
    except ValueError:
        print("Invalid data type")
    except ZeroDivisionError:
        print("Division by zero is not possible")
    men = input("Do you want to continue? s(yes) n(no) ")
    if men != "s":
        i = 0
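# Hedged refactor note (added; not part of the original script): the four
# try/except blocks above differ only in the operator and its label, so a
# table-driven loop could express the same program once, e.g.:
#     for label, op in [("addition", lambda a, b: a + b),
#                       ("division", lambda a, b: a / b)]:
#         ...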
| 31.4 | 62 | 0.55778 | 150 | 1,099 | 4.086667 | 0.266667 | 0.130506 | 0.247961 | 0.274062 | 0.822186 | 0.822186 | 0.822186 | 0.822186 | 0.822186 | 0.822186 | 0 | 0.005312 | 0.314832 | 1,099 | 35 | 63 | 31.4 | 0.808765 | 0.010009 | 0 | 0.714286 | 0 | 0 | 0.497148 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.321429 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
7f997d1481f3815e03422b9dd6397b5c3e92872e | 1,267 | py | Python | tests/migrations/system/test_check_latest.py | jsangmeister/openslides-datastore-service | 7170f008ccac0b31c37ffeee083b972bc314660d | [
"MIT"
] | 2 | 2020-01-20T13:56:28.000Z | 2020-02-17T10:56:26.000Z | tests/migrations/system/test_check_latest.py | jsangmeister/openslides-datastore-service | 7170f008ccac0b31c37ffeee083b972bc314660d | [
"MIT"
] | 122 | 2020-01-16T15:13:37.000Z | 2022-03-17T10:32:47.000Z | tests/migrations/system/test_check_latest.py | jsangmeister/openslides-datastore-service | 7170f008ccac0b31c37ffeee083b972bc314660d | [
"MIT"
] | 7 | 2020-02-20T12:04:17.000Z | 2021-11-23T17:54:33.000Z | from unittest.mock import MagicMock
from ..util import get_noop_migration
def test_set_latest_migrate(
migration_handler, connection_handler, write, query_single_value
):
write({"type": "create", "fqid": "a/1", "fields": {}})
write({"type": "create", "fqid": "a/2", "fields": {}})
migration_handler.run_migrations = rm = MagicMock()
migration_handler.register_migrations(get_noop_migration(2), get_noop_migration(3))
migration_handler.migrate()
rm.assert_not_called()
assert query_single_value("select max(migration_index) from positions") == 3
assert query_single_value("select min(migration_index) from positions") == 3
def test_migration_index_too_high_finalize(
migration_handler, connection_handler, write, query_single_value
):
write({"type": "create", "fqid": "a/1", "fields": {}})
write({"type": "create", "fqid": "a/2", "fields": {}})
migration_handler.run_migrations = rm = MagicMock()
migration_handler.register_migrations(get_noop_migration(2), get_noop_migration(3))
migration_handler.finalize()
rm.assert_not_called()
assert query_single_value("select max(migration_index) from positions") == 3
assert query_single_value("select min(migration_index) from positions") == 3
| 37.264706 | 87 | 0.726914 | 161 | 1,267 | 5.397516 | 0.267081 | 0.147296 | 0.110472 | 0.087457 | 0.844649 | 0.844649 | 0.844649 | 0.844649 | 0.844649 | 0.844649 | 0 | 0.010969 | 0.136543 | 1,267 | 33 | 88 | 38.393939 | 0.783364 | 0 | 0 | 0.75 | 0 | 0 | 0.205209 | 0 | 0 | 0 | 0 | 0 | 0.25 | 1 | 0.083333 | false | 0 | 0.083333 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
7fb58c50345310649f97edd7b3a1f2e743790ba5 | 4,232 | py | Python | src/test.py | gabriel-libardi/sorting_algorithms | f79195306c02f53a03dda2cb9c0c37ac2ad92ffd | [
"MIT"
] | null | null | null | src/test.py | gabriel-libardi/sorting_algorithms | f79195306c02f53a03dda2cb9c0c37ac2ad92ffd | [
"MIT"
] | null | null | null | src/test.py | gabriel-libardi/sorting_algorithms | f79195306c02f53a03dda2cb9c0c37ac2ad92ffd | [
"MIT"
] | null | null | null | import ctypes
import pytest
import random
sort = ctypes.cdll.LoadLibrary("sorting_algorithms.so")
IntVector = ctypes.c_int*25
def rand_list(length):
return [random.randint(-100,100) for _ in range(length)]
def test_bubble_sort():
'''Tests whether bubble_sort() works properly.'''
for _ in range(1000):
int_rand_list = rand_list(25)
c_int_rand_list = IntVector()
length = ctypes.c_size_t(25)
for index in range(25):
c_int_rand_list[index] = int_rand_list[index]
sort.bubble_sort(c_int_rand_list, length)
assert sorted(int_rand_list) == [element for element in c_int_rand_list]
def test_selection_sort():
    '''Tests whether selection_sort() works properly.'''
for _ in range(1000):
int_rand_list = rand_list(25)
c_int_rand_list = IntVector()
length = ctypes.c_size_t(25)
for index in range(25):
c_int_rand_list[index] = int_rand_list[index]
sort.selection_sort(c_int_rand_list, length)
assert sorted(int_rand_list) == [element for element in c_int_rand_list]
def test_insertion_sort():
'''Tests whether insertion_sort() works properly.'''
for _ in range(1000):
int_rand_list = rand_list(25)
c_int_rand_list = IntVector()
length = ctypes.c_size_t(25)
for index in range(25):
c_int_rand_list[index] = int_rand_list[index]
sort.insertion_sort(c_int_rand_list, length)
assert sorted(int_rand_list) == [element for element in c_int_rand_list]
def test_merge_sort():
'''Tests whether merge_sort() works properly.'''
for _ in range(1000):
int_rand_list = rand_list(25)
c_int_rand_list = IntVector()
length = ctypes.c_size_t(25)
for index in range(25):
c_int_rand_list[index] = int_rand_list[index]
sort.merge_sort(c_int_rand_list, length)
assert sorted(int_rand_list) == [element for element in c_int_rand_list]
def test_heap_sort():
'''Tests whether heap_sort() works properly.'''
for _ in range(1000):
int_rand_list = rand_list(25)
c_int_rand_list = IntVector()
length = ctypes.c_size_t(25)
for index in range(25):
c_int_rand_list[index] = int_rand_list[index]
sort.heap_sort(c_int_rand_list, length)
assert sorted(int_rand_list) == [element for element in c_int_rand_list]
def test_shell_sort():
'''Tests whether shell_sort() works properly.'''
for _ in range(1000):
int_rand_list = rand_list(25)
c_int_rand_list = IntVector()
length = ctypes.c_size_t(25)
for index in range(25):
c_int_rand_list[index] = int_rand_list[index]
sort.shell_sort(c_int_rand_list, length)
assert sorted(int_rand_list) == [element for element in c_int_rand_list]
def test_quick_sort():
'''Tests whether quick_sort() works properly.'''
for _ in range(1000):
int_rand_list = rand_list(25)
c_int_rand_list = IntVector()
length = ctypes.c_size_t(25)
for index in range(25):
c_int_rand_list[index] = int_rand_list[index]
sort.quick_sort(c_int_rand_list, length)
assert sorted(int_rand_list) == [element for element in c_int_rand_list]
def test_quick_sort_lomuto():
'''Tests whether quick_sort_lomuto() works properly.'''
for _ in range(1000):
int_rand_list = rand_list(25)
c_int_rand_list = IntVector()
length = ctypes.c_size_t(25)
for index in range(25):
c_int_rand_list[index] = int_rand_list[index]
sort.quick_sort_lomuto(c_int_rand_list, length)
assert sorted(int_rand_list) == [element for element in c_int_rand_list]
def test_counting_sort():
'''Tests whether counting_sort() works properly.'''
for _ in range(1000):
int_rand_list = rand_list(25)
c_int_rand_list = IntVector()
length = ctypes.c_size_t(25)
for index in range(25):
c_int_rand_list[index] = int_rand_list[index]
sort.counting_sort(c_int_rand_list, length, 201)
assert sorted(int_rand_list) == [element for element in c_int_rand_list]
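# Hedged refactor sketch (added; not part of the original file): every test
# above repeats the same body, so one parametrized test -- using the otherwise
# unused `pytest` import -- can cover the comparison sorts. `counting_sort`
# needs its value-range argument (201 for values in [-100, 100]) and would stay
# separate.
@pytest.mark.parametrize("name", [
    "bubble_sort", "selection_sort", "insertion_sort", "merge_sort",
    "heap_sort", "shell_sort", "quick_sort", "quick_sort_lomuto",
])
def test_sort_parametrized(name):
    int_rand_list = rand_list(25)
    c_int_rand_list = IntVector(*int_rand_list)
    getattr(sort, name)(c_int_rand_list, ctypes.c_size_t(25))
    assert sorted(int_rand_list) == list(c_int_rand_list)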
| 28.789116 | 80 | 0.662098 | 629 | 4,232 | 4.071542 | 0.074722 | 0.228036 | 0.270597 | 0.168684 | 0.814916 | 0.814916 | 0.806326 | 0.806326 | 0.806326 | 0.806326 | 0 | 0.031435 | 0.240785 | 4,232 | 146 | 81 | 28.986301 | 0.76564 | 0.095227 | 0 | 0.715909 | 0 | 0 | 0.005551 | 0.005551 | 0 | 0 | 0 | 0 | 0.102273 | 1 | 0.113636 | false | 0 | 0.034091 | 0.011364 | 0.159091 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
f6de99564511c31e31e25bc2c45d7d25dd17079a | 40,960 | py | Python | sdk/python/pulumi_azure/compute/shared_image.py | aangelisc/pulumi-azure | 71dd9c75403146e16f7480e5a60b08bc0329660e | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_azure/compute/shared_image.py | aangelisc/pulumi-azure | 71dd9c75403146e16f7480e5a60b08bc0329660e | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_azure/compute/shared_image.py | aangelisc/pulumi-azure | 71dd9c75403146e16f7480e5a60b08bc0329660e | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
from . import outputs
from ._inputs import *
__all__ = ['SharedImageArgs', 'SharedImage']
@pulumi.input_type
class SharedImageArgs:
def __init__(__self__, *,
gallery_name: pulumi.Input[str],
identifier: pulumi.Input['SharedImageIdentifierArgs'],
os_type: pulumi.Input[str],
resource_group_name: pulumi.Input[str],
description: Optional[pulumi.Input[str]] = None,
eula: Optional[pulumi.Input[str]] = None,
hyper_v_generation: Optional[pulumi.Input[str]] = None,
location: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
privacy_statement_uri: Optional[pulumi.Input[str]] = None,
purchase_plan: Optional[pulumi.Input['SharedImagePurchasePlanArgs']] = None,
release_note_uri: Optional[pulumi.Input[str]] = None,
specialized: Optional[pulumi.Input[bool]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None):
"""
The set of arguments for constructing a SharedImage resource.
:param pulumi.Input[str] gallery_name: Specifies the name of the Shared Image Gallery in which this Shared Image should exist. Changing this forces a new resource to be created.
:param pulumi.Input['SharedImageIdentifierArgs'] identifier: An `identifier` block as defined below.
:param pulumi.Input[str] os_type: The type of Operating System present in this Shared Image. Possible values are `Linux` and `Windows`. Changing this forces a new resource to be created.
:param pulumi.Input[str] resource_group_name: The name of the resource group in which the Shared Image Gallery exists. Changing this forces a new resource to be created.
:param pulumi.Input[str] description: A description of this Shared Image.
:param pulumi.Input[str] eula: The End User Licence Agreement for the Shared Image.
:param pulumi.Input[str] hyper_v_generation: The generation of HyperV that the Virtual Machine used to create the Shared Image is based on. Possible values are `V1` and `V2`. Defaults to `V1`. Changing this forces a new resource to be created.
:param pulumi.Input[str] location: Specifies the supported Azure location where the Shared Image Gallery exists. Changing this forces a new resource to be created.
:param pulumi.Input[str] name: Specifies the name of the Shared Image. Changing this forces a new resource to be created.
:param pulumi.Input[str] privacy_statement_uri: The URI containing the Privacy Statement associated with this Shared Image.
:param pulumi.Input['SharedImagePurchasePlanArgs'] purchase_plan: A `purchase_plan` block as defined below.
:param pulumi.Input[str] release_note_uri: The URI containing the Release Notes associated with this Shared Image.
:param pulumi.Input[bool] specialized: Specifies that the Operating System used inside this Image has not been Generalized (for example, `sysprep` on Windows has not been run). Defaults to `false`. Changing this forces a new resource to be created.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: A mapping of tags to assign to the Shared Image.
"""
pulumi.set(__self__, "gallery_name", gallery_name)
pulumi.set(__self__, "identifier", identifier)
pulumi.set(__self__, "os_type", os_type)
pulumi.set(__self__, "resource_group_name", resource_group_name)
if description is not None:
pulumi.set(__self__, "description", description)
if eula is not None:
pulumi.set(__self__, "eula", eula)
if hyper_v_generation is not None:
pulumi.set(__self__, "hyper_v_generation", hyper_v_generation)
if location is not None:
pulumi.set(__self__, "location", location)
if name is not None:
pulumi.set(__self__, "name", name)
if privacy_statement_uri is not None:
pulumi.set(__self__, "privacy_statement_uri", privacy_statement_uri)
if purchase_plan is not None:
pulumi.set(__self__, "purchase_plan", purchase_plan)
if release_note_uri is not None:
pulumi.set(__self__, "release_note_uri", release_note_uri)
if specialized is not None:
pulumi.set(__self__, "specialized", specialized)
if tags is not None:
pulumi.set(__self__, "tags", tags)
@property
@pulumi.getter(name="galleryName")
def gallery_name(self) -> pulumi.Input[str]:
"""
Specifies the name of the Shared Image Gallery in which this Shared Image should exist. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "gallery_name")
@gallery_name.setter
def gallery_name(self, value: pulumi.Input[str]):
pulumi.set(self, "gallery_name", value)
@property
@pulumi.getter
def identifier(self) -> pulumi.Input['SharedImageIdentifierArgs']:
"""
An `identifier` block as defined below.
"""
return pulumi.get(self, "identifier")
@identifier.setter
def identifier(self, value: pulumi.Input['SharedImageIdentifierArgs']):
pulumi.set(self, "identifier", value)
@property
@pulumi.getter(name="osType")
def os_type(self) -> pulumi.Input[str]:
"""
The type of Operating System present in this Shared Image. Possible values are `Linux` and `Windows`. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "os_type")
@os_type.setter
def os_type(self, value: pulumi.Input[str]):
pulumi.set(self, "os_type", value)
@property
@pulumi.getter(name="resourceGroupName")
def resource_group_name(self) -> pulumi.Input[str]:
"""
The name of the resource group in which the Shared Image Gallery exists. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "resource_group_name")
@resource_group_name.setter
def resource_group_name(self, value: pulumi.Input[str]):
pulumi.set(self, "resource_group_name", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
A description of this Shared Image.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def eula(self) -> Optional[pulumi.Input[str]]:
"""
The End User Licence Agreement for the Shared Image.
"""
return pulumi.get(self, "eula")
@eula.setter
def eula(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "eula", value)
@property
@pulumi.getter(name="hyperVGeneration")
def hyper_v_generation(self) -> Optional[pulumi.Input[str]]:
"""
The generation of HyperV that the Virtual Machine used to create the Shared Image is based on. Possible values are `V1` and `V2`. Defaults to `V1`. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "hyper_v_generation")
@hyper_v_generation.setter
def hyper_v_generation(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "hyper_v_generation", value)
@property
@pulumi.getter
def location(self) -> Optional[pulumi.Input[str]]:
"""
Specifies the supported Azure location where the Shared Image Gallery exists. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "location")
@location.setter
def location(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "location", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Specifies the name of the Shared Image. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter(name="privacyStatementUri")
def privacy_statement_uri(self) -> Optional[pulumi.Input[str]]:
"""
The URI containing the Privacy Statement associated with this Shared Image.
"""
return pulumi.get(self, "privacy_statement_uri")
@privacy_statement_uri.setter
def privacy_statement_uri(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "privacy_statement_uri", value)
@property
@pulumi.getter(name="purchasePlan")
def purchase_plan(self) -> Optional[pulumi.Input['SharedImagePurchasePlanArgs']]:
"""
A `purchase_plan` block as defined below.
"""
return pulumi.get(self, "purchase_plan")
@purchase_plan.setter
def purchase_plan(self, value: Optional[pulumi.Input['SharedImagePurchasePlanArgs']]):
pulumi.set(self, "purchase_plan", value)
@property
@pulumi.getter(name="releaseNoteUri")
def release_note_uri(self) -> Optional[pulumi.Input[str]]:
"""
The URI containing the Release Notes associated with this Shared Image.
"""
return pulumi.get(self, "release_note_uri")
@release_note_uri.setter
def release_note_uri(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "release_note_uri", value)
@property
@pulumi.getter
def specialized(self) -> Optional[pulumi.Input[bool]]:
"""
Specifies that the Operating System used inside this Image has not been Generalized (for example, `sysprep` on Windows has not been run). Defaults to `false`. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "specialized")
@specialized.setter
def specialized(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "specialized", value)
@property
@pulumi.getter
def tags(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
A mapping of tags to assign to the Shared Image.
"""
return pulumi.get(self, "tags")
@tags.setter
def tags(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "tags", value)
@pulumi.input_type
class _SharedImageState:
def __init__(__self__, *,
description: Optional[pulumi.Input[str]] = None,
eula: Optional[pulumi.Input[str]] = None,
gallery_name: Optional[pulumi.Input[str]] = None,
hyper_v_generation: Optional[pulumi.Input[str]] = None,
identifier: Optional[pulumi.Input['SharedImageIdentifierArgs']] = None,
location: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
os_type: Optional[pulumi.Input[str]] = None,
privacy_statement_uri: Optional[pulumi.Input[str]] = None,
purchase_plan: Optional[pulumi.Input['SharedImagePurchasePlanArgs']] = None,
release_note_uri: Optional[pulumi.Input[str]] = None,
resource_group_name: Optional[pulumi.Input[str]] = None,
specialized: Optional[pulumi.Input[bool]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None):
"""
Input properties used for looking up and filtering SharedImage resources.
:param pulumi.Input[str] description: A description of this Shared Image.
:param pulumi.Input[str] eula: The End User Licence Agreement for the Shared Image.
:param pulumi.Input[str] gallery_name: Specifies the name of the Shared Image Gallery in which this Shared Image should exist. Changing this forces a new resource to be created.
:param pulumi.Input[str] hyper_v_generation: The generation of HyperV that the Virtual Machine used to create the Shared Image is based on. Possible values are `V1` and `V2`. Defaults to `V1`. Changing this forces a new resource to be created.
:param pulumi.Input['SharedImageIdentifierArgs'] identifier: An `identifier` block as defined below.
:param pulumi.Input[str] location: Specifies the supported Azure location where the Shared Image Gallery exists. Changing this forces a new resource to be created.
:param pulumi.Input[str] name: Specifies the name of the Shared Image. Changing this forces a new resource to be created.
:param pulumi.Input[str] os_type: The type of Operating System present in this Shared Image. Possible values are `Linux` and `Windows`. Changing this forces a new resource to be created.
:param pulumi.Input[str] privacy_statement_uri: The URI containing the Privacy Statement associated with this Shared Image.
:param pulumi.Input['SharedImagePurchasePlanArgs'] purchase_plan: A `purchase_plan` block as defined below.
:param pulumi.Input[str] release_note_uri: The URI containing the Release Notes associated with this Shared Image.
:param pulumi.Input[str] resource_group_name: The name of the resource group in which the Shared Image Gallery exists. Changing this forces a new resource to be created.
:param pulumi.Input[bool] specialized: Specifies that the Operating System used inside this Image has not been Generalized (for example, `sysprep` on Windows has not been run). Defaults to `false`. Changing this forces a new resource to be created.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: A mapping of tags to assign to the Shared Image.
"""
if description is not None:
pulumi.set(__self__, "description", description)
if eula is not None:
pulumi.set(__self__, "eula", eula)
if gallery_name is not None:
pulumi.set(__self__, "gallery_name", gallery_name)
if hyper_v_generation is not None:
pulumi.set(__self__, "hyper_v_generation", hyper_v_generation)
if identifier is not None:
pulumi.set(__self__, "identifier", identifier)
if location is not None:
pulumi.set(__self__, "location", location)
if name is not None:
pulumi.set(__self__, "name", name)
if os_type is not None:
pulumi.set(__self__, "os_type", os_type)
if privacy_statement_uri is not None:
pulumi.set(__self__, "privacy_statement_uri", privacy_statement_uri)
if purchase_plan is not None:
pulumi.set(__self__, "purchase_plan", purchase_plan)
if release_note_uri is not None:
pulumi.set(__self__, "release_note_uri", release_note_uri)
if resource_group_name is not None:
pulumi.set(__self__, "resource_group_name", resource_group_name)
if specialized is not None:
pulumi.set(__self__, "specialized", specialized)
if tags is not None:
pulumi.set(__self__, "tags", tags)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
A description of this Shared Image.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def eula(self) -> Optional[pulumi.Input[str]]:
"""
The End User Licence Agreement for the Shared Image.
"""
return pulumi.get(self, "eula")
@eula.setter
def eula(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "eula", value)
@property
@pulumi.getter(name="galleryName")
def gallery_name(self) -> Optional[pulumi.Input[str]]:
"""
Specifies the name of the Shared Image Gallery in which this Shared Image should exist. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "gallery_name")
@gallery_name.setter
def gallery_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "gallery_name", value)
@property
@pulumi.getter(name="hyperVGeneration")
def hyper_v_generation(self) -> Optional[pulumi.Input[str]]:
"""
The generation of HyperV that the Virtual Machine used to create the Shared Image is based on. Possible values are `V1` and `V2`. Defaults to `V1`. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "hyper_v_generation")
@hyper_v_generation.setter
def hyper_v_generation(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "hyper_v_generation", value)
@property
@pulumi.getter
def identifier(self) -> Optional[pulumi.Input['SharedImageIdentifierArgs']]:
"""
An `identifier` block as defined below.
"""
return pulumi.get(self, "identifier")
@identifier.setter
def identifier(self, value: Optional[pulumi.Input['SharedImageIdentifierArgs']]):
pulumi.set(self, "identifier", value)
@property
@pulumi.getter
def location(self) -> Optional[pulumi.Input[str]]:
"""
Specifies the supported Azure location where the Shared Image Gallery exists. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "location")
@location.setter
def location(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "location", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Specifies the name of the Shared Image. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter(name="osType")
def os_type(self) -> Optional[pulumi.Input[str]]:
"""
The type of Operating System present in this Shared Image. Possible values are `Linux` and `Windows`. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "os_type")
@os_type.setter
def os_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "os_type", value)
@property
@pulumi.getter(name="privacyStatementUri")
def privacy_statement_uri(self) -> Optional[pulumi.Input[str]]:
"""
The URI containing the Privacy Statement associated with this Shared Image.
"""
return pulumi.get(self, "privacy_statement_uri")
@privacy_statement_uri.setter
def privacy_statement_uri(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "privacy_statement_uri", value)
@property
@pulumi.getter(name="purchasePlan")
def purchase_plan(self) -> Optional[pulumi.Input['SharedImagePurchasePlanArgs']]:
"""
A `purchase_plan` block as defined below.
"""
return pulumi.get(self, "purchase_plan")
@purchase_plan.setter
def purchase_plan(self, value: Optional[pulumi.Input['SharedImagePurchasePlanArgs']]):
pulumi.set(self, "purchase_plan", value)
@property
@pulumi.getter(name="releaseNoteUri")
def release_note_uri(self) -> Optional[pulumi.Input[str]]:
"""
The URI containing the Release Notes associated with this Shared Image.
"""
return pulumi.get(self, "release_note_uri")
@release_note_uri.setter
def release_note_uri(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "release_note_uri", value)
@property
@pulumi.getter(name="resourceGroupName")
def resource_group_name(self) -> Optional[pulumi.Input[str]]:
"""
The name of the resource group in which the Shared Image Gallery exists. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "resource_group_name")
@resource_group_name.setter
def resource_group_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "resource_group_name", value)
@property
@pulumi.getter
def specialized(self) -> Optional[pulumi.Input[bool]]:
"""
Specifies that the Operating System used inside this Image has not been Generalized (for example, `sysprep` on Windows has not been run). Defaults to `false`. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "specialized")
@specialized.setter
def specialized(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "specialized", value)
@property
@pulumi.getter
def tags(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
A mapping of tags to assign to the Shared Image.
"""
return pulumi.get(self, "tags")
@tags.setter
def tags(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "tags", value)
class SharedImage(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
description: Optional[pulumi.Input[str]] = None,
eula: Optional[pulumi.Input[str]] = None,
gallery_name: Optional[pulumi.Input[str]] = None,
hyper_v_generation: Optional[pulumi.Input[str]] = None,
identifier: Optional[pulumi.Input[pulumi.InputType['SharedImageIdentifierArgs']]] = None,
location: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
os_type: Optional[pulumi.Input[str]] = None,
privacy_statement_uri: Optional[pulumi.Input[str]] = None,
purchase_plan: Optional[pulumi.Input[pulumi.InputType['SharedImagePurchasePlanArgs']]] = None,
release_note_uri: Optional[pulumi.Input[str]] = None,
resource_group_name: Optional[pulumi.Input[str]] = None,
specialized: Optional[pulumi.Input[bool]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
__props__=None):
"""
Manages a Shared Image within a Shared Image Gallery.
## Example Usage
```python
import pulumi
import pulumi_azure as azure
example_resource_group = azure.core.ResourceGroup("exampleResourceGroup", location="West Europe")
example_shared_image_gallery = azure.compute.SharedImageGallery("exampleSharedImageGallery",
resource_group_name=example_resource_group.name,
location=example_resource_group.location,
description="Shared images and things.",
tags={
"Hello": "There",
"World": "Example",
})
example_shared_image = azure.compute.SharedImage("exampleSharedImage",
gallery_name=example_shared_image_gallery.name,
resource_group_name=example_resource_group.name,
location=example_resource_group.location,
os_type="Linux",
identifier=azure.compute.SharedImageIdentifierArgs(
publisher="PublisherName",
offer="OfferName",
sku="ExampleSku",
))
```
## Import
Shared Images can be imported using the `resource id`, e.g.
```sh
$ pulumi import azure:compute/sharedImage:SharedImage image1 /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/mygroup1/providers/Microsoft.Compute/galleries/gallery1/images/image1
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] description: A description of this Shared Image.
:param pulumi.Input[str] eula: The End User Licence Agreement for the Shared Image.
:param pulumi.Input[str] gallery_name: Specifies the name of the Shared Image Gallery in which this Shared Image should exist. Changing this forces a new resource to be created.
:param pulumi.Input[str] hyper_v_generation: The generation of HyperV that the Virtual Machine used to create the Shared Image is based on. Possible values are `V1` and `V2`. Defaults to `V1`. Changing this forces a new resource to be created.
:param pulumi.Input[pulumi.InputType['SharedImageIdentifierArgs']] identifier: An `identifier` block as defined below.
:param pulumi.Input[str] location: Specifies the supported Azure location where the Shared Image Gallery exists. Changing this forces a new resource to be created.
:param pulumi.Input[str] name: Specifies the name of the Shared Image. Changing this forces a new resource to be created.
:param pulumi.Input[str] os_type: The type of Operating System present in this Shared Image. Possible values are `Linux` and `Windows`. Changing this forces a new resource to be created.
:param pulumi.Input[str] privacy_statement_uri: The URI containing the Privacy Statement associated with this Shared Image.
:param pulumi.Input[pulumi.InputType['SharedImagePurchasePlanArgs']] purchase_plan: A `purchase_plan` block as defined below.
:param pulumi.Input[str] release_note_uri: The URI containing the Release Notes associated with this Shared Image.
:param pulumi.Input[str] resource_group_name: The name of the resource group in which the Shared Image Gallery exists. Changing this forces a new resource to be created.
:param pulumi.Input[bool] specialized: Specifies that the Operating System used inside this Image has not been Generalized (for example, `sysprep` on Windows has not been run). Defaults to `false`. Changing this forces a new resource to be created.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: A mapping of tags to assign to the Shared Image.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: SharedImageArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
Manages a Shared Image within a Shared Image Gallery.
## Example Usage
```python
import pulumi
import pulumi_azure as azure
example_resource_group = azure.core.ResourceGroup("exampleResourceGroup", location="West Europe")
example_shared_image_gallery = azure.compute.SharedImageGallery("exampleSharedImageGallery",
resource_group_name=example_resource_group.name,
location=example_resource_group.location,
description="Shared images and things.",
tags={
"Hello": "There",
"World": "Example",
})
example_shared_image = azure.compute.SharedImage("exampleSharedImage",
gallery_name=example_shared_image_gallery.name,
resource_group_name=example_resource_group.name,
location=example_resource_group.location,
os_type="Linux",
identifier=azure.compute.SharedImageIdentifierArgs(
publisher="PublisherName",
offer="OfferName",
sku="ExampleSku",
))
```
## Import
Shared Images can be imported using the `resource id`, e.g.
```sh
$ pulumi import azure:compute/sharedImage:SharedImage image1 /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/mygroup1/providers/Microsoft.Compute/galleries/gallery1/images/image1
```
:param str resource_name: The name of the resource.
:param SharedImageArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(SharedImageArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
description: Optional[pulumi.Input[str]] = None,
eula: Optional[pulumi.Input[str]] = None,
gallery_name: Optional[pulumi.Input[str]] = None,
hyper_v_generation: Optional[pulumi.Input[str]] = None,
identifier: Optional[pulumi.Input[pulumi.InputType['SharedImageIdentifierArgs']]] = None,
location: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
os_type: Optional[pulumi.Input[str]] = None,
privacy_statement_uri: Optional[pulumi.Input[str]] = None,
purchase_plan: Optional[pulumi.Input[pulumi.InputType['SharedImagePurchasePlanArgs']]] = None,
release_note_uri: Optional[pulumi.Input[str]] = None,
resource_group_name: Optional[pulumi.Input[str]] = None,
specialized: Optional[pulumi.Input[bool]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = SharedImageArgs.__new__(SharedImageArgs)
__props__.__dict__["description"] = description
__props__.__dict__["eula"] = eula
if gallery_name is None and not opts.urn:
raise TypeError("Missing required property 'gallery_name'")
__props__.__dict__["gallery_name"] = gallery_name
__props__.__dict__["hyper_v_generation"] = hyper_v_generation
if identifier is None and not opts.urn:
raise TypeError("Missing required property 'identifier'")
__props__.__dict__["identifier"] = identifier
__props__.__dict__["location"] = location
__props__.__dict__["name"] = name
if os_type is None and not opts.urn:
raise TypeError("Missing required property 'os_type'")
__props__.__dict__["os_type"] = os_type
__props__.__dict__["privacy_statement_uri"] = privacy_statement_uri
__props__.__dict__["purchase_plan"] = purchase_plan
__props__.__dict__["release_note_uri"] = release_note_uri
if resource_group_name is None and not opts.urn:
raise TypeError("Missing required property 'resource_group_name'")
__props__.__dict__["resource_group_name"] = resource_group_name
__props__.__dict__["specialized"] = specialized
__props__.__dict__["tags"] = tags
super(SharedImage, __self__).__init__(
'azure:compute/sharedImage:SharedImage',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
description: Optional[pulumi.Input[str]] = None,
eula: Optional[pulumi.Input[str]] = None,
gallery_name: Optional[pulumi.Input[str]] = None,
hyper_v_generation: Optional[pulumi.Input[str]] = None,
identifier: Optional[pulumi.Input[pulumi.InputType['SharedImageIdentifierArgs']]] = None,
location: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
os_type: Optional[pulumi.Input[str]] = None,
privacy_statement_uri: Optional[pulumi.Input[str]] = None,
purchase_plan: Optional[pulumi.Input[pulumi.InputType['SharedImagePurchasePlanArgs']]] = None,
release_note_uri: Optional[pulumi.Input[str]] = None,
resource_group_name: Optional[pulumi.Input[str]] = None,
specialized: Optional[pulumi.Input[bool]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None) -> 'SharedImage':
"""
Get an existing SharedImage resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] description: A description of this Shared Image.
:param pulumi.Input[str] eula: The End User Licence Agreement for the Shared Image.
:param pulumi.Input[str] gallery_name: Specifies the name of the Shared Image Gallery in which this Shared Image should exist. Changing this forces a new resource to be created.
:param pulumi.Input[str] hyper_v_generation: The generation of HyperV that the Virtual Machine used to create the Shared Image is based on. Possible values are `V1` and `V2`. Defaults to `V1`. Changing this forces a new resource to be created.
:param pulumi.Input[pulumi.InputType['SharedImageIdentifierArgs']] identifier: An `identifier` block as defined below.
:param pulumi.Input[str] location: Specifies the supported Azure location where the Shared Image Gallery exists. Changing this forces a new resource to be created.
:param pulumi.Input[str] name: Specifies the name of the Shared Image. Changing this forces a new resource to be created.
:param pulumi.Input[str] os_type: The type of Operating System present in this Shared Image. Possible values are `Linux` and `Windows`. Changing this forces a new resource to be created.
:param pulumi.Input[str] privacy_statement_uri: The URI containing the Privacy Statement associated with this Shared Image.
:param pulumi.Input[pulumi.InputType['SharedImagePurchasePlanArgs']] purchase_plan: A `purchase_plan` block as defined below.
:param pulumi.Input[str] release_note_uri: The URI containing the Release Notes associated with this Shared Image.
:param pulumi.Input[str] resource_group_name: The name of the resource group in which the Shared Image Gallery exists. Changing this forces a new resource to be created.
:param pulumi.Input[bool] specialized: Specifies that the Operating System used inside this Image has not been Generalized (for example, `sysprep` on Windows has not been run). Defaults to `false`. Changing this forces a new resource to be created.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: A mapping of tags to assign to the Shared Image.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _SharedImageState.__new__(_SharedImageState)
__props__.__dict__["description"] = description
__props__.__dict__["eula"] = eula
__props__.__dict__["gallery_name"] = gallery_name
__props__.__dict__["hyper_v_generation"] = hyper_v_generation
__props__.__dict__["identifier"] = identifier
__props__.__dict__["location"] = location
__props__.__dict__["name"] = name
__props__.__dict__["os_type"] = os_type
__props__.__dict__["privacy_statement_uri"] = privacy_statement_uri
__props__.__dict__["purchase_plan"] = purchase_plan
__props__.__dict__["release_note_uri"] = release_note_uri
__props__.__dict__["resource_group_name"] = resource_group_name
__props__.__dict__["specialized"] = specialized
__props__.__dict__["tags"] = tags
return SharedImage(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter
def description(self) -> pulumi.Output[Optional[str]]:
"""
A description of this Shared Image.
"""
return pulumi.get(self, "description")
@property
@pulumi.getter
def eula(self) -> pulumi.Output[Optional[str]]:
"""
The End User Licence Agreement for the Shared Image.
"""
return pulumi.get(self, "eula")
@property
@pulumi.getter(name="galleryName")
def gallery_name(self) -> pulumi.Output[str]:
"""
Specifies the name of the Shared Image Gallery in which this Shared Image should exist. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "gallery_name")
@property
@pulumi.getter(name="hyperVGeneration")
def hyper_v_generation(self) -> pulumi.Output[Optional[str]]:
"""
The generation of HyperV that the Virtual Machine used to create the Shared Image is based on. Possible values are `V1` and `V2`. Defaults to `V1`. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "hyper_v_generation")
@property
@pulumi.getter
def identifier(self) -> pulumi.Output['outputs.SharedImageIdentifier']:
"""
An `identifier` block as defined below.
"""
return pulumi.get(self, "identifier")
@property
@pulumi.getter
def location(self) -> pulumi.Output[str]:
"""
Specifies the supported Azure location where the Shared Image Gallery exists. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "location")
@property
@pulumi.getter
def name(self) -> pulumi.Output[str]:
"""
Specifies the name of the Shared Image. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter(name="osType")
def os_type(self) -> pulumi.Output[str]:
"""
The type of Operating System present in this Shared Image. Possible values are `Linux` and `Windows`. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "os_type")
@property
@pulumi.getter(name="privacyStatementUri")
def privacy_statement_uri(self) -> pulumi.Output[Optional[str]]:
"""
The URI containing the Privacy Statement associated with this Shared Image.
"""
return pulumi.get(self, "privacy_statement_uri")
@property
@pulumi.getter(name="purchasePlan")
def purchase_plan(self) -> pulumi.Output[Optional['outputs.SharedImagePurchasePlan']]:
"""
A `purchase_plan` block as defined below.
"""
return pulumi.get(self, "purchase_plan")
@property
@pulumi.getter(name="releaseNoteUri")
def release_note_uri(self) -> pulumi.Output[Optional[str]]:
"""
The URI containing the Release Notes associated with this Shared Image.
"""
return pulumi.get(self, "release_note_uri")
@property
@pulumi.getter(name="resourceGroupName")
def resource_group_name(self) -> pulumi.Output[str]:
"""
The name of the resource group in which the Shared Image Gallery exists. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "resource_group_name")
@property
@pulumi.getter
def specialized(self) -> pulumi.Output[Optional[bool]]:
"""
Specifies that the Operating System used inside this Image has not been Generalized (for example, `sysprep` on Windows has not been run). Defaults to `false`. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "specialized")
@property
@pulumi.getter
def tags(self) -> pulumi.Output[Optional[Mapping[str, str]]]:
"""
A mapping of tags to assign to the Shared Image.
"""
return pulumi.get(self, "tags")
| 49.053892 | 256 | 0.665112 | 4,937 | 40,960 | 5.339072 | 0.05003 | 0.083046 | 0.077014 | 0.067605 | 0.929626 | 0.919876 | 0.906446 | 0.89533 | 0.889753 | 0.885656 | 0 | 0.003014 | 0.238477 | 40,960 | 834 | 257 | 49.11271 | 0.842043 | 0.386328 | 0 | 0.810235 | 1 | 0 | 0.113842 | 0.033598 | 0 | 0 | 0 | 0 | 0 | 1 | 0.164179 | false | 0.002132 | 0.014925 | 0 | 0.277186 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
63f0a1b42bb02fadbdbd25c9d4b91e099630b86d | 97 | py | Python | mmstructlib/IO/__init__.py | academicRobot/mmstructlib | 76949620c9e9ca26faf10ff1a21c6fda1a564f5c | [
"MIT"
] | null | null | null | mmstructlib/IO/__init__.py | academicRobot/mmstructlib | 76949620c9e9ca26faf10ff1a21c6fda1a564f5c | [
"MIT"
] | null | null | null | mmstructlib/IO/__init__.py | academicRobot/mmstructlib | 76949620c9e9ca26faf10ff1a21c6fda1a564f5c | [
"MIT"
] | null | null | null | from . import cif
from mmstructlib.IO.cif_loader import load_cif_from_mirror, load_cif_from_file
| 32.333333 | 78 | 0.865979 | 17 | 97 | 4.529412 | 0.529412 | 0.272727 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.092784 | 97 | 2 | 79 | 48.5 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
123d3ff2f5f927389adcd25c39b445371fc37210 | 151 | py | Python | tests/io/__init__.py | jharrymoore/Icolos | c60cc00c34208ab7011d41d52a74651763673e7a | [
"Apache-2.0"
] | 11 | 2022-01-30T14:36:13.000Z | 2022-03-22T09:40:57.000Z | tests/io/__init__.py | jharrymoore/Icolos | c60cc00c34208ab7011d41d52a74651763673e7a | [
"Apache-2.0"
] | 2 | 2022-03-23T07:56:49.000Z | 2022-03-24T12:01:42.000Z | tests/io/__init__.py | jharrymoore/Icolos | c60cc00c34208ab7011d41d52a74651763673e7a | [
"Apache-2.0"
] | 8 | 2022-01-28T10:32:31.000Z | 2022-03-22T09:40:59.000Z | from tests.io.test_initialize_compound import *
from tests.io.test_embedder import *
from tests.io.test_data_manipulation import Test_DataManipulation
| 37.75 | 65 | 0.86755 | 22 | 151 | 5.681818 | 0.5 | 0.216 | 0.264 | 0.36 | 0.336 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.07947 | 151 | 3 | 66 | 50.333333 | 0.899281 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
1267883bce645a96fb32d272b95632c9705a1317 | 935 | py | Python | admintools/decorators.py | goztrk/django-htk | c56bf112e5d627780d2f4288460eae5cce80fa9e | [
"MIT"
] | 206 | 2015-10-15T07:05:08.000Z | 2021-02-19T11:48:36.000Z | admintools/decorators.py | goztrk/django-htk | c56bf112e5d627780d2f4288460eae5cce80fa9e | [
"MIT"
] | 8 | 2017-10-16T10:18:31.000Z | 2022-03-09T14:24:27.000Z | admintools/decorators.py | goztrk/django-htk | c56bf112e5d627780d2f4288460eae5cce80fa9e | [
"MIT"
] | 61 | 2015-10-15T08:12:44.000Z | 2022-03-10T12:25:06.000Z | # Django Imports
from django.contrib.auth.decorators import login_required
from django.core.exceptions import PermissionDenied
def company_officer_required(view_func):
"""Decorator for views that require access by company officer or staff user
"""
@login_required
def wrapped_view(request, *args, **kwargs):
user = request.user
if not(user.profile and user.profile.is_company_officer):
raise PermissionDenied
return view_func(request, *args, **kwargs)
return wrapped_view
def company_employee_required(view_func):
"""Decorator for views that require access by company employee or staff user
"""
@login_required
def wrapped_view(request, *args, **kwargs):
user = request.user
if not(user.profile and user.profile.is_company_employee):
raise PermissionDenied
return view_func(request, *args, **kwargs)
return wrapped_view
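# Hedged usage sketch (added; not part of the original module): the decorators
# above wrap a normal Django view; the view name is hypothetical.
# @company_officer_required
# def officer_dashboard(request):
#     ...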
| 34.62963 | 80 | 0.713369 | 116 | 935 | 5.586207 | 0.336207 | 0.049383 | 0.104938 | 0.07716 | 0.731481 | 0.731481 | 0.731481 | 0.731481 | 0.731481 | 0.731481 | 0 | 0 | 0.209626 | 935 | 26 | 81 | 35.961538 | 0.876861 | 0.183957 | 0 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0.111111 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 7 |
d63e0a3287292267d10a3683fa4071ca9b58b75b | 1,642 | py | Python | tests/_mock.py | noaione/tesaurus-python | d879eee99ac6463019f32a67b1500dbd1cd701c8 | [
"MIT"
] | 1 | 2022-01-20T00:40:35.000Z | 2022-01-20T00:40:35.000Z | tests/_mock.py | noaione/tesaurus-python | d879eee99ac6463019f32a67b1500dbd1cd701c8 | [
"MIT"
] | null | null | null | tests/_mock.py | noaione/tesaurus-python | d879eee99ac6463019f32a67b1500dbd1cd701c8 | [
"MIT"
] | null | null | null | from tesaurus import KelasKataTidakDiketahui, Tesaurus, TesaurusAsync
class MockTesaurus(Tesaurus):
HOST = "http://localhost:4000"
_HOST = Tesaurus.HOST
def __init__(self) -> None:
super().__init__()
def _buat_url(self):
"""Jangan dipakai, ini merupakan fungsi internal yang akan dipanggil otomatis"""
base_url = f"{self.HOST}/{self.kata}"
valid_kelas = ["adjektiva", "adverbia", "konjungsi", "nomina", "numeralia", "partikel", "verba"]
if isinstance(self.kelas_kata, str):
if self.kelas_kata not in valid_kelas:
self._on_queue = False
self._logger.error(f"Kelas kata {self.kelas_kata} tidak diketahui")
raise KelasKataTidakDiketahui(self.kelas_kata)
base_url += f"/{self.kelas_kata}"
return base_url + ".html"
class MockTesaurusAsync(TesaurusAsync):
HOST = "http://localhost:4000"
_HOST = Tesaurus.HOST
def __init__(self) -> None:
super().__init__()
def _buat_url(self):
"""Jangan dipakai, ini merupakan fungsi internal yang akan dipanggil otomatis"""
base_url = f"{self.HOST}/{self.kata}"
valid_kelas = ["adjektiva", "adverbia", "konjungsi", "nomina", "numeralia", "partikel", "verba"]
if isinstance(self.kelas_kata, str):
if self.kelas_kata not in valid_kelas:
self._on_queue = False
self._logger.error(f"Kelas kata {self.kelas_kata} tidak diketahui")
raise KelasKataTidakDiketahui(self.kelas_kata)
base_url += f"/{self.kelas_kata}"
return base_url + ".html"
| 39.095238 | 104 | 0.62972 | 186 | 1,642 | 5.301075 | 0.301075 | 0.109533 | 0.131846 | 0.048682 | 0.876268 | 0.876268 | 0.876268 | 0.876268 | 0.876268 | 0.876268 | 0 | 0.006494 | 0.249695 | 1,642 | 41 | 105 | 40.04878 | 0.793831 | 0.090743 | 0 | 0.903226 | 0 | 0 | 0.222672 | 0.031039 | 0 | 0 | 0 | 0 | 0 | 1 | 0.129032 | false | 0 | 0.032258 | 0 | 0.419355 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
d657f90382757dca6247c8bd64274ad63bf3df31 | 39 | py | Python | graphql_env/server/flask/__init__.py | GraphQL-python-archive/graphql-env | d82c02c4a82486c69a1a2fa9c262d74f335bdf26 | [
"MIT"
] | null | null | null | graphql_env/server/flask/__init__.py | GraphQL-python-archive/graphql-env | d82c02c4a82486c69a1a2fa9c262d74f335bdf26 | [
"MIT"
] | 3 | 2019-07-24T21:05:52.000Z | 2021-11-15T17:46:27.000Z | graphql_env/server/flask/__init__.py | GraphQL-python-archive/graphql-env | d82c02c4a82486c69a1a2fa9c262d74f335bdf26 | [
"MIT"
] | null | null | null | from .graphql_view import graphql_view
| 19.5 | 38 | 0.871795 | 6 | 39 | 5.333333 | 0.666667 | 0.6875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102564 | 39 | 1 | 39 | 39 | 0.914286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
c3fb8633771b121c091444730203c44206b71a0b | 5,224 | py | Python | src/test/local_integration/test_get_data_sources.py | kbase/taxonomy_re_api | 95c34a1a9bfcb4c815d71acb2aee7efc989b21a5 | [
"MIT"
] | null | null | null | src/test/local_integration/test_get_data_sources.py | kbase/taxonomy_re_api | 95c34a1a9bfcb4c815d71acb2aee7efc989b21a5 | [
"MIT"
] | 1 | 2020-09-25T23:40:47.000Z | 2020-09-25T23:40:47.000Z | src/test/local_integration/test_get_data_sources.py | kbase/taxonomy_re_api | 95c34a1a9bfcb4c815d71acb2aee7efc989b21a5 | [
"MIT"
] | 4 | 2020-09-23T20:34:57.000Z | 2021-09-10T23:54:24.000Z | from src.test.test_base import TestBase
# Tests for get_data_sources
# These tests may be run against a Tax API which uses a local
# RE with data sources loaded.
# Initial data sources are included in the RE codebase.
class TestGetDataSources(TestBase):
# Happy path testing
def test_get_data_sources_all_null_ns(self):
"""Test a call to get sources without filtering"""
resp = self.request({
'version': '1.1',
'method': 'taxonomy_re_api.get_data_sources',
'params': [{'ns': None}]
})
self.assertTrue(resp.ok, resp.text)
jsonrpc_response = resp.json()
result = self.assert_is_result_response(jsonrpc_response)
sources = result.get('sources')
self.assertIsInstance(sources, list)
self.assertEqual(len(sources), 4)
def test_get_data_sources_all_missing_ns(self):
"""Test a call to get sources without filtering"""
resp = self.request({
'version': '1.1',
'method': 'taxonomy_re_api.get_data_sources',
'params': [{}]
})
self.assertTrue(resp.ok, resp.text)
jsonrpc_response = resp.json()
result = self.assert_is_result_response(jsonrpc_response)
sources = result.get('sources')
self.assertIsInstance(sources, list)
self.assertEqual(len(sources), 4)
def test_get_data_sources_all_no_params(self):
"""Test a call to get sources without filtering"""
resp = self.request({
'version': '1.1',
'method': 'taxonomy_re_api.get_data_sources',
'params': []
})
self.assertTrue(resp.ok, resp.text)
jsonrpc_response = resp.json()
result = self.assert_is_result_response(jsonrpc_response)
sources = result.get('sources')
self.assertIsInstance(sources, list)
self.assertEqual(len(sources), 4)
def test_get_data_sources_with_filtering_one(self):
"""Test a call to get sources without filtering"""
resp = self.request({
'version': '1.1',
'method': 'taxonomy_re_api.get_data_sources',
'params': [{
'ns': ['ncbi_taxonomy']
}]
})
self.assertTrue(resp.ok, resp.text)
jsonrpc_response = resp.json()
result = self.assert_is_result_response(jsonrpc_response)
sources = result.get('sources')
self.assertIsInstance(sources, list)
self.assertEqual(len(sources), 1)
def test_get_data_sources_with_filtering_three(self):
"""Test a call to get sources without filtering"""
resp = self.request({
'version': '1.1',
'method': 'taxonomy_re_api.get_data_sources',
'params': [{
'ns': ['ncbi_taxonomy', 'gtdb', 'rdp_taxonomy']
}]
})
self.assertTrue(resp.ok, resp.text)
jsonrpc_response = resp.json()
result = self.assert_is_result_response(jsonrpc_response)
sources = result.get('sources')
self.assertIsInstance(sources, list)
self.assertEqual(len(sources), 3)
# Error conditions
def test_get_data_sources_bad_ns(self):
"""Test a call to get sources with an ns parameter of the wrong type"""
resp = self.request({
'version': '1.1',
'method': 'taxonomy_re_api.get_data_sources',
            'params': [{'ns': 1}]
})
self.assertTrue(resp.status_code == 400, 'Expected the response to have status code 400')
rpc_response = resp.json()
self.assert_is_error_response(rpc_response, -32602, 'Invalid params')
def test_get_data_sources_provide_undefined_param(self):
"""Test a call to get sources an parameter not defined by the schema"""
resp = self.request({
'version': '1.1',
'method': 'taxonomy_re_api.get_data_sources',
            'params': [{'foo': 'bar'}]
})
self.assertTrue(resp.status_code == 400, 'Expected the response to have status code 400')
rpc_response = resp.json()
self.assert_is_error_response(rpc_response, -32602, 'Invalid params')
def test_get_data_sources_missing_method(self):
"""Test a call to get sources with missing method"""
resp = self.request({
'version': '1.1',
'params': [{'ns': ['ncbi_taxonomy']}]
})
self.assertTrue(resp.status_code == 400, 'Expected the response to have status code 400')
rpc_response = resp.json()
self.assert_is_error_response(rpc_response, -32600, 'Invalid request')
def test_get_data_sources_missing_params(self):
"""Test a call to get sources with missing params"""
resp = self.request({
'version': '1.1',
'method': 'taxonomy_re_api.get_data_sources',
})
self.assertTrue(resp.status_code == 400, 'Expected the response to have status code 400')
rpc_response = resp.json()
self.assert_is_error_response(rpc_response, -32600, 'Invalid request')
| 38.131387 | 97 | 0.604326 | 616 | 5,224 | 4.899351 | 0.163961 | 0.072896 | 0.083499 | 0.04175 | 0.873757 | 0.866799 | 0.839298 | 0.808814 | 0.794566 | 0.770709 | 0 | 0.018042 | 0.278522 | 5,224 | 136 | 98 | 38.411765 | 0.782701 | 0.125766 | 0 | 0.801887 | 0 | 0 | 0.178034 | 0.056687 | 0 | 0 | 0 | 0 | 0.264151 | 1 | 0.084906 | false | 0 | 0.009434 | 0 | 0.103774 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
613bac65710af877773f1d950be30e192e35b8ba | 82 | py | Python | data_vis/views/api/__init__.py | jneuendorf/dkb_pdf2csv | 836257403054242fe2971fb3e9c0dfd909b2d199 | [
"MIT"
] | null | null | null | data_vis/views/api/__init__.py | jneuendorf/dkb_pdf2csv | 836257403054242fe2971fb3e9c0dfd909b2d199 | [
"MIT"
] | null | null | null | data_vis/views/api/__init__.py | jneuendorf/dkb_pdf2csv | 836257403054242fe2971fb3e9c0dfd909b2d199 | [
"MIT"
] | null | null | null | from .tags import tags # NOQA
from . import data # NOQA
from . import analytics  # NOQA
| 20.5 | 30 | 0.719512 | 12 | 82 | 4.916667 | 0.5 | 0.271186 | 0.474576 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.219512 | 82 | 3 | 31 | 27.333333 | 0.921875 | 0.109756 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
613f0cafdfbb667cd5ee8e512aa5dc0f7a7bb7ea | 166 | py | Python | backend/apps/mails/admin.py | KuanWeiLee/froggy-service | 0db6cd90c1641a98c1e06638f8e9591c2daf39e0 | [
"MIT"
] | 174 | 2019-02-19T11:35:45.000Z | 2021-12-20T03:20:28.000Z | backend/apps/mails/admin.py | KuanWeiLee/froggy-service | 0db6cd90c1641a98c1e06638f8e9591c2daf39e0 | [
"MIT"
] | 56 | 2019-01-02T06:49:13.000Z | 2021-03-23T09:31:18.000Z | backend/apps/mails/admin.py | KuanWeiLee/froggy-service | 0db6cd90c1641a98c1e06638f8e9591c2daf39e0 | [
"MIT"
] | 36 | 2018-12-28T02:10:06.000Z | 2021-09-02T03:06:35.000Z | from django.contrib import admin
from .models import SendGridMail, SendGridMailTemplate
admin.site.register(SendGridMailTemplate)
admin.site.register(SendGridMail)
| 23.714286 | 54 | 0.855422 | 18 | 166 | 7.888889 | 0.555556 | 0.352113 | 0.408451 | 0.521127 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078313 | 166 | 6 | 55 | 27.666667 | 0.928105 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
61bcfafc8b5ab554978691ed828e2f2931a3dc33 | 31,227 | py | Python | software/scripts/tests/tests.py | pnallin/SPIxCONV | dc63e3258b1244c3fed5cf27d88bc099317a1052 | [
"MIT"
] | null | null | null | software/scripts/tests/tests.py | pnallin/SPIxCONV | dc63e3258b1244c3fed5cf27d88bc099317a1052 | [
"MIT"
] | null | null | null | software/scripts/tests/tests.py | pnallin/SPIxCONV | dc63e3258b1244c3fed5cf27d88bc099317a1052 | [
"MIT"
] | null | null | null | #!/usr/bin/python
from Adafruit_BBIO.SPI import SPI
import Adafruit_BBIO.GPIO as GPIO
import selection
import dac
import adc
import sys
import math
import time
import matplotlib.pyplot as plt
#-------------------------------------------------------
# initialize the bus and device /dev/spidev1.0
spi = SPI(0,0)
#defining mode (CPOL = 0; CPHA = 1)
spi.mode = 1
#defining max clock speed (in Hz)
spi.msh = 10000000
#=======================================================
# linearity test with multimeter
#=======================================================
def linearity_multimeter(board):
time.sleep(1)
#=======================================================
# linearity test without multimeter
#=======================================================
def linearity(board):
time.sleep(1)
#=======================================================
# repetibility test with multimeter
#=======================================================
def repetibility_multimeter(board):
time.sleep(1)
#=======================================================
# repetibility test without multimeter
#=======================================================
def repetibility(board):
# select DAC of the board requested
selection.dac(board)
dac.config()
print " ======================================================\n"
from time import gmtime, strftime
timestr = strftime("%Y-%m-%d_%H-%M-%S", gmtime())
filename = "repetibility/" + timestr + "_"
tensoes = [-9, -5, 0, 5, 9]
# total time of the test (in seconds)
# total_time = 12*60*60
total_time = 0.07 * 60 * 60
# save time when test started
startTime = time.time()
error = 0  # bug fix: 'error' was incremented below without being initialized
############################################################
for x in tensoes:
if (x > 0):
log = open(filename + "+" + str(x) + "V.csv", "a+")
else:
log = open(filename + str(x) + "V.csv", "a+")
# set tabs of .csv file
log.write(';Valor lido no multimetro (V)')
log.write(';Valor lido no multimetro (LSB)')
log.write(';ADC - Leitura do valor integrado (V)')
log.write(';ADC - Leitura do valor integrado (LSB)')
log.write(';MBTemp1:Channel5 (graus C)')
log.write('\n')
# Update the file
log.close()
print " ============================================================================"
print " | REPETIBILIDADE |"
print " ============================================================================"
print " | DAC\t\tMULT.\t\tMULT.(LSB)\tADC\tADC(V)\t\tTEMP.|"
print " |--------------------------------------------------------------------------|"
while ((time.time() - startTime) < total_time):
for x in tensoes:
base = int(((x + 10) / (20 / float(262144))))
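# Worked example of the volt-to-code conversion above (assumption: an
# 18-bit DAC spanning -10 V to +10 V): x = 5 V gives
# int((5 + 10) / (20 / 262144.0)) = int(196608.0) = 196608.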
# select DAC and write correspondent value
selection.dac(board)
dac.write(base)
time.sleep(0.01)
# ---------------------------------------------------
if (x > 0):
log = open(filename + "+" + str(x) + "V.csv", "a+")
else:
log = open(filename + str(x) + "V.csv", "a+")
# ---------------------------------------------------
selection.adc(board)
adc_value = adc.read()
'''
measure = []
for j in range(100):
measure.append(adc.read())
# #print numpy.mean(measure)
adc_value = sum(measure) / len(measure)
'''
if (abs(adc_value - base) > 1000):
error += 1
print error
# adc = "{:1}".format(adc)
# adc = numpy.mean(measure)
adc_volt = float(adc_value) / 262143 * 20 - 10
adc_volt_str = '{:.8f}'.format(adc_volt)
#adc_volt_str = str(adc_volt)
#adc_volt_str = adc_volt_str[0:adc_volt_str.find(".") + 8]
# ---------------------------------------------------
log.write(str(base) + ';' + ';' + str(adc_value) + ';' + str(adc_volt) + ';;')
'''
for j in range(100):
log.write(str(measure[j]) + ';')
log.write('\n')
'''
# Update the file
log.close()
# print data on terminal
sys.stdout.write(" | " + str(base) + "\t" + "----- " + "\t" + " ----- " + "\t")
# ---------------------------------------------------------
sys.stdout.write(str(adc_value) + "\t")
# ---------------------------------------------------------
if (adc_volt < 0):
sys.stdout.write(str(adc_volt_str) + "\t")
else:
sys.stdout.write("+" + str(adc_volt_str) + "\t")
# ---------------------------------------------------------
# sys.stdout.write(temp_str + "|" + "\n")
sys.stdout.write('---\t' + "|" + "\n")
print " |--------------------------------------------------------------------------|"
print "ERROR = " + str(error)
#=======================================================
# repetibility ERROR test without multimeter
#=======================================================
def repetibility_error(board):
# run calibration function and get the step that should be used
#step = calibration(2)
# turns on DAC and ADC circuit
dac.on(board)
adc.on(board)
# select DAC of the board requested
selection.dac(board)
dac.config()
print " ======================================================\n"
from time import gmtime, strftime
timestr = strftime("%Y-%m-%d_%H-%M-%S", gmtime())
filename = "repetibility/" + timestr + "_error_log_file.csv"
log = open(filename, "a+")
# set tabs of .csv file
log.write('Iteracao')
log.write(';Status')
log.write(';Horario')
log.write(';Valor setado [LSB]')
log.write(';Valor lido [LSB]')
log.write(';Valor lido [V]')
log.write(';Diferenca [LSB]')
log.write('\n')
# Update the file
log.close()
# save time when test started
startTime = time.time()
############################################################
print " ============================================================================"
print " | REPETIBILIDADE |"
print " ============================================================================"
print " | DAC\t\tMULT.\t\tMULT.(LSB)\tADC\tADC(V)\t\tTEMP.|"
print " |--------------------------------------------------------------------------|"
iteration = 0
error = 0
while (1):
# read current time
startTime = time.time()
#while ((time.time() - startTime) < 1*60*60):
points = 1024
while ((time.time() - startTime) < 1*60*60):
for i in range(points):
base = int((math.sin(i*1.0/points*2*math.pi) + 1)*131071.5)
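# The expression above maps sin(...) from [-1, 1] onto the full 18-bit
# code range: (-1 + 1) * 131071.5 = 0 and (1 + 1) * 131071.5 = 262143.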
# select DAC and write correspondent value
selection.dac(board)
dac.write(base)
#time.sleep(0.01)
selection.adc(board)
adc_value = adc.read()
adc_volt = float(adc_value) / 262143 * 20 - 10
adc_volt_str = '{:.8f}'.format(adc_volt)
# check if an error occurred
if (abs(adc_value - base) > 100):
error += 1
print error
# write in log file
log = open(filename, "a+")
timestr = strftime("%Y/%m/%d_%H:%M:%S", gmtime())
log.write(str(iteration) + ";erro;" + timestr + ';' + str(base) + ';' + str(adc_value) + ';' + str(adc_volt) + ';' + str((adc_value - base)) + "\n")
# Update the file
log.close()
# print data on terminal
sys.stdout.write(" | " + str(base) + "\t" + "----- " + "\t" + " ----- " + "\t")
# ---------------------------------------------------------
sys.stdout.write(str(adc_value) + "\t")
# ---------------------------------------------------------
if (adc_volt < 0):
sys.stdout.write(str(adc_volt_str) + "\t")
else:
sys.stdout.write("+" + str(adc_volt_str) + "\t")
# ---------------------------------------------------------
# sys.stdout.write(temp_str + "|" + "\n")
sys.stdout.write('---\t' + "|" + "\n")
print " |--------------------------------------------------------------------------|"
print "ERROR = " + str(error)
# write in log file
log = open(filename, "a+")
timestr = strftime("%Y/%m/%d_%H:%M:%S", gmtime())
log.write(str(iteration) + ";fim de ciclo;" + timestr + "\n")
# Update the file
log.close()
iteration += 1
print "ERRO = " + str(error)
#=======================================================
# stability test with multimeter
#=======================================================
def stability_multimeter(board):
# turns on DAC and ADC circuit
dac.on(board)
adc.on(board)
# set up DAC
selection.dac(board)
dac.config()
from time import gmtime, strftime
timestr = strftime("%Y-%m-%d_%H-%M-%S", gmtime())
filename = "stability/" + timestr + "_"
#from epics import caput
#from epics import caget
#import Agilent34420A
#voltage = [-9, -5, 0, 5, 9]
voltage = [9, -5]
#total number of measurements per voltage
total_measures = 10000
# defining variables for MAX, MIN and MEAN (ADC measure)
min_adc = [0] * 5
max_adc = [0] * 5
mean_adc = [0] * 5
std_var = [0] * 5
i = 0
j = 0
############################################################
for x in voltage:
measure = []
if (x > 0):
log = open(filename + "+" + str(x) + "V.csv", "a+")
else:
log = open(filename + str(x) + "V.csv", "a+")
#set tabs of .csv file
log.write('Indice')
log.write(';Valor lido no multimetro (V)')
log.write(';Valor lido no multimetro (LSB)')
log.write(';ADC - Leitura do valor integrado (V)')
log.write(';ADC - Leitura do valor integrado (LSB)')
log.write(';MBTemp1:Channel5 (graus C)')
log.write('\n')
#Update the file
log.close()
print " ============================================================================"
# sys.stdout.write(" | ESTABILIDADE: ")
sys.stdout.write(" | STABILITY: ")
if(x < 0):
sys.stdout.write(str(x) + "V" + " |\n")
elif(x > 0):
sys.stdout.write("+" + str(x) + "V" + " |\n")
else:
sys.stdout.write(str(x) + "V" + " |\n")
print " ============================================================================"
print " | INDEX\tMULT.\t\tMULT.[LSB]\tADC\tADC(V)\t\tTEMP.|"
print " |--------------------------------------------------------------------------|"
# select DAC and write correspondent value
base = int(((x+10)/(20/float(262144))))
selection.dac(board)
dac.write(base)
time.sleep(2)
measure = []
for i in range (total_measures):
if (x > 0):
log = open(filename + "+" + str(x) + "V.csv", "a+")
else:
log = open(filename + str(x) + "V.csv", "a+")
#---------------------------------------------------
# for k in range(100):
# measure.append(adc.read())
# # #print numpy.mean(measure)
# adc_value = sum(measure) / len(measure)
selection.adc(board)
adc_value = adc.read()
measure.append(adc_value)
# check if it is the first measure
if(i == 0):
min_adc[j] = measure[0]
max_adc[j] = measure[0]
mean_adc[j] = measure[0]*1.0
# if not, calculate max, min and mean
else:
if(measure[i] < min_adc[j]):
min_adc[j] = measure[i]
if(measure[i] > max_adc[j]):
max_adc[j] = measure[i]
mean_adc[j] = (mean_adc[j]*i + measure[i])/(i + 1)
i += 1
#adc = "{:1}".format(adc)
#adc = numpy.mean(measure)
adc_volt = float(adc_value)/262143*20-10
adc_volt_str = str(adc_volt)
adc_volt_str = adc_volt_str[0:adc_volt_str.find(".")+8]
#---------------------------------------------------
#Get temperature
#temp = caget("MBTemp_RAFAEL_1:Channel5")
#temp_str = ("%.2f" %temp)
#temp_str = str(temp_str)
#temp_str = temp_str[0:temp_str.find(".")+3]
#---------------------------------------------------
#Write all data
#log.write(str(base+i)+ ';' + multimeter_int_str + ';' + str(multimeter_lsb) + ';' + str(adc) + ';' + str(adc_volt) + ';' + str(temp) + '\n')
#log.write(str(base+i)+ ';' + multimeter_int_str + ';' + str(multimeter_lsb) + ';' + str(adc) + ';' + str(adc_volt) + ';' + '\n')
#log.write(str(base+i)+ ';' + multimeter_int_str + ';' + str(multimeter_lsb) + ';' + str(adc) + ';' + str(adc_volt) + ';;')
log.write(str(base+i)+ ';;;' + str(adc_value) + ';' + str(adc_volt) + ';;')
# for k in range(100):
# log.write(str(measure[k]) + ';')
log.write('\n')
#Update the file
log.close()
#print data on terminal
sys.stdout.write(" | " + str(base) + "\t" + "------" + "\t\t" + "------\t" + "\t")
#---------------------------------------------------------
sys.stdout.write(str(adc_value) + "\t")
#---------------------------------------------------------
if(adc_volt < 0):
sys.stdout.write(str(adc_volt_str) + "\t")
else:
sys.stdout.write("+" + str(adc_volt_str) + "\t")
#---------------------------------------------------------
#sys.stdout.write(temp_str + "|" + "\n")
sys.stdout.write('---\t' + "|" + "\n")
print " | |"
# #calculate standard deviation
# part_sum = 0
# for i in range(len(measure)):
# part_sum = part_sum + (measure[i] - mean_adc[j])**2
# std_var[j] = part_sum/(len(measure)*1.0)
# std_var[j] = math.sqrt(std_var[j])
# std_var[j] = "{0:.4f}".format(std_var[j])
# mean_adc[j] = "{0:.2f}".format(mean_adc[j])
#---------------------------------------------------
# plot and save Histogram
std_var[j] = plot_hist_multimeter(board, x, measure, mean_adc[j])
mean_adc[j] = "{0:.2f}".format(mean_adc[j])
print " ============================================================================"
#---------------------------------------------------
# print standard deviation
sys.stdout.write(" | std_dev = %s" %str(std_var[j]))
for i in range (0, (6 - len(str(std_var[j])))):
sys.stdout.write(' ')
sys.stdout.write(' |\n')
#-------------------------------------------------------
# print minimum value acquired
sys.stdout.write(" | ADC_min = %s" %min_adc[j])
for i in range (0, (6 - len(str(min_adc[j])))):
sys.stdout.write(' ')
sys.stdout.write(' |\n')
#---------------------------------------------------
# print maximum value acquired
sys.stdout.write(" | ADC_max = %s" %max_adc[j])
for i in range (0, (6 - len(str(max_adc[j])))):
sys.stdout.write(' ')
sys.stdout.write(' |\n')
#---------------------------------------------------
# print mean
sys.stdout.write(" | ADC_mean = %s" %mean_adc[j])
for i in range (0, (6 - len(str(mean_adc[j])))):
sys.stdout.write(' ')
sys.stdout.write(' |\n')
#---------------------------------------------------
# print difference between max and min (histogram thickness)
sys.stdout.write(" | diff = %s" %(max_adc[j] - min_adc[j]))
for i in range (0, (6 - len(str(max_adc[j] - min_adc[j])))):
sys.stdout.write(' ')
sys.stdout.write(' |\n')
#---------------------------------------------------
print " ============================="
j += 1
# Print it all again after all the data were acquired
j = 0
for x in voltage:
sys.stdout.write(" | STABILITY: ")
if(x > 0):
sys.stdout.write("+")
if(x == 0):
sys.stdout.write(" ")
sys.stdout.write(str(x) + "V |\n")
print " ============================="
#---------------------------------------------------
# print standard deviation
sys.stdout.write(" | std_dev = %s" %str(std_var[j]))
for i in range (0, (6 - len(str(std_var[j])))):
sys.stdout.write(' ')
sys.stdout.write(' |\n')
#-------------------------------------------------------
# print minimum value acquired
sys.stdout.write(" | ADC_min = %s" %min_adc[j])
for i in range (0, (6 - len(str(min_adc[j])))):
sys.stdout.write(' ')
sys.stdout.write(' |\n')
#---------------------------------------------------
# print maximum value acquired
sys.stdout.write(" | ADC_max = %s" %max_adc[j])
for i in range (0, (6 - len(str(max_adc[j])))):
sys.stdout.write(' ')
sys.stdout.write(' |\n')
#---------------------------------------------------
# print mean
sys.stdout.write(" | ADC_mean = %s" %mean_adc[j])
for i in range (0, (6 - len(str(mean_adc[j])))):
sys.stdout.write(' ')
sys.stdout.write(' |\n')
#---------------------------------------------------
# print difference between max and min (histogram thickness)
sys.stdout.write(" | diff = %s" %(max_adc[j] - min_adc[j]))
for i in range (0, (6 - len(str(max_adc[j] - min_adc[j])))):
sys.stdout.write(' ')
sys.stdout.write(' |\n')
#---------------------------------------------------
print " ============================="
j += 1
#-------------------------------------------------------
# function that plot histogram for stability test
#-------------------------------------------------------
def plot_hist_multimeter(board, voltage, data, mu):
#calculate standard deviation
part_sum = 0
for i in range(len(data)):
part_sum = part_sum + (data[i] - mu)**2
sigma = part_sum/(len(data)*1.0)
sigma = math.sqrt(sigma)
# plot histogram
plt.clf()
plt.title(r'$\mathrm{Histogram\ for\ Board\ %d:}\ \mu=%.2f,\ \sigma=%.4f$' %(board, mu, sigma))
plt.ylabel('Counts')
plt.xlabel('Code in decimal')
# disable scientific notation for numbers in X-axis
ax = plt.gca()
ax.get_xaxis().get_major_formatter().set_useOffset(False)
plt.hist(data, bins = range(min(data), max(data) + 1))
plt.show()
if(voltage >= 0):
voltage_str = "+" + str(voltage)
else:
voltage_str = str(voltage)
plt.savefig('/root/scripts/stability/board_' + str(board) + '_voltage_' + voltage_str)
# return standard deviation (string format)
sigma = "{0:.4f}".format(sigma)
return sigma
#=======================================================
# stability test without multimeter
#=======================================================
def stability(board):
# turns on DAC and ADC circuit
dac.on(board)
adc.on(board)
# set up DAC
selection.dac(board)
dac.config(board)
from time import gmtime, strftime
timestr = strftime("%Y-%m-%d_%H-%M-%S", gmtime())
filename = "stability/" + timestr + "_"
#from epics import caput
#from epics import caget
#import Agilent34420A
voltage = [-9, -5, 0, 5, 9]
#voltage = [9]
#total number of measurements per voltage
total_measures = 10000
# defining variables for MAX, MIN and MEAN (ADC measure)
min_adc = [0] * 5
max_adc = [0] * 5
mean_adc = [0] * 5
std_var = [0] * 5
i = 0
j = 0
############################################################
for x in voltage:
measure = []
if (x > 0):
log = open(filename + "+" + str(x) + "V.csv", "a+")
else:
log = open(filename + str(x) + "V.csv", "a+")
#set tabs of .csv file
log.write('Indice')
log.write(';Valor lido no multimetro (V)')
log.write(';Valor lido no multimetro (LSB)')
log.write(';ADC - Leitura do valor integrado (V)')
log.write(';ADC - Leitura do valor integrado (LSB)')
log.write(';MBTemp1:Channel5 (graus C)')
log.write('\n')
#Update the file
log.close()
print " ============================================================================"
# sys.stdout.write(" | ESTABILIDADE: ")
sys.stdout.write(" | STABILITY: ")
if(x < 0):
sys.stdout.write(str(x) + "V" + " |\n")
elif(x > 0):
sys.stdout.write("+" + str(x) + "V" + " |\n")
else:
sys.stdout.write(str(x) + "V" + " |\n")
print " ============================================================================"
print " | INDEX\tMULT.\t\tMULT.[LSB]\tADC\tADC(V)\t\tTEMP.|"
print " |--------------------------------------------------------------------------|"
# select DAC and write correspondent value
base = int(((x+10)/(20/float(262144))))
selection.dac(board)
dac.write(base)
time.sleep(30)
measure = []
for i in range (total_measures):
if (x > 0):
log = open(filename + "+" + str(x) + "V.csv", "a+")
else:
log = open(filename + str(x) + "V.csv", "a+")
#---------------------------------------------------
selection.adc(board)
mean_measure = []
for k in range(3):
mean_measure.append(adc.read())
# #print numpy.mean(measure)
adc_value = sum(mean_measure) / len(mean_measure)
# adc_value = adc.read()
measure.append(adc_value)
# check if it is the first measure
if(i == 0):
min_adc[j] = measure[0]
max_adc[j] = measure[0]
mean_adc[j] = measure[0]*1.0
# if not, calculate max, min and mean
else:
if(measure[i] < min_adc[j]):
min_adc[j] = measure[i]
if(measure[i] > max_adc[j]):
max_adc[j] = measure[i]
mean_adc[j] = (mean_adc[j]*i + measure[i])/(i + 1)
i += 1
#adc = "{:1}".format(adc)
#adc = numpy.mean(measure)
adc_volt = float(adc_value)/262143*20-10
adc_volt_str = str(adc_volt)
adc_volt_str = adc_volt_str[0:adc_volt_str.find(".")+8]
#---------------------------------------------------
#Get temperature
#temp = caget("MBTemp_RAFAEL_1:Channel5")
#temp_str = ("%.2f" %temp)
#temp_str = str(temp_str)
#temp_str = temp_str[0:temp_str.find(".")+3]
#---------------------------------------------------
#Write all data
#log.write(str(base+i)+ ';' + multimeter_int_str + ';' + str(multimeter_lsb) + ';' + str(adc) + ';' + str(adc_volt) + ';' + str(temp) + '\n')
#log.write(str(base+i)+ ';' + multimeter_int_str + ';' + str(multimeter_lsb) + ';' + str(adc) + ';' + str(adc_volt) + ';' + '\n')
#log.write(str(base+i)+ ';' + multimeter_int_str + ';' + str(multimeter_lsb) + ';' + str(adc) + ';' + str(adc_volt) + ';;')
log.write(str(base+i)+ ';;;' + str(adc_value) + ';' + str(adc_volt) + ';;')
# for k in range(100):
# log.write(str(measure[k]) + ';')
log.write('\n')
#Update the file
log.close()
#print data on terminal
sys.stdout.write(" | " + str(base) + "\t" + "------" + "\t\t" + "------\t" + "\t")
#---------------------------------------------------------
sys.stdout.write(str(adc_value) + "\t")
#---------------------------------------------------------
if(adc_volt < 0):
sys.stdout.write(str(adc_volt_str) + "\t")
else:
sys.stdout.write("+" + str(adc_volt_str) + "\t")
#---------------------------------------------------------
#sys.stdout.write(temp_str + "|" + "\n")
sys.stdout.write('---\t' + "|" + "\n")
print " | |"
# #calculate standard deviation
# part_sum = 0
# for i in range(len(measure)):
# part_sum = part_sum + (measure[i] - mean_adc[j])**2
# std_var[j] = part_sum/(len(measure)*1.0)
# std_var[j] = math.sqrt(std_var[j])
# std_var[j] = "{0:.4f}".format(std_var[j])
# mean_adc[j] = "{0:.2f}".format(mean_adc[j])
#---------------------------------------------------
# plot and save Histogram
std_var[j] = plot_hist(board, x, measure, mean_adc[j])
mean_adc[j] = "{0:.2f}".format(mean_adc[j])
print " ============================================================================"
#---------------------------------------------------
# print standard deviation
sys.stdout.write(" | std_dev = %s" %str(std_var[j]))
for i in range (0, (6 - len(str(std_var[j])))):
sys.stdout.write(' ')
sys.stdout.write(' |\n')
#-------------------------------------------------------
# print minimum value acquired
sys.stdout.write(" | ADC_min = %s" %min_adc[j])
for i in range (0, (6 - len(str(min_adc[j])))):
sys.stdout.write(' ')
sys.stdout.write(' |\n')
#---------------------------------------------------
# print maximum value acquired
sys.stdout.write(" | ADC_max = %s" %max_adc[j])
for i in range (0, (6 - len(str(max_adc[j])))):
sys.stdout.write(' ')
sys.stdout.write(' |\n')
#---------------------------------------------------
# print mean
sys.stdout.write(" | ADC_mean = %s" %mean_adc[j])
for i in range (0, (6 - len(str(mean_adc[j])))):
sys.stdout.write(' ')
sys.stdout.write(' |\n')
#---------------------------------------------------
# print difference between max and min (histogram thickness)
sys.stdout.write(" | diff = %s" %(max_adc[j] - min_adc[j]))
for i in range (0, (6 - len(str(max_adc[j] - min_adc[j])))):
sys.stdout.write(' ')
sys.stdout.write(' |\n')
#---------------------------------------------------
print " ============================="
j += 1
# Print it all again after all the data were acquired
j = 0
for x in voltage:
sys.stdout.write(" | STABILITY: ")
if(x > 0):
sys.stdout.write("+")
if(x == 0):
sys.stdout.write(" ")
sys.stdout.write(str(x) + "V |\n")
print " ============================="
#---------------------------------------------------
# print standard deviation
sys.stdout.write(" | std_dev = %s" %str(std_var[j]))
for i in range (0, (6 - len(str(std_var[j])))):
sys.stdout.write(' ')
sys.stdout.write(' |\n')
#-------------------------------------------------------
# print minimum value acquired
sys.stdout.write(" | ADC_min = %s" %min_adc[j])
for i in range (0, (6 - len(str(min_adc[j])))):
sys.stdout.write(' ')
sys.stdout.write(' |\n')
#---------------------------------------------------
# print maximum value acquired
sys.stdout.write(" | ADC_max = %s" %max_adc[j])
for i in range (0, (6 - len(str(max_adc[j])))):
sys.stdout.write(' ')
sys.stdout.write(' |\n')
#---------------------------------------------------
# print mean
sys.stdout.write(" | ADC_mean = %s" %mean_adc[j])
for i in range (0, (6 - len(str(mean_adc[j])))):
sys.stdout.write(' ')
sys.stdout.write(' |\n')
#---------------------------------------------------
# print difference between max and min (histogram thickness)
sys.stdout.write(" | diff = %s" %(max_adc[j] - min_adc[j]))
for i in range (0, (6 - len(str(max_adc[j] - min_adc[j])))):
sys.stdout.write(' ')
sys.stdout.write(' |\n')
#---------------------------------------------------
print " ============================="
j += 1
#-------------------------------------------------------
# function that plot histogram for stability test
#-------------------------------------------------------
def plot_hist(board, voltage, data, mu):
#calculate standard deviation
part_sum = 0
for i in range(len(data)):
part_sum = part_sum + (data[i] - mu)**2
sigma = part_sum/(len(data)*1.0)
sigma = math.sqrt(sigma)
# plot histogram
plt.clf()
plt.title(r'$\mathrm{Histogram\ for\ Board\ %d:}\ \mu=%.2f,\ \sigma=%.4f$' %(board, mu, sigma))
plt.ylabel('Counts')
plt.xlabel('Code in decimal')
# disable scientific notation for numbers in X-axis
ax = plt.gca()
ax.get_xaxis().get_major_formatter().set_useOffset(False)
plt.hist(data, bins = range(min(data), max(data) + 1))
plt.show()
if(voltage >= 0):
voltage_str = "+" + str(voltage)
else:
voltage_str = str(voltage)
plt.savefig('/root/scripts/stability/board_' + str(board) + '_voltage_' + voltage_str)
# return standard deviation (string format)
sigma = "{0:.4f}".format(sigma)
return sigma
| 43.370833 | 168 | 0.395715 | 3,181 | 31,227 | 3.785916 | 0.079849 | 0.076227 | 0.118575 | 0.024662 | 0.911982 | 0.893714 | 0.886739 | 0.875114 | 0.856016 | 0.853276 | 0 | 0.016915 | 0.299516 | 31,227 | 719 | 169 | 43.431154 | 0.533647 | 0.293144 | 0 | 0.868597 | 0 | 0.013363 | 0.208222 | 0.082687 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.028953 | null | null | 0.082405 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
4ef5d337f41969ed3bd955dfc00530c9a920c94e | 7,317 | py | Python | scify/specfunc/airy.py | DanielBok/scify | 9d4d31deb4379b9782e09f56fa39249a70f9e495 | [
"MIT"
] | 6 | 2019-04-06T09:07:36.000Z | 2020-12-27T19:05:16.000Z | scify/specfunc/airy.py | DanielBok/scify | 9d4d31deb4379b9782e09f56fa39249a70f9e495 | [
"MIT"
] | null | null | null | scify/specfunc/airy.py | DanielBok/scify | 9d4d31deb4379b9782e09f56fa39249a70f9e495 | [
"MIT"
] | null | null | null | from scify.types import Real
from .._specfunc import airy as a
from .._specfunc import airy_deriv as d
from .._specfunc import airy_zero as z
__all__ = ['airy_Ai', 'airy_Ai_scaled', 'airy_Ai_deriv', 'airy_Ai_deriv_scaled', 'airy_zero_Ai', 'airy_zero_Ai_deriv',
'airy_Bi', 'airy_Bi_scaled', 'airy_Bi_deriv', 'airy_Bi_deriv_scaled', 'airy_zero_Bi', 'airy_zero_Bi_deriv']
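# Usage sketch (assumes the compiled `_specfunc` extension modules are
# importable; arguments may be scalars or numerical vectors):
#     from scify.specfunc.airy import airy_Ai, airy_Bi
#     airy_Ai(0.0)          # ~= 0.35503, the value of Ai(0)
#     airy_Bi([0.0, 1.0])   # vectorized evaluation over a list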
def airy_Ai(x, threaded=True) -> Real:
r"""
Computes the Airy function of the first kind. This is defined as
.. math::
Ai(x) = (1/\pi) \int_0^\infty \cos(t^3/3 + xt)\, dt
For more information, checkout the article on `Wikipedia <https://en.wikipedia.org/wiki/Airy_function>`_
Parameters
----------
x: array_like
Numerical vector
threaded: bool, optional
If True, uses multi-threading. Multi-threading is supported by the OpenMP api.
Returns
-------
array_like or scalar
Values from the Airy function
"""
return a.airy_Ai(x, threaded)
def airy_Ai_deriv(x, threaded=True) -> Real:
"""
Compute the derivative of the Airy function of the first kind.
Parameters
----------
x: array_like
Numerical vector
threaded: bool, optional
If True, uses multi-threading. Multi-threading is supported by the OpenMP api.
Returns
-------
array_like or scalar
Derivative values from the Airy function
"""
return d.airy_Ai_deriv(x, threaded)
def airy_Ai_scaled(x, threaded=True) -> Real:
r"""
Computes a scaled version of the Airy function of the first kind.
This is defined as
.. math::
Ai_s(x) = \begin{cases}
(1/\pi) \int_0^\infty \cos(t^3/3 + xt)\, dt, & x < 0 \\
e^{1.5 x^{1.5}} (1/\pi) \int_0^\infty \cos(t^3/3 + xt)\, dt, & x \geq 0
\end{cases}
For more information, checkout the article on `Wikipedia <https://en.wikipedia.org/wiki/Airy_function>`_
Parameters
----------
x: array_like
Numerical vector
threaded: bool, optional
If True, uses multi-threading. Multi-threading is supported by the OpenMP api.
Returns
-------
array_like or scalar
Values from the Airy function
"""
return a.airy_Ai_scaled(x, threaded)
def airy_Ai_deriv_scaled(x, threaded=True) -> Real:
"""
Compute the scaled derivative of the Airy function of the first kind.
Parameters
----------
x: array_like
Numerical vector
threaded: bool, optional
If True, uses multi-threading. Multi-threading is supported by the OpenMP api.
Returns
-------
array_like or scalar
Derivative values from the Airy function
"""
return d.airy_Ai_deriv_scaled(x, threaded)
def airy_zero_Ai(x, threaded=True) -> Real:
r"""
Compute the location of the s-th zero of the Airy function :math:`Ai(x)`
Parameters
----------
x: array_like
Integer valued scalar or vector
threaded: bool, optional
If True, uses multi-threading. Multi-threading is supported by the OpenMP api.
Returns
-------
array_like or scalar
Location of the s-th zero of the Airy function
"""
return z.airy_zero_Ai(x, threaded)
def airy_zero_Ai_deriv(x, threaded=True) -> Real:
r"""
Compute the location of the s-th zero of the Airy function derivative :math:`Ai'(x)`.
Parameters
----------
x: array_like
Integer valued scalar or vector
threaded: bool, optional
If True, uses multi-threading. Multi-threading is supported by the OpenMP api.
Returns
-------
array_like or scalar
Location of the s-th zero of the Airy function derivative
"""
return z.airy_zero_Ai_deriv(x, threaded)
def airy_Bi(x, threaded=True) -> Real:
r"""
Computes the Airy function of the second kind. This is defined as
.. math::
Bi(x) = (1/\pi) \int_0^\infty \left[ e^{-(t^3/3) + xt} + \sin((t^3/3) + xt) \right] dt
For more information, checkout the article on `Wikipedia <https://en.wikipedia.org/wiki/Airy_function>`_
Parameters
----------
x: array_like
Numerical vector
threaded: bool, optional
If True, uses multi-threading. Multi-threading is supported by the OpenMP api.
Returns
-------
array_like or scalar
Values from the Airy function
"""
return a.airy_Bi(x, threaded)
def airy_Bi_deriv(x, threaded=True) -> Real:
r"""
Compute the derivative of the Airy function of the second kind.
Parameters
----------
x: array_like
Numerical vector
threaded: bool, optional
If True, uses multi-threading. Multi-threading is supported by the OpenMP api.
Returns
-------
array_like or scalar
Derivative values from the Airy function
"""
return d.airy_Bi_deriv(x, threaded)
def airy_Bi_scaled(x, threaded=True) -> Real:
r"""
Computes a scaled version of the Airy function of the second kind.
This is defined as
.. math::
Bi_s(x) = \begin{cases}
(1/\pi) \int_0^\infty \left[ e^{-(t^3/3) + xt} + \sin((t^3/3) + xt) \right] dt, & x < 0 \\
e^{1.5 x^{1.5}} (1/\pi) \int_0^\infty \left[ e^{-(t^3/3) + xt} + \sin((t^3/3) + xt) \right] dt, & x \geq 0
\end{cases}
For more information, checkout the article on `Wikipedia <https://en.wikipedia.org/wiki/Airy_function>`_
Parameters
----------
x: array_like
Numerical vector
threaded: bool, optional
If True, uses multi-threading. Multi-threading is supported by the OpenMP api.
Returns
-------
array_like or scalar
Values from the Airy function
"""
return a.airy_Bi_scaled(x, threaded)
def airy_Bi_deriv_scaled(x, threaded=True) -> Real:
r"""
Compute the scaled derivative of the Airy function of the second kind.
Parameters
----------
x: array_like
Numerical vector
threaded: bool, optional
If True, uses multi-threading. Multi-threading is supported by the OpenMP api.
Returns
-------
array_like or scalar
Derivative values from the Airy function
"""
return d.airy_Bi_deriv_scaled(x, threaded)
def airy_zero_Bi(x, threaded=True) -> Real:
r"""
Compute the location of the s-th zero of the Airy function :math:`Bi(x)`
Parameters
----------
x: array_like
Integer valued scalar or vector
threaded: bool, optional
If True, uses multi-threading. Multi-threading is supported by the OpenMP api.
Returns
-------
array_like or scalar
Location of the s-th zero of the Airy function
"""
return z.airy_zero_Bi(x, threaded)
def airy_zero_Bi_deriv(x, threaded=True) -> Real:
r"""
Compute the location of the s-th zero of the Airy function derivative :math:`Bi'(x)`.
Parameters
----------
x: array_like
Integer valued scalar or vector
threaded: bool, optional
If True, uses multi-threading. Multi-threading is supported by the OpenMP api.
Returns
-------
array_like or scalar
Location of the s-th zero of the Airy function derivative
"""
return z.airy_zero_Bi_deriv(x, threaded)
| 25.583916 | 119 | 0.62471 | 1,033 | 7,317 | 4.302033 | 0.089061 | 0.075608 | 0.081008 | 0.053555 | 0.941494 | 0.937444 | 0.879838 | 0.864086 | 0.861836 | 0.839559 | 0 | 0.007759 | 0.260216 | 7,317 | 285 | 120 | 25.673684 | 0.813227 | 0.688944 | 0 | 0 | 0 | 0 | 0.110599 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.3 | false | 0 | 0.1 | 0 | 0.7 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
f633f5fe455ad6b84e350f184a974686cefdb9d8 | 45 | py | Python | ocetrac/_version.py | jbusecke/ocetrac | 9e92246036ae87aea527265ef17d99e91a846c03 | [
"MIT"
] | null | null | null | ocetrac/_version.py | jbusecke/ocetrac | 9e92246036ae87aea527265ef17d99e91a846c03 | [
"MIT"
] | null | null | null | ocetrac/_version.py | jbusecke/ocetrac | 9e92246036ae87aea527265ef17d99e91a846c03 | [
"MIT"
] | null | null | null | __version__ = "0.1.1.dev1+g5b65264.d20210422" | 45 | 45 | 0.777778 | 7 | 45 | 4.428571 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.418605 | 0.044444 | 45 | 1 | 45 | 45 | 0.302326 | 0 | 0 | 0 | 0 | 0 | 0.630435 | 0.630435 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
9c9cd05e4cc2f6a789814cc3ec3a46dac1424666 | 3,755 | py | Python | tturtle run.py | HarleyEDU/pythoneduwork | 6c5a28217c96fac394cb7ad0fb8d186b5080f1de | [
"bzip2-1.0.6"
] | null | null | null | tturtle run.py | HarleyEDU/pythoneduwork | 6c5a28217c96fac394cb7ad0fb8d186b5080f1de | [
"bzip2-1.0.6"
] | null | null | null | tturtle run.py | HarleyEDU/pythoneduwork | 6c5a28217c96fac394cb7ad0fb8d186b5080f1de | [
"bzip2-1.0.6"
] | null | null | null | import turtle
turtle.speed(600)
turtle.bgcolor("pink")
for i in range(10):
    for colours in ["purple", "black", "white", "red", "cyan"]:
        # The original source repeated the block below eleven times per
        # colour (the first two passes with pensize 2, the rest with
        # pensize 3); this loop is a behaviour-preserving restructuring.
        for rep in range(11):
            turtle.color(colours)
            turtle.pensize(2 if rep < 2 else 3)
            turtle.left(10)
            turtle.forward(210)
            turtle.left(90)
            turtle.forward(200)
            turtle.left(90)
            turtle.forward(210)
            turtle.left(90)
            turtle.forward(190)
            turtle.left(90)
            turtle.forward(220)
| 26.821429 | 63 | 0.528096 | 417 | 3,755 | 4.755396 | 0.067146 | 0.277358 | 0.266263 | 0.399395 | 0.95411 | 0.95411 | 0.95411 | 0.95411 | 0.95411 | 0.95411 | 0 | 0.120897 | 0.358988 | 3,755 | 139 | 64 | 27.014388 | 0.70295 | 0 | 0 | 0.963504 | 0 | 0 | 0.007467 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.007299 | 0 | 0.007299 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
9cb89a16b99367e7d899128caabce5817470f42a | 9,414 | py | Python | tests/modules/marketplace/test_marketplace_routes.py | rlin0/donut | 5672df8e853b4b775d7d50665128b255cd695ec2 | [
"MIT"
] | null | null | null | tests/modules/marketplace/test_marketplace_routes.py | rlin0/donut | 5672df8e853b4b775d7d50665128b255cd695ec2 | [
"MIT"
] | null | null | null | tests/modules/marketplace/test_marketplace_routes.py | rlin0/donut | 5672df8e853b4b775d7d50665128b255cd695ec2 | [
"MIT"
] | null | null | null | import flask
from donut.testing.fixtures import client
from donut import app
from donut.modules.marketplace import helpers
def test_marketplace_home(client):
rv = client.get(flask.url_for('marketplace.marketplace'))
assert rv.status_code == 200
def test_marketplace_category(client):
rv = client.get(
flask.url_for('marketplace.query'), query_string={'cat': 1})
assert rv.status_code == 200
rv = client.get(
flask.url_for('marketplace.query'), query_string={'cat': 'all'})
assert rv.status_code == 200
def test_marketplace_query(client):
rv = client.get(
flask.url_for('marketplace.query'),
query_string={'cat': 2,
'q': 'great'})
assert rv.status_code == 200
rv = client.get(flask.url_for('marketplace.query'))
assert rv.status_code == 404
rv = client.get(
flask.url_for('marketplace.query'), query_string={'cat': 'abc'})
assert rv.status_code == 404
def test_marketplace_view_item(client):
rv = client.get(flask.url_for('marketplace.view_item', item_id=1))
assert rv.status_code == 200
rv = client.get(flask.url_for('marketplace.view_item', item_id=1000))
assert rv.status_code == 404
def test_marketplace_manage(client):
rv = client.get(flask.url_for('marketplace.manage'))
assert rv.status_code == 302
assert rv.location == flask.url_for('auth.login')
with client.session_transaction() as sess:
sess['username'] = 'csander'
rv = client.get(flask.url_for('marketplace.manage'))
assert rv.status_code == 200
assert b'Your listings' in rv.data
rv = client.get(flask.url_for('marketplace.archive', item_id=3))
assert rv.status_code == 302
assert rv.location == flask.url_for('marketplace.manage')
assert not helpers.table_fetch(
'marketplace_items',
one=True,
fields=['item_active'],
attrs={'item_id': 3})
rv = client.get(flask.url_for('marketplace.view_item', item_id=3))
assert rv.status_code == 200
assert b'This item has been archived!' in rv.data
rv = client.get(flask.url_for('marketplace.unarchive', item_id=3))
assert rv.status_code == 302
assert rv.location == flask.url_for('marketplace.manage')
assert helpers.table_fetch(
'marketplace_items',
one=True,
fields=['item_active'],
attrs={'item_id': 3})
rv = client.get(flask.url_for('marketplace.view_item', item_id=3))
assert rv.status_code == 200
assert b'This item has been archived!' not in rv.data
# Manage should fail if permissions are missing
with client.session_transaction() as sess:
sess['username'] = 'ruddock_pres'
rv = client.get(flask.url_for('marketplace.archive', item_id=3))
assert rv.status_code == 302
assert rv.location == flask.url_for('marketplace.marketplace')
assert helpers.table_fetch(
'marketplace_items',
one=True,
fields=['item_active'],
attrs={'item_id': 3})
rv = client.get(flask.url_for('marketplace.unarchive', item_id=3))
assert rv.status_code == 302
assert rv.location == flask.url_for('marketplace.marketplace')
def test_marketplace_sell(client):
rv = client.get(flask.url_for('marketplace.sell'))
assert rv.status_code == 302
assert rv.location == flask.url_for('auth.login')
with client.session_transaction() as sess:
sess['username'] = 'csander'
rv = client.get(flask.url_for('marketplace.sell', state='abc'))
assert rv.status_code == 302
assert rv.location == flask.url_for('marketplace.sell')
rv = client.get(flask.url_for('marketplace.sell'))
assert rv.status_code == 200
assert b'Please select a category for your item' in rv.data
item = {}
for cat in (None, 'abc', '10'):
item['cat'] = cat
rv = client.post(flask.url_for('marketplace.sell'), data=item)
assert rv.status_code == 200
assert b'Invalid category' in rv.data
item['cat'] = '1' # Furniture
rv = client.post(flask.url_for('marketplace.sell'), data=item)
assert rv.status_code == 200
assert b'Invalid category' not in rv.data
assert b'Missing item title' in rv.data
item['item_title'] = 'Couch'
rv = client.post(flask.url_for('marketplace.sell'), data=item)
assert rv.status_code == 200
assert b'Missing item title' not in rv.data
assert b'Missing condition' in rv.data
item['item_condition'] = 'Saggy'
rv = client.post(flask.url_for('marketplace.sell'), data=item)
assert rv.status_code == 200
assert b'Missing condition' not in rv.data
assert b'Invalid price' in rv.data
for price in ('cash $$$', '1.3'):
item['item_price'] = price
rv = client.post(flask.url_for('marketplace.sell'), data=item)
assert rv.status_code == 200
assert b'Invalid price' in rv.data
item['item_price'] = '12.34'
item['images'] = ['not_an_image']
rv = client.post(flask.url_for('marketplace.sell'), data=item)
assert rv.status_code == 200
assert b'Invalid price' not in rv.data
assert b'Invalid image' in rv.data
item['images'] = ['http://imgur.com/abcdef123']
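# A plain imgur page URL; the assertions below expect the app to normalize
# it to the direct https://i.imgur.com/... PNG link.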
rv = client.post(
flask.url_for('marketplace.sell'), data=item, follow_redirects=True)
assert rv.status_code == 200
assert b'Invalid image' not in rv.data
assert b'Posted!' in rv.data
rv = client.get(flask.url_for('marketplace.view_item', item_id=4))
assert rv.status_code == 200
assert b'Furniture' in rv.data
assert b'Couch' in rv.data
assert b'Saggy' in rv.data
assert b'$12.34' in rv.data
assert b'https://i.imgur.com/abcdef123.png' in rv.data
assert b'csander' in rv.data
item = {'cat': '2'}
rv = client.post(flask.url_for('marketplace.sell'), data=item)
assert rv.status_code == 200
assert b'Missing textbook title' in rv.data
item['textbook_title'] = 'Algebra'
rv = client.post(flask.url_for('marketplace.sell'), data=item)
assert rv.status_code == 200
assert b'Missing textbook title' not in rv.data
assert b'Missing textbook author' in rv.data
item['textbook_author'] = 'Serge Lang'
rv = client.post(flask.url_for('marketplace.sell'), data=item)
assert rv.status_code == 200
assert b'Missing textbook author' not in rv.data
item['textbook_id'] = '10'
rv = client.post(flask.url_for('marketplace.sell'), data=item)
assert rv.status_code == 200
assert b'Invalid textbook' in rv.data
del item['textbook_id']
item['textbook_edition'] = 'not_an_edition'
rv = client.post(flask.url_for('marketplace.sell'), data=item)
assert rv.status_code == 200
assert b'Invalid textbook edition' in rv.data
item['textbook_edition'] = '3'
item['textbook_isbn'] = 'not_an_isbn'
rv = client.post(flask.url_for('marketplace.sell'), data=item)
assert rv.status_code == 200
assert b'Invalid textbook edition' not in rv.data
assert b'Invalid textbook ISBN' in rv.data
item['textbook_isbn'] = '0-387-95385-X'
item['item_condition'] = 'New'
item['item_price'] = '69'
item['item_details'] = 'Caused much pain and suffering'
rv = client.post(
flask.url_for('marketplace.sell'), data=item, follow_redirects=True)
assert rv.status_code == 200
assert b'Invalid textbook ISBN' not in rv.data
assert b'Posted!' in rv.data
rv = client.get(flask.url_for('marketplace.view_item', item_id=5))
assert rv.status_code == 200
assert b'Textbooks' in rv.data
assert b'Algebra' in rv.data
assert b'Serge Lang' in rv.data
assert b'New' in rv.data
assert b'038795385X' in rv.data
assert b'$69.00' in rv.data
assert b'Caused much pain and suffering' in rv.data
assert b'csander' in rv.data
def test_marketplace_edit(client):
with client.session_transaction() as sess:
sess['username'] = 'csander'
rv = client.get(
flask.url_for('marketplace.sell', state='edit'), follow_redirects=True)
assert rv.status_code == 200
assert b'Invalid item' in rv.data
rv = client.get(
flask.url_for('marketplace.sell', state='edit', item_id=100),
follow_redirects=True)
assert rv.status_code == 200
assert b'Invalid item' in rv.data
rv = client.get(
flask.url_for('marketplace.sell', state='edit', item_id=1),
follow_redirects=True)
assert rv.status_code == 200
assert b'You do not have permission to edit this item' in rv.data
rv = client.get(flask.url_for('marketplace.sell', state='edit', item_id=4))
assert rv.status_code == 200
assert b'Couch' in rv.data
assert b'12.34' in rv.data
new_item = {
'cat': 1,
'item_title': 'Slouch',
'item_condition': 'Poor',
'item_price': '.77',
'item_details': 'Possibly cursed'
}
rv = client.post(
flask.url_for('marketplace.sell', state='edit', item_id=4),
data=new_item,
follow_redirects=True)
assert rv.status_code == 200
assert b'Updated!' in rv.data
rv = client.get(flask.url_for('marketplace.view_item', item_id=4))
assert rv.status_code == 200
assert b'Furniture' in rv.data
assert b'Slouch' in rv.data
assert b'Poor' in rv.data
assert b'$0.77' in rv.data
assert b'https://i.imgur.com/abcdef123.png' not in rv.data
assert b'csander' in rv.data
| 34.610294 | 79 | 0.663055 | 1,385 | 9,414 | 4.376895 | 0.106137 | 0.060046 | 0.068624 | 0.166942 | 0.850379 | 0.782085 | 0.779776 | 0.751897 | 0.693995 | 0.677829 | 0 | 0.027781 | 0.204695 | 9,414 | 271 | 80 | 34.738007 | 0.781889 | 0.005842 | 0 | 0.542986 | 0 | 0 | 0.244763 | 0.027576 | 0 | 0 | 0 | 0 | 0.466063 | 1 | 0.031674 | false | 0 | 0.0181 | 0 | 0.049774 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
9cf06f41a85a608bc6fae2a185e1a516f8d4246c | 2,150 | py | Python | aiochan/test/test_buffer.py | agentOfChaos/aiochan | 46fdfa038f376edec632a1552475eaf60c860198 | [
"Apache-2.0"
] | 128 | 2018-08-24T06:39:10.000Z | 2022-02-21T19:15:35.000Z | aiochan/test/test_buffer.py | agentOfChaos/aiochan | 46fdfa038f376edec632a1552475eaf60c860198 | [
"Apache-2.0"
] | 3 | 2019-01-30T11:13:32.000Z | 2020-03-12T16:40:21.000Z | aiochan/test/test_buffer.py | agentOfChaos/aiochan | 46fdfa038f376edec632a1552475eaf60c860198 | [
"Apache-2.0"
] | 10 | 2018-09-14T11:15:03.000Z | 2022-02-20T15:23:28.000Z | from aiochan.buffers import *
def test_fixed_buffer():
buffer = FixedLengthBuffer(3)
assert buffer.can_add
assert not buffer.can_take
buffer.add(1)
buffer.add(2)
assert buffer.can_add
assert buffer.can_take
buffer.add(3)
assert not buffer.can_add
assert buffer.can_take
assert buffer.take() == 1
assert buffer.can_add
assert buffer.can_take
assert buffer.take() == 2
assert buffer.take() == 3
assert buffer.can_add
assert not buffer.can_take
assert buffer.__repr__()
def test_dropping_buffer():
buffer = DroppingBuffer(2)
assert buffer.can_add
assert not buffer.can_take
buffer.add(1)
buffer.add(2)
assert buffer.can_add
assert buffer.can_take
assert buffer.take() == 1
buffer.add(3)
buffer.add(4)
assert buffer.take() == 2
assert buffer.take() == 3
assert buffer.can_add
assert not buffer.can_take
assert buffer.__repr__()
def test_sliding_buffer():
buffer = SlidingBuffer(2)
assert buffer.can_add
assert not buffer.can_take
buffer.add(1)
buffer.add(2)
assert buffer.can_add
assert buffer.can_take
assert buffer.take() == 1
buffer.add(3)
buffer.add(4)
assert buffer.take() == 3
assert buffer.take() == 4
assert buffer.can_add
assert not buffer.can_take
assert buffer.__repr__()
def test_promise_buffer():
buffer = PromiseBuffer(None)
assert buffer.can_add
assert not buffer.can_take
buffer.add(1)
assert buffer.can_add
assert buffer.can_take
assert buffer.take() == 1
buffer.add(2)
assert buffer.can_add
assert buffer.can_take
assert buffer.take() == 1
assert buffer.__repr__()
def test_it_buffer():
buffer = IterBuffer(())
assert not buffer.can_add
assert not buffer.can_take
buffer = IterBuffer(range(2))
assert not buffer.can_add
assert buffer.can_take
assert buffer.take() == 0
assert not buffer.can_add
assert buffer.can_take
assert buffer.take() == 1
assert not buffer.can_add
assert not buffer.can_take
| 16.538462 | 33 | 0.664651 | 303 | 2,150 | 4.511551 | 0.09901 | 0.342356 | 0.241405 | 0.237015 | 0.843453 | 0.828822 | 0.819312 | 0.819312 | 0.819312 | 0.814923 | 0 | 0.018519 | 0.246512 | 2,150 | 129 | 34 | 16.666667 | 0.825309 | 0 | 0 | 0.820513 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.679487 | 1 | 0.064103 | false | 0 | 0.012821 | 0 | 0.076923 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 11 |
14946801cebb3600c1a8d6769fae59c6e0d91fb2 | 1,451 | py | Python | test/test_mock.py | umarcor/svunit | 1a086aed27d8be3520c07c53b2f4e77bf20d266f | [
"Apache-2.0"
] | 65 | 2015-11-27T21:35:09.000Z | 2020-06-22T01:51:21.000Z | test/test_mock.py | umarcor/svunit | 1a086aed27d8be3520c07c53b2f4e77bf20d266f | [
"Apache-2.0"
] | 61 | 2016-05-23T14:24:52.000Z | 2020-06-25T11:43:35.000Z | test/test_mock.py | umarcor/svunit | 1a086aed27d8be3520c07c53b2f4e77bf20d266f | [
"Apache-2.0"
] | 32 | 2015-12-22T19:01:39.000Z | 2020-06-22T01:55:11.000Z | import subprocess
import pytest  # needed for the skip markers below (previously relied on utils' star import)
from utils import *
@all_files_in_dir('mock_uvm_report')
@all_available_simulators()
@pytest.mark.skip(reason="'uvm_report_mock' seems to be busted for UVM 1.2")
def test_mock_uvm_report(datafiles, simulator):
with datafiles.as_cwd():
subprocess.check_call(['runSVUnit', '-sim', simulator, '-uvm', '-define', 'UVM_NO_DEPRECATED', '-define', 'RUN_SVUNIT_WITH_UVM_REPORT_MOCK'])
expect_testrunner_pass('run.log')
# TODO This is redundant with the test that loops over all simulators.
@all_files_in_dir('mock_uvm_report_ius')
@all_available_simulators()
def test_mock_uvm_report_ius(datafiles, simulator):
with datafiles.as_cwd():
if simulator == 'irun':
subprocess.check_call(['runSVUnit', '-sim', simulator, '-uvm', '-define', 'UVM_NO_DEPRECATED', '-define', 'RUN_SVUNIT_WITH_UVM_REPORT_MOCK'])
expect_testrunner_pass('run.log')
@all_files_in_dir('mock_uvm_report_ius_uvm1.2')
@all_available_simulators()
@pytest.mark.skip(reason="'uvm_report_mock' seems to be busted for UVM 1.2")
def test_mock_uvm_report_ius_uvm1_2(datafiles, simulator):
with datafiles.as_cwd():
if simulator == 'irun':
subprocess.check_call(['runSVUnit', '-sim', simulator, '-uvm', '-define', 'UVM_NO_DEPRECATED', '-c_arg', '-uvmhome $INCISIV_HOME/tools/methodology/UVM/CDNS-1.2/sv', '-define', 'RUN_SVUNIT_WITH_UVM_REPORT_MOCK'])
expect_testrunner_pass('run.log')
| 45.34375 | 223 | 0.726396 | 205 | 1,451 | 4.770732 | 0.317073 | 0.101227 | 0.079755 | 0.06544 | 0.838446 | 0.838446 | 0.764826 | 0.738241 | 0.678937 | 0.678937 | 0 | 0.007943 | 0.132323 | 1,451 | 31 | 224 | 46.806452 | 0.768864 | 0.046864 | 0 | 0.625 | 0 | 0 | 0.350471 | 0.120203 | 0 | 0 | 0 | 0.032258 | 0 | 1 | 0.125 | false | 0.125 | 0.083333 | 0 | 0.208333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
212b1d851cd629166a5218c980a4e4fe5f6b29ec | 11,701 | py | Python | interlacer/models.py | nalinimsingh/interlacer | d447b7cd6b64337028342377218b61b6cb474a97 | [
"MIT"
] | 16 | 2020-07-06T00:33:46.000Z | 2021-04-22T20:17:12.000Z | interlacer/models.py | nalinimsingh/interlacer | d447b7cd6b64337028342377218b61b6cb474a97 | [
"MIT"
] | 1 | 2020-07-11T21:21:36.000Z | 2021-02-18T19:29:03.000Z | interlacer/models.py | nalinimsingh/interlacer | d447b7cd6b64337028342377218b61b6cb474a97 | [
"MIT"
] | 5 | 2020-07-06T01:17:31.000Z | 2021-01-20T15:15:31.000Z | import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import backend as K
from tensorflow.keras.layers import *
from tensorflow.keras.utils import get_custom_objects
from interlacer import layers, utils
def get_conv_no_residual_model(
input_size,
nonlinearity,
kernel_size,
num_features,
num_layers,
enforce_dc):
"""Generic conv model without residual convolutions.
Args:
input_size(int): Tuple containing input shape, excluding batch size
nonlinearity(str): 'relu' or '3-piece'
kernel_size(int): Dimension of each convolutional filter
num_features(int): Number of features in each intermediate network layer
num_layers(int): Number of convolutional layers in model
enforce_dc(bool): If True, the model takes a second mask input and keeps the measured k-space values at sampled locations (data consistency)
Returns:
model: Keras model comprised of num_layers core convolutional layers with specified nonlinearities
"""
inputs = Input(input_size)
if(enforce_dc):
masks = Input(input_size)
prev_layer = inputs
for i in range(num_layers):
conv = layers.BatchNormConv(num_features, kernel_size)(prev_layer)
nonlinear = layers.get_nonlinear_layer(nonlinearity)(conv)
prev_layer = nonlinear
output = Conv2D(2, kernel_size, activation=None, padding='same',
kernel_initializer='he_normal')(prev_layer)
if(enforce_dc):
output = masks * inputs + (1 - masks) * output
model = keras.models.Model(inputs=(inputs, masks), outputs=output)
else:
model = keras.models.Model(inputs=inputs, outputs=output)
return model
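# Example usage (a minimal sketch; the shape and hyperparameters below are
# illustrative assumptions, not values taken from this repository):
#
#     model = get_conv_no_residual_model(
#         input_size=(256, 256, 2), nonlinearity='relu', kernel_size=3,
#         num_features=32, num_layers=6, enforce_dc=False)
#     model.compile(optimizer='adam', loss='mse')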
def get_conv_residual_model(
input_size,
nonlinearity,
kernel_size,
num_features,
num_layers,
enforce_dc):
"""Generic conv model with residual convolutions.
Args:
input_size(tuple): Tuple containing input shape, excluding batch size
nonlinearity(str): 'relu' or '3-piece'
kernel_size(int): Dimension of each convolutional filter
num_features(int): Number of features in each intermediate network layer
num_layers(int): Number of convolutional layers in model
enforce_dc(bool): Whether to paste in original acquired k-space lines in final output
Returns:
model: Keras model comprised of num_layers core convolutional layers with specified nonlinearities
"""
inputs = Input(input_size)
if(enforce_dc):
masks = Input(input_size)
prev_layer = inputs
for i in range(num_layers):
conv = layers.BatchNormConv(num_features, kernel_size)(prev_layer)
nonlinear = layers.get_nonlinear_layer(nonlinearity)(conv)
prev_layer = nonlinear + \
tf.tile(inputs, [1, 1, 1, int(num_features / 2)])
output = Conv2D(2, kernel_size, activation=None, padding='same',
kernel_initializer='he_normal')(prev_layer) + inputs
if(enforce_dc):
output = masks * inputs + (1 - masks) * output
model = keras.models.Model(inputs=(inputs, masks), outputs=output)
else:
model = keras.models.Model(inputs=inputs, outputs=output)
return model
def get_interlacer_residual_model(
input_size,
nonlinearity,
kernel_size,
num_features,
num_convs,
num_layers,
enforce_dc):
"""Interlacer model with residual convolutions.
Returns a model that takes a frequency-space input (of shape (batch_size, n, n, 2)) and returns a frequency-space output of the same size, comprised of interlacer layers and with connections from the input to each layer.
Args:
input_size(tuple): Tuple containing input shape, excluding batch size
nonlinearity(str): 'relu' or '3-piece'
kernel_size(int): Dimension of each convolutional filter
num_features(int): Number of features in each intermediate network layer
num_convs(int): Number of convolutions per layer
num_layers(int): Number of interlacer layers in model
enforce_dc(bool): Whether to paste in original acquired k-space lines in final output
Returns:
model: Keras model comprised of num_layers core interlaced layers with specified nonlinearities
"""
inputs = Input(input_size)
if(enforce_dc):
masks = Input(input_size)
n = inputs.get_shape().as_list()[1]
inp_real = tf.expand_dims(inputs[:, :, :, 0], -1)
inp_imag = tf.expand_dims(inputs[:, :, :, 1], -1)
n_copies = int(num_features / 2)
inp_copy = tf.reshape(tf.tile(tf.expand_dims(tf.concat(
[inp_real, inp_imag], axis=3), 4), [1, 1, 1, 1, n_copies]), [-1, n, n, num_features])
inputs_img = utils.convert_tensor_to_image_domain(inputs)
inp_img_real = tf.expand_dims(inputs_img[:, :, :, 0], -1)
inp_img_imag = tf.expand_dims(inputs_img[:, :, :, 1], -1)
inp_img_copy = tf.reshape(tf.tile(tf.expand_dims(tf.concat(
[inp_img_real, inp_img_imag], axis=3), 4), [1, 1, 1, 1, n_copies]), [-1, n, n, num_features])
freq_in = inputs
img_in = inputs_img
for i in range(num_layers):
img_conv, k_conv = layers.Interlacer(
num_features, kernel_size, num_convs)([img_in, freq_in])
freq_in = k_conv + inp_copy
img_in = img_conv + inp_img_copy
output = Conv2D(2, kernel_size, activation=None, padding='same',
kernel_initializer='he_normal')(freq_in) + inputs
if(enforce_dc):
output = masks * inputs + (1 - masks) * output
model = keras.models.Model(inputs=(inputs, masks), outputs=output)
else:
model = keras.models.Model(inputs=inputs, outputs=output)
return model
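# Example usage with data consistency enforced (a minimal sketch; the shapes and
# hyperparameters below are illustrative assumptions, not values taken from this
# repository). With enforce_dc=True the model expects both the undersampled
# k-space and the sampling mask:
#
#     model = get_interlacer_residual_model(
#         input_size=(256, 256, 2), nonlinearity='relu', kernel_size=3,
#         num_features=32, num_convs=1, num_layers=6, enforce_dc=True)
#     recon_k = model((kspace_batch, mask_batch))  # both (batch, 256, 256, 2)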
def crop_320(inputs):
inputs = tf.expand_dims(inputs, 0)
inputs_img = utils.convert_tensor_to_image_domain(inputs)[0, :, :, :]
inputs_img = tf.signal.ifftshift(inputs_img, axes=(0, 1))
shape = tf.shape(inputs_img)
x = shape[0]
y = shape[1]
n = 320
x_l = tf.cast(x / 2 - n / 2, tf.int32)
x_r = tf.cast(x / 2 + n / 2, tf.int32)
y_l = tf.cast(y / 2 - n / 2, tf.int32)
y_r = tf.cast(y / 2 + n / 2, tf.int32)
icrop_img = tf.expand_dims(
tf.slice(inputs_img, (x_l, y_l, 0), (n, n, 2)), 0)
icrop_k = utils.convert_tensor_to_frequency_domain(icrop_img)[0, :, :, :]
return icrop_k
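# Note on crop_320: it maps a single k-space slice of shape (x, y, 2) to the
# k-space of its centered 320x320 image-domain crop (presumably matching the
# fastMRI convention of evaluating 320x320 reconstructions). It is applied
# per-example via tf.map_fn below, so it receives tensors without a batch axis.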
def get_fastmri_interlacer_residual_model(
input_size,
nonlinearity,
kernel_size,
num_features,
num_convs,
num_layers,
enforce_dc):
"""Interlacer model with residual convolutions.
Returns a model that takes a frequency-space input (of shape (batch_size, n, n, 2)) and returns
a frequency-space output of the same size, comprised of interlacer layers and with connections
from the input to each layer. Handles variable input size, and crops to a 320x320 image at the end.
Args:
input_size(tuple): Tuple containing input shape, excluding batch size
nonlinearity(str): 'relu' or '3-piece'
kernel_size(int): Dimension of each convolutional filter
num_features(int): Number of features in each intermediate network layer
num_convs(int): Number of convolutions per layer
num_layers(int): Number of convolutional layers in model
enforce_dc(bool): Whether to paste in original acquired k-space lines in final output
Returns:
model: Keras model comprised of num_layers core interlaced layers with specified nonlinearities
"""
inputs = Input(input_size)
if(enforce_dc):
masks = Input(input_size)
x = tf.shape(inputs)[1]
y = tf.shape(inputs)[2]
inp_real = tf.expand_dims(inputs[:, :, :, 0], -1)
inp_imag = tf.expand_dims(inputs[:, :, :, 1], -1)
n_copies = int(num_features / 2)
inp_copy = tf.reshape(tf.tile(tf.expand_dims(tf.concat(
[inp_real, inp_imag], axis=3), 4), [1, 1, 1, 1, n_copies]), [-1, x, y, num_features])
inputs_img = utils.convert_tensor_to_image_domain(inputs)
inp_img_real = tf.expand_dims(inputs_img[:, :, :, 0], -1)
inp_img_imag = tf.expand_dims(inputs_img[:, :, :, 1], -1)
inp_img_copy = tf.reshape(tf.tile(tf.expand_dims(tf.concat(
[inp_img_real, inp_img_imag], axis=3), 4), [1, 1, 1, 1, n_copies]), [-1, x, y, num_features])
freq_in = inputs
img_in = inputs_img
for i in range(num_layers):
img_conv, k_conv = layers.Interlacer(
num_features, kernel_size, num_convs, shift=True)([img_in, freq_in])
freq_in = k_conv + inp_copy
img_in = img_conv + inp_img_copy
output = Conv2D(2, kernel_size, activation=None, padding='same',
kernel_initializer='he_normal')(freq_in) + inputs
if(enforce_dc):
output = masks * inputs + (1 - masks) * output
output_crop = tf.keras.layers.Lambda(
lambda x: tf.map_fn(
crop_320, x, dtype=tf.float32))(output)
if(enforce_dc):
model = keras.models.Model(
inputs={
'input': inputs, 'mask': masks}, outputs={
'output': output, 'output_crop': output_crop})
else:
model = keras.models.Model(
inputs=inputs,
outputs={
'output': output,
'output_crop': output_crop})
return model
def get_alternating_residual_model(
input_size,
nonlinearity,
kernel_size,
num_features,
num_layers,
enforce_dc):
"""Alternating model with residual convolutions.
Returns a model that takes a frequency-space input (of shape (batch_size, n, n, 2)) and returns a frequency-space output of the same size, comprised of alternating frequency- and image-space convolutional layers and with connections from the input to each layer.
Args:
input_size(tuple): Tuple containing input shape, excluding batch size
nonlinearity(str): 'relu' or '3-piece'
kernel_size(int): Dimension of each convolutional filter
num_features(int): Number of features in each intermediate network layer
num_layers(int): Number of convolutional layers in model
enforce_dc(bool): Whether to paste in original acquired k-space lines in final output
Returns:
model: Keras model comprised of num_layers alternating image- and frequency-space convolutional layers with specified nonlinearities
"""
inputs = Input(input_size)
if(enforce_dc):
masks = Input(input_size)
n = inputs.get_shape().as_list()[1]
inp_real = tf.expand_dims(inputs[:, :, :, 0], -1)
inp_imag = tf.expand_dims(inputs[:, :, :, 1], -1)
n_copies = int(num_features / 2)
inp_copy = tf.reshape(tf.tile(tf.expand_dims(tf.concat(
[inp_real, inp_imag], axis=3), 4), [1, 1, 1, 1, n_copies]), [-1, n, n, num_features])
inputs_img = utils.convert_tensor_to_image_domain(inputs)
inp_img_real = tf.expand_dims(inputs_img[:, :, :, 0], -1)
inp_img_imag = tf.expand_dims(inputs_img[:, :, :, 1], -1)
inp_img_copy = tf.reshape(tf.tile(tf.expand_dims(tf.concat(
[inp_img_real, inp_img_imag], axis=3), 4), [1, 1, 1, 1, n_copies]), [-1, n, n, num_features])
prev_layer = inputs
for i in range(num_layers):
k_conv = layers.BatchNormConv(
num_features, kernel_size)(prev_layer) + inp_copy
nonlinear = layers.get_nonlinear_layer('3-piece')(k_conv)
img = utils.convert_channels_to_image_domain(nonlinear)
img_conv = layers.BatchNormConv(
num_features, kernel_size)(img) + inp_img_copy
nonlinear = layers.get_nonlinear_layer('relu')(img_conv)
prev_layer = utils.convert_channels_to_frequency_domain(nonlinear)
output = Conv2D(2, kernel_size, activation=None, padding='same',
kernel_initializer='he_normal')(prev_layer) + inputs
if(enforce_dc):
output = masks * inputs + (1 - masks) * output
model = keras.models.Model(inputs=(inputs, masks), outputs=output)
else:
model = keras.models.Model(inputs=inputs, outputs=output)
return model
| 36.114198 | 266 | 0.659004 | 1,613 | 11,701 | 4.572226 | 0.095474 | 0.03878 | 0.032542 | 0.031729 | 0.860475 | 0.854237 | 0.844203 | 0.829424 | 0.820203 | 0.795254 | 0 | 0.016091 | 0.235194 | 11,701 | 323 | 267 | 36.226006 | 0.808023 | 0.280916 | 0 | 0.712821 | 0 | 0 | 0.014565 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.030769 | false | 0 | 0.035897 | 0 | 0.097436 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
dcbd25986764ab4acc1e3b4efaed2035bddd6691 | 56,798 | py | Python | Stage4_Left.py | yves-weissenberger/Sofia-Predictive-Coding | 494482960233c6ef26afaee82eb724d68858d922 | [
"MIT"
] | 1 | 2019-01-12T22:42:33.000Z | 2019-01-12T22:42:33.000Z | Stage4_Left.py | yves-weissenberger/Sofia-Predictive-Coding | 494482960233c6ef26afaee82eb724d68858d922 | [
"MIT"
] | null | null | null | Stage4_Left.py | yves-weissenberger/Sofia-Predictive-Coding | 494482960233c6ef26afaee82eb724d68858d922 | [
"MIT"
] | null | null | null | from __future__ import division
import numpy.random as rnd
import RPi.GPIO as GPIO
import csv
import requests as req
import sys
import pygame
from pygame.locals import *
import numpy as np
import random
from random import shuffle
import os
import socket
import billiard
import time
print "Im online :)"
# Data sending function
pi_IP = [(s.connect(('8.8.8.8', 80)), s.getsockname()[0], s.close()) for s in [socket.socket(socket.AF_INET, socket.SOCK_DGRAM)]][0][1]
pi_ID = str(int(pi_IP[-3:])-100)
def send_data(load):
headers = {'User-Agent': 'Mozilla/5.0'}
link = 'http://192.168.0.99:8000/getData/' + pi_ID + '/get_PiData/'
session = req.Session()
r1 = session.get(link,headers=headers)
link1 = 'http://192.168.0.99:8000/getData/' + pi_ID + '/write_PiData/'
payload = {'piData':load,'csrfmiddlewaretoken':r1.cookies['csrftoken']}
#cookies = dict(session.cookies)
session.post(link1,headers=headers,data=payload)
return None
# Setup RPi.GPIO
GPIO.setmode(GPIO.BOARD)
lickL = 36 #Input channel wired up to the left licking spout
lickR = 38 #Input channel wired up to the right spout
GPIO.setup(lickL,GPIO.IN) #Input pin receiving voltage change resulting from lick
GPIO.setup(lickR,GPIO.IN)
GPIO.add_event_detect(lickL,GPIO.RISING)
GPIO.add_event_detect(lickR,GPIO.RISING)
#The pins I'm using to send a pulse to the second RPi to trigger the presentation of the other stimulus.
GPIO.setup(33,GPIO.OUT)
GPIO.setup(31,GPIO.OUT)
GPIO.setup(29,GPIO.OUT)
GPIO.setup(15,GPIO.OUT)
GPIO.setup(23,GPIO.OUT)
GPIO.setup(21,GPIO.OUT)
GPIO.setup(19,GPIO.OUT)
GPIO.setup(40,GPIO.OUT)
GPIO.setup(26,GPIO.OUT)
GPIO.setup(32,GPIO.OUT)
solOpenDur = 0.03 #Solenoid open duration in seconds; note this is a very short reward-delivery window
rewL = 35
rewR = 37
GPIO.setup(rewL,GPIO.OUT) #Output pin specified; used to deliver rewards
GPIO.setup(rewR,GPIO.OUT)
sound_dur=0.4
minILI=0.05 #Minimum interlick interval in seconds; needed to calculate licking frequency
punishment_delay = 5
contrast_var= 0.6 #ADJUST BASED ON INDIVIDUAL MOUSE'S 75% THRESHOLD AT STAGE 3
# Reward Delivery Helper Functions
def deliverRew(channel):
rewstart=time.time()
while time.time()<= rewstart+solOpenDur:
GPIO.output(channel,1)
GPIO.output(channel,0)
rewProcL= billiard.Process(target=deliverRew,args=(rewL,))
rewProcR=billiard.Process(target=deliverRew, args=(rewR,))
def sendpulse(channel):
pulsestart=time.time()
while time.time()<=pulsestart+0.05:
GPIO.output(channel,1)
GPIO.output(channel,0)
def grey_screenpulse(channel):
pulsestart=time.time()
while time.time()<=pulsestart+0.05:
GPIO.output(channel,1)
GPIO.output(channel,0)
def punish_pulse(channel):
pulsestart=time.time()
while time.time()<=pulsestart+0.05:
GPIO.output(channel,1)
GPIO.output(channel,0)
def data_sender(lickLst, rewLst, orientation, location, sendT): #Serialise the buffered events (each entry now carries more than two fields) and send them
lickStr = 'LickList:' + '-'.join([str(np.round(entry[0],decimals=3))+ ' ' + str(np.round(entry[1],decimals=3))+ ' ' + str(np.round(entry[2],decimals=3))+ ' ' + entry[3] + ' ' + entry[4] for entry in lickLst])
rewStr = 'rewList:' + '-'.join([str(np.round(entry[0],decimals=3))+ ' ' + str(np.round(entry[1],decimals=3))+ ' ' + str(np.round(entry[2],decimals=3))+ ' ' + entry[3] for entry in rewLst])
locStr = 'Location:' + '-'.join([str(np.round(location,decimals=3))])
orStr= 'Orientation:' + '-'.join([str(np.round(orientation,decimals=3))])
sendStr = ', '.join([rewStr,lickStr,locStr,orStr])
sendProc = billiard.Process(target=send_data,args=(sendStr,))
sendProc.start()
print 'seeeeeending'
#send_data(sendStr)
sendT = time.time()
lickLst = []; rewLst = []; #No need to empty / update the location/orientation values
#these will be updated at the start of each trial
return lickLst,rewLst,sendT
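# Example of the serialized payload produced above (a hypothetical illustration
# with made-up numbers): fields are space-separated within an event and joined
# with '-' across events, e.g.
# 'rewList:3 12.401 1.205 RR, LickList:3 12.35 1.154 RL Correct, Location:1, Orientation:3'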
#Defining my visual stimuli and task parameters
timeout = 0.1 # Every 100 msec, trial frequency
FPS =30
Clock= pygame.time.Clock()
BLACK = (0, 0, 0)
GRAY = (127, 127, 127)
grey_rect=pygame.Rect(160,0,480,480)
gameDisplay=pygame.display.set_mode((800, 480)) #,pygame.FULLSCREEN
changex=4
freq=6 #Originally 18. A MATLAB script named sine_experiment in the Matlabcourse folder lets you adjust the parameters to identify the best frequency. Use it.
stim_dur=5
greyscreen_dur=2
refresh_rate = 0.05 #originally 0.05
cue_period = 2
#Defining trial structure
Location = []
Orientation = []
location = []
orientation = []
Location_Array = []
Orientation_Array = []
block_repeats=1
while block_repeats <=18:
t_perblock=1
while t_perblock<=10:
if t_perblock<=9:
Location = random.randrange(1,3) #Location
Orientation = random.randrange(3,5) #Orientation
Location_Array.append(Location)
Orientation_Array.append(Orientation)
t_perblock+=1
elif t_perblock == 10: #Once every ten times that a trial condition is picked, I want an invalid condition to be picked.
#Even though now, the invalid condition is always the last to be picked in a block of 10 trials,
#I will shuffle the contents of each block before concatenating the 18 of them into a single block of 180 trials.
Location = random.randrange(1,3) # Location, works the same on invalid trials.
Orientation = random.randrange(6,8) #This will represent the two types of invalidly cued trials:
#6 = invalid horizontal cue, 7 = invalid vertical cue (randrange(6,8) returns 6 or 7).
Location_Array.append(Location)
Orientation_Array.append(Orientation)
t_perblock+=1
block_repeats+=1
shuffle(Location_Array)
shuffle(Orientation_Array)
Location_Array = np.array(Location_Array)
Orientation_Array = np.array(Orientation_Array)
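# Sanity check on the trial structure (an illustrative addition, not part of the
# original protocol): 18 blocks of 10 trials give 180 trials, exactly 18 of
# which (one per block) carry an invalid cue (orientation code 6 or 7).
assert len(Location_Array) == 180 and len(Orientation_Array) == 180
assert np.sum(Orientation_Array >= 6) == 18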
#MAKING GRATINGS
h_gab=[] #Horizontal sine wave array, to be filled
v_gab=[] #Vertical sine wave array
print "Currently making gratings"
j=0
x=0
while j <=38: #Horizontal grating; originally 100 frames (j from 0 to 100), reduced because of memory-allocation limits on the RPi.
pixels = np.linspace(np.pi+x,3*np.pi+x,480)
[sinexgrid, sineygrid] = np.meshgrid(pixels, pixels)
gaussianinputs= np.linspace(-np.pi, np.pi,480)
[gaussxgrid,gaussygrid] = np.meshgrid(gaussianinputs, gaussianinputs)
# Gaussian : mean = 0, std = 1, amplitude = 1
gaussian = np.exp(-(gaussxgrid/2)**2-(gaussygrid/2)**2) #originally grids divided by 2
# Sine wave grating : orientation = 0, phase = 0, amplitude = 1, frequency = 10/(2*pi)
horizontal_sine= (np.sin(sinexgrid*freq)) * contrast_var
hgabor = horizontal_sine * gaussian
hgabor = ((hgabor+1)/2*255).astype('uint8')
hgabor = hgabor[..., None].repeat(3, -1).astype("uint8")
h_gab.append(hgabor)
x+=changex
j+=1
h_gab.append(hgabor)
h_gab=np.array(h_gab)
v_gab=h_gab.transpose(0,2,1,3) #Transpose the horizontal grating matrix to create vertical gratings
surface_maker=0
h_surf_list = []
v_surf_list = []
while surface_maker<=39:
h_surface = pygame.surfarray.make_surface(h_gab[surface_maker])
h_surf_list.append([h_surface])
v_surface = pygame.surfarray.make_surface(v_gab[surface_maker])
v_surf_list.append([v_surface])
surface_maker+=1
# MAKING THE NOISE VIDEO
print "Making noise video now"
noise_movie_frames=0
destroyed_gratings=[]
gaussianinputs= np.linspace(-np.pi, np.pi,480)
[gaussxgrid,gaussygrid] = np.meshgrid(gaussianinputs, gaussianinputs)
gaussian = np.exp(-(gaussxgrid/2)**2-(gaussygrid/2)**2) #originally grids divided by 2
while noise_movie_frames <=39:
randomisation = 0
pixels = np.linspace(np.pi,3*np.pi,480)
[sinexgrid, sineygrid] = np.meshgrid(pixels, pixels)
destroyedgabor= (np.sin(sinexgrid*15)) * contrast_var #*contrast_var originally and freq instead of 15
while randomisation <=479:
destroyedgabor [randomisation] [0:480] = np.random.permutation(destroyedgabor[randomisation] [0:480])
destroyedgabor [0:480] [randomisation] = np.random.permutation(destroyedgabor [0:480][randomisation])
randomisation+=1
destroyedgabor = destroyedgabor * gaussian
destroyedgabor = ((destroyedgabor+1)/2*255).astype('uint8')
destroyedgabor = destroyedgabor[..., None].repeat(3, -1).astype("uint8")
destroyed_gratings.append(destroyedgabor)
noise_movie_frames+=1
destroyed_gratings = np.array(destroyed_gratings)
print destroyed_gratings.shape
making_noise_frames=0
noise_frame_list=[]
while making_noise_frames <=39:
Noise=pygame.surfarray.make_surface(destroyed_gratings[making_noise_frames])
noise_frame_list.append([Noise])
making_noise_frames+=1
print "Finished making noise video"
#MAKING AUDITORY CUES NOW
pygame.mixer.pre_init(96000,-16,1,4096) #if jitter, change 256 to different value
pygame.init()
sR = 96000 #Sampling rate
cue_dur = 0.4 # Duration of auditory cue
max16bit = 32766
aud_cues = np.zeros((1,2))
aud_cues[0][0] = 20 * 10**2 #2000 Hz cue
aud_cues[0][1] = 5 * 10**2 #500 Hz cue
making_sounds=0
aud_cueH = []
aud_cueV = []
print "Making sounds now"
while making_sounds <=1:
def gensin(frequency=aud_cues[0][making_sounds], duration=cue_dur, sampRate=sR, edgeWin=0.01):
cycles = np.linspace(0,duration*2*np.pi,num=duration*sampRate)
wave = np.sin(cycles*frequency, dtype='float32')
#smooth sine wave at the edges
numSmoothSamps = int(edgeWin*sR)
wave[0:numSmoothSamps] = wave[0:numSmoothSamps] * np.cos(np.pi*np.linspace(0.5,1,num=numSmoothSamps))**2
wave[-numSmoothSamps:] = wave[-numSmoothSamps:] * np.cos(np.pi*np.linspace(1,0.5,num=numSmoothSamps))**2
wave = np.round(wave*max16bit)
return wave.astype('int16')
sndArray=gensin()
snd_Audio = pygame.sndarray.make_sound(sndArray)
if making_sounds==0:
aud_cueH = snd_Audio
elif making_sounds==1:
aud_cueV = snd_Audio #np.concatenate((snd_Arr,snd_Audio),axis=0)
making_sounds+=1
print "Sounds done"
#Initialising data lists for licks and tones
lickLst = [] #[trial number] [lick time relative to start of task]
#[lick time relative to stimulus onset] [lick location: R/L] [Correct/Incorrect]
rewLst = [] #[trial number] [relative to stimulus onset] [reward side]
sendT = time.time() #Not sure if these three should be just at the start of the trial counter or outside of it...
lickT = time.time()
prevL = time.time()
start = time.time()
counter = 0
while counter <=179:
orientation = Orientation_Array [counter]
location = Location_Array [counter]
if (time.time()-sendT > 5): #If 5 seconds have elapsed since the last send, push the buffered
#data to the server and reset the buffers.
lickLst,rewLst,sendT = data_sender(lickLst,rewLst,orientation,location,sendT)
gameDisplay.fill(BLACK)
pygame.draw.rect(gameDisplay,GRAY,grey_rect)
pygame.display.update()
for event in pygame.event.get():
if event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
elif event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
if Location_Array [counter] == 1:
if Orientation_Array [counter] == 3: #Right side, vertical
licknumL=0
licknumR=0
cue_start=time.time()
while time.time()<=cue_start+cue_period:
gameDisplay.fill(BLACK)
pygame.draw.rect(gameDisplay,GRAY,grey_rect)
pygame.display.update()
for event in pygame.event.get():
if event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
elif event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
if time.time()>=cue_start+0.6 and time.time()<=cue_start+1:
aud_cueV.play()
#Pulses should be sent here because this screen is meant to contain greyscreen
#while the other RPi should be getting a pulse to trigger grating presentation
sendpulse(15)
startmoment = time.time()
finishmoment = startmoment+stim_dur
making_noise_frames=0
x=0
while time.time() <= finishmoment:
gameDisplay.blit(noise_frame_list[making_noise_frames][0],((160,0))) #Originally second value was [0]
if time.time()>=start+(x*refresh_rate):
pygame.display.update()
making_noise_frames+=1
x+=1
if making_noise_frames ==40:
making_noise_frames=0
for event in pygame.event.get():
if event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
elif event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
if (GPIO.event_detected(lickR)): #Right lick - correct side; rewarded
if (time.time()-prevL)>minILI:
lickT = time.time()
licknumR = licknumR + 1
lickLst.append([counter,lickT-start,lickT-startmoment,'' +str('RL'), ''+str ('Correct')])
prevL = time.time()
print "Correct response detected"
#rewprocr = billiard.Process(target=deliverRew,args=(rewR,))
rewProcR.start()
#deliverRew(rewR)
rewT = time.time() #Time elapsed since grating onset and reward OR ASK YVES IF MORE USEFUL TO COLLECT TIMINGS RELATIVE TO THE ORIGINAL START OF THE EXPERIMENT RATHER THAN TRIAL
rewLst.append([counter, rewT-start, rewT-startmoment,'' +str('RR')])
print "Reward delivered"
else:
prevL = time.time() #ASK YVES WHY YOU'D WANT TO RESET THE TIMER IN CASE OF A PREMATURE LICK...
if (GPIO.event_detected(lickL)): #Incorrect side. Punishment by timeout before next trial?
if (time.time()-prevL)>minILI:
lickT = time.time()
licknumL = licknumL + 1 #Figure out where to initialise this variable. It's just a lick counter
lickLst.append([counter,lickT-start,lickT-startmoment,'' +str('LL'), ''+str ('Incorrect')])
prevL = time.time()
punish_pulse(26)
print "Incorrect response"
#punishment for incorrect spout - grey screen and delay of 5 secs before next trial onset
startmoment = time.time()
finishmoment = startmoment+punishment_delay
while time.time() <= finishmoment:
gameDisplay.fill(BLACK)
pygame.draw.rect(gameDisplay,GRAY,grey_rect) #when movie finishes, replace with blank grey screen for 2 seconds
pygame.display.update()
for event in pygame.event.get():
if event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
elif event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
elif Orientation_Array [counter] == 7: #Right side, vertical, INVALID prediction by cue
licknumL=0
licknumR=0
cue_start=time.time()
while time.time()<=cue_start+cue_period:
gameDisplay.fill(BLACK)
pygame.draw.rect(gameDisplay,GRAY,grey_rect)
pygame.display.update()
for event in pygame.event.get():
if event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
elif event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
if time.time()>=cue_start+0.6 and time.time()<=cue_start+1:
aud_cueH.play()
#Pulses should be sent here because this screen is meant to contain greyscreen
#while the other RPi should be getting a pulse to trigger grating presentation
sendpulse(15)
startmoment = time.time()
finishmoment = startmoment+stim_dur
making_noise_frames=0
x=0
while time.time() <= finishmoment:
gameDisplay.blit(noise_frame_list[making_noise_frames][0],((160,0))) #Originally second value was [0]
if time.time()>=start+(x*refresh_rate):
pygame.display.update()
making_noise_frames+=1
x+=1
if making_noise_frames ==40:
making_noise_frames=0
for event in pygame.event.get():
if event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
elif event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
if (GPIO.event_detected(lickR)): #I need the mice to withhold responding.
#ALL responses are punished with a break
if (time.time()-prevL)>minILI:
lickT = time.time()
licknumR = licknumR + 1
lickLst.append([counter,lickT-start,lickT-startmoment,'' +str('RL'), ''+str ('Response on no-go trial')])
prevL = time.time()
punish_pulse(26)
startmoment = time.time()
finishmoment = startmoment+punishment_delay
while time.time() <= finishmoment:
gameDisplay.fill(BLACK)
pygame.draw.rect(gameDisplay,GRAY,grey_rect) #when movie finishes, replace with blank grey screen for 2 seconds
pygame.display.update()
for event in pygame.event.get():
if event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
elif event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
if (GPIO.event_detected(lickL)): #Incorrect side. Punishment by timeout before next trial?
if (time.time()-prevL)>minILI:
lickT = time.time()
licknumL = licknumL + 1 #Figure out where to initialise this variable. It's just a lick counter
lickLst.append([counter,lickT-start,lickT-startmoment,'' +str('LL'), ''+str ('Response on no-go trial')])
punish_pulse(26)
startmoment = time.time()
finishmoment = startmoment+punishment_delay
while time.time() <= finishmoment:
gameDisplay.fill(BLACK)
pygame.draw.rect(gameDisplay,GRAY,grey_rect) #when movie finishes, replace with blank grey screen for 2 seconds
pygame.display.update()
for event in pygame.event.get():
if event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
elif event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
elif Orientation_Array [counter] == 4:
licknumL=0
licknumR=0
cue_start=time.time()
while time.time()<=cue_start+cue_period:
gameDisplay.fill(BLACK)
pygame.draw.rect(gameDisplay,GRAY,grey_rect)
pygame.display.update()
for event in pygame.event.get():
if event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
elif event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
if time.time()>=cue_start+0.6 and time.time()<=cue_start+1:
aud_cueH.play()
sendpulse(23)
startmoment = time.time()
finishmoment = startmoment+stim_dur
making_noise_frames=0
x=0
while time.time() <= finishmoment:
gameDisplay.blit(noise_frame_list[making_noise_frames][0],((160,0))) #Originally second value was [0]
if time.time()>=start+(x*refresh_rate):
pygame.display.update()
making_noise_frames+=1
x+=1
if making_noise_frames ==40:
making_noise_frames=0
for event in pygame.event.get():
if event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
elif event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
if (GPIO.event_detected(lickR)): #Right lick - correct side; rewarded
if (time.time()-prevL)>minILI:
lickT = time.time()
licknumR = licknumR + 1
lickLst.append([counter,lickT-start,lickT-startmoment,'' +str('RL'), ''+str ('Correct')])
prevL = time.time()
print "Correct response detected"
#rewprocr = billiard.Process(target=deliverRew,args=(rewR,))
rewProcR.start()
#deliverRew(rewR)
rewT = time.time()
rewLst.append([counter, rewT-start, rewT-startmoment,'' +str('RR')])
print "Reward delivered"
else:
prevL = time.time() #ASK YVES WHY YOU'D WANT TO RESET THE TIMER IN CASE OF A PREMATURE LICK...
if (GPIO.event_detected(lickL)): #Incorrect side. Punishment by timeout before next trial?
if (time.time()-prevL)>minILI:
lickT = time.time()
licknumL = licknumL + 1 #Figure out where to initialise this variable. It's just a lick counter
lickLst.append([counter,lickT-start,lickT-startmoment,'' +str('LL'), ''+str ('Incorrect')])
prevL = time.time()
punish_pulse(26)
print "Incorrect response"
#punishment for incorrect spout - grey screen and delay of 5 secs before next trial onset
startmoment = time.time()
finishmoment = startmoment+punishment_delay
while time.time() <= finishmoment:
gameDisplay.fill(BLACK)
pygame.draw.rect(gameDisplay,GRAY,grey_rect) #when movie finishes, replace with blank grey screen for 2 seconds
pygame.display.update()
for event in pygame.event.get():
if event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
elif event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
elif Orientation_Array [counter] == 6: #Horizontal grating, INVALID prediction by auditory cue
licknumL=0
licknumR=0
cue_start=time.time()
while time.time()<=cue_start+cue_period:
gameDisplay.fill(BLACK)
pygame.draw.rect(gameDisplay,GRAY,grey_rect)
pygame.display.update()
for event in pygame.event.get():
if event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
elif event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
if time.time()>=cue_start+0.6 and time.time()<=cue_start+1:
aud_cueV.play()
sendpulse(23)
startmoment = time.time()
finishmoment = startmoment+stim_dur
making_noise_frames=0
x=0
while time.time() <= finishmoment:
gameDisplay.blit(noise_frame_list[making_noise_frames][0],((160,0))) #Originally second value was [0]
if time.time()>=start+(x*refresh_rate):
pygame.display.update()
making_noise_frames+=1
x+=1
if making_noise_frames ==40:
making_noise_frames=0
for event in pygame.event.get():
if event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
elif event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
if (GPIO.event_detected(lickR)): #I need the mice to withhold responding.
#ALL responses are punished with a break
if (time.time()-prevL)>minILI:
lickT = time.time()
licknumR = licknumR + 1
lickLst.append([counter,lickT-start,lickT-startmoment,'' +str('RL'), ''+str ('Response on no-go trial')])
prevL = time.time()
punish_pulse(26)
startmoment = time.time()
finishmoment = startmoment+punishment_delay
while time.time() <= finishmoment:
gameDisplay.fill(BLACK)
pygame.draw.rect(gameDisplay,GRAY,grey_rect) #when movie finishes, replace with blank grey screen for 2 seconds
pygame.display.update()
for event in pygame.event.get():
if event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
elif event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
if (GPIO.event_detected(lickL)): #Incorrect side. Punishment by timeout before next trial?
if (time.time()-prevL)>minILI:
lickT = time.time()
licknumL = licknumL + 1 #Figure out where to initialise this variable. It's just a lick counter
lickLst.append([counter,lickT-start,lickT-startmoment,'' +str('LL'), ''+str ('Response on no-go trial')])
punish_pulse(26)
startmoment = time.time()
finishmoment = startmoment+punishment_delay
while time.time() <= finishmoment:
gameDisplay.fill(BLACK)
pygame.draw.rect(gameDisplay,GRAY,grey_rect) #when movie finishes, replace with blank grey screen for 2 seconds
pygame.display.update()
for event in pygame.event.get():
if event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
elif event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
startmoment = time.time()
finishmoment = startmoment+greyscreen_dur
while time.time() <= finishmoment:
gameDisplay.fill(BLACK)
pygame.draw.rect(gameDisplay,GRAY,grey_rect)
pygame.display.update()
for event in pygame.event.get():
if event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
elif event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
counter+=1
elif Location_Array [counter] == 2:
if Orientation_Array [counter] == 3: #Let's make this the left, vertical
licknumL=0
licknumR=0
cue_start=time.time()
while time.time()<=cue_start+cue_period:
gameDisplay.fill(BLACK)
pygame.draw.rect(gameDisplay,GRAY,grey_rect)
pygame.display.update()
for event in pygame.event.get():
if event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
elif event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
if time.time()>=cue_start+0.6 and time.time()<=cue_start+1:
aud_cueV.play()
grey_screenpulse(40)
startmoment = time.time()
finishmoment = startmoment+stim_dur
frame_num=0 #Index into the vertical-grating frame list (v_surf_list)
x=0 #Increases by one on each pass of the loop below; multiplied by the refresh period it gives the next screen-update time (needed for the moving sine grating)
while time.time() <= finishmoment:
for event in pygame.event.get():
if event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
elif event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
gameDisplay.blit(v_surf_list[frame_num][0],((160,0))) # Originally 4 and 10 for 300 x 300 pixel size matrix
if time.time()>= startmoment+(x*refresh_rate):
pygame.display.update()
frame_num+=1
x+=1
if frame_num ==40:
frame_num=0
if (GPIO.event_detected(lickL)): #Left lick - correct side; rewarded
if (time.time()-prevL)>minILI:
lickT = time.time()
licknumL = licknumL + 1
lickLst.append([counter,lickT-start,lickT-startmoment,'' +str('LL'), ''+str ('Correct')])
prevL = time.time()
print "Correct response detected"
#rewprocl = billiard.Process(target=deliverRew,args=(rewL,))
rewProcL.start()
#deliverRew(rewL)
rewT = time.time() #Time elapsed since grating onset and reward OR ASK YVES IF MORE USEFUL TO COLLECT TIMINGS RELATIVE TO THE ORIGINAL START OF THE EXPERIMENT RATHER THAN TRIAL
rewLst.append([counter, rewT-start, rewT-startmoment,'' +str('LR')])
print "Reward delivered"
else:
prevL = time.time() #ASK YVES WHY YOU'D WANT TO RESET THE TIMER IN CASE OF A PREMATURE LICK...
if (GPIO.event_detected(lickR)): #Incorrect side. Punishment by timeout before next trial?
if (time.time()-prevL)>minILI:
lickT = time.time()
licknumR = licknumR + 1 #Figure out where to initialise this variable. It's just a lick counter
lickLst.append([counter,lickT-start,lickT-startmoment,'' +str('RL'), ''+str ('Incorrect')])
prevL= time.time()
punish_pulse(26)
print "Incorrect response"
startmoment = time.time()
finishmoment = startmoment+punishment_delay
while time.time() <= finishmoment:
gameDisplay.fill(BLACK)
pygame.draw.rect(gameDisplay,GRAY,grey_rect) #when movie finishes, replace with blank grey screen for 2 seconds
pygame.display.update()
for event in pygame.event.get():
if event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
elif event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
elif Orientation_Array [counter] == 7: # Vertical grating, invalid prediction by auditory cue
licknumL=0
licknumR=0
cue_start=time.time()
while time.time()<=cue_start+cue_period:
gameDisplay.fill(BLACK)
pygame.draw.rect(gameDisplay,GRAY,grey_rect)
pygame.display.update()
for event in pygame.event.get():
if event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
elif event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
if time.time()>=cue_start+0.6 and time.time()<=cue_start+1:
aud_cueH.play()
grey_screenpulse(40)
startmoment = time.time()
finishmoment = startmoment+stim_dur
frame_num=0 #Index into the vertical-grating frame list (v_surf_list)
x=0 #Increases by one on each pass of the loop below; multiplied by the refresh period it gives the next screen-update time (needed for the moving sine grating)
while time.time() <= finishmoment:
for event in pygame.event.get():
if event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
elif event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
gameDisplay.blit(v_surf_list[frame_num][0],((160,0))) # Originally 4 and 10 for 300 x 300 pixel size matrix
if time.time()>= startmoment+(x*refresh_rate):
pygame.display.update()
frame_num+=1
x+=1
if frame_num ==40:
frame_num=0
if (GPIO.event_detected(lickR)): #I need the mice to withhold responding.
#ALL responses are punished with a break
if (time.time()-prevL)>minILI:
lickT = time.time()
licknumR = licknumR + 1
lickLst.append([counter,lickT-start,lickT-startmoment,'' +str('RL'), ''+str ('Response on no-go trial')])
prevL = time.time()
punish_pulse(26)
startmoment = time.time()
finishmoment = startmoment+punishment_delay
while time.time() <= finishmoment:
gameDisplay.fill(BLACK)
pygame.draw.rect(gameDisplay,GRAY,grey_rect) #when movie finishes, replace with blank grey screen for 2 seconds
pygame.display.update()
for event in pygame.event.get():
if event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
elif event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
if (GPIO.event_detected(lickL)): #Incorrect side. Punishment by timeout before next trial?
if (time.time()-prevL)>minILI:
lickT = time.time()
licknumL = licknumL + 1 #Figure out where to initialise this variable. It's just a lick counter
lickLst.append([counter,lickT-start,lickT-startmoment,'' +str('LL'), ''+str ('Response on no-go trial')])
punish_pulse(26)
startmoment = time.time()
finishmoment = startmoment+punishment_delay
while time.time() <= finishmoment:
gameDisplay.fill(BLACK)
pygame.draw.rect(gameDisplay,GRAY,grey_rect) #when movie finishes, replace with blank grey screen for 2 seconds
pygame.display.update()
for event in pygame.event.get():
if event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
elif event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
elif Orientation_Array [counter] == 4: #Horizontal grating, valid auditory cue
licknumL=0
licknumR=0
cue_start=time.time()
while time.time()<=cue_start+cue_period:
gameDisplay.fill(BLACK)
pygame.draw.rect(gameDisplay,GRAY,grey_rect)
pygame.display.update()
for event in pygame.event.get():
if event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
elif event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
if time.time()>=cue_start+0.6 and time.time()<=cue_start+1:
aud_cueH.play()
grey_screenpulse(40)
startmoment = time.time()
finishmoment = startmoment+stim_dur
frame_num=0 #Index into the horizontal-grating frame list (h_surf_list)
x=0 #Increases by one on each pass of the loop below; multiplied by the refresh period it gives the next screen-update time (needed for the moving sine grating)
while time.time() <= finishmoment:
for event in pygame.event.get():
if event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
elif event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
gameDisplay.blit(h_surf_list[frame_num][0],((160,0))) # Originally 4 and 10 for 300 x 300 pixel size matrix
if time.time()>= startmoment+(x*refresh_rate):
pygame.display.update()
frame_num+=1
x+=1
if frame_num ==40:
frame_num = 0
if (GPIO.event_detected(lickL)): #Left lick - correct side; rewarded
if (time.time()-prevL)>minILI:
lickT = time.time()
licknumL = licknumL + 1
lickLst.append([counter,lickT-start,lickT-startmoment,'' +str('LL'), ''+str ('Correct')])
prevL = time.time()
print "Correct response detected"
#rewprocl = billiard.Process(target=deliverRew,args=(rewL,))
rewProcL.start()
#deliverRew(rewL)
rewT = time.time() #Time elapsed since grating onset and reward OR ASK YVES IF MORE USEFUL TO COLLECT TIMINGS RELATIVE TO THE ORIGINAL START OF THE EXPERIMENT RATHER THAN TRIAL
rewLst.append([counter, rewT-start, rewT-startmoment,'' +str('LR')])
print "Reward delivered"
else:
prevL = time.time()
if (GPIO.event_detected(lickR)): #Incorrect side. Punishment by timeout before next trial?
if (time.time()-prevL)>minILI:
lickT = time.time()
licknumR = licknumR + 1 #Figure out where to initialise this variable. It's just a lick counter
lickLst.append([counter,lickT-start,lickT-startmoment,'' +str('RL'), ''+str ('Incorrect')])
prevL= time.time()
punish_pulse(26)
print "Incorrect response"
#punishment for incorrect spout - grey screen and delay of 5 secs before next trial onset
startmoment = time.time()
finishmoment = startmoment+punishment_delay
while time.time() <= finishmoment:
gameDisplay.fill(BLACK)
pygame.draw.rect(gameDisplay,GRAY,grey_rect) #when movie finishes, replace with blank grey screen for 2 seconds
pygame.display.update()
for event in pygame.event.get():
if event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
elif event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
elif Orientation_Array [counter] == 6: #Horizontal grating, invalid auditory cue
licknumL=0
licknumR=0
cue_start=time.time()
while time.time()<=cue_start+cue_period:
gameDisplay.fill(BLACK)
pygame.draw.rect(gameDisplay,GRAY,grey_rect)
pygame.display.update()
for event in pygame.event.get():
if event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
elif event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
if time.time()>=cue_start+0.6 and time.time()<=cue_start+1:
aud_cueV.play()
grey_screenpulse(40)
startmoment = time.time()
finishmoment = startmoment+stim_dur
frame_num=0 #Index into the horizontal-grating frame list (h_surf_list)
x=0 #Increases by one on each pass of the loop below; multiplied by the refresh period it gives the next screen-update time (needed for the moving sine grating)
while time.time() <= finishmoment:
for event in pygame.event.get():
if event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
elif event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
gameDisplay.blit(h_surf_list[frame_num][0],((160,0))) # Originally 4 and 10 for 300 x 300 pixel size matrix
if time.time()>= startmoment+(x*refresh_rate):
pygame.display.update()
frame_num+=1
x+=1
if frame_num ==40:
frame_num = 0
if (GPIO.event_detected(lickR)): #I need the mice to withhold responding.
#ALL responses are punished with a break
if (time.time()-prevL)>minILI:
lickT = time.time()
licknumR = licknumR + 1
lickLst.append([counter,lickT-start,lickT-startmoment,'' +str('RL'), ''+str ('Response on no-go trial')])
prevL = time.time()
punish_pulse(26)
startmoment = time.time()
finishmoment = startmoment+punishment_delay
while time.time() <= finishmoment:
gameDisplay.fill(BLACK)
pygame.draw.rect(gameDisplay,GRAY,grey_rect) #when movie finishes, replace with blank grey screen for 2 seconds
pygame.display.update()
for event in pygame.event.get():
if event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
elif event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
if (GPIO.event_detected(lickL)): #Incorrect side. Punishment by timeout before next trial?
if (time.time()-prevL)>minILI:
lickT = time.time()
licknumL = licknumL + 1 #Figure out where to initialise this variable. It's just a lick counter
lickLst.append([counter,lickT-start,lickT-startmoment,'' +str('LL'), ''+str ('Response on no-go trial')])
punish_pulse(26)
startmoment = time.time()
finishmoment = startmoment+punishment_delay
while time.time() <= finishmoment:
gameDisplay.fill(BLACK)
pygame.draw.rect(gameDisplay,GRAY,grey_rect) #when movie finishes, replace with blank grey screen for 2 seconds
pygame.display.update()
for event in pygame.event.get():
if event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
elif event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
startmoment = time.time()
finishmoment = startmoment+greyscreen_dur
while time.time() <= finishmoment:
gameDisplay.fill(BLACK)
pygame.draw.rect(gameDisplay,GRAY,grey_rect)
pygame.display.update()
for event in pygame.event.get():
if event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
elif event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
counter+=1
#lickLst=np.array(lickLst)
#lickLst=np.concatenate((lickLst[0],lickLst[1],lickLst[2],lickLst[3],lickLst[4]),axis=1)
#rewLst=np.array(rewLst)
#rewLst=np.concatenate((rewLst[0],rewLst[1],rewLst[2]),axis=1)
#print lickLst
#print rewLst
#print Task_Matrix
for event in pygame.event.get():
if event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
elif event.type == KEYUP:
if event.key == K_ESCAPE:
task = False
pygame.quit()
Clock.tick(FPS)
pygame.quit()
quit()
| 41.610256 | 593 | 0.463978 | 5,475 | 56,798 | 4.734977 | 0.100274 | 0.046906 | 0.034563 | 0.0395 | 0.759065 | 0.749383 | 0.736268 | 0.726778 | 0.72539 | 0.723229 | 0 | 0.021407 | 0.453062 | 56,798 | 1,364 | 594 | 41.640762 | 0.813102 | 0.154706 | 0 | 0.792842 | 0 | 0 | 0.019077 | 0 | 0.004338 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.0141 | null | null | 0.021692 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
dcef2b95b3071c9588f25a03940d628d6a383bf0 | 3,268 | py | Python | sensirion_i2c_scd/scd4x/response_types.py | Sensirion/python-i2c-scd | ede2739caa23082b446f28558c0e15925f8fcb4e | [
"BSD-3-Clause"
] | 2 | 2021-07-21T06:03:01.000Z | 2021-08-18T02:27:14.000Z | sensirion_i2c_scd/scd4x/response_types.py | Sensirion/python-i2c-scd | ede2739caa23082b446f28558c0e15925f8fcb4e | [
"BSD-3-Clause"
] | 2 | 2021-04-06T07:00:19.000Z | 2022-01-27T16:53:09.000Z | sensirion_i2c_scd/scd4x/response_types.py | Sensirion/python-i2c-scd | ede2739caa23082b446f28558c0e15925f8fcb4e | [
"BSD-3-Clause"
] | 1 | 2022-03-30T11:49:50.000Z | 2022-03-30T11:49:50.000Z | # -*- coding: utf-8 -*-
# (c) Copyright 2021 Sensirion AG, Switzerland
from __future__ import absolute_import, division, print_function
class Scd4xTemperature(object):
"""
Represents a measurement response for the temperature.
With the :py:attr:`ticks` you can access the raw data as received from the
device. For the converted values you can choose between
:py:attr:`degrees_celsius` and :py:attr:`degrees_fahrenheit`.
:param int ticks:
The read ticks as received from the device.
"""
def __init__(self, ticks):
"""
Creates an instance from the received raw data.
"""
#: The ticks (int) as received from the device.
self.ticks = ticks
#: The converted temperature in °C.
self.degrees_celsius = -45. + 175. * ticks / 65536.
#: The converted temperature in °F.
self.degrees_fahrenheit = -49. + 315. * ticks / 65536.
def __str__(self):
return '{:0.1f} °C'.format(self.degrees_celsius)
class Scd4xHumidity(object):
"""
Represents a measurement response for the humidity.
With the :py:attr:`ticks` you can access the raw data as received from the
device. For the converted value the :py:attr:`percent_rh` attribute is
available.
:param int ticks:
The read ticks as received from the device.
"""
def __init__(self, ticks):
"""
Creates an instance from the received raw data.
"""
#: The ticks (int) as received from the device.
self.ticks = ticks
#: The converted humidity in %RH.
self.percent_rh = 100. * ticks / 65536.
def __str__(self):
return '{:0.1f} %RH'.format(self.percent_rh)
class Scd4xCarbonDioxide(object):
"""
Represents a measurement response for the humidity.
With the :py:attr:`ticks` you can access the raw data as received from the
device. For the converted value the :py:attr:`percent_rh` attribute is
available.
:param int ticks:
The read ticks as received from the device.
"""
def __init__(self, ticks):
"""
Creates an instance from the received raw data.
"""
#: The ticks (int) as received from the device.
self.ticks = ticks
#: CO2 ppm.
self.co2 = ticks
def __str__(self):
return '{:d} ppm'.format(self.co2)
class Scd4xTemperatureOffset(object):
"""
Represents a temperature offset.
With the :py:attr:`ticks` you can access the raw data as received from the
device. For the converted values you can choose between
:py:attr:`degrees_celsius` and :py:attr:`degrees_fahrenheit`.
:param int ticks:
The read ticks as received from the device.
"""
def __init__(self, ticks):
"""
Creates an instance from the received raw data.
"""
#: The ticks (int) as received from the device.
self.ticks = ticks
#: The converted temperature offset in °C.
self.degrees_celsius = 175. * ticks / 65536.
#: The converted temperature offset in °F.
self.degrees_fahrenheit = 32. + (self.degrees_celsius * 9. / 5.)
def __str__(self):
return '{:0.1f} °C'.format(self.degrees_celsius)
| 28.417391 | 78 | 0.629743 | 427 | 3,268 | 4.709602 | 0.201405 | 0.055694 | 0.083541 | 0.101442 | 0.820985 | 0.820985 | 0.732471 | 0.711586 | 0.692193 | 0.692193 | 0 | 0.024421 | 0.273256 | 3,268 | 114 | 79 | 28.666667 | 0.819789 | 0.55049 | 0 | 0.518519 | 0 | 0 | 0.031837 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.296296 | false | 0 | 0.037037 | 0.148148 | 0.62963 | 0.037037 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 8 |
0d273a0d5fae246908cfd51cd32e39c87fc5b723 | 8,461 | py | Python | statsfig/normal.py | shinokada/ndfig | 214dee0f53f7feef43ebda64638bf0375125990e | [
"MIT"
] | 4 | 2020-08-17T14:14:41.000Z | 2021-06-05T17:30:40.000Z | statsfig/normal.py | shinokada/ndfig | 214dee0f53f7feef43ebda64638bf0375125990e | [
"MIT"
] | null | null | null | statsfig/normal.py | shinokada/ndfig | 214dee0f53f7feef43ebda64638bf0375125990e | [
"MIT"
] | null | null | null | from scipy.stats import norm
import numpy as np
import matplotlib.pyplot as plt
def normcdf(x_min=-4, x_max=4, mean=0, std=1, y_max=0.45, xlabel='x', ylabel='pdf(x)', legend_size=12,
lb=-10, ub=10, font_size=20, alpha=1, fill_color='skyblue', bg_color='white',
title='Normal Distribution ', fig_w=8, fig_l=8, grid=True, title_size=20, label_size=16,
tick_size=12):
"""
Normal Distribution
parameters
----------
x_min: The x-axis min value. The default value is -4.
x_max: The x-axis max value. The default value is 4.
mean: The mean value. The default value is 0.
std: The standard deviation value. The default value is 1.
y_max: The y-axis max value. The default value is 0.45.
xlabel: The x-axis label. The default value is 'x'.
ylabel: The y-axis label. The default value is 'pdf(x)'.
legend_size: The legend font size. The default value is 12.
lb: The lower bound value. The default value is -10.
ub: The upper bound value. The default value is 10.
font_size: The title font size. The default value is 20.
alpha: Alpha(transparency) value. The default value is 1.
fill_color: The filling color. The default value is 'skyblue'.
bg_color: The background color. If it is not white, it will show the probability. The default value is 'white'.
title: The figure title. The default value is 'Normal Distribution '.
fig_w: The Matplotlib `figsize` width. The default value is 8.
fig_l: The Matplotlib `figsize` length. The default value is 8.
grid: Use 'True' or 'False' to show the grid. The default value is 'True'.
title_size: The x and y-axis title size. The default value is 20.
label_size: The label font size. The default value is 16.
tick_size: The x and y-axis tick size. The default value is 12.
examples
--------
import statsfig as sf
sf.normcdf()
sf.normcdf(x_min=-4, x_max=10, mean=3, std=2, y_max=0.25,
xlabel='x', ylabel='pdf(x)', lb=-10, ub=2, font_size=20, alpha=0.5, fill_color='g',
title='P(X<2) where ', fig_w=10, fig_l=5)
sf.normcdf(x_min=-4, x_max=10, mean=3, std=2, y_max=0.25,
xlabel='x', ylabel='pdf(x)', lb=-10, ub=2, font_size=20, fill_color='#73f562', alpha=1,
bg_color='#f7636f')
sf.normcdf(mean=1, std=2, lb=0.5, ub=2, y_max=0.25, x_min=-6, x_max=10, bg_color='#fccda7')
sf.normcdf(mean=3, std=2, lb=4, ub=10, y_max=0.25, x_min=-4, x_max=10)
"""
fig, ax = plt.subplots(1, 1, figsize=(fig_w, fig_l))
# for distribution curve
x = np.arange(x_min, x_max, 0.1)
ax.plot(x, norm.pdf(x, loc=mean, scale=std), label=None)
# title
title = title + ' X~N({}, {}\u00b2)'.format(mean, std)
ax.set_title(title, fontsize=font_size)
ax.set(xlabel=xlabel, ylabel=ylabel)
# probability
prob = round(norm(mean, std).cdf(ub) - norm(mean, std).cdf(lb), 2)
# fill background
# if the background is not white, w or #fff set the label to 1- prob
prob_com = 1-prob
bg_prob = 'P(x)=%.2f' % prob_com
bg_label = None if bg_color == 'white' or bg_color == 'w' or bg_color == '#fff' else bg_prob
ax.fill_between(x, norm.pdf(x, loc=mean, scale=std),
alpha=alpha, color=bg_color, label=bg_label)
# for fill_between
px = np.arange(lb, ub, 0.01)
ax.set_ylim(0, y_max)
ax.set_xlim(x_min, x_max)
ax.fill_between(px, norm.pdf(px, loc=mean, scale=std),
alpha=alpha, color=fill_color, label='P(x)=%.2f' % prob)
ax.legend(fontsize=legend_size)
ax.set_title(title, fontsize=font_size)
ax.set(xlabel=xlabel, ylabel=ylabel)
plt.rc('axes', titlesize=title_size) # fontsize of the axes title
plt.rc('axes', labelsize=label_size) # fontsize of the x and y labels
plt.rc('xtick', labelsize=tick_size) # fontsize of the tick labels
plt.rc('ytick', labelsize=tick_size) # fontsize of the tick labels
ax.grid(grid)
plt.show()
def normpdf_std(val=[1, 2, 3, 4], x_min=-4, x_max=4, fig_w=8, fig_l=8, grid=True, xlabel='x', ylabel='pdf(x)',
title='Normal Distribution', legend_size=12, font_size=20, label_size=16,
tick_size=12, y_max=0.6, title_size=20):
"""
Normal Distribution with different standard deviations
parameters
----------
val: The standard deviation values to display. The default value is [1, 2, 3, 4].
x_min: The x-axis min value. The default value is -4.
x_max: The x-axis max value. The default value is 4.
y_max: The y-axis max value. The default value is 0.6.
xlabel: The x-axis label. The default value is 'x'.
ylabel: The y-axis label. The default value is 'pdf(x)'.
legend_size: The legend font size. The default value is 12.
font_size: The title font size. The default value is 20.
title: The figure title. The default value is 'Normal Distribution '.
fig_w: The Matplotlib `figsize` width. The default value is 8.
fig_l: The Matplotlib `figsize` length. The default value is 8.
grid: Use 'True' or 'False' to show the grid. The default value is 'True'.
title_size: The x and y-axis title size. The default value is 20.
label_size: Label font size. The default value is 16.
tick_size: The x and y-axis tick size. The default value is 12.
examples
--------
import statsfig as sf
sf.normpdf_std()
"""
fig, ax = plt.subplots(1, 1, figsize=(fig_w, fig_l))
x = np.linspace(x_min, x_max, 100)
for s in val:
ax.plot(x, norm.pdf(x, scale=s), label='std=%.1f' % s)
ax.set_ylim(0, y_max)
ax.set_xlim(x_min, x_max)
ax.legend(fontsize=legend_size)
ax.set_title(title, fontsize=font_size)
ax.set(xlabel=xlabel, ylabel=ylabel)
plt.rc('axes', titlesize=title_size) # fontsize of the axes title
plt.rc('axes', labelsize=label_size) # fontsize of the x and y labels
plt.rc('xtick', labelsize=tick_size) # fontsize of the tick labels
plt.rc('ytick', labelsize=tick_size) # fontsize of the tick labels
ax.grid(grid)
plt.show()
def normpdf_mean(val=[0, 1, 2, 3], x_min=-10, x_max=10, y_max=0.6, xlabel='x', ylabel='pdf(x)', legend_size=12,
font_size=20, title='Normal Distribution', fig_w=8, fig_l=8, grid=True,
title_size=20, label_size=16, tick_size=12):
"""
Normal Distribution with different means
parameters
----------
val: The Mean values to display. The default value is [0,1,2,3].
x_min: The x-axis min value. The default value is -10.
x_max: The x-axis max value. The default value is 10.
y_max: The y-axis max value. The default value is 0.6.
xlabel: The x-axis label. The default value is 'x'.
ylabel: The y-axis label. The default value is 'pdf(x)'.
legend_size: The legend font size. The default value is 12.
font_size: The title font size. The default value is 20.
title: The figure title. The default value is 'Normal Distribution '.
fig_w: The Matplotlib `figsize` width. The default value is 8.
fig_l: The Matplotlib `figsize` length. The default value is 8.
grid: Use 'True' or 'False' to show the grid. The default value is 'True'.
title_size: The x and y-axis title size. The default value is 20.
label_size: Label font size. The default value is 16.
tick_size: The x and y-axis tick size. The default value is 12.
examples
--------
import statsfig as sf
sf.normpdf_mean()
"""
fig, ax = plt.subplots(1, 1, figsize=(fig_w, fig_l))
x = np.linspace(x_min, x_max, 100)
for mean in val:
ax.plot(x, norm.pdf(x, loc=mean), label='mean=%.1f' % mean)
ax.set_ylim(0, y_max)
ax.legend(fontsize=legend_size)
ax.set_title(title, fontsize=font_size)
ax.set(xlabel=xlabel, ylabel=ylabel)
    # Size the current axes directly; plt.rc only affects figures created after
    # the call, so rc settings at this point would not change this plot.
    ax.title.set_size(title_size)                      # fontsize of the axes title (overrides font_size above)
    ax.xaxis.label.set_size(label_size)                # fontsize of the x and y labels
    ax.yaxis.label.set_size(label_size)
    ax.tick_params(axis='both', labelsize=tick_size)   # fontsize of the tick labels
ax.grid(grid)
plt.show()
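# Quick usage sketch (illustrative only): both helpers above accept non-default
# arguments. This assumes the module-level imports the functions themselves rely
# on (matplotlib.pyplot as plt, numpy as np, scipy.stats.norm) are in place.
if __name__ == '__main__':
    normpdf_std(val=[0.5, 1, 2], y_max=0.9, title='Normal pdf, varying std')
    normpdf_mean(val=[-2, 0, 2], x_min=-6, x_max=6, title='Normal pdf, varying mean')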
| 38.112613 | 115 | 0.645787 | 1,449 | 8,461 | 3.663216 | 0.100069 | 0.096081 | 0.144122 | 0.163338 | 0.800867 | 0.797476 | 0.764506 | 0.741899 | 0.70893 | 0.694989 | 0 | 0.037337 | 0.227633 | 8,461 | 221 | 116 | 38.285068 | 0.774904 | 0.561518 | 0 | 0.573529 | 0 | 0 | 0.062651 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.044118 | false | 0 | 0.044118 | 0 | 0.088235 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
b492abf92961ca20618726f8b43a514417a13a19 | 880 | py | Python | colour_hdri/exposure/__init__.py | colour-science/colour-hdri | 3a97c4ad8bc328e2fffabf84ac8b56d795dbeb82 | [
"BSD-3-Clause"
] | 92 | 2015-09-19T22:11:15.000Z | 2022-03-13T06:37:53.000Z | colour_hdri/exposure/__init__.py | colour-science/colour-hdri | 3a97c4ad8bc328e2fffabf84ac8b56d795dbeb82 | [
"BSD-3-Clause"
] | 24 | 2017-05-25T08:55:10.000Z | 2022-03-30T18:26:43.000Z | colour_hdri/exposure/__init__.py | colour-science/colour-hdri | 3a97c4ad8bc328e2fffabf84ac8b56d795dbeb82 | [
"BSD-3-Clause"
] | 9 | 2016-01-18T17:29:51.000Z | 2020-11-12T12:54:18.000Z | # -*- coding: utf-8 -*-
from .common import (average_luminance, average_illuminance,
luminance_to_exposure_value,
illuminance_to_exposure_value, adjust_exposure)
from .dsc import (focal_plane_exposure, arithmetic_mean_focal_plane_exposure,
saturation_based_speed_focal_plane_exposure,
exposure_index_values, exposure_value_100,
photometric_exposure_scale_factor_Lagarde2014)
__all__ = [
'average_luminance',
'average_illuminance',
'luminance_to_exposure_value',
'illuminance_to_exposure_value',
'adjust_exposure',
]
__all__ += [
'focal_plane_exposure',
'arithmetic_mean_focal_plane_exposure',
'saturation_based_speed_focal_plane_exposure',
'exposure_index_values',
'exposure_value_100',
'photometric_exposure_scale_factor_Lagarde2014',
]
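# Illustrative example (not part of the package): compute the average scene
# luminance implied by typical exposure settings. The positional argument order
# (f-number, exposure time, ISO) is an assumption about average_luminance's
# signature -- verify against the docstring in `.common` before relying on it.
if __name__ == '__main__':
    print(average_luminance(8, 1.0 / 125, 100))  # f/8, 1/125 s, ISO 100 (assumed order)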
| 33.846154 | 77 | 0.719318 | 89 | 880 | 6.393258 | 0.325843 | 0.137083 | 0.189807 | 0.119508 | 0.920914 | 0.920914 | 0.920914 | 0.920914 | 0.920914 | 0.920914 | 0 | 0.021521 | 0.207955 | 880 | 25 | 78 | 35.2 | 0.794835 | 0.023864 | 0 | 0 | 0 | 0 | 0.33839 | 0.234539 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.090909 | 0 | 0.090909 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
b4b2de276ddde4c4acdd760783bba56bcb39093c | 9,767 | py | Python | test/cnnl/op_test/test_abs.py | Cambricon/catch | 2625da389f25a67066d20fb6b0c38250ef98f8ab | [
"BSD-2-Clause"
] | 20 | 2022-03-01T11:40:51.000Z | 2022-03-30T08:17:47.000Z | test/cnnl/op_test/test_abs.py | Cambricon/catch | 2625da389f25a67066d20fb6b0c38250ef98f8ab | [
"BSD-2-Clause"
] | null | null | null | test/cnnl/op_test/test_abs.py | Cambricon/catch | 2625da389f25a67066d20fb6b0c38250ef98f8ab | [
"BSD-2-Clause"
] | null | null | null | from __future__ import print_function
import sys
import logging
import os
os.environ['ENABLE_CNNL_TRYCATCH'] = 'OFF' # pylint: disable=C0413
import copy
import unittest
import torch
import torch_mlu.core.mlu_model as ct
cur_dir = os.path.dirname(os.path.abspath(__file__))
sys.path.append(cur_dir + "/../../")
from common_utils import testinfo, TestCase # pylint: disable=C0413,C0411
logging.basicConfig(level=logging.DEBUG)
class TestAbsOp(TestCase):
# @unittest.skip("not test")
@testinfo()
def test_abs_contiguous(self):
shape_list = [(512, 1024, 2, 2, 4), (10, 3, 32, 32), (2, 3, 4),
(254, 254, 112, 1, 1, 3), (1000), ()]
for shape in shape_list:
x = torch.randn(shape, dtype=torch.float)
out_cpu = torch.abs(x)
out_mlu = torch.abs(x.to('mlu'))
self.assertTensorsEqual(out_cpu, out_mlu.cpu(), 0.0, use_MSE=True)
# @unittest.skip("not test")
@testinfo()
def test_abs_channel_last(self):
shape_list = [(512, 1024, 2, 2, 4), (10, 3, 32, 32), (2, 3, 4),
(254, 254, 112, 1, 1, 3), (1000), ()]
for shape in shape_list:
x = torch.randn(shape, dtype=torch.float)
x = self.convert_to_channel_last(x)
out_cpu = torch.abs(x)
out_mlu = torch.abs(x.to('mlu'))
self.assertTensorsEqual(out_cpu, out_mlu.cpu(), 0.0, use_MSE=True)
# @unittest.skip("not test")
@testinfo()
def test_abs_not_dense(self):
shape_list = [(512, 1024, 2, 2, 8), (10, 3, 32, 64), (2, 3, 8),
(254, 254, 112, 1, 1, 6)]
for shape in shape_list:
x = torch.randn(shape, dtype=torch.float)
x_mlu = x.to(ct.mlu_device())
if len(shape) == 4:
x = x[:, :, :, :int(shape[-1] / 2)]
x_mlu = x_mlu[:, :, :, :int(shape[-1] / 2)]
elif len(shape) == 3:
x = x[:, :, :int(shape[-1] / 2)]
x_mlu = x_mlu[:, :, :int(shape[-1] / 2)]
out_cpu = torch.abs(x)
out_mlu = torch.abs(x.to('mlu'))
self.assertTensorsEqual(out_cpu, out_mlu.cpu(), 0.0, use_MSE=True)
# @unittest.skip("not test")
@testinfo()
def test_absout_contiguous(self):
shape_list = [(512, 1024, 2, 2, 4), (10, 3, 32, 32), (2, 3, 4),
(254, 254, 112, 1, 1, 3), (1000), ()]
for shape in shape_list:
x = torch.randn(shape, dtype=torch.float)
y = torch.randn(shape, dtype=torch.float)
y_mlu = copy.deepcopy(y).to(ct.mlu_device())
out_cpu = torch.abs(x, out=y)
ori_ptr = y_mlu.data_ptr()
out_mlu = torch.abs(self.to_mlu(x), out=y_mlu)
out_ptr = y_mlu.data_ptr()
self.assertEqual(ori_ptr, out_ptr)
self.assertTensorsEqual(out_cpu, out_mlu.cpu(), 0.0, use_MSE=True)
# @unittest.skip("not test")
@testinfo()
def test_absout_channel_last(self):
shape_list = [(512, 1024, 2, 2, 4), (10, 3, 32, 32), (2, 3, 4),
(254, 254, 112, 1, 1, 3), (1000), ()]
for shape in shape_list:
x = torch.randn(shape, dtype=torch.float)
y = torch.randn(shape, dtype=torch.float)
x = self.convert_to_channel_last(x)
y_mlu = copy.deepcopy(y).to(ct.mlu_device())
out_cpu = torch.abs(x, out=y)
ori_ptr = y_mlu.data_ptr()
out_mlu = torch.abs(self.to_mlu(x), out=y_mlu)
out_ptr = y_mlu.data_ptr()
self.assertEqual(ori_ptr, out_ptr)
self.assertTensorsEqual(out_cpu, out_mlu.cpu(), 0.0, use_MSE=True)
# @unittest.skip("not test")
@testinfo()
def test_absout_not_dense(self):
shape_list = [(512, 1024, 2, 2, 4), (10, 3, 32, 32), (2, 3, 4),
(254, 254, 112, 1, 1, 3)]
for shape in shape_list:
x = torch.randn(shape, dtype=torch.float)
x_mlu = x.to(ct.mlu_device())
if len(shape) == 4:
x = x[:, :, :, :int(shape[-1] / 2)]
x_mlu = x_mlu[:, :, :, :int(shape[-1] / 2)]
elif len(shape) == 3:
x = x[:, :, :int(shape[-1] / 2)]
x_mlu = x_mlu[:, :, :int(shape[-1] / 2)]
y = torch.randn(shape, dtype=torch.float)
x = self.convert_to_channel_last(x)
y_mlu = copy.deepcopy(y).to(ct.mlu_device())
out_cpu = torch.abs(x, out=y)
ori_ptr = y_mlu.data_ptr()
out_mlu = torch.abs(self.to_mlu(x), out=y_mlu)
out_ptr = y_mlu.data_ptr()
self.assertEqual(ori_ptr, out_ptr)
self.assertTensorsEqual(out_cpu, out_mlu.cpu(), 0.0, use_MSE=True)
# @unittest.skip("not test")
@testinfo()
def test_absout_shape_contiguous(self):
x = torch.randn(10000, dtype=torch.float)
y = torch.randn(1000, dtype=torch.float)
y_mlu = copy.deepcopy(y).to(ct.mlu_device())
out_cpu = torch.abs(x, out=y)
ori_ptr = y_mlu.data_ptr()
out_mlu = torch.abs(self.to_mlu(x), out=y_mlu)
out_ptr = y_mlu.data_ptr()
assert ori_ptr != out_ptr
self.assertTensorsEqual(out_cpu, out_mlu.cpu(), 0.0, use_MSE=True)
x = torch.randn(1000, dtype=torch.float)
y = torch.randn(10000, dtype=torch.float)
y_mlu = copy.deepcopy(y).to(ct.mlu_device())
out_cpu = torch.abs(x, out=y)
ori_ptr = y_mlu.data_ptr()
out_mlu = torch.abs(self.to_mlu(x), out=y_mlu)
out_ptr = y_mlu.data_ptr()
self.assertEqual(ori_ptr, out_ptr)
self.assertTensorsEqual(out_cpu, out_mlu.cpu(), 0.0, use_MSE=True)
# @unittest.skip("not test")
@testinfo()
def test_abs_t_contiguous(self):
shape_list = [(512, 1024, 2, 2, 4), (10, 3, 32, 32), (2, 3, 4),
(254, 254, 112, 1, 1, 3)]
for shape in shape_list:
x = torch.randn(shape, dtype=torch.float)
out_cpu = x.abs()
out_mlu = self.to_mlu(x).abs()
self.assertTensorsEqual(out_cpu, out_mlu.cpu(), 0.0, use_MSE=True)
# @unittest.skip("not test")
@testinfo()
def test_abs_t_channel_last(self):
shape_list = [(512, 1024, 2, 2, 4), (10, 3, 32, 32), (2, 3, 4),
(254, 254, 112, 1, 1, 3)]
for shape in shape_list:
x = torch.randn(shape, dtype=torch.float)
x = self.convert_to_channel_last(x)
out_cpu = x.abs()
out_mlu = self.to_mlu(x).abs()
self.assertTensorsEqual(out_cpu, out_mlu.cpu(), 0.0, use_MSE=True)
# @unittest.skip("not test")
@testinfo()
def test_abs_t_not_dense(self):
shape_list = [(512, 1024, 2, 2, 4), (10, 3, 32, 32), (2, 3, 4),
(254, 254, 112, 1, 1, 3)]
for shape in shape_list:
x = torch.randn(shape, dtype=torch.float)
x_mlu = x.to(ct.mlu_device())
if len(shape) == 4:
x = x[:, :, :, :int(shape[-1] / 2)]
x_mlu = x_mlu[:, :, :, :int(shape[-1] / 2)]
elif len(shape) == 3:
x = x[:, :, :int(shape[-1] / 2)]
x_mlu = x_mlu[:, :, :int(shape[-1] / 2)]
out_cpu = x.abs()
out_mlu = self.to_mlu(x).abs()
self.assertTensorsEqual(out_cpu, out_mlu.cpu(), 0.0, use_MSE=True)
# @unittest.skip("not test")
@testinfo()
def test_abs_inplace_contiguous(self):
shape_list = [(512, 1024, 2, 2, 4), (10, 3, 32, 32), (2, 3, 4),
(254, 254, 112, 1, 1, 3)]
for shape in shape_list:
x = torch.randn(shape, dtype=torch.float)
x_mlu = copy.deepcopy(x).to(ct.mlu_device())
x.abs_()
x_mlu.abs_()
self.assertTensorsEqual(x, x_mlu.cpu(), 0.0, use_MSE=True)
# @unittest.skip("not test")
@testinfo()
def test_abs_inplace_channel_last(self):
shape_list = [(512, 1024, 2, 2, 4), (10, 3, 32, 32), (2, 3, 4),
(254, 254, 112, 1, 1, 3)]
for shape in shape_list:
x = torch.randn(shape, dtype=torch.float)
x = self.convert_to_channel_last(x)
x_mlu = copy.deepcopy(x).to(ct.mlu_device())
x.abs_()
x_mlu.abs_()
self.assertTensorsEqual(x, x_mlu.cpu(), 0.0, use_MSE=True)
# @unittest.skip("not test")
@testinfo()
def test_abs_inplace_not_dense(self):
shape_list = [(512, 1024, 2, 2, 4), (10, 3, 32, 32), (2, 3, 4),
(254, 254, 112, 1, 1, 3)]
for shape in shape_list:
x = torch.randn(shape, dtype=torch.float)
x_mlu = copy.deepcopy(x).to(ct.mlu_device())
if len(shape) == 4:
x = x[:, :, :, :int(shape[-1] / 2)]
x_mlu = x_mlu[:, :, :, :int(shape[-1] / 2)]
elif len(shape) == 3:
x = x[:, :, :int(shape[-1] / 2)]
x_mlu = x_mlu[:, :, :int(shape[-1] / 2)]
x_mlu = copy.deepcopy(x).to(ct.mlu_device())
x.abs_()
x_mlu.abs_()
self.assertTensorsEqual(x, x_mlu.cpu(), 0.0, use_MSE=True)
    # @unittest.skip("not test")
@testinfo()
def test_abs_exception(self):
a = torch.randn(3).int().to('mlu')
ref_msg = "Expected tensor for argument #1 'input' to have one of the following"
ref_msg = ref_msg + " scalar types: Float, Half; but got MLUIntType instead"
ref_msg = ref_msg + r" \(while checking arguments for abs\)"
with self.assertRaisesRegex(RuntimeError, ref_msg):
torch.abs(a)
if __name__ == '__main__':
unittest.main()
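# Usage note (added for illustration): with unittest.main() above, a single case
# can be selected from the command line, e.g.
#   python test_abs.py TestAbsOp.test_abs_contiguous
# This assumes an MLU device and the torch_mlu extension are available, as the
# imports at the top of the file require.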
| 41.21097 | 88 | 0.529129 | 1,424 | 9,767 | 3.436096 | 0.089185 | 0.023707 | 0.058246 | 0.0327 | 0.867975 | 0.865727 | 0.865727 | 0.847333 | 0.839771 | 0.83364 | 0 | 0.074173 | 0.309819 | 9,767 | 236 | 89 | 41.385593 | 0.651684 | 0.043616 | 0 | 0.788177 | 0 | 0 | 0.022415 | 0 | 0 | 0 | 0 | 0 | 0.098522 | 1 | 0.068966 | false | 0 | 0.044335 | 0 | 0.118227 | 0.004926 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
b4c0b7a0fd8d1f2cba3f61d5283dbde1fbedbcba | 18,039 | py | Python | opentelekom/tests/unit/cce/v3/test_cluster_node.py | tsdicloud/python-opentelekom-sdk | 809f3796dba48ad0535990caf7519bb9afa71d2d | [
"Apache-2.0"
] | null | null | null | opentelekom/tests/unit/cce/v3/test_cluster_node.py | tsdicloud/python-opentelekom-sdk | 809f3796dba48ad0535990caf7519bb9afa71d2d | [
"Apache-2.0"
] | null | null | null | opentelekom/tests/unit/cce/v3/test_cluster_node.py | tsdicloud/python-opentelekom-sdk | 809f3796dba48ad0535990caf7519bb9afa71d2d | [
"Apache-2.0"
] | null | null | null | # Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import six
import requests
import unittest
from unittest import mock
from openstack import exceptions
from opentelekom.cce import cce_service
from opentelekom import otc_proxy
from opentelekom.cce.v3 import cluster as _cluster
from opentelekom.cce.v3 import cluster_node as _cluster_node
from opentelekom.tests.unit.otc_mockservice import OtcMockService, OtcMockResponse
from opentelekom.tests.functional import base
class TestClusterNode(base.BaseFunctionalTest):
''' A test to debug the filters used in the ansible module for cce nodes'''
def setUp(self):
super().setUp()
self.prefix = "rbe-sdkunit-filter"
self.cluster_id="0aa55501-a3e8-11e9-9e49-0255ac101611"
self.user_cloud.add_service( cce_service.CceService("ccev2.0", aliases=["cce2"]) )
self.node_ids = ["65a87e5d-a3e9-11e9-92b3-0255ac101711",
"65a9727f-a3e9-11e9-92b3-0255ac101711",
"65a73294-a3e9-11e9-92b3-0255ac101711",
"65a87e6d-a3e9-11e9-92b3-0255ac101711"]
self.nodes = [ _cluster_node.ClusterNode.new(id="65a87e5d-a3e9-11e9-92b3-0255ac101711"),
_cluster_node.ClusterNode.new(id="65a9727f-a3e9-11e9-92b3-0255ac101711"),
_cluster_node.ClusterNode.new(id="65a73294-a3e9-11e9-92b3-0255ac101711"),
_cluster_node.ClusterNode.new(id="65a87e6d-a3e9-11e9-92b3-0255ac101711")]
class MockNodesActiveList(OtcMockService):
responses = [
OtcMockResponse(method="GET",
url_match="cce",
path="/api/v3/projects/0391e4486e864c26be5654c522f440f2/clusters/0aa55501-a3e8-11e9-9e49-0255ac101611/nodes",
status_code=200,
max_calls=1,
json= {"kind":"List","apiVersion":"v3","items":[
{"kind":"Node","apiVersion":"v3","metadata":{"name":"rbe-sdkunit-filter-node-t4ywk","uid":"65a87e5d-a3e9-11e9-92b3-0255ac101711","creationTimestamp":"2019-07-11 14:37:23.976068 +0000 UTC","updateTimestamp":"2019-07-11 14:41:20.812487 +0000 UTC","annotations":{"kubernetes.io/node-pool.id":"eu-de-01#s2.large.1#EulerOS 2.2"}},"spec":{"flavor":"s2.large.1","az":"eu-de-01","os":"EulerOS 2.2","login":{"sshKey":"dummy-key","userPassword":{}},"rootVolume":{"volumetype":"SATA","size":100},"dataVolumes":[{"volumetype":"SATA","size":150}],"publicIP":{"eip":{"bandwidth":{}}},"nodeNicSpec":{"primaryNic":{}},"billingMode":0},"status":{"phase":"Creating","serverId":"fc65016a-f558-4095-8258-2dcc8e7a2f7a","privateIP":"10.248.2.138"}},
{"kind":"Node","apiVersion":"v3","metadata":{"name":"rbe-sdkunit-filter-node-n8u63","uid":"65a9727f-a3e9-11e9-92b3-0255ac101711","creationTimestamp":"2019-07-11 14:37:23.982322 +0000 UTC","updateTimestamp":"2019-07-11 14:41:20.641575 +0000 UTC","annotations":{"kubernetes.io/node-pool.id":"eu-de-01#s2.large.1#EulerOS 2.2"}},"spec":{"flavor":"s2.large.1","az":"eu-de-01","os":"EulerOS 2.2","login":{"sshKey":"dummy-key","userPassword":{}},"rootVolume":{"volumetype":"SATA","size":100},"dataVolumes":[{"volumetype":"SATA","size":150}],"publicIP":{"eip":{"bandwidth":{}}},"nodeNicSpec":{"primaryNic":{}},"billingMode":0},"status":{"phase":"Creating","serverId":"f8e3b401-d191-43d3-b828-ee67f060aee7","privateIP":"10.248.6.110"}},
{"kind":"Node","apiVersion":"v3","metadata":{"name":"rbe-sdkunit-filter-node-lnmtx","uid":"65a73294-a3e9-11e9-92b3-0255ac101711","creationTimestamp":"2019-07-11 14:37:23.96758 +0000 UTC","updateTimestamp":"2019-07-11 14:41:20.815918 +0000 UTC","annotations":{"kubernetes.io/node-pool.id":"eu-de-01#s2.large.1#EulerOS 2.2"}},"spec":{"flavor":"s2.large.1","az":"eu-de-01","os":"EulerOS 2.2","login":{"sshKey":"dummy-key","userPassword":{}},"rootVolume":{"volumetype":"SATA","size":100},"dataVolumes":[{"volumetype":"SSD","size":150}],"publicIP":{"eip":{"bandwidth":{}}},"nodeNicSpec":{"primaryNic":{}},"billingMode":0},"status":{"phase":"Active","serverId":"df6e4fa8-75b0-4a36-b182-7bc6bee5b0c6","privateIP":"10.248.7.196"}},
{"kind":"Node","apiVersion":"v3","metadata":{"name":"rbe-sdkunit-filter-node-t4ywk","uid":"65a87e6d-a3e9-11e9-92b3-0255ac101711","creationTimestamp":"2019-07-11 14:37:23.976068 +0000 UTC","updateTimestamp":"2019-07-11 14:41:20.812487 +0000 UTC","annotations":{"kubernetes.io/node-pool.id":"eu-de-01#s2.large.1#EulerOS 2.2"}},"spec":{"flavor":"s2.large.1","az":"eu-de-01","os":"EulerOS 2.2","login":{"sshKey":"dummy-key","userPassword":{}},"rootVolume":{"volumetype":"SATA","size":150},"dataVolumes":[{"volumetype":"SATA","size":150}],"publicIP":{"eip":{"bandwidth":{}}},"nodeNicSpec":{"primaryNic":{}},"billingMode":0},"status":{"phase":"Creating","serverId":"fc65016a-f558-4095-8258-2dcc8e7a2f7b","privateIP":"10.248.2.139"}},]},
),
OtcMockResponse(method="GET",
url_match="cce",
path="/api/v3/projects/0391e4486e864c26be5654c522f440f2/clusters/0aa55501-a3e8-11e9-9e49-0255ac101611/nodes",
status_code=200,
max_calls=1,
json= {"kind":"List","apiVersion":"v3","items":[
{"kind":"Node","apiVersion":"v3","metadata":{"name":"rbe-sdkunit-filter-node-t4ywk","uid":"65a87e5d-a3e9-11e9-92b3-0255ac101711","creationTimestamp":"2019-07-11 14:37:23.976068 +0000 UTC","updateTimestamp":"2019-07-11 14:41:20.812487 +0000 UTC","annotations":{"kubernetes.io/node-pool.id":"eu-de-01#s2.large.1#EulerOS 2.2"}},"spec":{"flavor":"s2.large.1","az":"eu-de-01","os":"EulerOS 2.2","login":{"sshKey":"dummy-key","userPassword":{}},"rootVolume":{"volumetype":"SATA","size":100},"dataVolumes":[{"volumetype":"SATA","size":150}],"publicIP":{"eip":{"bandwidth":{}}},"nodeNicSpec":{"primaryNic":{}},"billingMode":0},"status":{"phase":"Active","serverId":"fc65016a-f558-4095-8258-2dcc8e7a2f7a","privateIP":"10.248.2.138"}},
{"kind":"Node","apiVersion":"v3","metadata":{"name":"rbe-sdkunit-filter-node-n8u63","uid":"65a9727f-a3e9-11e9-92b3-0255ac101711","creationTimestamp":"2019-07-11 14:37:23.982322 +0000 UTC","updateTimestamp":"2019-07-11 14:41:20.641575 +0000 UTC","annotations":{"kubernetes.io/node-pool.id":"eu-de-01#s2.large.1#EulerOS 2.2"}},"spec":{"flavor":"s2.large.1","az":"eu-de-01","os":"EulerOS 2.2","login":{"sshKey":"dummy-key","userPassword":{}},"rootVolume":{"volumetype":"SATA","size":100},"dataVolumes":[{"volumetype":"SATA","size":150}],"publicIP":{"eip":{"bandwidth":{}}},"nodeNicSpec":{"primaryNic":{}},"billingMode":0},"status":{"phase":"Active","serverId":"f8e3b401-d191-43d3-b828-ee67f060aee7","privateIP":"10.248.6.110"}},
{"kind":"Node","apiVersion":"v3","metadata":{"name":"rbe-sdkunit-filter-node-lnmtx","uid":"65a73294-a3e9-11e9-92b3-0255ac101711","creationTimestamp":"2019-07-11 14:37:23.96758 +0000 UTC","updateTimestamp":"2019-07-11 14:41:20.815918 +0000 UTC","annotations":{"kubernetes.io/node-pool.id":"eu-de-01#s2.large.1#EulerOS 2.2"}},"spec":{"flavor":"s2.large.1","az":"eu-de-01","os":"EulerOS 2.2","login":{"sshKey":"dummy-key","userPassword":{}},"rootVolume":{"volumetype":"SATA","size":100},"dataVolumes":[{"volumetype":"SSD","size":150}],"publicIP":{"eip":{"bandwidth":{}}},"nodeNicSpec":{"primaryNic":{}},"billingMode":0},"status":{"phase":"Active","serverId":"df6e4fa8-75b0-4a36-b182-7bc6bee5b0c6","privateIP":"10.248.7.196"}},
{"kind":"Node","apiVersion":"v3","metadata":{"name":"rbe-sdkunit-filter-node-t4ywk","uid":"65a87e6d-a3e9-11e9-92b3-0255ac101711","creationTimestamp":"2019-07-11 14:37:23.976068 +0000 UTC","updateTimestamp":"2019-07-11 14:41:20.812487 +0000 UTC","annotations":{"kubernetes.io/node-pool.id":"eu-de-01#s2.large.1#EulerOS 2.2"}},"spec":{"flavor":"s2.large.1","az":"eu-de-01","os":"EulerOS 2.2","login":{"sshKey":"dummy-key","userPassword":{}},"rootVolume":{"volumetype":"SATA","size":150},"dataVolumes":[{"volumetype":"SATA","size":150}],"publicIP":{"eip":{"bandwidth":{}}},"nodeNicSpec":{"primaryNic":{}},"billingMode":0},"status":{"phase":"Active","serverId":"fc65016a-f558-4095-8258-2dcc8e7a2f7b","privateIP":"10.248.2.139"}},]},
)
]
@mock.patch.object(requests.Session, "request", side_effect=MockNodesActiveList().request)
def test_wait_status_ids(self, mock):
nodes=self.user_cloud.cce2.wait_for_status_nodes(self.cluster_id, self.node_ids, interval=1, wait=1000)
self.assertEqual(len(nodes), 4)
self.assertEqual(nodes[0].id, "65a87e5d-a3e9-11e9-92b3-0255ac101711")
self.assertEqual(nodes[1].id, "65a9727f-a3e9-11e9-92b3-0255ac101711")
self.assertEqual(nodes[2].id, "65a73294-a3e9-11e9-92b3-0255ac101711")
self.assertEqual(nodes[3].id, "65a87e6d-a3e9-11e9-92b3-0255ac101711")
self.assertEqual(nodes[0].status, "Active")
self.assertEqual(nodes[1].status, "Active")
self.assertEqual(nodes[2].status, "Active")
self.assertEqual(nodes[3].status, "Active")
@mock.patch.object(requests.Session, "request", side_effect=MockNodesActiveList().request)
def test_wait_status_nodes(self, mock):
nodes=self.user_cloud.cce2.wait_for_status_nodes(self.cluster_id, self.nodes, interval=1, wait=1000)
self.assertEqual(len(nodes), 4)
self.assertEqual(nodes[0].id, "65a87e5d-a3e9-11e9-92b3-0255ac101711")
self.assertEqual(nodes[1].id, "65a9727f-a3e9-11e9-92b3-0255ac101711")
self.assertEqual(nodes[2].id, "65a73294-a3e9-11e9-92b3-0255ac101711")
self.assertEqual(nodes[3].id, "65a87e6d-a3e9-11e9-92b3-0255ac101711")
self.assertEqual(nodes[0].status, "Active")
self.assertEqual(nodes[1].status, "Active")
self.assertEqual(nodes[2].status, "Active")
self.assertEqual(nodes[3].status, "Active")
class MockNodesDeleteList(OtcMockService):
responses = [
OtcMockResponse(method="GET",
url_match="cce",
path="/api/v3/projects/0391e4486e864c26be5654c522f440f2/clusters/0aa55501-a3e8-11e9-9e49-0255ac101611/nodes",
status_code=200,
max_calls=1,
json= {"kind":"List","apiVersion":"v3","items":[
{"kind":"Node","apiVersion":"v3","metadata":{"name":"rbe-sdkunit-filter-node-t4ywk","uid":"65a87e5d-a3e9-11e9-92b3-0255ac101711","creationTimestamp":"2019-07-11 14:37:23.976068 +0000 UTC","updateTimestamp":"2019-07-11 14:41:20.812487 +0000 UTC","annotations":{"kubernetes.io/node-pool.id":"eu-de-01#s2.large.1#EulerOS 2.2"}},"spec":{"flavor":"s2.large.1","az":"eu-de-01","os":"EulerOS 2.2","login":{"sshKey":"dummy-key","userPassword":{}},"rootVolume":{"volumetype":"SATA","size":100},"dataVolumes":[{"volumetype":"SATA","size":150}],"publicIP":{"eip":{"bandwidth":{}}},"nodeNicSpec":{"primaryNic":{}},"billingMode":0},"status":{"phase":"Creating","serverId":"fc65016a-f558-4095-8258-2dcc8e7a2f7a","privateIP":"10.248.2.138"}},
{"kind":"Node","apiVersion":"v3","metadata":{"name":"rbe-sdkunit-filter-node-n8u63","uid":"65a9727f-a3e9-11e9-92b3-0255ac101711","creationTimestamp":"2019-07-11 14:37:23.982322 +0000 UTC","updateTimestamp":"2019-07-11 14:41:20.641575 +0000 UTC","annotations":{"kubernetes.io/node-pool.id":"eu-de-01#s2.large.1#EulerOS 2.2"}},"spec":{"flavor":"s2.large.1","az":"eu-de-01","os":"EulerOS 2.2","login":{"sshKey":"dummy-key","userPassword":{}},"rootVolume":{"volumetype":"SATA","size":100},"dataVolumes":[{"volumetype":"SATA","size":150}],"publicIP":{"eip":{"bandwidth":{}}},"nodeNicSpec":{"primaryNic":{}},"billingMode":0},"status":{"phase":"Creating","serverId":"f8e3b401-d191-43d3-b828-ee67f060aee7","privateIP":"10.248.6.110"}},
{"kind":"Node","apiVersion":"v3","metadata":{"name":"rbe-sdkunit-filter-node-lnmtx","uid":"65a73294-a3e9-11e9-92b3-0255ac101711","creationTimestamp":"2019-07-11 14:37:23.96758 +0000 UTC","updateTimestamp":"2019-07-11 14:41:20.815918 +0000 UTC","annotations":{"kubernetes.io/node-pool.id":"eu-de-01#s2.large.1#EulerOS 2.2"}},"spec":{"flavor":"s2.large.1","az":"eu-de-01","os":"EulerOS 2.2","login":{"sshKey":"dummy-key","userPassword":{}},"rootVolume":{"volumetype":"SATA","size":100},"dataVolumes":[{"volumetype":"SSD","size":150}],"publicIP":{"eip":{"bandwidth":{}}},"nodeNicSpec":{"primaryNic":{}},"billingMode":0},"status":{"phase":"Active","serverId":"df6e4fa8-75b0-4a36-b182-7bc6bee5b0c6","privateIP":"10.248.7.196"}},
{"kind":"Node","apiVersion":"v3","metadata":{"name":"rbe-sdkunit-filter-node-t4ywk","uid":"65a87e6d-a3e9-11e9-92b3-0255ac101711","creationTimestamp":"2019-07-11 14:37:23.976068 +0000 UTC","updateTimestamp":"2019-07-11 14:41:20.812487 +0000 UTC","annotations":{"kubernetes.io/node-pool.id":"eu-de-01#s2.large.1#EulerOS 2.2"}},"spec":{"flavor":"s2.large.1","az":"eu-de-01","os":"EulerOS 2.2","login":{"sshKey":"dummy-key","userPassword":{}},"rootVolume":{"volumetype":"SATA","size":150},"dataVolumes":[{"volumetype":"SATA","size":150}],"publicIP":{"eip":{"bandwidth":{}}},"nodeNicSpec":{"primaryNic":{}},"billingMode":0},"status":{"phase":"Creating","serverId":"fc65016a-f558-4095-8258-2dcc8e7a2f7b","privateIP":"10.248.2.139"}},]},
),
OtcMockResponse(method="GET",
url_match="cce",
path="/api/v3/projects/0391e4486e864c26be5654c522f440f2/clusters/0aa55501-a3e8-11e9-9e49-0255ac101611/nodes",
status_code=200,
max_calls=1,
json= {"kind":"List","apiVersion":"v3","items":[
{"kind":"Node","apiVersion":"v3","metadata":{"name":"rbe-sdkunit-filter-node-t4ywk","uid":"65a87e5d-a3e9-11e9-92b3-0255ac101711","creationTimestamp":"2019-07-11 14:37:23.976068 +0000 UTC","updateTimestamp":"2019-07-11 14:41:20.812487 +0000 UTC","annotations":{"kubernetes.io/node-pool.id":"eu-de-01#s2.large.1#EulerOS 2.2"}},"spec":{"flavor":"s2.large.1","az":"eu-de-01","os":"EulerOS 2.2","login":{"sshKey":"dummy-key","userPassword":{}},"rootVolume":{"volumetype":"SATA","size":100},"dataVolumes":[{"volumetype":"SATA","size":150}],"publicIP":{"eip":{"bandwidth":{}}},"nodeNicSpec":{"primaryNic":{}},"billingMode":0},"status":{"phase":"Active","serverId":"fc65016a-f558-4095-8258-2dcc8e7a2f7a","privateIP":"10.248.2.138"}},
{"kind":"Node","apiVersion":"v3","metadata":{"name":"rbe-sdkunit-filter-node-lnmtx","uid":"65a73294-a3e9-11e9-92b3-0255ac101711","creationTimestamp":"2019-07-11 14:37:23.96758 +0000 UTC","updateTimestamp":"2019-07-11 14:41:20.815918 +0000 UTC","annotations":{"kubernetes.io/node-pool.id":"eu-de-01#s2.large.1#EulerOS 2.2"}},"spec":{"flavor":"s2.large.1","az":"eu-de-01","os":"EulerOS 2.2","login":{"sshKey":"dummy-key","userPassword":{}},"rootVolume":{"volumetype":"SATA","size":100},"dataVolumes":[{"volumetype":"SSD","size":150}],"publicIP":{"eip":{"bandwidth":{}}},"nodeNicSpec":{"primaryNic":{}},"billingMode":0},"status":{"phase":"Active","serverId":"df6e4fa8-75b0-4a36-b182-7bc6bee5b0c6","privateIP":"10.248.7.196"}},]},
),
OtcMockResponse(method="GET",
url_match="cce",
path="/api/v3/projects/0391e4486e864c26be5654c522f440f2/clusters/0aa55501-a3e8-11e9-9e49-0255ac101611/nodes",
status_code=200,
max_calls=1,
json= {"kind":"List","apiVersion":"v3","items":[
{"kind":"Node","apiVersion":"v3","metadata":{"name":"rbe-sdkunit-filter-node-t4ywk","uid":"65a87e5d-a3e9-11e9-92b3-0255ac101711","creationTimestamp":"2019-07-11 14:37:23.976068 +0000 UTC","updateTimestamp":"2019-07-11 14:41:20.812487 +0000 UTC","annotations":{"kubernetes.io/node-pool.id":"eu-de-01#s2.large.1#EulerOS 2.2"}},"spec":{"flavor":"s2.large.1","az":"eu-de-01","os":"EulerOS 2.2","login":{"sshKey":"dummy-key","userPassword":{}},"rootVolume":{"volumetype":"SATA","size":100},"dataVolumes":[{"volumetype":"SATA","size":150}],"publicIP":{"eip":{"bandwidth":{}}},"nodeNicSpec":{"primaryNic":{}},"billingMode":0},"status":{"phase":"Deleted","serverId":"fc65016a-f558-4095-8258-2dcc8e7a2f7a","privateIP":"10.248.2.138"}},
]})
]
@mock.patch.object(requests.Session, "request", side_effect=MockNodesDeleteList().request)
def test_wait_delete_ids(self, mock):
nodes=self.user_cloud.cce2.wait_for_delete_nodes(self.cluster_id, self.node_ids, interval=1, wait=5)
self.assertEqual(len(nodes), 1)
self.assertEqual(nodes[0].id, "65a87e5d-a3e9-11e9-92b3-0255ac101711")
#self.assertEqual(nodes[1].id, "65a9737f-a3e9-11e9-92b3-0255ac101711")
#self.assertEqual(nodes[2].id, "65a73594-a3e9-11e9-92b3-0255ac101711")
#self.assertEqual(nodes[3].id, "65a87e6d-a3e9-11e9-92b3-0255ac101711")
@mock.patch.object(requests.Session, "request", side_effect=MockNodesDeleteList().request)
def test_wait_delete_nodes(self, mock):
nodes=self.user_cloud.cce2.wait_for_delete_nodes(self.cluster_id, self.nodes, interval=1, wait=5)
self.assertEqual(len(nodes), 1)
self.assertEqual(nodes[0].id, "65a87e5d-a3e9-11e9-92b3-0255ac101711")
| 119.463576 | 758 | 0.646932 | 2,270 | 18,039 | 5.108811 | 0.109692 | 0.024834 | 0.037251 | 0.074502 | 0.912822 | 0.899629 | 0.885833 | 0.885833 | 0.885833 | 0.867724 | 0 | 0.167949 | 0.120683 | 18,039 | 150 | 759 | 120.26 | 0.56317 | 0.044293 | 0 | 0.626087 | 0 | 0.043478 | 0.553078 | 0.203194 | 0 | 0 | 0 | 0 | 0.191304 | 1 | 0.043478 | false | 0.130435 | 0.095652 | 0 | 0.165217 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 8 |
b4d626874590b853ce46f59851b46fba9378da47 | 24,501 | py | Python | blockchain-workbench/rest-api-samples/python/swagger_client/api/connections_api.py | chaosmail/blockchain | c78799d548c0d5deb86e03d16bf919df508d09fd | [
"MIT"
] | 738 | 2018-05-07T15:37:38.000Z | 2022-03-30T08:16:04.000Z | blockchain-workbench/rest-api-samples/python/swagger_client/api/connections_api.py | chaosmail/blockchain | c78799d548c0d5deb86e03d16bf919df508d09fd | [
"MIT"
] | 156 | 2018-05-08T14:01:25.000Z | 2022-01-31T22:03:32.000Z | blockchain-workbench/rest-api-samples/python/swagger_client/api/connections_api.py | cocoytech/blockchain | 4a64a41275cf149c0ad66b7fd9864498ec6a7ed9 | [
"MIT"
] | 682 | 2018-05-07T16:45:10.000Z | 2022-03-31T16:50:13.000Z | # coding: utf-8
"""
Azure Blockchain Workbench REST API
The Azure Blockchain Workbench REST API is a Workbench extensibility point, which allows developers to create and manage blockchain applications, manage users and organizations within a consortium, integrate blockchain applications into services and platforms, perform transactions on a blockchain, and retrieve transactional and contract data from a blockchain. # noqa: E501
OpenAPI spec version: v1
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from swagger_client.api_client import ApiClient
class ConnectionsApi(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
Ref: https://github.com/swagger-api/swagger-codegen
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
def block_get(self, connection_id, block_id, **kwargs): # noqa: E501
""" # noqa: E501
Gets the block matching a specific block ID. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.block_get(connection_id, block_id, async=True)
>>> result = thread.get()
:param async bool
:param int connection_id: The connectionId of the block (required)
:param int block_id: The id of the block (required)
:return: Block
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.block_get_with_http_info(connection_id, block_id, **kwargs) # noqa: E501
else:
(data) = self.block_get_with_http_info(connection_id, block_id, **kwargs) # noqa: E501
return data
def block_get_with_http_info(self, connection_id, block_id, **kwargs): # noqa: E501
""" # noqa: E501
Gets the block matching a specific block ID. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.block_get_with_http_info(connection_id, block_id, async=True)
>>> result = thread.get()
:param async bool
:param int connection_id: The connectionId of the block (required)
:param int block_id: The id of the block (required)
:return: Block
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['connection_id', 'block_id'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method block_get" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'connection_id' is set
if ('connection_id' not in params or
params['connection_id'] is None):
raise ValueError("Missing the required parameter `connection_id` when calling `block_get`") # noqa: E501
# verify the required parameter 'block_id' is set
if ('block_id' not in params or
params['block_id'] is None):
raise ValueError("Missing the required parameter `block_id` when calling `block_get`") # noqa: E501
collection_formats = {}
path_params = {}
if 'connection_id' in params:
path_params['connectionId'] = params['connection_id'] # noqa: E501
if 'block_id' in params:
path_params['blockId'] = params['block_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/api/v1/ledgers/connections/{connectionId}/blocks/{blockId}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Block', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def blocks_get(self, connection_id, **kwargs): # noqa: E501
""" # noqa: E501
Lists the blocks for a connected blockchain network. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.blocks_get(connection_id, async=True)
>>> result = thread.get()
:param async bool
:param int connection_id: The id of the connection (required)
:param int top: The maximum number of items to return
:param int skip: The number of items to skip before returning
:return: BlockList
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.blocks_get_with_http_info(connection_id, **kwargs) # noqa: E501
else:
(data) = self.blocks_get_with_http_info(connection_id, **kwargs) # noqa: E501
return data
def blocks_get_with_http_info(self, connection_id, **kwargs): # noqa: E501
""" # noqa: E501
Lists the blocks for a connected blockchain network. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.blocks_get_with_http_info(connection_id, async=True)
>>> result = thread.get()
:param async bool
:param int connection_id: The id of the connection (required)
:param int top: The maximum number of items to return
:param int skip: The number of items to skip before returning
:return: BlockList
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['connection_id', 'top', 'skip'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method blocks_get" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'connection_id' is set
if ('connection_id' not in params or
params['connection_id'] is None):
raise ValueError("Missing the required parameter `connection_id` when calling `blocks_get`") # noqa: E501
collection_formats = {}
path_params = {}
if 'connection_id' in params:
path_params['connectionID'] = params['connection_id'] # noqa: E501
query_params = []
if 'top' in params:
query_params.append(('top', params['top'])) # noqa: E501
if 'skip' in params:
query_params.append(('skip', params['skip'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/api/v1/ledgers/connections/{connectionId}/blocks', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='BlockList', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def connection_get(self, connection_id, **kwargs): # noqa: E501
""" # noqa: E501
Gets the connected blockchain network matching a specific chain instance ID. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.connection_get(connection_id, async=True)
>>> result = thread.get()
:param async bool
:param int connection_id: The id of the connection (required)
:return: Connection
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.connection_get_with_http_info(connection_id, **kwargs) # noqa: E501
else:
(data) = self.connection_get_with_http_info(connection_id, **kwargs) # noqa: E501
return data
def connection_get_with_http_info(self, connection_id, **kwargs): # noqa: E501
""" # noqa: E501
Gets the connected blockchain network matching a specific chain instance ID. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.connection_get_with_http_info(connection_id, async=True)
>>> result = thread.get()
:param async bool
:param int connection_id: The id of the connection (required)
:return: Connection
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['connection_id'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method connection_get" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'connection_id' is set
if ('connection_id' not in params or
params['connection_id'] is None):
raise ValueError("Missing the required parameter `connection_id` when calling `connection_get`") # noqa: E501
collection_formats = {}
path_params = {}
if 'connection_id' in params:
path_params['connectionID'] = params['connection_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/api/v1/ledgers/connections/{connectionId}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Connection', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def connections_get(self, **kwargs): # noqa: E501
""" # noqa: E501
Lists the connected blockchain networks. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.connections_get(async=True)
>>> result = thread.get()
:param async bool
:param int top: The maximum number of items to return
:param int skip: The number of items to skip before returning
:return: ConnectionList
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.connections_get_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.connections_get_with_http_info(**kwargs) # noqa: E501
return data
def connections_get_with_http_info(self, **kwargs): # noqa: E501
""" # noqa: E501
Lists the connected blockchain networks. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.connections_get_with_http_info(async=True)
>>> result = thread.get()
:param async bool
:param int top: The maximum number of items to return
:param int skip: The number of items to skip before returning
:return: ConnectionList
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['top', 'skip'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method connections_get" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'top' in params:
query_params.append(('top', params['top'])) # noqa: E501
if 'skip' in params:
query_params.append(('skip', params['skip'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/api/v1/ledgers/connections', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ConnectionList', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def transaction_get(self, connection_id, transaction_id, **kwargs): # noqa: E501
""" # noqa: E501
Gets the transaction matching a specific transaction ID. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.transaction_get(connection_id, transaction_id, async=True)
>>> result = thread.get()
:param async bool
:param int connection_id: The connectionId of the transaction (required)
:param int transaction_id: The id of the transaction (required)
:return: Transaction
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.transaction_get_with_http_info(connection_id, transaction_id, **kwargs) # noqa: E501
else:
(data) = self.transaction_get_with_http_info(connection_id, transaction_id, **kwargs) # noqa: E501
return data
def transaction_get_with_http_info(self, connection_id, transaction_id, **kwargs): # noqa: E501
""" # noqa: E501
Gets the transaction matching a specific transaction ID. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.transaction_get_with_http_info(connection_id, transaction_id, async=True)
>>> result = thread.get()
:param async bool
:param int connection_id: The connectionId of the transaction (required)
:param int transaction_id: The id of the transaction (required)
:return: Transaction
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['connection_id', 'transaction_id'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method transaction_get" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'connection_id' is set
if ('connection_id' not in params or
params['connection_id'] is None):
raise ValueError("Missing the required parameter `connection_id` when calling `transaction_get`") # noqa: E501
# verify the required parameter 'transaction_id' is set
if ('transaction_id' not in params or
params['transaction_id'] is None):
raise ValueError("Missing the required parameter `transaction_id` when calling `transaction_get`") # noqa: E501
collection_formats = {}
path_params = {}
if 'connection_id' in params:
path_params['connectionId'] = params['connection_id'] # noqa: E501
if 'transaction_id' in params:
path_params['transactionId'] = params['transaction_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/api/v1/ledgers/connections/{connectionId}/transactions/{transactionId}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Transaction', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def transactions_get(self, connection_id, **kwargs): # noqa: E501
""" # noqa: E501
Lists the transactions for a connected blockchain network. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.transactions_get(connection_id, async=True)
>>> result = thread.get()
:param async bool
:param int connection_id: The id of the connection (required)
:param int top: The maximum number of items to return
:param int skip: The number of items to skip before returning
:return: list[TransactionList]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.transactions_get_with_http_info(connection_id, **kwargs) # noqa: E501
else:
(data) = self.transactions_get_with_http_info(connection_id, **kwargs) # noqa: E501
return data
def transactions_get_with_http_info(self, connection_id, **kwargs): # noqa: E501
""" # noqa: E501
Lists the transactions for a connected blockchain network. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.transactions_get_with_http_info(connection_id, async=True)
>>> result = thread.get()
:param async bool
:param int connection_id: The id of the connection (required)
:param int top: The maximum number of items to return
:param int skip: The number of items to skip before returning
:return: list[TransactionList]
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['connection_id', 'top', 'skip'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method transactions_get" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'connection_id' is set
if ('connection_id' not in params or
params['connection_id'] is None):
raise ValueError("Missing the required parameter `connection_id` when calling `transactions_get`") # noqa: E501
collection_formats = {}
path_params = {}
if 'connection_id' in params:
path_params['connectionId'] = params['connection_id'] # noqa: E501
query_params = []
if 'top' in params:
query_params.append(('top', params['top'])) # noqa: E501
if 'skip' in params:
query_params.append(('skip', params['skip'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/api/v1/ledgers/connections/{connectionId}/transactions', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[TransactionList]', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
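# Minimal usage sketch (illustrative; client configuration and the connection id
# below are placeholders, not part of the generated module):
#
# from swagger_client.api_client import ApiClient
# api = ConnectionsApi(ApiClient())
# connections = api.connections_get(top=10)        # list connected blockchain networks
# blocks = api.blocks_get(connection_id=1, top=5)  # list blocks for one connection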
| 39.969005 | 380 | 0.613077 | 2,808 | 24,501 | 5.134615 | 0.067308 | 0.048273 | 0.023304 | 0.029963 | 0.927313 | 0.912054 | 0.907685 | 0.888265 | 0.885976 | 0.871341 | 0 | 0.015978 | 0.300069 | 24,501 | 612 | 381 | 40.034314 | 0.824771 | 0.051222 | 0 | 0.766871 | 0 | 0 | 0.194366 | 0.049468 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.01227 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
b4fd95cd1edfcce9973f16bdfa16df5ccdf1c39d | 34 | py | Python | tests/module_test.py | airportyh/cpython | e3cb54bdfcafb8493a936ba50d53e496f98f9222 | [
"0BSD"
] | null | null | null | tests/module_test.py | airportyh/cpython | e3cb54bdfcafb8493a936ba50d53e496f98f9222 | [
"0BSD"
] | null | null | null | tests/module_test.py | airportyh/cpython | e3cb54bdfcafb8493a936ba50d53e496f98f9222 | [
"0BSD"
] | null | null | null | import a_module
a_module.a_func() | 11.333333 | 17 | 0.823529 | 7 | 34 | 3.571429 | 0.571429 | 0.56 | 0.64 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088235 | 34 | 3 | 17 | 11.333333 | 0.806452 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
2ed424dd89def4e77fdba808337a8d831f158521 | 220 | py | Python | cleverhans/future/torch/attacks/__init__.py | iArunava/cleverhans | f01d21deada2f835c759323ecc58981304054c05 | [
"MIT"
] | 2 | 2019-12-24T18:10:19.000Z | 2021-03-11T07:41:55.000Z | cleverhans/future/torch/attacks/__init__.py | iArunava/cleverhans | f01d21deada2f835c759323ecc58981304054c05 | [
"MIT"
] | null | null | null | cleverhans/future/torch/attacks/__init__.py | iArunava/cleverhans | f01d21deada2f835c759323ecc58981304054c05 | [
"MIT"
] | 1 | 2017-02-03T05:59:09.000Z | 2017-02-03T05:59:09.000Z | # pylint: disable=missing-docstring
from cleverhans.future.torch.attacks.fast_gradient_method import fast_gradient_method
from cleverhans.future.torch.attacks.projected_gradient_descent import projected_gradient_descent
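# Illustrative sketch (not part of the package): generate adversarial examples
# with the two attacks re-exported above. The keyword names below (eps, norm,
# eps_iter, nb_iter) follow the upstream cleverhans signatures as documented;
# verify against the installed version before use.
if __name__ == '__main__':
    import numpy as np
    import torch
    import torch.nn as nn

    model_fn = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
    x = torch.rand(4, 1, 28, 28)                                    # dummy input batch

    x_fgm = fast_gradient_method(model_fn, x, eps=0.3, norm=np.inf)
    x_pgd = projected_gradient_descent(model_fn, x, eps=0.3, eps_iter=0.01,
                                       nb_iter=40, norm=np.inf)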
| 55 | 97 | 0.895455 | 28 | 220 | 6.75 | 0.535714 | 0.148148 | 0.21164 | 0.26455 | 0.338624 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.05 | 220 | 3 | 98 | 73.333333 | 0.904306 | 0.15 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
2ee2c26443a5ebdc7b52c299bad04465d9d33387 | 104 | py | Python | python/miniconda/vendored/vendor/noarch/setuptools-52.0.0-py39h06a4308_0/info/test/run_test.py | kvedurmu/paketo-samples | 525b49f14883a6aa54959de3232430f0fdc1e66e | [
"Apache-2.0"
] | null | null | null | python/miniconda/vendored/vendor/noarch/setuptools-52.0.0-py39h06a4308_0/info/test/run_test.py | kvedurmu/paketo-samples | 525b49f14883a6aa54959de3232430f0fdc1e66e | [
"Apache-2.0"
] | 19 | 2021-03-10T21:30:56.000Z | 2022-02-27T06:45:03.000Z | python/miniconda/vendored/vendor/noarch/setuptools-52.0.0-py39h06a4308_0/info/test/run_test.py | kvedurmu/paketo-samples | 525b49f14883a6aa54959de3232430f0fdc1e66e | [
"Apache-2.0"
] | null | null | null | print("import: 'setuptools'")
import setuptools
print("import: 'pkg_resources'")
import pkg_resources
| 14.857143 | 32 | 0.769231 | 12 | 104 | 6.5 | 0.416667 | 0.282051 | 0.461538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096154 | 104 | 6 | 33 | 17.333333 | 0.829787 | 0 | 0 | 0 | 0 | 0 | 0.417476 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0.5 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 8 |
2ee58372db213443d4575b5a3ba905a6b23bf590 | 11,123 | py | Python | fec/home/migrations/0018_record_digest_press_release.py | cnlucas/fec-cms | aa67a0d4c19a350420d2f8c4b4e6f93acb808639 | [
"CC0-1.0"
] | 39 | 2018-03-09T21:56:17.000Z | 2022-01-20T02:31:38.000Z | fec/home/migrations/0018_record_digest_press_release.py | rbtrsv/fec-cms | 3136d1cf300ce1505d7035de38038e1c045937e6 | [
"CC0-1.0"
] | 3,183 | 2018-03-09T20:30:55.000Z | 2022-03-30T21:27:49.000Z | fec/home/migrations/0018_record_digest_press_release.py | rbtrsv/fec-cms | 3136d1cf300ce1505d7035de38038e1c045937e6 | [
"CC0-1.0"
] | 19 | 2018-03-09T20:47:31.000Z | 2022-03-10T02:54:33.000Z | # -*- coding: utf-8 -*-
# Generated by Django 1.9.9 on 2016-08-31 00:35
from __future__ import unicode_literals
import datetime
from django.db import migrations, models
import django.db.models.deletion
import home.models
import modelcluster.fields
import wagtail.contrib.table_block.blocks
import wagtail.core.blocks
import wagtail.core.fields
import wagtail.images.blocks
class Migration(migrations.Migration):
dependencies = [
('wagtailcore', '0028_merge'),
('wagtailimages', '0013_make_rendition_upload_callable'),
('home', '0017_auto_20160823_1504'),
]
operations = [
migrations.CreateModel(
name='Author',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=255)),
('title', models.CharField(max_length=255)),
('email', models.EmailField(max_length=254)),
('phone', models.CharField(blank=True, max_length=255)),
('bio', models.TextField(blank=True)),
('photo', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
],
),
migrations.CreateModel(
name='DigestPage',
fields=[
('page_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='wagtailcore.Page')),
('body', wagtail.core.fields.StreamField([(b'heading', wagtail.core.blocks.CharBlock(classname=b'full title')), (b'paragraph', wagtail.core.blocks.RichTextBlock()), (b'html', wagtail.core.blocks.RawHTMLBlock()), (b'image', wagtail.images.blocks.ImageChooserBlock()), (b'table', wagtail.contrib.table_block.blocks.TableBlock())], blank=True, null=True)),
('date', models.DateField(default=datetime.date.today)),
('feed_image', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
('read_next', models.ForeignKey(blank=True, default=home.models.get_previous_digest_page, null=True, on_delete=django.db.models.deletion.SET_NULL, to='home.DigestPage')),
],
options={
'abstract': False,
},
bases=('wagtailcore.page',),
),
migrations.CreateModel(
name='DigestPageAuthors',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('sort_order', models.IntegerField(blank=True, editable=False, null=True)),
('role', models.CharField(choices=[(b'author', b'Author'), (b'writer', b'Written by'), (b'graphics', b'Graphics by'), (b'contact', b'Contact')], default=b'author', max_length=255)),
('author', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='+', to='home.Author')),
('page', modelcluster.fields.ParentalKey(on_delete=django.db.models.deletion.CASCADE, related_name='authors', to='home.DigestPage')),
],
options={
'ordering': ['sort_order'],
'abstract': False,
},
),
migrations.CreateModel(
name='PressReleasePage',
fields=[
('page_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='wagtailcore.Page')),
('body', wagtail.core.fields.StreamField([(b'heading', wagtail.core.blocks.CharBlock(classname=b'full title')), (b'paragraph', wagtail.core.blocks.RichTextBlock()), (b'html', wagtail.core.blocks.RawHTMLBlock()), (b'image', wagtail.images.blocks.ImageChooserBlock()), (b'table', wagtail.contrib.table_block.blocks.TableBlock())], blank=True, null=True)),
('date', models.DateField(default=datetime.date.today)),
('category', models.CharField(choices=[(b'audit reports', b'Audit reports'), (b'campaign finance data summaries', b'Campaign finance data summaries'), (b'commission appointments', b'Commission appointments'), (b'disclosure initiatives', b'Disclosure initiatives'), (b'enforcement matters', b'Enforcement matters'), (b'hearings', b'Hearings'), (b'litigation', b'Litigation'), (b'non-filer publications', b'Non-filer publications'), (b'open meetings and related matters', b'Open meetings and related matters'), (b'presidential public funds', b'Presidential public funds'), (b'rulemakings', b'Rulemakings'), (b'other agency actions', b'Other agency actions')], max_length=255)),
('feed_image', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
('read_next', models.ForeignKey(blank=True, default=home.models.get_previous_press_release_page, null=True, on_delete=django.db.models.deletion.SET_NULL, to='home.PressReleasePage')),
],
options={
'abstract': False,
},
bases=('wagtailcore.page',),
),
migrations.CreateModel(
name='PressReleasePageAuthors',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('sort_order', models.IntegerField(blank=True, editable=False, null=True)),
('role', models.CharField(choices=[(b'author', b'Author'), (b'writer', b'Written by'), (b'graphics', b'Graphics by'), (b'contact', b'Contact')], default=b'author', max_length=255)),
('author', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='+', to='home.Author')),
('page', modelcluster.fields.ParentalKey(on_delete=django.db.models.deletion.CASCADE, related_name='authors', to='home.PressReleasePage')),
],
options={
'ordering': ['sort_order'],
'abstract': False,
},
),
migrations.CreateModel(
name='RecordPage',
fields=[
('page_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='wagtailcore.Page')),
('body', wagtail.core.fields.StreamField([(b'heading', wagtail.core.blocks.CharBlock(classname=b'full title')), (b'paragraph', wagtail.core.blocks.RichTextBlock()), (b'html', wagtail.core.blocks.RawHTMLBlock()), (b'image', wagtail.images.blocks.ImageChooserBlock()), (b'table', wagtail.contrib.table_block.blocks.TableBlock())], blank=True, null=True)),
('date', models.DateField(default=datetime.date.today)),
('category', models.CharField(choices=[(b'advisory opinions', b'Advisory Opinions'), (b'commission', b'Commission'), (b'compliance', b'Compliance'), (b'litigation', b'Litigation'), (b'outreach', b'Outreach'), (b'public funding', b'Public Funding'), (b'regulations', b'Regulations'), (b'reporting', b'Reporting'), (b'statistics', b'Statistics')], max_length=255)),
('related_section_title', models.CharField(blank=True, default=b'Explore campaign finance data', max_length=255)),
('related_section_url', models.CharField(blank=True, default=b'/data/', max_length=255)),
('feed_image', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
('read_next', models.ForeignKey(blank=True, default=home.models.get_previous_record_page, null=True, on_delete=django.db.models.deletion.SET_NULL, to='home.RecordPage')),
],
options={
'abstract': False,
},
bases=('wagtailcore.page',),
),
migrations.CreateModel(
name='RecordPageAuthors',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('sort_order', models.IntegerField(blank=True, editable=False, null=True)),
('role', models.CharField(choices=[(b'author', b'Author'), (b'writer', b'Written by'), (b'graphics', b'Graphics by'), (b'contact', b'Contact')], default=b'author', max_length=255)),
('author', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='+', to='home.Author')),
('page', modelcluster.fields.ParentalKey(on_delete=django.db.models.deletion.CASCADE, related_name='authors', to='home.RecordPage')),
],
options={
'ordering': ['sort_order'],
'abstract': False,
},
),
migrations.AddField(
model_name='calendarpage',
name='feed_image',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image'),
),
migrations.AddField(
model_name='checklistpage',
name='feed_image',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image'),
),
migrations.AddField(
model_name='contactpage',
name='feed_image',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image'),
),
migrations.AddField(
model_name='homepage',
name='feed_image',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image'),
),
migrations.AddField(
model_name='landingpage',
name='feed_image',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image'),
),
migrations.AddField(
model_name='nonconnectedchecklistpage',
name='feed_image',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image'),
),
migrations.AddField(
model_name='partychecklistpage',
name='feed_image',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image'),
),
migrations.AddField(
model_name='ssfchecklistpage',
name='feed_image',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image'),
),
]
| 67.006024 | 691 | 0.634631 | 1,227 | 11,123 | 5.629992 | 0.152404 | 0.03011 | 0.050666 | 0.079618 | 0.804719 | 0.765489 | 0.747684 | 0.73205 | 0.73205 | 0.685871 | 0 | 0.008282 | 0.207588 | 11,123 | 165 | 692 | 67.412121 | 0.775471 | 0.006024 | 0 | 0.651899 | 1 | 0 | 0.207184 | 0.01529 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.063291 | 0 | 0.082278 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
2ef081211cf4ab698190b5783dd174d1bff09af2 | 23,729 | py | Python | validations_libs/tests/cli/test_run.py | openstack/validations-libs | 7d416acbe89a9ba23cabfd4e97c80affe57e06cb | [
"Apache-2.0"
] | 1 | 2020-03-11T09:13:28.000Z | 2020-03-11T09:13:28.000Z | validations_libs/tests/cli/test_run.py | openstack/validations-libs | 7d416acbe89a9ba23cabfd4e97c80affe57e06cb | [
"Apache-2.0"
] | null | null | null | validations_libs/tests/cli/test_run.py | openstack/validations-libs | 7d416acbe89a9ba23cabfd4e97c80affe57e06cb | [
"Apache-2.0"
] | 1 | 2021-03-23T08:31:43.000Z | 2021-03-23T08:31:43.000Z | # Copyright 2021 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
import sys
import copy
try:
from unittest import mock
except ImportError:
import mock
from validations_libs.cli import run
from validations_libs.tests import fakes
from validations_libs.tests.cli.fakes import BaseCommand
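# Each test below builds a CLI argument list, parses it via
# BaseCommand.check_parser, and, with ValidationActions.run_validations mocked
# out, asserts either the exact keyword arguments forwarded to the library or
# the exception raised by take_action.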
class TestRun(BaseCommand):
def setUp(self):
super(TestRun, self).setUp()
self.cmd = run.Run(self.app, None)
@mock.patch('validations_libs.validation_actions.ValidationActions.'
'run_validations',
return_value=None)
def test_run_command_return_none(self, mock_run):
args = self._set_args(['--validation', 'foo'])
verifylist = [('validation_name', ['foo'])]
parsed_args = self.check_parser(self.cmd, args, verifylist)
self.assertRaises(RuntimeError, self.cmd.take_action, parsed_args)
@mock.patch('validations_libs.validation_actions.ValidationActions.'
'run_validations',
return_value=copy.deepcopy(fakes.FAKE_SUCCESS_RUN))
def test_run_command_success(self, mock_run):
args = self._set_args(['--validation', 'foo'])
verifylist = [('validation_name', ['foo'])]
parsed_args = self.check_parser(self.cmd, args, verifylist)
self.cmd.take_action(parsed_args)
def test_run_command_exclusive_group(self):
arglist = ['--validation', 'foo', '--group', 'bar']
self._set_args(arglist)
        verifylist = [('validation_name', ['foo']), ('group', 'bar')]
self.assertRaises(Exception, self.check_parser, self.cmd,
arglist, verifylist)
@mock.patch('validations_libs.constants.VALIDATIONS_LOG_BASEDIR')
@mock.patch('validations_libs.cli.common.print_dict')
@mock.patch('getpass.getuser',
return_value='doe')
@mock.patch('validations_libs.validation_actions.ValidationActions.'
'run_validations',
return_value=copy.deepcopy(fakes.FAKE_SUCCESS_RUN))
@mock.patch('validations_libs.utils.load_config', return_value={})
def test_run_command_extra_vars(self, mock_config, mock_run,
mock_user, mock_print, mock_log_dir):
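        # Full keyword-argument set expected to reach run_validations; only
        # extra_vars differs from the CLI defaults in this test.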
run_called_args = {
'inventory': 'localhost',
'limit_hosts': None,
'group': [],
'category': [],
'product': [],
'extra_vars': {'key': 'value'},
'validations_dir': '/usr/share/ansible/validation-playbooks',
'base_dir': '/usr/share/ansible',
'validation_name': ['foo'],
'extra_env_vars': None,
'python_interpreter': sys.executable,
'quiet': True,
'ssh_user': 'doe',
'log_path': mock_log_dir,
'validation_config': {},
'skip_list': None
}
arglist = ['--validation', 'foo',
'--extra-vars', 'key=value']
verifylist = [('validation_name', ['foo']),
('extra_vars', {'key': 'value'})]
self._set_args(arglist)
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
self.cmd.take_action(parsed_args)
mock_run.assert_called_with(**run_called_args)
@mock.patch('validations_libs.constants.VALIDATIONS_LOG_BASEDIR')
@mock.patch('validations_libs.cli.common.print_dict')
@mock.patch('getpass.getuser',
return_value='doe')
@mock.patch('validations_libs.validation_actions.ValidationActions.'
'run_validations',
return_value=copy.deepcopy(fakes.FAKE_SUCCESS_RUN))
@mock.patch('validations_libs.utils.load_config', return_value={})
def test_run_command_extra_vars_twice(self, mock_config,
mock_run, mock_user, mock_print,
mock_log_dir):
run_called_args = {
'inventory': 'localhost',
'limit_hosts': None,
'group': [],
'category': [],
'product': [],
'extra_vars': {'key': 'value2'},
'validations_dir': '/usr/share/ansible/validation-playbooks',
'base_dir': '/usr/share/ansible',
'validation_name': ['foo'],
'extra_env_vars': None,
'python_interpreter': sys.executable,
'quiet': True,
'ssh_user': 'doe',
'log_path': mock_log_dir,
'validation_config': {},
'skip_list': None
}
arglist = ['--validation', 'foo',
'--extra-vars', 'key=value1',
'--extra-vars', 'key=value2']
verifylist = [('validation_name', ['foo']),
('extra_vars', {'key': 'value2'})]
self._set_args(arglist)
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
self.cmd.take_action(parsed_args)
mock_run.assert_called_with(**run_called_args)
def test_run_command_exclusive_vars(self):
arglist = ['--validation', 'foo',
'--extra-vars', 'key=value1',
'--extra-vars-file', '/foo/vars.yaml']
verifylist = [('validation_name', ['foo']),
('extra_vars', {'key': 'value2'})]
self.assertRaises(Exception, self.check_parser, self.cmd,
arglist, verifylist)
@mock.patch('validations_libs.constants.VALIDATIONS_LOG_BASEDIR')
@mock.patch('yaml.safe_load', return_value={'key': 'value'})
@mock.patch('six.moves.builtins.open')
@mock.patch('getpass.getuser',
return_value='doe')
@mock.patch('validations_libs.validation_actions.ValidationActions.'
'run_validations',
return_value=copy.deepcopy(fakes.FAKE_SUCCESS_RUN))
@mock.patch('validations_libs.utils.load_config', return_value={})
def test_run_command_extra_vars_file(self, mock_config, mock_run,
mock_user, mock_open,
mock_yaml, mock_log_dir):
run_called_args = {
'inventory': 'localhost',
'limit_hosts': None,
'group': [],
'category': [],
'product': [],
'extra_vars': {'key': 'value'},
'validations_dir': '/usr/share/ansible/validation-playbooks',
'base_dir': '/usr/share/ansible',
'validation_name': ['foo'],
'extra_env_vars': None,
'python_interpreter': sys.executable,
'quiet': True,
'ssh_user': 'doe',
'log_path': mock_log_dir,
'validation_config': {},
'skip_list': None
}
arglist = ['--validation', 'foo',
'--extra-vars-file', '/foo/vars.yaml']
verifylist = [('validation_name', ['foo']),
('extra_vars_file', '/foo/vars.yaml')]
self._set_args(arglist)
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
self.cmd.take_action(parsed_args)
mock_run.assert_called_with(**run_called_args)
@mock.patch('validations_libs.constants.VALIDATIONS_LOG_BASEDIR')
@mock.patch('getpass.getuser',
return_value='doe')
@mock.patch('validations_libs.validation_actions.ValidationActions.'
'run_validations',
return_value=copy.deepcopy(fakes.FAKE_SUCCESS_RUN))
@mock.patch('validations_libs.utils.load_config', return_value={})
def test_run_command_extra_env_vars(self, mock_config, mock_run,
mock_user, mock_log_dir):
run_called_args = {
'inventory': 'localhost',
'limit_hosts': None,
'group': [],
'category': [],
'product': [],
'extra_vars': None,
'validations_dir': '/usr/share/ansible/validation-playbooks',
'base_dir': '/usr/share/ansible',
'validation_name': ['foo'],
'extra_env_vars': {'key': 'value'},
'python_interpreter': sys.executable,
'quiet': True,
'ssh_user': 'doe',
'log_path': mock_log_dir,
'validation_config': {},
'skip_list': None
}
arglist = ['--validation', 'foo',
'--extra-env-vars', 'key=value']
verifylist = [('validation_name', ['foo']),
('extra_env_vars', {'key': 'value'})]
self._set_args(arglist)
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
self.cmd.take_action(parsed_args)
mock_run.assert_called_with(**run_called_args)
@mock.patch('validations_libs.constants.VALIDATIONS_LOG_BASEDIR')
@mock.patch('getpass.getuser',
return_value='doe')
@mock.patch('validations_libs.validation_actions.ValidationActions.'
'run_validations',
return_value=copy.deepcopy(fakes.FAKE_SUCCESS_RUN))
@mock.patch('validations_libs.utils.load_config', return_value={})
def test_run_command_extra_env_vars_with_custom_callback(self,
mock_config,
mock_run,
mock_user,
mock_log_dir):
run_called_args = {
'inventory': 'localhost',
'limit_hosts': None,
'log_path': mock_log_dir,
'group': [],
'category': [],
'product': [],
'extra_vars': None,
'validations_dir': '/usr/share/ansible/validation-playbooks',
'base_dir': '/usr/share/ansible',
'validation_name': ['foo'],
'extra_env_vars': {'ANSIBLE_STDOUT_CALLBACK': 'default'},
'python_interpreter': sys.executable,
'quiet': False,
'ssh_user': 'doe',
'validation_config': {},
'skip_list': None
}
arglist = ['--validation', 'foo',
'--extra-env-vars', 'ANSIBLE_STDOUT_CALLBACK=default']
verifylist = [('validation_name', ['foo']),
('extra_env_vars', {'ANSIBLE_STDOUT_CALLBACK': 'default'})]
self._set_args(arglist)
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
self.cmd.take_action(parsed_args)
mock_run.assert_called_with(**run_called_args)
@mock.patch('validations_libs.constants.VALIDATIONS_LOG_BASEDIR')
@mock.patch('getpass.getuser',
return_value='doe')
@mock.patch('validations_libs.validation_actions.ValidationActions.'
'run_validations',
return_value=copy.deepcopy(fakes.FAKE_SUCCESS_RUN))
@mock.patch('validations_libs.utils.load_config', return_value={})
def test_run_command_extra_env_vars_twice(self, mock_config,
mock_run, mock_user,
mock_log_dir):
run_called_args = {
'inventory': 'localhost',
'limit_hosts': None,
'group': [],
'category': [],
'product': [],
'extra_vars': None,
'validations_dir': '/usr/share/ansible/validation-playbooks',
'base_dir': '/usr/share/ansible',
'validation_name': ['foo'],
'extra_env_vars': {'key': 'value2'},
'python_interpreter': sys.executable,
'quiet': True,
'ssh_user': 'doe',
'log_path': mock_log_dir,
'validation_config': {},
'skip_list': None
}
arglist = ['--validation', 'foo',
'--extra-env-vars', 'key=value1',
'--extra-env-vars', 'key=value2']
verifylist = [('validation_name', ['foo']),
('extra_env_vars', {'key': 'value2'})]
self._set_args(arglist)
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
self.cmd.take_action(parsed_args)
mock_run.assert_called_with(**run_called_args)
@mock.patch('validations_libs.constants.VALIDATIONS_LOG_BASEDIR')
@mock.patch('getpass.getuser',
return_value='doe')
@mock.patch('validations_libs.validation_actions.ValidationActions.'
'run_validations',
return_value=copy.deepcopy(fakes.FAKE_SUCCESS_RUN))
@mock.patch('validations_libs.utils.load_config', return_value={})
def test_run_command_extra_env_vars_and_extra_vars(self,
mock_config,
mock_run,
mock_user,
mock_log_dir):
run_called_args = {
'inventory': 'localhost',
'limit_hosts': None,
'group': [],
'category': [],
'product': [],
'extra_vars': {'key': 'value'},
'validations_dir': '/usr/share/ansible/validation-playbooks',
'base_dir': '/usr/share/ansible',
'validation_name': ['foo'],
'extra_env_vars': {'key2': 'value2'},
'python_interpreter': sys.executable,
'quiet': True,
'ssh_user': 'doe',
'log_path': mock_log_dir,
'validation_config': {},
'skip_list': None
}
arglist = ['--validation', 'foo',
'--extra-vars', 'key=value',
'--extra-env-vars', 'key2=value2']
verifylist = [('validation_name', ['foo']),
('extra_vars', {'key': 'value'}),
('extra_env_vars', {'key2': 'value2'})]
self._set_args(arglist)
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
self.cmd.take_action(parsed_args)
mock_run.assert_called_with(**run_called_args)
def test_run_command_exclusive_wrong_extra_vars(self):
arglist = ['--validation', 'foo',
'--extra-vars', 'key=value1,key=value2']
verifylist = [('validation_name', ['foo']),
('extra_vars', {'key': 'value2'})]
self.assertRaises(Exception, self.check_parser, self.cmd,
arglist, verifylist)
@mock.patch('validations_libs.constants.VALIDATIONS_LOG_BASEDIR')
@mock.patch('getpass.getuser',
return_value='doe')
@mock.patch('validations_libs.validation_actions.ValidationActions.'
'run_validations',
return_value=copy.deepcopy(fakes.FAKE_FAILED_RUN))
@mock.patch('validations_libs.utils.load_config', return_value={})
def test_run_command_failed_validation(self, mock_config,
mock_run, mock_user, mock_log_dir):
run_called_args = {
'inventory': 'localhost',
'limit_hosts': None,
'group': [],
'category': [],
'product': [],
'extra_vars': None,
'validations_dir': '/usr/share/ansible/validation-playbooks',
'base_dir': '/usr/share/ansible',
'validation_name': ['foo'],
'extra_env_vars': None,
'python_interpreter': sys.executable,
'quiet': True,
'ssh_user': 'doe',
'log_path': mock_log_dir,
'validation_config': {},
'skip_list': None
}
arglist = ['--validation', 'foo']
verifylist = [('validation_name', ['foo'])]
self._set_args(arglist)
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
self.assertRaises(RuntimeError, self.cmd.take_action, parsed_args)
mock_run.assert_called_with(**run_called_args)
@mock.patch('getpass.getuser',
return_value='doe')
@mock.patch('validations_libs.validation_actions.ValidationActions.'
'run_validations',
return_value=[])
@mock.patch('validations_libs.utils.load_config', return_value={})
def test_run_command_no_validation(self, mock_config, mock_run,
mock_user):
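        # Note: run_validations is mocked to return an empty result list, so
        # take_action raises RuntimeError and run_called_args is never asserted.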
run_called_args = {
'inventory': 'localhost',
'limit_hosts': None,
'group': [],
'category': [],
'product': [],
'extra_vars': {'key': 'value'},
'validations_dir': '/usr/share/ansible/validation-playbooks',
'base_dir': '/usr/share/ansible',
'validation_name': ['foo'],
'extra_env_vars': {'key2': 'value2'},
'python_interpreter': sys.executable,
'quiet': True,
'ssh_user': 'doe',
'validation_config': {},
'skip_list': None
}
arglist = ['--validation', 'foo']
verifylist = [('validation_name', ['foo'])]
self._set_args(arglist)
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
self.assertRaises(RuntimeError, self.cmd.take_action, parsed_args)
@mock.patch('validations_libs.constants.VALIDATIONS_LOG_BASEDIR')
@mock.patch('getpass.getuser',
return_value='doe')
@mock.patch('validations_libs.validation_actions.ValidationActions.'
'run_validations',
return_value=fakes.FAKE_SUCCESS_RUN)
def test_run_with_wrong_config(self, mock_run,
mock_user, mock_log_dir):
arglist = ['--validation', 'foo', '--config', 'wrong.cfg']
verifylist = [('validation_name', ['foo']),
('config', 'wrong.cfg')]
run_called_args = {
'inventory': 'localhost',
'limit_hosts': None,
'group': [],
'category': [],
'product': [],
'extra_vars': None,
'validations_dir': '/usr/share/ansible/validation-playbooks',
'base_dir': '/usr/share/ansible',
'validation_name': ['foo'],
'extra_env_vars': None,
'python_interpreter': sys.executable,
'quiet': True,
'ssh_user': 'doe',
'log_path': mock_log_dir,
'validation_config': {},
'skip_list': None
}
self._set_args(arglist)
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
self.cmd.take_action(parsed_args)
mock_run.assert_called_with(**run_called_args)
@mock.patch('validations_libs.constants.VALIDATIONS_LOG_BASEDIR')
@mock.patch('getpass.getuser',
return_value='doe')
@mock.patch('validations_libs.validation_actions.ValidationActions.'
'run_validations',
return_value=fakes.FAKE_SUCCESS_RUN)
@mock.patch('os.path.exists', return_value=True)
def test_run_with_config(self, mock_exists,
mock_run, mock_user,
mock_log_dir):
arglist = ['--validation', 'foo', '--config', 'config.cfg']
verifylist = [('validation_name', ['foo']),
('config', 'config.cfg')]
run_called_args = {
'inventory': 'localhost',
'limit_hosts': None,
'group': [],
'category': [],
'product': [],
'extra_vars': None,
'validations_dir': '/usr/share/ansible/validation-playbooks',
'base_dir': '/usr/share/ansible',
'validation_name': ['foo'],
'extra_env_vars': None,
'python_interpreter': sys.executable,
'quiet': True,
'ssh_user': 'doe',
'log_path': mock_log_dir,
'validation_config': {},
'skip_list': None
}
self._set_args(arglist)
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
self.cmd.take_action(parsed_args)
mock_run.assert_called_with(**run_called_args)
@mock.patch('validations_libs.constants.VALIDATIONS_LOG_BASEDIR')
@mock.patch('yaml.safe_load', return_value={'key': 'value'})
@mock.patch('six.moves.builtins.open')
@mock.patch('getpass.getuser',
return_value='doe')
@mock.patch('validations_libs.validation_actions.ValidationActions.'
'run_validations',
return_value=copy.deepcopy(fakes.FAKE_SUCCESS_RUN))
@mock.patch('validations_libs.utils.load_config', return_value={})
def test_run_command_with_skip_list(self, mock_config, mock_run,
mock_user, mock_open,
mock_yaml, mock_log_dir):
run_called_args = {
'inventory': 'localhost',
'limit_hosts': None,
'group': [],
'category': [],
'product': [],
'extra_vars': None,
'validations_dir': '/usr/share/ansible/validation-playbooks',
'base_dir': '/usr/share/ansible',
'validation_name': ['foo'],
'extra_env_vars': None,
'python_interpreter': sys.executable,
'quiet': True,
'ssh_user': 'doe',
'log_path': mock_log_dir,
'validation_config': {},
'skip_list': {'key': 'value'}
}
arglist = ['--validation', 'foo',
'--skiplist', '/foo/skip.yaml']
verifylist = [('validation_name', ['foo']),
('skip_list', '/foo/skip.yaml')]
self._set_args(arglist)
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
self.cmd.take_action(parsed_args)
mock_run.assert_called_with(**run_called_args)
@mock.patch('validations_libs.constants.VALIDATIONS_LOG_BASEDIR')
@mock.patch('yaml.safe_load', return_value=[{'key': 'value'}])
@mock.patch('six.moves.builtins.open')
@mock.patch('getpass.getuser',
return_value='doe')
@mock.patch('validations_libs.validation_actions.ValidationActions.'
'run_validations',
return_value=copy.deepcopy(fakes.FAKE_SUCCESS_RUN))
@mock.patch('validations_libs.utils.load_config', return_value={})
def test_run_command_with_skip_list_bad_format(self, mock_config, mock_run,
mock_user, mock_open,
mock_yaml, mock_log_dir):
arglist = ['--validation', 'foo',
'--skiplist', '/foo/skip.yaml']
verifylist = [('validation_name', ['foo']),
('skip_list', '/foo/skip.yaml')]
self._set_args(arglist)
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
self.assertRaises(RuntimeError, self.cmd.take_action, parsed_args)
| 42.22242 | 81 | 0.555228 | 2,335 | 23,729 | 5.332762 | 0.07666 | 0.043367 | 0.064247 | 0.077096 | 0.916961 | 0.912143 | 0.90355 | 0.89841 | 0.88026 | 0.871667 | 0 | 0.001773 | 0.31059 | 23,729 | 561 | 82 | 42.297683 | 0.759399 | 0.023979 | 0 | 0.842315 | 0 | 0 | 0.294257 | 0.107808 | 0 | 0 | 0 | 0 | 0.035928 | 1 | 0.037924 | false | 0.025948 | 0.015968 | 0 | 0.055888 | 0.007984 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
2ef85248b3913a4e34d0f1de2c8af03fd1b5ad09 | 9,069 | py | Python | api/portal_api/administration.py | mkeller3/mapping_portal_api | 2a7112e0ddea7c4b662f0ec1a8d7b1ee4627cdd6 | [
"Apache-2.0"
] | 2 | 2021-08-09T12:03:31.000Z | 2021-09-11T08:23:22.000Z | api/portal_api/administration.py | mkeller3/open_source_mapping_portal | 2a7112e0ddea7c4b662f0ec1a8d7b1ee4627cdd6 | [
"Apache-2.0"
] | null | null | null | api/portal_api/administration.py | mkeller3/open_source_mapping_portal | 2a7112e0ddea7c4b662f0ec1a8d7b1ee4627cdd6 | [
"Apache-2.0"
] | null | null | null | from rest_framework.views import APIView
from rest_framework.response import Response
from rest_framework import status
from rest_framework.permissions import IsAuthenticated
from .serializers import *
from .helpers import *
from .constants import *
from drf_yasg.utils import swagger_auto_schema
from rest_framework_tracking.mixins import LoggingMixin
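# All views below follow one pattern: LoggingMixin records every request,
# access requires an authenticated user in the "admins" group, and POST/PUT/
# DELETE map to create/update/delete on the corresponding model through its
# serializer.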
# Map Service Configuration
class mapServiceConfigurationView(LoggingMixin, APIView):
    permission_classes = (IsAuthenticated,)
@swagger_auto_schema(request_body=mapServiceDataSerializer, operation_description="Create a map service within Mapping Portal")
def post(self, request):
user_groups = get_user_groups(request.user.username)
if 'admins' not in user_groups:
return Response(status=status.HTTP_401_UNAUTHORIZED)
serializer = mapServiceDataSerializer(data=request.data)
serializer.is_valid(raise_exception=True)
serializer.save(username=request.user.username)
return Response(serializer.data, status=status.HTTP_201_CREATED)
@swagger_auto_schema(request_body=mapServiceDataSerializer, operation_description="Update a map service within Mapping Portal")
def put(self, request):
user_groups = get_user_groups(request.user.username)
if 'admins' not in user_groups:
return Response(status=status.HTTP_401_UNAUTHORIZED)
try:
details = mapServiceData.objects.get(map_service_id=request.data['map_service_id'])
except mapServiceData.DoesNotExist:
return Response(status=status.HTTP_404_NOT_FOUND)
serializer = mapServiceDataSerializer(details, data=request.data)
serializer.is_valid(raise_exception=True)
serializer.save(username=details.username, updated_username=request.user.username)
return Response(serializer.data, status=status.HTTP_201_CREATED)
@swagger_auto_schema(request_body=genericMapServiceSerializer, operation_description="Delete a map service within Mapping Portal")
def delete(self, request):
user_groups = get_user_groups(request.user.username)
if 'admins' not in user_groups:
return Response(status=status.HTTP_401_UNAUTHORIZED)
try:
details = mapServiceData.objects.get(map_service_id=request.data['map_service_id'])
except mapServiceData.DoesNotExist:
return Response(status=status.HTTP_404_NOT_FOUND)
details.delete()
return Response(status=status.HTTP_204_NO_CONTENT)
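# A minimal client sketch for the view above. The URL path, token, and payload
# fields are assumptions for illustration only; URL routing and the serializer
# fields are defined outside this module:
#
#   import requests
#   requests.post(
#       "https://portal.example.com/api/map_service",
#       headers={"Authorization": "Token <admin-token>"},
#       json={"map_service_name": "parcels", "map_service_url": "https://..."},
#   )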
# Map Service Security
class mapServiceSecurityConfigurationView(LoggingMixin, APIView):
    permission_classes = (IsAuthenticated,)
@swagger_auto_schema(request_body=mapServiceSecurityDataSerializer, operation_description="Create a map service security within Mapping Portal")
def post(self, request):
user_groups = get_user_groups(request.user.username)
if 'admins' not in user_groups:
return Response(status=status.HTTP_401_UNAUTHORIZED)
serializer = mapServiceSecurityDataSerializer(data=request.data)
serializer.is_valid(raise_exception=True)
serializer.save(username=request.user.username)
return Response(serializer.data, status=status.HTTP_201_CREATED)
@swagger_auto_schema(request_body=mapServiceSecurityDataSerializer, operation_description="Update a map service security within Mapping Portal")
def put(self, request):
user_groups = get_user_groups(request.user.username)
if 'admins' not in user_groups:
return Response(status=status.HTTP_401_UNAUTHORIZED)
try:
details = mapSecurityData.objects.get(map_service_security_id=request.data['map_service_security_id'])
except mapSecurityData.DoesNotExist:
return Response(status=status.HTTP_404_NOT_FOUND)
serializer = mapServiceSecurityDataSerializer(details, data=request.data)
serializer.is_valid(raise_exception=True)
serializer.save(username=details.username, updated_username=request.user.username)
return Response(serializer.data, status=status.HTTP_201_CREATED)
@swagger_auto_schema(request_body=genericMapServiceSecuritySerializer, operation_description="Delete a map service security within Mapping Portal")
def delete(self, request):
user_groups = get_user_groups(request.user.username)
if 'admins' not in user_groups:
return Response(status=status.HTTP_401_UNAUTHORIZED)
try:
details = mapSecurityData.objects.get(map_service_security_id=request.data['map_service_security_id'])
except mapSecurityData.DoesNotExist:
return Response(status=status.HTTP_404_NOT_FOUND)
details.delete()
return Response(status=status.HTTP_204_NO_CONTENT)
# Blocked Users
class blockedUserView(LoggingMixin, APIView):
    permission_classes = (IsAuthenticated,)
@swagger_auto_schema(request_body=blockedUserDataSerializer, operation_description="Create a blocked user within Mapping Portal")
def post(self, request):
user_groups = get_user_groups(request.user.username)
if 'admins' not in user_groups:
return Response(status=status.HTTP_401_UNAUTHORIZED)
serializer = blockedUserDataSerializer(data=request.data)
serializer.is_valid(raise_exception=True)
serializer.save(username=request.user.username)
return Response(serializer.data, status=status.HTTP_201_CREATED)
@swagger_auto_schema(request_body=blockedUserDataSerializer, operation_description="Update a blocked user within Mapping Portal")
def put(self, request):
user_groups = get_user_groups(request.user.username)
if 'admins' not in user_groups:
return Response(status=status.HTTP_401_UNAUTHORIZED)
try:
details = blockedUserData.objects.get(blocked_user_id=request.data['blocked_user_id'])
except blockedUserData.DoesNotExist:
return Response(status=status.HTTP_404_NOT_FOUND)
serializer = blockedUserDataSerializer(details, data=request.data)
serializer.is_valid(raise_exception=True)
serializer.save(username=details.username, updated_username=request.user.username)
return Response(serializer.data, status=status.HTTP_201_CREATED)
@swagger_auto_schema(request_body=genericBlockedUserDataSerializer, operation_description="Delete a map service security within Mapping Portal")
def delete(self, request):
user_groups = get_user_groups(request.user.username)
if 'admins' not in user_groups:
return Response(status=status.HTTP_401_UNAUTHORIZED)
try:
details = blockedUserData.objects.get(blocked_user_id=request.data['blocked_user_id'])
except blockedUserData.DoesNotExist:
return Response(status=status.HTTP_404_NOT_FOUND)
details.delete()
return Response(status=status.HTTP_204_NO_CONTENT)
# Alerts
class alertView(LoggingMixin, APIView):
    permission_classes = (IsAuthenticated,)
@swagger_auto_schema(request_body=alertDataSerializer, operation_description="Create an alert within Mapping Portal")
def post(self, request):
user_groups = get_user_groups(request.user.username)
if 'admins' not in user_groups:
return Response(status=status.HTTP_401_UNAUTHORIZED)
serializer = alertDataSerializer(data=request.data)
serializer.is_valid(raise_exception=True)
serializer.save(username=request.user.username)
return Response(serializer.data, status=status.HTTP_201_CREATED)
@swagger_auto_schema(request_body=alertDataSerializer, operation_description="Update an alert within Mapping Portal")
def put(self, request):
user_groups = get_user_groups(request.user.username)
if 'admins' not in user_groups:
return Response(status=status.HTTP_401_UNAUTHORIZED)
try:
details = alertData.objects.get(alert_id=request.data['alert_id'])
except alertData.DoesNotExist:
return Response(status=status.HTTP_404_NOT_FOUND)
serializer = alertDataSerializer(details, data=request.data)
serializer.is_valid(raise_exception=True)
serializer.save(username=details.username, updated_username=request.user.username)
return Response(serializer.data, status=status.HTTP_201_CREATED)
@swagger_auto_schema(request_body=genericAlertDataSerializer, operation_description="Delete an alert within Mapping Portal")
def delete(self, request):
user_groups = get_user_groups(request.user.username)
if 'admins' not in user_groups:
return Response(status=status.HTTP_401_UNAUTHORIZED)
try:
details = alertData.objects.get(alert_id=request.data['alert_id'])
except alertData.DoesNotExist:
return Response(status=status.HTTP_404_NOT_FOUND)
details.delete()
return Response(status=status.HTTP_204_NO_CONTENT) | 53.662722 | 151 | 0.743301 | 1,025 | 9,069 | 6.345366 | 0.096585 | 0.055351 | 0.078721 | 0.095941 | 0.872079 | 0.872079 | 0.861162 | 0.852399 | 0.791205 | 0.791205 | 0 | 0.012917 | 0.180505 | 9,069 | 169 | 152 | 53.662722 | 0.862217 | 0.007388 | 0 | 0.778523 | 0 | 0 | 0.079907 | 0.005112 | 0 | 0 | 0 | 0 | 0 | 1 | 0.080537 | false | 0 | 0.060403 | 0 | 0.409396 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
2efd71efe0cf75b62252fde7109aa6d1c1582041 | 85 | py | Python | examples/trapile.py | renning22/python-sc2 | 5e21c2b8a334d135c40b21f664ccb067a7296dee | [
"MIT"
] | null | null | null | examples/trapile.py | renning22/python-sc2 | 5e21c2b8a334d135c40b21f664ccb067a7296dee | [
"MIT"
] | null | null | null | examples/trapile.py | renning22/python-sc2 | 5e21c2b8a334d135c40b21f664ccb067a7296dee | [
"MIT"
] | null | null | null |
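# Placeholder decorators for the python-sc2 examples: both are currently no-ops
# that return the wrapped function unchanged, presumably reserving the names
# for weapon-state checks to be added later.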
def weapon_ready(func):
return func
def weapon_cooldown(func):
return func
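# A minimal usage sketch (hypothetical bot method; the decorators add no
# behaviour yet):
#
#   @weapon_ready
#   def attack_target(unit, target):
#       return unit.attack(target)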
| 12.142857 | 26 | 0.717647 | 12 | 85 | 4.916667 | 0.5 | 0.305085 | 0.474576 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.211765 | 85 | 6 | 27 | 14.166667 | 0.880597 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
25adfde348f272a3340f2c011732ca752078aa0f | 30,440 | py | Python | lib/nnsysident/nnsysident/models/models.py | mohammadbashiri/bashiri-et-al-2021 | c7c15ea0bf165d4d3db2ff63a04a1e78c29bf44c | [
"MIT"
] | 2 | 2021-12-04T20:01:00.000Z | 2021-12-05T19:59:02.000Z | lib/nnsysident/nnsysident/models/models.py | mohammadbashiri/bashiri-et-al-2021 | c7c15ea0bf165d4d3db2ff63a04a1e78c29bf44c | [
"MIT"
] | 1 | 2021-12-15T20:50:04.000Z | 2021-12-15T20:50:04.000Z | lib/nnsysident/nnsysident/models/models.py | mohammadbashiri/bashiri-et-al-2021 | c7c15ea0bf165d4d3db2ff63a04a1e78c29bf44c | [
"MIT"
] | 1 | 2021-09-15T12:26:17.000Z | 2021-09-15T12:26:17.000Z | import numpy as np
from torch import nn
import copy
from nnfabrik.utility.nn_helpers import set_random_seed, get_dims_for_loader_dict
from neuralpredictors.layers.readouts import (
MultipleFullGaussian2d,
MultiplePointPooled2d,
MultipleSpatialXFeatureLinear,
MultipleFullSXF,
)
from ..utility.data_helpers import unpack_data_info
from neuralpredictors.layers.cores import TransferLearningCore, SE2dCore
class Encoder(nn.Module):
def __init__(self, core, readout, elu_offset):
super().__init__()
self.core = core
self.readout = readout
self.offset = elu_offset
def forward(self, *args, data_key=None, detach_core=False, **kwargs):
x = args[0]
x = self.core(x)
if detach_core:
x = x.detach()
if "sample" in kwargs:
x = self.readout(x, data_key=data_key, sample=kwargs["sample"])
else:
x = self.readout(x, data_key=data_key)
return nn.functional.elu(x + self.offset) + 1
def regularizer(self, data_key, detach_core=False):
return int(
not detach_core
) * self.core.regularizer() + self.readout.regularizer(data_key)
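# A minimal sketch of how Encoder is used (names are illustrative, not part of
# this module): the core maps images of shape (batch, channels, h, w) to a
# feature tensor, the session-specific readout maps that to (batch, n_neurons),
# and the shifted ELU keeps the predicted rates strictly positive:
#
#   model = Encoder(core, readout, elu_offset=0)
#   rates = model(images, data_key="session_1")  # (batch, n_neurons), all > 0
#   loss = loss_fn(rates, responses) + model.regularizer(data_key="session_1")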
def se2d_fullgaussian2d(
dataloaders,
seed,
elu_offset=0,
data_info=None,
transfer_state_dict=None,
# core args
hidden_channels=64,
input_kern=9,
hidden_kern=7,
layers=4,
gamma_input=6.3831,
skip=0,
bias=False,
final_nonlinearity=True,
momentum=0.9,
pad_input=False,
batch_norm=True,
hidden_dilation=1,
laplace_padding=None,
input_regularizer="LaplaceL2norm",
stack=-1,
se_reduction=32,
n_se_blocks=0,
depth_separable=True,
linear=False,
# readout args
init_mu_range=0.3,
init_sigma=0.1,
readout_bias=True,
gamma_readout=0.0076,
gauss_type="full",
grid_mean_predictor={
"type": "cortex",
"input_dimensions": 2,
"hidden_layers": 0,
"hidden_features": 30,
"final_tanh": True,
},
share_features=False,
share_grid=False,
share_transform=False,
init_noise=1e-3,
init_transform_scale=0.2,
):
"""
Model class of a SE2dCore and a Gaussian readout)
Args:
dataloaders: a dictionary of dataloaders, one loader per session
in the format {'data_key': dataloader object, .. }
seed: random seed
elu_offset: Offset for the output non-linearity [F.elu(x + self.offset)]
grid_mean_predictor: if not None, needs to be a dictionary of the form
{
'type': 'cortex',
'input_dimensions': 2,
'hidden_layers':0,
'hidden_features':20,
'final_tanh': False,
}
In that case the datasets need to have the property `neurons.cell_motor_coordinates`
share_features: whether to share features between readouts. This requires that the datasets
have the properties `neurons.multi_match_id` which are used for matching. Every dataset
has to have all these ids and cannot have any more.
share_grid: whether to share the grid between neurons. This requires that the datasets
have the properties `neurons.multi_match_id` which are used for matching. Every dataset
has to have all these ids and cannot have any more.
share_transform: whether to share the transform from the grid_mean_predictor between neurons. This requires that the datasets
have the properties `neurons.multi_match_id` which are used for matching. Every dataset
has to have all these ids and cannot have any more.
init_noise: noise for initialization of weights
        init_transform_scale: scale of the weights of the randomly initialized grid_mean_predictor network
all other args: See Documentation of SE2dCore in neuralpredictors.layers.cores and
FullGaussian2d in neuralpredictors.layers.readouts
Returns: An initialized model which consists of model.core and model.readout
"""
if transfer_state_dict is not None:
print(
"Transfer state_dict given. This will only have an effect in the bayesian hypersearch. See: TrainedModelBayesianTransfer "
)
if data_info is not None:
n_neurons_dict, in_shapes_dict, input_channels = unpack_data_info(data_info)
else:
if "train" in dataloaders.keys():
dataloaders = dataloaders["train"]
# Obtain the named tuple fields from the first entry of the first dataloader in the dictionary
in_name, out_name = next(iter(list(dataloaders.values())[0]))._fields
session_shape_dict = get_dims_for_loader_dict(dataloaders)
n_neurons_dict = {k: v[out_name][1] for k, v in session_shape_dict.items()}
in_shapes_dict = {k: v[in_name] for k, v in session_shape_dict.items()}
input_channels = [v[in_name][1] for v in session_shape_dict.values()]
core_input_channels = (
list(input_channels.values())[0]
if isinstance(input_channels, dict)
else input_channels[0]
)
source_grids = None
grid_mean_predictor_type = None
if grid_mean_predictor is not None:
grid_mean_predictor = copy.deepcopy(grid_mean_predictor)
grid_mean_predictor_type = grid_mean_predictor.pop("type")
if grid_mean_predictor_type == "cortex":
input_dim = grid_mean_predictor.pop("input_dimensions", 2)
source_grids = {}
for k, v in dataloaders.items():
# real data
try:
if v.dataset.neurons.animal_ids[0] != 0:
source_grids[k] = v.dataset.neurons.cell_motor_coordinates[
:, :input_dim
]
# simulated data -> get random linear non-degenerate transform of true positions
else:
source_grid_true = v.dataset.neurons.center[:, :input_dim]
det = 0.0
loops = 0
grid_bias = np.random.rand(2) * 3
while det < 5.0 and loops < 100:
matrix = np.random.rand(2, 2) * 3
det = np.linalg.det(matrix)
loops += 1
assert det > 5.0, "Did not find a non-degenerate matrix"
source_grids[k] = np.add(
(matrix @ source_grid_true.T).T, grid_bias
)
except FileNotFoundError:
print(
"Dataset type is not recognized to be from Baylor College of Medicine."
)
source_grids[k] = v.dataset.neurons.cell_motor_coordinates[
:, :input_dim
]
elif grid_mean_predictor_type == "shared":
pass
else:
raise ValueError(
"Grid mean predictor type {} not understood.".format(
grid_mean_predictor_type
)
)
shared_match_ids = None
if share_features or share_grid:
shared_match_ids = {
k: v.dataset.neurons.multi_match_id for k, v in dataloaders.items()
}
all_multi_unit_ids = set(np.hstack(shared_match_ids.values()))
for match_id in shared_match_ids.values():
assert len(set(match_id) & all_multi_unit_ids) == len(
all_multi_unit_ids
), "All multi unit IDs must be present in all datasets"
set_random_seed(seed)
core = SE2dCore(
input_channels=core_input_channels,
hidden_channels=hidden_channels,
input_kern=input_kern,
hidden_kern=hidden_kern,
layers=layers,
gamma_input=gamma_input,
skip=skip,
final_nonlinearity=final_nonlinearity,
bias=bias,
momentum=momentum,
pad_input=pad_input,
batch_norm=batch_norm,
hidden_dilation=hidden_dilation,
laplace_padding=laplace_padding,
input_regularizer=input_regularizer,
stack=stack,
se_reduction=se_reduction,
n_se_blocks=n_se_blocks,
depth_separable=depth_separable,
linear=linear,
)
readout = MultipleFullGaussian2d(
core,
in_shape_dict=in_shapes_dict,
n_neurons_dict=n_neurons_dict,
init_mu_range=init_mu_range,
bias=readout_bias,
init_sigma=init_sigma,
gamma_readout=gamma_readout,
gauss_type=gauss_type,
grid_mean_predictor=grid_mean_predictor,
grid_mean_predictor_type=grid_mean_predictor_type,
source_grids=source_grids,
share_features=share_features,
share_grid=share_grid,
share_transform=share_transform,
shared_match_ids=shared_match_ids,
init_noise=init_noise,
init_transform_scale=init_transform_scale,
)
# initializing readout bias to mean response
if readout_bias and data_info is None:
for key, value in dataloaders.items():
_, targets = next(iter(value))
readout[key].bias.data = targets.mean(0)
model = Encoder(core, readout, elu_offset)
return model
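# A minimal construction sketch, assuming `dataloaders` maps session keys to
# loaders yielding (images, responses) batches as the nnfabrik pipeline
# provides; "session_1" is an illustrative key:
#
#   model = se2d_fullgaussian2d(dataloaders, seed=42)
#   predictions = model(images, data_key="session_1")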
def se2d_pointpooled(
dataloaders,
seed,
elu_offset=0,
data_info=None,
# core args
hidden_channels=64,
input_kern=9, # core args
hidden_kern=7,
layers=4,
gamma_input=46.402,
bias=False,
skip=0,
final_nonlinearity=True,
momentum=0.9,
pad_input=False,
batch_norm=True,
hidden_dilation=1,
laplace_padding=None,
input_regularizer="LaplaceL2norm",
stack=-1,
se_reduction=32,
n_se_blocks=0,
depth_separable=True,
linear=False,
# readout args
pool_steps=2,
pool_kern=3,
readout_bias=True,
gamma_readout=0.0207,
init_range=0.2,
):
"""
Model class of a SE2dCore and a pointpooled (spatial transformer) readout
Args:
dataloaders: a dictionary of dataloaders, one loader per session
in the format {'data_key': dataloader object, .. }
seed: random seed
elu_offset: Offset for the output non-linearity [F.elu(x + self.offset)]
all other args: See Documentation of SE2dCore in neuralpredictors.layers.cores and
PointPooled2D in neuralpredictors.layers.readouts
Returns: An initialized model which consists of model.core and model.readout
"""
if data_info is not None:
n_neurons_dict, in_shapes_dict, input_channels = unpack_data_info(data_info)
else:
if "train" in dataloaders.keys():
dataloaders = dataloaders["train"]
# Obtain the named tuple fields from the first entry of the first dataloader in the dictionary
in_name, out_name = next(iter(list(dataloaders.values())[0]))._fields
session_shape_dict = get_dims_for_loader_dict(dataloaders)
n_neurons_dict = {k: v[out_name][1] for k, v in session_shape_dict.items()}
in_shapes_dict = {k: v[in_name] for k, v in session_shape_dict.items()}
input_channels = [v[in_name][1] for v in session_shape_dict.values()]
core_input_channels = (
list(input_channels.values())[0]
if isinstance(input_channels, dict)
else input_channels[0]
)
set_random_seed(seed)
core = SE2dCore(
input_channels=core_input_channels,
hidden_channels=hidden_channels,
input_kern=input_kern,
hidden_kern=hidden_kern,
layers=layers,
gamma_input=gamma_input,
bias=bias,
skip=skip,
final_nonlinearity=final_nonlinearity,
momentum=momentum,
pad_input=pad_input,
batch_norm=batch_norm,
hidden_dilation=hidden_dilation,
laplace_padding=laplace_padding,
input_regularizer=input_regularizer,
stack=stack,
se_reduction=se_reduction,
n_se_blocks=n_se_blocks,
depth_separable=depth_separable,
linear=linear,
)
readout = MultiplePointPooled2d(
core,
in_shape_dict=in_shapes_dict,
n_neurons_dict=n_neurons_dict,
pool_steps=pool_steps,
pool_kern=pool_kern,
bias=readout_bias,
gamma_readout=gamma_readout,
init_range=init_range,
)
# initializing readout bias to mean response
if readout_bias and data_info is None:
for key, value in dataloaders.items():
_, targets = next(iter(value))
readout[key].bias.data = targets.mean(0)
model = Encoder(core, readout, elu_offset)
return model
def se2d_spatialxfeaturelinear(
dataloaders,
seed,
elu_offset=0,
data_info=None,
# core args
hidden_channels=64,
input_kern=9,
hidden_kern=7,
layers=4,
gamma_input=20.0,
skip=0,
final_nonlinearity=True,
momentum=0.9,
pad_input=False,
batch_norm=True,
hidden_dilation=1,
laplace_padding=None,
input_regularizer="LaplaceL2norm",
stack=-1,
se_reduction=32,
n_se_blocks=0,
depth_separable=True,
linear=False,
    # readout args
init_noise=4.1232e-05,
readout_bias=True,
gamma_readout=0.0019,
normalize=False,
):
"""
Model class of a SE2d core and a spatialXfeature (factorized) readout
    Args:
        dataloaders: a dictionary of dataloaders, one loader per session
                     in the format {'data_key': dataloader object, .. }
        seed: random seed
        elu_offset: Offset for the output non-linearity [F.elu(x + self.offset)]
        all other args: See Documentation of SE2dCore in neuralpredictors.layers.cores and
                        SpatialXFeatureLinear in neuralpredictors.layers.readouts
    Returns: An initialized model which consists of model.core and model.readout
"""
if data_info is not None:
n_neurons_dict, in_shapes_dict, input_channels = unpack_data_info(data_info)
else:
if "train" in dataloaders.keys():
dataloaders = dataloaders["train"]
# Obtain the named tuple fields from the first entry of the first dataloader in the dictionary
in_name, out_name = next(iter(list(dataloaders.values())[0]))._fields
session_shape_dict = get_dims_for_loader_dict(dataloaders)
n_neurons_dict = {k: v[out_name][1] for k, v in session_shape_dict.items()}
in_shapes_dict = {k: v[in_name] for k, v in session_shape_dict.items()}
input_channels = [v[in_name][1] for v in session_shape_dict.values()]
core_input_channels = (
list(input_channels.values())[0]
if isinstance(input_channels, dict)
else input_channels[0]
)
set_random_seed(seed)
core = SE2dCore(
input_channels=core_input_channels,
hidden_channels=hidden_channels,
input_kern=input_kern,
hidden_kern=hidden_kern,
layers=layers,
gamma_input=gamma_input,
skip=skip,
final_nonlinearity=final_nonlinearity,
bias=False,
momentum=momentum,
pad_input=pad_input,
batch_norm=batch_norm,
hidden_dilation=hidden_dilation,
laplace_padding=laplace_padding,
input_regularizer=input_regularizer,
stack=stack,
se_reduction=se_reduction,
n_se_blocks=n_se_blocks,
depth_separable=depth_separable,
linear=linear,
)
readout = MultipleSpatialXFeatureLinear(
core,
in_shape_dict=in_shapes_dict,
n_neurons_dict=n_neurons_dict,
init_noise=init_noise,
bias=readout_bias,
gamma_readout=gamma_readout,
normalize=normalize,
)
# initializing readout bias to mean response
if readout_bias and data_info is None:
for key, value in dataloaders.items():
_, targets = next(iter(value))
readout[key].bias.data = targets.mean(0)
model = Encoder(core, readout, elu_offset)
return model
def se2d_fullSXF(
dataloaders,
seed,
elu_offset=0,
data_info=None,
transfer_state_dict=None,
# core args
hidden_channels=64,
input_kern=9,
hidden_kern=7,
layers=4,
gamma_input=6.3831,
skip=0,
bias=False,
final_nonlinearity=True,
momentum=0.9,
pad_input=False,
batch_norm=True,
hidden_dilation=1,
laplace_padding=None,
input_regularizer="LaplaceL2norm",
stack=-1,
se_reduction=32,
n_se_blocks=0,
depth_separable=True,
linear=False,
init_noise=4.1232e-05,
normalize=False,
readout_bias=True,
gamma_readout=0.0076,
share_features=False,
):
"""
Model class of a SE2dCore and a factorized (sxf) readout
Args:
dataloaders: a dictionary of dataloaders, one loader per session
in the format {'data_key': dataloader object, .. }
seed: random seed
elu_offset: Offset for the output non-linearity [F.elu(x + self.offset)]
all other args: See Documentation of SE2dCore in neuralpredictors.layers.cores and
fullSXF in neuralpredictors.layers.readouts
Returns: An initialized model which consists of model.core and model.readout
"""
if transfer_state_dict is not None:
print(
"Transfer state_dict given. This will only have an effect in the bayesian hypersearch. See: TrainedModelBayesianTransfer "
)
if data_info is not None:
n_neurons_dict, in_shapes_dict, input_channels = unpack_data_info(data_info)
else:
if "train" in dataloaders.keys():
dataloaders = dataloaders["train"]
# Obtain the named tuple fields from the first entry of the first dataloader in the dictionary
in_name, out_name = next(iter(list(dataloaders.values())[0]))._fields
session_shape_dict = get_dims_for_loader_dict(dataloaders)
n_neurons_dict = {k: v[out_name][1] for k, v in session_shape_dict.items()}
in_shapes_dict = {k: v[in_name] for k, v in session_shape_dict.items()}
input_channels = [v[in_name][1] for v in session_shape_dict.values()]
core_input_channels = (
list(input_channels.values())[0]
if isinstance(input_channels, dict)
else input_channels[0]
)
shared_match_ids = None
if share_features:
shared_match_ids = {
k: v.dataset.neurons.multi_match_id for k, v in dataloaders.items()
}
all_multi_unit_ids = set(np.hstack(shared_match_ids.values()))
for match_id in shared_match_ids.values():
assert len(set(match_id) & all_multi_unit_ids) == len(
all_multi_unit_ids
), "All multi unit IDs must be present in all datasets"
set_random_seed(seed)
core = SE2dCore(
input_channels=core_input_channels,
hidden_channels=hidden_channels,
input_kern=input_kern,
hidden_kern=hidden_kern,
layers=layers,
gamma_input=gamma_input,
skip=skip,
final_nonlinearity=final_nonlinearity,
bias=bias,
momentum=momentum,
pad_input=pad_input,
batch_norm=batch_norm,
hidden_dilation=hidden_dilation,
laplace_padding=laplace_padding,
input_regularizer=input_regularizer,
stack=stack,
se_reduction=se_reduction,
n_se_blocks=n_se_blocks,
depth_separable=depth_separable,
linear=linear,
)
readout = MultipleFullSXF(
core,
in_shape_dict=in_shapes_dict,
n_neurons_dict=n_neurons_dict,
init_noise=init_noise,
bias=readout_bias,
gamma_readout=gamma_readout,
normalize=normalize,
share_features=share_features,
shared_match_ids=shared_match_ids,
)
# initializing readout bias to mean response
if readout_bias and data_info is None:
for key, value in dataloaders.items():
_, targets = next(iter(value))
readout[key].bias.data = targets.mean(0)
model = Encoder(core, readout, elu_offset)
return model
def taskdriven_fullgaussian2d(
dataloaders,
seed,
elu_offset=0,
data_info=None,
# core args
tl_model_name="vgg16",
layers=4,
pretrained=True,
final_batchnorm=True,
final_nonlinearity=True,
momentum=0.1,
fine_tune=False,
# readout args
init_mu_range=0.3,
init_sigma=0.1,
readout_bias=True,
gamma_readout=0.0076,
gauss_type="full",
grid_mean_predictor={
"type": "cortex",
"input_dimensions": 2,
"hidden_layers": 0,
"hidden_features": 30,
"final_tanh": True,
},
share_features=False,
share_grid=False,
share_transform=False,
init_noise=1e-3,
init_transform_scale=0.2,
):
"""
Model class of a task-driven transfer core and a Gaussian readout
Args:
dataloaders: a dictionary of dataloaders, one loader per session
in the format {'data_key': dataloader object, .. }
seed: random seed
elu_offset: Offset for the output non-linearity [F.elu(x + self.offset)]
grid_mean_predictor: if not None, needs to be a dictionary of the form
{
'type': 'cortex',
'input_dimensions': 2,
'hidden_layers':0,
'hidden_features':20,
'final_tanh': False,
}
In that case the datasets need to have the property `neurons.cell_motor_coordinates`
share_features: whether to share features between readouts. This requires that the datasets
have the properties `neurons.multi_match_id` which are used for matching. Every dataset
has to have all these ids and cannot have any more.
share_grid: whether to share the grid between neurons. This requires that the datasets
have the properties `neurons.multi_match_id` which are used for matching. Every dataset
has to have all these ids and cannot have any more.
share_transform: whether to share the transform from the grid_mean_predictor between neurons. This requires that the datasets
have the properties `neurons.multi_match_id` which are used for matching. Every dataset
has to have all these ids and cannot have any more.
init_noise: noise for initialization of weights
        init_transform_scale: scale of the weights of the randomly initialized grid_mean_predictor network
all other args: See Documentation of TransferLearningCore in neuralpredictors.layers.cores and
FullGaussian2d in neuralpredictors.layers.readouts
Returns: An initialized model which consists of model.core and model.readout
"""
    if data_info is not None:
        n_neurons_dict, in_shapes_dict, input_channels = unpack_data_info(data_info)
    else:
        if "train" in dataloaders.keys():
            dataloaders = dataloaders["train"]

        # Obtain the named tuple fields from the first entry of the first dataloader in the dictionary
        in_name, out_name = next(iter(list(dataloaders.values())[0]))._fields

        session_shape_dict = get_dims_for_loader_dict(dataloaders)
        n_neurons_dict = {k: v[out_name][1] for k, v in session_shape_dict.items()}
        in_shapes_dict = {k: v[in_name] for k, v in session_shape_dict.items()}
        input_channels = [v[in_name][1] for v in session_shape_dict.values()]

    core_input_channels = (
        list(input_channels.values())[0]
        if isinstance(input_channels, dict)
        else input_channels[0]
    )

    source_grids = None
    grid_mean_predictor_type = None
    if grid_mean_predictor is not None:
        grid_mean_predictor = copy.deepcopy(grid_mean_predictor)
        grid_mean_predictor_type = grid_mean_predictor.pop("type")
        if grid_mean_predictor_type == "cortex":
            input_dim = grid_mean_predictor.pop("input_dimensions", 2)
            source_grids = {}
            for k, v in dataloaders.items():
                # real data
                try:
                    if v.dataset.neurons.animal_ids[0] != 0:
                        source_grids[k] = v.dataset.neurons.cell_motor_coordinates[
                            :, :input_dim
                        ]
                    # simulated data -> get random linear non-degenerate transform of true positions
                    else:
                        source_grid_true = v.dataset.neurons.center[:, :input_dim]
                        det = 0.0
                        loops = 0
                        grid_bias = np.random.rand(2) * 3
                        # resample until the random 2x2 matrix is far enough from singular
                        while det < 5.0 and loops < 100:
                            matrix = np.random.rand(2, 2) * 3
                            det = np.linalg.det(matrix)
                            loops += 1
                        assert det > 5.0, "Did not find a non-degenerate matrix"
                        source_grids[k] = np.add(
                            (matrix @ source_grid_true.T).T, grid_bias
                        )
                except FileNotFoundError:
                    print(
                        "Dataset type is not recognized to be from Baylor College of Medicine."
                    )
                    source_grids[k] = v.dataset.neurons.cell_motor_coordinates[
                        :, :input_dim
                    ]
        elif grid_mean_predictor_type == "shared":
            pass
        else:
            raise ValueError(
                "Grid mean predictor type {} not understood.".format(
                    grid_mean_predictor_type
                )
            )

    shared_match_ids = None
    if share_features or share_grid:
        shared_match_ids = {
            k: v.dataset.neurons.multi_match_id for k, v in dataloaders.items()
        }
        all_multi_unit_ids = set(np.hstack(shared_match_ids.values()))
        for match_id in shared_match_ids.values():
            assert len(set(match_id) & all_multi_unit_ids) == len(
                all_multi_unit_ids
            ), "All multi unit IDs must be present in all datasets"

    set_random_seed(seed)

    core = TransferLearningCore(
        input_channels=core_input_channels,
        tl_model_name=tl_model_name,
        layers=layers,
        pretrained=pretrained,
        final_batchnorm=final_batchnorm,
        final_nonlinearity=final_nonlinearity,
        momentum=momentum,
        fine_tune=fine_tune,
    )

    readout = MultipleFullGaussian2d(
        core,
        in_shape_dict=in_shapes_dict,
        n_neurons_dict=n_neurons_dict,
        init_mu_range=init_mu_range,
        bias=readout_bias,
        init_sigma=init_sigma,
        gamma_readout=gamma_readout,
        gauss_type=gauss_type,
        grid_mean_predictor=grid_mean_predictor,
        grid_mean_predictor_type=grid_mean_predictor_type,
        source_grids=source_grids,
        share_features=share_features,
        share_grid=share_grid,
        shared_match_ids=shared_match_ids,
        share_transform=share_transform,
        init_noise=init_noise,
        init_transform_scale=init_transform_scale,
    )

    # initializing readout bias to mean response
    if readout_bias and data_info is None:
        for key, value in dataloaders.items():
            _, targets = next(iter(value))
            readout[key].bias.data = targets.mean(0)

    model = Encoder(core, readout, elu_offset)

    return model
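

# A minimal standalone sketch of the "simulated data" branch above: resample a
# random 2x2 matrix until its determinant clears a threshold, so the true
# neuron positions get remapped by a well-conditioned (hence invertible) affine
# map. The helper name and `true_positions` argument are hypothetical; the
# threshold and scaling mirror the loop in the function above.
import numpy as np


def _random_nondegenerate_transform(true_positions, threshold=5.0, max_tries=100):
    """Affinely remap an (n, 2) position array with det safely above threshold."""
    grid_bias = np.random.rand(2) * 3
    for _ in range(max_tries):
        matrix = np.random.rand(2, 2) * 3  # entries drawn from [0, 3)
        if np.linalg.det(matrix) > threshold:
            return (matrix @ true_positions.T).T + grid_bias
    raise ValueError("Did not find a non-degenerate matrix")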


def taskdriven_fullSXF(
    dataloaders,
    seed,
    elu_offset=0,
    data_info=None,
    # core args
    tl_model_name="vgg16",
    layers=4,
    pretrained=True,
    final_batchnorm=True,
    final_nonlinearity=True,
    momentum=0.1,
    fine_tune=False,
    # readout args
    init_noise=4.1232e-05,
    normalize=False,
    readout_bias=True,
    gamma_readout=0.0076,
    share_features=False,
):
    """
    Model class of a task-driven transfer core and a factorized (sxf) readout

    Args:
        dataloaders: a dictionary of dataloaders, one loader per session
            in the format {'data_key': dataloader object, .. }
        seed: random seed
        elu_offset: Offset for the output non-linearity [F.elu(x + self.offset)]
        all other args: See Documentation of TransferLearningCore in neuralpredictors.layers.cores and
            fullSXF in neuralpredictors.layers.readouts

    Returns: An initialized model which consists of model.core and model.readout
    """
    if data_info is not None:
        n_neurons_dict, in_shapes_dict, input_channels = unpack_data_info(data_info)
    else:
        if "train" in dataloaders.keys():
            dataloaders = dataloaders["train"]

        # Obtain the named tuple fields from the first entry of the first dataloader in the dictionary
        in_name, out_name = next(iter(list(dataloaders.values())[0]))._fields

        session_shape_dict = get_dims_for_loader_dict(dataloaders)
        n_neurons_dict = {k: v[out_name][1] for k, v in session_shape_dict.items()}
        in_shapes_dict = {k: v[in_name] for k, v in session_shape_dict.items()}
        input_channels = [v[in_name][1] for v in session_shape_dict.values()]

    core_input_channels = (
        list(input_channels.values())[0]
        if isinstance(input_channels, dict)
        else input_channels[0]
    )

    shared_match_ids = None
    if share_features:
        shared_match_ids = {
            k: v.dataset.neurons.multi_match_id for k, v in dataloaders.items()
        }
        all_multi_unit_ids = set(np.hstack(shared_match_ids.values()))
        for match_id in shared_match_ids.values():
            assert len(set(match_id) & all_multi_unit_ids) == len(
                all_multi_unit_ids
            ), "All multi unit IDs must be present in all datasets"

    set_random_seed(seed)

    core = TransferLearningCore(
        input_channels=core_input_channels,
        tl_model_name=tl_model_name,
        layers=layers,
        pretrained=pretrained,
        final_batchnorm=final_batchnorm,
        final_nonlinearity=final_nonlinearity,
        momentum=momentum,
        fine_tune=fine_tune,
    )

    readout = MultipleFullSXF(
        core,
        in_shape_dict=in_shapes_dict,
        n_neurons_dict=n_neurons_dict,
        init_noise=init_noise,
        bias=readout_bias,
        gamma_readout=gamma_readout,
        normalize=normalize,
        share_features=share_features,
        shared_match_ids=shared_match_ids,
    )

    # initializing readout bias to mean response
    if readout_bias and data_info is None:
        for key, value in dataloaders.items():
            _, targets = next(iter(value))
            readout[key].bias.data = targets.mean(0)

    model = Encoder(core, readout, elu_offset)

    return model
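

# The share_features path above only works when every session carries the same
# set of multi-match ids. A small runnable sketch of that consistency check
# with toy ids (the helper name and session keys are hypothetical):
import numpy as np


def _check_shared_match_ids(shared_match_ids):
    """Assert every session contains every multi-match id seen in any session."""
    all_multi_unit_ids = set(np.hstack(list(shared_match_ids.values())))
    for match_id in shared_match_ids.values():
        # equivalent to the len(set & all) == len(all) assertion used above
        assert set(match_id) >= all_multi_unit_ids, \
            "All multi unit IDs must be present in all datasets"


_check_shared_match_ids({
    "session-1": np.array([1, 2, 3]),
    "session-2": np.array([3, 2, 1]),
})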
| 34.669704 | 134 | 0.639192 | 3,818 | 30,440 | 4.837611 | 0.075956 | 0.033785 | 0.034976 | 0.020466 | 0.942122 | 0.940065 | 0.931727 | 0.927666 | 0.924039 | 0.91987 | 0 | 0.012811 | 0.28456 | 30,440 | 877 | 135 | 34.709236 | 0.835293 | 0.211367 | 0 | 0.868778 | 0 | 0 | 0.045416 | 0.002377 | 0 | 0 | 0 | 0 | 0.00905 | 1 | 0.013575 | false | 0.003017 | 0.010558 | 0.001508 | 0.037707 | 0.006033 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
25c7c12ad9e1a96d9bcb69ec28d6ebb49229cb25 | 110 | py | Python | 12_module_basic/16_controller/modb.py | hemuke/python | bc99f2b5aee997083ae31f59a2b33db48c8255f3 | [
"Apache-2.0"
] | null | null | null | 12_module_basic/16_controller/modb.py | hemuke/python | bc99f2b5aee997083ae31f59a2b33db48c8255f3 | [
"Apache-2.0"
] | null | null | null | 12_module_basic/16_controller/modb.py | hemuke/python | bc99f2b5aee997083ae31f59a2b33db48c8255f3 | [
"Apache-2.0"
] | null | null | null | import mod
print(mod.v)
print(mod.f())
print(mod.MyClass)
print(mod._v)
print(mod._f())
print(mod._MyClass)
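
# These prints only work if a sibling module `mod` defines both the public and
# the underscore-prefixed names. A hypothetical minimal mod.py (every name
# below is an assumption inferred from the lookups above); note that plain
# attribute access reaches _-prefixed names, which only `from mod import *`
# would skip:
#
#     v = "public variable"
#     _v = "private-by-convention variable"
#
#     def f():
#         return "public function"
#
#     def _f():
#         return "private-by-convention function"
#
#     class MyClass: pass
#     class _MyClass: pass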
| 11 | 19 | 0.709091 | 20 | 110 | 3.75 | 0.3 | 0.64 | 0.24 | 0.373333 | 0.88 | 0.88 | 0.88 | 0.88 | 0.88 | 0 | 0 | 0 | 0.090909 | 110 | 9 | 20 | 12.222222 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.142857 | 0 | 0.142857 | 0.857143 | 1 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 12 |
25d4932671f9bcd909079b0460c9958e37d582bb | 149 | py | Python | scavenge-site/forum.py | daemon/scavenge-server | b02c46a9932ef81bd849e4666eced44c1a4ffeec | [
"MIT"
] | null | null | null | scavenge-site/forum.py | daemon/scavenge-server | b02c46a9932ef81bd849e4666eced44c1a4ffeec | [
"MIT"
] | null | null | null | scavenge-site/forum.py | daemon/scavenge-server | b02c46a9932ef81bd849e4666eced44c1a4ffeec | [
"MIT"
] | null | null | null | import os
def register_user(username, password, email):
    # NOTE: formatting raw user input straight into a shell command is open to
    # shell injection; a subprocess-based alternative is sketched below.
    os.system("php /home/td/forum/add_user.php {} {} {}".format(username, password, email)) | 37.25 | 89 | 0.704698 | 21 | 149 | 4.904762 | 0.714286 | 0.31068 | 0.407767 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.120805 | 149 | 4 | 89 | 37.25 | 0.78626 | 0 | 0 | 0 | 0 | 0 | 0.272109 | 0.183673 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0.666667 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 8
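# A safer sketch of the same registration call, assuming the add_user.php
# script and its positional argument order are unchanged: passing the argument
# vector as a list avoids the shell entirely, so usernames, passwords, or
# emails containing shell metacharacters cannot inject commands.
import subprocess

def register_user_safe(username, password, email):
    # argv-style invocation: no shell, no string interpolation
    subprocess.run(
        ["php", "/home/td/forum/add_user.php", username, password, email],
        check=True,  # raise CalledProcessError if the PHP script exits non-zero
    )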
d37fef5ade8460d23f3aec4fc8bcac7f50abf56f | 36,458 | py | Python | unittest/scripts/auto/py_devapi/validation/collection_create_index.py | mueller/mysql-shell | 29bafc5692bd536a12c4e41c54cb587375fe52cf | [
"Apache-2.0"
] | 119 | 2016-04-14T14:16:22.000Z | 2022-03-08T20:24:38.000Z | unittest/scripts/auto/py_devapi/validation/collection_create_index.py | mueller/mysql-shell | 29bafc5692bd536a12c4e41c54cb587375fe52cf | [
"Apache-2.0"
] | 9 | 2017-04-26T20:48:42.000Z | 2021-09-07T01:52:44.000Z | unittest/scripts/auto/py_devapi/validation/collection_create_index.py | mueller/mysql-shell | 29bafc5692bd536a12c4e41c54cb587375fe52cf | [
"Apache-2.0"
] | 51 | 2016-07-20T05:06:48.000Z | 2022-03-09T01:20:53.000Z | #@<OUT> Create an index on a single field. 1 (WL10858-FR1_1)
*************************** 1. row ***************************
Table: my_coll
Non_unique: 1
Key_name: myIndex
Seq_in_index: 1
Column_name: <<<idx_col_1>>>
Collation: A
Cardinality: 0
Sub_part: 10
Packed: NULL
Null: YES
Index_type: BTREE
Comment:
Index_comment:
Visible: YES
Expression: NULL
#@<OUT> Create an index on a single field. 2 (WL10858-FR1_1)
*************************** 1. row ***************************
Table: my_coll
Create Table: CREATE TABLE `my_coll` (
`doc` json DEFAULT NULL,
`_id` varbinary(32) GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,_utf8mb4'$._id'))) STORED NOT NULL,
?{VER(>=8.0.19)}
`_json_schema` json GENERATED ALWAYS AS (_utf8mb4'{"type":"object"}') VIRTUAL,
?{}
`<<<idx_col_1>>>` text GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,_utf8mb4'$.myField'))) VIRTUAL,
PRIMARY KEY (`_id`),
?{VER(<8.0.19)}
KEY `myIndex` (`<<<idx_col_1>>>`(10))
?{}
?{VER(>=8.0.19)}
KEY `myIndex` (`<<<idx_col_1>>>`(10)),
CONSTRAINT `$val_strict_98ECC39AA1BEFEB54F58E37A530CD5D1BD7631C5` CHECK (json_schema_valid(`_json_schema`,`doc`)) /*!80016 NOT ENFORCED */
?{}
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
#@<OUT> Create an index on a single field with all the possibles options. 1 (WL10858-FR1_2)
*************************** 1. row ***************************
Table: my_coll
Non_unique: 1
Key_name: myIndex
Seq_in_index: 1
Column_name: <<<idx_col_1>>>
Collation: A
Cardinality: 0
Sub_part: 10
Packed: NULL
Null:
Index_type: BTREE
Comment:
Index_comment:
Visible: YES
Expression: NULL
#@<OUT> Create an index on a single field with all the possibles options. 2 (WL10858-FR1_2)
*************************** 1. row ***************************
Table: my_coll
Create Table: CREATE TABLE `my_coll` (
`doc` json DEFAULT NULL,
`_id` varbinary(32) GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,_utf8mb4'$._id'))) STORED NOT NULL,
?{VER(>=8.0.19)}
`_json_schema` json GENERATED ALWAYS AS (_utf8mb4'{"type":"object"}') VIRTUAL,
?{}
`<<<idx_col_1>>>` text GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,_utf8mb4'$.myField'))) VIRTUAL NOT NULL,
PRIMARY KEY (`_id`),
?{VER(<8.0.19)}
KEY `myIndex` (`<<<idx_col_1>>>`(10))
?{}
?{VER(>=8.0.19)}
KEY `myIndex` (`<<<idx_col_1>>>`(10)),
CONSTRAINT `$val_strict_98ECC39AA1BEFEB54F58E37A530CD5D1BD7631C5` CHECK (json_schema_valid(`_json_schema`,`doc`)) /*!80016 NOT ENFORCED */
?{}
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
#@<OUT> Create an index on multiple fields 1 (WL10858-FR1_3)
*************************** 1. row ***************************
Table: my_coll
Non_unique: 1
Key_name: myIndex
Seq_in_index: 1
Column_name: <<<idx_col_1>>>
Collation: A
Cardinality: 0
Sub_part: 10
Packed: NULL
Null: YES
Index_type: BTREE
Comment:
Index_comment:
Visible: YES
Expression: NULL
*************************** 2. row ***************************
Table: my_coll
Non_unique: 1
Key_name: myIndex
Seq_in_index: 2
Column_name: <<<idx_col_2>>>
Collation: A
Cardinality: 0
Sub_part: 10
Packed: NULL
Null: YES
Index_type: BTREE
Comment:
Index_comment:
Visible: YES
Expression: NULL
*************************** 3. row ***************************
Table: my_coll
Non_unique: 1
Key_name: myIndex
Seq_in_index: 3
Column_name: <<<idx_col_3>>>
Collation: A
Cardinality: 0
Sub_part: NULL
Packed: NULL
Null: YES
Index_type: BTREE
Comment:
Index_comment:
Visible: YES
Expression: NULL
#@<OUT> Create an index on multiple fields 2 (WL10858-FR1_3)
*************************** 1. row ***************************
Table: my_coll
Create Table: CREATE TABLE `my_coll` (
`doc` json DEFAULT NULL,
`_id` varbinary(32) GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,_utf8mb4'$._id'))) STORED NOT NULL,
?{VER(>=8.0.19)}
`_json_schema` json GENERATED ALWAYS AS (_utf8mb4'{"type":"object"}') VIRTUAL,
?{}
`<<<idx_col_1>>>` text GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,_utf8mb4'$.myField'))) VIRTUAL,
`<<<idx_col_2>>>` text GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,_utf8mb4'$.myField2'))) VIRTUAL,
?{VER(<8.0.19)}
`<<<idx_col_3>>>` int(11) GENERATED ALWAYS AS (json_extract(`doc`,_utf8mb4'$.myField3')) VIRTUAL,
?{}
?{VER(>=8.0.19)}
`<<<idx_col_3>>>` int GENERATED ALWAYS AS (json_extract(`doc`,_utf8mb4'$.myField3')) VIRTUAL,
?{}
PRIMARY KEY (`_id`),
?{VER(<8.0.19)}
KEY `myIndex` (`<<<idx_col_1>>>`(10),`<<<idx_col_2>>>`(10),`<<<idx_col_3>>>`)
?{}
?{VER(>=8.0.19)}
KEY `myIndex` (`<<<idx_col_1>>>`(10),`<<<idx_col_2>>>`(10),`<<<idx_col_3>>>`),
CONSTRAINT `$val_strict_98ECC39AA1BEFEB54F58E37A530CD5D1BD7631C5` CHECK (json_schema_valid(`_json_schema`,`doc`)) /*!80016 NOT ENFORCED */
?{}
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
#@<OUT> Create an index on multiple fields with all the possibles options. 1 (WL10858-FR1_4)
*************************** 1. row ***************************
Table: my_coll
Non_unique: 1
Key_name: myIndex
Seq_in_index: 1
Column_name: <<<idx_col_1>>>
Collation: A
Cardinality: 0
Sub_part: 10
Packed: NULL
Null: YES
Index_type: BTREE
Comment:
Index_comment:
Visible: YES
Expression: NULL
*************************** 2. row ***************************
Table: my_coll
Non_unique: 1
Key_name: myIndex
Seq_in_index: 2
Column_name: <<<idx_col_2>>>
Collation: A
Cardinality: 0
Sub_part: 10
Packed: NULL
Null:
Index_type: BTREE
Comment:
Index_comment:
Visible: YES
Expression: NULL
*************************** 3. row ***************************
Table: my_coll
Non_unique: 1
Key_name: myIndex
Seq_in_index: 3
Column_name: <<<idx_col_3>>>
Collation: A
Cardinality: 0
Sub_part: NULL
Packed: NULL
Null: YES
Index_type: BTREE
Comment:
Index_comment:
Visible: YES
Expression: NULL
#@<OUT> Create an index on multiple fields with all the possibles options. 2 (WL10858-FR1_4)
*************************** 1. row ***************************
Table: my_coll
Create Table: CREATE TABLE `my_coll` (
`doc` json DEFAULT NULL,
`_id` varbinary(32) GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,_utf8mb4'$._id'))) STORED NOT NULL,
?{VER(>=8.0.19)}
`_json_schema` json GENERATED ALWAYS AS (_utf8mb4'{"type":"object"}') VIRTUAL,
?{}
`<<<idx_col_1>>>` text GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,_utf8mb4'$.myField'))) VIRTUAL,
`<<<idx_col_2>>>` text GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,_utf8mb4'$.myField2'))) VIRTUAL NOT NULL,
?{VER(<8.0.19)}
`<<<idx_col_3>>>` int(11) GENERATED ALWAYS AS (json_extract(`doc`,_utf8mb4'$.myField3')) VIRTUAL,
?{}
?{VER(>=8.0.19)}
`<<<idx_col_3>>>` int GENERATED ALWAYS AS (json_extract(`doc`,_utf8mb4'$.myField3')) VIRTUAL,
?{}
PRIMARY KEY (`_id`),
?{VER(<8.0.19)}
KEY `myIndex` (`<<<idx_col_1>>>`(10),`<<<idx_col_2>>>`(10),`<<<idx_col_3>>>`)
?{}
?{VER(>=8.0.19)}
KEY `myIndex` (`<<<idx_col_1>>>`(10),`<<<idx_col_2>>>`(10),`<<<idx_col_3>>>`),
CONSTRAINT `$val_strict_98ECC39AA1BEFEB54F58E37A530CD5D1BD7631C5` CHECK (json_schema_valid(`_json_schema`,`doc`)) /*!80016 NOT ENFORCED */
?{}
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
#@<OUT> Create an index using a geojson datatype field. 1 (WL10858-FR1_5)
*************************** 1. row ***************************
Table: my_coll
Non_unique: 1
Key_name: myIndex
Seq_in_index: 1
Column_name: <<<idx_col_1>>>
Collation: A
Cardinality: 0
Sub_part: 32
Packed: NULL
Null:
Index_type: SPATIAL
Comment:
Index_comment:
Visible: YES
Expression: NULL
#@<OUT> Create an index using a geojson datatype field. 2 (WL10858-FR1_5)
*************************** 1. row ***************************
Table: my_coll
Create Table: CREATE TABLE `my_coll` (
`doc` json DEFAULT NULL,
`_id` varbinary(32) GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,_utf8mb4'$._id'))) STORED NOT NULL,
?{VER(>=8.0.19)}
`_json_schema` json GENERATED ALWAYS AS (_utf8mb4'{"type":"object"}') VIRTUAL,
?{}
`<<<idx_col_1>>>` geometry GENERATED ALWAYS AS (st_geomfromgeojson(json_extract(`doc`,_utf8mb4'$.myGeoJsonField'),1,4326)) STORED NOT NULL /*!80003 SRID 4326 */,
PRIMARY KEY (`_id`),
?{VER(<8.0.19)}
SPATIAL KEY `myIndex` (`<<<idx_col_1>>>`)
?{}
?{VER(>=8.0.19)}
SPATIAL KEY `myIndex` (`<<<idx_col_1>>>`),
CONSTRAINT `$val_strict_98ECC39AA1BEFEB54F58E37A530CD5D1BD7631C5` CHECK (json_schema_valid(`_json_schema`,`doc`)) /*!80016 NOT ENFORCED */
?{}
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
#@<OUT> Create an index using a geojson datatype field without specifying the required flag it should be set to True by default. 1 (WL10858-FR1_6)
*************************** 1. row ***************************
Table: my_coll
Non_unique: 1
Key_name: myIndex
Seq_in_index: 1
Column_name: <<<idx_col_1>>>
Collation: A
Cardinality: 0
Sub_part: 32
Packed: NULL
Null:
Index_type: SPATIAL
Comment:
Index_comment:
Visible: YES
Expression: NULL
#@<OUT> Create an index using a geojson datatype field without specifying the required flag it should be set to True by default. 2 (WL10858-FR1_6)
*************************** 1. row ***************************
Table: my_coll
Create Table: CREATE TABLE `my_coll` (
`doc` json DEFAULT NULL,
`_id` varbinary(32) GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,_utf8mb4'$._id'))) STORED NOT NULL,
?{VER(>=8.0.19)}
`_json_schema` json GENERATED ALWAYS AS (_utf8mb4'{"type":"object"}') VIRTUAL,
?{}
`<<<idx_col_1>>>` geometry GENERATED ALWAYS AS (st_geomfromgeojson(json_extract(`doc`,_utf8mb4'$.myGeoJsonField'),1,4326)) STORED NOT NULL /*!80003 SRID 4326 */,
PRIMARY KEY (`_id`),
?{VER(<8.0.19)}
SPATIAL KEY `myIndex` (`<<<idx_col_1>>>`)
?{}
?{VER(>=8.0.19)}
SPATIAL KEY `myIndex` (`<<<idx_col_1>>>`),
CONSTRAINT `$val_strict_98ECC39AA1BEFEB54F58E37A530CD5D1BD7631C5` CHECK (json_schema_valid(`_json_schema`,`doc`)) /*!80016 NOT ENFORCED */
?{}
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
#@<OUT> Create an index using a geojson datatype field with all the possibles options. 1 (WL10858-FR1_7)
*************************** 1. row ***************************
Table: my_coll
Non_unique: 1
Key_name: myIndex
Seq_in_index: 1
Column_name: <<<idx_col_1>>>
Collation: A
Cardinality: 0
Sub_part: 32
Packed: NULL
Null:
Index_type: SPATIAL
Comment:
Index_comment:
Visible: YES
Expression: NULL
#@<OUT> Create an index using a geojson datatype field with all the possibles options. 2 (WL10858-FR1_7)
*************************** 1. row ***************************
Table: my_coll
Create Table: CREATE TABLE `my_coll` (
`doc` json DEFAULT NULL,
`_id` varbinary(32) GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,_utf8mb4'$._id'))) STORED NOT NULL,
?{VER(>=8.0.19)}
`_json_schema` json GENERATED ALWAYS AS (_utf8mb4'{"type":"object"}') VIRTUAL,
?{}
`<<<idx_col_1>>>` geometry GENERATED ALWAYS AS (st_geomfromgeojson(json_extract(`doc`,_utf8mb4'$.myGeoJsonField'),2,4400)) STORED NOT NULL /*!80003 SRID 4400 */,
PRIMARY KEY (`_id`),
?{VER(<8.0.19)}
SPATIAL KEY `myIndex` (`<<<idx_col_1>>>`)
?{}
?{VER(>=8.0.19)}
SPATIAL KEY `myIndex` (`<<<idx_col_1>>>`),
CONSTRAINT `$val_strict_98ECC39AA1BEFEB54F58E37A530CD5D1BD7631C5` CHECK (json_schema_valid(`_json_schema`,`doc`)) /*!80016 NOT ENFORCED */
?{}
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
#@<OUT> Create an index using a datetime field. 1 (WL10858-FR1_8)
*************************** 1. row ***************************
Table: my_coll
Non_unique: 1
Key_name: myIndex
Seq_in_index: 1
Column_name: <<<idx_col_1>>>
Collation: A
Cardinality: 0
Sub_part: NULL
Packed: NULL
Null: YES
Index_type: BTREE
Comment:
Index_comment:
Visible: YES
Expression: NULL
#@<OUT> Create an index using a datetime field. 2 (WL10858-FR1_8)
*************************** 1. row ***************************
Table: my_coll
Create Table: CREATE TABLE `my_coll` (
`doc` json DEFAULT NULL,
`_id` varbinary(32) GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,_utf8mb4'$._id'))) STORED NOT NULL,
?{VER(>=8.0.19)}
`_json_schema` json GENERATED ALWAYS AS (_utf8mb4'{"type":"object"}') VIRTUAL,
?{}
`<<<idx_col_1>>>` datetime GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,_utf8mb4'$.myField'))) VIRTUAL,
PRIMARY KEY (`_id`),
?{VER(<8.0.19)}
KEY `myIndex` (`<<<idx_col_1>>>`)
?{}
?{VER(>=8.0.19)}
KEY `myIndex` (`<<<idx_col_1>>>`),
CONSTRAINT `$val_strict_98ECC39AA1BEFEB54F58E37A530CD5D1BD7631C5` CHECK (json_schema_valid(`_json_schema`,`doc`)) /*!80016 NOT ENFORCED */
?{}
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
#@<OUT> Create an index using a timestamp field. 1 (WL10858-FR1_9)
*************************** 1. row ***************************
Table: my_coll
Non_unique: 1
Key_name: myIndex
Seq_in_index: 1
Column_name: <<<idx_col_1>>>
Collation: A
Cardinality: 0
Sub_part: NULL
Packed: NULL
Null: YES
Index_type: BTREE
Comment:
Index_comment:
Visible: YES
Expression: NULL
#@<OUT> Create an index using a timestamp field. 2 (WL10858-FR1_9)
*************************** 1. row ***************************
Table: my_coll
Create Table: CREATE TABLE `my_coll` (
`doc` json DEFAULT NULL,
`_id` varbinary(32) GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,_utf8mb4'$._id'))) STORED NOT NULL,
?{VER(>=8.0.19)}
`_json_schema` json GENERATED ALWAYS AS (_utf8mb4'{"type":"object"}') VIRTUAL,
?{}
`<<<idx_col_1>>>` timestamp GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,_utf8mb4'$.myField'))) VIRTUAL NULL,
PRIMARY KEY (`_id`),
?{VER(<8.0.19)}
KEY `myIndex` (`<<<idx_col_1>>>`)
?{}
?{VER(>=8.0.19)}
KEY `myIndex` (`<<<idx_col_1>>>`),
CONSTRAINT `$val_strict_98ECC39AA1BEFEB54F58E37A530CD5D1BD7631C5` CHECK (json_schema_valid(`_json_schema`,`doc`)) /*!80016 NOT ENFORCED */
?{}
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
#@<OUT> Create an index using a time field. 1 (WL10858-FR1_10)
*************************** 1. row ***************************
Table: my_coll
Non_unique: 1
Key_name: myIndex
Seq_in_index: 1
Column_name: <<<idx_col_1>>>
Collation: A
Cardinality: 0
Sub_part: NULL
Packed: NULL
Null: YES
Index_type: BTREE
Comment:
Index_comment:
Visible: YES
Expression: NULL
#@<OUT> Create an index using a time field. 2 (WL10858-FR1_10)
*************************** 1. row ***************************
Table: my_coll
Create Table: CREATE TABLE `my_coll` (
`doc` json DEFAULT NULL,
`_id` varbinary(32) GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,_utf8mb4'$._id'))) STORED NOT NULL,
?{VER(>=8.0.19)}
`_json_schema` json GENERATED ALWAYS AS (_utf8mb4'{"type":"object"}') VIRTUAL,
?{}
`<<<idx_col_1>>>` time GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,_utf8mb4'$.myField'))) VIRTUAL,
PRIMARY KEY (`_id`),
?{VER(<8.0.19)}
KEY `myIndex` (`<<<idx_col_1>>>`)
?{}
?{VER(>=8.0.19)}
KEY `myIndex` (`<<<idx_col_1>>>`),
CONSTRAINT `$val_strict_98ECC39AA1BEFEB54F58E37A530CD5D1BD7631C5` CHECK (json_schema_valid(`_json_schema`,`doc`)) /*!80016 NOT ENFORCED */
?{}
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
#@<OUT> Create an index using a date field. 1 (WL10858-FR1_11)
*************************** 1. row ***************************
Table: my_coll
Non_unique: 1
Key_name: myIndex
Seq_in_index: 1
Column_name: <<<idx_col_1>>>
Collation: A
Cardinality: 0
Sub_part: NULL
Packed: NULL
Null: YES
Index_type: BTREE
Comment:
Index_comment:
Visible: YES
Expression: NULL
#@<OUT> Create an index using a date field. 2 (WL10858-FR1_11)
*************************** 1. row ***************************
Table: my_coll
Create Table: CREATE TABLE `my_coll` (
`doc` json DEFAULT NULL,
`_id` varbinary(32) GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,_utf8mb4'$._id'))) STORED NOT NULL,
?{VER(>=8.0.19)}
`_json_schema` json GENERATED ALWAYS AS (_utf8mb4'{"type":"object"}') VIRTUAL,
?{}
`<<<idx_col_1>>>` date GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,_utf8mb4'$.myField'))) VIRTUAL,
PRIMARY KEY (`_id`),
?{VER(<8.0.19)}
KEY `myIndex` (`<<<idx_col_1>>>`)
?{}
?{VER(>=8.0.19)}
KEY `myIndex` (`<<<idx_col_1>>>`),
CONSTRAINT `$val_strict_98ECC39AA1BEFEB54F58E37A530CD5D1BD7631C5` CHECK (json_schema_valid(`_json_schema`,`doc`)) /*!80016 NOT ENFORCED */
?{}
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
#@<OUT> Create an index using a numeric field. 1 (WL10858-FR1_12)
*************************** 1. row ***************************
Table: my_coll
Non_unique: 1
Key_name: myIndex
Seq_in_index: 1
Column_name: <<<idx_col_1>>>
Collation: A
Cardinality: 0
Sub_part: NULL
Packed: NULL
Null: YES
Index_type: BTREE
Comment:
Index_comment:
Visible: YES
Expression: NULL
#@<OUT> Create an index using a numeric field. 2 (WL10858-FR1_12)
*************************** 1. row ***************************
Table: my_coll
Create Table: CREATE TABLE `my_coll` (
`doc` json DEFAULT NULL,
`_id` varbinary(32) GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,_utf8mb4'$._id'))) STORED NOT NULL,
?{VER(>=8.0.19)}
`_json_schema` json GENERATED ALWAYS AS (_utf8mb4'{"type":"object"}') VIRTUAL,
?{}
`<<<idx_col_1>>>` decimal(10,0) unsigned GENERATED ALWAYS AS (json_extract(`doc`,_utf8mb4'$.myField')) VIRTUAL,
PRIMARY KEY (`_id`),
?{VER(<8.0.19)}
KEY `myIndex` (`<<<idx_col_1>>>`)
?{}
?{VER(>=8.0.19)}
KEY `myIndex` (`<<<idx_col_1>>>`),
CONSTRAINT `$val_strict_98ECC39AA1BEFEB54F58E37A530CD5D1BD7631C5` CHECK (json_schema_valid(`_json_schema`,`doc`)) /*!80016 NOT ENFORCED */
?{}
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
#@<OUT> FR1_13 Create an index using a decimal field. 1 (WL10858-FR1_13)
*************************** 1. row ***************************
Table: my_coll
Non_unique: 1
Key_name: myIndex
Seq_in_index: 1
Column_name: <<<idx_col_1>>>
Collation: A
Cardinality: 0
Sub_part: NULL
Packed: NULL
Null: YES
Index_type: BTREE
Comment:
Index_comment:
Visible: YES
Expression: NULL
#@<OUT> FR1_13 Create an index using a decimal field. 2 (WL10858-FR1_13)
*************************** 1. row ***************************
Table: my_coll
Create Table: CREATE TABLE `my_coll` (
`doc` json DEFAULT NULL,
`_id` varbinary(32) GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,_utf8mb4'$._id'))) STORED NOT NULL,
?{VER(>=8.0.19)}
`_json_schema` json GENERATED ALWAYS AS (_utf8mb4'{"type":"object"}') VIRTUAL,
?{}
`<<<idx_col_1>>>` decimal(10,0) GENERATED ALWAYS AS (json_extract(`doc`,_utf8mb4'$.myField')) VIRTUAL,
PRIMARY KEY (`_id`),
?{VER(<8.0.19)}
KEY `myIndex` (`<<<idx_col_1>>>`)
?{}
?{VER(>=8.0.19)}
KEY `myIndex` (`<<<idx_col_1>>>`),
CONSTRAINT `$val_strict_98ECC39AA1BEFEB54F58E37A530CD5D1BD7631C5` CHECK (json_schema_valid(`_json_schema`,`doc`)) /*!80016 NOT ENFORCED */
?{}
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
#@<OUT> Create an index using a double field. 1 (WL10858-FR1_14)
*************************** 1. row ***************************
Table: my_coll
Non_unique: 1
Key_name: myIndex
Seq_in_index: 1
Column_name: <<<idx_col_1>>>
Collation: A
Cardinality: 0
Sub_part: NULL
Packed: NULL
Null: YES
Index_type: BTREE
Comment:
Index_comment:
Visible: YES
Expression: NULL
#@<OUT> Create an index using a double field. 2 (WL10858-FR1_14)
*************************** 1. row ***************************
Table: my_coll
Create Table: CREATE TABLE `my_coll` (
`doc` json DEFAULT NULL,
`_id` varbinary(32) GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,_utf8mb4'$._id'))) STORED NOT NULL,
?{VER(>=8.0.19)}
`_json_schema` json GENERATED ALWAYS AS (_utf8mb4'{"type":"object"}') VIRTUAL,
?{}
`<<<idx_col_1>>>` double GENERATED ALWAYS AS (json_extract(`doc`,_utf8mb4'$.myField')) VIRTUAL,
PRIMARY KEY (`_id`),
?{VER(<8.0.19)}
KEY `myIndex` (`<<<idx_col_1>>>`)
?{}
?{VER(>=8.0.19)}
KEY `myIndex` (`<<<idx_col_1>>>`),
CONSTRAINT `$val_strict_98ECC39AA1BEFEB54F58E37A530CD5D1BD7631C5` CHECK (json_schema_valid(`_json_schema`,`doc`)) /*!80016 NOT ENFORCED */
?{}
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
#@<OUT> Create an index using a float field. 1 (WL10858-FR1_15)
*************************** 1. row ***************************
Table: my_coll
Non_unique: 1
Key_name: myIndex
Seq_in_index: 1
Column_name: <<<idx_col_1>>>
Collation: A
Cardinality: 0
Sub_part: NULL
Packed: NULL
Null: YES
Index_type: BTREE
Comment:
Index_comment:
Visible: YES
Expression: NULL
#@<OUT> Create an index using a float field. 2 (WL10858-FR1_15)
*************************** 1. row ***************************
Table: my_coll
Create Table: CREATE TABLE `my_coll` (
`doc` json DEFAULT NULL,
`_id` varbinary(32) GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,_utf8mb4'$._id'))) STORED NOT NULL,
?{VER(>=8.0.19)}
`_json_schema` json GENERATED ALWAYS AS (_utf8mb4'{"type":"object"}') VIRTUAL,
?{}
`<<<idx_col_1>>>` float unsigned GENERATED ALWAYS AS (json_extract(`doc`,_utf8mb4'$.myField')) VIRTUAL,
PRIMARY KEY (`_id`),
?{VER(<8.0.19)}
KEY `myIndex` (`<<<idx_col_1>>>`)
?{}
?{VER(>=8.0.19)}
KEY `myIndex` (`<<<idx_col_1>>>`),
CONSTRAINT `$val_strict_98ECC39AA1BEFEB54F58E37A530CD5D1BD7631C5` CHECK (json_schema_valid(`_json_schema`,`doc`)) /*!80016 NOT ENFORCED */
?{}
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
#@<OUT> Create an index using a real field. 1 (WL10858-FR1_16)
*************************** 1. row ***************************
Table: my_coll
Non_unique: 1
Key_name: myIndex
Seq_in_index: 1
Column_name: <<<idx_col_1>>>
Collation: A
Cardinality: 0
Sub_part: NULL
Packed: NULL
Null: YES
Index_type: BTREE
Comment:
Index_comment:
Visible: YES
Expression: NULL
#@<OUT> Create an index using a real field. 2 (WL10858-FR1_16)
*************************** 1. row ***************************
Table: my_coll
Create Table: CREATE TABLE `my_coll` (
`doc` json DEFAULT NULL,
`_id` varbinary(32) GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,_utf8mb4'$._id'))) STORED NOT NULL,
?{VER(>=8.0.19)}
`_json_schema` json GENERATED ALWAYS AS (_utf8mb4'{"type":"object"}') VIRTUAL,
?{}
`<<<idx_col_1>>>` double unsigned GENERATED ALWAYS AS (json_extract(`doc`,_utf8mb4'$.myField')) VIRTUAL,
PRIMARY KEY (`_id`),
?{VER(<8.0.19)}
KEY `myIndex` (`<<<idx_col_1>>>`)
?{}
?{VER(>=8.0.19)}
KEY `myIndex` (`<<<idx_col_1>>>`),
CONSTRAINT `$val_strict_98ECC39AA1BEFEB54F58E37A530CD5D1BD7631C5` CHECK (json_schema_valid(`_json_schema`,`doc`)) /*!80016 NOT ENFORCED */
?{}
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
#@<OUT> Create an index using a bigint field. 1 (WL10858-FR1_17)
*************************** 1. row ***************************
Table: my_coll
Non_unique: 1
Key_name: myIndex
Seq_in_index: 1
Column_name: <<<idx_col_1>>>
Collation: A
Cardinality: 0
Sub_part: NULL
Packed: NULL
Null: YES
Index_type: BTREE
Comment:
Index_comment:
Visible: YES
Expression: NULL
#@<OUT> Create an index using a bigint field. 2 (WL10858-FR1_17)
*************************** 1. row ***************************
Table: my_coll
Create Table: CREATE TABLE `my_coll` (
`doc` json DEFAULT NULL,
`_id` varbinary(32) GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,_utf8mb4'$._id'))) STORED NOT NULL,
?{VER(<8.0.19)}
`<<<idx_col_1>>>` bigint(20) GENERATED ALWAYS AS (json_extract(`doc`,_utf8mb4'$.myField')) VIRTUAL,
?{}
?{VER(>=8.0.19)}
`_json_schema` json GENERATED ALWAYS AS (_utf8mb4'{"type":"object"}') VIRTUAL,
`<<<idx_col_1>>>` bigint GENERATED ALWAYS AS (json_extract(`doc`,_utf8mb4'$.myField')) VIRTUAL,
?{}
PRIMARY KEY (`_id`),
?{VER(<8.0.19)}
KEY `myIndex` (`<<<idx_col_1>>>`)
?{}
?{VER(>=8.0.19)}
KEY `myIndex` (`<<<idx_col_1>>>`),
CONSTRAINT `$val_strict_98ECC39AA1BEFEB54F58E37A530CD5D1BD7631C5` CHECK (json_schema_valid(`_json_schema`,`doc`)) /*!80016 NOT ENFORCED */
?{}
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
#@<OUT> Create an index using a integer field. 1 (WL10858-FR1_18)
*************************** 1. row ***************************
Table: my_coll
Non_unique: 1
Key_name: myIndex
Seq_in_index: 1
Column_name: <<<idx_col_1>>>
Collation: A
Cardinality: 0
Sub_part: NULL
Packed: NULL
Null: YES
Index_type: BTREE
Comment:
Index_comment:
Visible: YES
Expression: NULL
#@<OUT> Create an index using a integer field. 2 (WL10858-FR1_18)
*************************** 1. row ***************************
Table: my_coll
Create Table: CREATE TABLE `my_coll` (
`doc` json DEFAULT NULL,
`_id` varbinary(32) GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,_utf8mb4'$._id'))) STORED NOT NULL,
?{VER(<8.0.19)}
`<<<idx_col_1>>>` int(10) unsigned GENERATED ALWAYS AS (json_extract(`doc`,_utf8mb4'$.myField')) VIRTUAL,
?{}
?{VER(>=8.0.19)}
`_json_schema` json GENERATED ALWAYS AS (_utf8mb4'{"type":"object"}') VIRTUAL,
`<<<idx_col_1>>>` int unsigned GENERATED ALWAYS AS (json_extract(`doc`,_utf8mb4'$.myField')) VIRTUAL,
?{}
PRIMARY KEY (`_id`),
?{VER(<8.0.19)}
KEY `myIndex` (`<<<idx_col_1>>>`)
?{}
?{VER(>=8.0.19)}
KEY `myIndex` (`<<<idx_col_1>>>`),
CONSTRAINT `$val_strict_98ECC39AA1BEFEB54F58E37A530CD5D1BD7631C5` CHECK (json_schema_valid(`_json_schema`,`doc`)) /*!80016 NOT ENFORCED */
?{}
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
#@<OUT> Create an index using a mediumint field. 1 (WL10858-FR1_19)
*************************** 1. row ***************************
Table: my_coll
Non_unique: 1
Key_name: myIndex
Seq_in_index: 1
Column_name: <<<idx_col_1>>>
Collation: A
Cardinality: 0
Sub_part: NULL
Packed: NULL
Null: YES
Index_type: BTREE
Comment:
Index_comment:
Visible: YES
Expression: NULL
#@<OUT> Create an index using a mediumint field. 2 (WL10858-FR1_19)
*************************** 1. row ***************************
Table: my_coll
Create Table: CREATE TABLE `my_coll` (
`doc` json DEFAULT NULL,
`_id` varbinary(32) GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,_utf8mb4'$._id'))) STORED NOT NULL,
?{VER(<8.0.19)}
`<<<idx_col_1>>>` mediumint(8) unsigned GENERATED ALWAYS AS (json_extract(`doc`,_utf8mb4'$.myField')) VIRTUAL,
?{}
?{VER(>=8.0.19)}
`_json_schema` json GENERATED ALWAYS AS (_utf8mb4'{"type":"object"}') VIRTUAL,
`<<<idx_col_1>>>` mediumint unsigned GENERATED ALWAYS AS (json_extract(`doc`,_utf8mb4'$.myField')) VIRTUAL,
?{}
PRIMARY KEY (`_id`),
?{VER(<8.0.19)}
KEY `myIndex` (`<<<idx_col_1>>>`)
?{}
?{VER(>=8.0.19)}
KEY `myIndex` (`<<<idx_col_1>>>`),
CONSTRAINT `$val_strict_98ECC39AA1BEFEB54F58E37A530CD5D1BD7631C5` CHECK (json_schema_valid(`_json_schema`,`doc`)) /*!80016 NOT ENFORCED */
?{}
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
#@<OUT> Create an index using a smallint field. 1 (WL10858-FR1_20)
*************************** 1. row ***************************
Table: my_coll
Non_unique: 1
Key_name: myIndex
Seq_in_index: 1
Column_name: <<<idx_col_1>>>
Collation: A
Cardinality: 0
Sub_part: NULL
Packed: NULL
Null: YES
Index_type: BTREE
Comment:
Index_comment:
Visible: YES
Expression: NULL
#@<OUT> Create an index using a smallint field. 2 (WL10858-FR1_20)
*************************** 1. row ***************************
Table: my_coll
Create Table: CREATE TABLE `my_coll` (
`doc` json DEFAULT NULL,
`_id` varbinary(32) GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,_utf8mb4'$._id'))) STORED NOT NULL,
?{VER(<8.0.19)}
`<<<idx_col_1>>>` smallint(6) GENERATED ALWAYS AS (json_extract(`doc`,_utf8mb4'$.myField')) VIRTUAL,
?{}
?{VER(>=8.0.19)}
`_json_schema` json GENERATED ALWAYS AS (_utf8mb4'{"type":"object"}') VIRTUAL,
`<<<idx_col_1>>>` smallint GENERATED ALWAYS AS (json_extract(`doc`,_utf8mb4'$.myField')) VIRTUAL,
?{}
PRIMARY KEY (`_id`),
?{VER(<8.0.19)}
KEY `myIndex` (`<<<idx_col_1>>>`)
?{}
?{VER(>=8.0.19)}
KEY `myIndex` (`<<<idx_col_1>>>`),
CONSTRAINT `$val_strict_98ECC39AA1BEFEB54F58E37A530CD5D1BD7631C5` CHECK (json_schema_valid(`_json_schema`,`doc`)) /*!80016 NOT ENFORCED */
?{}
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
#@<OUT> Create an index using a tinyint field. 1 (WL10858-FR1_21)
*************************** 1. row ***************************
Table: my_coll
Non_unique: 1
Key_name: myIndex
Seq_in_index: 1
Column_name: <<<idx_col_1>>>
Collation: A
Cardinality: 0
Sub_part: NULL
Packed: NULL
Null: YES
Index_type: BTREE
Comment:
Index_comment:
Visible: YES
Expression: NULL
#@<OUT> Create an index using a tinyint field. 2 (WL10858-FR1_21)
*************************** 1. row ***************************
Table: my_coll
Create Table: CREATE TABLE `my_coll` (
`doc` json DEFAULT NULL,
`_id` varbinary(32) GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,_utf8mb4'$._id'))) STORED NOT NULL,
?{VER(<8.0.19)}
`<<<idx_col_1>>>` tinyint(3) unsigned GENERATED ALWAYS AS (json_extract(`doc`,_utf8mb4'$.myField')) VIRTUAL,
?{}
?{VER(>=8.0.19)}
`_json_schema` json GENERATED ALWAYS AS (_utf8mb4'{"type":"object"}') VIRTUAL,
`<<<idx_col_1>>>` tinyint unsigned GENERATED ALWAYS AS (json_extract(`doc`,_utf8mb4'$.myField')) VIRTUAL,
?{}
PRIMARY KEY (`_id`),
?{VER(<8.0.19)}
KEY `myIndex` (`<<<idx_col_1>>>`)
?{}
?{VER(>=8.0.19)}
KEY `myIndex` (`<<<idx_col_1>>>`),
CONSTRAINT `$val_strict_98ECC39AA1BEFEB54F58E37A530CD5D1BD7631C5` CHECK (json_schema_valid(`_json_schema`,`doc`)) /*!80016 NOT ENFORCED */
?{}
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
#@<OUT> Verify that the drop_index function removes the index entry from the table schema of a collection. 1 (WL10858-FR4_1)
*************************** 1. row ***************************
Table: my_coll
Non_unique: 1
Key_name: myIndex
Seq_in_index: 1
Column_name: <<<idx_col_1>>>
Collation: A
Cardinality: 0
Sub_part: 10
Packed: NULL
Null: YES
Index_type: BTREE
Comment:
Index_comment:
Visible: YES
Expression: NULL
#@<OUT> Verify that the drop_index function removes the index entry from the table schema of a collection. 2 (WL10858-FR4_1)
*************************** 1. row ***************************
Table: my_coll
Create Table: CREATE TABLE `my_coll` (
`doc` json DEFAULT NULL,
`_id` varbinary(32) GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,_utf8mb4'$._id'))) STORED NOT NULL,
?{VER(>=8.0.19)}
`_json_schema` json GENERATED ALWAYS AS (_utf8mb4'{"type":"object"}') VIRTUAL,
?{}
`<<<idx_col_1>>>` text GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,_utf8mb4'$.myField'))) VIRTUAL,
PRIMARY KEY (`_id`),
?{VER(<8.0.19)}
KEY `myIndex` (`<<<idx_col_1>>>`(10))
?{}
?{VER(>=8.0.19)}
KEY `myIndex` (`<<<idx_col_1>>>`(10)),
CONSTRAINT `$val_strict_98ECC39AA1BEFEB54F58E37A530CD5D1BD7631C5` CHECK (json_schema_valid(`_json_schema`,`doc`)) /*!80016 NOT ENFORCED */
?{}
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
#@ Verify that the drop_index function removes the index entry from the table schema of a collection. 3 (WL10858-FR4_1)
|Empty set|
#@<OUT> Verify that the drop_index function removes the index entry from the table schema of a collection. 4 (WL10858-FR4_1)
*************************** 1. row ***************************
Table: my_coll
Create Table: CREATE TABLE `my_coll` (
`doc` json DEFAULT NULL,
`_id` varbinary(32) GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,_utf8mb4'$._id'))) STORED NOT NULL,
?{VER(<8.0.19)}
PRIMARY KEY (`_id`)
?{}
?{VER(>=8.0.19)}
`_json_schema` json GENERATED ALWAYS AS (_utf8mb4'{"type":"object"}') VIRTUAL,
PRIMARY KEY (`_id`),
CONSTRAINT `$val_strict_98ECC39AA1BEFEB54F58E37A530CD5D1BD7631C5` CHECK (json_schema_valid(`_json_schema`,`doc`)) /*!80016 NOT ENFORCED */
?{}
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
#@ Verify that the dropIndex silently succeeds if the index does not exist. (WL10858-FR4_2)
||
#@ Create an index with the name of an index that already exists. (WL10858-FR5_2)
||MySQL Error (1061): Duplicate key name 'myIndex'
#@ Create an index with a not valid JSON document definition. (WL10858-FR5_3) {sys.version_info[:2] < (3, 8)}
||coll.create_index('myIndex', {'fields': [{'field' = '$.myField', type = 'TEXT(10)'}]})
|| ^
||SyntaxError: invalid syntax
||coll.create_index('myIndex', {'fields': [{'field': '$.myField', 'type': 'TEXT(10)']})
|| ^
||SyntaxError: invalid syntax
||coll.create_index('myIndex', {'fields': [{'field': '$.myField', 'type': 'TEXT(10)'}})
|| ^
||SyntaxError: invalid syntax
#@ Create an index with a not valid JSON document definition. (WL10858-FR5_3) {sys.version_info[:2] >= (3, 8)}
||coll.create_index('myIndex', {'fields': [{'field' = '$.myField', type = 'TEXT(10)'}]})
|| ^
||SyntaxError: invalid syntax
||SyntaxError: closing parenthesis ']' does not match opening parenthesis '{'
||SyntaxError: closing parenthesis '}' does not match opening parenthesis '['
#@ Create an index where its definition is a JSON document but its structure is not valid. (WL10858-FR5_4)
||MySQL Error (5015): Invalid number of arguments, expected value for 'fields[0].field'
#@ Create an index with the index type not "INDEX" or "SPATIAL" (case insensitive). (WL10858-FR5_5)
||MySQL Error (5017): Argument value 'IDX' for index type is invalid
||MySQL Error (5017): Argument value 'SPATIAL_' for index type is invalid
||MySQL Error (5017): Argument value 'INVALID' for index type is invalid
#@ Create a 'SPATIAL' index with "required" flag set to False. (WL10858-FR5_6)
||MySQL Error (5117): GEOJSON index requires 'field.required: TRUE
#@ Create an index with an invalid "type" specified (type names are case insensitive). (WL10858-FR5_7)
||MySQL Error (5017): Invalid or unsupported type specification '_Text(10)'
||MySQL Error (5017): Invalid or unsupported type specification 'Invalid'
||MySQL Error (5017): Invalid or unsupported type specification 'Timestamps'
||MySQL Error (5017): Invalid or unsupported type specification 'Dates'
#@ Create an index specifiying geojson options for non geojson data type. (WL10858-FR5_8)
||MySQL Error (5017): Unsupported argument specification for '$.myField'
#@ Create an index with mismatched data types (WL10858-ET_1)
||MySQL Error (1292): Incorrect datetime value: '10' for column
#@ Create an index specifiying SPATIAL as the index type for a non spatial data type (WL10858-ET_2)
||MySQL Error (3106): 'Spatial index on virtual generated column' is not supported for generated columns.
#@ Create an index specifiying INDEX as the index type for a spatial data type (WL10858-ET_3)
||Column '$ix_gj_r_B4C4FDF5AD30671EF010BCE1E67FA76778A889F7' cannot be null
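# A hedged sketch of the MySQL Shell (Python mode) calls these expected outputs
# correspond to, reusing the call shape already visible in the FR5_3 chunks
# above; the schema and collection names are hypothetical:
#
#     coll = session.get_schema('test').get_collection('my_coll')
#     coll.create_index('myIndex', {'fields': [{'field': '$.myField', 'type': 'TEXT(10)'}]})
#     coll.drop_index('myIndex')  # per FR4_2, silently succeeds even if the index is absent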
| 35.848574 | 163 | 0.621373 | 4,756 | 36,458 | 4.537847 | 0.046047 | 0.030859 | 0.030164 | 0.024975 | 0.945278 | 0.937958 | 0.93629 | 0.935733 | 0.884163 | 0.882124 | 0 | 0.072881 | 0.165259 | 36,458 | 1,016 | 164 | 35.883858 | 0.636283 | 0 | 0 | 0.950282 | 0 | 0 | 0.03631 | 0.001542 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
d38cdb3065e0cb19ef6635c5b4f2cd05534883e6 | 12,106 | py | Python | myapp/tests/test_views.py | kimtaemila/movie-watchlist | de62218a824f54466bf71de2e74d86cf5d4262f0 | [
"CC0-1.0"
] | null | null | null | myapp/tests/test_views.py | kimtaemila/movie-watchlist | de62218a824f54466bf71de2e74d86cf5d4262f0 | [
"CC0-1.0"
] | null | null | null | myapp/tests/test_views.py | kimtaemila/movie-watchlist | de62218a824f54466bf71de2e74d86cf5d4262f0 | [
"CC0-1.0"
] | null | null | null | from django.contrib.auth.models import User
from django.test import TestCase, Client
from django.urls import reverse
from myapp.tests.test_models import create_movie, create_user_profile, create_playlist
import datetime
class TestViews(TestCase):

    def setUp(self):
        self.client = Client()
        credentials = {
            'username': 'TestUser',
            'password': 'user1234'
        }
        self.test_user = User.objects.create_user(**credentials)
        self.user_profile1 = create_user_profile(self.test_user, False)

    def test_home_GET(self):
        """
        Homepage test
        """
        url = reverse('myapp:home')
        response = self.client.get(url)

        # SUCCESS TEST
        self.assertEquals(response.status_code, 200)
        self.assertTemplateUsed(response, 'myapp/collection.html')
        # FAIL TEST
        self.assertNotEquals(response.status_code, not 200)

    def test_login_GET(self):
        """
        Login page test
        """
        url = reverse('myapp:login')
        response = self.client.get(url)

        # SUCCESS TEST
        self.assertEquals(response.status_code, 200)
        self.assertTemplateUsed(response, 'myapp/login.html')
        # FAIL TEST
        self.assertNotEquals(response.status_code, not 200)

    def test_login_POST(self):
        """
        Login Form test
        """
        url = reverse('myapp:login')
        form_data = {'username': 'TestUser', 'password': 'user1234'}
        response = self.client.post(url, form_data, follow=True)

        # SUCCESS TEST
        self.assertEquals(response.status_code, 200)
        self.assertTemplateUsed(response, 'myapp/collection.html')
        # FAIL TEST
        self.assertNotEquals(response.status_code, not 200)

    def test_logout_POST(self):
        """
        Logout page test
        """
        self.client.login(username='TestUser', password='user1234')
        url = reverse('myapp:logout')
        response = self.client.post(url, follow=True)

        # SUCCESS TEST
        self.assertEquals(response.status_code, 200)
        self.assertTemplateUsed(response, 'myapp/collection.html')
        # FAIL TEST
        self.assertNotEquals(response.status_code, not 200)

    def test_moviedetails_GET(self):
        """
        Movie details test
        """
        test_movie1 = create_movie(title='Released Test',
                                   releasedate=datetime.date(
                                       2016, 5, 13),
                                   )
        url = reverse('myapp:moviedetails', args=[test_movie1.slug])
        response = self.client.get(url)

        # SUCCESS TEST
        self.assertEquals(response.status_code, 200)
        self.assertTemplateUsed(response, 'myapp/moviedetails.html')
        # FAIL TEST
        self.assertNotEquals(response.status_code, not 200)

    def test_signup_GET(self):
        """
        Signup page test
        """
        url = reverse('myapp:signup')
        response = self.client.get(url)

        # SUCCESS TEST
        self.assertEquals(response.status_code, 200)
        self.assertTemplateUsed(response, 'myapp/signup.html')
        # FAIL TEST
        self.assertNotEquals(response.status_code, not 200)

    def test_signup_POST(self):
        """
        signup Form test
        """
        url = reverse('myapp:signup')
        form_data = {
            'first_name': 'Dummy',
            'last_name': 'User',
            'username': 'DummyUser',
            'email': 'dummy.user@gmail.com',
            'password1': 'user1234',
            'password2': 'user1234',
        }
        response = self.client.post(url, form_data, follow=True)

        # SUCCESS TEST
        self.assertEquals(response.status_code, 200)
        self.assertTemplateUsed(response, 'myapp/collection.html')
        # FAIL TEST
        self.assertNotEquals(response.status_code, not 200)

    def test_search_GET(self):
        """
        Search page test
        """
        url = reverse('myapp:search')
        response = self.client.get(url)

        # SUCCESS TEST
        self.assertEquals(response.status_code, 200)
        self.assertTemplateUsed(response, 'myapp/search.html')
        # FAIL TEST
        self.assertNotEquals(response.status_code, not 200)

    def test_search_POST(self):
        """
        search Form test
        """
        url = reverse('myapp:search')
        form_data = {'searched': 'Movie Test'}
        response = self.client.post(url, form_data, follow=True)

        # SUCCESS TEST
        self.assertEquals(response.status_code, 200)
        self.assertTemplateUsed(response, 'myapp/search.html')
        # FAIL TEST
        self.assertNotEquals(response.status_code, not 200)

    def test_userprofile_GET(self):
        """
        User Profile test
        """
        self.client.login(username='TestUser', password='user1234')
        url = reverse('myapp:userprofile')
        response = self.client.get(url)

        # SUCCESS TEST
        self.assertEquals(response.status_code, 200)
        self.assertTemplateUsed(response, 'myapp/userprofile.html')
        # FAIL TEST
        self.assertNotEquals(response.status_code, not 200)

    def test_playlistdetails_GET(self):
        """
        Playlist details test
        """
        self.client.login(username='TestUser', password='user1234')
        dummy_playlist = create_playlist(title='Test Playlist',
                                         createdby=self.test_user)
        url = reverse('myapp:playlistdetails', args=[dummy_playlist.slug])
        response = self.client.get(url)

        # SUCCESS TEST
        self.assertEquals(response.status_code, 200)
        self.assertTemplateUsed(response, 'myapp/playlistdetails.html')
        # FAIL TEST
        self.assertNotEquals(response.status_code, not 200)

    def test_createmovie_GET(self):
        """
        Create Movie page test
        """
        self.client.login(username='TestUser', password='user1234')
        url = reverse('myapp:createmovie')
        response = self.client.get(url, follow=True)

        # SUCCESS TEST
        self.assertEquals(response.status_code, 200)
        self.assertTemplateUsed(response, 'myapp/createmovie.html')
        # FAIL TEST
        self.assertNotEquals(response.status_code, not 200)

    def test_createmovie_POST(self):
        """
        Create Movie test
        """
        self.client.login(username='TestUser', password='user1234')
        url = reverse('myapp:createmovie')
        form_data = {
            'title': 'Dummy Movie 3',
            'releasedate': '05/31/2021',
            'language': 'en-US',
            'description': 'N/A'
        }
        response = self.client.post(url, form_data, follow=True)

        # SUCCESS TEST
        self.assertEquals(response.status_code, 200)
        self.assertTemplateUsed(response, 'myapp/createmovie.html')
        # FAIL TEST
        self.assertNotEquals(response.status_code, not 200)

    def test_requestmovie_GET(self):
        """
        Request Movie page test
        """
        self.client.login(username='TestUser', password='user1234')
        url = reverse('myapp:requestmovie')
        response = self.client.get(url, follow=True)

        # SUCCESS TEST
        self.assertEquals(response.status_code, 200)
        self.assertTemplateUsed(response, 'myapp/requestmovie.html')
        # FAIL TEST
        self.assertNotEquals(response.status_code, not 200)

    def test_requestmovie_POST(self):
        """
        Request Movie test
        """
        self.client.login(username='TestUser', password='user1234')
        url = reverse('myapp:requestmovie')
        form_data = {
            'movietitle': 'Dummy Movie 4',
            'releasedate': '12/31/2021',
            'language': 'en-US',
        }
        response = self.client.post(url, form_data, follow=True)

        # SUCCESS TEST
        self.assertEquals(response.status_code, 200)
        self.assertTemplateUsed(response, 'myapp/requestmovie.html')
        # FAIL TEST
        self.assertNotEquals(response.status_code, not 200)

    def test_createlist_GET(self):
        """
        Create list page test
        """
        self.client.login(username='TestUser', password='user1234')
        url = reverse('myapp:createlist')
        response = self.client.get(url, follow=True)

        # SUCCESS TEST
        self.assertEquals(response.status_code, 200)
        self.assertTemplateUsed(response, 'myapp/createlist.html')
        # FAIL TEST
        self.assertNotEquals(response.status_code, not 200)

    def test_createlist_POST(self):
        """
        Create list test
        """
        self.client.login(username='TestUser', password='user1234')
        url = reverse('myapp:createlist')
        form_data = {
            'title': 'Action',
            'description': 'N/A'
        }
        response = self.client.post(url, form_data, follow=True)

        # SUCCESS TEST
        self.assertEquals(response.status_code, 200)
        self.assertTemplateUsed(response, 'myapp/collection.html')
        # FAIL TEST
        self.assertNotEquals(response.status_code, not 200)

    def test_addtoplaylist_POST(self):
        """
        Add to Playlist test
        """
        self.client.login(username='TestUser', password='user1234')
        test_movie1 = create_movie(title='Released Test',
                                   releasedate=datetime.date(
                                       2016, 5, 13),
                                   )
        dummy_playlist = create_playlist(title='Test Playlist',
                                         createdby=self.test_user)
        url = reverse('myapp:addtoplaylist', args=[
                      dummy_playlist.slug, test_movie1.slug])
        response = self.client.post(url, follow=True)

        # SUCCESS TEST
        self.assertEquals(response.status_code, 200)
        self.assertTemplateUsed(response, 'myapp/moviedetails.html')
        # FAIL TEST
        self.assertNotEquals(response.status_code, not 200)

    def test_removefromplaylist_POST(self):
        """
        Remove from Playlist test
        """
        self.client.login(username='TestUser', password='user1234')
        test_movie1 = create_movie(title='Released Test',
                                   releasedate=datetime.date(
                                       2016, 5, 13),
                                   )
        dummy_playlist = create_playlist(title='Test Playlist',
                                         createdby=self.test_user)
        url = reverse('myapp:removefromplaylist',
                      args=[dummy_playlist.slug,
                            test_movie1.slug])
        response = self.client.post(url, follow=True)

        # SUCCESS TEST
        self.assertEquals(response.status_code, 200)
        self.assertTemplateUsed(response, 'myapp/playlistdetails.html')
        # FAIL TEST
        self.assertNotEquals(response.status_code, not 200)

    def test_donate_GET(self):
        """
        Donate page test
        """
        self.client.login(username='TestUser', password='user1234')
        url = reverse('myapp:donate')
        response = self.client.get(url)

        # SUCCESS TEST
        self.assertEquals(response.status_code, 200)
        self.assertTemplateUsed(response, 'myapp/donate.html')
        # FAIL TEST
        self.assertNotEquals(response.status_code, not 200)

    def test_donate_POST(self):
        """
        Donate test
        """
        self.client.login(username='TestUser', password='user1234')
        url = reverse('myapp:donate')
        form_data = {
            'payment': 'paid'
        }
        response = self.client.post(url, form_data, follow=True)

        # SUCCESS TEST
        self.assertEquals(response.status_code, 200)
        self.assertTemplateUsed(response, 'myapp/userprofile.html')
        # FAIL TEST
        self.assertNotEquals(response.status_code, not 200)
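
    # Every test above repeats the same SUCCESS/FAIL assertion pair. A
    # hypothetical helper method (an assumption, not part of the original
    # suite) that would collapse that boilerplate:
    def assert_view(self, response, template):
        """Shared SUCCESS/FAIL assertions mirroring the pattern used above."""
        # SUCCESS TEST
        self.assertEquals(response.status_code, 200)
        self.assertTemplateUsed(response, template)
        # FAIL TEST
        self.assertNotEquals(response.status_code, not 200)
    # usage sketch: self.assert_view(self.client.get(reverse('myapp:home')),
    #                                'myapp/collection.html')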
| 32.026455 | 86 | 0.586982 | 1,203 | 12,106 | 5.797174 | 0.092269 | 0.063091 | 0.108403 | 0.081302 | 0.826355 | 0.79653 | 0.795096 | 0.795096 | 0.787783 | 0.787783 | 0 | 0.028796 | 0.305799 | 12,106 | 377 | 87 | 32.111406 | 0.801047 | 0.07203 | 0 | 0.648515 | 0 | 0 | 0.130824 | 0.037929 | 0 | 0 | 0 | 0 | 0.311881 | 1 | 0.108911 | false | 0.084158 | 0.024752 | 0 | 0.138614 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
6ccb1fa1391ce5f18606bad7ac55e9848d9fc96f | 681 | py | Python | src/testcase/GN_Y201S/input_case/GN_Y201S_Timer_Time.py | maiyajj/AutoTest_script-Appium_Connect | f9c2c42c281a9e2f984acb4a72dda0694b053f22 | [
"Apache-2.0"
] | 28 | 2017-11-10T00:19:16.000Z | 2022-02-19T16:42:05.000Z | src/testcase/GN_Y201S/input_case/GN_Y201S_Timer_Time.py | maiyajj/AutoTest_script-Appium_Connect | f9c2c42c281a9e2f984acb4a72dda0694b053f22 | [
"Apache-2.0"
] | null | null | null | src/testcase/GN_Y201S/input_case/GN_Y201S_Timer_Time.py | maiyajj/AutoTest_script-Appium_Connect | f9c2c42c281a9e2f984acb4a72dda0694b053f22 | [
"Apache-2.0"
] | 23 | 2017-08-22T06:12:19.000Z | 2021-09-18T05:45:41.000Z | # coding=utf-8
try:
    from src.testcase.GN_Y201S.case.GN_Y201S_TIMER_TIME.GN_Y201S_TIMER_TIME_001 import *
    from src.testcase.GN_Y201S.case.GN_Y201S_TIMER_TIME.GN_Y201S_TIMER_TIME_002 import *
    from src.testcase.GN_Y201S.case.GN_Y201S_TIMER_TIME.GN_Y201S_TIMER_TIME_003 import *
    from src.testcase.GN_Y201S.case.GN_Y201S_TIMER_TIME.GN_Y201S_TIMER_TIME_004 import *
    from src.testcase.GN_Y201S.case.GN_Y201S_TIMER_TIME.GN_Y201S_TIMER_TIME_005 import *
    from src.testcase.GN_Y201S.case.GN_Y201S_TIMER_TIME.GN_Y201S_TIMER_TIME_006 import *
    from src.testcase.GN_Y201S.case.GN_Y201S_TIMER_TIME.GN_Y201S_TIMER_TIME_007 import *
except ImportError as e:
    print(e)
| 56.75 | 88 | 0.828194 | 122 | 681 | 4.163934 | 0.204918 | 0.28937 | 0.330709 | 0.440945 | 0.870079 | 0.870079 | 0.870079 | 0.870079 | 0.870079 | 0.870079 | 0 | 0.138662 | 0.099853 | 681 | 11 | 89 | 61.909091 | 0.690049 | 0.017621 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.8 | 0 | 0.8 | 0.1 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 12 |
6cf4d9139343cd0e087c8f297797fb7ed87c7d36 | 196 | py | Python | topi/python/topi/arm_cpu/__init__.py | mingwayzhang/tvm | 3b287c4d4e6d83e6fd30db47ffa3d5481a332a63 | [
"Apache-2.0"
] | 48 | 2020-07-29T18:09:23.000Z | 2021-10-09T01:53:33.000Z | topi/python/topi/arm_cpu/__init__.py | mingwayzhang/tvm | 3b287c4d4e6d83e6fd30db47ffa3d5481a332a63 | [
"Apache-2.0"
] | 9 | 2021-04-02T02:28:07.000Z | 2022-03-26T18:23:59.000Z | topi/python/topi/arm_cpu/__init__.py | mingwayzhang/tvm | 3b287c4d4e6d83e6fd30db47ffa3d5481a332a63 | [
"Apache-2.0"
] | 42 | 2020-08-01T06:41:24.000Z | 2022-01-20T10:33:08.000Z | """Schedule for ARM CPU"""
from . import conv2d
from . import depthwise_conv2d
from . import conv2d_transpose
from . import bitserial_conv2d
from . import bitserial_dense
from . import injective
| 21.777778 | 30 | 0.790816 | 26 | 196 | 5.807692 | 0.461538 | 0.397351 | 0.317881 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.023952 | 0.147959 | 196 | 8 | 31 | 24.5 | 0.88024 | 0.102041 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
6cf527acac82468693b4b06b3b06535bcbd567de | 13,063 | py | Python | Packages/backrefs/st3/backrefs/uniprops/unidata/joiningtype.py | aimee5/sublime_packages | 071e3d0a5892e177d7f93365b20ebccb3f60aedd | [
"MIT"
] | 2 | 2018-04-24T10:02:26.000Z | 2019-06-02T13:53:31.000Z | Packages/backrefs/st3/backrefs/uniprops/unidata/joiningtype.py | aimee5/sublime_packages | 071e3d0a5892e177d7f93365b20ebccb3f60aedd | [
"MIT"
] | null | null | null | Packages/backrefs/st3/backrefs/uniprops/unidata/joiningtype.py | aimee5/sublime_packages | 071e3d0a5892e177d7f93365b20ebccb3f60aedd | [
"MIT"
] | 2 | 2019-04-11T04:13:02.000Z | 2019-06-02T13:53:33.000Z | """Unicode Properties from Unicode version 6.1.0 (autogen)."""
from __future__ import unicode_literals
unicode_joining_type = {
"^c": "\u0000-\u063f\u0641-\u07f9\u07fb-\u200c\u200e-\U0010ffff",
"^d": "\u0000-\u061f\u0621-\u0625\u0627\u0629\u062f-\u0632\u0640\u0648\u064b-\u066d\u0670-\u0677\u0688-\u0699\u06c0\u06c3-\u06cb\u06cd\u06cf\u06d2-\u06f9\u06fd-\u06fe\u0700-\u0711\u0715-\u0719\u071e\u0728\u072a\u072c\u072f-\u074d\u0759-\u075b\u076b-\u076c\u0771\u0773-\u0774\u0778-\u0779\u0780-\u07c9\u07eb-\u0840\u0846\u0849\u084f\u0854\u0856-\u089f\u08a1\u08aa-\U0010ffff",
"^r": "\u0000-\u0621\u0626\u0628\u062a-\u062e\u0633-\u0647\u0649-\u0670\u0674\u0678-\u0687\u069a-\u06bf\u06c1-\u06c2\u06cc\u06ce\u06d0-\u06d1\u06d4\u06d6-\u06ed\u06f0-\u070f\u0711-\u0714\u071a-\u071d\u071f-\u0727\u0729\u072b\u072d-\u072e\u0730-\u074c\u074e-\u0758\u075c-\u076a\u076d-\u0770\u0772\u0775-\u0777\u077a-\u083f\u0841-\u0845\u0847-\u0848\u084a-\u084e\u0850-\u0853\u0855-\u08a9\u08ad-\U0010ffff",
"^t": "\u0000-\u00ac\u00ae-\u02ff\u0370-\u0482\u048a-\u0590\u05be\u05c0\u05c3\u05c6\u05c8-\u060f\u061b-\u064a\u0660-\u066f\u0671-\u06d5\u06dd-\u06de\u06e5-\u06e6\u06e9\u06ee-\u070e\u0710\u0712-\u072f\u074b-\u07a5\u07b1-\u07ea\u07f4-\u0815\u081a\u0824\u0828\u082e-\u0858\u085c-\u08e3\u08ff\u0903-\u0939\u093b\u093d-\u0940\u0949-\u094c\u094e-\u0950\u0958-\u0961\u0964-\u0980\u0982-\u09bb\u09bd-\u09c0\u09c5-\u09cc\u09ce-\u09e1\u09e4-\u0a00\u0a03-\u0a3b\u0a3d-\u0a40\u0a43-\u0a46\u0a49-\u0a4a\u0a4e-\u0a50\u0a52-\u0a6f\u0a72-\u0a74\u0a76-\u0a80\u0a83-\u0abb\u0abd-\u0ac0\u0ac6\u0ac9-\u0acc\u0ace-\u0ae1\u0ae4-\u0b00\u0b02-\u0b3b\u0b3d-\u0b3e\u0b40\u0b45-\u0b4c\u0b4e-\u0b55\u0b57-\u0b61\u0b64-\u0b81\u0b83-\u0bbf\u0bc1-\u0bcc\u0bce-\u0c3d\u0c41-\u0c45\u0c49\u0c4e-\u0c54\u0c57-\u0c61\u0c64-\u0cbb\u0cbd-\u0cbe\u0cc0-\u0cc5\u0cc7-\u0ccb\u0cce-\u0ce1\u0ce4-\u0d40\u0d45-\u0d4c\u0d4e-\u0d61\u0d64-\u0dc9\u0dcb-\u0dd1\u0dd5\u0dd7-\u0e30\u0e32-\u0e33\u0e3b-\u0e46\u0e4f-\u0eb0\u0eb2-\u0eb3\u0eba\u0ebd-\u0ec7\u0ece-\u0f17\u0f1a-\u0f34\u0f36\u0f38\u0f3a-\u0f70\u0f7f\u0f85\u0f88-\u0f8c\u0f98\u0fbd-\u0fc5\u0fc7-\u102c\u1031\u1038\u103b-\u103c\u103f-\u1057\u105a-\u105d\u1061-\u1070\u1075-\u1081\u1083-\u1084\u1087-\u108c\u108e-\u109c\u109e-\u135c\u1360-\u1711\u1715-\u1731\u1735-\u1751\u1754-\u1771\u1774-\u17b3\u17b6\u17be-\u17c5\u17c7-\u17c8\u17d4-\u17dc\u17de-\u180a\u180e-\u18a8\u18aa-\u191f\u1923-\u1926\u1929-\u1931\u1933-\u1938\u193c-\u1a16\u1a19-\u1a55\u1a57\u1a5f\u1a61\u1a63-\u1a64\u1a6d-\u1a72\u1a7d-\u1a7e\u1a80-\u1aff\u1b04-\u1b33\u1b35\u1b3b\u1b3d-\u1b41\u1b43-\u1b6a\u1b74-\u1b7f\u1b82-\u1ba1\u1ba6-\u1ba7\u1baa\u1bac-\u1be5\u1be7\u1bea-\u1bec\u1bee\u1bf2-\u1c2b\u1c34-\u1c35\u1c38-\u1ccf\u1cd3\u1ce1\u1ce9-\u1cec\u1cee-\u1cf3\u1cf5-\u1dbf\u1de7-\u1dfb\u1e00-\u200a\u200c-\u200d\u2010-\u2029\u202f-\u205f\u2065-\u2069\u2070-\u20cf\u20f1-\u2cee\u2cf2-\u2d7e\u2d80-\u2ddf\u2e00-\u3029\u302e-\u3098\u309b-\ua66e\ua673\ua67e-\ua69e\ua6a0-\ua6ef\ua6f2-\ua801\ua803-\ua805\ua807-\ua80a\ua80c-\ua824\ua827-\ua8c3\ua8c5-\ua8df\ua8f2-\ua925\ua92e-\ua946\ua952-\ua97f\ua983-\ua9b2\ua9b4-\ua9b5\ua9ba-\ua9bb\ua9bd-\uaa28\uaa2f-\uaa30\uaa33-\uaa34\uaa37-\uaa42\uaa44-\uaa4b\uaa4d-\uaaaf\uaab1\uaab5-\uaab6\uaab9-\uaabd\uaac0\uaac2-\uaaeb\uaaee-\uaaf5\uaaf7-\uabe4\uabe6-\uabe7\uabe9-\uabec\uabee-\ufb1d\ufb1f-\ufdff\ufe10-\ufe1f\ufe27-\ufefe\uff00-\ufff8\ufffc-\U000101fc\U000101fe-\U00010a00\U00010a04\U00010a07-\U00010a0b\U00010a10-\U00010a37\U00010a3b-\U00010a3e\U00010a40-\U00011000\U00011002-\U00011037\U00011047-\U0001107f\U00011082-\U000110b2\U000110b7-\U000110b8\U000110bb-\U000110bc\U000110be-\U000110ff\U00011103-\U00011126\U0001112c\U00011135-\U0001117f\U00011182-\U000111b5\U000111bf-\U000116aa\U000116ac\U000116ae-\U000116af\U000116b6\U000116b8-\U00016f8e\U00016f93-\U0001d166\U0001d16a-\U0001d172\U0001d183-\U0001d184\U0001d18c-\U0001d1a9\U0001d1ae-\U0001d241\U0001d245-\U000e0000\U000e0002-\U000e001f\U000e0080-\U000e00ff\U000e01f0-\U0010ffff",
"^u": "\u00ad\u0300-\u036f\u0483-\u0489\u0591-\u05bd\u05bf\u05c1-\u05c2\u05c4-\u05c5\u05c7\u0610-\u061a\u0620\u0622-\u065f\u066e-\u0673\u0675-\u06d3\u06d5-\u06dc\u06df-\u06e4\u06e7-\u06e8\u06ea-\u06ef\u06fa-\u06fc\u06ff\u070f-\u074a\u074d-\u077f\u07a6-\u07b0\u07ca-\u07f3\u07fa\u0816-\u0819\u081b-\u0823\u0825-\u0827\u0829-\u082d\u0840-\u0855\u0859-\u085b\u08a0\u08a2-\u08ac\u08e4-\u08fe\u0900-\u0902\u093a\u093c\u0941-\u0948\u094d\u0951-\u0957\u0962-\u0963\u0981\u09bc\u09c1-\u09c4\u09cd\u09e2-\u09e3\u0a01-\u0a02\u0a3c\u0a41-\u0a42\u0a47-\u0a48\u0a4b-\u0a4d\u0a51\u0a70-\u0a71\u0a75\u0a81-\u0a82\u0abc\u0ac1-\u0ac5\u0ac7-\u0ac8\u0acd\u0ae2-\u0ae3\u0b01\u0b3c\u0b3f\u0b41-\u0b44\u0b4d\u0b56\u0b62-\u0b63\u0b82\u0bc0\u0bcd\u0c3e-\u0c40\u0c46-\u0c48\u0c4a-\u0c4d\u0c55-\u0c56\u0c62-\u0c63\u0cbc\u0cbf\u0cc6\u0ccc-\u0ccd\u0ce2-\u0ce3\u0d41-\u0d44\u0d4d\u0d62-\u0d63\u0dca\u0dd2-\u0dd4\u0dd6\u0e31\u0e34-\u0e3a\u0e47-\u0e4e\u0eb1\u0eb4-\u0eb9\u0ebb-\u0ebc\u0ec8-\u0ecd\u0f18-\u0f19\u0f35\u0f37\u0f39\u0f71-\u0f7e\u0f80-\u0f84\u0f86-\u0f87\u0f8d-\u0f97\u0f99-\u0fbc\u0fc6\u102d-\u1030\u1032-\u1037\u1039-\u103a\u103d-\u103e\u1058-\u1059\u105e-\u1060\u1071-\u1074\u1082\u1085-\u1086\u108d\u109d\u135d-\u135f\u1712-\u1714\u1732-\u1734\u1752-\u1753\u1772-\u1773\u17b4-\u17b5\u17b7-\u17bd\u17c6\u17c9-\u17d3\u17dd\u180b-\u180d\u18a9\u1920-\u1922\u1927-\u1928\u1932\u1939-\u193b\u1a17-\u1a18\u1a56\u1a58-\u1a5e\u1a60\u1a62\u1a65-\u1a6c\u1a73-\u1a7c\u1a7f\u1b00-\u1b03\u1b34\u1b36-\u1b3a\u1b3c\u1b42\u1b6b-\u1b73\u1b80-\u1b81\u1ba2-\u1ba5\u1ba8-\u1ba9\u1bab\u1be6\u1be8-\u1be9\u1bed\u1bef-\u1bf1\u1c2c-\u1c33\u1c36-\u1c37\u1cd0-\u1cd2\u1cd4-\u1ce0\u1ce2-\u1ce8\u1ced\u1cf4\u1dc0-\u1de6\u1dfc-\u1dff\u200b\u200d-\u200f\u202a-\u202e\u2060-\u2064\u206a-\u206f\u20d0-\u20f0\u2cef-\u2cf1\u2d7f\u2de0-\u2dff\u302a-\u302d\u3099-\u309a\ua66f-\ua672\ua674-\ua67d\ua69f\ua6f0-\ua6f1\ua802\ua806\ua80b\ua825-\ua826\ua8c4\ua8e0-\ua8f1\ua926-\ua92d\ua947-\ua951\ua980-\ua982\ua9b3\ua9b6-\ua9b9\ua9bc\uaa29-\uaa2e\uaa31-\uaa32\uaa35-\uaa36\uaa43\uaa4c\uaab0\uaab2-\uaab4\uaab7-\uaab8\uaabe-\uaabf\uaac1\uaaec-\uaaed\uaaf6\uabe5\uabe8\uabed\ufb1e\ufe00-\ufe0f\ufe20-\ufe26\ufeff\ufff9-\ufffb\U000101fd\U00010a01-\U00010a03\U00010a05-\U00010a06\U00010a0c-\U00010a0f\U00010a38-\U00010a3a\U00010a3f\U00011001\U00011038-\U00011046\U00011080-\U00011081\U000110b3-\U000110b6\U000110b9-\U000110ba\U000110bd\U00011100-\U00011102\U00011127-\U0001112b\U0001112d-\U00011134\U00011180-\U00011181\U000111b6-\U000111be\U000116ab\U000116ad\U000116b0-\U000116b5\U000116b7\U00016f8f-\U00016f92\U0001d167-\U0001d169\U0001d173-\U0001d182\U0001d185-\U0001d18b\U0001d1aa-\U0001d1ad\U0001d242-\U0001d244\U000e0001\U000e0020-\U000e007f\U000e0100-\U000e01ef",
"c": "\u0640\u07fa\u200d",
"d": "\u0620\u0626\u0628\u062a-\u062e\u0633-\u063f\u0641-\u0647\u0649-\u064a\u066e-\u066f\u0678-\u0687\u069a-\u06bf\u06c1-\u06c2\u06cc\u06ce\u06d0-\u06d1\u06fa-\u06fc\u06ff\u0712-\u0714\u071a-\u071d\u071f-\u0727\u0729\u072b\u072d-\u072e\u074e-\u0758\u075c-\u076a\u076d-\u0770\u0772\u0775-\u0777\u077a-\u077f\u07ca-\u07ea\u0841-\u0845\u0847-\u0848\u084a-\u084e\u0850-\u0853\u0855\u08a0\u08a2-\u08a9",
"r": "\u0622-\u0625\u0627\u0629\u062f-\u0632\u0648\u0671-\u0673\u0675-\u0677\u0688-\u0699\u06c0\u06c3-\u06cb\u06cd\u06cf\u06d2-\u06d3\u06d5\u06ee-\u06ef\u0710\u0715-\u0719\u071e\u0728\u072a\u072c\u072f\u074d\u0759-\u075b\u076b-\u076c\u0771\u0773-\u0774\u0778-\u0779\u0840\u0846\u0849\u084f\u0854\u08aa-\u08ac",
"t": "\u00ad\u0300-\u036f\u0483-\u0489\u0591-\u05bd\u05bf\u05c1-\u05c2\u05c4-\u05c5\u05c7\u0610-\u061a\u064b-\u065f\u0670\u06d6-\u06dc\u06df-\u06e4\u06e7-\u06e8\u06ea-\u06ed\u070f\u0711\u0730-\u074a\u07a6-\u07b0\u07eb-\u07f3\u0816-\u0819\u081b-\u0823\u0825-\u0827\u0829-\u082d\u0859-\u085b\u08e4-\u08fe\u0900-\u0902\u093a\u093c\u0941-\u0948\u094d\u0951-\u0957\u0962-\u0963\u0981\u09bc\u09c1-\u09c4\u09cd\u09e2-\u09e3\u0a01-\u0a02\u0a3c\u0a41-\u0a42\u0a47-\u0a48\u0a4b-\u0a4d\u0a51\u0a70-\u0a71\u0a75\u0a81-\u0a82\u0abc\u0ac1-\u0ac5\u0ac7-\u0ac8\u0acd\u0ae2-\u0ae3\u0b01\u0b3c\u0b3f\u0b41-\u0b44\u0b4d\u0b56\u0b62-\u0b63\u0b82\u0bc0\u0bcd\u0c3e-\u0c40\u0c46-\u0c48\u0c4a-\u0c4d\u0c55-\u0c56\u0c62-\u0c63\u0cbc\u0cbf\u0cc6\u0ccc-\u0ccd\u0ce2-\u0ce3\u0d41-\u0d44\u0d4d\u0d62-\u0d63\u0dca\u0dd2-\u0dd4\u0dd6\u0e31\u0e34-\u0e3a\u0e47-\u0e4e\u0eb1\u0eb4-\u0eb9\u0ebb-\u0ebc\u0ec8-\u0ecd\u0f18-\u0f19\u0f35\u0f37\u0f39\u0f71-\u0f7e\u0f80-\u0f84\u0f86-\u0f87\u0f8d-\u0f97\u0f99-\u0fbc\u0fc6\u102d-\u1030\u1032-\u1037\u1039-\u103a\u103d-\u103e\u1058-\u1059\u105e-\u1060\u1071-\u1074\u1082\u1085-\u1086\u108d\u109d\u135d-\u135f\u1712-\u1714\u1732-\u1734\u1752-\u1753\u1772-\u1773\u17b4-\u17b5\u17b7-\u17bd\u17c6\u17c9-\u17d3\u17dd\u180b-\u180d\u18a9\u1920-\u1922\u1927-\u1928\u1932\u1939-\u193b\u1a17-\u1a18\u1a56\u1a58-\u1a5e\u1a60\u1a62\u1a65-\u1a6c\u1a73-\u1a7c\u1a7f\u1b00-\u1b03\u1b34\u1b36-\u1b3a\u1b3c\u1b42\u1b6b-\u1b73\u1b80-\u1b81\u1ba2-\u1ba5\u1ba8-\u1ba9\u1bab\u1be6\u1be8-\u1be9\u1bed\u1bef-\u1bf1\u1c2c-\u1c33\u1c36-\u1c37\u1cd0-\u1cd2\u1cd4-\u1ce0\u1ce2-\u1ce8\u1ced\u1cf4\u1dc0-\u1de6\u1dfc-\u1dff\u200b\u200e-\u200f\u202a-\u202e\u2060-\u2064\u206a-\u206f\u20d0-\u20f0\u2cef-\u2cf1\u2d7f\u2de0-\u2dff\u302a-\u302d\u3099-\u309a\ua66f-\ua672\ua674-\ua67d\ua69f\ua6f0-\ua6f1\ua802\ua806\ua80b\ua825-\ua826\ua8c4\ua8e0-\ua8f1\ua926-\ua92d\ua947-\ua951\ua980-\ua982\ua9b3\ua9b6-\ua9b9\ua9bc\uaa29-\uaa2e\uaa31-\uaa32\uaa35-\uaa36\uaa43\uaa4c\uaab0\uaab2-\uaab4\uaab7-\uaab8\uaabe-\uaabf\uaac1\uaaec-\uaaed\uaaf6\uabe5\uabe8\uabed\ufb1e\ufe00-\ufe0f\ufe20-\ufe26\ufeff\ufff9-\ufffb\U000101fd\U00010a01-\U00010a03\U00010a05-\U00010a06\U00010a0c-\U00010a0f\U00010a38-\U00010a3a\U00010a3f\U00011001\U00011038-\U00011046\U00011080-\U00011081\U000110b3-\U000110b6\U000110b9-\U000110ba\U000110bd\U00011100-\U00011102\U00011127-\U0001112b\U0001112d-\U00011134\U00011180-\U00011181\U000111b6-\U000111be\U000116ab\U000116ad\U000116b0-\U000116b5\U000116b7\U00016f8f-\U00016f92\U0001d167-\U0001d169\U0001d173-\U0001d182\U0001d185-\U0001d18b\U0001d1aa-\U0001d1ad\U0001d242-\U0001d244\U000e0001\U000e0020-\U000e007f\U000e0100-\U000e01ef",
"u": "\u0000-\u00ac\u00ae-\u02ff\u0370-\u0482\u048a-\u0590\u05be\u05c0\u05c3\u05c6\u05c8-\u060f\u061b-\u061f\u0621\u0660-\u066d\u0674\u06d4\u06dd-\u06de\u06e5-\u06e6\u06e9\u06f0-\u06f9\u06fd-\u06fe\u0700-\u070e\u074b-\u074c\u0780-\u07a5\u07b1-\u07c9\u07f4-\u07f9\u07fb-\u0815\u081a\u0824\u0828\u082e-\u083f\u0856-\u0858\u085c-\u089f\u08a1\u08ad-\u08e3\u08ff\u0903-\u0939\u093b\u093d-\u0940\u0949-\u094c\u094e-\u0950\u0958-\u0961\u0964-\u0980\u0982-\u09bb\u09bd-\u09c0\u09c5-\u09cc\u09ce-\u09e1\u09e4-\u0a00\u0a03-\u0a3b\u0a3d-\u0a40\u0a43-\u0a46\u0a49-\u0a4a\u0a4e-\u0a50\u0a52-\u0a6f\u0a72-\u0a74\u0a76-\u0a80\u0a83-\u0abb\u0abd-\u0ac0\u0ac6\u0ac9-\u0acc\u0ace-\u0ae1\u0ae4-\u0b00\u0b02-\u0b3b\u0b3d-\u0b3e\u0b40\u0b45-\u0b4c\u0b4e-\u0b55\u0b57-\u0b61\u0b64-\u0b81\u0b83-\u0bbf\u0bc1-\u0bcc\u0bce-\u0c3d\u0c41-\u0c45\u0c49\u0c4e-\u0c54\u0c57-\u0c61\u0c64-\u0cbb\u0cbd-\u0cbe\u0cc0-\u0cc5\u0cc7-\u0ccb\u0cce-\u0ce1\u0ce4-\u0d40\u0d45-\u0d4c\u0d4e-\u0d61\u0d64-\u0dc9\u0dcb-\u0dd1\u0dd5\u0dd7-\u0e30\u0e32-\u0e33\u0e3b-\u0e46\u0e4f-\u0eb0\u0eb2-\u0eb3\u0eba\u0ebd-\u0ec7\u0ece-\u0f17\u0f1a-\u0f34\u0f36\u0f38\u0f3a-\u0f70\u0f7f\u0f85\u0f88-\u0f8c\u0f98\u0fbd-\u0fc5\u0fc7-\u102c\u1031\u1038\u103b-\u103c\u103f-\u1057\u105a-\u105d\u1061-\u1070\u1075-\u1081\u1083-\u1084\u1087-\u108c\u108e-\u109c\u109e-\u135c\u1360-\u1711\u1715-\u1731\u1735-\u1751\u1754-\u1771\u1774-\u17b3\u17b6\u17be-\u17c5\u17c7-\u17c8\u17d4-\u17dc\u17de-\u180a\u180e-\u18a8\u18aa-\u191f\u1923-\u1926\u1929-\u1931\u1933-\u1938\u193c-\u1a16\u1a19-\u1a55\u1a57\u1a5f\u1a61\u1a63-\u1a64\u1a6d-\u1a72\u1a7d-\u1a7e\u1a80-\u1aff\u1b04-\u1b33\u1b35\u1b3b\u1b3d-\u1b41\u1b43-\u1b6a\u1b74-\u1b7f\u1b82-\u1ba1\u1ba6-\u1ba7\u1baa\u1bac-\u1be5\u1be7\u1bea-\u1bec\u1bee\u1bf2-\u1c2b\u1c34-\u1c35\u1c38-\u1ccf\u1cd3\u1ce1\u1ce9-\u1cec\u1cee-\u1cf3\u1cf5-\u1dbf\u1de7-\u1dfb\u1e00-\u200a\u200c\u2010-\u2029\u202f-\u205f\u2065-\u2069\u2070-\u20cf\u20f1-\u2cee\u2cf2-\u2d7e\u2d80-\u2ddf\u2e00-\u3029\u302e-\u3098\u309b-\ua66e\ua673\ua67e-\ua69e\ua6a0-\ua6ef\ua6f2-\ua801\ua803-\ua805\ua807-\ua80a\ua80c-\ua824\ua827-\ua8c3\ua8c5-\ua8df\ua8f2-\ua925\ua92e-\ua946\ua952-\ua97f\ua983-\ua9b2\ua9b4-\ua9b5\ua9ba-\ua9bb\ua9bd-\uaa28\uaa2f-\uaa30\uaa33-\uaa34\uaa37-\uaa42\uaa44-\uaa4b\uaa4d-\uaaaf\uaab1\uaab5-\uaab6\uaab9-\uaabd\uaac0\uaac2-\uaaeb\uaaee-\uaaf5\uaaf7-\uabe4\uabe6-\uabe7\uabe9-\uabec\uabee-\ufb1d\ufb1f-\ufdff\ufe10-\ufe1f\ufe27-\ufefe\uff00-\ufff8\ufffc-\U000101fc\U000101fe-\U00010a00\U00010a04\U00010a07-\U00010a0b\U00010a10-\U00010a37\U00010a3b-\U00010a3e\U00010a40-\U00011000\U00011002-\U00011037\U00011047-\U0001107f\U00011082-\U000110b2\U000110b7-\U000110b8\U000110bb-\U000110bc\U000110be-\U000110ff\U00011103-\U00011126\U0001112c\U00011135-\U0001117f\U00011182-\U000111b5\U000111bf-\U000116aa\U000116ac\U000116ae-\U000116af\U000116b6\U000116b8-\U00016f8e\U00016f93-\U0001d166\U0001d16a-\U0001d172\U0001d183-\U0001d184\U0001d18c-\U0001d1a9\U0001d1ae-\U0001d241\U0001d245-\U000e0000\U000e0002-\U000e001f\U000e0080-\U000e00ff\U000e01f0-\U0010ffff"
}
| 816.4375 | 3,018 | 0.786956 | 1,884 | 13,063 | 5.45276 | 0.497877 | 0.001947 | 0.00292 | 0.003894 | 0.90548 | 0.881145 | 0.875304 | 0.875304 | 0.867517 | 0.841234 | 0 | 0.449295 | 0.005818 | 13,063 | 15 | 3,019 | 870.866667 | 0.341726 | 0.004287 | 0 | 0 | 1 | 0.615385 | 0.98554 | 0.983001 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.076923 | 0 | 0.076923 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 14 |
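The single-letter keys in the JSON above look like Arabic joining-type classes (c = join-causing, d = dual-joining, r = right-joining, t = transparent, u = non-joining, with a leading ^ marking the complement), and each value is formatted as the body of a regex character class. A minimal sketch of how such a table is typically consumed — the key semantics are an assumption inferred from the data shape:

import re

# One entry copied from the table above: join-causing characters.
joining_classes = {"c": "\u0640\u07fa\u200d"}

# Each value drops straight into a regex character class.
is_join_causing = re.compile("[%s]" % joining_classes["c"]).match

print(bool(is_join_causing("\u0640")))  # True: ARABIC TATWEEL joins on both sides
print(bool(is_join_causing("a")))       # False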
9f72804149ce48ecc5380e777cec34a9029f4f2e | 4,070 | py | Python | server/data/email_classes/email_html.py | MikeSmvl/travelingstrategy | 3d38c64f00bafdf2ca1079d14f9b618bce8307b0 | [
"MIT"
] | null | null | null | server/data/email_classes/email_html.py | MikeSmvl/travelingstrategy | 3d38c64f00bafdf2ca1079d14f9b618bce8307b0 | [
"MIT"
] | 2 | 2021-05-08T23:09:17.000Z | 2021-09-02T11:27:08.000Z | server/data/email_classes/email_html.py | MikeSmvl/travelingstrategy | 3d38c64f00bafdf2ca1079d14f9b618bce8307b0 | [
"MIT"
] | 2 | 2020-10-14T01:18:32.000Z | 2020-11-09T16:54:16.000Z | from email_classes.email_config import style, message_body, image_left_table_top_tags, image_left_table_bottom_tags, image_right_table_top_tags, image_bottom_tags, footer
from flags import Flags
from logger import Logger
FLAGS = Flags()
LEVEL = FLAGS.get_logger_level()
LOGGER = Logger(level=LEVEL) if LEVEL is not None else Logger()
class Email_html():
def __init__(self):
        self.style = style
self.message_body = message_body
self.images_left_side = ""
self.images_right_side = ""
self.images_left_section = image_left_table_top_tags+self.images_left_side+image_left_table_bottom_tags
self.image_right_section = image_right_table_top_tags+self.images_right_side+image_bottom_tags
self.footer = footer
def get_email(self):
return "<html>"+self.style+self.message_body+self.images_left_section+self.image_right_section+self.footer+"</html>"
    # Append an image cell, with a place-marker caption linking to the city page, to the left-hand column of the layout table.
def add_left_image(self, url, width, height, image_url, city):
additional_image = """
<tr>
<th>
<table border="0" cellspacing="0" cellpadding="0" role="presentation" style="border-spacing:0;border-collapse:collapse;">
<tbody>
<tr>
<td class="container" style="width:244px;border-collapse:collapse;"><img class="image" data-imagetype="External" src="{}" style="font-size:13px;display:block;width:{}px;height:{}px;text-decoration:none;border:1px solid #EEEEEF;border-top-right-radius:4px;border-bottom-right-radius:4px;border-bottom-left-radius:4px;line-height:13px;outline:none;border-top-left-radius:4px;">
<div class="middle">
<img data-imagetype="External" src="https://img.icons8.com/offices/30/000000/place-marker.png"><a href="{}" style="text-decoration:none;color:white">{}</a>
</div>
</td>
</tr>
</tbody>
</table>
</th>
</tr>
<tr>
<th height="16" style="line-height:0;"> </th>
</tr>
""".format( url, width, height, image_url, city, sep='')
self.images_left_side = self.images_left_side + additional_image
self.images_left_section = image_left_table_top_tags+self.images_left_side+image_left_table_bottom_tags
    # Append an image cell, with a place-marker caption linking to the city page, to the right-hand column of the layout table.
def add_right_image(self, url, width, height, image_url, city):
additional_image = """
<tr>
<th>
<table border="0" cellspacing="0" cellpadding="0" role="presentation" style="border-spacing:0;border-collapse:collapse;">
<tbody>
<tr>
<td class="container" style="width:244px;border-collapse:collapse;"><img class="image" data-imagetype="External" src="{}" style="font-size:13px;display:block;width:{}px;height:{}px;text-decoration:none;border:1px solid #EEEEEF;border-top-right-radius:4px;border-bottom-right-radius:4px;border-bottom-left-radius:4px;line-height:13px;outline:none;border-top-left-radius:4px;">
<div class="middle">
<img data-imagetype="External" src="https://img.icons8.com/offices/30/000000/place-marker.png"><a href="{}" style="text-decoration:none;color:white">{}</a>
</div>
</td>
</tr>
</tbody>
</table>
</th>
</tr>
<tr>
<th height="16" style="line-height:0;"> </th>
</tr>
""".format(url, width, height, image_url, city, sep='')
self.images_right_side = self.images_right_side + additional_image
self.image_right_section = image_right_table_top_tags+self.images_right_side+image_bottom_tags | 56.527778 | 399 | 0.607862 | 507 | 4,070 | 4.682446 | 0.201183 | 0.05476 | 0.047178 | 0.037911 | 0.831508 | 0.78433 | 0.73631 | 0.73631 | 0.73631 | 0.705139 | 0 | 0.018623 | 0.261179 | 4,070 | 72 | 400 | 56.527778 | 0.770868 | 0.033907 | 0 | 0.707692 | 0 | 0.092308 | 0.614504 | 0.243257 | 0 | 0 | 0 | 0 | 0 | 1 | 0.061538 | false | 0 | 0.046154 | 0.015385 | 0.138462 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
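A hedged usage sketch for the class above — the URLs, dimensions, and city names below are placeholders, and it assumes the email_config fragments imported at the top are available:

email = Email_html()
email.add_left_image("https://example.com/paris.jpg", 244, 180,
                     "https://example.com/paris", "Paris")
email.add_right_image("https://example.com/tokyo.jpg", 244, 180,
                      "https://example.com/tokyo", "Tokyo")
html_body = email.get_email()  # complete <html> document, ready to send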
4cd83a30bdbd2c7714855299a108f130abd126dd | 11,071 | py | Python | tests/test_views.py | MrThearMan/django-admin-data-views | 6b9df605b5879a7b4438bc6e67de196b58074aa3 | [
"MIT"
] | null | null | null | tests/test_views.py | MrThearMan/django-admin-data-views | 6b9df605b5879a7b4438bc6e67de196b58074aa3 | [
"MIT"
] | null | null | null | tests/test_views.py | MrThearMan/django-admin-data-views | 6b9df605b5879a7b4438bc6e67de196b58074aa3 | [
"MIT"
] | null | null | null | import pytest
from bs4 import BeautifulSoup
from django.http import HttpResponse
@pytest.mark.django_db
def test_admin_main_page(django_client):
result: HttpResponse = django_client.get("/admin/", follow=True)
soup = BeautifulSoup(result.content, features="html.parser")
main_content = soup.find(name="div", attrs={"id": "content-main"})
admin_data_views = main_content.find(name="div", attrs={"class": "app-admin-data-views"})
assert admin_data_views is not None
title_link = admin_data_views.find(name="caption").find(name="a")
assert title_link.get("href") == "/admin/admin-data-views/"
assert title_link.text == "Admin Data Views"
foo_list = admin_data_views.find(name="tr", attrs={"class": "model-foo_list"}).find("th").find("a")
bar_list = admin_data_views.find(name="tr", attrs={"class": "model-bar_list"}).find("th").find("a")
fizz_item = admin_data_views.find(name="tr", attrs={"class": "model-fizz"}).find("th").find("a")
buzz_item = admin_data_views.find(name="tr", attrs={"class": "model-buzz"}).find("th").find("a")
assert foo_list.text == "Foo List"
assert foo_list.get("href") == "/admin/admin-data-views/foo/"
assert bar_list.text == "Bar List"
assert bar_list.get("href") == "/admin/admin-data-views/bar/"
assert fizz_item.text == "Fizz"
assert fizz_item.get("href") == "/admin/admin-data-views/fizz/"
assert buzz_item.text == "Buzz"
assert buzz_item.get("href") == "/admin/admin-data-views/buzz/"
@pytest.mark.django_db
def test_admin_data_views_list(django_client):
result: HttpResponse = django_client.get("/admin/admin-data-views/", follow=True)
soup = BeautifulSoup(result.content, features="html.parser")
main_content = soup.find(name="div", attrs={"id": "content-main"})
admin_data_views = main_content.find(name="div", attrs={"class": "app-admin-data-views"})
assert admin_data_views is not None
title_link = admin_data_views.find(name="caption").find(name="a")
assert title_link.get("href") == "/admin/admin-data-views/"
assert title_link.text == "Admin Data Views"
foo_list = admin_data_views.find(name="tr", attrs={"class": "model-foo_list"}).find("th").find("a")
bar_list = admin_data_views.find(name="tr", attrs={"class": "model-bar_list"}).find("th").find("a")
fizz_item = admin_data_views.find(name="tr", attrs={"class": "model-fizz"}).find("th").find("a")
buzz_item = admin_data_views.find(name="tr", attrs={"class": "model-buzz"}).find("th").find("a")
assert foo_list.text == "Foo List"
assert foo_list.get("href") == "/admin/admin-data-views/foo/"
assert bar_list.text == "Bar List"
assert bar_list.get("href") == "/admin/admin-data-views/bar/"
assert fizz_item.text == "Fizz"
assert fizz_item.get("href") == "/admin/admin-data-views/fizz/"
assert buzz_item.text == "Buzz"
assert buzz_item.get("href") == "/admin/admin-data-views/buzz/"
@pytest.mark.django_db
def test_admin_foo_list_view(django_client):
result: HttpResponse = django_client.get("/admin/admin-data-views/foo/", follow=True)
soup = BeautifulSoup(result.content, features="html.parser")
content = soup.find(name="div", attrs={"id": "content"})
assert content.find(name="h1").text == "Foo items"
list_table = content.find(name="form", attrs={"id": "changelist-form"})
headers = list_table.find("table").find(name="thead").findAll(name="th")
assert len(headers) == 2
assert headers[0].find(name="span").text == "Name"
assert headers[1].find(name="span").text == "Value"
rows = list_table.find("table").find(name="tbody").findAll(name="tr")
assert len(rows) == 2
row_1_items = rows[0].findAll(name="td")
assert len(row_1_items) == 2
row_1_link = row_1_items[0].find(name="a")
assert row_1_link.get("href") == "/admin/admin-data-views/foo/123/"
assert row_1_link.text == "Foo"
assert row_1_items[1].text == "1"
row_2_items = rows[1].findAll(name="td")
assert len(row_2_items) == 2
row_2_link = row_2_items[0].find(name="a")
assert row_2_link.get("href") == "/admin/admin-data-views/foo/124/"
assert row_2_link.text == "Bar"
assert row_2_items[1].text == "2"
@pytest.mark.django_db
def test_admin_foo_item_view(django_client):
result: HttpResponse = django_client.get("/admin/admin-data-views/foo/123/", follow=True)
soup = BeautifulSoup(result.content, features="html.parser")
content = soup.find(name="div", attrs={"id": "content"})
assert content.find(name="h1").text == "This is 123"
sections = content.findAll(name="fieldset")
assert len(sections) == 2
section_1_title = sections[0].find(name="h2")
section_1_subtitle = sections[0].find(name="div", attrs={"class": "description"})
section_1_fields = sections[0].findAll(name="div", attrs={"class": "fieldBox"})
assert section_1_title is None
assert section_1_subtitle is None
assert len(section_1_fields) == 1
section_1_label_1 = section_1_fields[0].find(name="label")
section_1_input_1 = section_1_fields[0].find(name="input")
assert section_1_label_1.text == "Foo"
assert section_1_input_1.get("value") == "123"
section_2_title = sections[1].find(name="h2")
section_2_subtitle = sections[1].find(name="div", attrs={"class": "description"})
section_2_fields = sections[1].findAll(name="div", attrs={"class": "fieldBox"})
assert section_2_title.text == "This is another section"
assert section_2_subtitle.text == "This is the description for this section"
assert len(section_2_fields) == 1
section_2_label_1 = section_2_fields[0].find(name="label")
section_2_input_1 = section_2_fields[0].find(name="input")
assert section_2_label_1.text == "Fizz"
assert section_2_input_1.get("value") == "246"
@pytest.mark.django_db
def test_admin_bar_list_view(django_client):
result: HttpResponse = django_client.get("/admin/admin-data-views/bar/", follow=True)
soup = BeautifulSoup(result.content, features="html.parser")
content = soup.find(name="div", attrs={"id": "content"})
assert content.find(name="h1").text == "Bar items"
list_table = content.find(name="form", attrs={"id": "changelist-form"})
headers = list_table.find("table").find(name="thead").findAll(name="th")
assert len(headers) == 2
assert headers[0].find(name="span").text == "Fizz"
assert headers[1].find(name="span").text == "Buzz"
rows = list_table.find("table").find(name="tbody").findAll(name="tr")
assert len(rows) == 2
row_1_items = rows[0].findAll(name="td")
assert len(row_1_items) == 2
row_1_link = row_1_items[0].find(name="a")
assert row_1_link.get("href") == "/admin/admin-data-views/bar/bar/"
assert row_1_link.text == "X"
assert row_1_items[1].text == "1"
row_2_items = rows[1].findAll(name="td")
assert len(row_2_items) == 2
row_2_link = row_2_items[0].find(name="a")
assert row_2_link.get("href") == "/admin/admin-data-views/bar/bar/"
assert row_2_link.text == "Y"
assert row_2_items[1].text == "2"
@pytest.mark.django_db
def test_admin_bar_item_view(django_client):
result: HttpResponse = django_client.get("/admin/admin-data-views/bar/bar/", follow=True)
soup = BeautifulSoup(result.content, features="html.parser")
content = soup.find(name="div", attrs={"id": "content"})
assert content.find(name="h1").text == "Bar page"
sections = content.findAll(name="fieldset")
assert len(sections) == 2
section_1_title = sections[0].find(name="h2")
section_1_subtitle = sections[0].find(name="div", attrs={"class": "description"})
section_1_fields = sections[0].findAll(name="div", attrs={"class": "fieldBox"})
assert section_1_title is None
assert section_1_subtitle is None
assert len(section_1_fields) == 1
section_1_label_1 = section_1_fields[0].find(name="label")
section_1_input_1 = section_1_fields[0].find(name="input")
assert section_1_label_1.text == "Foo"
assert section_1_input_1.get("value") == "Bar"
section_2_title = sections[1].find(name="h2")
section_2_subtitle = sections[1].find(name="div", attrs={"class": "description"})
section_2_fields = sections[1].findAll(name="div", attrs={"class": "fieldBox"})
assert section_2_title.text == "This is another section"
assert section_2_subtitle.text == "This is the description for this section"
assert len(section_2_fields) == 1
section_2_label_1 = section_2_fields[0].find(name="label")
section_2_input_1 = section_2_fields[0].find(name="input")
assert section_2_label_1.text == "Fizz"
assert section_2_input_1.get("value") == "Buzz"
@pytest.mark.django_db
def test_admin_fizz_list_view(django_client):
result: HttpResponse = django_client.get("/admin/admin-data-views/fizz/", follow=True)
soup = BeautifulSoup(result.content, features="html.parser")
content = soup.find(name="div", attrs={"id": "content"})
assert content.find(name="h1").text == "Fizz view"
list_table = content.find(name="form", attrs={"id": "changelist-form"})
headers = list_table.find("table").find(name="thead").findAll(name="th")
assert len(headers) == 2
assert headers[0].find(name="span").text == "A"
assert headers[1].find(name="span").text == "B"
rows = list_table.find("table").find(name="tbody").findAll(name="tr")
assert len(rows) == 2
row_1_items = rows[0].findAll(name="td")
assert len(row_1_items) == 2
assert row_1_items[0].text == "X"
assert row_1_items[1].text == "1"
row_2_items = rows[1].findAll(name="td")
assert len(row_2_items) == 2
assert row_2_items[0].text == "Y"
assert row_2_items[1].text == "2"
@pytest.mark.django_db
def test_admin_buzz_item_view(django_client):
result: HttpResponse = django_client.get("/admin/admin-data-views/buzz", follow=True)
soup = BeautifulSoup(result.content, features="html.parser")
content = soup.find(name="div", attrs={"id": "content"})
assert content.find(name="h1").text == "Buzz page"
sections = content.findAll(name="fieldset")
assert len(sections) == 1
section_1_title = sections[0].find(name="h2")
section_1_subtitle = sections[0].find(name="div", attrs={"class": "description"})
section_1_fields = sections[0].findAll(name="div", attrs={"class": "fieldBox"})
assert section_1_title is None
assert section_1_subtitle is None
assert len(section_1_fields) == 1
section_1_label_1 = section_1_fields[0].find(name="label")
section_1_input_1 = section_1_fields[0].find(name="input")
assert section_1_label_1.text == "Foo"
assert section_1_input_1.get("value") == "Bar"
| 37.026756 | 104 | 0.661819 | 1,614 | 11,071 | 4.327757 | 0.057001 | 0.076736 | 0.080172 | 0.057122 | 0.972799 | 0.965068 | 0.964209 | 0.947029 | 0.928132 | 0.919685 | 0 | 0.023993 | 0.168007 | 11,071 | 298 | 105 | 37.151007 | 0.734339 | 0 | 0 | 0.765625 | 0 | 0 | 0.177295 | 0.056159 | 0 | 0 | 0 | 0 | 0.46875 | 1 | 0.041667 | false | 0 | 0.015625 | 0 | 0.057292 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
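The tests above depend on a `django_client` fixture defined elsewhere in the suite (most likely a conftest.py); a minimal sketch of what it presumably provides — an authenticated admin client — built from the standard pytest-django fixtures:

# conftest.py (sketch)
import pytest


@pytest.fixture
def django_client(client, admin_user):
    # `client` and `admin_user` come from pytest-django; logging in up front
    # lets the /admin/ views render instead of redirecting to the login page.
    client.force_login(admin_user)
    return client

pytest-django's built-in `admin_client` fixture offers the same behaviour out of the box, so the project may simply alias it.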
980c6176140c14893a71e7602bfb1c18891b9c96 | 297 | py | Python | netests/converters/ping/cumulus/validator.py | Netests/netests | 1a48bda461761c4ec854d6fa0c38629049009a4a | [
"MIT"
] | 14 | 2020-06-08T07:34:59.000Z | 2022-03-14T08:52:03.000Z | netests/converters/ping/cumulus/validator.py | Netests/netests | 1a48bda461761c4ec854d6fa0c38629049009a4a | [
"MIT"
] | null | null | null | netests/converters/ping/cumulus/validator.py | Netests/netests | 1a48bda461761c4ec854d6fa0c38629049009a4a | [
"MIT"
] | 3 | 2020-06-19T03:57:05.000Z | 2020-06-22T22:46:42.000Z | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
# NOTE: all three validators are placeholders; `pass` makes them return None
# rather than the bool their annotations promise.
def cumulus_api_ping_validator(output: str, must_works: bool) -> bool:
    pass


def cumulus_netconf_ping_validator(output: str, must_works: bool) -> bool:
    pass


def cumulus_ssh_ping_validator(output: str, must_works: bool) -> bool:
    pass
| 19.8 | 74 | 0.707071 | 43 | 297 | 4.604651 | 0.465116 | 0.151515 | 0.287879 | 0.333333 | 0.752525 | 0.752525 | 0.752525 | 0.752525 | 0.752525 | 0.535354 | 0 | 0.008065 | 0.164983 | 297 | 14 | 75 | 21.214286 | 0.790323 | 0.144781 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0.5 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 8 |
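A minimal sketch of the logic these placeholders would plausibly implement, assuming the usual Linux/Cumulus ping summary line (the helper name below is hypothetical):

import re


def _ping_succeeded(output: str) -> bool:
    # Look for the "X% packet loss" summary that Linux ping prints.
    match = re.search(r"(\d+(?:\.\d+)?)% packet loss", output)
    return match is not None and float(match.group(1)) < 100.0


def example_ping_validator(output: str, must_works: bool) -> bool:
    # The check passes when the observed result matches the expectation:
    # a reachable target when must_works is True, an unreachable one otherwise.
    return _ping_succeeded(output) == must_works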
e239cc78cc8ec007c96e7e63136b94a21e644368 | 33,230 | py | Python | aioketraapi/api/scene_operations_api.py | s4v4g3/aio-ketra-api | 1c8fefa2a66d4a66addeefdc33c71b2f0faa1137 | [
"MIT"
] | null | null | null | aioketraapi/api/scene_operations_api.py | s4v4g3/aio-ketra-api | 1c8fefa2a66d4a66addeefdc33c71b2f0faa1137 | [
"MIT"
] | null | null | null | aioketraapi/api/scene_operations_api.py | s4v4g3/aio-ketra-api | 1c8fefa2a66d4a66addeefdc33c71b2f0faa1137 | [
"MIT"
] | null | null | null | # coding: utf-8
"""
Ketra Lighting API
Control your Ketra lights # noqa: E501
The version of the OpenAPI document: 1.4.0
Generated by: https://openapi-generator.tech
"""
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from aioketraapi.api_client import ApiClient
from aioketraapi.exceptions import ( # noqa: F401
ApiTypeError,
ApiValueError
)
class SceneOperationsApi(object):
"""NOTE: This class is auto generated by OpenAPI Generator
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
def root_get(self, **kwargs): # noqa: E501
"""Get keypads and groups (and scenes in API schema 4 or later) # noqa: E501
Gets all keypads and groups in the installation. Added in hub firmware version 1.14 (API schema 3). Scenes are also returned in API schema 4. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.root_get(async_req=True)
>>> result = thread.get()
:param basicauthuser: Username to use in place of username in basic authentication header. For a secure installation, this value is ignored but still must be supplied unless a basic authentication header is sent with the request.
:type basicauthuser: str
:param basicauthpassword: Password to use in place of password in basic authentication header. For a secure installation, this should be an oauth token for a user with access to the installation. If a basic authentication header is sent, this parameter is ignored. If no basic authentication header is sent, this parameter as well as the basicauthuser parameter must be supplied if the hub is a member of a secure installation.
:type basicauthpassword: str
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: InlineResponse200
"""
kwargs['_return_http_data_only'] = True
return self.root_get_with_http_info(**kwargs) # noqa: E501
def root_get_with_http_info(self, **kwargs): # noqa: E501
"""Get keypads and groups (and scenes in API schema 4 or later) # noqa: E501
Gets all keypads and groups in the installation. Added in hub firmware version 1.14 (API schema 3). Scenes are also returned in API schema 4. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.root_get_with_http_info(async_req=True)
>>> result = thread.get()
:param basicauthuser: Username to use in place of username in basic authentication header. For a secure installation, this value is ignored but still must be supplied unless a basic authentication header is sent with the request.
:type basicauthuser: str
:param basicauthpassword: Password to use in place of password in basic authentication header. For a secure installation, this should be an oauth token for a user with access to the installation. If a basic authentication header is sent, this parameter is ignored. If no basic authentication header is sent, this parameter as well as the basicauthuser parameter must be supplied if the hub is a member of a secure installation.
:type basicauthpassword: str
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _return_http_data_only: response data without head status code
and headers
:type _return_http_data_only: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
        :param _request_auth: set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
:type _request_auth: dict, optional
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: tuple(InlineResponse200, status_code(int), headers(HTTPHeaderDict))
"""
local_var_params = locals()
all_params = [
'basicauthuser',
'basicauthpassword'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout',
'_request_auth'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method root_get" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'basicauthuser' in local_var_params and local_var_params['basicauthuser'] is not None: # noqa: E501
query_params.append(('basicauthuser', local_var_params['basicauthuser'])) # noqa: E501
if 'basicauthpassword' in local_var_params and local_var_params['basicauthpassword'] is not None: # noqa: E501
query_params.append(('basicauthpassword', local_var_params['basicauthpassword'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['basicAuth'] # noqa: E501
return self.api_client.call_api(
'/', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='InlineResponse200', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats,
_request_auth=local_var_params.get('_request_auth'))
def scenes_get(self, **kwargs): # noqa: E501
"""Get Scenes # noqa: E501
(New in API schema 4) Gets the list of defined Scenes. A scene is a predefined state (or states) for one or more groups of lights. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.scenes_get(async_req=True)
>>> result = thread.get()
:param basicauthuser: Username to use in place of username in basic authentication header. For a secure installation, this value is ignored but still must be supplied unless a basic authentication header is sent with the request.
:type basicauthuser: str
:param basicauthpassword: Password to use in place of password in basic authentication header. For a secure installation, this should be an oauth token for a user with access to the installation. If a basic authentication header is sent, this parameter is ignored. If no basic authentication header is sent, this parameter as well as the basicauthuser parameter must be supplied if the hub is a member of a secure installation.
:type basicauthpassword: str
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: InlineResponse2005
"""
kwargs['_return_http_data_only'] = True
return self.scenes_get_with_http_info(**kwargs) # noqa: E501
def scenes_get_with_http_info(self, **kwargs): # noqa: E501
"""Get Scenes # noqa: E501
(New in API schema 4) Gets the list of defined Scenes. A scene is a predefined state (or states) for one or more groups of lights. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.scenes_get_with_http_info(async_req=True)
>>> result = thread.get()
:param basicauthuser: Username to use in place of username in basic authentication header. For a secure installation, this value is ignored but still must be supplied unless a basic authentication header is sent with the request.
:type basicauthuser: str
:param basicauthpassword: Password to use in place of password in basic authentication header. For a secure installation, this should be an oauth token for a user with access to the installation. If a basic authentication header is sent, this parameter is ignored. If no basic authentication header is sent, this parameter as well as the basicauthuser parameter must be supplied if the hub is a member of a secure installation.
:type basicauthpassword: str
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _return_http_data_only: response data without head status code
and headers
:type _return_http_data_only: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
        :param _request_auth: set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
:type _request_auth: dict, optional
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: tuple(InlineResponse2005, status_code(int), headers(HTTPHeaderDict))
"""
local_var_params = locals()
all_params = [
'basicauthuser',
'basicauthpassword'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout',
'_request_auth'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method scenes_get" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'basicauthuser' in local_var_params and local_var_params['basicauthuser'] is not None: # noqa: E501
query_params.append(('basicauthuser', local_var_params['basicauthuser'])) # noqa: E501
if 'basicauthpassword' in local_var_params and local_var_params['basicauthpassword'] is not None: # noqa: E501
query_params.append(('basicauthpassword', local_var_params['basicauthpassword'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['basicAuth'] # noqa: E501
return self.api_client.call_api(
'/Scenes', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='InlineResponse2005', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats,
_request_auth=local_var_params.get('_request_auth'))
def scenes_scene_id_activate_post(self, scene_id, **kwargs): # noqa: E501
"""Activates a scene # noqa: E501
(New in API schema 4) Activates a Ketra scene specified by {scene-id}. If a group is specified, the scene will be activated only for that group (and its subgroups). If no group is specified, the scene will be activated for all groups for which the scene is defined. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.scenes_scene_id_activate_post(scene_id, async_req=True)
>>> result = thread.get()
:param scene_id: The scene's unique identifier (uuid) (required)
:type scene_id: str
:param basicauthuser: Username to use in place of username in basic authentication header. For a secure installation, this value is ignored but still must be supplied unless a basic authentication header is sent with the request.
:type basicauthuser: str
:param basicauthpassword: Password to use in place of password in basic authentication header. For a secure installation, this should be an oauth token for a user with access to the installation. If a basic authentication header is sent, this parameter is ignored. If no basic authentication header is sent, this parameter as well as the basicauthuser parameter must be supplied if the hub is a member of a secure installation.
:type basicauthpassword: str
:param group: Specifies the parent group for which the scene should be activated
:type group: str
:param level: Specifies the master brightness level (from 0 to 65535) at which the scene should be activated. If this parameter is omitted, the scene will be activated at the maximum level (65535).
:type level: int
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: InlineResponse2007
"""
kwargs['_return_http_data_only'] = True
return self.scenes_scene_id_activate_post_with_http_info(scene_id, **kwargs) # noqa: E501
def scenes_scene_id_activate_post_with_http_info(self, scene_id, **kwargs): # noqa: E501
"""Activates a scene # noqa: E501
(New in API schema 4) Activates a Ketra scene specified by {scene-id}. If a group is specified, the scene will be activated only for that group (and its subgroups). If no group is specified, the scene will be activated for all groups for which the scene is defined. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.scenes_scene_id_activate_post_with_http_info(scene_id, async_req=True)
>>> result = thread.get()
:param scene_id: The scene's unique identifier (uuid) (required)
:type scene_id: str
:param basicauthuser: Username to use in place of username in basic authentication header. For a secure installation, this value is ignored but still must be supplied unless a basic authentication header is sent with the request.
:type basicauthuser: str
:param basicauthpassword: Password to use in place of password in basic authentication header. For a secure installation, this should be an oauth token for a user with access to the installation. If a basic authentication header is sent, this parameter is ignored. If no basic authentication header is sent, this parameter as well as the basicauthuser parameter must be supplied if the hub is a member of a secure installation.
:type basicauthpassword: str
:param group: Specifies the parent group for which the scene should be activated
:type group: str
:param level: Specifies the master brightness level (from 0 to 65535) at which the scene should be activated. If this parameter is omitted, the scene will be activated at the maximum level (65535).
:type level: int
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _return_http_data_only: response data without head status code
and headers
:type _return_http_data_only: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
        :param _request_auth: set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
:type _request_auth: dict, optional
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: tuple(InlineResponse2007, status_code(int), headers(HTTPHeaderDict))
"""
local_var_params = locals()
all_params = [
'scene_id',
'basicauthuser',
'basicauthpassword',
'group',
'level'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout',
'_request_auth'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method scenes_scene_id_activate_post" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'scene_id' is set
if self.api_client.client_side_validation and ('scene_id' not in local_var_params or # noqa: E501
local_var_params['scene_id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `scene_id` when calling `scenes_scene_id_activate_post`") # noqa: E501
if self.api_client.client_side_validation and 'level' in local_var_params and local_var_params['level'] > 65535: # noqa: E501
raise ApiValueError("Invalid value for parameter `level` when calling `scenes_scene_id_activate_post`, must be a value less than or equal to `65535`") # noqa: E501
if self.api_client.client_side_validation and 'level' in local_var_params and local_var_params['level'] < 0: # noqa: E501
raise ApiValueError("Invalid value for parameter `level` when calling `scenes_scene_id_activate_post`, must be a value greater than or equal to `0`") # noqa: E501
collection_formats = {}
path_params = {}
if 'scene_id' in local_var_params:
path_params['scene-id'] = local_var_params['scene_id'] # noqa: E501
query_params = []
if 'basicauthuser' in local_var_params and local_var_params['basicauthuser'] is not None: # noqa: E501
query_params.append(('basicauthuser', local_var_params['basicauthuser'])) # noqa: E501
if 'basicauthpassword' in local_var_params and local_var_params['basicauthpassword'] is not None: # noqa: E501
query_params.append(('basicauthpassword', local_var_params['basicauthpassword'])) # noqa: E501
if 'group' in local_var_params and local_var_params['group'] is not None: # noqa: E501
query_params.append(('group', local_var_params['group'])) # noqa: E501
if 'level' in local_var_params and local_var_params['level'] is not None: # noqa: E501
query_params.append(('level', local_var_params['level'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['basicAuth'] # noqa: E501
return self.api_client.call_api(
'/Scenes/{scene-id}/Activate', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='InlineResponse2007', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats,
_request_auth=local_var_params.get('_request_auth'))
def scenes_scene_id_get(self, scene_id, **kwargs): # noqa: E501
"""Gets a single scene # noqa: E501
(New in API schema 4) Gets a Ketra scene specified by {scene-id}. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.scenes_scene_id_get(scene_id, async_req=True)
>>> result = thread.get()
:param scene_id: The scene's unique identifier (uuid) (required)
:type scene_id: str
:param basicauthuser: Username to use in place of username in basic authentication header. For a secure installation, this value is ignored but still must be supplied unless a basic authentication header is sent with the request.
:type basicauthuser: str
:param basicauthpassword: Password to use in place of password in basic authentication header. For a secure installation, this should be an oauth token for a user with access to the installation. If a basic authentication header is sent, this parameter is ignored. If no basic authentication header is sent, this parameter as well as the basicauthuser parameter must be supplied if the hub is a member of a secure installation.
:type basicauthpassword: str
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: InlineResponse2006
"""
kwargs['_return_http_data_only'] = True
return self.scenes_scene_id_get_with_http_info(scene_id, **kwargs) # noqa: E501
def scenes_scene_id_get_with_http_info(self, scene_id, **kwargs): # noqa: E501
"""Gets a single scene # noqa: E501
(New in API schema 4) Gets a Ketra scene specified by {scene-id}. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.scenes_scene_id_get_with_http_info(scene_id, async_req=True)
>>> result = thread.get()
:param scene_id: The scene's unique identifier (uuid) (required)
:type scene_id: str
:param basicauthuser: Username to use in place of username in basic authentication header. For a secure installation, this value is ignored but still must be supplied unless a basic authentication header is sent with the request.
:type basicauthuser: str
:param basicauthpassword: Password to use in place of password in basic authentication header. For a secure installation, this should be an oauth token for a user with access to the installation. If a basic authentication header is sent, this parameter is ignored. If no basic authentication header is sent, this parameter as well as the basicauthuser parameter must be supplied if the hub is a member of a secure installation.
:type basicauthpassword: str
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _return_http_data_only: response data without head status code
and headers
:type _return_http_data_only: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
        :param _request_auth: set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
:type _request_auth: dict, optional
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: tuple(InlineResponse2006, status_code(int), headers(HTTPHeaderDict))
"""
local_var_params = locals()
all_params = [
'scene_id',
'basicauthuser',
'basicauthpassword'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout',
'_request_auth'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method scenes_scene_id_get" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'scene_id' is set
if self.api_client.client_side_validation and ('scene_id' not in local_var_params or # noqa: E501
local_var_params['scene_id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `scene_id` when calling `scenes_scene_id_get`") # noqa: E501
collection_formats = {}
path_params = {}
if 'scene_id' in local_var_params:
path_params['scene-id'] = local_var_params['scene_id'] # noqa: E501
query_params = []
if 'basicauthuser' in local_var_params and local_var_params['basicauthuser'] is not None: # noqa: E501
query_params.append(('basicauthuser', local_var_params['basicauthuser'])) # noqa: E501
if 'basicauthpassword' in local_var_params and local_var_params['basicauthpassword'] is not None: # noqa: E501
query_params.append(('basicauthpassword', local_var_params['basicauthpassword'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['basicAuth'] # noqa: E501
return self.api_client.call_api(
'/Scenes/{scene-id}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='InlineResponse2006', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats,
_request_auth=local_var_params.get('_request_auth'))
| 54.120521 | 437 | 0.639031 | 4,048 | 33,230 | 5.062994 | 0.063735 | 0.033569 | 0.053281 | 0.031617 | 0.956819 | 0.955404 | 0.95394 | 0.953208 | 0.942718 | 0.93779 | 0 | 0.014915 | 0.297863 | 33,230 | 613 | 438 | 54.208809 | 0.863492 | 0.571532 | 0 | 0.72973 | 1 | 0.007722 | 0.201198 | 0.035776 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034749 | false | 0.046332 | 0.019305 | 0 | 0.088803 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
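A hedged sketch of driving the generated client above — the scene uuid and group name are placeholders, and whether the calls must be awaited depends on which library backend OpenAPI Generator produced for this asyncio-flavoured package:

from aioketraapi.api_client import ApiClient
from aioketraapi.api.scene_operations_api import SceneOperationsApi

api = SceneOperationsApi(ApiClient())

# List the defined scenes, then activate one at half brightness for one group.
scenes = api.scenes_get()                        # await if the client is async
api.scenes_scene_id_activate_post(
    "00000000-0000-0000-0000-000000000000",      # placeholder scene uuid
    group="Living Room",                         # optional, per the docstring
    level=32768,                                 # 0..65535 master brightness
)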
e23a14caea156c1874024b98784cdbff394bdd8e | 1,240 | py | Python | src/graph_rename.py | JianyiCheng/DSS | ffc08efd80415df49bd4b5b49ea4f28ff38134db | [
"BSD-3-Clause"
] | 25 | 2019-11-27T15:40:22.000Z | 2022-02-02T11:41:10.000Z | src/graph_rename.py | JianyiCheng/DSS | ffc08efd80415df49bd4b5b49ea4f28ff38134db | [
"BSD-3-Clause"
] | 1 | 2020-07-16T09:36:48.000Z | 2020-07-16T09:36:48.000Z | src/graph_rename.py | JianyiCheng/DSS | ffc08efd80415df49bd4b5b49ea4f28ff38134db | [
"BSD-3-Clause"
] | 6 | 2021-01-09T05:30:59.000Z | 2021-08-04T10:09:41.000Z | from __future__ import print_function
import os, sys, glob, cxxfilt
top = sys.argv[1]
fT = glob.glob(top+'/_build/ds/*_graph.dot')
for n in fT:
    line = n[n.rfind("/")+1:n.find("_graph.dot")]
    print(line)
    if line.startswith('_Z'):  # Itanium-ABI-mangled C++ symbol
        fixed = cxxfilt.demangle(line)
        fixed = fixed[:fixed.find("(")]  # keep the function name, drop the argument list
        print("Fixing naming issue: " + line + " >> " + fixed)
        cmd = "mv " + top + "/_build/ds/" + line + "_graph.dot " + top + "/_build/ds/" + fixed + "_graph.dot"
        print(cmd)
        os.system(cmd)
fT = glob.glob(top+'/_build/ds/*_bbgraph.dot')
for n in fT:
    line = n[n.rfind("/")+1:n.find("_bbgraph.dot")]
    print(line)
    if line.startswith('_Z'):  # Itanium-ABI-mangled C++ symbol
        fixed = cxxfilt.demangle(line)
        fixed = fixed[:fixed.find("(")]  # keep the function name, drop the argument list
        print("Fixing naming issue: " + line + " >> " + fixed)
        cmd = "mv " + top + "/_build/ds/" + line + "_bbgraph.dot " + top + "/_build/ds/" + fixed + "_bbgraph.dot"
        print(cmd)
        os.system(cmd)
| 49.6 | 149 | 0.659677 | 187 | 1,240 | 4.219251 | 0.208556 | 0.228137 | 0.288973 | 0.152091 | 0.844106 | 0.844106 | 0.793409 | 0.793409 | 0.773131 | 0.773131 | 0 | 0.007923 | 0.083871 | 1,240 | 24 | 150 | 51.666667 | 0.68662 | 0 | 0 | 0.4 | 0 | 0 | 0.259887 | 0.037127 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.15 | 0 | 0.15 | 0.35 | 0 | 0 | 0 | null | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
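For reference, the demangling step the script relies on behaves like c++filt; a small sketch:

import cxxfilt

name = cxxfilt.demangle("_Z3fooi")   # -> "foo(int)"
print(name[:name.find("(")])         # -> "foo", the stem used for the rename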
e2405000bc16e4d9ccfed9b987050c087c3d3f8a | 9,594 | py | Python | src/SLR/Python/SimpleLinearRegression.py | SamyuelDanyo/opencl-machine-learning-acceleration | fbd63359188351c79c03893a6ad303d96fb8bc50 | [
"MIT"
] | 1 | 2020-03-11T19:59:37.000Z | 2020-03-11T19:59:37.000Z | src/SLR/Python/SimpleLinearRegression.py | SamyuelDanyo/opencl-machine-learning-acceleration | fbd63359188351c79c03893a6ad303d96fb8bc50 | [
"MIT"
] | null | null | null | src/SLR/Python/SimpleLinearRegression.py | SamyuelDanyo/opencl-machine-learning-acceleration | fbd63359188351c79c03893a6ad303d96fb8bc50 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
#
# Creator: Samyuel Danyo
# Date: 10/2017
# coding: utf-8
from __future__ import division, print_function, unicode_literals
import os
import numpy as np
import matplotlib.pyplot as plt
def chf(x, n): # The basis function to be used. This is how our training Y relates to input X.
return np.cos(n*np.arccos(x)) # In real life we do not have the exact function, only inputs and outputs.
X1pts = 200 # Training points
X1lin = np.linspace(-1,0,X1pts) # Training interval (training input variable values)
y1 = chf(X1lin, 1)+0.03*np.random.normal(0,1,X1pts)# Training results, our Ys (our labels)
X1ptsPred = 400 # Prediction points
X1linPred = np.linspace(-1,1,X1ptsPred)# Prediction interval (real input variable values)
y1True = chf(X1linPred,1)+0.03*np.random.normal(-1,1,X1ptsPred)# True results for the prediction interval (for verification)
Achf = np.stack((np.ones(X1pts),X1lin)).T # Constructing design matrix
AchfPred = np.stack((np.ones(X1ptsPred),X1linPred)).T # Prediction design matrix
w1hat = np.linalg.pinv(Achf).dot(y1) #Training our weights
y1pred = AchfPred.dot(w1hat) # Making our prediction, based on the weights
plt.plot(X1linPred,y1pred) # Displaying our Prediction (regression) in Blue
plt.scatter(X1linPred,y1True, color='y')# Displaying our true values in Yellow
plt.scatter(X1lin,y1, color='r') # Displaying our training values in Red
plt.show()
print(np.array(y1).shape)
y1 = chf(X1lin, 2)+0.03*np.random.normal(0,1,X1pts)# Training results, our Ys
X1ptsPred = 400 # Prediction points
X1linPred = np.linspace(-1,1,X1ptsPred)# Prediction interval
y1True = chf(X1linPred,2)+0.03*np.random.normal(-1,1,X1ptsPred)# True results for the prediction interval
Achf = np.stack((np.ones(X1pts),X1lin)).T # Constructing design matrix
AchfPred = np.stack((np.ones(X1ptsPred),X1linPred)).T # Prediction design matrix
w1hat = np.linalg.pinv(Achf).dot(y1) #Training our weights
y1pred = AchfPred.dot(w1hat) # Making our prediction, based on the weights
plt.plot(X1linPred,y1pred) # Displaying our Prediction (regression) in Blue
plt.scatter(X1linPred,y1True, color='y')# Displaying our true values in Yellow
plt.scatter(X1lin,y1, color='r') # Displaying our training values in Red
plt.show()
X1pts = 40 # Training points
X1lin = np.linspace(-1,0,X1pts) # Training interval
y1 = chf(X1lin, 2)+0.03*np.random.normal(0,1,X1pts)# Training results, our Ys
X1ptsPred = 80 # Prediction points
X1linPred = np.linspace(-1,1,X1ptsPred)# Prediction interval
y1True = chf(X1linPred,2)+0.03*np.random.normal(-1,1,X1ptsPred)# True results for the prediction interval
Achf = np.stack((np.ones(X1pts),X1lin,np.square(X1lin))).T # Constructing design matrix
AchfPred = np.stack((np.ones(X1ptsPred),X1linPred,np.square(X1linPred))).T # Prediction design matrix
w1hat = np.linalg.pinv(Achf).dot(y1) #Training our weights
y1pred = AchfPred.dot(w1hat) # Making our prediction, based on the weights
plt.plot(X1linPred,y1pred) # Displaying our Prediction (regression) in Blue
plt.scatter(X1linPred,y1True, color='y')# Displaying our true values in Yellow
#plt.scatter(X1lin,y1, color='r') # Displaying our training values in Red
plt.show()
X1pts = 40 # Training points
X1lin = np.linspace(-0.5,0.5,X1pts) # Training interval
y1 = chf(X1lin, 2)+0.03*np.random.normal(0,1,X1pts)# Training results, our Ys
X1ptsPred = 80 # Prediction points
X1linPred = np.linspace(-1,1,X1ptsPred)# Prediction interval
y1True = chf(X1linPred,2)+0.03*np.random.normal(-1,1,X1ptsPred)# True results for the prediction interval
Achf = np.stack((np.ones(X1pts),X1lin,np.square(X1lin))).T # Constructing design matrix
AchfPred = np.stack((np.ones(X1ptsPred),X1linPred,np.square(X1linPred))).T # Prediction design matrix
w1hat = np.linalg.pinv(Achf).dot(y1) #Training our weights
y1pred = AchfPred.dot(w1hat) # Making our prediction, based on the weights
plt.plot(X1linPred,y1pred) # Displaying our Prediction (regression) in Blue
plt.scatter(X1linPred,y1True, color='y')# Displaying our true values in Yellow
#plt.scatter(X1lin,y1, color='r') # Displaying our training values in Red
plt.show()
def f(x): # The basis function to be used. This is how our training Y relates to input X.
return 0.5*(x)*(x**4)/(.05+(x**4)) # In real life we do not have the exact function, only inputs and outputs.
X1pts = 20 # Training points
X1lin = np.linspace(0,1,X1pts) # Training interval
y1 = f(X1lin)+0.03*np.random.normal(0,1,X1pts)# Training results, our Ys
X1ptsPred = 40 # Prediction points
X1linPred = np.linspace(0,2,X1ptsPred)# Prediction interval
y1True = f(X1linPred)+0.03*np.random.normal(-1,1,X1ptsPred)# True results for the prediction interval
Achf = np.stack((np.ones(X1pts),X1lin,np.square(X1lin))).T # Constructing design matrix
AchfPred = np.stack((np.ones(X1ptsPred),X1linPred,np.square(X1linPred))).T # Prediction design matrix
w1hat = np.linalg.pinv(Achf).dot(y1) #Training our weights
y1pred = AchfPred.dot(w1hat) # Making our prediction, based on the weights
plt.plot(X1linPred,y1pred) # Displaying our Prediction (regression) in Blue
plt.scatter(X1linPred,y1True, color='y')# Displaying our true values in Yellow
plt.scatter(X1lin,y1, color='r') # Displaying our training values in Red
plt.show()
def f(x): # The basis function to be used. This is how our training Y relates to input X.
return 0.5*(x)*(x**4)/(.05+(x**4)) # In real life we do not have the exact function, only inputs and outputs.
X1pts = 20 # Training points
X1lin = np.linspace(0,1,X1pts) # Training interval
y1 = f(X1lin)+0.03*np.random.normal(0,1,X1pts)# Training results, our Ys
X1ptsPred = 40 # Prediction points
X1linPred = np.linspace(0,2,X1ptsPred)# Prediction interval
y1True = f(X1linPred)+0.03*np.random.normal(-1,1,X1ptsPred)# True results for the prediction interval
Achf = np.stack((np.ones(X1pts),X1lin)).T # Constructing design matrix
AchfPred = np.stack((np.ones(X1ptsPred),X1linPred)).T # Prediction design matrix
w1hat = np.linalg.pinv(Achf).dot(y1) #Training our weights
y1pred = AchfPred.dot(w1hat) # Making our prediction, based on the weights
plt.plot(X1linPred,y1pred) # Displaying our Prediction (regression) in Blue
plt.scatter(X1linPred,y1True, color='y')# Displaying our true values in Yellow
plt.scatter(X1lin,y1, color='r') # Displaying our training values in Red
plt.show()
X1pts = 200 # Training points
X1lin = np.linspace(-1,0,X1pts) # Training interval
y1 = chf(X1lin, 2)+0.03*np.random.normal(0,1,X1pts)# Training results, our Ys
X1ptsPred = 400 # Prediction points
X1linPred = np.linspace(-1,1,X1ptsPred)# Prediction interval
y1True = chf(X1linPred,2)+0.03*np.random.normal(-1,1,X1ptsPred)# True results for the prediction interval
Achf = np.stack((np.ones(X1pts),X1lin,np.square(X1lin))).T # Constructing design matrix
AchfPred = np.stack((np.ones(X1ptsPred),X1linPred,np.square(X1linPred))).T # Prediction design matrix
w1hat = np.linalg.pinv(Achf).dot(y1) #Training our weights
y1pred = AchfPred.dot(w1hat) # Making our prediction, based on the weights
plt.plot(X1linPred,y1pred) # Displaying our Prediction (regression) in Blue
plt.scatter(X1linPred,y1True, color='y')# Displaying our true values in Yellow
plt.scatter(X1lin,y1, color='r') # Displaying our training values in Red
plt.show()
print(w1hat)
def slr(x): # Used to create the training set. This is how training Y relates to input X.
return 10*np.exp(-2*x**2) + np.sin(3*x)*10 + x # The model will need to approximate this pattern, learning from X and Y.
Xpts = 100 # Training points:100
Xlin = np.linspace(-3,3,Xpts) # Training interval [-3:3]
Y = slr(Xlin) # Training targets, our Ys
Y += 0.7*np.random.normal(0,1,Xpts) # Noise is added, as in real life, there is always distortion
XptsPred = 200 # Inference points:200 (the # of points, the learnt model will be tested on, fitting)
XlinPred = np.linspace(-10,10,XptsPred) # Inference interval [-10:10] Presentation of the power of generalization
YTrue = slr(XlinPred) # True labels for the inference interval
YTrue += 0.7*np.random.normal(0,1,XptsPred) # Will be used to validate the prediction
A = np.stack((np.ones(Xpts), Xlin, # Constructing the training features (training Design Matrix)
np.sin(3*Xlin), np.exp(-1*Xlin**2))).T
APred = np.stack((np.ones(XptsPred), XlinPred,# Constructing the inference features (inference Design Matrix)
np.sin(3*XlinPred), np.exp(-1*XlinPred**2))).T
W = np.linalg.pinv(A).dot(Y) # Training the model (weights & bias)
Ypred = APred.dot(W) # Doing inference (making a prediction, based on the model)
plt.plot(XlinPred,Ypred) # Displaying the prediction (regression) in BLUE
plt.scatter(XlinPred,YTrue, color='y') # Displaying the true labels in YELLOW
plt.scatter(Xlin,Y, color='r') # Displaying the training targets in RED
plt.show()
print (W) | 62.705882 | 131 | 0.681363 | 1,401 | 9,594 | 4.66167 | 0.119201 | 0.041801 | 0.034298 | 0.031848 | 0.810136 | 0.802021 | 0.79605 | 0.785944 | 0.785944 | 0.785944 | 0 | 0.051082 | 0.200125 | 9,594 | 153 | 132 | 62.705882 | 0.799974 | 0.432875 | 0 | 0.755906 | 0 | 0 | 0.002631 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.031496 | false | 0 | 0.031496 | 0.031496 | 0.094488 | 0.031496 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
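Every block in the file above fits its weights the same way: build a design matrix A whose columns are basis functions of the input, then solve the least-squares problem min_w ||Aw - y||^2 via the Moore-Penrose pseudoinverse, w = pinv(A) y. A self-contained sketch of just that step (the data and names here are illustrative, not taken from the file above):

import numpy as np

x = np.linspace(0, 1, 50)
y = 2.0 + 3.0 * x + 0.1 * np.random.normal(0, 1, x.size)  # noisy samples of a line
A = np.stack((np.ones(x.size), x)).T   # design matrix: bias column plus x
w = np.linalg.pinv(A).dot(y)           # least-squares weights; expect w ~ [2, 3]
print(w)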
e2480eca23ded4ec67b8a26c2726a9e72aa9a074 | 5,566 | py | Python | autocnet/matcher/tests/test_naive_template.py | readthedocs-assistant/autocnet | 579cccd0edc4cd870b5d9671165ebd830f1112b8 | [
"CC0-1.0"
] | 17 | 2016-11-21T17:07:18.000Z | 2022-01-16T06:14:04.000Z | autocnet/matcher/tests/test_naive_template.py | readthedocs-assistant/autocnet | 579cccd0edc4cd870b5d9671165ebd830f1112b8 | [
"CC0-1.0"
] | 504 | 2015-12-17T18:46:11.000Z | 2021-12-17T19:19:49.000Z | autocnet/matcher/tests/test_naive_template.py | readthedocs-assistant/autocnet | 579cccd0edc4cd870b5d9671165ebd830f1112b8 | [
"CC0-1.0"
] | 42 | 2015-12-09T15:30:15.000Z | 2022-02-24T04:47:46.000Z | import pytest
import unittest
from .. import naive_template
import numpy as np
import cv2
class TestNaiveTemplateAutoReg(unittest.TestCase):
def setUp(self):
self._test_image = np.array(((0, 0, 0, 0, 0, 0, 0, 0, 0),
(0, 0, 0, 0, 0, 0, 0, 0, 0),
(0, 0, 0, 1, 1, 1, 0, 0, 0),
(0, 0, 0, 0, 0, 1, 0, 0, 0),
(0, 0, 0, 0, 0, 1, 0, 0, 0),
(0, 0, 0, 1, 1, 1, 0, 0, 0),
(0, 0, 0, 1, 0, 1, 0, 0, 0),
(0, 0, 0, 1, 0, 0, 0, 0, 0),
(0, 0, 0, 0, 0, 0, 0, 0, 0),
(0, 0, 0, 0, 0, 0, 0, 0, 0),
(0, 0, 0, 0, 0, 0, 0, 0, 0),
(0, 0, 0, 0, 0, 0, 0, 0, 0),
(0, 0, 0, 0, 0, 0, 0, 0, 0)), dtype=np.uint8)
self._shape = np.array(((1, 1, 1),
(1, 0, 1),
(1, 1, 1)), dtype=np.uint8)
def test_subpixel_shift(self):
result_x, result_y, result_strength, _ = naive_template.pattern_match_autoreg(self._shape,
self._test_image,
cv2.TM_CCORR_NORMED)
print(result_x, result_y)
np.testing.assert_almost_equal(result_x, 0.167124, decimal=5)
np.testing.assert_almost_equal(result_y, -1.170976, decimal=5)
class TestNaiveTemplate(unittest.TestCase):
def setUp(self):
# Center is (5, 6)
self._test_image = np.array(((0, 0, 0, 0, 0, 0, 0, 1, 0),
(0, 0, 0, 0, 0, 0, 0, 1, 0),
(1, 1, 1, 0, 0, 0, 0, 1, 0),
(0, 1, 0, 0, 0, 0, 0, 0, 0),
(0, 1, 0, 0, 0, 0, 0, 0, 0),
(0, 0, 0, 0, 0, 0, 0, 0, 0),
(0, 0, 0, 0, 0, 0, 0, 0, 0),
(0, 0, 0, 0, 0, 0, 0, 0, 0),
(0, 0, 0, 0, 0, 0, 1, 1, 1),
(0, 1, 1, 1, 0, 0, 1, 0, 1),
(0, 1, 0, 1, 0, 0, 1, 0, 1),
(0, 1, 1, 1, 0, 0, 1, 0, 1),
(0, 0, 0, 0, 0, 0, 1, 1, 1)), dtype=np.uint8)
# Should yield (-3, -3) offset from image center
self._t_shape = np.array(((1, 1, 1),
(0, 1, 0),
(0, 1, 0)), dtype=np.uint8)
# Should be (3, 4)
self._rect_shape = np.array(((1, 1, 1),
(1, 0, 1),
(1, 0, 1),
(1, 0, 1),
(1, 1, 1)), dtype=np.uint8)
# Should be (-2, 4)
self._square_shape = np.array(((1, 1, 1),
(1, 0, 1),
(1, 1, 1)), dtype=np.uint8)
# Should be (3, -5)
self._vertical_line = np.array(((0, 1, 0),
(0, 1, 0),
(0, 1, 0)), dtype=np.uint8)
def test_t_shape(self):
result_x, result_y, result_strength, _ = naive_template.pattern_match(self._t_shape,
self._test_image, upsampling=1)
# Test offsets
self.assertEqual(result_x, -3)
self.assertEqual(result_y, -3)
# Test Correlation Strength: At least 0.8
self.assertGreaterEqual(result_strength, 0.8, "Returned Correlation Strength of %f" % result_strength)
def test_rect_shape(self):
result_x, result_y, result_strength, _ = naive_template.pattern_match(self._rect_shape,
self._test_image, upsampling=1)
# Test offsets
self.assertEqual(result_x, 3)
self.assertEqual(result_y, 4)
# Test Correlation Strength: At least 0.8
self.assertGreaterEqual(result_strength, 0.8, "Returned Correlation Strength of %f" % result_strength)
def test_square_shape(self):
result_x, result_y, result_strength, _ = naive_template.pattern_match(self._square_shape,
self._test_image, upsampling=1)
# Test offsets
self.assertEqual(result_x, -2)
self.assertEqual(result_y, 4)
# Test Correlation Strength: At least 0.8
self.assertGreaterEqual(result_strength, 0.8, "Returned Correlation Strength of %f" % result_strength)
def test_line_shape(self):
result_x, result_y, result_strength, _ = naive_template.pattern_match(self._vertical_line,
self._test_image, upsampling=1)
# Test offsets
self.assertEqual(result_x, 3)
self.assertEqual(result_y, -5)
# Test Correlation Strength: At least 0.8
self.assertGreaterEqual(result_strength, 0.8, "Returned Correlation Strength of %f" % result_strength)
def tearDown(self):
pass
| 47.57265 | 110 | 0.402803 | 668 | 5,566 | 3.206587 | 0.110778 | 0.160598 | 0.212885 | 0.25957 | 0.81606 | 0.78338 | 0.742297 | 0.731092 | 0.725023 | 0.70775 | 0 | 0.119752 | 0.477902 | 5,566 | 116 | 111 | 47.982759 | 0.617343 | 0.058929 | 0 | 0.494118 | 0 | 0 | 0.026805 | 0 | 0 | 0 | 0 | 0 | 0.164706 | 1 | 0.094118 | false | 0.011765 | 0.058824 | 0 | 0.176471 | 0.011765 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
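For context on what the tests above exercise: autocnet's naive_template helpers presumably build on OpenCV's template-matching primitive. The sketch below shows that raw primitive in isolation — cv2.matchTemplate plus cv2.minMaxLoc — on synthetic data; the center-relative offset convention asserted in the tests is autocnet's own layer and is not reproduced here.

import cv2
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((13, 9), dtype=np.float32) * 0.01  # faint noise avoids degenerate all-zero windows
templ = np.array([[1, 1, 1],
                  [0, 1, 0],
                  [0, 1, 0]], dtype=np.float32)
image[2:5, 3:6] += templ                 # paste template with top-left at row 2, col 3

res = cv2.matchTemplate(image, templ, cv2.TM_CCORR_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(res)
print(max_loc)                           # (x, y) of best top-left corner -> (3, 2)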
e26ca0ae309b3f4e2db8d86c2230a556f8d03f43 | 10,720 | py | Python | Eva/Applicatie/Backend/PythonPrototypes/Applicatie/Test/Filteren/test_filteren_controller.py | triplejingle/cito | 43abeec8a68b7e8791b0d125fc8026dd58a0f7aa | [
"MIT"
] | null | null | null | Eva/Applicatie/Backend/PythonPrototypes/Applicatie/Test/Filteren/test_filteren_controller.py | triplejingle/cito | 43abeec8a68b7e8791b0d125fc8026dd58a0f7aa | [
"MIT"
] | null | null | null | Eva/Applicatie/Backend/PythonPrototypes/Applicatie/Test/Filteren/test_filteren_controller.py | triplejingle/cito | 43abeec8a68b7e8791b0d125fc8026dd58a0f7aa | [
"MIT"
] | null | null | null | from unittest import TestCase
from Applicatie.UsecaseControllers.VisualizeVariablesController import VisualizeVariablesController
from Tool.Test.Excel_Library import Excel
class TestController(TestCase):
base_path = "./sources/"
def test_filter(self):
criteria = "[{\"name\":\"Plaats\",\"variables\":[\"Apeldoorn\",\"Arnhem\"]},{\"name\":\"Niveau\",\"variables\":[\"HBO\"]}]"
excel = Excel()
file_name = "test_filter_criteria.xlsx"
excel.create_document(self.base_path + file_name)
excel.add_worksheet()
niveaus = ["Plaats", "School", "Niveau", "Leerling", "Score", "Label", "Response"]
data = [
["Apeldoorn", "HAN", "HBO", "563631", "15", "test1",
"Litora fringilla turpis hymenaeos tempor interdum pede dapibus ac, dui magna fermentum Habitasse ad sed justo enim placerat sagittis per sagittis in sed adipiscing proin diam duis facilisi adipiscing varius dignissim eu fringilla porta tempor. Pellentesque lorem convallis.Condimentum mus ultrices nostra quis ut commodo diam integer nibh hac. Sociosqu egestas nisl aliquam purus nisl mattis laoreet massa venenatis. Fringilla nisi elementum vehicula. Iaculis sem laoreet lacinia. Interdum Nec augue et aliquam euismod massa hac praesent, mus nec maecenas sollicitudin ante leo metus imperdiet semper vehicula fames interdum sociosqu pretium sit. Duis mi parturient, dignissim platea arcu magnis quis mattis."],
["Arnhem", "HAN", "HBO", "563631", "15", "test1",
"Litora fringilla turpis hymenaeos tempor interdum pede dapibus ac, dui magna fermentum Habitasse ad sed justo enim placerat sagittis per sagittis in sed adipiscing proin diam duis facilisi adipiscing varius dignissim eu fringilla porta tempor. Pellentesque lorem convallis.Condimentum mus ultrices nostra quis ut commodo diam integer nibh hac. Sociosqu egestas nisl aliquam purus nisl mattis laoreet massa venenatis. Fringilla nisi elementum vehicula. Iaculis sem laoreet lacinia. Interdum Nec augue et aliquam euismod massa hac praesent, mus nec maecenas sollicitudin ante leo metus imperdiet semper vehicula fames interdum sociosqu pretium sit. Duis mi parturient, dignissim platea arcu magnis quis mattis."],
["Nijmegen", "HAN", "HBO", "563631", "15", "test1",
"Litora fringilla turpis hymenaeos tempor interdum pede dapibus ac, dui magna fermentum Habitasse ad sed justo enim placerat sagittis per sagittis in sed adipiscing proin diam duis facilisi adipiscing varius dignissim eu fringilla porta tempor. Pellentesque lorem convallis.Condimentum mus ultrices nostra quis ut commodo diam integer nibh hac. Sociosqu egestas nisl aliquam purus nisl mattis laoreet massa venenatis. Fringilla nisi elementum vehicula. Iaculis sem laoreet lacinia. Interdum Nec augue et aliquam euismod massa hac praesent, mus nec maecenas sollicitudin ante leo metus imperdiet semper vehicula fames interdum sociosqu pretium sit. Duis mi parturient, dignissim platea arcu magnis quis mattis."],
["Apeldoorn", "HAN", "HBO", "563631", "15", "test1",
"Litora fringilla turpis hymenaeos tempor interdum pede dapibus ac, dui magna fermentum Habitasse ad sed justo enim placerat sagittis per sagittis in sed adipiscing proin diam duis facilisi adipiscing varius dignissim eu fringilla porta tempor. Pellentesque lorem convallis.Condimentum mus ultrices nostra quis ut commodo diam integer nibh hac. Sociosqu egestas nisl aliquam purus nisl mattis laoreet massa venenatis. Fringilla nisi elementum vehicula. Iaculis sem laoreet lacinia. Interdum Nec augue et aliquam euismod massa hac praesent, mus nec maecenas sollicitudin ante leo metus imperdiet semper vehicula fames interdum sociosqu pretium sit. Duis mi parturient, dignissim platea arcu magnis quis mattis."],
["Doetinchem", "HAN", "HBO", "563631", "15", "test1",
"Litora fringilla turpis hymenaeos tempor interdum pede dapibus ac, dui magna fermentum Habitasse ad sed justo enim placerat sagittis per sagittis in sed adipiscing proin diam duis facilisi adipiscing varius dignissim eu fringilla porta tempor. Pellentesque lorem convallis.Condimentum mus ultrices nostra quis ut commodo diam integer nibh hac. Sociosqu egestas nisl aliquam purus nisl mattis laoreet massa venenatis. Fringilla nisi elementum vehicula. Iaculis sem laoreet lacinia. Interdum Nec augue et aliquam euismod massa hac praesent, mus nec maecenas sollicitudin ante leo metus imperdiet semper vehicula fames interdum sociosqu pretium sit. Duis mi parturient, dignissim platea arcu magnis quis mattis."]
]
test = [niveaus]
for x in range(0, 200):
for rowset in data:
test.append(rowset)
excel.add_data_to_document(test)
excel.save_document()
controller = VisualizeVariablesController()
controller.load(self.base_path + file_name)
expectedResult = {'Plaats': ['Apeldoorn', 'Arnhem', 'Nijmegen', 'Doetinchem'], 'School': ['HAN'],
'Niveau': ['HBO'], 'Leerling': ['563631'], 'Score': ['15'], 'Label': ['test1'], 'Response': [
'Litora fringilla turpis hymenaeos tempor interdum pede dapibus ac, dui magna fermentum Habitasse ad sed justo enim placerat sagittis per sagittis in sed adipiscing proin diam duis facilisi adipiscing varius dignissim eu fringilla porta tempor. Pellentesque lorem convallis.Condimentum mus ultrices nostra quis ut commodo diam integer nibh hac. Sociosqu egestas nisl aliquam purus nisl mattis laoreet massa venenatis. Fringilla nisi elementum vehicula. Iaculis sem laoreet lacinia. Interdum Nec augue et aliquam euismod massa hac praesent, mus nec maecenas sollicitudin ante leo metus imperdiet semper vehicula fames interdum sociosqu pretium sit. Duis mi parturient, dignissim platea arcu magnis quis mattis.']}
criteriaResult = controller.get_criteria()
self.assertEqual(expectedResult, criteriaResult)
print(controller.filter(criteria))
def test_filter_criteria(self):
excel = Excel()
file_name = "test_filter_criteria.xlsx"
excel.create_document(self.base_path + file_name)
excel.add_worksheet()
niveaus = ["Plaats", "School", "Niveau", "Leerling", "Score", "Label", "Response"]
data = [
["Apeldoorn", "HAN", "HBO", "563631", "15", "test1",
"Litora fringilla turpis hymenaeos tempor interdum pede dapibus ac, dui magna fermentum Habitasse ad sed justo enim placerat sagittis per sagittis in sed adipiscing proin diam duis facilisi adipiscing varius dignissim eu fringilla porta tempor. Pellentesque lorem convallis.Condimentum mus ultrices nostra quis ut commodo diam integer nibh hac. Sociosqu egestas nisl aliquam purus nisl mattis laoreet massa venenatis. Fringilla nisi elementum vehicula. Iaculis sem laoreet lacinia. Interdum Nec augue et aliquam euismod massa hac praesent, mus nec maecenas sollicitudin ante leo metus imperdiet semper vehicula fames interdum sociosqu pretium sit. Duis mi parturient, dignissim platea arcu magnis quis mattis."],
["Arnhem", "HAN", "HBO", "563631", "15", "test1",
"Litora fringilla turpis hymenaeos tempor interdum pede dapibus ac, dui magna fermentum Habitasse ad sed justo enim placerat sagittis per sagittis in sed adipiscing proin diam duis facilisi adipiscing varius dignissim eu fringilla porta tempor. Pellentesque lorem convallis.Condimentum mus ultrices nostra quis ut commodo diam integer nibh hac. Sociosqu egestas nisl aliquam purus nisl mattis laoreet massa venenatis. Fringilla nisi elementum vehicula. Iaculis sem laoreet lacinia. Interdum Nec augue et aliquam euismod massa hac praesent, mus nec maecenas sollicitudin ante leo metus imperdiet semper vehicula fames interdum sociosqu pretium sit. Duis mi parturient, dignissim platea arcu magnis quis mattis."],
["Nijmegen", "HAN", "HBO", "563631", "15", "test1",
"Litora fringilla turpis hymenaeos tempor interdum pede dapibus ac, dui magna fermentum Habitasse ad sed justo enim placerat sagittis per sagittis in sed adipiscing proin diam duis facilisi adipiscing varius dignissim eu fringilla porta tempor. Pellentesque lorem convallis.Condimentum mus ultrices nostra quis ut commodo diam integer nibh hac. Sociosqu egestas nisl aliquam purus nisl mattis laoreet massa venenatis. Fringilla nisi elementum vehicula. Iaculis sem laoreet lacinia. Interdum Nec augue et aliquam euismod massa hac praesent, mus nec maecenas sollicitudin ante leo metus imperdiet semper vehicula fames interdum sociosqu pretium sit. Duis mi parturient, dignissim platea arcu magnis quis mattis."],
["Amsterdam", "HAN", "HBO", "563631", "15", "test1",
"Litora fringilla turpis hymenaeos tempor interdum pede dapibus ac, dui magna fermentum Habitasse ad sed justo enim placerat sagittis per sagittis in sed adipiscing proin diam duis facilisi adipiscing varius dignissim eu fringilla porta tempor. Pellentesque lorem convallis.Condimentum mus ultrices nostra quis ut commodo diam integer nibh hac. Sociosqu egestas nisl aliquam purus nisl mattis laoreet massa venenatis. Fringilla nisi elementum vehicula. Iaculis sem laoreet lacinia. Interdum Nec augue et aliquam euismod massa hac praesent, mus nec maecenas sollicitudin ante leo metus imperdiet semper vehicula fames interdum sociosqu pretium sit. Duis mi parturient, dignissim platea arcu magnis quis mattis."],
["Zutphen", "HAN", "HBO", "563631", "15", "test1",
"Litora fringilla turpis hymenaeos tempor interdum pede dapibus ac, dui magna fermentum Habitasse ad sed justo enim placerat sagittis per sagittis in sed adipiscing proin diam duis facilisi adipiscing varius dignissim eu fringilla porta tempor. Pellentesque lorem convallis.Condimentum mus ultrices nostra quis ut commodo diam integer nibh hac. Sociosqu egestas nisl aliquam purus nisl mattis laoreet massa venenatis. Fringilla nisi elementum vehicula. Iaculis sem laoreet lacinia. Interdum Nec augue et aliquam euismod massa hac praesent, mus nec maecenas sollicitudin ante leo metus imperdiet semper vehicula fames interdum sociosqu pretium sit. Duis mi parturient, dignissim platea arcu magnis quis mattis."]
]
test = [niveaus]
for x in range(0, 200):
for rowset in data:
test.append(rowset)
excel.add_data_to_document(test)
excel.save_document()
controller = VisualizeVariablesController()
controller.load(self.base_path + file_name)
controller.get_criteria()
def runTest(self):
self.test_filter()
self.test_filter_criteria()
| 134 | 728 | 0.758675 | 1,366 | 10,720 | 5.927526 | 0.104685 | 0.020378 | 0.028529 | 0.040756 | 0.923799 | 0.923799 | 0.923799 | 0.923799 | 0.923799 | 0.923799 | 0 | 0.012184 | 0.180784 | 10,720 | 79 | 729 | 135.696203 | 0.909816 | 0 | 0 | 0.661765 | 0 | 0.161765 | 0.777052 | 0.026213 | 0 | 0 | 0 | 0 | 0.014706 | 1 | 0.044118 | false | 0 | 0.044118 | 0 | 0.117647 | 0.014706 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
2c58a1d64c5a2dfcd1722c8295d9a3dcf1eed594 | 19,968 | py | Python | solver/WaveEq.py | lonestar686/PINO_Applications | 3a834159e975bb81592365593a3ed57009b9e88f | [
"Apache-2.0"
] | 5 | 2022-03-25T08:19:08.000Z | 2022-03-26T19:41:17.000Z | solver/WaveEq.py | lonestar686/PINO_Applications | 3a834159e975bb81592365593a3ed57009b9e88f | [
"Apache-2.0"
] | null | null | null | solver/WaveEq.py | lonestar686/PINO_Applications | 3a834159e975bb81592365593a3ed57009b9e88f | [
"Apache-2.0"
] | 1 | 2022-03-25T21:33:25.000Z | 2022-03-25T21:33:25.000Z | from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
import os
# import jax
# import jax.numpy as jnp
import numpy as np
import torch
# from jax import random, grad, vmap, jit, hessian, value_and_grad
# from jax.experimental import optimizers
# from jax.experimental.optimizers import adam, exponential_decay
# from jax.experimental.ode import odeint
# from jax.nn import relu, elu, softplus
# from jax.config import config
# # from jax.ops import index_update, index
# from jax import lax
# from jax.lax import while_loop, scan, cond
# from jax.flatten_util import ravel_pytree
import itertools
from functools import partial
from torch.utils import data
from tqdm import trange, tqdm
import matplotlib.pyplot as plt
from scipy.interpolate import griddata
import scipy
import scipy.io
from scipy.io import loadmat
import sys
import h5py
class WaveEq1D():
def __init__(self,
xmin=0,
xmax=1,
# ymin=0,
# ymax=1,
# dx=0.01,
# dy=0.01,
Nx=100,
# Ny=100,
c=1.0,
dt=1e-3,
tend=1.0,
device=None,
dtype=torch.float64,
# phi0='Data/data6.h5'
):
self.xmin = xmin
self.xmax = xmax
self.Nx = Nx
x = torch.linspace(xmin, xmax, Nx+1, device=device, dtype=dtype)
self.x = x
# self.y = y
self.dx = x[1] - x[0]
# self.dy = y[1] - y[0]
# self.X, self.Y = torch.meshgrid(x,y,indexing='ij')
self.c = c
self.phi = torch.zeros_like(self.x[:Nx], device=device)
self.psi = torch.zeros_like(self.phi, device=device)
self.phi0 = torch.zeros_like(self.phi, device=device)
self.dt = dt
self.tend = tend
self.t = 0
self.it = 0
self.Phi = []
self.T = []
self.device = device
# All central differencing functions below are 4th-order accurate.
def CD_i(self, data, axis, dx):
data_m2 = torch.roll(data,shifts=2,dims=axis)
data_m1 = torch.roll(data,shifts=1,dims=axis)
data_p1 = torch.roll(data,shifts=-1,dims=axis)
data_p2 = torch.roll(data,shifts=-2,dims=axis)
data_diff_i = (data_m2 - 8.0*data_m1 + 8.0*data_p1 - data_p2)/(12.0*dx)
return data_diff_i
def CD_ij(self, data, axis_i, axis_j, dx, dy):
data_diff_i = self.CD_i(data,axis_i,dx)
data_diff_ij = self.CD_i(data_diff_i,axis_j,dy)
return data_diff_ij
def CD_ii(self, data, axis, dx):
data_m2 = torch.roll(data,shifts=2,dims=axis)
data_m1 = torch.roll(data,shifts=1,dims=axis)
data_p1 = torch.roll(data,shifts=-1,dims=axis)
data_p2 = torch.roll(data,shifts=-2,dims=axis)
data_diff_ii = (-data_m2 + 16.0*data_m1 - 30.0*data + 16.0*data_p1 -data_p2)/(12.0*dx**2)
return data_diff_ii
def Dx(self, data):
data_dx = self.CD_i(data=data, axis=0, dx=self.dx)
return data_dx
def Dxx(self, data):
data_dxx = self.CD_ii(data, axis=0, dx=self.dx)
return data_dxx
def wave_calc_RHS(self, phi, psi):
phi_xx = self.Dxx(phi)
psi_RHS = self.c**2 * phi_xx # psi_t = c^2 * phi_xx (the simflowny reference code uses plain c here)
phi_RHS = psi
return phi_RHS, psi_RHS
def update_field(self, field, RHS, step_frac):
field_new = field + self.dt*step_frac*RHS
return field_new
def rk4_merge_RHS(self, field, RHS1, RHS2, RHS3, RHS4):
field_new = field + self.dt/6.0*(RHS1 + 2*RHS2 + 2.0*RHS3 + RHS4)
return field_new
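# The two helpers above implement the classic fourth-order Runge-Kutta step
#   y_{n+1} = y_n + (dt/6) * (k1 + 2*k2 + 2*k3 + k4),
# where k2 and k3 are stage slopes evaluated at half steps (step_frac=0.5)
# and k4 at a full step; wave_rk4 below assembles the four stages.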
def wave_rk4(self, phi, psi, t=0):
phi_RHS1, psi_RHS1 = self.wave_calc_RHS(phi, psi)
t1 = t + 0.5*self.dt
# display(phi)
# display(phi_RHS1)
phi1 = self.update_field(phi, phi_RHS1, step_frac=0.5)
psi1 = self.update_field(psi, psi_RHS1, step_frac=0.5)
phi_RHS2, psi_RHS2 = self.wave_calc_RHS(phi1, psi1)
t2 = t + 0.5*self.dt
phi2 = self.update_field(phi, phi_RHS2, step_frac=0.5)
psi2 = self.update_field(psi, psi_RHS2, step_frac=0.5)
phi_RHS3, psi_RHS3 = self.wave_calc_RHS(phi2, psi2)
t3 = t + self.dt
phi3 = self.update_field(phi, phi_RHS3, step_frac=1.0)
psi3 = self.update_field(psi, psi_RHS3, step_frac=1.0)
phi_RHS4, psi_RHS4 = self.wave_calc_RHS(phi3, psi3)
t_new = t + self.dt
psi_new = self.rk4_merge_RHS(psi, psi_RHS1, psi_RHS2, psi_RHS3, psi_RHS4)
phi_new = self.rk4_merge_RHS(phi, phi_RHS1, phi_RHS2, phi_RHS3, phi_RHS4)
return phi_new, psi_new, t_new
def plot_data(self, cmap='jet', vmin=None, vmax=None, fig_num=0, title='', xlabel='', ylabel=''):
plt.ion()
fig = plt.figure(fig_num)
plt.cla()
plt.clf()
plt.plot(self.x, self.phi)
# c = plt.pcolormesh(self.X, self.Y, self.phi, cmap=cmap, vmin=vmin, vmax=vmax, shading='gouraud')
# fig.colorbar(c)
plt.title(title)
plt.xlabel(xlabel)
plt.ylabel(ylabel)
# plt.axis('equal')
# plt.axis('square')
plt.draw()
plt.pause(1e-17)
plt.show()
def wave_driver(self, phi0, save_interval=10, plot_interval=0):
# plot results
# t,it = get_time(time)
# display(phi0[:self.Nx,:self.Ny].shape)
self.phi0 = phi0[:self.Nx]
self.phi = self.phi0
self.t = 0
self.it = 0
self.T = []
self.Phi = []
if plot_interval != 0 and self.it % plot_interval == 0:
self.plot_data(vmin=-1,vmax=1,title=r'\{phi}')
if save_interval != 0 and self.it % save_interval == 0:
self.Phi.append(self.phi)
# self.Psi.append(self.psi)
self.T.append(self.t)
# Compute equations
while self.t < self.tend:
# print(f"t:\t{self.t}")
self.phi, self.psi, self.t = self.wave_rk4(self.phi, self.psi, self.t)
self.it += 1
if plot_interval != 0 and self.it % plot_interval == 0:
self.plot_data(vmin=-1,vmax=1,title=r'\{phi}')
if save_interval != 0 and self.it % save_interval == 0:
self.Phi.append(self.phi)
# self.Psi.append(self.psi)
self.T.append(self.t)
return torch.stack(self.Phi)
class WaveEq2D():
def __init__(self,
xmin=0,
xmax=1,
ymin=0,
ymax=1,
# dx=0.01,
# dy=0.01,
Nx=100,
Ny=100,
c=1.0,
dt=1e-3,
tend=1.0,
device=None,
dtype=torch.float64,
# phi0='Data/data6.h5'
):
self.xmin = xmin
self.xmax = xmax
self.ymin = ymin
self.ymax = ymax
self.Nx = Nx
self.Ny = Ny
x = torch.linspace(xmin, xmax, Nx+1, device=device, dtype=dtype)
y = torch.linspace(ymin, ymax, Ny+1, device=device, dtype=dtype)
self.x = x
self.y = y
self.dx = x[1] - x[0]
self.dy = y[1] - y[0]
self.X, self.Y = torch.meshgrid(x,y,indexing='ij')
self.c = c
self.phi = torch.zeros_like(self.X[:Nx,:Ny], device=device)
self.psi = torch.zeros_like(self.phi, device=device)
self.phi0 = torch.zeros_like(self.phi, device=device)
self.dt = dt
self.tend = tend
self.t = 0
self.it = 0
self.Phi = []
self.T = []
self.device = device
# All central differencing functions below are 4th-order accurate.
def CD_i(self, data, axis, dx):
data_m2 = torch.roll(data,shifts=2,dims=axis)
data_m1 = torch.roll(data,shifts=1,dims=axis)
data_p1 = torch.roll(data,shifts=-1,dims=axis)
data_p2 = torch.roll(data,shifts=-2,dims=axis)
data_diff_i = (data_m2 - 8.0*data_m1 + 8.0*data_p1 - data_p2)/(12.0*dx)
return data_diff_i
def CD_ij(self, data, axis_i, axis_j, dx, dy):
data_diff_i = self.CD_i(data,axis_i,dx)
data_diff_ij = self.CD_i(data_diff_i,axis_j,dy)
return data_diff_ij
def CD_ii(self, data, axis, dx):
data_m2 = torch.roll(data,shifts=2,dims=axis)
data_m1 = torch.roll(data,shifts=1,dims=axis)
data_p1 = torch.roll(data,shifts=-1,dims=axis)
data_p2 = torch.roll(data,shifts=-2,dims=axis)
data_diff_ii = (-data_m2 + 16.0*data_m1 - 30.0*data + 16.0*data_p1 -data_p2)/(12.0*dx**2)
return data_diff_ii
def Dx(self, data):
data_dx = self.CD_i(data=data, axis=0, dx=self.dx)
return data_dx
def Dy(self, data):
data_dy = self.CD_i(data=data, axis=1, dx=self.dy)
return data_dy
def Dxy(self, data):
data_dxy = self.CD_ij(data, axis_i=0, axis_j=1, dx=self.dx, dy=self.dy)
return data_dxy
def Dxx(self, data):
data_dxx = self.CD_ii(data, axis=0, dx=self.dx)
return data_dxx
def Dyy(self, data):
data_dyy = self.CD_ii(data,axis=1, dx=self.dy)
return data_dyy
def wave_calc_RHS(self, phi, psi):
phi_xx = self.Dxx(phi)
phi_yy = self.Dyy(phi)
psi_RHS = self.c**2 * (phi_xx + phi_yy) # psi_t = c^2 * (phi_xx + phi_yy) (the simflowny reference code uses plain c here)
phi_RHS = psi
return phi_RHS, psi_RHS
def update_field(self, field, RHS, step_frac):
field_new = field + self.dt*step_frac*RHS
return field_new
def rk4_merge_RHS(self, field, RHS1, RHS2, RHS3, RHS4):
field_new = field + self.dt/6.0*(RHS1 + 2*RHS2 + 2.0*RHS3 + RHS4)
return field_new
def wave_rk4(self, phi, psi, t=0):
phi_RHS1, psi_RHS1 = self.wave_calc_RHS(phi, psi)
t1 = t + 0.5*self.dt
# display(phi.shape)
# display(phi_RHS1.shape)
phi1 = self.update_field(phi, phi_RHS1, step_frac=0.5)
psi1 = self.update_field(psi, psi_RHS1, step_frac=0.5)
phi_RHS2, psi_RHS2 = self.wave_calc_RHS(phi1, psi1)
t2 = t + 0.5*self.dt
phi2 = self.update_field(phi, phi_RHS2, step_frac=0.5)
psi2 = self.update_field(psi, psi_RHS2, step_frac=0.5)
phi_RHS3, psi_RHS3 = self.wave_calc_RHS(phi2, psi2)
t3 = t + self.dt
phi3 = self.update_field(phi, phi_RHS3, step_frac=1.0)
psi3 = self.update_field(psi, psi_RHS3, step_frac=1.0)
phi_RHS4, psi_RHS4 = self.wave_calc_RHS(phi3, psi3)
t_new = t + self.dt
psi_new = self.rk4_merge_RHS(psi, psi_RHS1, psi_RHS2, psi_RHS3, psi_RHS4)
phi_new = self.rk4_merge_RHS(phi, phi_RHS1, phi_RHS2, phi_RHS3, phi_RHS4)
return phi_new, psi_new, t_new
def plot_data(self, cmap='jet', vmin=None, vmax=None, fig_num=0, title='', xlabel='', ylabel=''):
plt.ion()
fig = plt.figure(fig_num)
plt.cla()
plt.clf()
c = plt.pcolormesh(self.X, self.Y, self.phi, cmap=cmap, vmin=vmin, vmax=vmax, shading='gouraud')
fig.colorbar(c)
plt.title(title)
plt.xlabel(xlabel)
plt.ylabel(ylabel)
plt.axis('equal')
plt.axis('square')
plt.draw()
plt.pause(1e-17)
plt.show()
def wave_driver(self, phi0, save_interval=10, plot_interval=0):
# plot results
# t,it = get_time(time)
# display(phi0[:self.Nx,:self.Ny].shape)
self.phi0 = phi0[:self.Nx,:self.Ny]
self.phi = self.phi0
# reset time and history so repeated drives start fresh (matches WaveEq1D.wave_driver)
self.t = 0
self.it = 0
self.T = []
self.Phi = []
if plot_interval != 0 and self.it % plot_interval == 0:
self.plot_data(vmin=-1,vmax=1,title=r'\{phi}')
if save_interval != 0 and self.it % save_interval == 0:
self.Phi.append(self.phi)
# self.Psi.append(self.psi)
self.T.append(self.t)
# Compute equations
while self.t < self.tend:
# print(f"t:\t{self.t}")
self.phi, self.psi, self.t = self.wave_rk4(self.phi, self.psi, self.t)
self.it += 1
if plot_interval != 0 and self.it % plot_interval == 0:
self.plot_data(vmin=-1,vmax=1,title=r'\{phi}')
if save_interval != 0 and self.it % save_interval == 0:
self.Phi.append(self.phi)
# self.Psi.append(self.psi)
self.T.append(self.t)
return torch.stack(self.Phi)
class WaveEq3D():
def __init__(self,
xmin=0,
xmax=1,
ymin=0,
ymax=1,
zmin=0,
zmax=1,
# dx=0.01,
# dy=0.01,
# dz = 0.01,
Nx=100,
Ny=100,
Nz=100,
c=1.0,
dt=1e-3,
tend=1.0,
device=None,
dtype=torch.float64,
# phi0='Data/data6.h5'
):
self.xmin = xmin
self.xmax = xmax
self.ymin = ymin
self.ymax = ymax
self.zmin = zmin
self.zmax = zmax
self.Nx = Nx
self.Ny = Ny
self.Nz = Nz
x = torch.linspace(xmin, xmax, Nx+1, device=device, dtype=dtype)
y = torch.linspace(ymin, ymax, Ny+1, device=device, dtype=dtype)
z = torch.linspace(zmin, zmax, Nz+1, device=device, dtype=dtype)
self.x = x
self.y = y
self.z = z
self.dx = x[1] - x[0]
self.dy = y[1] - y[0]
self.dz = z[1] - z[0]
self.X, self.Y, self.Z = torch.meshgrid(x,y,z,indexing='ij')
self.c = c
self.phi = torch.zeros_like(self.X[:Nx,:Ny,:Nz], device=device)
self.psi = torch.zeros_like(self.phi, device=device)
self.phi0 = torch.zeros_like(self.phi, device=device)
self.dt = dt
self.tend = tend
self.t = 0
self.it = 0
self.Phi = []
self.T = []
self.device = device
# All central differencing functions below are 4th-order accurate.
def CD_i(self, data, axis, dx):
data_m2 = torch.roll(data,shifts=2,dims=axis)
data_m1 = torch.roll(data,shifts=1,dims=axis)
data_p1 = torch.roll(data,shifts=-1,dims=axis)
data_p2 = torch.roll(data,shifts=-2,dims=axis)
data_diff_i = (data_m2 - 8.0*data_m1 + 8.0*data_p1 - data_p2)/(12.0*dx)
return data_diff_i
def CD_ij(self, data, axis_i, axis_j, dx, dy):
data_diff_i = self.CD_i(data,axis_i,dx)
data_diff_ij = self.CD_i(data_diff_i,axis_j,dy)
return data_diff_ij
def CD_ii(self, data, axis, dx):
data_m2 = torch.roll(data,shifts=2,dims=axis)
data_m1 = torch.roll(data,shifts=1,dims=axis)
data_p1 = torch.roll(data,shifts=-1,dims=axis)
data_p2 = torch.roll(data,shifts=-2,dims=axis)
data_diff_ii = (-data_m2 + 16.0*data_m1 - 30.0*data + 16.0*data_p1 -data_p2)/(12.0*dx**2)
return data_diff_ii
def Dx(self, data):
data_dx = self.CD_i(data=data, axis=0, dx=self.dx)
return data_dx
def Dy(self, data):
data_dy = self.CD_i(data=data, axis=1, dx=self.dy)
return data_dy
def Dz(self, data):
data_dz = self.CD_i(data=data, axis=2, dx=self.dz)
return data_dz
def Dxy(self, data):
data_dxy = self.CD_ij(data, axis_i=0, axis_j=1, dx=self.dx, dy=self.dy)
return data_dxy
def Dxz(self, data):
data_dxz = self.CD_ij(data, axis_i=0, axis_j=2, dx=self.dx, dy=self.dz)
return data_dxz
def Dyz(self, data):
data_dyz = self.CD_ij(data, axis_i=1, axis_j=2, dx=self.dy, dy=self.dz)
return data_dyz
def Dxx(self, data):
data_dxx = self.CD_ii(data, axis=0, dx=self.dx)
return data_dxx
def Dyy(self, data):
data_dyy = self.CD_ii(data,axis=1, dx=self.dy)
return data_dyy
def Dzz(self, data):
data_dzz = self.CD_ii(data, axis=2, dx=self.dz)
return data_dzz
def wave_calc_RHS(self, phi, psi):
phi_xx = self.Dxx(phi)
phi_yy = self.Dyy(phi)
phi_zz = self.Dzz(phi)
psi_RHS = self.c**2 * (phi_xx + phi_yy + phi_zz) # psi_t = c^2 * laplacian(phi) (the simflowny reference code uses plain c here)
phi_RHS = psi
return phi_RHS, psi_RHS
def update_field(self, field, RHS, step_frac):
field_new = field + self.dt*step_frac*RHS
return field_new
def rk4_merge_RHS(self, field, RHS1, RHS2, RHS3, RHS4):
field_new = field + self.dt/6.0*(RHS1 + 2*RHS2 + 2.0*RHS3 + RHS4)
return field_new
def wave_rk4(self, phi, psi, t=0):
phi_RHS1, psi_RHS1 = self.wave_calc_RHS(phi, psi)
t1 = t + 0.5*self.dt
# display(phi.shape)
# display(phi_RHS1.shape)
phi1 = self.update_field(phi, phi_RHS1, step_frac=0.5)
psi1 = self.update_field(psi, psi_RHS1, step_frac=0.5)
phi_RHS2, psi_RHS2 = self.wave_calc_RHS(phi1, psi1)
t2 = t + 0.5*self.dt
phi2 = self.update_field(phi, phi_RHS2, step_frac=0.5)
psi2 = self.update_field(psi, psi_RHS2, step_frac=0.5)
phi_RHS3, psi_RHS3 = self.wave_calc_RHS(phi2, psi2)
t3 = t + self.dt
phi3 = self.update_field(phi, phi_RHS3, step_frac=1.0)
psi3 = self.update_field(psi, psi_RHS3, step_frac=1.0)
phi_RHS4, psi_RHS4 = self.wave_calc_RHS(phi3, psi3)
t_new = t + self.dt
psi_new = self.rk4_merge_RHS(psi, psi_RHS1, psi_RHS2, psi_RHS3, psi_RHS4)
phi_new = self.rk4_merge_RHS(phi, phi_RHS1, phi_RHS2, phi_RHS3, phi_RHS4)
return phi_new, psi_new, t_new
def plot_data(self, cmap='jet', vmin=None, vmax=None, fig_num=0, title='', xlabel='', ylabel=''):
# plt.ion()
# fig = plt.figure(fig_num)
# plt.cla()
# plt.clf()
# c = plt.pcolormesh(self.X, self.Y, self.phi, cmap=cmap, vmin=vmin, vmax=vmax, shading='gouraud')
# fig.colorbar(c)
# plt.title(title)
# plt.xlabel(xlabel)
# plt.ylabel(ylabel)
# plt.axis('equal')
# plt.axis('square')
# plt.draw()
# plt.pause(1e-17)
# plt.show()
pass
def wave_driver(self, phi0, save_interval=10, plot_interval=0):
# plot results
# t,it = get_time(time)
# display(phi0[:self.Nx,:self.Ny].shape)
self.phi0 = phi0[:self.Nx,:self.Ny,:self.Nz]
self.phi = self.phi0
# reset time and history so repeated drives start fresh (matches WaveEq1D.wave_driver)
self.t = 0
self.it = 0
self.T = []
self.Phi = []
if plot_interval != 0 and self.it % plot_interval == 0:
self.plot_data(vmin=-1,vmax=1,title=r'\{phi}')
if save_interval != 0 and self.it % save_interval == 0:
self.Phi.append(self.phi)
# self.Psi.append(self.psi)
self.T.append(self.t)
# Compute equations
while self.t < self.tend:
# print(f"t:\t{self.t}")
self.phi, self.psi, self.t = self.wave_rk4(self.phi, self.psi, self.t)
self.it += 1
if plot_interval != 0 and self.it % plot_interval == 0:
self.plot_data(vmin=-1,vmax=1,title=r'\{phi}')
if save_interval != 0 and self.it % save_interval == 0:
self.Phi.append(self.phi)
# self.Psi.append(self.psi)
self.T.append(self.t)
return torch.stack(self.Phi)
| 34.133333 | 117 | 0.554938 | 3,040 | 19,968 | 3.487171 | 0.074013 | 0.031035 | 0.029431 | 0.043015 | 0.907745 | 0.9035 | 0.899632 | 0.896991 | 0.891897 | 0.891897 | 0 | 0.042719 | 0.317708 | 19,968 | 584 | 118 | 34.191781 | 0.735393 | 0.115785 | 0 | 0.859259 | 0 | 0 | 0.006771 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.11358 | false | 0.002469 | 0.039506 | 0 | 0.259259 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
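A minimal usage sketch for the 1-D solver above, assuming WaveEq1D is importable as defined; the Gaussian initial pulse is an illustrative choice. Note the explicit RK4 / central-difference scheme is only stable when c*dt/dx stays comfortably below 1 (a CFL-type restriction), which these parameters satisfy (0.001/0.005 = 0.2).

import torch

solver = WaveEq1D(Nx=200, c=1.0, dt=1e-3, tend=0.5)
phi0 = torch.exp(-100.0 * (solver.x[:solver.Nx] - 0.5) ** 2)  # Gaussian pulse
snapshots = solver.wave_driver(phi0, save_interval=50, plot_interval=0)
print(snapshots.shape)  # (number of saved snapshots, Nx)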
2c8fdfb6af95792ac0a90ca7c2ed51193fb55f7f | 53 | py | Python | parsers/__init__.py | GBLin5566/FilmFestScheduler | 9e798ca448b4afcfb2ed486ebfb3c4083c50fb49 | [
"MIT"
] | null | null | null | parsers/__init__.py | GBLin5566/FilmFestScheduler | 9e798ca448b4afcfb2ed486ebfb3c4083c50fb49 | [
"MIT"
] | null | null | null | parsers/__init__.py | GBLin5566/FilmFestScheduler | 9e798ca448b4afcfb2ed486ebfb3c4083c50fb49 | [
"MIT"
] | null | null | null | from .golden_horse_parser import golden_horse_parser
| 26.5 | 52 | 0.90566 | 8 | 53 | 5.5 | 0.625 | 0.5 | 0.772727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075472 | 53 | 1 | 53 | 53 | 0.897959 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
2c90f94e05dffd151fc09ebb42a9ce4e862a3585 | 1,470 | py | Python | test_/footprint/tst_helper.py | bfueldner/pykicadlib | 4e78347d4713f55187d2a1d791f4f81e5b6772a8 | [
"MIT"
] | null | null | null | test_/footprint/tst_helper.py | bfueldner/pykicadlib | 4e78347d4713f55187d2a1d791f4f81e5b6772a8 | [
"MIT"
] | null | null | null | test_/footprint/tst_helper.py | bfueldner/pykicadlib | 4e78347d4713f55187d2a1d791f4f81e5b6772a8 | [
"MIT"
] | null | null | null | import unittest
import pykicadlib.footprint.helper
class TestFootprintHelperQuoteStr(unittest.TestCase):
def test_values(self):
self.assertEqual(pykicadlib.footprint.helper.quote_str('Text'), '"Text"')
self.assertEqual(pykicadlib.footprint.helper.quote_str('Text "with" quote'), '"Text ""with"" quote"')
def test_exception(self):
with self.assertRaises(TypeError):
pykicadlib.footprint.helper.quote_str(1)
with self.assertRaises(ValueError):
pykicadlib.footprint.helper.quote_str("\xc3")
class TestFootprintHelperFloatToStr(unittest.TestCase):
def test_values(self):
self.assertEqual(pykicadlib.footprint.helper.float_to_str(0.0), "0.0")
self.assertEqual(pykicadlib.footprint.helper.float_to_str(1000000000.0), "1000000000.0")
self.assertEqual(pykicadlib.footprint.helper.float_to_str(1000000.0), "1000000.0")
self.assertEqual(pykicadlib.footprint.helper.float_to_str(1000.0), "1000.0")
self.assertEqual(pykicadlib.footprint.helper.float_to_str(1.0), "1.0")
self.assertEqual(pykicadlib.footprint.helper.float_to_str(0.001), "0.001")
self.assertEqual(pykicadlib.footprint.helper.float_to_str(0.000001), "0.000001")
self.assertEqual(pykicadlib.footprint.helper.float_to_str(0.000000001), "0.000000001")
def test_exception(self):
with self.assertRaises(TypeError):
pykicadlib.footprint.helper.float_to_str(1)
| 44.545455 | 109 | 0.72449 | 179 | 1,470 | 5.804469 | 0.184358 | 0.256015 | 0.336862 | 0.327238 | 0.750722 | 0.711261 | 0.711261 | 0.699711 | 0.638114 | 0.282964 | 0 | 0.08035 | 0.144898 | 1,470 | 32 | 110 | 45.9375 | 0.746221 | 0 | 0 | 0.25 | 0 | 0 | 0.07415 | 0 | 0 | 0 | 0 | 0 | 0.541667 | 1 | 0.166667 | false | 0 | 0.083333 | 0 | 0.333333 | 0.666667 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 7 |
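The assertions above pin down float_to_str's contract: it accepts floats only (TypeError otherwise) and always prints fixed-point notation, from 1e9 down to 1e-9, never scientific form. A sketch of one implementation consistent with those tests — an assumption for illustration, not necessarily pykicadlib's actual code:

def float_to_str(value):
    # floats only, per the TypeError assertion above
    if not isinstance(value, float):
        raise TypeError('value must be a float')
    text = '{:.9f}'.format(value).rstrip('0')  # nine decimals covers the 1e-9 case
    return text + '0' if text.endswith('.') else text  # keep at least one decimal digit

print(float_to_str(1000000.0))  # '1000000.0'
print(float_to_str(0.000001))   # '0.000001'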
2c9f29bfe70c19876c483b2f477081749ca464d4 | 54,646 | py | Python | migrations_prod/versions/07096bf5dc1b_.py | PlanetaryResources/pid | ecb146cc26c6ade2863bcdc6d271ead3cbcbbe40 | [
"Apache-2.0"
] | 3 | 2019-06-14T18:05:22.000Z | 2020-01-22T17:38:17.000Z | migrations_prod/versions/07096bf5dc1b_.py | PlanetaryResources/pid | ecb146cc26c6ade2863bcdc6d271ead3cbcbbe40 | [
"Apache-2.0"
] | null | null | null | migrations_prod/versions/07096bf5dc1b_.py | PlanetaryResources/pid | ecb146cc26c6ade2863bcdc6d271ead3cbcbbe40 | [
"Apache-2.0"
] | null | null | null | """empty message
Revision ID: 07096bf5dc1b
Revises:
Create Date: 2017-10-09 00:33:47.890401
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = '07096bf5dc1b'
down_revision = None
branch_labels = None
depends_on = None
def upgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.create_table('change_logs',
sa.Column('id', sa.BigInteger(), nullable=False),
sa.PrimaryKeyConstraint('id')
)
op.create_table('companies',
sa.Column('id', sa.BigInteger(), nullable=False),
sa.Column('name', sa.String(), nullable=False),
sa.Column('website', sa.String(), nullable=True),
sa.Column('address', sa.Text(), nullable=True),
sa.Column('notes', sa.Text(), nullable=True),
sa.Column('pri_account_number', sa.String(), nullable=True),
sa.Column('terms', sa.Text(), nullable=True),
sa.Column('alias', sa.String(), nullable=True),
sa.PrimaryKeyConstraint('id'),
sa.UniqueConstraint('name')
)
op.create_table('criticalities',
sa.Column('id', sa.BigInteger(), nullable=False),
sa.Column('name', sa.String(), nullable=False),
sa.Column('description', sa.Text(), nullable=True),
sa.Column('ordering', sa.Integer(), nullable=True),
sa.PrimaryKeyConstraint('id'),
sa.UniqueConstraint('name'),
sa.UniqueConstraint('ordering')
)
op.create_table('dispositions',
sa.Column('id', sa.BigInteger(), nullable=False),
sa.Column('name', sa.String(), nullable=False),
sa.Column('description', sa.Text(), nullable=True),
sa.Column('ordering', sa.Integer(), nullable=True),
sa.PrimaryKeyConstraint('id'),
sa.UniqueConstraint('name'),
sa.UniqueConstraint('ordering')
)
op.create_table('hardware_types',
sa.Column('id', sa.BigInteger(), nullable=False),
sa.Column('name', sa.String(), nullable=False),
sa.Column('description', sa.Text(), nullable=True),
sa.Column('ordering', sa.Integer(), nullable=True),
sa.PrimaryKeyConstraint('id'),
sa.UniqueConstraint('name'),
sa.UniqueConstraint('ordering')
)
op.create_table('materials',
sa.Column('id', sa.BigInteger(), nullable=False),
sa.Column('name', sa.String(), nullable=False),
sa.Column('description', sa.Text(), nullable=True),
sa.Column('ordering', sa.Integer(), nullable=True),
sa.PrimaryKeyConstraint('id'),
sa.UniqueConstraint('name'),
sa.UniqueConstraint('ordering')
)
op.create_table('projects',
sa.Column('id', sa.BigInteger(), nullable=False),
sa.Column('name', sa.String(), nullable=False),
sa.Column('description', sa.Text(), nullable=True),
sa.PrimaryKeyConstraint('id'),
sa.UniqueConstraint('name')
)
op.create_table('references',
sa.Column('id', sa.BigInteger(), nullable=False),
sa.Column('by_id', sa.BigInteger(), nullable=False),
sa.Column('by_class', sa.String(), nullable=False),
sa.Column('to_id', sa.BigInteger(), nullable=False),
sa.Column('to_class', sa.String(), nullable=False),
sa.PrimaryKeyConstraint('id')
)
op.create_table('revision_logs',
sa.Column('id', sa.BigInteger(), nullable=False),
sa.PrimaryKeyConstraint('id')
)
op.create_table('users',
sa.Column('id', sa.BigInteger(), nullable=False),
sa.Column('first_name', sa.String(), nullable=True),
sa.Column('last_name', sa.String(), nullable=True),
sa.Column('username', sa.String(), nullable=False),
sa.Column('email', sa.String(), nullable=False),
sa.Column('roles', sa.String(), nullable=False),
sa.Column('padawan', sa.Boolean(), nullable=True),
sa.Column('supervisor_id', sa.BigInteger(), nullable=True),
sa.Column('last_active', sa.DateTime(), nullable=True),
sa.ForeignKeyConstraint(['supervisor_id'], ['users.id'], ),
sa.PrimaryKeyConstraint('id'),
sa.UniqueConstraint('email'),
sa.UniqueConstraint('username')
)
op.create_table('workflow_logs',
sa.Column('id', sa.BigInteger(), nullable=False),
sa.PrimaryKeyConstraint('id')
)
op.create_table('advanced_searches',
sa.Column('id', sa.BigInteger(), nullable=False),
sa.Column('user_id', sa.BigInteger(), nullable=False),
sa.Column('search_parameters', sa.String(), nullable=True),
sa.Column('name', sa.String(), nullable=True),
sa.ForeignKeyConstraint(['user_id'], ['users.id'], ),
sa.PrimaryKeyConstraint('id')
)
op.create_table('approvers',
sa.Column('id', sa.BigInteger(), nullable=False),
sa.Column('approver_id', sa.BigInteger(), nullable=False),
sa.Column('capacity', sa.String(), nullable=False),
sa.Column('approved_at', sa.DateTime(), nullable=True),
sa.ForeignKeyConstraint(['approver_id'], ['users.id'], ),
sa.PrimaryKeyConstraint('id')
)
op.create_table('bookmarks',
sa.Column('id', sa.BigInteger(), nullable=False),
sa.Column('user_id', sa.BigInteger(), nullable=False),
sa.Column('bookmarked_id', sa.BigInteger(), nullable=False),
sa.Column('bookmarked_class', sa.String(), nullable=False),
sa.ForeignKeyConstraint(['user_id'], ['users.id'], ),
sa.PrimaryKeyConstraint('id')
)
op.create_table('change_log_entries',
sa.Column('id', sa.BigInteger(), nullable=False),
sa.Column('parent_id', sa.BigInteger(), nullable=False),
sa.Column('action', sa.String(), nullable=True),
sa.Column('field', sa.String(), nullable=True),
sa.Column('original_value', sa.Text(), nullable=True),
sa.Column('new_value', sa.Text(), nullable=True),
sa.Column('changed_by_id', sa.BigInteger(), nullable=False),
sa.Column('changed_at', sa.DateTime(), nullable=False),
sa.ForeignKeyConstraint(['changed_by_id'], ['users.id'], ),
sa.ForeignKeyConstraint(['parent_id'], ['change_logs.id'], ),
sa.PrimaryKeyConstraint('id')
)
op.create_table('discrepancies',
sa.Column('id', sa.BigInteger(), nullable=False),
sa.Column('discrepancy_number', sa.String(), nullable=False),
sa.Column('description', sa.Text(), nullable=True),
sa.Column('justification', sa.Text(), nullable=True),
sa.Column('disposition_id', sa.BigInteger(), nullable=True),
sa.Column('state', sa.String(), nullable=True),
sa.Column('created_by_id', sa.BigInteger(), nullable=False),
sa.Column('created_at', sa.DateTime(), nullable=False),
sa.ForeignKeyConstraint(['created_by_id'], ['users.id'], ),
sa.ForeignKeyConstraint(['disposition_id'], ['dispositions.id'], ),
sa.PrimaryKeyConstraint('id')
)
op.create_table('documents',
sa.Column('id', sa.BigInteger(), nullable=False),
sa.Column('path', sa.String(), nullable=False),
sa.Column('title', sa.String(), nullable=True),
sa.Column('description', sa.Text(), nullable=True),
sa.Column('uploaded_by_id', sa.BigInteger(), nullable=False),
sa.Column('uploaded_at', sa.DateTime(), nullable=False),
sa.ForeignKeyConstraint(['uploaded_by_id'], ['users.id'], ),
sa.PrimaryKeyConstraint('id'),
sa.UniqueConstraint('path')
)
op.create_table('images',
sa.Column('id', sa.BigInteger(), nullable=False),
sa.Column('path', sa.String(), nullable=False),
sa.Column('title', sa.String(), nullable=True),
sa.Column('description', sa.Text(), nullable=True),
sa.Column('uploaded_by_id', sa.BigInteger(), nullable=False),
sa.Column('uploaded_at', sa.DateTime(), nullable=False),
sa.ForeignKeyConstraint(['uploaded_by_id'], ['users.id'], ),
sa.PrimaryKeyConstraint('id'),
sa.UniqueConstraint('path')
)
op.create_table('links',
sa.Column('id', sa.BigInteger(), nullable=False),
sa.Column('url', sa.String(), nullable=False),
sa.Column('description', sa.Text(), nullable=True),
sa.Column('created_by_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['created_by_id'], ['users.id'], ),
sa.PrimaryKeyConstraint('id')
)
op.create_table('material_specifications',
sa.Column('id', sa.BigInteger(), nullable=False),
sa.Column('name', sa.String(), nullable=False),
sa.Column('material_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['material_id'], ['materials.id'], ),
sa.PrimaryKeyConstraint('id')
)
op.create_table('plaid_settings',
sa.Column('id', sa.BigInteger(), nullable=False),
sa.Column('efab_user_id', sa.BigInteger(), nullable=False),
sa.Column('mfab_user_id', sa.BigInteger(), nullable=False),
sa.Column('plaid_admin_id', sa.BigInteger(), nullable=False),
sa.Column('name_order', sa.String(), nullable=True),
sa.ForeignKeyConstraint(['efab_user_id'], ['users.id'], ),
sa.ForeignKeyConstraint(['mfab_user_id'], ['users.id'], ),
sa.ForeignKeyConstraint(['plaid_admin_id'], ['users.id'], ),
sa.PrimaryKeyConstraint('id')
)
op.create_table('revision_log_entries',
sa.Column('id', sa.BigInteger(), nullable=False),
sa.Column('parent_id', sa.BigInteger(), nullable=False),
sa.Column('revision', sa.String(), nullable=True),
sa.Column('reason', sa.Text(), nullable=True),
sa.Column('revisioned_by_id', sa.BigInteger(), nullable=False),
sa.Column('revisioned_at', sa.DateTime(), nullable=False),
sa.ForeignKeyConstraint(['parent_id'], ['revision_logs.id'], ),
sa.ForeignKeyConstraint(['revisioned_by_id'], ['users.id'], ),
sa.PrimaryKeyConstraint('id')
)
op.create_table('workflow_log_entries',
sa.Column('id', sa.BigInteger(), nullable=False),
sa.Column('parent_id', sa.BigInteger(), nullable=False),
sa.Column('changed_by_id', sa.BigInteger(), nullable=False),
sa.Column('changed_at', sa.DateTime(), nullable=False),
sa.Column('capacity', sa.String(), nullable=True),
sa.Column('action', sa.String(), nullable=True),
sa.Column('comment', sa.Text(), nullable=True),
sa.ForeignKeyConstraint(['changed_by_id'], ['users.id'], ),
sa.ForeignKeyConstraint(['parent_id'], ['workflow_logs.id'], ),
sa.PrimaryKeyConstraint('id')
)
op.create_table('anomalies',
sa.Column('id', sa.BigInteger(), nullable=False),
sa.Column('self_approved', sa.Boolean(), nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=False),
sa.Column('name', sa.String(), nullable=False),
sa.Column('key', sa.String(), nullable=False),
sa.Column('anomaly_type', sa.String(), nullable=True),
sa.Column('summary', sa.String(), nullable=True),
sa.Column('criticality_id', sa.BigInteger(), nullable=False),
sa.Column('analysis', sa.String(), nullable=True),
sa.Column('corrective_action', sa.String(), nullable=True),
sa.Column('software_version', sa.String(), nullable=True),
sa.Column('project_id', sa.BigInteger(), nullable=True),
sa.Column('state', sa.String(), nullable=True),
sa.Column('thumbnail_id', sa.BigInteger(), nullable=True),
sa.Column('workflow_log_id', sa.BigInteger(), nullable=False),
sa.Column('owner_id', sa.BigInteger(), nullable=False),
sa.Column('created_by_id', sa.BigInteger(), nullable=False),
sa.Column('change_log_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['change_log_id'], ['change_logs.id'], ),
sa.ForeignKeyConstraint(['created_by_id'], ['users.id'], ),
sa.ForeignKeyConstraint(['criticality_id'], ['criticalities.id'], ),
sa.ForeignKeyConstraint(['owner_id'], ['users.id'], ),
sa.ForeignKeyConstraint(['project_id'], ['projects.id'], ),
sa.ForeignKeyConstraint(['thumbnail_id'], ['images.id'], ),
sa.ForeignKeyConstraint(['workflow_log_id'], ['workflow_logs.id'], ),
sa.PrimaryKeyConstraint('id')
)
op.create_table('designs',
sa.Column('id', sa.BigInteger(), nullable=False),
sa.Column('revision', sa.String(), nullable=False),
sa.Column('self_approved', sa.Boolean(), nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=False),
sa.Column('name', sa.String(), nullable=False),
sa.Column('design_number', sa.String(), nullable=False),
sa.Column('summary', sa.String(), nullable=True),
sa.Column('notes', sa.Text(), nullable=True),
sa.Column('project_id', sa.BigInteger(), nullable=False),
sa.Column('export_control', sa.Boolean(), nullable=True),
sa.Column('state', sa.String(), nullable=True),
sa.Column('revision_log_id', sa.BigInteger(), nullable=False),
sa.Column('thumbnail_id', sa.BigInteger(), nullable=True),
sa.Column('workflow_log_id', sa.BigInteger(), nullable=False),
sa.Column('owner_id', sa.BigInteger(), nullable=False),
sa.Column('created_by_id', sa.BigInteger(), nullable=False),
sa.Column('change_log_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['change_log_id'], ['change_logs.id'], ),
sa.ForeignKeyConstraint(['created_by_id'], ['users.id'], ),
sa.ForeignKeyConstraint(['owner_id'], ['users.id'], ),
sa.ForeignKeyConstraint(['project_id'], ['projects.id'], ),
sa.ForeignKeyConstraint(['revision_log_id'], ['revision_logs.id'], ),
sa.ForeignKeyConstraint(['thumbnail_id'], ['images.id'], ),
sa.ForeignKeyConstraint(['workflow_log_id'], ['workflow_logs.id'], ),
sa.PrimaryKeyConstraint('id'),
sa.UniqueConstraint('design_number', 'revision', name='design_number_revision_unique')
)
op.create_table('ecos',
sa.Column('id', sa.BigInteger(), nullable=False),
sa.Column('self_approved', sa.Boolean(), nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=False),
sa.Column('name', sa.String(), nullable=False),
sa.Column('key', sa.String(), nullable=False),
sa.Column('summary', sa.String(), nullable=True),
sa.Column('analysis', sa.String(), nullable=True),
sa.Column('corrective_action', sa.String(), nullable=True),
sa.Column('project_id', sa.BigInteger(), nullable=False),
sa.Column('state', sa.String(), nullable=True),
sa.Column('thumbnail_id', sa.BigInteger(), nullable=True),
sa.Column('workflow_log_id', sa.BigInteger(), nullable=False),
sa.Column('owner_id', sa.BigInteger(), nullable=False),
sa.Column('created_by_id', sa.BigInteger(), nullable=False),
sa.Column('change_log_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['change_log_id'], ['change_logs.id'], ),
sa.ForeignKeyConstraint(['created_by_id'], ['users.id'], ),
sa.ForeignKeyConstraint(['owner_id'], ['users.id'], ),
sa.ForeignKeyConstraint(['project_id'], ['projects.id'], ),
sa.ForeignKeyConstraint(['thumbnail_id'], ['images.id'], ),
sa.ForeignKeyConstraint(['workflow_log_id'], ['workflow_logs.id'], ),
sa.PrimaryKeyConstraint('id')
)
op.create_table('procedures',
sa.Column('id', sa.BigInteger(), nullable=False),
sa.Column('revision', sa.String(), nullable=False),
sa.Column('self_approved', sa.Boolean(), nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=False),
sa.Column('name', sa.String(), nullable=False),
sa.Column('procedure_number', sa.String(), nullable=False),
sa.Column('summary', sa.String(), nullable=True),
sa.Column('project_id', sa.BigInteger(), nullable=False),
sa.Column('state', sa.String(), nullable=True),
sa.Column('revision_log_id', sa.BigInteger(), nullable=False),
sa.Column('thumbnail_id', sa.BigInteger(), nullable=True),
sa.Column('workflow_log_id', sa.BigInteger(), nullable=False),
sa.Column('owner_id', sa.BigInteger(), nullable=False),
sa.Column('created_by_id', sa.BigInteger(), nullable=False),
sa.Column('change_log_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['change_log_id'], ['change_logs.id'], ),
sa.ForeignKeyConstraint(['created_by_id'], ['users.id'], ),
sa.ForeignKeyConstraint(['owner_id'], ['users.id'], ),
sa.ForeignKeyConstraint(['project_id'], ['projects.id'], ),
sa.ForeignKeyConstraint(['revision_log_id'], ['revision_logs.id'], ),
sa.ForeignKeyConstraint(['thumbnail_id'], ['images.id'], ),
sa.ForeignKeyConstraint(['workflow_log_id'], ['workflow_logs.id'], ),
sa.PrimaryKeyConstraint('id'),
sa.UniqueConstraint('procedure_number', 'revision', name='procedure_number_revision_unique')
)
op.create_table('specifications',
sa.Column('id', sa.BigInteger(), nullable=False),
sa.Column('revision', sa.String(), nullable=False),
sa.Column('self_approved', sa.Boolean(), nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=False),
sa.Column('name', sa.String(), nullable=False),
sa.Column('specification_number', sa.String(), nullable=False),
sa.Column('scope', sa.String(), nullable=True),
sa.Column('summary', sa.String(), nullable=True),
sa.Column('state', sa.String(), nullable=True),
sa.Column('revision_log_id', sa.BigInteger(), nullable=False),
sa.Column('thumbnail_id', sa.BigInteger(), nullable=True),
sa.Column('workflow_log_id', sa.BigInteger(), nullable=False),
sa.Column('owner_id', sa.BigInteger(), nullable=False),
sa.Column('created_by_id', sa.BigInteger(), nullable=False),
sa.Column('change_log_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['change_log_id'], ['change_logs.id'], ),
sa.ForeignKeyConstraint(['created_by_id'], ['users.id'], ),
sa.ForeignKeyConstraint(['owner_id'], ['users.id'], ),
sa.ForeignKeyConstraint(['revision_log_id'], ['revision_logs.id'], ),
sa.ForeignKeyConstraint(['thumbnail_id'], ['images.id'], ),
sa.ForeignKeyConstraint(['workflow_log_id'], ['workflow_logs.id'], ),
sa.PrimaryKeyConstraint('id'),
sa.UniqueConstraint('specification_number', 'revision', name='specification_number_revision_unique')
)
op.create_table('tasks',
sa.Column('id', sa.BigInteger(), nullable=False),
sa.Column('task_number', sa.String(), nullable=False),
sa.Column('title', sa.String(), nullable=True),
sa.Column('summary', sa.String(), nullable=True),
sa.Column('urgency', sa.String(), nullable=True),
sa.Column('state', sa.String(), nullable=True),
sa.Column('assigned_to_id', sa.BigInteger(), nullable=False),
sa.Column('requested_by_id', sa.BigInteger(), nullable=False),
sa.Column('requested_on', sa.DateTime(), nullable=False),
sa.Column('need_date', sa.DateTime(), nullable=False),
sa.Column('thumbnail_id', sa.BigInteger(), nullable=True),
sa.Column('change_log_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['assigned_to_id'], ['users.id'], ),
sa.ForeignKeyConstraint(['change_log_id'], ['change_logs.id'], ),
sa.ForeignKeyConstraint(['requested_by_id'], ['users.id'], ),
sa.ForeignKeyConstraint(['thumbnail_id'], ['images.id'], ),
sa.PrimaryKeyConstraint('id')
)
op.create_table('vendor_parts',
sa.Column('id', sa.BigInteger(), nullable=False),
sa.Column('self_approved', sa.Boolean(), nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=False),
sa.Column('name', sa.String(), nullable=False),
sa.Column('part_number', sa.String(), nullable=False),
sa.Column('current_best_estimate', sa.Float(), nullable=False),
sa.Column('uncertainty', sa.Float(), nullable=False),
sa.Column('predicted_best_estimate', sa.Float(), nullable=False),
sa.Column('material_id', sa.BigInteger(), nullable=True),
sa.Column('material_specification_id', sa.BigInteger(), nullable=True),
sa.Column('summary', sa.String(), nullable=True),
sa.Column('notes', sa.Text(), nullable=True),
sa.Column('project_id', sa.BigInteger(), nullable=False),
sa.Column('vendor_id', sa.BigInteger(), nullable=False),
sa.Column('state', sa.String(), nullable=True),
sa.Column('thumbnail_id', sa.BigInteger(), nullable=True),
sa.Column('workflow_log_id', sa.BigInteger(), nullable=False),
sa.Column('owner_id', sa.BigInteger(), nullable=False),
sa.Column('created_by_id', sa.BigInteger(), nullable=False),
sa.Column('change_log_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['change_log_id'], ['change_logs.id'], ),
sa.ForeignKeyConstraint(['created_by_id'], ['users.id'], ),
sa.ForeignKeyConstraint(['material_id'], ['materials.id'], ),
sa.ForeignKeyConstraint(['material_specification_id'], ['material_specifications.id'], ),
sa.ForeignKeyConstraint(['owner_id'], ['users.id'], ),
sa.ForeignKeyConstraint(['project_id'], ['projects.id'], ),
sa.ForeignKeyConstraint(['thumbnail_id'], ['images.id'], ),
sa.ForeignKeyConstraint(['vendor_id'], ['companies.id'], ),
sa.ForeignKeyConstraint(['workflow_log_id'], ['workflow_logs.id'], ),
sa.PrimaryKeyConstraint('id'),
sa.UniqueConstraint('part_number', name='part_number_unique')
)
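# The join tables that follow model many-to-many links; each uses a composite
# primary key over both foreign keys, which also rules out duplicate
# associations.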
op.create_table('anomalies_approvers',
sa.Column('anomaly_id', sa.BigInteger(), nullable=False),
sa.Column('approver_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['anomaly_id'], ['anomalies.id'], ),
sa.ForeignKeyConstraint(['approver_id'], ['approvers.id'], ),
sa.PrimaryKeyConstraint('anomaly_id', 'approver_id')
)
op.create_table('anomalies_documents',
sa.Column('anomaly_id', sa.BigInteger(), nullable=False),
sa.Column('document_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['anomaly_id'], ['anomalies.id'], ),
sa.ForeignKeyConstraint(['document_id'], ['documents.id'], ),
sa.PrimaryKeyConstraint('anomaly_id', 'document_id')
)
op.create_table('anomalies_images',
sa.Column('anomaly_id', sa.BigInteger(), nullable=False),
sa.Column('image_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['anomaly_id'], ['anomalies.id'], ),
sa.ForeignKeyConstraint(['image_id'], ['images.id'], ),
sa.PrimaryKeyConstraint('anomaly_id', 'image_id')
)
op.create_table('anomalies_links',
sa.Column('anomaly_id', sa.BigInteger(), nullable=False),
sa.Column('link_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['anomaly_id'], ['anomalies.id'], ),
sa.ForeignKeyConstraint(['link_id'], ['links.id'], ),
sa.PrimaryKeyConstraint('anomaly_id', 'link_id')
)
op.create_table('as_runs',
sa.Column('id', sa.BigInteger(), nullable=False),
sa.Column('self_approved', sa.Boolean(), nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=False),
sa.Column('name', sa.String(), nullable=True),
sa.Column('as_run_number', sa.Integer(), nullable=False),
sa.Column('procedure_id', sa.BigInteger(), nullable=False),
sa.Column('notes', sa.String(), nullable=True),
sa.Column('software_version', sa.String(), nullable=True),
sa.Column('project_id', sa.BigInteger(), nullable=False),
sa.Column('state', sa.String(), nullable=True),
sa.Column('thumbnail_id', sa.BigInteger(), nullable=True),
sa.Column('workflow_log_id', sa.BigInteger(), nullable=False),
sa.Column('owner_id', sa.BigInteger(), nullable=False),
sa.Column('created_by_id', sa.BigInteger(), nullable=False),
sa.Column('change_log_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['change_log_id'], ['change_logs.id'], ),
sa.ForeignKeyConstraint(['created_by_id'], ['users.id'], ),
sa.ForeignKeyConstraint(['owner_id'], ['users.id'], ),
sa.ForeignKeyConstraint(['procedure_id'], ['procedures.id'], ),
sa.ForeignKeyConstraint(['project_id'], ['projects.id'], ),
sa.ForeignKeyConstraint(['thumbnail_id'], ['images.id'], ),
sa.ForeignKeyConstraint(['workflow_log_id'], ['workflow_logs.id'], ),
sa.PrimaryKeyConstraint('id')
)
op.create_table('designs_anomalies',
sa.Column('design_id', sa.BigInteger(), nullable=False),
sa.Column('anomaly_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['anomaly_id'], ['anomalies.id'], ),
sa.ForeignKeyConstraint(['design_id'], ['designs.id'], ),
sa.PrimaryKeyConstraint('design_id', 'anomaly_id')
)
op.create_table('designs_approvers',
sa.Column('design_id', sa.BigInteger(), nullable=False),
sa.Column('approver_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['approver_id'], ['approvers.id'], ),
sa.ForeignKeyConstraint(['design_id'], ['designs.id'], ),
sa.PrimaryKeyConstraint('design_id', 'approver_id')
)
op.create_table('designs_documents',
sa.Column('design_id', sa.BigInteger(), nullable=False),
sa.Column('document_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['design_id'], ['designs.id'], ),
sa.ForeignKeyConstraint(['document_id'], ['documents.id'], ),
sa.PrimaryKeyConstraint('design_id', 'document_id')
)
op.create_table('designs_ecos',
sa.Column('design_id', sa.BigInteger(), nullable=False),
sa.Column('eco_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['design_id'], ['designs.id'], ),
sa.ForeignKeyConstraint(['eco_id'], ['ecos.id'], ),
sa.PrimaryKeyConstraint('design_id', 'eco_id')
)
op.create_table('designs_images',
sa.Column('design_id', sa.BigInteger(), nullable=False),
sa.Column('image_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['design_id'], ['designs.id'], ),
sa.ForeignKeyConstraint(['image_id'], ['images.id'], ),
sa.PrimaryKeyConstraint('design_id', 'image_id')
)
op.create_table('designs_links',
sa.Column('design_id', sa.BigInteger(), nullable=False),
sa.Column('link_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['design_id'], ['designs.id'], ),
sa.ForeignKeyConstraint(['link_id'], ['links.id'], ),
sa.PrimaryKeyConstraint('design_id', 'link_id')
)
op.create_table('ecos_approvers',
sa.Column('eco_id', sa.BigInteger(), nullable=False),
sa.Column('approver_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['approver_id'], ['approvers.id'], ),
sa.ForeignKeyConstraint(['eco_id'], ['ecos.id'], ),
sa.PrimaryKeyConstraint('eco_id', 'approver_id')
)
op.create_table('ecos_documents',
sa.Column('eco_id', sa.BigInteger(), nullable=False),
sa.Column('document_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['document_id'], ['documents.id'], ),
sa.ForeignKeyConstraint(['eco_id'], ['ecos.id'], ),
sa.PrimaryKeyConstraint('eco_id', 'document_id')
)
op.create_table('ecos_images',
sa.Column('eco_id', sa.BigInteger(), nullable=False),
sa.Column('image_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['eco_id'], ['ecos.id'], ),
sa.ForeignKeyConstraint(['image_id'], ['images.id'], ),
sa.PrimaryKeyConstraint('eco_id', 'image_id')
)
op.create_table('ecos_links',
sa.Column('eco_id', sa.BigInteger(), nullable=False),
sa.Column('link_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['eco_id'], ['ecos.id'], ),
sa.ForeignKeyConstraint(['link_id'], ['links.id'], ),
sa.PrimaryKeyConstraint('eco_id', 'link_id')
)
op.create_table('parts',
sa.Column('id', sa.BigInteger(), nullable=False),
sa.Column('created_at', sa.DateTime(), nullable=False),
sa.Column('part_identifier', sa.Integer(), nullable=False),
sa.Column('name', sa.String(), nullable=True),
sa.Column('current_best_estimate', sa.Float(), nullable=False),
sa.Column('uncertainty', sa.Float(), nullable=False),
sa.Column('predicted_best_estimate', sa.Float(), nullable=False),
sa.Column('design_id', sa.BigInteger(), nullable=False),
sa.Column('material_id', sa.BigInteger(), nullable=True),
sa.Column('material_specification_id', sa.BigInteger(), nullable=True),
sa.Column('inseparable_component', sa.Boolean(), nullable=True),
sa.Column('owner_id', sa.BigInteger(), nullable=False),
sa.Column('created_by_id', sa.BigInteger(), nullable=False),
sa.Column('change_log_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['change_log_id'], ['change_logs.id'], ),
sa.ForeignKeyConstraint(['created_by_id'], ['users.id'], ),
sa.ForeignKeyConstraint(['design_id'], ['designs.id'], ),
sa.ForeignKeyConstraint(['material_id'], ['materials.id'], ),
sa.ForeignKeyConstraint(['material_specification_id'], ['material_specifications.id'], ),
sa.ForeignKeyConstraint(['owner_id'], ['users.id'], ),
sa.PrimaryKeyConstraint('id'),
sa.UniqueConstraint('part_identifier', 'design_id', name='part_identifier_design_unique')
)
op.create_table('procedures_approvers',
sa.Column('procedure_id', sa.BigInteger(), nullable=False),
sa.Column('approver_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['approver_id'], ['approvers.id'], ),
sa.ForeignKeyConstraint(['procedure_id'], ['procedures.id'], ),
sa.PrimaryKeyConstraint('procedure_id', 'approver_id')
)
op.create_table('procedures_documents',
sa.Column('procedure_id', sa.BigInteger(), nullable=False),
sa.Column('document_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['document_id'], ['documents.id'], ),
sa.ForeignKeyConstraint(['procedure_id'], ['procedures.id'], ),
sa.PrimaryKeyConstraint('procedure_id', 'document_id')
)
op.create_table('procedures_images',
sa.Column('procedure_id', sa.BigInteger(), nullable=False),
sa.Column('image_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['image_id'], ['images.id'], ),
sa.ForeignKeyConstraint(['procedure_id'], ['procedures.id'], ),
sa.PrimaryKeyConstraint('procedure_id', 'image_id')
)
op.create_table('procedures_links',
sa.Column('procedure_id', sa.BigInteger(), nullable=False),
sa.Column('link_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['link_id'], ['links.id'], ),
sa.ForeignKeyConstraint(['procedure_id'], ['procedures.id'], ),
sa.PrimaryKeyConstraint('procedure_id', 'link_id')
)
op.create_table('procedures_vendor_parts',
sa.Column('procedure_id', sa.BigInteger(), nullable=False),
sa.Column('vendor_part_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['procedure_id'], ['procedures.id'], ),
sa.ForeignKeyConstraint(['vendor_part_id'], ['vendor_parts.id'], ),
sa.PrimaryKeyConstraint('procedure_id', 'vendor_part_id')
)
op.create_table('specifications_approvers',
sa.Column('specification_id', sa.BigInteger(), nullable=False),
sa.Column('approver_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['approver_id'], ['approvers.id'], ),
sa.ForeignKeyConstraint(['specification_id'], ['specifications.id'], ),
sa.PrimaryKeyConstraint('specification_id', 'approver_id')
)
op.create_table('specifications_documents',
sa.Column('specification_id', sa.BigInteger(), nullable=False),
sa.Column('document_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['document_id'], ['documents.id'], ),
sa.ForeignKeyConstraint(['specification_id'], ['specifications.id'], ),
sa.PrimaryKeyConstraint('specification_id', 'document_id')
)
op.create_table('specifications_images',
sa.Column('specification_id', sa.BigInteger(), nullable=False),
sa.Column('image_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['image_id'], ['images.id'], ),
sa.ForeignKeyConstraint(['specification_id'], ['specifications.id'], ),
sa.PrimaryKeyConstraint('specification_id', 'image_id')
)
op.create_table('specifications_links',
sa.Column('specification_id', sa.BigInteger(), nullable=False),
sa.Column('link_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['link_id'], ['links.id'], ),
sa.ForeignKeyConstraint(['specification_id'], ['specifications.id'], ),
sa.PrimaryKeyConstraint('specification_id', 'link_id')
)
op.create_table('tasks_documents',
sa.Column('task_id', sa.BigInteger(), nullable=False),
sa.Column('document_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['document_id'], ['documents.id'], ),
sa.ForeignKeyConstraint(['task_id'], ['tasks.id'], ),
sa.PrimaryKeyConstraint('task_id', 'document_id')
)
op.create_table('tasks_images',
sa.Column('task_id', sa.BigInteger(), nullable=False),
sa.Column('image_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['image_id'], ['images.id'], ),
sa.ForeignKeyConstraint(['task_id'], ['tasks.id'], ),
sa.PrimaryKeyConstraint('task_id', 'image_id')
)
op.create_table('tasks_links',
sa.Column('task_id', sa.BigInteger(), nullable=False),
sa.Column('link_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['link_id'], ['links.id'], ),
sa.ForeignKeyConstraint(['task_id'], ['tasks.id'], ),
sa.PrimaryKeyConstraint('task_id', 'link_id')
)
op.create_table('vendor_builds',
sa.Column('id', sa.BigInteger(), nullable=False),
sa.Column('created_at', sa.DateTime(), nullable=False),
sa.Column('build_identifier', sa.String(), nullable=False),
sa.Column('vendor_part_id', sa.BigInteger(), nullable=False),
sa.Column('notes', sa.Text(), nullable=True),
sa.Column('purchase_order', sa.String(), nullable=True),
sa.Column('vendor_id', sa.BigInteger(), nullable=False),
sa.Column('manufacturer_id', sa.BigInteger(), nullable=False),
sa.Column('owner_id', sa.BigInteger(), nullable=False),
sa.Column('created_by_id', sa.BigInteger(), nullable=False),
sa.Column('change_log_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['change_log_id'], ['change_logs.id'], ),
sa.ForeignKeyConstraint(['created_by_id'], ['users.id'], ),
sa.ForeignKeyConstraint(['manufacturer_id'], ['companies.id'], ),
sa.ForeignKeyConstraint(['owner_id'], ['users.id'], ),
sa.ForeignKeyConstraint(['vendor_id'], ['companies.id'], ),
sa.ForeignKeyConstraint(['vendor_part_id'], ['vendor_parts.id'], ),
sa.PrimaryKeyConstraint('id'),
sa.UniqueConstraint('build_identifier', 'vendor_part_id', name='build_identifier_vendor_part_unique')
)
op.create_table('vendor_parts_anomalies',
sa.Column('vendor_part_id', sa.BigInteger(), nullable=False),
sa.Column('anomaly_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['anomaly_id'], ['anomalies.id'], ),
sa.ForeignKeyConstraint(['vendor_part_id'], ['vendor_parts.id'], ),
sa.PrimaryKeyConstraint('vendor_part_id', 'anomaly_id')
)
op.create_table('vendor_parts_approvers',
sa.Column('vendor_part_id', sa.BigInteger(), nullable=False),
sa.Column('approver_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['approver_id'], ['approvers.id'], ),
sa.ForeignKeyConstraint(['vendor_part_id'], ['vendor_parts.id'], ),
sa.PrimaryKeyConstraint('vendor_part_id', 'approver_id')
)
op.create_table('vendor_parts_documents',
sa.Column('vendor_part_id', sa.BigInteger(), nullable=False),
sa.Column('document_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['document_id'], ['documents.id'], ),
sa.ForeignKeyConstraint(['vendor_part_id'], ['vendor_parts.id'], ),
sa.PrimaryKeyConstraint('vendor_part_id', 'document_id')
)
op.create_table('vendor_parts_images',
sa.Column('vendor_part_id', sa.BigInteger(), nullable=False),
sa.Column('image_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['image_id'], ['images.id'], ),
sa.ForeignKeyConstraint(['vendor_part_id'], ['vendor_parts.id'], ),
sa.PrimaryKeyConstraint('vendor_part_id', 'image_id')
)
op.create_table('vendor_parts_links',
sa.Column('vendor_part_id', sa.BigInteger(), nullable=False),
sa.Column('link_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['link_id'], ['links.id'], ),
sa.ForeignKeyConstraint(['vendor_part_id'], ['vendor_parts.id'], ),
sa.PrimaryKeyConstraint('vendor_part_id', 'link_id')
)
op.create_table('as_runs_anomalies',
sa.Column('as_run_id', sa.BigInteger(), nullable=False),
sa.Column('anomaly_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['anomaly_id'], ['anomalies.id'], ),
sa.ForeignKeyConstraint(['as_run_id'], ['as_runs.id'], ),
sa.PrimaryKeyConstraint('as_run_id', 'anomaly_id')
)
op.create_table('as_runs_approvers',
sa.Column('as_run_id', sa.BigInteger(), nullable=False),
sa.Column('approver_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['approver_id'], ['approvers.id'], ),
sa.ForeignKeyConstraint(['as_run_id'], ['as_runs.id'], ),
sa.PrimaryKeyConstraint('as_run_id', 'approver_id')
)
op.create_table('as_runs_documents',
sa.Column('as_run_id', sa.BigInteger(), nullable=False),
sa.Column('document_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['as_run_id'], ['as_runs.id'], ),
sa.ForeignKeyConstraint(['document_id'], ['documents.id'], ),
sa.PrimaryKeyConstraint('as_run_id', 'document_id')
)
op.create_table('as_runs_images',
sa.Column('as_run_id', sa.BigInteger(), nullable=False),
sa.Column('image_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['as_run_id'], ['as_runs.id'], ),
sa.ForeignKeyConstraint(['image_id'], ['images.id'], ),
sa.PrimaryKeyConstraint('as_run_id', 'image_id')
)
op.create_table('as_runs_links',
sa.Column('as_run_id', sa.BigInteger(), nullable=False),
sa.Column('link_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['as_run_id'], ['as_runs.id'], ),
sa.ForeignKeyConstraint(['link_id'], ['links.id'], ),
sa.PrimaryKeyConstraint('as_run_id', 'link_id')
)
op.create_table('builds',
sa.Column('id', sa.BigInteger(), nullable=False),
sa.Column('created_at', sa.DateTime(), nullable=False),
sa.Column('build_identifier', sa.String(), nullable=False),
sa.Column('part_id', sa.BigInteger(), nullable=False),
sa.Column('notes', sa.Text(), nullable=True),
sa.Column('purchase_order', sa.String(), nullable=True),
sa.Column('vendor_id', sa.BigInteger(), nullable=False),
sa.Column('owner_id', sa.BigInteger(), nullable=False),
sa.Column('created_by_id', sa.BigInteger(), nullable=False),
sa.Column('change_log_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['change_log_id'], ['change_logs.id'], ),
sa.ForeignKeyConstraint(['created_by_id'], ['users.id'], ),
sa.ForeignKeyConstraint(['owner_id'], ['users.id'], ),
sa.ForeignKeyConstraint(['part_id'], ['parts.id'], ),
sa.ForeignKeyConstraint(['vendor_id'], ['companies.id'], ),
sa.PrimaryKeyConstraint('id'),
sa.UniqueConstraint('build_identifier', 'part_id', name='build_identifier_part_unique')
)
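# part_components is an adjacency list over the bill of materials: parent_id
# names the assembly, and the nullable part_id / vendor_part_id columns name
# the component. Presumably exactly one of the two is set per row; the schema
# itself does not enforce that exclusivity.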
op.create_table('part_components',
sa.Column('id', sa.BigInteger(), nullable=False),
sa.Column('parent_id', sa.BigInteger(), nullable=False),
sa.Column('quantity', sa.Integer(), nullable=False),
sa.Column('part_id', sa.BigInteger(), nullable=True),
sa.Column('vendor_part_id', sa.BigInteger(), nullable=True),
sa.Column('ordering', sa.Integer(), nullable=True),
sa.ForeignKeyConstraint(['parent_id'], ['parts.id'], ),
sa.ForeignKeyConstraint(['part_id'], ['parts.id'], ),
sa.ForeignKeyConstraint(['vendor_part_id'], ['vendor_parts.id'], ),
sa.PrimaryKeyConstraint('id')
)
op.create_table('procedures_parts',
sa.Column('procedure_id', sa.BigInteger(), nullable=False),
sa.Column('part_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['part_id'], ['parts.id'], ),
sa.ForeignKeyConstraint(['procedure_id'], ['procedures.id'], ),
sa.PrimaryKeyConstraint('procedure_id', 'part_id')
)
op.create_table('vendor_builds_discrepancies',
sa.Column('vendor_build_id', sa.BigInteger(), nullable=False),
sa.Column('discrepancy_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['discrepancy_id'], ['discrepancies.id'], ),
sa.ForeignKeyConstraint(['vendor_build_id'], ['vendor_builds.id'], ),
sa.PrimaryKeyConstraint('vendor_build_id', 'discrepancy_id')
)
op.create_table('vendor_builds_documents',
sa.Column('vendor_build_id', sa.BigInteger(), nullable=False),
sa.Column('document_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['document_id'], ['documents.id'], ),
sa.ForeignKeyConstraint(['vendor_build_id'], ['vendor_builds.id'], ),
sa.PrimaryKeyConstraint('vendor_build_id', 'document_id')
)
op.create_table('vendor_products',
sa.Column('id', sa.BigInteger(), nullable=False),
sa.Column('self_approved', sa.Boolean(), nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=False),
sa.Column('serial_number', sa.String(), nullable=False),
sa.Column('vendor_part_id', sa.BigInteger(), nullable=False),
sa.Column('summary', sa.String(), nullable=True),
sa.Column('notes', sa.Text(), nullable=True),
sa.Column('product_type', sa.String(), nullable=True),
sa.Column('measured_mass', sa.Float(), nullable=True),
sa.Column('hardware_type_id', sa.BigInteger(), nullable=False),
sa.Column('project_id', sa.BigInteger(), nullable=False),
sa.Column('vendor_build_id', sa.BigInteger(), nullable=False),
sa.Column('state', sa.String(), nullable=True),
sa.Column('thumbnail_id', sa.BigInteger(), nullable=True),
sa.Column('workflow_log_id', sa.BigInteger(), nullable=False),
sa.Column('owner_id', sa.BigInteger(), nullable=False),
sa.Column('created_by_id', sa.BigInteger(), nullable=False),
sa.Column('change_log_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['change_log_id'], ['change_logs.id'], ),
sa.ForeignKeyConstraint(['created_by_id'], ['users.id'], ),
sa.ForeignKeyConstraint(['hardware_type_id'], ['hardware_types.id'], ),
sa.ForeignKeyConstraint(['owner_id'], ['users.id'], ),
sa.ForeignKeyConstraint(['project_id'], ['projects.id'], ),
sa.ForeignKeyConstraint(['thumbnail_id'], ['images.id'], ),
sa.ForeignKeyConstraint(['vendor_build_id'], ['vendor_builds.id'], ),
sa.ForeignKeyConstraint(['vendor_part_id'], ['vendor_parts.id'], ),
sa.ForeignKeyConstraint(['workflow_log_id'], ['workflow_logs.id'], ),
sa.PrimaryKeyConstraint('id'),
sa.UniqueConstraint('serial_number', 'vendor_part_id', name='serial_number_vendor_part_unique')
)
op.create_table('as_runs_vendor_products',
sa.Column('as_run_id', sa.BigInteger(), nullable=False),
sa.Column('vendor_product_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['as_run_id'], ['as_runs.id'], ),
sa.ForeignKeyConstraint(['vendor_product_id'], ['vendor_products.id'], ),
sa.PrimaryKeyConstraint('as_run_id', 'vendor_product_id')
)
op.create_table('builds_discrepancies',
sa.Column('build_id', sa.BigInteger(), nullable=False),
sa.Column('discrepancy_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['build_id'], ['builds.id'], ),
sa.ForeignKeyConstraint(['discrepancy_id'], ['discrepancies.id'], ),
sa.PrimaryKeyConstraint('build_id', 'discrepancy_id')
)
op.create_table('builds_documents',
sa.Column('build_id', sa.BigInteger(), nullable=False),
sa.Column('document_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['build_id'], ['builds.id'], ),
sa.ForeignKeyConstraint(['document_id'], ['documents.id'], ),
sa.PrimaryKeyConstraint('build_id', 'document_id')
)
op.create_table('products',
sa.Column('id', sa.BigInteger(), nullable=False),
sa.Column('self_approved', sa.Boolean(), nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=False),
sa.Column('serial_number', sa.String(), nullable=False),
sa.Column('part_id', sa.BigInteger(), nullable=False),
sa.Column('revision', sa.String(), nullable=False),
sa.Column('summary', sa.String(), nullable=True),
sa.Column('notes', sa.Text(), nullable=True),
sa.Column('product_type', sa.String(), nullable=True),
sa.Column('measured_mass', sa.Float(), nullable=True),
sa.Column('hardware_type_id', sa.BigInteger(), nullable=False),
sa.Column('project_id', sa.BigInteger(), nullable=False),
sa.Column('build_id', sa.BigInteger(), nullable=False),
sa.Column('state', sa.String(), nullable=True),
sa.Column('thumbnail_id', sa.BigInteger(), nullable=True),
sa.Column('workflow_log_id', sa.BigInteger(), nullable=False),
sa.Column('owner_id', sa.BigInteger(), nullable=False),
sa.Column('created_by_id', sa.BigInteger(), nullable=False),
sa.Column('change_log_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['build_id'], ['builds.id'], ),
sa.ForeignKeyConstraint(['change_log_id'], ['change_logs.id'], ),
sa.ForeignKeyConstraint(['created_by_id'], ['users.id'], ),
sa.ForeignKeyConstraint(['hardware_type_id'], ['hardware_types.id'], ),
sa.ForeignKeyConstraint(['owner_id'], ['users.id'], ),
sa.ForeignKeyConstraint(['part_id'], ['parts.id'], ),
sa.ForeignKeyConstraint(['project_id'], ['projects.id'], ),
sa.ForeignKeyConstraint(['thumbnail_id'], ['images.id'], ),
sa.ForeignKeyConstraint(['workflow_log_id'], ['workflow_logs.id'], ),
sa.PrimaryKeyConstraint('id'),
sa.UniqueConstraint('serial_number', 'part_id', name='serial_number_part_unique')
)
op.create_table('vendor_products_approvers',
sa.Column('vendor_product_id', sa.BigInteger(), nullable=False),
sa.Column('approver_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['approver_id'], ['approvers.id'], ),
sa.ForeignKeyConstraint(['vendor_product_id'], ['vendor_products.id'], ),
sa.PrimaryKeyConstraint('vendor_product_id', 'approver_id')
)
op.create_table('vendor_products_discrepancies',
sa.Column('vendor_product_id', sa.BigInteger(), nullable=False),
sa.Column('discrepancy_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['discrepancy_id'], ['discrepancies.id'], ),
sa.ForeignKeyConstraint(['vendor_product_id'], ['vendor_products.id'], ),
sa.PrimaryKeyConstraint('vendor_product_id', 'discrepancy_id')
)
op.create_table('vendor_products_documents',
sa.Column('vendor_product_id', sa.BigInteger(), nullable=False),
sa.Column('document_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['document_id'], ['documents.id'], ),
sa.ForeignKeyConstraint(['vendor_product_id'], ['vendor_products.id'], ),
sa.PrimaryKeyConstraint('vendor_product_id', 'document_id')
)
op.create_table('vendor_products_images',
sa.Column('vendor_product_id', sa.BigInteger(), nullable=False),
sa.Column('image_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['image_id'], ['images.id'], ),
sa.ForeignKeyConstraint(['vendor_product_id'], ['vendor_products.id'], ),
sa.PrimaryKeyConstraint('vendor_product_id', 'image_id')
)
op.create_table('vendor_products_links',
sa.Column('vendor_product_id', sa.BigInteger(), nullable=False),
sa.Column('link_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['link_id'], ['links.id'], ),
sa.ForeignKeyConstraint(['vendor_product_id'], ['vendor_products.id'], ),
sa.PrimaryKeyConstraint('vendor_product_id', 'link_id')
)
op.create_table('as_runs_products',
sa.Column('as_run_id', sa.BigInteger(), nullable=False),
sa.Column('product_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['as_run_id'], ['as_runs.id'], ),
sa.ForeignKeyConstraint(['product_id'], ['products.id'], ),
sa.PrimaryKeyConstraint('as_run_id', 'product_id')
)
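# extra_product_components and product_components share one shape: a products
# row as parent plus several nullable component references (part, vendor part,
# vendor product, or product). As with part_components, exclusivity among the
# nullable columns is presumably an application-level rule, since no CHECK
# constraint appears here.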
op.create_table('extra_product_components',
sa.Column('id', sa.BigInteger(), nullable=False),
sa.Column('parent_id', sa.BigInteger(), nullable=False),
sa.Column('part_id', sa.BigInteger(), nullable=True),
sa.Column('vendor_part_id', sa.BigInteger(), nullable=True),
sa.Column('vendor_product_id', sa.BigInteger(), nullable=True),
sa.Column('product_id', sa.BigInteger(), nullable=True),
sa.Column('ordering', sa.Integer(), nullable=True),
sa.ForeignKeyConstraint(['parent_id'], ['products.id'], ),
sa.ForeignKeyConstraint(['part_id'], ['parts.id'], ),
sa.ForeignKeyConstraint(['product_id'], ['products.id'], ),
sa.ForeignKeyConstraint(['vendor_part_id'], ['vendor_parts.id'], ),
sa.ForeignKeyConstraint(['vendor_product_id'], ['vendor_products.id'], ),
sa.PrimaryKeyConstraint('id')
)
op.create_table('product_components',
sa.Column('id', sa.BigInteger(), nullable=False),
sa.Column('parent_id', sa.BigInteger(), nullable=False),
sa.Column('part_id', sa.BigInteger(), nullable=True),
sa.Column('vendor_part_id', sa.BigInteger(), nullable=True),
sa.Column('vendor_product_id', sa.BigInteger(), nullable=True),
sa.Column('product_id', sa.BigInteger(), nullable=True),
sa.Column('ordering', sa.Integer(), nullable=True),
sa.ForeignKeyConstraint(['parent_id'], ['products.id'], ),
sa.ForeignKeyConstraint(['part_id'], ['parts.id'], ),
sa.ForeignKeyConstraint(['product_id'], ['products.id'], ),
sa.ForeignKeyConstraint(['vendor_part_id'], ['vendor_parts.id'], ),
sa.ForeignKeyConstraint(['vendor_product_id'], ['vendor_products.id'], ),
sa.PrimaryKeyConstraint('id')
)
op.create_table('products_approvers',
sa.Column('product_id', sa.BigInteger(), nullable=False),
sa.Column('approver_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['approver_id'], ['approvers.id'], ),
sa.ForeignKeyConstraint(['product_id'], ['products.id'], ),
sa.PrimaryKeyConstraint('product_id', 'approver_id')
)
op.create_table('products_discrepancies',
sa.Column('product_id', sa.BigInteger(), nullable=False),
sa.Column('discrepancy_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['discrepancy_id'], ['discrepancies.id'], ),
sa.ForeignKeyConstraint(['product_id'], ['products.id'], ),
sa.PrimaryKeyConstraint('product_id', 'discrepancy_id')
)
op.create_table('products_documents',
sa.Column('product_id', sa.BigInteger(), nullable=False),
sa.Column('document_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['document_id'], ['documents.id'], ),
sa.ForeignKeyConstraint(['product_id'], ['products.id'], ),
sa.PrimaryKeyConstraint('product_id', 'document_id')
)
op.create_table('products_images',
sa.Column('product_id', sa.BigInteger(), nullable=False),
sa.Column('image_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['image_id'], ['images.id'], ),
sa.ForeignKeyConstraint(['product_id'], ['products.id'], ),
sa.PrimaryKeyConstraint('product_id', 'image_id')
)
op.create_table('products_links',
sa.Column('product_id', sa.BigInteger(), nullable=False),
sa.Column('link_id', sa.BigInteger(), nullable=False),
sa.ForeignKeyConstraint(['link_id'], ['links.id'], ),
sa.ForeignKeyConstraint(['product_id'], ['products.id'], ),
sa.PrimaryKeyConstraint('product_id', 'link_id')
)
# ### end Alembic commands ###
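# downgrade() drops tables in reverse dependency order, so association and
# child tables go before the tables their foreign keys target. A typical
# invocation sketch (revision identifiers are project-specific):
#   alembic upgrade head    # apply upgrade() above
#   alembic downgrade -1    # run downgrade() below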
def downgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.drop_table('products_links')
op.drop_table('products_images')
op.drop_table('products_documents')
op.drop_table('products_discrepancies')
op.drop_table('products_approvers')
op.drop_table('product_components')
op.drop_table('extra_product_components')
op.drop_table('as_runs_products')
op.drop_table('vendor_products_links')
op.drop_table('vendor_products_images')
op.drop_table('vendor_products_documents')
op.drop_table('vendor_products_discrepancies')
op.drop_table('vendor_products_approvers')
op.drop_table('products')
op.drop_table('builds_documents')
op.drop_table('builds_discrepancies')
op.drop_table('as_runs_vendor_products')
op.drop_table('vendor_products')
op.drop_table('vendor_builds_documents')
op.drop_table('vendor_builds_discrepancies')
op.drop_table('procedures_parts')
op.drop_table('part_components')
op.drop_table('builds')
op.drop_table('as_runs_links')
op.drop_table('as_runs_images')
op.drop_table('as_runs_documents')
op.drop_table('as_runs_approvers')
op.drop_table('as_runs_anomalies')
op.drop_table('vendor_parts_links')
op.drop_table('vendor_parts_images')
op.drop_table('vendor_parts_documents')
op.drop_table('vendor_parts_approvers')
op.drop_table('vendor_parts_anomalies')
op.drop_table('vendor_builds')
op.drop_table('tasks_links')
op.drop_table('tasks_images')
op.drop_table('tasks_documents')
op.drop_table('specifications_links')
op.drop_table('specifications_images')
op.drop_table('specifications_documents')
op.drop_table('specifications_approvers')
op.drop_table('procedures_vendor_parts')
op.drop_table('procedures_links')
op.drop_table('procedures_images')
op.drop_table('procedures_documents')
op.drop_table('procedures_approvers')
op.drop_table('parts')
op.drop_table('ecos_links')
op.drop_table('ecos_images')
op.drop_table('ecos_documents')
op.drop_table('ecos_approvers')
op.drop_table('designs_links')
op.drop_table('designs_images')
op.drop_table('designs_ecos')
op.drop_table('designs_documents')
op.drop_table('designs_approvers')
op.drop_table('designs_anomalies')
op.drop_table('as_runs')
op.drop_table('anomalies_links')
op.drop_table('anomalies_images')
op.drop_table('anomalies_documents')
op.drop_table('anomalies_approvers')
op.drop_table('vendor_parts')
op.drop_table('tasks')
op.drop_table('specifications')
op.drop_table('procedures')
op.drop_table('ecos')
op.drop_table('designs')
op.drop_table('anomalies')
op.drop_table('workflow_log_entries')
op.drop_table('revision_log_entries')
op.drop_table('plaid_settings')
op.drop_table('material_specifications')
op.drop_table('links')
op.drop_table('images')
op.drop_table('documents')
op.drop_table('discrepancies')
op.drop_table('change_log_entries')
op.drop_table('bookmarks')
op.drop_table('approvers')
op.drop_table('advanced_searches')
op.drop_table('workflow_logs')
op.drop_table('users')
op.drop_table('revision_logs')
op.drop_table('references')
op.drop_table('projects')
op.drop_table('materials')
op.drop_table('hardware_types')
op.drop_table('dispositions')
op.drop_table('criticalities')
op.drop_table('companies')
op.drop_table('change_logs')
# ### end Alembic commands ###
| 50.692022 | 105 | 0.684643 | 6,508 | 54,646 | 5.548402 | 0.029656 | 0.056717 | 0.128361 | 0.163283 | 0.921238 | 0.87535 | 0.823091 | 0.797502 | 0.764712 | 0.733861 | 0 | 0.00071 | 0.123431 | 54,646 | 1,077 | 106 | 50.73909 | 0.753116 | 0.005179 | 0 | 0.599622 | 0 | 0 | 0.256157 | 0.024868 | 0 | 0 | 0 | 0 | 0 | 1 | 0.001889 | false | 0 | 0.001889 | 0 | 0.003777 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
2cb2b217b3b41a634cfd702c39df2dd091c7ddea | 1,865 | py | Python | tests/functional/test_resource.py | System73/tamarco-kafka | df086fa89ae1d90f8cdc7013bff038b144923596 | ["MIT"] | 1 | 2019-09-26T20:56:30.000Z | 2019-09-26T20:56:30.000Z | tests/functional/test_resource.py | System73/tamarco-kafka | df086fa89ae1d90f8cdc7013bff038b144923596 | ["MIT"] | null | null | null | tests/functional/test_resource.py | System73/tamarco-kafka | df086fa89ae1d90f8cdc7013bff038b144923596 | ["MIT"] | null | null | null |
import pytest
from tamarco.resources.basic.status.status_codes import StatusCodes
from tamarco_kafka.input import KafkaInput
from tests.functional.conftest import bootstrap_servers
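# Functional tests for the Tamarco Kafka resource. The kafka_resource fixture
# is assumed to come from tests/functional/conftest.py alongside
# bootstrap_servers; most tests patch get_confluent_kafka_settings with a local
# coroutine and register a KafkaInput consumer before starting the resource.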
@pytest.mark.asyncio
async def test_start_and_stop(kafka_resource):
async def settings_method():
return {"bootstrap_servers": bootstrap_servers}
@KafkaInput(topic="start_and_stop", resource=kafka_resource)
async def consume_cats(message):
pass
kafka_resource.get_confluent_kafka_settings = settings_method
await kafka_resource.start()
await kafka_resource.post_start()
# also stop the resource so the test covers the full start/stop cycle its name promises
await kafka_resource.stop()
@pytest.mark.asyncio
async def test_status_code_pre_start(kafka_resource):
status = await kafka_resource.status()
assert isinstance(status, dict)
assert status == {"status": StatusCodes.NOT_STARTED}
@pytest.mark.asyncio
async def test_status_code_start(kafka_resource):
async def settings_method():
return {"bootstrap_servers": bootstrap_servers}
@KafkaInput(topic="status_code_start", resource=kafka_resource)
async def consume_cats(message):
pass
kafka_resource.get_confluent_kafka_settings = settings_method
await kafka_resource.start()
status = await kafka_resource.status()
assert isinstance(status, dict)
assert status["status"] == StatusCodes.STARTED
@pytest.mark.asyncio
async def test_status_code_stop(kafka_resource):
async def settings_method():
return {"bootstrap_servers": bootstrap_servers}
@KafkaInput(topic="status_code_stop", resource=kafka_resource)
async def consume_cats(message):
pass
kafka_resource.get_confluent_kafka_settings = settings_method
await kafka_resource.start()
await kafka_resource.stop()
status = await kafka_resource.status()
assert isinstance(status, dict)
assert status["status"] == StatusCodes.STOPPED
| 28.692308 | 67 | 0.762466 | 226 | 1,865 | 6 | 0.20354 | 0.172566 | 0.106195 | 0.09292 | 0.832596 | 0.832596 | 0.811209 | 0.811209 | 0.782448 | 0.714602 | 0 | 0 | 0.156032 | 1,865 | 64 | 68 | 29.140625 | 0.861499 | 0 | 0 | 0.636364 | 0 | 0 | 0.062198 | 0 | 0 | 0 | 0 | 0 | 0.136364 | 1 | 0 | false | 0.068182 | 0.090909 | 0 | 0.159091 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
e2beba32ef093ba564e273724bac427b81d28000 | 25,849 | py | Python | eeauditor/auditors/aws/Amazon_VPC_Auditor.py | kbhagi/ElectricEye | 31960e1e1cfb75c5d354844ea9e07d5295442823 | ["Apache-2.0"] | 442 | 2020-03-15T20:56:36.000Z | 2022-03-31T22:13:07.000Z | eeauditor/auditors/aws/Amazon_VPC_Auditor.py | kbhagi/ElectricEye | 31960e1e1cfb75c5d354844ea9e07d5295442823 | ["Apache-2.0"] | 57 | 2020-03-15T22:09:56.000Z | 2022-03-31T13:17:06.000Z | eeauditor/auditors/aws/Amazon_VPC_Auditor.py | kbhagi/ElectricEye | 31960e1e1cfb75c5d354844ea9e07d5295442823 | ["Apache-2.0"] | 59 | 2020-03-15T21:19:10.000Z | 2022-03-31T15:01:31.000Z |
#This file is part of ElectricEye.
#SPDX-License-Identifier: Apache-2.0
#Licensed to the Apache Software Foundation (ASF) under one
#or more contributor license agreements. See the NOTICE file
#distributed with this work for additional information
#regarding copyright ownership. The ASF licenses this file
#to you under the Apache License, Version 2.0 (the
#"License"); you may not use this file except in compliance
#with the License. You may obtain a copy of the License at
#http://www.apache.org/licenses/LICENSE-2.0
#Unless required by applicable law or agreed to in writing,
#software distributed under the License is distributed on an
#"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
#KIND, either express or implied. See the License for the
#specific language governing permissions and limitations
#under the License.
import boto3
import datetime
from check_register import CheckRegister
registry = CheckRegister()
# create the boto3 EC2 client used by every check in this module
ec2 = boto3.client("ec2")
# cached VPC lookup: memoizes one DescribeVpcs call per run in the shared cache dict
def describe_vpcs(cache):
response = cache.get("describe_vpcs")
if response:
return response
cache["describe_vpcs"] = ec2.describe_vpcs(DryRun=False)
return cache["describe_vpcs"]
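# Each check below registers under the "ec2" service and yields one ASFF
# finding per resource. A minimal consumption sketch (an assumption --
# ElectricEye's real runner lives outside this module):
#   for finding in vpc_default_check(cache={}, awsAccountId="111122223333",
#                                    awsRegion="us-east-1", awsPartition="aws"):
#       print(finding["Id"], finding["Compliance"]["Status"])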
@registry.register_check("ec2")
def vpc_default_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str) -> dict:
"""[VPC.1] Consider deleting the Default VPC if unused"""
vpc = describe_vpcs(cache=cache)
for vpcs in vpc["Vpcs"]:
vpcId = str(vpcs["VpcId"])
vpcArn = f"arn:{awsPartition}:ec2:{awsRegion}:{awsAccountId}:vpc/{vpcId}"
defaultVpcCheck = vpcs["IsDefault"]
iso8601Time = datetime.datetime.utcnow().replace(tzinfo=datetime.timezone.utc).isoformat()
if defaultVpcCheck:
finding = {
"SchemaVersion": "2018-10-08",
"Id": vpcArn + "/vpc-is-default-check",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": vpcArn,
"AwsAccountId": awsAccountId,
"Types": ["Software and Configuration Checks/AWS Security Best Practices"],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "MEDIUM"},
"Confidence": 99,
"Title": "[VPC.1] Consider deleting the Default VPC if unused",
"Description": "VPC "
+ vpcId
+ " has been identified as the Default VPC, consider deleting this VPC if it is not necessary for daily operations. The Default VPC in AWS Regions not typically used can serve as a persistence area for malicious actors, additionally, many services will automatically use this VPC which can lead to a degraded security posture. Refer to the remediation instructions if this configuration is not intended",
"Remediation": {
"Recommendation": {
"Text": "For more information on the default VPC refer to the Deleting Your Default Subnets and Default VPC section of the Amazon Virtual Private Cloud User Guide",
"Url": "https://docs.aws.amazon.com/vpc/latest/userguide/default-vpc.html#deleting-default-vpc",
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsEc2Vpc",
"Id": vpcArn,
"Partition": awsPartition,
"Region": awsRegion,
"Details": {"Other": {"VpcId": vpcId}},
}
],
"Compliance": {
"Status": "FAILED",
"RelatedRequirements": [
"NIST CSF PR.AC-5",
"NIST SP 800-53 AC-4",
"NIST SP 800-53 AC-10",
"NIST SP 800-53 SC-7",
"AICPA TSC CC6.1",
"ISO 27001:2013 A.13.1.1",
"ISO 27001:2013 A.13.1.3",
"ISO 27001:2013 A.13.2.1",
"ISO 27001:2013 A.14.1.2",
"ISO 27001:2013 A.14.1.3",
],
},
"Workflow": {"Status": "NEW"},
"RecordState": "ACTIVE",
}
yield finding
else:
finding = {
"SchemaVersion": "2018-10-08",
"Id": vpcArn + "/vpc-is-default-check",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": vpcArn,
"AwsAccountId": awsAccountId,
"Types": ["Software and Configuration Checks/AWS Security Best Practices"],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "INFORMATIONAL"},
"Confidence": 99,
"Title": "[VPC.1] Consider deleting the Default VPC if unused",
"Description": "VPC " + vpcId + " is not the Default VPC",
"Remediation": {
"Recommendation": {
"Text": "For more information on the default VPC refer to the Deleting Your Default Subnets and Default VPC section of the Amazon Virtual Private Cloud User Guide",
"Url": "https://docs.aws.amazon.com/vpc/latest/userguide/default-vpc.html#deleting-default-vpc",
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsEc2Vpc",
"Id": vpcArn,
"Partition": awsPartition,
"Region": awsRegion,
"Details": {"Other": {"VpcId": vpcId}},
}
],
"Compliance": {
"Status": "PASSED",
"RelatedRequirements": [
"NIST CSF PR.AC-5",
"NIST SP 800-53 AC-4",
"NIST SP 800-53 AC-10",
"NIST SP 800-53 SC-7",
"AICPA TSC CC6.1",
"ISO 27001:2013 A.13.1.1",
"ISO 27001:2013 A.13.1.3",
"ISO 27001:2013 A.13.2.1",
"ISO 27001:2013 A.14.1.2",
"ISO 27001:2013 A.14.1.3",
],
},
"Workflow": {"Status": "RESOLVED"},
"RecordState": "ARCHIVED",
}
yield finding
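# VPC.2 queries DescribeFlowLogs with a resource-id filter, so a VPC passes as
# soon as any flow log is attached, regardless of its destination.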
@registry.register_check("ec2")
def vpc_flow_logs_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str) -> dict:
"""[VPC.2] Flow Logs should be enabled for all VPCs"""
vpc = describe_vpcs(cache=cache)
for vpcs in vpc["Vpcs"]:
vpcId = str(vpcs["VpcId"])
vpcArn = f"arn:{awsPartition}:ec2:{awsRegion}:{awsAccountId}:vpc/{vpcId}"
response = ec2.describe_flow_logs(
DryRun=False, Filters=[{"Name": "resource-id", "Values": [vpcId]}]
)
iso8601Time = datetime.datetime.utcnow().replace(tzinfo=datetime.timezone.utc).isoformat()
if not response["FlowLogs"]:
finding = {
"SchemaVersion": "2018-10-08",
"Id": vpcArn + "/vpc-flow-log-check",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": vpcArn,
"AwsAccountId": awsAccountId,
"Types": ["Software and Configuration Checks/AWS Security Best Practices"],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "MEDIUM"},
"Confidence": 99,
"Title": "[VPC.2] Flow Logs should be enabled for all VPCs",
"Description": "VPC "
+ vpcId
+ " does not have flow logging enabled. Refer to the remediation instructions if this configuration is not intended",
"Remediation": {
"Recommendation": {
"Text": "For more information on flow logs refer to the VPC Flow Logs section of the Amazon Virtual Private Cloud User Guide",
"Url": "https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html",
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsEc2Vpc",
"Id": vpcArn,
"Partition": awsPartition,
"Region": awsRegion,
"Details": {"Other": {"VpcId": vpcId}},
}
],
"Compliance": {
"Status": "FAILED",
"RelatedRequirements": [
"NIST CSF DE.AE-3",
"NIST SP 800-53 AU-6",
"NIST SP 800-53 CA-7",
"NIST SP 800-53 IR-4",
"NIST SP 800-53 IR-5",
"NIST SP 800-53 IR-8",
"NIST SP 800-53 SI-4",
"AICPA TSC CC7.2",
"ISO 27001:2013 A.12.4.1",
"ISO 27001:2013 A.16.1.7",
],
},
"Workflow": {"Status": "NEW"},
"RecordState": "ACTIVE",
}
yield finding
else:
finding = {
"SchemaVersion": "2018-10-08",
"Id": vpcArn + "/vpc-flow-log-check",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": vpcArn,
"AwsAccountId": awsAccountId,
"Types": ["Software and Configuration Checks/AWS Security Best Practices"],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "INFORMATIONAL"},
"Confidence": 99,
"Title": "[VPC.2] Flow Logs should be enabled for all VPCs",
"Description": "VPC " + vpcId + " has flow logging enabled.",
"Remediation": {
"Recommendation": {
"Text": "For more information on flow logs refer to the VPC Flow Logs section of the Amazon Virtual Private Cloud User Guide",
"Url": "https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html",
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsEc2Vpc",
"Id": vpcArn,
"Partition": awsPartition,
"Region": awsRegion,
"Details": {"Other": {"VpcId": vpcId}},
}
],
"Compliance": {
"Status": "PASSED",
"RelatedRequirements": [
"NIST CSF DE.AE-3",
"NIST SP 800-53 AU-6",
"NIST SP 800-53 CA-7",
"NIST SP 800-53 IR-4",
"NIST SP 800-53 IR-5",
"NIST SP 800-53 IR-8",
"NIST SP 800-53 SI-4",
"AICPA TSC CC7.2",
"ISO 27001:2013 A.12.4.1",
"ISO 27001:2013 A.16.1.7",
],
},
"Workflow": {"Status": "RESOLVED"},
"RecordState": "ARCHIVED",
}
yield finding
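# The two subnet checks below enumerate every subnet per VPC via
# ec2.describe_subnets filtered on vpc-id and yield one finding per subnet.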
@registry.register_check("ec2")
def subnet_public_ip_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str) -> dict:
"""[VPC.3] Subnets should not automatically map Public IP addresses on launch"""
iso8601Time = datetime.datetime.utcnow().replace(tzinfo=datetime.timezone.utc).isoformat()
vpc = describe_vpcs(cache=cache)
myVpcs = vpc["Vpcs"]
for vpcs in myVpcs:
vpcId = str(vpcs["VpcId"])
# Get subnets for the VPC
for snet in ec2.describe_subnets(Filters=[{'Name': 'vpc-id','Values': [vpcId]}])["Subnets"]:
snetArn = str(snet["SubnetArn"])
snetId = str(snet["SubnetId"])
if snet["MapPublicIpOnLaunch"]:
# This is a failing check
finding = {
"SchemaVersion": "2018-10-08",
"Id": snetArn + "/subnet-map-public-ip-check",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": snetArn,
"AwsAccountId": awsAccountId,
"Types": ["Software and Configuration Checks/AWS Security Best Practices"],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "LOW"},
"Confidence": 99,
"Title": "[VPC.3] Subnets should not automatically map Public IP addresses on launch",
"Description": "Subnet "
+ snetId
+ " maps Public IPs on Launch, consider disabling this to avoid unncessarily exposing workloads to the internet. Refer to the remediation instructions if this configuration is not intended",
"Remediation": {
"Recommendation": {
"Text": "For information on IP addressing refer to the IP Addressing in your VPC section of the Amazon Virtual Private Cloud User Guide",
"Url": "https://docs.aws.amazon.com/vpc/latest/userguide/vpc-ip-addressing.html"
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsEc2Subnet",
"Id": snetArn,
"Partition": awsPartition,
"Region": awsRegion,
"Details": {
"Other": {
"VpcId": vpcId,
"SubnetId": snetId
}
}
}
],
"Compliance": {
"Status": "FAILED",
"RelatedRequirements": [
"NIST CSF PR.AC-5",
"NIST SP 800-53 AC-4",
"NIST SP 800-53 AC-10",
"NIST SP 800-53 SC-7",
"AICPA TSC CC6.1",
"ISO 27001:2013 A.13.1.1",
"ISO 27001:2013 A.13.1.3",
"ISO 27001:2013 A.13.2.1",
"ISO 27001:2013 A.14.1.2",
"ISO 27001:2013 A.14.1.3",
]
},
"Workflow": {"Status": "NEW"},
"RecordState": "ACTIVE"
}
yield finding
else:
finding = {
"SchemaVersion": "2018-10-08",
"Id": snetArn + "/subnet-map-public-ip-check",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": snetArn,
"AwsAccountId": awsAccountId,
"Types": ["Software and Configuration Checks/AWS Security Best Practices"],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "INFORMATIONAL"},
"Confidence": 99,
"Title": "[VPC.3] Subnets should not automatically map Public IP addresses on launch",
"Description": "Subnet "
+ snetId
+ " does not map Public IPs on Launch.",
"Remediation": {
"Recommendation": {
"Text": "For information on IP addressing refer to the IP Addressing in your VPC section of the Amazon Virtual Private Cloud User Guide",
"Url": "https://docs.aws.amazon.com/vpc/latest/userguide/vpc-ip-addressing.html"
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsEc2Subnet",
"Id": snetArn,
"Partition": awsPartition,
"Region": awsRegion,
"Details": {
"Other": {
"VpcId": vpcId,
"SubnetId": snetId
}
}
}
],
"Compliance": {
"Status": "PASSED",
"RelatedRequirements": [
"NIST CSF PR.AC-5",
"NIST SP 800-53 AC-4",
"NIST SP 800-53 AC-10",
"NIST SP 800-53 SC-7",
"AICPA TSC CC6.1",
"ISO 27001:2013 A.13.1.1",
"ISO 27001:2013 A.13.1.3",
"ISO 27001:2013 A.13.2.1",
"ISO 27001:2013 A.14.1.2",
"ISO 27001:2013 A.14.1.3"
]
},
"Workflow": {"Status": "RESOLVED"},
"RecordState": "ARCHIVED"
}
yield finding
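# VPC.4 treats a subnet with at most one free address as exhausted; the
# threshold of 1 below is this auditor's own heuristic, not an AWS limit.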
@registry.register_check("ec2")
def subnet_no_ip_space_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str) -> dict:
"""[VPC.4] Subnets should be monitored for available IP address space"""
iso8601Time = datetime.datetime.utcnow().replace(tzinfo=datetime.timezone.utc).isoformat()
vpc = describe_vpcs(cache=cache)
myVpcs = vpc["Vpcs"]
for vpcs in myVpcs:
vpcId = str(vpcs["VpcId"])
# Get subnets for the VPC
for snet in ec2.describe_subnets(Filters=[{'Name': 'vpc-id','Values': [vpcId]}])["Subnets"]:
snetArn = str(snet["SubnetArn"])
snetId = str(snet["SubnetId"])
if int(snet["AvailableIpAddressCount"]) <= 1:
# This is a failing check
finding = {
"SchemaVersion": "2018-10-08",
"Id": snetArn + "/subnet-map-no-more-ips-check",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": snetArn,
"AwsAccountId": awsAccountId,
"Types": ["Software and Configuration Checks/AWS Security Best Practices"],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "MEDIUM"},
"Confidence": 99,
"Title": "[VPC.4] Subnets should be monitored for available IP address space",
"Description": "Subnet "
+ snetId
+ " does not have any available IP address space, consider terminating unncessary workloads or expanding CIDR capacity to avoid availability losses. Refer to the remediation instructions if this configuration is not intended",
"Remediation": {
"Recommendation": {
"Text": "For information on IP addressing refer to the IP Addressing in your VPC section of the Amazon Virtual Private Cloud User Guide",
"Url": "https://docs.aws.amazon.com/vpc/latest/userguide/vpc-ip-addressing.html"
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsEc2Subnet",
"Id": snetArn,
"Partition": awsPartition,
"Region": awsRegion,
"Details": {
"Other": {
"VpcId": vpcId,
"SubnetId": snetId
}
}
}
],
"Compliance": {
"Status": "FAILED",
"RelatedRequirements": [
"NIST CSF ID.BE-5",
"NIST CSF PR.PT-5",
"NIST SP 800-53 CP-2",
"NIST SP 800-53 CP-11",
"NIST SP 800-53 SA-13",
"NIST SP 800-53 SA14",
"AICPA TSC CC3.1",
"AICPA TSC A1.2",
"ISO 27001:2013 A.11.1.4",
"ISO 27001:2013 A.17.1.1",
"ISO 27001:2013 A.17.1.2",
"ISO 27001:2013 A.17.2.1",
]
},
"Workflow": {"Status": "NEW"},
"RecordState": "ACTIVE"
}
yield finding
else:
finding = {
"SchemaVersion": "2018-10-08",
"Id": snetArn + "/subnet-map-no-more-ips-check",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": snetArn,
"AwsAccountId": awsAccountId,
"Types": ["Software and Configuration Checks/AWS Security Best Practices"],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "INFORMATIONAL"},
"Confidence": 99,
"Title": "[VPC.4] Subnets should be monitored for available IP address space",
"Description": "Subnet "
+ snetId
+ " has available IP address space, well, at least 2 lol...",
"Remediation": {
"Recommendation": {
"Text": "For information on IP addressing refer to the IP Addressing in your VPC section of the Amazon Virtual Private Cloud User Guide",
"Url": "https://docs.aws.amazon.com/vpc/latest/userguide/vpc-ip-addressing.html"
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsEc2Subnet",
"Id": snetArn,
"Partition": awsPartition,
"Region": awsRegion,
"Details": {
"Other": {
"VpcId": vpcId,
"SubnetId": snetId
}
}
}
],
"Compliance": {
"Status": "PASSED",
"RelatedRequirements": [
"NIST CSF ID.BE-5",
"NIST CSF PR.PT-5",
"NIST SP 800-53 CP-2",
"NIST SP 800-53 CP-11",
"NIST SP 800-53 SA-13",
"NIST SP 800-53 SA14",
"AICPA TSC CC3.1",
"AICPA TSC A1.2",
"ISO 27001:2013 A.11.1.4",
"ISO 27001:2013 A.17.1.1",
"ISO 27001:2013 A.17.1.2",
"ISO 27001:2013 A.17.2.1",
]
},
"Workflow": {"Status": "RESOLVED"},
"RecordState": "ARCHIVED"
}
yield finding | 49.519157 | 420 | 0.446207 | 2,205 | 25,849 | 5.217234 | 0.143311 | 0.01669 | 0.025035 | 0.030598 | 0.846227 | 0.84501 | 0.842142 | 0.838752 | 0.838752 | 0.835449 | 0 | 0.061516 | 0.444698 | 25,849 | 522 | 421 | 49.519157 | 0.739933 | 0.045766 | 0 | 0.808247 | 0 | 0.039175 | 0.383431 | 0.04065 | 0 | 0 | 0 | 0 | 0 | 1 | 0.010309 | false | 0.008247 | 0.006186 | 0 | 0.020619 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
e2d6e6307d170bc396be5c5261c7ebaecf448aa1 | 105 | py | Python | rpmfile/__main__.py | cwt/rpmfile | 908719c6647cd0a194b46c9bf7827e3f244090bc | [
"MIT"
] | 16 | 2015-05-29T17:36:22.000Z | 2021-08-30T13:01:09.000Z | rpmfile/__main__.py | cwt/rpmfile | 908719c6647cd0a194b46c9bf7827e3f244090bc | [
"MIT"
] | 30 | 2015-04-14T09:28:09.000Z | 2021-08-30T21:42:01.000Z | rpmfile/__main__.py | cwt/rpmfile | 908719c6647cd0a194b46c9bf7827e3f244090bc | [
"MIT"
] | 29 | 2015-01-04T18:52:36.000Z | 2022-02-17T12:17:33.000Z | from .cli import console_script_entry_point
if __name__ == "__main__":
console_script_entry_point()
| 21 | 43 | 0.790476 | 14 | 105 | 4.928571 | 0.714286 | 0.376812 | 0.521739 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 105 | 4 | 44 | 26.25 | 0.758242 | 0 | 0 | 0 | 0 | 0 | 0.07619 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
e2e9b5dc4fa9bf349be252edf70f08ae6e5c2be0 | 47,377 | py | Python | stocal/tests/test_dsd_rules.py | dannycg1996/stocal | dd9a830dc521e82bff5032e99af0198fbc3f9ff5 | [
"MIT"
] | 1 | 2022-03-09T06:58:30.000Z | 2022-03-09T06:58:30.000Z | stocal/tests/test_dsd_rules.py | dannycg1996/stocal | dd9a830dc521e82bff5032e99af0198fbc3f9ff5 | [
"MIT"
] | null | null | null | stocal/tests/test_dsd_rules.py | dannycg1996/stocal | dd9a830dc521e82bff5032e99af0198fbc3f9ff5 | [
"MIT"
] | null | null | null | """Unit testing for rules in dsd.py """
import unittest
from stocal.tests.test_transitions import TestReactionRule as TestTransitionRule, TestMassAction
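# A minimal sketch (not part of stocal's API; the helper name is our own) of the
# access pattern the assertions below rely on: each rule exposes novel_reactions(),
# and every yielded reaction carries its product species as the keys of its
# .products mapping.
def first_products(rule_cls, *species):
    """Return the product species of the first novel reaction, or an empty set."""
    reactions = list(set(rule_cls.novel_reactions(rule_cls(), *species)))
    return set(reactions[0].products.keys()) if reactions else set()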
class TestBindingRule(unittest.TestCase):
from stocal.examples.dsd import BindingRule
Rule = BindingRule
def test_lakin_r_b_example(self):
# Test that the basic RB example from the Lakin paper can be replicated with the Binding Rule.
r_b_1 = list(list(set(self.Rule.novel_reactions(self.Rule(), "{L' N^* R'}", "<L N^ R>")))[0].products.keys())[0]
self.assertEqual(r_b_1, "{L'}<L>[N^]<R>{R'}")
def test_lakin_r_b_example_diff_order(self):
# Test that the basic RB example from the Lakin paper can be replicated with the Binding Rule regardless of input order.
r_b_2 = list(list(set(self.Rule.novel_reactions(self.Rule(), "<L N^ R>", "{L' N^* R'}")))[0].products.keys())[0]
self.assertEqual(r_b_2, "{L'}<L>[N^]<R>{R'}")
def test_systems_which_can_bind_in_multiple_spots(self):
# Tests that when possible, the Binding Rule yields multiple different bindings from the same inputs.
r_b_3 = set(list(set(self.Rule.novel_reactions(self.Rule(), "{S' N^* L' R'}", "<L N^ M N^>")))[0].products.keys())
exp_res_3 = {"{S'}<L N^ M>[N^]{L' R'}", "{S'}<L>[N^]<M N^>{L' R'}"}
self.assertEqual(set(), set.difference(r_b_3, exp_res_3))
def test_binding_between_strands_where_the_output_has_no_lower_strand_before_the_double_strand(self):
# Test a variant of the Binding Rule, where the yielded result doesn't have a lower strand preceding the d_s.
r_b_4 = list(list(set(self.Rule.novel_reactions(self.Rule(), "{ N^* L' R'}", "<L N^ M>")))[0].products.keys())[0]
self.assertEqual(r_b_4, "<L>[N^]<M>{L' R'}")
def test_binding_between_strands_where_the_output_has_no_lower_strand_after_the_double_strand(self):
# Test a variant of the Binding Rule, where the yielded result doesn't have a lower strand after the d_s.
r_b_5 = list(list(set(self.Rule.novel_reactions(self.Rule(), "{L' N^*}", "<L N^ M>")))[0].products.keys())[0]
self.assertEqual(r_b_5, "{L'}<L>[N^]<M>")
def test_binding_between_strands_where_the_output_has_no_upper_strand_before_the_double_strand(self):
# Test a variant of the Binding Rule, where the yielded result doesn't have an upper strand preceding the d_s.
r_b_6 = list(list(set(self.Rule.novel_reactions(self.Rule(), "{A N^* L' R'}", "<N^ M>")))[0].products.keys())[0]
self.assertEqual(r_b_6, "{A}[N^]<M>{L' R'}")
def test_binding_between_strands_where_the_output_has_no_upper_strand_after_the_double_strand(self):
# Test a variant of the Binding Rule, where the yielded result doesn't have an upper strand after the d_s.
r_b_7 = list(list(set(self.Rule.novel_reactions(self.Rule(), "{L' N^* R}", "<L N^>")))[0].products.keys())[0]
self.assertEqual(r_b_7, "{L'}<L>[N^]{R}")
def test_simplest_binding_case(self):
# Test the simplest strand to strand binding case, where the yielded result has just a single double toehold.
r_b_8 = list(list(set(self.Rule.novel_reactions(self.Rule(), "{N^*}", "<N^>")))[0].products.keys())[0]
self.assertEqual(r_b_8, "[N^]")
def test_lakin_fig_4a_example(self):
# Test an example from Figure 4 of the Lakin paper
r_b_9 = list(list(set(self.Rule.novel_reactions(self.Rule(), "<t^ x y>", "{t^*}[x]:[y u^]")))[0].products.keys())[0]
self.assertEqual(r_b_9, "[t^]<x y>:[x]:[y u^]")
def test_lakin_r_p_example(self):
# Test that the basic RP example from the Lakin paper yields the correct result.
r_b_10 = list(list(set(self.Rule.novel_reactions(self.Rule(), "<L1 N^ S R1>", "{L' N^*}<L>[S R2]<R>{R'}")))[0].products.keys())[0]
self.assertEqual(r_b_10, "{L'}<L1>[N^]<S R1>:<L>[S R2]<R>{R'}")
def test_binding_gate_to_gate_yields_no_results(self):
# Test that binding does not occur between two gates.
r_b_11 = set(self.Rule.novel_reactions(self.Rule(), "{N^* S' N^*}[C^]", "{L'}<L>[N^]<R>[M^]<S'>[A^]{B}"))
self.assertEqual(r_b_11, set())
def test_lower_strand_binding_to_gate(self):
# Test that binding can occur between a lower strand and a gate.
r_b_12 = list(list(set(self.Rule.novel_reactions(self.Rule(), "{A C^*}", "{F}<B C^ G>[H^]<I>{J}")))[0].products.keys())[0]
self.assertEqual(r_b_12, "{A}<B>[C^]::{F}<G>[H^]<I>{J}")
def test_lower_strand_binding_to_second_gate(self):
# Test that binding can occur between a lower strand and a gate, when the gate being bound to is preceded by another gate.
r_b_13 = list(list(set(self.Rule.novel_reactions(self.Rule(), "{F}<B C^ D G>[H^]:{J K}<I L>[M^]<N>{O}", "{A C^* E}")))[0].products.keys())[0]
self.assertEqual(r_b_13, "{A}<B>[C^]{E}::{F}<D G>[H^]:{J K}<I L>[M^]<N>{O}")
def test_upper_strand_binding_to_gate(self):
# Test that binding can occur between an upper strand and a gate.
r_b_14 = list(list(set(self.Rule.novel_reactions(self.Rule(), "<L1 N^ S R1>", "{L' N^*}<L>[S R2]<R>{R'}")))[0].products.keys())[0]
self.assertEqual(r_b_14, "{L'}<L1>[N^]<S R1>:<L>[S R2]<R>{R'}")
class TestUnbindingRule(TestTransitionRule):
from stocal.examples.dsd import UnbindingRule
Rule = UnbindingRule
def test_lakin_r_u_example(self):
# r_u_1 tests that the basic RU example from the Lakin paper yields the correct result.
r_u_1 = set(list(set(self.Rule.novel_reactions(self.Rule(), "{L'}<L>[N^]<R>{R'}")))[0].products.keys())
exp_res_1 = {"{L' N^* R'}", "<L N^ R>"}
self.assertEqual(set(), set.difference(r_u_1, exp_res_1))
def test_unbinding_on_a_gate_containing_more_domains(self):
# Test that RU correctly unbinds a gate which has more domains on its strands.
r_u_2 = set(list(set(self.Rule.novel_reactions(self.Rule(), "{B}<A>[D^]<C^ F>{C^* G}")))[0].products.keys())
exp_res_2 = {"<A D^ C^ F>", "{B D^* C^* G}"}
self.assertEqual(set(), set.difference(r_u_2, exp_res_2))
def test_the_unbinding_of_the_second_gate_in_a_system(self):
# Test a system which consists of two gates, with one possible point of unbinding, on the 2nd gate.
r_u_3 = set(list(set(self.Rule.novel_reactions(self.Rule(), "{L'}<L1>[N^]<S R1>:<L>[S R2]<R>{R'}")))[0].products.keys())
exp_res_3 = {"<L1 N^ S R1>", "{L' N^*}<L>[S R2]<R>{R'}"}
self.assertEqual(set(), set.difference(r_u_3, exp_res_3))
def test_the_unbinding_of_a_system_with_several_possible_unbinding_locations(self):
# Test a system which can unbind at 3 different points.
r_u_4 = set(list(set(self.Rule.novel_reactions(self.Rule(), "{A}<B>[C^]<D>{E}::{F}<G>[H^]<I>{J}::{K}<L>[M^]<N>{O}")))[0].products.keys())
exp_res_4 = {"{F}<B C^ D G>[H^]{J}::{K}<I L>[M^]<N>{O}", "{A C^* E}", "{A}<B>[C^]{E}::{K}<D G H^ I L>[M^]<N>{O}", "{F H^* J}",
"{A}<B>[C^]{E}::{F}<D G>[H^]<I L M^ N>{J}", "{K M^* O}"}
self.assertEqual(set(), set.difference(r_u_4, exp_res_4))
class TestCoveringRule(TestTransitionRule):
from stocal.examples.dsd import CoveringRule
Rule = CoveringRule
def test_lakin_r_c_example_l_to_r(self):
# Tests that the basic RC example from the Lakin paper yields the correct result.
r_c_1 = list(list(set(self.Rule.novel_reactions(self.Rule(), "{L'}<L>[S]<N^ R>{N^* R'}")))[0].products.keys())[0]
self.assertEqual(r_c_1, "{L'}<L>[S N^]<R>{R'}")
def test_lakin_rc_example_r_to_l(self):
# r_c_2 tests that the RC example works in reverse, in the right to left direction.
r_c_2 = list(list(set(self.Rule.novel_reactions(self.Rule(), "{L' N^*}<L N^>[S]<R>{R'}")))[0].products.keys())[0]
self.assertEqual(r_c_2, "{L'}<L>[N^ S]<R>{R'}")
def test_covering_rule_variant_left_to_right(self):
# Test a basic variant of the covering rule RC, applied left to right.
r_c_3 = list(list(set(self.Rule.novel_reactions(self.Rule(), "[S]<N^ R>{N^* R'}")))[0].products.keys())[0]
self.assertEqual(r_c_3, "[S N^]<R>{R'}")
def test_covering_rule_variant_right_to_left(self):
# Test a basic variant of the covering rule RC, applied right to left.
r_c_4 = list(list(set(self.Rule.novel_reactions(self.Rule(), "{R' N^*}<R N^>[S]")))[0].products.keys())[0]
self.assertEqual(r_c_4, "{R'}<R>[N^ S]")
def test_covering_rule_across_gates_which_are_joined_via_upper_strand(self):
# Test the application of the covering rule across gates, left to right, where the gates are joined by an upper strand.
r_c_5 = list(list(set(self.Rule.novel_reactions(self.Rule(), "{A}<B>[C]{E^*}::{F}<E^ D>[G]")))[0].products.keys())[0]
self.assertEqual(r_c_5, "{A}<B>[C E^]::{F}<D>[G]")
# N.B: No right_to_left version of this exists, due to the chosen normal form.
def test_covering_rule_across_gates_which_are_joined_via_upper_strand_variant(self):
# A variation of the last test, where the lower domain which is being bound to is followed by other domains.
r_c_6 = list(list(set(self.Rule.novel_reactions(self.Rule(), "{A}<B>[C]{E^* Z}::{F}<E^ D>[G]")))[0].products.keys())[0]
self.assertEqual(r_c_6, "{A}<B>[C E^]{Z}::{F}<D>[G]")
# N.B: No right_to_left version of this exists, due to the chosen normal form.
def test_covering_rule_left_to_right_variant(self):
# Tests a variation of the covering rule where the gate which is being 'covered' is followed immediately by another d_s.
r_c_7 = list(list(set(self.Rule.novel_reactions(self.Rule(), "{L'}<L>[S]<N^ R>{N^* R'}::[A B]")))[0].products.keys())[0]
self.assertEqual(r_c_7, "{L'}<L>[S N^]<R>{R'}::[A B]")
# N.B: No right_to_left version of this exists, due to the chosen normal form.
def test_covering_rule_left_to_right_variant_2(self):
# Tests a variation of the covering rule where the gate which is being 'covered' lies between other gates.
r_c_8 = list(list(set(self.Rule.novel_reactions(self.Rule(), "[C D]<A>:{L'}<L>[S]<N^ R>{N^* R'}::[A B]")))[0].products.keys())[0]
self.assertEqual(r_c_8, "[C D]<A>:{L'}<L>[S N^]<R>{R'}::[A B]")
# N.B: No right_to_left version of this exists, due to the chosen normal form.
class TestMigrationRule(TestTransitionRule):
from stocal.examples.dsd import MigrationRule
Rule = MigrationRule
def test_lakin_r_m_example_upper_l_to_r(self):
# r_m_1 tests that the basic RM example from the Lakin paper yields the correct result.
r_m_1 = list(list(set(self.Rule.novel_reactions(self.Rule(), "{L'}<L>[S1]<S R2>:<L1>[S S2]<R>{R'}")))[0].products.keys())[0]
self.assertEqual(r_m_1, "{L'}<L>[S1 S]<R2>:<L1 S>[S2]<R>{R'}")
def test_lakin_r_m_example_lower_l_to_r(self):
# Test variants of r_m_1 but when the overhang is on the lower strand:
r_m_2 = list(list(set(self.Rule.novel_reactions(self.Rule(), "{L'}<L>[S1]{S R2}::{L1}[S S2]<R>{R'}")))[0].products.keys())[0]
self.assertEqual(r_m_2, "{L'}<L>[S1 S]{R2}::{L1 S}[S2]<R>{R'}")
def test_lakin_r_m_example_upper_r_to_l(self):
# Tests that the basic RM example from the Lakin paper yields the correct result - when done in reverse (right to left).
r_m_3 = list(list(set(self.Rule.novel_reactions(self.Rule(), "{L'}<L>[S1 S]<R2>:<L1 S>[S2]<R>{R'}")))[0].products.keys())[0]
self.assertEqual(r_m_3, "{L'}<L>[S1]<S R2>:<L1>[S S2]<R>{R'}")
def test_lakin_r_m_example_lower_r_to_l(self):
# Tests that the lower strand version of the RM example from the Lakin paper can be performed right to left (reverse of r_m_2)
r_m_4 = list(list(set(self.Rule.novel_reactions(self.Rule(), "{L'}<L>[S1 S]{R2}::{L1 S}[S2]<R>{R'}")))[0].products.keys())[0]
self.assertEqual(r_m_4, "{L'}<L>[S1]{S R2}::{L1}[S S2]<R>{R'}")
def test_lakin_r_m_example_upper_l_to_r_second_overhang_only_in_result(self):
# Test variant of r_m_1 where R2 is missing (so the result only has one overhang):
r_m_5 = list(list(set(self.Rule.novel_reactions(self.Rule(), "{L'}<L>[S1]<S>:<L1>[S S2]<R>{R'}")))[0].products.keys())[0]
self.assertEqual(r_m_5, "{L'}<L>[S1 S]:<L1 S>[S2]<R>{R'}")
def test_lakin_r_m_example_upper_r_to_l_second_overhang_only_in_input(self):
# Test variant of RM (applied right to left) where the input only has the 2nd overhang. Also reverse of r_m_5.
r_m_6 = list(list(set(self.Rule.novel_reactions(self.Rule(), "{L'}<L>[S1 S]:<L1 S>[S2]<R>{R'}")))[0].products.keys())[0]
self.assertEqual(r_m_6, "{L'}<L>[S1]<S>:<L1>[S S2]<R>{R'}")
def test_lakin_r_m_example_lower_l_to_r_second_overhang_only_in_result(self):
# Test variant of r_m_2 where R2 is missing (so the result only has one overhang):
r_m_7 = list(list(set(self.Rule.novel_reactions(self.Rule(), "{L'}<L>[S1]{S}::{L1}[S S2]<R>{R'}")))[0].products.keys())[0]
self.assertEqual(r_m_7, "{L'}<L>[S1 S]::{L1 S}[S2]<R>{R'}")
def test_lakin_r_m_example_lower_r_to_l_second_overhang_only_in_input(self):
# Test lower strand variant of RM (applied right to left) where the input only has the 2nd overhang. Also reverse of r_m_7.
r_m_8 = list(list(set(self.Rule.novel_reactions(self.Rule(), "{L'}<L>[S1 S]::{L1 S}[S2]<R>{R'}")))[0].products.keys())[0]
self.assertEqual(r_m_8, "{L'}<L>[S1]{S}::{L1}[S S2]<R>{R'}")
def test_lakin_r_m_example_upper_l_to_r_input_only_has_first_overhang(self):
# Test variant of r_m_1 where the input only has the 1st overhang (i.e. L1 is missing)
r_m_9 = list(list(set(self.Rule.novel_reactions(self.Rule(), "{L'}<L>[S1]<S R2>:[S S2]<R>{R'}")))[0].products.keys())[0]
self.assertEqual(r_m_9, "{L'}<L>[S1 S]<R2>:<S>[S2]<R>{R'}")
def test_lakin_r_m_example_upper_r_to_l_result_only_has_first_overhang(self):
# Test r_m_9 applied in reverse (right to left) where the result only has the 1st overhang.
r_m_10 = list(list(set(self.Rule.novel_reactions(self.Rule(), "{L'}<L>[S1 S]<R2>:<S>[S2]<R>{R'}")))[0].products.keys())[0]
self.assertEqual(r_m_10, "{L'}<L>[S1]<S R2>:[S S2]<R>{R'}")
def test_lakin_r_m_example_lower_l_to_r_input_only_has_first_overhang(self):
# Test lower strand variant of r_m_1 where the input only has the 1st overhang (i.e. L1 is missing).
r_m_11 = list(list(set(self.Rule.novel_reactions(self.Rule(), "{L'}<L>[S1]{S R2}::[S S2]<R>{R'}")))[0].products.keys())[0]
self.assertEqual(r_m_11, "{L'}<L>[S1 S]{R2}::{S}[S2]<R>{R'}")
def test_lakin_r_m_example_lower_r_to_l_result_only_has_first_overhang(self):
# Test lower strand variant of RM (applied right to left) where the result only has the 1st overhang. Also reverse of r_m_11
r_m_12 = list(list(set(self.Rule.novel_reactions(self.Rule(), "{L'}<L>[S1 S]{R2}::{S}[S2]<R>{R'}")))[0].products.keys())[0]
self.assertEqual(r_m_12, "{L'}<L>[S1]{S R2}::[S S2]<R>{R'}")
def test_lakin_r_m_example_upper_l_to_r_input_only_has_the_first_overhang_and_result_only_has_second_overhang(self):
# Test variants of r_m_1 where R2 and L1 are missing:
r_m_13 = list(list(set(self.Rule.novel_reactions(self.Rule(), "{L'}<L>[S1]<S>:[S S2]<R>{R'}")))[0].products.keys())[0]
self.assertEqual(r_m_13, "{L'}<L>[S1 S]:<S>[S2]<R>{R'}")
def test_lakin_r_m_example_upper_r_to_l_input_only_has_the_second_overhang_and_result_only_has_first_overhang(self):
# Test variant of Lakin's RM rule (applied right to left) where R2 and L1 are missing. Also reverse of r_m_13.
r_m_14 = list(list(set(self.Rule.novel_reactions(self.Rule(), "{L'}<L>[S1 S]:<S>[S2]<R>{R'}")))[0].products.keys())[0]
self.assertEqual(r_m_14, "{L'}<L>[S1]<S>:[S S2]<R>{R'}")
def test_lakin_r_m_example_lower_l_to_r_input_only_has_the_first_overhang_and_result_only_has_second_overhang(self):
# Test variants of r_m_2 where R2 and L1 are missing:
r_m_15 = list(list(set(self.Rule.novel_reactions(self.Rule(), "{L'}<L>[S1]{S}::[S S2]<R>{R'}")))[0].products.keys())[0]
self.assertEqual(r_m_15, "{L'}<L>[S1 S]::{S}[S2]<R>{R'}")
def test_lakin_r_m_example_lower_r_to_l_input_only_has_the_second_overhang_and_result_only_has_first_overhang(self):
# Test lower strand variant of Lakin's RM rule (applied right to left) where R2 and L1 are missing. Also reverse of r_m_15.
r_m_16 = list(list(set(self.Rule.novel_reactions(self.Rule(), "{L'}<L>[S1 S]::{S}[S2]<R>{R'}")))[0].products.keys())[0]
self.assertEqual(r_m_16, "{L'}<L>[S1]{S}::[S S2]<R>{R'}")
def test_that_migration_rule_is_not_applied_to_lakin_displacement_example_rd(self):
# Test that RM is not applied on the RD example, as the two should be mutually exclusive.
r_m_17 = set(self.Rule.novel_reactions(self.Rule(), "{L'}<L>[S1]<S R>:<L2>[S]<R2>{R'}"))
self.assertEqual(r_m_17, set())
def test_that_migration_rule_is_not_applied_to_lower_strand_version_of_lakin_displacement_example_rd(self):
# Test that RM is not applied to the lower strand version of the RD example, as the rules should be mutually exclusive.
r_m_18 = set(self.Rule.novel_reactions(self.Rule(), "{L'}<L>[S1]{S R}::{L2}[S]<R2>{R'}"))
self.assertEqual(r_m_18, set())
def test_that_migration_rule_is_not_applied_to_lakin_displacement_example_fig_4a(self):
# Test that the RM rule is not applied to the RD example from Figure 4a).
r_m_19 = set(self.Rule.novel_reactions(self.Rule(), "[t^]<x y>:[x]:[y u^]"))
self.assertEqual(r_m_19, set())
def test_that_migration_rule_is_not_applied_to_lower_strand_version_of_lakin_displacement_example_fig_4a(self):
# Test that the RM rule is not applied to the lower strand version of the RD example from Figure 4a).
r_m_20 = set(self.Rule.novel_reactions(self.Rule(), "[t^]{x y}::[x]::[y u^]"))
self.assertEqual(r_m_20, set())
def test_upper_l_to_r_lakin_fig_4a_migration_example_correct(self):
# Test the migration rule is applied correctly to the example from Figure 4a) of Lakin's paper.
r_m_21 = list(list(set(self.Rule.novel_reactions(self.Rule(), "[t^ x]<y>:[y u^]")))[0].products.keys())[0]
self.assertEqual(r_m_21, "[t^ x y]:<y>[u^]")
def test_upper_r_to_l_lakin_fig_4a_migration_example_correct(self):
# Test the migration rule is applied correctly (in reverse) to the example from Figure 4a) of Lakin's paper (i.e. r_m_21).
r_m_22 = list(list(set(self.Rule.novel_reactions(self.Rule(), "[t^ x y]:<y>[u^]")))[0].products.keys())[0]
self.assertEqual(r_m_22, "[t^ x]<y>:[y u^]")
def test_lower_l_to_r_lakin_fig_4a_migration_example_correct(self):
# Test the migration rule is applied correctly to the lower strand version of the example from Figure 4a) of Lakin's paper.
r_m_23 = list(list(set(self.Rule.novel_reactions(self.Rule(), "[t^ x]{y}::[y u^]")))[0].products.keys())[0]
self.assertEqual(r_m_23, "[t^ x y]::{y}[u^]")
def test_lower_r_to_l_lakin_fig_4a_migration_example_correct(self):
# Test that the rule works (right-to-left) on the lower strand version of the Fig. 4a example (i.e. r_m_23) in Lakin's paper.
r_m_24 = list(list(set(self.Rule.novel_reactions(self.Rule(), "[t^ x y]::{y}[u^]")))[0].products.keys())[0]
self.assertEqual(r_m_24, "[t^ x]{y}::[y u^]")
def test_migration_rule_upper_l_to_r_variant_1(self):
# Test system where the 2nd gate involved in migration is connected to a 3rd gate via the upper strand.
r_m_25 = list(list(set(self.Rule.novel_reactions(self.Rule(), "[t^]<x y>:[x v]::[y u^]")))[0].products.keys())[0]
self.assertEqual("[t^ x]<y>:<x>[v]::[y u^]", r_m_25)
def test_migration_rule_upper_r_to_l_variant_1(self):
# Test right-to-left rule application where 2nd gate is connected to a 3rd via an upper strand. Reverse of r_m_25.
r_m_26 = list(list(set(self.Rule.novel_reactions(self.Rule(), "[t^ x]<y>:<x>[v]::[y u^]")))[0].products.keys())[0]
self.assertEqual("[t^]<x y>:[x v]::[y u^]", r_m_26)
def test_migration_rule_lower_l_to_r_variant_1(self):
# Test system where the 2nd gate involved in migration is connected to a 3rd gate via the lower strand.
r_m_27 = list(list(set(self.Rule.novel_reactions(self.Rule(), "[t^]{x y}::[x v]:[y u^]")))[0].products.keys())[0]
self.assertEqual("[t^ x]{y}::{x}[v]:[y u^]", r_m_27)
def test_migration_rule_lower_r_to_l_variant_1(self):
# Test right-to-left rule application of a system where the 2nd gate involved connects to a 3rd gate via the lower strand.
# Reverse of r_m_27
r_m_28 = list(list(set(self.Rule.novel_reactions(self.Rule(), "[t^ x]{y}::{x}[v]:[y u^]")))[0].products.keys())[0]
self.assertEqual("[t^]{x y}::[x v]:[y u^]", r_m_28)
class TestDisplacementRule(TestTransitionRule):
from stocal.examples.dsd import DisplacementRule
Rule = DisplacementRule
def test_lakin_r_d_example_upper_l_to_r(self):
# Test the rule reduction example RD from Lakin's paper.
r_d_1 = set(list(set(self.Rule.novel_reactions(self.Rule(), "{L'}<L>[S1]<S R>:<L2>[S]<R2>{R'}")))[0].products.keys())
exp_res_1 = {"<L2 S R2>", "{L'}<L>[S1 S]<R>{R'}"}
self.assertEqual(set(), set.difference(r_d_1, exp_res_1))
def test_lakin_r_d_example_upper_r_to_l(self):
# Test an inverted version of example RD (r_d_1 above) from Lakin's paper, where the rule is applied right to left.
r_d_2 = set(list(set(self.Rule.novel_reactions(self.Rule(), "{L'}<L>[S]<L2>:<R S>[S1]<R2>{R'}")))[0].products.keys())
exp_res_2 = {"<L S L2>", "{L'}<R>[S S1]<R2>{R'}"}
self.assertEqual(set(), set.difference(r_d_2, exp_res_2))
def test_lakin_r_d_example_lower_l_to_r(self):
# Test the lower strand equivalent of the reduction example RD (r_d_1 above) from Lakin's paper.
r_d_3 = set(list(set(self.Rule.novel_reactions(self.Rule(), "{L'}<L>[S1]{S R}::{L2}[S]<R2>{R'}")))[0].products.keys())
exp_res_3 = {"{L2 S R'}", "{L'}<L>[S1 S]<R2>{R}"}
self.assertEqual(set(), set.difference(r_d_3, exp_res_3))
def test_lakin_r_d_example_lower_r_to_l(self):
# Test an inverted lower strand version of example RD (r_d_1 above) from Lakin's paper, applying the rule right-to-left.
r_d_4 = set(list(set(self.Rule.novel_reactions(self.Rule(), "{L'}<L>[S]{L2}::{R S}[S1]<R2>{R'}")))[0].products.keys())
exp_res_4 = {"{L' S L2}", "{R}<L>[S S1]<R2>{R'}"}
self.assertEqual(set(), set.difference(r_d_4, exp_res_4))
def test_lakin_fig_4a_example_upper_l_to_r(self):
# Tests that the application of the displacement rule from Figure 4a works as expected.
r_d_5 = set(list(set(self.Rule.novel_reactions(self.Rule(), "[t^]<x y>:[x]:[y u^]")))[0].products.keys())
exp_res_5 = {"<x>", "[t^ x]<y>:[y u^]"}
self.assertEqual(set(), set.difference(r_d_5, exp_res_5))
def test_lakin_fig_4a_example_upper_r_to_l(self):
# Tests that an altered version of the displacement example from Fig. 4a can be displaced in the right-to-left direction.
r_d_6 = set(list(set(self.Rule.novel_reactions(self.Rule(), "[u^ y]:[x]:<y x>[t^]")))[0].products.keys())
exp_res_6 = {"<x>", "[u^ y]:<y>[x t^]"}
self.assertEqual(set(), set.difference(r_d_6, exp_res_6))
def test_lakin_fig_4a_example_lower_l_to_r(self):
# Tests that the application of the Displacement example from Figure 4a works as expected.
r_d_7 = set(list(set(self.Rule.novel_reactions(self.Rule(), "[t^]{x y}::[x]::[y u^]")))[0].products.keys())
exp_res_7 = {"{x}", "[t^ x]{y}::[y u^]"}
self.assertEqual(set(), set.difference(r_d_7, exp_res_7))
def test_lakin_fig_4a_example_lower_r_to_l(self):
# Tests an inverted (lower strand) version of the displacement example from Fig 4a (in the right-to-left direction).
r_d_8 = set(list(set(self.Rule.novel_reactions(self.Rule(), "[u^ y]::[x]::{y x}[t^]")))[0].products.keys())
exp_res_8 = {"{x}", "[u^ y]::{y}[x t^]"}
self.assertEqual(set(), set.difference(r_d_8, exp_res_8))
def test_lakin_migration_example_fig_upper_4a_l_to_r_does_not_yield_results(self):
# Test that the Displacement rule does not get applied to the Migration example from Figure 4a of the Lakin paper.
r_d_9 = set(list(set(self.Rule.novel_reactions(self.Rule(), "[t^ x]<y>:[y u^]"))))
self.assertEqual(set(), r_d_9)
def test_lakin_migration_example_fig_upper_4a_r_to_l_does_not_yield_results(self):
# Tests that this rule yields no results when applied to an inverted Migration example from Fig. 4a of the Lakin paper.
r_d_10 = set(list(set(self.Rule.novel_reactions(self.Rule(), "[u^ y]:<y>[x t]"))))
self.assertEqual(set(), r_d_10)
def test_lakin_migration_example_fig_lower_4a_l_to_r_does_not_yield_results(self):
# Test that the lower strand version of the example from Fig. 4a cannot yield displacement products.
r_d_11 = set(list(set(self.Rule.novel_reactions(self.Rule(), "[t^ x]{y}::[y u^]"))))
self.assertEqual(r_d_11, set())
def test_lakin_migration_example_fig_lower_4a_r_to_l_does_not_yield_results(self):
# Tests that this rule yields no results when applied to an inverted, flipped Migration example from Fig. 4a of the Lakin paper.
r_d_12 = set(list(set(self.Rule.novel_reactions(self.Rule(), "[u^ y]::{y}[x t]"))))
self.assertEqual(set(), r_d_12)
def test_that_more_migration_examples_yield_no_displacement_results(self):
# Test that other systems where migration can occur cannot be displaced:
r_d_13 = set(list(set(self.Rule.novel_reactions(self.Rule(), "[t^]<x y>:[x v]::[y u^]"))))
self.assertEqual(set(), r_d_13)
def test_that_more_migration_examples_yield_no_displacement_results_2(self):
# Test that other systems where migration can occur cannot be displaced:
r_d_14 = set(list(set(self.Rule.novel_reactions(self.Rule(), "[t^]{x y}::[x v]:[y u^]"))))
self.assertEqual(set(), r_d_14)
def test_displacement_of_upper_strand_which_connects_to_the_next_gate_via_upper_strand_l_to_r(self):
# This test checks that applying the displacement rule along an upper strand works, when the strand which is being
# displaced is connected along its upper strand to the next gate (left to right).
r_d_15 = set(list(set(self.Rule.novel_reactions(self.Rule(), "[t^]<x y>:[x]::[y u^]")))[0].products.keys())
exp_res_15 = {"[t^ x]<y>", "<x>[y u^]"}
self.assertEqual(set(), set.difference(r_d_15, exp_res_15))
def test_displacement_of_upper_strand_which_connects_to_the_previous_gate_via_upper_strand_r_to_l(self):
# This test checks that applying the displacement rule along an upper strand works, when the strand which is being
# displaced is connected along its upper strand to the previous gate (right to left). Variant of r_d_15.
r_d_16 = set(list(set(self.Rule.novel_reactions(self.Rule(), "[u^ y]::[x]:<y x>[t^]")))[0].products.keys())
exp_res_16 = {"[u^ y]<x>", "<y>[x t^]"}
self.assertEqual(set(), set.difference(r_d_16, exp_res_16))
def test_displacement_of_lower_strand_which_connects_to_the_next_gate_via_lower_strand_l_to_r(self):
# This test checks that applying the displacement rule along a lower strand works, when the strand which is being
# displaced is connected to the next gate (left to right) along its lower strand.
r_d_17 = set(list(set(self.Rule.novel_reactions(self.Rule(), "[t^]{x y}::[x]:[y u^]")))[0].products.keys())
exp_res_17 = {"[t^ x]{y}", "{x}[y u^]"}
self.assertEqual(set(), set.difference(r_d_17, exp_res_17))
def test_displacement_of_lower_strand_which_connects_to_the_previous_gate_via_lower_strand_r_to_l(self):
# This test checks that applying the displacement rule along a lower strand works, when the strand which is being
# displaced is connected along its lower strand to the previous gate (right to left). Variant of r_d_16.
r_d_18 = set(list(set(self.Rule.novel_reactions(self.Rule(), "[u^ y]:[x]::{y x}[t^]")))[0].products.keys())
exp_res_18 = {"[u^ y]{x}", "{y}[x t^]"}
self.assertEqual(set(), set.difference(r_d_18, exp_res_18))
def test_displacement_of_upper_strand_which_is_connected_to_the_next_strand_via_upper_strand_l_to_r_variant_1(self):
# This tests that displacing an upper strand works, when the strand which is being displaced is connected along to the
# next gate (left to right) via the upper strand. Variant of r_d_15 but with an upper strand attached to the second d_s.
r_d_19 = set(list(set(self.Rule.novel_reactions(self.Rule(), "[t^]<x y>:<R>[x]::[y u^]")))[0].products.keys())
exp_res_19 = {"[t^ x]<y>", "<R x>[y u^]"}
self.assertEqual(set(), set.difference(r_d_19, exp_res_19))
def test_displacement_of_upper_strand_which_is_connected_to_the_previous_strand_via_upper_strand_r_to_l_variant_1(self):
# This tests that displacing an upper strand (right to left) works, when the strand which is being displaced is connected
# along to the previous gate via the upper strand. Variant of r_d_16 but with an upper strand attached to the second d_s.
r_d_20 = set(list(set(self.Rule.novel_reactions(self.Rule(), "[u^ y]::[x]<R>:<y x>[t^]")))[0].products.keys())
exp_res_20 = {"[u^ y]<x R>", "<y>[x t^]"}
self.assertEqual(set(), set.difference(r_d_20, exp_res_20))
def test_displacement_of_lower_strand_which_is_connected_to_the_next_strand_via_lower_strand_l_to_r_variant_1(self):
# This tests that displacing a lower strand works, when the strand which is being displaced is connected along to the
# next gate (left to right) via a lower strand. Variant of r_d_17 but with a lower strand attached to the second d_s.
r_d_21 = set(list(set(self.Rule.novel_reactions(self.Rule(), "[t^]{x y}::{R}[x]:[y u^]")))[0].products.keys())
exp_res_21 = {"[t^ x]{y}", "{R x}[y u^]"}
self.assertEqual(set(), set.difference(r_d_21, exp_res_21))
def test_displacement_of_lower_strand_which_is_connected_to_the_previous_strand_via_lower_strand_r_to_l_variant_1(self):
# This tests that displacing a lower strand (right-to-left) works, when the strand which is being displaced is connected
# to the previous gate via a lower strand. Variant of r_d_18 but with a lower strand attached to the second d_s.
r_d_22 = set(list(set(self.Rule.novel_reactions(self.Rule(), "[u^ y]:[x]{R}::{y x}[t^]")))[0].products.keys())
exp_res_22 = {"[u^ y]{x R}", "{y}[x t^]"}
self.assertEqual(set(), set.difference(r_d_22, exp_res_22))
def test_displacement_of_upper_strand_which_is_connected_to_the_next_strand_via_upper_strand_l_to_r_variant_2(self):
# This tests that displacing an upper strand (left-to-right) works, when the strand which is being displaced is connected
# to the next gate via the upper strand. Variant of r_d_19 but with a lower strand attached to the second d_s.
r_d_23 = set(list(set(self.Rule.novel_reactions(self.Rule(), "[t^]<x y>:<r>[x]{g}::[y u^]")))[0].products.keys())
exp_res_23 = {"[t^ x]<y>{g}", "<r x>[y u^]"}
self.assertEqual(set(), set.difference(r_d_23, exp_res_23))
def test_displacement_of_upper_strand_which_is_connected_to_the_previous_strand_via_upper_strand_r_to_l_variant_2(self):
# This tests that displacing an upper strand (right-to-left) works, when the strand which is being displaced is connected
# to the previous gate via the upper strand. Variant of r_d_20 but with a lower strand attached to the first d_s.
r_d_24 = set(list(set(self.Rule.novel_reactions(self.Rule(), "[u^ y]::{g}[x]<r>:<y x>[t^]")))[0].products.keys())
exp_res_24 = {"[u^ y]<x r>", "{g}<y>[x t^]"}
self.assertEqual(set(), set.difference(r_d_24, exp_res_24))
def test_displacement_of_lower_strand_which_is_connected_to_the_next_strand_via_lower_strand_l_to_r_variant_2(self):
# This tests that displacing a lower strand (left-to-right) works, when the strand which is being displaced is connected
# to the next gate via the lower strand. Variant of r_d_21 but with an upper strand attached to the second d_s.
r_d_25 = set(list(set(self.Rule.novel_reactions(self.Rule(), "[t^]{x y}::{r}[x]<g>:[y u^]")))[0].products.keys())
exp_res_25 = {"[t^ x]<g>{y}", "{r x}[y u^]"}
self.assertEqual(set(), set.difference(r_d_25, exp_res_25))
def test_displacement_of_lower_strand_which_is_connected_to_the_previous_strand_via_lower_strand_r_to_l_variant_2(self):
# This tests that displacing a lower strand (right-to-left) works, when the strand which is being displaced is connected
# to the previous gate via the lower strand. Variant of r_d_22 but with an upper strand attached to the first d_s.
r_d_26 = set(list(set(self.Rule.novel_reactions(self.Rule(), "[u^ y]:<g>[x]{r}::{y x}[t^]")))[0].products.keys())
exp_res_26 = {"[u^ y]{x r}", "{y}<g>[x t^]"}
self.assertEqual(set(), set.difference(r_d_26, exp_res_26))
class TestStrandLeakageRule(unittest.TestCase):
from stocal.examples.dsd import StrandLeakageRule
Rule = StrandLeakageRule
def test_lakin_l_s_example(self):
# Test that the basic LS example from the Lakin paper can be replicated with the Leakage Rule.
l_s_1 = set(list(set(self.Rule.novel_reactions(self.Rule(), "<L1 S R1>", "{L'}<L>[S]<R>{R'}")))[0].products.keys())
exp_res_1 = {"<L S R>", "{L'}<L1>[S]<R1>{R'}"}
self.assertEqual(set(), set.difference(l_s_1, exp_res_1))
def test_lakin_l_s_example_rotated(self):
# Test the basic LS example from the Lakin paper, but rotate the invader strand to be a lower strand.
l_s_2 = set(list(set(self.Rule.novel_reactions(self.Rule(), "{L1 S* R1}", "{L'}<L>[S]<R>{R'}")))[0].products.keys())
exp_res_2 = {"{L' S* R'}", "{L1}<L>[S]<R>{R1}"}
self.assertEqual(set(), set.difference(l_s_2, exp_res_2))
def test_that_strand_leakage_does_not_apply_to_short_double_toeholds(self):
# Test that the strand leakage rule yields nothing when a gate's double strand has form [N^].
l_s_3 = set(list(set(self.Rule.novel_reactions(self.Rule(), "{L1 S* R1}", "{L'}<L>[S^]<R>{R'}"))))
self.assertEqual(set(), l_s_3)
def test_that_strand_leakage_fails_when_invader_strand_does_not_match_gate(self):
# Test that when the invader sequence of domains does not match the sequence of domains within the d_s of the
# other input, no leakages are yielded
l_s_4 = set(list(set(self.Rule.novel_reactions(self.Rule(), "{L1 A* B^* C* R1}", "{L'}<L>[A B C]<R>{R'}"))))
self.assertEqual(set(), l_s_4)
def test_strand_leakage_with_an_upper_invader_which_causes_a_gate_to_leak_its_upper_strand(self):
# Test the LS rule when the invader strand is an upper strand which contains a mixture of toeholds and long domains.
l_s_5 = set(list(set(self.Rule.novel_reactions(self.Rule(), "<L1 S T^ R1>", "{L'}<L>[S T^]<R>{R'}")))[0].products.keys())
exp_res_5 = {"<L S T^ R>", "{L'}<L1>[S T^]<R1>{R'}"}
self.assertEqual(set(), set.difference(l_s_5, exp_res_5))
def test_strand_leakage_with_an_upper_invader_which_causes_a_gate_to_leak_its_lower_strand(self):
# Test the LS rule when the invader strand is an upper strand which can only initiate a leakage after rotating into
# a lower strand.
l_s_6 = set(list(set(self.Rule.novel_reactions(self.Rule(), "<L1 T^* S* R1>", "{L'}<L>[S T^]<R>{R'}")))[0].products.keys())
exp_res_6 = {"{L' S* T^* R'}", "{R1}<L>[S T^]<R>{L1}"}
self.assertEqual(set(), set.difference(l_s_6, exp_res_6))
def test_strand_leakage_with_a_lower_invader_which_causes_a_gate_to_leak_its_lower_strand(self):
# Test the LS rule when the invader strand is a lower strand which contains a mixture of toeholds and long domains.
l_s_7 = set(list(set(self.Rule.novel_reactions(self.Rule(), "{L1 S* T^* R1}", "{L'}<L>[S T^]<R>{R'}")))[0].products.keys())
exp_res_7 = {"{L' S* T^* R'}", "{L1}<L>[S T^]<R>{R1}"}
self.assertEqual(set(), set.difference(l_s_7, exp_res_7))
def test_strand_leakage_with_a_lower_invader_which_causes_a_gate_to_leak_its_upper_strand(self):
# Test the LS rule when the invader strand is a lower strand which can only initiate a leakage after rotating into
# an upper strand.
l_s_8 = set(list(set(self.Rule.novel_reactions(self.Rule(), "{L1 T^ S R1}", "{L'}<L>[S T^]<R>{R'}")))[0].products.keys())
exp_res_8 = {"<L S T^ R>", "{L'}<R1>[S T^]<L1>{R'}"}
self.assertEqual(set(), set.difference(l_s_8, exp_res_8))
def test_strand_leakage_with_constructs_which_contain_more_complex_sequences_of_domains_1(self):
# Test the LS rule with an upper invader strand which can only cause a leak with one rotation i.e. if the invader rotates
# into a lower strand, a leakage will not occur (on the lower strand). Variant of l_s_5 with long sequences of domains.
l_s_9 = set(list(set(self.Rule.novel_reactions(self.Rule(), "<L1 LA S T^ RA R1>",
"{L' L2}<L LB>[S T^]<RB R>{R2 R'}")))[0].products.keys())
exp_res_9 = {"<L LB S T^ RB R>", "{L' L2}<L1 LA>[S T^]<RA R1>{R2 R'}"}
self.assertEqual(set(), set.difference(l_s_9, exp_res_9))
def test_strand_leakage_with_constructs_which_contain_more_complex_sequences_of_domains_2(self):
# Test the LS rule when the invader strand is an upper strand which can only cause a leak if it rotates into a lower strand.
# Variant of l_s_6 with longer sequences of domains.
l_s_10 = set(list(set(self.Rule.novel_reactions(self.Rule(), "<L1 LA T^* S* RA R1>",
"{L' L2}<L LB>[S T^]<RB R>{R2 R'}")))[0].products.keys())
exp_res_10 = {"{L' L2 S* T^* R2 R'}", "{R1 RA}<L LB>[S T^]<RB R>{LA L1}"}
self.assertEqual(set(), set.difference(l_s_10, exp_res_10))
def test_strand_leakage_with_constructs_which_contain_more_complex_sequences_of_domains_3(self):
# Test the LS rule when the invader is a lower strand which can only cause a leak with one rotation i.e. if the invader
# rotates into an upper strand, a leakage will not occur (on the upper strand). Variant of l_s_7 with longer sequences of domains.
l_s_11 = set(list(set(self.Rule.novel_reactions(self.Rule(), "{L1 LA S* T^* RA R1}",
"{L' L2}<L LB>[S T^]<RB R>{R2 R'}")))[0].products.keys())
exp_res_11 = {"{L' L2 S* T^* R2 R'}", "{L1 LA}<L LB>[S T^]<RB R>{RA R1}"}
self.assertEqual(set(), set.difference(l_s_11, exp_res_11))
def test_leakage_rule_yields_correctly_when_lower_strand_can_only_invade_as_upper_strand_long(self):
# Test the LS rule when the invader strand is a lower strand which can only cause a leak if it rotates into an upper strand.
# Variant of l_s_8 but with longer sequences of domains.
l_s_12 = set(list(set(self.Rule.novel_reactions(self.Rule(), "{L1 LA T^ S RA R1}",
"{L' L2}<L LB>[S T^]<RB R>{R2 R'}")))[0].products.keys())
exp_res_12 = {"<L LB S T^ RB R>", "{L' L2}<R1 RA>[S T^]<LA L1>{R2 R'}"}
self.assertEqual(set(), set.difference(l_s_12, exp_res_12))
def test_leakage_rule_does_not_displace_an_upper_strand_attached_to_a_previous_gate(self):
# Test the LS rule does not displace an upper strand which connects directly to the previous gate.
l_s_13 = set(list(set(self.Rule.novel_reactions(self.Rule(), "<L1 S T R1>", "[A]<B>::{L'}<L>[S T]<R>{R'}"))))
self.assertEqual(set(), l_s_13)
def test_leakage_rule_does_not_displace_an_upper_strand_attached_to_a_following_gate(self):
# Test the LS rule does not displace an upper strand which connects directly to the following gate.
l_s_14 = set(list(set(self.Rule.novel_reactions(self.Rule(), "<L1 S T R1>", "[A]<B>:{L'}<L>[S T]<R>{R'}::<C>[D]"))))
self.assertEqual(set(), l_s_14)
def test_leakage_rule_does_not_displace_a_lower_strand_attached_to_a_previous_gate(self):
# Test the LS rule does not displace a lower strand which connects directly to the previous gate.
l_s_15 = set(list(set(self.Rule.novel_reactions(self.Rule(), "{L1 S R1}", "[A]<B>:{L'}<L>[S^]<R>{R'}"))))
self.assertEqual(set(), l_s_15)
def test_leakage_rule_does_not_displace_a_lower_strand_attached_to_a_following_gate(self):
# Test the LS rule does not displace a lower strand which connects directly to the following gate.
l_s_16 = set(list(set(self.Rule.novel_reactions(self.Rule(), "{L1 S T R1}", "[A]<B>::{L'}<L>[S T]<R>{R'}:<C>[D]"))))
self.assertEqual(set(), l_s_16)
class TestToeholdLeakageRule(unittest.TestCase):
from stocal.examples.dsd import ToeholdLeakageRule
Rule = ToeholdLeakageRule
def test_lakin_l_t_example(self):
# Test that the basic LT example from the Lakin paper can be replicated with the Leakage Rule.
l_t_1 = set(list(set(self.Rule.novel_reactions(self.Rule(), "<L1 S R1>", "{L'}<L>[S N^]<R>{R'}")))[0].products.keys())
exp_res_1 = {"<L S N^ R>", "{L'}<L1>[S]<R1>{N^* R'}"}
self.assertEqual(set(), set.difference(l_t_1, exp_res_1))
def test_extended_lakin_l_t_example(self):
# Test a different version of the LT example from the Lakin paper, with more domains on the double strand.
l_t_2 = set(list(set(self.Rule.novel_reactions(self.Rule(), "<L1 S K^ R1>", "{L'}<L>[S K^ N^]<R>{R'}")))[0].products.keys())
exp_res_2 = {"<L S K^ N^ R>", "{L'}<L1>[S K^]<R1>{N^* R'}"}
self.assertEqual(set(), set.difference(l_t_2, exp_res_2))
def test_lower_strand_version_of_lakin_l_t_example(self):
# Test that the basic (rotated) LT example from the Lakin paper can be replicated with the Leakage Rule.
l_t_3 = set(list(set(self.Rule.novel_reactions(self.Rule(), "{L1 S* R1}", "{L'}<L>[S N^]<R>{R'}")))[0].products.keys())
exp_res_3 = {"{L' S* N^* R'}", "{L1}<L>[S]<N^ R>{R1}"}
self.assertEqual(set(), set.difference(l_t_3, exp_res_3))
def test_extended_lower_strand_version_of_lakin_l_t_example(self):
# Test that an extended (rotated) LT example from the Lakin paper can be replicated with the Leakage Rule.
l_t_4 = set(list(set(self.Rule.novel_reactions(self.Rule(), "{L1 S* B^* R1}", "{L'}<L>[S B^ N^]<R>{R'}")))[0].products.keys())
exp_res_4 = {"{L' S* B^* N^* R'}", "{L1}<L>[S B^]<N^ R>{R1}"}
self.assertEqual(set(), set.difference(l_t_4, exp_res_4))
def test_toehold_leak_where_upper_strand_only_initiates_leak_after_rotating_into_a_lower_strand(self):
# Test that the basic LT example from the Lakin paper can be replicated, even when the strand is passed at the wrong rotation
# and cannot initiate the leak until it rotates back to its original position.
l_t_5 = set(list(set(self.Rule.novel_reactions(self.Rule(), "<L1 S* R1>", "{L'}<L>[S N^]<R>{R'}")))[0].products.keys())
exp_res_5 = {"{L' S* N^* R'}", "{R1}<L>[S]<N^ R>{L1}"}
self.assertEqual(set(), set.difference(l_t_5, exp_res_5))
def test_toehold_leak_where_lower_strand_only_initiates_leak_after_rotating_into_an_upper_strand(self):
# Test that the basic LT example from the Lakin paper can be replicated, even when the strand is passed at the wrong rotation
# and cannot initiate the leak until it rotates back to its original position.
l_t_6 = set(list(set(self.Rule.novel_reactions(self.Rule(), "{R1 S L1}", "{L'}<L>[S N^]<R>{R'}")))[0].products.keys())
exp_res_6 = {"<L S N^ R>", "{L'}<L1>[S]<R1>{N^* R'}"}
self.assertEqual(set(), set.difference(l_t_6, exp_res_6))
def test_toehold_leak_with_toehold_at_start_of_double_strand_with_upper_invader_strand(self):
# Test that the basic LT example from the Lakin paper can be replicated in reverse, right to left, when the
# toehold occurs at the start of the double strand.
l_t_7 = set(list(set(self.Rule.novel_reactions(self.Rule(), "<L1 S R1>", "{L'}<L>[N^ S]<R>{R'}")))[0].products.keys())
exp_res_7 = {"<L N^ S R>", "{L' N^*}<L1>[S]<R1>{R'}"}
self.assertEqual(set(), set.difference(l_t_7, exp_res_7))
def test_toehold_leak_with_toehold_at_start_of_double_strand_with_lower_invader_strand(self):
# Test that the basic LT example from the Lakin paper can be replicated in reverse, right to left, when the
# toehold occurs at the start of the double strand and the invader is a lower strand.
l_t_8 = set(list(set(self.Rule.novel_reactions(self.Rule(), "{L1 S* R1}", "{L'}<L>[N^ S]<R>{R'}")))[0].products.keys())
exp_res_8 = {"{L' N^* S* R'}", "{L1}<L N^>[S]<R>{R1}"}
self.assertEqual(set(), set.difference(l_t_8, exp_res_8))
def test_extended_lakin_l_t_example_with_toehold_at_start(self):
# Test that the LT rule yields nothing when the invader strand already carries the
# leading toehold, so no partially exposed toehold is left over to leak.
l_t_9 = set(list(set(self.Rule.novel_reactions(self.Rule(), "<L1 N^ S R1>", "{L'}<L>[N^ S]<R>{R'}"))))
self.assertEqual(set(), l_t_9)
def test_lakin_l_s_example_does_not_yield_any_results_from_the_l_t_rule(self):
# Test that the LT rule is not applied to the basic LS example from the Lakin paper.
l_t_1 = set(list(set(self.Rule.novel_reactions(self.Rule(), "<L1 S R1>", "{L'}<L>[S]<R>{R'}"))))
self.assertEqual(set(), l_t_1)
def test_that_a_rotated_lakin_l_s_example_does_not_yield_any_results_from_the_l_t_rule(self):
# Test that the LT rule is not applied to the rotated (lower strand version) of the LS example from the Lakin paper.
l_t_2 = set(list(set(self.Rule.novel_reactions(self.Rule(), "{L1 S R1}", "{L'}<L>[S]<R>{R'}"))))
self.assertEqual(set(), l_t_2)
def test_that_the_l_t_rule_does_not_apply_to_short_double_toeholds(self):
# Test that the leakage rule does not yield any results when the short double strand has form [N^].
l_t_3 = set(list(set(self.Rule.novel_reactions(self.Rule(), "{L1 S R1}", "{L'}<L>[S^]<R>{R'}"))))
self.assertEqual(set(), l_t_3)
def test_that_invader_strand_cannot_yield_a_toehold_leak_when_the_sequences_do_not_match(self):
# Test that when the invader sequence of domains does not match the sequence of domains within the d_s of the
# other input, no leakages are yielded
l_t_4 = set(list(set(self.Rule.novel_reactions(self.Rule(), "{L1 A B^ C^ R1}", "{L'}<L>[A B C^]<R>{R'}"))))
self.assertEqual(set(), l_t_4)
if __name__ == '__main__':
unittest.main()
| 71.243609 | 149 | 0.661355 | 8,422 | 47,377 | 3.476965 | 0.038708 | 0.059557 | 0.040945 | 0.059557 | 0.888946 | 0.863334 | 0.820578 | 0.780043 | 0.734624 | 0.716593 | 0 | 0.024243 | 0.175486 | 47,377 | 664 | 150 | 71.350904 | 0.725392 | 0.282521 | 0 | 0.014963 | 0 | 0.017456 | 0.166622 | 0.00969 | 0 | 0 | 0 | 0 | 0.27182 | 1 | 0.27182 | false | 0 | 0.022444 | 0 | 0.329177 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
390ba98013c54aa7036a1e3cc58d3d7341e9bf28 | 7,271 | py | Python | datasets/wat_070b_rw0_erosion_sources_near_water/wat_070b_rw0_create_river_mask.py | resource-watch/ocean-watch-data | 569011ae51a60efc87106aa2098227d5c6fbfc67 | [
"MIT"
] | null | null | null | datasets/wat_070b_rw0_erosion_sources_near_water/wat_070b_rw0_create_river_mask.py | resource-watch/ocean-watch-data | 569011ae51a60efc87106aa2098227d5c6fbfc67 | [
"MIT"
] | null | null | null | datasets/wat_070b_rw0_erosion_sources_near_water/wat_070b_rw0_create_river_mask.py | resource-watch/ocean-watch-data | 569011ae51a60efc87106aa2098227d5c6fbfc67 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# ---------------------------------------------------------------------------
# wat_070b_rw0_create_river_mask.py
# Created on: 2021-10-29 14:10:20.00000
# (generated by ArcGIS/ModelBuilder)
# Description:
# ---------------------------------------------------------------------------
import os
# Import arcpy module
import arcpy
ARC_PROCESSING_DIR = os.getenv('ARC_PROCESSING_DIR')
# Local variables:
HydroRiv_First_Ex_Project = "HydroRiv_First_Ex_Project"
HydroRiv_First_Ex_Project__2_ = HydroRiv_First_Ex_Project
GRWL_selection_coast_and_rivers = "GRWL_selection_coast_and_rivers"
GRWL_selection_coast_and_rivers__2_ = GRWL_selection_coast_and_rivers
GRWL_selection_coast_and_rivers__3_ = GRWL_selection_coast_and_rivers__2_
GRWL_selection_coast_and_rivers__4_ = GRWL_selection_coast_and_rivers__3_
GRWL_selection_coast_and_rivers__5_ = GRWL_selection_coast_and_rivers__4_
GRWLcoast_and_river_buffer = ARC_PROCESSING_DIR + "\\Sediment Pressure\\Sediment Pressure.gdb\\GRWLcoast_and_river_buffer"
HydroRiv_subset = ARC_PROCESSING_DIR + "\\Sediment Pressure\\Sediment Pressure.gdb\\HydroRiv_subset"
HydroRiv_width_buffer = ARC_PROCESSING_DIR + "\\Sediment Pressure\\Sediment Pressure.gdb\\HydroRiv_width_buffer"
Combined_riv_buffer_new = ARC_PROCESSING_DIR + "\\Sediment Pressure\\Sediment Pressure.gdb\\Combined_riv_buffer_new"
# Process: Add Field (2)
arcpy.AddField_management(GRWL_selection_coast_and_rivers, "WIDTH", "DOUBLE", "", "", "", "", "NULLABLE", "NON_REQUIRED", "")
# Process: Calculate Field (2)
arcpy.CalculateField_management(GRWL_selection_coast_and_rivers__2_, "WIDTH", "zero( !width_med_! )", "PYTHON", "def zero(width):\\n if width is not None:\\n return width\\n elif width is None:\\n return 0\\n")
# Process: Add Field
arcpy.AddField_management(GRWL_selection_coast_and_rivers__3_, "RAD_UNITS", "TEXT", "", "", "", "", "NULLABLE", "NON_REQUIRED", "")
# Process: Calculate Field
arcpy.CalculateField_management(GRWL_selection_coast_and_rivers__4_, "RAD_UNITS", "str( !WIDTH! /2 +7) +' meters'", "PYTHON", "")
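# Note (our reading of the expression above): the per-feature buffer radius is
# half the median channel width plus a 7 m margin, stored as a linear-unit
# string such as "22.0 meters".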
# Process: Buffer
arcpy.Buffer_analysis(GRWL_selection_coast_and_rivers__5_, GRWLcoast_and_river_buffer, "RAD_UNITS", "FULL", "ROUND", "ALL", "", "PLANAR")
# Process: Select Layer By Location (3)
arcpy.SelectLayerByLocation_management(HydroRiv_First_Ex_Project, "HAVE_THEIR_CENTER_IN", GRWLcoast_and_river_buffer, "1 Kilometers", "NEW_SELECTION", "INVERT")
# Process: Copy Features (2)
arcpy.CopyFeatures_management(HydroRiv_First_Ex_Project__2_, HydroRiv_subset, "", "0", "0", "0")
# Process: Buffer (3)
arcpy.Buffer_analysis(HydroRiv_subset, HydroRiv_width_buffer, "12 Meters", "FULL", "ROUND", "ALL", "", "PLANAR")
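# The merge below combines the two buffer layers (the GRWL coast/river buffer
# and the 12 m HydroRIVERS buffer) into the final river mask,
# Combined_riv_buffer_new; the long third argument is the field-mapping string
# generated by ModelBuilder.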
# Process: Merge
arcpy.Merge_management(HydroRiv_width_buffer + ";" + GRWLcoast_and_river_buffer, Combined_riv_buffer_new, "HYRIV_ID \"HYRIV_ID\" true true false 4 Long 0 0 ,First,#,C:\\Users\\RThoms.Local\\OneDrive - World Resources Institute\\Documents\\ArcGIS\\Sediment Pressure\\Sediment Pressure.gdb\\HydroRiv_width_buffer,HYRIV_ID,-1,-1;NEXT_DOWN \"NEXT_DOWN\" true true false 4 Long 0 0 ,First,#,C:\\Users\\RThoms.Local\\OneDrive - World Resources Institute\\Documents\\ArcGIS\\Sediment Pressure\\Sediment Pressure.gdb\\HydroRiv_width_buffer,NEXT_DOWN,-1,-1;MAIN_RIV \"MAIN_RIV\" true true false 4 Long 0 0 ,First,#,C:\\Users\\RThoms.Local\\OneDrive - World Resources Institute\\Documents\\ArcGIS\\Sediment Pressure\\Sediment Pressure.gdb\\HydroRiv_width_buffer,MAIN_RIV,-1,-1;LENGTH_KM \"LENGTH_KM\" true true false 4 Float 0 0 ,First,#,C:\\Users\\RThoms.Local\\OneDrive - World Resources Institute\\Documents\\ArcGIS\\Sediment Pressure\\Sediment Pressure.gdb\\HydroRiv_width_buffer,LENGTH_KM,-1,-1;DIST_DN_KM \"DIST_DN_KM\" true true false 4 Float 0 0 ,First,#,C:\\Users\\RThoms.Local\\OneDrive - World Resources Institute\\Documents\\ArcGIS\\Sediment Pressure\\Sediment Pressure.gdb\\HydroRiv_width_buffer,DIST_DN_KM,-1,-1;DIST_UP_KM \"DIST_UP_KM\" true true false 4 Float 0 0 ,First,#,C:\\Users\\RThoms.Local\\OneDrive - World Resources Institute\\Documents\\ArcGIS\\Sediment Pressure\\Sediment Pressure.gdb\\HydroRiv_width_buffer,DIST_UP_KM,-1,-1;CATCH_SKM \"CATCH_SKM\" true true false 4 Float 0 0 ,First,#,C:\\Users\\RThoms.Local\\OneDrive - World Resources Institute\\Documents\\ArcGIS\\Sediment Pressure\\Sediment Pressure.gdb\\HydroRiv_width_buffer,CATCH_SKM,-1,-1;UPLAND_SKM \"UPLAND_SKM\" true true false 4 Float 0 0 ,First,#,C:\\Users\\RThoms.Local\\OneDrive - World Resources Institute\\Documents\\ArcGIS\\Sediment Pressure\\Sediment Pressure.gdb\\HydroRiv_width_buffer,UPLAND_SKM,-1,-1;ENDORHEIC \"ENDORHEIC\" true true false 2 Short 0 0 ,First,#,C:\\Users\\RThoms.Local\\OneDrive - World Resources Institute\\Documents\\ArcGIS\\Sediment Pressure\\Sediment Pressure.gdb\\HydroRiv_width_buffer,ENDORHEIC,-1,-1;DIS_AV_CMS \"DIS_AV_CMS\" true true false 4 Float 0 0 ,First,#,C:\\Users\\RThoms.Local\\OneDrive - World Resources Institute\\Documents\\ArcGIS\\Sediment Pressure\\Sediment Pressure.gdb\\HydroRiv_width_buffer,DIS_AV_CMS,-1,-1;ORD_STRA \"ORD_STRA\" true true false 2 Short 0 0 ,First,#,C:\\Users\\RThoms.Local\\OneDrive - World Resources Institute\\Documents\\ArcGIS\\Sediment Pressure\\Sediment Pressure.gdb\\HydroRiv_width_buffer,ORD_STRA,-1,-1;ORD_CLAS \"ORD_CLAS\" true true false 2 Short 0 0 ,First,#,C:\\Users\\RThoms.Local\\OneDrive - World Resources Institute\\Documents\\ArcGIS\\Sediment Pressure\\Sediment Pressure.gdb\\HydroRiv_width_buffer,ORD_CLAS,-1,-1;ORD_FLOW \"ORD_FLOW\" true true false 2 Short 0 0 ,First,#,C:\\Users\\RThoms.Local\\OneDrive - World Resources Institute\\Documents\\ArcGIS\\Sediment Pressure\\Sediment Pressure.gdb\\HydroRiv_width_buffer,ORD_FLOW,-1,-1;HYBAS_L12 \"HYBAS_L12\" true true false 8 Double 0 0 ,First,#,C:\\Users\\RThoms.Local\\OneDrive - World Resources Institute\\Documents\\ArcGIS\\Sediment Pressure\\Sediment Pressure.gdb\\HydroRiv_width_buffer,HYBAS_L12,-1,-1;Shape_Length \"Shape_Length\" false true true 8 Double 0 0 ,First,#,C:\\Users\\RThoms.Local\\OneDrive - World Resources Institute\\Documents\\ArcGIS\\Sediment Pressure\\Sediment Pressure.gdb\\HydroRiv_width_buffer,Shape_Length,-1,-1,C:\\Users\\RThoms.Local\\OneDrive - World Resources Institute\\Documents\\ArcGIS\\Sediment Pressure\\Sediment Pressure.gdb\\GRWLcoast_and_river_buffer,Shape_Length,-1,-1;BUFF_DIST \"BUFF_DIST\" true true false 0 Double 0 0 ,First,#,C:\\Users\\RThoms.Local\\OneDrive - World Resources Institute\\Documents\\ArcGIS\\Sediment Pressure\\Sediment Pressure.gdb\\HydroRiv_width_buffer,BUFF_DIST,-1,-1;ORIG_FID \"ORIG_FID\" true true false 0 Long 0 0 ,First,#,C:\\Users\\RThoms.Local\\OneDrive - World Resources Institute\\Documents\\ArcGIS\\Sediment Pressure\\Sediment Pressure.gdb\\HydroRiv_width_buffer,ORIG_FID,-1,-1;Shape_Area \"Shape_Area\" false true true 8 Double 0 0 ,First,#,C:\\Users\\RThoms.Local\\OneDrive - World Resources Institute\\Documents\\ArcGIS\\Sediment Pressure\\Sediment Pressure.gdb\\GRWLcoast_and_river_buffer,Shape_Area,-1,-1")
| 132.2 | 4,490 | 0.762756 | 1,029 | 7,271 | 5.104956 | 0.142857 | 0.152294 | 0.11422 | 0.152294 | 0.773844 | 0.734818 | 0.700362 | 0.700362 | 0.637731 | 0.610128 | 0 | 0.022285 | 0.080457 | 7,271 | 54 | 4,491 | 134.648148 | 0.763386 | 0.074955 | 0 | 0 | 1 | 0.869565 | 0.735568 | 0.453397 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.086957 | 0 | 0.086957 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
393ab7b768f1603611e9a287ee8cfc566937e3f6 | 8,325 | py | Python | pseudo-4d_k-map.py | silentrald/Pseudo-4d_Kmap_VIsualizer | d874080750605e722a261492931ab00d97996d47 | [
"MIT"
] | null | null | null | pseudo-4d_k-map.py | silentrald/Pseudo-4d_Kmap_VIsualizer | d874080750605e722a261492931ab00d97996d47 | [
"MIT"
] | null | null | null | pseudo-4d_k-map.py | silentrald/Pseudo-4d_Kmap_VIsualizer | d874080750605e722a261492931ab00d97996d47 | [
"MIT"
] | null | null | null |
minterms = []
dont_care = []
minterms_txt = input('Name of your minterms file: ')
if not minterms_txt.endswith('.txt'):
minterms_txt += '.txt'
# input minterms
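# expected file format: ', '-separated integers with a trailing separator,
# e.g. "0, 5, 12, " -- the [0:-1] slice below drops the empty last token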
with open('./' + minterms_txt, 'r') as fp:
minterms = [ int(x) for x in fp.read().split(', ')[0:-1] ]
print('')
dont_care_txt = input('Name of your dont care file: ')
if not dont_care_txt.endswith('.txt'):
dont_care_txt += '.txt'
# input don't care
with open('./' + dont_care_txt, 'r') as fp:
dont_care = [ int(x) for x in fp.read().split(', ')[0:-1] ]
print('')
# print(minterms)
# print(dont_care)
r'''
\cd 00 01 11 10 ef \cd 00 01 11 10 ef
ab *------/ *------/ *-----/ *------/ ab *------/ *------/ *-----/ *------/
++==================================++ ++==================================++
00 || 1 | 1 // 1 | 1 // 1 | 1 // 1 | 1 || 00 | 01 00 || 1 | 1 // 1 | 1 // 1 | 1 // 1 | 1 || 00 | 01
++---+---//---+---//---+---//---+---|| ---+--- ++---+---//---+---//---+---//---+---|| ---+---
|| 1 | 1 // 1 | 1 // 1 | 1 // 1 | 1 || 10 | 11 || 1 | 1 // 1 | 1 // 1 | 1 // 1 | 1 || 10 | 11
++===+===XX===+===XX===+===XX===+===++ ++===+===XX===+===XX===+===XX===+===++
01 || 1 | 1 // 1 | 1 // 1 | 1 // 1 | 1 || 00 | 01 01 || 1 | 1 // 1 | 1 // 1 | 1 // 1 | 1 || 00 | 01
++---+---//---+---//---+---//---+---|| ---+--- ++---+---//---+---//---+---//---+---|| ---+---
|| 1 | 1 // 1 | 1 // 1 | 1 // 1 | 1 || 10 | 11 || 1 | 1 // 1 | 1 // 1 | 1 // 1 | 1 || 10 | 11
++===+===XX===+===XX===+===XX===+===++ ++===+===XX===+===XX===+===XX===+===++
11 || 1 | 1 // 1 | 1 // 1 | 1 // 1 | 1 || 00 | 01 11 || 1 | 1 // 1 | 1 // 1 | 1 // 1 | 1 || 00 | 01
++---+---//---+---//---+---//---+---|| ---+--- ++---+---//---+---//---+---//---+---|| ---+---
|| 1 | 1 // 1 | 1 // 1 | 1 // 1 | 1 || 10 | 11 || 1 | 1 // 1 | 1 // 1 | 1 // 1 | 1 || 10 | 11
++===+===XX===+===XX===+===XX===+===++ ++===+===XX===+===XX===+===XX===+===++
10 || 1 | 1 // 1 | 1 // 1 | 1 // 1 | 1 || 00 | 01 10 || 1 | 1 // 1 | 1 // 1 | 1 // 1 | 1 || 00 | 01
++---+---//---+---//---+---//---+---|| ---+--- ++---+---//---+---//---+---//---+---|| ---+---
|| 1 | 1 // 1 | 1 // 1 | 1 // 1 | 1 || 10 | 11 || 1 | 1 // 1 | 1 // 1 | 1 // 1 | 1 || 10 | 11
++==================================++ ++==================================++
gh = 00 gh = 01
\cd 00 01 11 10 ef \cd 00 01 11 10 ef
ab *------/ *------/ *-----/ *------/ ab *------/ *------/ *-----/ *------/
++==================================++ ++==================================++
00 || 1 | 1 // 1 | 1 // 1 | 1 // 1 | 1 || 00 | 01 00 || 1 | 1 // 1 | 1 // 1 | 1 // 1 | 1 || 00 | 01
++---+---//---+---//---+---//---+---|| ---+--- ++---+---//---+---//---+---//---+---|| ---+---
|| 1 | 1 // 1 | 1 // 1 | 1 // 1 | 1 || 10 | 11 || 1 | 1 // 1 | 1 // 1 | 1 // 1 | 1 || 10 | 11
      ++===+===XX===+===XX===+===XX===+===++            ++===+===XX===+===XX===+===XX===+===++
01 || 1 | 1 // 1 | 1 // 1 | 1 // 1 | 1 || 00 | 01 01 || 1 | 1 // 1 | 1 // 1 | 1 // 1 | 1 || 00 | 01
++---+---//---+---//---+---//---+---|| ---+--- ++---+---//---+---//---+---//---+---|| ---+---
|| 1 | 1 // 1 | 1 // 1 | 1 // 1 | 1 || 10 | 11 || 1 | 1 // 1 | 1 // 1 | 1 // 1 | 1 || 10 | 11
++===+===XX===+===XX===+===XX===+===++ ++===+===XX===+===XX===+===XX===+===++
11 || 1 | 1 // 1 | 1 // 1 | 1 // 1 | 1 || 00 | 01 11 || 1 | 1 // 1 | 1 // 1 | 1 // 1 | 1 || 00 | 01
++---+---//---+---//---+---//---+---|| ---+--- ++---+---//---+---//---+---//---+---|| ---+---
|| 1 | 1 // 1 | 1 // 1 | 1 // 1 | 1 || 10 | 11 || 1 | 1 // 1 | 1 // 1 | 1 // 1 | 1 || 10 | 11
++===+===XX===+===XX===+===XX===+===++ ++===+===XX===+===XX===+===XX===+===++
10 || 1 | 1 // 1 | 1 // 1 | 1 // 1 | 1 || 00 | 01 10 || 1 | 1 // 1 | 1 // 1 | 1 // 1 | 1 || 00 | 01
++---+---//---+---//---+---//---+---|| ---+--- ++---+---//---+---//---+---//---+---|| ---+---
|| 1 | 1 // 1 | 1 // 1 | 1 // 1 | 1 || 10 | 11 || 1 | 1 // 1 | 1 // 1 | 1 // 1 | 1 || 10 | 11
++==================================++ ++==================================++
gh = 10 gh = 11
'''
# 4d k-map = 256
kmap_4d = []
# create 4d representation
for i in range(4):
kmap_4d.append([])
for j in range(4):
kmap_4d[i].append([])
for k in range(4):
kmap_4d[i][j].append([])
for l in range(4):
kmap_4d[i][j][k].append(0)
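# fill the map: each index 0..255 packs four 2-bit coordinates,
# bits 7-6 -> ab, bits 5-4 -> cd, bits 3-2 -> ef, bits 1-0 -> gh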
for i in range(256):
if i in minterms:
kmap_4d[(i >> 6) & 3][(i >> 4) & 3][(i >> 2) & 3][i & 3] = 1
        minterms.remove(i)
elif i in dont_care:
kmap_4d[(i >> 6) & 3][(i >> 4) & 3][(i >> 2) & 3][i & 3] = 'X'
        dont_care.remove(i)
# print(kmap_4d)  # debug dump of the raw 4-d structure; the formatted maps are printed below
pattern = [0, 1, 3, 2]
str_pat = ['00', '01', '10', '11']
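# pattern traverses rows/columns in Gray-code order: values 0, 1, 3, 2 give
# the labels 00, 01, 11, 10; str_pat maps a 2-bit value to its binary string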
print(r' \cd 00  01  11  10 ef       \cd 00  01  11  10 ef')
print('ab *------/ *------/ *-----/ *------/ ab *------/ *------/ *-----/ *------/')
print(' ++==================================++ ++==================================++')
for ab in pattern:
print(str_pat[ab] + ' || ', end='')
for gh in [0, 1]:
for cd in pattern:
print(kmap_4d[ab][cd][0][gh], end='')
print(' | ', end='')
print(kmap_4d[ab][cd][1][gh], end='')
if cd != 2:
print(' // ', end='')
if gh == 0:
print(' || 00 | 01 ' + str_pat[ab] + ' || ', end='')
else:
print(' || 00 | 01')
print(' ++---+---//---+---//---+---//---+---|| ---+--- ++---+---//---+---//---+---//---+---|| ---+---')
print(' || ', end='')
for gh in [0, 1]:
for cd in pattern:
print(kmap_4d[ab][cd][2][gh], end='')
print(' | ', end='')
print(kmap_4d[ab][cd][3][gh], end='')
if cd != 2:
print(' // ', end='')
if gh == 0:
print(' || 10 | 11 || ', end='')
else:
print(' || 10 | 11')
if ab != 2:
print(' ++===+===XX===+===XX===+===XX===+===++ ++===+===XX===+===XX===+===XX===+===++')
print(' ++==================================++ ++==================================++')
print('')
print(' gh = 00 gh = 01')
print('')
print(r' \cd 00  01  11  10 ef       \cd 00  01  11  10 ef')
print('ab *------/ *------/ *-----/ *------/ ab *------/ *------/ *-----/ *------/')
print(' ++==================================++ ++==================================++')
for ab in pattern:
print(str_pat[ab] + ' || ', end='')
for gh in [2, 3]:
for cd in pattern:
print(kmap_4d[ab][cd][0][gh], end='')
print(' | ', end='')
print(kmap_4d[ab][cd][1][gh], end='')
if cd != 2:
print(' // ', end='')
if gh == 2:
print(' || 00 | 01 ' + str_pat[ab] + ' || ', end='')
else:
print(' || 00 | 01')
print(' ++---+---//---+---//---+---//---+---|| ---+--- ++---+---//---+---//---+---//---+---|| ---+---')
print(' || ', end='')
for gh in [2, 3]:
for cd in pattern:
print(kmap_4d[ab][cd][2][gh], end='')
print(' | ', end='')
print(kmap_4d[ab][cd][3][gh], end='')
if cd != 2:
print(' // ', end='')
if gh == 2:
print(' || 10 | 11 || ', end='')
else:
print(' || 10 | 11')
if ab != 2:
print(' ++===+===XX===+===XX===+===XX===+===++ ++===+===XX===+===XX===+===XX===+===++')
print(' ++==================================++ ++==================================++')
print('')
print(' gh = 10 gh = 11')
print('') | 48.684211 | 114 | 0.23976 | 890 | 8,325 | 2.195506 | 0.067416 | 0.229273 | 0.29478 | 0.327533 | 0.746673 | 0.721085 | 0.713408 | 0.697032 | 0.697032 | 0.697032 | 0 | 0.112912 | 0.34042 | 8,325 | 171 | 115 | 48.684211 | 0.242943 | 0.012492 | 0 | 0.679612 | 0 | 0 | 0.353596 | 0.141067 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.475728 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 8 |
3940eef346e301dea659715f2d671ae44d9ba5f0 | 1,632 | py | Python | whatads/web/migrations/0002_checkdlvy_checkseen_sendimg_sendvce_sendvdo.py | almajan/whatads | ccb3ba66e20ebc618a85cb271413ddf7317af790 | [
"MIT"
] | null | null | null | whatads/web/migrations/0002_checkdlvy_checkseen_sendimg_sendvce_sendvdo.py | almajan/whatads | ccb3ba66e20ebc618a85cb271413ddf7317af790 | [
"MIT"
] | null | null | null | whatads/web/migrations/0002_checkdlvy_checkseen_sendimg_sendvce_sendvdo.py | almajan/whatads | ccb3ba66e20ebc618a85cb271413ddf7317af790 | [
"MIT"
] | null | null | null | # Generated by Django 3.1.4 on 2020-12-21 02:04
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('web', '0001_initial'),
]
operations = [
migrations.CreateModel(
name='CheckDlvy',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('text', models.CharField(max_length=255)),
],
),
migrations.CreateModel(
name='CheckSeen',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('text', models.CharField(max_length=255)),
],
),
migrations.CreateModel(
name='SendImg',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('text', models.CharField(max_length=255)),
],
),
migrations.CreateModel(
name='SendVce',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('text', models.CharField(max_length=255)),
],
),
migrations.CreateModel(
name='SendVdo',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('text', models.CharField(max_length=255)),
],
),
]
| 33.306122 | 114 | 0.534314 | 152 | 1,632 | 5.598684 | 0.309211 | 0.123384 | 0.146886 | 0.135135 | 0.763807 | 0.763807 | 0.763807 | 0.763807 | 0.763807 | 0.763807 | 0 | 0.030965 | 0.327206 | 1,632 | 48 | 115 | 34 | 0.74408 | 0.027574 | 0 | 0.714286 | 1 | 0 | 0.059306 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.02381 | 0 | 0.095238 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
1a9c86a56b8c58aad65a99969af3f9947ef90331 | 3,970 | py | Python | tests/dsl/one_group_unit_test.py | chen0040/pysie | 5e5edeae214009b963405cb1e5c948980bb4ae93 | [
"MIT"
] | 2 | 2019-04-13T19:50:46.000Z | 2020-10-11T07:26:29.000Z | tests/dsl/one_group_unit_test.py | chen0040/pysie | 5e5edeae214009b963405cb1e5c948980bb4ae93 | [
"MIT"
] | null | null | null | tests/dsl/one_group_unit_test.py | chen0040/pysie | 5e5edeae214009b963405cb1e5c948980bb4ae93 | [
"MIT"
] | 1 | 2020-06-15T10:30:47.000Z | 2020-06-15T10:30:47.000Z | import unittest
from random import random
from numpy.random import normal
from pysie.dsl.one_group import MeanTesting, ProportionTesting
from pysie.stats.distributions import MeanSamplingDistribution, ProportionSamplingDistribution
from pysie.stats.samples import Sample, SampleDistribution
class MeanTestingUnitTest(unittest.TestCase):
def test_mean_normal(self):
mu = 0.0
sigma = 1.0
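        # n = 31 (> 30), so the sampling distribution is presumably based on
        # the normal approximation (inferred from the test name)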
sample_size = 31
sample = Sample()
for i in range(sample_size):
sample.add_numeric(normal(mu, sigma))
sampling_distribution = MeanSamplingDistribution(sample_distribution=SampleDistribution(sample))
testing = MeanTesting(sampling_distribution=sampling_distribution, mean_null=0.0)
print('one tail p-value: ' + str(testing.p_value_one_tail))
print('two tail p-value: ' + str(testing.p_value_two_tail))
reject_one_tail, reject_two_tail = testing.will_reject(0.01)
print('will reject mean = 0 (one-tail) ? ' + str(reject_one_tail))
print('will reject mean = 0 (two-tail) ? ' + str(reject_two_tail))
self.assertFalse(reject_one_tail)
self.assertFalse(reject_two_tail)
def test_mean_student(self):
mu = 0.0
sigma = 1.0
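        # n = 29 (< 30), so presumably the Student's t distribution applies
        # (inferred from the test name)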
sample_size = 29
sample = Sample()
for i in range(sample_size):
sample.add_numeric(normal(mu, sigma))
sampling_distribution = MeanSamplingDistribution(sample_distribution=SampleDistribution(sample))
testing = MeanTesting(sampling_distribution=sampling_distribution, mean_null=0.0)
print('one tail p-value: ' + str(testing.p_value_one_tail))
print('two tail p-value: ' + str(testing.p_value_two_tail))
reject_one_tail, reject_two_tail = testing.will_reject(0.01)
print('will reject mean = 0 (one-tail) ? ' + str(reject_one_tail))
print('will reject mean = 0 (two-tail) ? ' + str(reject_two_tail))
self.assertFalse(reject_one_tail)
self.assertFalse(reject_two_tail)
class ProportionTestingUnitTest(unittest.TestCase):
def test_proportion_normal(self):
sample = Sample()
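        # draw 100 Bernoulli(p = 0.6) trials; note the test is stochastic, so
        # at alpha = 0.01 it can reject (and fail) in roughly 1% of runs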
for i in range(100):
if random() <= 0.6:
sample.add_category("OK")
else:
sample.add_category("CANCEL")
sampling_distribution = ProportionSamplingDistribution(
sample_distribution=SampleDistribution(sample, categorical_value="OK"))
testing = ProportionTesting(sampling_distribution=sampling_distribution, p_null=0.6)
print('one tail p-value: ' + str(testing.p_value_one_tail))
print('two tail p-value: ' + str(testing.p_value_two_tail))
reject_one_tail, reject_two_tail = testing.will_reject(0.01)
print('will reject p = 0.6 (one-tail) ? ' + str(reject_one_tail))
print('will reject p = 0.6 (two-tail) ? ' + str(reject_two_tail))
self.assertFalse(reject_one_tail)
self.assertFalse(reject_two_tail)
def test_proportion_simulation(self):
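        # only 10 draws: a small-sample case, presumably exercising a
        # simulation-based sampling distribution (inferred from the test name)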
sample = Sample()
for i in range(10):
if random() <= 0.6:
sample.add_category("OK")
else:
sample.add_category("CANCEL")
sampling_distribution = ProportionSamplingDistribution(
sample_distribution=SampleDistribution(sample, categorical_value="OK"))
testing = ProportionTesting(sampling_distribution=sampling_distribution, p_null=0.6)
print('one tail p-value: ' + str(testing.p_value_one_tail))
print('two tail p-value: ' + str(testing.p_value_two_tail))
reject_one_tail, reject_two_tail = testing.will_reject(0.01)
print('will reject p = 0.6 (one-tail) ? ' + str(reject_one_tail))
print('will reject p = 0.6 (two-tail) ? ' + str(reject_two_tail))
self.assertFalse(reject_one_tail)
self.assertFalse(reject_two_tail)
if __name__ == '__main__':
unittest.main()
| 39.7 | 104 | 0.672544 | 491 | 3,970 | 5.183299 | 0.14053 | 0.066012 | 0.061297 | 0.040864 | 0.823576 | 0.823576 | 0.823576 | 0.802358 | 0.802358 | 0.782711 | 0 | 0.017225 | 0.224937 | 3,970 | 99 | 105 | 40.10101 | 0.80988 | 0 | 0 | 0.763158 | 0 | 0 | 0.110831 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 1 | 0.052632 | false | 0 | 0.078947 | 0 | 0.157895 | 0.210526 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
1ac11f2a85de58080382322c05aeb96f4b3195bb | 9,836 | py | Python | tests/Test_sendAlertToDomoticz.py | treussart/Transilien-Domoticz | 7636a7230ed878743660ba6e7fd5f6d6ad5143bb | [
"MIT"
] | null | null | null | tests/Test_sendAlertToDomoticz.py | treussart/Transilien-Domoticz | 7636a7230ed878743660ba6e7fd5f6d6ad5143bb | [
"MIT"
] | null | null | null | tests/Test_sendAlertToDomoticz.py | treussart/Transilien-Domoticz | 7636a7230ed878743660ba6e7fd5f6d6ad5143bb | [
"MIT"
] | null | null | null | #!/usr/bin/python3
# coding: utf8
import unittest
import json
import os
import configparser
from Transilien_Domoticz.transilien import send_alert_to_domoticz, format_content
config_name = "conf.cfg"
config_file = os.path.abspath(os.path.join(os.path.dirname(__file__), os.path.pardir)) + "/Transilien_Domoticz/" + config_name
config = configparser.ConfigParser()
config.read(config_file)
nbr_trains = 2
host = config["domoticz"]["host"]
port = config["domoticz"].getint('port')
idx_alert = config["domoticz"]["idx_alert"]
depart_name = config["default"]["departName"]
level = config["domoticz"]["level"]
gare_name = config["default"]["gareName"]
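# NOTE: the tests below are integration tests -- they require a reachable
# Domoticz instance and valid settings in conf.cfg (host, port, idx_alert)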
class TestSendAlertToDomoticz(unittest.TestCase):
def test_normal(self):
content = """<?xml version="1.0" encoding="UTF-8"?>\r\n<passages gare="87393405">\r\n<train><date mode="R">26/01/2017 08:38</date>\r\n<num>164674</num>\r\n<miss>PEMU</miss>\r\n<term>87391003</term>\r\n<etat>Retardé</etat>\r\n</train>\r\n<train><date mode="R">26/01/2017 08:52</date>\r\n<num>164576</num>\r\n<miss>PEGU</miss>\r\n<term>87391003</term>\r\n<etat>Supprimé</etat>\r\n</train>\r\n<train><date mode="R">26/01/2017 09:07</date>\r\n<num>164578</num>\r\n<miss>POGI</miss>\r\n<term>87391003</term>\r\n<etat>Supprimé</etat>\r\n</train>\r\n<train><date mode="R">26/01/2017 09:37</date>\r\n<num>164682</num>\r\n<miss>POMI</miss>\r\n<term>87391003</term>\r\n</train>\r\n</passages>"""
values, state = format_content(nbr_trains, content, depart_name)
value = send_alert_to_domoticz(host, port, idx_alert, values, level)
try:
json.loads(str(value.decode("utf-8")))
except ValueError as e:
            self.fail('invalid json: ' + str(e))
self.assertEqual("""{\n "status" : "OK",\n "title" : "Update Device"\n}\n""", value.decode("utf-8"))
def test_level(self):
content = """<?xml version="1.0" encoding="UTF-8"?>\r\n<passages gare="87393405">\r\n<train><date mode="R">26/01/2017 08:38</date>\r\n<num>164674</num>\r\n<miss>PEMU</miss>\r\n<term>87391003</term>\r\n<etat>Retardé</etat>\r\n</train>\r\n<train><date mode="R">26/01/2017 08:52</date>\r\n<num>164576</num>\r\n<miss>PEGU</miss>\r\n<term>87391003</term>\r\n<etat>Supprimé</etat>\r\n</train>\r\n<train><date mode="R">26/01/2017 09:07</date>\r\n<num>164578</num>\r\n<miss>POGI</miss>\r\n<term>87391003</term>\r\n<etat>Supprimé</etat>\r\n</train>\r\n<train><date mode="R">26/01/2017 09:37</date>\r\n<num>164682</num>\r\n<miss>POMI</miss>\r\n<term>87391003</term>\r\n</train>\r\n</passages>"""
values, state = format_content(nbr_trains, content, depart_name)
value = send_alert_to_domoticz(host, port, idx_alert, values, -5)
try:
json.loads(str(value.decode("utf-8")))
except ValueError as e:
            self.fail('invalid json: ' + str(e))
self.assertEqual("""{\n "status" : "OK",\n "title" : "Update Device"\n}\n""", value.decode("utf-8"))
value = send_alert_to_domoticz(host, port, idx_alert, values, "rte")
try:
json.loads(str(value.decode("utf-8")))
except ValueError as e:
            self.fail('invalid json: ' + str(e))
self.assertEqual("""{\n "status" : "OK",\n "title" : "Update Device"\n}\n""", value.decode("utf-8"))
value = send_alert_to_domoticz(host, port, idx_alert, values, 0)
try:
json.loads(str(value.decode("utf-8")))
except ValueError as e:
            self.fail('invalid json: ' + str(e))
self.assertEqual("""{\n "status" : "OK",\n "title" : "Update Device"\n}\n""", value.decode("utf-8"))
value = send_alert_to_domoticz(host, port, idx_alert, values, 40)
try:
json.loads(str(value.decode("utf-8")))
except ValueError as e:
            self.fail('invalid json: ' + str(e))
self.assertEqual("""{\n "status" : "OK",\n "title" : "Update Device"\n}\n""", value.decode("utf-8"))
def test_idx(self):
content = """<?xml version="1.0" encoding="UTF-8"?>\r\n<passages gare="87393405">\r\n<train><date mode="R">26/01/2017 08:38</date>\r\n<num>164674</num>\r\n<miss>PEMU</miss>\r\n<term>87391003</term>\r\n<etat>Retardé</etat>\r\n</train>\r\n<train><date mode="R">26/01/2017 08:52</date>\r\n<num>164576</num>\r\n<miss>PEGU</miss>\r\n<term>87391003</term>\r\n<etat>Supprimé</etat>\r\n</train>\r\n<train><date mode="R">26/01/2017 09:07</date>\r\n<num>164578</num>\r\n<miss>POGI</miss>\r\n<term>87391003</term>\r\n<etat>Supprimé</etat>\r\n</train>\r\n<train><date mode="R">26/01/2017 09:37</date>\r\n<num>164682</num>\r\n<miss>POMI</miss>\r\n<term>87391003</term>\r\n</train>\r\n</passages>"""
values, state = format_content(nbr_trains, content, depart_name)
value = send_alert_to_domoticz(host, port, 2900, values, level)
try:
json.loads(str(value.decode("utf-8")))
except ValueError as e:
            self.fail('invalid json: ' + str(e))
self.assertEqual("""{\n "status" : "ERR"\n}\n""", value.decode("utf-8"))
value = send_alert_to_domoticz(host, port, 'azerty', values, level)
try:
json.loads(str(value.decode("utf-8")))
except ValueError as e:
            self.fail('invalid json: ' + str(e))
self.assertEqual("""{\n "status" : "ERR"\n}\n""", value.decode("utf-8"))
def test_port(self):
content = """<?xml version="1.0" encoding="UTF-8"?>\r\n<passages gare="87393405">\r\n<train><date mode="R">26/01/2017 08:38</date>\r\n<num>164674</num>\r\n<miss>PEMU</miss>\r\n<term>87391003</term>\r\n<etat>Retardé</etat>\r\n</train>\r\n<train><date mode="R">26/01/2017 08:52</date>\r\n<num>164576</num>\r\n<miss>PEGU</miss>\r\n<term>87391003</term>\r\n<etat>Supprimé</etat>\r\n</train>\r\n<train><date mode="R">26/01/2017 09:07</date>\r\n<num>164578</num>\r\n<miss>POGI</miss>\r\n<term>87391003</term>\r\n<etat>Supprimé</etat>\r\n</train>\r\n<train><date mode="R">26/01/2017 09:37</date>\r\n<num>164682</num>\r\n<miss>POMI</miss>\r\n<term>87391003</term>\r\n</train>\r\n</passages>"""
values, state = format_content(nbr_trains, content, depart_name)
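        # NOTE: the expected strings below embed OS error text; [Errno 61]
        # ECONNREFUSED, [Errno 65] EHOSTUNREACH and [Errno 60] ETIMEDOUT are
        # the macOS values, so these assertions are platform-dependent
        # (an observation, not stated in the source)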
value = send_alert_to_domoticz(host, '12300', idx_alert, values, level)
self.assertEqual('Failed to reach a serverReason: [Errno 61] Connection refused', value)
value = send_alert_to_domoticz(host, 'azerty', idx_alert, values, level)
self.assertEqual('Problem with port number', value)
value = send_alert_to_domoticz(host, 1234, idx_alert, values, level)
self.assertEqual('Failed to reach a serverReason: [Errno 61] Connection refused', value)
def test_host(self):
content = """<?xml version="1.0" encoding="UTF-8"?>\r\n<passages gare="87393405">\r\n<train><date mode="R">26/01/2017 08:38</date>\r\n<num>164674</num>\r\n<miss>PEMU</miss>\r\n<term>87391003</term>\r\n<etat>Retardé</etat>\r\n</train>\r\n<train><date mode="R">26/01/2017 08:52</date>\r\n<num>164576</num>\r\n<miss>PEGU</miss>\r\n<term>87391003</term>\r\n<etat>Supprimé</etat>\r\n</train>\r\n<train><date mode="R">26/01/2017 09:07</date>\r\n<num>164578</num>\r\n<miss>POGI</miss>\r\n<term>87391003</term>\r\n<etat>Supprimé</etat>\r\n</train>\r\n<train><date mode="R">26/01/2017 09:37</date>\r\n<num>164682</num>\r\n<miss>POMI</miss>\r\n<term>87391003</term>\r\n</train>\r\n</passages>"""
values, state = format_content(nbr_trains, content, depart_name)
value = send_alert_to_domoticz(1234, port, idx_alert, values, level)
self.assertEqual('Failed to reach a serverReason: [Errno 65] No route to host', value)
value = send_alert_to_domoticz('1234', port, idx_alert, values, level)
self.assertEqual('Failed to reach a serverReason: [Errno 65] No route to host', value)
value = send_alert_to_domoticz('1.1.0.0', port, idx_alert, values, level)
self.assertEqual('Failed to reach a serverReason: [Errno 60] Operation timed out', value)
def test_values(self):
value = send_alert_to_domoticz(host, port, idx_alert, None, level)
self.assertEqual('Problem with values: Empty', value)
value = send_alert_to_domoticz(host, port, idx_alert, "", level)
self.assertEqual('Problem with values: Empty', value)
value = send_alert_to_domoticz(host, port, idx_alert, "AZERTY", level)
self.assertEqual('Problem with values: need to be a list or a tuple', value)
value = send_alert_to_domoticz(host, port, idx_alert, 1234, level)
self.assertEqual('Problem with values: need to be a list or a tuple', value)
def test_values_no_train(self):
content = """<?xml version="1.0" encoding="UTF-8"?>\r\n<passages gare="87393405">\r\n</passages>"""
values = format_content(nbr_trains, content, depart_name)[0] + format_content(nbr_trains, content, gare_name)[0]
value = send_alert_to_domoticz(host, port, idx_alert, values, level)
try:
json.loads(str(value.decode("utf-8")))
except ValueError as e:
            self.fail('invalid json: ' + str(e))
self.assertEqual("""{\n "status" : "OK",\n "title" : "Update Device"\n}\n""", value.decode("utf-8"))
def test_wrong_values(self):
content = '<?xml version="1.0" encoding="UTF-8"?><passages gare="87393405"><train><date mode="R">18/02/2017 14:37</date><num>165626</num><miss></passages>'
values = format_content(nbr_trains, content, depart_name)[0] + format_content(nbr_trains, content, gare_name)[0]
value = send_alert_to_domoticz(host, port, idx_alert, values, level)
try:
json.loads(str(value.decode("utf-8")))
except ValueError as e:
            self.fail('invalid json: ' + str(e))
self.assertEqual("""{\n "status" : "OK",\n "title" : "Update Device"\n}\n""", value.decode("utf-8"))
if __name__ == '__main__':
unittest.main()
| 75.083969 | 693 | 0.643656 | 1,609 | 9,836 | 3.848353 | 0.09074 | 0.041021 | 0.04522 | 0.047481 | 0.874677 | 0.872901 | 0.86741 | 0.857558 | 0.856428 | 0.844315 | 0 | 0.081581 | 0.148841 | 9,836 | 130 | 694 | 75.661538 | 0.658027 | 0.00305 | 0 | 0.591304 | 0 | 0.121739 | 0.497348 | 0.306406 | 0 | 0 | 0 | 0 | 0.165217 | 1 | 0.069565 | false | 0.06087 | 0.043478 | 0 | 0.121739 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
46bdae8ee35594779f9fe80688f316caf02a188b | 2,920 | py | Python | globalextremum.py | GoVed/squeezingGraphs | a8296cc54e53178449afa6ea38d1116ad6c18c7e | [
"MIT"
] | null | null | null | globalextremum.py | GoVed/squeezingGraphs | a8296cc54e53178449afa6ea38d1116ad6c18c7e | [
"MIT"
] | null | null | null | globalextremum.py | GoVed/squeezingGraphs | a8296cc54e53178449afa6ea38d1116ad6c18c7e | [
"MIT"
] | null | null | null | import function as fn
import time
def findminima(eqn,divisions=2000,times=10,showsteps=1):
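    # Search strategy: map the open interval (-1, 1) onto the whole real line
    # via u / (1 - |u|), evaluate eqn at `divisions` sample points, re-centre
    # on the best point each pass, and halve the window (via 2**prec) once the
    # best point stops moving.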
start=time.time()
divisions+=1
j=0
s=0
minx=0
prec=0
while j<times:
        if showsteps:
print('No.',j)
miny=float(fn.solve(eqn,s,0))
minx=s
i=1
while i<divisions:
x=((i/divisions)*2-1)
x=(x/(1-abs(x)))
x/=(2**prec)
x+=s
y=float(fn.solve(eqn,x,0))
if y<miny:
miny=y
minx=x
if showsteps==1:
print('\tAt ',x,'\tf(x)=',y)
i+=1
        if showsteps:
print('\t=>Min at x=',minx,'\tf(x)=',miny)
if s==minx:
prec+=1
s=minx
j+=1
    if showsteps in (1, 2):
print('time taken=',time.time()-start)
return minx
def findmaxima(eqn,divisions=2000,times=10,showsteps=1):
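    # Mirror of findminima, tracking the maximum instead of the minimum.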
start=time.time()
divisions+=1
j=0
s=0
maxx=0
prec=0
while j<times:
        if showsteps:
print('No.',j)
maxy=float(fn.solve(eqn,s,0))
maxx=s
i=1
while i<divisions:
x=((i/divisions)*2-1)
x=(x/(1-abs(x)))
x/=(2**prec)
x+=s
y=float(fn.solve(eqn,x,0))
if y>maxy:
maxy=y
maxx=x
if showsteps==1:
print('\tAt ',x,'\tf(x)=',y)
i+=1
        if showsteps:
print('\t=>Max at x=',maxx,'\tf(x)=',maxy)
if s==maxx:
prec+=1
s=maxx
j+=1
    if showsteps in (1, 2):
print('time taken=',time.time()-start)
return maxx
def evalfindminima(eqn,divisions=2000,times=10,showsteps=1):
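    # Like findminima, but evaluates eqn with eval() instead of fn.solve.
    # Caveat: the naive 'x' substitution also rewrites names containing 'x'
    # (e.g. 'exp'), so only plain polynomial-style expressions are safe.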
start=time.time()
divisions+=1
j=0
s=0
minx=0
prec=0
while j<times:
        if showsteps:
print('No.',j)
miny=float(fn.solve(eqn,s,0))
minx=s
i=1
while i<divisions:
x=((i/divisions)*2-1)
x=(x/(1-abs(x)))
x/=(2**prec)
x+=s
y=float(eval(eqn.replace('x',str(x)).replace('^','**')))
if y<miny:
miny=y
minx=x
if showsteps==1:
print('\tAt ',x,'\tf(x)=',y)
i+=1
        if showsteps:
print('\t=>Min at x=',minx,'\tf(x)=',miny)
if s==minx:
prec+=1
s=minx
j+=1
    if showsteps in (1, 2):
print('time taken=',time.time()-start)
return minx
#print(findminima('(x-2)*(x-1)*(x+1)*(x+3)',3,25,1))
#print(findmaxima('-1*x^2+3*x+3',5,2,1))
| 25.840708 | 69 | 0.42774 | 404 | 2,920 | 3.091584 | 0.126238 | 0.105685 | 0.086469 | 0.122498 | 0.831065 | 0.831065 | 0.817454 | 0.817454 | 0.817454 | 0.817454 | 0 | 0.054945 | 0.407877 | 2,920 | 113 | 70 | 25.840708 | 0.667438 | 0.030822 | 0 | 0.846154 | 0 | 0 | 0.052264 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.028846 | false | 0 | 0.019231 | 0 | 0.076923 | 0.115385 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
46ea5395f6ae9fd944d9c3f239fc150d0a74b269 | 1,752 | py | Python | dasa.py | paraklas/DarcyNets | 4040decbf2fc15b01d89b2dc4e2f050b999a3084 | [
"MIT"
] | 5 | 2019-11-13T23:53:37.000Z | 2021-06-09T22:41:37.000Z | dasa.py | paraklas/DarcyNets | 4040decbf2fc15b01d89b2dc4e2f050b999a3084 | [
"MIT"
] | null | null | null | dasa.py | paraklas/DarcyNets | 4040decbf2fc15b01d89b2dc4e2f050b999a3084 | [
"MIT"
] | 1 | 2021-02-10T06:19:02.000Z | 2021-02-10T06:19:02.000Z | import numpy as np
import scipy.sparse.linalg as spl
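# DASA presumably stands for Discrete Adjoint Sensitivity Analysis: the
# gradient of an objective h(u, p) constrained by a residual L(u, p) = 0 is
# assembled from user-supplied sensitivity callbacks (an inference from the
# class and method names, not stated in the source).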
class DASAExp(object):
def __init__(self, objfun, obj_sens_state, obj_sens_param, solvefun, res_sens_state, res_sens_param):
self.objfun = objfun
self.solvefun = solvefun
self.obj_sens_state = obj_sens_state
self.obj_sens_param = obj_sens_param
self.res_sens_state = res_sens_state
self.res_sens_param = res_sens_param
def obj(self, p):
u = self.solvefun(p)
return self.objfun(u, p)
def grad(self, p):
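        # adjoint method: solve dL/du^T @ adj = -dh/du once, then apply the
        # chain rule dJ/dp = dL/dp @ adj + dh/dp, avoiding one forward solve
        # per parameter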
u = self.solvefun(p)
dhdu = self.obj_sens_state(u, p)
dhdp = self.obj_sens_param(u, p)
dLdu = self.res_sens_state(u, p)
dLdp = self.res_sens_param(u, p)
adj = -spl.spsolve(dLdu.T.tocsc(), dhdu)
sens = dLdp.dot(adj)
sens = sens + dhdp
return sens
class DASAExpLM(object):
def __init__(self, objfun, obj_sens_state, obj_sens_param, solvefun, res_sens_state, res_sens_param):
self.objfun = objfun
self.solvefun = solvefun
self.obj_sens_state = obj_sens_state
self.obj_sens_param = obj_sens_param
self.res_sens_state = res_sens_state
self.res_sens_param = res_sens_param
def obj(self, p):
u = self.solvefun(p)
return self.objfun(u, p)
def grad(self, p):
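        # same adjoint construction, but dh/du is matrix-valued here (one row
        # per output), so the result is a Jacobian -- consistent with the "LM"
        # suffix, presumably Levenberg-Marquardt (an assumption from the name)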
u = self.solvefun(p)
dhdu = self.obj_sens_state(u, p)
dhdp = self.obj_sens_param(u, p)
dLdu = self.res_sens_state(u, p)
dLdp = self.res_sens_param(u, p)
adj = -spl.spsolve(dLdu.T.tocsc(), dhdu.T.toarray())
sens = dLdp.dot(adj)
sens = np.concatenate((sens.T, dhdp.toarray()), axis=0)
return sens
| 29.694915 | 105 | 0.61016 | 256 | 1,752 | 3.894531 | 0.152344 | 0.112337 | 0.096289 | 0.060181 | 0.860582 | 0.824473 | 0.824473 | 0.824473 | 0.824473 | 0.824473 | 0 | 0.000803 | 0.288813 | 1,752 | 58 | 106 | 30.206897 | 0.799358 | 0 | 0 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.136364 | false | 0 | 0.045455 | 0 | 0.318182 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
645172e162731be71b4deb8f09be458540a2c855 | 277 | py | Python | generatemani/pathFile.py | john526/codePython | d06dabf7cfd56f3b12a843cdc10c20efa889333f | [
"MIT"
] | null | null | null | generatemani/pathFile.py | john526/codePython | d06dabf7cfd56f3b12a843cdc10c20efa889333f | [
"MIT"
] | null | null | null | generatemani/pathFile.py | john526/codePython | d06dabf7cfd56f3b12a843cdc10c20efa889333f | [
"MIT"
] | null | null | null | dirname = "/home/fev/Documents/COURS/DOWNLOAD_COURSE/AI_PYTHON/live_coding_python/codePython/generatemani/"
dirnameLogo = "/home/fev/Documents/COURS/DOWNLOAD_COURSE/AI_PYTHON/live_coding_python/codePython/generatemani/"
filenameimage = "FEV.jpg"
filenamelogo = "logolabel.png" | 55.4 | 111 | 0.833935 | 34 | 277 | 6.558824 | 0.558824 | 0.06278 | 0.143498 | 0.188341 | 0.726457 | 0.726457 | 0.726457 | 0.726457 | 0.726457 | 0.726457 | 0 | 0 | 0.043321 | 277 | 5 | 112 | 55.4 | 0.841509 | 0 | 0 | 0 | 1 | 0 | 0.755396 | 0.683453 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
64a90a26f41e2cd7fb96036cd959d14d69e0c370 | 150 | py | Python | venv/Lib/site-packages/faces/drawers/__init__.py | The-Fragment/FragmentFembot | bca0027b423753eb162590e8fd440a2c1e65d133 | [
"MIT"
] | 2 | 2019-01-07T12:41:05.000Z | 2019-01-07T21:50:55.000Z | venv/Lib/site-packages/faces/drawers/__init__.py | The-Fragment/FragmentFembot | bca0027b423753eb162590e8fd440a2c1e65d133 | [
"MIT"
] | 3 | 2021-03-23T04:58:47.000Z | 2021-04-02T02:40:54.000Z | venv/Lib/site-packages/faces/drawers/__init__.py | The-Fragment/FragmentFembot | bca0027b423753eb162590e8fd440a2c1e65d133 | [
"MIT"
] | null | null | null | from faces.drawers.drawer import Drawer
from faces.drawers.tkinter_drawer import TkinterDrawer
from faces.drawers.tkinter_screen import TkinterScreen
| 37.5 | 54 | 0.88 | 20 | 150 | 6.5 | 0.45 | 0.207692 | 0.369231 | 0.353846 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.08 | 150 | 3 | 55 | 50 | 0.942029 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
64aeed33892199cf98dd221c587e0673b8f2ea42 | 18,834 | py | Python | bsp/raspberry-pico/rtconfig.py | StackRyan/rt-thread | 37d9e08757413a5b752545338aa3af242a3930de | [
"Apache-2.0"
] | 1 | 2021-01-01T21:46:40.000Z | 2021-01-01T21:46:40.000Z | bsp/raspberry-pico/rtconfig.py | StackRyan/rt-thread | 37d9e08757413a5b752545338aa3af242a3930de | [
"Apache-2.0"
] | null | null | null | bsp/raspberry-pico/rtconfig.py | StackRyan/rt-thread | 37d9e08757413a5b752545338aa3af242a3930de | [
"Apache-2.0"
] | null | null | null | import os
# toolchains options
ARCH='arm'
CPU='cortex-m0'
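# NOTE: the GCC DEVICE flags below target cortex-m0plus (the RP2040 core),
# even though CPU is set to 'cortex-m0' here.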
CROSS_TOOL='gcc'
# bsp lib config
BSP_LIBRARY_TYPE = None
if os.getenv('RTT_CC'):
CROSS_TOOL = os.getenv('RTT_CC')
if os.getenv('RTT_ROOT'):
RTT_ROOT = os.getenv('RTT_ROOT')
# cross_tool provides the cross compiler
# EXEC_PATH is the compiler execute path, for example, CodeSourcery, Keil MDK, IAR
if CROSS_TOOL == 'gcc':
PLATFORM = 'gcc'
EXEC_PATH = r'/usr/bin'
# EXEC_PATH = r'C:\RT-ThreadStudio\repo\Extract\ToolChain_Support_Packages\ARM\GNU_Tools_for_ARM_Embedded_Processors\5.4.1\bin'
elif CROSS_TOOL == 'keil':
PLATFORM = 'armcc'
EXEC_PATH = r'C:/Keil_v5'
elif CROSS_TOOL == 'iar':
PLATFORM = 'iar'
EXEC_PATH = r'C:/Program Files (x86)/IAR Systems/Embedded Workbench 8.0'
if os.getenv('RTT_EXEC_PATH'):
EXEC_PATH = os.getenv('RTT_EXEC_PATH')
BUILD = 'debug'
if PLATFORM == 'gcc':
# toolchains
PREFIX = 'arm-none-eabi-'
CC = PREFIX + 'gcc'
AS = PREFIX + 'gcc'
AR = PREFIX + 'ar'
CXX = PREFIX + 'g++'
LINK = PREFIX + 'gcc'
TARGET_EXT = 'elf'
SIZE = PREFIX + 'size'
OBJDUMP = PREFIX + 'objdump'
OBJCPY = PREFIX + 'objcopy'
# /usr/bin/arm-none-eabi-g++ -march=armv6-m -mcpu=cortex-m0plus -mthumb -Og -g -Wl,--build-id=none --specs=nosys.specs -Wl,--wrap=sprintf -Wl,--wrap=snprintf -Wl,--wrap=vsnprintf -Wl,--wrap=__clzsi2 -Wl,--wrap=__clzdi2 -Wl,--wrap=__ctzsi2 -Wl,--wrap=__ctzdi2 -Wl,--wrap=__popcountsi2 -Wl,--wrap=__popcountdi2 -Wl,--wrap=__clz -Wl,--wrap=__clzl -Wl,--wrap=__clzll -Wl,--wrap=__aeabi_idiv -Wl,--wrap=__aeabi_idivmod -Wl,--wrap=__aeabi_ldivmod -Wl,--wrap=__aeabi_uidiv -Wl,--wrap=__aeabi_uidivmod -Wl,--wrap=__aeabi_uldivmod -Wl,--wrap=__aeabi_dadd -Wl,--wrap=__aeabi_ddiv -Wl,--wrap=__aeabi_dmul -Wl,--wrap=__aeabi_drsub -Wl,--wrap=__aeabi_dsub -Wl,--wrap=__aeabi_cdcmpeq -Wl,--wrap=__aeabi_cdrcmple -Wl,--wrap=__aeabi_cdcmple -Wl,--wrap=__aeabi_dcmpeq -Wl,--wrap=__aeabi_dcmplt -Wl,--wrap=__aeabi_dcmple -Wl,--wrap=__aeabi_dcmpge -Wl,--wrap=__aeabi_dcmpgt -Wl,--wrap=__aeabi_dcmpun -Wl,--wrap=__aeabi_i2d -Wl,--wrap=__aeabi_l2d -Wl,--wrap=__aeabi_ui2d -Wl,--wrap=__aeabi_ul2d -Wl,--wrap=__aeabi_d2iz -Wl,--wrap=__aeabi_d2lz -Wl,--wrap=__aeabi_d2uiz -Wl,--wrap=__aeabi_d2ulz -Wl,--wrap=__aeabi_d2f -Wl,--wrap=sqrt -Wl,--wrap=cos -Wl,--wrap=sin -Wl,--wrap=tan -Wl,--wrap=atan2 -Wl,--wrap=exp -Wl,--wrap=log -Wl,--wrap=ldexp -Wl,--wrap=copysign -Wl,--wrap=trunc -Wl,--wrap=floor -Wl,--wrap=ceil -Wl,--wrap=round -Wl,--wrap=sincos -Wl,--wrap=asin -Wl,--wrap=acos -Wl,--wrap=atan -Wl,--wrap=sinh -Wl,--wrap=cosh -Wl,--wrap=tanh -Wl,--wrap=asinh -Wl,--wrap=acosh -Wl,--wrap=atanh -Wl,--wrap=exp2 -Wl,--wrap=log2 -Wl,--wrap=exp10 -Wl,--wrap=log10 -Wl,--wrap=pow -Wl,--wrap=powint -Wl,--wrap=hypot -Wl,--wrap=cbrt -Wl,--wrap=fmod -Wl,--wrap=drem -Wl,--wrap=remainder -Wl,--wrap=remquo -Wl,--wrap=expm1 -Wl,--wrap=log1p -Wl,--wrap=fma -Wl,--wrap=__aeabi_lmul -Wl,--wrap=__aeabi_fadd -Wl,--wrap=__aeabi_fdiv -Wl,--wrap=__aeabi_fmul -Wl,--wrap=__aeabi_frsub -Wl,--wrap=__aeabi_fsub -Wl,--wrap=__aeabi_cfcmpeq -Wl,--wrap=__aeabi_cfrcmple -Wl,--wrap=__aeabi_cfcmple -Wl,--wrap=__aeabi_fcmpeq -Wl,--wrap=__aeabi_fcmplt -Wl,--wrap=__aeabi_fcmple -Wl,--wrap=__aeabi_fcmpge -Wl,--wrap=__aeabi_fcmpgt -Wl,--wrap=__aeabi_fcmpun -Wl,--wrap=__aeabi_i2f -Wl,--wrap=__aeabi_l2f -Wl,--wrap=__aeabi_ui2f -Wl,--wrap=__aeabi_ul2f -Wl,--wrap=__aeabi_f2iz -Wl,--wrap=__aeabi_f2lz -Wl,--wrap=__aeabi_f2uiz -Wl,--wrap=__aeabi_f2ulz -Wl,--wrap=__aeabi_f2d -Wl,--wrap=sqrtf -Wl,--wrap=cosf -Wl,--wrap=sinf -Wl,--wrap=tanf -Wl,--wrap=atan2f -Wl,--wrap=expf -Wl,--wrap=logf -Wl,--wrap=ldexpf -Wl,--wrap=copysignf -Wl,--wrap=truncf -Wl,--wrap=floorf -Wl,--wrap=ceilf -Wl,--wrap=roundf -Wl,--wrap=sincosf -Wl,--wrap=asinf -Wl,--wrap=acosf -Wl,--wrap=atanf -Wl,--wrap=sinhf -Wl,--wrap=coshf -Wl,--wrap=tanhf -Wl,--wrap=asinhf -Wl,--wrap=acoshf -Wl,--wrap=atanhf -Wl,--wrap=exp2f -Wl,--wrap=log2f -Wl,--wrap=exp10f -Wl,--wrap=log10f -Wl,--wrap=powf -Wl,--wrap=powintf -Wl,--wrap=hypotf -Wl,--wrap=cbrtf -Wl,--wrap=fmodf -Wl,--wrap=dremf -Wl,--wrap=remainderf -Wl,--wrap=remquof -Wl,--wrap=expm1f -Wl,--wrap=log1pf -Wl,--wrap=fmaf -Wl,--wrap=malloc -Wl,--wrap=calloc -Wl,--wrap=free -Wl,--wrap=memcpy -Wl,--wrap=memset -Wl,--wrap=__aeabi_memcpy -Wl,--wrap=__aeabi_memset -Wl,--wrap=__aeabi_memcpy4 -Wl,--wrap=__aeabi_memset4 -Wl,--wrap=__aeabi_memcpy8 -Wl,--wrap=__aeabi_memset8 -Wl,-Map=blink.elf.map -Wl,--script=/home/henson/Documents/rasp-pico/pico/pico-sdk/src/rp2_common/pico_standard_link/memmap_default.ld -Wl,--gc-sections -Wl,--wrap=printf -Wl,--wrap=vprintf -Wl,--wrap=puts -Wl,--wrap=putchar CMakeFiles/blink.dir/blink.c.obj 
CMakeFiles/blink.dir/home/henson/Documents/rasp-pico/pico/pico-sdk/src/rp2_common/pico_stdlib/stdlib.c.obj CMakeFiles/blink.dir/home/henson/Documents/rasp-pico/pico/pico-sdk/src/rp2_common/hardware_gpio/gpio.c.obj CMakeFiles/blink.dir/home/henson/Documents/rasp-pico/pico/pico-sdk/src/rp2_common/hardware_claim/claim.c.obj CMakeFiles/blink.dir/home/henson/Documents/rasp-pico/pico/pico-sdk/src/rp2_common/hardware_sync/sync.c.obj CMakeFiles/blink.dir/home/henson/Documents/rasp-pico/pico/pico-sdk/src/rp2_common/pico_platform/platform.c.obj CMakeFiles/blink.dir/home/henson/Documents/rasp-pico/pico/pico-sdk/src/rp2_common/hardware_uart/uart.c.obj CMakeFiles/blink.dir/home/henson/Documents/rasp-pico/pico/pico-sdk/src/rp2_common/hardware_divider/divider.S.obj CMakeFiles/blink.dir/home/henson/Documents/rasp-pico/pico/pico-sdk/src/common/pico_time/time.c.obj CMakeFiles/blink.dir/home/henson/Documents/rasp-pico/pico/pico-sdk/src/common/pico_time/timeout_helper.c.obj CMakeFiles/blink.dir/home/henson/Documents/rasp-pico/pico/pico-sdk/src/rp2_common/hardware_timer/timer.c.obj CMakeFiles/blink.dir/home/henson/Documents/rasp-pico/pico/pico-sdk/src/common/pico_sync/sem.c.obj CMakeFiles/blink.dir/home/henson/Documents/rasp-pico/pico/pico-sdk/src/common/pico_sync/lock_core.c.obj CMakeFiles/blink.dir/home/henson/Documents/rasp-pico/pico/pico-sdk/src/common/pico_sync/mutex.c.obj CMakeFiles/blink.dir/home/henson/Documents/rasp-pico/pico/pico-sdk/src/common/pico_sync/critical_section.c.obj CMakeFiles/blink.dir/home/henson/Documents/rasp-pico/pico/pico-sdk/src/common/pico_util/datetime.c.obj CMakeFiles/blink.dir/home/henson/Documents/rasp-pico/pico/pico-sdk/src/common/pico_util/pheap.c.obj CMakeFiles/blink.dir/home/henson/Documents/rasp-pico/pico/pico-sdk/src/common/pico_util/queue.c.obj CMakeFiles/blink.dir/home/henson/Documents/rasp-pico/pico/pico-sdk/src/rp2_common/pico_runtime/runtime.c.obj CMakeFiles/blink.dir/home/henson/Documents/rasp-pico/pico/pico-sdk/src/rp2_common/hardware_clocks/clocks.c.obj CMakeFiles/blink.dir/home/henson/Documents/rasp-pico/pico/pico-sdk/src/rp2_common/hardware_watchdog/watchdog.c.obj CMakeFiles/blink.dir/home/henson/Documents/rasp-pico/pico/pico-sdk/src/rp2_common/hardware_xosc/xosc.c.obj CMakeFiles/blink.dir/home/henson/Documents/rasp-pico/pico/pico-sdk/src/rp2_common/hardware_pll/pll.c.obj CMakeFiles/blink.dir/home/henson/Documents/rasp-pico/pico/pico-sdk/src/rp2_common/hardware_vreg/vreg.c.obj CMakeFiles/blink.dir/home/henson/Documents/rasp-pico/pico/pico-sdk/src/rp2_common/hardware_irq/irq.c.obj CMakeFiles/blink.dir/home/henson/Documents/rasp-pico/pico/pico-sdk/src/rp2_common/hardware_irq/irq_handler_chain.S.obj CMakeFiles/blink.dir/home/henson/Documents/rasp-pico/pico/pico-sdk/src/rp2_common/pico_printf/printf.c.obj CMakeFiles/blink.dir/home/henson/Documents/rasp-pico/pico/pico-sdk/src/rp2_common/pico_bit_ops/bit_ops_aeabi.S.obj CMakeFiles/blink.dir/home/henson/Documents/rasp-pico/pico/pico-sdk/src/rp2_common/pico_bootrom/bootrom.c.obj CMakeFiles/blink.dir/home/henson/Documents/rasp-pico/pico/pico-sdk/src/rp2_common/pico_divider/divider.S.obj CMakeFiles/blink.dir/home/henson/Documents/rasp-pico/pico/pico-sdk/src/rp2_common/pico_double/double_aeabi.S.obj CMakeFiles/blink.dir/home/henson/Documents/rasp-pico/pico/pico-sdk/src/rp2_common/pico_double/double_init_rom.c.obj CMakeFiles/blink.dir/home/henson/Documents/rasp-pico/pico/pico-sdk/src/rp2_common/pico_double/double_math.c.obj 
CMakeFiles/blink.dir/home/henson/Documents/rasp-pico/pico/pico-sdk/src/rp2_common/pico_double/double_v1_rom_shim.S.obj CMakeFiles/blink.dir/home/henson/Documents/rasp-pico/pico/pico-sdk/src/rp2_common/pico_int64_ops/pico_int64_ops_aeabi.S.obj CMakeFiles/blink.dir/home/henson/Documents/rasp-pico/pico/pico-sdk/src/rp2_common/pico_float/float_aeabi.S.obj CMakeFiles/blink.dir/home/henson/Documents/rasp-pico/pico/pico-sdk/src/rp2_common/pico_float/float_init_rom.c.obj CMakeFiles/blink.dir/home/henson/Documents/rasp-pico/pico/pico-sdk/src/rp2_common/pico_float/float_math.c.obj CMakeFiles/blink.dir/home/henson/Documents/rasp-pico/pico/pico-sdk/src/rp2_common/pico_float/float_v1_rom_shim.S.obj CMakeFiles/blink.dir/home/henson/Documents/rasp-pico/pico/pico-sdk/src/rp2_common/pico_malloc/pico_malloc.c.obj CMakeFiles/blink.dir/home/henson/Documents/rasp-pico/pico/pico-sdk/src/rp2_common/pico_mem_ops/mem_ops_aeabi.S.obj CMakeFiles/blink.dir/home/henson/Documents/rasp-pico/pico/pico-sdk/src/rp2_common/pico_standard_link/crt0.S.obj CMakeFiles/blink.dir/home/henson/Documents/rasp-pico/pico/pico-sdk/src/rp2_common/pico_standard_link/new_delete.cpp.obj CMakeFiles/blink.dir/home/henson/Documents/rasp-pico/pico/pico-sdk/src/rp2_common/pico_standard_link/binary_info.c.obj CMakeFiles/blink.dir/home/henson/Documents/rasp-pico/pico/pico-sdk/src/rp2_common/pico_stdio/stdio.c.obj CMakeFiles/blink.dir/home/henson/Documents/rasp-pico/pico/pico-sdk/src/rp2_common/pico_stdio_uart/stdio_uart.c.obj -o blink.elf ../pico_sdk/src/rp2_common/boot_stage2/bs2_default_padded_checksummed.S
# -march=armv6-m -mcpu=cortex-m0plus -mthumb -Og -g -ffunction-sections -fdata-sections
DEVICE = ' -march=armv6-m -mcpu=cortex-m0plus -mthumb -ffunction-sections -fdata-sections'
CFLAGS = DEVICE + ' -Dgcc'
AFLAGS = ' -c' + DEVICE + ' -x assembler-with-cpp -Wa,-mimplicit-it=thumb -ILibraries/pico-sdk/src/common/pico_stdlib/include -ILibraries/pico-sdk/src/rp2_common/hardware_gpio/include -ILibraries/pico-sdk/src/common/pico_base/include -ILibraries/generated/pico_base -ILibraries/pico-sdk/src/boards/include -ILibraries/pico-sdk/src/rp2_common/pico_platform/include -ILibraries/pico-sdk/src/rp2040/hardware_regs/include -ILibraries/pico-sdk/src/rp2_common/hardware_base/include -ILibraries/pico-sdk/src/rp2040/hardware_structs/include -ILibraries/pico-sdk/src/rp2_common/hardware_claim/include -ILibraries/pico-sdk/src/rp2_common/hardware_sync/include -ILibraries/pico-sdk/src/rp2_common/hardware_uart/include -ILibraries/pico-sdk/src/rp2_common/hardware_divider/include -ILibraries/pico-sdk/src/common/pico_time/include -ILibraries/pico-sdk/src/rp2_common/hardware_timer/include -ILibraries/pico-sdk/src/common/pico_sync/include -ILibraries/pico-sdk/src/common/pico_util/include -ILibraries/pico-sdk/src/rp2_common/pico_runtime/include -ILibraries/pico-sdk/src/rp2_common/hardware_clocks/include -ILibraries/pico-sdk/src/rp2_common/hardware_resets/include -ILibraries/pico-sdk/src/rp2_common/hardware_watchdog/include -ILibraries/pico-sdk/src/rp2_common/hardware_xosc/include -ILibraries/pico-sdk/src/rp2_common/hardware_pll/include -ILibraries/pico-sdk/src/rp2_common/hardware_vreg/include -ILibraries/pico-sdk/src/rp2_common/hardware_irq/include -ILibraries/pico-sdk/src/rp2_common/pico_printf/include -ILibraries/pico-sdk/src/rp2_common/pico_bootrom/include -ILibraries/pico-sdk/src/common/pico_bit_ops/include -ILibraries/pico-sdk/src/common/pico_divider/include -ILibraries/pico-sdk/src/rp2_common/pico_double/include -ILibraries/pico-sdk/src/rp2_common/pico_int64_ops/include -ILibraries/pico-sdk/src/rp2_common/pico_float/include -ILibraries/pico-sdk/src/common/pico_binary_info/include -ILibraries/pico-sdk/src/rp2_common/pico_stdio/include -ILibraries/pico-sdk/src/rp2_common/pico_stdio_uart/include'
LFLAGS = DEVICE + ' -Wl,--gc-sections,-Map=rt-thread.map,-cref,-u,Reset_Handler -T link.ld' + ' -Wl,--build-id=none --specs=nosys.specs -Wl,--wrap=sprintf -Wl,--wrap=snprintf -Wl,--wrap=vsnprintf -Wl,--wrap=__clzsi2 -Wl,--wrap=__clzdi2 -Wl,--wrap=__ctzsi2 -Wl,--wrap=__ctzdi2 -Wl,--wrap=__popcountsi2 -Wl,--wrap=__popcountdi2 -Wl,--wrap=__clz -Wl,--wrap=__clzl -Wl,--wrap=__clzll -Wl,--wrap=__aeabi_idiv -Wl,--wrap=__aeabi_idivmod -Wl,--wrap=__aeabi_ldivmod -Wl,--wrap=__aeabi_uidiv -Wl,--wrap=__aeabi_uidivmod -Wl,--wrap=__aeabi_uldivmod -Wl,--wrap=__aeabi_dadd -Wl,--wrap=__aeabi_ddiv -Wl,--wrap=__aeabi_dmul -Wl,--wrap=__aeabi_drsub -Wl,--wrap=__aeabi_dsub -Wl,--wrap=__aeabi_cdcmpeq -Wl,--wrap=__aeabi_cdrcmple -Wl,--wrap=__aeabi_cdcmple -Wl,--wrap=__aeabi_dcmpeq -Wl,--wrap=__aeabi_dcmplt -Wl,--wrap=__aeabi_dcmple -Wl,--wrap=__aeabi_dcmpge -Wl,--wrap=__aeabi_dcmpgt -Wl,--wrap=__aeabi_dcmpun -Wl,--wrap=__aeabi_i2d -Wl,--wrap=__aeabi_l2d -Wl,--wrap=__aeabi_ui2d -Wl,--wrap=__aeabi_ul2d -Wl,--wrap=__aeabi_d2iz -Wl,--wrap=__aeabi_d2lz -Wl,--wrap=__aeabi_d2uiz -Wl,--wrap=__aeabi_d2ulz -Wl,--wrap=__aeabi_d2f -Wl,--wrap=sqrt -Wl,--wrap=cos -Wl,--wrap=sin -Wl,--wrap=tan -Wl,--wrap=atan2 -Wl,--wrap=exp -Wl,--wrap=log -Wl,--wrap=ldexp -Wl,--wrap=copysign -Wl,--wrap=trunc -Wl,--wrap=floor -Wl,--wrap=ceil -Wl,--wrap=round -Wl,--wrap=sincos -Wl,--wrap=asin -Wl,--wrap=acos -Wl,--wrap=atan -Wl,--wrap=sinh -Wl,--wrap=cosh -Wl,--wrap=tanh -Wl,--wrap=asinh -Wl,--wrap=acosh -Wl,--wrap=atanh -Wl,--wrap=exp2 -Wl,--wrap=log2 -Wl,--wrap=exp10 -Wl,--wrap=log10 -Wl,--wrap=pow -Wl,--wrap=powint -Wl,--wrap=hypot -Wl,--wrap=cbrt -Wl,--wrap=fmod -Wl,--wrap=drem -Wl,--wrap=remainder -Wl,--wrap=remquo -Wl,--wrap=expm1 -Wl,--wrap=log1p -Wl,--wrap=fma -Wl,--wrap=__aeabi_lmul -Wl,--wrap=__aeabi_fadd -Wl,--wrap=__aeabi_fdiv -Wl,--wrap=__aeabi_fmul -Wl,--wrap=__aeabi_frsub -Wl,--wrap=__aeabi_fsub -Wl,--wrap=__aeabi_cfcmpeq -Wl,--wrap=__aeabi_cfrcmple -Wl,--wrap=__aeabi_cfcmple -Wl,--wrap=__aeabi_fcmpeq -Wl,--wrap=__aeabi_fcmplt -Wl,--wrap=__aeabi_fcmple -Wl,--wrap=__aeabi_fcmpge -Wl,--wrap=__aeabi_fcmpgt -Wl,--wrap=__aeabi_fcmpun -Wl,--wrap=__aeabi_i2f -Wl,--wrap=__aeabi_l2f -Wl,--wrap=__aeabi_ui2f -Wl,--wrap=__aeabi_ul2f -Wl,--wrap=__aeabi_f2iz -Wl,--wrap=__aeabi_f2lz -Wl,--wrap=__aeabi_f2uiz -Wl,--wrap=__aeabi_f2ulz -Wl,--wrap=__aeabi_f2d -Wl,--wrap=sqrtf -Wl,--wrap=cosf -Wl,--wrap=sinf -Wl,--wrap=tanf -Wl,--wrap=atan2f -Wl,--wrap=expf -Wl,--wrap=logf -Wl,--wrap=ldexpf -Wl,--wrap=copysignf -Wl,--wrap=truncf -Wl,--wrap=floorf -Wl,--wrap=ceilf -Wl,--wrap=roundf -Wl,--wrap=sincosf -Wl,--wrap=asinf -Wl,--wrap=acosf -Wl,--wrap=atanf -Wl,--wrap=sinhf -Wl,--wrap=coshf -Wl,--wrap=tanhf -Wl,--wrap=asinhf -Wl,--wrap=acoshf -Wl,--wrap=atanhf -Wl,--wrap=exp2f -Wl,--wrap=log2f -Wl,--wrap=exp10f -Wl,--wrap=log10f -Wl,--wrap=powf -Wl,--wrap=powintf -Wl,--wrap=hypotf -Wl,--wrap=cbrtf -Wl,--wrap=fmodf -Wl,--wrap=dremf -Wl,--wrap=remainderf -Wl,--wrap=remquof -Wl,--wrap=expm1f -Wl,--wrap=log1pf -Wl,--wrap=fmaf -Wl,--wrap=malloc -Wl,--wrap=calloc -Wl,--wrap=free -Wl,--wrap=memcpy -Wl,--wrap=memset -Wl,--wrap=__aeabi_memcpy -Wl,--wrap=__aeabi_memset -Wl,--wrap=__aeabi_memcpy4 -Wl,--wrap=__aeabi_memset4 -Wl,--wrap=__aeabi_memcpy8 -Wl,--wrap=__aeabi_memset8 -Wl,--gc-sections -Wl,--wrap=printf -Wl,--wrap=vprintf -Wl,--wrap=puts -Wl,--wrap=putchar'
CPATH = ''
LPATH = ''
if BUILD == 'debug':
CFLAGS += ' -O0 -gdwarf-2 -g'
AFLAGS += ' -gdwarf-2'
else:
CFLAGS += ' -O2'
CXXFLAGS = CFLAGS #+ ' -Wl,--build-id=none --specs=nosys.specs -Wl,--wrap=sprintf -Wl,--wrap=snprintf -Wl,--wrap=vsnprintf -Wl,--wrap=__clzsi2 -Wl,--wrap=__clzdi2 -Wl,--wrap=__ctzsi2 -Wl,--wrap=__ctzdi2 -Wl,--wrap=__popcountsi2 -Wl,--wrap=__popcountdi2 -Wl,--wrap=__clz -Wl,--wrap=__clzl -Wl,--wrap=__clzll -Wl,--wrap=__aeabi_idiv -Wl,--wrap=__aeabi_idivmod -Wl,--wrap=__aeabi_ldivmod -Wl,--wrap=__aeabi_uidiv -Wl,--wrap=__aeabi_uidivmod -Wl,--wrap=__aeabi_uldivmod -Wl,--wrap=__aeabi_dadd -Wl,--wrap=__aeabi_ddiv -Wl,--wrap=__aeabi_dmul -Wl,--wrap=__aeabi_drsub -Wl,--wrap=__aeabi_dsub -Wl,--wrap=__aeabi_cdcmpeq -Wl,--wrap=__aeabi_cdrcmple -Wl,--wrap=__aeabi_cdcmple -Wl,--wrap=__aeabi_dcmpeq -Wl,--wrap=__aeabi_dcmplt -Wl,--wrap=__aeabi_dcmple -Wl,--wrap=__aeabi_dcmpge -Wl,--wrap=__aeabi_dcmpgt -Wl,--wrap=__aeabi_dcmpun -Wl,--wrap=__aeabi_i2d -Wl,--wrap=__aeabi_l2d -Wl,--wrap=__aeabi_ui2d -Wl,--wrap=__aeabi_ul2d -Wl,--wrap=__aeabi_d2iz -Wl,--wrap=__aeabi_d2lz -Wl,--wrap=__aeabi_d2uiz -Wl,--wrap=__aeabi_d2ulz -Wl,--wrap=__aeabi_d2f -Wl,--wrap=sqrt -Wl,--wrap=cos -Wl,--wrap=sin -Wl,--wrap=tan -Wl,--wrap=atan2 -Wl,--wrap=exp -Wl,--wrap=log -Wl,--wrap=ldexp -Wl,--wrap=copysign -Wl,--wrap=trunc -Wl,--wrap=floor -Wl,--wrap=ceil -Wl,--wrap=round -Wl,--wrap=sincos -Wl,--wrap=asin -Wl,--wrap=acos -Wl,--wrap=atan -Wl,--wrap=sinh -Wl,--wrap=cosh -Wl,--wrap=tanh -Wl,--wrap=asinh -Wl,--wrap=acosh -Wl,--wrap=atanh -Wl,--wrap=exp2 -Wl,--wrap=log2 -Wl,--wrap=exp10 -Wl,--wrap=log10 -Wl,--wrap=pow -Wl,--wrap=powint -Wl,--wrap=hypot -Wl,--wrap=cbrt -Wl,--wrap=fmod -Wl,--wrap=drem -Wl,--wrap=remainder -Wl,--wrap=remquo -Wl,--wrap=expm1 -Wl,--wrap=log1p -Wl,--wrap=fma -Wl,--wrap=__aeabi_lmul -Wl,--wrap=__aeabi_fadd -Wl,--wrap=__aeabi_fdiv -Wl,--wrap=__aeabi_fmul -Wl,--wrap=__aeabi_frsub -Wl,--wrap=__aeabi_fsub -Wl,--wrap=__aeabi_cfcmpeq -Wl,--wrap=__aeabi_cfrcmple -Wl,--wrap=__aeabi_cfcmple -Wl,--wrap=__aeabi_fcmpeq -Wl,--wrap=__aeabi_fcmplt -Wl,--wrap=__aeabi_fcmple -Wl,--wrap=__aeabi_fcmpge -Wl,--wrap=__aeabi_fcmpgt -Wl,--wrap=__aeabi_fcmpun -Wl,--wrap=__aeabi_i2f -Wl,--wrap=__aeabi_l2f -Wl,--wrap=__aeabi_ui2f -Wl,--wrap=__aeabi_ul2f -Wl,--wrap=__aeabi_f2iz -Wl,--wrap=__aeabi_f2lz -Wl,--wrap=__aeabi_f2uiz -Wl,--wrap=__aeabi_f2ulz -Wl,--wrap=__aeabi_f2d -Wl,--wrap=sqrtf -Wl,--wrap=cosf -Wl,--wrap=sinf -Wl,--wrap=tanf -Wl,--wrap=atan2f -Wl,--wrap=expf -Wl,--wrap=logf -Wl,--wrap=ldexpf -Wl,--wrap=copysignf -Wl,--wrap=truncf -Wl,--wrap=floorf -Wl,--wrap=ceilf -Wl,--wrap=roundf -Wl,--wrap=sincosf -Wl,--wrap=asinf -Wl,--wrap=acosf -Wl,--wrap=atanf -Wl,--wrap=sinhf -Wl,--wrap=coshf -Wl,--wrap=tanhf -Wl,--wrap=asinhf -Wl,--wrap=acoshf -Wl,--wrap=atanhf -Wl,--wrap=exp2f -Wl,--wrap=log2f -Wl,--wrap=exp10f -Wl,--wrap=log10f -Wl,--wrap=powf -Wl,--wrap=powintf -Wl,--wrap=hypotf -Wl,--wrap=cbrtf -Wl,--wrap=fmodf -Wl,--wrap=dremf -Wl,--wrap=remainderf -Wl,--wrap=remquof -Wl,--wrap=expm1f -Wl,--wrap=log1pf -Wl,--wrap=fmaf -Wl,--wrap=malloc -Wl,--wrap=calloc -Wl,--wrap=free -Wl,--wrap=memcpy -Wl,--wrap=memset -Wl,--wrap=__aeabi_memcpy -Wl,--wrap=__aeabi_memset -Wl,--wrap=__aeabi_memcpy4 -Wl,--wrap=__aeabi_memset4 -Wl,--wrap=__aeabi_memcpy8 -Wl,--wrap=__aeabi_memset8 -Wl,--gc-sections -Wl,--wrap=printf -Wl,--wrap=vprintf -Wl,--wrap=puts -Wl,--wrap=putchar'
POST_ACTION = OBJCPY + ' -O binary $TARGET rtthread.bin\n' + SIZE + ' $TARGET \n'
| 281.104478 | 8,530 | 0.751513 | 3,114 | 18,834 | 4.285806 | 0.117213 | 0.2104 | 0.145886 | 0.059419 | 0.886558 | 0.882287 | 0.878391 | 0.837554 | 0.76862 | 0.756481 | 0 | 0.01328 | 0.052458 | 18,834 | 66 | 8,531 | 285.363636 | 0.734562 | 0.645853 | 0 | 0 | 0 | 0.088889 | 0.854354 | 0.522823 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.022222 | 0 | 0.022222 | 0.044444 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 11 |
b39295b1d5ffe189736e95cc29a40adb9da6384d | 39 | py | Python | src/lib/weakref.py | DTenore/skulpt | 098d20acfb088d6db85535132c324b7ac2f2d212 | [
"MIT"
] | 2,671 | 2015-01-03T08:23:25.000Z | 2022-03-31T06:15:48.000Z | src/lib/weakref.py | wakeupmuyunhe/skulpt | a8fb11a80fb6d7c016bab5dfe3712517a350b347 | [
"MIT"
] | 972 | 2015-01-05T08:11:00.000Z | 2022-03-29T13:47:15.000Z | src/lib/weakref.py | wakeupmuyunhe/skulpt | a8fb11a80fb6d7c016bab5dfe3712517a350b347 | [
"MIT"
] | 845 | 2015-01-03T19:53:36.000Z | 2022-03-29T18:34:22.000Z | import _sk_fail; _sk_fail._("weakref")
| 19.5 | 38 | 0.769231 | 6 | 39 | 4.166667 | 0.666667 | 0.48 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.076923 | 39 | 1 | 39 | 39 | 0.694444 | 0 | 0 | 0 | 0 | 0 | 0.179487 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
b3a71d7cf4001bd7be8221de76bcaa2b59c30366 | 3,160 | py | Python | tests/primitives/test_signed_area_distance.py | dwferrer/tf-poly | 28633d630a772cdef8fd477866a58561b5ccd42a | [
"Apache-2.0"
] | null | null | null | tests/primitives/test_signed_area_distance.py | dwferrer/tf-poly | 28633d630a772cdef8fd477866a58561b5ccd42a | [
"Apache-2.0"
] | null | null | null | tests/primitives/test_signed_area_distance.py | dwferrer/tf-poly | 28633d630a772cdef8fd477866a58561b5ccd42a | [
"Apache-2.0"
] | null | null | null | import pytest
from pytest import approx
import tensorflow as tf
from tf_polygon.primitives import signed_point_line_area, signed_point_line_distance
unit_x_line_segment = tf.convert_to_tensor(((0., 0.),
(1., 0.)))
unit_y_line_segment = tf.convert_to_tensor(((0., 0.),
(0., 1.)))
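# Sign convention implied by the expected values below: points to the left of
# the segment's direction get a positive distance (e.g. (0., 1.) lies left of
# the +x segment -> +1; (1., 0.) lies right of the +y segment -> -1). This is
# inferred from the tests, not from library documentation.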
def test_point_on_line_distance_zero():
assert signed_point_line_distance(unit_x_line_segment, (0., 0.)).numpy() == approx(0.)
assert signed_point_line_distance(unit_x_line_segment, (.5, 0.)).numpy() == approx(0.)
assert signed_point_line_distance(unit_x_line_segment, (1., 0.)).numpy() == approx(0.)
assert signed_point_line_distance(unit_x_line_segment, (-1., 0.)).numpy() == approx(0.)
assert signed_point_line_distance(unit_x_line_segment, (2., 0.)).numpy() == approx(0.)
assert signed_point_line_distance(unit_y_line_segment, (0., 0.)).numpy() == approx(0.)
assert signed_point_line_distance(unit_y_line_segment, (0., .5)).numpy() == approx(0.)
assert signed_point_line_distance(unit_y_line_segment, (0., 1.)).numpy() == approx(0.)
assert signed_point_line_distance(unit_y_line_segment, (0., -1.)).numpy() == approx(0.)
assert signed_point_line_distance(unit_y_line_segment, (0., 2.)).numpy() == approx(0.)
def test_unit_distance():
assert signed_point_line_distance(unit_x_line_segment, (0., 1.)).numpy() == approx(1.)
assert signed_point_line_distance(unit_x_line_segment, (.5, 1.)).numpy() == approx(1.)
assert signed_point_line_distance(unit_x_line_segment, (1., 1.)).numpy() == approx(1.)
assert signed_point_line_distance(unit_x_line_segment, (0., -1.)).numpy() == approx(-1.)
assert signed_point_line_distance(unit_x_line_segment, (.5, -1.)).numpy() == approx(-1.)
assert signed_point_line_distance(unit_x_line_segment, (1., -1.)).numpy() == approx(-1.)
assert signed_point_line_distance(unit_y_line_segment, (1., 0.,)).numpy() == approx(-1.)
assert signed_point_line_distance(unit_y_line_segment, (1., .5,)).numpy() == approx(-1.)
assert signed_point_line_distance(unit_y_line_segment, (1., 1.,)).numpy() == approx(-1.)
assert signed_point_line_distance(unit_y_line_segment, (-1., 0.,)).numpy() == approx(1.)
assert signed_point_line_distance(unit_y_line_segment, (-1., .5,)).numpy() == approx(1.)
assert signed_point_line_distance(unit_y_line_segment, (-1., 1.,)).numpy() == approx(1.)
def test_zero_length_segment_has_zero_area():
assert signed_point_line_area(((0., 0.), (0., 0.)), (1., 1.)).numpy() == approx(0.)
def test_derivative():
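    # the signed distance should be differentiable w.r.t. the query point:
    # a unit-offset point must produce a non-zero gradient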
point = tf.convert_to_tensor((1., 0.))
with tf.GradientTape() as tape:
tape.watch(point)
d = signed_point_line_distance(unit_x_line_segment, point)
grad = tape.gradient(d, point)
assert tf.math.reduce_euclidean_norm(grad) > 0.
def test_broadcast():
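    # broadcasting check: lines reshaped to (2, 1, 2, 2) against points
    # reshaped to (1, 2, 2) should give a (2, 2) matrix of pairwise distances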
lines = tf.stack([unit_x_line_segment, unit_y_line_segment])
points = tf.convert_to_tensor(((.5, .5), (-1., 1.)))
d = signed_point_line_distance(lines[:, None, :, :], points[None, :, :])
assert tf.reduce_all(tf.shape(d) == (2, 2))
| 47.164179 | 92 | 0.681013 | 470 | 3,160 | 4.176596 | 0.110638 | 0.151299 | 0.206317 | 0.292919 | 0.750382 | 0.718288 | 0.718288 | 0.718288 | 0.653591 | 0.653591 | 0 | 0.03347 | 0.149051 | 3,160 | 66 | 93 | 47.878788 | 0.696541 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.543478 | 1 | 0.108696 | false | 0 | 0.086957 | 0 | 0.195652 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
b3bebf8ea877282314f1b1acf646fb8cda6aa536 | 6,805 | py | Python | sdk/python/pulumi_oci/artifacts/_inputs.py | EladGabay/pulumi-oci | 6841e27d4a1a7e15c672306b769912efbfd3ba99 | [
"ECL-2.0",
"Apache-2.0"
] | 5 | 2021-08-17T11:14:46.000Z | 2021-12-31T02:07:03.000Z | sdk/python/pulumi_oci/artifacts/_inputs.py | pulumi-oci/pulumi-oci | 6841e27d4a1a7e15c672306b769912efbfd3ba99 | [
"ECL-2.0",
"Apache-2.0"
] | 1 | 2021-09-06T11:21:29.000Z | 2021-09-06T11:21:29.000Z | sdk/python/pulumi_oci/artifacts/_inputs.py | pulumi-oci/pulumi-oci | 6841e27d4a1a7e15c672306b769912efbfd3ba99 | [
"ECL-2.0",
"Apache-2.0"
] | 2 | 2021-08-24T23:31:30.000Z | 2022-01-02T19:26:54.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
__all__ = [
    'ContainerRepositoryReadmeArgs',
    'GetContainerImageSignaturesFilterArgs',
    'GetContainerImagesFilterArgs',
    'GetContainerRepositoriesFilterArgs',
    'GetGenericArtifactsFilterArgs',
    'GetRepositoriesFilterArgs',
]


@pulumi.input_type
class ContainerRepositoryReadmeArgs:
    def __init__(__self__, *,
                 content: pulumi.Input[str],
                 format: pulumi.Input[str]):
        """
        :param pulumi.Input[str] content: (Updatable) Readme content. Avoid entering confidential information.
        :param pulumi.Input[str] format: (Updatable) Readme format. Supported formats are text/plain and text/markdown.
        """
        pulumi.set(__self__, "content", content)
        pulumi.set(__self__, "format", format)

    @property
    @pulumi.getter
    def content(self) -> pulumi.Input[str]:
        """
        (Updatable) Readme content. Avoid entering confidential information.
        """
        return pulumi.get(self, "content")

    @content.setter
    def content(self, value: pulumi.Input[str]):
        pulumi.set(self, "content", value)

    @property
    @pulumi.getter
    def format(self) -> pulumi.Input[str]:
        """
        (Updatable) Readme format. Supported formats are text/plain and text/markdown.
        """
        return pulumi.get(self, "format")

    @format.setter
    def format(self, value: pulumi.Input[str]):
        pulumi.set(self, "format", value)


@pulumi.input_type
class GetContainerImageSignaturesFilterArgs:
    def __init__(__self__, *,
                 name: str,
                 values: Sequence[str],
                 regex: Optional[bool] = None):
        pulumi.set(__self__, "name", name)
        pulumi.set(__self__, "values", values)
        if regex is not None:
            pulumi.set(__self__, "regex", regex)

    @property
    @pulumi.getter
    def name(self) -> str:
        return pulumi.get(self, "name")

    @name.setter
    def name(self, value: str):
        pulumi.set(self, "name", value)

    @property
    @pulumi.getter
    def values(self) -> Sequence[str]:
        return pulumi.get(self, "values")

    @values.setter
    def values(self, value: Sequence[str]):
        pulumi.set(self, "values", value)

    @property
    @pulumi.getter
    def regex(self) -> Optional[bool]:
        return pulumi.get(self, "regex")

    @regex.setter
    def regex(self, value: Optional[bool]):
        pulumi.set(self, "regex", value)


@pulumi.input_type
class GetContainerImagesFilterArgs:
    def __init__(__self__, *,
                 name: str,
                 values: Sequence[str],
                 regex: Optional[bool] = None):
        pulumi.set(__self__, "name", name)
        pulumi.set(__self__, "values", values)
        if regex is not None:
            pulumi.set(__self__, "regex", regex)

    @property
    @pulumi.getter
    def name(self) -> str:
        return pulumi.get(self, "name")

    @name.setter
    def name(self, value: str):
        pulumi.set(self, "name", value)

    @property
    @pulumi.getter
    def values(self) -> Sequence[str]:
        return pulumi.get(self, "values")

    @values.setter
    def values(self, value: Sequence[str]):
        pulumi.set(self, "values", value)

    @property
    @pulumi.getter
    def regex(self) -> Optional[bool]:
        return pulumi.get(self, "regex")

    @regex.setter
    def regex(self, value: Optional[bool]):
        pulumi.set(self, "regex", value)


@pulumi.input_type
class GetContainerRepositoriesFilterArgs:
    def __init__(__self__, *,
                 name: str,
                 values: Sequence[str],
                 regex: Optional[bool] = None):
        pulumi.set(__self__, "name", name)
        pulumi.set(__self__, "values", values)
        if regex is not None:
            pulumi.set(__self__, "regex", regex)

    @property
    @pulumi.getter
    def name(self) -> str:
        return pulumi.get(self, "name")

    @name.setter
    def name(self, value: str):
        pulumi.set(self, "name", value)

    @property
    @pulumi.getter
    def values(self) -> Sequence[str]:
        return pulumi.get(self, "values")

    @values.setter
    def values(self, value: Sequence[str]):
        pulumi.set(self, "values", value)

    @property
    @pulumi.getter
    def regex(self) -> Optional[bool]:
        return pulumi.get(self, "regex")

    @regex.setter
    def regex(self, value: Optional[bool]):
        pulumi.set(self, "regex", value)


@pulumi.input_type
class GetGenericArtifactsFilterArgs:
    def __init__(__self__, *,
                 name: str,
                 values: Sequence[str],
                 regex: Optional[bool] = None):
        pulumi.set(__self__, "name", name)
        pulumi.set(__self__, "values", values)
        if regex is not None:
            pulumi.set(__self__, "regex", regex)

    @property
    @pulumi.getter
    def name(self) -> str:
        return pulumi.get(self, "name")

    @name.setter
    def name(self, value: str):
        pulumi.set(self, "name", value)

    @property
    @pulumi.getter
    def values(self) -> Sequence[str]:
        return pulumi.get(self, "values")

    @values.setter
    def values(self, value: Sequence[str]):
        pulumi.set(self, "values", value)

    @property
    @pulumi.getter
    def regex(self) -> Optional[bool]:
        return pulumi.get(self, "regex")

    @regex.setter
    def regex(self, value: Optional[bool]):
        pulumi.set(self, "regex", value)


@pulumi.input_type
class GetRepositoriesFilterArgs:
    def __init__(__self__, *,
                 name: str,
                 values: Sequence[str],
                 regex: Optional[bool] = None):
        pulumi.set(__self__, "name", name)
        pulumi.set(__self__, "values", values)
        if regex is not None:
            pulumi.set(__self__, "regex", regex)

    @property
    @pulumi.getter
    def name(self) -> str:
        return pulumi.get(self, "name")

    @name.setter
    def name(self, value: str):
        pulumi.set(self, "name", value)

    @property
    @pulumi.getter
    def values(self) -> Sequence[str]:
        return pulumi.get(self, "values")

    @values.setter
    def values(self, value: Sequence[str]):
        pulumi.set(self, "values", value)

    @property
    @pulumi.getter
    def regex(self) -> Optional[bool]:
        return pulumi.get(self, "regex")

    @regex.setter
    def regex(self, value: Optional[bool]):
        pulumi.set(self, "regex", value)
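

# --- Editor's addition: a hedged usage sketch, not part of the generated SDK file. ---
# Illustrates how the input types above are typically constructed; the filter field
# "display_name" and all literal values are illustrative assumptions.
def _example_usage():
    # Readme attached to a container repository; format must be text/plain or text/markdown.
    readme = ContainerRepositoryReadmeArgs(
        content="## Example repository",
        format="text/markdown",
    )
    # A filter of the kind accepted by the matching get_* data-source invocations.
    repo_filter = GetContainerRepositoriesFilterArgs(
        name="display_name",
        values=["example-repo"],
        regex=False,
    )
    return readme, repo_filter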
| 27.003968 | 119 | 0.604555 | 749 | 6,805 | 5.323097 | 0.109479 | 0.076749 | 0.11086 | 0.098069 | 0.76624 | 0.752947 | 0.743918 | 0.714823 | 0.696764 | 0.696764 | 0 | 0.000201 | 0.268332 | 6,805 | 251 | 120 | 27.111554 | 0.800562 | 0.0795 | 0 | 0.825397 | 1 | 0 | 0.072145 | 0.02944 | 0 | 0 | 0 | 0 | 0 | 1 | 0.21164 | false | 0 | 0.026455 | 0.079365 | 0.359788 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
b3cc00020c29aa7489df45a97c6aaa1585cba814 | 169 | py | Python | python-3/beginner/1097.py | MisaelAugusto/uri | 22bee72edf44f939d7a290383336b4d061faecbb | [
"MIT"
] | null | null | null | python-3/beginner/1097.py | MisaelAugusto/uri | 22bee72edf44f939d7a290383336b4d061faecbb | [
"MIT"
] | null | null | null | python-3/beginner/1097.py | MisaelAugusto/uri | 22bee72edf44f939d7a290383336b4d061faecbb | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# URI 1097: for each odd I from 1 to 9, print J, J - 1 and J - 2, where J = I + 6.
I, J = 1, 7
while I <= 9:
    print("I=%d J=%d" % (I, J))
    print("I=%d J=%d" % (I, J - 1))
    print("I=%d J=%d" % (I, J - 2))
    I += 2
    J += 2 | 18.777778 | 33 | 0.35503 | 37 | 169 | 1.621622 | 0.324324 | 0.133333 | 0.35 | 0.4 | 0.55 | 0.55 | 0.55 | 0 | 0 | 0 | 0 | 0.067227 | 0.295858 | 169 | 9 | 34 | 18.777778 | 0.436975 | 0.12426 | 0 | 0 | 0 | 0 | 0.183673 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0.428571 | 0 | 0 | 1 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 7 |
b3fadf8b6072ddb204d602198de34c5f6161404f | 153 | py | Python | flask/config.py | wyehuongyan/openface-cv2-flask | e5bf3fdcd61eaef46839f0ad6e75cd232d1ec9df | [
"Apache-2.0"
] | null | null | null | flask/config.py | wyehuongyan/openface-cv2-flask | e5bf3fdcd61eaef46839f0ad6e75cd232d1ec9df | [
"Apache-2.0"
] | null | null | null | flask/config.py | wyehuongyan/openface-cv2-flask | e5bf3fdcd61eaef46839f0ad6e75cd232d1ec9df | [
"Apache-2.0"
] | null | null | null | # SQLALCHEMY_DATABASE_URI = 'mysql://root:password@mariadb/openfacedb'  # for Docker
SQLALCHEMY_DATABASE_URI = 'mysql://root:password@localhost/openfacedb' | 76.5 | 82 | 0.816993 | 18 | 153 | 6.722222 | 0.611111 | 0.297521 | 0.347107 | 0.429752 | 0.628099 | 0.628099 | 0 | 0 | 0 | 0 | 0 | 0 | 0.052288 | 153 | 2 | 83 | 76.5 | 0.834483 | 0.522876 | 0 | 0 | 0 | 0 | 0.583333 | 0.583333 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
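# Editor's note (assumption, not in the original file): within the
# openface-cv2-flask app this module is presumably consumed by Flask along the
# lines of:
#     app = flask.Flask(__name__)
#     app.config.from_pyfile('config.py')  # or app.config.from_object('config')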