hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
3787919003ffade59aef63b0af12f990fb0cabfe | 166 | py | Python | test6.py | yespanthi/git-test | 1832d126683e9d621b2b5ab8f79bd5c704c65458 | [
"Apache-2.0"
] | null | null | null | test6.py | yespanthi/git-test | 1832d126683e9d621b2b5ab8f79bd5c704c65458 | [
"Apache-2.0"
] | null | null | null | test6.py | yespanthi/git-test | 1832d126683e9d621b2b5ab8f79bd5c704c65458 | [
"Apache-2.0"
] | null | null | null | print("hell git hub test5 files")
print("hell git hub test5 files")
print("hell git hub test5 files")
print("hell git hub test5 files")
print("hell git hub test5 files")
| 23.714286 | 32 | 0.753012 | 30 | 166 | 4.166667 | 0.2 | 0.36 | 0.48 | 0.6 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0.035714 | 0.156627 | 166 | 6 | 33 | 27.666667 | 0.857143 | 0 | 0 | 1 | 0 | 0 | 0.727273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 1 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 14 |
37c07fcc79739912d1645652c7fca6c230832237 | 3,217 | py | Python | task2.py | JA-VON/python-helpers-msbm | c94555e009fce6776f03aee5976b36496b9b8480 | [
"MIT"
] | 1 | 2016-04-03T05:59:31.000Z | 2016-04-03T05:59:31.000Z | task2.py | JA-VON/python-helpers-msbm | c94555e009fce6776f03aee5976b36496b9b8480 | [
"MIT"
] | null | null | null | task2.py | JA-VON/python-helpers-msbm | c94555e009fce6776f03aee5976b36496b9b8480 | [
"MIT"
] | null | null | null | import random
import pickle
import unittest
"""
Function used for task two; takes the student's class number, name, and score,
and makes use of Python's pickle library for storing data
"""
def task2(class_number, name, score):
class1 = {}
class2 = {}
class3 = {}
# initial set up and loading any data already present
try:
class1 = pickle.load(open("./class1.pkl", "rb"))
except IOError:
pickle.dump(class1, open("./class1.pkl", "wb"))
try:
class2 = pickle.load(open("./class2.pkl", "rb"))
except IOError:
pickle.dump(class2, open("./class2.pkl", "wb"))
try:
class3 = pickle.load(open("./class3.pkl", "rb"))
except IOError:
pickle.dump(class3, open("./class3.pkl", "wb"))
# store data in the appropriate file
if class_number == 1:
if name in class1:
class1[name].append(score)
else:
class1[name] = [score]
pickle.dump(class1, open("./class1.pkl", "wb"))
elif class_number == 2:
if name in class2:
class2[name].append(score)
else:
class2[name] = [score]
pickle.dump(class2, open("./class2.pkl", "wb"))
else:
if name in class3:
class3[name].append(score)
else:
class3[name] = [score]
pickle.dump(class3, open("./class3.pkl", "wb"))
"""Function for task 2 modified to fit specifications in Task 3"""
def task2_modified(class_number, name, score):
class1 = {}
class2 = {}
class3 = {}
# initial set up and loading any data already present
try:
class1 = pickle.load(open("./class1.pkl", "rb"))
except IOError:
pickle.dump(class1, open("./class1.pkl", "wb"))
try:
class2 = pickle.load(open("./class2.pkl", "rb"))
except IOError:
pickle.dump(class2, open("./class2.pkl", "wb"))
try:
class3 = pickle.load(open("./class3.pkl", "rb"))
except IOError:
pickle.dump(class3, open("./class3.pkl", "wb"))
# store data in the appropriate file
if class_number == 1:
if name in class1:
class1[name].append(score)
total_number_of_scores = len(class1[name])
# check to ensure only last 3 grades are kept recorded
if total_number_of_scores > 3:
class1[name] = class1[name][total_number_of_scores - 3:]
else:
class1[name] = [score]
pickle.dump(class1, open("./class1.pkl", "wb"))
elif class_number == 2:
if name in class2:
class2[name].append(score)
total_number_of_scores = len(class2[name])
if total_number_of_scores > 3:
class2[name] = class2[name][total_number_of_scores - 3:]
else:
class2[name] = [score]
pickle.dump(class2, open("./class2.pkl", "wb"))
else:
if name in class3:
class3[name].append(score)
total_number_of_scores = len(class3[name])
if total_number_of_scores > 3:
class3[name] = class3[name][total_number_of_scores - 3:]
else:
class3[name] = [score]
pickle.dump(class3, open("./class3.pkl", "wb")) | 30.638095 | 72 | 0.56792 | 397 | 3,217 | 4.516373 | 0.183879 | 0.066927 | 0.065254 | 0.095371 | 0.824317 | 0.824317 | 0.812047 | 0.742889 | 0.706079 | 0.706079 | 0 | 0.036972 | 0.293752 | 3,217 | 105 | 73 | 30.638095 | 0.752201 | 0.070252 | 0 | 0.8625 | 0 | 0 | 0.090193 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.025 | false | 0 | 0.0375 | 0 | 0.0625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
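The task2_modified function in the row above persists per-class score dictionaries with pickle and trims each student's history to the last three entries. A minimal self-contained sketch of that pattern — `record_score` and the temp-file path are illustrative names, not part of the original file, and a throwaway directory is used so the class*.pkl files are untouched:

```python
import os
import pickle
import tempfile

def record_score(path, name, score, keep=3):
    """Append a score for `name` in the pickled dict at `path`,
    keeping only the last `keep` scores (mirrors task2_modified)."""
    try:
        with open(path, "rb") as f:
            scores = pickle.load(f)
    except OSError:  # file does not exist yet
        scores = {}
    scores.setdefault(name, []).append(score)
    scores[name] = scores[name][-keep:]  # drop all but the most recent `keep`
    with open(path, "wb") as f:
        pickle.dump(scores, f)
    return scores[name]

# demo against a throwaway file
tmp_path = os.path.join(tempfile.mkdtemp(), "class_demo.pkl")
for s in [70, 80, 90, 100]:
    last = record_score(tmp_path, "alice", s)
print(last)  # [80, 90, 100]
```

Using `dict.setdefault` plus a negative slice collapses the original's append/len/slice branches into two lines while keeping the same on-disk format.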
8072990c552a89599e0af3aaec9e9c19177db61b | 40,297 | py | Python | dlacs/visual.py | geek-yang/DeepClim | a33a23e955637341eb4b8929420e958d02b15a69 | [
"Apache-2.0"
] | 5 | 2020-03-21T14:37:40.000Z | 2022-03-28T11:47:13.000Z | dlacs/visual.py | geek-yang/DeepClim | a33a23e955637341eb4b8929420e958d02b15a69 | [
"Apache-2.0"
] | 8 | 2022-01-20T16:05:11.000Z | 2022-02-13T18:19:44.000Z | dlacs/visual.py | geek-yang/DeepClim | a33a23e955637341eb4b8929420e958d02b15a69 | [
"Apache-2.0"
] | 2 | 2021-01-29T03:25:05.000Z | 2021-03-22T12:15:15.000Z | # -*- coding: utf-8 -*-
"""
Copyright Netherlands eScience Center
Function : Plots generator for visualization
Author : Yang Liu (y.liu@esciencecenter.nl)
First Built : 2018.08.13
Last Update : 2020.04.02
Contributor :
Description : This module provides several methods to perform statistical
analysis on MET and all kinds of fields.
Return Values : pngs
Caveat! : The style of gridliner of Cartopy can be found at
https://scitools.org.uk/cartopy/docs/v0.13/matplotlib/gridliner.html
"""
import numpy as np
import scipy
#from scipy import stats
import os
import matplotlib
import numbers
#import seaborn as sns
#import bokeh
import matplotlib.pyplot as plt
import matplotlib.path as mpath
import matplotlib.ticker as mticker
import iris
import iris.plot as iplt
import cartopy
import cartopy.crs as ccrs
from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER
class plots:
@staticmethod
def linearRegress(xaxis, corr, figname='./LinearRegression.png'):
"""
This module will make an x-y plot to display the correlation coefficient
obtained from the linear regression.
param xaxis: latitude for the plot as x axis
param corr: the correlation coefficient
param figname: name and output path of figure
return: Figures
rtype: png
"""
print ("Create x-y plot of correlation coefficient.")
fig = plt.figure()
plt.plot(xaxis, corr)
plt.xlabel("Latitude")
#plt.xticks(np.linspace(20, 90, 11))
plt.ylabel("Correlation Coefficient")
plt.show()
fig.savefig(figname,dpi=400)
plt.close(fig)
@staticmethod
def vertProfile(xaxis, yaxis, corr, p_value, label,
ticks, figname='./VerticalProfile', ttest=False):
"""
This module helps to create a plot to show the vertical profile of fields
after regression.
param xaxis: latitude for the plot as x axis
param yaxis: level for the plot as y axis
param corr: the correlation coefficient
param figname: name and output path of figure
return: Figures
rtype: png
"""
print ("Create contour plot of correlation coefficient for vertical profiles.")
# make plots
fig = plt.figure(figsize=(6.5,5.4))
cs = plt.contourf(xaxis, yaxis, corr, levels=ticks, cmap='coolwarm', extend='both')
cbar = fig.colorbar(cs,extend='both', orientation='horizontal',
shrink =0.8, pad=0.135, format="%.1f")
cbar.set_label(label,size = 10)
cbar.set_ticks(ticks)
cbar.ax.tick_params(labelsize = 10)
if ttest == True:
ii, jj = np.where(p_value<=0.05) # 95% significance
plt.plot(xaxis[jj], yaxis[ii], 'go', alpha=0.3)
plt.xlabel("Latitude")
plt.ylabel("Level (hPa)")
#invert the y axis
plt.gca().invert_yaxis()
plt.show()
fig.savefig(figname,dpi=200)
plt.close(fig)
@staticmethod
def vertProfileSig(xaxis, yaxis, corr, p_value, label,
ticks, figname='./VerticalProfile', ttest=False):
"""
This module helps to create a plot to show the vertical profile of fields
after regression. It also includes the full contour of confidence interval.
param xaxis: latitude for the plot as x axis
param yaxis: level for the plot as y axis
param corr: the correlation coefficient
param figname: name and output path of figure
return: Figures
rtype: png
"""
print ("Create contour plot of correlation coefficient for vertical profiles.")
# make plots
contour_level = [i for i in np.arange(0,1.1, 0.1)]
fig = plt.figure(figsize=(6.5,5.4))
cs = plt.contourf(xaxis, yaxis, corr, levels=ticks, cmap='coolwarm', extend='both')
cbar = fig.colorbar(cs,extend='both', orientation='horizontal',
shrink =0.8, pad=0.135, format="%.1f")
cbar.set_label(label,size = 10)
cbar.set_ticks(ticks)
cbar.ax.tick_params(labelsize = 10)
plt.xlabel("Latitude")
plt.ylabel("Level (hPa)")
cs = plt.contour(xaxis, yaxis, 1-p_value,
contour_level, colors='k')
plt.clabel(cs, inline=1, fontsize=10)
#invert the y axis
plt.gca().invert_yaxis()
plt.show()
fig.savefig(figname,dpi=200)
plt.close(fig)
@staticmethod
def vertProfileOverlap(xaxis, yaxis, corr, cont, p_value, label,
ticks, contour_level, inline_space,
figname='./VerticalProfile', ttest=False):
"""
This module helps to create a plot to show the vertical profile of fields
after regression. It also includes the full contour of stokes stream function.
param xaxis: latitude for the plot as x axis
param yaxis: level for the plot as y axis
param corr: the correlation coefficient
param figname: name and output path of figure
return: Figures
rtype: png
"""
print ("Create contour plot of stokes stream function for vertical profiles.")
fig = plt.figure(figsize=(6.5,5.4))
cs = plt.contourf(xaxis, yaxis, corr, levels=ticks, cmap='coolwarm', extend='both')
cbar = fig.colorbar(cs,extend='both', orientation='horizontal',
shrink =0.8, pad=0.135, format="%.1f")
cbar.set_label(label,size = 10)
cbar.set_ticks(ticks)
cbar.ax.tick_params(labelsize = 10)
if ttest == True:
ii, jj = np.where(p_value<=0.05) # 95% significance
plt.plot(xaxis[jj], yaxis[ii], 'go', alpha=0.3)
plt.xlabel("Latitude")
plt.ylabel("Level (hPa)")
contour = plt.contour(xaxis, yaxis, cont,
contour_level, colors='k', linewidths = 0.9, alpha=0.6)
plt.clabel(contour, inline=inline_space, fontsize=8, fmt = '%1.1f')
#invert the y axis
plt.gca().invert_yaxis()
plt.show()
fig.savefig(figname,dpi=200)
plt.close(fig)
@staticmethod
def leadlagRegress(yaxis, corr, lag, p_value, figname='./LeadLagRegression.png',
ttest=False):
"""
This module will make a contour plot to display the correlation coefficient
obtained from the lead/lag regression.
param yaxis: latitude for the plot as y axis
param corr: the correlation coefficient
param lag: the maximum lag time for plot as x axis
param figname: name and output path of figure
return: Figures
rtype: png
"""
print ("Create contour plot of correlation coefficient.")
# calculate the lead/lag index as x axis
lag_index = np.arange(-lag,lag+1,1)
xaxis = lag_index / 12
# make plots
fig = plt.figure()
#contour_level = np.array([-0.6, -0.4, -0.2, 0.0, 0.2, 0.4, 0.6])
contour_level = np.array([-0.8, -0.6, -0.4, -0.2, 0.0, 0.2, 0.4, 0.6, 0.8])
cs = plt.contour(xaxis, yaxis, corr.transpose(),
contour_level, colors='k')
plt.clabel(cs, inline=1, fontsize=10)
if ttest == True:
ii, jj = np.where(p_value.transpose()<=0.05) # 95% significance
plt.scatter(xaxis[jj], yaxis[ii], s=0.8, c='gray', alpha=0.6)
#plt.plot(xaxis[jj], yaxis[ii], 'go', s=0.1, alpha=0.3)
plt.xlabel("Time Lag (year)")
#lead_year = ['-15','-12','-9','-6','-3','0','3','6','9','12','15']
plt.ylabel("Latitude")
plt.show()
fig.savefig(figname,dpi=200)
plt.close(fig)
@staticmethod
def geograph(latitude, longitude, field, label, ticks,
figname='./NorthPolar.png', gridtype='geographical',
boundary='northhem', colormap= 'coolwarm'):
"""
This module will make a geographical plot to give a spatial view of fields.
This module is built on iris and cartopy for the visualization of fields on
both geographical and curvilinear grid.
param lat: latitude coordinate for plot
param lon: longitude coordinate for plot
param field: input field for visualization
param gridtype: type of input spatial fields, it has two options
- geographical (default) the coordinate is geographical, normally applied to atmosphere reanalysis
- curvilinear the coordinate is curvilinear, normally applied to ocean reanalysis
param figname: name and output path of figure
param boundary: region for plot. It determines the boundary of plot area (lat,lon) and projection.
- northhem (default) plot the north hemisphere from 20N-90N & 180W-180E, with the projection NorthPolarStereo.
- atlantic plot the north Atlantic from 20N-90N & 90W-40E, with the projection PlateCarree
return: figures
rtype: png
"""
print ("Create a NorthPolarStereo view of input fields.")
if gridtype == 'geographical':
print ("The input fields are originally on geographical grid")
# first construct iris coordinate
lat_iris = iris.coords.DimCoord(latitude, standard_name='latitude', long_name='latitude',
var_name='lat', units='degrees')
lon_iris = iris.coords.DimCoord(longitude, standard_name='longitude', long_name='longitude',
var_name='lon', units='degrees')
# assembly the cube
cube_iris = iris.cube.Cube(field, long_name='geographical field', var_name='field',
units='1', dim_coords_and_dims=[(lat_iris, 0), (lon_iris, 1)])
if boundary == 'northhem':
fig = plt.figure()
ax = plt.axes(projection=ccrs.NorthPolarStereo())
#ax.set_extent([-180,180,20,90],ccrs.PlateCarree())
ax.set_extent([-180,180,50,90],ccrs.PlateCarree())
ax.set_aspect('1')
ax.coastlines()
gl = ax.gridlines(linewidth=1, color='gray', alpha=0.5, linestyle='--')
theta = np.linspace(0, 2*np.pi, 100)
center, radius = [0.5, 0.5], 0.5
verts = np.vstack([np.sin(theta), np.cos(theta)]).T
circle = mpath.Path(verts * radius + center)
ax.set_boundary(circle, transform=ax.transAxes)
cs = iplt.contourf(cube_iris, cmap=colormap,levels=ticks, extend='both') #, vmin=ticks[0], vmax=ticks[-1]
cbar = fig.colorbar(cs,extend='both', orientation='horizontal',
shrink =0.8, pad=0.05)#, format="%.1f")
cbar.set_label(label,size = 8)
cbar.set_ticks(ticks)
cbar.ax.tick_params(labelsize = 6)
iplt.show()
fig.savefig(figname, dpi=200)
plt.close(fig)
elif boundary == 'atlantic':
fig = plt.figure(figsize=(8,5.4))
ax = plt.axes(projection=ccrs.PlateCarree())
ax.set_extent([-90,40,20,85],ccrs.PlateCarree())
ax.set_aspect('1')
ax.coastlines()
gl = ax.gridlines(crs=ccrs.PlateCarree(), draw_labels=True,
linewidth=1, color='gray', alpha=0.5, linestyle='--')
gl.xlabels_top = False
gl.xformatter = LONGITUDE_FORMATTER
gl.yformatter = LATITUDE_FORMATTER
gl.xlabel_style = {'size': 11, 'color': 'gray'}
gl.ylabel_style = {'size': 11, 'color': 'gray'}
cs = iplt.contourf(cube_iris,cmap=colormap, levels=ticks, extend='both')
cbar = fig.colorbar(cs,extend='both', orientation='horizontal',
shrink =0.8, pad=0.05)#, format="%.1f")
cbar.set_label(label,size = 11)
cbar.set_ticks(ticks)
cbar.ax.tick_params(labelsize = 11)
iplt.show()
fig.savefig(figname, dpi=200)
plt.close(fig)
elif boundary == 'Barents_PlateCarree':
fig = plt.figure(figsize=(6,5.4))
ax = plt.axes(projection=ccrs.PlateCarree())
ax.set_extent([15,65,60,85],ccrs.PlateCarree()) # W:18 E:60 S:64 N:80
ax.set_aspect('1')
ax.coastlines()
gl = ax.gridlines(crs=ccrs.PlateCarree(), draw_labels=True,
linewidth=1, color='gray', alpha=0.5, linestyle='--')
gl.xlabels_top = False
gl.xformatter = LONGITUDE_FORMATTER
gl.yformatter = LATITUDE_FORMATTER
gl.xlabel_style = {'size': 11, 'color': 'gray'}
gl.ylabel_style = {'size': 11, 'color': 'gray'}
cs = iplt.contourf(cube_iris,cmap=colormap, levels=ticks, extend='both')
cbar = fig.colorbar(cs,extend='both', orientation='horizontal',
shrink =0.8, pad=0.05)#, format="%.1f")
cbar.set_label(label,size = 11)
cbar.set_ticks(ticks)
cbar.ax.tick_params(labelsize = 11)
iplt.show()
fig.savefig(figname, dpi=200)
plt.close(fig)
elif boundary == 'Barents_Polar':
fig = plt.figure()
ax = plt.axes(projection=ccrs.EquidistantConic(central_longitude=39.0, central_latitude=72.0))
ax.set_extent([16,60,60,82],ccrs.PlateCarree()) # W:18 E:60 S:64 N:80
ax.set_aspect('1')
ax.coastlines()
gl = ax.gridlines(linewidth=1, color='gray', alpha=0.5, linestyle='--')
cs = iplt.contourf(cube_iris, cmap=colormap,levels=ticks, extend='both', vmin=ticks[0], vmax=ticks[-1])
cbar = fig.colorbar(cs,extend='both', orientation='horizontal',
shrink =0.8, pad=0.05)#, format="%.1f")
cbar.set_label(label,size = 9)
cbar.set_ticks(ticks)
cbar.ax.tick_params(labelsize = 8)
iplt.show()
fig.savefig(figname, dpi=200)
plt.close(fig)
else:
print ('This boundary is not supported by the module. Please check the documentation.')
elif gridtype == 'curvilinear':
print ("The input fields are originally on curvilinear grid")
# first construct iris coordinate
lat_iris = iris.coords.AuxCoord(latitude, standard_name='latitude', long_name='latitude',
var_name='lat', units='degrees')
lon_iris = iris.coords.AuxCoord(longitude, standard_name='longitude', long_name='longitude',
var_name='lon', units='degrees')
# assembly the cube
cube_iris = iris.cube.Cube(field, long_name='curvilinear field', var_name='field',
units='1', aux_coords_and_dims=[(lat_iris, (0,1)), (lon_iris, (0,1))])
coord_sys = iris.coord_systems.GeogCS(iris.fileformats.pp.EARTH_RADIUS)
cube_iris.coord('latitude').coord_system = coord_sys
cube_iris.coord('longitude').coord_system = coord_sys
# determine nx and ny for interpolation
jj, ii = latitude.shape
if ii > 1000:
nx = 1440
ny = 350
else:
nx = 720
ny = 140
cube_regrid, extent = iris.analysis.cartography.project(cube_iris, ccrs.PlateCarree(), nx, ny)
# make plots
if boundary == 'northhem':
fig = plt.figure()
ax = plt.axes(projection=ccrs.NorthPolarStereo())
ax.set_extent([-180,180,20,90],ccrs.PlateCarree())
ax.set_aspect('1')
ax.coastlines()
gl = ax.gridlines(linewidth=1, color='gray', alpha=0.5, linestyle='--')
#gl.ylocator = mticker.FixedLocator([50,60,70,80,90])
theta = np.linspace(0, 2*np.pi, 100)
center, radius = [0.5, 0.5], 0.5
verts = np.vstack([np.sin(theta), np.cos(theta)]).T
circle = mpath.Path(verts * radius + center)
ax.set_boundary(circle, transform=ax.transAxes)
cs = iplt.contourf(cube_regrid, cmap=colormap, vmin=ticks[0], vmax=ticks[-1]) #pcolormesh
cbar = fig.colorbar(cs,extend='both', orientation='horizontal',
shrink =0.8, pad=0.05)#, format="%.1f")
cbar.set_label(label,size = 8)
cbar.set_ticks(ticks)
cbar.ax.tick_params(labelsize = 6)
iplt.show()
fig.savefig(figname, dpi=200)
plt.close(fig)
elif boundary == 'atlantic':
fig = plt.figure(figsize=(8,5.4))
ax = plt.axes(projection=ccrs.PlateCarree())
ax.set_extent([-90,40,20,85],ccrs.PlateCarree())
ax.set_aspect('1')
ax.coastlines()
gl = ax.gridlines(crs=ccrs.PlateCarree(), draw_labels=True,
linewidth=1, color='gray', alpha=0.5, linestyle='--')
#gl.ylocator = mticker.FixedLocator([50,60,70,80,90])
gl.xlabels_top = False
gl.xformatter = LONGITUDE_FORMATTER
gl.yformatter = LATITUDE_FORMATTER
gl.xlabel_style = {'size': 11, 'color': 'gray'}
gl.ylabel_style = {'size': 11, 'color': 'gray'}
cs = iplt.contourf(cube_regrid, cmap=colormap, vmin=ticks[0], vmax=ticks[-1])
cbar = fig.colorbar(cs,extend='both', orientation='horizontal',
shrink =0.8, pad=0.05, format="%.1f")
cbar.set_label(label,size = 8)
cbar.set_ticks(ticks)
cbar.ax.tick_params(labelsize = 6)
iplt.show()
fig.savefig(figname, dpi=200)
plt.close(fig)
else:
print ('This boundary is not supported by the module. Please check the documentation.')
else:
raise IOError("This module only support fields on geographical or curvilinear grid!")
@staticmethod
def geograph_mode(latitude_x, longitude_x, field_x,
latitude_y, longitude_y, field_y, label, ticks, level,
figname='./NorthPolar.png', gridtype='geographical',
boundary='northhem', colormap= 'coolwarm'):
"""
This module is designed for PCA/SVD/MCA analysis to illustrate two
fields at the same time.
This module is built on iris and cartopy for the visualization of fields on
both geographical and curvilinear grid.
param latitude_x: latitude coordinate for input field x
param longitude_x: longitude coordinate for input field x
param field_x: input field for visualization with shades
param latitude_y: latitude coordinate for input field y
param longitude_y: longitude coordinate for input field y
param field_y: input field for visualization with contours
param label: label of shades
param ticks: ticks of shades
param level: level of contour lines
param gridtype: type of input spatial fields, it has two options
- geographical (default) the coordinate is geographical, normally applied to atmosphere reanalysis
- curvilinear the coordinate is curvilinear, normally applied to ocean reanalysis
param figname: name and output path of figure
param boundary: region for plot. It determines the boundary of plot area (lat,lon) and projection.
- northhem (default) plot the north hemisphere from 20N-90N & 180W-180E, with the projection NorthPolarStereo.
- atlantic plot the north Atlantic from 20N-90N & 90W-40E, with the projection PlateCarree
return: figures
rtype: png
"""
print ("Create a NorthPolarStereo view of input fields.")
if gridtype == 'geographical':
print ("The input fields are originally on geographical grid")
# mode variable x
# first construct iris coordinate
lat_iris_x = iris.coords.DimCoord(latitude_x, standard_name='latitude', long_name='latitude',
var_name='lat', units='degrees')
lon_iris_x = iris.coords.DimCoord(longitude_x, standard_name='longitude', long_name='longitude',
var_name='lon', units='degrees')
# assembly the cube
cube_iris_x = iris.cube.Cube(field_x, long_name='geographical field', var_name='field',
units='1', dim_coords_and_dims=[(lat_iris_x, 0), (lon_iris_x, 1)])
# mode variable y
lat_iris_y = iris.coords.DimCoord(latitude_y, standard_name='latitude', long_name='latitude',
var_name='lat', units='degrees')
lon_iris_y = iris.coords.DimCoord(longitude_y, standard_name='longitude', long_name='longitude',
var_name='lon', units='degrees')
cube_iris_y = iris.cube.Cube(field_y, long_name='geographical field', var_name='field',
units='1', dim_coords_and_dims=[(lat_iris_y, 0), (lon_iris_y, 1)])
if boundary == 'northhem':
fig = plt.figure()
ax = plt.axes(projection=ccrs.NorthPolarStereo())
#ax.set_extent([-180,180,20,90],ccrs.PlateCarree())
ax.set_extent([-180,180,50,90],ccrs.PlateCarree())
ax.set_aspect('1')
ax.coastlines()
gl = ax.gridlines(linewidth=1, color='gray', alpha=0.5, linestyle='--')
theta = np.linspace(0, 2*np.pi, 100)
center, radius = [0.5, 0.5], 0.5
verts = np.vstack([np.sin(theta), np.cos(theta)]).T
circle = mpath.Path(verts * radius + center)
ax.set_boundary(circle, transform=ax.transAxes)
cs = iplt.contourf(cube_iris_x, cmap=colormap,levels=ticks, extend='both') #, vmin=ticks[0], vmax=ticks[-1]
cbar = fig.colorbar(cs,extend='both', orientation='horizontal',
shrink =0.8, pad=0.05)#, format="%.1f")
cbar.set_label(label,size = 8)
cbar.set_ticks(ticks)
cbar.ax.tick_params(labelsize = 6)
contour = iplt.contour(cube_iris_y, colors='dimgrey', levels = level, linewidths = 0.8, format="%.2f")
plt.clabel(contour, inline=True, fontsize = 8, fmt ="%.2f")
iplt.show()
fig.savefig(figname, dpi=200)
plt.close(fig)
elif boundary == 'atlantic':
fig = plt.figure(figsize=(8,5.4))
ax = plt.axes(projection=ccrs.PlateCarree())
ax.set_extent([-90,40,20,85],ccrs.PlateCarree())
ax.set_aspect('1')
ax.coastlines()
gl = ax.gridlines(crs=ccrs.PlateCarree(), draw_labels=True,
linewidth=1, color='gray', alpha=0.5, linestyle='--')
gl.xlabels_top = False
gl.xformatter = LONGITUDE_FORMATTER
gl.yformatter = LATITUDE_FORMATTER
gl.xlabel_style = {'size': 11, 'color': 'gray'}
gl.ylabel_style = {'size': 11, 'color': 'gray'}
cs = iplt.contourf(cube_iris_x,cmap=colormap, levels=ticks, extend='both')
cbar = fig.colorbar(cs,extend='both', orientation='horizontal',
shrink =0.8, pad=0.05)#, format="%.1f")
cbar.set_label(label,size = 11)
cbar.set_ticks(ticks)
cbar.ax.tick_params(labelsize = 11)
contour = iplt.contour(cube_iris_y, colors='dimgrey', levels = level, linewidths = 0.8, format="%.2f")
plt.clabel(contour, inline=True, fontsize = 8, fmt ="%.2f")
iplt.show()
fig.savefig(figname, dpi=200)
plt.close(fig)
elif boundary == 'barents_plateCarree':
fig = plt.figure(figsize=(6,5.4))
ax = plt.axes(projection=ccrs.PlateCarree())
ax.set_extent([15,65,60,85],ccrs.PlateCarree()) # W:18 E:60 S:64 N:80
ax.set_aspect('1')
ax.coastlines()
gl = ax.gridlines(crs=ccrs.PlateCarree(), draw_labels=True,
linewidth=1, color='gray', alpha=0.5, linestyle='--')
gl.xlabels_top = False
gl.xformatter = LONGITUDE_FORMATTER
gl.yformatter = LATITUDE_FORMATTER
gl.xlabel_style = {'size': 11, 'color': 'gray'}
gl.ylabel_style = {'size': 11, 'color': 'gray'}
cs = iplt.contourf(cube_iris_x,cmap=colormap, levels=ticks, extend='both')
cbar = fig.colorbar(cs,extend='both', orientation='horizontal',
shrink =0.8, pad=0.05)#, format="%.1f")
cbar.set_label(label,size = 11)
cbar.set_ticks(ticks)
cbar.ax.tick_params(labelsize = 11)
contour = iplt.contour(cube_iris_y, colors='dimgrey', levels = level, linewidths = 0.8, format="%.2f")
plt.clabel(contour, inline=True, fontsize = 8, fmt ="%.2f")
iplt.show()
fig.savefig(figname, dpi=200)
plt.close(fig)
elif boundary == 'barents_polar':
fig = plt.figure()
ax = plt.axes(projection=ccrs.EquidistantConic(central_longitude=39.0, central_latitude=72.0))
ax.set_extent([16,60,60,82],ccrs.PlateCarree()) # W:18 E:60 S:64 N:80
ax.set_aspect('1')
ax.coastlines()
gl = ax.gridlines(linewidth=1, color='gray', alpha=0.5, linestyle='--')
cs = iplt.contourf(cube_iris_x, cmap=colormap,levels=ticks, extend='both', vmin=ticks[0], vmax=ticks[-1])
cbar = fig.colorbar(cs,extend='both', orientation='horizontal',
shrink =0.8, pad=0.05)#, format="%.1f")
cbar.set_label(label,size = 9)
cbar.set_ticks(ticks)
cbar.ax.tick_params(labelsize = 8)
contour = iplt.contour(cube_iris_y, colors='dimgrey', levels = level, linewidths = 0.8)
plt.clabel(contour, inline=True, fontsize = 8, fmt ="%.2f")
iplt.show()
fig.savefig(figname, dpi=200)
plt.close(fig)
else:
print ('This boundary is not supported by the module. Please check the documentation.')
elif gridtype == 'curvilinear':
print ("The input fields are originally on curvilinear grid")
# mode variable x
# first construct iris coordinate
lat_iris_x = iris.coords.AuxCoord(latitude_x, standard_name='latitude', long_name='latitude',
var_name='lat', units='degrees')
lon_iris_x = iris.coords.AuxCoord(longitude_x, standard_name='longitude', long_name='longitude',
var_name='lon', units='degrees')
# assembly the cube
cube_iris_x = iris.cube.Cube(field_x, long_name='curvilinear field', var_name='field',
units='1', aux_coords_and_dims=[(lat_iris_x, (0,1)), (lon_iris_x, (0,1))])
coord_sys = iris.coord_systems.GeogCS(iris.fileformats.pp.EARTH_RADIUS)
cube_iris_x.coord('latitude').coord_system = coord_sys
cube_iris_x.coord('longitude').coord_system = coord_sys
# mode variable y
lat_iris_y = iris.coords.AuxCoord(latitude_y, standard_name='latitude', long_name='latitude',
var_name='lat', units='degrees')
lon_iris_y = iris.coords.AuxCoord(longitude_y, standard_name='longitude', long_name='longitude',
var_name='lon', units='degrees')
cube_iris_y = iris.cube.Cube(field_y, long_name='curvilinear field', var_name='field',
units='1', aux_coords_and_dims=[(lat_iris_y, (0,1)), (lon_iris_y, (0,1))])
cube_iris_y.coord('latitude').coord_system = coord_sys
cube_iris_y.coord('longitude').coord_system = coord_sys
# determine nx and ny for interpolation
jj, ii = latitude_x.shape
if ii > 1000:
nx = 1440
ny = 350
else:
nx = 720
ny = 140
cube_regrid_x, extent = iris.analysis.cartography.project(cube_iris_x, ccrs.PlateCarree(), nx, ny)
cube_regrid_y, extent = iris.analysis.cartography.project(cube_iris_y, ccrs.PlateCarree(), nx, ny)
# make plots
if boundary == 'northhem':
fig = plt.figure()
ax = plt.axes(projection=ccrs.NorthPolarStereo())
ax.set_extent([-180,180,20,90],ccrs.PlateCarree())
ax.set_aspect('1')
ax.coastlines()
gl = ax.gridlines(linewidth=1, color='gray', alpha=0.5, linestyle='--')
#gl.ylocator = mticker.FixedLocator([50,60,70,80,90])
theta = np.linspace(0, 2*np.pi, 100)
center, radius = [0.5, 0.5], 0.5
verts = np.vstack([np.sin(theta), np.cos(theta)]).T
circle = mpath.Path(verts * radius + center)
ax.set_boundary(circle, transform=ax.transAxes)
cs = iplt.contourf(cube_regrid_x, cmap=colormap, vmin=ticks[0], vmax=ticks[-1]) #pcolormesh
cbar = fig.colorbar(cs,extend='both', orientation='horizontal',
shrink =0.8, pad=0.05)#, format="%.1f")
cbar.set_label(label,size = 8)
cbar.set_ticks(ticks)
cbar.ax.tick_params(labelsize = 6)
contour = iplt.contour(cube_regrid_y, colors='dimgrey', levels = level, linewidths = 0.8)
plt.clabel(contour, inline=True, fontsize = 8, fmt ="%.2f")
iplt.show()
fig.savefig(figname, dpi=200)
plt.close(fig)
elif boundary == 'atlantic':
fig = plt.figure(figsize=(8,5.4))
ax = plt.axes(projection=ccrs.PlateCarree())
ax.set_extent([-90,40,20,85],ccrs.PlateCarree())
ax.set_aspect('1')
ax.coastlines()
gl = ax.gridlines(crs=ccrs.PlateCarree(), draw_labels=True,
linewidth=1, color='gray', alpha=0.5, linestyle='--')
#gl.ylocator = mticker.FixedLocator([50,60,70,80,90])
gl.xlabels_top = False
gl.xformatter = LONGITUDE_FORMATTER
gl.yformatter = LATITUDE_FORMATTER
gl.xlabel_style = {'size': 11, 'color': 'gray'}
gl.ylabel_style = {'size': 11, 'color': 'gray'}
cs = iplt.contourf(cube_regrid_x, cmap=colormap, vmin=ticks[0], vmax=ticks[-1])
cbar = fig.colorbar(cs,extend='both', orientation='horizontal',
shrink =0.8, pad=0.05, format="%.1f")
cbar.set_label(label,size = 8)
cbar.set_ticks(ticks)
cbar.ax.tick_params(labelsize = 6)
contour = iplt.contour(cube_regrid_y, colors='dimgrey', levels = level, linewidths = 0.8)
plt.clabel(contour, inline=True, fontsize = 8, fmt ="%.2f")
iplt.show()
fig.savefig(figname, dpi=200)
plt.close(fig)
else:
print ('This boundary is not supported by the module. Please check the documentation.')
else:
raise IOError("This module only supports fields on geographical or curvilinear grids!")
@staticmethod
def location(latitude, longitude, figname='./NorthPolar.png', boundary='northhem'):
"""
Blank map for position pin-point.
param lat: latitude coordinate for plot
param lon: longitude coordinate for plot
param figname: name and output path of figure
param boundary: region for plot. It determines the boundary of plot area (lat,lon) and projection.
- northhem (default) plot the north hemisphere from 20N-90N & 180W-180E, with the projection NorthPolarStereo.
- atlantic plot the north Atlantic from 20N-90N & 90W-40E, with the projection PlateCarree
return: figures
rtype: png
"""
if boundary == 'northhem':
fig = plt.figure()
ax = plt.axes(projection=ccrs.NorthPolarStereo())
#ax.set_extent([-180,180,20,90],ccrs.PlateCarree())
ax.set_extent([-180,180,50,90],ccrs.PlateCarree())
ax.set_aspect('1')
ax.coastlines()
gl = ax.gridlines(linewidth=1, color='gray', alpha=0.5, linestyle='--')
theta = np.linspace(0, 2*np.pi, 100)
center, radius = [0.5, 0.5], 0.5
verts = np.vstack([np.sin(theta), np.cos(theta)]).T
circle = mpath.Path(verts * radius + center)
ax.set_boundary(circle, transform=ax.transAxes)
ax.scatter(longitude, latitude, transform=ccrs.Geodetic(),
s=20.0, c='green',alpha=0.9)
plt.show()
fig.savefig(figname, dpi=200)
plt.close(fig)
elif boundary == 'atlantic':
fig = plt.figure(figsize=(8,5.4))
ax = plt.axes(projection=ccrs.PlateCarree())
ax.set_extent([-90,40,20,85],ccrs.PlateCarree())
ax.set_aspect('1')
ax.coastlines()
gl = ax.gridlines(crs=ccrs.PlateCarree(), draw_labels=True,
linewidth=1, color='gray', alpha=0.5, linestyle='--')
gl.xlabels_top = False
gl.xformatter = LONGITUDE_FORMATTER
gl.yformatter = LATITUDE_FORMATTER
gl.xlabel_style = {'size': 11, 'color': 'gray'}
gl.ylabel_style = {'size': 11, 'color': 'gray'}
ax.scatter(longitude, latitude, transform=ccrs.Geodetic(),
s=20.0, c='green',alpha=0.9)
plt.show()
fig.savefig(figname, dpi=200)
plt.close(fig)
elif boundary == 'barents_plateCarree':
fig = plt.figure(figsize=(6,5.4))
ax = plt.axes(projection=ccrs.PlateCarree())
ax.set_extent([15,65,60,85],ccrs.PlateCarree()) # W:18 E:60 S:64 N:80
ax.set_aspect('1')
ax.coastlines()
gl = ax.gridlines(crs=ccrs.PlateCarree(), draw_labels=True,
linewidth=1, color='gray', alpha=0.5, linestyle='--')
gl.xlabels_top = False
gl.xformatter = LONGITUDE_FORMATTER
gl.yformatter = LATITUDE_FORMATTER
gl.xlabel_style = {'size': 11, 'color': 'gray'}
gl.ylabel_style = {'size': 11, 'color': 'gray'}
ax.scatter(longitude, latitude, transform=ccrs.Geodetic(),
s=20.0, c='green',alpha=0.9)
plt.show()
fig.savefig(figname, dpi=200)
plt.close(fig)
elif boundary == 'barents_polar':
fig = plt.figure()
ax = plt.axes(projection=ccrs.EquidistantConic(central_longitude=39.0, central_latitude=72.0))
ax.set_extent([16,60,60,82],ccrs.PlateCarree()) # W:18 E:60 S:64 N:80
ax.set_aspect('1')
ax.coastlines()
gl = ax.gridlines(linewidth=1, color='gray', alpha=0.5, linestyle='--')
ax.scatter(longitude, latitude, transform=ccrs.Geodetic(),
s=20.0, c='green',alpha=1.0)
plt.show()
fig.savefig(figname, dpi=200)
plt.close(fig)
else:
print ('This boundary is not supported by the module. Please check the documentation.')
def qqplot(x, y, quantiles=None, interpolation='nearest', ax=None, rug=False,
rug_length=0.05, rug_kwargs=None, **kwargs):
"""Draw a quantile-quantile plot for `x` versus `y`.
Parameters
----------
x, y : array-like
One-dimensional numeric arrays.
ax : matplotlib.axes.Axes, optional
Axes on which to plot. If not provided, the current axes will be used.
quantiles : int or array-like, optional
Quantiles to include in the plot. This can be an array of quantiles, in
which case only the specified quantiles of `x` and `y` will be plotted.
If this is an int `n`, then the quantiles will be `n` evenly spaced
points between 0 and 1. If this is None, then `min(len(x), len(y))`
evenly spaced quantiles between 0 and 1 will be computed.
interpolation : {'linear', 'lower', 'higher', 'midpoint', 'nearest'}
Specify the interpolation method used to find quantiles when `quantiles`
is an int or None. See the documentation for numpy.quantile().
rug : bool, optional
If True, draw a rug plot representing both samples on the horizontal and
vertical axes. If False, no rug plot is drawn.
rug_length : float in [0, 1], optional
Specifies the length of the rug plot lines as a fraction of the total
vertical or horizontal length.
rug_kwargs : dict of keyword arguments
Keyword arguments to pass to matplotlib.axes.Axes.axvline() and
matplotlib.axes.Axes.axhline() when drawing rug plots.
kwargs : dict of keyword arguments
Keyword arguments to pass to matplotlib.axes.Axes.scatter() when drawing
the q-q plot.
"""
# Get current axes if none are provided
if ax is None:
ax = plt.gca()
if quantiles is None:
quantiles = min(len(x), len(y))
# Compute quantiles of the two samples
if isinstance(quantiles, numbers.Integral):
quantiles = np.linspace(start=0, stop=1, num=int(quantiles))
else:
quantiles = np.atleast_1d(np.sort(quantiles))
x_quantiles = np.quantile(x, quantiles, interpolation=interpolation)
y_quantiles = np.quantile(y, quantiles, interpolation=interpolation)
# Draw the rug plots if requested
if rug:
# Default rug plot settings
rug_x_params = dict(ymin=0, ymax=rug_length, c='gray', alpha=0.5)
rug_y_params = dict(xmin=0, xmax=rug_length, c='gray', alpha=0.5)
# Override default setting by any user-specified settings
if rug_kwargs is not None:
rug_x_params.update(rug_kwargs)
rug_y_params.update(rug_kwargs)
# Draw the rug plots
for point in x:
ax.axvline(point, **rug_x_params)
for point in y:
ax.axhline(point, **rug_y_params)
# Draw the q-q plot
ax.scatter(x_quantiles, y_quantiles, **kwargs)
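The core of a q-q plot is simply pairing the empirical quantiles of the two samples; the sketch below shows just that pairing step using only the standard library (no matplotlib), with a hypothetical helper name `qq_pairs` introduced for illustration.

```python
# Minimal sketch of the quantile-pairing step behind a q-q plot, using only
# the standard library. For each probability p we pair the p-th quantile of
# x with the p-th quantile of y; scattering these pairs is the q-q plot.
import statistics

def qq_pairs(x, y, n=9):
    # statistics.quantiles with n+1 cut points yields n interior quantiles.
    qx = statistics.quantiles(sorted(x), n=n + 1, method="inclusive")
    qy = statistics.quantiles(sorted(y), n=n + 1, method="inclusive")
    return list(zip(qx, qy))

pairs = qq_pairs(range(10), range(0, 20, 2))
# Identical shapes up to scale: y-quantiles are twice the x-quantiles here.
assert all(abs(b - 2 * a) < 1e-9 for a, b in pairs)
```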
def celsius_vers_fahrenheit(température_en_celsius):
return 9*température_en_celsius/5+32
def fahrenheit_vers_celsius(température_en_fahrenheit):
return (température_en_fahrenheit-32)*5/9
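As a quick sanity check, the two converters above should agree at the standard reference points and invert each other; the snippet restates the formulas inline so it runs standalone.

```python
# Sanity check for the converters above, restated here so the snippet is
# self-contained. 0 °C is 32 °F, 100 °C is 212 °F, and the two functions
# should be inverses of each other (up to floating-point precision).
def celsius_vers_fahrenheit(température_en_celsius):
    return 9 * température_en_celsius / 5 + 32

def fahrenheit_vers_celsius(température_en_fahrenheit):
    return (température_en_fahrenheit - 32) * 5 / 9

assert celsius_vers_fahrenheit(0) == 32
assert celsius_vers_fahrenheit(100) == 212
assert fahrenheit_vers_celsius(212) == 100
assert abs(fahrenheit_vers_celsius(celsius_vers_fahrenheit(37.5)) - 37.5) < 1e-9
```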
from hypothesis import given
from converter.transformers import run
from tests.transformer.strategies import integers
@given(a=integers(), b=integers(), c=integers())
def test_multiplication_and_addition_order(a, b, c):
expected = a + (b * c)
assert run({}, f"{a} + {b} * {c}") == expected
assert run({}, f"{c} * {b} + {a}") == expected
@given(a=integers(), b=integers(), c=integers())
def test_multiplication_and_addition_order_with_brackets(a, b, c):
expected = (a + b) * c
assert run({}, f"({a} + {b}) * {c}") == expected
assert run({}, f"{c} * ({b} + {a})") == expected
@given(a=integers(), b=integers(), c=integers())
def test_multiplication_and_subtraction_order(a, b, c):
expected = a - (b * c)
assert run({}, f"{a} - {b} * {c}") == expected
@given(a=integers(), b=integers(), c=integers())
def test_multiplication_and_subtraction_order_with_brackets(a, b, c):
expected = (a - b) * c
assert run({}, f"({a} - {b}) * {c}") == expected
assert run({}, f"{c} * ({a} - {b})") == expected
@given(a=integers(), b=integers(), c=integers().filter(lambda v: v != 0))
def test_division_and_addition_order(a, b, c):
expected = a + (b / c)
assert run({}, f"{a} + {b} / {c}") == expected
assert run({}, f"{b} / {c} + {a}") == expected
@given(a=integers(), b=integers(), c=integers().filter(lambda v: v != 0))
def test_division_and_addition_order_with_brackets(a, b, c):
expected = (a + b) / c
assert run({}, f"({a} + {b}) / {c}") == expected
@given(a=integers(), b=integers(), c=integers().filter(lambda v: v != 0))
def test_division_and_subtraction_order(a, b, c):
expected = a - (b / c)
assert run({}, f"{a} - {b} / {c}") == expected
@given(a=integers(), b=integers(), c=integers().filter(lambda v: v != 0))
def test_division_and_subtraction_order_with_brackets(a, b, c):
expected = (a - b) / c
assert run({}, f"({a} - {b}) / {c}") == expected
null = 1
demoData = [
{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
},{
"_source": {
"url": [
"http://www.123.xom"
],
"level": "hello",
"timestamp": "2019-03-15T07:22:17Z"
},
"_id": "UOc6gGkBq6iVdv4A52yV"
}]
03c1a34f1bdd6fbc726dfd7e692d9e95c5f126b9 | 43472 | py | Python | sdk/python/pulumi_splunk/outputs_tcp_group.py | pulumi/pulumi-splunk | a593a4b65e7de94d61b93676231606820193f212 | ["ECL-2.0", "Apache-2.0"]
# coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from . import _utilities
from . import outputs
from ._inputs import *
__all__ = ['OutputsTcpGroupArgs', 'OutputsTcpGroup']
@pulumi.input_type
class OutputsTcpGroupArgs:
def __init__(__self__, *,
servers: pulumi.Input[Sequence[pulumi.Input[str]]],
acl: Optional[pulumi.Input['OutputsTcpGroupAclArgs']] = None,
compressed: Optional[pulumi.Input[bool]] = None,
disabled: Optional[pulumi.Input[bool]] = None,
drop_events_on_queue_full: Optional[pulumi.Input[int]] = None,
heartbeat_frequency: Optional[pulumi.Input[int]] = None,
max_queue_size: Optional[pulumi.Input[str]] = None,
method: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
send_cooked_data: Optional[pulumi.Input[bool]] = None,
token: Optional[pulumi.Input[str]] = None):
"""
The set of arguments for constructing a OutputsTcpGroup resource.
:param pulumi.Input[Sequence[pulumi.Input[str]]] servers: Comma-separated list of servers to include in the group.
:param pulumi.Input['OutputsTcpGroupAclArgs'] acl: The app/user context that is the namespace for the resource
:param pulumi.Input[bool] compressed: If true, forwarder sends compressed data. If set to true, the receiver port must also have compression turned on.
:param pulumi.Input[bool] disabled: If true, disables the group.
:param pulumi.Input[int] drop_events_on_queue_full: If set to a positive number, wait the specified number of seconds before throwing out all new events until the output queue has space. Defaults to -1 (do not drop events).
<br>CAUTION: Do not set this value to a positive integer if you are monitoring files.
Setting this to -1 or 0 causes the output queue to block when it gets full, which causes further blocking up the processing chain. If any target group queue is blocked, no more data reaches any other target group.
Using auto load-balancing is the best way to minimize this condition, because, in that case, multiple receivers must be down (or jammed up) before queue blocking can occur.
:param pulumi.Input[int] heartbeat_frequency: How often (in seconds) to send a heartbeat packet to the receiving server.
Heartbeats are only sent if sendCookedData=true. Defaults to 30 seconds.
:param pulumi.Input[str] max_queue_size: Specify an integer or integer[KB|MB|GB].
<br>Sets the maximum size of the forwarder output queue. It also sets the maximum size of the wait queue to 3x this value, if you have enabled indexer acknowledgment (useACK=true).
Although the wait queue and the output queues are both configured by this attribute, they are separate queues. The setting determines the maximum size of the queue in-memory (RAM) buffer.
For heavy forwarders sending parsed data, maxQueueSize is the maximum number of events. Since events are typically much shorter than data blocks, the memory consumed by the queue on a parsing forwarder is likely to be much smaller than on a non-parsing forwarder, if you use this version of the setting.
If specified as a lone integer (for example, maxQueueSize=100), maxQueueSize indicates the maximum number of queued events (for parsed data) or blocks of data (for unparsed data). A block of data is approximately 64KB. For non-parsing forwarders, such as universal forwarders, that send unparsed data, maxQueueSize is the maximum number of data blocks.
If specified as an integer followed by KB, MB, or GB (for example, maxQueueSize=100MB), maxQueueSize indicates the maximum RAM allocated to the queue buffer. Defaults to 500KB (which means a maximum size of 500KB for the output queue and 1500KB for the wait queue, if any).
:param pulumi.Input[str] method: Valid values: (tcpout | syslog). Specifies the type of output processor.
:param pulumi.Input[str] name: The name of the group of receivers.
:param pulumi.Input[bool] send_cooked_data: If true, events are cooked (processed by Splunk software). If false, events are raw and untouched prior to sending. Defaults to true.
Set to false if you are sending to a third-party system.
:param pulumi.Input[str] token: Token value generated by the indexer after configuration.
"""
pulumi.set(__self__, "servers", servers)
if acl is not None:
pulumi.set(__self__, "acl", acl)
if compressed is not None:
pulumi.set(__self__, "compressed", compressed)
if disabled is not None:
pulumi.set(__self__, "disabled", disabled)
if drop_events_on_queue_full is not None:
pulumi.set(__self__, "drop_events_on_queue_full", drop_events_on_queue_full)
if heartbeat_frequency is not None:
pulumi.set(__self__, "heartbeat_frequency", heartbeat_frequency)
if max_queue_size is not None:
pulumi.set(__self__, "max_queue_size", max_queue_size)
if method is not None:
pulumi.set(__self__, "method", method)
if name is not None:
pulumi.set(__self__, "name", name)
if send_cooked_data is not None:
pulumi.set(__self__, "send_cooked_data", send_cooked_data)
if token is not None:
pulumi.set(__self__, "token", token)
@property
@pulumi.getter
def servers(self) -> pulumi.Input[Sequence[pulumi.Input[str]]]:
"""
Comma-separated list of servers to include in the group.
"""
return pulumi.get(self, "servers")
@servers.setter
def servers(self, value: pulumi.Input[Sequence[pulumi.Input[str]]]):
pulumi.set(self, "servers", value)
@property
@pulumi.getter
def acl(self) -> Optional[pulumi.Input['OutputsTcpGroupAclArgs']]:
"""
The app/user context that is the namespace for the resource
"""
return pulumi.get(self, "acl")
@acl.setter
def acl(self, value: Optional[pulumi.Input['OutputsTcpGroupAclArgs']]):
pulumi.set(self, "acl", value)
@property
@pulumi.getter
def compressed(self) -> Optional[pulumi.Input[bool]]:
"""
If true, forwarder sends compressed data. If set to true, the receiver port must also have compression turned on.
"""
return pulumi.get(self, "compressed")
@compressed.setter
def compressed(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "compressed", value)
@property
@pulumi.getter
def disabled(self) -> Optional[pulumi.Input[bool]]:
"""
If true, disables the group.
"""
return pulumi.get(self, "disabled")
@disabled.setter
def disabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "disabled", value)
@property
@pulumi.getter(name="dropEventsOnQueueFull")
def drop_events_on_queue_full(self) -> Optional[pulumi.Input[int]]:
"""
If set to a positive number, wait the specified number of seconds before throwing out all new events until the output queue has space. Defaults to -1 (do not drop events).
<br>CAUTION: Do not set this value to a positive integer if you are monitoring files.
Setting this to -1 or 0 causes the output queue to block when it gets full, which causes further blocking up the processing chain. If any target group queue is blocked, no more data reaches any other target group.
Using auto load-balancing is the best way to minimize this condition, because, in that case, multiple receivers must be down (or jammed up) before queue blocking can occur.
"""
return pulumi.get(self, "drop_events_on_queue_full")
@drop_events_on_queue_full.setter
def drop_events_on_queue_full(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "drop_events_on_queue_full", value)
@property
@pulumi.getter(name="heartbeatFrequency")
def heartbeat_frequency(self) -> Optional[pulumi.Input[int]]:
"""
How often (in seconds) to send a heartbeat packet to the receiving server.
Heartbeats are only sent if sendCookedData=true. Defaults to 30 seconds.
"""
return pulumi.get(self, "heartbeat_frequency")
@heartbeat_frequency.setter
def heartbeat_frequency(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "heartbeat_frequency", value)
@property
@pulumi.getter(name="maxQueueSize")
def max_queue_size(self) -> Optional[pulumi.Input[str]]:
"""
Specify an integer or integer[KB|MB|GB].
<br>Sets the maximum size of the forwarder output queue. It also sets the maximum size of the wait queue to 3x this value, if you have enabled indexer acknowledgment (useACK=true).
Although the wait queue and the output queues are both configured by this attribute, they are separate queues. The setting determines the maximum size of the queue in-memory (RAM) buffer.
For heavy forwarders sending parsed data, maxQueueSize is the maximum number of events. Since events are typically much shorter than data blocks, the memory consumed by the queue on a parsing forwarder is likely to be much smaller than on a non-parsing forwarder, if you use this version of the setting.
If specified as a lone integer (for example, maxQueueSize=100), maxQueueSize indicates the maximum number of queued events (for parsed data) or blocks of data (for unparsed data). A block of data is approximately 64KB. For non-parsing forwarders, such as universal forwarders, that send unparsed data, maxQueueSize is the maximum number of data blocks.
If specified as an integer followed by KB, MB, or GB (for example, maxQueueSize=100MB), maxQueueSize indicates the maximum RAM allocated to the queue buffer. Defaults to 500KB (which means a maximum size of 500KB for the output queue and 1500KB for the wait queue, if any).
"""
return pulumi.get(self, "max_queue_size")
@max_queue_size.setter
def max_queue_size(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "max_queue_size", value)
@property
@pulumi.getter
def method(self) -> Optional[pulumi.Input[str]]:
"""
Valid values: (tcpout | syslog). Specifies the type of output processor.
"""
return pulumi.get(self, "method")
@method.setter
def method(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "method", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
The name of the group of receivers.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter(name="sendCookedData")
def send_cooked_data(self) -> Optional[pulumi.Input[bool]]:
"""
If true, events are cooked (processed by Splunk software). If false, events are raw and untouched prior to sending. Defaults to true.
Set to false if you are sending to a third-party system.
"""
return pulumi.get(self, "send_cooked_data")
@send_cooked_data.setter
def send_cooked_data(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "send_cooked_data", value)
@property
@pulumi.getter
def token(self) -> Optional[pulumi.Input[str]]:
"""
Token value generated by the indexer after configuration.
"""
return pulumi.get(self, "token")
@token.setter
def token(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "token", value)
@pulumi.input_type
class _OutputsTcpGroupState:
def __init__(__self__, *,
acl: Optional[pulumi.Input['OutputsTcpGroupAclArgs']] = None,
compressed: Optional[pulumi.Input[bool]] = None,
disabled: Optional[pulumi.Input[bool]] = None,
drop_events_on_queue_full: Optional[pulumi.Input[int]] = None,
heartbeat_frequency: Optional[pulumi.Input[int]] = None,
max_queue_size: Optional[pulumi.Input[str]] = None,
method: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
send_cooked_data: Optional[pulumi.Input[bool]] = None,
servers: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
token: Optional[pulumi.Input[str]] = None):
"""
Input properties used for looking up and filtering OutputsTcpGroup resources.
:param pulumi.Input['OutputsTcpGroupAclArgs'] acl: The app/user context that is the namespace for the resource
:param pulumi.Input[bool] compressed: If true, forwarder sends compressed data. If set to true, the receiver port must also have compression turned on.
:param pulumi.Input[bool] disabled: If true, disables the group.
:param pulumi.Input[int] drop_events_on_queue_full: If set to a positive number, wait the specified number of seconds before throwing out all new events until the output queue has space. Defaults to -1 (do not drop events).
<br>CAUTION: Do not set this value to a positive integer if you are monitoring files.
Setting this to -1 or 0 causes the output queue to block when it gets full, which causes further blocking up the processing chain. If any target group queue is blocked, no more data reaches any other target group.
Using auto load-balancing is the best way to minimize this condition, because, in that case, multiple receivers must be down (or jammed up) before queue blocking can occur.
:param pulumi.Input[int] heartbeat_frequency: How often (in seconds) to send a heartbeat packet to the receiving server.
Heartbeats are only sent if sendCookedData=true. Defaults to 30 seconds.
:param pulumi.Input[str] max_queue_size: Specify an integer or integer[KB|MB|GB].
<br>Sets the maximum size of the forwarder output queue. It also sets the maximum size of the wait queue to 3x this value, if you have enabled indexer acknowledgment (useACK=true).
Although the wait queue and the output queues are both configured by this attribute, they are separate queues. The setting determines the maximum size of the queue in-memory (RAM) buffer.
For heavy forwarders sending parsed data, maxQueueSize is the maximum number of events. Since events are typically much shorter than data blocks, the memory consumed by the queue on a parsing forwarder is likely to be much smaller than on a non-parsing forwarder, if you use this version of the setting.
If specified as a lone integer (for example, maxQueueSize=100), maxQueueSize indicates the maximum number of queued events (for parsed data) or blocks of data (for unparsed data). A block of data is approximately 64KB. For non-parsing forwarders, such as universal forwarders, that send unparsed data, maxQueueSize is the maximum number of data blocks.
If specified as an integer followed by KB, MB, or GB (for example, maxQueueSize=100MB), maxQueueSize indicates the maximum RAM allocated to the queue buffer. Defaults to 500KB (which means a maximum size of 500KB for the output queue and 1500KB for the wait queue, if any).
:param pulumi.Input[str] method: Valid values: (tcpout | syslog). Specifies the type of output processor.
:param pulumi.Input[str] name: The name of the group of receivers.
:param pulumi.Input[bool] send_cooked_data: If true, events are cooked (processed by Splunk software). If false, events are raw and untouched prior to sending. Defaults to true.
Set to false if you are sending to a third-party system.
:param pulumi.Input[Sequence[pulumi.Input[str]]] servers: Comma-separated list of servers to include in the group.
:param pulumi.Input[str] token: Token value generated by the indexer after configuration.
"""
if acl is not None:
pulumi.set(__self__, "acl", acl)
if compressed is not None:
pulumi.set(__self__, "compressed", compressed)
if disabled is not None:
pulumi.set(__self__, "disabled", disabled)
if drop_events_on_queue_full is not None:
pulumi.set(__self__, "drop_events_on_queue_full", drop_events_on_queue_full)
if heartbeat_frequency is not None:
pulumi.set(__self__, "heartbeat_frequency", heartbeat_frequency)
if max_queue_size is not None:
pulumi.set(__self__, "max_queue_size", max_queue_size)
if method is not None:
pulumi.set(__self__, "method", method)
if name is not None:
pulumi.set(__self__, "name", name)
if send_cooked_data is not None:
pulumi.set(__self__, "send_cooked_data", send_cooked_data)
if servers is not None:
pulumi.set(__self__, "servers", servers)
if token is not None:
pulumi.set(__self__, "token", token)
@property
@pulumi.getter
def acl(self) -> Optional[pulumi.Input['OutputsTcpGroupAclArgs']]:
"""
The app/user context that is the namespace for the resource
"""
return pulumi.get(self, "acl")
@acl.setter
def acl(self, value: Optional[pulumi.Input['OutputsTcpGroupAclArgs']]):
pulumi.set(self, "acl", value)
@property
@pulumi.getter
def compressed(self) -> Optional[pulumi.Input[bool]]:
"""
If true, forwarder sends compressed data. If set to true, the receiver port must also have compression turned on.
"""
return pulumi.get(self, "compressed")
@compressed.setter
def compressed(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "compressed", value)
@property
@pulumi.getter
def disabled(self) -> Optional[pulumi.Input[bool]]:
"""
If true, disables the group.
"""
return pulumi.get(self, "disabled")
@disabled.setter
def disabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "disabled", value)
@property
@pulumi.getter(name="dropEventsOnQueueFull")
def drop_events_on_queue_full(self) -> Optional[pulumi.Input[int]]:
"""
If set to a positive number, wait the specified number of seconds before throwing out all new events until the output queue has space. Defaults to -1 (do not drop events).
<br>CAUTION: Do not set this value to a positive integer if you are monitoring files.
Setting this to -1 or 0 causes the output queue to block when it gets full, which causes further blocking up the processing chain. If any target group queue is blocked, no more data reaches any other target group.
Using auto load-balancing is the best way to minimize this condition, because, in that case, multiple receivers must be down (or jammed up) before queue blocking can occur.
"""
return pulumi.get(self, "drop_events_on_queue_full")
@drop_events_on_queue_full.setter
def drop_events_on_queue_full(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "drop_events_on_queue_full", value)
@property
@pulumi.getter(name="heartbeatFrequency")
def heartbeat_frequency(self) -> Optional[pulumi.Input[int]]:
"""
How often (in seconds) to send a heartbeat packet to the receiving server.
Heartbeats are only sent if sendCookedData=true. Defaults to 30 seconds.
"""
return pulumi.get(self, "heartbeat_frequency")
@heartbeat_frequency.setter
def heartbeat_frequency(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "heartbeat_frequency", value)
@property
@pulumi.getter(name="maxQueueSize")
def max_queue_size(self) -> Optional[pulumi.Input[str]]:
"""
Specify an integer or integer[KB|MB|GB].
<br>Sets the maximum size of the forwarder output queue. It also sets the maximum size of the wait queue to 3x this value, if you have enabled indexer acknowledgment (useACK=true).
Although the wait queue and the output queues are both configured by this attribute, they are separate queues. The setting determines the maximum size of the queue in-memory (RAM) buffer.
For heavy forwarders sending parsed data, maxQueueSize is the maximum number of events. Since events are typically much shorter than data blocks, the memory consumed by the queue on a parsing forwarder is likely to be much smaller than on a non-parsing forwarder, if you use this version of the setting.
If specified as a lone integer (for example, maxQueueSize=100), maxQueueSize indicates the maximum number of queued events (for parsed data) or blocks of data (for unparsed data). A block of data is approximately 64KB. For non-parsing forwarders, such as universal forwarders, that send unparsed data, maxQueueSize is the maximum number of data blocks.
If specified as an integer followed by KB, MB, or GB (for example, maxQueueSize=100MB), maxQueueSize indicates the maximum RAM allocated to the queue buffer. Defaults to 500KB (which means a maximum size of 500KB for the output queue and 1500KB for the wait queue, if any).
"""
return pulumi.get(self, "max_queue_size")
@max_queue_size.setter
def max_queue_size(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "max_queue_size", value)
@property
@pulumi.getter
def method(self) -> Optional[pulumi.Input[str]]:
"""
Valid values: (tcpout | syslog). Specifies the type of output processor.
"""
return pulumi.get(self, "method")
@method.setter
def method(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "method", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
The name of the group of receivers.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter(name="sendCookedData")
def send_cooked_data(self) -> Optional[pulumi.Input[bool]]:
"""
If true, events are cooked (processed by Splunk software). If false, events are raw and untouched prior to sending. Defaults to true.
Set to false if you are sending to a third-party system.
"""
return pulumi.get(self, "send_cooked_data")
@send_cooked_data.setter
def send_cooked_data(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "send_cooked_data", value)
@property
@pulumi.getter
def servers(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
Comma-separated list of servers to include in the group.
"""
return pulumi.get(self, "servers")
@servers.setter
def servers(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "servers", value)
@property
@pulumi.getter
def token(self) -> Optional[pulumi.Input[str]]:
"""
Token value generated by the indexer after configuration.
"""
return pulumi.get(self, "token")
@token.setter
def token(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "token", value)
class OutputsTcpGroup(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
acl: Optional[pulumi.Input[pulumi.InputType['OutputsTcpGroupAclArgs']]] = None,
compressed: Optional[pulumi.Input[bool]] = None,
disabled: Optional[pulumi.Input[bool]] = None,
drop_events_on_queue_full: Optional[pulumi.Input[int]] = None,
heartbeat_frequency: Optional[pulumi.Input[int]] = None,
max_queue_size: Optional[pulumi.Input[str]] = None,
method: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
send_cooked_data: Optional[pulumi.Input[bool]] = None,
servers: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
token: Optional[pulumi.Input[str]] = None,
__props__=None):
"""
## Resource: OutputsTcpGroup
Access to the configuration of a group of one or more data forwarding destinations.
## Example Usage
```python
import pulumi
import pulumi_splunk as splunk
tcp_group = splunk.OutputsTcpGroup("tcpGroup",
disabled=False,
drop_events_on_queue_full=60,
max_queue_size="100KB",
send_cooked_data=True,
servers=[
"1.1.1.1:1234",
"2.2.2.2:1234",
])
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[pulumi.InputType['OutputsTcpGroupAclArgs']] acl: The app/user context that is the namespace for the resource
:param pulumi.Input[bool] compressed: If true, the forwarder sends compressed data. If set to true, the receiver port must also have compression turned on.
:param pulumi.Input[bool] disabled: If true, disables the group.
:param pulumi.Input[int] drop_events_on_queue_full: If set to a positive number, wait the specified number of seconds before throwing out all new events until the output queue has space. Defaults to -1 (do not drop events).
<br>CAUTION: Do not set this value to a positive integer if you are monitoring files.
Setting this to -1 or 0 causes the output queue to block when it gets full, which causes further blocking up the processing chain. If any target group queue is blocked, no more data reaches any other target group.
Using auto load-balancing is the best way to minimize this condition, because, in that case, multiple receivers must be down (or jammed up) before queue blocking can occur.
:param pulumi.Input[int] heartbeat_frequency: How often (in seconds) to send a heartbeat packet to the receiving server.
Heartbeats are only sent if sendCookedData=true. Defaults to 30 seconds.
:param pulumi.Input[str] max_queue_size: Specify an integer or integer[KB|MB|GB].
<br>Sets the maximum size of the forwarder output queue. It also sets the maximum size of the wait queue to 3x this value, if you have enabled indexer acknowledgment (useACK=true).
Although the wait queue and the output queues are both configured by this attribute, they are separate queues. The setting determines the maximum size of the queue in-memory (RAM) buffer.
For heavy forwarders sending parsed data, maxQueueSize is the maximum number of events. Since events are typically much shorter than data blocks, the memory consumed by the queue on a parsing forwarder is likely to be much smaller than on a non-parsing forwarder, if you use this version of the setting.
If specified as a lone integer (for example, maxQueueSize=100), maxQueueSize indicates the maximum number of queued events (for parsed data) or blocks of data (for unparsed data). A block of data is approximately 64KB. For non-parsing forwarders, such as universal forwarders, that send unparsed data, maxQueueSize is the maximum number of data blocks.
If specified as an integer followed by KB, MB, or GB (for example, maxQueueSize=100MB), maxQueueSize indicates the maximum RAM allocated to the queue buffer. Defaults to 500KB (which means a maximum size of 500KB for the output queue and 1500KB for the wait queue, if any).
:param pulumi.Input[str] method: Valid values: (tcpout | syslog). Specifies the type of output processor.
:param pulumi.Input[str] name: The name of the group of receivers.
:param pulumi.Input[bool] send_cooked_data: If true, events are cooked (processed by Splunk software). If false, events are raw and untouched prior to sending. Defaults to true.
Set to false if you are sending to a third-party system.
:param pulumi.Input[Sequence[pulumi.Input[str]]] servers: List of servers (host:port) to include in the group.
:param pulumi.Input[str] token: Token value generated by the indexer after configuration.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: OutputsTcpGroupArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
## Resource: OutputsTcpGroup
Access to the configuration of a group of one or more data forwarding destinations.
## Example Usage
```python
import pulumi
import pulumi_splunk as splunk
tcp_group = splunk.OutputsTcpGroup("tcpGroup",
disabled=False,
drop_events_on_queue_full=60,
max_queue_size="100KB",
send_cooked_data=True,
servers=[
"1.1.1.1:1234",
"2.2.2.2:1234",
])
```
:param str resource_name: The name of the resource.
:param OutputsTcpGroupArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(OutputsTcpGroupArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
acl: Optional[pulumi.Input[pulumi.InputType['OutputsTcpGroupAclArgs']]] = None,
compressed: Optional[pulumi.Input[bool]] = None,
disabled: Optional[pulumi.Input[bool]] = None,
drop_events_on_queue_full: Optional[pulumi.Input[int]] = None,
heartbeat_frequency: Optional[pulumi.Input[int]] = None,
max_queue_size: Optional[pulumi.Input[str]] = None,
method: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
send_cooked_data: Optional[pulumi.Input[bool]] = None,
servers: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
token: Optional[pulumi.Input[str]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = OutputsTcpGroupArgs.__new__(OutputsTcpGroupArgs)
__props__.__dict__["acl"] = acl
__props__.__dict__["compressed"] = compressed
__props__.__dict__["disabled"] = disabled
__props__.__dict__["drop_events_on_queue_full"] = drop_events_on_queue_full
__props__.__dict__["heartbeat_frequency"] = heartbeat_frequency
__props__.__dict__["max_queue_size"] = max_queue_size
__props__.__dict__["method"] = method
__props__.__dict__["name"] = name
__props__.__dict__["send_cooked_data"] = send_cooked_data
if servers is None and not opts.urn:
raise TypeError("Missing required property 'servers'")
__props__.__dict__["servers"] = servers
__props__.__dict__["token"] = token
super(OutputsTcpGroup, __self__).__init__(
'splunk:index/outputsTcpGroup:OutputsTcpGroup',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
acl: Optional[pulumi.Input[pulumi.InputType['OutputsTcpGroupAclArgs']]] = None,
compressed: Optional[pulumi.Input[bool]] = None,
disabled: Optional[pulumi.Input[bool]] = None,
drop_events_on_queue_full: Optional[pulumi.Input[int]] = None,
heartbeat_frequency: Optional[pulumi.Input[int]] = None,
max_queue_size: Optional[pulumi.Input[str]] = None,
method: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
send_cooked_data: Optional[pulumi.Input[bool]] = None,
servers: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
token: Optional[pulumi.Input[str]] = None) -> 'OutputsTcpGroup':
"""
Get an existing OutputsTcpGroup resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[pulumi.InputType['OutputsTcpGroupAclArgs']] acl: The app/user context that is the namespace for the resource
:param pulumi.Input[bool] compressed: If true, the forwarder sends compressed data. If set to true, the receiver port must also have compression turned on.
:param pulumi.Input[bool] disabled: If true, disables the group.
:param pulumi.Input[int] drop_events_on_queue_full: If set to a positive number, wait the specified number of seconds before throwing out all new events until the output queue has space. Defaults to -1 (do not drop events).
<br>CAUTION: Do not set this value to a positive integer if you are monitoring files.
Setting this to -1 or 0 causes the output queue to block when it gets full, which causes further blocking up the processing chain. If any target group queue is blocked, no more data reaches any other target group.
Using auto load-balancing is the best way to minimize this condition, because, in that case, multiple receivers must be down (or jammed up) before queue blocking can occur.
:param pulumi.Input[int] heartbeat_frequency: How often (in seconds) to send a heartbeat packet to the receiving server.
Heartbeats are only sent if sendCookedData=true. Defaults to 30 seconds.
:param pulumi.Input[str] max_queue_size: Specify an integer or integer[KB|MB|GB].
<br>Sets the maximum size of the forwarder output queue. It also sets the maximum size of the wait queue to 3x this value, if you have enabled indexer acknowledgment (useACK=true).
Although the wait queue and the output queues are both configured by this attribute, they are separate queues. The setting determines the maximum size of the queue in-memory (RAM) buffer.
For heavy forwarders sending parsed data, maxQueueSize is the maximum number of events. Since events are typically much shorter than data blocks, the memory consumed by the queue on a parsing forwarder is likely to be much smaller than on a non-parsing forwarder, if you use this version of the setting.
If specified as a lone integer (for example, maxQueueSize=100), maxQueueSize indicates the maximum number of queued events (for parsed data) or blocks of data (for unparsed data). A block of data is approximately 64KB. For non-parsing forwarders, such as universal forwarders, that send unparsed data, maxQueueSize is the maximum number of data blocks.
If specified as an integer followed by KB, MB, or GB (for example, maxQueueSize=100MB), maxQueueSize indicates the maximum RAM allocated to the queue buffer. Defaults to 500KB (which means a maximum size of 500KB for the output queue and 1500KB for the wait queue, if any).
:param pulumi.Input[str] method: Valid values: (tcpout | syslog). Specifies the type of output processor.
:param pulumi.Input[str] name: The name of the group of receivers.
:param pulumi.Input[bool] send_cooked_data: If true, events are cooked (processed by Splunk software). If false, events are raw and untouched prior to sending. Defaults to true.
Set to false if you are sending to a third-party system.
:param pulumi.Input[Sequence[pulumi.Input[str]]] servers: List of servers (host:port) to include in the group.
:param pulumi.Input[str] token: Token value generated by the indexer after configuration.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _OutputsTcpGroupState.__new__(_OutputsTcpGroupState)
__props__.__dict__["acl"] = acl
__props__.__dict__["compressed"] = compressed
__props__.__dict__["disabled"] = disabled
__props__.__dict__["drop_events_on_queue_full"] = drop_events_on_queue_full
__props__.__dict__["heartbeat_frequency"] = heartbeat_frequency
__props__.__dict__["max_queue_size"] = max_queue_size
__props__.__dict__["method"] = method
__props__.__dict__["name"] = name
__props__.__dict__["send_cooked_data"] = send_cooked_data
__props__.__dict__["servers"] = servers
__props__.__dict__["token"] = token
return OutputsTcpGroup(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter
def acl(self) -> pulumi.Output['outputs.OutputsTcpGroupAcl']:
"""
The app/user context that is the namespace for the resource
"""
return pulumi.get(self, "acl")
@property
@pulumi.getter
def compressed(self) -> pulumi.Output[bool]:
"""
If true, the forwarder sends compressed data. If set to true, the receiver port must also have compression turned on.
"""
return pulumi.get(self, "compressed")
@property
@pulumi.getter
def disabled(self) -> pulumi.Output[bool]:
"""
If true, disables the group.
"""
return pulumi.get(self, "disabled")
@property
@pulumi.getter(name="dropEventsOnQueueFull")
def drop_events_on_queue_full(self) -> pulumi.Output[int]:
"""
If set to a positive number, wait the specified number of seconds before throwing out all new events until the output queue has space. Defaults to -1 (do not drop events).
<br>CAUTION: Do not set this value to a positive integer if you are monitoring files.
Setting this to -1 or 0 causes the output queue to block when it gets full, which causes further blocking up the processing chain. If any target group queue is blocked, no more data reaches any other target group.
Using auto load-balancing is the best way to minimize this condition, because, in that case, multiple receivers must be down (or jammed up) before queue blocking can occur.
"""
return pulumi.get(self, "drop_events_on_queue_full")
@property
@pulumi.getter(name="heartbeatFrequency")
def heartbeat_frequency(self) -> pulumi.Output[int]:
"""
How often (in seconds) to send a heartbeat packet to the receiving server.
Heartbeats are only sent if sendCookedData=true. Defaults to 30 seconds.
"""
return pulumi.get(self, "heartbeat_frequency")
@property
@pulumi.getter(name="maxQueueSize")
def max_queue_size(self) -> pulumi.Output[str]:
"""
Specify an integer or integer[KB|MB|GB].
<br>Sets the maximum size of the forwarder output queue. It also sets the maximum size of the wait queue to 3x this value, if you have enabled indexer acknowledgment (useACK=true).
Although the wait queue and the output queues are both configured by this attribute, they are separate queues. The setting determines the maximum size of the queue in-memory (RAM) buffer.
For heavy forwarders sending parsed data, maxQueueSize is the maximum number of events. Since events are typically much shorter than data blocks, the memory consumed by the queue on a parsing forwarder is likely to be much smaller than on a non-parsing forwarder, if you use this version of the setting.
If specified as a lone integer (for example, maxQueueSize=100), maxQueueSize indicates the maximum number of queued events (for parsed data) or blocks of data (for unparsed data). A block of data is approximately 64KB. For non-parsing forwarders, such as universal forwarders, that send unparsed data, maxQueueSize is the maximum number of data blocks.
If specified as an integer followed by KB, MB, or GB (for example, maxQueueSize=100MB), maxQueueSize indicates the maximum RAM allocated to the queue buffer. Defaults to 500KB (which means a maximum size of 500KB for the output queue and 1500KB for the wait queue, if any).
"""
return pulumi.get(self, "max_queue_size")
@property
@pulumi.getter
def method(self) -> pulumi.Output[str]:
"""
Valid values: (tcpout | syslog). Specifies the type of output processor.
"""
return pulumi.get(self, "method")
@property
@pulumi.getter
def name(self) -> pulumi.Output[str]:
"""
The name of the group of receivers.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter(name="sendCookedData")
def send_cooked_data(self) -> pulumi.Output[bool]:
"""
If true, events are cooked (processed by Splunk software). If false, events are raw and untouched prior to sending. Defaults to true.
Set to false if you are sending to a third-party system.
"""
return pulumi.get(self, "send_cooked_data")
@property
@pulumi.getter
def servers(self) -> pulumi.Output[Sequence[str]]:
"""
List of servers (host:port) to include in the group.
"""
return pulumi.get(self, "servers")
@property
@pulumi.getter
def token(self) -> pulumi.Output[str]:
"""
Token value generated by the indexer after configuration.
"""
return pulumi.get(self, "token")
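The `maxQueueSize` grammar documented above (a lone integer counts queued events or ~64KB data blocks, while a KB/MB/GB suffix caps the RAM buffer) can be sketched with a small stdlib parser. `parse_max_queue_size` is an illustrative helper, not part of the Pulumi SDK:

```python
import re

def parse_max_queue_size(value: str):
    """Interpret a maxQueueSize string.

    Returns ("events", n) for a bare integer (a count of queued events
    for parsed data, or ~64KB blocks for unparsed data), or ("bytes", n)
    when a KB/MB/GB suffix caps the in-memory queue buffer.
    """
    match = re.fullmatch(r"(\d+)(KB|MB|GB)?", value.strip())
    if match is None:
        raise ValueError(f"invalid maxQueueSize: {value!r}")
    count = int(match.group(1))
    unit = match.group(2)
    if unit is None:
        return ("events", count)  # event/block count form
    factor = {"KB": 1024, "MB": 1024 ** 2, "GB": 1024 ** 3}[unit]
    return ("bytes", count * factor)  # RAM-cap form
```

Under this reading, the default of `500KB` caps the output queue at 512,000 bytes (with the wait queue at 3x that when `useACK=true`).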
# File: lightserv/neuroglancer/forms.py | repo: BrainCOGS/lightserv | license: BSD-3-Clause
from flask import session, current_app
from flask_wtf import FlaskForm
from wtforms import (StringField, SubmitField, TextAreaField,
SelectField, BooleanField, IntegerField,
DecimalField, FieldList,FormField,HiddenField)
# from wtforms.fields.html5 import DateField
from wtforms.validators import (DataRequired, Length, InputRequired,
ValidationError, Optional)
max_channels_allowed = 8 # 4 channels but both dorsal up and ventral up
""" Raw data """
class ChannelForm(FlaskForm):
""" A sub-form for each channel in an ImageResolutionForm """
channel_name = HiddenField('Channel Name')
viz_left_lightsheet = BooleanField("Visualize?")
viz_right_lightsheet = BooleanField("Visualize?")
ventral_up = HiddenField("ventral_up")
class ImageResolutionForm(FlaskForm):
""" A sub-form for each image resolution in RawDataSetupForm """
image_resolution = HiddenField('image resolution')
channel_forms = FieldList(FormField(ChannelForm),
min_entries=0,max_entries=max_channels_allowed)
class RawDataSetupForm(FlaskForm):
""" A form for setting up how the user wants to visualize
their raw data for a given imaging request in Neuroglancer
"""
image_resolution_forms = FieldList(FormField(ImageResolutionForm),min_entries=0,max_entries=4)
submit = SubmitField('Submit') # renders a new resolution table
def validate_image_resolution_forms(self,image_resolution_forms):
""" Check to make sure at least one checkbox was checked """
any_checked = False
for image_resolution_dict in self.image_resolution_forms.data:
for channel_dict in image_resolution_dict['channel_forms']:
if channel_dict['viz_left_lightsheet'] or channel_dict['viz_right_lightsheet']:
any_checked=True
if not any_checked:
raise ValidationError("No light sheets were chosen to display."
" You must choose at least one in order to proceed.")
""" Stitched data """
class StitchedDataSetupForm(FlaskForm):
""" A form for setting up how the user wants to visualize
their stitched (full resolution) data for a given processing request in Neuroglancer.
"""
image_resolution_forms = FieldList(FormField(ImageResolutionForm),min_entries=0,max_entries=4)
submit = SubmitField('Submit') # renders a new resolution table
def validate_image_resolution_forms(self,image_resolution_forms):
""" Check to make sure at least one checkbox was checked """
any_checked = False
for image_resolution_dict in self.image_resolution_forms.data:
for channel_dict in image_resolution_dict['channel_forms']:
if channel_dict['viz_left_lightsheet'] or channel_dict['viz_right_lightsheet']:
any_checked=True
if not any_checked:
raise ValidationError("No light sheets were chosen to display."
" You must choose at least one in order to proceed.")
""" Blended data """
class BlendedChannelForm(FlaskForm):
""" A sub-form for each channel in an ImageResolutionForm """
channel_name = HiddenField('Channel Name')
viz = BooleanField("Visualize?",default=0)
ventral_up = HiddenField("ventral_up")
class BlendedImageResolutionForm(FlaskForm):
""" A sub-form for each image resolution in RawDataSetupForm """
image_resolution = HiddenField('image resolution')
channel_forms = FieldList(FormField(BlendedChannelForm),
min_entries=0,max_entries=max_channels_allowed)
class BlendedDataSetupForm(FlaskForm):
""" A form for setting up how the user wants to visualize
their blended (full resolution) data for a given processing request in Neuroglancer.
"""
image_resolution_forms = FieldList(
FormField(BlendedImageResolutionForm),min_entries=0,max_entries=4)
submit = SubmitField('Submit') # renders a new resolution table
def validate_image_resolution_forms(self,image_resolution_forms):
""" Check to make sure at least one checkbox was checked """
any_checked = False
for image_resolution_dict in self.image_resolution_forms.data:
for channel_dict in image_resolution_dict['channel_forms']:
if channel_dict['viz']:
any_checked=True
if not any_checked:
raise ValidationError("No channels were chosen to display."
" You must choose at least one in order to proceed.")
""" Downsized data """
class DownsizedChannelForm(FlaskForm):
""" A sub-form for each channel in an ImageResolutionForm """
channel_name = HiddenField('Channel Name')
viz = BooleanField("Visualize?",default=0)
ventral_up = HiddenField("ventral_up")
class DownsizedImageResolutionForm(FlaskForm):
""" A sub-form for each image resolution in RawDataSetupForm """
image_resolution = HiddenField('image resolution')
channel_forms = FieldList(FormField(DownsizedChannelForm),
min_entries=0,max_entries=max_channels_allowed)
class DownsizedDataSetupForm(FlaskForm):
""" A form for setting up how the user wants to visualize
their downsized data for a given imaging request in Neuroglancer.
"""
image_resolution_forms = FieldList(
FormField(DownsizedImageResolutionForm),min_entries=0,max_entries=4)
submit = SubmitField('Submit') # renders a new resolution table
def validate_image_resolution_forms(self,image_resolution_forms):
""" Check to make sure at least one checkbox was checked """
any_checked = False
for image_resolution_dict in self.image_resolution_forms.data:
for channel_dict in image_resolution_dict['channel_forms']:
if channel_dict['viz']:
any_checked=True
if not any_checked:
raise ValidationError("No channels were chosen to display."
" You must choose at least one in order to proceed.")
""" Registered data """
class RegisteredChannelForm(FlaskForm):
""" A sub-form for each channel in an ImageResolutionForm """
channel_name = HiddenField('Channel Name')
viz = BooleanField("Visualize?",default=0)
viz_atlas = BooleanField("Overlay Atlas?",default=0)
ventral_up = HiddenField("ventral_up")
class RegisteredImageResolutionForm(FlaskForm):
""" A sub-form for each image resolution in RawDataSetupForm """
image_resolution = HiddenField('image resolution')
channel_forms = FieldList(FormField(RegisteredChannelForm),
min_entries=0,max_entries=max_channels_allowed)
class RegisteredDataSetupForm(FlaskForm):
""" A form for setting up how the user wants to visualize
their registered data for a given processing request in Neuroglancer.
"""
image_resolution_forms = FieldList(
FormField(RegisteredImageResolutionForm),min_entries=0,max_entries=4)
submit = SubmitField('Submit') # renders a new resolution table
def validate_image_resolution_forms(self,image_resolution_forms):
""" Check to make sure at least one checkbox was checked """
any_checked = False
for image_resolution_dict in self.image_resolution_forms.data:
for channel_dict in image_resolution_dict['channel_forms']:
if channel_dict['viz']:
any_checked=True
if not any_checked:
raise ValidationError("No channels were chosen to display."
" You must choose at least one in order to proceed.")
""" General visualization form """
class GeneralImageResolutionForm(FlaskForm):
""" A sub-form for each image resolution """
image_resolution = HiddenField('image resolution')
raw_channel_forms = FieldList(FormField(ChannelForm),min_entries=0,max_entries=4)
stitched_channel_forms = FieldList(FormField(ChannelForm),min_entries=0,max_entries=4)
blended_channel_forms = FieldList(FormField(BlendedChannelForm),min_entries=0,max_entries=4)
downsized_channel_forms = FieldList(FormField(DownsizedChannelForm),min_entries=0,max_entries=4)
registered_channel_forms = FieldList(FormField(RegisteredChannelForm),min_entries=0,max_entries=4)
class GeneralDataSetupForm(FlaskForm):
""" A form for setting up how the user wants to visualize
their precomputed data products for a given processing request in Neuroglancer.
"""
image_resolution_forms = FieldList(
FormField(GeneralImageResolutionForm),min_entries=0,max_entries=4)
submit = SubmitField('Submit') # renders a new resolution table
def validate_image_resolution_forms(self,image_resolution_forms):
""" Check to make sure at least one checkbox was checked
out of all types of forms"""
any_checked=False
for image_resolution_dict in self.image_resolution_forms.data:
for channel_dict in image_resolution_dict['raw_channel_forms']:
if channel_dict['viz_left_lightsheet'] or channel_dict['viz_right_lightsheet']:
any_checked=True
for channel_dict in image_resolution_dict['stitched_channel_forms']:
if channel_dict['viz_left_lightsheet'] or channel_dict['viz_right_lightsheet']:
any_checked=True
for channel_dict in image_resolution_dict['blended_channel_forms']:
if channel_dict['viz']:
any_checked=True
for channel_dict in image_resolution_dict['downsized_channel_forms']:
if channel_dict['viz']:
any_checked=True
for channel_dict in image_resolution_dict['registered_channel_forms']:
if channel_dict['viz']:
any_checked=True
if not any_checked:
raise ValidationError("No channels were chosen to display."
" You must choose at least one in order to proceed.")
""" Brain selection form for Jess' c-Fos and tracing experiments """
class AnimalForm(FlaskForm):
""" A sub-form for each animal in CfosSetupForm """
dataset = HiddenField('dataset')
animal_id = HiddenField('animal_id')
eroded_cells = HiddenField('Eroded cells?')
viz = BooleanField("Visualize?")
class CfosSetupForm(FlaskForm):
""" A form for selecting which c-Fos dataset to visualize in Neuroglancer
"""
animal_forms = FieldList(FormField(AnimalForm),min_entries=0,max_entries=40)
submit = SubmitField('Submit')
def validate_animal_forms(self,animal_forms):
""" Check that exactly one checkbox was checked. No more, no less."""
n_checked = [animal_form.data['viz']==True for animal_form in self.animal_forms].count(True)
if n_checked == 0:
raise ValidationError("No animal ids were checked when you hit submit. One needs to be checked")
elif n_checked >1:
raise ValidationError("Only one box can be checked when you hit submit.")
class TracingAnimalForm(FlaskForm):
""" A sub-form for each dataset in TracingSetupForm """
dataset = HiddenField('dataset')
viz = BooleanField("Visualize?")
class TracingSetupForm(FlaskForm):
""" A form for selecting which tracing dataset to visualize in Neuroglancer
"""
animal_forms = FieldList(FormField(TracingAnimalForm),min_entries=0,max_entries=4)
submit = SubmitField('Submit')
def validate_animal_forms(self,animal_forms):
""" Check that exactly one checkbox was checked. No more, no less."""
n_checked = [animal_form.data['viz']==True for animal_form in self.animal_forms].count(True)
if n_checked == 0:
raise ValidationError("No animal ids were checked when you hit submit. One needs to be checked")
elif n_checked >1:
raise ValidationError("Only one box can be checked when you hit submit.")
""" Brain selection form for Chris Zimmerman's c-Fos and tracing experiments """
class LightservAnimalForm(FlaskForm):
""" A sub-form for each sample in LightservCfosSetupForm """
sample_name = HiddenField('sample_name')
viz = BooleanField("Visualize?")
class LightservCfosSetupForm(FlaskForm):
""" A form for selecting which sample to visualize in Neuroglancer
"""
sample_forms = FieldList(FormField(LightservAnimalForm),min_entries=0,max_entries=40)
submit = SubmitField('Submit')
def validate_sample_forms(self,sample_forms):
""" Check that exactly one checkbox was checked. No more, no less."""
n_checked = [sample_form.data['viz']==True for sample_form in self.sample_forms].count(True)
if n_checked == 0:
raise ValidationError("No animals were checked when you hit submit. One needs to be checked")
elif n_checked >1:
raise ValidationError("Only one box can be checked when you hit submit.") | 43.459854 | 99 | 0.773682 | 1,588 | 11,908 | 5.624055 | 0.111461 | 0.097414 | 0.053745 | 0.028216 | 0.818721 | 0.812115 | 0.799686 | 0.799686 | 0.790617 | 0.777852 | 0 | 0.004876 | 0.138898 | 11,908 | 274 | 100 | 43.459854 | 0.866101 | 0.215737 | 0 | 0.662857 | 0 | 0 | 0.179885 | 0.010182 | 0 | 0 | 0 | 0 | 0 | 1 | 0.051429 | false | 0 | 0.022857 | 0 | 0.502857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 7 |
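The exactly-one-checked rule enforced by the CfosSetupForm, TracingSetupForm, and LightservCfosSetupForm validators can be isolated into a plain function, which makes the pattern testable without Flask-WTF. `require_exactly_one_checked` is an illustrative name, not part of lightserv:

```python
def require_exactly_one_checked(viz_flags):
    """Raise ValueError unless exactly one flag in viz_flags is truthy.

    Mirrors the validate_*_forms logic above: count the checked boxes,
    reject zero checks and reject more than one.
    """
    n_checked = sum(1 for flag in viz_flags if flag)
    if n_checked == 0:
        raise ValueError("No boxes were checked when you hit submit. One needs to be checked")
    if n_checked > 1:
        raise ValueError("Only one box can be checked when you hit submit.")
```

In the forms above the flags come from `animal_form.data['viz']` across the `FieldList` entries.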
# File: cuisine/data/__init__.py | repo: j3kstrum/recipe-tweaker | license: MIT
from cuisine.data.recipe import Recipe
from cuisine.data.recipe_group import RecipeGroup
from cuisine.data.ingredient import Ingredient
from cuisine.data.ingredient_group import IngredientGroup
# File: Tests/PythonTests/test_volume_grid_emitter.py | repo: ADMTec/CubbyFlow | license: MIT
import numpy as np
import pyCubbyFlow
from pytest_utils import *
def test_volume_grid_emitter2():
# Basic ctor test
sphere = pyCubbyFlow.Sphere2(center=(0.5, 0.5), radius=0.15)
emitter = pyCubbyFlow.VolumeGridEmitter2(sphere, False)
assert emitter.sourceRegion
assert not emitter.isOneShot
assert emitter.isEnabled
# Another basic ctor test
emitter2 = pyCubbyFlow.VolumeGridEmitter2(
sourceRegion=sphere, isOneShot=False)
assert emitter2.sourceRegion
assert not emitter2.isOneShot
assert emitter2.isEnabled
# One-shot emitter
emitter3 = pyCubbyFlow.VolumeGridEmitter2(
sourceRegion=sphere, isOneShot=True)
assert emitter3.isOneShot
frame = pyCubbyFlow.Frame()
solver = pyCubbyFlow.GridSmokeSolver2(resolution=(32, 32), domainSizeX=1.0)
solver.emitter = emitter3
emitter3.AddStepFunctionTarget(solver.smokeDensity, 0.0, 1.0)
emitter3.AddStepFunctionTarget(solver.temperature, 0.0, 1.0)
# Emit some smoke
old_den = np.array(solver.smokeDensity.DataView(), copy=True)
solver.Update(frame)
frame.Advance()
new_den = np.array(solver.smokeDensity.DataView(), copy=True)
diff = np.linalg.norm(old_den - new_den)
assert diff > 0.0
assert not emitter3.isEnabled
# Should not emit more smoke
old_den = np.array(solver.smokeDensity.DataView(), copy=True)
emitter3.Update(0, 0)
new_den = np.array(solver.smokeDensity.DataView(), copy=True)
diff = np.linalg.norm(old_den - new_den)
assert diff < 1e-20
# Re-enabling the emitter should make it emit one more time
emitter3.isEnabled = True
old_den = np.array(solver.smokeDensity.DataView(), copy=True)
solver.Update(frame)
frame.Advance()
new_den = np.array(solver.smokeDensity.DataView(), copy=True)
diff = np.linalg.norm(old_den - new_den)
assert diff > 0.0
assert not emitter3.isEnabled
# ...and gets disabled again
old_den = np.array(solver.smokeDensity.DataView(), copy=True)
emitter3.Update(0, 0)
new_den = np.array(solver.smokeDensity.DataView(), copy=True)
diff = np.linalg.norm(old_den - new_den)
assert diff < 1e-20
def test_volume_grid_emitter3():
# Basic ctor test
sphere = pyCubbyFlow.Sphere3(center=(0.5, 0.5, 0.5), radius=0.15)
emitter = pyCubbyFlow.VolumeGridEmitter3(sphere, False)
assert emitter.sourceRegion
assert not emitter.isOneShot
assert emitter.isEnabled
# Another basic ctor test
emitter2 = pyCubbyFlow.VolumeGridEmitter3(
sourceRegion=sphere, isOneShot=False)
assert emitter2.sourceRegion
assert not emitter2.isOneShot
assert emitter2.isEnabled
# One-shot emitter
emitter3 = pyCubbyFlow.VolumeGridEmitter3(
sourceRegion=sphere, isOneShot=True)
assert emitter3.isOneShot
frame = pyCubbyFlow.Frame()
solver = pyCubbyFlow.GridSmokeSolver3(
resolution=(32, 32, 32), domainSizeX=1.0)
solver.emitter = emitter3
emitter3.AddStepFunctionTarget(solver.smokeDensity, 0.0, 1.0)
emitter3.AddStepFunctionTarget(solver.temperature, 0.0, 1.0)
# Emit some smoke
old_den = np.array(solver.smokeDensity.DataView(), copy=True)
solver.Update(frame)
frame.Advance()
new_den = np.array(solver.smokeDensity.DataView(), copy=True)
diff = np.linalg.norm(old_den - new_den)
assert diff > 0.0
assert not emitter3.isEnabled
# Should not emit more smoke
old_den = np.array(solver.smokeDensity.DataView(), copy=True)
emitter3.Update(0, 0)
new_den = np.array(solver.smokeDensity.DataView(), copy=True)
diff = np.linalg.norm(old_den - new_den)
assert diff < 1e-20
# Re-enabling the emitter should make it emit one more time
emitter3.isEnabled = True
old_den = np.array(solver.smokeDensity.DataView(), copy=True)
solver.Update(frame)
frame.Advance()
new_den = np.array(solver.smokeDensity.DataView(), copy=True)
diff = np.linalg.norm(old_den - new_den)
assert diff > 0.0
assert not emitter3.isEnabled
# ...and gets disabled again
old_den = np.array(solver.smokeDensity.DataView(), copy=True)
emitter3.Update(0, 0)
new_den = np.array(solver.smokeDensity.DataView(), copy=True)
diff = np.linalg.norm(old_den - new_den)
assert diff < 1e-20
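The one-shot behaviour both tests exercise (emit once, self-disable, re-enable to emit exactly once more) can be sketched without pyCubbyFlow as a tiny state machine. `OneShotEmitter` below is a hypothetical stand-in, not part of the CubbyFlow API:

```python
class OneShotEmitter:
    """Minimal stand-in for a one-shot volume emitter: it contributes
    to the grid exactly once per enablement, then disables itself."""

    def __init__(self):
        self.is_enabled = True

    def update(self, grid):
        if not self.is_enabled:
            return  # a disabled one-shot emitter is a no-op
        for i in range(len(grid)):
            grid[i] += 1.0  # stand-in for the step-function source term
        self.is_enabled = False  # one-shot: disable after emitting


grid = [0.0] * 4
emitter = OneShotEmitter()
emitter.update(grid)       # emits once, then disables itself
emitter.update(grid)       # no-op while disabled
emitter.is_enabled = True  # re-enabling allows one more emission
emitter.update(grid)
```

This mirrors the assertions above: the density changes after the first and third updates, stays unchanged while disabled, and `isEnabled` is false again afterwards.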
| 32.938931 | 79 | 0.706837 | 555 | 4,315 | 5.425225 | 0.136937 | 0.107605 | 0.053138 | 0.085022 | 0.931252 | 0.882763 | 0.882763 | 0.882763 | 0.882763 | 0.862172 | 0 | 0.032571 | 0.188876 | 4,315 | 130 | 80 | 33.192308 | 0.827714 | 0.085516 | 0 | 0.826087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.282609 | 1 | 0.021739 | false | 0 | 0.032609 | 0 | 0.054348 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
ff069fc771c8e2b79cd427adc4e4b14fb2db2342 | 19,655 | py | Python | tests/test_rt_ipv4.py | nkarjala/tf-vrouter-1 | dd8606fcc6b91e041130276aead42433978e4ced | [
"BSD-2-Clause"
] | 1 | 2022-01-20T03:23:49.000Z | 2022-01-20T03:23:49.000Z | tests/test_rt_ipv4.py | nkarjala/tf-vrouter-1 | dd8606fcc6b91e041130276aead42433978e4ced | [
"BSD-2-Clause"
] | null | null | null | tests/test_rt_ipv4.py | nkarjala/tf-vrouter-1 | dd8606fcc6b91e041130276aead42433978e4ced | [
"BSD-2-Clause"
] | null | null | null | #!/usr/bin/python
import os
import sys
sys.path.append(os.getcwd())
sys.path.append(os.getcwd() + '/lib/')
from imports import * # noqa
# anything with *test* will be assumed by pytest as a test
# The vrouter_test_fixture is passed as an argument to the test
class TestRTIPv4(unittest.TestCase):
@classmethod
def setup_class(cls):
ObjectBase.setUpClass()
# do auto cleanup and auto idx allocation for vif and nh
ObjectBase.set_auto_features(cleanup=True, vif_idx=True, nh_idx=True)
@classmethod
def teardown_class(cls):
ObjectBase.tearDownClass()
def setup_method(self, method):
ObjectBase.setUp(method)
def teardown_method(self, method):
ObjectBase.tearDown()
# Add route to all levels:8,16,24,32
def test_rt_add_all(self):
vmi = VirtualVif(name="tap_1", ipv4_str="1.1.1.10",
mac_str="de:ad:be:ef:00:02")
nh1 = EncapNextHop(encap_oif_id=vmi.idx(),
encap="de ad be ef 00 02 de ad be ef 00 01 08 00")
nh2 = EncapNextHop(encap_oif_id=vmi.idx(),
encap="de ad be ef 00 02 de ad be ef 00 01 08 11")
# add inet routes
inet_rt1 = InetRoute(
vrf=0,
prefix="10.1.1.2",
prefix_len=32,
nh_idx=nh1.idx(),
rtr_label_flags=constants.VR_RT_LABEL_VALID_FLAG,
rtr_label=4)
inet_rt2 = InetRoute(
vrf=0,
prefix="20.1.1.0",
prefix_len=24,
nh_idx=nh2.idx(),
rtr_label_flags=constants.VR_RT_LABEL_VALID_FLAG,
rtr_label=5)
inet_rt3 = InetRoute(
vrf=0,
prefix="30.1.0.0",
prefix_len=16,
nh_idx=nh1.idx(),
rtr_label_flags=constants.VR_RT_LABEL_VALID_FLAG,
rtr_label=6)
inet_rt4 = InetRoute(
vrf=0,
prefix="40.0.0.0",
prefix_len=8,
nh_idx=nh2.idx(),
rtr_label_flags=constants.VR_RT_LABEL_VALID_FLAG,
rtr_label=7)
ObjectBase.sync_all()
# Query the objects back
self.assertEqual("tap_1", vmi.get_vif_name())
self.assertEqual(nh1.idx(), nh1.get_nh_idx())
self.assertEqual(nh1.idx(), inet_rt1.get_rtr_nh_idx())
self.assertEqual(nh2.idx(), inet_rt2.get_rtr_nh_idx())
self.assertEqual(nh1.idx(), inet_rt3.get_rtr_nh_idx())
self.assertEqual(nh2.idx(), inet_rt4.get_rtr_nh_idx())
self.assertEqual(4, int(inet_rt1.get('rtr_label')))
self.assertEqual(5, int(inet_rt2.get('rtr_label')))
self.assertEqual(6, int(inet_rt3.get('rtr_label')))
self.assertEqual(7, int(inet_rt4.get('rtr_label')))
# Add a route followed by a more specific route
def test_rt_add_sub_route(self):
vmi = VirtualVif(name="tap_1", ipv4_str="1.1.1.10",
mac_str="de:ad:be:ef:00:02")
nh1 = EncapNextHop(encap_oif_id=vmi.idx(),
encap="de ad be ef 00 02 de ad be ef 00 01 08 00")
nh2 = EncapNextHop(encap_oif_id=vmi.idx(),
encap="de ad be ef 00 02 de ad be ef 00 01 08 11")
# add inet routes
inet_rt1 = InetRoute(
vrf=1,
prefix="10.2.0.0",
prefix_len=16,
nh_idx=nh1.idx(),
rtr_label_flags=constants.VR_RT_LABEL_VALID_FLAG,
rtr_label=5)
inet_rt2 = InetRoute(
vrf=1,
prefix="10.2.10.0",
prefix_len=24,
nh_idx=nh2.idx(),
rtr_label_flags=constants.VR_RT_LABEL_VALID_FLAG,
rtr_label=6)
ObjectBase.sync_all()
inet_query_obj = InetRoute(
vrf=1,
prefix="10.2.10.1",
prefix_len=32,
nh_idx=nh2.idx())
# Query the objects back
self.assertEqual(nh1.idx(), inet_rt1.get_rtr_nh_idx())
self.assertEqual(nh2.idx(), inet_rt2.get_rtr_nh_idx())
self.assertEqual(nh2.idx(), inet_query_obj.get_rtr_nh_idx())
# Add a specific route followed by a super route
def test_rt_add_super_route(self):
vmi = VirtualVif(name="tap_1", ipv4_str="1.1.1.10",
mac_str="de:ad:be:ef:00:02")
nh1 = EncapNextHop(encap_oif_id=vmi.idx(),
encap="de ad be ef 00 02 de ad be ef 00 01 08 00")
nh2 = EncapNextHop(encap_oif_id=vmi.idx(),
encap="de ad be ef 00 02 de ad be ef 00 01 08 11")
# add inet routes
inet_rt1 = InetRoute(
vrf=2,
prefix="10.1.1.4",
prefix_len=32,
nh_idx=nh1.idx(),
rtr_label_flags=constants.VR_RT_LABEL_VALID_FLAG,
rtr_label=5)
inet_rt2 = InetRoute(
vrf=2,
prefix="10.1.1.0",
prefix_len=24,
nh_idx=nh2.idx(),
rtr_label_flags=constants.VR_RT_LABEL_VALID_FLAG,
rtr_label=6)
ObjectBase.sync_all()
inet_query_obj = InetRoute(
vrf=2,
prefix="10.1.1.10",
prefix_len=32,
nh_idx=nh2.idx())
# Query the objects back
self.assertEqual(nh1.idx(), inet_rt1.get_rtr_nh_idx())
self.assertEqual(nh2.idx(), inet_rt2.get_rtr_nh_idx())
self.assertEqual(nh2.idx(), inet_query_obj.get_rtr_nh_idx())
self.assertEqual(6, int(inet_query_obj.get('rtr_label')))
# Add a classless route with subnet mask=10
def test_rt_add_classless_prefix(self):
vmi = VirtualVif(name="tap_1", ipv4_str="1.1.1.10",
mac_str="de:ad:be:ef:00:02")
nh1 = EncapNextHop(encap_oif_id=vmi.idx(),
encap="de ad be ef 00 02 de ad be ef 00 01 08 00")
# add classless inet route
inet_rt = InetRoute(
vrf=3,
prefix="1.1.0.0",
prefix_len=10,
nh_idx=nh1.idx(),
rtr_label=5)
ObjectBase.sync_all()
# Query the objects back
self.assertEqual(nh1.idx(), inet_rt.get_rtr_nh_idx())
# Add routes at every level(8,16,24,32) and delete each one of them
def test_rt_del_all(self):
vmi = VirtualVif(name="tap_1", ipv4_str="1.1.1.10",
mac_str="de:ad:be:ef:00:02")
nh1 = EncapNextHop(encap_oif_id=vmi.idx(),
encap="de ad be ef 00 02 de ad be ef 00 01 08 00")
nh2 = EncapNextHop(encap_oif_id=vmi.idx(),
encap="de ad be ef 00 02 de ad be ef 00 01 08 02")
# add inet routes
inet_rt1 = InetRoute(
vrf=4,
prefix="10.0.0.0",
prefix_len=8,
nh_idx=nh1.idx(),
rtr_label_flags=constants.VR_RT_LABEL_VALID_FLAG,
rtr_label=5)
inet_rt2 = InetRoute(
vrf=4,
prefix="20.10.0.0",
prefix_len=16,
nh_idx=nh2.idx(),
rtr_label_flags=constants.VR_RT_LABEL_VALID_FLAG,
rtr_label=6)
inet_rt3 = InetRoute(
vrf=4,
prefix="30.1.10.0",
prefix_len=24,
nh_idx=nh1.idx(),
rtr_label_flags=constants.VR_RT_LABEL_VALID_FLAG,
rtr_label=7)
inet_rt4 = InetRoute(
vrf=4,
prefix="40.1.1.10",
prefix_len=32,
nh_idx=nh2.idx(),
rtr_label_flags=constants.VR_RT_LABEL_VALID_FLAG,
rtr_label=8)
ObjectBase.sync_all()
inet_query_obj = InetRoute(
vrf=4,
prefix="30.1.10.0",
prefix_len=24,
nh_idx=nh1.idx())
self.assertEqual(nh1.idx(), inet_rt1.get_rtr_nh_idx())
self.assertEqual(nh2.idx(), inet_rt2.get_rtr_nh_idx())
self.assertEqual(nh1.idx(), inet_rt3.get_rtr_nh_idx())
self.assertEqual(nh2.idx(), inet_rt4.get_rtr_nh_idx())
# delete routes
inet_rt1.rtr_nh_id = 0
inet_rt1.rtr_label_flags = 0
inet_rt1.delete()
inet_rt2.rtr_nh_id = 0
inet_rt2.rtr_label_flags = 0
inet_rt2.delete()
inet_rt3.rtr_nh_id = 0
inet_rt3.rtr_label_flags = 0
inet_rt3.delete()
inet_rt4.rtr_nh_id = 0
inet_rt4.rtr_label_flags = 0
inet_rt4.delete()
self.assertNotIn(inet_rt1.__obj_id__, ObjectBase.__obj_dict__)
self.assertNotIn(inet_rt2.__obj_id__, ObjectBase.__obj_dict__)
self.assertNotIn(inet_rt3.__obj_id__, ObjectBase.__obj_dict__)
self.assertNotIn(inet_rt4.__obj_id__, ObjectBase.__obj_dict__)
self.assertEqual(0, inet_query_obj.get_rtr_nh_idx())
# Add super route followed by specific route and then delete the
# specific route
def test_rt_del_sub_route(self):
vmi = VirtualVif(name="tap_1", ipv4_str="1.1.1.10",
mac_str="de:ad:be:ef:00:02")
nh1 = EncapNextHop(encap_oif_id=vmi.idx(),
encap="de ad be ef 00 02 de ad be ef 00 01 08 00")
nh2 = EncapNextHop(encap_oif_id=vmi.idx(),
encap="de ad be ef 00 02 de ad be ef 00 01 08 02")
# add inet routes
inet_rt1 = InetRoute(
vrf=5,
prefix="10.2.0.0",
prefix_len=16,
nh_idx=nh1.idx(),
rtr_label_flags=constants.VR_RT_LABEL_VALID_FLAG,
rtr_label=5)
inet_rt2 = InetRoute(
vrf=5,
prefix="10.2.10.0",
prefix_len=24,
nh_idx=nh2.idx(),
rtr_label_flags=constants.VR_RT_LABEL_VALID_FLAG,
rtr_label=6)
ObjectBase.sync_all()
inet_query_obj = InetRoute(
vrf=5,
prefix="10.2.10.1",
prefix_len=32,
nh_idx=6)
# Query the objects back
self.assertEqual(nh1.idx(), inet_rt1.get_rtr_nh_idx())
self.assertEqual(nh2.idx(), inet_rt2.get_rtr_nh_idx())
# delete sub route
inet_rt2.rtr_nh_id = nh1.idx()
inet_rt2.rtr_replace_plen = 16
inet_rt2.rtr_label_flags = inet_rt1.rtr_label_flags
inet_rt2.rtr_label = inet_rt1.rtr_label
inet_rt2.delete()
self.assertEqual(nh1.idx(), inet_query_obj.get_rtr_nh_idx())
self.assertEqual(5, int(inet_query_obj.get('rtr_label')))
self.assertNotIn(inet_rt2.__obj_id__, ObjectBase.__obj_dict__)
    # Add a super route(/16) followed by a specific route(/24) followed by a
    # classless super route(/18) and then delete the classless route
def test_rt_del_classless_super_route(self):
vmi = VirtualVif(name="tap_1", ipv4_str="1.1.1.10",
mac_str="de:ad:be:ef:00:02")
nh1 = EncapNextHop(encap_oif_id=vmi.idx(),
encap="de ad be ef 00 02 de ad be ef 00 01 08 00")
nh2 = EncapNextHop(encap_oif_id=vmi.idx(),
encap="de ad be ef 00 02 de ad be ef 00 01 08 02")
nh3 = EncapNextHop(encap_oif_id=vmi.idx(),
encap="de ad be ef 00 02 de ad be ef 00 01 08 04")
# add inet routes
inet_rt1 = InetRoute(
vrf=6,
prefix="10.2.0.0",
prefix_len=16,
nh_idx=nh2.idx(),
rtr_label_flags=constants.VR_RT_LABEL_VALID_FLAG,
rtr_label=5)
inet_rt2 = InetRoute(
vrf=6,
prefix="10.2.10.0",
prefix_len=24,
nh_idx=nh3.idx(),
rtr_label_flags=constants.VR_RT_LABEL_VALID_FLAG,
rtr_label=6)
inet_rt3 = InetRoute(
vrf=6,
prefix="10.2.0.0",
prefix_len=18,
nh_idx=nh1.idx(),
rtr_label_flags=constants.VR_RT_LABEL_VALID_FLAG,
rtr_label=7)
ObjectBase.sync_all()
inet_query_obj = InetRoute(
vrf=6,
prefix="10.2.11.0",
prefix_len=18,
nh_idx=6)
# Query the objects back
self.assertEqual(nh2.idx(), inet_rt1.get_rtr_nh_idx())
self.assertEqual(nh3.idx(), inet_rt2.get_rtr_nh_idx())
self.assertEqual(nh1.idx(), inet_rt3.get_rtr_nh_idx())
# delete super route
inet_rt3.rtr_nh_id = nh2.idx()
inet_rt3.rtr_replace_plen = 16
inet_rt3.rtr_label_flags = inet_rt1.rtr_label_flags
inet_rt3.rtr_label = inet_rt1.rtr_label
inet_rt3.delete()
self.assertEqual(nh2.idx(), inet_query_obj.get_rtr_nh_idx())
self.assertEqual(5, int(inet_query_obj.get('rtr_label')))
self.assertNotIn(inet_rt3.__obj_id__, ObjectBase.__obj_dict__)
# Add a specific route followed by a super route and then delete the
# specific route followed by deletion of super route
def test_rt_del_all_sub_routes_and_super_bucket(self):
vmi = VirtualVif(name="tap_1", ipv4_str="1.1.1.10",
mac_str="de:ad:be:ef:00:02")
nh1 = EncapNextHop(encap_oif_id=vmi.idx(),
encap="de ad be ef 00 02 de ad be ef 00 01 08 00")
nh2 = EncapNextHop(encap_oif_id=vmi.idx(),
encap="de ad be ef 00 02 de ad be ef 00 01 08 02")
# add inet routes
inet_rt1 = InetRoute(
vrf=7,
prefix="10.2.10.0",
prefix_len=24,
nh_idx=nh1.idx(),
rtr_label_flags=constants.VR_RT_LABEL_VALID_FLAG,
rtr_label=5)
inet_rt2 = InetRoute(
vrf=7,
prefix="10.2.0.0",
prefix_len=16,
nh_idx=nh2.idx(),
rtr_label_flags=constants.VR_RT_LABEL_VALID_FLAG,
rtr_label=6)
ObjectBase.sync_all()
# Query the objects back
self.assertEqual(nh1.idx(), inet_rt1.get_rtr_nh_idx())
self.assertEqual(nh2.idx(), inet_rt2.get_rtr_nh_idx())
# delete all routes
inet_rt1.rtr_replace_plen = inet_rt2.rtr_prefix_len
inet_rt1.rtr_nh_id = nh2.idx()
# inet_rt1.rtr_label = inet_rt2.rtr_label
inet_rt1.rtr_label_flags = inet_rt2.rtr_label_flags
inet_rt1.delete()
self.assertNotIn(inet_rt1.__obj_id__, ObjectBase.__obj_dict__)
inet_rt2.rtr_nh_id = 0
inet_rt2.rtr_label = 0
inet_rt2.rtr_label_flags = 0
inet_rt2.delete()
self.assertNotIn(inet_rt2.__obj_id__, ObjectBase.__obj_dict__)
# Add super route(/16) followed by specific mid level route(/24) followed
# by a more specific route(/32) and then delete the mid level(/24) route
def test_rt_del_mid_level(self):
vmi = VirtualVif(name="tap_1", ipv4_str="1.1.1.10",
mac_str="de:ad:be:ef:00:02")
nh1 = EncapNextHop(encap_oif_id=vmi.idx(),
encap="de ad be ef 00 02 de ad be ef 00 01 08 00")
nh2 = EncapNextHop(encap_oif_id=vmi.idx(),
encap="de ad be ef 00 02 de ad be ef 00 01 08 02")
nh3 = EncapNextHop(encap_oif_id=vmi.idx(),
encap="de ad be ef 00 02 de ad be ef 00 01 08 04")
# add inet routes
inet_rt1 = InetRoute(
vrf=8,
prefix="10.10.0.0",
prefix_len=16,
nh_idx=nh1.idx(),
rtr_label_flags=constants.VR_RT_LABEL_VALID_FLAG,
rtr_label=5)
inet_rt2 = InetRoute(
vrf=8,
prefix="10.10.4.0",
prefix_len=24,
nh_idx=nh2.idx(),
rtr_label_flags=constants.VR_RT_LABEL_VALID_FLAG,
rtr_label=6)
inet_rt3 = InetRoute(
vrf=8,
prefix="10.10.4.2",
prefix_len=32,
nh_idx=nh3.idx(),
rtr_label_flags=constants.VR_RT_LABEL_VALID_FLAG,
rtr_label=7)
ObjectBase.sync_all()
# Query the objects back
self.assertEqual(nh1.idx(), inet_rt1.get_rtr_nh_idx())
self.assertEqual(nh2.idx(), inet_rt2.get_rtr_nh_idx())
self.assertEqual(nh3.idx(), inet_rt3.get_rtr_nh_idx())
# delete mid level route
inet_rt2.rtr_nh_id = nh1.idx()
inet_rt2.rtr_replace_plen = inet_rt1.rtr_prefix_len
inet_rt2.rtr_label_flags = inet_rt1.rtr_label_flags
inet_rt2.rtr_label = inet_rt1.rtr_label
inet_rt2.delete()
# deletion verification query
self.assertNotIn(inet_rt2.__obj_id__, ObjectBase.__obj_dict__)
for i in range(256):
temp_prefix = "10.10.4." + str(i)
temp_inet = InetRoute(
vrf=8,
prefix=temp_prefix,
prefix_len=32,
nh_idx=1)
            if temp_inet.rtr_prefix == inet_rt3.rtr_prefix:
self.assertEqual(nh3.idx(), temp_inet.get_rtr_nh_idx())
else:
self.assertEqual(nh1.idx(), temp_inet.get_rtr_nh_idx())
            del temp_inet
# Add default route(bucket expands automatically)
def test_rt_add_default_route(self):
vmi = VirtualVif(name="tap_5", ipv4_str="192.168.1.1",
mac_str="de:ad:be:ef:00:02")
nh1 = EncapNextHop(encap_oif_id=vmi.idx(),
encap="de ad be ef 00 02 de ad be ef 00 01 08 00")
# add default inet route
inet_rt1 = InetRoute(
vrf=9,
prefix="0.0.0.0",
prefix_len=0,
nh_idx=nh1.idx(),
rtr_label_flags=constants.VR_RT_LABEL_VALID_FLAG,
rtr_label=12)
ObjectBase.sync_all()
inet_query_obj = InetRoute(
vrf=9,
prefix="2.0.0.0",
prefix_len=8,
nh_idx=5)
# Query the objects back
self.assertEqual(nh1.idx(), inet_rt1.get_rtr_nh_idx())
self.assertEqual(nh1.idx(), inet_query_obj.get_rtr_nh_idx())
# Add default route followed by a sub route and delete the sub route
def test_rt_add_default_route_del_sub_route(self):
vmi = VirtualVif(name="tap_5", ipv4_str="192.168.1.1",
mac_str="de:ad:be:ef:00:02")
nh1 = EncapNextHop(encap_oif_id=vmi.idx(),
encap="de ad be ef 00 02 de ad be ef 00 01 08 00")
nh2 = EncapNextHop(encap_oif_id=vmi.idx(),
encap="de ad be ef 00 02 de ad be ef 00 01 08 01")
# add inet routes
inet_rt1 = InetRoute(
vrf=10,
prefix="0.0.0.0",
prefix_len=0,
nh_idx=nh1.idx(),
rtr_label_flags=constants.VR_RT_LABEL_VALID_FLAG,
rtr_label=12)
inet_rt2 = InetRoute(
vrf=10,
prefix="10.0.0.0",
prefix_len=8,
nh_idx=nh2.idx(),
rtr_label_flags=constants.VR_RT_LABEL_VALID_FLAG,
rtr_label=13)
ObjectBase.sync_all()
inet_query_obj = InetRoute(
vrf=10,
prefix="10.0.0.0",
prefix_len=8,
nh_idx=6)
# Query the objects back
self.assertEqual(nh1.idx(), inet_rt1.get_rtr_nh_idx())
self.assertEqual(nh2.idx(), inet_rt2.get_rtr_nh_idx())
# delete sub route
inet_rt2.rtr_nh_id = nh1.idx()
inet_rt2.rtr_replace_plen = 0
inet_rt2.rtr_label = inet_rt1.rtr_label
inet_rt2.delete()
self.assertNotIn(inet_rt2.__obj_id__, ObjectBase.__obj_dict__)
self.assertEqual(nh1.idx(), inet_rt1.get_rtr_nh_idx())
self.assertEqual(12, int(inet_rt1.get('rtr_label')))
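The /8–/32 lookup behaviour these tests assert (a query address resolves to the most specific covering prefix, falling back to the default route) can be illustrated with the standard-library `ipaddress` module. This is an illustrative linear longest-prefix match, not the vrouter datapath:

```python
import ipaddress


def longest_prefix_match(routes, address):
    """Return the next-hop id of the most specific route covering
    address, or None. routes maps prefix strings to next-hop ids."""
    addr = ipaddress.ip_address(address)
    best = None
    for prefix, nh in routes.items():
        net = ipaddress.ip_network(prefix)
        if addr in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, nh)
    return best[1] if best else None


# Prefixes modelled on the tests above: a /16 super route, a /24 sub
# route, and a default route.
routes = {"10.2.0.0/16": 1, "10.2.10.0/24": 2, "0.0.0.0/0": 9}
```

For example, `longest_prefix_match(routes, "10.2.10.1")` picks the /24, `"10.2.11.1"` falls back to the /16, and an uncovered address like `"8.8.8.8"` hits the default route.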
| 34.064125 | 77 | 0.568558 | 2,849 | 19,655 | 3.626887 | 0.059319 | 0.06426 | 0.031937 | 0.042582 | 0.853673 | 0.802865 | 0.776638 | 0.747508 | 0.724281 | 0.690506 | 0 | 0.073618 | 0.326177 | 19,655 | 576 | 78 | 34.123264 | 0.706584 | 0.082218 | 0 | 0.783529 | 0 | 0 | 0.089105 | 0 | 0 | 0 | 0 | 0 | 0.129412 | 1 | 0.035294 | false | 0 | 0.007059 | 0 | 0.044706 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
207ba82e05b047dfea862aa079170fd284814300 | 270 | py | Python | colour_hdri/models/datasets/__init__.py | colour-science/colour-hdri | 3a97c4ad8bc328e2fffabf84ac8b56d795dbeb82 | [
"BSD-3-Clause"
] | 92 | 2015-09-19T22:11:15.000Z | 2022-03-13T06:37:53.000Z | colour_hdri/models/datasets/__init__.py | colour-science/colour-hdri | 3a97c4ad8bc328e2fffabf84ac8b56d795dbeb82 | [
"BSD-3-Clause"
] | 24 | 2017-05-25T08:55:10.000Z | 2022-03-30T18:26:43.000Z | colour_hdri/models/datasets/__init__.py | colour-science/colour-hdri | 3a97c4ad8bc328e2fffabf84ac8b56d795dbeb82 | [
"BSD-3-Clause"
] | 9 | 2016-01-18T17:29:51.000Z | 2020-11-12T12:54:18.000Z | # -*- coding: utf-8 -*-
from .dng import (CCS_ILLUMINANT_ADOBEDNG, CCT_ILLUMINANTS_ADOBEDNG,
LIGHT_SOURCE_TAG_TO_DNG_ILLUMINANTS)
__all__ = [
'CCS_ILLUMINANT_ADOBEDNG',
'CCT_ILLUMINANTS_ADOBEDNG',
'LIGHT_SOURCE_TAG_TO_DNG_ILLUMINANTS',
]
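A common guard for `__all__`-driven re-export modules like the one above is a check that every listed name actually resolves on the module. A generic hedged sketch, using a throwaway module rather than the real colour_hdri package:

```python
import types


def check_all(module):
    """Return the names listed in module.__all__ that the module
    does not actually define."""
    return [name for name in getattr(module, "__all__", [])
            if not hasattr(module, name)]


# Build a throwaway module to exercise the check.
mod = types.ModuleType("demo")
mod.CCS = 1
mod.__all__ = ["CCS", "MISSING"]
```

Here `check_all(mod)` flags `"MISSING"` as listed but undefined.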
| 24.545455 | 68 | 0.718519 | 31 | 270 | 5.548387 | 0.516129 | 0.151163 | 0.244186 | 0.27907 | 0.848837 | 0.848837 | 0.848837 | 0.848837 | 0.848837 | 0.848837 | 0 | 0.004545 | 0.185185 | 270 | 10 | 69 | 27 | 0.777273 | 0.077778 | 0 | 0 | 0 | 0 | 0.331984 | 0.331984 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
455557c099f4ea126d24f3fbdf29435e0ed47d86 | 123 | py | Python | defaultargs/__init__.py | keithlee-co-uk/defaultargs | 41a7702d5bf4cf7f2564da173593f91fbe9033a1 | [
"MIT"
] | null | null | null | defaultargs/__init__.py | keithlee-co-uk/defaultargs | 41a7702d5bf4cf7f2564da173593f91fbe9033a1 | [
"MIT"
] | null | null | null | defaultargs/__init__.py | keithlee-co-uk/defaultargs | 41a7702d5bf4cf7f2564da173593f91fbe9033a1 | [
"MIT"
] | null | null | null | # -*- coding: UTF-8 -*-
from defaultargs.defaultargs import defaultargs
from defaultargs.defaultargs import databaseargs
| 20.5 | 48 | 0.788618 | 13 | 123 | 7.461538 | 0.538462 | 0.309278 | 0.536082 | 0.659794 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009259 | 0.121951 | 123 | 5 | 49 | 24.6 | 0.888889 | 0.170732 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
45860e2ccdabe903b2830d4d550721b86e8fe0c5 | 6,407 | py | Python | loldib/getratings/models/NA/na_leona/na_leona_sup.py | koliupy/loldib | c9ab94deb07213cdc42b5a7c26467cdafaf81b7f | [
"Apache-2.0"
] | null | null | null | loldib/getratings/models/NA/na_leona/na_leona_sup.py | koliupy/loldib | c9ab94deb07213cdc42b5a7c26467cdafaf81b7f | [
"Apache-2.0"
] | null | null | null | loldib/getratings/models/NA/na_leona/na_leona_sup.py | koliupy/loldib | c9ab94deb07213cdc42b5a7c26467cdafaf81b7f | [
"Apache-2.0"
] | null | null | null | from getratings.models.ratings import Ratings
class NA_Leona_Sup_Aatrox(Ratings):
pass
class NA_Leona_Sup_Ahri(Ratings):
pass
class NA_Leona_Sup_Akali(Ratings):
pass
class NA_Leona_Sup_Alistar(Ratings):
pass
class NA_Leona_Sup_Amumu(Ratings):
pass
class NA_Leona_Sup_Anivia(Ratings):
pass
class NA_Leona_Sup_Annie(Ratings):
pass
class NA_Leona_Sup_Ashe(Ratings):
pass
class NA_Leona_Sup_AurelionSol(Ratings):
pass
class NA_Leona_Sup_Azir(Ratings):
pass
class NA_Leona_Sup_Bard(Ratings):
pass
class NA_Leona_Sup_Blitzcrank(Ratings):
pass
class NA_Leona_Sup_Brand(Ratings):
pass
class NA_Leona_Sup_Braum(Ratings):
pass
class NA_Leona_Sup_Caitlyn(Ratings):
pass
class NA_Leona_Sup_Camille(Ratings):
pass
class NA_Leona_Sup_Cassiopeia(Ratings):
pass
class NA_Leona_Sup_Chogath(Ratings):
pass
class NA_Leona_Sup_Corki(Ratings):
pass
class NA_Leona_Sup_Darius(Ratings):
pass
class NA_Leona_Sup_Diana(Ratings):
pass
class NA_Leona_Sup_Draven(Ratings):
pass
class NA_Leona_Sup_DrMundo(Ratings):
pass
class NA_Leona_Sup_Ekko(Ratings):
pass
class NA_Leona_Sup_Elise(Ratings):
pass
class NA_Leona_Sup_Evelynn(Ratings):
pass
class NA_Leona_Sup_Ezreal(Ratings):
pass
class NA_Leona_Sup_Fiddlesticks(Ratings):
pass
class NA_Leona_Sup_Fiora(Ratings):
pass
class NA_Leona_Sup_Fizz(Ratings):
pass
class NA_Leona_Sup_Galio(Ratings):
pass
class NA_Leona_Sup_Gangplank(Ratings):
pass
class NA_Leona_Sup_Garen(Ratings):
pass
class NA_Leona_Sup_Gnar(Ratings):
pass
class NA_Leona_Sup_Gragas(Ratings):
pass
class NA_Leona_Sup_Graves(Ratings):
pass
class NA_Leona_Sup_Hecarim(Ratings):
pass
class NA_Leona_Sup_Heimerdinger(Ratings):
pass
class NA_Leona_Sup_Illaoi(Ratings):
pass
class NA_Leona_Sup_Irelia(Ratings):
pass
class NA_Leona_Sup_Ivern(Ratings):
pass
class NA_Leona_Sup_Janna(Ratings):
pass
class NA_Leona_Sup_JarvanIV(Ratings):
pass
class NA_Leona_Sup_Jax(Ratings):
pass
class NA_Leona_Sup_Jayce(Ratings):
pass
class NA_Leona_Sup_Jhin(Ratings):
pass
class NA_Leona_Sup_Jinx(Ratings):
pass
class NA_Leona_Sup_Kalista(Ratings):
pass
class NA_Leona_Sup_Karma(Ratings):
pass
class NA_Leona_Sup_Karthus(Ratings):
pass
class NA_Leona_Sup_Kassadin(Ratings):
pass
class NA_Leona_Sup_Katarina(Ratings):
pass
class NA_Leona_Sup_Kayle(Ratings):
pass
class NA_Leona_Sup_Kayn(Ratings):
pass
class NA_Leona_Sup_Kennen(Ratings):
pass
class NA_Leona_Sup_Khazix(Ratings):
pass
class NA_Leona_Sup_Kindred(Ratings):
pass
class NA_Leona_Sup_Kled(Ratings):
pass
class NA_Leona_Sup_KogMaw(Ratings):
pass
class NA_Leona_Sup_Leblanc(Ratings):
pass
class NA_Leona_Sup_LeeSin(Ratings):
pass
class NA_Leona_Sup_Leona(Ratings):
pass
class NA_Leona_Sup_Lissandra(Ratings):
pass
class NA_Leona_Sup_Lucian(Ratings):
pass
class NA_Leona_Sup_Lulu(Ratings):
pass
class NA_Leona_Sup_Lux(Ratings):
pass
class NA_Leona_Sup_Malphite(Ratings):
pass
class NA_Leona_Sup_Malzahar(Ratings):
pass
class NA_Leona_Sup_Maokai(Ratings):
pass
class NA_Leona_Sup_MasterYi(Ratings):
pass
class NA_Leona_Sup_MissFortune(Ratings):
pass
class NA_Leona_Sup_MonkeyKing(Ratings):
pass
class NA_Leona_Sup_Mordekaiser(Ratings):
pass
class NA_Leona_Sup_Morgana(Ratings):
pass
class NA_Leona_Sup_Nami(Ratings):
pass
class NA_Leona_Sup_Nasus(Ratings):
pass
class NA_Leona_Sup_Nautilus(Ratings):
pass
class NA_Leona_Sup_Nidalee(Ratings):
pass
class NA_Leona_Sup_Nocturne(Ratings):
pass
class NA_Leona_Sup_Nunu(Ratings):
pass
class NA_Leona_Sup_Olaf(Ratings):
pass
class NA_Leona_Sup_Orianna(Ratings):
pass
class NA_Leona_Sup_Ornn(Ratings):
pass
class NA_Leona_Sup_Pantheon(Ratings):
pass
class NA_Leona_Sup_Poppy(Ratings):
pass
class NA_Leona_Sup_Quinn(Ratings):
pass
class NA_Leona_Sup_Rakan(Ratings):
pass
class NA_Leona_Sup_Rammus(Ratings):
pass
class NA_Leona_Sup_RekSai(Ratings):
pass
class NA_Leona_Sup_Renekton(Ratings):
pass
class NA_Leona_Sup_Rengar(Ratings):
pass
class NA_Leona_Sup_Riven(Ratings):
pass
class NA_Leona_Sup_Rumble(Ratings):
pass
class NA_Leona_Sup_Ryze(Ratings):
pass
class NA_Leona_Sup_Sejuani(Ratings):
pass
class NA_Leona_Sup_Shaco(Ratings):
pass
class NA_Leona_Sup_Shen(Ratings):
pass
class NA_Leona_Sup_Shyvana(Ratings):
pass
class NA_Leona_Sup_Singed(Ratings):
pass
class NA_Leona_Sup_Sion(Ratings):
pass
class NA_Leona_Sup_Sivir(Ratings):
pass
class NA_Leona_Sup_Skarner(Ratings):
pass
class NA_Leona_Sup_Sona(Ratings):
pass
class NA_Leona_Sup_Soraka(Ratings):
pass
class NA_Leona_Sup_Swain(Ratings):
pass
class NA_Leona_Sup_Syndra(Ratings):
pass
class NA_Leona_Sup_TahmKench(Ratings):
pass
class NA_Leona_Sup_Taliyah(Ratings):
pass
class NA_Leona_Sup_Talon(Ratings):
pass
class NA_Leona_Sup_Taric(Ratings):
pass
class NA_Leona_Sup_Teemo(Ratings):
pass
class NA_Leona_Sup_Thresh(Ratings):
pass
class NA_Leona_Sup_Tristana(Ratings):
pass
class NA_Leona_Sup_Trundle(Ratings):
pass
class NA_Leona_Sup_Tryndamere(Ratings):
pass
class NA_Leona_Sup_TwistedFate(Ratings):
pass
class NA_Leona_Sup_Twitch(Ratings):
pass
class NA_Leona_Sup_Udyr(Ratings):
pass
class NA_Leona_Sup_Urgot(Ratings):
pass
class NA_Leona_Sup_Varus(Ratings):
pass
class NA_Leona_Sup_Vayne(Ratings):
pass
class NA_Leona_Sup_Veigar(Ratings):
pass
class NA_Leona_Sup_Velkoz(Ratings):
pass
class NA_Leona_Sup_Vi(Ratings):
pass
class NA_Leona_Sup_Viktor(Ratings):
pass
class NA_Leona_Sup_Vladimir(Ratings):
pass
class NA_Leona_Sup_Volibear(Ratings):
pass
class NA_Leona_Sup_Warwick(Ratings):
pass
class NA_Leona_Sup_Xayah(Ratings):
pass
class NA_Leona_Sup_Xerath(Ratings):
pass
class NA_Leona_Sup_XinZhao(Ratings):
pass
class NA_Leona_Sup_Yasuo(Ratings):
pass
class NA_Leona_Sup_Yorick(Ratings):
pass
class NA_Leona_Sup_Zac(Ratings):
pass
class NA_Leona_Sup_Zed(Ratings):
pass
class NA_Leona_Sup_Ziggs(Ratings):
pass
class NA_Leona_Sup_Zilean(Ratings):
pass
class NA_Leona_Sup_Zyra(Ratings):
pass
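The hundreds of identical empty subclasses above could equally be generated at import time with the three-argument form of `type()`. A hedged sketch of that pattern, with a local `Ratings` stand-in and an abbreviated champion list:

```python
class Ratings:
    pass


CHAMPIONS = ["Aatrox", "Ahri", "Akali"]  # abbreviated list

# Create one empty Ratings subclass per champion and bind it at module
# scope, mirroring the hand-written class statements.
for name in CHAMPIONS:
    cls_name = f"NA_Leona_Sup_{name}"
    globals()[cls_name] = type(cls_name, (Ratings,), {})
```

Each generated class behaves like its hand-written counterpart: it is a `Ratings` subclass addressable by the same module-level name.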
| 15.364508 | 46 | 0.761667 | 972 | 6,407 | 4.59465 | 0.151235 | 0.216301 | 0.370802 | 0.463502 | 0.797582 | 0.797582 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173404 | 6,407 | 416 | 47 | 15.401442 | 0.843278 | 0 | 0 | 0.498195 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.498195 | 0.00361 | 0 | 0.501805 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 7 |
458f8e005f4721f7c3694470a0479684d8c280d1 | 11,764 | py | Python | tests/test_testing.py | apollo13/lightbus | ad9bb5e376e7aabb400d01307345e00fd07e4677 | [
"Apache-2.0"
] | null | null | null | tests/test_testing.py | apollo13/lightbus | ad9bb5e376e7aabb400d01307345e00fd07e4677 | [
"Apache-2.0"
] | null | null | null | tests/test_testing.py | apollo13/lightbus | ad9bb5e376e7aabb400d01307345e00fd07e4677 | [
"Apache-2.0"
] | null | null | null | import pytest
from lightbus import EventMessage, RpcMessage
from lightbus.utilities import testing
pytestmark = pytest.mark.unit
@pytest.fixture
def mock_result():
rpc = testing.TestRpcTransport()
result = testing.TestResultTransport()
event = testing.TestEventTransport()
schema = testing.TestSchemaTransport()
return testing.MockResult(rpc, result, event, schema)
@pytest.mark.parametrize(
"method_name", ["assert_events_fired", "assertEventFired"], ids=["snake", "camel"]
)
def test_mock_result_assert_events_fired_simple(mock_result: testing.MockResult, method_name):
assert_events_fired = getattr(mock_result, method_name)
mock_result.event.events = [(EventMessage(api_name="api", event_name="event"), {})]
# No exception
try:
assert_events_fired("api.event")
except AssertionError as e:
assert False, f"{method_name} incorrectly raised an assertion error: {e}"
with pytest.raises(AssertionError):
assert_events_fired("api.bad_event")
@pytest.mark.parametrize(
"method_name", ["assert_events_fired", "assertEventFired"], ids=["snake", "camel"]
)
def test_mock_result_assert_events_fired_times(mock_result: testing.MockResult, method_name):
assert_events_fired = getattr(mock_result, method_name)
mock_result.event.events = [
(EventMessage(api_name="api", event_name="event"), {}),
(EventMessage(api_name="api", event_name="event"), {}),
]
# No error
try:
assert_events_fired("api.event")
except AssertionError as e:
assert False, f"{method_name} incorrectly raised an assertion error: {e}"
# No error
try:
assert_events_fired("api.event", times=2)
except AssertionError as e:
assert False, f"{method_name} incorrectly raised an assertion error: {e}"
# Error
with pytest.raises(AssertionError):
assert_events_fired("api.event", times=0)
# Error
with pytest.raises(AssertionError):
assert_events_fired("api.event", times=1)
# Error
with pytest.raises(AssertionError):
assert_events_fired("api.event", times=3)
@pytest.mark.parametrize(
"method_name", ["assert_events_fired", "assertEventFired"], ids=["snake", "camel"]
)
def test_mock_result_assert_events_fired_zero(mock_result: testing.MockResult, method_name):
assert_events_fired = getattr(mock_result, method_name)
mock_result.event.events = []
# No error
try:
assert_events_fired("api.event", times=0)
except AssertionError as e:
assert False, f"{method_name} incorrectly raised an assertion error: {e}"
# Error
with pytest.raises(AssertionError):
assert_events_fired("api.event", times=1)
@pytest.mark.parametrize(
"method_name", ["assert_event_not_fired", "assertEventNotFired"], ids=["snake", "camel"]
)
def test_mock_result_assert_event_not_fired(mock_result: testing.MockResult, method_name):
assert_event_not_fired = getattr(mock_result, method_name)
mock_result.event.events = [(EventMessage(api_name="api", event_name="event"), {})]
# No exception
try:
assert_event_not_fired("api.bad_event")
except AssertionError as e:
assert False, f"{method_name} incorrectly raised an assertion error: {e}"
with pytest.raises(AssertionError):
assert_event_not_fired("api.event")
@pytest.mark.parametrize(
"method_name", ["get_event_messages", "getEventMessages"], ids=["snake", "camel"]
)
def test_mock_result_get_event_messages(mock_result: testing.MockResult, method_name):
get_event_messages = getattr(mock_result, method_name)
event1 = EventMessage(api_name="api", event_name="event1")
event2 = EventMessage(api_name="api", event_name="event2")
mock_result.event.events = [(event1, {}), (event2, {})]
assert get_event_messages() == [event1, event2]

@pytest.mark.parametrize(
"method_name", ["get_event_messages", "getEventMessages"], ids=["snake", "camel"]
)
def test_mock_result_get_event_messages_filtered(mock_result: testing.MockResult, method_name):
get_event_messages = getattr(mock_result, method_name)
event1 = EventMessage(api_name="api", event_name="event1")
event2 = EventMessage(api_name="api", event_name="event2")
mock_result.event.events = [(event1, {}), (event2, {})]
assert get_event_messages("api.event2") == [event2]

@pytest.mark.parametrize(
"method_name", ["assert_rpc_called", "assertRpcCalled"], ids=["snake", "camel"]
)
def test_mock_result_assert_rpc_called_simple(mock_result: testing.MockResult, method_name):
assert_rpc_called = getattr(mock_result, method_name)
mock_result.rpc.rpcs = [(RpcMessage(api_name="api", procedure_name="rpc"), {})]
# No exception
try:
assert_rpc_called("api.rpc")
except AssertionError as e:
assert False, f"{method_name} incorrectly raised an assertion error: {e}"
with pytest.raises(AssertionError):
assert_rpc_called("api.bad_rpc")

@pytest.mark.parametrize(
"method_name", ["assert_rpc_called", "assertRpcCalled"], ids=["snake", "camel"]
)
def test_mock_result_assert_rpc_called_times(mock_result: testing.MockResult, method_name):
assert_rpc_called = getattr(mock_result, method_name)
mock_result.rpc.rpcs = [
(RpcMessage(api_name="api", procedure_name="rpc"), {}),
(RpcMessage(api_name="api", procedure_name="rpc"), {}),
]
# No error
try:
assert_rpc_called("api.rpc")
except AssertionError as e:
assert False, f"{method_name} incorrectly raised an assertion error: {e}"
# No error
try:
assert_rpc_called("api.rpc", times=2)
except AssertionError as e:
assert False, f"{method_name} incorrectly raised an assertion error: {e}"
# Error
with pytest.raises(AssertionError):
assert_rpc_called("api.rpc", times=0)
# Error
with pytest.raises(AssertionError):
assert_rpc_called("api.rpc", times=1)
# Error
with pytest.raises(AssertionError):
assert_rpc_called("api.rpc", times=3)

@pytest.mark.parametrize(
"method_name", ["assert_rpc_called", "assertRpcCalled"], ids=["snake", "camel"]
)
def test_mock_result_assert_rpc_called_zero(mock_result: testing.MockResult, method_name):
assert_rpc_called = getattr(mock_result, method_name)
mock_result.rpc.rpcs = []
# No error
try:
assert_rpc_called("api.rpc", times=0)
except AssertionError as e:
assert False, f"{method_name} incorrectly raised an assertion error: {e}"
# Error
with pytest.raises(AssertionError):
assert_rpc_called("api.rpc", times=1)

@pytest.mark.parametrize(
"method_name", ["assert_rpc_not_called", "assertRpcNotCalled"], ids=["snake", "camel"]
)
def test_mock_result_assert_rpc_not_called(mock_result: testing.MockResult, method_name):
assert_rpc_not_called = getattr(mock_result, method_name)
mock_result.rpc.rpcs = [(RpcMessage(api_name="api", procedure_name="rpc"), {})]
# No exception
try:
assert_rpc_not_called("api.bad_rpc")
except AssertionError as e:
assert False, f"{method_name} incorrectly raised an assertion error: {e}"
with pytest.raises(AssertionError):
assert_rpc_not_called("api.rpc")

@pytest.mark.parametrize(
"method_name", ["get_rpc_messages", "getRpcMessages"], ids=["snake", "camel"]
)
def test_mock_result_get_rpc_messages(mock_result: testing.MockResult, method_name):
get_rpc_messages = getattr(mock_result, method_name)
rpc1 = RpcMessage(api_name="api", procedure_name="rpc1")
rpc2 = RpcMessage(api_name="api", procedure_name="rpc2")
mock_result.rpc.rpcs = [(rpc1, {}), (rpc2, {})]
assert get_rpc_messages() == [rpc1, rpc2]

@pytest.mark.parametrize(
"method_name", ["get_rpc_messages", "getRpcMessages"], ids=["snake", "camel"]
)
def test_mock_result_get_rpc_messages_filtered(mock_result: testing.MockResult, method_name):
get_rpc_messages = getattr(mock_result, method_name)
rpc1 = RpcMessage(api_name="api", procedure_name="rpc1")
rpc2 = RpcMessage(api_name="api", procedure_name="rpc2")
mock_result.rpc.rpcs = [(rpc1, {}), (rpc2, {})]
assert get_rpc_messages("api.rpc2") == [rpc2]

@pytest.mark.parametrize(
"property_name", ["event_names_fired", "eventNamesFired"], ids=["snake", "camel"]
)
def test_mock_result_event_names_fired(mock_result: testing.MockResult, property_name):
mock_result.event.events = [
(EventMessage(api_name="api", event_name="event"), {}),
(EventMessage(api_name="api2", event_name="event2"), {}),
]
assert getattr(mock_result, property_name) == ["api.event", "api2.event2"]

@pytest.mark.parametrize(
    "property_name", ["rpc_names_fired", "rpcNamesFired"], ids=["snake", "camel"]
)
def test_mock_result_rpc_names_fired(mock_result: testing.MockResult, property_name):
    # Property names assumed to mirror the event_names_fired/eventNamesFired pair above
    mock_result.rpc.rpcs = [
        (RpcMessage(api_name="api", procedure_name="rpc"), {}),
        (RpcMessage(api_name="api2", procedure_name="rpc2"), {}),
    ]
    assert getattr(mock_result, property_name) == ["api.rpc", "api2.rpc2"]

def test_bus_mocker_event_ok(dummy_bus, dummy_api):
dummy_bus.client.register_api(dummy_api)
with testing.BusMocker(dummy_bus) as bus_mocker:
bus_mocker.mock_event_firing("my.dummy.my_event")
dummy_bus.my.dummy.my_event.fire(field="x")
bus_mocker.assert_events_fired("my.dummy.my_event")

def test_bus_mocker_event_not_mocked(dummy_bus, dummy_api):
dummy_bus.client.register_api(dummy_api)
with testing.BusMocker(dummy_bus) as bus_mocker:
# We don't call mock_event_firing, so we get an error here
with pytest.raises(AssertionError):
dummy_bus.my.dummy.my_event.fire(field="x")

def test_bus_mocker_event_mocking_disabled(dummy_bus, dummy_api):
dummy_bus.client.register_api(dummy_api)
with testing.BusMocker(dummy_bus, require_mocking=False) as bus_mocker:
# We don't call mock_event_firing, but we've disabled mocking so that is ok
dummy_bus.my.dummy.my_event.fire(field="x")
bus_mocker.assert_events_fired("my.dummy.my_event")

def test_bus_mocker_event_mocking_disabled_but_mocked_anyway(dummy_bus, dummy_api):
dummy_bus.client.register_api(dummy_api)
with testing.BusMocker(dummy_bus, require_mocking=False) as bus_mocker:
bus_mocker.mock_event_firing("my.dummy.my_event")
dummy_bus.my.dummy.my_event.fire(field="x")
bus_mocker.assert_events_fired("my.dummy.my_event")

def test_bus_mocker_rpc_ok(dummy_bus):
with testing.BusMocker(dummy_bus) as bus_mocker:
bus_mocker.mock_rpc_call("api.rpc", result=1)
result = dummy_bus.api.rpc()
assert result == 1
bus_mocker.assert_rpc_called("api.rpc")

def test_bus_mocker_rpc_not_mocked(dummy_bus):
with testing.BusMocker(dummy_bus) as bus_mocker:
# We don't call mock_rpc_call, so we get an error here
with pytest.raises(AssertionError):
dummy_bus.api.rpc()

def test_bus_mocker_rpc_mocking_disabled(dummy_bus):
with testing.BusMocker(dummy_bus, require_mocking=False) as bus_mocker:
# We don't call mock_rpc_call, but we've disabled mocking so that is ok
result = dummy_bus.api.rpc()
# No mocking setup, so RPCs just return None
        assert result is None
bus_mocker.assert_rpc_called("api.rpc")

def test_bus_mocker_rpc_mocking_disabled_but_mocked_anyway(dummy_bus):
with testing.BusMocker(dummy_bus, require_mocking=False) as bus_mocker:
bus_mocker.mock_rpc_call("api.rpc", result=1)
result = dummy_bus.api.rpc()
assert result == 1
bus_mocker.assert_rpc_called("api.rpc")
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: porerefiner/protocols/porerefiner/rpc/porerefiner.proto
"""Generated protocol buffer code."""
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
from google.protobuf import timestamp_pb2 as google_dot_protobuf_dot_timestamp__pb2
from google.protobuf import duration_pb2 as google_dot_protobuf_dot_duration__pb2
from google.protobuf.timestamp_pb2 import *
from google.protobuf.duration_pb2 import *
DESCRIPTOR = _descriptor.FileDescriptor(
name='porerefiner/protocols/porerefiner/rpc/porerefiner.proto',
package='porerefiner.rpc',
syntax='proto3',
serialized_options=None,
create_key=_descriptor._internal_create_key,
serialized_pb=b'\n7porerefiner/protocols/porerefiner/rpc/porerefiner.proto\x12\x0fporerefiner.rpc\x1a\x1fgoogle/protobuf/timestamp.proto\x1a\x1egoogle/protobuf/duration.proto\";\n\tTripleTag\x12\x11\n\tnamespace\x18\x01 \x01(\t\x12\x0c\n\x04name\x18\x02 \x01(\t\x12\r\n\x05value\x18\x03 \x01(\t\"\xd2\x07\n\x03Run\x12\n\n\x02id\x18\x01 \x01(\x05\x12\x0c\n\x04name\x18\x02 \x01(\t\x12\x15\n\rmnemonic_name\x18\x03 \x01(\t\x12\x12\n\nlibrary_id\x18\x04 \x01(\t\x12\x0e\n\x06status\x18\x05 \x01(\t\x12\x0c\n\x04path\x18\x06 \x01(\t\x12\x15\n\rflowcell_type\x18\x07 \x01(\t\x12\x13\n\x0b\x66lowcell_id\x18\x08 \x01(\t\x12\x19\n\x11\x62\x61secalling_model\x18\t \x01(\t\x12\x16\n\x0esequencing_kit\x18\n \x01(\t\x12+\n\x07started\x18\x0b \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x12*\n\x07\x65lapsed\x18\x0c \x01(\x0b\x32\x19.google.protobuf.Duration\x12\x13\n\x0b\x62\x61rcode_kit\x18\r \x03(\t\x12(\n\x05\x66iles\x18\x0f \x03(\x0b\x32\x19.porerefiner.rpc.Run.File\x12,\n\x07samples\x18\x14 \x03(\x0b\x32\x1b.porerefiner.rpc.Run.Sample\x12\x0c\n\x04tags\x18\x1e \x03(\t\x12-\n\ttrip_tags\x18# \x03(\x0b\x32\x1a.porerefiner.rpc.TripleTag\x12&\n\x04jobs\x18( \x03(\x0b\x32\x18.porerefiner.rpc.Run.Job\x1a\x9b\x01\n\x04\x46ile\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x0c\n\x04path\x18\x02 \x01(\t\x12\x0f\n\x07spot_id\x18\x05 \x01(\t\x12\x0c\n\x04size\x18\x08 \x01(\x04\x12\r\n\x05ready\x18\n \x01(\x08\x12\x0c\n\x04hash\x18\x0c \x01(\t\x12\x0c\n\x04tags\x18\x1e \x03(\t\x12-\n\ttrip_tags\x18# \x03(\x0b\x32\x1a.porerefiner.rpc.TripleTag\x1a\x8e\x02\n\x06Sample\x12\n\n\x02id\x18\x01 \x01(\x05\x12\x0c\n\x04name\x18\x02 \x01(\t\x12\x11\n\taccession\x18\x03 \x01(\t\x12\x12\n\nbarcode_id\x18\x04 \x01(\t\x12\x13\n\x0b\x62\x61rcode_seq\x18\x05 \x01(\t\x12\x10\n\x08organism\x18\x06 \x01(\t\x12\x16\n\x0e\x65xtraction_kit\x18\x07 \x01(\t\x12\x0f\n\x07\x63omment\x18\x08 \x01(\t\x12\x0c\n\x04user\x18\t \x01(\t\x12(\n\x05\x66iles\x18\x14 \x03(\x0b\x32\x19.porerefiner.rpc.Run.File\x12\x0c\n\x04tags\x18\x1e \x03(\t\x12-\n\ttrip_tags\x18( \x03(\x0b\x32\x1a.porerefiner.rpc.TripleTag\x1a/\n\x03Job\x12\n\n\x02id\x18\x01 \x01(\x05\x12\x0c\n\x04name\x18\x02 \x01(\t\x12\x0e\n\x06status\x18\x03 \x01(\t\"E\n\x05\x45rror\x12\x0e\n\x04\x63ode\x18\x01 \x01(\x05H\x00\x12\x0e\n\x04type\x18\x02 \x01(\tH\x00\x12\x13\n\x0b\x65rr_message\x18\x03 \x01(\tB\x07\n\x05\x65rror\"d\n\x0bRunResponse\x12#\n\x03run\x18\x01 \x01(\x0b\x32\x14.porerefiner.rpc.RunH\x00\x12\'\n\x05\x65rror\x18\x02 \x01(\x0b\x32\x16.porerefiner.rpc.ErrorH\x00\x42\x07\n\x05reply\"+\n\x0eRunListRequest\x12\x0b\n\x03\x61ll\x18\x01 \x01(\x08\x12\x0c\n\x04tags\x18\x14 \x03(\t\"-\n\x07RunList\x12\"\n\x04runs\x18\x01 \x03(\x0b\x32\x14.porerefiner.rpc.Run\"m\n\x0fRunListResponse\x12(\n\x04runs\x18\x01 \x01(\x0b\x32\x18.porerefiner.rpc.RunListH\x00\x12\'\n\x05\x65rror\x18\x02 \x01(\x0b\x32\x16.porerefiner.rpc.ErrorH\x00\x42\x07\n\x05reply\"2\n\nRunRequest\x12\x0c\n\x02id\x18\x01 \x01(\rH\x00\x12\x0e\n\x04name\x18\x02 \x01(\tH\x00\x42\x06\n\x04term\"E\n\x0fRunRsyncRequest\x12\x0c\n\x02id\x18\x01 \x01(\rH\x00\x12\x0e\n\x04name\x18\x02 \x01(\tH\x00\x12\x0c\n\x04\x64\x65st\x18\x03 \x01(\tB\x06\n\x04term\"9\n\x10RunRsyncResponse\x12%\n\x05\x65rror\x18\x01 \x01(\x0b\x32\x16.porerefiner.rpc.Error\"8\n\x0fGenericResponse\x12%\n\x05\x65rror\x18\x01 \x01(\x0b\x32\x16.porerefiner.rpc.Error\"H\n\nTagRequest\x12\n\n\x02id\x18\x01 \x01(\r\x12\x0c\n\x04tags\x18\x02 \x03(\t\x12\r\n\x05untag\x18\x03 \x01(\x08\x12\x11\n\tnamespace\x18\x04 \x01(\t\"M\n\x10TripleTagRequest\x12\n\n\x02id\x18\x01 \x01(\r\x12-\n\ttrip_tags\x18\x02 \x03(\x0b\x32\x1a.porerefiner.rpc.TripleTag\"\xc1\x03\n\x0bSampleSheet\x12\x17\n\x0fporerefiner_ver\x18\x01 \x01(\t\x12\x12\n\nlibrary_id\x18\x02 \x01(\t\x12\x16\n\x0esequencing_kit\x18\x03 \x01(\t\x12(\n\x04\x64\x61te\x18\x04 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x12\x13\n\x0b\x62\x61rcode_kit\x18\x05 \x03(\t\x12\x34\n\x07samples\x18\n \x03(\x0b\x32#.porerefiner.rpc.SampleSheet.Sample\x12\x0c\n\x04tags\x18\x1e \x03(\t\x12-\n\ttrip_tags\x18# \x03(\x0b\x32\x1a.porerefiner.rpc.TripleTag\x1a\xba\x01\n\x06Sample\x12\x11\n\tsample_id\x18\x01 \x01(\t\x12\x11\n\taccession\x18\x02 \x01(\t\x12\x12\n\nbarcode_id\x18\x03 \x01(\t\x12\x10\n\x08organism\x18\x04 \x01(\t\x12\x16\n\x0e\x65xtraction_kit\x18\x05 \x01(\t\x12\x0f\n\x07\x63omment\x18\x06 \x01(\t\x12\x0c\n\x04user\x18\x07 \x01(\t\x12-\n\ttrip_tags\x18\x14 \x03(\x0b\x32\x1a.porerefiner.rpc.TripleTag\"e\n\x10RunAttachRequest\x12\x0c\n\x02id\x18\x01 \x01(\rH\x00\x12\x0e\n\x04name\x18\x02 \x01(\tH\x00\x12+\n\x05sheet\x18\x05 \x01(\x0b\x32\x1c.porerefiner.rpc.SampleSheetB\x06\n\x04term2\x96\x03\n\x0bPoreRefiner\x12L\n\x07GetRuns\x12\x1f.porerefiner.rpc.RunListRequest\x1a .porerefiner.rpc.RunListResponse\x12G\n\nGetRunInfo\x12\x1b.porerefiner.rpc.RunRequest\x1a\x1c.porerefiner.rpc.RunResponse\x12W\n\x10\x41ttachSheetToRun\x12!.porerefiner.rpc.RunAttachRequest\x1a .porerefiner.rpc.GenericResponse\x12Q\n\nRsyncRunTo\x12 .porerefiner.rpc.RunRsyncRequest\x1a!.porerefiner.rpc.RunRsyncResponse\x12\x44\n\x03Tag\x12\x1b.porerefiner.rpc.TagRequest\x1a .porerefiner.rpc.GenericResponseP\x00P\x01\x62\x06proto3'
,
dependencies=[google_dot_protobuf_dot_timestamp__pb2.DESCRIPTOR,google_dot_protobuf_dot_duration__pb2.DESCRIPTOR,],
public_dependencies=[google_dot_protobuf_dot_timestamp__pb2.DESCRIPTOR,google_dot_protobuf_dot_duration__pb2.DESCRIPTOR,])
_TRIPLETAG = _descriptor.Descriptor(
name='TripleTag',
full_name='porerefiner.rpc.TripleTag',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='namespace', full_name='porerefiner.rpc.TripleTag.namespace', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='name', full_name='porerefiner.rpc.TripleTag.name', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='value', full_name='porerefiner.rpc.TripleTag.value', index=2,
number=3, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=141,
serialized_end=200,
)
_RUN_FILE = _descriptor.Descriptor(
name='File',
full_name='porerefiner.rpc.Run.File',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='name', full_name='porerefiner.rpc.Run.File.name', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='path', full_name='porerefiner.rpc.Run.File.path', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='spot_id', full_name='porerefiner.rpc.Run.File.spot_id', index=2,
number=5, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='size', full_name='porerefiner.rpc.Run.File.size', index=3,
number=8, type=4, cpp_type=4, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='ready', full_name='porerefiner.rpc.Run.File.ready', index=4,
number=10, type=8, cpp_type=7, label=1,
has_default_value=False, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='hash', full_name='porerefiner.rpc.Run.File.hash', index=5,
number=12, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='tags', full_name='porerefiner.rpc.Run.File.tags', index=6,
number=30, type=9, cpp_type=9, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='trip_tags', full_name='porerefiner.rpc.Run.File.trip_tags', index=7,
number=35, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=704,
serialized_end=859,
)
_RUN_SAMPLE = _descriptor.Descriptor(
name='Sample',
full_name='porerefiner.rpc.Run.Sample',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='id', full_name='porerefiner.rpc.Run.Sample.id', index=0,
number=1, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='name', full_name='porerefiner.rpc.Run.Sample.name', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='accession', full_name='porerefiner.rpc.Run.Sample.accession', index=2,
number=3, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='barcode_id', full_name='porerefiner.rpc.Run.Sample.barcode_id', index=3,
number=4, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='barcode_seq', full_name='porerefiner.rpc.Run.Sample.barcode_seq', index=4,
number=5, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='organism', full_name='porerefiner.rpc.Run.Sample.organism', index=5,
number=6, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='extraction_kit', full_name='porerefiner.rpc.Run.Sample.extraction_kit', index=6,
number=7, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='comment', full_name='porerefiner.rpc.Run.Sample.comment', index=7,
number=8, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='user', full_name='porerefiner.rpc.Run.Sample.user', index=8,
number=9, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='files', full_name='porerefiner.rpc.Run.Sample.files', index=9,
number=20, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='tags', full_name='porerefiner.rpc.Run.Sample.tags', index=10,
number=30, type=9, cpp_type=9, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='trip_tags', full_name='porerefiner.rpc.Run.Sample.trip_tags', index=11,
number=40, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=862,
serialized_end=1132,
)
_RUN_JOB = _descriptor.Descriptor(
name='Job',
full_name='porerefiner.rpc.Run.Job',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='id', full_name='porerefiner.rpc.Run.Job.id', index=0,
number=1, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='name', full_name='porerefiner.rpc.Run.Job.name', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='status', full_name='porerefiner.rpc.Run.Job.status', index=2,
number=3, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=1134,
serialized_end=1181,
)
_RUN = _descriptor.Descriptor(
name='Run',
full_name='porerefiner.rpc.Run',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='id', full_name='porerefiner.rpc.Run.id', index=0,
number=1, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='name', full_name='porerefiner.rpc.Run.name', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='mnemonic_name', full_name='porerefiner.rpc.Run.mnemonic_name', index=2,
number=3, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='library_id', full_name='porerefiner.rpc.Run.library_id', index=3,
number=4, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='status', full_name='porerefiner.rpc.Run.status', index=4,
number=5, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='path', full_name='porerefiner.rpc.Run.path', index=5,
number=6, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='flowcell_type', full_name='porerefiner.rpc.Run.flowcell_type', index=6,
number=7, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='flowcell_id', full_name='porerefiner.rpc.Run.flowcell_id', index=7,
number=8, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='basecalling_model', full_name='porerefiner.rpc.Run.basecalling_model', index=8,
number=9, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='sequencing_kit', full_name='porerefiner.rpc.Run.sequencing_kit', index=9,
number=10, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='started', full_name='porerefiner.rpc.Run.started', index=10,
number=11, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='elapsed', full_name='porerefiner.rpc.Run.elapsed', index=11,
number=12, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='barcode_kit', full_name='porerefiner.rpc.Run.barcode_kit', index=12,
number=13, type=9, cpp_type=9, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='files', full_name='porerefiner.rpc.Run.files', index=13,
number=15, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='samples', full_name='porerefiner.rpc.Run.samples', index=14,
number=20, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='tags', full_name='porerefiner.rpc.Run.tags', index=15,
number=30, type=9, cpp_type=9, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='trip_tags', full_name='porerefiner.rpc.Run.trip_tags', index=16,
number=35, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='jobs', full_name='porerefiner.rpc.Run.jobs', index=17,
number=40, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[_RUN_FILE, _RUN_SAMPLE, _RUN_JOB, ],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=203,
serialized_end=1181,
)
_ERROR = _descriptor.Descriptor(
name='Error',
full_name='porerefiner.rpc.Error',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='code', full_name='porerefiner.rpc.Error.code', index=0,
number=1, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='type', full_name='porerefiner.rpc.Error.type', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='err_message', full_name='porerefiner.rpc.Error.err_message', index=2,
number=3, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
_descriptor.OneofDescriptor(
name='error', full_name='porerefiner.rpc.Error.error',
index=0, containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[]),
],
serialized_start=1183,
serialized_end=1252,
)
_RUNRESPONSE = _descriptor.Descriptor(
name='RunResponse',
full_name='porerefiner.rpc.RunResponse',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='run', full_name='porerefiner.rpc.RunResponse.run', index=0,
number=1, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='error', full_name='porerefiner.rpc.RunResponse.error', index=1,
number=2, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
_descriptor.OneofDescriptor(
name='reply', full_name='porerefiner.rpc.RunResponse.reply',
index=0, containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[]),
],
serialized_start=1254,
serialized_end=1354,
)
_RUNLISTREQUEST = _descriptor.Descriptor(
name='RunListRequest',
full_name='porerefiner.rpc.RunListRequest',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='all', full_name='porerefiner.rpc.RunListRequest.all', index=0,
number=1, type=8, cpp_type=7, label=1,
has_default_value=False, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='tags', full_name='porerefiner.rpc.RunListRequest.tags', index=1,
number=20, type=9, cpp_type=9, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=1356,
serialized_end=1399,
)
_RUNLIST = _descriptor.Descriptor(
name='RunList',
full_name='porerefiner.rpc.RunList',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='runs', full_name='porerefiner.rpc.RunList.runs', index=0,
number=1, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=1401,
serialized_end=1446,
)
_RUNLISTRESPONSE = _descriptor.Descriptor(
name='RunListResponse',
full_name='porerefiner.rpc.RunListResponse',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='runs', full_name='porerefiner.rpc.RunListResponse.runs', index=0,
number=1, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='error', full_name='porerefiner.rpc.RunListResponse.error', index=1,
number=2, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
_descriptor.OneofDescriptor(
name='reply', full_name='porerefiner.rpc.RunListResponse.reply',
index=0, containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[]),
],
serialized_start=1448,
serialized_end=1557,
)
_RUNREQUEST = _descriptor.Descriptor(
name='RunRequest',
full_name='porerefiner.rpc.RunRequest',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='id', full_name='porerefiner.rpc.RunRequest.id', index=0,
number=1, type=13, cpp_type=3, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='name', full_name='porerefiner.rpc.RunRequest.name', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
_descriptor.OneofDescriptor(
name='term', full_name='porerefiner.rpc.RunRequest.term',
index=0, containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[]),
],
serialized_start=1559,
serialized_end=1609,
)
_RUNRSYNCREQUEST = _descriptor.Descriptor(
name='RunRsyncRequest',
full_name='porerefiner.rpc.RunRsyncRequest',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='id', full_name='porerefiner.rpc.RunRsyncRequest.id', index=0,
number=1, type=13, cpp_type=3, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='name', full_name='porerefiner.rpc.RunRsyncRequest.name', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='dest', full_name='porerefiner.rpc.RunRsyncRequest.dest', index=2,
number=3, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
_descriptor.OneofDescriptor(
name='term', full_name='porerefiner.rpc.RunRsyncRequest.term',
index=0, containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[]),
],
serialized_start=1611,
serialized_end=1680,
)
_RUNRSYNCRESPONSE = _descriptor.Descriptor(
name='RunRsyncResponse',
full_name='porerefiner.rpc.RunRsyncResponse',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='error', full_name='porerefiner.rpc.RunRsyncResponse.error', index=0,
number=1, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=1682,
serialized_end=1739,
)
_GENERICRESPONSE = _descriptor.Descriptor(
name='GenericResponse',
full_name='porerefiner.rpc.GenericResponse',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='error', full_name='porerefiner.rpc.GenericResponse.error', index=0,
number=1, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=1741,
serialized_end=1797,
)
_TAGREQUEST = _descriptor.Descriptor(
name='TagRequest',
full_name='porerefiner.rpc.TagRequest',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='id', full_name='porerefiner.rpc.TagRequest.id', index=0,
number=1, type=13, cpp_type=3, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='tags', full_name='porerefiner.rpc.TagRequest.tags', index=1,
number=2, type=9, cpp_type=9, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='untag', full_name='porerefiner.rpc.TagRequest.untag', index=2,
number=3, type=8, cpp_type=7, label=1,
has_default_value=False, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='namespace', full_name='porerefiner.rpc.TagRequest.namespace', index=3,
number=4, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=1799,
serialized_end=1871,
)
_TRIPLETAGREQUEST = _descriptor.Descriptor(
name='TripleTagRequest',
full_name='porerefiner.rpc.TripleTagRequest',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='id', full_name='porerefiner.rpc.TripleTagRequest.id', index=0,
number=1, type=13, cpp_type=3, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='trip_tags', full_name='porerefiner.rpc.TripleTagRequest.trip_tags', index=1,
number=2, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=1873,
serialized_end=1950,
)
_SAMPLESHEET_SAMPLE = _descriptor.Descriptor(
name='Sample',
full_name='porerefiner.rpc.SampleSheet.Sample',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='sample_id', full_name='porerefiner.rpc.SampleSheet.Sample.sample_id', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='accession', full_name='porerefiner.rpc.SampleSheet.Sample.accession', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='barcode_id', full_name='porerefiner.rpc.SampleSheet.Sample.barcode_id', index=2,
number=3, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='organism', full_name='porerefiner.rpc.SampleSheet.Sample.organism', index=3,
number=4, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='extraction_kit', full_name='porerefiner.rpc.SampleSheet.Sample.extraction_kit', index=4,
number=5, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='comment', full_name='porerefiner.rpc.SampleSheet.Sample.comment', index=5,
number=6, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='user', full_name='porerefiner.rpc.SampleSheet.Sample.user', index=6,
number=7, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='trip_tags', full_name='porerefiner.rpc.SampleSheet.Sample.trip_tags', index=7,
number=20, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=2216,
serialized_end=2402,
)
_SAMPLESHEET = _descriptor.Descriptor(
name='SampleSheet',
full_name='porerefiner.rpc.SampleSheet',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='porerefiner_ver', full_name='porerefiner.rpc.SampleSheet.porerefiner_ver', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='library_id', full_name='porerefiner.rpc.SampleSheet.library_id', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='sequencing_kit', full_name='porerefiner.rpc.SampleSheet.sequencing_kit', index=2,
number=3, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='date', full_name='porerefiner.rpc.SampleSheet.date', index=3,
number=4, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='barcode_kit', full_name='porerefiner.rpc.SampleSheet.barcode_kit', index=4,
number=5, type=9, cpp_type=9, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='samples', full_name='porerefiner.rpc.SampleSheet.samples', index=5,
number=10, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='tags', full_name='porerefiner.rpc.SampleSheet.tags', index=6,
number=30, type=9, cpp_type=9, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='trip_tags', full_name='porerefiner.rpc.SampleSheet.trip_tags', index=7,
number=35, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[_SAMPLESHEET_SAMPLE, ],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=1953,
serialized_end=2402,
)
_RUNATTACHREQUEST = _descriptor.Descriptor(
name='RunAttachRequest',
full_name='porerefiner.rpc.RunAttachRequest',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='id', full_name='porerefiner.rpc.RunAttachRequest.id', index=0,
number=1, type=13, cpp_type=3, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='name', full_name='porerefiner.rpc.RunAttachRequest.name', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='sheet', full_name='porerefiner.rpc.RunAttachRequest.sheet', index=2,
number=5, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
_descriptor.OneofDescriptor(
name='term', full_name='porerefiner.rpc.RunAttachRequest.term',
index=0, containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[]),
],
serialized_start=2404,
serialized_end=2505,
)
_RUN_FILE.fields_by_name['trip_tags'].message_type = _TRIPLETAG
_RUN_FILE.containing_type = _RUN
_RUN_SAMPLE.fields_by_name['files'].message_type = _RUN_FILE
_RUN_SAMPLE.fields_by_name['trip_tags'].message_type = _TRIPLETAG
_RUN_SAMPLE.containing_type = _RUN
_RUN_JOB.containing_type = _RUN
_RUN.fields_by_name['started'].message_type = google_dot_protobuf_dot_timestamp__pb2._TIMESTAMP
_RUN.fields_by_name['elapsed'].message_type = google_dot_protobuf_dot_duration__pb2._DURATION
_RUN.fields_by_name['files'].message_type = _RUN_FILE
_RUN.fields_by_name['samples'].message_type = _RUN_SAMPLE
_RUN.fields_by_name['trip_tags'].message_type = _TRIPLETAG
_RUN.fields_by_name['jobs'].message_type = _RUN_JOB
_ERROR.oneofs_by_name['error'].fields.append(
_ERROR.fields_by_name['code'])
_ERROR.fields_by_name['code'].containing_oneof = _ERROR.oneofs_by_name['error']
_ERROR.oneofs_by_name['error'].fields.append(
_ERROR.fields_by_name['type'])
_ERROR.fields_by_name['type'].containing_oneof = _ERROR.oneofs_by_name['error']
_RUNRESPONSE.fields_by_name['run'].message_type = _RUN
_RUNRESPONSE.fields_by_name['error'].message_type = _ERROR
_RUNRESPONSE.oneofs_by_name['reply'].fields.append(
_RUNRESPONSE.fields_by_name['run'])
_RUNRESPONSE.fields_by_name['run'].containing_oneof = _RUNRESPONSE.oneofs_by_name['reply']
_RUNRESPONSE.oneofs_by_name['reply'].fields.append(
_RUNRESPONSE.fields_by_name['error'])
_RUNRESPONSE.fields_by_name['error'].containing_oneof = _RUNRESPONSE.oneofs_by_name['reply']
_RUNLIST.fields_by_name['runs'].message_type = _RUN
_RUNLISTRESPONSE.fields_by_name['runs'].message_type = _RUNLIST
_RUNLISTRESPONSE.fields_by_name['error'].message_type = _ERROR
_RUNLISTRESPONSE.oneofs_by_name['reply'].fields.append(
_RUNLISTRESPONSE.fields_by_name['runs'])
_RUNLISTRESPONSE.fields_by_name['runs'].containing_oneof = _RUNLISTRESPONSE.oneofs_by_name['reply']
_RUNLISTRESPONSE.oneofs_by_name['reply'].fields.append(
_RUNLISTRESPONSE.fields_by_name['error'])
_RUNLISTRESPONSE.fields_by_name['error'].containing_oneof = _RUNLISTRESPONSE.oneofs_by_name['reply']
_RUNREQUEST.oneofs_by_name['term'].fields.append(
_RUNREQUEST.fields_by_name['id'])
_RUNREQUEST.fields_by_name['id'].containing_oneof = _RUNREQUEST.oneofs_by_name['term']
_RUNREQUEST.oneofs_by_name['term'].fields.append(
_RUNREQUEST.fields_by_name['name'])
_RUNREQUEST.fields_by_name['name'].containing_oneof = _RUNREQUEST.oneofs_by_name['term']
_RUNRSYNCREQUEST.oneofs_by_name['term'].fields.append(
_RUNRSYNCREQUEST.fields_by_name['id'])
_RUNRSYNCREQUEST.fields_by_name['id'].containing_oneof = _RUNRSYNCREQUEST.oneofs_by_name['term']
_RUNRSYNCREQUEST.oneofs_by_name['term'].fields.append(
_RUNRSYNCREQUEST.fields_by_name['name'])
_RUNRSYNCREQUEST.fields_by_name['name'].containing_oneof = _RUNRSYNCREQUEST.oneofs_by_name['term']
_RUNRSYNCRESPONSE.fields_by_name['error'].message_type = _ERROR
_GENERICRESPONSE.fields_by_name['error'].message_type = _ERROR
_TRIPLETAGREQUEST.fields_by_name['trip_tags'].message_type = _TRIPLETAG
_SAMPLESHEET_SAMPLE.fields_by_name['trip_tags'].message_type = _TRIPLETAG
_SAMPLESHEET_SAMPLE.containing_type = _SAMPLESHEET
_SAMPLESHEET.fields_by_name['date'].message_type = google_dot_protobuf_dot_timestamp__pb2._TIMESTAMP
_SAMPLESHEET.fields_by_name['samples'].message_type = _SAMPLESHEET_SAMPLE
_SAMPLESHEET.fields_by_name['trip_tags'].message_type = _TRIPLETAG
_RUNATTACHREQUEST.fields_by_name['sheet'].message_type = _SAMPLESHEET
_RUNATTACHREQUEST.oneofs_by_name['term'].fields.append(
_RUNATTACHREQUEST.fields_by_name['id'])
_RUNATTACHREQUEST.fields_by_name['id'].containing_oneof = _RUNATTACHREQUEST.oneofs_by_name['term']
_RUNATTACHREQUEST.oneofs_by_name['term'].fields.append(
_RUNATTACHREQUEST.fields_by_name['name'])
_RUNATTACHREQUEST.fields_by_name['name'].containing_oneof = _RUNATTACHREQUEST.oneofs_by_name['term']
DESCRIPTOR.message_types_by_name['TripleTag'] = _TRIPLETAG
DESCRIPTOR.message_types_by_name['Run'] = _RUN
DESCRIPTOR.message_types_by_name['Error'] = _ERROR
DESCRIPTOR.message_types_by_name['RunResponse'] = _RUNRESPONSE
DESCRIPTOR.message_types_by_name['RunListRequest'] = _RUNLISTREQUEST
DESCRIPTOR.message_types_by_name['RunList'] = _RUNLIST
DESCRIPTOR.message_types_by_name['RunListResponse'] = _RUNLISTRESPONSE
DESCRIPTOR.message_types_by_name['RunRequest'] = _RUNREQUEST
DESCRIPTOR.message_types_by_name['RunRsyncRequest'] = _RUNRSYNCREQUEST
DESCRIPTOR.message_types_by_name['RunRsyncResponse'] = _RUNRSYNCRESPONSE
DESCRIPTOR.message_types_by_name['GenericResponse'] = _GENERICRESPONSE
DESCRIPTOR.message_types_by_name['TagRequest'] = _TAGREQUEST
DESCRIPTOR.message_types_by_name['TripleTagRequest'] = _TRIPLETAGREQUEST
DESCRIPTOR.message_types_by_name['SampleSheet'] = _SAMPLESHEET
DESCRIPTOR.message_types_by_name['RunAttachRequest'] = _RUNATTACHREQUEST
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
TripleTag = _reflection.GeneratedProtocolMessageType('TripleTag', (_message.Message,), {
'DESCRIPTOR' : _TRIPLETAG,
'__module__' : 'porerefiner.protocols.porerefiner.rpc.porerefiner_pb2'
# @@protoc_insertion_point(class_scope:porerefiner.rpc.TripleTag)
})
_sym_db.RegisterMessage(TripleTag)
Run = _reflection.GeneratedProtocolMessageType('Run', (_message.Message,), {
'File' : _reflection.GeneratedProtocolMessageType('File', (_message.Message,), {
'DESCRIPTOR' : _RUN_FILE,
'__module__' : 'porerefiner.protocols.porerefiner.rpc.porerefiner_pb2'
# @@protoc_insertion_point(class_scope:porerefiner.rpc.Run.File)
})
,
'Sample' : _reflection.GeneratedProtocolMessageType('Sample', (_message.Message,), {
'DESCRIPTOR' : _RUN_SAMPLE,
'__module__' : 'porerefiner.protocols.porerefiner.rpc.porerefiner_pb2'
# @@protoc_insertion_point(class_scope:porerefiner.rpc.Run.Sample)
})
,
'Job' : _reflection.GeneratedProtocolMessageType('Job', (_message.Message,), {
'DESCRIPTOR' : _RUN_JOB,
'__module__' : 'porerefiner.protocols.porerefiner.rpc.porerefiner_pb2'
# @@protoc_insertion_point(class_scope:porerefiner.rpc.Run.Job)
})
,
'DESCRIPTOR' : _RUN,
'__module__' : 'porerefiner.protocols.porerefiner.rpc.porerefiner_pb2'
# @@protoc_insertion_point(class_scope:porerefiner.rpc.Run)
})
_sym_db.RegisterMessage(Run)
_sym_db.RegisterMessage(Run.File)
_sym_db.RegisterMessage(Run.Sample)
_sym_db.RegisterMessage(Run.Job)
Error = _reflection.GeneratedProtocolMessageType('Error', (_message.Message,), {
'DESCRIPTOR' : _ERROR,
'__module__' : 'porerefiner.protocols.porerefiner.rpc.porerefiner_pb2'
# @@protoc_insertion_point(class_scope:porerefiner.rpc.Error)
})
_sym_db.RegisterMessage(Error)
RunResponse = _reflection.GeneratedProtocolMessageType('RunResponse', (_message.Message,), {
'DESCRIPTOR' : _RUNRESPONSE,
'__module__' : 'porerefiner.protocols.porerefiner.rpc.porerefiner_pb2'
# @@protoc_insertion_point(class_scope:porerefiner.rpc.RunResponse)
})
_sym_db.RegisterMessage(RunResponse)
RunListRequest = _reflection.GeneratedProtocolMessageType('RunListRequest', (_message.Message,), {
'DESCRIPTOR' : _RUNLISTREQUEST,
'__module__' : 'porerefiner.protocols.porerefiner.rpc.porerefiner_pb2'
# @@protoc_insertion_point(class_scope:porerefiner.rpc.RunListRequest)
})
_sym_db.RegisterMessage(RunListRequest)
RunList = _reflection.GeneratedProtocolMessageType('RunList', (_message.Message,), {
'DESCRIPTOR' : _RUNLIST,
'__module__' : 'porerefiner.protocols.porerefiner.rpc.porerefiner_pb2'
# @@protoc_insertion_point(class_scope:porerefiner.rpc.RunList)
})
_sym_db.RegisterMessage(RunList)
RunListResponse = _reflection.GeneratedProtocolMessageType('RunListResponse', (_message.Message,), {
'DESCRIPTOR' : _RUNLISTRESPONSE,
'__module__' : 'porerefiner.protocols.porerefiner.rpc.porerefiner_pb2'
# @@protoc_insertion_point(class_scope:porerefiner.rpc.RunListResponse)
})
_sym_db.RegisterMessage(RunListResponse)
RunRequest = _reflection.GeneratedProtocolMessageType('RunRequest', (_message.Message,), {
'DESCRIPTOR' : _RUNREQUEST,
'__module__' : 'porerefiner.protocols.porerefiner.rpc.porerefiner_pb2'
# @@protoc_insertion_point(class_scope:porerefiner.rpc.RunRequest)
})
_sym_db.RegisterMessage(RunRequest)
RunRsyncRequest = _reflection.GeneratedProtocolMessageType('RunRsyncRequest', (_message.Message,), {
'DESCRIPTOR' : _RUNRSYNCREQUEST,
'__module__' : 'porerefiner.protocols.porerefiner.rpc.porerefiner_pb2'
# @@protoc_insertion_point(class_scope:porerefiner.rpc.RunRsyncRequest)
})
_sym_db.RegisterMessage(RunRsyncRequest)
RunRsyncResponse = _reflection.GeneratedProtocolMessageType('RunRsyncResponse', (_message.Message,), {
'DESCRIPTOR' : _RUNRSYNCRESPONSE,
'__module__' : 'porerefiner.protocols.porerefiner.rpc.porerefiner_pb2'
# @@protoc_insertion_point(class_scope:porerefiner.rpc.RunRsyncResponse)
})
_sym_db.RegisterMessage(RunRsyncResponse)
GenericResponse = _reflection.GeneratedProtocolMessageType('GenericResponse', (_message.Message,), {
'DESCRIPTOR' : _GENERICRESPONSE,
'__module__' : 'porerefiner.protocols.porerefiner.rpc.porerefiner_pb2'
# @@protoc_insertion_point(class_scope:porerefiner.rpc.GenericResponse)
})
_sym_db.RegisterMessage(GenericResponse)
TagRequest = _reflection.GeneratedProtocolMessageType('TagRequest', (_message.Message,), {
'DESCRIPTOR' : _TAGREQUEST,
'__module__' : 'porerefiner.protocols.porerefiner.rpc.porerefiner_pb2'
# @@protoc_insertion_point(class_scope:porerefiner.rpc.TagRequest)
})
_sym_db.RegisterMessage(TagRequest)
TripleTagRequest = _reflection.GeneratedProtocolMessageType('TripleTagRequest', (_message.Message,), {
'DESCRIPTOR' : _TRIPLETAGREQUEST,
'__module__' : 'porerefiner.protocols.porerefiner.rpc.porerefiner_pb2'
# @@protoc_insertion_point(class_scope:porerefiner.rpc.TripleTagRequest)
})
_sym_db.RegisterMessage(TripleTagRequest)
SampleSheet = _reflection.GeneratedProtocolMessageType('SampleSheet', (_message.Message,), {
'Sample' : _reflection.GeneratedProtocolMessageType('Sample', (_message.Message,), {
'DESCRIPTOR' : _SAMPLESHEET_SAMPLE,
'__module__' : 'porerefiner.protocols.porerefiner.rpc.porerefiner_pb2'
# @@protoc_insertion_point(class_scope:porerefiner.rpc.SampleSheet.Sample)
})
,
'DESCRIPTOR' : _SAMPLESHEET,
'__module__' : 'porerefiner.protocols.porerefiner.rpc.porerefiner_pb2'
# @@protoc_insertion_point(class_scope:porerefiner.rpc.SampleSheet)
})
_sym_db.RegisterMessage(SampleSheet)
_sym_db.RegisterMessage(SampleSheet.Sample)
RunAttachRequest = _reflection.GeneratedProtocolMessageType('RunAttachRequest', (_message.Message,), {
'DESCRIPTOR' : _RUNATTACHREQUEST,
'__module__' : 'porerefiner.protocols.porerefiner.rpc.porerefiner_pb2'
# @@protoc_insertion_point(class_scope:porerefiner.rpc.RunAttachRequest)
})
_sym_db.RegisterMessage(RunAttachRequest)
_POREREFINER = _descriptor.ServiceDescriptor(
name='PoreRefiner',
full_name='porerefiner.rpc.PoreRefiner',
file=DESCRIPTOR,
index=0,
serialized_options=None,
create_key=_descriptor._internal_create_key,
serialized_start=2508,
serialized_end=2914,
methods=[
_descriptor.MethodDescriptor(
name='GetRuns',
full_name='porerefiner.rpc.PoreRefiner.GetRuns',
index=0,
containing_service=None,
input_type=_RUNLISTREQUEST,
output_type=_RUNLISTRESPONSE,
serialized_options=None,
create_key=_descriptor._internal_create_key,
),
_descriptor.MethodDescriptor(
name='GetRunInfo',
full_name='porerefiner.rpc.PoreRefiner.GetRunInfo',
index=1,
containing_service=None,
input_type=_RUNREQUEST,
output_type=_RUNRESPONSE,
serialized_options=None,
create_key=_descriptor._internal_create_key,
),
_descriptor.MethodDescriptor(
name='AttachSheetToRun',
full_name='porerefiner.rpc.PoreRefiner.AttachSheetToRun',
index=2,
containing_service=None,
input_type=_RUNATTACHREQUEST,
output_type=_GENERICRESPONSE,
serialized_options=None,
create_key=_descriptor._internal_create_key,
),
_descriptor.MethodDescriptor(
name='RsyncRunTo',
full_name='porerefiner.rpc.PoreRefiner.RsyncRunTo',
index=3,
containing_service=None,
input_type=_RUNRSYNCREQUEST,
output_type=_RUNRSYNCRESPONSE,
serialized_options=None,
create_key=_descriptor._internal_create_key,
),
_descriptor.MethodDescriptor(
name='Tag',
full_name='porerefiner.rpc.PoreRefiner.Tag',
index=4,
containing_service=None,
input_type=_TAGREQUEST,
output_type=_GENERICRESPONSE,
serialized_options=None,
create_key=_descriptor._internal_create_key,
),
])
_sym_db.RegisterServiceDescriptor(_POREREFINER)
DESCRIPTOR.services_by_name['PoreRefiner'] = _POREREFINER
# @@protoc_insertion_point(module_scope)
| 45.765493 | 5,141 | 0.752951 | 8,374 | 64,987 | 5.523883 | 0.041915 | 0.048944 | 0.077631 | 0.068876 | 0.835787 | 0.773007 | 0.748752 | 0.719264 | 0.709817 | 0.695419 | 0 | 0.034728 | 0.119578 | 64,987 | 1,419 | 5,142 | 45.797745 | 0.773735 | 0.02322 | 0 | 0.704819 | 1 | 0.003012 | 0.1902 | 0.151917 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.006024 | 0 | 0.006024 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
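The porerefiner_pb2 fragment above builds every message class at import time by calling `_reflection.GeneratedProtocolMessageType` with a class name, a bases tuple, and a class dict — the same three-argument protocol as Python's built-in `type`. A minimal stdlib-only sketch of that construction, using a stand-in `Message` base since the protobuf runtime is not imported here:

```python
# GeneratedProtocolMessageType is a metaclass: calling it with
# (name, bases, class_dict) manufactures a message class when the
# generated module is imported. The built-in type() follows the
# same protocol, so the mechanism can be shown without protobuf.

class Message:
    """Stand-in for google.protobuf.message.Message."""

# Mirrors: Run = _reflection.GeneratedProtocolMessageType('Run',
#     (_message.Message,), {...}) with a nested Sample message type.
Run = type('Run', (Message,), {
    '__module__': 'porerefiner.protocols.porerefiner.rpc.porerefiner_pb2',
    'Sample': type('Sample', (Message,), {}),  # nested message type
})

print(Run.__name__)             # Run
print(Run.Sample.__name__)      # Sample
print(issubclass(Run, Message)) # True
```

In the real module each class dict also carries a `'DESCRIPTOR'` entry, which is what ties the generated Python class back to the compiled .proto schema.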
45a5035af3fd6017a31ed729ec8425b80433c516 | 47 | py | Python | tests/test_handler_with_slash/test_handler.py | preearor/aws-lambda-python-runtime-interface-client | cc60047cbc4aa576fef72a9dfca0d1fcb0da00b4 | [
"Apache-2.0"
] | 136 | 2020-12-01T18:33:09.000Z | 2022-03-26T19:42:37.000Z | tests/test_handler_with_slash/test_handler.py | pushkarchawda/aws-lambda-python-runtime-interface-client | 67ff94e107bb3ae80c4f7f3aef40858185fb11fb | [
"Apache-2.0"
] | 43 | 2020-12-03T00:03:10.000Z | 2022-03-31T08:42:55.000Z | tests/test_handler_with_slash/test_handler.py | pushkarchawda/aws-lambda-python-runtime-interface-client | 67ff94e107bb3ae80c4f7f3aef40858185fb11fb | [
"Apache-2.0"
] | 42 | 2020-12-01T19:09:39.000Z | 2022-03-02T19:36:52.000Z | def my_handler():
return "Good with slash"
| 15.666667 | 28 | 0.680851 | 7 | 47 | 4.428571 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.212766 | 47 | 2 | 29 | 23.5 | 0.837838 | 0 | 0 | 0 | 0 | 0 | 0.319149 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0.5 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
45c1c1711c4f261d9e67f9abfafd02aa3603d916 | 6,311 | py | Python | tensorflow_io/cifar/python/ops/cifar_ops.py | rjpower/tensorflow-io | 39aa0b46cfaa403121fdddbd491a03d2f3190a87 | [
"Apache-2.0"
] | 1 | 2019-10-10T06:11:23.000Z | 2019-10-10T06:11:23.000Z | tensorflow_io/cifar/python/ops/cifar_ops.py | rjpower/tensorflow-io | 39aa0b46cfaa403121fdddbd491a03d2f3190a87 | [
"Apache-2.0"
] | null | null | null | tensorflow_io/cifar/python/ops/cifar_ops.py | rjpower/tensorflow-io | 39aa0b46cfaa403121fdddbd491a03d2f3190a87 | [
"Apache-2.0"
] | 1 | 2019-10-10T06:11:24.000Z | 2019-10-10T06:11:24.000Z | # Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""CIFAR Dataset."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
from tensorflow import dtypes
from tensorflow.compat.v1 import data
from tensorflow_io.core.python.ops import core_ops as cifar_ops
class _CIFAR10Dataset(data.Dataset):
"""A CIFAR File Dataset that reads the cifar file."""
def __init__(self, filename, filters, batch=None):
"""Create a `CIFARDataset`.
Args:
filename: A `tf.string` tensor containing one or more filenames.
"""
self._data_input = cifar_ops.cifar10_input(filename, filters)
self._batch = 0 if batch is None else batch
super(_CIFAR10Dataset, self).__init__()
def _inputs(self):
return []
def _as_variant_tensor(self):
return cifar_ops.cifar10_dataset(
self._data_input,
self._batch,
output_types=self.output_types,
output_shapes=self.output_shapes)
@property
def output_classes(self):
return tf.Tensor, tf.Tensor
@property
def output_shapes(self):
return (
tf.TensorShape([]),
tf.TensorShape([3, 32, 32])) if self._batch == 0 else (
tf.TensorShape([None]),
tf.TensorShape([None, 3, 32, 32]))
@property
def output_types(self):
return dtypes.uint8, dtypes.uint8
class CIFAR10Dataset(_CIFAR10Dataset):
"""A CIFAR File Dataset that reads the cifar file."""
def __init__(self, filename, batch=None, test=False):
"""Create a `CIFAR10Dataset`.
Args:
filename: A `tf.string` tensor containing one or more filenames.
test: A boolean to indicate if the data input is for test or for train.
"""
self._filename = filename
self._batch = 0 if batch is None else batch
if test:
self._filters = ["tar.gz:test_batch.bin"]
else:
self._filters = [
"tar.gz:data_batch_" + str(i) +".bin" for i in range(1, 6)]
super(CIFAR10Dataset, self).__init__(
self._filename, self._filters, batch=self._batch)
def _as_variant_tensor(self):
if self._batch == 0:
return _CIFAR10Dataset( # pylint: disable=protected-access
self._filename, self._filters, batch=self._batch).map(
lambda label, image: (tf.transpose(image, [1, 2, 0]), label))._as_variant_tensor()
return _CIFAR10Dataset( # pylint: disable=protected-access
self._filename, self._filters, batch=self._batch).map(
lambda label, image: (tf.transpose(image, [0, 2, 3, 1]), label))._as_variant_tensor()
@property
def output_shapes(self):
return (
tf.TensorShape([32, 32, 3]),
tf.TensorShape([])) if self._batch == 0 else (
tf.TensorShape([None, 32, 32, 3]),
tf.TensorShape([None]))
class _CIFAR100Dataset(data.Dataset):
"""A CIFAR File Dataset that reads the cifar file."""
def __init__(self, filename, filters, batch=None):
"""Create a `CIFAR100Dataset`.
Args:
filename: A `tf.string` tensor containing one or more filenames.
"""
self._data_input = cifar_ops.cifar100_input(filename, filters)
self._batch = 0 if batch is None else batch
super(_CIFAR100Dataset, self).__init__()
def _inputs(self):
return []
def _as_variant_tensor(self):
return cifar_ops.cifar100_dataset(
self._data_input,
self._batch,
output_types=self.output_types,
output_shapes=self.output_shapes)
@property
def output_classes(self):
return tf.Tensor, tf.Tensor, tf.Tensor
@property
def output_shapes(self):
return (
tf.TensorShape([]),
tf.TensorShape([]),
tf.TensorShape([3, 32, 32])) if self._batch == 0 else (
tf.TensorShape([None]),
tf.TensorShape([None]),
tf.TensorShape([None, 3, 32, 32]))
@property
def output_types(self):
return dtypes.uint8, dtypes.uint8, dtypes.uint8
class CIFAR100Dataset(_CIFAR100Dataset):
"""A CIFAR File Dataset that reads the cifar file."""
def __init__(self, filename, batch=None, test=False, mode='fine'):
"""Create a `CIFAR100Dataset`.
Args:
filename: A `tf.string` tensor containing one or more filenames.
test: A boolean to indicate if the data input is for test or for train.
mode: A string indicate if `coarse` or `fine` label is used.
"""
self._filename = filename
self._batch = 0 if batch is None else batch
if test:
self._filters = ["tar.gz:test.bin"]
else:
self._filters = ["tar.gz:train.bin"]
self._mode = mode
super(CIFAR100Dataset, self).__init__(
self._filename, self._filters, batch=self._batch)
def _as_variant_tensor(self):
if self._batch == 0:
return _CIFAR100Dataset( # pylint: disable=protected-access
self._filename, self._filters, batch=self._batch).map(
lambda coarse, fine, image: (tf.transpose(image, [1, 2, 0]), fine if self._mode == 'fine' else coarse))._as_variant_tensor()
return _CIFAR100Dataset( # pylint: disable=protected-access
self._filename, self._filters, batch=self._batch).map(
lambda coarse, fine, image: (tf.transpose(image, [0, 2, 3, 1]), fine if self._mode == 'fine' else coarse))._as_variant_tensor()
@property
def output_classes(self):
return tf.Tensor, tf.Tensor
@property
def output_shapes(self):
return (
tf.TensorShape([32, 32, 3]),
tf.TensorShape([])) if self._batch == 0 else (
tf.TensorShape([None, 32, 32, 3]),
tf.TensorShape([None]))
@property
def output_types(self):
return dtypes.uint8, dtypes.uint8
| 33.748663 | 139 | 0.661068 | 825 | 6,311 | 4.861818 | 0.180606 | 0.040389 | 0.024931 | 0.017951 | 0.73822 | 0.732984 | 0.719272 | 0.717776 | 0.715782 | 0.715782 | 0 | 0.027212 | 0.213912 | 6,311 | 186 | 140 | 33.930108 | 0.781294 | 0.255902 | 0 | 0.722689 | 0 | 0 | 0.018823 | 0.004596 | 0 | 0 | 0 | 0 | 0 | 1 | 0.168067 | false | 0 | 0.058824 | 0.117647 | 0.411765 | 0.008403 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 7 |
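`CIFAR10Dataset` above selects archive members with filter strings: the five training shards `data_batch_1.bin` through `data_batch_5.bin` inside the tar.gz, or `test_batch.bin` when `test=True`. The shard-name construction is plain Python and can be checked in isolation (only the filter strings themselves come from the file above; the helper name is illustrative):

```python
# Reproduces the filter-list logic from CIFAR10Dataset.__init__:
# test data uses one archive member, training data uses five shards.
def cifar10_filters(test=False):
    if test:
        return ["tar.gz:test_batch.bin"]
    return ["tar.gz:data_batch_" + str(i) + ".bin" for i in range(1, 6)]

print(cifar10_filters(test=True))  # ['tar.gz:test_batch.bin']
print(cifar10_filters()[0])        # tar.gz:data_batch_1.bin
print(len(cifar10_filters()))      # 5
```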
45dc68d2d8624068b100b88c6b5e2680a2bc35b9 | 37 | py | Python | CursoPython/Exemplos/Aula011.py | XiaoNaihe/Python | 5ba12ae8beff325b069d13210d34116373de2f5d | [
"MIT"
] | null | null | null | CursoPython/Exemplos/Aula011.py | XiaoNaihe/Python | 5ba12ae8beff325b069d13210d34116373de2f5d | [
"MIT"
] | null | null | null | CursoPython/Exemplos/Aula011.py | XiaoNaihe/Python | 5ba12ae8beff325b069d13210d34116373de2f5d | [
"MIT"
] | null | null | null | print('\033[7;33;44mOlá mundo\033[m') | 37 | 37 | 0.702703 | 8 | 37 | 3.25 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.305556 | 0.027027 | 37 | 1 | 37 | 37 | 0.416667 | 0 | 0 | 0 | 0 | 0 | 0.736842 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 8 |
b34241a7fd207d661af96d2961a764522407121f | 197 | py | Python | Attacks/__init__.py | ndiab/CRYPTO | 6691972dbfc33eae4c3c133a4df00de4e336b0cb | [
"Apache-2.0"
] | 26 | 2017-03-27T10:32:59.000Z | 2022-02-22T10:36:17.000Z | Attacks/__init__.py | ndiab/CRYPTO | 6691972dbfc33eae4c3c133a4df00de4e336b0cb | [
"Apache-2.0"
] | 1 | 2018-02-11T19:13:57.000Z | 2018-02-11T19:25:22.000Z | Attacks/__init__.py | ndiab/CRYPTO | 6691972dbfc33eae4c3c133a4df00de4e336b0cb | [
"Apache-2.0"
] | 4 | 2018-04-26T12:28:27.000Z | 2022-01-23T15:09:23.000Z | from Attacks.attacks import *
from Attacks.bellcore import *
from Attacks.broadcast import *
from Attacks.wiener import *
from Attacks.bsgs import *
from Attacks.vulnerable_keys_generator import *
| 28.142857 | 47 | 0.817259 | 26 | 197 | 6.115385 | 0.384615 | 0.415094 | 0.534591 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121827 | 197 | 6 | 48 | 32.833333 | 0.919075 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
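The Attacks/__init__.py above re-exports every public name from its submodules with `from ... import *`. Without an `__all__` list, a star import copies each module attribute that does not start with an underscore. That rule can be demonstrated with a synthetic module registered in `sys.modules` (the module and attribute names below are illustrative, not from the CRYPTO repo):

```python
import sys
import types

# Build a fake submodule the way Attacks.wiener would look once
# imported, then star-import it into a fresh namespace.
mod = types.ModuleType("attacks_demo")
mod.wiener_attack = lambda: "continued-fraction attack on small d"
mod._internal = "not exported"
sys.modules["attacks_demo"] = mod

ns = {}
exec("from attacks_demo import *", ns)

print("wiener_attack" in ns)  # True: public names are copied
print("_internal" in ns)      # False: underscore names are skipped
```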
2fe5b33208c152a4af3bc2ff2683b5680d7e06d3 | 83,179 | py | Python | src/genie/libs/parser/iosxr/tests/test_show_mpls.py | nujo/genieparser | 083b01efc46afc32abe1a1858729578beab50cd3 | [
"Apache-2.0"
] | 4 | 2020-08-20T12:23:12.000Z | 2021-06-15T14:10:02.000Z | src/genie/libs/parser/iosxr/tests/test_show_mpls.py | nujo/genieparser | 083b01efc46afc32abe1a1858729578beab50cd3 | [
"Apache-2.0"
] | 119 | 2020-07-10T22:37:51.000Z | 2021-03-18T02:40:05.000Z | src/genie/libs/parser/iosxr/tests/test_show_mpls.py | nujo/genieparser | 083b01efc46afc32abe1a1858729578beab50cd3 | [
"Apache-2.0"
] | 2 | 2020-07-10T15:33:42.000Z | 2021-04-05T09:48:56.000Z | # Python
import unittest
from unittest.mock import Mock
# ATS
from pyats.topology import Device
from pyats.topology import loader
# Metaparser
from genie.metaparser.util.exceptions import SchemaEmptyParserError, SchemaMissingKeyError
# iosxr show_mpls
from genie.libs.parser.iosxr.show_mpls import (ShowMplsLabelRange,
ShowMplsLdpNeighborBrief,
ShowMplsLabelTableDetail,
ShowMplsLabelTablePrivate,
ShowMplsInterfaces,
ShowMplsForwarding,
ShowMplsForwardingVrf,
ShowMplsLdpNeighbor,
ShowMplsLdpNeighborDetail)
# ==================================================
# Unit test for 'show mpls label range'
# ==================================================
class TestShowMplsLabelRange(unittest.TestCase):
'''Unit test for 'show mpls label range' '''
device = Device(name='aDevice')
empty_output = {'execute.return_value': ''}
golden_parsed_output = {
'range_for_dynamic_labels': {
'min_range': 24000,
'max_range': 1048575
},
}
golden_output = {'execute.return_value':'''
RP/0/RP0/CPU0:R3#show mpls label range
Thu Aug 29 5:24:12.183 UTC
Range for dynamic labels: Min/Max: 24000/1048575
'''}
def test_show_mpls_label_range_empty(self):
self.device = Mock(**self.empty_output)
obj = ShowMplsLabelRange(device=self.device)
with self.assertRaises(SchemaEmptyParserError):
parsed_output = obj.parse()
def test_show_mpls_label_range_golden(self):
self.maxDiff = None
self.device = Mock(**self.golden_output)
obj = ShowMplsLabelRange(device=self.device)
parsed_output = obj.parse()
self.assertEqual(parsed_output, self.golden_parsed_output)
# ==================================================
# Unit test for 'show mpls ldp neighbor'
# ==================================================
class TestShowMplsLdpNeighbor(unittest.TestCase):
'''Unit test for 'show mpls ldp neighbor' '''
empty_output = {'execute.return_value': ''}
maxDiff = None
golden_parsed_output = {
'vrf':{
'default':{
'peers':{
'10.16.0.2':{
'label_space_id':{
0:{
'tcp_connection': '10.16.0.2:646 - 10.16.0.9:38143',
'graceful_restart': 'No',
'session_holdtime': 180,
'state': 'Oper',
'msg_sent': 24710,
'msg_rcvd': 24702,
'neighbor': 'Downstream-Unsolicited',
'uptime': '2w0d',
'address_family':{
'ipv4':{
'ldp_discovery_sources': {
'interface':{
'GigabitEthernet0/0/0/0':{}
},
},
'address_bound': ['10.16.0.2', '10.16.27.2', '10.16.28.2', '10.16.29.2']
}
}
},
},
},
'10.16.0.7':{
'label_space_id':{
0:{
'tcp_connection': '10.16.0.7:646 - 10.16.0.9:19323',
'graceful_restart': 'No',
'session_holdtime': 180,
'state': 'Oper',
'msg_sent': 24664,
'msg_rcvd': 24686,
'neighbor': 'Downstream-Unsolicited',
'uptime': '2w0d',
'address_family':{
'ipv4':{
'ldp_discovery_sources': {
'interface':{
'GigabitEthernet0/0/0/1':{}
},
},
'address_bound': ['10.16.0.7', '10.16.27.7', '10.16.78.7', '10.16.79.7'],
}
}
},
},
},
},
}
}
}
golden_output = {'execute.return_value': '''
RP/0/RP0/CPU0:R9#show mpls ldp neighbor
Thu Jan 2 20:51:12.829 UTC
Peer LDP Identifier: 10.16.0.2:0
TCP connection: 10.16.0.2:646 - 10.16.0.9:38143
Graceful Restart: No
Session Holdtime: 180 sec
State: Oper; Msgs sent/rcvd: 24710/24702; Downstream-Unsolicited
Up time: 2w0d
LDP Discovery Sources:
IPv4: (1)
GigabitEthernet0/0/0/0
IPv6: (0)
Addresses bound to this peer:
IPv4: (4)
10.16.0.2 10.16.27.2 10.16.28.2 10.16.29.2
IPv6: (0)
Peer LDP Identifier: 10.16.0.7:0
TCP connection: 10.16.0.7:646 - 10.16.0.9:19323
Graceful Restart: No
Session Holdtime: 180 sec
State: Oper; Msgs sent/rcvd: 24664/24686; Downstream-Unsolicited
Up time: 2w0d
LDP Discovery Sources:
IPv4: (1)
GigabitEthernet0/0/0/1
IPv6: (0)
Addresses bound to this peer:
IPv4: (4)
10.16.0.7 10.16.27.7 10.16.78.7 10.16.79.7
IPv6: (0)
'''}
def test_empty(self):
self.dev = Mock(**self.empty_output)
obj = ShowMplsLdpNeighbor(device=self.dev)
with self.assertRaises(SchemaEmptyParserError):
parsed_output = obj.parse()
def test_show_mpls_ldp_neighbor_golden(self):
self.device = Mock(**self.golden_output)
obj = ShowMplsLdpNeighbor(device=self.device)
parsed_output = obj.parse()
self.assertEqual(parsed_output, self.golden_parsed_output)
# ==================================================
# Unit test for 'show mpls ldp neighbor detail'
# ==================================================
class TestShowMplsLdpNeighborDetail(unittest.TestCase):
'''Unit test for 'show mpls ldp neighbor detail' '''
empty_output = {'execute.return_value': ''}
maxDiff = None
golden_parsed_output1 = {
'vrf': {
'default':{
'peers':{
'192.168.70.6':{
'label_space_id':{
0:{
'tcp_connection': '192.168.70.6:15332 - 192.168.1.1:646',
'graceful_restart': 'Yes (Reconnect Timeout: 120 sec, Recovery: 180 sec)',
'session_holdtime': 180,
'state': 'Oper',
'msg_sent': 851,
'msg_rcvd': 232,
'neighbor': 'Downstream-Unsolicited',
'uptime': '00:02:44',
'address_family':{
'ipv4':{
'ldp_discovery_sources': {
'interface':{
'Bundle-Ether1.3':{}
},
'targeted_hello':{
'192.168.1.1':{
'192.168.70.6':{
'active': False,
},
},
}
},
'address_bound': ['10.10.10.1', '10.126.249.223', '10.126.249.224', '10.76.23.2',
'10.219.1.2', '10.19.1.2', '10.76.1.2', '10.135.1.2',
'10.151.1.2', '192.168.106.1', '192.168.205.1', '192.168.51.1',
'192.168.196.1', '192.168.171.1', '192.168.70.6'],
}
},
'peer_holdtime': 180,
'ka_interval': 60,
'peer_state': 'Estab',
'nsr': 'Operational',
'clients': 'Session Protection',
'session_protection':{
'session_state': 'Ready',
'duration_int': 86400,
},
'capabilities': {
'sent': {
'0x508': 'MP: Point-to-Multipoint (P2MP)',
'0x509': 'MP: Multipoint-to-Multipoint (MP2MP)',
'0x50b': 'Typed Wildcard FEC',
},
'received': {
'0x508': 'MP: Point-to-Multipoint (P2MP)',
'0x509': 'MP: Multipoint-to-Multipoint (MP2MP)',
'0x50b': 'Typed Wildcard FEC',
},
},
},
},
},
},
}
}
}
golden_output1 = { 'execute.return_value' : '''
RP/0/RP0/CPU0:R2#show mpls ldp neighbor detail
Peer LDP Identifier: 192.168.70.6:0
TCP connection: 192.168.70.6:15332 - 192.168.1.1:646
Graceful Restart: Yes (Reconnect Timeout: 120 sec, Recovery: 180 sec)
Session Holdtime: 180 sec
State: Oper; Msgs sent/rcvd: 851/232; Downstream-Unsolicited
Up time: 00:02:44
LDP Discovery Sources:
IPv4: (2)
Bundle-Ether1.3
Targeted Hello (192.168.1.1 -> 192.168.70.6, active/passive)
IPv6: (0)
Addresses bound to this peer:
IPv4: (15)
10.10.10.1 10.126.249.223 10.126.249.224 10.76.23.2
10.219.1.2 10.19.1.2 10.76.1.2 10.135.1.2
10.151.1.2 192.168.106.1 192.168.205.1 192.168.51.1
192.168.196.1 192.168.171.1 192.168.70.6
IPv6: (0)
Peer holdtime: 180 sec; KA interval: 60 sec; Peer state: Estab
NSR: Operational
Clients: Session Protection
Session Protection:
Enabled, state: Ready
Duration: 86400 sec
Capabilities:
Sent:
0x508 (MP: Point-to-Multipoint (P2MP))
0x509 (MP: Multipoint-to-Multipoint (MP2MP))
0x50b (Typed Wildcard FEC)
Received:
0x508 (MP: Point-to-Multipoint (P2MP))
0x509 (MP: Multipoint-to-Multipoint (MP2MP))
0x50b (Typed Wildcard FEC)
'''
}
golden_parsed_output2 = {
'vrf': {
'all':{
'peers':{
'192.168.70.6':{
'label_space_id':{
0:{
'tcp_connection': '192.168.70.6:15332 - 192.168.1.1:646',
'graceful_restart': 'Yes (Reconnect Timeout: 120 sec, Recovery: 180 sec)',
'session_holdtime': 180,
'state': 'Oper',
'msg_sent': 851,
'msg_rcvd': 232,
'neighbor': 'Downstream-Unsolicited',
'uptime': '00:02:44',
'address_family':{
'ipv4':{
'ldp_discovery_sources': {
'interface':{
'Bundle-Ether1.3':{}
},
'targeted_hello':{
'192.168.1.1':{
'192.168.70.6':{
'active': False,
},
},
}
},
'address_bound': ['10.10.10.1', '10.126.249.223', '10.126.249.224', '10.76.23.2',
'10.219.1.2', '10.19.1.2', '10.76.1.2', '10.135.1.2',
'10.151.1.2', '192.168.106.1', '192.168.205.1', '192.168.51.1',
'192.168.196.1', '192.168.171.1', '192.168.70.6'],
}
},
'peer_holdtime': 180,
'ka_interval': 60,
'peer_state': 'Estab',
'nsr': 'Operational',
'clients': 'Session Protection',
'session_protection':{
'session_state': 'Ready',
'duration_int': 86400,
},
'capabilities': {
'sent': {
'0x508': 'MP: Point-to-Multipoint (P2MP)',
'0x509': 'MP: Multipoint-to-Multipoint (MP2MP)',
'0x50b': 'Typed Wildcard FEC',
},
'received': {
'0x508': 'MP: Point-to-Multipoint (P2MP)',
'0x509': 'MP: Multipoint-to-Multipoint (MP2MP)',
'0x50b': 'Typed Wildcard FEC',
},
},
},
},
},
},
}
}
}
golden_output2 = { 'execute.return_value' : '''
RP/0/RP0/CPU0:R2#show mpls ldp vrf all neighbor detail
Peer LDP Identifier: 192.168.70.6:0
TCP connection: 192.168.70.6:15332 - 192.168.1.1:646
Graceful Restart: Yes (Reconnect Timeout: 120 sec, Recovery: 180 sec)
Session Holdtime: 180 sec
State: Oper; Msgs sent/rcvd: 851/232; Downstream-Unsolicited
Up time: 00:02:44
LDP Discovery Sources:
IPv4: (2)
Bundle-Ether1.3
Targeted Hello (192.168.1.1 -> 192.168.70.6, active/passive)
IPv6: (0)
Addresses bound to this peer:
IPv4: (15)
10.10.10.1 10.126.249.223 10.126.249.224 10.76.23.2
10.219.1.2 10.19.1.2 10.76.1.2 10.135.1.2
10.151.1.2 192.168.106.1 192.168.205.1 192.168.51.1
192.168.196.1 192.168.171.1 192.168.70.6
IPv6: (0)
Peer holdtime: 180 sec; KA interval: 60 sec; Peer state: Estab
NSR: Operational
Clients: Session Protection
Session Protection:
Enabled, state: Ready
Duration: 86400 sec
Capabilities:
Sent:
0x508 (MP: Point-to-Multipoint (P2MP))
0x509 (MP: Multipoint-to-Multipoint (MP2MP))
0x50b (Typed Wildcard FEC)
Received:
0x508 (MP: Point-to-Multipoint (P2MP))
0x509 (MP: Multipoint-to-Multipoint (MP2MP))
0x50b (Typed Wildcard FEC)
'''
}
golden_parsed_output3 = {
'vrf': {
'default':{
'peers':{
'10.16.0.7':{
'label_space_id':{
0:{
'tcp_connection': '10.16.0.7:646 - 10.16.0.9:19323',
'graceful_restart': 'No',
'session_holdtime': 180,
'state': 'Oper',
'msg_sent': 24671,
'msg_rcvd': 24693,
'neighbor': 'Downstream-Unsolicited',
'uptime': '2w1d',
'address_family':{
'ipv4':{
'ldp_discovery_sources': {
'interface':{
'GigabitEthernet0/0/0/1':{}
},
},
'address_bound': ['10.16.0.7', '10.16.27.7', '10.16.78.7', '10.16.79.7'],
}
},
'peer_holdtime': 180,
'ka_interval': 60,
'peer_state': 'Estab',
'nsr': 'Disabled',
'capabilities': {
'sent': {
'0x508': 'MP: Point-to-Multipoint (P2MP)',
'0x509': 'MP: Multipoint-to-Multipoint (MP2MP)',
'0x50b': 'Typed Wildcard FEC',
},
'received': {
'0x508': 'MP: Point-to-Multipoint (P2MP)',
'0x509': 'MP: Multipoint-to-Multipoint (MP2MP)',
'0x50b': 'Typed Wildcard FEC',
},
},
},
},
},
},
}
}
}
golden_output3 = {'execute.return_value' : '''
RP/0/RP0/CPU0:R9#show mpls ldp neighbor GigabitEthernet0/0/0/1 detail
Thu Jan 2 20:56:36.689 UTC
Peer LDP Identifier: 10.16.0.7:0
TCP connection: 10.16.0.7:646 - 10.16.0.9:19323
Graceful Restart: No
Session Holdtime: 180 sec
State: Oper; Msgs sent/rcvd: 24671/24693; Downstream-Unsolicited
Up time: 2w1d
LDP Discovery Sources:
IPv4: (1)
GigabitEthernet0/0/0/1
IPv6: (0)
Addresses bound to this peer:
IPv4: (4)
10.16.0.7 10.16.27.7 10.16.78.7 10.16.79.7
IPv6: (0)
Peer holdtime: 180 sec; KA interval: 60 sec; Peer state: Estab
NSR: Disabled
Capabilities:
Sent:
0x508 (MP: Point-to-Multipoint (P2MP))
0x509 (MP: Multipoint-to-Multipoint (MP2MP))
0x50b (Typed Wildcard FEC)
Received:
0x508 (MP: Point-to-Multipoint (P2MP))
0x509 (MP: Multipoint-to-Multipoint (MP2MP))
0x50b (Typed Wildcard FEC)
'''}
golden_parsed_output4 = {
'vrf': {
'Vpn1':{
'peers':{
'10.16.0.7':{
'label_space_id':{
0:{
'tcp_connection': '10.16.0.7:646 - 10.16.0.9:19323',
'graceful_restart': 'No',
'session_holdtime': 180,
'state': 'Oper',
'msg_sent': 24671,
'msg_rcvd': 24693,
'neighbor': 'Downstream-Unsolicited',
'uptime': '2w1d',
'address_family':{
'ipv4':{
'ldp_discovery_sources': {
'interface':{
'GigabitEthernet0/0/0/1':{}
},
},
'address_bound': ['10.16.0.7', '10.16.27.7', '10.16.78.7', '10.16.79.7'],
}
},
'peer_holdtime': 180,
'ka_interval': 60,
'peer_state': 'Estab',
'nsr': 'Disabled',
'capabilities': {
'sent': {
'0x508': 'MP: Point-to-Multipoint (P2MP)',
'0x509': 'MP: Multipoint-to-Multipoint (MP2MP)',
'0x50b': 'Typed Wildcard FEC',
},
'received': {
'0x508': 'MP: Point-to-Multipoint (P2MP)',
'0x509': 'MP: Multipoint-to-Multipoint (MP2MP)',
'0x50b': 'Typed Wildcard FEC',
},
},
},
},
},
},
}
}
}
golden_output4 = {'execute.return_value' : '''
RP/0/RP0/CPU0:R9#show mpls ldp neighbor vrf Vpn1 GigabitEthernet0/0/0/1 detail
Thu Jan 2 20:56:36.689 UTC
Peer LDP Identifier: 10.16.0.7:0
TCP connection: 10.16.0.7:646 - 10.16.0.9:19323
Graceful Restart: No
Session Holdtime: 180 sec
State: Oper; Msgs sent/rcvd: 24671/24693; Downstream-Unsolicited
Up time: 2w1d
LDP Discovery Sources:
IPv4: (1)
GigabitEthernet0/0/0/1
IPv6: (0)
Addresses bound to this peer:
IPv4: (4)
10.16.0.7 10.16.27.7 10.16.78.7 10.16.79.7
IPv6: (0)
Peer holdtime: 180 sec; KA interval: 60 sec; Peer state: Estab
NSR: Disabled
Capabilities:
Sent:
0x508 (MP: Point-to-Multipoint (P2MP))
0x509 (MP: Multipoint-to-Multipoint (MP2MP))
0x50b (Typed Wildcard FEC)
Received:
0x508 (MP: Point-to-Multipoint (P2MP))
0x509 (MP: Multipoint-to-Multipoint (MP2MP))
0x50b (Typed Wildcard FEC)
'''}
def test_empty(self):
self.dev = Mock(**self.empty_output)
obj = ShowMplsLdpNeighborDetail(device=self.dev)
with self.assertRaises(SchemaEmptyParserError):
parsed_output = obj.parse()
def test_show_mpls_ldp_neighbor_detail_golden1(self):
self.device = Mock(**self.golden_output1)
obj = ShowMplsLdpNeighborDetail(device=self.device)
parsed_output = obj.parse()
self.assertEqual(parsed_output, self.golden_parsed_output1)
def test_show_mpls_ldp_neighbor_detail_golden2(self):
self.device = Mock(**self.golden_output2)
obj = ShowMplsLdpNeighborDetail(device=self.device)
parsed_output = obj.parse(vrf='all')
self.assertEqual(parsed_output, self.golden_parsed_output2)
def test_show_mpls_ldp_neighbor_detail_golden3(self):
self.device = Mock(**self.golden_output3)
obj = ShowMplsLdpNeighborDetail(device=self.device)
parsed_output = obj.parse(interface='GigabitEthernet0/0/0/1')
self.assertEqual(parsed_output, self.golden_parsed_output3)
def test_show_mpls_ldp_neighbor_detail_golden4(self):
self.device = Mock(**self.golden_output4)
obj = ShowMplsLdpNeighborDetail(device=self.device)
parsed_output = obj.parse(vrf='Vpn1', interface='GigabitEthernet0/0/0/1')
self.assertEqual(parsed_output, self.golden_parsed_output4)
# ==================================================
# Unit test for 'show mpls ldp neighbor brief'
# ==================================================
class TestShowMplsLdpNeighborBrief(unittest.TestCase):
'''Unit test for 'show mpls ldp neighbor brief' '''
device = Device(name='aDevice')
empty_output = {'execute.return_value': ''}
golden_parsed_output1 = {
'peer': {
'10.16.2.2:0': {
'addresses': {
'address': 5},
'discovery': {
'discovery': 2},
'gr': 'N',
'up_time': '00:01:02'},
'10.36.3.3:0': {
'addresses': {
'address': 8},
'discovery': {
'discovery': 3},
'gr': 'Y',
'up_time': '00:01:04'},
'10.64.4.4:0': {
'addresses': {
'ipv4': 3,
'ipv6': 0},
'discovery': {
'ipv4': 1,
'ipv6': 0},
'gr': 'Y',
'labels': {
'ipv4': 5,
'ipv6': 0},
'nsr': 'N',
'up_time': '1d00h'},
'10.49.46.2:0': {
'addresses': {
'ipv4': 3,
'ipv6': 3},
'discovery': {
'ipv4': 1,
'ipv6': 1},
'gr': 'N',
'labels': {
'ipv4': 5,
'ipv6': 5},
'nsr': 'N',
'up_time': '1d00h'},
'10.49.46.46:0': {
'addresses': {
'ipv4': 4,
'ipv6': 4},
'discovery': {
'ipv4': 2,
'ipv6': 2},
'gr': 'Y',
'labels': {
'ipv4': 5,
'ipv6': 5},
'nsr': 'N',
'up_time': '1d00h'},
'10.144.6.1:0': {
'addresses': {
'ipv4': 0,
'ipv6': 2},
'discovery': {
'ipv4': 0,
'ipv6': 1},
'gr': 'Y',
'labels': {
'ipv4': 0,
'ipv6': 5},
'nsr': 'N',
'up_time': '23:25:50'}}}
golden_output1 = {'execute.return_value': '''
RP/0/0/CPU0:router# show mpls ldp neighbor brief
Peer GR Up Time Discovery Address
----------------- -- --------------- --------- -------
10.36.3.3:0 Y 00:01:04 3 8
10.16.2.2:0 N 00:01:02 2 5
Peer GR NSR Up Time Discovery Addresses Labels
ipv4 ipv6 ipv4 ipv6 ipv4 ipv6
----------------- -- --- ---------- ---------- ---------- ------------
10.64.4.4:0 Y N 1d00h 1 0 3 0 5 0
10.49.46.2:0 N N 1d00h 1 1 3 3 5 5
10.49.46.46:0 Y N 1d00h 2 2 4 4 5 5
10.144.6.1:0 Y N 23:25:50 0 1 0 2 0 5
'''}
golden_parsed_output2 = {
'peer': {
'10.36.3.3:0': {
'gr': 'Y',
'up_time': '00:01:04',
'discovery': {
'discovery': 3},
'addresses': {
'address': 8}},
'10.16.2.2:0': {
'gr': 'N',
'up_time': '00:01:02',
'discovery': {'discovery': 2},
'addresses': {'address': 5}}}}
golden_output2 = {'execute.return_value': '''
RP/0/RP0/CPU0:router# show mpls ldp neighbor brief
Peer GR Up Time Discovery Address
----------------- -- --------------- --------- -------
10.36.3.3:0 Y 00:01:04 3 8
10.16.2.2:0 N 00:01:02 2 5
'''}
golden_parsed_output3 = {
'peer': {
'10.4.1.1:0': {
'addresses': {
'ipv4': 9,
'ipv6': 0},
'discovery': {
'ipv4': 1,
'ipv6': 0},
'gr': 'N',
'labels': {
'ipv4': 15,
'ipv6': 0},
'nsr': 'N',
'up_time': '00:08:57'}}}
golden_output3 = {'execute.return_value': '''
+++ R2_xr: executing command 'show mpls ldp neighbor brief' +++
show mpls ldp neighbor brief
Wed Apr 17 16:45:04.410 UTC
Peer GR NSR Up Time Discovery Addresses Labels
ipv4 ipv6 ipv4 ipv6 ipv4 ipv6
----------------- -- --- ---------- ---------- ---------- ------------
10.4.1.1:0 N N 00:08:57 1 0 9 0 15 0
'''}
def test_show_mpls_ldp_neighbor_brief_empty(self):
self.device = Mock(**self.empty_output)
obj = ShowMplsLdpNeighborBrief(device=self.device)
with self.assertRaises(SchemaEmptyParserError):
parsed_output = obj.parse()
def test_show_mpls_ldp_neighbor_brief_golden1(self):
self.maxDiff = None
self.device = Mock(**self.golden_output1)
obj = ShowMplsLdpNeighborBrief(device=self.device)
parsed_output = obj.parse()
self.assertEqual(parsed_output, self.golden_parsed_output1)
def test_show_mpls_ldp_neighbor_brief_golden2(self):
self.maxDiff = None
self.device = Mock(**self.golden_output2)
obj = ShowMplsLdpNeighborBrief(device=self.device)
parsed_output = obj.parse()
self.assertEqual(parsed_output, self.golden_parsed_output2)
def test_show_mpls_ldp_neighbor_brief_golden3(self):
self.maxDiff = None
self.device = Mock(**self.golden_output3)
obj = ShowMplsLdpNeighborBrief(device=self.device)
parsed_output = obj.parse()
self.assertEqual(parsed_output, self.golden_parsed_output3)
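All of these tests rely on the same mocking pattern: unpacking `{'execute.return_value': ...}` into `Mock` configures the mock device so that `device.execute()` returns the canned CLI output. A minimal, self-contained illustration using only the standard library (the canned string here is made up for the example):

```python
from unittest.mock import Mock

# 'execute.return_value' is unittest.mock's attribute-path syntax: it
# configures the child mock `device.execute` so that calling it returns
# the canned string, mimicking a real CLI session.
canned = {'execute.return_value': 'Interface   LDP   Enabled\n'}
device = Mock(**canned)

# Arguments are accepted but ignored; the configured return value wins.
output = device.execute('show mpls interfaces')
assert output == 'Interface   LDP   Enabled\n'
```

The parser classes under test only ever call `device.execute(...)`, which is why this one-key dictionary is enough to drive a full parse.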
# ==================================================
# Unit test for 'show mpls label table detail'
# ==================================================
class TestShowMplsLabelTableDetail(unittest.TestCase):
'''Unit test for show mpls label table detail'''
device = Device(name='aDevice')
empty_output = {'execute.return_value': ''}
golden_parsed_output1 = {
'table':{
0:{
'label':{
0:{
'owner':{
'LSD(A)':{
'state':'InUse',
'rewrite':'Yes'
},
},
},
1:{
'owner':{
'LSD(A)':{
'state':'InUse',
'rewrite':'Yes'
},
},
},
2:{
'owner':{
'LSD(A)':{
'state':'InUse',
'rewrite':'Yes'
},
},
},
13:{
'owner':{
'LSD(A)':{
'state':'InUse',
'rewrite':'Yes'
},
},
},
16000:{
'owner':{
'ISIS(A):SR':{
'state':'InUse',
'rewrite':'No'
},
},
'label_type':{
'Lbl-blk SRGB':{
'vers':0,
'start_label':16000,
'size':8000
},
},
},
24000:{
'owner':{
'ISIS(A):SR':{
'state':'InUse',
'rewrite':'Yes'
},
},
'label_type':{
'SR Adj Segment IPv4':{
'vers':0,
'index':0,
'type':0,
'interface':'Gi0/0/0/1',
'nh':'10.1.2.2'
},
},
},
24001:{
'owner':{
'ISIS(A):SR':{
'state':'InUse',
'rewrite':'Yes'
},
},
'label_type':{
'SR Adj Segment IPv4':{
'vers':0,
'index':2,
'type':0,
'interface':'Gi0/0/0/1',
'nh':'10.1.2.2'
},
},
},
24002:{
'owner':{
'ISIS(A):SR':{
'state':'InUse',
'rewrite':'Yes'
},
},
'label_type':{
'SR Adj Segment IPv4':{
'vers':0,
'index':1,
'type':0,
'interface':'Gi0/0/0/1',
'nh':'10.1.2.2'
},
},
},
24003:{
'owner':{
'ISIS(A):SR':{
'state':'InUse',
'rewrite':'Yes'
}
},
'label_type':{
'SR Adj Segment IPv4':{
'vers':0,
'index':3,
'type':0,
'interface':'Gi0/0/0/1',
'nh':'10.1.2.2'
},
},
},
},
},
},
}
golden_output1 = {'execute.return_value': '''
RP/0/RP0/CPU0:iosxrv9000-1#show mpls label table detail
Mon Sep 30 13:26:56.133 EDT
Table Label Owner State Rewrite
----- ------- ------------------------------- ------ -------
0 0 LSD(A) InUse Yes
0 1 LSD(A) InUse Yes
0 2 LSD(A) InUse Yes
0 13 LSD(A) InUse Yes
0 16000 ISIS(A):SR InUse No
(Lbl-blk SRGB, vers:0, (start_label=16000, size=8000)
0 24000 ISIS(A):SR InUse Yes
(SR Adj Segment IPv4, vers:0, index=0, type=0, intf=Gi0/0/0/1, nh=10.1.2.2)
0 24001 ISIS(A):SR InUse Yes
(SR Adj Segment IPv4, vers:0, index=2, type=0, intf=Gi0/0/0/1, nh=10.1.2.2)
0 24002 ISIS(A):SR InUse Yes
(SR Adj Segment IPv4, vers:0, index=1, type=0, intf=Gi0/0/0/1, nh=10.1.2.2)
0 24003 ISIS(A):SR InUse Yes
(SR Adj Segment IPv4, vers:0, index=3, type=0, intf=Gi0/0/0/1, nh=10.1.2.2)
'''}
golden_parsed_output2 = {
'table':{
0:{
'label':{
0:{
'owner':{
'LSD(A)':{
'state':'InUse',
'rewrite':'Yes'
}
}
},
1:{
'owner':{
'LSD(A)':{
'state':'InUse',
'rewrite':'Yes'
}
}
},
2:{
'owner':{
'LSD(A)':{
'state':'InUse',
'rewrite':'Yes'
}
}
},
13:{
'owner':{
'LSD(A)':{
'state':'InUse',
'rewrite':'Yes'
}
}
},
15000:{
'owner':{
'LSD(A)':{
'state':'InUse',
'rewrite':'No'
}
},
'label_type':{
'Lbl-blk SRLB':{
'vers':0,
'start_label':15000,
'size':1000,
'app_notify':0
}
}
},
16000:{
'owner':{
'ISIS(A):SR':{
'state':'InUse',
'rewrite':'No'
}
},
'label_type':{
'Lbl-blk SRGB':{
'vers':0,
'start_label':16000,
'size':7000
}
}
},
24000:{
'owner':{
'ISIS(A):SR':{
'state':'InUse',
'rewrite':'Yes'
}
},
'label_type':{
'SR Adj Segment IPv4':{
'vers':0,
'index':0,
'type':0,
'interface':'Gi0/0/0/0',
'nh':'10.1.3.1'
}
}
},
24001:{
'owner':{
'ISIS(A):SR':{
'state':'InUse',
'rewrite':'Yes'
}
},
'label_type':{
'SR Adj Segment IPv4':{
'vers':0,
'index':2,
'type':0,
'interface':'Gi0/0/0/0',
'nh':'10.1.3.1'
}
}
},
24002:{
'owner':{
'ISIS(A):SR':{
'state':'InUse',
'rewrite':'Yes'
}
},
'label_type':{
'SR Adj Segment IPv4':{
'vers':0,
'index':1,
'type':0,
'interface':'Gi0/0/0/0',
'nh':'10.1.3.1'
}
}
},
24003:{
'owner':{
'ISIS(A):SR':{
'state':'InUse',
'rewrite':'Yes'
}
},
'label_type':{
'SR Adj Segment IPv4':{
'vers':0,
'index':3,
'type':0,
'interface':'Gi0/0/0/0',
'nh':'10.1.3.1'
}
}
},
24004:{
'owner':{
'ISIS(A):SR':{
'state':'InUse',
'rewrite':'Yes'
}
},
'label_type':{
'SR Adj Segment IPv4':{
'vers':0,
'index':0,
'type':0,
'interface':'Gi0/0/0/1',
'nh':'10.3.4.4'
}
}
},
24005:{
'owner':{
'ISIS(A):SR':{
'state':'InUse',
'rewrite':'Yes'
}
},
'label_type':{
'SR Adj Segment IPv4':{
'vers':0,
'index':2,
'type':0,
'interface':'Gi0/0/0/1',
'nh':'10.3.4.4'
}
}
},
24006:{
'owner':{
'ISIS(A):SR':{
'state':'InUse',
'rewrite':'Yes'
}
},
'label_type':{
'SR Adj Segment IPv4':{
'vers':0,
'index':1,
'type':0,
'interface':'Gi0/0/0/1',
'nh':'10.3.4.4'
}
}
},
24007:{
'owner':{
'ISIS(A):SR':{
'state':'InUse',
'rewrite':'Yes'
}
},
'label_type':{
'SR Adj Segment IPv4':{
'vers':0,
'index':3,
'type':0,
'interface':'Gi0/0/0/1',
'nh':'10.3.4.4'
}
}
}
}
}
}
}
golden_output2 = {'execute.return_value': '''
RP/0/RP0/CPU0:R3#show mpls label table detail
Thu Aug 29 15:33:47.761 UTC
Table Label Owner State Rewrite
----- ------- ------------------------------- ------ -------
0 0 LSD(A) InUse Yes
0 1 LSD(A) InUse Yes
0 2 LSD(A) InUse Yes
0 13 LSD(A) InUse Yes
0 15000 LSD(A) InUse No
(Lbl-blk SRLB, vers:0, (start_label=15000, size=1000, app_notify=0)
0 16000 ISIS(A):SR InUse No
(Lbl-blk SRGB, vers:0, (start_label=16000, size=7000)
0 24000 ISIS(A):SR InUse Yes
(SR Adj Segment IPv4, vers:0, index=0, type=0, intf=Gi0/0/0/0, nh=10.1.3.1)
0 24001 ISIS(A):SR InUse Yes
(SR Adj Segment IPv4, vers:0, index=2, type=0, intf=Gi0/0/0/0, nh=10.1.3.1)
0 24002 ISIS(A):SR InUse Yes
(SR Adj Segment IPv4, vers:0, index=1, type=0, intf=Gi0/0/0/0, nh=10.1.3.1)
0 24003 ISIS(A):SR InUse Yes
(SR Adj Segment IPv4, vers:0, index=3, type=0, intf=Gi0/0/0/0, nh=10.1.3.1)
0 24004 ISIS(A):SR InUse Yes
(SR Adj Segment IPv4, vers:0, index=0, type=0, intf=Gi0/0/0/1, nh=10.3.4.4)
0 24005 ISIS(A):SR InUse Yes
(SR Adj Segment IPv4, vers:0, index=2, type=0, intf=Gi0/0/0/1, nh=10.3.4.4)
0 24006 ISIS(A):SR InUse Yes
(SR Adj Segment IPv4, vers:0, index=1, type=0, intf=Gi0/0/0/1, nh=10.3.4.4)
0 24007 ISIS(A):SR InUse Yes
(SR Adj Segment IPv4, vers:0, index=3, type=0, intf=Gi0/0/0/1, nh=10.3.4.4)
'''}
golden_parsed_output3 = {
'table':{
0:{
'label':{
0:{
'owner':{
'LSD':{
'state':'InUse',
'rewrite':'Yes'
}
}
},
1:{
'owner':{
'LSD':{
'state':'InUse',
'rewrite':'Yes'
}
}
},
2:{
'owner':{
'LSD':{
'state':'InUse',
'rewrite':'Yes'
}
}
},
13:{
'owner':{
'LSD':{
'state':'InUse',
'rewrite':'Yes'
}
}
},
44:{
'owner':{
'Static':{
'state':'InUse',
'rewrite':'No'
}
},
'label_type':{
'IPv4':{
'vers':0,
'default':True,
'prefix':'10.16.2.2/3'
}
}
},
1999:{
'owner':{
'Static':{
'state':'InUse',
'rewrite':'No'
}
},
'label_type':{
'IPv4':{
'vers':0,
'default':True,
'prefix':'10.4.1.1/24'
}
}
},
16001:{
'owner':{
'LDP:lsd_test_ut':{
'state':'InUse',
'rewrite':'No'
},
'Static:lsd_test_ut':{
'state':'InUse',
'rewrite':'No'
}
},
'label_type':{
'IPv4':{
'vers':0,
'default':False,
'prefix':'10.106.10.10/15'
}
}
},
19990:{
'owner':{
'Static':{
'state':'InUse',
'rewrite':'No'
}
},
'label_type':{
'IPv4':{
'vers':0,
'default':True,
'prefix':'10.4.1.4/24'
}
}
},
19999:{
'owner':{
'Static':{
'state':'InUse',
'rewrite':'No'
}
},
'label_type':{
'IPv4':{
'vers':0,
'default':True,
'prefix':'10.4.1.3/24'
}
}
}
}
}
}
}
golden_output3 = {'execute.return_value':'''
RP/0/0/CPU0:Apr 30 16:30:55.494 : mpls_lsd[276]: app_bit:40 app_bit_pnd:0
show mpls label table detail
Tue Apr 30 16:31:05.102 EDT
Table Label Owner State Rewrite
----- ------- ---------------------------- ------ -------
0 0 LSD InUse Yes
0 1 LSD InUse Yes
0 2 LSD InUse Yes
0 13 LSD InUse Yes
0 44 Static InUse No
(IPv4, vers:0, default, 10.16.2.2/3)
0 1999 Static InUse No
(IPv4, vers:0, default, 10.4.1.1/24)
0 16001 LDP:lsd_test_ut InUse No
Static:lsd_test_ut InUse No
(IPv4, vers:0, , 10.106.10.10/15)
0 19990 Static InUse No
(IPv4, vers:0, default, 10.4.1.4/24)
0 19999 Static InUse No
(IPv4, vers:0, default, 10.4.1.3/24)
'''}
def test_show_mpls_label_table_detail_empty(self):
self.device = Mock(**self.empty_output)
obj = ShowMplsLabelTableDetail(device=self.device)
with self.assertRaises(SchemaEmptyParserError):
parsed_output = obj.parse()
def test_show_mpls_label_table_detail_golden1(self):
self.maxDiff = None
self.device = Mock(**self.golden_output1)
obj = ShowMplsLabelTableDetail(device=self.device)
parsed_output = obj.parse()
self.assertEqual(parsed_output, self.golden_parsed_output1)
def test_show_mpls_label_table_detail_golden2(self):
self.maxDiff = None
self.device = Mock(**self.golden_output2)
obj = ShowMplsLabelTableDetail(device=self.device)
parsed_output = obj.parse()
self.assertEqual(parsed_output, self.golden_parsed_output2)
def test_show_mpls_label_table_detail_golden3(self):
self.maxDiff = None
self.device = Mock(**self.golden_output3)
obj = ShowMplsLabelTableDetail(device=self.device)
parsed_output = obj.parse()
self.assertEqual(parsed_output, self.golden_parsed_output3)
# ==================================================
# Unit test for 'show mpls label table private'
# ==================================================
class TestShowMplsLabelTablePrivate(unittest.TestCase):
device = Device(name='aDevice')
empty_output = {'execute.return_value': ''}
golden_parsed_output = {
'table':{
0:{
'label':{
0:{
'owner':{
'LSD(A)':{
'state':'InUse',
'rewrite':'Yes'
}
}
},
1:{
'owner':{
'LSD(A)':{
'state':'InUse',
'rewrite':'Yes'
}
}
},
2:{
'owner':{
'LSD(A)':{
'state':'InUse',
'rewrite':'Yes'
}
}
},
13:{
'owner':{
'LSD(A)':{
'state':'InUse',
'rewrite':'Yes'
}
}
},
15000:{
'owner':{
'LSD(A)':{
'state':'InUse',
'rewrite':'No'
}
}
},
16000:{
'owner':{
'ISIS(A):SR':{
'state':'InUse',
'rewrite':'No'
}
}
},
24000:{
'owner':{
'ISIS(A):SR':{
'state':'InUse',
'rewrite':'Yes'
}
}
},
24001:{
'owner':{
'ISIS(A):SR':{
'state':'InUse',
'rewrite':'Yes'
}
}
}
}
}
}
}
golden_output = {'execute.return_value': '''
RP/0/RP0/CPU0:R3#show mpls label table private
Thu Aug 29 15:35:09.897 UTC
Table Label Owner State Rewrite
----- ------- ------------------------------- ------ -------
0 0 LSD(A) InUse Yes
0 1 LSD(A) InUse Yes
0 2 LSD(A) InUse Yes
0 13 LSD(A) InUse Yes
0 15000 LSD(A) InUse No
0 16000 ISIS(A):SR InUse No
0 24000 ISIS(A):SR InUse Yes
0 24001 ISIS(A):SR InUse Yes
'''}
def test_show_mpls_label_table_private_empty(self):
self.device = Mock(**self.empty_output)
obj = ShowMplsLabelTablePrivate(device=self.device)
with self.assertRaises(SchemaEmptyParserError):
parsed_output = obj.parse()
def test_show_mpls_label_table_private_golden(self):
self.maxDiff = None
self.device = Mock(**self.golden_output)
obj = ShowMplsLabelTablePrivate(device=self.device)
parsed_output = obj.parse()
self.assertEqual(parsed_output, self.golden_parsed_output)
# ==================================================
# Unit test for 'show mpls interfaces'
# ==================================================
class TestShowMplsInterfaces(unittest.TestCase):
device = Device(name='aDevice')
empty_output = {'execute.return_value': ''}
golden_parsed_output1 = {
'interfaces': {
'GigabitEthernet0/0/0/0': {
'enabled': 'Yes',
'ldp': 'No',
'static': 'No',
'tunnel': 'No',
},
'GigabitEthernet0/0/0/1': {
'enabled': 'Yes',
'ldp': 'No',
'static': 'No',
'tunnel': 'No',
},
},
}
golden_output1 = {'execute.return_value': '''
RP/0/RP0/CPU0:R3#show mpls interfaces
Thu Aug 29 15:32:55.170 UTC
Interface LDP Tunnel Static Enabled
-------------------------- -------- -------- -------- --------
GigabitEthernet0/0/0/0 No No No Yes
GigabitEthernet0/0/0/1 No No No Yes
'''}
golden_parsed_output2 = {
'interfaces': {
'GigabitEthernet0/0/0/0': {
'enabled': 'Yes',
'ldp': 'No',
'static': 'No',
'tunnel': 'No',
},
},
}
golden_output2 = {'execute.return_value': '''
RP/0/RP0/CPU0:R3#show mpls interfaces GigabitEthernet0/0/0/0
Thu Aug 29 15:32:55.170 UTC
Interface LDP Tunnel Static Enabled
-------------------------- -------- -------- -------- --------
GigabitEthernet0/0/0/0 No No No Yes
'''}
def test_empty(self):
self.device = Mock(**self.empty_output)
obj = ShowMplsInterfaces(device=self.device)
with self.assertRaises(SchemaEmptyParserError):
parsed_output = obj.parse()
def test_golden(self):
self.maxDiff = None
self.device = Mock(**self.golden_output1)
obj = ShowMplsInterfaces(device=self.device)
parsed_output = obj.parse()
self.assertEqual(parsed_output, self.golden_parsed_output1)
def test_golden2(self):
self.maxDiff = None
self.device = Mock(**self.golden_output2)
obj = ShowMplsInterfaces(device=self.device)
parsed_output = obj.parse(interface='GigabitEthernet0/0/0/0')
self.assertEqual(parsed_output, self.golden_parsed_output2)
# ==================================================
# Unit test for 'show mpls forwarding'
# ==================================================
class TestShowMplsForwarding(unittest.TestCase):
device = Device(name='aDevice')
empty_output = {'execute.return_value': ''}
golden_parsed_output1 = {
'local_label': {
'16001': {
'outgoing_label': {
'Pop': {
'prefix_or_id': {
'SR Pfx (idx 1)': {
'outgoing_interface': {
'GigabitEthernet0/0/0/0': {
'bytes_switched': 0,
'next_hop': '10.1.3.1',
},
},
},
},
},
},
},
'16002': {
'outgoing_label': {
'16002': {
'prefix_or_id': {
'SR Pfx (idx 2)': {
'outgoing_interface': {
'GigabitEthernet0/0/0/0': {
'bytes_switched': 0,
'next_hop': '10.1.3.1',
},
},
},
},
},
},
},
'16004': {
'outgoing_label': {
'Pop': {
'prefix_or_id': {
'SR Pfx (idx 4)': {
'outgoing_interface': {
'GigabitEthernet0/0/0/1': {
'bytes_switched': 0,
'next_hop': '10.3.4.4',
},
},
},
},
},
},
},
'16005': {
'outgoing_label': {
'16005': {
'prefix_or_id': {
'SR Pfx (idx 5)': {
'outgoing_interface': {
'GigabitEthernet0/0/0/0': {
'bytes_switched': 0,
'next_hop': '10.1.3.1',
},
},
},
},
},
},
},
'24000': {
'outgoing_label': {
'Pop': {
'prefix_or_id': {
'SR Adj (idx 0)': {
'outgoing_interface': {
'GigabitEthernet0/0/0/0': {
'bytes_switched': 0,
'next_hop': '10.1.3.1',
},
},
},
},
},
},
},
'24001': {
'outgoing_label': {
'Pop': {
'prefix_or_id': {
'SR Adj (idx 2)': {
'outgoing_interface': {
'GigabitEthernet0/0/0/0': {
'bytes_switched': 0,
'next_hop': '10.1.3.1',
},
},
},
},
},
},
},
'24002': {
'outgoing_label': {
'Pop': {
'prefix_or_id': {
'SR Adj (idx 1)': {
'outgoing_interface': {
'GigabitEthernet0/0/0/0': {
'bytes_switched': 0,
'next_hop': '10.1.3.1',
},
},
},
},
},
},
},
'24003': {
'outgoing_label': {
'Pop': {
'prefix_or_id': {
'SR Adj (idx 3)': {
'outgoing_interface': {
'GigabitEthernet0/0/0/0': {
'bytes_switched': 0,
'next_hop': '10.1.3.1',
},
},
},
},
},
},
},
'24004': {
'outgoing_label': {
'Pop': {
'prefix_or_id': {
'SR Adj (idx 0)': {
'outgoing_interface': {
'GigabitEthernet0/0/0/1': {
'bytes_switched': 0,
'next_hop': '10.3.4.4',
},
},
},
},
},
},
},
'24005': {
'outgoing_label': {
'Pop': {
'prefix_or_id': {
'SR Adj (idx 2)': {
'outgoing_interface': {
'GigabitEthernet0/0/0/1': {
'bytes_switched': 0,
'next_hop': '10.3.4.4',
},
},
},
},
},
},
},
'24006': {
'outgoing_label': {
'Pop': {
'prefix_or_id': {
'SR Adj (idx 1)': {
'outgoing_interface': {
'GigabitEthernet0/0/0/1': {
'bytes_switched': 0,
'next_hop': '10.3.4.4',
},
},
},
},
},
},
},
'24007': {
'outgoing_label': {
'Pop': {
'prefix_or_id': {
'SR Adj (idx 3)': {
'outgoing_interface': {
'GigabitEthernet0/0/0/1': {
'bytes_switched': 0,
'next_hop': '10.3.4.4',
},
},
},
},
},
},
},
},
}
golden_output1 = {'execute.return_value': '''
RP/0/RP0/CPU0:R3#show mpls forwarding
Thu Aug 29 15:29:39.411 UTC
Local Outgoing Prefix Outgoing Next Hop Bytes
Label Label or ID Interface Switched
------ ----------- ------------------ ------------ --------------- ------------
16001 Pop SR Pfx (idx 1) Gi0/0/0/0 10.1.3.1 0
16002 16002 SR Pfx (idx 2) Gi0/0/0/0 10.1.3.1 0
16004 Pop SR Pfx (idx 4) Gi0/0/0/1 10.3.4.4 0
16005 16005 SR Pfx (idx 5) Gi0/0/0/0 10.1.3.1 0
24000 Pop SR Adj (idx 0) Gi0/0/0/0 10.1.3.1 0
24001 Pop SR Adj (idx 2) Gi0/0/0/0 10.1.3.1 0
24002 Pop SR Adj (idx 1) Gi0/0/0/0 10.1.3.1 0
24003 Pop SR Adj (idx 3) Gi0/0/0/0 10.1.3.1 0
24004 Pop SR Adj (idx 0) Gi0/0/0/1 10.3.4.4 0
24005 Pop SR Adj (idx 2) Gi0/0/0/1 10.3.4.4 0
24006 Pop SR Adj (idx 1) Gi0/0/0/1 10.3.4.4 0
24007 Pop SR Adj (idx 3) Gi0/0/0/1 10.3.4.4 0
'''}
golden_parsed_output2 = {
"local_label": {
"24000": {
"outgoing_label": {
"Unlabelled": {
"prefix_or_id": {
"10.4.1.1/32": {
"outgoing_interface": {
"GigabitEthernet0/0/0/0.90": {
"next_hop": "10.12.90.1",
"bytes_switched": 9321675
}
}
}
}
}
}
},
"24002": {
"outgoing_label": {
"Pop": {
"prefix_or_id": {
"10.13.110.0/24": {
"outgoing_interface": {
"GigabitEthernet0/0/0/0.110": {
"next_hop": "10.12.110.1",
"bytes_switched": 0
}
}
}
}
}
}
},
"24003": {
"outgoing_label": {
"Unlabelled": {
"prefix_or_id": {
"10.13.115.0/24": {
"outgoing_interface": {
"GigabitEthernet0/0/0/0.115": {
"next_hop": "10.12.115.1",
"bytes_switched": 0
}
}
}
}
}
}
},
"24004": {
"outgoing_label": {
"Unlabelled": {
"prefix_or_id": {
"10.13.90.0/24": {
"outgoing_interface": {
"GigabitEthernet0/0/0/0.90": {
"next_hop": "10.12.90.1",
"bytes_switched": 0
},
"GigabitEthernet0/0/0/1.90": {
"next_hop": "10.23.90.3",
"bytes_switched": 0
}
}
}
}
}
}
},
"24005": {
"outgoing_label": {
"Unlabelled": {
"prefix_or_id": {
"2001:1:1:1::1/128[V]": {
"outgoing_interface": {
"GigabitEthernet0/0/0/0.390": {
"next_hop": "fe80::f816:3eff:fe53:2cc7",
"bytes_switched": 3928399
}
}
}
}
}
}
},
"24006": {
"outgoing_label": {
"Aggregate": {
"prefix_or_id": {
"VRF1: Per-VRF Aggr[V]": {
"outgoing_interface": {
"VRF1": {
"bytes_switched": 832
}
}
}
}
}
}
},
"24007": {
"outgoing_label": {
"Unlabelled": {
"prefix_or_id": {
"2001:3:3:3::3/128[V]": {
"outgoing_interface": {
"GigabitEthernet0/0/0/1.390": {
"next_hop": "fe80::5c00:ff:fe02:7",
"bytes_switched": 3762357
}
}
}
}
}
}
},
"24008": {
"outgoing_label": {
"Unlabelled": {
"prefix_or_id": {
"10.4.1.1/32[V]": {
"outgoing_interface": {
"GigabitEthernet0/0/0/0.390": {
"next_hop": "10.12.90.1",
"bytes_switched": 6281421
}
}
}
}
}
}
},
"24009": {
"outgoing_label": {
"Aggregate": {
"prefix_or_id": {
"VRF1: Per-VRF Aggr[V]": {
"outgoing_interface": {
"VRF1": {
"bytes_switched": 0
}
}
}
}
}
}
},
"24010": {
"outgoing_label": {
"Unlabelled": {
"prefix_or_id": {
"10.36.3.3/32[V]": {
"outgoing_interface": {
"GigabitEthernet0/0/0/1.390": {
"next_hop": "10.23.90.3",
"bytes_switched": 7608898
}
}
}
}
}
}
},
"24011": {
"outgoing_label": {
"Unlabelled": {
"prefix_or_id": {
"10.1.0.0/8": {
"outgoing_interface": {
"GigabitEthernet0/0/0/0.120": {
"next_hop": "10.12.120.1",
"bytes_switched": 0
}
}
}
}
}
}
},
"24012": {
"outgoing_label": {
"Unlabelled": {
"prefix_or_id": {
"10.13.120.0/24": {
"outgoing_interface": {
"GigabitEthernet0/0/0/0.120": {
"next_hop": "10.12.120.1",
"bytes_switched": 0
},
"GigabitEthernet0/0/0/1.120": {
"next_hop": "10.23.120.3",
"bytes_switched": 0
}
}
}
}
}
}
}
}
}
golden_output2 = {'execute.return_value': r'''
show mpls forwarding
Mon Dec 2 19:56:50.899 UTC
Local Outgoing Prefix Outgoing Next Hop Bytes
Label Label or ID Interface Switched
------ ----------- ------------------ ------------ --------------- ------------
24000 Unlabelled 10.4.1.1/32 Gi0/0/0/0.90 10.12.90.1 9321675
24002 Pop 10.13.110.0/24 Gi0/0/0/0.110 10.12.110.1 0
24003 Unlabelled 10.13.115.0/24 Gi0/0/0/0.115 10.12.115.1 0
24004 Unlabelled 10.13.90.0/24 Gi0/0/0/0.90 10.12.90.1 0
Unlabelled 10.13.90.0/24 Gi0/0/0/1.90 10.23.90.3 0
24005 Unlabelled 2001:1:1:1::1/128[V] \
Gi0/0/0/0.390 fe80::f816:3eff:fe53:2cc7 \
3928399
24006 Aggregate VRF1: Per-VRF Aggr[V] \
VRF1 832
24007 Unlabelled 2001:3:3:3::3/128[V] \
Gi0/0/0/1.390 fe80::5c00:ff:fe02:7 \
3762357
24008 Unlabelled 10.4.1.1/32[V] Gi0/0/0/0.390 10.12.90.1 6281421
24009 Aggregate VRF1: Per-VRF Aggr[V] \
VRF1 0
24010 Unlabelled 10.36.3.3/32[V] Gi0/0/0/1.390 10.23.90.3 7608898
24011 Unlabelled 10.1.0.0/8 Gi0/0/0/0.120 10.12.120.1 0
24012 Unlabelled 10.13.120.0/24 Gi0/0/0/0.120 10.12.120.1 0
Unlabelled 10.13.120.0/24 Gi0/0/0/1.120 10.23.120.3 0
'''}
def test_empty(self):
self.device = Mock(**self.empty_output)
obj = ShowMplsForwarding(device=self.device)
with self.assertRaises(SchemaEmptyParserError):
parsed_output = obj.parse()
def test_golden(self):
self.maxDiff = None
self.device = Mock(**self.golden_output1)
obj = ShowMplsForwarding(device=self.device)
parsed_output = obj.parse()
self.assertEqual(parsed_output, self.golden_parsed_output1)
def test_golden2(self):
self.maxDiff = None
self.device = Mock(**self.golden_output2)
obj = ShowMplsForwarding(device=self.device)
parsed_output = obj.parse()
self.assertEqual(parsed_output, self.golden_parsed_output2)
# ==================================================
# Unit test for 'show mpls forwarding vrf {vrf}'
# ==================================================
class TestShowMplsForwardingVrf(unittest.TestCase):
maxDiff = None
device = Device(name='aDevice')
empty_output = {'execute.return_value': ''}
golden_parsed_output1 = {
"vrf": {
"VRF1": {
"local_label": {
"24005": {
"outgoing_label": {
"Unlabelled": {
"prefix_or_id": {
"2001:1:1:1::1/128[V]": {
"outgoing_interface": {
"GigabitEthernet0/0/0/0.390": {
"next_hop": "fe80::f816:3eff:fe53:2cc7",
"bytes_switched": 4102415
}
}
}
}
}
}
},
"24006": {
"outgoing_label": {
"Aggregate": {
"prefix_or_id": {
"VRF1: Per-VRF Aggr[V]": {
"outgoing_interface": {
"VRF1": {
"bytes_switched": 832
}
}
}
}
}
}
},
"24007": {
"outgoing_label": {
"Unlabelled": {
"prefix_or_id": {
"2001:3:3:3::3/128[V]": {
"outgoing_interface": {
"GigabitEthernet0/0/0/1.390": {
"next_hop": "fe80::5c00:ff:fe02:7",
"bytes_switched": 3929713
}
}
}
}
}
}
},
"24008": {
"outgoing_label": {
"Unlabelled": {
"prefix_or_id": {
"10.4.1.1/32[V]": {
"outgoing_interface": {
"GigabitEthernet0/0/0/0.390": {
"next_hop": "10.12.90.1",
"bytes_switched": 6560001
}
}
}
}
}
}
},
"24009": {
"outgoing_label": {
"Aggregate": {
"prefix_or_id": {
"VRF1: Per-VRF Aggr[V]": {
"outgoing_interface": {
"VRF1": {
"bytes_switched": 0
}
}
}
}
}
}
},
"24010": {
"outgoing_label": {
"Unlabelled": {
"prefix_or_id": {
"10.36.3.3/32[V]": {
"outgoing_interface": {
"GigabitEthernet0/0/0/1.390": {
"next_hop": "10.23.90.3",
"bytes_switched": 7947290
}
}
}
}
}
}
}
}
}
}
}
golden_output1 = {'execute.return_value': r'''
show mpls forwarding vrf VRF1
Tue Dec 3 16:00:45.325 UTC
Local Outgoing Prefix Outgoing Next Hop Bytes
Label Label or ID Interface Switched
------ ----------- ------------------ ------------ --------------- ------------
24005 Unlabelled 2001:1:1:1::1/128[V] \
Gi0/0/0/0.390 fe80::f816:3eff:fe53:2cc7 \
4102415
24006 Aggregate VRF1: Per-VRF Aggr[V] \
VRF1 832
24007 Unlabelled 2001:3:3:3::3/128[V] \
Gi0/0/0/1.390 fe80::5c00:ff:fe02:7 \
3929713
24008 Unlabelled 10.4.1.1/32[V] Gi0/0/0/0.390 10.12.90.1 6560001
24009 Aggregate VRF1: Per-VRF Aggr[V] \
VRF1 0
24010 Unlabelled 10.36.3.3/32[V] Gi0/0/0/1.390 10.23.90.3 7947290
'''}
def test_empty(self):
self.device = Mock(**self.empty_output)
obj = ShowMplsForwardingVrf(device=self.device)
with self.assertRaises(SchemaEmptyParserError):
parsed_output = obj.parse(vrf='default')
def test_golden(self):
self.device = Mock(**self.golden_output1)
obj = ShowMplsForwardingVrf(device=self.device)
parsed_output = obj.parse(vrf='VRF1')
self.assertEqual(parsed_output, self.golden_parsed_output1)
if __name__ == '__main__':
unittest.main()
"""
Tests for vSQL functions.
The tests are run via the Python DB interface.
To run the tests, :mod:`pytest` is required.
"""
import math
from conftest import *
###
### Tests
###
def test_today(config_persons):
check_vsql(config_persons, "today() >= @(2000-02-29)")
def test_now(config_persons):
check_vsql(config_persons, "now() >= @(2000-02-29T12:34:56)")
def test_bool(config_persons):
check_vsql(config_persons, "not bool()")
def test_bool_none(config_persons):
check_vsql(config_persons, "not bool(None)")
def test_bool_false(config_persons):
check_vsql(config_persons, "not bool(False)")
def test_bool_true(config_persons):
check_vsql(config_persons, "bool(True)")
def test_bool_int_none(config_persons):
check_vsql(config_persons, "not bool(app.p_int_none.value)")
def test_bool_int_false(config_persons):
check_vsql(config_persons, "not bool(0)")
def test_bool_int_true(config_persons):
check_vsql(config_persons, "bool(42)")
def test_bool_number_false(config_persons):
check_vsql(config_persons, "not bool(0.0)")
def test_bool_number_true(config_persons):
check_vsql(config_persons, "bool(42.5)")
def test_bool_datedelta_false(config_persons):
check_vsql(config_persons, "not bool(days(0))")
def test_bool_datedelta_true(config_persons):
check_vsql(config_persons, "bool(days(42))")
def test_bool_datetimedelta_false(config_persons):
check_vsql(config_persons, "not bool(minutes(0))")
def test_bool_datetimedelta_true(config_persons):
check_vsql(config_persons, "bool(minutes(42))")
def test_bool_monthdelta_false(config_persons):
check_vsql(config_persons, "not bool(monthdelta(0))")
def test_bool_monthdelta_true(config_persons):
check_vsql(config_persons, "bool(monthdelta(42))")
def test_bool_date(config_persons):
check_vsql(config_persons, "bool(@(2000-02-29))")
def test_bool_datetime(config_persons):
check_vsql(config_persons, "bool(@(2000-02-29T12:34:56))")
def test_bool_color(config_persons):
check_vsql(config_persons, "bool(#fff)")
def test_bool_str_false(config_persons):
check_vsql(config_persons, "not bool('')")
def test_bool_str_true(config_persons):
check_vsql(config_persons, "bool('gurk')")
def test_bool_intlist(config_persons):
check_vsql(config_persons, "bool([42])")
def test_bool_numberlist(config_persons):
check_vsql(config_persons, "bool([42.5])")
def test_bool_strlist(config_persons):
check_vsql(config_persons, "bool(['gurk'])")
def test_bool_datelist(config_persons):
check_vsql(config_persons, "bool([today()])")
def test_bool_datetimelist(config_persons):
check_vsql(config_persons, "bool([now()])")
def test_bool_intset(config_persons):
check_vsql(config_persons, "bool({42})")
def test_bool_numberset(config_persons):
check_vsql(config_persons, "bool({42.5})")
def test_bool_strset(config_persons):
check_vsql(config_persons, "bool({'gurk'})")
def test_bool_dateset(config_persons):
check_vsql(config_persons, "bool({today()})")
def test_bool_datetimeset(config_persons):
check_vsql(config_persons, "bool({now()})")
def test_int(config_persons):
check_vsql(config_persons, "not int()")
def test_int_bool_false(config_persons):
check_vsql(config_persons, "not int(False)")
def test_int_bool_true(config_persons):
check_vsql(config_persons, "int(True)")
def test_int_int(config_persons):
check_vsql(config_persons, "int(42) == 42")
def test_int_number(config_persons):
check_vsql(config_persons, "int(42.4) == 42")
def test_int_str_ok(config_persons):
check_vsql(config_persons, "int('42') == 42")
def test_int_str_bad(config_persons):
check_vsql(config_persons, "int('42.5') is None")
def test_int_str_very_bad(config_persons):
check_vsql(config_persons, "int('verybad') is None")
def test_float(config_persons):
check_vsql(config_persons, "float() == 0.0")
def test_float_bool_false(config_persons):
check_vsql(config_persons, "float(False) == 0.0")
def test_float_bool_true(config_persons):
check_vsql(config_persons, "float(True) == 1.0")
def test_float_int(config_persons):
check_vsql(config_persons, "float(42) == 42.0")
def test_float_number(config_persons):
check_vsql(config_persons, "float(42.5) == 42.5")
def test_float_str(config_persons):
check_vsql(config_persons, "float('42.5') == 42.5")
def test_float_str_bad(config_persons):
check_vsql(config_persons, "float('bad') is None")
def test_str(config_persons):
check_vsql(config_persons, "str() is None")
def test_str_bool_false(config_persons):
check_vsql(config_persons, "str(False) == 'False'")
def test_str_bool_true(config_persons):
check_vsql(config_persons, "str(True) == 'True'")
def test_str_int(config_persons):
check_vsql(config_persons, "str(-42) == '-42'")
def test_str_number(config_persons):
check_vsql(config_persons, "str(42.0) == '42.0' and str(-42.5) == '-42.5'")
def test_str_str(config_persons):
check_vsql(config_persons, "str('foo') == 'foo'")
def test_str_date(config_persons):
check_vsql(config_persons, "str(@(2000-02-29)) == '2000-02-29'")
def test_str_datetime(config_persons):
check_vsql(config_persons, "str(@(2000-02-29T12:34:56)) == '2000-02-29 12:34:56'")
def test_str_datedelta_1(config_persons):
check_vsql(config_persons, "str(days(1)) == '1 day'")
def test_str_datedelta_2(config_persons):
check_vsql(config_persons, "str(days(42)) == '42 days'")
def test_str_datetimedelta_1(config_persons):
check_vsql(config_persons, "str(seconds(42)) == '0:00:42'")
def test_str_datetimedelta_2(config_persons):
check_vsql(config_persons, "str(minutes(42)) == '0:42:00'")
def test_str_datetimedelta_3(config_persons):
check_vsql(config_persons, "str(hours(17) + minutes(23)) == '17:23:00'")
def test_str_datetimedelta_4(config_persons):
check_vsql(config_persons, "str(hours(42) + seconds(0)) == '1 day, 18:00:00'")
def test_str_datetimedelta_5(config_persons):
check_vsql(config_persons, "str(days(42) + seconds(0)) == '42 days, 0:00:00'")
def test_str_datetimedelta_6(config_persons):
check_vsql(config_persons, "str(days(42) + hours(17) + minutes(23)) == '42 days, 17:23:00'")
def test_str_datetimedelta_7(config_persons):
check_vsql(config_persons, "str(-days(1) - hours(12) - minutes(34) - seconds(56)) == '-2 days, 11:25:04'")
def test_str_monthdelta_1(config_persons):
check_vsql(config_persons, "str(monthdelta(0)) == '0 months'")
def test_str_monthdelta_2(config_persons):
check_vsql(config_persons, "str(monthdelta(1)) == '1 month'")
def test_str_monthdelta_3(config_persons):
check_vsql(config_persons, "str(monthdelta(42)) == '42 months'")
def test_str_color_1(config_persons):
check_vsql(config_persons, "str(#000f) == '#000'")
def test_str_color_2(config_persons):
check_vsql(config_persons, "str(#fff0) == 'rgba(255, 255, 255, 0.000)'")
def test_str_color_3(config_persons):
check_vsql(config_persons, "str(#123456) == '#123456'")
def test_str_color_4(config_persons):
check_vsql(config_persons, "str(#12345678) == 'rgba(18, 52, 86, 0.471)'")
def test_str_geo_without_info(config_persons):
check_vsql(config_persons, "str(geo(49.95, 11.59)) == '<geo lat=49.95 long=11.59 info=None>'")
def test_str_geo_with_info(config_persons):
check_vsql(config_persons, "str(geo(49.95, 11.59, 'Here')) == '<geo lat=49.95 long=11.59 info=\\'Here\\'>'")
def test_str_intlist(config_persons):
check_vsql(config_persons, "str([1, 2, 3, None]) == '[1, 2, 3, None]'")
def test_str_numberlist(config_persons):
check_vsql(config_persons, "str([1.2, 3.4, 5.6, None]) == '[1.2, 3.4, 5.6, None]'")
def test_str_strlist(config_persons):
check_vsql(config_persons, "str(['foo', 'bar', None]) == '[\\'foo\\', \\'bar\\', None]'")
def test_str_datelist(config_persons):
check_vsql(config_persons, "str([@(2000-02-29), None]) == '[@(2000-02-29), None]'")
def test_str_datetimelist(config_persons):
check_vsql(config_persons, "str([@(2000-02-29T12:34:56), None]) == '[@(2000-02-29T12:34:56), None]'")
# For the set test only include one non-``None`` value,
# as the order of the other elements is undefined
def test_str_intset(config_persons):
check_vsql(config_persons, "str({1, None}) == '{1, None}'")
def test_str_numberset(config_persons):
check_vsql(config_persons, "str({1.2, None}) == '{1.2, None}'")
def test_str_strset(config_persons):
check_vsql(config_persons, "str({'foo', None}) == '{\\'foo\\', None}'")
def test_str_dateset(config_persons):
check_vsql(config_persons, "str({@(2000-02-29), None}) == '{@(2000-02-29), None}'")
def test_str_datetimeset(config_persons):
check_vsql(config_persons, "str({@(2000-02-29T12:34:56), None}) == '{@(2000-02-29T12:34:56), None}'")
def test_repr_none(config_persons):
check_vsql(config_persons, "repr(None) == 'None'")
def test_repr_bool_false(config_persons):
check_vsql(config_persons, "repr(False) == 'False'")
def test_repr_bool_True(config_persons):
check_vsql(config_persons, "repr(True) == 'True'")
def test_repr_int(config_persons):
check_vsql(config_persons, "repr(-42) == '-42'")
def test_repr_number_1(config_persons):
check_vsql(config_persons, "repr(42.0) == '42.0'")
def test_repr_number_2(config_persons):
check_vsql(config_persons, "repr(-42.5) == '-42.5'")
def test_repr_str(config_persons):
check_vsql(config_persons, "repr('foo\"bar') == '\\'foo\\\"bar\\''")
def test_repr_date(config_persons):
check_vsql(config_persons, "repr(@(2000-02-29)) == '@(2000-02-29)'")
def test_repr_datetime(config_persons):
check_vsql(config_persons, "repr(@(2000-02-29T12:34:56)) == '@(2000-02-29T12:34:56)'")
def test_repr_datedelta_1(config_persons):
check_vsql(config_persons, "repr(days(1)) == 'timedelta(1)'")
def test_repr_datedelta_2(config_persons):
check_vsql(config_persons, "repr(days(42)) == 'timedelta(42)'")
def test_repr_datetimedelta_1(config_persons):
# FIXME: Oracle doesn't have enough precision for seconds
check_vsql(config_persons, "repr(seconds(42)) == 'timedelta(0, 42)'")
def test_repr_datetimedelta_2(config_persons):
check_vsql(config_persons, "repr(minutes(42)) == 'timedelta(0, 2520)'")
def test_repr_datetimedelta_3(config_persons):
check_vsql(config_persons, "repr(hours(17) + minutes(23)) == 'timedelta(0, 62580)'")
def test_repr_datetimedelta_4(config_persons):
check_vsql(config_persons, "repr(hours(42) + seconds(0)) == 'timedelta(1, 64800)'")
def test_repr_datetimedelta_5(config_persons):
check_vsql(config_persons, "repr(days(42) + seconds(0)) == 'timedelta(42)'")
def test_repr_datetimedelta_6(config_persons):
check_vsql(config_persons, "repr(days(42) + hours(17) + minutes(23)) == 'timedelta(42, 62580)'")
def test_repr_monthdelta(config_persons):
check_vsql(config_persons, "repr(monthdelta(42)) == 'monthdelta(42)'")
def test_repr_color_1(config_persons):
check_vsql(config_persons, "repr(#000) == '#000'")
def test_repr_color_2(config_persons):
check_vsql(config_persons, "repr(#369c) == '#369c'")
def test_repr_color_3(config_persons):
check_vsql(config_persons, "repr(#123456) == '#123456'")
def test_repr_color_4(config_persons):
check_vsql(config_persons, "repr(#12345678) == '#12345678'")
def test_repr_geo_without_info(config_persons):
check_vsql(config_persons, "repr(geo(49.95, 11.59)) == '<geo lat=49.95 long=11.59 info=None>'")
def test_repr_geo_with_info(config_persons):
check_vsql(config_persons, "repr(geo(49.95, 11.59, 'Here')) == '<geo lat=49.95 long=11.59 info=\\'Here\\'>'")
def test_repr_intlist(config_persons):
check_vsql(config_persons, "repr([1, 2, 3, None]) == '[1, 2, 3, None]'")
def test_repr_numberlist(config_persons):
check_vsql(config_persons, "repr([1.2, 3.4, 5.6, None]) == '[1.2, 3.4, 5.6, None]'")
def test_repr_strlist(config_persons):
check_vsql(config_persons, "repr(['foo', 'bar', None]) == '[\\'foo\\', \\'bar\\', None]'")
def test_repr_datelist(config_persons):
check_vsql(config_persons, "repr([@(2000-02-29), None]) == '[@(2000-02-29), None]'")
def test_repr_datetimelist(config_persons):
check_vsql(config_persons, "repr([@(2000-02-29T12:34:56), None]) == '[@(2000-02-29T12:34:56), None]'")
# For the set test only include one non-``None`` value,
# as the order of the other elements is undefined
def test_repr_intset(config_persons):
check_vsql(config_persons, "repr({1, None}) == '{1, None}'")
def test_repr_numberset(config_persons):
check_vsql(config_persons, "repr({1.2, None}) == '{1.2, None}'")
def test_repr_strset(config_persons):
check_vsql(config_persons, "repr({'foo', None}) == '{\\\'foo\\\', None}'")
def test_repr_dateset(config_persons):
check_vsql(config_persons, "repr({@(2000-02-29), None}) == '{@(2000-02-29), None}'")
def test_repr_datetimeset(config_persons):
check_vsql(config_persons, "repr({@(2000-02-29T12:34:56), None}) == '{@(2000-02-29T12:34:56), None}'")
def test_date_int(config_persons):
check_vsql(config_persons, "date(2000, 2, 29) == @(2000-02-29)")
def test_date_datetime(config_persons):
check_vsql(config_persons, "date(@(2000-02-29T12:34:56)) == @(2000-02-29)")
def test_datetime_int3(config_persons):
check_vsql(config_persons, "datetime(2000, 2, 29) == @(2000-02-29T)")
def test_datetime_int4(config_persons):
check_vsql(config_persons, "datetime(2000, 2, 29, 12) == @(2000-02-29T12:00:00)")
def test_datetime_int5(config_persons):
check_vsql(config_persons, "datetime(2000, 2, 29, 12, 34) == @(2000-02-29T12:34:00)")
def test_datetime_int6(config_persons):
check_vsql(config_persons, "datetime(2000, 2, 29, 12, 34, 56) == @(2000-02-29T12:34:56)")
def test_datetime_date(config_persons):
check_vsql(config_persons, "datetime(@(2000-02-29)) == @(2000-02-29T00:00:00)")
def test_datetime_date_int1(config_persons):
check_vsql(config_persons, "datetime(@(2000-02-29), 12) == @(2000-02-29T12:00:00)")
def test_datetime_date_int2(config_persons):
check_vsql(config_persons, "datetime(@(2000-02-29), 12, 34) == @(2000-02-29T12:34:00)")
def test_datetime_date_int3(config_persons):
check_vsql(config_persons, "datetime(@(2000-02-29), 12, 34, 56) == @(2000-02-29T12:34:56)")
def test_len_str1(config_persons):
check_vsql(config_persons, "len('') == 0")
def test_len_str2(config_persons):
check_vsql(config_persons, "len('gurk') == 4")
def test_len_str3(config_persons):
check_vsql(config_persons, "len('\\t\\n') == 2")
def test_len_intlist(config_persons):
check_vsql(config_persons, "len([1, 2, 3]) == 3")
def test_len_numberlist(config_persons):
check_vsql(config_persons, "len([1.2, 3.4, 5.6]) == 3")
def test_len_strlist(config_persons):
check_vsql(config_persons, "len(['foo', 'bar', 'baz']) == 3")
def test_len_datelist(config_persons):
check_vsql(config_persons, "len([@(2000-02-29), @(2000-02-29), @(2000-03-01)]) == 3")
def test_len_datetimelist(config_persons):
check_vsql(config_persons, "len([@(2000-02-29T12:34:56), @(2000-02-29T12:34:56), @(2000-03-01T12:34:56)]) == 3")
def test_len_intset(config_persons):
check_vsql(config_persons, "len({1, 1, 2, 2, 3, 3, None, None}) == 4")
def test_len_numberset(config_persons):
check_vsql(config_persons, "len({1.2, 3.4, 5.6, None, 1.2, 3.4, 5.6, None}) == 4")
def test_len_strset(config_persons):
check_vsql(config_persons, "len({'foo', 'bar', 'baz', None, 'foo', 'bar', 'baz'}) == 4")
def test_len_dateset(config_persons):
check_vsql(config_persons, "len({@(2000-02-29), @(2000-02-29), @(2000-03-21), None}) == 3")
def test_len_datetimeset(config_persons):
check_vsql(config_persons, "len({@(2000-02-29T12:34:56), None, @(2000-02-29T12:34:56), None, @(2000-02-29T11:22:33)}) == 3")
def test_timedelta(config_persons):
check_vsql(config_persons, "not timedelta()")
def test_timedelta_int1(config_persons):
check_vsql(config_persons, "timedelta(42)")
def test_timedelta_int2(config_persons):
check_vsql(config_persons, "timedelta(42, 12)")
def test_monthdelta(config_persons):
check_vsql(config_persons, "not monthdelta()")
def test_monthdelta_int(config_persons):
check_vsql(config_persons, "monthdelta(42)")
def test_years(config_persons):
check_vsql(config_persons, "years(25)")
def test_months(config_persons):
check_vsql(config_persons, "months(3)")
def test_weeks(config_persons):
check_vsql(config_persons, "weeks(3)")
def test_days(config_persons):
check_vsql(config_persons, "days(12)")
def test_hours(config_persons):
check_vsql(config_persons, "hours(8)")
def test_minutes(config_persons):
check_vsql(config_persons, "minutes(45)")
def test_seconds(config_persons):
check_vsql(config_persons, "seconds(60)")
def test_md5(config_persons):
check_vsql(config_persons, "md5('gurk') == '4b5b6a3fa4af2541daa569277c7ff4c5'")
def test_random(config_persons):
check_vsql(config_persons, "random() + 1")
def test_randrange(config_persons):
check_vsql(config_persons, "randrange(1, 10)")
def test_seq(config_persons):
check_vsql(config_persons, "seq()")
def test_rgb1(config_persons):
check_vsql(config_persons, "rgb(0.2, 0.4, 0.6) == #369")
def test_rgb2(config_persons):
check_vsql(config_persons, "rgb(0.2, 0.4, 0.6, 0.8) == #369c")
def test_list_str(config_persons):
check_vsql(config_persons, "list('gurk') == ['g', 'u', 'r', 'k']")
def test_list_intlist(config_persons):
check_vsql(config_persons, "list([1, 2, 3]) == [1, 2, 3]")
def test_list_numberlist(config_persons):
check_vsql(config_persons, "list([1.2, 3.4, 5.6]) == [1.2, 3.4, 5.6]")
def test_list_strlist(config_persons):
check_vsql(config_persons, "list(['foo', 'bar', 'baz', None]) == ['foo', 'bar', 'baz', None]")
def test_list_datelist(config_persons):
check_vsql(config_persons, "list([@(2000-02-29), @(2000-03-01), None]) == [@(2000-02-29), @(2000-03-01), None]")
def test_list_datetimelist(config_persons):
check_vsql(config_persons, "list([@(2000-02-29T12:34:56), @(2000-02-29T11:22:33), None]) == [@(2000-02-29T12:34:56), @(2000-02-29T11:22:33), None]")
def test_list_intset(config_persons):
check_vsql(config_persons, "list({1, None}) == [1, None]")
def test_list_numberset(config_persons):
check_vsql(config_persons, "list({1.2, None}) == [1.2, None]")
def test_list_strset(config_persons):
check_vsql(config_persons, "list({'foo', None}) == ['foo', None]")
def test_list_dateset(config_persons):
check_vsql(config_persons, "list({@(2000-02-29), None}) == [@(2000-02-29), None]")
def test_list_datetimeset(config_persons):
check_vsql(config_persons, "list({@(2000-02-29T12:34:56), None}) == [@(2000-02-29T12:34:56), None]")
def test_set_str(config_persons):
check_vsql(config_persons, "set('mississippi') == {'i', 'm', 'p', 's'}")
def test_set_intlist(config_persons):
check_vsql(config_persons, "set([1, 2, 3, 2, 1, None]) == {1, 2, 3, None}")
def test_set_numberlist(config_persons):
check_vsql(config_persons, "set([1.2, 3.4, 5.6, 3.4, 1.2, None]) == {1.2, 3.4, 5.6, None}")
def test_set_strlist(config_persons):
check_vsql(config_persons, "set(['foo', 'bar', 'baz', None, 'baz', 'bar', 'foo']) == {'foo', 'bar', 'baz', None}")
def test_set_datelist(config_persons):
check_vsql(config_persons, "set([@(2000-02-29), @(2000-03-01), None, @(2000-03-01), @(2000-02-29)]) == {@(2000-02-29), @(2000-03-01), None}")
def test_set_datetimelist(config_persons):
check_vsql(config_persons, "set([@(2000-02-29T12:34:56), @(2000-02-29T11:22:33), @(2000-02-29T11:22:33), None, @(2000-02-29T12:34:56)]) == {@(2000-02-29T12:34:56), @(2000-02-29T11:22:33), None}")
def test_set_intset(config_persons):
check_vsql(config_persons, "set({1, None}) == {1, None}")
def test_set_numberset(config_persons):
check_vsql(config_persons, "set({1.2, None}) == {1.2, None}")
def test_set_strset(config_persons):
check_vsql(config_persons, "set({'foo', None}) == {'foo', None}")
def test_set_dateset(config_persons):
check_vsql(config_persons, "set({@(2000-02-29), None}) == {@(2000-02-29), None}")
def test_set_datetimeset(config_persons):
check_vsql(config_persons, "set({@(2000-02-29T12:34:56), None}) == {@(2000-02-29T12:34:56), None}")
def test_dist(config_persons):
check_vsql(config_persons, "abs(dist(geo(49.95, 11.59, 'Here'), geo(12.34, 56.67, 'There')) - 5845.77551787602) < 1e-5")
def test_abs(config_persons):
check_vsql(config_persons, "abs(-42) == 42")
def test_cos_bool(config_persons):
check_vsql(config_persons, "cos(False) == 1")
def test_cos_int(config_persons):
check_vsql(config_persons, "cos(0) == 1")
def test_cos_number1(config_persons):
check_vsql(config_persons, "cos(0.0) == 1")
def test_cos_number2(config_persons):
check_vsql(config_persons, f"abs(cos({math.pi} / 2)) < 1e-10")
def test_cos_number3(config_persons):
check_vsql(config_persons, f"abs(cos({math.pi}) + 1) < 1e-10")
def test_sin_bool(config_persons):
check_vsql(config_persons, "sin(False) == 0")
def test_sin_int(config_persons):
check_vsql(config_persons, "sin(0) == 0")
def test_sin_number1(config_persons):
check_vsql(config_persons, "sin(0.0) == 0")
def test_sin_number2(config_persons):
check_vsql(config_persons, f"abs(sin({math.pi} / 2) - 1) < 1e-10")
def test_sin_number3(config_persons):
check_vsql(config_persons, f"abs(sin({math.pi})) < 1e-10")
def test_tan_bool(config_persons):
check_vsql(config_persons, "tan(False) == 0")
def test_tan_int(config_persons):
check_vsql(config_persons, "tan(0) == 0")
def test_tan_number1(config_persons):
check_vsql(config_persons, "tan(0.0) == 0")
def test_tan_number2(config_persons):
check_vsql(config_persons, f"abs(tan(0.25 * {math.pi}) - 1) < 1e-10")
def test_tan_number3(config_persons):
check_vsql(config_persons, f"abs(tan(0.75 * {math.pi}) + 1) < 1e-10")
def test_sqrt_bool1(config_persons):
check_vsql(config_persons, "sqrt(False) == 0.0")
def test_sqrt_bool2(config_persons):
check_vsql(config_persons, "sqrt(True) == 1.0")
def test_sqrt_int1(config_persons):
check_vsql(config_persons, "sqrt(16) == 4.0")
def test_sqrt_int2(config_persons):
check_vsql(config_persons, "sqrt(-16) is None")
def test_sqrt_number1(config_persons):
check_vsql(config_persons, "sqrt(16.0) == 4.0")
def test_sqrt_number2(config_persons):
check_vsql(config_persons, "sqrt(-16.0) is None")
| 34.635071 | 196 | 0.729383 | 3,475 | 21,924 | 4.296403 | 0.058705 | 0.353516 | 0.203952 | 0.299129 | 0.834092 | 0.808104 | 0.749967 | 0.641259 | 0.432552 | 0.336236 | 0 | 0.087024 | 0.088533 | 21,924 | 632 | 197 | 34.689873 | 0.660111 | 0.017606 | 0 | 0 | 0 | 0.112745 | 0.315843 | 0.054654 | 0 | 0 | 0 | 0.001582 | 0 | 1 | 0.497549 | false | 0 | 0.004902 | 0 | 0.502451 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 8 |
ff9ba251afcb1c5b3410734e285459502fe06ec0 | 45 | py | Python | python-crash-course/chapter02/caf.py | dym0080/learn-python | 2c1f34e9fa249334305350cf4f7fefb351871bda | [
"MIT"
] | null | null | null | python-crash-course/chapter02/caf.py | dym0080/learn-python | 2c1f34e9fa249334305350cf4f7fefb351871bda | [
"MIT"
] | null | null | null | python-crash-course/chapter02/caf.py | dym0080/learn-python | 2c1f34e9fa249334305350cf4f7fefb351871bda | [
"MIT"
] | null | null | null | print(2+6)
print(2*4)
print(10-2)
print(24/3) | 11.25 | 11 | 0.666667 | 12 | 45 | 2.5 | 0.583333 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.238095 | 0.066667 | 45 | 4 | 12 | 11.25 | 0.47619 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 8 |
ffb860a3d6e16dc22011da278efab826ee889099 | 11,575 | py | Python | ParamSklearn/components/base.py | automl/paramsklearn | 0466802aad28bfc6df945f874b4b81a3f337009d | [
"BSD-3-Clause"
] | 8 | 2015-10-09T08:23:51.000Z | 2020-07-06T01:49:37.000Z | ParamSklearn/components/base.py | automl/paramsklearn | 0466802aad28bfc6df945f874b4b81a3f337009d | [
"BSD-3-Clause"
] | 2 | 2015-10-31T12:57:23.000Z | 2015-11-24T12:54:04.000Z | ParamSklearn/components/base.py | automl/paramsklearn | 0466802aad28bfc6df945f874b4b81a3f337009d | [
"BSD-3-Clause"
] | 8 | 2015-12-03T17:28:35.000Z | 2021-06-26T17:32:29.000Z | class ParamSklearnClassificationAlgorithm(object):
"""Provide an abstract interface for classification algorithms in
ParamSklearn.
Make a subclass of this and put it into the directory
`ParamSklearn/components/classification` to make it available."""
def __init__(self):
self.estimator = None
self.properties = None
@staticmethod
def get_properties(dataset_properties=None):
"""Get the properties of the underlying algorithm. These are:
* Short name
* Full name
* Can the algorithm handle missing values?
(handles_missing_values : {True, False})
* Can the algorithm handle nominal features?
(handles_nominal_features : {True, False})
* Can the algorithm handle numerical features?
(handles_numerical_features : {True, False})
* Does the algorithm prefer data scaled in [0,1]?
(prefers_data_scaled : {True, False}
* Does the algorithm prefer data normalized to 0-mean, 1std?
(prefers_data_normalized : {True, False}
* Can the algorithm handle multiclass-classification problems?
(handles_multiclass : {True, False})
* Can the algorithm handle multilabel-classification problems?
(handles_multilabel : {True, False}
* Is the algorithm deterministic for a given seed?
(is_deterministic : {True, False)
* Can the algorithm handle sparse data?
(handles_sparse : {True, False}
* What are the preferred types of the data array?
(preferred_dtype : list of tuples)
Returns
-------
dict
"""
raise NotImplementedError()
@staticmethod
def get_hyperparameter_search_space(dataset_properties=None):
"""Return the configuration space of this classification algorithm.
Returns
-------
HPOlibConfigspace.configuration_space.ConfigurationSpace
The configuration space of this classification algorithm.
"""
raise NotImplementedError()
def fit(self, X, y):
"""The fit function calls the fit function of the underlying
scikit-learn model and returns `self`.
Parameters
----------
X : array-like, shape = (n_samples, n_features)
Training data
y : array-like, shape = [n_samples]
Returns
-------
self : returns an instance of self.
Targets
Notes
-----
Please see the `scikit-learn API documentation
<http://scikit-learn.org/dev/developers/index.html#apis-of-scikit
-learn-objects>`_ for further information."""
raise NotImplementedError()
def predict(self, X):
"""The predict function calls the predict function of the
underlying scikit-learn model and returns an array with the predictions.
Parameters
----------
X : array-like, shape = (n_samples, n_features)
Returns
-------
array, shape = (n_samples,)
Returns the predicted values
Notes
-----
Please see the `scikit-learn API documentation
<http://scikit-learn.org/dev/developers/index.html#apis-of-scikit
-learn-objects>`_ for further information."""
raise NotImplementedError()
def predict_proba(self, X):
"""Predict probabilities.
Parameters
----------
X : array-like, shape = (n_samples, n_features)
Returns
-------
array, shape=(n_samples,) if n_classes == 2 else (n_samples, n_classes)
"""
raise NotImplementedError()
def get_estimator(self):
"""Return the underlying estimator object.
Returns
-------
estimator : the underlying estimator object
"""
return self.estimator
def __str__(self):
name = self.get_properties()['name']
return "ParamSklearn %s" % name
class ParamSklearnPreprocessingAlgorithm(object):
"""Provide an abstract interface for preprocessing algorithms in
ParamSklearn.
Make a subclass of this and put it into the directory
`ParamSklearn/components/preprocessing` to make it available."""
def __init__(self):
self.preprocessor = None
@staticmethod
def get_properties(dataset_properties=None):
"""Get the properties of the underlying algorithm. These are:
* Short name
* Full name
* Can the algorithm handle missing values?
(handles_missing_values : {True, False})
* Can the algorithm handle nominal features?
(handles_nominal_features : {True, False})
* Can the algorithm handle numerical features?
(handles_numerical_features : {True, False})
* Does the algorithm prefer data scaled in [0,1]?
(prefers_data_scaled : {True, False}
* Does the algorithm prefer data normalized to 0-mean, 1std?
(prefers_data_normalized : {True, False}
* Can preprocess regression data?
(handles_regression : {True, False}
* Can preprocess classification data?
(handles_classification : {True, False}
* Can the algorithm handle multiclass-classification problems?
(handles_multiclass : {True, False})
* Can the algorithm handle multilabel-classification problems?
(handles_multilabel : {True, False}
* Is the algorithm deterministic for a given seed?
(is_deterministic : {True, False)
* Can the algorithm handle sparse data?
(handles_sparse : {True, False}
* What are the preferred types of the data array?
(preferred_dtype : list of tuples)
Returns
-------
dict
"""
raise NotImplementedError()
@staticmethod
def get_hyperparameter_search_space(dataset_properties=None):
"""Return the configuration space of this preprocessing algorithm.
Returns
-------
HPOlibConfigspace.configuration_space.ConfigurationSpace
The configuration space of this preprocessing algorithm.
"""
raise NotImplementedError()
def fit(self, X, Y):
"""The fit function calls the fit function of the underlying
scikit-learn preprocessing algorithm and returns `self`.
Parameters
----------
X : array-like, shape = (n_samples, n_features)
Training data
y : array-like, shape = [n_samples]
Returns
-------
self : returns an instance of self.
Notes
-----
Please see the `scikit-learn API documentation
<http://scikit-learn.org/dev/developers/index.html#apis-of-scikit
-learn-objects>`_ for further information."""
raise NotImplementedError()
def transform(self, X):
"""The transform function calls the transform function of the
underlying scikit-learn model and returns the transformed array.
Parameters
----------
X : array-like, shape = (n_samples, n_features)
Returns
-------
X : array
Return the transformed training data
Notes
-----
Please see the `scikit-learn API documentation
<http://scikit-learn.org/dev/developers/index.html#apis-of-scikit
-learn-objects>`_ for further information."""
raise NotImplementedError()
def get_preprocessor(self):
"""Return the underlying preprocessor object.
Returns
-------
preprocessor : the underlying preprocessor object
"""
return self.preprocessor
def __str__(self):
name = self.get_properties()['name']
return "ParamSklearn %" % name
class ParamSklearnRegressionAlgorithm(object):
"""Provide an abstract interface for regression algorithms in
ParamSklearn.
Make a subclass of this and put it into the directory
`ParamSklearn/components/regression` to make it available."""
def __init__(self):
self.estimator = None
self.properties = None
@staticmethod
def get_properties(dataset_properties=None):
"""Get the properties of the underlying algorithm. These are:
* Short name
* Full name
* Can the algorithm handle missing values?
(handles_missing_values : {True, False})
* Can the algorithm handle nominal features?
(handles_nominal_features : {True, False})
* Can the algorithm handle numerical features?
(handles_numerical_features : {True, False})
* Does the algorithm prefer data scaled in [0,1]?
(prefers_data_scaled : {True, False}
* Does the algorithm prefer data normalized to 0-mean, 1std?
(prefers_data_normalized : {True, False}
* Is the algorithm deterministic for a given seed?
(is_deterministic : {True, False)
* Can the algorithm handle sparse data?
(handles_sparse : {True, False}
* What are the preferred types of the data array?
(preferred_dtype : list of tuples)
Returns
-------
dict
"""
raise NotImplementedError()
@staticmethod
def get_hyperparameter_search_space(dataset_properties=None):
"""Return the configuration space of this regression algorithm.
Returns
-------
HPOlibConfigspace.configuration_space.ConfigurationSpace
The configuration space of this regression algorithm.
"""
raise NotImplementedError()
def fit(self, X, y):
"""The fit function calls the fit function of the underlying
scikit-learn model and returns `self`.
Parameters
----------
X : array-like, shape = (n_samples, n_features)
Training data
y : array-like, shape = [n_samples]
Returns
-------
self : returns an instance of self.
Targets
Notes
-----
Please see the `scikit-learn API documentation
<http://scikit-learn.org/dev/developers/index.html#apis-of-scikit
-learn-objects>`_ for further information."""
raise NotImplementedError()
def predict(self, X):
"""The predict function calls the predict function of the
underlying scikit-learn model and returns an array with the predictions.
Parameters
----------
X : array-like, shape = (n_samples, n_features)
Returns
-------
array, shape = (n_samples,)
Returns the predicted values
Notes
-----
Please see the `scikit-learn API documentation
<http://scikit-learn.org/dev/developers/index.html#apis-of-scikit
-learn-objects>`_ for further information."""
raise NotImplementedError()
def predict_proba(self, X):
"""Predict probabilities.
Parameters
----------
X : array-like, shape = (n_samples, n_features)
Returns
-------
array, shape=(n_samples,) if n_classes == 2 else (n_samples, n_classes)
"""
raise NotImplementedError()
def get_estimator(self):
"""Return the underlying estimator object.
Returns
-------
estimator : the underlying estimator object
"""
return self.estimator
def __str__(self):
name = self.get_properties()['name']
return "ParamSklearn %" % name
| 32.063712 | 80 | 0.614082 | 1,202 | 11,575 | 5.802829 | 0.115641 | 0.034839 | 0.034409 | 0.048172 | 0.918853 | 0.918853 | 0.903799 | 0.89319 | 0.888602 | 0.881577 | 0 | 0.001714 | 0.294341 | 11,575 | 360 | 81 | 32.152778 | 0.852228 | 0.652959 | 0 | 0.85 | 0 | 0 | 0.024434 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.383333 | false | 0 | 0 | 0 | 0.533333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 9 |
442129b17cfaf3b7e7149065aa324f8179c003ec | 217 | py | Python | tests/test_frkl_project_meta.py | makkus/frkl.project-meta | a16da72ba41abdb93044bea9d14322433c090ac0 | [
"BlueOak-1.0.0",
"Apache-2.0"
] | null | null | null | tests/test_frkl_project_meta.py | makkus/frkl.project-meta | a16da72ba41abdb93044bea9d14322433c090ac0 | [
"BlueOak-1.0.0",
"Apache-2.0"
] | null | null | null | tests/test_frkl_project_meta.py | makkus/frkl.project-meta | a16da72ba41abdb93044bea9d14322433c090ac0 | [
"BlueOak-1.0.0",
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
"""Tests for `frkl_project_meta` package."""
import frkl.project_meta
import pytest # noqa
def test_assert():
assert frkl.project_meta.get_version() is not None
| 16.692308 | 54 | 0.695853 | 32 | 217 | 4.53125 | 0.75 | 0.227586 | 0.310345 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005464 | 0.156682 | 217 | 12 | 55 | 18.083333 | 0.786885 | 0.396313 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0.25 | true | 0 | 0.5 | 0 | 0.75 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
443b131b2aa3de49c34d50153fc85fbe5c6a5444 | 50,800 | py | Python | tests/helpers/test_script.py | hmctl/ha-core | 677c276b41f99dbef34fdbaa9bdb14b3685163ef | [
"Apache-2.0"
] | 2 | 2020-03-02T19:17:52.000Z | 2020-03-02T19:17:53.000Z | tests/helpers/test_script.py | hmctl/ha-core | 677c276b41f99dbef34fdbaa9bdb14b3685163ef | [
"Apache-2.0"
] | 6 | 2021-02-08T21:05:36.000Z | 2022-03-12T00:54:00.000Z | tests/helpers/test_script.py | hmctl/ha-core | 677c276b41f99dbef34fdbaa9bdb14b3685163ef | [
"Apache-2.0"
] | 1 | 2020-03-07T10:43:50.000Z | 2020-03-07T10:43:50.000Z | """The tests for the Script component."""
# pylint: disable=protected-access
import asyncio
from datetime import timedelta
import logging
from unittest import mock
import asynctest
import pytest
import voluptuous as vol
# Otherwise can't test just this file (import order issue)
from homeassistant import exceptions
import homeassistant.components.scene as scene
from homeassistant.const import ATTR_ENTITY_ID, SERVICE_TURN_ON
from homeassistant.core import Context, callback
from homeassistant.helpers import config_validation as cv, script
import homeassistant.util.dt as dt_util
from tests.common import async_fire_time_changed
ENTITY_ID = "script.test"
_ALL_RUN_MODES = [None, "background", "blocking"]
async def test_firing_event_basic(hass):
"""Test the firing of events."""
event = "test_event"
context = Context()
@callback
def record_event(event):
"""Add recorded event to set."""
events.append(event)
hass.bus.async_listen(event, record_event)
schema = cv.SCRIPT_SCHEMA({"event": event, "event_data": {"hello": "world"}})
# For this one test we'll make sure "legacy" works the same as None.
for run_mode in _ALL_RUN_MODES + ["legacy"]:
events = []
if run_mode is None:
script_obj = script.Script(hass, schema)
else:
script_obj = script.Script(hass, schema, run_mode=run_mode)
assert not script_obj.can_cancel
await script_obj.async_run(context=context)
await hass.async_block_till_done()
assert len(events) == 1
assert events[0].context is context
assert events[0].data.get("hello") == "world"
assert not script_obj.can_cancel
async def test_firing_event_template(hass):
"""Test the firing of events."""
event = "test_event"
context = Context()
@callback
def record_event(event):
"""Add recorded event to set."""
events.append(event)
hass.bus.async_listen(event, record_event)
schema = cv.SCRIPT_SCHEMA(
{
"event": event,
"event_data_template": {
"dict": {
1: "{{ is_world }}",
2: "{{ is_world }}{{ is_world }}",
3: "{{ is_world }}{{ is_world }}{{ is_world }}",
},
"list": ["{{ is_world }}", "{{ is_world }}{{ is_world }}"],
},
}
)
for run_mode in _ALL_RUN_MODES:
events = []
if run_mode is None:
script_obj = script.Script(hass, schema)
else:
script_obj = script.Script(hass, schema, run_mode=run_mode)
assert not script_obj.can_cancel
await script_obj.async_run({"is_world": "yes"}, context=context)
await hass.async_block_till_done()
assert len(events) == 1
assert events[0].context is context
assert events[0].data == {
"dict": {1: "yes", 2: "yesyes", 3: "yesyesyes"},
"list": ["yes", "yesyes"],
}
async def test_calling_service_basic(hass):
"""Test the calling of a service."""
context = Context()
@callback
def record_call(service):
"""Add recorded event to set."""
calls.append(service)
hass.services.async_register("test", "script", record_call)
schema = cv.SCRIPT_SCHEMA({"service": "test.script", "data": {"hello": "world"}})
for run_mode in _ALL_RUN_MODES:
calls = []
if run_mode is None:
script_obj = script.Script(hass, schema)
else:
script_obj = script.Script(hass, schema, run_mode=run_mode)
assert not script_obj.can_cancel
await script_obj.async_run(context=context)
await hass.async_block_till_done()
assert len(calls) == 1
assert calls[0].context is context
assert calls[0].data.get("hello") == "world"
async def test_cancel_no_wait(hass, caplog):
"""Test stopping script."""
event = "test_event"
async def async_simulate_long_service(service):
"""Simulate a service that takes a not insignificant time."""
await asyncio.sleep(0.01)
hass.services.async_register("test", "script", async_simulate_long_service)
@callback
def monitor_event(event):
"""Signal event happened."""
event_sem.release()
hass.bus.async_listen(event, monitor_event)
schema = cv.SCRIPT_SCHEMA([{"event": event}, {"service": "test.script"}])
for run_mode in _ALL_RUN_MODES:
event_sem = asyncio.Semaphore(0)
if run_mode is None:
script_obj = script.Script(hass, schema)
else:
script_obj = script.Script(hass, schema, run_mode=run_mode)
tasks = []
for _ in range(3):
if run_mode == "background":
await script_obj.async_run()
else:
hass.async_create_task(script_obj.async_run())
tasks.append(hass.async_create_task(event_sem.acquire()))
await asyncio.wait_for(asyncio.gather(*tasks), 1)
# Can't assert is_running just yet because we haven't verified that stopping
# works; if the assert failed and async_stop didn't work, the test could hang.
script_was_running = script_obj.is_running
await script_obj.async_stop()
await hass.async_block_till_done()
assert script_was_running
assert not script_obj.is_running
async def test_activating_scene(hass):
"""Test the activation of a scene."""
context = Context()
@callback
def record_call(service):
"""Add recorded event to set."""
calls.append(service)
hass.services.async_register(scene.DOMAIN, SERVICE_TURN_ON, record_call)
schema = cv.SCRIPT_SCHEMA({"scene": "scene.hello"})
for run_mode in _ALL_RUN_MODES:
calls = []
if run_mode is None:
script_obj = script.Script(hass, schema)
else:
script_obj = script.Script(hass, schema, run_mode=run_mode)
assert not script_obj.can_cancel
await script_obj.async_run(context=context)
await hass.async_block_till_done()
assert len(calls) == 1
assert calls[0].context is context
assert calls[0].data.get(ATTR_ENTITY_ID) == "scene.hello"
async def test_calling_service_template(hass):
"""Test the calling of a service."""
context = Context()
@callback
def record_call(service):
"""Add recorded event to set."""
calls.append(service)
hass.services.async_register("test", "script", record_call)
schema = cv.SCRIPT_SCHEMA(
{
"service_template": """
{% if True %}
test.script
{% else %}
test.not_script
{% endif %}""",
"data_template": {
"hello": """
{% if is_world == 'yes' %}
world
{% else %}
not world
{% endif %}
"""
},
}
)
for run_mode in _ALL_RUN_MODES:
calls = []
if run_mode is None:
script_obj = script.Script(hass, schema)
else:
script_obj = script.Script(hass, schema, run_mode=run_mode)
assert not script_obj.can_cancel
await script_obj.async_run({"is_world": "yes"}, context=context)
await hass.async_block_till_done()
assert len(calls) == 1
assert calls[0].context is context
assert calls[0].data.get("hello") == "world"
async def test_multiple_runs_no_wait(hass):
"""Test multiple runs with no wait in script."""
logger = logging.getLogger("TEST")
async def async_simulate_long_service(service):
"""Simulate a service that takes a not insignificant time."""
@callback
def service_done_cb(event):
logger.debug("simulated service (%s:%s) done", fire, listen)
service_done.set()
calls.append(service)
fire = service.data.get("fire")
listen = service.data.get("listen")
logger.debug("simulated service (%s:%s) started", fire, listen)
service_done = asyncio.Event()
unsub = hass.bus.async_listen(listen, service_done_cb)
hass.bus.async_fire(fire)
await service_done.wait()
unsub()
hass.services.async_register("test", "script", async_simulate_long_service)
heard_event = asyncio.Event()
@callback
def heard_event_cb(event):
logger.debug("heard: %s", event)
heard_event.set()
schema = cv.SCRIPT_SCHEMA(
[
{
"service": "test.script",
"data_template": {"fire": "{{ fire1 }}", "listen": "{{ listen1 }}"},
},
{
"service": "test.script",
"data_template": {"fire": "{{ fire2 }}", "listen": "{{ listen2 }}"},
},
]
)
for run_mode in _ALL_RUN_MODES:
calls = []
heard_event.clear()
if run_mode is None:
script_obj = script.Script(hass, schema)
else:
script_obj = script.Script(hass, schema, run_mode=run_mode)
# Start script twice in such a way that second run will be started while first
# run is in the middle of the first service call.
unsub = hass.bus.async_listen("1", heard_event_cb)
logger.debug("starting 1st script")
coro = script_obj.async_run(
{"fire1": "1", "listen1": "2", "fire2": "3", "listen2": "4"}
)
if run_mode == "background":
await coro
else:
hass.async_create_task(coro)
await asyncio.wait_for(heard_event.wait(), 1)
unsub()
logger.debug("starting 2nd script")
await script_obj.async_run(
{"fire1": "2", "listen1": "3", "fire2": "4", "listen2": "4"}
)
await hass.async_block_till_done()
assert len(calls) == 4
async def test_delay_basic(hass):
"""Test the delay."""
delay_alias = "delay step"
delay_started_flag = asyncio.Event()
@callback
def delay_started_cb():
delay_started_flag.set()
delay = timedelta(milliseconds=10)
schema = cv.SCRIPT_SCHEMA({"delay": delay, "alias": delay_alias})
for run_mode in _ALL_RUN_MODES:
delay_started_flag.clear()
if run_mode is None:
script_obj = script.Script(hass, schema, change_listener=delay_started_cb)
else:
script_obj = script.Script(
hass, schema, change_listener=delay_started_cb, run_mode=run_mode
)
assert script_obj.can_cancel
try:
if run_mode == "background":
await script_obj.async_run()
else:
hass.async_create_task(script_obj.async_run())
await asyncio.wait_for(delay_started_flag.wait(), 1)
assert script_obj.is_running
assert script_obj.last_action == delay_alias
except (AssertionError, asyncio.TimeoutError):
await script_obj.async_stop()
raise
else:
if run_mode in (None, "legacy"):
future = dt_util.utcnow() + delay
async_fire_time_changed(hass, future)
await hass.async_block_till_done()
assert not script_obj.is_running
assert script_obj.last_action is None
async def test_multiple_runs_delay(hass):
"""Test multiple runs with delay in script."""
event = "test_event"
delay_started_flag = asyncio.Event()
@callback
def record_event(event):
"""Add recorded event to set."""
events.append(event)
hass.bus.async_listen(event, record_event)
@callback
def delay_started_cb():
delay_started_flag.set()
delay = timedelta(milliseconds=10)
schema = cv.SCRIPT_SCHEMA(
[
{"event": event, "event_data": {"value": 1}},
{"delay": delay},
{"event": event, "event_data": {"value": 2}},
]
)
for run_mode in _ALL_RUN_MODES:
events = []
delay_started_flag.clear()
if run_mode is None:
script_obj = script.Script(hass, schema, change_listener=delay_started_cb)
else:
script_obj = script.Script(
hass, schema, change_listener=delay_started_cb, run_mode=run_mode
)
try:
if run_mode == "background":
await script_obj.async_run()
else:
hass.async_create_task(script_obj.async_run())
await asyncio.wait_for(delay_started_flag.wait(), 1)
assert script_obj.is_running
assert len(events) == 1
assert events[-1].data["value"] == 1
except (AssertionError, asyncio.TimeoutError):
await script_obj.async_stop()
raise
else:
# Start second run of script while first run is in a delay.
await script_obj.async_run()
if run_mode in (None, "legacy"):
future = dt_util.utcnow() + delay
async_fire_time_changed(hass, future)
await hass.async_block_till_done()
assert not script_obj.is_running
if run_mode in (None, "legacy"):
assert len(events) == 2
else:
assert len(events) == 4
assert events[-3].data["value"] == 1
assert events[-2].data["value"] == 2
assert events[-1].data["value"] == 2
async def test_delay_template_ok(hass):
"""Test the delay as a template."""
delay_started_flag = asyncio.Event()
@callback
def delay_started_cb():
delay_started_flag.set()
schema = cv.SCRIPT_SCHEMA({"delay": "00:00:{{ 1 }}"})
for run_mode in _ALL_RUN_MODES:
delay_started_flag.clear()
if run_mode is None:
script_obj = script.Script(hass, schema, change_listener=delay_started_cb)
else:
script_obj = script.Script(
hass, schema, change_listener=delay_started_cb, run_mode=run_mode
)
assert script_obj.can_cancel
try:
if run_mode == "background":
await script_obj.async_run()
else:
hass.async_create_task(script_obj.async_run())
await asyncio.wait_for(delay_started_flag.wait(), 1)
assert script_obj.is_running
except (AssertionError, asyncio.TimeoutError):
await script_obj.async_stop()
raise
else:
if run_mode in (None, "legacy"):
future = dt_util.utcnow() + timedelta(seconds=1)
async_fire_time_changed(hass, future)
await hass.async_block_till_done()
assert not script_obj.is_running
async def test_delay_template_invalid(hass, caplog):
"""Test the delay as a template that fails."""
event = "test_event"
@callback
def record_event(event):
"""Add recorded event to set."""
events.append(event)
hass.bus.async_listen(event, record_event)
schema = cv.SCRIPT_SCHEMA(
[
{"event": event},
{"delay": "{{ invalid_delay }}"},
{"delay": {"seconds": 5}},
{"event": event},
]
)
for run_mode in _ALL_RUN_MODES:
events = []
if run_mode is None:
script_obj = script.Script(hass, schema)
else:
script_obj = script.Script(hass, schema, run_mode=run_mode)
start_idx = len(caplog.records)
await script_obj.async_run()
await hass.async_block_till_done()
assert any(
rec.levelname == "ERROR" and "Error rendering" in rec.message
for rec in caplog.records[start_idx:]
)
assert not script_obj.is_running
assert len(events) == 1
async def test_delay_template_complex_ok(hass):
"""Test the delay with a working complex template."""
delay_started_flag = asyncio.Event()
@callback
def delay_started_cb():
delay_started_flag.set()
milliseconds = 10
schema = cv.SCRIPT_SCHEMA({"delay": {"milliseconds": "{{ milliseconds }}"}})
for run_mode in _ALL_RUN_MODES:
delay_started_flag.clear()
if run_mode is None:
script_obj = script.Script(hass, schema, change_listener=delay_started_cb)
else:
script_obj = script.Script(
hass, schema, change_listener=delay_started_cb, run_mode=run_mode
)
assert script_obj.can_cancel
try:
coro = script_obj.async_run({"milliseconds": milliseconds})
if run_mode == "background":
await coro
else:
hass.async_create_task(coro)
await asyncio.wait_for(delay_started_flag.wait(), 1)
assert script_obj.is_running
except (AssertionError, asyncio.TimeoutError):
await script_obj.async_stop()
raise
else:
if run_mode in (None, "legacy"):
future = dt_util.utcnow() + timedelta(milliseconds=milliseconds)
async_fire_time_changed(hass, future)
await hass.async_block_till_done()
assert not script_obj.is_running
async def test_delay_template_complex_invalid(hass, caplog):
"""Test the delay with a complex template that fails."""
event = "test_event"
@callback
def record_event(event):
"""Add recorded event to set."""
events.append(event)
hass.bus.async_listen(event, record_event)
schema = cv.SCRIPT_SCHEMA(
[
{"event": event},
{"delay": {"seconds": "{{ invalid_delay }}"}},
{"delay": {"seconds": 5}},
{"event": event},
]
)
for run_mode in _ALL_RUN_MODES:
events = []
if run_mode is None:
script_obj = script.Script(hass, schema)
else:
script_obj = script.Script(hass, schema, run_mode=run_mode)
start_idx = len(caplog.records)
await script_obj.async_run()
await hass.async_block_till_done()
assert any(
rec.levelname == "ERROR" and "Error rendering" in rec.message
for rec in caplog.records[start_idx:]
)
assert not script_obj.is_running
assert len(events) == 1
async def test_cancel_delay(hass):
"""Test the cancelling while the delay is present."""
delay_started_flag = asyncio.Event()
event = "test_event"
@callback
def delay_started_cb():
delay_started_flag.set()
@callback
def record_event(event):
"""Add recorded event to set."""
events.append(event)
hass.bus.async_listen(event, record_event)
delay = timedelta(milliseconds=10)
schema = cv.SCRIPT_SCHEMA([{"delay": delay}, {"event": event}])
for run_mode in _ALL_RUN_MODES:
delay_started_flag.clear()
events = []
if run_mode is None:
script_obj = script.Script(hass, schema, change_listener=delay_started_cb)
else:
script_obj = script.Script(
hass, schema, change_listener=delay_started_cb, run_mode=run_mode
)
try:
if run_mode == "background":
await script_obj.async_run()
else:
hass.async_create_task(script_obj.async_run())
await asyncio.wait_for(delay_started_flag.wait(), 1)
assert script_obj.is_running
assert len(events) == 0
except (AssertionError, asyncio.TimeoutError):
await script_obj.async_stop()
raise
else:
await script_obj.async_stop()
assert not script_obj.is_running
# Make sure the script is really stopped.
if run_mode in (None, "legacy"):
future = dt_util.utcnow() + delay
async_fire_time_changed(hass, future)
await hass.async_block_till_done()
assert not script_obj.is_running
assert len(events) == 0
async def test_wait_template_basic(hass):
"""Test the wait template."""
wait_alias = "wait step"
wait_started_flag = asyncio.Event()
@callback
def wait_started_cb():
wait_started_flag.set()
schema = cv.SCRIPT_SCHEMA(
{
"wait_template": "{{ states.switch.test.state == 'off' }}",
"alias": wait_alias,
}
)
for run_mode in _ALL_RUN_MODES:
wait_started_flag.clear()
hass.states.async_set("switch.test", "on")
if run_mode is None:
script_obj = script.Script(hass, schema, change_listener=wait_started_cb)
else:
script_obj = script.Script(
hass, schema, change_listener=wait_started_cb, run_mode=run_mode
)
assert script_obj.can_cancel
try:
if run_mode == "background":
await script_obj.async_run()
else:
hass.async_create_task(script_obj.async_run())
await asyncio.wait_for(wait_started_flag.wait(), 1)
assert script_obj.is_running
assert script_obj.last_action == wait_alias
except (AssertionError, asyncio.TimeoutError):
await script_obj.async_stop()
raise
else:
hass.states.async_set("switch.test", "off")
await hass.async_block_till_done()
assert not script_obj.is_running
assert script_obj.last_action is None
async def test_multiple_runs_wait_template(hass):
"""Test multiple runs with wait_template in script."""
event = "test_event"
wait_started_flag = asyncio.Event()
@callback
def record_event(event):
"""Add recorded event to set."""
events.append(event)
hass.bus.async_listen(event, record_event)
@callback
def wait_started_cb():
wait_started_flag.set()
schema = cv.SCRIPT_SCHEMA(
[
{"event": event, "event_data": {"value": 1}},
{"wait_template": "{{ states.switch.test.state == 'off' }}"},
{"event": event, "event_data": {"value": 2}},
]
)
for run_mode in _ALL_RUN_MODES:
events = []
wait_started_flag.clear()
hass.states.async_set("switch.test", "on")
if run_mode is None:
script_obj = script.Script(hass, schema, change_listener=wait_started_cb)
else:
script_obj = script.Script(
hass, schema, change_listener=wait_started_cb, run_mode=run_mode
)
try:
if run_mode == "background":
await script_obj.async_run()
else:
hass.async_create_task(script_obj.async_run())
await asyncio.wait_for(wait_started_flag.wait(), 1)
assert script_obj.is_running
assert len(events) == 1
assert events[-1].data["value"] == 1
except (AssertionError, asyncio.TimeoutError):
await script_obj.async_stop()
raise
else:
# Start second run of script while first run is in wait_template.
if run_mode == "blocking":
hass.async_create_task(script_obj.async_run())
else:
await script_obj.async_run()
hass.states.async_set("switch.test", "off")
await hass.async_block_till_done()
assert not script_obj.is_running
if run_mode in (None, "legacy"):
assert len(events) == 2
else:
assert len(events) == 4
assert events[-3].data["value"] == 1
assert events[-2].data["value"] == 2
assert events[-1].data["value"] == 2
async def test_cancel_wait_template(hass):
"""Test the cancelling while wait_template is present."""
wait_started_flag = asyncio.Event()
event = "test_event"
@callback
def wait_started_cb():
wait_started_flag.set()
@callback
def record_event(event):
"""Add recorded event to set."""
events.append(event)
hass.bus.async_listen(event, record_event)
schema = cv.SCRIPT_SCHEMA(
[
{"wait_template": "{{ states.switch.test.state == 'off' }}"},
{"event": event},
]
)
for run_mode in _ALL_RUN_MODES:
wait_started_flag.clear()
events = []
hass.states.async_set("switch.test", "on")
if run_mode is None:
script_obj = script.Script(hass, schema, change_listener=wait_started_cb)
else:
script_obj = script.Script(
hass, schema, change_listener=wait_started_cb, run_mode=run_mode
)
try:
if run_mode == "background":
await script_obj.async_run()
else:
hass.async_create_task(script_obj.async_run())
await asyncio.wait_for(wait_started_flag.wait(), 1)
assert script_obj.is_running
assert len(events) == 0
except (AssertionError, asyncio.TimeoutError):
await script_obj.async_stop()
raise
else:
await script_obj.async_stop()
assert not script_obj.is_running
# Make sure the script is really stopped.
hass.states.async_set("switch.test", "off")
await hass.async_block_till_done()
assert not script_obj.is_running
assert len(events) == 0
async def test_wait_template_not_schedule(hass):
"""Test the wait template with correct condition."""
event = "test_event"
@callback
def record_event(event):
"""Add recorded event to set."""
events.append(event)
hass.bus.async_listen(event, record_event)
hass.states.async_set("switch.test", "on")
schema = cv.SCRIPT_SCHEMA(
[
{"event": event},
{"wait_template": "{{ states.switch.test.state == 'on' }}"},
{"event": event},
]
)
for run_mode in _ALL_RUN_MODES:
events = []
if run_mode is None:
script_obj = script.Script(hass, schema)
else:
script_obj = script.Script(hass, schema, run_mode=run_mode)
await script_obj.async_run()
await hass.async_block_till_done()
assert not script_obj.is_running
assert len(events) == 2
async def test_wait_template_timeout_halt(hass):
"""Test the wait template, halt on timeout."""
event = "test_event"
wait_started_flag = asyncio.Event()
@callback
def record_event(event):
"""Add recorded event to set."""
events.append(event)
hass.bus.async_listen(event, record_event)
@callback
def wait_started_cb():
wait_started_flag.set()
hass.states.async_set("switch.test", "on")
timeout = timedelta(milliseconds=10)
schema = cv.SCRIPT_SCHEMA(
[
{
"wait_template": "{{ states.switch.test.state == 'off' }}",
"continue_on_timeout": False,
"timeout": timeout,
},
{"event": event},
]
)
for run_mode in _ALL_RUN_MODES:
events = []
wait_started_flag.clear()
if run_mode is None:
script_obj = script.Script(hass, schema, change_listener=wait_started_cb)
else:
script_obj = script.Script(
hass, schema, change_listener=wait_started_cb, run_mode=run_mode
)
try:
if run_mode == "background":
await script_obj.async_run()
else:
hass.async_create_task(script_obj.async_run())
await asyncio.wait_for(wait_started_flag.wait(), 1)
assert script_obj.is_running
assert len(events) == 0
except (AssertionError, asyncio.TimeoutError):
await script_obj.async_stop()
raise
else:
if run_mode in (None, "legacy"):
future = dt_util.utcnow() + timeout
async_fire_time_changed(hass, future)
await hass.async_block_till_done()
assert not script_obj.is_running
assert len(events) == 0
async def test_wait_template_timeout_continue(hass):
"""Test the wait template with continuing the script."""
event = "test_event"
wait_started_flag = asyncio.Event()
@callback
def record_event(event):
"""Add recorded event to set."""
events.append(event)
hass.bus.async_listen(event, record_event)
@callback
def wait_started_cb():
wait_started_flag.set()
hass.states.async_set("switch.test", "on")
timeout = timedelta(milliseconds=10)
schema = cv.SCRIPT_SCHEMA(
[
{
"wait_template": "{{ states.switch.test.state == 'off' }}",
"continue_on_timeout": True,
"timeout": timeout,
},
{"event": event},
]
)
for run_mode in _ALL_RUN_MODES:
events = []
wait_started_flag.clear()
if run_mode is None:
script_obj = script.Script(hass, schema, change_listener=wait_started_cb)
else:
script_obj = script.Script(
hass, schema, change_listener=wait_started_cb, run_mode=run_mode
)
try:
if run_mode == "background":
await script_obj.async_run()
else:
hass.async_create_task(script_obj.async_run())
await asyncio.wait_for(wait_started_flag.wait(), 1)
assert script_obj.is_running
assert len(events) == 0
except (AssertionError, asyncio.TimeoutError):
await script_obj.async_stop()
raise
else:
if run_mode in (None, "legacy"):
future = dt_util.utcnow() + timeout
async_fire_time_changed(hass, future)
await hass.async_block_till_done()
assert not script_obj.is_running
assert len(events) == 1
async def test_wait_template_timeout_default(hass):
"""Test the wait template with default continue."""
event = "test_event"
wait_started_flag = asyncio.Event()
@callback
def record_event(event):
"""Add recorded event to set."""
events.append(event)
hass.bus.async_listen(event, record_event)
@callback
def wait_started_cb():
wait_started_flag.set()
hass.states.async_set("switch.test", "on")
timeout = timedelta(milliseconds=10)
schema = cv.SCRIPT_SCHEMA(
[
{
"wait_template": "{{ states.switch.test.state == 'off' }}",
"timeout": timeout,
},
{"event": event},
]
)
for run_mode in _ALL_RUN_MODES:
events = []
wait_started_flag.clear()
if run_mode is None:
script_obj = script.Script(hass, schema, change_listener=wait_started_cb)
else:
script_obj = script.Script(
hass, schema, change_listener=wait_started_cb, run_mode=run_mode
)
try:
if run_mode == "background":
await script_obj.async_run()
else:
hass.async_create_task(script_obj.async_run())
await asyncio.wait_for(wait_started_flag.wait(), 1)
assert script_obj.is_running
assert len(events) == 0
except (AssertionError, asyncio.TimeoutError):
await script_obj.async_stop()
raise
else:
if run_mode in (None, "legacy"):
future = dt_util.utcnow() + timeout
async_fire_time_changed(hass, future)
await hass.async_block_till_done()
assert not script_obj.is_running
assert len(events) == 1
async def test_wait_template_variables(hass):
"""Test the wait template with variables."""
wait_started_flag = asyncio.Event()
@callback
def wait_started_cb():
wait_started_flag.set()
schema = cv.SCRIPT_SCHEMA({"wait_template": "{{ is_state(data, 'off') }}"})
for run_mode in _ALL_RUN_MODES:
wait_started_flag.clear()
hass.states.async_set("switch.test", "on")
if run_mode is None:
script_obj = script.Script(hass, schema, change_listener=wait_started_cb)
else:
script_obj = script.Script(
hass, schema, change_listener=wait_started_cb, run_mode=run_mode
)
assert script_obj.can_cancel
try:
coro = script_obj.async_run({"data": "switch.test"})
if run_mode == "background":
await coro
else:
hass.async_create_task(coro)
await asyncio.wait_for(wait_started_flag.wait(), 1)
assert script_obj.is_running
except (AssertionError, asyncio.TimeoutError):
await script_obj.async_stop()
raise
else:
hass.states.async_set("switch.test", "off")
await hass.async_block_till_done()
assert not script_obj.is_running
async def test_condition_basic(hass):
"""Test if we can use conditions in a script."""
event = "test_event"
events = []
@callback
def record_event(event):
"""Add recorded event to set."""
events.append(event)
hass.bus.async_listen(event, record_event)
schema = cv.SCRIPT_SCHEMA(
[
{"event": event},
{
"condition": "template",
"value_template": "{{ states.test.entity.state == 'hello' }}",
},
{"event": event},
]
)
for run_mode in _ALL_RUN_MODES:
events = []
hass.states.async_set("test.entity", "hello")
if run_mode is None:
script_obj = script.Script(hass, schema)
else:
script_obj = script.Script(hass, schema, run_mode=run_mode)
assert not script_obj.can_cancel
await script_obj.async_run()
await hass.async_block_till_done()
assert len(events) == 2
hass.states.async_set("test.entity", "goodbye")
await script_obj.async_run()
await hass.async_block_till_done()
assert len(events) == 3
@asynctest.patch("homeassistant.helpers.script.condition.async_from_config")
async def test_condition_created_once(async_from_config, hass):
"""Test that the conditions do not get created multiple times."""
event = "test_event"
events = []
@callback
def record_event(event):
"""Add recorded event to set."""
events.append(event)
hass.bus.async_listen(event, record_event)
hass.states.async_set("test.entity", "hello")
script_obj = script.Script(
hass,
cv.SCRIPT_SCHEMA(
[
{"event": event},
{
"condition": "template",
"value_template": '{{ states.test.entity.state == "hello" }}',
},
{"event": event},
]
),
)
await script_obj.async_run()
await script_obj.async_run()
await hass.async_block_till_done()
assert async_from_config.call_count == 1
assert len(script_obj._config_cache) == 1
async def test_condition_all_cached(hass):
"""Test that multiple conditions get cached."""
event = "test_event"
events = []
@callback
def record_event(event):
"""Add recorded event to set."""
events.append(event)
hass.bus.async_listen(event, record_event)
hass.states.async_set("test.entity", "hello")
script_obj = script.Script(
hass,
cv.SCRIPT_SCHEMA(
[
{"event": event},
{
"condition": "template",
"value_template": '{{ states.test.entity.state == "hello" }}',
},
{
"condition": "template",
"value_template": '{{ states.test.entity.state != "hello" }}',
},
{"event": event},
]
),
)
await script_obj.async_run()
await hass.async_block_till_done()
assert len(script_obj._config_cache) == 2
async def test_last_triggered(hass):
"""Test the last_triggered."""
event = "test_event"
schema = cv.SCRIPT_SCHEMA({"event": event})
for run_mode in _ALL_RUN_MODES:
if run_mode is None:
script_obj = script.Script(hass, schema)
else:
script_obj = script.Script(hass, schema, run_mode=run_mode)
assert script_obj.last_triggered is None
time = dt_util.utcnow()
with mock.patch("homeassistant.helpers.script.utcnow", return_value=time):
await script_obj.async_run()
await hass.async_block_till_done()
assert script_obj.last_triggered == time
async def test_propagate_error_service_not_found(hass):
"""Test that a script aborts when a service is not found."""
event = "test_event"
@callback
def record_event(event):
events.append(event)
hass.bus.async_listen(event, record_event)
schema = cv.SCRIPT_SCHEMA([{"service": "test.script"}, {"event": event}])
# Work on a copy so we don't mutate the module-level _ALL_RUN_MODES list.
run_modes = [mode for mode in _ALL_RUN_MODES if mode != "background"]
for run_mode in run_modes:
events = []
if run_mode is None:
script_obj = script.Script(hass, schema)
else:
script_obj = script.Script(hass, schema, run_mode=run_mode)
with pytest.raises(exceptions.ServiceNotFound):
await script_obj.async_run()
assert len(events) == 0
assert not script_obj.is_running
async def test_propagate_error_invalid_service_data(hass):
"""Test that a script aborts when we send invalid service data."""
event = "test_event"
@callback
def record_event(event):
events.append(event)
hass.bus.async_listen(event, record_event)
@callback
def record_call(service):
"""Add recorded event to set."""
calls.append(service)
hass.services.async_register(
"test", "script", record_call, schema=vol.Schema({"text": str})
)
schema = cv.SCRIPT_SCHEMA(
[{"service": "test.script", "data": {"text": 1}}, {"event": event}]
)
# Work on a copy so we don't mutate the module-level _ALL_RUN_MODES list.
run_modes = [mode for mode in _ALL_RUN_MODES if mode != "background"]
for run_mode in run_modes:
events = []
calls = []
if run_mode is None:
script_obj = script.Script(hass, schema)
else:
script_obj = script.Script(hass, schema, run_mode=run_mode)
with pytest.raises(vol.Invalid):
await script_obj.async_run()
assert len(events) == 0
assert len(calls) == 0
assert not script_obj.is_running
async def test_propagate_error_service_exception(hass):
"""Test that a script aborts when a service throws an exception."""
event = "test_event"
@callback
def record_event(event):
events.append(event)
hass.bus.async_listen(event, record_event)
@callback
def record_call(service):
"""Add recorded event to set."""
raise ValueError("BROKEN")
hass.services.async_register("test", "script", record_call)
schema = cv.SCRIPT_SCHEMA([{"service": "test.script"}, {"event": event}])
# Work on a copy so we don't mutate the module-level _ALL_RUN_MODES list.
run_modes = [mode for mode in _ALL_RUN_MODES if mode != "background"]
for run_mode in run_modes:
events = []
if run_mode is None:
script_obj = script.Script(hass, schema)
else:
script_obj = script.Script(hass, schema, run_mode=run_mode)
with pytest.raises(ValueError):
await script_obj.async_run()
assert len(events) == 0
assert not script_obj.is_running
async def test_referenced_entities():
"""Test referenced entities."""
script_obj = script.Script(
None,
cv.SCRIPT_SCHEMA(
[
{
"service": "test.script",
"data": {"entity_id": "light.service_not_list"},
},
{
"service": "test.script",
"data": {"entity_id": ["light.service_list"]},
},
{
"condition": "state",
"entity_id": "sensor.condition",
"state": "100",
},
{"service": "test.script", "data": {"without": "entity_id"}},
{"scene": "scene.hello"},
{"event": "test_event"},
{"delay": "{{ delay_period }}"},
]
),
)
assert script_obj.referenced_entities == {
"light.service_not_list",
"light.service_list",
"sensor.condition",
"scene.hello",
}
# Test we cache results.
assert script_obj.referenced_entities is script_obj.referenced_entities
async def test_referenced_devices():
"""Test referenced entities."""
script_obj = script.Script(
None,
cv.SCRIPT_SCHEMA(
[
{"domain": "light", "device_id": "script-dev-id"},
{
"condition": "device",
"device_id": "condition-dev-id",
"domain": "switch",
},
]
),
)
assert script_obj.referenced_devices == {"script-dev-id", "condition-dev-id"}
# Test we cache results.
assert script_obj.referenced_devices is script_obj.referenced_devices
async def test_if_running_with_legacy_run_mode(hass, caplog):
"""Test using if_running with run_mode='legacy'."""
# TODO: REMOVE
if _ALL_RUN_MODES == [None]:
return
with pytest.raises(exceptions.HomeAssistantError):
script.Script(
hass,
[],
if_running="ignore",
run_mode="legacy",
logger=logging.getLogger("TEST"),
)
assert any(
rec.levelname == "ERROR"
and rec.name == "TEST"
and all(text in rec.message for text in ("if_running", "legacy"))
for rec in caplog.records
)
async def test_if_running_ignore(hass, caplog):
"""Test overlapping runs with if_running='ignore'."""
# TODO: REMOVE
if _ALL_RUN_MODES == [None]:
return
event = "test_event"
events = []
wait_started_flag = asyncio.Event()
@callback
def record_event(event):
"""Add recorded event to set."""
events.append(event)
hass.bus.async_listen(event, record_event)
@callback
def wait_started_cb():
wait_started_flag.set()
hass.states.async_set("switch.test", "on")
script_obj = script.Script(
hass,
cv.SCRIPT_SCHEMA(
[
{"event": event, "event_data": {"value": 1}},
{"wait_template": "{{ states.switch.test.state == 'off' }}"},
{"event": event, "event_data": {"value": 2}},
]
),
change_listener=wait_started_cb,
if_running="ignore",
run_mode="background",
logger=logging.getLogger("TEST"),
)
try:
await script_obj.async_run()
await asyncio.wait_for(wait_started_flag.wait(), 1)
assert script_obj.is_running
assert len(events) == 1
assert events[0].data["value"] == 1
# Start second run of script while first run is suspended in wait_template.
# This should ignore second run.
await script_obj.async_run()
assert script_obj.is_running
assert any(
rec.levelname == "INFO" and rec.name == "TEST" and "Skipping" in rec.message
for rec in caplog.records
)
except (AssertionError, asyncio.TimeoutError):
await script_obj.async_stop()
raise
else:
hass.states.async_set("switch.test", "off")
await hass.async_block_till_done()
assert not script_obj.is_running
assert len(events) == 2
assert events[1].data["value"] == 2
async def test_if_running_error(hass, caplog):
"""Test overlapping runs with if_running='error'."""
# TODO: REMOVE
if _ALL_RUN_MODES == [None]:
return
event = "test_event"
events = []
wait_started_flag = asyncio.Event()
@callback
def record_event(event):
"""Add recorded event to set."""
events.append(event)
hass.bus.async_listen(event, record_event)
@callback
def wait_started_cb():
wait_started_flag.set()
hass.states.async_set("switch.test", "on")
script_obj = script.Script(
hass,
cv.SCRIPT_SCHEMA(
[
{"event": event, "event_data": {"value": 1}},
{"wait_template": "{{ states.switch.test.state == 'off' }}"},
{"event": event, "event_data": {"value": 2}},
]
),
change_listener=wait_started_cb,
if_running="error",
run_mode="background",
logger=logging.getLogger("TEST"),
)
try:
await script_obj.async_run()
await asyncio.wait_for(wait_started_flag.wait(), 1)
assert script_obj.is_running
assert len(events) == 1
assert events[0].data["value"] == 1
# Start second run of script while first run is suspended in wait_template.
# This should cause an error.
with pytest.raises(exceptions.HomeAssistantError):
await script_obj.async_run()
assert script_obj.is_running
assert any(
rec.levelname == "ERROR"
and rec.name == "TEST"
and "Already running" in rec.message
for rec in caplog.records
)
except (AssertionError, asyncio.TimeoutError):
await script_obj.async_stop()
raise
else:
hass.states.async_set("switch.test", "off")
await hass.async_block_till_done()
assert not script_obj.is_running
assert len(events) == 2
assert events[1].data["value"] == 2


async def test_if_running_restart(hass, caplog):
"""Test overlapping runs with if_running='restart'."""
# TODO: REMOVE
if _ALL_RUN_MODES == [None]:
return
event = "test_event"
events = []
wait_started_flag = asyncio.Event()
@callback
def record_event(event):
"""Add recorded event to set."""
events.append(event)
hass.bus.async_listen(event, record_event)
@callback
def wait_started_cb():
wait_started_flag.set()
hass.states.async_set("switch.test", "on")
script_obj = script.Script(
hass,
cv.SCRIPT_SCHEMA(
[
{"event": event, "event_data": {"value": 1}},
{"wait_template": "{{ states.switch.test.state == 'off' }}"},
{"event": event, "event_data": {"value": 2}},
]
),
change_listener=wait_started_cb,
if_running="restart",
run_mode="background",
logger=logging.getLogger("TEST"),
)
try:
await script_obj.async_run()
await asyncio.wait_for(wait_started_flag.wait(), 1)
assert script_obj.is_running
assert len(events) == 1
assert events[0].data["value"] == 1
# Start second run of script while first run is suspended in wait_template.
# This should stop first run then start a new run.
wait_started_flag.clear()
await script_obj.async_run()
await asyncio.wait_for(wait_started_flag.wait(), 1)
assert script_obj.is_running
assert len(events) == 2
assert events[1].data["value"] == 1
assert any(
rec.levelname == "INFO"
and rec.name == "TEST"
and "Restarting" in rec.message
for rec in caplog.records
)
except (AssertionError, asyncio.TimeoutError):
await script_obj.async_stop()
raise
else:
hass.states.async_set("switch.test", "off")
await hass.async_block_till_done()
assert not script_obj.is_running
assert len(events) == 3
assert events[2].data["value"] == 2


async def test_if_running_parallel(hass):
"""Test overlapping runs with if_running='parallel'."""
# TODO: REMOVE
if _ALL_RUN_MODES == [None]:
return
event = "test_event"
events = []
wait_started_flag = asyncio.Event()
@callback
def record_event(event):
"""Add recorded event to set."""
events.append(event)
hass.bus.async_listen(event, record_event)
@callback
def wait_started_cb():
wait_started_flag.set()
hass.states.async_set("switch.test", "on")
script_obj = script.Script(
hass,
cv.SCRIPT_SCHEMA(
[
{"event": event, "event_data": {"value": 1}},
{"wait_template": "{{ states.switch.test.state == 'off' }}"},
{"event": event, "event_data": {"value": 2}},
]
),
change_listener=wait_started_cb,
if_running="parallel",
run_mode="background",
logger=logging.getLogger("TEST"),
)
try:
await script_obj.async_run()
await asyncio.wait_for(wait_started_flag.wait(), 1)
assert script_obj.is_running
assert len(events) == 1
assert events[0].data["value"] == 1
# Start second run of script while first run is suspended in wait_template.
# This should start a new, independent run.
wait_started_flag.clear()
await script_obj.async_run()
await asyncio.wait_for(wait_started_flag.wait(), 1)
assert script_obj.is_running
assert len(events) == 2
assert events[1].data["value"] == 1
except (AssertionError, asyncio.TimeoutError):
await script_obj.async_stop()
raise
else:
hass.states.async_set("switch.test", "off")
await hass.async_block_till_done()
assert not script_obj.is_running
assert len(events) == 4
assert events[2].data["value"] == 2
assert events[3].data["value"] == 2
# --- tests/test_mail_admin.py (repo: yaal-fr/canaille, license: MIT) ---
def test_reset_html(testclient, logged_admin):
testclient.get("/admin/mail/reset.html")


def test_reset_txt(testclient, logged_admin):
testclient.get("/admin/mail/reset.txt")
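The two smoke tests above differ only in the URL they fetch. As a sketch, the same check can be driven by one loop; `FakeClient` below is a stand-in, since the real `testclient` and `logged_admin` fixtures live elsewhere in the suite:

```python
class FakeClient:
    """Stand-in for the real test client fixture."""

    def __init__(self):
        self.visited = []

    def get(self, path):
        self.visited.append(path)
        return 200  # pretend every template renders successfully


def smoke_test_mail_templates(client):
    for path in ("/admin/mail/reset.html", "/admin/mail/reset.txt"):
        assert client.get(path) == 200


client = FakeClient()
smoke_test_mail_templates(client)
print(client.visited)
# ['/admin/mail/reset.html', '/admin/mail/reset.txt']
```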
# --- cisco-ios-xr/ydk/models/_deviate/_cisco_xr_openconfig_mpls_deviations.py (repo: tkamata-test/ydk-py, licenses: ECL-2.0, Apache-2.0) ---
from enum import Enum
from ydk._core._dm_meta_info import _MetaInfoClassMember, _MetaInfoClass, _MetaInfoEnum
from ydk.types import Empty, YList, DELETE, Decimal64, FixedBitsDict
from ydk._core._dm_meta_info import ATTRIBUTE, REFERENCE_CLASS, REFERENCE_LIST, REFERENCE_LEAFLIST, REFERENCE_IDENTITY_CLASS, REFERENCE_ENUM_CLASS, REFERENCE_BITS, REFERENCE_UNION
from ydk.providers._importer import _yang_ns
_deviation_table = {
'Mpls.Global_.Config.null_label' : {
'deviation_typ' : 'not_supported',
},
'Mpls.Global_.MplsInterfaceAttributes.Interface' : {
'deviation_typ' : 'not_supported',
},
'Mpls.Global_.State.null_label' : {
'deviation_typ' : 'not_supported',
},
'Mpls.Lsps.ConstrainedPath.Tunnel.Bandwidth.Config.specification_type' : {
'deviation_typ' : 'not_supported',
},
'Mpls.Lsps.ConstrainedPath.Tunnel.Bandwidth.State.specification_type' : {
'deviation_typ' : 'not_supported',
},
'Mpls.Lsps.ConstrainedPath.Tunnel.Config.description' : {
'deviation_typ' : 'not_supported',
},
'Mpls.Lsps.ConstrainedPath.Tunnel.Config.preference' : {
'deviation_typ' : 'not_supported',
},
'Mpls.Lsps.ConstrainedPath.Tunnel.Config.reoptimize_timer' : {
'deviation_typ' : 'not_supported',
},
'Mpls.Lsps.ConstrainedPath.Tunnel.Config.signaling_protocol' : {
'deviation_typ' : 'not_supported',
},
'Mpls.Lsps.ConstrainedPath.Tunnel.Config.source' : {
'deviation_typ' : 'not_supported',
},
'Mpls.Lsps.ConstrainedPath.Tunnel.P2PTunnelAttributes.P2PPrimaryPaths.AdminGroups.State.exclude_group' : {
'deviation_typ' : 'not_supported',
},
'Mpls.Lsps.ConstrainedPath.Tunnel.P2PTunnelAttributes.P2PPrimaryPaths.AdminGroups.State.include_all_group' : {
'deviation_typ' : 'not_supported',
},
'Mpls.Lsps.ConstrainedPath.Tunnel.P2PTunnelAttributes.P2PPrimaryPaths.AdminGroups.State.include_any_group' : {
'deviation_typ' : 'not_supported',
},
'Mpls.Lsps.ConstrainedPath.Tunnel.P2PTunnelAttributes.P2PPrimaryPaths.CandidateSecondaryPaths.CandidateSecondaryPath' : {
'deviation_typ' : 'not_supported',
},
'Mpls.Lsps.ConstrainedPath.Tunnel.P2PTunnelAttributes.P2PPrimaryPaths.Config.cspf_tiebreaker' : {
'deviation_typ' : 'not_supported',
},
'Mpls.Lsps.ConstrainedPath.Tunnel.P2PTunnelAttributes.P2PPrimaryPaths.Config.hold_priority' : {
'deviation_typ' : 'not_supported',
},
'Mpls.Lsps.ConstrainedPath.Tunnel.P2PTunnelAttributes.P2PPrimaryPaths.Config.retry_timer' : {
'deviation_typ' : 'not_supported',
},
'Mpls.Lsps.ConstrainedPath.Tunnel.P2PTunnelAttributes.P2PPrimaryPaths.Config.setup_priority' : {
'deviation_typ' : 'not_supported',
},
'Mpls.Lsps.ConstrainedPath.Tunnel.P2PTunnelAttributes.P2PPrimaryPaths.Config.use_cspf' : {
'deviation_typ' : 'not_supported',
},
'Mpls.Lsps.ConstrainedPath.Tunnel.P2PTunnelAttributes.P2PPrimaryPaths.State.cspf_tiebreaker' : {
'deviation_typ' : 'not_supported',
},
'Mpls.Lsps.ConstrainedPath.Tunnel.P2PTunnelAttributes.P2PPrimaryPaths.State.hold_priority' : {
'deviation_typ' : 'not_supported',
},
'Mpls.Lsps.ConstrainedPath.Tunnel.P2PTunnelAttributes.P2PPrimaryPaths.State.retry_timer' : {
'deviation_typ' : 'not_supported',
},
'Mpls.Lsps.ConstrainedPath.Tunnel.P2PTunnelAttributes.P2PPrimaryPaths.State.setup_priority' : {
'deviation_typ' : 'not_supported',
},
'Mpls.Lsps.ConstrainedPath.Tunnel.P2PTunnelAttributes.P2PPrimaryPaths.State.use_cspf' : {
'deviation_typ' : 'not_supported',
},
'Mpls.Lsps.ConstrainedPath.Tunnel.P2PTunnelAttributes.P2PSecondaryPaths' : {
'deviation_typ' : 'not_supported',
},
'Mpls.Lsps.ConstrainedPath.Tunnel.State.preference' : {
'deviation_typ' : 'not_supported',
},
'Mpls.Lsps.UnconstrainedPath.PathSetupProtocol.Ldp.Tunnel.P2PLsp.fec_address' : {
'deviation_typ' : 'not_supported',
},
'Mpls.Lsps.UnconstrainedPath.PathSetupProtocol.Ldp.Tunnel.ldp_type' : {
'deviation_typ' : 'not_supported',
},
'Mpls.Lsps.UnconstrainedPath.PathSetupProtocol.Ldp.Tunnel.tunnel_type' : {
'deviation_typ' : 'not_supported',
},
'Mpls.Lsps.UnconstrainedPath.PathSetupProtocol.SegmentRouting.Tunnel.P2PLsp.Fec' : {
'deviation_typ' : 'not_supported',
},
'Mpls.Lsps.UnconstrainedPath.PathSetupProtocol.SegmentRouting.Tunnel.tunnel_type' : {
'deviation_typ' : 'not_supported',
},
'Mpls.SignalingProtocols.RsvpTe.Global_.Hellos.Config.refresh_reduction' : {
'deviation_typ' : 'not_supported',
},
'Mpls.SignalingProtocols.RsvpTe.Global_.Hellos.State.refresh_reduction' : {
'deviation_typ' : 'not_supported',
},
'Mpls.SignalingProtocols.RsvpTe.InterfaceAttributes.Interface.Hellos.Config.hello_interval' : {
'deviation_typ' : 'not_supported',
},
'Mpls.SignalingProtocols.RsvpTe.InterfaceAttributes.Interface.Hellos.State.hello_interval' : {
'deviation_typ' : 'not_supported',
},
'Mpls.SignalingProtocols.RsvpTe.InterfaceAttributes.Interface.Protection.Config.bypass_optimize_interval' : {
'deviation_typ' : 'not_supported',
},
'Mpls.SignalingProtocols.RsvpTe.InterfaceAttributes.Interface.Protection.State.bypass_optimize_interval' : {
'deviation_typ' : 'not_supported',
},
'Mpls.SignalingProtocols.RsvpTe.InterfaceAttributes.Interface.State.Bandwidth' : {
'deviation_typ' : 'not_supported',
},
'Mpls.SignalingProtocols.RsvpTe.InterfaceAttributes.Interface.State.active_reservation_count' : {
'deviation_typ' : 'not_supported',
},
'Mpls.SignalingProtocols.RsvpTe.InterfaceAttributes.Interface.State.highwater_mark' : {
'deviation_typ' : 'not_supported',
},
'Mpls.SignalingProtocols.RsvpTe.Neighbors.State.Neighbor' : {
'deviation_typ' : 'not_supported',
},
'Mpls.SignalingProtocols.RsvpTe.Sessions.State.Session' : {
'deviation_typ' : 'not_supported',
},
'Mpls.SignalingProtocols.SegmentRouting.Interfaces' : {
'deviation_typ' : 'not_supported',
},
'Mpls.SignalingProtocols.SegmentRouting.Srgb' : {
'deviation_typ' : 'not_supported',
},
'Mpls.TeGlobalAttributes.IgpFloodingBandwidth.Config.down_thresholds' : {
'deviation_typ' : 'not_supported',
},
'Mpls.TeGlobalAttributes.IgpFloodingBandwidth.Config.threshold_specification' : {
'deviation_typ' : 'not_supported',
},
'Mpls.TeGlobalAttributes.IgpFloodingBandwidth.Config.threshold_type' : {
'deviation_typ' : 'not_supported',
},
'Mpls.TeGlobalAttributes.IgpFloodingBandwidth.Config.up_down_thresholds' : {
'deviation_typ' : 'not_supported',
},
'Mpls.TeGlobalAttributes.IgpFloodingBandwidth.Config.up_thresholds' : {
'deviation_typ' : 'not_supported',
},
'Mpls.TeGlobalAttributes.IgpFloodingBandwidth.State.down_thresholds' : {
'deviation_typ' : 'not_supported',
},
'Mpls.TeGlobalAttributes.IgpFloodingBandwidth.State.threshold_specification' : {
'deviation_typ' : 'not_supported',
},
'Mpls.TeGlobalAttributes.IgpFloodingBandwidth.State.threshold_type' : {
'deviation_typ' : 'not_supported',
},
'Mpls.TeGlobalAttributes.IgpFloodingBandwidth.State.up_down_thresholds' : {
'deviation_typ' : 'not_supported',
},
'Mpls.TeGlobalAttributes.IgpFloodingBandwidth.State.up_thresholds' : {
'deviation_typ' : 'not_supported',
},
'Mpls.TeGlobalAttributes.Srlg.Srlg.Config.flooding_type' : {
'deviation_typ' : 'not_supported',
},
'Mpls.TeGlobalAttributes.Srlg.Srlg.State.flooding_type' : {
'deviation_typ' : 'not_supported',
},
'Mpls.TeInterfaceAttributes.Interface.IgpFloodingBandwidth.Config.delta_percentage' : {
'deviation_typ' : 'not_supported',
},
'Mpls.TeInterfaceAttributes.Interface.IgpFloodingBandwidth.Config.threshold_specification' : {
'deviation_typ' : 'not_supported',
},
'Mpls.TeInterfaceAttributes.Interface.IgpFloodingBandwidth.Config.threshold_type' : {
'deviation_typ' : 'not_supported',
},
'Mpls.TeInterfaceAttributes.Interface.IgpFloodingBandwidth.Config.up_down_thresholds' : {
'deviation_typ' : 'not_supported',
},
'Mpls.TeInterfaceAttributes.Interface.IgpFloodingBandwidth.State.delta_percentage' : {
'deviation_typ' : 'not_supported',
},
'Mpls.TeInterfaceAttributes.Interface.IgpFloodingBandwidth.State.down_thresholds' : {
'deviation_typ' : 'not_supported',
},
'Mpls.TeInterfaceAttributes.Interface.IgpFloodingBandwidth.State.threshold_specification' : {
'deviation_typ' : 'not_supported',
},
'Mpls.TeInterfaceAttributes.Interface.IgpFloodingBandwidth.State.threshold_type' : {
'deviation_typ' : 'not_supported',
},
'Mpls.TeInterfaceAttributes.Interface.IgpFloodingBandwidth.State.up_down_thresholds' : {
'deviation_typ' : 'not_supported',
},
'Mpls.TeInterfaceAttributes.Interface.IgpFloodingBandwidth.State.up_thresholds' : {
'deviation_typ' : 'not_supported',
},
'Mpls.TeInterfaceAttributes.Interface.State.admin_group' : {
'deviation_typ' : 'not_supported',
},
'Mpls.TeInterfaceAttributes.Interface.State.srlg_membership' : {
'deviation_typ' : 'not_supported',
},
}
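A table like `_deviation_table` above can be consulted before touching a YANG node. The helper below is illustrative only (`is_supported` is not part of the ydk API):

```python
def is_supported(deviation_table, node_path):
    """Return False when the node is deviated away as 'not_supported'."""
    entry = deviation_table.get(node_path)
    return entry is None or entry.get('deviation_typ') != 'not_supported'


# Example against a two-entry excerpt of the table above:
table = {
    'Mpls.Global_.Config.null_label': {'deviation_typ': 'not_supported'},
    'Mpls.SignalingProtocols.SegmentRouting.Srgb': {'deviation_typ': 'not_supported'},
}
print(is_supported(table, 'Mpls.Global_.Config.null_label'))  # False
print(is_supported(table, 'Mpls.Global_.Config'))             # True
```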
# --- tests/test_torchtrain/test_train_loop/test_e2e_train_loop/test_end2end_train_loop_checkpoint_cont_training_pred.py (repo: mv1388/aitoolbox, license: MIT) ---
import unittest
import os
import shutil
import random
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data.dataloader import DataLoader
from torch.utils.data.dataset import TensorDataset
from aitoolbox import TrainLoopCheckpointEndSave, ClassificationResultPackage, TrainLoop
from aitoolbox.torchtrain.model import TTModel
from aitoolbox.experiment.local_load.local_model_load import PyTorchLocalModelLoader
from aitoolbox.torchtrain.callbacks.model_load import ModelLoadContinueTraining
from aitoolbox.torchtrain.schedulers.basic import StepLRScheduler
from aitoolbox.torchtrain.schedulers.warmup import LinearWithWarmupScheduler
THIS_DIR = os.path.dirname(os.path.abspath(__file__))
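The tests below call `self.set_seeds()`, which is defined further down in this file (outside this excerpt). A typical implementation fixes every RNG in play; the sketch below seeds only the stdlib RNG, while the real helper would presumably also seed `numpy` and `torch`:

```python
import random


def set_seeds(seed=0):
    """Fix the stdlib RNG (a full version would also seed numpy and torch)."""
    random.seed(seed)


set_seeds(42)
first = [random.random() for _ in range(3)]
set_seeds(42)
second = [random.random() for _ in range(3)]
assert first == second  # identical streams -> reproducible runs
```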


class FFNet(TTModel):
def __init__(self):
super().__init__()
self.ff_1 = nn.Linear(50, 100)
self.ff_2 = nn.Linear(100, 100)
self.ff_3 = nn.Linear(100, 10)
def forward(self, batch_data):
ff_out = F.relu(self.ff_1(batch_data))
ff_out = F.relu(self.ff_2(ff_out))
ff_out = self.ff_3(ff_out)
out_softmax = F.log_softmax(ff_out, dim=1)
return out_softmax
def get_loss(self, batch_data, criterion, device):
input_data, target = batch_data
input_data = input_data.to(device)
target = target.to(device)
predicted = self(input_data)
loss = criterion(predicted, target)
return loss
def get_predictions(self, batch_data, device):
input_data, target = batch_data
input_data = input_data.to(device)
predicted = self(input_data).argmax(dim=1, keepdim=False)
return predicted.cpu(), target, {'example_feat_sum': input_data.sum(dim=1).tolist()}
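`FFNet` splits training and evaluation into `get_loss` and `get_predictions`, the contract `TTModel` expects. The same split can be sketched without torch; `TinyModel` and `mse` are illustrative names, not aitoolbox API:

```python
class TinyModel:
    """Pure-Python stand-in for the get_loss / get_predictions contract."""

    def __init__(self, weight=2.0):
        self.weight = weight

    def forward(self, xs):
        return [self.weight * x for x in xs]

    def get_loss(self, batch, criterion):
        xs, targets = batch
        return criterion(self.forward(xs), targets)

    def get_predictions(self, batch):
        xs, targets = batch
        return self.forward(xs), targets


def mse(preds, targets):
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)


model = TinyModel()
batch = ([1.0, 2.0], [2.0, 4.0])
print(model.get_loss(batch, mse))       # 0.0
print(model.get_predictions(batch)[0])  # [2.0, 4.0]
```

Keeping the two paths separate is what lets a train loop evaluate losses and collect predictions on any split without duplicating batching logic.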


class TestEnd2EndTrainLoopModelOptimizerSaveReloadContinueTraining(unittest.TestCase):
def test_e2e_ff_net_continue_training_further_1_epoch(self):
self.set_seeds()
batch_size = 10
train_dataset = TensorDataset(torch.randn(100, 50), torch.randint(low=0, high=10, size=(100,)))
val_dataset = TensorDataset(torch.randn(30, 50), torch.randint(low=0, high=10, size=(30,)))
test_dataset = TensorDataset(torch.randn(30, 50), torch.randint(low=0, high=10, size=(30,)))
train_dataloader = DataLoader(train_dataset, batch_size=batch_size)
val_dataloader = DataLoader(val_dataset, batch_size=batch_size)
test_dataloader = DataLoader(test_dataset, batch_size=batch_size)
model = FFNet()
optimizer = optim.Adam(model.parameters(), lr=0.001, betas=(0.9, 0.999))
criterion = nn.NLLLoss()
train_loop = TrainLoopCheckpointEndSave(
model,
train_dataloader, val_dataloader, test_dataloader,
optimizer, criterion,
project_name='e2e_train_loop_example', experiment_name='TrainLoopCheckpointEndSave_example',
local_model_result_folder_path=THIS_DIR,
hyperparams={'batch_size': batch_size},
val_result_package=ClassificationResultPackage(), test_result_package=ClassificationResultPackage(),
cloud_save_mode=None
)
train_loop.fit(num_epochs=5)
model_loader = PyTorchLocalModelLoader(THIS_DIR)
model_loader.load_model(train_loop.project_name, train_loop.experiment_name,
train_loop.experiment_timestamp,
model_save_dir='model', epoch_num=None)
model_reloaded = FFNet()
model_reloaded = model_loader.init_model(model_reloaded)
optimizer_reloaded = optim.Adam(model_reloaded.parameters(), lr=0.001, betas=(0.9, 0.999))
optimizer_reloaded = model_loader.init_optimizer(optimizer_reloaded)
criterion_reloaded = nn.NLLLoss()
for (orig_k, orig_state), (reload_k, reload_state) in zip(model.state_dict().items(), model_reloaded.state_dict().items()):
self.assertEqual(orig_k, reload_k)
self.assertEqual(orig_state.tolist(), reload_state.tolist())
train_loop_cont = TrainLoop(
model,
train_dataloader, val_dataloader, test_dataloader,
optimizer, criterion
)
train_loop_cont.epoch = 5
train_loop_cont.fit(num_epochs=6)
train_loop_reload = TrainLoop(
model_reloaded,
train_dataloader, val_dataloader, test_dataloader,
optimizer_reloaded, criterion_reloaded
)
train_loop_reload.epoch = 5
train_loop_reload.fit(num_epochs=6)
train_pred, _, _ = train_loop_cont.predict_on_train_set()
val_pred, _, _ = train_loop_cont.predict_on_validation_set()
test_pred, _, _ = train_loop_cont.predict_on_test_set()
train_pred_reload, _, _ = train_loop_reload.predict_on_train_set()
val_pred_reload, _, _ = train_loop_reload.predict_on_validation_set()
test_pred_reload, _, _ = train_loop_reload.predict_on_test_set()
train_loss = train_loop_cont.evaluate_loss_on_train_set()
val_loss = train_loop_cont.evaluate_loss_on_validation_set()
test_loss = train_loop_cont.evaluate_loss_on_test_set()
train_loss_reload = train_loop_reload.evaluate_loss_on_train_set()
val_loss_reload = train_loop_reload.evaluate_loss_on_validation_set()
test_loss_reload = train_loop_reload.evaluate_loss_on_test_set()
self.assertEqual(train_pred.tolist(), train_pred_reload.tolist())
self.assertEqual(val_pred.tolist(), val_pred_reload.tolist())
self.assertEqual(test_pred.tolist(), test_pred_reload.tolist())
self.assertEqual(train_loss, train_loss_reload)
self.assertEqual(val_loss, val_loss_reload)
self.assertEqual(test_loss, test_loss_reload)
project_path = os.path.join(THIS_DIR, 'e2e_train_loop_example')
if os.path.exists(project_path):
shutil.rmtree(project_path)

    def test_e2e_ff_net_continue_training_further_5_epoch(self):
self.set_seeds()
batch_size = 10
train_dataset = TensorDataset(torch.randn(100, 50), torch.randint(low=0, high=10, size=(100,)))
val_dataset = TensorDataset(torch.randn(30, 50), torch.randint(low=0, high=10, size=(30,)))
test_dataset = TensorDataset(torch.randn(30, 50), torch.randint(low=0, high=10, size=(30,)))
train_dataloader = DataLoader(train_dataset, batch_size=batch_size)
val_dataloader = DataLoader(val_dataset, batch_size=batch_size)
test_dataloader = DataLoader(test_dataset, batch_size=batch_size)
model = FFNet()
optimizer = optim.Adam(model.parameters(), lr=0.001, betas=(0.9, 0.999))
criterion = nn.NLLLoss()
train_loop = TrainLoopCheckpointEndSave(
model,
train_dataloader, val_dataloader, test_dataloader,
optimizer, criterion,
project_name='e2e_train_loop_example', experiment_name='TrainLoopCheckpointEndSave_example',
local_model_result_folder_path=THIS_DIR,
hyperparams={'batch_size': batch_size},
val_result_package=ClassificationResultPackage(), test_result_package=ClassificationResultPackage(),
cloud_save_mode=None
)
train_loop.fit(num_epochs=5)
model_loader = PyTorchLocalModelLoader(THIS_DIR)
model_loader.load_model(train_loop.project_name, train_loop.experiment_name,
train_loop.experiment_timestamp,
model_save_dir='model', epoch_num=None)
model_reloaded = FFNet()
model_reloaded = model_loader.init_model(model_reloaded)
optimizer_reloaded = optim.Adam(model_reloaded.parameters(), lr=0.001, betas=(0.9, 0.999))
optimizer_reloaded = model_loader.init_optimizer(optimizer_reloaded)
criterion_reloaded = nn.NLLLoss()
for (orig_k, orig_state), (reload_k, reload_state) in zip(model.state_dict().items(), model_reloaded.state_dict().items()):
self.assertEqual(orig_k, reload_k)
self.assertEqual(orig_state.tolist(), reload_state.tolist())
train_loop_cont = TrainLoop(
model,
train_dataloader, val_dataloader, test_dataloader,
optimizer, criterion
)
train_loop_cont.epoch = 5
train_loop_cont.fit(num_epochs=10)
train_loop_reload = TrainLoop(
model_reloaded,
train_dataloader, val_dataloader, test_dataloader,
optimizer_reloaded, criterion_reloaded
)
train_loop_reload.epoch = 5
train_loop_reload.fit(num_epochs=10)
train_pred, _, _ = train_loop_cont.predict_on_train_set()
val_pred, _, _ = train_loop_cont.predict_on_validation_set()
test_pred, _, _ = train_loop_cont.predict_on_test_set()
train_pred_reload, _, _ = train_loop_reload.predict_on_train_set()
val_pred_reload, _, _ = train_loop_reload.predict_on_validation_set()
test_pred_reload, _, _ = train_loop_reload.predict_on_test_set()
train_loss = train_loop_cont.evaluate_loss_on_train_set()
val_loss = train_loop_cont.evaluate_loss_on_validation_set()
test_loss = train_loop_cont.evaluate_loss_on_test_set()
train_loss_reload = train_loop_reload.evaluate_loss_on_train_set()
val_loss_reload = train_loop_reload.evaluate_loss_on_validation_set()
test_loss_reload = train_loop_reload.evaluate_loss_on_test_set()
self.assertEqual(train_pred.tolist(), train_pred_reload.tolist())
self.assertEqual(val_pred.tolist(), val_pred_reload.tolist())
self.assertEqual(test_pred.tolist(), test_pred_reload.tolist())
self.assertEqual(train_loss, train_loss_reload)
self.assertEqual(val_loss, val_loss_reload)
self.assertEqual(test_loss, test_loss_reload)
project_path = os.path.join(THIS_DIR, 'e2e_train_loop_example')
if os.path.exists(project_path):
shutil.rmtree(project_path)

    def test_e2e_ff_net_continue_training_compare_in_memory_checkpoint_end_save(self):
self.set_seeds()
batch_size = 10
train_dataset = TensorDataset(torch.randn(100, 50), torch.randint(low=0, high=10, size=(100,)))
val_dataset = TensorDataset(torch.randn(30, 50), torch.randint(low=0, high=10, size=(30,)))
test_dataset = TensorDataset(torch.randn(30, 50), torch.randint(low=0, high=10, size=(30,)))
train_dataloader = DataLoader(train_dataset, batch_size=batch_size)
val_dataloader = DataLoader(val_dataset, batch_size=batch_size)
test_dataloader = DataLoader(test_dataset, batch_size=batch_size)
model = FFNet()
optimizer = optim.Adam(model.parameters(), lr=0.001, betas=(0.9, 0.999))
criterion = nn.NLLLoss()
train_loop = TrainLoopCheckpointEndSave(
model,
train_dataloader, val_dataloader, test_dataloader,
optimizer, criterion,
project_name='e2e_train_loop_example', experiment_name='TrainLoopCheckpointEndSave_example',
local_model_result_folder_path=THIS_DIR,
hyperparams={'batch_size': batch_size},
val_result_package=ClassificationResultPackage(), test_result_package=ClassificationResultPackage(),
cloud_save_mode=None
)
train_loop.fit(num_epochs=5)
model_loader_final = PyTorchLocalModelLoader(THIS_DIR)
model_loader_final.load_model(train_loop.project_name, train_loop.experiment_name,
train_loop.experiment_timestamp,
model_save_dir='model', epoch_num=None)
model_loader_ep5 = PyTorchLocalModelLoader(THIS_DIR)
model_loader_ep5.load_model(train_loop.project_name, train_loop.experiment_name,
train_loop.experiment_timestamp,
model_save_dir='checkpoint_model', epoch_num=4)
model_reload_final = FFNet()
model_reload_final = model_loader_final.init_model(model_reload_final)
optimizer_reload_final = optim.Adam(model_reload_final.parameters(), lr=0.001, betas=(0.9, 0.999))
optimizer_reload_final = model_loader_final.init_optimizer(optimizer_reload_final)
criterion_reload_final = nn.NLLLoss()
model_reload_ep5 = FFNet()
model_reload_ep5 = model_loader_ep5.init_model(model_reload_ep5)
optimizer_reload_ep5 = optim.Adam(model_reload_ep5.parameters(), lr=0.001, betas=(0.9, 0.999))
optimizer_reload_ep5 = model_loader_ep5.init_optimizer(optimizer_reload_ep5)
criterion_reload_ep5 = nn.NLLLoss()
for (orig_k, orig_state), (reload_k, reload_state) in zip(model.state_dict().items(),
model_reload_final.state_dict().items()):
self.assertEqual(orig_k, reload_k)
self.assertEqual(orig_state.tolist(), reload_state.tolist())
for (orig_k, orig_state), (reload_k, reload_state) in zip(model.state_dict().items(),
model_reload_ep5.state_dict().items()):
self.assertEqual(orig_k, reload_k)
self.assertEqual(orig_state.tolist(), reload_state.tolist())
train_loop_cont = TrainLoop(
model,
train_dataloader, val_dataloader, test_dataloader,
optimizer, criterion
)
train_loop_cont.epoch = 5
train_loop_cont.fit(num_epochs=6)
train_loop_reload_final = TrainLoop(
model_reload_final,
train_dataloader, val_dataloader, test_dataloader,
optimizer_reload_final, criterion_reload_final
)
train_loop_reload_final.epoch = 5
train_loop_reload_final.fit(num_epochs=6)
train_loop_reload_ep5 = TrainLoop(
model_reload_ep5,
train_dataloader, val_dataloader, test_dataloader,
optimizer_reload_ep5, criterion_reload_ep5
)
train_loop_reload_ep5.epoch = 5
train_loop_reload_ep5.fit(num_epochs=6)
train_pred, _, _ = train_loop_cont.predict_on_train_set()
val_pred, _, _ = train_loop_cont.predict_on_validation_set()
test_pred, _, _ = train_loop_cont.predict_on_test_set()
train_pred_reload_final, _, _ = train_loop_reload_final.predict_on_train_set()
val_pred_reload_final, _, _ = train_loop_reload_final.predict_on_validation_set()
test_pred_reload_final, _, _ = train_loop_reload_final.predict_on_test_set()
train_pred_reload_ep5, _, _ = train_loop_reload_ep5.predict_on_train_set()
val_pred_reload_ep5, _, _ = train_loop_reload_ep5.predict_on_validation_set()
test_pred_reload_ep5, _, _ = train_loop_reload_ep5.predict_on_test_set()
self.assertEqual(train_pred.tolist(), train_pred_reload_final.tolist())
self.assertEqual(val_pred.tolist(), val_pred_reload_final.tolist())
self.assertEqual(test_pred.tolist(), test_pred_reload_final.tolist())
self.assertEqual(train_pred.tolist(), train_pred_reload_ep5.tolist())
self.assertEqual(val_pred.tolist(), val_pred_reload_ep5.tolist())
self.assertEqual(test_pred.tolist(), test_pred_reload_ep5.tolist())
train_loss = train_loop_cont.evaluate_loss_on_train_set()
val_loss = train_loop_cont.evaluate_loss_on_validation_set()
test_loss = train_loop_cont.evaluate_loss_on_test_set()
train_loss_reload_final = train_loop_reload_final.evaluate_loss_on_train_set()
val_loss_reload_final = train_loop_reload_final.evaluate_loss_on_validation_set()
test_loss_reload_final = train_loop_reload_final.evaluate_loss_on_test_set()
train_loss_reload_ep5 = train_loop_reload_ep5.evaluate_loss_on_train_set()
val_loss_reload_ep5 = train_loop_reload_ep5.evaluate_loss_on_validation_set()
test_loss_reload_ep5 = train_loop_reload_ep5.evaluate_loss_on_test_set()
self.assertEqual(train_loss, train_loss_reload_final)
self.assertEqual(val_loss, val_loss_reload_final)
self.assertEqual(test_loss, test_loss_reload_final)
self.assertEqual(train_loss, train_loss_reload_ep5)
self.assertEqual(val_loss, val_loss_reload_ep5)
self.assertEqual(test_loss, test_loss_reload_ep5)
project_path = os.path.join(THIS_DIR, 'e2e_train_loop_example')
if os.path.exists(project_path):
shutil.rmtree(project_path)

    def test_e2e_ff_net_continue_training_checkpoint_1_back_compare_in_memory_end_save(self):
self.set_seeds()
batch_size = 10
train_dataset = TensorDataset(torch.randn(100, 50), torch.randint(low=0, high=10, size=(100,)))
val_dataset = TensorDataset(torch.randn(30, 50), torch.randint(low=0, high=10, size=(30,)))
test_dataset = TensorDataset(torch.randn(30, 50), torch.randint(low=0, high=10, size=(30,)))
train_dataloader = DataLoader(train_dataset, batch_size=batch_size)
val_dataloader = DataLoader(val_dataset, batch_size=batch_size)
test_dataloader = DataLoader(test_dataset, batch_size=batch_size)
model = FFNet()
optimizer = optim.Adam(model.parameters(), lr=0.001, betas=(0.9, 0.999))
criterion = nn.NLLLoss()
train_loop = TrainLoopCheckpointEndSave(
model,
train_dataloader, val_dataloader, test_dataloader,
optimizer, criterion,
project_name='e2e_train_loop_example', experiment_name='TrainLoopCheckpointEndSave_example',
local_model_result_folder_path=THIS_DIR,
hyperparams={'batch_size': batch_size},
val_result_package=ClassificationResultPackage(), test_result_package=ClassificationResultPackage(),
cloud_save_mode=None
)
train_loop.fit(num_epochs=5)
model_loader_final = PyTorchLocalModelLoader(THIS_DIR)
model_loader_final.load_model(train_loop.project_name, train_loop.experiment_name,
train_loop.experiment_timestamp,
model_save_dir='model', epoch_num=None)
model_reload_final = FFNet()
model_reload_final = model_loader_final.init_model(model_reload_final)
optimizer_reload_final = optim.Adam(model_reload_final.parameters(), lr=0.001, betas=(0.9, 0.999))
optimizer_reload_final = model_loader_final.init_optimizer(optimizer_reload_final)
criterion_reload_final = nn.NLLLoss()
model_loader_ep4 = PyTorchLocalModelLoader(THIS_DIR)
model_loader_ep4.load_model(train_loop.project_name, train_loop.experiment_name,
train_loop.experiment_timestamp,
model_save_dir='checkpoint_model', epoch_num=3)
model_reload_ep4 = FFNet()
model_reload_ep4 = model_loader_ep4.init_model(model_reload_ep4)
optimizer_reload_ep4 = optim.Adam(model_reload_ep4.parameters(), lr=0.001, betas=(0.9, 0.999))
optimizer_reload_ep4 = model_loader_ep4.init_optimizer(optimizer_reload_ep4)
criterion_reload_ep4 = nn.NLLLoss()
for (orig_k, orig_state), (reload_k, reload_state) in zip(model.state_dict().items(),
model_reload_final.state_dict().items()):
self.assertEqual(orig_k, reload_k)
self.assertEqual(orig_state.tolist(), reload_state.tolist())
for orig_state, reload_state in zip(optimizer.state_dict()['state'].values(),
optimizer_reload_final.state_dict()['state'].values()):
self.assertEqual(orig_state['step'], reload_state['step'])
self.assertEqual(orig_state['exp_avg'].tolist(), reload_state['exp_avg'].tolist())
self.assertEqual(orig_state['exp_avg_sq'].tolist(), reload_state['exp_avg_sq'].tolist())
for (orig_k, orig_state), (reload_k, reload_state) in zip(model.state_dict().items(),
model_reload_ep4.state_dict().items()):
self.assertEqual(orig_k, reload_k)
self.assertNotEqual(orig_state.tolist(), reload_state.tolist())
for orig_state, reload_state in zip(optimizer.state_dict()['state'].values(),
optimizer_reload_ep4.state_dict()['state'].values()):
self.assertNotEqual(orig_state['step'], reload_state['step'])
self.assertNotEqual(orig_state['exp_avg'].tolist(), reload_state['exp_avg'].tolist())
self.assertNotEqual(orig_state['exp_avg_sq'].tolist(), reload_state['exp_avg_sq'].tolist())
train_loop_reload_final = TrainLoop(
model_reload_final,
train_dataloader, val_dataloader, test_dataloader,
optimizer_reload_final, criterion_reload_final
)
train_loop_reload_ep4 = TrainLoop(
model_reload_ep4,
train_dataloader, val_dataloader, test_dataloader,
optimizer_reload_ep4, criterion_reload_ep4
)
train_loop_reload_ep4.epoch = 4
train_loop_reload_ep4.fit(num_epochs=5)
for (orig_k, orig_state), (reload_k, reload_state) in zip(model.state_dict().items(),
model_reload_ep4.state_dict().items()):
self.assertEqual(orig_k, reload_k)
self.assertEqual(orig_state.tolist(), reload_state.tolist())
for orig_state, reload_state in zip(optimizer.state_dict()['state'].values(),
optimizer_reload_ep4.state_dict()['state'].values()):
self.assertEqual(orig_state['step'], reload_state['step'])
self.assertEqual(orig_state['exp_avg'].tolist(), reload_state['exp_avg'].tolist())
self.assertEqual(orig_state['exp_avg_sq'].tolist(), reload_state['exp_avg_sq'].tolist())
for (orig_k, orig_state), (reload_k, reload_state) in zip(model_reload_final.state_dict().items(),
model_reload_ep4.state_dict().items()):
self.assertEqual(orig_k, reload_k)
self.assertEqual(orig_state.tolist(), reload_state.tolist())
for orig_state, reload_state in zip(optimizer_reload_final.state_dict()['state'].values(),
optimizer_reload_ep4.state_dict()['state'].values()):
self.assertEqual(orig_state['step'], reload_state['step'])
self.assertEqual(orig_state['exp_avg'].tolist(), reload_state['exp_avg'].tolist())
self.assertEqual(orig_state['exp_avg_sq'].tolist(), reload_state['exp_avg_sq'].tolist())
train_pred, _, _ = train_loop.predict_on_train_set()
val_pred, _, _ = train_loop.predict_on_validation_set()
test_pred, _, _ = train_loop.predict_on_test_set()
train_pred_reload_final, _, _ = train_loop_reload_final.predict_on_train_set()
val_pred_reload_final, _, _ = train_loop_reload_final.predict_on_validation_set()
test_pred_reload_final, _, _ = train_loop_reload_final.predict_on_test_set()
train_pred_reload_ep4, _, _ = train_loop_reload_ep4.predict_on_train_set()
val_pred_reload_ep4, _, _ = train_loop_reload_ep4.predict_on_validation_set()
test_pred_reload_ep4, _, _ = train_loop_reload_ep4.predict_on_test_set()
self.assertEqual(train_pred.tolist(), train_pred_reload_final.tolist())
self.assertEqual(val_pred.tolist(), val_pred_reload_final.tolist())
self.assertEqual(test_pred.tolist(), test_pred_reload_final.tolist())
self.assertEqual(train_pred.tolist(), train_pred_reload_ep4.tolist())
self.assertEqual(val_pred.tolist(), val_pred_reload_ep4.tolist())
self.assertEqual(test_pred.tolist(), test_pred_reload_ep4.tolist())
train_loss = train_loop.evaluate_loss_on_train_set()
val_loss = train_loop.evaluate_loss_on_validation_set()
test_loss = train_loop.evaluate_loss_on_test_set()
train_loss_reload_final = train_loop_reload_final.evaluate_loss_on_train_set()
val_loss_reload_final = train_loop_reload_final.evaluate_loss_on_validation_set()
test_loss_reload_final = train_loop_reload_final.evaluate_loss_on_test_set()
train_loss_reload_ep4 = train_loop_reload_ep4.evaluate_loss_on_train_set()
val_loss_reload_ep4 = train_loop_reload_ep4.evaluate_loss_on_validation_set()
test_loss_reload_ep4 = train_loop_reload_ep4.evaluate_loss_on_test_set()
self.assertEqual(train_loss, train_loss_reload_final)
self.assertEqual(val_loss, val_loss_reload_final)
self.assertEqual(test_loss, test_loss_reload_final)
self.assertEqual(train_loss, train_loss_reload_ep4)
self.assertEqual(val_loss, val_loss_reload_ep4)
self.assertEqual(test_loss, test_loss_reload_ep4)
project_path = os.path.join(THIS_DIR, 'e2e_train_loop_example')
if os.path.exists(project_path):
shutil.rmtree(project_path)
def test_e2e_ff_net_continue_training_checkpoint_1_back_continue_train_1_epoch_compare_in_memory_end_save(self):
self.set_seeds()
batch_size = 10
train_dataset = TensorDataset(torch.randn(100, 50), torch.randint(low=0, high=10, size=(100,)))
val_dataset = TensorDataset(torch.randn(30, 50), torch.randint(low=0, high=10, size=(30,)))
test_dataset = TensorDataset(torch.randn(30, 50), torch.randint(low=0, high=10, size=(30,)))
train_dataloader = DataLoader(train_dataset, batch_size=batch_size)
val_dataloader = DataLoader(val_dataset, batch_size=batch_size)
test_dataloader = DataLoader(test_dataset, batch_size=batch_size)
model = FFNet()
optimizer = optim.Adam(model.parameters(), lr=0.001, betas=(0.9, 0.999))
criterion = nn.NLLLoss()
train_loop = TrainLoopCheckpointEndSave(
model,
train_dataloader, val_dataloader, test_dataloader,
optimizer, criterion,
project_name='e2e_train_loop_example', experiment_name='TrainLoopCheckpointEndSave_example',
local_model_result_folder_path=THIS_DIR,
hyperparams={'batch_size': batch_size},
val_result_package=ClassificationResultPackage(), test_result_package=ClassificationResultPackage(),
cloud_save_mode=None
)
train_loop.fit(num_epochs=5)
model_loader_final = PyTorchLocalModelLoader(THIS_DIR)
model_loader_final.load_model(train_loop.project_name, train_loop.experiment_name,
train_loop.experiment_timestamp,
model_save_dir='model', epoch_num=None)
model_reload_final = FFNet()
model_reload_final = model_loader_final.init_model(model_reload_final)
optimizer_reload_final = optim.Adam(model_reload_final.parameters(), lr=0.001, betas=(0.9, 0.999))
optimizer_reload_final = model_loader_final.init_optimizer(optimizer_reload_final)
criterion_reload_final = nn.NLLLoss()
model_loader_ep4 = PyTorchLocalModelLoader(THIS_DIR)
model_loader_ep4.load_model(train_loop.project_name, train_loop.experiment_name,
train_loop.experiment_timestamp,
model_save_dir='checkpoint_model', epoch_num=3)
model_reload_ep4 = FFNet()
model_reload_ep4 = model_loader_ep4.init_model(model_reload_ep4)
optimizer_reload_ep4 = optim.Adam(model_reload_ep4.parameters(), lr=0.001, betas=(0.9, 0.999))
optimizer_reload_ep4 = model_loader_ep4.init_optimizer(optimizer_reload_ep4)
criterion_reload_ep4 = nn.NLLLoss()
for (orig_k, orig_state), (reload_k, reload_state) in zip(model.state_dict().items(),
model_reload_final.state_dict().items()):
self.assertEqual(orig_k, reload_k)
self.assertEqual(orig_state.tolist(), reload_state.tolist())
for orig_state, reload_state in zip(optimizer.state_dict()['state'].values(),
optimizer_reload_final.state_dict()['state'].values()):
self.assertEqual(orig_state['step'], reload_state['step'])
self.assertEqual(orig_state['exp_avg'].tolist(), reload_state['exp_avg'].tolist())
self.assertEqual(orig_state['exp_avg_sq'].tolist(), reload_state['exp_avg_sq'].tolist())
for (orig_k, orig_state), (reload_k, reload_state) in zip(model.state_dict().items(),
model_reload_ep4.state_dict().items()):
self.assertEqual(orig_k, reload_k)
self.assertNotEqual(orig_state.tolist(), reload_state.tolist())
for orig_state, reload_state in zip(optimizer.state_dict()['state'].values(),
optimizer_reload_ep4.state_dict()['state'].values()):
self.assertNotEqual(orig_state['step'], reload_state['step'])
self.assertNotEqual(orig_state['exp_avg'].tolist(), reload_state['exp_avg'].tolist())
self.assertNotEqual(orig_state['exp_avg_sq'].tolist(), reload_state['exp_avg_sq'].tolist())
train_loop_cont = TrainLoop(
model,
train_dataloader, val_dataloader, test_dataloader,
optimizer, criterion
)
train_loop_cont.epoch = 5
train_loop_cont.fit(num_epochs=6)
train_loop_reload_final = TrainLoop(
model_reload_final,
train_dataloader, val_dataloader, test_dataloader,
optimizer_reload_final, criterion_reload_final
)
train_loop_reload_final.epoch = 5
train_loop_reload_final.fit(num_epochs=6)
train_loop_reload_ep4 = TrainLoop(
model_reload_ep4,
train_dataloader, val_dataloader, test_dataloader,
optimizer_reload_ep4, criterion_reload_ep4
)
train_loop_reload_ep4.epoch = 4
train_loop_reload_ep4.fit(num_epochs=6)
for (orig_k, orig_state), (reload_k, reload_state) in zip(model.state_dict().items(),
model_reload_ep4.state_dict().items()):
self.assertEqual(orig_k, reload_k)
self.assertEqual(orig_state.tolist(), reload_state.tolist())
for orig_state, reload_state in zip(optimizer.state_dict()['state'].values(),
optimizer_reload_ep4.state_dict()['state'].values()):
self.assertEqual(orig_state['step'], reload_state['step'])
self.assertEqual(orig_state['exp_avg'].tolist(), reload_state['exp_avg'].tolist())
self.assertEqual(orig_state['exp_avg_sq'].tolist(), reload_state['exp_avg_sq'].tolist())
for (orig_k, orig_state), (reload_k, reload_state) in zip(model_reload_final.state_dict().items(),
model_reload_ep4.state_dict().items()):
self.assertEqual(orig_k, reload_k)
self.assertEqual(orig_state.tolist(), reload_state.tolist())
for orig_state, reload_state in zip(optimizer_reload_final.state_dict()['state'].values(),
optimizer_reload_ep4.state_dict()['state'].values()):
self.assertEqual(orig_state['step'], reload_state['step'])
self.assertEqual(orig_state['exp_avg'].tolist(), reload_state['exp_avg'].tolist())
self.assertEqual(orig_state['exp_avg_sq'].tolist(), reload_state['exp_avg_sq'].tolist())
train_pred, _, _ = train_loop_cont.predict_on_train_set()
val_pred, _, _ = train_loop_cont.predict_on_validation_set()
test_pred, _, _ = train_loop_cont.predict_on_test_set()
train_pred_reload_final, _, _ = train_loop_reload_final.predict_on_train_set()
val_pred_reload_final, _, _ = train_loop_reload_final.predict_on_validation_set()
test_pred_reload_final, _, _ = train_loop_reload_final.predict_on_test_set()
train_pred_reload_ep4, _, _ = train_loop_reload_ep4.predict_on_train_set()
val_pred_reload_ep4, _, _ = train_loop_reload_ep4.predict_on_validation_set()
test_pred_reload_ep4, _, _ = train_loop_reload_ep4.predict_on_test_set()
self.assertEqual(train_pred.tolist(), train_pred_reload_final.tolist())
self.assertEqual(val_pred.tolist(), val_pred_reload_final.tolist())
self.assertEqual(test_pred.tolist(), test_pred_reload_final.tolist())
self.assertEqual(train_pred.tolist(), train_pred_reload_ep4.tolist())
self.assertEqual(val_pred.tolist(), val_pred_reload_ep4.tolist())
self.assertEqual(test_pred.tolist(), test_pred_reload_ep4.tolist())
train_loss = train_loop_cont.evaluate_loss_on_train_set()
val_loss = train_loop_cont.evaluate_loss_on_validation_set()
test_loss = train_loop_cont.evaluate_loss_on_test_set()
train_loss_reload_final = train_loop_reload_final.evaluate_loss_on_train_set()
val_loss_reload_final = train_loop_reload_final.evaluate_loss_on_validation_set()
test_loss_reload_final = train_loop_reload_final.evaluate_loss_on_test_set()
train_loss_reload_ep4 = train_loop_reload_ep4.evaluate_loss_on_train_set()
val_loss_reload_ep4 = train_loop_reload_ep4.evaluate_loss_on_validation_set()
test_loss_reload_ep4 = train_loop_reload_ep4.evaluate_loss_on_test_set()
self.assertEqual(train_loss, train_loss_reload_final)
self.assertEqual(val_loss, val_loss_reload_final)
self.assertEqual(test_loss, test_loss_reload_final)
self.assertEqual(train_loss, train_loss_reload_ep4)
self.assertEqual(val_loss, val_loss_reload_ep4)
self.assertEqual(test_loss, test_loss_reload_ep4)
project_path = os.path.join(THIS_DIR, 'e2e_train_loop_example')
if os.path.exists(project_path):
shutil.rmtree(project_path)
def test_e2e_ff_net_continue_training_checkpoint_1_back_continue_train_5_epoch_compare_in_memory_end_save(self):
self.set_seeds()
batch_size = 10
train_dataset = TensorDataset(torch.randn(100, 50), torch.randint(low=0, high=10, size=(100,)))
val_dataset = TensorDataset(torch.randn(30, 50), torch.randint(low=0, high=10, size=(30,)))
test_dataset = TensorDataset(torch.randn(30, 50), torch.randint(low=0, high=10, size=(30,)))
train_dataloader = DataLoader(train_dataset, batch_size=batch_size)
val_dataloader = DataLoader(val_dataset, batch_size=batch_size)
test_dataloader = DataLoader(test_dataset, batch_size=batch_size)
model = FFNet()
optimizer = optim.Adam(model.parameters(), lr=0.001, betas=(0.9, 0.999))
criterion = nn.NLLLoss()
train_loop = TrainLoopCheckpointEndSave(
model,
train_dataloader, val_dataloader, test_dataloader,
optimizer, criterion,
project_name='e2e_train_loop_example', experiment_name='TrainLoopCheckpointEndSave_example',
local_model_result_folder_path=THIS_DIR,
hyperparams={'batch_size': batch_size},
val_result_package=ClassificationResultPackage(), test_result_package=ClassificationResultPackage(),
cloud_save_mode=None
)
train_loop.fit(num_epochs=5)
model_loader_final = PyTorchLocalModelLoader(THIS_DIR)
model_loader_final.load_model(train_loop.project_name, train_loop.experiment_name,
train_loop.experiment_timestamp,
model_save_dir='model', epoch_num=None)
model_reload_final = FFNet()
model_reload_final = model_loader_final.init_model(model_reload_final)
optimizer_reload_final = optim.Adam(model_reload_final.parameters(), lr=0.001, betas=(0.9, 0.999))
optimizer_reload_final = model_loader_final.init_optimizer(optimizer_reload_final)
criterion_reload_final = nn.NLLLoss()
model_loader_ep4 = PyTorchLocalModelLoader(THIS_DIR)
model_loader_ep4.load_model(train_loop.project_name, train_loop.experiment_name,
train_loop.experiment_timestamp,
model_save_dir='checkpoint_model', epoch_num=3)
model_reload_ep4 = FFNet()
model_reload_ep4 = model_loader_ep4.init_model(model_reload_ep4)
optimizer_reload_ep4 = optim.Adam(model_reload_ep4.parameters(), lr=0.001, betas=(0.9, 0.999))
optimizer_reload_ep4 = model_loader_ep4.init_optimizer(optimizer_reload_ep4)
criterion_reload_ep4 = nn.NLLLoss()
for (orig_k, orig_state), (reload_k, reload_state) in zip(model.state_dict().items(),
model_reload_final.state_dict().items()):
self.assertEqual(orig_k, reload_k)
self.assertEqual(orig_state.tolist(), reload_state.tolist())
for orig_state, reload_state in zip(optimizer.state_dict()['state'].values(),
optimizer_reload_final.state_dict()['state'].values()):
self.assertEqual(orig_state['step'], reload_state['step'])
self.assertEqual(orig_state['exp_avg'].tolist(), reload_state['exp_avg'].tolist())
self.assertEqual(orig_state['exp_avg_sq'].tolist(), reload_state['exp_avg_sq'].tolist())
for (orig_k, orig_state), (reload_k, reload_state) in zip(model.state_dict().items(),
model_reload_ep4.state_dict().items()):
self.assertEqual(orig_k, reload_k)
self.assertNotEqual(orig_state.tolist(), reload_state.tolist())
for orig_state, reload_state in zip(optimizer.state_dict()['state'].values(),
optimizer_reload_ep4.state_dict()['state'].values()):
self.assertNotEqual(orig_state['step'], reload_state['step'])
self.assertNotEqual(orig_state['exp_avg'].tolist(), reload_state['exp_avg'].tolist())
self.assertNotEqual(orig_state['exp_avg_sq'].tolist(), reload_state['exp_avg_sq'].tolist())
train_loop_cont = TrainLoop(
model,
train_dataloader, val_dataloader, test_dataloader,
optimizer, criterion
)
train_loop_cont.epoch = 5
train_loop_cont.fit(num_epochs=10)
train_loop_reload_final = TrainLoop(
model_reload_final,
train_dataloader, val_dataloader, test_dataloader,
optimizer_reload_final, criterion_reload_final
)
train_loop_reload_final.epoch = 5
train_loop_reload_final.fit(num_epochs=10)
train_loop_reload_ep4 = TrainLoop(
model_reload_ep4,
train_dataloader, val_dataloader, test_dataloader,
optimizer_reload_ep4, criterion_reload_ep4
)
train_loop_reload_ep4.epoch = 4
train_loop_reload_ep4.fit(num_epochs=10)
for (orig_k, orig_state), (reload_k, reload_state) in zip(model.state_dict().items(),
model_reload_ep4.state_dict().items()):
self.assertEqual(orig_k, reload_k)
self.assertEqual(orig_state.tolist(), reload_state.tolist())
for orig_state, reload_state in zip(optimizer.state_dict()['state'].values(),
optimizer_reload_ep4.state_dict()['state'].values()):
self.assertEqual(orig_state['step'], reload_state['step'])
self.assertEqual(orig_state['exp_avg'].tolist(), reload_state['exp_avg'].tolist())
self.assertEqual(orig_state['exp_avg_sq'].tolist(), reload_state['exp_avg_sq'].tolist())
for (orig_k, orig_state), (reload_k, reload_state) in zip(model_reload_final.state_dict().items(),
model_reload_ep4.state_dict().items()):
self.assertEqual(orig_k, reload_k)
self.assertEqual(orig_state.tolist(), reload_state.tolist())
for orig_state, reload_state in zip(optimizer_reload_final.state_dict()['state'].values(),
optimizer_reload_ep4.state_dict()['state'].values()):
self.assertEqual(orig_state['step'], reload_state['step'])
self.assertEqual(orig_state['exp_avg'].tolist(), reload_state['exp_avg'].tolist())
self.assertEqual(orig_state['exp_avg_sq'].tolist(), reload_state['exp_avg_sq'].tolist())
train_pred, _, _ = train_loop_cont.predict_on_train_set()
val_pred, _, _ = train_loop_cont.predict_on_validation_set()
test_pred, _, _ = train_loop_cont.predict_on_test_set()
train_pred_reload_final, _, _ = train_loop_reload_final.predict_on_train_set()
val_pred_reload_final, _, _ = train_loop_reload_final.predict_on_validation_set()
test_pred_reload_final, _, _ = train_loop_reload_final.predict_on_test_set()
train_pred_reload_ep4, _, _ = train_loop_reload_ep4.predict_on_train_set()
val_pred_reload_ep4, _, _ = train_loop_reload_ep4.predict_on_validation_set()
test_pred_reload_ep4, _, _ = train_loop_reload_ep4.predict_on_test_set()
self.assertEqual(train_pred.tolist(), train_pred_reload_final.tolist())
self.assertEqual(val_pred.tolist(), val_pred_reload_final.tolist())
self.assertEqual(test_pred.tolist(), test_pred_reload_final.tolist())
self.assertEqual(train_pred.tolist(), train_pred_reload_ep4.tolist())
self.assertEqual(val_pred.tolist(), val_pred_reload_ep4.tolist())
self.assertEqual(test_pred.tolist(), test_pred_reload_ep4.tolist())
train_loss = train_loop_cont.evaluate_loss_on_train_set()
val_loss = train_loop_cont.evaluate_loss_on_validation_set()
test_loss = train_loop_cont.evaluate_loss_on_test_set()
train_loss_reload_final = train_loop_reload_final.evaluate_loss_on_train_set()
val_loss_reload_final = train_loop_reload_final.evaluate_loss_on_validation_set()
test_loss_reload_final = train_loop_reload_final.evaluate_loss_on_test_set()
train_loss_reload_ep4 = train_loop_reload_ep4.evaluate_loss_on_train_set()
val_loss_reload_ep4 = train_loop_reload_ep4.evaluate_loss_on_validation_set()
test_loss_reload_ep4 = train_loop_reload_ep4.evaluate_loss_on_test_set()
self.assertEqual(train_loss, train_loss_reload_final)
self.assertEqual(val_loss, val_loss_reload_final)
self.assertEqual(test_loss, test_loss_reload_final)
self.assertEqual(train_loss, train_loss_reload_ep4)
self.assertEqual(val_loss, val_loss_reload_ep4)
self.assertEqual(test_loss, test_loss_reload_ep4)
project_path = os.path.join(THIS_DIR, 'e2e_train_loop_example')
if os.path.exists(project_path):
shutil.rmtree(project_path)
def test_e2e_ff_net_callback_continue_train_checkpoint_1_back_cont_train_5_epoch_compare_in_memory_end_save(self):
self.set_seeds()
batch_size = 10
train_dataset = TensorDataset(torch.randn(100, 50), torch.randint(low=0, high=10, size=(100,)))
val_dataset = TensorDataset(torch.randn(30, 50), torch.randint(low=0, high=10, size=(30,)))
test_dataset = TensorDataset(torch.randn(30, 50), torch.randint(low=0, high=10, size=(30,)))
train_dataloader = DataLoader(train_dataset, batch_size=batch_size)
val_dataloader = DataLoader(val_dataset, batch_size=batch_size)
test_dataloader = DataLoader(test_dataset, batch_size=batch_size)
model = FFNet()
optimizer = optim.Adam(model.parameters(), lr=0.001, betas=(0.9, 0.999))
criterion = nn.NLLLoss()
train_loop = TrainLoopCheckpointEndSave(
model,
train_dataloader, val_dataloader, test_dataloader,
optimizer, criterion,
project_name='e2e_train_loop_example', experiment_name='TrainLoopCheckpointEndSave_example',
local_model_result_folder_path=THIS_DIR,
hyperparams={'batch_size': batch_size},
val_result_package=ClassificationResultPackage(), test_result_package=ClassificationResultPackage(),
cloud_save_mode=None
)
train_loop.fit(num_epochs=5)
model_reload_final = FFNet()
optimizer_reload_final = optim.Adam(model_reload_final.parameters(), lr=0.001, betas=(0.9, 0.999))
criterion_reload_final = nn.NLLLoss()
model_reload_ep4 = FFNet()
optimizer_reload_ep4 = optim.Adam(model_reload_ep4.parameters(), lr=0.001, betas=(0.9, 0.999))
criterion_reload_ep4 = nn.NLLLoss()
train_loop_cont = TrainLoop(
model,
train_dataloader, val_dataloader, test_dataloader,
optimizer, criterion
)
train_loop_cont.epoch = 5
train_loop_cont.fit(num_epochs=10)
train_loop_reload_final = TrainLoop(
model_reload_final,
train_dataloader, val_dataloader, test_dataloader,
optimizer_reload_final, criterion_reload_final
)
train_loop_reload_final.epoch = 5
train_loop_reload_final.fit(num_epochs=10, callbacks=[
ModelLoadContinueTraining(train_loop.experiment_timestamp, saved_model_dir='model', epoch_num=None,
project_name=train_loop.project_name, experiment_name=train_loop.experiment_name,
local_model_result_folder_path=THIS_DIR, cloud_save_mode='local')
])
train_loop_reload_ep4 = TrainLoop(
model_reload_ep4,
train_dataloader, val_dataloader, test_dataloader,
optimizer_reload_ep4, criterion_reload_ep4
)
train_loop_reload_ep4.epoch = 4
train_loop_reload_ep4.fit(num_epochs=10, callbacks=[
ModelLoadContinueTraining(train_loop.experiment_timestamp, saved_model_dir='checkpoint_model', epoch_num=3,
project_name=train_loop.project_name, experiment_name=train_loop.experiment_name,
local_model_result_folder_path=THIS_DIR, cloud_save_mode='local')
])
for (orig_k, orig_state), (reload_k, reload_state) in zip(model.state_dict().items(),
model_reload_ep4.state_dict().items()):
self.assertEqual(orig_k, reload_k)
self.assertEqual(orig_state.tolist(), reload_state.tolist())
for orig_state, reload_state in zip(optimizer.state_dict()['state'].values(),
optimizer_reload_ep4.state_dict()['state'].values()):
self.assertEqual(orig_state['step'], reload_state['step'])
self.assertEqual(orig_state['exp_avg'].tolist(), reload_state['exp_avg'].tolist())
self.assertEqual(orig_state['exp_avg_sq'].tolist(), reload_state['exp_avg_sq'].tolist())
for (orig_k, orig_state), (reload_k, reload_state) in zip(model_reload_final.state_dict().items(),
model_reload_ep4.state_dict().items()):
self.assertEqual(orig_k, reload_k)
self.assertEqual(orig_state.tolist(), reload_state.tolist())
for orig_state, reload_state in zip(optimizer_reload_final.state_dict()['state'].values(),
optimizer_reload_ep4.state_dict()['state'].values()):
self.assertEqual(orig_state['step'], reload_state['step'])
self.assertEqual(orig_state['exp_avg'].tolist(), reload_state['exp_avg'].tolist())
self.assertEqual(orig_state['exp_avg_sq'].tolist(), reload_state['exp_avg_sq'].tolist())
train_pred, _, _ = train_loop_cont.predict_on_train_set()
val_pred, _, _ = train_loop_cont.predict_on_validation_set()
test_pred, _, _ = train_loop_cont.predict_on_test_set()
train_pred_reload_final, _, _ = train_loop_reload_final.predict_on_train_set()
val_pred_reload_final, _, _ = train_loop_reload_final.predict_on_validation_set()
test_pred_reload_final, _, _ = train_loop_reload_final.predict_on_test_set()
train_pred_reload_ep4, _, _ = train_loop_reload_ep4.predict_on_train_set()
val_pred_reload_ep4, _, _ = train_loop_reload_ep4.predict_on_validation_set()
test_pred_reload_ep4, _, _ = train_loop_reload_ep4.predict_on_test_set()
self.assertEqual(train_pred.tolist(), train_pred_reload_final.tolist())
self.assertEqual(val_pred.tolist(), val_pred_reload_final.tolist())
self.assertEqual(test_pred.tolist(), test_pred_reload_final.tolist())
self.assertEqual(train_pred.tolist(), train_pred_reload_ep4.tolist())
self.assertEqual(val_pred.tolist(), val_pred_reload_ep4.tolist())
self.assertEqual(test_pred.tolist(), test_pred_reload_ep4.tolist())
train_loss = train_loop_cont.evaluate_loss_on_train_set()
val_loss = train_loop_cont.evaluate_loss_on_validation_set()
test_loss = train_loop_cont.evaluate_loss_on_test_set()
train_loss_reload_final = train_loop_reload_final.evaluate_loss_on_train_set()
val_loss_reload_final = train_loop_reload_final.evaluate_loss_on_validation_set()
test_loss_reload_final = train_loop_reload_final.evaluate_loss_on_test_set()
train_loss_reload_ep4 = train_loop_reload_ep4.evaluate_loss_on_train_set()
val_loss_reload_ep4 = train_loop_reload_ep4.evaluate_loss_on_validation_set()
test_loss_reload_ep4 = train_loop_reload_ep4.evaluate_loss_on_test_set()
self.assertEqual(train_loss, train_loss_reload_final)
self.assertEqual(val_loss, val_loss_reload_final)
self.assertEqual(test_loss, test_loss_reload_final)
self.assertEqual(train_loss, train_loss_reload_ep4)
self.assertEqual(val_loss, val_loss_reload_ep4)
self.assertEqual(test_loss, test_loss_reload_ep4)
project_path = os.path.join(THIS_DIR, 'e2e_train_loop_example')
if os.path.exists(project_path):
shutil.rmtree(project_path)
def test_e2e_ff_net_scheduler_callback_continue_training_5_epoch_compare(self):
self.set_seeds()
batch_size = 100
num_epochs = 10
train_dataset = TensorDataset(torch.randn(1000, 50), torch.randint(low=0, high=10, size=(1000,)))
val_dataset = TensorDataset(torch.randn(300, 50), torch.randint(low=0, high=10, size=(300,)))
test_dataset = TensorDataset(torch.randn(300, 50), torch.randint(low=0, high=10, size=(300,)))
train_dataloader = DataLoader(train_dataset, batch_size=batch_size)
val_dataloader = DataLoader(val_dataset, batch_size=batch_size)
test_dataloader = DataLoader(test_dataset, batch_size=batch_size)
scheduler_cb = [
LinearWithWarmupScheduler(num_warmup_steps=1, num_training_steps=len(train_dataloader) * num_epochs)
]
model = FFNet()
optimizer = optim.Adam(model.parameters(), lr=0.001, betas=(0.9, 0.999))
criterion = nn.NLLLoss()
train_loop = TrainLoopCheckpointEndSave(
model,
train_dataloader, val_dataloader, test_dataloader,
optimizer, criterion,
project_name='e2e_train_loop_example', experiment_name='TrainLoopCheckpointEndSave_example',
local_model_result_folder_path=THIS_DIR,
hyperparams={'batch_size': batch_size},
val_result_package=ClassificationResultPackage(), test_result_package=ClassificationResultPackage(),
cloud_save_mode=None
)
train_loop.fit(num_epochs=num_epochs, callbacks=scheduler_cb)
model_reload_ep4 = FFNet()
optimizer_reload_ep4 = optim.Adam(model_reload_ep4.parameters(), lr=0.001, betas=(0.9, 0.999))
criterion_reload_ep4 = nn.NLLLoss()
scheduler_cb_reloaded = [
LinearWithWarmupScheduler(num_warmup_steps=1, num_training_steps=len(train_dataloader) * num_epochs,
last_epoch=(len(train_dataloader) * 4) - 1)
]
train_loop_reload_ep4 = TrainLoop(
model_reload_ep4,
train_dataloader, val_dataloader, test_dataloader,
optimizer_reload_ep4, criterion_reload_ep4
)
train_loop_reload_ep4.epoch = 4
train_loop_reload_ep4.fit(num_epochs=num_epochs, callbacks=[
ModelLoadContinueTraining(train_loop.experiment_timestamp, saved_model_dir='checkpoint_model', epoch_num=3,
project_name=train_loop.project_name, experiment_name=train_loop.experiment_name,
local_model_result_folder_path=THIS_DIR, cloud_save_mode='local')
] + scheduler_cb_reloaded)
for (orig_k, orig_state), (reload_k, reload_state) in zip(model.state_dict().items(),
model_reload_ep4.state_dict().items()):
self.assertEqual(orig_k, reload_k)
self.assertEqual(orig_state.tolist(), reload_state.tolist())
self.check_loaded_representation(optimizer_reload_ep4, scheduler_cb_reloaded, optimizer, scheduler_cb)
train_pred, _, _ = train_loop.predict_on_train_set()
val_pred, _, _ = train_loop.predict_on_validation_set()
test_pred, _, _ = train_loop.predict_on_test_set()
train_pred_reload_ep4, _, _ = train_loop_reload_ep4.predict_on_train_set()
val_pred_reload_ep4, _, _ = train_loop_reload_ep4.predict_on_validation_set()
test_pred_reload_ep4, _, _ = train_loop_reload_ep4.predict_on_test_set()
self.assertEqual(train_pred.tolist(), train_pred_reload_ep4.tolist())
self.assertEqual(val_pred.tolist(), val_pred_reload_ep4.tolist())
self.assertEqual(test_pred.tolist(), test_pred_reload_ep4.tolist())
train_loss = train_loop.evaluate_loss_on_train_set()
val_loss = train_loop.evaluate_loss_on_validation_set()
test_loss = train_loop.evaluate_loss_on_test_set()
train_loss_reload_ep4 = train_loop_reload_ep4.evaluate_loss_on_train_set()
val_loss_reload_ep4 = train_loop_reload_ep4.evaluate_loss_on_validation_set()
test_loss_reload_ep4 = train_loop_reload_ep4.evaluate_loss_on_test_set()
self.assertEqual(train_loss, train_loss_reload_ep4)
self.assertEqual(val_loss, val_loss_reload_ep4)
self.assertEqual(test_loss, test_loss_reload_ep4)
project_path = os.path.join(THIS_DIR, 'e2e_train_loop_example')
if os.path.exists(project_path):
shutil.rmtree(project_path)
def check_loaded_representation(self, optimizer_reload, scheduler_reload_cb, optimizer, scheduler_cb):
self.assertEqual(optimizer_reload.state_dict().keys(), optimizer.state_dict().keys())
loaded_optimizer_state = optimizer_reload.state_dict()['state']
for state_idx in range(len(loaded_optimizer_state)):
opti_state = optimizer.state_dict()['state'][state_idx]
loaded_state = loaded_optimizer_state[state_idx]
self.assertEqual(opti_state.keys(), loaded_state.keys())
self.assertEqual(opti_state['step'], loaded_state['step'])
self.assertEqual(opti_state['exp_avg'].tolist(), loaded_state['exp_avg'].tolist())
self.assertEqual(opti_state['exp_avg_sq'].tolist(), loaded_state['exp_avg_sq'].tolist())
self.assertEqual(
optimizer_reload.state_dict()['param_groups'],
optimizer.state_dict()['param_groups']
)
for scheduler_idx in range(len(scheduler_cb)):
self.assertEqual(
scheduler_reload_cb[scheduler_idx].state_dict(),
scheduler_cb[scheduler_idx].state_dict()
)
@staticmethod
def set_seeds():
manual_seed = 0
np.random.seed(manual_seed)
random.seed(manual_seed)
torch.manual_seed(manual_seed)
# if you are using a GPU
torch.cuda.manual_seed(manual_seed)
torch.cuda.manual_seed_all(manual_seed)
torch.backends.cudnn.enabled = False
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
# File: book_sort_main.py (repo: K-Tett/Book-Sortation-Algorithm, license: MIT)
] | null | null | null | # -*- coding: utf-8 -*-
"""Mini Project - Linux Pro
Automatically generated by Colaboratory.
Original file is located at
https://colab.research.google.com/drive/1N3wB_O9ze_vuqUK1w8z5eJ_NOW3ZU-r6
"""
"""
TODO:
- [ ] Import local files
- [ ] Make GUI to import csv files
- [ ] Add options to choose which sorting algorithm to go for
- [ ] Add more sortation
- [ ] Visualize the sortation
"""
from timeit import default_timer as time
"""
#When used with Google Colab to mount the Google Drive to the program
#make google colab connect to google drive
from google.colab import drive
drive.mount('/content/gdrive')
#give the directory of where the module is
import os, sys
sys.path.append('/content/gdrive/MyDrive/Colab Notebooks')
#python modules
import utils
import sorts
bookshelf = utils.load_book('/content/gdrive/MyDrive/Colab Notebooks/books_new - books_new.csv')
"""
####### Comparison Functions #######
#each comparator returns True when book_a should come after book_b
def by_title_ascending(book_a, book_b):
    return book_a['title_lowercase'] > book_b['title_lowercase']

def by_author_ascending(book_a, book_b):
    return book_a['author_lowercase'] > book_b['author_lowercase']

def by_genre_ascending(book_a, book_b):
    return book_a['genre_lowercase'] > book_b['genre_lowercase']

def by_total_length_of_book(book_a, book_b):
    return len(book_a['author_lowercase']) + len(book_a['title_lowercase']) > len(book_b['author_lowercase']) + len(book_b['title_lowercase'])
####### Output Functions #######
def print_bookshelf(books):
    print('{:<55} | {:>23} | {:>10}\n'.format("Title of the book", "Author", "Genre"))
    for book in books:
        print('{:<55} | {:>23} | {:>10}'.format(book['Title'], book['Author'], book['Genre']))

def run_sort(heading, sort_function, comparison_function, reverse=False):
    print(f"\n==== {heading} ====\n")
    books = list(bookshelf)  #sort a copy so every run starts from the unsorted shelf
    start = time()
    result = sort_function(books, comparison_function)
    end = time()             #stop the clock before printing, so output time is excluded
    if reverse:
        result.reverse()
    print_bookshelf(result)
    print("Sorting time takes:", end - start, "s")
    print("--- SORTING FINISHED ---\n")

####### Bubble / Quick / Selection / Insertion / Heap Sort runs #######
#adapt the sorts that take extra index/size arguments to a common signature
sorting_algorithms = [
    ("Bubble Sort", sorts.bubble_sort),
    ("Quick Sort", lambda a, cmp: sorts.quick_sort(a, 0, len(a) - 1, cmp)),
    ("Selection Sort", lambda a, cmp: sorts.selection_sort(a, len(a), cmp)),
    ("Insertion Sort", sorts.insertion_sort),
    ("Heap Sort", sorts.heap_sort),
]

orderings = [
    ("Title A-Z", by_title_ascending, False),
    ("Title Z-A", by_title_ascending, True),
    ("Author A-Z", by_author_ascending, False),
    ("Author Z-A", by_author_ascending, True),
    ("Genre A-Z", by_genre_ascending, False),
    ("Genre Z-A", by_genre_ascending, True),
    ("Total Length of Book (low to high)", by_total_length_of_book, False),
    ("Total Length of Book (high to low)", by_total_length_of_book, True),
]

for algorithm_name, sort_function in sorting_algorithms:
    for order_name, comparison_function, reverse in orderings:
        run_sort(f"{algorithm_name} - {order_name}", sort_function, comparison_function, reverse)
"""| |Bubble Sort |Quick Sort | Selection Sort | Insertion Sort | Heap Sort |
|-|-|-|-|-|-|
|Average time execution (seconds)|0.0369|0.0281| 0.0334 | 0.1731 |0.0333|
"""
# -*- coding: utf-8 -*-
"""sorts
Automatically generated by Colaboratory.
Original file is located at
https://colab.research.google.com/drive/1M6Rw33CbJqWZBwpTai3tJkBX9BMMQGv4
"""
#sorts.py module

#optimized bubble sort algorithm (stops early once a pass makes no swaps)
def bubble_sort(array, comparison_function_used):
    swaps = 0
    is_sorted = False
    while not is_sorted:
        is_sorted = True
        for idx in range(len(array) - 1):
            if comparison_function_used(array[idx], array[idx + 1]):  #if left > right
                is_sorted = False
                array[idx], array[idx + 1] = array[idx + 1], array[idx]  #swap
                swaps += 1
    print("Bubble Sort: There were {0} swaps in the operation".format(swaps))
    return array
#quick sort with a Lomuto partition (the last element is the pivot)
def partition(array, low, high, comparison_function_used):
    i = low - 1          # index of smaller element
    pivot = array[high]  # pivot
    for j in range(low, high):
        # if the current element should come before the pivot
        if comparison_function_used(pivot, array[j]):
            i = i + 1    # increment index of smaller element
            array[i], array[j] = array[j], array[i]
    array[i + 1], array[high] = array[high], array[i + 1]
    return i + 1

def quick_sort(array, low, high, comparison_function_used):
    if len(array) == 1:
        return array
    if low < high:
        pi = partition(array, low, high, comparison_function_used)
        quick_sort(array, low, pi - 1, comparison_function_used)
        quick_sort(array, pi + 1, high, comparison_function_used)
    return array
#selection sort algorithm
def selection_sort(array, size, comparison_function_used):
    for step in range(size):
        min_index = step
        min_string = array[step]
        for i in range(step + 1, size):
            if comparison_function_used(min_string, array[i]):
                min_string = array[i]
                min_index = i
        if min_index != step:  #was `!= 1`, which skipped valid swaps
            array[step], array[min_index] = array[min_index], array[step]
    return array
#insertion sort algorithm
def insertion_sort(array, comparison_function_used):
    for step in range(1, len(array)):
        key = array[step]
        j = step - 1
        while j >= 0 and comparison_function_used(array[j], key):
            array[j + 1] = array[j]
            j -= 1
        array[j + 1] = key
    return array
#heap sort algorithm
def heapify(array, n, i, comparison_function_used):
    largest = i
    l = 2 * i + 1
    r = 2 * i + 2
    if l < n and comparison_function_used(array[l], array[i]):
        largest = l
    if r < n and comparison_function_used(array[r], array[largest]):
        largest = r
    if largest != i:
        array[i], array[largest] = array[largest], array[i]
        heapify(array, n, largest, comparison_function_used)
    return array

def heap_sort(array, comparison_function_used):
    n = len(array)
    for i in range(n // 2, -1, -1):
        heapify(array, n, i, comparison_function_used)
    for i in range(n - 1, 0, -1):
        array[i], array[0] = array[0], array[i]
        heapify(array, i, 0, comparison_function_used)
    return array
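All of the sorts above share a "return True when the left item should come after the right item" comparator convention. As a cross-check sketch (the names `as_cmp` and `by_value` below are illustrative, not part of the module), that convention can be bridged to Python's built-in `sorted` via `functools.cmp_to_key`:

```python
from functools import cmp_to_key

def as_cmp(boolean_comparator):
    #adapt a "left should come after right" boolean comparator
    #to the three-way comparator expected by cmp_to_key
    def cmp(a, b):
        if boolean_comparator(a, b):
            return 1
        if boolean_comparator(b, a):
            return -1
        return 0
    return cmp

def by_value(a, b):
    return a > b  #same convention as the module's comparators

data = [3, 1, 2, 2]
reference = sorted(data, key=cmp_to_key(as_cmp(by_value)))
print(reference)  # [1, 2, 2, 3]
```

Sorting the same data through any of the module's functions with the same comparator should reproduce `reference`.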
# -*- coding: utf-8 -*-
"""utils.py
Automatically generated by Colaboratory.
Original file is located at
https://colab.research.google.com/drive/16h5Xf4OSr-4jHxiW3aTBoWiD8ZHxhv41
"""
"""
script that load the book that the user called to open
then make the title of the book, author, and the genre lowercase.
return the naming as bookshelf
"""
import csv
#bookshelf data from the csv file
#credit: https://gist.github.com/jaidevd/23aef12e9bf56c618c41
def load_book(filename):
    bookshelf = []
    with open(filename, 'r', newline='') as file:
        shelf = csv.DictReader(file)
        for book in shelf:
            book['author_lowercase'] = book['Author'].lower()
            book['title_lowercase'] = book['Title'].lower()
            book['genre_lowercase'] = book['Genre'].lower()
            #update the bookshelf with the augmented record
            bookshelf.append(book)
    return bookshelf
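A quick in-memory check of the transformation `load_book` applies to each row — the sample CSV row here is made up for illustration:

```python
import csv
from io import StringIO

sample = StringIO(
    "Title,Author,Genre\n"
    "Dune,Frank Herbert,Sci-Fi\n"
)

shelf = []
for book in csv.DictReader(sample):
    #same lowercase keys that the comparison functions rely on
    book['author_lowercase'] = book['Author'].lower()
    book['title_lowercase'] = book['Title'].lower()
    book['genre_lowercase'] = book['Genre'].lower()
    shelf.append(book)

print(shelf[0]['title_lowercase'])  # dune
```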
vertex_positions = [
0.5, 0.5, 0.5, 0.5, -0.5, 0.5, 0.5, -0.5, -0.5, 0.5, 0.5, -0.5,
-0.5, 0.5, -0.5, -0.5, -0.5, -0.5, -0.5, -0.5, 0.5, -0.5, 0.5, 0.5,
-0.5, 0.5, 0.5, -0.5, 0.5, -0.5, 0.5, 0.5, -0.5, 0.5, 0.5, 0.5,
-0.5, -0.5, 0.5, -0.5, -0.5, -0.5, 0.5, -0.5, -0.5, 0.5, -0.5, 0.5,
-0.5, 0.5, 0.5, -0.5, -0.5, 0.5, 0.5, -0.5, 0.5, 0.5, 0.5, 0.5,
0.5, 0.5, -0.5, 0.5, -0.5, -0.5, -0.5, -0.5, -0.5, -0.5, 0.5, -0.5,
]
tex_coords = [
0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0,
0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0,
0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0,
0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0,
0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0,
0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0,
]
shading = [
0.80, 0.80, 0.80, 0.80,
0.80, 0.80, 0.80, 0.80,
1.00, 1.00, 1.00, 1.00,
0.49, 0.49, 0.49, 0.49,
0.92, 0.92, 0.92, 0.92,
0.92, 0.92, 0.92, 0.92,
]
indices = [
0, 1, 2, 0, 2, 3, # right
4, 5, 6, 4, 6, 7, # left
8, 9, 10, 8, 10, 11, # top
12, 13, 14, 12, 14, 15, # bottom
16, 17, 18, 16, 18, 19, # front
20, 21, 22, 20, 22, 23, # back
]
from io import StringIO
from .conductivity import parse_conductivity
"""
USGS ground conductivity models are available at
ftp://hazards.cr.usgs.gov/Rigler/Conductivity_Latest/
"""
"""
Adirondack Mountains - central core.
"""
AK_1A = """\
* Lines starting with * are just comments.
* Text after the numbers is ignored
*AK1A earth conductivity model
*/ 3.0, 4997.0, 5000.0, 12000.0, 15000.0, 60000.0, 150000.0, 160000.0, 110000.0, 150000.0, 230000.0, INF,/ ! layer thicknesses in m
*/ 100. , 18215. , 10000. , 1000. , 25.0 , 244. , 159. , 28.9 , 7.95 , 2.40 , .891 , .479 ,/ !Resistivities in Ohm-m
11 Number of layers from surface
0.0100000 Conductivity in S/m (layer 1)
3.000e+00 Layer thickness in m (layer 1)
0.0000549 Conductivity in S/m (layer 2)
4.997e+03 Layer thickness in m (layer 2)
0.0001000 Conductivity in S/m (layer 3)
5.000e+03 Layer thickness in m (layer 3)
0.0010000 Conductivity in S/m (layer 4)
1.200e+04 Layer thickness in m (layer 4)
0.0400000 Conductivity in S/m (layer 5)
1.500e+04 Layer thickness in m (layer 5)
0.0041000 Conductivity in S/m (layer 6)
6.000e+04 Layer thickness in m (layer 6)
0.0063000 Conductivity in S/m (layer 7)
1.500e+05 Layer thickness in m (layer 7)
0.0346000 Conductivity in S/m (layer 8)
1.600e+05 Layer thickness in m (layer 8)
0.1258000 Conductivity in S/m (layer 9)
1.100e+05 Layer thickness in m (layer 9)
0.4168000 Conductivity in S/m (layer10)
1.500e+05 Layer thickness in m (layer10)
1.1220000 Conductivity in S/m (layer11)
2.300e+05 Layer thickness in m (layer11)
2.0892000 Semi-infinite earth conductivity
"""
"""
Adirondack Mountains - excludes meta-anorthosite.
"""
AK_1B = """\
* Lines starting with * are just comments.
* Text after the numbers is ignored
*AK1B earth conductivity model
*/ 3.0, 9997.0, 12000.0, 15000.0, 60000.0, 150000.0, 160000.0, 110000.0, 150000.0, 230000.0, INF,/ ! layer thicknesses in m
*/ 100. , 10000. , 1000. , 25.0 , 244. , 159. , 28.9 , 7.95 , 2.40 , .891 , .479 ,/ !Resistivities in Ohm-m
10 Number of layers from surface
0.0100000 Conductivity in S/m (layer 1)
3.000e+00 Layer thickness in m (layer 1)
0.0001000 Conductivity in S/m (layer 2)
9.997e+03 Layer thickness in m (layer 2)
0.0010000 Conductivity in S/m (layer 3)
1.200e+04 Layer thickness in m (layer 3)
0.0400000 Conductivity in S/m (layer 4)
1.500e+04 Layer thickness in m (layer 4)
0.0041000 Conductivity in S/m (layer 5)
6.000e+04 Layer thickness in m (layer 5)
0.0063000 Conductivity in S/m (layer 6)
1.500e+05 Layer thickness in m (layer 6)
0.0346000 Conductivity in S/m (layer 7)
1.600e+05 Layer thickness in m (layer 7)
0.1258000 Conductivity in S/m (layer 8)
1.100e+05 Layer thickness in m (layer 8)
0.4168000 Conductivity in S/m (layer 9)
1.500e+05 Layer thickness in m (layer 9)
1.1220000 Conductivity in S/m (layer10)
2.300e+05 Layer thickness in m (layer10)
2.0892000 Semi-infinite earth conductivity
"""
"""
Appalachian Plateaus - southern portion of the Appalachian Plateau.
"""
AP_1 = """\
* Lines starting with * are just comments.
* Text after the numbers is ignored
*AP1 earth conductivity model
*/ 4000.0, 12000.0, 25000.0, 14000.0, 45000.0, 150000.0, 160000.0, 110000.0, 150000.0, 230000.0, INF,/ ! layer thicknesses in m
*/ 80.0 , 80.0 , 20.0 , 303. , 100. , 10.0 , 50.1 , 20.0 , 5.62 , 1.58 , .891 ,/ !Resistivities in Ohm-m
10 Number of layers from surface
0.0125000 Conductivity in S/m (layer 1)
4.000e+03 Layer thickness in m (layer 1)
0.0125000 Conductivity in S/m (layer 2)
1.200e+04 Layer thickness in m (layer 2)
0.0500000 Conductivity in S/m (layer 3)
2.500e+04 Layer thickness in m (layer 3)
0.0033000 Conductivity in S/m (layer 4)
1.400e+04 Layer thickness in m (layer 4)
0.0100000 Conductivity in S/m (layer 5)
4.500e+04 Layer thickness in m (layer 5)
0.1000000 Conductivity in S/m (layer 6)
1.500e+05 Layer thickness in m (layer 6)
0.0199520 Conductivity in S/m (layer 7)
1.600e+05 Layer thickness in m (layer 7)
0.0501180 Conductivity in S/m (layer 8)
1.800e+05 Layer thickness in m (layer 8)
0.1778270 Conductivity in S/m (layer 9)
8.000e+04 Layer thickness in m (layer 9)
0.6309570 Conductivity in S/m (layer10)
2.300e+05 Layer thickness in m (layer10)
1.1220100 Semi-infinite earth conductivity
"""
"""
Northern Appalachian Plateaus - northern portion of the Appalachian Plateau (southern New York).
"""
AP_2 = """\
* Lines starting with * are just comments.
* Text after the numbers is ignored
*AP2 earth conductivity model
*/ 25.0, 2725.0, 12250.0, 10000.0, 9000.0, 66000.0, 150000.0, 160000.0, 110000.0, 150000.0, 230000.0, INF,/ ! layer thicknesses in m
*/ 100. , 303. , 10000. , 303. , 1000. , 244. , 159. , 28.9 , 7.95 , 2.40 , .891 , .479 ,/ !Resistivities in Ohm-m
11 Number of layers from surface
0.0100000 Conductivity in S/m (layer 1)
2.500e+01 Layer thickness in m (layer 1)
0.0033000 Conductivity in S/m (layer 2)
2.725e+03 Layer thickness in m (layer 2)
0.0001000 Conductivity in S/m (layer 3)
1.225e+04 Layer thickness in m (layer 3)
0.0033000 Conductivity in S/m (layer 4)
1.000e+04 Layer thickness in m (layer 4)
0.0010000 Conductivity in S/m (layer 5)
9.000e+03 Layer thickness in m (layer 5)
0.0041000 Conductivity in S/m (layer 6)
6.600e+04 Layer thickness in m (layer 6)
0.0063000 Conductivity in S/m (layer 7)
1.500e+05 Layer thickness in m (layer 7)
0.0346000 Conductivity in S/m (layer 8)
1.600e+05 Layer thickness in m (layer 8)
0.1258000 Conductivity in S/m (layer 9)
1.100e+05 Layer thickness in m (layer 9)
0.4168000 Conductivity in S/m (layer10)
1.500e+05 Layer thickness in m (layer10)
1.1220000 Conductivity in S/m (layer11)
2.300e+05 Layer thickness in m (layer11)
2.0892000 Semi-infinite earth conductivity
"""
"""
Northwest Basin and Range - northwestern margin of the Basin and
Range geophysical province.
"""
BR_1 = """\
* Lines starting with * are just comments.
* Text after the numbers is ignored
*BR1 earth conductivity model
*/ 30.0, 970.0, 2000.0, 9000.0, 4000.0, 3000.0, 4000.0, 5000.0, 6000.0, 6000.0, 11000.0, 13000.0, 13000.0, 27000.0, 150000.0, 160000.0, 110000.0, 150000.0, 230000.0, INF,/ ! layer thicknesses in m
*/ 10.0 , 100. , 10.0 , 100. , 114. , 89.0 , 40.0 , 13.3 , 5.67 , 4.82 , 7.62 , 16.0 , 32.0 , 50.0 , 209. , 50.1 , 20.0 , 5.62 , 1.58 , .891 ,/ !Resistivities in Ohm-m
19 Number of layers from surface
0.1000000 Conductivity in S/m (layer 1)
3.000e+01 Layer thickness in m (layer 1)
0.0100000 Conductivity in S/m (layer 2)
9.700e+02 Layer thickness in m (layer 2)
0.1000000 Conductivity in S/m (layer 3)
2.000e+03 Layer thickness in m (layer 3)
0.0100000 Conductivity in S/m (layer 4)
9.000e+03 Layer thickness in m (layer 4)
0.0087500 Conductivity in S/m (layer 5)
4.000e+03 Layer thickness in m (layer 5)
0.0112500 Conductivity in S/m (layer 6)
3.000e+03 Layer thickness in m (layer 6)
0.0250000 Conductivity in S/m (layer 7)
4.000e+03 Layer thickness in m (layer 7)
0.0750000 Conductivity in S/m (layer 8)
5.000e+03 Layer thickness in m (layer 8)
0.1762500 Conductivity in S/m (layer 9)
6.000e+03 Layer thickness in m (layer 9)
0.2075000 Conductivity in S/m (layer10)
6.000e+03 Layer thickness in m (layer10)
0.1312500 Conductivity in S/m (layer11)
1.100e+04 Layer thickness in m (layer11)
0.0625000 Conductivity in S/m (layer12)
1.300e+04 Layer thickness in m (layer12)
0.0312500 Conductivity in S/m (layer13)
1.300e+04 Layer thickness in m (layer13)
0.0200000 Conductivity in S/m (layer14)
2.700e+04 Layer thickness in m (layer14)
0.0047860 Conductivity in S/m (layer15)
1.500e+05 Layer thickness in m (layer15)
0.0199520 Conductivity in S/m (layer16)
1.600e+05 Layer thickness in m (layer16)
0.0501180 Conductivity in S/m (layer17)
1.100e+05 Layer thickness in m (layer17)
0.1778270 Conductivity in S/m (layer18)
1.500e+05 Layer thickness in m (layer18)
0.6309570 Conductivity in S/m (layer19)
2.300e+05 Layer thickness in m (layer19)
1.1220100 Semi-infinite earth conductivity
"""
"""
Colorado Plateau - Approximately centered in the Four Corners
region of the southwestern United States, the Colorado Plateau
encompasses southeastern Utah, northern Arizona, northwestern New
Mexico and the western edge of Colorado.
"""
CL_1 = """\
* Lines starting with * are just comments.
* Text after the numbers is ignored
*CL1 earth conductivity model
*/ 100.0, 1900.0, 31000.0, 12000.0, 105000.0, 100000.0, 160000.0, 110000.0, 150000.0, 230000.0, INF,/ ! layer thicknesses in m
*/ 10.0 , 50.0 , 3030. , 80.0 , 400. , 69.9 , 50.1 , 20.0 , 5.62 , 1.58 , .891 ,/ !Resistivities in Ohm-m
10 Number of layers from surface
0.1000000 Conductivity in S/m (layer 1)
1.000e+02 Layer thickness in m (layer 1)
0.0200000 Conductivity in S/m (layer 2)
1.900e+03 Layer thickness in m (layer 2)
0.0003300 Conductivity in S/m (layer 3)
3.100e+04 Layer thickness in m (layer 3)
0.0125000 Conductivity in S/m (layer 4)
1.200e+04 Layer thickness in m (layer 4)
0.0025000 Conductivity in S/m (layer 5)
1.050e+05 Layer thickness in m (layer 5)
0.0143000 Conductivity in S/m (layer 6)
1.000e+05 Layer thickness in m (layer 6)
0.0199520 Conductivity in S/m (layer 7)
1.600e+05 Layer thickness in m (layer 7)
0.0501180 Conductivity in S/m (layer 8)
1.100e+05 Layer thickness in m (layer 8)
0.1778270 Conductivity in S/m (layer 9)
1.500e+05 Layer thickness in m (layer 9)
0.6309570 Conductivity in S/m (layer10)
2.300e+05 Layer thickness in m (layer10)
1.1220100 Semi-infinite earth conductivity
"""
"""
Columbia Plateau - straddles eastern Washington and Oregon, with
an arcuate extension into southwestern Idaho.
"""
CO_1 = """\
* Lines starting with * are just comments.
* Text after the numbers is ignored
*CO1 earth conductivity model
*/ 30.0, 1470.0, 1500.0, 5000.0, 4000.0, 3000.0, 4000.0, 5000.0, 6000.0, 6000.0, 11000.0, 13000.0, 13000.0, 27000.0, 50000.0, 100000.0, 160000.0, 110000.0, 150000.0, 230000.0, INF,/ ! layer thicknesses in m
*/ 50.0 , 100. , 15.2 , 303. , 52.5 , 53.3 , 37.2 , 21.1 , 15.5 , 17.0 , 26.0 , 42.7 , 60.4 , 64.0 , 152. , 209. , 50.1 , 20.0 , 5.62 , 1.58 , .891 ,/ !Resistivities in Ohm-m
20 Number of layers from surface
0.0200000 Conductivity in S/m (layer 1)
3.000e+01 Layer thickness in m (layer 1)
0.0100000 Conductivity in S/m (layer 2)
1.470e+03 Layer thickness in m (layer 2)
0.0666000 Conductivity in S/m (layer 3)
1.500e+03 Layer thickness in m (layer 3)
0.0033000 Conductivity in S/m (layer 4)
5.000e+03 Layer thickness in m (layer 4)
0.0190600 Conductivity in S/m (layer 5)
4.000e+03 Layer thickness in m (layer 5)
0.0187500 Conductivity in S/m (layer 6)
3.000e+03 Layer thickness in m (layer 6)
0.0268700 Conductivity in S/m (layer 7)
4.000e+03 Layer thickness in m (layer 7)
0.0475000 Conductivity in S/m (layer 8)
5.000e+03 Layer thickness in m (layer 8)
0.0646800 Conductivity in S/m (layer 9)
6.000e+03 Layer thickness in m (layer 9)
0.0587500 Conductivity in S/m (layer10)
6.000e+03 Layer thickness in m (layer10)
0.0384300 Conductivity in S/m (layer11)
1.100e+04 Layer thickness in m (layer11)
0.0234300 Conductivity in S/m (layer12)
1.300e+04 Layer thickness in m (layer12)
0.0165600 Conductivity in S/m (layer13)
1.300e+04 Layer thickness in m (layer13)
0.0156200 Conductivity in S/m (layer14)
2.700e+04 Layer thickness in m (layer14)
0.0066000 Conductivity in S/m (layer15)
5.000e+04 Layer thickness in m (layer15)
0.0047860 Conductivity in S/m (layer16)
1.000e+05 Layer thickness in m (layer16)
0.0199520 Conductivity in S/m (layer17)
1.600e+05 Layer thickness in m (layer17)
0.0501180 Conductivity in S/m (layer18)
1.100e+05 Layer thickness in m (layer18)
0.1778270 Conductivity in S/m (layer19)
1.500e+05 Layer thickness in m (layer19)
0.6309570 Conductivity in S/m (layer20)
2.300e+05 Layer thickness in m (layer20)
1.1220100 Semi-infinite earth conductivity
"""
"""
Coastal Plain - South Carolina.
"""
CP_1 = """\
* Lines starting with * are just comments.
* Text after the numbers is ignored
*CP1 earth conductivity model
*/ 5.0, 395.0, 9600.0, 14000.0, 13000.0, 31000.0, 77000.0, 105000.0, 160000.0, 110000.0, 150000.0, 230000.0, INF,/ ! layer thicknesses in m
*/ 400. , 31.3 , 250. , 625. , 6250. , 1000. , 800. , 5.00 , 50.1 , 20.0 , 5.62 , 1.58 , .891 ,/ !Resistivities in Ohm-m
12 Number of layers from surface
0.0025000 Conductivity in S/m (layer 1)
5.000e+00 Layer thickness in m (layer 1)
0.0320000 Conductivity in S/m (layer 2)
3.950e+02 Layer thickness in m (layer 2)
0.0040000 Conductivity in S/m (layer 3)
9.600e+03 Layer thickness in m (layer 3)
0.0016000 Conductivity in S/m (layer 4)
1.400e+04 Layer thickness in m (layer 4)
0.0001600 Conductivity in S/m (layer 5)
1.300e+04 Layer thickness in m (layer 5)
0.0010000 Conductivity in S/m (layer 6)
3.100e+04 Layer thickness in m (layer 6)
0.0012500 Conductivity in S/m (layer 7)
7.700e+04 Layer thickness in m (layer 7)
0.2000000 Conductivity in S/m (layer 8)
1.050e+05 Layer thickness in m (layer 8)
0.0199520 Conductivity in S/m (layer 9)
1.600e+05 Layer thickness in m (layer 9)
0.0501180 Conductivity in S/m (layer10)
1.100e+05 Layer thickness in m (layer10)
0.1778270 Conductivity in S/m (layer11)
1.500e+05 Layer thickness in m (layer11)
0.6309570 Conductivity in S/m (layer12)
2.300e+05 Layer thickness in m (layer12)
1.1220100 Semi-infinite earth conductivity
"""
"""
Coastal Plain - Georgia.
"""
CP_2 = """\
* Lines starting with * are just comments.
* Text after the numbers is ignored
*CP2 earth conductivity model
*/ 15.0, 1485.0, 13500.0, 17000.0, 36000.0, 77000.0, 105000.0, 160000.0, 110000.0, 150000.0, 230000.0, INF,/ ! layer thicknesses in m
*/ 50.0 , 100. , 2000. , 5000. , 2000. , 800. , 5.00 , 50.1 , 20.0 , 5.62 , 1.58 , .891 ,/ !Resistivities in Ohm-m
11 Number of layers from surface
0.0200000 Conductivity in S/m (layer 1)
1.500e+01 Layer thickness in m (layer 1)
0.0100000 Conductivity in S/m (layer 2)
1.485e+03 Layer thickness in m (layer 2)
0.0005000 Conductivity in S/m (layer 3)
1.350e+04 Layer thickness in m (layer 3)
0.0002000 Conductivity in S/m (layer 4)
1.700e+04 Layer thickness in m (layer 4)
0.0005000 Conductivity in S/m (layer 5)
3.600e+04 Layer thickness in m (layer 5)
0.0012500 Conductivity in S/m (layer 6)
7.700e+04 Layer thickness in m (layer 6)
0.2000000 Conductivity in S/m (layer 7)
1.050e+05 Layer thickness in m (layer 7)
0.0199520 Conductivity in S/m (layer 8)
1.600e+05 Layer thickness in m (layer 8)
0.0501180 Conductivity in S/m (layer 9)
1.100e+05 Layer thickness in m (layer 9)
0.1778270 Conductivity in S/m (layer10)
1.500e+05 Layer thickness in m (layer10)
0.6309570 Conductivity in S/m (layer11)
2.300e+05 Layer thickness in m (layer11)
1.1220100 Semi-infinite earth conductivity
"""
"""
Cascade-Sierra Mountains - central Oregon and Washington states,
but extending southward into California.
"""
CS_1 = """\
* Lines starting with * are just comments.
* Text after the numbers is ignored
*CS1 earth conductivity model
*/ 30.0, 470.0, 1500.0, 18000.0, 10000.0, 13000.0, 57000.0, 150000.0, 160000.0, 110000.0, 150000.0, 230000.0, INF,/ ! layer thicknesses in m
*/ 60.2 , 200. , 15.2 , 182. , 15.2 , 152. , 120. , 209. , 50.1 , 20.0 , 5.62 , 1.58 , .891 ,/ !Resistivities in Ohm-m
12 Number of layers from surface
0.0166000 Conductivity in S/m (layer 1)
3.000e+01 Layer thickness in m (layer 1)
0.0050000 Conductivity in S/m (layer 2)
4.700e+02 Layer thickness in m (layer 2)
0.0660000 Conductivity in S/m (layer 3)
1.500e+03 Layer thickness in m (layer 3)
0.0055000 Conductivity in S/m (layer 4)
1.800e+04 Layer thickness in m (layer 4)
0.0660000 Conductivity in S/m (layer 5)
1.000e+04 Layer thickness in m (layer 5)
0.0066000 Conductivity in S/m (layer 6)
1.300e+04 Layer thickness in m (layer 6)
0.0083000 Conductivity in S/m (layer 7)
5.700e+04 Layer thickness in m (layer 7)
0.0047860 Conductivity in S/m (layer 8)
1.500e+05 Layer thickness in m (layer 8)
0.0199520 Conductivity in S/m (layer 9)
1.600e+05 Layer thickness in m (layer 9)
0.0501180 Conductivity in S/m (layer10)
1.100e+05 Layer thickness in m (layer10)
0.1778270 Conductivity in S/m (layer11)
1.500e+05 Layer thickness in m (layer11)
0.6309570 Conductivity in S/m (layer12)
2.300e+05 Layer thickness in m (layer12)
1.1220100 Semi-infinite earth conductivity
"""
"""
Florida Peninsula - Florida.
"""
FL_1 = """
* Lines starting with * are just comments.
* Text after the numbers is ignored
*FL1 earth conductivity model
*/ 1000.0, 5000.0, 34000.0, 60000.0, 150000.0, 160000.0, 110000.0, 150000.0, 230000.0, INF,/ ! layer thicknesses in m
*/ 67. , 224. , 3162. , 155. , 138. , 34. , 15.5 , 4.2 , 1.2 , 0.87 ,/ !Resistivities in Ohm-m
9 Number of layers from surface
0.0149000 Conductivity in S/m (layer 1)
1.000e+03 Layer thickness in m (layer 1)
0.0044720 Conductivity in S/m (layer 2)
5.000e+03 Layer thickness in m (layer 2)
0.0003160 Conductivity in S/m (layer 3)
3.400e+04 Layer thickness in m (layer 3)
0.0064500 Conductivity in S/m (layer 4)
6.000e+04 Layer thickness in m (layer 4)
0.0072000 Conductivity in S/m (layer 5)
1.500e+05 Layer thickness in m (layer 5)
0.0295000 Conductivity in S/m (layer 6)
1.600e+05 Layer thickness in m (layer 6)
0.0644000 Conductivity in S/m (layer 7)
1.100e+05 Layer thickness in m (layer 7)
0.2370000 Conductivity in S/m (layer 8)
1.500e+05 Layer thickness in m (layer 8)
0.8540000 Conductivity in S/m (layer 9)
2.300e+05 Layer thickness in m (layer 9)
1.6880000 Semi-infinite earth conductivity
"""
"""
Interior Plains - typical of the eastern portion of North Dakota
only.
"""
IP_1 = """\
* Lines starting with * are just comments.
* Text after the numbers is ignored
*IP1 earth conductivity model
*/ 330.0, 1670.0, 13000.0, 10000.0, 20000.0, 55000.0, 150000.0, 160000.0, 110000.0, 150000.0, 230000.0, INF,/ ! layer thicknesses in m
*/ 30.0 , 20.0 , 4545. , 654., 3030. , 5000. , 209. , 50.1 , 20.0 , 5.62 , 1.58 , .891 ,/ !Resistivities in Ohm-m
11 Number of layers from surface
0.0333000 Conductivity in S/m (layer 1)
3.300e+02 Layer thickness in m (layer 1)
0.0500000 Conductivity in S/m (layer 2)
1.670e+03 Layer thickness in m (layer 2)
0.0002200 Conductivity in S/m (layer 3)
1.300e+04 Layer thickness in m (layer 3)
0.0015300 Conductivity in S/m (layer 4)
1.000e+04 Layer thickness in m (layer 4)
0.0003300 Conductivity in S/m (layer 5)
2.000e+04 Layer thickness in m (layer 5)
0.0002000 Conductivity in S/m (layer 6)
5.500e+04 Layer thickness in m (layer 6)
0.0047860 Conductivity in S/m (layer 7)
1.500e+05 Layer thickness in m (layer 7)
0.0199520 Conductivity in S/m (layer 8)
1.600e+05 Layer thickness in m (layer 8)
0.0501180 Conductivity in S/m (layer 9)
1.100e+05 Layer thickness in m (layer 9)
0.1778270 Conductivity in S/m (layer10)
1.500e+05 Layer thickness in m (layer10)
0.6309570 Conductivity in S/m (layer11)
2.300e+05 Layer thickness in m (layer11)
1.1220100 Semi-infinite earth conductivity
"""
"""
Interior Plains (North American Conductive Anomaly) - a narrow
region of low resistivity extending across western South and North
Dakota, and north into Canada along the Saskatchewan-Manitoba
provincial boundary.
"""
IP_2 = """\
* Lines starting with * are just comments.
* Text after the numbers is ignored
*IP2 earth conductivity model
*/ 330.0, 4570.0, 10100.0, 2000.0, 2000.0, 6000.0, 20000.0, 55000.0, 150000.0, 160000.0, 110000.0, 150000.0, 230000.0, INF,/ ! layer thicknesses in m
*/ 30.0 , 50.0 , 455. , 250. , 1.00 , 250. , 250. , 800. , 209. , 50.1 , 20.0 , 5.62 , 1.58 , .891 ,/ !Resistivities in Ohm-m
13 Number of layers from surface
0.0333000 Conductivity in S/m (layer 1)
3.300e+02 Layer thickness in m (layer 1)
0.0200000 Conductivity in S/m (layer 2)
4.570e+03 Layer thickness in m (layer 2)
0.0022000 Conductivity in S/m (layer 3)
1.010e+04 Layer thickness in m (layer 3)
0.0040000 Conductivity in S/m (layer 4)
2.000e+03 Layer thickness in m (layer 4)
1.0000000 Conductivity in S/m (layer 5)
2.000e+03 Layer thickness in m (layer 5)
0.0040000 Conductivity in S/m (layer 6)
6.000e+03 Layer thickness in m (layer 6)
0.0040000 Conductivity in S/m (layer 7)
2.000e+04 Layer thickness in m (layer 7)
0.0012500 Conductivity in S/m (layer 8)
5.500e+04 Layer thickness in m (layer 8)
0.0047860 Conductivity in S/m (layer 9)
1.500e+05 Layer thickness in m (layer 9)
0.0199520 Conductivity in S/m (layer10)
1.600e+05 Layer thickness in m (layer10)
0.0501180 Conductivity in S/m (layer11)
1.100e+05 Layer thickness in m (layer11)
0.1778270 Conductivity in S/m (layer12)
1.500e+05 Layer thickness in m (layer12)
0.6309570 Conductivity in S/m (layer13)
2.300e+05 Layer thickness in m (layer13)
1.1220100 Semi-infinite earth conductivity
"""
"""
Interior plains - Michigan region.
"""
IP_3 = """\
* Lines starting with * are just comments.
* Text after the numbers is ignored
*IP3 earth conductivity model
*/ 150.0, 1950.0, 12900.0, 5000.0, 23000.0, 57000.0, 150000.0, 160000.0, 110000.0, 150000.0, 230000.0, INF,/ ! layer thicknesses in m
*/ 100. , 50.0 , 8000. , 200. , 625. , 1000. , 209. , 50.1 , 20.0 , 5.62 , 1.58 , .891 ,/ !Resistivities in Ohm-m
11 Number of layers from surface
0.0100000 Conductivity in S/m (layer 1)
1.500e+02 Layer thickness in m (layer 1)
0.0200000 Conductivity in S/m (layer 2)
1.950e+03 Layer thickness in m (layer 2)
0.0001250 Conductivity in S/m (layer 3)
1.290e+04 Layer thickness in m (layer 3)
0.0050000 Conductivity in S/m (layer 4)
5.000e+03 Layer thickness in m (layer 4)
0.0016000 Conductivity in S/m (layer 5)
2.300e+04 Layer thickness in m (layer 5)
0.0010000 Conductivity in S/m (layer 6)
5.700e+04 Layer thickness in m (layer 6)
0.0047860 Conductivity in S/m (layer 7)
1.500e+05 Layer thickness in m (layer 7)
0.0199520 Conductivity in S/m (layer 8)
1.600e+05 Layer thickness in m (layer 8)
0.0501180 Conductivity in S/m (layer 9)
1.100e+05 Layer thickness in m (layer 9)
0.1778270 Conductivity in S/m (layer10)
1.500e+05 Layer thickness in m (layer10)
0.6309570 Conductivity in S/m (layer11)
2.300e+05 Layer thickness in m (layer11)
1.1220100 Semi-infinite earth conductivity
"""
"""
Interior Plains (Great Plains)
"""
IP_4 = """\
* Lines starting with * are just comments.
* Text after the numbers is ignored
*IP4 earth conductivity model
*/ 100.0, 500.0, 1000.0, 900.0, 16500.0, 9000.0, 19000.0, 53000.0, 150000.0, 160000.0, 110000.0, 150000.0, 230000.0, INF,/ ! layer thicknesses in m
*/ 10.0 , 15.0 , 1.70 , 20.0 , 2801. , 40.0 , 265. , 500. , 209. , 50.1 , 20.0 , 5.62 , 1.58 , .891 ,/ !Resistivities in Ohm-m
13 Number of layers from surface
0.1000000 Conductivity in S/m (layer 1)
1.000e+02 Layer thickness in m (layer 1)
0.0666000 Conductivity in S/m (layer 2)
5.000e+02 Layer thickness in m (layer 2)
0.5882000 Conductivity in S/m (layer 3)
1.000e+03 Layer thickness in m (layer 3)
0.0500000 Conductivity in S/m (layer 4)
9.000e+02 Layer thickness in m (layer 4)
0.0003570 Conductivity in S/m (layer 5)
1.650e+04 Layer thickness in m (layer 5)
0.0250000 Conductivity in S/m (layer 6)
9.000e+03 Layer thickness in m (layer 6)
0.0037700 Conductivity in S/m (layer 7)
1.900e+04 Layer thickness in m (layer 7)
0.0020000 Conductivity in S/m (layer 8)
5.300e+04 Layer thickness in m (layer 8)
0.0047860 Conductivity in S/m (layer 9)
1.500e+05 Layer thickness in m (layer 9)
0.0199520 Conductivity in S/m (layer10)
1.600e+05 Layer thickness in m (layer10)
0.0501180 Conductivity in S/m (layer11)
1.100e+05 Layer thickness in m (layer11)
0.1778270 Conductivity in S/m (layer12)
1.500e+05 Layer thickness in m (layer12)
0.6309570 Conductivity in S/m (layer13)
2.300e+05 Layer thickness in m (layer13)
1.1220100 Semi-infinite earth conductivity
"""
"""
New England - southwest portion of Maine, just north of Portland.
"""
NE_1 = """\
* Lines starting with * are just comments.
* Text after the numbers is ignored
*NE1 earth conductivity model
*/ 30.0, 9970.0, 5000.0, 10000.0, 11000.0, 64000.0, 150000.0, 160000.0, 110000.0, 150000.0, 230000.0, INF,/ ! layer thicknesses in m
*/ 60.2 , 2000. , 2000. , 303. , 303. , 244. , 159. , 28.9 , 7.95 , 2.40 , .891 , .479 ,/ !Resistivities in Ohm-m
11 Number of layers from surface
0.0166000 Conductivity in S/m (layer 1)
3.000e+01 Layer thickness in m (layer 1)
0.0005000 Conductivity in S/m (layer 2)
9.970e+03 Layer thickness in m (layer 2)
0.0005000 Conductivity in S/m (layer 3)
5.000e+03 Layer thickness in m (layer 3)
0.0033000 Conductivity in S/m (layer 4)
1.000e+04 Layer thickness in m (layer 4)
0.0033000 Conductivity in S/m (layer 5)
1.100e+04 Layer thickness in m (layer 5)
0.0041000 Conductivity in S/m (layer 6)
6.400e+04 Layer thickness in m (layer 6)
0.0063000 Conductivity in S/m (layer 7)
1.500e+05 Layer thickness in m (layer 7)
0.0346000 Conductivity in S/m (layer 8)
1.600e+05 Layer thickness in m (layer 8)
0.1258000 Conductivity in S/m (layer 9)
1.100e+05 Layer thickness in m (layer 9)
0.4168000 Conductivity in S/m (layer10)
1.500e+05 Layer thickness in m (layer10)
1.1220100 Conductivity in S/m (layer11)
2.300e+05 Layer thickness in m (layer11)
2.0892000 Semi-infinite earth conductivity
"""
"""
Pacific Border (Willamette Valley) - the Pacific Border
physiographic province, extending southward through Oregon and into
western California.
"""
PB_1 = """\
* Lines starting with * are just comments.
* Text after the numbers is ignored
*PB1 earth conductivity model
*/ 3.0, 397.0, 3600.0, 4000.0, 22000.0, 4000.0, 66000.0, 150000.0, 160000.0, 110000.0, 150000.0, 230000.0, INF,/ ! layer thicknesses in m
*/ 100. , 7.50 , 25.0 , 85.5 , 400. , 30.3 , 400. , 209. , 50.1 , 20.0 , 5.62 , 1.58 , .891 ,/ !Resistivities in Ohm-m
12 Number of layers from surface
0.0100000 Conductivity in S/m (layer 1)
3.000e+00 Layer thickness in m (layer 1)
0.1330000 Conductivity in S/m (layer 2)
3.970e+02 Layer thickness in m (layer 2)
0.0400000 Conductivity in S/m (layer 3)
3.600e+03 Layer thickness in m (layer 3)
0.0117000 Conductivity in S/m (layer 4)
4.000e+03 Layer thickness in m (layer 4)
0.0025000 Conductivity in S/m (layer 5)
2.200e+04 Layer thickness in m (layer 5)
0.0330000 Conductivity in S/m (layer 6)
4.000e+03 Layer thickness in m (layer 6)
0.0025000 Conductivity in S/m (layer 7)
6.600e+04 Layer thickness in m (layer 7)
0.0047860 Conductivity in S/m (layer 8)
1.500e+05 Layer thickness in m (layer 8)
0.0199520 Conductivity in S/m (layer 9)
1.600e+05 Layer thickness in m (layer 9)
0.0501180 Conductivity in S/m (layer10)
1.100e+05 Layer thickness in m (layer10)
0.1778270 Conductivity in S/m (layer11)
1.500e+05 Layer thickness in m (layer11)
0.6309570 Conductivity in S/m (layer12)
2.300e+05 Layer thickness in m (layer12)
1.1220100 Semi-infinite earth conductivity
"""
"""
Pacific Border (Puget Lowlands) - the Puget Lowlands portion of
the Pacific Border physiographic province.
"""
PB_2 = """\
* Lines starting with * are just comments.
* Text after the numbers is ignored
*PB2 earth conductivity model
*/ 3.0, 1097.0, 6900.0, 17000.0, 7000.0, 13000.0, 20000.0, 15000.0, 70000.0, 100000.0, 160000.0, 110000.0, 150000.0, 230000.0, INF,/ ! layer thicknesses in m
*/ 100. , 10.0 , 20.0 , 100. , 30.3 , 250. , 400. , 667. , 800. , 209. , 50.1 , 20.0 , 5.62 , 1.58 , .891 ,/ !Resistivities in Ohm-m
14 Number of layers from surface
0.0100000 Conductivity in S/m (layer 1)
3.000e+00 Layer thickness in m (layer 1)
0.1000000 Conductivity in S/m (layer 2)
1.097e+03 Layer thickness in m (layer 2)
0.0500000 Conductivity in S/m (layer 3)
6.900e+03 Layer thickness in m (layer 3)
0.0100000 Conductivity in S/m (layer 4)
1.700e+04 Layer thickness in m (layer 4)
0.0330000 Conductivity in S/m (layer 5)
7.000e+03 Layer thickness in m (layer 5)
0.0040000 Conductivity in S/m (layer 6)
1.300e+04 Layer thickness in m (layer 6)
0.0025000 Conductivity in S/m (layer 7)
2.000e+04 Layer thickness in m (layer 7)
0.0015000 Conductivity in S/m (layer 8)
1.500e+04 Layer thickness in m (layer 8)
0.0012500 Conductivity in S/m (layer 9)
7.000e+04 Layer thickness in m (layer 9)
0.0047860 Conductivity in S/m (layer10)
1.000e+05 Layer thickness in m (layer10)
0.0199520 Conductivity in S/m (layer11)
1.600e+05 Layer thickness in m (layer11)
0.0501180 Conductivity in S/m (layer12)
1.100e+05 Layer thickness in m (layer12)
0.1778270 Conductivity in S/m (layer13)
1.500e+05 Layer thickness in m (layer13)
0.6309570 Conductivity in S/m (layer14)
2.300e+05 Layer thickness in m (layer14)
1.1220100 Semi-infinite earth conductivity
"""
"""
Piedmont (SE Appalachians) - the Piedmont physiographic province,
and is located between CP-1 and AP-1.
"""
PT_1 = """\
* Lines starting with * are just comments.
* Text after the numbers is ignored
*PT1 earth conductivity model
*/ 6000.0, 7000.0, 1500.0, 3500.0, 21000.0, 29000.0, 32000.0, 45000.0, 105000.0, 160000.0, 110000.0, 150000.0, 230000.0, INF,/ ! layer thicknesses in m
*/ 1000. , 625. , 200. , 1000. , 800. , 4000. , 800. , 8000. , 400. , 50.1 , 20.0 , 5.62 , 1.58 , .891 ,/ !Resistivities in Ohm-m
13 Number of layers from surface
0.0010000 Conductivity in S/m (layer 1)
6.000e+03 Layer thickness in m (layer 1)
0.0016000 Conductivity in S/m (layer 2)
7.000e+03 Layer thickness in m (layer 2)
0.0050000 Conductivity in S/m (layer 3)
1.500e+03 Layer thickness in m (layer 3)
0.0010000 Conductivity in S/m (layer 4)
3.500e+03 Layer thickness in m (layer 4)
0.0012500 Conductivity in S/m (layer 5)
2.100e+04 Layer thickness in m (layer 5)
0.0002500 Conductivity in S/m (layer 6)
2.900e+04 Layer thickness in m (layer 6)
0.0012500 Conductivity in S/m (layer 7)
3.200e+04 Layer thickness in m (layer 7)
0.0001250 Conductivity in S/m (layer 8)
4.500e+04 Layer thickness in m (layer 8)
0.0025000 Conductivity in S/m (layer 9)
1.050e+05 Layer thickness in m (layer 9)
0.0199520 Conductivity in S/m (layer10)
1.600e+05 Layer thickness in m (layer10)
0.0501180 Conductivity in S/m (layer11)
1.100e+05 Layer thickness in m (layer11)
0.1778270 Conductivity in S/m (layer12)
1.500e+05 Layer thickness in m (layer12)
0.6309570 Conductivity in S/m (layer13)
2.300e+05 Layer thickness in m (layer13)
1.1220100 Semi-infinite earth conductivity
"""
"""
St. Lawrence Lowlands - upper New York state
"""
SL_1 = """\
* Lines starting with * are just comments.
* Text after the numbers is ignored
*SL1 earth conductivity model
*/ 3.0, 175.0, 9825.0, 12000.0, 18000.0, 60000.0, 150000.0, 160000.0, 110000.0, 150000.0, 230000.0, INF,/ ! layer thicknesses in m
*/ 4.00 , 500. , 10000. , 667. , 25.0 , 244. , 159. , 28.9 , 7.95 , 2.40 , .891 , .479 ,/ !Resistivities in Ohm-m
11 Number of layers from surface
0.2500000 Conductivity in S/m (layer 1)
3.000e+00 Layer thickness in m (layer 1)
0.0020000 Conductivity in S/m (layer 2)
1.720e+02 Layer thickness in m (layer 2)
0.0001000 Conductivity in S/m (layer 3)
9.825e+03 Layer thickness in m (layer 3)
0.0015000 Conductivity in S/m (layer 4)
1.200e+04 Layer thickness in m (layer 4)
0.0400000 Conductivity in S/m (layer 5)
1.800e+04 Layer thickness in m (layer 5)
0.0041000 Conductivity in S/m (layer 6)
6.000e+04 Layer thickness in m (layer 6)
0.0063000 Conductivity in S/m (layer 7)
1.500e+05 Layer thickness in m (layer 7)
0.0346000 Conductivity in S/m (layer 8)
1.600e+05 Layer thickness in m (layer 8)
0.1258000 Conductivity in S/m (layer 9)
1.100e+05 Layer thickness in m (layer 9)
0.4168000 Conductivity in S/m (layer10)
1.500e+05 Layer thickness in m (layer10)
1.1220000 Conductivity in S/m (layer11)
2.300e+05 Layer thickness in m (layer11)
2.0892000 Semi-infinite earth conductivity
"""
"""
Superior Upland - the northern portions of Minnesota, Wisconsin,
and Michigan's Upper Peninsula.
"""
SU_1 = """\
* Lines starting with * are just comments.
* Text after the numbers is ignored
*SU1 earth conductivity model
*/ 30.0, 13470.0, 11500.0, 13000.0, 62000.0, 150000.0, 160000.0, 110000.0, 150000.0, 230000.0, INF,/ ! layer thicknesses in m
*/ 100. , 5988. , 200. , 303. , 1000. , 209. , 50.1 , 20.0 , 5.62 , 1.58 , .891 ,/ !Resistivities in Ohm-m
10 Number of layers from surface
0.0100000 Conductivity in S/m (layer 1)
3.000e+01 Layer thickness in m (layer 1)
0.0001670 Conductivity in S/m (layer 2)
1.347e+04 Layer thickness in m (layer 2)
0.0050000 Conductivity in S/m (layer 3)
1.150e+04 Layer thickness in m (layer 3)
0.0033000 Conductivity in S/m (layer 4)
1.300e+04 Layer thickness in m (layer 4)
0.0010000 Conductivity in S/m (layer 5)
6.200e+04 Layer thickness in m (layer 5)
0.0047860 Conductivity in S/m (layer 6)
1.500e+05 Layer thickness in m (layer 6)
0.0199520 Conductivity in S/m (layer 7)
1.600e+05 Layer thickness in m (layer 7)
0.0501180 Conductivity in S/m (layer 8)
1.100e+05 Layer thickness in m (layer 8)
0.1778270 Conductivity in S/m (layer 9)
1.500e+05 Layer thickness in m (layer 9)
0.6309570 Conductivity in S/m (layer10)
2.300e+05 Layer thickness in m (layer10)
1.1220100 Semi-infinite earth conductivity
"""
"""
Mapping between USGS region name and conductivity string
specification.
"""
USGS_CONDUCTIVITY_MAP = {
    'AK_1A': AK_1A,
    'AK_1B': AK_1B,
    'AP_1': AP_1,
    'AP_2': AP_2,
    'BR_1': BR_1,
    'CL_1': CL_1,
    'CO_1': CO_1,
    'CP_1': CP_1,
    'CP_2': CP_2,
    'CS_1': CS_1,
    'FL_1': FL_1,
    'IP_1': IP_1,
    'IP_2': IP_2,
    'IP_3': IP_3,
    'IP_4': IP_4,
    'NE_1': NE_1,
    'PB_1': PB_1,
    'PB_2': PB_2,
    'PT_1': PT_1,
    'SL_1': SL_1,
    'SU_1': SU_1}
"""
Mapping between USGS region and conductivity specification.
"""
USGS_MODEL_MAP = {}
for name, spec in USGS_CONDUCTIVITY_MAP.items():
    fid = StringIO(spec)
    USGS_MODEL_MAP[name] = parse_conductivity(fid)
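For reference, the layered-model text format used throughout this module (lines beginning with `*` are comments, the first data line is the layer count, then alternating conductivity/thickness lines, ending with the semi-infinite conductivity) can be parsed with a short sketch like the one below. `parse_spec` is a hypothetical stand-in written only from the visible format; the module's actual `parse_conductivity` may return a richer model object.

```python
from io import StringIO

def parse_spec(fid):
    """Hypothetical sketch of a parser for the conductivity spec format.

    Skips '*' comment lines and blank lines, reads the leading number on
    each remaining line, and returns (conductivities, thicknesses): the
    N layer conductivities in S/m with the semi-infinite half-space
    value appended last, and the N layer thicknesses in metres.
    """
    values = []
    for line in fid:
        line = line.strip()
        if not line or line.startswith('*'):
            continue
        values.append(float(line.split()[0]))
    n = int(values[0])  # "Number of layers from surface"
    conductivities = values[1:1 + 2 * n:2] + [values[-1]]
    thicknesses = values[2:2 + 2 * n:2]
    return conductivities, thicknesses
```

Under these assumptions, `parse_spec(StringIO(SU_1))` would yield 10 thicknesses and 11 conductivities.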
# Rank/GuildRank.py (msg-gg/msg.gg-crawling, MIT license)
# pip install selenium
# pip install webdriver-manager
import urllib
import time
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from bs4 import BeautifulSoup
import pandas
from webdriver_manager.chrome import ChromeDriverManager

driver = webdriver.Chrome(ChromeDriverManager().install())
driver.implicitly_wait(3)  # wait up to 3 seconds for web resources to load
# Image crawling
body = driver.find_element_by_tag_name('body')
# Click the area for choosing the sort order (popular / newest)
# driver.find_element_by_xpath('//paper-button[@class="dropdown-trigger style-scope yt-dropdown-menu"]').click()
# Click the 'sort by popularity' category
# driver.find_element_by_xpath('//paper-listbox[@class="dropdown-content style-scope yt-dropdown-menu"]/a[1]').click()
page = driver.page_source
soup = BeautifulSoup(page, 'html.parser')
# comments=soup.find_all('yt-formatted-string',attrs={'class':'style-scope ytd-comment-renderer'})
cmmt_box = soup.find_all(attrs={'id': 'wrap'})
# real=soup.find('video')
# real=real.get('src')
# print(real)
# //*[@id="container"]/div/div/div[3]/div[1]/table/tbody/tr[1]/td[2]/dl/dt/a/text()
# //*[@id="container"]/div/div/div[3]/div[1]/table/tbody/tr[1]/td[3]
# //*[@id="container"]/div/div/div[3]/div[1]/table/tbody/tr[2]/td[2]/dl/dt/a/text()
from collections import OrderedDict
import json
data = OrderedDict()
worldRank = []
rebootRank = []
reboot2Rank = []
auroraRank = []
redRank = []
enosisRank = []
unionRank = []
scaniaRank = []
lunaRank = []
zenithRank = []
croaRank = []
beraRank = []
elysiumRank = []
arcaneRank = []
novaRank = []
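The five per-server blocks in the loop below repeat the same scraping steps, changing only the URL slug, the world-icon number, and the target list. A helper along the following lines could replace the repetition; `scrape_server` and its parameters are illustrative, not part of the original script, while the XPath strings are taken verbatim from the blocks below.

```python
def scrape_server(driver, slug, icon_id, pages=5):
    """Hypothetical helper collecting guild-rank rows for one server.

    slug is the maple.gg path segment (e.g. 'luna') and icon_id the
    world icon number (e.g. 9 for Luna); both pairings are assumptions
    drawn from the repeated blocks below.
    """
    icon = ('https://ssl.nx.com/s2/game/maplestory/renewal/'
            'common/world_icon/icon_%d.png' % icon_id)
    rows = []
    for page in range(1, pages + 1):
        driver.get('https://maple.gg/rank/guild/%s?page=%d' % (slug, page))
        for i in range(1, 21):
            tr = '//*[@id="app"]/section[4]/section/div/table/tbody/tr[%d]' % i
            rows.append({
                'rank': driver.find_element_by_xpath(tr + '/th').text,
                'serverImg': icon,
                'guildName': driver.find_element_by_xpath(tr + '/td[1]/a').text,
                'master': driver.find_element_by_xpath(tr + '/td[2]/a').text,
                'level': driver.find_element_by_xpath(tr + '/td[3]').text,
                'point': driver.find_element_by_xpath(tr + '/td[4]').text,
            })
    return rows
```

With this helper, each block below would reduce to a call such as `lunaRank = scrape_server(driver, 'luna', 9)`.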
for j in range(1, 6):
    # Luna server
    driver.get('https://maple.gg/rank/guild/luna?page=' + str(j))
    for i in range(1, 21):
        character = {}
        rank = driver.find_element_by_xpath(
            '//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/th').text
        serverImg = 'https://ssl.nx.com/s2/game/maplestory/renewal/common/world_icon/icon_9.png'
        guildName = driver.find_element_by_xpath(
            '//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[1]/a').text
        master = driver.find_element_by_xpath(
            '//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[2]/a').text
        level = driver.find_element_by_xpath(
            '//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[3]').text
        point = driver.find_element_by_xpath(
            '//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[4]').text
        character['rank'] = rank
        character['serverImg'] = serverImg
        character['guildName'] = guildName
        character['master'] = master
        character['level'] = level
        character['point'] = point
        lunaRank.append(character)
    # Scania server
    driver.get('https://maple.gg/rank/guild/scania?page=' + str(j))
    for i in range(1, 21):
        character = {}
        rank = driver.find_element_by_xpath(
            '//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/th').text
        serverImg = 'https://ssl.nx.com/s2/game/maplestory/renewal/common/world_icon/icon_8.png'
        guildName = driver.find_element_by_xpath(
            '//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[1]/a').text
        master = driver.find_element_by_xpath(
            '//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[2]/a').text
        level = driver.find_element_by_xpath(
            '//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[3]').text
        point = driver.find_element_by_xpath(
            '//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[4]').text
        character['rank'] = rank
        character['serverImg'] = serverImg
        character['guildName'] = guildName
        character['master'] = master
        character['level'] = level
        character['point'] = point
        scaniaRank.append(character)
    # Elysium server
    driver.get('https://maple.gg/rank/guild/elysium?page=' + str(j))
    for i in range(1, 21):
        character = {}
        rank = driver.find_element_by_xpath(
            '//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/th').text
        serverImg = 'https://ssl.nx.com/s2/game/maplestory/renewal/common/world_icon/icon_13.png'
        guildName = driver.find_element_by_xpath(
            '//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[1]/a').text
        master = driver.find_element_by_xpath(
            '//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[2]/a').text
        level = driver.find_element_by_xpath(
            '//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[3]').text
        point = driver.find_element_by_xpath(
            '//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[4]').text
        character['rank'] = rank
        character['serverImg'] = serverImg
        character['guildName'] = guildName
        character['master'] = master
        character['level'] = level
        character['point'] = point
        elysiumRank.append(character)
#리부트
driver.get('https://maple.gg/rank/guild/reboot?page=' + str(j))
for i in range(1, 21):
character = {}
rank = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/th').text
serverImg = 'https://ssl.nx.com/s2/game/maplestory/renewal/common/world_icon/icon_2.png'
guildName = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[1]/a').text
master = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[2]/a').text
level = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[3]').text
point = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[4]').text
character['rank'] = rank
character['serverImg'] = serverImg
character['guildName'] = guildName
character['master'] = master
character['level'] = level
character['point'] = point
rebootRank.append(character)
#크로아
driver.get('https://maple.gg/rank/guild/croa?page=' + str(j))
for i in range(1, 21):
character = {}
rank = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/th').text
serverImg = 'https://ssl.nx.com/s2/game/maplestory/renewal/common/world_icon/icon_11.png'
guildName = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[1]/a').text
master = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[2]/a').text
level = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[3]').text
point = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[4]').text
character['rank'] = rank
character['serverImg'] = serverImg
character['guildName'] = guildName
character['master'] = master
character['level'] = level
character['point'] = point
croaRank.append(character)
#오로라
driver.get('https://maple.gg/rank/guild/aurora?page=' + str(j))
for i in range(1, 21):
character = {}
rank = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/th').text
serverImg = 'https://ssl.nx.com/s2/game/maplestory/renewal/common/world_icon/icon_4.png'
guildName = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[1]/a').text
master = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[2]/a').text
level = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[3]').text
point = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[4]').text
character['rank'] = rank
character['serverImg'] = serverImg
character['guildName'] = guildName
character['master'] = master
character['level'] = level
character['point'] = point
auroraRank.append(character)
#베라
driver.get('https://maple.gg/rank/guild/bera?page=' + str(j))
for i in range(1, 21):
character = {}
rank = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/th').text
serverImg = 'https://ssl.nx.com/s2/game/maplestory/renewal/common/world_icon/icon_12.png'
guildName = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[1]/a').text
master = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[2]/a').text
level = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[3]').text
point = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[4]').text
character['rank'] = rank
character['serverImg'] = serverImg
character['guildName'] = guildName
character['master'] = master
character['level'] = level
character['point'] = point
beraRank.append(character)
#레드
driver.get('https://maple.gg/rank/guild/red?page=' + str(j))
for i in range(1, 21):
character = {}
rank = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/th').text
serverImg = 'https://ssl.nx.com/s2/game/maplestory/renewal/common/world_icon/icon_5.png'
guildName = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[1]/a').text
master = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[2]/a').text
level = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[3]').text
point = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[4]').text
character['rank'] = rank
character['serverImg'] = serverImg
character['guildName'] = guildName
character['master'] = master
character['level'] = level
character['point'] = point
redRank.append(character)
#유니온
driver.get('https://maple.gg/rank/guild/union?page=' + str(j))
for i in range(1, 21):
character = {}
rank = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/th').text
serverImg = 'https://ssl.nx.com/s2/game/maplestory/renewal/common/world_icon/icon_7.png'
guildName = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[1]/a').text
master = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[2]/a').text
level = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[3]').text
point = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[4]').text
character['rank'] = rank
character['serverImg'] = serverImg
character['guildName'] = guildName
character['master'] = master
character['level'] = level
character['point'] = point
unionRank.append(character)
#제니스
driver.get('https://maple.gg/rank/guild/zenith?page=' + str(j))
for i in range(1, 21):
character = {}
rank = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/th').text
serverImg = 'https://ssl.nx.com/s2/game/maplestory/renewal/common/world_icon/icon_10.png'
guildName = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[1]/a').text
master = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[2]/a').text
level = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[3]').text
point = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[4]').text
character['rank'] = rank
character['serverImg'] = serverImg
character['guildName'] = guildName
character['master'] = master
character['level'] = level
character['point'] = point
zenithRank.append(character)
#이노시스
driver.get('https://maple.gg/rank/guild/enosis?page=' + str(j))
for i in range(1, 21):
character = {}
rank = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/th').text
serverImg = 'https://ssl.nx.com/s2/game/maplestory/renewal/common/world_icon/icon_6.png'
guildName = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[1]/a').text
master = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[2]/a').text
level = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[3]').text
point = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[4]').text
character['rank'] = rank
character['serverImg'] = serverImg
character['guildName'] = guildName
character['master'] = master
character['level'] = level
character['point'] = point
enosisRank.append(character)
#리부트2
driver.get('https://maple.gg/rank/guild/reboot2?page=' + str(j))
for i in range(1, 21):
character = {}
rank = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/th').text
serverImg = 'https://ssl.nx.com/s2/game/maplestory/renewal/common/world_icon/icon_2.png'
guildName = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[1]/a').text
master = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[2]/a').text
level = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[3]').text
point = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[4]').text
character['rank'] = rank
character['serverImg'] = serverImg
character['guildName'] = guildName
character['master'] = master
character['level'] = level
character['point'] = point
reboot2Rank.append(character)
#아케인
driver.get('https://maple.gg/rank/guild/arcane?page=' + str(j))
for i in range(1, 21):
character = {}
rank = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/th').text
serverImg = 'https://ssl.nx.com/s2/game/maplestory/renewal/common/world_icon/icon_14.png'
guildName = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[1]/a').text
master = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[2]/a').text
level = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[3]').text
point = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[4]').text
character['rank'] = rank
character['serverImg'] = serverImg
character['guildName'] = guildName
character['master'] = master
character['level'] = level
character['point'] = point
arcaneRank.append(character)
#노바
for j in range(1, 4):
driver.get('https://maple.gg/rank/guild/nova?page=' + str(j))
for i in range(1, 21):
character = {}
rank = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/th').text
serverImg = 'https://ssl.nx.com/s2/game/maplestory/renewal/common/world_icon/icon_15.png'
guildName = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[1]/a').text
master = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[2]/a').text
level = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[3]').text
point = driver.find_element_by_xpath(
'//*[@id="app"]/section[4]/section/div/table/tbody/tr[' + str(i) + ']/td[4]').text
character['rank'] = rank
character['serverImg'] = serverImg
character['guildName'] = guildName
character['master'] = master
character['level'] = level
character['point'] = point
novaRank.append(character)
data['lunaRank'] = lunaRank
data['scaniaRank'] = scaniaRank
data['elysiumRank'] = elysiumRank
data['rebootRank'] = rebootRank
data['croaRank'] = croaRank
data['auroraRank'] = auroraRank
data['beraRank'] = beraRank
data['redRank'] = redRank
data['unionRank'] = unionRank
data['zenithRank'] = zenithRank
data['enosisRank'] = enosisRank
data['arcaneRank'] = arcaneRank
data['reboot2Rank'] = reboot2Rank
data['novaRank'] = novaRank
with open('GuildRank.json', 'w', encoding="utf-8") as make_file:
    json.dump(data, make_file, ensure_ascii=False, indent="\t")
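The scraper serializes its per-server lists into `GuildRank.json` with `ensure_ascii=False` (so Korean guild names stay readable) and tab indentation. A minimal sketch of that round trip, using made-up placeholder values rather than real scraped data:

```python
import json

# Hypothetical record shaped like the rows the scraper collects.
data = {
    'lunaRank': [{
        'rank': '1',
        'serverImg': 'https://ssl.nx.com/s2/game/maplestory/renewal/common/world_icon/icon_0.png',
        'guildName': 'ExampleGuild',
        'master': 'ExampleMaster',
        'level': '30',
        'point': '1234567',
    }],
}

# Serialize the same way the scraper does, then read it back.
text = json.dumps(data, ensure_ascii=False, indent='\t')
loaded = json.loads(text)
print(loaded['lunaRank'][0]['guildName'])  # ExampleGuild
```

Because `ensure_ascii` is disabled, non-ASCII names are written as literal UTF-8 characters instead of `\uXXXX` escapes, which is why the file is opened with an explicit `encoding="utf-8"`.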
| 43.844907 | 118 | 0.590887 | 2,482 | 18,941 | 4.405721 | 0.078566 | 0.066758 | 0.113489 | 0.12684 | 0.841975 | 0.835391 | 0.833379 | 0.7893 | 0.7893 | 0.7893 | 0 | 0.015211 | 0.205163 | 18,941 | 431 | 119 | 43.946636 | 0.711126 | 0.040547 | 0 | 0.742029 | 0 | 0.04058 | 0.358871 | 0.204487 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.031884 | 0 | 0.031884 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
a685509d6c8daa05bb656cd43f1db7327b547076 | 22,608 | py | Python | sdk/python/pulumi_azure/keyvault/certificate_issuer.py | henriktao/pulumi-azure | f1cbcf100b42b916da36d8fe28be3a159abaf022 | [
"ECL-2.0",
"Apache-2.0"
] | 109 | 2018-06-18T00:19:44.000Z | 2022-02-20T05:32:57.000Z | sdk/python/pulumi_azure/keyvault/certificate_issuer.py | henriktao/pulumi-azure | f1cbcf100b42b916da36d8fe28be3a159abaf022 | [
"ECL-2.0",
"Apache-2.0"
] | 663 | 2018-06-18T21:08:46.000Z | 2022-03-31T20:10:11.000Z | sdk/python/pulumi_azure/keyvault/certificate_issuer.py | henriktao/pulumi-azure | f1cbcf100b42b916da36d8fe28be3a159abaf022 | [
"ECL-2.0",
"Apache-2.0"
] | 41 | 2018-07-19T22:37:38.000Z | 2022-03-14T10:56:26.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
from . import outputs
from ._inputs import *
__all__ = ['CertificateIssuerArgs', 'CertificateIssuer']

@pulumi.input_type
class CertificateIssuerArgs:
    def __init__(__self__, *,
                 key_vault_id: pulumi.Input[str],
                 provider_name: pulumi.Input[str],
                 account_id: Optional[pulumi.Input[str]] = None,
                 admins: Optional[pulumi.Input[Sequence[pulumi.Input['CertificateIssuerAdminArgs']]]] = None,
                 name: Optional[pulumi.Input[str]] = None,
                 org_id: Optional[pulumi.Input[str]] = None,
                 password: Optional[pulumi.Input[str]] = None):
        """
        The set of arguments for constructing a CertificateIssuer resource.
        :param pulumi.Input[str] key_vault_id: The ID of the Key Vault in which to create the Certificate Issuer.
        :param pulumi.Input[str] provider_name: The name of the third-party Certificate Issuer. Possible values are: `DigiCert`, `GlobalSign`, `OneCertV2-PrivateCA`, `OneCertV2-PublicCA` and `SslAdminV2`.
        :param pulumi.Input[str] account_id: The account number with the third-party Certificate Issuer.
        :param pulumi.Input[Sequence[pulumi.Input['CertificateIssuerAdminArgs']]] admins: One or more `admin` blocks as defined below.
        :param pulumi.Input[str] name: The name which should be used for this Key Vault Certificate Issuer. Changing this forces a new Key Vault Certificate Issuer to be created.
        :param pulumi.Input[str] org_id: The ID of the organization as provided to the issuer.
        :param pulumi.Input[str] password: The password associated with the account and organization ID at the third-party Certificate Issuer. If not specified, will not overwrite any previous value.
        """
        pulumi.set(__self__, "key_vault_id", key_vault_id)
        pulumi.set(__self__, "provider_name", provider_name)
        if account_id is not None:
            pulumi.set(__self__, "account_id", account_id)
        if admins is not None:
            pulumi.set(__self__, "admins", admins)
        if name is not None:
            pulumi.set(__self__, "name", name)
        if org_id is not None:
            pulumi.set(__self__, "org_id", org_id)
        if password is not None:
            pulumi.set(__self__, "password", password)

    @property
    @pulumi.getter(name="keyVaultId")
    def key_vault_id(self) -> pulumi.Input[str]:
        """
        The ID of the Key Vault in which to create the Certificate Issuer.
        """
        return pulumi.get(self, "key_vault_id")

    @key_vault_id.setter
    def key_vault_id(self, value: pulumi.Input[str]):
        pulumi.set(self, "key_vault_id", value)

    @property
    @pulumi.getter(name="providerName")
    def provider_name(self) -> pulumi.Input[str]:
        """
        The name of the third-party Certificate Issuer. Possible values are: `DigiCert`, `GlobalSign`, `OneCertV2-PrivateCA`, `OneCertV2-PublicCA` and `SslAdminV2`.
        """
        return pulumi.get(self, "provider_name")

    @provider_name.setter
    def provider_name(self, value: pulumi.Input[str]):
        pulumi.set(self, "provider_name", value)

    @property
    @pulumi.getter(name="accountId")
    def account_id(self) -> Optional[pulumi.Input[str]]:
        """
        The account number with the third-party Certificate Issuer.
        """
        return pulumi.get(self, "account_id")

    @account_id.setter
    def account_id(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "account_id", value)

    @property
    @pulumi.getter
    def admins(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['CertificateIssuerAdminArgs']]]]:
        """
        One or more `admin` blocks as defined below.
        """
        return pulumi.get(self, "admins")

    @admins.setter
    def admins(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['CertificateIssuerAdminArgs']]]]):
        pulumi.set(self, "admins", value)

    @property
    @pulumi.getter
    def name(self) -> Optional[pulumi.Input[str]]:
        """
        The name which should be used for this Key Vault Certificate Issuer. Changing this forces a new Key Vault Certificate Issuer to be created.
        """
        return pulumi.get(self, "name")

    @name.setter
    def name(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "name", value)

    @property
    @pulumi.getter(name="orgId")
    def org_id(self) -> Optional[pulumi.Input[str]]:
        """
        The ID of the organization as provided to the issuer.
        """
        return pulumi.get(self, "org_id")

    @org_id.setter
    def org_id(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "org_id", value)

    @property
    @pulumi.getter
    def password(self) -> Optional[pulumi.Input[str]]:
        """
        The password associated with the account and organization ID at the third-party Certificate Issuer. If not specified, will not overwrite any previous value.
        """
        return pulumi.get(self, "password")

    @password.setter
    def password(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "password", value)

@pulumi.input_type
class _CertificateIssuerState:
    def __init__(__self__, *,
                 account_id: Optional[pulumi.Input[str]] = None,
                 admins: Optional[pulumi.Input[Sequence[pulumi.Input['CertificateIssuerAdminArgs']]]] = None,
                 key_vault_id: Optional[pulumi.Input[str]] = None,
                 name: Optional[pulumi.Input[str]] = None,
                 org_id: Optional[pulumi.Input[str]] = None,
                 password: Optional[pulumi.Input[str]] = None,
                 provider_name: Optional[pulumi.Input[str]] = None):
        """
        Input properties used for looking up and filtering CertificateIssuer resources.
        :param pulumi.Input[str] account_id: The account number with the third-party Certificate Issuer.
        :param pulumi.Input[Sequence[pulumi.Input['CertificateIssuerAdminArgs']]] admins: One or more `admin` blocks as defined below.
        :param pulumi.Input[str] key_vault_id: The ID of the Key Vault in which to create the Certificate Issuer.
        :param pulumi.Input[str] name: The name which should be used for this Key Vault Certificate Issuer. Changing this forces a new Key Vault Certificate Issuer to be created.
        :param pulumi.Input[str] org_id: The ID of the organization as provided to the issuer.
        :param pulumi.Input[str] password: The password associated with the account and organization ID at the third-party Certificate Issuer. If not specified, will not overwrite any previous value.
        :param pulumi.Input[str] provider_name: The name of the third-party Certificate Issuer. Possible values are: `DigiCert`, `GlobalSign`, `OneCertV2-PrivateCA`, `OneCertV2-PublicCA` and `SslAdminV2`.
        """
        if account_id is not None:
            pulumi.set(__self__, "account_id", account_id)
        if admins is not None:
            pulumi.set(__self__, "admins", admins)
        if key_vault_id is not None:
            pulumi.set(__self__, "key_vault_id", key_vault_id)
        if name is not None:
            pulumi.set(__self__, "name", name)
        if org_id is not None:
            pulumi.set(__self__, "org_id", org_id)
        if password is not None:
            pulumi.set(__self__, "password", password)
        if provider_name is not None:
            pulumi.set(__self__, "provider_name", provider_name)

    @property
    @pulumi.getter(name="accountId")
    def account_id(self) -> Optional[pulumi.Input[str]]:
        """
        The account number with the third-party Certificate Issuer.
        """
        return pulumi.get(self, "account_id")

    @account_id.setter
    def account_id(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "account_id", value)

    @property
    @pulumi.getter
    def admins(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['CertificateIssuerAdminArgs']]]]:
        """
        One or more `admin` blocks as defined below.
        """
        return pulumi.get(self, "admins")

    @admins.setter
    def admins(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['CertificateIssuerAdminArgs']]]]):
        pulumi.set(self, "admins", value)

    @property
    @pulumi.getter(name="keyVaultId")
    def key_vault_id(self) -> Optional[pulumi.Input[str]]:
        """
        The ID of the Key Vault in which to create the Certificate Issuer.
        """
        return pulumi.get(self, "key_vault_id")

    @key_vault_id.setter
    def key_vault_id(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "key_vault_id", value)

    @property
    @pulumi.getter
    def name(self) -> Optional[pulumi.Input[str]]:
        """
        The name which should be used for this Key Vault Certificate Issuer. Changing this forces a new Key Vault Certificate Issuer to be created.
        """
        return pulumi.get(self, "name")

    @name.setter
    def name(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "name", value)

    @property
    @pulumi.getter(name="orgId")
    def org_id(self) -> Optional[pulumi.Input[str]]:
        """
        The ID of the organization as provided to the issuer.
        """
        return pulumi.get(self, "org_id")

    @org_id.setter
    def org_id(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "org_id", value)

    @property
    @pulumi.getter
    def password(self) -> Optional[pulumi.Input[str]]:
        """
        The password associated with the account and organization ID at the third-party Certificate Issuer. If not specified, will not overwrite any previous value.
        """
        return pulumi.get(self, "password")

    @password.setter
    def password(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "password", value)

    @property
    @pulumi.getter(name="providerName")
    def provider_name(self) -> Optional[pulumi.Input[str]]:
        """
        The name of the third-party Certificate Issuer. Possible values are: `DigiCert`, `GlobalSign`, `OneCertV2-PrivateCA`, `OneCertV2-PublicCA` and `SslAdminV2`.
        """
        return pulumi.get(self, "provider_name")

    @provider_name.setter
    def provider_name(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "provider_name", value)
class CertificateIssuer(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
account_id: Optional[pulumi.Input[str]] = None,
admins: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['CertificateIssuerAdminArgs']]]]] = None,
key_vault_id: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
org_id: Optional[pulumi.Input[str]] = None,
password: Optional[pulumi.Input[str]] = None,
provider_name: Optional[pulumi.Input[str]] = None,
__props__=None):
"""
Manages a Key Vault Certificate Issuer.
## Example Usage
```python
import pulumi
import pulumi_azure as azure
current = azure.core.get_client_config()
example_resource_group = azure.core.ResourceGroup("exampleResourceGroup", location="West Europe")
example_key_vault = azure.keyvault.KeyVault("exampleKeyVault",
location=example_resource_group.location,
resource_group_name=example_resource_group.name,
sku_name="standard",
tenant_id=current.tenant_id)
example_certificate_issuer = azure.keyvault.CertificateIssuer("exampleCertificateIssuer",
org_id="ExampleOrgName",
key_vault_id=example_key_vault.id,
provider_name="DigiCert",
account_id="0000",
password="example-password")
```
## Import
Key Vault Certificate Issuers can be imported using the `resource id`, e.g.
```sh
$ pulumi import azure:keyvault/certificateIssuer:CertificateIssuer example "https://key-vault-name.vault.azure.net/certificates/issuers/example"
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] account_id: The account number with the third-party Certificate Issuer.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['CertificateIssuerAdminArgs']]]] admins: One or more `admin` blocks as defined below.
:param pulumi.Input[str] key_vault_id: The ID of the Key Vault in which to create the Certificate Issuer.
:param pulumi.Input[str] name: The name which should be used for this Key Vault Certificate Issuer. Changing this forces a new Key Vault Certificate Issuer to be created.
:param pulumi.Input[str] org_id: The ID of the organization as provided to the issuer.
:param pulumi.Input[str] password: The password associated with the account and organization ID at the third-party Certificate Issuer. If not specified, will not overwrite any previous value.
:param pulumi.Input[str] provider_name: The name of the third-party Certificate Issuer. Possible values are: `DigiCert`, `GlobalSign`, `OneCertV2-PrivateCA`, `OneCertV2-PublicCA` and `SslAdminV2`.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: CertificateIssuerArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
Manages a Key Vault Certificate Issuer.
## Example Usage
```python
import pulumi
import pulumi_azure as azure
current = azure.core.get_client_config()
example_resource_group = azure.core.ResourceGroup("exampleResourceGroup", location="West Europe")
example_key_vault = azure.keyvault.KeyVault("exampleKeyVault",
location=example_resource_group.location,
resource_group_name=example_resource_group.name,
sku_name="standard",
tenant_id=current.tenant_id)
example_certificate_issuer = azure.keyvault.CertificateIssuer("exampleCertificateIssuer",
org_id="ExampleOrgName",
key_vault_id=example_key_vault.id,
provider_name="DigiCert",
account_id="0000",
password="example-password")
```
## Import
Key Vault Certificate Issuers can be imported using the `resource id`, e.g.
```sh
$ pulumi import azure:keyvault/certificateIssuer:CertificateIssuer example "https://key-vault-name.vault.azure.net/certificates/issuers/example"
```
:param str resource_name: The name of the resource.
:param CertificateIssuerArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(CertificateIssuerArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
account_id: Optional[pulumi.Input[str]] = None,
admins: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['CertificateIssuerAdminArgs']]]]] = None,
key_vault_id: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
org_id: Optional[pulumi.Input[str]] = None,
password: Optional[pulumi.Input[str]] = None,
provider_name: Optional[pulumi.Input[str]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = CertificateIssuerArgs.__new__(CertificateIssuerArgs)
__props__.__dict__["account_id"] = account_id
__props__.__dict__["admins"] = admins
if key_vault_id is None and not opts.urn:
raise TypeError("Missing required property 'key_vault_id'")
__props__.__dict__["key_vault_id"] = key_vault_id
__props__.__dict__["name"] = name
__props__.__dict__["org_id"] = org_id
__props__.__dict__["password"] = password
if provider_name is None and not opts.urn:
raise TypeError("Missing required property 'provider_name'")
__props__.__dict__["provider_name"] = provider_name
super(CertificateIssuer, __self__).__init__(
'azure:keyvault/certificateIssuer:CertificateIssuer',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
account_id: Optional[pulumi.Input[str]] = None,
admins: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['CertificateIssuerAdminArgs']]]]] = None,
key_vault_id: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
org_id: Optional[pulumi.Input[str]] = None,
password: Optional[pulumi.Input[str]] = None,
provider_name: Optional[pulumi.Input[str]] = None) -> 'CertificateIssuer':
"""
Get an existing CertificateIssuer resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] account_id: The account number with the third-party Certificate Issuer.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['CertificateIssuerAdminArgs']]]] admins: One or more `admin` blocks as defined below.
:param pulumi.Input[str] key_vault_id: The ID of the Key Vault in which to create the Certificate Issuer.
:param pulumi.Input[str] name: The name which should be used for this Key Vault Certificate Issuer. Changing this forces a new Key Vault Certificate Issuer to be created.
:param pulumi.Input[str] org_id: The ID of the organization as provided to the issuer.
:param pulumi.Input[str] password: The password associated with the account and organization ID at the third-party Certificate Issuer. If not specified, will not overwrite any previous value.
:param pulumi.Input[str] provider_name: The name of the third-party Certificate Issuer. Possible values are: `DigiCert`, `GlobalSign`, `OneCertV2-PrivateCA`, `OneCertV2-PublicCA` and `SslAdminV2`.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
        __props__ = _CertificateIssuerState.__new__(_CertificateIssuerState)

        __props__.__dict__["account_id"] = account_id
        __props__.__dict__["admins"] = admins
        __props__.__dict__["key_vault_id"] = key_vault_id
        __props__.__dict__["name"] = name
        __props__.__dict__["org_id"] = org_id
        __props__.__dict__["password"] = password
        __props__.__dict__["provider_name"] = provider_name
        return CertificateIssuer(resource_name, opts=opts, __props__=__props__)

    @property
    @pulumi.getter(name="accountId")
    def account_id(self) -> pulumi.Output[Optional[str]]:
        """
        The account number with the third-party Certificate Issuer.
        """
        return pulumi.get(self, "account_id")

    @property
    @pulumi.getter
    def admins(self) -> pulumi.Output[Optional[Sequence['outputs.CertificateIssuerAdmin']]]:
        """
        One or more `admin` blocks as defined below.
        """
        return pulumi.get(self, "admins")

    @property
    @pulumi.getter(name="keyVaultId")
    def key_vault_id(self) -> pulumi.Output[str]:
        """
        The ID of the Key Vault in which to create the Certificate Issuer.
        """
        return pulumi.get(self, "key_vault_id")

    @property
    @pulumi.getter
    def name(self) -> pulumi.Output[str]:
        """
        The name which should be used for this Key Vault Certificate Issuer. Changing this forces a new Key Vault Certificate Issuer to be created.
        """
        return pulumi.get(self, "name")

    @property
    @pulumi.getter(name="orgId")
    def org_id(self) -> pulumi.Output[Optional[str]]:
        """
        The ID of the organization as provided to the issuer.
        """
        return pulumi.get(self, "org_id")

    @property
    @pulumi.getter
    def password(self) -> pulumi.Output[Optional[str]]:
        """
        The password associated with the account and organization ID at the third-party Certificate Issuer. If not specified, will not overwrite any previous value.
        """
        return pulumi.get(self, "password")

    @property
    @pulumi.getter(name="providerName")
    def provider_name(self) -> pulumi.Output[str]:
        """
        The name of the third-party Certificate Issuer. Possible values are: `DigiCert`, `GlobalSign`, `OneCertV2-PrivateCA`, `OneCertV2-PublicCA` and `SslAdminV2`.
        """
        return pulumi.get(self, "provider_name")
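The getters above all follow one pattern: resource state lives in a plain `__dict__`-backed state object (built with `__new__` to bypass `__init__`), and each read-only property fetches a key back out of it. A dependency-free sketch of that pattern — the names `_State`, `make_state`, and `IssuerView` are illustrative, not part of the Pulumi SDK:

```python
# Minimal stand-in for the state pattern above: fields are stored in
# __dict__ and exposed read-only through properties, mirroring how
# the generated code populates __props__ and reads via pulumi.get.
class _State:
    pass


def make_state(**fields):
    props = _State.__new__(_State)   # bypass __init__, like __props__ above
    for key, value in fields.items():
        props.__dict__[key] = value
    return props


class IssuerView:
    def __init__(self, props):
        self._props = props

    def _get(self, key):             # analogous role to pulumi.get(self, key)
        return self._props.__dict__[key]

    @property
    def provider_name(self):
        return self._get("provider_name")


issuer = IssuerView(make_state(provider_name="DigiCert"))
print(issuer.provider_name)          # -> DigiCert
```

This is why the generated `get` classmethod can construct state without calling a constructor: only the attribute dictionary matters to the getters.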
| 45.95122 | 204 | 0.656582 | 2,694 | 22,608 | 5.316258 | 0.076095 | 0.082949 | 0.078201 | 0.073733 | 0.874668 | 0.860494 | 0.84646 | 0.828097 | 0.823977 | 0.808826 | 0 | 0.00175 | 0.241552 | 22,608 | 491 | 205 | 46.044807 | 0.833499 | 0.394153 | 0 | 0.746212 | 1 | 0 | 0.100233 | 0.026949 | 0 | 0 | 0 | 0 | 0 | 1 | 0.159091 | false | 0.090909 | 0.026515 | 0 | 0.280303 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 9 |
a699267b3d9e2de509cad388b67d4477f5eb2242 | 128 | py | Python | shiSock-0.4.0/underDevelopment/shiSock/secureClient.py | AnanyaRamanA/shiSock | 51efb0eba17eb106b9480598d278536ddd7732c3 | [
"MIT"
] | null | null | null | shiSock-0.4.0/underDevelopment/shiSock/secureClient.py | AnanyaRamanA/shiSock | 51efb0eba17eb106b9480598d278536ddd7732c3 | [
"MIT"
] | null | null | null | shiSock-0.4.0/underDevelopment/shiSock/secureClient.py | AnanyaRamanA/shiSock | 51efb0eba17eb106b9480598d278536ddd7732c3 | [
"MIT"
] | 1 | 2021-10-31T13:47:42.000Z | 2021-10-31T13:47:42.000Z | class __secureClient():
    def __init__(self):
        pass


class secureClient():
    def __init__(self):
        pass
| 11.636364 | 23 | 0.578125 | 12 | 128 | 5.333333 | 0.5 | 0.53125 | 0.625 | 0.75 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0.328125 | 128 | 10 | 24 | 12.8 | 0.744186 | 0 | 0 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0.333333 | 0 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 8 |
a6b968cc09fe5e4a51d602fb59617b06ab8d17b4 | 11,186 | py | Python | access/tests/test_access.py | GeoDaCenter/accessibility | 731ca101ca3744740ea246fd9f57e29f893e8405 | [
"BSD-3-Clause"
] | 6 | 2019-10-01T20:36:03.000Z | 2021-01-17T00:43:19.000Z | access/tests/test_access.py | GeoDaCenter/accessibility | 731ca101ca3744740ea246fd9f57e29f893e8405 | [
"BSD-3-Clause"
] | 31 | 2019-06-14T15:56:06.000Z | 2020-05-31T18:52:48.000Z | access/tests/test_access.py | GeoDaCenter/accessibility | 731ca101ca3744740ea246fd9f57e29f893e8405 | [
"BSD-3-Clause"
] | 2 | 2018-07-20T22:09:06.000Z | 2018-09-21T19:21:10.000Z | import sys
sys.path.append('../..')

import math
import unittest

import numpy as np
import pandas as pd
import geopandas as gpd

from access import access, weights

import util as tu


class TestAccess(unittest.TestCase):

    def setUp(self):
        n = 5
        self.supply_grid = tu.create_nxn_grid(n)
        self.demand_grid = self.supply_grid.sample(1)
        self.cost_matrix = tu.create_cost_matrix(self.supply_grid, 'euclidean')

    def test_access_initialize_without_demand_index_col_raises_value_error(self):
        with self.assertRaises(ValueError):
            bad_index_name = 'Not a col in demand df'
            access(demand_df = self.demand_grid, demand_index = bad_index_name,
                   demand_value = 'value',
                   supply_df = self.supply_grid, supply_index = 'id',
                   supply_value = 'value')

    def test_access_initialize_without_supply_index_col_raises_value_error(self):
        with self.assertRaises(ValueError):
            bad_index_name = 'Not a col in supply df'
            access(demand_df = self.demand_grid, demand_index = 'id',
                   demand_value = 'value',
                   supply_df = self.supply_grid, supply_index = bad_index_name,
                   supply_value = 'value')

    def test_access_initialize_without_demand_value_col_raises_value_error(self):
        with self.assertRaises(ValueError):
            bad_value_name = 'Not a col in demand df'
            access(demand_df = self.demand_grid, demand_index = 'id',
                   demand_value = bad_value_name,
                   supply_df = self.supply_grid, supply_index = 'id',
                   supply_value = 'value')

    def test_access_initialize_without_supply_value_col_raises_value_error(self):
        with self.assertRaises(ValueError):
            bad_value_name = 'Not a col in supply df'
            access(demand_df = self.demand_grid, demand_index = 'id',
                   demand_value = 'value',
                   supply_df = self.supply_grid, supply_index = 'id',
                   supply_value = bad_value_name)

    def test_access_initialize_without_supply_value_col_in_list_raises_value_error(self):
        with self.assertRaises(ValueError):
            bad_value_name = ['Not a col in supply df']
            access(demand_df = self.demand_grid, demand_index = 'id',
                   demand_value = 'value',
                   supply_df = self.supply_grid, supply_index = 'id',
                   supply_value = bad_value_name)

    def test_access_initialize_with_supply_value_col_in_list(self):
        value_in_list = ['value']
        self.model = access(demand_df = self.demand_grid, demand_index = 'id',
                            demand_value = 'value',
                            supply_df = self.supply_grid, supply_index = 'id',
                            supply_value = value_in_list)

        actual = self.model.supply_types

        self.assertEqual(actual, ['value'])

    def test_access_initialize_with_supply_value_col_in_dict_raises_value_error(self):
        with self.assertRaises(ValueError):
            value_in_dict = {'value':''}
            self.model = access(demand_df = self.demand_grid, demand_index = 'id',
                                demand_value = 'value',
                                supply_df = self.supply_grid, supply_index = 'id',
                                supply_value = value_in_dict)

    def test_access_initialize_without_valid_cost_origin_raises_value_error(self):
        with self.assertRaises(ValueError):
            bad_cost_origin = "Not a valid cost origin column"
            access(demand_df = self.demand_grid, demand_index = 'id',
                   demand_value = 'value',
                   supply_df = self.supply_grid, supply_index = 'id',
                   supply_value = 'value',
                   cost_df = self.cost_matrix,
                   cost_origin = bad_cost_origin,
                   cost_dest = 'dest',
                   cost_name = 'cost')

    def test_access_initialize_without_valid_cost_dest_raises_value_error(self):
        with self.assertRaises(ValueError):
            bad_cost_dest = "Not a valid cost dest column"
            access(demand_df = self.demand_grid, demand_index = 'id',
                   demand_value = 'value',
                   supply_df = self.supply_grid, supply_index = 'id',
                   supply_value = 'value',
                   cost_df = self.cost_matrix,
                   cost_origin = 'origin',
                   cost_dest = bad_cost_dest,
                   cost_name = 'cost')

    def test_access_initialize_without_valid_cost_name_raises_value_error(self):
        with self.assertRaises(ValueError):
            bad_cost_name = "Not a valid cost name column"
            access(demand_df = self.demand_grid, demand_index = 'id',
                   demand_value = 'value',
                   supply_df = self.supply_grid, supply_index = 'id',
                   supply_value = 'value',
                   cost_df = self.cost_matrix,
                   cost_origin = 'origin',
                   cost_dest = 'dest',
                   cost_name = bad_cost_name)

    def test_access_initialize_without_valid_cost_name_in_list_raises_value_error(self):
        with self.assertRaises(ValueError):
            bad_cost_name = ["Not a valid cost name column"]
            access(demand_df = self.demand_grid, demand_index = 'id',
                   demand_value = 'value',
                   supply_df = self.supply_grid, supply_index = 'id',
                   supply_value = 'value',
                   cost_df = self.cost_matrix,
                   cost_origin = 'origin',
                   cost_dest = 'dest',
                   cost_name = bad_cost_name)

    def test_access_initialize_with_valid_cost_name_in_list(self):
        cost_name_list = ['cost']
        self.model = access(demand_df = self.demand_grid, demand_index = 'id',
                            demand_value = 'value',
                            supply_df = self.supply_grid, supply_index = 'id',
                            supply_value = 'value',
                            cost_df = self.cost_matrix,
                            cost_origin = 'origin',
                            cost_dest = 'dest',
                            cost_name = cost_name_list)

        actual = self.model.cost_names

        self.assertEqual(actual, ['cost'])

    def test_access_initialize_with_valid_cost_name_in_dict_raises_value_error(self):
        with self.assertRaises(ValueError):
            cost_name_dict = {'cost':''}
            self.model = access(demand_df = self.demand_grid, demand_index = 'id',
                                demand_value = 'value',
                                supply_df = self.supply_grid, supply_index = 'id',
                                supply_value = 'value',
                                cost_df = self.cost_matrix,
                                cost_origin = 'origin',
                                cost_dest = 'dest',
                                cost_name = cost_name_dict)

    def test_access_initialize_without_valid_neighbor_cost_origin_raises_value_error(self):
        with self.assertRaises(ValueError):
            bad_cost_origin = "Not a valid cost origin column"
            access(demand_df = self.demand_grid, demand_index = 'id',
                   demand_value = 'value',
                   supply_df = self.supply_grid, supply_index = 'id',
                   supply_value = 'value',
                   neighbor_cost_df = self.cost_matrix,
                   neighbor_cost_origin = bad_cost_origin,
                   neighbor_cost_dest = 'dest',
                   neighbor_cost_name = 'cost')

    def test_access_initialize_without_valid_neighbor_cost_dest_raises_value_error(self):
        with self.assertRaises(ValueError):
            bad_cost_dest = "Not a valid cost dest column"
            access(demand_df = self.demand_grid, demand_index = 'id',
                   demand_value = 'value',
                   supply_df = self.supply_grid, supply_index = 'id',
                   supply_value = 'value',
                   neighbor_cost_df = self.cost_matrix,
                   neighbor_cost_origin = 'origin',
                   neighbor_cost_dest = bad_cost_dest,
                   neighbor_cost_name = 'cost')

    def test_access_initialize_without_valid_neighbor_cost_name_raises_value_error(self):
        with self.assertRaises(ValueError):
            bad_cost_name = "Not a valid cost name column"
            access(demand_df = self.demand_grid, demand_index = 'id',
                   demand_value = 'value',
                   supply_df = self.supply_grid, supply_index = 'id',
                   supply_value = 'value',
                   neighbor_cost_df = self.cost_matrix,
                   neighbor_cost_origin = 'origin',
                   neighbor_cost_dest = 'dest',
                   neighbor_cost_name = bad_cost_name)

    def test_access_initialize_without_valid_neighbor_cost_name_in_list_raises_value_error(self):
        with self.assertRaises(ValueError):
            bad_cost_name = ["Not a valid cost name column"]
            access(demand_df = self.demand_grid, demand_index = 'id',
                   demand_value = 'value',
                   supply_df = self.supply_grid, supply_index = 'id',
                   supply_value = 'value',
                   neighbor_cost_df = self.cost_matrix,
                   neighbor_cost_origin = 'origin',
                   neighbor_cost_dest = 'dest',
                   neighbor_cost_name = bad_cost_name)

    def test_access_initialize_with_valid_neighbor_cost_name_in_list(self):
        cost_name_list = ['cost']
        self.model = access(demand_df = self.demand_grid, demand_index = 'id',
                            demand_value = 'value',
                            supply_df = self.supply_grid, supply_index = 'id',
                            supply_value = 'value',
                            neighbor_cost_df = self.cost_matrix,
                            neighbor_cost_origin = 'origin',
                            neighbor_cost_dest = 'dest',
                            neighbor_cost_name = cost_name_list)

        actual = self.model.neighbor_cost_names

        self.assertEqual(actual, ['cost'])

    def test_access_initialize_with_valid_neighbor_cost_name_in_dict_raises_value_error(self):
        with self.assertRaises(ValueError):
            cost_name_dict = {'cost':''}
            self.model = access(demand_df = self.demand_grid, demand_index = 'id',
                                demand_value = 'value',
                                supply_df = self.supply_grid, supply_index = 'id',
                                supply_value = 'value',
                                neighbor_cost_df = self.cost_matrix,
                                neighbor_cost_origin = 'origin',
                                neighbor_cost_dest = 'dest',
                                neighbor_cost_name = cost_name_dict)
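Every test in this file exercises the same rule: each index/value/cost column name passed to `access()` must exist in the corresponding dataframe, and a string is rejected when it names a missing column (a dict is rejected outright). A dependency-free sketch of that validation and of the `assertRaises` pattern used above — `validate_columns` and `TestValidate` are hypothetical names, not part of the `access` package:

```python
import unittest


def validate_columns(columns, required):
    """Raise ValueError if any required column name is absent,
    mirroring the checks the access() constructor performs."""
    missing = [c for c in required if c not in columns]
    if missing:
        raise ValueError("columns not found: {}".format(missing))


class TestValidate(unittest.TestCase):
    def test_missing_column_raises(self):
        # Same shape as the tests above: the bad name triggers ValueError.
        with self.assertRaises(ValueError):
            validate_columns(['id', 'value'], ['Not a col in demand df'])

    def test_present_columns_pass(self):
        validate_columns(['id', 'value'], ['id', 'value'])


suite = unittest.TestLoader().loadTestsFromTestCase(TestValidate)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())   # -> True
```

Running the suite programmatically like this is equivalent to `python -m unittest` discovering the class.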
| 41.738806 | 97 | 0.570177 | 1,221 | 11,186 | 4.805078 | 0.054054 | 0.051133 | 0.052497 | 0.074484 | 0.92739 | 0.920402 | 0.913244 | 0.903528 | 0.878302 | 0.862792 | 0 | 0.000276 | 0.352137 | 11,186 | 267 | 98 | 41.895131 | 0.8093 | 0 | 0 | 0.708543 | 0 | 0 | 0.066512 | 0 | 0 | 0 | 0 | 0 | 0.095477 | 1 | 0.100503 | false | 0 | 0.040201 | 0 | 0.145729 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
a6f74aaf3e73f623544ce478cef50911a7224dff | 1,736 | py | Python | artist_service.py | talbertmegan/musick | d1ec42ba7ac98bebcf739bef1afeff3fa3a9b17d | [
"MIT"
] | null | null | null | artist_service.py | talbertmegan/musick | d1ec42ba7ac98bebcf739bef1afeff3fa3a9b17d | [
"MIT"
] | null | null | null | artist_service.py | talbertmegan/musick | d1ec42ba7ac98bebcf739bef1afeff3fa3a9b17d | [
"MIT"
] | null | null | null | artist_info = {
"Brendon Urie":{
'image':'Brendon Urie.jpg',
'soundcloud_url': 'https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/playlists/334177834&color=%23ff5500&auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false&show_teaser=true&visual=true',
},
"Taylor Swift":{
'image':'Taylor Swift.jpg',
'soundcloud_url':'https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/tracks/166985759&color=%23ff5500&auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false&show_teaser=true&visual=true',
},
"ABBA":{
'image':'ABBA.jpg',
'soundcloud_url':'https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/tracks/253187333&color=%23ff5500&auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false&show_teaser=true&visual=true',
},
"Britney Spears":{
'image': 'Britney Spears.jpg',
'soundcloud_url':'https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/playlists/247262764&color=%23ff5500&auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false&show_teaser=true&visual=true',
},
"Christina Aguilera":{
'image':'Christina Aguilera.jpg',
'soundcloud_url':'https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/tracks/99265635&color=%23ff5500&auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false&show_teaser=true&visual=true',
},
"Jessica Simpson":{
'image':'Jessica Simpson.jpg',
'soundcloud_url':'https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/playlists/297578035&color=%23ff5500&auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false&show_teaser=true&visual=true'
}
}
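A service module like this is typically consumed by looking up an artist and unpacking the image filename and SoundCloud embed URL. A sketch of such a lookup, trimmed to a single entry copied from the dict above so it runs standalone (`get_artist` is a hypothetical helper, not part of the original module):

```python
# One entry copied verbatim from artist_info above, for illustration.
artist_info = {
    "ABBA": {
        'image': 'ABBA.jpg',
        'soundcloud_url': 'https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/tracks/253187333&color=%23ff5500&auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false&show_teaser=true&visual=true',
    },
}


def get_artist(name):
    """Return (image, soundcloud_url), or None for an unknown artist."""
    entry = artist_info.get(name)
    if entry is None:
        return None
    return entry['image'], entry['soundcloud_url']


print(get_artist("ABBA")[0])   # -> ABBA.jpg
print(get_artist("Nobody"))    # -> None
```

Using `dict.get` keeps an unknown artist from raising `KeyError` in the caller.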
| 64.296296 | 239 | 0.796659 | 258 | 1,736 | 5.193798 | 0.174419 | 0.071642 | 0.071642 | 0.09403 | 0.826119 | 0.826119 | 0.826119 | 0.826119 | 0.826119 | 0.826119 | 0 | 0.056014 | 0.023041 | 1,736 | 26 | 240 | 66.769231 | 0.73408 | 0 | 0 | 0 | 0 | 0.230769 | 0.913594 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
5bcc1c8acee108f0956d904dd8f166ae649e9e46 | 255 | py | Python | swtoolkit/api/interfaces/ipartdoc.py | szcyd-chian/soliwordsapi | 87d496f82f40febee3bdf4de878064a98a82c005 | [
"MIT"
] | 16 | 2020-11-03T14:40:30.000Z | 2022-03-02T15:38:40.000Z | swtoolkit/api/interfaces/ipartdoc.py | szcyd-chian/soliwordsapi | 87d496f82f40febee3bdf4de878064a98a82c005 | [
"MIT"
] | 2 | 2021-03-02T12:10:24.000Z | 2021-11-19T21:34:47.000Z | swtoolkit/api/interfaces/ipartdoc.py | szcyd-chian/soliwordsapi | 87d496f82f40febee3bdf4de878064a98a82c005 | [
"MIT"
] | 8 | 2020-11-11T12:25:58.000Z | 2022-03-28T06:06:44.000Z | class IPartDoc:
    def __init__(self, system_object):
        self.system_object = system_object

    @property
    def _instance(self):
        return self.system_object

    @property
    def is_weldment(self):
        return self._instance.IsWeldment
75369ccfbfd56359bbe939f3b80a1491d0f64ef6 | 125 | py | Python | test/client/data_storage/生成逻辑.py | sf-fl/federatedML | 6cb48525d662ba1bad12702622f2b43f67da67cf | [
"Apache-2.0"
] | null | null | null | test/client/data_storage/生成逻辑.py | sf-fl/federatedML | 6cb48525d662ba1bad12702622f2b43f67da67cf | [
"Apache-2.0"
] | null | null | null | test/client/data_storage/生成逻辑.py | sf-fl/federatedML | 6cb48525d662ba1bad12702622f2b43f67da67cf | [
"Apache-2.0"
] | 1 | 2020-11-17T08:45:49.000Z | 2020-11-17T08:45:49.000Z | # =IF(D2=1,IF(RAND()>0.5,NORMINV(RAND(),75,2),NORMINV(RAND(),85,2)),IF(RAND()>0.5,NORMINV(RAND(),70,2),NORMINV(RAND(),80,2))) | 125 | 125 | 0.616 | 27 | 125 | 2.851852 | 0.444444 | 0.571429 | 0.181818 | 0.207792 | 0.493506 | 0.493506 | 0 | 0 | 0 | 0 | 0 | 0.145161 | 0.008 | 125 | 1 | 125 | 125 | 0.475806 | 0.984 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
f35524fb759a3430fca7157f45c812d4fd7a33ab | 53,426 | py | Python | src/parameters.py | michgz/tonetyrant | 99119f91e872bd12ebf98f2b84919cd6ab0ee374 | [
"MIT"
] | null | null | null | src/parameters.py | michgz/tonetyrant | 99119f91e872bd12ebf98f2b84919cd6ab0ee374 | [
"MIT"
] | 1 | 2022-02-18T09:29:33.000Z | 2022-02-18T09:29:33.000Z | src/parameters.py | michgz/tonetyrant | 99119f91e872bd12ebf98f2b84919cd6ab0ee374 | [
"MIT"
] | null | null | null | ## Automatically generated file. Time of processing: 2022-02-21T16:33:10.671609
from dataclasses import dataclass


@dataclass
class Param:
    number: int
    block0: int
    name: str
    cluster: str
    byteOffset: int
    byteCount: int
    bitOffset: int
    bitCount: int
    recommendedLimits: tuple
    recommendedStep: int
    defaultValue: int
    midiBytes: int
    helpStr: str
Params = [
Param(0, 0, byteOffset=422, byteCount=16, bitOffset=-1, bitCount=-1, defaultValue=0, recommendedLimits=(-1, -1), recommendedStep=16, name="Name", cluster="Name", midiBytes=16, helpStr="The name of the tone, as shown in the keyboard front panel.\nCharacters can be any normal ASCII character."),
Param(1, 0, byteOffset=135, byteCount=1, bitOffset=0, bitCount=4, defaultValue=0, recommendedLimits=(0, 6), recommendedStep=2, name="Timbre", cluster="Wavetable (Sound A)", midiBytes=1, helpStr=""),
Param(2, 0, byteOffset=130, byteCount=2, bitOffset=0, bitCount=14, defaultValue=1, recommendedLimits=(0, 900), recommendedStep=1, name="Wavetable", cluster="Wavetable (Sound A)", midiBytes=2, helpStr=""),
Param(3, 0, byteOffset=132, byteCount=1, bitOffset=0, bitCount=7, defaultValue=127, recommendedLimits=(0, 127), recommendedStep=16, name="", cluster="Velocity (Sound A)", midiBytes=1, helpStr=""),
Param(4, 0, byteOffset=133, byteCount=1, bitOffset=0, bitCount=7, defaultValue=127, recommendedLimits=(0, 127), recommendedStep=16, name="Velocity to filter cutoff", cluster="Velocity (Sound A)", midiBytes=1, helpStr=""),
Param(5, 0, byteOffset=134, byteCount=1, bitOffset=0, bitCount=7, defaultValue=127, recommendedLimits=(0, 127), recommendedStep=16, name="Velocity sense", cluster="Velocity (Sound A)", midiBytes=1, helpStr=""),
Param(6, 0, byteOffset=2, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Attack", cluster="Pitch Envelope (Sound A)", midiBytes=2, helpStr="Units 1/8 semitones"),
Param(6, 1, byteOffset=6, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Hold", cluster="Pitch Envelope (Sound A)", midiBytes=2, helpStr="Units 1/8 semitones"),
Param(6, 2, byteOffset=10, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Release", cluster="Pitch Envelope (Sound A)", midiBytes=2, helpStr="Units 1/8 semitones"),
Param(7, 0, byteOffset=0, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="unused time", cluster="Pitch Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(7, 1, byteOffset=4, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Attack time", cluster="Pitch Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(7, 2, byteOffset=8, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Release time", cluster="Pitch Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(8, 0, byteOffset=124, byteCount=1, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="", cluster="Unknown (Sound A)", midiBytes=2, helpStr=""),
Param(9, 0, byteOffset=125, byteCount=1, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="", cluster="Unknown (Sound A)", midiBytes=2, helpStr=""),
Param(10, 0, byteOffset=14, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Attack 1", cluster="Detune Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(10, 1, byteOffset=18, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Attack 2", cluster="Detune Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(10, 2, byteOffset=22, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Attack 3", cluster="Detune Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(10, 3, byteOffset=26, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Attack 4", cluster="Detune Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(10, 4, byteOffset=30, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Hold", cluster="Detune Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(10, 5, byteOffset=34, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Release 1", cluster="Detune Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(10, 6, byteOffset=38, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Release 2", cluster="Detune Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(11, 0, byteOffset=12, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Unused time", cluster="Detune Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(11, 1, byteOffset=16, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Attack time 1", cluster="Detune Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(11, 2, byteOffset=20, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Attack time 2", cluster="Detune Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(11, 3, byteOffset=24, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Attack time 3", cluster="Detune Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(11, 4, byteOffset=28, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Attack time 4", cluster="Detune Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(11, 5, byteOffset=32, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Release time 1", cluster="Detune Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(11, 6, byteOffset=36, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Release time 2", cluster="Detune Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(12, 0, byteOffset=42, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Attack 1", cluster="Unknown Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(12, 1, byteOffset=46, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Attack 2", cluster="Unknown Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(12, 2, byteOffset=50, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Attack 3", cluster="Unknown Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(12, 3, byteOffset=54, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Attack 4", cluster="Unknown Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(12, 4, byteOffset=58, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Hold", cluster="Unknown Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(12, 5, byteOffset=62, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Release 1", cluster="Unknown Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(12, 6, byteOffset=66, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Release 2", cluster="Unknown Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(13, 0, byteOffset=40, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Unused time", cluster="Unknown Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(13, 1, byteOffset=44, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Attack time 1", cluster="Unknown Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(13, 2, byteOffset=48, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Attack time 2", cluster="Unknown Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(13, 3, byteOffset=52, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Attack time 3", cluster="Unknown Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(13, 4, byteOffset=56, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Attack time 4", cluster="Unknown Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(13, 5, byteOffset=60, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Release time 1", cluster="Unknown Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(13, 6, byteOffset=64, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Release time 2", cluster="Unknown Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(14, 0, byteOffset=126, byteCount=1, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Cutoff", cluster="Filter (Sound A)", midiBytes=2, helpStr=""),
Param(15, 0, byteOffset=127, byteCount=1, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Resonance", cluster="Filter (Sound A)", midiBytes=2, helpStr=""),
Param(16, 0, byteOffset=128, byteCount=1, bitOffset=0, bitCount=7, defaultValue=127, recommendedLimits=(0, 127), recommendedStep=16, name="Envelope Multiplier", cluster="Filter (Sound A)", midiBytes=1, helpStr=""),
Param(17, 0, byteOffset=70, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Attack 1", cluster="Filter Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(17, 1, byteOffset=74, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Attack 2", cluster="Filter Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(17, 2, byteOffset=78, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Attack 3", cluster="Filter Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(17, 3, byteOffset=82, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Attack 4", cluster="Filter Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(17, 4, byteOffset=86, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Hold", cluster="Filter Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(17, 5, byteOffset=90, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Release 1", cluster="Filter Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(17, 6, byteOffset=94, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Release 2", cluster="Filter Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(18, 0, byteOffset=68, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="unused time", cluster="Filter Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(18, 1, byteOffset=72, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Attack time 1", cluster="Filter Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(18, 2, byteOffset=76, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Attack time 2", cluster="Filter Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(18, 3, byteOffset=80, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Attack time 3", cluster="Filter Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(18, 4, byteOffset=84, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Attack time 4", cluster="Filter Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(18, 5, byteOffset=88, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Release time 1", cluster="Filter Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(18, 6, byteOffset=92, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Release time 2", cluster="Filter Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(19, 0, byteOffset=98, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Attack 1", cluster="Gain Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(19, 1, byteOffset=102, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Attack 2", cluster="Gain Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(19, 2, byteOffset=106, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Attack 3", cluster="Gain Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(19, 3, byteOffset=110, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Attack 4", cluster="Gain Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(19, 4, byteOffset=114, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Hold level", cluster="Gain Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(19, 5, byteOffset=118, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Release 1", cluster="Gain Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(19, 6, byteOffset=122, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Release 2", cluster="Gain Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(20, 0, byteOffset=96, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="unused time", cluster="Gain Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(20, 1, byteOffset=100, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Attack time 1", cluster="Gain Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(20, 2, byteOffset=104, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Attack time 2", cluster="Gain Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(20, 3, byteOffset=108, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Attack time 3", cluster="Gain Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(20, 4, byteOffset=112, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Attack time 4", cluster="Gain Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(20, 5, byteOffset=116, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Release time 1", cluster="Gain Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(20, 6, byteOffset=120, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Release time 2", cluster="Gain Envelope (Sound A)", midiBytes=2, helpStr=""),
Param(21, 0, byteOffset=271, byteCount=1, bitOffset=0, bitCount=4, defaultValue=0, recommendedLimits=(0, 6), recommendedStep=2, name="Timbre", cluster="Wavetable (Sound B)", midiBytes=1, helpStr=""),
Param(22, 0, byteOffset=266, byteCount=2, bitOffset=0, bitCount=14, defaultValue=1, recommendedLimits=(0, 900), recommendedStep=1, name="Wavetable", cluster="Wavetable (Sound B)", midiBytes=2, helpStr=""),
Param(23, 0, byteOffset=268, byteCount=1, bitOffset=0, bitCount=7, defaultValue=127, recommendedLimits=(0, 127), recommendedStep=16, name="", cluster="Velocity (Sound B)", midiBytes=1, helpStr=""),
Param(24, 0, byteOffset=269, byteCount=1, bitOffset=0, bitCount=7, defaultValue=127, recommendedLimits=(0, 127), recommendedStep=16, name="Velocity to filter cutoff", cluster="Velocity (Sound B)", midiBytes=1, helpStr=""),
Param(25, 0, byteOffset=270, byteCount=1, bitOffset=0, bitCount=7, defaultValue=127, recommendedLimits=(0, 127), recommendedStep=16, name="Velocity sense", cluster="Velocity (Sound B)", midiBytes=1, helpStr=""),
Param(26, 0, byteOffset=138, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Attack", cluster="Pitch Envelope (Sound B)", midiBytes=2, helpStr="Units 1/8 semitones"),
Param(26, 1, byteOffset=142, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Hold", cluster="Pitch Envelope (Sound B)", midiBytes=2, helpStr="Units 1/8 semitones"),
Param(26, 2, byteOffset=146, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Release", cluster="Pitch Envelope (Sound B)", midiBytes=2, helpStr="Units 1/8 semitones"),
Param(27, 0, byteOffset=136, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Unused time", cluster="Pitch Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(27, 1, byteOffset=140, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Attack time", cluster="Pitch Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(27, 2, byteOffset=144, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Release time", cluster="Pitch Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(28, 0, byteOffset=260, byteCount=1, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="", cluster="Unknown (Sound B)", midiBytes=2, helpStr=""),
Param(29, 0, byteOffset=261, byteCount=1, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="", cluster="Unknown (Sound B)", midiBytes=2, helpStr=""),
Param(30, 0, byteOffset=150, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Attack 1", cluster="Detune Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(30, 1, byteOffset=154, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Attack 2", cluster="Detune Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(30, 2, byteOffset=158, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Attack 3", cluster="Detune Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(30, 3, byteOffset=162, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Attack 4", cluster="Detune Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(30, 4, byteOffset=166, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Hold", cluster="Detune Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(30, 5, byteOffset=170, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Release 1", cluster="Detune Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(30, 6, byteOffset=174, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Release 2", cluster="Detune Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(31, 0, byteOffset=148, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Unused time", cluster="Detune Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(31, 1, byteOffset=152, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Attack time 1", cluster="Detune Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(31, 2, byteOffset=156, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Attack time 2", cluster="Detune Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(31, 3, byteOffset=160, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Attack time 3", cluster="Detune Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(31, 4, byteOffset=164, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Attack time 4", cluster="Detune Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(31, 5, byteOffset=168, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Release time 1", cluster="Detune Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(31, 6, byteOffset=172, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Release time 2", cluster="Detune Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(32, 0, byteOffset=178, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Attack 1", cluster="Unknown Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(32, 1, byteOffset=182, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Attack 2", cluster="Unknown Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(32, 2, byteOffset=186, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Attack 3", cluster="Unknown Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(32, 3, byteOffset=190, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Attack 4", cluster="Unknown Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(32, 4, byteOffset=194, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Hold", cluster="Unknown Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(32, 5, byteOffset=198, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Release 1", cluster="Unknown Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(32, 6, byteOffset=202, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Release 2", cluster="Unknown Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(33, 0, byteOffset=176, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Unused time", cluster="Unknown Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(33, 1, byteOffset=180, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Attack time 1", cluster="Unknown Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(33, 2, byteOffset=184, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Attack time 2", cluster="Unknown Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(33, 3, byteOffset=188, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Attack time 3", cluster="Unknown Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(33, 4, byteOffset=192, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Attack time 4", cluster="Unknown Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(33, 5, byteOffset=196, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Release time 1", cluster="Unknown Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(33, 6, byteOffset=200, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Release time 2", cluster="Unknown Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(34, 0, byteOffset=262, byteCount=1, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Cutoff", cluster="Filter (Sound B)", midiBytes=2, helpStr=""),
Param(35, 0, byteOffset=263, byteCount=1, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Resonance", cluster="Filter (Sound B)", midiBytes=2, helpStr=""),
Param(36, 0, byteOffset=264, byteCount=1, bitOffset=0, bitCount=7, defaultValue=127, recommendedLimits=(0, 127), recommendedStep=16, name="Envelope Multiplier", cluster="Filter (Sound B)", midiBytes=1, helpStr=""),
Param(37, 0, byteOffset=206, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Attack 1", cluster="Filter Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(37, 1, byteOffset=210, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Attack 2", cluster="Filter Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(37, 2, byteOffset=214, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Attack 3", cluster="Filter Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(37, 3, byteOffset=218, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Attack 4", cluster="Filter Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(37, 4, byteOffset=222, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Hold", cluster="Filter Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(37, 5, byteOffset=226, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Release 1", cluster="Filter Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(37, 6, byteOffset=230, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Release 2", cluster="Filter Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(38, 0, byteOffset=204, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Unused time", cluster="Filter Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(38, 1, byteOffset=208, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Attack time 1", cluster="Filter Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(38, 2, byteOffset=212, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Attack time 2", cluster="Filter Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(38, 3, byteOffset=216, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Attack time 3", cluster="Filter Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(38, 4, byteOffset=220, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Attack time 4", cluster="Filter Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(38, 5, byteOffset=224, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Release time 1", cluster="Filter Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(38, 6, byteOffset=228, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Release time 2", cluster="Filter Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(39, 0, byteOffset=234, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Attack 1", cluster="Gain Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(39, 1, byteOffset=238, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Attack 2", cluster="Gain Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(39, 2, byteOffset=242, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Attack 3", cluster="Gain Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(39, 3, byteOffset=246, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Attack 4", cluster="Gain Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(39, 4, byteOffset=250, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Hold", cluster="Gain Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(39, 5, byteOffset=254, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Release 1", cluster="Gain Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(39, 6, byteOffset=258, byteCount=2, bitOffset=0, bitCount=8, defaultValue=128, recommendedLimits=(0, 255), recommendedStep=16, name="Release 2", cluster="Gain Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(40, 0, byteOffset=232, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Unused time", cluster="Gain Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(40, 1, byteOffset=236, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Attack time 1", cluster="Gain Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(40, 2, byteOffset=240, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Attack time 2", cluster="Gain Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(40, 3, byteOffset=244, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Attack time 3", cluster="Gain Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(40, 4, byteOffset=248, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Attack time 4", cluster="Gain Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(40, 5, byteOffset=252, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Release time 1", cluster="Gain Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(40, 6, byteOffset=256, byteCount=2, bitOffset=0, bitCount=10, defaultValue=512, recommendedLimits=(256, 1023), recommendedStep=64, name="Release time 2", cluster="Gain Envelope (Sound B)", midiBytes=2, helpStr=""),
Param(41, 0, byteOffset=421, byteCount=1, bitOffset=7, bitCount=1, defaultValue=0, recommendedLimits=(0, 1), recommendedStep=16, name="Sound B used for note-off", cluster="Octave", midiBytes=1, helpStr=""),
Param(42, 0, byteOffset=421, byteCount=1, bitOffset=5, bitCount=2, defaultValue=0, recommendedLimits=(0, 2), recommendedStep=1, name="Note off velocity", cluster="Octave", midiBytes=1, helpStr=""),
Param(43, 0, byteOffset=421, byteCount=1, bitOffset=1, bitCount=3, defaultValue=4, recommendedLimits=(1, 7), recommendedStep=1, name="Octave shift", cluster="Octave", midiBytes=1, helpStr=""),
Param(44, 0, byteOffset=421, byteCount=1, bitOffset=0, bitCount=1, defaultValue=0, recommendedLimits=(0, 1), recommendedStep=16, name="Enable DSP", cluster="DSP Chain", midiBytes=1, helpStr=""),
Param(45, 0, byteOffset=438, byteCount=1, bitOffset=0, bitCount=7, defaultValue=100, recommendedLimits=(0, 127), recommendedStep=16, name="Volume", cluster="Volume", midiBytes=1, helpStr=""),
Param(46, 0, byteOffset=439, byteCount=1, bitOffset=0, bitCount=7, defaultValue=127, recommendedLimits=(0, 127), recommendedStep=16, name="Volume 2", cluster="Gain", midiBytes=1, helpStr=""),
Param(47, 0, byteOffset=440, byteCount=1, bitOffset=0, bitCount=7, defaultValue=2, recommendedLimits=(0, 126), recommendedStep=1, name="Key-follow 1", cluster="Gain", midiBytes=1, helpStr="Relationship between note pitch and volume of the sound. A value of 2 specifies an equal volume for all pitches, and is the default"),
Param(48, 0, byteOffset=441, byteCount=1, bitOffset=0, bitCount=7, defaultValue=2, recommendedLimits=(0, 126), recommendedStep=1, name="Key-follow 2", cluster="Gain", midiBytes=1, helpStr="Relationship between note pitch and volume of the sound. A value of 2 specifies an equal volume for all pitches, and is the default"),
Param(49, 0, byteOffset=442, byteCount=1, bitOffset=0, bitCount=7, defaultValue=2, recommendedLimits=(0, 126), recommendedStep=1, name="", cluster="Gain", midiBytes=1, helpStr=""),
Param(50, 0, byteOffset=443, byteCount=1, bitOffset=0, bitCount=7, defaultValue=127, recommendedLimits=(0, 127), recommendedStep=16, name="", cluster="Gain", midiBytes=1, helpStr=""),
Param(51, 0, byteOffset=444, byteCount=1, bitOffset=0, bitCount=7, defaultValue=2, recommendedLimits=(0, 126), recommendedStep=1, name="", cluster="Gain", midiBytes=1, helpStr=""),
Param(52, 0, byteOffset=445, byteCount=1, bitOffset=0, bitCount=7, defaultValue=127, recommendedLimits=(0, 127), recommendedStep=16, name="", cluster="Gain", midiBytes=1, helpStr=""),
Param(53, 0, byteOffset=446, byteCount=1, bitOffset=0, bitCount=7, defaultValue=2, recommendedLimits=(0, 126), recommendedStep=1, name="", cluster="Gain", midiBytes=1, helpStr=""),
Param(54, 0, byteOffset=447, byteCount=1, bitOffset=0, bitCount=7, defaultValue=127, recommendedLimits=(0, 127), recommendedStep=16, name="", cluster="Gain", midiBytes=1, helpStr=""),
Param(55, 0, byteOffset=420, byteCount=1, bitOffset=7, bitCount=1, defaultValue=1, recommendedLimits=(0, 1), recommendedStep=16, name="", cluster="", midiBytes=1, helpStr=""),
Param(56, 0, byteOffset=272, byteCount=1, bitOffset=0, bitCount=7, defaultValue=0, recommendedLimits=(0, 127), recommendedStep=16, name="Chorus send", cluster="Effects", midiBytes=1, helpStr=""),
Param(57, 0, byteOffset=273, byteCount=1, bitOffset=0, bitCount=7, defaultValue=0, recommendedLimits=(0, 127), recommendedStep=16, name="Reverb send", cluster="Effects", midiBytes=1, helpStr=""),
Param(58, 0, byteOffset=274, byteCount=1, bitOffset=0, bitCount=7, defaultValue=0, recommendedLimits=(0, 127), recommendedStep=16, name="Delay send", cluster="Effects", midiBytes=1, helpStr=""),
Param(59, 0, byteOffset=292, byteCount=1, bitOffset=4, bitCount=4, defaultValue=15, recommendedLimits=(0, 6), recommendedStep=1, name="Vibrato type", cluster="Vibrato", midiBytes=1, helpStr=""),
Param(60, 0, byteOffset=275, byteCount=1, bitOffset=0, bitCount=7, defaultValue=64, recommendedLimits=(0, 127), recommendedStep=16, name="Vibrato rate", cluster="Vibrato", midiBytes=1, helpStr=""),
Param(61, 0, byteOffset=276, byteCount=1, bitOffset=0, bitCount=7, defaultValue=64, recommendedLimits=(0, 127), recommendedStep=16, name="Vibrato delay", cluster="Vibrato", midiBytes=1, helpStr=""),
Param(62, 0, byteOffset=277, byteCount=1, bitOffset=0, bitCount=7, defaultValue=64, recommendedLimits=(0, 127), recommendedStep=16, name="Vibrato rise time", cluster="Vibrato", midiBytes=1, helpStr=""),
Param(63, 0, byteOffset=278, byteCount=1, bitOffset=0, bitCount=7, defaultValue=64, recommendedLimits=(0, 127), recommendedStep=16, name="Vibrato depth", cluster="Vibrato", midiBytes=1, helpStr=""),
Param(64, 0, byteOffset=279, byteCount=1, bitOffset=0, bitCount=7, defaultValue=72, recommendedLimits=(0, 127), recommendedStep=16, name="Vibrato depth for CC 01 (modulation)", cluster="Vibrato", midiBytes=1, helpStr=""),
Param(65, 0, byteOffset=280, byteCount=1, bitOffset=0, bitCount=7, defaultValue=72, recommendedLimits=(0, 127), recommendedStep=16, name="Vibrato depth for after-touch", cluster="Vibrato", midiBytes=1, helpStr=""),
Param(66, 0, byteOffset=292, byteCount=1, bitOffset=0, bitCount=4, defaultValue=15, recommendedLimits=(0, 6), recommendedStep=1, name="Tremolo/filter LFO type", cluster="Tremolo", midiBytes=1, helpStr=""),
Param(67, 0, byteOffset=281, byteCount=1, bitOffset=0, bitCount=7, defaultValue=64, recommendedLimits=(0, 127), recommendedStep=16, name="Tremolo/filter LFO rate", cluster="Tremolo", midiBytes=1, helpStr=""),
Param(68, 0, byteOffset=282, byteCount=1, bitOffset=0, bitCount=7, defaultValue=64, recommendedLimits=(0, 127), recommendedStep=16, name="Filter LFO delay", cluster="Filter LFO", midiBytes=1, helpStr=""),
Param(69, 0, byteOffset=283, byteCount=1, bitOffset=0, bitCount=7, defaultValue=64, recommendedLimits=(0, 127), recommendedStep=16, name="Filter LFO rise time", cluster="Filter LFO", midiBytes=1, helpStr=""),
Param(70, 0, byteOffset=284, byteCount=1, bitOffset=0, bitCount=7, defaultValue=64, recommendedLimits=(0, 127), recommendedStep=16, name="Filter LFO depth", cluster="Filter LFO", midiBytes=1, helpStr=""),
Param(71, 0, byteOffset=285, byteCount=1, bitOffset=0, bitCount=7, defaultValue=64, recommendedLimits=(0, 127), recommendedStep=16, name="Filter LFO depth for CC 01 (modulation)", cluster="Filter LFO", midiBytes=1, helpStr=""),
Param(72, 0, byteOffset=286, byteCount=1, bitOffset=0, bitCount=7, defaultValue=64, recommendedLimits=(0, 127), recommendedStep=16, name="Filter LFO depth for after-touch", cluster="Filter LFO", midiBytes=1, helpStr=""),
Param(73, 0, byteOffset=287, byteCount=1, bitOffset=0, bitCount=7, defaultValue=64, recommendedLimits=(0, 127), recommendedStep=16, name="Tremolo delay", cluster="Tremolo", midiBytes=1, helpStr=""),
Param(74, 0, byteOffset=288, byteCount=1, bitOffset=0, bitCount=7, defaultValue=64, recommendedLimits=(0, 127), recommendedStep=16, name="Tremolo rise time", cluster="Tremolo", midiBytes=1, helpStr=""),
Param(75, 0, byteOffset=289, byteCount=1, bitOffset=0, bitCount=7, defaultValue=64, recommendedLimits=(0, 127), recommendedStep=16, name="Tremolo depth", cluster="Tremolo", midiBytes=1, helpStr=""),
Param(76, 0, byteOffset=290, byteCount=1, bitOffset=0, bitCount=7, defaultValue=64, recommendedLimits=(0, 127), recommendedStep=16, name="Tremolo depth for CC 01 (modulation)", cluster="Tremolo", midiBytes=1, helpStr=""),
Param(77, 0, byteOffset=291, byteCount=1, bitOffset=0, bitCount=7, defaultValue=64, recommendedLimits=(0, 127), recommendedStep=16, name="Tremolo depth for after-touch", cluster="Tremolo", midiBytes=1, helpStr=""),
Param(78, 0, byteOffset=448, byteCount=1, bitOffset=0, bitCount=7, defaultValue=0, recommendedLimits=(0, 127), recommendedStep=16, name="", cluster="", midiBytes=1, helpStr=""),
Param(79, 0, byteOffset=449, byteCount=1, bitOffset=0, bitCount=4, defaultValue=0, recommendedLimits=(0, 15), recommendedStep=16, name="", cluster="", midiBytes=1, helpStr=""),
Param(80, 0, byteOffset=420, byteCount=1, bitOffset=3, bitCount=3, defaultValue=0, recommendedLimits=(0, 7), recommendedStep=1, name="Stretch tuning", cluster="Tuning", midiBytes=1, helpStr="Specifies the stretch tuning, a.k.a. Railsback curve. 0 should be used for most instruments, while 1-7 may be suitable for piano or electric piano sounds"),
Param(81, 0, byteOffset=420, byteCount=1, bitOffset=2, bitCount=1, defaultValue=1, recommendedLimits=(0, 1), recommendedStep=16, name="", cluster="", midiBytes=1, helpStr=""),
Param(82, 0, byteOffset=420, byteCount=1, bitOffset=1, bitCount=1, defaultValue=0, recommendedLimits=(0, 1), recommendedStep=16, name="", cluster="", midiBytes=1, helpStr=""),
Param(83, 0, byteOffset=420, byteCount=1, bitOffset=0, bitCount=1, defaultValue=1, recommendedLimits=(0, 1), recommendedStep=16, name="", cluster="", midiBytes=1, helpStr=""),
Param(84, 0, byteOffset=294, byteCount=16, bitOffset=-1, bitCount=-1, defaultValue=0, recommendedLimits=(-1, -1), recommendedStep=16, name="DSP chain name", cluster="DSP Chain", midiBytes=16, helpStr="Name of the DSP chain. This is not used for anything and can be set to any string"),
Param(85, 0, byteOffset=310, byteCount=2, bitOffset=0, bitCount=14, defaultValue=16383, recommendedLimits=(1, 31), recommendedStep=1, name="DSP effect 1", cluster="DSP", midiBytes=2, helpStr=""),
Param(85, 1, byteOffset=328, byteCount=2, bitOffset=0, bitCount=14, defaultValue=16383, recommendedLimits=(1, 31), recommendedStep=1, name="DSP effect 2", cluster="DSP", midiBytes=2, helpStr=""),
Param(85, 2, byteOffset=346, byteCount=2, bitOffset=0, bitCount=14, defaultValue=16383, recommendedLimits=(1, 31), recommendedStep=1, name="DSP effect 3", cluster="DSP", midiBytes=2, helpStr=""),
Param(85, 3, byteOffset=364, byteCount=2, bitOffset=0, bitCount=14, defaultValue=16383, recommendedLimits=(1, 31), recommendedStep=1, name="DSP effect 4", cluster="DSP", midiBytes=2, helpStr=""),
Param(86, 0, byteOffset=310, byteCount=2, bitOffset=14, bitCount=1, defaultValue=0, recommendedLimits=(0, 1), recommendedStep=16, name="DSP bypass 1", cluster="DSP", midiBytes=1, helpStr=""),
Param(86, 1, byteOffset=328, byteCount=2, bitOffset=14, bitCount=1, defaultValue=0, recommendedLimits=(0, 1), recommendedStep=16, name="DSP bypass 2", cluster="DSP", midiBytes=1, helpStr=""),
Param(86, 2, byteOffset=346, byteCount=2, bitOffset=14, bitCount=1, defaultValue=0, recommendedLimits=(0, 1), recommendedStep=16, name="DSP bypass 3", cluster="DSP", midiBytes=1, helpStr=""),
Param(86, 3, byteOffset=364, byteCount=2, bitOffset=14, bitCount=1, defaultValue=0, recommendedLimits=(0, 1), recommendedStep=16, name="DSP bypass 4", cluster="DSP", midiBytes=1, helpStr=""),
Param(87, 0, byteOffset=312, byteCount=16, bitOffset=-1, bitCount=-1, defaultValue=0, recommendedLimits=(-1, -1), recommendedStep=16, name="DSP parameters 1", cluster="DSP", midiBytes=16, helpStr=""),
Param(87, 1, byteOffset=330, byteCount=16, bitOffset=-1, bitCount=-1, defaultValue=0, recommendedLimits=(-1, -1), recommendedStep=16, name="DSP parameters 2", cluster="DSP", midiBytes=16, helpStr=""),
Param(87, 2, byteOffset=348, byteCount=16, bitOffset=-1, bitCount=-1, defaultValue=0, recommendedLimits=(-1, -1), recommendedStep=16, name="DSP parameters 3", cluster="DSP", midiBytes=16, helpStr=""),
Param(87, 3, byteOffset=366, byteCount=16, bitOffset=-1, bitCount=-1, defaultValue=0, recommendedLimits=(-1, -1), recommendedStep=16, name="DSP parameters 4", cluster="DSP", midiBytes=16, helpStr=""),
Param(88, 0, byteOffset=383, byteCount=1, bitOffset=7, bitCount=1, defaultValue=0, recommendedLimits=(0, 1), recommendedStep=16, name="", cluster="", midiBytes=1, helpStr=""),
Param(89, 0, byteOffset=383, byteCount=1, bitOffset=3, bitCount=2, defaultValue=0, recommendedLimits=(0, 3), recommendedStep=16, name="", cluster="", midiBytes=1, helpStr=""),
Param(90, 0, byteOffset=383, byteCount=1, bitOffset=2, bitCount=1, defaultValue=0, recommendedLimits=(0, 1), recommendedStep=16, name="", cluster="", midiBytes=1, helpStr=""),
Param(91, 0, byteOffset=383, byteCount=1, bitOffset=1, bitCount=1, defaultValue=0, recommendedLimits=(0, 1), recommendedStep=16, name="", cluster="", midiBytes=1, helpStr=""),
Param(92, 0, byteOffset=382, byteCount=1, bitOffset=5, bitCount=3, defaultValue=0, recommendedLimits=(0, 7), recommendedStep=16, name="", cluster="", midiBytes=1, helpStr=""),
Param(93, 0, byteOffset=384, byteCount=1, bitOffset=0, bitCount=7, defaultValue=100, recommendedLimits=(0, 127), recommendedStep=16, name="", cluster="", midiBytes=1, helpStr=""),
Param(94, 0, byteOffset=382, byteCount=1, bitOffset=3, bitCount=2, defaultValue=0, recommendedLimits=(0, 3), recommendedStep=16, name="", cluster="", midiBytes=1, helpStr=""),
Param(95, 0, byteOffset=382, byteCount=1, bitOffset=2, bitCount=1, defaultValue=0, recommendedLimits=(0, 1), recommendedStep=16, name="", cluster="", midiBytes=1, helpStr=""),
Param(96, 0, byteOffset=382, byteCount=1, bitOffset=1, bitCount=1, defaultValue=0, recommendedLimits=(0, 1), recommendedStep=16, name="", cluster="", midiBytes=1, helpStr=""),
Param(97, 0, byteOffset=385, byteCount=1, bitOffset=0, bitCount=7, defaultValue=0, recommendedLimits=(0, 127), recommendedStep=16, name="", cluster="", midiBytes=1, helpStr=""),
Param(98, 0, byteOffset=386, byteCount=1, bitOffset=0, bitCount=7, defaultValue=0, recommendedLimits=(0, 127), recommendedStep=16, name="", cluster="", midiBytes=1, helpStr=""),
Param(99, 0, byteOffset=382, byteCount=1, bitOffset=0, bitCount=1, defaultValue=0, recommendedLimits=(0, 1), recommendedStep=16, name="", cluster="", midiBytes=1, helpStr=""),
Param(100, 0, byteOffset=388, byteCount=4, bitOffset=0, bitCount=32, defaultValue=0, recommendedLimits=(0, 899), recommendedStep=16, name="DSP chain number", cluster="DSP Chain", midiBytes=5, helpStr="Number of the DSP chain. This is not used for anything and can be set to any value"),
Param(101, 0, byteOffset=392, byteCount=4, bitOffset=0, bitCount=32, defaultValue=0, recommendedLimits=(0, 1), recommendedStep=16, name="", cluster="", midiBytes=5, helpStr=""),
Param(102, 0, byteOffset=396, byteCount=1, bitOffset=0, bitCount=8, defaultValue=0, recommendedLimits=(0, 255), recommendedStep=16, name="", cluster="", midiBytes=2, helpStr=""),
Param(103, 0, byteOffset=397, byteCount=1, bitOffset=0, bitCount=7, defaultValue=1, recommendedLimits=(0, 127), recommendedStep=16, name="", cluster="", midiBytes=1, helpStr=""),
Param(104, 0, byteOffset=398, byteCount=1, bitOffset=0, bitCount=7, defaultValue=0, recommendedLimits=(0, 127), recommendedStep=16, name="", cluster="", midiBytes=1, helpStr=""),
Param(105, 0, byteOffset=399, byteCount=1, bitOffset=0, bitCount=7, defaultValue=0, recommendedLimits=(0, 127), recommendedStep=16, name="", cluster="", midiBytes=1, helpStr=""),
Param(106, 0, byteOffset=400, byteCount=4, bitOffset=0, bitCount=32, defaultValue=0, recommendedLimits=(0, 1), recommendedStep=16, name="", cluster="", midiBytes=5, helpStr=""),
Param(107, 0, byteOffset=404, byteCount=1, bitOffset=0, bitCount=7, defaultValue=0, recommendedLimits=(0, 127), recommendedStep=16, name="Portamento time", cluster="Misc.", midiBytes=1, helpStr=""),
Param(108, 0, byteOffset=405, byteCount=1, bitOffset=0, bitCount=7, defaultValue=4, recommendedLimits=(1, 7), recommendedStep=1, name="Octave shift (keyboard)", cluster="Octave", midiBytes=1, helpStr=""),
Param(109, 0, byteOffset=406, byteCount=1, bitOffset=0, bitCount=1, defaultValue=0, recommendedLimits=(0, 1), recommendedStep=16, name="", cluster="", midiBytes=2, helpStr=""),
Param(110, 0, byteOffset=407, byteCount=1, bitOffset=0, bitCount=7, defaultValue=0, recommendedLimits=(0, 127), recommendedStep=16, name="", cluster="", midiBytes=1, helpStr=""),
Param(111, 0, byteOffset=408, byteCount=1, bitOffset=0, bitCount=7, defaultValue=0, recommendedLimits=(0, 127), recommendedStep=16, name="", cluster="", midiBytes=1, helpStr=""),
Param(112, 0, byteOffset=409, byteCount=1, bitOffset=0, bitCount=7, defaultValue=0, recommendedLimits=(0, 127), recommendedStep=16, name="", cluster="", midiBytes=1, helpStr=""),
Param(113, 0, byteOffset=410, byteCount=1, bitOffset=0, bitCount=4, defaultValue=1, recommendedLimits=(0, 15), recommendedStep=16, name="", cluster="", midiBytes=1, helpStr=""),
Param(114, 0, byteOffset=410, byteCount=1, bitOffset=4, bitCount=1, defaultValue=0, recommendedLimits=(0, 1), recommendedStep=16, name="Monophonic", cluster="Misc.", midiBytes=1, helpStr=""),
Param(115, 0, byteOffset=410, byteCount=1, bitOffset=5, bitCount=1, defaultValue=0, recommendedLimits=(0, 1), recommendedStep=16, name="Sound B used for double-stop", cluster="Misc.", midiBytes=1, helpStr=""),
Param(116, 0, byteOffset=410, byteCount=1, bitOffset=6, bitCount=2, defaultValue=0, recommendedLimits=(0, 2), recommendedStep=1, name="Portamento", cluster="Misc.", midiBytes=1, helpStr=""),
Param(117, 0, byteOffset=412, byteCount=1, bitOffset=0, bitCount=4, defaultValue=0, recommendedLimits=(0, 8), recommendedStep=1, name="Type", cluster="Filter 1", midiBytes=1, helpStr=""),
Param(117, 1, byteOffset=416, byteCount=1, bitOffset=0, bitCount=4, defaultValue=0, recommendedLimits=(0, 8), recommendedStep=1, name="Type", cluster="Filter 2", midiBytes=1, helpStr=""),
Param(118, 0, byteOffset=412, byteCount=2, bitOffset=4, bitCount=6, defaultValue=12, recommendedLimits=(0, 22), recommendedStep=1, name="Parameter 1", cluster="Filter 1", midiBytes=1, helpStr="Meaning depends on the filter type. Often it controls frequency"),
Param(118, 1, byteOffset=416, byteCount=2, bitOffset=4, bitCount=6, defaultValue=12, recommendedLimits=(0, 22), recommendedStep=1, name="Parameter 1", cluster="Filter 2", midiBytes=1, helpStr="Meaning depends on the filter type. Often it controls frequency"),
Param(119, 0, byteOffset=413, byteCount=1, bitOffset=2, bitCount=6, defaultValue=12, recommendedLimits=(0, 24), recommendedStep=1, name="Parameter 2", cluster="Filter 1", midiBytes=1, helpStr="Meaning depends on the filter type. Often it controls gain"),
Param(119, 1, byteOffset=417, byteCount=1, bitOffset=2, bitCount=6, defaultValue=12, recommendedLimits=(0, 24), recommendedStep=1, name="Parameter 2", cluster="Filter 2", midiBytes=1, helpStr="Meaning depends on the filter type. Often it controls gain"),
Param(120, 0, byteOffset=414, byteCount=1, bitOffset=4, bitCount=4, defaultValue=0, recommendedLimits=(0, 15), recommendedStep=1, name="Parameter 3", cluster="Filter 1", midiBytes=1, helpStr="Meaning depends on the filter type. Often it controls resonance Q-value"),
Param(120, 1, byteOffset=418, byteCount=1, bitOffset=4, bitCount=4, defaultValue=0, recommendedLimits=(0, 15), recommendedStep=1, name="Parameter 3", cluster="Filter 2", midiBytes=1, helpStr="Meaning depends on the filter type. Often it controls resonance Q-value"),
Param(121, 0, byteOffset=414, byteCount=1, bitOffset=3, bitCount=1, defaultValue=1, recommendedLimits=(0, 1), recommendedStep=16, name="", cluster="Filter 1", midiBytes=1, helpStr=""),
Param(121, 1, byteOffset=418, byteCount=1, bitOffset=3, bitCount=1, defaultValue=1, recommendedLimits=(0, 1), recommendedStep=16, name="", cluster="Filter 2", midiBytes=1, helpStr=""),
Param(122, 0, byteOffset=414, byteCount=1, bitOffset=0, bitCount=3, defaultValue=0, recommendedLimits=(0, 7), recommendedStep=16, name="", cluster="Filter 1", midiBytes=1, helpStr=""),
Param(122, 1, byteOffset=418, byteCount=1, bitOffset=0, bitCount=3, defaultValue=0, recommendedLimits=(0, 7), recommendedStep=16, name="", cluster="Filter 2", midiBytes=1, helpStr=""),
Param(200, 0, byteOffset=450, byteCount=1, bitOffset=0, bitCount=7, defaultValue=127, recommendedLimits=(0, 127), recommendedStep=16, name="Volume 3", cluster="Keyboard only", midiBytes=1, helpStr="Volume of the note.\nOnly notes played on the keyboard are affected by this (not MIDI IN or rhythms)"),
Param(201, 0, byteOffset=451, byteCount=1, bitOffset=0, bitCount=7, defaultValue=2, recommendedLimits=(0, 126), recommendedStep=1, name="Key-follow 3", cluster="Keyboard only", midiBytes=1, helpStr="Relationship between note pitch and volume of the sound. A value of 2 specifies an equal volume for all pitches, and is the default.\nOnly notes played on the keyboard are affected by this (not MIDI IN or rhythms)"),
Param(202, 0, byteOffset=452, byteCount=1, bitOffset=0, bitCount=7, defaultValue=2, recommendedLimits=(0, 126), recommendedStep=1, name="Key-follow 4", cluster="Keyboard only", midiBytes=1, helpStr="Relationship between note pitch and volume of the sound. A value of 2 specifies an equal volume for all pitches, and is the default.\nOnly notes played on the keyboard are affected by this (not MIDI IN or rhythms)"),
]
| 196.419118 | 419 | 0.728821 | 7,134 | 53,426 | 5.458088 | 0.062097 | 0.069033 | 0.096153 | 0.07571 | 0.895295 | 0.892984 | 0.847963 | 0.822641 | 0.812779 | 0.812779 | 0 | 0.099745 | 0.103021 | 53,426 | 271 | 420 | 197.143911 | 0.712783 | 0.001423 | 0 | 0 | 1 | 0.026616 | 0.14933 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.015209 | 0.003802 | 0 | 0.057034 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
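The Param rows above locate each value by `byteOffset`, `byteCount`, `bitOffset`, and `bitCount` — several parameters (e.g. 113–116) share byte 410. A minimal sketch of how such a descriptor could be applied to a raw patch buffer; the helper name `read_param` and the little-endian assumption are ours, not from the original project:

```python
# Sketch (not from the original project): extract a packed field from a raw
# buffer using the byteOffset/byteCount/bitOffset/bitCount values each Param
# row carries.  Little-endian byte order is assumed.

def read_param(data: bytes, byte_offset: int, byte_count: int,
               bit_offset: int, bit_count: int) -> int:
    """Return the integer stored in bit_count bits starting at bit_offset
    within the byte_count bytes at byte_offset."""
    raw = int.from_bytes(data[byte_offset:byte_offset + byte_count], "little")
    return (raw >> bit_offset) & ((1 << bit_count) - 1)


# Example: byte 410 packs four fields (see Params 113-116 above).
buf = bytearray(512)
buf[410] = 0b01_1_0_0001   # Portamento=1, double-stop=1, Mono=0, low nibble=1

assert read_param(buf, 410, 1, 0, 4) == 1   # Param 113: low nibble
assert read_param(buf, 410, 1, 4, 1) == 0   # Param 114: Monophonic
assert read_param(buf, 410, 1, 5, 1) == 1   # Param 115: double-stop
assert read_param(buf, 410, 1, 6, 2) == 1   # Param 116: Portamento
```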
f3b2b819b5d772113c4d0145785968fd542dd03f | 127 | py | Python | layers/__init__.py | yonghee12/deepnp | 7eec0e391fff21b700831b2e0e78244047b958f8 | [
"MIT"
] | 2 | 2020-07-30T10:35:17.000Z | 2020-07-30T10:35:27.000Z | layers/__init__.py | yonghee12/deepnp | 7eec0e391fff21b700831b2e0e78244047b958f8 | [
"MIT"
] | null | null | null | layers/__init__.py | yonghee12/deepnp | 7eec0e391fff21b700831b2e0e78244047b958f8 | [
"MIT"
] | null | null | null | from .attention import *
from .basic import *
from .recurrent import *
from .seq2seq import *
from .seq2seq_attention import *
| 21.166667 | 32 | 0.76378 | 16 | 127 | 6 | 0.375 | 0.416667 | 0.354167 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018692 | 0.15748 | 127 | 5 | 33 | 25.4 | 0.878505 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
3481351bcce630478336d5bb6251d75010f6e0df | 349 | py | Python | python3/jute/__init__.py | jongiddy/jute | 1b7fc921d6a137518e8e0e37da61df2693bca259 | [
"MIT"
] | null | null | null | python3/jute/__init__.py | jongiddy/jute | 1b7fc921d6a137518e8e0e37da61df2693bca259 | [
"MIT"
] | null | null | null | python3/jute/__init__.py | jongiddy/jute | 1b7fc921d6a137518e8e0e37da61df2693bca259 | [
"MIT"
] | null | null | null | from ._jute import (
Attribute, Interface, Opaque, DynamicInterface, implements,
underlying_object, InterfaceConformanceError, InvalidAttributeName
)
__all__ = [
'Attribute',
'Interface',
'Opaque',
'DynamicInterface',
'implements',
'underlying_object',
'InterfaceConformanceError',
'InvalidAttributeName',
]
| 21.8125 | 70 | 0.704871 | 22 | 349 | 10.863636 | 0.590909 | 0.150628 | 0.200837 | 0.334728 | 0.92887 | 0.92887 | 0.92887 | 0.92887 | 0.92887 | 0 | 0 | 0 | 0.189112 | 349 | 15 | 71 | 23.266667 | 0.844523 | 0 | 0 | 0 | 0 | 0 | 0.320917 | 0.071633 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.071429 | 0 | 0.071429 | 0 | 1 | 0 | 1 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
caf06f22d43e397318664e8607edd75d1c0886cc | 3,955 | py | Python | backend/tensor_site/tables.py | b3none/Tensor | 6c70c7d3ade6eabe4162d0b9eef0923c79ea1eba | [
"MIT"
] | null | null | null | backend/tensor_site/tables.py | b3none/Tensor | 6c70c7d3ade6eabe4162d0b9eef0923c79ea1eba | [
"MIT"
] | null | null | null | backend/tensor_site/tables.py | b3none/Tensor | 6c70c7d3ade6eabe4162d0b9eef0923c79ea1eba | [
"MIT"
] | 3 | 2021-09-06T18:01:52.000Z | 2021-10-18T02:49:53.000Z | import django_tables2 as tables
from django_tables2 import TemplateColumn, RelatedLinkColumn
from django.db.models import F, Case, When
class Rank_awpTable(tables.Table):
name = tables.Column(accessor='name',
verbose_name='Name')
steam = tables.Column(accessor='steam',
verbose_name='Steam ID')
score = tables.Column(accessor='score',
verbose_name='Score')
KD = tables.Column(accessor='KDcalculator',
verbose_name='K/D')
ADR = tables.Column(accessor='ADRcalculator',
verbose_name='ADR')
link = tables.Column(accessor='steamid_to_profile',
verbose_name='Profile', orderable=False)
time = tables.Column(verbose_name='Playtime', order_by="connected")
class Meta:
#template_name = "django_tables2/bootstrap4.html"
fields = ("name", 'steam', 'score', 'KD', 'ADR', 'time', 'link')
        order_by = ('-score',)
attrs = {"class": "table table-bordered table-hover dataTable dtr-inline",
"id": "example2",
"th": {
"_ordering": {
"orderable": "sorting", # Instead of `orderable`
"ascending": "sorting_asc", # Instead of `asc`
"descending": "sorting_desc" # Instead of `desc`
}
},
"td": {
"style": "text-align:center !important"
}
}
per_page = 10
def order_KD(self, queryset, is_descending):
queryset = queryset.annotate(
amount = Case(When(deaths=0, then=F("kills")),
default = F("kills") / F("deaths"))
).order_by(("-" if is_descending else "") + "amount")
return (queryset, True)
def order_ADR(self, queryset, is_descending):
queryset = queryset.annotate(
amount = F("damage") / (F("rounds_ct")+F("rounds_tr"))
).order_by(("-" if is_descending else "") + "amount")
return (queryset, True)
class Rank_retakeTable(tables.Table):
name = tables.Column(accessor='name',
verbose_name='Name')
steam = tables.Column(accessor='steam',
verbose_name='Steam ID')
score = tables.Column(accessor='score',
verbose_name='Score')
KD = tables.Column(accessor='KDcalculator',
verbose_name='K/D')
ADR = tables.Column(accessor='ADRcalculator',
verbose_name='ADR')
link = tables.Column(accessor='steamid_to_profile',
verbose_name='Profile', orderable=False)
time = tables.Column(verbose_name='Playtime', order_by="connected")
class Meta:
#template_name = "django_tables2/bootstrap4.html"
fields = ("name", 'steam', 'score', 'KD', 'ADR', 'time', 'link')
        order_by = ('-score',)
attrs = {"class": "table table-bordered table-hover dataTable dtr-inline",
"id": "example2",
"th": {
"_ordering": {
"orderable": "sorting", # Instead of `orderable`
"ascending": "sorting_asc", # Instead of `asc`
"descending": "sorting_desc" # Instead of `desc`
}
},
"td": {
"style": "text-align:center !important"
}
}
per_page = 10
def order_KD(self, queryset, is_descending):
queryset = queryset.annotate(
amount = Case(When(deaths=0, then=F("kills")),
default = F("kills") / F("deaths"))
).order_by(("-" if is_descending else "") + "amount")
return (queryset, True)
def order_ADR(self, queryset, is_descending):
queryset = queryset.annotate(
amount = F("damage") / (F("rounds_ct")+F("rounds_tr"))
).order_by(("-" if is_descending else "") + "amount")
return (queryset, True) | 40.357143 | 82 | 0.54488 | 397 | 3,955 | 5.282116 | 0.229219 | 0.080114 | 0.114449 | 0.04578 | 0.927992 | 0.927992 | 0.927992 | 0.927992 | 0.927992 | 0.927992 | 0 | 0.005151 | 0.312769 | 3,955 | 98 | 83 | 40.357143 | 0.766372 | 0.053603 | 0 | 0.850575 | 0 | 0 | 0.194325 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045977 | false | 0 | 0.057471 | 0 | 0.356322 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
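The `order_KD` methods above push the K/D ratio into SQL with `Case`/`When` so zero-death players sort by raw kills instead of triggering a division by zero. A plain-Python restatement of that ordering rule (names and sample data are ours; Django's `/` on integer fields behaves like integer division on most backends):

```python
# Plain-Python mirror of order_KD's Case(When(deaths=0, then=F("kills")),
# default=F("kills") / F("deaths")) annotation.

def kd_sort_key(player: dict) -> int:
    if player["deaths"] == 0:                     # When(deaths=0, then=kills)
        return player["kills"]
    return player["kills"] // player["deaths"]    # default: kills / deaths


players = [
    {"name": "a", "kills": 10, "deaths": 0},
    {"name": "b", "kills": 20, "deaths": 10},
    {"name": "c", "kills": 9,  "deaths": 3},
]
ranked = sorted(players, key=kd_sort_key, reverse=True)  # is_descending=True
assert [p["name"] for p in ranked] == ["a", "c", "b"]
```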
1b300169c2f1fe604e79f7198ed6ee10d9c1f9e6 | 17,354 | py | Python | etl/parsers/etw/Microsoft_Windows_Kernel_Acpi.py | IMULMUL/etl-parser | 76b7c046866ce0469cd129ee3f7bb3799b34e271 | [
"Apache-2.0"
] | 104 | 2020-03-04T14:31:31.000Z | 2022-03-28T02:59:36.000Z | etl/parsers/etw/Microsoft_Windows_Kernel_Acpi.py | IMULMUL/etl-parser | 76b7c046866ce0469cd129ee3f7bb3799b34e271 | [
"Apache-2.0"
] | 7 | 2020-04-20T09:18:39.000Z | 2022-03-19T17:06:19.000Z | etl/parsers/etw/Microsoft_Windows_Kernel_Acpi.py | IMULMUL/etl-parser | 76b7c046866ce0469cd129ee3f7bb3799b34e271 | [
"Apache-2.0"
] | 16 | 2020-03-05T18:55:59.000Z | 2022-03-01T10:19:28.000Z | # -*- coding: utf-8 -*-
"""
Microsoft-Windows-Kernel-Acpi
GUID : c514638f-7723-485b-bcfc-96565d735d4a
"""
from construct import Int8sl, Int8ul, Int16ul, Int16sl, Int32sl, Int32ul, Int64sl, Int64ul, Bytes, Double, Float32l, Struct
from etl.utils import WString, CString, SystemTime, Guid
from etl.dtyp import Sid
from etl.parsers.etw.core import Etw, declare, guid
@declare(guid=guid("c514638f-7723-485b-bcfc-96565d735d4a"), event_id=1, version=0)
class Microsoft_Windows_Kernel_Acpi_1_0(Etw):
pattern = Struct(
"ResourceFlag" / Int8ul,
"GeneralFlag" / Int8ul,
"TypeSpecificFlag" / Int8ul,
"Granularity" / Int64ul,
"AddressMin" / Int64ul,
"AddressMax" / Int64ul,
"AddressTranslation" / Int64ul,
"AddressLength" / Int64ul
)
@declare(guid=guid("c514638f-7723-485b-bcfc-96565d735d4a"), event_id=2, version=0)
class Microsoft_Windows_Kernel_Acpi_2_0(Etw):
pattern = Struct(
"GpeRegister" / Int32ul,
"UnexpectedEventMap" / Int8ul
)
@declare(guid=guid("c514638f-7723-485b-bcfc-96565d735d4a"), event_id=3, version=0)
class Microsoft_Windows_Kernel_Acpi_3_0(Etw):
pattern = Struct(
"ThermalZoneDeviceInstanceLength" / Int16ul,
"ThermalZoneDeviceInstance" / Bytes(lambda this: this.ThermalZoneDeviceInstanceLength),
"_TMP" / Int32ul,
"_PSV" / Int32ul,
"_AC0" / Int32ul,
"_AC1" / Int32ul,
"_AC2" / Int32ul,
"_AC3" / Int32ul,
"_AC4" / Int32ul,
"_AC5" / Int32ul,
"_AC6" / Int32ul,
"_AC7" / Int32ul,
"_AC8" / Int32ul,
"_AC9" / Int32ul,
"_HOT" / Int32ul,
"_CRT" / Int32ul
)
@declare(guid=guid("c514638f-7723-485b-bcfc-96565d735d4a"), event_id=4, version=0)
class Microsoft_Windows_Kernel_Acpi_4_0(Etw):
pattern = Struct(
"ThermalZoneDeviceInstanceLength" / Int16ul,
"ThermalZoneDeviceInstance" / Bytes(lambda this: this.ThermalZoneDeviceInstanceLength),
"_TMP" / Int32ul,
"_PSV" / Int32ul,
"_AC0" / Int32ul,
"_AC1" / Int32ul,
"_AC2" / Int32ul,
"_AC3" / Int32ul,
"_AC4" / Int32ul,
"_AC5" / Int32ul,
"_AC6" / Int32ul,
"_AC7" / Int32ul,
"_AC8" / Int32ul,
"_AC9" / Int32ul,
"_HOT" / Int32ul,
"_CRT" / Int32ul
)
@declare(guid=guid("c514638f-7723-485b-bcfc-96565d735d4a"), event_id=5, version=0)
class Microsoft_Windows_Kernel_Acpi_5_0(Etw):
pattern = Struct(
"ThermalZoneDeviceInstanceLength" / Int16ul,
"ThermalZoneDeviceInstance" / Bytes(lambda this: this.ThermalZoneDeviceInstanceLength),
"ActiveCoolingLevel" / Int32ul,
"ActiveCoolingDeviceIndex" / Int32ul,
"FanDeviceInstanceLength" / Int16ul,
"FanDeviceInstance" / Bytes(lambda this: this.FanDeviceInstanceLength),
"PowerStateLength" / Int16ul,
"PowerState" / Bytes(lambda this: this.PowerStateLength)
)
@declare(guid=guid("c514638f-7723-485b-bcfc-96565d735d4a"), event_id=6, version=0)
class Microsoft_Windows_Kernel_Acpi_6_0(Etw):
pattern = Struct(
"ThermalZoneDeviceInstanceLength" / Int16ul,
"ThermalZoneDeviceInstance" / Bytes(lambda this: this.ThermalZoneDeviceInstanceLength),
"ActiveCoolingLevel" / Int32ul,
"ActiveCoolingDeviceIndex" / Int32ul,
"FanDeviceInstanceLength" / Int16ul,
"FanDeviceInstance" / Bytes(lambda this: this.FanDeviceInstanceLength),
"PowerState" / Int16ul
)
@declare(guid=guid("c514638f-7723-485b-bcfc-96565d735d4a"), event_id=7, version=0)
class Microsoft_Windows_Kernel_Acpi_7_0(Etw):
pattern = Struct(
"AmlMethodNameLength" / Int16ul,
"AmlMethodName" / Bytes(lambda this: this.AmlMethodNameLength),
"AmlMethodState" / Int16ul,
"AmlElapsedTime" / Int64ul
)
@declare(guid=guid("c514638f-7723-485b-bcfc-96565d735d4a"), event_id=8, version=0)
class Microsoft_Windows_Kernel_Acpi_8_0(Etw):
pattern = Struct(
"DeviceInstanceLength" / Int16ul,
"DeviceInstance" / Bytes(lambda this: this.DeviceInstanceLength),
"PowerState" / Int16ul
)
@declare(guid=guid("c514638f-7723-485b-bcfc-96565d735d4a"), event_id=9, version=0)
class Microsoft_Windows_Kernel_Acpi_9_0(Etw):
pattern = Struct(
"DeviceInstanceLength" / Int16ul,
"DeviceInstance" / Bytes(lambda this: this.DeviceInstanceLength),
"Throttle" / Int8ul
)
@declare(guid=guid("c514638f-7723-485b-bcfc-96565d735d4a"), event_id=10, version=0)
class Microsoft_Windows_Kernel_Acpi_10_0(Etw):
pattern = Struct(
"DeviceInstanceLength" / Int16ul,
"DeviceInstance" / Bytes(lambda this: this.DeviceInstanceLength),
"PowerState" / Int16ul,
"Throttle" / Int8ul
)
@declare(guid=guid("c514638f-7723-485b-bcfc-96565d735d4a"), event_id=11, version=0)
class Microsoft_Windows_Kernel_Acpi_11_0(Etw):
pattern = Struct(
"ThermalZoneDeviceInstanceLength" / Int16ul,
"ThermalZoneDeviceInstance" / Bytes(lambda this: this.ThermalZoneDeviceInstanceLength),
"Temperature" / Int32ul
)
@declare(guid=guid("c514638f-7723-485b-bcfc-96565d735d4a"), event_id=12, version=0)
class Microsoft_Windows_Kernel_Acpi_12_0(Etw):
pattern = Struct(
"ThermalZoneBiosNameLength" / Int16ul,
"ThermalZoneBiosName" / Bytes(lambda this: this.ThermalZoneBiosNameLength),
"_TMP" / Int32ul,
"_PSV" / Int32ul,
"_TC1" / Int32ul,
"_TC2" / Int32ul,
"_TSP" / Int32ul,
"_AC0" / Int32ul,
"_AC1" / Int32ul,
"_AC2" / Int32ul,
"_AC3" / Int32ul,
"_AC4" / Int32ul,
"_AC5" / Int32ul,
"_AC6" / Int32ul,
"_AC7" / Int32ul,
"_AC8" / Int32ul,
"_AC9" / Int32ul,
"_HOT" / Int32ul,
"_CRT" / Int32ul,
"_NTT" / Int32ul,
"_PSLCount" / Int32ul,
"_PSLEntries" / CString,
"_TZDCount" / Int32ul,
"_TZDEntries" / CString,
"_AL0Count" / Int32ul,
"_AL0Entries" / CString,
"_AL1Count" / Int32ul,
"_AL1Entries" / CString,
"_AL2Count" / Int32ul,
"_AL2Entries" / CString,
"_AL3Count" / Int32ul,
"_AL3Entries" / CString,
"_AL4Count" / Int32ul,
"_AL4Entries" / CString,
"_AL5Count" / Int32ul,
"_AL5Entries" / CString,
"_AL6Count" / Int32ul,
"_AL6Entries" / CString,
"_AL7Count" / Int32ul,
"_AL7Entries" / CString,
"_AL8Count" / Int32ul,
"_AL8Entries" / CString,
"_AL9Count" / Int32ul,
"_AL9Entries" / CString
)
@declare(guid=guid("c514638f-7723-485b-bcfc-96565d735d4a"), event_id=12, version=1)
class Microsoft_Windows_Kernel_Acpi_12_1(Etw):
pattern = Struct(
"ThermalZoneBiosNameLength" / Int16ul,
"ThermalZoneBiosName" / Bytes(lambda this: this.ThermalZoneBiosNameLength),
"_TMP" / Int32ul,
"_PSV" / Int32ul,
"_TC1" / Int32ul,
"_TC2" / Int32ul,
"_TSP" / Int32ul,
"_AC0" / Int32ul,
"_AC1" / Int32ul,
"_AC2" / Int32ul,
"_AC3" / Int32ul,
"_AC4" / Int32ul,
"_AC5" / Int32ul,
"_AC6" / Int32ul,
"_AC7" / Int32ul,
"_AC8" / Int32ul,
"_AC9" / Int32ul,
"_HOT" / Int32ul,
"_CRT" / Int32ul,
"_NTT" / Int32ul,
"_PSLCount" / Int32ul,
"_PSLEntries" / CString,
"_TZDCount" / Int32ul,
"_TZDEntries" / CString,
"_AL0Count" / Int32ul,
"_AL0Entries" / CString,
"_AL1Count" / Int32ul,
"_AL1Entries" / CString,
"_AL2Count" / Int32ul,
"_AL2Entries" / CString,
"_AL3Count" / Int32ul,
"_AL3Entries" / CString,
"_AL4Count" / Int32ul,
"_AL4Entries" / CString,
"_AL5Count" / Int32ul,
"_AL5Entries" / CString,
"_AL6Count" / Int32ul,
"_AL6Entries" / CString,
"_AL7Count" / Int32ul,
"_AL7Entries" / CString,
"_AL8Count" / Int32ul,
"_AL8Entries" / CString,
"_AL9Count" / Int32ul,
"_AL9Entries" / CString,
"MinimumThrottle" / Int32ul
)
@declare(guid=guid("c514638f-7723-485b-bcfc-96565d735d4a"), event_id=12, version=2)
class Microsoft_Windows_Kernel_Acpi_12_2(Etw):
pattern = Struct(
"ThermalZoneBiosNameLength" / Int16ul,
"ThermalZoneBiosName" / Bytes(lambda this: this.ThermalZoneBiosNameLength),
"_TMP" / Int32ul,
"_PSV" / Int32ul,
"_TC1" / Int32ul,
"_TC2" / Int32ul,
"_TSP" / Int32ul,
"_AC0" / Int32ul,
"_AC1" / Int32ul,
"_AC2" / Int32ul,
"_AC3" / Int32ul,
"_AC4" / Int32ul,
"_AC5" / Int32ul,
"_AC6" / Int32ul,
"_AC7" / Int32ul,
"_AC8" / Int32ul,
"_AC9" / Int32ul,
"_HOT" / Int32ul,
"_CRT" / Int32ul,
"_NTT" / Int32ul,
"_PSLCount" / Int32ul,
"_PSLEntries" / CString,
"_TZDCount" / Int32ul,
"_TZDEntries" / CString,
"_AL0Count" / Int32ul,
"_AL0Entries" / CString,
"_AL1Count" / Int32ul,
"_AL1Entries" / CString,
"_AL2Count" / Int32ul,
"_AL2Entries" / CString,
"_AL3Count" / Int32ul,
"_AL3Entries" / CString,
"_AL4Count" / Int32ul,
"_AL4Entries" / CString,
"_AL5Count" / Int32ul,
"_AL5Entries" / CString,
"_AL6Count" / Int32ul,
"_AL6Entries" / CString,
"_AL7Count" / Int32ul,
"_AL7Entries" / CString,
"_AL8Count" / Int32ul,
"_AL8Entries" / CString,
"_AL9Count" / Int32ul,
"_AL9Entries" / CString,
"MinimumThrottle" / Int32ul,
"_CR3" / Int32ul,
"_TFP" / Int32ul,
"OverThrottleThreshold" / Int32ul
)
@declare(guid=guid("c514638f-7723-485b-bcfc-96565d735d4a"), event_id=12, version=3)
class Microsoft_Windows_Kernel_Acpi_12_3(Etw):
pattern = Struct(
"ThermalZoneBiosNameLength" / Int16ul,
"ThermalZoneBiosName" / Bytes(lambda this: this.ThermalZoneBiosNameLength),
"_TMP" / Int32ul,
"_PSV" / Int32ul,
"_TC1" / Int32ul,
"_TC2" / Int32ul,
"_TSP" / Int32ul,
"_AC0" / Int32ul,
"_AC1" / Int32ul,
"_AC2" / Int32ul,
"_AC3" / Int32ul,
"_AC4" / Int32ul,
"_AC5" / Int32ul,
"_AC6" / Int32ul,
"_AC7" / Int32ul,
"_AC8" / Int32ul,
"_AC9" / Int32ul,
"_HOT" / Int32ul,
"_CRT" / Int32ul,
"_NTT" / Int32ul,
"_PSLCount" / Int32ul,
"_PSLEntries" / CString,
"_TZDCount" / Int32ul,
"_TZDEntries" / CString,
"_AL0Count" / Int32ul,
"_AL0Entries" / CString,
"_AL1Count" / Int32ul,
"_AL1Entries" / CString,
"_AL2Count" / Int32ul,
"_AL2Entries" / CString,
"_AL3Count" / Int32ul,
"_AL3Entries" / CString,
"_AL4Count" / Int32ul,
"_AL4Entries" / CString,
"_AL5Count" / Int32ul,
"_AL5Entries" / CString,
"_AL6Count" / Int32ul,
"_AL6Entries" / CString,
"_AL7Count" / Int32ul,
"_AL7Entries" / CString,
"_AL8Count" / Int32ul,
"_AL8Entries" / CString,
"_AL9Count" / Int32ul,
"_AL9Entries" / CString,
"MinimumThrottle" / Int32ul,
"_CR3" / Int32ul,
"_TFP" / Int32ul,
"OverThrottleThreshold" / Int32ul,
"DescriptionLength" / Int16ul,
"Description" / Bytes(lambda this: this.DescriptionLength)
)
@declare(guid=guid("c514638f-7723-485b-bcfc-96565d735d4a"), event_id=12, version=4)
class Microsoft_Windows_Kernel_Acpi_12_4(Etw):
pattern = Struct(
"ThermalZoneBiosNameLength" / Int16ul,
"ThermalZoneBiosName" / Bytes(lambda this: this.ThermalZoneBiosNameLength),
"_TMP" / Int32ul,
"_PSV" / Int32ul,
"_TC1" / Int32ul,
"_TC2" / Int32ul,
"_TSP" / Int32ul,
"_AC0" / Int32ul,
"_AC1" / Int32ul,
"_AC2" / Int32ul,
"_AC3" / Int32ul,
"_AC4" / Int32ul,
"_AC5" / Int32ul,
"_AC6" / Int32ul,
"_AC7" / Int32ul,
"_AC8" / Int32ul,
"_AC9" / Int32ul,
"_HOT" / Int32ul,
"_CRT" / Int32ul,
"_NTT" / Int32ul,
"_PSLCount" / Int32ul,
"_PSLEntries" / CString,
"_TZDCount" / Int32ul,
"_TZDEntries" / CString,
"_AL0Count" / Int32ul,
"_AL0Entries" / CString,
"_AL1Count" / Int32ul,
"_AL1Entries" / CString,
"_AL2Count" / Int32ul,
"_AL2Entries" / CString,
"_AL3Count" / Int32ul,
"_AL3Entries" / CString,
"_AL4Count" / Int32ul,
"_AL4Entries" / CString,
"_AL5Count" / Int32ul,
"_AL5Entries" / CString,
"_AL6Count" / Int32ul,
"_AL6Entries" / CString,
"_AL7Count" / Int32ul,
"_AL7Entries" / CString,
"_AL8Count" / Int32ul,
"_AL8Entries" / CString,
"_AL9Count" / Int32ul,
"_AL9Entries" / CString,
"MinimumThrottle" / Int32ul,
"_CR3" / Int32ul,
"_TFP" / Int32ul,
"OverThrottleThreshold" / Int32ul,
"DescriptionLength" / Int16ul,
"Description" / Bytes(lambda this: this.DescriptionLength),
"_TZP" / Int32ul
)
@declare(guid=guid("c514638f-7723-485b-bcfc-96565d735d4a"), event_id=13, version=0)
class Microsoft_Windows_Kernel_Acpi_13_0(Etw):
pattern = Struct(
"FanBiosNameLength" / Int16ul,
"FanBiosName" / Bytes(lambda this: this.FanBiosNameLength),
"FstSupported" / Int8ul,
"PowerState" / Int16ul,
"Control" / Int32ul
)
@declare(guid=guid("c514638f-7723-485b-bcfc-96565d735d4a"), event_id=14, version=0)
class Microsoft_Windows_Kernel_Acpi_14_0(Etw):
pattern = Struct(
"FanBiosNameLength" / Int16ul,
"FanBiosName" / Bytes(lambda this: this.FanBiosNameLength),
"PowerState" / Int16ul
)
@declare(guid=guid("c514638f-7723-485b-bcfc-96565d735d4a"), event_id=15, version=0)
class Microsoft_Windows_Kernel_Acpi_15_0(Etw):
pattern = Struct(
"FanBiosNameLength" / Int16ul,
"FanBiosName" / Bytes(lambda this: this.FanBiosNameLength),
"Control" / Int32ul
)
@declare(guid=guid("c514638f-7723-485b-bcfc-96565d735d4a"), event_id=16, version=0)
class Microsoft_Windows_Kernel_Acpi_16_0(Etw):
pattern = Struct(
"ThermalZoneDeviceInstanceLength" / Int16ul,
"ThermalZoneDeviceInstance" / Bytes(lambda this: this.ThermalZoneDeviceInstanceLength),
"DeviceInstanceLength" / Int16ul,
"DeviceInstance" / Bytes(lambda this: this.DeviceInstanceLength),
"PowerState" / Int16ul
)
@declare(guid=guid("c514638f-7723-485b-bcfc-96565d735d4a"), event_id=17, version=0)
class Microsoft_Windows_Kernel_Acpi_17_0(Etw):
pattern = Struct(
"ThermalZoneDeviceInstanceLength" / Int16ul,
"ThermalZoneDeviceInstance" / Bytes(lambda this: this.ThermalZoneDeviceInstanceLength),
"DeviceInstanceLength" / Int16ul,
"DeviceInstance" / Bytes(lambda this: this.DeviceInstanceLength),
"PowerState" / Int16ul
)
@declare(guid=guid("c514638f-7723-485b-bcfc-96565d735d4a"), event_id=18, version=0)
class Microsoft_Windows_Kernel_Acpi_18_0(Etw):
pattern = Struct(
"ThermalZoneDeviceInstanceLength" / Int16ul,
"ThermalZoneDeviceInstance" / Bytes(lambda this: this.ThermalZoneDeviceInstanceLength),
"DeviceInstanceLength" / Int16ul,
"DeviceInstance" / Bytes(lambda this: this.DeviceInstanceLength),
"ThrottleLimit" / Int8ul
)
@declare(guid=guid("c514638f-7723-485b-bcfc-96565d735d4a"), event_id=19, version=0)
class Microsoft_Windows_Kernel_Acpi_19_0(Etw):
pattern = Struct(
"ThermalZoneDeviceInstanceLength" / Int16ul,
"ThermalZoneDeviceInstance" / Bytes(lambda this: this.ThermalZoneDeviceInstanceLength),
"DeviceInstanceLength" / Int16ul,
"DeviceInstance" / Bytes(lambda this: this.DeviceInstanceLength),
"ThrottleLimit" / Int8ul
)
@declare(guid=guid("c514638f-7723-485b-bcfc-96565d735d4a"), event_id=20, version=0)
class Microsoft_Windows_Kernel_Acpi_20_0(Etw):
pattern = Struct(
"DeviceBiosNameLength" / Int16ul,
"DeviceBiosName" / Bytes(lambda this: this.DeviceBiosNameLength),
"DeviceResetType" / Int16ul,
"Status" / Int32ul
)
@declare(guid=guid("c514638f-7723-485b-bcfc-96565d735d4a"), event_id=21, version=0)
class Microsoft_Windows_Kernel_Acpi_21_0(Etw):
pattern = Struct(
"AcpiOverrideType" / Int16ul
)
@declare(guid=guid("c514638f-7723-485b-bcfc-96565d735d4a"), event_id=22, version=0)
class Microsoft_Windows_Kernel_Acpi_22_0(Etw):
pattern = Struct(
"Scope" / WString,
"Object" / WString,
"Status" / Int32ul
)
@declare(guid=guid("c514638f-7723-485b-bcfc-96565d735d4a"), event_id=23, version=0)
class Microsoft_Windows_Kernel_Acpi_23_0(Etw):
pattern = Struct(
"AmlMethodNameLength" / Int16ul,
"AmlMethodName" / Bytes(lambda this: this.AmlMethodNameLength),
"Frequency" / Int64ul
)
| 32.929791 | 123 | 0.615651 | 1,507 | 17,354 | 6.819509 | 0.105508 | 0.034251 | 0.046706 | 0.059161 | 0.912523 | 0.912523 | 0.895981 | 0.80938 | 0.80938 | 0.80938 | 0 | 0.114573 | 0.260171 | 17,354 | 526 | 124 | 32.992395 | 0.685879 | 0.005532 | 0 | 0.755365 | 0 | 0 | 0.264073 | 0.101919 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.008584 | 0 | 0.124464 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
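Nearly every event pattern in the file above follows the same construct idiom: a little-endian length field (`"XxxLength" / Int16ul`) followed by `"Xxx" / Bytes(lambda this: this.XxxLength)`. A stdlib-only sketch of that parsing pattern, with a payload shaped like `Microsoft_Windows_Kernel_Acpi_14_0` (the sample bytes are ours):

```python
import struct

# Stdlib-only mirror of the construct idiom used throughout the file:
# a little-endian uint16 length followed by that many raw bytes.

def parse_length_prefixed(data: bytes, offset: int = 0):
    (length,) = struct.unpack_from("<H", data, offset)   # Int16ul
    start = offset + 2
    payload = data[start:start + length]                 # Bytes(lambda ...)
    return payload, start + length                       # value, next offset


# Example payload shaped like event 14: FanBiosNameLength / FanBiosName /
# PowerState.
event = struct.pack("<H", 4) + b"FAN0" + struct.pack("<H", 3)
name, pos = parse_length_prefixed(event)
(power_state,) = struct.unpack_from("<H", event, pos)
assert name == b"FAN0" and power_state == 3
```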
1b37b50d2da07a4da9022d21aad1edad50a4c5b9 | 161 | py | Python | src/mock_open/test/__init__.py | hairygeek/mock-open | f2fc8ca92fa167fb470b24482f49b2aef552fe81 | [
"MIT"
] | 7 | 2015-08-18T15:44:24.000Z | 2021-03-06T18:52:09.000Z | src/mock_open/test/__init__.py | hairygeek/mock-open | f2fc8ca92fa167fb470b24482f49b2aef552fe81 | [
"MIT"
] | 9 | 2015-08-22T17:14:25.000Z | 2020-10-10T18:54:38.000Z | src/mock_open/test/__init__.py | hairygeek/mock-open | f2fc8ca92fa167fb470b24482f49b2aef552fe81 | [
"MIT"
] | 5 | 2015-08-21T07:47:36.000Z | 2020-09-20T08:53:35.000Z | # pylint: disable=missing-docstring
# pylint: disable=wildcard-import
from .test_mocks import *
from .cpython.testmock import *
from .cpython.testwith import *
| 23 | 35 | 0.782609 | 20 | 161 | 6.25 | 0.6 | 0.24 | 0.272 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.118012 | 161 | 6 | 36 | 26.833333 | 0.880282 | 0.403727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
1b416ce2c333f04285e2eee002ab48da5af3dcce | 144 | py | Python | test_python_import_issue/run.py | zengmeng1094/test-python | 79aa30789c2bb8700f660a4d6b13f06960e169e5 | [
"MIT"
] | null | null | null | test_python_import_issue/run.py | zengmeng1094/test-python | 79aa30789c2bb8700f660a4d6b13f06960e169e5 | [
"MIT"
] | null | null | null | test_python_import_issue/run.py | zengmeng1094/test-python | 79aa30789c2bb8700f660a4d6b13f06960e169e5 | [
"MIT"
] | null | null | null | import os
print('run.py', os.getcwd())
from test_python_import_issue.pacx import k
from test_python_import_issue.dirx import a
print('over')
| 16 | 43 | 0.784722 | 25 | 144 | 4.28 | 0.6 | 0.149533 | 0.261682 | 0.373832 | 0.46729 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 144 | 8 | 44 | 18 | 0.835938 | 0 | 0 | 0 | 0 | 0 | 0.069444 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.6 | 0 | 0.6 | 0.4 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
1b859abd22aa537353d7a976902e16125cba34b1 | 177 | py | Python | backend/handlers/graphql/resolvers/sr.py | al-indigo/vmemperor | 80eb6d47d839a4736eb6f9d2fcfad35f0a7b3bb1 | [
"Apache-2.0"
] | null | null | null | backend/handlers/graphql/resolvers/sr.py | al-indigo/vmemperor | 80eb6d47d839a4736eb6f9d2fcfad35f0a7b3bb1 | [
"Apache-2.0"
] | 8 | 2017-10-11T13:26:10.000Z | 2021-12-13T20:27:52.000Z | backend/handlers/graphql/resolvers/sr.py | ispras/vmemperor | 80eb6d47d839a4736eb6f9d2fcfad35f0a7b3bb1 | [
"Apache-2.0"
] | 4 | 2017-07-27T12:25:42.000Z | 2018-01-28T02:06:26.000Z | def srType():
from handlers.graphql.types.sr import GSR
return GSR
def srContentType():
from handlers.graphql.types.sr import SRContentType
return SRContentType | 25.285714 | 55 | 0.751412 | 22 | 177 | 6.045455 | 0.5 | 0.180451 | 0.285714 | 0.360902 | 0.481203 | 0.481203 | 0 | 0 | 0 | 0 | 0 | 0 | 0.180791 | 177 | 7 | 56 | 25.285714 | 0.917241 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
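The function-level imports in `srType`/`srContentType` above run at *call* time, not at definition time — a common way to break a circular dependency between a resolver module and the type module it returns. A self-contained demonstration of that late binding; the module name `late_mod` is a stand-in invented here:

```python
import sys
import types

# Stand-in for handlers.graphql.types.sr; not registered yet.
late_mod = types.ModuleType("late_mod")
late_mod.GSR = "the-type"

def resolver():
    # Like srType() above: the import executes only when resolver() is
    # called, by which point "late_mod" has been registered.
    from late_mod import GSR
    return GSR

assert "late_mod" not in sys.modules    # defining resolver imported nothing
sys.modules["late_mod"] = late_mod      # module becomes available later
assert resolver() == "the-type"
```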
1bda2a9c4164db4fe1f9735cc802fdbef3a386a1 | 4,481 | py | Python | tests/test_filter.py | carlosvega/GenericFilters | 9136e64a5c43dec9edc7bf4085d757cdf36d66f9 | [
"Apache-2.0"
] | null | null | null | tests/test_filter.py | carlosvega/GenericFilters | 9136e64a5c43dec9edc7bf4085d757cdf36d66f9 | [
"Apache-2.0"
] | null | null | null | tests/test_filter.py | carlosvega/GenericFilters | 9136e64a5c43dec9edc7bf4085d757cdf36d66f9 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
import pytest
from filters.filter import *
@pytest.fixture(scope='session')
def filter_one():
return filter(alias='filter_one', ip='192.168.2.1', method=['GET', 'POST'])
@pytest.fixture(scope='session')
def filter_two():
return filter(alias='filter_two', ip='192.168.2.1', method='GET', uri=['example-01', 'example-03'])
@pytest.fixture(scope='session')
def filter_three():
return filter(alias='filter_three', equals=True, ip='192.168.2.1', method='GET', uri=['example-01', 'example-03'])
class Test_Filter_1(object):
def test_one(self, filter_one):
boolean = filter_one.check_filter(ip='192.168.2.1', method='GET')
assert boolean == True
def test_two(self, filter_one):
boolean = filter_one.check_filter(ip='192.168.2.1', method='POST')
assert boolean == True
def test_three(self, filter_one):
boolean = filter_one.check_filter(ip='192.168.2.1', method='GET', uri='EXAMPLE')
assert boolean == True
def test_four(self, filter_one):
boolean = filter_one.check_filter(ip='192.168.2.1', method='GET', uri='EXAMPLE', potato='potato')
assert boolean == True
def test_five(self, filter_one):
boolean = filter_one.check_filter(ip='192.168.2.2', method='GET')
assert boolean == False
def test_six(self, filter_one):
boolean = filter_one.check_filter(ip='192.168.2.1', method='POTATO')
assert boolean == False
def test_seven(self, filter_one):
boolean = filter_one.check_filter()
assert boolean == False
def test_eight(self, filter_one):
boolean = filter_one.check_filter(potato='potato')
assert boolean == False
class Test_Filter_2(object):
def test_one(self, filter_two):
boolean = filter_two.check_filter(ip='192.168.2.1', method='GET', uri='-abasda-example-01-abasda')
assert boolean == True
def test_two(self, filter_two):
boolean = filter_two.check_filter(ip='192.168.2.1', method='GET', uri='EXAMPLE')
assert boolean == False
def test_three(self, filter_two):
boolean = filter_two.check_filter(ip='192.168.2.1', method='GET', uri='EXAMPLE', patata='patata')
assert boolean == False
def test_four(self, filter_two):
boolean = filter_two.check_filter(ip='192.168.2.1', method='GET', uri='abasda-example-02-abasda')
assert boolean == False
def test_five(self, filter_two):
boolean = filter_two.check_filter(ip='192.168.2.1', method='POST')
assert boolean == False
def test_six(self, filter_two):
boolean = filter_two.check_filter(ip='192.168.2.2', method='GET')
assert boolean == False
def test_seven(self, filter_two):
boolean = filter_two.check_filter()
assert boolean == False
def test_eight(self, filter_two):
boolean = filter_two.check_filter(ip='192.168.2.1', method='GET', uri='-abasda-example-03-abasda')
assert boolean == True
def test_nine(self, filter_two):
boolean = filter_two.check_filter(potato='potato')
assert boolean == False
class Test_Filter_2(object):
def test_one(self, filter_three):
boolean = filter_three.check_filter(ip='192.168.2.1', method='GET', uri='-abasda-example-01-abasda')
assert boolean == False
def test_two(self, filter_three):
boolean = filter_three.check_filter(ip='192.168.2.1', method='GET', uri='EXAMPLE')
assert boolean == False
def test_three(self, filter_three):
boolean = filter_three.check_filter(ip='192.168.2.1', method='GET', uri='EXAMPLE', patata='patata')
assert boolean == False
def test_four(self, filter_three):
boolean = filter_three.check_filter(ip='192.168.2.1', method='GET', uri='abasda-example-02-abasda')
assert boolean == False
def test_five(self, filter_three):
boolean = filter_three.check_filter(ip='192.168.2.1', method='POST')
assert boolean == False
def test_six(self, filter_three):
boolean = filter_three.check_filter(ip='192.168.2.2', method='GET')
assert boolean == False
def test_seven(self, filter_three):
boolean = filter_three.check_filter()
assert boolean == False
def test_eight(self, filter_three):
boolean = filter_three.check_filter(ip='192.168.2.1', method='GET', uri='-abasda-example-03-abasda')
assert boolean == False
def test_nine(self, filter_three):
boolean = filter_three.check_filter(ip='192.168.2.1', method='GET', uri='example-03')
assert boolean == True
def test_nine(self, filter_three):
boolean = filter_three.check_filter(ip='192.168.2.1', method='GET', uri='example-01')
assert boolean == True
def test_ten(self, filter_three):
boolean = filter_three.check_filter(potato='potato')
assert boolean == False
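The assertions above fully pin down the expected behaviour of `check_filter`: every configured criterion must be supplied and match (substring match by default, exact match with `equals=True`, list criteria meaning "any of"), while unknown extra keyword arguments are ignored. As a reading aid, here is a minimal sketch of a filter class consistent with these tests; the `Filter` name and `_match` helper are made up for illustration, and the real implementation lives in `filters.filter` and may differ:

```python
class Filter:
    """Hypothetical stand-in for the real filters.filter.filter class."""
    def __init__(self, alias=None, equals=False, **criteria):
        self.alias = alias          # name of the filter, not used for matching
        self.equals = equals        # exact match instead of substring match
        self.criteria = criteria    # e.g. ip='192.168.2.1', method=['GET', 'POST']

    def _match(self, wanted, value):
        # A criterion may be a single string or a list of alternatives.
        options = wanted if isinstance(wanted, list) else [wanted]
        if self.equals:
            return any(value == opt for opt in options)
        return any(opt in value for opt in options)  # substring containment

    def check_filter(self, **fields):
        # Every configured criterion must be supplied and must match;
        # unknown extra keyword arguments are simply ignored.
        for key, wanted in self.criteria.items():
            if key not in fields or not self._match(wanted, fields[key]):
                return False
        return bool(self.criteria)

f1 = Filter(alias='filter_one', ip='192.168.2.1', method=['GET', 'POST'])
assert f1.check_filter(ip='192.168.2.1', method='POST', uri='EXAMPLE')
assert not f1.check_filter(ip='192.168.2.2', method='GET')
```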
9444449c51a59bfbb962474f47724be174bd8ccc | 9,963 | py | Python | src/outdoor_bot/srv/_mainTargets_service.py | dan-git/outdoor_bot | 81bf75e26449f8e4b6a38f4049ca4d4cda7b8c04 | ["BSD-2-Clause"] | null | null | null | """autogenerated by genpy from outdoor_bot/mainTargets_serviceRequest.msg. Do not edit."""
import sys
python3 = True if sys.hexversion > 0x03000000 else False
import genpy
import struct

class mainTargets_serviceRequest(genpy.Message):
  _md5sum = "90fff5866c31f9caf1d5f5e7d43f49c1"
  _type = "outdoor_bot/mainTargets_serviceRequest"
  _has_header = False #flag to mark the presence of a Header object
  _full_text = """string image_filename
float32 approxRange
bool firstTarget
"""
  __slots__ = ['image_filename','approxRange','firstTarget']
  _slot_types = ['string','float32','bool']

  def __init__(self, *args, **kwds):
    """
    Constructor. Any message fields that are implicitly/explicitly
    set to None will be assigned a default value. The recommended
    use is keyword arguments as this is more robust to future message
    changes. You cannot mix in-order arguments and keyword arguments.
    The available fields are:
       image_filename,approxRange,firstTarget
    :param args: complete set of field values, in .msg order
    :param kwds: use keyword arguments corresponding to message field names
    to set specific fields.
    """
    if args or kwds:
      super(mainTargets_serviceRequest, self).__init__(*args, **kwds)
      #message fields cannot be None, assign default values for those that are
      if self.image_filename is None:
        self.image_filename = ''
      if self.approxRange is None:
        self.approxRange = 0.
      if self.firstTarget is None:
        self.firstTarget = False
    else:
      self.image_filename = ''
      self.approxRange = 0.
      self.firstTarget = False

  def _get_types(self):
    """
    internal API method
    """
    return self._slot_types

  def serialize(self, buff):
    """
    serialize message into buffer
    :param buff: buffer, ``StringIO``
    """
    try:
      _x = self.image_filename
      length = len(_x)
      if python3 or type(_x) == unicode:
        _x = _x.encode('utf-8')
        length = len(_x)
      if python3:
        buff.write(struct.pack('<I%sB'%length, length, *_x))
      else:
        buff.write(struct.pack('<I%ss'%length, length, _x))
      _x = self
      buff.write(_struct_fB.pack(_x.approxRange, _x.firstTarget))
    except struct.error as se: self._check_types(struct.error("%s: '%s' when writing '%s'" % (type(se), str(se), str(_x))))
    except TypeError as te: self._check_types(ValueError("%s: '%s' when writing '%s'" % (type(te), str(te), str(_x))))

  def deserialize(self, str):
    """
    unpack serialized message in str into this message instance
    :param str: byte array of serialized message, ``str``
    """
    try:
      end = 0
      start = end
      end += 4
      (length,) = _struct_I.unpack(str[start:end])
      start = end
      end += length
      if python3:
        self.image_filename = str[start:end].decode('utf-8')
      else:
        self.image_filename = str[start:end]
      _x = self
      start = end
      end += 5
      (_x.approxRange, _x.firstTarget,) = _struct_fB.unpack(str[start:end])
      self.firstTarget = bool(self.firstTarget)
      return self
    except struct.error as e:
      raise genpy.DeserializationError(e) #most likely buffer underfill

  def serialize_numpy(self, buff, numpy):
    """
    serialize message with numpy array types into buffer
    :param buff: buffer, ``StringIO``
    :param numpy: numpy python module
    """
    try:
      _x = self.image_filename
      length = len(_x)
      if python3 or type(_x) == unicode:
        _x = _x.encode('utf-8')
        length = len(_x)
      if python3:
        buff.write(struct.pack('<I%sB'%length, length, *_x))
      else:
        buff.write(struct.pack('<I%ss'%length, length, _x))
      _x = self
      buff.write(_struct_fB.pack(_x.approxRange, _x.firstTarget))
    except struct.error as se: self._check_types(struct.error("%s: '%s' when writing '%s'" % (type(se), str(se), str(_x))))
    except TypeError as te: self._check_types(ValueError("%s: '%s' when writing '%s'" % (type(te), str(te), str(_x))))

  def deserialize_numpy(self, str, numpy):
    """
    unpack serialized message in str into this message instance using numpy for array types
    :param str: byte array of serialized message, ``str``
    :param numpy: numpy python module
    """
    try:
      end = 0
      start = end
      end += 4
      (length,) = _struct_I.unpack(str[start:end])
      start = end
      end += length
      if python3:
        self.image_filename = str[start:end].decode('utf-8')
      else:
        self.image_filename = str[start:end]
      _x = self
      start = end
      end += 5
      (_x.approxRange, _x.firstTarget,) = _struct_fB.unpack(str[start:end])
      self.firstTarget = bool(self.firstTarget)
      return self
    except struct.error as e:
      raise genpy.DeserializationError(e) #most likely buffer underfill

_struct_I = genpy.struct_I
_struct_fB = struct.Struct("<fB")
"""autogenerated by genpy from outdoor_bot/mainTargets_serviceResponse.msg. Do not edit."""
import sys
python3 = True if sys.hexversion > 0x03000000 else False
import genpy
import struct
class mainTargets_serviceResponse(genpy.Message):
_md5sum = "30b2d42be8045e43ef370b1aad370871"
_type = "outdoor_bot/mainTargets_serviceResponse"
_has_header = False #flag to mark the presence of a Header object
_full_text = """int32 centerX
int32 centerY
int32 totalX
float32 rangeSquared
bool newDigcamImageReceived
bool newWebcamImageReceived
"""
__slots__ = ['centerX','centerY','totalX','rangeSquared','newDigcamImageReceived','newWebcamImageReceived']
_slot_types = ['int32','int32','int32','float32','bool','bool']
def __init__(self, *args, **kwds):
"""
Constructor. Any message fields that are implicitly/explicitly
set to None will be assigned a default value. The recommend
use is keyword arguments as this is more robust to future message
changes. You cannot mix in-order arguments and keyword arguments.
The available fields are:
centerX,centerY,totalX,rangeSquared,newDigcamImageReceived,newWebcamImageReceived
:param args: complete set of field values, in .msg order
:param kwds: use keyword arguments corresponding to message field names
to set specific fields.
"""
if args or kwds:
super(mainTargets_serviceResponse, self).__init__(*args, **kwds)
#message fields cannot be None, assign default values for those that are
if self.centerX is None:
self.centerX = 0
if self.centerY is None:
self.centerY = 0
if self.totalX is None:
self.totalX = 0
if self.rangeSquared is None:
self.rangeSquared = 0.
if self.newDigcamImageReceived is None:
self.newDigcamImageReceived = False
if self.newWebcamImageReceived is None:
self.newWebcamImageReceived = False
else:
self.centerX = 0
self.centerY = 0
self.totalX = 0
self.rangeSquared = 0.
self.newDigcamImageReceived = False
self.newWebcamImageReceived = False
def _get_types(self):
"""
internal API method
"""
return self._slot_types
def serialize(self, buff):
"""
serialize message into buffer
:param buff: buffer, ``StringIO``
"""
try:
_x = self
buff.write(_struct_3if2B.pack(_x.centerX, _x.centerY, _x.totalX, _x.rangeSquared, _x.newDigcamImageReceived, _x.newWebcamImageReceived))
except struct.error as se: self._check_types(struct.error("%s: '%s' when writing '%s'" % (type(se), str(se), str(_x))))
except TypeError as te: self._check_types(ValueError("%s: '%s' when writing '%s'" % (type(te), str(te), str(_x))))
def deserialize(self, str):
"""
unpack serialized message in str into this message instance
:param str: byte array of serialized message, ``str``
"""
try:
end = 0
_x = self
start = end
end += 18
(_x.centerX, _x.centerY, _x.totalX, _x.rangeSquared, _x.newDigcamImageReceived, _x.newWebcamImageReceived,) = _struct_3if2B.unpack(str[start:end])
self.newDigcamImageReceived = bool(self.newDigcamImageReceived)
self.newWebcamImageReceived = bool(self.newWebcamImageReceived)
return self
except struct.error as e:
raise genpy.DeserializationError(e) #most likely buffer underfill
def serialize_numpy(self, buff, numpy):
"""
serialize message with numpy array types into buffer
:param buff: buffer, ``StringIO``
:param numpy: numpy python module
"""
try:
_x = self
buff.write(_struct_3if2B.pack(_x.centerX, _x.centerY, _x.totalX, _x.rangeSquared, _x.newDigcamImageReceived, _x.newWebcamImageReceived))
except struct.error as se: self._check_types(struct.error("%s: '%s' when writing '%s'" % (type(se), str(se), str(_x))))
except TypeError as te: self._check_types(ValueError("%s: '%s' when writing '%s'" % (type(te), str(te), str(_x))))
def deserialize_numpy(self, str, numpy):
"""
unpack serialized message in str into this message instance using numpy for array types
:param str: byte array of serialized message, ``str``
:param numpy: numpy python module
"""
try:
end = 0
_x = self
start = end
end += 18
(_x.centerX, _x.centerY, _x.totalX, _x.rangeSquared, _x.newDigcamImageReceived, _x.newWebcamImageReceived,) = _struct_3if2B.unpack(str[start:end])
self.newDigcamImageReceived = bool(self.newDigcamImageReceived)
self.newWebcamImageReceived = bool(self.newWebcamImageReceived)
return self
except struct.error as e:
raise genpy.DeserializationError(e) #most likely buffer underfill
_struct_I = genpy.struct_I
_struct_3if2B = struct.Struct("<3if2B")
class mainTargets_service(object):
  _type = 'outdoor_bot/mainTargets_service'
  _md5sum = '98f192b83a39359c19fd1f5bbe2ac28f'
  _request_class = mainTargets_serviceRequest
  _response_class = mainTargets_serviceResponse
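The request wire format handled by `serialize`/`deserialize` above is a 4-byte little-endian length prefix, the UTF-8 string bytes, then the `<fB` struct (little-endian float32 plus uint8 bool). A stdlib-only sketch of that layout follows; the `pack_request`/`unpack_request` helper names are made up for illustration, and genpy normally does this through the message classes themselves:

```python
import struct

def pack_request(image_filename, approxRange, firstTarget):
    # length-prefixed UTF-8 string, then little-endian float32 + uint8
    data = image_filename.encode('utf-8')
    return struct.pack('<I', len(data)) + data + struct.Struct('<fB').pack(approxRange, firstTarget)

def unpack_request(buf):
    (length,) = struct.unpack_from('<I', buf, 0)
    name = buf[4:4 + length].decode('utf-8')
    approxRange, firstTarget = struct.unpack_from('<fB', buf, 4 + length)
    return name, approxRange, bool(firstTarget)

buf = pack_request('target.jpg', 2.5, True)
assert unpack_request(buf) == ('target.jpg', 2.5, True)
```

Note that 2.5 is exactly representable as a float32, so it survives the round trip; most decimals would come back slightly perturbed.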
94556609d5203c0ae0ed1a489d319f56ce18c71f | 47,034 | py | Python | tests/test_parser.py | ldflo/autojinja | bc126da3b77dd89e88b697a38b2b4ea2cc45f6af | ["BSD-3-Clause"] | 6 | 2022-01-02T17:28:13.000Z | 2022-01-18T20:34:53.000Z | import autojinja
class CustomException(Exception):
    def __init__(self, result, expected):
        result = str(result).replace('\t', "\\t").replace('\n', "\\n\n")
        expected = str(expected).replace('\t', "\\t").replace('\n', "\\n\n")
        message = f"--- Expected ---\n{expected}\\0\n--- Got ---\n{result}\\0"
        super().__init__(message)

settingsCogComment = autojinja.ParserSettings(cog_open = "/**", cog_close = "**/", cog_as_comment = True)
settingsEditComment = autojinja.ParserSettings(edit_open = "/**", edit_close = "**/", edit_as_comment = True)

def valid_marker(input, expected,
                 header0 = None, body0 = None,
                 header1 = None, body1 = None,
                 header2 = None, body2 = None,
                 header3 = None, body3 = None,
                 remove_markers = False,
                 *args, **kwargs):
    template = autojinja.CogTemplate.from_string(input, *args, **kwargs)
    if header0 != None and template.markers[0].header != header0:
        raise CustomException(template.markers[0].header, header0)
    if body0 != None and template.markers[0].body != body0:
        raise CustomException(template.markers[0].body, body0)
    if header1 != None and template.markers[1].header != header1:
        raise CustomException(template.markers[1].header, header1)
    if body1 != None and template.markers[1].body != body1:
        raise CustomException(template.markers[1].body, body1)
    if header2 != None and template.markers[2].header != header2:
        raise CustomException(template.markers[2].header, header2)
    if body2 != None and template.markers[2].body != body2:
        raise CustomException(template.markers[2].body, body2)
    if header3 != None and template.markers[3].header != header3:
        raise CustomException(template.markers[3].header, header3)
    if body3 != None and template.markers[3].body != body3:
        raise CustomException(template.markers[3].body, body3)
    result = template.render(remove_markers = remove_markers)
    if expected != None and result != expected:
        raise CustomException(result, expected)
class Test_CogMarkers:
    class Test_OneLineMarker:
        def test_1(self):
            input = "[[[]]][[[end]]]"
            expected = "[[[]]] [[[end]]]"
            header = ""
            body = ""
            valid_marker(input, expected, header, body)
        def test_2(self):
            input = "[[[]]] abc [[[end]]]"
            expected = "[[[]]] [[[end]]]"
            header = ""
            body = "abc"
            valid_marker(input, expected, header, body)
        def test_3(self):
            input = " [[[ ]]] abc [[[end ]]]"
            expected = " [[[ ]]] [[[end ]]]"
            header = ""
            body = " abc"
            valid_marker(input, expected, header, body)
        def test_4(self):
            input = " [[[ ]]] abc [[[ end]]]"
            expected = " [[[ ]]] [[[ end]]]"
            header = ""
            body = "abc"
            valid_marker(input, expected, header, body)
        def test_5(self):
            input = "[[[a]]] abc [[[ end ]]] "
            expected = "[[[a]]] a [[[ end ]]] "
            header = "a"
            body = "abc"
            valid_marker(input, expected, header, body)
        def test_6(self):
            input = "[[[ a]]] [[[end]]] "
            expected = "[[[ a]]] a [[[end]]] "
            header = "a"
            body = ""
            valid_marker(input, expected, header, body)
        def test_7(self):
            input = "[[[a ]]] [[[end]]]"
            expected = "[[[a ]]] a [[[end]]]"
            header = "a"
            body = ""
            valid_marker(input, expected, header, body)
        def test_8(self):
            input = "[[[ a ]]] [[[end]]]"
            expected = "[[[ a ]]] a [[[end]]]"
            header = "a"
            body = " "
            valid_marker(input, expected, header, body)
        def test_9(self):
            input = "[[[ ]]][[[ end]]]"
            expected = "[[[ ]]] [[[ end]]]"
            header = " "
            body = ""
            valid_marker(input, expected, header, body)
        def test_10(self):
            input = "[[[ \t ]]][[[end ]]]"
            expected = "[[[ \t ]]] \t [[[end ]]]"
            header = " \t "
            body = ""
            valid_marker(input, expected, header, body)
        def test_11(self):
            input = " [[[ ]]]\n" \
                    "[[[ end ]]]"
            expected = " [[[ ]]]\n" \
                       " \n" \
                       "[[[ end ]]]"
            header = " "
            body = ""
            valid_marker(input, expected, header, body)
        def test_12(self):
            input = " [[[a ]]] \n" \
                    " dummy\n" \
                    " [[[end]]]"
            expected = " [[[a ]]] \n" \
                       " a \n" \
                       " [[[end]]]"
            header = "a "
            body = " dummy"
            valid_marker(input, expected, header, body)
        def test_13(self):
            input = "[[[ a]]] \n" \
                    " [[[abc]]][[[end]]]\n" \
                    " [[[end]]] "
            expected = "[[[ a]]] \n" \
                       " a\n" \
                       " [[[end]]] "
            header = " a"
            body = " [[[abc]]][[[end]]]"
            valid_marker(input, expected, header, body)
        def test_14(self):
            input = "[[[]]] \n" \
                    " [[[end]]] "
            expected = "[[[]]] \n" \
                       " [[[end]]] "
            header = ""
            body = ""
            valid_marker(input, expected, header, body)
        def test_15(self):
            input = " /* [[[ a ]]]*/ dummy /*[[[end]]] */"
            expected = " /* [[[ a ]]]*/ a /*[[[end]]] */"
            header = "a"
            body = "dummy"
            valid_marker(input, expected, header, body)
        def test_16(self):
            input = " /* [[[ a ]]]*/dummy /*[[[end]]] */"
            expected = " /* [[[ a ]]]*/dummy a /*[[[end]]] */"
            header = "a"
            body = ""
            valid_marker(input, expected, header, body)
        def test_17(self):
            input = " /* [[[ a ]]]*/ dummy/*[[[end]]] */"
            expected = " /* [[[ a ]]]*/ a dummy/*[[[end]]] */"
            header = "a"
            body = ""
            valid_marker(input, expected, header, body)
        def test_18(self):
            input = " /* [[[ a ]]]*/dummy dummy/*[[[end]]] */"
            expected = " /* [[[ a ]]]*/dummy a dummy/*[[[end]]] */"
            header = "a"
            body = ""
            valid_marker(input, expected, header, body)
    class Test_SeveralLinesMarker:
        def test_1(self):
            input = "[[[\n" \
                    "a\n" \
                    "]]]def\n" \
                    "[[[ end ]]]"
            expected = "[[[\n" \
                       "a\n" \
                       "]]]def\n" \
                       "a\n" \
                       "[[[ end ]]]"
            header = "a"
            body = ""
            valid_marker(input, expected, header, body)
        def test_2(self):
            input = " [[[\n" \
                    " a\n" \
                    " ]]] test\n" \
                    " dummy\n" \
                    "[[[ end ]]]"
            expected = " [[[\n" \
                       " a\n" \
                       " ]]] test\n" \
                       " a\n" \
                       "[[[ end ]]]"
            header = "a"
            body = " dummy"
            valid_marker(input, expected, header, body)
        def test_3(self):
            input = " [[[\n" \
                    "\n" \
                    " ]]]\n" \
                    "[[[ end ]]]"
            expected = " [[[\n" \
                       "\n" \
                       " ]]]\n" \
                       " \n" \
                       "[[[ end ]]]"
            header = ""
            body = ""
            valid_marker(input, expected, header, body)
        def test_4(self):
            input = " // [[[\n" \
                    " a\n" \
                    " ]]]\n" \
                    "[[[ end ]]]"
            expected = " // [[[\n" \
                       " a\n" \
                       " ]]]\n" \
                       " a\n" \
                       "[[[ end ]]]"
            header = "a"
            body = ""
            valid_marker(input, expected, header, body)
        def test_5(self):
            input = "\t\t[[[\n" \
                    "\t\t a\n" \
                    "\t\t]]]\n" \
                    "[[[ end ]]]"
            expected = "\t\t[[[\n" \
                       "\t\t a\n" \
                       "\t\t]]]\n" \
                       "\t\t a\n" \
                       "[[[ end ]]]"
            header = " a"
            body = ""
            valid_marker(input, expected, header, body)
        def test_6(self):
            input = " //\t[[[\n" \
                    " \ta\n" \
                    " \t]]]\n" \
                    " dummy\n" \
                    " dummy\n" \
                    "[[[ end ]]]"
            expected = " //\t[[[\n" \
                       " \ta\n" \
                       " \t]]]\n" \
                       " a\n" \
                       "[[[ end ]]]"
            header = "a"
            body = " dummy\n dummy"
            valid_marker(input, expected, header, body)
        def test_7(self):
            input = " //\t[[[\n" \
                    " //\t]]]\n" \
                    "[[[ end ]]]"
            expected = " //\t[[[\n" \
                       " //\t]]]\n" \
                       "[[[ end ]]]"
            header = ""
            body = ""
            valid_marker(input, expected, header, body)
        def test_8(self):
            input = " //\t[[[ \n" \
                    " //\t]]]\n" \
                    "[[[ end ]]]"
            expected = " //\t[[[ \n" \
                       " //\t]]]\n" \
                       " \n" \
                       "[[[ end ]]]"
            header = ""
            body = ""
            valid_marker(input, expected, header, body)
        def test_9(self):
            input = " //\t[[[ \n" \
                    " //\t]]]\n" \
                    "[[[ end ]]]"
            expected = " //\t[[[ \n" \
                       " //\t]]]\n" \
                       " \n" \
                       "[[[ end ]]]"
            header = " "
            body = ""
            valid_marker(input, expected, header, body)
        def test_10(self):
            input = " // [[[\n" \
                    " // ]]]\n" \
                    "[[[ end ]]]"
            expected = " // [[[\n" \
                       " // ]]]\n" \
                       " \n" \
                       "[[[ end ]]]"
            header = ""
            body = ""
            valid_marker(input, expected, header, body)
        def test_11(self):
            input = " // [[[\n" \
                    " // ]]]\n" \
                    " dummy\n" \
                    "[[[ end ]]]"
            expected = " // [[[\n" \
                       " // ]]]\n" \
                       " \n" \
                       "[[[ end ]]]"
            header = " "
            body = " dummy"
            valid_marker(input, expected, header, body)
        def test_12(self):
            input = " // [[[ \n" \
                    " // ]]]\n" \
                    "[[[ end ]]]"
            expected = " // [[[ \n" \
                       " // ]]]\n" \
                       " \n" \
                       "[[[ end ]]]"
            header = "\n"
            body = ""
            valid_marker(input, expected, header, body)
        def test_13(self):
            input = " // [[[ abc\n" \
                    " // def ]]]\n" \
                    "[[[ end ]]]"
            expected = " // [[[ abc\n" \
                       " // def ]]]\n" \
                       " abc\n" \
                       " def\n" \
                       "[[[ end ]]]"
            header = "abc\ndef"
            body = ""
            valid_marker(input, expected, header, body)
        def test_14(self):
            input = "//[[[ abc\n" \
                    "//def ]]]\n" \
                    "[[[ end ]]]"
            expected = "//[[[ abc\n" \
                       "//def ]]]\n" \
                       "abc\n" \
                       "def\n" \
                       "[[[ end ]]]"
            header = "abc\ndef"
            body = ""
            valid_marker(input, expected, header, body)
        def test_15(self):
            input = "[[[ abc\n" \
                    "\n" \
                    "def ]]]\n" \
                    "[[[ end ]]]"
            expected = "[[[ abc\n" \
                       "\n" \
                       "def ]]]\n" \
                       "abc\n" \
                       "\n" \
                       "def\n" \
                       "[[[ end ]]]"
            header = "abc\n\ndef"
            body = ""
            valid_marker(input, expected, header, body)
        def test_16(self):
            input = "[[[\n" \
                    "a\n" \
                    "\n" \
                    "]]]\n" \
                    "[[[ end ]]]"
            expected = "[[[\n" \
                       "a\n" \
                       "\n" \
                       "]]]\n" \
                       "a\n" \
                       "[[[ end ]]]"
            header = "a\n"
            body = ""
            valid_marker(input, expected, header, body)
        def test_17(self):
            input = " [[[\n" \
                    " a\n" \
                    "\n" \
                    " ]]]\n" \
                    " dummy\n" \
                    "[[[ end ]]]"
            expected = " [[[\n" \
                       " a\n" \
                       "\n" \
                       " ]]]\n" \
                       " a\n" \
                       "[[[ end ]]]"
            header = "a\n"
            body = " dummy"
            valid_marker(input, expected, header, body)
        def test_18(self):
            input = " [[[\n" \
                    " \n" \
                    " a\n" \
                    " ]]]\n" \
                    "[[[ end ]]]"
            expected = " [[[\n" \
                       " \n" \
                       " a\n" \
                       " ]]]\n" \
                       " \n" \
                       " a\n" \
                       "[[[ end ]]]"
            header = "\na"
            body = ""
            valid_marker(input, expected, header, body)
    class Test_CommentMarkers:
        def test_1(self):
            input = "/** \t **//**end **/"
            expected = "/** \t **/ \t /**end **/"
            header = " \t "
            body = ""
            valid_marker(input, expected, header, body, settings = settingsCogComment)
        def test_2(self):
            input = "//\t/** \t **//**end **/"
            expected = "//\t/** \t **/ \t /**end **/"
            header = " \t "
            body = ""
            valid_marker(input, expected, header, body, settings = settingsCogComment)
        def test_3(self):
            input = " /** **/\n" \
                    "/** end **/"
            expected = " /** **/\n" \
                       " \n" \
                       "/** end **/"
            header = " "
            body = ""
            valid_marker(input, expected, header, body, settings = settingsCogComment)
        def test_4(self):
            input = "/**\n" \
                    "\n" \
                    "**/\n" \
                    "/** end **/"
            expected = "/**\n" \
                       "\n" \
                       "**/\n" \
                       "\n" \
                       "/** end **/"
            header = ""
            body = ""
            valid_marker(input, expected, header, body, settings = settingsCogComment)
        def test_5(self):
            input = " /**\n" \
                    " \n" \
                    " **/\n" \
                    "/** end **/"
            expected = " /**\n" \
                       " \n" \
                       " **/\n" \
                       " \n" \
                       "/** end **/"
            header = ""
            body = ""
            valid_marker(input, expected, header, body, settings = settingsCogComment)
        def test_6(self):
            input = "///**\n" \
                    "//\n" \
                    "//**/\n" \
                    "///** end **/"
            expected = "///**\n" \
                       "//\n" \
                       "//**/\n" \
                       " \n" \
                       "///** end **/"
            header = ""
            body = ""
            valid_marker(input, expected, header, body, settings = settingsCogComment)
        def test_7(self):
            input = " // /**\n" \
                    " // \n" \
                    " // **/\n" \
                    "/** end **/"
            expected = " // /**\n" \
                       " // \n" \
                       " // **/\n" \
                       " \n" \
                       "/** end **/"
            header = ""
            body = ""
            valid_marker(input, expected, header, body, settings = settingsCogComment)
    class Test_RemoveMarkers:
        def test_1(self):
            input = "//\n" \
                    " [[[ a ]]] \n" \
                    " \n" \
                    " [[[ end ]]] \n"
            expected = "//\n" \
                       " a\n"
            header = "a"
            body = " "
            valid_marker(input, expected, header, body, remove_markers = True)
        def test_2(self):
            input = "//\n" \
                    " [[[ a ]]] \n" \
                    " \n" \
                    " [[[ end ]]] \n" \
                    "[[[ def ]]] [[[ end ]]] \n"
            expected = "//\n" \
                       " a\n" \
                       "def \n"
            header = "a"
            body = " "
            valid_marker(input, expected, header, body, remove_markers = True)
        def test_3(self):
            input = "//\n" \
                    " [[[ a ]]]\n" \
                    " \n" \
                    " [[[end]]] \n" \
                    "def\n"
            expected = "//\n" \
                       " a\n" \
                       "def\n"
            header = "a"
            body = " "
            valid_marker(input, expected, header, body, remove_markers = True)
        def test_4(self):
            input = "//\n" \
                    " [[[\n" \
                    " \n" \
                    " a\n" \
                    " ]]]\n" \
                    " [[[ end ]]] "
            expected = "//\n" \
                       " \n" \
                       " a\n"
            header = "\na"
            body = ""
            valid_marker(input, expected, header, body, remove_markers = True)
        def test_5(self):
            input = " // [[[ a ]]] abc [[[ end ]]] \n"
            expected = " // a \n"
            header0 = "a"
            body0 = "abc"
            valid_marker(input, expected, header0, body0, remove_markers = True)
        def test_6(self):
            input = " // [[[ a ]]] abc [[[ end ]]] dummy [[[ bde ]]] b [[[ end ]]] \n"
            expected = " // a dummy bde \n"
            header0 = "a"
            body0 = "abc"
            header2 = "bde"
            body2 = "b"
            valid_marker(input, expected, header0, body0, None, None, header2, body2, remove_markers = True)
        def test_7(self):
            input = " // [[[ a ]]] abc [[[ end ]]]dummy [[[ bde ]]] b [[[ end ]]] \n"
            expected = " // adummy bde \n"
            header0 = "a"
            body0 = "abc"
            header2 = "bde"
            body2 = "b"
            valid_marker(input, expected, header0, body0, None, None, header2, body2, remove_markers = True)
        def test_8(self):
            input = " // [[[ a ]]] abc [[[ end ]]] dummy[[[ bde ]]] b [[[ end ]]] \n"
            expected = " // a dummybde \n"
            header0 = "a"
            body0 = "abc"
            header2 = "bde"
            body2 = "b"
            valid_marker(input, expected, header0, body0, None, None, header2, body2, remove_markers = True)
        def test_9(self):
            input = " /**[[[ a ]]]**/ abc /**[[[ end ]]]**/ dummy/**[[[ bde ]]]**/ b **/[[[ end ]]]**/ \n"
            expected = " /**a**/ dummy/**bde**/ \n"
            header0 = "a"
            body0 = "abc"
            header2 = "bde"
            body2 = "b"
            valid_marker(input, expected, header0, body0, None, None, header2, body2, remove_markers = True)
        def test_10(self):
            input = " [[[\n" \
                    " <<[ abc ]>>\n" \
                    " <<[ end ]>>\n" \
                    " ]]]\n" \
                    " [[[ end ]]]"
            expected = ""
            header0 = " <<[ abc ]>>\n" \
                      " <<[ end ]>>"
            body0 = ""
            valid_marker(input, expected, header0, body0, None, None, None, None, remove_markers = True)
class Test_EditMarkers:
    class Test_OneLineMarker:
        def test_1(self):
            input = "<<[]>><<[end]>>"
            expected = "<<[]>> <<[end]>>"
            header = ""
            body = ""
            valid_marker(input, expected, header, body)
        def test_2(self):
            input = "<<[]>> abc <<[end]>>"
            expected = "<<[]>> abc <<[end]>>"
            header = ""
            body = "abc"
            valid_marker(input, expected, header, body)
        def test_3(self):
            input = " <<[ ]>> abc <<[end ]>>"
            expected = " <<[ ]>> abc <<[end ]>>"
            header = ""
            body = " abc"
            valid_marker(input, expected, header, body)
        def test_4(self):
            input = " <<[ ]>> abc <<[ end]>>"
            expected = " <<[ ]>> abc <<[ end]>>"
            header = ""
            body = "abc "
            valid_marker(input, expected, header, body)
        def test_5(self):
            input = "<<[a]>> abc <<[ end ]>> "
            expected = "<<[a]>> abc <<[ end ]>> "
            header = "a"
            body = "abc"
            valid_marker(input, expected, header, body)
        def test_6(self):
            input = "<<[ a]>> <<[end]>> "
            expected = "<<[ a]>> <<[end]>> "
            header = "a"
            body = ""
            valid_marker(input, expected, header, body)
        def test_7(self):
            input = "<<[a ]>> <<[end]>>"
            expected = "<<[a ]>> <<[end]>>"
            header = "a"
            body = ""
            valid_marker(input, expected, header, body)
        def test_8(self):
            input = "<<[ a ]>> <<[end]>>"
            expected = "<<[ a ]>> <<[end]>>"
            header = "a"
            body = " "
            valid_marker(input, expected, header, body)
        def test_9(self):
            input = "<<[ ]>><<[ end]>>"
            expected = "<<[ ]>> <<[ end]>>"
            header = " "
            body = ""
            valid_marker(input, expected, header, body)
        def test_10(self):
            input = "<<[ \t ]>><<[end ]>>"
            expected = "<<[ \t ]>> <<[end ]>>"
            header = " \t "
            body = ""
            valid_marker(input, expected, header, body)
        def test_11(self):
            input = " <<[ ]>>\n" \
                    "<<[ end ]>>"
            expected = " <<[ ]>>\n" \
                       "<<[ end ]>>"
            header = " "
            body = ""
            valid_marker(input, expected, header, body)
        def test_12(self):
            input = " <<[a ]>> \n" \
                    " dummy\n" \
                    " <<[end]>>"
            expected = " <<[a ]>> \n" \
                       " dummy\n" \
                       " <<[end]>>"
            header = "a "
            body = " dummy"
            valid_marker(input, expected, header, body)
        def test_13(self):
            input = "<<[ a]>> \n" \
                    " <<[end]>> "
            expected = "<<[ a]>> \n" \
                       " <<[end]>> "
            header = " a"
            body = ""
            valid_marker(input, expected, header, body)
        def test_14(self):
            input = "<<[]>> \n" \
                    " <<[end]>> "
            expected = "<<[]>> \n" \
                       " <<[end]>> "
            header = ""
            body = ""
            valid_marker(input, expected, header, body)
        def test_15(self):
            input = " /* <<[ a ]>>*/ dummy /*<<[end]>> */"
            expected = " /* <<[ a ]>>*/ dummy /*<<[end]>> */"
            header = "a"
            body = "dummy"
            valid_marker(input, expected, header, body)
        def test_16(self):
            input = " /* <<[ a ]>>*/dummy /*<<[end]>> */"
            expected = " /* <<[ a ]>>*/dummy /*<<[end]>> */"
            header = "a"
            body = ""
            valid_marker(input, expected, header, body)
        def test_17(self):
            input = " /* <<[ a ]>>*/ dummy/*<<[end]>> */"
            expected = " /* <<[ a ]>>*/ dummy/*<<[end]>> */"
            header = "a"
            body = ""
            valid_marker(input, expected, header, body)
        def test_18(self):
            input = " /* <<[ a ]>>*/dummy dummy/*<<[end]>> */"
            expected = " /* <<[ a ]>>*/dummy dummy/*<<[end]>> */"
            header = "a"
            body = ""
            valid_marker(input, expected, header, body)
    class Test_SeveralLinesMarker:
        pass # Not possible
    class Test_CommentMarkers:
        def test_1(self):
            input = "/** \t **//**end **/"
            expected = "/** \t **/ /**end **/"
            header = " \t "
            body = ""
            valid_marker(input, expected, header, body, settings = settingsEditComment)
        def test_2(self):
            input = "//\t/** \t **//**end **/"
            expected = "//\t/** \t **/ /**end **/"
            header = " \t "
            body = ""
            valid_marker(input, expected, header, body, settings = settingsEditComment)
        def test_3(self):
            input = " /** **/\n" \
                    "/** end **/"
            expected = " /** **/\n" \
                       "/** end **/"
            header = " "
            body = ""
            valid_marker(input, expected, header, body, settings = settingsEditComment)
    class Test_RemoveMarkers:
        def test_1(self):
            input = "//\n" \
                    " <<[ a ]>> \n" \
                    " \n" \
                    " <<[ end ]>> \n"
            expected = "//\n" \
                       " \n"
            header = "a"
            body = " "
            valid_marker(input, expected, header, body, remove_markers = True)
        def test_2(self):
            input = "//\n" \
                    " <<[ a ]>> \n" \
                    " \n" \
                    " <<[ end ]>> \n" \
                    "<<[ def ]>> test <<[ end ]>> \n"
            expected = "//\n" \
                       " \n" \
                       "test \n"
            header = "a"
            body = " "
            valid_marker(input, expected, header, body, remove_markers = True)
        def test_3(self):
            input = "//\n" \
                    " <<[ a ]>>\n" \
                    " test\n" \
                    " <<[end]>> \n" \
                    "def\n"
            expected = "//\n" \
                       " test\n" \
                       "def\n"
            header = "a"
            body = " test"
            valid_marker(input, expected, header, body, remove_markers = True)
        def test_4(self):
            input = " // <<[ a ]>> abc <<[ end ]>> \n"
            expected = " // abc \n"
            header0 = "a"
            body0 = "abc"
            valid_marker(input, expected, header0, body0, remove_markers = True)
        def test_5(self):
            input = " // <<[ a ]>> abc <<[ end ]>> dummy <<[ bde ]>> b <<[ end ]>> \n"
            expected = " // abc dummy b \n"
            header0 = "a"
            body0 = "abc"
            header2 = "bde"
            body2 = "b"
            valid_marker(input, expected, header0, body0, None, None, header2, body2, remove_markers = True)
        def test_6(self):
            input = " // <<[ a ]>> abc <<[ end ]>>dummy <<[ bde ]>> b <<[ end ]>> \n"
            expected = " // abcdummy b \n"
            header0 = "a"
            body0 = "abc"
            header2 = "bde"
            body2 = "b"
            valid_marker(input, expected, header0, body0, None, None, header2, body2, remove_markers = True)
        def test_7(self):
            input = " // <<[ a ]>> abc <<[ end ]>> dummy<<[ bde ]>> b <<[ end ]>> \n"
            expected = " // abc dummyb \n"
            header0 = "a"
            body0 = "abc"
            header2 = "bde"
            body2 = "b"
            valid_marker(input, expected, header0, body0, None, None, header2, body2, remove_markers = True)
        def test_8(self):
            input = " /**<<[ a ]>>**/ abc /**<<[ end ]>>**/ dummy/**<<[ bde ]>>**/ b **/<<[ end ]>>**/ \n"
            expected = " /**abc**/ dummy/**b**/ \n"
            header0 = "a"
            body0 = "abc"
            header2 = "bde"
            body2 = "b"
            valid_marker(input, expected, header0, body0, None, None, header2, body2, remove_markers = True)


class Test_BothMarkers:
    class Test_Both:
        def test_1(self):
            input = "// [[[ <<[ a ]>> ab <<[ end ]>> ]]]\n" \
                    "// [[[ ]]][[[ end ]]]\n" \
                    "// <<[ a ]>> <<[ end ]>>\n" \
                    "// [[[ ]]][[[ end ]]]\n" \
                    "// [[[ end ]]]"
            valid_marker(input, None)

        def test_2(self):
            input = "// [[[ <<[ a ]>> dummy <<[ end ]>> ]]]\n" \
                    "<<[ a ]>> test <<[ end ]>>\n" \
                    "// [[[ end ]]]"
            expected = "// [[[ <<[ a ]>> dummy <<[ end ]>> ]]]\n" \
                       "<<[ a ]>> test <<[ end ]>>\n" \
                       "// [[[ end ]]]"
            header0 = "<<[ a ]>> dummy <<[ end ]>>"
            body0 = "<<[ a ]>> test <<[ end ]>>"
            valid_marker(input, expected, header0, body0)

        def test_3(self):
            input = "// [[[\n" \
                    "// def\n" \
                    "// // <<[ abc ]>>\n" \
                    "// // <<[ end ]>>\n" \
                    "// \n" \
                    "// ]]]\n" \
                    "// <<[ abc ]>>\n" \
                    "123\n" \
                    "// <<[ end ]>>\n" \
                    "// [[[ end ]]]"
            expected = "// [[[\n" \
                       "// def\n" \
                       "// // <<[ abc ]>>\n" \
                       "// // <<[ end ]>>\n" \
                       "// \n" \
                       "// ]]]\n" \
                       "def\n" \
                       "// <<[ abc ]>>\n" \
                       "123\n" \
                       "// <<[ end ]>>\n" \
                       "// [[[ end ]]]"
            header0 = "def\n" \
                      "// <<[ abc ]>>\n" \
                      "// <<[ end ]>>\n"
            body0 = "// <<[ abc ]>>\n" \
                    "123\n" \
                    "// <<[ end ]>>"
            header1 = "abc"
            body1 = "123"
            valid_marker(input, expected, header0, body0, header1, body1)

        def test_4(self):
            input = "// [[[\n" \
                    "// def\n" \
                    "// // <<[ abc ]>>\n" \
                    "// // <<[ end ]>>\n" \
                    "// // <<[ hgi ]>>\n" \
                    "// // <<[ end ]>>\n" \
                    "// ]]]\n" \
                    "// <<[ abc ]>>\n" \
                    "123\n" \
                    "// <<[ end ]>>\n" \
                    "// [[[ end ]]]"
            expected = "// [[[\n" \
                       "// def\n" \
                       "// // <<[ abc ]>>\n" \
                       "// // <<[ end ]>>\n" \
                       "// // <<[ hgi ]>>\n" \
                       "// // <<[ end ]>>\n" \
                       "// ]]]\n" \
                       "def\n" \
                       "// <<[ abc ]>>\n" \
                       "123\n" \
                       "// <<[ end ]>>\n" \
                       "// <<[ hgi ]>>\n" \
                       "// <<[ end ]>>\n" \
                       "// [[[ end ]]]"
            header0 = "def\n" \
                      "// <<[ abc ]>>\n" \
                      "// <<[ end ]>>\n" \
                      "// <<[ hgi ]>>\n" \
                      "// <<[ end ]>>"
            body0 = "// <<[ abc ]>>\n" \
                    "123\n" \
                    "// <<[ end ]>>"
            header1 = "abc"
            body1 = "123"
            valid_marker(input, expected, header0, body0, header1, body1)

        def test_5(self):
            input = "// [[[ <<[ abc ]>><<[ end ]>> ]]]\n" \
                    "// [[[ end ]]]"
            expected = "// [[[ <<[ abc ]>><<[ end ]>> ]]]\n" \
                       "<<[ abc ]>> <<[ end ]>>\n" \
                       "// [[[ end ]]]"
            header = "<<[ abc ]>><<[ end ]>>"
            body = ""
            valid_marker(input, expected, header, body)

        def test_6(self):
            input = "// [[[ <<[ abc ]>><<[ end ]>> ]]][[[ end ]]]"
            expected = "// [[[ <<[ abc ]>><<[ end ]>> ]]] <<[ abc ]>> <<[ end ]>> [[[ end ]]]"
            header = "<<[ abc ]>><<[ end ]>>"
            body = ""
            valid_marker(input, expected, header, body)

        def test_7(self):
            input = "// [[[ <<[ abc ]>><<[ end ]>> ]]] <<[ abc ]>> afz <<[ end ]>> [[[ end ]]]"
            expected = "// [[[ <<[ abc ]>><<[ end ]>> ]]] <<[ abc ]>> afz <<[ end ]>> [[[ end ]]]"
            header0 = "<<[ abc ]>><<[ end ]>>"
            body0 = "<<[ abc ]>> afz <<[ end ]>>"
            header1 = "abc"
            body1 = "afz"
            valid_marker(input, expected, header0, body0, header1, body1)

        def test_8(self):
            input = "// [[[\n" \
                    "// // <<[ abc ]>>\n" \
                    "// // <<[ end ]>>\n" \
                    "// ]]]\n" \
                    " // <<[ abc ]>>\n" \
                    " hello\n" \
                    " // <<[ end ]>>\n" \
                    "// [[[ end ]]]"
            expected = "// [[[\n" \
                       "// // <<[ abc ]>>\n" \
                       "// // <<[ end ]>>\n" \
                       "// ]]]\n" \
                       " // <<[ abc ]>>\n" \
                       " hello\n" \
                       " // <<[ end ]>>\n" \
                       "// [[[ end ]]]"
            header0 = " // <<[ abc ]>>\n" \
                      " // <<[ end ]>>"
            body0 = " // <<[ abc ]>>\n" \
                    " hello\n" \
                    " // <<[ end ]>>"
            header1 = "abc"
            body1 = " hello"
            valid_marker(input, expected, header0, body0, header1, body1)

        def test_9(self):
            input = " // [[[\n" \
                    " // // <<[ abc ]>>\n" \
                    " // // <<[ end ]>>\n" \
                    " // ]]]\n" \
                    " // <<[ abc ]>>\n" \
                    " // [[[ <<[ def ]>><<[ end ]>> ]]] <<[ def ]>> test <<[ end ]>> [[[ end ]]]\n" \
                    " // <<[ end ]>>\n" \
                    " // [[[ end ]]]"
            expected = " // [[[\n" \
                       " // // <<[ abc ]>>\n" \
                       " // // <<[ end ]>>\n" \
                       " // ]]]\n" \
                       " // <<[ abc ]>>\n" \
                       " // [[[ <<[ def ]>><<[ end ]>> ]]] <<[ def ]>> test <<[ end ]>> [[[ end ]]]\n" \
                       " // <<[ end ]>>\n" \
                       " // [[[ end ]]]"
            header0 = " // <<[ abc ]>>\n" \
                      " // <<[ end ]>>"
            body0 = " // <<[ abc ]>>\n" \
                    " // [[[ <<[ def ]>><<[ end ]>> ]]] <<[ def ]>> test <<[ end ]>> [[[ end ]]]\n" \
                    " // <<[ end ]>>"
            header1 = "abc"
            body1 = " // [[[ <<[ def ]>><<[ end ]>> ]]] <<[ def ]>> test <<[ end ]>> [[[ end ]]]"
            header2 = "<<[ def ]>><<[ end ]>>"
            body2 = "<<[ def ]>> test <<[ end ]>>"
            header3 = "def"
            body3 = "test"
            valid_marker(input, expected, header0, body0, header1, body1, header2, body2, header3, body3)

        def test_10(self):
            input = "// [[[]]]\n" \
                    " // <<[ abc ]>>\n" \
                    " test\n" \
                    " // <<[ end ]>>\n" \
                    "// [[[ end ]]]\n" \
                    " // [[[\n" \
                    " // // <<[ abc ]>>\n" \
                    " // // <<[ end ]>>\n" \
                    " // ]]]\n" \
                    " // [[[ end ]]]\n"
            expected = "// [[[]]]\n" \
                       "// [[[ end ]]]\n" \
                       " // [[[\n" \
                       " // // <<[ abc ]>>\n" \
                       " // // <<[ end ]>>\n" \
                       " // ]]]\n" \
                       " // <<[ abc ]>>\n" \
                       " test\n" \
                       " // <<[ end ]>>\n" \
                       " // [[[ end ]]]\n"
            header0 = ""
            body0 = " // <<[ abc ]>>\n" \
                    " test\n" \
                    " // <<[ end ]>>"
            header1 = "abc"
            body1 = " test"
            valid_marker(input, expected, header0, body0, header1, body1)

    class Test_RemoveMarkers:
        def test_1(self):
            input = "// [[[ <<[ a ]>> dummy <<[ end ]>> ]]]\n" \
                    "<<[ a ]>> test <<[ end ]>>\n" \
                    "// [[[ end ]]]"
            expected = "test\n"
            header0 = "<<[ a ]>> dummy <<[ end ]>>"
            body0 = "<<[ a ]>> test <<[ end ]>>"
            valid_marker(input, expected, header0, body0, remove_markers = True)

        def test_2(self):
            input = "// [[[\n" \
                    "// def\n" \
                    "// // <<[ abc ]>>\n" \
                    "// // <<[ end ]>>\n" \
                    "// ]]]\n" \
                    "// <<[ abc ]>>\n" \
                    "123\n" \
                    "// <<[ end ]>>\n" \
                    "// [[[ end ]]]"
            expected = "def\n" \
                       "123\n"
            header0 = "def\n" \
                      "// <<[ abc ]>>\n" \
                      "// <<[ end ]>>"
            body0 = "// <<[ abc ]>>\n" \
                    "123\n" \
                    "// <<[ end ]>>"
            header1 = "abc"
            body1 = "123"
            valid_marker(input, expected, header0, body0, header1, body1, remove_markers = True)

        def test_3(self):
            input = "// [[[\n" \
                    "// def\n" \
                    "// // <<[ abc ]>>\n" \
                    "// // <<[ end ]>>\n" \
                    "// // <<[ hgi ]>>\n" \
                    "// // <<[ end ]>>\n" \
                    "// ]]]\n" \
                    "// <<[ abc ]>>\n" \
                    "123\n" \
                    "// <<[ end ]>>\n" \
                    "// [[[ end ]]]"
            expected = "def\n" \
                       "123\n"
            header0 = "def\n" \
                      "// <<[ abc ]>>\n" \
                      "// <<[ end ]>>\n" \
                      "// <<[ hgi ]>>\n" \
                      "// <<[ end ]>>"
            body0 = "// <<[ abc ]>>\n" \
                    "123\n" \
                    "// <<[ end ]>>"
            header1 = "abc"
            body1 = "123"
            valid_marker(input, expected, header0, body0, header1, body1, remove_markers = True)

        def test_4(self):
            input = "// [[[ <<[ abc ]>><<[ end ]>> ]]]\n" \
                    "// [[[ end ]]]"
            expected = ""
            header = "<<[ abc ]>><<[ end ]>>"
            body = ""
            valid_marker(input, expected, header, body, remove_markers = True)

        def test_5(self):
            input = "// [[[ <<[ abc ]>><<[ end ]>> ]]][[[ end ]]]"
            expected = "// "
            header = "<<[ abc ]>><<[ end ]>>"
            body = ""
            valid_marker(input, expected, header, body, remove_markers = True)

        def test_6(self):
            input = "// [[[ <<[ abc ]>><<[ end ]>> ]]] <<[ abc ]>> afz <<[ end ]>> [[[ end ]]]"
            expected = "// afz"
            header0 = "<<[ abc ]>><<[ end ]>>"
            body0 = "<<[ abc ]>> afz <<[ end ]>>"
            header1 = "abc"
            body1 = "afz"
            valid_marker(input, expected, header0, body0, header1, body1, remove_markers = True)

        def test_7(self):
            input = "// [[[\n" \
                    "// // <<[ abc ]>>\n" \
                    "// // <<[ end ]>>\n" \
                    "// ]]]\n" \
                    " // <<[ abc ]>>\n" \
                    " hello\n" \
                    " // <<[ end ]>>\n" \
                    "// [[[ end ]]]"
            expected = " hello\n"
            header0 = " // <<[ abc ]>>\n" \
                      " // <<[ end ]>>"
            body0 = " // <<[ abc ]>>\n" \
                    " hello\n" \
                    " // <<[ end ]>>"
            header1 = "abc"
            body1 = " hello"
            valid_marker(input, expected, header0, body0, header1, body1, remove_markers = True)

        def test_8(self):
            input = " // [[[\n" \
                    " // // <<[ abc ]>>\n" \
                    " // // <<[ end ]>>\n" \
                    " // ]]]\n" \
                    " // <<[ abc ]>>\n" \
                    " // [[[ <<[ def ]>><<[ end ]>> ]]] <<[ def ]>> test <<[ end ]>> [[[ end ]]]\n" \
                    " // <<[ end ]>>\n" \
                    " // [[[ end ]]]"
            expected = " // test\n"
            header0 = " // <<[ abc ]>>\n" \
                      " // <<[ end ]>>"
            body0 = " // <<[ abc ]>>\n" \
                    " // [[[ <<[ def ]>><<[ end ]>> ]]] <<[ def ]>> test <<[ end ]>> [[[ end ]]]\n" \
                    " // <<[ end ]>>"
            header1 = "abc"
            body1 = " // [[[ <<[ def ]>><<[ end ]>> ]]] <<[ def ]>> test <<[ end ]>> [[[ end ]]]"
            header2 = "<<[ def ]>><<[ end ]>>"
            body2 = "<<[ def ]>> test <<[ end ]>>"
            header3 = "def"
            body3 = "test"
            valid_marker(input, expected, header0, body0, header1, body1, header2, body2, header3, body3, remove_markers = True)

        def test_9(self):
            input = "// [[[]]]\n" \
                    " // <<[ abc ]>>\n" \
                    " test\n" \
                    " // <<[ end ]>>\n" \
                    "// [[[ end ]]]\n" \
                    " // [[[\n" \
                    " // // <<[ abc ]>>\n" \
                    " // // <<[ end ]>>\n" \
                    " // ]]]\n" \
                    " // [[[ end ]]]\n"
            expected = " test\n"
            header0 = ""
            body0 = " // <<[ abc ]>>\n" \
                    " test\n" \
                    " // <<[ end ]>>"
            header1 = "abc"
            body1 = " test"
            valid_marker(input, expected, header0, body0, header1, body1, remove_markers = True)

        def test_10(self):  # tricky: exercises rstrip behavior on trailing blank comment lines
            input = "// [[[\n" \
                    "// def\n" \
                    "// // <<[ abc ]>>\n" \
                    "// // <<[ end ]>>\n" \
                    "// \n" \
                    "// \n" \
                    "// ]]]\n" \
                    "// <<[ abc ]>>\n" \
                    "123\n" \
                    "// <<[ end ]>>\n" \
                    "// [[[ end ]]]"
            expected = "def\n" \
                       "123\n" \
                       "\n"
            header0 = "def\n" \
                      "// <<[ abc ]>>\n" \
                      "// <<[ end ]>>\n" \
                      "\n"
            body0 = "// <<[ abc ]>>\n" \
                    "123\n" \
                    "// <<[ end ]>>"
            header1 = "abc"
            body1 = "123"
            valid_marker(input, expected, header0, body0, header1, body1, remove_markers = True)
# coding=utf-8
# *** WARNING: this file was generated by crd2pulumi. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***

import warnings

import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union

from ... import _utilities, _tables

__all__ = [
    'AppProjectSpecArgs',
    'AppProjectSpecClusterResourceBlacklistArgs',
    'AppProjectSpecClusterResourceWhitelistArgs',
    'AppProjectSpecDestinationsArgs',
    'AppProjectSpecNamespaceResourceBlacklistArgs',
    'AppProjectSpecNamespaceResourceWhitelistArgs',
    'AppProjectSpecOrphanedResourcesArgs',
    'AppProjectSpecOrphanedResourcesIgnoreArgs',
    'AppProjectSpecRolesArgs',
    'AppProjectSpecRolesJwtTokensArgs',
    'AppProjectSpecSignatureKeysArgs',
    'AppProjectSpecSyncWindowsArgs',
    'ApplicationOperationArgs',
    'ApplicationOperationInfoArgs',
    'ApplicationOperationInitiatedByArgs',
    'ApplicationOperationRetryArgs',
    'ApplicationOperationRetryBackoffArgs',
    'ApplicationOperationSyncArgs',
    'ApplicationOperationSyncResourcesArgs',
    'ApplicationOperationSyncSourceArgs',
    'ApplicationOperationSyncSourceDirectoryArgs',
    'ApplicationOperationSyncSourceDirectoryJsonnetArgs',
    'ApplicationOperationSyncSourceDirectoryJsonnetExtVarsArgs',
    'ApplicationOperationSyncSourceDirectoryJsonnetTlasArgs',
    'ApplicationOperationSyncSourceHelmArgs',
    'ApplicationOperationSyncSourceHelmFileParametersArgs',
    'ApplicationOperationSyncSourceHelmParametersArgs',
    'ApplicationOperationSyncSourceKsonnetArgs',
    'ApplicationOperationSyncSourceKsonnetParametersArgs',
    'ApplicationOperationSyncSourceKustomizeArgs',
    'ApplicationOperationSyncSourcePluginArgs',
    'ApplicationOperationSyncSourcePluginEnvArgs',
    'ApplicationOperationSyncSyncStrategyArgs',
    'ApplicationOperationSyncSyncStrategyApplyArgs',
    'ApplicationOperationSyncSyncStrategyHookArgs',
    'ApplicationSpecArgs',
    'ApplicationSpecDestinationArgs',
    'ApplicationSpecIgnoreDifferencesArgs',
    'ApplicationSpecInfoArgs',
    'ApplicationSpecSourceArgs',
    'ApplicationSpecSourceDirectoryArgs',
    'ApplicationSpecSourceDirectoryJsonnetArgs',
    'ApplicationSpecSourceDirectoryJsonnetExtVarsArgs',
    'ApplicationSpecSourceDirectoryJsonnetTlasArgs',
    'ApplicationSpecSourceHelmArgs',
    'ApplicationSpecSourceHelmFileParametersArgs',
    'ApplicationSpecSourceHelmParametersArgs',
    'ApplicationSpecSourceKsonnetArgs',
    'ApplicationSpecSourceKsonnetParametersArgs',
    'ApplicationSpecSourceKustomizeArgs',
    'ApplicationSpecSourcePluginArgs',
    'ApplicationSpecSourcePluginEnvArgs',
    'ApplicationSpecSyncPolicyArgs',
    'ApplicationSpecSyncPolicyAutomatedArgs',
    'ApplicationSpecSyncPolicyRetryArgs',
    'ApplicationSpecSyncPolicyRetryBackoffArgs',
    'ApplicationStatusArgs',
    'ApplicationStatusConditionsArgs',
    'ApplicationStatusHealthArgs',
    'ApplicationStatusHistoryArgs',
    'ApplicationStatusHistorySourceArgs',
    'ApplicationStatusHistorySourceDirectoryArgs',
    'ApplicationStatusHistorySourceDirectoryJsonnetArgs',
    'ApplicationStatusHistorySourceDirectoryJsonnetExtVarsArgs',
    'ApplicationStatusHistorySourceDirectoryJsonnetTlasArgs',
    'ApplicationStatusHistorySourceHelmArgs',
    'ApplicationStatusHistorySourceHelmFileParametersArgs',
    'ApplicationStatusHistorySourceHelmParametersArgs',
    'ApplicationStatusHistorySourceKsonnetArgs',
    'ApplicationStatusHistorySourceKsonnetParametersArgs',
    'ApplicationStatusHistorySourceKustomizeArgs',
    'ApplicationStatusHistorySourcePluginArgs',
    'ApplicationStatusHistorySourcePluginEnvArgs',
    'ApplicationStatusOperationStateArgs',
    'ApplicationStatusOperationStateOperationArgs',
    'ApplicationStatusOperationStateOperationInfoArgs',
    'ApplicationStatusOperationStateOperationInitiatedByArgs',
    'ApplicationStatusOperationStateOperationRetryArgs',
    'ApplicationStatusOperationStateOperationRetryBackoffArgs',
    'ApplicationStatusOperationStateOperationSyncArgs',
    'ApplicationStatusOperationStateOperationSyncResourcesArgs',
    'ApplicationStatusOperationStateOperationSyncSourceArgs',
    'ApplicationStatusOperationStateOperationSyncSourceDirectoryArgs',
    'ApplicationStatusOperationStateOperationSyncSourceDirectoryJsonnetArgs',
    'ApplicationStatusOperationStateOperationSyncSourceDirectoryJsonnetExtVarsArgs',
    'ApplicationStatusOperationStateOperationSyncSourceDirectoryJsonnetTlasArgs',
    'ApplicationStatusOperationStateOperationSyncSourceHelmArgs',
    'ApplicationStatusOperationStateOperationSyncSourceHelmFileParametersArgs',
    'ApplicationStatusOperationStateOperationSyncSourceHelmParametersArgs',
    'ApplicationStatusOperationStateOperationSyncSourceKsonnetArgs',
    'ApplicationStatusOperationStateOperationSyncSourceKsonnetParametersArgs',
    'ApplicationStatusOperationStateOperationSyncSourceKustomizeArgs',
    'ApplicationStatusOperationStateOperationSyncSourcePluginArgs',
    'ApplicationStatusOperationStateOperationSyncSourcePluginEnvArgs',
    'ApplicationStatusOperationStateOperationSyncSyncStrategyArgs',
    'ApplicationStatusOperationStateOperationSyncSyncStrategyApplyArgs',
    'ApplicationStatusOperationStateOperationSyncSyncStrategyHookArgs',
    'ApplicationStatusOperationStateSyncResultArgs',
    'ApplicationStatusOperationStateSyncResultResourcesArgs',
    'ApplicationStatusOperationStateSyncResultSourceArgs',
    'ApplicationStatusOperationStateSyncResultSourceDirectoryArgs',
    'ApplicationStatusOperationStateSyncResultSourceDirectoryJsonnetArgs',
    'ApplicationStatusOperationStateSyncResultSourceDirectoryJsonnetExtVarsArgs',
    'ApplicationStatusOperationStateSyncResultSourceDirectoryJsonnetTlasArgs',
    'ApplicationStatusOperationStateSyncResultSourceHelmArgs',
    'ApplicationStatusOperationStateSyncResultSourceHelmFileParametersArgs',
    'ApplicationStatusOperationStateSyncResultSourceHelmParametersArgs',
    'ApplicationStatusOperationStateSyncResultSourceKsonnetArgs',
    'ApplicationStatusOperationStateSyncResultSourceKsonnetParametersArgs',
    'ApplicationStatusOperationStateSyncResultSourceKustomizeArgs',
    'ApplicationStatusOperationStateSyncResultSourcePluginArgs',
    'ApplicationStatusOperationStateSyncResultSourcePluginEnvArgs',
    'ApplicationStatusResourcesArgs',
    'ApplicationStatusResourcesHealthArgs',
    'ApplicationStatusSummaryArgs',
    'ApplicationStatusSyncArgs',
    'ApplicationStatusSyncComparedToArgs',
    'ApplicationStatusSyncComparedToDestinationArgs',
    'ApplicationStatusSyncComparedToSourceArgs',
    'ApplicationStatusSyncComparedToSourceDirectoryArgs',
    'ApplicationStatusSyncComparedToSourceDirectoryJsonnetArgs',
    'ApplicationStatusSyncComparedToSourceDirectoryJsonnetExtVarsArgs',
    'ApplicationStatusSyncComparedToSourceDirectoryJsonnetTlasArgs',
    'ApplicationStatusSyncComparedToSourceHelmArgs',
    'ApplicationStatusSyncComparedToSourceHelmFileParametersArgs',
    'ApplicationStatusSyncComparedToSourceHelmParametersArgs',
    'ApplicationStatusSyncComparedToSourceKsonnetArgs',
    'ApplicationStatusSyncComparedToSourceKsonnetParametersArgs',
    'ApplicationStatusSyncComparedToSourceKustomizeArgs',
    'ApplicationStatusSyncComparedToSourcePluginArgs',
    'ApplicationStatusSyncComparedToSourcePluginEnvArgs',
    'ArgoCDExportSpecArgs',
    'ArgoCDExportSpecStorageArgs',
    'ArgoCDExportSpecStoragePvcArgs',
    'ArgoCDExportSpecStoragePvcDataSourceArgs',
    'ArgoCDExportSpecStoragePvcResourcesArgs',
    'ArgoCDExportSpecStoragePvcResourcesLimitsArgs',
    'ArgoCDExportSpecStoragePvcResourcesRequestsArgs',
    'ArgoCDExportSpecStoragePvcSelectorArgs',
    'ArgoCDExportSpecStoragePvcSelectorMatchExpressionsArgs',
    'ArgoCDExportStatusArgs',
    'ArgoCDSpecArgs',
    'ArgoCDSpecControllerArgs',
    'ArgoCDSpecControllerProcessorsArgs',
    'ArgoCDSpecControllerResourcesArgs',
    'ArgoCDSpecControllerResourcesLimitsArgs',
    'ArgoCDSpecControllerResourcesRequestsArgs',
    'ArgoCDSpecDexArgs',
    'ArgoCDSpecDexResourcesArgs',
    'ArgoCDSpecDexResourcesLimitsArgs',
    'ArgoCDSpecDexResourcesRequestsArgs',
    'ArgoCDSpecGrafanaArgs',
    'ArgoCDSpecGrafanaIngressArgs',
    'ArgoCDSpecGrafanaIngressTlsArgs',
    'ArgoCDSpecGrafanaResourcesArgs',
    'ArgoCDSpecGrafanaResourcesLimitsArgs',
    'ArgoCDSpecGrafanaResourcesRequestsArgs',
    'ArgoCDSpecGrafanaRouteArgs',
    'ArgoCDSpecGrafanaRouteTlsArgs',
    'ArgoCDSpecHaArgs',
    'ArgoCDSpecImportArgs',
    'ArgoCDSpecInitialSSHKnownHostsArgs',
    'ArgoCDSpecPrometheusArgs',
    'ArgoCDSpecPrometheusIngressArgs',
    'ArgoCDSpecPrometheusIngressTlsArgs',
    'ArgoCDSpecPrometheusRouteArgs',
    'ArgoCDSpecPrometheusRouteTlsArgs',
    'ArgoCDSpecRbacArgs',
    'ArgoCDSpecRedisArgs',
    'ArgoCDSpecRedisResourcesArgs',
    'ArgoCDSpecRedisResourcesLimitsArgs',
    'ArgoCDSpecRedisResourcesRequestsArgs',
    'ArgoCDSpecRepoArgs',
    'ArgoCDSpecRepoResourcesArgs',
    'ArgoCDSpecRepoResourcesLimitsArgs',
    'ArgoCDSpecRepoResourcesRequestsArgs',
    'ArgoCDSpecServerArgs',
    'ArgoCDSpecServerAutoscaleArgs',
    'ArgoCDSpecServerAutoscaleHpaArgs',
    'ArgoCDSpecServerAutoscaleHpaScaleTargetRefArgs',
    'ArgoCDSpecServerGrpcArgs',
    'ArgoCDSpecServerGrpcIngressArgs',
    'ArgoCDSpecServerGrpcIngressTlsArgs',
    'ArgoCDSpecServerIngressArgs',
    'ArgoCDSpecServerIngressTlsArgs',
    'ArgoCDSpecServerResourcesArgs',
    'ArgoCDSpecServerResourcesLimitsArgs',
    'ArgoCDSpecServerResourcesRequestsArgs',
    'ArgoCDSpecServerRouteArgs',
    'ArgoCDSpecServerRouteTlsArgs',
    'ArgoCDSpecServerServiceArgs',
    'ArgoCDSpecTlsArgs',
    'ArgoCDSpecTlsCaArgs',
    'ArgoCDStatusArgs',
]


@pulumi.input_type
class AppProjectSpecArgs:
    def __init__(__self__, *,
                 cluster_resource_blacklist: Optional[pulumi.Input[Sequence[pulumi.Input['AppProjectSpecClusterResourceBlacklistArgs']]]] = None,
                 cluster_resource_whitelist: Optional[pulumi.Input[Sequence[pulumi.Input['AppProjectSpecClusterResourceWhitelistArgs']]]] = None,
                 description: Optional[pulumi.Input[str]] = None,
                 destinations: Optional[pulumi.Input[Sequence[pulumi.Input['AppProjectSpecDestinationsArgs']]]] = None,
                 namespace_resource_blacklist: Optional[pulumi.Input[Sequence[pulumi.Input['AppProjectSpecNamespaceResourceBlacklistArgs']]]] = None,
                 namespace_resource_whitelist: Optional[pulumi.Input[Sequence[pulumi.Input['AppProjectSpecNamespaceResourceWhitelistArgs']]]] = None,
                 orphaned_resources: Optional[pulumi.Input['AppProjectSpecOrphanedResourcesArgs']] = None,
                 roles: Optional[pulumi.Input[Sequence[pulumi.Input['AppProjectSpecRolesArgs']]]] = None,
                 signature_keys: Optional[pulumi.Input[Sequence[pulumi.Input['AppProjectSpecSignatureKeysArgs']]]] = None,
                 source_repos: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
                 sync_windows: Optional[pulumi.Input[Sequence[pulumi.Input['AppProjectSpecSyncWindowsArgs']]]] = None):
        """
        AppProjectSpec is the specification of an AppProject
        :param pulumi.Input[Sequence[pulumi.Input['AppProjectSpecClusterResourceBlacklistArgs']]] cluster_resource_blacklist: ClusterResourceBlacklist contains list of blacklisted cluster level resources
        :param pulumi.Input[Sequence[pulumi.Input['AppProjectSpecClusterResourceWhitelistArgs']]] cluster_resource_whitelist: ClusterResourceWhitelist contains list of whitelisted cluster level resources
        :param pulumi.Input[str] description: Description contains optional project description
        :param pulumi.Input[Sequence[pulumi.Input['AppProjectSpecDestinationsArgs']]] destinations: Destinations contains list of destinations available for deployment
        :param pulumi.Input[Sequence[pulumi.Input['AppProjectSpecNamespaceResourceBlacklistArgs']]] namespace_resource_blacklist: NamespaceResourceBlacklist contains list of blacklisted namespace level resources
        :param pulumi.Input[Sequence[pulumi.Input['AppProjectSpecNamespaceResourceWhitelistArgs']]] namespace_resource_whitelist: NamespaceResourceWhitelist contains list of whitelisted namespace level resources
        :param pulumi.Input['AppProjectSpecOrphanedResourcesArgs'] orphaned_resources: OrphanedResources specifies if controller should monitor orphaned resources of apps in this project
        :param pulumi.Input[Sequence[pulumi.Input['AppProjectSpecRolesArgs']]] roles: Roles are user defined RBAC roles associated with this project
        :param pulumi.Input[Sequence[pulumi.Input['AppProjectSpecSignatureKeysArgs']]] signature_keys: List of PGP key IDs that commits to be synced to must be signed with
        :param pulumi.Input[Sequence[pulumi.Input[str]]] source_repos: SourceRepos contains list of repository URLs which can be used for deployment
        :param pulumi.Input[Sequence[pulumi.Input['AppProjectSpecSyncWindowsArgs']]] sync_windows: SyncWindows controls when syncs can be run for apps in this project
        """
        if cluster_resource_blacklist is not None:
            pulumi.set(__self__, "cluster_resource_blacklist", cluster_resource_blacklist)
        if cluster_resource_whitelist is not None:
            pulumi.set(__self__, "cluster_resource_whitelist", cluster_resource_whitelist)
        if description is not None:
            pulumi.set(__self__, "description", description)
        if destinations is not None:
            pulumi.set(__self__, "destinations", destinations)
        if namespace_resource_blacklist is not None:
            pulumi.set(__self__, "namespace_resource_blacklist", namespace_resource_blacklist)
        if namespace_resource_whitelist is not None:
            pulumi.set(__self__, "namespace_resource_whitelist", namespace_resource_whitelist)
        if orphaned_resources is not None:
            pulumi.set(__self__, "orphaned_resources", orphaned_resources)
        if roles is not None:
            pulumi.set(__self__, "roles", roles)
        if signature_keys is not None:
            pulumi.set(__self__, "signature_keys", signature_keys)
        if source_repos is not None:
            pulumi.set(__self__, "source_repos", source_repos)
        if sync_windows is not None:
            pulumi.set(__self__, "sync_windows", sync_windows)

    @property
    @pulumi.getter(name="clusterResourceBlacklist")
    def cluster_resource_blacklist(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['AppProjectSpecClusterResourceBlacklistArgs']]]]:
        """
        ClusterResourceBlacklist contains list of blacklisted cluster level resources
        """
        return pulumi.get(self, "cluster_resource_blacklist")

    @cluster_resource_blacklist.setter
    def cluster_resource_blacklist(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['AppProjectSpecClusterResourceBlacklistArgs']]]]):
        pulumi.set(self, "cluster_resource_blacklist", value)

    @property
    @pulumi.getter(name="clusterResourceWhitelist")
    def cluster_resource_whitelist(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['AppProjectSpecClusterResourceWhitelistArgs']]]]:
        """
        ClusterResourceWhitelist contains list of whitelisted cluster level resources
        """
        return pulumi.get(self, "cluster_resource_whitelist")

    @cluster_resource_whitelist.setter
    def cluster_resource_whitelist(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['AppProjectSpecClusterResourceWhitelistArgs']]]]):
        pulumi.set(self, "cluster_resource_whitelist", value)

    @property
    @pulumi.getter
    def description(self) -> Optional[pulumi.Input[str]]:
        """
        Description contains optional project description
        """
        return pulumi.get(self, "description")

    @description.setter
    def description(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "description", value)

    @property
    @pulumi.getter
    def destinations(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['AppProjectSpecDestinationsArgs']]]]:
        """
        Destinations contains list of destinations available for deployment
        """
        return pulumi.get(self, "destinations")

    @destinations.setter
    def destinations(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['AppProjectSpecDestinationsArgs']]]]):
        pulumi.set(self, "destinations", value)

    @property
    @pulumi.getter(name="namespaceResourceBlacklist")
    def namespace_resource_blacklist(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['AppProjectSpecNamespaceResourceBlacklistArgs']]]]:
        """
        NamespaceResourceBlacklist contains list of blacklisted namespace level resources
        """
        return pulumi.get(self, "namespace_resource_blacklist")

    @namespace_resource_blacklist.setter
    def namespace_resource_blacklist(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['AppProjectSpecNamespaceResourceBlacklistArgs']]]]):
        pulumi.set(self, "namespace_resource_blacklist", value)

    @property
    @pulumi.getter(name="namespaceResourceWhitelist")
    def namespace_resource_whitelist(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['AppProjectSpecNamespaceResourceWhitelistArgs']]]]:
        """
        NamespaceResourceWhitelist contains list of whitelisted namespace level resources
        """
        return pulumi.get(self, "namespace_resource_whitelist")

    @namespace_resource_whitelist.setter
    def namespace_resource_whitelist(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['AppProjectSpecNamespaceResourceWhitelistArgs']]]]):
        pulumi.set(self, "namespace_resource_whitelist", value)

    @property
    @pulumi.getter(name="orphanedResources")
    def orphaned_resources(self) -> Optional[pulumi.Input['AppProjectSpecOrphanedResourcesArgs']]:
        """
        OrphanedResources specifies if controller should monitor orphaned resources of apps in this project
        """
        return pulumi.get(self, "orphaned_resources")

    @orphaned_resources.setter
    def orphaned_resources(self, value: Optional[pulumi.Input['AppProjectSpecOrphanedResourcesArgs']]):
        pulumi.set(self, "orphaned_resources", value)

    @property
    @pulumi.getter
    def roles(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['AppProjectSpecRolesArgs']]]]:
        """
        Roles are user defined RBAC roles associated with this project
        """
        return pulumi.get(self, "roles")

    @roles.setter
    def roles(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['AppProjectSpecRolesArgs']]]]):
        pulumi.set(self, "roles", value)

    @property
    @pulumi.getter(name="signatureKeys")
    def signature_keys(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['AppProjectSpecSignatureKeysArgs']]]]:
        """
        List of PGP key IDs that commits to be synced to must be signed with
        """
        return pulumi.get(self, "signature_keys")

    @signature_keys.setter
    def signature_keys(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['AppProjectSpecSignatureKeysArgs']]]]):
        pulumi.set(self, "signature_keys", value)

    @property
    @pulumi.getter(name="sourceRepos")
    def source_repos(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
        """
        SourceRepos contains list of repository URLs which can be used for deployment
        """
        return pulumi.get(self, "source_repos")

    @source_repos.setter
    def source_repos(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
        pulumi.set(self, "source_repos", value)

    @property
    @pulumi.getter(name="syncWindows")
    def sync_windows(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['AppProjectSpecSyncWindowsArgs']]]]:
        """
        SyncWindows controls when syncs can be run for apps in this project
        """
        return pulumi.get(self, "sync_windows")

    @sync_windows.setter
    def sync_windows(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['AppProjectSpecSyncWindowsArgs']]]]):
        pulumi.set(self, "sync_windows", value)
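
# Illustrative usage (not part of the generated output; repo URL, namespace, and
# server below are hypothetical). An AppProjectSpecArgs that limits a project to
# one source repository and one destination might be built like this:
#
#     spec = AppProjectSpecArgs(
#         description="team-a project",
#         source_repos=["https://github.com/example/team-a.git"],
#         destinations=[AppProjectSpecDestinationsArgs(
#             namespace="team-a",
#             server="https://kubernetes.default.svc",
#         )],
#     )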


@pulumi.input_type
class AppProjectSpecClusterResourceBlacklistArgs:
    def __init__(__self__, *,
                 group: pulumi.Input[str],
                 kind: pulumi.Input[str]):
        """
        GroupKind specifies a Group and a Kind, but does not force a version. This is useful for identifying concepts during lookup stages without having partially valid types
        """
        pulumi.set(__self__, "group", group)
        pulumi.set(__self__, "kind", kind)

    @property
    @pulumi.getter
    def group(self) -> pulumi.Input[str]:
        return pulumi.get(self, "group")

    @group.setter
    def group(self, value: pulumi.Input[str]):
        pulumi.set(self, "group", value)

    @property
    @pulumi.getter
    def kind(self) -> pulumi.Input[str]:
        return pulumi.get(self, "kind")

    @kind.setter
    def kind(self, value: pulumi.Input[str]):
        pulumi.set(self, "kind", value)


@pulumi.input_type
class AppProjectSpecClusterResourceWhitelistArgs:
    def __init__(__self__, *,
                 group: pulumi.Input[str],
                 kind: pulumi.Input[str]):
        """
        GroupKind specifies a Group and a Kind, but does not force a version. This is useful for identifying concepts during lookup stages without having partially valid types
        """
        pulumi.set(__self__, "group", group)
        pulumi.set(__self__, "kind", kind)

    @property
    @pulumi.getter
    def group(self) -> pulumi.Input[str]:
        return pulumi.get(self, "group")

    @group.setter
    def group(self, value: pulumi.Input[str]):
        pulumi.set(self, "group", value)

    @property
    @pulumi.getter
    def kind(self) -> pulumi.Input[str]:
        return pulumi.get(self, "kind")

    @kind.setter
    def kind(self, value: pulumi.Input[str]):
        pulumi.set(self, "kind", value)


@pulumi.input_type
class AppProjectSpecDestinationsArgs:
    def __init__(__self__, *,
                 name: Optional[pulumi.Input[str]] = None,
                 namespace: Optional[pulumi.Input[str]] = None,
                 server: Optional[pulumi.Input[str]] = None):
        """
        ApplicationDestination contains deployment destination information
        :param pulumi.Input[str] name: Name of the destination cluster which can be used instead of server (url) field
        :param pulumi.Input[str] namespace: Namespace overrides the environment namespace value in the ksonnet app.yaml
        :param pulumi.Input[str] server: Server overrides the environment server value in the ksonnet app.yaml
        """
        if name is not None:
            pulumi.set(__self__, "name", name)
        if namespace is not None:
            pulumi.set(__self__, "namespace", namespace)
        if server is not None:
            pulumi.set(__self__, "server", server)

    @property
    @pulumi.getter
    def name(self) -> Optional[pulumi.Input[str]]:
        """
        Name of the destination cluster which can be used instead of server (url) field
        """
        return pulumi.get(self, "name")

    @name.setter
    def name(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "name", value)

    @property
    @pulumi.getter
    def namespace(self) -> Optional[pulumi.Input[str]]:
        """
        Namespace overrides the environment namespace value in the ksonnet app.yaml
        """
        return pulumi.get(self, "namespace")

    @namespace.setter
    def namespace(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "namespace", value)

    @property
    @pulumi.getter
    def server(self) -> Optional[pulumi.Input[str]]:
        """
        Server overrides the environment server value in the ksonnet app.yaml
        """
        return pulumi.get(self, "server")

    @server.setter
    def server(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "server", value)


@pulumi.input_type
class AppProjectSpecNamespaceResourceBlacklistArgs:
    def __init__(__self__, *,
                 group: pulumi.Input[str],
                 kind: pulumi.Input[str]):
        """
        GroupKind specifies a Group and a Kind, but does not force a version. This is useful for identifying concepts during lookup stages without having partially valid types
        """
        pulumi.set(__self__, "group", group)
        pulumi.set(__self__, "kind", kind)

    @property
    @pulumi.getter
    def group(self) -> pulumi.Input[str]:
        return pulumi.get(self, "group")

    @group.setter
    def group(self, value: pulumi.Input[str]):
        pulumi.set(self, "group", value)

    @property
    @pulumi.getter
    def kind(self) -> pulumi.Input[str]:
        return pulumi.get(self, "kind")

    @kind.setter
    def kind(self, value: pulumi.Input[str]):
        pulumi.set(self, "kind", value)


@pulumi.input_type
class AppProjectSpecNamespaceResourceWhitelistArgs:
    def __init__(__self__, *,
                 group: pulumi.Input[str],
                 kind: pulumi.Input[str]):
        """
        GroupKind specifies a Group and a Kind, but does not force a version. This is useful for identifying concepts during lookup stages without having partially valid types
        """
        pulumi.set(__self__, "group", group)
        pulumi.set(__self__, "kind", kind)

    @property
    @pulumi.getter
    def group(self) -> pulumi.Input[str]:
        return pulumi.get(self, "group")

    @group.setter
    def group(self, value: pulumi.Input[str]):
        pulumi.set(self, "group", value)

    @property
    @pulumi.getter
    def kind(self) -> pulumi.Input[str]:
        return pulumi.get(self, "kind")

    @kind.setter
    def kind(self, value: pulumi.Input[str]):
        pulumi.set(self, "kind", value)


@pulumi.input_type
class AppProjectSpecOrphanedResourcesArgs:
    def __init__(__self__, *,
                 ignore: Optional[pulumi.Input[Sequence[pulumi.Input['AppProjectSpecOrphanedResourcesIgnoreArgs']]]] = None,
                 warn: Optional[pulumi.Input[bool]] = None):
"""
OrphanedResources specifies whether the controller should monitor orphaned resources of apps in this project
:param pulumi.Input[bool] warn: Warn indicates if warning condition should be created for apps which have orphaned resources
"""
if ignore is not None:
pulumi.set(__self__, "ignore", ignore)
if warn is not None:
pulumi.set(__self__, "warn", warn)
@property
@pulumi.getter
def ignore(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['AppProjectSpecOrphanedResourcesIgnoreArgs']]]]:
return pulumi.get(self, "ignore")
@ignore.setter
def ignore(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['AppProjectSpecOrphanedResourcesIgnoreArgs']]]]):
pulumi.set(self, "ignore", value)
@property
@pulumi.getter
def warn(self) -> Optional[pulumi.Input[bool]]:
"""
Warn indicates if warning condition should be created for apps which have orphaned resources
"""
return pulumi.get(self, "warn")
@warn.setter
def warn(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "warn", value)
@pulumi.input_type
class AppProjectSpecOrphanedResourcesIgnoreArgs:
def __init__(__self__, *,
group: Optional[pulumi.Input[str]] = None,
kind: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None):
if group is not None:
pulumi.set(__self__, "group", group)
if kind is not None:
pulumi.set(__self__, "kind", kind)
if name is not None:
pulumi.set(__self__, "name", name)
@property
@pulumi.getter
def group(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "group")
@group.setter
def group(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "group", value)
@property
@pulumi.getter
def kind(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "kind")
@kind.setter
def kind(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "kind", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@pulumi.input_type
class AppProjectSpecRolesArgs:
def __init__(__self__, *,
name: pulumi.Input[str],
description: Optional[pulumi.Input[str]] = None,
groups: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
jwt_tokens: Optional[pulumi.Input[Sequence[pulumi.Input['AppProjectSpecRolesJwtTokensArgs']]]] = None,
policies: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None):
"""
ProjectRole represents a role that has access to a project
:param pulumi.Input[str] name: Name is a name for this role
:param pulumi.Input[str] description: Description is a description of the role
:param pulumi.Input[Sequence[pulumi.Input[str]]] groups: Groups are a list of OIDC group claims bound to this role
:param pulumi.Input[Sequence[pulumi.Input['AppProjectSpecRolesJwtTokensArgs']]] jwt_tokens: JWTTokens are a list of generated JWT tokens bound to this role
:param pulumi.Input[Sequence[pulumi.Input[str]]] policies: Policies stores a list of Casbin-formatted strings that define access policies for the role in the project
"""
pulumi.set(__self__, "name", name)
if description is not None:
pulumi.set(__self__, "description", description)
if groups is not None:
pulumi.set(__self__, "groups", groups)
if jwt_tokens is not None:
pulumi.set(__self__, "jwt_tokens", jwt_tokens)
if policies is not None:
pulumi.set(__self__, "policies", policies)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
"""
Name is a name for this role
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Description is a description of the role
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def groups(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
Groups are a list of OIDC group claims bound to this role
"""
return pulumi.get(self, "groups")
@groups.setter
def groups(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "groups", value)
@property
@pulumi.getter(name="jwtTokens")
def jwt_tokens(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['AppProjectSpecRolesJwtTokensArgs']]]]:
"""
JWTTokens are a list of generated JWT tokens bound to this role
"""
return pulumi.get(self, "jwt_tokens")
@jwt_tokens.setter
def jwt_tokens(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['AppProjectSpecRolesJwtTokensArgs']]]]):
pulumi.set(self, "jwt_tokens", value)
@property
@pulumi.getter
def policies(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
Policies stores a list of Casbin-formatted strings that define access policies for the role in the project
"""
return pulumi.get(self, "policies")
@policies.setter
def policies(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "policies", value)
@pulumi.input_type
class AppProjectSpecRolesJwtTokensArgs:
def __init__(__self__, *,
iat: pulumi.Input[int],
exp: Optional[pulumi.Input[int]] = None,
id: Optional[pulumi.Input[str]] = None):
"""
JWTToken holds the issuedAt and expiresAt values of a token
:param pulumi.Input[int] iat: IssuedAt is the time the token was issued, as Unix time
:param pulumi.Input[int] exp: ExpiresAt is the time the token expires, as Unix time
"""
pulumi.set(__self__, "iat", iat)
if exp is not None:
pulumi.set(__self__, "exp", exp)
if id is not None:
pulumi.set(__self__, "id", id)
@property
@pulumi.getter
def iat(self) -> pulumi.Input[int]:
return pulumi.get(self, "iat")
@iat.setter
def iat(self, value: pulumi.Input[int]):
pulumi.set(self, "iat", value)
@property
@pulumi.getter
def exp(self) -> Optional[pulumi.Input[int]]:
return pulumi.get(self, "exp")
@exp.setter
def exp(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "exp", value)
@property
@pulumi.getter
def id(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "id")
@id.setter
def id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "id", value)
@pulumi.input_type
class AppProjectSpecSignatureKeysArgs:
def __init__(__self__, *,
key_id: pulumi.Input[str]):
"""
SignatureKey is the specification of a key used to verify commit signatures
:param pulumi.Input[str] key_id: The ID of the key in hexadecimal notation
"""
pulumi.set(__self__, "key_id", key_id)
@property
@pulumi.getter(name="keyID")
def key_id(self) -> pulumi.Input[str]:
"""
The ID of the key in hexadecimal notation
"""
return pulumi.get(self, "key_id")
@key_id.setter
def key_id(self, value: pulumi.Input[str]):
pulumi.set(self, "key_id", value)
@pulumi.input_type
class AppProjectSpecSyncWindowsArgs:
def __init__(__self__, *,
applications: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
clusters: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
duration: Optional[pulumi.Input[str]] = None,
kind: Optional[pulumi.Input[str]] = None,
manual_sync: Optional[pulumi.Input[bool]] = None,
namespaces: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
schedule: Optional[pulumi.Input[str]] = None):
"""
SyncWindow contains the kind, time, duration and attributes that are used to assign the syncWindows to apps
:param pulumi.Input[Sequence[pulumi.Input[str]]] applications: Applications contains a list of applications that the window will apply to
:param pulumi.Input[Sequence[pulumi.Input[str]]] clusters: Clusters contains a list of clusters that the window will apply to
:param pulumi.Input[str] duration: Duration is the amount of time the sync window will be open
:param pulumi.Input[str] kind: Kind defines if the window allows or blocks syncs
:param pulumi.Input[bool] manual_sync: ManualSync enables manual syncs when they would otherwise be blocked
:param pulumi.Input[Sequence[pulumi.Input[str]]] namespaces: Namespaces contains a list of namespaces that the window will apply to
:param pulumi.Input[str] schedule: Schedule is the time the window will begin, specified in cron format
"""
if applications is not None:
pulumi.set(__self__, "applications", applications)
if clusters is not None:
pulumi.set(__self__, "clusters", clusters)
if duration is not None:
pulumi.set(__self__, "duration", duration)
if kind is not None:
pulumi.set(__self__, "kind", kind)
if manual_sync is not None:
pulumi.set(__self__, "manual_sync", manual_sync)
if namespaces is not None:
pulumi.set(__self__, "namespaces", namespaces)
if schedule is not None:
pulumi.set(__self__, "schedule", schedule)
@property
@pulumi.getter
def applications(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
Applications contains a list of applications that the window will apply to
"""
return pulumi.get(self, "applications")
@applications.setter
def applications(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "applications", value)
@property
@pulumi.getter
def clusters(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
Clusters contains a list of clusters that the window will apply to
"""
return pulumi.get(self, "clusters")
@clusters.setter
def clusters(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "clusters", value)
@property
@pulumi.getter
def duration(self) -> Optional[pulumi.Input[str]]:
"""
Duration is the amount of time the sync window will be open
"""
return pulumi.get(self, "duration")
@duration.setter
def duration(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "duration", value)
@property
@pulumi.getter
def kind(self) -> Optional[pulumi.Input[str]]:
"""
Kind defines if the window allows or blocks syncs
"""
return pulumi.get(self, "kind")
@kind.setter
def kind(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "kind", value)
@property
@pulumi.getter(name="manualSync")
def manual_sync(self) -> Optional[pulumi.Input[bool]]:
"""
ManualSync enables manual syncs when they would otherwise be blocked
"""
return pulumi.get(self, "manual_sync")
@manual_sync.setter
def manual_sync(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "manual_sync", value)
@property
@pulumi.getter
def namespaces(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
Namespaces contains a list of namespaces that the window will apply to
"""
return pulumi.get(self, "namespaces")
@namespaces.setter
def namespaces(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "namespaces", value)
@property
@pulumi.getter
def schedule(self) -> Optional[pulumi.Input[str]]:
"""
Schedule is the time the window will begin, specified in cron format
"""
return pulumi.get(self, "schedule")
@schedule.setter
def schedule(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "schedule", value)
@pulumi.input_type
class ApplicationOperationArgs:
def __init__(__self__, *,
info: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationOperationInfoArgs']]]] = None,
initiated_by: Optional[pulumi.Input['ApplicationOperationInitiatedByArgs']] = None,
retry: Optional[pulumi.Input['ApplicationOperationRetryArgs']] = None,
sync: Optional[pulumi.Input['ApplicationOperationSyncArgs']] = None):
"""
Operation contains requested operation parameters.
:param pulumi.Input['ApplicationOperationInitiatedByArgs'] initiated_by: OperationInitiator holds information about the operation initiator
:param pulumi.Input['ApplicationOperationRetryArgs'] retry: Retry controls failed sync retry behavior
:param pulumi.Input['ApplicationOperationSyncArgs'] sync: SyncOperation contains sync operation details.
"""
if info is not None:
pulumi.set(__self__, "info", info)
if initiated_by is not None:
pulumi.set(__self__, "initiated_by", initiated_by)
if retry is not None:
pulumi.set(__self__, "retry", retry)
if sync is not None:
pulumi.set(__self__, "sync", sync)
@property
@pulumi.getter
def info(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationOperationInfoArgs']]]]:
return pulumi.get(self, "info")
@info.setter
def info(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationOperationInfoArgs']]]]):
pulumi.set(self, "info", value)
@property
@pulumi.getter(name="initiatedBy")
def initiated_by(self) -> Optional[pulumi.Input['ApplicationOperationInitiatedByArgs']]:
"""
OperationInitiator holds information about the operation initiator
"""
return pulumi.get(self, "initiated_by")
@initiated_by.setter
def initiated_by(self, value: Optional[pulumi.Input['ApplicationOperationInitiatedByArgs']]):
pulumi.set(self, "initiated_by", value)
@property
@pulumi.getter
def retry(self) -> Optional[pulumi.Input['ApplicationOperationRetryArgs']]:
"""
Retry controls failed sync retry behavior
"""
return pulumi.get(self, "retry")
@retry.setter
def retry(self, value: Optional[pulumi.Input['ApplicationOperationRetryArgs']]):
pulumi.set(self, "retry", value)
@property
@pulumi.getter
def sync(self) -> Optional[pulumi.Input['ApplicationOperationSyncArgs']]:
"""
SyncOperation contains sync operation details.
"""
return pulumi.get(self, "sync")
@sync.setter
def sync(self, value: Optional[pulumi.Input['ApplicationOperationSyncArgs']]):
pulumi.set(self, "sync", value)
@pulumi.input_type
class ApplicationOperationInfoArgs:
def __init__(__self__, *,
name: pulumi.Input[str],
value: pulumi.Input[str]):
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class ApplicationOperationInitiatedByArgs:
def __init__(__self__, *,
automated: Optional[pulumi.Input[bool]] = None,
username: Optional[pulumi.Input[str]] = None):
"""
OperationInitiator holds information about the operation initiator
:param pulumi.Input[bool] automated: Automated is set to true if the operation was initiated automatically by the application controller.
:param pulumi.Input[str] username: Username is the name of the user who started the operation.
"""
if automated is not None:
pulumi.set(__self__, "automated", automated)
if username is not None:
pulumi.set(__self__, "username", username)
@property
@pulumi.getter
def automated(self) -> Optional[pulumi.Input[bool]]:
"""
Automated is set to true if the operation was initiated automatically by the application controller.
"""
return pulumi.get(self, "automated")
@automated.setter
def automated(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "automated", value)
@property
@pulumi.getter
def username(self) -> Optional[pulumi.Input[str]]:
"""
Name of the user who started the operation.
"""
return pulumi.get(self, "username")
@username.setter
def username(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "username", value)
@pulumi.input_type
class ApplicationOperationRetryArgs:
def __init__(__self__, *,
backoff: Optional[pulumi.Input['ApplicationOperationRetryBackoffArgs']] = None,
limit: Optional[pulumi.Input[int]] = None):
"""
Retry controls failed sync retry behavior
:param pulumi.Input['ApplicationOperationRetryBackoffArgs'] backoff: Backoff is a backoff strategy
:param pulumi.Input[int] limit: Limit is the maximum number of attempts for retrying a failed sync
"""
if backoff is not None:
pulumi.set(__self__, "backoff", backoff)
if limit is not None:
pulumi.set(__self__, "limit", limit)
@property
@pulumi.getter
def backoff(self) -> Optional[pulumi.Input['ApplicationOperationRetryBackoffArgs']]:
"""
Backoff is a backoff strategy
"""
return pulumi.get(self, "backoff")
@backoff.setter
def backoff(self, value: Optional[pulumi.Input['ApplicationOperationRetryBackoffArgs']]):
pulumi.set(self, "backoff", value)
@property
@pulumi.getter
def limit(self) -> Optional[pulumi.Input[int]]:
"""
Limit is the maximum number of attempts for retrying a failed sync
"""
return pulumi.get(self, "limit")
@limit.setter
def limit(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "limit", value)
@pulumi.input_type
class ApplicationOperationRetryBackoffArgs:
def __init__(__self__, *,
duration: Optional[pulumi.Input[str]] = None,
factor: Optional[pulumi.Input[int]] = None,
max_duration: Optional[pulumi.Input[str]] = None):
"""
Backoff is a backoff strategy
:param pulumi.Input[str] duration: Duration is the amount to back off. Default unit is seconds, but could also be a duration (e.g. "2m", "1h")
:param pulumi.Input[int] factor: Factor is a factor to multiply the base duration after each failed retry
:param pulumi.Input[str] max_duration: MaxDuration is the maximum amount of time allowed for the backoff strategy
"""
if duration is not None:
pulumi.set(__self__, "duration", duration)
if factor is not None:
pulumi.set(__self__, "factor", factor)
if max_duration is not None:
pulumi.set(__self__, "max_duration", max_duration)
@property
@pulumi.getter
def duration(self) -> Optional[pulumi.Input[str]]:
"""
Duration is the amount to back off. Default unit is seconds, but could also be a duration (e.g. "2m", "1h")
"""
return pulumi.get(self, "duration")
@duration.setter
def duration(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "duration", value)
@property
@pulumi.getter
def factor(self) -> Optional[pulumi.Input[int]]:
"""
Factor is a factor to multiply the base duration after each failed retry
"""
return pulumi.get(self, "factor")
@factor.setter
def factor(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "factor", value)
@property
@pulumi.getter(name="maxDuration")
def max_duration(self) -> Optional[pulumi.Input[str]]:
"""
MaxDuration is the maximum amount of time allowed for the backoff strategy
"""
return pulumi.get(self, "max_duration")
@max_duration.setter
def max_duration(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "max_duration", value)
@pulumi.input_type
class ApplicationOperationSyncArgs:
def __init__(__self__, *,
dry_run: Optional[pulumi.Input[bool]] = None,
manifests: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
prune: Optional[pulumi.Input[bool]] = None,
resources: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationOperationSyncResourcesArgs']]]] = None,
revision: Optional[pulumi.Input[str]] = None,
source: Optional[pulumi.Input['ApplicationOperationSyncSourceArgs']] = None,
sync_options: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
sync_strategy: Optional[pulumi.Input['ApplicationOperationSyncSyncStrategyArgs']] = None):
"""
SyncOperation contains sync operation details.
:param pulumi.Input[bool] dry_run: DryRun will perform a `kubectl apply --dry-run` without actually performing the sync
:param pulumi.Input[Sequence[pulumi.Input[str]]] manifests: Manifests is an optional field that overrides sync source with a local directory for development
:param pulumi.Input[bool] prune: Prune deletes resources that are no longer tracked in git
:param pulumi.Input[Sequence[pulumi.Input['ApplicationOperationSyncResourcesArgs']]] resources: Resources describes which resources to sync
:param pulumi.Input[str] revision: Revision is the revision to sync the application to. If omitted, the revision specified in the app spec is used.
:param pulumi.Input['ApplicationOperationSyncSourceArgs'] source: Source overrides the source definition set in the application. This is typically set in a Rollback operation and nil during a Sync operation
:param pulumi.Input[Sequence[pulumi.Input[str]]] sync_options: SyncOptions provide per-sync sync-options, e.g. Validate=false
:param pulumi.Input['ApplicationOperationSyncSyncStrategyArgs'] sync_strategy: SyncStrategy describes how to perform the sync
"""
if dry_run is not None:
pulumi.set(__self__, "dry_run", dry_run)
if manifests is not None:
pulumi.set(__self__, "manifests", manifests)
if prune is not None:
pulumi.set(__self__, "prune", prune)
if resources is not None:
pulumi.set(__self__, "resources", resources)
if revision is not None:
pulumi.set(__self__, "revision", revision)
if source is not None:
pulumi.set(__self__, "source", source)
if sync_options is not None:
pulumi.set(__self__, "sync_options", sync_options)
if sync_strategy is not None:
pulumi.set(__self__, "sync_strategy", sync_strategy)
@property
@pulumi.getter(name="dryRun")
def dry_run(self) -> Optional[pulumi.Input[bool]]:
"""
DryRun will perform a `kubectl apply --dry-run` without actually performing the sync
"""
return pulumi.get(self, "dry_run")
@dry_run.setter
def dry_run(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "dry_run", value)
@property
@pulumi.getter
def manifests(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
Manifests is an optional field that overrides sync source with a local directory for development
"""
return pulumi.get(self, "manifests")
@manifests.setter
def manifests(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "manifests", value)
@property
@pulumi.getter
def prune(self) -> Optional[pulumi.Input[bool]]:
"""
Prune deletes resources that are no longer tracked in git
"""
return pulumi.get(self, "prune")
@prune.setter
def prune(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "prune", value)
@property
@pulumi.getter
def resources(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationOperationSyncResourcesArgs']]]]:
"""
Resources describes which resources to sync
"""
return pulumi.get(self, "resources")
@resources.setter
def resources(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationOperationSyncResourcesArgs']]]]):
pulumi.set(self, "resources", value)
@property
@pulumi.getter
def revision(self) -> Optional[pulumi.Input[str]]:
"""
Revision is the revision to sync the application to. If omitted, the revision specified in the app spec is used.
"""
return pulumi.get(self, "revision")
@revision.setter
def revision(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "revision", value)
@property
@pulumi.getter
def source(self) -> Optional[pulumi.Input['ApplicationOperationSyncSourceArgs']]:
"""
Source overrides the source definition set in the application. This is typically set in a Rollback operation and nil during a Sync operation
"""
return pulumi.get(self, "source")
@source.setter
def source(self, value: Optional[pulumi.Input['ApplicationOperationSyncSourceArgs']]):
pulumi.set(self, "source", value)
@property
@pulumi.getter(name="syncOptions")
def sync_options(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
SyncOptions provide per-sync sync-options, e.g. Validate=false
"""
return pulumi.get(self, "sync_options")
@sync_options.setter
def sync_options(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "sync_options", value)
@property
@pulumi.getter(name="syncStrategy")
def sync_strategy(self) -> Optional[pulumi.Input['ApplicationOperationSyncSyncStrategyArgs']]:
"""
SyncStrategy describes how to perform the sync
"""
return pulumi.get(self, "sync_strategy")
@sync_strategy.setter
def sync_strategy(self, value: Optional[pulumi.Input['ApplicationOperationSyncSyncStrategyArgs']]):
pulumi.set(self, "sync_strategy", value)
@pulumi.input_type
class ApplicationOperationSyncResourcesArgs:
def __init__(__self__, *,
kind: pulumi.Input[str],
name: pulumi.Input[str],
group: Optional[pulumi.Input[str]] = None,
namespace: Optional[pulumi.Input[str]] = None):
"""
SyncOperationResource contains resources to sync.
"""
pulumi.set(__self__, "kind", kind)
pulumi.set(__self__, "name", name)
if group is not None:
pulumi.set(__self__, "group", group)
if namespace is not None:
pulumi.set(__self__, "namespace", namespace)
@property
@pulumi.getter
def kind(self) -> pulumi.Input[str]:
return pulumi.get(self, "kind")
@kind.setter
def kind(self, value: pulumi.Input[str]):
pulumi.set(self, "kind", value)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def group(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "group")
@group.setter
def group(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "group", value)
@property
@pulumi.getter
def namespace(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "namespace")
@namespace.setter
def namespace(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "namespace", value)
@pulumi.input_type
class ApplicationOperationSyncSourceArgs:
def __init__(__self__, *,
repo_url: pulumi.Input[str],
chart: Optional[pulumi.Input[str]] = None,
directory: Optional[pulumi.Input['ApplicationOperationSyncSourceDirectoryArgs']] = None,
helm: Optional[pulumi.Input['ApplicationOperationSyncSourceHelmArgs']] = None,
ksonnet: Optional[pulumi.Input['ApplicationOperationSyncSourceKsonnetArgs']] = None,
kustomize: Optional[pulumi.Input['ApplicationOperationSyncSourceKustomizeArgs']] = None,
path: Optional[pulumi.Input[str]] = None,
plugin: Optional[pulumi.Input['ApplicationOperationSyncSourcePluginArgs']] = None,
target_revision: Optional[pulumi.Input[str]] = None):
"""
Source overrides the source definition set in the application. This is typically set in a Rollback operation and nil during a Sync operation
:param pulumi.Input[str] repo_url: RepoURL is the repository URL of the application manifests
:param pulumi.Input[str] chart: Chart is a Helm chart name
:param pulumi.Input['ApplicationOperationSyncSourceDirectoryArgs'] directory: Directory holds path/directory specific options
:param pulumi.Input['ApplicationOperationSyncSourceHelmArgs'] helm: Helm holds helm specific options
:param pulumi.Input['ApplicationOperationSyncSourceKsonnetArgs'] ksonnet: Ksonnet holds ksonnet specific options
:param pulumi.Input['ApplicationOperationSyncSourceKustomizeArgs'] kustomize: Kustomize holds kustomize specific options
:param pulumi.Input[str] path: Path is a directory path within the Git repository
:param pulumi.Input['ApplicationOperationSyncSourcePluginArgs'] plugin: ConfigManagementPlugin holds config management plugin specific options
:param pulumi.Input[str] target_revision: TargetRevision defines the commit, tag, or branch to sync the application to. If omitted, syncs to HEAD
"""
pulumi.set(__self__, "repo_url", repo_url)
if chart is not None:
pulumi.set(__self__, "chart", chart)
if directory is not None:
pulumi.set(__self__, "directory", directory)
if helm is not None:
pulumi.set(__self__, "helm", helm)
if ksonnet is not None:
pulumi.set(__self__, "ksonnet", ksonnet)
if kustomize is not None:
pulumi.set(__self__, "kustomize", kustomize)
if path is not None:
pulumi.set(__self__, "path", path)
if plugin is not None:
pulumi.set(__self__, "plugin", plugin)
if target_revision is not None:
pulumi.set(__self__, "target_revision", target_revision)
@property
@pulumi.getter(name="repoURL")
def repo_url(self) -> pulumi.Input[str]:
"""
RepoURL is the repository URL of the application manifests
"""
return pulumi.get(self, "repo_url")
@repo_url.setter
def repo_url(self, value: pulumi.Input[str]):
pulumi.set(self, "repo_url", value)
@property
@pulumi.getter
def chart(self) -> Optional[pulumi.Input[str]]:
"""
Chart is a Helm chart name
"""
return pulumi.get(self, "chart")
@chart.setter
def chart(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "chart", value)
@property
@pulumi.getter
def directory(self) -> Optional[pulumi.Input['ApplicationOperationSyncSourceDirectoryArgs']]:
"""
Directory holds path/directory specific options
"""
return pulumi.get(self, "directory")
@directory.setter
def directory(self, value: Optional[pulumi.Input['ApplicationOperationSyncSourceDirectoryArgs']]):
pulumi.set(self, "directory", value)
@property
@pulumi.getter
def helm(self) -> Optional[pulumi.Input['ApplicationOperationSyncSourceHelmArgs']]:
"""
Helm holds helm specific options
"""
return pulumi.get(self, "helm")
@helm.setter
def helm(self, value: Optional[pulumi.Input['ApplicationOperationSyncSourceHelmArgs']]):
pulumi.set(self, "helm", value)
@property
@pulumi.getter
def ksonnet(self) -> Optional[pulumi.Input['ApplicationOperationSyncSourceKsonnetArgs']]:
"""
Ksonnet holds ksonnet specific options
"""
return pulumi.get(self, "ksonnet")
@ksonnet.setter
def ksonnet(self, value: Optional[pulumi.Input['ApplicationOperationSyncSourceKsonnetArgs']]):
pulumi.set(self, "ksonnet", value)
@property
@pulumi.getter
def kustomize(self) -> Optional[pulumi.Input['ApplicationOperationSyncSourceKustomizeArgs']]:
"""
Kustomize holds kustomize specific options
"""
return pulumi.get(self, "kustomize")
@kustomize.setter
def kustomize(self, value: Optional[pulumi.Input['ApplicationOperationSyncSourceKustomizeArgs']]):
pulumi.set(self, "kustomize", value)
@property
@pulumi.getter
def path(self) -> Optional[pulumi.Input[str]]:
"""
Path is a directory path within the Git repository
"""
return pulumi.get(self, "path")
@path.setter
def path(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "path", value)
@property
@pulumi.getter
def plugin(self) -> Optional[pulumi.Input['ApplicationOperationSyncSourcePluginArgs']]:
"""
ConfigManagementPlugin holds config management plugin specific options
"""
return pulumi.get(self, "plugin")
@plugin.setter
def plugin(self, value: Optional[pulumi.Input['ApplicationOperationSyncSourcePluginArgs']]):
pulumi.set(self, "plugin", value)
@property
@pulumi.getter(name="targetRevision")
def target_revision(self) -> Optional[pulumi.Input[str]]:
"""
TargetRevision defines the commit, tag, or branch to sync the application to. If omitted, syncs to HEAD
"""
return pulumi.get(self, "target_revision")
@target_revision.setter
def target_revision(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "target_revision", value)
@pulumi.input_type
class ApplicationOperationSyncSourceDirectoryArgs:
def __init__(__self__, *,
jsonnet: Optional[pulumi.Input['ApplicationOperationSyncSourceDirectoryJsonnetArgs']] = None,
recurse: Optional[pulumi.Input[bool]] = None):
"""
Directory holds path/directory specific options
:param pulumi.Input['ApplicationOperationSyncSourceDirectoryJsonnetArgs'] jsonnet: ApplicationSourceJsonnet holds jsonnet specific options
"""
if jsonnet is not None:
pulumi.set(__self__, "jsonnet", jsonnet)
if recurse is not None:
pulumi.set(__self__, "recurse", recurse)
@property
@pulumi.getter
def jsonnet(self) -> Optional[pulumi.Input['ApplicationOperationSyncSourceDirectoryJsonnetArgs']]:
"""
ApplicationSourceJsonnet holds jsonnet specific options
"""
return pulumi.get(self, "jsonnet")
@jsonnet.setter
def jsonnet(self, value: Optional[pulumi.Input['ApplicationOperationSyncSourceDirectoryJsonnetArgs']]):
pulumi.set(self, "jsonnet", value)
@property
@pulumi.getter
def recurse(self) -> Optional[pulumi.Input[bool]]:
return pulumi.get(self, "recurse")
@recurse.setter
def recurse(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "recurse", value)


@pulumi.input_type
class ApplicationOperationSyncSourceDirectoryJsonnetArgs:
def __init__(__self__, *,
ext_vars: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationOperationSyncSourceDirectoryJsonnetExtVarsArgs']]]] = None,
libs: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
tlas: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationOperationSyncSourceDirectoryJsonnetTlasArgs']]]] = None):
"""
ApplicationSourceJsonnet holds jsonnet specific options
:param pulumi.Input[Sequence[pulumi.Input['ApplicationOperationSyncSourceDirectoryJsonnetExtVarsArgs']]] ext_vars: ExtVars is a list of Jsonnet External Variables
:param pulumi.Input[Sequence[pulumi.Input[str]]] libs: Additional library search dirs
        :param pulumi.Input[Sequence[pulumi.Input['ApplicationOperationSyncSourceDirectoryJsonnetTlasArgs']]] tlas: TLAs is a list of Jsonnet top-level arguments
"""
if ext_vars is not None:
pulumi.set(__self__, "ext_vars", ext_vars)
if libs is not None:
pulumi.set(__self__, "libs", libs)
if tlas is not None:
pulumi.set(__self__, "tlas", tlas)
@property
@pulumi.getter(name="extVars")
def ext_vars(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationOperationSyncSourceDirectoryJsonnetExtVarsArgs']]]]:
"""
ExtVars is a list of Jsonnet External Variables
"""
return pulumi.get(self, "ext_vars")
@ext_vars.setter
def ext_vars(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationOperationSyncSourceDirectoryJsonnetExtVarsArgs']]]]):
pulumi.set(self, "ext_vars", value)
@property
@pulumi.getter
def libs(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
Additional library search dirs
"""
return pulumi.get(self, "libs")
@libs.setter
def libs(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "libs", value)
@property
@pulumi.getter
def tlas(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationOperationSyncSourceDirectoryJsonnetTlasArgs']]]]:
"""
        TLAs is a list of Jsonnet top-level arguments
"""
return pulumi.get(self, "tlas")
@tlas.setter
def tlas(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationOperationSyncSourceDirectoryJsonnetTlasArgs']]]]):
pulumi.set(self, "tlas", value)
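# Hypothetical usage sketch (not part of the generated code): constructing
# jsonnet directory options for a sync operation. All names and values below
# are illustrative assumptions.
#
#   jsonnet_opts = ApplicationOperationSyncSourceDirectoryJsonnetArgs(
#       ext_vars=[ApplicationOperationSyncSourceDirectoryJsonnetExtVarsArgs(
#           name="env", value="prod")],
#       tlas=[ApplicationOperationSyncSourceDirectoryJsonnetTlasArgs(
#           name="replicas", value="3", code=True)],  # code=True: value is jsonnet code
#   )
#   directory_opts = ApplicationOperationSyncSourceDirectoryArgs(
#       jsonnet=jsonnet_opts, recurse=True)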


@pulumi.input_type
class ApplicationOperationSyncSourceDirectoryJsonnetExtVarsArgs:
def __init__(__self__, *,
name: pulumi.Input[str],
value: pulumi.Input[str],
code: Optional[pulumi.Input[bool]] = None):
"""
JsonnetVar is a jsonnet variable
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "value", value)
if code is not None:
pulumi.set(__self__, "code", code)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@property
@pulumi.getter
def code(self) -> Optional[pulumi.Input[bool]]:
return pulumi.get(self, "code")
@code.setter
def code(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "code", value)


@pulumi.input_type
class ApplicationOperationSyncSourceDirectoryJsonnetTlasArgs:
def __init__(__self__, *,
name: pulumi.Input[str],
value: pulumi.Input[str],
code: Optional[pulumi.Input[bool]] = None):
"""
JsonnetVar is a jsonnet variable
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "value", value)
if code is not None:
pulumi.set(__self__, "code", code)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@property
@pulumi.getter
def code(self) -> Optional[pulumi.Input[bool]]:
return pulumi.get(self, "code")
@code.setter
def code(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "code", value)


@pulumi.input_type
class ApplicationOperationSyncSourceHelmArgs:
def __init__(__self__, *,
file_parameters: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationOperationSyncSourceHelmFileParametersArgs']]]] = None,
parameters: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationOperationSyncSourceHelmParametersArgs']]]] = None,
release_name: Optional[pulumi.Input[str]] = None,
value_files: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
values: Optional[pulumi.Input[str]] = None):
"""
Helm holds helm specific options
:param pulumi.Input[Sequence[pulumi.Input['ApplicationOperationSyncSourceHelmFileParametersArgs']]] file_parameters: FileParameters are file parameters to the helm template
:param pulumi.Input[Sequence[pulumi.Input['ApplicationOperationSyncSourceHelmParametersArgs']]] parameters: Parameters are parameters to the helm template
        :param pulumi.Input[str] release_name: The Helm release name. If omitted, the application name is used
        :param pulumi.Input[Sequence[pulumi.Input[str]]] value_files: ValueFiles is a list of Helm value files to use when generating a template
        :param pulumi.Input[str] values: Values holds Helm values, typically defined as a block
"""
if file_parameters is not None:
pulumi.set(__self__, "file_parameters", file_parameters)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if release_name is not None:
pulumi.set(__self__, "release_name", release_name)
if value_files is not None:
pulumi.set(__self__, "value_files", value_files)
if values is not None:
pulumi.set(__self__, "values", values)
@property
@pulumi.getter(name="fileParameters")
def file_parameters(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationOperationSyncSourceHelmFileParametersArgs']]]]:
"""
FileParameters are file parameters to the helm template
"""
return pulumi.get(self, "file_parameters")
@file_parameters.setter
def file_parameters(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationOperationSyncSourceHelmFileParametersArgs']]]]):
pulumi.set(self, "file_parameters", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationOperationSyncSourceHelmParametersArgs']]]]:
"""
Parameters are parameters to the helm template
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationOperationSyncSourceHelmParametersArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter(name="releaseName")
def release_name(self) -> Optional[pulumi.Input[str]]:
"""
        The Helm release name. If omitted, the application name is used
"""
return pulumi.get(self, "release_name")
@release_name.setter
def release_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "release_name", value)
@property
@pulumi.getter(name="valueFiles")
def value_files(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
        ValueFiles is a list of Helm value files to use when generating a template
"""
return pulumi.get(self, "value_files")
@value_files.setter
def value_files(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "value_files", value)
@property
@pulumi.getter
def values(self) -> Optional[pulumi.Input[str]]:
"""
        Values holds Helm values, typically defined as a block
"""
return pulumi.get(self, "values")
@values.setter
def values(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "values", value)


@pulumi.input_type
class ApplicationOperationSyncSourceHelmFileParametersArgs:
def __init__(__self__, *,
name: Optional[pulumi.Input[str]] = None,
path: Optional[pulumi.Input[str]] = None):
"""
HelmFileParameter is a file parameter to a helm template
:param pulumi.Input[str] name: Name is the name of the helm parameter
:param pulumi.Input[str] path: Path is the path value for the helm parameter
"""
if name is not None:
pulumi.set(__self__, "name", name)
if path is not None:
pulumi.set(__self__, "path", path)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Name is the name of the helm parameter
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def path(self) -> Optional[pulumi.Input[str]]:
"""
Path is the path value for the helm parameter
"""
return pulumi.get(self, "path")
@path.setter
def path(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "path", value)


@pulumi.input_type
class ApplicationOperationSyncSourceHelmParametersArgs:
def __init__(__self__, *,
force_string: Optional[pulumi.Input[bool]] = None,
name: Optional[pulumi.Input[str]] = None,
value: Optional[pulumi.Input[str]] = None):
"""
HelmParameter is a parameter to a helm template
:param pulumi.Input[bool] force_string: ForceString determines whether to tell Helm to interpret booleans and numbers as strings
:param pulumi.Input[str] name: Name is the name of the helm parameter
:param pulumi.Input[str] value: Value is the value for the helm parameter
"""
if force_string is not None:
pulumi.set(__self__, "force_string", force_string)
if name is not None:
pulumi.set(__self__, "name", name)
if value is not None:
pulumi.set(__self__, "value", value)
@property
@pulumi.getter(name="forceString")
def force_string(self) -> Optional[pulumi.Input[bool]]:
"""
ForceString determines whether to tell Helm to interpret booleans and numbers as strings
"""
return pulumi.get(self, "force_string")
@force_string.setter
def force_string(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "force_string", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Name is the name of the helm parameter
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def value(self) -> Optional[pulumi.Input[str]]:
"""
Value is the value for the helm parameter
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "value", value)
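# Hypothetical usage sketch (not part of the generated code): Helm source
# options for a sync operation. The release name, file names, and parameter
# values below are illustrative assumptions.
#
#   helm_opts = ApplicationOperationSyncSourceHelmArgs(
#       release_name="my-app",
#       value_files=["values-prod.yaml"],
#       parameters=[ApplicationOperationSyncSourceHelmParametersArgs(
#           name="image.tag", value="1.2.3",
#           force_string=True)],  # treat the value as a string, not a number
#   )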


@pulumi.input_type
class ApplicationOperationSyncSourceKsonnetArgs:
def __init__(__self__, *,
environment: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationOperationSyncSourceKsonnetParametersArgs']]]] = None):
"""
Ksonnet holds ksonnet specific options
:param pulumi.Input[str] environment: Environment is a ksonnet application environment name
:param pulumi.Input[Sequence[pulumi.Input['ApplicationOperationSyncSourceKsonnetParametersArgs']]] parameters: Parameters are a list of ksonnet component parameter override values
"""
if environment is not None:
pulumi.set(__self__, "environment", environment)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
@property
@pulumi.getter
def environment(self) -> Optional[pulumi.Input[str]]:
"""
Environment is a ksonnet application environment name
"""
return pulumi.get(self, "environment")
@environment.setter
def environment(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "environment", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationOperationSyncSourceKsonnetParametersArgs']]]]:
"""
Parameters are a list of ksonnet component parameter override values
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationOperationSyncSourceKsonnetParametersArgs']]]]):
pulumi.set(self, "parameters", value)


@pulumi.input_type
class ApplicationOperationSyncSourceKsonnetParametersArgs:
def __init__(__self__, *,
name: pulumi.Input[str],
value: pulumi.Input[str],
component: Optional[pulumi.Input[str]] = None):
"""
KsonnetParameter is a ksonnet component parameter
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "value", value)
if component is not None:
pulumi.set(__self__, "component", component)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@property
@pulumi.getter
def component(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "component")
@component.setter
def component(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "component", value)


@pulumi.input_type
class ApplicationOperationSyncSourceKustomizeArgs:
def __init__(__self__, *,
common_labels: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
images: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
name_prefix: Optional[pulumi.Input[str]] = None,
name_suffix: Optional[pulumi.Input[str]] = None,
version: Optional[pulumi.Input[str]] = None):
"""
Kustomize holds kustomize specific options
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] common_labels: CommonLabels adds additional kustomize commonLabels
:param pulumi.Input[Sequence[pulumi.Input[str]]] images: Images are kustomize image overrides
        :param pulumi.Input[str] name_prefix: NamePrefix is a prefix prepended to resources for kustomize apps
:param pulumi.Input[str] name_suffix: NameSuffix is a suffix appended to resources for kustomize apps
:param pulumi.Input[str] version: Version contains optional Kustomize version
"""
if common_labels is not None:
pulumi.set(__self__, "common_labels", common_labels)
if images is not None:
pulumi.set(__self__, "images", images)
if name_prefix is not None:
pulumi.set(__self__, "name_prefix", name_prefix)
if name_suffix is not None:
pulumi.set(__self__, "name_suffix", name_suffix)
if version is not None:
pulumi.set(__self__, "version", version)
@property
@pulumi.getter(name="commonLabels")
def common_labels(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
CommonLabels adds additional kustomize commonLabels
"""
return pulumi.get(self, "common_labels")
@common_labels.setter
def common_labels(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "common_labels", value)
@property
@pulumi.getter
def images(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
Images are kustomize image overrides
"""
return pulumi.get(self, "images")
@images.setter
def images(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "images", value)
@property
@pulumi.getter(name="namePrefix")
def name_prefix(self) -> Optional[pulumi.Input[str]]:
"""
        NamePrefix is a prefix prepended to resources for kustomize apps
"""
return pulumi.get(self, "name_prefix")
@name_prefix.setter
def name_prefix(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name_prefix", value)
@property
@pulumi.getter(name="nameSuffix")
def name_suffix(self) -> Optional[pulumi.Input[str]]:
"""
NameSuffix is a suffix appended to resources for kustomize apps
"""
return pulumi.get(self, "name_suffix")
@name_suffix.setter
def name_suffix(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name_suffix", value)
@property
@pulumi.getter
def version(self) -> Optional[pulumi.Input[str]]:
"""
Version contains optional Kustomize version
"""
return pulumi.get(self, "version")
@version.setter
def version(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "version", value)


@pulumi.input_type
class ApplicationOperationSyncSourcePluginArgs:
def __init__(__self__, *,
env: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationOperationSyncSourcePluginEnvArgs']]]] = None,
name: Optional[pulumi.Input[str]] = None):
"""
ConfigManagementPlugin holds config management plugin specific options
"""
if env is not None:
pulumi.set(__self__, "env", env)
if name is not None:
pulumi.set(__self__, "name", name)
@property
@pulumi.getter
def env(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationOperationSyncSourcePluginEnvArgs']]]]:
return pulumi.get(self, "env")
@env.setter
def env(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationOperationSyncSourcePluginEnvArgs']]]]):
pulumi.set(self, "env", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)


@pulumi.input_type
class ApplicationOperationSyncSourcePluginEnvArgs:
def __init__(__self__, *,
name: pulumi.Input[str],
value: pulumi.Input[str]):
"""
        :param pulumi.Input[str] name: The name, usually uppercase
        :param pulumi.Input[str] value: The value
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
"""
        The name, usually uppercase
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
        The value
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)


@pulumi.input_type
class ApplicationOperationSyncSyncStrategyArgs:
def __init__(__self__, *,
apply: Optional[pulumi.Input['ApplicationOperationSyncSyncStrategyApplyArgs']] = None,
hook: Optional[pulumi.Input['ApplicationOperationSyncSyncStrategyHookArgs']] = None):
"""
SyncStrategy describes how to perform the sync
        :param pulumi.Input['ApplicationOperationSyncSyncStrategyApplyArgs'] apply: Apply will perform a `kubectl apply` to perform the sync.
:param pulumi.Input['ApplicationOperationSyncSyncStrategyHookArgs'] hook: Hook will submit any referenced resources to perform the sync. This is the default strategy
"""
if apply is not None:
pulumi.set(__self__, "apply", apply)
if hook is not None:
pulumi.set(__self__, "hook", hook)
@property
@pulumi.getter
def apply(self) -> Optional[pulumi.Input['ApplicationOperationSyncSyncStrategyApplyArgs']]:
"""
        Apply will perform a `kubectl apply` to perform the sync.
"""
return pulumi.get(self, "apply")
@apply.setter
def apply(self, value: Optional[pulumi.Input['ApplicationOperationSyncSyncStrategyApplyArgs']]):
pulumi.set(self, "apply", value)
@property
@pulumi.getter
def hook(self) -> Optional[pulumi.Input['ApplicationOperationSyncSyncStrategyHookArgs']]:
"""
Hook will submit any referenced resources to perform the sync. This is the default strategy
"""
return pulumi.get(self, "hook")
@hook.setter
def hook(self, value: Optional[pulumi.Input['ApplicationOperationSyncSyncStrategyHookArgs']]):
pulumi.set(self, "hook", value)


@pulumi.input_type
class ApplicationOperationSyncSyncStrategyApplyArgs:
def __init__(__self__, *,
force: Optional[pulumi.Input[bool]] = None):
"""
        Apply will perform a `kubectl apply` to perform the sync.
        :param pulumi.Input[bool] force: Force indicates whether to supply the --force flag to `kubectl apply`. The --force flag deletes and re-creates the resource when a PATCH encounters a conflict and has been retried five times.
"""
if force is not None:
pulumi.set(__self__, "force", force)
@property
@pulumi.getter
def force(self) -> Optional[pulumi.Input[bool]]:
"""
        Force indicates whether to supply the --force flag to `kubectl apply`. The --force flag deletes and re-creates the resource when a PATCH encounters a conflict and has been retried five times.
"""
return pulumi.get(self, "force")
@force.setter
def force(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "force", value)


@pulumi.input_type
class ApplicationOperationSyncSyncStrategyHookArgs:
def __init__(__self__, *,
force: Optional[pulumi.Input[bool]] = None):
"""
Hook will submit any referenced resources to perform the sync. This is the default strategy
        :param pulumi.Input[bool] force: Force indicates whether to supply the --force flag to `kubectl apply`. The --force flag deletes and re-creates the resource when a PATCH encounters a conflict and has been retried five times.
"""
if force is not None:
pulumi.set(__self__, "force", force)
@property
@pulumi.getter
def force(self) -> Optional[pulumi.Input[bool]]:
"""
        Force indicates whether to supply the --force flag to `kubectl apply`. The --force flag deletes and re-creates the resource when a PATCH encounters a conflict and has been retried five times.
"""
return pulumi.get(self, "force")
@force.setter
def force(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "force", value)
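# Hypothetical usage sketch (not part of the generated code): selecting a sync
# strategy. Typically only one of `apply` or `hook` is set; hook is the
# default strategy when neither is given.
#
#   strategy = ApplicationOperationSyncSyncStrategyArgs(
#       apply=ApplicationOperationSyncSyncStrategyApplyArgs(
#           force=True))  # pass --force to `kubectl apply`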


@pulumi.input_type
class ApplicationSpecArgs:
def __init__(__self__, *,
destination: pulumi.Input['ApplicationSpecDestinationArgs'],
project: pulumi.Input[str],
source: pulumi.Input['ApplicationSpecSourceArgs'],
ignore_differences: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationSpecIgnoreDifferencesArgs']]]] = None,
info: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationSpecInfoArgs']]]] = None,
revision_history_limit: Optional[pulumi.Input[int]] = None,
sync_policy: Optional[pulumi.Input['ApplicationSpecSyncPolicyArgs']] = None):
"""
        ApplicationSpec represents the desired application state. It contains a link to the repository holding the application definition, along with additional parameters for the definition revision.
        :param pulumi.Input['ApplicationSpecDestinationArgs'] destination: Destination overrides the kubernetes server and namespace defined in the environment ksonnet app.yaml
        :param pulumi.Input[str] project: Project is an application project name. An empty name means that the application belongs to the 'default' project.
        :param pulumi.Input['ApplicationSpecSourceArgs'] source: Source is a reference to the location of the ksonnet application definition
        :param pulumi.Input[Sequence[pulumi.Input['ApplicationSpecIgnoreDifferencesArgs']]] ignore_differences: IgnoreDifferences controls the resource fields which should be ignored during comparison
        :param pulumi.Input[Sequence[pulumi.Input['ApplicationSpecInfoArgs']]] info: Info contains a list of useful information (URLs, email addresses, and plain text) that relates to the application
        :param pulumi.Input[int] revision_history_limit: RevisionHistoryLimit limits the number of items kept in the application's revision history. This should only be changed in exceptional circumstances. Setting it to zero stores no history, which reduces storage use; increasing it increases the space used to store the history, so increasing it is not recommended. The default is 10.
        :param pulumi.Input['ApplicationSpecSyncPolicyArgs'] sync_policy: SyncPolicy controls when a sync will be performed
"""
pulumi.set(__self__, "destination", destination)
pulumi.set(__self__, "project", project)
pulumi.set(__self__, "source", source)
if ignore_differences is not None:
pulumi.set(__self__, "ignore_differences", ignore_differences)
if info is not None:
pulumi.set(__self__, "info", info)
if revision_history_limit is not None:
pulumi.set(__self__, "revision_history_limit", revision_history_limit)
if sync_policy is not None:
pulumi.set(__self__, "sync_policy", sync_policy)
@property
@pulumi.getter
def destination(self) -> pulumi.Input['ApplicationSpecDestinationArgs']:
"""
Destination overrides the kubernetes server and namespace defined in the environment ksonnet app.yaml
"""
return pulumi.get(self, "destination")
@destination.setter
def destination(self, value: pulumi.Input['ApplicationSpecDestinationArgs']):
pulumi.set(self, "destination", value)
@property
@pulumi.getter
def project(self) -> pulumi.Input[str]:
"""
        Project is an application project name. An empty name means that the application belongs to the 'default' project.
"""
return pulumi.get(self, "project")
@project.setter
def project(self, value: pulumi.Input[str]):
pulumi.set(self, "project", value)
@property
@pulumi.getter
def source(self) -> pulumi.Input['ApplicationSpecSourceArgs']:
"""
        Source is a reference to the location of the ksonnet application definition
"""
return pulumi.get(self, "source")
@source.setter
def source(self, value: pulumi.Input['ApplicationSpecSourceArgs']):
pulumi.set(self, "source", value)
@property
@pulumi.getter(name="ignoreDifferences")
def ignore_differences(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationSpecIgnoreDifferencesArgs']]]]:
"""
        IgnoreDifferences controls the resource fields which should be ignored during comparison
"""
return pulumi.get(self, "ignore_differences")
@ignore_differences.setter
def ignore_differences(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationSpecIgnoreDifferencesArgs']]]]):
pulumi.set(self, "ignore_differences", value)
@property
@pulumi.getter
def info(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationSpecInfoArgs']]]]:
"""
        Info contains a list of useful information (URLs, email addresses, and plain text) that relates to the application
"""
return pulumi.get(self, "info")
@info.setter
def info(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationSpecInfoArgs']]]]):
pulumi.set(self, "info", value)
@property
@pulumi.getter(name="revisionHistoryLimit")
def revision_history_limit(self) -> Optional[pulumi.Input[int]]:
"""
        RevisionHistoryLimit limits the number of items kept in the application's revision history. This should only be changed in exceptional circumstances. Setting it to zero stores no history, which reduces storage use; increasing it increases the space used to store the history, so increasing it is not recommended. The default is 10.
"""
return pulumi.get(self, "revision_history_limit")
@revision_history_limit.setter
def revision_history_limit(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "revision_history_limit", value)
@property
@pulumi.getter(name="syncPolicy")
def sync_policy(self) -> Optional[pulumi.Input['ApplicationSpecSyncPolicyArgs']]:
"""
SyncPolicy controls when a sync will be performed
"""
return pulumi.get(self, "sync_policy")
@sync_policy.setter
def sync_policy(self, value: Optional[pulumi.Input['ApplicationSpecSyncPolicyArgs']]):
pulumi.set(self, "sync_policy", value)


@pulumi.input_type
class ApplicationSpecDestinationArgs:
def __init__(__self__, *,
name: Optional[pulumi.Input[str]] = None,
namespace: Optional[pulumi.Input[str]] = None,
server: Optional[pulumi.Input[str]] = None):
"""
Destination overrides the kubernetes server and namespace defined in the environment ksonnet app.yaml
        :param pulumi.Input[str] name: Name of the destination cluster, which can be used instead of the server (URL) field
:param pulumi.Input[str] namespace: Namespace overrides the environment namespace value in the ksonnet app.yaml
:param pulumi.Input[str] server: Server overrides the environment server value in the ksonnet app.yaml
"""
if name is not None:
pulumi.set(__self__, "name", name)
if namespace is not None:
pulumi.set(__self__, "namespace", namespace)
if server is not None:
pulumi.set(__self__, "server", server)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
        Name of the destination cluster, which can be used instead of the server (URL) field
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def namespace(self) -> Optional[pulumi.Input[str]]:
"""
Namespace overrides the environment namespace value in the ksonnet app.yaml
"""
return pulumi.get(self, "namespace")
@namespace.setter
def namespace(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "namespace", value)
@property
@pulumi.getter
def server(self) -> Optional[pulumi.Input[str]]:
"""
Server overrides the environment server value in the ksonnet app.yaml
"""
return pulumi.get(self, "server")
@server.setter
def server(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "server", value)
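# Hypothetical usage sketch (not part of the generated code): a destination is
# addressed either by cluster name or by server URL, plus a target namespace.
# The URL and namespace below are illustrative assumptions.
#
#   dest = ApplicationSpecDestinationArgs(
#       server="https://kubernetes.default.svc",  # or name="in-cluster"
#       namespace="default")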


@pulumi.input_type
class ApplicationSpecIgnoreDifferencesArgs:
def __init__(__self__, *,
json_pointers: pulumi.Input[Sequence[pulumi.Input[str]]],
kind: pulumi.Input[str],
group: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
namespace: Optional[pulumi.Input[str]] = None):
"""
        ResourceIgnoreDifferences contains a resource filter and a list of JSON paths which should be ignored during comparison with the live state.
"""
pulumi.set(__self__, "json_pointers", json_pointers)
pulumi.set(__self__, "kind", kind)
if group is not None:
pulumi.set(__self__, "group", group)
if name is not None:
pulumi.set(__self__, "name", name)
if namespace is not None:
pulumi.set(__self__, "namespace", namespace)
@property
@pulumi.getter(name="jsonPointers")
def json_pointers(self) -> pulumi.Input[Sequence[pulumi.Input[str]]]:
return pulumi.get(self, "json_pointers")
@json_pointers.setter
def json_pointers(self, value: pulumi.Input[Sequence[pulumi.Input[str]]]):
pulumi.set(self, "json_pointers", value)
@property
@pulumi.getter
def kind(self) -> pulumi.Input[str]:
return pulumi.get(self, "kind")
@kind.setter
def kind(self, value: pulumi.Input[str]):
pulumi.set(self, "kind", value)
@property
@pulumi.getter
def group(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "group")
@group.setter
def group(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "group", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def namespace(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "namespace")
@namespace.setter
def namespace(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "namespace", value)


@pulumi.input_type
class ApplicationSpecInfoArgs:
def __init__(__self__, *,
name: pulumi.Input[str],
value: pulumi.Input[str]):
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)


@pulumi.input_type
class ApplicationSpecSourceArgs:
def __init__(__self__, *,
repo_url: pulumi.Input[str],
chart: Optional[pulumi.Input[str]] = None,
directory: Optional[pulumi.Input['ApplicationSpecSourceDirectoryArgs']] = None,
helm: Optional[pulumi.Input['ApplicationSpecSourceHelmArgs']] = None,
ksonnet: Optional[pulumi.Input['ApplicationSpecSourceKsonnetArgs']] = None,
kustomize: Optional[pulumi.Input['ApplicationSpecSourceKustomizeArgs']] = None,
path: Optional[pulumi.Input[str]] = None,
plugin: Optional[pulumi.Input['ApplicationSpecSourcePluginArgs']] = None,
target_revision: Optional[pulumi.Input[str]] = None):
"""
        Source is a reference to the location of the ksonnet application definition
:param pulumi.Input[str] repo_url: RepoURL is the repository URL of the application manifests
:param pulumi.Input[str] chart: Chart is a Helm chart name
:param pulumi.Input['ApplicationSpecSourceDirectoryArgs'] directory: Directory holds path/directory specific options
:param pulumi.Input['ApplicationSpecSourceHelmArgs'] helm: Helm holds helm specific options
:param pulumi.Input['ApplicationSpecSourceKsonnetArgs'] ksonnet: Ksonnet holds ksonnet specific options
:param pulumi.Input['ApplicationSpecSourceKustomizeArgs'] kustomize: Kustomize holds kustomize specific options
:param pulumi.Input[str] path: Path is a directory path within the Git repository
:param pulumi.Input['ApplicationSpecSourcePluginArgs'] plugin: ConfigManagementPlugin holds config management plugin specific options
:param pulumi.Input[str] target_revision: TargetRevision defines the commit, tag, or branch in which to sync the application to. If omitted, will sync to HEAD
"""
pulumi.set(__self__, "repo_url", repo_url)
if chart is not None:
pulumi.set(__self__, "chart", chart)
if directory is not None:
pulumi.set(__self__, "directory", directory)
if helm is not None:
pulumi.set(__self__, "helm", helm)
if ksonnet is not None:
pulumi.set(__self__, "ksonnet", ksonnet)
if kustomize is not None:
pulumi.set(__self__, "kustomize", kustomize)
if path is not None:
pulumi.set(__self__, "path", path)
if plugin is not None:
pulumi.set(__self__, "plugin", plugin)
if target_revision is not None:
pulumi.set(__self__, "target_revision", target_revision)
@property
@pulumi.getter(name="repoURL")
def repo_url(self) -> pulumi.Input[str]:
"""
RepoURL is the repository URL of the application manifests
"""
return pulumi.get(self, "repo_url")
@repo_url.setter
def repo_url(self, value: pulumi.Input[str]):
pulumi.set(self, "repo_url", value)
@property
@pulumi.getter
def chart(self) -> Optional[pulumi.Input[str]]:
"""
Chart is a Helm chart name
"""
return pulumi.get(self, "chart")
@chart.setter
def chart(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "chart", value)
@property
@pulumi.getter
def directory(self) -> Optional[pulumi.Input['ApplicationSpecSourceDirectoryArgs']]:
"""
Directory holds path/directory specific options
"""
return pulumi.get(self, "directory")
@directory.setter
def directory(self, value: Optional[pulumi.Input['ApplicationSpecSourceDirectoryArgs']]):
pulumi.set(self, "directory", value)
@property
@pulumi.getter
def helm(self) -> Optional[pulumi.Input['ApplicationSpecSourceHelmArgs']]:
"""
Helm holds helm specific options
"""
return pulumi.get(self, "helm")
@helm.setter
def helm(self, value: Optional[pulumi.Input['ApplicationSpecSourceHelmArgs']]):
pulumi.set(self, "helm", value)
@property
@pulumi.getter
def ksonnet(self) -> Optional[pulumi.Input['ApplicationSpecSourceKsonnetArgs']]:
"""
Ksonnet holds ksonnet specific options
"""
return pulumi.get(self, "ksonnet")
@ksonnet.setter
def ksonnet(self, value: Optional[pulumi.Input['ApplicationSpecSourceKsonnetArgs']]):
pulumi.set(self, "ksonnet", value)
@property
@pulumi.getter
def kustomize(self) -> Optional[pulumi.Input['ApplicationSpecSourceKustomizeArgs']]:
"""
Kustomize holds kustomize specific options
"""
return pulumi.get(self, "kustomize")
@kustomize.setter
def kustomize(self, value: Optional[pulumi.Input['ApplicationSpecSourceKustomizeArgs']]):
pulumi.set(self, "kustomize", value)
@property
@pulumi.getter
def path(self) -> Optional[pulumi.Input[str]]:
"""
Path is a directory path within the Git repository
"""
return pulumi.get(self, "path")
@path.setter
def path(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "path", value)
@property
@pulumi.getter
def plugin(self) -> Optional[pulumi.Input['ApplicationSpecSourcePluginArgs']]:
"""
ConfigManagementPlugin holds config management plugin specific options
"""
return pulumi.get(self, "plugin")
@plugin.setter
def plugin(self, value: Optional[pulumi.Input['ApplicationSpecSourcePluginArgs']]):
pulumi.set(self, "plugin", value)
@property
@pulumi.getter(name="targetRevision")
def target_revision(self) -> Optional[pulumi.Input[str]]:
"""
TargetRevision defines the commit, tag, or branch in which to sync the application to. If omitted, will sync to HEAD
"""
return pulumi.get(self, "target_revision")
@target_revision.setter
def target_revision(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "target_revision", value)
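# The properties above expose snake_case Python names for camelCase CRD
# fields (e.g. @pulumi.getter(name="repoURL")). The helper below is a
# hypothetical sketch of that mapping, not part of the generated SDK: it
# renders source arguments as the camelCase dict the Argo CD Application
# CRD expects, dropping unset fields the way the constructor does.
def _example_source_to_crd(**kwargs):
    field_names = {
        "repo_url": "repoURL",
        "target_revision": "targetRevision",
        "chart": "chart",
        "path": "path",
    }
    # Keep only set fields, renamed to their CRD spelling.
    return {field_names[k]: v for k, v in kwargs.items() if v is not None}


# _example_source_to_crd(repo_url="https://github.com/argoproj/argocd-example-apps",
#                        path="guestbook", target_revision="HEAD", chart=None)
# -> {"repoURL": "https://github.com/argoproj/argocd-example-apps",
#     "path": "guestbook", "targetRevision": "HEAD"}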
@pulumi.input_type
class ApplicationSpecSourceDirectoryArgs:
    def __init__(__self__, *,
                 jsonnet: Optional[pulumi.Input['ApplicationSpecSourceDirectoryJsonnetArgs']] = None,
                 recurse: Optional[pulumi.Input[bool]] = None):
        """
        Directory holds path/directory specific options
        :param pulumi.Input['ApplicationSpecSourceDirectoryJsonnetArgs'] jsonnet: ApplicationSourceJsonnet holds jsonnet specific options
        """
        if jsonnet is not None:
            pulumi.set(__self__, "jsonnet", jsonnet)
        if recurse is not None:
            pulumi.set(__self__, "recurse", recurse)

    @property
    @pulumi.getter
    def jsonnet(self) -> Optional[pulumi.Input['ApplicationSpecSourceDirectoryJsonnetArgs']]:
        """
        ApplicationSourceJsonnet holds jsonnet specific options
        """
        return pulumi.get(self, "jsonnet")

    @jsonnet.setter
    def jsonnet(self, value: Optional[pulumi.Input['ApplicationSpecSourceDirectoryJsonnetArgs']]):
        pulumi.set(self, "jsonnet", value)

    @property
    @pulumi.getter
    def recurse(self) -> Optional[pulumi.Input[bool]]:
        return pulumi.get(self, "recurse")

    @recurse.setter
    def recurse(self, value: Optional[pulumi.Input[bool]]):
        pulumi.set(self, "recurse", value)


@pulumi.input_type
class ApplicationSpecSourceDirectoryJsonnetArgs:
    def __init__(__self__, *,
                 ext_vars: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationSpecSourceDirectoryJsonnetExtVarsArgs']]]] = None,
                 libs: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
                 tlas: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationSpecSourceDirectoryJsonnetTlasArgs']]]] = None):
        """
        ApplicationSourceJsonnet holds jsonnet specific options
        :param pulumi.Input[Sequence[pulumi.Input['ApplicationSpecSourceDirectoryJsonnetExtVarsArgs']]] ext_vars: ExtVars is a list of Jsonnet External Variables
        :param pulumi.Input[Sequence[pulumi.Input[str]]] libs: Additional library search dirs
        :param pulumi.Input[Sequence[pulumi.Input['ApplicationSpecSourceDirectoryJsonnetTlasArgs']]] tlas: TLAS is a list of Jsonnet Top-level Arguments
        """
        if ext_vars is not None:
            pulumi.set(__self__, "ext_vars", ext_vars)
        if libs is not None:
            pulumi.set(__self__, "libs", libs)
        if tlas is not None:
            pulumi.set(__self__, "tlas", tlas)

    @property
    @pulumi.getter(name="extVars")
    def ext_vars(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationSpecSourceDirectoryJsonnetExtVarsArgs']]]]:
        """
        ExtVars is a list of Jsonnet External Variables
        """
        return pulumi.get(self, "ext_vars")

    @ext_vars.setter
    def ext_vars(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationSpecSourceDirectoryJsonnetExtVarsArgs']]]]):
        pulumi.set(self, "ext_vars", value)

    @property
    @pulumi.getter
    def libs(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
        """
        Additional library search dirs
        """
        return pulumi.get(self, "libs")

    @libs.setter
    def libs(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
        pulumi.set(self, "libs", value)

    @property
    @pulumi.getter
    def tlas(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationSpecSourceDirectoryJsonnetTlasArgs']]]]:
        """
        TLAS is a list of Jsonnet Top-level Arguments
        """
        return pulumi.get(self, "tlas")

    @tlas.setter
    def tlas(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationSpecSourceDirectoryJsonnetTlasArgs']]]]):
        pulumi.set(self, "tlas", value)


@pulumi.input_type
class ApplicationSpecSourceDirectoryJsonnetExtVarsArgs:
    def __init__(__self__, *,
                 name: pulumi.Input[str],
                 value: pulumi.Input[str],
                 code: Optional[pulumi.Input[bool]] = None):
        """
        JsonnetVar is a jsonnet variable
        """
        pulumi.set(__self__, "name", name)
        pulumi.set(__self__, "value", value)
        if code is not None:
            pulumi.set(__self__, "code", code)

    @property
    @pulumi.getter
    def name(self) -> pulumi.Input[str]:
        return pulumi.get(self, "name")

    @name.setter
    def name(self, value: pulumi.Input[str]):
        pulumi.set(self, "name", value)

    @property
    @pulumi.getter
    def value(self) -> pulumi.Input[str]:
        return pulumi.get(self, "value")

    @value.setter
    def value(self, value: pulumi.Input[str]):
        pulumi.set(self, "value", value)

    @property
    @pulumi.getter
    def code(self) -> Optional[pulumi.Input[bool]]:
        return pulumi.get(self, "code")

    @code.setter
    def code(self, value: Optional[pulumi.Input[bool]]):
        pulumi.set(self, "code", value)


@pulumi.input_type
class ApplicationSpecSourceDirectoryJsonnetTlasArgs:
    def __init__(__self__, *,
                 name: pulumi.Input[str],
                 value: pulumi.Input[str],
                 code: Optional[pulumi.Input[bool]] = None):
        """
        JsonnetVar is a jsonnet variable
        """
        pulumi.set(__self__, "name", name)
        pulumi.set(__self__, "value", value)
        if code is not None:
            pulumi.set(__self__, "code", code)

    @property
    @pulumi.getter
    def name(self) -> pulumi.Input[str]:
        return pulumi.get(self, "name")

    @name.setter
    def name(self, value: pulumi.Input[str]):
        pulumi.set(self, "name", value)

    @property
    @pulumi.getter
    def value(self) -> pulumi.Input[str]:
        return pulumi.get(self, "value")

    @value.setter
    def value(self, value: pulumi.Input[str]):
        pulumi.set(self, "value", value)

    @property
    @pulumi.getter
    def code(self) -> Optional[pulumi.Input[bool]]:
        return pulumi.get(self, "code")

    @code.setter
    def code(self, value: Optional[pulumi.Input[bool]]):
        pulumi.set(self, "code", value)
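# ExtVars and Tlas entries each carry a (name, value, code) triple. As an
# illustration only — this helper is hypothetical, not part of the SDK —
# here is how such entries map onto flags of the `jsonnet` CLI, where the
# `code` flag selects --ext-code/--tla-code instead of the string variants:
def _example_jsonnet_flags(ext_vars=(), tlas=(), libs=()):
    flags = []
    for lib in libs:
        flags += ["--jpath", lib]  # additional library search dirs
    for v in ext_vars:
        flags += ["--ext-code" if v.get("code") else "--ext-str",
                  f"{v['name']}={v['value']}"]
    for t in tlas:
        flags += ["--tla-code" if t.get("code") else "--tla-str",
                  f"{t['name']}={t['value']}"]
    return flags


# _example_jsonnet_flags(ext_vars=[{"name": "env", "value": "prod"}],
#                        tlas=[{"name": "replicas", "value": "3", "code": True}])
# -> ["--ext-str", "env=prod", "--tla-code", "replicas=3"]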
@pulumi.input_type
class ApplicationSpecSourceHelmArgs:
    def __init__(__self__, *,
                 file_parameters: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationSpecSourceHelmFileParametersArgs']]]] = None,
                 parameters: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationSpecSourceHelmParametersArgs']]]] = None,
                 release_name: Optional[pulumi.Input[str]] = None,
                 value_files: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
                 values: Optional[pulumi.Input[str]] = None):
        """
        Helm holds helm specific options
        :param pulumi.Input[Sequence[pulumi.Input['ApplicationSpecSourceHelmFileParametersArgs']]] file_parameters: FileParameters are file parameters to the helm template
        :param pulumi.Input[Sequence[pulumi.Input['ApplicationSpecSourceHelmParametersArgs']]] parameters: Parameters are parameters to the helm template
        :param pulumi.Input[str] release_name: The Helm release name. If omitted it will use the application name
        :param pulumi.Input[Sequence[pulumi.Input[str]]] value_files: ValuesFiles is a list of Helm value files to use when generating a template
        :param pulumi.Input[str] values: Values is Helm values, typically defined as a block
        """
        if file_parameters is not None:
            pulumi.set(__self__, "file_parameters", file_parameters)
        if parameters is not None:
            pulumi.set(__self__, "parameters", parameters)
        if release_name is not None:
            pulumi.set(__self__, "release_name", release_name)
        if value_files is not None:
            pulumi.set(__self__, "value_files", value_files)
        if values is not None:
            pulumi.set(__self__, "values", values)

    @property
    @pulumi.getter(name="fileParameters")
    def file_parameters(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationSpecSourceHelmFileParametersArgs']]]]:
        """
        FileParameters are file parameters to the helm template
        """
        return pulumi.get(self, "file_parameters")

    @file_parameters.setter
    def file_parameters(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationSpecSourceHelmFileParametersArgs']]]]):
        pulumi.set(self, "file_parameters", value)

    @property
    @pulumi.getter
    def parameters(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationSpecSourceHelmParametersArgs']]]]:
        """
        Parameters are parameters to the helm template
        """
        return pulumi.get(self, "parameters")

    @parameters.setter
    def parameters(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationSpecSourceHelmParametersArgs']]]]):
        pulumi.set(self, "parameters", value)

    @property
    @pulumi.getter(name="releaseName")
    def release_name(self) -> Optional[pulumi.Input[str]]:
        """
        The Helm release name. If omitted it will use the application name
        """
        return pulumi.get(self, "release_name")

    @release_name.setter
    def release_name(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "release_name", value)

    @property
    @pulumi.getter(name="valueFiles")
    def value_files(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
        """
        ValuesFiles is a list of Helm value files to use when generating a template
        """
        return pulumi.get(self, "value_files")

    @value_files.setter
    def value_files(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
        pulumi.set(self, "value_files", value)

    @property
    @pulumi.getter
    def values(self) -> Optional[pulumi.Input[str]]:
        """
        Values is Helm values, typically defined as a block
        """
        return pulumi.get(self, "values")

    @values.setter
    def values(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "values", value)


@pulumi.input_type
class ApplicationSpecSourceHelmFileParametersArgs:
    def __init__(__self__, *,
                 name: Optional[pulumi.Input[str]] = None,
                 path: Optional[pulumi.Input[str]] = None):
        """
        HelmFileParameter is a file parameter to a helm template
        :param pulumi.Input[str] name: Name is the name of the helm parameter
        :param pulumi.Input[str] path: Path is the path value for the helm parameter
        """
        if name is not None:
            pulumi.set(__self__, "name", name)
        if path is not None:
            pulumi.set(__self__, "path", path)

    @property
    @pulumi.getter
    def name(self) -> Optional[pulumi.Input[str]]:
        """
        Name is the name of the helm parameter
        """
        return pulumi.get(self, "name")

    @name.setter
    def name(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "name", value)

    @property
    @pulumi.getter
    def path(self) -> Optional[pulumi.Input[str]]:
        """
        Path is the path value for the helm parameter
        """
        return pulumi.get(self, "path")

    @path.setter
    def path(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "path", value)
@pulumi.input_type
class ApplicationSpecSourceHelmParametersArgs:
    def __init__(__self__, *,
                 force_string: Optional[pulumi.Input[bool]] = None,
                 name: Optional[pulumi.Input[str]] = None,
                 value: Optional[pulumi.Input[str]] = None):
        """
        HelmParameter is a parameter to a helm template
        :param pulumi.Input[bool] force_string: ForceString determines whether to tell Helm to interpret booleans and numbers as strings
        :param pulumi.Input[str] name: Name is the name of the helm parameter
        :param pulumi.Input[str] value: Value is the value for the helm parameter
        """
        if force_string is not None:
            pulumi.set(__self__, "force_string", force_string)
        if name is not None:
            pulumi.set(__self__, "name", name)
        if value is not None:
            pulumi.set(__self__, "value", value)

    @property
    @pulumi.getter(name="forceString")
    def force_string(self) -> Optional[pulumi.Input[bool]]:
        """
        ForceString determines whether to tell Helm to interpret booleans and numbers as strings
        """
        return pulumi.get(self, "force_string")

    @force_string.setter
    def force_string(self, value: Optional[pulumi.Input[bool]]):
        pulumi.set(self, "force_string", value)

    @property
    @pulumi.getter
    def name(self) -> Optional[pulumi.Input[str]]:
        """
        Name is the name of the helm parameter
        """
        return pulumi.get(self, "name")

    @name.setter
    def name(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "name", value)

    @property
    @pulumi.getter
    def value(self) -> Optional[pulumi.Input[str]]:
        """
        Value is the value for the helm parameter
        """
        return pulumi.get(self, "value")

    @value.setter
    def value(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "value", value)
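# ForceString exists because `helm --set` would otherwise coerce values
# such as "true" or "1.10" into booleans/numbers. The helper below is a
# hypothetical sketch, not part of the SDK, of how these parameters and
# value files translate into `helm template` arguments, using
# --set-string when force_string is set:
def _example_helm_template_flags(parameters=(), value_files=()):
    flags = []
    for f in value_files:
        flags += ["--values", f]
    for p in parameters:
        opt = "--set-string" if p.get("force_string") else "--set"
        flags += [opt, f"{p['name']}={p['value']}"]
    return flags


# _example_helm_template_flags(
#     parameters=[{"name": "image.tag", "value": "1.21", "force_string": True}],
#     value_files=["values-prod.yaml"])
# -> ["--values", "values-prod.yaml", "--set-string", "image.tag=1.21"]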
@pulumi.input_type
class ApplicationSpecSourceKsonnetArgs:
    def __init__(__self__, *,
                 environment: Optional[pulumi.Input[str]] = None,
                 parameters: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationSpecSourceKsonnetParametersArgs']]]] = None):
        """
        Ksonnet holds ksonnet specific options
        :param pulumi.Input[str] environment: Environment is a ksonnet application environment name
        :param pulumi.Input[Sequence[pulumi.Input['ApplicationSpecSourceKsonnetParametersArgs']]] parameters: Parameters are a list of ksonnet component parameter override values
        """
        if environment is not None:
            pulumi.set(__self__, "environment", environment)
        if parameters is not None:
            pulumi.set(__self__, "parameters", parameters)

    @property
    @pulumi.getter
    def environment(self) -> Optional[pulumi.Input[str]]:
        """
        Environment is a ksonnet application environment name
        """
        return pulumi.get(self, "environment")

    @environment.setter
    def environment(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "environment", value)

    @property
    @pulumi.getter
    def parameters(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationSpecSourceKsonnetParametersArgs']]]]:
        """
        Parameters are a list of ksonnet component parameter override values
        """
        return pulumi.get(self, "parameters")

    @parameters.setter
    def parameters(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationSpecSourceKsonnetParametersArgs']]]]):
        pulumi.set(self, "parameters", value)


@pulumi.input_type
class ApplicationSpecSourceKsonnetParametersArgs:
    def __init__(__self__, *,
                 name: pulumi.Input[str],
                 value: pulumi.Input[str],
                 component: Optional[pulumi.Input[str]] = None):
        """
        KsonnetParameter is a ksonnet component parameter
        """
        pulumi.set(__self__, "name", name)
        pulumi.set(__self__, "value", value)
        if component is not None:
            pulumi.set(__self__, "component", component)

    @property
    @pulumi.getter
    def name(self) -> pulumi.Input[str]:
        return pulumi.get(self, "name")

    @name.setter
    def name(self, value: pulumi.Input[str]):
        pulumi.set(self, "name", value)

    @property
    @pulumi.getter
    def value(self) -> pulumi.Input[str]:
        return pulumi.get(self, "value")

    @value.setter
    def value(self, value: pulumi.Input[str]):
        pulumi.set(self, "value", value)

    @property
    @pulumi.getter
    def component(self) -> Optional[pulumi.Input[str]]:
        return pulumi.get(self, "component")

    @component.setter
    def component(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "component", value)
@pulumi.input_type
class ApplicationSpecSourceKustomizeArgs:
    def __init__(__self__, *,
                 common_labels: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
                 images: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
                 name_prefix: Optional[pulumi.Input[str]] = None,
                 name_suffix: Optional[pulumi.Input[str]] = None,
                 version: Optional[pulumi.Input[str]] = None):
        """
        Kustomize holds kustomize specific options
        :param pulumi.Input[Mapping[str, pulumi.Input[str]]] common_labels: CommonLabels adds additional kustomize commonLabels
        :param pulumi.Input[Sequence[pulumi.Input[str]]] images: Images are kustomize image overrides
        :param pulumi.Input[str] name_prefix: NamePrefix is a prefix prepended to resources for kustomize apps
        :param pulumi.Input[str] name_suffix: NameSuffix is a suffix appended to resources for kustomize apps
        :param pulumi.Input[str] version: Version contains optional Kustomize version
        """
        if common_labels is not None:
            pulumi.set(__self__, "common_labels", common_labels)
        if images is not None:
            pulumi.set(__self__, "images", images)
        if name_prefix is not None:
            pulumi.set(__self__, "name_prefix", name_prefix)
        if name_suffix is not None:
            pulumi.set(__self__, "name_suffix", name_suffix)
        if version is not None:
            pulumi.set(__self__, "version", version)

    @property
    @pulumi.getter(name="commonLabels")
    def common_labels(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
        """
        CommonLabels adds additional kustomize commonLabels
        """
        return pulumi.get(self, "common_labels")

    @common_labels.setter
    def common_labels(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
        pulumi.set(self, "common_labels", value)

    @property
    @pulumi.getter
    def images(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
        """
        Images are kustomize image overrides
        """
        return pulumi.get(self, "images")

    @images.setter
    def images(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
        pulumi.set(self, "images", value)

    @property
    @pulumi.getter(name="namePrefix")
    def name_prefix(self) -> Optional[pulumi.Input[str]]:
        """
        NamePrefix is a prefix prepended to resources for kustomize apps
        """
        return pulumi.get(self, "name_prefix")

    @name_prefix.setter
    def name_prefix(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "name_prefix", value)

    @property
    @pulumi.getter(name="nameSuffix")
    def name_suffix(self) -> Optional[pulumi.Input[str]]:
        """
        NameSuffix is a suffix appended to resources for kustomize apps
        """
        return pulumi.get(self, "name_suffix")

    @name_suffix.setter
    def name_suffix(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "name_suffix", value)

    @property
    @pulumi.getter
    def version(self) -> Optional[pulumi.Input[str]]:
        """
        Version contains optional Kustomize version
        """
        return pulumi.get(self, "version")

    @version.setter
    def version(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "version", value)
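# NamePrefix/NameSuffix rewrite each resource's metadata.name, and
# CommonLabels are merged into metadata.labels. A hypothetical helper
# (not part of the SDK) showing the effect of these overrides on a
# single resource dict:
def _example_apply_kustomize_overrides(resource, common_labels=None,
                                       name_prefix="", name_suffix=""):
    out = dict(resource)
    meta = dict(out.get("metadata", {}))
    meta["name"] = f"{name_prefix}{meta.get('name', '')}{name_suffix}"
    labels = dict(meta.get("labels", {}))
    labels.update(common_labels or {})
    meta["labels"] = labels
    out["metadata"] = meta
    return out


# _example_apply_kustomize_overrides({"metadata": {"name": "web"}},
#                                    common_labels={"team": "a"},
#                                    name_prefix="dev-", name_suffix="-v1")
# -> {"metadata": {"name": "dev-web-v1", "labels": {"team": "a"}}}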
@pulumi.input_type
class ApplicationSpecSourcePluginArgs:
    def __init__(__self__, *,
                 env: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationSpecSourcePluginEnvArgs']]]] = None,
                 name: Optional[pulumi.Input[str]] = None):
        """
        ConfigManagementPlugin holds config management plugin specific options
        """
        if env is not None:
            pulumi.set(__self__, "env", env)
        if name is not None:
            pulumi.set(__self__, "name", name)

    @property
    @pulumi.getter
    def env(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationSpecSourcePluginEnvArgs']]]]:
        return pulumi.get(self, "env")

    @env.setter
    def env(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationSpecSourcePluginEnvArgs']]]]):
        pulumi.set(self, "env", value)

    @property
    @pulumi.getter
    def name(self) -> Optional[pulumi.Input[str]]:
        return pulumi.get(self, "name")

    @name.setter
    def name(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "name", value)


@pulumi.input_type
class ApplicationSpecSourcePluginEnvArgs:
    def __init__(__self__, *,
                 name: pulumi.Input[str],
                 value: pulumi.Input[str]):
        """
        :param pulumi.Input[str] name: The name, usually uppercase
        :param pulumi.Input[str] value: The value
        """
        pulumi.set(__self__, "name", name)
        pulumi.set(__self__, "value", value)

    @property
    @pulumi.getter
    def name(self) -> pulumi.Input[str]:
        """
        The name, usually uppercase
        """
        return pulumi.get(self, "name")

    @name.setter
    def name(self, value: pulumi.Input[str]):
        pulumi.set(self, "name", value)

    @property
    @pulumi.getter
    def value(self) -> pulumi.Input[str]:
        """
        The value
        """
        return pulumi.get(self, "value")

    @value.setter
    def value(self, value: pulumi.Input[str]):
        pulumi.set(self, "value", value)
@pulumi.input_type
class ApplicationSpecSyncPolicyArgs:
    def __init__(__self__, *,
                 automated: Optional[pulumi.Input['ApplicationSpecSyncPolicyAutomatedArgs']] = None,
                 retry: Optional[pulumi.Input['ApplicationSpecSyncPolicyRetryArgs']] = None,
                 sync_options: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None):
        """
        SyncPolicy controls when a sync will be performed
        :param pulumi.Input['ApplicationSpecSyncPolicyAutomatedArgs'] automated: Automated will keep an application synced to the target revision
        :param pulumi.Input['ApplicationSpecSyncPolicyRetryArgs'] retry: Retry controls failed sync retry behavior
        :param pulumi.Input[Sequence[pulumi.Input[str]]] sync_options: Options allow you to specify whole app sync-options
        """
        if automated is not None:
            pulumi.set(__self__, "automated", automated)
        if retry is not None:
            pulumi.set(__self__, "retry", retry)
        if sync_options is not None:
            pulumi.set(__self__, "sync_options", sync_options)

    @property
    @pulumi.getter
    def automated(self) -> Optional[pulumi.Input['ApplicationSpecSyncPolicyAutomatedArgs']]:
        """
        Automated will keep an application synced to the target revision
        """
        return pulumi.get(self, "automated")

    @automated.setter
    def automated(self, value: Optional[pulumi.Input['ApplicationSpecSyncPolicyAutomatedArgs']]):
        pulumi.set(self, "automated", value)

    @property
    @pulumi.getter
    def retry(self) -> Optional[pulumi.Input['ApplicationSpecSyncPolicyRetryArgs']]:
        """
        Retry controls failed sync retry behavior
        """
        return pulumi.get(self, "retry")

    @retry.setter
    def retry(self, value: Optional[pulumi.Input['ApplicationSpecSyncPolicyRetryArgs']]):
        pulumi.set(self, "retry", value)

    @property
    @pulumi.getter(name="syncOptions")
    def sync_options(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
        """
        Options allow you to specify whole app sync-options
        """
        return pulumi.get(self, "sync_options")

    @sync_options.setter
    def sync_options(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
        pulumi.set(self, "sync_options", value)
@pulumi.input_type
class ApplicationSpecSyncPolicyAutomatedArgs:
    def __init__(__self__, *,
                 prune: Optional[pulumi.Input[bool]] = None,
                 self_heal: Optional[pulumi.Input[bool]] = None):
        """
        Automated will keep an application synced to the target revision
        :param pulumi.Input[bool] prune: Prune will prune resources automatically as part of automated sync (default: false)
        :param pulumi.Input[bool] self_heal: SelfHeal enables auto-syncing if the live state deviates from the desired state (default: false)
        """
        if prune is not None:
            pulumi.set(__self__, "prune", prune)
        if self_heal is not None:
            pulumi.set(__self__, "self_heal", self_heal)

    @property
    @pulumi.getter
    def prune(self) -> Optional[pulumi.Input[bool]]:
        """
        Prune will prune resources automatically as part of automated sync (default: false)
        """
        return pulumi.get(self, "prune")

    @prune.setter
    def prune(self, value: Optional[pulumi.Input[bool]]):
        pulumi.set(self, "prune", value)

    @property
    @pulumi.getter(name="selfHeal")
    def self_heal(self) -> Optional[pulumi.Input[bool]]:
        """
        SelfHeal enables auto-syncing if the live state deviates from the desired state (default: false)
        """
        return pulumi.get(self, "self_heal")

    @self_heal.setter
    def self_heal(self, value: Optional[pulumi.Input[bool]]):
        pulumi.set(self, "self_heal", value)
@pulumi.input_type
class ApplicationSpecSyncPolicyRetryArgs:
    def __init__(__self__, *,
                 backoff: Optional[pulumi.Input['ApplicationSpecSyncPolicyRetryBackoffArgs']] = None,
                 limit: Optional[pulumi.Input[int]] = None):
        """
        Retry controls failed sync retry behavior
        :param pulumi.Input['ApplicationSpecSyncPolicyRetryBackoffArgs'] backoff: Backoff is a backoff strategy
        :param pulumi.Input[int] limit: Limit is the maximum number of attempts when retrying a failed sync
        """
        if backoff is not None:
            pulumi.set(__self__, "backoff", backoff)
        if limit is not None:
            pulumi.set(__self__, "limit", limit)

    @property
    @pulumi.getter
    def backoff(self) -> Optional[pulumi.Input['ApplicationSpecSyncPolicyRetryBackoffArgs']]:
        """
        Backoff is a backoff strategy
        """
        return pulumi.get(self, "backoff")

    @backoff.setter
    def backoff(self, value: Optional[pulumi.Input['ApplicationSpecSyncPolicyRetryBackoffArgs']]):
        pulumi.set(self, "backoff", value)

    @property
    @pulumi.getter
    def limit(self) -> Optional[pulumi.Input[int]]:
        """
        Limit is the maximum number of attempts when retrying a failed sync
        """
        return pulumi.get(self, "limit")

    @limit.setter
    def limit(self, value: Optional[pulumi.Input[int]]):
        pulumi.set(self, "limit", value)
@pulumi.input_type
class ApplicationSpecSyncPolicyRetryBackoffArgs:
    def __init__(__self__, *,
                 duration: Optional[pulumi.Input[str]] = None,
                 factor: Optional[pulumi.Input[int]] = None,
                 max_duration: Optional[pulumi.Input[str]] = None):
        """
        Backoff is a backoff strategy
        :param pulumi.Input[str] duration: Duration is the amount to back off. Default unit is seconds, but could also be a duration (e.g. "2m", "1h")
        :param pulumi.Input[int] factor: Factor is a factor to multiply the base duration after each failed retry
        :param pulumi.Input[str] max_duration: MaxDuration is the maximum amount of time allowed for the backoff strategy
        """
        if duration is not None:
            pulumi.set(__self__, "duration", duration)
        if factor is not None:
            pulumi.set(__self__, "factor", factor)
        if max_duration is not None:
            pulumi.set(__self__, "max_duration", max_duration)

    @property
    @pulumi.getter
    def duration(self) -> Optional[pulumi.Input[str]]:
        """
        Duration is the amount to back off. Default unit is seconds, but could also be a duration (e.g. "2m", "1h")
        """
        return pulumi.get(self, "duration")

    @duration.setter
    def duration(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "duration", value)

    @property
    @pulumi.getter
    def factor(self) -> Optional[pulumi.Input[int]]:
        """
        Factor is a factor to multiply the base duration after each failed retry
        """
        return pulumi.get(self, "factor")

    @factor.setter
    def factor(self, value: Optional[pulumi.Input[int]]):
        pulumi.set(self, "factor", value)

    @property
    @pulumi.getter(name="maxDuration")
    def max_duration(self) -> Optional[pulumi.Input[str]]:
        """
        MaxDuration is the maximum amount of time allowed for the backoff strategy
        """
        return pulumi.get(self, "max_duration")

    @max_duration.setter
    def max_duration(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "max_duration", value)
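# Duration, Factor, Limit and MaxDuration together describe a capped
# exponential backoff. A hypothetical sketch (not part of the SDK) of the
# successive wait times a controller following these fields would use,
# with plain seconds standing in for parsed duration strings:
def _example_retry_delays(duration=5, factor=2, limit=5, max_duration=180):
    delays, d = [], duration
    for _ in range(limit):
        delays.append(min(d, max_duration))  # cap each wait at max_duration
        d *= factor
    return delays


# _example_retry_delays(5, 2, 5, 60) -> [5, 10, 20, 40, 60]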
@pulumi.input_type
class ApplicationStatusArgs:
def __init__(__self__, *,
conditions: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusConditionsArgs']]]] = None,
health: Optional[pulumi.Input['ApplicationStatusHealthArgs']] = None,
history: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusHistoryArgs']]]] = None,
observed_at: Optional[pulumi.Input[str]] = None,
operation_state: Optional[pulumi.Input['ApplicationStatusOperationStateArgs']] = None,
reconciled_at: Optional[pulumi.Input[str]] = None,
resources: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusResourcesArgs']]]] = None,
source_type: Optional[pulumi.Input[str]] = None,
summary: Optional[pulumi.Input['ApplicationStatusSummaryArgs']] = None,
sync: Optional[pulumi.Input['ApplicationStatusSyncArgs']] = None):
"""
ApplicationStatus contains information about application sync, health status
:param pulumi.Input[Sequence[pulumi.Input['ApplicationStatusHistoryArgs']]] history: RevisionHistories is a array of history, oldest first and newest last
:param pulumi.Input[str] observed_at: ObservedAt indicates when the application state was updated without querying latest git state Deprecated: controller no longer updates ObservedAt field
:param pulumi.Input['ApplicationStatusOperationStateArgs'] operation_state: OperationState contains information about state of currently performing operation on application.
:param pulumi.Input[str] reconciled_at: ReconciledAt indicates when the application state was reconciled using the latest git version
:param pulumi.Input['ApplicationStatusSyncArgs'] sync: SyncStatus is a comparison result of application spec and deployed application.
"""
if conditions is not None:
pulumi.set(__self__, "conditions", conditions)
if health is not None:
pulumi.set(__self__, "health", health)
if history is not None:
pulumi.set(__self__, "history", history)
if observed_at is not None:
pulumi.set(__self__, "observed_at", observed_at)
if operation_state is not None:
pulumi.set(__self__, "operation_state", operation_state)
if reconciled_at is not None:
pulumi.set(__self__, "reconciled_at", reconciled_at)
if resources is not None:
pulumi.set(__self__, "resources", resources)
if source_type is not None:
pulumi.set(__self__, "source_type", source_type)
if summary is not None:
pulumi.set(__self__, "summary", summary)
if sync is not None:
pulumi.set(__self__, "sync", sync)
@property
@pulumi.getter
def conditions(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusConditionsArgs']]]]:
return pulumi.get(self, "conditions")
@conditions.setter
def conditions(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusConditionsArgs']]]]):
pulumi.set(self, "conditions", value)
@property
@pulumi.getter
def health(self) -> Optional[pulumi.Input['ApplicationStatusHealthArgs']]:
return pulumi.get(self, "health")
@health.setter
def health(self, value: Optional[pulumi.Input['ApplicationStatusHealthArgs']]):
pulumi.set(self, "health", value)
@property
@pulumi.getter
def history(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusHistoryArgs']]]]:
"""
RevisionHistories is a array of history, oldest first and newest last
"""
return pulumi.get(self, "history")
@history.setter
def history(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusHistoryArgs']]]]):
pulumi.set(self, "history", value)
@property
@pulumi.getter(name="observedAt")
def observed_at(self) -> Optional[pulumi.Input[str]]:
"""
ObservedAt indicates when the application state was updated without querying the latest git state. Deprecated: the controller no longer updates the ObservedAt field.
"""
return pulumi.get(self, "observed_at")
@observed_at.setter
def observed_at(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "observed_at", value)
@property
@pulumi.getter(name="operationState")
def operation_state(self) -> Optional[pulumi.Input['ApplicationStatusOperationStateArgs']]:
"""
OperationState contains information about the state of the currently executing operation on the application.
"""
return pulumi.get(self, "operation_state")
@operation_state.setter
def operation_state(self, value: Optional[pulumi.Input['ApplicationStatusOperationStateArgs']]):
pulumi.set(self, "operation_state", value)
@property
@pulumi.getter(name="reconciledAt")
def reconciled_at(self) -> Optional[pulumi.Input[str]]:
"""
ReconciledAt indicates when the application state was reconciled using the latest git version
"""
return pulumi.get(self, "reconciled_at")
@reconciled_at.setter
def reconciled_at(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "reconciled_at", value)
@property
@pulumi.getter
def resources(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusResourcesArgs']]]]:
return pulumi.get(self, "resources")
@resources.setter
def resources(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusResourcesArgs']]]]):
pulumi.set(self, "resources", value)
@property
@pulumi.getter(name="sourceType")
def source_type(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "source_type")
@source_type.setter
def source_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "source_type", value)
@property
@pulumi.getter
def summary(self) -> Optional[pulumi.Input['ApplicationStatusSummaryArgs']]:
return pulumi.get(self, "summary")
@summary.setter
def summary(self, value: Optional[pulumi.Input['ApplicationStatusSummaryArgs']]):
pulumi.set(self, "summary", value)
@property
@pulumi.getter
def sync(self) -> Optional[pulumi.Input['ApplicationStatusSyncArgs']]:
"""
SyncStatus is a comparison result of application spec and deployed application.
"""
return pulumi.get(self, "sync")
@sync.setter
def sync(self, value: Optional[pulumi.Input['ApplicationStatusSyncArgs']]):
pulumi.set(self, "sync", value)
@pulumi.input_type
class ApplicationStatusConditionsArgs:
def __init__(__self__, *,
message: pulumi.Input[str],
type: pulumi.Input[str],
last_transition_time: Optional[pulumi.Input[str]] = None):
"""
ApplicationCondition contains details about the current application condition
:param pulumi.Input[str] message: Message contains a human-readable message indicating details about the condition
:param pulumi.Input[str] type: Type is an application condition type
:param pulumi.Input[str] last_transition_time: LastTransitionTime is the time the condition was first observed.
"""
pulumi.set(__self__, "message", message)
pulumi.set(__self__, "type", type)
if last_transition_time is not None:
pulumi.set(__self__, "last_transition_time", last_transition_time)
@property
@pulumi.getter
def message(self) -> pulumi.Input[str]:
"""
Message contains a human-readable message indicating details about the condition
"""
return pulumi.get(self, "message")
@message.setter
def message(self, value: pulumi.Input[str]):
pulumi.set(self, "message", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type is an application condition type
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter(name="lastTransitionTime")
def last_transition_time(self) -> Optional[pulumi.Input[str]]:
"""
LastTransitionTime is the time the condition was first observed.
"""
return pulumi.get(self, "last_transition_time")
@last_transition_time.setter
def last_transition_time(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "last_transition_time", value)
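# Example (editor's sketch, not part of the generated SDK): constructing an
# ApplicationStatusConditionsArgs. `message` and `type` are required;
# `last_transition_time` is an optional RFC 3339 timestamp string. All values
# below are illustrative.
#
#     condition = ApplicationStatusConditionsArgs(
#         message="manifest generation failed",
#         type="ComparisonError",
#         last_transition_time="2021-06-01T12:00:00Z",
#     )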
@pulumi.input_type
class ApplicationStatusHealthArgs:
def __init__(__self__, *,
message: Optional[pulumi.Input[str]] = None,
status: Optional[pulumi.Input[str]] = None):
"""
:param pulumi.Input[str] status: Represents resource health status
"""
if message is not None:
pulumi.set(__self__, "message", message)
if status is not None:
pulumi.set(__self__, "status", status)
@property
@pulumi.getter
def message(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "message")
@message.setter
def message(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "message", value)
@property
@pulumi.getter
def status(self) -> Optional[pulumi.Input[str]]:
"""
Represents resource health status
"""
return pulumi.get(self, "status")
@status.setter
def status(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "status", value)
@pulumi.input_type
class ApplicationStatusHistoryArgs:
def __init__(__self__, *,
deployed_at: pulumi.Input[str],
id: pulumi.Input[int],
revision: pulumi.Input[str],
deploy_started_at: Optional[pulumi.Input[str]] = None,
source: Optional[pulumi.Input['ApplicationStatusHistorySourceArgs']] = None):
"""
RevisionHistory contains information relevant to an application deployment
:param pulumi.Input[str] deployed_at: DeployedAt holds the time the deployment completed
:param pulumi.Input[int] id: ID is an auto-incrementing identifier of the RevisionHistory
:param pulumi.Input[str] revision: Revision holds the revision of the sync
:param pulumi.Input[str] deploy_started_at: DeployStartedAt holds the time the deployment started
:param pulumi.Input['ApplicationStatusHistorySourceArgs'] source: ApplicationSource contains information about the Git repository, the path within the repository, and the target application environment.
"""
pulumi.set(__self__, "deployed_at", deployed_at)
pulumi.set(__self__, "id", id)
pulumi.set(__self__, "revision", revision)
if deploy_started_at is not None:
pulumi.set(__self__, "deploy_started_at", deploy_started_at)
if source is not None:
pulumi.set(__self__, "source", source)
@property
@pulumi.getter(name="deployedAt")
def deployed_at(self) -> pulumi.Input[str]:
"""
DeployedAt holds the time the deployment completed
"""
return pulumi.get(self, "deployed_at")
@deployed_at.setter
def deployed_at(self, value: pulumi.Input[str]):
pulumi.set(self, "deployed_at", value)
@property
@pulumi.getter
def id(self) -> pulumi.Input[int]:
"""
ID is an auto-incrementing identifier of the RevisionHistory
"""
return pulumi.get(self, "id")
@id.setter
def id(self, value: pulumi.Input[int]):
pulumi.set(self, "id", value)
@property
@pulumi.getter
def revision(self) -> pulumi.Input[str]:
"""
Revision holds the revision of the sync
"""
return pulumi.get(self, "revision")
@revision.setter
def revision(self, value: pulumi.Input[str]):
pulumi.set(self, "revision", value)
@property
@pulumi.getter(name="deployStartedAt")
def deploy_started_at(self) -> Optional[pulumi.Input[str]]:
"""
DeployStartedAt holds the time the deployment started
"""
return pulumi.get(self, "deploy_started_at")
@deploy_started_at.setter
def deploy_started_at(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "deploy_started_at", value)
@property
@pulumi.getter
def source(self) -> Optional[pulumi.Input['ApplicationStatusHistorySourceArgs']]:
"""
ApplicationSource contains information about the Git repository, the path within the repository, and the target application environment.
"""
return pulumi.get(self, "source")
@source.setter
def source(self, value: Optional[pulumi.Input['ApplicationStatusHistorySourceArgs']]):
pulumi.set(self, "source", value)
@pulumi.input_type
class ApplicationStatusHistorySourceArgs:
def __init__(__self__, *,
repo_url: pulumi.Input[str],
chart: Optional[pulumi.Input[str]] = None,
directory: Optional[pulumi.Input['ApplicationStatusHistorySourceDirectoryArgs']] = None,
helm: Optional[pulumi.Input['ApplicationStatusHistorySourceHelmArgs']] = None,
ksonnet: Optional[pulumi.Input['ApplicationStatusHistorySourceKsonnetArgs']] = None,
kustomize: Optional[pulumi.Input['ApplicationStatusHistorySourceKustomizeArgs']] = None,
path: Optional[pulumi.Input[str]] = None,
plugin: Optional[pulumi.Input['ApplicationStatusHistorySourcePluginArgs']] = None,
target_revision: Optional[pulumi.Input[str]] = None):
"""
ApplicationSource contains information about the Git repository, the path within the repository, and the target application environment.
:param pulumi.Input[str] repo_url: RepoURL is the repository URL of the application manifests
:param pulumi.Input[str] chart: Chart is a Helm chart name
:param pulumi.Input['ApplicationStatusHistorySourceDirectoryArgs'] directory: Directory holds path/directory specific options
:param pulumi.Input['ApplicationStatusHistorySourceHelmArgs'] helm: Helm holds helm specific options
:param pulumi.Input['ApplicationStatusHistorySourceKsonnetArgs'] ksonnet: Ksonnet holds ksonnet specific options
:param pulumi.Input['ApplicationStatusHistorySourceKustomizeArgs'] kustomize: Kustomize holds kustomize specific options
:param pulumi.Input[str] path: Path is a directory path within the Git repository
:param pulumi.Input['ApplicationStatusHistorySourcePluginArgs'] plugin: ConfigManagementPlugin holds config management plugin specific options
:param pulumi.Input[str] target_revision: TargetRevision defines the commit, tag, or branch to which the application should sync. If omitted, it will sync to HEAD
"""
pulumi.set(__self__, "repo_url", repo_url)
if chart is not None:
pulumi.set(__self__, "chart", chart)
if directory is not None:
pulumi.set(__self__, "directory", directory)
if helm is not None:
pulumi.set(__self__, "helm", helm)
if ksonnet is not None:
pulumi.set(__self__, "ksonnet", ksonnet)
if kustomize is not None:
pulumi.set(__self__, "kustomize", kustomize)
if path is not None:
pulumi.set(__self__, "path", path)
if plugin is not None:
pulumi.set(__self__, "plugin", plugin)
if target_revision is not None:
pulumi.set(__self__, "target_revision", target_revision)
@property
@pulumi.getter(name="repoURL")
def repo_url(self) -> pulumi.Input[str]:
"""
RepoURL is the repository URL of the application manifests
"""
return pulumi.get(self, "repo_url")
@repo_url.setter
def repo_url(self, value: pulumi.Input[str]):
pulumi.set(self, "repo_url", value)
@property
@pulumi.getter
def chart(self) -> Optional[pulumi.Input[str]]:
"""
Chart is a Helm chart name
"""
return pulumi.get(self, "chart")
@chart.setter
def chart(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "chart", value)
@property
@pulumi.getter
def directory(self) -> Optional[pulumi.Input['ApplicationStatusHistorySourceDirectoryArgs']]:
"""
Directory holds path/directory specific options
"""
return pulumi.get(self, "directory")
@directory.setter
def directory(self, value: Optional[pulumi.Input['ApplicationStatusHistorySourceDirectoryArgs']]):
pulumi.set(self, "directory", value)
@property
@pulumi.getter
def helm(self) -> Optional[pulumi.Input['ApplicationStatusHistorySourceHelmArgs']]:
"""
Helm holds helm specific options
"""
return pulumi.get(self, "helm")
@helm.setter
def helm(self, value: Optional[pulumi.Input['ApplicationStatusHistorySourceHelmArgs']]):
pulumi.set(self, "helm", value)
@property
@pulumi.getter
def ksonnet(self) -> Optional[pulumi.Input['ApplicationStatusHistorySourceKsonnetArgs']]:
"""
Ksonnet holds ksonnet specific options
"""
return pulumi.get(self, "ksonnet")
@ksonnet.setter
def ksonnet(self, value: Optional[pulumi.Input['ApplicationStatusHistorySourceKsonnetArgs']]):
pulumi.set(self, "ksonnet", value)
@property
@pulumi.getter
def kustomize(self) -> Optional[pulumi.Input['ApplicationStatusHistorySourceKustomizeArgs']]:
"""
Kustomize holds kustomize specific options
"""
return pulumi.get(self, "kustomize")
@kustomize.setter
def kustomize(self, value: Optional[pulumi.Input['ApplicationStatusHistorySourceKustomizeArgs']]):
pulumi.set(self, "kustomize", value)
@property
@pulumi.getter
def path(self) -> Optional[pulumi.Input[str]]:
"""
Path is a directory path within the Git repository
"""
return pulumi.get(self, "path")
@path.setter
def path(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "path", value)
@property
@pulumi.getter
def plugin(self) -> Optional[pulumi.Input['ApplicationStatusHistorySourcePluginArgs']]:
"""
ConfigManagementPlugin holds config management plugin specific options
"""
return pulumi.get(self, "plugin")
@plugin.setter
def plugin(self, value: Optional[pulumi.Input['ApplicationStatusHistorySourcePluginArgs']]):
pulumi.set(self, "plugin", value)
@property
@pulumi.getter(name="targetRevision")
def target_revision(self) -> Optional[pulumi.Input[str]]:
"""
TargetRevision defines the commit, tag, or branch to which the application should sync. If omitted, it will sync to HEAD
"""
return pulumi.get(self, "target_revision")
@target_revision.setter
def target_revision(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "target_revision", value)
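# Example (editor's sketch, not part of the generated SDK): a minimal
# ApplicationStatusHistorySourceArgs pointing at a path within a Git
# repository. Only `repo_url` is required; the URL and path below are
# illustrative.
#
#     source = ApplicationStatusHistorySourceArgs(
#         repo_url="https://github.com/argoproj/argocd-example-apps",
#         path="guestbook",
#         target_revision="HEAD",
#     )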
@pulumi.input_type
class ApplicationStatusHistorySourceDirectoryArgs:
def __init__(__self__, *,
jsonnet: Optional[pulumi.Input['ApplicationStatusHistorySourceDirectoryJsonnetArgs']] = None,
recurse: Optional[pulumi.Input[bool]] = None):
"""
Directory holds path/directory specific options
:param pulumi.Input['ApplicationStatusHistorySourceDirectoryJsonnetArgs'] jsonnet: ApplicationSourceJsonnet holds jsonnet specific options
"""
if jsonnet is not None:
pulumi.set(__self__, "jsonnet", jsonnet)
if recurse is not None:
pulumi.set(__self__, "recurse", recurse)
@property
@pulumi.getter
def jsonnet(self) -> Optional[pulumi.Input['ApplicationStatusHistorySourceDirectoryJsonnetArgs']]:
"""
ApplicationSourceJsonnet holds jsonnet specific options
"""
return pulumi.get(self, "jsonnet")
@jsonnet.setter
def jsonnet(self, value: Optional[pulumi.Input['ApplicationStatusHistorySourceDirectoryJsonnetArgs']]):
pulumi.set(self, "jsonnet", value)
@property
@pulumi.getter
def recurse(self) -> Optional[pulumi.Input[bool]]:
return pulumi.get(self, "recurse")
@recurse.setter
def recurse(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "recurse", value)
@pulumi.input_type
class ApplicationStatusHistorySourceDirectoryJsonnetArgs:
def __init__(__self__, *,
ext_vars: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusHistorySourceDirectoryJsonnetExtVarsArgs']]]] = None,
libs: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
tlas: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusHistorySourceDirectoryJsonnetTlasArgs']]]] = None):
"""
ApplicationSourceJsonnet holds jsonnet specific options
:param pulumi.Input[Sequence[pulumi.Input['ApplicationStatusHistorySourceDirectoryJsonnetExtVarsArgs']]] ext_vars: ExtVars is a list of Jsonnet External Variables
:param pulumi.Input[Sequence[pulumi.Input[str]]] libs: Additional library search dirs
:param pulumi.Input[Sequence[pulumi.Input['ApplicationStatusHistorySourceDirectoryJsonnetTlasArgs']]] tlas: TLAS is a list of Jsonnet Top-level Arguments
"""
if ext_vars is not None:
pulumi.set(__self__, "ext_vars", ext_vars)
if libs is not None:
pulumi.set(__self__, "libs", libs)
if tlas is not None:
pulumi.set(__self__, "tlas", tlas)
@property
@pulumi.getter(name="extVars")
def ext_vars(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusHistorySourceDirectoryJsonnetExtVarsArgs']]]]:
"""
ExtVars is a list of Jsonnet External Variables
"""
return pulumi.get(self, "ext_vars")
@ext_vars.setter
def ext_vars(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusHistorySourceDirectoryJsonnetExtVarsArgs']]]]):
pulumi.set(self, "ext_vars", value)
@property
@pulumi.getter
def libs(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
Additional library search dirs
"""
return pulumi.get(self, "libs")
@libs.setter
def libs(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "libs", value)
@property
@pulumi.getter
def tlas(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusHistorySourceDirectoryJsonnetTlasArgs']]]]:
"""
TLAS is a list of Jsonnet Top-level Arguments
"""
return pulumi.get(self, "tlas")
@tlas.setter
def tlas(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusHistorySourceDirectoryJsonnetTlasArgs']]]]):
pulumi.set(self, "tlas", value)
@pulumi.input_type
class ApplicationStatusHistorySourceDirectoryJsonnetExtVarsArgs:
def __init__(__self__, *,
name: pulumi.Input[str],
value: pulumi.Input[str],
code: Optional[pulumi.Input[bool]] = None):
"""
JsonnetVar is a jsonnet variable
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "value", value)
if code is not None:
pulumi.set(__self__, "code", code)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@property
@pulumi.getter
def code(self) -> Optional[pulumi.Input[bool]]:
return pulumi.get(self, "code")
@code.setter
def code(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "code", value)
@pulumi.input_type
class ApplicationStatusHistorySourceDirectoryJsonnetTlasArgs:
def __init__(__self__, *,
name: pulumi.Input[str],
value: pulumi.Input[str],
code: Optional[pulumi.Input[bool]] = None):
"""
JsonnetVar is a jsonnet variable
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "value", value)
if code is not None:
pulumi.set(__self__, "code", code)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@property
@pulumi.getter
def code(self) -> Optional[pulumi.Input[bool]]:
return pulumi.get(self, "code")
@code.setter
def code(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "code", value)
@pulumi.input_type
class ApplicationStatusHistorySourceHelmArgs:
def __init__(__self__, *,
file_parameters: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusHistorySourceHelmFileParametersArgs']]]] = None,
parameters: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusHistorySourceHelmParametersArgs']]]] = None,
release_name: Optional[pulumi.Input[str]] = None,
value_files: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
values: Optional[pulumi.Input[str]] = None):
"""
Helm holds helm specific options
:param pulumi.Input[Sequence[pulumi.Input['ApplicationStatusHistorySourceHelmFileParametersArgs']]] file_parameters: FileParameters are file parameters to the helm template
:param pulumi.Input[Sequence[pulumi.Input['ApplicationStatusHistorySourceHelmParametersArgs']]] parameters: Parameters are parameters to the helm template
:param pulumi.Input[str] release_name: The Helm release name. If omitted, the application name is used
:param pulumi.Input[Sequence[pulumi.Input[str]]] value_files: ValuesFiles is a list of Helm value files to use when generating a template
:param pulumi.Input[str] values: Values holds Helm values, typically defined as a block
"""
if file_parameters is not None:
pulumi.set(__self__, "file_parameters", file_parameters)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if release_name is not None:
pulumi.set(__self__, "release_name", release_name)
if value_files is not None:
pulumi.set(__self__, "value_files", value_files)
if values is not None:
pulumi.set(__self__, "values", values)
@property
@pulumi.getter(name="fileParameters")
def file_parameters(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusHistorySourceHelmFileParametersArgs']]]]:
"""
FileParameters are file parameters to the helm template
"""
return pulumi.get(self, "file_parameters")
@file_parameters.setter
def file_parameters(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusHistorySourceHelmFileParametersArgs']]]]):
pulumi.set(self, "file_parameters", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusHistorySourceHelmParametersArgs']]]]:
"""
Parameters are parameters to the helm template
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusHistorySourceHelmParametersArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter(name="releaseName")
def release_name(self) -> Optional[pulumi.Input[str]]:
"""
The Helm release name. If omitted, the application name is used
"""
return pulumi.get(self, "release_name")
@release_name.setter
def release_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "release_name", value)
@property
@pulumi.getter(name="valueFiles")
def value_files(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
ValuesFiles is a list of Helm value files to use when generating a template
"""
return pulumi.get(self, "value_files")
@value_files.setter
def value_files(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "value_files", value)
@property
@pulumi.getter
def values(self) -> Optional[pulumi.Input[str]]:
"""
Values holds Helm values, typically defined as a block
"""
return pulumi.get(self, "values")
@values.setter
def values(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "values", value)
@pulumi.input_type
class ApplicationStatusHistorySourceHelmFileParametersArgs:
def __init__(__self__, *,
name: Optional[pulumi.Input[str]] = None,
path: Optional[pulumi.Input[str]] = None):
"""
HelmFileParameter is a file parameter to a helm template
:param pulumi.Input[str] name: Name is the name of the helm parameter
:param pulumi.Input[str] path: Path is the path value for the helm parameter
"""
if name is not None:
pulumi.set(__self__, "name", name)
if path is not None:
pulumi.set(__self__, "path", path)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Name is the name of the helm parameter
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def path(self) -> Optional[pulumi.Input[str]]:
"""
Path is the path value for the helm parameter
"""
return pulumi.get(self, "path")
@path.setter
def path(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "path", value)
@pulumi.input_type
class ApplicationStatusHistorySourceHelmParametersArgs:
def __init__(__self__, *,
force_string: Optional[pulumi.Input[bool]] = None,
name: Optional[pulumi.Input[str]] = None,
value: Optional[pulumi.Input[str]] = None):
"""
HelmParameter is a parameter to a helm template
:param pulumi.Input[bool] force_string: ForceString determines whether to tell Helm to interpret booleans and numbers as strings
:param pulumi.Input[str] name: Name is the name of the helm parameter
:param pulumi.Input[str] value: Value is the value for the helm parameter
"""
if force_string is not None:
pulumi.set(__self__, "force_string", force_string)
if name is not None:
pulumi.set(__self__, "name", name)
if value is not None:
pulumi.set(__self__, "value", value)
@property
@pulumi.getter(name="forceString")
def force_string(self) -> Optional[pulumi.Input[bool]]:
"""
ForceString determines whether to tell Helm to interpret booleans and numbers as strings
"""
return pulumi.get(self, "force_string")
@force_string.setter
def force_string(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "force_string", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Name is the name of the helm parameter
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def value(self) -> Optional[pulumi.Input[str]]:
"""
Value is the value for the helm parameter
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "value", value)
@pulumi.input_type
class ApplicationStatusHistorySourceKsonnetArgs:
def __init__(__self__, *,
environment: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusHistorySourceKsonnetParametersArgs']]]] = None):
"""
Ksonnet holds ksonnet specific options
:param pulumi.Input[str] environment: Environment is a ksonnet application environment name
:param pulumi.Input[Sequence[pulumi.Input['ApplicationStatusHistorySourceKsonnetParametersArgs']]] parameters: Parameters are a list of ksonnet component parameter override values
"""
if environment is not None:
pulumi.set(__self__, "environment", environment)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
@property
@pulumi.getter
def environment(self) -> Optional[pulumi.Input[str]]:
"""
Environment is a ksonnet application environment name
"""
return pulumi.get(self, "environment")
@environment.setter
def environment(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "environment", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusHistorySourceKsonnetParametersArgs']]]]:
"""
Parameters are a list of ksonnet component parameter override values
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusHistorySourceKsonnetParametersArgs']]]]):
pulumi.set(self, "parameters", value)
@pulumi.input_type
class ApplicationStatusHistorySourceKsonnetParametersArgs:
def __init__(__self__, *,
name: pulumi.Input[str],
value: pulumi.Input[str],
component: Optional[pulumi.Input[str]] = None):
"""
KsonnetParameter is a ksonnet component parameter
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "value", value)
if component is not None:
pulumi.set(__self__, "component", component)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@property
@pulumi.getter
def component(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "component")
@component.setter
def component(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "component", value)
@pulumi.input_type
class ApplicationStatusHistorySourceKustomizeArgs:
def __init__(__self__, *,
common_labels: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
images: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
name_prefix: Optional[pulumi.Input[str]] = None,
name_suffix: Optional[pulumi.Input[str]] = None,
version: Optional[pulumi.Input[str]] = None):
"""
Kustomize holds kustomize specific options
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] common_labels: CommonLabels adds additional kustomize commonLabels
:param pulumi.Input[Sequence[pulumi.Input[str]]] images: Images are kustomize image overrides
:param pulumi.Input[str] name_prefix: NamePrefix is a prefix prepended to resources for kustomize apps
:param pulumi.Input[str] name_suffix: NameSuffix is a suffix appended to resources for kustomize apps
:param pulumi.Input[str] version: Version contains optional Kustomize version
"""
if common_labels is not None:
pulumi.set(__self__, "common_labels", common_labels)
if images is not None:
pulumi.set(__self__, "images", images)
if name_prefix is not None:
pulumi.set(__self__, "name_prefix", name_prefix)
if name_suffix is not None:
pulumi.set(__self__, "name_suffix", name_suffix)
if version is not None:
pulumi.set(__self__, "version", version)
@property
@pulumi.getter(name="commonLabels")
def common_labels(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
CommonLabels adds additional kustomize commonLabels
"""
return pulumi.get(self, "common_labels")
@common_labels.setter
def common_labels(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "common_labels", value)
@property
@pulumi.getter
def images(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
Images are kustomize image overrides
"""
return pulumi.get(self, "images")
@images.setter
def images(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "images", value)
@property
@pulumi.getter(name="namePrefix")
def name_prefix(self) -> Optional[pulumi.Input[str]]:
"""
NamePrefix is a prefix prepended to resources for kustomize apps
"""
return pulumi.get(self, "name_prefix")
@name_prefix.setter
def name_prefix(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name_prefix", value)
@property
@pulumi.getter(name="nameSuffix")
def name_suffix(self) -> Optional[pulumi.Input[str]]:
"""
NameSuffix is a suffix appended to resources for kustomize apps
"""
return pulumi.get(self, "name_suffix")
@name_suffix.setter
def name_suffix(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name_suffix", value)
@property
@pulumi.getter
def version(self) -> Optional[pulumi.Input[str]]:
"""
Version contains optional Kustomize version
"""
return pulumi.get(self, "version")
@version.setter
def version(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "version", value)
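# Example (editor's sketch, not part of the generated SDK): Kustomize options
# with a common label, an image override, and a name prefix; all values are
# illustrative.
#
#     kustomize = ApplicationStatusHistorySourceKustomizeArgs(
#         common_labels={"app.kubernetes.io/managed-by": "argocd"},
#         images=["gcr.io/example/app:v1.2.3"],
#         name_prefix="prod-",
#     )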
@pulumi.input_type
class ApplicationStatusHistorySourcePluginArgs:
def __init__(__self__, *,
env: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusHistorySourcePluginEnvArgs']]]] = None,
name: Optional[pulumi.Input[str]] = None):
"""
ConfigManagementPlugin holds config management plugin specific options
"""
if env is not None:
pulumi.set(__self__, "env", env)
if name is not None:
pulumi.set(__self__, "name", name)
@property
@pulumi.getter
def env(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusHistorySourcePluginEnvArgs']]]]:
return pulumi.get(self, "env")
@env.setter
def env(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusHistorySourcePluginEnvArgs']]]]):
pulumi.set(self, "env", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@pulumi.input_type
class ApplicationStatusHistorySourcePluginEnvArgs:
def __init__(__self__, *,
name: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] name: the name, usually uppercase
:param pulumi.Input[str] value: the value
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
"""
the name, usually uppercase
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
the value
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class ApplicationStatusOperationStateArgs:
def __init__(__self__, *,
operation: pulumi.Input['ApplicationStatusOperationStateOperationArgs'],
phase: pulumi.Input[str],
started_at: pulumi.Input[str],
finished_at: Optional[pulumi.Input[str]] = None,
message: Optional[pulumi.Input[str]] = None,
retry_count: Optional[pulumi.Input[int]] = None,
sync_result: Optional[pulumi.Input['ApplicationStatusOperationStateSyncResultArgs']] = None):
"""
OperationState contains information about the state of the currently executing operation on the application.
:param pulumi.Input['ApplicationStatusOperationStateOperationArgs'] operation: Operation is the original requested operation
:param pulumi.Input[str] phase: Phase is the current phase of the operation
:param pulumi.Input[str] started_at: StartedAt contains time of operation start
:param pulumi.Input[str] finished_at: FinishedAt contains time of operation completion
:param pulumi.Input[str] message: Message holds any pertinent messages when attempting to perform operation (typically errors).
:param pulumi.Input[int] retry_count: RetryCount contains the number of operation retries
:param pulumi.Input['ApplicationStatusOperationStateSyncResultArgs'] sync_result: SyncResult is the result of a Sync operation
"""
pulumi.set(__self__, "operation", operation)
pulumi.set(__self__, "phase", phase)
pulumi.set(__self__, "started_at", started_at)
if finished_at is not None:
pulumi.set(__self__, "finished_at", finished_at)
if message is not None:
pulumi.set(__self__, "message", message)
if retry_count is not None:
pulumi.set(__self__, "retry_count", retry_count)
if sync_result is not None:
pulumi.set(__self__, "sync_result", sync_result)
@property
@pulumi.getter
def operation(self) -> pulumi.Input['ApplicationStatusOperationStateOperationArgs']:
"""
Operation is the original requested operation
"""
return pulumi.get(self, "operation")
@operation.setter
def operation(self, value: pulumi.Input['ApplicationStatusOperationStateOperationArgs']):
pulumi.set(self, "operation", value)
@property
@pulumi.getter
def phase(self) -> pulumi.Input[str]:
"""
Phase is the current phase of the operation
"""
return pulumi.get(self, "phase")
@phase.setter
def phase(self, value: pulumi.Input[str]):
pulumi.set(self, "phase", value)
@property
@pulumi.getter(name="startedAt")
def started_at(self) -> pulumi.Input[str]:
"""
StartedAt contains time of operation start
"""
return pulumi.get(self, "started_at")
@started_at.setter
def started_at(self, value: pulumi.Input[str]):
pulumi.set(self, "started_at", value)
@property
@pulumi.getter(name="finishedAt")
def finished_at(self) -> Optional[pulumi.Input[str]]:
"""
FinishedAt contains time of operation completion
"""
return pulumi.get(self, "finished_at")
@finished_at.setter
def finished_at(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "finished_at", value)
@property
@pulumi.getter
def message(self) -> Optional[pulumi.Input[str]]:
"""
Message holds any pertinent messages when attempting to perform operation (typically errors).
"""
return pulumi.get(self, "message")
@message.setter
def message(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "message", value)
@property
@pulumi.getter(name="retryCount")
def retry_count(self) -> Optional[pulumi.Input[int]]:
"""
RetryCount contains the number of operation retries
"""
return pulumi.get(self, "retry_count")
@retry_count.setter
def retry_count(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "retry_count", value)
@property
@pulumi.getter(name="syncResult")
def sync_result(self) -> Optional[pulumi.Input['ApplicationStatusOperationStateSyncResultArgs']]:
"""
SyncResult is the result of a Sync operation
"""
return pulumi.get(self, "sync_result")
@sync_result.setter
def sync_result(self, value: Optional[pulumi.Input['ApplicationStatusOperationStateSyncResultArgs']]):
pulumi.set(self, "sync_result", value)
@pulumi.input_type
class ApplicationStatusOperationStateOperationArgs:
def __init__(__self__, *,
info: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateOperationInfoArgs']]]] = None,
initiated_by: Optional[pulumi.Input['ApplicationStatusOperationStateOperationInitiatedByArgs']] = None,
retry: Optional[pulumi.Input['ApplicationStatusOperationStateOperationRetryArgs']] = None,
sync: Optional[pulumi.Input['ApplicationStatusOperationStateOperationSyncArgs']] = None):
"""
Operation is the original requested operation
:param pulumi.Input['ApplicationStatusOperationStateOperationInitiatedByArgs'] initiated_by: OperationInitiator holds information about the operation initiator
:param pulumi.Input['ApplicationStatusOperationStateOperationRetryArgs'] retry: Retry controls failed sync retry behavior
:param pulumi.Input['ApplicationStatusOperationStateOperationSyncArgs'] sync: SyncOperation contains sync operation details.
"""
if info is not None:
pulumi.set(__self__, "info", info)
if initiated_by is not None:
pulumi.set(__self__, "initiated_by", initiated_by)
if retry is not None:
pulumi.set(__self__, "retry", retry)
if sync is not None:
pulumi.set(__self__, "sync", sync)
@property
@pulumi.getter
def info(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateOperationInfoArgs']]]]:
return pulumi.get(self, "info")
@info.setter
def info(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateOperationInfoArgs']]]]):
pulumi.set(self, "info", value)
@property
@pulumi.getter(name="initiatedBy")
def initiated_by(self) -> Optional[pulumi.Input['ApplicationStatusOperationStateOperationInitiatedByArgs']]:
"""
OperationInitiator holds information about the operation initiator
"""
return pulumi.get(self, "initiated_by")
@initiated_by.setter
def initiated_by(self, value: Optional[pulumi.Input['ApplicationStatusOperationStateOperationInitiatedByArgs']]):
pulumi.set(self, "initiated_by", value)
@property
@pulumi.getter
def retry(self) -> Optional[pulumi.Input['ApplicationStatusOperationStateOperationRetryArgs']]:
"""
Retry controls failed sync retry behavior
"""
return pulumi.get(self, "retry")
@retry.setter
def retry(self, value: Optional[pulumi.Input['ApplicationStatusOperationStateOperationRetryArgs']]):
pulumi.set(self, "retry", value)
@property
@pulumi.getter
def sync(self) -> Optional[pulumi.Input['ApplicationStatusOperationStateOperationSyncArgs']]:
"""
SyncOperation contains sync operation details.
"""
return pulumi.get(self, "sync")
@sync.setter
def sync(self, value: Optional[pulumi.Input['ApplicationStatusOperationStateOperationSyncArgs']]):
pulumi.set(self, "sync", value)
@pulumi.input_type
class ApplicationStatusOperationStateOperationInfoArgs:
def __init__(__self__, *,
name: pulumi.Input[str],
value: pulumi.Input[str]):
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class ApplicationStatusOperationStateOperationInitiatedByArgs:
def __init__(__self__, *,
automated: Optional[pulumi.Input[bool]] = None,
username: Optional[pulumi.Input[str]] = None):
"""
OperationInitiator holds information about the operation initiator
:param pulumi.Input[bool] automated: Automated is set to true if operation was initiated automatically by the application controller.
:param pulumi.Input[str] username: Name of the user who started the operation.
"""
if automated is not None:
pulumi.set(__self__, "automated", automated)
if username is not None:
pulumi.set(__self__, "username", username)
@property
@pulumi.getter
def automated(self) -> Optional[pulumi.Input[bool]]:
"""
Automated is set to true if operation was initiated automatically by the application controller.
"""
return pulumi.get(self, "automated")
@automated.setter
def automated(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "automated", value)
@property
@pulumi.getter
def username(self) -> Optional[pulumi.Input[str]]:
"""
Name of the user who started the operation.
"""
return pulumi.get(self, "username")
@username.setter
def username(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "username", value)
@pulumi.input_type
class ApplicationStatusOperationStateOperationRetryArgs:
def __init__(__self__, *,
backoff: Optional[pulumi.Input['ApplicationStatusOperationStateOperationRetryBackoffArgs']] = None,
limit: Optional[pulumi.Input[int]] = None):
"""
Retry controls failed sync retry behavior
:param pulumi.Input['ApplicationStatusOperationStateOperationRetryBackoffArgs'] backoff: Backoff is a backoff strategy
:param pulumi.Input[int] limit: Limit is the maximum number of attempts when retrying a failed sync
"""
if backoff is not None:
pulumi.set(__self__, "backoff", backoff)
if limit is not None:
pulumi.set(__self__, "limit", limit)
@property
@pulumi.getter
def backoff(self) -> Optional[pulumi.Input['ApplicationStatusOperationStateOperationRetryBackoffArgs']]:
"""
Backoff is a backoff strategy
"""
return pulumi.get(self, "backoff")
@backoff.setter
def backoff(self, value: Optional[pulumi.Input['ApplicationStatusOperationStateOperationRetryBackoffArgs']]):
pulumi.set(self, "backoff", value)
@property
@pulumi.getter
def limit(self) -> Optional[pulumi.Input[int]]:
"""
Limit is the maximum number of attempts when retrying a failed sync
"""
return pulumi.get(self, "limit")
@limit.setter
def limit(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "limit", value)
@pulumi.input_type
class ApplicationStatusOperationStateOperationRetryBackoffArgs:
def __init__(__self__, *,
duration: Optional[pulumi.Input[str]] = None,
factor: Optional[pulumi.Input[int]] = None,
max_duration: Optional[pulumi.Input[str]] = None):
"""
Backoff is a backoff strategy
:param pulumi.Input[str] duration: Duration is the amount to back off. Default unit is seconds, but could also be a duration (e.g. "2m", "1h")
:param pulumi.Input[int] factor: Factor is a factor to multiply the base duration after each failed retry
:param pulumi.Input[str] max_duration: MaxDuration is the maximum amount of time allowed for the backoff strategy
"""
if duration is not None:
pulumi.set(__self__, "duration", duration)
if factor is not None:
pulumi.set(__self__, "factor", factor)
if max_duration is not None:
pulumi.set(__self__, "max_duration", max_duration)
@property
@pulumi.getter
def duration(self) -> Optional[pulumi.Input[str]]:
"""
Duration is the amount to back off. Default unit is seconds, but could also be a duration (e.g. "2m", "1h")
"""
return pulumi.get(self, "duration")
@duration.setter
def duration(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "duration", value)
@property
@pulumi.getter
def factor(self) -> Optional[pulumi.Input[int]]:
"""
Factor is a factor to multiply the base duration after each failed retry
"""
return pulumi.get(self, "factor")
@factor.setter
def factor(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "factor", value)
@property
@pulumi.getter(name="maxDuration")
def max_duration(self) -> Optional[pulumi.Input[str]]:
"""
MaxDuration is the maximum amount of time allowed for the backoff strategy
"""
return pulumi.get(self, "max_duration")
@max_duration.setter
def max_duration(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "max_duration", value)
@pulumi.input_type
class ApplicationStatusOperationStateOperationSyncArgs:
def __init__(__self__, *,
dry_run: Optional[pulumi.Input[bool]] = None,
manifests: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
prune: Optional[pulumi.Input[bool]] = None,
resources: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateOperationSyncResourcesArgs']]]] = None,
revision: Optional[pulumi.Input[str]] = None,
source: Optional[pulumi.Input['ApplicationStatusOperationStateOperationSyncSourceArgs']] = None,
sync_options: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
sync_strategy: Optional[pulumi.Input['ApplicationStatusOperationStateOperationSyncSyncStrategyArgs']] = None):
"""
SyncOperation contains sync operation details.
:param pulumi.Input[bool] dry_run: DryRun will perform a `kubectl apply --dry-run` without actually performing the sync
:param pulumi.Input[Sequence[pulumi.Input[str]]] manifests: Manifests is an optional field that overrides sync source with a local directory for development
:param pulumi.Input[bool] prune: Prune deletes resources that are no longer tracked in git
:param pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateOperationSyncResourcesArgs']]] resources: Resources describes which resources to sync
:param pulumi.Input[str] revision: Revision is the revision to which to sync the application. If omitted, the revision specified in the app spec is used.
:param pulumi.Input['ApplicationStatusOperationStateOperationSyncSourceArgs'] source: Source overrides the source definition set in the application. This is typically set in a Rollback operation and nil during a Sync operation
:param pulumi.Input[Sequence[pulumi.Input[str]]] sync_options: SyncOptions provide per-sync sync-options, e.g. Validate=false
:param pulumi.Input['ApplicationStatusOperationStateOperationSyncSyncStrategyArgs'] sync_strategy: SyncStrategy describes how to perform the sync
"""
if dry_run is not None:
pulumi.set(__self__, "dry_run", dry_run)
if manifests is not None:
pulumi.set(__self__, "manifests", manifests)
if prune is not None:
pulumi.set(__self__, "prune", prune)
if resources is not None:
pulumi.set(__self__, "resources", resources)
if revision is not None:
pulumi.set(__self__, "revision", revision)
if source is not None:
pulumi.set(__self__, "source", source)
if sync_options is not None:
pulumi.set(__self__, "sync_options", sync_options)
if sync_strategy is not None:
pulumi.set(__self__, "sync_strategy", sync_strategy)
@property
@pulumi.getter(name="dryRun")
def dry_run(self) -> Optional[pulumi.Input[bool]]:
"""
DryRun will perform a `kubectl apply --dry-run` without actually performing the sync
"""
return pulumi.get(self, "dry_run")
@dry_run.setter
def dry_run(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "dry_run", value)
@property
@pulumi.getter
def manifests(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
Manifests is an optional field that overrides sync source with a local directory for development
"""
return pulumi.get(self, "manifests")
@manifests.setter
def manifests(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "manifests", value)
@property
@pulumi.getter
def prune(self) -> Optional[pulumi.Input[bool]]:
"""
Prune deletes resources that are no longer tracked in git
"""
return pulumi.get(self, "prune")
@prune.setter
def prune(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "prune", value)
@property
@pulumi.getter
def resources(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateOperationSyncResourcesArgs']]]]:
"""
Resources describes which resources to sync
"""
return pulumi.get(self, "resources")
@resources.setter
def resources(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateOperationSyncResourcesArgs']]]]):
pulumi.set(self, "resources", value)
@property
@pulumi.getter
def revision(self) -> Optional[pulumi.Input[str]]:
"""
Revision is the revision to which to sync the application. If omitted, the revision specified in the app spec is used.
"""
return pulumi.get(self, "revision")
@revision.setter
def revision(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "revision", value)
@property
@pulumi.getter
def source(self) -> Optional[pulumi.Input['ApplicationStatusOperationStateOperationSyncSourceArgs']]:
"""
Source overrides the source definition set in the application. This is typically set in a Rollback operation and nil during a Sync operation
"""
return pulumi.get(self, "source")
@source.setter
def source(self, value: Optional[pulumi.Input['ApplicationStatusOperationStateOperationSyncSourceArgs']]):
pulumi.set(self, "source", value)
@property
@pulumi.getter(name="syncOptions")
def sync_options(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
SyncOptions provide per-sync sync-options, e.g. Validate=false
"""
return pulumi.get(self, "sync_options")
@sync_options.setter
def sync_options(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "sync_options", value)
@property
@pulumi.getter(name="syncStrategy")
def sync_strategy(self) -> Optional[pulumi.Input['ApplicationStatusOperationStateOperationSyncSyncStrategyArgs']]:
"""
SyncStrategy describes how to perform the sync
"""
return pulumi.get(self, "sync_strategy")
@sync_strategy.setter
def sync_strategy(self, value: Optional[pulumi.Input['ApplicationStatusOperationStateOperationSyncSyncStrategyArgs']]):
pulumi.set(self, "sync_strategy", value)
@pulumi.input_type
class ApplicationStatusOperationStateOperationSyncResourcesArgs:
def __init__(__self__, *,
kind: pulumi.Input[str],
name: pulumi.Input[str],
group: Optional[pulumi.Input[str]] = None,
namespace: Optional[pulumi.Input[str]] = None):
"""
SyncOperationResource contains resources to sync.
"""
pulumi.set(__self__, "kind", kind)
pulumi.set(__self__, "name", name)
if group is not None:
pulumi.set(__self__, "group", group)
if namespace is not None:
pulumi.set(__self__, "namespace", namespace)
@property
@pulumi.getter
def kind(self) -> pulumi.Input[str]:
return pulumi.get(self, "kind")
@kind.setter
def kind(self, value: pulumi.Input[str]):
pulumi.set(self, "kind", value)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def group(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "group")
@group.setter
def group(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "group", value)
@property
@pulumi.getter
def namespace(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "namespace")
@namespace.setter
def namespace(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "namespace", value)
@pulumi.input_type
class ApplicationStatusOperationStateOperationSyncSourceArgs:
def __init__(__self__, *,
repo_url: pulumi.Input[str],
chart: Optional[pulumi.Input[str]] = None,
directory: Optional[pulumi.Input['ApplicationStatusOperationStateOperationSyncSourceDirectoryArgs']] = None,
helm: Optional[pulumi.Input['ApplicationStatusOperationStateOperationSyncSourceHelmArgs']] = None,
ksonnet: Optional[pulumi.Input['ApplicationStatusOperationStateOperationSyncSourceKsonnetArgs']] = None,
kustomize: Optional[pulumi.Input['ApplicationStatusOperationStateOperationSyncSourceKustomizeArgs']] = None,
path: Optional[pulumi.Input[str]] = None,
plugin: Optional[pulumi.Input['ApplicationStatusOperationStateOperationSyncSourcePluginArgs']] = None,
target_revision: Optional[pulumi.Input[str]] = None):
"""
Source overrides the source definition set in the application. This is typically set in a Rollback operation and nil during a Sync operation
:param pulumi.Input[str] repo_url: RepoURL is the repository URL of the application manifests
:param pulumi.Input[str] chart: Chart is a Helm chart name
:param pulumi.Input['ApplicationStatusOperationStateOperationSyncSourceDirectoryArgs'] directory: Directory holds path/directory specific options
:param pulumi.Input['ApplicationStatusOperationStateOperationSyncSourceHelmArgs'] helm: Helm holds helm specific options
:param pulumi.Input['ApplicationStatusOperationStateOperationSyncSourceKsonnetArgs'] ksonnet: Ksonnet holds ksonnet specific options
:param pulumi.Input['ApplicationStatusOperationStateOperationSyncSourceKustomizeArgs'] kustomize: Kustomize holds kustomize specific options
:param pulumi.Input[str] path: Path is a directory path within the Git repository
:param pulumi.Input['ApplicationStatusOperationStateOperationSyncSourcePluginArgs'] plugin: ConfigManagementPlugin holds config management plugin specific options
:param pulumi.Input[str] target_revision: TargetRevision defines the commit, tag, or branch to which to sync the application. If omitted, will sync to HEAD
"""
pulumi.set(__self__, "repo_url", repo_url)
if chart is not None:
pulumi.set(__self__, "chart", chart)
if directory is not None:
pulumi.set(__self__, "directory", directory)
if helm is not None:
pulumi.set(__self__, "helm", helm)
if ksonnet is not None:
pulumi.set(__self__, "ksonnet", ksonnet)
if kustomize is not None:
pulumi.set(__self__, "kustomize", kustomize)
if path is not None:
pulumi.set(__self__, "path", path)
if plugin is not None:
pulumi.set(__self__, "plugin", plugin)
if target_revision is not None:
pulumi.set(__self__, "target_revision", target_revision)
@property
@pulumi.getter(name="repoURL")
def repo_url(self) -> pulumi.Input[str]:
"""
RepoURL is the repository URL of the application manifests
"""
return pulumi.get(self, "repo_url")
@repo_url.setter
def repo_url(self, value: pulumi.Input[str]):
pulumi.set(self, "repo_url", value)
@property
@pulumi.getter
def chart(self) -> Optional[pulumi.Input[str]]:
"""
Chart is a Helm chart name
"""
return pulumi.get(self, "chart")
@chart.setter
def chart(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "chart", value)
@property
@pulumi.getter
def directory(self) -> Optional[pulumi.Input['ApplicationStatusOperationStateOperationSyncSourceDirectoryArgs']]:
"""
Directory holds path/directory specific options
"""
return pulumi.get(self, "directory")
@directory.setter
def directory(self, value: Optional[pulumi.Input['ApplicationStatusOperationStateOperationSyncSourceDirectoryArgs']]):
pulumi.set(self, "directory", value)
@property
@pulumi.getter
def helm(self) -> Optional[pulumi.Input['ApplicationStatusOperationStateOperationSyncSourceHelmArgs']]:
"""
Helm holds helm specific options
"""
return pulumi.get(self, "helm")
@helm.setter
def helm(self, value: Optional[pulumi.Input['ApplicationStatusOperationStateOperationSyncSourceHelmArgs']]):
pulumi.set(self, "helm", value)
@property
@pulumi.getter
def ksonnet(self) -> Optional[pulumi.Input['ApplicationStatusOperationStateOperationSyncSourceKsonnetArgs']]:
"""
Ksonnet holds ksonnet specific options
"""
return pulumi.get(self, "ksonnet")
@ksonnet.setter
def ksonnet(self, value: Optional[pulumi.Input['ApplicationStatusOperationStateOperationSyncSourceKsonnetArgs']]):
pulumi.set(self, "ksonnet", value)
@property
@pulumi.getter
def kustomize(self) -> Optional[pulumi.Input['ApplicationStatusOperationStateOperationSyncSourceKustomizeArgs']]:
"""
Kustomize holds kustomize specific options
"""
return pulumi.get(self, "kustomize")
@kustomize.setter
def kustomize(self, value: Optional[pulumi.Input['ApplicationStatusOperationStateOperationSyncSourceKustomizeArgs']]):
pulumi.set(self, "kustomize", value)
@property
@pulumi.getter
def path(self) -> Optional[pulumi.Input[str]]:
"""
Path is a directory path within the Git repository
"""
return pulumi.get(self, "path")
@path.setter
def path(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "path", value)
@property
@pulumi.getter
def plugin(self) -> Optional[pulumi.Input['ApplicationStatusOperationStateOperationSyncSourcePluginArgs']]:
"""
ConfigManagementPlugin holds config management plugin specific options
"""
return pulumi.get(self, "plugin")
@plugin.setter
def plugin(self, value: Optional[pulumi.Input['ApplicationStatusOperationStateOperationSyncSourcePluginArgs']]):
pulumi.set(self, "plugin", value)
@property
@pulumi.getter(name="targetRevision")
def target_revision(self) -> Optional[pulumi.Input[str]]:
"""
TargetRevision defines the commit, tag, or branch to which to sync the application. If omitted, will sync to HEAD
"""
return pulumi.get(self, "target_revision")
@target_revision.setter
def target_revision(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "target_revision", value)
@pulumi.input_type
class ApplicationStatusOperationStateOperationSyncSourceDirectoryArgs:
def __init__(__self__, *,
jsonnet: Optional[pulumi.Input['ApplicationStatusOperationStateOperationSyncSourceDirectoryJsonnetArgs']] = None,
recurse: Optional[pulumi.Input[bool]] = None):
"""
Directory holds path/directory specific options
:param pulumi.Input['ApplicationStatusOperationStateOperationSyncSourceDirectoryJsonnetArgs'] jsonnet: ApplicationSourceJsonnet holds jsonnet specific options
"""
if jsonnet is not None:
pulumi.set(__self__, "jsonnet", jsonnet)
if recurse is not None:
pulumi.set(__self__, "recurse", recurse)
@property
@pulumi.getter
def jsonnet(self) -> Optional[pulumi.Input['ApplicationStatusOperationStateOperationSyncSourceDirectoryJsonnetArgs']]:
"""
ApplicationSourceJsonnet holds jsonnet specific options
"""
return pulumi.get(self, "jsonnet")
@jsonnet.setter
def jsonnet(self, value: Optional[pulumi.Input['ApplicationStatusOperationStateOperationSyncSourceDirectoryJsonnetArgs']]):
pulumi.set(self, "jsonnet", value)
@property
@pulumi.getter
def recurse(self) -> Optional[pulumi.Input[bool]]:
return pulumi.get(self, "recurse")
@recurse.setter
def recurse(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "recurse", value)
@pulumi.input_type
class ApplicationStatusOperationStateOperationSyncSourceDirectoryJsonnetArgs:
def __init__(__self__, *,
ext_vars: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateOperationSyncSourceDirectoryJsonnetExtVarsArgs']]]] = None,
libs: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
tlas: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateOperationSyncSourceDirectoryJsonnetTlasArgs']]]] = None):
"""
ApplicationSourceJsonnet holds jsonnet specific options
:param pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateOperationSyncSourceDirectoryJsonnetExtVarsArgs']]] ext_vars: ExtVars is a list of Jsonnet External Variables
:param pulumi.Input[Sequence[pulumi.Input[str]]] libs: Additional library search dirs
:param pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateOperationSyncSourceDirectoryJsonnetTlasArgs']]] tlas: TLAs is a list of Jsonnet top-level arguments
"""
if ext_vars is not None:
pulumi.set(__self__, "ext_vars", ext_vars)
if libs is not None:
pulumi.set(__self__, "libs", libs)
if tlas is not None:
pulumi.set(__self__, "tlas", tlas)
@property
@pulumi.getter(name="extVars")
def ext_vars(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateOperationSyncSourceDirectoryJsonnetExtVarsArgs']]]]:
"""
ExtVars is a list of Jsonnet External Variables
"""
return pulumi.get(self, "ext_vars")
@ext_vars.setter
def ext_vars(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateOperationSyncSourceDirectoryJsonnetExtVarsArgs']]]]):
pulumi.set(self, "ext_vars", value)
@property
@pulumi.getter
def libs(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
Additional library search dirs
"""
return pulumi.get(self, "libs")
@libs.setter
def libs(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "libs", value)
@property
@pulumi.getter
def tlas(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateOperationSyncSourceDirectoryJsonnetTlasArgs']]]]:
"""
TLAs is a list of Jsonnet top-level arguments
"""
return pulumi.get(self, "tlas")
@tlas.setter
def tlas(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateOperationSyncSourceDirectoryJsonnetTlasArgs']]]]):
pulumi.set(self, "tlas", value)
@pulumi.input_type
class ApplicationStatusOperationStateOperationSyncSourceDirectoryJsonnetExtVarsArgs:
def __init__(__self__, *,
name: pulumi.Input[str],
value: pulumi.Input[str],
code: Optional[pulumi.Input[bool]] = None):
"""
JsonnetVar is a jsonnet variable
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "value", value)
if code is not None:
pulumi.set(__self__, "code", code)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@property
@pulumi.getter
def code(self) -> Optional[pulumi.Input[bool]]:
return pulumi.get(self, "code")
@code.setter
def code(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "code", value)
@pulumi.input_type
class ApplicationStatusOperationStateOperationSyncSourceDirectoryJsonnetTlasArgs:
def __init__(__self__, *,
name: pulumi.Input[str],
value: pulumi.Input[str],
code: Optional[pulumi.Input[bool]] = None):
"""
JsonnetVar is a jsonnet variable
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "value", value)
if code is not None:
pulumi.set(__self__, "code", code)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@property
@pulumi.getter
def code(self) -> Optional[pulumi.Input[bool]]:
return pulumi.get(self, "code")
@code.setter
def code(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "code", value)
@pulumi.input_type
class ApplicationStatusOperationStateOperationSyncSourceHelmArgs:
def __init__(__self__, *,
file_parameters: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateOperationSyncSourceHelmFileParametersArgs']]]] = None,
parameters: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateOperationSyncSourceHelmParametersArgs']]]] = None,
release_name: Optional[pulumi.Input[str]] = None,
value_files: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
values: Optional[pulumi.Input[str]] = None):
"""
Helm holds helm specific options
:param pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateOperationSyncSourceHelmFileParametersArgs']]] file_parameters: FileParameters are file parameters to the helm template
:param pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateOperationSyncSourceHelmParametersArgs']]] parameters: Parameters are parameters to the helm template
:param pulumi.Input[str] release_name: The Helm release name. If omitted it will use the application name
:param pulumi.Input[Sequence[pulumi.Input[str]]] value_files: ValueFiles is a list of Helm value files to use when generating a template
:param pulumi.Input[str] values: Values holds Helm values, typically defined as a YAML block
"""
if file_parameters is not None:
pulumi.set(__self__, "file_parameters", file_parameters)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if release_name is not None:
pulumi.set(__self__, "release_name", release_name)
if value_files is not None:
pulumi.set(__self__, "value_files", value_files)
if values is not None:
pulumi.set(__self__, "values", values)
@property
@pulumi.getter(name="fileParameters")
def file_parameters(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateOperationSyncSourceHelmFileParametersArgs']]]]:
"""
FileParameters are file parameters to the helm template
"""
return pulumi.get(self, "file_parameters")
@file_parameters.setter
def file_parameters(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateOperationSyncSourceHelmFileParametersArgs']]]]):
pulumi.set(self, "file_parameters", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateOperationSyncSourceHelmParametersArgs']]]]:
"""
Parameters are parameters to the helm template
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateOperationSyncSourceHelmParametersArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter(name="releaseName")
def release_name(self) -> Optional[pulumi.Input[str]]:
"""
The Helm release name. If omitted it will use the application name
"""
return pulumi.get(self, "release_name")
@release_name.setter
def release_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "release_name", value)
@property
@pulumi.getter(name="valueFiles")
def value_files(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
ValueFiles is a list of Helm value files to use when generating a template
"""
return pulumi.get(self, "value_files")
@value_files.setter
def value_files(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "value_files", value)
@property
@pulumi.getter
def values(self) -> Optional[pulumi.Input[str]]:
"""
Values is Helm values, typically defined as a block
"""
return pulumi.get(self, "values")
@values.setter
def values(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "values", value)
@pulumi.input_type
class ApplicationStatusOperationStateOperationSyncSourceHelmFileParametersArgs:
def __init__(__self__, *,
name: Optional[pulumi.Input[str]] = None,
path: Optional[pulumi.Input[str]] = None):
"""
HelmFileParameter is a file parameter to a helm template
:param pulumi.Input[str] name: Name is the name of the helm parameter
:param pulumi.Input[str] path: Path is the path value for the helm parameter
"""
if name is not None:
pulumi.set(__self__, "name", name)
if path is not None:
pulumi.set(__self__, "path", path)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Name is the name of the helm parameter
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def path(self) -> Optional[pulumi.Input[str]]:
"""
Path is the path value for the helm parameter
"""
return pulumi.get(self, "path")
@path.setter
def path(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "path", value)
@pulumi.input_type
class ApplicationStatusOperationStateOperationSyncSourceHelmParametersArgs:
def __init__(__self__, *,
force_string: Optional[pulumi.Input[bool]] = None,
name: Optional[pulumi.Input[str]] = None,
value: Optional[pulumi.Input[str]] = None):
"""
HelmParameter is a parameter to a helm template
:param pulumi.Input[bool] force_string: ForceString determines whether to tell Helm to interpret booleans and numbers as strings
:param pulumi.Input[str] name: Name is the name of the helm parameter
:param pulumi.Input[str] value: Value is the value for the helm parameter
"""
if force_string is not None:
pulumi.set(__self__, "force_string", force_string)
if name is not None:
pulumi.set(__self__, "name", name)
if value is not None:
pulumi.set(__self__, "value", value)
@property
@pulumi.getter(name="forceString")
def force_string(self) -> Optional[pulumi.Input[bool]]:
"""
ForceString determines whether to tell Helm to interpret booleans and numbers as strings
"""
return pulumi.get(self, "force_string")
@force_string.setter
def force_string(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "force_string", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Name is the name of the helm parameter
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def value(self) -> Optional[pulumi.Input[str]]:
"""
Value is the value for the helm parameter
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "value", value)
@pulumi.input_type
class ApplicationStatusOperationStateOperationSyncSourceKsonnetArgs:
def __init__(__self__, *,
environment: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateOperationSyncSourceKsonnetParametersArgs']]]] = None):
"""
Ksonnet holds ksonnet specific options
:param pulumi.Input[str] environment: Environment is a ksonnet application environment name
:param pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateOperationSyncSourceKsonnetParametersArgs']]] parameters: Parameters are a list of ksonnet component parameter override values
"""
if environment is not None:
pulumi.set(__self__, "environment", environment)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
@property
@pulumi.getter
def environment(self) -> Optional[pulumi.Input[str]]:
"""
Environment is a ksonnet application environment name
"""
return pulumi.get(self, "environment")
@environment.setter
def environment(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "environment", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateOperationSyncSourceKsonnetParametersArgs']]]]:
"""
Parameters are a list of ksonnet component parameter override values
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateOperationSyncSourceKsonnetParametersArgs']]]]):
pulumi.set(self, "parameters", value)
@pulumi.input_type
class ApplicationStatusOperationStateOperationSyncSourceKsonnetParametersArgs:
def __init__(__self__, *,
name: pulumi.Input[str],
value: pulumi.Input[str],
component: Optional[pulumi.Input[str]] = None):
"""
KsonnetParameter is a ksonnet component parameter
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "value", value)
if component is not None:
pulumi.set(__self__, "component", component)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@property
@pulumi.getter
def component(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "component")
@component.setter
def component(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "component", value)
@pulumi.input_type
class ApplicationStatusOperationStateOperationSyncSourceKustomizeArgs:
def __init__(__self__, *,
common_labels: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
images: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
name_prefix: Optional[pulumi.Input[str]] = None,
name_suffix: Optional[pulumi.Input[str]] = None,
version: Optional[pulumi.Input[str]] = None):
"""
Kustomize holds kustomize specific options
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] common_labels: CommonLabels adds additional kustomize commonLabels
:param pulumi.Input[Sequence[pulumi.Input[str]]] images: Images are kustomize image overrides
:param pulumi.Input[str] name_prefix: NamePrefix is a prefix prepended to resources for kustomize apps
:param pulumi.Input[str] name_suffix: NameSuffix is a suffix appended to resources for kustomize apps
:param pulumi.Input[str] version: Version contains optional Kustomize version
"""
if common_labels is not None:
pulumi.set(__self__, "common_labels", common_labels)
if images is not None:
pulumi.set(__self__, "images", images)
if name_prefix is not None:
pulumi.set(__self__, "name_prefix", name_prefix)
if name_suffix is not None:
pulumi.set(__self__, "name_suffix", name_suffix)
if version is not None:
pulumi.set(__self__, "version", version)
@property
@pulumi.getter(name="commonLabels")
def common_labels(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
CommonLabels adds additional kustomize commonLabels
"""
return pulumi.get(self, "common_labels")
@common_labels.setter
def common_labels(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "common_labels", value)
@property
@pulumi.getter
def images(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
Images are kustomize image overrides
"""
return pulumi.get(self, "images")
@images.setter
def images(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "images", value)
@property
@pulumi.getter(name="namePrefix")
def name_prefix(self) -> Optional[pulumi.Input[str]]:
"""
NamePrefix is a prefix prepended to resources for kustomize apps
"""
return pulumi.get(self, "name_prefix")
@name_prefix.setter
def name_prefix(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name_prefix", value)
@property
@pulumi.getter(name="nameSuffix")
def name_suffix(self) -> Optional[pulumi.Input[str]]:
"""
NameSuffix is a suffix appended to resources for kustomize apps
"""
return pulumi.get(self, "name_suffix")
@name_suffix.setter
def name_suffix(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name_suffix", value)
@property
@pulumi.getter
def version(self) -> Optional[pulumi.Input[str]]:
"""
Version contains optional Kustomize version
"""
return pulumi.get(self, "version")
@version.setter
def version(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "version", value)
@pulumi.input_type
class ApplicationStatusOperationStateOperationSyncSourcePluginArgs:
def __init__(__self__, *,
env: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateOperationSyncSourcePluginEnvArgs']]]] = None,
name: Optional[pulumi.Input[str]] = None):
"""
ConfigManagementPlugin holds config management plugin specific options
"""
if env is not None:
pulumi.set(__self__, "env", env)
if name is not None:
pulumi.set(__self__, "name", name)
@property
@pulumi.getter
def env(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateOperationSyncSourcePluginEnvArgs']]]]:
return pulumi.get(self, "env")
@env.setter
def env(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateOperationSyncSourcePluginEnvArgs']]]]):
pulumi.set(self, "env", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@pulumi.input_type
class ApplicationStatusOperationStateOperationSyncSourcePluginEnvArgs:
def __init__(__self__, *,
name: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] name: the name, usually uppercase
:param pulumi.Input[str] value: the value
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
"""
the name, usually uppercase
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
the value
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class ApplicationStatusOperationStateOperationSyncSyncStrategyArgs:
def __init__(__self__, *,
apply: Optional[pulumi.Input['ApplicationStatusOperationStateOperationSyncSyncStrategyApplyArgs']] = None,
hook: Optional[pulumi.Input['ApplicationStatusOperationStateOperationSyncSyncStrategyHookArgs']] = None):
"""
SyncStrategy describes how to perform the sync
:param pulumi.Input['ApplicationStatusOperationStateOperationSyncSyncStrategyApplyArgs'] apply: Apply will perform a `kubectl apply` to perform the sync.
:param pulumi.Input['ApplicationStatusOperationStateOperationSyncSyncStrategyHookArgs'] hook: Hook will submit any referenced resources to perform the sync. This is the default strategy
"""
if apply is not None:
pulumi.set(__self__, "apply", apply)
if hook is not None:
pulumi.set(__self__, "hook", hook)
@property
@pulumi.getter
def apply(self) -> Optional[pulumi.Input['ApplicationStatusOperationStateOperationSyncSyncStrategyApplyArgs']]:
"""
Apply will perform a `kubectl apply` to perform the sync.
"""
return pulumi.get(self, "apply")
@apply.setter
def apply(self, value: Optional[pulumi.Input['ApplicationStatusOperationStateOperationSyncSyncStrategyApplyArgs']]):
pulumi.set(self, "apply", value)
@property
@pulumi.getter
def hook(self) -> Optional[pulumi.Input['ApplicationStatusOperationStateOperationSyncSyncStrategyHookArgs']]:
"""
Hook will submit any referenced resources to perform the sync. This is the default strategy
"""
return pulumi.get(self, "hook")
@hook.setter
def hook(self, value: Optional[pulumi.Input['ApplicationStatusOperationStateOperationSyncSyncStrategyHookArgs']]):
pulumi.set(self, "hook", value)
@pulumi.input_type
class ApplicationStatusOperationStateOperationSyncSyncStrategyApplyArgs:
def __init__(__self__, *,
force: Optional[pulumi.Input[bool]] = None):
"""
Apply will perform a `kubectl apply` to perform the sync.
:param pulumi.Input[bool] force: Force indicates whether or not to supply the --force flag to `kubectl apply`. The --force flag deletes and re-creates the resource when PATCH encounters a conflict and has retried 5 times.
"""
if force is not None:
pulumi.set(__self__, "force", force)
@property
@pulumi.getter
def force(self) -> Optional[pulumi.Input[bool]]:
"""
Force indicates whether or not to supply the --force flag to `kubectl apply`. The --force flag deletes and re-creates the resource when PATCH encounters a conflict and has retried 5 times.
"""
return pulumi.get(self, "force")
@force.setter
def force(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "force", value)
@pulumi.input_type
class ApplicationStatusOperationStateOperationSyncSyncStrategyHookArgs:
def __init__(__self__, *,
force: Optional[pulumi.Input[bool]] = None):
"""
Hook will submit any referenced resources to perform the sync. This is the default strategy
:param pulumi.Input[bool] force: Force indicates whether or not to supply the --force flag to `kubectl apply`. The --force flag deletes and re-creates the resource when PATCH encounters a conflict and has retried 5 times.
"""
if force is not None:
pulumi.set(__self__, "force", force)
@property
@pulumi.getter
def force(self) -> Optional[pulumi.Input[bool]]:
"""
Force indicates whether or not to supply the --force flag to `kubectl apply`. The --force flag deletes and re-creates the resource when PATCH encounters a conflict and has retried 5 times.
"""
return pulumi.get(self, "force")
@force.setter
def force(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "force", value)
@pulumi.input_type
class ApplicationStatusOperationStateSyncResultArgs:
def __init__(__self__, *,
revision: pulumi.Input[str],
resources: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateSyncResultResourcesArgs']]]] = None,
source: Optional[pulumi.Input['ApplicationStatusOperationStateSyncResultSourceArgs']] = None):
"""
SyncResult is the result of a Sync operation
:param pulumi.Input[str] revision: Revision holds the revision of the sync
:param pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateSyncResultResourcesArgs']]] resources: Resources holds the sync result of each individual resource
:param pulumi.Input['ApplicationStatusOperationStateSyncResultSourceArgs'] source: Source records the application source information of the sync, used for comparing auto-sync
"""
pulumi.set(__self__, "revision", revision)
if resources is not None:
pulumi.set(__self__, "resources", resources)
if source is not None:
pulumi.set(__self__, "source", source)
@property
@pulumi.getter
def revision(self) -> pulumi.Input[str]:
"""
Revision holds the revision of the sync
"""
return pulumi.get(self, "revision")
@revision.setter
def revision(self, value: pulumi.Input[str]):
pulumi.set(self, "revision", value)
@property
@pulumi.getter
def resources(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateSyncResultResourcesArgs']]]]:
"""
Resources holds the sync result of each individual resource
"""
return pulumi.get(self, "resources")
@resources.setter
def resources(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateSyncResultResourcesArgs']]]]):
pulumi.set(self, "resources", value)
@property
@pulumi.getter
def source(self) -> Optional[pulumi.Input['ApplicationStatusOperationStateSyncResultSourceArgs']]:
"""
Source records the application source information of the sync, used for comparing auto-sync
"""
return pulumi.get(self, "source")
@source.setter
def source(self, value: Optional[pulumi.Input['ApplicationStatusOperationStateSyncResultSourceArgs']]):
pulumi.set(self, "source", value)
@pulumi.input_type
class ApplicationStatusOperationStateSyncResultResourcesArgs:
def __init__(__self__, *,
group: pulumi.Input[str],
kind: pulumi.Input[str],
name: pulumi.Input[str],
namespace: pulumi.Input[str],
version: pulumi.Input[str],
hook_phase: Optional[pulumi.Input[str]] = None,
hook_type: Optional[pulumi.Input[str]] = None,
message: Optional[pulumi.Input[str]] = None,
status: Optional[pulumi.Input[str]] = None,
sync_phase: Optional[pulumi.Input[str]] = None):
"""
ResourceResult holds the operation result details of a specific resource
:param pulumi.Input[str] hook_phase: the state of any operation associated with this resource OR hook. Note: can contain values for non-hook resources
:param pulumi.Input[str] hook_type: the type of the hook, empty for non-hook resources
:param pulumi.Input[str] message: message for the last sync OR operation
:param pulumi.Input[str] status: the final result of the sync; this will be empty if the resource has yet to be applied/pruned and is always zero-value for hooks
:param pulumi.Input[str] sync_phase: indicates the particular phase of the sync that this is for
"""
pulumi.set(__self__, "group", group)
pulumi.set(__self__, "kind", kind)
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "namespace", namespace)
pulumi.set(__self__, "version", version)
if hook_phase is not None:
pulumi.set(__self__, "hook_phase", hook_phase)
if hook_type is not None:
pulumi.set(__self__, "hook_type", hook_type)
if message is not None:
pulumi.set(__self__, "message", message)
if status is not None:
pulumi.set(__self__, "status", status)
if sync_phase is not None:
pulumi.set(__self__, "sync_phase", sync_phase)
@property
@pulumi.getter
def group(self) -> pulumi.Input[str]:
return pulumi.get(self, "group")
@group.setter
def group(self, value: pulumi.Input[str]):
pulumi.set(self, "group", value)
@property
@pulumi.getter
def kind(self) -> pulumi.Input[str]:
return pulumi.get(self, "kind")
@kind.setter
def kind(self, value: pulumi.Input[str]):
pulumi.set(self, "kind", value)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def namespace(self) -> pulumi.Input[str]:
return pulumi.get(self, "namespace")
@namespace.setter
def namespace(self, value: pulumi.Input[str]):
pulumi.set(self, "namespace", value)
@property
@pulumi.getter
def version(self) -> pulumi.Input[str]:
return pulumi.get(self, "version")
@version.setter
def version(self, value: pulumi.Input[str]):
pulumi.set(self, "version", value)
@property
@pulumi.getter(name="hookPhase")
def hook_phase(self) -> Optional[pulumi.Input[str]]:
"""
the state of any operation associated with this resource OR hook. Note: can contain values for non-hook resources
"""
return pulumi.get(self, "hook_phase")
@hook_phase.setter
def hook_phase(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "hook_phase", value)
@property
@pulumi.getter(name="hookType")
def hook_type(self) -> Optional[pulumi.Input[str]]:
"""
the type of the hook, empty for non-hook resources
"""
return pulumi.get(self, "hook_type")
@hook_type.setter
def hook_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "hook_type", value)
@property
@pulumi.getter
def message(self) -> Optional[pulumi.Input[str]]:
"""
message for the last sync OR operation
"""
return pulumi.get(self, "message")
@message.setter
def message(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "message", value)
@property
@pulumi.getter
def status(self) -> Optional[pulumi.Input[str]]:
"""
the final result of the sync; this will be empty if the resource has yet to be applied/pruned and is always zero-value for hooks
"""
return pulumi.get(self, "status")
@status.setter
def status(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "status", value)
@property
@pulumi.getter(name="syncPhase")
def sync_phase(self) -> Optional[pulumi.Input[str]]:
"""
indicates the particular phase of the sync that this is for
"""
return pulumi.get(self, "sync_phase")
@sync_phase.setter
def sync_phase(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "sync_phase", value)
@pulumi.input_type
class ApplicationStatusOperationStateSyncResultSourceArgs:
def __init__(__self__, *,
repo_url: pulumi.Input[str],
chart: Optional[pulumi.Input[str]] = None,
directory: Optional[pulumi.Input['ApplicationStatusOperationStateSyncResultSourceDirectoryArgs']] = None,
helm: Optional[pulumi.Input['ApplicationStatusOperationStateSyncResultSourceHelmArgs']] = None,
ksonnet: Optional[pulumi.Input['ApplicationStatusOperationStateSyncResultSourceKsonnetArgs']] = None,
kustomize: Optional[pulumi.Input['ApplicationStatusOperationStateSyncResultSourceKustomizeArgs']] = None,
path: Optional[pulumi.Input[str]] = None,
plugin: Optional[pulumi.Input['ApplicationStatusOperationStateSyncResultSourcePluginArgs']] = None,
target_revision: Optional[pulumi.Input[str]] = None):
"""
Source records the application source information of the sync, used for comparing auto-sync
:param pulumi.Input[str] repo_url: RepoURL is the repository URL of the application manifests
:param pulumi.Input[str] chart: Chart is a Helm chart name
:param pulumi.Input['ApplicationStatusOperationStateSyncResultSourceDirectoryArgs'] directory: Directory holds path/directory specific options
:param pulumi.Input['ApplicationStatusOperationStateSyncResultSourceHelmArgs'] helm: Helm holds helm specific options
:param pulumi.Input['ApplicationStatusOperationStateSyncResultSourceKsonnetArgs'] ksonnet: Ksonnet holds ksonnet specific options
:param pulumi.Input['ApplicationStatusOperationStateSyncResultSourceKustomizeArgs'] kustomize: Kustomize holds kustomize specific options
:param pulumi.Input[str] path: Path is a directory path within the Git repository
:param pulumi.Input['ApplicationStatusOperationStateSyncResultSourcePluginArgs'] plugin: ConfigManagementPlugin holds config management plugin specific options
:param pulumi.Input[str] target_revision: TargetRevision defines the commit, tag, or branch to which to sync the application. If omitted, will sync to HEAD
"""
pulumi.set(__self__, "repo_url", repo_url)
if chart is not None:
pulumi.set(__self__, "chart", chart)
if directory is not None:
pulumi.set(__self__, "directory", directory)
if helm is not None:
pulumi.set(__self__, "helm", helm)
if ksonnet is not None:
pulumi.set(__self__, "ksonnet", ksonnet)
if kustomize is not None:
pulumi.set(__self__, "kustomize", kustomize)
if path is not None:
pulumi.set(__self__, "path", path)
if plugin is not None:
pulumi.set(__self__, "plugin", plugin)
if target_revision is not None:
pulumi.set(__self__, "target_revision", target_revision)
@property
@pulumi.getter(name="repoURL")
def repo_url(self) -> pulumi.Input[str]:
"""
RepoURL is the repository URL of the application manifests
"""
return pulumi.get(self, "repo_url")
@repo_url.setter
def repo_url(self, value: pulumi.Input[str]):
pulumi.set(self, "repo_url", value)
@property
@pulumi.getter
def chart(self) -> Optional[pulumi.Input[str]]:
"""
Chart is a Helm chart name
"""
return pulumi.get(self, "chart")
@chart.setter
def chart(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "chart", value)
@property
@pulumi.getter
def directory(self) -> Optional[pulumi.Input['ApplicationStatusOperationStateSyncResultSourceDirectoryArgs']]:
"""
Directory holds path/directory specific options
"""
return pulumi.get(self, "directory")
@directory.setter
def directory(self, value: Optional[pulumi.Input['ApplicationStatusOperationStateSyncResultSourceDirectoryArgs']]):
pulumi.set(self, "directory", value)
@property
@pulumi.getter
def helm(self) -> Optional[pulumi.Input['ApplicationStatusOperationStateSyncResultSourceHelmArgs']]:
"""
Helm holds helm specific options
"""
return pulumi.get(self, "helm")
@helm.setter
def helm(self, value: Optional[pulumi.Input['ApplicationStatusOperationStateSyncResultSourceHelmArgs']]):
pulumi.set(self, "helm", value)
@property
@pulumi.getter
def ksonnet(self) -> Optional[pulumi.Input['ApplicationStatusOperationStateSyncResultSourceKsonnetArgs']]:
"""
Ksonnet holds ksonnet specific options
"""
return pulumi.get(self, "ksonnet")
@ksonnet.setter
def ksonnet(self, value: Optional[pulumi.Input['ApplicationStatusOperationStateSyncResultSourceKsonnetArgs']]):
pulumi.set(self, "ksonnet", value)
@property
@pulumi.getter
def kustomize(self) -> Optional[pulumi.Input['ApplicationStatusOperationStateSyncResultSourceKustomizeArgs']]:
"""
Kustomize holds kustomize specific options
"""
return pulumi.get(self, "kustomize")
@kustomize.setter
def kustomize(self, value: Optional[pulumi.Input['ApplicationStatusOperationStateSyncResultSourceKustomizeArgs']]):
pulumi.set(self, "kustomize", value)
@property
@pulumi.getter
def path(self) -> Optional[pulumi.Input[str]]:
"""
Path is a directory path within the Git repository
"""
return pulumi.get(self, "path")
@path.setter
def path(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "path", value)
@property
@pulumi.getter
def plugin(self) -> Optional[pulumi.Input['ApplicationStatusOperationStateSyncResultSourcePluginArgs']]:
"""
ConfigManagementPlugin holds config management plugin specific options
"""
return pulumi.get(self, "plugin")
@plugin.setter
def plugin(self, value: Optional[pulumi.Input['ApplicationStatusOperationStateSyncResultSourcePluginArgs']]):
pulumi.set(self, "plugin", value)
@property
@pulumi.getter(name="targetRevision")
def target_revision(self) -> Optional[pulumi.Input[str]]:
"""
TargetRevision defines the commit, tag, or branch to which to sync the application. If omitted, will sync to HEAD
"""
return pulumi.get(self, "target_revision")
@target_revision.setter
def target_revision(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "target_revision", value)
@pulumi.input_type
class ApplicationStatusOperationStateSyncResultSourceDirectoryArgs:
def __init__(__self__, *,
jsonnet: Optional[pulumi.Input['ApplicationStatusOperationStateSyncResultSourceDirectoryJsonnetArgs']] = None,
recurse: Optional[pulumi.Input[bool]] = None):
"""
Directory holds path/directory specific options
:param pulumi.Input['ApplicationStatusOperationStateSyncResultSourceDirectoryJsonnetArgs'] jsonnet: ApplicationSourceJsonnet holds jsonnet specific options
"""
if jsonnet is not None:
pulumi.set(__self__, "jsonnet", jsonnet)
if recurse is not None:
pulumi.set(__self__, "recurse", recurse)
@property
@pulumi.getter
def jsonnet(self) -> Optional[pulumi.Input['ApplicationStatusOperationStateSyncResultSourceDirectoryJsonnetArgs']]:
"""
ApplicationSourceJsonnet holds jsonnet specific options
"""
return pulumi.get(self, "jsonnet")
@jsonnet.setter
def jsonnet(self, value: Optional[pulumi.Input['ApplicationStatusOperationStateSyncResultSourceDirectoryJsonnetArgs']]):
pulumi.set(self, "jsonnet", value)
@property
@pulumi.getter
def recurse(self) -> Optional[pulumi.Input[bool]]:
return pulumi.get(self, "recurse")
@recurse.setter
def recurse(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "recurse", value)
@pulumi.input_type
class ApplicationStatusOperationStateSyncResultSourceDirectoryJsonnetArgs:
def __init__(__self__, *,
ext_vars: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateSyncResultSourceDirectoryJsonnetExtVarsArgs']]]] = None,
libs: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
tlas: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateSyncResultSourceDirectoryJsonnetTlasArgs']]]] = None):
"""
ApplicationSourceJsonnet holds jsonnet specific options
:param pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateSyncResultSourceDirectoryJsonnetExtVarsArgs']]] ext_vars: ExtVars is a list of Jsonnet External Variables
:param pulumi.Input[Sequence[pulumi.Input[str]]] libs: Additional library search dirs
:param pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateSyncResultSourceDirectoryJsonnetTlasArgs']]] tlas: TLAS is a list of Jsonnet Top-level Arguments
"""
if ext_vars is not None:
pulumi.set(__self__, "ext_vars", ext_vars)
if libs is not None:
pulumi.set(__self__, "libs", libs)
if tlas is not None:
pulumi.set(__self__, "tlas", tlas)
@property
@pulumi.getter(name="extVars")
def ext_vars(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateSyncResultSourceDirectoryJsonnetExtVarsArgs']]]]:
"""
ExtVars is a list of Jsonnet External Variables
"""
return pulumi.get(self, "ext_vars")
@ext_vars.setter
def ext_vars(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateSyncResultSourceDirectoryJsonnetExtVarsArgs']]]]):
pulumi.set(self, "ext_vars", value)
@property
@pulumi.getter
def libs(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
Additional library search dirs
"""
return pulumi.get(self, "libs")
@libs.setter
def libs(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "libs", value)
@property
@pulumi.getter
def tlas(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateSyncResultSourceDirectoryJsonnetTlasArgs']]]]:
"""
TLAS is a list of Jsonnet Top-level Arguments
"""
return pulumi.get(self, "tlas")
@tlas.setter
def tlas(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateSyncResultSourceDirectoryJsonnetTlasArgs']]]]):
pulumi.set(self, "tlas", value)
@pulumi.input_type
class ApplicationStatusOperationStateSyncResultSourceDirectoryJsonnetExtVarsArgs:
def __init__(__self__, *,
name: pulumi.Input[str],
value: pulumi.Input[str],
code: Optional[pulumi.Input[bool]] = None):
"""
JsonnetVar is a jsonnet variable
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "value", value)
if code is not None:
pulumi.set(__self__, "code", code)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@property
@pulumi.getter
def code(self) -> Optional[pulumi.Input[bool]]:
return pulumi.get(self, "code")
@code.setter
def code(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "code", value)
@pulumi.input_type
class ApplicationStatusOperationStateSyncResultSourceDirectoryJsonnetTlasArgs:
def __init__(__self__, *,
name: pulumi.Input[str],
value: pulumi.Input[str],
code: Optional[pulumi.Input[bool]] = None):
"""
JsonnetVar is a jsonnet variable
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "value", value)
if code is not None:
pulumi.set(__self__, "code", code)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@property
@pulumi.getter
def code(self) -> Optional[pulumi.Input[bool]]:
return pulumi.get(self, "code")
@code.setter
def code(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "code", value)
@pulumi.input_type
class ApplicationStatusOperationStateSyncResultSourceHelmArgs:
def __init__(__self__, *,
file_parameters: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateSyncResultSourceHelmFileParametersArgs']]]] = None,
parameters: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateSyncResultSourceHelmParametersArgs']]]] = None,
release_name: Optional[pulumi.Input[str]] = None,
value_files: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
values: Optional[pulumi.Input[str]] = None):
"""
Helm holds helm specific options
:param pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateSyncResultSourceHelmFileParametersArgs']]] file_parameters: FileParameters are file parameters to the helm template
:param pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateSyncResultSourceHelmParametersArgs']]] parameters: Parameters are parameters to the helm template
:param pulumi.Input[str] release_name: The Helm release name. If omitted, it will use the application name
:param pulumi.Input[Sequence[pulumi.Input[str]]] value_files: ValueFiles is a list of Helm value files to use when generating a template
:param pulumi.Input[str] values: Values is Helm values, typically defined as a block
"""
if file_parameters is not None:
pulumi.set(__self__, "file_parameters", file_parameters)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if release_name is not None:
pulumi.set(__self__, "release_name", release_name)
if value_files is not None:
pulumi.set(__self__, "value_files", value_files)
if values is not None:
pulumi.set(__self__, "values", values)
@property
@pulumi.getter(name="fileParameters")
def file_parameters(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateSyncResultSourceHelmFileParametersArgs']]]]:
"""
FileParameters are file parameters to the helm template
"""
return pulumi.get(self, "file_parameters")
@file_parameters.setter
def file_parameters(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateSyncResultSourceHelmFileParametersArgs']]]]):
pulumi.set(self, "file_parameters", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateSyncResultSourceHelmParametersArgs']]]]:
"""
Parameters are parameters to the helm template
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateSyncResultSourceHelmParametersArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter(name="releaseName")
def release_name(self) -> Optional[pulumi.Input[str]]:
"""
The Helm release name. If omitted, it will use the application name
"""
return pulumi.get(self, "release_name")
@release_name.setter
def release_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "release_name", value)
@property
@pulumi.getter(name="valueFiles")
def value_files(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
ValueFiles is a list of Helm value files to use when generating a template
"""
return pulumi.get(self, "value_files")
@value_files.setter
def value_files(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "value_files", value)
@property
@pulumi.getter
def values(self) -> Optional[pulumi.Input[str]]:
"""
Values is Helm values, typically defined as a block
"""
return pulumi.get(self, "values")
@values.setter
def values(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "values", value)
@pulumi.input_type
class ApplicationStatusOperationStateSyncResultSourceHelmFileParametersArgs:
def __init__(__self__, *,
name: Optional[pulumi.Input[str]] = None,
path: Optional[pulumi.Input[str]] = None):
"""
HelmFileParameter is a file parameter to a helm template
:param pulumi.Input[str] name: Name is the name of the helm parameter
:param pulumi.Input[str] path: Path is the path value for the helm parameter
"""
if name is not None:
pulumi.set(__self__, "name", name)
if path is not None:
pulumi.set(__self__, "path", path)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Name is the name of the helm parameter
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def path(self) -> Optional[pulumi.Input[str]]:
"""
Path is the path value for the helm parameter
"""
return pulumi.get(self, "path")
@path.setter
def path(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "path", value)
@pulumi.input_type
class ApplicationStatusOperationStateSyncResultSourceHelmParametersArgs:
def __init__(__self__, *,
force_string: Optional[pulumi.Input[bool]] = None,
name: Optional[pulumi.Input[str]] = None,
value: Optional[pulumi.Input[str]] = None):
"""
HelmParameter is a parameter to a helm template
:param pulumi.Input[bool] force_string: ForceString determines whether to tell Helm to interpret booleans and numbers as strings
:param pulumi.Input[str] name: Name is the name of the helm parameter
:param pulumi.Input[str] value: Value is the value for the helm parameter
"""
if force_string is not None:
pulumi.set(__self__, "force_string", force_string)
if name is not None:
pulumi.set(__self__, "name", name)
if value is not None:
pulumi.set(__self__, "value", value)
@property
@pulumi.getter(name="forceString")
def force_string(self) -> Optional[pulumi.Input[bool]]:
"""
ForceString determines whether to tell Helm to interpret booleans and numbers as strings
"""
return pulumi.get(self, "force_string")
@force_string.setter
def force_string(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "force_string", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Name is the name of the helm parameter
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def value(self) -> Optional[pulumi.Input[str]]:
"""
Value is the value for the helm parameter
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "value", value)
@pulumi.input_type
class ApplicationStatusOperationStateSyncResultSourceKsonnetArgs:
def __init__(__self__, *,
environment: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateSyncResultSourceKsonnetParametersArgs']]]] = None):
"""
Ksonnet holds ksonnet specific options
:param pulumi.Input[str] environment: Environment is a ksonnet application environment name
:param pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateSyncResultSourceKsonnetParametersArgs']]] parameters: Parameters are a list of ksonnet component parameter override values
"""
if environment is not None:
pulumi.set(__self__, "environment", environment)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
@property
@pulumi.getter
def environment(self) -> Optional[pulumi.Input[str]]:
"""
Environment is a ksonnet application environment name
"""
return pulumi.get(self, "environment")
@environment.setter
def environment(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "environment", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateSyncResultSourceKsonnetParametersArgs']]]]:
"""
Parameters are a list of ksonnet component parameter override values
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateSyncResultSourceKsonnetParametersArgs']]]]):
pulumi.set(self, "parameters", value)
@pulumi.input_type
class ApplicationStatusOperationStateSyncResultSourceKsonnetParametersArgs:
def __init__(__self__, *,
name: pulumi.Input[str],
value: pulumi.Input[str],
component: Optional[pulumi.Input[str]] = None):
"""
KsonnetParameter is a ksonnet component parameter
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "value", value)
if component is not None:
pulumi.set(__self__, "component", component)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@property
@pulumi.getter
def component(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "component")
@component.setter
def component(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "component", value)
@pulumi.input_type
class ApplicationStatusOperationStateSyncResultSourceKustomizeArgs:
def __init__(__self__, *,
common_labels: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
images: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
name_prefix: Optional[pulumi.Input[str]] = None,
name_suffix: Optional[pulumi.Input[str]] = None,
version: Optional[pulumi.Input[str]] = None):
"""
Kustomize holds kustomize specific options
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] common_labels: CommonLabels adds additional kustomize commonLabels
:param pulumi.Input[Sequence[pulumi.Input[str]]] images: Images are kustomize image overrides
:param pulumi.Input[str] name_prefix: NamePrefix is a prefix prepended to resources for kustomize apps
:param pulumi.Input[str] name_suffix: NameSuffix is a suffix appended to resources for kustomize apps
:param pulumi.Input[str] version: Version contains optional Kustomize version
"""
if common_labels is not None:
pulumi.set(__self__, "common_labels", common_labels)
if images is not None:
pulumi.set(__self__, "images", images)
if name_prefix is not None:
pulumi.set(__self__, "name_prefix", name_prefix)
if name_suffix is not None:
pulumi.set(__self__, "name_suffix", name_suffix)
if version is not None:
pulumi.set(__self__, "version", version)
@property
@pulumi.getter(name="commonLabels")
def common_labels(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
CommonLabels adds additional kustomize commonLabels
"""
return pulumi.get(self, "common_labels")
@common_labels.setter
def common_labels(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "common_labels", value)
@property
@pulumi.getter
def images(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
Images are kustomize image overrides
"""
return pulumi.get(self, "images")
@images.setter
def images(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "images", value)
@property
@pulumi.getter(name="namePrefix")
def name_prefix(self) -> Optional[pulumi.Input[str]]:
"""
NamePrefix is a prefix prepended to resources for kustomize apps
"""
return pulumi.get(self, "name_prefix")
@name_prefix.setter
def name_prefix(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name_prefix", value)
@property
@pulumi.getter(name="nameSuffix")
def name_suffix(self) -> Optional[pulumi.Input[str]]:
"""
NameSuffix is a suffix appended to resources for kustomize apps
"""
return pulumi.get(self, "name_suffix")
@name_suffix.setter
def name_suffix(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name_suffix", value)
@property
@pulumi.getter
def version(self) -> Optional[pulumi.Input[str]]:
"""
Version contains optional Kustomize version
"""
return pulumi.get(self, "version")
@version.setter
def version(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "version", value)
@pulumi.input_type
class ApplicationStatusOperationStateSyncResultSourcePluginArgs:
def __init__(__self__, *,
env: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateSyncResultSourcePluginEnvArgs']]]] = None,
name: Optional[pulumi.Input[str]] = None):
"""
ConfigManagementPlugin holds config management plugin specific options
"""
if env is not None:
pulumi.set(__self__, "env", env)
if name is not None:
pulumi.set(__self__, "name", name)
@property
@pulumi.getter
def env(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateSyncResultSourcePluginEnvArgs']]]]:
return pulumi.get(self, "env")
@env.setter
def env(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusOperationStateSyncResultSourcePluginEnvArgs']]]]):
pulumi.set(self, "env", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@pulumi.input_type
class ApplicationStatusOperationStateSyncResultSourcePluginEnvArgs:
def __init__(__self__, *,
name: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] name: the name, usually uppercase
:param pulumi.Input[str] value: the value
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
"""
the name, usually uppercase
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
the value
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class ApplicationStatusResourcesArgs:
def __init__(__self__, *,
group: Optional[pulumi.Input[str]] = None,
health: Optional[pulumi.Input['ApplicationStatusResourcesHealthArgs']] = None,
hook: Optional[pulumi.Input[bool]] = None,
kind: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
namespace: Optional[pulumi.Input[str]] = None,
requires_pruning: Optional[pulumi.Input[bool]] = None,
status: Optional[pulumi.Input[str]] = None,
version: Optional[pulumi.Input[str]] = None):
"""
ResourceStatus holds the current sync and health status of a resource
:param pulumi.Input[str] status: SyncStatusCode is a type which represents possible comparison results
"""
if group is not None:
pulumi.set(__self__, "group", group)
if health is not None:
pulumi.set(__self__, "health", health)
if hook is not None:
pulumi.set(__self__, "hook", hook)
if kind is not None:
pulumi.set(__self__, "kind", kind)
if name is not None:
pulumi.set(__self__, "name", name)
if namespace is not None:
pulumi.set(__self__, "namespace", namespace)
if requires_pruning is not None:
pulumi.set(__self__, "requires_pruning", requires_pruning)
if status is not None:
pulumi.set(__self__, "status", status)
if version is not None:
pulumi.set(__self__, "version", version)
@property
@pulumi.getter
def group(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "group")
@group.setter
def group(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "group", value)
@property
@pulumi.getter
def health(self) -> Optional[pulumi.Input['ApplicationStatusResourcesHealthArgs']]:
return pulumi.get(self, "health")
@health.setter
def health(self, value: Optional[pulumi.Input['ApplicationStatusResourcesHealthArgs']]):
pulumi.set(self, "health", value)
@property
@pulumi.getter
def hook(self) -> Optional[pulumi.Input[bool]]:
return pulumi.get(self, "hook")
@hook.setter
def hook(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "hook", value)
@property
@pulumi.getter
def kind(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "kind")
@kind.setter
def kind(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "kind", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def namespace(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "namespace")
@namespace.setter
def namespace(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "namespace", value)
@property
@pulumi.getter(name="requiresPruning")
def requires_pruning(self) -> Optional[pulumi.Input[bool]]:
return pulumi.get(self, "requires_pruning")
@requires_pruning.setter
def requires_pruning(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "requires_pruning", value)
@property
@pulumi.getter
def status(self) -> Optional[pulumi.Input[str]]:
"""
SyncStatusCode is a type which represents possible comparison results
"""
return pulumi.get(self, "status")
@status.setter
def status(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "status", value)
@property
@pulumi.getter
def version(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "version")
@version.setter
def version(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "version", value)
@pulumi.input_type
class ApplicationStatusResourcesHealthArgs:
def __init__(__self__, *,
message: Optional[pulumi.Input[str]] = None,
status: Optional[pulumi.Input[str]] = None):
"""
:param pulumi.Input[str] status: Represents resource health status
"""
if message is not None:
pulumi.set(__self__, "message", message)
if status is not None:
pulumi.set(__self__, "status", status)
@property
@pulumi.getter
def message(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "message")
@message.setter
def message(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "message", value)
@property
@pulumi.getter
def status(self) -> Optional[pulumi.Input[str]]:
"""
Represents resource health status
"""
return pulumi.get(self, "status")
@status.setter
def status(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "status", value)
@pulumi.input_type
class ApplicationStatusSummaryArgs:
def __init__(__self__, *,
external_urls: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
images: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None):
"""
:param pulumi.Input[Sequence[pulumi.Input[str]]] external_urls: ExternalURLs holds all external URLs of application child resources.
:param pulumi.Input[Sequence[pulumi.Input[str]]] images: Images holds all images of application child resources.
"""
if external_urls is not None:
pulumi.set(__self__, "external_urls", external_urls)
if images is not None:
pulumi.set(__self__, "images", images)
@property
@pulumi.getter(name="externalURLs")
def external_urls(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
ExternalURLs holds all external URLs of application child resources.
"""
return pulumi.get(self, "external_urls")
@external_urls.setter
def external_urls(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "external_urls", value)
@property
@pulumi.getter
def images(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
Images holds all images of application child resources.
"""
return pulumi.get(self, "images")
@images.setter
def images(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "images", value)
@pulumi.input_type
class ApplicationStatusSyncArgs:
def __init__(__self__, *,
status: pulumi.Input[str],
compared_to: Optional[pulumi.Input['ApplicationStatusSyncComparedToArgs']] = None,
revision: Optional[pulumi.Input[str]] = None):
"""
SyncStatus is a comparison result of application spec and deployed application.
:param pulumi.Input[str] status: SyncStatusCode is a type which represents possible comparison results
:param pulumi.Input['ApplicationStatusSyncComparedToArgs'] compared_to: ComparedTo contains application source and target which was used for resources comparison
"""
pulumi.set(__self__, "status", status)
if compared_to is not None:
pulumi.set(__self__, "compared_to", compared_to)
if revision is not None:
pulumi.set(__self__, "revision", revision)
@property
@pulumi.getter
def status(self) -> pulumi.Input[str]:
"""
SyncStatusCode is a type which represents possible comparison results
"""
return pulumi.get(self, "status")
@status.setter
def status(self, value: pulumi.Input[str]):
pulumi.set(self, "status", value)
@property
@pulumi.getter(name="comparedTo")
def compared_to(self) -> Optional[pulumi.Input['ApplicationStatusSyncComparedToArgs']]:
"""
ComparedTo contains application source and target which was used for resources comparison
"""
return pulumi.get(self, "compared_to")
@compared_to.setter
def compared_to(self, value: Optional[pulumi.Input['ApplicationStatusSyncComparedToArgs']]):
pulumi.set(self, "compared_to", value)
@property
@pulumi.getter
def revision(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "revision")
@revision.setter
def revision(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "revision", value)
@pulumi.input_type
class ApplicationStatusSyncComparedToArgs:
def __init__(__self__, *,
destination: pulumi.Input['ApplicationStatusSyncComparedToDestinationArgs'],
source: pulumi.Input['ApplicationStatusSyncComparedToSourceArgs']):
"""
ComparedTo contains application source and target which was used for resources comparison
:param pulumi.Input['ApplicationStatusSyncComparedToDestinationArgs'] destination: ApplicationDestination contains deployment destination information
:param pulumi.Input['ApplicationStatusSyncComparedToSourceArgs'] source: ApplicationSource contains information about the GitHub repository, the path within the repository, and the target application environment.
"""
pulumi.set(__self__, "destination", destination)
pulumi.set(__self__, "source", source)
@property
@pulumi.getter
def destination(self) -> pulumi.Input['ApplicationStatusSyncComparedToDestinationArgs']:
"""
ApplicationDestination contains deployment destination information
"""
return pulumi.get(self, "destination")
@destination.setter
def destination(self, value: pulumi.Input['ApplicationStatusSyncComparedToDestinationArgs']):
pulumi.set(self, "destination", value)
@property
@pulumi.getter
def source(self) -> pulumi.Input['ApplicationStatusSyncComparedToSourceArgs']:
"""
ApplicationSource contains information about the GitHub repository, the path within the repository, and the target application environment.
"""
return pulumi.get(self, "source")
@source.setter
def source(self, value: pulumi.Input['ApplicationStatusSyncComparedToSourceArgs']):
pulumi.set(self, "source", value)
@pulumi.input_type
class ApplicationStatusSyncComparedToDestinationArgs:
def __init__(__self__, *,
name: Optional[pulumi.Input[str]] = None,
namespace: Optional[pulumi.Input[str]] = None,
server: Optional[pulumi.Input[str]] = None):
"""
ApplicationDestination contains deployment destination information
:param pulumi.Input[str] name: Name of the destination cluster, which can be used instead of the server (url) field
:param pulumi.Input[str] namespace: Namespace overrides the environment namespace value in the ksonnet app.yaml
:param pulumi.Input[str] server: Server overrides the environment server value in the ksonnet app.yaml
"""
if name is not None:
pulumi.set(__self__, "name", name)
if namespace is not None:
pulumi.set(__self__, "namespace", namespace)
if server is not None:
pulumi.set(__self__, "server", server)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Name of the destination cluster, which can be used instead of the server (url) field
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def namespace(self) -> Optional[pulumi.Input[str]]:
"""
Namespace overrides the environment namespace value in the ksonnet app.yaml
"""
return pulumi.get(self, "namespace")
@namespace.setter
def namespace(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "namespace", value)
@property
@pulumi.getter
def server(self) -> Optional[pulumi.Input[str]]:
"""
Server overrides the environment server value in the ksonnet app.yaml
"""
return pulumi.get(self, "server")
@server.setter
def server(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "server", value)
@pulumi.input_type
class ApplicationStatusSyncComparedToSourceArgs:
def __init__(__self__, *,
repo_url: pulumi.Input[str],
chart: Optional[pulumi.Input[str]] = None,
directory: Optional[pulumi.Input['ApplicationStatusSyncComparedToSourceDirectoryArgs']] = None,
helm: Optional[pulumi.Input['ApplicationStatusSyncComparedToSourceHelmArgs']] = None,
ksonnet: Optional[pulumi.Input['ApplicationStatusSyncComparedToSourceKsonnetArgs']] = None,
kustomize: Optional[pulumi.Input['ApplicationStatusSyncComparedToSourceKustomizeArgs']] = None,
path: Optional[pulumi.Input[str]] = None,
plugin: Optional[pulumi.Input['ApplicationStatusSyncComparedToSourcePluginArgs']] = None,
target_revision: Optional[pulumi.Input[str]] = None):
"""
ApplicationSource contains information about the GitHub repository, the path within the repository, and the target application environment.
:param pulumi.Input[str] repo_url: RepoURL is the repository URL of the application manifests
:param pulumi.Input[str] chart: Chart is a Helm chart name
:param pulumi.Input['ApplicationStatusSyncComparedToSourceDirectoryArgs'] directory: Directory holds path/directory specific options
:param pulumi.Input['ApplicationStatusSyncComparedToSourceHelmArgs'] helm: Helm holds helm specific options
:param pulumi.Input['ApplicationStatusSyncComparedToSourceKsonnetArgs'] ksonnet: Ksonnet holds ksonnet specific options
:param pulumi.Input['ApplicationStatusSyncComparedToSourceKustomizeArgs'] kustomize: Kustomize holds kustomize specific options
:param pulumi.Input[str] path: Path is a directory path within the Git repository
:param pulumi.Input['ApplicationStatusSyncComparedToSourcePluginArgs'] plugin: ConfigManagementPlugin holds config management plugin specific options
:param pulumi.Input[str] target_revision: TargetRevision defines the commit, tag, or branch to sync the application to. If omitted, it will sync to HEAD
"""
pulumi.set(__self__, "repo_url", repo_url)
if chart is not None:
pulumi.set(__self__, "chart", chart)
if directory is not None:
pulumi.set(__self__, "directory", directory)
if helm is not None:
pulumi.set(__self__, "helm", helm)
if ksonnet is not None:
pulumi.set(__self__, "ksonnet", ksonnet)
if kustomize is not None:
pulumi.set(__self__, "kustomize", kustomize)
if path is not None:
pulumi.set(__self__, "path", path)
if plugin is not None:
pulumi.set(__self__, "plugin", plugin)
if target_revision is not None:
pulumi.set(__self__, "target_revision", target_revision)
@property
@pulumi.getter(name="repoURL")
def repo_url(self) -> pulumi.Input[str]:
"""
RepoURL is the repository URL of the application manifests
"""
return pulumi.get(self, "repo_url")
@repo_url.setter
def repo_url(self, value: pulumi.Input[str]):
pulumi.set(self, "repo_url", value)
@property
@pulumi.getter
def chart(self) -> Optional[pulumi.Input[str]]:
"""
Chart is a Helm chart name
"""
return pulumi.get(self, "chart")
@chart.setter
def chart(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "chart", value)
@property
@pulumi.getter
def directory(self) -> Optional[pulumi.Input['ApplicationStatusSyncComparedToSourceDirectoryArgs']]:
"""
Directory holds path/directory specific options
"""
return pulumi.get(self, "directory")
@directory.setter
def directory(self, value: Optional[pulumi.Input['ApplicationStatusSyncComparedToSourceDirectoryArgs']]):
pulumi.set(self, "directory", value)
@property
@pulumi.getter
def helm(self) -> Optional[pulumi.Input['ApplicationStatusSyncComparedToSourceHelmArgs']]:
"""
Helm holds helm specific options
"""
return pulumi.get(self, "helm")
@helm.setter
def helm(self, value: Optional[pulumi.Input['ApplicationStatusSyncComparedToSourceHelmArgs']]):
pulumi.set(self, "helm", value)
@property
@pulumi.getter
def ksonnet(self) -> Optional[pulumi.Input['ApplicationStatusSyncComparedToSourceKsonnetArgs']]:
"""
Ksonnet holds ksonnet specific options
"""
return pulumi.get(self, "ksonnet")
@ksonnet.setter
def ksonnet(self, value: Optional[pulumi.Input['ApplicationStatusSyncComparedToSourceKsonnetArgs']]):
pulumi.set(self, "ksonnet", value)
@property
@pulumi.getter
def kustomize(self) -> Optional[pulumi.Input['ApplicationStatusSyncComparedToSourceKustomizeArgs']]:
"""
Kustomize holds kustomize specific options
"""
return pulumi.get(self, "kustomize")
@kustomize.setter
def kustomize(self, value: Optional[pulumi.Input['ApplicationStatusSyncComparedToSourceKustomizeArgs']]):
pulumi.set(self, "kustomize", value)
@property
@pulumi.getter
def path(self) -> Optional[pulumi.Input[str]]:
"""
Path is a directory path within the Git repository
"""
return pulumi.get(self, "path")
@path.setter
def path(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "path", value)
@property
@pulumi.getter
def plugin(self) -> Optional[pulumi.Input['ApplicationStatusSyncComparedToSourcePluginArgs']]:
"""
ConfigManagementPlugin holds config management plugin specific options
"""
return pulumi.get(self, "plugin")
@plugin.setter
def plugin(self, value: Optional[pulumi.Input['ApplicationStatusSyncComparedToSourcePluginArgs']]):
pulumi.set(self, "plugin", value)
@property
@pulumi.getter(name="targetRevision")
def target_revision(self) -> Optional[pulumi.Input[str]]:
"""
TargetRevision defines the commit, tag, or branch to sync the application to. If omitted, it will sync to HEAD
"""
return pulumi.get(self, "target_revision")
@target_revision.setter
def target_revision(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "target_revision", value)
@pulumi.input_type
class ApplicationStatusSyncComparedToSourceDirectoryArgs:
def __init__(__self__, *,
jsonnet: Optional[pulumi.Input['ApplicationStatusSyncComparedToSourceDirectoryJsonnetArgs']] = None,
recurse: Optional[pulumi.Input[bool]] = None):
"""
Directory holds path/directory specific options
:param pulumi.Input['ApplicationStatusSyncComparedToSourceDirectoryJsonnetArgs'] jsonnet: ApplicationSourceJsonnet holds jsonnet specific options
"""
if jsonnet is not None:
pulumi.set(__self__, "jsonnet", jsonnet)
if recurse is not None:
pulumi.set(__self__, "recurse", recurse)
@property
@pulumi.getter
def jsonnet(self) -> Optional[pulumi.Input['ApplicationStatusSyncComparedToSourceDirectoryJsonnetArgs']]:
"""
ApplicationSourceJsonnet holds jsonnet specific options
"""
return pulumi.get(self, "jsonnet")
@jsonnet.setter
def jsonnet(self, value: Optional[pulumi.Input['ApplicationStatusSyncComparedToSourceDirectoryJsonnetArgs']]):
pulumi.set(self, "jsonnet", value)
@property
@pulumi.getter
def recurse(self) -> Optional[pulumi.Input[bool]]:
return pulumi.get(self, "recurse")
@recurse.setter
def recurse(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "recurse", value)
@pulumi.input_type
class ApplicationStatusSyncComparedToSourceDirectoryJsonnetArgs:
def __init__(__self__, *,
ext_vars: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusSyncComparedToSourceDirectoryJsonnetExtVarsArgs']]]] = None,
libs: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
tlas: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusSyncComparedToSourceDirectoryJsonnetTlasArgs']]]] = None):
"""
ApplicationSourceJsonnet holds jsonnet specific options
:param pulumi.Input[Sequence[pulumi.Input['ApplicationStatusSyncComparedToSourceDirectoryJsonnetExtVarsArgs']]] ext_vars: ExtVars is a list of Jsonnet External Variables
:param pulumi.Input[Sequence[pulumi.Input[str]]] libs: Additional library search dirs
:param pulumi.Input[Sequence[pulumi.Input['ApplicationStatusSyncComparedToSourceDirectoryJsonnetTlasArgs']]] tlas: TLAS is a list of Jsonnet Top-level Arguments
"""
if ext_vars is not None:
pulumi.set(__self__, "ext_vars", ext_vars)
if libs is not None:
pulumi.set(__self__, "libs", libs)
if tlas is not None:
pulumi.set(__self__, "tlas", tlas)
@property
@pulumi.getter(name="extVars")
def ext_vars(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusSyncComparedToSourceDirectoryJsonnetExtVarsArgs']]]]:
"""
ExtVars is a list of Jsonnet External Variables
"""
return pulumi.get(self, "ext_vars")
@ext_vars.setter
def ext_vars(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusSyncComparedToSourceDirectoryJsonnetExtVarsArgs']]]]):
pulumi.set(self, "ext_vars", value)
@property
@pulumi.getter
def libs(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
Additional library search dirs
"""
return pulumi.get(self, "libs")
@libs.setter
def libs(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "libs", value)
@property
@pulumi.getter
def tlas(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusSyncComparedToSourceDirectoryJsonnetTlasArgs']]]]:
"""
TLAs is a list of Jsonnet top-level arguments
"""
return pulumi.get(self, "tlas")
@tlas.setter
def tlas(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusSyncComparedToSourceDirectoryJsonnetTlasArgs']]]]):
pulumi.set(self, "tlas", value)
@pulumi.input_type
class ApplicationStatusSyncComparedToSourceDirectoryJsonnetExtVarsArgs:
def __init__(__self__, *,
name: pulumi.Input[str],
value: pulumi.Input[str],
code: Optional[pulumi.Input[bool]] = None):
"""
JsonnetVar is a jsonnet variable
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "value", value)
if code is not None:
pulumi.set(__self__, "code", code)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@property
@pulumi.getter
def code(self) -> Optional[pulumi.Input[bool]]:
return pulumi.get(self, "code")
@code.setter
def code(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "code", value)
@pulumi.input_type
class ApplicationStatusSyncComparedToSourceDirectoryJsonnetTlasArgs:
def __init__(__self__, *,
name: pulumi.Input[str],
value: pulumi.Input[str],
code: Optional[pulumi.Input[bool]] = None):
"""
JsonnetVar is a jsonnet variable
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "value", value)
if code is not None:
pulumi.set(__self__, "code", code)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@property
@pulumi.getter
def code(self) -> Optional[pulumi.Input[bool]]:
return pulumi.get(self, "code")
@code.setter
def code(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "code", value)
@pulumi.input_type
class ApplicationStatusSyncComparedToSourceHelmArgs:
def __init__(__self__, *,
file_parameters: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusSyncComparedToSourceHelmFileParametersArgs']]]] = None,
parameters: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusSyncComparedToSourceHelmParametersArgs']]]] = None,
release_name: Optional[pulumi.Input[str]] = None,
value_files: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
values: Optional[pulumi.Input[str]] = None):
"""
Helm holds helm specific options
:param pulumi.Input[Sequence[pulumi.Input['ApplicationStatusSyncComparedToSourceHelmFileParametersArgs']]] file_parameters: FileParameters are file parameters to the helm template
:param pulumi.Input[Sequence[pulumi.Input['ApplicationStatusSyncComparedToSourceHelmParametersArgs']]] parameters: Parameters are parameters to the helm template
:param pulumi.Input[str] release_name: The Helm release name. If omitted, the application name is used
:param pulumi.Input[Sequence[pulumi.Input[str]]] value_files: ValueFiles is a list of Helm value files to use when generating a template
:param pulumi.Input[str] values: Values specifies Helm values to be passed to helm template, typically defined as a block
"""
if file_parameters is not None:
pulumi.set(__self__, "file_parameters", file_parameters)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if release_name is not None:
pulumi.set(__self__, "release_name", release_name)
if value_files is not None:
pulumi.set(__self__, "value_files", value_files)
if values is not None:
pulumi.set(__self__, "values", values)
@property
@pulumi.getter(name="fileParameters")
def file_parameters(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusSyncComparedToSourceHelmFileParametersArgs']]]]:
"""
FileParameters are file parameters to the helm template
"""
return pulumi.get(self, "file_parameters")
@file_parameters.setter
def file_parameters(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusSyncComparedToSourceHelmFileParametersArgs']]]]):
pulumi.set(self, "file_parameters", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusSyncComparedToSourceHelmParametersArgs']]]]:
"""
Parameters are parameters to the helm template
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusSyncComparedToSourceHelmParametersArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter(name="releaseName")
def release_name(self) -> Optional[pulumi.Input[str]]:
"""
The Helm release name. If omitted, the application name is used
"""
return pulumi.get(self, "release_name")
@release_name.setter
def release_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "release_name", value)
@property
@pulumi.getter(name="valueFiles")
def value_files(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
ValueFiles is a list of Helm value files to use when generating a template
"""
return pulumi.get(self, "value_files")
@value_files.setter
def value_files(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "value_files", value)
@property
@pulumi.getter
def values(self) -> Optional[pulumi.Input[str]]:
"""
Values specifies Helm values to be passed to helm template, typically defined as a block
"""
return pulumi.get(self, "values")
@values.setter
def values(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "values", value)
@pulumi.input_type
class ApplicationStatusSyncComparedToSourceHelmFileParametersArgs:
def __init__(__self__, *,
name: Optional[pulumi.Input[str]] = None,
path: Optional[pulumi.Input[str]] = None):
"""
HelmFileParameter is a file parameter to a helm template
:param pulumi.Input[str] name: Name is the name of the helm parameter
:param pulumi.Input[str] path: Path is the path value for the helm parameter
"""
if name is not None:
pulumi.set(__self__, "name", name)
if path is not None:
pulumi.set(__self__, "path", path)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Name is the name of the helm parameter
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def path(self) -> Optional[pulumi.Input[str]]:
"""
Path is the path value for the helm parameter
"""
return pulumi.get(self, "path")
@path.setter
def path(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "path", value)
@pulumi.input_type
class ApplicationStatusSyncComparedToSourceHelmParametersArgs:
def __init__(__self__, *,
force_string: Optional[pulumi.Input[bool]] = None,
name: Optional[pulumi.Input[str]] = None,
value: Optional[pulumi.Input[str]] = None):
"""
HelmParameter is a parameter to a helm template
:param pulumi.Input[bool] force_string: ForceString determines whether to tell Helm to interpret booleans and numbers as strings
:param pulumi.Input[str] name: Name is the name of the helm parameter
:param pulumi.Input[str] value: Value is the value for the helm parameter
"""
if force_string is not None:
pulumi.set(__self__, "force_string", force_string)
if name is not None:
pulumi.set(__self__, "name", name)
if value is not None:
pulumi.set(__self__, "value", value)
@property
@pulumi.getter(name="forceString")
def force_string(self) -> Optional[pulumi.Input[bool]]:
"""
ForceString determines whether to tell Helm to interpret booleans and numbers as strings
"""
return pulumi.get(self, "force_string")
@force_string.setter
def force_string(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "force_string", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Name is the name of the helm parameter
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def value(self) -> Optional[pulumi.Input[str]]:
"""
Value is the value for the helm parameter
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "value", value)
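ForceString exists because Helm's `--set` coerces values like `true` or `3` into booleans and numbers; passing them via `--set-string` keeps them as literal strings. A rough sketch of how such parameters could map to helm CLI flags (illustrative only, not the operator's actual implementation; `helm_set_flags` is a hypothetical helper):

```python
def helm_set_flags(params):
    """Map parameter dicts to helm CLI flags (illustrative sketch only)."""
    flags = []
    for p in params:
        # force_string switches --set (typed) to --set-string (verbatim string).
        opt = "--set-string" if p.get("force_string") else "--set"
        flags += [opt, f"{p['name']}={p['value']}"]
    return flags

flags = helm_set_flags([
    {"name": "image.tag", "value": "1.21"},
    {"name": "replicas", "value": "3", "force_string": True},
])
# flags == ["--set", "image.tag=1.21", "--set-string", "replicas=3"]
```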
@pulumi.input_type
class ApplicationStatusSyncComparedToSourceKsonnetArgs:
def __init__(__self__, *,
environment: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusSyncComparedToSourceKsonnetParametersArgs']]]] = None):
"""
Ksonnet holds ksonnet specific options
:param pulumi.Input[str] environment: Environment is a ksonnet application environment name
:param pulumi.Input[Sequence[pulumi.Input['ApplicationStatusSyncComparedToSourceKsonnetParametersArgs']]] parameters: Parameters are a list of ksonnet component parameter override values
"""
if environment is not None:
pulumi.set(__self__, "environment", environment)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
@property
@pulumi.getter
def environment(self) -> Optional[pulumi.Input[str]]:
"""
Environment is a ksonnet application environment name
"""
return pulumi.get(self, "environment")
@environment.setter
def environment(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "environment", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusSyncComparedToSourceKsonnetParametersArgs']]]]:
"""
Parameters are a list of ksonnet component parameter override values
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusSyncComparedToSourceKsonnetParametersArgs']]]]):
pulumi.set(self, "parameters", value)
@pulumi.input_type
class ApplicationStatusSyncComparedToSourceKsonnetParametersArgs:
def __init__(__self__, *,
name: pulumi.Input[str],
value: pulumi.Input[str],
component: Optional[pulumi.Input[str]] = None):
"""
KsonnetParameter is a ksonnet component parameter
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "value", value)
if component is not None:
pulumi.set(__self__, "component", component)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@property
@pulumi.getter
def component(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "component")
@component.setter
def component(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "component", value)
@pulumi.input_type
class ApplicationStatusSyncComparedToSourceKustomizeArgs:
def __init__(__self__, *,
common_labels: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
images: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
name_prefix: Optional[pulumi.Input[str]] = None,
name_suffix: Optional[pulumi.Input[str]] = None,
version: Optional[pulumi.Input[str]] = None):
"""
Kustomize holds kustomize specific options
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] common_labels: CommonLabels adds additional kustomize commonLabels
:param pulumi.Input[Sequence[pulumi.Input[str]]] images: Images are kustomize image overrides
:param pulumi.Input[str] name_prefix: NamePrefix is a prefix prepended to resources for kustomize apps
:param pulumi.Input[str] name_suffix: NameSuffix is a suffix appended to resources for kustomize apps
:param pulumi.Input[str] version: Version contains optional Kustomize version
"""
if common_labels is not None:
pulumi.set(__self__, "common_labels", common_labels)
if images is not None:
pulumi.set(__self__, "images", images)
if name_prefix is not None:
pulumi.set(__self__, "name_prefix", name_prefix)
if name_suffix is not None:
pulumi.set(__self__, "name_suffix", name_suffix)
if version is not None:
pulumi.set(__self__, "version", version)
@property
@pulumi.getter(name="commonLabels")
def common_labels(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
CommonLabels adds additional kustomize commonLabels
"""
return pulumi.get(self, "common_labels")
@common_labels.setter
def common_labels(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "common_labels", value)
@property
@pulumi.getter
def images(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
Images are kustomize image overrides
"""
return pulumi.get(self, "images")
@images.setter
def images(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "images", value)
@property
@pulumi.getter(name="namePrefix")
def name_prefix(self) -> Optional[pulumi.Input[str]]:
"""
NamePrefix is a prefix prepended to resources for kustomize apps
"""
return pulumi.get(self, "name_prefix")
@name_prefix.setter
def name_prefix(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name_prefix", value)
@property
@pulumi.getter(name="nameSuffix")
def name_suffix(self) -> Optional[pulumi.Input[str]]:
"""
NameSuffix is a suffix appended to resources for kustomize apps
"""
return pulumi.get(self, "name_suffix")
@name_suffix.setter
def name_suffix(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name_suffix", value)
@property
@pulumi.getter
def version(self) -> Optional[pulumi.Input[str]]:
"""
Version contains optional Kustomize version
"""
return pulumi.get(self, "version")
@version.setter
def version(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "version", value)
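Entries in `images` use the `kustomize edit set image` override syntax, e.g. `nginx=myrepo/nginx:1.21` or just `nginx:1.21`. A small sketch of splitting one override into its parts (hypothetical helper, assuming the common name/tag forms; the real kustomize grammar also supports digests):

```python
def parse_image_override(override: str):
    """Split 'name=new-name:new-tag' (or 'name:new-tag') into parts.
    Illustrative only; real kustomize syntax also supports @sha256 digests."""
    if "=" in override:
        name, replacement = override.split("=", 1)
    else:
        # Short form: the image name itself is being retagged.
        name, replacement = override.split(":", 1)[0], override
    new_name, _, new_tag = replacement.rpartition(":")
    return {"name": name, "new_name": new_name or replacement, "new_tag": new_tag}
```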
@pulumi.input_type
class ApplicationStatusSyncComparedToSourcePluginArgs:
def __init__(__self__, *,
env: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusSyncComparedToSourcePluginEnvArgs']]]] = None,
name: Optional[pulumi.Input[str]] = None):
"""
ConfigManagementPlugin holds config management plugin specific options
"""
if env is not None:
pulumi.set(__self__, "env", env)
if name is not None:
pulumi.set(__self__, "name", name)
@property
@pulumi.getter
def env(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusSyncComparedToSourcePluginEnvArgs']]]]:
return pulumi.get(self, "env")
@env.setter
def env(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ApplicationStatusSyncComparedToSourcePluginEnvArgs']]]]):
pulumi.set(self, "env", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@pulumi.input_type
class ApplicationStatusSyncComparedToSourcePluginEnvArgs:
def __init__(__self__, *,
name: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] name: Name of the environment variable, usually uppercase
:param pulumi.Input[str] value: Value of the environment variable
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
"""
Name of the environment variable, usually uppercase
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
Value of the environment variable
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class ArgoCDExportSpecArgs:
def __init__(__self__, *,
argocd: pulumi.Input[str],
image: Optional[pulumi.Input[str]] = None,
schedule: Optional[pulumi.Input[str]] = None,
storage: Optional[pulumi.Input['ArgoCDExportSpecStorageArgs']] = None,
version: Optional[pulumi.Input[str]] = None):
"""
ArgoCDExportSpec defines the desired state of ArgoCDExport
:param pulumi.Input[str] argocd: Argocd is the name of the ArgoCD instance to export.
:param pulumi.Input[str] image: Image is the container image to use for the export Job.
:param pulumi.Input[str] schedule: Schedule in Cron format, see https://en.wikipedia.org/wiki/Cron.
:param pulumi.Input['ArgoCDExportSpecStorageArgs'] storage: Storage defines the storage configuration options.
:param pulumi.Input[str] version: Version is the tag/digest to use for the export Job container image.
"""
pulumi.set(__self__, "argocd", argocd)
if image is not None:
pulumi.set(__self__, "image", image)
if schedule is not None:
pulumi.set(__self__, "schedule", schedule)
if storage is not None:
pulumi.set(__self__, "storage", storage)
if version is not None:
pulumi.set(__self__, "version", version)
@property
@pulumi.getter
def argocd(self) -> pulumi.Input[str]:
"""
Argocd is the name of the ArgoCD instance to export.
"""
return pulumi.get(self, "argocd")
@argocd.setter
def argocd(self, value: pulumi.Input[str]):
pulumi.set(self, "argocd", value)
@property
@pulumi.getter
def image(self) -> Optional[pulumi.Input[str]]:
"""
Image is the container image to use for the export Job.
"""
return pulumi.get(self, "image")
@image.setter
def image(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "image", value)
@property
@pulumi.getter
def schedule(self) -> Optional[pulumi.Input[str]]:
"""
Schedule in Cron format, see https://en.wikipedia.org/wiki/Cron.
"""
return pulumi.get(self, "schedule")
@schedule.setter
def schedule(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "schedule", value)
@property
@pulumi.getter
def storage(self) -> Optional[pulumi.Input['ArgoCDExportSpecStorageArgs']]:
"""
Storage defines the storage configuration options.
"""
return pulumi.get(self, "storage")
@storage.setter
def storage(self, value: Optional[pulumi.Input['ArgoCDExportSpecStorageArgs']]):
pulumi.set(self, "storage", value)
@property
@pulumi.getter
def version(self) -> Optional[pulumi.Input[str]]:
"""
Version is the tag/digest to use for the export Job container image.
"""
return pulumi.get(self, "version")
@version.setter
def version(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "version", value)
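The `schedule` field expects standard five-field cron syntax (`minute hour day-of-month month day-of-week`). A quick sanity check one might run before setting it (illustrative only; the operator itself is responsible for real validation, and shorthand forms like `@daily` are not covered here):

```python
def looks_like_cron(expr: str) -> bool:
    """Rough check: five whitespace-separated fields, e.g. '0 2 * * *'.
    Does not validate field ranges or named shorthands like '@daily'."""
    return len(expr.split()) == 5

# '0 2 * * *' means: run daily at 02:00.
```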
@pulumi.input_type
class ArgoCDExportSpecStorageArgs:
def __init__(__self__, *,
backend: Optional[pulumi.Input[str]] = None,
pvc: Optional[pulumi.Input['ArgoCDExportSpecStoragePvcArgs']] = None,
secret_name: Optional[pulumi.Input[str]] = None):
"""
Storage defines the storage configuration options.
:param pulumi.Input[str] backend: Backend defines the storage backend to use, must be "local" (the default), "aws", "azure" or "gcp".
:param pulumi.Input['ArgoCDExportSpecStoragePvcArgs'] pvc: PVC is the desired characteristics for a PersistentVolumeClaim.
:param pulumi.Input[str] secret_name: SecretName is the name of a Secret with encryption key, credentials, etc.
"""
if backend is not None:
pulumi.set(__self__, "backend", backend)
if pvc is not None:
pulumi.set(__self__, "pvc", pvc)
if secret_name is not None:
pulumi.set(__self__, "secret_name", secret_name)
@property
@pulumi.getter
def backend(self) -> Optional[pulumi.Input[str]]:
"""
Backend defines the storage backend to use, must be "local" (the default), "aws", "azure" or "gcp".
"""
return pulumi.get(self, "backend")
@backend.setter
def backend(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "backend", value)
@property
@pulumi.getter
def pvc(self) -> Optional[pulumi.Input['ArgoCDExportSpecStoragePvcArgs']]:
"""
PVC is the desired characteristics for a PersistentVolumeClaim.
"""
return pulumi.get(self, "pvc")
@pvc.setter
def pvc(self, value: Optional[pulumi.Input['ArgoCDExportSpecStoragePvcArgs']]):
pulumi.set(self, "pvc", value)
@property
@pulumi.getter(name="secretName")
def secret_name(self) -> Optional[pulumi.Input[str]]:
"""
SecretName is the name of a Secret with encryption key, credentials, etc.
"""
return pulumi.get(self, "secret_name")
@secret_name.setter
def secret_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "secret_name", value)
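Since `backend` accepts only the four documented values and defaults to "local" when omitted, a trivial guard can catch typos before the spec reaches the cluster (illustrative sketch; this SDK does not enforce it):

```python
VALID_BACKENDS = {"local", "aws", "azure", "gcp"}

def normalize_backend(backend: str = None) -> str:
    """Default to 'local' and reject unknown backends (illustrative sketch)."""
    backend = backend or "local"
    if backend not in VALID_BACKENDS:
        raise ValueError(f"unsupported storage backend: {backend!r}")
    return backend
```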
@pulumi.input_type
class ArgoCDExportSpecStoragePvcArgs:
def __init__(__self__, *,
access_modes: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
data_source: Optional[pulumi.Input['ArgoCDExportSpecStoragePvcDataSourceArgs']] = None,
resources: Optional[pulumi.Input['ArgoCDExportSpecStoragePvcResourcesArgs']] = None,
selector: Optional[pulumi.Input['ArgoCDExportSpecStoragePvcSelectorArgs']] = None,
storage_class_name: Optional[pulumi.Input[str]] = None,
volume_mode: Optional[pulumi.Input[str]] = None,
volume_name: Optional[pulumi.Input[str]] = None):
"""
PVC is the desired characteristics for a PersistentVolumeClaim.
:param pulumi.Input[Sequence[pulumi.Input[str]]] access_modes: AccessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
:param pulumi.Input['ArgoCDExportSpecStoragePvcDataSourceArgs'] data_source: This field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot - Beta) * An existing PVC (PersistentVolumeClaim) * An existing custom resource/object that implements data population (Alpha) In order to use VolumeSnapshot object types, the appropriate feature gate must be enabled (VolumeSnapshotDataSource or AnyVolumeDataSource) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the specified data source is not supported, the volume will not be created and the failure will be reported as an event. In the future, we plan to support more data source types and the behavior of the provisioner may change.
:param pulumi.Input['ArgoCDExportSpecStoragePvcResourcesArgs'] resources: Resources represents the minimum resources the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
:param pulumi.Input['ArgoCDExportSpecStoragePvcSelectorArgs'] selector: A label query over volumes to consider for binding.
:param pulumi.Input[str] storage_class_name: Name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1
:param pulumi.Input[str] volume_mode: volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec.
:param pulumi.Input[str] volume_name: VolumeName is the binding reference to the PersistentVolume backing this claim.
"""
if access_modes is not None:
pulumi.set(__self__, "access_modes", access_modes)
if data_source is not None:
pulumi.set(__self__, "data_source", data_source)
if resources is not None:
pulumi.set(__self__, "resources", resources)
if selector is not None:
pulumi.set(__self__, "selector", selector)
if storage_class_name is not None:
pulumi.set(__self__, "storage_class_name", storage_class_name)
if volume_mode is not None:
pulumi.set(__self__, "volume_mode", volume_mode)
if volume_name is not None:
pulumi.set(__self__, "volume_name", volume_name)
@property
@pulumi.getter(name="accessModes")
def access_modes(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
AccessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
"""
return pulumi.get(self, "access_modes")
@access_modes.setter
def access_modes(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "access_modes", value)
@property
@pulumi.getter(name="dataSource")
def data_source(self) -> Optional[pulumi.Input['ArgoCDExportSpecStoragePvcDataSourceArgs']]:
"""
This field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot - Beta) * An existing PVC (PersistentVolumeClaim) * An existing custom resource/object that implements data population (Alpha) In order to use VolumeSnapshot object types, the appropriate feature gate must be enabled (VolumeSnapshotDataSource or AnyVolumeDataSource) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the specified data source is not supported, the volume will not be created and the failure will be reported as an event. In the future, we plan to support more data source types and the behavior of the provisioner may change.
"""
return pulumi.get(self, "data_source")
@data_source.setter
def data_source(self, value: Optional[pulumi.Input['ArgoCDExportSpecStoragePvcDataSourceArgs']]):
pulumi.set(self, "data_source", value)
@property
@pulumi.getter
def resources(self) -> Optional[pulumi.Input['ArgoCDExportSpecStoragePvcResourcesArgs']]:
"""
Resources represents the minimum resources the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
"""
return pulumi.get(self, "resources")
@resources.setter
def resources(self, value: Optional[pulumi.Input['ArgoCDExportSpecStoragePvcResourcesArgs']]):
pulumi.set(self, "resources", value)
@property
@pulumi.getter
def selector(self) -> Optional[pulumi.Input['ArgoCDExportSpecStoragePvcSelectorArgs']]:
"""
A label query over volumes to consider for binding.
"""
return pulumi.get(self, "selector")
@selector.setter
def selector(self, value: Optional[pulumi.Input['ArgoCDExportSpecStoragePvcSelectorArgs']]):
pulumi.set(self, "selector", value)
@property
@pulumi.getter(name="storageClassName")
def storage_class_name(self) -> Optional[pulumi.Input[str]]:
"""
Name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1
"""
return pulumi.get(self, "storage_class_name")
@storage_class_name.setter
def storage_class_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "storage_class_name", value)
@property
@pulumi.getter(name="volumeMode")
def volume_mode(self) -> Optional[pulumi.Input[str]]:
"""
volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec.
"""
return pulumi.get(self, "volume_mode")
@volume_mode.setter
def volume_mode(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "volume_mode", value)
@property
@pulumi.getter(name="volumeName")
def volume_name(self) -> Optional[pulumi.Input[str]]:
"""
VolumeName is the binding reference to the PersistentVolume backing this claim.
"""
return pulumi.get(self, "volume_name")
@volume_name.setter
def volume_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "volume_name", value)
@pulumi.input_type
class ArgoCDExportSpecStoragePvcDataSourceArgs:
def __init__(__self__, *,
kind: pulumi.Input[str],
name: pulumi.Input[str],
api_group: Optional[pulumi.Input[str]] = None):
"""
This field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot - Beta) * An existing PVC (PersistentVolumeClaim) * An existing custom resource/object that implements data population (Alpha) In order to use VolumeSnapshot object types, the appropriate feature gate must be enabled (VolumeSnapshotDataSource or AnyVolumeDataSource) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the specified data source is not supported, the volume will not be created and the failure will be reported as an event. In the future, we plan to support more data source types and the behavior of the provisioner may change.
:param pulumi.Input[str] kind: Kind is the type of resource being referenced
:param pulumi.Input[str] name: Name is the name of the resource being referenced
:param pulumi.Input[str] api_group: APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
"""
pulumi.set(__self__, "kind", kind)
pulumi.set(__self__, "name", name)
if api_group is not None:
pulumi.set(__self__, "api_group", api_group)
@property
@pulumi.getter
def kind(self) -> pulumi.Input[str]:
"""
Kind is the type of resource being referenced
"""
return pulumi.get(self, "kind")
@kind.setter
def kind(self, value: pulumi.Input[str]):
pulumi.set(self, "kind", value)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
"""
Name is the name of the resource being referenced
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter(name="apiGroup")
def api_group(self) -> Optional[pulumi.Input[str]]:
"""
APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
"""
return pulumi.get(self, "api_group")
@api_group.setter
def api_group(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "api_group", value)
@pulumi.input_type
class ArgoCDExportSpecStoragePvcResourcesArgs:
def __init__(__self__, *,
limits: Optional[pulumi.Input[Mapping[str, pulumi.Input['ArgoCDExportSpecStoragePvcResourcesLimitsArgs']]]] = None,
requests: Optional[pulumi.Input[Mapping[str, pulumi.Input['ArgoCDExportSpecStoragePvcResourcesRequestsArgs']]]] = None):
"""
Resources represents the minimum resources the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
:param pulumi.Input[Mapping[str, pulumi.Input['ArgoCDExportSpecStoragePvcResourcesLimitsArgs']]] limits: Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
:param pulumi.Input[Mapping[str, pulumi.Input['ArgoCDExportSpecStoragePvcResourcesRequestsArgs']]] requests: Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
"""
if limits is not None:
pulumi.set(__self__, "limits", limits)
if requests is not None:
pulumi.set(__self__, "requests", requests)
@property
@pulumi.getter
def limits(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ArgoCDExportSpecStoragePvcResourcesLimitsArgs']]]]:
"""
Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
"""
return pulumi.get(self, "limits")
@limits.setter
def limits(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ArgoCDExportSpecStoragePvcResourcesLimitsArgs']]]]):
pulumi.set(self, "limits", value)
@property
@pulumi.getter
def requests(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ArgoCDExportSpecStoragePvcResourcesRequestsArgs']]]]:
"""
Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
"""
return pulumi.get(self, "requests")
@requests.setter
def requests(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ArgoCDExportSpecStoragePvcResourcesRequestsArgs']]]]):
pulumi.set(self, "requests", value)
@pulumi.input_type
class ArgoCDExportSpecStoragePvcResourcesLimitsArgs:
def __init__(__self__):
pass
@pulumi.input_type
class ArgoCDExportSpecStoragePvcResourcesRequestsArgs:
def __init__(__self__):
pass
@pulumi.input_type
class ArgoCDExportSpecStoragePvcSelectorArgs:
def __init__(__self__, *,
match_expressions: Optional[pulumi.Input[Sequence[pulumi.Input['ArgoCDExportSpecStoragePvcSelectorMatchExpressionsArgs']]]] = None,
match_labels: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None):
"""
A label query over volumes to consider for binding.
:param pulumi.Input[Sequence[pulumi.Input['ArgoCDExportSpecStoragePvcSelectorMatchExpressionsArgs']]] match_expressions: matchExpressions is a list of label selector requirements. The requirements are ANDed.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] match_labels: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
"""
if match_expressions is not None:
pulumi.set(__self__, "match_expressions", match_expressions)
if match_labels is not None:
pulumi.set(__self__, "match_labels", match_labels)
@property
@pulumi.getter(name="matchExpressions")
def match_expressions(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ArgoCDExportSpecStoragePvcSelectorMatchExpressionsArgs']]]]:
"""
matchExpressions is a list of label selector requirements. The requirements are ANDed.
"""
return pulumi.get(self, "match_expressions")
@match_expressions.setter
def match_expressions(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ArgoCDExportSpecStoragePvcSelectorMatchExpressionsArgs']]]]):
pulumi.set(self, "match_expressions", value)
@property
@pulumi.getter(name="matchLabels")
def match_labels(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
"""
return pulumi.get(self, "match_labels")
@match_labels.setter
def match_labels(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "match_labels", value)
@pulumi.input_type
class ArgoCDExportSpecStoragePvcSelectorMatchExpressionsArgs:
def __init__(__self__, *,
key: pulumi.Input[str],
operator: pulumi.Input[str],
values: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None):
"""
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
:param pulumi.Input[str] key: key is the label key that the selector applies to.
:param pulumi.Input[str] operator: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
:param pulumi.Input[Sequence[pulumi.Input[str]]] values: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
"""
pulumi.set(__self__, "key", key)
pulumi.set(__self__, "operator", operator)
if values is not None:
pulumi.set(__self__, "values", values)
@property
@pulumi.getter
def key(self) -> pulumi.Input[str]:
"""
key is the label key that the selector applies to.
"""
return pulumi.get(self, "key")
@key.setter
def key(self, value: pulumi.Input[str]):
pulumi.set(self, "key", value)
@property
@pulumi.getter
def operator(self) -> pulumi.Input[str]:
"""
operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
"""
return pulumi.get(self, "operator")
@operator.setter
def operator(self, value: pulumi.Input[str]):
pulumi.set(self, "operator", value)
@property
@pulumi.getter
def values(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
"""
return pulumi.get(self, "values")
@values.setter
def values(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "values", value)
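The selector docstrings above state that each matchLabels entry is equivalent to a matchExpressions requirement with operator "In" and a single value, and that all requirements are ANDed. A minimal sketch of that equivalence, using a hypothetical helper (plain dicts rather than the generated Args classes, so it stands alone):

```python
# Hypothetical helper, not part of the generated SDK: rewrites a matchLabels
# map as the equivalent list of matchExpressions requirements.
def match_labels_to_expressions(match_labels):
    """Each {key: value} pair becomes {key, operator: "In", values: [value]}."""
    return [
        {"key": key, "operator": "In", "values": [value]}
        for key, value in match_labels.items()
    ]

selector = {"matchLabels": {"app": "argocd", "tier": "storage"}}
expressions = match_labels_to_expressions(selector["matchLabels"])
# Both forms select volumes labeled app=argocd AND tier=storage;
# requirements within a selector are always ANDed.
```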
@pulumi.input_type
class ArgoCDExportStatusArgs:
def __init__(__self__, *,
phase: pulumi.Input[str]):
"""
ArgoCDExportStatus defines the observed state of ArgoCDExport
:param pulumi.Input[str] phase: Phase is a simple, high-level summary of where the ArgoCDExport is in its lifecycle. There are five possible phase values: Pending: The ArgoCDExport has been accepted by the Kubernetes system, but one or more of the required resources have not been created. Running: All of the containers for the ArgoCDExport are still running, or in the process of starting or restarting. Succeeded: All containers for the ArgoCDExport have terminated in success, and will not be restarted. Failed: At least one container has terminated in failure, either exited with non-zero status or was terminated by the system. Unknown: For some reason the state of the ArgoCDExport could not be obtained.
"""
pulumi.set(__self__, "phase", phase)
@property
@pulumi.getter
def phase(self) -> pulumi.Input[str]:
"""
Phase is a simple, high-level summary of where the ArgoCDExport is in its lifecycle. There are five possible phase values: Pending: The ArgoCDExport has been accepted by the Kubernetes system, but one or more of the required resources have not been created. Running: All of the containers for the ArgoCDExport are still running, or in the process of starting or restarting. Succeeded: All containers for the ArgoCDExport have terminated in success, and will not be restarted. Failed: At least one container has terminated in failure, either exited with non-zero status or was terminated by the system. Unknown: For some reason the state of the ArgoCDExport could not be obtained.
"""
return pulumi.get(self, "phase")
@phase.setter
def phase(self, value: pulumi.Input[str]):
pulumi.set(self, "phase", value)
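The phase docstring above enumerates the five lifecycle values an ArgoCDExport can report. A small sketch (hypothetical names, not part of the generated SDK) of validating a phase string against that set:

```python
# The five phase values enumerated in the ArgoCDExportStatus docstring.
VALID_EXPORT_PHASES = {"Pending", "Running", "Succeeded", "Failed", "Unknown"}

def is_valid_export_phase(phase):
    """Return True when `phase` is one of the documented lifecycle values."""
    return phase in VALID_EXPORT_PHASES
```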
@pulumi.input_type
class ArgoCDSpecArgs:
def __init__(__self__, *,
application_instance_label_key: Optional[pulumi.Input[str]] = None,
config_management_plugins: Optional[pulumi.Input[str]] = None,
controller: Optional[pulumi.Input['ArgoCDSpecControllerArgs']] = None,
dex: Optional[pulumi.Input['ArgoCDSpecDexArgs']] = None,
ga_anonymize_users: Optional[pulumi.Input[bool]] = None,
ga_tracking_id: Optional[pulumi.Input[str]] = None,
grafana: Optional[pulumi.Input['ArgoCDSpecGrafanaArgs']] = None,
ha: Optional[pulumi.Input['ArgoCDSpecHaArgs']] = None,
help_chat_text: Optional[pulumi.Input[str]] = None,
help_chat_url: Optional[pulumi.Input[str]] = None,
image: Optional[pulumi.Input[str]] = None,
import_: Optional[pulumi.Input['ArgoCDSpecImportArgs']] = None,
initial_repositories: Optional[pulumi.Input[str]] = None,
initial_ssh_known_hosts: Optional[pulumi.Input['ArgoCDSpecInitialSSHKnownHostsArgs']] = None,
kustomize_build_options: Optional[pulumi.Input[str]] = None,
oidc_config: Optional[pulumi.Input[str]] = None,
prometheus: Optional[pulumi.Input['ArgoCDSpecPrometheusArgs']] = None,
rbac: Optional[pulumi.Input['ArgoCDSpecRbacArgs']] = None,
redis: Optional[pulumi.Input['ArgoCDSpecRedisArgs']] = None,
repo: Optional[pulumi.Input['ArgoCDSpecRepoArgs']] = None,
repository_credentials: Optional[pulumi.Input[str]] = None,
resource_customizations: Optional[pulumi.Input[str]] = None,
resource_exclusions: Optional[pulumi.Input[str]] = None,
resource_inclusions: Optional[pulumi.Input[str]] = None,
server: Optional[pulumi.Input['ArgoCDSpecServerArgs']] = None,
status_badge_enabled: Optional[pulumi.Input[bool]] = None,
tls: Optional[pulumi.Input['ArgoCDSpecTlsArgs']] = None,
users_anonymous_enabled: Optional[pulumi.Input[bool]] = None,
version: Optional[pulumi.Input[str]] = None):
"""
ArgoCDSpec defines the desired state of ArgoCD
:param pulumi.Input[str] application_instance_label_key: ApplicationInstanceLabelKey is the key name where Argo CD injects the app name as a tracking label.
:param pulumi.Input[str] config_management_plugins: ConfigManagementPlugins is used to specify additional config management plugins.
:param pulumi.Input['ArgoCDSpecControllerArgs'] controller: Controller defines the Application Controller options for ArgoCD.
:param pulumi.Input['ArgoCDSpecDexArgs'] dex: Dex defines the Dex server options for ArgoCD.
:param pulumi.Input[bool] ga_anonymize_users: GAAnonymizeUsers toggles whether user IDs are hashed before being sent to Google Analytics.
:param pulumi.Input[str] ga_tracking_id: GATrackingID is the Google Analytics tracking ID to use.
:param pulumi.Input['ArgoCDSpecGrafanaArgs'] grafana: Grafana defines the Grafana server options for ArgoCD.
:param pulumi.Input['ArgoCDSpecHaArgs'] ha: HA options for High Availability support for the Redis component.
:param pulumi.Input[str] help_chat_text: HelpChatText is the text for getting chat help; defaults to "Chat now!".
:param pulumi.Input[str] help_chat_url: HelpChatURL is the URL for getting chat help; this will typically be your Slack channel for support.
:param pulumi.Input[str] image: Image is the ArgoCD container image for all ArgoCD components.
:param pulumi.Input['ArgoCDSpecImportArgs'] import_: Import is the import/restore options for ArgoCD.
:param pulumi.Input[str] initial_repositories: InitialRepositories to configure Argo CD with upon creation of the cluster.
:param pulumi.Input['ArgoCDSpecInitialSSHKnownHostsArgs'] initial_ssh_known_hosts: InitialSSHKnownHosts defines the SSH known hosts data upon creation of the cluster for connecting Git repositories via SSH.
:param pulumi.Input[str] kustomize_build_options: KustomizeBuildOptions is used to specify build options/parameters to use with `kustomize build`.
:param pulumi.Input[str] oidc_config: OIDCConfig is the OIDC configuration as an alternative to dex.
:param pulumi.Input['ArgoCDSpecPrometheusArgs'] prometheus: Prometheus defines the Prometheus server options for ArgoCD.
:param pulumi.Input['ArgoCDSpecRbacArgs'] rbac: RBAC defines the RBAC configuration for Argo CD.
:param pulumi.Input['ArgoCDSpecRedisArgs'] redis: Redis defines the Redis server options for ArgoCD.
:param pulumi.Input['ArgoCDSpecRepoArgs'] repo: Repo defines the repo server options for Argo CD.
:param pulumi.Input[str] repository_credentials: RepositoryCredentials are the Git pull credentials to configure Argo CD with upon creation of the cluster.
:param pulumi.Input[str] resource_customizations: ResourceCustomizations customizes resource behavior. Keys are in the form: group/Kind.
:param pulumi.Input[str] resource_exclusions: ResourceExclusions is used to completely ignore entire classes of resource group/kinds.
:param pulumi.Input[str] resource_inclusions: ResourceInclusions is used to only include specific group/kinds in the reconciliation process.
:param pulumi.Input['ArgoCDSpecServerArgs'] server: Server defines the options for the ArgoCD Server component.
:param pulumi.Input[bool] status_badge_enabled: StatusBadgeEnabled toggles application status badge feature.
:param pulumi.Input['ArgoCDSpecTlsArgs'] tls: TLS defines the TLS options for ArgoCD.
:param pulumi.Input[bool] users_anonymous_enabled: UsersAnonymousEnabled toggles anonymous user access. Anonymous users get the default role permissions specified in the argocd-rbac-cm ConfigMap.
:param pulumi.Input[str] version: Version is the tag to use with the ArgoCD container image for all ArgoCD components.
"""
if application_instance_label_key is not None:
pulumi.set(__self__, "application_instance_label_key", application_instance_label_key)
if config_management_plugins is not None:
pulumi.set(__self__, "config_management_plugins", config_management_plugins)
if controller is not None:
pulumi.set(__self__, "controller", controller)
if dex is not None:
pulumi.set(__self__, "dex", dex)
if ga_anonymize_users is not None:
pulumi.set(__self__, "ga_anonymize_users", ga_anonymize_users)
if ga_tracking_id is not None:
pulumi.set(__self__, "ga_tracking_id", ga_tracking_id)
if grafana is not None:
pulumi.set(__self__, "grafana", grafana)
if ha is not None:
pulumi.set(__self__, "ha", ha)
if help_chat_text is not None:
pulumi.set(__self__, "help_chat_text", help_chat_text)
if help_chat_url is not None:
pulumi.set(__self__, "help_chat_url", help_chat_url)
if image is not None:
pulumi.set(__self__, "image", image)
if import_ is not None:
pulumi.set(__self__, "import_", import_)
if initial_repositories is not None:
pulumi.set(__self__, "initial_repositories", initial_repositories)
if initial_ssh_known_hosts is not None:
pulumi.set(__self__, "initial_ssh_known_hosts", initial_ssh_known_hosts)
if kustomize_build_options is not None:
pulumi.set(__self__, "kustomize_build_options", kustomize_build_options)
if oidc_config is not None:
pulumi.set(__self__, "oidc_config", oidc_config)
if prometheus is not None:
pulumi.set(__self__, "prometheus", prometheus)
if rbac is not None:
pulumi.set(__self__, "rbac", rbac)
if redis is not None:
pulumi.set(__self__, "redis", redis)
if repo is not None:
pulumi.set(__self__, "repo", repo)
if repository_credentials is not None:
pulumi.set(__self__, "repository_credentials", repository_credentials)
if resource_customizations is not None:
pulumi.set(__self__, "resource_customizations", resource_customizations)
if resource_exclusions is not None:
pulumi.set(__self__, "resource_exclusions", resource_exclusions)
if resource_inclusions is not None:
pulumi.set(__self__, "resource_inclusions", resource_inclusions)
if server is not None:
pulumi.set(__self__, "server", server)
if status_badge_enabled is not None:
pulumi.set(__self__, "status_badge_enabled", status_badge_enabled)
if tls is not None:
pulumi.set(__self__, "tls", tls)
if users_anonymous_enabled is not None:
pulumi.set(__self__, "users_anonymous_enabled", users_anonymous_enabled)
if version is not None:
pulumi.set(__self__, "version", version)
@property
@pulumi.getter(name="applicationInstanceLabelKey")
def application_instance_label_key(self) -> Optional[pulumi.Input[str]]:
"""
ApplicationInstanceLabelKey is the key name where Argo CD injects the app name as a tracking label.
"""
return pulumi.get(self, "application_instance_label_key")
@application_instance_label_key.setter
def application_instance_label_key(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "application_instance_label_key", value)
@property
@pulumi.getter(name="configManagementPlugins")
def config_management_plugins(self) -> Optional[pulumi.Input[str]]:
"""
ConfigManagementPlugins is used to specify additional config management plugins.
"""
return pulumi.get(self, "config_management_plugins")
@config_management_plugins.setter
def config_management_plugins(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "config_management_plugins", value)
@property
@pulumi.getter
def controller(self) -> Optional[pulumi.Input['ArgoCDSpecControllerArgs']]:
"""
Controller defines the Application Controller options for ArgoCD.
"""
return pulumi.get(self, "controller")
@controller.setter
def controller(self, value: Optional[pulumi.Input['ArgoCDSpecControllerArgs']]):
pulumi.set(self, "controller", value)
@property
@pulumi.getter
def dex(self) -> Optional[pulumi.Input['ArgoCDSpecDexArgs']]:
"""
Dex defines the Dex server options for ArgoCD.
"""
return pulumi.get(self, "dex")
@dex.setter
def dex(self, value: Optional[pulumi.Input['ArgoCDSpecDexArgs']]):
pulumi.set(self, "dex", value)
@property
@pulumi.getter(name="gaAnonymizeUsers")
def ga_anonymize_users(self) -> Optional[pulumi.Input[bool]]:
"""
GAAnonymizeUsers toggles whether user IDs are hashed before being sent to Google Analytics.
"""
return pulumi.get(self, "ga_anonymize_users")
@ga_anonymize_users.setter
def ga_anonymize_users(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "ga_anonymize_users", value)
@property
@pulumi.getter(name="gaTrackingID")
def ga_tracking_id(self) -> Optional[pulumi.Input[str]]:
"""
GATrackingID is the Google Analytics tracking ID to use.
"""
return pulumi.get(self, "ga_tracking_id")
@ga_tracking_id.setter
def ga_tracking_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "ga_tracking_id", value)
@property
@pulumi.getter
def grafana(self) -> Optional[pulumi.Input['ArgoCDSpecGrafanaArgs']]:
"""
Grafana defines the Grafana server options for ArgoCD.
"""
return pulumi.get(self, "grafana")
@grafana.setter
def grafana(self, value: Optional[pulumi.Input['ArgoCDSpecGrafanaArgs']]):
pulumi.set(self, "grafana", value)
@property
@pulumi.getter
def ha(self) -> Optional[pulumi.Input['ArgoCDSpecHaArgs']]:
"""
HA options for High Availability support for the Redis component.
"""
return pulumi.get(self, "ha")
@ha.setter
def ha(self, value: Optional[pulumi.Input['ArgoCDSpecHaArgs']]):
pulumi.set(self, "ha", value)
@property
@pulumi.getter(name="helpChatText")
def help_chat_text(self) -> Optional[pulumi.Input[str]]:
"""
HelpChatText is the text for getting chat help; defaults to "Chat now!".
"""
return pulumi.get(self, "help_chat_text")
@help_chat_text.setter
def help_chat_text(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "help_chat_text", value)
@property
@pulumi.getter(name="helpChatURL")
def help_chat_url(self) -> Optional[pulumi.Input[str]]:
"""
HelpChatURL is the URL for getting chat help; this will typically be your Slack channel for support.
"""
return pulumi.get(self, "help_chat_url")
@help_chat_url.setter
def help_chat_url(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "help_chat_url", value)
@property
@pulumi.getter
def image(self) -> Optional[pulumi.Input[str]]:
"""
Image is the ArgoCD container image for all ArgoCD components.
"""
return pulumi.get(self, "image")
@image.setter
def image(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "image", value)
@property
@pulumi.getter(name="import")
def import_(self) -> Optional[pulumi.Input['ArgoCDSpecImportArgs']]:
"""
Import is the import/restore options for ArgoCD.
"""
return pulumi.get(self, "import_")
@import_.setter
def import_(self, value: Optional[pulumi.Input['ArgoCDSpecImportArgs']]):
pulumi.set(self, "import_", value)
@property
@pulumi.getter(name="initialRepositories")
def initial_repositories(self) -> Optional[pulumi.Input[str]]:
"""
InitialRepositories to configure Argo CD with upon creation of the cluster.
"""
return pulumi.get(self, "initial_repositories")
@initial_repositories.setter
def initial_repositories(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "initial_repositories", value)
@property
@pulumi.getter(name="initialSSHKnownHosts")
def initial_ssh_known_hosts(self) -> Optional[pulumi.Input['ArgoCDSpecInitialSSHKnownHostsArgs']]:
"""
InitialSSHKnownHosts defines the SSH known hosts data upon creation of the cluster for connecting Git repositories via SSH.
"""
return pulumi.get(self, "initial_ssh_known_hosts")
@initial_ssh_known_hosts.setter
def initial_ssh_known_hosts(self, value: Optional[pulumi.Input['ArgoCDSpecInitialSSHKnownHostsArgs']]):
pulumi.set(self, "initial_ssh_known_hosts", value)
@property
@pulumi.getter(name="kustomizeBuildOptions")
def kustomize_build_options(self) -> Optional[pulumi.Input[str]]:
"""
KustomizeBuildOptions is used to specify build options/parameters to use with `kustomize build`.
"""
return pulumi.get(self, "kustomize_build_options")
@kustomize_build_options.setter
def kustomize_build_options(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "kustomize_build_options", value)
@property
@pulumi.getter(name="oidcConfig")
def oidc_config(self) -> Optional[pulumi.Input[str]]:
"""
OIDCConfig is the OIDC configuration as an alternative to dex.
"""
return pulumi.get(self, "oidc_config")
@oidc_config.setter
def oidc_config(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "oidc_config", value)
@property
@pulumi.getter
def prometheus(self) -> Optional[pulumi.Input['ArgoCDSpecPrometheusArgs']]:
"""
Prometheus defines the Prometheus server options for ArgoCD.
"""
return pulumi.get(self, "prometheus")
@prometheus.setter
def prometheus(self, value: Optional[pulumi.Input['ArgoCDSpecPrometheusArgs']]):
pulumi.set(self, "prometheus", value)
@property
@pulumi.getter
def rbac(self) -> Optional[pulumi.Input['ArgoCDSpecRbacArgs']]:
"""
RBAC defines the RBAC configuration for Argo CD.
"""
return pulumi.get(self, "rbac")
@rbac.setter
def rbac(self, value: Optional[pulumi.Input['ArgoCDSpecRbacArgs']]):
pulumi.set(self, "rbac", value)
@property
@pulumi.getter
def redis(self) -> Optional[pulumi.Input['ArgoCDSpecRedisArgs']]:
"""
Redis defines the Redis server options for ArgoCD.
"""
return pulumi.get(self, "redis")
@redis.setter
def redis(self, value: Optional[pulumi.Input['ArgoCDSpecRedisArgs']]):
pulumi.set(self, "redis", value)
@property
@pulumi.getter
def repo(self) -> Optional[pulumi.Input['ArgoCDSpecRepoArgs']]:
"""
Repo defines the repo server options for Argo CD.
"""
return pulumi.get(self, "repo")
@repo.setter
def repo(self, value: Optional[pulumi.Input['ArgoCDSpecRepoArgs']]):
pulumi.set(self, "repo", value)
@property
@pulumi.getter(name="repositoryCredentials")
def repository_credentials(self) -> Optional[pulumi.Input[str]]:
"""
RepositoryCredentials are the Git pull credentials to configure Argo CD with upon creation of the cluster.
"""
return pulumi.get(self, "repository_credentials")
@repository_credentials.setter
def repository_credentials(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "repository_credentials", value)
@property
@pulumi.getter(name="resourceCustomizations")
def resource_customizations(self) -> Optional[pulumi.Input[str]]:
"""
ResourceCustomizations customizes resource behavior. Keys are in the form: group/Kind.
"""
return pulumi.get(self, "resource_customizations")
@resource_customizations.setter
def resource_customizations(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "resource_customizations", value)
@property
@pulumi.getter(name="resourceExclusions")
def resource_exclusions(self) -> Optional[pulumi.Input[str]]:
"""
ResourceExclusions is used to completely ignore entire classes of resource group/kinds.
"""
return pulumi.get(self, "resource_exclusions")
@resource_exclusions.setter
def resource_exclusions(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "resource_exclusions", value)
@property
@pulumi.getter(name="resourceInclusions")
def resource_inclusions(self) -> Optional[pulumi.Input[str]]:
"""
ResourceInclusions is used to only include specific group/kinds in the reconciliation process.
"""
return pulumi.get(self, "resource_inclusions")
@resource_inclusions.setter
def resource_inclusions(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "resource_inclusions", value)
@property
@pulumi.getter
def server(self) -> Optional[pulumi.Input['ArgoCDSpecServerArgs']]:
"""
Server defines the options for the ArgoCD Server component.
"""
return pulumi.get(self, "server")
@server.setter
def server(self, value: Optional[pulumi.Input['ArgoCDSpecServerArgs']]):
pulumi.set(self, "server", value)
@property
@pulumi.getter(name="statusBadgeEnabled")
def status_badge_enabled(self) -> Optional[pulumi.Input[bool]]:
"""
StatusBadgeEnabled toggles application status badge feature.
"""
return pulumi.get(self, "status_badge_enabled")
@status_badge_enabled.setter
def status_badge_enabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "status_badge_enabled", value)
@property
@pulumi.getter
def tls(self) -> Optional[pulumi.Input['ArgoCDSpecTlsArgs']]:
"""
TLS defines the TLS options for ArgoCD.
"""
return pulumi.get(self, "tls")
@tls.setter
def tls(self, value: Optional[pulumi.Input['ArgoCDSpecTlsArgs']]):
pulumi.set(self, "tls", value)
@property
@pulumi.getter(name="usersAnonymousEnabled")
def users_anonymous_enabled(self) -> Optional[pulumi.Input[bool]]:
"""
UsersAnonymousEnabled toggles anonymous user access. Anonymous users get the default role permissions specified in the argocd-rbac-cm ConfigMap.
"""
return pulumi.get(self, "users_anonymous_enabled")
@users_anonymous_enabled.setter
def users_anonymous_enabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "users_anonymous_enabled", value)
@property
@pulumi.getter
def version(self) -> Optional[pulumi.Input[str]]:
"""
Version is the tag to use with the ArgoCD container image for all ArgoCD components.
"""
return pulumi.get(self, "version")
@version.setter
def version(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "version", value)
@pulumi.input_type
class ArgoCDSpecControllerArgs:
def __init__(__self__, *,
app_sync: Optional[pulumi.Input[str]] = None,
processors: Optional[pulumi.Input['ArgoCDSpecControllerProcessorsArgs']] = None,
resources: Optional[pulumi.Input['ArgoCDSpecControllerResourcesArgs']] = None):
"""
Controller defines the Application Controller options for ArgoCD.
:param pulumi.Input[str] app_sync: AppSync controls the sync frequency; by default the ArgoCD controller polls Git every 3m.
Set this to a duration, e.g. 10m or 600s, to control the synchronisation frequency.
:param pulumi.Input['ArgoCDSpecControllerProcessorsArgs'] processors: Processors contains the options for the Application Controller processors.
:param pulumi.Input['ArgoCDSpecControllerResourcesArgs'] resources: Resources defines the Compute Resources required by the container for the Application Controller.
"""
if app_sync is not None:
pulumi.set(__self__, "app_sync", app_sync)
if processors is not None:
pulumi.set(__self__, "processors", processors)
if resources is not None:
pulumi.set(__self__, "resources", resources)
@property
@pulumi.getter(name="appSync")
def app_sync(self) -> Optional[pulumi.Input[str]]:
"""
AppSync controls the sync frequency; by default the ArgoCD controller polls Git every 3m.
Set this to a duration, e.g. 10m or 600s, to control the synchronisation frequency.
"""
return pulumi.get(self, "app_sync")
@app_sync.setter
def app_sync(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "app_sync", value)
@property
@pulumi.getter
def processors(self) -> Optional[pulumi.Input['ArgoCDSpecControllerProcessorsArgs']]:
"""
Processors contains the options for the Application Controller processors.
"""
return pulumi.get(self, "processors")
@processors.setter
def processors(self, value: Optional[pulumi.Input['ArgoCDSpecControllerProcessorsArgs']]):
pulumi.set(self, "processors", value)
@property
@pulumi.getter
def resources(self) -> Optional[pulumi.Input['ArgoCDSpecControllerResourcesArgs']]:
"""
Resources defines the Compute Resources required by the container for the Application Controller.
"""
return pulumi.get(self, "resources")
@resources.setter
def resources(self, value: Optional[pulumi.Input['ArgoCDSpecControllerResourcesArgs']]):
pulumi.set(self, "resources", value)
@pulumi.input_type
class ArgoCDSpecControllerProcessorsArgs:
def __init__(__self__, *,
operation: Optional[pulumi.Input[int]] = None,
status: Optional[pulumi.Input[int]] = None):
"""
Processors contains the options for the Application Controller processors.
:param pulumi.Input[int] operation: Operation is the number of application operation processors.
:param pulumi.Input[int] status: Status is the number of application status processors.
"""
if operation is not None:
pulumi.set(__self__, "operation", operation)
if status is not None:
pulumi.set(__self__, "status", status)
@property
@pulumi.getter
def operation(self) -> Optional[pulumi.Input[int]]:
"""
Operation is the number of application operation processors.
"""
return pulumi.get(self, "operation")
@operation.setter
def operation(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "operation", value)
@property
@pulumi.getter
def status(self) -> Optional[pulumi.Input[int]]:
"""
Status is the number of application status processors.
"""
return pulumi.get(self, "status")
@status.setter
def status(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "status", value)
@pulumi.input_type
class ArgoCDSpecControllerResourcesArgs:
def __init__(__self__, *,
limits: Optional[pulumi.Input[Mapping[str, pulumi.Input['ArgoCDSpecControllerResourcesLimitsArgs']]]] = None,
requests: Optional[pulumi.Input[Mapping[str, pulumi.Input['ArgoCDSpecControllerResourcesRequestsArgs']]]] = None):
"""
Resources defines the Compute Resources required by the container for the Application Controller.
:param pulumi.Input[Mapping[str, pulumi.Input['ArgoCDSpecControllerResourcesLimitsArgs']]] limits: Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
:param pulumi.Input[Mapping[str, pulumi.Input['ArgoCDSpecControllerResourcesRequestsArgs']]] requests: Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
"""
if limits is not None:
pulumi.set(__self__, "limits", limits)
if requests is not None:
pulumi.set(__self__, "requests", requests)
@property
@pulumi.getter
def limits(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ArgoCDSpecControllerResourcesLimitsArgs']]]]:
"""
Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
"""
return pulumi.get(self, "limits")
@limits.setter
def limits(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ArgoCDSpecControllerResourcesLimitsArgs']]]]):
pulumi.set(self, "limits", value)
@property
@pulumi.getter
def requests(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ArgoCDSpecControllerResourcesRequestsArgs']]]]:
"""
Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
"""
return pulumi.get(self, "requests")
@requests.setter
def requests(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ArgoCDSpecControllerResourcesRequestsArgs']]]]):
pulumi.set(self, "requests", value)
@pulumi.input_type
class ArgoCDSpecControllerResourcesLimitsArgs:
def __init__(__self__):
pass
@pulumi.input_type
class ArgoCDSpecControllerResourcesRequestsArgs:
def __init__(__self__):
pass
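The Resources docstrings above note that when Requests is omitted it defaults to Limits for any resource whose limit is explicitly specified. A self-contained sketch of that per-resource defaulting rule, using plain dicts and a hypothetical helper (a simplification: cluster LimitRange policies can also supply defaults):

```python
# Hypothetical helper, not part of the generated SDK: computes the effective
# requests map, letting an explicit limit fill in a missing request.
def effective_requests(limits, requests):
    """Start from the limits, then overlay any explicitly given requests."""
    merged = dict(limits or {})
    merged.update(requests or {})
    return merged

# A request is given only for cpu; the memory request falls back to the
# memory limit per the documented defaulting rule.
resources = {
    "limits": {"cpu": "500m", "memory": "256Mi"},
    "requests": {"cpu": "250m"},
}
effective = effective_requests(resources["limits"], resources["requests"])
```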
@pulumi.input_type
class ArgoCDSpecDexArgs:
def __init__(__self__, *,
config: Optional[pulumi.Input[str]] = None,
image: Optional[pulumi.Input[str]] = None,
open_shift_o_auth: Optional[pulumi.Input[bool]] = None,
resources: Optional[pulumi.Input['ArgoCDSpecDexResourcesArgs']] = None,
version: Optional[pulumi.Input[str]] = None):
"""
Dex defines the Dex server options for ArgoCD.
:param pulumi.Input[str] config: Config is the dex connector configuration.
:param pulumi.Input[str] image: Image is the Dex container image.
:param pulumi.Input[bool] open_shift_o_auth: OpenShiftOAuth enables OpenShift OAuth authentication for the Dex server.
:param pulumi.Input['ArgoCDSpecDexResourcesArgs'] resources: Resources defines the Compute Resources required by the container for Dex.
:param pulumi.Input[str] version: Version is the Dex container image tag.
"""
if config is not None:
pulumi.set(__self__, "config", config)
if image is not None:
pulumi.set(__self__, "image", image)
if open_shift_o_auth is not None:
pulumi.set(__self__, "open_shift_o_auth", open_shift_o_auth)
if resources is not None:
pulumi.set(__self__, "resources", resources)
if version is not None:
pulumi.set(__self__, "version", version)
@property
@pulumi.getter
def config(self) -> Optional[pulumi.Input[str]]:
"""
Config is the dex connector configuration.
"""
return pulumi.get(self, "config")
@config.setter
def config(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "config", value)
@property
@pulumi.getter
def image(self) -> Optional[pulumi.Input[str]]:
"""
Image is the Dex container image.
"""
return pulumi.get(self, "image")
@image.setter
def image(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "image", value)
@property
@pulumi.getter(name="openShiftOAuth")
def open_shift_o_auth(self) -> Optional[pulumi.Input[bool]]:
"""
OpenShiftOAuth enables OpenShift OAuth authentication for the Dex server.
"""
return pulumi.get(self, "open_shift_o_auth")
@open_shift_o_auth.setter
def open_shift_o_auth(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "open_shift_o_auth", value)
@property
@pulumi.getter
def resources(self) -> Optional[pulumi.Input['ArgoCDSpecDexResourcesArgs']]:
"""
Resources defines the Compute Resources required by the container for Dex.
"""
return pulumi.get(self, "resources")
@resources.setter
def resources(self, value: Optional[pulumi.Input['ArgoCDSpecDexResourcesArgs']]):
pulumi.set(self, "resources", value)
@property
@pulumi.getter
def version(self) -> Optional[pulumi.Input[str]]:
"""
Version is the Dex container image tag.
"""
return pulumi.get(self, "version")
@version.setter
def version(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "version", value)
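# Illustrative sketch (not part of the generated SDK): constructing the Dex
# options above as part of an ArgoCD spec. The flag and argument values shown
# here are placeholder assumptions, not defaults provided by this module.
#
#     dex_args = ArgoCDSpecDexArgs(
#         open_shift_o_auth=True,
#         resources=ArgoCDSpecDexResourcesArgs(),
#     )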
@pulumi.input_type
class ArgoCDSpecDexResourcesArgs:
def __init__(__self__, *,
limits: Optional[pulumi.Input[Mapping[str, pulumi.Input['ArgoCDSpecDexResourcesLimitsArgs']]]] = None,
requests: Optional[pulumi.Input[Mapping[str, pulumi.Input['ArgoCDSpecDexResourcesRequestsArgs']]]] = None):
"""
Resources defines the Compute Resources required by the container for Dex.
:param pulumi.Input[Mapping[str, pulumi.Input['ArgoCDSpecDexResourcesLimitsArgs']]] limits: Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
:param pulumi.Input[Mapping[str, pulumi.Input['ArgoCDSpecDexResourcesRequestsArgs']]] requests: Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
"""
if limits is not None:
pulumi.set(__self__, "limits", limits)
if requests is not None:
pulumi.set(__self__, "requests", requests)
@property
@pulumi.getter
def limits(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ArgoCDSpecDexResourcesLimitsArgs']]]]:
"""
Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
"""
return pulumi.get(self, "limits")
@limits.setter
def limits(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ArgoCDSpecDexResourcesLimitsArgs']]]]):
pulumi.set(self, "limits", value)
@property
@pulumi.getter
def requests(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ArgoCDSpecDexResourcesRequestsArgs']]]]:
"""
Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
"""
return pulumi.get(self, "requests")
@requests.setter
def requests(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ArgoCDSpecDexResourcesRequestsArgs']]]]):
pulumi.set(self, "requests", value)
@pulumi.input_type
class ArgoCDSpecDexResourcesLimitsArgs:
def __init__(__self__):
pass
@pulumi.input_type
class ArgoCDSpecDexResourcesRequestsArgs:
def __init__(__self__):
pass
@pulumi.input_type
class ArgoCDSpecGrafanaArgs:
def __init__(__self__, *,
enabled: pulumi.Input[bool],
host: Optional[pulumi.Input[str]] = None,
image: Optional[pulumi.Input[str]] = None,
ingress: Optional[pulumi.Input['ArgoCDSpecGrafanaIngressArgs']] = None,
resources: Optional[pulumi.Input['ArgoCDSpecGrafanaResourcesArgs']] = None,
route: Optional[pulumi.Input['ArgoCDSpecGrafanaRouteArgs']] = None,
size: Optional[pulumi.Input[int]] = None,
version: Optional[pulumi.Input[str]] = None):
"""
Grafana defines the Grafana server options for ArgoCD.
:param pulumi.Input[bool] enabled: Enabled will toggle Grafana support globally for ArgoCD.
:param pulumi.Input[str] host: Host is the hostname to use for Ingress/Route resources.
:param pulumi.Input[str] image: Image is the Grafana container image.
:param pulumi.Input['ArgoCDSpecGrafanaIngressArgs'] ingress: Ingress defines the desired state for an Ingress for the Grafana component.
:param pulumi.Input['ArgoCDSpecGrafanaResourcesArgs'] resources: Resources defines the Compute Resources required by the container for Grafana.
:param pulumi.Input['ArgoCDSpecGrafanaRouteArgs'] route: Route defines the desired state for an OpenShift Route for the Grafana component.
:param pulumi.Input[int] size: Size is the replica count for the Grafana Deployment.
:param pulumi.Input[str] version: Version is the Grafana container image tag.
"""
pulumi.set(__self__, "enabled", enabled)
if host is not None:
pulumi.set(__self__, "host", host)
if image is not None:
pulumi.set(__self__, "image", image)
if ingress is not None:
pulumi.set(__self__, "ingress", ingress)
if resources is not None:
pulumi.set(__self__, "resources", resources)
if route is not None:
pulumi.set(__self__, "route", route)
if size is not None:
pulumi.set(__self__, "size", size)
if version is not None:
pulumi.set(__self__, "version", version)
@property
@pulumi.getter
def enabled(self) -> pulumi.Input[bool]:
"""
Enabled will toggle Grafana support globally for ArgoCD.
"""
return pulumi.get(self, "enabled")
@enabled.setter
def enabled(self, value: pulumi.Input[bool]):
pulumi.set(self, "enabled", value)
@property
@pulumi.getter
def host(self) -> Optional[pulumi.Input[str]]:
"""
Host is the hostname to use for Ingress/Route resources.
"""
return pulumi.get(self, "host")
@host.setter
def host(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "host", value)
@property
@pulumi.getter
def image(self) -> Optional[pulumi.Input[str]]:
"""
Image is the Grafana container image.
"""
return pulumi.get(self, "image")
@image.setter
def image(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "image", value)
@property
@pulumi.getter
def ingress(self) -> Optional[pulumi.Input['ArgoCDSpecGrafanaIngressArgs']]:
"""
Ingress defines the desired state for an Ingress for the Grafana component.
"""
return pulumi.get(self, "ingress")
@ingress.setter
def ingress(self, value: Optional[pulumi.Input['ArgoCDSpecGrafanaIngressArgs']]):
pulumi.set(self, "ingress", value)
@property
@pulumi.getter
def resources(self) -> Optional[pulumi.Input['ArgoCDSpecGrafanaResourcesArgs']]:
"""
Resources defines the Compute Resources required by the container for Grafana.
"""
return pulumi.get(self, "resources")
@resources.setter
def resources(self, value: Optional[pulumi.Input['ArgoCDSpecGrafanaResourcesArgs']]):
pulumi.set(self, "resources", value)
@property
@pulumi.getter
def route(self) -> Optional[pulumi.Input['ArgoCDSpecGrafanaRouteArgs']]:
"""
Route defines the desired state for an OpenShift Route for the Grafana component.
"""
return pulumi.get(self, "route")
@route.setter
def route(self, value: Optional[pulumi.Input['ArgoCDSpecGrafanaRouteArgs']]):
pulumi.set(self, "route", value)
@property
@pulumi.getter
def size(self) -> Optional[pulumi.Input[int]]:
"""
Size is the replica count for the Grafana Deployment.
"""
return pulumi.get(self, "size")
@size.setter
def size(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "size", value)
@property
@pulumi.getter
def version(self) -> Optional[pulumi.Input[str]]:
"""
Version is the Grafana container image tag.
"""
return pulumi.get(self, "version")
@version.setter
def version(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "version", value)
@pulumi.input_type
class ArgoCDSpecGrafanaIngressArgs:
def __init__(__self__, *,
enabled: pulumi.Input[bool],
annotations: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
path: Optional[pulumi.Input[str]] = None,
tls: Optional[pulumi.Input[Sequence[pulumi.Input['ArgoCDSpecGrafanaIngressTlsArgs']]]] = None):
"""
Ingress defines the desired state for an Ingress for the Grafana component.
:param pulumi.Input[bool] enabled: Enabled will toggle the creation of the Ingress.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] annotations: Annotations is the map of annotations to apply to the Ingress.
:param pulumi.Input[str] path: Path used for the Ingress resource.
:param pulumi.Input[Sequence[pulumi.Input['ArgoCDSpecGrafanaIngressTlsArgs']]] tls: TLS configuration. Currently the Ingress only supports a single TLS port, 443. If multiple members of this list specify different hosts, they will be multiplexed on the same port according to the hostname specified through the SNI TLS extension, if the ingress controller fulfilling the ingress supports SNI.
"""
pulumi.set(__self__, "enabled", enabled)
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if path is not None:
pulumi.set(__self__, "path", path)
if tls is not None:
pulumi.set(__self__, "tls", tls)
@property
@pulumi.getter
def enabled(self) -> pulumi.Input[bool]:
"""
Enabled will toggle the creation of the Ingress.
"""
return pulumi.get(self, "enabled")
@enabled.setter
def enabled(self, value: pulumi.Input[bool]):
pulumi.set(self, "enabled", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
Annotations is the map of annotations to apply to the Ingress.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def path(self) -> Optional[pulumi.Input[str]]:
"""
Path used for the Ingress resource.
"""
return pulumi.get(self, "path")
@path.setter
def path(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "path", value)
@property
@pulumi.getter
def tls(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ArgoCDSpecGrafanaIngressTlsArgs']]]]:
"""
TLS configuration. Currently the Ingress only supports a single TLS port, 443. If multiple members of this list specify different hosts, they will be multiplexed on the same port according to the hostname specified through the SNI TLS extension, if the ingress controller fulfilling the ingress supports SNI.
"""
return pulumi.get(self, "tls")
@tls.setter
def tls(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ArgoCDSpecGrafanaIngressTlsArgs']]]]):
pulumi.set(self, "tls", value)
@pulumi.input_type
class ArgoCDSpecGrafanaIngressTlsArgs:
def __init__(__self__, *,
hosts: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
secret_name: Optional[pulumi.Input[str]] = None):
"""
IngressTLS describes the transport layer security associated with an Ingress.
:param pulumi.Input[Sequence[pulumi.Input[str]]] hosts: Hosts are a list of hosts included in the TLS certificate. The values in this list must match the name/s used in the tlsSecret. Defaults to the wildcard host setting for the loadbalancer controller fulfilling this Ingress, if left unspecified.
:param pulumi.Input[str] secret_name: SecretName is the name of the secret used to terminate SSL traffic on 443. Field is left optional to allow SSL routing based on SNI hostname alone. If the SNI host in a listener conflicts with the "Host" header field used by an IngressRule, the SNI host is used for termination and value of the Host header is used for routing.
"""
if hosts is not None:
pulumi.set(__self__, "hosts", hosts)
if secret_name is not None:
pulumi.set(__self__, "secret_name", secret_name)
@property
@pulumi.getter
def hosts(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
Hosts are a list of hosts included in the TLS certificate. The values in this list must match the name/s used in the tlsSecret. Defaults to the wildcard host setting for the loadbalancer controller fulfilling this Ingress, if left unspecified.
"""
return pulumi.get(self, "hosts")
@hosts.setter
def hosts(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "hosts", value)
@property
@pulumi.getter(name="secretName")
def secret_name(self) -> Optional[pulumi.Input[str]]:
"""
SecretName is the name of the secret used to terminate SSL traffic on 443. Field is left optional to allow SSL routing based on SNI hostname alone. If the SNI host in a listener conflicts with the "Host" header field used by an IngressRule, the SNI host is used for termination and value of the Host header is used for routing.
"""
return pulumi.get(self, "secret_name")
@secret_name.setter
def secret_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "secret_name", value)
@pulumi.input_type
class ArgoCDSpecGrafanaResourcesArgs:
def __init__(__self__, *,
limits: Optional[pulumi.Input[Mapping[str, pulumi.Input['ArgoCDSpecGrafanaResourcesLimitsArgs']]]] = None,
requests: Optional[pulumi.Input[Mapping[str, pulumi.Input['ArgoCDSpecGrafanaResourcesRequestsArgs']]]] = None):
"""
Resources defines the Compute Resources required by the container for Grafana.
:param pulumi.Input[Mapping[str, pulumi.Input['ArgoCDSpecGrafanaResourcesLimitsArgs']]] limits: Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
:param pulumi.Input[Mapping[str, pulumi.Input['ArgoCDSpecGrafanaResourcesRequestsArgs']]] requests: Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
"""
if limits is not None:
pulumi.set(__self__, "limits", limits)
if requests is not None:
pulumi.set(__self__, "requests", requests)
@property
@pulumi.getter
def limits(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ArgoCDSpecGrafanaResourcesLimitsArgs']]]]:
"""
Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
"""
return pulumi.get(self, "limits")
@limits.setter
def limits(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ArgoCDSpecGrafanaResourcesLimitsArgs']]]]):
pulumi.set(self, "limits", value)
@property
@pulumi.getter
def requests(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ArgoCDSpecGrafanaResourcesRequestsArgs']]]]:
"""
Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
"""
return pulumi.get(self, "requests")
@requests.setter
def requests(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ArgoCDSpecGrafanaResourcesRequestsArgs']]]]):
pulumi.set(self, "requests", value)
@pulumi.input_type
class ArgoCDSpecGrafanaResourcesLimitsArgs:
def __init__(__self__):
pass
@pulumi.input_type
class ArgoCDSpecGrafanaResourcesRequestsArgs:
def __init__(__self__):
pass
@pulumi.input_type
class ArgoCDSpecGrafanaRouteArgs:
def __init__(__self__, *,
enabled: pulumi.Input[bool],
annotations: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
path: Optional[pulumi.Input[str]] = None,
tls: Optional[pulumi.Input['ArgoCDSpecGrafanaRouteTlsArgs']] = None,
wildcard_policy: Optional[pulumi.Input[str]] = None):
"""
Route defines the desired state for an OpenShift Route for the Grafana component.
:param pulumi.Input[bool] enabled: Enabled will toggle the creation of the OpenShift Route.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] annotations: Annotations is the map of annotations to use for the Route resource.
:param pulumi.Input[str] path: Path that the router watches for, to route traffic to the service.
:param pulumi.Input['ArgoCDSpecGrafanaRouteTlsArgs'] tls: TLS provides the ability to configure certificates and termination for the Route.
:param pulumi.Input[str] wildcard_policy: WildcardPolicy if any for the route. Currently only 'Subdomain' or 'None' is allowed.
"""
pulumi.set(__self__, "enabled", enabled)
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if path is not None:
pulumi.set(__self__, "path", path)
if tls is not None:
pulumi.set(__self__, "tls", tls)
if wildcard_policy is not None:
pulumi.set(__self__, "wildcard_policy", wildcard_policy)
@property
@pulumi.getter
def enabled(self) -> pulumi.Input[bool]:
"""
Enabled will toggle the creation of the OpenShift Route.
"""
return pulumi.get(self, "enabled")
@enabled.setter
def enabled(self, value: pulumi.Input[bool]):
pulumi.set(self, "enabled", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
Annotations is the map of annotations to use for the Route resource.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def path(self) -> Optional[pulumi.Input[str]]:
"""
Path that the router watches for, to route traffic to the service.
"""
return pulumi.get(self, "path")
@path.setter
def path(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "path", value)
@property
@pulumi.getter
def tls(self) -> Optional[pulumi.Input['ArgoCDSpecGrafanaRouteTlsArgs']]:
"""
TLS provides the ability to configure certificates and termination for the Route.
"""
return pulumi.get(self, "tls")
@tls.setter
def tls(self, value: Optional[pulumi.Input['ArgoCDSpecGrafanaRouteTlsArgs']]):
pulumi.set(self, "tls", value)
@property
@pulumi.getter(name="wildcardPolicy")
def wildcard_policy(self) -> Optional[pulumi.Input[str]]:
"""
WildcardPolicy if any for the route. Currently only 'Subdomain' or 'None' is allowed.
"""
return pulumi.get(self, "wildcard_policy")
@wildcard_policy.setter
def wildcard_policy(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "wildcard_policy", value)
@pulumi.input_type
class ArgoCDSpecGrafanaRouteTlsArgs:
def __init__(__self__, *,
termination: pulumi.Input[str],
ca_certificate: Optional[pulumi.Input[str]] = None,
certificate: Optional[pulumi.Input[str]] = None,
destination_ca_certificate: Optional[pulumi.Input[str]] = None,
insecure_edge_termination_policy: Optional[pulumi.Input[str]] = None,
key: Optional[pulumi.Input[str]] = None):
"""
TLS provides the ability to configure certificates and termination for the Route.
:param pulumi.Input[str] termination: termination indicates termination type.
:param pulumi.Input[str] ca_certificate: caCertificate provides the certificate authority certificate contents.
:param pulumi.Input[str] certificate: certificate provides the certificate contents.
:param pulumi.Input[str] destination_ca_certificate: destinationCACertificate provides the contents of the ca certificate of the final destination. When using reencrypt termination this file should be provided in order to have routers use it for health checks on the secure connection. If this field is not specified, the router may provide its own destination CA and perform hostname validation using the short service name (service.namespace.svc), which allows infrastructure generated certificates to automatically verify.
:param pulumi.Input[str] insecure_edge_termination_policy: insecureEdgeTerminationPolicy indicates the desired behavior for insecure connections to a route. While each router may make its own decisions on which ports to expose, this is normally port 80.
* Allow - traffic is sent to the server on the insecure port (default)
* Disable - no traffic is allowed on the insecure port.
* Redirect - clients are redirected to the secure port.
:param pulumi.Input[str] key: key provides the key file contents.
"""
pulumi.set(__self__, "termination", termination)
if ca_certificate is not None:
pulumi.set(__self__, "ca_certificate", ca_certificate)
if certificate is not None:
pulumi.set(__self__, "certificate", certificate)
if destination_ca_certificate is not None:
pulumi.set(__self__, "destination_ca_certificate", destination_ca_certificate)
if insecure_edge_termination_policy is not None:
pulumi.set(__self__, "insecure_edge_termination_policy", insecure_edge_termination_policy)
if key is not None:
pulumi.set(__self__, "key", key)
@property
@pulumi.getter
def termination(self) -> pulumi.Input[str]:
"""
termination indicates termination type.
"""
return pulumi.get(self, "termination")
@termination.setter
def termination(self, value: pulumi.Input[str]):
pulumi.set(self, "termination", value)
@property
@pulumi.getter(name="caCertificate")
def ca_certificate(self) -> Optional[pulumi.Input[str]]:
"""
caCertificate provides the certificate authority certificate contents.
"""
return pulumi.get(self, "ca_certificate")
@ca_certificate.setter
def ca_certificate(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "ca_certificate", value)
@property
@pulumi.getter
def certificate(self) -> Optional[pulumi.Input[str]]:
"""
certificate provides the certificate contents.
"""
return pulumi.get(self, "certificate")
@certificate.setter
def certificate(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "certificate", value)
@property
@pulumi.getter(name="destinationCACertificate")
def destination_ca_certificate(self) -> Optional[pulumi.Input[str]]:
"""
destinationCACertificate provides the contents of the ca certificate of the final destination. When using reencrypt termination this file should be provided in order to have routers use it for health checks on the secure connection. If this field is not specified, the router may provide its own destination CA and perform hostname validation using the short service name (service.namespace.svc), which allows infrastructure generated certificates to automatically verify.
"""
return pulumi.get(self, "destination_ca_certificate")
@destination_ca_certificate.setter
def destination_ca_certificate(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "destination_ca_certificate", value)
@property
@pulumi.getter(name="insecureEdgeTerminationPolicy")
def insecure_edge_termination_policy(self) -> Optional[pulumi.Input[str]]:
"""
insecureEdgeTerminationPolicy indicates the desired behavior for insecure connections to a route. While each router may make its own decisions on which ports to expose, this is normally port 80.
* Allow - traffic is sent to the server on the insecure port (default)
* Disable - no traffic is allowed on the insecure port.
* Redirect - clients are redirected to the secure port.
"""
return pulumi.get(self, "insecure_edge_termination_policy")
@insecure_edge_termination_policy.setter
def insecure_edge_termination_policy(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "insecure_edge_termination_policy", value)
@property
@pulumi.getter
def key(self) -> Optional[pulumi.Input[str]]:
"""
key provides the key file contents.
"""
return pulumi.get(self, "key")
@key.setter
def key(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "key", value)
@pulumi.input_type
class ArgoCDSpecHaArgs:
def __init__(__self__, *,
enabled: pulumi.Input[bool],
redis_proxy_image: Optional[pulumi.Input[str]] = None,
redis_proxy_version: Optional[pulumi.Input[str]] = None):
"""
HA options for High Availability support for the Redis component.
:param pulumi.Input[bool] enabled: Enabled will toggle HA support globally for Argo CD.
:param pulumi.Input[str] redis_proxy_image: RedisProxyImage is the Redis HAProxy container image.
:param pulumi.Input[str] redis_proxy_version: RedisProxyVersion is the Redis HAProxy container image tag.
"""
pulumi.set(__self__, "enabled", enabled)
if redis_proxy_image is not None:
pulumi.set(__self__, "redis_proxy_image", redis_proxy_image)
if redis_proxy_version is not None:
pulumi.set(__self__, "redis_proxy_version", redis_proxy_version)
@property
@pulumi.getter
def enabled(self) -> pulumi.Input[bool]:
"""
Enabled will toggle HA support globally for Argo CD.
"""
return pulumi.get(self, "enabled")
@enabled.setter
def enabled(self, value: pulumi.Input[bool]):
pulumi.set(self, "enabled", value)
@property
@pulumi.getter(name="redisProxyImage")
def redis_proxy_image(self) -> Optional[pulumi.Input[str]]:
"""
RedisProxyImage is the Redis HAProxy container image.
"""
return pulumi.get(self, "redis_proxy_image")
@redis_proxy_image.setter
def redis_proxy_image(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "redis_proxy_image", value)
@property
@pulumi.getter(name="redisProxyVersion")
def redis_proxy_version(self) -> Optional[pulumi.Input[str]]:
"""
RedisProxyVersion is the Redis HAProxy container image tag.
"""
return pulumi.get(self, "redis_proxy_version")
@redis_proxy_version.setter
def redis_proxy_version(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "redis_proxy_version", value)
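# Illustrative sketch (not part of the generated SDK): enabling Redis HA via
# the input type above. Only `enabled` is required; the proxy image/tag fields
# fall back to operator defaults when omitted.
#
#     ha_args = ArgoCDSpecHaArgs(enabled=True)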
@pulumi.input_type
class ArgoCDSpecImportArgs:
def __init__(__self__, *,
name: pulumi.Input[str],
namespace: Optional[pulumi.Input[str]] = None):
"""
Import is the import/restore options for ArgoCD.
:param pulumi.Input[str] name: Name of an ArgoCDExport from which to import data.
:param pulumi.Input[str] namespace: Namespace for the ArgoCDExport, defaults to the same namespace as the ArgoCD.
"""
pulumi.set(__self__, "name", name)
if namespace is not None:
pulumi.set(__self__, "namespace", namespace)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
"""
Name of an ArgoCDExport from which to import data.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def namespace(self) -> Optional[pulumi.Input[str]]:
"""
Namespace for the ArgoCDExport, defaults to the same namespace as the ArgoCD.
"""
return pulumi.get(self, "namespace")
@namespace.setter
def namespace(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "namespace", value)
@pulumi.input_type
class ArgoCDSpecInitialSSHKnownHostsArgs:
def __init__(__self__, *,
excludedefaulthosts: Optional[pulumi.Input[bool]] = None,
keys: Optional[pulumi.Input[str]] = None):
"""
InitialSSHKnownHosts defines the SSH known hosts data upon creation of the cluster for connecting Git repositories via SSH.
:param pulumi.Input[bool] excludedefaulthosts: ExcludeDefaultHosts describes whether you would like to exclude the default list of SSH Known Hosts provided by ArgoCD.
:param pulumi.Input[str] keys: Keys describes a custom set of SSH Known Hosts that you would like to have included in your ArgoCD server.
"""
if excludedefaulthosts is not None:
pulumi.set(__self__, "excludedefaulthosts", excludedefaulthosts)
if keys is not None:
pulumi.set(__self__, "keys", keys)
@property
@pulumi.getter
def excludedefaulthosts(self) -> Optional[pulumi.Input[bool]]:
"""
ExcludeDefaultHosts describes whether you would like to exclude the default list of SSH Known Hosts provided by ArgoCD.
"""
return pulumi.get(self, "excludedefaulthosts")
@excludedefaulthosts.setter
def excludedefaulthosts(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "excludedefaulthosts", value)
@property
@pulumi.getter
def keys(self) -> Optional[pulumi.Input[str]]:
"""
Keys describes a custom set of SSH Known Hosts that you would like to have included in your ArgoCD server.
"""
return pulumi.get(self, "keys")
@keys.setter
def keys(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "keys", value)
@pulumi.input_type
class ArgoCDSpecPrometheusArgs:
def __init__(__self__, *,
enabled: pulumi.Input[bool],
host: Optional[pulumi.Input[str]] = None,
ingress: Optional[pulumi.Input['ArgoCDSpecPrometheusIngressArgs']] = None,
route: Optional[pulumi.Input['ArgoCDSpecPrometheusRouteArgs']] = None,
size: Optional[pulumi.Input[int]] = None):
"""
Prometheus defines the Prometheus server options for ArgoCD.
:param pulumi.Input[bool] enabled: Enabled will toggle Prometheus support globally for ArgoCD.
:param pulumi.Input[str] host: Host is the hostname to use for Ingress/Route resources.
:param pulumi.Input['ArgoCDSpecPrometheusIngressArgs'] ingress: Ingress defines the desired state for an Ingress for the Prometheus component.
:param pulumi.Input['ArgoCDSpecPrometheusRouteArgs'] route: Route defines the desired state for an OpenShift Route for the Prometheus component.
:param pulumi.Input[int] size: Size is the replica count for the Prometheus StatefulSet.
"""
pulumi.set(__self__, "enabled", enabled)
if host is not None:
pulumi.set(__self__, "host", host)
if ingress is not None:
pulumi.set(__self__, "ingress", ingress)
if route is not None:
pulumi.set(__self__, "route", route)
if size is not None:
pulumi.set(__self__, "size", size)
@property
@pulumi.getter
def enabled(self) -> pulumi.Input[bool]:
"""
Enabled will toggle Prometheus support globally for ArgoCD.
"""
return pulumi.get(self, "enabled")
@enabled.setter
def enabled(self, value: pulumi.Input[bool]):
pulumi.set(self, "enabled", value)
@property
@pulumi.getter
def host(self) -> Optional[pulumi.Input[str]]:
"""
Host is the hostname to use for Ingress/Route resources.
"""
return pulumi.get(self, "host")
@host.setter
def host(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "host", value)
@property
@pulumi.getter
def ingress(self) -> Optional[pulumi.Input['ArgoCDSpecPrometheusIngressArgs']]:
"""
Ingress defines the desired state for an Ingress for the Prometheus component.
"""
return pulumi.get(self, "ingress")
@ingress.setter
def ingress(self, value: Optional[pulumi.Input['ArgoCDSpecPrometheusIngressArgs']]):
pulumi.set(self, "ingress", value)
@property
@pulumi.getter
def route(self) -> Optional[pulumi.Input['ArgoCDSpecPrometheusRouteArgs']]:
"""
Route defines the desired state for an OpenShift Route for the Prometheus component.
"""
return pulumi.get(self, "route")
@route.setter
def route(self, value: Optional[pulumi.Input['ArgoCDSpecPrometheusRouteArgs']]):
pulumi.set(self, "route", value)
@property
@pulumi.getter
def size(self) -> Optional[pulumi.Input[int]]:
"""
Size is the replica count for the Prometheus StatefulSet.
"""
return pulumi.get(self, "size")
@size.setter
def size(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "size", value)
@pulumi.input_type
class ArgoCDSpecPrometheusIngressArgs:
def __init__(__self__, *,
enabled: pulumi.Input[bool],
annotations: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
path: Optional[pulumi.Input[str]] = None,
tls: Optional[pulumi.Input[Sequence[pulumi.Input['ArgoCDSpecPrometheusIngressTlsArgs']]]] = None):
"""
Ingress defines the desired state for an Ingress for the Prometheus component.
:param pulumi.Input[bool] enabled: Enabled will toggle the creation of the Ingress.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] annotations: Annotations is the map of annotations to apply to the Ingress.
:param pulumi.Input[str] path: Path used for the Ingress resource.
:param pulumi.Input[Sequence[pulumi.Input['ArgoCDSpecPrometheusIngressTlsArgs']]] tls: TLS configuration. Currently the Ingress only supports a single TLS port, 443. If multiple members of this list specify different hosts, they will be multiplexed on the same port according to the hostname specified through the SNI TLS extension, if the ingress controller fulfilling the ingress supports SNI.
"""
pulumi.set(__self__, "enabled", enabled)
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if path is not None:
pulumi.set(__self__, "path", path)
if tls is not None:
pulumi.set(__self__, "tls", tls)
@property
@pulumi.getter
def enabled(self) -> pulumi.Input[bool]:
"""
Enabled will toggle the creation of the Ingress.
"""
return pulumi.get(self, "enabled")
@enabled.setter
def enabled(self, value: pulumi.Input[bool]):
pulumi.set(self, "enabled", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
Annotations is the map of annotations to apply to the Ingress.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def path(self) -> Optional[pulumi.Input[str]]:
"""
Path used for the Ingress resource.
"""
return pulumi.get(self, "path")
@path.setter
def path(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "path", value)
@property
@pulumi.getter
def tls(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ArgoCDSpecPrometheusIngressTlsArgs']]]]:
"""
TLS configuration. Currently the Ingress only supports a single TLS port, 443. If multiple members of this list specify different hosts, they will be multiplexed on the same port according to the hostname specified through the SNI TLS extension, if the ingress controller fulfilling the ingress supports SNI.
"""
return pulumi.get(self, "tls")
@tls.setter
def tls(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ArgoCDSpecPrometheusIngressTlsArgs']]]]):
pulumi.set(self, "tls", value)
@pulumi.input_type
class ArgoCDSpecPrometheusIngressTlsArgs:
def __init__(__self__, *,
hosts: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
secret_name: Optional[pulumi.Input[str]] = None):
"""
IngressTLS describes the transport layer security associated with an Ingress.
:param pulumi.Input[Sequence[pulumi.Input[str]]] hosts: Hosts are a list of hosts included in the TLS certificate. The values in this list must match the name/s used in the tlsSecret. Defaults to the wildcard host setting for the loadbalancer controller fulfilling this Ingress, if left unspecified.
:param pulumi.Input[str] secret_name: SecretName is the name of the secret used to terminate SSL traffic on 443. Field is left optional to allow SSL routing based on SNI hostname alone. If the SNI host in a listener conflicts with the "Host" header field used by an IngressRule, the SNI host is used for termination and value of the Host header is used for routing.
"""
if hosts is not None:
pulumi.set(__self__, "hosts", hosts)
if secret_name is not None:
pulumi.set(__self__, "secret_name", secret_name)
@property
@pulumi.getter
def hosts(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
Hosts are a list of hosts included in the TLS certificate. The values in this list must match the name/s used in the tlsSecret. Defaults to the wildcard host setting for the loadbalancer controller fulfilling this Ingress, if left unspecified.
"""
return pulumi.get(self, "hosts")
@hosts.setter
def hosts(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "hosts", value)
@property
@pulumi.getter(name="secretName")
def secret_name(self) -> Optional[pulumi.Input[str]]:
"""
SecretName is the name of the secret used to terminate SSL traffic on 443. Field is left optional to allow SSL routing based on SNI hostname alone. If the SNI host in a listener conflicts with the "Host" header field used by an IngressRule, the SNI host is used for termination and value of the Host header is used for routing.
"""
return pulumi.get(self, "secret_name")
@secret_name.setter
def secret_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "secret_name", value)
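# Illustrative sketch (not part of the generated SDK): enabling a TLS-terminated
# Ingress for the Prometheus component might look like the following; the
# hostname and secret name are placeholders.
#
#     prometheus_ingress = ArgoCDSpecPrometheusIngressArgs(
#         enabled=True,
#         path="/",
#         tls=[ArgoCDSpecPrometheusIngressTlsArgs(
#             hosts=["prometheus.example.com"],
#             secret_name="prometheus-tls",  # must match the host names in the certificate
#         )],
#     )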
@pulumi.input_type
class ArgoCDSpecPrometheusRouteArgs:
def __init__(__self__, *,
enabled: pulumi.Input[bool],
annotations: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
path: Optional[pulumi.Input[str]] = None,
tls: Optional[pulumi.Input['ArgoCDSpecPrometheusRouteTlsArgs']] = None,
wildcard_policy: Optional[pulumi.Input[str]] = None):
"""
Route defines the desired state for an OpenShift Route for the Prometheus component.
:param pulumi.Input[bool] enabled: Enabled will toggle the creation of the OpenShift Route.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] annotations: Annotations is the map of annotations to use for the Route resource.
:param pulumi.Input[str] path: Path the router watches for, to route traffic to the service.
:param pulumi.Input['ArgoCDSpecPrometheusRouteTlsArgs'] tls: TLS provides the ability to configure certificates and termination for the Route.
:param pulumi.Input[str] wildcard_policy: WildcardPolicy if any for the route. Currently only 'Subdomain' or 'None' is allowed.
"""
pulumi.set(__self__, "enabled", enabled)
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if path is not None:
pulumi.set(__self__, "path", path)
if tls is not None:
pulumi.set(__self__, "tls", tls)
if wildcard_policy is not None:
pulumi.set(__self__, "wildcard_policy", wildcard_policy)
@property
@pulumi.getter
def enabled(self) -> pulumi.Input[bool]:
"""
Enabled will toggle the creation of the OpenShift Route.
"""
return pulumi.get(self, "enabled")
@enabled.setter
def enabled(self, value: pulumi.Input[bool]):
pulumi.set(self, "enabled", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
Annotations is the map of annotations to use for the Route resource.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def path(self) -> Optional[pulumi.Input[str]]:
"""
Path the router watches for, to route traffic to the service.
"""
return pulumi.get(self, "path")
@path.setter
def path(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "path", value)
@property
@pulumi.getter
def tls(self) -> Optional[pulumi.Input['ArgoCDSpecPrometheusRouteTlsArgs']]:
"""
TLS provides the ability to configure certificates and termination for the Route.
"""
return pulumi.get(self, "tls")
@tls.setter
def tls(self, value: Optional[pulumi.Input['ArgoCDSpecPrometheusRouteTlsArgs']]):
pulumi.set(self, "tls", value)
@property
@pulumi.getter(name="wildcardPolicy")
def wildcard_policy(self) -> Optional[pulumi.Input[str]]:
"""
WildcardPolicy if any for the route. Currently only 'Subdomain' or 'None' is allowed.
"""
return pulumi.get(self, "wildcard_policy")
@wildcard_policy.setter
def wildcard_policy(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "wildcard_policy", value)
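# Illustrative sketch (not part of the generated SDK): an edge-terminated
# OpenShift Route for the Prometheus component that redirects insecure
# traffic to the secure port; all values are placeholders.
#
#     prometheus_route = ArgoCDSpecPrometheusRouteArgs(
#         enabled=True,
#         tls=ArgoCDSpecPrometheusRouteTlsArgs(
#             termination="edge",
#             insecure_edge_termination_policy="Redirect",
#         ),
#     )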
@pulumi.input_type
class ArgoCDSpecPrometheusRouteTlsArgs:
def __init__(__self__, *,
termination: pulumi.Input[str],
ca_certificate: Optional[pulumi.Input[str]] = None,
certificate: Optional[pulumi.Input[str]] = None,
destination_ca_certificate: Optional[pulumi.Input[str]] = None,
insecure_edge_termination_policy: Optional[pulumi.Input[str]] = None,
key: Optional[pulumi.Input[str]] = None):
"""
TLS provides the ability to configure certificates and termination for the Route.
:param pulumi.Input[str] termination: termination indicates termination type.
:param pulumi.Input[str] ca_certificate: caCertificate provides the cert authority certificate contents
:param pulumi.Input[str] certificate: certificate provides certificate contents
:param pulumi.Input[str] destination_ca_certificate: destinationCACertificate provides the contents of the ca certificate of the final destination. When using reencrypt termination this file should be provided in order to have routers use it for health checks on the secure connection. If this field is not specified, the router may provide its own destination CA and perform hostname validation using the short service name (service.namespace.svc), which allows infrastructure generated certificates to automatically verify.
:param pulumi.Input[str] insecure_edge_termination_policy: insecureEdgeTerminationPolicy indicates the desired behavior for insecure connections to a route. While each router may make its own decisions on which ports to expose, this is normally port 80.
* Allow - traffic is sent to the server on the insecure port (default) * Disable - no traffic is allowed on the insecure port. * Redirect - clients are redirected to the secure port.
:param pulumi.Input[str] key: key provides key file contents
"""
pulumi.set(__self__, "termination", termination)
if ca_certificate is not None:
pulumi.set(__self__, "ca_certificate", ca_certificate)
if certificate is not None:
pulumi.set(__self__, "certificate", certificate)
if destination_ca_certificate is not None:
pulumi.set(__self__, "destination_ca_certificate", destination_ca_certificate)
if insecure_edge_termination_policy is not None:
pulumi.set(__self__, "insecure_edge_termination_policy", insecure_edge_termination_policy)
if key is not None:
pulumi.set(__self__, "key", key)
@property
@pulumi.getter
def termination(self) -> pulumi.Input[str]:
"""
termination indicates termination type.
"""
return pulumi.get(self, "termination")
@termination.setter
def termination(self, value: pulumi.Input[str]):
pulumi.set(self, "termination", value)
@property
@pulumi.getter(name="caCertificate")
def ca_certificate(self) -> Optional[pulumi.Input[str]]:
"""
caCertificate provides the cert authority certificate contents
"""
return pulumi.get(self, "ca_certificate")
@ca_certificate.setter
def ca_certificate(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "ca_certificate", value)
@property
@pulumi.getter
def certificate(self) -> Optional[pulumi.Input[str]]:
"""
certificate provides certificate contents
"""
return pulumi.get(self, "certificate")
@certificate.setter
def certificate(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "certificate", value)
@property
@pulumi.getter(name="destinationCACertificate")
def destination_ca_certificate(self) -> Optional[pulumi.Input[str]]:
"""
destinationCACertificate provides the contents of the ca certificate of the final destination. When using reencrypt termination this file should be provided in order to have routers use it for health checks on the secure connection. If this field is not specified, the router may provide its own destination CA and perform hostname validation using the short service name (service.namespace.svc), which allows infrastructure generated certificates to automatically verify.
"""
return pulumi.get(self, "destination_ca_certificate")
@destination_ca_certificate.setter
def destination_ca_certificate(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "destination_ca_certificate", value)
@property
@pulumi.getter(name="insecureEdgeTerminationPolicy")
def insecure_edge_termination_policy(self) -> Optional[pulumi.Input[str]]:
"""
insecureEdgeTerminationPolicy indicates the desired behavior for insecure connections to a route. While each router may make its own decisions on which ports to expose, this is normally port 80.
* Allow - traffic is sent to the server on the insecure port (default) * Disable - no traffic is allowed on the insecure port. * Redirect - clients are redirected to the secure port.
"""
return pulumi.get(self, "insecure_edge_termination_policy")
@insecure_edge_termination_policy.setter
def insecure_edge_termination_policy(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "insecure_edge_termination_policy", value)
@property
@pulumi.getter
def key(self) -> Optional[pulumi.Input[str]]:
"""
key provides key file contents
"""
return pulumi.get(self, "key")
@key.setter
def key(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "key", value)
@pulumi.input_type
class ArgoCDSpecRbacArgs:
def __init__(__self__, *,
default_policy: Optional[pulumi.Input[str]] = None,
policy: Optional[pulumi.Input[str]] = None,
scopes: Optional[pulumi.Input[str]] = None):
"""
RBAC defines the RBAC configuration for Argo CD.
:param pulumi.Input[str] default_policy: DefaultPolicy is the name of the default role which Argo CD will fall back to when authorizing API requests (optional). If omitted or empty, users may still be able to log in, but will see no apps, projects, etc.
:param pulumi.Input[str] policy: Policy is CSV containing user-defined RBAC policies and role definitions. Policy rules are in the form: p, subject, resource, action, object, effect Role definitions and bindings are in the form: g, subject, inherited-subject See https://github.com/argoproj/argo-cd/blob/master/docs/operator-manual/rbac.md for additional information.
:param pulumi.Input[str] scopes: Scopes controls which OIDC scopes to examine during RBAC enforcement (in addition to the `sub` scope). If omitted, defaults to: '[groups]'.
"""
if default_policy is not None:
pulumi.set(__self__, "default_policy", default_policy)
if policy is not None:
pulumi.set(__self__, "policy", policy)
if scopes is not None:
pulumi.set(__self__, "scopes", scopes)
@property
@pulumi.getter(name="defaultPolicy")
def default_policy(self) -> Optional[pulumi.Input[str]]:
"""
DefaultPolicy is the name of the default role which Argo CD will fall back to when authorizing API requests (optional). If omitted or empty, users may still be able to log in, but will see no apps, projects, etc.
"""
return pulumi.get(self, "default_policy")
@default_policy.setter
def default_policy(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "default_policy", value)
@property
@pulumi.getter
def policy(self) -> Optional[pulumi.Input[str]]:
"""
Policy is CSV containing user-defined RBAC policies and role definitions. Policy rules are in the form: p, subject, resource, action, object, effect Role definitions and bindings are in the form: g, subject, inherited-subject See https://github.com/argoproj/argo-cd/blob/master/docs/operator-manual/rbac.md for additional information.
"""
return pulumi.get(self, "policy")
@policy.setter
def policy(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "policy", value)
@property
@pulumi.getter
def scopes(self) -> Optional[pulumi.Input[str]]:
"""
Scopes controls which OIDC scopes to examine during RBAC enforcement (in addition to the `sub` scope). If omitted, defaults to: '[groups]'.
"""
return pulumi.get(self, "scopes")
@scopes.setter
def scopes(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "scopes", value)
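# Illustrative sketch (not part of the generated SDK): a minimal RBAC block
# granting an example group admin rights; the group name is a placeholder.
#
#     rbac = ArgoCDSpecRbacArgs(
#         default_policy="role:readonly",
#         policy="g, my-org:team-admins, role:admin",
#         scopes="[groups]",
#     )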
@pulumi.input_type
class ArgoCDSpecRedisArgs:
def __init__(__self__, *,
image: Optional[pulumi.Input[str]] = None,
resources: Optional[pulumi.Input['ArgoCDSpecRedisResourcesArgs']] = None,
version: Optional[pulumi.Input[str]] = None):
"""
Redis defines the Redis server options for ArgoCD.
:param pulumi.Input[str] image: Image is the Redis container image.
:param pulumi.Input['ArgoCDSpecRedisResourcesArgs'] resources: Resources defines the Compute Resources required by the container for Redis.
:param pulumi.Input[str] version: Version is the Redis container image tag.
"""
if image is not None:
pulumi.set(__self__, "image", image)
if resources is not None:
pulumi.set(__self__, "resources", resources)
if version is not None:
pulumi.set(__self__, "version", version)
@property
@pulumi.getter
def image(self) -> Optional[pulumi.Input[str]]:
"""
Image is the Redis container image.
"""
return pulumi.get(self, "image")
@image.setter
def image(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "image", value)
@property
@pulumi.getter
def resources(self) -> Optional[pulumi.Input['ArgoCDSpecRedisResourcesArgs']]:
"""
Resources defines the Compute Resources required by the container for Redis.
"""
return pulumi.get(self, "resources")
@resources.setter
def resources(self, value: Optional[pulumi.Input['ArgoCDSpecRedisResourcesArgs']]):
pulumi.set(self, "resources", value)
@property
@pulumi.getter
def version(self) -> Optional[pulumi.Input[str]]:
"""
Version is the Redis container image tag.
"""
return pulumi.get(self, "version")
@version.setter
def version(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "version", value)
@pulumi.input_type
class ArgoCDSpecRedisResourcesArgs:
def __init__(__self__, *,
limits: Optional[pulumi.Input[Mapping[str, pulumi.Input['ArgoCDSpecRedisResourcesLimitsArgs']]]] = None,
requests: Optional[pulumi.Input[Mapping[str, pulumi.Input['ArgoCDSpecRedisResourcesRequestsArgs']]]] = None):
"""
Resources defines the Compute Resources required by the container for Redis.
:param pulumi.Input[Mapping[str, pulumi.Input['ArgoCDSpecRedisResourcesLimitsArgs']]] limits: Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
:param pulumi.Input[Mapping[str, pulumi.Input['ArgoCDSpecRedisResourcesRequestsArgs']]] requests: Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
"""
if limits is not None:
pulumi.set(__self__, "limits", limits)
if requests is not None:
pulumi.set(__self__, "requests", requests)
@property
@pulumi.getter
def limits(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ArgoCDSpecRedisResourcesLimitsArgs']]]]:
"""
Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
"""
return pulumi.get(self, "limits")
@limits.setter
def limits(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ArgoCDSpecRedisResourcesLimitsArgs']]]]):
pulumi.set(self, "limits", value)
@property
@pulumi.getter
def requests(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ArgoCDSpecRedisResourcesRequestsArgs']]]]:
"""
Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
"""
return pulumi.get(self, "requests")
@requests.setter
def requests(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ArgoCDSpecRedisResourcesRequestsArgs']]]]):
pulumi.set(self, "requests", value)
@pulumi.input_type
class ArgoCDSpecRedisResourcesLimitsArgs:
def __init__(__self__):
pass
@pulumi.input_type
class ArgoCDSpecRedisResourcesRequestsArgs:
def __init__(__self__):
pass
@pulumi.input_type
class ArgoCDSpecRepoArgs:
def __init__(__self__, *,
mountsatoken: Optional[pulumi.Input[bool]] = None,
resources: Optional[pulumi.Input['ArgoCDSpecRepoResourcesArgs']] = None,
serviceaccount: Optional[pulumi.Input[str]] = None):
"""
Repo defines the repo server options for Argo CD.
:param pulumi.Input[bool] mountsatoken: MountSAToken describes whether you would like to have the Repo server mount the service account token
:param pulumi.Input['ArgoCDSpecRepoResourcesArgs'] resources: Resources defines the Compute Resources required by the container for the Repo server.
:param pulumi.Input[str] serviceaccount: ServiceAccount defines the ServiceAccount user that you would like the Repo server to use
"""
if mountsatoken is not None:
pulumi.set(__self__, "mountsatoken", mountsatoken)
if resources is not None:
pulumi.set(__self__, "resources", resources)
if serviceaccount is not None:
pulumi.set(__self__, "serviceaccount", serviceaccount)
@property
@pulumi.getter
def mountsatoken(self) -> Optional[pulumi.Input[bool]]:
"""
MountSAToken describes whether you would like to have the Repo server mount the service account token
"""
return pulumi.get(self, "mountsatoken")
@mountsatoken.setter
def mountsatoken(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "mountsatoken", value)
@property
@pulumi.getter
def resources(self) -> Optional[pulumi.Input['ArgoCDSpecRepoResourcesArgs']]:
"""
Resources defines the Compute Resources required by the container for the Repo server.
"""
return pulumi.get(self, "resources")
@resources.setter
def resources(self, value: Optional[pulumi.Input['ArgoCDSpecRepoResourcesArgs']]):
pulumi.set(self, "resources", value)
@property
@pulumi.getter
def serviceaccount(self) -> Optional[pulumi.Input[str]]:
"""
ServiceAccount defines the ServiceAccount user that you would like the Repo server to use
"""
return pulumi.get(self, "serviceaccount")
@serviceaccount.setter
def serviceaccount(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "serviceaccount", value)
@pulumi.input_type
class ArgoCDSpecRepoResourcesArgs:
def __init__(__self__, *,
limits: Optional[pulumi.Input[Mapping[str, pulumi.Input['ArgoCDSpecRepoResourcesLimitsArgs']]]] = None,
requests: Optional[pulumi.Input[Mapping[str, pulumi.Input['ArgoCDSpecRepoResourcesRequestsArgs']]]] = None):
"""
Resources defines the Compute Resources required by the container for the Repo server.
:param pulumi.Input[Mapping[str, pulumi.Input['ArgoCDSpecRepoResourcesLimitsArgs']]] limits: Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
:param pulumi.Input[Mapping[str, pulumi.Input['ArgoCDSpecRepoResourcesRequestsArgs']]] requests: Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
"""
if limits is not None:
pulumi.set(__self__, "limits", limits)
if requests is not None:
pulumi.set(__self__, "requests", requests)
@property
@pulumi.getter
def limits(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ArgoCDSpecRepoResourcesLimitsArgs']]]]:
"""
Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
"""
return pulumi.get(self, "limits")
@limits.setter
def limits(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ArgoCDSpecRepoResourcesLimitsArgs']]]]):
pulumi.set(self, "limits", value)
@property
@pulumi.getter
def requests(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ArgoCDSpecRepoResourcesRequestsArgs']]]]:
"""
Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
"""
return pulumi.get(self, "requests")
@requests.setter
def requests(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ArgoCDSpecRepoResourcesRequestsArgs']]]]):
pulumi.set(self, "requests", value)
@pulumi.input_type
class ArgoCDSpecRepoResourcesLimitsArgs:
def __init__(__self__):
pass
@pulumi.input_type
class ArgoCDSpecRepoResourcesRequestsArgs:
def __init__(__self__):
pass
@pulumi.input_type
class ArgoCDSpecServerArgs:
def __init__(__self__, *,
autoscale: Optional[pulumi.Input['ArgoCDSpecServerAutoscaleArgs']] = None,
grpc: Optional[pulumi.Input['ArgoCDSpecServerGrpcArgs']] = None,
host: Optional[pulumi.Input[str]] = None,
ingress: Optional[pulumi.Input['ArgoCDSpecServerIngressArgs']] = None,
insecure: Optional[pulumi.Input[bool]] = None,
resources: Optional[pulumi.Input['ArgoCDSpecServerResourcesArgs']] = None,
route: Optional[pulumi.Input['ArgoCDSpecServerRouteArgs']] = None,
service: Optional[pulumi.Input['ArgoCDSpecServerServiceArgs']] = None):
"""
Server defines the options for the ArgoCD Server component.
:param pulumi.Input['ArgoCDSpecServerAutoscaleArgs'] autoscale: Autoscale defines the autoscale options for the Argo CD Server component.
:param pulumi.Input['ArgoCDSpecServerGrpcArgs'] grpc: GRPC defines the state for the Argo CD Server GRPC options.
:param pulumi.Input[str] host: Host is the hostname to use for Ingress/Route resources.
:param pulumi.Input['ArgoCDSpecServerIngressArgs'] ingress: Ingress defines the desired state for an Ingress for the Argo CD Server component.
:param pulumi.Input[bool] insecure: Insecure toggles the insecure flag.
:param pulumi.Input['ArgoCDSpecServerResourcesArgs'] resources: Resources defines the Compute Resources required by the container for the Argo CD server component.
:param pulumi.Input['ArgoCDSpecServerRouteArgs'] route: Route defines the desired state for an OpenShift Route for the Argo CD Server component.
:param pulumi.Input['ArgoCDSpecServerServiceArgs'] service: Service defines the options for the Service backing the ArgoCD Server component.
"""
if autoscale is not None:
pulumi.set(__self__, "autoscale", autoscale)
if grpc is not None:
pulumi.set(__self__, "grpc", grpc)
if host is not None:
pulumi.set(__self__, "host", host)
if ingress is not None:
pulumi.set(__self__, "ingress", ingress)
if insecure is not None:
pulumi.set(__self__, "insecure", insecure)
if resources is not None:
pulumi.set(__self__, "resources", resources)
if route is not None:
pulumi.set(__self__, "route", route)
if service is not None:
pulumi.set(__self__, "service", service)
@property
@pulumi.getter
def autoscale(self) -> Optional[pulumi.Input['ArgoCDSpecServerAutoscaleArgs']]:
"""
Autoscale defines the autoscale options for the Argo CD Server component.
"""
return pulumi.get(self, "autoscale")
@autoscale.setter
def autoscale(self, value: Optional[pulumi.Input['ArgoCDSpecServerAutoscaleArgs']]):
pulumi.set(self, "autoscale", value)
@property
@pulumi.getter
def grpc(self) -> Optional[pulumi.Input['ArgoCDSpecServerGrpcArgs']]:
"""
GRPC defines the state for the Argo CD Server GRPC options.
"""
return pulumi.get(self, "grpc")
@grpc.setter
def grpc(self, value: Optional[pulumi.Input['ArgoCDSpecServerGrpcArgs']]):
pulumi.set(self, "grpc", value)
@property
@pulumi.getter
def host(self) -> Optional[pulumi.Input[str]]:
"""
Host is the hostname to use for Ingress/Route resources.
"""
return pulumi.get(self, "host")
@host.setter
def host(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "host", value)
@property
@pulumi.getter
def ingress(self) -> Optional[pulumi.Input['ArgoCDSpecServerIngressArgs']]:
"""
Ingress defines the desired state for an Ingress for the Argo CD Server component.
"""
return pulumi.get(self, "ingress")
@ingress.setter
def ingress(self, value: Optional[pulumi.Input['ArgoCDSpecServerIngressArgs']]):
pulumi.set(self, "ingress", value)
@property
@pulumi.getter
def insecure(self) -> Optional[pulumi.Input[bool]]:
"""
Insecure toggles the insecure flag.
"""
return pulumi.get(self, "insecure")
@insecure.setter
def insecure(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "insecure", value)
@property
@pulumi.getter
def resources(self) -> Optional[pulumi.Input['ArgoCDSpecServerResourcesArgs']]:
"""
Resources defines the Compute Resources required by the container for the Argo CD server component.
"""
return pulumi.get(self, "resources")
@resources.setter
def resources(self, value: Optional[pulumi.Input['ArgoCDSpecServerResourcesArgs']]):
pulumi.set(self, "resources", value)
@property
@pulumi.getter
def route(self) -> Optional[pulumi.Input['ArgoCDSpecServerRouteArgs']]:
"""
Route defines the desired state for an OpenShift Route for the Argo CD Server component.
"""
return pulumi.get(self, "route")
@route.setter
def route(self, value: Optional[pulumi.Input['ArgoCDSpecServerRouteArgs']]):
pulumi.set(self, "route", value)
@property
@pulumi.getter
def service(self) -> Optional[pulumi.Input['ArgoCDSpecServerServiceArgs']]:
"""
Service defines the options for the Service backing the ArgoCD Server component.
"""
return pulumi.get(self, "service")
@service.setter
def service(self, value: Optional[pulumi.Input['ArgoCDSpecServerServiceArgs']]):
pulumi.set(self, "service", value)
@pulumi.input_type
class ArgoCDSpecServerAutoscaleArgs:
def __init__(__self__, *,
enabled: pulumi.Input[bool],
hpa: Optional[pulumi.Input['ArgoCDSpecServerAutoscaleHpaArgs']] = None):
"""
Autoscale defines the autoscale options for the Argo CD Server component.
:param pulumi.Input[bool] enabled: Enabled will toggle autoscaling support for the Argo CD Server component.
:param pulumi.Input['ArgoCDSpecServerAutoscaleHpaArgs'] hpa: HPA defines the HorizontalPodAutoscaler options for the Argo CD Server component.
"""
pulumi.set(__self__, "enabled", enabled)
if hpa is not None:
pulumi.set(__self__, "hpa", hpa)
@property
@pulumi.getter
def enabled(self) -> pulumi.Input[bool]:
"""
Enabled will toggle autoscaling support for the Argo CD Server component.
"""
return pulumi.get(self, "enabled")
@enabled.setter
def enabled(self, value: pulumi.Input[bool]):
pulumi.set(self, "enabled", value)
@property
@pulumi.getter
def hpa(self) -> Optional[pulumi.Input['ArgoCDSpecServerAutoscaleHpaArgs']]:
"""
HPA defines the HorizontalPodAutoscaler options for the Argo CD Server component.
"""
return pulumi.get(self, "hpa")
@hpa.setter
def hpa(self, value: Optional[pulumi.Input['ArgoCDSpecServerAutoscaleHpaArgs']]):
pulumi.set(self, "hpa", value)
@pulumi.input_type
class ArgoCDSpecServerAutoscaleHpaArgs:
def __init__(__self__, *,
max_replicas: pulumi.Input[int],
scale_target_ref: pulumi.Input['ArgoCDSpecServerAutoscaleHpaScaleTargetRefArgs'],
min_replicas: Optional[pulumi.Input[int]] = None,
target_cpu_utilization_percentage: Optional[pulumi.Input[int]] = None):
"""
HPA defines the HorizontalPodAutoscaler options for the Argo CD Server component.
:param pulumi.Input[int] max_replicas: upper limit for the number of pods that can be set by the autoscaler; cannot be smaller than MinReplicas.
:param pulumi.Input['ArgoCDSpecServerAutoscaleHpaScaleTargetRefArgs'] scale_target_ref: reference to scaled resource; horizontal pod autoscaler will learn the current resource consumption and will set the desired number of pods by using its Scale subresource.
:param pulumi.Input[int] min_replicas: minReplicas is the lower limit for the number of replicas to which the autoscaler can scale down. It defaults to 1 pod. minReplicas is allowed to be 0 if the alpha feature gate HPAScaleToZero is enabled and at least one Object or External metric is configured. Scaling is active as long as at least one metric value is available.
:param pulumi.Input[int] target_cpu_utilization_percentage: target average CPU utilization (represented as a percentage of requested CPU) over all the pods; if not specified the default autoscaling policy will be used.
"""
pulumi.set(__self__, "max_replicas", max_replicas)
pulumi.set(__self__, "scale_target_ref", scale_target_ref)
if min_replicas is not None:
pulumi.set(__self__, "min_replicas", min_replicas)
if target_cpu_utilization_percentage is not None:
pulumi.set(__self__, "target_cpu_utilization_percentage", target_cpu_utilization_percentage)
@property
@pulumi.getter(name="maxReplicas")
def max_replicas(self) -> pulumi.Input[int]:
"""
upper limit for the number of pods that can be set by the autoscaler; cannot be smaller than MinReplicas.
"""
return pulumi.get(self, "max_replicas")
@max_replicas.setter
def max_replicas(self, value: pulumi.Input[int]):
pulumi.set(self, "max_replicas", value)
@property
@pulumi.getter(name="scaleTargetRef")
def scale_target_ref(self) -> pulumi.Input['ArgoCDSpecServerAutoscaleHpaScaleTargetRefArgs']:
"""
reference to scaled resource; horizontal pod autoscaler will learn the current resource consumption and will set the desired number of pods by using its Scale subresource.
"""
return pulumi.get(self, "scale_target_ref")
@scale_target_ref.setter
def scale_target_ref(self, value: pulumi.Input['ArgoCDSpecServerAutoscaleHpaScaleTargetRefArgs']):
pulumi.set(self, "scale_target_ref", value)
@property
@pulumi.getter(name="minReplicas")
def min_replicas(self) -> Optional[pulumi.Input[int]]:
"""
minReplicas is the lower limit for the number of replicas to which the autoscaler can scale down. It defaults to 1 pod. minReplicas is allowed to be 0 if the alpha feature gate HPAScaleToZero is enabled and at least one Object or External metric is configured. Scaling is active as long as at least one metric value is available.
"""
return pulumi.get(self, "min_replicas")
@min_replicas.setter
def min_replicas(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "min_replicas", value)
@property
@pulumi.getter(name="targetCPUUtilizationPercentage")
def target_cpu_utilization_percentage(self) -> Optional[pulumi.Input[int]]:
"""
target average CPU utilization (represented as a percentage of requested CPU) over all the pods; if not specified the default autoscaling policy will be used.
"""
return pulumi.get(self, "target_cpu_utilization_percentage")
@target_cpu_utilization_percentage.setter
def target_cpu_utilization_percentage(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "target_cpu_utilization_percentage", value)
@pulumi.input_type
class ArgoCDSpecServerAutoscaleHpaScaleTargetRefArgs:
def __init__(__self__, *,
kind: pulumi.Input[str],
name: pulumi.Input[str],
api_version: Optional[pulumi.Input[str]] = None):
"""
reference to scaled resource; horizontal pod autoscaler will learn the current resource consumption and will set the desired number of pods by using its Scale subresource.
:param pulumi.Input[str] kind: Kind of the referent; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
:param pulumi.Input[str] name: Name of the referent; More info: http://kubernetes.io/docs/user-guide/identifiers#names
:param pulumi.Input[str] api_version: API version of the referent
"""
pulumi.set(__self__, "kind", kind)
pulumi.set(__self__, "name", name)
if api_version is not None:
pulumi.set(__self__, "api_version", api_version)
@property
@pulumi.getter
def kind(self) -> pulumi.Input[str]:
"""
Kind of the referent; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
"""
return pulumi.get(self, "kind")
@kind.setter
def kind(self, value: pulumi.Input[str]):
pulumi.set(self, "kind", value)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
"""
Name of the referent; More info: http://kubernetes.io/docs/user-guide/identifiers#names
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter(name="apiVersion")
def api_version(self) -> Optional[pulumi.Input[str]]:
"""
API version of the referent
"""
return pulumi.get(self, "api_version")
@api_version.setter
def api_version(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "api_version", value)
@pulumi.input_type
class ArgoCDSpecServerGrpcArgs:
def __init__(__self__, *,
host: Optional[pulumi.Input[str]] = None,
ingress: Optional[pulumi.Input['ArgoCDSpecServerGrpcIngressArgs']] = None):
"""
GRPC defines the state for the Argo CD Server GRPC options.
:param pulumi.Input[str] host: Host is the hostname to use for Ingress/Route resources.
:param pulumi.Input['ArgoCDSpecServerGrpcIngressArgs'] ingress: Ingress defines the desired state for the Argo CD Server GRPC Ingress.
"""
if host is not None:
pulumi.set(__self__, "host", host)
if ingress is not None:
pulumi.set(__self__, "ingress", ingress)
@property
@pulumi.getter
def host(self) -> Optional[pulumi.Input[str]]:
"""
Host is the hostname to use for Ingress/Route resources.
"""
return pulumi.get(self, "host")
@host.setter
def host(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "host", value)
@property
@pulumi.getter
def ingress(self) -> Optional[pulumi.Input['ArgoCDSpecServerGrpcIngressArgs']]:
"""
Ingress defines the desired state for the Argo CD Server GRPC Ingress.
"""
return pulumi.get(self, "ingress")
@ingress.setter
def ingress(self, value: Optional[pulumi.Input['ArgoCDSpecServerGrpcIngressArgs']]):
pulumi.set(self, "ingress", value)
@pulumi.input_type
class ArgoCDSpecServerGrpcIngressArgs:
def __init__(__self__, *,
enabled: pulumi.Input[bool],
annotations: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
path: Optional[pulumi.Input[str]] = None,
tls: Optional[pulumi.Input[Sequence[pulumi.Input['ArgoCDSpecServerGrpcIngressTlsArgs']]]] = None):
"""
Ingress defines the desired state for the Argo CD Server GRPC Ingress.
:param pulumi.Input[bool] enabled: Enabled will toggle the creation of the Ingress.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] annotations: Annotations is the map of annotations to apply to the Ingress.
:param pulumi.Input[str] path: Path used for the Ingress resource.
:param pulumi.Input[Sequence[pulumi.Input['ArgoCDSpecServerGrpcIngressTlsArgs']]] tls: TLS configuration. Currently the Ingress only supports a single TLS port, 443. If multiple members of this list specify different hosts, they will be multiplexed on the same port according to the hostname specified through the SNI TLS extension, if the ingress controller fulfilling the ingress supports SNI.
"""
pulumi.set(__self__, "enabled", enabled)
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if path is not None:
pulumi.set(__self__, "path", path)
if tls is not None:
pulumi.set(__self__, "tls", tls)
@property
@pulumi.getter
def enabled(self) -> pulumi.Input[bool]:
"""
Enabled will toggle the creation of the Ingress.
"""
return pulumi.get(self, "enabled")
@enabled.setter
def enabled(self, value: pulumi.Input[bool]):
pulumi.set(self, "enabled", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
Annotations is the map of annotations to apply to the Ingress.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def path(self) -> Optional[pulumi.Input[str]]:
"""
Path used for the Ingress resource.
"""
return pulumi.get(self, "path")
@path.setter
def path(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "path", value)
@property
@pulumi.getter
def tls(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ArgoCDSpecServerGrpcIngressTlsArgs']]]]:
"""
TLS configuration. Currently the Ingress only supports a single TLS port, 443. If multiple members of this list specify different hosts, they will be multiplexed on the same port according to the hostname specified through the SNI TLS extension, if the ingress controller fulfilling the ingress supports SNI.
"""
return pulumi.get(self, "tls")
@tls.setter
def tls(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ArgoCDSpecServerGrpcIngressTlsArgs']]]]):
pulumi.set(self, "tls", value)
@pulumi.input_type
class ArgoCDSpecServerGrpcIngressTlsArgs:
def __init__(__self__, *,
hosts: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
secret_name: Optional[pulumi.Input[str]] = None):
"""
IngressTLS describes the transport layer security associated with an Ingress.
:param pulumi.Input[Sequence[pulumi.Input[str]]] hosts: Hosts are a list of hosts included in the TLS certificate. The values in this list must match the name/s used in the tlsSecret. Defaults to the wildcard host setting for the loadbalancer controller fulfilling this Ingress, if left unspecified.
:param pulumi.Input[str] secret_name: SecretName is the name of the secret used to terminate SSL traffic on 443. Field is left optional to allow SSL routing based on SNI hostname alone. If the SNI host in a listener conflicts with the "Host" header field used by an IngressRule, the SNI host is used for termination and the value of the Host header is used for routing.
"""
if hosts is not None:
pulumi.set(__self__, "hosts", hosts)
if secret_name is not None:
pulumi.set(__self__, "secret_name", secret_name)
@property
@pulumi.getter
def hosts(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
Hosts are a list of hosts included in the TLS certificate. The values in this list must match the name/s used in the tlsSecret. Defaults to the wildcard host setting for the loadbalancer controller fulfilling this Ingress, if left unspecified.
"""
return pulumi.get(self, "hosts")
@hosts.setter
def hosts(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "hosts", value)
@property
@pulumi.getter(name="secretName")
def secret_name(self) -> Optional[pulumi.Input[str]]:
"""
SecretName is the name of the secret used to terminate SSL traffic on 443. Field is left optional to allow SSL routing based on SNI hostname alone. If the SNI host in a listener conflicts with the "Host" header field used by an IngressRule, the SNI host is used for termination and the value of the Host header is used for routing.
"""
return pulumi.get(self, "secret_name")
@secret_name.setter
def secret_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "secret_name", value)
@pulumi.input_type
class ArgoCDSpecServerIngressArgs:
def __init__(__self__, *,
enabled: pulumi.Input[bool],
annotations: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
path: Optional[pulumi.Input[str]] = None,
tls: Optional[pulumi.Input[Sequence[pulumi.Input['ArgoCDSpecServerIngressTlsArgs']]]] = None):
"""
Ingress defines the desired state for an Ingress for the Argo CD Server component.
:param pulumi.Input[bool] enabled: Enabled will toggle the creation of the Ingress.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] annotations: Annotations is the map of annotations to apply to the Ingress.
:param pulumi.Input[str] path: Path used for the Ingress resource.
:param pulumi.Input[Sequence[pulumi.Input['ArgoCDSpecServerIngressTlsArgs']]] tls: TLS configuration. Currently the Ingress only supports a single TLS port, 443. If multiple members of this list specify different hosts, they will be multiplexed on the same port according to the hostname specified through the SNI TLS extension, if the ingress controller fulfilling the ingress supports SNI.
"""
pulumi.set(__self__, "enabled", enabled)
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if path is not None:
pulumi.set(__self__, "path", path)
if tls is not None:
pulumi.set(__self__, "tls", tls)
@property
@pulumi.getter
def enabled(self) -> pulumi.Input[bool]:
"""
Enabled will toggle the creation of the Ingress.
"""
return pulumi.get(self, "enabled")
@enabled.setter
def enabled(self, value: pulumi.Input[bool]):
pulumi.set(self, "enabled", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
Annotations is the map of annotations to apply to the Ingress.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def path(self) -> Optional[pulumi.Input[str]]:
"""
Path used for the Ingress resource.
"""
return pulumi.get(self, "path")
@path.setter
def path(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "path", value)
@property
@pulumi.getter
def tls(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ArgoCDSpecServerIngressTlsArgs']]]]:
"""
TLS configuration. Currently the Ingress only supports a single TLS port, 443. If multiple members of this list specify different hosts, they will be multiplexed on the same port according to the hostname specified through the SNI TLS extension, if the ingress controller fulfilling the ingress supports SNI.
"""
return pulumi.get(self, "tls")
@tls.setter
def tls(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ArgoCDSpecServerIngressTlsArgs']]]]):
pulumi.set(self, "tls", value)
@pulumi.input_type
class ArgoCDSpecServerIngressTlsArgs:
def __init__(__self__, *,
hosts: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
secret_name: Optional[pulumi.Input[str]] = None):
"""
IngressTLS describes the transport layer security associated with an Ingress.
:param pulumi.Input[Sequence[pulumi.Input[str]]] hosts: Hosts are a list of hosts included in the TLS certificate. The values in this list must match the name/s used in the tlsSecret. Defaults to the wildcard host setting for the loadbalancer controller fulfilling this Ingress, if left unspecified.
:param pulumi.Input[str] secret_name: SecretName is the name of the secret used to terminate SSL traffic on 443. Field is left optional to allow SSL routing based on SNI hostname alone. If the SNI host in a listener conflicts with the "Host" header field used by an IngressRule, the SNI host is used for termination and the value of the Host header is used for routing.
"""
if hosts is not None:
pulumi.set(__self__, "hosts", hosts)
if secret_name is not None:
pulumi.set(__self__, "secret_name", secret_name)
@property
@pulumi.getter
def hosts(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
Hosts are a list of hosts included in the TLS certificate. The values in this list must match the name/s used in the tlsSecret. Defaults to the wildcard host setting for the loadbalancer controller fulfilling this Ingress, if left unspecified.
"""
return pulumi.get(self, "hosts")
@hosts.setter
def hosts(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "hosts", value)
@property
@pulumi.getter(name="secretName")
def secret_name(self) -> Optional[pulumi.Input[str]]:
"""
SecretName is the name of the secret used to terminate SSL traffic on 443. Field is left optional to allow SSL routing based on SNI hostname alone. If the SNI host in a listener conflicts with the "Host" header field used by an IngressRule, the SNI host is used for termination and the value of the Host header is used for routing.
"""
return pulumi.get(self, "secret_name")
@secret_name.setter
def secret_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "secret_name", value)
@pulumi.input_type
class ArgoCDSpecServerResourcesArgs:
def __init__(__self__, *,
limits: Optional[pulumi.Input[Mapping[str, pulumi.Input['ArgoCDSpecServerResourcesLimitsArgs']]]] = None,
requests: Optional[pulumi.Input[Mapping[str, pulumi.Input['ArgoCDSpecServerResourcesRequestsArgs']]]] = None):
"""
Resources defines the Compute Resources required by the container for the Argo CD server component.
:param pulumi.Input[Mapping[str, pulumi.Input['ArgoCDSpecServerResourcesLimitsArgs']]] limits: Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
:param pulumi.Input[Mapping[str, pulumi.Input['ArgoCDSpecServerResourcesRequestsArgs']]] requests: Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
"""
if limits is not None:
pulumi.set(__self__, "limits", limits)
if requests is not None:
pulumi.set(__self__, "requests", requests)
@property
@pulumi.getter
def limits(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ArgoCDSpecServerResourcesLimitsArgs']]]]:
"""
Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
"""
return pulumi.get(self, "limits")
@limits.setter
def limits(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ArgoCDSpecServerResourcesLimitsArgs']]]]):
pulumi.set(self, "limits", value)
@property
@pulumi.getter
def requests(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ArgoCDSpecServerResourcesRequestsArgs']]]]:
"""
Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
"""
return pulumi.get(self, "requests")
@requests.setter
def requests(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ArgoCDSpecServerResourcesRequestsArgs']]]]):
pulumi.set(self, "requests", value)
@pulumi.input_type
class ArgoCDSpecServerResourcesLimitsArgs:
def __init__(__self__):
pass
@pulumi.input_type
class ArgoCDSpecServerResourcesRequestsArgs:
def __init__(__self__):
pass
@pulumi.input_type
class ArgoCDSpecServerRouteArgs:
def __init__(__self__, *,
enabled: pulumi.Input[bool],
annotations: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
path: Optional[pulumi.Input[str]] = None,
tls: Optional[pulumi.Input['ArgoCDSpecServerRouteTlsArgs']] = None,
wildcard_policy: Optional[pulumi.Input[str]] = None):
"""
Route defines the desired state for an OpenShift Route for the Argo CD Server component.
:param pulumi.Input[bool] enabled: Enabled will toggle the creation of the OpenShift Route.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] annotations: Annotations is the map of annotations to use for the Route resource.
:param pulumi.Input[str] path: Path that the router watches for, to route traffic to the service.
:param pulumi.Input['ArgoCDSpecServerRouteTlsArgs'] tls: TLS provides the ability to configure certificates and termination for the Route.
:param pulumi.Input[str] wildcard_policy: WildcardPolicy if any for the route. Currently only 'Subdomain' or 'None' is allowed.
"""
pulumi.set(__self__, "enabled", enabled)
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if path is not None:
pulumi.set(__self__, "path", path)
if tls is not None:
pulumi.set(__self__, "tls", tls)
if wildcard_policy is not None:
pulumi.set(__self__, "wildcard_policy", wildcard_policy)
@property
@pulumi.getter
def enabled(self) -> pulumi.Input[bool]:
"""
Enabled will toggle the creation of the OpenShift Route.
"""
return pulumi.get(self, "enabled")
@enabled.setter
def enabled(self, value: pulumi.Input[bool]):
pulumi.set(self, "enabled", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
Annotations is the map of annotations to use for the Route resource.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def path(self) -> Optional[pulumi.Input[str]]:
"""
Path that the router watches for, to route traffic to the service.
"""
return pulumi.get(self, "path")
@path.setter
def path(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "path", value)
@property
@pulumi.getter
def tls(self) -> Optional[pulumi.Input['ArgoCDSpecServerRouteTlsArgs']]:
"""
TLS provides the ability to configure certificates and termination for the Route.
"""
return pulumi.get(self, "tls")
@tls.setter
def tls(self, value: Optional[pulumi.Input['ArgoCDSpecServerRouteTlsArgs']]):
pulumi.set(self, "tls", value)
@property
@pulumi.getter(name="wildcardPolicy")
def wildcard_policy(self) -> Optional[pulumi.Input[str]]:
"""
WildcardPolicy if any for the route. Currently only 'Subdomain' or 'None' is allowed.
"""
return pulumi.get(self, "wildcard_policy")
@wildcard_policy.setter
def wildcard_policy(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "wildcard_policy", value)
@pulumi.input_type
class ArgoCDSpecServerRouteTlsArgs:
def __init__(__self__, *,
termination: pulumi.Input[str],
ca_certificate: Optional[pulumi.Input[str]] = None,
certificate: Optional[pulumi.Input[str]] = None,
destination_ca_certificate: Optional[pulumi.Input[str]] = None,
insecure_edge_termination_policy: Optional[pulumi.Input[str]] = None,
key: Optional[pulumi.Input[str]] = None):
"""
TLS provides the ability to configure certificates and termination for the Route.
:param pulumi.Input[str] termination: termination indicates termination type.
:param pulumi.Input[str] ca_certificate: caCertificate provides the cert authority certificate contents
:param pulumi.Input[str] certificate: certificate provides certificate contents
:param pulumi.Input[str] destination_ca_certificate: destinationCACertificate provides the contents of the ca certificate of the final destination. When using reencrypt termination this file should be provided in order to have routers use it for health checks on the secure connection. If this field is not specified, the router may provide its own destination CA and perform hostname validation using the short service name (service.namespace.svc), which allows infrastructure generated certificates to automatically verify.
:param pulumi.Input[str] insecure_edge_termination_policy: insecureEdgeTerminationPolicy indicates the desired behavior for insecure connections to a route. While each router may make its own decisions on which ports to expose, this is normally port 80.
* Allow - traffic is sent to the server on the insecure port (default)
* Disable - no traffic is allowed on the insecure port.
* Redirect - clients are redirected to the secure port.
:param pulumi.Input[str] key: key provides key file contents
"""
pulumi.set(__self__, "termination", termination)
if ca_certificate is not None:
pulumi.set(__self__, "ca_certificate", ca_certificate)
if certificate is not None:
pulumi.set(__self__, "certificate", certificate)
if destination_ca_certificate is not None:
pulumi.set(__self__, "destination_ca_certificate", destination_ca_certificate)
if insecure_edge_termination_policy is not None:
pulumi.set(__self__, "insecure_edge_termination_policy", insecure_edge_termination_policy)
if key is not None:
pulumi.set(__self__, "key", key)
@property
@pulumi.getter
def termination(self) -> pulumi.Input[str]:
"""
termination indicates termination type.
"""
return pulumi.get(self, "termination")
@termination.setter
def termination(self, value: pulumi.Input[str]):
pulumi.set(self, "termination", value)
@property
@pulumi.getter(name="caCertificate")
def ca_certificate(self) -> Optional[pulumi.Input[str]]:
"""
caCertificate provides the cert authority certificate contents
"""
return pulumi.get(self, "ca_certificate")
@ca_certificate.setter
def ca_certificate(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "ca_certificate", value)
@property
@pulumi.getter
def certificate(self) -> Optional[pulumi.Input[str]]:
"""
certificate provides certificate contents
"""
return pulumi.get(self, "certificate")
@certificate.setter
def certificate(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "certificate", value)
@property
@pulumi.getter(name="destinationCACertificate")
def destination_ca_certificate(self) -> Optional[pulumi.Input[str]]:
"""
destinationCACertificate provides the contents of the ca certificate of the final destination. When using reencrypt termination this file should be provided in order to have routers use it for health checks on the secure connection. If this field is not specified, the router may provide its own destination CA and perform hostname validation using the short service name (service.namespace.svc), which allows infrastructure generated certificates to automatically verify.
"""
return pulumi.get(self, "destination_ca_certificate")
@destination_ca_certificate.setter
def destination_ca_certificate(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "destination_ca_certificate", value)
@property
@pulumi.getter(name="insecureEdgeTerminationPolicy")
def insecure_edge_termination_policy(self) -> Optional[pulumi.Input[str]]:
"""
insecureEdgeTerminationPolicy indicates the desired behavior for insecure connections to a route. While each router may make its own decisions on which ports to expose, this is normally port 80.
* Allow - traffic is sent to the server on the insecure port (default)
* Disable - no traffic is allowed on the insecure port.
* Redirect - clients are redirected to the secure port.
"""
return pulumi.get(self, "insecure_edge_termination_policy")
@insecure_edge_termination_policy.setter
def insecure_edge_termination_policy(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "insecure_edge_termination_policy", value)
@property
@pulumi.getter
def key(self) -> Optional[pulumi.Input[str]]:
"""
key provides key file contents
"""
return pulumi.get(self, "key")
@key.setter
def key(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "key", value)
@pulumi.input_type
class ArgoCDSpecServerServiceArgs:
def __init__(__self__, *,
type: pulumi.Input[str]):
"""
Service defines the options for the Service backing the ArgoCD Server component.
:param pulumi.Input[str] type: Type is the ServiceType to use for the Service resource.
"""
pulumi.set(__self__, "type", type)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type is the ServiceType to use for the Service resource.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@pulumi.input_type
class ArgoCDSpecTlsArgs:
def __init__(__self__, *,
ca: Optional[pulumi.Input['ArgoCDSpecTlsCaArgs']] = None,
initial_certs: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None):
"""
TLS defines the TLS options for ArgoCD.
:param pulumi.Input['ArgoCDSpecTlsCaArgs'] ca: CA defines the CA options.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] initial_certs: InitialCerts defines custom TLS certificates upon creation of the cluster for connecting Git repositories via HTTPS.
"""
if ca is not None:
pulumi.set(__self__, "ca", ca)
if initial_certs is not None:
pulumi.set(__self__, "initial_certs", initial_certs)
@property
@pulumi.getter
def ca(self) -> Optional[pulumi.Input['ArgoCDSpecTlsCaArgs']]:
"""
CA defines the CA options.
"""
return pulumi.get(self, "ca")
@ca.setter
def ca(self, value: Optional[pulumi.Input['ArgoCDSpecTlsCaArgs']]):
pulumi.set(self, "ca", value)
@property
@pulumi.getter(name="initialCerts")
def initial_certs(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
InitialCerts defines custom TLS certificates upon creation of the cluster for connecting Git repositories via HTTPS.
"""
return pulumi.get(self, "initial_certs")
@initial_certs.setter
def initial_certs(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "initial_certs", value)
@pulumi.input_type
class ArgoCDSpecTlsCaArgs:
def __init__(__self__, *,
config_map_name: Optional[pulumi.Input[str]] = None,
secret_name: Optional[pulumi.Input[str]] = None):
"""
CA defines the CA options.
:param pulumi.Input[str] config_map_name: ConfigMapName is the name of the ConfigMap containing the CA Certificate.
:param pulumi.Input[str] secret_name: SecretName is the name of the Secret containing the CA Certificate and Key.
"""
if config_map_name is not None:
pulumi.set(__self__, "config_map_name", config_map_name)
if secret_name is not None:
pulumi.set(__self__, "secret_name", secret_name)
@property
@pulumi.getter(name="configMapName")
def config_map_name(self) -> Optional[pulumi.Input[str]]:
"""
ConfigMapName is the name of the ConfigMap containing the CA Certificate.
"""
return pulumi.get(self, "config_map_name")
@config_map_name.setter
def config_map_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "config_map_name", value)
@property
@pulumi.getter(name="secretName")
def secret_name(self) -> Optional[pulumi.Input[str]]:
"""
SecretName is the name of the Secret containing the CA Certificate and Key.
"""
return pulumi.get(self, "secret_name")
@secret_name.setter
def secret_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "secret_name", value)
@pulumi.input_type
class ArgoCDStatusArgs:
def __init__(__self__, *,
application_controller: Optional[pulumi.Input[str]] = None,
dex: Optional[pulumi.Input[str]] = None,
phase: Optional[pulumi.Input[str]] = None,
redis: Optional[pulumi.Input[str]] = None,
repo: Optional[pulumi.Input[str]] = None,
server: Optional[pulumi.Input[str]] = None):
"""
ArgoCDStatus defines the observed state of ArgoCD
:param pulumi.Input[str] application_controller: ApplicationController is a simple, high-level summary of where the Argo CD application controller component is in its lifecycle. There are four possible ApplicationController values: Pending: The Argo CD application controller component has been accepted by the Kubernetes system, but one or more of the required resources have not been created. Running: All of the required Pods for the Argo CD application controller component are in a Ready state. Failed: At least one of the Argo CD application controller component Pods had a failure. Unknown: For some reason the state of the Argo CD application controller component could not be obtained.
:param pulumi.Input[str] dex: Dex is a simple, high-level summary of where the Argo CD Dex component is in its lifecycle. There are four possible dex values: Pending: The Argo CD Dex component has been accepted by the Kubernetes system, but one or more of the required resources have not been created. Running: All of the required Pods for the Argo CD Dex component are in a Ready state. Failed: At least one of the Argo CD Dex component Pods had a failure. Unknown: For some reason the state of the Argo CD Dex component could not be obtained.
:param pulumi.Input[str] phase: Phase is a simple, high-level summary of where the ArgoCD is in its lifecycle. There are four possible phase values: Pending: The ArgoCD has been accepted by the Kubernetes system, but one or more of the required resources have not been created. Available: All of the resources for the ArgoCD are ready. Failed: At least one resource has experienced a failure. Unknown: For some reason the state of the ArgoCD phase could not be obtained.
:param pulumi.Input[str] redis: Redis is a simple, high-level summary of where the Argo CD Redis component is in its lifecycle. There are four possible redis values: Pending: The Argo CD Redis component has been accepted by the Kubernetes system, but one or more of the required resources have not been created. Running: All of the required Pods for the Argo CD Redis component are in a Ready state. Failed: At least one of the Argo CD Redis component Pods had a failure. Unknown: For some reason the state of the Argo CD Redis component could not be obtained.
:param pulumi.Input[str] repo: Repo is a simple, high-level summary of where the Argo CD Repo component is in its lifecycle. There are four possible repo values: Pending: The Argo CD Repo component has been accepted by the Kubernetes system, but one or more of the required resources have not been created. Running: All of the required Pods for the Argo CD Repo component are in a Ready state. Failed: At least one of the Argo CD Repo component Pods had a failure. Unknown: For some reason the state of the Argo CD Repo component could not be obtained.
:param pulumi.Input[str] server: Server is a simple, high-level summary of where the Argo CD server component is in its lifecycle. There are four possible server values: Pending: The Argo CD server component has been accepted by the Kubernetes system, but one or more of the required resources have not been created. Running: All of the required Pods for the Argo CD server component are in a Ready state. Failed: At least one of the Argo CD server component Pods had a failure. Unknown: For some reason the state of the Argo CD server component could not be obtained.
"""
        if application_controller is not None:
            pulumi.set(__self__, "application_controller", application_controller)
        if dex is not None:
            pulumi.set(__self__, "dex", dex)
        if phase is not None:
            pulumi.set(__self__, "phase", phase)
        if redis is not None:
            pulumi.set(__self__, "redis", redis)
        if repo is not None:
            pulumi.set(__self__, "repo", repo)
        if server is not None:
            pulumi.set(__self__, "server", server)

    @property
    @pulumi.getter(name="applicationController")
    def application_controller(self) -> Optional[pulumi.Input[str]]:
        """
        ApplicationController is a simple, high-level summary of where the Argo CD application controller component is in its lifecycle. There are four possible ApplicationController values: Pending: The Argo CD application controller component has been accepted by the Kubernetes system, but one or more of the required resources have not been created. Running: All of the required Pods for the Argo CD application controller component are in a Ready state. Failed: At least one of the Argo CD application controller component Pods had a failure. Unknown: For some reason the state of the Argo CD application controller component could not be obtained.
        """
        return pulumi.get(self, "application_controller")

    @application_controller.setter
    def application_controller(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "application_controller", value)

    @property
    @pulumi.getter
    def dex(self) -> Optional[pulumi.Input[str]]:
        """
        Dex is a simple, high-level summary of where the Argo CD Dex component is in its lifecycle. There are four possible dex values: Pending: The Argo CD Dex component has been accepted by the Kubernetes system, but one or more of the required resources have not been created. Running: All of the required Pods for the Argo CD Dex component are in a Ready state. Failed: At least one of the Argo CD Dex component Pods had a failure. Unknown: For some reason the state of the Argo CD Dex component could not be obtained.
        """
        return pulumi.get(self, "dex")

    @dex.setter
    def dex(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "dex", value)

    @property
    @pulumi.getter
    def phase(self) -> Optional[pulumi.Input[str]]:
        """
        Phase is a simple, high-level summary of where the ArgoCD is in its lifecycle. There are four possible phase values: Pending: The ArgoCD has been accepted by the Kubernetes system, but one or more of the required resources have not been created. Available: All of the resources for the ArgoCD are ready. Failed: At least one resource has experienced a failure. Unknown: For some reason the state of the ArgoCD phase could not be obtained.
        """
        return pulumi.get(self, "phase")

    @phase.setter
    def phase(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "phase", value)

    @property
    @pulumi.getter
    def redis(self) -> Optional[pulumi.Input[str]]:
        """
        Redis is a simple, high-level summary of where the Argo CD Redis component is in its lifecycle. There are four possible redis values: Pending: The Argo CD Redis component has been accepted by the Kubernetes system, but one or more of the required resources have not been created. Running: All of the required Pods for the Argo CD Redis component are in a Ready state. Failed: At least one of the Argo CD Redis component Pods had a failure. Unknown: For some reason the state of the Argo CD Redis component could not be obtained.
        """
        return pulumi.get(self, "redis")

    @redis.setter
    def redis(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "redis", value)

    @property
    @pulumi.getter
    def repo(self) -> Optional[pulumi.Input[str]]:
        """
        Repo is a simple, high-level summary of where the Argo CD Repo component is in its lifecycle. There are four possible repo values: Pending: The Argo CD Repo component has been accepted by the Kubernetes system, but one or more of the required resources have not been created. Running: All of the required Pods for the Argo CD Repo component are in a Ready state. Failed: At least one of the Argo CD Repo component Pods had a failure. Unknown: For some reason the state of the Argo CD Repo component could not be obtained.
        """
        return pulumi.get(self, "repo")

    @repo.setter
    def repo(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "repo", value)

    @property
    @pulumi.getter
    def server(self) -> Optional[pulumi.Input[str]]:
        """
        Server is a simple, high-level summary of where the Argo CD server component is in its lifecycle. There are four possible server values: Pending: The Argo CD server component has been accepted by the Kubernetes system, but one or more of the required resources have not been created. Running: All of the required Pods for the Argo CD server component are in a Ready state. Failed: At least one of the Argo CD server component Pods had a failure. Unknown: For some reason the state of the Argo CD server component could not be obtained.
        """
        return pulumi.get(self, "server")

    @server.setter
    def server(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "server", value)
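The generated class above funnels every attribute through pulumi.get/pulumi.set instead of plain instance attributes. A minimal, dependency-free sketch of that property pattern, with a hypothetical `_values` dict standing in for Pulumi's internal value table:

```python
from typing import Optional


class ComponentStatus:
    """Plain-Python analogue of the generated status-args pattern above."""

    def __init__(self, redis: Optional[str] = None):
        self._values = {}  # stands in for Pulumi's internal value table
        if redis is not None:
            self._values["redis"] = redis

    @property
    def redis(self) -> Optional[str]:
        return self._values.get("redis")

    @redis.setter
    def redis(self, value: Optional[str]):
        self._values["redis"] = value


s = ComponentStatus(redis="Pending")
s.redis = "Running"
print(s.redis)  # -> Running
```

Routing attribute access through one table is what lets the real SDK track unknown/async values uniformly; the sketch only shows the plumbing shape.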
# data/cases/fit.py (YYgroup/STmodel, MIT license)
import numpy as np
from scipy.optimize import curve_fit

import STmodel.data.cases as stc
import STmodel.model.st as stm


# fit C0 for a set of data from a case
# also calculate Ma, Le, Ze
def case_fit_C0(case_name, **kwargs):
    cases = stc.case.CaseSet(case_name)
    params = {}
    for case in cases.cases:
        case_params = {}
        case_data = cases.get_case_data(case)
        reactant = cases.get_reactant(case)

        # model prediction with given C0, as the fit function for case_fit_C0
        def case_model_predictions(case, C0):
            ur, lr = case
            model = stm.Model(reactant, C=C0, type_C0='const', Ka_def='Bradley')
            sr_model = np.zeros(len(ur))
            for i, u in enumerate(ur):
                sr_model[i] = model.ratio_turbulent_burning_velocity(u, lr[i])
            return sr_model

        u = case_data.turbulence_intensity
        l = case_data.turbulence_length_scale
        s = case_data.turbulent_burning_velocity
        popt, pcov = curve_fit(case_model_predictions, (u, l), s)

        case_params['np'] = len(s)
        case_params['C0'] = popt
        case_params['Le'] = reactant.Le
        case_params['Re'] = reactant.ReF
        case_params['Ze'] = reactant.flame_state.Ze()
        case_params['Er'] = reactant.flame_state.expansion()
        params[case] = case_params
    return params


# fit A (with B fixed) for a set of data from a case
# also calculate Ma, Le, Ze
def case_fit_A(case_name, **kwargs):
    cases = stc.case.CaseSet(case_name)
    params = {}
    for case in cases.cases:
        case_params = {}
        case_data = cases.get_case_data(case)
        reactant = cases.get_reactant(case)

        # model prediction with given A (B fixed at 0), as the fit function for case_fit_A
        def case_model_predictions(case, a):
            ur, lr = case
            model = stm.Model(reactant, A=a, B=0.0)
            sr_model = np.zeros(len(ur))
            for i, u in enumerate(ur):
                sr_model[i] = model.ratio_turbulent_burning_velocity(u, lr[i])
            return sr_model

        u = case_data.turbulence_intensity
        l = case_data.turbulence_length_scale
        s = case_data.turbulent_burning_velocity
        popt, pcov = curve_fit(case_model_predictions, (u, l), s)

        case_params['np'] = len(s)
        case_params['A'] = popt
        case_params['Le'] = reactant.Le
        case_params['Re'] = reactant.ReF
        case_params['Ze'] = reactant.flame_state.Ze()
        params[case] = case_params
    return params
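Both helpers above follow the same scipy.optimize.curve_fit recipe: build a closure that maps inputs to model predictions, then fit its free parameter against measured burning velocities. A self-contained sketch of that recipe on synthetic data (no STmodel dependency; the linear model and the value 2.5 are made up for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit


def model_prediction(u, C0):
    # one free parameter, analogous to fitting C0 above
    return C0 * u


u = np.array([1.0, 2.0, 3.0, 4.0])
s = 2.5 * u  # synthetic "measurements" generated with C0 = 2.5
popt, pcov = curve_fit(model_prediction, u, s)
print(round(popt[0], 3))  # -> 2.5
```

curve_fit returns the fitted parameters in `popt` and their covariance in `pcov`; in the real helpers the independent variable is a `(u, l)` tuple unpacked inside the closure, which curve_fit passes through unchanged.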
# tests/__init__.py (TrustCodes/gs1-compression, Apache-2.0 license)
from tests.decompress.test_utils import *
from tests.decompress.test_analyse import *
# Curves/RotationControl.py (peerke88/curveCreator, MIT license)
import maya.cmds as cmds
curve1 = []
curve1.append(cmds.curve( p =[(0.33587352900359235, 3.552713678800501e-15, 0.7361468691490706), (0.4304251528553422, 0.03022134593976844, 0.6908148502394227), (0.5107046042608197, 0.07247276978952044, 0.6274377144647979), (0.5684975545498538, 0.12400997464880348, 0.5501319071758723), (0.5684975545498538, 0.12400997464880348, 0.5501319071758723), (0.5684975545498538, 0.12400997464880348, 0.5501319071758723), (0.5819931861859438, 0.12400997464880348, 0.5346306603447717), (0.5819931861859438, 0.12400997464880348, 0.5346306603447717), (0.5819931861859438, 0.12400997464880348, 0.5346306603447717), (0.5819931861859438, 0.12400997464880348, 0.5346306603447717), (0.5402530010360955, 0.08197748359885182, 0.5946770761304137), (0.4852738226242117, 0.0455134352238602, 0.6467685738089757), (0.41921829926566545, 0.015501246831103543, 0.6896431286557706), (0.41921829926566545, 0.015501246831103543, 0.6896431286557706), (0.41921829926566545, 0.015501246831103543, 0.6896431286557706), (0.7077292503941841, 0.015501246831103543, 0.43808875481816606), (0.8080009216827335, 0.015501246831103543, -0.016811817720621902), (0.6943482219457628, 0.015501246831103543, -0.34108941639861884), (0.5527220271320097, 0.015501246831103543, -0.5148315398676033), (0.33489352471512746, 0.015501246831103543, -0.6771644412642893), (0.17585601860228905, 0.015501246831103543, -0.7201406688717121), (0.015501247071298963, 0.015501246831103543, -0.7361468691392242)],per = False, d=3, k=[0, 0, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 19, 19]))
curve1.append(cmds.curve( p =[(0.33587352900359235, -3.552713678800501e-15, 0.7361468691490706), (0.4304251528553422, -0.03022134593976844, 0.6908148502394227), (0.5107046042608197, -0.07247276978952044, 0.6274377144647979), (0.5684975545498538, -0.12400997464880348, 0.5501319071758723), (0.5684975545498538, -0.12400997464880348, 0.5501319071758723), (0.5684975545498538, -0.12400997464880348, 0.5501319071758723), (0.5819931861859438, -0.12400997464880348, 0.5346306603447717), (0.5819931861859438, -0.12400997464880348, 0.5346306603447717), (0.5819931861859438, -0.12400997464880348, 0.5346306603447717), (0.5819931861859438, -0.12400997464880348, 0.5346306603447717), (0.5402530010360955, -0.08197748359885182, 0.5946770761304137), (0.4852738226242117, -0.0455134352238602, 0.6467685738089757), (0.41921829926566545, -0.015501246831103543, 0.6896431286557706), (0.41921829926566545, -0.015501246831103543, 0.6896431286557706), (0.41921829926566545, -0.015501246831103543, 0.6896431286557706), (0.7077292503941841, -0.015501246831103543, 0.43808875481816606), (0.8080009216827335, -0.015501246831103543, -0.016811817720621902), (0.6943482219457628, -0.015501246831103543, -0.34108941639861884), (0.5527220271320097, -0.015501246831103543, -0.5148315398676033), (0.33489352471512746, -0.015501246831103543, -0.6771644412642893), (0.17585601860228905, -0.015501246831103543, -0.7201406688717121), (0.015501247071298963, -0.015501246831103543, -0.7361468691392242)],per = False, d=3, k=[0, 0, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 19, 19]))
curve1.append(cmds.curve( p =[(-0.33587352900359235, 3.552713678800501e-15, 0.7361468691490706), (-0.4304251528553422, 0.03022134593976844, 0.6908148502394227), (-0.5107046042608197, 0.07247276978952044, 0.6274377144647979), (-0.5684975545498538, 0.12400997464880348, 0.5501319071758723), (-0.5684975545498538, 0.12400997464880348, 0.5501319071758723), (-0.5684975545498538, 0.12400997464880348, 0.5501319071758723), (-0.5819931861859438, 0.12400997464880348, 0.5346306603447717), (-0.5819931861859438, 0.12400997464880348, 0.5346306603447717), (-0.5819931861859438, 0.12400997464880348, 0.5346306603447717), (-0.5819931861859438, 0.12400997464880348, 0.5346306603447717), (-0.5402530010360955, 0.08197748359885182, 0.5946770761304137), (-0.4852738226242117, 0.0455134352238602, 0.6467685738089757), (-0.41921829926566545, 0.015501246831103543, 0.6896431286557706), (-0.41921829926566545, 0.015501246831103543, 0.6896431286557706), (-0.41921829926566545, 0.015501246831103543, 0.6896431286557706), (-0.7077292503941841, 0.015501246831103543, 0.43808875481816606), (-0.8080009216827335, 0.015501246831103543, -0.016811817720621902), (-0.6943482219457628, 0.015501246831103543, -0.34108941639861884), (-0.5527220271320097, 0.015501246831103543, -0.5148315398676033), (-0.33489352471512746, 0.015501246831103543, -0.6771644412642893), (-0.17585601860228905, 0.015501246831103543, -0.7201406688717121), (-0.015501247071298963, 0.015501246831103543, -0.7361468691392242)],per = False, d=3, k=[0, 0, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 19, 19]))
curve1.append(cmds.curve( p =[(-0.33587352900359235, -3.552713678800501e-15, 0.7361468691490706), (-0.4304251528553422, -0.03022134593976844, 0.6908148502394227), (-0.5107046042608197, -0.07247276978952044, 0.6274377144647979), (-0.5684975545498538, -0.12400997464880348, 0.5501319071758723), (-0.5684975545498538, -0.12400997464880348, 0.5501319071758723), (-0.5684975545498538, -0.12400997464880348, 0.5501319071758723), (-0.5819931861859438, -0.12400997464880348, 0.5346306603447717), (-0.5819931861859438, -0.12400997464880348, 0.5346306603447717), (-0.5819931861859438, -0.12400997464880348, 0.5346306603447717), (-0.5819931861859438, -0.12400997464880348, 0.5346306603447717), (-0.5402530010360955, -0.08197748359885182, 0.5946770761304137), (-0.4852738226242117, -0.0455134352238602, 0.6467685738089757), (-0.41921829926566545, -0.015501246831103543, 0.6896431286557706), (-0.41921829926566545, -0.015501246831103543, 0.6896431286557706), (-0.41921829926566545, -0.015501246831103543, 0.6896431286557706), (-0.7077292503941841, -0.015501246831103543, 0.43808875481816606), (-0.8080009216827335, -0.015501246831103543, -0.016811817720621902), (-0.6943482219457628, -0.015501246831103543, -0.34108941639861884), (-0.5527220271320097, -0.015501246831103543, -0.5148315398676033), (-0.33489352471512746, -0.015501246831103543, -0.6771644412642893), (-0.17585601860228905, -0.015501246831103543, -0.7201406688717121), (-0.015501247071298963, -0.015501246831103543, -0.7361468691392242)],per = False, d=3, k=[0, 0, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 19, 19]))
curve1.append(cmds.curve( p =[(-7.105427357601002e-15, -0.3358735290035888, 0.7361468691490706), (-0.030221345939771993, -0.43042515285533867, 0.6908148502394227), (-0.072472769789524, -0.5107046042608161, 0.6274377144647979), (-0.12400997464880703, -0.5684975545498503, 0.5501319071758723), (-0.12400997464880703, -0.5684975545498503, 0.5501319071758723), (-0.12400997464880703, -0.5684975545498503, 0.5501319071758723), (-0.12400997464880703, -0.5819931861859402, 0.5346306603447717), (-0.12400997464880703, -0.5819931861859402, 0.5346306603447717), (-0.12400997464880703, -0.5819931861859402, 0.5346306603447717), (-0.12400997464880703, -0.5819931861859402, 0.5346306603447717), (-0.08197748359885537, -0.540253001036092, 0.5946770761304137), (-0.04551343522386375, -0.48527382262420815, 0.6467685738089757), (-0.015501246831107096, -0.4192182992656619, 0.6896431286557706), (-0.015501246831107096, -0.4192182992656619, 0.6896431286557706), (-0.015501246831107096, -0.4192182992656619, 0.6896431286557706), (-0.015501246831107096, -0.7077292503941806, 0.43808875481816606), (-0.015501246831107096, -0.80800092168273, -0.016811817720621902), (-0.015501246831107096, -0.6943482219457593, -0.34108941639861884), (-0.015501246831107096, -0.5527220271320061, -0.5148315398676033), (-0.015501246831107096, -0.3348935247151239, -0.6771644412642893), (-0.015501246831107096, -0.1758560186022855, -0.7201406688717121), (-0.015501246831107096, -0.01550124707129541, -0.7361468691392242)],per = False, d=3, k=[0, 0, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 19, 19]))
curve1.append(cmds.curve( p =[(3.552713678800501e-15, -0.3358735290035959, 0.7361468691490706), (0.03022134593976844, -0.43042515285534577, 0.6908148502394227), (0.07247276978952044, -0.5107046042608232, 0.6274377144647979), (0.12400997464880348, -0.5684975545498574, 0.5501319071758723), (0.12400997464880348, -0.5684975545498574, 0.5501319071758723), (0.12400997464880348, -0.5684975545498574, 0.5501319071758723), (0.12400997464880348, -0.5819931861859473, 0.5346306603447717), (0.12400997464880348, -0.5819931861859473, 0.5346306603447717), (0.12400997464880348, -0.5819931861859473, 0.5346306603447717), (0.12400997464880348, -0.5819931861859473, 0.5346306603447717), (0.08197748359885182, -0.5402530010360991, 0.5946770761304137), (0.0455134352238602, -0.48527382262421526, 0.6467685738089757), (0.015501246831103543, -0.419218299265669, 0.6896431286557706), (0.015501246831103543, -0.419218299265669, 0.6896431286557706), (0.015501246831103543, -0.419218299265669, 0.6896431286557706), (0.015501246831103543, -0.7077292503941877, 0.43808875481816606), (0.015501246831103543, -0.8080009216827371, -0.016811817720621902), (0.015501246831103543, -0.6943482219457664, -0.34108941639861884), (0.015501246831103543, -0.5527220271320132, -0.5148315398676033), (0.015501246831103543, -0.334893524715131, -0.6771644412642893), (0.015501246831103543, -0.1758560186022926, -0.7201406688717121), (0.015501246831103543, -0.015501247071302515, -0.7361468691392242)],per = False, d=3, k=[0, 0, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 19, 19]))
curve1.append(cmds.curve( p =[(0.0, 0.33587352900359235, 0.7361468691490706), (-0.030221345939764888, 0.4304251528553422, 0.6908148502394227), (-0.07247276978951689, 0.5107046042608197, 0.6274377144647979), (-0.12400997464879993, 0.5684975545498538, 0.5501319071758723), (-0.12400997464879993, 0.5684975545498538, 0.5501319071758723), (-0.12400997464879993, 0.5684975545498538, 0.5501319071758723), (-0.12400997464879993, 0.5819931861859438, 0.5346306603447717), (-0.12400997464879993, 0.5819931861859438, 0.5346306603447717), (-0.12400997464879993, 0.5819931861859438, 0.5346306603447717), (-0.12400997464879993, 0.5819931861859438, 0.5346306603447717), (-0.08197748359884827, 0.5402530010360955, 0.5946770761304137), (-0.045513435223856646, 0.4852738226242117, 0.6467685738089757), (-0.01550124683109999, 0.41921829926566545, 0.6896431286557706), (-0.01550124683109999, 0.41921829926566545, 0.6896431286557706), (-0.01550124683109999, 0.41921829926566545, 0.6896431286557706), (-0.01550124683109999, 0.7077292503941841, 0.43808875481816606), (-0.01550124683109999, 0.8080009216827335, -0.016811817720621902), (-0.01550124683109999, 0.6943482219457628, -0.34108941639861884), (-0.01550124683109999, 0.5527220271320097, -0.5148315398676033), (-0.01550124683109999, 0.33489352471512746, -0.6771644412642893), (-0.01550124683109999, 0.17585601860228905, -0.7201406688717121), (-0.01550124683109999, 0.015501247071298963, -0.7361468691392242)],per = False, d=3, k=[0, 0, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 19, 19]))
curve1.append(cmds.curve( p =[(3.552713678800501e-15, 0.33587352900359235, 0.7361468691490706), (0.03022134593976844, 0.4304251528553422, 0.6908148502394227), (0.07247276978952044, 0.5107046042608197, 0.6274377144647979), (0.12400997464880348, 0.5684975545498538, 0.5501319071758723), (0.12400997464880348, 0.5684975545498538, 0.5501319071758723), (0.12400997464880348, 0.5684975545498538, 0.5501319071758723), (0.12400997464880348, 0.5819931861859438, 0.5346306603447717), (0.12400997464880348, 0.5819931861859438, 0.5346306603447717), (0.12400997464880348, 0.5819931861859438, 0.5346306603447717), (0.12400997464880348, 0.5819931861859438, 0.5346306603447717), (0.08197748359885182, 0.5402530010360955, 0.5946770761304137), (0.0455134352238602, 0.4852738226242117, 0.6467685738089757), (0.015501246831103543, 0.41921829926566545, 0.6896431286557706), (0.015501246831103543, 0.41921829926566545, 0.6896431286557706), (0.015501246831103543, 0.41921829926566545, 0.6896431286557706), (0.015501246831103543, 0.7077292503941841, 0.43808875481816606), (0.015501246831103543, 0.8080009216827335, -0.016811817720621902), (0.015501246831103543, 0.6943482219457628, -0.34108941639861884), (0.015501246831103543, 0.5527220271320097, -0.5148315398676033), (0.015501246831103543, 0.33489352471512746, -0.6771644412642893), (0.015501246831103543, 0.17585601860228905, -0.7201406688717121), (0.015501246831103543, 0.015501247071298963, -0.7361468691392242)],per = False, d=3, k=[0, 0, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 19, 19]))
for x in range(len(curve1)-1):
    cmds.makeIdentity(curve1[x+1], apply=True, t=1, r=1, s=1, n=0)
    shapeNode = cmds.listRelatives(curve1[x+1], shapes=True)
    cmds.parent(shapeNode, curve1[0], add=True, s=True)
    cmds.delete(curve1[x+1])
fp = cmds.listRelatives(curve1[0], f=True)[0]
path = fp.split("|")[1]
cmds.select(path)
04b045ad0b456347873b59c5bda7aed4d6ba6442 | 2,453 | py | Python | Pruebas/muneco.py | FR98/Cuarto-Compu | 3824d0089562bccfbc839d9979809bc7a0fe4684 | [
"MIT"
] | 1 | 2022-03-20T12:57:04.000Z | 2022-03-20T12:57:04.000Z | Pruebas/muneco.py | FR98/cuarto-compu | 3824d0089562bccfbc839d9979809bc7a0fe4684 | [
"MIT"
] | null | null | null | Pruebas/muneco.py | FR98/cuarto-compu | 3824d0089562bccfbc839d9979809bc7a0fe4684 | [
"MIT"
] | null | null | null | muneco= [(" ___________" + "\n" + " | " + "\n" + " | " + "\n" + " | " + "\n" + " | " + "\n" + "_|_"), (" ___________" + "\n" + " | |" + "\n" + " | " + "\n" + " | " + "\n" + " | " + "\n" + "_|_"), (" ___________" + "\n" + " | |" + "\n" + " | \( )/" + "\n" + " | " + "\n" + " | " + "\n" + "_|_"), (" ___________" + "\n" + " | |" + "\n" + " | \( )/" + "\n" + " | |" + "\n" + " | " + "\n" + "_|_"), (" ___________" + "\n" + " | |" + "\n" + " | \( )/" + "\n" + " | |" + "\n" + " | / \ " + "\n" + "_|_")]
print (muneco[0])
print (muneco[1])
print (muneco[2])
print (muneco[3])
print (muneco[4])
munequito = [
    (" ___________" + "\n" + " | |" + "\n" + " | " + "\n" + " | " + "\n" + " | " + "\n" + "_|_"),
    (" ___________" + "\n" + " | |" + "\n" + " | ( )" + "\n" + " | " + "\n" + " | " + "\n" + "_|_"),
    (" ___________" + "\n" + " | |" + "\n" + " | \( )" + "\n" + " | " + "\n" + " | " + "\n" + "_|_"),
    (" ___________" + "\n" + " | |" + "\n" + " | \( )/" + "\n" + " | " + "\n" + " | " + "\n" + "_|_"),
    (" ___________" + "\n" + " | |" + "\n" + " | \( )/" + "\n" + " | |" + "\n" + " | " + "\n" + "_|_"),
    (" ___________" + "\n" + " | |" + "\n" + " | \( )/" + "\n" + " | |" + "\n" + " | / " + "\n" + "_|_"),
    (" ___________" + "\n" + " | |" + "\n" + " | \( )/" + "\n" + " | |" + "\n" + " | / \ " + "\n" + "_|_")
]
def muneco(letra, i):
    # letra: whether the guessed letter was correct; i: current error count
    # draws from the module-level munequito list defined above
    if letra == False:
        return munequito[i + 1]
b6c3e2b841555946cc6fa701540c4510ad154851 | 2,173 | py | Python | src/nodes.py | abhra2020-smart/FlowLang | afd42e383850b27a71147e1cb1f90db892ad29c5 | [
"MIT"
] | null | null | null | src/nodes.py | abhra2020-smart/FlowLang | afd42e383850b27a71147e1cb1f90db892ad29c5 | [
"MIT"
] | 1 | 2021-03-28T10:55:18.000Z | 2021-03-28T10:56:03.000Z | src/nodes.py | abhra2020-smart/FlowLang | afd42e383850b27a71147e1cb1f90db892ad29c5 | [
"MIT"
] | null | null | null | from dataclasses import dataclass


@dataclass
class NumberNode:
    value: any

    def __repr__(self):
        return f"{self.value}"

    def __add__(self, other):
        return self.value + other.value


@dataclass
class AddNode:
    node_a: any
    node_b: any

    def __repr__(self):
        return f"({self.node_a}+{self.node_b})"


@dataclass
class PowerNode:
    node_a: any
    node_b: any

    def __repr__(self):
        return f"({self.node_a}^{self.node_b})"


@dataclass
class ModNode:
    node_a: any
    node_b: any

    def __repr__(self):
        return f"({self.node_a}%{self.node_b})"


@dataclass
class SubtractNode:
    node_a: any
    node_b: any

    def __repr__(self):
        return f"({self.node_a}-{self.node_b})"


@dataclass
class RSNode:
    node_a: any
    node_b: any

    def __repr__(self):
        return f"({self.node_a}>>{self.node_b})"


@dataclass
class LSNode:
    node_a: any
    node_b: any

    def __repr__(self):
        return f"({self.node_a}<<{self.node_b})"


@dataclass
class GreaterNode:
    node_a: any
    node_b: any

    def __repr__(self):
        return f"({self.node_a}>{self.node_b})"


@dataclass
class SmallerNode:
    node_a: any
    node_b: any

    def __repr__(self):
        return f"({self.node_a}<{self.node_b})"


@dataclass
class EqualsNode:
    node_a: any
    node_b: any

    def __repr__(self):
        return f"({self.node_a}=={self.node_b})"


@dataclass
class MultiplyNode:
    node_a: any
    node_b: any

    def __repr__(self):
        return f"({self.node_a}*{self.node_b})"


@dataclass
class DivideNode:
    node_a: any
    node_b: any

    def __repr__(self):
        return f"({self.node_a}/{self.node_b})"


@dataclass
class IntDivNode:
    node_a: any
    node_b: any

    def __repr__(self):
        return f"({self.node_a}//{self.node_b})"


@dataclass
class PlusNode:
    node: any

    def __repr__(self):
        return f"(+{self.node})"


@dataclass
class MinusNode:
    node: any

    def __repr__(self):
        return f"(-{self.node})"
8e1e06f7e9085d13f908f6d65df665a4913120f4 | 10,912 | py | Python | usaspending_api/download/tests/integration/test_account_download.py | truthiswill/usaspending-api | bd7d915442e2ec94cc830c480ceeffd4479be6c0 | [
"CC0-1.0"
] | null | null | null | usaspending_api/download/tests/integration/test_account_download.py | truthiswill/usaspending-api | bd7d915442e2ec94cc830c480ceeffd4479be6c0 | [
"CC0-1.0"
] | 1 | 2021-11-15T17:54:12.000Z | 2021-11-15T17:54:12.000Z | usaspending_api/download/tests/integration/test_account_download.py | truthiswill/usaspending-api | bd7d915442e2ec94cc830c480ceeffd4479be6c0 | [
"CC0-1.0"
] | null | null | null | import json
import pytest
from django.db import connection
from model_mommy import mommy
from rest_framework import status
from unittest.mock import Mock
from usaspending_api.download.filestreaming import csv_generation
from usaspending_api.download.lookups import JOB_STATUS
@pytest.fixture
def base_job_data(db):
    # Populate job status lookup table
    for js in JOB_STATUS:
        mommy.make('download.JobStatus', job_status_id=js.id, name=js.name, description=js.desc)


@pytest.mark.django_db
def test_tas_a_defaults_success(client, base_job_data):
    """ Test the accounts endpoint using the default filters for an account_balances file"""
    csv_generation.retrieve_db_string = Mock(return_value=generate_test_db_connection_string())

    resp = client.post(
        '/api/v2/download/accounts',
        content_type='application/json',
        data=json.dumps({
            "account_level": "treasury_account",
            "filters": {
                "submission_type": "account_balances",
                "fy": "2017",
                "quarter": "3"
            },
            "file_format": "csv"
        }))

    assert resp.status_code == status.HTTP_200_OK
    assert '.zip' in resp.json()['url']


@pytest.mark.django_db
def test_tas_b_defaults_success(client, base_job_data):
    """ Test the accounts endpoint using the default filters for an object_class_program_activity file"""
    csv_generation.retrieve_db_string = Mock(return_value=generate_test_db_connection_string())

    resp = client.post(
        '/api/v2/download/accounts',
        content_type='application/json',
        data=json.dumps({
            "account_level": "treasury_account",
            "filters": {
                "submission_type": "object_class_program_activity",
                "fy": "2018",
                "quarter": "1"
            },
            "file_format": "csv"
        }))

    assert resp.status_code == status.HTTP_200_OK
    assert '.zip' in resp.json()['url']


@pytest.mark.django_db
def test_tas_c_defaults_success(client, base_job_data):
    """ Test the accounts endpoint using the default filters for an award_financial file"""
    csv_generation.retrieve_db_string = Mock(return_value=generate_test_db_connection_string())

    resp = client.post(
        '/api/v2/download/accounts',
        content_type='application/json',
        data=json.dumps({
            "account_level": "treasury_account",
            "filters": {
                "submission_type": "award_financial",
                "fy": "2016",
                "quarter": "4"
            },
            "file_format": "csv"
        }))

    assert resp.status_code == status.HTTP_200_OK
    assert '.zip' in resp.json()['url']


@pytest.mark.django_db
def test_federal_account_a_defaults_success(client, base_job_data):
    """ Test the accounts endpoint using the default filters for an account_balances file"""
    csv_generation.retrieve_db_string = Mock(return_value=generate_test_db_connection_string())

    resp = client.post(
        '/api/v2/download/accounts',
        content_type='application/json',
        data=json.dumps({
            "account_level": "federal_account",
            "filters": {
                "submission_type": "account_balances",
                "fy": "2017",
                "quarter": "3"
            },
            "file_format": "csv"
        }))

    assert resp.status_code == status.HTTP_200_OK
    assert '.zip' in resp.json()['url']


@pytest.mark.django_db
def test_federal_account_b_defaults_success(client, base_job_data):
    """ Test the accounts endpoint using the default filters for an object_class_program_activity file"""
    csv_generation.retrieve_db_string = Mock(return_value=generate_test_db_connection_string())

    resp = client.post(
        '/api/v2/download/accounts',
        content_type='application/json',
        data=json.dumps({
            "account_level": "federal_account",
            "filters": {
                "submission_type": "object_class_program_activity",
                "fy": "2018",
                "quarter": "1"
            },
            "file_format": "csv"
        }))

    assert resp.status_code == status.HTTP_200_OK
    assert '.zip' in resp.json()['url']


@pytest.mark.django_db
def test_federal_account_c_defaults_success(client, base_job_data):
    """ Test the accounts endpoint using the default filters for an award_financial file"""
    csv_generation.retrieve_db_string = Mock(return_value=generate_test_db_connection_string())

    resp = client.post(
        '/api/v2/download/accounts',
        content_type='application/json',
        data=json.dumps({
            "account_level": "federal_account",
            "filters": {
                "submission_type": "award_financial",
                "fy": "2016",
                "quarter": "4"
            },
            "file_format": "csv"
        }))

    assert resp.status_code == status.HTTP_200_OK
    assert '.zip' in resp.json()['url']


@pytest.mark.django_db
def test_agency_filter_success(client, base_job_data):
    """ Test the accounts endpoint with an agency filter """
    mommy.make('references.ToptierAgency', toptier_agency_id=-1, cgac_code='-01')
    csv_generation.retrieve_db_string = Mock(return_value=generate_test_db_connection_string())

    resp = client.post(
        '/api/v2/download/accounts',
        content_type='application/json',
        data=json.dumps({
            "account_level": "federal_account",
            "filters": {
                "submission_type": "account_balances",
                "fy": "2017",
                "quarter": "4",
                "agency": "-1"
},
"file_format": "csv"
}))
assert resp.status_code == status.HTTP_200_OK
@pytest.mark.django_db
def test_agency_filter_failure(client, base_job_data):
""" Test the accounts endpoint with a wrong account_level """
csv_generation.retrieve_db_string = Mock(return_value=generate_test_db_connection_string())
resp = client.post(
'/api/v2/download/accounts',
content_type='application/json',
data=json.dumps({
"account_level": "treasury_account",
"filters": {
"submission_type": "object_class_program_activity",
"fy": "2017",
"quarter": "4",
"agency": "-2"
},
"file_format": "csv"
}))
assert resp.status_code == status.HTTP_400_BAD_REQUEST
@pytest.mark.django_db
def test_federal_account_filter_success(client, base_job_data):
""" Test the accounts endpoint with a wrong account_level """
mommy.make('accounts.FederalAccount', id=-1)
csv_generation.retrieve_db_string = Mock(return_value=generate_test_db_connection_string())
resp = client.post(
'/api/v2/download/accounts',
content_type='application/json',
data=json.dumps({
"account_level": "treasury_account",
"filters": {
"submission_type": "award_financial",
"fy": "2017",
"quarter": "4",
"federal_account": "-1"
},
"file_format": "csv"
}))
assert resp.status_code == status.HTTP_200_OK
@pytest.mark.django_db
def test_federal_account_filter_failure(client, base_job_data):
""" Test the accounts endpoint with a wrong account_level """
csv_generation.retrieve_db_string = Mock(return_value=generate_test_db_connection_string())
resp = client.post(
'/api/v2/download/accounts',
content_type='application/json',
data=json.dumps({
"account_level": "federal_account",
"filters": {
"submission_type": "account_balances",
"fy": "2017",
"quarter": "4",
"federal_account": "-2"
},
"file_format": "csv"
}))
assert resp.status_code == status.HTTP_400_BAD_REQUEST
@pytest.mark.django_db
def test_account_level_failure(client, base_job_data):
""" Test the accounts endpoint with a wrong account_level """
csv_generation.retrieve_db_string = Mock(return_value=generate_test_db_connection_string())
resp = client.post(
'/api/v2/download/accounts',
content_type='application/json',
data=json.dumps({
"account_level": "not_tas_or_fa",
"filters": {
"submission_type": "account_balances",
"fy": "2017",
"quarter": "4"
},
"file_format": "csv"
}))
assert resp.status_code == status.HTTP_400_BAD_REQUEST
@pytest.mark.django_db
def test_submission_type_failure(client, base_job_data):
""" Test the accounts endpoint with a wrong submission_type """
csv_generation.retrieve_db_string = Mock(return_value=generate_test_db_connection_string())
resp = client.post(
'/api/v2/download/accounts',
content_type='application/json',
data=json.dumps({
"account_level": "treasury_account",
"filters": {
"submission_type": "not_a_b_or_c",
"fy": "2018",
"quarter": "2"
},
"file_format": "csv"
}))
assert resp.status_code == status.HTTP_400_BAD_REQUEST
@pytest.mark.django_db
def test_fy_failure(client, base_job_data):
""" Test the accounts endpoint with a wrong fiscal year (FY) """
csv_generation.retrieve_db_string = Mock(return_value=generate_test_db_connection_string())
resp = client.post(
'/api/v2/download/accounts',
content_type='application/json',
data=json.dumps({
"account_level": "federal_account",
"filters": {
"submission_type": "award_financial",
"fy": "string_not_int",
"quarter": "4"
},
"file_format": "csv"
}))
assert resp.status_code == status.HTTP_400_BAD_REQUEST
@pytest.mark.django_db
def test_quarter_failure(client, base_job_data):
""" Test the accounts endpoint with a wrong quarter """
csv_generation.retrieve_db_string = Mock(return_value=generate_test_db_connection_string())
resp = client.post(
'/api/v2/download/accounts',
content_type='application/json',
data=json.dumps({
"account_level": "treasury_account",
"filters": {
"submission_type": "award_financial",
"fy": "2017",
"quarter": "string_not_int"
},
"file_format": "csv"
}))
assert resp.status_code == status.HTTP_400_BAD_REQUEST
def generate_test_db_connection_string():
db = connection.cursor().db.settings_dict
return 'postgres://{}:{}@{}:5432/{}'.format(db['USER'], db['PASSWORD'], db['HOST'], db['NAME'])
# --- AtCoder/ABC/180-189/ABC187_A.py (sireline/PyCode, MIT license) ---
A, B = input().split()
print(max(sum([int(x) for x in list(A)]), sum([int(x) for x in list(B)])))
# --- fury/tests/test_material.py (iamansoni/fury, BSD-3-Clause license) ---
from fury import actor, material, window
from fury.optpkg import optional_package
import fury.testing as ft
from scipy.spatial import Delaunay
import math
import numpy as np
import numpy.testing as npt
import random
dipy, have_dipy, _ = optional_package('dipy')
def _generate_surface():
size = 11
vertices = list()
for i in range(-size, size):
for j in range(-size, size):
fact1 = - math.sin(i) * math.cos(j)
fact2 = - math.exp(abs(1 - math.sqrt(i ** 2 + j ** 2) / math.pi))
z_coord = -abs(fact1 * fact2)
vertices.append([i, j, z_coord])
c_arr = np.random.rand(len(vertices), 3)
random.shuffle(vertices)
vertices = np.array(vertices)
tri = Delaunay(vertices[:, [0, 1]])
faces = tri.simplices
c_loop = [None, c_arr]
f_loop = [None, faces]
s_loop = [None, "butterfly", "loop"]
for smooth_type in s_loop:
for face in f_loop:
for color in c_loop:
surface_actor = actor.surface(vertices, faces=face,
colors=color, smooth=smooth_type)
return surface_actor
def test_manifest_pbr(interactive=False):
scene = window.Scene() # Setup scene
# Setup surface
surface_actor = _generate_surface()
material.manifest_pbr(surface_actor)
scene.add(surface_actor)
arr = window.snapshot(scene)
report = window.analyze_snapshot(arr)
npt.assert_equal(report.objects, 1)
scene.clear() # Reset scene
# Contour from roi setup
data = np.zeros((50, 50, 50))
data[20:30, 25, 25] = 1.
data[25, 20:30, 25] = 1.
affine = np.eye(4)
surface = actor.contour_from_roi(data, affine, color=np.array([1, 0, 1]))
material.manifest_pbr(surface)
scene.add(surface)
scene.reset_camera()
scene.reset_clipping_range()
arr = window.snapshot(scene)
report = window.analyze_snapshot(arr)
npt.assert_equal(report.objects, 1)
scene.clear() # Reset scene
# Streamtube setup
data1 = np.array([[0, 0, 0], [1, 1, 1], [2, 2, 2.]])
data2 = data1 + np.array([0.5, 0., 0.])
data = [data1, data2]
colors = np.array([[1, 0, 0], [0, 0, 1.]])
tubes = actor.streamtube(data, colors, linewidth=.1)
material.manifest_pbr(tubes)
scene.add(tubes)
scene.reset_camera()
scene.reset_clipping_range()
arr = window.snapshot(scene)
report = window.analyze_snapshot(arr)
npt.assert_equal(report.objects, 2)
scene.clear() # Reset scene
# Axes setup
axes = actor.axes()
material.manifest_pbr(axes)
scene.add(axes)
scene.reset_camera()
scene.reset_clipping_range()
arr = window.snapshot(scene)
report = window.analyze_snapshot(arr)
npt.assert_equal(report.objects, 1)
scene.clear() # Reset scene
# ODF slicer setup
if have_dipy:
from dipy.data import get_sphere
from tempfile import mkstemp
sphere = get_sphere('symmetric362')
shape = (11, 11, 11, sphere.vertices.shape[0])
fid, fname = mkstemp(suffix='_odf_slicer.mmap')
odfs = np.memmap(fname, dtype=np.float64, mode='w+', shape=shape)
odfs[:] = 1
affine = np.eye(4)
mask = np.ones(odfs.shape[:3])
mask[:4, :4, :4] = 0
odfs[..., 0] = 1
odf_actor = actor.odf_slicer(odfs, affine, mask=mask, sphere=sphere,
scale=.25, colormap='blues')
material.manifest_pbr(odf_actor)
k = 5
I, J, _ = odfs.shape[:3]
odf_actor.display_extent(0, I, 0, J, k, k)
odf_actor.GetProperty().SetOpacity(1.0)
scene.add(odf_actor)
scene.reset_camera()
scene.reset_clipping_range()
arr = window.snapshot(scene)
report = window.analyze_snapshot(arr)
npt.assert_equal(report.objects, 11 * 11)
scene.clear() # Reset scene
# Tensor slicer setup
if have_dipy:
from dipy.data import get_sphere
sphere = get_sphere('symmetric724')
evals = np.array([1.4, .35, .35]) * 10 ** (-3)
evecs = np.eye(3)
mevals = np.zeros((3, 2, 4, 3))
mevecs = np.zeros((3, 2, 4, 3, 3))
mevals[..., :] = evals
mevecs[..., :, :] = evecs
affine = np.eye(4)
scene = window.Scene()
tensor_actor = actor.tensor_slicer(mevals, mevecs, affine=affine,
sphere=sphere, scale=.3)
material.manifest_pbr(tensor_actor)
_, J, K = mevals.shape[:3]
tensor_actor.display_extent(0, 1, 0, J, 0, K)
scene.add(tensor_actor)
scene.reset_camera()
scene.reset_clipping_range()
arr = window.snapshot(scene)
report = window.analyze_snapshot(arr)
npt.assert_equal(report.objects, 4)
# TODO: Rotate to test
# npt.assert_equal(report.objects, 4 * 2 * 2)
scene.clear() # Reset scene
# Point setup
points = np.array([[0, 0, 0], [0, 1, 0], [1, 0, 0]])
colors = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
opacity = 0.5
points_actor = actor.point(points, colors, opacity=opacity)
material.manifest_pbr(points_actor)
scene.add(points_actor)
arr = window.snapshot(scene)
report = window.analyze_snapshot(arr)
npt.assert_equal(report.objects, 3)
scene.clear() # Reset scene
# Sphere setup
xyzr = np.array([[0, 0, 0, 10], [100, 0, 0, 25], [200, 0, 0, 50]])
colors = np.array([[1, 0, 0, 0.3], [0, 1, 0, 0.4], [0, 0, 1., 0.99]])
opacity = 0.5
sphere_actor = actor.sphere(centers=xyzr[:, :3], colors=colors[:],
radii=xyzr[:, 3], opacity=opacity)
material.manifest_pbr(sphere_actor)
scene.add(sphere_actor)
scene.reset_camera()
scene.reset_clipping_range()
arr = window.snapshot(scene)
report = window.analyze_snapshot(arr)
npt.assert_equal(report.objects, 3)
scene.clear() # Reset scene
# Advanced geometry actors setup (Arrow, cone, cylinder)
xyz = np.array([[0, 0, 0], [50, 0, 0], [100, 0, 0]])
dirs = np.array([[0, 1, 0], [1, 0, 0], [0, 0.5, 0.5]])
colors = np.array([[1, 0, 0, 0.3], [0, 1, 0, 0.4], [1, 1, 0, 1]])
heights = np.array([5, 7, 10])
actor_list = [[actor.cone, {'directions': dirs, 'resolution': 8}],
[actor.arrow, {'directions': dirs, 'resolution': 9}],
[actor.cylinder, {'directions': dirs}]]
for act_func, extra_args in actor_list:
aga_actor = act_func(centers=xyz, colors=colors[:], heights=heights,
**extra_args)
material.manifest_pbr(aga_actor)
scene.add(aga_actor)
scene.reset_camera()
scene.reset_clipping_range()
arr = window.snapshot(scene)
report = window.analyze_snapshot(arr)
npt.assert_equal(report.objects, 3)
scene.clear()
# Basic geometry actors (Box, cube, frustum, octagonalprism, rectangle,
# square)
centers = np.array([[4, 0, 0], [0, 4, 0], [0, 0, 0]])
colors = np.array([[1, 0, 0, 0.4], [0, 1, 0, 0.8], [0, 0, 1, 0.5]])
directions = np.array([[1, 1, 0]])
scale_list = [1, 2, (1, 1, 1), [3, 2, 1], np.array([1, 2, 3]),
np.array([[1, 2, 3], [1, 3, 2], [3, 1, 2]])]
actor_list = [[actor.box, {}], [actor.cube, {}], [actor.frustum, {}],
[actor.octagonalprism, {}], [actor.rectangle, {}],
[actor.square, {}]]
for act_func, extra_args in actor_list:
for scale in scale_list:
scene = window.Scene()
bga_actor = act_func(centers=centers, directions=directions,
colors=colors, scales=scale, **extra_args)
material.manifest_pbr(bga_actor)
scene.add(bga_actor)
arr = window.snapshot(scene)
report = window.analyze_snapshot(arr)
msg = 'Failed with {}, scale={}'.format(act_func.__name__, scale)
npt.assert_equal(report.objects, 3, err_msg=msg)
scene.clear()
# Cone setup using vertices
centers = np.array([[0, 0, 0], [20, 0, 0], [40, 0, 0]])
directions = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]])
colors = np.array([[1, 0, 0, 0.3], [0, 1, 0, 0.4], [0, 0, 1., 0.99]])
vertices = np.array([[0.0, 0.0, 0.0], [0.0, 10.0, 0.0],
[10.0, 0.0, 0.0], [0.0, 0.0, 10.0]])
faces = np.array([[0, 1, 3], [0, 1, 2]])
cone_actor = actor.cone(centers=centers, directions=directions,
colors=colors[:], vertices=vertices, faces=faces)
material.manifest_pbr(cone_actor)
scene.add(cone_actor)
scene.reset_camera()
scene.reset_clipping_range()
arr = window.snapshot(scene)
report = window.analyze_snapshot(arr)
npt.assert_equal(report.objects, 3)
scene.clear() # Reset scene
# Superquadric setup
centers = np.array([[8, 0, 0], [0, 8, 0], [0, 0, 0]])
colors = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
directions = np.random.rand(3, 3)
scales = [1, 2, 3]
roundness = np.array([[1, 1], [1, 2], [2, 1]])
sq_actor = actor.superquadric(centers, roundness=roundness,
directions=directions,
colors=colors.astype(np.uint8),
scales=scales)
material.manifest_pbr(sq_actor)
scene.add(sq_actor)
scene.reset_camera()
scene.reset_clipping_range()
arr = window.snapshot(scene)
report = window.analyze_snapshot(arr)
npt.assert_equal(report.objects, 3)
scene.clear() # Reset scene
# Label setup
text_actor = actor.label("Hello")
material.manifest_pbr(text_actor)
scene.add(text_actor)
scene.reset_camera()
scene.reset_clipping_range()
arr = window.snapshot(scene)
report = window.analyze_snapshot(arr)
npt.assert_equal(report.objects, 5)
# NOTE: From this point on, these actors don't have full support for PBR
# interpolation. This is, the test passes but there is no evidence of the
# desired effect.
"""
# Line setup
data1 = np.array([[0, 0, 0], [1, 1, 1], [2, 2, 2.]])
data2 = data1 + np.array([0.5, 0., 0.])
data = [data1, data2]
colors = np.array([[1, 0, 0], [0, 0, 1.]])
lines = actor.line(data, colors, linewidth=5)
material.manifest_pbr(lines)
scene.add(lines)
"""
"""
# Peak slicer setup
_peak_dirs = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype='f4')
# peak_dirs.shape = (1, 1, 1) + peak_dirs.shape
peak_dirs = np.zeros((11, 11, 11, 3, 3))
peak_dirs[:, :, :] = _peak_dirs
peak_actor = actor.peak_slicer(peak_dirs)
material.manifest_pbr(peak_actor)
scene.add(peak_actor)
"""
"""
# Dots setup
points = np.array([[0, 0, 0], [0, 1, 0], [1, 0, 0]])
dots_actor = actor.dots(points, color=(0, 255, 0))
material.manifest_pbr(dots_actor)
scene.add(dots_actor)
"""
"""
# Texture setup
arr = (255 * np.ones((512, 212, 4))).astype('uint8')
arr[20:40, 20:40, :] = np.array([255, 0, 0, 255], dtype='uint8')
tp2 = actor.texture(arr)
material.manifest_pbr(tp2)
scene.add(tp2)
"""
"""
# Texture on sphere setup
arr = 255 * np.ones((810, 1620, 3), dtype='uint8')
rows, cols, _ = arr.shape
rs = rows // 2
cs = cols // 2
w = 150 // 2
arr[rs - w: rs + w, cs - 10 * w: cs + 10 * w] = np.array([255, 127, 0])
tsa = actor.texture_on_sphere(arr)
material.manifest_pbr(tsa)
scene.add(tsa)
"""
"""
# SDF setup
centers = np.array([[2, 0, 0], [0, 2, 0], [0, 0, 0]]) * 11
colors = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
directions = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]])
scales = [1, 2, 3]
primitive = ['sphere', 'ellipsoid', 'torus']
sdf_actor = actor.sdf(centers, directions=directions, colors=colors,
primitives=primitive, scales=scales)
material.manifest_pbr(sdf_actor)
scene.add(sdf_actor)
"""
# NOTE: For these last set of actors, there is not support for PBR
# interpolation at all.
"""
# Setup slicer
data = (255 * np.random.rand(50, 50, 50))
affine = np.eye(4)
slicer = actor.slicer(data, affine, value_range=[data.min(), data.max()])
slicer.display(None, None, 25)
material.manifest_pbr(slicer)
scene.add(slicer)
"""
"""
# Contour from label setup
data = np.zeros((50, 50, 50))
data[5:15, 1:10, 25] = 1.
data[25:35, 1:10, 25] = 2.
data[40:49, 1:10, 25] = 3.
color = np.array([[255, 0, 0, 0.6],
[0, 255, 0, 0.5],
[0, 0, 255, 1.0]])
surface = actor.contour_from_label(data, color=color)
material.manifest_pbr(surface)
scene.add(surface)
"""
"""
# Scalar bar setup
lut = actor.colormap_lookup_table(
scale_range=(0., 100.), hue_range=(0., 0.1), saturation_range=(1, 1),
value_range=(1., 1))
sb_actor = actor.scalar_bar(lut, ' ')
material.manifest_pbr(sb_actor)
scene.add(sb_actor)
"""
"""
# Billboard setup
centers = np.array([[0, 0, 0], [5, -5, 5], [-7, 7, -7], [10, 10, 10],
[10.5, 11.5, 11.5], [12, -12, -12], [-17, 17, 17],
[-22, -22, 22]])
colors = np.array([[1, 1, 0], [0, 0, 0], [1, 0, 1], [0, 0, 1], [1, 1, 1],
[1, 0, 0], [0, 1, 0], [0, 1, 1]])
scales = [6, .4, 1.2, 1, .2, .7, 3, 2]
"""
fake_sphere = \
"""
float len = length(point);
float radius = 1.;
if(len > radius)
discard;
vec3 normalizedPoint = normalize(vec3(point.xy, sqrt(1. - len)));
vec3 direction = normalize(vec3(1., 1., 1.));
float df_1 = max(0, dot(direction, normalizedPoint));
float sf_1 = pow(df_1, 24);
fragOutput0 = vec4(max(df_1 * color, sf_1 * vec3(1)), 1);
"""
"""
billboard_actor = actor.billboard(centers, colors=colors, scales=scales,
fs_impl=fake_sphere)
material.manifest_pbr(billboard_actor)
scene.add(billboard_actor)
"""
"""
# Text3D setup
msg = 'I \nlove\n FURY'
txt_actor = actor.text_3d(msg)
material.manifest_pbr(txt_actor)
scene.add(txt_actor)
"""
"""
# Figure setup
arr = (255 * np.ones((512, 212, 4))).astype('uint8')
arr[20:40, 20:40, 3] = 0
tp = actor.figure(arr)
material.manifest_pbr(tp)
scene.add(tp)
"""
if interactive:
window.show(scene)
def test_manifest_standard(interactive=False):
scene = window.Scene() # Setup scene
# Setup surface
surface_actor = _generate_surface()
material.manifest_standard(surface_actor, ambient_level=.3,
diffuse_level=.25)
scene.add(surface_actor)
arr = window.snapshot(scene)
report = window.analyze_snapshot(arr)
npt.assert_equal(report.objects, 1)
scene.clear() # Reset scene
# Contour from roi setup
data = np.zeros((50, 50, 50))
data[20:30, 25, 25] = 1.
data[25, 20:30, 25] = 1.
affine = np.eye(4)
surface = actor.contour_from_roi(data, affine, color=np.array([1, 0, 1]))
material.manifest_standard(surface, ambient_level=.3,
diffuse_level=.25)
scene.add(surface)
scene.reset_camera()
scene.reset_clipping_range()
arr = window.snapshot(scene)
report = window.analyze_snapshot(arr)
npt.assert_equal(report.objects, 1)
scene.clear() # Reset scene
# Contour from label setup
data = np.zeros((50, 50, 50))
data[5:15, 1:10, 25] = 1.
data[25:35, 1:10, 25] = 2.
data[40:49, 1:10, 25] = 3.
color = np.array([[255, 0, 0, 0.6],
[0, 255, 0, 0.5],
[0, 0, 255, 1.0]])
surface = actor.contour_from_label(data, color=color)
material.manifest_standard(surface, ambient_level=.3,
diffuse_level=.25)
scene.add(surface)
scene.reset_camera()
scene.reset_clipping_range()
arr = window.snapshot(scene)
report = window.analyze_snapshot(arr)
npt.assert_equal(report.objects, 3)
scene.clear() # Reset scene
# Streamtube setup
data1 = np.array([[0, 0, 0], [1, 1, 1], [2, 2, 2.]])
data2 = data1 + np.array([0.5, 0., 0.])
data = [data1, data2]
colors = np.array([[1, 0, 0], [0, 0, 1.]])
tubes = actor.streamtube(data, colors, linewidth=.1)
material.manifest_standard(tubes, ambient_level=.3,
diffuse_level=.25)
scene.add(tubes)
scene.reset_camera()
scene.reset_clipping_range()
arr = window.snapshot(scene)
report = window.analyze_snapshot(arr)
npt.assert_equal(report.objects, 2)
scene.clear() # Reset scene
# ODF slicer setup
if have_dipy:
from dipy.data import get_sphere
from tempfile import mkstemp
sphere = get_sphere('symmetric362')
shape = (11, 11, 11, sphere.vertices.shape[0])
fid, fname = mkstemp(suffix='_odf_slicer.mmap')
odfs = np.memmap(fname, dtype=np.float64, mode='w+', shape=shape)
odfs[:] = 1
affine = np.eye(4)
mask = np.ones(odfs.shape[:3])
mask[:4, :4, :4] = 0
odfs[..., 0] = 1
odf_actor = actor.odf_slicer(odfs, affine, mask=mask, sphere=sphere,
scale=.25, colormap='blues')
material.manifest_standard(odf_actor, ambient_level=.3,
diffuse_level=.25)
k = 5
I, J, _ = odfs.shape[:3]
odf_actor.display_extent(0, I, 0, J, k, k)
odf_actor.GetProperty().SetOpacity(1.0)
scene.add(odf_actor)
scene.reset_camera()
scene.reset_clipping_range()
arr = window.snapshot(scene)
report = window.analyze_snapshot(arr)
npt.assert_equal(report.objects, 11 * 11)
scene.clear() # Reset scene
# Tensor slicer setup
if have_dipy:
from dipy.data import get_sphere
sphere = get_sphere('symmetric724')
evals = np.array([1.4, .35, .35]) * 10 ** (-3)
evecs = np.eye(3)
mevals = np.zeros((3, 2, 4, 3))
mevecs = np.zeros((3, 2, 4, 3, 3))
mevals[..., :] = evals
mevecs[..., :, :] = evecs
affine = np.eye(4)
scene = window.Scene()
tensor_actor = actor.tensor_slicer(mevals, mevecs, affine=affine,
sphere=sphere, scale=.3)
material.manifest_standard(tensor_actor, ambient_level=.3,
diffuse_level=.25)
_, J, K = mevals.shape[:3]
tensor_actor.display_extent(0, 1, 0, J, 0, K)
scene.add(tensor_actor)
scene.reset_camera()
scene.reset_clipping_range()
arr = window.snapshot(scene)
report = window.analyze_snapshot(arr)
npt.assert_equal(report.objects, 4)
scene.clear() # Reset scene
# Point setup
points = np.array([[0, 0, 0], [0, 1, 0], [1, 0, 0]])
colors = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
opacity = 0.5
points_actor = actor.point(points, colors, opacity=opacity)
material.manifest_standard(points_actor, ambient_level=.3,
diffuse_level=.25)
scene.add(points_actor)
arr = window.snapshot(scene)
report = window.analyze_snapshot(arr)
npt.assert_equal(report.objects, 3)
scene.clear() # Reset scene
# Sphere setup
xyzr = np.array([[0, 0, 0, 10], [100, 0, 0, 25], [200, 0, 0, 50]])
colors = np.array([[1, 0, 0, 0.3], [0, 1, 0, 0.4], [0, 0, 1., 0.99]])
opacity = 0.5
sphere_actor = actor.sphere(centers=xyzr[:, :3], colors=colors[:],
radii=xyzr[:, 3], opacity=opacity)
material.manifest_standard(sphere_actor, ambient_level=.3,
diffuse_level=.25)
scene.add(sphere_actor)
scene.reset_camera()
scene.reset_clipping_range()
arr = window.snapshot(scene)
report = window.analyze_snapshot(arr)
npt.assert_equal(report.objects, 3)
scene.clear() # Reset scene
# Advanced geometry actors setup (Arrow, cone, cylinder)
xyz = np.array([[0, 0, 0], [50, 0, 0], [100, 0, 0]])
dirs = np.array([[0, 1, 0], [1, 0, 0], [0, 0.5, 0.5]])
colors = np.array([[1, 0, 0, 0.3], [0, 1, 0, 0.4], [1, 1, 0, 1]])
heights = np.array([5, 7, 10])
actor_list = [[actor.cone, {'directions': dirs, 'resolution': 8}],
[actor.arrow, {'directions': dirs, 'resolution': 9}],
[actor.cylinder, {'directions': dirs}]]
for act_func, extra_args in actor_list:
aga_actor = act_func(centers=xyz, colors=colors[:], heights=heights,
**extra_args)
material.manifest_standard(aga_actor, ambient_level=.3,
diffuse_level=.25)
scene.add(aga_actor)
scene.reset_camera()
scene.reset_clipping_range()
arr = window.snapshot(scene)
report = window.analyze_snapshot(arr)
npt.assert_equal(report.objects, 3)
scene.clear()
# Basic geometry actors (Box, cube, frustum, octagonalprism, rectangle,
# square)
centers = np.array([[4, 0, 0], [0, 4, 0], [0, 0, 0]])
colors = np.array([[1, 0, 0, 0.4], [0, 1, 0, 0.8], [0, 0, 1, 0.5]])
directions = np.array([[1, 1, 0]])
scale_list = [1, 2, (1, 1, 1), [3, 2, 1], np.array([1, 2, 3]),
np.array([[1, 2, 3], [1, 3, 2], [3, 1, 2]])]
actor_list = [[actor.box, {}], [actor.cube, {}], [actor.frustum, {}],
[actor.octagonalprism, {}], [actor.rectangle, {}],
[actor.square, {}]]
for act_func, extra_args in actor_list:
for scale in scale_list:
scene = window.Scene()
bga_actor = act_func(centers=centers, directions=directions,
colors=colors, scales=scale, **extra_args)
material.manifest_standard(bga_actor, ambient_level=.3,
diffuse_level=.25)
scene.add(bga_actor)
arr = window.snapshot(scene)
report = window.analyze_snapshot(arr)
msg = 'Failed with {}, scale={}'.format(act_func.__name__, scale)
npt.assert_equal(report.objects, 3, err_msg=msg)
scene.clear()
# Cone setup using vertices
centers = np.array([[0, 0, 0], [20, 0, 0], [40, 0, 0]])
directions = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]])
colors = np.array([[1, 0, 0, 0.3], [0, 1, 0, 0.4], [0, 0, 1., 0.99]])
vertices = np.array([[0.0, 0.0, 0.0], [0.0, 10.0, 0.0],
[10.0, 0.0, 0.0], [0.0, 0.0, 10.0]])
faces = np.array([[0, 1, 3], [0, 1, 2]])
cone_actor = actor.cone(centers=centers, directions=directions,
colors=colors[:], vertices=vertices, faces=faces)
material.manifest_standard(cone_actor, ambient_level=.3,
diffuse_level=.25)
scene.add(cone_actor)
scene.reset_camera()
scene.reset_clipping_range()
arr = window.snapshot(scene)
report = window.analyze_snapshot(arr)
npt.assert_equal(report.objects, 3)
scene.clear() # Reset scene
# Superquadric setup
centers = np.array([[8, 0, 0], [0, 8, 0], [0, 0, 0]])
colors = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
directions = np.random.rand(3, 3)
scales = [1, 2, 3]
roundness = np.array([[1, 1], [1, 2], [2, 1]])
sq_actor = actor.superquadric(centers, roundness=roundness,
directions=directions,
colors=colors.astype(np.uint8),
scales=scales)
material.manifest_standard(sq_actor, ambient_level=.3,
diffuse_level=.25)
scene.add(sq_actor)
scene.reset_camera()
scene.reset_clipping_range()
arr = window.snapshot(scene)
report = window.analyze_snapshot(arr)
ft.assert_greater_equal(report.objects, 3)
scene.clear() # Reset scene
# Label setup
text_actor = actor.label("Hello")
material.manifest_standard(text_actor, ambient_level=.3,
diffuse_level=.25)
scene.add(text_actor)
scene.reset_camera()
scene.reset_clipping_range()
arr = window.snapshot(scene)
report = window.analyze_snapshot(arr)
npt.assert_equal(report.objects, 5)
scene.clear() # Reset scene
# Texture setup
arr = (255 * np.ones((512, 212, 4))).astype('uint8')
arr[20:40, 20:40, :] = np.array([255, 0, 0, 255], dtype='uint8')
tp2 = actor.texture(arr)
material.manifest_standard(tp2, ambient_level=.3,
diffuse_level=.25)
scene.add(tp2)
scene.reset_camera()
scene.reset_clipping_range()
arr = window.snapshot(scene)
report = window.analyze_snapshot(arr)
npt.assert_equal(report.objects, 1)
scene.clear() # Reset scene
# Texture on sphere setup
arr = 255 * np.ones((810, 1620, 3), dtype='uint8')
rows, cols, _ = arr.shape
rs = rows // 2
cs = cols // 2
w = 150 // 2
arr[rs - w: rs + w, cs - 10 * w: cs + 10 * w] = np.array([255, 127, 0])
tsa = actor.texture_on_sphere(arr)
material.manifest_standard(tsa, ambient_level=.3,
diffuse_level=.25)
scene.add(tsa)
scene.reset_camera()
scene.reset_clipping_range()
arr = window.snapshot(scene)
report = window.analyze_snapshot(arr)
npt.assert_equal(report.objects, 1)
# NOTE: From this point on, these actors don't have full support for PBR
# interpolation. This is, the test passes but there is no evidence of the
# desired effect.
"""
# Setup slicer
data = (255 * np.random.rand(50, 50, 50))
affine = np.eye(4)
slicer = actor.slicer(data, affine, value_range=[data.min(), data.max()])
slicer.display(None, None, 25)
material.manifest_standard(slicer, ambient_level=.3,
diffuse_level=.25)
scene.add(slicer)
"""
"""
# Line setup
data1 = np.array([[0, 0, 0], [1, 1, 1], [2, 2, 2.]])
data2 = data1 + np.array([0.5, 0., 0.])
data = [data1, data2]
colors = np.array([[1, 0, 0], [0, 0, 1.]])
lines = actor.line(data, colors, linewidth=5)
material.manifest_standard(lines, ambient_level=.3,
diffuse_level=.25)
scene.add(lines)
"""
"""
# Scalar bar setup
lut = actor.colormap_lookup_table(
scale_range=(0., 100.), hue_range=(0., 0.1), saturation_range=(1, 1),
value_range=(1., 1))
sb_actor = actor.scalar_bar(lut, ' ')
material.manifest_standard(sb_actor, ambient_level=.3,
diffuse_level=.25)
scene.add(sb_actor)
"""
"""
# Axes setup
axes = actor.axes()
material.manifest_standard(axes, ambient_level=.3,
diffuse_level=.25)
scene.add(axes)
"""
"""
# Peak slicer setup
_peak_dirs = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype='f4')
# peak_dirs.shape = (1, 1, 1) + peak_dirs.shape
peak_dirs = np.zeros((11, 11, 11, 3, 3))
peak_dirs[:, :, :] = _peak_dirs
peak_actor = actor.peak_slicer(peak_dirs)
material.manifest_standard(peak_actor, ambient_level=.3,
diffuse_level=.25)
scene.add(peak_actor)
"""
"""
# Dots setup
points = np.array([[0, 0, 0], [0, 1, 0], [1, 0, 0]])
dots_actor = actor.dots(points, color=(0, 255, 0))
material.manifest_standard(dots_actor, ambient_level=.3,
diffuse_level=.25)
scene.add(dots_actor)
"""
"""
# Text3D setup
msg = 'I \nlove\n FURY'
txt_actor = actor.text_3d(msg)
material.manifest_standard(txt_actor, ambient_level=.3,
diffuse_level=.25)
scene.add(txt_actor)
"""
"""
# Figure setup
arr = (255 * np.ones((512, 212, 4))).astype('uint8')
arr[20:40, 20:40, 3] = 0
tp = actor.figure(arr)
material.manifest_standard(tp, ambient_level=.3,
diffuse_level=.25)
scene.add(tp)
"""
"""
# SDF setup
centers = np.array([[2, 0, 0], [0, 2, 0], [0, 0, 0]]) * 11
colors = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
directions = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]])
scales = [1, 2, 3]
primitive = ['sphere', 'ellipsoid', 'torus']
sdf_actor = actor.sdf(centers, directions=directions, colors=colors,
primitives=primitive, scales=scales)
material.manifest_standard(sdf_actor, ambient_level=.3,
diffuse_level=.25)
scene.add(sdf_actor)
"""
# NOTE: For these last set of actors, there is not support for PBR
# interpolation at all.
"""
# Billboard setup
centers = np.array([[0, 0, 0], [5, -5, 5], [-7, 7, -7], [10, 10, 10],
[10.5, 11.5, 11.5], [12, -12, -12], [-17, 17, 17],
[-22, -22, 22]])
colors = np.array([[1, 1, 0], [0, 0, 0], [1, 0, 1], [0, 0, 1], [1, 1, 1],
[1, 0, 0], [0, 1, 0], [0, 1, 1]])
scales = [6, .4, 1.2, 1, .2, .7, 3, 2]
"""
fake_sphere = \
"""
float len = length(point);
float radius = 1.;
if (len > radius)
discard;
vec3 normalizedPoint = normalize(vec3(point.xy, sqrt(1. - len)));
vec3 direction = normalize(vec3(1., 1., 1.));
float df_1 = max(0., dot(direction, normalizedPoint));
float sf_1 = pow(df_1, 24.);
fragOutput0 = vec4(max(df_1 * color, sf_1 * vec3(1)), 1);
"""
"""
billboard_actor = actor.billboard(centers, colors=colors, scales=scales,
fs_impl=fake_sphere)
material.manifest_pbr(billboard_actor)
scene.add(billboard_actor)
"""
if interactive:
window.show(scene)
| 35.505275 | 79 | 0.55983 | 4,111 | 30,286 | 4.003406 | 0.072002 | 0.028922 | 0.021509 | 0.010694 | 0.933467 | 0.927695 | 0.925993 | 0.920221 | 0.920221 | 0.920221 | 0 | 0.070886 | 0.282672 | 30,286 | 852 | 80 | 35.546948 | 0.686674 | 0.050518 | 0 | 0.855042 | 0 | 0 | 0.013963 | 0 | 0 | 0 | 0 | 0.001174 | 0.058824 | 1 | 0.006303 | false | 0 | 0.029412 | 0 | 0.037815 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
eda1db09a4a1d2e54ffce94dc9048213c833bf35 | 180 | py | Python | client/src/track.py | tommccallum/smartbot | 7241aa80a8dfa1f67e411c9000d65addd81ebd3f | [
"MIT"
] | 1 | 2021-01-27T11:18:54.000Z | 2021-01-27T11:18:54.000Z | client/src/track.py | tommccallum/smartbot | 7241aa80a8dfa1f67e411c9000d65addd81ebd3f | [
"MIT"
] | null | null | null | client/src/track.py | tommccallum/smartbot | 7241aa80a8dfa1f67e411c9000d65addd81ebd3f | [
"MIT"
] | null | null | null |
class Track:
def __init__(self, track, seek):
self.track = track
self.seek = seek
def dict(self):
return { "url": self.track, "seek": self.seek } | 20 | 55 | 0.561111 | 23 | 180 | 4.217391 | 0.391304 | 0.278351 | 0.268041 | 0.350515 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.305556 | 180 | 9 | 55 | 20 | 0.776 | 0 | 0 | 0 | 0 | 0 | 0.039106 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.166667 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 8 |
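A short usage sketch of the `Track` value object above; the class is repeated so the snippet is self-contained, and the URL and seek offset are made-up example values:

```python
class Track:
    """Mirror of the Track class above: a URL plus a playback offset."""

    def __init__(self, track, seek):
        self.track = track
        self.seek = seek

    def dict(self):
        return {"url": self.track, "seek": self.seek}


t = Track("http://example.com/episode.mp3", 42)
print(t.dict())  # {'url': 'http://example.com/episode.mp3', 'seek': 42}
```

The `dict()` form is the shape a client would hand to something like `json.dumps` when reporting playback position.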
edcfbe4d0db72f09daf63ac781c8a54a19a34cce | 4,689 | py | Python | tests/utils/test_authentication.py | Songmu/cli | f4814e30cedad3feb5e46fd4617238a44717d4bb | [
"Apache-2.0"
] | null | null | null | tests/utils/test_authentication.py | Songmu/cli | f4814e30cedad3feb5e46fd4617238a44717d4bb | [
"Apache-2.0"
] | null | null | null | tests/utils/test_authentication.py | Songmu/cli | f4814e30cedad3feb5e46fd4617238a44717d4bb | [
"Apache-2.0"
] | null | null | null | import os
from unittest import TestCase, mock
from launchable.utils.authentication import get_org_workspace, authentication_headers
class AuthenticationTest(TestCase):
@mock.patch.dict(os.environ, {}, clear=True)
def test_get_org_workspace_no_environment_variables(self):
org, workspace = get_org_workspace()
self.assertIsNone(org)
self.assertIsNone(workspace)
@mock.patch.dict(os.environ, {"LAUNCHABLE_TOKEN": "invalid"})
def test_get_org_workspace_invalid_LAUNCHABLE_TOKEN(self):
org, workspace = get_org_workspace()
self.assertIsNone(org)
self.assertIsNone(workspace)
@mock.patch.dict(os.environ, {"LAUNCHABLE_TOKEN": "v1:launchableinc/test:token"})
def test_get_org_workspace_valid_LAUNCHABLE_TOKEN(self):
org, workspace = get_org_workspace()
self.assertEqual("launchableinc", org)
self.assertEqual("test", workspace)
@mock.patch.dict(os.environ, {"LAUNCHABLE_ORGANIZATION": "launchableinc", "LAUNCHABLE_WORKSPACE": "test"}, clear=True)
def test_get_org_workspace_LAUNCHABLE_ORGANIZATION_and_LAUNCHABLE_WORKSPACE(self):
org, workspace = get_org_workspace()
self.assertEqual("launchableinc", org)
self.assertEqual("test", workspace)
@mock.patch.dict(os.environ, {"LAUNCHABLE_TOKEN": "v1:token_org/token_wp:token", "LAUNCHABLE_ORGANIZATION": "org",
"LAUNCHABLE_WORKSPACE": "wp"})
def test_get_org_workspace_LAUNCHABLE_TOKEN_and_LAUNCHABLE_ORGANIZATION_and_LAUNCHABLE_WORKSPACE(self):
org, workspace = get_org_workspace()
self.assertEqual("token_org", org)
self.assertEqual("token_wp", workspace)
@mock.patch.dict(os.environ, {}, clear=True)
def test_authentication_headers_empty(self):
header = authentication_headers()
self.assertEqual(len(header), 0)
@mock.patch.dict(os.environ, {"LAUNCHABLE_TOKEN": "v1:launchableinc/test:token"})
def test_authentication_headers_LAUNCHABLE_TOKEN(self):
header = authentication_headers()
self.assertEqual(len(header), 1)
self.assertEqual(header["Authorization"], "Bearer v1:launchableinc/test:token")
@mock.patch.dict(os.environ,
{"GITHUB_ACTIONS": "true", "GITHUB_RUN_ID": "1", "GITHUB_REPOSITORY": "launchableinc/test",
"GITHUB_WORKFLOW": "build", "GITHUB_RUN_NUMBER": "1", "GITHUB_EVENT_NAME": "push",
"GITHUB_PR_HEAD_SHA": "test0", "GITHUB_SHA": "test1"}, clear=True)
def test_authentication_headers_GitHub_Actions_with_PR_head(self):
header = authentication_headers()
self.assertEqual(len(header), 8)
self.assertEqual(header["GitHub-Actions"], "true")
self.assertEqual(header["GitHub-Run-Id"], "1")
self.assertEqual(header["GitHub-Repository"], "launchableinc/test")
self.assertEqual(header["GitHub-Workflow"], "build")
self.assertEqual(header["GitHub-Run-Number"], "1")
self.assertEqual(header["GitHub-Event-Name"], "push")
self.assertEqual(header["GitHub-Pr-Head-Sha"], "test0")
self.assertEqual(header["GitHub-Sha"], "test1")
@mock.patch.dict(os.environ,
{"GITHUB_ACTIONS": "true", "GITHUB_RUN_ID": "1", "GITHUB_REPOSITORY": "launchableinc/test",
"GITHUB_WORKFLOW": "build", "GITHUB_RUN_NUMBER": "1", "GITHUB_EVENT_NAME": "push",
"GITHUB_SHA": "test"}, clear=True)
def test_authentication_headers_GitHub_Actions_without_PR_head(self):
header = authentication_headers()
self.assertEqual(len(header), 7)
self.assertEqual(header["GitHub-Actions"], "true")
self.assertEqual(header["GitHub-Run-Id"], "1")
self.assertEqual(header["GitHub-Repository"], "launchableinc/test")
self.assertEqual(header["GitHub-Workflow"], "build")
self.assertEqual(header["GitHub-Run-Number"], "1")
self.assertEqual(header["GitHub-Event-Name"], "push")
self.assertEqual(header["GitHub-Sha"], "test")
@mock.patch.dict(os.environ,
{"LAUNCHABLE_TOKEN": "v1:launchableinc/test:token", "GITHUB_ACTIONS": "true", "GITHUB_RUN_ID": "1",
"GITHUB_REPOSITORY": "launchableinc/test", "GITHUB_WORKFLOW": "build", "GITHUB_RUN_NUMBER": "1",
"GITHUB_EVENT_NAME": "push", "GITHUB_SHA": "test"}, clear=True)
def test_authentication_headers_LAUNCHABLE_TOKEN_and_GitHub_Actions(self):
header = authentication_headers()
self.assertEqual(len(header), 1)
self.assertEqual(header["Authorization"], "Bearer v1:launchableinc/test:token")
| 52.685393 | 122 | 0.677117 | 519 | 4,689 | 5.868979 | 0.121387 | 0.137886 | 0.117203 | 0.132961 | 0.87065 | 0.837163 | 0.800066 | 0.784964 | 0.746225 | 0.709127 | 0 | 0.006543 | 0.185114 | 4,689 | 88 | 123 | 53.284091 | 0.790631 | 0 | 0 | 0.592105 | 0 | 0 | 0.255065 | 0.044359 | 0 | 0 | 0 | 0 | 0.421053 | 1 | 0.131579 | false | 0 | 0.039474 | 0 | 0.184211 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
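The tests above all lean on the same isolation trick. A minimal stand-alone sketch of `mock.patch.dict` on `os.environ` (the token value is a made-up placeholder):

```python
import os
from unittest import mock

os.environ.pop("LAUNCHABLE_TOKEN", None)  # start from a known state

# clear=True empties os.environ first, so only the patched key is visible;
# the original environment is restored when the block exits.
with mock.patch.dict(os.environ,
                     {"LAUNCHABLE_TOKEN": "v1:org/ws:token"}, clear=True):
    assert os.environ["LAUNCHABLE_TOKEN"] == "v1:org/ws:token"
    assert "PATH" not in os.environ  # everything else was cleared

assert "LAUNCHABLE_TOKEN" not in os.environ  # patch rolled back
```

This is why each test in the file can assume a clean environment regardless of what ran before it.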
610b4e686acb050c295bb11769e241ab87d61c32 | 11,944 | py | Python | simoa/test/test_nskart.py | andsor/pysimoa | 8734c062fa4a21b94d0e27ef460f3d8f8c3684da | [
"Apache-2.0"
] | 4 | 2015-08-10T21:30:34.000Z | 2022-03-09T13:56:21.000Z | simoa/test/test_nskart.py | andsor/pysimoa | 8734c062fa4a21b94d0e27ef460f3d8f8c3684da | [
"Apache-2.0"
] | 4 | 2015-02-13T20:55:22.000Z | 2016-03-23T13:23:38.000Z | simoa/test/test_nskart.py | andsor/pysimoa | 8734c062fa4a21b94d0e27ef460f3d8f8c3684da | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import division
import logging
import math
import numpy as np
import numpy.testing
import pytest
import simoa
import simoa.nskart
def test_nskart_step_1_raises_if_too_few_values():
with pytest.raises(simoa.NSkartTooFewValues):
simoa.nskart._step_1(np.ones(1279))
def test_nskart_step_1_initial_values():
data = np.ones(1280)
env = simoa.nskart._step_1(data)
assert env['X_i'] is data
assert env['N'] == data.size
assert 'm' in env
assert env['d'] == simoa.nskart.NSKART_INITIAL_NUMBER_OF_BATCHES_IN_SPACER
assert (
env['d^*'] == simoa.nskart.NSKART_MAXIMUM_NUMBER_OF_BATCHES_IN_SPACER
)
assert env['k'] == simoa.nskart.NSKART_INITIAL_BATCH_NUMBER
assert env[simoa.nskart.NSKART_RANDOMNESS_TEST_SIGNIFICANCE_KEY] == (
simoa.nskart.NSKART_RANDOMNESS_TEST_SIGNIFICANCE
)
assert env['b'] == 0
def test_nskart_step_1_initial_batch_size_unskewed_data():
data = np.random.rand(1280)
env = simoa.nskart._step_1(data)
assert env['m'] == 1
assert env['n'] == env['m'] * env['k']
def test_nskart_step_1_initial_batch_size_skewed_data():
data = np.random.geometric(p=0.99, size=12800)
env = simoa.nskart._step_1(data)
assert env['m'] == 10
assert env['n'] == env['m'] * env['k']
def test_nskart_step_1_minimum_initial_batch_size():
data = np.random.geometric(p=0.99, size=128000)
env = simoa.nskart._step_1(data)
assert env['m'] == 16
assert env['n'] == env['m'] * env['k']
def test_nskart_step_1_initial_nonspaced_batch_means():
data = np.ones(1280)
env = simoa.nskart._step_1(data)
numpy.testing.assert_allclose(env['Y_j(m)'], np.ones(1280))
assert env['Y_j(m)'].size == env['k']
def test_nskart_step_1_initial_nonspaced_batch_means_skewed_data():
data = np.random.geometric(p=0.99, size=12800)
env = simoa.nskart._step_1(data)
assert env['m'] == 10
assert data[10:20].mean() == env['Y_j(m)'][1]
assert env['Y_j(m)'].size == env['k']
def test_nskart_step_2_nonskewed():
data = np.random.rand(12800)
env = simoa.nskart._step_1(data)
env = simoa.nskart._step_2(env)
assert env['d^*'] == (
simoa.nskart.NSKART_MAXIMUM_NUMBER_OF_BATCHES_IN_SPACER
)
def test_nskart_step_2_skewed():
data = np.random.geometric(p=0.99, size=12800)
env = simoa.nskart._step_1(data)
env = simoa.nskart._step_2(env)
assert env['d^*'] == (
simoa.nskart.NSKART_MAXIMUM_NUMBER_OF_BATCHES_IN_SPACER_SKEWED
)
def test_nskart_step3a_pass_randomness_test():
np.random.seed(1)
data = np.random.geometric(p=0.99, size=12800)
env = simoa.nskart._step_1(data)
env = simoa.nskart._step_2(env)
env = simoa.nskart._step_3a(env)
assert env[simoa.nskart.NSKART_NONSPACED_RANDOMNESS_TEST_KEY]
assert env["k'"] == env['k']
def test_nskart_step3a_fail_randomness_test():
np.random.seed(7)
data = np.random.geometric(p=0.99, size=12800)
env = simoa.nskart._step_1(data)
env = simoa.nskart._step_2(env)
env = simoa.nskart._step_3a(env)
assert not env[simoa.nskart.NSKART_NONSPACED_RANDOMNESS_TEST_KEY]
def test_nskart_step3bd():
np.random.seed(7)
data = np.random.geometric(p=0.99, size=12800)
env = simoa.nskart._step_1(data)
env = simoa.nskart._step_2(env)
env = simoa.nskart._step_3a(env) # fail randomness test
assert not env[simoa.nskart.NSKART_NONSPACED_RANDOMNESS_TEST_KEY]
env = simoa.nskart._step_3bd(env)
print(env)
assert env['d'] == 1
assert env["k'"] == env['k'] / 2
assert env['Y_j(m,d)'].size == env["k'"]
assert env['Y_j(m,d)'][0] == env['Y_j(m)'][1]
assert env['Y_j(m,d)'][1] == env['Y_j(m)'][3]
assert env['Y_j(m,d)'][-1] == env['Y_j(m)'][-1]
def test_nskart_iterate_step3bd():
np.random.seed(7)
data = np.random.geometric(p=0.99, size=12800)
env = simoa.nskart._step_1(data)
env = simoa.nskart._step_2(env)
env = simoa.nskart._step_3a(env) # fail randomness test
assert not env[simoa.nskart.NSKART_NONSPACED_RANDOMNESS_TEST_KEY]
env = simoa.nskart._step_3bd(env)
env = simoa.nskart._step_3bd(env)
print(env)
assert env['d'] == 2
assert env["k'"] == math.floor(env['k'] / 3) == 426
assert env['k'] % 3 == 2
assert env['Y_j(m,d)'].size == env["k'"]
assert env['Y_j(m,d)'][0] == env['Y_j(m)'][2]
assert env['Y_j(m,d)'][1] == env['Y_j(m)'][5]
assert env['Y_j(m,d)'][-1] == env['Y_j(m)'][-1 - env['k'] % 3]
def test_nskart_step3c_pass():
np.random.seed(7)
data = np.random.geometric(p=0.99, size=12800)
env = simoa.nskart._step_1(data)
env = simoa.nskart._step_2(env)
env = simoa.nskart._step_3a(env) # fail randomness test
assert not env[simoa.nskart.NSKART_NONSPACED_RANDOMNESS_TEST_KEY]
env = simoa.nskart._step_3bd(env)
env = simoa.nskart._step_3bd(env)
env = simoa.nskart._step_3c(env)
print(env)
assert env[simoa.nskart.NSKART_SPACED_RANDOMNESS_TEST_KEY]
def test_nskart_step3c_fail():
np.random.seed(22)
data = np.random.geometric(p=0.99, size=12800)
env = simoa.nskart._step_1(data)
env = simoa.nskart._step_2(env)
env = simoa.nskart._step_3a(env) # fail randomness test
assert not env[simoa.nskart.NSKART_NONSPACED_RANDOMNESS_TEST_KEY]
env = simoa.nskart._step_3bd(env)
env = simoa.nskart._step_3bd(env)
env = simoa.nskart._step_3c(env)
print(env)
assert not env[simoa.nskart.NSKART_SPACED_RANDOMNESS_TEST_KEY]
@pytest.fixture
def nskart_step4_env_insufficient_data():
np.random.seed(4960)
data = np.random.geometric(p=0.99, size=1280)
env = simoa.nskart._step_1(data)
env = simoa.nskart._step_2(env)
env = simoa.nskart._step_3a(env) # fail randomness test
assert not env[simoa.nskart.NSKART_NONSPACED_RANDOMNESS_TEST_KEY]
env = simoa.nskart._step_3bd(env) # d == 1
assert env['d'] == 1
env = simoa.nskart._step_3c(env) # fail randomness test
assert not env[simoa.nskart.NSKART_SPACED_RANDOMNESS_TEST_KEY]
env = simoa.nskart._step_3bd(env) # d == 2
assert env['d'] == 2
env = simoa.nskart._step_3c(env) # fail randomness test
assert not env[simoa.nskart.NSKART_SPACED_RANDOMNESS_TEST_KEY]
env = simoa.nskart._step_3bd(env) # d == 3
env = simoa.nskart._step_3c(env) # fail randomness test
assert (
not env[simoa.nskart.NSKART_NONSPACED_RANDOMNESS_TEST_KEY] and
not env[simoa.nskart.NSKART_SPACED_RANDOMNESS_TEST_KEY]
)
return env
def test_nskart_step4_raises_insufficient_data(
nskart_step4_env_insufficient_data
):
with pytest.raises(simoa.nskart.NSkartInsufficientDataError):
simoa.nskart._step_4(nskart_step4_env_insufficient_data)
def test_nskart_step4_continue_insufficient_data(
caplog, nskart_step4_env_insufficient_data
):
with caplog.atLevel(logging.ERROR, logger='simoa.nskart'):
env = simoa.nskart._step_4(
env=nskart_step4_env_insufficient_data,
continue_insufficient_data=True
)
assert list(caplog.records())[-1].levelno == logging.ERROR
assert env[simoa.nskart.NSKART_INSUFFICIENT_DATA_KEY]
@pytest.fixture
def nskart_step4_env_sufficient_data():
np.random.seed(435)
data = np.random.geometric(p=0.99, size=128000)
env = simoa.nskart._step_1(data)
env = simoa.nskart._step_2(env)
env = simoa.nskart._step_3a(env) # fail randomness test
assert not env[simoa.nskart.NSKART_NONSPACED_RANDOMNESS_TEST_KEY]
env = simoa.nskart._step_3bd(env) # d == 1
assert env['d'] == 1
env = simoa.nskart._step_3c(env) # fail randomness test
assert not env[simoa.nskart.NSKART_SPACED_RANDOMNESS_TEST_KEY]
env = simoa.nskart._step_3bd(env) # d == 2
assert env['d'] == 2
env = simoa.nskart._step_3c(env) # fail randomness test
assert not env[simoa.nskart.NSKART_SPACED_RANDOMNESS_TEST_KEY]
env = simoa.nskart._step_3bd(env) # d == 3
env = simoa.nskart._step_3c(env) # fail randomness test
assert (
not env[simoa.nskart.NSKART_NONSPACED_RANDOMNESS_TEST_KEY] and
not env[simoa.nskart.NSKART_SPACED_RANDOMNESS_TEST_KEY]
)
return env
def test_nskart_step4_sufficient_data(nskart_step4_env_sufficient_data):
env = simoa.nskart._step_4(nskart_step4_env_sufficient_data)
assert env['b'] == 1
assert env['k'] == 1152
assert env['m'] == 23
assert env['d'] == 0
assert env['d^*'] == 10
assert env['Y_j(m)'].size == env['k']
def test_nskart_step5a():
np.random.seed(7)
data = np.random.geometric(p=0.99, size=12800)
env = simoa.nskart._step_1(data)
env = simoa.nskart._step_2(env)
env = simoa.nskart._step_3a(env) # fail randomness test
assert not env[simoa.nskart.NSKART_NONSPACED_RANDOMNESS_TEST_KEY]
env = simoa.nskart._step_3bd(env)
env = simoa.nskart._step_3c(env)
assert env[simoa.nskart.NSKART_SPACED_RANDOMNESS_TEST_KEY]
env = simoa.nskart._step_5a(env)
assert env["k'"] == 904
assert env['m'] == 14
assert env['w'] == 144
def test_nskart_step5b():
np.random.seed(7)
data = np.random.geometric(p=0.99, size=12800)
env = simoa.nskart._step_1(data)
env = simoa.nskart._step_2(env)
env = simoa.nskart._step_3a(env) # fail randomness test
assert not env[simoa.nskart.NSKART_NONSPACED_RANDOMNESS_TEST_KEY]
env = simoa.nskart._step_3bd(env)
env = simoa.nskart._step_3c(env)
assert env[simoa.nskart.NSKART_SPACED_RANDOMNESS_TEST_KEY]
env = simoa.nskart._step_5a(env)
env = simoa.nskart._step_5b(env)
assert env['Y_j(m)'].size == env["k'"]
assert env['Y_j(m)'][1] == env['X_i'][158:172].mean()
assert env['Y_j(m)'][-1] == env['X_i'][-14:].mean()
def test_nskart_step6():
np.random.seed(7)
data = np.random.geometric(p=0.99, size=12800)
env = simoa.nskart._step_1(data)
env = simoa.nskart._step_2(env)
env = simoa.nskart._step_3a(env) # fail randomness test
assert not env[simoa.nskart.NSKART_NONSPACED_RANDOMNESS_TEST_KEY]
env = simoa.nskart._step_3bd(env)
env = simoa.nskart._step_3c(env)
assert env[simoa.nskart.NSKART_SPACED_RANDOMNESS_TEST_KEY]
env = simoa.nskart._step_5a(env)
env = simoa.nskart._step_5b(env)
env = simoa.nskart._step_6(env)
assert env[simoa.nskart.NSKART_GRAND_AVERAGE_KEY] == (
env['X_i'][env['w']:].mean()
)
def test_nskart_step7():
np.random.seed(7)
data = np.random.geometric(p=0.99, size=12800)
env = simoa.nskart._step_1(data)
env = simoa.nskart._step_2(env)
env = simoa.nskart._step_3a(env) # fail randomness test
assert not env[simoa.nskart.NSKART_NONSPACED_RANDOMNESS_TEST_KEY]
env = simoa.nskart._step_3bd(env)
env = simoa.nskart._step_3c(env)
assert env[simoa.nskart.NSKART_SPACED_RANDOMNESS_TEST_KEY]
env = simoa.nskart._step_5a(env)
env = simoa.nskart._step_5b(env)
env = simoa.nskart._step_6(env)
env = simoa.nskart._step_7(env)
assert env["d'"] == 11
assert env["k''"] == 76
assert env["Y_j(m,d')"].size == env["k''"]
assert env["Y_j(m,d')"][0] == env['X_i'][
12800 - 75 * 12 * 14 - 14: 12800 - 75 * 12 * 14
].mean()
assert env["Y_j(m,d')"][1] == env['X_i'][
12800 - 74 * 12 * 14 - 14: 12800 - 74 * 12 * 14
].mean()
assert env["Y_j(m,d')"][-1] == env['X_i'][-14:].mean()
# assert env[simoa.nskart.NSKART_BATCHED_GRAND_MEAN_KEY] == (
# env['X_i'][:76 * 12 * 14].reshape((12 * 76, 14))[::-12, :].mean()
# )
assert simoa.nskart.NSKART_BATCHED_SAMPLE_VAR_KEY in env
assert simoa.nskart.NSKART_BATCHED_SKEW_KEY in env
assert env['CI'][0] <= env[simoa.nskart.NSKART_BATCHED_GRAND_MEAN_KEY]
assert env[simoa.nskart.NSKART_BATCHED_GRAND_MEAN_KEY] <= env['CI'][1]
@pytest.mark.skip(reason="full nskart invocation not exercised in this suite")
def test_nskart_invocation():
    simoa.nskart(
        data=np.ones(12800),
        confidence_level=0.68,
    )
| 34.72093 | 78 | 0.679253 | 1,867 | 11,944 | 4.066417 | 0.077665 | 0.192703 | 0.219442 | 0.20627 | 0.829294 | 0.778978 | 0.746048 | 0.721022 | 0.697708 | 0.68559 | 0 | 0.045496 | 0.177411 | 11,944 | 343 | 79 | 34.822157 | 0.727226 | 0.044457 | 0 | 0.585965 | 0 | 0 | 0.025821 | 0 | 0 | 0 | 0 | 0 | 0.308772 | 1 | 0.087719 | false | 0.007018 | 0.02807 | 0 | 0.126316 | 0.014035 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
b636c9d93df54d64f6a8a70e94327ab06d5fe606 | 183 | py | Python | matrix.py | algebra2k/linear_algebra | 0474b8feafdff02ed00f7578370f6396467b8ad3 | [
"MIT"
] | null | null | null | matrix.py | algebra2k/linear_algebra | 0474b8feafdff02ed00f7578370f6396467b8ad3 | [
"MIT"
] | null | null | null | matrix.py | algebra2k/linear_algebra | 0474b8feafdff02ed00f7578370f6396467b8ad3 | [
"MIT"
] | null | null | null | class matrix:
def __init__(self, list2d):
self._values = [row[:] for row in list2d]
    def __repr__(self):
        return repr(self._values)
def shape(self):
pass | 20.333333 | 49 | 0.590164 | 23 | 183 | 4.26087 | 0.608696 | 0.204082 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015748 | 0.306011 | 183 | 9 | 50 | 20.333333 | 0.755906 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.428571 | false | 0.142857 | 0 | 0.142857 | 0.714286 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 7 |
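A hedged sketch of how the stub above might round out: `__repr__` returns a string (as Python requires) and `shape` reports `(rows, columns)`. The capitalized `Matrix` name and repr format are this sketch's own choices, not the original's:

```python
class Matrix:
    """Fixed-up sketch of the matrix wrapper: defensive copy, printable repr."""

    def __init__(self, list2d):
        self._values = [row[:] for row in list2d]

    def __repr__(self):
        return "Matrix({!r})".format(self._values)

    def shape(self):
        # (rows, columns); assumes a rectangular, possibly empty, matrix.
        return len(self._values), len(self._values[0]) if self._values else 0


m = Matrix([[1, 2], [3, 4], [5, 6]])
print(m.shape())  # (3, 2)
print(repr(m))    # Matrix([[1, 2], [3, 4], [5, 6]])
```

The defensive `row[:]` copy in `__init__` keeps later mutation of the caller's nested lists from leaking into the matrix.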
fcac17c886ec823c06161c7b87a9ccb4c6bda6e5 | 166 | py | Python | settings_CV.py | alexandster/STIDW | fe15f73b5a7e021be6657d08ebbc15017c84dea7 | [
"Apache-2.0"
] | 1 | 2018-08-04T15:26:12.000Z | 2018-08-04T15:26:12.000Z | settings_CV.py | alexandster/STIDW | fe15f73b5a7e021be6657d08ebbc15017c84dea7 | [
"Apache-2.0"
] | null | null | null | settings_CV.py | alexandster/STIDW | fe15f73b5a7e021be6657d08ebbc15017c84dea7 | [
"Apache-2.0"
] | 1 | 2019-02-14T07:42:01.000Z | 2019-02-14T07:42:01.000Z | # settings.py
def init():
global sdNum, p1, p2, p3, p4, p5, p6, p7, dir1, dir2
sdNum, p1, p2, p3, p4, p5, p6, p7, dir1, dir2 = 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
| 23.714286 | 80 | 0.524096 | 35 | 166 | 2.485714 | 0.457143 | 0.206897 | 0.275862 | 0.321839 | 0.735632 | 0.735632 | 0.735632 | 0.735632 | 0.735632 | 0.62069 | 0 | 0.233333 | 0.277108 | 166 | 6 | 81 | 27.666667 | 0.491667 | 0.066265 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 13 |
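The module above follows the common "settings module" pattern: `init()` seeds module-level globals once at startup, and every importer then shares those names. A self-contained sketch of the same pattern (trimmed to a few of the variable names used above):

```python
def init():
    """Seed the shared globals; must run once before any reader uses them."""
    global sdNum, p1, p2
    sdNum, p1, p2 = 0, 0, 0


init()
p1 = 3.5          # any importer may rebind a shared value after init()
print(sdNum, p1)  # 0 3.5
```

In the real project this would be `import settings_CV` followed by `settings_CV.init()` at program start, with other modules reading and writing `settings_CV.p1` and friends.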
fcd21f47fb89d3c8bbc85c812e5e58b6c5daacb7 | 15,605 | py | Python | sdk/python/pulumi_google_native/firebase/v1beta1/ios_app.py | AaronFriel/pulumi-google-native | 75d1cda425e33d4610348972cd70bddf35f1770d | [
"Apache-2.0"
] | 44 | 2021-04-18T23:00:48.000Z | 2022-02-14T17:43:15.000Z | sdk/python/pulumi_google_native/firebase/v1beta1/ios_app.py | AaronFriel/pulumi-google-native | 75d1cda425e33d4610348972cd70bddf35f1770d | [
"Apache-2.0"
] | 354 | 2021-04-16T16:48:39.000Z | 2022-03-31T17:16:39.000Z | sdk/python/pulumi_google_native/firebase/v1beta1/ios_app.py | AaronFriel/pulumi-google-native | 75d1cda425e33d4610348972cd70bddf35f1770d | [
"Apache-2.0"
] | 8 | 2021-04-24T17:46:51.000Z | 2022-01-05T10:40:21.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi SDK Generator. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from ... import _utilities
__all__ = ['IosAppArgs', 'IosApp']
@pulumi.input_type
class IosAppArgs:
def __init__(__self__, *,
app_id: Optional[pulumi.Input[str]] = None,
app_store_id: Optional[pulumi.Input[str]] = None,
bundle_id: Optional[pulumi.Input[str]] = None,
display_name: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
project: Optional[pulumi.Input[str]] = None,
team_id: Optional[pulumi.Input[str]] = None):
"""
The set of arguments for constructing a IosApp resource.
:param pulumi.Input[str] app_id: Immutable. The globally unique, Firebase-assigned identifier for the `IosApp`. This identifier should be treated as an opaque token, as the data format is not specified.
:param pulumi.Input[str] app_store_id: The automatically generated Apple ID assigned to the iOS app by Apple in the iOS App Store.
:param pulumi.Input[str] bundle_id: Immutable. The canonical bundle ID of the iOS app as it would appear in the iOS AppStore.
:param pulumi.Input[str] display_name: The user-assigned display name for the `IosApp`.
:param pulumi.Input[str] name: The resource name of the IosApp, in the format: projects/PROJECT_IDENTIFIER /iosApps/APP_ID * PROJECT_IDENTIFIER: the parent Project's [`ProjectNumber`](../projects#FirebaseProject.FIELDS.project_number) ***(recommended)*** or its [`ProjectId`](../projects#FirebaseProject.FIELDS.project_id). Learn more about using project identifiers in Google's [AIP 2510 standard](https://google.aip.dev/cloud/2510). Note that the value for PROJECT_IDENTIFIER in any response body will be the `ProjectId`. * APP_ID: the globally unique, Firebase-assigned identifier for the App (see [`appId`](../projects.iosApps#IosApp.FIELDS.app_id)).
:param pulumi.Input[str] project: Immutable. A user-assigned unique identifier of the parent FirebaseProject for the `IosApp`.
:param pulumi.Input[str] team_id: The Apple Developer Team ID associated with the App in the App Store.
"""
if app_id is not None:
pulumi.set(__self__, "app_id", app_id)
if app_store_id is not None:
pulumi.set(__self__, "app_store_id", app_store_id)
if bundle_id is not None:
pulumi.set(__self__, "bundle_id", bundle_id)
if display_name is not None:
pulumi.set(__self__, "display_name", display_name)
if name is not None:
pulumi.set(__self__, "name", name)
if project is not None:
pulumi.set(__self__, "project", project)
if team_id is not None:
pulumi.set(__self__, "team_id", team_id)
@property
@pulumi.getter(name="appId")
def app_id(self) -> Optional[pulumi.Input[str]]:
"""
Immutable. The globally unique, Firebase-assigned identifier for the `IosApp`. This identifier should be treated as an opaque token, as the data format is not specified.
"""
return pulumi.get(self, "app_id")
@app_id.setter
def app_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "app_id", value)
@property
@pulumi.getter(name="appStoreId")
def app_store_id(self) -> Optional[pulumi.Input[str]]:
"""
The automatically generated Apple ID assigned to the iOS app by Apple in the iOS App Store.
"""
return pulumi.get(self, "app_store_id")
@app_store_id.setter
def app_store_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "app_store_id", value)
@property
@pulumi.getter(name="bundleId")
def bundle_id(self) -> Optional[pulumi.Input[str]]:
"""
Immutable. The canonical bundle ID of the iOS app as it would appear in the iOS AppStore.
"""
return pulumi.get(self, "bundle_id")
@bundle_id.setter
def bundle_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "bundle_id", value)
@property
@pulumi.getter(name="displayName")
def display_name(self) -> Optional[pulumi.Input[str]]:
"""
The user-assigned display name for the `IosApp`.
"""
return pulumi.get(self, "display_name")
@display_name.setter
def display_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "display_name", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
The resource name of the IosApp, in the format: projects/PROJECT_IDENTIFIER /iosApps/APP_ID * PROJECT_IDENTIFIER: the parent Project's [`ProjectNumber`](../projects#FirebaseProject.FIELDS.project_number) ***(recommended)*** or its [`ProjectId`](../projects#FirebaseProject.FIELDS.project_id). Learn more about using project identifiers in Google's [AIP 2510 standard](https://google.aip.dev/cloud/2510). Note that the value for PROJECT_IDENTIFIER in any response body will be the `ProjectId`. * APP_ID: the globally unique, Firebase-assigned identifier for the App (see [`appId`](../projects.iosApps#IosApp.FIELDS.app_id)).
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def project(self) -> Optional[pulumi.Input[str]]:
"""
Immutable. A user-assigned unique identifier of the parent FirebaseProject for the `IosApp`.
"""
return pulumi.get(self, "project")
@project.setter
def project(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "project", value)
@property
@pulumi.getter(name="teamId")
def team_id(self) -> Optional[pulumi.Input[str]]:
"""
The Apple Developer Team ID associated with the App in the App Store.
"""
return pulumi.get(self, "team_id")
@team_id.setter
def team_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "team_id", value)
class IosApp(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
app_id: Optional[pulumi.Input[str]] = None,
app_store_id: Optional[pulumi.Input[str]] = None,
bundle_id: Optional[pulumi.Input[str]] = None,
display_name: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
project: Optional[pulumi.Input[str]] = None,
team_id: Optional[pulumi.Input[str]] = None,
__props__=None):
"""
Requests the creation of a new IosApp in the specified FirebaseProject. The result of this call is an `Operation` which can be used to track the provisioning process. The `Operation` is automatically deleted after completion, so there is no need to call `DeleteOperation`.
Note - this resource's API doesn't support deletion. When deleted, the resource will persist
on Google Cloud even though it will be deleted from Pulumi state.
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] app_id: Immutable. The globally unique, Firebase-assigned identifier for the `IosApp`. This identifier should be treated as an opaque token, as the data format is not specified.
:param pulumi.Input[str] app_store_id: The automatically generated Apple ID assigned to the iOS app by Apple in the iOS App Store.
:param pulumi.Input[str] bundle_id: Immutable. The canonical bundle ID of the iOS app as it would appear in the iOS AppStore.
:param pulumi.Input[str] display_name: The user-assigned display name for the `IosApp`.
:param pulumi.Input[str] name: The resource name of the IosApp, in the format: projects/PROJECT_IDENTIFIER /iosApps/APP_ID * PROJECT_IDENTIFIER: the parent Project's [`ProjectNumber`](../projects#FirebaseProject.FIELDS.project_number) ***(recommended)*** or its [`ProjectId`](../projects#FirebaseProject.FIELDS.project_id). Learn more about using project identifiers in Google's [AIP 2510 standard](https://google.aip.dev/cloud/2510). Note that the value for PROJECT_IDENTIFIER in any response body will be the `ProjectId`. * APP_ID: the globally unique, Firebase-assigned identifier for the App (see [`appId`](../projects.iosApps#IosApp.FIELDS.app_id)).
        :param pulumi.Input[str] project: Immutable. A user-assigned unique identifier of the parent FirebaseProject for the `IosApp`.
        :param pulumi.Input[str] team_id: The Apple Developer Team ID associated with the App in the App Store.
        """
        ...

    @overload
    def __init__(__self__,
                 resource_name: str,
                 args: Optional[IosAppArgs] = None,
                 opts: Optional[pulumi.ResourceOptions] = None):
        """
        Requests the creation of a new IosApp in the specified FirebaseProject. The result of this call is an `Operation` which can be used to track the provisioning process. The `Operation` is automatically deleted after completion, so there is no need to call `DeleteOperation`.
        Note - this resource's API doesn't support deletion. When deleted, the resource will persist
        on Google Cloud even though it will be deleted from Pulumi state.

        :param str resource_name: The name of the resource.
        :param IosAppArgs args: The arguments to use to populate this resource's properties.
        :param pulumi.ResourceOptions opts: Options for the resource.
        """
        ...

    def __init__(__self__, resource_name: str, *args, **kwargs):
        resource_args, opts = _utilities.get_resource_args_opts(IosAppArgs, pulumi.ResourceOptions, *args, **kwargs)
        if resource_args is not None:
            __self__._internal_init(resource_name, opts, **resource_args.__dict__)
        else:
            __self__._internal_init(resource_name, *args, **kwargs)

    def _internal_init(__self__,
                       resource_name: str,
                       opts: Optional[pulumi.ResourceOptions] = None,
                       app_id: Optional[pulumi.Input[str]] = None,
                       app_store_id: Optional[pulumi.Input[str]] = None,
                       bundle_id: Optional[pulumi.Input[str]] = None,
                       display_name: Optional[pulumi.Input[str]] = None,
                       name: Optional[pulumi.Input[str]] = None,
                       project: Optional[pulumi.Input[str]] = None,
                       team_id: Optional[pulumi.Input[str]] = None,
                       __props__=None):
        if opts is None:
            opts = pulumi.ResourceOptions()
        if not isinstance(opts, pulumi.ResourceOptions):
            raise TypeError('Expected resource options to be a ResourceOptions instance')
        if opts.version is None:
            opts.version = _utilities.get_version()
        if opts.id is None:
            if __props__ is not None:
                raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
            __props__ = IosAppArgs.__new__(IosAppArgs)

            __props__.__dict__["app_id"] = app_id
            __props__.__dict__["app_store_id"] = app_store_id
            __props__.__dict__["bundle_id"] = bundle_id
            __props__.__dict__["display_name"] = display_name
            __props__.__dict__["name"] = name
            __props__.__dict__["project"] = project
            __props__.__dict__["team_id"] = team_id
        super(IosApp, __self__).__init__(
            'google-native:firebase/v1beta1:IosApp',
            resource_name,
            __props__,
            opts)

    @staticmethod
    def get(resource_name: str,
            id: pulumi.Input[str],
            opts: Optional[pulumi.ResourceOptions] = None) -> 'IosApp':
        """
        Get an existing IosApp resource's state with the given name, id, and optional extra
        properties used to qualify the lookup.

        :param str resource_name: The unique name of the resulting resource.
        :param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
        :param pulumi.ResourceOptions opts: Options for the resource.
        """
        opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))

        __props__ = IosAppArgs.__new__(IosAppArgs)

        __props__.__dict__["app_id"] = None
        __props__.__dict__["app_store_id"] = None
        __props__.__dict__["bundle_id"] = None
        __props__.__dict__["display_name"] = None
        __props__.__dict__["name"] = None
        __props__.__dict__["project"] = None
        __props__.__dict__["team_id"] = None
        return IosApp(resource_name, opts=opts, __props__=__props__)

    @property
    @pulumi.getter(name="appId")
    def app_id(self) -> pulumi.Output[str]:
        """
        Immutable. The globally unique, Firebase-assigned identifier for the `IosApp`. This identifier should be treated as an opaque token, as the data format is not specified.
        """
        return pulumi.get(self, "app_id")

    @property
    @pulumi.getter(name="appStoreId")
    def app_store_id(self) -> pulumi.Output[str]:
        """
        The automatically generated Apple ID assigned to the iOS app by Apple in the iOS App Store.
        """
        return pulumi.get(self, "app_store_id")

    @property
    @pulumi.getter(name="bundleId")
    def bundle_id(self) -> pulumi.Output[str]:
        """
        Immutable. The canonical bundle ID of the iOS app as it would appear in the iOS AppStore.
        """
        return pulumi.get(self, "bundle_id")

    @property
    @pulumi.getter(name="displayName")
    def display_name(self) -> pulumi.Output[str]:
        """
        The user-assigned display name for the `IosApp`.
        """
        return pulumi.get(self, "display_name")

    @property
    @pulumi.getter
    def name(self) -> pulumi.Output[str]:
        """
        The resource name of the IosApp, in the format: projects/PROJECT_IDENTIFIER /iosApps/APP_ID * PROJECT_IDENTIFIER: the parent Project's [`ProjectNumber`](../projects#FirebaseProject.FIELDS.project_number) ***(recommended)*** or its [`ProjectId`](../projects#FirebaseProject.FIELDS.project_id). Learn more about using project identifiers in Google's [AIP 2510 standard](https://google.aip.dev/cloud/2510). Note that the value for PROJECT_IDENTIFIER in any response body will be the `ProjectId`. * APP_ID: the globally unique, Firebase-assigned identifier for the App (see [`appId`](../projects.iosApps#IosApp.FIELDS.app_id)).
        """
        return pulumi.get(self, "name")

    @property
    @pulumi.getter
    def project(self) -> pulumi.Output[str]:
        """
        Immutable. A user-assigned unique identifier of the parent FirebaseProject for the `IosApp`.
        """
        return pulumi.get(self, "project")

    @property
    @pulumi.getter(name="teamId")
    def team_id(self) -> pulumi.Output[str]:
        """
        The Apple Developer Team ID associated with the App in the App Store.
        """
        return pulumi.get(self, "team_id")
1e2a0ec6eebfdf3064cc258f7bb7e5d26b58f460 | 86 | py | Python | test/lit/print_elements/__init__.py | sivachandra/gala | Apache-2.0

import gdb
print("gdb.parameter('print elements'):", gdb.parameter('print elements'))
94cbb502b30dc19af0980c676dea1982bf989460 | 7554 | py | Python | boolean-strings/models.py | jmaces/rde | MIT

import numpy as np
from keras import initializers as kinitializers
from keras import layers as klayers
from keras import models as kmodels
from kerasadf import layers as adflayers
from tensorflow.keras import initializers, layers, models


# GLOBAL DEFAULT PARAMETERS
DIMENSION = 16
BLOCK_SIZE = 5


# STANDARD TF-KERAS MODELS
def create_simple_model(dimension=DIMENSION, block_size=BLOCK_SIZE):
    inp = layers.Input(shape=(dimension, 1))
    blockwise_sums = layers.Conv1D(
        1,
        block_size,
        1,
        kernel_initializer=initializers.Ones(),
        bias_initializer=initializers.Constant(-(block_size - 1)),
        activation="relu",
    )(inp)
    flat = layers.Flatten()(blockwise_sums)
    full_sum = layers.Dense(
        1,
        kernel_initializer=initializers.Constant(-1),
        bias_initializer=initializers.Ones(),
        activation="relu",
    )(flat)
    out = layers.Dense(
        1,
        kernel_initializer=initializers.Constant(-1),
        bias_initializer=initializers.Ones(),
        activation="linear",
    )(full_sum)
    return models.Model(inp, out)


def create_scaled_model(width, dimension=DIMENSION, block_size=BLOCK_SIZE):
    inp = layers.Input(shape=(dimension, 1))
    shift_inp_layer = layers.Conv1D(2, 1, 1, activation="relu")
    shifted_inp = shift_inp_layer(inp)
    shift_inp_layer.set_weights(
        [
            np.asarray([[[1, 1]]]),
            np.asarray([-0.5 + width / 2, -0.5 - width / 2]),
        ]
    )
    scale_inp_layer = layers.Conv1D(1, 1, 1, activation="linear")
    scaled_inp = scale_inp_layer(shifted_inp)
    scale_inp_layer.set_weights(
        [np.asarray([[[1 / width], [-1 / width]]]), np.asarray([0])]
    )
    blockwise_sums = layers.Conv1D(
        1,
        block_size,
        1,
        kernel_initializer=initializers.Ones(),
        bias_initializer=initializers.Constant(-(block_size - 1)),
        activation="relu",
    )(scaled_inp)
    flat = layers.Flatten()(blockwise_sums)
    full_sum = layers.Dense(
        1,
        kernel_initializer=initializers.Constant(-1),
        bias_initializer=initializers.Ones(),
        activation="relu",
    )(flat)
    out = layers.Dense(
        1,
        kernel_initializer=initializers.Constant(-1),
        bias_initializer=initializers.Ones(),
        activation="linear",
    )(full_sum)
    return models.Model(inp, out)


# ADF TF-KERAS MODELS
def create_simple_adfmodel(
    dimension=DIMENSION, block_size=BLOCK_SIZE, mode="diag", rank=None
):
    inp_mean = layers.Input(shape=(dimension, 1))
    if mode == "diag":
        inp_var = layers.Input(shape=(dimension, 1))
    elif mode == "half":
        if rank is None:
            rank = DIMENSION
        inp_var = layers.Input(shape=(rank, dimension, 1))
    elif mode == "full":
        inp_var = layers.Input(shape=(dimension, 1, dimension, 1))
    blockwise_sums = adflayers.Conv1D(
        1,
        block_size,
        1,
        kernel_initializer=initializers.Ones(),
        bias_initializer=initializers.Constant(-(block_size - 1)),
        activation="relu",
        mode=mode,
    )([inp_mean, inp_var])
    flat = adflayers.Flatten(mode=mode)(blockwise_sums)
    full_sum = adflayers.Dense(
        1,
        kernel_initializer=initializers.Constant(-1),
        bias_initializer=initializers.Ones(),
        activation="relu",
        mode=mode,
    )(flat)
    out = adflayers.Dense(
        1,
        kernel_initializer=initializers.Constant(-1),
        bias_initializer=initializers.Ones(),
        activation="linear",
        mode=mode,
    )(full_sum)
    return models.Model([inp_mean, inp_var], out)


def create_scaled_adfmodel(
    width, dimension=DIMENSION, block_size=BLOCK_SIZE, mode="diag", rank=None
):
    inp_mean = layers.Input(shape=(dimension, 1))
    if mode == "diag":
        inp_var = layers.Input(shape=(dimension, 1))
    elif mode == "half":
        if rank is None:
            rank = DIMENSION
        inp_var = layers.Input(shape=(rank, dimension, 1))
    elif mode == "full":
        inp_var = layers.Input(shape=(dimension, 1, dimension, 1))
    shift_inp_layer = adflayers.Conv1D(2, 1, 1, activation="relu", mode=mode)
    shifted_inp = shift_inp_layer([inp_mean, inp_var])
    shift_inp_layer.set_weights(
        [
            np.asarray([[[1, 1]]]),
            np.asarray([-0.5 + width / 2, -0.5 - width / 2]),
        ]
    )
    scale_inp_layer = adflayers.Conv1D(1, 1, 1, activation="linear", mode=mode)
    scaled_inp = scale_inp_layer(shifted_inp)
    scale_inp_layer.set_weights(
        [np.asarray([[[1 / width], [-1 / width]]]), np.asarray([0])]
    )
    blockwise_sums = adflayers.Conv1D(
        1,
        block_size,
        1,
        kernel_initializer=initializers.Ones(),
        bias_initializer=initializers.Constant(-(block_size - 1)),
        activation="relu",
        mode=mode,
    )(scaled_inp)
    flat = adflayers.Flatten(mode=mode)(blockwise_sums)
    full_sum = adflayers.Dense(
        1,
        kernel_initializer=initializers.Constant(-1),
        bias_initializer=initializers.Ones(),
        activation="relu",
        mode=mode,
    )(flat)
    out = adflayers.Dense(
        1,
        kernel_initializer=initializers.Constant(-1),
        bias_initializer=initializers.Ones(),
        activation="linear",
        mode=mode,
    )(full_sum)
    return models.Model([inp_mean, inp_var], out)


# STANDARD KERAS MODELS
def create_simple_kmodel(dimension=DIMENSION, block_size=BLOCK_SIZE):
    inp = klayers.Input(shape=(dimension, 1))
    blockwise_sums = klayers.Conv1D(
        1,
        block_size,
        strides=1,
        kernel_initializer=kinitializers.Ones(),
        bias_initializer=kinitializers.Constant(-(block_size - 1)),
        activation="relu",
    )(inp)
    flat = klayers.Flatten()(blockwise_sums)
    full_sum = klayers.Dense(
        1,
        kernel_initializer=kinitializers.Constant(-1),
        bias_initializer=kinitializers.Ones(),
        activation="relu",
    )(flat)
    out = klayers.Dense(
        1,
        kernel_initializer=kinitializers.Constant(-1),
        bias_initializer=kinitializers.Ones(),
        activation="linear",
    )(full_sum)
    return kmodels.Model(inp, out)


def create_scaled_kmodel(width, dimension=DIMENSION, block_size=BLOCK_SIZE):
    inp = klayers.Input(shape=(dimension, 1))
    shift_inp_layer = klayers.Conv1D(2, 1, strides=1, activation="relu")
    shifted_inp = shift_inp_layer(inp)
    shift_inp_layer.set_weights(
        [
            np.asarray([[[1, 1]]]),
            np.asarray([-0.5 + width / 2, -0.5 - width / 2]),
        ]
    )
    scale_inp_layer = klayers.Conv1D(1, 1, strides=1, activation="linear")
    scaled_inp = scale_inp_layer(shifted_inp)
    scale_inp_layer.set_weights(
        [np.asarray([[[1 / width], [-1 / width]]]), np.asarray([0])]
    )
    blockwise_sums = klayers.Conv1D(
        1,
        block_size,
        strides=1,
        kernel_initializer=kinitializers.Ones(),
        bias_initializer=kinitializers.Constant(-(block_size - 1)),
        activation="relu",
    )(scaled_inp)
    flat = klayers.Flatten()(blockwise_sums)
    full_sum = klayers.Dense(
        1,
        kernel_initializer=kinitializers.Constant(-1),
        bias_initializer=kinitializers.Ones(),
        activation="relu",
    )(flat)
    out = klayers.Dense(
        1,
        kernel_initializer=kinitializers.Constant(-1),
        bias_initializer=kinitializers.Ones(),
        activation="linear",
    )(full_sum)
    return kmodels.Model(inp, out)
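The hard-coded weights above build a fixed Boolean test rather than anything learned: for binary inputs, the Conv1D (ones kernel, bias `-(block_size - 1)`, relu) fires exactly on all-ones windows, and the two Dense layers collapse the hit count to a 0/1 indicator. A plain-Python sketch of the function `create_simple_model` appears to implement (the name `contains_ones_run` is ours, not from the repo):

```python
def contains_ones_run(bits, block_size=5):
    # relu(window_sum - (block_size - 1)) is 1 iff a window is all ones
    hits = sum(
        1 for i in range(len(bits) - block_size + 1)
        if sum(bits[i:i + block_size]) == block_size
    )
    # relu(1 - hits) followed by 1 - (.) turns the count into a 0/1 indicator
    return 1 - max(1 - hits, 0)
```

For example, `contains_ones_run([0, 1, 1, 1, 1, 1, 0, 0])` is 1, while an input without five consecutive ones yields 0.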
a22643e3123c9c085329bef9a524e748a72ba8f2 | 104 | py | Python | src/joseki/__init__.py | nollety/joseki | MIT

"""Joseki."""
from .core import Identifier # pyflakes.ignore
from .core import make # pyflakes.ignore
bf64aec091c99010af0cf6ecb72c0981c1fdabf2 | 202 | py | Python | zvt/recorders/emquantapi/finance_qtr/__init__.py | markqiu/zvt | MIT

# -*- coding: utf-8 -*-
from zvt.recorders.emquantapi.finance_qtr.china_stock_income_statement_qtr_recorder import *
from zvt.recorders.emquantapi.finance_qtr.china_stock_cash_flow_qtr_recorder import *
bf7da13159e6b76621d06c1d65c0180cf95b9c5f | 108 | py | Python | instapy_cli/__init__.py | hkr86/instapy-cli | MIT

from instapy_cli.cli import InstapyCli as cli
def client(*args, **kwargs):
    return cli(*args, **kwargs)
bfd6c67b0e9efd0358872c68caeccd33ecb7cde7 | 7087 | py | Python | tests/cycle_test.py | RubenPants/EvolvableRNN | Apache-2.0

"""
cycle_test.py

Test the creates_cycle method.
"""
import os
import unittest

from population.utils.network_util.graphs import creates_cycle


def get_simple_net():
    """
          0
         / \
        1   |
        |   |
       -1  -2
    """
    connections = dict()
    connections[(-1, 1)] = 1
    connections[(1, 0)] = 1
    connections[(-2, 0)] = 1
    return connections


def get_medium_net():
    """
          0
        / | \
       1  /  2
       | /   |
      -1    -2
    """
    connections = dict()
    connections[(-1, 0)] = 1
    connections[(-1, 1)] = 1
    connections[(1, 0)] = 0
    connections[(-2, 2)] = 1
    connections[(2, 0)] = 1
    return connections


def get_complex_net():
    """
          0
         / \
        3   |
        |   |
        2<--4
        |   |
        1   |
        |   |
       -1  -2
    """
    connections = dict()
    connections[(-1, 1)] = 1
    connections[(1, 2)] = 1
    connections[(2, 3)] = 1
    connections[(3, 0)] = 1
    connections[(-2, 4)] = 1
    connections[(4, 0)] = 1
    connections[(4, 2)] = 1
    return connections


class NoCycle(unittest.TestCase):
    """Test the cycle_test method when there are no cycles."""

    def test_simple(self):
        """> Cycle-free check in simple network."""
        # Folder must be root to load in make_net properly
        if os.getcwd().split('\\')[-1] == 'tests': os.chdir('..')

        # Create the connections (only keys matter!)
        connections = get_simple_net()

        # Test
        self.assertFalse(creates_cycle(connections=connections, test=(-2, 1)))
        self.assertFalse(creates_cycle(connections=connections, test=(-2, -1)))
        self.assertFalse(creates_cycle(connections=connections, test=(1, -2)))
        self.assertFalse(creates_cycle(connections=connections, test=(1, 2)))
        self.assertFalse(creates_cycle(connections=connections, test=(2, 1)))

    def test_medium(self):
        """> Cycle-free check in medium network."""
        # Folder must be root to load in make_net properly
        if os.getcwd().split('\\')[-1] == 'tests': os.chdir('..')

        # Create the connections (only keys matter!)
        connections = get_medium_net()

        # Test
        self.assertFalse(creates_cycle(connections=connections, test=(2, 1)))
        self.assertFalse(creates_cycle(connections=connections, test=(-2, 1)))
        self.assertFalse(creates_cycle(connections=connections, test=(-1, 2)))

    def test_complex(self):
        """> Cycle-free check in complex network."""
        # Folder must be root to load in make_net properly
        if os.getcwd().split('\\')[-1] == 'tests': os.chdir('..')

        # Create the connections (only keys matter!)
        connections = get_complex_net()

        # Test
        self.assertFalse(creates_cycle(connections=connections, test=(-1, 2)))
        self.assertFalse(creates_cycle(connections=connections, test=(-1, 3)))
        self.assertFalse(creates_cycle(connections=connections, test=(-1, 0)))
        self.assertFalse(creates_cycle(connections=connections, test=(-2, 0)))
        self.assertFalse(creates_cycle(connections=connections, test=(-2, 1)))
        self.assertFalse(creates_cycle(connections=connections, test=(-2, 2)))
        self.assertFalse(creates_cycle(connections=connections, test=(-2, 3)))
        self.assertFalse(creates_cycle(connections=connections, test=(1, 0)))
        self.assertFalse(creates_cycle(connections=connections, test=(2, 0)))


class Cycle(unittest.TestCase):
    """Test the cycle_test method when there are cycles."""

    def test_simple(self):
        """> Cycle check in simple network."""
        # Folder must be root to load in make_net properly
        if os.getcwd().split('\\')[-1] == 'tests': os.chdir('..')

        # Create the connections (only keys matter!)
        connections = get_simple_net()

        # Test
        self.assertTrue(creates_cycle(connections=connections, test=(1, 1)))
        self.assertTrue(creates_cycle(connections=connections, test=(1, -1)))
        self.assertTrue(creates_cycle(connections=connections, test=(0, 1)))
        self.assertTrue(creates_cycle(connections=connections, test=(0, -1)))

    def test_medium(self):
        """> Cycle check in medium network."""
        # Folder must be root to load in make_net properly
        if os.getcwd().split('\\')[-1] == 'tests': os.chdir('..')

        # Create the connections (only keys matter!)
        connections = get_medium_net()

        # Test
        self.assertTrue(creates_cycle(connections=connections, test=(0, -1)))
        self.assertTrue(creates_cycle(connections=connections, test=(0, -2)))
        self.assertTrue(creates_cycle(connections=connections, test=(0, 1)))
        self.assertTrue(creates_cycle(connections=connections, test=(0, 2)))
        self.assertTrue(creates_cycle(connections=connections, test=(1, -1)))
        self.assertTrue(creates_cycle(connections=connections, test=(1, 1)))
        self.assertTrue(creates_cycle(connections=connections, test=(2, 2)))

        # Extend network
        connections.update({(1, 2): 1})
        self.assertTrue(creates_cycle(connections=connections, test=(2, -1)))
        self.assertTrue(creates_cycle(connections=connections, test=(2, 1)))
        self.assertFalse(creates_cycle(connections=connections, test=(1, -2)))  # Allowed!

    def test_complex(self):
        """> Cycle check in complex network."""
        # Folder must be root to load in make_net properly
        if os.getcwd().split('\\')[-1] == 'tests': os.chdir('..')

        # Create the connections (only keys matter!)
        connections = get_complex_net()

        # Test
        self.assertTrue(creates_cycle(connections=connections, test=(0, -1)))
        self.assertTrue(creates_cycle(connections=connections, test=(0, -2)))
        self.assertTrue(creates_cycle(connections=connections, test=(0, 1)))
        self.assertTrue(creates_cycle(connections=connections, test=(0, 2)))
        self.assertTrue(creates_cycle(connections=connections, test=(0, 3)))
        self.assertTrue(creates_cycle(connections=connections, test=(0, 4)))
        self.assertTrue(creates_cycle(connections=connections, test=(3, 3)))
        self.assertTrue(creates_cycle(connections=connections, test=(3, 2)))
        self.assertTrue(creates_cycle(connections=connections, test=(3, 1)))
        self.assertTrue(creates_cycle(connections=connections, test=(3, -1)))
        self.assertTrue(creates_cycle(connections=connections, test=(3, -2)))
        self.assertTrue(creates_cycle(connections=connections, test=(3, 4)))
        self.assertTrue(creates_cycle(connections=connections, test=(-1, -1)))
        self.assertFalse(creates_cycle(connections=connections, test=(1, 4)))  # Allowed!


def main():
    nc = NoCycle()
    nc.test_simple()
    nc.test_medium()
    nc.test_complex()

    c = Cycle()
    c.test_simple()
    c.test_medium()
    c.test_complex()


if __name__ == '__main__':
    unittest.main()
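The imported `creates_cycle` itself is not shown in this file; it takes a dict keyed by directed `(in, out)` edges plus one candidate edge, and reports whether adding that edge would close a cycle. A minimal stand-in with the same interface, in the spirit of NEAT-style graph utilities — an illustrative sketch, not the project's actual implementation:

```python
def creates_cycle_sketch(connections, test):
    """Return True iff adding directed edge `test` to `connections` closes a cycle."""
    i, o = test
    if i == o:
        return True  # a self-loop is always a cycle
    visited = {o}
    while True:
        num_added = 0
        for a, b in connections:
            if a in visited and b not in visited:
                if b == i:
                    return True  # a path from o back to i exists
                visited.add(b)
                num_added += 1
        if num_added == 0:
            return False
```

On `get_simple_net()` this matches the expectations above: `(0, 1)` and `(1, 1)` create cycles, `(-2, 1)` does not.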
44c73070b0fde594703951c2e72afd73e46210e0 | 288 | py | Python | 26_Cut_and_Paste_Attack_On_AES-ECB/solve.py | 3-24/id0-rsa.pub | MIT

c1 = "5797791557579e322e619f12b0ccdee8802015ee0467c419e7a38bd0a254da54"
c2 = "b1e952572d6b8e00b626be86552376e2d529a1b9cafaeb3ba7533d2699636323e7e433c10a9dcdab2ed4bee54da684ca"
c3 = "35d0c02036354fdf6082285e0f7bd6d2fdf526bd557b045bce65a3b3e300b55e"
print(c1[0:32] + c2[0:32] + c3[32:64])
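The splice works because AES-ECB encrypts each 16-byte block independently, so 32-hex-character slices of different ciphertexts can be recombined into a ciphertext that still decrypts block by block. A small helper illustrating the block arithmetic (the name `hex_blocks` is ours):

```python
def hex_blocks(ct, block_bytes=16):
    # one AES block = 16 bytes = 32 hex characters
    w = 2 * block_bytes
    return [ct[i:i + w] for i in range(0, len(ct), w)]
```

In these terms the forged ciphertext above is `hex_blocks(c1)[0] + hex_blocks(c2)[0] + hex_blocks(c3)[1]`.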
785a62f7a6f5fb1c0d923ab409fdb7f5c9f2606b | 56550 | py | Python | bench/bench.py | grahamgower/moments | BSD-3-Clause

# -*- coding: UTF-8 -*-
import matplotlib.pyplot as plt
import scipy.stats as stats
import numpy as np
import os
import time

import moments
import dadi

import report
import demographic_models_moments
import demographic_models_dadi

#---------------------------------------------
#-----------------
# some functions :
#-----------------
def neutral_spectrum(n, ndim):
    if ndim == 1: return moments.Spectrum(np.array([0]+[1.0/i for i in range(1, n)]+[0]))
    elif ndim == 2:
        res = np.zeros([n+1, n+1])
        res[1:-1, 0] = [1.0/i for i in range(1, n)]
        res[0, 1:-1] = [1.0/i for i in range(1, n)]
    elif ndim == 3:
        res = np.zeros([n+1, n+1, n+1])
        res[1:-1, 0, 0] = [1.0/i for i in range(1, n)]
        res[0, 1:-1, 0] = [1.0/i for i in range(1, n)]
        res[0, 0, 1:-1] = [1.0/i for i in range(1, n)]
    return moments.Spectrum(res)

def BS_entropy(fs1, fs2, n_it, n_loop):
    tab1 = fs1.copy()
    tab2 = fs2.copy()
    # we don't take into account the values at the corners
    indfirst = tuple(np.zeros(len(tab1.shape)))
    indlast = tuple(int(n)*np.ones(len(tab1.shape)))
    tab1[indfirst] = 0.000001
    tab2[indfirst] = 0.000001
    tab1[indlast] = 0.000001
    tab2[indlast] = 0.000001
    res = []
    for j in range(n_loop):
        rd_ind = [tuple(np.random.choice(n, len(tab1.shape))) for i in range(n_it)]
        v1 = [tab1[x] for x in rd_ind]
        v2 = [tab2[x] for x in rd_ind]
        res.append(stats.entropy(v1, v2))
    res = np.array(res)
    # we remove the entries with infinite entropy
    pinf = float('+inf')
    res = res[res<pinf]
    return np.mean(res)

# NOTE: this second definition shadows the bootstrap version of BS_entropy above;
# only the deterministic entropy below is actually used by the benchmarks.
def BS_entropy(fs1, fs2, n_it, n_loop):
    if len(fs1.shape) == 1:
        return stats.entropy(fs1[1:-1], fs2[1:-1])
    else:
        return stats.entropy(fs1.reshape(np.prod(fs1.shape))[1:-1], fs2.reshape(np.prod(fs1.shape))[1:-1])

def distance(fs1, fs2, nomig=False):
    eps = 1e-16
    if not nomig: return abs(fs1-fs2)/(fs1+eps)
    else:
        res = []
        for i in range(len(fs1)):
            if fs1[i] == 0.0 or fs2[i] == 0.0: res.append(0.0)
            else: res.append(abs(fs1[i]-fs2[i])/(fs1[i]))
        return res

def count_neg(sfs):
    #sfs2 = sfs.copy()
    #sfs2.unmask_all()
    return (sfs<0).sum()+sfs.mask.sum()-2
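`BS_entropy` ultimately leans on `scipy.stats.entropy(p, q)`, which normalizes both vectors and returns their Kullback-Leibler divergence. A dependency-free sketch of that quantity (the helper name `kl_divergence` is ours):

```python
import math

def kl_divergence(p, q):
    # normalize both vectors to probability distributions, then sum p_i * log(p_i / q_i)
    sp, sq = float(sum(p)), float(sum(q))
    return sum((x / sp) * math.log((x / sp) / (y / sq))
               for x, y in zip(p, q) if x > 0)
```

Identical spectra give a divergence of zero, which is why the benchmark reports it as a closeness score between the moments and dadi frequency spectra.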
#--------------
# dadi models :
#--------------
def model1(params, ns, sel, pts):
    nu, t = params
    n1, = ns
    g, h = sel
    xx = dadi.Numerics.default_grid(pts)
    phi = 0.0*dadi.PhiManip.phi_1D(xx)
    phi = dadi.Integration.one_pop(phi, xx, t, nu=nu, gamma=g, h=h)
    sfs = dadi.Spectrum.from_phi(phi, (n1, ), (xx, ))
    return sfs

def model_extrap1(params, ns, sel, pts):
    model_extrap = dadi.Numerics.make_extrap_log_func(model1)
    sfs = model_extrap(params, ns, sel, list(pts))
    return sfs

def model2(params, ns, sel, pts):
    nu1, nu2, t = params
    n1, n2 = ns
    g, h, m = sel
    #dadi.Integration.timescale_factor = 0.0001
    xx = dadi.Numerics.default_grid(pts)
    phi = 0.0*dadi.PhiManip.phi_1D(xx)
    phi = dadi.PhiManip.phi_1D_to_2D(xx, phi)
    phi = dadi.Integration.two_pops(phi, xx, t, nu1, nu2, m12=m, m21=m, gamma1=g, gamma2=g, h1=h, h2=h)
    sfs = dadi.Spectrum.from_phi(phi, (n1, n2), (xx, xx))
    return sfs

def model_extrap2(params, ns, sel, pts):
    model_extrap = dadi.Numerics.make_extrap_log_func(model2)
    sfs = model_extrap(params, ns, sel, list(pts))
    return sfs

def model2_neutral_init(params, ns, sel, pts):
    nu1, nu2, t = params
    n1, n2 = ns
    g, h, m = sel
    #dadi.Integration.timescale_factor = 0.0001
    xx = dadi.Numerics.default_grid(pts)
    phi = dadi.PhiManip.phi_1D(xx)
    phi = dadi.PhiManip.phi_1D_to_2D(xx, phi)
    phi = dadi.Integration.two_pops(phi, xx, t, nu1, nu2, m12=m, m21=m, gamma1=g, gamma2=g, h1=h, h2=h)
    sfs = dadi.Spectrum.from_phi(phi, (n1, n2), (xx, xx))
    return sfs

def model_extrap2_neutral_init(params, ns, sel, pts):
    model_extrap = dadi.Numerics.make_extrap_log_func(model2_neutral_init)
    sfs = model_extrap(params, ns, sel, list(pts))
    return sfs

def model3(params, ns, sel, pts):
    nu1, nu2, nu3, t = params
    n1, n2, n3 = ns
    g, h, m = sel
    xx = dadi.Numerics.default_grid(pts)
    phi = 0.0*dadi.PhiManip.phi_1D(xx)
    phi = dadi.PhiManip.phi_1D_to_2D(xx, phi)
    phi = dadi.PhiManip.phi_2D_to_3D_split_2(xx, phi)
    phi = dadi.Integration.three_pops(phi, xx, t, nu1, nu2, nu3, m12=m, m13=m, m21=m, m23=m, m31=m, m32=m, gamma1=g, gamma2=g, gamma3=g, h1=h, h2=h, h3=h)
    sfs = dadi.Spectrum.from_phi(phi, (n1, n2, n3), (xx, xx, xx))
    return sfs

def model_extrap3(params, ns, sel, pts):
    model_extrap = dadi.Numerics.make_extrap_log_func(model3)
    sfs = model_extrap(params, ns, sel, list(pts))
    return sfs

def model3_neutral_init(params, ns, sel, pts):
    nu1, nu2, nu3, t = params
    n1, n2, n3 = ns
    g, h, m = sel
    xx = dadi.Numerics.default_grid(pts)
    phi = dadi.PhiManip.phi_1D(xx)
    phi = dadi.PhiManip.phi_1D_to_2D(xx, phi)
    phi = dadi.PhiManip.phi_2D_to_3D_split_2(xx, phi)
    phi = dadi.Integration.three_pops(phi, xx, t, nu1, nu2, nu3, m12=m, m13=m, m21=m, m23=m, m31=m, m32=m, gamma1=g, gamma2=g, gamma3=g, h1=h, h2=h, h3=h)
    sfs = dadi.Spectrum.from_phi(phi, (n1, n2, n3), (xx, xx, xx))
    return sfs

def model_extrap3_neutral_init(params, ns, sel, pts):
    model_extrap = dadi.Numerics.make_extrap_log_func(model3_neutral_init)
    sfs = model_extrap(params, ns, sel, list(pts))
    return sfs
#---------------------------------------------
# population expansion for dadi models
f = lambda x: 1+0.01*x
n = 80 # sample size
nb_e = 100 # number of drawing for the entropy computation
# we store the result to edit a report at the end...
results = []
names = []
#-------------------------
# 1D neutral equilibrium :
#-------------------------
name = 'Neutral equilibrium 1D'
print('computing '+name)
# parameters :
ndim = 1
T= 5.0
h = 0.5
g = 0
m = 0
N = 1
# analytical solution :
neutral_fs = neutral_spectrum(n, ndim)
ref_ll = moments.Inference.ll_multinom(neutral_fs, neutral_fs)
# moments :
start_time = time.time()
sfsm = moments.Spectrum(np.zeros(n+1))
sfsm.integrate([N], T)
tps_mom = time.time() - start_time
distm = distance(sfsm[1:-1], neutral_fs[1:-1])
maxdm = np.mean(distm)
bsem = BS_entropy(sfsm, neutral_fs, n, nb_e)
ll = ref_ll - moments.Inference.ll_multinom(sfsm, neutral_fs)
print('moments: ', tps_mom, bsem, maxdm, ll, count_neg(sfsm))
resm = [tps_mom, bsem, maxdm, ll, count_neg(sfsm)]
# dadi Richardson extrapolation :
if not os.path.exists('dadi_simu/dadi_extrap_1dn_'+str(n)):
start_time = time.time()
sfsd = model_extrap1((N, T), (n, ), (g, h), (n, n+10, n+20))
tps_dadi = time.time() - start_time
# export
sfsd.to_file('dadi_simu/dadi_extrap_1dn_'+str(n))
file = open('dadi_simu/time_dadi_extrap_1dn_'+str(n), "w")
file.write(str(tps_dadi))
file.close()
else:
sfsd = dadi.Spectrum.from_file('dadi_simu/dadi_extrap_1dn_'+str(n))
file = open('dadi_simu/time_dadi_extrap_1dn_'+str(n), 'r')
tps_dadi = float(file.read())
distd = distance(sfsd[1:-1], neutral_fs[1:-1])
maxdd = np.mean(distd)
bsed = BS_entropy(sfsd, neutral_fs, n, nb_e)
ll = ref_ll - dadi.Inference.ll_multinom(sfsd, neutral_fs)
print('dadi extrap: ', tps_dadi, bsed, maxdd, ll, count_neg(sfsd))
resde = [tps_dadi, bsed, maxdd, ll, count_neg(sfsd)]
names.append(name)
results.append([resde, resm])
#-------------------------------
# 1D neutral N varying T = 1.0 :
#-------------------------------
if os.path.exists('limits/lim_1dnf_t1_'+str(n)):
name = 'Neutral 1D, T = 1.0'
print('computing '+name)
# parameters :
ndim = 1
T= 1.0
N = lambda x: [1+0.01*x]
h = 0.5
g = 0
m = 0
# we load dadi's limit :
lim_1dnf = moments.Spectrum.from_file('limits/lim_1dnf_t1_'+str(n))
ref_ll = moments.Inference.ll_multinom(lim_1dnf, lim_1dnf)
# moments :
start_time = time.time()
sfsm = moments.Spectrum(np.zeros(n+1))
sfsm.integrate(N, [n], T)
tps_mom = time.time() - start_time
distm = distance(sfsm[1:-1], lim_1dnf[1:-1])
maxdm = np.mean(distm)
bsem = BS_entropy(sfsm, lim_1dnf, n, nb_e)
ll = ref_ll - moments.Inference.ll_multinom(sfsm, lim_1dnf)
print('moments: ', tps_mom, bsem, maxdm, ll)
resm = [tps_mom, bsem, maxdm, ll, count_neg(sfsm)]
# dadi Richardson extrapolation :
if not os.path.exists('dadi_simu/dadi_extrap_1dnf_t1_'+str(n)):
start_time = time.time()
sfsd = model_extrap1((f, T), (n, ), (g, h), (n, n+10, n+20))
tps_dadi = time.time() - start_time
# export
sfsd.to_file('dadi_simu/dadi_extrap_1dnf_t1_'+str(n))
file = open('dadi_simu/time_dadi_extrap_1dnf_t1_'+str(n), "w")
file.write(str(tps_dadi))
file.close()
else:
sfsd = dadi.Spectrum.from_file('dadi_simu/dadi_extrap_1dnf_t1_'+str(n))
file = open('dadi_simu/time_dadi_extrap_1dnf_t1_'+str(n), 'r')
tps_dadi = float(file.read())
distd = distance(sfsd[1:-1], lim_1dnf[1:-1])
maxdd = np.mean(distd)
bsed = BS_entropy(sfsd, lim_1dnf, n, nb_e)
ll = ref_ll - dadi.Inference.ll_multinom(sfsd, lim_1dnf)
print('dadi extrap: ', tps_dadi, bsed, maxdd, ll)
resde = [tps_dadi, bsed, maxdd, ll, count_neg(sfsd)]
names.append(name)
results.append([resde, resm])
#-------------------------------
# 1D neutral N varying T = 5.0 :
#-------------------------------
if os.path.exists('limits/lim_1dnf_t5_'+str(n)):
name = 'Neutral 1D, T = 5.0'
print('computing '+name)
# parameters :
ndim = 1
T = 5.0
N = lambda x: [1+0.01*x]
# we load dadi's limit :
lim_1dnf = moments.Spectrum.from_file('limits/lim_1dnf_t5_'+str(n))
ref_ll = moments.Inference.ll_multinom(lim_1dnf, lim_1dnf)
# moments :
start_time = time.time()
sfsm = moments.Spectrum(np.zeros(n+1))
sfsm.integrate(N, [n], T)
tps_mom = time.time() - start_time
distm = distance(sfsm[1:-1], lim_1dnf[1:-1])
maxdm = np.mean(distm)
bsem = BS_entropy(sfsm, lim_1dnf, n, nb_e)
ll = ref_ll - moments.Inference.ll_multinom(sfsm, lim_1dnf)
print('moments: ', tps_mom, bsem, maxdm, ll)
resm = [tps_mom, bsem, maxdm, ll, count_neg(sfsm)]
# dadi Richardson extrapolation :
if not os.path.exists('dadi_simu/dadi_extrap_1dnf_t5_'+str(n)):
start_time = time.time()
sfsd = model_extrap1((f, T), (n, ), (g, h), (n, n+10, n+20))
tps_dadi = time.time() - start_time
# export
sfsd.to_file('dadi_simu/dadi_extrap_1dnf_t5_'+str(n))
file = open('dadi_simu/time_dadi_extrap_1dnf_t5_'+str(n), "w")
file.write(str(tps_dadi))
file.close()
else:
sfsd = dadi.Spectrum.from_file('dadi_simu/dadi_extrap_1dnf_t5_'+str(n))
file = open('dadi_simu/time_dadi_extrap_1dnf_t5_'+str(n), 'r')
tps_dadi = float(file.read())
distd = distance(sfsd[1:-1], lim_1dnf[1:-1])
maxdd = np.mean(distd)
bsed = BS_entropy(sfsd, lim_1dnf, n, nb_e)
ll = ref_ll - dadi.Inference.ll_multinom(sfsd, lim_1dnf)
print('dadi extrap: ', tps_dadi, bsed, maxdd, ll)
resde = [tps_dadi, bsed, maxdd, ll, count_neg(sfsd)]
names.append(name)
results.append([resde, resm])
#-----------------------------------
# 1D, selection, N varying T = 1.0 :
#-----------------------------------
if os.path.exists('limits/lim_1dsf_t1_'+str(n)):
name = 'Selection 1D, T = 1.0'
print('computing '+name)
# parameters :
ndim = 1
T = 1.0
h = 0.7
g = 1.0
N = lambda x: [1+0.01*x]
# we load dadi's limit :
lim_1dsf = moments.Spectrum.from_file('limits/lim_1dsf_t1_'+str(n))
ref_ll = moments.Inference.ll_multinom(lim_1dsf, lim_1dsf)
# moments :
start_time = time.time()
sfsm = moments.Spectrum(np.zeros(n+1))
sfsm.integrate(N, [n], T, gamma=g, h=h)
tps_mom = time.time() - start_time
distm = distance(sfsm[1:-1], lim_1dsf[1:-1])
maxdm = np.mean(distm)
bsem = BS_entropy(sfsm, lim_1dsf, n, nb_e)
ll = ref_ll - moments.Inference.ll_multinom(sfsm, lim_1dsf)
print('moments: ', tps_mom, bsem, maxdm, ll)
resm = [tps_mom, bsem, maxdm, ll, count_neg(sfsm)]
# dadi Richardson extrapolation :
if not os.path.exists('dadi_simu/dadi_extrap_1dsf_t1_'+str(n)):
start_time = time.time()
sfsd = model_extrap1((f, T), (n, ), (g, h), (n, n+10, n+20))
tps_dadi = time.time() - start_time
# export
sfsd.to_file('dadi_simu/dadi_extrap_1dsf_t1_'+str(n))
file = open('dadi_simu/time_dadi_extrap_1dsf_t1_'+str(n), "w")
file.write(str(tps_dadi))
file.close()
else:
sfsd = dadi.Spectrum.from_file('dadi_simu/dadi_extrap_1dsf_t1_'+str(n))
file = open('dadi_simu/time_dadi_extrap_1dsf_t1_'+str(n), 'r')
tps_dadi = float(file.read())
distd = distance(sfsd[1:-1], lim_1dsf[1:-1])
maxdd = np.mean(distd)
bsed = BS_entropy(sfsd, lim_1dsf, n, nb_e)
ll = ref_ll - dadi.Inference.ll_multinom(sfsd, lim_1dsf)
print('dadi extrap: ', tps_dadi, bsed, maxdd, ll)
resde = [tps_dadi, bsed, maxdd, ll, count_neg(sfsd)]
names.append(name)
results.append([resde, resm])
#-----------------------------------
# 1D, selection, N varying T = 5.0 :
#-----------------------------------
if os.path.exists('limits/lim_1dsf_t5_'+str(n)):
name = 'Selection 1D, T = 5.0'
print('computing '+name)
# parameters :
ndim = 1
T = 5.0
h = 0.7
g = 1.0
N = lambda x: [1+0.01*x]
# we load dadi's limit :
lim_1dsf = moments.Spectrum.from_file('limits/lim_1dsf_t5_'+str(n))
ref_ll = moments.Inference.ll_multinom(lim_1dsf, lim_1dsf)
# moments :
start_time = time.time()
sfsm = moments.Spectrum(np.zeros(n+1))
sfsm.integrate(N, [n], T, gamma=g, h=h)
tps_mom = time.time() - start_time
distm = distance(sfsm[1:-1], lim_1dsf[1:-1])
maxdm = np.mean(distm)
bsem = BS_entropy(sfsm, lim_1dsf, n, nb_e)
ll = ref_ll - moments.Inference.ll_multinom(sfsm, lim_1dsf)
print('moments: ', tps_mom, bsem, maxdm, ll)
resm = [tps_mom, bsem, maxdm, ll, count_neg(sfsm)]
# dadi Richardson extrapolation :
if not os.path.exists('dadi_simu/dadi_extrap_1dsf_t5_'+str(n)):
start_time = time.time()
sfsd = model_extrap1((f, T), (n, ), (g, h), (n, n+10, n+20))
tps_dadi = time.time() - start_time
# export
sfsd.to_file('dadi_simu/dadi_extrap_1dsf_t5_'+str(n))
file = open('dadi_simu/time_dadi_extrap_1dsf_t5_'+str(n), "w")
file.write(str(tps_dadi))
file.close()
else:
sfsd = dadi.Spectrum.from_file('dadi_simu/dadi_extrap_1dsf_t5_'+str(n))
file = open('dadi_simu/time_dadi_extrap_1dsf_t5_'+str(n), 'r')
tps_dadi = float(file.read())
distd = distance(sfsd[1:-1], lim_1dsf[1:-1])
maxdd = np.mean(distd)
bsed = BS_entropy(sfsd, lim_1dsf, n, nb_e)
ll = ref_ll - dadi.Inference.ll_multinom(sfsd, lim_1dsf)
print('dadi extrap: ', tps_dadi, bsed, maxdd, ll)
resde = [tps_dadi, bsed, maxdd, ll, count_neg(sfsd)]
names.append(name)
results.append([resde, resm])
#-----------------------------------------------------------------------------------
#-------------------------
# 2D neutral equilibrium :
#-------------------------
name = 'Neutral equilibrium 2D'
print('computing '+name)
# parameters :
ndim = 2
T = 5.0
h = 0.5
g = 0
m = 0
N = 1
# analytical solution :
neutral_fs = neutral_spectrum(n, ndim)
# We don't consider the last columns (rows) as we do not compute them in neutral_spectrum
ref_ll = moments.Inference.ll_multinom(neutral_fs[:-1, :-1], neutral_fs[:-1, :-1])
# moments :
start_time = time.time()
sfsm = moments.Spectrum(np.zeros([n+1, n+1]))
sfsm.integrate([N, N], [n, n], T)
tps_mom = time.time() - start_time
distm = distance(sfsm[0,1:-1], neutral_fs[0,1:-1])
maxdm = np.mean(distm)
bsem = BS_entropy(sfsm[0,:], neutral_fs[0,:], n, nb_e)
ll = ref_ll - moments.Inference.ll_multinom(sfsm[:-1, :-1], neutral_fs[:-1, :-1])
print('moments: ', tps_mom, bsem, maxdm, ll)
resm = [tps_mom, bsem, maxdm, ll, count_neg(sfsm)]
# dadi Richardson extrapolation :
if not os.path.exists('dadi_simu/dadi_extrap_2dn_'+str(n)):
start_time = time.time()
sfsd = model_extrap2((N, N, T), (n, n), (g, h, m), (n, n+10, n+20))
tps_dadi = time.time() - start_time
# export
sfsd.to_file('dadi_simu/dadi_extrap_2dn_'+str(n))
file = open('dadi_simu/time_dadi_extrap_2dn_'+str(n), "w")
file.write(str(tps_dadi))
file.close()
else:
sfsd = dadi.Spectrum.from_file('dadi_simu/dadi_extrap_2dn_'+str(n))
file = open('dadi_simu/time_dadi_extrap_2dn_'+str(n), 'r')
tps_dadi = float(file.read())
distd = distance(sfsd[0,1:-1], neutral_fs[0,1:-1])
maxdd = np.mean(distd)
bsed = BS_entropy(sfsd[0,:], neutral_fs[0,:], n, nb_e)
ll = ref_ll - dadi.Inference.ll_multinom(sfsd[:-1, :-1], neutral_fs[:-1, :-1])
print('dadi extrap: ', tps_dadi, bsed, maxdd, ll)
resde = [tps_dadi, bsed, maxdd, ll, count_neg(sfsd)]
names.append(name)
results.append([resde, resm])
#------------------------------------
# 2D selection, no migration, T = 1 :
#------------------------------------
if os.path.exists('limits/lim_2dsf_t1_'+str(n)):
name = 'Selection 2D, T = 1.0'
print('computing '+name)
# parameters :
ndim = 2
T = 1.0
h = 0.7
g = 1.0
m = 0
N = lambda x: [1+0.01*x, 1+0.01*x]
gamma = g*np.ones(ndim)
hh = h*np.ones(ndim)
# we load dadi's limit :
lim_2dsf = moments.Spectrum.from_file('limits/lim_2dsf_t1_'+str(n))
ref_ll = moments.Inference.ll_multinom(lim_2dsf, lim_2dsf)
# moments :
start_time = time.time()
sfsm = moments.Spectrum(np.zeros([n+1, n+1]))
sfsm.integrate(N, [n, n], T, gamma=gamma, h=hh)
tps_mom = time.time() - start_time
distm = distance(sfsm.reshape((n+1)**2)[1:-1], lim_2dsf.reshape((n+1)**2)[1:-1], True)
maxdm = np.mean(distm)
bsem = BS_entropy(sfsm, lim_2dsf, n**2, nb_e)
ll = ref_ll - moments.Inference.ll_multinom(sfsm, lim_2dsf)
print('moments: ', tps_mom, bsem, maxdm, ll)
resm = [tps_mom, bsem, maxdm, ll, count_neg(sfsm)]
# dadi Richardson extrapolation :
if not os.path.exists('dadi_simu/dadi_extrap_2dsf_t1_'+str(n)):
start_time = time.time()
sfsd = model_extrap2((f, f, T), (n, n), (g, h, m), (n, n+10, n+20))
tps_dadi = time.time() - start_time
# export
sfsd.to_file('dadi_simu/dadi_extrap_2dsf_t1_'+str(n))
file = open('dadi_simu/time_dadi_extrap_2dsf_t1_'+str(n), "w")
file.write(str(tps_dadi))
file.close()
else:
sfsd = dadi.Spectrum.from_file('dadi_simu/dadi_extrap_2dsf_t1_'+str(n))
file = open('dadi_simu/time_dadi_extrap_2dsf_t1_'+str(n), 'r')
tps_dadi = float(file.read())
# distance is computed after the if/else so that it is defined on both branches :
distd = distance(sfsd.reshape((n+1)**2)[1:-1], lim_2dsf.reshape((n+1)**2)[1:-1])
maxdd = np.mean(distd)
bsed = BS_entropy(sfsd, lim_2dsf, n**2, nb_e)
ll = ref_ll - dadi.Inference.ll_multinom(sfsd, lim_2dsf)
sfsd2 = moments.Spectrum(sfsd)
print('dadi extrap: ', tps_dadi, bsed, maxdd, ll)
resde = [tps_dadi, bsed, maxdd, ll, count_neg(sfsd)]
names.append(name)
results.append([resde, resm])
#------------------------------------
# 2D selection, no migration, T = 5 :
#------------------------------------
if os.path.exists('limits/lim_2dsf_t5_'+str(n)):
name = 'Selection 2D, T = 5.0'
print('computing '+name)
# parameters :
ndim = 2
T = 5.0
h = 0.7
g = 1.0
m = 0
N = lambda x: [1+0.01*x, 1+0.01*x]
gamma = g*np.ones(ndim)
hh = h*np.ones(ndim)
# we load dadi's limit :
lim_2dsf = moments.Spectrum.from_file('limits/lim_2dsf_t5_'+str(n))
ref_ll = moments.Inference.ll_multinom(lim_2dsf, lim_2dsf)
# moments :
start_time = time.time()
sfsm = moments.Spectrum(np.zeros([n+1, n+1]))
sfsm.integrate(N, [n, n], T, gamma=gamma, h=hh)
tps_mom = time.time() - start_time
distm = distance(sfsm[0,:], lim_2dsf[0,:])
maxdm = np.mean(distm)
bsem = BS_entropy(sfsm[0,:], lim_2dsf[0,:], n**2, nb_e)
ll = ref_ll - moments.Inference.ll_multinom(sfsm, lim_2dsf)
print('moments: ', tps_mom, bsem, maxdm, ll)
resm = [tps_mom, bsem, maxdm, ll, count_neg(sfsm)]
# dadi Richardson extrapolation :
if not os.path.exists('dadi_simu/dadi_extrap_2dsf_t5_'+str(n)):
start_time = time.time()
sfsd = model_extrap2((f, f, T), (n, n), (g, h, m), (n, n+10, n+20))
tps_dadi = time.time() - start_time
# export
sfsd.to_file('dadi_simu/dadi_extrap_2dsf_t5_'+str(n))
file = open('dadi_simu/time_dadi_extrap_2dsf_t5_'+str(n), "w")
file.write(str(tps_dadi))
file.close()
else:
sfsd = dadi.Spectrum.from_file('dadi_simu/dadi_extrap_2dsf_t5_'+str(n))
file = open('dadi_simu/time_dadi_extrap_2dsf_t5_'+str(n), 'r')
tps_dadi = float(file.read())
distd = distance(sfsd[0,:], lim_2dsf[0,:])
maxdd = np.mean(distd)
bsed = BS_entropy(sfsd[0,:], lim_2dsf[0,:], n**2, nb_e)
ll = ref_ll - dadi.Inference.ll_multinom(sfsd, lim_2dsf)
print('dadi extrap: ', tps_dadi, bsed, maxdd, ll)
resde = [tps_dadi, bsed, maxdd, ll, count_neg(sfsd)]
names.append(name)
results.append([resde, resm])
#------------------------------------------------
# 2D selection, no migration, neutral SP, T = 1 :
#------------------------------------------------
if os.path.exists('limits/lim_2dsfnsp_t1_'+str(n)):
name = 'Selection 2D, neutral fs0, T = 1.0'
print('computing '+name)
# parameters :
ndim = 2
T = 1.0
h = 0.7
g = 1.0
m = 0
N = lambda x: [1+0.01*x, 1+0.01*x]
gamma = g*np.ones(ndim)
hh = h*np.ones(ndim)
# we load dadi's limit :
lim_2dsf = moments.Spectrum.from_file('limits/lim_2dsfnsp_t1_'+str(n))
ref_ll = moments.Inference.ll_multinom(lim_2dsf, lim_2dsf)
# starting point
init_fs = moments.Spectrum(moments.LinearSystem_1D.steady_state_1D(2*n))
init_fs = moments.Manips.split_1D_to_2D(init_fs, n, n)
# moments :
start_time = time.time()
sfsm = moments.Spectrum(init_fs)
sfsm.integrate(N, [n, n], T, gamma=gamma, h=hh)
tps_mom = time.time() - start_time
distm = distance(sfsm.reshape((n+1)**2)[1:-1], lim_2dsf.reshape((n+1)**2)[1:-1], True)
maxdm = np.mean(distm)
bsem = BS_entropy(sfsm, lim_2dsf, n**2, nb_e)
ll = ref_ll - moments.Inference.ll_multinom(sfsm, lim_2dsf)
print('moments: ', tps_mom, bsem, maxdm, ll)
resm = [tps_mom, bsem, maxdm, ll, count_neg(sfsm)]
# dadi Richardson extrapolation :
if not os.path.exists('dadi_simu/dadi_extrap_2dsfnsp_t1_'+str(n)):
start_time = time.time()
sfsd = model_extrap2_neutral_init((f, f, T), (n, n), (g, h, m), (n, n+10, n+20))
tps_dadi = time.time() - start_time
# export
sfsd.to_file('dadi_simu/dadi_extrap_2dsfnsp_t1_'+str(n))
file = open('dadi_simu/time_dadi_extrap_2dsfnsp_t1_'+str(n), "w")
file.write(str(tps_dadi))
file.close()
else:
sfsd = dadi.Spectrum.from_file('dadi_simu/dadi_extrap_2dsfnsp_t1_'+str(n))
file = open('dadi_simu/time_dadi_extrap_2dsfnsp_t1_'+str(n), 'r')
tps_dadi = float(file.read())
# distance is computed after the if/else so that it is defined on both branches :
distd = distance(sfsd.reshape((n+1)**2)[1:-1], lim_2dsf.reshape((n+1)**2)[1:-1])
maxdd = np.mean(distd)
bsed = BS_entropy(sfsd, lim_2dsf, n**2, nb_e)
ll = ref_ll - dadi.Inference.ll_multinom(sfsd, lim_2dsf)
sfsd2 = moments.Spectrum(sfsd)
print('dadi extrap: ', tps_dadi, bsed, maxdd, ll)
resde = [tps_dadi, bsed, maxdd, ll, count_neg(sfsd)]
names.append(name)
results.append([resde, resm])
#------------------------------------
# 2D selection and migration, T = 1 :
#------------------------------------
if os.path.exists('limits/lim_2dsmf_t1_'+str(n)):
name = 'Selection, migration 2D, T = 1.0'
print('computing '+name)
# parameters :
ndim = 2
T = 1.0
h = 0.7
g = 1.0
m = 2.0
N = lambda x: [1+0.01*x, 1+0.01*x]
gamma = g*np.ones(ndim)
hh = h*np.ones(ndim)
# we load dadi's limit :
lim_2dsmf = moments.Spectrum.from_file('limits/lim_2dsmf_t1_'+str(n))
ref_ll = moments.Inference.ll_multinom(lim_2dsmf, lim_2dsmf)
# moments :
start_time = time.time()
sfsm = moments.Spectrum(np.zeros([n+1, n+1]))
sfsm.integrate(N, [n, n], T, 0.1, gamma=gamma, h=hh, m=m*np.ones([ndim, ndim]))
tps_mom = time.time() - start_time
distm = distance(sfsm.reshape((n+1)**2)[1:-1], lim_2dsmf.reshape((n+1)**2)[1:-1])
maxdm = np.mean(distm)
bsem = BS_entropy(sfsm, lim_2dsmf, n**2, nb_e)
ll = ref_ll - moments.Inference.ll_multinom(sfsm, lim_2dsmf)
print('moments: ', tps_mom, bsem, maxdm, ll)
resm = [tps_mom, bsem, maxdm, ll, count_neg(sfsm)]
# dadi Richardson extrapolation :
if not os.path.exists('dadi_simu/dadi_extrap_2dsmf_t1_'+str(n)):
start_time = time.time()
sfsd = model_extrap2((f, f, T), (n, n), (g, h, m), (n, n+10, n+20))
tps_dadi = time.time() - start_time
# export
sfsd.to_file('dadi_simu/dadi_extrap_2dsmf_t1_'+str(n))
file = open('dadi_simu/time_dadi_extrap_2dsmf_t1_'+str(n), "w")
file.write(str(tps_dadi))
file.close()
else:
sfsd = dadi.Spectrum.from_file('dadi_simu/dadi_extrap_2dsmf_t1_'+str(n))
file = open('dadi_simu/time_dadi_extrap_2dsmf_t1_'+str(n), 'r')
tps_dadi = float(file.read())
distd = distance(sfsd.reshape((n+1)**2)[1:-1], lim_2dsmf.reshape((n+1)**2)[1:-1])
maxdd = np.mean(distd)
bsed = BS_entropy(sfsd, lim_2dsmf, n**2, nb_e)
ll = ref_ll - dadi.Inference.ll_multinom(sfsd, lim_2dsmf)
sfsd2 = moments.Spectrum(sfsd)
print('dadi extrap: ', tps_dadi, bsed, maxdd, ll)
resde = [tps_dadi, bsed, maxdd, ll, count_neg(sfsd)]
names.append(name)
results.append([resde, resm])
#------------------------------------
# 2D selection and migration, T = 5 :
#------------------------------------
if os.path.exists('limits/lim_2dsmf_t5_'+str(n)):
name = 'Selection, migration 2D, T = 5.0'
print('computing '+name)
# parameters :
ndim = 2
T = 5.0
h = 0.7
g = 1.0
m = 2.0
N = lambda x: [1+0.01*x, 1+0.01*x]
gamma = g*np.ones(ndim)
hh = h*np.ones(ndim)
# we load dadi's limit :
lim_2dsmf = moments.Spectrum.from_file('limits/lim_2dsmf_t5_'+str(n))
ref_ll = moments.Inference.ll_multinom(lim_2dsmf, lim_2dsmf)
# moments :
start_time = time.time()
sfsm = moments.Spectrum(np.zeros([n+1, n+1]))
sfsm.integrate(N, [n, n], T, gamma=gamma, h=hh, m=m*np.ones([ndim, ndim]))
tps_mom = time.time() - start_time
distm = distance(sfsm.reshape((n+1)**2)[1:-1], lim_2dsmf.reshape((n+1)**2)[1:-1])
maxdm = np.mean(distm)
bsem = BS_entropy(sfsm, lim_2dsmf, n**2, nb_e)
ll = ref_ll - moments.Inference.ll_multinom(sfsm, lim_2dsmf)
print('moments: ', tps_mom, bsem, maxdm, ll)
resm = [tps_mom, bsem, maxdm, ll, count_neg(sfsm)]
# dadi Richardson extrapolation :
if not os.path.exists('dadi_simu/dadi_extrap_2dsmf_t5_'+str(n)):
start_time = time.time()
sfsd = model_extrap2((f, f, T), (n, n), (g, h, m), (n, n+10, n+20))
tps_dadi = time.time() - start_time
# export
sfsd.to_file('dadi_simu/dadi_extrap_2dsmf_t5_'+str(n))
file = open('dadi_simu/time_dadi_extrap_2dsmf_t5_'+str(n), "w")
file.write(str(tps_dadi))
file.close()
else:
sfsd = dadi.Spectrum.from_file('dadi_simu/dadi_extrap_2dsmf_t5_'+str(n))
file = open('dadi_simu/time_dadi_extrap_2dsmf_t5_'+str(n), 'r')
tps_dadi = float(file.read())
distd = distance(sfsd.reshape((n+1)**2)[1:-1], lim_2dsmf.reshape((n+1)**2)[1:-1])
maxdd = np.mean(distd)
bsed = BS_entropy(sfsd, lim_2dsmf, n**2, nb_e)
ll = ref_ll - dadi.Inference.ll_multinom(sfsd, lim_2dsmf)
sfsd2 = moments.Spectrum(sfsd)
print('dadi extrap: ', tps_dadi, bsed, maxdd, ll)
resde = [tps_dadi, bsed, maxdd, ll, count_neg(sfsd)]
names.append(name)
results.append([resde, resm])
#------------------------------------------------
# 2D selection and migration, neutral SP, T = 1 :
#------------------------------------------------
if os.path.exists('limits/lim_2dsmfnsp_t1_'+str(n)):
name = 'Selection, migration 2D, neutral fs0, T = 1.0'
print('computing '+name)
# parameters :
ndim = 2
T = 1.0
h = 0.7
g = 1.0
m = 2.0
N = lambda x: [1+0.01*x, 1+0.01*x]
gamma = g*np.ones(ndim)
hh = h*np.ones(ndim)
# we load dadi's limit :
lim_2dsmf = moments.Spectrum.from_file('limits/lim_2dsmfnsp_t1_'+str(n))
ref_ll = moments.Inference.ll_multinom(lim_2dsmf, lim_2dsmf)
# starting point
init_fs = moments.Spectrum(moments.LinearSystem_1D.steady_state_1D(2*n))
init_fs = moments.Manips.split_1D_to_2D(init_fs, n, n)
# moments :
start_time = time.time()
sfsm = moments.Spectrum(init_fs)
sfsm.integrate(N, [n, n], T, 0.1, gamma=gamma, h=hh, m=m*np.ones([ndim, ndim]))
tps_mom = time.time() - start_time
distm = distance(sfsm.reshape((n+1)**2)[1:-1], lim_2dsmf.reshape((n+1)**2)[1:-1])
maxdm = np.mean(distm)
bsem = BS_entropy(sfsm, lim_2dsmf, n**2, nb_e)
ll = ref_ll - moments.Inference.ll_multinom(sfsm, lim_2dsmf)
print('moments: ', tps_mom, bsem, maxdm, ll)
resm = [tps_mom, bsem, maxdm, ll, count_neg(sfsm)]
# dadi Richardson extrapolation :
if not os.path.exists('dadi_simu/dadi_extrap_2dsmfnsp_t1_'+str(n)):
start_time = time.time()
sfsd = model_extrap2_neutral_init((f, f, T), (n, n), (g, h, m), (n, n+10, n+20))
tps_dadi = time.time() - start_time
# export
sfsd.to_file('dadi_simu/dadi_extrap_2dsmfnsp_t1_'+str(n))
file = open('dadi_simu/time_dadi_extrap_2dsmfnsp_t1_'+str(n), "w")
file.write(str(tps_dadi))
file.close()
else:
sfsd = dadi.Spectrum.from_file('dadi_simu/dadi_extrap_2dsmfnsp_t1_'+str(n))
file = open('dadi_simu/time_dadi_extrap_2dsmfnsp_t1_'+str(n), 'r')
tps_dadi = float(file.read())
distd = distance(sfsd.reshape((n+1)**2)[1:-1], lim_2dsmf.reshape((n+1)**2)[1:-1])
maxdd = np.mean(distd)
bsed = BS_entropy(sfsd, lim_2dsmf, n**2, nb_e)
ll = ref_ll - dadi.Inference.ll_multinom(sfsd, lim_2dsmf)
sfsd2 = moments.Spectrum(sfsd)
print('dadi extrap: ', tps_dadi, bsed, maxdd, ll)
resde = [tps_dadi, bsed, maxdd, ll, count_neg(sfsd)]
names.append(name)
results.append([resde, resm])
#------------------------------
# Rapid growth 2D, T = 1 :
#------------------------------
if os.path.exists('limits/lim_2dfg_t1_'+str(n)):
name = 'Rapid growth 2D, T = 1.0'
print('computing '+name)
# parameters :
ndim = 2
T = 1.0
h = 0.7
g = 1.0
m = 2.0
Nexp = lambda x: [np.exp(np.log(10.0)*x), np.exp(np.log(10.0)*x)]
fexp = lambda x: np.exp(np.log(10.0)*x)
gamma = g*np.ones(ndim)
hh = h*np.ones(ndim)
# we load dadi's limit :
lim_2dfg = moments.Spectrum.from_file('limits/lim_2dfg_t1_'+str(n))
ref_ll = moments.Inference.ll_multinom(lim_2dfg, lim_2dfg)
# moments :
start_time = time.time()
sfsm = moments.Spectrum(np.zeros([n+1, n+1]))
sfsm.integrate(Nexp, [n, n], T, gamma=gamma, h=hh, m=m*np.ones([ndim, ndim]))
tps_mom = time.time() - start_time
distm = distance(sfsm.reshape((n+1)**2)[1:-1], lim_2dfg.reshape((n+1)**2)[1:-1])
maxdm = np.mean(distm)
bsem = BS_entropy(sfsm, lim_2dfg, n**2, nb_e)
ll = ref_ll - moments.Inference.ll_multinom(sfsm, lim_2dfg)
print('moments: ', tps_mom, bsem, maxdm, ll)
resm = [tps_mom, bsem, maxdm, ll, count_neg(sfsm)]
# dadi Richardson extrapolation :
if not os.path.exists('dadi_simu/dadi_extrap_2dfg_t1_'+str(n)):
start_time = time.time()
#sfsd = model_extrap2((fexp, fexp, T), (n, n), (g, h, m), (1.5*n, 1.5*n+10, 1.5*n+20))
sfsd = model_extrap2((fexp, fexp, T), (n, n), (g, h, m), (n, n+10, n+20))
tps_dadi = time.time() - start_time
# export
sfsd.to_file('dadi_simu/dadi_extrap_2dfg_t1_'+str(n))
file = open('dadi_simu/time_dadi_extrap_2dfg_t1_'+str(n), "w")
file.write(str(tps_dadi))
file.close()
else:
sfsd = dadi.Spectrum.from_file('dadi_simu/dadi_extrap_2dfg_t1_'+str(n))
file = open('dadi_simu/time_dadi_extrap_2dfg_t1_'+str(n), 'r')
tps_dadi = float(file.read())
distd = distance(sfsd.reshape((n+1)**2)[1:-1], lim_2dfg.reshape((n+1)**2)[1:-1])
maxdd = np.mean(distd)
bsed = BS_entropy(sfsd, lim_2dfg, n**2, nb_e)
ll = ref_ll - dadi.Inference.ll_multinom(sfsd, lim_2dfg)
sfsd2 = moments.Spectrum(sfsd)
print('dadi extrap: ', tps_dadi, bsed, maxdd, ll)
resde = [tps_dadi, bsed, maxdd, ll, count_neg(sfsd)]
names.append(name)
results.append([resde, resm])
#---------
# YRI CEU:
#---------
if os.path.exists('limits/lim_2d_yri_ceu_'+str(n)):
name = 'YRI-CEU 2D'
print('computing '+name)
# parameters :
params = [1.881, 0.0710, 1.845, 0.911, 0.355, 0.111]
# we load dadi's limit :
lim_2d_yri_ceu = moments.Spectrum.from_file('limits/lim_2d_yri_ceu_'+str(n))
ref_ll = moments.Inference.ll_multinom(lim_2d_yri_ceu, lim_2d_yri_ceu)
# moments :
start_time = time.time()
sfsm = demographic_models_moments.model_YRI_CEU(params, (n, n))
tps_mom = time.time() - start_time
distm = distance(sfsm.reshape((n+1)**2)[1:-1], lim_2d_yri_ceu.reshape((n+1)**2)[1:-1])
maxdm = np.mean(distm)
bsem = BS_entropy(sfsm, lim_2d_yri_ceu, n**2, nb_e)
ll = ref_ll - moments.Inference.ll_multinom(sfsm, lim_2d_yri_ceu)
print('moments: ', tps_mom, bsem, maxdm, ll)
resm = [tps_mom, bsem, maxdm, ll, count_neg(sfsm)]
# dadi Richardson extrapolation :
if not os.path.exists('dadi_simu/dadi_extrap_2d_yri_ceu_'+str(n)):
start_time = time.time()
sfsd = demographic_models_dadi.model_YRI_CEU_extrap(params, (n, n), (n, n+10, n+20))
tps_dadi = time.time() - start_time
# export
sfsd.to_file('dadi_simu/dadi_extrap_2d_yri_ceu_'+str(n))
file = open('dadi_simu/time_dadi_extrap_2d_yri_ceu_'+str(n), "w")
file.write(str(tps_dadi))
file.close()
else:
sfsd = dadi.Spectrum.from_file('dadi_simu/dadi_extrap_2d_yri_ceu_'+str(n))
file = open('dadi_simu/time_dadi_extrap_2d_yri_ceu_'+str(n), 'r')
tps_dadi = float(file.read())
distd = distance(sfsd.reshape((n+1)**2)[1:-1], lim_2d_yri_ceu.reshape((n+1)**2)[1:-1])
maxdd = np.mean(distd)
bsed = BS_entropy(sfsd, lim_2d_yri_ceu, n**2, nb_e)
ll = ref_ll - dadi.Inference.ll_multinom(sfsd, lim_2d_yri_ceu)
sfsd2 = moments.Spectrum(sfsd)
print('dadi extrap: ', tps_dadi, bsed, maxdd, ll)
resde = [tps_dadi, bsed, maxdd, ll, count_neg(sfsd)]
names.append(name)
results.append([resde, resm])
#-----------------------------------------------------------------------------------
#-------------------------
# 3D neutral equilibrium :
#-------------------------
name = 'Neutral equilibrium 3D'
print('computing '+name)
# parameters :
ndim = 3
T = 5.0
h = 0.5
g = 0
m = 0
N = 1
# analytical solution :
neutral_fs = neutral_spectrum(n, ndim)
# We don't consider the last columns (rows) as we do not compute them in neutral_spectrum
ref_ll = moments.Inference.ll_multinom(neutral_fs[:-1, :-1, :-1], neutral_fs[:-1, :-1, :-1])
# moments :
start_time = time.time()
sfsm = moments.Spectrum(np.zeros([n+1, n+1, n+1]))
sfsm.integrate([N, N, N], [n, n, n], T)
tps_mom = time.time() - start_time
distm = distance(sfsm[0, 1:-1, 0], neutral_fs[0, 1:-1, 0])
maxdm = np.mean(distm)
bsem = BS_entropy(sfsm[0, :, 0], neutral_fs[0, :, 0], n, nb_e)
ll = ref_ll - moments.Inference.ll_multinom(sfsm[:-1, :-1, :-1], neutral_fs[:-1, :-1, :-1])
print('moments: ', tps_mom, bsem, maxdm, ll)
resm = [tps_mom, bsem, maxdm, ll, count_neg(sfsm)]
# dadi Richardson extrapolation :
if not os.path.exists('dadi_simu/dadi_extrap_3dn_'+str(n)):
start_time = time.time()
sfsd = model_extrap3((N, N, N, T), (n, n, n), (g, h, m), (n, n+10, n+20))
tps_dadi = time.time() - start_time
# export
sfsd.to_file('dadi_simu/dadi_extrap_3dn_'+str(n))
file = open('dadi_simu/time_dadi_extrap_3dn_'+str(n), "w")
file.write(str(tps_dadi))
file.close()
else:
sfsd = dadi.Spectrum.from_file('dadi_simu/dadi_extrap_3dn_'+str(n))
file = open('dadi_simu/time_dadi_extrap_3dn_'+str(n), 'r')
tps_dadi = float(file.read())
distd = distance(sfsd[0, 1:-1, 0], neutral_fs[0, 1:-1, 0])
maxdd = np.mean(distd)
bsed = BS_entropy(sfsd[0, :, 0], neutral_fs[0, :, 0], n, nb_e)
ll = ref_ll - dadi.Inference.ll_multinom(sfsd[:-1, :-1, :-1], neutral_fs[:-1, :-1, :-1])
sfsd2 = moments.Spectrum(sfsd)
print('dadi extrap: ', tps_dadi, bsed, maxdd, ll)
resde = [tps_dadi, bsed, maxdd, ll, count_neg(sfsd)]
names.append(name)
results.append([resde, resm])
#------------------------------------
# 3D selection, no migration, T = 1 :
#------------------------------------
if os.path.exists('limits/lim_3dsf_t1_'+str(n)):
name = 'Selection 3D, T = 1.0'
print('computing '+name)
# parameters :
ndim = 3
T = 1.0
h = 0.7
g = 1.0
m = 0
N = lambda x: [1+0.01*x, 1+0.01*x, 1+0.01*x]
gamma = g*np.ones(ndim)
hh = h*np.ones(ndim)
# we load dadi's limit :
lim_3dsf = moments.Spectrum.from_file('limits/lim_3dsf_t1_'+str(n))
ref_ll = moments.Inference.ll_multinom(lim_3dsf, lim_3dsf)
# moments :
start_time = time.time()
sfsm = moments.Spectrum(np.zeros([n+1, n+1, n+1]))
sfsm.integrate(N, [n, n, n], T, gamma=gamma, h=hh)
tps_mom = time.time() - start_time
distm = distance(sfsm.reshape((n+1)**3)[1:-1], lim_3dsf.reshape((n+1)**3)[1:-1], True)
maxdm = np.mean(distm)
bsem = BS_entropy(sfsm, lim_3dsf, n**3, nb_e)
ll = ref_ll - moments.Inference.ll_multinom(sfsm, lim_3dsf)
print('moments: ', tps_mom, bsem, maxdm, ll)
resm = [tps_mom, bsem, maxdm, ll, count_neg(sfsm)]
# dadi Richardson extrapolation :
if not os.path.exists('dadi_simu/dadi_extrap_3dsf_t1_'+str(n)):
start_time = time.time()
sfsd = model_extrap3((f, f, f, T), (n, n, n), (g, h, m), (n, n+10, n+20))
tps_dadi = time.time() - start_time
# export
sfsd.to_file('dadi_simu/dadi_extrap_3dsf_t1_'+str(n))
file = open('dadi_simu/time_dadi_extrap_3dsf_t1_'+str(n), "w")
file.write(str(tps_dadi))
file.close()
else:
sfsd = dadi.Spectrum.from_file('dadi_simu/dadi_extrap_3dsf_t1_'+str(n))
file = open('dadi_simu/time_dadi_extrap_3dsf_t1_'+str(n), 'r')
tps_dadi = float(file.read())
distd = distance(sfsd.reshape((n+1)**3)[1:-1], lim_3dsf.reshape((n+1)**3)[1:-1])
maxdd = np.mean(distd)
bsed = BS_entropy(sfsd, lim_3dsf, n**3, nb_e)
ll = ref_ll - dadi.Inference.ll_multinom(sfsd, lim_3dsf)
sfsd2 = moments.Spectrum(sfsd)
print('dadi extrap: ', tps_dadi, bsed, maxdd, ll)
resde = [tps_dadi, bsed, maxdd, ll, count_neg(sfsd)]
names.append(name)
results.append([resde, resm])
#------------------------------------
# 3D selection, no migration, T = 5 :
#------------------------------------
if os.path.exists('limits/lim_3dsf_t5_'+str(n)):
name = 'Selection 3D, T = 5.0'
print('computing '+name)
# parameters :
ndim = 3
T = 5.0
h = 0.7
g = 1.0
m = 0
N = lambda x: [1+0.01*x, 1+0.01*x, 1+0.01*x]
gamma = g*np.ones(ndim)
hh = h*np.ones(ndim)
# we load dadi's limit :
lim_3dsf = moments.Spectrum.from_file('limits/lim_3dsf_t5_'+str(n))
ref_ll = moments.Inference.ll_multinom(lim_3dsf, lim_3dsf)
# moments :
start_time = time.time()
sfsm = moments.Spectrum(np.zeros([n+1, n+1, n+1]))
sfsm.integrate(N, [n, n, n], T, gamma=gamma, h=hh)
tps_mom = time.time() - start_time
distm = distance(sfsm.reshape((n+1)**3)[1:-1], lim_3dsf.reshape((n+1)**3)[1:-1], True)
maxdm = np.mean(distm)
bsem = BS_entropy(sfsm, lim_3dsf, n**3, nb_e)
ll = ref_ll - moments.Inference.ll_multinom(sfsm, lim_3dsf)
print('moments: ', tps_mom, bsem, maxdm, ll)
resm = [tps_mom, bsem, maxdm, ll, count_neg(sfsm)]
# dadi Richardson extrapolation :
if not os.path.exists('dadi_simu/dadi_extrap_3dsf_t5_'+str(n)):
start_time = time.time()
sfsd = model_extrap3((f, f, f, T), (n, n, n), (g, h, m), (n, n+10, n+20))
tps_dadi = time.time() - start_time
# export
sfsd.to_file('dadi_simu/dadi_extrap_3dsf_t5_'+str(n))
file = open('dadi_simu/time_dadi_extrap_3dsf_t5_'+str(n), "w")
file.write(str(tps_dadi))
file.close()
else:
sfsd = dadi.Spectrum.from_file('dadi_simu/dadi_extrap_3dsf_t5_'+str(n))
file = open('dadi_simu/time_dadi_extrap_3dsf_t5_'+str(n), 'r')
tps_dadi = float(file.read())
distd = distance(sfsd.reshape((n+1)**3)[1:-1], lim_3dsf.reshape((n+1)**3)[1:-1])
maxdd = np.mean(distd)
bsed = BS_entropy(sfsd, lim_3dsf, n**3, nb_e)
ll = ref_ll - dadi.Inference.ll_multinom(sfsd, lim_3dsf)
sfsd2 = moments.Spectrum(sfsd)
print('dadi extrap: ', tps_dadi, bsed, maxdd, ll)
resde = [tps_dadi, bsed, maxdd, ll, count_neg(sfsd)]
names.append(name)
results.append([resde, resm])
#-------------------------------------------------
# 3D selection, no migration, neutral fs0, T = 1 :
#-------------------------------------------------
if os.path.exists('limits/lim_3dsfnsp_t1_'+str(n)):
name = 'Selection 3D, neutral fs0, T = 1.0'
print('computing '+name)
# parameters :
ndim = 3
T = 1.0
h = 0.7
g = 1.0
m = 0
N = lambda x: [1+0.01*x, 1+0.01*x, 1+0.01*x]
gamma = g*np.ones(ndim)
hh = h*np.ones(ndim)
# we load dadi's limit :
lim_3dsf = moments.Spectrum.from_file('limits/lim_3dsfnsp_t1_'+str(n))
ref_ll = moments.Inference.ll_multinom(lim_3dsf, lim_3dsf)
# starting point
init_fs = moments.Spectrum(moments.LinearSystem_1D.steady_state_1D(3*n))
init_fs = moments.Manips.split_1D_to_2D(init_fs, n, 2*n)
init_fs = moments.Manips.split_2D_to_3D_2(init_fs, n, n)
# moments :
start_time = time.time()
sfsm = moments.Spectrum(init_fs)
sfsm.integrate(N, [n, n, n], T, gamma=gamma, h=hh)
tps_mom = time.time() - start_time
distm = distance(sfsm.reshape((n+1)**3)[1:-1], lim_3dsf.reshape((n+1)**3)[1:-1], True)
maxdm = np.mean(distm)
bsem = BS_entropy(sfsm, lim_3dsf, n**3, nb_e)
ll = ref_ll - moments.Inference.ll_multinom(sfsm, lim_3dsf)
print('moments: ', tps_mom, bsem, maxdm, ll)
resm = [tps_mom, bsem, maxdm, ll, count_neg(sfsm)]
# dadi Richardson extrapolation :
if not os.path.exists('dadi_simu/dadi_extrap_3dsfnsp_t1_'+str(n)):
start_time = time.time()
sfsd = model_extrap3_neutral_init((f, f, f, T), (n, n, n), (g, h, m), (n, n+10, n+20))
tps_dadi = time.time() - start_time
# export
sfsd.to_file('dadi_simu/dadi_extrap_3dsfnsp_t1_'+str(n))
file = open('dadi_simu/time_dadi_extrap_3dsfnsp_t1_'+str(n), "w")
file.write(str(tps_dadi))
file.close()
else:
sfsd = dadi.Spectrum.from_file('dadi_simu/dadi_extrap_3dsfnsp_t1_'+str(n))
file = open('dadi_simu/time_dadi_extrap_3dsfnsp_t1_'+str(n), 'r')
tps_dadi = float(file.read())
distd = distance(sfsd.reshape((n+1)**3)[1:-1], lim_3dsf.reshape((n+1)**3)[1:-1])
maxdd = np.mean(distd)
bsed = BS_entropy(sfsd, lim_3dsf, n**3, nb_e)
ll = ref_ll - dadi.Inference.ll_multinom(sfsd, lim_3dsf)
sfsd2 = moments.Spectrum(sfsd)
print('dadi extrap: ', tps_dadi, bsed, maxdd, ll)
resde = [tps_dadi, bsed, maxdd, ll, count_neg(sfsd)]
names.append(name)
results.append([resde, resm])
#------------------------------------
# 3D selection and migration, T = 1 :
#------------------------------------
if os.path.exists('limits/lim_3dsmf_t1_'+str(n)):
    name = 'Selection, migration 3D, T = 1.0'
    print('computing '+name)
    # parameters :
    ndim = 3
    T = 1.0
    h = 0.7
    g = 1.0
    m = 2.0
    N = lambda x: [1+0.01*x, 1+0.01*x, 1+0.01*x]
    gamma = g*np.ones(ndim)
    hh = h*np.ones(ndim)
    # we load dadi's limit :
    lim_3dsmf = moments.Spectrum.from_file('limits/lim_3dsmf_t1_'+str(n))
    ref_ll = moments.Inference.ll_multinom(lim_3dsmf, lim_3dsmf)
    # moments :
    start_time = time.time()
    sfsm = moments.Spectrum(np.zeros([n+1, n+1, n+1]))
    sfsm.integrate(N, [n, n, n], T, gamma=gamma, h=hh, m=m*np.ones([ndim, ndim]))
    tps_mom = time.time() - start_time
    distm = distance(sfsm.reshape((n+1)**3)[1:-1], lim_3dsmf.reshape((n+1)**3)[1:-1])
    maxdm = np.mean(distm)
    bsem = BS_entropy(sfsm, lim_3dsmf, n**3, nb_e)
    ll = ref_ll - moments.Inference.ll_multinom(sfsm, lim_3dsmf)
    print('moments: ', tps_mom, bsem, maxdm, ll)
    resm = [tps_mom, bsem, maxdm, ll, count_neg(sfsm)]
    # dadi Richardson extrapolation :
    if not os.path.exists('dadi_simu/dadi_extrap_3dsmf_t1_'+str(n)):
        start_time = time.time()
        sfsd = model_extrap3((f, f, f, T), (n, n, n), (g, h, m), (n, n+10, n+20))
        tps_dadi = time.time() - start_time
        # export
        sfsd.to_file('dadi_simu/dadi_extrap_3dsmf_t1_'+str(n))
        with open('dadi_simu/time_dadi_extrap_3dsmf_t1_'+str(n), 'w') as file:
            file.write(str(tps_dadi))
    else:
        sfsd = dadi.Spectrum.from_file('dadi_simu/dadi_extrap_3dsmf_t1_'+str(n))
        with open('dadi_simu/time_dadi_extrap_3dsmf_t1_'+str(n), 'r') as file:
            tps_dadi = float(file.read())
    distd = distance(sfsd.reshape((n+1)**3)[1:-1], lim_3dsmf.reshape((n+1)**3)[1:-1])
    maxdd = np.mean(distd)
    bsed = BS_entropy(sfsd, lim_3dsmf, n**3, nb_e)
    ll = ref_ll - dadi.Inference.ll_multinom(sfsd, lim_3dsmf)
    sfsd2 = moments.Spectrum(sfsd)
    print('dadi extrap: ', tps_dadi, bsed, maxdd, ll)
    resde = [tps_dadi, bsed, maxdd, ll, count_neg(sfsd)]
    names.append(name)
    results.append([resde, resm])
#------------------------------------
# 3D selection and migration, T = 5 :
#------------------------------------
if os.path.exists('limits/lim_3dsmf_t5_'+str(n)):
    name = 'Selection, migration 3D, T = 5.0'
    print('computing '+name)
    # parameters :
    ndim = 3
    T = 5.0
    h = 0.7
    g = 1.0
    m = 2.0
    N = lambda x: [1+0.01*x, 1+0.01*x, 1+0.01*x]
    gamma = g*np.ones(ndim)
    hh = h*np.ones(ndim)
    # we load dadi's limit :
    lim_3dsmf = moments.Spectrum.from_file('limits/lim_3dsmf_t5_'+str(n))
    ref_ll = moments.Inference.ll_multinom(lim_3dsmf, lim_3dsmf)
    print(count_neg(lim_3dsmf))
    # moments :
    start_time = time.time()
    sfsm = moments.Spectrum(np.zeros([n+1, n+1, n+1]))
    sfsm.integrate(N, [n, n, n], T, gamma=gamma, h=hh, m=m*np.ones([ndim, ndim]))
    tps_mom = time.time() - start_time
    distm = distance(sfsm.reshape((n+1)**3)[1:-1], lim_3dsmf.reshape((n+1)**3)[1:-1])
    maxdm = np.mean(distm)
    bsem = BS_entropy(sfsm, lim_3dsmf, n**3, nb_e)
    ll = ref_ll - moments.Inference.ll_multinom(sfsm, lim_3dsmf)
    print('moments: ', tps_mom, bsem, maxdm, ll)
    resm = [tps_mom, bsem, maxdm, ll, count_neg(sfsm)]
    # dadi Richardson extrapolation :
    if not os.path.exists('dadi_simu/dadi_extrap_3dsmf_t5_'+str(n)):
        start_time = time.time()
        sfsd = model_extrap3((f, f, f, T), (n, n, n), (g, h, m), (n, n+10, n+20))
        tps_dadi = time.time() - start_time
        # export
        sfsd.to_file('dadi_simu/dadi_extrap_3dsmf_t5_'+str(n))
        with open('dadi_simu/time_dadi_extrap_3dsmf_t5_'+str(n), 'w') as file:
            file.write(str(tps_dadi))
    else:
        sfsd = dadi.Spectrum.from_file('dadi_simu/dadi_extrap_3dsmf_t5_'+str(n))
        with open('dadi_simu/time_dadi_extrap_3dsmf_t5_'+str(n), 'r') as file:
            tps_dadi = float(file.read())
    distd = distance(sfsd.reshape((n+1)**3)[1:-1], lim_3dsmf.reshape((n+1)**3)[1:-1])
    maxdd = np.mean(distd)
    bsed = BS_entropy(sfsd, lim_3dsmf, n**3, nb_e)
    ll = ref_ll - dadi.Inference.ll_multinom(sfsd, lim_3dsmf)
    sfsd2 = moments.Spectrum(sfsd)
    print('dadi extrap: ', tps_dadi, bsed, maxdd, ll)
    resde = [tps_dadi, bsed, maxdd, ll, count_neg(sfsd)]
    names.append(name)
    results.append([resde, resm])
#-------------------------------------------------
# 3D selection and migration, neutral fs0, T = 1 :
#-------------------------------------------------
if os.path.exists('limits/lim_3dsmfnsp_t1_'+str(n)):
    name = 'Selection, migration 3D, neutral fs0, T = 1.0'
    print('computing '+name)
    # parameters :
    ndim = 3
    T = 1.0
    h = 0.7
    g = 1.0
    m = 2.0
    N = lambda x: [1+0.01*x, 1+0.01*x, 1+0.01*x]
    gamma = g*np.ones(ndim)
    hh = h*np.ones(ndim)
    # we load dadi's limit :
    lim_3dsmf = moments.Spectrum.from_file('limits/lim_3dsmfnsp_t1_'+str(n))
    ref_ll = moments.Inference.ll_multinom(lim_3dsmf, lim_3dsmf)
    # starting point
    init_fs = moments.Spectrum(moments.LinearSystem_1D.steady_state_1D(3*n))
    init_fs = moments.Manips.split_1D_to_2D(init_fs, n, 2*n)
    init_fs = moments.Manips.split_2D_to_3D_2(init_fs, n, n)
    # moments :
    start_time = time.time()
    sfsm = moments.Spectrum(init_fs)
    sfsm.integrate(N, [n, n, n], T, gamma=gamma, h=hh, m=m*np.ones([ndim, ndim]))
    tps_mom = time.time() - start_time
    distm = distance(sfsm.reshape((n+1)**3)[1:-1], lim_3dsmf.reshape((n+1)**3)[1:-1])
    maxdm = np.mean(distm)
    bsem = BS_entropy(sfsm, lim_3dsmf, n**3, nb_e)
    ll = ref_ll - moments.Inference.ll_multinom(sfsm, lim_3dsmf)
    print('moments: ', tps_mom, bsem, maxdm, ll)
    resm = [tps_mom, bsem, maxdm, ll, count_neg(sfsm)]
    # dadi Richardson extrapolation :
    if not os.path.exists('dadi_simu/dadi_extrap_3dsmfnsp_t1_'+str(n)):
        start_time = time.time()
        sfsd = model_extrap3_neutral_init((f, f, f, T), (n, n, n), (g, h, m), (n, n+10, n+20))
        tps_dadi = time.time() - start_time
        # export
        sfsd.to_file('dadi_simu/dadi_extrap_3dsmfnsp_t1_'+str(n))
        with open('dadi_simu/time_dadi_extrap_3dsmfnsp_t1_'+str(n), 'w') as file:
            file.write(str(tps_dadi))
    else:
        sfsd = dadi.Spectrum.from_file('dadi_simu/dadi_extrap_3dsmfnsp_t1_'+str(n))
        with open('dadi_simu/time_dadi_extrap_3dsmfnsp_t1_'+str(n), 'r') as file:
            tps_dadi = float(file.read())
    distd = distance(sfsd.reshape((n+1)**3)[1:-1], lim_3dsmf.reshape((n+1)**3)[1:-1])
    maxdd = np.mean(distd)
    bsed = BS_entropy(sfsd, lim_3dsmf, n**3, nb_e)
    ll = ref_ll - dadi.Inference.ll_multinom(sfsd, lim_3dsmf)
    sfsd2 = moments.Spectrum(sfsd)
    print('dadi extrap: ', tps_dadi, bsed, maxdd, ll)
    resde = [tps_dadi, bsed, maxdd, ll, count_neg(sfsd)]
    names.append(name)
    results.append([resde, resm])
#------------------------------
# Rapid growth 3D, T = 1 :
#------------------------------
if os.path.exists('limits/lim_3dfg_t1_'+str(n)):
    name = 'Rapid growth 3D, T = 1.0'
    print('computing '+name)
    # parameters :
    ndim = 3
    T = 1.0
    h = 0.7
    g = 1.0
    m = 2.0
    Nexp = lambda x: [np.exp(np.log(10.0)*x), np.exp(np.log(10.0)*x), np.exp(np.log(10.0)*x)]
    fexp = lambda x: np.exp(np.log(10.0)*x)
    gamma = g*np.ones(ndim)
    hh = h*np.ones(ndim)
    # we load dadi's limit :
    lim_3dfg = moments.Spectrum.from_file('limits/lim_3dfg_t1_'+str(n))
    ref_ll = moments.Inference.ll_multinom(lim_3dfg, lim_3dfg)
    # moments :
    start_time = time.time()
    sfsm = moments.Spectrum(np.zeros([n+1, n+1, n+1]))
    sfsm.integrate(Nexp, [n, n, n], T, gamma=gamma, h=hh, m=m*np.ones([ndim, ndim]))
    tps_mom = time.time() - start_time
    distm = distance(sfsm.reshape((n+1)**3)[1:-1], lim_3dfg.reshape((n+1)**3)[1:-1])
    maxdm = np.mean(distm)
    bsem = BS_entropy(sfsm, lim_3dfg, n**3, nb_e)
    ll = ref_ll - moments.Inference.ll_multinom(sfsm, lim_3dfg)
    print('moments: ', tps_mom, bsem, maxdm, ll)
    resm = [tps_mom, bsem, maxdm, ll, count_neg(sfsm)]
    # dadi Richardson extrapolation :
    if not os.path.exists('dadi_simu/dadi_extrap_3dfg_t1_'+str(n)):
        start_time = time.time()
        #sfsd = model_extrap3((fexp, fexp, fexp, T), (n, n, n), (g, h, m), (2*n, 2*n+10, 2*n+20))
        sfsd = model_extrap3((fexp, fexp, fexp, T), (n, n, n), (g, h, m), (n, n+10, n+20))
        tps_dadi = time.time() - start_time
        # export
        sfsd.to_file('dadi_simu/dadi_extrap_3dfg_t1_'+str(n))
        with open('dadi_simu/time_dadi_extrap_3dfg_t1_'+str(n), 'w') as file:
            file.write(str(tps_dadi))
    else:
        sfsd = dadi.Spectrum.from_file('dadi_simu/dadi_extrap_3dfg_t1_'+str(n))
        with open('dadi_simu/time_dadi_extrap_3dfg_t1_'+str(n), 'r') as file:
            tps_dadi = float(file.read())
    distd = distance(sfsd.reshape((n+1)**3)[1:-1], lim_3dfg.reshape((n+1)**3)[1:-1])
    maxdd = np.mean(distd)
    bsed = BS_entropy(sfsd, lim_3dfg, n**3, nb_e)
    ll = ref_ll - dadi.Inference.ll_multinom(sfsd, lim_3dfg)
    print('dadi extrap: ', tps_dadi, bsed, maxdd, ll)
    resde = [tps_dadi, bsed, maxdd, ll, count_neg(sfsd)]
    names.append(name)
    results.append([resde, resm])
#-----------------------------------------------------------------------------------
#----------------
# Out of Africa :
#----------------
if os.path.exists('limits/lim_ooa_3d_'+str(n)):
    name = 'Out of Africa 3D'
    print('computing '+name)
    # parameters :
    params = [6.87846000e-01, 7.52004000e-02, 9.54548000e-02, 9.29661000e-01,
              3.55988000e-02, 2.01524000e+00, 1.49964000e+01, 7.64217000e-01,
              3.76222364e-01, 3.02770000e+00, 1.35484000e-03, 7.71636000e-01,
              2.42014000e-02]
    # we load dadi's limit :
    lim_ooa3d = moments.Spectrum.from_file('limits/lim_ooa_3d_'+str(n))
    ref_ll = moments.Inference.ll_multinom(lim_ooa3d, lim_ooa3d)
    # moments :
    start_time = time.time()
    sfsm = demographic_models_moments.model_ooa_3D(params, (n, n, n))
    tps_mom = time.time() - start_time
    distm = distance(sfsm.reshape((n+1)**3)[1:-1], lim_ooa3d.reshape((n+1)**3)[1:-1])
    maxdm = np.mean(distm)
    bsem = BS_entropy(sfsm, lim_ooa3d, n**3, nb_e)
    #bsem = BS_entropy(sfsm[0, 3:-1, 0], lim_ooa3d[0, 3:-1, 0], n**3, nb_e)
    ll = ref_ll - moments.Inference.ll_multinom(sfsm, lim_ooa3d)
    print('moments: ', tps_mom, bsem, maxdm, ll)
    resm = [tps_mom, bsem, maxdm, ll, count_neg(sfsm)]
    # dadi Richardson extrapolation :
    if not os.path.exists('dadi_simu/dadi_extrap_ooa_3d_'+str(n)):
        start_time = time.time()
        #sfsd = demographic_models_dadi.model_ooa_3D_extrap(params, (n, n, n), (1.5*n, 1.5*n+10, 1.5*n+20))
        sfsd = demographic_models_dadi.model_ooa_3D_extrap(params, (n, n, n), (n, n+10, n+20))
        tps_dadi = time.time() - start_time
        # export
        sfsd.to_file('dadi_simu/dadi_extrap_ooa_3d_'+str(n))
        with open('dadi_simu/time_dadi_extrap_ooa_3d_'+str(n), 'w') as file:
            file.write(str(tps_dadi))
    else:
        sfsd = dadi.Spectrum.from_file('dadi_simu/dadi_extrap_ooa_3d_'+str(n))
        with open('dadi_simu/time_dadi_extrap_ooa_3d_'+str(n), 'r') as file:
            tps_dadi = float(file.read())
    distd = distance(sfsd.reshape((n+1)**3)[1:-1], lim_ooa3d.reshape((n+1)**3)[1:-1])
    maxdd = np.mean(distd)
    bsed = BS_entropy(sfsd, lim_ooa3d, n**3, nb_e)
    #bsed = BS_entropy(sfsd[0, 3:-1, 0], lim_ooa3d[0, 3:-1, 0], n**3, nb_e)
    ll = ref_ll - dadi.Inference.ll_multinom(sfsd, lim_ooa3d)
    sfsd2 = moments.Spectrum(sfsd)
    print('dadi extrap: ', tps_dadi, bsed, maxdd, ll)
    resde = [tps_dadi, bsed, maxdd, ll, count_neg(sfsd)]
    names.append(name)
    results.append([resde, resm])
#-----------------------------------------------------------------------------------
#report.generate_tex_table(results, names)
report.generate_formated_table(results, names, n)
os.system("pdflatex report.tex")
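Every scenario above repeats the same compute-or-load caching for the dadi result and its timing file. A minimal, dependency-free sketch of that pattern (`cached_timing` and the temp path are illustrative names, not from the script):

```python
import os
import time
import tempfile

def cached_timing(time_path, compute):
    """Run `compute` once and cache its wall-clock time, or reload it from disk."""
    if not os.path.exists(time_path):
        start = time.time()
        compute()
        elapsed = time.time() - start
        # str(float) round-trips exactly through float() in Python 3
        with open(time_path, 'w') as fh:
            fh.write(str(elapsed))
        return elapsed
    with open(time_path) as fh:
        return float(fh.read())

path = os.path.join(tempfile.mkdtemp(), 'time_demo')
first = cached_timing(path, lambda: sum(range(1000)))
second = cached_timing(path, lambda: sum(range(1000)))  # served from the cache file
print(first == second)  # True
```

The real script caches the spectrum itself with `Spectrum.to_file`/`from_file` alongside the timing file; the helper only captures the shared control flow.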
| 38.004032 | 154 | 0.60313 | 9,113 | 56,550 | 3.544497 | 0.034895 | 0.019194 | 0.015603 | 0.038451 | 0.947339 | 0.937649 | 0.918981 | 0.906319 | 0.894771 | 0.887712 | 0 | 0.045293 | 0.195332 | 56,550 | 1,487 | 155 | 38.02959 | 0.664557 | 0.107126 | 0 | 0.698096 | 0 | 0 | 0.117864 | 0.078456 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.009066 | null | null | 0.06437 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
78c8a3cf2b913bf55cf9e3479e1938dc3c9c96f0 | 4,445 | py | Python | tests/pylon_tests/emulated/instantcameraarraytest.py | matt-phair/pypylon | f81385a838ac0786f1a3f9b61db8eb4cdd2a905e | [
"BSD-3-Clause"
] | 358 | 2018-05-03T15:09:08.000Z | 2022-03-30T02:18:02.000Z | tests/pylon_tests/emulated/instantcameraarraytest.py | matt-phair/pypylon | f81385a838ac0786f1a3f9b61db8eb4cdd2a905e | [
"BSD-3-Clause"
] | 473 | 2018-05-01T14:55:20.000Z | 2022-03-31T18:09:31.000Z | tests/pylon_tests/emulated/instantcameraarraytest.py | matt-phair/pypylon | f81385a838ac0786f1a3f9b61db8eb4cdd2a905e | [
"BSD-3-Clause"
] | 176 | 2018-07-05T20:16:05.000Z | 2022-03-16T10:59:52.000Z | from pylonemutestcase import PylonEmuTestCase
from pypylon import pylon
import unittest
class InstantCameraArrayTestSuite(PylonEmuTestCase):
    def test_constructor_empty(self):
        cameraArray = pylon.InstantCameraArray()
        self.assertEqual(0, cameraArray.GetSize())
        self.assertFalse(cameraArray.IsGrabbing())
        self.assertFalse(cameraArray.IsOpen())
        self.assertFalse(cameraArray.IsPylonDeviceAttached())
        self.assertFalse(cameraArray.IsCameraDeviceRemoved())
        # Test if no Camera is connected
        for cam in cameraArray:
            self.fail()

    def test_initialize(self):
        cameraArray = pylon.InstantCameraArray()
        self.assertEqual(0, cameraArray.GetSize())
        cameraArray.Initialize(self.num_dev)
        self.assertEqual(self.num_dev, cameraArray.GetSize())
        v = 0
        for cam in cameraArray:
            v += 1
        self.assertEqual(self.num_dev, v)

    def test_connect_cameras(self):
        devices = pylon.TlFactory.GetInstance().EnumerateDevices(self.device_filter)
        self.assertEqual(len(devices), self.num_dev)
        cameraArray = pylon.InstantCameraArray(self.num_dev)
        for i, cam in enumerate(cameraArray):
            self.assertEqual(devices[i].GetDeviceClass(), self.device_class)
            cam.Attach(pylon.TlFactory.GetInstance().CreateDevice(devices[i]))
        self.assertEqual(self.num_dev, cameraArray.GetSize())
        self.assertFalse(cameraArray.IsGrabbing())
        self.assertFalse(cameraArray.IsOpen())
        self.assertTrue(cameraArray.IsPylonDeviceAttached())
        self.assertFalse(cameraArray.IsCameraDeviceRemoved())

        cameraArray.Open()
        self.assertEqual(self.num_dev, cameraArray.GetSize())
        self.assertFalse(cameraArray.IsGrabbing())
        self.assertTrue(cameraArray.IsOpen())
        self.assertTrue(cameraArray.IsPylonDeviceAttached())
        self.assertFalse(cameraArray.IsCameraDeviceRemoved())

        cameraArray.StartGrabbing()
        self.assertEqual(self.num_dev, cameraArray.GetSize())
        self.assertTrue(cameraArray.IsGrabbing())
        self.assertTrue(cameraArray.IsOpen())
        self.assertTrue(cameraArray.IsPylonDeviceAttached())
        self.assertFalse(cameraArray.IsCameraDeviceRemoved())

        cameraArray.RetrieveResult(300)
        self.assertEqual(self.num_dev, cameraArray.GetSize())
        self.assertTrue(cameraArray.IsGrabbing())
        self.assertTrue(cameraArray.IsOpen())
        self.assertTrue(cameraArray.IsPylonDeviceAttached())
        self.assertFalse(cameraArray.IsCameraDeviceRemoved())

        cameraArray.StopGrabbing()
        self.assertEqual(self.num_dev, cameraArray.GetSize())
        self.assertFalse(cameraArray.IsGrabbing())
        self.assertTrue(cameraArray.IsOpen())
        self.assertTrue(cameraArray.IsPylonDeviceAttached())
        self.assertFalse(cameraArray.IsCameraDeviceRemoved())

        cameraArray.Close()
        self.assertEqual(self.num_dev, cameraArray.GetSize())
        self.assertFalse(cameraArray.IsGrabbing())
        self.assertFalse(cameraArray.IsOpen())
        self.assertTrue(cameraArray.IsPylonDeviceAttached())
        self.assertFalse(cameraArray.IsCameraDeviceRemoved())

        cameraArray.DestroyDevice()
        self.assertEqual(self.num_dev, cameraArray.GetSize())
        self.assertFalse(cameraArray.IsGrabbing())
        self.assertFalse(cameraArray.IsOpen())
        self.assertFalse(cameraArray.IsPylonDeviceAttached())
        self.assertFalse(cameraArray.IsCameraDeviceRemoved())

    def test_detach_cameras(self):
        cameraArray = pylon.InstantCameraArray(self.num_dev)
        devices = pylon.TlFactory.GetInstance().EnumerateDevices(self.device_filter)
        self.assertEqual(len(devices), self.num_dev)
        for i, cam in enumerate(cameraArray):
            self.assertEqual(devices[i].GetDeviceClass(), self.device_class)
            cam.Attach(pylon.TlFactory.GetInstance().CreateDevice(devices[i]))
        cameraArray.Open()
        cameraArray.StartGrabbing()
        cameraArray.DetachDevice()
        self.assertEqual(self.num_dev, cameraArray.GetSize())
        self.assertFalse(cameraArray.IsGrabbing())
        self.assertFalse(cameraArray.IsOpen())
        self.assertFalse(cameraArray.IsPylonDeviceAttached())
        self.assertFalse(cameraArray.IsCameraDeviceRemoved())


if __name__ == "__main__":
    unittest.main()
| 40.045045 | 84 | 0.703712 | 396 | 4,445 | 7.813131 | 0.159091 | 0.116354 | 0.201681 | 0.071105 | 0.849386 | 0.840013 | 0.814803 | 0.800905 | 0.800905 | 0.760181 | 0 | 0.001944 | 0.190101 | 4,445 | 110 | 85 | 40.409091 | 0.8575 | 0.006749 | 0 | 0.770115 | 0 | 0 | 0.001813 | 0 | 0 | 0 | 0 | 0 | 0.597701 | 1 | 0.045977 | false | 0 | 0.034483 | 0 | 0.091954 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
78d29ba426386cfec63a13625674d0f5bd330d9c | 1,876 | py | Python | ziso/read/read.py | simbaTmotsi/eye-vision | f5354d5264e8e1eda4a309eb65862864247c4423 | [
"MIT"
] | null | null | null | ziso/read/read.py | simbaTmotsi/eye-vision | f5354d5264e8e1eda4a309eb65862864247c4423 | [
"MIT"
] | 162 | 2019-01-27T16:37:16.000Z | 2022-03-31T23:44:34.000Z | ziso/read/read.py | simbaTmotsi/eye-vision | f5354d5264e8e1eda4a309eb65862864247c4423 | [
"MIT"
] | null | null | null | import cv2
"""
module to read images
"""
'''
Reading images in RGB format
'''
def rgb(image_or_frame):
# reading an image in the default BGR format for opencv
try:
image_or_frame = cv2.cvtColor(cv2.imread(image_or_frame),cv2.COLOR_BGR2RGB)
except:
print ("Please check the file path, there seems to be an error")
"""
error checking
"""
# in the event of an invalid file path this will be executed
try:
assert (image_or_frame) == None
print ("Please check the file path, there seems to be an error")
except Exception as errors:
#raise AssertionError ("file found") from errors
pass
# returning the variable for re-use
return image_or_frame
'''
Reading images in BGR format
'''
def bgr(image_or_frame):
# reading an image in the default BGR format for opencv
image_or_frame = cv2.imread(image_or_frame)
"""
error checking
"""
# in the event of an invalid file path this will be executed
try:
assert (image_or_frame) == None
print ("Please check the file path, there seems to be an error")
except Exception as errors:
#raise AssertionError ("file found") from errors
pass
# returning the variable for re-use
return image_or_frame
'''
Reading images in grayscale format
'''
def gray(image_or_frame):
# reading an image in the default BGR format for opencv
image_or_frame = cv2.imread(image_or_frame, 0)
"""
error checking
"""
# in the event of an invalid file path this will be executed
try:
assert (image_or_frame) == None
print ("Please check the file path, there seems to be an error")
except Exception as errors:
#raise AssertionError ("file found") from errors
pass
# returning the variable for re-use
return image_or_frame
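The three readers differ only in the imread flag and an optional colour conversion, so they could share one code path via a dispatch table. A dependency-free sketch of that idea, with plain stand-ins replacing the OpenCV calls (`read_image`, `_POSTPROCESS`, and the channel-reversal stub are all illustrative; the real version would call `cv2.imread` and `cv2.cvtColor`):

```python
def _identity(pixels):
    return pixels

def _bgr_to_rgb(pixels):
    # stand-in for cv2.cvtColor(..., cv2.COLOR_BGR2RGB): reverse each pixel's channels
    return [px[::-1] for px in pixels]

_POSTPROCESS = {"bgr": _identity, "rgb": _bgr_to_rgb, "gray": _identity}

def read_image(loaded, mode="bgr"):
    """`loaded` mimics cv2.imread's result: pixel data, or None for a bad path."""
    if loaded is None:
        print("Please check the file path, there seems to be an error")
        return None
    return _POSTPROCESS[mode](loaded)

print(read_image([(255, 0, 0)], mode="rgb"))  # [(0, 0, 255)]
```

The single `is None` check then covers every mode instead of being duplicated per function.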
| 28.861538 | 83 | 0.659915 | 269 | 1,876 | 4.486989 | 0.223048 | 0.086993 | 0.14913 | 0.078708 | 0.882353 | 0.864954 | 0.864954 | 0.864954 | 0.864954 | 0.864954 | 0 | 0.005822 | 0.267591 | 1,876 | 65 | 84 | 28.861538 | 0.872635 | 0.309701 | 0 | 0.714286 | 0 | 0 | 0.205714 | 0 | 0 | 0 | 0 | 0 | 0.107143 | 1 | 0.107143 | false | 0.107143 | 0.035714 | 0 | 0.25 | 0.142857 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 8 |
152ce7373afe31f347c0f9905bba1ec76abc18d8 | 102 | py | Python | tensorcv/data/__init__.py | Hourout/tensorcv | 620559ee90921e036c8a84877d806a5c21010511 | [
"Apache-2.0"
] | 8 | 2018-12-06T05:02:29.000Z | 2021-08-25T07:09:29.000Z | tensorcv/data/__init__.py | Hourout/keras-cv | 620559ee90921e036c8a84877d806a5c21010511 | [
"Apache-2.0"
] | null | null | null | tensorcv/data/__init__.py | Hourout/keras-cv | 620559ee90921e036c8a84877d806a5c21010511 | [
"Apache-2.0"
] | 4 | 2018-12-06T05:02:32.000Z | 2019-06-04T11:15:39.000Z | from tensorcv.data._sample import ImageClassificationFolderDataset
from tensorcv.data import datasets
| 34 | 66 | 0.892157 | 11 | 102 | 8.181818 | 0.636364 | 0.266667 | 0.355556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078431 | 102 | 2 | 67 | 51 | 0.957447 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
1532d2e7c32cf8d4dbb68b7327d9b15a9a465d4a | 5,960 | py | Python | resources/guests.py | questionlp/api.wwdt.me | 38dc0a471f176969913b17f3c87c22745078c196 | [
"Apache-2.0"
] | 3 | 2019-07-24T20:06:52.000Z | 2019-11-13T04:13:01.000Z | resources/guests.py | questionlp/api.wwdt.me | 38dc0a471f176969913b17f3c87c22745078c196 | [
"Apache-2.0"
] | 2 | 2021-03-20T15:23:21.000Z | 2021-05-15T21:55:11.000Z | resources/guests.py | questionlp/api.wwdt.me | 38dc0a471f176969913b17f3c87c22745078c196 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# Copyright (c) 2018-2019 Linh Pham
# wwdtm is released under the terms of the Apache License 2.0
"""This module provides functions that handle the API endpoint requests
for /guest"""

import mysql.connector
from mysql.connector.errors import DatabaseError, ProgrammingError
from flask import Flask, jsonify, abort, make_response, request

from .dicts import error_dict, fail_dict, success_dict
from wwdtm.guest import details, info


def get_guests(database_connection: mysql.connector.connect):
    """Retrieve a list of guests and their corresponding information"""
    try:
        database_connection.reconnect()
        guests = info.retrieve_all(database_connection)
        if not guests:
            response = fail_dict("guests", "No guests found")
            return jsonify(response), 404

        return jsonify(success_dict("guests", guests)), 200
    except ProgrammingError:
        response = error_dict("Unable to retrieve guests from the database")
        return jsonify(response), 500
    except DatabaseError:
        response = error_dict("Database error occurred while retrieving "
                              "guests from the database")
        return jsonify(response), 500
    except:
        abort(500)


def get_guest_by_id(guest_id: int,
                    database_connection: mysql.connector.connect):
    """Retrieve a guest based on their ID"""
    try:
        database_connection.reconnect()
        guest_info = info.retrieve_by_id(guest_id, database_connection)
        if not guest_info:
            message = "Guest ID {} not found".format(guest_id)
            response = fail_dict("guest", message)
            return jsonify(response), 404

        return jsonify(success_dict("guest", guest_info)), 200
    except ProgrammingError:
        response = error_dict("Unable to retrieve guest information from the "
                              "database")
        return jsonify(response), 500
    except DatabaseError:
        response = error_dict("Database error occurred while retrieving "
                              "guest information")
        return jsonify(response), 500
    except:
        abort(500)


def get_guest_details_by_id(guest_id: int,
                            database_connection: mysql.connector.connect):
    """Retrieve a guest with their appearance data based on their ID"""
    try:
        database_connection.reconnect()
        guest_details = details.retrieve_by_id(guest_id, database_connection)
        if not guest_details:
            message = "Guest ID {} not found".format(guest_id)
            response = fail_dict("guest", message)
            return jsonify(response), 404

        return jsonify(success_dict("guest", guest_details)), 200
    except ProgrammingError:
        response = error_dict("Unable to retrieve guest information from the "
                              "database")
        return jsonify(response), 500
    except DatabaseError:
        response = error_dict("Database error occurred while retrieving "
                              "guest information")
        return jsonify(response), 500
    except:
        abort(500)


def get_guest_details(database_connection: mysql.connector.connect):
    """Retrieve all guests and their corresponding appearances"""
    try:
        database_connection.reconnect()
        guest_details = details.retrieve_all(database_connection)
        if not guest_details:
            response = fail_dict("guests", "No guests found")
            return jsonify(response), 404

        return jsonify(success_dict("guests", guest_details)), 200
    except ProgrammingError:
        response = error_dict("Unable to retrieve guests from the database")
        return jsonify(response), 500
    except DatabaseError:
        response = error_dict("Database error occurred while retrieving "
                              "guests information")
        return jsonify(response), 500
    except:
        abort(500)


def get_guest_by_slug(guest_slug: str,
                      database_connection: mysql.connector.connect):
    """Retrieve a guest based on their slug"""
    try:
        database_connection.reconnect()
        guest_info = info.retrieve_by_slug(guest_slug, database_connection)
        if not guest_info:
            message = "Guest slug '{}' not found".format(guest_slug)
            response = fail_dict("guest", message)
            return jsonify(response), 404

        return jsonify(success_dict("guest", guest_info)), 200
    except ProgrammingError:
        response = error_dict("Unable to retrieve guest information from the "
                              "database")
        return jsonify(response), 500
    except DatabaseError:
        response = error_dict("Database error occurred while retrieving "
                              "guest information")
        return jsonify(response), 500
    except:
        abort(500)


def get_guest_details_by_slug(guest_slug: str,
                              database_connection: mysql.connector.connect):
    """Retrieve a guest with their appearances based on their slug"""
    try:
        database_connection.reconnect()
        guest_details = details.retrieve_by_slug(guest_slug,
                                                 database_connection)
        if not guest_details:
            message = "Guest slug '{}' not found".format(guest_slug)
            response = fail_dict("guest", message)
            return jsonify(response), 404

        return jsonify(success_dict("guest", guest_details)), 200
    except ProgrammingError:
        response = error_dict("Unable to retrieve guest information from the "
                              "database")
        return jsonify(response), 500
    except DatabaseError:
        response = error_dict("Database error occurred while retrieving "
                              "guest information")
        return jsonify(response), 500
    except:
        abort(500)
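All six handlers share the same error-mapping scaffold, which a decorator could factor out. A dependency-free sketch (the `db_errors` name and the stub exception classes are illustrative; the real code would re-use mysql.connector's exceptions and return `jsonify`'d payloads):

```python
import functools

class ProgrammingError(Exception):
    """Stub for mysql.connector.errors.ProgrammingError."""

class DatabaseError(Exception):
    """Stub for mysql.connector.errors.DatabaseError."""

def db_errors(message):
    """Map database exceptions to a (payload, status) pair like the handlers above."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except ProgrammingError:
                return {"status": "error", "message": message}, 500
            except DatabaseError:
                return {"status": "error", "message": "Database error occurred"}, 500
        return wrapper
    return decorator

@db_errors("Unable to retrieve guests from the database")
def get_guests():
    raise ProgrammingError()

print(get_guests())
# ({'status': 'error', 'message': 'Unable to retrieve guests from the database'}, 500)
```

Each handler body would then shrink to the reconnect/retrieve/404 logic only.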
| 40.821918 | 78 | 0.638926 | 642 | 5,960 | 5.775701 | 0.146417 | 0.084142 | 0.101942 | 0.07767 | 0.857605 | 0.857605 | 0.825512 | 0.812567 | 0.791262 | 0.727886 | 0 | 0.023659 | 0.283725 | 5,960 | 145 | 79 | 41.103448 | 0.844929 | 0.085067 | 0 | 0.808333 | 0 | 0 | 0.155863 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0 | 0.041667 | 0 | 0.291667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
1597013669ac09a144259cbcf304f019041d08fb | 3,218 | py | Python | django_UI/App/Controller/User_Controller.py | raminfp/django_rails | ed266d184b3aab21f5e240bef03b3397a19bb6e3 | [
"Ruby",
"MIT"
] | 7 | 2016-02-19T07:36:22.000Z | 2016-04-29T14:41:57.000Z | django_UI/App/Controller/User_Controller.py | raminfp/django_rails | ed266d184b3aab21f5e240bef03b3397a19bb6e3 | [
"Ruby",
"MIT"
] | null | null | null | django_UI/App/Controller/User_Controller.py | raminfp/django_rails | ed266d184b3aab21f5e240bef03b3397a19bb6e3 | [
"Ruby",
"MIT"
] | 1 | 2016-02-19T07:28:10.000Z | 2016-02-19T07:28:10.000Z | from django.http.response import HttpResponse
import urllib3
import json
import base64


# <name.html>_Controller
def index(request):
    return HttpResponse("Welcome")


def show(request):
    '''
    # Non-thread-safe alternative with urllib2 (Python 2):
    # url_rails = 'http://127.0.0.1:3000/'
    # http = urllib2.urlopen(url_rails)
    # return HttpResponse(http.read())
    '''
    try:
        http = urllib3.PoolManager()
        url_rails = 'http://127.0.0.1:3000/'
        r = http.request('GET', url_rails)
        data = r.data
        return HttpResponse(data)
    except Exception as e:
        return HttpResponse("Error %s" % e)


def search(request):
    '''
    # Non-thread-safe request with urllib2 (Python 2):
    name = "ramin"
    url_rails = 'http://127.0.0.1:3000/user/search/%s' % name
    http = urllib2.urlopen(url_rails)
    return HttpResponse(http.read())
    '''
    # Thread-safe request
    try:
        name = "omid"
        http = urllib3.PoolManager()
        url_rails = 'http://127.0.0.1:3000/user/search/%s' % name
        r = http.request('GET', url_rails)
        data = r.data
        return HttpResponse(data)
    except Exception as e:
        return HttpResponse("Error %s" % e)


def create(request):
    try:
        firstname = "ali"
        lastname = "mahdavi"
        password = "123456"
        confirmpass = '123456'
        email = 'ramin.blackhat@gmail.com'
        is_active = True
        mem_id = 1
        model = {'firstname': firstname, 'lastname': lastname, 'password': password,
                 'confpassword': confirmpass, 'email': email, 'is_active': is_active,
                 'mem_id': mem_id}
        # base64 input must be bytes in Python 3; decode back to str for the URL
        json_model = base64.b64encode(json.dumps(model).encode()).decode()
        http = urllib3.PoolManager()
        url_rails = 'http://127.0.0.1:3000/user/create/%s' % json_model
        http.request('GET', url_rails)
        return HttpResponse("Create User Successfully")
    except Exception as e:
        return HttpResponse("Error : %s" % e)


def delete(request, id):
    try:
        http = urllib3.PoolManager()
        url_rails = 'http://127.0.0.1:3000/user/delete/%s' % id
        http.request('GET', url_rails)
        return HttpResponse("Delete User Successfully")
    except Exception as e:
        return HttpResponse("Error %s " % e)


def edit(request, id):
    try:
        getEditID = id
        firstname = "ali"
        lastname = "mahdavi"
        password = "123456"
        confirmpass = '123456'
        email = 'ramin.blackhat@gmail.com'
        is_active = True
        mem_id = 1
        model = {'firstname': firstname, 'lastname': lastname, 'password': password,
                 'confpassword': confirmpass, 'email': email, 'is_active': is_active,
                 'mem_id': mem_id}
        json_model = base64.b64encode(json.dumps(model).encode()).decode()
        http = urllib3.PoolManager()
        url_rails = 'http://127.0.0.1:3000/user/edit/%s/%s' % (getEditID, json_model)
        http.request('GET', url_rails)
        return HttpResponse("Edit User Successfully")
    except Exception as e:
        return HttpResponse("Error %s " % e)
ecb46413562615ad5c879445f53da9712cd350d1 | 182 | py | Python | module_class_utils.py | dizhouwu/beautilful | 2fd0356385316ab4b645d4ac01a8219a9b90653f | [
"Apache-2.0"
] | null | null | null | module_class_utils.py | dizhouwu/beautilful | 2fd0356385316ab4b645d4ac01a8219a9b90653f | [
"Apache-2.0"
] | null | null | null | module_class_utils.py | dizhouwu/beautilful | 2fd0356385316ab4b645d4ac01a8219a9b90653f | [
"Apache-2.0"
] | null | null | null | import importlib
def load_class(package_name, module_name, class_name):
    module = importlib.import_module(f"{package_name}.{module_name}")
    return getattr(module, class_name)
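For illustration, here is `load_class` resolving a standard-library class by its dotted location (the definition is repeated so the snippet runs on its own; the `collections.abc` target is just an example):

```python
import importlib

def load_class(package_name, module_name, class_name):
    module = importlib.import_module(f"{package_name}.{module_name}")
    return getattr(module, class_name)

# "collections" + "abc" -> imports collections.abc, then pulls out the Mapping ABC
Mapping = load_class("collections", "abc", "Mapping")
print(Mapping.__name__)  # Mapping
```

If the attribute is missing, `getattr` raises `AttributeError`; a caller wanting a soft failure could pass a default via `getattr(module, class_name, None)`.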
| 30.333333 | 69 | 0.78022 | 25 | 182 | 5.36 | 0.44 | 0.223881 | 0.253731 | 0.313433 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115385 | 182 | 5 | 70 | 36.4 | 0.832298 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 0.153846 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.5 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
179cd52ead95ff2dcdd78a6f068a8130ca643f76 | 131 | py | Python | NumbersAndMath.py | G3Code-CS/python3-tutorial | 518463e16fd09c31292377d8c79c9f4f940fdf62 | [
"MIT"
] | null | null | null | NumbersAndMath.py | G3Code-CS/python3-tutorial | 518463e16fd09c31292377d8c79c9f4f940fdf62 | [
"MIT"
] | null | null | null | NumbersAndMath.py | G3Code-CS/python3-tutorial | 518463e16fd09c31292377d8c79c9f4f940fdf62 | [
"MIT"
] | null | null | null | print(type(10.5))
print(1/3)
# operators with numbers.
print(2**3)  # This is exponentiation
print(10 % 3)
print(10/3)
print(10//3)
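Floor division and modulo come as a pair: `divmod` returns both at once, and they always satisfy `a == (a // b) * b + a % b`:

```python
# divmod combines floor division (//) and modulo (%) in one call
q, r = divmod(10, 3)
print(q, r)       # 3 1
print(q * 3 + r)  # 10, reconstructing the dividend
```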
| 16.375 | 34 | 0.664122 | 25 | 131 | 3.48 | 0.52 | 0.241379 | 0.275862 | 0.298851 | 0.275862 | 0.275862 | 0 | 0 | 0 | 0 | 0 | 0.140351 | 0.129771 | 131 | 7 | 35 | 18.714286 | 0.622807 | 0.328244 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 7 |
bd61216586d9d726fbdf1b783f56e0e609df7d9e | 79,565 | py | Python | bpm/dataset/__init__.py | Sunnyalicecai069960/MaskPooling | d2c37b2ed6795c4d6901b23347f679221e3942a9 | [
"MIT"
] | null | null | null | bpm/dataset/__init__.py | Sunnyalicecai069960/MaskPooling | d2c37b2ed6795c4d6901b23347f679221e3942a9 | [
"MIT"
] | null | null | null | bpm/dataset/__init__.py | Sunnyalicecai069960/MaskPooling | d2c37b2ed6795c4d6901b23347f679221e3942a9 | [
"MIT"
] | null | null | null | import numpy as np
import os.path as osp
ospj = osp.join
ospeu = osp.expanduser
from ..utils.utils import load_pickle
from ..utils.dataset_utils import parse_im_name
from .TrainSet import TrainSet
from .TestSet import TestSet
def create_dataset(
name='market1501',
part='trainval',
**kwargs):
assert name in ['cuhk03_original_2','cuhk03_original_np','market30_retain','market30_retain_pixel3', \
'market30_retain_rand_1', 'market30_retain_4_1', 'market30_retain_7_1', \
'market30_retain_72_1', 'market30_retain_mask_batch_18_4', 'market30_retain_mask_batch_32_4', 'market30_retain_mask_batch_64_4', \
'market30_retain_batch_18_4', 'market30_retain_batch_32_4', 'market30_retain_batch_64_4', \
'market30_retain_rpp_4_1', \
'market30_retain_pixel3_rand_1', 'market30_retain_pixel3_4_1', 'market30_retain_pixel3_7_1', \
'market30_retain_pixel3_72_1', 'market30_retain_pixel3_mask_batch_18_4', 'market30_retain_pixel3_mask_batch_32_4', 'market30_retain_pixel3_mask_batch_64_4', \
'market30_retain_pixel3_41_batch_18_4', 'market30_retain_pixel3_41_batch_32_4', 'market30_retain_pixel3_41_batch_64_4', \
'market30_retain_pixel3_71_batch_18_4', 'market30_retain_pixel3_71_batch_32_4', 'market30_retain_pixel3_71_batch_64_4', \
'market30_retain_pixel3_721_batch_18_4', 'market30_retain_pixel3_721_batch_32_4', 'market30_retain_pixel3_721_batch_64_4', \
'market30_retain_pixel3_batch_18_4', 'market30_retain_pixel3_batch_32_4', 'market30_retain_pixel3_batch_64_4', \
'cuhk03', 'cuhk03_33_np', 'cuhk03_33_np_retain','cuhk03_33_1','cuhk03_33_2','cuhk03_33_3','cuhk03_33_4','cuhk03_33_5','cuhk03_33_6','cuhk03_33_7','cuhk03_33_8','cuhk03_33_9','cuhk03_33_10', \
'cuhk03_33_11','cuhk03_33_12','cuhk03_33_13','cuhk03_33_14','cuhk03_33_15','cuhk03_33_16','cuhk03_33_17','cuhk03_33_18','cuhk03_33_19','cuhk03_33_20', \
'cuhk33_retain_rand_1', 'cuhk33_retain_4_1', 'cuhk33_retain_7_1', 'cuhk03_33_np_retain_d', 'cuhk03_33_np_retain_l', \
'cuhk33_retain_batch_18_4', 'cuhk33_retain_batch_32_4', 'cuhk33_retain_batch_64_4', \
'cuhk03_33_np_retain_4_1', 'cuhk03_33_np_retain_7_1', \
'cuhk03_33_2_retain_4_1', 'cuhk03_33_2_retain_7_1', \
'cuhk03_33_4_retain_4_1', 'cuhk03_33_9_retain_4_1', \
'cuhk03_33_11_retain_4_1', 'cuhk03_33_11_retain_7_1', \
'duke','duke30','duke33', 'duke_33_pixel5', \
'duke_33_pixel5_4_1', 'duke_33_pixel5_7_1', 'duke_33_pixel5_batch_18_4', 'duke_33_pixel5_batch_32_4', 'duke_33_pixel5_batch_64_4', \
'duke_33_pixel5_mask_batch_18_4', 'duke_33_pixel5_mask_batch_32_4', 'duke_33_pixel5_mask_batch_64_4', \
'duke_33_pixel5_41_batch_18_4', 'duke_33_pixel5_41_batch_32_4', 'duke_33_pixel5_41_batch_64_4', \
'duke_33_pixel5_71_batch_18_4', 'duke_33_pixel5_71_batch_32_4', 'duke_33_pixel5_71_batch_64_4', \
'viper30','viper33', 'combined'], \
"Unsupported Dataset {}".format(name)
# assert im_type in ['detected', 'labeled'], \
# "Unsupported Dataset Images Type {}".format(im_type)
assert part in ['trainval', 'train', 'val', 'test'], \
"Unsupported Dataset Part {}".format(part)
########################################
# Specify Directory and Partition File #
########################################
# market30_retain
if ('market30_retain' in name) and ('pixel' not in name):
im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_0/images')
if 'batch' in name:
if 'mask' in name:
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/tri/Market_3_retain/Market_3_extend_trans_end_0/batch_hard_1/new_shuffle_apn_partitions_mask_batch_'+ name[-4:] +'.pkl')
else:
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/tri/Market_3_retain/Market_3_extend_trans_end_0/batch_hard_1/new_shuffle_apn_partitions_batch_'+ name[-4:] +'.pkl')
elif 'rand_1' in name:
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/tri/Market_3_retain/Market_3_extend_trans_end_0/new_shuffle_apn_partitions_rand_1.pkl')
elif name[-3:]=='4_1':
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/tri/Market_3_retain/Market_3_extend_trans_end_0/new_shuffle_apn_partitions_4_1.pkl')
elif name[-3:]=='7_1':
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/tri/Market_3_retain/Market_3_extend_trans_end_0/new_shuffle_apn_partitions_7_1.pkl')
elif name[-4:]=='72_1':
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/tri/Market_3_retain/Market_3_extend_trans_end_0/new_shuffle_apn_partitions_72_1.pkl')
elif name == 'market30_retain':
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_0/partitions.pkl')
###
# cuhk03_original_1, cuhk03_original_2, ... , cuhk03_original_19, cuhk03_original_20
# cuhk03_original_np
elif 'cuhk03_original' in name:
im_type = ['detected', 'labeled'][0]
im_dir = ospeu(ospj('/GPUFS/nsccgz_ywang_1/wangying/DataSet/cuhk03', im_type, 'images'))
partition_file = ospeu(ospj('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/cuhk03_33_retain', im_type, 'partitions_'+ name.split('_')[-1] +'.pkl'))
# market30_retain_pixel3
elif 'market30_retain_pixel3' in name:
im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/images')
if 'batch' in name:
if 'mask' in name:
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/tri/Market_3_pixel/Market_pixel_end_3/batch_hard_1/new_shuffle_apn_partitions_mask_batch_'+ name[-4:] +'.pkl')
elif name.split('_')[3]=='41':
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/tri/Market_3_pixel/Market_pixel_end_3/batch_hard_1/new_shuffle_apn_partitions_41_batch_'+ name[-4:] +'.pkl')
elif name.split('_')[3]=='71':
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/tri/Market_3_pixel/Market_pixel_end_3/batch_hard_1/new_shuffle_apn_partitions_71_batch_'+ name[-4:] +'.pkl')
elif name.split('_')[3]=='721':
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/tri/Market_3_pixel/Market_pixel_end_3/batch_hard_1/new_shuffle_apn_partitions_721_batch_'+ name[-4:] +'.pkl')
else:
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/tri/Market_3_pixel/Market_pixel_end_3/batch_hard_1/new_shuffle_apn_partitions_batch_'+ name[-4:] +'.pkl')
elif 'rand_1' in name:
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/tri/Market_3_pixel/Market_pixel_end_3/new_shuffle_apn_partitions_rand_1.pkl')
elif name[-3:]=='4_1':
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/tri/Market_3_pixel/Market_pixel_end_3/new_shuffle_apn_partitions_4_1.pkl')
elif name[-3:]=='7_1':
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/tri/Market_3_pixel/Market_pixel_end_3/new_shuffle_apn_partitions_7_1.pkl')
elif name[-4:]=='72_1':
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/tri/Market_3_pixel/Market_pixel_end_3/new_shuffle_apn_partitions_72_1.pkl')
elif name == 'market30_retain_pixel3':
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/partitions.pkl')
# 767/700 CUHK03_NP
# before cvpr2019
# elif name == 'cuhk03_33_np_retain':
# im_type = ['detected', 'labeled'][0]
# im_dir = ospeu(ospj('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/cuhk03_3_retain/cuhk03_3_extend_trans_end_3', im_type, 'images'))
# partition_file = ospeu(ospj('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/cuhk03_3_retain/cuhk03_3_extend_trans_end_3', im_type, 'partitions.pkl'))
###
elif name == 'cuhk03_33_np_retain_d':
im_dir = ospeu(ospj('/GPUFS/nsccgz_ywang_1/wangying/DataSet/cuhk03_3_retain/cuhk03_3_extend_trans_end_3/detected/images'))
partition_file = ospeu(ospj('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/cuhk03_33_retain/detected/partitions_np.pkl'))
elif name == 'cuhk03_33_np_retain_l':
im_dir = ospeu(ospj('/GPUFS/nsccgz_ywang_1/wangying/DataSet/cuhk03_3_retain/cuhk03_3_extend_trans_end_3/labeled/images'))
partition_file = ospeu(ospj('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/cuhk03_33_retain/labeled/partitions_np.pkl'))
###
# 1367/100: cuhk03_33_1, cuhk_33_2, ... , cuhk03_33_19, cuhk03_33_20
# 767/700: cuhk03_33_np
elif ('cuhk03_33' in name) and ('retain' not in name):
im_type = ['detected', 'labeled'][0]
im_dir = ospeu(ospj('/GPUFS/nsccgz_ywang_1/wangying/DataSet/cuhk03_3_retain/cuhk03_3_extend_trans_end_3', im_type, 'images'))
partition_file = ospeu(ospj('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/cuhk03_33_retain', im_type, 'partitions_'+ name.split('_')[-1] +'.pkl'))
# cuhk03_33_retain_tri
elif ('cuhk03_33' in name) and ('retain' in name):
# 1367/100: cuhk03_33_1_retain_4_1, cuhk03_33_2_retain_4_1, ... , cuhk03_33_19_retain_4_1,cuhk03_33_20_retain_4_1
# 767/700: cuhk03_33_np_retain_4_1
im_type = ['detected', 'labeled'][0]
im_dir = ospeu(ospj('/GPUFS/nsccgz_ywang_1/wangying/DataSet/cuhk03_3_retain/cuhk03_3_extend_trans_end_3', im_type, 'images'))
    if name[-3:] == '4_1':
      partition_file = ospeu(ospj('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/tri', name[:-4], im_type, 'new_shuffle_apn_partitions_4_1.pkl'))
    elif name[-3:] == '7_1':
      partition_file = ospeu(ospj('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/tri', name[:-4], im_type, 'new_shuffle_apn_partitions_7_1.pkl'))
    elif name[-4:] == '72_1':
      partition_file = ospeu(ospj('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/tri', name[:-5], im_type, 'new_shuffle_apn_partitions_72_1.pkl'))
elif name == 'duke':
im_dir = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/duke/images')
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/duke/partitions.pkl')
elif name == 'duke30':
im_dir = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/duke30/images')
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/duke30/partitions.pkl')
elif name == 'duke33':
im_dir = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/duke33/images')
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/duke33/partitions.pkl')
elif 'duke_33_pixel5' in name:
im_dir = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/duke_33_pixel5/images')
if 'batch' in name:
if 'mask' in name:
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/tri/duke_33_pixel5/batch_hard_1/new_shuffle_apn_partitions_mask_batch_'+ name[-4:] +'.pkl')
else:
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/tri/duke_33_pixel5/batch_hard_1/new_shuffle_apn_partitions_batch_'+ name[-4:] +'.pkl')
elif 'rand_1' in name:
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/tri/duke_33_pixel5/new_shuffle_apn_partitions_rand_1.pkl')
elif name[-3:]=='4_1':
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/tri/duke_33_pixel5/new_shuffle_apn_partitions_4_1.pkl')
elif name[-3:]=='7_1':
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/tri/duke_33_pixel5/new_shuffle_apn_partitions_7_1.pkl')
elif name[-4:]=='72_1':
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/tri/duke_33_pixel5/new_shuffle_apn_partitions_72_1.pkl')
else:
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/duke_33_pixel5/partitions.pkl')
elif name == 'viper30':
im_dir = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/viper30/images')
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/viper30/partitions.pkl')
elif name == 'viper33':
im_dir = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/viper33/images')
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/viper33/partitions.pkl')
elif name == 'combined':
assert part in ['trainval'], \
"Only trainval part of the combined dataset is available now."
im_dir = ospeu('/mnt/data/dataset/pcb/market1501_cuhk03_duke/trainval_images')
partition_file = ospeu('/mnt/data/dataset/pcb/market1501_cuhk03_duke/partitions.pkl')
##################
# Create Dataset #
##################
# Use standard Market1501 CMC settings for all datasets here.
cmc_kwargs = dict(separate_camera_set=False,
single_gallery_shot=False,
first_match_break=True)
partitions = load_pickle(partition_file)
im_names = partitions['{}_im_names'.format(part)]
if part == 'trainval':
ids2labels = partitions['trainval_ids2labels']
ret_set = TrainSet(
im_dir=im_dir,
im_names=im_names,
ids2labels=ids2labels,
**kwargs)
elif part == 'train':
ids2labels = partitions['train_ids2labels']
ret_set = TrainSet(
im_dir=im_dir,
im_names=im_names,
ids2labels=ids2labels,
**kwargs)
elif part == 'val':
marks = partitions['val_marks']
kwargs.update(cmc_kwargs)
ret_set = TestSet(
im_dir=im_dir,
im_names=im_names,
marks=marks,
**kwargs)
elif part == 'test':
marks = partitions['test_marks']
kwargs.update(cmc_kwargs)
ret_set = TestSet(
im_dir=im_dir,
im_names=im_names,
marks=marks,
**kwargs)
if part in ['trainval', 'train']:
num_ids = len(ids2labels)
elif part in ['val', 'test']:
ids = [parse_im_name(n, 'id') for n in im_names]
num_ids = len(list(set(ids)))
num_query = np.sum(np.array(marks) == 0)
num_gallery = np.sum(np.array(marks) == 1)
num_multi_query = np.sum(np.array(marks) == 2)
# Print dataset information
print('-' * 40)
print('{} {} set'.format(name, part))
print('-' * 40)
print('NO. Images: {}'.format(len(im_names)))
print('NO. IDs: {}'.format(num_ids))
  try:
    print('NO. Query Images: {}'.format(num_query))
    print('NO. Gallery Images: {}'.format(num_gallery))
    print('NO. Multi-query Images: {}'.format(num_multi_query))
  except NameError:
    # num_query/num_gallery/num_multi_query only exist for the 'val'/'test' parts.
    pass
print('-' * 40)
return ret_set
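The long if/elif ladder in `create_dataset` maps a dataset name to an `(im_dir, partition_file)` pair by prefix and suffix matching. The same dispatch can be sketched with a lookup table; the roots and names below are hypothetical stand-ins for the real `/GPUFS/...` locations:

```python
import os.path as osp

# Hypothetical roots standing in for the real dataset directories.
_ROOTS = {
    'market30_retain': '/data/Market_3_retain',
    'duke_33_pixel5': '/data/duke_33_pixel5',
}

def resolve_partition(name):
    # Find the dataset family by the longest matching name prefix.
    matches = [k for k in _ROOTS if name.startswith(k)]
    if not matches:
        raise ValueError("Unsupported Dataset {}".format(name))
    root = _ROOTS[max(matches, key=len)]
    # The suffix picks the partition variant, mirroring the name[-3:]/name[-4:] checks.
    for suffix in ('72_1', '4_1', '7_1'):
        if name.endswith(suffix):
            return osp.join(root, 'new_shuffle_apn_partitions_{}.pkl'.format(suffix))
    return osp.join(root, 'partitions.pkl')

print(resolve_partition('market30_retain_4_1'))
print(resolve_partition('duke_33_pixel5'))
```

Checking the longer `72_1` suffix before `4_1`/`7_1` matters for the same reason the original compares `name[-4:]` separately from `name[-3:]`.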
def create_dataset_tri(
name='market1501',
part='trainval',
flag='anchor',
**kwargs):
assert name in ['market30_retain_rand_1', 'market30_retain_4_1', 'market30_retain_7_1', 'market30_retain_8_1', 'market30_retain_9_1', \
'market30_retain_72_1', 'market30_retain_mask_batch_18_4', 'market30_retain_mask_batch_32_4', 'market30_retain_mask_batch_64_4', \
'market30_retain_batch_18_4', 'market30_retain_batch_32_4', 'market30_retain_batch_64_4', \
'market30_retain_rpp_4_1', \
'market30_retain_pixel3_rand_1', 'market30_retain_pixel3_4_1', 'market30_retain_pixel3_7_1', 'market30_retain_pixel3_8_1', 'market30_retain_pixel3_9_1', \
'market30_retain_pixel3_72_1', 'market30_retain_pixel3_mask_batch_18_4', 'market30_retain_pixel3_mask_batch_32_4', 'market30_retain_pixel3_mask_batch_64_4', \
'market30_retain_pixel3_41_batch_18_4', 'market30_retain_pixel3_41_batch_32_4', 'market30_retain_pixel3_41_batch_64_4', \
'market30_retain_pixel3_71_batch_18_4', 'market30_retain_pixel3_71_batch_32_4', 'market30_retain_pixel3_71_batch_64_4', \
'market30_retain_pixel3_721_batch_18_4', 'market30_retain_pixel3_721_batch_32_4', 'market30_retain_pixel3_721_batch_64_4', \
'market30_retain_pixel3_batch_18_4', 'market30_retain_pixel3_batch_32_4', 'market30_retain_pixel3_batch_64_4', \
'cuhk33_retain_rand_1', 'cuhk33_retain_4_1', 'cuhk33_retain_7_1', \
'cuhk03_33_np_retain_4_1', 'cuhk03_33_np_retain_7_1', \
'cuhk03_33_2_retain_4_1', 'cuhk03_33_2_retain_7_1', \
'cuhk03_33_4_retain_4_1', 'cuhk03_33_9_retain_4_1', \
'cuhk03_33_11_retain_4_1', 'cuhk03_33_11_retain_7_1', \
'duke', 'duke_33_pixel5', 'duke_33_pixel5_4_1', 'duke_33_pixel5_7_1', 'duke_33_pixel5_batch_18_4', 'duke_33_pixel5_batch_32_4', 'duke_33_pixel5_batch_64_4', \
'duke_33_pixel5_mask_batch_18_4', 'duke_33_pixel5_mask_batch_32_4', 'duke_33_pixel5_mask_batch_64_4', \
'duke_33_pixel5_41_batch_18_4', 'duke_33_pixel5_41_batch_32_4', 'duke_33_pixel5_41_batch_64_4', \
'duke_33_pixel5_71_batch_18_4', 'duke_33_pixel5_71_batch_32_4', 'duke_33_pixel5_71_batch_64_4', \
'combined'], \
"Unsupported Dataset {}".format(name)
assert part in ['trainval', 'train', 'val', 'test'], \
"Unsupported Dataset Part {}".format(part)
########################################
# Specify Directory and Partition File #
########################################
# market30_retain
if ('market30_retain' in name) and ('pixel' not in name):
im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_0/images')
if 'batch' in name:
if 'mask' in name:
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/tri/Market_3_retain/Market_3_extend_trans_end_0/batch_hard_1/new_shuffle_apn_partitions_mask_batch_'+ name[-4:] +'.pkl')
else:
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/tri/Market_3_retain/Market_3_extend_trans_end_0/batch_hard_1/new_shuffle_apn_partitions_batch_'+ name[-4:] +'.pkl')
elif 'rand_1' in name:
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/tri/Market_3_retain/Market_3_extend_trans_end_0/new_shuffle_apn_partitions_rand_1.pkl')
elif name[-3:]=='4_1':
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/tri/Market_3_retain/Market_3_extend_trans_end_0/new_shuffle_apn_partitions_4_1.pkl')
elif name[-3:]=='7_1':
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/tri/Market_3_retain/Market_3_extend_trans_end_0/new_shuffle_apn_partitions_7_1.pkl')
elif name[-4:]=='72_1':
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/tri/Market_3_retain/Market_3_extend_trans_end_0/new_shuffle_apn_partitions_72_1.pkl')
elif name == 'market30_retain_rpp_4_1':
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/tri_rpp/market30_retain/new_shuffle_apn_partitions_4_1.pkl')
# market30_retain_pixel3
elif 'market30_retain_pixel3' in name:
im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/images')
if 'batch' in name:
if 'mask' in name:
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/tri/Market_3_pixel/Market_pixel_end_3/batch_hard_1/new_shuffle_apn_partitions_mask_batch_'+ name[-4:] +'.pkl')
elif name.split('_')[3]=='41':
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/tri/Market_3_pixel/Market_pixel_end_3/batch_hard_1/new_shuffle_apn_partitions_41_batch_'+ name[-4:] +'.pkl')
elif name.split('_')[3]=='71':
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/tri/Market_3_pixel/Market_pixel_end_3/batch_hard_1/new_shuffle_apn_partitions_71_batch_'+ name[-4:] +'.pkl')
elif name.split('_')[3]=='721':
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/tri/Market_3_pixel/Market_pixel_end_3/batch_hard_1/new_shuffle_apn_partitions_721_batch_'+ name[-4:] +'.pkl')
else:
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/tri/Market_3_pixel/Market_pixel_end_3/batch_hard_1/new_shuffle_apn_partitions_batch_'+ name[-4:] +'.pkl')
elif 'rand_1' in name:
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/tri/Market_3_pixel/Market_pixel_end_3/new_shuffle_apn_partitions_rand_1.pkl')
elif name[-3:]=='4_1':
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/tri/Market_3_pixel/Market_pixel_end_3/new_shuffle_apn_partitions_4_1.pkl')
elif name[-3:]=='7_1':
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/tri/Market_3_pixel/Market_pixel_end_3/new_shuffle_apn_partitions_7_1.pkl')
elif name[-4:]=='72_1':
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/tri/Market_3_pixel/Market_pixel_end_3/new_shuffle_apn_partitions_72_1.pkl')
# before cvpr2019
# elif name == 'cuhk33_retain_4_1':
# im_type = ['detected', 'labeled'][1]
# im_dir = ospeu(ospj('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/cuhk03_3_retain/cuhk03_3_extend_trans_end_3', im_type, 'images'))
# partition_file = ospeu(ospj('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/cuhk03_3_retain/cuhk03_3_extend_trans_end_3', im_type, im_type+'_new_shuffle_apn_partitions_4_1.pkl'))
# cuhk03_33_retain
elif ('cuhk03_33' in name) and ('retain' in name):
# 1367/100: cuhk03_33_1_retain_4_1, cuhk03_33_2_retain_4_1, ... , cuhk03_33_19_retain_4_1,cuhk03_33_20_retain_4_1
# 767/700: cuhk03_33_np_retain_4_1
im_type = ['detected', 'labeled'][0]
im_dir = ospeu(ospj('/GPUFS/nsccgz_ywang_1/wangying/DataSet/cuhk03_3_retain/cuhk03_3_extend_trans_end_3', im_type, 'images'))
    if name[-3:] == '4_1':
      partition_file = ospeu(ospj('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/tri', name[:-4], im_type, 'new_shuffle_apn_partitions_4_1.pkl'))
    elif name[-3:] == '7_1':
      partition_file = ospeu(ospj('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/tri', name[:-4], im_type, 'new_shuffle_apn_partitions_7_1.pkl'))
    elif name[-4:] == '72_1':
      partition_file = ospeu(ospj('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/tri', name[:-5], im_type, 'new_shuffle_apn_partitions_72_1.pkl'))
# duke
# duke_33_pixel5
elif 'duke_33_pixel5' in name:
im_dir = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/duke_33_pixel5/images')
if 'batch' in name:
if 'mask' in name:
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/tri/duke_33_pixel5/batch_hard_1/new_shuffle_apn_partitions_mask_batch_'+ name[-4:] +'.pkl')
else:
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/tri/duke_33_pixel5/batch_hard_1/new_shuffle_apn_partitions_batch_'+ name[-4:] +'.pkl')
elif 'rand_1' in name:
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/tri/duke_33_pixel5/new_shuffle_apn_partitions_rand_1.pkl')
elif name[-3:]=='4_1':
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/tri/duke_33_pixel5/new_shuffle_apn_partitions_4_1.pkl')
elif name[-3:]=='7_1':
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/tri/duke_33_pixel5/new_shuffle_apn_partitions_7_1.pkl')
elif name[-4:]=='72_1':
partition_file = ospeu('/GPUFS/nsccgz_ywang_1/wangying/DataSet/pcb/trans/tri/duke_33_pixel5/new_shuffle_apn_partitions_72_1.pkl')
elif name == 'combined':
assert part in ['trainval'], \
"Only trainval part of the combined dataset is available now."
im_dir = ospeu('~/Dataset/market1501_cuhk03_duke/trainval_images')
partition_file = ospeu('~/Dataset/market1501_cuhk03_duke/partitions.pkl')
##################
# Create Dataset #
##################
# Use standard Market1501 CMC settings for all datasets here.
cmc_kwargs = dict(separate_camera_set=False,
single_gallery_shot=False,
first_match_break=True)
partitions = load_pickle(partition_file)
im_names = partitions['{}_{}_im_names'.format(part, flag)]
if part == 'trainval':
ids2labels = partitions['trainval_ids2labels']
ret_set = TrainSet(
im_dir=im_dir,
im_names=im_names,
ids2labels=ids2labels,
**kwargs)
elif part == 'train':
ids2labels = partitions['train_ids2labels']
ret_set = TrainSet(
im_dir=im_dir,
im_names=im_names,
ids2labels=ids2labels,
**kwargs)
elif part == 'val':
marks = partitions['val_marks']
kwargs.update(cmc_kwargs)
ret_set = TestSet(
im_dir=im_dir,
im_names=im_names,
marks=marks,
**kwargs)
elif part == 'test':
marks = partitions['test_marks']
kwargs.update(cmc_kwargs)
ret_set = TestSet(
im_dir=im_dir,
im_names=im_names,
marks=marks,
**kwargs)
if part in ['trainval', 'train']:
num_ids = len(ids2labels)
elif part in ['val', 'test']:
ids = [parse_im_name(n, 'id') for n in im_names]
num_ids = len(list(set(ids)))
num_query = np.sum(np.array(marks) == 0)
num_gallery = np.sum(np.array(marks) == 1)
num_multi_query = np.sum(np.array(marks) == 2)
# Print dataset information
print('-' * 40)
print('{} {} set'.format(name, part))
print('-' * 40)
print('NO. Images: {}'.format(len(im_names)))
print('NO. IDs: {}'.format(num_ids))
  try:
    print('NO. Query Images: {}'.format(num_query))
    print('NO. Gallery Images: {}'.format(num_gallery))
    print('NO. Multi-query Images: {}'.format(num_multi_query))
  except NameError:
    # num_query/num_gallery/num_multi_query only exist for the 'val'/'test' parts.
    pass
print('-' * 40)
return ret_set
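`create_dataset_tri` differs from `create_dataset` mainly in the partition key it reads: `'{part}_{flag}_im_names'` instead of `'{part}_im_names'`, so one pickle can hold separate anchor/positive/negative image lists. A minimal sketch of that lookup against a toy partitions dict (keys and file names invented for illustration):

```python
# Toy partitions dict with the key layout the two loaders expect.
partitions = {
    'trainval_im_names': ['0001_c1.png', '0002_c2.png'],
    'trainval_anchor_im_names': ['0001_c1_a.png'],
    'trainval_pos_im_names': ['0001_c1_p.png'],
}

def im_names_for(part, flag=None):
    # flag=None reproduces create_dataset; passing a flag reproduces create_dataset_tri.
    key = '{}_im_names'.format(part) if flag is None else '{}_{}_im_names'.format(part, flag)
    return partitions[key]

print(im_names_for('trainval'))            # plain loader: all trainval images
print(im_names_for('trainval', 'anchor'))  # triplet loader: anchor stream only
```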
# import numpy as np
# import os.path as osp
# ospj = osp.join
# ospeu = osp.expanduser
# from ..utils.utils import load_pickle
# from ..utils.dataset_utils import parse_im_name
# from .TrainSet import TrainSet
# from .TestSet import TestSet
# def create_dataset(
# name='market1501',
# part='trainval',
# **kwargs):
# assert name in ['market_png_4_1','market_png','market30_retain_pixel3_rand_1','market30_retain_pixel1_4_1','market30_retain_pixel2_4_1','market30_retain_pixel4_4_1','market30_retain_pixel5_4_1',\
# 'market30_retain_pixel6_4_1','market30_retain_pixel7_4_1','market30_retain_pixel8_4_1','market30_retain_pixel9_4_1',\
# 'market30_retain_pixel10_4_1','market30_retain_pixel1','market30_retain_pixel2','market30_retain_pixel4','market30_retain_pixel5','market30_retain_pixel6',\
# 'market30_retain_pixel7','market30_retain_pixel8','market30_retain_pixel9','market30_retain_pixel10',\
# 'market30_retain_rand_1','market30_retain_pixel3_3_1','market30_retain_pixel3_4_1',\
# 'market30_retain_pixel3_5_3','market30_retain_pixel3_rand_1','market30_retain_pixel3',\
# 'cuhk33_retain_3_1','cuhk33_retain_4','cuhk33_retain_4_1','cuhk33_retain_5','cuhk33_retain_5_3','cuhk33_retain_5_6',\
# 'market30_retain_3_1','market30_retain_4','market30_retain_4_1','market30_retain_5',\
# 'market30_retain_5_3','market30_retain_5_6','market33_retain_5','market33_retain_5_3',\
# 'market33_retain_5_6','market33_retain_3','market33_retain_3_1','market33_retain_4','market33_retain_4_1',\
# 'market30_retain_pixel0_4_1','market30_retain_pixel0_5_6','market30_retain_pixel0_5_3',\
# 'market30_retain_pixel0_5','market30_retain_pixel0_4_5','market30_retain_pixel0_3_1',\
# 'cuhk33_retain_3','mars30_retain_pixel7','mars32_retain_pixel7','mars33_retain_pixel7',\
# 'market30_retain_pixel0','market30_retain_2','market30_retain_3','market30_retain_pixel0_2',\
# 'market30_retain_pixel0_3','mars_oldmask_retain','mars','mars20','mars22','mars23','mars30',\
# 'mars32','mars33','market','cuhk20','cuhk22','cuhk23','cuhk20_retain','cuhk22_retain',\
# 'cuhk23_retain','cuhk30','cuhk32','cuhk33','cuhk30_retain','cuhk32_retain','cuhk33_retain',\
# 'cuhk40','cuhk42','cuhk43','cuhk40_retain','cuhk42_retain','cuhk43_retain','market1501',\
# 'market_combined','market23','market22', 'market20','market20_retain','market22_retain',\
# 'market23_retain', 'market30','market32','market33','market30_retain','market32_retain',\
# 'market33_retain','market40','market42','market43','market40_retain','market42_retain',\
# 'market43_retain','market_oldmask','market_oldmask_retain','market_trans','market_png',\
# 'market1501', 'cuhk03', 'duke', 'combined'], \
# "Unsupported Dataset {}".format(name)
# assert part in ['trainval', 'train', 'val', 'test'], \
# "Unsupported Dataset Part {}".format(part)
# ########################################
# # Specify Directory and Partition File #
# ########################################
# if name == 'market1501':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/market1501/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/market1501/partitions.pkl')
# elif name == 'market_png':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_origin/market-1501-png/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_origin/market-1501-png/partitions.pkl')
# elif name == 'market_png_4_1':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_origin/market-1501-png/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/new_shuffle_apn_partitions_rand_1.pkl')
# elif name == 'market30_retain_pixel3_3_1':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/new_shuffle_apn_partitions_3_1.pkl')
# elif name == 'market30_retain_pixel3_4_1':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/new_shuffle_apn_partitions_4_1.pkl')
# elif name == 'market30_retain_pixel3_5_3':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/new_shuffle_apn_partitions_5_3.pkl')
# elif name == 'market30_retain_pixel3_rand_1':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/new_shuffle_apn_partitions_rand_1.pkl')
# elif name == 'market33_retain_5':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_3/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_3/new_shuffle_apn_partitions_5_5.pkl')
# elif name == 'market33_retain_5_3':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_3/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_3/new_shuffle_apn_partitions_5_3.pkl')
# elif name == 'market33_retain_5_6':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_3/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_3/new_shuffle_apn_partitions_5_6.pkl')
# elif name == 'market33_retain':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_3/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_3/partitions.pkl')
# elif name == 'market33_retain_3':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_3/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_3/new_shuffle_apn_partitions_3.pkl')
# elif name == 'market33_retain_3_1':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_3/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_3/new_shuffle_apn_partitions_3_1.pkl')
# elif name == 'market33_retain_4':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_3/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_3/new_shuffle_apn_partitions_4_5.pkl')
# elif name == 'market33_retain_4_1':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_3/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_3/new_shuffle_apn_partitions_4_1.pkl')
# elif name == 'market30_retain_pixel0_5_6':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_0/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_0/new_shuffle_apn_partitions_5_6.pkl')
# elif name == 'market30_retain_pixel0_5_3':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_0/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_0/new_shuffle_apn_partitions_5_3.pkl')
# elif name == 'market30_retain_pixel0_5':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_0/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_0/new_shuffle_apn_partitions_5_5.pkl')
# elif name == 'market30_retain_pixel0_4_5':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_0/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_0/new_shuffle_apn_partitions_4_5.pkl')
# elif name == 'market30_retain_pixel0_3_1':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_0/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_0/new_shuffle_apn_partitions_3_1.pkl')
# elif name == 'market30_retain_pixel0_2':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_0/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_0/new_shuffle_apn_partitions_2_2.pkl')
# elif name == 'market30_retain_pixel0_3':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_0/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_0/new_shuffle_apn_partitions_3.pkl')
# elif name == 'market30_retain_pixel0':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_0/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_0/partitions.pkl')
# elif name == 'market30_retain_pixel0_4_1':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_0/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/new_shuffle_apn_partitions_4_1.pkl')
# elif name == 'market30_retain_pixel1':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_1/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_1/partitions.pkl')
# elif name == 'market30_retain_pixel1_4_1':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_1/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/new_shuffle_apn_partitions_4_1.pkl')
# elif name == 'market30_retain_pixel2':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_2/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_2/partitions.pkl')
# elif name == 'market30_retain_pixel2_4_1':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_2/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/new_shuffle_apn_partitions_4_1.pkl')
# elif name == 'market30_retain_pixel3':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/partitions.pkl')
# elif name == 'market30_retain_pixel3_rand_1':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/new_shuffle_apn_partitions_rand_1.pkl')
# elif name == 'market30_retain_pixel4':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_4/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_4/partitions.pkl')
# elif name == 'market30_retain_pixel4_4_1':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_4/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/new_shuffle_apn_partitions_4_1.pkl')
# elif name == 'market30_retain_pixel5':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_0/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_5/partitions.pkl')
# elif name == 'market30_retain_pixel5_4_1':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_0/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/new_shuffle_apn_partitions_4_1.pkl')
# elif name == 'market30_retain_pixel6':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_6/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_6/partitions.pkl')
# elif name == 'market30_retain_pixel6_4_1':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_6/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/new_shuffle_apn_partitions_4_1.pkl')
# elif name == 'market30_retain_pixel7':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_7/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_7/partitions.pkl')
# elif name == 'market30_retain_pixel7_4_1':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_7/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/new_shuffle_apn_partitions_4_1.pkl')
# elif name == 'market30_retain_pixel8':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_8/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_8/partitions.pkl')
# elif name == 'market30_retain_pixel8_4_1':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_8/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/new_shuffle_apn_partitions_4_1.pkl')
# elif name == 'market30_retain_pixel9':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_9/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_9/partitions.pkl')
# elif name == 'market30_retain_pixel9_4_1':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_9/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/new_shuffle_apn_partitions_4_1.pkl')
# elif name == 'market30_retain_pixel10':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_10/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_10/partitions.pkl')
# elif name == 'market30_retain_pixel10_4_1':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_10/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/new_shuffle_apn_partitions_4_1.pkl')
# elif name == 'market30_retain_rand_1':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_0/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/new_shuffle_apn_partitions_rand_1.pkl')
# elif name == 'market30_retain_3_1':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_0/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_0/new_shuffle_apn_partitions_3_1.pkl')
# elif name == 'market30_retain_4':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_0/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_0/new_shuffle_apn_partitions_4_5.pkl')
# elif name == 'market30_retain_4_1':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_0/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_0/new_shuffle_apn_partitions_4_1.pkl')
# elif name == 'market30_retain_5':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_0/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_0/new_shuffle_apn_partitions_5_5.pkl')
# elif name == 'market30_retain_5_3':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_0/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_0/new_shuffle_apn_partitions_5_3.pkl')
# elif name == 'market30_retain_5_6':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_0/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_0/new_shuffle_apn_partitions_5_6.pkl')
# elif name == 'market30_retain_2':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_0/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_0/new_shuffle_apn_partitions_2_2.pkl')
# elif name == 'market30_retain_3':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_0/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_0/new_shuffle_apn_partitions_3.pkl')
# elif name == 'mars_oldmask_retain':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/mars_oldmask_retain/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/mars_oldmask_retain/partitions.pkl')
# elif name == 'market30_retain':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_0/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_0/partitions.pkl')
# elif name == 'market32_retain':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_2/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_2/partitions.pkl')
# elif name == 'market33_retain':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_3/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_3/partitions.pkl')
# elif name == 'mars20':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/mars_2/mars_2_extend_trans_end_0/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/mars_2/mars_2_extend_trans_end_0/partitions.pkl')
# elif name == 'mars22':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/mars_2/mars_2_extend_trans_end_2/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/mars_2/mars_2_extend_trans_end_2/partitions.pkl')
# elif name == 'mars23':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/mars_2/mars_2_extend_trans_end_3/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/mars_2/mars_2_extend_trans_end_3/partitions.pkl')
# elif name == 'mars30':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/mars_3/mars_3_extend_trans_end_0/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/mars_3/mars_3_extend_trans_end_0/partitions.pkl')
# elif name == 'mars32':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/mars_3/mars_3_extend_trans_end_2/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/mars_3/mars_3_extend_trans_end_2/partitions.pkl')
# elif name == 'mars33':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/mars_3/mars_3_extend_trans_end_3/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/mars_3/mars_3_extend_trans_end_3/partitions.pkl')
# elif name == 'mars30_retain_pixel7':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/mars_3_retain_7/mars_3_extend_trans_end_0/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/mars_3_retain_7/mars_3_extend_trans_end_0/partitions.pkl')
# elif name == 'mars32_retain_pixel7':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/mars_3_retain_7/mars_3_extend_trans_end_2/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/mars_3_retain_7/mars_3_extend_trans_end_2/partitions.pkl')
# elif name == 'mars33_retain_pixel7':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/mars_3_retain_7/mars_3_extend_trans_end_3/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/mars_3_retain_7/mars_3_extend_trans_end_3/partitions.pkl')
# elif name == 'mars':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/mars/images_RGBA')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/mars/partitions.pkl')
# elif name == 'cuhk33_retain':
# im_type = ['detected', 'labeled'][1]
# im_dir = ospeu(ospj('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/cuhk03_3_retain/cuhk03_3_extend_trans_end_3', im_type, 'images'))
# partition_file = ospeu(ospj('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/cuhk03_3_retain/cuhk03_3_extend_trans_end_3', im_type, 'partitions.pkl'))
# elif name == 'cuhk33_retain_3':
# im_type = ['detected', 'labeled'][1]
# im_dir = ospeu(ospj('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/cuhk03_3_retain/cuhk03_3_extend_trans_end_3', im_type, 'images'))
# partition_file = ospeu(ospj('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/cuhk03_3_retain/cuhk03_3_extend_trans_end_3', im_type, im_type+'_new_shuffle_apn_partitions_3.pkl'))
# elif name == 'cuhk33_retain_3_1':
# im_type = ['detected', 'labeled'][1]
# im_dir = ospeu(ospj('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/cuhk03_3_retain/cuhk03_3_extend_trans_end_3', im_type, 'images'))
# partition_file = ospeu(ospj('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/cuhk03_3_retain/cuhk03_3_extend_trans_end_3', im_type, im_type+'_new_shuffle_apn_partitions_3_1.pkl'))
# elif name == 'cuhk33_retain_4':
# im_type = ['detected', 'labeled'][1]
# im_dir = ospeu(ospj('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/cuhk03_3_retain/cuhk03_3_extend_trans_end_3', im_type, 'images'))
# partition_file = ospeu(ospj('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/cuhk03_3_retain/cuhk03_3_extend_trans_end_3', im_type, im_type+'_new_shuffle_apn_partitions_4_5.pkl'))
# elif name == 'cuhk33_retain_4_1':
# im_type = ['detected', 'labeled'][1]
# im_dir = ospeu(ospj('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/cuhk03_3_retain/cuhk03_3_extend_trans_end_3', im_type, 'images'))
# partition_file = ospeu(ospj('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/cuhk03_3_retain/cuhk03_3_extend_trans_end_3', im_type, im_type+'_new_shuffle_apn_partitions_4_1.pkl'))
# elif name == 'cuhk33_retain_5':
# im_type = ['detected', 'labeled'][1]
# im_dir = ospeu(ospj('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/cuhk03_3_retain/cuhk03_3_extend_trans_end_3', im_type, 'images'))
# partition_file = ospeu(ospj('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/cuhk03_3_retain/cuhk03_3_extend_trans_end_3', im_type, im_type+'_new_shuffle_apn_partitions_5_5.pkl'))
# elif name == 'cuhk33_retain_5_3':
# im_type = ['detected', 'labeled'][1]
# im_dir = ospeu(ospj('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/cuhk03_3_retain/cuhk03_3_extend_trans_end_3', im_type, 'images'))
# partition_file = ospeu(ospj('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/cuhk03_3_retain/cuhk03_3_extend_trans_end_3', im_type, im_type+'_new_shuffle_apn_partitions_5_3.pkl'))
# elif name == 'cuhk33_retain_5_6':
# im_type = ['detected', 'labeled'][1]
# im_dir = ospeu(ospj('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/cuhk03_3_retain/cuhk03_3_extend_trans_end_3', im_type, 'images'))
# partition_file = ospeu(ospj('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/cuhk03_3_retain/cuhk03_3_extend_trans_end_3', im_type, im_type+'_new_shuffle_apn_partitions_5_6.pkl'))
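The `cuhk33_retain_*` branches above all repeat the `['detected', 'labeled'][1]` idiom, which simply hard-codes `'labeled'`. A small helper makes that choice explicit; this is an illustrative sketch, and the names `CUHK_ROOT` and `cuhk_paths` are assumptions, not part of the original code.

```python
from os.path import join as ospj

# Root directory copied from the cuhk33_retain_* branches above.
CUHK_ROOT = ('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/'
             'cuhk03_3_retain/cuhk03_3_extend_trans_end_3')

def cuhk_paths(suffix=None, im_type='labeled'):
    """Build (im_dir, partition_file) for one cuhk33_retain variant.

    suffix=None selects the plain partitions.pkl; otherwise the
    im_type-prefixed shuffled-partition file is used.
    """
    im_dir = ospj(CUHK_ROOT, im_type, 'images')
    if suffix is None:
        partition_file = ospj(CUHK_ROOT, im_type, 'partitions.pkl')
    else:
        partition_file = ospj(
            CUHK_ROOT, im_type,
            '{}_new_shuffle_apn_partitions_{}.pkl'.format(im_type, suffix))
    return im_dir, partition_file
```

Passing `im_type='detected'` would cover the other half of the `['detected', 'labeled']` choice without touching the path templates.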
# elif name == 'duke':
# im_dir = ospeu('~/Dataset/duke/images')
# partition_file = ospeu('~/Dataset/duke/partitions.pkl')
# elif name == 'combined':
# assert part in ['trainval'], \
# "Only trainval part of the combined dataset is available now."
# im_dir = ospeu('~/Dataset/market1501_cuhk03_duke/trainval_images')
# partition_file = ospeu('~/Dataset/market1501_cuhk03_duke/partitions.pkl')
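The dataset-creation step below calls `load_pickle(partition_file)`. A minimal sketch of what that helper is assumed to do (the real project utility may add encoding or logging details):

```python
import pickle

def load_pickle(path):
    # Assumption: partitions.pkl is a plain pickled dict with keys like
    # 'trainval_im_names', 'trainval_ids2labels', 'val_marks', 'test_marks'.
    with open(path, 'rb') as f:
        return pickle.load(f)
```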
# ##################
# # Create Dataset #
# ##################
# # Use standard Market1501 CMC settings for all datasets here.
# cmc_kwargs = dict(separate_camera_set=False,
#                   single_gallery_shot=False,
#                   first_match_break=True)
# partitions = load_pickle(partition_file)
# im_names = partitions['{}_im_names'.format(part)]
# if part == 'trainval':
#   ids2labels = partitions['trainval_ids2labels']
#   ret_set = TrainSet(
#     im_dir=im_dir,
#     im_names=im_names,
#     ids2labels=ids2labels,
#     **kwargs)
# elif part == 'train':
#   ids2labels = partitions['train_ids2labels']
#   ret_set = TrainSet(
#     im_dir=im_dir,
#     im_names=im_names,
#     ids2labels=ids2labels,
#     **kwargs)
# elif part == 'val':
#   marks = partitions['val_marks']
#   kwargs.update(cmc_kwargs)
#   ret_set = TestSet(
#     im_dir=im_dir,
#     im_names=im_names,
#     marks=marks,
#     **kwargs)
# elif part == 'test':
#   marks = partitions['test_marks']
#   kwargs.update(cmc_kwargs)
#   ret_set = TestSet(
#     im_dir=im_dir,
#     im_names=im_names,
#     marks=marks,
#     **kwargs)
# if part in ['trainval', 'train']:
#   num_ids = len(ids2labels)
# elif part in ['val', 'test']:
#   ids = [parse_im_name(n, 'id') for n in im_names]
#   num_ids = len(set(ids))
#   num_query = np.sum(np.array(marks) == 0)
#   num_gallery = np.sum(np.array(marks) == 1)
#   num_multi_query = np.sum(np.array(marks) == 2)
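The query/gallery bookkeeping above uses `marks` with 0 = query, 1 = gallery, 2 = multi-query per image. A pure-Python sketch equivalent to the `np.sum(np.array(marks) == k)` counts, on a toy marks list:

```python
# Toy marks list: 2 query, 3 gallery, 1 multi-query images.
marks = [0, 0, 1, 1, 1, 2]
num_query = marks.count(0)
num_gallery = marks.count(1)
num_multi_query = marks.count(2)
```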
# # Print dataset information
# print('-' * 40)
# print('{} {} set'.format(name, part))
# print('-' * 40)
# print('NO. Images: {}'.format(len(im_names)))
# print('NO. IDs: {}'.format(num_ids))
# # Query/gallery counts are only defined for the val/test parts.
# try:
#   print('NO. Query Images: {}'.format(num_query))
#   print('NO. Gallery Images: {}'.format(num_gallery))
#   print('NO. Multi-query Images: {}'.format(num_multi_query))
# except NameError:
#   pass
# print('-' * 40)
# return ret_set
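The long `if`/`elif` chain over dataset names could be collapsed into a lookup table. The registry below is an illustrative sketch with only two sample entries copied from the branches above; `DATASET_ROOT`, `DATASET_REGISTRY`, and `resolve_dataset` are assumed names, not the original configuration.

```python
import posixpath

# Base path shared by the branches above.
DATASET_ROOT = '/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans'

# name -> (relative image dir, relative partition file)
DATASET_REGISTRY = {
    'market30_retain': (
        'Market_3_retain/Market_3_extend_trans_end_0/images',
        'Market_3_retain/Market_3_extend_trans_end_0/partitions.pkl'),
    'market30_retain_pixel3': (
        'Market_3_pixel/Market_pixel_end_3/images',
        'Market_3_pixel/Market_pixel_end_3/partitions.pkl'),
}

def resolve_dataset(name):
    """Return absolute (im_dir, partition_file) for a registered name."""
    if name not in DATASET_REGISTRY:
        raise ValueError('Unsupported Dataset {}'.format(name))
    im_rel, part_rel = DATASET_REGISTRY[name]
    return (posixpath.join(DATASET_ROOT, im_rel),
            posixpath.join(DATASET_ROOT, part_rel))
```

A table like this also makes the supported-name assert redundant: the `KeyError`-style check and the path definition can no longer drift apart.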
# def create_dataset_tri(
#     name='market1501',
#     part='trainval',
#     flag='anchor',
#     **kwargs):
#   # Duplicate entries ('market_png', 'market1501',
#   # 'market30_retain_pixel3_rand_1') removed from the original list.
#   assert name in [
#     'market_png_4_1', 'market_png', 'market30_retain_pixel3_rand_1',
#     'market30_retain_pixel1_4_1', 'market30_retain_pixel2_4_1',
#     'market30_retain_pixel4_4_1', 'market30_retain_pixel5_4_1',
#     'market30_retain_pixel6_4_1', 'market30_retain_pixel7_4_1',
#     'market30_retain_pixel8_4_1', 'market30_retain_pixel9_4_1',
#     'market30_retain_pixel10_4_1', 'market30_retain_rand_1',
#     'market30_retain_pixel3_3_1', 'market30_retain_pixel3_4_1',
#     'market30_retain_pixel3_5_3',
#     'cuhk33_retain_3', 'cuhk33_retain_3_1', 'cuhk33_retain_4',
#     'cuhk33_retain_4_1', 'cuhk33_retain_5', 'cuhk33_retain_5_3',
#     'cuhk33_retain_5_6',
#     'market30_retain_3_1', 'market30_retain_4', 'market30_retain_4_1',
#     'market30_retain_5', 'market30_retain_5_3', 'market30_retain_5_6',
#     'market33_retain_5', 'market33_retain_5_3', 'market33_retain_5_6',
#     'market33_retain_3', 'market33_retain_3_1', 'market33_retain_4',
#     'market33_retain_4_1',
#     'market30_retain_pixel0_4_1', 'market30_retain_pixel0_5_6',
#     'market30_retain_pixel0_5_3', 'market30_retain_pixel0_5',
#     'market30_retain_pixel0_4_5', 'market30_retain_pixel0_3_1',
#     'market30_retain_pixel0_2', 'market30_retain_pixel0_3',
#     'market30_retain_2', 'market30_retain_3',
#     'mars_oldmask_retain', 'mars',
#     'mars20', 'mars22', 'mars23', 'mars30', 'mars32', 'mars33',
#     'market', 'cuhk20', 'cuhk22', 'cuhk23',
#     'cuhk20_retain', 'cuhk22_retain', 'cuhk23_retain',
#     'cuhk30', 'cuhk32', 'cuhk33',
#     'cuhk30_retain', 'cuhk32_retain', 'cuhk33_retain',
#     'cuhk40', 'cuhk42', 'cuhk43',
#     'cuhk40_retain', 'cuhk42_retain', 'cuhk43_retain',
#     'market1501', 'market_combined',
#     'market23', 'market22', 'market20',
#     'market20_retain', 'market22_retain', 'market23_retain',
#     'market30', 'market32', 'market33',
#     'market30_retain', 'market32_retain', 'market33_retain',
#     'market40', 'market42', 'market43',
#     'market40_retain', 'market42_retain', 'market43_retain',
#     'market_oldmask', 'market_oldmask_retain', 'market_trans',
#     'cuhk03', 'duke', 'combined'], \
#     "Unsupported Dataset {}".format(name)
#   assert part in ['trainval', 'train', 'val', 'test'], \
#     "Unsupported Dataset Part {}".format(part)
# ########################################
# # Specify Directory and Partition File #
# ########################################
# if name == 'market1501':
# im_dir = ospeu('~/Dataset/market1501/images')
# partition_file = ospeu('~/Dataset/market1501/partitions.pkl')
# elif name == 'market_png':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_origin/market-1501-png/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_origin/market-1501-png/partitions.pkl')
# elif name == 'market_png_4_1':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_origin/market-1501-png/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/new_shuffle_apn_partitions_rand_1.pkl')
# elif name == 'market30_retain_pixel3_3_1':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/new_shuffle_apn_partitions_3_1.pkl')
# elif name == 'market30_retain_pixel3_4_1':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/new_shuffle_apn_partitions_4_1.pkl')
# elif name == 'market30_retain_pixel3_5_3':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/new_shuffle_apn_partitions_5_3.pkl')
# elif name == 'market30_retain_pixel3_rand_1':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/new_shuffle_apn_partitions_rand_1.pkl')
# elif name == 'market30_retain_pixel3':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/partitions.pkl')
# elif name == 'market33_retain_5':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_3/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_3/new_shuffle_apn_partitions_5_5.pkl')
# elif name == 'market33_retain_5_3':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_3/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_3/new_shuffle_apn_partitions_5_3.pkl')
# elif name == 'market33_retain_5_6':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_3/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_3/new_shuffle_apn_partitions_5_6.pkl')
# elif name == 'market33_retain_3':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_3/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_3/new_shuffle_apn_partitions_3.pkl')
# elif name == 'market33_retain_3_1':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_3/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_3/new_shuffle_apn_partitions_3_1.pkl')
# elif name == 'market33_retain_4':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_3/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_3/new_shuffle_apn_partitions_4_5.pkl')
# elif name == 'market33_retain_4_1':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_3/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_3/new_shuffle_apn_partitions_4_1.pkl')
# elif name == 'market30_retain_pixel0_5_6':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_0/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_0/new_shuffle_apn_partitions_5_6.pkl')
# elif name == 'market30_retain_pixel0_5_3':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_0/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_0/new_shuffle_apn_partitions_5_3.pkl')
# elif name == 'market30_retain_pixel0_5':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_0/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_0/new_shuffle_apn_partitions_5_5.pkl')
# elif name == 'market30_retain_pixel0_4_5':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_0/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_0/new_shuffle_apn_partitions_4_5.pkl')
# elif name == 'market30_retain_pixel0_3_1':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_0/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_0/new_shuffle_apn_partitions_3_1.pkl')
# elif name == 'market30_retain_pixel0_2':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_0/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_0/new_shuffle_apn_partitions_2_2.pkl')
# elif name == 'market30_retain_pixel0_3':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_0/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_0/new_shuffle_apn_partitions_3.pkl')
# elif name == 'market30_retain_rand_1':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_0/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/new_shuffle_apn_partitions_rand_1.pkl')
# elif name == 'market30_retain_3_1':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_0/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_0/new_shuffle_apn_partitions_3_1.pkl')
# elif name == 'market30_retain_4':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_0/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_0/new_shuffle_apn_partitions_4_5.pkl')
# elif name == 'market30_retain_4_1':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_0/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_0/new_shuffle_apn_partitions_4_1.pkl')
# elif name == 'market30_retain_5':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_0/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_0/new_shuffle_apn_partitions_5_5.pkl')
# elif name == 'market30_retain_5_3':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_0/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_0/new_shuffle_apn_partitions_5_3.pkl')
# elif name == 'market30_retain_5_6':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_0/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_0/new_shuffle_apn_partitions_5_6.pkl')
# elif name == 'market30_retain_2':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_0/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/tri/Market_3_retain/Market_3_extend_trans_end_0/new_shuffle_apn_partitions_2_2.pkl')
# elif name == 'market30_retain_3':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_0/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/tri/Market_3_retain/Market_3_extend_trans_end_0/new_shuffle_apn_partitions_3.pkl')
# elif name == 'market30_retain_pixel0':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_0/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_0/partitions.pkl')
# elif name == 'market30_retain_pixel0_4_1':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_0/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/new_shuffle_apn_partitions_4_1.pkl')
# elif name == 'market30_retain_pixel1':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_1/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_1/partitions.pkl')
# elif name == 'market30_retain_pixel1_4_1':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_1/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/new_shuffle_apn_partitions_4_1.pkl')
# elif name == 'market30_retain_pixel2':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_2/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_2/partitions.pkl')
# elif name == 'market30_retain_pixel2_4_1':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_2/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/new_shuffle_apn_partitions_4_1.pkl')
# elif name == 'market30_retain_pixel3':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/partitions.pkl')
# elif name == 'market30_retain_pixel3_rand_1':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/new_shuffle_apn_partitions_rand_1.pkl')
# elif name == 'market30_retain_pixel4':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_4/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_4/partitions.pkl')
# elif name == 'market30_retain_pixel4_4_1':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_4/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/new_shuffle_apn_partitions_4_1.pkl')
# elif name == 'market30_retain_pixel5':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_0/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_5/partitions.pkl')
# elif name == 'market30_retain_pixel5_4_1':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_retain/Market_3_extend_trans_end_0/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/new_shuffle_apn_partitions_4_1.pkl')
# elif name == 'market30_retain_pixel6':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_6/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_6/partitions.pkl')
# elif name == 'market30_retain_pixel6_4_1':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_6/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/new_shuffle_apn_partitions_4_1.pkl')
# elif name == 'market30_retain_pixel7':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_7/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_7/partitions.pkl')
# elif name == 'market30_retain_pixel7_4_1':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_7/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/new_shuffle_apn_partitions_4_1.pkl')
# elif name == 'market30_retain_pixel8':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_8/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_8/partitions.pkl')
# elif name == 'market30_retain_pixel8_4_1':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_8/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/new_shuffle_apn_partitions_4_1.pkl')
# elif name == 'market30_retain_pixel9':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_9/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_9/partitions.pkl')
# elif name == 'market30_retain_pixel9_4_1':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_9/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/new_shuffle_apn_partitions_4_1.pkl')
# elif name == 'market30_retain_pixel10':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_10/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_10/partitions.pkl')
# elif name == 'market30_retain_pixel10_4_1':
# im_dir = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_10/images')
# partition_file = ospeu('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/Market_3_pixel/Market_pixel_end_3/new_shuffle_apn_partitions_4_1.pkl')
# elif name == 'cuhk03':
# im_type = ['detected', 'labeled'][0]
# im_dir = ospeu(ospj('~/Dataset/cuhk03', im_type, 'images'))
# partition_file = ospeu(ospj('~/Dataset/cuhk03', im_type, 'partitions.pkl'))
# elif name == 'cuhk33_retain_3':
# im_type = ['detected', 'labeled'][1]
# im_dir = ospeu(ospj('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/cuhk03_3_retain/cuhk03_3_extend_trans_end_3', im_type, 'images'))
# partition_file = ospeu(ospj('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/cuhk03_3_retain/cuhk03_3_extend_trans_end_3', im_type, 'new_shuffle_apn_partitions_3.pkl'))
# elif name == 'cuhk33_retain_3_1':
# im_type = ['detected', 'labeled'][1]
# im_dir = ospeu(ospj('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/cuhk03_3_retain/cuhk03_3_extend_trans_end_3', im_type, 'images'))
# partition_file = ospeu(ospj('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/cuhk03_3_retain/cuhk03_3_extend_trans_end_3', im_type, im_type+'_new_shuffle_apn_partitions_3_1.pkl'))
# elif name == 'cuhk33_retain_4':
# im_type = ['detected', 'labeled'][1]
# im_dir = ospeu(ospj('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/cuhk03_3_retain/cuhk03_3_extend_trans_end_3', im_type, 'images'))
# partition_file = ospeu(ospj('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/cuhk03_3_retain/cuhk03_3_extend_trans_end_3', im_type, im_type+'_new_shuffle_apn_partitions_4_5.pkl'))
# elif name == 'cuhk33_retain_4_1':
# im_type = ['detected', 'labeled'][1]
# im_dir = ospeu(ospj('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/cuhk03_3_retain/cuhk03_3_extend_trans_end_3', im_type, 'images'))
# partition_file = ospeu(ospj('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/cuhk03_3_retain/cuhk03_3_extend_trans_end_3', im_type, im_type+'_new_shuffle_apn_partitions_4_1.pkl'))
# elif name == 'cuhk33_retain_5':
# im_type = ['detected', 'labeled'][1]
# im_dir = ospeu(ospj('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/cuhk03_3_retain/cuhk03_3_extend_trans_end_3', im_type, 'images'))
# partition_file = ospeu(ospj('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/cuhk03_3_retain/cuhk03_3_extend_trans_end_3', im_type, im_type+'_new_shuffle_apn_partitions_5_5.pkl'))
# elif name == 'cuhk33_retain_5_3':
# im_type = ['detected', 'labeled'][1]
# im_dir = ospeu(ospj('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/cuhk03_3_retain/cuhk03_3_extend_trans_end_3', im_type, 'images'))
# partition_file = ospeu(ospj('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/cuhk03_3_retain/cuhk03_3_extend_trans_end_3', im_type, im_type+'_new_shuffle_apn_partitions_5_3.pkl'))
# elif name == 'cuhk33_retain_5_6':
# im_type = ['detected', 'labeled'][1]
# im_dir = ospeu(ospj('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/cuhk03_3_retain/cuhk03_3_extend_trans_end_3', im_type, 'images'))
# partition_file = ospeu(ospj('/GPUFS/nsccgz_ywang_1/alice/dataset/pcb/trans/cuhk03_3_retain/cuhk03_3_extend_trans_end_3', im_type, im_type+'_new_shuffle_apn_partitions_5_6.pkl'))
# elif name == 'duke':
# im_dir = ospeu('~/Dataset/duke/images')
# partition_file = ospeu('~/Dataset/duke/partitions.pkl')
# elif name == 'combined':
# assert part in ['trainval'], \
# "Only trainval part of the combined dataset is available now."
# im_dir = ospeu('~/Dataset/market1501_cuhk03_duke/trainval_images')
# partition_file = ospeu('~/Dataset/market1501_cuhk03_duke/partitions.pkl')
#     ##################
#     # Create Dataset #
#     ##################
#     # Use standard Market1501 CMC settings for all datasets here.
#     cmc_kwargs = dict(separate_camera_set=False,
#                       single_gallery_shot=False,
#                       first_match_break=True)
#
#     partitions = load_pickle(partition_file)
#     im_names = partitions['{}_{}_im_names'.format(part, flag)]
#
#     if part == 'trainval':
#         ids2labels = partitions['trainval_ids2labels']
#         ret_set = TrainSet(
#             im_dir=im_dir,
#             im_names=im_names,
#             ids2labels=ids2labels,
#             **kwargs)
#     elif part == 'train':
#         ids2labels = partitions['train_ids2labels']
#         ret_set = TrainSet(
#             im_dir=im_dir,
#             im_names=im_names,
#             ids2labels=ids2labels,
#             **kwargs)
#     elif part == 'val':
#         marks = partitions['val_marks']
#         kwargs.update(cmc_kwargs)
#         ret_set = TestSet(
#             im_dir=im_dir,
#             im_names=im_names,
#             marks=marks,
#             **kwargs)
#     elif part == 'test':
#         marks = partitions['test_marks']
#         kwargs.update(cmc_kwargs)
#         ret_set = TestSet(
#             im_dir=im_dir,
#             im_names=im_names,
#             marks=marks,
#             **kwargs)
#
#     if part in ['trainval', 'train']:
#         num_ids = len(ids2labels)
#     elif part in ['val', 'test']:
#         ids = [parse_im_name(n, 'id') for n in im_names]
#         num_ids = len(list(set(ids)))
#         num_query = np.sum(np.array(marks) == 0)
#         num_gallery = np.sum(np.array(marks) == 1)
#         num_multi_query = np.sum(np.array(marks) == 2)
#
#     # Print dataset information
#     print('-' * 40)
#     print('{} {} set'.format(name, part))
#     print('-' * 40)
#     print('NO. Images: {}'.format(len(im_names)))
#     print('NO. IDs: {}'.format(num_ids))
#     try:
#         print('NO. Query Images: {}'.format(num_query))
#         print('NO. Gallery Images: {}'.format(num_gallery))
#         print('NO. Multi-query Images: {}'.format(num_multi_query))
#     except:
#         pass
#     print('-' * 40)
#     return ret_set
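The commented-out dispatch above is a long `elif` chain mapping a dataset name to an image directory and a partition file. As a hedged illustration (not part of the original code), the same dispatch can be written as a lookup table; the two entries below are taken from the `duke` and `cuhk03` (`detected`) branches above, and `resolve_dataset` is a hypothetical helper:

```python
# Hypothetical sketch: table-driven version of the commented-out name -> paths
# dispatch above. Only the 'duke' and 'cuhk03' (detected) entries are shown.
DATASET_REGISTRY = {
    'duke': ('~/Dataset/duke/images', '~/Dataset/duke/partitions.pkl'),
    'cuhk03': ('~/Dataset/cuhk03/detected/images', '~/Dataset/cuhk03/detected/partitions.pkl'),
}


def resolve_dataset(name):
    """Return (im_dir, partition_file) for a registered dataset name."""
    try:
        return DATASET_REGISTRY[name]
    except KeyError:
        raise ValueError('Unknown dataset: {}'.format(name))
```

A registry like this makes adding a dataset a one-line change and turns an unknown name into an explicit error instead of silently leaving `im_dir` undefined.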
# asn/conversion/convert.py (repo: lynnsoerensen/Arousal_DCNN, MIT license)
from __future__ import print_function
import numpy as np
from keras.layers import Input, Flatten,TimeDistributed
from keras.layers.convolutional import Conv2D,MaxPooling2D
from keras.layers.merge import Add, Subtract, Concatenate
from keras.models import Model
from asn.layers.test import *
from asn.conversion.utils import normalize_weights
from asn.attention.attn_param import set_model_attn_param
def convert_architecture(model_training, time_steps, mf_base, h_scaling, attn_param, skip=[], skip_value=0.06, spike_counting=True):
    """Convert a trained analog Keras model into a spiking (ASN) test architecture.

    Args:
        model_training: analog Keras model.
        time_steps: number of time steps for the digital/spiking conversion.
        mf_base: precision for spike generation. Default: 0.1.
        h_scaling: mode for spike generation; if True, spikes are scaled by h.
        attn_param: dict with parameters for attention modulation.
        skip: indices of layers that use skip_value instead of mf_base.
        skip_value: mf used for the layers listed in skip.
        spike_counting: if True, also return per-layer spike counts as an output.

    Returns:
        model_test: spiking architecture.
        model_match: indices matching model_test layers to model_training layers.

    Last updated: 23.10.18
    """
    print('Translating the following architecture to ASN:')
    model_training.summary()

    input_shape = model_training.get_input_shape_at(0)  # Input size of training model
    if len(input_shape) != 4:
        raise Exception("Input shape of analog model should be a tuple (nb_rows, nb_cols, nb_channels)")

    input_shape = (time_steps,) + input_shape[1:4]  # Add time dimension
    input = Input(shape=input_shape)
    input_att = Input(shape=(2,))  # for the parameters Ax1 & Ax2
    x = input

    if model_training.layers[0].__class__.__name__ == 'InputLayer':
        l_train = 1
    else:
        l_train = 0

    model_match = []
    l_test = 1  # Since the first layer is the input layer
    attention_applied = False
    for i in np.arange(0, len(model_training.layers)):
        mode = model_training.layers[i].__class__.__name__
        print('Evaluating layer ' + str(i) + ' - ' + mode)
        if i < l_train:
            print('Information from layer ' + str(i) + ' (' + mode + ') has already been integrated')
        else:
            if attn_param[l_test]["layer_idx"] is not None:
                print('Applying attention to layer ' + str(l_test))
                x = [x, input_att]
                print(str(x[0].shape))
                attention_applied = True
                ASN_2D_layer = ASN_2D_attention
            else:
                ASN_2D_layer = ASN_2D
                print(str(x.shape))
            if l_test == 1:
                input_layer = True  # layer will integrate current instead of spikes
            else:
                input_layer = False
            if l_test in skip:
                mf = skip_value
            else:
                mf = mf_base
if mode == 'BatchNormalization':
if model_training.layers[l_train + 1].__class__.__name__ == 'ASNTransfer':
if model_training.layers[l_train + 2].__class__.__name__ == 'MaxPooling2D':
print('Building joint BN-ASN-MaxPool Layer as layer ' + str(
l_test) + ' in model_test')
if attn_param[l_test]["layer_idx"] is not None:
n_filter = x[0].shape[-1].value
else:
n_filter = x.shape[-1].value
pool_size = model_training.layers[l_train + 2].pool_size
pool_stride = model_training.layers[l_train + 2].strides
pool_padding = model_training.layers[l_train + 2].padding
x = ASN_2D_layer(filters=n_filter,
padding=model_training.layers[l_train].padding, use_bias=True,
pool_mode='max', pool_size=pool_size, pool_stride=pool_stride,
pool_padding=pool_padding, kernel_initializer='ones',
bias_initializer='zeros', mf=mf, h_scaling=h_scaling, input_layer=input_layer,
attn_param=attn_param[l_test])(x)
if spike_counting == True:
# Add it to the spike count
if l_test == 1:
spike_counter = SpikeSum(h_scaling=h_scaling)(x)
else:
s = SpikeSum(h_scaling=h_scaling)(x)
spike_counter = Concatenate()([spike_counter, s])
l_train = l_train + 3 # because now 4 layers were combined
l_test = l_test + 1
model_match.append(l_train)
elif model_training.layers[l_train + 2].__class__.__name__ == 'AveragePooling2D':
print('Building joint BN-ASN-AvgPool Layer as layer ' + str(
l_test) + ' in model_test')
if attn_param[l_test]["layer_idx"] is not None:
n_filter = x[0].shape[-1].value
else:
n_filter = x.shape[-1].value
pool_size = model_training.layers[l_train + 2].pool_size
pool_stride = model_training.layers[l_train + 2].strides
pool_padding = model_training.layers[l_train + 2].padding
x = ASN_2D_layer(filters=n_filter, use_bias=True,
pool_mode='avg', pool_size=pool_size, pool_stride=pool_stride,
kernel_initializer='ones',
bias_initializer='zeros',
pool_padding=pool_padding, mf=mf, h_scaling=h_scaling,
input_layer=input_layer, attn_param=attn_param[l_test])(x)
if spike_counting == True:
if l_test == 1:
spike_counter = SpikeSum(h_scaling=h_scaling)(x)
else:
s = SpikeSum(h_scaling=h_scaling)(x)
spike_counter = Concatenate()([spike_counter, s])
l_train = l_train + 3 # because now 3 layers were combined
l_test = l_test + 1
model_match.append(l_train)
else:
print('Building joint BN-ASN Layer as layer ' + str(l_test) + ' in model_test')
# Diagnose how high-dimensional the input is:
if attn_param[l_test]["layer_idx"] is not None:
dimensionality = x[0].shape.ndims
n_filter = x[0].shape[-1].value
else:
dimensionality = x.shape.ndims
n_filter = x.shape[-1].value
if dimensionality > 3:
x = ASN_2D_layer(filters=n_filter, use_bias=True, mf=mf, kernel_initializer='ones',
bias_initializer='zeros', input_layer=input_layer, h_scaling=h_scaling,
attn_param=attn_param[l_test])(x)
else:
x = ASN_1D(units=n_filter, mf=mf, use_bias=True, kernel_initializer='ones',
bias_initializer='zeros', input_layer=input_layer, h_scaling=h_scaling)(x)
if spike_counting == True:
if l_test == 1:
spike_counter = SpikeSum(h_scaling=h_scaling)(x)
else:
s = SpikeSum(h_scaling=h_scaling)(x)
spike_counter = Concatenate()([spike_counter, s])
# Do the accounting.
l_train = l_train + 2 # because 2 layers were added
l_test = l_test + 1
model_match.append(l_train)
else:
print('BN Layer will be integrated with other layers')
l_train = l_train + 1
l_test = l_test
            elif mode == 'ASNTransfer':
                print('Building ASN Layer as layer ' + str(l_test) + ' in model_test')
                # Diagnose how high-dimensional the input is:
                if attn_param[l_test]["layer_idx"] is not None:
                    dimensionality = x[0].shape.ndims
                    n_filter = x[0].shape[-1].value
                else:
                    dimensionality = x.shape.ndims
                    n_filter = x.shape[-1].value
                if dimensionality > 3:
                    x = ASN_2D_layer(filters=n_filter, use_bias=True, mf=mf, kernel_initializer='ones',
                                     bias_initializer='zeros', input_layer=input_layer, h_scaling=h_scaling,
                                     attn_param=attn_param[l_test])(x)
                else:
                    x = ASN_1D(units=n_filter, mf=mf, use_bias=True, kernel_initializer='ones',
                               bias_initializer='zeros', input_layer=input_layer, h_scaling=h_scaling)(x)
                if spike_counting == True:
                    if l_test == 1:
                        spike_counter = SpikeSum(h_scaling=h_scaling)(x)
                    else:
                        s = SpikeSum(h_scaling=h_scaling)(x)
                        spike_counter = Concatenate()([spike_counter, s])
                l_train = l_train + 1
                l_test = l_test + 1
                model_match.append(l_train)
            elif mode == 'Conv2D':
                if (model_training.layers[l_train + 1].__class__.__name__ == 'BatchNormalization') & \
                        (model_training.layers[l_train]._outbound_nodes[0].outbound_layer.name.startswith(
                            'batch_norm')):  # This second part tests the connection.
                    if model_training.layers[l_train + 2].__class__.__name__ == 'ASNTransfer':
                        if model_training.layers[l_train + 3].__class__.__name__ == 'MaxPooling2D':
                            print('Building joint Conv2D-BN-ASN-MaxPool Layer as layer ' + str(l_test) + ' in model_test')
                            pool_size = model_training.layers[l_train + 3].pool_size
                            pool_stride = model_training.layers[l_train + 3].strides
                            pool_padding = model_training.layers[l_train + 3].padding
                            x = ASN_2D_layer(kernel_size=model_training.layers[l_train].kernel_size,
                                             filters=model_training.layers[l_train].filters,
                                             strides=model_training.layers[l_train].strides,
                                             padding=model_training.layers[l_train].padding, use_bias=True,
                                             pool_mode='max', pool_size=pool_size, pool_stride=pool_stride,
                                             pool_padding=pool_padding,
                                             mf=mf, h_scaling=h_scaling, input_layer=input_layer,
                                             attn_param=attn_param[l_test])(x)
                            if spike_counting == True:
                                if l_test == 1:
                                    spike_counter = SpikeSum(h_scaling=h_scaling)(x)
                                else:
                                    s = SpikeSum(h_scaling=h_scaling)(x)
                                    spike_counter = Concatenate()([spike_counter, s])
                            l_train = l_train + 4  # because now 4 layers were combined
                            l_test = l_test + 1
                            model_match.append(l_train)
                        elif model_training.layers[l_train + 3].__class__.__name__ == 'AveragePooling2D':
                            print('Building joint Conv2D-BN-ASN-AvgPool Layer as layer ' + str(l_test) + ' in model_test')
                            pool_size = model_training.layers[l_train + 3].pool_size
                            pool_stride = model_training.layers[l_train + 3].strides
                            pool_padding = model_training.layers[l_train + 3].padding
                            x = ASN_2D_layer(kernel_size=model_training.layers[l_train].kernel_size,
                                             filters=model_training.layers[l_train].filters,
                                             strides=model_training.layers[l_train].strides,
                                             padding=model_training.layers[l_train].padding, use_bias=True,
                                             pool_mode='avg', pool_size=pool_size, pool_stride=pool_stride,
                                             pool_padding=pool_padding, mf=mf, h_scaling=h_scaling,
                                             input_layer=input_layer, attn_param=attn_param[l_test])(x)
                            if spike_counting == True:
                                if l_test == 1:
                                    spike_counter = SpikeSum(h_scaling=h_scaling)(x)
                                else:
                                    s = SpikeSum(h_scaling=h_scaling)(x)
                                    spike_counter = Concatenate()([spike_counter, s])
                            l_train = l_train + 4  # because now 4 layers were combined
                            l_test = l_test + 1
                            model_match.append(l_train)
                        elif model_training.layers[l_train + 3].__class__.__name__ in ['GaussianDropout', 'Dropout',
                                                                                       'GaussianNoise', 'UniformNoise']:
                            if model_training.layers[l_train + 4].__class__.__name__ == 'MaxPooling2D':
                                print('Skipping Dropout Layer and building joint Conv2D-BN-ASN-MaxPool Layer as layer '
                                      + str(l_test) + ' in model_test')
                                pool_size = model_training.layers[l_train + 4].pool_size
                                pool_stride = model_training.layers[l_train + 4].strides
                                pool_padding = model_training.layers[l_train + 4].padding
                                x = ASN_2D_layer(kernel_size=model_training.layers[l_train].kernel_size,
                                                 filters=model_training.layers[l_train].filters,
                                                 strides=model_training.layers[l_train].strides,
                                                 padding=model_training.layers[l_train].padding, use_bias=True,
                                                 pool_mode='max', pool_size=pool_size, pool_stride=pool_stride,
                                                 pool_padding=pool_padding, mf=mf, h_scaling=h_scaling,
                                                 input_layer=input_layer, attn_param=attn_param[l_test])(x)
                                if spike_counting == True:
                                    if l_test == 1:
                                        spike_counter = SpikeSum(h_scaling=h_scaling)(x)
                                    else:
                                        s = SpikeSum(h_scaling=h_scaling)(x)
                                        spike_counter = Concatenate()([spike_counter, s])
                                l_train = l_train + 5  # because now 5 layers were combined
                                l_test = l_test + 1
                                model_match.append(l_train)
                            elif model_training.layers[l_train + 4].__class__.__name__ == 'AveragePooling2D':
                                print('Skipping Dropout Layer and building joint Conv2D-BN-ASN-AvgPool Layer as layer '
                                      + str(l_test) + ' in model_test')
                                pool_size = model_training.layers[l_train + 4].pool_size
                                pool_stride = model_training.layers[l_train + 4].strides
                                pool_padding = model_training.layers[l_train + 4].padding
                                x = ASN_2D_layer(kernel_size=model_training.layers[l_train].kernel_size,
                                                 filters=model_training.layers[l_train].filters,
                                                 strides=model_training.layers[l_train].strides,
                                                 padding=model_training.layers[l_train].padding, use_bias=True,
                                                 pool_mode='avg', pool_size=pool_size, pool_stride=pool_stride,
                                                 pool_padding=pool_padding,
                                                 mf=mf, h_scaling=h_scaling, input_layer=input_layer,
                                                 attn_param=attn_param[l_test])(x)
                                if spike_counting == True:
                                    if l_test == 1:
                                        spike_counter = SpikeSum(h_scaling=h_scaling)(x)
                                    else:
                                        s = SpikeSum(h_scaling=h_scaling)(x)
                                        spike_counter = Concatenate()([spike_counter, s])
                                l_train = l_train + 5  # because now 5 layers were combined
                                l_test = l_test + 1
                                model_match.append(l_train)
                            else:
                                print('Building joint Conv2D-BN-ASN Layer as layer ' + str(l_test) + ' in model_test')
                                x = ASN_2D_layer(kernel_size=model_training.layers[l_train].kernel_size,
                                                 padding=model_training.layers[l_train].padding,
                                                 filters=model_training.layers[l_train].filters,
                                                 use_bias=True, mf=mf, h_scaling=h_scaling,
                                                 input_layer=input_layer, attn_param=attn_param[l_test])(x)
                                if spike_counting == True:
                                    if l_test == 1:
                                        spike_counter = SpikeSum(h_scaling=h_scaling)(x)
                                    else:
                                        s = SpikeSum(h_scaling=h_scaling)(x)
                                        spike_counter = Concatenate()([spike_counter, s])
                                l_train = l_train + 4  # because 4 layers (incl. the skipped Dropout) were combined
                                l_test = l_test + 1
                                model_match.append(l_train)
                        else:
                            print('Building joint Conv2D-BN-ASN Layer as layer ' + str(l_test) + ' in model_test')
                            x = ASN_2D_layer(kernel_size=model_training.layers[l_train].kernel_size,
                                             padding=model_training.layers[l_train].padding,
                                             filters=model_training.layers[l_train].filters,
                                             strides=model_training.layers[l_train].strides,
                                             use_bias=True, mf=mf, h_scaling=h_scaling,
                                             input_layer=input_layer, attn_param=attn_param[l_test])(x)
                            if spike_counting == True:
                                if l_test == 1:
                                    spike_counter = SpikeSum(h_scaling=h_scaling)(x)
                                else:
                                    s = SpikeSum(h_scaling=h_scaling)(x)
                                    spike_counter = Concatenate()([spike_counter, s])
                            l_train = l_train + 3  # because now 3 layers were combined
                            l_test = l_test + 1
                            model_match.append(l_train)
                    else:
                        print('Building joint Conv2D-BN Layer as layer ' + str(l_test) + ' in model_test')
                        x = TimeDistributed(Conv2D(model_training.layers[l_train].filters,
                                                   model_training.layers[l_train].kernel_size,
                                                   strides=model_training.layers[l_train].strides,
                                                   padding=model_training.layers[l_train].padding,
                                                   use_bias=False))(x)
                        l_train = l_train + 2
                        l_test = l_test + 1
                        model_match.append(l_train)
                elif model_training.layers[l_train + 1].__class__.__name__ == 'ASNTransfer':
                    print('Building joint Conv2D-ASN Layer as layer ' + str(l_test) + ' in model_test')
                    x = ASN_2D_layer(kernel_size=model_training.layers[l_train].kernel_size,
                                     filters=model_training.layers[l_train].filters,
                                     strides=model_training.layers[l_train].strides,
                                     use_bias=True, mf=mf, h_scaling=h_scaling, input_layer=input_layer,
                                     attn_param=attn_param[l_test])(x)
                    if spike_counting == True:
                        if l_test == 1:
                            spike_counter = SpikeSum(h_scaling=h_scaling)(x)
                        else:
                            s = SpikeSum(h_scaling=h_scaling)(x)
                            spike_counter = Concatenate()([spike_counter, s])
                    l_train = l_train + 2
                    l_test = l_test + 1
                    model_match.append(l_train)
                elif model_training.layers[l_train + 1].__class__.__name__ in ['Conv2D', 'Add', 'BatchNormalization']:
                    if model_training.layers[l_train].input.name.startswith(identity_name):  # Identity branch
                        print('Building time-distributed Conv2D as layer ' + str(l_test) + ' in model_test for identity')
                        identity = TimeDistributed(Conv2D(model_training.layers[l_train].filters,
                                                          model_training.layers[l_train].kernel_size,
                                                          padding=model_training.layers[l_train].padding,
                                                          strides=model_training.layers[l_train].strides,
                                                          use_bias=False))(identity)
                        # Update the identity name
                        identity_name = model_training.layers[i].name
                    else:
                        print('Building time-distributed Conv2D as layer ' + str(l_test) + ' in model_test')  # Main branch
                        x = TimeDistributed(Conv2D(model_training.layers[l_train].filters,
                                                   model_training.layers[l_train].kernel_size,
                                                   strides=model_training.layers[l_train].strides,
                                                   padding=model_training.layers[l_train].padding,
                                                   use_bias=False))(x)
                    l_train = l_train + 1
                    l_test = l_test + 1
                    if model_training.layers[l_train - 1]._outbound_nodes[0].outbound_layer.name.startswith(
                            'batch_normalization'):
                        from asn.utils import getLayerIndexByName
                        model_match.append(getLayerIndexByName(model_training,
                                                               model_training.layers[l_train - 1]._outbound_nodes[
                                                                   0].outbound_layer.name) + 1)
                    else:
                        model_match.append(l_train)
                else:
                    print('Building time-distributed Conv2D as layer ' + str(l_test) + ' in model_test')  # Main branch
                    x = TimeDistributed(Conv2D(model_training.layers[l_train].filters,
                                               model_training.layers[l_train].kernel_size,
                                               strides=model_training.layers[l_train].strides,
                                               padding=model_training.layers[l_train].padding,
                                               use_bias=False))(x)
                    l_train = l_train + 1
                    l_test = l_test + 1
                    if model_training.layers[l_train]._outbound_nodes[0].outbound_layer.name.startswith(
                            'batch_normalization'):
                        from asn.utils import getLayerIndexByName
                        model_match.append(getLayerIndexByName(model_training,
                                                               model_training.layers[l_train]._outbound_nodes[
                                                                   0].outbound_layer.name))
                    else:
                        model_match.append(l_train)
            elif mode == 'MaxPooling2D':
                print('Building time-distributed ' + mode + ' as layer ' + str(l_test) + ' in model_test')
                pool_size = model_training.layers[l_train].pool_size
                x = TimeDistributed(MaxPooling2D(pool_size=pool_size))(x)
                l_train = l_train + 1
                l_test = l_test + 1
                model_match.append(l_train)
            elif mode == 'Add':
                print('Building ' + mode + ' as layer ' + str(l_test) + ' in model_test')
                x = Add()([x, identity])
                l_train = l_train + 1
                l_test = l_test + 1
                model_match.append(l_train)
            elif mode == 'Subtract':
                print('Building ' + mode + ' as layer ' + str(l_test) + ' in model_test')
                x = Subtract()([identity, x])
                l_train = l_train + 1
                l_test = l_test + 1
                model_match.append(l_train)
            elif mode == 'Dense':
                if len(model_training.layers) == l_train + 1:  # Make a softmax output layer for the last layer
                    nodes = model_training.layers[l_train].output_shape[1]
                    config = model_training.layers[l_train].get_config()
                    activation = config['activation']
                    print('Building an ASN-' + activation + ' output layer as layer ' + str(l_test) + ' in model_test')
                    # This triggers that the spikes are being integrated into S and evaluated as a softmax at
                    # every time step
                    x = ASN_1D(nodes, use_bias=True, activation=activation, last_layer=True, mf=mf,
                               h_scaling=h_scaling, input_layer=input_layer)(x)
                    if spike_counting == True:
                        if l_test == 1:
                            spike_counter = SpikeSum(h_scaling=h_scaling)(x)
                        else:
                            s = SpikeSum(h_scaling=h_scaling)(x)
                            spike_counter = Concatenate()([spike_counter, s])
                    l_train = l_train + 1
                    l_test = l_test + 1
                    model_match.append(l_train)
                elif model_training.layers[l_train + 1].__class__.__name__ == 'BatchNormalization':
                    if model_training.layers[l_train + 2].__class__.__name__ == 'ASNTransfer':
                        print('Building joint Dense-BN-ASN Layer as layer ' + str(l_test) + ' in model_test')
                        nodes = model_training.layers[l_train].output_shape[1]
                        x = ASN_1D(nodes, use_bias=True, mf=mf, h_scaling=h_scaling,
                                   input_layer=input_layer)(x)
                        if spike_counting == True:
                            if l_test == 1:
                                spike_counter = SpikeSum(h_scaling=h_scaling)(x)
                            else:
                                s = SpikeSum(h_scaling=h_scaling)(x)
                                spike_counter = Concatenate()([spike_counter, s])
                        l_train = l_train + 3
                        l_test = l_test + 1
                        model_match.append(l_train)
                    else:
                        print('Building time-distributed Dense-BN Layer as layer ' + str(l_test) + ' in model_test')
                        nodes = model_training.layers[l_train].output_shape[1]
                        x = TimeDistributed(Dense(nodes, activation='linear', use_bias=False))(x)
                        l_train = l_train + 2
                        l_test = l_test + 1
                        model_match.append(l_train)
                else:
                    nodes = model_training.layers[l_train].output_shape[1]
                    if model_training.layers[l_train].input.name.startswith(identity_name):  # Identity branch
                        print('Building time-distributed Dense as layer ' + str(l_test) + ' in model_test for identity')
                        identity = TimeDistributed(Dense(nodes, activation='linear', use_bias=False))(identity)
                        # Update the identity name
                        identity_name = model_training.layers[i].name
                    else:
                        print('Building time-distributed Dense as layer ' + str(l_test) + ' in model_test')  # Main branch
                        x = TimeDistributed(Dense(nodes, activation='linear', use_bias=False))(x)
                    l_train = l_train + 1
                    l_test = l_test + 1
                    model_match.append(l_train)
            elif mode == 'Flatten':
                print('Building time-distributed ' + mode + ' as layer ' + str(l_test) + ' in model_test')
                x = TimeDistributed(Flatten())(x)
                l_train = l_train + 1
                l_test = l_test + 1
                model_match.append(l_train)
            elif mode in ['Dropout', 'GaussianDropout', 'GaussianNoise', 'UniformNoise']:
                l_train = l_train + 1
                print('Dropout or noise is not used during testing and therefore not included in model_test')
            elif mode == 'Activation':
                if len(model_training.layers) == l_train + 1:  # Make a softmax output layer for the last layer
                    nodes = model_training.layers[l_train].output_shape[1]
                    config = model_training.layers[l_train].get_config()
                    activation = config['activation']
                    print('Building an ASN-' + activation + ' output layer as layer ' + str(l_test) + ' in model_test')
                    # This triggers that the spikes are being integrated into S and evaluated as a softmax at
                    # every time step
                    x = ASN_1D(nodes, use_bias=True, activation=activation, last_layer=True, mf=mf,
                               h_scaling=h_scaling, input_layer=input_layer)(x)
                    l_train = l_train + 1
                    l_test = l_test + 1
                    model_match.append(l_train)
                else:
                    print('Activation layers that are not an ASNTransfer function can only be used as an output of the '
                          'network.')
            if len(model_training.layers[i]._outbound_nodes) > 1:
                # If a layer has more than 1 outgoing connection it is an identity mapping for a resnet block
                identity_name = model_training.layers[i].name
                identity = x
    if spike_counting == True:
        if attention_applied == True:
            model_test = Model(inputs=[input, input_att], outputs=[x, spike_counter])
        else:
            model_test = Model(inputs=input, outputs=[x, spike_counter])
    else:
        if attention_applied == True:
            model_test = Model(inputs=[input, input_att], outputs=x)
        else:
            model_test = Model(inputs=input, outputs=x)

    # This is here because we are using an offset in the counting of model_train
    model_match = np.array(model_match) - 1
    return model_test, model_match
def convert_weights(model_training, model_test, h, model_match):
print('Transferring weights ...')
# Layer counters for both models
if model_training.layers[0].__class__.__name__ == 'InputLayer':
l_train = 1
else:
l_train = 0
if model_test.layers[0].__class__.__name__ == 'InputLayer':
l_test = 1 # Since the first two layers are the input layers
bias = []
for i in np.arange(0, len(model_training.layers)):
mode = model_training.layers[i].__class__.__name__
print('Evaluating layer ' + str(i) + ' - ' + mode)
print(model_training.layers[i].input_shape)
print(model_test.layers[l_test].__class__.__name__)
if model_test.layers[l_test].__class__.__name__ in ['InputLayer']:
l_test = l_test+1
        # Skip all spike-counting related layers, which uniquely have only 2 dimensions
while len(model_test.layers[l_test].output_shape) == 2:
l_test = l_test + 1
print(model_test.layers[l_test].__class__.__name__)
if i < l_train:
print('Information from layer ' + str(i) + ' (' + mode + ') has already been integrated')
else:
if mode == 'BatchNormalization':
if model_training.layers[l_train + 1].__class__.__name__ == 'ASNTransfer': # Test for combination BN & ASN
print('Loading in BN weights from Model_training layer ' + str(l_train))
BN_weights = model_training.layers[l_train].get_weights()
weights = model_test.layers[l_test].get_weights()
# Replace 1-weights with the identity matrix between the channel dimensions, such that every channel
# is only connected to itself in the next layer
weights[0] = np.identity(weights[0].shape[-1]).reshape(weights[0].shape) # keep the shape the same.
if len(bias) > 0: # collect the biases
integrated_bias = 0
for b in bias:
integrated_bias = np.array(b) + integrated_bias
weights[1] = integrated_bias
bias = []
if l_test == 1: # if this is the first layer then don't scale by h, only apply BN
scaled_weights = normalize_weights(weights, BN_weights=BN_weights)
else:
scaled_weights = normalize_weights(weights, BN_weights=BN_weights, h=h) # also scaling the kernel with h
model_test.layers[l_test].set_weights(scaled_weights) # Assign new weights based on trained BN params
l_test = l_test + 1
                    if model_training.layers[l_train + 2].__class__.__name__ in ['AveragePooling2D', 'MaxPooling2D']:  # Test for pooling after BN & ASN
l_train = l_train + 3
else:
l_train = l_train + 2 # because 2 layers were added
else:
print('BN weights will be integrated with other layers')
l_train = l_train + 1
                    # l_test stays the same
elif mode == 'ASNTransfer':
print('Building ' + mode + ' as layer ' + str(l_test) + ' in model_test')
weights = model_test.layers[l_test].get_weights()
# Replace 1-weights with the identity matrix between the channel dimensions, such that every channel
# is only connected to itself in the next layer
weights[0] = np.identity(weights[0].shape[-1]).reshape(weights[0].shape) # keep the shape the same.
if len(bias) > 0: # collect the biases
integrated_bias = 0
for b in bias:
integrated_bias = np.array(b) + integrated_bias
weights[1]= integrated_bias
bias = []
scaled_weights = normalize_weights(weights, h=h) # scale by h to comply with transfer function
model_test.layers[l_test].set_weights(scaled_weights) # Assign new weights
l_train = l_train + 1
l_test = l_test + 1
elif mode == 'Conv2D':
            if (model_training.layers[l_train + 1].__class__.__name__ == 'BatchNormalization') & \
                    (model_training.layers[l_train]._outbound_nodes[0].outbound_layer.name.startswith(
                        'batch_norm')):  # The second part tests the connection
if model_training.layers[l_train + 2].__class__.__name__ == 'ASNTransfer':
# assemble trained conv and BN weights
print('Loading in Conv2d weights from Model_training layer ' + str(l_train))
weights = model_training.layers[l_train].get_weights()
print('Loading in BN weights from Model_training layer ' + str(l_train + 1))
BN_weights = model_training.layers[l_train + 1].get_weights()
print('Integrating BN weights from Model_training layer ' + str(l_train + 1))
if (l_test == 1) | (model_training.layers[l_train + 2].__class__.__name__ != 'ASNTransfer'): # if this is the first layer then don't scale by h, only apply BN
print('Not scaling by h!')
scaled_weights = normalize_weights(weights, BN_weights=BN_weights)
else:
scaled_weights = normalize_weights(weights, BN_weights=BN_weights, h=h) # also scaling the kernel with h
# Transfer trained weights
model_test.layers[l_test].set_weights(scaled_weights)
l_test = l_test + 1
if model_training.layers[l_train + 3].__class__.__name__ in ['MaxPooling2D','AveragePooling2D']:
l_train = l_train + 4 # because now 4 layers were combined
elif model_training.layers[l_train + 3].__class__.__name__ in ['GaussianDropout','Dropout']:
if model_training.layers[l_train + 4].__class__.__name__ in ['MaxPooling2D','AveragePooling2D']:
l_train = l_train + 5 # because now 5 layers were combined
else:
l_train = l_train + 4
else:
l_train = l_train + 3 # because now 3 layers were combined
else:
print('Loading in Conv2d weights from Model_training layer ' + str(l_train))
weights = model_training.layers[l_train].get_weights()
print('Loading in BN weights from Model_training layer ' + str(l_train + 1))
BN_weights = model_training.layers[l_train + 1].get_weights()
scaled_weights = normalize_weights(weights, BN_weights=BN_weights)
bias.append(scaled_weights[1])
model_test.layers[l_test].set_weights([scaled_weights[0]])
l_train = l_train+2
l_test = l_test+1
elif model_training.layers[l_train + 1].__class__.__name__ == 'ASNTransfer':
print('Loading in Conv2d weights from Model_training layer ' + str(l_train))
weights = model_training.layers[l_train].get_weights()
weights[0] = weights[0] * h # scale by h to comply with transfer function
model_test.layers[l_test].set_weights(weights)
l_train = l_train + 2
l_test = l_test + 1
else:
                    # Identity & main branches sometimes get scrambled during the assembly of the model,
                    # therefore we have to check which time-distributed Conv we are dealing with
print('Loading in Conv2d weights from Model_training layer ' + str(l_train))
weights = model_training.layers[l_train].get_weights()
if model_training.layers[l_train]._outbound_nodes[0].outbound_layer.name.startswith(
'batch_norm'):
print('Found some BN weights!')
BN_weights = model_training.get_layer(
model_training.layers[l_train]._outbound_nodes[0].outbound_layer.name).get_weights()
weights = normalize_weights(weights, BN_weights=BN_weights)
bias.append(weights[1])
if model_test.layers[l_test].get_weights()[0].shape != weights[0].shape:
model_test.layers[l_test+1].set_weights([weights[0]])
weights2 = model_training.layers[l_train+1].get_weights()
bias.append(weights2[1])
model_test.layers[l_test].set_weights([weights2[0]])
l_train = l_train + 2
l_test = l_test + 2
else:
model_test.layers[l_test].set_weights([weights[0]])
l_train = l_train + 1
l_test = l_test + 1
elif mode in ['MaxPooling2D', 'AveragePooling2D', 'Add', 'Subtract', 'Flatten']:
l_train = l_train + 1
l_test = l_test + 1
elif mode == 'Dense':
if len(model_training.layers) == l_train + 1: # Make a softmax output layer for the last layer
weights = model_training.layers[l_train].get_weights()
model_test.layers[l_test].set_weights(weights)
elif model_training.layers[l_train + 1].__class__.__name__ == 'BatchNormalization':
if model_training.layers[l_train + 2].__class__.__name__ == 'ASNTransfer':
print('Loading in Dense weights from Model_training layer ' + str(l_train))
weights = model_training.layers[l_train].get_weights()
BN_weights = model_training.layers[l_train + 1].get_weights()
print('Integrating BN weights from Model_training layer ' + str(l_train + 1))
scaled_weights = normalize_weights(weights, BN_weights=BN_weights,
h=h) # also scaling the kernel with h
# Transfer trained weights
model_test.layers[l_test].set_weights(scaled_weights)
l_train = l_train + 3
l_test = l_test + 1
else:
print('Loading in Dense weights from Model_training layer ' + str(l_train))
weights = model_training.layers[l_train].get_weights()
BN_weights = model_training.layers[l_train + 1].get_weights()
print('Integrating BN weights from Model_training layer ' + str(l_train + 1))
scaled_weights = normalize_weights(weights, BN_weights)
bias.append(scaled_weights[1])
# Transfer trained weights
model_test.layers[l_test].set_weights([scaled_weights[0]])
l_train = l_train + 2
l_test = l_test + 1
else:
print('Loading in Dense weights from Model_training layer ' + str(l_train))
weights = model_training.layers[l_train].get_weights()
bias.append(weights[1])
# Transfer trained weights
model_test.layers[l_test].set_weights([weights[0]])
l_train = l_train + 1
l_test = l_test + 1
elif mode == 'Activation':
print('Building ' + mode + ' as layer ' + str(l_test) + ' in model_test')
weights = model_test.layers[l_test].get_weights()
# Replace 1-weights with the identity matrix between the channel dimensions, such that every channel
# is only connected to itself in the next layer
weights[0] = np.identity(weights[0].shape[-1]).reshape(weights[0].shape) # keep the shape the same.
if len(bias) > 0: # collect the biases
integrated_bias = 0
for b in bias:
if model_training.layers[l_train-1].__class__.__name__ == 'Add':
integrated_bias = np.array(b) + integrated_bias
elif model_training.layers[l_train-1].__class__.__name__ == 'Subtract':
if integrated_bias == 0:
integrated_bias = np.array(b)
else:
integrated_bias = integrated_bias - np.array(b)
weights[1] = integrated_bias
bias = []
scaled_weights = normalize_weights(weights, h=h) # scale by h to comply with transfer function
model_test.layers[l_test].set_weights(scaled_weights) # Assign new weights
l_train = l_train + 1
l_test = l_test + 1
elif mode in ['Dropout', 'GaussianDropout','GaussianNoise','UniformNoise']:
l_train = l_train + 1
print('Dropout or Noise is not used during testing and therefore not included in model_test')
return model_test
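The BatchNormalization branches above repeatedly fold trained BN parameters into the preceding kernel and bias via `normalize_weights`, which is defined elsewhere in this module. A minimal numpy sketch of what such a fold typically computes — the function name, argument order, and `eps` default here are illustrative assumptions, not the actual implementation:

```python
import numpy as np

def fold_batchnorm(kernel, bias, gamma, beta, mean, var, eps=0.0, h=None):
    # Illustrative sketch only: scale the kernel's output channels by
    # gamma / sqrt(var + eps) and shift the bias accordingly, so the
    # Conv/Dense layer alone reproduces Conv/Dense followed by BN.
    scale = gamma / np.sqrt(var + eps)
    folded_kernel = kernel * scale          # broadcasts over the channel axis
    folded_bias = (bias - mean) * scale + beta
    if h is not None:                       # optional extra scaling by h, as done
        folded_kernel = folded_kernel * h   # for layers feeding an ASN transfer
        folded_bias = folded_bias * h
    return folded_kernel, folded_bias
```

With this fold, `x * folded_kernel + folded_bias` equals BN applied to `x * kernel + bias`, which is why the converter can drop the BN layer from `model_test` entirely.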
def convert_model(model_training, time_steps, *args, mf=0.1, h_weightscaling=True, skip=[], skip_value=0.06,
spike_counting=False):
"""This function translates between the training architecture and the ASN architecture
:parameter
model_training: analog Keras model with trained weights
time_steps for the digital/spiking model conversion
*args: attn_param, dicts with parameter for attentional manipulation
mf: precision for spike generation. Default: 0.1
h_weightscaling: weights of kernels will be scaled by h to account for adherence to the transfer function based on mf
resnet: whether given model is a resnet?
:returns
model_test: Converted sDNN architecture with trained weights from model_training
model_match: Layer correspondence between analog & spiking network.
Last updated 18.04.19
"""
if len(args) == 0:
attn_param = set_model_attn_param(model_training)
else:
attn_param = set_model_attn_param(model_training, args)
if h_weightscaling == True:
h = normalize_transfer(mf) # for adherence to the transfer function
h_scaling = None
else:
h = None
h_scaling = True
print('Assuming that spike production will be scaled by h')
model_test, model_match = convert_architecture(model_training, time_steps, mf, h_scaling, attn_param, skip=skip,
skip_value=skip_value, spike_counting=spike_counting)
model_test = convert_weights(model_training, model_test, h, model_match)
print('Conversion complete: ')
model_test.summary()
return model_test, model_match
# File: mail_templated/tests/testcases.py (repo: dabapps/django-mail-templated, MIT license)
# -*- coding: utf-8 -*-
from django.test import TestCase
from mail_templated import send_mail
from django.core import mail
class TestSendEmail(TestCase):
def test_send_email(self):
""" test we can send an email """
send_mail('test_message.email', {'context_var': "1"}, "from_email", ["to_email"])
self.assertEqual(len(mail.outbox), 1)
email = mail.outbox[0]
# now check the details are correct
self.assertEqual(email.to[0], "to_email")
self.assertEqual(email.from_email, "from_email")
self.assertEqual(email.subject, "Test Message")
self.assertEqual(email.template_name, "test_message.email")
self.assertEqual(email.body, "Test Message Body")
self.assertEqual(email.message().get_content_type(), "multipart/alternative")
self.assertIn("<p>Test Message HTML</p>", email.message().as_string())
def test_send_extended_email(self):
""" test we can send an email """
send_mail('test_message_extended.email', {'context_var': "1"}, "from_email", ["to_email"])
self.assertEqual(len(mail.outbox), 1)
email = mail.outbox[0]
# now check the details are correct
self.assertEqual(email.to[0], "to_email")
self.assertEqual(email.from_email, "from_email")
self.assertEqual(email.subject, "Test Message")
self.assertEqual(email.template_name, "test_message_extended.email")
self.assertEqual(email.body, "Test Message Body")
self.assertEqual(email.message().get_content_type(), "multipart/alternative")
self.assertIn("<p>Test Message HTML</p>", email.message().as_string())
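The assertions above expect a multipart/alternative message carrying both a plain-text body and an HTML part. A small stdlib sketch (independent of mail_templated) of how such a message is structured:

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

def build_alternative_message(subject, text_body, html_body):
    # multipart/alternative: clients render the richest part they support,
    # so the HTML part is attached after the plain-text fallback.
    msg = MIMEMultipart('alternative')
    msg['Subject'] = subject
    msg.attach(MIMEText(text_body, 'plain'))
    msg.attach(MIMEText(html_body, 'html'))
    return msg

msg = build_alternative_message('Test Message',
                                'Test Message Body',
                                '<p>Test Message HTML</p>')
```

This mirrors what the tests check: `msg.get_content_type()` is `multipart/alternative`, and the HTML fragment appears in `msg.as_string()`.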
# File: tests/test_import.py (repo: turnergarrow/galpy, BSD-3-Clause license)
###################TEST WHETHER THE PACKAGE CAN BE IMPORTED####################
def test_top_import():
import galpy
def test_orbit_import():
import galpy.orbit
def test_potential_import():
import galpy.potential
def test_actionAngle_import():
import galpy.actionAngle
def test_df_import():
import galpy.df
def test_util_import():
import galpy.util
import galpy.util.multi
import galpy.util.bovy_plot
import galpy.util.bovy_coords
import galpy.util.bovy_conversion
# File: tests/test_sklearn_scaler_converter.py (repo: xiaowuhu/sklearn-onnx, Apache-2.0 license)
# SPDX-License-Identifier: Apache-2.0
"""
Tests scikit-learn's standard scaler converter.
"""
import unittest
from distutils.version import StrictVersion
import numpy
from onnxruntime import __version__ as ort_version
from sklearn.preprocessing import (
StandardScaler, RobustScaler, MinMaxScaler, MaxAbsScaler)
try:
# scikit-learn >= 0.22
from sklearn.utils._testing import ignore_warnings
except ImportError:
# scikit-learn < 0.22
from sklearn.utils.testing import ignore_warnings
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import (
Int64TensorType, FloatTensorType, DoubleTensorType)
from test_utils import dump_data_and_model, TARGET_OPSET
ort_version = ".".join(ort_version.split('.')[:2])
class TestSklearnScalerConverter(unittest.TestCase):
@ignore_warnings(category=DeprecationWarning)
def test_standard_scaler_int(self):
model = StandardScaler()
data = [[0, 0, 3], [1, 1, 0], [0, 2, 1], [1, 0, 2]]
model.fit(data)
model_onnx = convert_sklearn(model, "scaler",
[("input", Int64TensorType([None, 3]))],
target_opset=TARGET_OPSET)
self.assertTrue(model_onnx is not None)
dump_data_and_model(
numpy.array(data, dtype=numpy.int64),
model, model_onnx,
basename="SklearnStandardScalerInt64")
@ignore_warnings(category=DeprecationWarning)
def test_min_max_scaler_int(self):
model = MinMaxScaler()
data = [[0, 0, 3], [1, 1, 0], [0, 2, 1], [1, 0, 2]]
model.fit(data)
model_onnx = convert_sklearn(model, "scaler",
[("input", Int64TensorType([None, 3]))],
target_opset=TARGET_OPSET)
self.assertTrue(model_onnx is not None)
dump_data_and_model(
numpy.array(data, dtype=numpy.int64),
model, model_onnx,
basename="SklearnMinMaxScalerInt64")
@ignore_warnings(category=DeprecationWarning)
def test_standard_scaler_double(self):
model = StandardScaler()
data = [[0, 0, 3], [1, 1, 0], [0, 2, 1], [1, 0, 2]]
model.fit(data)
model_onnx = convert_sklearn(model, "scaler",
[("input", DoubleTensorType([None, 3]))],
target_opset=TARGET_OPSET)
self.assertTrue(model_onnx is not None)
dump_data_and_model(
numpy.array(data, dtype=numpy.float64),
model, model_onnx,
basename="SklearnStandardScalerDouble")
@ignore_warnings(category=DeprecationWarning)
def test_standard_scaler_blacklist(self):
model = StandardScaler()
data = numpy.array([[0, 0, 3], [1, 1, 0], [0, 2, 1], [1, 0, 2]],
dtype=numpy.float32)
model.fit(data)
model_onnx = convert_sklearn(model, "scaler",
[("input", FloatTensorType([None, 3]))],
target_opset=TARGET_OPSET,
black_op={'Normalizer', 'Scaler'})
self.assertNotIn('Normalizer', str(model_onnx))
self.assertNotIn('Scaler', str(model_onnx))
dump_data_and_model(
data, model, model_onnx,
basename="SklearnStandardScalerBlackList")
@ignore_warnings(category=DeprecationWarning)
def test_standard_scaler_floats(self):
model = StandardScaler()
data = [
[0.0, 0.0, 3.0],
[1.0, 1.0, 0.0],
[0.0, 2.0, 1.0],
[1.0, 0.0, 2.0],
]
model.fit(data)
model_onnx = convert_sklearn(model, "scaler",
[("input", FloatTensorType([None, 3]))],
target_opset=TARGET_OPSET)
self.assertTrue(model_onnx is not None)
dump_data_and_model(
numpy.array(data, dtype=numpy.float32),
model, basename="SklearnStandardScalerFloat32")
@ignore_warnings(category=DeprecationWarning)
def test_standard_scaler_floats_div(self):
model = StandardScaler()
data = [
[0.0, 0.0, 3.0],
[1.0, 1.0, 0.0],
[0.0, 2.0, 1.0],
[1.0, 0.0, 2.0],
]
model.fit(data)
model_onnx = convert_sklearn(
model, "scaler", [("input", FloatTensorType([None, 3]))],
options={id(model): {'div': 'div'}})
assert 'op_type: "Div"' in str(model_onnx)
self.assertTrue(model_onnx is not None)
dump_data_and_model(
numpy.array(data, dtype=numpy.float32),
model, basename="SklearnStandardScalerFloat32Div")
@ignore_warnings(category=DeprecationWarning)
def test_standard_scaler_floats_div_cast(self):
model = StandardScaler()
data = [
[0.0, 0.0, 3.0],
[1.0, 1.0, 0.0],
[0.0, 2.0, 1.0],
[1.0, 0.0, 2.0],
]
model.fit(data)
model_onnx = convert_sklearn(
model, "cast", [("input", FloatTensorType([None, 3]))],
options={id(model): {'div': 'div_cast'}},
target_opset=TARGET_OPSET)
assert 'op_type: "Div"' in str(model_onnx)
assert 'caler"' not in str(model_onnx)
assert "double_data:" in str(model_onnx)
self.assertTrue(model_onnx is not None)
dump_data_and_model(
numpy.array(data, dtype=numpy.float32),
model, basename="SklearnStandardScalerFloat32DivCast")
@ignore_warnings(category=DeprecationWarning)
def test_standard_scaler_floats_no_std(self):
model = StandardScaler(with_std=False)
data = [
[0.0, 0.0, 3.0],
[1.0, 1.0, 0.0],
[0.0, 2.0, 1.0],
[1.0, 0.0, 2.0],
]
model.fit(data)
model_onnx = convert_sklearn(model, "scaler",
[("input", FloatTensorType([None, 3]))],
target_opset=TARGET_OPSET)
self.assertTrue(model_onnx is not None)
dump_data_and_model(
numpy.array(data, dtype=numpy.float32),
model, basename="SklearnStandardScalerFloat32NoStd")
@ignore_warnings(category=DeprecationWarning)
def test_standard_scaler_floats_no_mean(self):
model = StandardScaler(with_mean=False)
data = [
[0.0, 0.0, 3.0],
[1.0, 1.0, 0.0],
[0.0, 2.0, 1.0],
[1.0, 0.0, 2.0],
]
model.fit(data)
model_onnx = convert_sklearn(model, "scaler",
[("input", FloatTensorType([None, 3]))],
target_opset=TARGET_OPSET)
self.assertTrue(model_onnx is not None)
dump_data_and_model(
numpy.array(data, dtype=numpy.float32),
model, basename="SklearnStandardScalerFloat32NoMean")
@ignore_warnings(category=DeprecationWarning)
def test_standard_scaler_floats_no_mean_std(self):
model = StandardScaler(with_mean=False, with_std=False)
data = [
[0.0, 0.0, 3.0],
[1.0, 1.0, 0.0],
[0.0, 2.0, 1.0],
[1.0, 0.0, 2.0],
]
model.fit(data)
model_onnx = convert_sklearn(model, "scaler",
[("input", FloatTensorType([None, 3]))],
target_opset=TARGET_OPSET)
self.assertTrue(model_onnx is not None)
dump_data_and_model(
numpy.array(data, dtype=numpy.float32),
model, basename="SklearnStandardScalerFloat32NoMeanStd")
@ignore_warnings(category=DeprecationWarning)
def test_robust_scaler_floats(self):
model = RobustScaler()
data = [
[0.0, 0.0, 3.0],
[1.0, 1.0, 0.0],
[0.0, 2.0, 1.0],
[1.0, 0.0, 2.0],
]
model.fit(data)
model_onnx = convert_sklearn(model, "scaler",
[("input", FloatTensorType([None, 3]))],
target_opset=TARGET_OPSET)
self.assertTrue(model_onnx is not None)
dump_data_and_model(
numpy.array(data, dtype=numpy.float32),
model, basename="SklearnRobustScalerFloat32")
@ignore_warnings(category=DeprecationWarning)
def test_robust_scaler_doubles(self):
model = RobustScaler()
data = [
[0.0, 0.0, 3.0],
[1.0, 1.0, 0.0],
[0.0, 2.0, 1.0],
[1.0, 0.0, 2.0],
]
model.fit(data)
model_onnx = convert_sklearn(model, "scaler",
[("input", DoubleTensorType([None, 3]))],
target_opset=TARGET_OPSET)
self.assertTrue(model_onnx is not None)
dump_data_and_model(
numpy.array(data, dtype=numpy.float64),
model, model_onnx, basename="SklearnRobustScalerFloat64")
@ignore_warnings(category=DeprecationWarning)
def test_robust_scaler_floats_no_bias(self):
model = RobustScaler(with_centering=False)
data = [
[0.0, 0.0, 3.0],
[1.0, 1.0, 0.0],
[0.0, 2.0, 1.0],
[1.0, 0.0, 2.0],
]
model.fit(data)
model_onnx = convert_sklearn(model, "scaler",
[("input", FloatTensorType([None, 3]))],
target_opset=TARGET_OPSET)
self.assertTrue(model_onnx is not None)
dump_data_and_model(
numpy.array(data, dtype=numpy.float32),
model,
basename="SklearnRobustScalerWithCenteringFloat32")
@ignore_warnings(category=DeprecationWarning)
def test_robust_scaler_floats_no_scaling(self):
model = RobustScaler(with_scaling=False)
data = [
[0.0, 0.0, 3.0],
[1.0, 1.0, 0.0],
[0.0, 2.0, 1.0],
[1.0, 0.0, 2.0],
]
model.fit(data)
model_onnx = convert_sklearn(model, "scaler",
[("input", FloatTensorType([None, 3]))],
target_opset=TARGET_OPSET)
self.assertTrue(model_onnx is not None)
dump_data_and_model(
numpy.array(data, dtype=numpy.float32),
model, basename="SklearnRobustScalerNoScalingFloat32")
@ignore_warnings(category=DeprecationWarning)
def test_robust_scaler_floats_no_centering_scaling(self):
model = RobustScaler(with_centering=False, with_scaling=False)
data = [
[0.0, 0.0, 3.0],
[1.0, 1.0, 0.0],
[0.0, 2.0, 1.0],
[1.0, 0.0, 2.0],
]
model.fit(data)
model_onnx = convert_sklearn(model, "scaler",
[("input", FloatTensorType([None, 3]))],
target_opset=TARGET_OPSET)
self.assertTrue(model_onnx is not None)
dump_data_and_model(
numpy.array(data, dtype=numpy.float32),
model,
basename="SklearnRobustScalerNoCenteringScalingFloat32")
@ignore_warnings(category=DeprecationWarning)
def test_min_max_scaler(self):
model = MinMaxScaler()
data = [
[0.0, 0.0, 3.0],
[1.0, 1.0, 0.0],
[0.0, 2.0, 1.0],
[1.0, 0.0, 2.0],
]
model.fit(data)
model_onnx = convert_sklearn(model, "scaler",
[("input", FloatTensorType([None, 3]))],
target_opset=TARGET_OPSET)
self.assertTrue(model_onnx is not None)
dump_data_and_model(
numpy.array(data, dtype=numpy.float32),
model, basename="SklearnMinMaxScaler")
@ignore_warnings(category=DeprecationWarning)
def test_min_max_scaler_double(self):
model = MinMaxScaler()
data = [
[0.0, 0.0, 3.0],
[1.0, 1.0, 0.0],
[0.0, 2.0, 1.0],
[1.0, 0.0, 2.0],
]
model.fit(data)
model_onnx = convert_sklearn(model, "scaler",
[("input", DoubleTensorType([None, 3]))],
target_opset=TARGET_OPSET)
self.assertTrue(model_onnx is not None)
dump_data_and_model(
numpy.array(data, dtype=numpy.float64),
model, model_onnx, basename="SklearnMinMaxScalerDouble")
@ignore_warnings(category=DeprecationWarning)
@unittest.skipIf(StrictVersion(ort_version) < StrictVersion("1.9.0"),
reason="Operator clip not fully implemented")
def test_min_max_scaler_clip(self):
model = MinMaxScaler(clip=True)
data = [
[0.0, 0.0, 3.0],
[1.0, 1.0, 0.0],
[0.0, 2.0, 1.0],
[1.0, 0.0, 2.0],
]
model.fit(data)
model_onnx = convert_sklearn(model, "scaler",
[("input", FloatTensorType([None, 3]))],
target_opset=TARGET_OPSET)
self.assertTrue(model_onnx is not None)
data[0][0] = 1e6
dump_data_and_model(
numpy.array(data, dtype=numpy.float32),
model, model_onnx, basename="SklearnMinMaxScalerClip")
@ignore_warnings(category=DeprecationWarning)
@unittest.skipIf(StrictVersion(ort_version) < StrictVersion("1.9.0"),
reason="Operator clip not fully implemented")
def test_min_max_scaler_double_clip(self):
model = MinMaxScaler(clip=True)
data = [
[0.0, 0.0, 3.0],
[1.0, 1.0, 0.0],
[0.0, 2.0, 1.0],
[1.0, 0.0, 2.0],
]
model.fit(data)
model_onnx = convert_sklearn(model, "scaler",
[("input", DoubleTensorType([None, 3]))],
target_opset=TARGET_OPSET)
self.assertTrue(model_onnx is not None)
data[0][0] = 1e6
dump_data_and_model(
numpy.array(data, dtype=numpy.float64),
model, model_onnx, basename="SklearnMinMaxScalerDouble")
@ignore_warnings(category=DeprecationWarning)
def test_max_abs_scaler(self):
model = MaxAbsScaler()
data = [
[0.0, 0.0, -3.0],
[1.0, 1.0, 0.0],
[0.0, 2.0, 1.0],
[1.0, 0.0, 2.0],
]
model.fit(data)
model_onnx = convert_sklearn(model, "scaler",
[("input", FloatTensorType([None, 3]))],
target_opset=TARGET_OPSET)
self.assertTrue(model_onnx is not None)
dump_data_and_model(
numpy.array(data, dtype=numpy.float32),
model, basename="SklearnMaxAbsScaler")
@ignore_warnings(category=DeprecationWarning)
def test_max_abs_scaler_double(self):
model = MaxAbsScaler()
data = [
[0.0, 0.0, -3.0],
[1.0, 1.0, 0.0],
[0.0, 2.0, 1.0],
[1.0, 0.0, 2.0],
]
model.fit(data)
model_onnx = convert_sklearn(model, "scaler",
[("input", DoubleTensorType([None, 3]))],
target_opset=TARGET_OPSET)
self.assertTrue(model_onnx is not None)
dump_data_and_model(
numpy.array(data, dtype=numpy.float64),
model, model_onnx, basename="SklearnMaxAbsScalerDouble")
if __name__ == "__main__":
unittest.main()
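Each test above checks that the exported ONNX graph reproduces scikit-learn's transform numerically. As a reference for what, e.g., `StandardScaler` computes per feature, here is a plain-numpy sketch (independent of the converter; sklearn uses the biased standard deviation, i.e. `ddof=0`):

```python
import numpy as np

def standard_scale(data, with_mean=True, with_std=True):
    # Per-column standardization, matching sklearn's StandardScaler:
    # subtract the column mean, then divide by the column std (ddof=0).
    x = np.asarray(data, dtype=np.float64)
    if with_mean:
        x = x - x.mean(axis=0)
    if with_std:
        x = x / x.std(axis=0)
    return x

data = [[0.0, 0.0, 3.0], [1.0, 1.0, 0.0], [0.0, 2.0, 1.0], [1.0, 0.0, 2.0]]
scaled = standard_scale(data)
```

After scaling, every column has mean 0 and standard deviation 1 — the invariant the ONNX `Scaler` op (or the `Div`-based decomposition selected via the `div`/`div_cast` options) must preserve.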
# File: api/config.py (repo: estevg/apitest, MIT license)
API_KEY = {
'api_key': '5284936e2774291ab4701bf2b2ac9f6d'
}
# File: base_kivy_app/tests/test_utils.py (repo: matham/base_kivy_app, MIT license)
from base_kivy_app.utils import pretty_space
def test_pretty_space():
assert pretty_space(10003045065) == '9.32 GB'
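`pretty_space` (imported above) formats a byte count for display; the expected `'9.32 GB'` for 10003045065 bytes implies a 1024-based unit ladder labelled with SI-style suffixes. A hedged reimplementation sketch — the real function lives in `base_kivy_app.utils` and may differ in signature and edge cases:

```python
def pretty_space_sketch(num_bytes):
    # Divide by 1024 per step; the suffixes follow the 'GB' labelling the
    # test expects (even though the divisor is 1024, not 1000).
    value = float(num_bytes)
    for suffix in ('bytes', 'KB', 'MB', 'GB', 'TB'):
        if value < 1024.0 or suffix == 'TB':
            return '{:.2f} {}'.format(value, suffix)
        value /= 1024.0
```

Under these assumptions, 10003045065 / 1024³ ≈ 9.316, which rounds to the `'9.32 GB'` the test asserts.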
# File: mux_python/api/url_signing_keys_api.py (repo: moaazsidat/mux-python, MIT license)
# coding: utf-8
"""
Mux API
Mux is how developers build online video. This API encompasses both Mux Video and Mux Data functionality to help you build your video-related projects better and faster than ever before. # noqa: E501
The version of the OpenAPI document: v1
Contact: devex@mux.com
Generated by: https://openapi-generator.tech
"""
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from mux_python.api_client import ApiClient
from mux_python.exceptions import ( # noqa: F401
ApiTypeError,
ApiValueError
)
class URLSigningKeysApi(object):
"""NOTE: This class is auto generated by OpenAPI Generator
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
def create_url_signing_key(self, **kwargs): # noqa: E501
"""Create a URL signing key # noqa: E501
Creates a new signing key pair. When creating a new signing key, the API will generate a 2048-bit RSA key-pair and return the private key and a generated key-id; the public key will be stored at Mux to validate signed tokens. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_url_signing_key(async_req=True)
>>> result = thread.get()
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: SigningKeyResponse
"""
kwargs['_return_http_data_only'] = True
return self.create_url_signing_key_with_http_info(**kwargs) # noqa: E501
def create_url_signing_key_with_http_info(self, **kwargs): # noqa: E501
"""Create a URL signing key # noqa: E501
Creates a new signing key pair. When creating a new signing key, the API will generate a 2048-bit RSA key-pair and return the private key and a generated key-id; the public key will be stored at Mux to validate signed tokens. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_url_signing_key_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
        :param _return_http_data_only: response data without HTTP status code
and headers
:type _return_http_data_only: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
        :param _request_auth: set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
:type _request_auth: dict, optional
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: tuple(SigningKeyResponse, status_code(int), headers(HTTPHeaderDict))
"""
local_var_params = locals()
all_params = [
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout',
'_request_auth'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method create_url_signing_key" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['accessToken'] # noqa: E501
response_types_map = {
201: "SigningKeyResponse",
}
return self.api_client.call_api(
'/video/v1/signing-keys', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_types_map=response_types_map,
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats,
_request_auth=local_var_params.get('_request_auth'))
def delete_url_signing_key(self, signing_key_id, **kwargs): # noqa: E501
"""Delete a URL signing key # noqa: E501
Deletes an existing signing key. Use with caution, as this will invalidate any existing signatures and no URLs can be signed using the key again. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_url_signing_key(signing_key_id, async_req=True)
>>> result = thread.get()
:param signing_key_id: The ID of the signing key. (required)
:type signing_key_id: str
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: None
"""
kwargs['_return_http_data_only'] = True
return self.delete_url_signing_key_with_http_info(signing_key_id, **kwargs) # noqa: E501
def delete_url_signing_key_with_http_info(self, signing_key_id, **kwargs): # noqa: E501
"""Delete a URL signing key # noqa: E501
Deletes an existing signing key. Use with caution, as this will invalidate any existing signatures and no URLs can be signed using the key again. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_url_signing_key_with_http_info(signing_key_id, async_req=True)
>>> result = thread.get()
:param signing_key_id: The ID of the signing key. (required)
:type signing_key_id: str
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
        :param _return_http_data_only: response data without HTTP status code
and headers
:type _return_http_data_only: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
        :param _request_auth: set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
:type _request_auth: dict, optional
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: None
"""
local_var_params = locals()
all_params = [
'signing_key_id'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout',
'_request_auth'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method delete_url_signing_key" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'signing_key_id' is set
if self.api_client.client_side_validation and ('signing_key_id' not in local_var_params or # noqa: E501
local_var_params['signing_key_id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `signing_key_id` when calling `delete_url_signing_key`") # noqa: E501
collection_formats = {}
path_params = {}
if 'signing_key_id' in local_var_params:
path_params['SIGNING_KEY_ID'] = local_var_params['signing_key_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# Authentication setting
auth_settings = ['accessToken'] # noqa: E501
response_types_map = {}
return self.api_client.call_api(
'/video/v1/signing-keys/{SIGNING_KEY_ID}', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_types_map=response_types_map,
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats,
_request_auth=local_var_params.get('_request_auth'))
def get_url_signing_key(self, signing_key_id, **kwargs): # noqa: E501
"""Retrieve a URL signing key # noqa: E501
Retrieves the details of a URL signing key that has previously been created. Supply the unique signing key ID that was returned from your previous request, and Mux will return the corresponding signing key information. **The private key is not returned in this response.** # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_url_signing_key(signing_key_id, async_req=True)
>>> result = thread.get()
:param signing_key_id: The ID of the signing key. (required)
:type signing_key_id: str
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: SigningKeyResponse
"""
kwargs['_return_http_data_only'] = True
return self.get_url_signing_key_with_http_info(signing_key_id, **kwargs) # noqa: E501
def get_url_signing_key_with_http_info(self, signing_key_id, **kwargs): # noqa: E501
"""Retrieve a URL signing key # noqa: E501
Retrieves the details of a URL signing key that has previously been created. Supply the unique signing key ID that was returned from your previous request, and Mux will return the corresponding signing key information. **The private key is not returned in this response.** # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_url_signing_key_with_http_info(signing_key_id, async_req=True)
>>> result = thread.get()
:param signing_key_id: The ID of the signing key. (required)
:type signing_key_id: str
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
        :param _return_http_data_only: response data without HTTP status code
and headers
:type _return_http_data_only: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
        :param _request_auth: set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
:type _request_auth: dict, optional
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: tuple(SigningKeyResponse, status_code(int), headers(HTTPHeaderDict))
"""
local_var_params = locals()
all_params = [
'signing_key_id'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout',
'_request_auth'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_url_signing_key" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'signing_key_id' is set
if self.api_client.client_side_validation and ('signing_key_id' not in local_var_params or # noqa: E501
local_var_params['signing_key_id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `signing_key_id` when calling `get_url_signing_key`") # noqa: E501
collection_formats = {}
path_params = {}
if 'signing_key_id' in local_var_params:
path_params['SIGNING_KEY_ID'] = local_var_params['signing_key_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['accessToken'] # noqa: E501
response_types_map = {
200: "SigningKeyResponse",
}
return self.api_client.call_api(
'/video/v1/signing-keys/{SIGNING_KEY_ID}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_types_map=response_types_map,
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats,
_request_auth=local_var_params.get('_request_auth'))
def list_url_signing_keys(self, **kwargs): # noqa: E501
"""List URL signing keys # noqa: E501
Returns a list of URL signing keys. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.list_url_signing_keys(async_req=True)
>>> result = thread.get()
:param limit: Number of items to include in the response
:type limit: int
:param page: Offset by this many pages, of the size of `limit`
:type page: int
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: ListSigningKeysResponse
"""
kwargs['_return_http_data_only'] = True
return self.list_url_signing_keys_with_http_info(**kwargs) # noqa: E501
def list_url_signing_keys_with_http_info(self, **kwargs): # noqa: E501
"""List URL signing keys # noqa: E501
Returns a list of URL signing keys. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.list_url_signing_keys_with_http_info(async_req=True)
>>> result = thread.get()
:param limit: Number of items to include in the response
:type limit: int
:param page: Offset by this many pages, of the size of `limit`
:type page: int
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
        :param _return_http_data_only: response data without HTTP status code
and headers
:type _return_http_data_only: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
        :param _request_auth: set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
:type _request_auth: dict, optional
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: tuple(ListSigningKeysResponse, status_code(int), headers(HTTPHeaderDict))
"""
local_var_params = locals()
all_params = [
'limit',
'page'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout',
'_request_auth'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method list_url_signing_keys" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'limit' in local_var_params and local_var_params['limit'] is not None: # noqa: E501
query_params.append(('limit', local_var_params['limit'])) # noqa: E501
if 'page' in local_var_params and local_var_params['page'] is not None: # noqa: E501
query_params.append(('page', local_var_params['page'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['accessToken'] # noqa: E501
response_types_map = {
200: "ListSigningKeysResponse",
}
return self.api_client.call_api(
'/video/v1/signing-keys', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_types_map=response_types_map,
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats,
_request_auth=local_var_params.get('_request_auth'))
| 44.20354 | 295 | 0.603724 | 2,897 | 24,975 | 4.950639 | 0.088367 | 0.054386 | 0.048808 | 0.030121 | 0.938642 | 0.935225 | 0.935225 | 0.923721 | 0.909078 | 0.909078 | 0 | 0.01166 | 0.33037 | 24,975 | 564 | 296 | 44.281915 | 0.84591 | 0.518278 | 0 | 0.715481 | 0 | 0 | 0.167187 | 0.050595 | 0 | 0 | 0 | 0 | 0 | 1 | 0.037657 | false | 0 | 0.020921 | 0 | 0.096234 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
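Every `*_with_http_info` method in the generated client above follows the same pattern: snapshot `locals()`, build a whitelist of the method's own parameters plus the common per-request options, and reject any unexpected keyword argument before dispatching to `api_client.call_api`. A minimal standalone sketch of that validation idiom (the function name here is illustrative, not part of mux-python):

```python
def validate_kwargs(method_name, kwargs, positional_params):
    # Whitelist: the method's own parameters plus the shared per-request options
    # that every generated method accepts.
    all_params = list(positional_params)
    all_params.extend([
        'async_req',
        '_return_http_data_only',
        '_preload_content',
        '_request_timeout',
        '_request_auth',
    ])
    params = {}
    for key, val in kwargs.items():
        if key not in all_params:
            raise TypeError(
                "Got an unexpected keyword argument '%s'"
                " to method %s" % (key, method_name)
            )
        params[key] = val
    return params
```

The generated code raises `ApiTypeError` (a `TypeError` subclass) rather than `TypeError` itself, but the filtering logic is the same.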
e54393a2c04fdef57147f5041ec0778967e7b9ce | 2,920 | py | Python | dragon/run.py | Totillity/Dragon | 3c7b57635b2631ef312bac05599b0a9e821716cb | [
"MIT"
] | 2 | 2019-08-14T19:11:40.000Z | 2021-04-15T09:57:35.000Z | dragon/run.py | Totillity/Dragon | 3c7b57635b2631ef312bac05599b0a9e821716cb | [
"MIT"
] | null | null | null | dragon/run.py | Totillity/Dragon | 3c7b57635b2631ef312bac05599b0a9e821716cb | [
"MIT"
] | null | null | null | import os
from pathlib import Path
from dragon.passes import compile_drgn, parse, scan
from dragon.common import DragonError
__all__ = ['run_file', 'compile_file',
'Path']
class CompilingError(DragonError):
pass
def compile_file(path: Path, compiler='clang', delete_c=True):
with path.open("r") as file:
contents = file.read()
try:
unit = compile_drgn(parse(scan(contents)), path)
except DragonError as e:
e.finish('<string>', contents)
raise
for program in unit.programs:
with program.path.with_suffix(".h").open("w") as header:
with program.path.with_suffix(".c").open("w") as source:
base_name = program.path.with_suffix('').name.replace("__", "")
program.generate(base_name, header, source)
c_files = Path(os.path.realpath(__file__)).parent / "std_files"
dragon_c = str(c_files / "dragon.c")
list_c = str(c_files / "list.c")
result = os.system(f"{compiler} -O3 -o {path.with_suffix('')} "
f"{' '.join(str(program.path.with_suffix('.c')) for program in unit.programs)} "
f"{dragon_c} {list_c} "
f"-Wno-parentheses-equality")
if result != 0:
CompilingError("Error during compiling generated C code", 0, (0, 0)).finish(str(path), "")
if delete_c:
for program in unit.programs:
program.path.with_suffix(".c").unlink()
program.path.with_suffix(".h").unlink()
def run_file(path: Path, compiler='clang', delete_c=True, delete_exe=True):
    # Identical to compile_file up to running (and optionally removing) the binary.
    compile_file(path, compiler=compiler, delete_c=delete_c)
    os.system(f"./{path.with_suffix('')}")
    if delete_exe:
        path.with_suffix("").unlink()
| 33.953488 | 103 | 0.592123 | 379 | 2,920 | 4.390501 | 0.197889 | 0.076923 | 0.134615 | 0.151442 | 0.854567 | 0.854567 | 0.854567 | 0.854567 | 0.813702 | 0.813702 | 0 | 0.004596 | 0.254795 | 2,920 | 85 | 104 | 34.352941 | 0.76011 | 0 | 0 | 0.793651 | 0 | 0.031746 | 0.188356 | 0.069178 | 0 | 0 | 0 | 0 | 0 | 1 | 0.031746 | false | 0.031746 | 0.063492 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
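`compile_file` derives every build artifact from the input path with `with_suffix`: the generated header and C source sit next to each program, the executable drops the suffix, and the include-guard base name strips double underscores. A small self-contained sketch of that naming scheme (the helper name is illustrative, not part of dragon):

```python
from pathlib import Path

def artifact_names(source: Path):
    # Mirror of the naming used in compile_file/run_file.
    base_name = source.with_suffix('').name.replace("__", "")
    return {
        'header': source.with_suffix('.h'),   # generated C header
        'source': source.with_suffix('.c'),   # generated C source
        'binary': source.with_suffix(''),     # compiler output path
        'base': base_name,                    # name passed to program.generate
    }
```

This is why `delete_c` can unlink the `.c`/`.h` pair after compilation: their paths are recomputed deterministically from the program path.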
e5cc431cea015f21ebaba9d835816e32643e639a | 24,044 | py | Python | daiquiri/jobs/tests/mixins.py | agy-why/daiquiri | 4d3e2ce51e202d5a8f1df404a0094a4e018dcb4d | [
"Apache-2.0"
] | 14 | 2018-12-23T18:35:02.000Z | 2021-12-15T04:55:12.000Z | daiquiri/jobs/tests/mixins.py | agy-why/daiquiri | 4d3e2ce51e202d5a8f1df404a0094a4e018dcb4d | [
"Apache-2.0"
] | 40 | 2018-12-20T12:44:05.000Z | 2022-03-21T11:35:20.000Z | daiquiri/jobs/tests/mixins.py | agy-why/daiquiri | 4d3e2ce51e202d5a8f1df404a0094a4e018dcb4d | [
"Apache-2.0"
] | 5 | 2019-05-16T08:03:35.000Z | 2021-08-23T20:03:11.000Z | import iso8601
import xml.etree.ElementTree as et
from django.urls import reverse
from django.db.models import Q
from django.utils.http import urlencode
from test_generator.core import TestMixin
class SyncTestMixin(TestMixin):
def _test_get_job_list_create(self, username):
'''
        GET /{jobs} with an urlencoded set of KEY=VALUE as query params
creates a job with these parameters and returns a VOTable
'''
for new_job in self.get_parameter_for_new_jobs(username):
url = reverse(self.url_names['list']) + '?' + urlencode(new_job)
response = self.client.get(url)
self.assertEqual(response.status_code, 200, msg=(
('username', username),
('url', url),
('data', new_job),
('status_code', response.status_code)
))
def _test_post_job_list_create(self, username):
'''
POST /{jobs} with an application/x-www-form-urlencoded set of KEY=VALUE
creates a job with these parameters and returns a VOTable
'''
for new_job in self.get_parameter_for_new_jobs(username):
url = reverse(self.url_names['list'])
response = self.client.post(url, urlencode(new_job), content_type='application/x-www-form-urlencoded')
self.assertEqual(response.status_code, 200, msg=(
('username', username),
('url', url),
('data', new_job),
('status_code', response.status_code)
))
def _test_post_job_list_create_internal(self, username):
'''
POST /{jobs} with an application/x-www-form-urlencoded set of KEY=VALUE
creates a job with these parameters and redirects to /{jobs}/{job-id}
as 303.
'''
for new_job in self.get_parameter_for_new_jobs_internal(username):
url = reverse(self.url_names['list'])
response = self.client.post(url, urlencode(new_job), content_type='application/x-www-form-urlencoded')
self.assertEqual(response.status_code, 400 if username == 'anonymous' else 200, msg=(
('username', username),
('url', url),
('data', new_job),
('status_code', response.status_code)
))
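All of the create tests above submit job parameters as an `application/x-www-form-urlencoded` body built with `urlencode`. For flat string dicts, the Django helper used here behaves like the stdlib function it wraps, as this small illustration shows (the parameter names are made up for the example):

```python
from urllib.parse import urlencode  # django.utils.http.urlencode wraps this

# A hypothetical KEY=VALUE job description, as used by the sync/async create tests.
new_job = {'QUERY': 'SELECT 1', 'LANG': 'ADQL'}
body = urlencode(new_job)  # spaces become '+', pairs joined with '&'
```

The resulting string is what the test client posts with `content_type='application/x-www-form-urlencoded'`.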
class AsyncTestMixin(TestMixin):
uws_ns = '{http://www.ivoa.net/xml/UWS/v1.0}'
def _test_get_job_list_xml(self, username):
'''
GET /{jobs} returns the job list as <uws:jobs> xml element. The archived
jobs are not returned.
'''
url = reverse(self.url_names['list'])
response = self.client.get(url)
self.assertEqual(response.status_code, 200)
if username == 'user':
root = et.fromstring(response.content)
            children = list(root)
self.assertEqual(root.tag, self.uws_ns + 'jobs')
self.assertEqual(len(children), len(self.jobs))
for node in children:
self.assertEqual(node.tag, self.uws_ns + 'jobref')
def _test_get_job_list_xml_phase(self, username):
'''
GET /{jobs}?PHASE=<phase> returns the filtered joblist as <jobs>
element.
'''
url = reverse(self.url_names['list']) + '?PHASE=PENDING&PHASE=ARCHIVED'
response = self.client.get(url)
self.assertEqual(response.status_code, 200)
if username == 'user':
root = et.fromstring(response.content)
            self.assertEqual(len(root), len(self.jobs.filter(Q(phase='PENDING') | Q(phase='ARCHIVED'))))
def _test_get_job_list_xml_after(self, username):
'''
GET /{jobs}?AFTER=2014-09-10T10:01:02.000 returns jobs with startTimes
after the given [std:iso8601] time in UTC. The archived jobs are not
returned.
'''
url = reverse(self.url_names['list']) + '?AFTER=2017-01-01T01:00:00Z'
response = self.client.get(url)
self.assertEqual(response.status_code, 200)
if username == 'user':
root = et.fromstring(response.content)
            self.assertEqual(len(root), len(self.jobs.filter(creation_time__gte='2017-01-01T01:00:00Z')))
def _test_get_job_list_xml_last(self, username):
'''
GET /{jobs}?LAST=100 returns the given number of most recent jobs
ordered by ascending startTimes. The archived jobs are not returned.
'''
url = reverse(self.url_names['list']) + '?LAST=3'
response = self.client.get(url)
self.assertEqual(response.status_code, 200)
if username == 'user':
root = et.fromstring(response.content)
            self.assertEqual(len(root), 3)
def _test_post_job_list_create(self, username):
'''
POST /{jobs} with an application/x-www-form-urlencoded set of KEY=VALUE
creates a job with these parameters and redirects to /{jobs}/{job-id}
as 303.
'''
for new_job in self.get_parameter_for_new_jobs(username):
url = reverse(self.url_names['list'])
response = self.client.post(url, urlencode(new_job), content_type='application/x-www-form-urlencoded')
self.assertEqual(response.status_code, 303, msg=(
('username', username),
('url', url),
('data', new_job),
('status_code', response.status_code),
('content', response.content)
))
response = self.client.get(response['Location'] + '/phase')
self.assertEqual(response.status_code, 200)
self.assertEqual(response.content.decode(), 'PENDING')
def _test_post_job_list_create_internal(self, username):
'''
POST /{jobs} with an application/x-www-form-urlencoded set of KEY=VALUE
creates a job with these parameters and redirects to /{jobs}/{job-id}
as 303.
'''
for new_job in self.get_parameter_for_new_jobs_internal(username):
url = reverse(self.url_names['list'])
response = self.client.post(url, urlencode(new_job), content_type='application/x-www-form-urlencoded')
self.assertEqual(response.status_code, 400 if username == 'anonymous' else 303, msg=(
('username', username),
('url', url),
('data', new_job),
('status_code', response.status_code),
('content', response.content)
))
def _test_post_job_list_create_run(self, username):
'''
POST /{jobs} with an application/x-www-form-urlencoded set of
KEY=VALUE and additionally PHASE=RUN to an non-existing {job-id} creates
a job with these parameters and runs it.
'''
for new_job in self.get_parameter_for_new_jobs(username):
new_job.update({'PHASE': 'RUN'})
url = reverse(self.url_names['list'])
response = self.client.post(url, urlencode(new_job), content_type='application/x-www-form-urlencoded')
self.assertEqual(response.status_code, 303)
response = self.client.get(response['Location'] + '/phase')
self.assertEqual(response.status_code, 200)
self.assertEqual(response.content.decode(), 'COMPLETED')
def _test_get_job_detail(self, username):
'''
GET /{jobs}/{job-id} returns a job as <uws:job> xml element.
'''
for job in self.jobs:
url = reverse(self.url_names['detail'], kwargs={'pk': job.pk})
response = self.client.get(url)
if username == 'user':
self.assertEqual(response.status_code, 200)
root = et.fromstring(response.content)
self.assertEqual(root.tag, self.uws_ns + 'job')
else:
self.assertEqual(response.status_code, 404)
def _test_get_job_results(self, username):
'''
GET /{jobs}/{job-id}/results returns any results of the job {job-id} as
<uws:results> xml element.
'''
for job in self.jobs:
url = reverse(self.url_names['results'], kwargs={'pk': job.pk})
response = self.client.get(url)
if username == 'user':
self.assertEqual(response.status_code, 200)
root = et.fromstring(response.content)
self.assertEqual(root.tag, self.uws_ns + 'results')
else:
self.assertEqual(response.status_code, 404)
def _test_get_job_result(self, username):
'''
        GET /{jobs}/{job-id}/results/result returns the result for the job {job-id}.
'''
for job in self.jobs:
url = reverse(self.url_names['result'], kwargs={
'pk': job.pk,
'result': 'result'
})
response = self.client.get(url)
if username == 'user':
if job.phase == 'COMPLETED':
self.assertEqual(response.status_code, 200)
else:
self.assertEqual(response.status_code, 400)
else:
self.assertEqual(response.status_code, 404)
def _test_get_job_parameters(self, username):
'''
GET /{jobs}/{job-id}/parameters returns any parameters for the job
{job-id} as <uws:parameters> xml element.
'''
for job in self.jobs:
url = reverse(self.url_names['parameters'], kwargs={'pk': job.pk})
response = self.client.get(url)
if username == 'user':
self.assertEqual(response.status_code, 200)
root = et.fromstring(response.content)
self.assertEqual(root.tag, self.uws_ns + 'parameters')
else:
self.assertEqual(response.status_code, 404)
def _test_get_job_error(self, username):
'''
GET /{jobs}/{job-id}/error returns any error message associated with
{job-id} as text.
'''
        for job in self.jobs:
            url = reverse(self.url_names['error'], kwargs={'pk': job.pk})
            response = self.client.get(url)
            if username == 'user':
                self.assertEqual(response.status_code, 200)
            else:
                self.assertEqual(response.status_code, 404)

    def _test_get_job_delete(self, username):
        '''
        DELETE /{jobs}/{job-id} sets the job phase to ARCHIVED, deletes the
        results and redirects to /{jobs}/{job-id} as 303.
        '''
        for job in self.jobs:
            url = reverse(self.url_names['detail'], kwargs={'pk': job.pk})
            response = self.client.delete(url)
            if username == 'user':
                redirect_url = 'http://testserver' + reverse(self.url_names['detail'], kwargs={'pk': job.pk})
                self.assertRedirects(response, redirect_url, status_code=303)
                self.assertEqual(self.jobs.get(pk=job.pk).phase, 'ARCHIVED')
            else:
                self.assertEqual(response.status_code, 404)

    def _test_post_job_delete(self, username):
        '''
        POST /{jobs}/{job-id} with ACTION=DELETE sets the job phase to ARCHIVED,
        deletes the results and redirects to /{jobs}/{job-id} as 303.
        '''
        for job in self.jobs:
            url = reverse(self.url_names['detail'], kwargs={'pk': job.pk})
            response = self.client.post(url, urlencode({'ACTION': 'DELETE'}), content_type='application/x-www-form-urlencoded')
            if username == 'user':
                redirect_url = 'http://testserver' + reverse(self.url_names['detail'], kwargs={'pk': job.pk})
                self.assertRedirects(response, redirect_url, status_code=303)
                self.assertEqual(self.jobs.get(pk=job.pk).phase, 'ARCHIVED')
            else:
                self.assertEqual(response.status_code, 404)

    def _test_post_job_invalid(self, username):
        '''
        POST /{jobs}/{job-id} with an invalid ACTION returns 400.
        '''
        for job in self.jobs:
            url = reverse(self.url_names['detail'], kwargs={'pk': job.pk})
            response = self.client.post(url, urlencode({'ACTION': 'not_valid'}), content_type='application/x-www-form-urlencoded')
            if username == 'user':
                self.assertEqual(response.status_code, 400)
            else:
                self.assertEqual(response.status_code, 404)

    def _test_post_job_missing(self, username):
        '''
        POST /{jobs}/{job-id} without ACTION returns 400.
        '''
        for job in self.jobs:
            url = reverse(self.url_names['detail'], kwargs={'pk': job.pk})
            response = self.client.post(url, content_type='application/x-www-form-urlencoded')
            if username == 'user':
                self.assertEqual(response.status_code, 400)
            else:
                self.assertEqual(response.status_code, 404)

    def _test_get_job_destruction(self, username):
        '''
        GET /{jobs}/{job-id}/destruction returns the destruction instant for
        {job-id} as [std:iso8601].
        '''
        for job in self.jobs:
            url = reverse(self.url_names['destruction'], kwargs={'pk': job.pk})
            response = self.client.get(url)
            if username == 'user':
                self.assertEqual(response.status_code, 200)
                if job.destruction_time:
                    destruction_time = iso8601.parse_date(response.content.decode())
                    self.assertEqual(destruction_time, job.destruction_time)
                else:
                    self.assertEqual(response.content.decode(), '')
            else:
                self.assertEqual(response.status_code, 404)

    def _test_post_job_destruction(self, username):
        '''
        POST /{jobs}/{job-id}/destruction with DESTRUCTION={std:iso8601}
        (application/x-www-form-urlencoded) sets the destruction instant for
        {job-id} and redirects to /{jobs}/{job-id} as 303.
        '''
        destruction_time = '2016-01-01T00:00:00'
        for job in self.jobs:
            url = reverse(self.url_names['destruction'], kwargs={'pk': job.pk})
            response = self.client.post(url, urlencode({'DESTRUCTION': destruction_time}), content_type='application/x-www-form-urlencoded')
            if username == 'user':
                redirect_url = 'http://testserver' + reverse(self.url_names['detail'], kwargs={'pk': job.pk})
                self.assertRedirects(response, redirect_url, status_code=303)
                self.assertEqual(
                    self.jobs.get(pk=job.pk).destruction_time,
                    iso8601.parse_date(destruction_time)
                )
            else:
                self.assertEqual(response.status_code, 404)

    def _test_post_job_destruction_invalid(self, username):
        '''
        POST /{jobs}/{job-id}/destruction with an invalid DESTRUCTION returns 400.
        '''
        for job in self.jobs:
            url = reverse(self.url_names['destruction'], kwargs={'pk': job.pk})
            response = self.client.post(url, urlencode({'DESTRUCTION': 'not_a_date'}), content_type='application/x-www-form-urlencoded')
            if username == 'user':
                self.assertEqual(response.status_code, 400)
            else:
                self.assertEqual(response.status_code, 404)

    def _test_post_job_destruction_missing(self, username):
        '''
        POST /{jobs}/{job-id}/destruction without DESTRUCTION returns 400.
        '''
        for job in self.jobs:
            url = reverse(self.url_names['destruction'], kwargs={'pk': job.pk})
            response = self.client.post(url, content_type='application/x-www-form-urlencoded')
            if username == 'user':
                self.assertEqual(response.status_code, 400)
            else:
                self.assertEqual(response.status_code, 404)

    def _test_get_job_executionduration(self, username):
        '''
        GET /{jobs}/{job-id}/executionduration returns the maximum execution
        duration of {job-id} as an integer.
        '''
        for job in self.jobs:
            url = reverse(self.url_names['executionduration'], kwargs={'pk': job.pk})
            response = self.client.get(url)
            if username == 'user':
                self.assertEqual(response.status_code, 200)
                self.assertEqual(int(response.content), job.execution_duration)
            else:
                self.assertEqual(response.status_code, 404)

    def _test_post_job_executionduration(self, username):
        '''
        POST /{jobs}/{job-id}/executionduration with EXECUTIONDURATION={int}
        sets the maximum execution duration of {job-id} and redirects to
        /{jobs}/{job-id} as 303.
        '''
        execution_duration = 60
        for job in self.jobs:
            url = reverse(self.url_names['executionduration'], kwargs={'pk': job.pk})
            response = self.client.post(url, urlencode({'EXECUTIONDURATION': execution_duration}), content_type='application/x-www-form-urlencoded')
            if username == 'user':
                redirect_url = 'http://testserver' + reverse(self.url_names['detail'], kwargs={'pk': job.pk})
                self.assertRedirects(response, redirect_url, status_code=303)
                self.assertEqual(self.jobs.get(pk=job.pk).execution_duration, execution_duration)
            else:
                self.assertEqual(response.status_code, 404)

    def _test_post_job_executionduration_invalid(self, username):
        '''
        POST /{jobs}/{job-id}/executionduration with an invalid EXECUTIONDURATION returns 400.
        '''
        for job in self.jobs:
            url = reverse(self.url_names['executionduration'], kwargs={'pk': job.pk})
            response = self.client.post(url, urlencode({'EXECUTIONDURATION': 'not_an_integer'}), content_type='application/x-www-form-urlencoded')
            if username == 'user':
                self.assertEqual(response.status_code, 400)
            else:
                self.assertEqual(response.status_code, 404)

    def _test_post_job_executionduration_missing(self, username):
        '''
        POST /{jobs}/{job-id}/executionduration without EXECUTIONDURATION returns 400.
        '''
        for job in self.jobs:
            url = reverse(self.url_names['executionduration'], kwargs={'pk': job.pk})
            response = self.client.post(url, content_type='application/x-www-form-urlencoded')
            if username == 'user':
                self.assertEqual(response.status_code, 400)
            else:
                self.assertEqual(response.status_code, 404)

    def _test_get_job_phase(self, username):
        '''
        GET /{jobs}/{job-id}/phase returns the phase of job {job-id} as one of
        the fixed strings.
        '''
        for job in self.jobs:
            url = reverse(self.url_names['phase'], kwargs={'pk': job.pk})
            response = self.client.get(url)
            if username == 'user':
                self.assertEqual(response.status_code, 200)
                self.assertEqual(response.content, job.phase.encode())
            else:
                self.assertEqual(response.status_code, 404)

    def _test_post_job_phase_run(self, username):
        '''
        POST /{jobs}/{job-id}/phase with PHASE=RUN runs the job {job-id} and
        redirects to /{jobs}/{job-id} as 303.
        '''
        for job in self.jobs:
            url = reverse(self.url_names['phase'], kwargs={'pk': job.pk})
            response = self.client.post(url, urlencode({'PHASE': 'RUN'}), content_type='application/x-www-form-urlencoded')
            if username == 'user':
                if job.phase == 'PENDING':
                    redirect_url = 'http://testserver' + reverse(self.url_names['detail'], kwargs={'pk': job.pk})
                    self.assertRedirects(response, redirect_url, status_code=303)
                    self.assertEqual(self.jobs.get(pk=job.pk).phase, 'COMPLETED')
                else:
                    self.assertEqual(response.status_code, 400)
            else:
                self.assertEqual(response.status_code, 404)

    def _test_post_job_phase_abort(self, username):
        '''
        POST /{jobs}/{job-id}/phase with PHASE=ABORT aborts the job {job-id} and
        redirects to /{jobs}/{job-id} as 303.
        '''
        for job in self.jobs:
            url = reverse(self.url_names['phase'], kwargs={'pk': job.pk})
            response = self.client.post(url, urlencode({'PHASE': 'ABORT'}), content_type='application/x-www-form-urlencoded')
            if username == 'user':
                redirect_url = 'http://testserver' + reverse(self.url_names['detail'], kwargs={'pk': job.pk})
                self.assertRedirects(response, redirect_url, status_code=303)
                if job.phase in ['QUEUED', 'EXECUTING']:
                    self.assertEqual(self.jobs.get(pk=job.pk).phase, 'ABORTED')
                else:
                    self.assertEqual(self.jobs.get(pk=job.pk).phase, job.phase)
            else:
                self.assertEqual(response.status_code, 404)

    def _test_post_job_phase_invalid(self, username):
        '''
        POST /{jobs}/{job-id}/phase with an invalid PHASE returns 400.
        '''
        for job in self.jobs:
            url = reverse(self.url_names['phase'], kwargs={'pk': job.pk})
            response = self.client.post(url, urlencode({'PHASE': 'invalid'}), content_type='application/x-www-form-urlencoded')
            if username == 'user':
                self.assertEqual(response.status_code, 400)
            else:
                self.assertEqual(response.status_code, 404)

    def _test_post_job_phase_missing(self, username):
        '''
        POST /{jobs}/{job-id}/phase without PHASE returns 400.
        '''
        for job in self.jobs:
            url = reverse(self.url_names['phase'], kwargs={'pk': job.pk})
            response = self.client.post(url, content_type='application/x-www-form-urlencoded')
            if username == 'user':
                self.assertEqual(response.status_code, 400)
            else:
                self.assertEqual(response.status_code, 404)

    def _test_post_job_phase_unsupported(self, username):
        '''
        POST /{jobs}/{job-id}/phase with PHASE=unsupported returns 400.
        '''
        for job in self.jobs:
            url = reverse(self.url_names['phase'], kwargs={'pk': job.pk})
            response = self.client.post(url, urlencode({'PHASE': 'unsupported'}), content_type='application/x-www-form-urlencoded')
            if username == 'user':
                self.assertEqual(response.status_code, 400)
            else:
                self.assertEqual(response.status_code, 404)

    def _test_get_job_quote(self, username):
        '''
        GET /{jobs}/{job-id}/quote returns the quote for {job-id} as [std:iso8601].
        '''
        for job in self.jobs:
            url = reverse(self.url_names['quote'], kwargs={'pk': job.pk})
            response = self.client.get(url)
            if username == 'user':
                self.assertEqual(response.status_code, 200)
                self.assertEqual(response.content.decode(), '')
            else:
                self.assertEqual(response.status_code, 404)

    def _test_get_job_owner(self, username):
        '''
        GET /{jobs}/{job-id}/owner returns the owner of the job {job-id} as an
        appropriate identifier.
        '''
        for job in self.jobs:
            url = reverse(self.url_names['owner'], kwargs={'pk': job.pk})
            response = self.client.get(url)
            if username == 'user':
                self.assertEqual(response.status_code, 200)
                self.assertEqual(response.content.decode(), job.owner.username)
            else:
                self.assertEqual(response.status_code, 404)
# File: TrainInfoProcess/src/infoModel/__init__.py
from TrainInfoProcess.src.infoModel.Flow import Flow
from TrainInfoProcess.src.infoModel.RailLine import RailLine
from TrainInfoProcess.src.infoModel.StationInfo import StationInfo
from TrainInfoProcess.src.infoModel.TrainNumberInfo import TrainNumberInfo | 63.75 | 74 | 0.894118 | 28 | 255 | 8.142857 | 0.321429 | 0.350877 | 0.403509 | 0.561404 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.058824 | 255 | 4 | 74 | 63.75 | 0.95 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
e5fc0224c45bd3dfd221b6895b91e8196abe7dd1 | 94 | py | Python | backend/db/driver.py | artontech/ArtonFileManager | b099c5294ab731b0a0f1eb7dbe35397df4515863 | [
"Apache-2.0"
] | 1 | 2020-11-17T12:45:47.000Z | 2020-11-17T12:45:47.000Z | backend/db/driver.py | artontech/ArtonFileManager | b099c5294ab731b0a0f1eb7dbe35397df4515863 | [
"Apache-2.0"
] | null | null | null | backend/db/driver.py | artontech/ArtonFileManager | b099c5294ab731b0a0f1eb7dbe35397df4515863 | [
"Apache-2.0"
] | null | null | null | class Driver():
    def init(self): pass

    def open(self): pass

    def close(self): pass
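`Driver` is a bare interface of no-op methods. A minimal sketch of how a concrete driver could fill in the stubs — the `SqliteDriver` subclass and its `path` parameter are hypothetical, not part of the repository; the stub base class is repeated here so the snippet runs standalone:

```python
import sqlite3


class Driver():
    def init(self): pass

    def open(self): pass

    def close(self): pass


class SqliteDriver(Driver):
    """Illustrative subclass backed by the stdlib sqlite3 module."""

    def __init__(self, path=':memory:'):
        self.path = path
        self.conn = None

    def open(self):
        # Establish the connection; kept on the instance for reuse.
        self.conn = sqlite3.connect(self.path)

    def close(self):
        # Release the connection and reset the handle.
        if self.conn is not None:
            self.conn.close()
            self.conn = None


driver = SqliteDriver()
driver.open()
driver.conn.execute('CREATE TABLE t (x INTEGER)')
driver.close()
```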
# File: allure-pytest/test/status/base_setup_status_test.py
import pytest
@pytest.fixture
def failed_fixture():
    assert False


def test_failed_fixture(failed_fixture):
    """
    >>> allure_report = getfixture('allure_report')
    >>> assert_that(allure_report,
    ...     has_test_case('test_failed_fixture',
    ...         with_status('failed'),
    ...         has_status_details(
    ...             with_message_contains("AssertionError"),
    ...             with_trace_contains("def failed_fixture():")),
    ...         has_container(allure_report,
    ...             has_before('failed_fixture',
    ...                 with_status('failed'),
    ...                 has_status_details(
    ...                     with_message_contains("AssertionError"),
    ...                     with_trace_contains("failed_fixture"))))))
    """
    pass


@pytest.fixture
def broken_fixture():
    raise IndexError


def test_broken_fixture(broken_fixture):
    """
    >>> allure_report = getfixture('allure_report')
    >>> assert_that(allure_report,
    ...     has_test_case('test_broken_fixture',
    ...         with_status('broken'),
    ...         has_status_details(
    ...             with_message_contains("IndexError"),
    ...             with_trace_contains("def broken_fixture():")),
    ...         has_container(allure_report,
    ...             has_before('broken_fixture',
    ...                 with_status('broken'),
    ...                 has_status_details(
    ...                     with_message_contains("IndexError"),
    ...                     with_trace_contains("broken_fixture"))))))
    """
    pass


@pytest.fixture
def skip_fixture():
    pytest.skip()


def test_skip_fixture(skip_fixture):
    """
    >>> allure_report = getfixture('allure_report')
    >>> assert_that(allure_report,
    ...     has_test_case('test_skip_fixture',
    ...         with_status('skipped'),
    ...         has_status_details(
    ...             with_message_contains("Skipped: <Skipped instance>")),
    ...         has_container(allure_report,
    ...             has_before('skip_fixture',
    ...                 with_status('skipped'),
    ...                 has_status_details(
    ...                     with_message_contains("Skipped: <Skipped instance>"),
    ...                     with_trace_contains("skip_fixture"))))))
    """


@pytest.fixture
def pytest_fail_fixture():
    pytest.fail()


def test_pytest_fail_fixture(pytest_fail_fixture):
    """
    >>> allure_report = getfixture('allure_report')
    >>> assert_that(allure_report,
    ...     has_test_case('test_pytest_fail_fixture',
    ...         with_status('failed'),
    ...         has_status_details(
    ...             with_message_contains("Failed: <Failed instance>"),
    ...             with_trace_contains("def pytest_fail_fixture():")),
    ...         has_container(allure_report,
    ...             has_before('pytest_fail_fixture',
    ...                 with_status('failed'),
    ...                 has_status_details(
    ...                     with_message_contains("Failed: <Failed instance>"),
    ...                     with_trace_contains("pytest_fail_fixture"))))))
    """
    pass


@pytest.fixture
def pytest_fail_with_reason_fixture():
    pytest.fail("Fail message")


def test_pytest_fail_with_reason_fixture(pytest_fail_with_reason_fixture):
    """
    >>> allure_report = getfixture('allure_report')
    >>> assert_that(allure_report,
    ...     has_test_case('test_pytest_fail_with_reason_fixture',
    ...         with_status('failed'),
    ...         has_status_details(
    ...             with_message_contains("Fail message"),
    ...             with_trace_contains("def pytest_fail_with_reason_fixture():")),
    ...         has_container(allure_report,
    ...             has_before('pytest_fail_with_reason_fixture',
    ...                 with_status('failed'),
    ...                 has_status_details(
    ...                     with_message_contains("Fail message"),
    ...                     with_trace_contains("pytest_fail_with_reason_fixture"))))))
    """
    pass
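The pattern the five tests above exercise follows the pytest/allure convention: a fixture that raises `AssertionError` yields a *failed* setup, any other exception yields *broken*, and `pytest.skip()`/`pytest.fail()` raise dedicated control-flow exceptions. A stdlib-only sketch of that status mapping (illustrative, not the allure-pytest internals):

```python
def status_for(exc):
    """Map an exception raised during fixture setup to an allure-style status."""
    if exc is None:
        return 'passed'
    if isinstance(exc, AssertionError):
        # assert failures (and pytest.fail, which subclasses an assertion-like
        # Failed exception) report as 'failed'
        return 'failed'
    # any other exception type (IndexError, ValueError, ...) is 'broken'
    return 'broken'


print(status_for(None))              # passed
print(status_for(AssertionError()))  # failed
print(status_for(IndexError()))      # broken
```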
# File: quantdsl/lib/storage1.py
from quantdsl.semantics import Wait, Choice, inline, Settlement, ForwardMarket
def GasStorage(start, end, commodity_name, quantity, limit, step):
    if (start < end) and (limit > 0):
        if quantity <= 0:
            # Empty store: hold, or inject one unit.
            return Wait(start, Choice(
                Continue(start, end, commodity_name, quantity, limit, step),
                Inject(start, end, commodity_name, quantity, limit, step, 1),
            ))
        elif quantity >= limit:
            # Full store: hold, or withdraw one unit.
            return Wait(start, Choice(
                Continue(start, end, commodity_name, quantity, limit, step),
                Inject(start, end, commodity_name, quantity, limit, step, -1),
            ))
        else:
            # Partially full: hold, inject, or withdraw.
            return Wait(start, Choice(
                Continue(start, end, commodity_name, quantity, limit, step),
                Inject(start, end, commodity_name, quantity, limit, step, 1),
                Inject(start, end, commodity_name, quantity, limit, step, -1),
            ))
    else:
        return 0


@inline
def Continue(start, end, commodity_name, quantity, limit, step):
    GasStorage(start + step, end, commodity_name, quantity, limit, step)


@inline
def Inject(start, end, commodity_name, quantity, limit, step, vol):
    Continue(start, end, commodity_name, quantity + vol, limit, step) - \
    Settlement(start, vol * ForwardMarket(start, commodity_name))
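The contract above is a Bellman-style recursion: at each step the holder picks the best of hold / inject / withdraw, with injections settled at the forward price. A toy deterministic analogue in plain Python (illustrative only — not quantdsl, which evaluates this over stochastic price simulations; the flat `price` and unit step are assumptions):

```python
def storage_value(t, end, quantity, limit, price=10.0, step=1):
    """Deterministic value of a toy storage contract with a flat price curve."""
    if not (t < end and limit > 0):
        return 0.0
    # Option 1: hold current inventory.
    choices = [storage_value(t + step, end, quantity, limit, price, step)]
    if quantity < limit:
        # Option 2: inject one unit now (pay the price today).
        choices.append(storage_value(t + step, end, quantity + 1, limit, price, step) - price)
    if quantity > 0:
        # Option 3: withdraw and sell one unit now (receive the price today).
        choices.append(storage_value(t + step, end, quantity - 1, limit, price, step) + price)
    return max(choices)


# With a flat curve and no discounting, the optimum is simply to sell the
# initial inventory: one unit at price 10.0.
print(storage_value(0, 3, 1, 2))  # 10.0
```

With a flat curve there is no spread to capture, so the only value comes from liquidating what is already in the store; in quantdsl proper, value arises from seasonal price differences in the simulated forward curve.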
# File: sympy/ntheory/__init__.py
"""
Number theory module (primes, etc)
"""
from .generate import (
    nextprime,
    prevprime,
    prime,
    primepi,
    primerange,
    randprime,
    Sieve,
    sieve,
    primorial,
    cycle_length,
    composite,
    compositepi,
)
from .primetest import isprime
from .factor_ import (
    divisors,
    proper_divisors,
    factorint,
    multiplicity,
    perfect_power,
    pollard_pm1,
    pollard_rho,
    primefactors,
    totient,
    trailing,
    divisor_count,
    proper_divisor_count,
    divisor_sigma,
    factorrat,
    reduced_totient,
    primenu,
    primeomega,
    mersenne_prime_exponent,
    is_perfect,
    is_mersenne_prime,
    is_abundant,
    is_deficient,
    is_amicable,
    abundance,
    dra,
    drm,
)
from .partitions_ import npartitions
from .residue_ntheory import (
    is_primitive_root,
    is_quad_residue,
    legendre_symbol,
    jacobi_symbol,
    n_order,
    sqrt_mod,
    quadratic_residues,
    primitive_root,
    nthroot_mod,
    is_nthpow_residue,
    sqrt_mod_iter,
    mobius,
    discrete_log,
    quadratic_congruence,
    polynomial_congruence,
)
from .multinomial import (
    binomial_coefficients,
    binomial_coefficients_list,
    multinomial_coefficients,
)
from .continued_fraction import (
    continued_fraction_periodic,
    continued_fraction_iterator,
    continued_fraction_reduce,
    continued_fraction_convergents,
    continued_fraction,
)
from .egyptian_fraction import egyptian_fraction

__all__ = [
    "nextprime",
    "prevprime",
    "prime",
    "primepi",
    "primerange",
    "randprime",
    "Sieve",
    "sieve",
    "primorial",
    "cycle_length",
    "composite",
    "compositepi",
    "isprime",
    "divisors",
    "proper_divisors",
    "factorint",
    "multiplicity",
    "perfect_power",
    "pollard_pm1",
    "pollard_rho",
    "primefactors",
    "totient",
    "trailing",
    "divisor_count",
    "proper_divisor_count",
    "divisor_sigma",
    "factorrat",
    "reduced_totient",
    "primenu",
    "primeomega",
    "mersenne_prime_exponent",
    "is_perfect",
    "is_mersenne_prime",
    "is_abundant",
    "is_deficient",
    "is_amicable",
    "abundance",
    "dra",
    "drm",
    "npartitions",
    "is_primitive_root",
    "is_quad_residue",
    "legendre_symbol",
    "jacobi_symbol",
    "n_order",
    "sqrt_mod",
    "quadratic_residues",
    "primitive_root",
    "nthroot_mod",
    "is_nthpow_residue",
    "sqrt_mod_iter",
    "mobius",
    "discrete_log",
    "quadratic_congruence",
    "polynomial_congruence",
    "binomial_coefficients",
    "binomial_coefficients_list",
    "multinomial_coefficients",
    "continued_fraction_periodic",
    "continued_fraction_iterator",
    "continued_fraction_reduce",
    "continued_fraction_convergents",
    "continued_fraction",
    "egyptian_fraction",
]
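Two of the most commonly used exports, `isprime` and `factorint`, sketched as pure-Python trial-division implementations (illustrative only — sympy's real versions use far more efficient algorithms such as probabilistic primality tests and Pollard's rho):

```python
def isprime(n):
    """Trial-division primality check."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True


def factorint(n):
    """Return {prime: exponent} for n, like sympy.ntheory.factorint."""
    factors = {}
    p = 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:  # remaining cofactor is prime
        factors[n] = factors.get(n, 0) + 1
    return factors


print(isprime(97))     # True
print(factorint(360))  # {2: 3, 3: 2, 5: 1}
```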
# -*- coding: utf-8 -*-
# File: LIBS/csv_pivotingtable_catchmentpop.py
"""
Created on Wed Apr 3 17:17:08 2019
@author: tais
"""
# Import libs
import os
import pandas as pd
import numpy as np
def GetCatchmentCumulPopByISO(in_file, in_sep=';', out_file='', out_sep='', col_prefix="ISO", df_return=False):
    """Get one line per health facility with cumulative population per isochrone"""
    # Parameters for output file (path and sep)
    if out_file == '':
        path, ext = os.path.splitext(in_file)
        out_file = "%s_clean%s" % (path, ext)
    if out_sep == '':
        out_sep = in_sep
    # Import input csv
    df = pd.read_csv(in_file, sep=in_sep)
    # Create two new columns with the ID of health facility and isochrone value
    df['HF_cat'] = df.apply(lambda row: int(row['label'].split(';')[0].split(" ")[-1]), axis=1)
    df['ISO_cat'] = df.apply(lambda row: int(row['label'].split(';')[1].split(" ")[-1]), axis=1)
    # Keep only required columns and sort by HF id and isochrone value
    df = df[['HF_cat', 'ISO_cat', 'sum']]
    df.sort_values(['HF_cat', 'ISO_cat'], inplace=True)
    # Compute cumulated population by increasing isochrone for each HF
    cumul_by_HF = pd.Series([])
    list_of_HF = list(set(df['HF_cat'].values))
    for HF in list_of_HF:
        df_HF = df.loc[df['HF_cat'] == HF]
        a = df_HF['sum'].cumsum(axis=0)
        cumul_by_HF = cumul_by_HF.add(a, fill_value=0)
    df['cumul_sum'] = cumul_by_HF
    # Pivot the table
    df_pivot = pd.pivot_table(df, values='cumul_sum', index='HF_cat', columns='ISO_cat', aggfunc=np.min, fill_value=0)
    df_pivot.rename(columns=lambda x: '%s_%s' % (col_prefix, x), inplace=True)
    # Add column with total population
    ISO_column_name = [x for x in list(df_pivot)]
    df_pivot['%s_TOT' % col_prefix] = df_pivot.iloc[:, -1]
    ISO_column_name.append('%s_TOT' % col_prefix)
    # Add percentage of population
    for name in ISO_column_name:
        iso_value = name[len(col_prefix) + 1:]
        df_pivot['%s_prct%s' % (col_prefix, iso_value)] = (df_pivot['%s' % name] / df_pivot['%s_TOT' % col_prefix]) * 100
    # Export table as csv
    df_pivot.to_csv(out_file, sep=out_sep)
    # Return
    if df_return:
        return df_pivot


def GetCatchmentPopByISO(in_file, in_sep=';', out_file='', out_sep='', col_prefix="ISO", df_return=False):
    """Get one line per health facility with population per isochrone"""
    # Parameters for output file (path and sep)
    if out_file == '':
        path, ext = os.path.splitext(in_file)
        out_file = "%s_clean%s" % (path, ext)
    if out_sep == '':
        out_sep = in_sep
    # Import input csv
    df = pd.read_csv(in_file, sep=in_sep)
    # Create two new columns with the ID of health facility and isochrone value
    df['HF_cat'] = df.apply(lambda row: int(row['label'].split(';')[0].split(" ")[-1]), axis=1)
    df['ISO_cat'] = df.apply(lambda row: int(row['label'].split(';')[1].split(" ")[-1]), axis=1)
    # Keep only required columns and sort by HF id and isochrone value
    df = df[['HF_cat', 'ISO_cat', 'sum']]
    df.sort_values(['HF_cat', 'ISO_cat'], inplace=True)
    # Pivot the table
    df_pivot = pd.pivot_table(df, values='sum', index='HF_cat', columns='ISO_cat', aggfunc=np.min, fill_value=0)
    df_pivot.rename(columns=lambda x: '%s_%s' % (col_prefix, x), inplace=True)
    # Add column with total population
    ISO_column_name = [x for x in list(df_pivot)]
    df_pivot['%s_TOT' % col_prefix] = sum([df_pivot['%s' % name] for name in ISO_column_name])
    ISO_column_name.append('%s_TOT' % col_prefix)
    # Add percentage of population
    for name in ISO_column_name:
        iso_value = name[len(col_prefix) + 1:]
        df_pivot['%s_prct%s' % (col_prefix, iso_value)] = (df_pivot['%s' % name] / df_pivot['%s_TOT' % col_prefix]) * 100
    # Export table as csv
    df_pivot.to_csv(out_file, sep=out_sep)
    # Return
    if df_return:
        return df_pivot
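The core transformation of `GetCatchmentCumulPopByISO` — cumulative population per health facility by increasing isochrone, pivoted to one row per facility — can be sketched without pandas. The toy rows below are hypothetical data, not from the project:

```python
# (health facility id, isochrone minutes, population in that band)
rows = [
    (1, 10, 100), (1, 20, 50), (1, 30, 25),
    (2, 10, 80),  (2, 20, 40),
]

pivot = {}
# Sorting by (facility, isochrone) ensures the running total accumulates
# in order of increasing isochrone, mirroring sort_values + cumsum above.
for hf, iso, pop in sorted(rows):
    running = pivot.setdefault(hf, {})
    prev = max(running.values(), default=0)  # last cumulative value so far
    running['ISO_%s' % iso] = prev + pop

print(pivot[1])  # {'ISO_10': 100, 'ISO_20': 150, 'ISO_30': 175}
print(pivot[2])  # {'ISO_10': 80, 'ISO_20': 120}
```

The last cumulative column per facility is the total (`ISO_TOT` in the pandas version), and each band's percentage is its cumulative value divided by that total.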
# File: python/latex2wolfram/Number.py
from Expression import *
class Number(Expression):
    """
    Class representing a number node in the AST of the MLP
    """

    def __init__(self, number):
        """
        Set the number
        :param number: float
        """
        Expression.__init__(self)
        self.number = number

    def __str__(self):
        """
        to string
        """
        return str(self.number)

    def __len__(self):
        """
        length method
        """
        return 1

    def __iter__(self):
        """
        Get the iterator of the class
        """
        return iter([self])

    def lessThanZero(self):
        return self.number[0] == "-"

    def getNumber(self):
        return self.number

    def getDependencies(self, codeGenerator):
        return []

    def setupEnvironment(self, codeSetup):
        """
        Setup environment
        """
        codeSetup.setupEnvironment(self)

    def generateCode(self, codeGenerator):
        """
        Generate the Wolfram code for this Number
        """
        return codeGenerator.generateCode(self)


class ImaginaryNumber(Expression):
    """
    Class representing an imaginary number node in the AST of the MLP
    """

    def __init__(self, number=None):
        """
        Set the number
        :param number: float
        """
        Expression.__init__(self)
        self.number = number

    def __str__(self):
        """
        to string
        """
        res = "i"
        if self.number:
            res = str(self.number) + res
        return res

    def __len__(self):
        """
        length method
        """
        return 1

    def __iter__(self):
        """
        Get the iterator of the class
        """
        return iter([self])

    def lessThanZero(self):
        return self.number[0] == "-"

    def getNumber(self):
        return self.number

    def getDependencies(self, codeGenerator):
        return []

    def setupEnvironment(self, codeSetup):
        """
        Setup environment
        """
        codeSetup.setupEnvironment(self)

    def generateCode(self, codeGenerator):
        """
        Generate the Wolfram code for this ImaginaryNumber
        """
        return codeGenerator.generateCode(self)


class NapierNumber(Expression):
    """
    Class representing a Napier number (e) node in the AST of the MLP
    """

    def __init__(self, number=None):
        """
        Set the number
        :param number: float
        """
        Expression.__init__(self)
        self.number = number

    def __str__(self):
        """
        to string
        """
        res = "e"
        if self.number:
            res = str(self.number) + res
        return res

    def __len__(self):
        """
        length method
        """
        return 1

    def __iter__(self):
        """
        Get the iterator of the class
        """
        return iter([self])

    def lessThanZero(self):
        return self.number[0] == "-"

    def getNumber(self):
        return self.number

    def getDependencies(self, codeGenerator):
        return []

    def setupEnvironment(self, codeSetup):
        """
        Setup environment
        """
        codeSetup.setupEnvironment(self)

    def generateCode(self, codeGenerator):
        """
        Generate the Wolfram code for this NapierNumber
        """
        return codeGenerator.generateCode(self)
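The three node classes differ only in their `__str__` rendering. A self-contained sketch of that behaviour — the `Expression` base class is stubbed out here because the real one lives in the package's `Expression` module:

```python
class Expression(object):
    """Minimal stand-in for the real AST base class."""
    def __init__(self):
        pass


class Number(Expression):
    def __init__(self, number):
        Expression.__init__(self)
        self.number = number

    def __str__(self):
        return str(self.number)


class ImaginaryNumber(Expression):
    def __init__(self, number=None):
        Expression.__init__(self)
        self.number = number

    def __str__(self):
        # Render as "i", with an optional coefficient prefix.
        res = "i"
        if self.number:
            res = str(self.number) + res
        return res


print(str(Number("-3.5")))        # -3.5
print(str(ImaginaryNumber()))     # i
print(str(ImaginaryNumber("2")))  # 2i
```

`NapierNumber` follows the same shape with `"e"` as the base symbol.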
# File: gc_apps/dv_notify/tests/test_error_messages.py
from __future__ import print_function
from os.path import dirname, join
import json
from unittest import skip
from django.test import TestCase
from django.core import management
from django.core.files import File
from gc_apps.gis_tabular.models import TabularFileInfo, WorldMapTabularLayerInfo
from gc_apps.geo_utils.msg_util import msgt
from gc_apps.dv_notify.metadata_updater import MetadataUpdater,\
ERROR_DV_PAGE_NOT_FOUND,\
ERROR_DV_NO_SERVER_RESPONSE,\
ERROR_DV_METADATA_UPDATE
JSON_JOIN_TEST_FILENAME = join(dirname(__file__), 'input', 'core_data_join.json')
class TestDataverseNotify(TestCase):
    """Test basic params"""

    def setUp(self):
        print('load JSON file + fixtures')
        # Read the JSON test data with a context manager so the handle is closed.
        with open(JSON_JOIN_TEST_FILENAME, 'r') as json_file:
            self.json_join_data_string = json_file.read()
        management.call_command('loaddata', 'test_join_layer-2016-1205.json')

    def tearDown(self):
        print('flush fixtures')
        management.call_command('flush', verbosity=3, interactive=False)

    @skip('test_01_update_dataverse_metadata')
    def test_01_update_dataverse_metadata(self):
        """Test the Dataverse "update metadata" url endpoint.  Only testing
        fail conditions, e.g. can't contact the server, etc."""
        msgt(self.test_01_update_dataverse_metadata.__doc__)

        tab_file_info = TabularFileInfo.objects.get(pk=15)  # Election precinct test

        # --------------------------------------------
        # Attach an actual file -- the path from the fixture is not correct
        # --------------------------------------------
        elect_filepath = join(dirname(__file__),
                              'input',
                              'election_precincts2.csv')
        with open(elect_filepath, 'r') as csv_file:
            tab_file_info.dv_file.save('election_precincts2.csv',
                                       File(csv_file),
                                       save=False)
        self.assertEqual(tab_file_info.id, 15)

        # ------------------------------------------
        # Load successful info
        # ------------------------------------------
        tab_map_info = WorldMapTabularLayerInfo.build_from_worldmap_json(
            tab_file_info,
            json.loads(self.json_join_data_string))
        self.assertTrue(tab_map_info.id is not None)

        # ------------------------------------------
        # Make sure the data loads as expected
        # ------------------------------------------
        self.assertEqual(type(tab_map_info.core_data), dict)
        self.assertEqual(type(tab_map_info.attribute_data), list)
        self.assertEqual(type(tab_map_info.download_links), dict)

        # ------------------------------------------
        # Send a message to a non-existent server
        # ------------------------------------------
        msgt('Send message to non-existent server')
        url_non_existent = 'https://nope.dataverse.harvard.edu'
        success, resp_dict = MetadataUpdater.update_dataverse_with_metadata(
            tab_map_info,
            url_non_existent)
        self.assertEqual(success, False)
        self.assertTrue(resp_dict['message'].startswith(
            ERROR_DV_NO_SERVER_RESPONSE))

        # ------------------------------------------
        # Send a message to a server without the endpoint
        # ------------------------------------------
        msgt('Send message to server without an endpoint')
        url_no_endpoint = 'http://www.harvard.edu'
        success, resp_dict = MetadataUpdater.update_dataverse_with_metadata(
            tab_map_info,
            url_no_endpoint)
        self.assertEqual(success, False)
        self.assertTrue(resp_dict['message'].startswith(
            ERROR_DV_PAGE_NOT_FOUND))

        # ------------------------------------------
        # No token in the request to Dataverse
        # ------------------------------------------
        msgt(('No token in request to Dataverse'
              ' (requires working endpoint at https://dataverse.harvard.edu)'))
        url_no_token = 'https://dataverse.harvard.edu'
        success, resp_dict = MetadataUpdater.update_dataverse_with_metadata(
            tab_map_info,
            url_no_token)
        self.assertEqual(success, False)
        self.assertEqual(resp_dict['message'], 'Token not found in JSON request.')
    def test_02_delete_dataverse_metadata(self):
        """Test the Dataverse "delete metadata" url endpoint.  Only testing
        fail conditions, e.g. can't contact the server, etc."""
        msgt(self.test_02_delete_dataverse_metadata.__doc__)

        tab_file_info = TabularFileInfo.objects.get(pk=15)  # Election precinct test

        # --------------------------------------------
        # Attach an actual file -- the path from the fixture is not correct
        # --------------------------------------------
        elect_filepath = join(dirname(__file__),
                              'input',
                              'election_precincts2.csv')
        with open(elect_filepath, 'r') as csv_file:
            tab_file_info.dv_file.save('election_precincts2.csv',
                                       File(csv_file),
                                       save=False)
        self.assertEqual(tab_file_info.id, 15)

        # ------------------------------------------
        # Load successful info
        # ------------------------------------------
        tab_map_info = WorldMapTabularLayerInfo.build_from_worldmap_json(
            tab_file_info,
            json.loads(self.json_join_data_string))
        self.assertTrue(tab_map_info.id is not None)

        # ------------------------------------------
        # Make sure the data loads as expected
        # ------------------------------------------
        self.assertEqual(type(tab_map_info.core_data), dict)
        self.assertEqual(type(tab_map_info.attribute_data), list)
        self.assertEqual(type(tab_map_info.download_links), dict)

        # ------------------------------------------
        # Send a message to a non-existent server
        # ------------------------------------------
        msgt('Send message to non-existent server')
        url_non_existent = 'https://nope.dataverse.harvard.edu'
        success, err_msg_or_none = MetadataUpdater.delete_dataverse_map_metadata(
            tab_map_info,
            url_non_existent)
        self.assertEqual(success, False)
        self.assertTrue(err_msg_or_none.startswith(
            ERROR_DV_NO_SERVER_RESPONSE))

        # ------------------------------------------
        # Send a message to a server without the endpoint
        # ------------------------------------------
        msgt('Send message to server without an endpoint')
        url_no_endpoint = 'http://www.harvard.edu'
        success, err_msg_or_none = MetadataUpdater.delete_dataverse_map_metadata(
            tab_map_info,
            url_no_endpoint)
        self.assertEqual(success, False)
        self.assertTrue(err_msg_or_none.startswith(
            ERROR_DV_PAGE_NOT_FOUND))

        # ------------------------------------------
        # No token in the request to Dataverse
        # ------------------------------------------
        msgt(('No token in request to Dataverse'
              ' (requires working endpoint at https://dataverse.harvard.edu)'))
        url_no_token = 'https://dataverse.harvard.edu'
        success, err_msg_or_none = MetadataUpdater.delete_dataverse_map_metadata(
            tab_map_info,
            url_no_token)
        self.assertEqual(success, False)
        self.assertEqual(err_msg_or_none, 'Token not found in JSON request.')
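Both tests above depend on `MetadataUpdater` reporting failure through a `(success, payload)` return value rather than an exception. As an illustration only, here is a hypothetical sketch of that contract — the constant's text and the helper name are assumptions for this sketch, not the actual gc_apps implementation:

```python
# Hypothetical sketch of the (success, payload) error contract the tests
# exercise.  The constant value and helper below are illustrative assumptions,
# not the real MetadataUpdater code.
ERROR_DV_NO_SERVER_RESPONSE = 'Error: no response from the Dataverse server'


def update_metadata_sketch(server_url, server_reachable):
    """Mimic the (success, resp_dict) shape asserted in the tests."""
    if not server_reachable:
        # Failure path: flag is False and the message starts with the constant,
        # which is exactly what the tests' startswith() assertions check.
        return False, {'message': '%s: %s' % (ERROR_DV_NO_SERVER_RESPONSE,
                                              server_url)}
    return True, {'message': 'metadata updated'}


success, resp = update_metadata_sketch('https://nope.dataverse.harvard.edu',
                                       server_reachable=False)
```

Returning a tagged tuple keeps the callers (here, the tests) free to assert on the failure text without wrapping every call in try/except.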
c14ab7408f3bd33394019a7d4e96772f4baa81ed | 6,576 | py | Python | day10/SyntaxChecker_test.py | arribajuan/advent-of-code-2021 | 753f5e5ed29ce4bf94d43f9e954c15bb4795135e | [
"MIT"
] | null | null | null | day10/SyntaxChecker_test.py | arribajuan/advent-of-code-2021 | 753f5e5ed29ce4bf94d43f9e954c15bb4795135e | [
"MIT"
] | null | null | null | day10/SyntaxChecker_test.py | arribajuan/advent-of-code-2021 | 753f5e5ed29ce4bf94d43f9e954c15bb4795135e | [
"MIT"
] | null | null | null |

import unittest
import SyntaxChecker as sc


class TestSyntaxChecker(unittest.TestCase):

    def test_syntax_check_01(self):
        result = sc.SyntaxChecker().check_syntax("")
        self.assertEqual(result.is_valid, True)

    def test_syntax_check_02(self):
        result = sc.SyntaxChecker().check_syntax("()")
        self.assertEqual(result.is_valid, True)

    def test_syntax_check_03(self):
        result = sc.SyntaxChecker().check_syntax("([])")
        self.assertEqual(result.is_valid, True)

    def test_syntax_check_04(self):
        result = sc.SyntaxChecker().check_syntax("{()()()}")
        self.assertEqual(result.is_valid, True)

    def test_syntax_check_05(self):
        result = sc.SyntaxChecker().check_syntax("<([{}])>")
        self.assertEqual(result.is_valid, True)

    def test_syntax_check_06(self):
        result = sc.SyntaxChecker().check_syntax("[<>({}){}[([])<>]]")
        self.assertEqual(result.is_valid, True)

    def test_syntax_check_07(self):
        result = sc.SyntaxChecker().check_syntax("(((((((((())))))))))")
        self.assertEqual(result.is_valid, True)

    def test_syntax_check_08(self):
        result = sc.SyntaxChecker().check_syntax("(]")
        self.assertEqual(result.is_valid, False)

    def test_syntax_check_09(self):
        result = sc.SyntaxChecker().check_syntax("{()()()>")
        self.assertEqual(result.is_valid, False)

    def test_syntax_check_10(self):
        result = sc.SyntaxChecker().check_syntax("(((()))}")
        self.assertEqual(result.is_valid, False)

    def test_syntax_check_11(self):
        result = sc.SyntaxChecker().check_syntax("<([]){()}[{}])")
        self.assertEqual(result.is_valid, False)

    def test_syntax_check_12(self):
        result = sc.SyntaxChecker().check_syntax("()(")
        self.assertEqual(result.is_valid, False)

    def test_invalid_character_points_01(self):
        result = sc.SyntaxChecker().check_syntax("[)")
        self.assertEqual(result.is_valid, False)
        self.assertEqual(result.invalid_character, ")")
        self.assertEqual(result.invalid_character_points, 3)

    def test_invalid_character_points_02(self):
        result = sc.SyntaxChecker().check_syntax("(]")
        self.assertEqual(result.is_valid, False)
        self.assertEqual(result.invalid_character, "]")
        self.assertEqual(result.invalid_character_points, 57)

    def test_invalid_character_points_03(self):
        result = sc.SyntaxChecker().check_syntax("[}")
        self.assertEqual(result.is_valid, False)
        self.assertEqual(result.invalid_character, "}")
        self.assertEqual(result.invalid_character_points, 1197)

    def test_invalid_character_points_04(self):
        result = sc.SyntaxChecker().check_syntax("[>")
        self.assertEqual(result.is_valid, False)
        self.assertEqual(result.invalid_character, ">")
        self.assertEqual(result.invalid_character_points, 25137)

    def test_syntax_completion_01(self):
        result = sc.SyntaxChecker().check_syntax("[({(<(())[]>[[{[]{<()<>>")
        self.assertEqual(result.is_valid, False)
        self.assertEqual(result.incomplete_syntax_string, "[({([[{{")
        self.assertEqual(result.incomplete_syntax_fix_string, "}}]])})]")
        self.assertEqual(result.incomplete_syntax_fix_points, 288957)

    def test_syntax_completion_02(self):
        result = sc.SyntaxChecker().check_syntax("[(()[<>])]({[<{<<[]>>(")
        self.assertEqual(result.is_valid, False)
        self.assertEqual(result.incomplete_syntax_string, "({[<{(")
        self.assertEqual(result.incomplete_syntax_fix_string, ")}>]})")
        self.assertEqual(result.incomplete_syntax_fix_points, 5566)

    def test_syntax_completion_03(self):
        result = sc.SyntaxChecker().check_syntax("(((({<>}<{<{<>}{[]{[]{}")
        self.assertEqual(result.is_valid, False)
        self.assertEqual(result.incomplete_syntax_string, "((((<{<{{")
        self.assertEqual(result.incomplete_syntax_fix_string, "}}>}>))))")
        self.assertEqual(result.incomplete_syntax_fix_points, 1480781)

    def test_syntax_completion_04(self):
        result = sc.SyntaxChecker().check_syntax("{<[[]]>}<{[{[{[]{()[[[]")
        self.assertEqual(result.is_valid, False)
        self.assertEqual(result.incomplete_syntax_string, "<{[{[{{[[")
        self.assertEqual(result.incomplete_syntax_fix_string, "]]}}]}]}>")
        self.assertEqual(result.incomplete_syntax_fix_points, 995444)

    def test_syntax_completion_05(self):
        result = sc.SyntaxChecker().check_syntax("<{([{{}}[<[[[<>{}]]]>[]]")
        self.assertEqual(result.is_valid, False)
        self.assertEqual(result.incomplete_syntax_string, "<{([")
        self.assertEqual(result.incomplete_syntax_fix_string, "])}>")
        self.assertEqual(result.incomplete_syntax_fix_points, 294)

    def test_syntax_completion_string_01(self):
        result = sc.SyntaxChecker().complete_invalid_syntax_string("[({([[{{")
        self.assertEqual(result, "}}]])})]")

    def test_syntax_completion_string_02(self):
        result = sc.SyntaxChecker().complete_invalid_syntax_string("({[<{(")
        self.assertEqual(result, ")}>]})")

    def test_syntax_completion_string_03(self):
        result = sc.SyntaxChecker().complete_invalid_syntax_string("((((<{<{{")
        self.assertEqual(result, "}}>}>))))")

    def test_syntax_completion_string_04(self):
        result = sc.SyntaxChecker().complete_invalid_syntax_string("<{[{[{{[[")
        self.assertEqual(result, "]]}}]}]}>")

    def test_syntax_completion_string_05(self):
        result = sc.SyntaxChecker().complete_invalid_syntax_string("<{([")
        self.assertEqual(result, "])}>")

    def test_syntax_completion_points_01(self):
        result = sc.SyntaxChecker().calculate_invalid_syntax_points("}}]])})]")
        self.assertEqual(result, 288957)

    def test_syntax_completion_points_02(self):
        result = sc.SyntaxChecker().calculate_invalid_syntax_points(")}>]})")
        self.assertEqual(result, 5566)

    def test_syntax_completion_points_03(self):
        result = sc.SyntaxChecker().calculate_invalid_syntax_points("}}>}>))))")
        self.assertEqual(result, 1480781)

    def test_syntax_completion_points_04(self):
        result = sc.SyntaxChecker().calculate_invalid_syntax_points("]]}}]}]}>")
        self.assertEqual(result, 995444)

    def test_syntax_completion_points_05(self):
        result = sc.SyntaxChecker().calculate_invalid_syntax_points("])}>")
        self.assertEqual(result, 294)


if __name__ == '__main__':
    unittest.main()
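The suite above pins down the bracket-matching and scoring behavior of the Advent of Code 2021 day 10 puzzle: 3/57/1197/25137 points for an unexpected closer, and a base-5 accumulation of 1/2/3/4 points for the completion string. A minimal sketch that reproduces those scores — behavior inferred from the assertions, not the project's actual `SyntaxChecker` class:

```python
# Minimal sketch of the checker behavior the tests imply; the function names
# and return shapes here are assumptions, not the repo's SyntaxChecker API.
PAIRS = {'(': ')', '[': ']', '{': '}', '<': '>'}
INVALID_POINTS = {')': 3, ']': 57, '}': 1197, '>': 25137}
COMPLETION_POINTS = {')': 1, ']': 2, '}': 3, '>': 4}


def check_syntax(line):
    """Return (is_valid, invalid_char, unclosed_openers).

    A line with a mismatched closer is corrupted (invalid_char set); a line
    with leftover openers is merely incomplete (invalid_char is None).
    """
    stack = []
    for ch in line:
        if ch in PAIRS:
            stack.append(ch)
        elif not stack or PAIRS[stack.pop()] != ch:
            return False, ch, stack
    return not stack, None, stack


def complete_string(openers):
    """Close the leftover openers in reverse (innermost-first) order."""
    return ''.join(PAIRS[ch] for ch in reversed(openers))


def completion_points(fix_string):
    """Base-5 accumulation: total = total * 5 + per-character value."""
    total = 0
    for ch in fix_string:
        total = total * 5 + COMPLETION_POINTS[ch]
    return total
```

For the first completion test case, `check_syntax("[({(<(())[]>[[{[]{<()<>>")` leaves the openers `[({([[{{` on the stack, `complete_string` turns them into `}}]])})]`, and `completion_points` scores that fix at 288957 — matching `test_syntax_completion_01`.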
c1dfddf17a6dc610c5d28631c1e2605cc55628f9 | 2,565 | py | Python | tests/test_mof_thread_parser.py | IMULMUL/etl-parser | 76b7c046866ce0469cd129ee3f7bb3799b34e271 | [
"Apache-2.0"
] | 104 | 2020-03-04T14:31:31.000Z | 2022-03-28T02:59:36.000Z | tests/test_mof_thread_parser.py | IMULMUL/etl-parser | 76b7c046866ce0469cd129ee3f7bb3799b34e271 | [
"Apache-2.0"
] | 7 | 2020-04-20T09:18:39.000Z | 2022-03-19T17:06:19.000Z | tests/test_mof_thread_parser.py | IMULMUL/etl-parser | 76b7c046866ce0469cd129ee3f7bb3799b34e271 | [
"Apache-2.0"
] | 16 | 2020-03-05T18:55:59.000Z | 2022-03-01T10:19:28.000Z |

# -*- coding: utf-8 -*-
import unittest

from etl.parsers.kernel import Thread_TypeGroup1
from etl.parsers.kernel.core import build_mof
from etl.wmi import EventTraceGroup


class TestMofThreadParser(unittest.TestCase):

    def test_thread_v3_type_group_1_type1(self):
        payload = b'\x04\x00\x00\x00\x18\x01\x00\x00\x00\x90K\x17\x8b\x9e\xff\xff\x00 K\x17\x8b\x9e\xff\xff\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\x00\x00\x00\x00\x00\x00\x00\x00\xcfkm\x03\xf8\xff\xff\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x05\x02\x00\x00\x00'
        obj = build_mof(EventTraceGroup.EVENT_TRACE_GROUP_THREAD, 3, 1, payload)
        self.assertIsInstance(obj, Thread_TypeGroup1)
        self.assertEqual(obj.get_process_id(), 4)
        self.assertEqual(obj.get_thread_id(), 280)

    def test_thread_v3_type_group_1_type2(self):
        payload = b'\x04\x00\x00\x00\\\x01\x00\x00\x00\x10S\x17\x8b\x9e\xff\xff\x00\xa0R\x17\x8b\x9e\xff\xff\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\x00\x00\x00\x00\x00\x00\x00\x80\x9a\xb8q\x03\xf8\xff\xff\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x05\x02\x00\x00\x00'
        obj = build_mof(EventTraceGroup.EVENT_TRACE_GROUP_THREAD, 3, 2, payload)
        self.assertIsInstance(obj, Thread_TypeGroup1)
        self.assertEqual(obj.get_process_id(), 4)
        self.assertEqual(obj.get_thread_id(), 348)

    def test_thread_v3_type_group_1_type3(self):
        payload = b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\xd0&s\x03\xf8\xff\xff\x00`&s\x03\xf8\xff\xff\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\xd0O|m\x03\xf8\xff\xff\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00\x00'
        mof = build_mof(EventTraceGroup.EVENT_TRACE_GROUP_THREAD, 3, 3, payload)
        self.assertIsInstance(mof, Thread_TypeGroup1)
        self.assertEqual(mof.get_process_id(), 0)
        self.assertEqual(mof.get_thread_id(), 0)

    def test_thread_v3_type_group_1_type4(self):
        payload = b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\xd0&s\x03\xf8\xff\xff\x00`&s\x03\xf8\xff\xff\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\xd0O|m\x03\xf8\xff\xff\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00\x00'
        mof = build_mof(EventTraceGroup.EVENT_TRACE_GROUP_THREAD, 3, 4, payload)
        self.assertIsInstance(mof, Thread_TypeGroup1)
        self.assertEqual(mof.get_process_id(), 0)
        self.assertEqual(mof.get_thread_id(), 0)
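Judging from the asserted values, the first eight bytes of each `Thread_TypeGroup1` payload decode as two little-endian unsigned 32-bit fields: process id, then thread id (e.g. `\x18\x01\x00\x00` is 0x118 = 280). That is an inference from the test data, not etl-parser's documented field layout; a hedged sketch of the decoding:

```python
import struct


def peek_thread_header(payload):
    """Decode the assumed (process_id, thread_id) prefix of a thread payload.

    '<II' = two little-endian unsigned 32-bit integers, read at offset 0.
    This layout is inferred from the test assertions above, not from the
    etl-parser source.
    """
    process_id, thread_id = struct.unpack_from('<II', payload, 0)
    return process_id, thread_id


# The first test payload begins b'\x04\x00\x00\x00\x18\x01\x00\x00',
# which decodes to (4, 280) -- the values test_thread_v3_type_group_1_type1 asserts.
```

Decoding just the prefix with `struct.unpack_from` is a quick way to sanity-check a captured payload before handing it to the full MOF parser.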