import numpy as np
import re
class Variable():
def __init__(self, var, der = 1):
"""
Initialize a Variable object with the following attributes.
Derivative is 1 by default if not specified.
:param var: attribute representing evaluated value.
:param der: attribute representing evaluated derivative/gradient.
"""
if isinstance(var, int) or isinstance(var, float):
self.var = var
self.der = der
else:
raise TypeError("Input is not a real number.")
def __str__(self):
return f"value = {self.var}, derivative = {self.der}"
def __repr__(self):
return f"value = {self.var}, derivative = {self.der}"
def __add__(self, other):
"""
dunder method for adding a Variable object.
:param self: Variable object.
:param other: Variable object, or int/float, to be added to self.var.
:return: Variable object with value and derivative of the sum of self and other.
"""
try:
new_add = self.var + other.var
new_der = self.der + other.der
return Variable(new_add, new_der)
except AttributeError:
if isinstance(other, int) or isinstance(other, float):
# other is not a variable and the addition could complete if it is a real number
return Variable(self.var + other, self.der)
else:
raise TypeError("Input is not a real number.")
def __mul__(self, other):
"""
dunder method for multiplying a Variable object or constant.
:param self: Variable object.
:param other: Variable object, or int/float, to be multiplied to self.var.
:return: Variable object with value and derivative of the product of self and other.
"""
try:
new_mul = other.var * self.var
new_der = self.der * other.var + other.der * self.var
return Variable(new_mul, new_der)
except AttributeError:
if isinstance(other, int) or isinstance(other, float):
# other is not a variable and the multiplication could complete if it is a real number
new_mul = other * self.var
new_der = self.der * other
return Variable(new_mul, new_der)
else:
raise TypeError("Input is not a real number.")
def __radd__(self, other):
"""
dunder method for adding a Variable object and a left operand without an __add__ method.
:param self: Variable object.
:param other: an object that does not have an __add__ method or not implemented.
:return: __add__ function call.
"""
return self.__add__(other)
def __rmul__(self, other):
"""
dunder method for multiplying a Variable object and a left operand without a __mul__ method.
:param self: Variable object.
:param other: an object that does not have an __mul__ method or not implemented.
:return: __mul__ function call.
"""
return self.__mul__(other)
def __sub__(self, other):
"""
dunder method for subtracting a Variable object or constant.
:param self: Variable object.
:param other: Variable object, or int/float, to be subtracted from self.var.
:return: __add__ function call that passes the other object with negative sign.
"""
return self.__add__(-other)
def __rsub__(self, other):
"""
dunder method for subtracting a Variable object from a left operand without a __sub__ method.
:param self: Variable object.
:param other: an object that does not have an __sub__ method or not implemented.
:return: __add__ function call that passes the self object with negative sign.
"""
return (-self).__add__(other)
def __truediv__(self, other):
"""
dunder method for dividing a Variable object or constant.
:param self: Variable object.
:param other: Variable object, or int/float, to be divided from self.var.
:return: Variable object with value and derivative of the fraction of self and other.
"""
try:
new_div = self.var / other.var
new_der = (self.der * other.var - other.der * self.var) / other.var**2
return Variable(new_div, new_der)
except AttributeError:
if isinstance(other, int) or isinstance(other, float):
new_div = self.var / other
new_der = self.der / other
return Variable(new_div, new_der)
else:
raise TypeError(f"Input {other} is not valid.")
def __neg__(self):
"""
dunder method for taking the negative of a Variable object or constant.
:param self: Variable object.
:return: Variable object with negative value and derivative of itself.
"""
return Variable(-self.var, -self.der)
def __rtruediv__(self, other):
"""
dunder method for dividing a left operand without a __truediv__ method by a Variable object.
:param self: Variable object.
:param other: an object that does not have an __truediv__ method or not implemented.
:return: Variable object with value and derivative of the fraction of self and other.
"""
try:
new_div = other.var / self.var
new_der = (other.der * self.var - other.var * self.der) / self.var**2
return Variable(new_div, new_der)
except AttributeError:
if isinstance(other, int) or isinstance(other, float):
new_div = other / self.var
# d/dx (c / x) = -c / x**2, times self.der by the chain rule
new_der = -other * self.der / self.var ** 2
return Variable(new_div, new_der)
else:
raise TypeError(f"Input {other} is not valid.")
def __lt__(self, other):
'''
dunder method for less than comparator.
:param self: Variable object.
:param other: Variable object, or int/float, to be compared with.
:return: True if self is less than other, otherwise False.
'''
try:
return self.var < other.var
except AttributeError:
return self.var < other
def __gt__(self, other):
'''
dunder method for greater than comparator.
:param self: Variable object.
:param other: Variable object, or int/float, to be compared with.
:return: True if self is greater than other, otherwise False.
'''
try:
return self.var > other.var
except AttributeError:
return self.var > other
def __le__(self, other):
'''
dunder method for less than or equal to comparator.
:param self: Variable object.
:param other: Variable object, or int/float, to be compared with.
:return: True if self is less than or equal to other, otherwise False.
'''
try:
return self.var <= other.var
except AttributeError:
return self.var <= other
def __ge__(self, other):
'''
dunder method for greater than or equal to comparator.
:param self: Variable object.
:param other: Variable object, or int/float, to be compared with.
:return: True if self is greater than or equal to other, otherwise False.
'''
try:
return self.var >= other.var
except AttributeError:
return self.var >= other
def __eq__(self, other):
'''
dunder method for equality comparator.
:param self: Variable object.
:param other: Variable object, or int/float, to be compared with.
:return: True if self is equal to other, otherwise False.
'''
try:
return self.var == other.var
except AttributeError:
raise TypeError('Input is not comparable.')
def __ne__(self, other):
'''
dunder method for inequality comparator.
:param self: Variable object.
:param other: Variable object, or int/float, to be compared with.
:return: negation of __eq__ function call.
'''
return not self.__eq__(other)
def __abs__(self):
'''
dunder method for absolute value.
:param self: Variable object.
:return: Variable object with value and derivative of the absolute value of self.
'''
# derivative of |x| is sign(x) * x' (undefined at x = 0)
return Variable(abs(self.var), np.sign(self.var) * self.der)
def __pow__(self, other):
"""
dunder method for taking Variable object to the value of other object's power.
:param self: Variable object.
:param other: Variable object, or int/float, to the power of.
:return: Variable object with value and derivative of the power of self with power of other's value.
"""
try:
new_val = self.var ** other.var
# d/dx (u ** v) = v * u**(v-1) * u' + u**v * ln(u) * v'
new_der = other.var * self.var ** (other.var - 1) * self.der + new_val * np.log(self.var) * other.der
return Variable(new_val, new_der)
except AttributeError:
if isinstance(other, (int, float)):
new_val = self.var ** other
new_der = other * self.var ** (other - 1) * self.der
return Variable(new_val, new_der)
else:
raise TypeError(f"Exponent {other} is not valid.")
def __rpow__(self, other):
"""
dunder method for taking the other object to the Variable object's power.
:param self: Variable object.
:param other: Variable object, or int/float, as the base.
:return: Variable object with value and derivative of the power of other with power of self's value.
"""
try:
new_val = other ** self.var
except TypeError:
raise ValueError("{} must be a number.".format(other))
# d/dx (a ** u) = a**u * ln(a) * u'
new_der = other ** self.var * np.log(other) * self.der
return Variable(new_val, new_der)
@staticmethod
def log(var):
"""
takes the log of the Variable object.
:param var: Variable object.
:return: Variable object with value and derivative of the log of var.
"""
if not isinstance(var, Variable):
raise TypeError("Input is not a Variable object.")
if var.var <= 0:
raise ValueError('Input needs to be greater than 0.')
log_var = np.log(var.var)
log_der = (1. / var.var) * var.der
return Variable(log_var, log_der)
@staticmethod
def sqrt(var):
"""
takes the square root of the Variable object.
:param var: Variable object.
:return: Variable object with value and derivative of the square root of var.
"""
if var < 0:
raise ValueError("Square root only takes positive numbers in the current implementation.")
try:
sqrt_var = var.var ** 0.5
# chain rule: d/dx sqrt(u) = u' / (2 * sqrt(u))
sqrt_der = 0.5 * var.var ** (-0.5) * var.der
except AttributeError:
raise TypeError("Input is not a Variable object.")
return Variable(sqrt_var, sqrt_der)
@staticmethod
def exp(var):
"""
natural exponential function with the value of Variable object as the power.
:param var: Variable object.
:return: Variable object with value and derivative of natural exponential function with power var.
"""
try:
new_val = np.exp(var.var)
new_der = np.exp(var.var) * var.der
return Variable(new_val, new_der)
except AttributeError:
if not isinstance(var, int) and not isinstance(var, float):
raise TypeError(f"Input {var} is not valid.")
return Variable(np.exp(var), np.exp(var))
@staticmethod
def sin(var):
"""
calculates the sine of Variable object.
:param var: Variable object.
:return: Variable object with value and derivative of the sine of var.
"""
try:
new_val = np.sin(var.var)
new_der = var.der * np.cos(var.var)
return Variable(new_val, new_der)
except AttributeError:
if not isinstance(var, int) and not isinstance(var, float):
raise TypeError(f"Input {var} is not valid.")
return Variable(np.sin(var), np.cos(var))
@staticmethod
def cos(var):
"""
calculates the cosine of Variable object.
:param var: Variable object.
:return: Variable object with value and derivative of the cosine of var.
"""
try:
new_val = np.cos(var.var)
new_der = var.der * -np.sin(var.var)
return Variable(new_val, new_der)
except AttributeError:
if not isinstance(var, int) and not isinstance(var, float):
raise TypeError(f"Input {var} is not valid.")
return Variable(np.cos(var), -np.sin(var))
@staticmethod
def tan(var):
"""
calculates the tangent of Variable object.
:param var: Variable object.
:return: Variable object with value and derivative of the tangent of var.
"""
try:
new_val = np.tan(var.var)
new_der = var.der * 1 / np.power(np.cos(var.var), 2)
return Variable(new_val, new_der)
except AttributeError:
if not isinstance(var, int) and not isinstance(var, float):
raise TypeError(f"Input {var} is not valid.")
return Variable(np.tan(var), 1/np.cos(var)**2)
@staticmethod
def arcsin(var):
"""
calculates the arcsine of Variable object.
:param var: Variable object.
:return: Variable object with value and derivative of the arcsine of var.
"""
try:
if var.var > 1 or var.var < -1:
raise ValueError('Please input -1 <= x <= 1')
new_val = np.arcsin(var.var)
# chain rule: d/dx arcsin(u) = u' / sqrt(1 - u**2)
new_der = var.der / np.sqrt(1 - var.var ** 2)
return Variable(new_val, new_der)
except AttributeError:
if not isinstance(var, (int, float)):
raise TypeError(f"Input {var} is not valid.")
return Variable(np.arcsin(var), 1 / np.sqrt(1 - var ** 2))
@staticmethod
def arccos(var):
"""
calculates the arccosine of Variable object.
:param var: Variable object.
:return: Variable object with value and derivative of the arccosine of var.
"""
if isinstance(var, int) or isinstance(var, float):
return np.arccos(var)
try:
if var.var > 1 or var.var < -1:
raise ValueError('Please input -1 <= x <= 1')
new_val = np.arccos(var.var)
# chain rule: d/dx arccos(u) = -u' / sqrt(1 - u**2)
new_der = -var.der / np.sqrt(1 - var.var ** 2)
return Variable(new_val, new_der)
except AttributeError:
raise TypeError(f"Input {var} is not valid.")
@staticmethod
def arctan(var):
"""
calculates the arctangent of Variable object.
:param var: Variable object.
:return: Variable object with value and derivative of the arctangent of var.
"""
try:
new_val = np.arctan(var.var)
new_der = var.der * 1 / (1 + np.power(var.var, 2))
return Variable(new_val, new_der)
except AttributeError:
return np.arctan(var)
@staticmethod
def sinh(var):
"""
calculates the hyperbolic sine of Variable object.
:param var: Variable object.
:return: Variable object with value and derivative of the hyperbolic sine of var.
"""
try:
new_val = np.sinh(var.var)
new_der = var.der * np.cosh(var.var)
return Variable(new_val, new_der)
except AttributeError:
return np.sinh(var)
@staticmethod
def cosh(var):
"""
calculates the hyperbolic cosine of Variable object.
:param var: Variable object.
:return: Variable object with value and derivative of the hyperbolic cosine of var.
"""
try:
new_val = np.cosh(var.var)
new_der = var.der * np.sinh(var.var)
return Variable(new_val, new_der)
except AttributeError:
return np.cosh(var)
@staticmethod
def tanh(var):
"""
calculates the hyperbolic tangent of Variable object.
:param var: Variable object.
:return: Variable object with value and derivative of the hyperbolic tangent of var.
"""
try:
new_val = np.tanh(var.var)
new_der = var.der * 1 / np.power(np.cosh(var.var), 2)
return Variable(new_val, new_der)
except AttributeError:
return np.tanh(var)
@staticmethod
def sigmoid(var):
"""
calculates the sigmoid/logistic function of Variable object.
:param var: Variable object.
:return: Variable object with value and derivative of the sigmoid/logistic function of var.
"""
try:
logistic_var = 1 / (1 + np.exp(-var.var))
logistic_der = logistic_var * (1-logistic_var) * var.der
return Variable(logistic_var, logistic_der)
except AttributeError:
raise TypeError(f"Input {var} is not valid.")
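As a sanity check on the forward-mode rules above, here is a self-contained dual-number sketch (a hypothetical stand-in mirroring Variable, not the class itself) showing how value and derivative propagate together through the product and chain rules:

```python
import numpy as np

# Hypothetical stand-in mirroring the Variable class above: a dual number
# carries (value, derivative) and propagates both through each operation.
class Dual:
    def __init__(self, var, der=1.0):
        self.var, self.der = var, der

    def __mul__(self, other):
        if isinstance(other, Dual):
            # product rule: (uv)' = u'v + uv'
            return Dual(self.var * other.var,
                        self.der * other.var + other.der * self.var)
        return Dual(self.var * other, self.der * other)

    __rmul__ = __mul__

    @staticmethod
    def sin(d):
        # chain rule: d/dx sin(u) = cos(u) * u'
        return Dual(np.sin(d.var), np.cos(d.var) * d.der)

x = Dual(2.0)                # seed dx/dx = 1
f = Dual.sin(x) * x * x      # f(x) = x**2 * sin(x)
print(f.var, f.der)          # 4*sin(2), and 4*sin(2) + 4*cos(2)
```

At x = 2 this yields f = 4 sin(2) and f' = 2x sin(x) + x^2 cos(x) = 4 sin(2) + 4 cos(2), matching the analytic derivative.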
class SimpleAutoDiff:
def __init__(self, dict_val, list_funct):
"""
How To Use SimpleAutoDiff
---------------------------------------------------------------------------------------------------------------------
Inputs:
dict_val: a dictionary object
dict_val represents the input which is a dictionary of variables and their values formatted like the examples below
example1={'x':7,'y':2, 'z':3}
example2 ={'x':4}
You can input as many variables as needed (at least one).
list_funct: a list object
list_funct represents the input which is a list of functions input as strings
example_a = ['x**2','cos(np.pi*y)', '8*z']
example_b = ['sin(x)']
An example of a sample return using dict_val example1 and list_funct example_a would be
---AutoDifferentiation---
Value: {'x': 7, 'y': 2, 'z': 3}
Function 1:
Expression = x**2
Value = 49
Gradient = 14
Function 2:
Expression = cos(np.pi*y)
Value = 1.0
Gradient = 7.694682774887159e-16
Function 3:
Expression = 8*z
Value = 24
Gradient = 8
---------------------------------------------------------------------------------------------------------------------
"""
for func in list_funct:
if not isinstance(func, str):
raise TypeError('Invalid function input.')
static_elem_funct = ['log', 'sqrt', 'exp', 'sin', 'cos', 'tan', 'arcsin', 'arccos', 'arctan', 'sinh', 'cosh', 'tanh', 'sigmoid']
func_vals = []
dict_keys = list(dict_val.keys())
self.jacobian = np.zeros((len(list_funct), len(dict_val)))
count = 0
for pair in range(0,len(dict_val)):
for _ in range(0,len(dict_val)):
if _ == count:
exec(dict_keys[_] + "= Variable(dict_val[dict_keys[_]], der=1)")
else:
exec(dict_keys[_] + "= Variable(dict_val[dict_keys[_]], der=0)")
for fun in range(0,len(list_funct)):
for elem_funct in static_elem_funct:
if elem_funct in list_funct[fun]: # e.g. log is in log(x)
func = 'Variable.' + list_funct[fun]
break
else:
func = list_funct[fun]
result = eval(func)
# only record the function value once; every seeding pass recomputes it
if count == 0:
func_vals.append(result.var)
self.jacobian[fun, count] = result.der
count+=1
self.functions = func_vals
self.dict_val = dict_val
self.list_funct = list_funct
def __repr__(self):
output = '---AutoDifferentiation---\n'
added_output = ''
added_output += f"Value: {self.dict_val}\n\n"
for i in range(0, len(self.functions)):
added_output += f"Function {i+1}: \nExpression = {self.list_funct[i]}\nValue = {str(self.functions[i])}\nGradient = {str(self.jacobian[i])}\n\n"
return output+added_output
def __str__(self):
output = '---AutoDifferentiation---\n'
added_output = ''
added_output += f"Value: {self.dict_val}\n\n"
for i in range(0, self.jacobian.shape[0]):
added_output += f"Function {i+1}: \nExpression = {self.list_funct[i]}\nValue = {str(self.functions[i])}\nGradient = {str(self.jacobian[i])}\n\n"
return output+added_output
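The seeding loop in SimpleAutoDiff above fills the Jacobian one column at a time: one forward pass per input variable, with that variable's derivative seeded to 1 and all others to 0. A standalone sketch of that idea (the Dual class and example functions here are illustrative stand-ins, not part of the package, and plain callables replace the eval'd strings):

```python
import numpy as np

# Minimal dual number: (value, derivative) propagate together.
class Dual:
    def __init__(self, var, der=0.0):
        self.var, self.der = var, der

    def __add__(self, other):
        o = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.var + o.var, self.der + o.der)

    def __mul__(self, other):
        o = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.var * o.var, self.der * o.var + o.der * self.var)

    __radd__, __rmul__ = __add__, __mul__

funcs = [lambda x, y: x * x + y, lambda x, y: 3 * x * y]
values = {'x': 7.0, 'y': 2.0}
jacobian = np.zeros((len(funcs), len(values)))
for j, seed_key in enumerate(values):        # one pass per input variable
    args = [Dual(v, der=1.0 if k == seed_key else 0.0)
            for k, v in values.items()]
    for i, f in enumerate(funcs):
        jacobian[i, j] = f(*args).der        # fill one Jacobian column
print(jacobian)                              # rows: functions, cols: x, y
```

For f1 = x**2 + y and f2 = 3xy at (7, 2) this gives the Jacobian [[14, 1], [6, 21]].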
class Node():
def __init__(self, var):
"""
Initialize a Node object with the following attributes:
child_node: a list of (child Node, local partial derivative) tuples
derivative: evaluated derivative/gradient; None until computed by partials().
:param var: attribute representing evaluated value.
"""
if isinstance(var, int) or isinstance(var, float):
self.var = var
self.child_node = []
self.derivative = None
else:
raise TypeError("Input is not a real number.")
def get_derivatives(self, inputs):
"""
Method to get derivatives for each variable used in the function.
This method returns:
var_val: the function value stored at this Node
der_list: derivatives of the function with respect to each input variable
:param inputs: the list of input Nodes (the variables of the function).
"""
var_val = self.var
der_list = np.array([var_i.partials() for var_i in inputs])
return var_val, der_list
def partials(self):
"""
Method to compute derivative for variables.
Uses self.derivative to determine whether to use list comprehension.
For finding partial derivatives with respect to each function.
"""
if len(self.child_node) == 0:
return 1
if self.derivative is not None:
return self.derivative
else:
self.derivative = sum([child.partials() * partial for child, partial in self.child_node])
return self.derivative
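The reverse accumulation in partials() above can be sketched standalone (a hypothetical mini-Node, not the class itself): each operation records (result node, local partial) edges, and partials() recursively sums result-derivative times edge weight, memoizing along the way:

```python
# Hypothetical mini-Node illustrating reverse-mode accumulation as in
# Node.partials() above: child_node records the nodes built from this one.
class MiniNode:
    def __init__(self, var):
        self.var, self.child_node, self.derivative = var, [], None

    def __mul__(self, other):
        out = MiniNode(self.var * other.var)
        self.child_node.append((out, other.var))   # d(out)/d(self)
        other.child_node.append((out, self.var))   # d(out)/d(other)
        return out

    def partials(self):
        if not self.child_node:          # the output node: d(out)/d(out) = 1
            return 1
        if self.derivative is None:      # memoize the recursive sum
            self.derivative = sum(child.partials() * p
                                  for child, p in self.child_node)
        return self.derivative

x, y = MiniNode(3.0), MiniNode(4.0)
f = x * y                                 # forward pass builds the graph
print(f.var, x.partials(), y.partials())  # 12.0, df/dx = 4.0, df/dy = 3.0
```

One forward pass builds the graph; every partial derivative is then read off without re-evaluating the function, which is the advantage of reverse mode for many inputs.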
def __add__(self, other):
"""
dunder method for adding a Node object.
internally appends derivatives of self and other and the return object to the .child_node.
:param self: Node object.
:param other: Node object, or int/float, to be added to self.var.
:return: Node object with value of the sum of self and other.
"""
try:
new_add = Node(self.var + other.var)
self.child_node.append((new_add, 1))
other.child_node.append((new_add, 1))
return new_add
except AttributeError:
if isinstance(other, int) or isinstance(other, float):
# other is not a Node and the addition could complete if it is a real number
new_add = Node(self.var + other)
self.child_node.append((new_add, 1))
return new_add
else:
raise TypeError("Input is not a real number.")
def __mul__(self, other):
"""
dunder method for multiplying a Node object or constant.
internally appends derivatives of self and other and the return object to the .child_node.
:param self: Node object.
:param other: Node object, or int/float, to be multiplied to self.var.
:return: Node object with value of the product of self and other.
"""
try:
new_mul = Node(other.var * self.var)
self.child_node.append((new_mul, other.var))
other.child_node.append((new_mul, self.var))
return new_mul
except AttributeError:
if isinstance(other, int) or isinstance(other, float):
# other is not a Node and the multiplication could complete if it is a real number
new_mul = Node(other * self.var)
self.child_node.append((new_mul, other))
return new_mul
else:
raise TypeError("Input is not a real number.")
def __radd__(self, other):
"""
dunder method for adding a Node object and a left operand without an __add__ method.
:param self: Node object.
:param other: an object that does not have an __add__ method or not implemented.
:return: __add__ function call.
"""
return self.__add__(other)
def __rmul__(self, other):
"""
dunder method for multiplying a Node object and a left operand without a __mul__ method.
:param self: Node object.
:param other: an object that does not have an __mul__ method or not implemented.
:return: __mul__ function call.
"""
return self.__mul__(other)
def __sub__(self, other):
"""
dunder method for subtracting a Node object or constant.
:param self: Node object.
:param other: Node object, or int/float, to be subtracted from self.var.
:return: __add__ function call that passes the other object with negative sign.
"""
return self.__add__(-other)
def __rsub__(self, other):
"""
dunder method for subtracting a Node object from a left operand without a __sub__ method.
:param self: Node object.
:param other: an object that does not have an __sub__ method or not implemented.
:return: __add__ function call that passes the self object with negative sign.
"""
return (-self).__add__(other)
def __truediv__(self, other):
"""
dunder method for dividing a Node object or constant.
internally appends derivatives of self and other and the return object to the .child_node.
:param self: Node object.
:param other: Node object, or int/float, to be divided from self.var.
:return: Node object with value and the fraction of self and other.
"""
try:
new_div = Node(self.var / other.var)
self.child_node.append((new_div, 1 / other.var))
other.child_node.append((new_div, (-self.var/(other.var**2))))
return new_div
except AttributeError:
if isinstance(other, int) or isinstance(other, float):
new_div = Node(self.var / other)
self.child_node.append((new_div, 1 / other))
return new_div
else:
raise TypeError(f"Input {other} is not valid.")
def __neg__(self):
"""
dunder method for taking the negative of an Node object or constant.
internally appends derivatives of self and other and the return object to the .child_node.
:param self: Node object.
:return: Node object with negative value of itself.
"""
new_neg = Node(-self.var)
self.child_node.append((new_neg, -1))
return new_neg
def __rtruediv__(self, other):
"""
dunder method for dividing a Node object and left other object without __truediv__ method.
internally appends derivatives of self and other and the return object to the .child_node.
:param self: Node object.
:param other: an object that does not have an __truediv__ method or not implemented.
:return: Node object with value of the fraction of self and other.
"""
try:
new_div = Node(other.var / self.var)
self.child_node.append((new_div, -other.var / self.var ** 2))
other.child_node.append((new_div, 1/self.var))
return new_div
except AttributeError:
if isinstance(other, int) or isinstance(other, float):
new_div = Node(other / self.var)
self.child_node.append((new_div, -other / self.var ** 2))
return new_div
else:
raise TypeError(f"Input {other} is not valid.")
def __lt__(self, other):
'''
dunder method for less than comparator.
:param self: Node object.
:param other: Node object, or int/float, to be compared with.
:return: True if self is less than other, otherwise False.
'''
try:
return self.var < other.var
except AttributeError:
return self.var < other
def __gt__(self, other):
'''
dunder method for greater than comparator.
:param self: Node object.
:param other: Node object, or int/float, to be compared with.
:return: True if self is greater than other, otherwise False.
'''
try:
return self.var > other.var
except AttributeError:
return self.var > other
def __le__(self, other):
'''
dunder method for less than or equal to comparator.
:param self: Node object.
:param other: Node object, or int/float, to be compared with.
:return: True if self is less than or equal to other, otherwise False.
'''
try:
return self.var <= other.var
except AttributeError:
return self.var <= other
def __ge__(self, other):
'''
dunder method for greater than or equal to comparator.
:param self: Node object.
:param other: Node object, or int/float, to be compared with.
:return: True if self is greater than or equal to other, otherwise False.
'''
try:
return self.var >= other.var
except AttributeError:
return self.var >= other
def __eq__(self, other):
'''
dunder method for equality comparator.
:param self: Node object.
:param other: Node object, or int/float, to be compared with.
:return: True if self is equal to other, otherwise False.
'''
try:
return self.var == other.var
except AttributeError:
raise TypeError('Input is not comparable.')
def __ne__(self, other):
'''
dunder method for inequality comparator.
:param self: Node object.
:param other: Node object, or int/float, to be compared with.
:return: negation of __eq__ function call.
'''
return not self.__eq__(other)
def __abs__(self):
'''
dunder method for absolute value.
internally appends derivatives of self and other and the return object to the .child_node.
:param self: Node object.
:return: Node object with value of the absolute value of self.
'''
new_abs = Node(abs(self.var))
# local derivative of |x| is sign(x) (undefined at x = 0)
self.child_node.append((new_abs, np.sign(self.var)))
return new_abs
def __pow__(self, other):
"""
dunder method for taking Variable object to the value of other object's power.
internally appends derivatives of self and other and the return object to the .child_node.
:param self: Node object.
:param other: Node object, or int/float, to the power of.
:return: Node object with value of the power of self with power of other's value.
"""
try:
new_val = Node(self.var ** other.var)
self.child_node.append((new_val, (other.var) * self.var ** (other.var-1)))
other.child_node.append((new_val, self.var ** other.var * (np.log(self.var))))
return new_val
except AttributeError:
if isinstance(other, (int, float)):
new_val = Node(self.var ** other)
self.child_node.append((new_val, (other) * self.var ** (other-1)))
return new_val
else:
raise TypeError(f"Exponent {other} is not valid.")
def __rpow__(self, other):
"""
dunder method for taking the other object to the Node object's power.
internally appends derivative of self and the return object to self.child_node.
:param self: Node object.
:param other: Node object, or int/float, as the base.
:return: Node object with value of the power of other with power of self's value.
"""
try:
new_val = Node(other ** self.var)
except TypeError:
raise ValueError("{} must be a number.".format(other))
self.child_node.append((new_val, other**self.var * np.log(other)))
return new_val
@staticmethod
def log(var):
"""
takes the log of the Node object.
internally appends derivative of var and the return object to var.child_node.
:param var: Node object.
:return: Node object with value of the natural log of var.var
"""
if not isinstance(var, Node):
raise TypeError("Input is not a Node object.")
if var.var <= 0:
raise ValueError('Input needs to be greater than 0.')
log_var = Node(np.log(var.var))
var.child_node.append((log_var, (1. / var.var) * 1))
return log_var
@staticmethod
def sqrt(var):
"""
takes the square root of the Node object.
internally appends derivative of var and the return object to var.child_node.
:var: Node object.
:return: Node object with value of the square root of var.
"""
        try:
            if var.var < 0:
                raise ValueError("Square root can only take non-negative values.")
            sqrt_var = Node(var.var ** (1 / 2))
            var.child_node.append((sqrt_var, (1 / 2) * var.var ** (-1 / 2)))
        except AttributeError:
            raise TypeError("Input is not a Node object.")
        return sqrt_var
@staticmethod
def exp(var):
"""
natural exponential function with the value of Node object as the power.
internally appends derivative of var and the return object to var.child_node.
:var: Node object.
:return: Node object with value of natural exponential function with power var.
"""
        try:
            new_val = Node(np.exp(var.var))
            var.child_node.append((new_val, np.exp(var.var)))
            return new_val
        except AttributeError:
            if not isinstance(var, (int, float)):
                raise TypeError(f"Input {var} is not valid.")
            return np.exp(var)
@staticmethod
def sin(var):
"""
calculates the sine of Node object.
internally appends derivative of var and the return object to var.child_node.
:var: Node object.
:return: Node object with value of the sine of var.
"""
        try:
            new_val = Node(np.sin(var.var))
            var.child_node.append((new_val, np.cos(var.var)))
            return new_val
        except AttributeError:
            if not isinstance(var, (int, float)):
                raise TypeError(f"Input {var} is not valid.")
            return np.sin(var)
@staticmethod
def cos(var):
"""
calculates the cosine of Node object.
internally appends derivative of var and the return object to var.child_node
:var: Node object.
:return: Node object with value of the cosine of var.
"""
        try:
            new_val = Node(np.cos(var.var))
            var.child_node.append((new_val, -np.sin(var.var)))
            return new_val
        except AttributeError:
            if not isinstance(var, (int, float)):
                raise TypeError(f"Input {var} is not valid.")
            return np.cos(var)
@staticmethod
def tan(var):
"""
calculates the tangent of Node object.
internally appends derivative of var and the return object to var.child_node.
:var: Node object.
:return: Node object with value of the tangent of var.
"""
        try:
            new_val = Node(np.tan(var.var))
            var.child_node.append((new_val, 1 / np.power(np.cos(var.var), 2)))
            return new_val
        except AttributeError:
            if not isinstance(var, (int, float)):
                raise TypeError(f"Input {var} is not valid.")
            return np.tan(var)
@staticmethod
def arcsin(var):
"""
calculates the arcsine of Node object.
internally appends derivative of var and the return object to var.child_node.
:var: Node object.
:return: Node object with value of the arcsine of var.
"""
        try:
            if var.var > 1 or var.var < -1:
                raise ValueError('Please input -1 <= x <= 1')
            new_val = Node(np.arcsin(var.var))
            var.child_node.append((new_val, 1 / np.sqrt(1 - (var.var ** 2))))
            return new_val
        except AttributeError:
            if not isinstance(var, (int, float)):
                raise TypeError(f"Input {var} is not valid.")
            return np.arcsin(var)
@staticmethod
def arccos(var):
"""
calculates the arccosine of Node object.
internally appends derivative of var and the return object to var.child_node
:var: Node object.
:return: Node object with value of the arccosine of var.
"""
        if isinstance(var, (int, float)):
            return np.arccos(var)
        try:
            if var.var > 1 or var.var < -1:
                raise ValueError('Please input -1 <= x <= 1')
            new_val = Node(np.arccos(var.var))
            var.child_node.append((new_val, -1 / np.sqrt(1 - (var.var ** 2))))
            return new_val
        except AttributeError:
            raise TypeError(f"Input {var} is not valid.")
@staticmethod
def arctan(var):
"""
calculates the arctangent of Node object.
internally appends derivative of var and the return object to var.child_node
:var: Node object.
:return: Node object with value of the arctangent of var.
"""
try:
new_val = Node(np.arctan(var.var))
var.child_node.append((new_val, 1 * 1 / (1 + np.power(var.var, 2))))
return new_val
except AttributeError:
return np.arctan(var)
@staticmethod
def sinh(var):
"""
calculates the hyperbolic sine of Node object.
internally appends derivative of var and the return object to var.child_node
:var: Node object.
:return: Node object with value and derivative of the hyperbolic sine of var.
"""
try:
new_val = Node(np.sinh(var.var))
var.child_node.append((new_val, 1 * np.cosh(var.var)))
return new_val
except AttributeError:
return np.sinh(var)
@staticmethod
def cosh(var):
"""
calculates the hyperbolic cosine of Node object.
internally appends derivative of var and the return object to var.child_node
:param var: Node object.
:return: Node object with value and derivative of the hyperbolic cosine of var.
"""
try:
new_val = Node(np.cosh(var.var))
var.child_node.append((new_val, 1 * np.sinh(var.var)))
return new_val
except AttributeError:
return np.cosh(var)
@staticmethod
def tanh(var):
"""
calculates the hyperbolic tangent of Node object.
internally appends derivative of var and the return object to var.child_node
:param var: Node object.
:return: Node object with value and derivative of the hyperbolic cosine of var.
"""
try:
new_val = Node(np.tanh(var.var))
var.child_node.append((new_val, 1 * 1 / np.power(np.cosh(var.var), 2)))
return new_val
except AttributeError:
return np.tanh(var)
    @staticmethod
    def sigmoid(var):
        """
        calculates the sigmoid/logistic function of Node object.
        internally appends derivative of var and the return object to var.child_node.
        :param var: Node object.
        :return: Node object with value of the sigmoid/logistic function of var.
        """
        try:
            sig = 1 / (1 + np.exp(-var.var))
            logistic_var = Node(sig)
            # derivative of the sigmoid: sig * (1 - sig)
            var.child_node.append((logistic_var, sig * (1 - sig)))
            return logistic_var
        except AttributeError:
            raise TypeError(f"Input {var} not valid.")
def __str__(self):
return f"value = {self.var}, derivative = {self.partials()}"
    def __repr__(self):
        return self.__str__()
class Reverse:
def __init__(self, dict_val, list_funct):
"""
Inputs:
dict_val: a dictionary object
dict_val represents the input which is a dictionary of variables and their values formatted like the examples below
example1={'x':7,'y':2, 'z':3}
example2 ={'x':4}
You can input as many variables as needed, so long as there is at least one
list_funct: a list object
list_funct represents the input which is a list of functions input as strings
example_a = ['x**2','cos(np.pi*y)', '8*z']
example_b = ['sin(x)']
Demo Reverse Mode
______________________________________________________________
INPUT
dict_val = {'x': 3, 'y': 2, 'z':1}
list_funct = ['x * y + exp(x * y)+ z**2', 'x + 3 * y + 4*x*z']
reverse_out = Reverse(dict_val, list_funct)
print(reverse_out)
OUTPUT
---Reverse Differentiation---
Function 1:
Expression = x * y + exp(x * y)+ z**2
Value = 410.4287934927351
Gradient = [ 808.85758699 1213.28638048 2. ]
Function 2:
Expression = x + 3 * y + 4*x*z
Value = 21.0
Gradient = [ 5. 3. 12.]
"""
# checking for string type
for func in list_funct:
if not isinstance(func, str):
raise TypeError('Invalid function input, must be string.')
# checking for dictionary type
if not isinstance(dict_val, dict):
raise TypeError('Variable Input must be a dictionary')
self.var = []
self.der = []
self.list_funct = list_funct
static_elem_funct = ['log', 'sqrt', 'exp', 'sin', 'cos', 'tan', 'arcsin', 'arccos', 'arctan', 'sinh', 'cosh', 'tanh', 'sigmoid']
for func in list_funct:
for i in static_elem_funct:
if i in func:
func = re.sub(i + r'\(', 'Node.' + i + '(', func)
func = re.sub('arcNode.', 'arc', func)
for var_name, var_value in dict_val.items():
exec(f'{var_name} = Node(float(var_value))')
func_eval = eval(func)
value_keys = str(list(dict_val.keys())).replace('\'','')
val_1, der_1 = eval(f'func_eval.get_derivatives({value_keys})')
self.var.append(val_1)
self.der.append(der_1)
    def __str__(self):
        output = '---Reverse Differentiation---\n'
        added_output = ''
        for i in range(len(self.list_funct)):
            added_output += f"Function {i+1}: \nExpression = {self.list_funct[i]}\nValue = {str(self.var[i])}\nGradient = {str(self.der[i])}\n\n"
        return output + added_output
    def __repr__(self):
        return self.__str__()
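The string preprocessing in `Reverse.__init__` — prefixing elementary function names with `Node.`, then undoing the accidental rewrite that happens inside `arc*` names — can be checked in isolation with only the standard library. The helper below copies the constructor's `static_elem_funct` list; `rewrite` is a hypothetical name for the extracted loop body.

```python
import re

static_elem_funct = ['log', 'sqrt', 'exp', 'sin', 'cos', 'tan', 'arcsin', 'arccos',
                     'arctan', 'sinh', 'cosh', 'tanh', 'sigmoid']

def rewrite(func):
    # same two-step substitution as Reverse.__init__
    for name in static_elem_funct:
        if name in func:
            func = re.sub(name + r'\(', 'Node.' + name + '(', func)
            # 'sin(' also matches inside 'arcsin(', yielding 'arcNode.sin(' -- undo that
            func = re.sub('arcNode.', 'arc', func)
    return func

print(rewrite('sin(x)'))     # Node.sin(x)
print(rewrite('arcsin(x)'))  # Node.arcsin(x)
```

Once 'arcsin(' is later rewritten to 'Node.arcsin(', the fixup pattern no longer matches, so the two passes converge on the intended prefix.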
| 34.103187 | 156 | 0.565587 | 5,624 | 44,948 | 4.386913 | 0.053165 | 0.052205 | 0.020428 | 0.027237 | 0.871352 | 0.847033 | 0.824781 | 0.803421 | 0.773468 | 0.730828 | 0 | 0.00843 | 0.337612 | 44,948 | 1,317 | 157 | 34.129081 | 0.82024 | 0.373098 | 0 | 0.677966 | 0 | 0.00678 | 0.096838 | 0.02093 | 0 | 0 | 0 | 0 | 0 | 1 | 0.128814 | false | 0 | 0.00339 | 0.00678 | 0.328814 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6387f42788887f5f7cf1146f81400562be38476f | 36 | py | Python | nn/optim/__init__.py | yuzuka4573/Dialog | 8485e916458efcd21a414c4ecb09b79b2f439b99 | [
"MIT"
] | 61 | 2019-10-24T00:11:31.000Z | 2022-02-27T15:21:40.000Z | nn/optim/__init__.py | yuzuka4573/Dialog | 8485e916458efcd21a414c4ecb09b79b2f439b99 | [
"MIT"
] | 10 | 2019-10-23T07:03:02.000Z | 2021-07-29T06:54:27.000Z | nn/optim/__init__.py | yuzuka4573/Dialog | 8485e916458efcd21a414c4ecb09b79b2f439b99 | [
"MIT"
] | 27 | 2019-11-27T22:18:47.000Z | 2022-03-01T15:35:37.000Z | from .optimizer import get_optimizer | 36 | 36 | 0.888889 | 5 | 36 | 6.2 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 36 | 1 | 36 | 36 | 0.939394 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
63a08788e1d0373d7bac7c7196fb8b10389dd3b9 | 79 | py | Python | decrypt.py | nvakhilnair/Image-Encryption-and-Decryption-using-RSA | 6a033aac703acd4d9f3f2f32faa2c79ad3c31600 | [
"MIT"
] | null | null | null | decrypt.py | nvakhilnair/Image-Encryption-and-Decryption-using-RSA | 6a033aac703acd4d9f3f2f32faa2c79ad3c31600 | [
"MIT"
] | null | null | null | decrypt.py | nvakhilnair/Image-Encryption-and-Decryption-using-RSA | 6a033aac703acd4d9f3f2f32faa2c79ad3c31600 | [
"MIT"
] | null | null | null | def decrpytion(cipher,d,N):
decipher = pow(cipher,d)%N
return decipher
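A quick round-trip with textbook toy values shows the decryption step recovering a message. The key values below are the standard small RSA example (p = 61, q = 53), not anything from this repo, and the function is restated here so the sketch is self-contained.

```python
def decrpytion(cipher, d, N):
    # RSA decryption: cipher^d mod N
    return pow(cipher, d, N)

# toy RSA key: p = 61, q = 53 -> N = 3233, phi = 3120, e = 17, d = 2753
N, e, d = 3233, 17, 2753
message = 65
cipher = pow(message, e, N)              # encryption is the same operation with e
print(decrpytion(cipher, d, N))          # 65
```

Because e*d = 46801 = 15*3120 + 1, raising to e and then d is the identity modulo N.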
| 19.75 | 30 | 0.683544 | 12 | 79 | 4.5 | 0.666667 | 0.259259 | 0.296296 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.189873 | 79 | 3 | 31 | 26.333333 | 0.84375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
63c0990f4b4f9e34b00187e774a2572530841a08 | 47 | py | Python | GAM_website/payments/__init__.py | easherma/GAM_website | 1bd9ff97653cdbe54c72d0a177a184106a0de8de | [
"BSD-3-Clause"
] | 1 | 2017-04-27T08:56:51.000Z | 2017-04-27T08:56:51.000Z | GAM_website/payments/__init__.py | greenagain/GAM_website | 13778fe2cd7e0bea4b08c24c08e4fde085475193 | [
"BSD-3-Clause"
] | null | null | null | GAM_website/payments/__init__.py | greenagain/GAM_website | 13778fe2cd7e0bea4b08c24c08e4fde085475193 | [
"BSD-3-Clause"
] | 1 | 2018-03-25T19:35:07.000Z | 2018-03-25T19:35:07.000Z | """The payments module."""
from . import views
| 15.666667 | 26 | 0.680851 | 6 | 47 | 5.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148936 | 47 | 2 | 27 | 23.5 | 0.8 | 0.425532 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
63cc0de45bd713cb52c3391af1c622df0c9dcd94 | 38 | py | Python | python/hpctl/hpctl/__init__.py | domyounglee/baseline | 2261abfb7e770cc6f3d63a7f6e0015238d0e11f8 | [
"Apache-2.0"
] | null | null | null | python/hpctl/hpctl/__init__.py | domyounglee/baseline | 2261abfb7e770cc6f3d63a7f6e0015238d0e11f8 | [
"Apache-2.0"
] | null | null | null | python/hpctl/hpctl/__init__.py | domyounglee/baseline | 2261abfb7e770cc6f3d63a7f6e0015238d0e11f8 | [
"Apache-2.0"
] | 3 | 2019-05-27T04:52:21.000Z | 2022-02-15T00:22:53.000Z | from hpctl.version import __version__
| 19 | 37 | 0.868421 | 5 | 38 | 5.8 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 38 | 1 | 38 | 38 | 0.852941 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
892a20a5502365e9d1078797ce31733dfe753de9 | 28 | py | Python | helloworld.py | nehasingh112/hands-on-transfer-learning-with-python | 55a019514644e814edf0c8a9d89d9bdf04f36431 | [
"Apache-2.0"
] | null | null | null | helloworld.py | nehasingh112/hands-on-transfer-learning-with-python | 55a019514644e814edf0c8a9d89d9bdf04f36431 | [
"Apache-2.0"
] | null | null | null | helloworld.py | nehasingh112/hands-on-transfer-learning-with-python | 55a019514644e814edf0c8a9d89d9bdf04f36431 | [
"Apache-2.0"
] | null | null | null | print("hello world, Neha!")
| 14 | 27 | 0.678571 | 4 | 28 | 4.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107143 | 28 | 1 | 28 | 28 | 0.76 | 0 | 0 | 0 | 0 | 0 | 0.642857 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
894240eb83b01c2e1df583943a762c1719ce58b9 | 23 | py | Python | __init__.py | Meg-archie/lda2vec | 34790061b6768dd315e0cd4fd228edb148493666 | [
"MIT"
] | null | null | null | __init__.py | Meg-archie/lda2vec | 34790061b6768dd315e0cd4fd228edb148493666 | [
"MIT"
] | null | null | null | __init__.py | Meg-archie/lda2vec | 34790061b6768dd315e0cd4fd228edb148493666 | [
"MIT"
] | null | null | null | from . import examples
| 11.5 | 22 | 0.782609 | 3 | 23 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173913 | 23 | 1 | 23 | 23 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
896bc9ede364d00213db25c09ab4f364fb2ab37e | 80 | py | Python | python/ray/rllib/RL/DeepADFQ/baselines0/bench/__init__.py | christopher-hsu/ray | abe84b596253411607a91b3a44c135f5e9ac6ac7 | [
"Apache-2.0"
] | 1 | 2019-07-08T15:29:25.000Z | 2019-07-08T15:29:25.000Z | python/ray/rllib/RL/DeepADFQ/baselines0/bench/__init__.py | christopher-hsu/ray | abe84b596253411607a91b3a44c135f5e9ac6ac7 | [
"Apache-2.0"
] | null | null | null | python/ray/rllib/RL/DeepADFQ/baselines0/bench/__init__.py | christopher-hsu/ray | abe84b596253411607a91b3a44c135f5e9ac6ac7 | [
"Apache-2.0"
] | null | null | null | from baselines0.bench.benchmarks import *
from baselines0.bench.monitor import * | 40 | 41 | 0.8375 | 10 | 80 | 6.7 | 0.6 | 0.41791 | 0.567164 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027397 | 0.0875 | 80 | 2 | 42 | 40 | 0.890411 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
896e820d6e2a2c5922ecdad5a974249399d82836 | 93 | py | Python | main.py | Codegass/Gentonic | bec77a8cb262e481fc26646d5ab6e8e8bbde38e5 | [
"MIT"
] | null | null | null | main.py | Codegass/Gentonic | bec77a8cb262e481fc26646d5ab6e8e8bbde38e5 | [
"MIT"
] | null | null | null | main.py | Codegass/Gentonic | bec77a8cb262e481fc26646d5ab6e8e8bbde38e5 | [
"MIT"
] | null | null | null | import tensorflow as tf
print(tf.__version__)
print(tf.config.list_physical_devices('GPU')) | 18.6 | 45 | 0.806452 | 14 | 93 | 4.928571 | 0.785714 | 0.202899 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075269 | 93 | 5 | 45 | 18.6 | 0.802326 | 0 | 0 | 0 | 0 | 0 | 0.031915 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0.666667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
989ce500fcd3caef25f9662f106449722c0c1515 | 5,530 | py | Python | tests/test_default_values.py | pjaytycy/improcflow | 4f9e40432436221690573b863c5fd8ab49bd9ac5 | [
"MIT"
] | 1 | 2021-06-22T07:39:12.000Z | 2021-06-22T07:39:12.000Z | tests/test_default_values.py | pjaytycy/improcflow | 4f9e40432436221690573b863c5fd8ab49bd9ac5 | [
"MIT"
] | 1 | 2018-02-08T20:50:53.000Z | 2018-02-25T14:23:56.000Z | tests/test_default_values.py | pjaytycy/improcflow | 4f9e40432436221690573b863c5fd8ab49bd9ac5 | [
"MIT"
] | null | null | null | import unittest
from django.test import TestCase
from improcflow.logic import *
class DefaultValueTests(TestCase):
def test_allow_default_None(self):
class MockElement(Element):
class_name = "test_allow_default_None_mock"
def __init__(self, title = None, element_model = None):
super(MockElement, self).__init__(title = title, element_model = element_model)
self.src = self.add_input_connector(title = "src")
self.mock = self.add_input_connector(title = "mock", default_value = None)
self.dst = self.add_output_connector(title = "dst")
def set_mock_value(self, src):
self.mock.set_value(src)
self.flow.invalidate_chain(self.dst)
def run(self, debug = False):
if self.mock.value is None:
self.dst.set_value(self.src.value)
else:
self.dst.set_value(self.src.value * 4)
register_element_type(MockElement)
element_input = InputData(title = "element_input")
element_input.set_value([[1, 2, 3], [4, 5, 6]])
element_mean = OpenCVMean(title = "element_mean")
element_mock = MockElement(title = "element_mock")
element_output = OutputData(title = "element_output")
flow = Flow()
flow.add_element(element_input)
flow.add_element(element_mean)
flow.add_element(element_mock)
flow.add_element(element_output)
flow.connect(element_input.data, element_mean.src, title = "data_connection_1")
flow.connect(element_mean.mean, element_mock.src, title = "data_connection_2")
flow.connect(element_mock.dst, element_output.data, title = "data_connection_3")
flow.run()
self.assertEqual(3.5, element_output.result())
element_mock.set_mock_value("blah")
flow.run()
self.assertEqual(14.0, element_output.result())
def test_disconnect_input_connector_without_default_value(self):
element_input = InputData(title = "element_input")
element_input.set_value([[1, 2, 3], [4, 5, 6]])
element_mean = OpenCVMean(title = "element_mean")
element_output = OutputData(title = "element_output")
flow = Flow()
flow.add_element(element_input)
flow.add_element(element_mean)
flow.add_element(element_output)
connection_data_1 = flow.connect(element_input.data, element_mean.src, title = "data_connection_1")
connection_data_2 = flow.connect(element_mean.mean, element_output.data, title = "data_connection_2")
flow.run()
self.assertEqual(3.5, element_output.result())
flow.disconnect(connection_data_1)
self.assertIsNone(element_output.result())
self.assertEqual(False, element_mean.is_ready())
def test_multiple_connect_disconnect_scenario(self):
class MockElement(Element):
class_name = "test_allow_default_None_mock"
def __init__(self, title = None, element_model = None):
super(MockElement, self).__init__(title = title, element_model = element_model)
self.src = self.add_input_connector(title = "src")
self.mock = self.add_input_connector(title = "mock", default_value = None)
self.dst = self.add_output_connector(title = "dst")
def set_mock_value(self, src):
self.mock.set_value(src)
self.flow.invalidate_chain(self.dst)
def run(self, debug = False):
if self.mock.value is None:
self.dst.set_value(self.src.value)
else:
self.dst.set_value(self.src.value * 4)
register_element_type(MockElement)
element_input = InputData(title = "element_input")
element_input.set_value([[1, 2, 3], [4, 5, 6]])
element_input2 = InputData(title = "element_input_2")
element_input2.set_value("blah")
element_mean = OpenCVMean(title = "element_mean")
element_mock = MockElement(title = "element_mock")
element_output = OutputData(title = "element_output")
flow = Flow()
flow.add_element(element_input)
flow.add_element(element_input2)
flow.add_element(element_mean)
flow.add_element(element_mock)
flow.add_element(element_output)
connection_1 = flow.connect(element_input.data, element_mean.src, title = "data_connection_1")
connection_2 = flow.connect(element_mean.mean, element_mock.src, title = "data_connection_2")
connection_3 = flow.connect(element_mock.dst, element_output.data, title = "data_connection_3")
# 1) run with element_mock.mock not connected, ie: default = None
flow.run()
self.assertEqual(3.5, element_output.result())
# 2) connect element_mock.mock to a value != None; this should invalidate everything, then rerun it.
connection_4 = flow.connect(element_input2.data, element_mock.mock, title = "mock_connection")
self.assertIsNone(element_output.result())
flow.run()
self.assertEqual(14, element_output.result())
# 3) disconnecting should invalidate everything
flow.disconnect(connection_4)
self.assertIsNone(element_output.result())
flow.run()
self.assertEqual(3.5, element_output.result())
# 4) connect element_mock.mock to a value = None; this could leave everything valid or invalidate everything. It is not so important.
element_input2.set_value(None)
connection_5 = flow.connect(element_input2.data, element_mock.mock, title = "mock_connection")
flow.run()
# 5) disconnecting should leave everything valid
flow.disconnect(connection_5)
self.assertEqual(3.5, element_output.result())
| 38.671329 | 137 | 0.699096 | 718 | 5,530 | 5.101671 | 0.119777 | 0.078078 | 0.045864 | 0.068796 | 0.790882 | 0.773956 | 0.758941 | 0.739558 | 0.739558 | 0.646738 | 0 | 0.014558 | 0.192586 | 5,530 | 142 | 138 | 38.943662 | 0.805823 | 0.069982 | 0 | 0.728155 | 0 | 0 | 0.07905 | 0.010903 | 0 | 0 | 0 | 0 | 0.106796 | 1 | 0.087379 | false | 0 | 0.029126 | 0 | 0.145631 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7f56ce2769a767eb6ece7b01d78fa7c89d394702 | 8,053 | py | Python | moves.py | bhaktanishant/PyChess | 194cee852f0cc455511dbb1537331ca0b367387b | [
"Apache-2.0"
] | null | null | null | moves.py | bhaktanishant/PyChess | 194cee852f0cc455511dbb1537331ca0b367387b | [
"Apache-2.0"
] | null | null | null | moves.py | bhaktanishant/PyChess | 194cee852f0cc455511dbb1537331ca0b367387b | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python3
from param import parameter
class Moves:
def __init__(self, block, blocks):
self.block = block
self.blocks = blocks
self.param = parameter()
def canRoamTo(self):
position = self.block.getPosition()
if self.block.getKey() == self.param.BLACK_PAWN:
return self.blackPawnMove(position)
elif self.block.getKey() == self.param.WHITE_PAWN:
return self.whitePawnMove(position)
elif self.block.getKey() == self.param.BLACK_ELEPHANT or self.block.getKey() == self.param.WHITE_ELEPHANT:
return self.elephantMove(position)
elif self.block.getKey() == self.param.BLACK_HORSE or self.block.getKey() == self.param.WHITE_HORSE:
return self.horseMove(position)
elif self.block.getKey() == self.param.BLACK_CAMEL or self.block.getKey() == self.param.WHITE_CAMEL:
return self.camelMoves(position)
elif self.block.getKey() == self.param.BLACK_QUEEN or self.block.getKey() == self.param.WHITE_QUEEN:
return self.queenMove(position)
elif self.block.getKey() == self.param.BLACK_KING or self.block.getKey() == self.param.WHITE_KING:
return self.kingMove(position)
def blackPawnMove(self, position):
maxMoves = []
if self.block.isFirstTurn():
checkNextBlock = True
for i in range(2):
if checkNextBlock:
result = self.blockAvailable((position[0] +i+1, position[1]))
canOccupy = result[0]
checkNextBlock = result[1]
if canOccupy:
maxMoves.append((position[0] +i+1, position[1]))
elif self.blockAvailable((position[0] +1, position[1]))[0]:
maxMoves.append((position[0] +1, position[1]))
return maxMoves
def whitePawnMove(self, position):
maxMoves = []
if self.block.isFirstTurn():
checkNextBlock = True
for i in range(2):
if checkNextBlock:
result = self.blockAvailable((position[0] -i-1, position[1]))
canOccupy = result[0]
checkNextBlock = result[1]
if canOccupy:
maxMoves.append((position[0] -i-1, position[1]))
elif self.blockAvailable((position[0] -1, position[1]))[0]:
maxMoves.append((position[0] -1, position[1]))
return maxMoves
def elephantMove(self, position):
maxMoves = []
i = 1
checkNextBlock = True
while position[1] +i < 8:
if checkNextBlock:
result = self.blockAvailable((position[0], position[1] + i))
canOccupy = result[0]
checkNextBlock = result[1]
if canOccupy:
maxMoves.append((position[0], position[1] + i))
i = i + 1
i = 1
checkNextBlock = True
while position[0] +i < 8:
if checkNextBlock:
result = self.blockAvailable((position[0] + i, position[1]))
canOccupy = result[0]
checkNextBlock = result[1]
if canOccupy:
maxMoves.append((position[0] + i, position[1]))
i = i + 1
i = 1
checkNextBlock = True
while position[1] -i >= 0:
if checkNextBlock:
result = self.blockAvailable((position[0], position[1] - i))
canOccupy = result[0]
checkNextBlock = result[1]
if canOccupy:
maxMoves.append((position[0], position[1] - i))
i = i + 1
i = 1
checkNextBlock = True
while position[0] -i >= 0:
if checkNextBlock:
result = self.blockAvailable((position[0] - i, position[1]))
canOccupy = result[0]
checkNextBlock = result[1]
if canOccupy:
maxMoves.append((position[0] - i, position[1]))
i = i + 1
return maxMoves
def horseMove(self, position):
maxMoves = []
if position[0] +2 >= 0 and position[1] +1 >= 0 and position[0] +2 < 8 and position[1] +1 < 8:
if self.blockAvailable((position[0] +2, position[1] +1))[0]:
maxMoves.append((position[0] +2, position[1] +1))
if position[0] +2 >= 0 and position[1] -1 >= 0 and position[0] +2 < 8 and position[1] -1 < 8:
if self.blockAvailable((position[0] +2, position[1] -1))[0]:
maxMoves.append((position[0] +2, position[1] -1))
if position[0] -2 >= 0 and position[1] +1 >= 0 and position[0] -2 < 8 and position[1] +1 < 8:
if self.blockAvailable((position[0] -2, position[1] +1))[0]:
maxMoves.append((position[0] -2, position[1] +1))
if position[0] -2 >= 0 and position[1] -1 >= 0 and position[0] -2 < 8 and position[1] -1 < 8:
if self.blockAvailable((position[0] -2, position[1] -1))[0]:
maxMoves.append((position[0] -2, position[1] -1))
if position[0] +1 >= 0 and position[1] +2 >= 0 and position[0] +1 < 8 and position[1] +2 < 8:
if self.blockAvailable((position[0] +1, position[1] +2))[0]:
maxMoves.append((position[0] +1, position[1] +2))
if position[0] -1 >= 0 and position[1] +2 >= 0 and position[0] -1 < 8 and position[1] +2 < 8:
if self.blockAvailable((position[0] -1, position[1] +2))[0]:
maxMoves.append((position[0] -1, position[1] +2))
if position[0] +1 >= 0 and position[1] -2 >= 0 and position[0] +1 < 8 and position[1] -2 < 8:
if self.blockAvailable((position[0] +1, position[1] -2))[0]:
maxMoves.append((position[0] +1, position[1] -2))
if position[0] -1 >= 0 and position[1] -2 >= 0 and position[0] -1 < 8 and position[1] -2 < 8:
if self.blockAvailable((position[0] -1, position[1] -2))[0]:
maxMoves.append((position[0] -1, position[1] -2))
return maxMoves
def camelMoves(self, position):
maxMoves = []
i = 1
checkNextBlock = True
while position[0] +i < 8 and position[1] +i < 8:
if checkNextBlock:
result = self.blockAvailable((position[0] +i, position[1] +i))
canOccupy = result[0]
checkNextBlock = result[1]
if canOccupy:
maxMoves.append((position[0] +i, position[1] +i))
i = i +1
i = 1
checkNextBlock = True
while position[0] -i >= 0 and position[1] -i >= 0:
if checkNextBlock:
result = self.blockAvailable((position[0] -i, position[1] -i))
canOccupy = result[0]
checkNextBlock = result[1]
if canOccupy:
maxMoves.append((position[0] -i, position[1] -i))
i = i +1
i = 1
checkNextBlock = True
while position[0] +i < 8 and position[1] -i >= 0:
if checkNextBlock:
result = self.blockAvailable((position[0] +i, position[1] -i))
canOccupy = result[0]
checkNextBlock = result[1]
if canOccupy:
maxMoves.append((position[0] +i, position[1] -i))
i = i +1
i = 1
checkNextBlock = True
while position[0] -i >= 0 and position[1] +i < 8:
if checkNextBlock:
result = self.blockAvailable((position[0] -i, position[1] +i))
canOccupy = result[0]
checkNextBlock = result[1]
if canOccupy:
maxMoves.append((position[0] -i, position[1] +i))
i = i +1
return maxMoves
    def queenMove(self, position):
        # the queen combines rook-style (elephantMove) and bishop-style (camelMoves)
        # sliding movement; the original code had the queen and king bodies swapped
        return self.elephantMove(position) + self.camelMoves(position)
    def kingMove(self, position):
        # the king moves a single square in any of the eight directions
        maxMoves = []
        if position[0] +1 < 8:
            if self.blockAvailable((position[0] +1, position[1]))[0]:
                maxMoves.append((position[0] +1, position[1]))
        if position[0] -1 >= 0:
            if self.blockAvailable((position[0] -1, position[1]))[0]:
                maxMoves.append((position[0] -1, position[1]))
        if position[1] +1 < 8:
            if self.blockAvailable((position[0], position[1] +1))[0]:
                maxMoves.append((position[0], position[1] +1))
        if position[1] -1 >= 0:
            if self.blockAvailable((position[0], position[1] -1))[0]:
                maxMoves.append((position[0], position[1] -1))
        if position[0] +1 < 8 and position[1] +1 < 8:
            if self.blockAvailable((position[0] +1, position[1] +1))[0]:
                maxMoves.append((position[0] +1, position[1] +1))
        if position[0] -1 >= 0 and position[1] -1 >= 0:
            if self.blockAvailable((position[0] -1, position[1] -1))[0]:
                maxMoves.append((position[0] -1, position[1] -1))
        if position[0] +1 < 8 and position[1] -1 >= 0:
            if self.blockAvailable((position[0] +1, position[1] -1))[0]:
                maxMoves.append((position[0] +1, position[1] -1))
        if position[0] -1 >= 0 and position[1] +1 < 8:
            if self.blockAvailable((position[0] -1, position[1] +1))[0]:
                maxMoves.append((position[0] -1, position[1] +1))
        return maxMoves
def blockAvailable(self, position):
block = self.blocks[position]
if block.haveOccupied():
if block.getColor() == self.block.getColor():
return (False, False)
else:
return (True, False)
else:
return (True, True)
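The bounds checks in horseMove above follow the usual knight-move pattern; a standalone sketch on an empty board (ignoring occupancy, which blockAvailable handles in the class) makes the eight offsets explicit. `knight_moves` is a hypothetical helper, not part of the Moves class.

```python
KNIGHT_OFFSETS = [(2, 1), (2, -1), (-2, 1), (-2, -1),
                  (1, 2), (-1, 2), (1, -2), (-1, -2)]

def knight_moves(position):
    # all destination squares on an 8x8 board reachable by a knight
    row, col = position
    return [(row + dr, col + dc)
            for dr, dc in KNIGHT_OFFSETS
            if 0 <= row + dr < 8 and 0 <= col + dc < 8]

print(len(knight_moves((0, 0))))  # 2 moves from a corner
print(len(knight_moves((4, 4))))  # 8 moves from the center
```

Expressing the bounds once in a comprehension avoids the eight hand-written if-blocks, at the cost of matching the rest of the class less literally.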
| 37.455814 | 108 | 0.642121 | 1,168 | 8,053 | 4.413527 | 0.05137 | 0.1484 | 0.075655 | 0.146654 | 0.862076 | 0.862076 | 0.838991 | 0.80194 | 0.762173 | 0.744132 | 0 | 0.059571 | 0.189122 | 8,053 | 214 | 109 | 37.630841 | 0.729862 | 0.002608 | 0 | 0.447761 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.049751 | false | 0 | 0.004975 | 0.004975 | 0.144279 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f6b3d739842a465d1fce48e7f1c67248cd290bfe | 2,954 | py | Python | mat2img.py | mksarker/data_preprocessing | dabdb7f3dbf1c4bf5ee49a39aef2cb258539b027 | [
"MIT"
] | null | null | null | mat2img.py | mksarker/data_preprocessing | dabdb7f3dbf1c4bf5ee49a39aef2cb258539b027 | [
"MIT"
] | null | null | null | mat2img.py | mksarker/data_preprocessing | dabdb7f3dbf1c4bf5ee49a39aef2cb258539b027 | [
"MIT"
] | null | null | null | from scipy.io import loadmat
import cv2
import numpy as np
# from skimage.io import imread, imsave
from scipy.misc import imsave, imread
import os
import png
from PIL import Image

root_dir = 'D:/PathLake/HoverNet/data/CoNSeP/Train/Labels/'
dest_dir = 'D:/PathLake/HoverNet/my_data/my_ConSep/train/lbl_map/'

for file in os.listdir(root_dir):
    if file.endswith('mat'):
        main_part = file.split('.mat')[0]
        mat_file = loadmat(root_dir + main_part + '.mat')
        # mat keys: 'inst_map', 'type_map', 'inst_type', 'inst_centroid'
        im = mat_file['inst_map']
        unq = np.unique(im)
        unq = unq[1:]  # drop the background label (0)
        # im = cv2.resize(im, (512, 512))
        new_mask = 0 * im
        for i in range(unq.shape[0]):
            inds = np.where(im == unq[i])
            new_mask[inds] = unq[i]
        # png.from_array(new_mask[:], 'L').save(dest_dir + main_part + ".png")
        binary_transform = np.array(new_mask).astype(np.uint8)
        img = Image.fromarray(binary_transform, 'P')
        img.save(dest_dir + main_part + ".png")
        pp = imread(dest_dir + main_part + ".png")
        unq = np.unique(pp)  # sanity check: labels after the PNG round-trip
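The relabeling loop above copies each non-background instance label into a fresh mask, one label at a time. Under the assumption that every non-background label is kept (which is what the loop does), the same result can be sketched with a single vectorized `np.isin` mask:

```python
import numpy as np

# Tiny hypothetical instance map standing in for mat_file['inst_map'].
inst_map = np.array([[0, 1, 1],
                     [0, 2, 3],
                     [3, 3, 0]])

labels = np.unique(inst_map)
labels = labels[1:]  # drop the background label (0), as the script does

# One boolean mask instead of a per-label loop.
keep = np.isin(inst_map, labels)
new_mask = np.zeros_like(inst_map)
new_mask[keep] = inst_map[keep]

# Keeping all non-background labels reproduces the input, just as the loop does.
assert np.array_equal(new_mask, inst_map)
```

The loop form only matters if some labels were filtered out of `unq` before the copy; with the full label list the two are equivalent.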
# Commented-out variant of the same relabeling loop for .npy inputs:
#
# root_dir = 'D:/PathLake/HoverNet/my_data/CoNSeP/train/512x512_256x256/'
# dest_dir = 'D:/PathLake/HoverNet/my_data/my_ConSep/train/512x512_256x256/'
# for file in os.listdir(root_dir):
#     if file.endswith('npy'):
#         main_part = file.split('.npy')[0]
#         mat_file = np.load(root_dir + main_part + '.npy')
#         im = mat_file[-3:-1:]
#         unq = np.unique(im)
#         unq = unq[1:]
#         im = cv2.resize(im, (512, 512))
#         new_mask = 0 * im
#         for i in range(unq.shape[0]):
#             inds = np.where(im == unq[i])
#             new_mask[inds] = unq[i]
#         binary_transform = np.array(new_mask).astype(np.uint8)
#         img = Image.fromarray(binary_transform, 'P')
#         img.save(dest_dir + main_part + ".png")
#         pp = imread(dest_dir + main_part + ".png")
#         unq = np.unique(pp)
# File: varsom_landslide_client/models/__init__.py | repo: NVE/python-varsom-landslide-client | license: MIT
# coding: utf-8
# flake8: noqa
"""
Jordskredvarsel API
No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen) # noqa: E501
OpenAPI spec version: v1.0.6
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
# import models into model package
from varsom_landslide_client.models.alert import Alert
from varsom_landslide_client.models.alert_info import AlertInfo
from varsom_landslide_client.models.alert_info_area import AlertInfoArea
from varsom_landslide_client.models.alert_info_area_geocode import AlertInfoAreaGeocode
from varsom_landslide_client.models.alert_info_event_code import AlertInfoEventCode
from varsom_landslide_client.models.alert_info_parameter import AlertInfoParameter
from varsom_landslide_client.models.alert_info_resource import AlertInfoResource
from varsom_landslide_client.models.cause import Cause
from varsom_landslide_client.models.code_page_data_item import CodePageDataItem
from varsom_landslide_client.models.county import County
from varsom_landslide_client.models.decoder_fallback import DecoderFallback
from varsom_landslide_client.models.encoder_fallback import EncoderFallback
from varsom_landslide_client.models.encoding import Encoding
from varsom_landslide_client.models.formatted_content_result_alert import FormattedContentResultAlert
from varsom_landslide_client.models.formatted_content_result_list_alert import FormattedContentResultListAlert
from varsom_landslide_client.models.i_required_member_selector import IRequiredMemberSelector
from varsom_landslide_client.models.media_type_formatter import MediaTypeFormatter
from varsom_landslide_client.models.media_type_header_value import MediaTypeHeaderValue
from varsom_landslide_client.models.media_type_mapping import MediaTypeMapping
from varsom_landslide_client.models.micro_blog_post import MicroBlogPost
from varsom_landslide_client.models.municipality import Municipality
from varsom_landslide_client.models.name_value_header_value import NameValueHeaderValue
from varsom_landslide_client.models.station import Station
from varsom_landslide_client.models.warning import Warning
# File: api_ml/models/__init__.py | repo: YaYaB/api_pytorch | license: Apache-2.0
import api_ml.models.alexnet
import api_ml.models.densenet
import api_ml.models.googlenet
import api_ml.models.inception_v3
import api_ml.models.mnasnet
import api_ml.models.mobilenet_v2
import api_ml.models.resnet
import api_ml.models.shufflenet
import api_ml.models.squeezenet
import api_ml.models.vgg
# File: pingo/rpi/__init__.py | repo: pingo-io/pingo-py | license: MIT
from rpi import RaspberryPi  # noqa
from rpi import RaspberryPiBPlus # noqa
from rpi import RaspberryPi2B # noqa
from grove import GrovePi # noqa
# File: thing/views/__init__.py | repo: skyride/evething-2 | license: BSD-2-Clause
# flake8: noqa
from thing.views.home import *
from thing.views.account import *
from thing.views.assets import *
from thing.views.blueprints import *
from thing.views.character import *
from thing.views.clones import *
from thing.views.contracts import *
from thing.views.events import *
from thing.views.industry import *
from thing.views.mail import *
from thing.views.orders import *
from thing.views.trade import *
from thing.views.transactions import *
from thing.views.wallet_journal import *
from thing.views.pi import *
# File: lib/test_rsi.py | repo: mrotke/pyStock | license: MIT
import pytest
from lib.rsi import *

def test_rsi():
    assert(True == True)
# File: ALC/utils/__init__.py | repo: SimiPixel/automatic_label_correction | license: MIT
from .falsify import falsify
from .onehot import OneHot
from .kfold import kfold
from .convert_labels import convert_labels
# File: bankinator/bank/base_bank.py | repo: GTmmiller/bankinator | license: MIT
import abc

class BankBase:
    __metaclass__ = abc.ABCMeta

    def __init__(self):
        pass

    @abc.abstractmethod
    def authenticate(self, username, password):
        return

    @abc.abstractmethod
    def navigate(self, homepage):
        return

    @abc.abstractmethod
    def parse(self, account, account_text):
        return
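`BankBase` only declares the three operations a bank scraper must provide. A minimal concrete subclass might look like the sketch below; `DemoBank` and its behaviors are hypothetical, not part of the repo, and note that the Python 2 style `__metaclass__` attribute does not actually enforce abstractness under Python 3:

```python
import abc


class BankBase:
    __metaclass__ = abc.ABCMeta  # Python 2 style; inert under Python 3

    @abc.abstractmethod
    def authenticate(self, username, password):
        return

    @abc.abstractmethod
    def navigate(self, homepage):
        return

    @abc.abstractmethod
    def parse(self, account, account_text):
        return


class DemoBank(BankBase):
    """Hypothetical subclass, for illustration only."""

    def authenticate(self, username, password):
        return username == "demo" and password == "demo"

    def navigate(self, homepage):
        return homepage + "/accounts"

    def parse(self, account, account_text):
        return {account: account_text.strip()}
```

Under Python 3, enforcing the abstract methods would require `class BankBase(metaclass=abc.ABCMeta)` instead of the `__metaclass__` attribute.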
# File: src/scrapy_folder_tree/trees/date.py | repo: sp1thas/scrapy-folder-tree | license: MIT
import datetime
import os

from . import TreeBase


class DateTree(TreeBase):
    def __init__(self, format: str) -> None:
        self.FORMAT = format

    def build_path(self, filepath) -> str:
        filename = os.path.basename(filepath)
        dir_name = os.path.dirname(filepath)
        return os.path.join(
            dir_name, datetime.date.today().strftime(self.FORMAT), filename
        )


class TimeTree(TreeBase):
    def __init__(self, format: str) -> None:
        self.FORMAT = format

    def build_path(self, filepath) -> str:
        filename = os.path.basename(filepath)
        dir_name = os.path.dirname(filepath)
        return os.path.join(
            dir_name, datetime.datetime.now().strftime(self.FORMAT), filename
        )
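Both trees splice a formatted timestamp between a file's directory and its name. A standalone sketch of `DateTree`'s behavior, with a stub `TreeBase` standing in for the package's real base class:

```python
import datetime
import os


class TreeBase:
    """Stub standing in for scrapy_folder_tree's base class."""


class DateTree(TreeBase):
    def __init__(self, format: str) -> None:
        self.FORMAT = format

    def build_path(self, filepath) -> str:
        filename = os.path.basename(filepath)
        dir_name = os.path.dirname(filepath)
        return os.path.join(
            dir_name, datetime.date.today().strftime(self.FORMAT), filename
        )


tree = DateTree("%Y/%m/%d")
path = tree.build_path("downloads/full/image.jpg")
# e.g. downloads/full/2024/01/31/image.jpg; the middle part depends on today.
```

`TimeTree` differs only in using `datetime.datetime.now()`, so the same path also carries hour/minute/second components when the format string asks for them.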
# File: models/__init__.py | repo: ceroo1005/DATL | license: MIT
from .LoopNet import LoopNet_DANN, EDL_loss
__all__ = ['LoopNet_DANN', 'EDL_loss']
# File: tests/test_os_updates.py | repo: kitcareplanner/PenguinDome | license: Apache-2.0
import os
import tempfile
import time
from unittest import mock

import pytest

from client.plugins import os_updates


def test_ubuntu_checker_no_do_release_upgrade(fake_process):
    def raise_oserror(process):
        raise OSError()

    fake_process.register_subprocess(('do-release-upgrade', '-c'),
                                     callback=raise_oserror)
    ret = os_updates.ubuntu_checker()
    assert ret is None


def test_ubuntu_checker_release_update_available(fake_process):
    fake_process.register_subprocess(('do-release-upgrade', '-c'))
    fake_process.register_subprocess('/usr/lib/update-notifier/apt-check',
                                     stdout=("0;0",))
    ret = os_updates.ubuntu_checker()
    assert ret['release'] is True


def test_ubuntu_checker_no_release_update_available(fake_process):
    fake_process.register_subprocess(('do-release-upgrade', '-c'),
                                     returncode=1)
    fake_process.register_subprocess('/usr/lib/update-notifier/apt-check',
                                     stdout=("0;0",))
    ret = os_updates.ubuntu_checker()
    assert ret['release'] is False


@pytest.fixture
def os_stat():
    # May be relevant later: currently only mocks the first time each path is
    # stat'd.
    orig_os_stat = os.stat
    file_mappings = {}

    def add_mapping(original, replacement):
        file_mappings[original] = replacement

    def my_os_stat(*args, **kwargs):
        if args[0] in file_mappings:
            args = (file_mappings.pop(args[0]),) + args[1:]
        return orig_os_stat(*args, **kwargs)

    with mock.patch('os.stat', my_os_stat):
        yield add_mapping
    if file_mappings:
        raise NotImplementedError('Orphaned os.stat calls: ' +
                                  ', '.join(file_mappings.keys()))


def test_ubuntu_checker_current(fake_process, os_stat):
    fake_process.register_subprocess(('do-release-upgrade', '-c'))
    fake_process.register_subprocess('/usr/lib/update-notifier/apt-check',
                                     stdout=("0;0",))
    with tempfile.NamedTemporaryFile() as f:
        os_stat('/var/lib/apt/periodic/update-success-stamp', f.name)
        ret = os_updates.ubuntu_checker()
    assert ret['current'] is True


def test_ubuntu_checker_not_current_not_found(fake_process, os_stat):
    fake_process.register_subprocess(('do-release-upgrade', '-c'))
    fake_process.register_subprocess('/usr/lib/update-notifier/apt-check',
                                     stdout=("0;0",))
    with tempfile.NamedTemporaryFile(delete=False) as f:
        os_stat('/var/lib/apt/periodic/update-success-stamp', f.name)
        os.unlink(f.name)
        ret = os_updates.ubuntu_checker()
    assert ret['current'] is False


def test_ubuntu_checker_not_current_old(fake_process, os_stat):
    fake_process.register_subprocess(('do-release-upgrade', '-c'))
    fake_process.register_subprocess('/usr/lib/update-notifier/apt-check',
                                     stdout=("0;0",))
    with tempfile.NamedTemporaryFile() as f:
        os_stat('/var/lib/apt/periodic/update-success-stamp', f.name)
        t = time.time() - 60 * 60 * 24 * 3
        os.utime(f.name, (t, t))
        ret = os_updates.ubuntu_checker()
    assert ret['current'] is False


def test_ubuntu_checker_no_apt_check(fake_process):
    def raise_oserror(process):
        raise OSError()

    fake_process.register_subprocess(('do-release-upgrade', '-c'))
    fake_process.register_subprocess('/usr/lib/update-notifier/apt-check',
                                     stdout=("0;0",),
                                     callback=raise_oserror)
    ret = os_updates.ubuntu_checker()
    assert ret['patches'] == 'unknown'


def test_ubuntu_checker_no_patches(fake_process):
    fake_process.register_subprocess(('do-release-upgrade', '-c'))
    fake_process.register_subprocess('/usr/lib/update-notifier/apt-check',
                                     stdout=("0;0",))
    ret = os_updates.ubuntu_checker()
    assert ret['patches'] is False


def test_ubuntu_checker_patches(fake_process):
    fake_process.register_subprocess(('do-release-upgrade', '-c'))
    fake_process.register_subprocess('/usr/lib/update-notifier/apt-check',
                                     stdout=("1;0",))
    ret = os_updates.ubuntu_checker()
    assert ret['patches'] is True


def test_ubuntu_checker_no_security(fake_process):
    fake_process.register_subprocess(('do-release-upgrade', '-c'))
    fake_process.register_subprocess('/usr/lib/update-notifier/apt-check',
                                     stdout=("1;0",))
    ret = os_updates.ubuntu_checker()
    assert ret['security_patches'] is False


def test_ubuntu_checker_security(fake_process):
    fake_process.register_subprocess(('do-release-upgrade', '-c'))
    fake_process.register_subprocess('/usr/lib/update-notifier/apt-check',
                                     stdout=("1;1",))
    ret = os_updates.ubuntu_checker()
    assert ret['security_patches'] is True
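The `os_stat` fixture above redirects the first `os.stat()` call on a given path to a replacement file, then falls back to the real implementation. The same trick in isolation, stdlib only (the stamp path here is a stand-in, not the real apt stamp):

```python
import os
import tempfile
from unittest import mock

# Capture the real os.stat before patching, and a one-shot path mapping.
orig_os_stat = os.stat
file_mappings = {}


def my_os_stat(*args, **kwargs):
    # Redirect the first stat of a mapped path, then delegate to the real stat.
    if args[0] in file_mappings:
        args = (file_mappings.pop(args[0]),) + args[1:]
    return orig_os_stat(*args, **kwargs)


with tempfile.NamedTemporaryFile() as f:
    # '/nonexistent/stamp' stands in for the update-success-stamp path.
    file_mappings['/nonexistent/stamp'] = f.name
    with mock.patch('os.stat', my_os_stat):
        st = os.stat('/nonexistent/stamp')  # actually stats the temp file

print(st.st_size)  # 0: a fresh NamedTemporaryFile is empty
```

Because the mapping is popped on use, a second `os.stat('/nonexistent/stamp')` would hit the real filesystem and raise, which is what lets the fixture detect orphaned mappings.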
# File: hikcamerabot/clients/hikvision/__init__.py | repo: tropicoo/hik-camera-bot | license: MIT
from hikcamerabot.clients.hikvision.api_client import HikvisionAPIClient
from hikcamerabot.clients.hikvision.api_wrapper import HikvisionAPI
__all__ = [
'HikvisionAPI',
'HikvisionAPIClient',
]
# File: kor2vec/trainer/__init__.py | repo: sujoung/kor2vec | license: Apache-2.0
from .skip_gram import SkipTrainer
# File: airflow/providers/google/cloud/operators/vertex_ai/custom_job.py | repo: npodewitz/airflow | license: Apache-2.0
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
"""This module contains Google Vertex AI operators."""
from typing import TYPE_CHECKING, Dict, List, Optional, Sequence, Tuple, Union
from google.api_core.exceptions import NotFound
from google.api_core.gapic_v1.method import DEFAULT, _MethodDefault
from google.api_core.retry import Retry
from google.cloud.aiplatform.models import Model
from google.cloud.aiplatform_v1.types.dataset import Dataset
from google.cloud.aiplatform_v1.types.training_pipeline import TrainingPipeline
from airflow.models import BaseOperator
from airflow.providers.google.cloud.hooks.vertex_ai.custom_job import CustomJobHook
from airflow.providers.google.cloud.links.vertex_ai import VertexAIModelLink, VertexAITrainingPipelinesLink
if TYPE_CHECKING:
    from airflow.utils.context import Context

class CustomTrainingJobBaseOperator(BaseOperator):
    """The base class for operators that launch Custom jobs on VertexAI."""

    def __init__(
        self,
        *,
        project_id: str,
        region: str,
        display_name: str,
        container_uri: str,
        model_serving_container_image_uri: Optional[str] = None,
        model_serving_container_predict_route: Optional[str] = None,
        model_serving_container_health_route: Optional[str] = None,
        model_serving_container_command: Optional[Sequence[str]] = None,
        model_serving_container_args: Optional[Sequence[str]] = None,
        model_serving_container_environment_variables: Optional[Dict[str, str]] = None,
        model_serving_container_ports: Optional[Sequence[int]] = None,
        model_description: Optional[str] = None,
        model_instance_schema_uri: Optional[str] = None,
        model_parameters_schema_uri: Optional[str] = None,
        model_prediction_schema_uri: Optional[str] = None,
        labels: Optional[Dict[str, str]] = None,
        training_encryption_spec_key_name: Optional[str] = None,
        model_encryption_spec_key_name: Optional[str] = None,
        staging_bucket: Optional[str] = None,
        # RUN
        dataset_id: Optional[str] = None,
        annotation_schema_uri: Optional[str] = None,
        model_display_name: Optional[str] = None,
        model_labels: Optional[Dict[str, str]] = None,
        base_output_dir: Optional[str] = None,
        service_account: Optional[str] = None,
        network: Optional[str] = None,
        bigquery_destination: Optional[str] = None,
        args: Optional[List[Union[str, float, int]]] = None,
        environment_variables: Optional[Dict[str, str]] = None,
        replica_count: int = 1,
        machine_type: str = "n1-standard-4",
        accelerator_type: str = "ACCELERATOR_TYPE_UNSPECIFIED",
        accelerator_count: int = 0,
        boot_disk_type: str = "pd-ssd",
        boot_disk_size_gb: int = 100,
        training_fraction_split: Optional[float] = None,
        validation_fraction_split: Optional[float] = None,
        test_fraction_split: Optional[float] = None,
        training_filter_split: Optional[str] = None,
        validation_filter_split: Optional[str] = None,
        test_filter_split: Optional[str] = None,
        predefined_split_column_name: Optional[str] = None,
        timestamp_split_column_name: Optional[str] = None,
        tensorboard: Optional[str] = None,
        sync=True,
        gcp_conn_id: str = "google_cloud_default",
        delegate_to: Optional[str] = None,
        impersonation_chain: Optional[Union[str, Sequence[str]]] = None,
        **kwargs,
    ) -> None:
        super().__init__(**kwargs)
        self.project_id = project_id
        self.region = region
        self.display_name = display_name
        # START Custom
        self.container_uri = container_uri
        self.model_serving_container_image_uri = model_serving_container_image_uri
        self.model_serving_container_predict_route = model_serving_container_predict_route
        self.model_serving_container_health_route = model_serving_container_health_route
        self.model_serving_container_command = model_serving_container_command
        self.model_serving_container_args = model_serving_container_args
        self.model_serving_container_environment_variables = model_serving_container_environment_variables
        self.model_serving_container_ports = model_serving_container_ports
        self.model_description = model_description
        self.model_instance_schema_uri = model_instance_schema_uri
        self.model_parameters_schema_uri = model_parameters_schema_uri
        self.model_prediction_schema_uri = model_prediction_schema_uri
        self.labels = labels
        self.training_encryption_spec_key_name = training_encryption_spec_key_name
        self.model_encryption_spec_key_name = model_encryption_spec_key_name
        self.staging_bucket = staging_bucket
        # END Custom
        # START Run param
        self.dataset = Dataset(name=dataset_id) if dataset_id else None
        self.annotation_schema_uri = annotation_schema_uri
        self.model_display_name = model_display_name
        self.model_labels = model_labels
        self.base_output_dir = base_output_dir
        self.service_account = service_account
        self.network = network
        self.bigquery_destination = bigquery_destination
        self.args = args
        self.environment_variables = environment_variables
        self.replica_count = replica_count
        self.machine_type = machine_type
        self.accelerator_type = accelerator_type
        self.accelerator_count = accelerator_count
        self.boot_disk_type = boot_disk_type
        self.boot_disk_size_gb = boot_disk_size_gb
        self.training_fraction_split = training_fraction_split
        self.validation_fraction_split = validation_fraction_split
        self.test_fraction_split = test_fraction_split
        self.training_filter_split = training_filter_split
        self.validation_filter_split = validation_filter_split
        self.test_filter_split = test_filter_split
        self.predefined_split_column_name = predefined_split_column_name
        self.timestamp_split_column_name = timestamp_split_column_name
        self.tensorboard = tensorboard
        self.sync = sync
        # END Run param
        self.gcp_conn_id = gcp_conn_id
        self.delegate_to = delegate_to
        self.impersonation_chain = impersonation_chain
class CreateCustomContainerTrainingJobOperator(CustomTrainingJobBaseOperator):
"""Create Custom Container Training job
:param project_id: Required. The ID of the Google Cloud project that the service belongs to.
:param region: Required. The ID of the Google Cloud region that the service belongs to.
:param display_name: Required. The user-defined name of this TrainingPipeline.
:param command: The command to be invoked when the container is started.
It overrides the entrypoint instruction in Dockerfile when provided
:param container_uri: Required: Uri of the training container image in the GCR.
:param model_serving_container_image_uri: If the training produces a managed Vertex AI Model, the URI
of the Model serving container suitable for serving the model produced by the
training script.
:param model_serving_container_predict_route: If the training produces a managed Vertex AI Model, An
HTTP path to send prediction requests to the container, and which must be supported
by it. If not specified a default HTTP path will be used by Vertex AI.
:param model_serving_container_health_route: If the training produces a managed Vertex AI Model, an
HTTP path to send health check requests to the container, and which must be supported
by it. If not specified a standard HTTP path will be used by AI Platform.
:param model_serving_container_command: The command with which the container is run. Not executed
within a shell. The Docker image's ENTRYPOINT is used if this is not provided.
Variable references $(VAR_NAME) are expanded using the container's
environment. If a variable cannot be resolved, the reference in the
input string will be unchanged. The $(VAR_NAME) syntax can be escaped
with a double $$, i.e.: $$(VAR_NAME). Escaped references will never be
expanded, regardless of whether the variable exists or not.
:param model_serving_container_args: The arguments to the command. The Docker image's CMD is used if
this is not provided. Variable references $(VAR_NAME) are expanded using the
container's environment. If a variable cannot be resolved, the reference
in the input string will be unchanged. The $(VAR_NAME) syntax can be
escaped with a double $$, i.e.: $$(VAR_NAME). Escaped references will
never be expanded, regardless of whether the variable exists or not.
:param model_serving_container_environment_variables: The environment variables that are to be
present in the container. Should be a dictionary where keys are environment variable names
and values are environment variable values for those names.
:param model_serving_container_ports: Declaration of ports that are exposed by the container. This
field is primarily informational, it gives Vertex AI information about the
network connections the container uses. Whether or not a port is listed here
has no impact on whether the port is actually exposed; any port listening on
the default "0.0.0.0" address inside a container will be accessible from
the network.
:param model_description: The description of the Model.
:param model_instance_schema_uri: Optional. Points to a YAML file stored on Google Cloud
Storage describing the format of a single instance, which
are used in
``PredictRequest.instances``,
``ExplainRequest.instances``
and
``BatchPredictionJob.input_config``.
The schema is defined as an OpenAPI 3.0.2 `Schema
Object <https://tinyurl.com/y538mdwt#schema-object>`__.
AutoML Models always have this field populated by AI
Platform. Note: The URI given on output will be immutable
and probably different, including the URI scheme, than the
one given on input. The output URI will point to a location
where the user only has read access.
:param model_parameters_schema_uri: Optional. Points to a YAML file stored on Google Cloud
Storage describing the parameters of prediction and
explanation via
``PredictRequest.parameters``,
``ExplainRequest.parameters``
and
``BatchPredictionJob.model_parameters``.
The schema is defined as an OpenAPI 3.0.2 `Schema
Object <https://tinyurl.com/y538mdwt#schema-object>`__.
AutoML Models always have this field populated by AI
Platform, if no parameters are supported it is set to an
empty string. Note: The URI given on output will be
immutable and probably different, including the URI scheme,
than the one given on input. The output URI will point to a
location where the user only has read access.
:param model_prediction_schema_uri: Optional. Points to a YAML file stored on Google Cloud
Storage describing the format of a single prediction
produced by this Model, which are returned via
``PredictResponse.predictions``,
``ExplainResponse.explanations``,
and
``BatchPredictionJob.output_config``.
The schema is defined as an OpenAPI 3.0.2 `Schema
Object <https://tinyurl.com/y538mdwt#schema-object>`__.
AutoML Models always have this field populated by AI
Platform. Note: The URI given on output will be immutable
and probably different, including the URI scheme, than the
one given on input. The output URI will point to a location
where the user only has read access.
:param labels: Optional. The labels with user-defined metadata to
organize TrainingPipelines.
Label keys and values can be no longer than 64
characters, can only
contain lowercase letters, numeric characters,
underscores and dashes. International characters
are allowed.
See https://goo.gl/xmQnxf for more information
and examples of labels.
:param training_encryption_spec_key_name: Optional. The Cloud KMS resource identifier of the customer
managed encryption key used to protect the training pipeline. Has the
form:
``projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key``.
The key needs to be in the same region as where the compute
resource is created.
If set, this TrainingPipeline will be secured by this key.
Note: Model trained by this TrainingPipeline is also secured
by this key if ``model_to_upload`` is not set separately.
:param model_encryption_spec_key_name: Optional. The Cloud KMS resource identifier of the customer
managed encryption key used to protect the model. Has the
form:
``projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key``.
The key needs to be in the same region as where the compute
resource is created.
If set, the trained Model will be secured by this key.
:param staging_bucket: Bucket used to stage source and training artifacts.
:param dataset: The Vertex AI Dataset to fit this training against.
:param annotation_schema_uri: Google Cloud Storage URI pointing to a YAML file describing the
annotation schema. The schema is defined as an OpenAPI 3.0.2
[Schema Object]
(https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.0.2.md#schema-object)
Only Annotations that both match this schema and belong to
DataItems not ignored by the split method are used in
respectively training, validation or test role, depending on
the role of the DataItem they are on.
When used in conjunction with
``annotations_filter``,
the Annotations used for training are filtered by both
``annotations_filter``
and
``annotation_schema_uri``.
:param model_display_name: If the script produces a managed Vertex AI Model, the display name of
the Model. The name can be up to 128 characters long and can consist
of any UTF-8 characters.
If not provided upon creation, the job's display_name is used.
:param model_labels: Optional. The labels with user-defined metadata to
organize your Models.
Label keys and values can be no longer than 64
characters, can only
contain lowercase letters, numeric characters,
underscores and dashes. International characters
are allowed.
See https://goo.gl/xmQnxf for more information
and examples of labels.
:param base_output_dir: GCS output directory of job. If not provided a timestamped directory in the
staging directory will be used.
Vertex AI sets the following environment variables when it runs your training code:
- AIP_MODEL_DIR: a Cloud Storage URI of a directory intended for saving model artifacts,
i.e. <base_output_dir>/model/
- AIP_CHECKPOINT_DIR: a Cloud Storage URI of a directory intended for saving checkpoints,
i.e. <base_output_dir>/checkpoints/
- AIP_TENSORBOARD_LOG_DIR: a Cloud Storage URI of a directory intended for saving TensorBoard
logs, i.e. <base_output_dir>/logs/
:param service_account: Specifies the service account for workload run-as account.
Users submitting jobs must have act-as permission on this run-as account.
:param network: The full name of the Compute Engine network to which the job
should be peered.
Private services access must already be configured for the network.
If left unspecified, the job is not peered with any network.
:param bigquery_destination: Provide this field if `dataset` is a BigQuery dataset.
The BigQuery project location where the training data is to
be written to. In the given project a new dataset is created
with name
``dataset_<dataset-id>_<annotation-type>_<timestamp-of-training-call>``
where timestamp is in YYYY_MM_DDThh_mm_ss_sssZ format. All
training input data will be written into that dataset. In
the dataset three tables will be created, ``training``,
``validation`` and ``test``.
- AIP_DATA_FORMAT = "bigquery".
- AIP_TRAINING_DATA_URI = "bigquery_destination.dataset_*.training"
- AIP_VALIDATION_DATA_URI = "bigquery_destination.dataset_*.validation"
- AIP_TEST_DATA_URI = "bigquery_destination.dataset_*.test"
:param args: Command line arguments to be passed to the Python script.
:param environment_variables: Environment variables to be passed to the container.
Should be a dictionary where keys are environment variable names
and values are environment variable values for those names.
At most 10 environment variables can be specified.
The name of each environment variable must be unique.
:param replica_count: The number of worker replicas. If replica_count = 1, one chief
replica will be provisioned. If replica_count > 1, the remainder will be
provisioned as a worker replica pool.
:param machine_type: The type of machine to use for training.
:param accelerator_type: Hardware accelerator type. One of ACCELERATOR_TYPE_UNSPECIFIED,
NVIDIA_TESLA_K80, NVIDIA_TESLA_P100, NVIDIA_TESLA_V100, NVIDIA_TESLA_P4,
NVIDIA_TESLA_T4
:param accelerator_count: The number of accelerators to attach to a worker replica.
:param boot_disk_type: Type of the boot disk, default is `pd-ssd`.
Valid values: `pd-ssd` (Persistent Disk Solid State Drive) or
`pd-standard` (Persistent Disk Hard Disk Drive).
:param boot_disk_size_gb: Size in GB of the boot disk, default is 100GB.
The boot disk size must be within the range of [100, 64000].
:param training_fraction_split: Optional. The fraction of the input data that is to be used to train
the Model. This is ignored if Dataset is not provided.
:param validation_fraction_split: Optional. The fraction of the input data that is to be used to
validate the Model. This is ignored if Dataset is not provided.
:param test_fraction_split: Optional. The fraction of the input data that is to be used to evaluate
the Model. This is ignored if Dataset is not provided.
:param training_filter_split: Optional. A filter on DataItems of the Dataset. DataItems that match
this filter are used to train the Model. A filter with same syntax
as the one used in DatasetService.ListDataItems may be used. If a
single DataItem is matched by more than one of the FilterSplit filters,
then it is assigned to the first set that applies to it in the training,
validation, test order. This is ignored if Dataset is not provided.
:param validation_filter_split: Optional. A filter on DataItems of the Dataset. DataItems that match
this filter are used to validate the Model. A filter with same syntax
as the one used in DatasetService.ListDataItems may be used. If a
single DataItem is matched by more than one of the FilterSplit filters,
then it is assigned to the first set that applies to it in the training,
validation, test order. This is ignored if Dataset is not provided.
:param test_filter_split: Optional. A filter on DataItems of the Dataset. DataItems that match
this filter are used to test the Model. A filter with same syntax
as the one used in DatasetService.ListDataItems may be used. If a
single DataItem is matched by more than one of the FilterSplit filters,
then it is assigned to the first set that applies to it in the training,
validation, test order. This is ignored if Dataset is not provided.
:param predefined_split_column_name: Optional. The key is a name of one of the Dataset's data
columns. The value of the key (either the label's value or
value in the column) must be one of {``training``,
``validation``, ``test``}, and it defines to which set the
given piece of data is assigned. If for a piece of data the
key is not present or has an invalid value, that piece is
ignored by the pipeline.
Supported only for tabular and time series Datasets.
:param timestamp_split_column_name: Optional. The key is a name of one of the Dataset's data
columns. The values of the key (the values in
the column) must be in RFC 3339 `date-time` format, where
`time-offset` = `"Z"` (e.g. 1985-04-12T23:20:50.52Z). If for a
piece of data the key is not present or has an invalid value,
that piece is ignored by the pipeline.
Supported only for tabular and time series Datasets.
:param tensorboard: Optional. The name of a Vertex AI resource to which this CustomJob will upload
logs. Format:
``projects/{project}/locations/{location}/tensorboards/{tensorboard}``
For more information on configuring your service account please visit:
https://cloud.google.com/vertex-ai/docs/experiments/tensorboard-training
:param sync: Whether to execute the AI Platform job synchronously. If False, this method
will be executed in a concurrent Future and any downstream object will
be immediately returned and synced when the Future has completed.
:param gcp_conn_id: The connection ID to use connecting to Google Cloud.
:param delegate_to: The account to impersonate using domain-wide delegation of authority,
if any. For this to work, the service account making the request must have
domain-wide delegation enabled.
:param impersonation_chain: Optional service account to impersonate using short-term
credentials, or chained list of accounts required to get the access_token
of the last account in the list, which will be impersonated in the request.
If set as a string, the account must grant the originating account
the Service Account Token Creator IAM role.
If set as a sequence, the identities from the list must grant
Service Account Token Creator IAM role to the directly preceding identity, with first
account from the list granting this role to the originating account (templated).
"""
template_fields = [
'region',
'command',
'impersonation_chain',
]
operator_extra_links = (VertexAIModelLink(),)
def __init__(
self,
*,
command: Sequence[str] = (),
**kwargs,
) -> None:
super().__init__(**kwargs)
self.command = command
def execute(self, context: "Context"):
self.hook = CustomJobHook(
gcp_conn_id=self.gcp_conn_id,
delegate_to=self.delegate_to,
impersonation_chain=self.impersonation_chain,
)
model = self.hook.create_custom_container_training_job(
project_id=self.project_id,
region=self.region,
display_name=self.display_name,
container_uri=self.container_uri,
command=self.command,
model_serving_container_image_uri=self.model_serving_container_image_uri,
model_serving_container_predict_route=self.model_serving_container_predict_route,
model_serving_container_health_route=self.model_serving_container_health_route,
model_serving_container_command=self.model_serving_container_command,
model_serving_container_args=self.model_serving_container_args,
model_serving_container_environment_variables=self.model_serving_container_environment_variables,
model_serving_container_ports=self.model_serving_container_ports,
model_description=self.model_description,
model_instance_schema_uri=self.model_instance_schema_uri,
model_parameters_schema_uri=self.model_parameters_schema_uri,
model_prediction_schema_uri=self.model_prediction_schema_uri,
labels=self.labels,
training_encryption_spec_key_name=self.training_encryption_spec_key_name,
model_encryption_spec_key_name=self.model_encryption_spec_key_name,
staging_bucket=self.staging_bucket,
# RUN
dataset=self.dataset,
annotation_schema_uri=self.annotation_schema_uri,
model_display_name=self.model_display_name,
model_labels=self.model_labels,
base_output_dir=self.base_output_dir,
service_account=self.service_account,
network=self.network,
bigquery_destination=self.bigquery_destination,
args=self.args,
environment_variables=self.environment_variables,
replica_count=self.replica_count,
machine_type=self.machine_type,
accelerator_type=self.accelerator_type,
accelerator_count=self.accelerator_count,
boot_disk_type=self.boot_disk_type,
boot_disk_size_gb=self.boot_disk_size_gb,
training_fraction_split=self.training_fraction_split,
validation_fraction_split=self.validation_fraction_split,
test_fraction_split=self.test_fraction_split,
training_filter_split=self.training_filter_split,
validation_filter_split=self.validation_filter_split,
test_filter_split=self.test_filter_split,
predefined_split_column_name=self.predefined_split_column_name,
timestamp_split_column_name=self.timestamp_split_column_name,
tensorboard=self.tensorboard,
sync=True,
)
result = Model.to_dict(model)
model_id = self.hook.extract_model_id(result)
VertexAIModelLink.persist(context=context, task_instance=self, model_id=model_id)
return result
def on_kill(self) -> None:
"""
Callback called when the operator is killed.
Cancel any running job.
"""
if self.hook:
self.hook.cancel_job()
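In both operators, `execute()` converts the returned `Model` proto to a dict with `Model.to_dict(model)` and derives a model ID for `VertexAIModelLink`. A minimal sketch of that extraction step, assuming `extract_model_id` simply takes the final path segment of the model's resource `name` (the actual hook method may differ):

```python
def extract_model_id(model_result: dict) -> str:
    """Return the trailing model ID from a Vertex AI Model resource name."""
    # Model.to_dict(model) yields a dict whose "name" field is the full
    # resource name: projects/{project}/locations/{location}/models/{model_id}
    return model_result["name"].rpartition("/")[-1]


# Example result shape produced by Model.to_dict(model):
result = {"name": "projects/my-project/locations/us-central1/models/1234567890"}
print(extract_model_id(result))  # 1234567890
```

The extracted ID is what `VertexAIModelLink.persist` stores, so the Airflow UI can render a direct link to the trained model.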
class CreateCustomPythonPackageTrainingJobOperator(CustomTrainingJobBaseOperator):
"""Create Custom Python Package Training job
:param project_id: Required. The ID of the Google Cloud project that the service belongs to.
:param region: Required. The ID of the Google Cloud region that the service belongs to.
:param display_name: Required. The user-defined name of this TrainingPipeline.
:param python_package_gcs_uri: Required. GCS location of the training python package.
:param python_module_name: Required. The module name of the training python package.
:param container_uri: Required. URI of the training container image in the GCR.
:param model_serving_container_image_uri: If the training produces a managed Vertex AI Model, the URI
of the Model serving container suitable for serving the model produced by the
training script.
:param model_serving_container_predict_route: If the training produces a managed Vertex AI Model, an
HTTP path to send prediction requests to the container, and which must be supported
by it. If not specified a default HTTP path will be used by Vertex AI.
:param model_serving_container_health_route: If the training produces a managed Vertex AI Model, an
HTTP path to send health check requests to the container, and which must be supported
by it. If not specified a standard HTTP path will be used by AI Platform.
:param model_serving_container_command: The command with which the container is run. Not executed
within a shell. The Docker image's ENTRYPOINT is used if this is not provided.
Variable references $(VAR_NAME) are expanded using the container's
environment. If a variable cannot be resolved, the reference in the
input string will be unchanged. The $(VAR_NAME) syntax can be escaped
with a double $$, i.e.: $$(VAR_NAME). Escaped references will never be
expanded, regardless of whether the variable exists or not.
:param model_serving_container_args: The arguments to the command. The Docker image's CMD is used if
this is not provided. Variable references $(VAR_NAME) are expanded using the
container's environment. If a variable cannot be resolved, the reference
in the input string will be unchanged. The $(VAR_NAME) syntax can be
escaped with a double $$, i.e.: $$(VAR_NAME). Escaped references will
never be expanded, regardless of whether the variable exists or not.
:param model_serving_container_environment_variables: The environment variables that are to be
present in the container. Should be a dictionary where keys are environment variable names
and values are environment variable values for those names.
:param model_serving_container_ports: Declaration of ports that are exposed by the container. This
field is primarily informational, it gives Vertex AI information about the
network connections the container uses. Whether or not a port is listed here
has no impact on whether the port is actually exposed; any port listening on
the default "0.0.0.0" address inside a container will be accessible from
the network.
:param model_description: The description of the Model.
:param model_instance_schema_uri: Optional. Points to a YAML file stored on Google Cloud
Storage describing the format of a single instance, which
are used in
``PredictRequest.instances``,
``ExplainRequest.instances``
and
``BatchPredictionJob.input_config``.
The schema is defined as an OpenAPI 3.0.2 `Schema
Object <https://tinyurl.com/y538mdwt#schema-object>`__.
AutoML Models always have this field populated by AI
Platform. Note: The URI given on output will be immutable
and probably different, including the URI scheme, than the
one given on input. The output URI will point to a location
where the user only has read access.
:param model_parameters_schema_uri: Optional. Points to a YAML file stored on Google Cloud
Storage describing the parameters of prediction and
explanation via
``PredictRequest.parameters``,
``ExplainRequest.parameters``
and
``BatchPredictionJob.model_parameters``.
The schema is defined as an OpenAPI 3.0.2 `Schema
Object <https://tinyurl.com/y538mdwt#schema-object>`__.
AutoML Models always have this field populated by AI
Platform, if no parameters are supported it is set to an
empty string. Note: The URI given on output will be
immutable and probably different, including the URI scheme,
than the one given on input. The output URI will point to a
location where the user only has read access.
:param model_prediction_schema_uri: Optional. Points to a YAML file stored on Google Cloud
Storage describing the format of a single prediction
produced by this Model, which are returned via
``PredictResponse.predictions``,
``ExplainResponse.explanations``,
and
``BatchPredictionJob.output_config``.
The schema is defined as an OpenAPI 3.0.2 `Schema
Object <https://tinyurl.com/y538mdwt#schema-object>`__.
AutoML Models always have this field populated by AI
Platform. Note: The URI given on output will be immutable
and probably different, including the URI scheme, than the
one given on input. The output URI will point to a location
where the user only has read access.
:param labels: Optional. The labels with user-defined metadata to
organize TrainingPipelines.
Label keys and values can be no longer than 64
characters, can only
contain lowercase letters, numeric characters,
underscores and dashes. International characters
are allowed.
See https://goo.gl/xmQnxf for more information
and examples of labels.
:param training_encryption_spec_key_name: Optional. The Cloud KMS resource identifier of the customer
managed encryption key used to protect the training pipeline. Has the
form:
``projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key``.
The key needs to be in the same region as where the compute
resource is created.
If set, this TrainingPipeline will be secured by this key.
Note: Model trained by this TrainingPipeline is also secured
by this key if ``model_to_upload`` is not set separately.
:param model_encryption_spec_key_name: Optional. The Cloud KMS resource identifier of the customer
managed encryption key used to protect the model. Has the
form:
``projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key``.
The key needs to be in the same region as where the compute
resource is created.
If set, the trained Model will be secured by this key.
:param staging_bucket: Bucket used to stage source and training artifacts.
:param dataset: The Vertex AI Dataset to fit this training against.
:param annotation_schema_uri: Google Cloud Storage URI pointing to a YAML file describing the
annotation schema. The schema is defined as an OpenAPI 3.0.2
[Schema Object]
(https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.0.2.md#schema-object)
Only Annotations that both match this schema and belong to
DataItems not ignored by the split method are used in
respectively training, validation or test role, depending on
the role of the DataItem they are on.
When used in conjunction with
``annotations_filter``,
the Annotations used for training are filtered by both
``annotations_filter``
and
``annotation_schema_uri``.
:param model_display_name: If the script produces a managed Vertex AI Model, the display name of
the Model. The name can be up to 128 characters long and can consist
of any UTF-8 characters.
If not provided upon creation, the job's display_name is used.
:param model_labels: Optional. The labels with user-defined metadata to
organize your Models.
Label keys and values can be no longer than 64
characters, can only
contain lowercase letters, numeric characters,
underscores and dashes. International characters
are allowed.
See https://goo.gl/xmQnxf for more information
and examples of labels.
:param base_output_dir: GCS output directory of job. If not provided a timestamped directory in the
staging directory will be used.
Vertex AI sets the following environment variables when it runs your training code:
- AIP_MODEL_DIR: a Cloud Storage URI of a directory intended for saving model artifacts,
i.e. <base_output_dir>/model/
- AIP_CHECKPOINT_DIR: a Cloud Storage URI of a directory intended for saving checkpoints,
i.e. <base_output_dir>/checkpoints/
- AIP_TENSORBOARD_LOG_DIR: a Cloud Storage URI of a directory intended for saving TensorBoard
logs, i.e. <base_output_dir>/logs/
:param service_account: Specifies the service account for workload run-as account.
Users submitting jobs must have act-as permission on this run-as account.
:param network: The full name of the Compute Engine network to which the job
should be peered.
Private services access must already be configured for the network.
If left unspecified, the job is not peered with any network.
:param bigquery_destination: Provide this field if `dataset` is a BigQuery dataset.
The BigQuery project location where the training data is to
be written to. In the given project a new dataset is created
with name
``dataset_<dataset-id>_<annotation-type>_<timestamp-of-training-call>``
where timestamp is in YYYY_MM_DDThh_mm_ss_sssZ format. All
training input data will be written into that dataset. In
the dataset three tables will be created, ``training``,
``validation`` and ``test``.
- AIP_DATA_FORMAT = "bigquery".
- AIP_TRAINING_DATA_URI = "bigquery_destination.dataset_*.training"
- AIP_VALIDATION_DATA_URI = "bigquery_destination.dataset_*.validation"
- AIP_TEST_DATA_URI = "bigquery_destination.dataset_*.test"
:param args: Command line arguments to be passed to the Python script.
:param environment_variables: Environment variables to be passed to the container.
Should be a dictionary where keys are environment variable names
and values are environment variable values for those names.
At most 10 environment variables can be specified.
The name of each environment variable must be unique.
:param replica_count: The number of worker replicas. If replica_count = 1, one chief
replica will be provisioned. If replica_count > 1, the remainder will be
provisioned as a worker replica pool.
:param machine_type: The type of machine to use for training.
:param accelerator_type: Hardware accelerator type. One of ACCELERATOR_TYPE_UNSPECIFIED,
NVIDIA_TESLA_K80, NVIDIA_TESLA_P100, NVIDIA_TESLA_V100, NVIDIA_TESLA_P4,
NVIDIA_TESLA_T4
:param accelerator_count: The number of accelerators to attach to a worker replica.
:param boot_disk_type: Type of the boot disk, default is `pd-ssd`.
Valid values: `pd-ssd` (Persistent Disk Solid State Drive) or
`pd-standard` (Persistent Disk Hard Disk Drive).
:param boot_disk_size_gb: Size in GB of the boot disk, default is 100GB.
The boot disk size must be within the range of [100, 64000].
:param training_fraction_split: Optional. The fraction of the input data that is to be used to train
the Model. This is ignored if Dataset is not provided.
:param validation_fraction_split: Optional. The fraction of the input data that is to be used to
validate the Model. This is ignored if Dataset is not provided.
:param test_fraction_split: Optional. The fraction of the input data that is to be used to evaluate
the Model. This is ignored if Dataset is not provided.
:param training_filter_split: Optional. A filter on DataItems of the Dataset. DataItems that match
this filter are used to train the Model. A filter with same syntax
as the one used in DatasetService.ListDataItems may be used. If a
single DataItem is matched by more than one of the FilterSplit filters,
then it is assigned to the first set that applies to it in the training,
validation, test order. This is ignored if Dataset is not provided.
:param validation_filter_split: Optional. A filter on DataItems of the Dataset. DataItems that match
this filter are used to validate the Model. A filter with same syntax
as the one used in DatasetService.ListDataItems may be used. If a
single DataItem is matched by more than one of the FilterSplit filters,
then it is assigned to the first set that applies to it in the training,
validation, test order. This is ignored if Dataset is not provided.
:param test_filter_split: Optional. A filter on DataItems of the Dataset. DataItems that match
this filter are used to test the Model. A filter with same syntax
as the one used in DatasetService.ListDataItems may be used. If a
single DataItem is matched by more than one of the FilterSplit filters,
then it is assigned to the first set that applies to it in the training,
validation, test order. This is ignored if Dataset is not provided.
:param predefined_split_column_name: Optional. The key is a name of one of the Dataset's data
columns. The value of the key (either the label's value or
value in the column) must be one of {``training``,
``validation``, ``test``}, and it defines to which set the
given piece of data is assigned. If for a piece of data the
key is not present or has an invalid value, that piece is
ignored by the pipeline.
Supported only for tabular and time series Datasets.
:param timestamp_split_column_name: Optional. The key is a name of one of the Dataset's data
columns. The values of the key (the values in
the column) must be in RFC 3339 `date-time` format, where
`time-offset` = `"Z"` (e.g. 1985-04-12T23:20:50.52Z). If for a
piece of data the key is not present or has an invalid value,
that piece is ignored by the pipeline.
Supported only for tabular and time series Datasets.
:param tensorboard: Optional. The name of a Vertex AI resource to which this CustomJob will upload
logs. Format:
``projects/{project}/locations/{location}/tensorboards/{tensorboard}``
For more information on configuring your service account please visit:
https://cloud.google.com/vertex-ai/docs/experiments/tensorboard-training
:param sync: Whether to execute the AI Platform job synchronously. If False, this method
will be executed in a concurrent Future and any downstream object will
be immediately returned and synced when the Future has completed.
:param gcp_conn_id: The connection ID to use connecting to Google Cloud.
:param delegate_to: The account to impersonate using domain-wide delegation of authority,
if any. For this to work, the service account making the request must have
domain-wide delegation enabled.
:param impersonation_chain: Optional service account to impersonate using short-term
credentials, or chained list of accounts required to get the access_token
of the last account in the list, which will be impersonated in the request.
If set as a string, the account must grant the originating account
the Service Account Token Creator IAM role.
If set as a sequence, the identities from the list must grant
Service Account Token Creator IAM role to the directly preceding identity, with first
account from the list granting this role to the originating account (templated).
"""
template_fields = [
'region',
'impersonation_chain',
]
operator_extra_links = (VertexAIModelLink(),)
def __init__(
self,
*,
python_package_gcs_uri: str,
python_module_name: str,
**kwargs,
) -> None:
super().__init__(**kwargs)
self.python_package_gcs_uri = python_package_gcs_uri
self.python_module_name = python_module_name
def execute(self, context: "Context"):
self.hook = CustomJobHook(
gcp_conn_id=self.gcp_conn_id,
delegate_to=self.delegate_to,
impersonation_chain=self.impersonation_chain,
)
model = self.hook.create_custom_python_package_training_job(
project_id=self.project_id,
region=self.region,
display_name=self.display_name,
python_package_gcs_uri=self.python_package_gcs_uri,
python_module_name=self.python_module_name,
container_uri=self.container_uri,
model_serving_container_image_uri=self.model_serving_container_image_uri,
model_serving_container_predict_route=self.model_serving_container_predict_route,
model_serving_container_health_route=self.model_serving_container_health_route,
model_serving_container_command=self.model_serving_container_command,
model_serving_container_args=self.model_serving_container_args,
model_serving_container_environment_variables=self.model_serving_container_environment_variables,
model_serving_container_ports=self.model_serving_container_ports,
model_description=self.model_description,
model_instance_schema_uri=self.model_instance_schema_uri,
model_parameters_schema_uri=self.model_parameters_schema_uri,
model_prediction_schema_uri=self.model_prediction_schema_uri,
labels=self.labels,
training_encryption_spec_key_name=self.training_encryption_spec_key_name,
model_encryption_spec_key_name=self.model_encryption_spec_key_name,
staging_bucket=self.staging_bucket,
# RUN
dataset=self.dataset,
annotation_schema_uri=self.annotation_schema_uri,
model_display_name=self.model_display_name,
model_labels=self.model_labels,
base_output_dir=self.base_output_dir,
service_account=self.service_account,
network=self.network,
bigquery_destination=self.bigquery_destination,
args=self.args,
environment_variables=self.environment_variables,
replica_count=self.replica_count,
machine_type=self.machine_type,
accelerator_type=self.accelerator_type,
accelerator_count=self.accelerator_count,
boot_disk_type=self.boot_disk_type,
boot_disk_size_gb=self.boot_disk_size_gb,
training_fraction_split=self.training_fraction_split,
validation_fraction_split=self.validation_fraction_split,
test_fraction_split=self.test_fraction_split,
training_filter_split=self.training_filter_split,
validation_filter_split=self.validation_filter_split,
test_filter_split=self.test_filter_split,
predefined_split_column_name=self.predefined_split_column_name,
timestamp_split_column_name=self.timestamp_split_column_name,
tensorboard=self.tensorboard,
sync=True,
)
result = Model.to_dict(model)
model_id = self.hook.extract_model_id(result)
VertexAIModelLink.persist(context=context, task_instance=self, model_id=model_id)
return result
def on_kill(self) -> None:
"""
Callback called when the operator is killed.
Cancel any running job.
"""
if self.hook:
self.hook.cancel_job()
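`execute` above converts the returned Model proto to a dict and pulls the model ID out of it for the `VertexAIModelLink`. The hook's `extract_model_id` is defined elsewhere; for a model resource name of the standard form, it amounts to taking the last path segment. A minimal sketch, assuming the dict exposes the standard `name` field (the dict layout here is an assumption, not the hook's actual implementation):

```python
def extract_model_id(model_dict: dict) -> str:
    # Vertex AI model resource names look like:
    #   projects/{project}/locations/{region}/models/{model_id}
    # so the model ID is the final path segment of the "name" field.
    return model_dict["name"].rpartition("/")[-1]

result = {"name": "projects/my-project/locations/us-central1/models/1234567890"}
model_id = extract_model_id(result)
print(model_id)  # 1234567890
```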
class CreateCustomTrainingJobOperator(CustomTrainingJobBaseOperator):
"""Create Custom Training job
:param project_id: Required. The ID of the Google Cloud project that the service belongs to.
:param region: Required. The ID of the Google Cloud region that the service belongs to.
:param display_name: Required. The user-defined name of this TrainingPipeline.
:param script_path: Required. Local path to training script.
:param container_uri: Required. URI of the training container image in the GCR.
:param requirements: List of Python package dependencies of the script.
:param model_serving_container_image_uri: If the training produces a managed Vertex AI Model, the URI
of the Model serving container suitable for serving the model produced by the
training script.
:param model_serving_container_predict_route: If the training produces a managed Vertex AI Model, an
HTTP path to send prediction requests to the container, and which must be supported
by it. If not specified, a default HTTP path will be used by Vertex AI.
:param model_serving_container_health_route: If the training produces a managed Vertex AI Model, an
HTTP path to send health check requests to the container, and which must be supported
by it. If not specified, a standard HTTP path will be used by AI Platform.
:param model_serving_container_command: The command with which the container is run. Not executed
within a shell. The Docker image's ENTRYPOINT is used if this is not provided.
Variable references $(VAR_NAME) are expanded using the container's
environment. If a variable cannot be resolved, the reference in the
input string will be unchanged. The $(VAR_NAME) syntax can be escaped
with a double $$, i.e. $$(VAR_NAME). Escaped references will never be
expanded, regardless of whether the variable exists or not.
:param model_serving_container_args: The arguments to the command. The Docker image's CMD is used if
this is not provided. Variable references $(VAR_NAME) are expanded using the
container's environment. If a variable cannot be resolved, the reference
in the input string will be unchanged. The $(VAR_NAME) syntax can be
escaped with a double $$, i.e. $$(VAR_NAME). Escaped references will
never be expanded, regardless of whether the variable exists or not.
:param model_serving_container_environment_variables: The environment variables that are to be
present in the container. Should be a dictionary where keys are environment variable names
and values are environment variable values for those names.
:param model_serving_container_ports: Declaration of ports that are exposed by the container. This
field is primarily informational; it gives Vertex AI information about the
network connections the container uses. Whether or not a port is listed here has
no impact on whether the port is actually exposed; any port listening on
the default "0.0.0.0" address inside a container will be accessible from
the network.
:param model_description: The description of the Model.
:param model_instance_schema_uri: Optional. Points to a YAML file stored on Google Cloud
Storage describing the format of a single instance, which
is used in
``PredictRequest.instances``,
``ExplainRequest.instances``
and
``BatchPredictionJob.input_config``.
The schema is defined as an OpenAPI 3.0.2 `Schema
Object <https://tinyurl.com/y538mdwt#schema-object>`__.
AutoML Models always have this field populated by AI
Platform. Note: The URI given on output will be immutable
and probably different, including the URI scheme, than the
one given on input. The output URI will point to a location
where the user only has read access.
:param model_parameters_schema_uri: Optional. Points to a YAML file stored on Google Cloud
Storage describing the parameters of prediction and
explanation via
``PredictRequest.parameters``,
``ExplainRequest.parameters``
and
``BatchPredictionJob.model_parameters``.
The schema is defined as an OpenAPI 3.0.2 `Schema
Object <https://tinyurl.com/y538mdwt#schema-object>`__.
AutoML Models always have this field populated by AI
Platform, if no parameters are supported it is set to an
empty string. Note: The URI given on output will be
immutable and probably different, including the URI scheme,
than the one given on input. The output URI will point to a
location where the user only has read access.
:param model_prediction_schema_uri: Optional. Points to a YAML file stored on Google Cloud
Storage describing the format of a single prediction
produced by this Model, which is returned via
``PredictResponse.predictions``,
``ExplainResponse.explanations``,
and
``BatchPredictionJob.output_config``.
The schema is defined as an OpenAPI 3.0.2 `Schema
Object <https://tinyurl.com/y538mdwt#schema-object>`__.
AutoML Models always have this field populated by AI
Platform. Note: The URI given on output will be immutable
and probably different, including the URI scheme, than the
one given on input. The output URI will point to a location
where the user only has read access.
:param labels: Optional. The labels with user-defined metadata to
organize TrainingPipelines.
Label keys and values can be no longer than 64
characters, can only
contain lowercase letters, numeric characters,
underscores and dashes. International characters
are allowed.
See https://goo.gl/xmQnxf for more information
and examples of labels.
:param training_encryption_spec_key_name: Optional. The Cloud KMS resource identifier of the customer
managed encryption key used to protect the training pipeline. Has the
form:
``projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key``.
The key needs to be in the same region as where the compute
resource is created.
If set, this TrainingPipeline will be secured by this key.
Note: Model trained by this TrainingPipeline is also secured
by this key if ``model_to_upload`` is not set separately.
:param model_encryption_spec_key_name: Optional. The Cloud KMS resource identifier of the customer
managed encryption key used to protect the model. Has the
form:
``projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key``.
The key needs to be in the same region as where the compute
resource is created.
If set, the trained Model will be secured by this key.
:param staging_bucket: Bucket used to stage source and training artifacts.
:param dataset: The Vertex AI dataset to fit this training against.
:param annotation_schema_uri: Google Cloud Storage URI that points to a YAML file describing the
annotation schema. The schema is defined as an OpenAPI 3.0.2
[Schema Object]
(https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.0.2.md#schema-object)
Only Annotations that both match this schema and belong to
DataItems not ignored by the split method are used in
respectively training, validation or test role, depending on
the role of the DataItem they are on.
When used in conjunction with
``annotations_filter``,
the Annotations used for training are filtered by both
``annotations_filter``
and
``annotation_schema_uri``.
:param model_display_name: If the script produces a managed Vertex AI Model, the display name of
the Model. The name can be up to 128 characters long and can consist
of any UTF-8 characters.
If not provided upon creation, the job's display_name is used.
:param model_labels: Optional. The labels with user-defined metadata to
organize your Models.
Label keys and values can be no longer than 64
characters, can only
contain lowercase letters, numeric characters,
underscores and dashes. International characters
are allowed.
See https://goo.gl/xmQnxf for more information
and examples of labels.
:param base_output_dir: GCS output directory of job. If not provided a timestamped directory in the
staging directory will be used.
Vertex AI sets the following environment variables when it runs your training code:
- AIP_MODEL_DIR: a Cloud Storage URI of a directory intended for saving model artifacts,
i.e. <base_output_dir>/model/
- AIP_CHECKPOINT_DIR: a Cloud Storage URI of a directory intended for saving checkpoints,
i.e. <base_output_dir>/checkpoints/
- AIP_TENSORBOARD_LOG_DIR: a Cloud Storage URI of a directory intended for saving TensorBoard
logs, i.e. <base_output_dir>/logs/
:param service_account: Specifies the service account for workload run-as account.
Users submitting jobs must have act-as permission on this run-as account.
:param network: The full name of the Compute Engine network to which the job
should be peered.
Private services access must already be configured for the network.
If left unspecified, the job is not peered with any network.
:param bigquery_destination: Provide this field if `dataset` is a BigQuery dataset.
The BigQuery project location where the training data is to
be written to. In the given project a new dataset is created
with name
``dataset_<dataset-id>_<annotation-type>_<timestamp-of-training-call>``
where timestamp is in YYYY_MM_DDThh_mm_ss_sssZ format. All
training input data will be written into that dataset. In
the dataset three tables will be created, ``training``,
``validation`` and ``test``.
- AIP_DATA_FORMAT = "bigquery".
- AIP_TRAINING_DATA_URI = "bigquery_destination.dataset_*.training"
- AIP_VALIDATION_DATA_URI = "bigquery_destination.dataset_*.validation"
- AIP_TEST_DATA_URI = "bigquery_destination.dataset_*.test"
:param args: Command line arguments to be passed to the Python script.
:param environment_variables: Environment variables to be passed to the container.
Should be a dictionary where keys are environment variable names
and values are environment variable values for those names.
At most 10 environment variables can be specified.
The name of each environment variable must be unique.
:param replica_count: The number of worker replicas. If replica_count = 1, one chief
replica will be provisioned. If replica_count > 1, the remainder will be
provisioned as a worker replica pool.
:param machine_type: The type of machine to use for training.
:param accelerator_type: Hardware accelerator type. One of ACCELERATOR_TYPE_UNSPECIFIED,
NVIDIA_TESLA_K80, NVIDIA_TESLA_P100, NVIDIA_TESLA_V100, NVIDIA_TESLA_P4,
NVIDIA_TESLA_T4
:param accelerator_count: The number of accelerators to attach to a worker replica.
:param boot_disk_type: Type of the boot disk, default is `pd-ssd`.
Valid values: `pd-ssd` (Persistent Disk Solid State Drive) or
`pd-standard` (Persistent Disk Hard Disk Drive).
:param boot_disk_size_gb: Size in GB of the boot disk, default is 100GB.
The boot disk size must be within the range [100, 64000].
:param training_fraction_split: Optional. The fraction of the input data that is to be used to train
the Model. This is ignored if Dataset is not provided.
:param validation_fraction_split: Optional. The fraction of the input data that is to be used to
validate the Model. This is ignored if Dataset is not provided.
:param test_fraction_split: Optional. The fraction of the input data that is to be used to evaluate
the Model. This is ignored if Dataset is not provided.
:param training_filter_split: Optional. A filter on DataItems of the Dataset. DataItems that match
this filter are used to train the Model. A filter with the same syntax
as the one used in DatasetService.ListDataItems may be used. If a
single DataItem is matched by more than one of the FilterSplit filters,
then it is assigned to the first set that applies to it in the training,
validation, test order. This is ignored if Dataset is not provided.
:param validation_filter_split: Optional. A filter on DataItems of the Dataset. DataItems that match
this filter are used to validate the Model. A filter with the same syntax
as the one used in DatasetService.ListDataItems may be used. If a
single DataItem is matched by more than one of the FilterSplit filters,
then it is assigned to the first set that applies to it in the training,
validation, test order. This is ignored if Dataset is not provided.
:param test_filter_split: Optional. A filter on DataItems of the Dataset. DataItems that match
this filter are used to test the Model. A filter with the same syntax
as the one used in DatasetService.ListDataItems may be used. If a
single DataItem is matched by more than one of the FilterSplit filters,
then it is assigned to the first set that applies to it in the training,
validation, test order. This is ignored if Dataset is not provided.
:param predefined_split_column_name: Optional. The key is a name of one of the Dataset's data
columns. The value of the key (either the label's value or
value in the column) must be one of {``training``,
``validation``, ``test``}, and it defines to which set the
given piece of data is assigned. If for a piece of data the
key is not present or has an invalid value, that piece is
ignored by the pipeline.
Supported only for tabular and time series Datasets.
:param timestamp_split_column_name: Optional. The key is a name of one of the Dataset's data
columns. The values of the key (the values in
the column) must be in RFC 3339 `date-time` format, where
`time-offset` = `"Z"` (e.g. 1985-04-12T23:20:50.52Z). If for a
piece of data the key is not present or has an invalid value,
that piece is ignored by the pipeline.
Supported only for tabular and time series Datasets.
:param tensorboard: Optional. The name of a Vertex AI resource to which this CustomJob will upload
logs. Format:
``projects/{project}/locations/{location}/tensorboards/{tensorboard}``
For more information on configuring your service account please visit:
https://cloud.google.com/vertex-ai/docs/experiments/tensorboard-training
:param sync: Whether to execute the AI Platform job synchronously. If False, this method
will be executed in a concurrent Future, and any downstream object will
be immediately returned and synced when the Future has completed.
:param gcp_conn_id: The connection ID to use when connecting to Google Cloud.
:param delegate_to: The account to impersonate using domain-wide delegation of authority,
if any. For this to work, the service account making the request must have
domain-wide delegation enabled.
:param impersonation_chain: Optional service account to impersonate using short-term
credentials, or chained list of accounts required to get the access_token
of the last account in the list, which will be impersonated in the request.
If set as a string, the account must grant the originating account
the Service Account Token Creator IAM role.
If set as a sequence, the identities from the list must grant
Service Account Token Creator IAM role to the directly preceding identity, with the first
account in the list granting this role to the originating account (templated).
"""
template_fields = [
'region',
'script_path',
'requirements',
'impersonation_chain',
]
operator_extra_links = (VertexAIModelLink(),)
def __init__(
self,
*,
script_path: str,
requirements: Optional[Sequence[str]] = None,
**kwargs,
) -> None:
super().__init__(**kwargs)
self.requirements = requirements
self.script_path = script_path
def execute(self, context: "Context"):
self.hook = CustomJobHook(
gcp_conn_id=self.gcp_conn_id,
delegate_to=self.delegate_to,
impersonation_chain=self.impersonation_chain,
)
model = self.hook.create_custom_training_job(
project_id=self.project_id,
region=self.region,
display_name=self.display_name,
script_path=self.script_path,
container_uri=self.container_uri,
requirements=self.requirements,
model_serving_container_image_uri=self.model_serving_container_image_uri,
model_serving_container_predict_route=self.model_serving_container_predict_route,
model_serving_container_health_route=self.model_serving_container_health_route,
model_serving_container_command=self.model_serving_container_command,
model_serving_container_args=self.model_serving_container_args,
model_serving_container_environment_variables=self.model_serving_container_environment_variables,
model_serving_container_ports=self.model_serving_container_ports,
model_description=self.model_description,
model_instance_schema_uri=self.model_instance_schema_uri,
model_parameters_schema_uri=self.model_parameters_schema_uri,
model_prediction_schema_uri=self.model_prediction_schema_uri,
labels=self.labels,
training_encryption_spec_key_name=self.training_encryption_spec_key_name,
model_encryption_spec_key_name=self.model_encryption_spec_key_name,
staging_bucket=self.staging_bucket,
# RUN
dataset=self.dataset,
annotation_schema_uri=self.annotation_schema_uri,
model_display_name=self.model_display_name,
model_labels=self.model_labels,
base_output_dir=self.base_output_dir,
service_account=self.service_account,
network=self.network,
bigquery_destination=self.bigquery_destination,
args=self.args,
environment_variables=self.environment_variables,
replica_count=self.replica_count,
machine_type=self.machine_type,
accelerator_type=self.accelerator_type,
accelerator_count=self.accelerator_count,
boot_disk_type=self.boot_disk_type,
boot_disk_size_gb=self.boot_disk_size_gb,
training_fraction_split=self.training_fraction_split,
validation_fraction_split=self.validation_fraction_split,
test_fraction_split=self.test_fraction_split,
training_filter_split=self.training_filter_split,
validation_filter_split=self.validation_filter_split,
test_filter_split=self.test_filter_split,
predefined_split_column_name=self.predefined_split_column_name,
timestamp_split_column_name=self.timestamp_split_column_name,
tensorboard=self.tensorboard,
sync=True,
)
result = Model.to_dict(model)
model_id = self.hook.extract_model_id(result)
VertexAIModelLink.persist(context=context, task_instance=self, model_id=model_id)
return result
def on_kill(self) -> None:
"""
Callback called when the operator is killed.
Cancel any running job.
"""
if self.hook:
self.hook.cancel_job()
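The three `*_fraction_split` parameters documented above tell Vertex AI how to partition a managed dataset into training, validation, and test sets. The partitioning happens server-side, but a toy illustration of the arithmetic (pure Python, not the SDK; the rounding policy shown is an assumption for the sketch):

```python
def split_counts(n_items: int, train: float, val: float, test: float):
    # Fractions should sum to 1.0; any remainder left over after
    # truncating the validation/test counts goes to training.
    n_val = int(n_items * val)
    n_test = int(n_items * test)
    n_train = n_items - n_val - n_test
    return n_train, n_val, n_test

print(split_counts(1000, 0.8, 0.1, 0.1))  # (800, 100, 100)
```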
class DeleteCustomTrainingJobOperator(BaseOperator):
"""Deletes a CustomTrainingJob, CustomPythonTrainingJob, or CustomContainerTrainingJob.
:param training_pipeline_id: Required. The name of the TrainingPipeline resource to be deleted.
:param custom_job_id: Required. The name of the CustomJob to delete.
:param project_id: Required. The ID of the Google Cloud project that the service belongs to.
:param region: Required. The ID of the Google Cloud region that the service belongs to.
:param retry: Designation of what errors, if any, should be retried.
:param timeout: The timeout for this request.
:param metadata: Strings which should be sent along with the request as metadata.
:param gcp_conn_id: The connection ID to use when connecting to Google Cloud.
:param delegate_to: The account to impersonate using domain-wide delegation of authority,
if any. For this to work, the service account making the request must have
domain-wide delegation enabled.
:param impersonation_chain: Optional service account to impersonate using short-term
credentials, or chained list of accounts required to get the access_token
of the last account in the list, which will be impersonated in the request.
If set as a string, the account must grant the originating account
the Service Account Token Creator IAM role.
If set as a sequence, the identities from the list must grant
Service Account Token Creator IAM role to the directly preceding identity, with the first
account in the list granting this role to the originating account (templated).
"""
template_fields = ("region", "project_id", "impersonation_chain")
def __init__(
self,
*,
training_pipeline_id: str,
custom_job_id: str,
region: str,
project_id: str,
retry: Union[Retry, _MethodDefault] = DEFAULT,
timeout: Optional[float] = None,
metadata: Sequence[Tuple[str, str]] = (),
gcp_conn_id: str = "google_cloud_default",
delegate_to: Optional[str] = None,
impersonation_chain: Optional[Union[str, Sequence[str]]] = None,
**kwargs,
) -> None:
super().__init__(**kwargs)
self.training_pipeline = training_pipeline_id
self.custom_job = custom_job_id
self.region = region
self.project_id = project_id
self.retry = retry
self.timeout = timeout
self.metadata = metadata
self.gcp_conn_id = gcp_conn_id
self.delegate_to = delegate_to
self.impersonation_chain = impersonation_chain
def execute(self, context: "Context"):
hook = CustomJobHook(
gcp_conn_id=self.gcp_conn_id,
delegate_to=self.delegate_to,
impersonation_chain=self.impersonation_chain,
)
try:
self.log.info("Deleting custom training pipeline: %s", self.training_pipeline)
training_pipeline_operation = hook.delete_training_pipeline(
training_pipeline=self.training_pipeline,
region=self.region,
project_id=self.project_id,
retry=self.retry,
timeout=self.timeout,
metadata=self.metadata,
)
hook.wait_for_operation(timeout=self.timeout, operation=training_pipeline_operation)
self.log.info("Training pipeline was deleted.")
except NotFound:
self.log.info("The Training Pipeline ID %s does not exist.", self.training_pipeline)
try:
self.log.info("Deleting custom job: %s", self.custom_job)
custom_job_operation = hook.delete_custom_job(
custom_job=self.custom_job,
region=self.region,
project_id=self.project_id,
retry=self.retry,
timeout=self.timeout,
metadata=self.metadata,
)
hook.wait_for_operation(timeout=self.timeout, operation=custom_job_operation)
self.log.info("Custom job was deleted.")
except NotFound:
self.log.info("The Custom Job ID %s does not exist.", self.custom_job)
class ListCustomTrainingJobOperator(BaseOperator):
"""Lists CustomTrainingJob, CustomPythonTrainingJob, or CustomContainerTrainingJob in a Location.
:param project_id: Required. The ID of the Google Cloud project that the service belongs to.
:param region: Required. The ID of the Google Cloud region that the service belongs to.
:param filter: Optional. The standard list filter. Supported fields:
- ``display_name`` supports = and !=.
- ``state`` supports = and !=.
Some examples of using the filter are:
- ``state="PIPELINE_STATE_SUCCEEDED" AND display_name="my_pipeline"``
- ``state="PIPELINE_STATE_RUNNING" OR display_name="my_pipeline"``
- ``NOT display_name="my_pipeline"``
- ``state="PIPELINE_STATE_FAILED"``
:param page_size: Optional. The standard list page size.
:param page_token: Optional. The standard list page token. Typically obtained via
[ListTrainingPipelinesResponse.next_page_token][google.cloud.aiplatform.v1.ListTrainingPipelinesResponse.next_page_token]
of the previous
[PipelineService.ListTrainingPipelines][google.cloud.aiplatform.v1.PipelineService.ListTrainingPipelines]
call.
:param read_mask: Optional. Mask specifying which fields to read.
:param retry: Designation of what errors, if any, should be retried.
:param timeout: The timeout for this request.
:param metadata: Strings which should be sent along with the request as metadata.
:param gcp_conn_id: The connection ID to use when connecting to Google Cloud.
:param delegate_to: The account to impersonate using domain-wide delegation of authority,
if any. For this to work, the service account making the request must have
domain-wide delegation enabled.
:param impersonation_chain: Optional service account to impersonate using short-term
credentials, or chained list of accounts required to get the access_token
of the last account in the list, which will be impersonated in the request.
If set as a string, the account must grant the originating account
the Service Account Token Creator IAM role.
If set as a sequence, the identities from the list must grant
Service Account Token Creator IAM role to the directly preceding identity, with the first
account in the list granting this role to the originating account (templated).
"""
template_fields = [
"region",
"project_id",
"impersonation_chain",
]
operator_extra_links = [
VertexAITrainingPipelinesLink(),
]
def __init__(
self,
*,
region: str,
project_id: str,
page_size: Optional[int] = None,
page_token: Optional[str] = None,
filter: Optional[str] = None,
read_mask: Optional[str] = None,
retry: Union[Retry, _MethodDefault] = DEFAULT,
timeout: Optional[float] = None,
metadata: Sequence[Tuple[str, str]] = (),
gcp_conn_id: str = "google_cloud_default",
delegate_to: Optional[str] = None,
impersonation_chain: Optional[Union[str, Sequence[str]]] = None,
**kwargs,
) -> None:
super().__init__(**kwargs)
self.region = region
self.project_id = project_id
self.page_size = page_size
self.page_token = page_token
self.filter = filter
self.read_mask = read_mask
self.retry = retry
self.timeout = timeout
self.metadata = metadata
self.gcp_conn_id = gcp_conn_id
self.delegate_to = delegate_to
self.impersonation_chain = impersonation_chain
def execute(self, context: "Context"):
hook = CustomJobHook(
gcp_conn_id=self.gcp_conn_id,
delegate_to=self.delegate_to,
impersonation_chain=self.impersonation_chain,
)
results = hook.list_training_pipelines(
region=self.region,
project_id=self.project_id,
page_size=self.page_size,
page_token=self.page_token,
filter=self.filter,
read_mask=self.read_mask,
retry=self.retry,
timeout=self.timeout,
metadata=self.metadata,
)
VertexAITrainingPipelinesLink.persist(context=context, task_instance=self)
return [TrainingPipeline.to_dict(result) for result in results]
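`page_token` in the operator above is the standard List-API cursor: each response carries a `next_page_token`, and passing it back retrieves the next page. A minimal sketch of draining such an API; the `fake_list_pipelines` function below stands in for the real service call:

```python
def fake_list_pipelines(page_token=None, page_size=2):
    # Fake paged service: three items served two at a time.
    items = ["p1", "p2", "p3"]
    start = int(page_token or 0)
    page = items[start:start + page_size]
    end = start + page_size
    next_token = str(end) if end < len(items) else None
    return page, next_token

def list_all():
    # Keep requesting pages until the service stops returning a token.
    results, token = [], None
    while True:
        page, token = fake_list_pipelines(page_token=token)
        results.extend(page)
        if token is None:
            return results

print(list_all())  # ['p1', 'p2', 'p3']
```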
# -*- coding: utf-8 -*-
"""
Created on Tue Mar 30 17:42:23 2021
@author: marina
"""
# Set absolute package path
import sys, os
sys.path.append(os.path.abspath(".."))
import torch
import numpy as np
import random
import network.ann_automate as ann
import blimp_model.blimp_id_synthetic as blimp
#import plot_funcs as pf # :D
import extra.plots_automate as pf # :D
import copy
import pid.myPID as PID
############################################################
# FUNCTIONS EVOLUTION: initialization, mutation, evaluation
############################################################
# Initialization individual
def initializeIndividual(cf):
"""
Initialize an ANN individual with all its desired parameters
"""
# Network creation
network = ann.MyANN(cf)
network.update_params()
if cf["device"] == "cuda:0":
network = network.to(torch.float16).cuda()
return network
# Mutation individual
def mutation(individual):
"""
Shuffle certain parameters of the network to keep evolving it,
by re-drawing them via update_params()
"""
individual[0].update_params()
return individual, # <------ Comma
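`update_params` is defined on `MyANN` elsewhere; a hedged sketch of the kind of Gaussian reshuffle such a mutation typically performs (the parameter list, sigma, and seed below are illustrative assumptions, not the actual implementation):

```python
import random

def perturb(params, sigma=0.1, rng=None):
    # Add zero-mean Gaussian noise to every evolvable parameter.
    rng = rng or random.Random(0)
    return [p + rng.gauss(0.0, sigma) for p in params]

params = [0.5, -0.2, 1.0]
mutated = perturb(params)
assert len(mutated) == len(params)
```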
# Evaluation function
def evaluate(individual, cf, h_refList, h_init):
"""
1) Altitude initialization
2) Start simulation (from t=0 to t=T):
- Compute the error [error]
- Feed error to ANN and do forward pass [ann]
- Save output speed, u (to proper size array) [u]
- Feed u and range array (u_in + r_stored) to blimp model [plant]
- Get new range [range]
----> Loop all over again
3) Calculate MSE and return
"""
dic = {}
dic["error"] = []
if cf["evol"]["plot"]:
dic["r"] = []
dic["h_ref"] = []
dic["u"] = []
dic["values"] = {}
for i in range(len(cf["n_layers"])):
dic["values"][i] = []
h_curr = h_init
error_prev = 0
# Array initialization for the blimp model
u_in = np.zeros(len(blimp.TF_ran.num))
r_stored = np.ones(len(blimp.TF_ran.den)-1) * h_curr
# Reset state before evaluating
#individual[0].reset_state()
count = 0
for h_ref in h_refList:
individual[0].heights.append((h_ref,h_init))
"""
if count != 0:
h_curr = h_refList[count-1]
u_in = np.zeros(len(blimp.TF_ran.num))
r_stored = np.ones(len(blimp.TF_ran.den)-1) * h_curr
count += 1
"""
# Start simulation with duration T
for j in range(cf["evol"]["T"]):
# Desired (reference) altitude for each timestep
error = h_ref - h_curr
dic["error"].append(error)
error_diff = float((error-error_prev))/cf["init"]["dt"]
error_prev = error
#inp = torch.tensor([float(error), float(error_diff)])
inp = torch.tensor([float(error)])
# Do forward pass
x_array, u = individual[0].forward(inp)
# Round / discretize u
u = torch.round(u)
if u < -cf["evol"]["u_lim"]:
u = -cf["evol"]["u_lim"]
elif u > cf["evol"]["u_lim"]:
u = cf["evol"]["u_lim"]
# Penalize big u (motor command) and traces
#if abs(u) > cf["evol"]["u_lim"]:
# mse = 5000
# return (mse,)
# Get the proper u_in array
u_in = np.delete(u_in, 0)
u_in = np.append(u_in, u)
# Apply model and store range
h_curr, r_stored = blimp.mymodel_ran(u_in, blimp.TF_ran, r_stored)
# Add noise to height
h_curr += random.uniform(-cf["evol"]["noise"], cf["evol"]["noise"])
# For plotting
if cf["evol"]["plot"]:
dic["u"].append(u)
dic["r"].append(h_curr)
dic["h_ref"].append(h_ref)
for i in range(len(cf["n_layers"])):
dic["values"][i].append(copy.deepcopy(x_array[i].cpu().view(-1).numpy().transpose()))
if cf["evol"]["plot"]:
pf.plotANN(dic)
pf.plotBlimp(dic)
error_array = np.array(dic["error"])
# Root-mean-square error calculation
mse = np.sqrt(np.mean(error_array**2))
return (mse,)  # <------ Comma: the fitness must be a one-element tuple
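The fitness returned above is the root-mean-square of the altitude error over the whole simulation (the trailing comma makes it a one-element tuple, as evolutionary frameworks expect). The same computation without NumPy:

```python
import math

def rmse(errors):
    # Root-mean-square error over the full simulation trace.
    return math.sqrt(sum(e * e for e in errors) / len(errors))

print(rmse([1.0, -1.0]))  # 1.0
print(rmse([3.0, 4.0]))   # sqrt(12.5) ~= 3.5355
```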
def evaluate_ANNyPID(individual, cf, h_refList, h_init):
"""
1) Altitude initialization
2) Start simulation (from t=0 to t=T):
- Compute the error [error]
- Feed error to ANN and do forward pass [ann]
- Save output speed, u (to proper size array) [u]
- Feed u and range array (u_in + r_stored) to blimp model [plant]
- Get new range [range]
----> Loop all over again
3) Calculate MSE and return
"""
P = individual[0].pid[0]
I = individual[0].pid[1]
D = individual[0].pid[2]
pid = PID.PID(P, I, D, 1/30, True)
dic = {}
dic["error"] = []
if cf["evol"]["plot"]:
dic["r"] = []
dic["h_ref"] = []
dic["u"] = []
dic["u_ann"] = []
dic["u_pid"] = []
dic["values"] = {}
for i in range(len(cf["n_layers"])):
dic["values"][i] = []
h_curr = h_init
error_prev = 0
# Array initialization for the blimp model
u_in = np.zeros(len(blimp.TF_ran.num))
r_stored = np.ones(len(blimp.TF_ran.den)-1) * h_curr
# Reset state before evaluating
#individual[0].reset_state()
count = 0
for h_ref in h_refList:
individual[0].heights.append((h_ref,h_init))
"""
if count != 0:
h_curr = h_refList[count-1]
u_in = np.zeros(len(blimp.TF_ran.num))
r_stored = np.ones(len(blimp.TF_ran.den)-1) * h_curr
count += 1
"""
# Start simulation with duration T
for j in range(cf["evol"]["T"]):
# Desired (reference) altitude for each timestep
error = h_ref - h_curr
dic["error"].append(error)
error_diff = float((error-error_prev))/cf["init"]["dt"]
error_prev = error
#inp = torch.tensor([float(error), float(error_diff)])
inp = torch.tensor([float(error)])
# Do forward pass
x_array, u_ann = individual[0].forward(inp)
u_pid = pid.update_simple(error)
u = u_ann + u_pid
# Round / discretize u
#u = torch.round(u)
if u < -cf["evol"]["u_lim"]:
u = -cf["evol"]["u_lim"]
elif u > cf["evol"]["u_lim"]:
u = cf["evol"]["u_lim"]
# Penalize big u (motor command) and traces
#if abs(u) > cf["evol"]["u_lim"]:
# mse = 5000
# return (mse,)
# Get the proper u_in array
u_in = np.delete(u_in, 0)
u_in = np.append(u_in, u)
# Apply model and store range
h_curr, r_stored = blimp.mymodel_ran(u_in, blimp.TF_ran, r_stored)
# Add noise to height
h_curr += random.uniform(-cf["evol"]["noise"], cf["evol"]["noise"])
# For plotting
if cf["evol"]["plot"]:
dic["u"].append(u)
dic["u_ann"].append(u_ann)
dic["u_pid"].append(u_pid)
dic["r"].append(h_curr)
dic["h_ref"].append(h_ref)
for i in range(len(cf["n_layers"])):
dic["values"][i].append(copy.deepcopy(x_array[i].cpu().view(-1).numpy().transpose()))
if cf["evol"]["plot"]:
pf.plotANN(dic)
pf.plotBlimp(dic)
error_array = np.array(dic["error"])
# Mean squared error calculation
mse = np.sqrt(np.mean(error_array**2))
return (mse,) # <------ Comma # Actually here is the value of the error
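
# The controller above sums the ANN output and a PID term, then saturates the
# result to the command limit. A self-contained sketch of that saturation step
# (the `clamp` helper is illustrative, not part of this module):

def clamp(u, u_lim):
    """Saturate a motor command to the symmetric interval [-u_lim, u_lim]."""
    return max(-u_lim, min(u_lim, u))

# e.g. clamp(7.2, 5.0) -> 5.0, clamp(-9.1, 5.0) -> -5.0, clamp(1.3, 5.0) -> 1.3
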
'''
def evaluate_PID(individual, cf, pid, inputList):
    dic = {}
    dic["error"] = []
    if cf["evol"]["plot"]:
        dic["inp"] = []
        dic["u_pid"] = []
        dic["u_snn"] = []
        dic["values"] = {}
        for i in range(len(cf["n_layers"])):
            dic["values"][i] = []
    for inp in inputList:
        # Add noise to the error
        inp += random.uniform(-cf["evol"]["noise"], cf["evol"]["noise"])
        # Do forward pass
        x_array, u_nn = individual[0].forward(inp)
        if cf["pid"]["simple"]:
            u_pid = pid.update_simple(inp)
        else:
            u_pid = pid.update(inp)
        # Penalize big u (motor command) and traces
        if abs(u_nn) > cf["evol"]["u_lim"]:
            mse = 5000
            return (mse,)
        dic["error"].append(abs(u_pid - u_nn))
        # For plotting
        if cf["evol"]["plot"]:
            dic["inp"].append(inp)
            dic["u_pid"].append(u_pid)
            dic["u_snn"].append(u_nn)
            for i in range(len(cf["n_layers"])):
                dic["values"][i].append(copy.deepcopy(x_array[i].cpu().view(-1).numpy().transpose()))
    if cf["evol"]["plot"]:
        #pf.plotANN(dic)
        pf.plot_PID_SNN(dic)
    error_array = np.array(dic["error"])
    # Root-mean-square of the difference between the PID and network commands
    mse = np.sqrt(np.mean(error_array ** 2))
    return (mse,)  # Note the trailing comma: the fitness must be returned as a tuple
''' | 31.146417 | 105 | 0.489898 | 1,253 | 9,998 | 3.780527 | 0.162809 | 0.035465 | 0.016255 | 0.023221 | 0.769685 | 0.75533 | 0.749842 | 0.736542 | 0.718176 | 0.718176 | 0 | 0.011838 | 0.357872 | 9,998 | 321 | 106 | 31.146417 | 0.726012 | 0.253851 | 0 | 0.711864 | 0 | 0 | 0.064578 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033898 | false | 0 | 0.076271 | 0 | 0.144068 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f611c40b9637b586e2b4dab16a42f145c0e3eae7 | 42 | py | Python | jython/Lib/app-packages/main.py | eugeneai/LOD-table-annotator | adbbcee8c6591414d65ab2672cf37578636f2fc0 | [
"Apache-2.0"
] | null | null | null | jython/Lib/app-packages/main.py | eugeneai/LOD-table-annotator | adbbcee8c6591414d65ab2672cf37578636f2fc0 | [
"Apache-2.0"
] | null | null | null | jython/Lib/app-packages/main.py | eugeneai/LOD-table-annotator | adbbcee8c6591414d65ab2672cf37578636f2fc0 | [
"Apache-2.0"
] | null | null | null | import os.path
# import jena
import ssdc
| 8.4 | 14 | 0.761905 | 7 | 42 | 4.571429 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.190476 | 42 | 4 | 15 | 10.5 | 0.941176 | 0.261905 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f62438004ffa6b44f946a50c97dc1f91df19db4a | 115 | py | Python | samples/python/src/main/service/__init__.py | BroadcomMFD/test4z | c974f8f508e3e764d023696f32f813862c386397 | [
"BSD-2-Clause-Patent"
] | 6 | 2021-09-23T11:29:28.000Z | 2022-03-16T10:26:59.000Z | samples/python/src/main/service/__init__.py | BroadcomMFD/test4z | c974f8f508e3e764d023696f32f813862c386397 | [
"BSD-2-Clause-Patent"
] | 3 | 2021-10-19T09:00:27.000Z | 2022-03-11T12:53:52.000Z | samples/python/src/main/service/__init__.py | BroadcomMFD/test4z | c974f8f508e3e764d023696f32f813862c386397 | [
"BSD-2-Clause-Patent"
] | 3 | 2021-09-30T18:53:32.000Z | 2022-02-08T09:21:09.000Z | from .Test4zService import submit_job_notify, get_config_prop, copy, roll_back_dataset, search, update, diagnostic
| 57.5 | 114 | 0.852174 | 16 | 115 | 5.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009524 | 0.086957 | 115 | 1 | 115 | 115 | 0.866667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f6402467bf280de6bc13e9708bcf265b0b97bf22 | 12,376 | py | Python | src/utilities/PlottingUtility.py | krmnino/Peru_COVID19_OpenData | 61ab2aea72e5998f925a628e9d9251769f863c48 | [
"CC0-1.0"
] | 3 | 2020-09-14T21:30:14.000Z | 2021-01-07T23:58:19.000Z | src/utilities/PlottingUtility.py | krmnino/Peru_COVID19_OpenData | 61ab2aea72e5998f925a628e9d9251769f863c48 | [
"CC0-1.0"
] | null | null | null | src/utilities/PlottingUtility.py | krmnino/Peru_COVID19_OpenData | 61ab2aea72e5998f925a628e9d9251769f863c48 | [
"CC0-1.0"
] | 3 | 2021-01-15T19:03:38.000Z | 2021-06-26T18:28:15.000Z | from matplotlib.figure import Figure
import warnings
import numpy as np
import sys
warnings.filterwarnings('ignore')
class QuadPlot:
    def __init__(self, colors_sp, titles_sp, enable_rolling_avg_sp, type_sp, x_label_sp, y_label_sp, x_data, y_data, stitle, ofile,
                 ravg_days=[1, 1, 1, 1], ravg_labels=[None, None, None, None], ravg_ydata=[None, None, None, None]):
        if(len(colors_sp) != 4):
            sys.exit('colors_sp does not equal 4')
        else:
            self.colors_subplots = colors_sp
        if(len(titles_sp) != 4):
            sys.exit('titles_sp does not equal 4')
        else:
            self.titles_subplots = titles_sp
        if(len(enable_rolling_avg_sp) != 4):
            sys.exit('enable_rolling_avg_sp does not equal 4')
        else:
            self.enable_rolling_avg_subplots = enable_rolling_avg_sp
        if(len(type_sp) != 4):
            sys.exit('type_sp does not equal 4')
        else:
            self.type_subplots = type_sp
        if(len(x_label_sp) != 4):
            sys.exit('x_label_sp does not equal 4')
        else:
            self.x_label_subplots = x_label_sp
        if(len(y_label_sp) != 4):
            sys.exit('y_label_sp does not equal 4')
        else:
            self.y_label_subplots = y_label_sp
        if(len(x_data) != 4):
            sys.exit('x_data does not equal 4')
        else:
            self.x_data = x_data
        if(len(y_data) != 4):
            sys.exit('y_data does not equal 4')
        else:
            self.y_data = y_data
        if(len(ravg_days) != 4):
            sys.exit('ravg_days does not equal 4')
        else:
            self.ravg_days = ravg_days
        if(len(ravg_labels) != 4):
            sys.exit('ravg_labels does not equal 4')
        else:
            self.ravg_labels = ravg_labels
        if(len(ravg_ydata) != 4):
            sys.exit('ravg_ydata does not equal 4')
        else:
            self.ravg_ydata = ravg_ydata
        for i in range(0, 4):
            if(self.enable_rolling_avg_subplots[i] and self.ravg_days[i] < 0):
                sys.exit('ravg_days[' + str(i) + '] must be 1 or greater if rolling average is enabled')
            if(self.enable_rolling_avg_subplots[i] and self.ravg_labels[i] == None):
                sys.exit('ravg_labels[' + str(i) + '] cannot be None if rolling average is enabled')
        self.suptitle = stitle
        self.out_file = ofile
        self.text_font = {'fontname': 'Bahnschrift'}
        self.digit_font = {'fontname': 'Consolas'}

    def export(self):
        self.fig = Figure(figsize=(14, 10), dpi=200)
        self.axes = [self.fig.add_subplot(2, 2, 1),
                     self.fig.add_subplot(2, 2, 2),
                     self.fig.add_subplot(2, 2, 3),
                     self.fig.add_subplot(2, 2, 4)]
        self.fig.subplots_adjust(left=0.05, bottom=0.10, right=0.98, top=0.94, wspace=0.15, hspace=0.38)
        for i in range(0, 4):
            if(self.type_subplots[i] == 'bar'):
                self.bar_plot(i)
            elif(self.type_subplots[i] == 'scatter'):
                self.scatter_plot(i)
            if(self.enable_rolling_avg_subplots[i]):
                self.generate_rolling_average(i)
        self.fig.suptitle(self.suptitle, fontsize=10, **self.text_font)
        self.fig.savefig(self.out_file)
    def bar_plot(self, index):
        self.axes[index].grid(zorder=0)
        self.axes[index].bar(self.x_data[index], self.y_data[index], color=self.colors_subplots[index], zorder=2)
        self.axes[index].set_title(self.titles_subplots[index], fontsize=14, **self.text_font)
        self.axes[index].tick_params(axis='x', labelrotation=90)
        self.axes[index].set_xticklabels(labels=self.x_data[index], fontsize=8, **self.digit_font)
        for tick in self.axes[index].get_yticklabels():
            tick.set_fontname(**self.digit_font)
        self.axes[index].set_xlabel(self.x_label_subplots[index], **self.text_font)
        self.axes[index].set_ylabel(self.y_label_subplots[index], **self.text_font)

    def scatter_plot(self, index):
        self.axes[index].plot(self.x_data[index], self.y_data[index], color=self.colors_subplots[index])
        self.axes[index].plot(self.x_data[index], self.y_data[index], 'ko')
        self.axes[index].set_title(self.titles_subplots[index], fontsize=14, **self.text_font)
        self.axes[index].tick_params(axis='x', labelrotation=90)
        self.axes[index].set_xticklabels(labels=self.x_data[index], fontsize=8, **self.digit_font)
        for tick in self.axes[index].get_yticklabels():
            tick.set_fontname(**self.digit_font)
        self.axes[index].set_xlabel(self.x_label_subplots[index], **self.text_font)
        self.axes[index].set_ylabel(self.y_label_subplots[index], **self.text_font)
        self.axes[index].grid()

    def rgb_threshold(self, color, lower=0, upper=255):
        # Clip a colour channel to the valid 0-255 range (parameters renamed from
        # min/max to avoid shadowing the built-ins; callers pass them positionally).
        if (color < lower):
            return lower
        if (color > upper):
            return upper
        return color

    def generate_rolling_average(self, index):
        avgd_data = np.array([])
        for i in range(len(self.ravg_ydata[index]) - (len(self.x_data[index]) + self.ravg_days[index]), len(self.ravg_ydata[index])):
            sum_data = 0
            for j in range(i - self.ravg_days[index], i):
                sum_data += self.ravg_ydata[index][j]
            sum_data /= self.ravg_days[index]
            avgd_data = np.append(avgd_data, sum_data)
        # Darken the subplot colour for the rolling-average line
        color_to_string = self.colors_subplots[index][1:]
        r, g, b = int(color_to_string[0:2], 16), int(color_to_string[2:4], 16), int(color_to_string[4:], 16)
        r = int(self.rgb_threshold(r * 0.6))
        g = int(self.rgb_threshold(g * 0.6))
        b = int(self.rgb_threshold(b * 0.6))
        avg_color = "#%02x%02x%02x" % (r, g, b)
        self.axes[index].plot(self.x_data[index], avgd_data[self.ravg_days[index]:], linestyle='dashed', linewidth=2.5, color=avg_color, label=self.ravg_labels[index])
        self.axes[index].legend(loc='upper left')

    def get_path(self):
        return self.out_file
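
# The trailing moving average computed element-by-element above can also be
# expressed with a convolution. A minimal equivalent sketch (the
# `rolling_average` helper is illustrative, not part of this module):
import numpy as np

def rolling_average(values, window):
    """Mean over each contiguous run of `window` samples (trailing average)."""
    kernel = np.ones(window) / window
    return np.convolve(values, kernel, mode="valid")

# e.g. rolling_average([1, 2, 3, 4], 2) -> array([1.5, 2.5, 3.5])
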
class ScatterPlot:
    def __init__(self, color, title, enable_rolling_avg, x_label, y_label, x_data, y_data, stitle, ofile, ravg=1, ravg_label='', ravg_ydata=[]):
        self.color_plot = color
        self.title_plot = title
        self.enable_rolling_avg_plot = enable_rolling_avg
        self.ravg_days = ravg
        self.ravg_label = ravg_label
        self.ravg_ydata = ravg_ydata
        self.x_label_plot = x_label
        self.y_label_plot = y_label
        self.x_data = x_data
        self.y_data = y_data
        self.suptitle = stitle
        self.out_file = ofile
        self.text_font = {'fontname': 'Bahnschrift'}
        self.digit_font = {'fontname': 'Consolas'}

    def export(self):
        self.fig = Figure(figsize=(14, 10), dpi=200)
        self.axis = self.fig.add_subplot(1, 1, 1)
        self.fig.subplots_adjust(left=0.07, bottom=0.14, right=0.98, top=0.92, wspace=0.15, hspace=0.38)
        self.axis.plot(self.x_data, self.y_data, color=self.color_plot)
        self.axis.plot(self.x_data, self.y_data, 'ko')
        self.axis.set_title(self.title_plot, fontsize=22, **self.text_font)
        self.axis.tick_params(axis='x', labelrotation=90)
        self.axis.set_xticklabels(labels=self.x_data, fontsize=12, **self.digit_font)
        for tick in self.axis.get_yticklabels():
            tick.set_fontname(**self.digit_font)
            tick.set_fontsize(12)
        self.axis.set_xlabel(self.x_label_plot, **self.text_font, fontsize=12)
        self.axis.set_ylabel(self.y_label_plot, **self.text_font, fontsize=12)
        if(self.enable_rolling_avg_plot):
            if(len(self.ravg_label) == 0):
                sys.exit('Rolling average label is empty')
            if(len(self.ravg_ydata) == 0):
                sys.exit('Rolling average ydata is empty')
            self.generate_rolling_average()
        self.axis.grid()
        self.fig.suptitle(self.suptitle, fontsize=12, **self.text_font)
        self.fig.savefig(self.out_file)

    def rgb_threshold(self, color, lower=0, upper=255):
        # Clip a colour channel to the valid 0-255 range (parameters renamed from
        # min/max to avoid shadowing the built-ins; callers pass them positionally).
        if (color < lower):
            return lower
        if (color > upper):
            return upper
        return color

    def generate_rolling_average(self):
        avgd_data = np.array([])
        for i in range(len(self.ravg_ydata) - (len(self.x_data) + self.ravg_days), len(self.ravg_ydata)):
            sum_data = 0
            for j in range(i - self.ravg_days, i):
                sum_data += self.ravg_ydata[j]
            sum_data /= self.ravg_days
            avgd_data = np.append(avgd_data, sum_data)
        # Darken the plot colour for the rolling-average line
        color_to_string = self.color_plot[1:]
        r, g, b = int(color_to_string[0:2], 16), int(color_to_string[2:4], 16), int(color_to_string[4:], 16)
        r = int(self.rgb_threshold(r * 0.6))
        g = int(self.rgb_threshold(g * 0.6))
        b = int(self.rgb_threshold(b * 0.6))
        avg_color = "#%02x%02x%02x" % (r, g, b)
        self.axis.plot(self.x_data, avgd_data[self.ravg_days:], linestyle='dashed', linewidth=2.5, color=avg_color, label=self.ravg_label)
        self.axis.legend(loc='upper left')

    def get_path(self):
        return self.out_file
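
# All three plot classes darken the base colour by scaling each channel by 0.6
# before drawing the rolling-average line. A self-contained sketch of that
# colour transform (the `darken_hex` helper is illustrative, not part of this
# module):

def darken_hex(color, factor=0.6):
    """Scale each channel of an '#rrggbb' colour by `factor`, clipping to 0-255."""
    r, g, b = (int(color[i:i + 2], 16) for i in (1, 3, 5))
    def clip(channel):
        return max(0, min(255, int(channel * factor)))
    return "#%02x%02x%02x" % (clip(r), clip(g), clip(b))

# e.g. darken_hex("#ff8000") -> "#994c00"
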
class BarPlot:
    def __init__(self, color, title, enable_rolling_avg, x_label, y_label, x_data, y_data, stitle, ofile, ravg=1, ravg_label='', ravg_ydata=[]):
        self.color_plot = color
        self.title_plot = title
        self.enable_rolling_avg_plot = enable_rolling_avg
        self.ravg_days = ravg
        self.ravg_label = ravg_label
        self.ravg_ydata = ravg_ydata
        self.x_label_plot = x_label
        self.y_label_plot = y_label
        self.x_data = x_data
        self.y_data = y_data
        self.suptitle = stitle
        self.out_file = ofile
        self.text_font = {'fontname': 'Bahnschrift'}
        self.digit_font = {'fontname': 'Consolas'}

    def export(self):
        self.fig = Figure(figsize=(14, 10), dpi=200)
        self.axis = self.fig.add_subplot(1, 1, 1)
        self.fig.subplots_adjust(left=0.07, bottom=0.14, right=0.98, top=0.92, wspace=0.15, hspace=0.38)
        self.axis.grid(zorder=0)
        self.axis.bar(self.x_data, self.y_data, color=self.color_plot, zorder=2)
        self.axis.set_title(self.title_plot, fontsize=22, **self.text_font)
        self.axis.tick_params(axis='x', labelrotation=90)
        self.axis.set_xticklabels(labels=self.x_data, fontsize=12, **self.digit_font)
        for tick in self.axis.get_yticklabels():
            tick.set_fontname(**self.digit_font)
            tick.set_fontsize(12)
        self.axis.set_xlabel(self.x_label_plot, **self.text_font, fontsize=12)
        self.axis.set_ylabel(self.y_label_plot, **self.text_font, fontsize=12)
        if(self.enable_rolling_avg_plot):
            if(len(self.ravg_label) == 0):
                sys.exit('Rolling average label is empty')
            if(len(self.ravg_ydata) == 0):
                sys.exit('Rolling average ydata is empty')
            self.generate_rolling_average()
        self.fig.suptitle(self.suptitle, fontsize=12, **self.text_font)
        self.fig.savefig(self.out_file)

    def rgb_threshold(self, color, lower=0, upper=255):
        # Clip a colour channel to the valid 0-255 range (parameters renamed from
        # min/max to avoid shadowing the built-ins; callers pass them positionally).
        if (color < lower):
            return lower
        if (color > upper):
            return upper
        return color

    def generate_rolling_average(self):
        avgd_data = np.array([])
        for i in range(len(self.ravg_ydata) - (len(self.x_data) + self.ravg_days), len(self.ravg_ydata)):
            sum_data = 0
            for j in range(i - self.ravg_days, i):
                sum_data += self.ravg_ydata[j]
            sum_data /= self.ravg_days
            avgd_data = np.append(avgd_data, sum_data)
        # Darken the plot colour for the rolling-average line
        color_to_string = self.color_plot[1:]
        r, g, b = int(color_to_string[0:2], 16), int(color_to_string[2:4], 16), int(color_to_string[4:], 16)
        r = int(self.rgb_threshold(r * 0.6))
        g = int(self.rgb_threshold(g * 0.6))
        b = int(self.rgb_threshold(b * 0.6))
        avg_color = "#%02x%02x%02x" % (r, g, b)
        self.axis.plot(self.x_data, avgd_data[self.ravg_days:], linestyle='dashed', linewidth=2.5, color=avg_color, label=self.ravg_label)
        self.axis.legend(loc='upper left')

    def get_path(self):
        return self.out_file | 42.675862 | 167 | 0.610698 | 1,839 | 12,376 | 3.888526 | 0.082654 | 0.04363 | 0.023913 | 0.019997 | 0.853028 | 0.821983 | 0.78982 | 0.751084 | 0.735002 | 0.72843 | 0 | 0.029156 | 0.257272 | 12,376 | 290 | 168 | 42.675862 | 0.748803 | 0 | 0 | 0.684211 | 0 | 0 | 0.062131 | 0.001697 | 0 | 0 | 0 | 0 | 0 | 1 | 0.068826 | false | 0 | 0.016194 | 0.012146 | 0.145749 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6
f65fa9b32cb533bc55de293fd360f2422ea084e5 | 31 | py | Python | rrs_scraper/scraper/serializers/__init__.py | mohamedmansor/rrs-scraper | 8b0252400639dd77399fd04b536bf47096e20991 | [
"MIT"
] | null | null | null | rrs_scraper/scraper/serializers/__init__.py | mohamedmansor/rrs-scraper | 8b0252400639dd77399fd04b536bf47096e20991 | [
"MIT"
] | null | null | null | rrs_scraper/scraper/serializers/__init__.py | mohamedmansor/rrs-scraper | 8b0252400639dd77399fd04b536bf47096e20991 | [
"MIT"
] | null | null | null | from .feed_serializer import * | 31 | 31 | 0.806452 | 4 | 31 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129032 | 31 | 1 | 31 | 31 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f66e3535f7f2db466da31313c46ef3d2e36dc81d | 5,676 | py | Python | test_autolens/point/model/test_analysis_point.py | Jammy2211/PyAutoLens | 728100a3bf13f89f35030724aa08593ab44e65eb | [
"MIT"
] | 114 | 2018-03-05T07:31:47.000Z | 2022-03-08T06:40:52.000Z | test_autolens/point/model/test_analysis_point.py | Jammy2211/PyAutoLens | 728100a3bf13f89f35030724aa08593ab44e65eb | [
"MIT"
] | 143 | 2018-01-31T09:57:13.000Z | 2022-03-16T09:41:05.000Z | test_autolens/point/model/test_analysis_point.py | Jammy2211/PyAutoLens | 728100a3bf13f89f35030724aa08593ab44e65eb | [
"MIT"
] | 33 | 2018-01-31T12:15:57.000Z | 2022-01-08T18:31:02.000Z | from os import path
import autofit as af
import autolens as al
from autolens.point.model.result import ResultPoint
from autolens.mock import mock
directory = path.dirname(path.realpath(__file__))
class TestAnalysisPoint:
    def test__make_result__result_imaging_is_returned(self, point_dict):
        model = af.Collection(
            galaxies=af.Collection(
                lens=al.Galaxy(redshift=0.5, point_0=al.ps.Point(centre=(0.0, 0.0)))
            )
        )

        search = mock.MockSearch(name="test_search")
        solver = mock.MockPointSolver(model_positions=point_dict["point_0"].positions)
        analysis = al.AnalysisPoint(point_dict=point_dict, solver=solver)
        result = search.fit(model=model, analysis=analysis)

        assert isinstance(result, ResultPoint)

    def test__figure_of_merit__matches_correct_fit_given_galaxy_profiles(
        self, positions_x2, positions_x2_noise_map
    ):
        point_dataset = al.PointDataset(
            name="point_0",
            positions=positions_x2,
            positions_noise_map=positions_x2_noise_map,
        )

        point_dict = al.PointDict(point_dataset_list=[point_dataset])

        model = af.Collection(
            galaxies=af.Collection(
                lens=al.Galaxy(redshift=0.5, point_0=al.ps.Point(centre=(0.0, 0.0)))
            )
        )

        solver = mock.MockPointSolver(model_positions=positions_x2)
        analysis = al.AnalysisPoint(point_dict=point_dict, solver=solver)
        instance = model.instance_from_unit_vector([])
        analysis_log_likelihood = analysis.log_likelihood_function(instance=instance)
        tracer = analysis.tracer_for_instance(instance=instance)

        fit_positions = al.FitPositionsImage(
            name="point_0",
            positions=positions_x2,
            noise_map=positions_x2_noise_map,
            tracer=tracer,
            point_solver=solver,
        )

        assert fit_positions.chi_squared == 0.0
        assert fit_positions.log_likelihood == analysis_log_likelihood

        model_positions = al.Grid2DIrregular([(0.0, 1.0), (1.0, 2.0)])
        solver = mock.MockPointSolver(model_positions=model_positions)
        analysis = al.AnalysisPoint(point_dict=point_dict, solver=solver)
        analysis_log_likelihood = analysis.log_likelihood_function(instance=instance)

        fit_positions = al.FitPositionsImage(
            name="point_0",
            positions=positions_x2,
            noise_map=positions_x2_noise_map,
            tracer=tracer,
            point_solver=solver,
        )

        assert fit_positions.residual_map.in_list == [1.0, 1.0]
        assert fit_positions.chi_squared == 2.0
        assert fit_positions.log_likelihood == analysis_log_likelihood

    def test__figure_of_merit__includes_fit_fluxes(
        self, positions_x2, positions_x2_noise_map, fluxes_x2, fluxes_x2_noise_map
    ):
        point_dataset = al.PointDataset(
            name="point_0",
            positions=positions_x2,
            positions_noise_map=positions_x2_noise_map,
            fluxes=fluxes_x2,
            fluxes_noise_map=fluxes_x2_noise_map,
        )

        point_dict = al.PointDict(point_dataset_list=[point_dataset])

        model = af.Collection(
            galaxies=af.Collection(
                lens=al.Galaxy(
                    redshift=0.5,
                    sis=al.mp.SphIsothermal(einstein_radius=1.0),
                    point_0=al.ps.PointFlux(flux=1.0),
                )
            )
        )

        solver = mock.MockPointSolver(model_positions=positions_x2)
        analysis = al.AnalysisPoint(point_dict=point_dict, solver=solver)
        instance = model.instance_from_unit_vector([])
        analysis_log_likelihood = analysis.log_likelihood_function(instance=instance)
        tracer = analysis.tracer_for_instance(instance=instance)

        fit_positions = al.FitPositionsImage(
            name="point_0",
            positions=positions_x2,
            noise_map=positions_x2_noise_map,
            tracer=tracer,
            point_solver=solver,
        )

        fit_fluxes = al.FitFluxes(
            name="point_0",
            fluxes=fluxes_x2,
            noise_map=fluxes_x2_noise_map,
            positions=positions_x2,
            tracer=tracer,
        )

        assert (
            fit_positions.log_likelihood + fit_fluxes.log_likelihood
            == analysis_log_likelihood
        )

        model_positions = al.Grid2DIrregular([(0.0, 1.0), (1.0, 2.0)])
        solver = mock.MockPointSolver(model_positions=model_positions)
        analysis = al.AnalysisPoint(point_dict=point_dict, solver=solver)
        instance = model.instance_from_unit_vector([])
        analysis_log_likelihood = analysis.log_likelihood_function(instance=instance)

        fit_positions = al.FitPositionsImage(
            name="point_0",
            positions=positions_x2,
            noise_map=positions_x2_noise_map,
            tracer=tracer,
            point_solver=solver,
        )

        fit_fluxes = al.FitFluxes(
            name="point_0",
            fluxes=fluxes_x2,
            noise_map=fluxes_x2_noise_map,
            positions=positions_x2,
            tracer=tracer,
        )

        assert fit_positions.residual_map.in_list == [1.0, 1.0]
        assert fit_positions.chi_squared == 2.0
        assert (
            fit_positions.log_likelihood + fit_fluxes.log_likelihood
            == analysis_log_likelihood
        )
| 32.809249 | 87 | 0.618746 | 617 | 5,676 | 5.350081 | 0.142626 | 0.050894 | 0.054529 | 0.06907 | 0.853378 | 0.820357 | 0.816116 | 0.798546 | 0.798546 | 0.786428 | 0 | 0.022642 | 0.299683 | 5,676 | 172 | 88 | 33 | 0.807799 | 0 | 0 | 0.695313 | 0 | 0 | 0.013445 | 0 | 0 | 0 | 0 | 0 | 0.078125 | 1 | 0.023438 | false | 0 | 0.039063 | 0 | 0.070313 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9cd8b26a433706a15cea5d23d29430f8986fcfa9 | 40 | py | Python | __init__.py | JanMarcelKezmann/Semi-Supervised-Learning-Image-Classification | b32599d1a5f28beefb4f9a744087e1dc47bd9906 | [
"MIT"
] | 4 | 2021-04-16T18:56:46.000Z | 2021-11-18T07:14:04.000Z | __init__.py | JanMarcelKezmann/Semi-Supervised-Learning-Image-Classification | b32599d1a5f28beefb4f9a744087e1dc47bd9906 | [
"MIT"
] | 2 | 2022-02-05T16:55:14.000Z | 2022-03-06T11:04:23.000Z | __init__.py | JanMarcelKezmann/Semi-Supervised-Learning-Image-Classification | b32599d1a5f28beefb4f9a744087e1dc47bd9906 | [
"MIT"
] | 2 | 2022-02-08T12:43:49.000Z | 2022-03-06T10:47:56.000Z | from .ssl_image_classification import *
| 20 | 39 | 0.85 | 5 | 40 | 6.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 40 | 1 | 40 | 40 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
142c0e5a92052cf493665126cb4a27fafc46e4ab | 3,044 | py | Python | tests/test_ec2_get_instances.py | MattSegal/cjob | 6f300bb6da24b0e8b17d45dd06d1520380536bca | [
"MIT"
] | 1 | 2021-11-05T23:36:01.000Z | 2021-11-05T23:36:01.000Z | tests/test_ec2_get_instances.py | MattSegal/cjob | 6f300bb6da24b0e8b17d45dd06d1520380536bca | [
"MIT"
] | null | null | null | tests/test_ec2_get_instances.py | MattSegal/cjob | 6f300bb6da24b0e8b17d45dd06d1520380536bca | [
"MIT"
] | null | null | null | """
Tests for EC2 instance lookup.
"""
import boto3
from moto import mock_ec2
import cjob.ec2 as ec2
from tests.utils import create_test_instance
@mock_ec2
def test_get_instances__with_no_instances():
client = boto3.client("ec2", region_name="ap-southeast-2")
instances = ec2.get_instances(client)
assert instances == []
@mock_ec2
def test_get_instances__with_no_cjob_instances():
client = boto3.client("ec2", region_name="ap-southeast-2")
create_test_instance(client, "foo")
create_test_instance(client, "when-harry-met-sally")
create_test_instance(client, "")
instances = ec2.get_instances(client)
assert instances == []
@mock_ec2
def test_get_instances__with_mixed_instances():
client = boto3.client("ec2", region_name="ap-southeast-2")
id_a = create_test_instance(client, ec2.add_job_prefix("foo"))
id_b = create_test_instance(client, ec2.add_job_prefix("bar-baz"))
create_test_instance(client, "foo")
create_test_instance(client, "when-harry-met-sally")
create_test_instance(client, "")
instances = ec2.get_instances(client)
# Only prefixed instances count
assert len(instances) == 2
assert instances[0].id == id_a
assert instances[0].name == ec2.add_job_prefix("foo")
assert instances[0].state == "running"
assert instances[1].id == id_b
assert instances[1].name == ec2.add_job_prefix("bar-baz")
assert instances[1].state == "running"
# Terminated instances don't count
client.terminate_instances(InstanceIds=[id_a])
instances = ec2.get_instances(client)
assert len(instances) == 1
assert instances[0].id == id_b
assert instances[0].name == ec2.add_job_prefix("bar-baz")
assert instances[0].state == "running"
@mock_ec2
def test_find_instance__with_no_instances():
client = boto3.client("ec2", region_name="ap-southeast-2")
name = ec2.add_job_prefix("foo")
instance = ec2.find_instance(client, name)
assert instance is None
@mock_ec2
def test_find_instance__with_mixed_instances__when_not_exists():
client = boto3.client("ec2", region_name="ap-southeast-2")
create_test_instance(client, "foo")
create_test_instance(client, "when-harry-met-sally")
create_test_instance(client, "")
create_test_instance(client, ec2.add_job_prefix("bar-baz"))
name = ec2.add_job_prefix("foo")
instance = ec2.find_instance(client, name)
assert instance is None
@mock_ec2
def test_find_instance__with_mixed_instances__when_exists():
client = boto3.client("ec2", region_name="ap-southeast-2")
name = ec2.add_job_prefix("foo")
create_test_instance(client, "foo")
create_test_instance(client, "when-harry-met-sally")
create_test_instance(client, "")
create_test_instance(client, ec2.add_job_prefix("bar-baz"))
instance_id = create_test_instance(client, name)
instance = ec2.find_instance(client, name)
assert instance is not None
assert instance.id == instance_id
assert instance.name == name
assert instance.state == "running"
| 33.822222 | 70 | 0.727332 | 427 | 3,044 | 4.882904 | 0.138173 | 0.134293 | 0.155396 | 0.195683 | 0.805755 | 0.755396 | 0.736691 | 0.724221 | 0.667626 | 0.605755 | 0 | 0.02205 | 0.150788 | 3,044 | 89 | 71 | 34.202247 | 0.784526 | 0.03088 | 0 | 0.608696 | 0 | 0 | 0.092486 | 0 | 0 | 0 | 0 | 0 | 0.275362 | 1 | 0.086957 | false | 0 | 0.057971 | 0 | 0.144928 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
14400afdbe8c357fd57f91688a7156d161a4b977 | 305 | py | Python | rhobot/components/commands/__init__.py | rerobins/rhobot_framework | f97d1cedc929387f69448e41346a0d15fe202eef | [
"BSD-3-Clause"
] | null | null | null | rhobot/components/commands/__init__.py | rerobins/rhobot_framework | f97d1cedc929387f69448e41346a0d15fe202eef | [
"BSD-3-Clause"
] | null | null | null | rhobot/components/commands/__init__.py | rerobins/rhobot_framework | f97d1cedc929387f69448e41346a0d15fe202eef | [
"BSD-3-Clause"
] | null | null | null | from rhobot.components.commands.base_command import BaseCommand
from rhobot.components.commands.reset_configuration import reset_configuration
from rhobot.components.commands.export_configuration import export_configuration
from rhobot.components.commands.import_configuration import import_configuration
| 61 | 80 | 0.908197 | 35 | 305 | 7.714286 | 0.314286 | 0.148148 | 0.296296 | 0.414815 | 0.303704 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.052459 | 305 | 4 | 81 | 76.25 | 0.934256 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1ad83a8ef2d359828633a0018a5ed435185e4109 | 40 | py | Python | cparse/__init__.py | luciancooper/cparse | fd6c5733c821e38f0be46ed930d763107ad8deea | [
"MIT"
] | null | null | null | cparse/__init__.py | luciancooper/cparse | fd6c5733c821e38f0be46ed930d763107ad8deea | [
"MIT"
] | null | null | null | cparse/__init__.py | luciancooper/cparse | fd6c5733c821e38f0be46ed930d763107ad8deea | [
"MIT"
] | null | null | null | from .fpath import *
from .tree import * | 20 | 20 | 0.725 | 6 | 40 | 4.833333 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.175 | 40 | 2 | 21 | 20 | 0.878788 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
215f3605582cf8f1b95d007ac8c7d0a25e5118ac | 4,781 | py | Python | tests/test_extensions/test_tabbed.py | Lincoln2000/pymdown-extensions | f6ad2d410c9463db7f9f609ee5024e9c59bc14d8 | [
"MIT"
] | null | null | null | tests/test_extensions/test_tabbed.py | Lincoln2000/pymdown-extensions | f6ad2d410c9463db7f9f609ee5024e9c59bc14d8 | [
"MIT"
] | 2 | 2019-12-10T23:12:38.000Z | 2020-02-20T23:27:26.000Z | tests/test_extensions/test_tabbed.py | Lincoln2000/pymdown-extensions | f6ad2d410c9463db7f9f609ee5024e9c59bc14d8 | [
"MIT"
] | null | null | null | """Test cases for SuperFences."""
from .. import util
class TestLegacyTab(util.MdCase):
    """Test legacy tab cases."""

    extension = ['pymdownx.tabbed', 'pymdownx.superfences']
    extension_configs = {}

    def test_tabbed(self):
        """Test tabbed."""

        self.check_markdown(
            r'''
            === "Tab"
                Some *content*

                And more `content`.

            === "Another Tab"
                Some more content.
                ```
                code
                ```
            ''',
            r'''
            <div class="tabbed-set" data-tabs="1:2"><input checked="checked" id="__tabbed_1_1" name="__tabbed_1" type="radio" /><label for="__tabbed_1_1">Tab</label><div class="tabbed-content">
            <p>Some <em>content</em></p>
            <p>And more <code>content</code>.</p>
            </div>
            <input id="__tabbed_1_2" name="__tabbed_1" type="radio" /><label for="__tabbed_1_2">Another Tab</label><div class="tabbed-content">
            <p>Some more content.</p>
            <div class="highlight"><pre><span></span><code>code
            </code></pre></div>
            </div>
            </div>
            ''',  # noqa: E501
            True
        )

    def test_nested_tabbed(self):
        """Test nested tabbed."""

        self.check_markdown(
            r'''
            === "Tab"
                Some *content*

                === "Tab A"
                    - item 1

                    - item 2

                === "Tab B"
                    - item A

                    - item B

            === "Another Tab"
                Some more content.
            ''',
            r'''
            <div class="tabbed-set" data-tabs="1:2"><input checked="checked" id="__tabbed_1_1" name="__tabbed_1" type="radio" /><label for="__tabbed_1_1">Tab</label><div class="tabbed-content">
            <p>Some <em>content</em></p>
            <div class="tabbed-set" data-tabs="2:2"><input checked="checked" id="__tabbed_2_1" name="__tabbed_2" type="radio" /><label for="__tabbed_2_1">Tab A</label><div class="tabbed-content">
            <ul>
            <li>
            <p>item 1</p>
            </li>
            <li>
            <p>item 2</p>
            </li>
            </ul>
            </div>
            <input id="__tabbed_2_2" name="__tabbed_2" type="radio" /><label for="__tabbed_2_2">Tab B</label><div class="tabbed-content">
            <ul>
            <li>
            <p>item A</p>
            </li>
            <li>
            <p>item B</p>
            </li>
            </ul>
            </div>
            </div>
            </div>
            <input id="__tabbed_1_2" name="__tabbed_1" type="radio" /><label for="__tabbed_1_2">Another Tab</label><div class="tabbed-content">
            <p>Some more content.</p>
            </div>
            </div>
            ''',  # noqa: E501
            True
        )

    def test_tabbed_split(self):
        """Force a split of tab sets."""

        self.check_markdown(
            r'''
            === "Tab"
                Some *content*

                And more `content`.

            ===! "Another Tab"
                Some more content.
                ```
                code
                ```
            ''',
            r'''
            <div class="tabbed-set" data-tabs="1:1"><input checked="checked" id="__tabbed_1_1" name="__tabbed_1" type="radio" /><label for="__tabbed_1_1">Tab</label><div class="tabbed-content">
            <p>Some <em>content</em></p>
            <p>And more <code>content</code>.</p>
            </div>
            </div>
            <div class="tabbed-set" data-tabs="2:1"><input checked="checked" id="__tabbed_2_1" name="__tabbed_2" type="radio" /><label for="__tabbed_2_1">Another Tab</label><div class="tabbed-content">
            <p>Some more content.</p>
            <div class="highlight"><pre><span></span><code>code
            </code></pre></div>
            </div>
            </div>
            ''',  # noqa: E501
            True
        )

    def test_tabbed_break(self):
        """Test that tabs are properly terminated on blocks that are not under the tab."""
self.check_markdown(
r'''
=== "Tab"
Some *content*
And more `content`.
Content
''',
r'''
<div class="tabbed-set" data-tabs="1:1"><input checked="checked" id="__tabbed_1_1" name="__tabbed_1" type="radio" /><label for="__tabbed_1_1">Tab</label><div class="tabbed-content">
<p>Some <em>content</em></p>
<p>And more <code>content</code>.</p>
</div>
</div>
<p>Content</p>
''', # noqa: E501
True
)
| 31.248366 | 201 | 0.452207 | 531 | 4,781 | 3.862524 | 0.12806 | 0.061433 | 0.102389 | 0.074598 | 0.812774 | 0.794247 | 0.793272 | 0.767918 | 0.729888 | 0.671867 | 0 | 0.024763 | 0.383393 | 4,781 | 152 | 202 | 31.453947 | 0.670963 | 0.048525 | 0 | 0.571429 | 0 | 0 | 0.052006 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.035714 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0d05e71bffdca922d35a0d3b643f077fa12a0d89 | 30 | py | Python | plugins/category_order/__init__.py | mohnjahoney/website_source | edc86a869b90ae604f32e736d9d5ecd918088e6a | [
"MIT"
] | 13 | 2020-01-27T09:02:25.000Z | 2022-01-20T07:45:26.000Z | plugins/category_order/__init__.py | mohnjahoney/website_source | edc86a869b90ae604f32e736d9d5ecd918088e6a | [
"MIT"
] | 29 | 2020-03-22T06:57:57.000Z | 2022-01-24T22:46:42.000Z | plugins/category_order/__init__.py | mohnjahoney/website_source | edc86a869b90ae604f32e736d9d5ecd918088e6a | [
"MIT"
] | 6 | 2020-07-10T00:13:30.000Z | 2022-01-26T08:22:33.000Z | from .category_order import *
| 15 | 29 | 0.8 | 4 | 30 | 5.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 30 | 1 | 30 | 30 | 0.884615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0d39df968515744d4f2a4d03b2f20e67170b7c4a | 45 | py | Python | tests/test_load.py | mdbloice/Labeller | 3f204589b8f573bc6a3640cd630155ce1ad889fe | [
"MIT"
] | 2 | 2021-09-30T18:04:33.000Z | 2021-10-01T07:20:04.000Z | tests/test_load.py | mdbloice/Labeller | 3f204589b8f573bc6a3640cd630155ce1ad889fe | [
"MIT"
] | null | null | null | tests/test_load.py | mdbloice/Labeller | 3f204589b8f573bc6a3640cd630155ce1ad889fe | [
"MIT"
] | null | null | null | def test_actions_setup():
print("All OK") | 22.5 | 25 | 0.688889 | 7 | 45 | 4.142857 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.155556 | 45 | 2 | 26 | 22.5 | 0.763158 | 0 | 0 | 0 | 0 | 0 | 0.130435 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
0d40c1c8fc52a2517a9a558b0245407ed7780578 | 15 | py | Python | test.py | iAmCorey/CARP | 21af683fc8a8fb5161e721343f644afe94ed0a4f | [
"MIT"
] | null | null | null | test.py | iAmCorey/CARP | 21af683fc8a8fb5161e721343f644afe94ed0a4f | [
"MIT"
] | null | null | null | test.py | iAmCorey/CARP | 21af683fc8a8fb5161e721343f644afe94ed0a4f | [
"MIT"
] | null | null | null | import heapq
| 3.75 | 12 | 0.733333 | 2 | 15 | 5.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.266667 | 15 | 3 | 13 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b4a1878309c06b25919cadd211788d0490e4c242 | 60 | py | Python | tests/conftest.py | voegtlel/depot-manager-server | 1092d60e8d8d44dc3daa11aa0e30bb3294daa548 | [
"MIT"
] | null | null | null | tests/conftest.py | voegtlel/depot-manager-server | 1092d60e8d8d44dc3daa11aa0e30bb3294daa548 | [
"MIT"
] | null | null | null | tests/conftest.py | voegtlel/depot-manager-server | 1092d60e8d8d44dc3daa11aa0e30bb3294daa548 | [
"MIT"
] | null | null | null | from .motor_mock import motor_mock
motor_mock = motor_mock
| 15 | 34 | 0.833333 | 10 | 60 | 4.6 | 0.4 | 0.782609 | 0.608696 | 0.782609 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 60 | 3 | 35 | 20 | 0.884615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
b4d2b67776ad5001c96756a892e32bdaa8283b66 | 182 | py | Python | bindARP.py | GeraltShi/verilog-mii | c5be9dda939f53d5886abbcddca26ebc2af65947 | [
"Apache-2.0"
] | 3 | 2020-07-25T03:24:20.000Z | 2021-05-12T20:59:02.000Z | bindARP.py | GeraltShi/verilog-mii | c5be9dda939f53d5886abbcddca26ebc2af65947 | [
"Apache-2.0"
] | null | null | null | bindARP.py | GeraltShi/verilog-mii | c5be9dda939f53d5886abbcddca26ebc2af65947 | [
"Apache-2.0"
] | null | null | null | import os
os.system('echo Binding dev board mac to temp arp list...')
os.system('arp -s 192.168.3.123 00:00:e5:00:af:ec')
os.system('echo Bound 192.168.3.123 to 00:00:e5:00:af:ec')
b4f753c5c28b7995e95b7ae41d409400e30d23d8 | 118 | py | Python | subsamplr/__init__.py | Living-with-machines/subsamplr | 3b5d07f86136e8e62628155a695c8f27c1dab8ef | [
"MIT"
] | 1 | 2022-01-13T11:38:09.000Z | 2022-01-13T11:38:09.000Z | subsamplr/__init__.py | Living-with-machines/subsamplr | 3b5d07f86136e8e62628155a695c8f27c1dab8ef | [
"MIT"
] | null | null | null | subsamplr/__init__.py | Living-with-machines/subsamplr | 3b5d07f86136e8e62628155a695c8f27c1dab8ef | [
"MIT"
] | 1 | 2022-03-30T14:44:35.000Z | 2022-03-30T14:44:35.000Z | from subsamplr.core.bin import BinCollection
from subsamplr.data.unit_generator import UnitGenerator, DbUnitGenerator
| 39.333333 | 72 | 0.881356 | 14 | 118 | 7.357143 | 0.785714 | 0.252427 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.076271 | 118 | 2 | 73 | 59 | 0.944954 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2e98ba601169a47a0e6e20fd3a6f3b32f0a387ab | 9,840 | py | Python | tests/tests_f12020/test_session.py | f1laps/f1laps-telemetry | 0c264f9300d58397fe2f8b3018cd2e9151e28d08 | [
"MIT"
] | 3 | 2021-02-23T22:06:13.000Z | 2022-02-06T15:05:56.000Z | tests/tests_f12020/test_session.py | f1laps/f1laps-telemetry | 0c264f9300d58397fe2f8b3018cd2e9151e28d08 | [
"MIT"
] | null | null | null | tests/tests_f12020/test_session.py | f1laps/f1laps-telemetry | 0c264f9300d58397fe2f8b3018cd2e9151e28d08 | [
"MIT"
] | null | null | null | from unittest import TestCase
from unittest.mock import MagicMock, patch
import unittest
import json
from receiver.f12020.session import Session
from receiver.f12020.api import F1LapsAPI
class SessionBaseTests(TestCase):
def test_empty_session_object(self):
session = Session(session_uid="vettel2021")
self.assertEqual(session.session_udp_uid, "vettel2021")
def test_map_udp_session_id_to_f1laps_token(self):
session = Session(session_uid="vettel2021")
self.assertEqual(session.map_udp_session_id_to_f1laps_token(), None)
self.assertEqual(session.session_type_supported_by_f1laps_as_session(), False)
session.session_type = 10
self.assertEqual(session.map_udp_session_id_to_f1laps_token(), "race")
self.assertEqual(session.session_type_supported_by_f1laps_as_session(), True)
session.session_type = 12
self.assertEqual(session.map_udp_session_id_to_f1laps_token(), "time_trial")
self.assertEqual(session.session_type_supported_by_f1laps_as_session(), False)
session.session_type = 7
self.assertEqual(session.map_udp_session_id_to_f1laps_token(), "qualifying_3")
def test_map_weather_ids_to_f1laps_token(self):
session = Session(session_uid="vettel2021")
self.assertEqual(session.map_weather_ids_to_f1laps_token(), "dry")
session.weather_ids = [1]
self.assertEqual(session.map_weather_ids_to_f1laps_token(), "dry")
session.weather_ids = [0, 1]
self.assertEqual(session.map_weather_ids_to_f1laps_token(), "dry")
session.weather_ids = [3]
self.assertEqual(session.map_weather_ids_to_f1laps_token(), "wet")
session.weather_ids = [3, 5]
self.assertEqual(session.map_weather_ids_to_f1laps_token(), "wet")
session.weather_ids = [0, 1, 4, 5]
self.assertEqual(session.map_weather_ids_to_f1laps_token(), "mixed")
def test_get_f1laps_lap_times_list(self):
session = Session(session_uid="vettel2021")
self.assertEqual(session.get_f1laps_lap_times_list(), [])
session.lap_list = {
1: {"sector_1_time_ms" : 10001, "sector_2_time_ms": 20001, "sector_3_time_ms": 30001, "lap_number": 1, "car_race_position": 1, "pit_status": 0},
2: {"sector_1_time_ms" : 10002, "sector_2_time_ms": 20002, "sector_3_time_ms": 30002, "lap_number": 2, "car_race_position": 1, "pit_status": 1, "tyre_compound_visual": 8},
}
self.assertEqual(session.get_f1laps_lap_times_list(), [
{'car_race_position': 1, 'lap_number': 1, 'pit_status': 0, 'sector_1_time_ms': 10001, 'sector_2_time_ms': 20001, 'sector_3_time_ms': 30001, "tyre_compound_visual": None, 'telemetry_data_string': None,},
{'car_race_position': 1, 'lap_number': 2, 'pit_status': 1, 'sector_1_time_ms': 10002, 'sector_2_time_ms': 20002, 'sector_3_time_ms': 30002, "tyre_compound_visual": 8, 'telemetry_data_string': None,}
])
# incomplete lap should not be returned
session.lap_list = {
1: {"sector_1_time_ms" : 10001, "sector_2_time_ms": 20001, "sector_3_time_ms": 30001, "lap_number": 1, "car_race_position": 1, "pit_status": 0},
2: {"sector_1_time_ms" : 10002, "sector_2_time_ms": 20002, "sector_3_time_ms": 30002, "lap_number": 2, "car_race_position": 1, "pit_status": 1, "tyre_compound_visual": 16},
3: {"sector_1_time_ms" : 10002, "sector_2_time_ms": None , "sector_3_time_ms": 30002, "lap_number": 3, "car_race_position": 1, "pit_status": 1},
}
self.assertEqual(session.get_f1laps_lap_times_list(), [
{'car_race_position': 1, 'lap_number': 1, 'pit_status': 0, 'sector_1_time_ms': 10001, 'sector_2_time_ms': 20001, 'sector_3_time_ms': 30001, "tyre_compound_visual": None, 'telemetry_data_string': None,},
{'car_race_position': 1, 'lap_number': 2, 'pit_status': 1, 'sector_1_time_ms': 10002, 'sector_2_time_ms': 20002, 'sector_3_time_ms': 30002, "tyre_compound_visual": 16, 'telemetry_data_string': None,}
])
def test_get_track_name(self):
session = Session(session_uid="vettel2021")
self.assertEqual(session.get_track_name(), None)
session.track_id = 11
self.assertEqual(session.get_track_name(), "Monza")
def test_get_lap_telemetry_data(self):
session = Session(session_uid="vettel2021")
self.assertEqual(session.get_lap_telemetry_data(1), None)
session.telemetry = MagicMock()
session.telemetry.get_telemetry_api_dict.return_value = {"test": "dict"}
self.assertEqual(session.get_lap_telemetry_data(1), '{"test": "dict"}')
session.telemetry_enabled = False
self.assertEqual(session.get_lap_telemetry_data(1), None)
class SessionAPITests(TestCase):
def setUp(self):
self.session = Session(session_uid="vettel2021")
self.session.team_id = 1
self.session.track_id = 5
self.session.lap_number_current = 2
self.session.lap_list = {1: {"sector_1_time_ms" : 10001, "sector_2_time_ms": 20001, "sector_3_time_ms": 30001, "lap_number": 1, "car_race_position": 1, "pit_status": 0}}
@patch('receiver.f12020.session.F1LapsAPI.lap_create')
def test_single_lap_success(self, mock_lap_create_api):
# set session_type to time_trial to test single lap
self.session.session_type = 12
mock_lap_create_api.return_value = MagicMock(status_code=201)
self.assertEqual(self.session.process_lap_in_f1laps(1), True)
@patch('receiver.f12020.session.F1LapsAPI.lap_create')
def test_single_lap_fail(self, mock_lap_create_api):
# set session_type to time_trial to test single lap
self.session.session_type = 12
mock_lap_create_api.return_value = MagicMock(status_code=404, content=json.dumps({"error": "it didnt work"}))
self.assertEqual(self.session.process_lap_in_f1laps(1), False)
@patch('receiver.f12020.session.F1LapsAPI.session_create')
def test_session_create_success(self, mock_session_create_api):
# set session_type to race to test session
self.session.session_type = 10
# make sure we have no f1laps id in the session yet
self.session.f1_laps_session_id = None
mock_session_create_api.return_value = MagicMock(status_code=201, content=json.dumps({"id": "astonmartin4tw"}))
self.assertEqual(self.session.process_lap_in_f1laps(1), True)
self.assertEqual(self.session.f1_laps_session_id, "astonmartin4tw")
@patch('receiver.f12020.session.F1LapsAPI.session_update')
def test_session_update_success(self, mock_session_update_api):
# set session_type to race to test session
self.session.session_type = 10
# make sure we have a f1laps id in the session
self.session.f1_laps_session_id = "astonmartin4tw"
mock_session_update_api.return_value = MagicMock(status_code=200)
self.assertEqual(self.session.process_lap_in_f1laps(1), True)
@patch('receiver.f12020.session.F1LapsAPI.session_update')
def test_session_update_error(self, mock_session_update_api):
# set session_type to race to test session
self.session.session_type = 10
# make sure we have a f1laps id in the session
self.session.f1_laps_session_id = "astonmartin4tw"
mock_session_update_api.return_value = MagicMock(status_code=404, content=json.dumps({"error": "it didnt work"}))
self.assertEqual(self.session.process_lap_in_f1laps(1), False)
@patch('receiver.f12020.session.F1LapsAPI.session_update')
@patch('receiver.f12020.session.F1LapsAPI.session_list')
@patch('receiver.f12020.session.F1LapsAPI.session_create')
def test_session_create_error_list_success_update(self, mock_session_create_api, mock_session_list_api, mock_session_update_api):
# set session_type to race to test session
self.session.session_type = 10
# make sure we don't have a f1laps id in the session yet
self.session.f1_laps_session_id = None
mock_session_create_api.return_value = MagicMock(status_code=400, content=json.dumps({"error": "already exists"}))
mock_session_list_api.return_value = MagicMock(status_code=200, content=json.dumps({"results": [{"id": "astonmartin4tw"}]}))
mock_session_update_api.return_value = MagicMock(status_code=200)
self.assertEqual(self.session.process_lap_in_f1laps(1), True)
self.assertEqual(self.session.f1_laps_session_id, "astonmartin4tw")
@patch('receiver.f12020.session.F1LapsAPI.session_list')
@patch('receiver.f12020.session.F1LapsAPI.session_create')
def test_session_create_error_list_error(self, mock_session_create_api, mock_session_list_api):
# set session_type to race to test session
self.session.session_type = 10
# make sure we don't have a f1laps id in the session yet
self.session.f1_laps_session_id = None
mock_session_create_api.return_value = MagicMock(status_code=400, content=json.dumps({"error": "already exists"}))
mock_session_list_api.return_value = MagicMock(status_code=200, content=json.dumps({"results": []}))
self.assertEqual(self.session.process_lap_in_f1laps(1), False)
@patch('receiver.f12020.session.F1LapsAPI.session_create')
def test_session_create_error_401(self, mock_session_create_api):
# set session_type to race to test session
self.session.session_type = 10
# make sure we don't have a f1laps id in the session yet
self.session.f1_laps_session_id = None
mock_session_create_api.return_value = MagicMock(status_code=401, content=json.dumps({"error": "invalid token"}))
self.assertEqual(self.session.process_lap_in_f1laps(1), False)
if __name__ == '__main__':
unittest.main() | 58.922156 | 215 | 0.717581 | 1,368 | 9,840 | 4.788743 | 0.098684 | 0.05877 | 0.073882 | 0.043657 | 0.873302 | 0.869638 | 0.86109 | 0.843383 | 0.833308 | 0.796825 | 0 | 0.056081 | 0.17185 | 9,840 | 167 | 216 | 58.922156 | 0.747822 | 0.069919 | 0 | 0.492308 | 0 | 0 | 0.210901 | 0.065667 | 0 | 0 | 0 | 0 | 0.246154 | 1 | 0.115385 | false | 0 | 0.038462 | 0 | 0.169231 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
2eb3c2bef56b7e0e81db6ff4a7401792e67495c5 | 1,554 | py | Python | app/decorators/decorators.py | hoslack/Book-A-Meal_API | c67d19d4f2b785904ad0b8e2b2a7408e8f296a1f | [
"MIT"
] | 1 | 2018-12-14T11:11:01.000Z | 2018-12-14T11:11:01.000Z | app/decorators/decorators.py | hoslack/Book-A-Meal_API | c67d19d4f2b785904ad0b8e2b2a7408e8f296a1f | [
"MIT"
] | null | null | null | app/decorators/decorators.py | hoslack/Book-A-Meal_API | c67d19d4f2b785904ad0b8e2b2a7408e8f296a1f | [
"MIT"
] | 1 | 2018-05-06T21:43:19.000Z | 2018-05-06T21:43:19.000Z | from functools import wraps
from flask import request
from app.models.models import User
from app.custom_http_respones.responses import Success, Error
success = Success()
error = Error()
def token_required(f):
@wraps(f)
def decorated(*args, **kwargs):
access_token = None
if 'Authorization' in request.headers:
auth_header = request.headers.get('Authorization')
access_token = auth_header.split(" ")[1]
if not access_token:
return error.unauthorized("Please login to perform this action")
user_id = User.decode_token(access_token)
if isinstance(user_id, str):
return error.forbidden_action("Token has been rejected")
return f(*args, user_id=user_id, **kwargs)
return decorated
def admin_only(f):
@wraps(f)
def decorated(*args, **kwargs):
access_token = None
if 'Authorization' in request.headers:
auth_header = request.headers.get('Authorization')
access_token = auth_header.split(" ")[1]
if not access_token:
return error.unauthorized("Please login to perform this action")
user_id = User.decode_token(access_token)
if isinstance(user_id, str):
return error.forbidden_action("Token has been rejected")
user = User.query.filter_by(id=user_id).first()
if not user.admin:
return error.unauthorized("This action can only be performed by admin")
return f(*args, user_id=user_id, **kwargs)
return decorated
| 33.782609 | 83 | 0.658301 | 197 | 1,554 | 5.040609 | 0.299492 | 0.054381 | 0.040282 | 0.020141 | 0.712991 | 0.712991 | 0.712991 | 0.712991 | 0.712991 | 0.712991 | 0 | 0.001709 | 0.247104 | 1,554 | 45 | 84 | 34.533333 | 0.847009 | 0 | 0 | 0.702703 | 0 | 0 | 0.136686 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.108108 | false | 0 | 0.108108 | 0 | 0.459459 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
2ee409320ffaf6b7cf4f7f851d0a321b2c4c7e0e | 182 | py | Python | tests/app/tests/__init__.py | modohash/django-hstore-flattenfields | 09626a638b9ef85d28fa5bfef1b040f9926bb95b | [
"BSD-3-Clause"
] | 5 | 2015-09-18T16:35:56.000Z | 2020-12-24T11:46:17.000Z | tests/app/tests/__init__.py | modohash/django-hstore-flattenfields | 09626a638b9ef85d28fa5bfef1b040f9926bb95b | [
"BSD-3-Clause"
] | 9 | 2020-02-11T22:01:06.000Z | 2021-06-10T17:46:04.000Z | tests/app/tests/__init__.py | modohash/django-hstore-flattenfields | 09626a638b9ef85d28fa5bfef1b040f9926bb95b | [
"BSD-3-Clause"
] | 2 | 2015-10-20T10:21:30.000Z | 2016-03-23T09:54:54.000Z | from test_types import *
from test_queryset import *
from test_model_fields import *
from test_forms import *
from test_dynamic_field_groups import *
from test_content_panes import * | 30.333333 | 39 | 0.840659 | 28 | 182 | 5.107143 | 0.464286 | 0.335664 | 0.48951 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.126374 | 182 | 6 | 40 | 30.333333 | 0.899371 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
2ee930632efcb6d2fc3f595f7af5e38c3e673d15 | 3,703 | py | Python | tests/utils/test_pao_manager.py | albgar/legacy_aiida_plugin | a375d357c88b30209bee9b341064f84a8ef244d6 | [
"MIT"
] | 1 | 2022-02-09T11:43:42.000Z | 2022-02-09T11:43:42.000Z | tests/utils/test_pao_manager.py | albgar/legacy_aiida_plugin | a375d357c88b30209bee9b341064f84a8ef244d6 | [
"MIT"
] | 12 | 2020-12-08T16:55:06.000Z | 2022-02-23T15:51:15.000Z | tests/utils/test_pao_manager.py | albgar/legacy_aiida_plugin | a375d357c88b30209bee9b341064f84a8ef244d6 | [
"MIT"
] | 1 | 2021-01-05T15:27:15.000Z | 2021-01-05T15:27:15.000Z | import pytest
from aiida_siesta.utils.pao_manager import PaoManager
def test_set_from_ion(generate_ion_data):
pao_man = PaoManager()
ion = generate_ion_data('Si')
pao_man.set_from_ion(ion)
assert pao_man.name == "Si"
assert pao_man._gen_dict is not None
assert pao_man._pol_dict == {3: {1: {1: 4.0531999999999995, 2: 3.1566}}}
def test_validator_and_get_pao_block():
pao_man = PaoManager()
with pytest.raises(RuntimeError):
pao_man.get_pao_block()
pao_man.name = "Si"
with pytest.raises(RuntimeError):
pao_man.get_pao_block()
pao_man._gen_dict = {3: {0: {1: 4.05}}}
with pytest.raises(RuntimeError):
pao_man.get_pao_block()
pao_man._pol_dict = {}
assert pao_man.get_pao_block() == "Si 1\n n=3 0 1 \n 7.6533504667600045"
pao_man._gen_dict = {}
with pytest.raises(RuntimeError):
pao_man.get_pao_block()
def test_pao_size(generate_ion_data):
pao_man = PaoManager()
ion = generate_ion_data('Si')
pao_man.set_from_ion(ion)
assert pao_man.pao_size() == "DZDP"
def test_change_all_radius():
pao_man = PaoManager()
pao_man.name = "Si"
pao_man._gen_dict = {3: {0: {1: 4.05}}}
pao_man._pol_dict = {3: {0: {1: 4.05}}}
pao_man.change_all_radius(2)
assert pao_man._gen_dict == {3: {0: {1: 4.131}}}
assert pao_man._pol_dict == {3: {0: {1: 4.131}}}
def test_reset_radius():
pao_man = PaoManager()
pao_man.name = "Si"
pao_man._gen_dict = {3: {0: {1: 4.05}}}
pao_man._pol_dict = {3: {0: {1: 4.05}}}
with pytest.raises(ValueError):
pao_man.reset_radius("Bohr",0.0,3,1,2)
pao_man.reset_radius("Bohr",0.0,3,0,1)
assert pao_man._gen_dict == {3: {0: {1: 0.0}}}
assert pao_man._pol_dict == {3: {0: {1: 0.0}}}
def test_add_polarization():
pao_man = PaoManager()
pao_man.name = "Si"
pao_man._gen_dict = {3: {0: {1: 4.05}}}
pao_man._pol_dict = {3: {0: {1: 4.05}}}
with pytest.raises(ValueError):
pao_man.add_polarization(3,1)
pao_man.add_polarization(3,0)
assert pao_man._pol_dict == {3: {0: {1: 4.05, 2: 0.0}}}
assert pao_man.pao_size() == "SZDP"
def test_remove_polarization():
pao_man = PaoManager()
pao_man.name = "Si"
pao_man._gen_dict = {3: {0: {1: 4.05}}}
pao_man._pol_dict = {3: {0: {1: 4.05, 2: 0.0}}}
with pytest.raises(ValueError):
pao_man.remove_polarization(3,1)
pao_man.remove_polarization(3,0)
assert pao_man._pol_dict == {3: {0: {1: 4.05}}}
assert pao_man.pao_size() == "SZP"
pao_man.remove_polarization(3,0)
assert pao_man._pol_dict == {}
assert pao_man.pao_size() == "SZ"
def test_add_orbital():
pao_man = PaoManager()
pao_man.name = "Si"
pao_man._gen_dict = {3: {0: {1: 4.05}}}
pao_man._pol_dict = {3: {0: {1: 4.05}}}
with pytest.raises(ValueError):
pao_man.add_orbital("Bohr",0.0,3,1,2)
pao_man.add_orbital("Bohr",0.0,3,0,2)
assert pao_man._gen_dict == {3: {0: {1: 4.05, 2: 0.0}}}
assert pao_man.pao_size() == "DZP"
def test_remove_orbital():
pao_man = PaoManager()
pao_man.name = "Si"
pao_man._gen_dict = {3: {0: {1: 4.05, 2: 0.0}}}
pao_man._pol_dict = {3: {0: {1: 4.05}}}
with pytest.raises(ValueError):
pao_man.remove_orbital(3,1,1)
with pytest.raises(ValueError):
pao_man.remove_orbital(3,0,1)
pao_man.remove_orbital(3,0,2)
assert pao_man._gen_dict == {3: {0: {1: 4.05}}}
assert pao_man._pol_dict == {3: {0: {1: 4.05}}}
assert pao_man.pao_size() == "SZP"
pao_man.remove_orbital(3,0,1)
assert pao_man._gen_dict == {}
assert pao_man._pol_dict == {}
| 23.14375 | 79 | 0.618417 | 618 | 3,703 | 3.383495 | 0.097087 | 0.209469 | 0.037303 | 0.073649 | 0.85318 | 0.797226 | 0.774271 | 0.736968 | 0.672884 | 0.594452 | 0 | 0.080468 | 0.214691 | 3,703 | 159 | 80 | 23.289308 | 0.638583 | 0 | 0 | 0.572917 | 1 | 0 | 0.025399 | 0 | 0 | 0 | 0 | 0 | 0.229167 | 1 | 0.09375 | false | 0 | 0.020833 | 0 | 0.114583 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d3266ffe2efa39a78749902d8b2ac29dd8199706 | 57 | py | Python | cargo/utils/date/__init__.py | dalou/django-cargo | 633d051ca8647623adbab746c9da9153f46e1e8f | [
"BSD-3-Clause"
] | null | null | null | cargo/utils/date/__init__.py | dalou/django-cargo | 633d051ca8647623adbab746c9da9153f46e1e8f | [
"BSD-3-Clause"
] | null | null | null | cargo/utils/date/__init__.py | dalou/django-cargo | 633d051ca8647623adbab746c9da9153f46e1e8f | [
"BSD-3-Clause"
] | null | null | null | from .format import format_date_range, date_range_to_html | 57 | 57 | 0.894737 | 10 | 57 | 4.6 | 0.7 | 0.391304 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.070175 | 57 | 1 | 57 | 57 | 0.867925 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d329b80e7e651e2423c88797010ebfa4437d8dc9 | 11,956 | py | Python | src/test/v8n-tests.py | nschejtman/py-v8n | 00210fe4a3dcb586b4551b5be8be0e82771063fb | [
"MIT"
] | null | null | null | src/test/v8n-tests.py | nschejtman/py-v8n | 00210fe4a3dcb586b4551b5be8be0e82771063fb | [
"MIT"
] | null | null | null | src/test/v8n-tests.py | nschejtman/py-v8n | 00210fe4a3dcb586b4551b5be8be0e82771063fb | [
"MIT"
] | null | null | null | import unittest
from array import array
from src.py_v8n import v8n
class TestV8N(unittest.TestCase):
def test_greater_than(self):
validator = v8n().greater_than(2)
self.assertTrue(validator.test(3))
self.assertFalse(validator.test(2))
self.assertFalse(validator.test(1))
def test_greater_or_equal_than(self):
validator = v8n().greater_or_equal_than(2)
self.assertTrue(validator.test(3))
self.assertTrue(validator.test(2))
self.assertFalse(validator.test(1))
def test_less_than(self):
validator = v8n().less_than(2)
self.assertFalse(validator.test(3))
self.assertFalse(validator.test(2))
self.assertTrue(validator.test(1))
def test_less_or_equal_than(self):
validator = v8n().less_or_equal_than(2)
self.assertFalse(validator.test(3))
self.assertTrue(validator.test(2))
self.assertTrue(validator.test(1))
def test_equal(self):
validator = v8n().equal(2)
self.assertTrue(validator.test(2))
self.assertFalse(validator.test(3))
def test_length(self):
validator = v8n().length(2)
self.assertTrue(validator.test("12"))
self.assertTrue(validator.test([1, 2]))
self.assertFalse(validator.test("1"))
self.assertFalse(validator.test([1]))
def test_min_length(self):
validator = v8n().min_length(2)
self.assertTrue(validator.test("12"))
self.assertTrue(validator.test("123"))
self.assertFalse(validator.test("1"))
self.assertTrue(validator.test([1, 2]))
self.assertTrue(validator.test([1, 2, 3]))
self.assertFalse(validator.test([1]))
def test_max_length(self):
validator = v8n().max_length(3)
self.assertTrue(validator.test([1, 2]))
self.assertTrue(validator.test("12"))
self.assertTrue(validator.test([1, 2, 3]))
self.assertTrue(validator.test("123"))
self.assertFalse(validator.test([1, 2, 3, 4]))
self.assertFalse(validator.test("1234"))
def test_divisible_by(self):
validator = v8n().divisible_by(5)
self.assertTrue(validator.test(10))
self.assertFalse(validator.test(11))
def test_odd(self):
validator = v8n().odd()
self.assertTrue(validator.test(3))
self.assertFalse(validator.test(4))
def test_even(self):
validator = v8n().even()
self.assertFalse(validator.test(3))
self.assertTrue(validator.test(4))
def test_between(self):
validator = v8n().between(2, 4)
self.assertTrue(validator.test(2))
self.assertTrue(validator.test(3))
self.assertTrue(validator.test(4))
self.assertFalse(validator.test(1))
self.assertFalse(validator.test(5))
def test_str_(self):
validator = v8n().str_()
self.assertTrue(validator.test("string"))
self.assertFalse(validator.test(1))
def test_int_(self):
validator = v8n().int_()
self.assertTrue(validator.test(1))
self.assertFalse(validator.test("string"))
def test_bool_(self):
validator = v8n().bool_()
self.assertTrue(validator.test(True))
self.assertFalse(validator.test("string"))
def test_empty(self):
validator = v8n().empty()
self.assertTrue(validator.test([]))
self.assertTrue(validator.test(""))
self.assertFalse(validator.test([1]))
self.assertFalse(validator.test("a"))
def test_first(self):
validator_str = v8n().first("a")
self.assertTrue(validator_str.test("a string"))
self.assertFalse(validator_str.test("b string"))
validator_list = v8n().first(1)
self.assertTrue(validator_list.test([1, 2, 3]))
self.assertFalse(validator_list.test([4, 5, 6]))
def test_last(self):
validator_str = v8n().last("a")
self.assertTrue(validator_str.test("string a"))
self.assertFalse(validator_str.test("string b"))
validator_list = v8n().last(3)
self.assertTrue(validator_list.test([1, 2, 3]))
self.assertFalse(validator_list.test([4, 5, 6]))
def test_negative(self):
validator = v8n().negative()
self.assertTrue(validator.test(-1))
self.assertFalse(validator.test(1))
def test_positive(self):
validator = v8n().positive()
self.assertFalse(validator.test(-1))
self.assertTrue(validator.test(1))
def test_includes(self):
validator_str = v8n().includes("a")
self.assertTrue(validator_str.test("a string"))
self.assertFalse(validator_str.test("b string"))
validator_list = v8n().includes(1)
self.assertTrue(validator_list.test([1, 2, 3]))
self.assertFalse(validator_list.test([4, 5, 6]))
def test_none(self):
validator = v8n().none()
self.assertTrue(validator.test(None))
self.assertFalse(validator.test("string"))
self.assertFalse(validator.test(1))
def test_list_(self):
validator = v8n().list_()
self.assertTrue(validator.test([1]))
self.assertFalse(validator.test(1))
def test_not_greater_than(self):
validator = v8n().not_().greater_than(2)
self.assertFalse(validator.test(3))
self.assertTrue(validator.test(2))
self.assertTrue(validator.test(1))
def test_not_greater_or_equal_than(self):
validator = v8n().not_().greater_or_equal_than(2)
self.assertFalse(validator.test(3))
self.assertFalse(validator.test(2))
self.assertTrue(validator.test(1))
def test_not_less_than(self):
validator = v8n().not_().less_than(2)
self.assertTrue(validator.test(3))
self.assertTrue(validator.test(2))
self.assertFalse(validator.test(1))
def test_not_less_or_equal_than(self):
validator = v8n().not_().less_or_equal_than(2)
self.assertTrue(validator.test(3))
self.assertFalse(validator.test(2))
self.assertFalse(validator.test(1))
def test_not_equal(self):
validator = v8n().not_().equal(2)
self.assertFalse(validator.test(2))
self.assertTrue(validator.test(3))
def test_not_length(self):
validator = v8n().not_().length(2)
self.assertFalse(validator.test("12"))
self.assertFalse(validator.test([1, 2]))
self.assertTrue(validator.test("1"))
self.assertTrue(validator.test([1]))
def test_not_min_length(self):
validator = v8n().not_().min_length(2)
self.assertFalse(validator.test("12"))
self.assertFalse(validator.test("123"))
self.assertTrue(validator.test("1"))
self.assertFalse(validator.test([1, 2]))
self.assertFalse(validator.test([1, 2, 3]))
self.assertTrue(validator.test([1]))
def test_not_max_length(self):
validator = v8n().not_().max_length(3)
self.assertFalse(validator.test([1, 2]))
self.assertFalse(validator.test("12"))
self.assertFalse(validator.test([1, 2, 3]))
self.assertFalse(validator.test("123"))
self.assertTrue(validator.test([1, 2, 3, 4]))
self.assertTrue(validator.test("1234"))
def test_not_divisible_by(self):
validator = v8n().not_().divisible_by(5)
self.assertFalse(validator.test(10))
self.assertTrue(validator.test(11))
def test_not_odd(self):
validator = v8n().not_().odd()
self.assertFalse(validator.test(3))
self.assertTrue(validator.test(4))
def test_not_even(self):
validator = v8n().not_().even()
self.assertTrue(validator.test(3))
self.assertFalse(validator.test(4))
def test_not_between(self):
validator = v8n().not_().between(2, 4)
self.assertFalse(validator.test(2))
self.assertFalse(validator.test(3))
self.assertFalse(validator.test(4))
self.assertTrue(validator.test(1))
self.assertTrue(validator.test(5))
def test_not_str_(self):
validator = v8n().not_().str_()
self.assertFalse(validator.test("string"))
self.assertTrue(validator.test(1))
def test_not_int_(self):
validator = v8n().not_().int_()
self.assertFalse(validator.test(1))
self.assertTrue(validator.test("string"))
def test_not_bool_(self):
validator = v8n().not_().bool_()
self.assertFalse(validator.test(True))
self.assertTrue(validator.test("string"))
def test_not_empty(self):
validator = v8n().not_().empty()
self.assertFalse(validator.test([]))
self.assertFalse(validator.test(""))
self.assertTrue(validator.test([1]))
self.assertTrue(validator.test("a"))
def test_not_first(self):
validator_str = v8n().not_().first("a")
self.assertFalse(validator_str.test("a string"))
self.assertTrue(validator_str.test("b string"))
validator_list = v8n().not_().first(1)
self.assertFalse(validator_list.test([1, 2, 3]))
self.assertTrue(validator_list.test([4, 5, 6]))
def test_not_last(self):
validator_str = v8n().not_().last("a")
self.assertFalse(validator_str.test("string a"))
self.assertTrue(validator_str.test("string b"))
validator_list = v8n().not_().last(3)
self.assertFalse(validator_list.test([1, 2, 3]))
self.assertTrue(validator_list.test([4, 5, 6]))
def test_not_negative(self):
validator = v8n().not_().negative()
self.assertFalse(validator.test(-1))
self.assertTrue(validator.test(1))
def test_not_positive(self):
validator = v8n().not_().positive()
self.assertTrue(validator.test(-1))
self.assertFalse(validator.test(1))
def test_not_includes(self):
validator_str = v8n().not_().includes("a")
self.assertFalse(validator_str.test("a string"))
self.assertTrue(validator_str.test("b string"))
validator_list = v8n().not_().includes(1)
self.assertFalse(validator_list.test([1, 2, 3]))
self.assertTrue(validator_list.test([4, 5, 6]))
def test_not_none(self):
validator = v8n().not_().none()
self.assertFalse(validator.test(None))
self.assertTrue(validator.test("string"))
self.assertTrue(validator.test(1))
def test_not_list_(self):
validator = v8n().not_().list_()
self.assertFalse(validator.test([1]))
self.assertTrue(validator.test(1))
def test_float_(self):
validator = v8n().float_()
self.assertFalse(validator.test(1))
self.assertTrue(validator.test(1.0))
def test_dict_(self):
validator = v8n().dict_()
self.assertFalse(validator.test(1))
self.assertTrue(validator.test({'a': 1}))
def test_set_(self):
validator = v8n().set_()
self.assertFalse(validator.test(1))
self.assertTrue(validator.test({1, 2, 3}))
def test_tuple_(self):
validator = v8n().tuple_()
self.assertFalse(validator.test(1))
self.assertTrue(validator.test((1, 2)))
def test_of_type(self):
validator = v8n().of_type(array)
self.assertFalse(validator.test(1))
arr = array('i')
arr.append(-1)
arr.append(1)
self.assertTrue(validator.test(arr))
def test_every(self):
validator = v8n().list_().every().int_().between(1, 10)
self.assertFalse(validator.test([1, 5, 11]))
self.assertFalse(validator.test([1, 5, '9']))
self.assertTrue(validator.test([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]))
def test_validate(self):
validator = v8n().float_().between(0, 1)
with self.assertRaises(ValueError, msg="my_var must:\n\t- be between 0 and 1 (inclusive)"):
validator.validate(2.0, value_name="my_var")
if __name__ == '__main__':
unittest.main()
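The tests above exercise a fluent (chainable) validator API, but the `v8n` class itself is not part of this excerpt. The following is only a minimal sketch of the pattern under assumed method names taken from the tests, not the real implementation:

```python
# Minimal sketch of a chainable validator (names assumed from the tests above).
class v8n:
    def __init__(self):
        self._rules = []
        self._negate_next = False

    def _add(self, rule):
        # Wrap the rule with a negation if not_() was called just before it.
        if self._negate_next:
            self._rules.append(lambda v, r=rule: not r(v))
            self._negate_next = False
        else:
            self._rules.append(rule)
        return self  # returning self is what enables chaining

    def not_(self):
        self._negate_next = True
        return self

    def greater_than(self, n):
        return self._add(lambda v: v > n)

    def between(self, lo, hi):
        return self._add(lambda v: lo <= v <= hi)

    def test(self, value):
        return all(rule(value) for rule in self._rules)
```

Each predicate method returns `self`, so calls like `v8n().not_().greater_than(2)` accumulate rules on one instance before `test()` evaluates them all.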
| 35.372781 | 99 | 0.62914 | 1,472 | 11,956 | 4.95856 | 0.059103 | 0.236882 | 0.259762 | 0.257022 | 0.801891 | 0.700781 | 0.660775 | 0.615975 | 0.590218 | 0.501028 | 0 | 0.033547 | 0.217129 | 11,956 | 337 | 100 | 35.477745 | 0.746261 | 0 | 0 | 0.446429 | 0 | 0 | 0.021161 | 0 | 0 | 0 | 0 | 0 | 0.564286 | 1 | 0.189286 | false | 0 | 0.010714 | 0 | 0.203571 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6cce9e743d5610445b4ee865a2870ebd14c77d36 | 26 | py | Python | telegram_spiders/spider/ip_util.py | cysk003/telegram_spider | cc5c28487970969970b419510f76a1846bc0445d | [
"MIT"
] | 18 | 2019-12-06T03:12:38.000Z | 2022-03-31T01:47:40.000Z | telegram_spiders/spider/ip_util.py | cysk003/telegram_spider | cc5c28487970969970b419510f76a1846bc0445d | [
"MIT"
] | 2 | 2021-05-28T02:10:29.000Z | 2021-11-19T04:28:28.000Z | telegram_spiders/spider/ip_util.py | cysk003/telegram_spider | cc5c28487970969970b419510f76a1846bc0445d | [
"MIT"
] | 9 | 2020-11-02T16:59:50.000Z | 2022-03-31T01:47:41.000Z | def change_ip():
pass
| 8.666667 | 16 | 0.615385 | 4 | 26 | 3.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.269231 | 26 | 2 | 17 | 13 | 0.789474 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
6cea70df0a5fb9ed476e9d89bce56112e833c306 | 3,657 | py | Python | recognition/arcface_torch/backbones/__init__.py | xpertdev/insightface | 78654944d332573715c04ab5956761f5215d0f51 | [
"MIT"
] | 1 | 2021-10-31T09:02:34.000Z | 2021-10-31T09:02:34.000Z | recognition/arcface_torch/backbones/__init__.py | xpertdev/insightface | 78654944d332573715c04ab5956761f5215d0f51 | [
"MIT"
] | null | null | null | recognition/arcface_torch/backbones/__init__.py | xpertdev/insightface | 78654944d332573715c04ab5956761f5215d0f51 | [
"MIT"
] | null | null | null | from .iresnet import iresnet18, iresnet34, iresnet50, iresnet100, iresnet200
from .mobilefacenet import get_mbf
def get_model(name, **kwargs):
# resnet
if name == "r18":
return iresnet18(False, **kwargs)
elif name == "r34":
return iresnet34(False, **kwargs)
elif name == "r50":
return iresnet50(False, **kwargs)
elif name == "r100":
return iresnet100(False, **kwargs)
elif name == "r200":
return iresnet200(False, **kwargs)
elif name == "r2060":
from .iresnet2060 import iresnet2060
return iresnet2060(False, **kwargs)
elif name == "mbf":
fp16 = kwargs.get("fp16", False)
num_features = kwargs.get("num_features", 512)
return get_mbf(fp16=fp16, num_features=num_features)
elif name == "mbf_large":
from .mobilefacenet import get_mbf_large
fp16 = kwargs.get("fp16", False)
num_features = kwargs.get("num_features", 512)
return get_mbf_large(fp16=fp16, num_features=num_features)
elif name == "vit_t":
num_features = kwargs.get("num_features", 512)
from .vit import VisionTransformer
return VisionTransformer(
img_size=112, patch_size=9, num_classes=num_features, embed_dim=256, depth=12,
num_heads=8, drop_path_rate=0.1, norm_layer="ln", mask_ratio=0.1)
elif name == "vit_t_dp005_mask0": # For WebFace42M
num_features = kwargs.get("num_features", 512)
from .vit import VisionTransformer
return VisionTransformer(
img_size=112, patch_size=9, num_classes=num_features, embed_dim=256, depth=12,
num_heads=8, drop_path_rate=0.05, norm_layer="ln", mask_ratio=0.0)
elif name == "vit_s":
num_features = kwargs.get("num_features", 512)
from .vit import VisionTransformer
return VisionTransformer(
img_size=112, patch_size=9, num_classes=num_features, embed_dim=512, depth=12,
num_heads=8, drop_path_rate=0.1, norm_layer="ln", mask_ratio=0.1)
elif name == "vit_s_dp005_mask_0": # For WebFace42M
num_features = kwargs.get("num_features", 512)
from .vit import VisionTransformer
return VisionTransformer(
img_size=112, patch_size=9, num_classes=num_features, embed_dim=512, depth=12,
num_heads=8, drop_path_rate=0.05, norm_layer="ln", mask_ratio=0.0)
elif name == "vit_b":
# this is a feature
num_features = kwargs.get("num_features", 512)
from .vit import VisionTransformer
return VisionTransformer(
img_size=112, patch_size=9, num_classes=num_features, embed_dim=512, depth=24,
num_heads=8, drop_path_rate=0.1, norm_layer="ln", mask_ratio=0.1, using_checkpoint=True)
elif name == "vit_b_dp005_mask_005": # For WebFace42M
# this is a feature
num_features = kwargs.get("num_features", 512)
from .vit import VisionTransformer
return VisionTransformer(
img_size=112, patch_size=9, num_classes=num_features, embed_dim=512, depth=24,
num_heads=8, drop_path_rate=0.05, norm_layer="ln", mask_ratio=0.05, using_checkpoint=True)
elif name == "vit_l_dp005_mask_005": # For WebFace42M
# this is a feature
num_features = kwargs.get("num_features", 512)
from .vit import VisionTransformer
return VisionTransformer(
img_size=112, patch_size=9, num_classes=num_features, embed_dim=768, depth=24,
num_heads=8, drop_path_rate=0.05, norm_layer="ln", mask_ratio=0.05, using_checkpoint=True)
else:
raise ValueError(f"unknown model name: {name!r}")
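The long if/elif chain in `get_model` can equivalently be written as a name-to-constructor registry. A minimal sketch with dummy constructors (the real `iresnet*` builders are not shown in this file, so `make_backbone` here is a stand-in):

```python
from functools import partial

def make_backbone(depth, fp16=False, num_features=512):
    # Stand-in for the real iresnet constructors; returns a plain dict
    # so the dispatch pattern can be demonstrated without torch.
    return {"depth": depth, "fp16": fp16, "num_features": num_features}

# One entry per supported name; partial() freezes the per-model arguments.
MODEL_REGISTRY = {
    "r18": partial(make_backbone, 18),
    "r50": partial(make_backbone, 50),
    "r100": partial(make_backbone, 100),
}

def get_model_from_registry(name, **kwargs):
    try:
        return MODEL_REGISTRY[name](**kwargs)
    except KeyError:
        raise ValueError(f"unknown model name: {name!r}") from None
```

A registry keeps the lazy per-model imports possible (entries can be callables that import on first use) while making the set of valid names introspectable.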
| 42.523256 | 102 | 0.657369 | 493 | 3,657 | 4.630832 | 0.160243 | 0.139728 | 0.067017 | 0.078844 | 0.810775 | 0.78537 | 0.767411 | 0.767411 | 0.734122 | 0.734122 | 0 | 0.081216 | 0.235712 | 3,657 | 85 | 103 | 43.023529 | 0.735599 | 0.032814 | 0 | 0.536232 | 0 | 0 | 0.071995 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.014493 | false | 0 | 0.15942 | 0 | 0.391304 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6cfaa8bf3f7cac7501cd2f4614482aa76c0c0221 | 38 | py | Python | auv_nav/__init__.py | ocean-perception/oplab_pipeline | 1138e716f43e015812e9eb44b542cf76544b6b98 | [
"BSD-3-Clause"
] | 5 | 2020-06-27T08:58:07.000Z | 2021-08-23T01:10:59.000Z | auv_nav/__init__.py | ocean-perception/oplab_pipeline | 1138e716f43e015812e9eb44b542cf76544b6b98 | [
"BSD-3-Clause"
] | 43 | 2020-06-01T08:28:34.000Z | 2022-03-17T12:20:39.000Z | auv_nav/__init__.py | ocean-perception/oplab_pipeline | 1138e716f43e015812e9eb44b542cf76544b6b98 | [
"BSD-3-Clause"
] | null | null | null | from auv_nav.sensors import * # noqa
| 19 | 37 | 0.736842 | 6 | 38 | 4.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.184211 | 38 | 1 | 38 | 38 | 0.870968 | 0.105263 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9f2c74257271a893cc9500c5cf5908b58084e09d | 56 | py | Python | temp.py | SoyM/Jiang | 7bf2a4efb27873c751b484f808996098517372b2 | [
"Apache-2.0"
] | null | null | null | temp.py | SoyM/Jiang | 7bf2a4efb27873c751b484f808996098517372b2 | [
"Apache-2.0"
] | null | null | null | temp.py | SoyM/Jiang | 7bf2a4efb27873c751b484f808996098517372b2 | [
"Apache-2.0"
] | null | null | null | from test import gettime
def jiang():
gettime()
| 9.333333 | 25 | 0.642857 | 7 | 56 | 5.142857 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.267857 | 56 | 5 | 26 | 11.2 | 0.878049 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4c946c759e71316a2f48c5939a22685bf4629ac0 | 750 | py | Python | cohort_back/views.py | aphp/Cohort360-Back-end | 03184db6c4cb639955e2f3726c7e1b5cc7809f01 | [
"Apache-2.0"
] | 9 | 2020-11-04T13:08:47.000Z | 2022-02-03T17:04:05.000Z | cohort_back/views.py | aphp/Cohort360-Back-end | 03184db6c4cb639955e2f3726c7e1b5cc7809f01 | [
"Apache-2.0"
] | 7 | 2021-03-17T17:48:26.000Z | 2022-02-10T13:27:43.000Z | cohort_back/views.py | aphp/Cohort360-Back-end | 03184db6c4cb639955e2f3726c7e1b5cc7809f01 | [
"Apache-2.0"
] | 2 | 2020-11-23T10:42:40.000Z | 2022-02-03T17:04:09.000Z | from rest_framework import status
from rest_framework.response import Response
class NoDeleteViewSetMixin:
def destroy(self, request, *args, **kwargs):
return Response({"response": "request_query_snapshot manual deletion not possible"},
status=status.HTTP_400_BAD_REQUEST)
class NoUpdateViewSetMixin:
def update(self, request, *args, **kwargs):
return Response({"response": "request_query_snapshot manual update not possible"},
status=status.HTTP_400_BAD_REQUEST)
def partial_update(self, request, *args, **kwargs):
return Response({"response": "request_query_snapshot manual update not possible"},
status=status.HTTP_400_BAD_REQUEST)
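These mixins work through Python's method resolution order: listed before the base viewset class, their `destroy`/`update` definitions shadow the inherited handlers. A minimal, Django-free sketch of the same pattern:

```python
class BaseViewSet:
    # Plays the role of a DRF ModelViewSet's default action handler.
    def destroy(self):
        return "object deleted"

class NoDeleteMixin:
    # Overrides the base handler to refuse deletion.
    def destroy(self):
        return "deletion not possible"

# The mixin must come FIRST in the bases so its method wins in the MRO.
class MyViewSet(NoDeleteMixin, BaseViewSet):
    pass
```

If the bases were reversed (`BaseViewSet, NoDeleteMixin`), the base handler would win and the mixin would have no effect.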
| 35.714286 | 92 | 0.690667 | 82 | 750 | 6.097561 | 0.329268 | 0.066 | 0.09 | 0.126 | 0.702 | 0.702 | 0.702 | 0.702 | 0.622 | 0.622 | 0 | 0.015411 | 0.221333 | 750 | 20 | 93 | 37.5 | 0.840753 | 0 | 0 | 0.384615 | 0 | 0 | 0.231283 | 0.088235 | 0 | 0 | 0 | 0 | 0 | 1 | 0.230769 | false | 0 | 0.153846 | 0.230769 | 0.769231 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 6 |
4c989265ed8b6fb8a6f030e054d4378d912d4b31 | 142 | py | Python | wall/admin.py | Yar59/vshaurme | 8a970e847e0d5926cd33970be7d1b3c95a2a698a | [
"MIT"
] | 1 | 2021-02-28T18:15:23.000Z | 2021-02-28T18:15:23.000Z | wall/admin.py | Yar59/vshaurme | 8a970e847e0d5926cd33970be7d1b3c95a2a698a | [
"MIT"
] | 1 | 2021-05-26T16:53:05.000Z | 2021-05-26T16:53:10.000Z | wall/admin.py | Yar59/vshaurme | 8a970e847e0d5926cd33970be7d1b3c95a2a698a | [
"MIT"
] | 2 | 2021-05-05T10:24:31.000Z | 2022-02-05T08:56:59.000Z | from .models import Comment
from .models import Post
from django.contrib import admin
admin.site.register(Post)
admin.site.register(Comment)
| 20.285714 | 32 | 0.816901 | 21 | 142 | 5.52381 | 0.47619 | 0.172414 | 0.275862 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105634 | 142 | 6 | 33 | 23.666667 | 0.913386 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.6 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4ca870cbc57fe721c615e91c5b4d89b2c913d9c6 | 1,292 | py | Python | tests/test_bifid.py | Malmosmo/pycipher2 | 9460cd4028dfe520f7bd4cc20f45116df3551495 | [
"MIT"
] | null | null | null | tests/test_bifid.py | Malmosmo/pycipher2 | 9460cd4028dfe520f7bd4cc20f45116df3551495 | [
"MIT"
] | null | null | null | tests/test_bifid.py | Malmosmo/pycipher2 | 9460cd4028dfe520f7bd4cc20f45116df3551495 | [
"MIT"
] | null | null | null | import unittest
from pycipher2 import Bifid
class TestBifid(unittest.TestCase):
def test_encrypt(self):
keys = (('phqgmeaylnofdxkrcvszwbuti', 4),
('ezrxdkuatgvncmiwhsqpyfblo', 5))
plaintext = ('abcdefghiiklmnopqrstuvwxyzabcdefghiiklmnopqrstuvwxyz',
'zyxwvutsrqponmlkiihgfedcbazyxwvutsrqponmlkiihgfedcba')
ciphertext = ('nvayyphcifithoipgzostudglnkavyyhpicifhtiogpoztsdulgk',
'dxnxeuwhsmpcofqamkogyrfdckyskwntetcqbmotfcqdfgeikbfc')
for i, key in enumerate(keys):
enc = Bifid(*key).encrypt(plaintext[i])
self.assertEqual(enc, ciphertext[i])
def test_decrypt(self):
keys = (('phqgmeaylnofdxkrcvszwbuti', 4),
('ezrxdkuatgvncmiwhsqpyfblo', 5))
plaintext = ('abcdefghiiklmnopqrstuvwxyzabcdefghiiklmnopqrstuvwxyz',
'zyxwvutsrqponmlkiihgfedcbazyxwvutsrqponmlkiihgfedcba')
ciphertext = ('nvayyphcifithoipgzostudglnkavyyhpicifhtiogpoztsdulgk',
'dxnxeuwhsmpcofqamkogyrfdckyskwntetcqbmotfcqdfgeikbfc')
for i, key in enumerate(keys):
dec = Bifid(*key).decrypt(ciphertext[i])
self.assertEqual(dec, plaintext[i])
if __name__ == '__main__':
unittest.main()
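For reference, the classic periodic Bifid cipher these tests exercise maps each letter to (row, column) coordinates in a 5x5 Polybius square built from the 25-letter key, writes all row digits then all column digits per block, and reads the combined sequence back off in pairs. A minimal sketch (the actual pycipher2 implementation may differ in details, so this is not claimed to reproduce the test vectors above):

```python
def _square(key):
    """Map each of the 25 key letters to (row, col) in a 5x5 Polybius square."""
    return {ch: divmod(i, 5) for i, ch in enumerate(key)}

def bifid_encrypt(plaintext, key, period):
    pos = _square(key)
    out = []
    for start in range(0, len(plaintext), period):
        block = plaintext[start:start + period]
        rows = [pos[ch][0] for ch in block]
        cols = [pos[ch][1] for ch in block]
        seq = rows + cols  # all row digits first, then all column digits
        # Re-read the combined digit sequence in pairs to get cipher letters.
        out.extend(key[seq[i] * 5 + seq[i + 1]] for i in range(0, len(seq), 2))
    return ''.join(out)

def bifid_decrypt(ciphertext, key, period):
    pos = _square(key)
    out = []
    for start in range(0, len(ciphertext), period):
        block = ciphertext[start:start + period]
        # Expand cipher letters back into the flat digit sequence...
        seq = [d for ch in block for d in pos[ch]]
        half = len(seq) // 2
        # ...then split it into the original row/column halves.
        out.extend(key[r * 5 + c] for r, c in zip(seq[:half], seq[half:]))
    return ''.join(out)
```

Encryption and decryption are exact inverses for any period, including a partial final block, which is easy to check with a round trip.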
| 36.914286 | 77 | 0.665635 | 79 | 1,292 | 10.759494 | 0.443038 | 0.016471 | 0.077647 | 0.08 | 0.727059 | 0.727059 | 0.727059 | 0.727059 | 0.727059 | 0.727059 | 0 | 0.005123 | 0.244582 | 1,292 | 34 | 78 | 38 | 0.865779 | 0 | 0 | 0.56 | 0 | 0 | 0.405573 | 0.399381 | 0 | 0 | 0 | 0 | 0.08 | 1 | 0.08 | false | 0 | 0.08 | 0 | 0.2 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
980263e65b76d249d6e5e40a2855e374644d3911 | 152 | py | Python | Imprimir mensagem.py | AnaCoutopc/Projeto-em-Python | d879a75e8b3dafd00fac9838280a0ab7941ae610 | [
"MIT"
] | null | null | null | Imprimir mensagem.py | AnaCoutopc/Projeto-em-Python | d879a75e8b3dafd00fac9838280a0ab7941ae610 | [
"MIT"
] | null | null | null | Imprimir mensagem.py | AnaCoutopc/Projeto-em-Python | d879a75e8b3dafd00fac9838280a0ab7941ae610 | [
"MIT"
] | null | null | null | # Print message
print("a) Comentários na linguagem Python iniciam com #","\n b) Letras minúsculas e maiúsculas são diferentes na linguagem Python") | 76 | 131 | 0.782895 | 21 | 152 | 5.666667 | 0.857143 | 0.184874 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.131579 | 152 | 2 | 131 | 76 | 0.901515 | 0.111842 | 0 | 0 | 0 | 0 | 0.894737 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
e22f833048fd1f877895e82b34eb82bf18eb0220 | 7,299 | py | Python | tests/test_cookie_on_redirects.py | HenryGessau/httpie | 85ba9ad8eaa718d7f9dbcb7129168d6a877f3d30 | [
"BSD-3-Clause"
] | 2 | 2022-01-31T18:18:58.000Z | 2022-01-31T18:26:35.000Z | tests/test_cookie_on_redirects.py | isidentical/httpie | 85ba9ad8eaa718d7f9dbcb7129168d6a877f3d30 | [
"BSD-3-Clause"
] | 2 | 2022-03-05T19:16:08.000Z | 2022-03-05T19:16:09.000Z | tests/test_cookie_on_redirects.py | isidentical/httpie | 85ba9ad8eaa718d7f9dbcb7129168d6a877f3d30 | [
"BSD-3-Clause"
] | null | null | null | import pytest
from .utils import http
@pytest.fixture
def remote_httpbin(httpbin_with_chunked_support):
return httpbin_with_chunked_support
def _stringify(fixture):
return fixture + ''
@pytest.mark.parametrize('instance', [
pytest.lazy_fixture('httpbin'),
pytest.lazy_fixture('remote_httpbin'),
])
def test_explicit_user_set_cookie(httpbin, instance):
# User set cookies ARE NOT persisted within redirects
# when there is no session, even on the same domain.
r = http(
'--follow',
httpbin + '/redirect-to',
f'url=={_stringify(instance)}/cookies',
'Cookie:a=b'
)
assert r.json == {'cookies': {}}
@pytest.mark.parametrize('instance', [
pytest.lazy_fixture('httpbin'),
pytest.lazy_fixture('remote_httpbin'),
])
def test_explicit_user_set_cookie_in_session(tmp_path, httpbin, instance):
# User set cookies ARE persisted within redirects
# when there is A session, even on a different domain.
r = http(
'--follow',
'--session',
str(tmp_path / 'session.json'),
httpbin + '/redirect-to',
f'url=={_stringify(instance)}/cookies',
'Cookie:a=b'
)
assert r.json == {'cookies': {'a': 'b'}}
@pytest.mark.parametrize('instance', [
pytest.lazy_fixture('httpbin'),
pytest.lazy_fixture('remote_httpbin'),
])
def test_saved_user_set_cookie_in_session(tmp_path, httpbin, instance):
# User set cookies saved to the session file ARE persisted
# in subsequent requests and their redirects, even on a different domain.
http(
'--follow',
'--session',
str(tmp_path / 'session.json'),
httpbin + '/get',
'Cookie:a=b'
)
r = http(
'--follow',
'--session',
str(tmp_path / 'session.json'),
httpbin + '/redirect-to',
f'url=={_stringify(instance)}/cookies',
)
assert r.json == {'cookies': {'a': 'b'}}
@pytest.mark.parametrize('instance', [
pytest.lazy_fixture('httpbin'),
pytest.lazy_fixture('remote_httpbin'),
])
@pytest.mark.parametrize('session', [True, False])
def test_explicit_user_set_headers(httpbin, tmp_path, instance, session):
# User set headers ARE persisted within redirects
# even on different domains, with or without
# an active session.
session_args = []
if session:
session_args.extend([
'--session',
str(tmp_path / 'session.json')
])
r = http(
'--follow',
*session_args,
httpbin + '/redirect-to',
f'url=={_stringify(instance)}/get',
'X-Custom-Header:value'
)
assert 'X-Custom-Header' in r.json['headers']
@pytest.mark.parametrize('session', [True, False])
def test_server_set_cookie_on_redirect_same_domain(tmp_path, httpbin, session):
# Server set cookies ARE persisted on the same domain
# when they are forwarded.
session_args = []
if session:
session_args.extend([
'--session',
str(tmp_path / 'session.json')
])
r = http(
'--follow',
*session_args,
httpbin + '/cookies/set/a/b',
)
assert r.json['cookies'] == {'a': 'b'}
@pytest.mark.parametrize('session', [True, False])
def test_server_set_cookie_on_redirect_different_domain(tmp_path, http_server, httpbin, session):
# Server set cookies ARE persisted on different domains
# when they are forwarded.
session_args = []
if session:
session_args.extend([
'--session',
str(tmp_path / 'session.json')
])
r = http(
'--follow',
*session_args,
http_server + '/cookies/set-and-redirect',
f"X-Redirect-To:{httpbin + '/cookies'}",
'X-Cookies:a=b'
)
assert r.json['cookies'] == {'a': 'b'}
def test_saved_session_cookies_on_same_domain(tmp_path, httpbin):
# Saved session cookies ARE persisted when making a new
# request to the same domain.
http(
'--session',
str(tmp_path / 'session.json'),
httpbin + '/cookies/set/a/b'
)
r = http(
'--session',
str(tmp_path / 'session.json'),
httpbin + '/cookies'
)
assert r.json == {'cookies': {'a': 'b'}}
def test_saved_session_cookies_on_different_domain(tmp_path, httpbin, remote_httpbin):
# Saved session cookies ARE persisted when making a new
# request to a different domain.
http(
'--session',
str(tmp_path / 'session.json'),
httpbin + '/cookies/set/a/b'
)
r = http(
'--session',
str(tmp_path / 'session.json'),
remote_httpbin + '/cookies'
)
assert r.json == {'cookies': {}}
@pytest.mark.parametrize('initial_domain, first_request_domain, second_request_domain, expect_cookies', [
(
# Cookies are set by Domain A
# Initial domain is Domain A
# Redirected domain is Domain A
pytest.lazy_fixture('httpbin'),
pytest.lazy_fixture('httpbin'),
pytest.lazy_fixture('httpbin'),
True,
),
(
# Cookies are set by Domain A
# Initial domain is Domain B
# Redirected domain is Domain B
pytest.lazy_fixture('httpbin'),
pytest.lazy_fixture('remote_httpbin'),
pytest.lazy_fixture('remote_httpbin'),
False,
),
(
# Cookies are set by Domain A
# Initial domain is Domain A
# Redirected domain is Domain B
pytest.lazy_fixture('httpbin'),
pytest.lazy_fixture('httpbin'),
pytest.lazy_fixture('remote_httpbin'),
False,
),
(
# Cookies are set by Domain A
# Initial domain is Domain B
# Redirected domain is Domain A
pytest.lazy_fixture('httpbin'),
pytest.lazy_fixture('remote_httpbin'),
pytest.lazy_fixture('httpbin'),
True,
),
])
def test_saved_session_cookies_on_redirect(tmp_path, initial_domain, first_request_domain, second_request_domain, expect_cookies):
http(
'--session',
str(tmp_path / 'session.json'),
initial_domain + '/cookies/set/a/b'
)
r = http(
'--session',
str(tmp_path / 'session.json'),
'--follow',
first_request_domain + '/redirect-to',
f'url=={_stringify(second_request_domain)}/cookies'
)
if expect_cookies:
expected_data = {'cookies': {'a': 'b'}}
else:
expected_data = {'cookies': {}}
assert r.json == expected_data
def test_saved_session_cookie_pool(tmp_path, httpbin, remote_httpbin):
http(
'--session',
str(tmp_path / 'session.json'),
httpbin + '/cookies/set/a/b'
)
http(
'--session',
str(tmp_path / 'session.json'),
remote_httpbin + '/cookies/set/a/c'
)
http(
'--session',
str(tmp_path / 'session.json'),
remote_httpbin + '/cookies/set/b/d'
)
response = http(
'--session',
str(tmp_path / 'session.json'),
httpbin + '/cookies'
)
assert response.json['cookies'] == {'a': 'b'}
response = http(
'--session',
str(tmp_path / 'session.json'),
remote_httpbin + '/cookies'
)
assert response.json['cookies'] == {'a': 'c', 'b': 'd'}
| 27.752852 | 130 | 0.594465 | 842 | 7,299 | 4.963183 | 0.108076 | 0.043551 | 0.081359 | 0.069155 | 0.829864 | 0.798995 | 0.772673 | 0.740129 | 0.703996 | 0.684374 | 0 | 0 | 0.268393 | 7,299 | 262 | 131 | 27.858779 | 0.782584 | 0.151939 | 0 | 0.698492 | 0 | 0 | 0.223864 | 0.04789 | 0 | 0 | 0 | 0 | 0.055276 | 1 | 0.060302 | false | 0 | 0.01005 | 0.01005 | 0.080402 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e2325bac714e4496c5423e9dac67391d70a8f6ea | 55 | py | Python | python/testData/refactoring/changeSignature/removeDefaultFromParam.before.py | truthiswill/intellij-community | fff88cfb0dc168eea18ecb745d3e5b93f57b0b95 | [
"Apache-2.0"
] | 2 | 2019-04-28T07:48:50.000Z | 2020-12-11T14:18:08.000Z | python/testData/refactoring/changeSignature/removeDefaultFromParam.before.py | truthiswill/intellij-community | fff88cfb0dc168eea18ecb745d3e5b93f57b0b95 | [
"Apache-2.0"
] | 173 | 2018-07-05T13:59:39.000Z | 2018-08-09T01:12:03.000Z | python/testData/refactoring/changeSignature/removeDefaultFromParam.before.py | truthiswill/intellij-community | fff88cfb0dc168eea18ecb745d3e5b93f57b0b95 | [
"Apache-2.0"
] | 2 | 2020-03-15T08:57:37.000Z | 2020-04-07T04:48:14.000Z | def bar(a, b = 2):
pass
bar(1, 3)
bar(1, b=3)
bar(1) | 9.166667 | 18 | 0.509091 | 15 | 55 | 1.866667 | 0.533333 | 0.428571 | 0.357143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 0.236364 | 55 | 6 | 19 | 9.166667 | 0.52381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0.2 | 0 | 0 | 0.2 | 0 | 1 | 0 | 1 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
e2341b4dbba5a0215d64d725aeba73e407828625 | 26,857 | py | Python | eZmaxApi/api/object_ezsignfoldersignerassociation_api.py | eZmaxinc/eZmax-SDK-python | 5b4d54b69db68aab8ee814a1e26460a0af03784e | [
"MIT"
] | null | null | null | eZmaxApi/api/object_ezsignfoldersignerassociation_api.py | eZmaxinc/eZmax-SDK-python | 5b4d54b69db68aab8ee814a1e26460a0af03784e | [
"MIT"
] | null | null | null | eZmaxApi/api/object_ezsignfoldersignerassociation_api.py | eZmaxinc/eZmax-SDK-python | 5b4d54b69db68aab8ee814a1e26460a0af03784e | [
"MIT"
] | null | null | null | """
eZmax API Definition
This API exposes all the functionalities for the eZmax and eZsign applications.  # noqa: E501
The version of the OpenAPI document: 1.1.3
Contact: support-api@ezmax.ca
Generated by: https://openapi-generator.tech
"""
import re # noqa: F401
import sys # noqa: F401
from eZmaxApi.api_client import ApiClient, Endpoint as _Endpoint
from eZmaxApi.model_utils import ( # noqa: F401
check_allowed_values,
check_validations,
date,
datetime,
file_type,
none_type,
validate_and_convert_types
)
from eZmaxApi.model.common_response_error import CommonResponseError
from eZmaxApi.model.ezsignfoldersignerassociation_create_object_v1_request import EzsignfoldersignerassociationCreateObjectV1Request
from eZmaxApi.model.ezsignfoldersignerassociation_create_object_v1_response import EzsignfoldersignerassociationCreateObjectV1Response
from eZmaxApi.model.ezsignfoldersignerassociation_delete_object_v1_response import EzsignfoldersignerassociationDeleteObjectV1Response
from eZmaxApi.model.ezsignfoldersignerassociation_get_in_person_login_url_v1_response import EzsignfoldersignerassociationGetInPersonLoginUrlV1Response
from eZmaxApi.model.ezsignfoldersignerassociation_get_object_v1_response import EzsignfoldersignerassociationGetObjectV1Response
class ObjectEzsignfoldersignerassociationApi(object):
"""NOTE: This class is auto generated by OpenAPI Generator
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
self.ezsignfoldersignerassociation_create_object_v1_endpoint = _Endpoint(
settings={
'response_type': (EzsignfoldersignerassociationCreateObjectV1Response,),
'auth': [
'Authorization'
],
'endpoint_path': '/1/object/ezsignfoldersignerassociation',
'operation_id': 'ezsignfoldersignerassociation_create_object_v1',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
'ezsignfoldersignerassociation_create_object_v1_request',
],
'required': [
'ezsignfoldersignerassociation_create_object_v1_request',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'ezsignfoldersignerassociation_create_object_v1_request':
([EzsignfoldersignerassociationCreateObjectV1Request],),
},
'attribute_map': {
},
'location_map': {
'ezsignfoldersignerassociation_create_object_v1_request': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client
)
self.ezsignfoldersignerassociation_delete_object_v1_endpoint = _Endpoint(
settings={
'response_type': (EzsignfoldersignerassociationDeleteObjectV1Response,),
'auth': [
'Authorization'
],
'endpoint_path': '/1/object/ezsignfoldersignerassociation/{pkiEzsignfoldersignerassociationID}',
'operation_id': 'ezsignfoldersignerassociation_delete_object_v1',
'http_method': 'DELETE',
'servers': None,
},
params_map={
'all': [
'pki_ezsignfoldersignerassociation_id',
],
'required': [
'pki_ezsignfoldersignerassociation_id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'pki_ezsignfoldersignerassociation_id':
(int,),
},
'attribute_map': {
'pki_ezsignfoldersignerassociation_id': 'pkiEzsignfoldersignerassociationID',
},
'location_map': {
'pki_ezsignfoldersignerassociation_id': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.ezsignfoldersignerassociation_get_children_v1_endpoint = _Endpoint(
settings={
'response_type': None,
'auth': [
'Authorization'
],
'endpoint_path': '/1/object/ezsignfoldersignerassociation/{pkiEzsignfoldersignerassociationID}/getChildren',
'operation_id': 'ezsignfoldersignerassociation_get_children_v1',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'pki_ezsignfoldersignerassociation_id',
],
'required': [
'pki_ezsignfoldersignerassociation_id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'pki_ezsignfoldersignerassociation_id':
(int,),
},
'attribute_map': {
'pki_ezsignfoldersignerassociation_id': 'pkiEzsignfoldersignerassociationID',
},
'location_map': {
'pki_ezsignfoldersignerassociation_id': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.ezsignfoldersignerassociation_get_in_person_login_url_v1_endpoint = _Endpoint(
settings={
'response_type': (EzsignfoldersignerassociationGetInPersonLoginUrlV1Response,),
'auth': [
'Authorization'
],
'endpoint_path': '/1/object/ezsignfoldersignerassociation/{pkiEzsignfoldersignerassociationID}/getInPersonLoginUrl',
'operation_id': 'ezsignfoldersignerassociation_get_in_person_login_url_v1',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'pki_ezsignfoldersignerassociation_id',
],
'required': [
'pki_ezsignfoldersignerassociation_id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'pki_ezsignfoldersignerassociation_id':
(int,),
},
'attribute_map': {
'pki_ezsignfoldersignerassociation_id': 'pkiEzsignfoldersignerassociationID',
},
'location_map': {
'pki_ezsignfoldersignerassociation_id': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.ezsignfoldersignerassociation_get_object_v1_endpoint = _Endpoint(
settings={
'response_type': (EzsignfoldersignerassociationGetObjectV1Response,),
'auth': [
'Authorization'
],
'endpoint_path': '/1/object/ezsignfoldersignerassociation/{pkiEzsignfoldersignerassociationID}',
'operation_id': 'ezsignfoldersignerassociation_get_object_v1',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'pki_ezsignfoldersignerassociation_id',
],
'required': [
'pki_ezsignfoldersignerassociation_id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'pki_ezsignfoldersignerassociation_id':
(int,),
},
'attribute_map': {
'pki_ezsignfoldersignerassociation_id': 'pkiEzsignfoldersignerassociationID',
},
'location_map': {
'pki_ezsignfoldersignerassociation_id': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
def ezsignfoldersignerassociation_create_object_v1(
self,
ezsignfoldersignerassociation_create_object_v1_request,
**kwargs
):
"""Create a new Ezsignfoldersignerassociation # noqa: E501
The endpoint allows you to create one or many elements at once. The array can contain simple (just the object) or compound (the object and its children) objects. Creating compound elements reduces the number of requests needed to create all child objects. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.ezsignfoldersignerassociation_create_object_v1(ezsignfoldersignerassociation_create_object_v1_request, async_req=True)
>>> result = thread.get()
Args:
ezsignfoldersignerassociation_create_object_v1_request ([EzsignfoldersignerassociationCreateObjectV1Request]):
Keyword Args:
_return_http_data_only (bool): return the response data only,
without status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
EzsignfoldersignerassociationCreateObjectV1Response
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['ezsignfoldersignerassociation_create_object_v1_request'] = \
ezsignfoldersignerassociation_create_object_v1_request
return self.ezsignfoldersignerassociation_create_object_v1_endpoint.call_with_http_info(**kwargs)
def ezsignfoldersignerassociation_delete_object_v1(
self,
pki_ezsignfoldersignerassociation_id,
**kwargs
):
"""Delete an existing Ezsignfoldersignerassociation # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.ezsignfoldersignerassociation_delete_object_v1(pki_ezsignfoldersignerassociation_id, async_req=True)
>>> result = thread.get()
Args:
pki_ezsignfoldersignerassociation_id (int):
Keyword Args:
_return_http_data_only (bool): return the response data only,
without status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
EzsignfoldersignerassociationDeleteObjectV1Response
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['pki_ezsignfoldersignerassociation_id'] = \
pki_ezsignfoldersignerassociation_id
return self.ezsignfoldersignerassociation_delete_object_v1_endpoint.call_with_http_info(**kwargs)
def ezsignfoldersignerassociation_get_children_v1(
self,
pki_ezsignfoldersignerassociation_id,
**kwargs
):
"""Retrieve an existing Ezsignfoldersignerassociation's children IDs # noqa: E501
## ⚠️ EARLY ADOPTERS WARNING ## This endpoint is not officially released. Its definition might still change and it might not be available in every environment and region. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.ezsignfoldersignerassociation_get_children_v1(pki_ezsignfoldersignerassociation_id, async_req=True)
>>> result = thread.get()
Args:
pki_ezsignfoldersignerassociation_id (int):
Keyword Args:
_return_http_data_only (bool): return the response data only,
without status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
None
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['pki_ezsignfoldersignerassociation_id'] = \
pki_ezsignfoldersignerassociation_id
return self.ezsignfoldersignerassociation_get_children_v1_endpoint.call_with_http_info(**kwargs)
def ezsignfoldersignerassociation_get_in_person_login_url_v1(
self,
pki_ezsignfoldersignerassociation_id,
**kwargs
):
"""Retrieve a Login Url to allow In-Person signing # noqa: E501
This endpoint returns a login URL that can be used in a browser or embedded in an iframe to allow in-person signing. The signer's login type must be configured as In-Person. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.ezsignfoldersignerassociation_get_in_person_login_url_v1(pki_ezsignfoldersignerassociation_id, async_req=True)
>>> result = thread.get()
Args:
pki_ezsignfoldersignerassociation_id (int):
Keyword Args:
_return_http_data_only (bool): return the response data only,
without status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
EzsignfoldersignerassociationGetInPersonLoginUrlV1Response
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['pki_ezsignfoldersignerassociation_id'] = \
pki_ezsignfoldersignerassociation_id
return self.ezsignfoldersignerassociation_get_in_person_login_url_v1_endpoint.call_with_http_info(**kwargs)
def ezsignfoldersignerassociation_get_object_v1(
self,
pki_ezsignfoldersignerassociation_id,
**kwargs
):
"""Retrieve an existing Ezsignfoldersignerassociation # noqa: E501
## ⚠️ EARLY ADOPTERS WARNING ## This endpoint is not officially released. Its definition might still change and it might not be available in every environment and region. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.ezsignfoldersignerassociation_get_object_v1(pki_ezsignfoldersignerassociation_id, async_req=True)
>>> result = thread.get()
Args:
pki_ezsignfoldersignerassociation_id (int):
Keyword Args:
_return_http_data_only (bool): return the response data only,
without status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
EzsignfoldersignerassociationGetObjectV1Response
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['pki_ezsignfoldersignerassociation_id'] = \
pki_ezsignfoldersignerassociation_id
return self.ezsignfoldersignerassociation_get_object_v1_endpoint.call_with_http_info(**kwargs)
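Each generated method above repeats the same `kwargs.get(...)` defaulting block before delegating to `call_with_http_info`. A condensed, self-contained sketch of that pattern (the `apply_defaults` helper is illustrative, not part of the generated SDK):

```python
def apply_defaults(kwargs):
    """Mirror of the per-call option defaulting used by the generated methods."""
    defaults = {
        'async_req': False,            # synchronous request by default
        '_return_http_data_only': True,
        '_preload_content': True,
        '_request_timeout': None,
        '_check_input_type': True,
        '_check_return_type': True,
    }
    for key, value in defaults.items():
        # Keep any value the caller passed; fall back to the fixed default.
        kwargs[key] = kwargs.get(key, value)
    return kwargs
```

Callers can override any single option (e.g. `async_req=True`) while the rest keep their defaults, which is why the generated code never uses keyword defaults in the method signature itself.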
# hello_world.py (arehman01/profiles-rest-api, MIT)
print('Hello World from VM!')
# coop_cms/tests/test_edition.py (ljean/coop_cms, BSD-3-Clause)
# -*- coding: utf-8 -*-
from django.template import Template, Context
from django.test.utils import override_settings
from model_mommy import mommy
from coop_cms.models import PieceOfHtml
from coop_cms.settings import get_article_class
from coop_cms.tests import BaseTestCase, BeautifulSoup, BaseArticleTest
class PieceOfHtmlTagsTest(BaseTestCase):
def test_create_poc(self):
tpl = Template('{% load coop_edition %}{% coop_piece_of_html "test" %}')
html = tpl.render(Context({}))
self.assertEqual(html, "")
self.assertEqual(PieceOfHtml.objects.count(), 1)
poc = PieceOfHtml.objects.all()[0]
self.assertEqual(poc.div_id, "test")
def test_existing_poc(self):
poc = mommy.make(PieceOfHtml, div_id="test", content="HELLO!!!")
tpl = Template('{% load coop_edition %}{% coop_piece_of_html "test" %}')
html = tpl.render(Context({}))
self.assertEqual(html, poc.content)
self.assertEqual(PieceOfHtml.objects.count(), 1)
poc = PieceOfHtml.objects.all()[0]
self.assertEqual(poc.div_id, "test")
def test_create_poc_read_only(self):
poc = mommy.make(PieceOfHtml, div_id="test", content="HELLO!!!")
tpl = Template('{% load coop_edition %}{% coop_piece_of_html "test" read-only %}')
html = tpl.render(Context({}))
self.assertEqual(html, poc.content)
self.assertEqual(PieceOfHtml.objects.count(), 1)
poc = PieceOfHtml.objects.all()[0]
self.assertEqual(poc.div_id, "test")
def test_create_edit_poc(self):
tpl = Template('{% load coop_edition %}{% coop_piece_of_html "test" %}')
html = tpl.render(Context({"inline_html_edit": True}))
self.assertNotEqual(html, "")
soup = BeautifulSoup(html)
tags = soup.select("#html_editor_html_editor__coop_cms__PieceOfHtml__div_id__test__content")
self.assertEqual(len(tags), 1)
self.assertEqual(tags[0].text, "")
tags_hidden = soup.select("#html_editor_html_editor__coop_cms__PieceOfHtml__div_id__test__content_hidden")
self.assertEqual(len(tags_hidden), 1)
self.assertEqual(tags_hidden[0].get("value", ""), "")
self.assertEqual(PieceOfHtml.objects.count(), 1)
poc = PieceOfHtml.objects.all()[0]
self.assertEqual(poc.div_id, "test")
def test_edit_poc(self):
poc = mommy.make(PieceOfHtml, div_id="test", content="HELLO!!!")
tpl = Template('{% load coop_edition %}{% coop_piece_of_html "test" %}')
html = tpl.render(Context({"inline_html_edit": True}))
self.assertNotEqual(html, poc.content)
soup = BeautifulSoup(html)
tags = soup.select("#html_editor_html_editor__coop_cms__PieceOfHtml__div_id__test__content")
self.assertEqual(len(tags), 1)
self.assertEqual(tags[0].text, poc.content)
tags_hidden = soup.select("#html_editor_html_editor__coop_cms__PieceOfHtml__div_id__test__content_hidden")
self.assertEqual(len(tags_hidden), 1)
self.assertEqual(tags_hidden[0]["value"], poc.content)
self.assertEqual(PieceOfHtml.objects.count(), 1)
poc = PieceOfHtml.objects.all()[0]
self.assertEqual(poc.div_id, "test")
def test_edit_poc_read_only(self):
poc = mommy.make(PieceOfHtml, div_id="test", content="HELLO!!!")
tpl = Template('{% load coop_edition %}{% coop_piece_of_html "test" read-only %}')
html = tpl.render(Context({"inline_html_edit": True}))
self.assertEqual(html, poc.content)
self.assertEqual(PieceOfHtml.objects.count(), 1)
poc = PieceOfHtml.objects.all()[0]
self.assertEqual(poc.div_id, "test")
def test_view_poc_extra_id(self):
poc = mommy.make(PieceOfHtml, div_id="test", content="HELLO!!!", extra_id="1")
tpl = Template('{% load coop_edition %}{% coop_piece_of_html "test" extra_id=1 %}')
html = tpl.render(Context({"inline_html_edit": False}))
self.assertEqual(html, poc.content)
self.assertEqual(PieceOfHtml.objects.count(), 1)
poc = PieceOfHtml.objects.all()[0]
self.assertEqual(poc.div_id, "test")
self.assertEqual(poc.extra_id, "1")
def test_edit_poc_extra_id(self):
poc = mommy.make(PieceOfHtml, div_id="test", content="HELLO!!!", extra_id="1")
tpl = Template('{% load coop_edition %}{% coop_piece_of_html "test" extra_id=1 %}')
html = tpl.render(Context({"inline_html_edit": True}))
soup = BeautifulSoup(html)
tags = soup.select("input[type=hidden]")
self.assertEqual(len(tags), 1)
div_selector = tags[0].attrs['id']
div_selector = div_selector.replace("_hidden", "")
tags = soup.select("#"+div_selector)
self.assertEqual(len(tags), 1)
self.assertEqual(tags[0].text, poc.content)
self.assertEqual(PieceOfHtml.objects.count(), 1)
poc = PieceOfHtml.objects.all()[0]
self.assertEqual(poc.div_id, "test")
self.assertEqual(poc.extra_id, "1")
def test_create_poc_extra_id(self):
tpl = Template('{% load coop_edition %}{% coop_piece_of_html "test" extra_id=1 %}')
html = tpl.render(Context({"inline_html_edit": False}))
self.assertEqual(html, "")
self.assertEqual(PieceOfHtml.objects.count(), 1)
poc = PieceOfHtml.objects.all()[0]
self.assertEqual(poc.div_id, "test")
self.assertEqual(poc.extra_id, "1")
def test_create_new_poc_extra_id(self):
poc = mommy.make(PieceOfHtml, div_id="test", content="HELLO!!!", extra_id="1")
tpl = Template('{% load coop_edition %}{% coop_piece_of_html "test" extra_id=2 %}')
html = tpl.render(Context({"inline_html_edit": False}))
self.assertEqual(html, "")
self.assertEqual(PieceOfHtml.objects.count(), 2)
PieceOfHtml.objects.get(div_id="test", extra_id="1")
PieceOfHtml.objects.get(div_id="test", extra_id="2")
def test_poc_extra_id_readonly(self):
poc = mommy.make(PieceOfHtml, div_id="test", content="HELLO!!!", extra_id="1")
tpl = Template('{% load coop_edition %}{% coop_piece_of_html "test" read-only extra_id=1 %}')
html = tpl.render(Context({"inline_html_edit": True}))
self.assertEqual(html, poc.content)
self.assertEqual(PieceOfHtml.objects.count(), 1)
PieceOfHtml.objects.get(div_id="test", extra_id="1")
@override_settings(COOP_CMS_ARTICLE_TEMPLATES=(('test/article_with_blocks.html', 'Article with blocks'),))
class BlockInheritanceTest(BaseArticleTest):
"""test using block templatetag inside the cms_edit template tag"""
def test_view_with_blocks(self):
"""test view article with block templatetag inside the cms_edit template tag"""
article_class = get_article_class()
article = mommy.make(
article_class,
title="This is my article", content="<p>This is my <b>content</b></p>",
template='test/article_with_blocks.html'
)
response = self.client.get(article.get_absolute_url())
self.assertEqual(response.status_code, 200)
self.assertContains(response, article.title)
self.assertContains(response, article.content)
self.assertContains(response, "*** HELLO FROM CHILD ***")
self.assertContains(response, "*** HELLO FROM PARENT ***")
self.assertContains(response, "*** HELLO FROM BLOCK ***")
def test_edit_with_blocks(self):
"""test edition with block templatetag inside the cms_edit template tag"""
article_class = get_article_class()
article = mommy.make(
article_class,
title="This is my article", content="<p>This is my <b>content</b></p>",
template='test/article_with_blocks.html'
)
self._log_as_editor()
data = {
"title": "This is a new title",
'content': "<p>This is a <i>*** NEW ***</i> <b>content</b></p>"
}
response = self.client.post(article.get_edit_url(), data=data, follow=True)
self.assertEqual(response.status_code, 200)
article = article_class.objects.get(id=article.id)
self.assertEqual(article.title, data['title'])
self.assertEqual(article.content, data['content'])
self.assertContains(response, article.title)
self.assertContains(response, article.content)
self.assertContains(response, "*** HELLO FROM CHILD ***")
self.assertContains(response, "*** HELLO FROM PARENT ***")
self.assertContains(response, "*** HELLO FROM BLOCK ***")
# runner/__init__.py (Hyunmok-Park/GNN_hyunmok, MIT)
from runner.inference_runner import *
# src/pipeline/datasets/training_datasets.py (guyfreund/data_drift_detection, MIT)
import pandas as pd
import pickle
from src.pipeline.config import Config
from src.pipeline.datasets.dataset import Dataset, SampledDataset
from src.pipeline.datasets.constants import DatasetType
from src.pipeline.datasets.paths import BANK_MARKETING_DATASET_PATH, GERMAN_CREDIT_DATASET_PATH, \
GERMAN_CREDIT_TRAINING_PROCESSED_DF_PLUS_PATH, BANK_MARKETING_TRAINING_PROCESSED_DF_PLUS_PATH, \
GERMAN_CREDIT_TRAINING_PROCESSED_DF_PATH, BANK_MARKETING_TRAINING_PROCESSED_DF_PATH, \
BANK_MARKETING_SAMPLED_DATASET_PATH, \
GERMAN_CREDIT_SAMPLED_DATASET_PATH, BANK_MARKETING_TRAINING_X_TRAIN_RAW, BANK_MARKETING_TRAINING_Y_TRAIN_RAW, \
GERMAN_CREDIT_TRAINING_Y_TRAIN_RAW, GERMAN_CREDIT_TRAINING_X_TRAIN_RAW
class BankMarketingDataset(Dataset):
def __init__(self, to_load: bool = True):
super().__init__(
dtype=DatasetType.Training,
path=BANK_MARKETING_DATASET_PATH,
numeric_feature_names=Config().preprocessing.bank_marketing.numeric_features,
categorical_feature_names=Config().preprocessing.bank_marketing.categorical_features,
label_column_name=Config().preprocessing.bank_marketing.original_label_column_name,
to_load=to_load
)
def load(self) -> pd.DataFrame:
return pd.read_csv(self._path, delimiter=';')
class GermanCreditDataset(Dataset):
def __init__(self, to_load: bool = True):
super().__init__(
dtype=DatasetType.Training,
path=GERMAN_CREDIT_DATASET_PATH,
numeric_feature_names=Config().preprocessing.german_credit.numeric_features,
categorical_feature_names=Config().preprocessing.german_credit.categorical_features,
label_column_name=Config().preprocessing.german_credit.original_label_column_name,
to_load=to_load
)
def load(self) -> pd.DataFrame:
return pd.read_csv(self._path, names=Config().preprocessing.german_credit.names, delimiter=' ')
class BankMarketingProcessedDataset(Dataset):
def __init__(self, to_load: bool = True):
super().__init__(
dtype=DatasetType.Training,
path=BANK_MARKETING_TRAINING_PROCESSED_DF_PATH,
numeric_feature_names=Config().preprocessing.bank_marketing.numeric_features,
categorical_feature_names=Config().preprocessing.bank_marketing.categorical_features,
label_column_name=Config().preprocessing.bank_marketing.original_label_column_name,
to_load=to_load
)
def load(self) -> pd.DataFrame:
return pd.read_pickle(self._path)
class GermanCreditProcessedDataset(Dataset):
def __init__(self, to_load: bool = True):
super().__init__(
dtype=DatasetType.Training,
path=GERMAN_CREDIT_TRAINING_PROCESSED_DF_PATH,
numeric_feature_names=Config().preprocessing.german_credit.numeric_features,
categorical_feature_names=Config().preprocessing.german_credit.categorical_features,
label_column_name=Config().preprocessing.german_credit.original_label_column_name,
to_load=to_load
)
def load(self) -> pd.DataFrame:
return pd.read_pickle(self._path)
class BankMarketingDatasetPlus(Dataset):
def __init__(self, to_load: bool = True):
super().__init__(
dtype=DatasetType.Training,
path=BANK_MARKETING_DATASET_PATH,
numeric_feature_names=Config().preprocessing.bank_marketing.numeric_features,
categorical_feature_names=Config().preprocessing.bank_marketing.categorical_features + ['y'],
label_column_name=Config().preprocessing.data_drift_model_label_column_name,
to_load=to_load
)
def load(self) -> pd.DataFrame:
df = pd.read_csv(self._path, delimiter=';')
df[Config().preprocessing.data_drift_model_label_column_name] = DatasetType.Training.value
return df
class GermanCreditDatasetPlus(Dataset):
def __init__(self, to_load: bool = True):
super().__init__(
dtype=DatasetType.Training,
path=GERMAN_CREDIT_DATASET_PATH,
numeric_feature_names=Config().preprocessing.german_credit.numeric_features,
categorical_feature_names=Config().preprocessing.german_credit.categorical_features + ['y'],
label_column_name=Config().preprocessing.data_drift_model_label_column_name,
to_load=to_load
)
def load(self) -> pd.DataFrame:
df = pd.read_csv(self._path, names=Config().preprocessing.german_credit.names, delimiter=' ')
df[Config().preprocessing.data_drift_model_label_column_name] = DatasetType.Training.value
return df
class BankMarketingSampledTrainingTrainDataset(SampledDataset):
def __init__(self):
super().__init__(
dtype=DatasetType.TrainingSampled,
raw_df_paths=[BANK_MARKETING_TRAINING_X_TRAIN_RAW, BANK_MARKETING_TRAINING_Y_TRAIN_RAW],
path=BANK_MARKETING_SAMPLED_DATASET_PATH,
numeric_feature_names=Config().preprocessing.bank_marketing.numeric_features,
categorical_feature_names=Config().preprocessing.bank_marketing.categorical_features,
label_column_name=Config().preprocessing.bank_marketing.original_label_column_name,
sample_size_in_percent=Config().retraining.training_sample_size_in_percent,
)
def load(self) -> pd.DataFrame:
return pd.read_csv(self._path, delimiter=';')
class GermanCreditSampledTrainingTrainDataset(SampledDataset):
def __init__(self):
super().__init__(
dtype=DatasetType.Training,
raw_df_paths=[GERMAN_CREDIT_TRAINING_X_TRAIN_RAW, GERMAN_CREDIT_TRAINING_Y_TRAIN_RAW],
path=GERMAN_CREDIT_SAMPLED_DATASET_PATH,
numeric_feature_names=Config().preprocessing.german_credit.numeric_features,
categorical_feature_names=Config().preprocessing.german_credit.categorical_features,
label_column_name=Config().preprocessing.german_credit.original_label_column_name,
sample_size_in_percent=Config().retraining.training_sample_size_in_percent
)
def load(self) -> pd.DataFrame:
return pd.read_csv(self._path, names=Config().preprocessing.german_credit.names, delimiter=' ')
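The German credit loaders above read a space-delimited file with no header row, passing explicit column names. A minimal standalone sketch of that `read_csv` usage (the raw rows and column names below are illustrative, not the real `Config().preprocessing.german_credit.names` values):

```python
import io

import pandas as pd

# Space-delimited data without a header line, as in the German credit file.
raw = "A11 6 A34 1\nA12 48 A43 2\n"

# `names` supplies the missing header; `delimiter=' '` matches the loaders above.
df = pd.read_csv(
    io.StringIO(raw),
    names=['status', 'duration', 'purpose', 'label'],
    delimiter=' ',
)
```

Without `names`, pandas would consume the first data row as a header, silently dropping one sample and producing meaningless column labels.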
# File: models/modules/__init__.py (repo: sadegh1404/Refinedet_saffran, license: MIT)
from .detection_block import DetectionBlock
from .tcb_block import TCBBlock
# File: homepage/views.py (repo: Endraraaz/Raj-Beverages, license: Apache-2.0)
from django.shortcuts import render
# Create your views here.
def home(request):
    return render(request, 'home.html')


def about(request):
    return render(request, 'about.html')


def store(request):
    return render(request, 'store.html')
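These three views only become reachable once mapped to URLs. A minimal `homepage/urls.py` routing configuration for them could look like the following sketch (the paths and route names are illustrative assumptions, not taken from the repo):

```python
# Hypothetical homepage/urls.py; paths and route names are assumptions.
from django.urls import path

from . import views

urlpatterns = [
    path('', views.home, name='home'),
    path('about/', views.about, name='about'),
    path('store/', views.store, name='store'),
]
```

This is a configuration fragment: it only runs inside a Django project whose root URLconf includes it.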
# File: stdnet/contrib/monitor/application.py (repo: TheProjecter/python-stdnet, license: BSD-3-Clause)
from djpcms.views import appsite


class StdNetApplication(appsite.ModelApplication):
    pass
# File: nrewebservices/ldbws/webservice.py (repo: grundleborg/nrewebservices, license: BSD-2-Clause)
from .responses import NextDeparturesBoard, NextDeparturesBoardWithDetails
from .responses import ServiceDetails
from .responses import StationBoard, StationBoardWithDetails
from suds.client import Client
import logging
import os
log = logging.getLogger(__name__)
ACCESS_TOKEN_NAMESPACE = 'http://thalesgroup.com/RTTI/2013-11-28/Token/types'
class Session(object):
"""
This class provides the interface to the LDBWS web service session.
Note:
There are some (unknown) internal rules on the LDBWS server which limit the number of
services returned in a response, sometimes to less than the number requested by the
`time_window` and/or `rows` parameters to a request. Unfortunately there is nothing that can
be done about this, so you just have to work with it.
"""
def __init__(self, wsdl=None, api_key=None, timeout=5):
"""
You should normally instantiate this class only once per application, as it fetches and
parses the WSDL from the server on instantiation, normally taking a few seconds to complete.
Args:
wsdl (str): the URL of the web service WSDL. Be sure to pass the ?ver=2016-02-16 on the
end of the URL to get the version this library currently supports. If this parameter
is not provided, the code expects to find an environment variable called
**NRE_LDBWS_WSDL** containing it instead.
api_key (str): your LDBWS API key. If this is not provided, the code expects to find an
environment variable called **NRE_LDBWS_API_KEY** containing it instead.
            timeout (int): the number of seconds after which the underlying SOAP client should
timeout unfinished requests.
Raises:
ValueError: if neither of the `wsdl` parameter or the **NRE_LDBWS_WSDL** environment
variable are provided.
ValueError: if neither of the `api_key` parameter or the **NRE_LDBWS_API_KEY**
environment variable are provided.
"""
# Try getting the WSDL and API KEY from the environment if they aren't explicitly passed.
if not wsdl:
try:
wsdl = os.environ['NRE_LDBWS_WSDL']
            except KeyError:
raise ValueError("LDBWS WSDL must be either explicitly provided to the Session initializer or via the environment variable NRE_LDBWS_WSDL.")
if not api_key:
try:
api_key = os.environ['NRE_LDBWS_API_KEY']
            except KeyError:
raise ValueError("LDBWS API key must be either explicitly provided to the Session initializer or via the environment variable NRE_LDBWS_API_KEY.")
# Build the SOAP client.
self._soap_client = Client(wsdl)
self._soap_client.set_options(timeout=timeout)
self._service = self._soap_client.service['LDBServiceSoap']
# Build the SOAP authentication headers.
access_token = self._soap_client.factory.create('{{{0}}}AccessToken'.format(
ACCESS_TOKEN_NAMESPACE))
access_token.TokenValue = api_key
self._soap_client.set_options(soapheaders=(access_token))
def _do_soap_query(self, query, parameters):
# TODO: Some form of error handling.
soap_response = query(**parameters)
return soap_response
def get_station_board(self, crs, rows=10, include_departures=True, include_arrivals=False,
from_filter_crs=None, to_filter_crs=None, time_offset=None, time_window=None):
"""
Get a list of public services at a station as would populate a departure/arrival board.
Args:
crs (str): the CRS code of the station for which this board is being fetched.
rows (int, from 1 to 150): the maximum number of services to include in the returned
board.
include_departures (boolean): whether the returned services should include departures
from this station. At least one of `include_departures` or `include_arrivals` must
be set to true.
include_arrivals (boolean): whether the returned services should include arrivals at
this station. At least one of `include_departures` or `include_arrivals` must be set
to true.
from_filter_crs (str): the CRS code of a station at which all services returned must
have called previously. Only one of `from_filter_crs` and `to_filter_crs` can be set
for a given request.
to_filter_crs (str): the CRS code of a station at which all services returned must
subsequently call. Only one of `from_filter_crs` and `to_filter_crs` can be set for
a given request.
time_offset (int, from -120 to 120): An offset in minutes against the current time which
determines the starting point of the time window for which services are returned. If
set to `None`, the value of 0 will be used.
time_window (int, from -120 to 120): How far into the future from the value passed as
`time_offset` should services be fetched. If the value passed is negative, the time
window starts before the value of `time_offset` and ends at `time_offset`. If `None`
is passed, the default value is 120.
Returns:
StationBoard: a `StationBoard` object containing the station details and the requested
services.
Raises:
ValueError: if neither include_departures or include_arrivals are set to True.
Note:
            Each time this method is called, it makes **1** request to the LDBWS server.
"""
# Get the appropriate SOAP query method.
if include_departures and include_arrivals:
query = self._service.GetArrivalDepartureBoard
elif include_departures:
query = self._service.GetDepartureBoard
elif include_arrivals:
query = self._service.GetArrivalBoard
else:
raise ValueError("When calling get_station_board, either include_departures or include_arrivals must be set to True.")
# Construct the query parameters.
params = {}
params['crs'] = crs
params['numRows'] = rows
if to_filter_crs:
if from_filter_crs:
                log.warning("get_station_board() can only be filtered on one of from_filter_crs and to_filter_crs. Since both are provided, using only to_filter_crs")
params['filterCrs'] = to_filter_crs
params['filterType'] = 'to'
elif from_filter_crs:
params['filterCrs'] = from_filter_crs
params['filterType'] = 'from'
if time_offset is not None:
params['timeOffset'] = time_offset
if time_window is not None:
params['timeWindow'] = time_window
# Do the SOAP query.
return StationBoard(self._do_soap_query(query, params))
def get_station_board_with_details(self, crs, rows=10, include_departures=True,
include_arrivals=False, from_filter_crs=None, to_filter_crs=None, time_offset=None,
time_window=None):
"""
Get a list of public services at a station as would populate a departure/arrival board.
This method is identical in arguments and result to `get_station_board`, except that the
returned result is of type `StationBoardWithDetails`, which includes the calling points
on the services, allowing access to them without an additional call to `get_service_details`
for each service.
Args:
crs (str): the CRS code of the station for which this board is being fetched.
rows (int, from 1 to 150): the maximum number of services to include in the returned
board.
include_departures (boolean): whether the returned services should include departures
from this station. At least one of `include_departures` or `include_arrivals` must
be set to true.
include_arrivals (boolean): whether the returned services should include arrivals at
this station. At least one of `include_departures` or `include_arrivals` must be set
to true.
from_filter_crs (str): the CRS code of a station at which all services returned must
have called previously. Only one of `from_filter_crs` and `to_filter_crs` can be set
for a given request.
to_filter_crs (str): the CRS code of a station at which all services returned must
subsequently call. Only one of `from_filter_crs` and `to_filter_crs` can be set for
a given request.
time_offset (int, from -120 to 120): An offset in minutes against the current time which
determines the starting point of the time window for which services are returned. If
set to `None`, the value of 0 will be used.
time_window (int, from -120 to 120): How far into the future from the value passed as
`time_offset` should services be fetched. If the value passed is negative, the time
window starts before the value of `time_offset` and ends at `time_offset`. If `None`
is passed, the default value is 120.
Returns:
StationBoardWithDetails: a `StationBoardWithDetails` object containing the station
details and the requested services, along with their calling points.
Raises:
ValueError: if neither include_departures or include_arrivals are set to True.
Note:
            Each time this method is called, it makes **1** request to the LDBWS server.
"""
# Get the appropriate SOAP query method.
if include_departures and include_arrivals:
query = self._service.GetArrDepBoardWithDetails
elif include_departures:
query = self._service.GetDepBoardWithDetails
elif include_arrivals:
query = self._service.GetArrBoardWithDetails
else:
raise ValueError("When calling get_station_board, either include_departures or include_arrivals must be set to True.")
# Construct the query parameters.
params = {}
params['crs'] = crs
params['numRows'] = rows
if to_filter_crs:
if from_filter_crs:
                log.warning("get_station_board_with_details() can only be filtered on one of from_filter_crs and to_filter_crs. Since both are provided, using only to_filter_crs")
params['filterCrs'] = to_filter_crs
params['filterType'] = 'to'
elif from_filter_crs:
params['filterCrs'] = from_filter_crs
params['filterType'] = 'from'
if time_offset is not None:
params['timeOffset'] = time_offset
if time_window is not None:
params['timeWindow'] = time_window
# Do the SOAP query.
return StationBoardWithDetails(self._do_soap_query(query, params))
def get_next_departures(self, crs, destinations, time_offset=None, time_window=None):
"""
Get the next public departures (within the supplied time window and offset) from the station
indicated by `crs` to the stations indicated by `destinations`.
Args:
crs (str): the CRS code of the station for which this board is being fetched.
destinations ([str]): a list of CRS codes representing the stations for which the next
departure from `crs` will be fetched. This parameter must contain at least 1, but no
more than 25 station CRS codes.
time_offset (int, from -120 to 120): An offset in minutes against the current time which
determines the starting point of the time window for which services are returned. If
set to `None`, the value of 0 will be used.
time_window (int, from -120 to 120): How far into the future from the value passed as
`time_offset` should services be fetched. If the value passed is negative, the time
window starts before the value of `time_offset` and ends at `time_offset`. If `None`
is passed, the default value is 120.
Returns:
NextDeparturesBoard: a `NextDeparturesBoard` object containing the station details and
the next departures to each of the requested destinations.
Raises:
ValueError: if `destinations` is not a list of between 1 and 25 values.
Note:
            Each time this method is called, it makes **1** request to the LDBWS server.
"""
# Get the appropriate SOAP query method.
query = self._service.GetNextDepartures
# Construct the query parameters.
params = {}
params['crs'] = crs
        if isinstance(destinations, list) and 1 <= len(destinations) <= 25:
params['filterList'] = {"crs": destinations}
else:
raise ValueError("destinations parameter should be a list of at least 1 but no more than 25 CRS codes.")
if time_offset is not None:
params['timeOffset'] = time_offset
if time_window is not None:
params['timeWindow'] = time_window
# Do the SOAP query.
return NextDeparturesBoard(self._do_soap_query(query, params))
def get_next_departures_with_details(self, crs, destinations, time_offset=None, time_window=None):
"""
Get the next public departures (within the supplied time window and offset) from the station
indicated by `crs` to the stations indicated by `destinations`.
This method is identical in arguments and result to `get_next_departures`, except that the
returned result is of type `NextDeparturesBoardWithDetails`, which includes the calling
points on the services, allowing access to them without an additional call to
`get_service_details` for each service.
Args:
crs (str): the CRS code of the station for which this board is being fetched.
destinations ([str]): a list of CRS codes representing the stations for which the next
departure from `crs` will be fetched. This parameter must contain at least 1, but no
more than 25 station CRS codes.
time_offset (int, from -120 to 120): An offset in minutes against the current time which
determines the starting point of the time window for which services are returned. If
set to `None`, the value of 0 will be used.
time_window (int, from -120 to 120): How far into the future from the value passed as
`time_offset` should services be fetched. If the value passed is negative, the time
window starts before the value of `time_offset` and ends at `time_offset`. If `None`
is passed, the default value is 120.
Returns:
NextDeparturesBoardWithDetails: a `NextDeparturesBoardWithDetails` object containing the
station details and the next departures to each of the requested destinations.
Raises:
ValueError: if `destinations` is not a list of between 1 and 25 values.
Note:
            Each time this method is called, it makes **1** request to the LDBWS server.
"""
# Get the appropriate SOAP query method.
query = self._service.GetNextDeparturesWithDetails
# Construct the query parameters.
params = {}
params['crs'] = crs
        if isinstance(destinations, list) and 1 <= len(destinations) <= 25:
params['filterList'] = {"crs": destinations}
else:
raise ValueError("destinations parameter should be a list of at least 1 but no more than 25 CRS codes.")
if time_offset is not None:
params['timeOffset'] = time_offset
if time_window is not None:
params['timeWindow'] = time_window
# Do the SOAP query.
return NextDeparturesBoardWithDetails(self._do_soap_query(query, params))
def get_fastest_departures(self, crs, destinations, time_offset=None, time_window=None):
"""
Get the fastest public departures (within the supplied time window and offset) from the
station indicated by `crs` to the stations indicated by `destinations`. The difference
between this method and `get_next_departures` is that for each destination, the train which
arrives first at the destination out of the next departures from this station is returned,
rather than the one which departs from this station first.
Args:
crs (str): the CRS code of the station for which this board is being fetched.
destinations ([str]): a list of CRS codes representing the stations for which the next
departure from `crs` will be fetched. This parameter must contain at least 1, but no
more than 25 station CRS codes.
time_offset (int, from -120 to 120): An offset in minutes against the current time which
determines the starting point of the time window for which services are returned. If
set to `None`, the value of 0 will be used.
time_window (int, from -120 to 120): How far into the future from the value passed as
`time_offset` should services be fetched. If the value passed is negative, the time
window starts before the value of `time_offset` and ends at `time_offset`. If `None`
is passed, the default value is 120.
Returns:
NextDeparturesBoard: a `NextDeparturesBoard` object containing the station details and
the fastest departures to each of the requested destinations.
Raises:
ValueError: if `destinations` is not a list of between 1 and 25 values.
Note:
            Each time this method is called, it makes **1** request to the LDBWS server.
"""
# Get the appropriate SOAP query method.
query = self._service.GetFastestDepartures
# Construct the query parameters.
params = {}
params['crs'] = crs
        if isinstance(destinations, list) and 1 <= len(destinations) <= 25:
params['filterList'] = {"crs": destinations}
else:
raise ValueError("destinations parameter should be a list of at least 1 but no more than 25 CRS codes.")
if time_offset is not None:
params['timeOffset'] = time_offset
if time_window is not None:
params['timeWindow'] = time_window
        # Do the SOAP query.
        return NextDeparturesBoard(self._do_soap_query(query, params))
def get_fastest_departures_with_details(self, crs, destinations, time_offset=None,
time_window=None):
"""
Get the fastest public departures (within the supplied time window and offset) from the
station indicated by `crs` to the stations indicated by `destinations`. The difference
between this method and `get_next_departures_with_details` is that for each destination, the
train which arrives first at the destination out of the next departures from this station is
returned, rather than the one which departs from this station first.
This method is identical in arguments and result to `get_fastest_departures`, except that
the returned result is of type `NextDeparturesBoardWithDetails`, which includes the calling
points on the services, allowing access to them without an additional call to
`get_service_details` for each service.
Args:
crs (str): the CRS code of the station for which this board is being fetched.
destinations ([str]): a list of CRS codes representing the stations for which the next
departure from `crs` will be fetched. This parameter must contain at least 1, but no
more than 25 station CRS codes.
time_offset (int, from -120 to 120): An offset in minutes against the current time which
determines the starting point of the time window for which services are returned. If
set to `None`, the value of 0 will be used.
time_window (int, from -120 to 120): How far into the future from the value passed as
`time_offset` should services be fetched. If the value passed is negative, the time
window starts before the value of `time_offset` and ends at `time_offset`. If `None`
is passed, the default value is 120.
Returns:
NextDeparturesBoardWithDetails: a `NextDeparturesBoardWithDetails` object containing the
station details and the fastest departures to each of the requested destinations.
Raises:
ValueError: if `destinations` is not a list of between 1 and 25 values.
Note:
            Each time this method is called, it makes **1** request to the LDBWS server.
"""
# Get the appropriate SOAP query method.
        query = self._service.GetFastestDeparturesWithDetails
# Construct the query parameters.
params = {}
params['crs'] = crs
        if isinstance(destinations, list) and 1 <= len(destinations) <= 25:
params['filterList'] = {"crs": destinations}
else:
raise ValueError("destinations parameter should be a list of at least 1 but no more than 25 CRS codes.")
if time_offset is not None:
params['timeOffset'] = time_offset
if time_window is not None:
params['timeWindow'] = time_window
# Do the SOAP query.
return NextDeparturesBoardWithDetails(self._do_soap_query(query, params))
def get_service_details(self, service_id):
"""
Get the full details of a service from a board.
Args:
service_id (str): the service_id of the relevant ServiceItem on the board.
Returns:
ServiceDetails: a `ServiceDetails` object containing the details of the requested
service.
Note:
            Each time this method is called, it makes **1** request to the LDBWS server.
"""
# Get the appropriate SOAP query method.
query = self._service.GetServiceDetails
# Construct the query parameters.
params = {}
params['serviceID'] = service_id
# Do the SOAP query.
return ServiceDetails(self._do_soap_query(query, params))
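All four filtered-departure methods of `Session` validate and assemble their SOAP parameters the same way. That shared logic can be exercised in isolation; the helper below is our own mirror of it, written for illustration only, and is not part of the library's API:

```python
def build_departures_params(crs, destinations, time_offset=None, time_window=None):
    """Mirror of the parameter construction used by the get_*_departures methods."""
    # Same guard as in the library: a list of 1 to 25 CRS codes.
    if not (isinstance(destinations, list) and 1 <= len(destinations) <= 25):
        raise ValueError("destinations parameter should be a list of at least 1 "
                         "but no more than 25 CRS codes.")
    params = {"crs": crs, "filterList": {"crs": destinations}}
    if time_offset is not None:
        params["timeOffset"] = time_offset
    if time_window is not None:
        params["timeWindow"] = time_window
    return params


# Parameters for the next departures from Paddington towards Reading and
# Oxford within the next 60 minutes.
print(build_departures_params("PAD", ["RDG", "OXF"], time_window=60))
```

A real call would let the library do this internally, e.g. `session.get_next_departures("PAD", ["RDG", "OXF"], time_window=60)`, which also performs the network request through suds.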
# File: modules/__init__.py (repo: fpthink/V2B, license: MIT)
from modules.se import SE3d
from modules.voxelization import Voxelization
# File: value/primitives/is_equal.py (repo: choleraehyq/yuujins, license: MIT)
# TODO(Cholerae): Implement equal?
# File: metric/modeling/heads/__init__.py (repo: jireh-father/pymetric, license: MIT)
from .linear_head import LinearHead
# File: depc/apiv1/variables.py (repo: dingcycle/depc, license: BSD-3-Clause)
from flask import abort, jsonify
from flask_login import login_required
from depc.apiv1 import api, format_object, get_payload
from depc.controllers.variables import VariableController
from depc.users import TeamPermission
VISIBLE = ["name", "value", "type", "expression"]
def format_variable(source):
visible = list(VISIBLE)
s = format_object(source, visible)
return s
@api.route("/teams/<team_id>/variables")
@login_required
def list_team_variables(team_id):
"""
.. :quickref: GET; Lorem ipsum."""
if not TeamPermission.is_user(team_id):
abort(403)
variables = VariableController.list(
filters={
"Variable": {
"team_id": team_id,
"rule_id": None,
"source_id": None,
"check_id": None,
}
}
)
return jsonify([format_variable(v) for v in variables]), 200
@api.route("/teams/<team_id>/rules/<rule_id>/variables")
@login_required
def list_rule_variables(team_id, rule_id):
"""
.. :quickref: GET; Lorem ipsum."""
if not TeamPermission.is_user(team_id):
abort(403)
variables = VariableController.list(
filters={
"Variable": {
"team_id": team_id,
"rule_id": rule_id,
"source_id": None,
"check_id": None,
}
}
)
return jsonify([format_variable(v) for v in variables]), 200
@api.route("/teams/<team_id>/sources/<source_id>/variables")
@login_required
def list_source_variables(team_id, source_id):
"""
.. :quickref: GET; Lorem ipsum."""
if not TeamPermission.is_user(team_id):
abort(403)
variables = VariableController.list(
filters={
"Variable": {
"team_id": team_id,
"rule_id": None,
"source_id": source_id,
"check_id": None,
}
}
)
return jsonify([format_variable(v) for v in variables]), 200
@api.route("/teams/<team_id>/sources/<source_id>/checks/<check_id>/variables")
@login_required
def list_check_variables(team_id, source_id, check_id):
"""
.. :quickref: GET; Lorem ipsum."""
if not TeamPermission.is_user(team_id):
abort(403)
variables = VariableController.list(
filters={
"Variable": {
"team_id": team_id,
"rule_id": None,
"source_id": source_id,
"check_id": check_id,
}
}
)
return jsonify([format_variable(v) for v in variables]), 200
@api.route("/teams/<team_id>/variables/<variable_id>")
@login_required
def get_team_variable(team_id, variable_id):
"""
.. :quickref: GET; Lorem ipsum."""
if not TeamPermission.is_user(team_id):
abort(403)
variable = VariableController.get(
filters={
"Variable": {
"id": variable_id,
"team_id": team_id,
"rule_id": None,
"source_id": None,
"check_id": None,
}
}
)
return jsonify(format_variable(variable)), 200
@api.route("/teams/<team_id>/rules/<rule_id>/variables/<variable_id>")
@login_required
def get_rule_variable(team_id, rule_id, variable_id):
"""
.. :quickref: GET; Lorem ipsum."""
if not TeamPermission.is_user(team_id):
abort(403)
variable = VariableController.get(
filters={
"Variable": {
"id": variable_id,
"team_id": team_id,
"rule_id": rule_id,
"source_id": None,
"check_id": None,
}
}
)
return jsonify(format_variable(variable)), 200
@api.route("/teams/<team_id>/sources/<source_id>/variables/<variable_id>")
@login_required
def get_source_variable(team_id, source_id, variable_id):
"""
.. :quickref: GET; Lorem ipsum."""
if not TeamPermission.is_user(team_id):
abort(403)
variable = VariableController.get(
filters={
"Variable": {
"id": variable_id,
"team_id": team_id,
"rule_id": None,
"source_id": source_id,
"check_id": None,
}
}
)
return jsonify(format_variable(variable)), 200
@api.route(
"/teams/<team_id>/sources/<source_id>/checks/<check_id>/variables/<variable_id>"
)
@login_required
def get_check_variable(team_id, source_id, check_id, variable_id):
"""
.. :quickref: GET; Lorem ipsum."""
if not TeamPermission.is_user(team_id):
abort(403)
variable = VariableController.get(
filters={
"Variable": {
"id": variable_id,
"team_id": team_id,
"rule_id": None,
"source_id": source_id,
"check_id": check_id,
}
}
)
return jsonify(format_variable(variable)), 200
@api.route(
"/teams/<team_id>/variables",
methods=["POST"],
request_schema=("v1_variable", "variable_input"),
)
@login_required
def post_team_variable(team_id):
"""
.. :quickref: POST; Lorem ipsum."""
if not TeamPermission.is_manager_or_editor(team_id):
abort(403)
payload = get_payload()
payload.update({"team_id": team_id})
variable = VariableController.create(payload)
return jsonify(format_variable(variable)), 200
@api.route(
"/teams/<team_id>/rules/<rule_id>/variables",
methods=["POST"],
request_schema=("v1_variable", "variable_input"),
)
@login_required
def post_rule_variable(team_id, rule_id):
"""
.. :quickref: POST; Lorem ipsum."""
if not TeamPermission.is_manager_or_editor(team_id):
abort(403)
payload = get_payload()
payload.update({"team_id": team_id, "rule_id": rule_id})
variable = VariableController.create(payload)
return jsonify(format_variable(variable)), 200
@api.route(
"/teams/<team_id>/sources/<source_id>/variables",
methods=["POST"],
request_schema=("v1_variable", "variable_input"),
)
@login_required
def post_source_variable(team_id, source_id):
"""
    .. :quickref: POST; Create a source variable."""
if not TeamPermission.is_manager_or_editor(team_id):
abort(403)
payload = get_payload()
payload.update({"team_id": team_id, "source_id": source_id})
variable = VariableController.create(payload)
return jsonify(format_variable(variable)), 200
@api.route(
"/teams/<team_id>/sources/<source_id>/checks/<check_id>/variables",
methods=["POST"],
request_schema=("v1_variable", "variable_input"),
)
@login_required
def post_check_variable(team_id, source_id, check_id):
"""
    .. :quickref: POST; Create a check variable."""
if not TeamPermission.is_manager_or_editor(team_id):
abort(403)
payload = get_payload()
payload.update({"team_id": team_id, "source_id": source_id, "check_id": check_id})
variable = VariableController.create(payload)
return jsonify(format_variable(variable)), 200
@api.route(
"/teams/<team_id>/variables/<variable_id>",
methods=["PUT"],
request_schema=("v1_variable", "variable_input"),
)
@login_required
def put_team_variable(team_id, variable_id):
"""
    .. :quickref: PUT; Update a team variable."""
if not TeamPermission.is_manager_or_editor(team_id):
abort(403)
payload = get_payload()
variable = VariableController.update(
payload,
{
"Variable": {
"id": variable_id,
"team_id": team_id,
"rule_id": None,
"source_id": None,
"check_id": None,
}
},
)
return jsonify(format_variable(variable)), 200
@api.route(
"/teams/<team_id>/rules/<rule_id>/variables/<variable_id>",
methods=["PUT"],
request_schema=("v1_variable", "variable_input"),
)
@login_required
def put_rule_variable(team_id, rule_id, variable_id):
"""
    .. :quickref: PUT; Update a rule variable."""
if not TeamPermission.is_manager_or_editor(team_id):
abort(403)
payload = get_payload()
variable = VariableController.update(
payload,
{
"Variable": {
"id": variable_id,
"team_id": team_id,
"rule_id": rule_id,
"source_id": None,
"check_id": None,
}
},
)
return jsonify(format_variable(variable)), 200
@api.route(
"/teams/<team_id>/sources/<source_id>/variables/<variable_id>",
methods=["PUT"],
request_schema=("v1_variable", "variable_input"),
)
@login_required
def put_source_variable(team_id, source_id, variable_id):
"""
    .. :quickref: PUT; Update a source variable."""
if not TeamPermission.is_manager_or_editor(team_id):
abort(403)
payload = get_payload()
variable = VariableController.update(
payload,
{
"Variable": {
"id": variable_id,
"team_id": team_id,
"rule_id": None,
"source_id": source_id,
"check_id": None,
}
},
)
return jsonify(format_variable(variable)), 200
@api.route(
"/teams/<team_id>/sources/<source_id>/checks/<check_id>/variables/<variable_id>",
methods=["PUT"],
request_schema=("v1_variable", "variable_input"),
)
@login_required
def put_check_variable(team_id, source_id, check_id, variable_id):
"""
    .. :quickref: PUT; Update a check variable."""
if not TeamPermission.is_manager_or_editor(team_id):
abort(403)
payload = get_payload()
variable = VariableController.update(
payload,
{
"Variable": {
"id": variable_id,
"team_id": team_id,
"rule_id": None,
"source_id": source_id,
"check_id": check_id,
}
},
)
return jsonify(format_variable(variable)), 200
@api.route("/teams/<team_id>/variables/<variable_id>", methods=["DELETE"])
@login_required
def delete_team_variable(team_id, variable_id):
"""
    .. :quickref: DELETE; Delete a team variable."""
if not TeamPermission.is_manager_or_editor(team_id):
abort(403)
variable = VariableController.delete(
filters={
"Variable": {
"id": variable_id,
"team_id": team_id,
"rule_id": None,
"source_id": None,
"check_id": None,
}
}
)
return jsonify(format_variable(variable)), 200
@api.route(
"/teams/<team_id>/rules/<rule_id>/variables/<variable_id>", methods=["DELETE"]
)
@login_required
def delete_rule_variable(team_id, rule_id, variable_id):
"""
    .. :quickref: DELETE; Delete a rule variable."""
if not TeamPermission.is_manager_or_editor(team_id):
abort(403)
variable = VariableController.delete(
filters={
"Variable": {
"id": variable_id,
"team_id": team_id,
"rule_id": rule_id,
"source_id": None,
"check_id": None,
}
}
)
return jsonify(format_variable(variable)), 200
@api.route(
"/teams/<team_id>/sources/<source_id>/variables/<variable_id>", methods=["DELETE"]
)
@login_required
def delete_source_variable(team_id, source_id, variable_id):
"""
    .. :quickref: DELETE; Delete a source variable."""
if not TeamPermission.is_manager_or_editor(team_id):
abort(403)
variable = VariableController.delete(
filters={
"Variable": {
"id": variable_id,
"team_id": team_id,
"rule_id": None,
"source_id": source_id,
"check_id": None,
}
}
)
return jsonify(format_variable(variable)), 200
@api.route(
"/teams/<team_id>/sources/<source_id>/checks/<check_id>/variables/<variable_id>",
methods=["DELETE"],
)
@login_required
def delete_check_variable(team_id, source_id, check_id, variable_id):
"""
    .. :quickref: DELETE; Delete a check variable."""
if not TeamPermission.is_manager_or_editor(team_id):
abort(403)
variable = VariableController.delete(
filters={
"Variable": {
"id": variable_id,
"team_id": team_id,
"rule_id": None,
"source_id": source_id,
"check_id": check_id,
}
}
)
return jsonify(format_variable(variable)), 200
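Every endpoint in this section builds an identical `filters` mapping, differing only in which scope ids are non-`None`. That repetition could be collapsed with a small helper; `variable_filters` below is a name introduced here as a sketch, not part of the existing API:

```python
def variable_filters(variable_id, team_id,
                     rule_id=None, source_id=None, check_id=None):
    """Build the Variable filters mapping used by the endpoints above."""
    return {
        "Variable": {
            "id": variable_id,
            "team_id": team_id,
            "rule_id": rule_id,
            "source_id": source_id,
            "check_id": check_id,
        }
    }

# e.g. the filters for the check-scoped DELETE endpoint:
filters = variable_filters("v1", "t1", source_id="s1", check_id="c1")
```

Each route body would then reduce to a single call such as `VariableController.delete(filters=variable_filters(...))`.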
| 25.530242 | 86 | 0.581458 | 1,390 | 12,663 | 5.002158 | 0.047482 | 0.086294 | 0.036819 | 0.037969 | 0.949662 | 0.943909 | 0.921473 | 0.911837 | 0.89314 | 0.869697 | 0 | 0.014253 | 0.28524 | 12,663 | 495 | 87 | 25.581818 | 0.75395 | 0.051726 | 0 | 0.677686 | 0 | 0 | 0.173242 | 0.08976 | 0 | 0 | 0 | 0 | 0 | 1 | 0.057851 | false | 0 | 0.013774 | 0 | 0.129477 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1a3e1e00be991e8ed3dee61a89a340126a6e98f5 | 113 | py | Python | transmission_influxdb/utils.py | cheeseandcereal/transmission_influxdb_exporter | 9d815648bdaeb7d73e4635a1e652bf669bce62fe | [
"Unlicense"
] | 7 | 2021-01-29T07:01:41.000Z | 2022-01-06T23:52:09.000Z | transmission_influxdb/utils.py | cheeseandcereal/transmission_influxdb_exporter | 9d815648bdaeb7d73e4635a1e652bf669bce62fe | [
"Unlicense"
] | 5 | 2021-01-29T18:27:02.000Z | 2022-03-08T18:29:36.000Z | transmission_influxdb/utils.py | cheeseandcereal/transmission_influxdb_exporter | 9d815648bdaeb7d73e4635a1e652bf669bce62fe | [
"Unlicense"
] | 1 | 2022-01-25T22:53:25.000Z | 2022-01-25T22:53:25.000Z | import datetime
def now() -> str:
return datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%S.%f") + "Z"
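A quick check of the timestamp format `now()` produces (the function is reproduced so the snippet runs standalone):

```python
import datetime

def now() -> str:
    return datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%S.%f") + "Z"

ts = now()
# Always 27 characters: 26 for "YYYY-MM-DDTHH:MM:SS.ffffff" plus the literal "Z"
assert len(ts) == 27 and ts.endswith("Z")
```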
| 18.833333 | 76 | 0.60177 | 18 | 113 | 3.777778 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.141593 | 113 | 5 | 77 | 22.6 | 0.701031 | 0 | 0 | 0 | 0 | 0 | 0.185841 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 6 |
1a49347844f964da5445e3393863be76ea404def | 104 | py | Python | cla_backend/libs/eligibility_calculator/exceptions.py | uk-gov-mirror/ministryofjustice.cla_backend | 4d524c10e7bd31f085d9c5f7bf6e08a6bb39c0a6 | [
"MIT"
] | 3 | 2019-10-02T15:31:03.000Z | 2022-01-13T10:15:53.000Z | cla_backend/libs/eligibility_calculator/exceptions.py | uk-gov-mirror/ministryofjustice.cla_backend | 4d524c10e7bd31f085d9c5f7bf6e08a6bb39c0a6 | [
"MIT"
] | 206 | 2015-01-02T16:50:11.000Z | 2022-02-16T20:16:05.000Z | cla_backend/libs/eligibility_calculator/exceptions.py | uk-gov-mirror/ministryofjustice.cla_backend | 4d524c10e7bd31f085d9c5f7bf6e08a6bb39c0a6 | [
"MIT"
] | 6 | 2015-03-23T23:08:42.000Z | 2022-02-15T17:04:44.000Z | class PropertyExpectedException(Exception):
pass
class InvalidStateException(Exception):
pass
| 14.857143 | 43 | 0.788462 | 8 | 104 | 10.25 | 0.625 | 0.317073 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 104 | 6 | 44 | 17.333333 | 0.931818 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 0 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
1aba405dec3a510354ecc6c20b9e5615a2b02b83 | 39 | py | Python | src/pyfuncs/__init__.py | fishs-x/pyfuncs | 91ab5206b9a47866b29631f5666079d1b680ce5b | [
"MIT"
] | null | null | null | src/pyfuncs/__init__.py | fishs-x/pyfuncs | 91ab5206b9a47866b29631f5666079d1b680ce5b | [
"MIT"
] | null | null | null | src/pyfuncs/__init__.py | fishs-x/pyfuncs | 91ab5206b9a47866b29631f5666079d1b680ce5b | [
"MIT"
] | null | null | null | from .chrome_cookie import ChromeCookie | 39 | 39 | 0.897436 | 5 | 39 | 6.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.076923 | 39 | 1 | 39 | 39 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1abe7dd7a7fd89379932e0f2a681f6ef008a19f9 | 154 | py | Python | seed.py | triethuynh2301/macronizer-project | b4cd234de603abc8f588c143ac1fdf56063390c5 | [
"MIT"
] | null | null | null | seed.py | triethuynh2301/macronizer-project | b4cd234de603abc8f588c143ac1fdf56063390c5 | [
"MIT"
] | null | null | null | seed.py | triethuynh2301/macronizer-project | b4cd234de603abc8f588c143ac1fdf56063390c5 | [
"MIT"
] | null | null | null | from macronizer_cores import create_app
from macronizer_cores import db
app = create_app()
with app.app_context():
db.drop_all()
db.create_all() | 19.25 | 39 | 0.75974 | 24 | 154 | 4.583333 | 0.458333 | 0.254545 | 0.345455 | 0.454545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.155844 | 154 | 8 | 40 | 19.25 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
46c88531d954b7b83f8529bc290ec819eeb6e5f4 | 45 | py | Python | mundo 1/abc.py | jorgeduartejr/Ex-PYTHON | 266b656ad94065e77ece7cdbc9e09062c5933100 | [
"MIT"
] | null | null | null | mundo 1/abc.py | jorgeduartejr/Ex-PYTHON | 266b656ad94065e77ece7cdbc9e09062c5933100 | [
"MIT"
] | null | null | null | mundo 1/abc.py | jorgeduartejr/Ex-PYTHON | 266b656ad94065e77ece7cdbc9e09062c5933100 | [
"MIT"
] | null | null | null | a = 10
b = 20
c = a
b = c
a = b
print(a, b, c)  # all three now hold 10, so this prints: 10 10 10
| 6.428571 | 12 | 0.444444 | 14 | 45 | 1.428571 | 0.428571 | 0.3 | 0.3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137931 | 0.355556 | 45 | 6 | 13 | 7.5 | 0.551724 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.166667 | 1 | 1 | 1 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
46e05c90910160b187ae4614946d7c0d6cea9a41 | 330 | py | Python | Gathered CTF writeups/ptr-yudai-writeups/2019/Facebook_CTF_2019/matryoshka/reverse.py | mihaid-b/CyberSakura | f60e6b6bfd6898c69b84424b080090ae98f8076c | [
"MIT"
] | 1 | 2022-03-27T06:00:41.000Z | 2022-03-27T06:00:41.000Z | Gathered CTF writeups/ptr-yudai-writeups/2019/Facebook_CTF_2019/matryoshka/reverse.py | mihaid-b/CyberSakura | f60e6b6bfd6898c69b84424b080090ae98f8076c | [
"MIT"
] | null | null | null | Gathered CTF writeups/ptr-yudai-writeups/2019/Facebook_CTF_2019/matryoshka/reverse.py | mihaid-b/CyberSakura | f60e6b6bfd6898c69b84424b080090ae98f8076c | [
"MIT"
] | 1 | 2022-03-27T06:01:42.000Z | 2022-03-27T06:01:42.000Z | string = b"\xf6\x2c\x72\x1a\x03\x99\x0e\x78\xbd\x90\xe9\x68\xd0\x69\x37\x29"
string += b"\xf8\x12\xf4\xe5\xd0\xfb\xf3\x7e\x72\x61\x79\x19\xed\x44\x12\x52"
string += b"\xf5\xf9\xaa\x14\x36\x0d\x1f\xb2\x52\x6b\xf2\x6a\xda\x9d\xec\x3c"
x = b'\xda\x9d\xec\x3c'
last = b'\xda\x28\x5c\x11'
target = b'\x00\xb5\xb0\x2d'
# memory[xmm2]  <- scratch note from the disassembly; 'memory' is not defined in this script
| 27.5 | 77 | 0.681818 | 74 | 330 | 3.040541 | 0.783784 | 0.093333 | 0.08 | 0.106667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2443 | 0.069697 | 330 | 11 | 78 | 30 | 0.488599 | 0 | 0 | 0 | 0 | 0.428571 | 0.729483 | 0.583587 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
46f16cc4327009688cba83d95a20b7c286a29537 | 2,169 | py | Python | day19/solution.py | andrewyang96/AdventOfCode2017 | 665d7869fb8677f41c07ca2177b4fe3ea3356fec | [
"MIT"
] | null | null | null | day19/solution.py | andrewyang96/AdventOfCode2017 | 665d7869fb8677f41c07ca2177b4fe3ea3356fec | [
"MIT"
] | null | null | null | day19/solution.py | andrewyang96/AdventOfCode2017 | 665d7869fb8677f41c07ca2177b4fe3ea3356fec | [
"MIT"
] | null | null | null | from typing import List
def get_letters(grid: List[List[str]]) -> str:
i_dir, j_dir = 1, 0
i, j = 0, grid[0].index('|')
height, width = len(grid), len(grid[0])
letters_encontered = ''
while i >= 0 and i < height and j >= 0 and j < width:
char = grid[i][j]
if char == '+':
if i_dir == 0:
if i > 0 and grid[i-1][j] != ' ':
i_dir, j_dir = -1, 0
elif i < height-1 and grid[i+1][j] != ' ':
i_dir, j_dir = 1, 0
else:
raise ValueError('Invalid rotation')
elif j_dir == 0:
if j > 0 and grid[i][j-1] != ' ':
i_dir, j_dir = 0, -1
elif j < width-1 and grid[i][j+1] != ' ':
i_dir, j_dir = 0, 1
else:
raise ValueError('Invalid rotation')
else:
raise ValueError('One of i_dir and j_dir must be 0')
elif 'A' <= char <= 'Z':
letters_encontered += char
i += i_dir
j += j_dir
return letters_encontered
def get_num_steps(grid: List[List[str]]) -> int:
i_dir, j_dir = 1, 0
i, j = 0, grid[0].index('|')
height, width = len(grid), len(grid[0])
num_steps = -1
while i >= 0 and i < height and j >= 0 and j < width:
char = grid[i][j]
if char == '+':
if i_dir == 0:
if i > 0 and grid[i-1][j] != ' ':
i_dir, j_dir = -1, 0
elif i < height-1 and grid[i+1][j] != ' ':
i_dir, j_dir = 1, 0
else:
raise ValueError('Invalid rotation')
elif j_dir == 0:
if j > 0 and grid[i][j-1] != ' ':
i_dir, j_dir = 0, -1
elif j < width-1 and grid[i][j+1] != ' ':
i_dir, j_dir = 0, 1
else:
raise ValueError('Invalid rotation')
else:
raise ValueError('One of i_dir and j_dir must be 0')
i += i_dir
j += j_dir
num_steps += 1
return num_steps
| 35.557377 | 68 | 0.420931 | 306 | 2,169 | 2.849673 | 0.127451 | 0.073395 | 0.068807 | 0.091743 | 0.786697 | 0.786697 | 0.763761 | 0.763761 | 0.763761 | 0.763761 | 0 | 0.044888 | 0.445367 | 2,169 | 60 | 69 | 36.15 | 0.679967 | 0 | 0 | 0.827586 | 0 | 0 | 0.065468 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034483 | false | 0 | 0.017241 | 0 | 0.086207 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
204d734355c47e853eb215432efa968dcd1a9ab2 | 17,454 | py | Python | tests/test_strategy.py | lohithn4/NowTrade | ac04499731130297135b3526325191bd2cb36343 | [
"MIT"
] | 87 | 2015-11-09T07:11:32.000Z | 2021-12-16T03:13:09.000Z | tests/test_strategy.py | lohithn4/NowTrade | ac04499731130297135b3526325191bd2cb36343 | [
"MIT"
] | 14 | 2015-09-28T18:24:18.000Z | 2020-04-22T15:17:26.000Z | tests/test_strategy.py | lohithn4/NowTrade | ac04499731130297135b3526325191bd2cb36343 | [
"MIT"
] | 34 | 2015-10-12T13:26:09.000Z | 2022-01-15T20:16:23.000Z | import unittest
import datetime
import numpy as np
import pandas as pd
from testing_data import DummyDataConnection
from nowtrade import symbol_list, data_connection, dataset, technical_indicator, \
criteria, criteria_group, trading_profile, trading_amount, \
trading_fee, report, strategy
from nowtrade.report import InvalidExit
from nowtrade.action import Long, Short, LongExit, ShortExit, SHORT_EXIT
class TestStrategy(unittest.TestCase):
def setUp(self):
self.dc = DummyDataConnection()
self.sl = symbol_list.SymbolList(['MSFT'])
self.symbol = self.sl.get('msft')
self.d = dataset.Dataset(self.sl, self.dc, None, None, 0)
self.d.load_data()
def test_simple_long_strategy(self):
enter_crit = criteria.Above(self.symbol.close, 25.88)
exit_crit = criteria.BarsSinceLong(self.symbol, 2)
enter_crit_group = criteria_group.CriteriaGroup([enter_crit], Long(), self.symbol)
exit_crit_group = criteria_group.CriteriaGroup([exit_crit], LongExit(), self.symbol)
tp = trading_profile.TradingProfile(10000, trading_amount.StaticAmount(5000), trading_fee.StaticFee(0))
strat = strategy.Strategy(self.d, [enter_crit_group, exit_crit_group], tp)
repr_string = 'Strategy(dataset=Dataset(symbol_list=[MSFT], data_connection=DummyDataConnection(), start_datetime=None, end_datetime=None, periods=0, granularity=None), criteria_groups=[CriteriaGroup(criteria_list=[Above_MSFT_Close_25.88_1, Not_InMarket(symbol=MSFT)], action=long, symbol=MSFT), CriteriaGroup(criteria_list=[BarsSinceLong_MSFT_2_None, IsLong_MSFT], action=longexit, symbol=MSFT)], trading_profile=TradingProfile(capital=10000, trading_amount=StaticAmount(amount=5000, round_up=False), trading_fee=StaticFee(fee=0), slippage=0.0)'
        self.assertEqual(strat.__repr__(), repr_string)
strat.simulate()
report_overview = strat.report.overview()
self.assertAlmostEqual(strat.realtime_data_frame.iloc[4]['PL_MSFT'], report_overview['net_profit'])
self.assertTrue(np.isnan(strat.realtime_data_frame.iloc[0]['CHANGE_PERCENT_MSFT']))
self.assertTrue(np.isnan(strat.realtime_data_frame.iloc[5]['CHANGE_VALUE_MSFT']))
self.assertEqual(strat.realtime_data_frame.iloc[0]['ACTIONS_MSFT'], 0)
self.assertEqual(strat.realtime_data_frame.iloc[1]['ACTIONS_MSFT'], 1)
self.assertEqual(strat.realtime_data_frame.iloc[2]['ACTIONS_MSFT'], 0)
self.assertEqual(strat.realtime_data_frame.iloc[3]['ACTIONS_MSFT'], 0)
self.assertEqual(strat.realtime_data_frame.iloc[4]['ACTIONS_MSFT'], -1)
self.assertEqual(strat.realtime_data_frame.iloc[5]['ACTIONS_MSFT'], 0)
self.assertEqual(strat.realtime_data_frame.iloc[0]['STATUS_MSFT'], 0)
self.assertEqual(strat.realtime_data_frame.iloc[1]['STATUS_MSFT'], 1)
self.assertEqual(strat.realtime_data_frame.iloc[2]['STATUS_MSFT'], 1)
self.assertEqual(strat.realtime_data_frame.iloc[3]['STATUS_MSFT'], 1)
self.assertEqual(strat.realtime_data_frame.iloc[4]['STATUS_MSFT'], 0)
self.assertEqual(report_overview['trades'], 1)
self.assertEqual(report_overview['winning_trades'], 0)
self.assertEqual(report_overview['losing_trades'], 1)
self.assertEqual(report_overview['lacking_capital'], 0)
self.assertEqual(report_overview['gross_profit'], 0)
self.assertEqual(report_overview['gross_loss'], report_overview['net_profit'])
self.assertEqual(report_overview['ongoing_trades'], 0)
self.assertEqual(report_overview['average_trading_amount'], 5003.5199999999995)
self.assertEqual(report_overview['profitability'], 0)
pretty_overview_string = 'Trades:\nMSFT\nTrade(datetime=2010-06-02 00:00:00, action=LONG, symbol=MSFT, price=26.06, shares=192.0, money=5003.52, fee=0, slippage=0.0)\nTrade(datetime=2010-06-07 00:00:00, action=LONG_EXIT, symbol=MSFT, price=25.82, shares=192.0, money=4957.44, fee=0, slippage=0.0)\nProfitability: 0.0\n# Trades: 1\nNet Profit: -46.08\nGross Profit: 0.0\nGross Loss: -46.08\nWinning Trades: 0\nLosing Trades: 1\nSharpe Ratio: -6.0\nAvg. Trading Amount: 5003.52\nAvg. Fees: 0.0\nAvg. Slippage: 0.0\nAvg. Gains: -0.929512006197\nAvg. Winner: 0.0\nAvg. Loser: -0.929512006197\nAvg. Bars: 3.0\nTotal Fees: 0.0\nTotal Slippage: 0.0\nTrades Lacking Capital: 0\nOngoing Trades: 0'
self.assertEqual(strat.report.pretty_overview(), pretty_overview_string)
with self.assertRaises(report.InvalidExit):
strat.report.long_exit(None, None, 'MSFT')
with self.assertRaises(report.InvalidExit):
strat.report.short_exit(None, None, 'MSFT')
def test_simple_short_strategy(self):
enter_crit = criteria.Above(self.symbol.close, 25.88)
exit_crit = criteria.BarsSinceShort(self.symbol, 2)
enter_crit_group = criteria_group.CriteriaGroup([enter_crit], Short(), self.symbol)
exit_crit_group = criteria_group.CriteriaGroup([exit_crit], ShortExit(), self.symbol)
tp = trading_profile.TradingProfile(10000, trading_amount.StaticAmount(5000), trading_fee.StaticFee(5))
        self.assertEqual(tp.__repr__(), 'TradingProfile(capital=10000, trading_amount=StaticAmount(amount=5000, round_up=False), trading_fee=StaticFee(fee=5), slippage=0.0')
strat = strategy.Strategy(self.d, [enter_crit_group, exit_crit_group], tp)
strat.simulate()
report_overview = strat.report.overview()
self.assertAlmostEqual(strat.realtime_data_frame.iloc[4]['PL_MSFT'], report_overview['net_profit'])
self.assertTrue(np.isnan(strat.realtime_data_frame.iloc[0]['CHANGE_PERCENT_MSFT']))
self.assertTrue(np.isnan(strat.realtime_data_frame.iloc[5]['CHANGE_VALUE_MSFT']))
self.assertEqual(strat.realtime_data_frame.iloc[0]['ACTIONS_MSFT'], 0)
self.assertEqual(strat.realtime_data_frame.iloc[1]['ACTIONS_MSFT'], 2)
self.assertEqual(strat.realtime_data_frame.iloc[2]['ACTIONS_MSFT'], 0)
self.assertEqual(strat.realtime_data_frame.iloc[3]['ACTIONS_MSFT'], 0)
self.assertEqual(strat.realtime_data_frame.iloc[4]['ACTIONS_MSFT'], -2)
self.assertEqual(strat.realtime_data_frame.iloc[5]['ACTIONS_MSFT'], 0)
self.assertEqual(strat.realtime_data_frame.iloc[0]['STATUS_MSFT'], 0)
self.assertEqual(strat.realtime_data_frame.iloc[1]['STATUS_MSFT'], -1)
self.assertEqual(strat.realtime_data_frame.iloc[2]['STATUS_MSFT'], -1)
self.assertEqual(strat.realtime_data_frame.iloc[3]['STATUS_MSFT'], -1)
self.assertEqual(strat.realtime_data_frame.iloc[4]['STATUS_MSFT'], 0)
self.assertEqual(report_overview['trades'], 1)
self.assertEqual(report_overview['winning_trades'], 1)
self.assertEqual(report_overview['losing_trades'], 0)
self.assertEqual(report_overview['lacking_capital'], 0)
self.assertEqual(report_overview['gross_loss'], 0)
self.assertEqual(report_overview['gross_profit'], report_overview['net_profit'])
self.assertEqual(report_overview['ongoing_trades'], 0)
self.assertEqual(report_overview['average_trading_amount'], 5003.5199999999995)
self.assertEqual(report_overview['profitability'], 100.00)
def test_simple_ti_crit_strategy(self):
sma2 = technical_indicator.SMA(self.symbol.close, 2)
sma3 = technical_indicator.SMA(self.symbol.close, 3)
        self.d.add_technical_indicator(sma2)
        self.d.add_technical_indicator(sma3)
enter_crit1 = criteria.Above(sma2, sma3)
enter_crit2 = criteria.Below(sma3, sma2)
enter_crit3 = criteria.InRange(sma2, 25, 26)
enter_crit4 = criteria.CrossingAbove(sma2, sma3)
enter_crit5 = criteria.CrossingBelow(sma2, sma3)
exit_crit1 = criteria.BarsSinceLong(self.symbol, 2)
exit_crit2 = criteria.Equals(sma2, sma3)
enter_crit_group1 = criteria_group.CriteriaGroup([enter_crit1, enter_crit2], Long(), self.symbol)
enter_crit_group2 = criteria_group.CriteriaGroup([enter_crit1, enter_crit2], Short(), self.symbol)
enter_crit_group3 = criteria_group.CriteriaGroup([enter_crit3, enter_crit4, enter_crit5], Long(), self.symbol)
exit_crit_group1 = criteria_group.CriteriaGroup([exit_crit1], LongExit(), self.symbol)
exit_crit_group2 = criteria_group.CriteriaGroup([exit_crit2], LongExit(), self.symbol)
tp = trading_profile.TradingProfile(10000, trading_amount.StaticAmount(5000), trading_fee.StaticFee(0))
strat = strategy.Strategy(self.d, [enter_crit_group1, enter_crit_group2, enter_crit_group3, exit_crit_group1, exit_crit_group2], tp)
strat.simulate()
overview = strat.report.overview()
self.assertEqual(overview['trades'], 0)
def test_report(self):
enter_crit = criteria.Above(self.symbol.close, 25.88)
exit_crit = criteria.BarsSinceLong(self.symbol, 1)
enter_crit_group = criteria_group.CriteriaGroup([enter_crit], Long(), self.symbol)
exit_crit_group = criteria_group.CriteriaGroup([exit_crit], LongExit(), self.symbol)
tp = trading_profile.TradingProfile(10000, trading_amount.StaticAmount(5000), trading_fee.StaticFee(0))
strat = strategy.Strategy(self.d, [enter_crit_group, exit_crit_group], tp)
strat.simulate()
report_overview = strat.report.overview()
self.assertAlmostEqual(report_overview['net_profit'], 7.68)
self.assertAlmostEqual(report_overview['average_gains'], 0.153256704981)
enter_crit = criteria.Above(self.symbol.close, 25.88)
exit_crit = criteria.BarsSinceShort(self.symbol, 1)
enter_crit_group = criteria_group.CriteriaGroup([enter_crit], Short(), self.symbol)
exit_crit_group = criteria_group.CriteriaGroup([exit_crit], ShortExit(), self.symbol)
tp = trading_profile.TradingProfile(10000, trading_amount.StaticAmount(5000), trading_fee.StaticFee(0))
strat = strategy.Strategy(self.d, [enter_crit_group, exit_crit_group], tp)
strat.simulate()
report_overview = strat.report.overview()
self.assertAlmostEqual(report_overview['net_profit'], -7.68)
self.assertAlmostEqual(report_overview['average_gains'], -0.15325670498086685)
enter_crit = criteria.Above(self.symbol.close, 50)
exit_crit = criteria.BarsSinceLong(self.symbol, 1)
enter_crit_group = criteria_group.CriteriaGroup([enter_crit], Long(), self.symbol)
exit_crit_group = criteria_group.CriteriaGroup([exit_crit], LongExit(), self.symbol)
tp = trading_profile.TradingProfile(10000, trading_amount.StaticAmount(5000), trading_fee.StaticFee(0))
strat = strategy.Strategy(self.d, [enter_crit_group, exit_crit_group], tp)
strat.simulate()
pretty_overview = strat.report.pretty_overview()
no_trades = pretty_overview.split('\n')[0]
self.assertEqual(no_trades, 'No trades')
def test_stop_loss_strategy(self):
enter_crit = criteria.Above(self.symbol.close, 25.88)
exit_crit = criteria.StopLoss(self.symbol, -0.8)
enter_crit_group = criteria_group.CriteriaGroup([enter_crit], Long(), self.symbol)
exit_crit_group = criteria_group.CriteriaGroup([exit_crit], LongExit(), self.symbol)
tp = trading_profile.TradingProfile(10000, trading_amount.StaticAmount(5000), trading_fee.StaticFee(0))
strat = strategy.Strategy(self.d, [enter_crit_group, exit_crit_group], tp)
strat.simulate()
report_overview = strat.report.overview()
self.assertAlmostEqual(strat.realtime_data_frame.iloc[-2]['PL_MSFT'], report_overview['net_profit'])
self.assertTrue(np.isnan(strat.realtime_data_frame.iloc[0]['CHANGE_PERCENT_MSFT']))
self.assertTrue(np.isnan(strat.realtime_data_frame.iloc[-1]['CHANGE_VALUE_MSFT']))
self.assertEqual(strat.realtime_data_frame.iloc[0]['ACTIONS_MSFT'], 0)
self.assertEqual(strat.realtime_data_frame.iloc[1]['ACTIONS_MSFT'], 1)
self.assertEqual(strat.realtime_data_frame.iloc[-2]['ACTIONS_MSFT'], -1)
self.assertEqual(strat.realtime_data_frame.iloc[-1]['ACTIONS_MSFT'], 0)
self.assertEqual(strat.realtime_data_frame.iloc[0]['STATUS_MSFT'], 0)
self.assertEqual(strat.realtime_data_frame.iloc[1]['STATUS_MSFT'], 1)
self.assertEqual(strat.realtime_data_frame.iloc[-3]['STATUS_MSFT'], 1)
self.assertEqual(strat.realtime_data_frame.iloc[-2]['STATUS_MSFT'], 0)
self.assertEqual(report_overview['trades'], 1)
self.assertEqual(report_overview['winning_trades'], 0)
self.assertEqual(report_overview['losing_trades'], 1)
self.assertEqual(report_overview['lacking_capital'], 0)
self.assertEqual(report_overview['gross_loss'], report_overview['net_profit'])
self.assertEqual(report_overview['ongoing_trades'], 0)
self.assertEqual(report_overview['average_trading_amount'], 5003.5199999999995)
self.assertEqual(report_overview['profitability'], 0.0)
def test_trailing_stop_long_strategy(self):
enter_crit = criteria.Above(self.symbol.close, 25.88)
exit_crit = criteria.TrailingStop(self.symbol, -0.2)
enter_crit_group = criteria_group.CriteriaGroup([enter_crit], Long(), self.symbol)
exit_crit_group = criteria_group.CriteriaGroup([exit_crit], LongExit(), self.symbol)
tp = trading_profile.TradingProfile(10000, trading_amount.StaticAmount(5000), trading_fee.StaticFee(0))
strat = strategy.Strategy(self.d, [enter_crit_group, exit_crit_group], tp)
strat.simulate()
report_overview = strat.report.overview()
self.assertTrue(np.isnan(strat.realtime_data_frame.iloc[0]['CHANGE_PERCENT_MSFT']))
self.assertEqual(strat.realtime_data_frame.iloc[-5]['CHANGE_VALUE_MSFT'], -0.26999999999999957)
self.assertEqual(strat.realtime_data_frame.iloc[1]['CHANGE_VALUE_MSFT'], 0.40000000000000213)
self.assertEqual(strat.realtime_data_frame.iloc[2]['PL_MSFT'], 153.60000000000014)
self.assertEqual(strat.realtime_data_frame.iloc[0]['ACTIONS_MSFT'], 0)
self.assertEqual(strat.realtime_data_frame.iloc[1]['ACTIONS_MSFT'], 1)
self.assertEqual(strat.realtime_data_frame.iloc[2]['ACTIONS_MSFT'], 0)
self.assertEqual(strat.realtime_data_frame.iloc[4]['ACTIONS_MSFT'], -1)
self.assertEqual(strat.realtime_data_frame.iloc[5]['ACTIONS_MSFT'], 0)
def test_trailing_stop_short_strategy(self):
enter_crit = criteria.Above(self.symbol.close, 25.88)
exit_crit = criteria.TrailingStop(self.symbol, -0.2)
enter_crit_group = criteria_group.CriteriaGroup([enter_crit], Long(), self.symbol)
exit_crit_group = criteria_group.CriteriaGroup([exit_crit], LongExit(), self.symbol)
tp = trading_profile.TradingProfile(10000, trading_amount.StaticAmount(5000), trading_fee.StaticFee(0))
strat = strategy.Strategy(self.d, [enter_crit_group, exit_crit_group], tp)
strat.simulate()
self.assertTrue(np.isnan(strat.realtime_data_frame.iloc[0]['CHANGE_PERCENT_MSFT']))
self.assertTrue(np.isnan(strat.realtime_data_frame.iloc[-3]['CHANGE_VALUE_MSFT']))
self.assertEqual(strat.realtime_data_frame.iloc[-4]['CHANGE_VALUE_MSFT'], -0.23999999999999844)
self.assertEqual(strat.realtime_data_frame.iloc[1]['CHANGE_VALUE_MSFT'], 0.40000000000000213)
self.assertEqual(strat.realtime_data_frame.iloc[2]['PL_MSFT'], 153.60000000000014)
self.assertEqual(strat.realtime_data_frame.iloc[3]['CHANGE_PERCENT_MSFT'], -0.01036070606293168)
self.assertEqual(strat.realtime_data_frame.iloc[0]['ACTIONS_MSFT'], 0)
self.assertEqual(strat.realtime_data_frame.iloc[1]['ACTIONS_MSFT'], 1)
self.assertEqual(strat.realtime_data_frame.iloc[2]['ACTIONS_MSFT'], 0)
self.assertEqual(strat.realtime_data_frame.iloc[3]['ACTIONS_MSFT'], 0)
self.assertEqual(strat.realtime_data_frame.iloc[4]['ACTIONS_MSFT'], -1)
        # Not properly implemented yet
self.assertTrue(np.isnan(strat.report.get_sharpe_ratio(benchmark=5)))
self.assertTrue(np.isnan(strat.report.get_sharpe_ratio(benchmark=pd.Series())))
def test_upcoming_action(self):
enter_crit = criteria.Above(self.symbol.close, 25.88)
exit_crit = criteria.Equals(self.symbol.close, 25.00)
enter_crit_group = criteria_group.CriteriaGroup([enter_crit], Short(), self.symbol)
exit_crit_group = criteria_group.CriteriaGroup([exit_crit], ShortExit(), self.symbol)
tp = trading_profile.TradingProfile(10000, trading_amount.StaticAmount(5000), trading_fee.StaticFee(0))
strat = strategy.Strategy(self.d, [enter_crit_group, exit_crit_group], tp)
strat.simulate()
next_action = strat.get_next_action()[self.symbol]
self.assertTrue(self.symbol in strat.upcoming_actions)
self.assertEqual(strat.upcoming_actions[self.symbol], SHORT_EXIT)
self.assertEqual(next_action['estimated_money_required'], 5000.8699999999999)
self.assertEqual(next_action['estimated_enter_value'], 25.129999999999999)
self.assertEqual(next_action['action_name'], 'SHORT_EXIT')
self.assertEqual(next_action['estimated_shares'], 199.0)
self.assertEqual(next_action['action'], SHORT_EXIT)
self.assertEqual(next_action['enter_on'], 'OPEN')
if __name__ == "__main__":
unittest.main()
| 72.423237 | 696 | 0.726195 | 2,240 | 17,454 | 5.398214 | 0.094643 | 0.102961 | 0.082947 | 0.107344 | 0.787545 | 0.751902 | 0.73958 | 0.709395 | 0.708237 | 0.704681 | 0 | 0.048982 | 0.144952 | 17,454 | 240 | 697 | 72.725 | 0.761257 | 0.001547 | 0 | 0.550218 | 0 | 0.0131 | 0.152884 | 0.040689 | 0 | 0 | 0 | 0 | 0.462882 | 1 | 0.039301 | false | 0 | 0.034935 | 0 | 0.078603 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
64c0936b28219955645584623eac7973a94811eb | 6,974 | py | Python | imbd/imbd_data_gen.py | worldwidekatie/GPT3_Synthetic | 722ceb0b931873ea1b835dd291028d25b775adeb | [
"MIT"
] | 1 | 2022-02-28T19:49:20.000Z | 2022-02-28T19:49:20.000Z | imbd/imbd_data_gen.py | worldwidekatie/GPT3_Synthetic | 722ceb0b931873ea1b835dd291028d25b775adeb | [
"MIT"
] | null | null | null | imbd/imbd_data_gen.py | worldwidekatie/GPT3_Synthetic | 722ceb0b931873ea1b835dd291028d25b775adeb | [
"MIT"
] | 1 | 2022-03-06T06:40:42.000Z | 2022-03-06T06:40:42.000Z | import pandas as pd
import openai
openai.api_key = ""
pos_ = pd.read_csv("imbd/data/pos_seed_05.csv")
pos_df = list(pos_['text'])
neg_ = pd.read_csv("imbd/data/neg_seed_05.csv")
neg_df = list(neg_['text'])
def gen_data(df_column, sentiment, shots, engine="ada"):
    """Generate synthetic reviews from a k-shot prompt built out of seed reviews."""
    data = []
    for i in range(len(df_column)):
        # Chain the current review and the (shots - 1) preceding ones;
        # negative indices wrap around to the end of the seed list.
        examples = " REVIEW: ".join(df_column[i - j] for j in range(shots))
        prompt = f"The following are movie reviews with a {sentiment} sentiment. REVIEW: {examples} REVIEW:"
        response = openai.Completion.create(engine=engine, prompt=prompt,
                                            max_tokens=75, temperature=.8, top_p=1, n=20,
                                            stream=False, stop="REVIEW:")
        for choice in response['choices']:  # don't shadow the outer loop index
            data.append(choice['text'])
    return data
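# Offline sketch of the prompt shape (illustration only, no API call): a k-shot
# prompt chains k seed reviews, each introduced by "REVIEW:", and ends with a
# dangling "REVIEW:" for the model to complete.
def example_prompt(reviews, sentiment, shots, i=0):
    body = " REVIEW: ".join(reviews[i - j] for j in range(shots))
    return f"The following are movie reviews with a {sentiment} sentiment. REVIEW: {body} REVIEW:"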
# Generate and save synthetic positive/negative sets for 1-5 shots on both engines.
for shots in range(1, 6):
    for engine in ("ada", "davinci"):
        pos = pd.DataFrame(gen_data(pos_df, 'positive', shots, engine=engine), columns=["text"])
        pos['label'] = 1
        pos.to_csv(f"imbd/data/{shots}shot_pos_05_{engine}.csv", index=False)
        neg = pd.DataFrame(gen_data(neg_df, 'negative', shots, engine=engine), columns=["text"])
        neg['label'] = 0
        neg.to_csv(f"imbd/data/{shots}shot_neg_05_{engine}.csv", index=False)
        df = pd.concat([pos, neg])
        df.to_csv(f'imbd/data/{shots}shot_05_train_{engine}.csv') | 39.851429 | 216 | 0.639805 | 1,078 | 6,974 | 3.95269 | 0.079777 | 0.05257 | 0.08261 | 0.091528 | 0.925135 | 0.917156 | 0.917156 | 0.917156 | 0.917156 | 0.914809 | 0 | 0.031681 | 0.189848 | 6,974 | 175 | 217 | 39.851429 | 0.722478 | 0.022512 | 0 | 0.436508 | 0 | 0.039683 | 0.330689 | 0.151315 | 0 | 0 | 0 | 0 | 0 | 1 | 0.007937 | false | 0 | 0.015873 | 0 | 0.031746 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
64d2d5323e772185375bcaaae1150b7ea851cb54 | 44 | py | Python | fsfc/text/__init__.py | Ryuusei-Aika/fsfc | d26abe6376f119e6097fb009c570d48eba7036e3 | [
"MIT"
] | 64 | 2018-04-21T08:16:39.000Z | 2022-03-20T19:38:08.000Z | fsfc/text/__init__.py | Ryuusei-Aika/fsfc | d26abe6376f119e6097fb009c570d48eba7036e3 | [
"MIT"
] | 5 | 2020-10-08T07:34:45.000Z | 2021-07-09T15:07:26.000Z | fsfc/text/__init__.py | Ryuusei-Aika/fsfc | d26abe6376f119e6097fb009c570d48eba7036e3 | [
"MIT"
] | 23 | 2018-12-01T19:16:49.000Z | 2021-12-27T01:20:10.000Z | from .CHIR import CHIR
from .FTC import FTC
| 14.666667 | 22 | 0.772727 | 8 | 44 | 4.25 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 44 | 2 | 23 | 22 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b3f5bc33632bca93238c74e4067114e70088f6e6 | 1,445 | py | Python | IKtest.py | wsdsgqc/Robot_Code | c825963f53cadca552abf7a0dc3058b87c05b96f | [
"MIT"
] | null | null | null | IKtest.py | wsdsgqc/Robot_Code | c825963f53cadca552abf7a0dc3058b87c05b96f | [
"MIT"
] | null | null | null | IKtest.py | wsdsgqc/Robot_Code | c825963f53cadca552abf7a0dc3058b87c05b96f | [
"MIT"
] | null | null | null | #Ik test
import math
def rad2deg(rad):
return rad*(180/math.pi)
def deg2rad(deg):
return deg*(math.pi/180)
# Scheme 1
# def Ik(L1, L2, x0, y0, select):  # x, y are in the usual coordinate frame (not a y = -y frame)
#     if L1 + L2 < math.sqrt(x0**2 + y0**2):  # target out of reach
#         return 0, 0, 0, 0
#     seta2 = math.acos(((x0**2 + y0**2) - L1**2 - L2**2)/(2*L1*L2))  # acos returns a value in [0, pi]
#     beta = math.atan2(y0, x0)  # note: atan2 takes y first, then x
#     pesai = math.acos((x0**2 + y0**2 + L1**2 - L2**2)/(2*L1*math.sqrt(x0**2 + y0**2)))
#     if select == 1:
#         seta1 = beta + pesai
#         return seta1, -seta2, beta, pesai
#     elif select == 2:
#         seta1 = beta - pesai
#         return seta1, seta2, beta, pesai
# seta1, seta2, beta, pesai = Ik(100, 100, 170, 70, 1)
# Scheme 2
def Ik(L1, L2, x0, y0, select):  # x, y are in the usual coordinate frame (not a y = -y frame)
    if L1 + L2 < math.sqrt(x0**2 + y0**2):  # target out of reach
        return 0, 0, 0, 0
    seta2 = math.acos((L1**2 + L2**2 - (x0**2 + y0**2))/(2*L1*L2))  # acos returns a value in [0, pi]
    seta2 = math.pi - seta2
    beta = math.atan2(y0, x0)  # note: atan2 takes y first, then x
    pesai = math.acos((x0**2 + y0**2 + L1**2 - L2**2)/(2*L1*math.sqrt(x0**2 + y0**2)))
    if select == 1:
        seta1 = beta + pesai
        return seta1, -seta2, beta, pesai
    elif select == 2:
        seta1 = beta - pesai
        return seta1, seta2, beta, pesai
seta1, seta2, beta, pesai = Ik(100, 100, 170, 70, 2)
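# Added sketch (not in the original): a forward-kinematics helper to cross-check
# the IK solution. Feeding the solved angles back in should recover the target
# point, assuming seta2 is measured relative to link 1 as in the second scheme above.
def fk(L1, L2, t1, t2):
    # planar two-link forward kinematics
    return (L1 * math.cos(t1) + L2 * math.cos(t1 + t2),
            L1 * math.sin(t1) + L2 * math.sin(t1 + t2))
# e.g. fk(100, 100, seta1, seta2) should return approximately (170.0, 70.0)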
print("seta1:"+str(rad2deg(seta1)))
print("seta2:"+str(rad2deg(seta2)))
print("beta:"+str(rad2deg(beta)))
print("pesai:"+str(rad2deg(pesai))) | 33.604651 | 82 | 0.570242 | 252 | 1,445 | 3.269841 | 0.178571 | 0.109223 | 0.048544 | 0.058252 | 0.743932 | 0.743932 | 0.743932 | 0.743932 | 0.743932 | 0.743932 | 0 | 0.131763 | 0.20692 | 1,445 | 43 | 83 | 33.604651 | 0.58726 | 0.746713 | 0 | 0 | 0 | 0 | 0.069277 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.1 | 0.2 | 0.5 | 0.4 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 6 |
b610a74ee76db6eaef1b37b5185686f929679fec | 44 | py | Python | src/slizzy/util/logging/__init__.py | matheushsouza/slizzy | f224b8e4621d11031315da9178202781b4a2dcef | [
"BSD-3-Clause"
] | 1 | 2019-12-24T03:08:12.000Z | 2019-12-24T03:08:12.000Z | src/slizzy/util/logging/__init__.py | matheushsouza/slizzy | f224b8e4621d11031315da9178202781b4a2dcef | [
"BSD-3-Clause"
] | null | null | null | src/slizzy/util/logging/__init__.py | matheushsouza/slizzy | f224b8e4621d11031315da9178202781b4a2dcef | [
"BSD-3-Clause"
] | null | null | null | from . import level
from .log import Logger
| 14.666667 | 23 | 0.772727 | 7 | 44 | 4.857143 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 44 | 2 | 24 | 22 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
375a387eb2731232c1246782e1f04f6c14a6677a | 36 | py | Python | bms_receiver/DataAnalyzers/__init__.py | clean-code-craft-tcq-1/stream-bms-data-Aruna1396 | bf7c185966faeb8ff9ac98fe91e99d4f8152fef3 | [
"MIT"
] | null | null | null | bms_receiver/DataAnalyzers/__init__.py | clean-code-craft-tcq-1/stream-bms-data-Aruna1396 | bf7c185966faeb8ff9ac98fe91e99d4f8152fef3 | [
"MIT"
] | null | null | null | bms_receiver/DataAnalyzers/__init__.py | clean-code-craft-tcq-1/stream-bms-data-Aruna1396 | bf7c185966faeb8ff9ac98fe91e99d4f8152fef3 | [
"MIT"
] | null | null | null | from .SimpleStats import SimpleStats | 36 | 36 | 0.888889 | 4 | 36 | 8 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 36 | 1 | 36 | 36 | 0.969697 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3760052fe4a3e1bc0ad3ae68811e02551538a63f | 76,698 | py | Python | cottonformation/res/dynamodb.py | MacHu-GWU/cottonformation-project | 23e28c08cfb5a7cc0db6dbfdb1d7e1585c773f3b | [
"BSD-2-Clause"
] | 5 | 2021-07-22T03:45:59.000Z | 2021-12-17T21:07:14.000Z | cottonformation/res/dynamodb.py | MacHu-GWU/cottonformation-project | 23e28c08cfb5a7cc0db6dbfdb1d7e1585c773f3b | [
"BSD-2-Clause"
] | 1 | 2021-06-25T18:01:31.000Z | 2021-06-25T18:01:31.000Z | cottonformation/res/dynamodb.py | MacHu-GWU/cottonformation-project | 23e28c08cfb5a7cc0db6dbfdb1d7e1585c773f3b | [
"BSD-2-Clause"
] | 2 | 2021-06-27T03:08:21.000Z | 2021-06-28T22:15:51.000Z | # -*- coding: utf-8 -*-
"""
This module declares the CloudFormation ``Property`` and ``Resource`` classes for the ``AWS::DynamoDB`` service.
"""
import attr
import typing
from ..core.model import (
Property, Resource, Tag, GetAtt, TypeHint, TypeCheck,
)
from ..core.constant import AttrMeta
#--- Property declaration ---
@attr.s
class PropGlobalTablePointInTimeRecoverySpecification(Property):
"""
AWS Object Type = "AWS::DynamoDB::GlobalTable.PointInTimeRecoverySpecification"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-pointintimerecoveryspecification.html
Property Document:
- ``p_PointInTimeRecoveryEnabled``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-pointintimerecoveryspecification.html#cfn-dynamodb-globaltable-pointintimerecoveryspecification-pointintimerecoveryenabled
"""
AWS_OBJECT_TYPE = "AWS::DynamoDB::GlobalTable.PointInTimeRecoverySpecification"
p_PointInTimeRecoveryEnabled: bool = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(bool)),
metadata={AttrMeta.PROPERTY_NAME: "PointInTimeRecoveryEnabled"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-pointintimerecoveryspecification.html#cfn-dynamodb-globaltable-pointintimerecoveryspecification-pointintimerecoveryenabled"""
@attr.s
class PropTablePointInTimeRecoverySpecification(Property):
"""
AWS Object Type = "AWS::DynamoDB::Table.PointInTimeRecoverySpecification"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-table-pointintimerecoveryspecification.html
Property Document:
- ``p_PointInTimeRecoveryEnabled``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-table-pointintimerecoveryspecification.html#cfn-dynamodb-table-pointintimerecoveryspecification-pointintimerecoveryenabled
"""
AWS_OBJECT_TYPE = "AWS::DynamoDB::Table.PointInTimeRecoverySpecification"
p_PointInTimeRecoveryEnabled: bool = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(bool)),
metadata={AttrMeta.PROPERTY_NAME: "PointInTimeRecoveryEnabled"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-table-pointintimerecoveryspecification.html#cfn-dynamodb-table-pointintimerecoveryspecification-pointintimerecoveryenabled"""
@attr.s
class PropTableKinesisStreamSpecification(Property):
"""
AWS Object Type = "AWS::DynamoDB::Table.KinesisStreamSpecification"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-kinesisstreamspecification.html
Property Document:
- ``rp_StreamArn``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-kinesisstreamspecification.html#cfn-dynamodb-kinesisstreamspecification-streamarn
"""
AWS_OBJECT_TYPE = "AWS::DynamoDB::Table.KinesisStreamSpecification"
rp_StreamArn: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "StreamArn"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-kinesisstreamspecification.html#cfn-dynamodb-kinesisstreamspecification-streamarn"""
@attr.s
class PropGlobalTableContributorInsightsSpecification(Property):
"""
AWS Object Type = "AWS::DynamoDB::GlobalTable.ContributorInsightsSpecification"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-contributorinsightsspecification.html
Property Document:
- ``rp_Enabled``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-contributorinsightsspecification.html#cfn-dynamodb-globaltable-contributorinsightsspecification-enabled
"""
AWS_OBJECT_TYPE = "AWS::DynamoDB::GlobalTable.ContributorInsightsSpecification"
rp_Enabled: bool = attr.ib(
default=None,
validator=attr.validators.instance_of(bool),
metadata={AttrMeta.PROPERTY_NAME: "Enabled"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-contributorinsightsspecification.html#cfn-dynamodb-globaltable-contributorinsightsspecification-enabled"""
@attr.s
class PropTableAttributeDefinition(Property):
"""
AWS Object Type = "AWS::DynamoDB::Table.AttributeDefinition"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-attributedef.html
Property Document:
- ``rp_AttributeName``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-attributedef.html#cfn-dynamodb-attributedef-attributename
- ``rp_AttributeType``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-attributedef.html#cfn-dynamodb-attributedef-attributename-attributetype
"""
AWS_OBJECT_TYPE = "AWS::DynamoDB::Table.AttributeDefinition"
rp_AttributeName: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "AttributeName"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-attributedef.html#cfn-dynamodb-attributedef-attributename"""
rp_AttributeType: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "AttributeType"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-attributedef.html#cfn-dynamodb-attributedef-attributename-attributetype"""
@attr.s
class PropTableContributorInsightsSpecification(Property):
"""
AWS Object Type = "AWS::DynamoDB::Table.ContributorInsightsSpecification"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-contributorinsightsspecification.html
Property Document:
- ``rp_Enabled``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-contributorinsightsspecification.html#cfn-dynamodb-contributorinsightsspecification-enabled
"""
AWS_OBJECT_TYPE = "AWS::DynamoDB::Table.ContributorInsightsSpecification"
rp_Enabled: bool = attr.ib(
default=None,
validator=attr.validators.instance_of(bool),
metadata={AttrMeta.PROPERTY_NAME: "Enabled"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-contributorinsightsspecification.html#cfn-dynamodb-contributorinsightsspecification-enabled"""
@attr.s
class PropTableKeySchema(Property):
"""
AWS Object Type = "AWS::DynamoDB::Table.KeySchema"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-keyschema.html
Property Document:
- ``rp_AttributeName``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-keyschema.html#aws-properties-dynamodb-keyschema-attributename
- ``rp_KeyType``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-keyschema.html#aws-properties-dynamodb-keyschema-keytype
"""
AWS_OBJECT_TYPE = "AWS::DynamoDB::Table.KeySchema"
rp_AttributeName: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "AttributeName"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-keyschema.html#aws-properties-dynamodb-keyschema-attributename"""
rp_KeyType: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "KeyType"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-keyschema.html#aws-properties-dynamodb-keyschema-keytype"""
@attr.s
class PropGlobalTableTargetTrackingScalingPolicyConfiguration(Property):
"""
AWS Object Type = "AWS::DynamoDB::GlobalTable.TargetTrackingScalingPolicyConfiguration"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-targettrackingscalingpolicyconfiguration.html
Property Document:
- ``rp_TargetValue``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-targettrackingscalingpolicyconfiguration.html#cfn-dynamodb-globaltable-targettrackingscalingpolicyconfiguration-targetvalue
- ``p_DisableScaleIn``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-targettrackingscalingpolicyconfiguration.html#cfn-dynamodb-globaltable-targettrackingscalingpolicyconfiguration-disablescalein
- ``p_ScaleInCooldown``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-targettrackingscalingpolicyconfiguration.html#cfn-dynamodb-globaltable-targettrackingscalingpolicyconfiguration-scaleincooldown
- ``p_ScaleOutCooldown``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-targettrackingscalingpolicyconfiguration.html#cfn-dynamodb-globaltable-targettrackingscalingpolicyconfiguration-scaleoutcooldown
"""
AWS_OBJECT_TYPE = "AWS::DynamoDB::GlobalTable.TargetTrackingScalingPolicyConfiguration"
rp_TargetValue: float = attr.ib(
default=None,
validator=attr.validators.instance_of(float),
metadata={AttrMeta.PROPERTY_NAME: "TargetValue"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-targettrackingscalingpolicyconfiguration.html#cfn-dynamodb-globaltable-targettrackingscalingpolicyconfiguration-targetvalue"""
p_DisableScaleIn: bool = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(bool)),
metadata={AttrMeta.PROPERTY_NAME: "DisableScaleIn"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-targettrackingscalingpolicyconfiguration.html#cfn-dynamodb-globaltable-targettrackingscalingpolicyconfiguration-disablescalein"""
p_ScaleInCooldown: int = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(int)),
metadata={AttrMeta.PROPERTY_NAME: "ScaleInCooldown"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-targettrackingscalingpolicyconfiguration.html#cfn-dynamodb-globaltable-targettrackingscalingpolicyconfiguration-scaleincooldown"""
p_ScaleOutCooldown: int = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(int)),
metadata={AttrMeta.PROPERTY_NAME: "ScaleOutCooldown"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-targettrackingscalingpolicyconfiguration.html#cfn-dynamodb-globaltable-targettrackingscalingpolicyconfiguration-scaleoutcooldown"""
@attr.s
class PropGlobalTableKeySchema(Property):
"""
AWS Object Type = "AWS::DynamoDB::GlobalTable.KeySchema"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-keyschema.html
Property Document:
- ``rp_AttributeName``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-keyschema.html#cfn-dynamodb-globaltable-keyschema-attributename
- ``rp_KeyType``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-keyschema.html#cfn-dynamodb-globaltable-keyschema-keytype
"""
AWS_OBJECT_TYPE = "AWS::DynamoDB::GlobalTable.KeySchema"
rp_AttributeName: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "AttributeName"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-keyschema.html#cfn-dynamodb-globaltable-keyschema-attributename"""
rp_KeyType: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "KeyType"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-keyschema.html#cfn-dynamodb-globaltable-keyschema-keytype"""
@attr.s
class PropGlobalTableStreamSpecification(Property):
"""
AWS Object Type = "AWS::DynamoDB::GlobalTable.StreamSpecification"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-streamspecification.html
Property Document:
- ``rp_StreamViewType``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-streamspecification.html#cfn-dynamodb-globaltable-streamspecification-streamviewtype
"""
AWS_OBJECT_TYPE = "AWS::DynamoDB::GlobalTable.StreamSpecification"
rp_StreamViewType: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "StreamViewType"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-streamspecification.html#cfn-dynamodb-globaltable-streamspecification-streamviewtype"""
@attr.s
class PropGlobalTableProjection(Property):
"""
AWS Object Type = "AWS::DynamoDB::GlobalTable.Projection"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-projection.html
Property Document:
- ``p_NonKeyAttributes``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-projection.html#cfn-dynamodb-globaltable-projection-nonkeyattributes
- ``p_ProjectionType``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-projection.html#cfn-dynamodb-globaltable-projection-projectiontype
"""
AWS_OBJECT_TYPE = "AWS::DynamoDB::GlobalTable.Projection"
p_NonKeyAttributes: typing.List[TypeHint.intrinsic_str] = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type), iterable_validator=attr.validators.instance_of(list))),
metadata={AttrMeta.PROPERTY_NAME: "NonKeyAttributes"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-projection.html#cfn-dynamodb-globaltable-projection-nonkeyattributes"""
p_ProjectionType: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "ProjectionType"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-projection.html#cfn-dynamodb-globaltable-projection-projectiontype"""
@attr.s
class PropGlobalTableAttributeDefinition(Property):
"""
AWS Object Type = "AWS::DynamoDB::GlobalTable.AttributeDefinition"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-attributedefinition.html
Property Document:
- ``rp_AttributeName``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-attributedefinition.html#cfn-dynamodb-globaltable-attributedefinition-attributename
- ``rp_AttributeType``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-attributedefinition.html#cfn-dynamodb-globaltable-attributedefinition-attributetype
"""
AWS_OBJECT_TYPE = "AWS::DynamoDB::GlobalTable.AttributeDefinition"
rp_AttributeName: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "AttributeName"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-attributedefinition.html#cfn-dynamodb-globaltable-attributedefinition-attributename"""
rp_AttributeType: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "AttributeType"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-attributedefinition.html#cfn-dynamodb-globaltable-attributedefinition-attributetype"""
@attr.s
class PropGlobalTableSSESpecification(Property):
"""
AWS Object Type = "AWS::DynamoDB::GlobalTable.SSESpecification"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-ssespecification.html
Property Document:
- ``rp_SSEEnabled``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-ssespecification.html#cfn-dynamodb-globaltable-ssespecification-sseenabled
- ``p_SSEType``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-ssespecification.html#cfn-dynamodb-globaltable-ssespecification-ssetype
"""
AWS_OBJECT_TYPE = "AWS::DynamoDB::GlobalTable.SSESpecification"
rp_SSEEnabled: bool = attr.ib(
default=None,
validator=attr.validators.instance_of(bool),
metadata={AttrMeta.PROPERTY_NAME: "SSEEnabled"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-ssespecification.html#cfn-dynamodb-globaltable-ssespecification-sseenabled"""
p_SSEType: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "SSEType"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-ssespecification.html#cfn-dynamodb-globaltable-ssespecification-ssetype"""
@attr.s
class PropTableSSESpecification(Property):
    """
    AWS Object Type = "AWS::DynamoDB::Table.SSESpecification"
    Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-table-ssespecification.html
    Property Document:
    - ``rp_SSEEnabled``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-table-ssespecification.html#cfn-dynamodb-table-ssespecification-sseenabled
    - ``p_KMSMasterKeyId``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-table-ssespecification.html#cfn-dynamodb-table-ssespecification-kmsmasterkeyid
    - ``p_SSEType``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-table-ssespecification.html#cfn-dynamodb-table-ssespecification-ssetype
    """
    AWS_OBJECT_TYPE = "AWS::DynamoDB::Table.SSESpecification"
    rp_SSEEnabled: bool = attr.ib(
        default=None,
        validator=attr.validators.instance_of(bool),
        metadata={AttrMeta.PROPERTY_NAME: "SSEEnabled"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-table-ssespecification.html#cfn-dynamodb-table-ssespecification-sseenabled"""
    p_KMSMasterKeyId: TypeHint.intrinsic_str = attr.ib(
        default=None,
        validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
        metadata={AttrMeta.PROPERTY_NAME: "KMSMasterKeyId"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-table-ssespecification.html#cfn-dynamodb-table-ssespecification-kmsmasterkeyid"""
    p_SSEType: TypeHint.intrinsic_str = attr.ib(
        default=None,
        validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
        metadata={AttrMeta.PROPERTY_NAME: "SSEType"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-table-ssespecification.html#cfn-dynamodb-table-ssespecification-ssetype"""
@attr.s
class PropTableTimeToLiveSpecification(Property):
    """
    AWS Object Type = "AWS::DynamoDB::Table.TimeToLiveSpecification"
    Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-timetolivespecification.html
    Property Document:
    - ``rp_AttributeName``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-timetolivespecification.html#cfn-dynamodb-timetolivespecification-attributename
    - ``rp_Enabled``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-timetolivespecification.html#cfn-dynamodb-timetolivespecification-enabled
    """
    AWS_OBJECT_TYPE = "AWS::DynamoDB::Table.TimeToLiveSpecification"
    rp_AttributeName: TypeHint.intrinsic_str = attr.ib(
        default=None,
        validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
        metadata={AttrMeta.PROPERTY_NAME: "AttributeName"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-timetolivespecification.html#cfn-dynamodb-timetolivespecification-attributename"""
    rp_Enabled: bool = attr.ib(
        default=None,
        validator=attr.validators.instance_of(bool),
        metadata={AttrMeta.PROPERTY_NAME: "Enabled"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-timetolivespecification.html#cfn-dynamodb-timetolivespecification-enabled"""
@attr.s
class PropTableProvisionedThroughput(Property):
    """
    AWS Object Type = "AWS::DynamoDB::Table.ProvisionedThroughput"
    Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-provisionedthroughput.html
    Property Document:
    - ``rp_ReadCapacityUnits``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-provisionedthroughput.html#cfn-dynamodb-provisionedthroughput-readcapacityunits
    - ``rp_WriteCapacityUnits``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-provisionedthroughput.html#cfn-dynamodb-provisionedthroughput-writecapacityunits
    """
    AWS_OBJECT_TYPE = "AWS::DynamoDB::Table.ProvisionedThroughput"
    rp_ReadCapacityUnits: int = attr.ib(
        default=None,
        validator=attr.validators.instance_of(int),
        metadata={AttrMeta.PROPERTY_NAME: "ReadCapacityUnits"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-provisionedthroughput.html#cfn-dynamodb-provisionedthroughput-readcapacityunits"""
    rp_WriteCapacityUnits: int = attr.ib(
        default=None,
        validator=attr.validators.instance_of(int),
        metadata={AttrMeta.PROPERTY_NAME: "WriteCapacityUnits"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-provisionedthroughput.html#cfn-dynamodb-provisionedthroughput-writecapacityunits"""
@attr.s
class PropTableProjection(Property):
    """
    AWS Object Type = "AWS::DynamoDB::Table.Projection"
    Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-projectionobject.html
    Property Document:
    - ``p_NonKeyAttributes``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-projectionobject.html#cfn-dynamodb-projectionobj-nonkeyatt
    - ``p_ProjectionType``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-projectionobject.html#cfn-dynamodb-projectionobj-projtype
    """
    AWS_OBJECT_TYPE = "AWS::DynamoDB::Table.Projection"
    p_NonKeyAttributes: typing.List[TypeHint.intrinsic_str] = attr.ib(
        default=None,
        validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type), iterable_validator=attr.validators.instance_of(list))),
        metadata={AttrMeta.PROPERTY_NAME: "NonKeyAttributes"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-projectionobject.html#cfn-dynamodb-projectionobj-nonkeyatt"""
    p_ProjectionType: TypeHint.intrinsic_str = attr.ib(
        default=None,
        validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
        metadata={AttrMeta.PROPERTY_NAME: "ProjectionType"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-projectionobject.html#cfn-dynamodb-projectionobj-projtype"""
@attr.s
class PropGlobalTableTimeToLiveSpecification(Property):
    """
    AWS Object Type = "AWS::DynamoDB::GlobalTable.TimeToLiveSpecification"
    Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-timetolivespecification.html
    Property Document:
    - ``rp_Enabled``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-timetolivespecification.html#cfn-dynamodb-globaltable-timetolivespecification-enabled
    - ``p_AttributeName``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-timetolivespecification.html#cfn-dynamodb-globaltable-timetolivespecification-attributename
    """
    AWS_OBJECT_TYPE = "AWS::DynamoDB::GlobalTable.TimeToLiveSpecification"
    rp_Enabled: bool = attr.ib(
        default=None,
        validator=attr.validators.instance_of(bool),
        metadata={AttrMeta.PROPERTY_NAME: "Enabled"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-timetolivespecification.html#cfn-dynamodb-globaltable-timetolivespecification-enabled"""
    p_AttributeName: TypeHint.intrinsic_str = attr.ib(
        default=None,
        validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
        metadata={AttrMeta.PROPERTY_NAME: "AttributeName"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-timetolivespecification.html#cfn-dynamodb-globaltable-timetolivespecification-attributename"""
@attr.s
class PropGlobalTableReplicaSSESpecification(Property):
    """
    AWS Object Type = "AWS::DynamoDB::GlobalTable.ReplicaSSESpecification"
    Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-replicassespecification.html
    Property Document:
    - ``rp_KMSMasterKeyId``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-replicassespecification.html#cfn-dynamodb-globaltable-replicassespecification-kmsmasterkeyid
    """
    AWS_OBJECT_TYPE = "AWS::DynamoDB::GlobalTable.ReplicaSSESpecification"
    rp_KMSMasterKeyId: TypeHint.intrinsic_str = attr.ib(
        default=None,
        validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
        metadata={AttrMeta.PROPERTY_NAME: "KMSMasterKeyId"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-replicassespecification.html#cfn-dynamodb-globaltable-replicassespecification-kmsmasterkeyid"""
@attr.s
class PropTableStreamSpecification(Property):
    """
    AWS Object Type = "AWS::DynamoDB::Table.StreamSpecification"
    Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-streamspecification.html
    Property Document:
    - ``rp_StreamViewType``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-streamspecification.html#cfn-dynamodb-streamspecification-streamviewtype
    """
    AWS_OBJECT_TYPE = "AWS::DynamoDB::Table.StreamSpecification"
    rp_StreamViewType: TypeHint.intrinsic_str = attr.ib(
        default=None,
        validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
        metadata={AttrMeta.PROPERTY_NAME: "StreamViewType"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-streamspecification.html#cfn-dynamodb-streamspecification-streamviewtype"""
@attr.s
class PropTableLocalSecondaryIndex(Property):
    """
    AWS Object Type = "AWS::DynamoDB::Table.LocalSecondaryIndex"
    Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-lsi.html
    Property Document:
    - ``rp_IndexName``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-lsi.html#cfn-dynamodb-lsi-indexname
    - ``rp_KeySchema``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-lsi.html#cfn-dynamodb-lsi-keyschema
    - ``rp_Projection``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-lsi.html#cfn-dynamodb-lsi-projection
    """
    AWS_OBJECT_TYPE = "AWS::DynamoDB::Table.LocalSecondaryIndex"
    rp_IndexName: TypeHint.intrinsic_str = attr.ib(
        default=None,
        validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
        metadata={AttrMeta.PROPERTY_NAME: "IndexName"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-lsi.html#cfn-dynamodb-lsi-indexname"""
    rp_KeySchema: typing.List[typing.Union['PropTableKeySchema', dict]] = attr.ib(
        default=None,
        converter=PropTableKeySchema.from_list,
        validator=attr.validators.deep_iterable(member_validator=attr.validators.instance_of(PropTableKeySchema), iterable_validator=attr.validators.instance_of(list)),
        metadata={AttrMeta.PROPERTY_NAME: "KeySchema"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-lsi.html#cfn-dynamodb-lsi-keyschema"""
    rp_Projection: typing.Union['PropTableProjection', dict] = attr.ib(
        default=None,
        converter=PropTableProjection.from_dict,
        validator=attr.validators.instance_of(PropTableProjection),
        metadata={AttrMeta.PROPERTY_NAME: "Projection"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-lsi.html#cfn-dynamodb-lsi-projection"""
@attr.s
class PropGlobalTableCapacityAutoScalingSettings(Property):
    """
    AWS Object Type = "AWS::DynamoDB::GlobalTable.CapacityAutoScalingSettings"
    Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-capacityautoscalingsettings.html
    Property Document:
    - ``rp_MaxCapacity``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-capacityautoscalingsettings.html#cfn-dynamodb-globaltable-capacityautoscalingsettings-maxcapacity
    - ``rp_MinCapacity``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-capacityautoscalingsettings.html#cfn-dynamodb-globaltable-capacityautoscalingsettings-mincapacity
    - ``rp_TargetTrackingScalingPolicyConfiguration``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-capacityautoscalingsettings.html#cfn-dynamodb-globaltable-capacityautoscalingsettings-targettrackingscalingpolicyconfiguration
    - ``p_SeedCapacity``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-capacityautoscalingsettings.html#cfn-dynamodb-globaltable-capacityautoscalingsettings-seedcapacity
    """
    AWS_OBJECT_TYPE = "AWS::DynamoDB::GlobalTable.CapacityAutoScalingSettings"
    rp_MaxCapacity: int = attr.ib(
        default=None,
        validator=attr.validators.instance_of(int),
        metadata={AttrMeta.PROPERTY_NAME: "MaxCapacity"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-capacityautoscalingsettings.html#cfn-dynamodb-globaltable-capacityautoscalingsettings-maxcapacity"""
    rp_MinCapacity: int = attr.ib(
        default=None,
        validator=attr.validators.instance_of(int),
        metadata={AttrMeta.PROPERTY_NAME: "MinCapacity"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-capacityautoscalingsettings.html#cfn-dynamodb-globaltable-capacityautoscalingsettings-mincapacity"""
    rp_TargetTrackingScalingPolicyConfiguration: typing.Union['PropGlobalTableTargetTrackingScalingPolicyConfiguration', dict] = attr.ib(
        default=None,
        converter=PropGlobalTableTargetTrackingScalingPolicyConfiguration.from_dict,
        validator=attr.validators.instance_of(PropGlobalTableTargetTrackingScalingPolicyConfiguration),
        metadata={AttrMeta.PROPERTY_NAME: "TargetTrackingScalingPolicyConfiguration"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-capacityautoscalingsettings.html#cfn-dynamodb-globaltable-capacityautoscalingsettings-targettrackingscalingpolicyconfiguration"""
    p_SeedCapacity: int = attr.ib(
        default=None,
        validator=attr.validators.optional(attr.validators.instance_of(int)),
        metadata={AttrMeta.PROPERTY_NAME: "SeedCapacity"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-capacityautoscalingsettings.html#cfn-dynamodb-globaltable-capacityautoscalingsettings-seedcapacity"""
@attr.s
class PropGlobalTableReadProvisionedThroughputSettings(Property):
    """
    AWS Object Type = "AWS::DynamoDB::GlobalTable.ReadProvisionedThroughputSettings"
    Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-readprovisionedthroughputsettings.html
    Property Document:
    - ``p_ReadCapacityAutoScalingSettings``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-readprovisionedthroughputsettings.html#cfn-dynamodb-globaltable-readprovisionedthroughputsettings-readcapacityautoscalingsettings
    - ``p_ReadCapacityUnits``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-readprovisionedthroughputsettings.html#cfn-dynamodb-globaltable-readprovisionedthroughputsettings-readcapacityunits
    """
    AWS_OBJECT_TYPE = "AWS::DynamoDB::GlobalTable.ReadProvisionedThroughputSettings"
    p_ReadCapacityAutoScalingSettings: typing.Union['PropGlobalTableCapacityAutoScalingSettings', dict] = attr.ib(
        default=None,
        converter=PropGlobalTableCapacityAutoScalingSettings.from_dict,
        validator=attr.validators.optional(attr.validators.instance_of(PropGlobalTableCapacityAutoScalingSettings)),
        metadata={AttrMeta.PROPERTY_NAME: "ReadCapacityAutoScalingSettings"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-readprovisionedthroughputsettings.html#cfn-dynamodb-globaltable-readprovisionedthroughputsettings-readcapacityautoscalingsettings"""
    p_ReadCapacityUnits: int = attr.ib(
        default=None,
        validator=attr.validators.optional(attr.validators.instance_of(int)),
        metadata={AttrMeta.PROPERTY_NAME: "ReadCapacityUnits"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-readprovisionedthroughputsettings.html#cfn-dynamodb-globaltable-readprovisionedthroughputsettings-readcapacityunits"""
@attr.s
class PropGlobalTableLocalSecondaryIndex(Property):
    """
    AWS Object Type = "AWS::DynamoDB::GlobalTable.LocalSecondaryIndex"
    Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-localsecondaryindex.html
    Property Document:
    - ``rp_IndexName``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-localsecondaryindex.html#cfn-dynamodb-globaltable-localsecondaryindex-indexname
    - ``rp_KeySchema``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-localsecondaryindex.html#cfn-dynamodb-globaltable-localsecondaryindex-keyschema
    - ``rp_Projection``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-localsecondaryindex.html#cfn-dynamodb-globaltable-localsecondaryindex-projection
    """
    AWS_OBJECT_TYPE = "AWS::DynamoDB::GlobalTable.LocalSecondaryIndex"
    rp_IndexName: TypeHint.intrinsic_str = attr.ib(
        default=None,
        validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
        metadata={AttrMeta.PROPERTY_NAME: "IndexName"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-localsecondaryindex.html#cfn-dynamodb-globaltable-localsecondaryindex-indexname"""
    rp_KeySchema: typing.List[typing.Union['PropGlobalTableKeySchema', dict]] = attr.ib(
        default=None,
        converter=PropGlobalTableKeySchema.from_list,
        validator=attr.validators.deep_iterable(member_validator=attr.validators.instance_of(PropGlobalTableKeySchema), iterable_validator=attr.validators.instance_of(list)),
        metadata={AttrMeta.PROPERTY_NAME: "KeySchema"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-localsecondaryindex.html#cfn-dynamodb-globaltable-localsecondaryindex-keyschema"""
    rp_Projection: typing.Union['PropGlobalTableProjection', dict] = attr.ib(
        default=None,
        converter=PropGlobalTableProjection.from_dict,
        validator=attr.validators.instance_of(PropGlobalTableProjection),
        metadata={AttrMeta.PROPERTY_NAME: "Projection"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-localsecondaryindex.html#cfn-dynamodb-globaltable-localsecondaryindex-projection"""
@attr.s
class PropTableGlobalSecondaryIndex(Property):
    """
    AWS Object Type = "AWS::DynamoDB::Table.GlobalSecondaryIndex"
    Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-gsi.html
    Property Document:
    - ``rp_IndexName``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-gsi.html#cfn-dynamodb-gsi-indexname
    - ``rp_KeySchema``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-gsi.html#cfn-dynamodb-gsi-keyschema
    - ``rp_Projection``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-gsi.html#cfn-dynamodb-gsi-projection
    - ``p_ContributorInsightsSpecification``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-gsi.html#cfn-dynamodb-contributorinsightsspecification-enabled
    - ``p_ProvisionedThroughput``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-gsi.html#cfn-dynamodb-gsi-provisionedthroughput
    """
    AWS_OBJECT_TYPE = "AWS::DynamoDB::Table.GlobalSecondaryIndex"
    rp_IndexName: TypeHint.intrinsic_str = attr.ib(
        default=None,
        validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
        metadata={AttrMeta.PROPERTY_NAME: "IndexName"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-gsi.html#cfn-dynamodb-gsi-indexname"""
    rp_KeySchema: typing.List[typing.Union['PropTableKeySchema', dict]] = attr.ib(
        default=None,
        converter=PropTableKeySchema.from_list,
        validator=attr.validators.deep_iterable(member_validator=attr.validators.instance_of(PropTableKeySchema), iterable_validator=attr.validators.instance_of(list)),
        metadata={AttrMeta.PROPERTY_NAME: "KeySchema"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-gsi.html#cfn-dynamodb-gsi-keyschema"""
    rp_Projection: typing.Union['PropTableProjection', dict] = attr.ib(
        default=None,
        converter=PropTableProjection.from_dict,
        validator=attr.validators.instance_of(PropTableProjection),
        metadata={AttrMeta.PROPERTY_NAME: "Projection"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-gsi.html#cfn-dynamodb-gsi-projection"""
    p_ContributorInsightsSpecification: typing.Union['PropTableContributorInsightsSpecification', dict] = attr.ib(
        default=None,
        converter=PropTableContributorInsightsSpecification.from_dict,
        validator=attr.validators.optional(attr.validators.instance_of(PropTableContributorInsightsSpecification)),
        metadata={AttrMeta.PROPERTY_NAME: "ContributorInsightsSpecification"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-gsi.html#cfn-dynamodb-contributorinsightsspecification-enabled"""
    p_ProvisionedThroughput: typing.Union['PropTableProvisionedThroughput', dict] = attr.ib(
        default=None,
        converter=PropTableProvisionedThroughput.from_dict,
        validator=attr.validators.optional(attr.validators.instance_of(PropTableProvisionedThroughput)),
        metadata={AttrMeta.PROPERTY_NAME: "ProvisionedThroughput"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-gsi.html#cfn-dynamodb-gsi-provisionedthroughput"""
@attr.s
class PropGlobalTableReplicaGlobalSecondaryIndexSpecification(Property):
    """
    AWS Object Type = "AWS::DynamoDB::GlobalTable.ReplicaGlobalSecondaryIndexSpecification"
    Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-replicaglobalsecondaryindexspecification.html
    Property Document:
    - ``rp_IndexName``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-replicaglobalsecondaryindexspecification.html#cfn-dynamodb-globaltable-replicaglobalsecondaryindexspecification-indexname
    - ``p_ContributorInsightsSpecification``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-replicaglobalsecondaryindexspecification.html#cfn-dynamodb-globaltable-replicaglobalsecondaryindexspecification-contributorinsightsspecification
    - ``p_ReadProvisionedThroughputSettings``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-replicaglobalsecondaryindexspecification.html#cfn-dynamodb-globaltable-replicaglobalsecondaryindexspecification-readprovisionedthroughputsettings
    """
    AWS_OBJECT_TYPE = "AWS::DynamoDB::GlobalTable.ReplicaGlobalSecondaryIndexSpecification"
    rp_IndexName: TypeHint.intrinsic_str = attr.ib(
        default=None,
        validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
        metadata={AttrMeta.PROPERTY_NAME: "IndexName"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-replicaglobalsecondaryindexspecification.html#cfn-dynamodb-globaltable-replicaglobalsecondaryindexspecification-indexname"""
    p_ContributorInsightsSpecification: typing.Union['PropGlobalTableContributorInsightsSpecification', dict] = attr.ib(
        default=None,
        converter=PropGlobalTableContributorInsightsSpecification.from_dict,
        validator=attr.validators.optional(attr.validators.instance_of(PropGlobalTableContributorInsightsSpecification)),
        metadata={AttrMeta.PROPERTY_NAME: "ContributorInsightsSpecification"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-replicaglobalsecondaryindexspecification.html#cfn-dynamodb-globaltable-replicaglobalsecondaryindexspecification-contributorinsightsspecification"""
    p_ReadProvisionedThroughputSettings: typing.Union['PropGlobalTableReadProvisionedThroughputSettings', dict] = attr.ib(
        default=None,
        converter=PropGlobalTableReadProvisionedThroughputSettings.from_dict,
        validator=attr.validators.optional(attr.validators.instance_of(PropGlobalTableReadProvisionedThroughputSettings)),
        metadata={AttrMeta.PROPERTY_NAME: "ReadProvisionedThroughputSettings"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-replicaglobalsecondaryindexspecification.html#cfn-dynamodb-globaltable-replicaglobalsecondaryindexspecification-readprovisionedthroughputsettings"""
@attr.s
class PropGlobalTableWriteProvisionedThroughputSettings(Property):
    """
    AWS Object Type = "AWS::DynamoDB::GlobalTable.WriteProvisionedThroughputSettings"
    Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-writeprovisionedthroughputsettings.html
    Property Document:
    - ``p_WriteCapacityAutoScalingSettings``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-writeprovisionedthroughputsettings.html#cfn-dynamodb-globaltable-writeprovisionedthroughputsettings-writecapacityautoscalingsettings
    """
    AWS_OBJECT_TYPE = "AWS::DynamoDB::GlobalTable.WriteProvisionedThroughputSettings"
    p_WriteCapacityAutoScalingSettings: typing.Union['PropGlobalTableCapacityAutoScalingSettings', dict] = attr.ib(
        default=None,
        converter=PropGlobalTableCapacityAutoScalingSettings.from_dict,
        validator=attr.validators.optional(attr.validators.instance_of(PropGlobalTableCapacityAutoScalingSettings)),
        metadata={AttrMeta.PROPERTY_NAME: "WriteCapacityAutoScalingSettings"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-writeprovisionedthroughputsettings.html#cfn-dynamodb-globaltable-writeprovisionedthroughputsettings-writecapacityautoscalingsettings"""
@attr.s
class PropGlobalTableReplicaSpecification(Property):
    """
    AWS Object Type = "AWS::DynamoDB::GlobalTable.ReplicaSpecification"
    Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-replicaspecification.html
    Property Document:
    - ``rp_Region``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-replicaspecification.html#cfn-dynamodb-globaltable-replicaspecification-region
    - ``p_ContributorInsightsSpecification``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-replicaspecification.html#cfn-dynamodb-globaltable-replicaspecification-contributorinsightsspecification
    - ``p_GlobalSecondaryIndexes``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-replicaspecification.html#cfn-dynamodb-globaltable-replicaspecification-globalsecondaryindexes
    - ``p_PointInTimeRecoverySpecification``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-replicaspecification.html#cfn-dynamodb-globaltable-replicaspecification-pointintimerecoveryspecification
    - ``p_ReadProvisionedThroughputSettings``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-replicaspecification.html#cfn-dynamodb-globaltable-replicaspecification-readprovisionedthroughputsettings
    - ``p_SSESpecification``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-replicaspecification.html#cfn-dynamodb-globaltable-replicaspecification-ssespecification
    - ``p_Tags``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-replicaspecification.html#cfn-dynamodb-globaltable-replicaspecification-tags
    """
    AWS_OBJECT_TYPE = "AWS::DynamoDB::GlobalTable.ReplicaSpecification"
    rp_Region: TypeHint.intrinsic_str = attr.ib(
        default=None,
        validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
        metadata={AttrMeta.PROPERTY_NAME: "Region"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-replicaspecification.html#cfn-dynamodb-globaltable-replicaspecification-region"""
    p_ContributorInsightsSpecification: typing.Union['PropGlobalTableContributorInsightsSpecification', dict] = attr.ib(
        default=None,
        converter=PropGlobalTableContributorInsightsSpecification.from_dict,
        validator=attr.validators.optional(attr.validators.instance_of(PropGlobalTableContributorInsightsSpecification)),
        metadata={AttrMeta.PROPERTY_NAME: "ContributorInsightsSpecification"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-replicaspecification.html#cfn-dynamodb-globaltable-replicaspecification-contributorinsightsspecification"""
    p_GlobalSecondaryIndexes: typing.List[typing.Union['PropGlobalTableReplicaGlobalSecondaryIndexSpecification', dict]] = attr.ib(
        default=None,
        converter=PropGlobalTableReplicaGlobalSecondaryIndexSpecification.from_list,
        validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(PropGlobalTableReplicaGlobalSecondaryIndexSpecification), iterable_validator=attr.validators.instance_of(list))),
        metadata={AttrMeta.PROPERTY_NAME: "GlobalSecondaryIndexes"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-replicaspecification.html#cfn-dynamodb-globaltable-replicaspecification-globalsecondaryindexes"""
    p_PointInTimeRecoverySpecification: typing.Union['PropGlobalTablePointInTimeRecoverySpecification', dict] = attr.ib(
        default=None,
        converter=PropGlobalTablePointInTimeRecoverySpecification.from_dict,
        validator=attr.validators.optional(attr.validators.instance_of(PropGlobalTablePointInTimeRecoverySpecification)),
        metadata={AttrMeta.PROPERTY_NAME: "PointInTimeRecoverySpecification"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-replicaspecification.html#cfn-dynamodb-globaltable-replicaspecification-pointintimerecoveryspecification"""
    p_ReadProvisionedThroughputSettings: typing.Union['PropGlobalTableReadProvisionedThroughputSettings', dict] = attr.ib(
        default=None,
        converter=PropGlobalTableReadProvisionedThroughputSettings.from_dict,
        validator=attr.validators.optional(attr.validators.instance_of(PropGlobalTableReadProvisionedThroughputSettings)),
        metadata={AttrMeta.PROPERTY_NAME: "ReadProvisionedThroughputSettings"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-replicaspecification.html#cfn-dynamodb-globaltable-replicaspecification-readprovisionedthroughputsettings"""
    p_SSESpecification: typing.Union['PropGlobalTableReplicaSSESpecification', dict] = attr.ib(
        default=None,
        converter=PropGlobalTableReplicaSSESpecification.from_dict,
        validator=attr.validators.optional(attr.validators.instance_of(PropGlobalTableReplicaSSESpecification)),
        metadata={AttrMeta.PROPERTY_NAME: "SSESpecification"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-replicaspecification.html#cfn-dynamodb-globaltable-replicaspecification-ssespecification"""
    p_Tags: typing.List[typing.Union[Tag, dict]] = attr.ib(
        default=None,
        converter=Tag.from_list,
        validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(Tag), iterable_validator=attr.validators.instance_of(list))),
        metadata={AttrMeta.PROPERTY_NAME: "Tags"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-replicaspecification.html#cfn-dynamodb-globaltable-replicaspecification-tags"""
@attr.s
class PropGlobalTableGlobalSecondaryIndex(Property):
    """
    AWS Object Type = "AWS::DynamoDB::GlobalTable.GlobalSecondaryIndex"
    Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-globalsecondaryindex.html
    Property Document:
    - ``rp_IndexName``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-globalsecondaryindex.html#cfn-dynamodb-globaltable-globalsecondaryindex-indexname
    - ``rp_KeySchema``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-globalsecondaryindex.html#cfn-dynamodb-globaltable-globalsecondaryindex-keyschema
    - ``rp_Projection``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-globalsecondaryindex.html#cfn-dynamodb-globaltable-globalsecondaryindex-projection
    - ``p_WriteProvisionedThroughputSettings``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-globalsecondaryindex.html#cfn-dynamodb-globaltable-globalsecondaryindex-writeprovisionedthroughputsettings
    """
    AWS_OBJECT_TYPE = "AWS::DynamoDB::GlobalTable.GlobalSecondaryIndex"
    rp_IndexName: TypeHint.intrinsic_str = attr.ib(
        default=None,
        validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
        metadata={AttrMeta.PROPERTY_NAME: "IndexName"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-globalsecondaryindex.html#cfn-dynamodb-globaltable-globalsecondaryindex-indexname"""
    rp_KeySchema: typing.List[typing.Union['PropGlobalTableKeySchema', dict]] = attr.ib(
        default=None,
        converter=PropGlobalTableKeySchema.from_list,
        validator=attr.validators.deep_iterable(member_validator=attr.validators.instance_of(PropGlobalTableKeySchema), iterable_validator=attr.validators.instance_of(list)),
        metadata={AttrMeta.PROPERTY_NAME: "KeySchema"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-globalsecondaryindex.html#cfn-dynamodb-globaltable-globalsecondaryindex-keyschema"""
    rp_Projection: typing.Union['PropGlobalTableProjection', dict] = attr.ib(
        default=None,
        converter=PropGlobalTableProjection.from_dict,
        validator=attr.validators.instance_of(PropGlobalTableProjection),
        metadata={AttrMeta.PROPERTY_NAME: "Projection"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-globalsecondaryindex.html#cfn-dynamodb-globaltable-globalsecondaryindex-projection"""
    p_WriteProvisionedThroughputSettings: typing.Union['PropGlobalTableWriteProvisionedThroughputSettings', dict] = attr.ib(
        default=None,
        converter=PropGlobalTableWriteProvisionedThroughputSettings.from_dict,
        validator=attr.validators.optional(attr.validators.instance_of(PropGlobalTableWriteProvisionedThroughputSettings)),
        metadata={AttrMeta.PROPERTY_NAME: "WriteProvisionedThroughputSettings"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-globaltable-globalsecondaryindex.html#cfn-dynamodb-globaltable-globalsecondaryindex-writeprovisionedthroughputsettings"""
#--- Resource declaration ---
@attr.s
class Table(Resource):
"""
AWS Object Type = "AWS::DynamoDB::Table"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html
Property Document:
- ``rp_KeySchema``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-keyschema
- ``p_AttributeDefinitions``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-attributedef
- ``p_BillingMode``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-billingmode
- ``p_ContributorInsightsSpecification``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-contributorinsightsspecification-enabled
- ``p_GlobalSecondaryIndexes``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-gsi
- ``p_KinesisStreamSpecification``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-kinesisstreamspecification
- ``p_LocalSecondaryIndexes``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-lsi
- ``p_PointInTimeRecoverySpecification``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-pointintimerecoveryspecification
- ``p_ProvisionedThroughput``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-provisionedthroughput
- ``p_SSESpecification``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-ssespecification
- ``p_StreamSpecification``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-streamspecification
- ``p_TableName``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-tablename
- ``p_TimeToLiveSpecification``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-timetolivespecification
- ``p_Tags``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-tags
"""
AWS_OBJECT_TYPE = "AWS::DynamoDB::Table"
rp_KeySchema: typing.List[typing.Union['PropTableKeySchema', dict]] = attr.ib(
default=None,
converter=PropTableKeySchema.from_list,
validator=attr.validators.deep_iterable(member_validator=attr.validators.instance_of(PropTableKeySchema), iterable_validator=attr.validators.instance_of(list)),
metadata={AttrMeta.PROPERTY_NAME: "KeySchema"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-keyschema"""
p_AttributeDefinitions: typing.List[typing.Union['PropTableAttributeDefinition', dict]] = attr.ib(
default=None,
converter=PropTableAttributeDefinition.from_list,
validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(PropTableAttributeDefinition), iterable_validator=attr.validators.instance_of(list))),
metadata={AttrMeta.PROPERTY_NAME: "AttributeDefinitions"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-attributedef"""
p_BillingMode: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "BillingMode"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-billingmode"""
p_ContributorInsightsSpecification: typing.Union['PropTableContributorInsightsSpecification', dict] = attr.ib(
default=None,
converter=PropTableContributorInsightsSpecification.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(PropTableContributorInsightsSpecification)),
metadata={AttrMeta.PROPERTY_NAME: "ContributorInsightsSpecification"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-contributorinsightsspecification-enabled"""
p_GlobalSecondaryIndexes: typing.List[typing.Union['PropTableGlobalSecondaryIndex', dict]] = attr.ib(
default=None,
converter=PropTableGlobalSecondaryIndex.from_list,
validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(PropTableGlobalSecondaryIndex), iterable_validator=attr.validators.instance_of(list))),
metadata={AttrMeta.PROPERTY_NAME: "GlobalSecondaryIndexes"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-gsi"""
p_KinesisStreamSpecification: typing.Union['PropTableKinesisStreamSpecification', dict] = attr.ib(
default=None,
converter=PropTableKinesisStreamSpecification.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(PropTableKinesisStreamSpecification)),
metadata={AttrMeta.PROPERTY_NAME: "KinesisStreamSpecification"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-kinesisstreamspecification"""
p_LocalSecondaryIndexes: typing.List[typing.Union['PropTableLocalSecondaryIndex', dict]] = attr.ib(
default=None,
converter=PropTableLocalSecondaryIndex.from_list,
validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(PropTableLocalSecondaryIndex), iterable_validator=attr.validators.instance_of(list))),
metadata={AttrMeta.PROPERTY_NAME: "LocalSecondaryIndexes"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-lsi"""
p_PointInTimeRecoverySpecification: typing.Union['PropTablePointInTimeRecoverySpecification', dict] = attr.ib(
default=None,
converter=PropTablePointInTimeRecoverySpecification.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(PropTablePointInTimeRecoverySpecification)),
metadata={AttrMeta.PROPERTY_NAME: "PointInTimeRecoverySpecification"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-pointintimerecoveryspecification"""
p_ProvisionedThroughput: typing.Union['PropTableProvisionedThroughput', dict] = attr.ib(
default=None,
converter=PropTableProvisionedThroughput.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(PropTableProvisionedThroughput)),
metadata={AttrMeta.PROPERTY_NAME: "ProvisionedThroughput"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-provisionedthroughput"""
p_SSESpecification: typing.Union['PropTableSSESpecification', dict] = attr.ib(
default=None,
converter=PropTableSSESpecification.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(PropTableSSESpecification)),
metadata={AttrMeta.PROPERTY_NAME: "SSESpecification"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-ssespecification"""
p_StreamSpecification: typing.Union['PropTableStreamSpecification', dict] = attr.ib(
default=None,
converter=PropTableStreamSpecification.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(PropTableStreamSpecification)),
metadata={AttrMeta.PROPERTY_NAME: "StreamSpecification"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-streamspecification"""
p_TableName: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "TableName"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-tablename"""
p_TimeToLiveSpecification: typing.Union['PropTableTimeToLiveSpecification', dict] = attr.ib(
default=None,
converter=PropTableTimeToLiveSpecification.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(PropTableTimeToLiveSpecification)),
metadata={AttrMeta.PROPERTY_NAME: "TimeToLiveSpecification"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-timetolivespecification"""
p_Tags: typing.List[typing.Union[Tag, dict]] = attr.ib(
default=None,
converter=Tag.from_list,
validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(Tag), iterable_validator=attr.validators.instance_of(list))),
metadata={AttrMeta.PROPERTY_NAME: "Tags"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-tags"""
@property
def rv_Arn(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#aws-resource-dynamodb-table-return-values"""
return GetAtt(resource=self, attr_name="Arn")
@property
def rv_StreamArn(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#aws-resource-dynamodb-table-return-values"""
return GetAtt(resource=self, attr_name="StreamArn")
@attr.s
class GlobalTable(Resource):
"""
AWS Object Type = "AWS::DynamoDB::GlobalTable"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-globaltable.html
Property Document:
- ``rp_AttributeDefinitions``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-globaltable.html#cfn-dynamodb-globaltable-attributedefinitions
- ``rp_KeySchema``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-globaltable.html#cfn-dynamodb-globaltable-keyschema
- ``rp_Replicas``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-globaltable.html#cfn-dynamodb-globaltable-replicas
- ``p_BillingMode``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-globaltable.html#cfn-dynamodb-globaltable-billingmode
- ``p_GlobalSecondaryIndexes``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-globaltable.html#cfn-dynamodb-globaltable-globalsecondaryindexes
- ``p_LocalSecondaryIndexes``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-globaltable.html#cfn-dynamodb-globaltable-localsecondaryindexes
- ``p_SSESpecification``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-globaltable.html#cfn-dynamodb-globaltable-ssespecification
- ``p_StreamSpecification``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-globaltable.html#cfn-dynamodb-globaltable-streamspecification
- ``p_TableName``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-globaltable.html#cfn-dynamodb-globaltable-tablename
- ``p_TimeToLiveSpecification``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-globaltable.html#cfn-dynamodb-globaltable-timetolivespecification
- ``p_WriteProvisionedThroughputSettings``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-globaltable.html#cfn-dynamodb-globaltable-writeprovisionedthroughputsettings
"""
AWS_OBJECT_TYPE = "AWS::DynamoDB::GlobalTable"
rp_AttributeDefinitions: typing.List[typing.Union['PropGlobalTableAttributeDefinition', dict]] = attr.ib(
default=None,
converter=PropGlobalTableAttributeDefinition.from_list,
validator=attr.validators.deep_iterable(member_validator=attr.validators.instance_of(PropGlobalTableAttributeDefinition), iterable_validator=attr.validators.instance_of(list)),
metadata={AttrMeta.PROPERTY_NAME: "AttributeDefinitions"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-globaltable.html#cfn-dynamodb-globaltable-attributedefinitions"""
rp_KeySchema: typing.List[typing.Union['PropGlobalTableKeySchema', dict]] = attr.ib(
default=None,
converter=PropGlobalTableKeySchema.from_list,
validator=attr.validators.deep_iterable(member_validator=attr.validators.instance_of(PropGlobalTableKeySchema), iterable_validator=attr.validators.instance_of(list)),
metadata={AttrMeta.PROPERTY_NAME: "KeySchema"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-globaltable.html#cfn-dynamodb-globaltable-keyschema"""
rp_Replicas: typing.List[typing.Union['PropGlobalTableReplicaSpecification', dict]] = attr.ib(
default=None,
converter=PropGlobalTableReplicaSpecification.from_list,
validator=attr.validators.deep_iterable(member_validator=attr.validators.instance_of(PropGlobalTableReplicaSpecification), iterable_validator=attr.validators.instance_of(list)),
metadata={AttrMeta.PROPERTY_NAME: "Replicas"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-globaltable.html#cfn-dynamodb-globaltable-replicas"""
p_BillingMode: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "BillingMode"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-globaltable.html#cfn-dynamodb-globaltable-billingmode"""
p_GlobalSecondaryIndexes: typing.List[typing.Union['PropGlobalTableGlobalSecondaryIndex', dict]] = attr.ib(
default=None,
converter=PropGlobalTableGlobalSecondaryIndex.from_list,
validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(PropGlobalTableGlobalSecondaryIndex), iterable_validator=attr.validators.instance_of(list))),
metadata={AttrMeta.PROPERTY_NAME: "GlobalSecondaryIndexes"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-globaltable.html#cfn-dynamodb-globaltable-globalsecondaryindexes"""
p_LocalSecondaryIndexes: typing.List[typing.Union['PropGlobalTableLocalSecondaryIndex', dict]] = attr.ib(
default=None,
converter=PropGlobalTableLocalSecondaryIndex.from_list,
validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(PropGlobalTableLocalSecondaryIndex), iterable_validator=attr.validators.instance_of(list))),
metadata={AttrMeta.PROPERTY_NAME: "LocalSecondaryIndexes"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-globaltable.html#cfn-dynamodb-globaltable-localsecondaryindexes"""
p_SSESpecification: typing.Union['PropGlobalTableSSESpecification', dict] = attr.ib(
default=None,
converter=PropGlobalTableSSESpecification.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(PropGlobalTableSSESpecification)),
metadata={AttrMeta.PROPERTY_NAME: "SSESpecification"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-globaltable.html#cfn-dynamodb-globaltable-ssespecification"""
p_StreamSpecification: typing.Union['PropGlobalTableStreamSpecification', dict] = attr.ib(
default=None,
converter=PropGlobalTableStreamSpecification.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(PropGlobalTableStreamSpecification)),
metadata={AttrMeta.PROPERTY_NAME: "StreamSpecification"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-globaltable.html#cfn-dynamodb-globaltable-streamspecification"""
p_TableName: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "TableName"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-globaltable.html#cfn-dynamodb-globaltable-tablename"""
p_TimeToLiveSpecification: typing.Union['PropGlobalTableTimeToLiveSpecification', dict] = attr.ib(
default=None,
converter=PropGlobalTableTimeToLiveSpecification.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(PropGlobalTableTimeToLiveSpecification)),
metadata={AttrMeta.PROPERTY_NAME: "TimeToLiveSpecification"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-globaltable.html#cfn-dynamodb-globaltable-timetolivespecification"""
p_WriteProvisionedThroughputSettings: typing.Union['PropGlobalTableWriteProvisionedThroughputSettings', dict] = attr.ib(
default=None,
converter=PropGlobalTableWriteProvisionedThroughputSettings.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(PropGlobalTableWriteProvisionedThroughputSettings)),
metadata={AttrMeta.PROPERTY_NAME: "WriteProvisionedThroughputSettings"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-globaltable.html#cfn-dynamodb-globaltable-writeprovisionedthroughputsettings"""
@property
def rv_Arn(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-globaltable.html#aws-resource-dynamodb-globaltable-return-values"""
return GetAtt(resource=self, attr_name="Arn")
@property
def rv_StreamArn(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-globaltable.html#aws-resource-dynamodb-globaltable-return-values"""
return GetAtt(resource=self, attr_name="StreamArn")
@property
def rv_TableId(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-globaltable.html#aws-resource-dynamodb-globaltable-return-values"""
return GetAtt(resource=self, attr_name="TableId")
| 67.278947 | 290 | 0.785001 | 7,447 | 76,698 | 7.997851 | 0.021217 | 0.08677 | 0.040631 | 0.062794 | 0.927149 | 0.92369 | 0.910527 | 0.864053 | 0.864053 | 0.862626 | 0 | 0.000014 | 0.096169 | 76,698 | 1,139 | 291 | 67.338016 | 0.859164 | 0.335706 | 0 | 0.591572 | 0 | 0 | 0.129979 | 0.102496 | 0 | 0 | 0 | 0 | 0 | 1 | 0.008104 | false | 0 | 0.006483 | 0 | 0.272285 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
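Every property in the generated classes above follows the same attrs recipe: an optional `converter` that coerces dicts into Prop objects, plus a `validator` built from `optional`, `deep_iterable`, and `instance_of`. A stdlib-only sketch of what that validator combination enforces (hypothetical helper names, not part of the generated file):

```python
def optional_deep_iterable(value, member_type):
    """Mimic attr.validators.optional(deep_iterable(instance_of(T),
    instance_of(list))): None passes through unchanged; otherwise value
    must be a list whose members are all instances of member_type."""
    if value is None:
        return value
    if not isinstance(value, list):
        raise TypeError(f"expected list, got {type(value).__name__}")
    for item in value:
        if not isinstance(item, member_type):
            raise TypeError(f"members must be {member_type.__name__}")
    return value


class Tag:
    def __init__(self, key, value):
        self.key, self.value = key, value


print(optional_deep_iterable(None, Tag) is None)          # True
print(len(optional_deep_iterable([Tag("k", "v")], Tag)))  # 1
```

Required properties (`rp_*`) above simply drop the `optional` wrapper, so `None` fails instead of passing through.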
80822d7fcdfb43b469257d10b1f813cb7b9a3b65 | 4,188 | py | Python | extra_formulas.py | MasterOdin/LogicalEquivalency | c1f4e053c4c18b8fc23a5842944bbd9ef9f37843 | [
"MIT"
] | 1 | 2018-02-02T17:11:24.000Z | 2018-02-02T17:11:24.000Z | extra_formulas.py | MasterOdin/LogicalEquivalency | c1f4e053c4c18b8fc23a5842944bbd9ef9f37843 | [
"MIT"
] | null | null | null | extra_formulas.py | MasterOdin/LogicalEquivalency | c1f4e053c4c18b8fc23a5842944bbd9ef9f37843 | [
"MIT"
] | 1 | 2019-01-16T21:11:52.000Z | 2019-01-16T21:11:52.000Z | # -*- coding: utf-8 -*-
"""
These should be moved into the main forseti package most likely as they are useful
in a variety of contexts
"""
from copy import deepcopy
from forseti.formula import Formula, LogicalOperator, And, Or
class GeneralizedAnd(LogicalOperator):
def __init__(self, *kwargs):
if len(kwargs) < 2:
raise Exception("Need to have at least 2 arguments")
super(GeneralizedAnd, self).__init__(*kwargs)
for kwarg in kwargs:
if isinstance(kwarg, And) or isinstance(kwarg, GeneralizedAnd):
for arg in kwarg.args:
self.args.append(arg)
else:
self.args.append(kwarg)
def __repr__(self):
return "and(" + ", ".join([repr(arg) for arg in self.args]) + ")"
def __str__(self):
return "(" + " & ".join([str(arg) for arg in self.args]) + ")"
def __eq__(self, other):
if not isinstance(other, GeneralizedAnd):
return False
test = deepcopy(self.args)
i = 0
while i < len(test):
for j in other.args:
if j == test[i]:
del test[i]
i -= 1
break
i += 1
return len(test) == 0
def __lt__(self, other):
if not isinstance(other, GeneralizedAnd):
raise TypeError("Can only compare GeneralizedAnd together. Got type '%s'." % type(other))
if len(self.args) < len(other.args):
return True
elif len(self.args) > len(other.args):
return False
else:
i = 0
while i < len(self.args):
if self.args[i] < other.args[i]:
return True
elif other.args[i] < self.args[i]:
return False
i += 1
return True
def __gt__(self, other):
if not isinstance(other, GeneralizedAnd):
raise TypeError("Can only compare GeneralizedAnd together. Got type '%s'." % type(other))
if len(self.args) > len(other.args):
return True
elif len(self.args) < len(other.args):
return False
else:
i = 0
while i < len(self.args):
if self.args[i] > other.args[i]:
return True
i += 1
return False
class GeneralizedOr(LogicalOperator):
def __init__(self, *kwargs):
if len(kwargs) < 2:
raise Exception("Need to have at least 2 arguments")
super(GeneralizedOr, self).__init__(*kwargs)
for kwarg in kwargs:
if isinstance(kwarg, Or) or isinstance(kwarg, GeneralizedOr):
for arg in kwarg.args:
self.args.append(arg)
else:
self.args.append(kwarg)
def __repr__(self):
return "or(" + ", ".join([repr(arg) for arg in self.args]) + ")"
def __str__(self):
return "(" + " | ".join([str(arg) for arg in self.args]) + ")"
class Contradiction(Formula):
def __init__(self):
super(Contradiction, self).__init__()
def __repr__(self):
return "FALSE"
def __str__(self):
return "⊥"
def __eq__(self, other):
if isinstance(other, Contradiction):
return True
else:
return False
def __ne__(self, other):
return not (self == other)
def __gt__(self, other):
return False
def __ge__(self, other):
pass
def __lt__(self, other):
pass
def __le__(self, other):
pass
class Tautology(Formula):
def __init__(self):
super(Tautology, self).__init__()
def __repr__(self):
return "TRUE"
def __str__(self):
return "T"
def __eq__(self, other):
if isinstance(other, Tautology):
return True
else:
return False
def __ne__(self, other):
return not (self == other)
def __gt__(self, other):
return False
def __ge__(self, other):
pass
def __lt__(self, other):
pass
def __le__(self, other):
pass
| 26.506329 | 101 | 0.53319 | 482 | 4,188 | 4.377593 | 0.192946 | 0.068246 | 0.022749 | 0.032227 | 0.734123 | 0.706161 | 0.682464 | 0.660664 | 0.660664 | 0.660664 | 0 | 0.004845 | 0.35936 | 4,188 | 157 | 102 | 26.675159 | 0.781215 | 0.031041 | 0 | 0.739496 | 0 | 0 | 0.052346 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.226891 | false | 0.05042 | 0.016807 | 0.10084 | 0.504202 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 6 |
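The `GeneralizedAnd`/`GeneralizedOr` constructors above splice the children of nested `And`/`Or` operators into one flat argument list. The flattening step in isolation, stripped of the forseti base classes (standalone sketch):

```python
def flatten_args(args, is_nested):
    """Splice the children of nested nodes into a single flat list, as the
    GeneralizedAnd/GeneralizedOr constructors do with And/Or arguments."""
    flat = []
    for arg in args:
        if is_nested(arg):
            flat.extend(arg)  # one level deep, matching the constructors
        else:
            flat.append(arg)
    return flat


# Plain lists stand in for nested And/Or nodes here.
print(flatten_args(["p", ["q", "r"], "s"], lambda x: isinstance(x, list)))
# ['p', 'q', 'r', 's']
```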
038a0d3e57850b44ff44040c7b63002338994e42 | 106 | py | Python | matrixslow/core/__init__.py | nuaalixu/MatrixSlow | 490b7114130919b3d0f0320018308313951f8478 | [
"MIT"
] | null | null | null | matrixslow/core/__init__.py | nuaalixu/MatrixSlow | 490b7114130919b3d0f0320018308313951f8478 | [
"MIT"
] | null | null | null | matrixslow/core/__init__.py | nuaalixu/MatrixSlow | 490b7114130919b3d0f0320018308313951f8478 | [
"MIT"
] | 1 | 2020-11-19T11:22:39.000Z | 2020-11-19T11:22:39.000Z | """matrixslow核心组件包,包括图、节点等数据结构实现。
"""
from .graph import *
from .node import *
from .core import *
| 15.142857 | 34 | 0.660377 | 12 | 106 | 5.833333 | 0.666667 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.198113 | 106 | 6 | 35 | 17.666667 | 0.823529 | 0.283019 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0399525beaef8390ca9bfcaee0ede3e8750a413a | 92 | py | Python | tests/refactory_core_test.py | lmmx/refactory | 39e97528733f4cb2607a89184e3b0049f9784547 | [
"MIT"
] | null | null | null | tests/refactory_core_test.py | lmmx/refactory | 39e97528733f4cb2607a89184e3b0049f9784547 | [
"MIT"
] | 1 | 2022-02-07T22:18:30.000Z | 2022-02-07T22:18:30.000Z | tests/refactory_core_test.py | lmmx/refactory | 39e97528733f4cb2607a89184e3b0049f9784547 | [
"MIT"
] | null | null | null | import refactory
def test_load_package():
assert refactory.__package__ == "refactory"
| 15.333333 | 47 | 0.76087 | 10 | 92 | 6.4 | 0.7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.152174 | 92 | 5 | 48 | 18.4 | 0.820513 | 0 | 0 | 0 | 0 | 0 | 0.097826 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2082024de4ecadbbb7d6b4d0ab1d354b08375354 | 164 | py | Python | barriers/models/action_plans.py | uktrade/market-access-python-frontend | aee60bb5380754e67c0804ab57cd2fca731405f7 | [
"MIT"
] | 1 | 2021-12-15T04:14:03.000Z | 2021-12-15T04:14:03.000Z | barriers/models/action_plans.py | uktrade/market-access-python-frontend | aee60bb5380754e67c0804ab57cd2fca731405f7 | [
"MIT"
] | 19 | 2019-12-11T11:32:47.000Z | 2022-03-29T15:40:57.000Z | barriers/models/action_plans.py | uktrade/market-access-python-frontend | aee60bb5380754e67c0804ab57cd2fca731405f7 | [
"MIT"
] | 2 | 2021-02-09T09:38:45.000Z | 2021-03-29T19:07:09.000Z | from utils.models import APIModel
class ActionPlan(APIModel):
pass
class ActionPlanMilestone(APIModel):
pass
class ActionPlanTask(APIModel):
pass
| 11.714286 | 36 | 0.75 | 17 | 164 | 7.235294 | 0.588235 | 0.292683 | 0.276423 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.189024 | 164 | 13 | 37 | 12.615385 | 0.924812 | 0 | 0 | 0.428571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.428571 | 0.142857 | 0 | 0.571429 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
20869d5b17a6934e33176a3fff91dab3d1243691 | 96 | py | Python | venv/lib/python3.8/site-packages/clikit/ui/help/command_help.py | Retraces/UkraineBot | 3d5d7f8aaa58fa0cb8b98733b8808e5dfbdb8b71 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/clikit/ui/help/command_help.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/clikit/ui/help/command_help.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/5d/37/2a/9aceabfba3094717e9c19c50ac189d6fcc0080af051555369f2515d465 | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.427083 | 0 | 96 | 1 | 96 | 96 | 0.46875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
20b98a753c34a9063bc8934204563dd61efb0acd | 95 | py | Python | gigasecond/gigasecond.py | Macelai/exercism-python | 26cd711d8c646311b72e067cbe3010a316fc6788 | [
"MIT"
] | null | null | null | gigasecond/gigasecond.py | Macelai/exercism-python | 26cd711d8c646311b72e067cbe3010a316fc6788 | [
"MIT"
] | null | null | null | gigasecond/gigasecond.py | Macelai/exercism-python | 26cd711d8c646311b72e067cbe3010a316fc6788 | [
"MIT"
] | null | null | null | from datetime import *
def add_gigasecond(date):
return date + timedelta(seconds = 10**9)
| 19 | 44 | 0.715789 | 13 | 95 | 5.153846 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.038462 | 0.178947 | 95 | 4 | 45 | 23.75 | 0.820513 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
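A gigasecond is 10**9 seconds, roughly 31.7 years, so the function above can be exercised like this (the date is chosen arbitrarily):

```python
from datetime import datetime, timedelta


def add_gigasecond(date):
    return date + timedelta(seconds=10**9)


print(add_gigasecond(datetime(2011, 4, 25)))  # 2043-01-01 01:46:40
```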
20f4b9b401ac8c61879b6c021637f693e3771b1c | 1,937 | py | Python | code-guessing/maze2.py | osmarks/random-stuff | 78602eabd69aebfa061cf201a4d44555fa9a36e7 | [
"MIT"
] | 4 | 2021-02-04T15:09:14.000Z | 2021-12-19T10:54:59.000Z | code-guessing/maze2.py | osmarks/random-stuff | 78602eabd69aebfa061cf201a4d44555fa9a36e7 | [
"MIT"
] | null | null | null | code-guessing/maze2.py | osmarks/random-stuff | 78602eabd69aebfa061cf201a4d44555fa9a36e7 | [
"MIT"
] | 1 | 2021-12-19T22:05:07.000Z | 2021-12-19T22:05:07.000Z | #!/usr/bin/env python3
T=10
def R(m,p=[0]):
*_,c=p
if c==99:return p
a=[c+T,c-T]
if e:=c%T:a+=~-c,
if e!=9:a+=-~c,
for b in a:
if 99>=b>=0and b not in p and m[b]==0and (r:=R(m,p+[b])):return r
def entry(m):
p=R(m)
return [{-T:1,1:2,T:3,-1:4}[y-x] for x,y in zip(p,p[1:])]
def divide_chunks(l, n):
# yield successive n-sized chunks of l
for i in range(0, len(l), n):
yield l[i:i + n]
def fail(e): raise Exception(e)
def print_maze(m, p):
out = [ [ "█" if x else "_" for x in row ] for row in divide_chunks(m, 10) ]
for ix, pc in enumerate(p):
out[pc // 10][pc % 10] = chr(ix % 26 + 97) if m[pc] == 0 else fail("all is bees")
print("\n".join([ " ".join(row) for row in out ]))
assert p[-1] == 99
def directions(x): return list(map({1: "up", 3: "down", 2: "right", 4: "left" }.get, x))
print(entry([
0,1,0,0,0,1,0,0,0,1,
0,1,0,1,0,1,0,1,0,0,
0,1,0,1,0,1,0,1,1,0,
0,1,0,1,0,1,0,1,0,0,
0,1,0,1,0,1,0,1,0,1,
0,1,0,1,0,1,0,1,0,0,
0,1,0,1,0,1,0,1,1,0,
0,1,0,1,0,1,0,1,0,0,
0,1,0,1,0,1,0,1,0,1,
0,0,0,1,0,0,0,1,0,0]))
print(entry([
0,1,0,0,0,1,0,0,0,1,
0,1,0,1,0,1,0,1,0,0,
0,0,0,1,0,1,0,1,1,0,
0,1,0,1,0,1,0,1,0,0,
0,1,0,1,0,1,0,1,0,1,
0,1,0,1,0,1,0,1,0,0,
0,1,0,0,0,1,0,1,1,0,
0,1,0,0,0,1,0,1,0,0,
0,1,0,1,0,1,0,1,1,0,
0,0,0,1,0,0,0,0,1,0
]))
print(entry([
0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0
]))
print(entry([
0,1,0,0,0,0,0,0,1,0,
0,0,0,0,0,1,1,0,0,0,
1,1,1,0,1,1,0,0,1,0,
0,1,1,0,0,0,1,0,0,1,
0,0,0,0,1,0,1,1,0,0,
1,0,0,0,0,1,0,0,0,1,
0,0,1,1,1,0,1,0,1,0,
1,0,0,0,1,0,1,0,0,0,
0,0,0,0,1,0,0,1,1,1,
1,0,1,0,0,0,0,0,0,0
]))
print(entry([
0,0,0,0,0,0,1,0,0,0,
0,0,1,0,1,0,0,0,1,0,
0,0,1,1,0,0,1,1,1,0,
0,0,0,0,1,0,0,0,0,0,
0,1,0,0,1,0,1,0,0,0,
0,0,1,0,0,0,0,0,0,0,
0,1,0,0,0,0,1,0,1,0,
0,0,0,1,0,0,0,1,0,0,
0,0,0,1,0,0,0,0,0,0,
1,0,0,0,0,1,0,0,0,0
])) | 20.606383 | 88 | 0.510067 | 694 | 1,937 | 1.417867 | 0.102305 | 0.495935 | 0.570122 | 0.581301 | 0.558943 | 0.558943 | 0.558943 | 0.552846 | 0.542683 | 0.519309 | 0 | 0.313668 | 0.116159 | 1,937 | 94 | 89 | 20.606383 | 0.260514 | 0.022199 | 0 | 0.402439 | 0 | 0 | 0.016385 | 0 | 0 | 0 | 0 | 0 | 0.012195 | 1 | 0.073171 | false | 0 | 0 | 0.012195 | 0.085366 | 0.085366 | 0 | 0 | 1 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
457ebfeb1967ecbf01d3be0acdef5ebaae04734f | 2,403 | py | Python | tests/zeus/api/resources/test_hook_details.py | conrad-kronos/zeus | ddb6bc313e51fb22222b30822b82d76f37dbbd35 | [
"Apache-2.0"
] | 221 | 2017-07-03T17:29:21.000Z | 2021-12-07T19:56:59.000Z | tests/zeus/api/resources/test_hook_details.py | conrad-kronos/zeus | ddb6bc313e51fb22222b30822b82d76f37dbbd35 | [
"Apache-2.0"
] | 298 | 2017-07-04T18:08:14.000Z | 2022-03-03T22:24:51.000Z | tests/zeus/api/resources/test_hook_details.py | conrad-kronos/zeus | ddb6bc313e51fb22222b30822b82d76f37dbbd35 | [
"Apache-2.0"
] | 24 | 2017-07-15T13:46:45.000Z | 2020-08-16T16:14:45.000Z | from zeus.models import Hook
def test_hook_details(
client, default_login, default_hook, default_repo, default_repo_access
):
resp = client.get("/api/hooks/{}".format(default_hook.id))
assert resp.status_code == 200
data = resp.json()
assert data["id"] == str(default_hook.id)
def test_cannot_load_hook_without_admin(
client, default_login, default_hook, default_repo, default_repo_write_access
):
resp = client.get("/api/hooks/{}".format(default_hook.id))
assert resp.status_code == 400
def test_hook_delete(
client, default_login, default_hook, default_repo, default_repo_access
):
resp = client.delete("/api/hooks/{}".format(default_hook.id))
assert resp.status_code == 204
assert not Hook.query.get(default_hook.id)
def test_cannot_delete_hook_without_admin(
client, default_login, default_hook, default_repo, default_repo_write_access
):
resp = client.delete("/api/hooks/{}".format(default_hook.id))
assert resp.status_code == 400
def test_hook_update(
client, default_login, default_hook, default_repo, default_repo_access
):
resp = client.put(
"/api/hooks/{}".format(default_hook.id),
json={"provider": "travis", "config": {"domain": "api.travis-ci.org"}},
)
assert resp.status_code == 200, repr(resp.data)
hook = Hook.query.unrestricted_unsafe().get(resp.json()["id"])
assert hook.repository_id == default_repo.id
assert hook.provider == "travis"
assert hook.config == {"domain": "api.travis-ci.org"}
assert hook.get_provider().get_name(default_hook.config) == "api.travis-ci.org"
def test_cannot_update_hook_without_admin(
client, default_login, default_hook, default_repo, default_repo_write_access
):
resp = client.put("/api/hooks/{}".format(default_hook.id))
assert resp.status_code == 400
def test_hook_update_without_config(
client, default_login, default_hook, default_repo, default_repo_access
):
resp = client.put(
"/api/hooks/{}".format(default_hook.id), json={"provider": "travis"}
)
assert resp.status_code == 200, repr(resp.data)
hook = Hook.query.unrestricted_unsafe().get(resp.json()["id"])
assert hook.repository_id == default_repo.id
assert hook.provider == "travis"
# we're ensuring that the config doesnt get overwritten by the defaults
assert hook.config == {"domain": "api.travis-ci.org"}
| 33.375 | 83 | 0.712859 | 332 | 2,403 | 4.894578 | 0.174699 | 0.115077 | 0.072 | 0.107692 | 0.844923 | 0.843077 | 0.811077 | 0.791385 | 0.747077 | 0.747077 | 0 | 0.010304 | 0.151893 | 2,403 | 71 | 84 | 33.84507 | 0.787046 | 0.028714 | 0 | 0.622642 | 0 | 0 | 0.098199 | 0 | 0 | 0 | 0 | 0 | 0.301887 | 1 | 0.132075 | false | 0 | 0.018868 | 0 | 0.150943 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
457fa6e6e26aed3375e9a66a485edd56f5899a4b | 104 | py | Python | pyramid_storage/interfaces.py | timgates42/pyramid_storage | a45e54beb52957e1e9d0177ad5a225815df827c1 | [
"BSD-3-Clause"
] | 14 | 2015-02-13T03:12:56.000Z | 2020-07-21T21:53:05.000Z | pyramid_storage/interfaces.py | timgates42/pyramid_storage | a45e54beb52957e1e9d0177ad5a225815df827c1 | [
"BSD-3-Clause"
] | 26 | 2015-05-16T17:45:09.000Z | 2021-04-21T12:14:38.000Z | pyramid_storage/interfaces.py | timgates42/pyramid_storage | a45e54beb52957e1e9d0177ad5a225815df827c1 | [
"BSD-3-Clause"
] | 8 | 2015-11-11T00:31:30.000Z | 2020-12-31T20:34:18.000Z | # -*- coding: utf-8 -*-
from zope.interface import Interface
class IFileStorage(Interface):
pass
| 13 | 36 | 0.692308 | 12 | 104 | 6 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011765 | 0.182692 | 104 | 7 | 37 | 14.857143 | 0.835294 | 0.201923 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
45826142911a4cf072fdfde214f6519b37e70bc7 | 14,035 | py | Python | merge.py | walker-liu/- | 3e5fde10e925cf75386c6b61012de495ea9cb945 | [
"Apache-2.0"
] | null | null | null | merge.py | walker-liu/- | 3e5fde10e925cf75386c6b61012de495ea9cb945 | [
"Apache-2.0"
] | null | null | null | merge.py | walker-liu/- | 3e5fde10e925cf75386c6b61012de495ea9cb945 | [
"Apache-2.0"
] | null | null | null | #encoding:utf-8
import xlrd
import xlsxwriter
import xlwt
'''First, read the file to be processed. There are two source files; this one holds the monosemous words.'''
def readfirst(src1):
data=xlrd.open_workbook(src1)
table=data.sheets()[2]
nrows=table.nrows
ncols=table.ncols
#Build a dict to hold the file contents: the word is the key, and the word's detail rows are the value
src1_dict={}
for i in range(1,nrows):
if table.cell(i,4).value not in src1_dict:
src1_dict[table.cell(i,4).value]=[]
src1_dict[table.cell(i,4).value].append(table.row_values(i)[4:7]+[table.cell(i,13).value])
else:
src1_dict[table.cell(i,4).value].append(table.row_values(i)[4:7]+[table.cell(i,13).value])
return src1_dict
'''Next, read the second file to be processed.'''
def readsecond(src2):
data=xlrd.open_workbook(src2)
table=data.sheets()[0]
nrows=table.nrows
ncols=table.ncols
src2_dict={}
word_list=[]
s0='0'
s1='1'
s2='2'
for i in range(3,nrows):
if table.cell(i,1).value:
#print table.cell(i,1).value
tmp_word=table.cell(i,1).value #word
word_list.append(tmp_word)
tmp_pinyin=table.cell(i,2).value #pinyin
tmp_bianma=table.cell(i,3).value #code
tmp_shiyi=table.cell(i,4).value #definition
tmp_shili=table.cell(i,5).value #example
src2_dict[table.cell(i,1).value]={}
src2_dict[table.cell(i,1).value][s0]=[]
src2_dict[table.cell(i,1).value][s1]=[]
src2_dict[table.cell(i,1).value][s2]=[]
src2_dict[table.cell(i,1).value][s0].append(tmp_word)
src2_dict[table.cell(i,1).value][s0].append(tmp_pinyin)
src2_dict[table.cell(i,1).value][s0].append(tmp_bianma)
src2_dict[table.cell(i,1).value][s0].append(tmp_shiyi)
src2_dict[table.cell(i,1).value][s0].append(tmp_shili)
if table.cell(i,6).value:
src2_dict[table.cell(i,1).value][s1].append(table.row_values(i)[6:])
elif table.cell(i,12).value:
src2_dict[table.cell(i,1).value][s2].append(table.row_values(i)[6:])
else:
if table.cell(i,6).value:
src2_dict[tmp_word][s1].append(table.row_values(i)[6:])
elif table.cell(i,12):
src2_dict[tmp_word][s2].append(table.row_values(i)[6:])
return src2_dict,word_list
# Sample annotated sentence parsed by preprocess(): [%当事 离 2017年 1月 20日 正式 宣誓 %] [# 上任 #] [%内容 还有 2 个 月 的 时间 %] ,
def preprocess(sentence):
buff=[]
s_dict={}
if '[%' not in sentence:
return buff
else:
tmp=sentence.split('%]')
for w in tmp:
if '[#' in w:
if '[%' in w:
if w.index('[#')<w.index('[%'):
buff.append('pred')
ind1=w.index('[%')
s1=w[ind1+2:]
ind2=s1.index(' ')
s2=s1[:ind2]
buff.append(s2)
s_dict[s2]=s1[ind2+1:]
else:
buff.append('pred')
else:
if '[%' in w:
ind1=w.index('[%')
s1=w[ind1+2:]
ind2=s1.index(' ')
s2=s1[:ind2]
buff.append(s2)
s_dict[s2]=s1[ind2+1:]
else:
pass
return buff,s_dict
#Main processing routine
def process(src1_xls,src2_xls,res_xls):
s0='0' #shared leading fields
s1='1' #basic semantic frame structure
s2='2' #extended semantic frame structure
#wb=xlsxwriter.Workbook(res_xlsx) # create the workbook
#ws=wb.add_worksheet('sheet1')
wt = xlwt.Workbook()
ws=wt.add_sheet('Sum')
ws1=wt.add_sheet('First')
num_1=3
num2_1=0
num_word_1=0
ws2=wt.add_sheet('Second')
num_2=3
num2_2=0
num_word_2=0
ws3=wt.add_sheet('Third')
num_3=3
num2_3=0
num_word_3=0
src1_dict=readfirst(src1_xls)
src2_dict,word_list=readsecond(src2_xls)
num=3 #total count
num2=0
num_word=0
# Chinese semantic role labels; kept verbatim because they are matched against cell contents
sent=[u'施事',u'同事',u'当事',u'接事 ',u'受事',
u'系事',u'与事',u'结果',u'对象',u'内容',
u'工具',u'材料',u'方式',u'原因',u'目的',
u'事量',u'空间 ',u'时间',u'范围',u'起点',
u'终点',u'路径',u'方向',u'处所',u'起始',
u'结束',u'时点',u'时段']
flag=True
flag_s1=True
flag_s2=True
f_w=open(u'例句统计信息.txt','w') # per-word example-sentence statistics
f_w.write((u'编码'+'\t'+u'词语'+'\t'+u'合并前'+'\t'+u'合并后'+'\n').encode('utf-8'))
#for word in src2_dict:
for word in word_list:
num_word+=1
sents=src2_dict[word]
for sent1 in sents[s1]:
if u'是' in sent1[5]:
flag_s1=False
break
for sent2 in sents[s2]:
if u'是' in sent2[11]:
flag_s2=False
break
if word in src1_dict:
sentences=src1_dict[word]
f_w.write((str(num_word)+'\t'+word+'\t'+str(len(src2_dict[word][s1])+len(src2_dict[word][s2]))+'\t').encode('utf-8'))
for sentence in sentences:
buff,s_dict=preprocess(sentence[-1])
for w in buff:
if w in sent[10:]:
flag=False
break
if flag==False:
liju=sentence[1]+':'+sentence[-1]
laiyuan='new'
is_type=''
struct=''.join(['['+w+']' for w in buff])
type_struct=''
is_type_struct=''
ibuff=[' ']*28
for key in s_dict:
if key in sent:
ibuff[sent.index(key)]=s_dict[key]
obuff=[' ']*6
obuff.append(liju)
obuff.append(laiyuan)
obuff.append(is_type)
obuff.append(struct)
obuff.append(type_struct)
obuff.append(is_type_struct)
obuff+=ibuff
src2_dict[word][s2].append(obuff)
flag=True
else:
liju=sentence[1]+':'+sentence[-1]
laiyuan='new'
is_type=''
struct=''.join(['['+w+']' for w in buff])
type_struct=''
is_type_struct=''
ibuff=[' ']*28
for key in s_dict:
if key in sent:
ibuff[sent.index(key)]=s_dict[key]
obuff=[]
obuff.append(liju)
obuff.append(laiyuan)
obuff.append(is_type)
obuff.append(struct)
obuff.append(type_struct)
obuff.append(is_type_struct)
obuff+=[' ']*6
obuff+=ibuff
src2_dict[word][s1].append(obuff)
if len(src2_dict[word][s1])+len(src2_dict[word][s2])<3:
num_word_1+=1
if len(src2_dict[word][s1])>0:
for w in src2_dict[word][s1]:
for i,w_i in enumerate(w):
#print len(w), word,i+6,w_i,num+num2
ws1.write(num_1+num2_1,i+6,w_i)
num2_1+=1
if len(src2_dict[word][s2])>0:
for w in src2_dict[word][s2]:
for i,w_i in enumerate(w):
ws1.write(num_1+num2_1,i+6,w_i)
num2_1+=1
if num2_1!=0:
ws1.write_merge(num_1,num_1+num2_1-1,0,0,num_word_1)
for j,w in enumerate(src2_dict[word][s0]):
ws1.write_merge(num_1,num_1+num2_1-1,j+1,j+1,w)
else:
ws1.write_merge(num_1,num_1+num2_1,0,0,num_word_1)
for j,w in enumerate(src2_dict[word][s0]):
ws1.write_merge(num_1,num_1+num2_1,j+1,j+1,w)
num2_1+=1
num_1+=num2_1
num2_1=0
flag_s1=True
flag_s2=True
elif not flag_s1 and not flag_s2:
num_word_2+=1
if len(src2_dict[word][s1])>0:
for w in src2_dict[word][s1]:
for i,w_i in enumerate(w):
#print len(w), word,i+6,w_i,num+num2
ws2.write(num_2+num2_2,i+6,w_i)
num2_2+=1
if len(src2_dict[word][s2])>0:
for w in src2_dict[word][s2]:
for i,w_i in enumerate(w):
ws2.write(num_2+num2_2,i+6,w_i)
num2_2+=1
if num2_2!=0:
ws2.write_merge(num_2,num_2+num2_2-1,0,0,num_word_2)
for j,w in enumerate(src2_dict[word][s0]):
ws2.write_merge(num_2,num_2+num2_2-1,j+1,j+1,w)
else:
ws2.write_merge(num_2,num_2+num2_2,0,0,num_word_2)
for j,w in enumerate(src2_dict[word][s0]):
ws2.write_merge(num_2,num_2+num2_2,j+1,j+1,w)
num2_2+=1
num_2+=num2_2
num2_2=0
flag_s1=True
flag_s2=True
else:
num_word_3+=1
if len(src2_dict[word][s1])>0:
for w in src2_dict[word][s1]:
for i,w_i in enumerate(w):
#print len(w), word,i+6,w_i,num+num2
ws3.write(num_3+num2_3,i+6,w_i)
num2_3+=1
if len(src2_dict[word][s2])>0:
for w in src2_dict[word][s2]:
for i,w_i in enumerate(w):
ws3.write(num_3+num2_3,i+6,w_i)
num2_3+=1
if num2_3!=0:
ws3.write_merge(num_3,num_3+num2_3-1,0,0,num_word_3)
for j,w in enumerate(src2_dict[word][s0]):
ws3.write_merge(num_3,num_3+num2_3-1,j+1,j+1,w)
else:
ws3.write_merge(num_3,num_3+num2_3,0,0,num_word_3)
for j,w in enumerate(src2_dict[word][s0]):
ws3.write_merge(num_3,num_3+num2_3,j+1,j+1,w)
num2_3+=1
num_3+=num2_3
num2_3=0
flag_s1=True
flag_s2=True
if len(src2_dict[word][s1])>0:
for w in src2_dict[word][s1]:
for i,w_i in enumerate(w):
#print len(w), word,i+6,w_i,num+num2
ws.write(num+num2,i+6,w_i)
num2+=1
if len(src2_dict[word][s2])>0:
for w in src2_dict[word][s2]:
for i,w_i in enumerate(w):
ws.write(num+num2,i+6,w_i)
num2+=1
if num2!=0:
ws.write_merge(num,num+num2-1,0,0,num_word)
for j,w in enumerate(src2_dict[word][s0]):
ws.write_merge(num,num+num2-1,j+1,j+1,w)
else:
ws.write_merge(num,num+num2,0,0,num_word)
for j,w in enumerate(src2_dict[word][s0]):
ws.write_merge(num,num+num2,j+1,j+1,w)
num2+=1
num+=num2
num2=0
f_w.write((str(len(src2_dict[word][s1])+len(src2_dict[word][s2]))+'\n').encode('utf-8'))
#print num
else:
f_w.write((str(num_word)+'\t'+word+'\t'+str(len(src2_dict[word][s1])+len(src2_dict[word][s2]))+'\t'+str(len(src2_dict[word][s1])+len(src2_dict[word][s2]))+'\n').encode('utf-8'))
if len(src2_dict[word][s1])>0:
for w in src2_dict[word][s1]:
for i,w_i in enumerate(w):
ws.write(num+num2,i+6,w_i)
num2+=1
if len(src2_dict[word][s2])>0:
for w in src2_dict[word][s2]:
for i,w_i in enumerate(w):
ws.write(num+num2,i+6,w_i)
num2+=1
if num2!=0:
ws.write_merge(num,num+num2-1,0,0,num_word)
for j,w in enumerate(src2_dict[word][s0]):
ws.write_merge(num,num+num2-1,j+1,j+1,w)
else:
ws.write_merge(num,num+num2,0,0,num_word)
for j,w in enumerate(src2_dict[word][s0]):
ws.write_merge(num,num+num2,j+1,j+1,w)
num2+=1
num+=num2
num2=0
pass
if len(src2_dict[word][s1])+len(src2_dict[word][s2])<3:
num_word_1+=1
if len(src2_dict[word][s1])>0:
for w in src2_dict[word][s1]:
for i,w_i in enumerate(w):
#print len(w), word,i+6,w_i,num+num2
ws1.write(num_1+num2_1,i+6,w_i)
num2_1+=1
if len(src2_dict[word][s2])>0:
for w in src2_dict[word][s2]:
for i,w_i in enumerate(w):
ws1.write(num_1+num2_1,i+6,w_i)
num2_1+=1
if num2_1!=0:
ws1.write_merge(num_1,num_1+num2_1-1,0,0,num_word_1)
for j,w in enumerate(src2_dict[word][s0]):
ws1.write_merge(num_1,num_1+num2_1-1,j+1,j+1,w)
else:
ws1.write_merge(num_1,num_1+num2_1,0,0,num_word_1)
for j,w in enumerate(src2_dict[word][s0]):
ws1.write_merge(num_1,num_1+num2_1,j+1,j+1,w)
num2_1+=1
num_1+=num2_1
num2_1=0
flag_s1=True
flag_s2=True
elif not flag_s1 and not flag_s2:
num_word_2+=1
if len(src2_dict[word][s1])>0:
for w in src2_dict[word][s1]:
for i,w_i in enumerate(w):
#print len(w), word,i+6,w_i,num+num2
ws2.write(num_2+num2_2,i+6,w_i)
num2_2+=1
if len(src2_dict[word][s2])>0:
for w in src2_dict[word][s2]:
for i,w_i in enumerate(w):
ws2.write(num_2+num2_2,i+6,w_i)
num2_2+=1
if num2_2!=0:
ws2.write_merge(num_2,num_2+num2_2-1,0,0,num_word_2)
for j,w in enumerate(src2_dict[word][s0]):
ws2.write_merge(num_2,num_2+num2_2-1,j+1,j+1,w)
else:
ws2.write_merge(num_2,num_2+num2_2,0,0,num_word_2)
for j,w in enumerate(src2_dict[word][s0]):
ws2.write_merge(num_2,num_2+num2_2,j+1,j+1,w)
num2_2+=1
num_2+=num2_2
num2_2=0
flag_s1=True
flag_s2=True
else:
num_word_3+=1
if len(src2_dict[word][s1])>0:
for w in src2_dict[word][s1]:
for i,w_i in enumerate(w):
#print len(w), word,i+6,w_i,num+num2
ws3.write(num_3+num2_3,i+6,w_i)
num2_3+=1
if len(src2_dict[word][s2])>0:
for w in src2_dict[word][s2]:
for i,w_i in enumerate(w):
ws3.write(num_3+num2_3,i+6,w_i)
num2_3+=1
if num2_3!=0:
ws3.write_merge(num_3,num_3+num2_3-1,0,0,num_word_3)
for j,w in enumerate(src2_dict[word][s0]):
ws3.write_merge(num_3,num_3+num2_3-1,j+1,j+1,w)
else:
ws3.write_merge(num_3,num_3+num2_3,0,0,num_word_3)
for j,w in enumerate(src2_dict[word][s0]):
ws3.write_merge(num_3,num_3+num2_3,j+1,j+1,w)
num2_3+=1
num_3+=num2_3
num2_3=0
flag_s1=True
flag_s2=True
#wb.close()
wt.save(res_xls)
f_w.close()
if __name__=='__main__':
import sys
process(sys.argv[1].decode('utf-8'),sys.argv[2].decode('utf-8'),sys.argv[3].decode('utf-8'))
#process(u'语料筛选处理结果.xls',u'results(单义词_二校 一万句)_已选定例句 及框架20170911.xls',u'sum.xls')
| 33.101415 | 189 | 0.536658 | 2,408 | 14,035 | 2.939784 | 0.081811 | 0.090408 | 0.11188 | 0.05933 | 0.759006 | 0.735415 | 0.724114 | 0.711541 | 0.685831 | 0.685831 | 0 | 0.083649 | 0.304952 | 14,035 | 423 | 190 | 33.179669 | 0.64203 | 0.044745 | 0 | 0.727034 | 0 | 0 | 0.01644 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.005249 | 0.010499 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
458c692003e7f6e5e14c7cb2ca8117bdec035a0c | 168 | py | Python | lang/py/src/avro/tether/__init__.py | liuyu81/avro | 66212c498bea5dfd6ec990528870535547bc9623 | [
"Apache-2.0"
] | 21 | 2015-09-11T19:52:23.000Z | 2021-12-17T02:50:47.000Z | lang/py/src/avro/tether/__init__.py | liuyu81/avro | 66212c498bea5dfd6ec990528870535547bc9623 | [
"Apache-2.0"
] | 10 | 2017-08-28T13:53:41.000Z | 2022-03-30T18:14:02.000Z | lang/py/src/avro/tether/__init__.py | liuyu81/avro | 66212c498bea5dfd6ec990528870535547bc9623 | [
"Apache-2.0"
] | 16 | 2015-12-21T10:30:24.000Z | 2022-02-27T13:19:43.000Z | from .util import *
from .tether_task import *
from .tether_task_runner import *
__all__=util.__all__
__all__+=tether_task.__all__
__all__+=tether_task_runner.__all__
| 21 | 35 | 0.821429 | 24 | 168 | 4.5 | 0.291667 | 0.37037 | 0.296296 | 0.37037 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.095238 | 168 | 7 | 36 | 24 | 0.710526 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
45b3f3e30e2100bb0bd881de90858b5639f946f4 | 463 | py | Python | base_models/noop_features.py | apardyl/ProtoPNet | b2bbd7284bfc84a37385c0e975408c68cdf64205 | [
"MIT"
] | 1 | 2021-03-20T13:57:03.000Z | 2021-03-20T13:57:03.000Z | base_models/noop_features.py | apardyl/ProtoPNet | b2bbd7284bfc84a37385c0e975408c68cdf64205 | [
"MIT"
] | null | null | null | base_models/noop_features.py | apardyl/ProtoPNet | b2bbd7284bfc84a37385c0e975408c68cdf64205 | [
"MIT"
] | null | null | null | import torch.nn as nn
class NoopModel(nn.Module):
def forward(self, x):
return x
def conv_info(self):
# resnet18
return ([7, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3],
[2, 2, 1, 1, 1, 1, 2, 1, 1, 1, 2, 1, 1, 1, 2, 1, 1, 1],
[3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1])
def noop_features(pretrained=False, batch_norm=True, **kwargs):
model = NoopModel()
return model
| 25.722222 | 71 | 0.477322 | 88 | 463 | 2.477273 | 0.318182 | 0.229358 | 0.275229 | 0.275229 | 0.224771 | 0.224771 | 0.224771 | 0.224771 | 0.224771 | 0.224771 | 0 | 0.18241 | 0.336933 | 463 | 17 | 72 | 27.235294 | 0.527687 | 0.017279 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.272727 | false | 0 | 0.090909 | 0.181818 | 0.727273 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
b306c871eacf0f8a31c2651d3cea768a3568ba0c | 17 | py | Python | Sorter.py | LittleEndu/Codeforces | 82c49b10702c58bc5ce062801d740a2f5f600062 | [
"MIT"
] | null | null | null | Sorter.py | LittleEndu/Codeforces | 82c49b10702c58bc5ce062801d740a2f5f600062 | [
"MIT"
] | null | null | null | Sorter.py | LittleEndu/Codeforces | 82c49b10702c58bc5ce062801d740a2f5f600062 | [
"MIT"
] | null | null | null | # TODO: Implement | 17 | 17 | 0.764706 | 2 | 17 | 6.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 17 | 1 | 17 | 17 | 0.866667 | 0.882353 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 1 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b32f254a6e31060eb6fa6c56e46ffc49b2848768 | 258 | py | Python | certifico/handlers/index.py | pantuza/certifico | 1fc6bbaf6a8ae68e9f64a2d3515ba049630c58eb | [
"MIT"
] | null | null | null | certifico/handlers/index.py | pantuza/certifico | 1fc6bbaf6a8ae68e9f64a2d3515ba049630c58eb | [
"MIT"
] | null | null | null | certifico/handlers/index.py | pantuza/certifico | 1fc6bbaf6a8ae68e9f64a2d3515ba049630c58eb | [
"MIT"
] | 2 | 2018-09-27T06:19:28.000Z | 2019-07-15T15:04:46.000Z | from flask import render_template
from certifico import app
from certifico.forms import CertificateForm
def index():
return render_template('index.html', form=CertificateForm(),
analytics=app.config.get('GOOGLE_ANALYTICS'))
| 25.8 | 72 | 0.717054 | 29 | 258 | 6.275862 | 0.62069 | 0.153846 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.20155 | 258 | 9 | 73 | 28.666667 | 0.883495 | 0 | 0 | 0 | 0 | 0 | 0.100775 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | true | 0 | 0.5 | 0.166667 | 0.833333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.