hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
31170262491d77a96f516039b854d0b5acf22f0c | 390 | py | Python | chainerex/dataset/indexers/__init__.py | corochann/chainerex | 15efb34a8fa6afab1ce5ad52c3802960ab6d49c2 | [
"MIT"
] | 1 | 2018-08-30T08:59:50.000Z | 2018-08-30T08:59:50.000Z | chainerex/dataset/indexers/__init__.py | corochann/chainerex | 15efb34a8fa6afab1ce5ad52c3802960ab6d49c2 | [
"MIT"
] | null | null | null | chainerex/dataset/indexers/__init__.py | corochann/chainerex | 15efb34a8fa6afab1ce5ad52c3802960ab6d49c2 | [
"MIT"
] | null | null | null | from chainerex.dataset.indexers.indexer import BaseIndexer # NOQA
from chainerex.dataset.indexers.indexer import ExtractBySliceNotSupportedError  # NOQA
from chainerex.dataset.indexers.indexer import seIndexer  # NOQA
from chainerex.dataset.indexers.feature_indexer import BaseFeatureIndexer # NOQA
from chainerex.dataset.indexers.feature_indexer import install_features_indexer # NOQA
| 78 | 152 | 0.85641 | 44 | 390 | 7.5 | 0.318182 | 0.19697 | 0.30303 | 0.424242 | 0.712121 | 0.712121 | 0.587879 | 0.315152 | 0 | 0 | 0 | 0 | 0.092308 | 390 | 4 | 153 | 97.5 | 0.932203 | 0.215385 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
31d8667dc358a9e552dbd6a20f61be7180b7c834 | 164 | py | Python | tests/test_environments.py | kaixinbaba/reinforch | 10a8c21054cbfa03e059e2e19bee7c257faab4bf | [
"MIT"
] | 3 | 2019-04-18T22:20:25.000Z | 2019-04-19T04:51:53.000Z | tests/test_environments.py | kaixinbaba/reinforch | 10a8c21054cbfa03e059e2e19bee7c257faab4bf | [
"MIT"
] | null | null | null | tests/test_environments.py | kaixinbaba/reinforch | 10a8c21054cbfa03e059e2e19bee7c257faab4bf | [
"MIT"
] | null | null | null | from reinforch.environments import OpenAIGym
from reinforch.environments import GymEnv
from reinforch.environments import Environment
def test_import():
pass
| 20.5 | 46 | 0.835366 | 19 | 164 | 7.157895 | 0.526316 | 0.286765 | 0.551471 | 0.683824 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.128049 | 164 | 7 | 47 | 23.428571 | 0.951049 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0.2 | 0.8 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 9 |
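The test module above only verifies that the environment classes import without error (`test_import` is an empty pass). A self-contained sketch of the same smoke-test pattern using `importlib` — the module names below are stdlib stand-ins, since `reinforch` itself may not be installed:

```python
import importlib

def smoke_test_imports(module_names):
    """Try to import each module; return a dict of failures instead of passing silently."""
    failures = {}
    for name in module_names:
        try:
            importlib.import_module(name)
        except ImportError as exc:
            failures[name] = str(exc)
    return failures

# Stand-in modules; in the test above these would be the reinforch.environments submodules.
print(smoke_test_imports(["json", "math", "no_such_module_xyz"]))
```

Reporting failures per module gives more useful output than a bare `import` statement that aborts on the first error.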
9edea724b72478094155376140a63393b0dcb41a | 3,171 | py | Python | qvantum/check_qubit.py | vorpex/qvantum | 07e9f749e0a6c9b2ffdfa3e562ca52f23bd4d2a8 | [
"MIT"
] | 1 | 2019-05-13T06:28:25.000Z | 2019-05-13T06:28:25.000Z | qvantum/check_qubit.py | vorpex/qvantum | 07e9f749e0a6c9b2ffdfa3e562ca52f23bd4d2a8 | [
"MIT"
] | null | null | null | qvantum/check_qubit.py | vorpex/qvantum | 07e9f749e0a6c9b2ffdfa3e562ca52f23bd4d2a8 | [
"MIT"
] | null | null | null | '''checking functions for qubit class'''
# pylint: disable=E1101, W1401
def qubit_init_check(function):
"""Decorator to check the arguments of initialization function in qubit class.
Arguments:
function {} -- The tested function
"""
def wrapper(self, alpha, beta):
"""Method to initialize an instance of the qubit class. The squared sum of alpha and beta
must be equal to zero otherwise a ValueError will be thrown.
Arguments:
alpha {int, float, complex} -- Amplitude or probability of being in state 0
beta {int, float, complex} -- Amplitude or probability of being in state 1
Raises:
ValueError, TypeError
Examples:
>>> import math
>>> import qvantum
>>>
>>> q = qvantum.Qubit(1, 0)
>>> q.show()
'|Ψ> = (1.0000+0.0000i)|0> + (0.0000+0.0000i)|1>'
>>> qvantum.Qubit(1 / math.sqrt(2), 1 / math.sqrt(2)).show()
'|Ψ> = (0.7071+0.0000i)|0> + (0.7071+0.0000i)|1>'
"""
if all(isinstance(elem, (int, float, complex)) for elem in [alpha, beta]):
if round(abs(alpha) ** 2 + abs(beta) ** 2 - 1, 10) == 0:
return function(self, alpha, beta)
else:
raise ValueError('Invalid input! Alpha and beta must satisfy: ' +\
'|alpha|\u00b2 + |beta|\u00b2 = 1.')
else:
raise TypeError('Invalid input! Alpha and beta must be integer, float or complex.')
return wrapper
def set_amplitudes_check(function):
"""Decorator to check the arguments of setting new amplitudes function in qubit class.
Arguments:
function {} -- The tested function
"""
def wrapper(self, alpha, beta):
"""Setter method to replace the old coefficients to new ones. The squared sum of alpha and
beta must be equal to zero otherwise a ValueError will be thrown.
Arguments:
alpha {int, float, complex} -- Amplitude or probability of being in state 0
beta {int, float, complex} -- Amplitude or probability of being in state 1
Raises:
ValueError, TypeError
Examples:
>>> import math
>>> import qvantum
>>>
>>> q = qvantum.Qubit(1, 0)
>>> q.show()
'|Ψ> = (1.0000+0.0000i)|0> + (0.0000+0.0000i)|1>'
>>> q.set_amplitudes(0, 1)
>>> q.show()
'|Ψ> = (0.0000+0.0000i)|0> + (1.0000+0.0000i)|1>'
"""
if all(isinstance(elem, (int, float, complex)) for elem in [alpha, beta]):
if round(abs(alpha) ** 2 + abs(beta) ** 2 - 1, 10) == 0:
return function(self, alpha, beta)
else:
raise ValueError('Invalid input! Alpha and beta must satisfy: ' +\
'|alpha|\u00b2 + |beta|\u00b2 = 1.')
else:
raise TypeError('Invalid input! Alpha and beta must be integer, float or complex.')
return wrapper
| 35.629213 | 99 | 0.530432 | 379 | 3,171 | 4.427441 | 0.237467 | 0.028605 | 0.042908 | 0.057211 | 0.821216 | 0.821216 | 0.821216 | 0.821216 | 0.769964 | 0.769964 | 0 | 0.066019 | 0.350363 | 3,171 | 88 | 100 | 36.034091 | 0.748058 | 0.00883 | 0 | 0.909091 | 0 | 0 | 0.234414 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
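Both decorators in `check_qubit.py` enforce the same normalization constraint, |alpha|² + |beta|² = 1, up to 10 decimal places. A minimal self-contained sketch of that check (the function name and tolerance parameter are hypothetical, not part of the qvantum API):

```python
import math

def check_normalized(alpha, beta, tol=1e-10):
    """True when |alpha|^2 + |beta|^2 equals 1 within tol, as the decorators require."""
    if not all(isinstance(x, (int, float, complex)) for x in (alpha, beta)):
        raise TypeError("alpha and beta must be int, float or complex")
    return abs(abs(alpha) ** 2 + abs(beta) ** 2 - 1) < tol

print(check_normalized(1 / math.sqrt(2), 1 / math.sqrt(2)))  # True
print(check_normalized(1, 1))                                # False
```

The tolerance matters because amplitudes like 1/√2 only satisfy the constraint up to floating-point rounding, which is why the original code rounds to 10 places rather than comparing exactly.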
7308303e1e3ab8f1da465beca0cbdffb341480ed | 8,199 | py | Python | pionic/antlr/IonTextListener.py | tlocke/pion | c89f2200727ef5647889e94b185406bc503c1924 | [
"MIT"
] | 11 | 2017-07-23T23:14:42.000Z | 2017-08-11T12:44:14.000Z | pionic/antlr/IonTextListener.py | tlocke/pion | c89f2200727ef5647889e94b185406bc503c1924 | [
"MIT"
] | null | null | null | pionic/antlr/IonTextListener.py | tlocke/pion | c89f2200727ef5647889e94b185406bc503c1924 | [
"MIT"
] | null | null | null | # Generated from IonText.g4 by ANTLR 4.7
from antlr4 import *
if __name__ is not None and "." in __name__:
from .IonTextParser import IonTextParser
else:
from IonTextParser import IonTextParser
# This class defines a complete listener for a parse tree produced by IonTextParser.
class IonTextListener(ParseTreeListener):
# Enter a parse tree produced by IonTextParser#top_level.
def enterTop_level(self, ctx:IonTextParser.Top_levelContext):
pass
# Exit a parse tree produced by IonTextParser#top_level.
def exitTop_level(self, ctx:IonTextParser.Top_levelContext):
pass
# Enter a parse tree produced by IonTextParser#top_level_value.
def enterTop_level_value(self, ctx:IonTextParser.Top_level_valueContext):
pass
# Exit a parse tree produced by IonTextParser#top_level_value.
def exitTop_level_value(self, ctx:IonTextParser.Top_level_valueContext):
pass
# Enter a parse tree produced by IonTextParser#value.
def enterValue(self, ctx:IonTextParser.ValueContext):
pass
# Exit a parse tree produced by IonTextParser#value.
def exitValue(self, ctx:IonTextParser.ValueContext):
pass
# Enter a parse tree produced by IonTextParser#entity.
def enterEntity(self, ctx:IonTextParser.EntityContext):
pass
# Exit a parse tree produced by IonTextParser#entity.
def exitEntity(self, ctx:IonTextParser.EntityContext):
pass
# Enter a parse tree produced by IonTextParser#delimiting_entity.
def enterDelimiting_entity(self, ctx:IonTextParser.Delimiting_entityContext):
pass
# Exit a parse tree produced by IonTextParser#delimiting_entity.
def exitDelimiting_entity(self, ctx:IonTextParser.Delimiting_entityContext):
pass
# Enter a parse tree produced by IonTextParser#keyword_delimiting_entity.
def enterKeyword_delimiting_entity(self, ctx:IonTextParser.Keyword_delimiting_entityContext):
pass
# Exit a parse tree produced by IonTextParser#keyword_delimiting_entity.
def exitKeyword_delimiting_entity(self, ctx:IonTextParser.Keyword_delimiting_entityContext):
pass
# Enter a parse tree produced by IonTextParser#keyword_entity.
def enterKeyword_entity(self, ctx:IonTextParser.Keyword_entityContext):
pass
# Exit a parse tree produced by IonTextParser#keyword_entity.
def exitKeyword_entity(self, ctx:IonTextParser.Keyword_entityContext):
pass
# Enter a parse tree produced by IonTextParser#numeric_entity.
def enterNumeric_entity(self, ctx:IonTextParser.Numeric_entityContext):
pass
# Exit a parse tree produced by IonTextParser#numeric_entity.
def exitNumeric_entity(self, ctx:IonTextParser.Numeric_entityContext):
pass
# Enter a parse tree produced by IonTextParser#annotation.
def enterAnnotation(self, ctx:IonTextParser.AnnotationContext):
pass
# Exit a parse tree produced by IonTextParser#annotation.
def exitAnnotation(self, ctx:IonTextParser.AnnotationContext):
pass
# Enter a parse tree produced by IonTextParser#quoted_annotation.
def enterQuoted_annotation(self, ctx:IonTextParser.Quoted_annotationContext):
pass
# Exit a parse tree produced by IonTextParser#quoted_annotation.
def exitQuoted_annotation(self, ctx:IonTextParser.Quoted_annotationContext):
pass
# Enter a parse tree produced by IonTextParser#list_type.
def enterList_type(self, ctx:IonTextParser.List_typeContext):
pass
# Exit a parse tree produced by IonTextParser#list_type.
def exitList_type(self, ctx:IonTextParser.List_typeContext):
pass
# Enter a parse tree produced by IonTextParser#sexp.
def enterSexp(self, ctx:IonTextParser.SexpContext):
pass
# Exit a parse tree produced by IonTextParser#sexp.
def exitSexp(self, ctx:IonTextParser.SexpContext):
pass
# Enter a parse tree produced by IonTextParser#sexp_value.
def enterSexp_value(self, ctx:IonTextParser.Sexp_valueContext):
pass
# Exit a parse tree produced by IonTextParser#sexp_value.
def exitSexp_value(self, ctx:IonTextParser.Sexp_valueContext):
pass
# Enter a parse tree produced by IonTextParser#sexp_delimiting_entity.
def enterSexp_delimiting_entity(self, ctx:IonTextParser.Sexp_delimiting_entityContext):
pass
# Exit a parse tree produced by IonTextParser#sexp_delimiting_entity.
def exitSexp_delimiting_entity(self, ctx:IonTextParser.Sexp_delimiting_entityContext):
pass
# Enter a parse tree produced by IonTextParser#sexp_keyword_delimiting_entity.
def enterSexp_keyword_delimiting_entity(self, ctx:IonTextParser.Sexp_keyword_delimiting_entityContext):
pass
# Exit a parse tree produced by IonTextParser#sexp_keyword_delimiting_entity.
def exitSexp_keyword_delimiting_entity(self, ctx:IonTextParser.Sexp_keyword_delimiting_entityContext):
pass
# Enter a parse tree produced by IonTextParser#sexp_null_delimiting_entity.
def enterSexp_null_delimiting_entity(self, ctx:IonTextParser.Sexp_null_delimiting_entityContext):
pass
# Exit a parse tree produced by IonTextParser#sexp_null_delimiting_entity.
def exitSexp_null_delimiting_entity(self, ctx:IonTextParser.Sexp_null_delimiting_entityContext):
pass
# Enter a parse tree produced by IonTextParser#sexp_keyword_entity.
def enterSexp_keyword_entity(self, ctx:IonTextParser.Sexp_keyword_entityContext):
pass
# Exit a parse tree produced by IonTextParser#sexp_keyword_entity.
def exitSexp_keyword_entity(self, ctx:IonTextParser.Sexp_keyword_entityContext):
pass
# Enter a parse tree produced by IonTextParser#operator.
def enterOperator(self, ctx:IonTextParser.OperatorContext):
pass
# Exit a parse tree produced by IonTextParser#operator.
def exitOperator(self, ctx:IonTextParser.OperatorContext):
pass
# Enter a parse tree produced by IonTextParser#struct.
def enterStruct(self, ctx:IonTextParser.StructContext):
pass
# Exit a parse tree produced by IonTextParser#struct.
def exitStruct(self, ctx:IonTextParser.StructContext):
pass
# Enter a parse tree produced by IonTextParser#field.
def enterField(self, ctx:IonTextParser.FieldContext):
pass
# Exit a parse tree produced by IonTextParser#field.
def exitField(self, ctx:IonTextParser.FieldContext):
pass
# Enter a parse tree produced by IonTextParser#any_null.
def enterAny_null(self, ctx:IonTextParser.Any_nullContext):
pass
# Exit a parse tree produced by IonTextParser#any_null.
def exitAny_null(self, ctx:IonTextParser.Any_nullContext):
pass
# Enter a parse tree produced by IonTextParser#typed_null.
def enterTyped_null(self, ctx:IonTextParser.Typed_nullContext):
pass
# Exit a parse tree produced by IonTextParser#typed_null.
def exitTyped_null(self, ctx:IonTextParser.Typed_nullContext):
pass
# Enter a parse tree produced by IonTextParser#field_name.
def enterField_name(self, ctx:IonTextParser.Field_nameContext):
pass
# Exit a parse tree produced by IonTextParser#field_name.
def exitField_name(self, ctx:IonTextParser.Field_nameContext):
pass
# Enter a parse tree produced by IonTextParser#quoted_text.
def enterQuoted_text(self, ctx:IonTextParser.Quoted_textContext):
pass
# Exit a parse tree produced by IonTextParser#quoted_text.
def exitQuoted_text(self, ctx:IonTextParser.Quoted_textContext):
pass
# Enter a parse tree produced by IonTextParser#symbol.
def enterSymbol(self, ctx:IonTextParser.SymbolContext):
pass
# Exit a parse tree produced by IonTextParser#symbol.
def exitSymbol(self, ctx:IonTextParser.SymbolContext):
pass
# Enter a parse tree produced by IonTextParser#ws.
def enterWs(self, ctx:IonTextParser.WsContext):
pass
# Exit a parse tree produced by IonTextParser#ws.
def exitWs(self, ctx:IonTextParser.WsContext):
pass
| 33.465306 | 107 | 0.747286 | 974 | 8,199 | 6.128337 | 0.10883 | 0.053275 | 0.088792 | 0.159826 | 0.877701 | 0.79226 | 0.786731 | 0.664098 | 0.588541 | 0.261853 | 0 | 0.000605 | 0.193804 | 8,199 | 244 | 108 | 33.602459 | 0.902421 | 0.381144 | 0 | 0.472727 | 1 | 0 | 0.000201 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.472727 | false | 0.472727 | 0.027273 | 0 | 0.509091 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 7 |
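The generated `IonTextListener` only supplies empty enter/exit hooks; in the ANTLR Python runtime it would be driven by `ParseTreeWalker().walk(listener, tree)`, which fires `enterX` before a node's children and `exitX` after. A dependency-free sketch of that dispatch pattern (the `Node` and `Walker` classes below are stand-ins, not the antlr4 runtime):

```python
class Node:
    """A tiny stand-in for an ANTLR parse-tree context node."""
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

class Walker:
    """Depth-first walk: call enter<name> before children, exit<name> after,
    mirroring ParseTreeWalker's listener dispatch."""
    def walk(self, listener, node):
        getattr(listener, "enter" + node.name, lambda n: None)(node)
        for child in node.children:
            self.walk(listener, child)
        getattr(listener, "exit" + node.name, lambda n: None)(node)

class TraceListener:
    """Records which hooks fire; methods it lacks default to no-ops."""
    def __init__(self):
        self.events = []
    def enterValue(self, node):
        self.events.append("enterValue")
    def exitValue(self, node):
        self.events.append("exitValue")

listener = TraceListener()
Walker().walk(listener, Node("Value", [Node("Symbol")]))
print(listener.events)  # ['enterValue', 'exitValue']
```

This is why the generated class can leave every method as `pass`: subclasses override only the rules they care about, and the walker's lookup falls through harmlessly for the rest.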
731888fe8524792b44ccc9f8640c4c87cda62697 | 78 | py | Python | stage-1/mMath.py | Julianhm9612/seti-python-course | 6751aa46199784418f5d0d32b6128c71f22e8d4e | [
"MIT"
] | null | null | null | stage-1/mMath.py | Julianhm9612/seti-python-course | 6751aa46199784418f5d0d32b6128c71f22e8d4e | [
"MIT"
] | null | null | null | stage-1/mMath.py | Julianhm9612/seti-python-course | 6751aa46199784418f5d0d32b6128c71f22e8d4e | [
"MIT"
] | null | null | null | def sum(n1, n2):
print(n1 + n2)
def substract(n1, n2):
print(n1 - n2) | 15.6 | 22 | 0.564103 | 14 | 78 | 3.142857 | 0.428571 | 0.363636 | 0.409091 | 0.5 | 0.590909 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137931 | 0.25641 | 78 | 5 | 23 | 15.6 | 0.62069 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0 | 0.5 | 0.5 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 7 |
732bd78b16b6ac671a50b7c28b340ad0f014b96c | 10,215 | py | Python | tests/test_nonderivatives_extraction.py | rs-kellogg/edgar2data | 1c382e4ae36fa2ed7f240aa5b7d16dd154cbfb2b | [
"MIT"
] | null | null | null | tests/test_nonderivatives_extraction.py | rs-kellogg/edgar2data | 1c382e4ae36fa2ed7f240aa5b7d16dd154cbfb2b | [
"MIT"
] | null | null | null | tests/test_nonderivatives_extraction.py | rs-kellogg/edgar2data | 1c382e4ae36fa2ed7f240aa5b7d16dd154cbfb2b | [
"MIT"
] | 1 | 2021-02-16T13:43:18.000Z | 2021-02-16T13:43:18.000Z | """
Copyright (c) 2021 Northwestern University. All rights reserved.
This work is licensed under the terms of the MIT license.
For a copy, see <https://opensource.org/licenses/MIT>.
"""
import os
from conftest import validate
from edgar.forms.form3 import Form3
from edgar.forms.form4 import Form4
from edgar.forms.form5 import Form5
dir_path = os.path.dirname(os.path.realpath(__file__))
def test_extract_nonderivatives_form3_collection(test_form3_collection):
"""
Validate Form3 extraction code against a random sample of documents
:param test_form3_collection:
:return:
"""
for file in test_form3_collection.glob("*.txt"):
doc = Form3(file)
assert doc.filename == file.name
fields_list = doc.nonderivatives
assert len(fields_list) >= 0
for idx, fields in enumerate(fields_list):
assert (len(fields)) == 8
assert fields["filename"] == file.name
assert fields["accession_num"] == doc.accession_num
assert fields["order"] == f"{idx+1}"
assert fields["type"] == "nonDerivHolding"
assert fields["index"] == f"nonDerivHolding{idx+1}"
assert validate(file, fields["security_title"], r".+")
assert validate(
file, fields["shares_owned_following_transaction"], r"[\d\.]+"
)
assert validate(file, fields["direct_or_indirect_ownership"], r"[DI]")
def test_extract_signature_form3(test_form3):
"""
Validate Form3 extraction code against a single detailed example
:param test_form3:
:return:
"""
doc = Form3(test_form3)
assert doc.accession_num == "0001209191-20-054135"
assert doc.filename == test_form3.name
fields_list = doc.nonderivatives
assert len(fields_list) == 1
assert len(fields_list[0]) == 8
assert fields_list[0]["filename"] == test_form3.name
assert fields_list[0]["accession_num"] == doc.accession_num
assert fields_list[0]["order"] == "1"
assert fields_list[0]["type"] == "nonDerivHolding"
assert fields_list[0]["index"] == "nonDerivHolding1"
assert fields_list[0]["security_title"] == "Common Stock, $0.01 par value"
assert fields_list[0]["shares_owned_following_transaction"] == "157800"
assert fields_list[0]["direct_or_indirect_ownership"] == "D"
def test_extract_nonderivative_trans_form4_collection(test_form4_collection):
"""
Validate Form4 extraction code against a random sample of documents
:param test_form4_collection:
:return:
"""
for file in test_form4_collection.glob("*.txt"):
doc = Form4(file)
assert doc.filename == file.name
fields_list = doc.nonderivatives
assert len(fields_list) >= 0
trans_fields_list = [f for f in fields_list if f["type"] == "nonDerivTrans"]
for idx, fields in enumerate(trans_fields_list):
assert (len(fields)) == 19
assert fields["filename"] == file.name
assert fields["accession_num"] == doc.accession_num
assert fields["order"] == f"{idx+1}"
assert fields["type"] == "nonDerivTrans"
assert fields["index"] == f"nonDerivTrans{idx+1}"
assert validate(file, fields["security_title"], r".+")
assert validate(file, fields["transaction_date"], r"\d\d\d\d-\d\d-\d\d")
assert validate(file, fields["transaction_acquired_disposed_code"], r"[AD]")
assert validate(file, fields["transaction_price_per_share"], r"[\d\.]*")
assert validate(file, fields["transaction_shares"], r"[\d\.]*")
assert validate(file, fields["direct_or_indirect_ownership"], r"[DI]")
assert validate(file, fields["equity_swap_involved"], r"[10]")
assert validate(file, fields["transaction_form_type"], r"[45]")
assert validate(
file, fields["shares_owned_following_transaction"], r"[\d\.]*"
)
assert validate(file, fields["transaction_code"], r"[A-Z]")
holdings_fields_list = [
f for f in fields_list if f["type"] == "nonDerivHolding"
]
for idx, fields in enumerate(holdings_fields_list):
assert (len(fields)) == 19
assert fields["filename"] == file.name
assert fields["accession_num"] == doc.accession_num
assert fields["order"] == f"{idx+1}"
assert fields["type"] == "nonDerivHolding"
assert fields["index"] == f"nonDerivHolding{idx+1}"
assert validate(file, fields["accession_num"], r"[\d-]+")
assert validate(file, fields["security_title"], r".+")
assert validate(file, fields["direct_or_indirect_ownership"], r"[DI]")
def test_extract_nonderivative_trans_form4(test_form4):
"""
Validate Form4 extraction code against a single detailed example
:param test_form4:
:return:
"""
doc = Form4(test_form4)
assert doc.accession_num == "0001012975-17-000759"
assert doc.filename == test_form4.name
fields_list = doc.nonderivatives
assert len(fields_list) == 1
assert fields_list[0]["filename"] == test_form4.name
assert fields_list[0]["accession_num"] == doc.accession_num
assert fields_list[0]["order"] == "1"
assert fields_list[0]["type"] == "nonDerivTrans"
assert fields_list[0]["index"] == "nonDerivTrans1"
assert fields_list[0]["security_title"] == "Common Stock, par value $0.0001"
assert fields_list[0]["transaction_date"] == "2017-10-17"
assert fields_list[0]["deemed_execution_date"] is None
assert fields_list[0]["transaction_acquired_disposed_code"] == "D"
assert fields_list[0]["transaction_timeliness"] is None
assert fields_list[0]["transaction_price_per_share"] == "0"
assert fields_list[0]["transaction_shares"] == "1278471"
assert fields_list[0]["direct_or_indirect_ownership"] == "I"
assert fields_list[0]["equity_swap_involved"] == "0"
assert fields_list[0]["nature_of_ownership"] == "See Footnote"
assert fields_list[0]["transaction_form_type"] == "4"
assert fields_list[0]["shares_owned_following_transaction"] == "0"
assert fields_list[0]["value_owned_following_transaction"] is None
assert fields_list[0]["transaction_code"] == "J"
def test_extract_nonderivative_trans_form5_collection(test_form5_collection):
"""
Validate Form5 extraction code against a random sample of documents
:param test_form5_collection:
:return:
"""
for file in test_form5_collection.glob("*.txt"):
doc = Form5(file)
assert doc.filename == file.name
fields_list = doc.nonderivatives
trans_fields_list = [f for f in fields_list if f["type"] == "nonDerivTrans"]
for idx, fields in enumerate(trans_fields_list):
assert (len(fields)) == 19
assert fields["filename"] == file.name
assert fields["accession_num"] == doc.accession_num
assert fields["order"] == f"{idx+1}"
assert fields["type"] == "nonDerivTrans"
assert fields["index"] == f"nonDerivTrans{idx+1}"
assert validate(file, fields["security_title"], r".+")
assert validate(file, fields["transaction_date"], r"\d\d\d\d-\d\d-\d\d")
assert validate(file, fields["transaction_acquired_disposed_code"], r"[AD]")
assert validate(file, fields["transaction_price_per_share"], r"[\d\.]*")
assert validate(file, fields["transaction_shares"], r"[\d\.]*")
assert validate(file, fields["direct_or_indirect_ownership"], r"[DI]")
assert validate(file, fields["equity_swap_involved"], r"[10]")
assert validate(file, fields["transaction_form_type"], r"[45]")
assert validate(
file, fields["shares_owned_following_transaction"], r"[\d\.]*"
)
assert validate(file, fields["transaction_code"], r"[A-Z]")
holdings_fields_list = [
f for f in fields_list if f["type"] == "nonDerivHolding"
]
for idx, fields in enumerate(holdings_fields_list):
assert (len(fields)) == 19
assert fields["filename"] == file.name
assert fields["accession_num"] == doc.accession_num
assert fields["order"] == f"{idx+1}"
assert fields["type"] == "nonDerivHolding"
assert fields["index"] == f"nonDerivHolding{idx+1}"
assert validate(file, fields["accession_num"], r"[\d-]+")
assert validate(file, fields["security_title"], r".+")
assert validate(file, fields["direct_or_indirect_ownership"], r"[DI]")
def test_extract_nonderivative_trans_form5(test_form5):
"""
Validate Form5 extraction code against a single detailed example
:param test_form5:
:return:
"""
doc = Form5(test_form5)
assert doc.accession_num == "0000011544-20-000013"
assert doc.filename == test_form5.name
fields_list = doc.nonderivatives
assert len(fields_list) == 1
assert fields_list[0]["filename"] == test_form5.name
assert fields_list[0]["accession_num"] == doc.accession_num
assert fields_list[0]["order"] == "1"
assert fields_list[0]["type"] == "nonDerivTrans"
assert fields_list[0]["index"] == "nonDerivTrans1"
assert fields_list[0]["security_title"] == "Common Stock"
assert fields_list[0]["transaction_date"] == "2019-12-11"
assert fields_list[0]["deemed_execution_date"] is None
assert fields_list[0]["transaction_acquired_disposed_code"] == "A"
assert fields_list[0]["transaction_timeliness"] is None
assert fields_list[0]["transaction_price_per_share"] == "70.27"
assert fields_list[0]["transaction_shares"] == "3"
assert fields_list[0]["direct_or_indirect_ownership"] == "D"
assert fields_list[0]["equity_swap_involved"] == "0"
assert fields_list[0]["nature_of_ownership"] is None
assert fields_list[0]["transaction_form_type"] == "5"
assert fields_list[0]["shares_owned_following_transaction"] == "359"
assert fields_list[0]["value_owned_following_transaction"] is None
assert fields_list[0]["transaction_code"] == "P"
| 44.030172 | 88 | 0.651982 | 1,256 | 10,215 | 5.078822 | 0.119427 | 0.111303 | 0.084496 | 0.12259 | 0.847625 | 0.833987 | 0.769713 | 0.757486 | 0.7219 | 0.683963 | 0 | 0.030186 | 0.211943 | 10,215 | 231 | 89 | 44.220779 | 0.762236 | 0.076163 | 0 | 0.626506 | 0 | 0 | 0.257346 | 0.104833 | 0 | 0 | 0 | 0 | 0.722892 | 1 | 0.036145 | false | 0 | 0.03012 | 0 | 0.066265 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
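The tests above rely on a `validate(file, value, pattern)` helper imported from `conftest`, whose implementation is not shown. A plausible reconstruction, under the assumption that it anchors the regex to the whole extracted value and names the offending file on failure (this is a guess at the helper, not the actual conftest code):

```python
import re

def validate(file, value, pattern):
    """Hypothetical reimplementation of the conftest helper: full-string regex
    match, printing the failing file so assertion errors are diagnosable."""
    if value is None or re.fullmatch(pattern, value) is None:
        print(f"{file}: value {value!r} does not match {pattern!r}")
        return False
    return True

print(validate("doc.txt", "2019-12-11", r"\d\d\d\d-\d\d-\d\d"))  # True
print(validate("doc.txt", "70.27", r"[\d\.]+"))                  # True
```

`re.fullmatch` (rather than `re.match`) keeps patterns like `r"[DI]"` from accepting values that merely start with a valid character.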
733161d3d1e822da3ae8f0809748aea691ba71c9 | 76,787 | py | Python | test_retrieval.py | mohamedaboalimaa/tirg | c3399b584f479a8eebe52e92eb9f62b46c8c969e | [
"Apache-2.0"
] | null | null | null | test_retrieval.py | mohamedaboalimaa/tirg | c3399b584f479a8eebe52e92eb9f62b46c8c969e | [
"Apache-2.0"
] | null | null | null | test_retrieval.py | mohamedaboalimaa/tirg | c3399b584f479a8eebe52e92eb9f62b46c8c969e | [
"Apache-2.0"
] | null | null | null | # Copyright 2018 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Evaluates the retrieval model."""
import numpy as np
import pickle
import torch
from tqdm import tqdm as tqdm
from scipy.spatial import distance
import datasets
from BK import main2
import torchvision
Path1=r"D:\personal\master\MyCode\files"
#Path1=r"C:\MMaster\Files"
################# Test by accessing image directly #########################
def test(opt, model, testset):
  """Tests a model over the given testset."""
  model.eval()
  test_queries = testset.get_test_queries()

  all_imgs = []
  all_captions = []
  all_queries = []
  all_target_captions = []
  if test_queries:
    # compute test query features
    imgs = []
    mods = []
    for t in tqdm(test_queries):
      imgs += [testset.get_img(t['source_img_id'])]
      mods += [t['mod']['str']]
      if len(imgs) >= opt.batch_size or t is test_queries[-1]:
        if 'torch' not in str(type(imgs[0])):
          imgs = [torch.from_numpy(d).float() for d in imgs]
        imgs = torch.stack(imgs).float()
        imgs = torch.autograd.Variable(imgs)  # .cuda()
        f = model.compose_img_text(imgs, mods).data.cpu().numpy()
        all_queries += [f]
        imgs = []
        mods = []
    all_queries = np.concatenate(all_queries)
    all_target_captions = [t['target_caption'] for t in test_queries]

    # compute all image features
    imgs = []
    for i in tqdm(range(len(testset.imgs))):
      imgs += [testset.get_img(i)]
      if len(imgs) >= opt.batch_size or i == len(testset.imgs) - 1:
        if 'torch' not in str(type(imgs[0])):
          imgs = [torch.from_numpy(d).float() for d in imgs]
        imgs = torch.stack(imgs).float()
        imgs = torch.autograd.Variable(imgs)  # .cuda()
        imgs = model.extract_img_feature(imgs).data.cpu().numpy()
        all_imgs += [imgs]
        imgs = []
    all_imgs = np.concatenate(all_imgs)
    all_captions = [img['captions'][0] for img in testset.imgs]
  else:
    # use training queries to approximate training retrieval performance
    imgs0 = []
    imgs = []
    mods = []
    for i in range(10000):
      print('get images =', i, end='\r')
      item = testset[i]
      imgs += [item['source_img_data']]
      mods += [item['mod']['str']]
      if len(imgs) >= opt.batch_size or i == 9999:
        imgs = torch.stack(imgs).float()
        imgs = torch.autograd.Variable(imgs)
        f = model.compose_img_text(imgs, mods).data.cpu().numpy()  # .cuda()
        all_queries += [f]
        imgs = []
        mods = []
      imgs0 += [item['target_img_data']]
      if len(imgs0) >= opt.batch_size or i == 9999:
        imgs0 = torch.stack(imgs0).float()
        imgs0 = torch.autograd.Variable(imgs0)
        imgs0 = model.extract_img_feature(imgs0).data.cpu().numpy()  # .cuda()
        all_imgs += [imgs0]
        imgs0 = []
      all_captions += [item['target_caption']]
      all_target_captions += [item['target_caption']]
    all_imgs = np.concatenate(all_imgs)
    all_queries = np.concatenate(all_queries)

  # feature normalization
  for i in range(all_queries.shape[0]):
    all_queries[i, :] /= np.linalg.norm(all_queries[i, :])
  for i in range(all_imgs.shape[0]):
    all_imgs[i, :] /= np.linalg.norm(all_imgs[i, :])

  # match test queries to target images, get nearest neighbors
  nn_result = []
  for i in tqdm(range(all_queries.shape[0])):
    sims = all_queries[i:(i + 1), :].dot(all_imgs.T)
    if test_queries:
      sims[0, test_queries[i]['source_img_id']] = -10e10  # remove query image
    nn_result.append(np.argsort(-sims[0, :])[:110])

  # compute recalls
  out = []
  nn_result = [[all_captions[nn] for nn in nns] for nns in nn_result]
  for k in [1, 5, 10, 50, 100]:
    r = 0.0
    for i, nns in enumerate(nn_result):
      if all_target_captions[i] in nns[:k]:
        r += 1
    r /= len(nn_result)
    # out += [('recall_top' + str(k) + '_correct_composition', r)]
    out.append(str(k) + ' ---> ' + str(r * 100))
    if opt.dataset == 'mitstates':
      r = 0.0
      for i, nns in enumerate(nn_result):
        if all_target_captions[i].split()[0] in [c.split()[0] for c in nns[:k]]:
          r += 1
      r /= len(nn_result)
      out += [('recall_top' + str(k) + '_correct_adj', r)]
      r = 0.0
      for i, nns in enumerate(nn_result):
        if all_target_captions[i].split()[1] in [c.split()[1] for c in nns[:k]]:
          r += 1
      r /= len(nn_result)
      out += [('recall_top' + str(k) + '_correct_noun', r)]
  return out
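# The evaluation above boils down to: L2-normalize query and gallery features,
# rank gallery images by dot product (cosine similarity), and count how often
# the target caption appears in the top-k. A minimal, self-contained sketch of
# that metric on toy NumPy data (not the real TIRG features or captions):

```python
import numpy as np


def recall_at_k(queries, imgs, img_captions, target_captions, ks=(1, 5)):
  """Toy recall@k: fraction of queries whose target caption is in the top k."""
  # normalize rows so the dot product equals cosine similarity
  queries = queries / np.linalg.norm(queries, axis=1, keepdims=True)
  imgs = imgs / np.linalg.norm(imgs, axis=1, keepdims=True)
  sims = queries.dot(imgs.T)          # (num_queries, num_imgs)
  ranked = np.argsort(-sims, axis=1)  # best match first
  out = {}
  for k in ks:
    hits = sum(
        target_captions[i] in [img_captions[j] for j in ranked[i, :k]]
        for i in range(len(queries)))
    out[k] = hits / len(queries)
  return out


queries = np.array([[1.0, 0.0], [0.0, 1.0]])
imgs = np.array([[0.9, 0.1], [0.1, 0.9], [0.7, 0.7]])
captions = ['red dress', 'blue shirt', 'red shirt']
targets = ['red dress', 'blue shirt']
print(recall_at_k(queries, imgs, captions, targets))  # → {1: 1.0, 5: 1.0}
```

# Note the real code compares captions rather than image indices, so several
# gallery images sharing a caption all count as correct matches.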
################# Test by accessing files directly saved before #########################
def testLoaded(opt, model, testset):
  """Tests a model over the given testset, using features precomputed and saved to disk."""
  model.eval()
  test_queries = testset.get_test_queries()

  all_imgs = []
  all_captions = []
  all_queries = []
  all_target_captions = []
  if test_queries:
    # load precomputed test query features
    all_imgs = datasets.Features33K().Get_all_images()
    all_captions = datasets.Features33K().Get_all_captions()
    all_queries = datasets.Features33K().Get_all_queries()
    all_target_captions = datasets.Features33K().Get_target_captions()
  else:
    # use training queries to approximate training retrieval performance
    all_imgs = datasets.Features172K().Get_all_images()[:10000]
    all_captions = datasets.Features172K().Get_all_captions()[:10000]
    all_queries = datasets.Features172K().Get_all_queries()[:10000]
    all_target_captions = datasets.Features172K().Get_all_captions()[:10000]

  # feature normalization
  for i in range(all_queries.shape[0]):
    all_queries[i, :] /= np.linalg.norm(all_queries[i, :])
  for i in range(all_imgs.shape[0]):
    all_imgs[i, :] /= np.linalg.norm(all_imgs[i, :])

  # match test queries to target images, get nearest neighbors
  nn_result = []
  for i in tqdm(range(all_queries.shape[0])):
    sims = all_queries[i:(i + 1), :].dot(all_imgs.T)
    if test_queries:
      sims[0, test_queries[i]['source_img_id']] = -10e10  # remove query image
    nn_result.append(np.argsort(-sims[0, :])[:110])

  # compute recalls
  out = []
  nn_result = [[all_captions[nn] for nn in nns] for nns in nn_result]
  for k in [1, 5, 10, 50, 100]:
    r = 0.0
    for i, nns in enumerate(nn_result):
      if all_target_captions[i] in nns[:k]:
        r += 1
    r /= len(nn_result)
    # out += [('recall_top' + str(k) + '_correct_composition', r)]
    out.append(str(k) + ' ---> ' + str(r * 100))
    if opt.dataset == 'mitstates':
      r = 0.0
      for i, nns in enumerate(nn_result):
        if all_target_captions[i].split()[0] in [c.split()[0] for c in nns[:k]]:
          r += 1
      r /= len(nn_result)
      out += [('recall_top' + str(k) + '_correct_adj', r)]
      r = 0.0
      for i, nns in enumerate(nn_result):
        if all_target_captions[i].split()[1] in [c.split()[1] for c in nns[:k]]:
          r += 1
      r /= len(nn_result)
      out += [('recall_top' + str(k) + '_correct_noun', r)]
  return out
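# One detail worth isolating: when evaluating with real test queries, the
# query's own source image is masked out of the ranking by overwriting its
# similarity with a huge negative sentinel before argsort. A tiny sketch with
# made-up similarity scores:

```python
import numpy as np

sims = np.array([0.95, 0.80, 0.99])  # similarity of one query to 3 gallery images
source_img_id = 2                    # the query's own source image ranks highest
sims[source_img_id] = -10e10         # mask it out, as the functions above do
best = int(np.argsort(-sims)[0])     # index of the best remaining candidate
print(best)  # 0
```

# Without the mask, the source image itself would often be the nearest
# neighbor, inflating recall without testing the text modification at all.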
def testLoadedWithoutModel(opt, model, testset):
  """Tests a model over the given testset, using the 'WithoutModelTrig' precomputed features."""
  model.eval()
  test_queries = testset.get_test_queries()

  all_imgs = []
  all_captions = []
  all_queries = []
  all_target_captions = []
  if test_queries:
    # load precomputed test query features
    all_imgs = datasets.Features33K().Get_all_imagesWithoutModelTrig()
    all_captions = datasets.Features33K().Get_all_captionsWithoutModelTrig()
    all_queries = datasets.Features33K().Get_all_queriesWithoutModelTrig()
    all_target_captions = datasets.Features33K().Get_target_captionsWithoutModelTrig()
  else:
    # use training queries to approximate training retrieval performance
    all_imgs = datasets.Features172K().Get_all_imagesWithoutModelTrig()[:10000]
    all_captions = datasets.Features172K().Get_all_captionsWithoutModelTrig()[:10000]
    all_queries = datasets.Features172K().Get_all_queriesWithoutModelTrig()[:10000]
    all_target_captions = datasets.Features172K().Get_all_captionsWithoutModelTrig()[:10000]

  # feature normalization
  for i in range(all_queries.shape[0]):
    all_queries[i, :] /= np.linalg.norm(all_queries[i, :])
  for i in range(all_imgs.shape[0]):
    all_imgs[i, :] /= np.linalg.norm(all_imgs[i, :])

  # match test queries to target images, get nearest neighbors
  nn_result = []
  for i in tqdm(range(all_queries.shape[0])):
    sims = all_queries[i:(i + 1), :].dot(all_imgs.T)
    if test_queries:
      sims[0, test_queries[i]['source_img_id']] = -10e10  # remove query image
    nn_result.append(np.argsort(-sims[0, :])[:110])

  # compute recalls
  out = []
  nn_result = [[all_captions[nn] for nn in nns] for nns in nn_result]
  for k in [1, 5, 10, 50, 100]:
    r = 0.0
    for i, nns in enumerate(nn_result):
      if all_target_captions[i] in nns[:k]:
        r += 1
    r /= len(nn_result)
    # out += [('recall_top' + str(k) + '_correct_composition', r)]
    out.append(str(k) + ' ---> ' + str(r * 100))
    if opt.dataset == 'mitstates':
      r = 0.0
      for i, nns in enumerate(nn_result):
        if all_target_captions[i].split()[0] in [c.split()[0] for c in nns[:k]]:
          r += 1
      r /= len(nn_result)
      out += [('recall_top' + str(k) + '_correct_adj', r)]
      r = 0.0
      for i, nns in enumerate(nn_result):
        if all_target_captions[i].split()[1] in [c.split()[1] for c in nns[:k]]:
          r += 1
      r /= len(nn_result)
      out += [('recall_top' + str(k) + '_correct_noun', r)]
  return out
def testLoadedWithoutModeRegModel(opt, model, testset, reg):
  """Evaluation on 'WithoutModelTrig' features, with queries mapped through a regression model `reg`."""
  model.eval()
  test_queries = testset.get_test_queries()

  all_imgs = []
  all_captions = []
  all_queries = []
  all_queries1 = []
  all_target_captions = []
  if test_queries:
    # load precomputed test query features
    all_imgs = datasets.Features33K().Get_all_imagesWithoutModelTrig()
    all_captions = datasets.Features33K().Get_all_captionsWithoutModelTrig()
    all_queries = datasets.Features33K().Get_all_queriesWithoutModelTrig()
    all_target_captions = datasets.Features33K().Get_target_captionsWithoutModelTrig()
  else:
    # use training queries to approximate training retrieval performance
    all_imgs = datasets.Features172K().Get_all_imagesWithoutModelTrig()[:10000]
    all_captions = datasets.Features172K().Get_all_captionsWithoutModelTrig()[:10000]
    all_queries = datasets.Features172K().Get_all_queriesWithoutModelTrig()[:10000]
    all_target_captions = datasets.Features172K().Get_all_captionsWithoutModelTrig()[:10000]
  # all_queries = all_queries1

  # feature normalization
  for i in range(all_queries.shape[0]):
    all_queries[i, :] /= np.linalg.norm(all_queries[i, :])
  for i in range(all_imgs.shape[0]):
    all_imgs[i, :] /= np.linalg.norm(all_imgs[i, :])
  all_queries = reg.predict(all_queries)
  all_queries = np.array(all_queries)

  # match test queries to target images, get nearest neighbors
  nn_result = []
  # euc_new_nn_result = []
  for i in tqdm(range(all_queries.shape[0])):
    sims = all_queries[i:(i + 1), :].dot(all_imgs.T)
    # euc_new_sims = np.sum(abs(all_imgs - all_queries[i, :]), axis=1)
    if test_queries:
      sims[0, test_queries[i]['source_img_id']] = -10e10  # remove query image
      # euc_new_sims[test_queries[i]['source_img_id']] = 10e10
    nn_result.append(np.argsort(-sims[0, :])[:110])
    # euc_new_nn_result.append(np.argsort(euc_new_sims)[:110])

  # compute recalls
  out = []
  nn_result = [[all_captions[nn] for nn in nns] for nns in nn_result]
  # euc_new_nn_result = [[all_captions[nn] for nn in nns] for nns in euc_new_nn_result]
  for k in [1, 5, 10, 50, 100]:
    r = 0.0
    for i, nns in enumerate(nn_result):
      if all_target_captions[i] in nns[:k]:
        r += 1
    r /= len(nn_result)
    # out += [('recall_top' + str(k) + '_correct_composition', r)]
    out.append(str(k) + ' ---> ' + str(r * 100))
    # r = 0.0
    # for i, nns in enumerate(euc_new_nn_result):
    #   if all_target_captions[i] in nns[:k]:
    #     r += 1
    # r /= len(euc_new_nn_result)
    # out.append('EUC:' + str(k) + ' ---> ' + str(r * 100))
    if opt.dataset == 'mitstates':
      r = 0.0
      for i, nns in enumerate(nn_result):
        if all_target_captions[i].split()[0] in [c.split()[0] for c in nns[:k]]:
          r += 1
      r /= len(nn_result)
      out += [('recall_top' + str(k) + '_correct_adj', r)]
      r = 0.0
      for i, nns in enumerate(nn_result):
        if all_target_captions[i].split()[1] in [c.split()[1] for c in nns[:k]]:
          r += 1
      r /= len(nn_result)
      out += [('recall_top' + str(k) + '_correct_noun', r)]
  return out
def testLoadedRegModel(opt, model, testset, reg):
  """Evaluation on precomputed features, with queries mapped through a regression model `reg`."""
  model.eval()
  test_queries = testset.get_test_queries()

  all_imgs = []
  all_captions = []
  all_queries = []
  all_queries1 = []
  all_target_captions = []
  if test_queries:
    # load precomputed test query features
    all_imgs = datasets.Features33K().Get_all_images()
    all_captions = datasets.Features33K().Get_all_captions()
    all_queries1 = datasets.Features33K().Get_all_queries()
    all_target_captions = datasets.Features33K().Get_target_captions()
  else:
    # use training queries to approximate training retrieval performance
    all_imgs = datasets.Features172K().Get_all_images()[:10000]
    all_captions = datasets.Features172K().Get_all_captions()[:10000]
    all_queries1 = datasets.Features172K().Get_all_queries()[:10000]
    all_target_captions = datasets.Features172K().Get_all_captions()[:10000]
  all_queries = all_queries1

  # feature normalization
  for i in range(all_queries.shape[0]):
    all_queries[i, :] /= np.linalg.norm(all_queries[i, :])
  for i in range(all_imgs.shape[0]):
    all_imgs[i, :] /= np.linalg.norm(all_imgs[i, :])
  all_queries = reg.predict(all_queries)
  all_queries = np.array(all_queries)

  # match test queries to target images, get nearest neighbors
  nn_result = []
  # euc_new_nn_result = []
  for i in tqdm(range(all_queries.shape[0])):
    sims = all_queries[i:(i + 1), :].dot(all_imgs.T)
    # euc_new_sims = np.sum(abs(all_imgs - all_queries[i, :]), axis=1)
    if test_queries:
      sims[0, test_queries[i]['source_img_id']] = -10e10  # remove query image
      # euc_new_sims[test_queries[i]['source_img_id']] = 10e10
    nn_result.append(np.argsort(-sims[0, :])[:110])
    # euc_new_nn_result.append(np.argsort(euc_new_sims)[:110])

  # compute recalls
  out = []
  nn_result = [[all_captions[nn] for nn in nns] for nns in nn_result]
  # euc_new_nn_result = [[all_captions[nn] for nn in nns] for nns in euc_new_nn_result]
  for k in [1, 5, 10, 50, 100]:
    r = 0.0
    for i, nns in enumerate(nn_result):
      if all_target_captions[i] in nns[:k]:
        r += 1
    r /= len(nn_result)
    # out += [('recall_top' + str(k) + '_correct_composition', r)]
    out.append(str(k) + ' ---> ' + str(r * 100))
    # r = 0.0
    # for i, nns in enumerate(euc_new_nn_result):
    #   if all_target_captions[i] in nns[:k]:
    #     r += 1
    # r /= len(euc_new_nn_result)
    # out.append('EUC:' + str(k) + ' ---> ' + str(r * 100))
    if opt.dataset == 'mitstates':
      r = 0.0
      for i, nns in enumerate(nn_result):
        if all_target_captions[i].split()[0] in [c.split()[0] for c in nns[:k]]:
          r += 1
      r /= len(nn_result)
      out += [('recall_top' + str(k) + '_correct_adj', r)]
      r = 0.0
      for i, nns in enumerate(nn_result):
        if all_target_captions[i].split()[1] in [c.split()[1] for c in nns[:k]]:
          r += 1
      r /= len(nn_result)
      out += [('recall_top' + str(k) + '_correct_noun', r)]
  return out
def testLoadedRegModelPlusFFL(opt, model, testset, reg, FFL):
  """Evaluation on precomputed features, with queries mapped through the feed-forward layer `FFL` (the `reg` mapping is kept commented out)."""
  model.eval()
  test_queries = testset.get_test_queries()

  all_imgs = []
  all_captions = []
  all_queries = []
  all_queries1 = []
  all_target_captions = []
  if test_queries:
    # load precomputed test query features
    all_imgs = datasets.Features33K().Get_all_images()
    all_captions = datasets.Features33K().Get_all_captions()
    all_queries1 = datasets.Features33K().Get_all_queries()
    all_target_captions = datasets.Features33K().Get_target_captions()
  else:
    # use training queries to approximate training retrieval performance
    all_imgs = datasets.Features172K().Get_all_images()[:10000]
    all_captions = datasets.Features172K().Get_all_captions()[:10000]
    all_queries1 = datasets.Features172K().Get_all_queries()[:10000]
    all_target_captions = datasets.Features172K().Get_all_captions()[:10000]
  all_queries = all_queries1

  # feature normalization
  for i in range(all_queries.shape[0]):
    all_queries[i, :] /= np.linalg.norm(all_queries[i, :])
  for i in range(all_imgs.shape[0]):
    all_imgs[i, :] /= np.linalg.norm(all_imgs[i, :])
  # all_queries = reg.predict(all_queries)
  all_queries = FFL.myforward(torch.FloatTensor(all_queries)).data.cpu().numpy()
  all_queries = np.array(all_queries)

  # match test queries to target images, get nearest neighbors
  nn_result = []
  # euc_new_nn_result = []
  for i in tqdm(range(all_queries.shape[0])):
    sims = all_queries[i:(i + 1), :].dot(all_imgs.T)
    # euc_new_sims = np.sum(abs(all_imgs - all_queries[i, :]), axis=1)
    if test_queries:
      sims[0, test_queries[i]['source_img_id']] = -10e10  # remove query image
      # euc_new_sims[test_queries[i]['source_img_id']] = 10e10
    nn_result.append(np.argsort(-sims[0, :])[:110])
    # euc_new_nn_result.append(np.argsort(euc_new_sims)[:110])

  # compute recalls
  out = []
  nn_result = [[all_captions[nn] for nn in nns] for nns in nn_result]
  # euc_new_nn_result = [[all_captions[nn] for nn in nns] for nns in euc_new_nn_result]
  for k in [1, 5, 10, 50, 100]:
    r = 0.0
    for i, nns in enumerate(nn_result):
      if all_target_captions[i] in nns[:k]:
        r += 1
    r /= len(nn_result)
    # out += [('recall_top' + str(k) + '_correct_composition', r)]
    out.append(str(k) + ' ---> ' + str(r * 100))
    # r = 0.0
    # for i, nns in enumerate(euc_new_nn_result):
    #   if all_target_captions[i] in nns[:k]:
    #     r += 1
    # r /= len(euc_new_nn_result)
    # out.append('EUC:' + str(k) + ' ---> ' + str(r * 100))
    if opt.dataset == 'mitstates':
      r = 0.0
      for i, nns in enumerate(nn_result):
        if all_target_captions[i].split()[0] in [c.split()[0] for c in nns[:k]]:
          r += 1
      r /= len(nn_result)
      out += [('recall_top' + str(k) + '_correct_adj', r)]
      r = 0.0
      for i, nns in enumerate(nn_result):
        if all_target_captions[i].split()[1] in [c.split()[1] for c in nns[:k]]:
          r += 1
      r /= len(nn_result)
      out += [('recall_top' + str(k) + '_correct_noun', r)]
  return out
def testLoadedRandomForestRegressor(opt, model, testset, reg):
  """Evaluation with queries mapped through a random-forest regressor `reg` (note: the source image is not masked out here)."""
  model.eval()
  test_queries = testset.get_test_queries()

  all_imgs = []
  all_captions = []
  all_queries = []
  all_queries1 = []
  all_target_captions = []
  if test_queries:
    # load precomputed test query features
    all_imgs = datasets.Features33K().Get_all_images()
    all_captions = datasets.Features33K().Get_all_captions()
    all_queries1 = datasets.Features33K().Get_all_queries()
    all_target_captions = datasets.Features33K().Get_target_captions()
    # all_imgs = datasets.Features172K().Get_all_images()[140000:172048]
    # all_captions = datasets.Features172K().Get_all_captions()[140000:172048]
    # all_queries1 = datasets.Features172K().Get_all_queries()[140000:172048]
    # all_target_captions = datasets.Features172K().Get_all_captions()[140000:172048]
  else:
    # use training queries to approximate training retrieval performance
    all_imgs = datasets.Features172K().Get_all_images()[:10000]
    all_captions = datasets.Features172K().Get_all_captions()[:10000]
    all_queries1 = datasets.Features172K().Get_all_queries()[:10000]
    all_target_captions = datasets.Features172K().Get_all_captions()[:10000]
  all_queries = all_queries1

  # feature normalization
  for i in range(all_queries.shape[0]):
    all_queries[i, :] /= np.linalg.norm(all_queries[i, :])
  for i in range(all_imgs.shape[0]):
    all_imgs[i, :] /= np.linalg.norm(all_imgs[i, :])
  all_queries = reg.predict(all_queries)
  all_queries = np.array(all_queries)

  # match test queries to target images, get nearest neighbors
  nn_result = []
  # euc_new_nn_result = []
  for i in tqdm(range(all_queries.shape[0])):
    sims = all_queries[i:(i + 1), :].dot(all_imgs.T)
    # euc_new_sims = np.sum(abs(all_imgs - all_queries[i, :]), axis=1)
    # if test_queries:
    #   sims[0, test_queries[i]['source_img_id']] = -10e10  # remove query image
    #   euc_new_sims[test_queries[i]['source_img_id']] = 10e10
    nn_result.append(np.argsort(-sims[0, :])[:110])
    # euc_new_nn_result.append(np.argsort(euc_new_sims)[:110])

  # compute recalls
  out = []
  nn_result = [[all_captions[nn] for nn in nns] for nns in nn_result]
  # euc_new_nn_result = [[all_captions[nn] for nn in nns] for nns in euc_new_nn_result]
  for k in [1, 5, 10, 50, 100]:
    r = 0.0
    for i, nns in enumerate(nn_result):
      if all_target_captions[i] in nns[:k]:
        r += 1
    r /= len(nn_result)
    # out += [('recall_top' + str(k) + '_correct_composition', r)]
    out.append(str(k) + ' ---> ' + str(r * 100))
    # r = 0.0
    # for i, nns in enumerate(euc_new_nn_result):
    #   if all_target_captions[i] in nns[:k]:
    #     r += 1
    # r /= len(euc_new_nn_result)
    # out.append('EUC:' + str(k) + ' ---> ' + str(r * 100))
    if opt.dataset == 'mitstates':
      r = 0.0
      for i, nns in enumerate(nn_result):
        if all_target_captions[i].split()[0] in [c.split()[0] for c in nns[:k]]:
          r += 1
      r /= len(nn_result)
      out += [('recall_top' + str(k) + '_correct_adj', r)]
      r = 0.0
      for i, nns in enumerate(nn_result):
        if all_target_captions[i].split()[1] in [c.split()[1] for c in nns[:k]]:
          r += 1
      r /= len(nn_result)
      out += [('recall_top' + str(k) + '_correct_noun', r)]
  return out
def testLoadedBeta(opt, model, testset, beta):
  """Evaluation with queries mapped through a linear model: X2 = [1, x] . beta."""
  model.eval()
  test_queries = testset.get_test_queries()

  all_imgs = []
  all_captions = []
  all_queries = []
  all_queries1 = []
  all_target_captions = []
  if test_queries:
    # load precomputed test query features
    all_imgs = datasets.Features33K().Get_all_images()
    all_captions = datasets.Features33K().Get_all_captions()
    all_queries1 = datasets.Features33K().Get_all_queries()
    all_target_captions = datasets.Features33K().Get_target_captions()
    for j in range(len(all_queries1)):
      all_queries1[j, :] /= np.linalg.norm(all_queries1[j, :])
      X1 = np.insert(all_queries1[j], 0, 1)
      X2 = np.matmul(X1, beta)
      # with open(Path1 + "\\ERRORtrainLoaded.txt", 'rb') as fp:
      #   ERROR = pickle.load(fp)
      # X2 = X2 + ERROR
      all_queries.append(X2)
  else:
    # use training queries to approximate training retrieval performance
    all_imgs = datasets.Features172K().Get_all_images()[:10000]
    all_captions = datasets.Features172K().Get_all_captions()[:10000]
    all_queries1 = datasets.Features172K().Get_all_queries()[:10000]
    all_target_captions = datasets.Features172K().Get_all_captions()[:10000]
    for j in range(len(all_queries1)):
      all_queries1[j, :] /= np.linalg.norm(all_queries1[j, :])
      X1 = np.insert(all_queries1[j], 0, 1)
      X2 = np.matmul(X1, beta)
      # with open(Path1 + "\\ERRORtrainLoaded.txt", 'rb') as fp:
      #   ERROR = pickle.load(fp)
      # X2 = X2 + ERROR
      all_queries.append(X2)
  all_queries = np.array(all_queries)

  # feature normalization
  for i in range(all_queries.shape[0]):
    all_queries[i, :] /= np.linalg.norm(all_queries[i, :])
  for i in range(all_imgs.shape[0]):
    all_imgs[i, :] /= np.linalg.norm(all_imgs[i, :])

  # match test queries to target images, get nearest neighbors
  nn_result = []
  for i in tqdm(range(all_queries.shape[0])):
    sims = all_queries[i:(i + 1), :].dot(all_imgs.T)
    if test_queries:
      sims[0, test_queries[i]['source_img_id']] = -10e10  # remove query image
    nn_result.append(np.argsort(-sims[0, :])[:110])

  # compute recalls
  out = []
  nn_result = [[all_captions[nn] for nn in nns] for nns in nn_result]
  for k in [1, 5, 10, 50, 100]:
    r = 0.0
    for i, nns in enumerate(nn_result):
      if all_target_captions[i] in nns[:k]:
        r += 1
    r /= len(nn_result)
    # out += [('recall_top' + str(k) + '_correct_composition', r)]
    out.append(str(k) + ' ---> ' + str(r * 100))
    if opt.dataset == 'mitstates':
      r = 0.0
      for i, nns in enumerate(nn_result):
        if all_target_captions[i].split()[0] in [c.split()[0] for c in nns[:k]]:
          r += 1
      r /= len(nn_result)
      out += [('recall_top' + str(k) + '_correct_adj', r)]
      r = 0.0
      for i, nns in enumerate(nn_result):
        if all_target_captions[i].split()[1] in [c.split()[1] for c in nns[:k]]:
          r += 1
      r /= len(nn_result)
      out += [('recall_top' + str(k) + '_correct_noun', r)]
  return out
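# The beta mapping above is an affine correction: prepend a bias term of 1 to
# each normalized query and multiply by a learned matrix `beta` of shape
# (d + 1, d). A sketch with a made-up `beta` (zero bias row plus identity, so
# the mapping is a no-op), just to show the shapes involved:

```python
import numpy as np

d = 3
beta = np.vstack([np.zeros((1, d)), np.eye(d)])  # (d + 1, d): bias row + identity
x = np.array([3.0, 0.0, 4.0])
x = x / np.linalg.norm(x)   # normalize first, as the code above does
x1 = np.insert(x, 0, 1.0)   # prepend the bias term: shape (d + 1,)
x2 = x1 @ beta              # mapped query: shape (d,)
print(x2)                   # → [0.6 0.  0.8]
```

# In the real code `beta` comes from a least-squares fit of normalized query
# features to target image features, so the product shifts each query toward
# the image-feature space before the cosine-similarity ranking.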
def testLoadedold(opt, model, testset):
  """Evaluation on the 'old' precomputed features."""
  model.eval()
  test_queries = testset.get_test_queries()

  all_imgs = []
  all_captions = []
  all_queries = []
  all_target_captions = []
  if test_queries:
    # load precomputed test query features
    all_imgs = datasets.Features33K().Get_all_imagesold()
    all_captions = datasets.Features33K().Get_all_captionsold()
    all_queries = datasets.Features33K().Get_all_queriesold()
    all_target_captions = datasets.Features33K().Get_target_captionsold()
  else:
    # use training queries to approximate training retrieval performance
    all_imgs = datasets.Features172K().Get_all_imagesold()[:10000]
    all_captions = datasets.Features172K().Get_all_captionsold()[:10000]
    all_queries = datasets.Features172K().Get_all_queriesold()[:10000]
    all_target_captions = datasets.Features172K().Get_all_captionsold()[:10000]

  # feature normalization
  for i in range(all_queries.shape[0]):
    all_queries[i, :] /= np.linalg.norm(all_queries[i, :])
  for i in range(all_imgs.shape[0]):
    all_imgs[i, :] /= np.linalg.norm(all_imgs[i, :])

  # match test queries to target images, get nearest neighbors
  nn_result = []
  for i in tqdm(range(all_queries.shape[0])):
    sims = all_queries[i:(i + 1), :].dot(all_imgs.T)
    if test_queries:
      sims[0, test_queries[i]['source_img_id']] = -10e10  # remove query image
    nn_result.append(np.argsort(-sims[0, :])[:110])

  # compute recalls
  out = []
  nn_result = [[all_captions[nn] for nn in nns] for nns in nn_result]
  for k in [1, 5, 10, 50, 100]:
    r = 0.0
    for i, nns in enumerate(nn_result):
      if all_target_captions[i] in nns[:k]:
        r += 1
    r /= len(nn_result)
    # out += [('recall_top' + str(k) + '_correct_composition', r)]
    out.append(str(k) + ' ---> ' + str(r * 100))
    if opt.dataset == 'mitstates':
      r = 0.0
      for i, nns in enumerate(nn_result):
        if all_target_captions[i].split()[0] in [c.split()[0] for c in nns[:k]]:
          r += 1
      r /= len(nn_result)
      out += [('recall_top' + str(k) + '_correct_adj', r)]
      r = 0.0
      for i, nns in enumerate(nn_result):
        if all_target_captions[i].split()[1] in [c.split()[1] for c in nns[:k]]:
          r += 1
      r /= len(nn_result)
      out += [('recall_top' + str(k) + '_correct_noun', r)]
  return out
def testLoadedBetaold(opt, model, testset, beta):
  """Beta-mapped evaluation on the 'old' precomputed features."""
  model.eval()
  test_queries = testset.get_test_queries()

  all_imgs = []
  all_captions = []
  all_queries = []
  all_queries1 = []
  all_target_captions = []
  if test_queries:
    # load precomputed test query features
    all_imgs = datasets.Features33K().Get_all_imagesold()
    all_captions = datasets.Features33K().Get_all_captionsold()
    all_queries1 = datasets.Features33K().Get_all_queriesold()
    all_target_captions = datasets.Features33K().Get_target_captionsold()
    for j in range(len(all_queries1)):
      all_queries1[j, :] /= np.linalg.norm(all_queries1[j, :])
      X1 = np.insert(all_queries1[j], 0, 1)
      X2 = np.matmul(X1, beta)
      # with open(Path1 + "\\ERRORtrainLoaded.txt", 'rb') as fp:
      #   ERROR = pickle.load(fp)
      # X2 = X2 + ERROR
      all_queries.append(X2)
  else:
    # use training queries to approximate training retrieval performance
    all_imgs = datasets.Features172K().Get_all_imagesold()[:10000]
    all_captions = datasets.Features172K().Get_all_captionsold()[:10000]
    all_queries1 = datasets.Features172K().Get_all_queriesold()[:10000]
    all_target_captions = datasets.Features172K().Get_all_captionsold()[:10000]
    for j in range(len(all_queries1)):
      all_queries1[j, :] /= np.linalg.norm(all_queries1[j, :])
      X1 = np.insert(all_queries1[j], 0, 1)
      X2 = np.matmul(X1, beta)
      # with open(Path1 + "\\ERRORtrainLoaded.txt", 'rb') as fp:
      #   ERROR = pickle.load(fp)
      # X2 = X2 + ERROR
      all_queries.append(X2)
  all_queries = np.array(all_queries)

  # feature normalization
  for i in range(all_queries.shape[0]):
    all_queries[i, :] /= np.linalg.norm(all_queries[i, :])
  for i in range(all_imgs.shape[0]):
    all_imgs[i, :] /= np.linalg.norm(all_imgs[i, :])

  # match test queries to target images, get nearest neighbors
  nn_result = []
  for i in tqdm(range(all_queries.shape[0])):
    sims = all_queries[i:(i + 1), :].dot(all_imgs.T)
    if test_queries:
      sims[0, test_queries[i]['source_img_id']] = -10e10  # remove query image
    nn_result.append(np.argsort(-sims[0, :])[:110])

  # compute recalls
  out = []
  nn_result = [[all_captions[nn] for nn in nns] for nns in nn_result]
  for k in [1, 5, 10, 50, 100]:
    r = 0.0
    for i, nns in enumerate(nn_result):
      if all_target_captions[i] in nns[:k]:
        r += 1
    r /= len(nn_result)
    # out += [('recall_top' + str(k) + '_correct_composition', r)]
    out.append(str(k) + ' ---> ' + str(r * 100))
    if opt.dataset == 'mitstates':
      r = 0.0
      for i, nns in enumerate(nn_result):
        if all_target_captions[i].split()[0] in [c.split()[0] for c in nns[:k]]:
          r += 1
      r /= len(nn_result)
      out += [('recall_top' + str(k) + '_correct_adj', r)]
      r = 0.0
      for i, nns in enumerate(nn_result):
        if all_target_captions[i].split()[1] in [c.split()[1] for c in nns[:k]]:
          r += 1
      r /= len(nn_result)
      out += [('recall_top' + str(k) + '_correct_noun', r)]
  return out
def testLoadedRegModelphix(opt, model, testset, reg):
  """Evaluation on raw image features (phi_x), with queries mapped through `reg`."""
  model.eval()
  test_queries = testset.get_test_queries()

  all_imgs = []
  all_captions = []
  all_queries = []
  all_queries1 = []
  all_target_captions = []
  if test_queries:
    # load precomputed test query features
    all_imgs = datasets.Features33K().Get_phixtarget()
    all_captions = datasets.Features33K().Get_all_captions()
    all_queries1 = datasets.Features33K().Get_phix()
    all_target_captions = datasets.Features33K().Get_target_captions()
  else:
    # use training queries to approximate training retrieval performance
    all_imgs = datasets.Features172K().Get_phixtarget()[:10000]
    all_captions = datasets.Features172K().Get_all_captions()[:10000]
    all_queries1 = datasets.Features172K().Get_phix()[:10000]
    all_target_captions = datasets.Features172K().Get_all_captions()[:10000]
  all_queries = all_queries1

  # feature normalization
  for i in range(all_queries.shape[0]):
    all_queries[i, :] /= np.linalg.norm(all_queries[i, :])
  for i in range(all_imgs.shape[0]):
    all_imgs[i, :] /= np.linalg.norm(all_imgs[i, :])
  all_queries = reg.predict(all_queries)
  all_queries = np.array(all_queries)

  # match test queries to target images, get nearest neighbors
  nn_result = []
  # euc_new_nn_result = []
  for i in tqdm(range(all_queries.shape[0])):
    sims = all_queries[i:(i + 1), :].dot(all_imgs.T)
    # euc_new_sims = np.sum(abs(all_imgs - all_queries[i, :]), axis=1)
    if test_queries:
      sims[0, test_queries[i]['source_img_id']] = -10e10  # remove query image
      # euc_new_sims[test_queries[i]['source_img_id']] = 10e10
    nn_result.append(np.argsort(-sims[0, :])[:110])
    # euc_new_nn_result.append(np.argsort(euc_new_sims)[:110])

  # compute recalls
  out = []
  nn_result = [[all_captions[nn] for nn in nns] for nns in nn_result]
  # euc_new_nn_result = [[all_captions[nn] for nn in nns] for nns in euc_new_nn_result]
  for k in [1, 5, 10, 50, 100]:
    r = 0.0
    for i, nns in enumerate(nn_result):
      if all_target_captions[i] in nns[:k]:
        r += 1
    r /= len(nn_result)
    # out += [('recall_top' + str(k) + '_correct_composition', r)]
    out.append(str(k) + ' ---> ' + str(r * 100))
    # r = 0.0
    # for i, nns in enumerate(euc_new_nn_result):
    #   if all_target_captions[i] in nns[:k]:
    #     r += 1
    # r /= len(euc_new_nn_result)
    # out.append('EUC:' + str(k) + ' ---> ' + str(r * 100))
    if opt.dataset == 'mitstates':
      r = 0.0
      for i, nns in enumerate(nn_result):
        if all_target_captions[i].split()[0] in [c.split()[0] for c in nns[:k]]:
          r += 1
      r /= len(nn_result)
      out += [('recall_top' + str(k) + '_correct_adj', r)]
      r = 0.0
      for i, nns in enumerate(nn_result):
        if all_target_captions[i].split()[1] in [c.split()[1] for c in nns[:k]]:
          r += 1
      r /= len(nn_result)
      out += [('recall_top' + str(k) + '_correct_noun', r)]
  return out
def testLoadedRandomForestRegressorphix(opt, model, testset,reg):
"""Tests a model over the given testset."""
model.eval()
test_queries = testset.get_test_queries()
all_imgs = []
all_captions = []
all_queries = []
all_queries1=[]
all_target_captions = []
if test_queries:
# compute test query features
all_imgs = datasets.Features33K().Get_phixtarget()
all_captions = datasets.Features33K().Get_all_captions()
all_queries1 = datasets.Features33K().Get_phix()
all_target_captions = datasets.Features33K().Get_target_captions()
# all_imgs = datasets.Features172K().Get_all_images()[140000:172048]
# all_captions = datasets.Features172K().Get_all_captions()[140000:172048]
# all_queries1 = datasets.Features172K().Get_all_queries()[140000:172048]
# all_target_captions = datasets.Features172K().Get_all_captions()[140000:172048]
else:
# use training queries to approximate training retrieval performance
all_imgs = datasets.Features172K().Get_phixtarget()[:10000]
all_captions = datasets.Features172K().Get_all_captions()[:10000]
all_queries1 = datasets.Features172K().Get_phix()[:10000]
all_target_captions = datasets.Features172K().Get_all_captions()[:10000]
all_queries=all_queries1
# feature normalization
for i in range(all_queries.shape[0]):
all_queries[i, :] /= np.linalg.norm(all_queries[i, :])
for i in range(all_imgs.shape[0]):
all_imgs[i, :] /= np.linalg.norm(all_imgs[i, :])
all_queries =reg.predict(all_queries)
all_queries= np.array(all_queries)
# match test queries to target images, get nearest neighbors
nn_result = []
#euc_new_nn_result=[]
for i in tqdm(range(all_queries.shape[0])):
sims = all_queries[i:(i+1), :].dot(all_imgs.T)
#euc_new_sims=np.sum(abs(all_imgs-all_queries[i, :]),axis=1)
# if test_queries:
# sims[0, test_queries[i]['source_img_id']] = -10e10 # remove query image
#euc_new_sims[test_queries[i]['source_img_id']]=10e10
nn_result.append(np.argsort(-sims[0, :])[:110])
#euc_new_nn_result.append(np.argsort(euc_new_sims)[:110])
# compute recalls
out = []
nn_result = [[all_captions[nn] for nn in nns] for nns in nn_result]
#euc_new_nn_result = [[all_captions[nn] for nn in nns] for nns in euc_new_nn_result]
for k in [1, 5, 10, 50, 100]:
r = 0.0
for i, nns in enumerate(nn_result):
if all_target_captions[i] in nns[:k]:
r += 1
r /= len(nn_result)
#out += [('recall_top' + str(k) + '_correct_composition', r)]
out.append(str(k) + ' ---> '+ str(r*100))
if opt.dataset == 'mitstates':
r = 0.0
for i, nns in enumerate(nn_result):
if all_target_captions[i].split()[0] in [c.split()[0] for c in nns[:k]]:
r += 1
r /= len(nn_result)
out += [('recall_top' + str(k) + '_correct_adj', r)]
r = 0.0
for i, nns in enumerate(nn_result):
if all_target_captions[i].split()[1] in [c.split()[1] for c in nns[:k]]:
r += 1
r /= len(nn_result)
out += [('recall_top' + str(k) + '_correct_noun', r)]
return out
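The core of every evaluation above is the same three steps: L2-normalize the rows, take dot products (cosine similarity on normalized vectors), then argsort descending. A minimal standalone sketch of that loop; `retrieve_top_k` is a hypothetical helper, not part of this codebase:

```python
import numpy as np

def retrieve_top_k(queries, gallery, k=5):
    """Rank gallery rows for each query by cosine similarity (highest first)."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = q @ g.T                           # cosine similarity matrix
    return np.argsort(-sims, axis=1)[:, :k]  # top-k gallery indices per query
```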
def testLoadedNLP(opt, model, testset, model2):
"""Tests a model over the given testset."""
model.eval()
test_queries = testset.get_test_queries()
all_imgs = []
all_captions = []
all_queries = []
all_target_captions = []
if test_queries:
# compute test query features
all_imgs = datasets.Features33K().Get_all_images()
all_captions = datasets.Features33K().Get_all_captions()
all_queries = datasets.Features33K().Get_all_queries()
all_target_captions = datasets.Features33K().Get_target_captions()
all_queries = torch.Tensor(all_queries)
all_queries = model2.myforward(all_queries).data.cpu().numpy()
else:
# use training queries to approximate training retrieval performance
all_imgs = datasets.Features172K().Get_all_images()[:10000]
all_captions = datasets.Features172K().Get_all_captions()[:10000]
all_queries = datasets.Features172K().Get_all_queries()[:10000]
all_target_captions = datasets.Features172K().Get_all_captions()[:10000]
all_queries = torch.Tensor(all_queries)
all_queries = model2.myforward(all_queries).data.cpu().numpy()
# feature normalization
for i in range(all_queries.shape[0]):
all_queries[i, :] /= np.linalg.norm(all_queries[i, :])
for i in range(all_imgs.shape[0]):
all_imgs[i, :] /= np.linalg.norm(all_imgs[i, :])
# match test queries to target images, get nearest neighbors
nn_result = []
for i in tqdm(range(all_queries.shape[0])):
sims = all_queries[i:(i+1), :].dot(all_imgs.T)
if test_queries:
sims[0, test_queries[i]['source_img_id']] = -10e10 # remove query image
nn_result.append(np.argsort(-sims[0, :])[:110])
# compute recalls
out = []
nn_result = [[all_captions[nn] for nn in nns] for nns in nn_result]
for k in [1, 5, 10, 50, 100]:
r = 0.0
for i, nns in enumerate(nn_result):
if all_target_captions[i] in nns[:k]:
r += 1
r /= len(nn_result)
#out += [('recall_top' + str(k) + '_correct_composition', r)]
out.append(str(k) + ' ---> '+ str(r*100))
if opt.dataset == 'mitstates':
r = 0.0
for i, nns in enumerate(nn_result):
if all_target_captions[i].split()[0] in [c.split()[0] for c in nns[:k]]:
r += 1
r /= len(nn_result)
out += [('recall_top' + str(k) + '_correct_adj', r)]
r = 0.0
for i, nns in enumerate(nn_result):
if all_target_captions[i].split()[1] in [c.split()[1] for c in nns[:k]]:
r += 1
r /= len(nn_result)
out += [('recall_top' + str(k) + '_correct_noun', r)]
return out
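The recall loop repeated in each function (count queries whose target caption appears among the top-k retrieved captions) can be factored into one helper. A sketch under the same conventions; `recall_at_k` is an illustrative name, not an existing function here:

```python
def recall_at_k(ranked_captions, target_captions, ks=(1, 5, 10, 50, 100)):
    """Fraction of queries whose target caption is in the top-k retrieved captions."""
    results = {}
    for k in ks:
        hits = sum(target_captions[i] in nns[:k]
                   for i, nns in enumerate(ranked_captions))
        results[k] = hits / len(ranked_captions)
    return results
```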
def testLoadedBetaWNLP(opt, model, testset,beta, model2):
"""Tests a model over the given testset."""
model.eval()
test_queries = testset.get_test_queries()
all_imgs = []
all_captions = []
all_queries = []
all_queries1=[]
all_target_captions = []
if test_queries:
# compute test query features
all_imgs = datasets.Features33K().Get_all_images()
all_captions = datasets.Features33K().Get_all_captions()
all_queries1 = datasets.Features33K().Get_all_queries()
all_target_captions = datasets.Features33K().Get_target_captions()
for j in range(len(all_queries1)):
all_queries1[j, :] /= np.linalg.norm(all_queries1[j, :])
X1 = np.insert(all_queries1[j], 0, 1)  # prepend bias term
X2 = np.matmul(X1, beta)               # apply the fitted linear map
all_queries.append(X2)
else:
# use training queries to approximate training retrieval performance
all_imgs = datasets.Features172K().Get_all_images()[:10000]
all_captions = datasets.Features172K().Get_all_captions()[:10000]
all_queries1 = datasets.Features172K().Get_all_queries()[:10000]
all_target_captions = datasets.Features172K().Get_all_captions()[:10000]
for j in range(len(all_queries1)):
all_queries1[j, :] /= np.linalg.norm(all_queries1[j, :])
X1 = np.insert(all_queries1[j],0, 1)
X2=np.matmul(X1,beta)
all_queries.append(X2)
all_queries = np.array(all_queries)
all_queries = torch.Tensor(all_queries)
all_queries = model2.myforward(all_queries).data.cpu().numpy()
# feature normalization
# for i in range(all_queries.shape[0]):
# all_queries[i, :] /= np.linalg.norm(all_queries[i, :])
for i in range(all_imgs.shape[0]):
all_imgs[i, :] /= np.linalg.norm(all_imgs[i, :])
# match test queries to target images, get nearest neighbors
nn_result = []
for i in tqdm(range(all_queries.shape[0])):
sims = all_queries[i:(i+1), :].dot(all_imgs.T)
if test_queries:
sims[0, test_queries[i]['source_img_id']] = -10e10 # remove query image
nn_result.append(np.argsort(-sims[0, :])[:110])
# compute recalls
out = []
nn_result = [[all_captions[nn] for nn in nns] for nns in nn_result]
for k in [1, 5, 10, 50, 100]:
r = 0.0
for i, nns in enumerate(nn_result):
if all_target_captions[i] in nns[:k]:
r += 1
r /= len(nn_result)
#out += [('recall_top' + str(k) + '_correct_composition', r)]
out.append(str(k) + ' ---> '+ str(r*100))
if opt.dataset == 'mitstates':
r = 0.0
for i, nns in enumerate(nn_result):
if all_target_captions[i].split()[0] in [c.split()[0] for c in nns[:k]]:
r += 1
r /= len(nn_result)
out += [('recall_top' + str(k) + '_correct_adj', r)]
r = 0.0
for i, nns in enumerate(nn_result):
if all_target_captions[i].split()[1] in [c.split()[1] for c in nns[:k]]:
r += 1
r /= len(nn_result)
out += [('recall_top' + str(k) + '_correct_noun', r)]
return out
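The `X1 = np.insert(f[j], 0, 1); X2 = np.matmul(X1, beta)` pattern applies an affine map whose `beta` was fitted by least squares (the normal equations appear later in `test_on_saved`). A self-contained sketch of both the fit and the per-vector application, using the same bias-first convention; the function names are hypothetical:

```python
import numpy as np

def fit_beta(X, Y):
    """Least-squares affine map Y ~= [1, X] @ beta via the normal equations."""
    Xb = np.insert(X, 0, 1.0, axis=1)            # prepend a bias column
    return np.linalg.inv(Xb.T @ Xb) @ Xb.T @ Y   # beta has shape (d_in+1, d_out)

def apply_beta(x, beta):
    """Map a single feature vector with a fitted beta (same bias convention)."""
    return np.insert(x, 0, 1.0) @ beta
```

For larger or ill-conditioned feature matrices, `np.linalg.lstsq` would be the numerically safer choice than forming the Gram matrix explicitly.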
def testLoadedNLPwBeta(opt, model, testset,beta, model2):
"""Tests a model over the given testset."""
model.eval()
test_queries = testset.get_test_queries()
all_imgs = []
all_captions = []
all_queries = []
all_queries1=[]
all_target_captions = []
if test_queries:
# compute test query features
all_imgs = datasets.Features33K().Get_all_images()
all_captions = datasets.Features33K().Get_all_captions()
all_queries1 = datasets.Features33K().Get_all_queries()
all_target_captions = datasets.Features33K().Get_target_captions()
all_queries1 = torch.Tensor(all_queries1)
all_queries1 = model2.myforward(all_queries1).data.cpu().numpy()
for j in range(len(all_queries1)):
all_queries1[j, :] /= np.linalg.norm(all_queries1[j, :])
X1 = np.insert(all_queries1[j],0, 1)
X2=np.matmul(X1,beta)
all_queries.append(X2)
else:
# use training queries to approximate training retrieval performance
all_imgs = datasets.Features172K().Get_all_images()[:10000]
all_captions = datasets.Features172K().Get_all_captions()[:10000]
all_queries1 = datasets.Features172K().Get_all_queries()[:10000]
all_target_captions = datasets.Features172K().Get_all_captions()[:10000]
all_queries1 = torch.Tensor(all_queries1)
all_queries1 = model2.myforward(all_queries1).data.cpu().numpy()
for j in range(len(all_queries1)):
all_queries1[j, :] /= np.linalg.norm(all_queries1[j, :])
X1 = np.insert(all_queries1[j],0, 1)
X2=np.matmul(X1,beta)
all_queries.append(X2)
all_queries = np.array(all_queries)
# feature normalization
# for i in range(all_queries.shape[0]):
# all_queries[i, :] /= np.linalg.norm(all_queries[i, :])
for i in range(all_imgs.shape[0]):
all_imgs[i, :] /= np.linalg.norm(all_imgs[i, :])
# match test queries to target images, get nearest neighbors
nn_result = []
for i in tqdm(range(all_queries.shape[0])):
sims = all_queries[i:(i+1), :].dot(all_imgs.T)
if test_queries:
sims[0, test_queries[i]['source_img_id']] = -10e10 # remove query image
nn_result.append(np.argsort(-sims[0, :])[:110])
# compute recalls
out = []
nn_result = [[all_captions[nn] for nn in nns] for nns in nn_result]
for k in [1, 5, 10, 50, 100]:
r = 0.0
for i, nns in enumerate(nn_result):
if all_target_captions[i] in nns[:k]:
r += 1
r /= len(nn_result)
#out += [('recall_top' + str(k) + '_correct_composition', r)]
out.append(str(k) + ' ---> '+ str(r*100))
if opt.dataset == 'mitstates':
r = 0.0
for i, nns in enumerate(nn_result):
if all_target_captions[i].split()[0] in [c.split()[0] for c in nns[:k]]:
r += 1
r /= len(nn_result)
out += [('recall_top' + str(k) + '_correct_adj', r)]
r = 0.0
for i, nns in enumerate(nn_result):
if all_target_captions[i].split()[1] in [c.split()[1] for c in nns[:k]]:
r += 1
r /= len(nn_result)
out += [('recall_top' + str(k) + '_correct_noun', r)]
return out
def testLoaded_NLP(opt, model, testset):
"""Tests a model over the given testset."""
model.eval()
test_queries = testset.get_test_queries()
all_imgs = []
all_captions = []
all_queries = []
all_target_captions = []
if test_queries:
# compute test query features
all_imgs = datasets.Features33K().Get_all_images()
all_captions = datasets.Features33K().Get_all_captions()
all_queries = datasets.Features33K().Get_all_queries()
all_target_captions = datasets.Features33K().Get_target_captions()
else:
# use training queries to approximate training retrieval performance
all_imgs = datasets.Features172K().Get_all_images()#[:10000]
all_captions = datasets.Features172K().Get_all_captions()#[:10000]
all_queries = datasets.Features172K().Get_all_queries()#[:10000]
all_target_captions = datasets.Features172K().Get_all_captions()#[:10000]
modelNLR = main2.NLR2(all_queries.shape[1], all_imgs.shape[1], 700)
modelNLR.load_state_dict(torch.load(Path1 + r'\NLPMohamed3.pth'))
modelNLR.eval()
all_queries = torch.from_numpy(all_queries)
all_queries = modelNLR.myforward(all_queries).detach().numpy()
# feature normalization
for i in range(all_queries.shape[0]):
all_queries[i, :] /= np.linalg.norm(all_queries[i, :])
for i in range(all_imgs.shape[0]):
all_imgs[i, :] /= np.linalg.norm(all_imgs[i, :])
# match test queries to target images, get nearest neighbors
nn_result = []
for i in tqdm(range(all_queries.shape[0])):
sims = all_queries[i:(i+1), :].dot(all_imgs.T)
if test_queries:
sims[0, test_queries[i]['source_img_id']] = -10e10 # remove query image
nn_result.append(np.argsort(-sims[0, :])[:110])
# compute recalls
out = []
nn_result = [[all_captions[nn] for nn in nns] for nns in nn_result]
for k in [1, 5, 10, 50, 100]:
r = 0.0
for i, nns in enumerate(nn_result):
if all_target_captions[i] in nns[:k]:
r += 1
r /= len(nn_result)
#out += [('recall_top' + str(k) + '_correct_composition', r)]
out.append(str(k) + ' ---> '+ str(r*100))
if opt.dataset == 'mitstates':
r = 0.0
for i, nns in enumerate(nn_result):
if all_target_captions[i].split()[0] in [c.split()[0] for c in nns[:k]]:
r += 1
r /= len(nn_result)
out += [('recall_top' + str(k) + '_correct_adj', r)]
r = 0.0
for i, nns in enumerate(nn_result):
if all_target_captions[i].split()[1] in [c.split()[1] for c in nns[:k]]:
r += 1
r /= len(nn_result)
out += [('recall_top' + str(k) + '_correct_noun', r)]
return out
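When evaluating on test queries, the loops above overwrite `sims[0, source_img_id]` with a very large negative number so the query's own source image can never be returned as a match. The mechanism in isolation, with toy values:

```python
import numpy as np

sims = np.array([[0.9, 0.4, 0.8]])  # toy similarity row for one query
source_img_id = 0                   # index of the query's own source image
sims[0, source_img_id] = -1e10      # exclude the query image from ranking
top = np.argsort(-sims[0])[:2]      # best remaining matches
```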
def testWbeta(opt, model, testset,beta):
"""Tests a model over the given testset."""
model.eval()
test_queries = testset.get_test_queries()
all_imgs = []
all_captions = []
all_queries = []
all_target_captions = []
if test_queries:
# compute test query features
imgs = []
mods = []
for t in tqdm(test_queries):
imgs += [testset.get_img(t['source_img_id'])]
mods += [t['mod']['str']]
if len(imgs) >= opt.batch_size or t is test_queries[-1]:
if 'torch' not in str(type(imgs[0])):
imgs = [torch.from_numpy(d).float() for d in imgs]
imgs = torch.stack(imgs).float()
imgs = torch.autograd.Variable(imgs)
f = model.compose_img_text(imgs, mods).data.cpu().numpy()
for j in range(len(f)):
# for i in range(f.shape[0]):
# f[i, :] /= np.linalg.norm(f[i, :])
f[j, :] /= np.linalg.norm(f[j, :])
X1 = np.insert(f[j],0, 1)
X2=np.matmul(X1,beta)
f[j]=X2
all_queries += [f]
imgs = []
mods = []
all_queries = np.concatenate(all_queries)
all_target_captions = [t['target_caption'] for t in test_queries]
# compute all image features
imgs = []
for i in tqdm(range(len(testset.imgs))):
imgs += [testset.get_img(i)]
if len(imgs) >= opt.batch_size or i == len(testset.imgs) - 1:
if 'torch' not in str(type(imgs[0])):
imgs = [torch.from_numpy(d).float() for d in imgs]
imgs = torch.stack(imgs).float()
imgs = torch.autograd.Variable(imgs)
imgs = model.extract_img_feature(imgs).data.cpu().numpy()
all_imgs += [imgs]
imgs = []
all_imgs = np.concatenate(all_imgs)
all_captions = [img['captions'][0] for img in testset.imgs]
else:
# use training queries to approximate training retrieval performance
imgs0 = []
imgs = []
mods = []
for i in range(10000):
print('get images=',i,end='\r')
item = testset[i]
imgs += [item['source_img_data']]
mods += [item['mod']['str']]
if len(imgs) >= opt.batch_size or i == 9999:
imgs = torch.stack(imgs).float()
imgs = torch.autograd.Variable(imgs)
f = model.compose_img_text(imgs, mods).data.cpu().numpy()
for j in range(len(f)):
#for i in range(f.shape[0]):
#f[i, :] /= np.linalg.norm(f[i, :])
f[j, :] /= np.linalg.norm(f[j, :])
X1 = np.insert(f[j],0, 1)
X2=np.matmul(X1,beta)
f[j]=X2
all_queries += [f]
imgs = []
mods = []
imgs0 += [item['target_img_data']]
if len(imgs0) >= opt.batch_size or i == 9999:
imgs0 = torch.stack(imgs0).float()
imgs0 = torch.autograd.Variable(imgs0)
imgs0 = model.extract_img_feature(imgs0).data.cpu().numpy()
all_imgs += [imgs0]
imgs0 = []
all_captions += [item['target_caption']]
all_target_captions += [item['target_caption']]
all_imgs = np.concatenate(all_imgs)
all_queries = np.concatenate(all_queries)
# feature normalization
for i in range(all_queries.shape[0]):
all_queries[i, :] /= np.linalg.norm(all_queries[i, :])
for i in range(all_imgs.shape[0]):
all_imgs[i, :] /= np.linalg.norm(all_imgs[i, :])
# match test queries to target images, get nearest neighbors
nn_result = []
for i in tqdm(range(all_queries.shape[0])):
sims = all_queries[i:(i+1), :].dot(all_imgs.T)
if test_queries:
sims[0, test_queries[i]['source_img_id']] = -10e10 # remove query image
nn_result.append(np.argsort(-sims[0, :])[:110])
# compute recalls
out = []
nn_result = [[all_captions[nn] for nn in nns] for nns in nn_result]
for k in [1, 5, 10, 50, 100]:
r = 0.0
for i, nns in enumerate(nn_result):
if all_target_captions[i] in nns[:k]:
r += 1
r /= len(nn_result)
#out += [('recall_top' + str(k) + '_correct_composition', r)]
out.append(str(k) + ' ---> '+ str(r*100))
if opt.dataset == 'mitstates':
r = 0.0
for i, nns in enumerate(nn_result):
if all_target_captions[i].split()[0] in [c.split()[0] for c in nns[:k]]:
r += 1
r /= len(nn_result)
out += [('recall_top' + str(k) + '_correct_adj', r)]
r = 0.0
for i, nns in enumerate(nn_result):
if all_target_captions[i].split()[1] in [c.split()[1] for c in nns[:k]]:
r += 1
r /= len(nn_result)
out += [('recall_top' + str(k) + '_correct_noun', r)]
return out
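`testWbeta` (like the other extraction loops) accumulates inputs until a batch is full or the data runs out, runs the model once per batch, and concatenates the results. That flush pattern can be isolated as below; `batched_apply` and `encode` are illustrative stand-ins for the model calls, not names from this codebase:

```python
import numpy as np

def batched_apply(items, batch_size, encode):
    """Accumulate items into batches, encode each batch, concatenate the outputs."""
    out, batch = [], []
    for i, item in enumerate(items):
        batch.append(item)
        if len(batch) >= batch_size or i == len(items) - 1:  # full batch or last item
            out.append(encode(np.stack(batch)))
            batch = []
    return np.concatenate(out)
```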
def testNLP(opt, model, testset,model2):
"""Tests a model over the given testset."""
model.eval()
test_queries = testset.get_test_queries()
all_imgs = []
all_captions = []
all_queries = []
all_target_captions = []
if test_queries:
# compute test query features
imgs = []
mods = []
for t in tqdm(test_queries):
imgs += [testset.get_img(t['source_img_id'])]
mods += [t['mod']['str']]
if len(imgs) >= opt.batch_size or t is test_queries[-1]:
if 'torch' not in str(type(imgs[0])):
imgs = [torch.from_numpy(d).float() for d in imgs]
imgs = torch.stack(imgs).float()
imgs = torch.autograd.Variable(imgs)
f = model.compose_img_text(imgs, mods).data.cpu().numpy()
for i in range(f.shape[0]):
f[i, :] /= np.linalg.norm(f[i, :])
f = np.insert(f, 0, 1, axis=1)  # prepend a bias column per row (without axis=1, np.insert flattens the batch)
f = torch.from_numpy(f)
f = model2.myforward(f).data.cpu().numpy()
all_queries += [f]
imgs = []
mods = []
all_queries = np.concatenate(all_queries)
all_target_captions = [t['target_caption'] for t in test_queries]
# compute all image features
imgs = []
for i in tqdm(range(len(testset.imgs))):
imgs += [testset.get_img(i)]
if len(imgs) >= opt.batch_size or i == len(testset.imgs) - 1:
if 'torch' not in str(type(imgs[0])):
imgs = [torch.from_numpy(d).float() for d in imgs]
imgs = torch.stack(imgs).float()
imgs = torch.autograd.Variable(imgs)
imgs = model.extract_img_feature(imgs).data.cpu().numpy()
all_imgs += [imgs]
imgs = []
all_imgs = np.concatenate(all_imgs)
all_captions = [img['captions'][0] for img in testset.imgs]
else:
# use training queries to approximate training retrieval performance
imgs0 = []
imgs = []
mods = []
for i in range(10000):
print('get images=',i,end='\r')
item = testset[i]
imgs += [item['source_img_data']]
mods += [item['mod']['str']]
if len(imgs) >= opt.batch_size or i == 9999:
imgs = torch.stack(imgs).float()
imgs = torch.autograd.Variable(imgs)
f = model.compose_img_text(imgs, mods).data.cpu().numpy()
for j in range(f.shape[0]):  # j, not i: reusing i would clobber the outer loop index
f[j, :] /= np.linalg.norm(f[j, :])
f = np.insert(f, 0, 1, axis=1)  # prepend a bias column per row (without axis=1, np.insert flattens the batch)
f = torch.from_numpy(f)
f = model2.myforward(f).data.cpu().numpy()
all_queries += [f]
imgs = []
mods = []
imgs0 += [item['target_img_data']]
if len(imgs0) >= opt.batch_size or i == 9999:
imgs0 = torch.stack(imgs0).float()
imgs0 = torch.autograd.Variable(imgs0)
imgs0 = model.extract_img_feature(imgs0).data.cpu().numpy()
all_imgs += [imgs0]
imgs0 = []
all_captions += [item['target_caption']]
all_target_captions += [item['target_caption']]
all_imgs = np.concatenate(all_imgs)
all_queries = np.concatenate(all_queries)
# feature normalization
# for i in range(all_queries.shape[0]):
# all_queries[i, :] /= np.linalg.norm(all_queries[i, :])
for i in range(all_imgs.shape[0]):
all_imgs[i, :] /= np.linalg.norm(all_imgs[i, :])
# match test queries to target images, get nearest neighbors
nn_result = []
for i in tqdm(range(all_queries.shape[0])):
sims = all_queries[i:(i+1), :].dot(all_imgs.T)
if test_queries:
sims[0, test_queries[i]['source_img_id']] = -10e10 # remove query image
nn_result.append(np.argsort(-sims[0, :])[:110])
# compute recalls
out = []
nn_result = [[all_captions[nn] for nn in nns] for nns in nn_result]
for k in [1, 5, 10, 50, 100]:
r = 0.0
for i, nns in enumerate(nn_result):
if all_target_captions[i] in nns[:k]:
r += 1
r /= len(nn_result)
#out += [('recall_top' + str(k) + '_correct_composition', r)]
out.append(str(k) + ' ---> '+ str(r*100))
if opt.dataset == 'mitstates':
r = 0.0
for i, nns in enumerate(nn_result):
if all_target_captions[i].split()[0] in [c.split()[0] for c in nns[:k]]:
r += 1
r /= len(nn_result)
out += [('recall_top' + str(k) + '_correct_adj', r)]
r = 0.0
for i, nns in enumerate(nn_result):
if all_target_captions[i].split()[1] in [c.split()[1] for c in nns[:k]]:
r += 1
r /= len(nn_result)
out += [('recall_top' + str(k) + '_correct_noun', r)]
return out
def testWbetaWsaveddata(opt, model, testset,beta,savedtrain,savedtest):
"""Tests a model over the given testset."""
model.eval()
test_queries = testset.get_test_queries()
all_imgs = []
all_captions = []
all_queries = []
all_target_captions = []
if test_queries:
# compute test query features
imgs = []
mods = []
for t in range(len(savedtest)):
print('get testdata=',t,end='\r')
f=savedtest[t]['SourceTrig']
f=np.expand_dims(f, axis=0)
for j in range(len(f)):
f[j, :] /= np.linalg.norm(f[j, :])
X1 = np.insert(f[j],0, 1)
X2=np.matmul(X1,beta)
f[j]=X2
all_queries += [f]
imgs = []
mods = []
all_queries = np.concatenate(all_queries)
all_target_captions = [t['target_caption'] for t in test_queries]
# compute all image features
imgs = []
for i in tqdm(range(len(testset.imgs))):
imgs += [testset.get_img(i)]
if len(imgs) >= opt.batch_size or i == len(testset.imgs) - 1:
if 'torch' not in str(type(imgs[0])):
imgs = [torch.from_numpy(d).float() for d in imgs]
imgs = torch.stack(imgs).float()
imgs = torch.autograd.Variable(imgs)
imgs = model.extract_img_feature(imgs).data.cpu().numpy()
all_imgs += [imgs]
imgs = []
all_imgs = np.concatenate(all_imgs)
all_captions = [img['captions'][0] for img in testset.imgs]
else:
# use training queries to approximate training retrieval performance
imgs0 = []
imgs = []
mods = []
for i in range(10000):
print('get images=',i,end='\r')
item = testset[i]
f=savedtrain[i]['SourceTrig']
f=np.expand_dims(f, axis=0)
for j in range(len(f)):
f[j, :] /= np.linalg.norm(f[j, :])
X1 = np.insert(f[j],0, 1)
X2=np.matmul(X1,beta)
f[j]=X2
all_queries += [f]
imgs = []
mods = []
imgs0 += [savedtrain[i]['TargetData']]
all_imgs += [imgs0]
imgs0 = []
all_captions += [item['target_caption']]
all_target_captions += [item['target_caption']]
f=[]
all_imgs = np.concatenate(all_imgs)
all_queries = np.concatenate(all_queries)
# feature normalization
for i in range(all_queries.shape[0]):
all_queries[i, :] /= np.linalg.norm(all_queries[i, :])
for i in range(all_imgs.shape[0]):
all_imgs[i, :] /= np.linalg.norm(all_imgs[i, :])
# match test queries to target images, get nearest neighbors
nn_result = []
for i in tqdm(range(all_queries.shape[0])):
sims = all_queries[i:(i+1), :].dot(all_imgs.T)
if test_queries:
sims[0, test_queries[i]['source_img_id']] = -10e10 # remove query image
nn_result.append(np.argsort(-sims[0, :])[:110])
# compute recalls
out = []
nn_result = [[all_captions[nn] for nn in nns] for nns in nn_result]
for k in [1, 5, 10, 50, 100]:
r = 0.0
for i, nns in enumerate(nn_result):
if all_target_captions[i] in nns[:k]:
r += 1
r /= len(nn_result)
#out += [('recall_top' + str(k) + '_correct_composition', r)]
out.append(str(k) + ' ---> '+ str(r*100))
if opt.dataset == 'mitstates':
r = 0.0
for i, nns in enumerate(nn_result):
if all_target_captions[i].split()[0] in [c.split()[0] for c in nns[:k]]:
r += 1
r /= len(nn_result)
out += [('recall_top' + str(k) + '_correct_adj', r)]
r = 0.0
for i, nns in enumerate(nn_result):
if all_target_captions[i].split()[1] in [c.split()[1] for c in nns[:k]]:
r += 1
r /= len(nn_result)
out += [('recall_top' + str(k) + '_correct_noun', r)]
return out
def test_and_save(opt, model, testset):
"""Tests a model over the given testset."""
model.eval()
test_queries = testset.get_test_queries()
all_imgs = []
all_captions = []
all_queries = []
all_target_captions = []
if test_queries:
# compute test query features
imgs = []
mods = []
for t in tqdm(test_queries):
imgs += [testset.get_img(t['source_img_id'])]
all_captions += [t['source_caption']]
all_target_captions += [t['target_caption']]
mods += [t['mod']['str']]
if len(imgs) >= opt.batch_size or t is test_queries[-1]:
if 'torch' not in str(type(imgs[0])):
imgs = [torch.from_numpy(d).float() for d in imgs]
imgs = torch.stack(imgs).float()
imgs = torch.autograd.Variable(imgs)#.cuda()
f = model.compose_img_text(imgs, mods).data.cpu().numpy()
all_queries += [f]
imgs = []
mods = []
all_queries = np.concatenate(all_queries)
#all_target_captions = [t['target_caption'] for t in test_queries]
# compute all image features
imgs = []
for i in tqdm(range(len(testset.imgs))):
imgs += [testset.get_img(i)]
if len(imgs) >= opt.batch_size or i == len(testset.imgs) - 1:
if 'torch' not in str(type(imgs[0])):
imgs = [torch.from_numpy(d).float() for d in imgs]
imgs = torch.stack(imgs).float()
imgs = torch.autograd.Variable(imgs)#.cuda()
imgs = model.extract_img_feature(imgs).data.cpu().numpy()
all_imgs += [imgs]
imgs = []
all_imgs = np.concatenate(all_imgs)
all_captions = [img['captions'][0] for img in testset.imgs]
else:
# use training queries to approximate training retrieval performance
imgs0 = []
imgs = []
mods = []
for i in range(len(testset)):
item = testset[i]
imgs += [item['source_img_data']]
mods += [item['mod']['str']]
if len(imgs) >= opt.batch_size or i == len(testset) - 1:  # flush on full batch or last item
imgs = torch.stack(imgs).float()
imgs = torch.autograd.Variable(imgs)
f = model.compose_img_text(imgs, mods).data.cpu().numpy() #.cuda()
all_queries += [f]
imgs = []
mods = []
imgs0 += [item['target_img_data']]
if len(imgs0) >= opt.batch_size or i == len(testset) - 1:
imgs0 = torch.stack(imgs0).float()
imgs0 = torch.autograd.Variable(imgs0)
imgs0 = model.extract_img_feature(imgs0).data.cpu().numpy() #.cuda()
all_imgs += [imgs0]
imgs0 = []
all_captions += [item['source_caption']]
all_target_captions += [item['target_caption']]
all_imgs = np.concatenate(all_imgs)
all_queries = np.concatenate(all_queries)
if test_queries:
with open(Path1+r"/"+'test_test_queries.pkl', 'wb') as fp:
pickle.dump(test_queries, fp)
with open(Path1+r"/"+'test_all_queries.pkl', 'wb') as fp:
pickle.dump(all_queries, fp)
with open(Path1+r"/"+'test_all_imgs.pkl', 'wb') as fp:
pickle.dump(all_imgs, fp)
with open(Path1+r"/"+'test_all_captions.pkl', 'wb') as fp:
pickle.dump(all_captions, fp)
with open(Path1+r"/"+'test_all_target_captions.pkl', 'wb') as fp:
pickle.dump(all_target_captions, fp)
else:
with open(Path1+r"/"+'test_queries172k.pkl', 'wb') as fp:
pickle.dump(test_queries, fp)
with open(Path1+r"/"+'all_queries172k.pkl', 'wb') as fp:
pickle.dump(all_queries, fp)
with open(Path1+r"/"+'all_imgs172k.pkl', 'wb') as fp:
pickle.dump(all_imgs, fp)
with open(Path1+r"/"+'all_captions172k.pkl', 'wb') as fp:
pickle.dump(all_captions, fp)
with open(Path1+r"/"+'all_target_captions172k.pkl', 'wb') as fp:
pickle.dump(all_target_captions, fp)
# feature normalization
for i in range(all_queries.shape[0]):
all_queries[i, :] /= np.linalg.norm(all_queries[i, :])
for i in range(all_imgs.shape[0]):
all_imgs[i, :] /= np.linalg.norm(all_imgs[i, :])
# match test queries to target images, get nearest neighbors
nn_result = []
for i in tqdm(range(all_queries.shape[0])):
sims = all_queries[i:(i+1), :].dot(all_imgs.T)
if test_queries:
sims[0, test_queries[i]['source_img_id']] = -10e10 # remove query image
nn_result.append(np.argsort(-sims[0, :])[:110])
# compute recalls
out = []
nn_result = [[all_captions[nn] for nn in nns] for nns in nn_result]
for k in [1, 5, 10, 50, 100]:
r = 0.0
for i, nns in enumerate(nn_result):
if all_target_captions[i] in nns[:k]:
r += 1
r /= len(nn_result)
#out += [('recall_top' + str(k) + '_correct_composition', r)]
out.append(str(k) + ' ---> '+ str(r*100))
if opt.dataset == 'mitstates':
r = 0.0
for i, nns in enumerate(nn_result):
if all_target_captions[i].split()[0] in [c.split()[0] for c in nns[:k]]:
r += 1
r /= len(nn_result)
out += [('recall_top' + str(k) + '_correct_adj', r)]
r = 0.0
for i, nns in enumerate(nn_result):
if all_target_captions[i].split()[1] in [c.split()[1] for c in nns[:k]]:
r += 1
r /= len(nn_result)
out += [('recall_top' + str(k) + '_correct_noun', r)]
return out
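`test_and_save` pickles the extracted features and captions so later runs (`test_on_saved`, `train_network_on_saved`) can skip feature extraction entirely. The save/load convention is a plain pickle round trip; the file name below is a stand-in, not one of the actual `Path1` files:

```python
import os
import pickle
import tempfile
import numpy as np

features = np.ones((2, 3), dtype=np.float32)
path = os.path.join(tempfile.gettempdir(), 'demo_feats.pkl')  # stand-in for Path1 + filename
with open(path, 'wb') as fp:
    pickle.dump(features, fp)      # written once after extraction
with open(path, 'rb') as fp:
    restored = pickle.load(fp)     # reloaded by later evaluation runs
```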
def test_on_saved(test_train,normal_beta,create_load,filename,normal_normalize,sz,dot_eucld):
"""Evaluates retrieval on saved features. test_train==0 uses the saved test split, otherwise the saved training split; normal_beta==1 applies a linear (beta) correction, either fitted here (create_load==0) or loaded from filename."""
if test_train==0:
with open(Path1+r"/"+'test_test_queries.pkl', 'rb') as fp:
test_queries=pickle.load( fp)
with open(Path1+r"/"+'test_all_queriesG.pkl', 'rb') as fp:
all_queries=pickle.load( fp)
with open(Path1+r"/"+'test_all_imgsG.pkl', 'rb') as fp:
all_imgs=pickle.load( fp)
with open(Path1+r"/"+'test_all_target_captionsG.pkl', 'rb') as fp:
all_captions=pickle.load( fp)  # NOTE: gallery captions are read from the target-caption file here
with open(Path1+r"/"+'test_all_target_captionsG.pkl', 'rb') as fp:
all_target_captions=pickle.load( fp)
else:
with open(Path1+r"/"+'test_queries1806172k.pkl', 'rb') as fp:
test_queries=pickle.load( fp)
with open(Path1+r"/"+'all_queries1806172k.pkl', 'rb') as fp:
all_queries=pickle.load( fp)
with open(Path1+r"/"+'all_imgs1806172k.pkl', 'rb') as fp:
all_imgs=pickle.load( fp)
with open(Path1+r"/"+'all_captions1806172k.pkl', 'rb') as fp:
all_captions=pickle.load( fp)
with open(Path1+r"/"+'all_target_captions1806172k.pkl', 'rb') as fp:
all_target_captions=pickle.load( fp)
if (normal_beta==1 ):
if(create_load==0):
#################################
new_all_queries=np.zeros((all_queries.shape[0],all_queries.shape[1]+1))
for i in range(all_queries.shape[0]):
f=all_queries[i,:]
if (normal_normalize==1):
f/=np.linalg.norm(f)
f=np.insert(f,0,1)
new_all_queries[i,:]=f
if (normal_normalize==1):
for i in range(all_imgs.shape[0]):
all_imgs[i, :] /= np.linalg.norm(all_imgs[i, :])
new_all_queriest=new_all_queries.transpose()
X1=np.matmul(new_all_queriest,new_all_queries)
X2=np.linalg.inv(X1)
X3=np.matmul(X2,new_all_queriest)
beta=np.matmul(X3,all_imgs)
new_all_queries=[]
new_all_queriest=[]
#################################
with open(Path1+r"/"+filename, 'wb') as fp:
pickle.dump( beta, fp)
else:
with open(Path1+r"/"+filename, 'rb') as fp:
beta=pickle.load( fp)
for t in range(int(len(all_queries)/sz)):
if (t%100==0):
print('get testdata=',t,end='\r')
f=all_queries[t,:]
if (normal_normalize==1):
f/=np.linalg.norm(f)
f=np.insert(f,0,1)
X2=np.matmul(f,beta)
all_queries[t,:] = X2
# feature normalization
for i in range(int(all_queries.shape[0]/sz)):
all_queries[i, :] /= np.linalg.norm(all_queries[i, :])
for i in range(int(all_imgs.shape[0]/sz)):
all_imgs[i, :] /= np.linalg.norm(all_imgs[i, :])
# match test queries to target images, get nearest neighbors
nn_result = []
sims=np.zeros((1,int(all_imgs.shape[0]/sz)))
for i in tqdm(range(int(all_queries.shape[0]/sz))):
if (dot_eucld==0):
sims = all_queries[i:(i+1), :].dot(all_imgs[:int(all_imgs.shape[0]/sz)].T)
else:
sims[0, :] = np.sum(abs(all_imgs[:int(all_imgs.shape[0]/sz), :] - all_queries[i, :]), axis=1)  # L1 (Manhattan) distance, despite the 'eucld' name
#for j in range(int(all_imgs.shape[0]/sz)):
# sims[0,j] = distance.euclidean(all_queries[i, :], all_imgs[j, :])
if test_train==0:
if (dot_eucld==0):
sims[0, test_queries[i]['source_img_id']] = -10e10 # remove query image
else:
sims[0, test_queries[i]['source_img_id']] = 10e10 # remove query image
if (dot_eucld==0):
nn_result.append(np.argsort(-sims[0, :])[:110])
else:
nn_result.append(np.argsort(sims[0, :])[:110])
all_imgs=[]
all_queries=[]
# compute recalls
out = []
nn_result = [[all_captions[nn] for nn in nns] for nns in nn_result]
for k in [1, 5, 10, 50, 100]:
r = 0.0
for i, nns in enumerate(nn_result):
if all_target_captions[i] in nns[:k]:
r += 1
r /= len(nn_result)
#out += [('recall_top' + str(k) + '_correct_composition', r)]
out.append(str(k) + ' ---> '+ str(r*100))
print(out)
return out
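`test_on_saved` selects between two metrics via `dot_eucld`: a dot product on normalized features (cosine similarity, sorted descending) and a sum of absolute differences, which is the L1/Manhattan distance, sorted ascending, despite the `eucld` name. A toy comparison of the two orderings:

```python
import numpy as np

gallery = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
query = np.array([0.8, 0.6])

cos = gallery @ query                         # similarity: larger is better
l1 = np.sum(np.abs(gallery - query), axis=1)  # L1 distance: smaller is better
best_cos = np.argsort(-cos)[0]                # descending sort for similarity
best_l1 = np.argsort(l1)[0]                   # ascending sort for distance
```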
def train_network_on_saved(test_train,create_load,normal_normalize,filename,sz,dot_eucld):
"""Evaluates retrieval directly on saved features (despite the name, nothing is trained here)."""
if test_train==0:
with open(Path1+r"/"+'test_test_queries.pkl', 'rb') as fp:
test_queries=pickle.load( fp)
with open(Path1+r"/"+'test_all_queriesG.pkl', 'rb') as fp:
all_queries=pickle.load( fp)
with open(Path1+r"/"+'test_all_imgsG.pkl', 'rb') as fp:
all_imgs=pickle.load( fp)
with open(Path1+r"/"+'test_all_target_captionsG.pkl', 'rb') as fp:
all_captions=pickle.load( fp)
with open(Path1+r"/"+'test_all_target_captionsG.pkl', 'rb') as fp:
all_target_captions=pickle.load( fp)
else:
with open(Path1+r"/"+'test_queries172k.pkl', 'rb') as fp:
test_queries=pickle.load( fp)
with open(Path1+r"/"+'all_queries172k.pkl', 'rb') as fp:
all_queries=pickle.load( fp)
with open(Path1+r"/"+'all_imgs172k.pkl', 'rb') as fp:
all_imgs=pickle.load( fp)
with open(Path1+r"/"+'all_captions172k.pkl', 'rb') as fp:
all_captions=pickle.load( fp)
with open(Path1+r"/"+'all_target_captions172k.pkl', 'rb') as fp:
all_target_captions=pickle.load( fp)
#################################
# feature normalization
for i in range(int(all_queries.shape[0]/sz)):
all_queries[i, :] /= np.linalg.norm(all_queries[i, :])
for i in range(int(all_imgs.shape[0]/sz)):
all_imgs[i, :] /= np.linalg.norm(all_imgs[i, :])
# match test queries to target images, get nearest neighbors
nn_result = []
sims=np.zeros((1,int(all_imgs.shape[0]/sz)))
for i in tqdm(range(int(all_queries.shape[0]/sz))):
if (dot_eucld==0):
sims = all_queries[i:(i+1), :].dot(all_imgs[:int(all_imgs.shape[0]/sz)].T)
else:
sims[0, :] = np.sum(abs(all_imgs[:int(all_imgs.shape[0]/sz), :] - all_queries[i, :]), axis=1)  # L1 (Manhattan) distance, despite the 'eucld' name
#for j in range(int(all_imgs.shape[0]/sz)):
# sims[0,j] = distance.euclidean(all_queries[i, :], all_imgs[j, :])
if test_train==0:
if (dot_eucld==0):
sims[0, test_queries[i]['source_img_id']] = -10e10 # remove query image
else:
sims[0, test_queries[i]['source_img_id']] = 10e10 # remove query image
if (dot_eucld==0):
nn_result.append(np.argsort(-sims[0, :])[:105])
else:
nn_result.append(np.argsort(sims[0, :])[:105])
all_imgs=[]
all_queries=[]
# compute recalls
out = []
nn_result = [[all_captions[nn] for nn in nns] for nns in nn_result]
for k in [1, 5, 10, 50, 100]:
r = 0.0
for i, nns in enumerate(nn_result):
if all_target_captions[i] in nns[:k]:
r += 1
r /= len(nn_result)
#out += [('recall_top' + str(k) + '_correct_composition', r)]
out.append(str(k) + ' ---> '+ str(r*100))
print(out)
return out
def Phase2_networks_tests(test_train,all_queries):
if test_train==0:
test_queries = datasets.Fashion200k(path=Path1,
split='train',
transform=torchvision.transforms.Compose([
torchvision.transforms.Resize(224),
torchvision.transforms.CenterCrop(224),
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])
])).test_queries
all_imgs = datasets.Features172K().Get_phixtarget()
all_captions= datasets.Features172K().Get_all_captions()
all_target_captions=datasets.Features172K().Get_all_captions()
else:
test_queries = datasets.Fashion200k(path=Path1,
split='test',
transform=torchvision.transforms.Compose([
torchvision.transforms.Resize(224),
torchvision.transforms.CenterCrop(224),
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])
])).test_queries
all_imgs=datasets.Features33K().Get_all_images()
all_captions=datasets.Features33K().Get_target_captions()
all_target_captions=datasets.Features33K().Get_target_captions()
#################################
# feature normalization
all_imgs=np.concatenate(all_imgs)
for i in range(int(all_queries.shape[0])):
all_queries[i, :] /= np.linalg.norm(all_queries[i, :])
for i in range(int(all_imgs.shape[0])):
all_imgs[i, :] /= np.linalg.norm(all_imgs[i, :])
# match test queries to target images, get nearest neighbors
nn_result = []
sims=np.zeros((1,int(all_imgs.shape[0])))
for i in tqdm(range(int(all_queries.shape[0]))):
sims = all_queries[i:(i+1), :].dot(all_imgs[:int(all_imgs.shape[0])].T)
if test_train==0:
sims[0, test_queries[i]['source_img_id']] = -10e10 # remove query image
nn_result.append(np.argsort(-sims[0, :])[:105])
all_imgs=[]
all_queries=[]
# compute recalls
out = []
nn_result = [[all_captions[nn] for nn in nns] for nns in nn_result]
for k in [1, 5, 10, 50, 100]:
r = 0.0
for i, nns in enumerate(nn_result):
if all_target_captions[i] in nns[:k]:
r += 1
r /= len(nn_result)
#out += [('recall_top' + str(k) + '_correct_composition', r)]
out.append(str(k) + ' ---> '+ str(r*100))
print(out)
return out
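The per-row normalization loops in both functions above can be vectorized with NumPy; a minimal sketch, assuming no all-zero feature rows (which holds once features are extracted):

```python
import numpy as np

def l2_normalize_rows(x):
    # Divide each row by its Euclidean norm, equivalent to the
    # element-wise loops over all_queries / all_imgs above.
    norms = np.linalg.norm(x, axis=1, keepdims=True)
    return x / norms
```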
| 33.516805 | 93 | 0.620769 | 11,255 | 76,787 | 4.037761 | 0.027277 | 0.066674 | 0.052745 | 0.00911 | 0.955617 | 0.95225 | 0.946463 | 0.942106 | 0.933898 | 0.931896 | 0 | 0.037153 | 0.218331 | 76,787 | 2,290 | 94 | 33.531441 | 0.719986 | 0.167593 | 0 | 0.932138 | 0 | 0 | 0.047074 | 0.007222 | 0 | 0 | 0 | 0 | 0 | 1 | 0.014725 | false | 0 | 0.005122 | 0 | 0.034571 | 0.005762 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
733263cdec2c01a6057d6891624cff8d4a5f137e | 10,799 | py | Python | main/grade_views.py | waterwoodwind/QA_web | 307398106fc38716d3d2cd9f91805a801bd37368 | [
"MIT"
] | null | null | null | main/grade_views.py | waterwoodwind/QA_web | 307398106fc38716d3d2cd9f91805a801bd37368 | [
"MIT"
] | null | null | null | main/grade_views.py | waterwoodwind/QA_web | 307398106fc38716d3d2cd9f91805a801bd37368 | [
"MIT"
] | null | null | null | #coding=utf-8
from django.shortcuts import render
from django.http import HttpResponse
from main.models import qa_info
from django.core import serializers
import json
import pandas as pd
import arrow
import re
from views import df_chinese_data
from views import date_range_df_chinese_data
from save_load_func import list_all_data
from func_for_grade_views import get_qa_info_with_grade
from func_for_grade_views import get_hr_info
from func_for_grade_views import get_hr_info_df
def staff_grade_year(request):
if request.method == 'POST':
post_data = request.POST
date_range = post_data["date_range"]
date_start = date_range.split(' to ')[0]
date_end = date_range.split(' to ')[1]
print date_start,date_end
df_data = pd.DataFrame(date_range_df_chinese_data(date_start,date_end))
df_data = df_data[df_data[u"严重程度"] > 0]
df_data = df_data.loc[:, [u"责任人", u"检查者", u"严重程度"]]
df_data[[u"严重程度"]] = df_data[[u"严重程度"]].apply(pd.to_numeric)
else:
df_data = get_qa_info_with_grade()
if df_data.empty:
return HttpResponse(u"该时间范围内无数据,请返回上一页")
#print df_data
    # Get the input time range and slice df_data by that range once
    # Build name_grade_department_list
name_grade_department_list = []
list_hr_info = get_hr_info()
    # Loop over the staff list
for i,element in enumerate(list_hr_info):
sum_single_person = 0
name = element[1]
department = element[2]
#print i, name
        # Compute this person's score: take all rows of df_data for this person as df_single_person
df_single_person = df_data[df_data[u"责任人"]==name]
if not df_single_person.empty:
#print df_single_person
            # Sum df_single_person into a total score
sum_single_person = df_single_person[u"严重程度"].sum()
sum_single_person = int(sum_single_person)
        # Push name, total score and department into name_grade_department_list
single_dict = {}
single_dict[u"责任人"] = name
single_dict[u"部门"] = department
single_dict[u"安全分"] = 100 - sum_single_person
name_grade_department_list.append(single_dict)
#print name_grade_department_list
json_grade = json.dumps(name_grade_department_list)
return render(request, 'staff_grade_year.html', {'json_grade': json_grade})
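The per-person scoring loop above is equivalent to a pandas groupby; a sketch with hypothetical names (`safety_scores`, `who`, `sev`). Note this only covers people who appear in the frame, whereas the loop above gives everyone else the full base score:

```python
import pandas as pd

def safety_scores(df, person_col, severity_col, base=100):
    # Subtract each person's summed severity from the base score,
    # the same arithmetic as "100 - sum_single_person" above.
    totals = df.groupby(person_col)[severity_col].sum()
    return (base - totals).to_dict()
```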
def strutator_grade(request):
if request.method == 'POST':
post_data = request.POST
date_range = post_data["date_range"]
date_start = date_range.split(' to ')[0]
date_end = date_range.split(' to ')[1]
print date_start, date_end
df_data = pd.DataFrame(date_range_df_chinese_data(date_start, date_end))
df_data = df_data[df_data[u"严重程度"] > 0]
df_data = df_data.loc[:, [u"责任人", u"检查者", u"严重程度"]]
df_data[[u"严重程度"]] = df_data[[u"严重程度"]].apply(pd.to_numeric)
else:
df_data = get_qa_info_with_grade()
if df_data.empty:
return HttpResponse(u"该时间范围内无数据,请返回上一页")
#print df_data
    # Get the input time range and slice df_data by that range once
    # Build name_grade_department_list
name_grade_department_list = []
list_hr_info = get_hr_info()
    # Loop over the staff list
for i,element in enumerate(list_hr_info):
sum_single_person = 0
name = element[1]
department = element[2]
#print i, name
        # Compute this person's score: take all rows of df_data for this person as df_single_person
df_single_person = df_data[df_data[u"检查者"]==name]
if not df_single_person.empty:
#print df_single_person
            # Sum df_single_person into a total score
sum_single_person = df_single_person[u"严重程度"].sum()
sum_single_person = int(sum_single_person)
        # Push name, total score and department into name_grade_department_list
single_dict = {}
single_dict[u"检查者"] = name
single_dict[u"部门"] = department
single_dict[u"检查分"] = sum_single_person
name_grade_department_list.append(single_dict)
#print name_grade_department_list
json_grade = json.dumps(name_grade_department_list)
return render(request, 'strutator_grade.html', {'json_grade': json_grade})
def department_grade(request):
if request.method == 'POST':
post_data = request.POST
date_range = post_data["date_range"]
date_start = date_range.split(' to ')[0]
date_end = date_range.split(' to ')[1]
print date_start, date_end
df_data = pd.DataFrame(date_range_df_chinese_data(date_start, date_end))
df_data = df_data[df_data[u"严重程度"] > 0]
df_data = df_data.loc[:, [u"责任人", u"检查者", u"严重程度"]]
df_data[[u"严重程度"]] = df_data[[u"严重程度"]].apply(pd.to_numeric)
else:
df_data = get_qa_info_with_grade()
if df_data.empty:
return HttpResponse(u"该时间范围内无数据,请返回上一页")
#print df_data
    # Get the input time range and slice df_data by that range once
    # Build name_grade_department_list
person_grade_list = []
list_hr_info = get_hr_info()
df_hr_info = get_hr_info_df()
    # Loop over the staff list
for i,element in enumerate(list_hr_info):
sum_single_person = 0
name = element[1]
department = element[2]
#print i, name
        # Compute this person's score: take all rows of df_data for this person as df_single_person
df_single_person = df_data[df_data[u"责任人"]==name]
if not df_single_person.empty:
#print df_single_person
            # Sum df_single_person into a total score
sum_single_person = df_single_person[u"严重程度"].sum()
sum_single_person = int(sum_single_person)
            # Push name, total score and department into name_grade_department_list
person_grade_list.append(100 - sum_single_person)
else:
person_grade_list.append(100)
# print name_grade_department_list
df_hr_info[u'安全分'] = person_grade_list
df_agg = df_hr_info.groupby(u'部门').agg('mean').round(2)
print df_agg,type(df_agg)
df_agg = df_agg.reset_index()
df_dict = df_agg.to_dict('records')
print df_dict
json_grade = json.dumps(df_dict)
    # Team scores
df_team_mean = df_hr_info.groupby(u'班组').agg('mean').round(2)
print df_team_mean
df_team_mean_reset = df_team_mean.reset_index()
dict_team_mean = df_team_mean_reset.to_dict('records')
print dict_team_mean
team_mean = json.dumps(dict_team_mean)
return render(request, 'department_grade.html', {'json_grade': json_grade,
'json_team':team_mean})
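The department aggregation above (groupby + mean + round + records) can be checked in isolation; a simplified sketch that aggregates a single named score column rather than every numeric column (`group_means`, `dept`, `score` are hypothetical names):

```python
import pandas as pd

def group_means(df, group_col, score_col):
    # Mean score per group, rounded to 2 decimals, returned as the same
    # list-of-dict "records" shape that is serialized to JSON above.
    agg = df.groupby(group_col)[score_col].mean().round(2).reset_index()
    return agg.to_dict("records")
```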
def self_checking_grade(request):
if request.method == 'POST':
post_data = request.POST
date_range = post_data["date_range"]
date_start = date_range.split(' to ')[0]
date_end = date_range.split(' to ')[1]
print date_start, date_end
df_data = pd.DataFrame(date_range_df_chinese_data(date_start, date_end))
df_data = df_data[df_data[u"严重程度"] > 0]
df_data = df_data.loc[:, [u"责任人", u"检查者", u"严重程度"]]
df_data[[u"严重程度"]] = df_data[[u"严重程度"]].apply(pd.to_numeric)
else:
df_data = get_qa_info_with_grade()
if df_data.empty:
return HttpResponse(u"该时间范围内无数据,请返回上一页")
#print df_data
    # Get the input time range and slice df_data by that range once
    # Build name_grade_department_list
person_grade_list = []
list_hr_info = get_hr_info()
df_hr_info = get_hr_info_df()
    # Loop over the staff list
for i,element in enumerate(list_hr_info):
sum_single_person = 0
name = element[1]
department = element[2]
#print i, name
        # Compute this person's score: take all rows of df_data for this person as df_single_person
df_single_person = df_data[df_data[u"检查者"]==name]
if not df_single_person.empty:
#print df_single_person
            # Sum df_single_person into a total score
sum_single_person = df_single_person[u"严重程度"].sum()
sum_single_person = int(sum_single_person)
            # Push name, total score and department into name_grade_department_list
person_grade_list.append(sum_single_person)
else:
person_grade_list.append(0)
# print name_grade_department_list
df_hr_info[u'安全分'] = person_grade_list
df_scrutator = df_hr_info[df_hr_info[u'安全分']>0]
df_agg = df_scrutator.groupby(u'部门').agg('mean').round(2)
df_agg = df_agg.reset_index()
df_dict = df_agg.to_dict('records')
json_grade = json.dumps(df_dict)
    # Team scores
df_team_mean = df_scrutator.groupby(u'班组').agg('mean').round(2)
df_team_mean_reset = df_team_mean.reset_index()
dict_team_mean = df_team_mean_reset.to_dict('records')
team_mean = json.dumps(dict_team_mean)
return render(request, 'department_grade.html', {'json_grade': json_grade,
'json_team':team_mean})
def self_checking_grade_totlal(request):
if request.method == 'POST':
post_data = request.POST
date_range = post_data["date_range"]
date_start = date_range.split(' to ')[0]
date_end = date_range.split(' to ')[1]
print date_start, date_end
df_data = pd.DataFrame(date_range_df_chinese_data(date_start, date_end))
df_data = df_data[df_data[u"严重程度"] > 0]
df_data = df_data.loc[:, [u"责任人", u"检查者", u"严重程度"]]
df_data[[u"严重程度"]] = df_data[[u"严重程度"]].apply(pd.to_numeric)
else:
df_data = get_qa_info_with_grade()
if df_data.empty:
return HttpResponse(u"该时间范围内无数据,请返回上一页")
#print df_data
    # Get the input time range and slice df_data by that range once
    # Build name_grade_department_list
person_grade_list = []
list_hr_info = get_hr_info()
df_hr_info = get_hr_info_df()
    # Loop over the staff list
for i,element in enumerate(list_hr_info):
sum_single_person = 0
name = element[1]
department = element[2]
#print i, name
        # Compute this person's score: take all rows of df_data for this person as df_single_person
df_single_person = df_data[df_data[u"检查者"]==name]
if not df_single_person.empty:
#print df_single_person
            # Sum df_single_person into a total score
sum_single_person = df_single_person[u"严重程度"].sum()
sum_single_person = int(sum_single_person)
            # Push name, total score and department into name_grade_department_list
person_grade_list.append(sum_single_person)
else:
person_grade_list.append(0)
# print name_grade_department_list
df_hr_info[u'安全分'] = person_grade_list
df_scrutator = df_hr_info[df_hr_info[u'安全分']>0]
df_agg = df_scrutator.groupby(u'部门').sum()
df_agg = df_agg.reset_index()
df_dict = df_agg.to_dict('records')
json_grade = json.dumps(df_dict)
    # Team scores
df_team_mean = df_scrutator.groupby(u'班组').sum()
df_team_mean_reset = df_team_mean.reset_index()
dict_team_mean = df_team_mean_reset.to_dict('records')
for item in dict_team_mean:
if u"无" ==item[u'班组']:
dict_team_mean.remove(item)
team_mean = json.dumps(dict_team_mean)
return render(request, 'department_grade.html', {'json_grade': json_grade,
'json_team':team_mean}) | 37.237931 | 80 | 0.65719 | 1,553 | 10,799 | 4.179008 | 0.075338 | 0.060092 | 0.057781 | 0.03698 | 0.93282 | 0.924037 | 0.912173 | 0.896456 | 0.890293 | 0.867488 | 0 | 0.005857 | 0.241041 | 10,799 | 290 | 81 | 37.237931 | 0.785993 | 0.118066 | 0 | 0.806763 | 0 | 0 | 0.065463 | 0.008869 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.067633 | null | null | 0.043478 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
7df63290f8bd5fb254054ca6cb86e165997ccca1 | 9,133 | py | Python | tests/test_historical_prices_flat.py | benjaminSTW/ig-markets-api-python-library | 8bab5d20f8b53ddf9c30202c5cc81d03518ddff0 | [
"BSD-3-Clause"
] | 244 | 2015-11-01T12:34:56.000Z | 2022-03-30T15:52:41.000Z | tests/test_historical_prices_flat.py | benjaminSTW/ig-markets-api-python-library | 8bab5d20f8b53ddf9c30202c5cc81d03518ddff0 | [
"BSD-3-Clause"
] | 174 | 2015-10-02T09:42:38.000Z | 2022-03-19T02:55:28.000Z | tests/test_historical_prices_flat.py | benjaminSTW/ig-markets-api-python-library | 8bab5d20f8b53ddf9c30202c5cc81d03518ddff0 | [
"BSD-3-Clause"
] | 201 | 2015-10-02T10:10:39.000Z | 2022-03-21T19:57:04.000Z | from trading_ig.rest import IGService
import responses
import json
import pandas as pd
import datetime
import pytest
"""
unit tests for historical prices methods with flat output formatting
"""
class TestHistoricalPricesFlat:
@responses.activate
def test_historical_prices_v3_defaults_happy(self):
# fetch_historical_prices v3 - default params
with open('tests/data/historic_prices.json', 'r') as file:
response_body = json.loads(file.read())
responses.add(responses.GET,
'https://demo-api.ig.com/gateway/deal/prices/MT.D.GC.Month2.IP',
headers={'CST': 'abc123', 'X-SECURITY-TOKEN': 'xyz987'},
json=response_body,
status=200)
ig_service = IGService('username', 'password', 'api_key', 'DEMO')
result = ig_service.fetch_historical_prices_by_epic(
epic='MT.D.GC.Month2.IP',
format=ig_service.flat_prices)
prices = result['prices']
assert isinstance(result, dict)
assert isinstance(prices, pd.DataFrame)
# with no other params, default returns 10 rows at MINUTE resolution
assert prices.shape[0] == 10
assert prices.shape[1] == 9
# assert time series rows are 1 minute apart
prices['tvalue'] = prices.index
prices['delta'] = (prices['tvalue'] - prices['tvalue'].shift())
assert any(prices["delta"].dropna() == datetime.timedelta(minutes=1))
@responses.activate
def test_historical_prices_v3_datetime_happy(self):
# fetch_historical_prices v3 - between two dates, daily resolution
with open('tests/data/historic_prices_dates.json', 'r') as file:
response_body = json.loads(file.read())
responses.add(responses.GET,
'https://demo-api.ig.com/gateway/deal/prices/MT.D.GC.Month2.IP',
match_querystring=False,
headers={'CST': 'abc123', 'X-SECURITY-TOKEN': 'xyz987'},
json=response_body,
status=200)
ig_service = IGService('username', 'password', 'api_key', 'DEMO')
result = ig_service.fetch_historical_prices_by_epic(
epic='MT.D.GC.Month2.IP',
resolution='D',
start_date='2020-09-01T00:00:00',
end_date='2020-09-04T23:59:59',
format=ig_service.flat_prices)
prices = result['prices']
assert isinstance(result, dict)
assert isinstance(prices, pd.DataFrame)
# assert DataFrame shape
assert prices.shape[0] == 4
assert prices.shape[1] == 9
# assert time series rows are 1 day apart
prices['tvalue'] = prices.index
prices['delta'] = (prices['tvalue'] - prices['tvalue'].shift())
assert any(prices["delta"].dropna() == datetime.timedelta(days=1))
# assert default paging
assert result['metadata']['pageData']['pageSize'] == 20
assert result['metadata']['pageData']['pageNumber'] == 1
assert result['metadata']['pageData']['totalPages'] == 1
@responses.activate
def test_historical_prices_v3_num_points_happy(self):
# fetch_historical_prices v3 - number of data points, weekly resolution
with open('tests/data/historic_prices_num_points.json', 'r') as file:
response_body = json.loads(file.read())
responses.add(responses.GET,
'https://demo-api.ig.com/gateway/deal/prices/MT.D.GC.Month2.IP',
match_querystring=False,
headers={'CST': 'abc123', 'X-SECURITY-TOKEN': 'xyz987'},
json=response_body,
status=200)
ig_service = IGService('username', 'password', 'api_key', 'DEMO')
result = ig_service.fetch_historical_prices_by_epic(
epic='MT.D.GC.Month2.IP',
resolution='W',
numpoints=10,
format=ig_service.flat_prices)
prices = result['prices']
assert isinstance(result, dict)
assert isinstance(prices, pd.DataFrame)
# assert DataFrame shape
assert prices.shape[0] == 10
assert prices.shape[1] == 9
# assert time series rows are 1 week apart
prices['tvalue'] = prices.index
prices['delta'] = (prices['tvalue'] - prices['tvalue'].shift())
assert any(prices["delta"].dropna() == datetime.timedelta(weeks=1))
@responses.activate
def test_historical_prices_v3_num_points_bad_numpoints(self):
# fetch_historical_prices v3 - number of data points, invalid numpoints
responses.add(responses.GET,
'https://demo-api.ig.com/gateway/deal/prices/MT.D.GC.Month2.IP',
match_querystring=False,
headers={'CST': 'abc123', 'X-SECURITY-TOKEN': 'xyz987'},
json={'errorCode': 'Unable to convert value=3.14159 to type= Integer int'}, # noqa
status=400)
with pytest.raises(Exception):
ig_service = IGService('username', 'password', 'api_key', 'DEMO')
result = ig_service.fetch_historical_prices_by_epic(
epic='MT.D.GC.Month2.IP',
resolution='X',
numpoints=3.14159,
format=ig_service.flat_prices)
assert result['errorCode'].startswith('Unable to convert value')
@responses.activate
def test_historical_prices_v3_num_points_bad_resolution(self):
# fetch_historical_prices v3 - number of data points, invalid resolution
with open('tests/data/historic_prices_num_points.json', 'r') as file:
response_body = json.loads(file.read())
responses.add(responses.GET,
'https://demo-api.ig.com/gateway/deal/prices/MT.D.GC.Month2.IP',
match_querystring=False,
headers={'CST': 'abc123', 'X-SECURITY-TOKEN': 'xyz987'},
json=response_body,
status=200)
with pytest.raises(ValueError) as excinfo:
ig_service = IGService('username', 'password', 'api_key', 'DEMO')
ig_service.fetch_historical_prices_by_epic(
epic='MT.D.GC.Month2.IP',
resolution='X',
numpoints=10,
format=ig_service.flat_prices)
assert "Invalid frequency" in str(excinfo.value)
@responses.activate
def test_historical_prices_v3_bad_epic(self):
# fetch_historical_prices v3 - bad epic
responses.add(responses.GET,
'https://demo-api.ig.com/gateway/deal/prices/MT.D.X.Month1.IP',
headers={'CST': 'abc123', 'X-SECURITY-TOKEN': 'xyz987'},
json={'errorCode': 'error.error.price-history.io-error'},
status=404)
with pytest.raises(Exception):
ig_service = IGService('username', 'password', 'api_key', 'DEMO')
result = ig_service.fetch_historical_prices_by_epic(
epic='MT.D.X.Month1.IP',
format=ig_service.flat_prices)
assert result['errorCode'] == 'error.error.price-history.io-error'
@responses.activate
def test_historical_prices_v3_bad_date_format(self):
# fetch_historical_prices v3 - bad date format
responses.add(
responses.GET,
'https://demo-api.ig.com/gateway/deal/prices/MT.D.XX.Month1.IP',
headers={'CST': 'abc123', 'X-SECURITY-TOKEN': 'xyz987'},
json={'errorCode': 'Unable to parse datetime=2020/09/01T00:00:00'},
status=400)
with pytest.raises(Exception):
ig_service = IGService('username', 'password', 'api_key', 'DEMO')
result = ig_service.fetch_historical_prices_by_epic(
epic='MT.D.GC.Month2.IP',
resolution='D',
start_date='2020/09/01T00:00:00',
end_date='2020-09-04T23:59:59',
format=ig_service.flat_prices)
assert result['errorCode'].startswith('Unable to parse datetime')
@responses.activate
def test_historical_prices_v3_bad_date_order(self):
# fetch_historical_prices v3 - bad date order
responses.add(responses.GET,
'https://demo-api.ig.com/gateway/deal/prices/MT.D.XX.Month1.IP',
headers={'CST': 'abc123', 'X-SECURITY-TOKEN': 'xyz987'},
json={"errorCode": "error.invalid.daterange"},
status=400)
with pytest.raises(Exception):
ig_service = IGService('username', 'password', 'api_key', 'DEMO')
result = ig_service.fetch_historical_prices_by_epic(
epic='MT.D.GC.Month2.IP',
resolution='D',
start_date='2020-09-04T23:59:59',
end_date='2020/09/01T00:00:00',
format=ig_service.flat_prices)
assert result['errorCode'] == 'error.invalid.daterange'
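The time-series spacing checks in the tests above use `any(delta == ...)`, which passes if a single pair of consecutive rows matches. A stricter standalone helper (hypothetical name `rows_are_spaced`) requires every consecutive pair to match:

```python
import datetime
import pandas as pd

def rows_are_spaced(index, delta):
    # True when every pair of consecutive timestamps differs by exactly
    # `delta`; .all() is stricter than the any(...) used in the tests.
    s = pd.Series(index)
    gaps = (s - s.shift()).dropna()
    return bool((gaps == delta).all())
```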
| 40.057018 | 104 | 0.591481 | 1,044 | 9,133 | 5.018199 | 0.161877 | 0.07635 | 0.054972 | 0.025196 | 0.870204 | 0.864287 | 0.828402 | 0.783165 | 0.75606 | 0.709296 | 0 | 0.037343 | 0.284572 | 9,133 | 227 | 105 | 40.23348 | 0.764463 | 0.07774 | 0 | 0.7125 | 0 | 0.05 | 0.226411 | 0.035294 | 0 | 0 | 0 | 0 | 0.14375 | 1 | 0.05 | false | 0.05 | 0.0375 | 0 | 0.09375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
b406468ef372d5c7e50eafcd701b25e0623ee59a | 27,214 | py | Python | t2/IBL.py | hausp/INE5443 | 749ec770218d555a26ffaa681cd7c3146550f66b | [
"Apache-2.0"
] | null | null | null | t2/IBL.py | hausp/INE5443 | 749ec770218d555a26ffaa681cd7c3146550f66b | [
"Apache-2.0"
] | null | null | null | t2/IBL.py | hausp/INE5443 | 749ec770218d555a26ffaa681cd7c3146550f66b | [
"Apache-2.0"
] | null | null | null | from classifiers import *
import math
import numpy as np
from scipy.spatial import KDTree
import utils
def kdtree_classify(classifier, entry, class_index=-1, k=1):
prepared_entry = utils.without_column(entry, class_index)
result = classifier.descriptor.query([prepared_entry], k=k)
scoreboard = {}
indexes = result[1]
if k > 1:
indexes = indexes[0]
for index in indexes:
category = classifier.categories[index]
if category not in scoreboard:
scoreboard[category] = 0
scoreboard[category] += 1
winner = (0, None)
for key, value in scoreboard.items():
if value > winner[0]:
winner = (value, key)
return winner[1]
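The majority-vote scheme in `kdtree_classify` can be exercised on its own with a plain `scipy.spatial.KDTree`; a sketch (hypothetical name `knn_vote`; like the scoreboard above, ties fall to iteration order):

```python
import numpy as np
from scipy.spatial import KDTree

def knn_vote(tree, labels, point, k=1):
    # Majority vote over the k nearest stored points, the same scheme
    # as kdtree_classify above.
    _, idx = tree.query([point], k=k)
    votes = {}
    for i in np.asarray(idx).ravel():
        votes[labels[i]] = votes.get(labels[i], 0) + 1
    return max(votes, key=votes.get)
```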
def classify(self, entry, class_index=-1, k=1):
max_similarity = -float("inf")
best_categories = []
prepared_external_entry = utils.without_column(entry, class_index)
for i in range(len(self.descriptor)):
# prepared_internal_entry = utils.without_column(internal_entry, class_index)
internal_entry = self.descriptor[i]
class_value = self.categories[i]
similarity = -euclidian_dist(prepared_external_entry, internal_entry)
if similarity > max_similarity:
max_similarity = similarity
best_categories = [class_value]
elif similarity == max_similarity:
best_categories.append(class_value)
best_category = self.pick_one(best_categories)
return best_category
class Classifier:
def __init__(self):
self.hits = 0
self.fails = 0
self.descriptor = []
self.categories = []
def classify(self, entry, class_index=-1, k=1):
return self.on_classify(self, entry, class_index, k)
def add_random_entry(self, training_set):
self.descriptor.append(self.pick_one(training_set))
def pick_index(self, array):
return round(np.random.uniform(0, len(array) - 1))
def pick_one(self, array):
return array[self.pick_index(array)]
def remove_one(self, array):
index = self.pick_index(array)
value = array[index]
del array[index]
return value
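One subtlety in `pick_index` above: `round(np.random.uniform(0, n - 1))` gives the two endpoint indices only half the probability of interior ones. A uniform alternative (a sketch, not a change to the class above; `pick_index_unbiased` is a hypothetical name):

```python
import numpy as np

def pick_index_unbiased(array):
    # randint samples each index with equal probability, whereas
    # round(uniform(0, n - 1)) gives index 0 and index n - 1 only
    # half the weight of the interior indices.
    return np.random.randint(0, len(array))
```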
class IBL1(Classifier):
def __init__(self, training_set, class_index=-1, params={}):
super(IBL1, self).__init__()
self.on_classify = kdtree_classify
if len(self.descriptor) == 0:
self.add_random_entry(training_set)
for external_entry in training_set:
max_similarity = -float("inf")
best_entries = []
for internal_entry in self.descriptor:
prepared_external_entry = utils.without_column(external_entry, class_index)
prepared_internal_entry = utils.without_column(internal_entry, class_index)
similarity = -euclidian_dist(prepared_external_entry, prepared_internal_entry)
if similarity > max_similarity:
max_similarity = similarity
best_entries = [internal_entry]
elif similarity == max_similarity:
best_entries.append(internal_entry)
best_entry = self.pick_one(best_entries)
if external_entry[class_index] == best_entry[class_index]:
self.hits += 1
else:
self.fails += 1
self.descriptor.append(external_entry)
for i in range(len(self.descriptor)):
self.categories.append(self.descriptor[i][class_index])
self.descriptor[i] = utils.without_column(self.descriptor[i], class_index)
self.descriptor = KDTree(np.array(self.descriptor))
class IBL2(Classifier):
def __init__(self, training_set, class_index=-1, params={}):
super(IBL2, self).__init__()
if len(self.descriptor) == 0:
self.add_random_entry(training_set)
self.on_classify = kdtree_classify
for external_entry in training_set:
max_similarity = -float("inf")
best_entries = []
for internal_entry in self.descriptor:
prepared_external_entry = utils.without_column(external_entry, class_index)
prepared_internal_entry = utils.without_column(internal_entry, class_index)
similarity = -euclidian_dist(prepared_external_entry, prepared_internal_entry)
if similarity > max_similarity:
max_similarity = similarity
best_entries = [internal_entry]
elif similarity == max_similarity:
best_entries.append(internal_entry)
best_entry = self.pick_one(best_entries)
            if external_entry[class_index] == best_entry[class_index]:
                self.hits += 1
            else:
                self.fails += 1
                # IB2 stores only misclassified instances, unlike IB1,
                # which keeps every training instance
                self.descriptor.append(external_entry)
for i in range(len(self.descriptor)):
self.categories.append(self.descriptor[i][class_index])
self.descriptor[i] = utils.without_column(self.descriptor[i], class_index)
self.descriptor = KDTree(np.array(self.descriptor))
class IBL3(Classifier):
class Register:
counter = 0
        def __init__(self, entry, category):
            # Bump the class-level counter so every register gets a
            # unique id; "self.counter += 1" would only create an
            # instance attribute and leave every id at 0, making the
            # similarity tables keyed by register.id collide.
            self.id = type(self).counter
            type(self).counter += 1
            self.category = category
            self.entry = entry
            self.hits = 0
            self.fails = 0
def __init__(self, training_set, class_index=-1, params={}):
super(IBL3, self).__init__()
self.on_classify = kdtree_classify
self.dropped = []
frequency_data = {}
processed_instances = 0
dropped_instances = 0
# Adds a random instance to the descriptor
if len(self.descriptor) == 0:
random_entry = self.remove_one(training_set)
(entry, class_value) = self.prepare(random_entry, class_index)
frequency_data[class_value] = 1
processed_instances += 1
register = self.Register(entry, class_value)
register.hits += 1
self.descriptor.append(register)
training_size = len(training_set)
for external_entry in training_set:
(entry, class_value) = self.prepare(external_entry, class_index)
# Searches for acceptable instances in the descriptor
best_acceptable = None
similarity_table = {}
for register in self.descriptor:
category = register.category
# Populates the similarity table
similarity = -euclidian_dist(entry, register.entry)
similarity_table[register.id] = similarity
# classifying acceptability factors
zf = params["zfa"]
zp = params["zpa"]
# Calculates the frequency interval (class)
p = frequency_data[category] / len(self.descriptor)
n = processed_instances
frequency_interval = self.interval(p, zf, n)
# Calculates the precision interval (instance)
n = register.hits + register.fails
p = register.hits / n
precision_interval = self.interval(p, zp, n)
if frequency_interval["sup"] < precision_interval["inf"]:
# Accept the instance
if not best_acceptable or best_acceptable[1] < similarity:
best_acceptable = (register, similarity)
if not best_acceptable and len(self.descriptor) > 0:
# No acceptable instances were found,
# so use a random register instead
random_register = self.pick_one(self.descriptor)
similarity = similarity_table[random_register.id]
best_acceptable = (random_register, similarity)
# Flag that indicates if we learned a new entry
learned = False
if best_acceptable and best_acceptable[0].category == class_value:
# Correct evaluation, simply update the hit counter
self.hits += 1
else:
# Incorrect evaluation, update the fail counter, then learn
self.fails += 1
# Learn the new entry
new_register = self.Register(entry, class_value)
new_register.hits += 1
self.descriptor.append(new_register)
learned = True
# Updates the frequency data
# TODO: is this the right place to do it?
if class_value not in frequency_data:
frequency_data[class_value] = 0
frequency_data[class_value] += 1
# Updates the processed instances counter
processed_instances += 1
# Size of the search space
# If we just appended a new entry, ignore it
descriptor_size = len(self.descriptor)
if learned:
descriptor_size -= 1
# Update all registers in range
i = 0
while i < descriptor_size:
register = self.descriptor[i]
# Similarity of the register used as the best "acceptable"
outer_similarity = best_acceptable[1]
similarity = similarity_table[register.id]
if similarity >= outer_similarity:
category = register.category
# Update the current register
if category == class_value:
register.hits += 1
else:
register.fails += 1
# discard factor
zf = params["zfd"]
zp = params["zpd"]
# Calculates the frequency interval (class)
p = frequency_data[category] / len(self.descriptor)
n = processed_instances
frequency_interval = self.interval(p, zf, n)
# Calculates the precision interval (instance)
n = register.hits + register.fails
p = register.hits / n
precision_interval = self.interval(p, zp, n)
if precision_interval["sup"] < frequency_interval["inf"]:
# Discard the instance
self.dropped.append(self.descriptor[i].entry)
del self.descriptor[i]
descriptor_size -= 1
frequency_data[category] -= 1
dropped_instances += 1
i -= 1
i += 1
print("Dropped: %s" % (dropped_instances))
# Transforms the descriptor into a KD-Tree
for i in range(len(self.descriptor)):
self.categories.append(self.descriptor[i].category)
self.descriptor[i] = self.descriptor[i].entry
self.descriptor = KDTree(np.array(self.descriptor))
def prepare(self, entry, class_index=-1):
return (utils.without_column(entry, class_index), entry[class_index])
def interval(self, p, z, n):
d = (1 + (z * z) / n)
f1 = p + (z * z) / (2 * n)
f2 = z * math.sqrt(p * (1 - p) / n + (z * z) / (4 * n * n))
return {
"inf": (f1 - f2) / d,
"sup": (f1 + f2) / d
}
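The `interval` method above is the Wilson score confidence interval used by IB3's acceptance and drop tests; a standalone copy (hypothetical name `wilson_interval`) that can be checked numerically:

```python
import math

def wilson_interval(p, z, n):
    # Wilson score interval for a proportion p observed over n trials;
    # identical algebra to IBL3.interval above.
    d = 1 + (z * z) / n
    center = p + (z * z) / (2 * n)
    spread = z * math.sqrt(p * (1 - p) / n + (z * z) / (4 * n * n))
    return {"inf": (center - spread) / d, "sup": (center + spread) / d}
```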
class IBL4(Classifier):
class Register:
counter = 0
        def __init__(self, entry, category):
            # Bump the class-level counter so every register gets a
            # unique id; "self.counter += 1" would only create an
            # instance attribute and leave every id at 0, making the
            # similarity tables keyed by register.id collide.
            self.id = type(self).counter
            type(self).counter += 1
            self.category = category
            self.entry = entry
            self.hits = 0
            self.fails = 0
def __init__(self, training_set, class_index=-1, params={}):
super(IBL4, self).__init__()
        self.on_classify = classify
        self.dropped = []
        frequency_data = {}
        processed_instances = 0
        dropped_instances = 0
        accumulated_weights = []
        normalized_weights = []
        weights = []
        # Adds a random instance to the descriptor
        if len(self.descriptor) == 0:
            random_entry = self.remove_one(training_set)
            (entry, class_value) = self.prepare(random_entry, class_index)
            # Sets initial values for the weights
            num_attributes = len(entry)
            for i in range(len(entry)):
                accumulated_weights.append(0.01)
                normalized_weights.append(0.01)
                weights.append(1 / num_attributes)
            frequency_data[class_value] = 1
            processed_instances += 1
            register = self.Register(entry, class_value)
            register.hits += 1
            self.descriptor.append(register)
        training_size = len(training_set)
        for external_entry in training_set:
            (entry, class_value) = self.prepare(external_entry, class_index)
            if class_value not in frequency_data:
                frequency_data[class_value] = 0
            # Searches for acceptable instances in the descriptor
            best_acceptable = None
            similarity_table = {}
            for register in self.descriptor:
                category = register.category
                # Populates the similarity table
                similarity = self.weighted_similarity(entry, register.entry, weights)
                similarity_table[register.id] = similarity
                # Classifying acceptability factors
                zf = params["zfa"]
                zp = params["zpa"]
                # Calculates the frequency interval (class)
                p = frequency_data[category] / len(self.descriptor)
                n = processed_instances
                frequency_interval = self.interval(p, zf, n)
                # Calculates the precision interval (instance)
                n = register.hits + register.fails
                p = register.hits / n
                precision_interval = self.interval(p, zp, n)
                if frequency_interval["sup"] < precision_interval["inf"]:
                    # Accept the instance
                    if not best_acceptable or best_acceptable[1] < similarity:
                        best_acceptable = (register, similarity)
            if not best_acceptable and len(self.descriptor) > 0:
                # No acceptable instances were found,
                # so use a random register instead
                random_register = self.pick_one(self.descriptor)
                similarity = similarity_table[random_register.id]
                best_acceptable = (random_register, similarity)
            # Flag that indicates if we learned a new entry
            learned = False
            if best_acceptable and best_acceptable[0].category == class_value:
                # Correct evaluation, simply update the hit counter
                self.hits += 1
            else:
                # Incorrect evaluation, update the fail counter, then learn
                self.fails += 1
                # Learn the new entry
                new_register = self.Register(entry, class_value)
                new_register.hits += 1
                self.descriptor.append(new_register)
                learned = True
            # Updates the frequency data
            frequency_data[class_value] += 1
            # Updates the processed instances counter
            processed_instances += 1
            # Size of the search space
            # If we just appended a new entry, ignore it
            descriptor_size = len(self.descriptor)
            if learned:
                descriptor_size -= 1
            # Update all registers in range
            i = 0
            while i < descriptor_size:
                register = self.descriptor[i]
                # Similarity of the register used as the best "acceptable"
                outer_similarity = best_acceptable[1]
                similarity = similarity_table[register.id]
                if similarity >= outer_similarity:
                    category = register.category
                    # Update the current register
                    if category == class_value:
                        register.hits += 1
                    else:
                        register.fails += 1
                    # Discard factors
                    zf = params["zfd"]
                    zp = params["zpd"]
                    # Calculates the frequency interval (class)
                    p = frequency_data[category] / len(self.descriptor)
                    n = processed_instances
                    frequency_interval = self.interval(p, zf, n)
                    # Calculates the precision interval (instance)
                    n = register.hits + register.fails
                    p = register.hits / n
                    precision_interval = self.interval(p, zp, n)
                    if precision_interval["sup"] < frequency_interval["inf"]:
                        # Discard the instance
                        self.dropped.append(self.descriptor[i].entry)
                        del self.descriptor[i]
                        descriptor_size -= 1
                        frequency_data[category] -= 1
                        dropped_instances += 1
                        i -= 1
                i += 1
            # Iterates over the attributes, updating their weights
            if len(self.descriptor) > 0:
                reference = best_acceptable[0]
                category = reference.category
                for i in range(len(reference.entry)):
                    delta = abs(entry[i] - reference.entry[i])
                    lambd = max(frequency_data[class_value], frequency_data[category])
                    lambd /= len(self.descriptor)
                    complement = 1 - lambd
                    # Reward the attribute when the classes agree, penalize it
                    # otherwise (was "reference.entry[i]", which compared the
                    # class label against an attribute value)
                    if class_value == category:
                        accumulated_weights[i] += complement * (1 - delta)
                    else:
                        accumulated_weights[i] += complement * delta
                    normalized_weights[i] += complement
                    acc = accumulated_weights[i]
                    norm = normalized_weights[i]
                    weights[i] = max(0, acc / norm - 0.5)
        print("Dropped: %s" % dropped_instances)
        print("Weights: %s" % weights)
        for i in range(len(self.descriptor)):
            self.categories.append(self.descriptor[i].category)
            self.descriptor[i] = self.descriptor[i].entry
    def weighted_similarity(self, first, second, weights):
        result = 0
        for i in range(len(first)):
            result += (weights[i] * (first[i] - second[i])) ** 2
        return -math.sqrt(result)

    def prepare(self, entry, class_index=-1):
        return (utils.without_column(entry, class_index), entry[class_index])

    def interval(self, p, z, n):
        d = (1 + (z * z) / n)
        f1 = p + (z * z) / (2 * n)
        f2 = z * math.sqrt(p * (1 - p) / n + (z * z) / (4 * n * n))
        return {
            "inf": (f1 - f2) / d,
            "sup": (f1 + f2) / d
        }

class IBL5(Classifier):

    class Register:
        counter = 0

        def __init__(self, entry, category):
            # Draw the id from the class-level counter; "self.counter += 1"
            # would only create an instance attribute, leaving every register
            # with id 0 and breaking the similarity table lookups
            self.id = IBL5.Register.counter
            IBL5.Register.counter += 1
            self.category = category
            self.entry = entry
            self.hits = 0
            self.fails = 0
    def __init__(self, training_set, class_index=-1, params={}):
        super(IBL5, self).__init__()
        self.on_classify = classify
        self.dropped = []
        frequency_data = {}
        processed_instances = 0
        dropped_instances = 0
        accumulated_weights = []
        normalized_weights = []
        weights = []
        # Adds a random instance to the descriptor
        if len(self.descriptor) == 0:
            random_entry = self.remove_one(training_set)
            (entry, class_value) = self.prepare(random_entry, class_index)
            # Sets initial values for the weights
            num_attributes = len(entry)
            for i in range(len(entry)):
                accumulated_weights.append(0.01)
                normalized_weights.append(0.01)
                weights.append(1 / num_attributes)
            frequency_data[class_value] = 1
            processed_instances += 1
            register = self.Register(entry, class_value)
            register.hits += 1
            self.descriptor.append(register)
        training_size = len(training_set)
        for external_entry in training_set:
            (entry, class_value) = self.prepare(external_entry, class_index)
            if class_value not in frequency_data:
                frequency_data[class_value] = 0
            # Searches for acceptable instances in the descriptor
            best_acceptable = None
            similarity_table = {}
            for register in self.descriptor:
                category = register.category
                # Populates the similarity table
                similarity = self.weighted_similarity(entry, register.entry, weights)
                similarity_table[register.id] = similarity
                # Classifying acceptability factors
                zf = params["zfa"]
                zp = params["zpa"]
                # Calculates the frequency interval (class)
                p = frequency_data[category] / len(self.descriptor)
                n = processed_instances
                frequency_interval = self.interval(p, zf, n)
                # Calculates the precision interval (instance)
                n = register.hits + register.fails
                p = register.hits / n
                precision_interval = self.interval(p, zp, n)
                if frequency_interval["sup"] < precision_interval["inf"]:
                    # Accept the instance
                    if not best_acceptable or best_acceptable[1] < similarity:
                        best_acceptable = (register, similarity)
            if not best_acceptable and len(self.descriptor) > 0:
                # No acceptable instances were found,
                # so use a random register instead
                random_register = self.pick_one(self.descriptor)
                similarity = similarity_table[random_register.id]
                best_acceptable = (random_register, similarity)
            # Flag that indicates if we learned a new entry
            learned = False
            if best_acceptable and best_acceptable[0].category == class_value:
                # Correct evaluation, simply update the hit counter
                self.hits += 1
            else:
                # Incorrect evaluation, update the fail counter, then learn
                self.fails += 1
                # Learn the new entry
                new_register = self.Register(entry, class_value)
                new_register.hits += 1
                self.descriptor.append(new_register)
                learned = True
            # Updates the frequency data
            frequency_data[class_value] += 1
            # Updates the processed instances counter
            processed_instances += 1
            # Size of the search space
            # If we just appended a new entry, ignore it
            descriptor_size = len(self.descriptor)
            if learned:
                descriptor_size -= 1
            # Update all registers in range
            i = 0
            while i < descriptor_size:
                register = self.descriptor[i]
                # Similarity of the register used as the best "acceptable"
                outer_similarity = best_acceptable[1]
                similarity = similarity_table[register.id]
                if similarity >= outer_similarity:
                    category = register.category
                    # Update the current register
                    if category == class_value:
                        register.hits += 1
                    else:
                        register.fails += 1
                    # Discard factors
                    zf = params["zfd"]
                    zp = params["zpd"]
                    # Calculates the frequency interval (class)
                    p = frequency_data[category] / len(self.descriptor)
                    n = processed_instances
                    frequency_interval = self.interval(p, zf, n)
                    # Calculates the precision interval (instance)
                    n = register.hits + register.fails
                    p = register.hits / n
                    precision_interval = self.interval(p, zp, n)
                    if precision_interval["sup"] < frequency_interval["inf"]:
                        # Discard the instance
                        self.dropped.append(self.descriptor[i].entry)
                        del self.descriptor[i]
                        descriptor_size -= 1
                        frequency_data[category] -= 1
                        dropped_instances += 1
                        i -= 1
                i += 1
            # Iterates over the attributes, updating their weights
            if len(self.descriptor) > 0:
                reference = best_acceptable[0]
                category = reference.category
                for i in range(len(reference.entry)):
                    if not self.both_known(entry[i], reference.entry[i]):
                        continue
                    delta = abs(entry[i] - reference.entry[i])
                    lambd = max(frequency_data[class_value], frequency_data[category])
                    lambd /= len(self.descriptor)
                    complement = 1 - lambd
                    # Reward the attribute when the classes agree, penalize it
                    # otherwise (was "reference.entry[i]", which compared the
                    # class label against an attribute value)
                    if class_value == category:
                        accumulated_weights[i] += complement * (1 - delta)
                    else:
                        accumulated_weights[i] += complement * delta
                    normalized_weights[i] += complement
                    acc = accumulated_weights[i]
                    norm = normalized_weights[i]
                    weights[i] = max(0, acc / norm - 0.5)
        print("Dropped: %s" % dropped_instances)
        for i in range(len(self.descriptor)):
            self.categories.append(self.descriptor[i].category)
            self.descriptor[i] = self.descriptor[i].entry
    def weighted_similarity(self, first, second, weights):
        result = 0
        for i in range(len(first)):
            if self.both_known(first[i], second[i]):
                dif = first[i] - second[i]
            else:
                dif = 0
            result += (weights[i] * dif) ** 2
        return -math.sqrt(result)

    def both_known(self, first, second):
        return first != "" and second != ""

    def prepare(self, entry, class_index=-1):
        return (utils.without_column(entry, class_index), entry[class_index])

    def interval(self, p, z, n):
        d = (1 + (z * z) / n)
        f1 = p + (z * z) / (2 * n)
        f2 = z * math.sqrt(p * (1 - p) / n + (z * z) / (4 * n * n))
        return {
            "inf": (f1 - f2) / d,
            "sup": (f1 + f2) / d
        }
| 38.061538 | 94 | 0.545308 | 2,807 | 27,214 | 5.119701 | 0.071963 | 0.074038 | 0.031313 | 0.009185 | 0.914828 | 0.903625 | 0.895066 | 0.878923 | 0.878923 | 0.87433 | 0 | 0.011243 | 0.372492 | 27,214 | 714 | 95 | 38.114846 | 0.830298 | 0.109245 | 0 | 0.85743 | 0 | 0 | 0.005919 | 0 | 0 | 0 | 0 | 0.001401 | 0 | 1 | 0.050201 | false | 0 | 0.01004 | 0.014056 | 0.108434 | 0.008032 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
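The `interval` method shared by both classifiers above is the Wilson score interval, and acceptance hinges on comparing two such intervals. A minimal standalone sketch of that test (the `z`/`n` values here are hypothetical, not tuned parameters from the source):

```python
import math


def interval(p, z, n):
    # Wilson score interval for proportion p over n trials at factor z,
    # mirroring the classifiers' helper above
    d = 1 + (z * z) / n
    f1 = p + (z * z) / (2 * n)
    f2 = z * math.sqrt(p * (1 - p) / n + (z * z) / (4 * n * n))
    return {"inf": (f1 - f2) / d, "sup": (f1 + f2) / d}


# A register with 9 hits in 10 attempts is "acceptable" when its precision
# interval lies entirely above the interval around its class frequency
precision = interval(0.9, 0.9, 10)
frequency = interval(0.5, 0.9, 100)
print(frequency["sup"] < precision["inf"])  # -> True
```

The symmetric comparison with the `zfd`/`zpd` factors drives discarding: an instance whose precision interval falls entirely below its class-frequency interval is dropped.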
c34ac83b4b9f1ed35a88fbd784d86fd36458e25e | 121 | py | Python | test-crates/pyo3-mixed/pyo3_mixed/__init__.py | thedrow/maturin | 53dd4c2a2b548e8e9a1a390dccf161f61ee6137e | [
"Apache-2.0",
"MIT"
] | 854 | 2019-09-01T13:08:28.000Z | 2022-03-30T11:52:48.000Z | test-crates/pyo3-mixed/pyo3_mixed/__init__.py | evandroforks/maturin | d38f9280f9562fa199ddaa5f6a8a96c1f9c962e8 | [
"Apache-2.0",
"MIT"
] | 546 | 2019-08-30T18:13:18.000Z | 2022-03-31T16:00:19.000Z | test-crates/pyo3-mixed/pyo3_mixed/__init__.py | pombredanne/pyo3-pack | b556ed1d3a1eab65f180e5da5c00648173e77f1d | [
"Apache-2.0",
"MIT"
] | 92 | 2019-09-06T07:34:38.000Z | 2022-03-30T22:03:49.000Z | from .python_module.double import double
from .pyo3_mixed import get_21


def get_42() -> int:
    # get_21 is a function, so call it before doubling
    return double(get_21())
| 17.285714 | 40 | 0.752066 | 20 | 121 | 4.3 | 0.65 | 0.116279 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.069307 | 0.165289 | 121 | 6 | 41 | 20.166667 | 0.782178 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0.5 | 0.25 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 7 |
c368e28ebe68ec913302dc88d19db87caf661d9d | 262 | py | Python | entity/cards/LETL_022H/__init__.py | x014/lushi_script | edab2b88e3f0de8139de2541ab2daa331f777c0e | [
"MIT"
] | 102 | 2021-10-20T09:06:39.000Z | 2022-03-28T13:35:11.000Z | entity/cards/LETL_022H/__init__.py | x014/lushi_script | edab2b88e3f0de8139de2541ab2daa331f777c0e | [
"MIT"
] | 98 | 2021-10-19T16:13:27.000Z | 2022-03-27T13:27:49.000Z | entity/cards/LETL_022H/__init__.py | x014/lushi_script | edab2b88e3f0de8139de2541ab2daa331f777c0e | [
"MIT"
] | 55 | 2021-10-19T03:56:50.000Z | 2022-03-25T08:25:26.000Z | # -*- coding: utf-8 -*-
import entity.cards.LETL_022H.LETL_022P1
import entity.cards.LETL_022H.LETL_022P7
import entity.cards.LETL_022H.LETL_373
import entity.cards.LETL_022H.LETL_617
import entity.cards.LETL_022H.LETL_656
import entity.cards.LETL_022H.LETL_657
| 32.75 | 40 | 0.832061 | 45 | 262 | 4.577778 | 0.311111 | 0.349515 | 0.495146 | 0.61165 | 0.84466 | 0.84466 | 0 | 0 | 0 | 0 | 0 | 0.159184 | 0.064886 | 262 | 7 | 41 | 37.428571 | 0.681633 | 0.080153 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 9 |
c37111d4099833140f6991d38fc7081ecdc14c7d | 10,783 | py | Python | migrations/versions/7d3595961ca1_course_table_club_id_alias_table_race_.py | louking/rrwebapp | 5c73f84e1a21bc3b5fa51d83ba576c3152e6cf27 | [
"Apache-2.0"
] | null | null | null | migrations/versions/7d3595961ca1_course_table_club_id_alias_table_race_.py | louking/rrwebapp | 5c73f84e1a21bc3b5fa51d83ba576c3152e6cf27 | [
"Apache-2.0"
] | 417 | 2015-05-07T16:50:22.000Z | 2022-03-14T16:16:13.000Z | migrations/versions/7d3595961ca1_course_table_club_id_alias_table_race_.py | louking/rrwebapp | 5c73f84e1a21bc3b5fa51d83ba576c3152e6cf27 | [
"Apache-2.0"
] | null | null | null | """course table club_id, alias table, race table hidden, role and user table updates for flask-security
Revision ID: 7d3595961ca1
Revises: e175fb986917
Create Date: 2016-11-10 14:39:07.694000
"""
# revision identifiers, used by Alembic.
revision = '7d3595961ca1'
down_revision = 'e175fb986917'
from alembic import op
import sqlalchemy as sa
from sqlalchemy.dialects import mysql
#added
from sqlalchemy.sql import table, column
# note we need the id to identify rows when column requires data specific to the row
raceresult = table('raceresult',
    column('hidden', sa.Boolean),
)
#end added

def upgrade():
    ### commands auto generated by Alembic - please adjust! ###
    op.add_column('course', sa.Column('club_id', sa.Integer(), nullable=True))
    op.drop_index('source', table_name='course')
    op.create_unique_constraint(None, 'course', ['club_id', 'source', 'sourceid'])
    op.create_foreign_key(None, 'course', 'club', ['club_id'], ['id'])
    op.alter_column('divisions', 'active',
        existing_type=mysql.TINYINT(display_width=1),
        type_=sa.Boolean(),
        existing_nullable=True)
    op.alter_column('managedresult', 'confirmed',
        existing_type=mysql.TINYINT(display_width=1),
        type_=sa.Boolean(),
        existing_nullable=True)
    op.alter_column('managedresult', 'initialdisposition',
        existing_type=mysql.ENUM('definite', 'similar', 'missed', 'excluded', ''),
        type_=sa.String(length=15),
        existing_nullable=True)
    op.alter_column('race', 'active',
        existing_type=mysql.TINYINT(display_width=1),
        type_=sa.Boolean(),
        existing_nullable=True)
    op.alter_column('race', 'external',
        existing_type=mysql.TINYINT(display_width=1),
        type_=sa.Boolean(),
        existing_nullable=True)
    op.add_column('raceresult', sa.Column('hidden', sa.Boolean(), nullable=True))
    op.alter_column('raceresult', 'fuzzyage',
        existing_type=mysql.TINYINT(display_width=1),
        type_=sa.Boolean(),
        existing_nullable=True)
    op.alter_column('raceresult', 'instandings',
        existing_type=mysql.TINYINT(display_width=1),
        type_=sa.Boolean(),
        existing_nullable=True)
    op.alter_column('raceseries', 'active',
        existing_type=mysql.TINYINT(display_width=1),
        type_=sa.Boolean(),
        existing_nullable=True)
    op.add_column('role', sa.Column('description', sa.String(length=255), nullable=True))
    op.alter_column('role', 'name',
        existing_type=mysql.VARCHAR(length=10),
        type_=sa.String(length=80),
        existing_nullable=True)
    op.alter_column('runner', 'active',
        existing_type=mysql.TINYINT(display_width=1),
        type_=sa.Boolean(),
        existing_nullable=True)
    op.alter_column('runner', 'member',
        existing_type=mysql.TINYINT(display_width=1),
        type_=sa.Boolean(),
        existing_nullable=True)
    op.alter_column('series', 'active',
        existing_type=mysql.TINYINT(display_width=1),
        type_=sa.Boolean(),
        existing_nullable=True)
    op.alter_column('series', 'allowties',
        existing_type=mysql.TINYINT(display_width=1),
        type_=sa.Boolean(),
        existing_nullable=True)
    op.alter_column('series', 'averagetie',
        existing_type=mysql.TINYINT(display_width=1),
        type_=sa.Boolean(),
        existing_nullable=True)
    op.alter_column('series', 'calcagegrade',
        existing_type=mysql.TINYINT(display_width=1),
        type_=sa.Boolean(),
        existing_nullable=True)
    op.alter_column('series', 'calcdivisions',
        existing_type=mysql.TINYINT(display_width=1),
        type_=sa.Boolean(),
        existing_nullable=True)
    op.alter_column('series', 'calcoverall',
        existing_type=mysql.TINYINT(display_width=1),
        type_=sa.Boolean(),
        existing_nullable=True)
    op.alter_column('series', 'hightolow',
        existing_type=mysql.TINYINT(display_width=1),
        type_=sa.Boolean(),
        existing_nullable=True)
    op.alter_column('series', 'maxbynumrunners',
        existing_type=mysql.TINYINT(display_width=1),
        type_=sa.Boolean(),
        existing_nullable=True)
    op.alter_column('series', 'membersonly',
        existing_type=mysql.TINYINT(display_width=1),
        type_=sa.Boolean(),
        existing_nullable=True)
    op.add_column('user', sa.Column('confirmed_at', sa.DateTime(), nullable=True))
    op.add_column('user', sa.Column('current_login_at', sa.DateTime(), nullable=True))
    op.add_column('user', sa.Column('current_login_ip', sa.String(length=39), nullable=True))
    op.add_column('user', sa.Column('last_login_at', sa.DateTime(), nullable=True))
    op.add_column('user', sa.Column('last_login_ip', sa.String(length=39), nullable=True))
    op.add_column('user', sa.Column('login_count', sa.Integer(), nullable=True))
    op.add_column('user', sa.Column('password', sa.String(length=255), nullable=True))
    op.alter_column('user', 'active',
        existing_type=mysql.TINYINT(display_width=1),
        type_=sa.Boolean(),
        existing_nullable=True)
    op.alter_column('user', 'email',
        existing_type=mysql.VARCHAR(length=120),
        type_=sa.String(length=255),
        existing_nullable=True)
    op.drop_column('user', 'pwresetrequired')
    ### end Alembic commands ###

    # added
    op.execute(raceresult.update().values({
        'hidden': op.inline_literal(False),
    }))
    # end added

def downgrade():
    ### commands auto generated by Alembic - please adjust! ###
    op.add_column('user', sa.Column('pwresetrequired', mysql.TINYINT(display_width=1), autoincrement=False, nullable=True))
    op.alter_column('user', 'email',
        existing_type=sa.String(length=255),
        type_=mysql.VARCHAR(length=120),
        existing_nullable=True)
    op.alter_column('user', 'active',
        existing_type=sa.Boolean(),
        type_=mysql.TINYINT(display_width=1),
        existing_nullable=True)
    op.drop_column('user', 'password')
    op.drop_column('user', 'login_count')
    op.drop_column('user', 'last_login_ip')
    op.drop_column('user', 'last_login_at')
    op.drop_column('user', 'current_login_ip')
    op.drop_column('user', 'current_login_at')
    op.drop_column('user', 'confirmed_at')
    op.alter_column('series', 'membersonly',
        existing_type=sa.Boolean(),
        type_=mysql.TINYINT(display_width=1),
        existing_nullable=True)
    op.alter_column('series', 'maxbynumrunners',
        existing_type=sa.Boolean(),
        type_=mysql.TINYINT(display_width=1),
        existing_nullable=True)
    op.alter_column('series', 'hightolow',
        existing_type=sa.Boolean(),
        type_=mysql.TINYINT(display_width=1),
        existing_nullable=True)
    op.alter_column('series', 'calcoverall',
        existing_type=sa.Boolean(),
        type_=mysql.TINYINT(display_width=1),
        existing_nullable=True)
    op.alter_column('series', 'calcdivisions',
        existing_type=sa.Boolean(),
        type_=mysql.TINYINT(display_width=1),
        existing_nullable=True)
    op.alter_column('series', 'calcagegrade',
        existing_type=sa.Boolean(),
        type_=mysql.TINYINT(display_width=1),
        existing_nullable=True)
    op.alter_column('series', 'averagetie',
        existing_type=sa.Boolean(),
        type_=mysql.TINYINT(display_width=1),
        existing_nullable=True)
    op.alter_column('series', 'allowties',
        existing_type=sa.Boolean(),
        type_=mysql.TINYINT(display_width=1),
        existing_nullable=True)
    op.alter_column('series', 'active',
        existing_type=sa.Boolean(),
        type_=mysql.TINYINT(display_width=1),
        existing_nullable=True)
    op.alter_column('runner', 'member',
        existing_type=sa.Boolean(),
        type_=mysql.TINYINT(display_width=1),
        existing_nullable=True)
    op.alter_column('runner', 'active',
        existing_type=sa.Boolean(),
        type_=mysql.TINYINT(display_width=1),
        existing_nullable=True)
    op.alter_column('role', 'name',
        existing_type=sa.String(length=80),
        type_=mysql.VARCHAR(length=10),
        existing_nullable=True)
    op.drop_column('role', 'description')
    op.alter_column('raceseries', 'active',
        existing_type=sa.Boolean(),
        type_=mysql.TINYINT(display_width=1),
        existing_nullable=True)
    op.alter_column('raceresult', 'instandings',
        existing_type=sa.Boolean(),
        type_=mysql.TINYINT(display_width=1),
        existing_nullable=True)
    op.alter_column('raceresult', 'fuzzyage',
        existing_type=sa.Boolean(),
        type_=mysql.TINYINT(display_width=1),
        existing_nullable=True)
    op.drop_column('raceresult', 'hidden')
    op.alter_column('race', 'external',
        existing_type=sa.Boolean(),
        type_=mysql.TINYINT(display_width=1),
        existing_nullable=True)
    op.alter_column('race', 'active',
        existing_type=sa.Boolean(),
        type_=mysql.TINYINT(display_width=1),
        existing_nullable=True)
    op.alter_column('managedresult', 'initialdisposition',
        existing_type=sa.String(length=15),
        type_=mysql.ENUM('definite', 'similar', 'missed', 'excluded', ''),
        existing_nullable=True)
    op.alter_column('managedresult', 'confirmed',
        existing_type=sa.Boolean(),
        type_=mysql.TINYINT(display_width=1),
        existing_nullable=True)
    op.alter_column('divisions', 'active',
        existing_type=sa.Boolean(),
        type_=mysql.TINYINT(display_width=1),
        existing_nullable=True)
    op.drop_constraint(None, 'course', type_='foreignkey')
    op.drop_constraint(None, 'course', type_='unique')
    op.create_index('source', 'course', ['source', 'sourceid'], unique=True)
    op.drop_column('course', 'club_id')
    ### end Alembic commands ###
| 44.374486 | 123 | 0.609385 | 1,186 | 10,783 | 5.3086 | 0.116358 | 0.053367 | 0.1223 | 0.153748 | 0.836722 | 0.795902 | 0.759689 | 0.716804 | 0.709339 | 0.665661 | 0 | 0.016141 | 0.258833 | 10,783 | 242 | 124 | 44.557851 | 0.771647 | 0.045535 | 0 | 0.752294 | 0 | 0 | 0.123659 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.009174 | false | 0.009174 | 0.018349 | 0 | 0.027523 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c376d4be796fb6590b0af36405584085942c06a1 | 10,759 | py | Python | tests/test_quickstart.py | msabramo/tox | 7ecff09bfff0e8734520c865c3255a356d67a78f | [
"MIT"
] | 1 | 2015-01-04T12:20:07.000Z | 2015-01-04T12:20:07.000Z | tests/test_quickstart.py | msabramo/tox | 7ecff09bfff0e8734520c865c3255a356d67a78f | [
"MIT"
] | null | null | null | tests/test_quickstart.py | msabramo/tox | 7ecff09bfff0e8734520c865c3255a356d67a78f | [
"MIT"
] | null | null | null | import pytest
import tox._quickstart
@pytest.fixture(autouse=True)
def cleandir(tmpdir):
tmpdir.chdir()
class TestToxQuickstartMain(object):
def mock_term_input_return_values(self, return_values):
for return_val in return_values:
yield return_val
def get_mock_term_input(self, return_values):
generator = self.mock_term_input_return_values(return_values)
def mock_term_input(prompt):
try:
return next(generator)
except NameError:
return generator.next()
return mock_term_input
def test_quickstart_main_choose_individual_pythons_and_pytest(self,
monkeypatch):
monkeypatch.setattr(
tox._quickstart, 'term_input',
self.get_mock_term_input(
['4', 'Y', 'Y', 'Y', 'Y', 'Y', 'N', 'py.test', 'pytest']))
tox._quickstart.main(argv=['tox-quickstart'])
expected_tox_ini = """
# Tox (http://tox.testrun.org/) is a tool for running tests
# in multiple virtualenvs. This configuration file will run the
# test suite on all supported python versions. To use it, "pip install tox"
# and then run "tox" from this directory.
[tox]
envlist = py26, py27, py32, py33, pypy
[testenv]
commands = py.test
deps =
pytest
""".lstrip()
result = open('tox.ini').read()
assert(result == expected_tox_ini)
def test_quickstart_main_choose_individual_pythons_and_nose_adds_deps(self, monkeypatch):
monkeypatch.setattr(
tox._quickstart, 'term_input',
self.get_mock_term_input(['4', 'Y', 'Y', 'Y', 'Y', 'Y', 'N',
'nosetests', '']))
tox._quickstart.main(argv=['tox-quickstart'])
expected_tox_ini = """
# Tox (http://tox.testrun.org/) is a tool for running tests
# in multiple virtualenvs. This configuration file will run the
# test suite on all supported python versions. To use it, "pip install tox"
# and then run "tox" from this directory.
[tox]
envlist = py26, py27, py32, py33, pypy
[testenv]
commands = nosetests
deps =
nose
""".lstrip()
result = open('tox.ini').read()
assert(result == expected_tox_ini)
def test_quickstart_main_choose_individual_pythons_and_trial_adds_deps(self, monkeypatch):
monkeypatch.setattr(
tox._quickstart, 'term_input',
self.get_mock_term_input(['4', 'Y', 'Y', 'Y', 'Y', 'Y', 'N',
'trial', '']))
tox._quickstart.main(argv=['tox-quickstart'])
expected_tox_ini = """
# Tox (http://tox.testrun.org/) is a tool for running tests
# in multiple virtualenvs. This configuration file will run the
# test suite on all supported python versions. To use it, "pip install tox"
# and then run "tox" from this directory.
[tox]
envlist = py26, py27, py32, py33, pypy
[testenv]
commands = trial
deps =
twisted
""".lstrip()
result = open('tox.ini').read()
assert(result == expected_tox_ini)
def test_quickstart_main_choose_individual_pythons_and_pytest_adds_deps(self, monkeypatch):
monkeypatch.setattr(
tox._quickstart, 'term_input',
self.get_mock_term_input(['4', 'Y', 'Y', 'Y', 'Y', 'Y', 'N',
'py.test', '']))
tox._quickstart.main(argv=['tox-quickstart'])
expected_tox_ini = """
# Tox (http://tox.testrun.org/) is a tool for running tests
# in multiple virtualenvs. This configuration file will run the
# test suite on all supported python versions. To use it, "pip install tox"
# and then run "tox" from this directory.
[tox]
envlist = py26, py27, py32, py33, pypy
[testenv]
commands = py.test
deps =
pytest
""".lstrip()
result = open('tox.ini').read()
assert(result == expected_tox_ini)
def test_quickstart_main_choose_py27_and_pytest_adds_deps(self, monkeypatch):
monkeypatch.setattr(
tox._quickstart, 'term_input',
self.get_mock_term_input(['1', 'py.test', '']))
tox._quickstart.main(argv=['tox-quickstart'])
expected_tox_ini = """
# Tox (http://tox.testrun.org/) is a tool for running tests
# in multiple virtualenvs. This configuration file will run the
# test suite on all supported python versions. To use it, "pip install tox"
# and then run "tox" from this directory.
[tox]
envlist = py27
[testenv]
commands = py.test
deps =
pytest
""".lstrip()
result = open('tox.ini').read()
assert(result == expected_tox_ini)
def test_quickstart_main_choose_py27_and_py33_and_pytest_adds_deps(self, monkeypatch):
monkeypatch.setattr(
tox._quickstart, 'term_input',
self.get_mock_term_input(['2', 'py.test', '']))
tox._quickstart.main(argv=['tox-quickstart'])
expected_tox_ini = """
# Tox (http://tox.testrun.org/) is a tool for running tests
# in multiple virtualenvs. This configuration file will run the
# test suite on all supported python versions. To use it, "pip install tox"
# and then run "tox" from this directory.
[tox]
envlist = py27, py33
[testenv]
commands = py.test
deps =
pytest
""".lstrip()
result = open('tox.ini').read()
assert(result == expected_tox_ini)
def test_quickstart_main_choose_all_pythons_and_pytest_adds_deps(self, monkeypatch):
monkeypatch.setattr(
tox._quickstart, 'term_input',
self.get_mock_term_input(['3', 'py.test', '']))
tox._quickstart.main(argv=['tox-quickstart'])
expected_tox_ini = """
# Tox (http://tox.testrun.org/) is a tool for running tests
# in multiple virtualenvs. This configuration file will run the
# test suite on all supported python versions. To use it, "pip install tox"
# and then run "tox" from this directory.
[tox]
envlist = py26, py27, py32, py33, pypy, jython
[testenv]
commands = py.test
deps =
pytest
""".lstrip()
result = open('tox.ini').read()
assert(result == expected_tox_ini)
def test_quickstart_main_choose_individual_pythons_and_defaults(self, monkeypatch):
monkeypatch.setattr(
tox._quickstart, 'term_input',
self.get_mock_term_input(['4', '', '', '', '', '', '', '', '', '', '']))
tox._quickstart.main(argv=['tox-quickstart'])
expected_tox_ini = """
# Tox (http://tox.testrun.org/) is a tool for running tests
# in multiple virtualenvs. This configuration file will run the
# test suite on all supported python versions. To use it, "pip install tox"
# and then run "tox" from this directory.
[tox]
envlist = py26, py27, py32, py33, pypy, jython
[testenv]
commands = {envpython} setup.py test
deps =
""".lstrip()
result = open('tox.ini').read()
assert(result == expected_tox_ini)
def test_quickstart_main_existing_tox_ini(self, monkeypatch):
try:
f = open('tox.ini', 'w')
f.write('foo bar\n')
finally:
f.close()
monkeypatch.setattr(
tox._quickstart, 'term_input',
self.get_mock_term_input(['4', '', '', '', '', '', '', '', '', '', '', '']))
tox._quickstart.main(argv=['tox-quickstart'])
expected_tox_ini = """
# Tox (http://tox.testrun.org/) is a tool for running tests
# in multiple virtualenvs. This configuration file will run the
# test suite on all supported python versions. To use it, "pip install tox"
# and then run "tox" from this directory.
[tox]
envlist = py26, py27, py32, py33, pypy, jython
[testenv]
commands = {envpython} setup.py test
deps =
""".lstrip()
result = open('tox-generated.ini').read()
assert(result == expected_tox_ini)

class TestToxQuickstart(object):
    def test_pytest(self):
        d = {
            'py26': True,
            'py27': True,
            'py32': True,
            'py33': True,
            'pypy': True,
            'commands': 'py.test',
            'deps': 'pytest',
        }
        expected_tox_ini = """
# Tox (http://tox.testrun.org/) is a tool for running tests
# in multiple virtualenvs. This configuration file will run the
# test suite on all supported python versions. To use it, "pip install tox"
# and then run "tox" from this directory.

[tox]
envlist = py26, py27, py32, py33, pypy

[testenv]
commands = py.test
deps =
    pytest
""".lstrip()
        d = tox._quickstart.process_input(d)
        tox._quickstart.generate(d)

        result = open('tox.ini').read()
        # print(result)
        assert(result == expected_tox_ini)

    def test_setup_py_test(self):
        d = {
            'py26': True,
            'py27': True,
            'commands': 'python setup.py test',
            'deps': '',
        }
        expected_tox_ini = """
# Tox (http://tox.testrun.org/) is a tool for running tests
# in multiple virtualenvs. This configuration file will run the
# test suite on all supported python versions. To use it, "pip install tox"
# and then run "tox" from this directory.

[tox]
envlist = py26, py27

[testenv]
commands = python setup.py test
deps =
""".lstrip()
        d = tox._quickstart.process_input(d)
        tox._quickstart.generate(d)

        result = open('tox.ini').read()
        # print(result)
        assert(result == expected_tox_ini)

    def test_trial(self):
        d = {
            'py27': True,
            'commands': 'trial',
            'deps': 'Twisted',
        }
        expected_tox_ini = """
# Tox (http://tox.testrun.org/) is a tool for running tests
# in multiple virtualenvs. This configuration file will run the
# test suite on all supported python versions. To use it, "pip install tox"
# and then run "tox" from this directory.

[tox]
envlist = py27

[testenv]
commands = trial
deps =
    Twisted
""".lstrip()
        d = tox._quickstart.process_input(d)
        tox._quickstart.generate(d)

        result = open('tox.ini').read()
        # print(result)
        assert(result == expected_tox_ini)

    def test_nosetests(self):
        d = {
            'py27': True,
            'py32': True,
            'py33': True,
            'pypy': True,
            'commands': 'nosetests -v',
            'deps': 'nose',
        }
        expected_tox_ini = """
# Tox (http://tox.testrun.org/) is a tool for running tests
# in multiple virtualenvs. This configuration file will run the
# test suite on all supported python versions. To use it, "pip install tox"
# and then run "tox" from this directory.

[tox]
envlist = py27, py32, py33, pypy

[testenv]
commands = nosetests -v
deps =
    nose
""".lstrip()
        d = tox._quickstart.process_input(d)
        tox._quickstart.generate(d)

        result = open('tox.ini').read()
        # print(result)
        assert(result == expected_tox_ini)
| 29.476712 | 95 | 0.625151 | 1,369 | 10,759 | 4.753104 | 0.085464 | 0.036883 | 0.05594 | 0.033963 | 0.910251 | 0.895958 | 0.876902 | 0.864915 | 0.864915 | 0.844475 | 0 | 0.014434 | 0.246584 | 10,759 | 364 | 96 | 29.557692 | 0.788305 | 0.005112 | 0 | 0.773973 | 0 | 0 | 0.459295 | 0 | 0 | 0 | 0 | 0 | 0.044521 | 1 | 0.058219 | false | 0 | 0.006849 | 0 | 0.082192 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
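The `get_mock_term_input` helper above feeds interactive prompts from a canned answer list. The same generator-backed pattern is useful outside tox for testing any prompt-driven CLI; a minimal standalone sketch (the names here are illustrative, not part of tox):

```python
def make_scripted_input(answers):
    # Return a callable that hands out one canned answer per prompt,
    # regardless of the prompt text - the same trick the tests use when
    # monkeypatching tox._quickstart.term_input
    gen = iter(answers)

    def scripted_input(prompt):
        return next(gen)

    return scripted_input


ask = make_scripted_input(['4', 'Y', 'py.test'])
print(ask('Which Python versions? '))  # -> 4
print(ask('Keep the defaults? '))      # -> Y
```

Once the answers are exhausted, the next call raises `StopIteration`, which makes a test fail loudly if the code under test asks more questions than the script anticipated.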
5eecbadf2af9135aa1a3bd79de149988609d1f95 | 159 | py | Python | utils.py | GerryLarios/order-typescript-types | adae41a5fcf667396541911f6e4541ce977ee269 | [
"MIT"
] | null | null | null | utils.py | GerryLarios/order-typescript-types | adae41a5fcf667396541911f6e4541ce977ee269 | [
"MIT"
] | null | null | null | utils.py | GerryLarios/order-typescript-types | adae41a5fcf667396541911f6e4541ce977ee269 | [
"MIT"
] | null | null | null | import re
def is_match(line):
    return re.search(r'type', line) is not None


def get_type_name(line):
    return re.findall(r'\b(?:(?!export|type|=|{)\w)+\b', line)[0]
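A quick usage sketch for these two helpers (they are restated here so the snippet is self-contained; the sample line is illustrative, not from the repo):

```python
import re

def is_match(line):
    # True when the line mentions a TypeScript `type` declaration.
    return re.search(r'type', line) is not None

def get_type_name(line):
    # First word that is not one of the structural tokens (export/type/=/{).
    return re.findall(r'\b(?:(?!export|type|=|{)\w)+\b', line)[0]

line = "export type UserProfile = { id: string }"
print(is_match(line))       # True
print(get_type_name(line))  # UserProfile
```

The negative lookahead blocks a match from *starting* at `export` or `type`, and the surrounding `\b` anchors prevent a partial match from resuming mid-word, so the first full word returned is the declared type name.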

# ===== src/arch/x86/isa/insts/simd512/integer/arithmetic/vpminsq.py | jyhuang91/gem5-avx @ 6f19dadc4b13621f2f90821045b1ee05c4c7e672 (BSD-3-Clause) =====
microcode = '''
def macroop VPMINSQ_XMM_XMM {
    vminsi dest=xmm0, src1=xmm0v, src2=xmm0m, size=8, VL=16
};

def macroop VPMINSQ_XMM_M {
    ldfp128 ufp1, seg, sib, "DISPLACEMENT + 0", dataSize=16
    vminsi dest=xmm0, src1=xmm0v, src2=ufp1, size=8, VL=16
};

def macroop VPMINSQ_XMM_P {
    rdip t7
    ldfp128 ufp1, seg, riprel, "DISPLACEMENT + 0", dataSize=16
    vminsi dest=xmm0, src1=xmm0v, src2=ufp1, size=8, VL=16
};

def macroop VPMINSQ_YMM_YMM {
    vminsi dest=xmm0, src1=xmm0v, src2=xmm0m, size=8, VL=32
};

def macroop VPMINSQ_YMM_M {
    ldfp256 ufp1, seg, sib, "DISPLACEMENT + 0", dataSize=32
    vminsi dest=xmm0, src1=xmm0v, src2=ufp1, size=8, VL=32
};

def macroop VPMINSQ_YMM_P {
    rdip t7
    ldfp256 ufp1, seg, riprel, "DISPLACEMENT + 0", dataSize=32
    vminsi dest=xmm0, src1=xmm0v, src2=ufp1, size=8, VL=32
};

def macroop VPMINSQ_ZMM_ZMM {
    vminsi dest=xmm0, src1=xmm0v, src2=xmm0m, size=8, VL=64
};

def macroop VPMINSQ_ZMM_M {
    ldfp512 ufp1, seg, sib, "DISPLACEMENT + 0", dataSize=64
    vminsi dest=xmm0, src1=xmm0v, src2=ufp1, size=8, VL=64
};

def macroop VPMINSQ_ZMM_P {
    rdip t7
    ldfp512 ufp1, seg, riprel, "DISPLACEMENT + 0", dataSize=64
    vminsi dest=xmm0, src1=xmm0v, src2=ufp1, size=8, VL=64
};
'''

# ===== cluster/__init__.py | IoT-Ticket/iotticketml @ 6f3c48204532c63dbff3f86565bddc37e9c06b86 (MIT) =====
from iotticketml.cluster.corrclust import CHUNX
from iotticketml.cluster.corrclust import CRUSHES

# ===== snapshots/snap_test_saau.py | Mause/statistical_atlas_of_au @ 48a97295c5b36df25606a23d4fc67a2a12b49dd4 (MIT) =====
# -*- coding: utf-8 -*-
# snapshottest: v1 - https://goo.gl/zC4yUc
from __future__ import unicode_literals
from snapshottest import Snapshot
snapshots = Snapshot()
snapshots['test_roads 1'] = [
    [
        [146.12118750000002, -34.61399849999998],
        [146.12117149999995, -34.614528500000006],
        [146.12119099999995, -34.61497850000001],
        [146.12131499999998, -34.61571600000002],
        [146.12143249999997, -34.61642699999999],
        [146.121491, -34.61764049999999],
        [146.12150399999996, -34.61803850000001],
        [146.12171049999995, -34.618738500000006],
        [146.1219115, -34.619140000000016]
    ],
    [
        [146.12118750000002, -34.61399849999998],
        [146.12117149999995, -34.614528500000006],
        [146.12119099999995, -34.61497850000001],
        [146.12131499999998, -34.61571600000002],
        [146.12143249999997, -34.61642699999999],
        [146.121491, -34.61764049999999],
        [146.12150399999996, -34.61803850000001],
        [146.12171049999995, -34.618738500000006],
        [146.1219115, -34.619140000000016]
    ]
]

# ===== Project-Euler/Problem8/problem8.py | lilsweetcaligula/MIT6.00.1x @ 48aa4c0f059d24d2b8150b2b78069d7fc80a8cb1 (MIT) =====
"""
[ref.href] https://projecteuler.net/problem=8
Largest product in a series
The four adjacent digits in the 1000-digit number that have the greatest product are:
9 * 9 * 8 * 9 = 5832.
73167176531330624919225119674426574742355349194934
96983520312774506326239578318016984801869478851843
85861560789112949495459501737958331952853208805511
12540698747158523863050715693290963295227443043557
66896648950445244523161731856403098711121722383113
62229893423380308135336276614282806444486645238749
30358907296290491560440772390713810515859307960866
70172427121883998797908792274921901699720888093776
65727333001053367881220235421809751254540594752243
52584907711670556013604839586446706324415722155397
53697817977846174064955149290862569321978468622482
83972241375657056057490261407972968652414535100474
82166370484403199890008895243450658541227588666881
16427171479924442928230863465674813919123162824586
17866458359124566529476545682848912883142607690042
24219022671055626321111109370544217506941658960408
07198403850962455444362981230987879927244284909188
84580156166097919133875499200524063689912560717606
05886116467109405077541002256983155200055935729725
71636269561882670428252483600823257530420752963450
Find the thirteen adjacent digits in the 1000-digit number that have the greatest product.
What is the value of this product?
"""
BIGNUM = "73167176531330624919225119674426574742355349194934\
96983520312774506326239578318016984801869478851843\
85861560789112949495459501737958331952853208805511\
12540698747158523863050715693290963295227443043557\
66896648950445244523161731856403098711121722383113\
62229893423380308135336276614282806444486645238749\
30358907296290491560440772390713810515859307960866\
70172427121883998797908792274921901699720888093776\
65727333001053367881220235421809751254540594752243\
52584907711670556013604839586446706324415722155397\
53697817977846174064955149290862569321978468622482\
83972241375657056057490261407972968652414535100474\
82166370484403199890008895243450658541227588666881\
16427171479924442928230863465674813919123162824586\
17866458359124566529476545682848912883142607690042\
24219022671055626321111109370544217506941658960408\
07198403850962455444362981230987879927244284909188\
84580156166097919133875499200524063689912560717606\
05886116467109405077541002256983155200055935729725\
71636269561882670428252483600823257530420752963450"
adjcount = 13
biglen = len(BIGNUM)
i = 0
j = 0
maxproduct = 0
product = 1
while i < biglen:
    digit = int(BIGNUM[i])
    if digit != 0:
        if product < 1:
            product = 1
        product *= digit
        j += 1
        if j >= adjcount:
            discard = int(BIGNUM[i - adjcount])
            if discard > 0:
                product //= discard
            if product > maxproduct:
                maxproduct = product
    else:
        product = 0
        j = 0
    i += 1

print("The greatest product of the %d adjacent digits in the %d-digit number is %d." % (adjcount, biglen, maxproduct))
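The rolling divide-as-you-go approach above can be cross-checked against a direct product over every k-length window; `max_window_product` is an illustrative helper name, not part of the original script:

```python
def max_window_product(digits, k):
    # Brute-force check: multiply out each k-length window directly.
    # O(n*k) instead of O(n), but with no zero/reset bookkeeping to get wrong.
    best = 0
    for i in range(len(digits) - k + 1):
        product = 1
        for d in digits[i:i + k]:
            product *= int(d)
        best = max(best, product)
    return best

print(max_window_product("123456", 3))  # 120 (from 4 * 5 * 6)
```

Run on the 1000-digit `BIGNUM` with `k = 13`, this should agree with `maxproduct` from the rolling version.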

# ===== CeLEry_package/CeLEryPy/DNN.py | QihuangZhang/CeLEry @ 5b01721270e7b4c9865b71b332d75debf9e79f34 (MIT) =====
import torch
from torch import nn
from torch.nn import functional as F
from .types_ import *
import math
class DNN(nn.Module):
    def __init__(self,
                 in_channels: int,
                 hidden_dims: List = None,
                 **kwargs) -> None:
        super(DNN, self).__init__()

        if hidden_dims is None:
            hidden_dims = [200, 100, 50]

        self.fclayer1 = nn.Sequential(
            nn.Linear(in_channels, hidden_dims[0]),
            # nn.BatchNorm1d(hidden_dims[0]),
            nn.ReLU())
        self.fclayer2 = nn.Sequential(
            nn.Linear(hidden_dims[0], hidden_dims[1]),
            # nn.BatchNorm1d(hidden_dims[1]),
            nn.ReLU())
        self.fclayer3 = nn.Sequential(
            nn.Linear(hidden_dims[1], hidden_dims[2]),
            # nn.BatchNorm1d(hidden_dims[2]),
            nn.ReLU())
        self.fclayer4 = nn.Sequential(
            nn.Linear(hidden_dims[2], 2),
            # nn.BatchNorm1d(2),
            nn.Sigmoid())

    def forward(self, input: Tensor, **kwargs) -> List[Tensor]:
        z = self.fclayer1(input[0])
        z = self.fclayer2(z)
        z = self.fclayer3(z)
        z = self.fclayer4(z)
        return [z, input]

    def loss_function(self,
                      *args,
                      **kwargs) -> dict:
        """
        Computes the spatial coordinates loss function
        :param args: results data and input matrix
        :return:
        """
        cord_pred = args[0]
        input = args[1]
        loss = F.mse_loss(cord_pred, input[1])
        return {'loss': loss}
class DNNordinal_v2(DNN):
    def __init__(self,
                 # in_channels: int,
                 in_channels: int,
                 num_classes: int,
                 hidden_dims: List = None,
                 **kwargs) -> None:
        super(DNNordinal_v2, self).__init__(in_channels, hidden_dims, **kwargs)

        if hidden_dims is None:
            hidden_dims = [200, 100, 50]

        self.fclayer1 = nn.Sequential(
            nn.Linear(in_channels, hidden_dims[0]),
            nn.ReLU())
        self.fclayer2 = nn.Sequential(
            nn.Linear(hidden_dims[0], hidden_dims[1]),
            nn.ReLU())
        self.fclayer3 = nn.Sequential(
            nn.Linear(hidden_dims[1], hidden_dims[2]),
            nn.ReLU())
        self.fclayer4 = nn.Sequential(
            nn.Linear(hidden_dims[2], 2))

        self.coral_bias = torch.nn.Parameter(
            torch.arange(num_classes - 1, 0, -1).float() / (num_classes - 1))

    def forward(self, input: Tensor, **kwargs) -> List[Tensor]:
        """
        Computes forward pass.

        Parameters
        -----------
        x : torch.tensor, shape=(num_examples, num_features)
            Input features.

        Returns
        -----------
        logits : torch.tensor, shape=(num_examples, num_classes-1)
        """
        z = self.fclayer1(input[0])
        z = self.fclayer2(z)
        z = self.fclayer3(z)
        z = self.fclayer4(z)
        logits = z[0, 1] + self.coral_bias
        logitWM = z[0, 0]
        return [logits, logitWM, input]

    def loss_function(self,
                      *args,
                      **kwargs) -> dict:
        """Computes the CORAL loss described in

        Cao, Mirjalili, and Raschka (2020)
        *Rank Consistent Ordinal Regression for Neural Networks
        with Application to Age Estimation*
        Pattern Recognition Letters, https://doi.org/10.1016/j.patrec.2020.11.008

        Parameters
        ----------
        logits : torch.tensor, shape(num_examples, num_classes-1)
            Outputs of the CORAL layer.
        levels : torch.tensor, shape(num_examples, num_classes-1)
            True labels represented as extended binary vectors
            (via `coral_pytorch.dataset.levels_from_labelbatch`).
        importance_weights : torch.tensor, shape=(num_classes-1,) (default=None)
            Optional weights for the different labels in levels.
            A tensor of ones, i.e.,
            `torch.ones(num_classes-1, dtype=torch.float32)`
            will result in uniform weights that have the same effect as None.
        reduction : str or None (default='mean')
            If 'mean' or 'sum', returns the averaged or summed loss value across
            all data points (rows) in logits. If None, returns a vector of
            shape (num_examples,)
        """
        logits = args[0]
        logitWM = args[1]
        levelALL = args[2][1]
        levels = levelALL[0, :(levelALL.shape[1] - 1)]
        levelWM = levelALL[0, levelALL.shape[1] - 1]

        if not logits.shape == levels.shape:
            raise ValueError("Please ensure that logits (%s) has the same shape as levels (%s). "
                             % (logits.shape, levels.shape))
        term1 = (F.logsigmoid(logits) * levels + (F.logsigmoid(logits) - logits) * (1 - levels))
        term2 = F.logsigmoid(logitWM) * levelWM + (F.logsigmoid(logitWM) - logitWM + term1) * (1 - levelWM)

        val = (-torch.sum(term2, dim=0))
        # loss = torch.sum(val)
        return {'loss': val}
class DNNordinal(DNN):
    def __init__(self,
                 # in_channels: int,
                 in_channels: int,
                 num_classes: int,
                 hidden_dims: List = None,
                 importance_weights: List = None,
                 **kwargs) -> None:
        super(DNNordinal, self).__init__(in_channels, hidden_dims, **kwargs)

        if hidden_dims is None:
            hidden_dims = [200, 100, 50]

        self.fclayer1 = nn.Sequential(
            nn.Linear(in_channels, hidden_dims[0]),
            nn.Dropout(0.25),
            nn.ReLU())
        self.fclayer2 = nn.Sequential(
            nn.Linear(hidden_dims[0], hidden_dims[1]),
            nn.ReLU())
        self.fclayer3 = nn.Sequential(
            nn.Linear(hidden_dims[1], hidden_dims[2]),
            nn.ReLU())
        self.fclayer4 = nn.Sequential(
            nn.Linear(hidden_dims[2], 1))

        self.coral_bias = torch.nn.Parameter(
            torch.arange(num_classes - 1, 0, -1).float() / (num_classes - 1))
        self.importance_weights = importance_weights

    def forward(self, input: Tensor, **kwargs) -> List[Tensor]:
        """
        Computes forward pass.

        Parameters
        -----------
        x : torch.tensor, shape=(num_examples, num_features)
            Input features.

        Returns
        -----------
        logits : torch.tensor, shape=(num_examples, num_classes-1)
        """
        z = self.fclayer1(input[0])
        z = self.fclayer2(z)
        z = self.fclayer3(z)
        z = self.fclayer4(z)
        logits = z + self.coral_bias
        return [logits, input]

    def loss_function(self,
                      *args,
                      **kwargs) -> dict:
        """Computes the CORAL loss described in

        Cao, Mirjalili, and Raschka (2020)
        *Rank Consistent Ordinal Regression for Neural Networks
        with Application to Age Estimation*
        Pattern Recognition Letters, https://doi.org/10.1016/j.patrec.2020.11.008

        Parameters
        ----------
        logits : torch.tensor, shape(num_examples, num_classes-1)
            Outputs of the CORAL layer.
        levels : torch.tensor, shape(num_examples, num_classes-1)
            True labels represented as extended binary vectors
            (via `coral_pytorch.dataset.levels_from_labelbatch`).
        importance_weights : torch.tensor, shape=(num_classes-1,) (default=None)
            Optional weights for the different labels in levels.
            A tensor of ones, i.e.,
            `torch.ones(num_classes, dtype=torch.float32)`
            will result in uniform weights that have the same effect as None.
        reduction : str or None (default='mean')
            If 'mean' or 'sum', returns the averaged or summed loss value across
            all data points (rows) in logits. If None, returns a vector of
            shape (num_examples,)
        """
        logits = args[0]
        levels = args[1][1]

        if not logits.shape == levels.shape:
            raise ValueError("Please ensure that logits (%s) has the same shape as levels (%s). "
                             % (logits.shape, levels.shape))
        term1 = (F.logsigmoid(logits) * levels + (F.logsigmoid(logits) - logits) * (1 - levels))
        layerid = torch.sum(levels, dim=1)
        if self.importance_weights is not None:
            term2 = torch.mul(self.importance_weights[layerid.numpy()], term1.transpose(0, 1))
        else:
            term2 = term1.transpose(0, 1)

        val = (-torch.sum(term2, dim=0))
        loss = torch.mean(val)
        return {'loss': loss}
class DNNregion(DNN):
    def __init__(self,
                 # in_channels: int,
                 in_channels: int,
                 alpha,
                 hidden_dims: List = None,
                 **kwargs) -> None:
        super(DNNregion, self).__init__(in_channels, hidden_dims, **kwargs)

        if hidden_dims is None:
            hidden_dims = [200, 100, 50]

        self.fclayer1 = nn.Sequential(
            nn.Linear(in_channels, hidden_dims[0]),
            nn.Dropout(0.25),
            nn.ReLU())
        self.fclayer2 = nn.Sequential(
            nn.Linear(hidden_dims[0], hidden_dims[1]),
            nn.ReLU())
        self.fclayer3 = nn.Sequential(
            nn.Linear(hidden_dims[1], hidden_dims[2]),
            nn.ReLU())
        self.fclayer4 = nn.Sequential(
            nn.Linear(hidden_dims[2], 5))

        self.alpha = alpha

    def forward(self, input: Tensor, **kwargs) -> List[Tensor]:
        """
        Computes forward pass.

        Parameters
        -----------
        x : torch.tensor, shape=(num_examples, num_features)
            Input features.

        Returns
        -----------
        logits : torch.tensor, shape=(num_examples, num_classes-1)
        """
        z = self.fclayer1(input[0])
        z = self.fclayer2(z)
        z = self.fclayer3(z)
        z = self.fclayer4(z)
        cord = F.sigmoid(z[:, 0:2])
        r = F.softplus(z[:, 2:4])
        theta = F.sigmoid(z[:, 4]) * math.pi
        return [cord, r, theta, input]

    def loss_function(self,
                      *args,
                      **kwargs) -> dict:
        """Computes the loss described in Justin's

        Parameters
        ----------
        logits : torch.tensor, shape(num_examples, num_classes-1)
            Outputs of the CORAL layer.
        levels : torch.tensor, shape(num_examples, num_classes-1)
            True labels represented as extended binary vectors
            (via `coral_pytorch.dataset.levels_from_labelbatch`).
        importance_weights : torch.tensor, shape=(num_classes-1,) (default=None)
            Optional weights for the different labels in levels.
            A tensor of ones, i.e.,
            `torch.ones(num_classes, dtype=torch.float32)`
            will result in uniform weights that have the same effect as None.
        reduction : str or None (default='mean')
            If 'mean' or 'sum', returns the averaged or summed loss value across
            all data points (rows) in logits. If None, returns a vector of
            shape (num_examples,)
        """
        cord_pred = args[0]
        r_pred = args[1]
        theta_pred = args[2]
        input = args[3]

        rotation_x = torch.cat((torch.cos(theta_pred).unsqueeze(1), torch.sin(theta_pred).unsqueeze(1)), 1)
        rotation_y = torch.cat((torch.sin(theta_pred).unsqueeze(1), -torch.cos(theta_pred).unsqueeze(1)), 1)
        # MSE_Adjust = (cord_pred - input[1]) / (r_pred + 1e-7): old version - without considering rotation
        cord_decenter = cord_pred - input[1]
        semi_x = torch.sum(cord_decenter * rotation_x, dim=1)
        semi_y = torch.sum(cord_decenter * rotation_y, dim=1)
        cord_trans = torch.cat((semi_x.unsqueeze(1), semi_y.unsqueeze(1)), 1)
        MSE_Adjust = cord_trans / (r_pred + 1e-7)

        area = torch.prod(r_pred)
        MSE_sum = torch.sum(torch.square(MSE_Adjust), dim=1)
        Si = (MSE_sum <= 1) * 1
        val = (self.alpha * (1 - Si) + (1 - self.alpha) * Si) * torch.abs(MSE_sum - 1)
        loss = torch.mean(val)
        return {'loss': loss, 'MSE_pure': MSE_sum, 'Inside_indic': Si, 'Area': area.detach().numpy()}
class DNNdomain(DNN):
    def __init__(self,
                 # in_channels: int,
                 in_channels: int,
                 num_classes: int,
                 hidden_dims: List = None,
                 importance_weights: List = None,
                 **kwargs) -> None:
        super(DNNdomain, self).__init__(in_channels, hidden_dims, **kwargs)

        if hidden_dims is None:
            hidden_dims = [200, 100, 50]

        self.fclayer1 = nn.Sequential(
            nn.Linear(in_channels, hidden_dims[0]),
            nn.Dropout(0.25),
            nn.ReLU())
        self.fclayer2 = nn.Sequential(
            nn.Linear(hidden_dims[0], hidden_dims[1]),
            nn.ReLU())
        self.fclayer3 = nn.Sequential(
            nn.Linear(hidden_dims[1], hidden_dims[2]),
            nn.ReLU())
        self.fclayer4 = nn.Sequential(
            nn.Linear(hidden_dims[2], num_classes))

        self.importance_weights = importance_weights

    def forward(self, input: Tensor, **kwargs) -> List[Tensor]:
        """
        Computes forward pass.

        Parameters
        -----------
        x : torch.tensor, shape=(num_examples, num_features)
            Input features.

        Returns
        -----------
        logits : torch.tensor, shape=(num_examples, num_classes-1)
        """
        z = self.fclayer1(input[0])
        z = self.fclayer2(z)
        z = self.fclayer3(z)
        logits = self.fclayer4(z)
        return [logits, input]

    def loss_function(self,
                      *args,
                      **kwargs) -> dict:
        """Computes the CORAL loss described in

        Cao, Mirjalili, and Raschka (2020)
        *Rank Consistent Ordinal Regression for Neural Networks
        with Application to Age Estimation*
        Pattern Recognition Letters, https://doi.org/10.1016/j.patrec.2020.11.008

        Parameters
        ----------
        logits : torch.tensor, shape(num_examples, num_classes-1)
            Outputs of the CORAL layer.
        levels : torch.tensor, shape(num_examples, num_classes-1)
            True labels represented as extended binary vectors
            (via `coral_pytorch.dataset.levels_from_labelbatch`).
        importance_weights : torch.tensor, shape=(num_classes-1,) (default=None)
            Optional weights for the different labels in levels.
            A tensor of ones, i.e.,
            `torch.ones(num_classes, dtype=torch.float32)`
            will result in uniform weights that have the same effect as None.
        reduction : str or None (default='mean')
            If 'mean' or 'sum', returns the averaged or summed loss value across
            all data points (rows) in logits. If None, returns a vector of
            shape (num_examples,)
        """
        logits = args[0]
        levels = args[1][1]

        # if not logits.shape == levels.shape:
        #     raise ValueError("Please ensure that logits (%s) has the same shape as levels (%s). "
        #                      % (logits.shape, levels.shape))
        if self.importance_weights is not None:
            loss = nn.CrossEntropyLoss(weight=self.importance_weights)
        else:
            loss = nn.CrossEntropyLoss()
        # layerid = torch.sum(levels, dim=1)
        output = loss(logits, levels)
        return {'loss': output}
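The CORAL term used by the ordinal losses above, `F.logsigmoid(logits)*levels + (F.logsigmoid(logits) - logits)*(1-levels)`, can be sanity-checked without torch. The following pure-Python sketch mirrors that formula for a single example; `coral_loss` is an illustrative name, not part of CeLEry:

```python
import math

def coral_loss(logits, levels):
    # log(sigmoid(x)) computed as -log(1 + exp(-x)); fine for moderate x.
    def logsigmoid(x):
        return -math.log1p(math.exp(-x))
    # Sum the per-threshold binary terms, then negate (loss is minimized).
    total = sum(logsigmoid(z) * y + (logsigmoid(z) - z) * (1 - y)
                for z, y in zip(logits, levels))
    return -total

# At logit = 0 both branches reduce to log(2) per threshold.
print(round(coral_loss([0.0, 0.0], [1, 0]), 6))  # 1.386294
```

A confident correct prediction (large positive logit with level 1) drives the corresponding term toward zero, which is the rank-consistency behavior the docstrings describe.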

# ===== datahub_client/apis/user_api.py | amkimian/mimir_python @ d29a598500992a4d1acb61f238b83872cd999db5 (Apache-2.0) =====
# coding: utf-8
"""
DataHub API
DataHub API
OpenAPI spec version: 0.0.11
Generated by: https://github.com/swagger-api/swagger-codegen.git
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from __future__ import absolute_import
import sys
import os
import re
# python 2 and python 3 compatibility library
from six import iteritems
from ..configuration import Configuration
from ..api_client import ApiClient
class UserApi(object):
    """
    NOTE: This class is auto generated by the swagger code generator program.
    Do not edit the class manually.
    Ref: https://github.com/swagger-api/swagger-codegen
    """

    def __init__(self, api_client=None):
        config = Configuration()
        if api_client:
            self.api_client = api_client
        else:
            if not config.api_client:
                config.api_client = ApiClient()
            self.api_client = config.api_client

    def delete_user(self, user_id, **kwargs):
        """
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.delete_user(user_id, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str user_id: (required)
        :param str admin_key: The admin user api key
        :param str api_key: The user api key
        :return: GeneralStatus
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.delete_user_with_http_info(user_id, **kwargs)
        else:
            (data) = self.delete_user_with_http_info(user_id, **kwargs)
            return data
    def delete_user_with_http_info(self, user_id, **kwargs):
        """
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.delete_user_with_http_info(user_id, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str user_id: (required)
        :param str admin_key: The admin user api key
        :param str api_key: The user api key
        :return: GeneralStatus
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['user_id', 'admin_key', 'api_key']
        all_params.append('callback')
        all_params.append('_return_http_data_only')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method delete_user" % key
                )
            params[key] = val
        del params['kwargs']

        # verify the required parameter 'user_id' is set
        if ('user_id' not in params) or (params['user_id'] is None):
            raise ValueError("Missing the required parameter `user_id` when calling `delete_user`")

        resource_path = '/admin/user/{userId}'.replace('{format}', 'json')
        path_params = {}
        if 'user_id' in params:
            path_params['userId'] = params['user_id']

        query_params = {}

        header_params = {}
        if 'admin_key' in params:
            header_params['admin_key'] = params['admin_key']
        if 'api_key' in params:
            header_params['api_key'] = params['api_key']

        form_params = []
        local_var_files = {}

        body_params = None

        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json'])
        if not header_params['Accept']:
            del header_params['Accept']

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type([])

        # Authentication setting
        auth_settings = []

        return self.api_client.call_api(resource_path, 'DELETE',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='GeneralStatus',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'))
    def get_user(self, user_id, **kwargs):
        """
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.get_user(user_id, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str user_id: (required)
        :param str admin_key: The admin user api key
        :param str api_key: The user api key
        :return: User
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.get_user_with_http_info(user_id, **kwargs)
        else:
            (data) = self.get_user_with_http_info(user_id, **kwargs)
            return data
    def get_user_with_http_info(self, user_id, **kwargs):
        """
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.get_user_with_http_info(user_id, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str user_id: (required)
        :param str admin_key: The admin user api key
        :param str api_key: The user api key
        :return: User
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['user_id', 'admin_key', 'api_key']
        all_params.append('callback')
        all_params.append('_return_http_data_only')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method get_user" % key
                )
            params[key] = val
        del params['kwargs']

        # verify the required parameter 'user_id' is set
        if ('user_id' not in params) or (params['user_id'] is None):
            raise ValueError("Missing the required parameter `user_id` when calling `get_user`")

        resource_path = '/admin/user/{userId}'.replace('{format}', 'json')
        path_params = {}
        if 'user_id' in params:
            path_params['userId'] = params['user_id']

        query_params = {}

        header_params = {}
        if 'admin_key' in params:
            header_params['admin_key'] = params['admin_key']
        if 'api_key' in params:
            header_params['api_key'] = params['api_key']

        form_params = []
        local_var_files = {}

        body_params = None

        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json'])
        if not header_params['Accept']:
            del header_params['Accept']

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type([])

        # Authentication setting
        auth_settings = []

        return self.api_client.call_api(resource_path, 'GET',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='User',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'))
    def get_user_by_email(self, admin_key, email, **kwargs):
        """
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.get_user_by_email(admin_key, email, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str admin_key: The admin user api key (required)
        :param str email: The email to search for (required)
        :return: User
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.get_user_by_email_with_http_info(admin_key, email, **kwargs)
        else:
            (data) = self.get_user_by_email_with_http_info(admin_key, email, **kwargs)
            return data
def get_user_by_email_with_http_info(self, admin_key, email, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_user_by_email_with_http_info(admin_key, email, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str admin_key: The admin user api key (required)
:param str email: The email to search for (required)
:return: User
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['admin_key', 'email']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_user_by_email" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'admin_key' is set
if ('admin_key' not in params) or (params['admin_key'] is None):
raise ValueError("Missing the required parameter `admin_key` when calling `get_user_by_email`")
# verify the required parameter 'email' is set
if ('email' not in params) or (params['email'] is None):
raise ValueError("Missing the required parameter `email` when calling `get_user_by_email`")
resource_path = '/admin/getUserByEmail'.replace('{format}', 'json')
path_params = {}
query_params = {}
if 'email' in params:
query_params['email'] = params['email']
header_params = {}
if 'admin_key' in params:
header_params['admin_key'] = params['admin_key']
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type([])
# Authentication setting
auth_settings = []
return self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='User',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
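# Each `*_with_http_info` method begins by validating `**kwargs` against an
# explicit whitelist before building the request. A standalone sketch of that
# guard (the `check_kwargs` helper name is hypothetical, not part of the
# generated client):

```python
def check_kwargs(kwargs, all_params):
    """Reject unexpected keyword arguments, mirroring the generated guard."""
    for key in kwargs:
        if key not in all_params:
            raise TypeError(
                "Got an unexpected keyword argument '%s'" % key
            )
    return dict(kwargs)

# Whitelisted keys pass through; anything else raises TypeError.
params = check_kwargs({"callback": None}, ["admin_key", "email", "callback"])
```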
def get_user_by_tag(self, admin_key, tag_name, tag_value, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_user_by_tag(admin_key, tag_name, tag_value, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str admin_key: The admin user api key (required)
:param str tag_name: The tag field to search (e.g. github) (required)
:param str tag_value: The tag value to search (required)
:return: User
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.get_user_by_tag_with_http_info(admin_key, tag_name, tag_value, **kwargs)
else:
data = self.get_user_by_tag_with_http_info(admin_key, tag_name, tag_value, **kwargs)
return data
def get_user_by_tag_with_http_info(self, admin_key, tag_name, tag_value, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_user_by_tag_with_http_info(admin_key, tag_name, tag_value, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str admin_key: The admin user api key (required)
:param str tag_name: The tag field to search (e.g. github) (required)
:param str tag_value: The tag value to search (required)
:return: User
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['admin_key', 'tag_name', 'tag_value']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_user_by_tag" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'admin_key' is set
if ('admin_key' not in params) or (params['admin_key'] is None):
raise ValueError("Missing the required parameter `admin_key` when calling `get_user_by_tag`")
# verify the required parameter 'tag_name' is set
if ('tag_name' not in params) or (params['tag_name'] is None):
raise ValueError("Missing the required parameter `tag_name` when calling `get_user_by_tag`")
# verify the required parameter 'tag_value' is set
if ('tag_value' not in params) or (params['tag_value'] is None):
raise ValueError("Missing the required parameter `tag_value` when calling `get_user_by_tag`")
resource_path = '/admin/getUserByTag'.replace('{format}', 'json')
path_params = {}
query_params = {}
if 'tag_name' in params:
query_params['tagName'] = params['tag_name']
if 'tag_value' in params:
query_params['tagValue'] = params['tag_value']
header_params = {}
if 'admin_key' in params:
header_params['admin_key'] = params['admin_key']
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type([])
# Authentication setting
auth_settings = []
return self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='User',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
def get_user_by_token(self, admin_key, token, expiry, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_user_by_token(admin_key, token, expiry, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str admin_key: The admin user api key (required)
:param str token: The token passed by an email (required)
:param date expiry: The latest date for which the token is valid (required)
:return: User
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.get_user_by_token_with_http_info(admin_key, token, expiry, **kwargs)
else:
data = self.get_user_by_token_with_http_info(admin_key, token, expiry, **kwargs)
return data
def get_user_by_token_with_http_info(self, admin_key, token, expiry, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_user_by_token_with_http_info(admin_key, token, expiry, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str admin_key: The admin user api key (required)
:param str token: The token passed by an email (required)
:param date expiry: The latest date for which the token is valid (required)
:return: User
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['admin_key', 'token', 'expiry']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_user_by_token" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'admin_key' is set
if ('admin_key' not in params) or (params['admin_key'] is None):
raise ValueError("Missing the required parameter `admin_key` when calling `get_user_by_token`")
# verify the required parameter 'token' is set
if ('token' not in params) or (params['token'] is None):
raise ValueError("Missing the required parameter `token` when calling `get_user_by_token`")
# verify the required parameter 'expiry' is set
if ('expiry' not in params) or (params['expiry'] is None):
raise ValueError("Missing the required parameter `expiry` when calling `get_user_by_token`")
resource_path = '/admin/getUserByToken'.replace('{format}', 'json')
path_params = {}
query_params = {}
if 'token' in params:
query_params['token'] = params['token']
if 'expiry' in params:
query_params['expiry'] = params['expiry']
header_params = {}
if 'admin_key' in params:
header_params['admin_key'] = params['admin_key']
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type([])
# Authentication setting
auth_settings = []
return self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='User',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
def put_user(self, user_id, body, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.put_user(user_id, body, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str user_id: (required)
:param User body: (required)
:param str api_key: The user api key
:return: GeneralStatus
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.put_user_with_http_info(user_id, body, **kwargs)
else:
data = self.put_user_with_http_info(user_id, body, **kwargs)
return data
def put_user_with_http_info(self, user_id, body, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.put_user_with_http_info(user_id, body, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str user_id: (required)
:param User body: (required)
:param str api_key: The user api key
:return: GeneralStatus
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['user_id', 'body', 'api_key']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method put_user" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'user_id' is set
if ('user_id' not in params) or (params['user_id'] is None):
raise ValueError("Missing the required parameter `user_id` when calling `put_user`")
# verify the required parameter 'body' is set
if ('body' not in params) or (params['body'] is None):
raise ValueError("Missing the required parameter `body` when calling `put_user`")
resource_path = '/admin/user/{userId}'.replace('{format}', 'json')
path_params = {}
if 'user_id' in params:
path_params['userId'] = params['user_id']
query_params = {}
header_params = {}
if 'api_key' in params:
header_params['api_key'] = params['api_key']
form_params = []
local_var_files = {}
body_params = None
if 'body' in params:
body_params = params['body']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type([])
# Authentication setting
auth_settings = []
return self.api_client.call_api(resource_path, 'PUT',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='GeneralStatus',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
# --- pyqtgraph/jupyter/__init__.py (StSav012/pyqtgraph, MIT) ---
from .GraphicsView import GraphicsLayoutWidget
from .GraphicsView import PlotWidget
# --- avionix_airflow/kubernetes/redis/__init__.py (zbrookle/avionix_airflow, BSD-3-Clause) ---
# flake8: noqa
from avionix_airflow.kubernetes.redis.redis_options import RedisOptions
from avionix_airflow.kubernetes.redis.redis_orchestrator import RedisOrchestrator
# --- senlin/tests/unit/engine/service/test_cluster_op.py (senlin-7.0.0, Apache-2.0) ---
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from oslo_messaging.rpc import dispatcher as rpc
import six
from senlin.common import consts
from senlin.common import exception as exc
from senlin.engine.actions import base as am
from senlin.engine import cluster as cm
from senlin.engine import dispatcher
from senlin.engine import service
from senlin.objects import cluster as co
from senlin.objects import node as no
from senlin.objects.requests import clusters as orco
from senlin.tests.unit.common import base
from senlin.tests.unit.common import utils
class ClusterOpTest(base.SenlinTestCase):
def setUp(self):
super(ClusterOpTest, self).setUp()
self.ctx = utils.dummy_context(project='cluster_op_test_project')
self.eng = service.EngineService('host-a', 'topic-a')
@mock.patch.object(dispatcher, 'start_action')
@mock.patch.object(am.Action, 'create')
@mock.patch.object(no.Node, 'ids_by_cluster')
@mock.patch.object(cm.Cluster, 'load')
@mock.patch.object(co.Cluster, 'find')
def test_cluster_op(self, mock_find, mock_cluster, mock_nodes, mock_action,
mock_start):
x_db_cluster = mock.Mock()
mock_find.return_value = x_db_cluster
x_schema = mock.Mock()
x_profile = mock.Mock(OPERATIONS={'dance': x_schema})
x_cluster = mock.Mock(id='12345678AB')
x_cluster.rt = {'profile': x_profile}
mock_cluster.return_value = x_cluster
mock_action.return_value = 'ACTION_ID'
params = {'style': 'tango'}
filters = {'role': 'slave'}
mock_nodes.return_value = ['NODE1', 'NODE2']
req = orco.ClusterOperationRequest(identity='FAKE_CLUSTER',
operation='dance',
params=params,
filters=filters)
result = self.eng.cluster_op(self.ctx, req.obj_to_primitive())
self.assertEqual({'action': 'ACTION_ID'}, result)
mock_find.assert_called_once_with(self.ctx, 'FAKE_CLUSTER')
mock_cluster.assert_called_once_with(self.ctx, dbcluster=x_db_cluster)
x_schema.validate.assert_called_once_with({'style': 'tango'})
mock_nodes.assert_called_once_with(self.ctx, '12345678AB',
filters={'role': 'slave'})
mock_action.assert_called_once_with(
self.ctx, '12345678AB', consts.CLUSTER_OPERATION,
name='cluster_dance_12345678',
cause=consts.CAUSE_RPC,
status=am.Action.READY,
inputs={
'operation': 'dance',
'params': {'style': 'tango'},
'nodes': ['NODE1', 'NODE2']
}
)
mock_start.assert_called_once_with()
@mock.patch.object(co.Cluster, 'find')
def test_cluster_op_cluster_not_found(self, mock_find):
mock_find.side_effect = exc.ResourceNotFound(
type='cluster', id='Bogus')
req = orco.ClusterOperationRequest(identity='Bogus', operation='dance')
ex = self.assertRaises(rpc.ExpectedException,
self.eng.cluster_op,
self.ctx, req.obj_to_primitive())
self.assertEqual(exc.ResourceNotFound, ex.exc_info[0])
self.assertEqual("The cluster 'Bogus' could not be found.",
six.text_type(ex.exc_info[1]))
mock_find.assert_called_once_with(self.ctx, 'Bogus')
@mock.patch.object(cm.Cluster, 'load')
@mock.patch.object(co.Cluster, 'find')
def test_cluster_op_unsupported_operation(self, mock_find, mock_cluster):
x_db_cluster = mock.Mock(id='12345678AB')
mock_find.return_value = x_db_cluster
x_schema = mock.Mock()
x_profile = mock.Mock(OPERATIONS={'dance': x_schema}, type='cow')
x_cluster = mock.Mock()
x_cluster.rt = {'profile': x_profile}
mock_cluster.return_value = x_cluster
req = orco.ClusterOperationRequest(identity='node1', operation='swim')
ex = self.assertRaises(rpc.ExpectedException,
self.eng.cluster_op,
self.ctx, req.obj_to_primitive())
self.assertEqual(exc.BadRequest, ex.exc_info[0])
self.assertEqual("The requested operation 'swim' is not supported "
"by the profile type 'cow'.",
six.text_type(ex.exc_info[1]))
mock_find.assert_called_once_with(self.ctx, 'node1')
mock_cluster.assert_called_once_with(self.ctx, dbcluster=x_db_cluster)
@mock.patch.object(cm.Cluster, 'load')
@mock.patch.object(co.Cluster, 'find')
def test_cluster_op_bad_parameters(self, mock_find, mock_cluster):
x_db_cluster = mock.Mock(id='12345678AB')
mock_find.return_value = x_db_cluster
x_schema = mock.Mock()
x_schema.validate.side_effect = exc.ESchema(message='Boom')
x_profile = mock.Mock(OPERATIONS={'dance': x_schema})
x_cluster = mock.Mock()
x_cluster.rt = {'profile': x_profile}
mock_cluster.return_value = x_cluster
req = orco.ClusterOperationRequest(identity='node1', operation='dance',
params={'style': 'tango'})
ex = self.assertRaises(rpc.ExpectedException,
self.eng.cluster_op,
self.ctx, req.obj_to_primitive())
self.assertEqual(exc.BadRequest, ex.exc_info[0])
self.assertEqual("Boom.", six.text_type(ex.exc_info[1]))
mock_find.assert_called_once_with(self.ctx, 'node1')
mock_cluster.assert_called_once_with(self.ctx, dbcluster=x_db_cluster)
x_schema.validate.assert_called_once_with({'style': 'tango'})
@mock.patch.object(dispatcher, 'start_action')
@mock.patch.object(am.Action, 'create')
@mock.patch.object(no.Node, 'ids_by_cluster')
@mock.patch.object(cm.Cluster, 'load')
@mock.patch.object(co.Cluster, 'find')
def test_cluster_op_no_parameters(self, mock_find, mock_cluster,
mock_nodes, mock_action, mock_start):
x_db_cluster = mock.Mock()
mock_find.return_value = x_db_cluster
x_schema = mock.Mock()
x_profile = mock.Mock(OPERATIONS={'dance': x_schema})
x_cluster = mock.Mock(id='12345678AB')
x_cluster.rt = {'profile': x_profile}
mock_cluster.return_value = x_cluster
mock_action.return_value = 'ACTION_ID'
filters = {'role': 'slave'}
mock_nodes.return_value = ['NODE1', 'NODE2']
req = orco.ClusterOperationRequest(identity='FAKE_CLUSTER',
operation='dance',
filters=filters)
result = self.eng.cluster_op(self.ctx, req.obj_to_primitive())
self.assertEqual({'action': 'ACTION_ID'}, result)
mock_find.assert_called_once_with(self.ctx, 'FAKE_CLUSTER')
mock_cluster.assert_called_once_with(self.ctx, dbcluster=x_db_cluster)
self.assertEqual(0, x_schema.validate.call_count)
mock_nodes.assert_called_once_with(self.ctx, '12345678AB',
filters={'role': 'slave'})
mock_action.assert_called_once_with(
self.ctx, '12345678AB', consts.CLUSTER_OPERATION,
name='cluster_dance_12345678',
cause=consts.CAUSE_RPC,
status=am.Action.READY,
inputs={
'operation': 'dance',
'params': {},
'nodes': ['NODE1', 'NODE2']
}
)
mock_start.assert_called_once_with()
@mock.patch.object(dispatcher, 'start_action')
@mock.patch.object(am.Action, 'create')
@mock.patch.object(no.Node, 'ids_by_cluster')
@mock.patch.object(cm.Cluster, 'load')
@mock.patch.object(co.Cluster, 'find')
def test_cluster_op_no_filters(self, mock_find, mock_cluster,
mock_nodes, mock_action, mock_start):
x_db_cluster = mock.Mock()
mock_find.return_value = x_db_cluster
x_schema = mock.Mock()
x_profile = mock.Mock(OPERATIONS={'dance': x_schema})
x_cluster = mock.Mock(id='12345678AB')
x_cluster.rt = {'profile': x_profile}
mock_cluster.return_value = x_cluster
mock_action.return_value = 'ACTION_ID'
mock_nodes.return_value = ['NODE1', 'NODE2']
req = orco.ClusterOperationRequest(identity='FAKE_CLUSTER',
operation='dance')
result = self.eng.cluster_op(self.ctx, req.obj_to_primitive())
self.assertEqual({'action': 'ACTION_ID'}, result)
mock_find.assert_called_once_with(self.ctx, 'FAKE_CLUSTER')
mock_cluster.assert_called_once_with(self.ctx, dbcluster=x_db_cluster)
self.assertEqual(0, x_schema.validate.call_count)
mock_nodes.assert_called_once_with(self.ctx, '12345678AB')
mock_action.assert_called_once_with(
self.ctx, '12345678AB', consts.CLUSTER_OPERATION,
name='cluster_dance_12345678',
cause=consts.CAUSE_RPC,
status=am.Action.READY,
inputs={
'operation': 'dance',
'params': {},
'nodes': ['NODE1', 'NODE2']
}
)
mock_start.assert_called_once_with()
@mock.patch.object(am.Action, 'create')
@mock.patch.object(no.Node, 'ids_by_cluster')
@mock.patch.object(cm.Cluster, 'load')
@mock.patch.object(co.Cluster, 'find')
def test_cluster_op_bad_filters(self, mock_find, mock_cluster,
mock_nodes, mock_action):
x_db_cluster = mock.Mock()
mock_find.return_value = x_db_cluster
x_schema = mock.Mock()
x_profile = mock.Mock(OPERATIONS={'dance': x_schema})
x_cluster = mock.Mock(id='12345678AB')
x_cluster.rt = {'profile': x_profile}
mock_cluster.return_value = x_cluster
mock_action.return_value = 'ACTION_ID'
mock_nodes.return_value = ['NODE1', 'NODE2']
filters = {'shape': 'round'}
req = orco.ClusterOperationRequest(identity='FAKE_CLUSTER',
operation='dance',
filters=filters)
ex = self.assertRaises(rpc.ExpectedException,
self.eng.cluster_op,
self.ctx, req.obj_to_primitive())
self.assertEqual(exc.BadRequest, ex.exc_info[0])
self.assertEqual("Filter key 'shape' is unsupported.",
six.text_type(ex.exc_info[1]))
mock_find.assert_called_once_with(self.ctx, 'FAKE_CLUSTER')
mock_cluster.assert_called_once_with(self.ctx, dbcluster=x_db_cluster)
self.assertEqual(0, x_schema.validate.call_count)
self.assertEqual(0, mock_nodes.call_count)
self.assertEqual(0, mock_action.call_count)
@mock.patch.object(am.Action, 'create')
@mock.patch.object(no.Node, 'ids_by_cluster')
@mock.patch.object(cm.Cluster, 'load')
@mock.patch.object(co.Cluster, 'find')
def test_cluster_op_no_nodes_found(self, mock_find, mock_cluster,
mock_nodes, mock_action):
x_db_cluster = mock.Mock()
mock_find.return_value = x_db_cluster
x_schema = mock.Mock()
x_profile = mock.Mock(OPERATIONS={'dance': x_schema})
x_cluster = mock.Mock(id='12345678AB')
x_cluster.rt = {'profile': x_profile}
mock_cluster.return_value = x_cluster
mock_nodes.return_value = []
mock_action.return_value = 'ACTION_ID'
filters = {'role': 'slave'}
req = orco.ClusterOperationRequest(identity='FAKE_CLUSTER',
operation='dance', filters=filters)
ex = self.assertRaises(rpc.ExpectedException,
self.eng.cluster_op,
self.ctx, req.obj_to_primitive())
self.assertEqual(exc.BadRequest, ex.exc_info[0])
self.assertEqual("No node (matching the filter) could be found.",
six.text_type(ex.exc_info[1]))
mock_find.assert_called_once_with(self.ctx, 'FAKE_CLUSTER')
mock_cluster.assert_called_once_with(self.ctx, dbcluster=x_db_cluster)
mock_nodes.assert_called_once_with(self.ctx, '12345678AB',
filters={'role': 'slave'})
self.assertEqual(0, mock_action.call_count)
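# The stacked `@mock.patch.object` decorators used throughout these tests are
# applied bottom-up, so the bottom decorator (`co.Cluster, 'find'`) supplies the
# first positional mock argument (`mock_find`). A minimal stdlib `unittest.mock`
# sketch of that ordering (the `Svc` class and names are hypothetical):

```python
from unittest import mock


class Svc:
    @staticmethod
    def a():
        return "a"

    @staticmethod
    def b():
        return "b"


@mock.patch.object(Svc, "b")
@mock.patch.object(Svc, "a")
def demo(mock_a, mock_b):
    # Decorators apply bottom-up: the innermost patch ('a') becomes
    # the first positional mock argument.
    mock_a.return_value = "mock_a"
    mock_b.return_value = "mock_b"
    return Svc.a(), Svc.b()


print(demo())  # ('mock_a', 'mock_b')
```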
# --- models/s3fd.py (lippman1125/S3FD.PyTorch, Apache-2.0) ---
import os
import torch
import torch.nn as nn
import torch.nn.functional as F
from layers.modules.l2norm import L2Norm
from models.mobilenet_v2_xiaomi import MobileNetV2, InvertedResidual
from models.FairNAS_A import FairNasA
from models.FairNAS_B import FairNasB
import numpy as np
from collections import namedtuple
GraphPath = namedtuple("GraphPath", ['s0', 'name', 's1'])
def add_extras(cfg, i, batch_norm=False):
# Extra layers added to VGG for feature scaling
layers = []
in_channels = i
flag = False
for k, v in enumerate(cfg):
if in_channels != 'S':
if v == 'S':
layers += [nn.Conv2d(in_channels, cfg[k + 1],
kernel_size=(1, 3)[flag], stride=2, padding=1)]
else:
layers += [nn.Conv2d(in_channels, v, kernel_size=(1, 3)[flag])]
flag = not flag
in_channels = v
return layers
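# `add_extras` walks the cfg alternating 1x1 and 3x3 convolutions, with 'S'
# marking a stride-2 3x3 conv whose output channels come from the next entry.
# A torch-free sketch that traces the in/out channels and kernel sizes this
# walk produces (the `extras_channels` helper name is hypothetical):

```python
def extras_channels(cfg, in_ch):
    """Trace (in_channels, out_channels, kernel_size) for add_extras' cfg walk."""
    convs = []
    flag = False
    ch = in_ch
    for k, v in enumerate(cfg):
        if ch != 'S':
            if v == 'S':
                # stride-2 conv; output channels taken from the next cfg entry
                convs.append((ch, cfg[k + 1], (1, 3)[flag]))
            else:
                convs.append((ch, v, (1, 3)[flag]))
            flag = not flag
        ch = v
    return convs

# The S3FD extras cfg on 1024 input channels yields alternating
# 1x1 / 3x3(stride 2) convs: 1024->256, 256->512, 512->128, 128->256.
print(extras_channels([256, 'S', 512, 128, 'S', 256], 1024))
```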
def vgg(cfg, i, batch_norm=False):
layers = []
in_channels = i
for v in cfg:
if v == 'M':
layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
elif v == 'C':
layers += [nn.MaxPool2d(kernel_size=2, stride=2, ceil_mode=True)]
else:
conv2d = nn.Conv2d(in_channels, v, kernel_size=3, padding=1)
if batch_norm:
layers += [conv2d, nn.BatchNorm2d(v), nn.ReLU(inplace=True)]
else:
layers += [conv2d, nn.ReLU(inplace=True)]
in_channels = v
pool5 = nn.MaxPool2d(kernel_size=2, stride=2, padding=0)
conv6 = nn.Conv2d(512, 1024, kernel_size=3, padding=1)
conv7 = nn.Conv2d(1024, 1024, kernel_size=1)
layers += [pool5, conv6,
nn.ReLU(inplace=True), conv7, nn.ReLU(inplace=True)]
return layers
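# Each numeric cfg entry emits a conv+ReLU pair and each 'M'/'C' emits one
# pooling layer, plus the fixed pool5/conv6/conv7 tail. Counting modules this
# way explains the hard-coded indices in `forward` below: ReLU outputs of
# conv3_3, conv4_3 and conv5_3 land at indices 15, 22 and 29, and the backbone
# proper ends at index 30. A small sketch (helper name hypothetical):

```python
def vgg_layer_count(cfg):
    """Count modules emitted by vgg() for a config, plus the fixed tail."""
    n = 0
    for v in cfg:
        n += 1 if v in ('M', 'C') else 2  # pool, or conv2d + ReLU
    return n + 5  # pool5, conv6, ReLU, conv7, ReLU

vgg_cfg = [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'C',
           512, 512, 512, 'M', 512, 512, 512]
print(vgg_layer_count(vgg_cfg))  # 35
```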
class S3FD(nn.Module):
def __init__(self, phase, size, num_classes):
super(S3FD, self).__init__()
self.phase = phase
self.num_classes = num_classes
self.size = size
vgg_cfg = [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'C', 512, 512, 512, 'M',
512, 512, 512]
self.vgg = nn.ModuleList(vgg(vgg_cfg, 3))
extras_cfg = [256, 'S', 512, 128, 'S', 256]
self.extras = nn.ModuleList(add_extras(extras_cfg, 1024))
self.conv3_3_L2Norm = L2Norm(256, 10)
self.conv4_3_L2Norm = L2Norm(512, 8)
self.conv5_3_L2Norm = L2Norm(512, 5)
self.loc, self.conf = self.multibox(self.num_classes)
if self.phase == 'test':
self.softmax = nn.Softmax(dim=-1)
if self.phase == 'train':
for m in self.modules():
if isinstance(m, nn.Conv2d):
if m.bias is not None:
nn.init.xavier_normal_(m.weight.data)
m.bias.data.fill_(0.02)
else:
m.weight.data.normal_(0, 0.01)
elif isinstance(m, nn.BatchNorm2d):
m.weight.data.fill_(1)
m.bias.data.zero_()
def multibox(self, num_classes):
loc_layers = []
conf_layers = []
# Max-out BG label
loc_layers += [nn.Conv2d(256, 1 * 4, kernel_size=3, padding=1)]
conf_layers += [nn.Conv2d(256, 1 * 4, kernel_size=3, padding=1)]
#conf_layers += [nn.Conv2d(256, 1 * num_classes, kernel_size=3, padding=1)]
loc_layers += [nn.Conv2d(512, 1 * 4, kernel_size=3, padding=1)]
conf_layers += [nn.Conv2d(512, 1 * num_classes, kernel_size=3, padding=1)]
loc_layers += [nn.Conv2d(512, 1 * 4, kernel_size=3, padding=1)]
conf_layers += [nn.Conv2d(512, 1 * num_classes, kernel_size=3, padding=1)]
loc_layers += [nn.Conv2d(1024, 1 * 4, kernel_size=3, padding=1)]
conf_layers += [nn.Conv2d(1024, 1 * num_classes, kernel_size=3, padding=1)]
loc_layers += [nn.Conv2d(512, 1 * 4, kernel_size=3, padding=1)]
conf_layers += [nn.Conv2d(512, 1 * num_classes, kernel_size=3, padding=1)]
loc_layers += [nn.Conv2d(256, 1 * 4, kernel_size=3, padding=1)]
conf_layers += [nn.Conv2d(256, 1 * num_classes, kernel_size=3, padding=1)]
return nn.Sequential(*loc_layers), nn.Sequential(*conf_layers)
def load_weights(self, base_file):
other, ext = os.path.splitext(base_file)
if ext in ('.pkl', '.pth'):  # the original `ext == '.pkl' or '.pth'` was always truthy
print('Loading weights into state dict...')
self.load_state_dict(torch.load(base_file,
map_location=lambda storage, loc: storage))
print('Finished!')
else:
print('Sorry only .pth and .pkl files supported.')
def forward(self, x):
sources = list()
loc = list()
conf = list()
detection_dimension = list()
# apply vgg up to conv4_3 relu and conv5_3 relu
for k in range(30):
x = self.vgg[k](x)
if 15 == k:
s = self.conv3_3_L2Norm(x)
sources.append(s)
detection_dimension.append(x.shape[2:])
elif 22 == k:
s = self.conv4_3_L2Norm(x)
sources.append(s)
detection_dimension.append(x.shape[2:])
elif 29 == k:
s = self.conv5_3_L2Norm(x)
sources.append(s)
detection_dimension.append(x.shape[2:])
# apply vgg up to fc7
for k in range(30, len(self.vgg)):
x = self.vgg[k](x)
sources.append(x)
detection_dimension.append(x.shape[2:])
# apply extra layers and cache source layer outputs
for k, v in enumerate(self.extras):
x = F.relu(v(x), inplace=True)
if k % 2 == 1:
sources.append(x)
detection_dimension.append(x.shape[2:])
detection_dimension = torch.Tensor(detection_dimension)
# keep the tensor on the same device as the input so CPU inference also works
detection_dimension = detection_dimension.to(x.device)
for index, (x, l, c) in enumerate(zip(sources, self.loc, self.conf)):
loc.append(l(x).permute(0, 2, 3, 1).contiguous())
if index != 0:
conf.append(c(x).permute(0, 2, 3, 1).contiguous())
else:
# max-out background label on the first (conv3_3) source
conf_t = c(x)
max_conf, _ = conf_t[:, 0:3, :, :].max(1, keepdim=True)
lab_conf = conf_t[:, 3:, :, :]
out_conf = torch.cat((max_conf, lab_conf), dim=1)
conf.append(out_conf.permute(0, 2, 3, 1).contiguous())
loc = torch.cat([o.view(o.size(0), -1) for o in loc], 1)
conf = torch.cat([o.view(o.size(0), -1) for o in conf], 1)
if self.phase == "test":
output = (loc.view(loc.size(0), -1, 4),
self.softmax(conf.view(-1, self.num_classes)),
detection_dimension)
else:
output = (loc.view(loc.size(0), -1, 4),
conf.view(conf.size(0), -1, self.num_classes),
detection_dimension)
return output
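The first detection head above uses S3FD's max-out background label: its four conf channels are split into three background scores, which are collapsed by a channel-wise max, and one face score. A minimal NumPy sketch of just that channel manipulation (dummy data, not the actual network):

```python
import numpy as np

# dummy conf map: batch=1, 4 channels (3 background + 1 face), 2x2 spatial grid
conf = np.arange(16, dtype=np.float32).reshape(1, 4, 2, 2)

# max-out: collapse the 3 background channels into one via element-wise max
max_bg = conf[:, 0:3, :, :].max(axis=1, keepdims=True)
out = np.concatenate((max_bg, conf[:, 3:, :, :]), axis=1)

print(out.shape)  # (1, 2, 2, 2): one background + one face score per cell
```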
class S3FD_MV2(nn.Module):
def __init__(self, phase, size, num_classes):
super(S3FD_MV2, self).__init__()
self.phase = phase
self.num_classes = num_classes
self.size = size
self.base_net = MobileNetV2(width_mult=1.0).features
self.source_layer_indexes = [
GraphPath(7, 'conv', 3),
GraphPath(14, 'conv', 3),
19,
]
# print(self.base_net)
self.extras = nn.ModuleList([
InvertedResidual(1280, 512, stride=2, expand_ratio=0.2),
InvertedResidual(512, 256, stride=2, expand_ratio=0.25),
InvertedResidual(256, 256, stride=2, expand_ratio=0.5)
])
self.conv3_3_L2Norm = L2Norm(192, 10)
self.conv4_3_L2Norm = L2Norm(576, 8)
self.conv5_3_L2Norm = L2Norm(1280, 5)
self.loc, self.conf = self.multibox(self.num_classes)
if self.phase == 'test':
self.softmax = nn.Softmax(dim=-1)
if self.phase == 'train':
for m in self.modules():
if isinstance(m, nn.Conv2d):
if m.bias is not None:
nn.init.xavier_normal_(m.weight.data)
m.bias.data.fill_(0.02)
else:
m.weight.data.normal_(0, 0.01)
elif isinstance(m, nn.BatchNorm2d):
m.weight.data.fill_(1)
m.bias.data.zero_()
def multibox(self, num_classes):
loc_layers = []
conf_layers = []
# Max-out BG label
loc_layers += [nn.Conv2d(192, 1 * 4, kernel_size=3, padding=1)]
conf_layers += [nn.Conv2d(192, 1 * 4, kernel_size=3, padding=1)]
# conf_layers += [nn.Conv2d(256, 1 * num_classes, kernel_size=3, padding=1)]
# conv4_3
loc_layers += [nn.Conv2d(576, 1 * 4, kernel_size=3, padding=1)]
conf_layers += [nn.Conv2d(576, 1 * num_classes, kernel_size=3, padding=1)]
# conv5_3
loc_layers += [nn.Conv2d(1280, 1 * 4, kernel_size=3, padding=1)]
conf_layers += [nn.Conv2d(1280, 1 * num_classes, kernel_size=3, padding=1)]
# fc6
loc_layers += [nn.Conv2d(512, 1 * 4, kernel_size=3, padding=1)]
conf_layers += [nn.Conv2d(512, 1 * num_classes, kernel_size=3, padding=1)]
# fc7
loc_layers += [nn.Conv2d(256, 1 * 4, kernel_size=3, padding=1)]
conf_layers += [nn.Conv2d(256, 1 * num_classes, kernel_size=3, padding=1)]
# conv7_2
loc_layers += [nn.Conv2d(256, 1 * 4, kernel_size=3, padding=1)]
conf_layers += [nn.Conv2d(256, 1 * num_classes, kernel_size=3, padding=1)]
return nn.Sequential(*loc_layers), nn.Sequential(*conf_layers)
def load_weights(self, base_file):
other, ext = os.path.splitext(base_file)
if ext in ('.pkl', '.pth'):
print('Loading weights into state dict...')
self.load_state_dict(torch.load(base_file,
map_location=lambda storage, loc: storage))
print('Finished!')
else:
print('Sorry, only .pth and .pkl files are supported.')
def forward(self, x: torch.Tensor):
confidences = []
locations = []
start_layer_index = 0
header_index = 0
detection_dimension = list()
for end_layer_index in self.source_layer_indexes:
if isinstance(end_layer_index, GraphPath):
path = end_layer_index
end_layer_index = end_layer_index.s0
added_layer = None
elif isinstance(end_layer_index, tuple):
added_layer = end_layer_index[1]
end_layer_index = end_layer_index[0]
path = None
else:
added_layer = None
path = None
for layer in self.base_net[start_layer_index: end_layer_index]:
x = layer(x)
if added_layer:
y = added_layer(x)
else:
y = x
if path:
sub = getattr(self.base_net[end_layer_index], path.name)
for layer in sub[:path.s1]:
x = layer(x)
y = x
for layer in sub[path.s1:]:
x = layer(x)
end_layer_index += 1
start_layer_index = end_layer_index
confidence, location, dims = self.compute_header(header_index, y)
header_index += 1
confidences.append(confidence)
locations.append(location)
detection_dimension.append(dims)
for layer in self.base_net[end_layer_index:]:
x = layer(x)
for layer in self.extras:
x = layer(x)
confidence, location, dims = self.compute_header(header_index, x)
header_index += 1
confidences.append(confidence)
locations.append(location)
detection_dimension.append(dims)
confidences = torch.cat(confidences, 1)
locations = torch.cat(locations, 1)
detection_dimension = torch.Tensor(detection_dimension)
if self.phase == "test":
output = (locations,
self.softmax(confidences),
detection_dimension)
else:
output = (locations,
confidences,
detection_dimension)
return output
def compute_header(self, i, x):
# add extra normalization
if i == 0:
x = self.conv3_3_L2Norm(x)
elif i == 1:
x = self.conv4_3_L2Norm(x)
elif i == 2:
x = self.conv5_3_L2Norm(x)
# maxout
if i == 0:
conf_t = self.conf[i](x)
max_conf, _ = conf_t[:, 0:3, :, :].max(1, keepdim=True)
lab_conf = conf_t[:, 3:, :, :]
confidence = torch.cat((max_conf, lab_conf), dim=1)
else:
confidence = self.conf[i](x)
confidence = confidence.permute(0, 2, 3, 1).contiguous()
confidence = confidence.view(confidence.size(0), -1, self.num_classes)
location = self.loc[i](x)
location = location.permute(0, 2, 3, 1).contiguous()
location = location.view(location.size(0), -1, 4)
return confidence, location, x.shape[2:]
class S3FD_FairNAS_A(nn.Module):
def __init__(self, phase, size, num_classes):
super(S3FD_FairNAS_A, self).__init__()
self.phase = phase
self.num_classes = num_classes
self.size = size
self.base_net = FairNasA().features
self.source_layer_indexes = [
GraphPath(8, 'conv', 3),
GraphPath(16, 'conv', 3),
22,
]
# print(self.base_net)
self.extras = nn.ModuleList([
InvertedResidual(1280, 512, stride=2, expand_ratio=0.2),
InvertedResidual(512, 256, stride=2, expand_ratio=0.25),
InvertedResidual(256, 256, stride=2, expand_ratio=0.5)
])
# self.expand = InvertedResidual(120, 192, stride=1, expand_ratio=1)
self.conv3_3_L2Norm = L2Norm(120, 10)
self.conv4_3_L2Norm = L2Norm(576, 8)
self.conv5_3_L2Norm = L2Norm(1280, 5)
self.loc, self.conf = self.multibox(self.num_classes)
if self.phase == 'test':
self.softmax = nn.Softmax(dim=-1)
if self.phase == 'train':
for m in self.modules():
if isinstance(m, nn.Conv2d):
if m.bias is not None:
nn.init.xavier_normal_(m.weight.data)
m.bias.data.fill_(0.02)
else:
m.weight.data.normal_(0, 0.01)
elif isinstance(m, nn.BatchNorm2d):
m.weight.data.fill_(1)
m.bias.data.zero_()
def multibox(self, num_classes):
loc_layers = []
conf_layers = []
# Max-out BG label
loc_layers += [nn.Conv2d(120, 1 * 4, kernel_size=3, padding=1)]
conf_layers += [nn.Conv2d(120, 1 * 4, kernel_size=3, padding=1)]
# conf_layers += [nn.Conv2d(256, 1 * num_classes, kernel_size=3, padding=1)]
# conv4_3
loc_layers += [nn.Conv2d(576, 1 * 4, kernel_size=3, padding=1)]
conf_layers += [nn.Conv2d(576, 1 * num_classes, kernel_size=3, padding=1)]
# conv5_3
loc_layers += [nn.Conv2d(1280, 1 * 4, kernel_size=3, padding=1)]
conf_layers += [nn.Conv2d(1280, 1 * num_classes, kernel_size=3, padding=1)]
# fc6
loc_layers += [nn.Conv2d(512, 1 * 4, kernel_size=3, padding=1)]
conf_layers += [nn.Conv2d(512, 1 * num_classes, kernel_size=3, padding=1)]
# fc7
loc_layers += [nn.Conv2d(256, 1 * 4, kernel_size=3, padding=1)]
conf_layers += [nn.Conv2d(256, 1 * num_classes, kernel_size=3, padding=1)]
# conv7_2
loc_layers += [nn.Conv2d(256, 1 * 4, kernel_size=3, padding=1)]
conf_layers += [nn.Conv2d(256, 1 * num_classes, kernel_size=3, padding=1)]
return nn.Sequential(*loc_layers), nn.Sequential(*conf_layers)
def load_weights(self, base_file):
other, ext = os.path.splitext(base_file)
if ext in ('.pkl', '.pth'):
print('Loading weights into state dict...')
self.load_state_dict(torch.load(base_file,
map_location=lambda storage, loc: storage))
print('Finished!')
else:
print('Sorry, only .pth and .pkl files are supported.')
def forward(self, x: torch.Tensor):
confidences = []
locations = []
start_layer_index = 0
header_index = 0
detection_dimension = list()
for end_layer_index in self.source_layer_indexes:
if isinstance(end_layer_index, GraphPath):
path = end_layer_index
end_layer_index = end_layer_index.s0
added_layer = None
elif isinstance(end_layer_index, tuple):
added_layer = end_layer_index[1]
end_layer_index = end_layer_index[0]
path = None
else:
added_layer = None
path = None
for layer in self.base_net[start_layer_index: end_layer_index]:
x = layer(x)
if added_layer:
y = added_layer(x)
else:
y = x
if path:
sub = getattr(self.base_net[end_layer_index], path.name)
for layer in sub[:path.s1]:
x = layer(x)
y = x
for layer in sub[path.s1:]:
x = layer(x)
end_layer_index += 1
start_layer_index = end_layer_index
confidence, location, dims = self.compute_header(header_index, y)
header_index += 1
confidences.append(confidence)
locations.append(location)
detection_dimension.append(dims)
for layer in self.base_net[end_layer_index:]:
x = layer(x)
for layer in self.extras:
x = layer(x)
confidence, location, dims = self.compute_header(header_index, x)
header_index += 1
confidences.append(confidence)
locations.append(location)
detection_dimension.append(dims)
confidences = torch.cat(confidences, 1)
locations = torch.cat(locations, 1)
detection_dimension = torch.Tensor(detection_dimension)
if self.phase == "test":
output = (locations,
self.softmax(confidences),
detection_dimension)
else:
output = (locations,
confidences,
detection_dimension)
return output
def compute_header(self, i, x):
# add extra normalization
if i == 0:
# x = self.expand(x)
x = self.conv3_3_L2Norm(x)
elif i == 1:
x = self.conv4_3_L2Norm(x)
elif i == 2:
x = self.conv5_3_L2Norm(x)
# maxout
if i == 0:
conf_t = self.conf[i](x)
max_conf, _ = conf_t[:, 0:3, :, :].max(1, keepdim=True)
lab_conf = conf_t[:, 3:, :, :]
confidence = torch.cat((max_conf, lab_conf), dim=1)
else:
confidence = self.conf[i](x)
confidence = confidence.permute(0, 2, 3, 1).contiguous()
confidence = confidence.view(confidence.size(0), -1, self.num_classes)
location = self.loc[i](x)
location = location.permute(0, 2, 3, 1).contiguous()
location = location.view(location.size(0), -1, 4)
return confidence, location, x.shape[2:]
class S3FD_FairNAS_B(nn.Module):
def __init__(self, phase, size, num_classes):
super(S3FD_FairNAS_B, self).__init__()
self.phase = phase
self.num_classes = num_classes
self.size = size
self.base_net = FairNasB().features
self.source_layer_indexes = [
GraphPath(8, 'conv', 3),
GraphPath(16, 'conv', 3),
22,
]
# print(self.base_net)
self.extras = nn.ModuleList([
InvertedResidual(1280, 512, stride=2, expand_ratio=0.2),
InvertedResidual(512, 256, stride=2, expand_ratio=0.25),
InvertedResidual(256, 256, stride=2, expand_ratio=0.5)
])
# self.expand = InvertedResidual(120, 192, stride=1, expand_ratio=1)
self.conv3_3_L2Norm = L2Norm(120, 10)
self.conv4_3_L2Norm = L2Norm(576, 8)
self.conv5_3_L2Norm = L2Norm(1280, 5)
self.loc, self.conf = self.multibox(self.num_classes)
if self.phase == 'test':
self.softmax = nn.Softmax(dim=-1)
if self.phase == 'train':
for m in self.modules():
if isinstance(m, nn.Conv2d):
if m.bias is not None:
nn.init.xavier_normal_(m.weight.data)
m.bias.data.fill_(0.02)
else:
m.weight.data.normal_(0, 0.01)
elif isinstance(m, nn.BatchNorm2d):
m.weight.data.fill_(1)
m.bias.data.zero_()
def multibox(self, num_classes):
loc_layers = []
conf_layers = []
# Max-out BG label
loc_layers += [nn.Conv2d(120, 1 * 4, kernel_size=3, padding=1)]
conf_layers += [nn.Conv2d(120, 1 * 4, kernel_size=3, padding=1)]
# conf_layers += [nn.Conv2d(256, 1 * num_classes, kernel_size=3, padding=1)]
# conv4_3
loc_layers += [nn.Conv2d(576, 1 * 4, kernel_size=3, padding=1)]
conf_layers += [nn.Conv2d(576, 1 * num_classes, kernel_size=3, padding=1)]
# conv5_3
loc_layers += [nn.Conv2d(1280, 1 * 4, kernel_size=3, padding=1)]
conf_layers += [nn.Conv2d(1280, 1 * num_classes, kernel_size=3, padding=1)]
# fc6
loc_layers += [nn.Conv2d(512, 1 * 4, kernel_size=3, padding=1)]
conf_layers += [nn.Conv2d(512, 1 * num_classes, kernel_size=3, padding=1)]
# fc7
loc_layers += [nn.Conv2d(256, 1 * 4, kernel_size=3, padding=1)]
conf_layers += [nn.Conv2d(256, 1 * num_classes, kernel_size=3, padding=1)]
# conv7_2
loc_layers += [nn.Conv2d(256, 1 * 4, kernel_size=3, padding=1)]
conf_layers += [nn.Conv2d(256, 1 * num_classes, kernel_size=3, padding=1)]
return nn.Sequential(*loc_layers), nn.Sequential(*conf_layers)
def load_weights(self, base_file):
other, ext = os.path.splitext(base_file)
if ext in ('.pkl', '.pth'):
print('Loading weights into state dict...')
self.load_state_dict(torch.load(base_file,
map_location=lambda storage, loc: storage))
print('Finished!')
else:
print('Sorry, only .pth and .pkl files are supported.')
def forward(self, x: torch.Tensor):
confidences = []
locations = []
start_layer_index = 0
header_index = 0
detection_dimension = list()
for end_layer_index in self.source_layer_indexes:
if isinstance(end_layer_index, GraphPath):
path = end_layer_index
end_layer_index = end_layer_index.s0
added_layer = None
elif isinstance(end_layer_index, tuple):
added_layer = end_layer_index[1]
end_layer_index = end_layer_index[0]
path = None
else:
added_layer = None
path = None
for layer in self.base_net[start_layer_index: end_layer_index]:
x = layer(x)
if added_layer:
y = added_layer(x)
else:
y = x
if path:
sub = getattr(self.base_net[end_layer_index], path.name)
for layer in sub[:path.s1]:
x = layer(x)
y = x
for layer in sub[path.s1:]:
x = layer(x)
end_layer_index += 1
start_layer_index = end_layer_index
confidence, location, dims = self.compute_header(header_index, y)
header_index += 1
confidences.append(confidence)
locations.append(location)
detection_dimension.append(dims)
for layer in self.base_net[end_layer_index:]:
x = layer(x)
for layer in self.extras:
x = layer(x)
confidence, location, dims = self.compute_header(header_index, x)
header_index += 1
confidences.append(confidence)
locations.append(location)
detection_dimension.append(dims)
confidences = torch.cat(confidences, 1)
locations = torch.cat(locations, 1)
detection_dimension = torch.Tensor(detection_dimension)
if self.phase == "test":
output = (locations,
self.softmax(confidences),
detection_dimension)
else:
output = (locations,
confidences,
detection_dimension)
return output
def compute_header(self, i, x):
# add extra normalization
if i == 0:
# x = self.expand(x)
x = self.conv3_3_L2Norm(x)
elif i == 1:
x = self.conv4_3_L2Norm(x)
elif i == 2:
x = self.conv5_3_L2Norm(x)
# maxout
if i == 0:
conf_t = self.conf[i](x)
max_conf, _ = conf_t[:, 0:3, :, :].max(1, keepdim=True)
lab_conf = conf_t[:, 3:, :, :]
confidence = torch.cat((max_conf, lab_conf), dim=1)
else:
confidence = self.conf[i](x)
confidence = confidence.permute(0, 2, 3, 1).contiguous()
confidence = confidence.view(confidence.size(0), -1, self.num_classes)
location = self.loc[i](x)
location = location.permute(0, 2, 3, 1).contiguous()
location = location.view(location.size(0), -1, 4)
return confidence, location, x.shape[2:]
if __name__ == '__main__':
# call the modules directly (rather than .forward) so hooks run
image = np.zeros((1, 3, 640, 640), dtype=np.float32)
net = S3FD_MV2('train', 640, 2)
print(net(torch.from_numpy(image)))
net = S3FD_FairNAS_A('train', 640, 2)
print(net(torch.from_numpy(image)))
net = S3FD_FairNAS_B('train', 640, 2)
print(net(torch.from_numpy(image)))
| 37.84466 | 87 | 0.543539 | 3,446 | 27,286 | 4.125363 | 0.065003 | 0.034328 | 0.05318 | 0.068374 | 0.904896 | 0.886255 | 0.868739 | 0.861916 | 0.851927 | 0.845597 | 0 | 0.059344 | 0.336729 | 27,286 | 720 | 88 | 37.897222 | 0.726158 | 0.035439 | 0 | 0.83391 | 0 | 0 | 0.018945 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.036332 | false | 0 | 0.017301 | 0 | 0.083045 | 0.025952 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
82e93cb649027ed9e6fb292076dcf01f51c02879 | 174,559 | py | Python | pystlouisfed/client.py | TomasKoutek/pystlouisfed | 61ca2ef9148daa03ee85019d22d3a3f298588dd1 | [
"MIT"
] | 8 | 2022-01-27T18:47:05.000Z | 2022-03-15T22:46:04.000Z | pystlouisfed/client.py | TomasKoutek/pystlouisfed | 61ca2ef9148daa03ee85019d22d3a3f298588dd1 | [
"MIT"
] | null | null | null | pystlouisfed/client.py | TomasKoutek/pystlouisfed | 61ca2ef9148daa03ee85019d22d3a3f298588dd1 | [
"MIT"
] | 2 | 2022-01-31T20:41:38.000Z | 2022-03-30T04:38:08.000Z | import logging
import xml.etree.ElementTree as ET
from contextlib import nullcontext
from datetime import datetime, date, timedelta
from enum import Enum
from functools import reduce
from typing import List, Generator
import numpy as np
import pandas as pd
import requests
import sickle
from ratelimiter import RateLimiter
import pystlouisfed.enums as enums
import pystlouisfed.models as models
logger = logging.getLogger(__name__)
class URLFactory:
SEPARATOR = ';'
def __init__(self, key: str, base: str = 'https://api.stlouisfed.org'):
self._base = base
self._params = {
'api_key': key,
'file_type': 'json'
}
def create(self, endpoint: str, params: dict) -> str:
# remove None values
filtered = {k: v for k, v in params.items() if v is not None}
for k, v in filtered.items():
if isinstance(v, Enum):
filtered[k] = v.value
if isinstance(v, list):
filtered[k] = self.SEPARATOR.join(v)
if isinstance(v, bool):
filtered[k] = str(v).lower()
# YYYYMMDDHhmm formatted string
# Example: 2018-03-02 2:20 would be 201803020220
if isinstance(v, datetime):
filtered[k] = v.strftime('%Y%m%d%H%M')
# replace whitespaces with +
if isinstance(filtered[k], str):
filtered[k] = filtered[k].replace(' ', '+')
payload_str = "&".join("%s=%s" % (k, v) for k, v in {**self._params, **filtered}.items())
url = '{}{}?{}'.format(self._base, endpoint, payload_str)
logger.debug('URL: {}'.format(url))
return url
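create() normalizes parameter values before building the query string: None entries are dropped, Enum members become their value, lists are joined with the ';' separator, booleans are lowercased, datetimes are formatted as YYYYMMDDHHmm, and spaces become '+'. A self-contained sketch of those rules (a standalone function, not the class itself):

```python
from datetime import datetime
from enum import Enum

SEPARATOR = ';'

def normalize(params: dict) -> dict:
    out = {k: v for k, v in params.items() if v is not None}  # drop None values
    for k, v in out.items():
        if isinstance(v, Enum):
            out[k] = v.value
        if isinstance(v, list):
            out[k] = SEPARATOR.join(v)
        if isinstance(v, bool):
            out[k] = str(v).lower()
        if isinstance(v, datetime):
            out[k] = v.strftime('%Y%m%d%H%M')  # 2018-03-02 2:20 -> 201803020220
        if isinstance(out[k], str):
            out[k] = out[k].replace(' ', '+')  # URL-friendly spaces
    return out

print(normalize({'tag_names': ['usa', 'gnp'], 'limit': None,
                 'start': datetime(2018, 3, 2, 2, 20)}))
```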
class Client:
rate_limit: int
_rate_limit_remaining: int
_rate_limit_last_check: datetime
_rate_limiter: RateLimiter
_rate_limiter_enabled: bool
_headers: dict = {
"Accept": "application/json",
"Accept-Encoding": 'gzip',
"Cache-Control": "no-cache",
"User-Agent": "Python FRED Client"
}
_url: URLFactory
def __init__(self, key: str, ratelimiter_enabled: bool, ratelimiter_max_calls: int, ratelimiter_period: int, request_params: dict = None):
self._url = URLFactory(key)
self.rate_limit = 120
self._rate_limit_remaining = None
self._rate_limit_last_check = None
if ratelimiter_enabled:
self._rate_limiter = RateLimiter(max_calls=ratelimiter_max_calls, period=ratelimiter_period)
else:
self._rate_limiter = nullcontext()
if request_params is None:
request_params = dict()
if 'headers' not in request_params:
request_params['headers'] = self._headers
self.request_params = request_params
@property
def rate_limit_remaining(self) -> int:
return self.rate_limit if self._rate_limit_last_check is None or self._rate_limit_last_check <= (datetime.now() - timedelta(seconds=60)) else self._rate_limit_remaining
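The property above trusts the cached `x-rate-limit-remaining` header value for at most 60 seconds; once it is stale, it falls back to the full per-minute quota. A minimal sketch of that freshness check (function and argument names are illustrative):

```python
from datetime import datetime, timedelta

def remaining(limit: int, cached: int, last_check):
    # the cached header value is only trusted for 60 seconds;
    # after that, assume a fresh rate-limit window
    if last_check is None or last_check <= datetime.now() - timedelta(seconds=60):
        return limit
    return cached

print(remaining(120, 37, datetime.now()))                         # fresh -> 37
print(remaining(120, 37, datetime.now() - timedelta(minutes=5)))  # stale -> 120
```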
def get(self, endpoint: str, list_key: str, limit: int = None, **kwargs) -> list:
offset = 0 if limit is not None else None
stop = False
result = list()
request_number = 1
while not stop:
url = self._url.create(
endpoint, {
**kwargs,
**{
'limit': limit,
'offset': offset
}
}
)
with self._rate_limiter:
res = requests.get(url, **self.request_params)
# GeoFRED return error codes and messages in XML
if res.headers.get('content-type').startswith('text/xml') and res.status_code != 200:
element = ET.fromstring(res.content.decode())
raise Exception('Received error code: "{}" and message: "{}" for URL {}'.format(element.get('code'), element.get('message'), url))
elif not res.headers.get('content-type').startswith('application/json'):
raise Exception('Unexpected content-type "{}" for URL {}'.format(res.headers.get('content-type'), url))
data = res.json()
if res.status_code in [400, 403, 420, 429, 500]:
raise Exception('Received error code: "{}" and message: "{}" for URL {}'.format(data['error_code'], " ".join(data['error_message'].split()), url))
elif res.status_code != 200:
raise Exception('Received status code: "{}" for URL {}'.format(res.status_code, url))
self.rate_limit = int(res.headers['x-rate-limit-limit'])
self._rate_limit_remaining = int(res.headers['x-rate-limit-remaining'])
self._rate_limit_last_check = datetime.now()
logger.debug("Api rate limit: {} out of {} requests per minute remaining".format(self.rate_limit_remaining, self.rate_limit))
list_data = self._deep_get(data, list_key)
if 'count' not in data:
# GeoFRED.series_data and GeoFRED.regional_data return dict of years
return list_data if isinstance(list_data, list) else [list_data]
else:
number_of_requests = int(data['count'] / limit) + 1 if limit is not None else 1
logger.debug("Number of records: {}, Request {} of {}".format(data['count'], request_number, number_of_requests))
if limit is None or data['count'] < limit or len(list_data) < limit:
stop = True
if len(list_data) == limit:
offset += limit
result += list_data
request_number += 1
return result
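get() pages through results with limit/offset: it keeps requesting until a page comes back with fewer than `limit` records, bumping the offset by `limit` each round. A toy sketch of that loop against an in-memory stand-in for the API (names are illustrative):

```python
def fetch_page(records, limit, offset):
    # stand-in for one HTTP request returning {'count': ..., 'items': [...]}
    return {'count': len(records), 'items': records[offset:offset + limit]}

def fetch_all(records, limit):
    offset, result = 0, []
    while True:
        data = fetch_page(records, limit, offset)
        result += data['items']
        # stop once a short (or final) page arrives
        if data['count'] < limit or len(data['items']) < limit:
            break
        offset += limit
    return result

print(fetch_all(list(range(25)), limit=10))  # all 25 records over three pages
```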
def _deep_get(self, dictionary: dict, keys: str, default=None):
return reduce(lambda d, key: d.get(key, default) if isinstance(d, dict) else default, keys.split("."), dictionary)
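_deep_get walks a dot-separated key path through nested dicts with functools.reduce, returning the default as soon as any hop is missing or the current value is not a dict. A standalone sketch of the same idea:

```python
from functools import reduce

def deep_get(dictionary: dict, keys: str, default=None):
    # fold the key path over the dict, bailing out to `default` on any miss
    return reduce(lambda d, key: d.get(key, default) if isinstance(d, dict) else default,
                  keys.split('.'), dictionary)

data = {'meta': {'dates': {'2021-01-01': [1, 2, 3]}}}
print(deep_get(data, 'meta.dates'))        # {'2021-01-01': [1, 2, 3]}
print(deep_get(data, 'meta.missing.key'))  # None
```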
class FRED:
"""
The FRED API is a web service that allows developers to write programs and build applications that retrieve economic data from the FRED and ALFRED websites hosted by the Economic Research Division of the Federal Reserve Bank of St. Louis.
Requests can be customized according to data source, release, category, series, and other preferences.
https://fred.stlouisfed.org
https://fred.stlouisfed.org/docs/api/fred/
"""
EMPTY_VALUE = '.'
"""
FRED/ALFRED returns empty observation values as a dot.
"""
def __init__(self, api_key: str, ratelimiter_enabled: bool = True, ratelimiter_max_calls: int = 2, ratelimiter_period: int = 1, request_params: dict = None):
"""
Parameters
----------
api_key: str
32 character alpha-numeric lowercase string
ratelimiter_enabled: bool
ratelimiter_max_calls: int
ratelimiter_period: int
request_params: dict
HTTP GET method parameters, see https://docs.python-requests.org/en/latest/api/#requests.request
"""
if api_key is None or len(api_key) != 32:
raise Exception('Variable api_key must be a 32 character alphanumeric string.')
self._client = Client(
key=api_key.lower(),
ratelimiter_enabled=ratelimiter_enabled,
ratelimiter_max_calls=ratelimiter_max_calls,
ratelimiter_period=ratelimiter_period,
request_params=request_params
)
@property
def rate_limit(self) -> int:
return self._client.rate_limit
@property
def rate_limit_remaining(self) -> int:
return self._client.rate_limit_remaining
"""
Category
https://fred.stlouisfed.org/categories
"""
def category(self, category_id: int = 0) -> models.Category:
"""
## Parameters
`category_id`
The id for a category.
## Description
https://fred.stlouisfed.org/docs/api/fred/category.html
Get a category.
## API Request (HTTPS GET)
https://api.stlouisfed.org/fred/category?category_id=125&api_key=abcdefghijklmnopqrstuvwxyz123456&file_type=json
## API Response
```json
{
"categories": [
{
"id": 125,
"name": "Trade Balance",
"parent_id": 13
}
]
}
```
## Returns
`pystlouisfed.models.Category`
## Example
```python
>>> fred = FRED(api_key='abcdefghijklmnopqrstuvwxyz123456')
>>> fred.category(category_id=125)
Category(id=125, name='Trade Balance', parent_id=13)
```
"""
if int(category_id) < 0:
raise ValueError('Variable category_id is not 0 or a positive integer.')
data = self._client.get(
'/fred/category',
'categories',
category_id=category_id
)
return models.Category(**data[0])
def category_children(self, category_id: int = 0, realtime_start: date = None, realtime_end: date = None) -> pd.DataFrame:
"""
## Parameters
`category_id`
The id for a category.
`realtime_start`
The start of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
`realtime_end`
The end of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
## Description
https://fred.stlouisfed.org/docs/api/fred/category_children.html
Get the child categories for a specified parent category.
## API Request (HTTPS GET)
https://api.stlouisfed.org/fred/category/children?category_id=13&api_key=abcdefghijklmnopqrstuvwxyz123456&file_type=json
## API Response
```json
{
"categories": [
{
"id": 16,
"name": "Exports",
"parent_id": 13
},
...
]
}
```
## Returns
`pandas.DataFrame`
## Example
```python
>>> fred = FRED(api_key='abcdefghijklmnopqrstuvwxyz123456')
>>> fred.category_children(category_id=13).head()
name parent_id
id
16 Exports 13
17 Imports 13
3000 Income Payments & Receipts 13
33705 International Investment Position 13
125 Trade Balance 13
```
"""
if realtime_start is not None and realtime_start < date(1776, 7, 4):
raise ValueError('Variable realtime_start ("{}") is before min date 1776-07-04.'.format(realtime_start))
if realtime_start is not None and realtime_end is not None and realtime_start > realtime_end:
raise ValueError('The date set by variable realtime_start ("{}") can not be after the date set by variable realtime_end ("{}").'.format(realtime_start, realtime_end))
data = self._client.get(
'/fred/category/children',
'categories',
category_id=category_id,
realtime_start=realtime_start,
realtime_end=realtime_end
)
return pd.DataFrame(data).astype(dtype={
'name': 'string'
}).set_index('id')
def category_related(self, category_id: int = 0, realtime_start: date = None, realtime_end: date = None) -> pd.DataFrame:
"""
## Parameters
`category_id`
The id for a category.
`realtime_start`
The start of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
`realtime_end`
The end of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
## Description
https://fred.stlouisfed.org/docs/api/fred/category_related.html
Get the related categories for a category.
A related category is a one-way relation between 2 categories that is not part of a parent-child category hierarchy.
Most categories do not have related categories.
## API Request (HTTPS GET)
https://api.stlouisfed.org/fred/category/related?category_id=32073&api_key=abcdefghijklmnopqrstuvwxyz123456&file_type=json
## API Response
```json
{
"categories": [
{
"id": 149,
"name": "Arkansas",
"parent_id": 27281
},
...
]
}
```
## Returns
`pandas.DataFrame`
## Example
```python
>>> fred = FRED(api_key='abcdefghijklmnopqrstuvwxyz123456')
>>> fred.category_related(category_id=32073).head()
name parent_id
id
149 Arkansas 27281
150 Illinois 27281
151 Indiana 27281
152 Kentucky 27281
153 Mississippi 27281
```
"""
if realtime_start is not None and realtime_start < date(1776, 7, 4):
raise ValueError('Variable realtime_start ("{}") is before min date 1776-07-04.'.format(realtime_start))
if realtime_start is not None and realtime_end is not None and realtime_start > realtime_end:
raise ValueError('The date set by variable realtime_start ("{}") can not be after the date set by variable realtime_end ("{}").'.format(realtime_start, realtime_end))
data = self._client.get(
'/fred/category/related',
'categories',
category_id=category_id,
realtime_start=realtime_start,
realtime_end=realtime_end
)
return pd.DataFrame(data).astype(dtype={
'name': 'string'
}).set_index('id')
def category_series(
self,
category_id: int = 0,
realtime_start: date = None,
realtime_end: date = None,
order_by: enums.OrderBy = enums.OrderBy.series_id,
sort_order: enums.SortOrder = enums.SortOrder.asc,
filter_variable: enums.FilterVariable = None,
filter_value: enums.FilterValue = None,
tag_names: List[str] = None,
exclude_tag_names: List[str] = None
) -> pd.DataFrame:
"""
## Parameters
`category_id`
The id for a category.
`realtime_start`
The start of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
`realtime_end`
The end of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
`order_by`
Order results by values of the specified attribute.
`sort_order`
Sort results is ascending or descending order for attribute values specified by order_by.
`filter_variable`
The attribute to filter results by.
`filter_value`
The value of the filter_variable attribute to filter results by.
`tag_names`
Tuple of tag names that series match all of.
`exclude_tag_names`
Tuple of tag names that series match none of.
## Description
https://fred.stlouisfed.org/docs/api/fred/category_series.html
Get the series in a category.
## API Request (HTTPS GET)
https://api.stlouisfed.org/fred/category/series?category_id=125&api_key=abcdefghijklmnopqrstuvwxyz123456&file_type=json
## API Response
```json
{
"realtime_start": "2017-08-01",
"realtime_end": "2017-08-01",
"order_by": "series_id",
"sort_order": "asc",
"count": 45,
"offset": 0,
"limit": 1000,
"seriess": [
{
"id": "BOPBCA",
"realtime_start": "2017-08-01",
"realtime_end": "2017-08-01",
"title": "Balance on Current Account (DISCONTINUED)",
"observation_start": "1960-01-01",
"observation_end": "2014-01-01",
"frequency": "Quarterly",
"frequency_short": "Q",
"units": "Billions of Dollars",
"units_short": "Bil. of $",
"seasonal_adjustment": "Seasonally Adjusted",
"seasonal_adjustment_short": "SA",
"last_updated": "2014-06-18 08:41:28-05",
"popularity": 32,
"group_popularity": 34,
"notes": "This series has been discontinued as a result of the comprehensive restructuring of the international economic accounts (http://www.bea.gov/international/modern.htm).For a crosswalk of the old and new series in FRED see: http://research.stlouisfed.org/CompRevisionReleaseID49.xlsx."
},
...
]
}
```
## Returns
`pandas.DataFrame`
## Example
```python
>>> fred = FRED(api_key='abcdefghijklmnopqrstuvwxyz123456')
>>> fred.category_series(category_id=125).head()
realtime_start realtime_end title observation_start observation_end frequency frequency_short units units_short seasonal_adjustment seasonal_adjustment_short last_updated popularity group_popularity notes
id
AITGCBN 2022-02-05 2022-02-05 Advance U.S. International Trade in Goods: Bal... 2021-12-01 2021-12-01 Monthly M Millions of Dollars Mil. of $ Not Seasonally Adjusted NSA 2022-01-26 13:31:05+00:00 3 26 This advance estimate represents the current m...
AITGCBS 2022-02-05 2022-02-05 Advance U.S. International Trade in Goods: Bal... 2021-12-01 2021-12-01 Monthly M Millions of Dollars Mil. of $ Seasonally Adjusted SA 2022-01-26 13:31:02+00:00 26 26 This advance estimate represents the current m...
BOPBCA 2022-02-05 2022-02-05 Balance on Current Account (DISCONTINUED) 1960-01-01 2014-01-01 Quarterly Q Billions of Dollars Bil. of $ Seasonally Adjusted SA 2014-06-18 13:41:28+00:00 10 11 This series has been discontinued as a result ...
BOPBCAA 2022-02-05 2022-02-05 Balance on Current Account (DISCONTINUED) 1960-01-01 2013-01-01 Annual A Billions of Dollars Bil. of $ Not Seasonally Adjusted NSA 2014-06-18 13:41:28+00:00 2 11 This series has been discontinued as a result ...
BOPBCAN 2022-02-05 2022-02-05 Balance on Current Account (DISCONTINUED) 1960-01-01 2014-01-01 Quarterly Q Billions of Dollars Bil. of $ Not Seasonally Adjusted NSA 2014-06-18 13:41:28+00:00 1 11 This series has been discontinued as a result ...
```
"""
allowed_orders = [
enums.OrderBy.series_id,
enums.OrderBy.title,
enums.OrderBy.units,
enums.OrderBy.frequency,
enums.OrderBy.seasonal_adjustment,
enums.OrderBy.realtime_start,
enums.OrderBy.realtime_end,
enums.OrderBy.last_updated,
enums.OrderBy.observation_start,
enums.OrderBy.observation_end,
enums.OrderBy.popularity,
enums.OrderBy.group_popularity
]
if order_by not in allowed_orders:
raise ValueError('Variable order_by ({}) is not one of the values: {}'.format(order_by, ', '.join(map(str, allowed_orders))))
if filter_variable is not None and filter_variable not in enums.FilterVariable:
raise ValueError('Variable filter_variable ({}) is not one of the values: {}'.format(filter_variable, ', '.join(map(str, enums.FilterVariable))))
if exclude_tag_names is not None and tag_names is None:
raise ValueError('Parameter exclude_tag_names requires that parameter tag_names also be set to limit the number of matching series.')
if realtime_start is not None and realtime_start < date(1776, 7, 4):
raise ValueError('Variable realtime_start ("{}") is before min date 1776-07-04.'.format(realtime_start))
if realtime_start is not None and realtime_end is not None and realtime_start > realtime_end:
raise ValueError('The date set by variable realtime_start ("{}") can not be after the date set by variable realtime_end ("{}").'.format(realtime_start, realtime_end))
df = pd.DataFrame(
self._client.get(
'/fred/category/series',
'seriess',
limit=1000,
category_id=category_id,
realtime_start=realtime_start,
realtime_end=realtime_end,
order_by=order_by,
sort_order=sort_order,
filter_variable=filter_variable,
filter_value=filter_value,
tag_names=tag_names,
exclude_tag_names=exclude_tag_names
)
)
date_columns = [
'realtime_start', 'realtime_end',
'observation_start', 'observation_end',
]
if not df.empty:
df[date_columns] = df[date_columns].apply(pd.to_datetime, format='%Y-%m-%d')
# FRED returns offsets like '-05'; append '00' so '%z' can parse '-0500'
df.last_updated = pd.to_datetime(df.last_updated + '00', utc=True, format='%Y-%m-%d %H:%M:%S%z')
df = df.astype(dtype={
'id': 'string',
'title': 'string',
'notes': 'string',
'seasonal_adjustment_short': 'category',
'seasonal_adjustment': 'category',
'units_short': 'category',
'units': 'category',
'frequency_short': 'category',
'frequency': 'category'
}).set_index('id')
return df
def category_tags(
self,
category_id: int = 0,
realtime_start: date = None,
realtime_end: date = None,
tag_names: List[str] = None,
tag_group_id: enums.TagGroupID = None,
search_text: str = None,
order_by: enums.OrderBy = enums.OrderBy.series_count,
sort_order: enums.SortOrder = enums.SortOrder.asc
) -> pd.DataFrame:
"""
## Parameters
`category_id`
The id for a category.
`realtime_start`
The start of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
`realtime_end`
The end of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
`tag_names`
Tuple of tag names that series match all of.
`tag_group_id`
A tag group id to filter tags by type.
`search_text`
The words to find matching tags with.
`order_by`
Order results by values of the specified attribute.
`sort_order`
Sort results in ascending or descending order for attribute values specified by order_by.
## Description
https://fred.stlouisfed.org/docs/api/fred/category_tags.html
Get the FRED tags for a category.
Optionally, filter results by tag name, tag group, or search. Series are assigned tags and categories.
Indirectly through series, it is possible to get the tags for a category. No tags exist for a category that does not have series.
See the related request fred/category/related_tags.
## API Request (HTTPS GET)
https://api.stlouisfed.org/fred/category/tags?category_id=125&api_key=abcdefghijklmnopqrstuvwxyz123456&file_type=json
## API Response
```json
{
"realtime_start": "2013-08-13",
"realtime_end": "2013-08-13",
"order_by": "series_count",
"sort_order": "desc",
"count": 21,
"offset": 0,
"limit": 1000,
"tags": [
{
"name": "bea",
"group_id": "src",
"notes": "U.S. Department of Commerce: Bureau of Economic Analysis",
"created": "2012-02-27 10:18:19-06",
"popularity": 87,
"series_count": 24
},
...
]
}
```
## Returns
`pandas.DataFrame`
## Example
```python
>>> fred = FRED(api_key='abcdefghijklmnopqrstuvwxyz123456')
>>> fred.category_tags(category_id=125).head()
group_id notes created popularity series_count
name
headline figure gen 2013-11-19 19:55:53+00:00 53 2
primary gen 2012-02-27 16:18:19+00:00 42 2
transfers gen 2012-02-27 16:18:19+00:00 31 2
census src Census 2012-02-27 16:18:19+00:00 80 4
investment gen 2012-02-27 16:18:19+00:00 56 4
```
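The `created` timestamps come back with a two-digit UTC offset (e.g. `2012-02-27 10:18:19-06`), which the `%z` directive cannot parse directly; the wrapper appends `'00'` to widen the offset to `-0600` before converting to UTC. A minimal sketch of that normalization, assuming only pandas:

```python
import pandas as pd

# FRED-style timestamp with a two-digit offset, as returned by the API
raw = pd.Series(['2012-02-27 10:18:19-06'])

# Append '00' to turn '-06' into '-0600' so '%z' can parse it, then convert to UTC
created = pd.to_datetime(raw + '00', utc=True, format='%Y-%m-%d %H:%M:%S%z')
```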
"""
allowed_orders = [
enums.OrderBy.series_count,
enums.OrderBy.popularity,
enums.OrderBy.created,
enums.OrderBy.name,
enums.OrderBy.group_id
]
allowed_tag_group_ids = [
enums.TagGroupID.frequency,
enums.TagGroupID.general_or_concept,
enums.TagGroupID.geography,
enums.TagGroupID.geography_type,
enums.TagGroupID.release,
enums.TagGroupID.seasonal_adjustment,
enums.TagGroupID.source
]
if order_by not in allowed_orders:
raise ValueError('Variable order_by ({}) is not one of the values: {}'.format(order_by, ', '.join(map(str, allowed_orders))))
if tag_group_id is not None and tag_group_id not in allowed_tag_group_ids:
raise ValueError('Variable tag_group_id ({}) is not one of the values: {}'.format(tag_group_id, ', '.join(map(str, allowed_tag_group_ids))))
if realtime_start is not None and realtime_start < date(1776, 7, 4):
raise ValueError('Variable realtime_start ("{}") is before min date 1776-07-04.'.format(realtime_start))
if realtime_start is not None and realtime_end is not None and realtime_start > realtime_end:
raise ValueError('The date set by variable realtime_start ("{}") can not be after the date set by variable realtime_end ("{}").'.format(realtime_start, realtime_end))
df = pd.DataFrame(
self._client.get(
'/fred/category/tags',
'tags',
limit=1000,
category_id=category_id,
realtime_start=realtime_start,
realtime_end=realtime_end,
tag_names=tag_names,
tag_group_id=tag_group_id,
search_text=search_text,
order_by=order_by,
sort_order=sort_order
)
)
if not df.empty:
df.created = pd.to_datetime(df.created + '00', utc=True, format='%Y-%m-%d %H:%M:%S%z')
df = df.astype(dtype={
'name': 'string',
'notes': 'string',
'group_id': 'category'
}).set_index('name')
return df
def category_related_tags(
self,
category_id: int = 0,
realtime_start: date = None,
realtime_end: date = None,
tag_names: List[str] = None,
exclude_tag_names: List[str] = None,
tag_group_id: enums.TagGroupID = None,
search_text: str = None,
order_by: enums.OrderBy = enums.OrderBy.series_count,
sort_order: enums.SortOrder = enums.SortOrder.asc
) -> pd.DataFrame:
"""
## Parameters
`category_id`
The id for a category.
`realtime_start`
The start of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
`realtime_end`
The end of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
`tag_names`
Tuple of tag names that series match all of.
`exclude_tag_names`
Tuple of tag names that series match none of.
`tag_group_id`
A tag group id to filter tags by type.
`search_text`
The words to find matching tags with.
`order_by`
Order results by values of the specified attribute.
`sort_order`
Sort results in ascending or descending order for attribute values specified by order_by.
## Description
https://fred.stlouisfed.org/docs/api/fred/category_related_tags.html
Get the related FRED tags for one or more FRED tags within a category.
Optionally, filter results by tag group or search.
FRED tags are attributes assigned to series.
For this request, related FRED tags are the tags assigned to series that match all tags in the tag_names parameter, no tags in the exclude_tag_names parameter, and the category set by the category_id parameter.
See the related request fred/category/tags.
Series are assigned tags and categories. Indirectly through series, it is possible to get the tags for a category.
No tags exist for a category that does not have series.
## API Request (HTTPS GET)
https://api.stlouisfed.org/fred/category/related_tags?category_id=125&tag_names=services;quarterly&api_key=abcdefghijklmnopqrstuvwxyz123456&file_type=json
## API Response
```json
{
"realtime_start": "2013-08-13",
"realtime_end": "2013-08-13",
"order_by": "series_count",
"sort_order": "desc",
"count": 7,
"offset": 0,
"limit": 1000,
"tags": [
{
"name": "balance",
"group_id": "gen",
"notes": "",
"created": "2012-02-27 10:18:19-06",
"popularity": 65,
"series_count": 4
},
...
]
}
```
## Returns
`pandas.DataFrame`
## Example
```python
>>> fred = FRED(api_key='abcdefghijklmnopqrstuvwxyz123456')
>>> fred.category_related_tags(category_id=125, tag_names=['services', 'quarterly']).head()
group_id notes created popularity series_count
name
discontinued gen 2012-02-27 16:18:19+00:00 67 4
nsa seas Not Seasonally Adjusted 2012-02-27 16:18:19+00:00 100 6
sa seas Seasonally Adjusted 2012-02-27 16:18:19+00:00 88 6
goods gen 2012-02-27 16:18:19+00:00 68 8
balance gen 2012-02-27 16:18:19+00:00 47 12
```
"""
allowed_orders = [
enums.OrderBy.series_count,
enums.OrderBy.popularity,
enums.OrderBy.created,
enums.OrderBy.name,
enums.OrderBy.group_id
]
if order_by not in allowed_orders:
raise ValueError('Variable order_by ({}) is not one of the values: {}'.format(order_by, ', '.join(map(str, allowed_orders))))
if realtime_start is not None and realtime_start < date(1776, 7, 4):
raise ValueError('Variable realtime_start ("{}") is before min date 1776-07-04.'.format(realtime_start))
if realtime_start is not None and realtime_end is not None and realtime_start > realtime_end:
raise ValueError('The date set by variable realtime_start ("{}") can not be after the date set by variable realtime_end ("{}").'.format(realtime_start, realtime_end))
df = pd.DataFrame(
self._client.get(
'/fred/category/related_tags',
'tags',
limit=1000,
category_id=category_id,
realtime_start=realtime_start,
realtime_end=realtime_end,
tag_names=tag_names,
exclude_tag_names=exclude_tag_names,
tag_group_id=tag_group_id,
search_text=search_text,
order_by=order_by,
sort_order=sort_order
)
)
if not df.empty:
df.created = pd.to_datetime(df.created + '00', utc=True, format='%Y-%m-%d %H:%M:%S%z')
df = df.astype(dtype={
'name': 'string',
'notes': 'string',
'group_id': 'category'
}).set_index('name')
return df
"""
Releases
https://fred.stlouisfed.org/releases
"""
def releases(
self,
realtime_start: date = None,
realtime_end: date = None,
order_by: enums.OrderBy = enums.OrderBy.release_id,
sort_order: enums.SortOrder = enums.SortOrder.asc
) -> pd.DataFrame:
"""
## Parameters
`realtime_start`
The start of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
`realtime_end`
The end of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
`order_by`
Order results by values of the specified attribute.
`sort_order`
Sort results in ascending or descending order for attribute values specified by order_by.
## Description
https://fred.stlouisfed.org/docs/api/fred/releases.html
Get all releases of economic data.
## API Request (HTTPS GET)
https://api.stlouisfed.org/fred/releases?api_key=abcdefghijklmnopqrstuvwxyz123456&file_type=json
## API Response
```json
{
"realtime_start": "2013-08-13",
"realtime_end": "2013-08-13",
"order_by": "release_id",
"sort_order": "asc",
"count": 158,
"offset": 0,
"limit": 1000,
"releases": [
{
"id": 9,
"realtime_start": "2013-08-13",
"realtime_end": "2013-08-13",
"name": "Advance Monthly Sales for Retail and Food Services",
"press_release": true,
"link": "http://www.census.gov/retail/"
},
...
]
}
```
## Returns
`pandas.DataFrame`
## Example
```python
>>> fred = FRED(api_key='abcdefghijklmnopqrstuvwxyz123456')
>>> fred.releases().head()
realtime_start realtime_end name press_release link notes
id
9 2022-02-05 2022-02-05 Advance Monthly Sales for Retail and Food Serv... True http://www.census.gov/retail/ The U.S. Census Bureau conducts the Advance Mo...
10 2022-02-05 2022-02-05 Consumer Price Index True http://www.bls.gov/cpi/ <NA>
11 2022-02-05 2022-02-05 Employment Cost Index True http://www.bls.gov/ncs/ect/ <NA>
13 2022-02-05 2022-02-05 G.17 Industrial Production and Capacity Utiliz... True http://www.federalreserve.gov/releases/g17/ <NA>
14 2022-02-05 2022-02-05 G.19 Consumer Credit True http://www.federalreserve.gov/releases/g19/ <NA>
```
"""
allowed_orders = [
enums.OrderBy.release_id,
enums.OrderBy.name,
enums.OrderBy.press_release,
enums.OrderBy.realtime_start,
enums.OrderBy.realtime_end
]
if order_by not in allowed_orders:
raise ValueError('Variable order_by ({}) is not one of the values: {}'.format(order_by, ', '.join(map(str, allowed_orders))))
if realtime_start is not None and realtime_start < date(1776, 7, 4):
raise ValueError('Variable realtime_start ("{}") is before min date 1776-07-04.'.format(realtime_start))
if realtime_start is not None and realtime_end is not None and realtime_start > realtime_end:
raise ValueError('The date set by variable realtime_start ("{}") can not be after the date set by variable realtime_end ("{}").'.format(realtime_start, realtime_end))
df = pd.DataFrame(
self._client.get(
'/fred/releases',
'releases',
limit=1000,
realtime_start=realtime_start,
realtime_end=realtime_end,
order_by=order_by,
sort_order=sort_order
)
)
date_columns = ['realtime_start', 'realtime_end']
if not df.empty:
df[date_columns] = df[date_columns].apply(pd.to_datetime, format='%Y-%m-%d')
df = df.astype(dtype={
'name': 'string',
'link': 'string',
'notes': 'string',
'press_release': 'bool'
}).set_index('id')
return df
def releases_dates(
self,
realtime_start: date = None,
realtime_end: date = None,
order_by: enums.OrderBy = enums.OrderBy.release_id,
sort_order: enums.SortOrder = enums.SortOrder.desc,
include_release_dates_with_no_data: bool = False
) -> pd.DataFrame:
"""
## Parameters
`realtime_start`
The start of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
`realtime_end`
The end of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
`order_by`
Order results by values of the specified attribute.
`sort_order`
Sort results in ascending or descending order for attribute values specified by order_by.
`include_release_dates_with_no_data`
Determines whether release dates with no data available are returned.
The default value 'false' excludes release dates that do not have data.
In particular, this excludes future release dates which may be available in the FRED release calendar or the ALFRED release calendar.
If include_release_dates_with_no_data is set to true, the XML tag release_date has an extra attribute release_last_updated that can be compared to the release date to determine if data has been updated.
## Description
https://fred.stlouisfed.org/docs/api/fred/releases_dates.html
Get release dates for all releases of economic data.
Note that release dates are published by data sources and do not necessarily represent when data will be available on the FRED or ALFRED websites.
## API Request (HTTPS GET)
https://api.stlouisfed.org/fred/releases/dates?api_key=abcdefghijklmnopqrstuvwxyz123456&file_type=json
## API Response
```json
{
"realtime_start": "2013-01-01",
"realtime_end": "9999-12-31",
"order_by": "release_date",
"sort_order": "desc",
"count": 1129,
"offset": 0,
"limit": 1000,
"release_dates": [
{
"release_id": 9,
"release_name": "Advance Monthly Sales for Retail and Food Services",
"date": "2013-08-13"
},
...
]
}
```
## Returns
`pandas.DataFrame`
## Example
```python
>>> fred = FRED(api_key='abcdefghijklmnopqrstuvwxyz123456')
>>> fred.releases_dates(realtime_start=date.today() - timedelta(days=1)).head()
release_name date
release_id
502 Euro Short Term Rate 2022-02-04
492 SONIA Interest Rate Benchmark 2022-02-04
484 Key ECB Interest Rates 2022-02-04
483 SOFR Averages and Index Data 2022-02-04
469 State Unemployment Insurance Weekly Claims Report 2022-02-04
```
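The example above builds yesterday's date for `realtime_start`; for completeness, the stdlib imports it assumes:

```python
from datetime import date, timedelta

# Real-time window starting yesterday, as in the example above
yesterday = date.today() - timedelta(days=1)
```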
"""
allowed_orders = [
enums.OrderBy.release_date,
enums.OrderBy.release_id,
enums.OrderBy.release_name,
]
if order_by not in allowed_orders:
raise ValueError('Variable order_by ({}) is not one of the values: {}'.format(order_by, ', '.join(map(str, allowed_orders))))
if realtime_start is not None and realtime_start < date(1776, 7, 4):
raise ValueError('Variable realtime_start ("{}") is before min date 1776-07-04.'.format(realtime_start))
if realtime_start is not None and realtime_end is not None and realtime_start > realtime_end:
raise ValueError('The date set by variable realtime_start ("{}") can not be after the date set by variable realtime_end ("{}").'.format(realtime_start, realtime_end))
df = pd.DataFrame(
self._client.get(
'/fred/releases/dates',
'release_dates',
limit=1000,
realtime_start=realtime_start,
realtime_end=realtime_end,
order_by=order_by,
sort_order=sort_order,
include_release_dates_with_no_data=include_release_dates_with_no_data
)
)
if not df.empty:
df.date = pd.to_datetime(df.date, format='%Y-%m-%d')
df = df.astype(dtype={
'release_name': 'string'
}).set_index('release_id')
return df
def release(self, release_id: int, realtime_start: date = None, realtime_end: date = None) -> models.Release:
"""
## Parameters
`release_id`
The id for a release.
`realtime_start`
The start of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
`realtime_end`
The end of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
## Description
https://fred.stlouisfed.org/docs/api/fred/release.html
Get a release of economic data.
## API Request (HTTPS GET)
https://api.stlouisfed.org/fred/release?release_id=53&api_key=abcdefghijklmnopqrstuvwxyz123456&file_type=json
## API Response
```json
{
"realtime_start": "2013-08-14",
"realtime_end": "2013-08-14",
"releases": [
{
"id": 53,
"realtime_start": "2013-08-14",
"realtime_end": "2013-08-14",
"name": "Gross Domestic Product",
"press_release": true,
"link": "http://www.bea.gov/national/index.htm"
}
]
}
```
## Returns
`pystlouisfed.models.Release`
## Example
```python
>>> fred = FRED(api_key='abcdefghijklmnopqrstuvwxyz123456')
>>> fred.release(release_id=53)
Release(id=53, realtime_start=datetime.date(2022, 1, 14), realtime_end=datetime.date(2022, 1, 14), name='Gross Domestic Product', press_release=True, link='https://www.bea.gov/data/gdp/gross-domestic-product')
```
"""
if int(release_id) <= 0:
raise ValueError('Variable release_id ({}) is not a positive integer.'.format(release_id))
if realtime_start is not None and realtime_start < date(1776, 7, 4):
raise ValueError('Variable realtime_start ("{}") is before min date 1776-07-04.'.format(realtime_start))
if realtime_start is not None and realtime_end is not None and realtime_start > realtime_end:
raise ValueError('The date set by variable realtime_start ("{}") can not be after the date set by variable realtime_end ("{}").'.format(realtime_start, realtime_end))
data = self._client.get(
'/fred/release',
'releases',
release_id=release_id,
realtime_start=realtime_start,
realtime_end=realtime_end,
)
return models.Release(**data[0])
def release_dates(
self,
release_id: int,
realtime_start: date = date(1776, 7, 4),
realtime_end: date = date(9999, 12, 31),
sort_order: enums.SortOrder = enums.SortOrder.asc,
include_release_dates_with_no_data: bool = False
) -> pd.DataFrame:
"""
## Parameters
`release_id`
The id for a release.
`realtime_start`
The start of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
`realtime_end`
The end of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
`sort_order`
Sort results in ascending or descending order of release dates.
`include_release_dates_with_no_data`
Determines whether release dates with no data available are returned. The default value 'false' excludes release dates that do not have data.
## Description
https://fred.stlouisfed.org/docs/api/fred/release_dates.html
Get release dates for a release of economic data.
## API Request (HTTPS GET)
https://api.stlouisfed.org/fred/release/dates?release_id=82&api_key=abcdefghijklmnopqrstuvwxyz123456&file_type=json
## API Response
```json
{
"realtime_start": "1776-07-04",
"realtime_end": "9999-12-31",
"order_by": "release_date",
"sort_order": "asc",
"count": 17,
"offset": 0,
"limit": 10000,
"release_dates": [
{
"release_id": 82,
"date": "1997-02-10"
},
...
]
}
```
## Returns
`pandas.DataFrame`
## Example
```python
>>> fred = FRED(api_key='abcdefghijklmnopqrstuvwxyz123456')
>>> fred.release_dates(release_id=82).head()
release_id date
0 82 1997-02-10
1 82 1998-02-10
2 82 1999-02-04
3 82 2000-02-10
4 82 2001-01-16
```
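The `date` column arrives as ISO-8601 strings and is parsed into `datetime64[ns]` with `pd.to_datetime`; a self-contained sketch with toy rows shaped like the example above:

```python
import pandas as pd

df = pd.DataFrame({
    'release_id': [82, 82, 82],
    'date': ['1997-02-10', '1998-02-10', '1999-02-04']
})

# Parse the ISO date strings into proper datetimes
df.date = pd.to_datetime(df.date, format='%Y-%m-%d')
```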
"""
if realtime_start is not None and realtime_start < date(1776, 7, 4):
raise ValueError('Variable realtime_start ("{}") is before min date 1776-07-04.'.format(realtime_start))
if realtime_start > realtime_end:
raise ValueError('The date set by variable realtime_start ("{}") can not be after the date set by variable realtime_end ("{}").'.format(realtime_start, realtime_end))
df = pd.DataFrame(
self._client.get(
'/fred/release/dates',
'release_dates',
limit=10000,
release_id=release_id,
realtime_start=realtime_start,
realtime_end=realtime_end,
sort_order=sort_order,
include_release_dates_with_no_data=include_release_dates_with_no_data
)
)
if not df.empty:
df.date = pd.to_datetime(df.date, format='%Y-%m-%d')
return df
def release_series(
self,
release_id: int,
realtime_start: date = None,
realtime_end: date = None,
order_by: enums.OrderBy = enums.OrderBy.series_id,
sort_order: enums.SortOrder = enums.SortOrder.asc,
filter_variable: enums.FilterVariable = None,
filter_value: enums.FilterValue = None,
tag_names: List[str] = None,
exclude_tag_names: List[str] = None
) -> pd.DataFrame:
"""
## Parameters
`release_id`
The id for a release.
`realtime_start`
The start of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
`realtime_end`
The end of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
`order_by`
Order results by values of the specified attribute.
`sort_order`
Sort results in ascending or descending order for attribute values specified by order_by.
`filter_variable`
The attribute to filter results by.
`filter_value`
The value of the filter_variable attribute to filter results by.
`tag_names`
Tuple of tag names that series match all of.
`exclude_tag_names`
Tuple of tag names that series match none of.
## Description
https://fred.stlouisfed.org/docs/api/fred/release_series.html
Get the series on a release of economic data.
## API Request (HTTPS GET)
https://api.stlouisfed.org/fred/release/series?release_id=51&api_key=abcdefghijklmnopqrstuvwxyz123456&file_type=json
## API Response
```json
{
"realtime_start": "2017-08-01",
"realtime_end": "2017-08-01",
"order_by": "series_id",
"sort_order": "asc",
"count": 57,
"offset": 0,
"limit": 1000,
"seriess": [
{
"id": "BOMTVLM133S",
"realtime_start": "2017-08-01",
"realtime_end": "2017-08-01",
"title": "U.S. Imports of Services - Travel",
"observation_start": "1992-01-01",
"observation_end": "2017-05-01",
"frequency": "Monthly",
"frequency_short": "M",
"units": "Million of Dollars",
"units_short": "Mil. of $",
"seasonal_adjustment": "Seasonally Adjusted",
"seasonal_adjustment_short": "SA",
"last_updated": "2017-07-06 09:34:00-05",
"popularity": 0,
"group_popularity": 0
},
...
]
}
```
## Returns
`pandas.DataFrame`
## Example
```python
>>> fred = FRED(api_key='abcdefghijklmnopqrstuvwxyz123456')
>>> fred.release_series(release_id=51).head()
realtime_start realtime_end title observation_start observation_end frequency frequency_short units units_short seasonal_adjustment seasonal_adjustment_short last_updated popularity group_popularity notes
id
BOMTVLM133S 2022-02-05 2022-02-05 U.S. Imports of Services - Travel 1992-01-01 2017-09-01 Monthly M Million of Dollars Mil. of $ Seasonally Adjusted SA 2017-11-03 13:12:15+00:00 1 1 Further information related to the internation...
BOMVGMM133S 2022-02-05 2022-02-05 U.S. Imports of Services: U.S. Government Misc... 1992-01-01 2013-12-01 Monthly M Millions of Dollars Mil. of $ Seasonally Adjusted SA 2014-10-20 14:27:37+00:00 1 1 BEA has introduced new table presentations, in...
BOMVJMM133S 2022-02-05 2022-02-05 U.S. Imports of Services - Direct Defense Expe... 1992-01-01 2013-12-01 Monthly M Millions of Dollars Mil. of $ Seasonally Adjusted SA 2014-10-20 14:26:44+00:00 1 1 BEA has introduced new table presentations, in...
BOMVMPM133S 2022-02-05 2022-02-05 U.S. Imports of Services - Passenger Fares 1992-01-01 2017-09-01 Monthly M Million of Dollars Mil. of $ Seasonally Adjusted SA 2017-11-03 13:12:15+00:00 1 1 Further information related to the internation...
BOMVOMM133S 2022-02-05 2022-02-05 U.S. Imports of Services - Other Private Servi... 1992-01-01 2013-12-01 Monthly M Million of Dollars Mil. of $ Seasonally Adjusted SA 2014-10-20 14:25:54+00:00 1 1 BEA has introduced new table presentations, in...
```
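Columns with few distinct values (`frequency`, `units`, `seasonal_adjustment`, and their short forms) are cast to the pandas `category` dtype, which stores each distinct label once plus small integer codes, keeping the returned frame compact. A minimal sketch of the idea (toy data, not an API response):

```python
import pandas as pd

df = pd.DataFrame({'frequency': ['Monthly', 'Monthly', 'Quarterly', 'Monthly']})

# 'category' stores the two distinct labels once; rows hold only codes
df = df.astype({'frequency': 'category'})
```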
"""
allowed_orders = [
enums.OrderBy.series_id,
enums.OrderBy.title,
enums.OrderBy.units,
enums.OrderBy.frequency,
enums.OrderBy.seasonal_adjustment,
enums.OrderBy.realtime_start,
enums.OrderBy.realtime_end,
enums.OrderBy.last_updated,
enums.OrderBy.observation_start,
enums.OrderBy.observation_end,
enums.OrderBy.popularity,
enums.OrderBy.group_popularity,
]
if order_by not in allowed_orders:
raise ValueError('Variable order_by ({}) is not one of the values: {}'.format(order_by, ', '.join(map(str, allowed_orders))))
if filter_variable is not None and filter_variable not in enums.FilterVariable:
raise ValueError('Variable filter_variable ({}) is not one of the values: {}'.format(filter_variable, ', '.join(map(str, enums.FilterVariable))))
if realtime_start is not None and realtime_start < date(1776, 7, 4):
raise ValueError('Variable realtime_start ("{}") is before min date 1776-07-04.'.format(realtime_start))
if realtime_start is not None and realtime_end is not None and realtime_start > realtime_end:
raise ValueError('The date set by variable realtime_start ("{}") can not be after the date set by variable realtime_end ("{}").'.format(realtime_start, realtime_end))
df = pd.DataFrame(
self._client.get(
'/fred/release/series',
'seriess',
limit=1000,
release_id=release_id,
realtime_start=realtime_start,
realtime_end=realtime_end,
order_by=order_by,
sort_order=sort_order,
filter_variable=filter_variable,
filter_value=filter_value,
tag_names=tag_names,
exclude_tag_names=exclude_tag_names
)
)
date_columns = [
'realtime_start', 'realtime_end',
'observation_start', 'observation_end',
]
if not df.empty:
df[date_columns] = df[date_columns].apply(pd.to_datetime, format='%Y-%m-%d')
df.last_updated = pd.to_datetime(df.last_updated + '00', utc=True, format='%Y-%m-%d %H:%M:%S%z')
df = df.astype(dtype={
'id': 'string',
'title': 'string',
'notes': 'string',
'frequency': 'category',
'frequency_short': 'category',
'units': 'category',
'units_short': 'category',
'seasonal_adjustment': 'category',
'seasonal_adjustment_short': 'category'
}).set_index('id')
return df
def release_sources(self, release_id: int, realtime_start: date = None, realtime_end: date = None) -> pd.DataFrame:
"""
## Parameters
`release_id`
The id for a release.
`realtime_start`
The start of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
`realtime_end`
The end of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
## Description
https://fred.stlouisfed.org/docs/api/fred/release_sources.html
Get the sources for a release of economic data.
## API Request (HTTPS GET)
https://api.stlouisfed.org/fred/release/sources?release_id=51&api_key=abcdefghijklmnopqrstuvwxyz123456&file_type=json
## API Response
```json
{
"realtime_start": "2013-08-14",
"realtime_end": "2013-08-14",
"sources": [
{
"id": 18,
"realtime_start": "2013-08-14",
"realtime_end": "2013-08-14",
"name": "U.S. Department of Commerce: Bureau of Economic Analysis",
"link": "http://www.bea.gov/"
},
...
]
}
```
## Returns
`pandas.DataFrame`
## Example
```python
>>> fred = FRED(api_key='abcdefghijklmnopqrstuvwxyz123456')
>>> fred.release_sources(release_id=51).head()
realtime_start realtime_end name link
id
19 2022-02-05 2022-02-05 U.S. Census Bureau http://www.census.gov/
18 2022-02-05 2022-02-05 U.S. Bureau of Economic Analysis http://www.bea.gov/
"""
if realtime_start is not None and realtime_start < date(1776, 7, 4):
raise ValueError('Variable realtime_start ("{}") is before min date 1776-07-04.'.format(realtime_start))
if realtime_start is not None and realtime_end is not None and realtime_start > realtime_end:
raise ValueError('The date set by variable realtime_start ("{}") can not be after the date set by variable realtime_end ("{}").'.format(realtime_start, realtime_end))
df = pd.DataFrame(
self._client.get(
'/fred/release/sources',
'sources',
release_id=release_id,
realtime_start=realtime_start,
realtime_end=realtime_end
)
)
date_columns = ['realtime_start', 'realtime_end']
if not df.empty:
df[date_columns] = df[date_columns].apply(pd.to_datetime, format='%Y-%m-%d')
df = df.astype(dtype={
'name': 'string',
'link': 'string'
}).set_index('id')
return df
def release_tags(
self,
release_id: int,
realtime_start: date = None,
realtime_end: date = None,
tag_names: List[str] = None,
tag_group_id: enums.TagGroupID = None,
search_text: str = None,
order_by: enums.OrderBy = enums.OrderBy.series_count,
sort_order: enums.SortOrder = enums.SortOrder.asc
) -> pd.DataFrame:
"""
## Parameters
`release_id`
The id for a release.
`realtime_start`
The start of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
`realtime_end`
The end of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
`tag_names`
Tuple of tag names that series match all of.
`tag_group_id`
A tag group id to filter tags by type.
`search_text`
The words to find matching tags with.
`order_by`
Order results by values of the specified attribute.
`sort_order`
Sort results in ascending or descending order for attribute values specified by order_by.
## Description
https://fred.stlouisfed.org/docs/api/fred/release_tags.html
Get the FRED tags for a release.
Optionally, filter results by tag name, tag group, or search.
Series are assigned tags and releases.
Indirectly through series, it is possible to get the tags for a release.
See the related request fred/release/related_tags.
## API Request (HTTPS GET)
https://api.stlouisfed.org/fred/release/tags?release_id=86&api_key=abcdefghijklmnopqrstuvwxyz123456&file_type=json
## API Response
```json
{
"realtime_start": "2013-08-14",
"realtime_end": "2013-08-14",
"order_by": "series_count",
"sort_order": "desc",
"count": 13,
"offset": 0,
"limit": 1000,
"tags": [
{
"name": "commercial paper",
"group_id": "gen",
"notes": "",
"created": "2012-03-19 10:40:59-05",
"popularity": 55,
"series_count": 18
},
...
]
}
```
## Returns
`pandas.DataFrame`
## Example
```python
>>> fred = FRED(api_key='abcdefghijklmnopqrstuvwxyz123456')
>>> fred.release_tags(release_id=86).head()
group_id notes created popularity series_count
name
1-month gen 2012-02-27 16:18:19+00:00 39 2
2-month gen 2012-05-25 16:29:21+00:00 17 2
owned gen 2012-06-25 20:04:36+00:00 33 2
tier-2 gen 2014-02-12 17:18:16+00:00 -13 2
10-20 days gen 2014-02-12 17:08:07+00:00 -16 4
"""
allowed_orders = [
enums.OrderBy.series_count,
enums.OrderBy.popularity,
enums.OrderBy.created,
enums.OrderBy.name,
enums.OrderBy.group_id,
]
if order_by not in allowed_orders:
raise ValueError('Variable order_by ({}) is not one of the values: {}'.format(order_by, ', '.join(map(str, allowed_orders))))
if realtime_start is not None and realtime_start < date(1776, 7, 4):
raise ValueError('Variable realtime_start ("{}") is before min date 1776-07-04.'.format(realtime_start))
if realtime_start is not None and realtime_end is not None and realtime_start > realtime_end:
raise ValueError('The date set by variable realtime_start ("{}") can not be after the date set by variable realtime_end ("{}").'.format(realtime_start, realtime_end))
df = pd.DataFrame(
self._client.get(
'/fred/release/tags',
'tags',
limit=1000,
release_id=release_id,
realtime_start=realtime_start,
realtime_end=realtime_end,
tag_names=tag_names,
tag_group_id=tag_group_id,
search_text=search_text,
order_by=order_by,
sort_order=sort_order
)
)
if not df.empty:
# FRED returns offsets like '-05'; appending '00' yields '-0500' so the '%z' directive can parse it
df.created = pd.to_datetime(df.created + '00', utc=True, format='%Y-%m-%d %H:%M:%S%z')
df = df.astype(dtype={
'name': 'string',
'notes': 'string',
'group_id': 'category'
}).set_index('name')
return df
def release_related_tags(
self,
release_id: int,
realtime_start: date = None,
realtime_end: date = None,
tag_names: List[str] = None,
exclude_tag_names: List[str] = None,
tag_group_id: enums.TagGroupID = None,
search_text: str = None,
order_by: enums.OrderBy = enums.OrderBy.series_count,
sort_order: enums.SortOrder = enums.SortOrder.asc
) -> pd.DataFrame:
"""
## Parameters
`release_id`
The id for a release.
`realtime_start`
The start of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
`realtime_end`
The end of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
`tag_names`
List of tag names that series match all of.
`exclude_tag_names`
List of tag names that series match none of.
`tag_group_id`
A tag group id to filter tags by type.
`search_text`
The words to find matching tags with.
`order_by`
Order results by values of the specified attribute.
`sort_order`
Sort results in ascending or descending order for attribute values specified by order_by.
## Description
https://fred.stlouisfed.org/docs/api/fred/release_related_tags.html
Get the related FRED tags for one or more FRED tags within a release. Optionally, filter results by tag group or search.
FRED tags are attributes assigned to series.
For this request, related FRED tags are the tags assigned to series that match all tags in the tag_names parameter, no tags in the exclude_tag_names parameter, and the release set by the release_id parameter.
See the related request fred/release/tags.
Series are assigned tags and releases. Indirectly through series, it is possible to get the tags for a release.
## API Request (HTTPS GET)
https://api.stlouisfed.org/fred/release/related_tags?release_id=86&tag_names=sa;foreign&api_key=abcdefghijklmnopqrstuvwxyz123456&file_type=json
## API Response
```json
{
"realtime_start": "2013-08-14",
"realtime_end": "2013-08-14",
"order_by": "series_count",
"sort_order": "desc",
"count": 7,
"offset": 0,
"limit": 1000,
"tags": [
{
"name": "commercial paper",
"group_id": "gen",
"notes": "",
"created": "2012-03-19 10:40:59-05",
"popularity": 55,
"series_count": 2
},
...
]
}
```
## Returns
`pandas.DataFrame`
## Example
```python
>>> fred = FRED(api_key='abcdefghijklmnopqrstuvwxyz123456')
>>> fred.release_related_tags(release_id=86, tag_names=['sa', 'foreign']).head()
group_id notes created popularity series_count
name
financial gen 2012-02-27 16:18:19+00:00 55 2
monthly freq 2012-02-27 16:18:19+00:00 93 2
nonfinancial gen 2012-02-27 16:18:19+00:00 55 2
weekly freq 2012-02-27 16:18:19+00:00 68 2
commercial gen 2012-02-27 16:18:19+00:00 61 4
```
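The request URL above passes `tag_names=sa;foreign`: the HTTP API expects tag names joined with semicolons, while this method accepts a `List[str]`. The mapping between the two is presumably handled by the client and amounts to:

```python
# Hypothetical serialization step: FRED's HTTP API expects
# tag names joined with semicolons (see the request URL above)
tag_names = ['sa', 'foreign']
param = ';'.join(tag_names)
```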
"""
allowed_orders = [
enums.OrderBy.series_count,
enums.OrderBy.popularity,
enums.OrderBy.created,
enums.OrderBy.name,
enums.OrderBy.group_id,
]
if order_by not in allowed_orders:
raise ValueError('Variable order_by ({}) is not one of the values: {}'.format(order_by, ', '.join(map(str, allowed_orders))))
if realtime_start is not None and realtime_start < date(1776, 7, 4):
raise ValueError('Variable realtime_start ("{}") is before min date 1776-07-04.'.format(realtime_start))
if realtime_start is not None and realtime_end is not None and realtime_start > realtime_end:
raise ValueError('The date set by variable realtime_start ("{}") can not be after the date set by variable realtime_end ("{}").'.format(realtime_start, realtime_end))
df = pd.DataFrame(
self._client.get(
'/fred/release/related_tags',
'tags',
limit=1000,
release_id=release_id,
realtime_start=realtime_start,
realtime_end=realtime_end,
tag_names=tag_names,
exclude_tag_names=exclude_tag_names,
tag_group_id=tag_group_id,
search_text=search_text,
order_by=order_by,
sort_order=sort_order
)
)
if not df.empty:
df.created = pd.to_datetime(df.created + '00', utc=True, format='%Y-%m-%d %H:%M:%S%z')
df = df.astype(dtype={
'name': 'string',
'notes': 'string',
'group_id': 'category'
}).set_index('name')
return df
def release_tables(
self,
release_id: int,
element_id: int = None,
include_observation_values: bool = False,
observation_date: date = date(9999, 12, 31)
) -> pd.DataFrame:
"""
## Parameters
`release_id`
The id for a release.
`element_id`
The release table element id you would like to retrieve.
When the parameter is not passed, the root (top-most) element for the release is given.
`include_observation_values`
A flag to indicate that observations need to be returned. Observation value and date will only be returned for a series type element.
`observation_date`
The observation date to be included with the returned release table.
## Description
https://fred.stlouisfed.org/docs/api/fred/release_tables.html
Get release table trees for a given release.
You can go directly to the tree structure by passing the appropriate element_id.
You may also use a drill-down approach, starting at the root (top-most) element, by omitting element_id.
Note that release dates are published by data sources and do not necessarily represent when data will be available on the FRED or ALFRED websites.
## API Request (HTTPS GET)
https://api.stlouisfed.org/fred/release/tables?release_id=53&api_key=abcdefghijklmnopqrstuvwxyz123456&element_id=12886&file_type=json
## API Response
```json
{
"name": "Personal consumption expenditures",
"element_id": 12886,
"release_id": "53",
"elements": {
"12887": {
"element_id": 12887,
"release_id": 53,
"series_id": "DGDSRL1A225NBEA",
"parent_id": 12886,
"line": "3",
"type": "series",
"name": "Goods",
"level": "1",
"children": [
{
"element_id": 12888,
"release_id": 53,
"series_id": "DDURRL1A225NBEA",
"parent_id": 12887,
"line": "4",
"type": "series",
"name": "Durable goods",
"level": "2",
"children": []
},
...
]
}
}
}
```
"""
raise NotImplementedError('Method "FRED.release_tables" is not implemented')
"""
Series
"""
def series(self, series_id: str, realtime_start: date = None, realtime_end: date = None) -> models.Series:
"""
## Parameters
`series_id`
The id for a series.
`realtime_start`
The start of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
`realtime_end`
The end of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
## Description
https://fred.stlouisfed.org/docs/api/fred/series.html
Get an economic data series.
## API Request (HTTPS GET)
https://api.stlouisfed.org/fred/series?series_id=GNPCA&api_key=abcdefghijklmnopqrstuvwxyz123456&file_type=json
## API Response
```json
{
"realtime_start": "2013-08-14",
"realtime_end": "2013-08-14",
"seriess": [
{
"id": "GNPCA",
"realtime_start": "2013-08-14",
"realtime_end": "2013-08-14",
"title": "Real Gross National Product",
"observation_start": "1929-01-01",
"observation_end": "2012-01-01",
"frequency": "Annual",
"frequency_short": "A",
"units": "Billions of Chained 2009 Dollars",
"units_short": "Bil. of Chn. 2009 $",
"seasonal_adjustment": "Not Seasonally Adjusted",
"seasonal_adjustment_short": "NSA",
"last_updated": "2013-07-31 09:26:16-05",
"popularity": 39,
"notes": "BEA Account Code: A001RX1"
}
]
}
```
## Returns
`pystlouisfed.models.Series`
## Example
```python
>>> fred = FRED(api_key='abcdefghijklmnopqrstuvwxyz123456')
>>> fred.series(series_id='GNPCA')
Series(id='GNPCA', realtime_start=datetime.date(2022, 1, 14), realtime_end=datetime.date(2022, 1, 14), title='Real Gross National Product', observation_start=datetime.date(1929, 1, 1), observation_end=datetime.date(2020, 1, 1), frequency='Annual', frequency_short='A', units='Billions of Chained 2012 Dollars', units_short='Bil. of Chn. 2012 $', seasonal_adjustment='Not Seasonally Adjusted', seasonal_adjustment_short='NSA', last_updated=datetime.datetime(2021, 7, 29, 7, 45, 58, tzinfo=datetime.timezone(datetime.timedelta(days=-1, seconds=68400))), popularity=12, notes='BEA Account Code: A001RX\\n\\n')
```
"""
if realtime_start is not None and realtime_start < date(1776, 7, 4):
raise ValueError('Variable realtime_start ("{}") is before min date 1776-07-04.'.format(realtime_start))
if realtime_start is not None and realtime_end is not None and realtime_start > realtime_end:
raise ValueError('The date set by variable realtime_start ("{}") can not be after the date set by variable realtime_end ("{}").'.format(realtime_start, realtime_end))
data = self._client.get(
'/fred/series',
'seriess',
series_id=series_id,
realtime_start=realtime_start,
realtime_end=realtime_end,
)
return models.Series(**data[0])
def series_categories(self, series_id: str, realtime_start: date = None, realtime_end: date = None) -> pd.DataFrame:
"""
## Parameters
`series_id`
The id for a series.
`realtime_start`
The start of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
`realtime_end`
The end of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
## Description
https://fred.stlouisfed.org/docs/api/fred/series_categories.html
Get the categories for an economic data series.
## API Request (HTTPS GET)
https://api.stlouisfed.org/fred/series/categories?series_id=EXJPUS&api_key=abcdefghijklmnopqrstuvwxyz123456&file_type=json
## API Response
```json
{
"categories": [
{
"id": 95,
"name": "Monthly Rates",
"parent_id": 15
}
...
]
}
```
## Returns
`pandas.DataFrame`
## Example
```python
>>> fred = FRED(api_key='abcdefghijklmnopqrstuvwxyz123456')
>>> fred.series_categories(series_id='EXJPUS')
name parent_id
id
95 Monthly Rates 15
275 Japan 158
```
"""
if realtime_start is not None and realtime_start < date(1776, 7, 4):
raise ValueError('Variable realtime_start ("{}") is before min date 1776-07-04.'.format(realtime_start))
if realtime_start is not None and realtime_end is not None and realtime_start > realtime_end:
raise ValueError('The date set by variable realtime_start ("{}") can not be after the date set by variable realtime_end ("{}").'.format(realtime_start, realtime_end))
df = pd.DataFrame(
self._client.get(
'/fred/series/categories',
'categories',
series_id=series_id,
realtime_start=realtime_start,
realtime_end=realtime_end,
)
).astype(dtype={
'name': 'string'
}).set_index('id')
return df
def series_observations(
self,
series_id: str,
realtime_start: date = None,
realtime_end: date = None,
sort_order: enums.SortOrder = enums.SortOrder.asc,
observation_start: date = date(1776, 7, 4),
observation_end: date = date(9999, 12, 31),
units: enums.Unit = enums.Unit.lin,
frequency: enums.Frequency = None,
aggregation_method: enums.AggregationMethod = enums.AggregationMethod.average,
output_type: enums.OutputType = enums.OutputType.realtime_period,
vintage_dates: List[str] = None
) -> pd.DataFrame:
"""
## Parameters
`series_id`
The id for a series.
`realtime_start`
The start of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
`realtime_end`
The end of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
`sort_order`
Sort results in ascending or descending observation_date order.
`observation_start`
The start of the observation period.
`observation_end`
The end of the observation period.
`units`
A key that indicates a data value transformation.
`frequency`
An optional parameter that indicates a lower frequency to aggregate values to.
`aggregation_method`
A key that indicates the aggregation method used for frequency aggregation.
This parameter has no effect if the frequency parameter is not set.
`output_type`
An integer that indicates an output type.
`vintage_dates`
A list of YYYY-MM-DD formatted dates in history, serialized as a comma-separated string (e.g. 2000-01-01,2005-02-24).
Vintage dates are used to download data as it existed on these specified dates in history.
Vintage dates can be specified instead of a real-time period using realtime_start and realtime_end.
Sometimes it may be useful to enter a vintage date that is not a date when the data values were revised. For instance, you may want to know the latest available revisions on 2001-09-11 (World Trade Center and Pentagon attacks) or as of a Federal Open Market Committee (FOMC) meeting date.
Entering a vintage date is also useful for comparing series across releases with different release dates.
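For illustration, the vintage-date strings can be built from `datetime.date` objects; the helper below is not part of the library, just a sketch of preparing the expected `YYYY-MM-DD` format:

```python
from datetime import date

# Format date objects as the 'YYYY-MM-DD' strings that vintage_dates expects
vintages = [date(2001, 9, 11), date(2005, 2, 24)]
vintage_dates = [d.strftime('%Y-%m-%d') for d in vintages]
```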
## Description
https://fred.stlouisfed.org/docs/api/fred/series_observations.html
Get the observations or data values for an economic data series.
## API Request (HTTPS GET)
https://api.stlouisfed.org/fred/series/observations?series_id=GNPCA&api_key=abcdefghijklmnopqrstuvwxyz123456&file_type=json
## API Response
```json
{
"realtime_start": "2013-08-14",
"realtime_end": "2013-08-14",
"observation_start": "1776-07-04",
"observation_end": "9999-12-31",
"units": "lin",
"output_type": 1,
"file_type": "json",
"order_by": "observation_date",
"sort_order": "asc",
"count": 84,
"offset": 0,
"limit": 100000,
"observations": [
{
"realtime_start": "2013-08-14",
"realtime_end": "2013-08-14",
"date": "1929-01-01",
"value": "1065.9"
},
...
]
}
```
## Returns
`pandas.DataFrame`
## Example
```python
>>> fred = FRED(api_key='abcdefghijklmnopqrstuvwxyz123456')
>>> fred.series_observations(series_id='GNPCA').head()
realtime_start realtime_end value
date
1929-01-01 2022-02-05 2022-02-05 1120.718
1930-01-01 2022-02-05 2022-02-05 1025.678
1931-01-01 2022-02-05 2022-02-05 958.927
1932-01-01 2022-02-05 2022-02-05 834.769
1933-01-01 2022-02-05 2022-02-05 823.628
```
```python
>>> from matplotlib import pyplot as plt
>>> fred = FRED(api_key='abcdefghijklmnopqrstuvwxyz123456')
>>> df = fred.series_observations(series_id='T10Y2Y')
>>> df.plot(y='value', grid=True)
>>> plt.show()
```
.. image:: T10Y2Y.png
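The `units` transformations are applied server-side, but they can be reproduced locally: `pch` (percent change), for example, matches pandas `pct_change` scaled to percent. A sketch with made-up numbers, not the library's code:

```python
import pandas as pd

# 'lin' is the raw series; units='pch' asks the server for
# period-over-period percent change, i.e. pct_change() * 100
raw = pd.Series([100.0, 102.0, 99.96])
pch = (raw.pct_change() * 100).round(1)
```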
"""
if units not in enums.Unit:
raise ValueError('Variable units ({}) is not one of the values: {}'.format(units, ', '.join(map(str, enums.Unit))))
if frequency is not None and frequency not in enums.Frequency:
raise ValueError('Variable frequency ({}) is not one of the values: {}'.format(frequency, ', '.join(map(str, enums.Frequency))))
if aggregation_method not in enums.AggregationMethod:
raise ValueError('Variable aggregation_method ({}) is not one of the values: {}'.format(aggregation_method, ', '.join(map(str, enums.AggregationMethod))))
if output_type not in enums.OutputType:
raise ValueError('Variable output_type ({}) is not one of the values: {}'.format(output_type, ', '.join(map(str, enums.OutputType))))
if realtime_start is not None and realtime_start < date(1776, 7, 4):
raise ValueError('Variable realtime_start ("{}") is before min date 1776-07-04.'.format(realtime_start))
if realtime_start is not None and realtime_end is not None and realtime_start > realtime_end:
raise ValueError('The date set by variable realtime_start ("{}") can not be after the date set by variable realtime_end ("{}").'.format(realtime_start, realtime_end))
df = pd.DataFrame(
self._client.get(
'/fred/series/observations',
'observations',
limit=100000,
series_id=series_id,
realtime_start=realtime_start,
realtime_end=realtime_end,
sort_order=sort_order,
observation_start=observation_start,
observation_end=observation_end,
units=units,
frequency=frequency,
aggregation_method=aggregation_method,
output_type=output_type,
vintage_dates=vintage_dates
)
)
date_columns = ['realtime_start', 'realtime_end', 'date']
if not df.empty:
df[date_columns] = df[date_columns].apply(pd.to_datetime, format='%Y-%m-%d')
df.value = df.value.replace(self.EMPTY_VALUE, np.nan)
df = df.astype(dtype={
'value': 'float'
}).set_index('date')
return df
def series_release(self, series_id: str, realtime_start: date = None, realtime_end: date = None) -> pd.DataFrame:
"""
## Parameters
`series_id`
The id for a series.
`realtime_start`
The start of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
`realtime_end`
The end of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
## Description
https://fred.stlouisfed.org/docs/api/fred/series_release.html
Get the release for an economic data series.
## API Request (HTTPS GET)
https://api.stlouisfed.org/fred/series/release?series_id=IRA&api_key=abcdefghijklmnopqrstuvwxyz123456&file_type=json
## API Response
```json
{
"realtime_start": "2013-08-14",
"realtime_end": "2013-08-14",
"releases": [
{
"id": 21,
"realtime_start": "2013-08-14",
"realtime_end": "2013-08-14",
"name": "H.6 Money Stock Measures",
"press_release": true,
"link": "http://www.federalreserve.gov/releases/h6/"
}
]
}
```
## Returns
`pandas.DataFrame`
## Example
```python
>>> fred = FRED(api_key='abcdefghijklmnopqrstuvwxyz123456')
>>> fred.series_release(series_id='IRA').head()
realtime_start realtime_end name press_release link
id
21 2022-02-05 2022-02-05 H.6 Money Stock Measures True http://www.federalreserve.gov/releases/h6/
```
"""
if realtime_start is not None and realtime_start < date(1776, 7, 4):
raise ValueError('Variable realtime_start ("{}") is before min date 1776-07-04.'.format(realtime_start))
if realtime_start is not None and realtime_end is not None and realtime_start > realtime_end:
raise ValueError('The date set by variable realtime_start ("{}") can not be after the date set by variable realtime_end ("{}").'.format(realtime_start, realtime_end))
df = pd.DataFrame(
self._client.get(
'/fred/series/release',
'releases',
series_id=series_id,
realtime_start=realtime_start,
realtime_end=realtime_end,
)
)
date_columns = ['realtime_start', 'realtime_end']
if not df.empty:
df[date_columns] = df[date_columns].apply(pd.to_datetime, format='%Y-%m-%d')
df = df.astype(dtype={
'name': 'string',
'link': 'string',
'press_release': 'bool'
}).set_index('id')
return df
def series_search(
self,
search_text: str,
search_type: enums.SearchType = enums.SearchType.full_text,
realtime_start: date = None,
realtime_end: date = None,
order_by: enums.OrderBy = None,
sort_order: enums.SortOrder = None,
filter_variable: enums.FilterVariable = None,
filter_value: enums.FilterValue = None,
tag_names: List[str] = None,
exclude_tag_names: List[str] = None
) -> pd.DataFrame:
"""
## Parameters
`search_text`
The words to match against economic data series.
`search_type`
Determines the type of search to perform.
`realtime_start`
The start of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
`realtime_end`
The end of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
`order_by`
Order results by values of the specified attribute.
`sort_order`
Sort results in ascending or descending order for attribute values specified by order_by.
`filter_variable`
The attribute to filter results by.
`filter_value`
The value of the filter_variable attribute to filter results by.
`tag_names`
A semicolon delimited list of tag names that series match all of.
`exclude_tag_names`
A semicolon delimited list of tag names that series match none of.
## Description
https://fred.stlouisfed.org/docs/api/fred/series_search.html
Get economic data series that match search text.
## API Request (HTTPS GET)
https://api.stlouisfed.org/fred/series/search?search_text=monetary+service+index&api_key=abcdefghijklmnopqrstuvwxyz123456&file_type=json
## API Response
```json
{
"realtime_start": "2017-08-01",
"realtime_end": "2017-08-01",
"order_by": "search_rank",
"sort_order": "desc",
"count": 32,
"offset": 0,
"limit": 1000,
"seriess": [
{
"id": "MSIM2",
"realtime_start": "2017-08-01",
"realtime_end": "2017-08-01",
"title": "Monetary Services Index: M2 (preferred)",
"observation_start": "1967-01-01",
"observation_end": "2013-12-01",
"frequency": "Monthly",
"frequency_short": "M",
"units": "Billions of Dollars",
"units_short": "Bil. of $",
"seasonal_adjustment": "Seasonally Adjusted",
"seasonal_adjustment_short": "SA",
"last_updated": "2014-01-17 07:16:44-06",
"popularity": 34,
"group_popularity": 33,
"notes": "The MSI measure the flow of monetary services received each period by households and firms from their holdings of monetary assets (levels of the indexes are sometimes referred to as Divisia monetary aggregates).\\nPreferred benchmark rate equals 100 basis points plus the largest rate in the set of rates.\\nAlternative benchmark rate equals the larger of the preferred benchmark rate and the Baa corporate bond yield.\\nMore information about the new MSI can be found at\\nhttp://research.stlouisfed.org/msi/index.html."
},
...
]
}
```
## Returns
`pandas.DataFrame`
## Example
```python
>>> fred = FRED(api_key='abcdefghijklmnopqrstuvwxyz123456')
>>> fred.series_search(search_text='monetary service index').head()
realtime_start realtime_end title observation_start observation_end frequency frequency_short units units_short seasonal_adjustment seasonal_adjustment_short last_updated popularity group_popularity notes
id
MSIMZMP 2022-02-05 2022-02-05 Monetary Services Index: MZM (preferred) 1967-01-01 2013-12-01 Monthly M Billions of Dollars Bil. of $ Seasonally Adjusted SA 2014-01-17 13:16:42+00:00 20 20 The MSI measure the flow of monetary services ...
MSIM2 2022-02-05 2022-02-05 Monetary Services Index: M2 (preferred) 1967-01-01 2013-12-01 Monthly M Billions of Dollars Bil. of $ Seasonally Adjusted SA 2014-01-17 13:16:44+00:00 16 16 The MSI measure the flow of monetary services ...
MSIALLP 2022-02-05 2022-02-05 Monetary Services Index: ALL Assets (preferred) 1967-01-01 2013-12-01 Monthly M Billions of Dollars Bil. of $ Seasonally Adjusted SA 2014-01-17 13:16:45+00:00 14 14 The MSI measure the flow of monetary services ...
MSIM1P 2022-02-05 2022-02-05 Monetary Services Index: M1 (preferred) 1967-01-01 2013-12-01 Monthly M Billions of Dollars Bil. of $ Seasonally Adjusted SA 2014-01-17 13:16:45+00:00 9 9 The MSI measure the flow of monetary services ...
MSIM2A 2022-02-05 2022-02-05 Monetary Services Index: M2 (alternative) 1967-01-01 2013-12-01 Monthly M Billions of Dollars Bil. of $ Seasonally Adjusted SA 2014-01-17 13:16:44+00:00 8 8 The MSI measure the flow of monetary services ...
```
"""
allowed_orders = [
enums.OrderBy.search_rank,
enums.OrderBy.series_id,
enums.OrderBy.title,
enums.OrderBy.units,
enums.OrderBy.frequency,
enums.OrderBy.seasonal_adjustment,
enums.OrderBy.realtime_start,
enums.OrderBy.realtime_end,
enums.OrderBy.last_updated,
enums.OrderBy.observation_start,
enums.OrderBy.observation_end,
enums.OrderBy.popularity,
enums.OrderBy.group_popularity
]
# If the value of search_type is 'full_text' then the default value of order_by is 'search_rank'.
if search_type == enums.SearchType.full_text and order_by is None:
order_by = enums.OrderBy.search_rank
# If the value of search_type is 'series_id' then the default value of order_by is 'series_id'.
elif search_type == enums.SearchType.series_id and order_by is None:
order_by = enums.OrderBy.series_id
# If order_by is equal to 'search_rank' or 'popularity', then the default value of sort_order is 'desc'. Otherwise, the default sort order is 'asc'.
if sort_order is None and order_by in (enums.OrderBy.search_rank, enums.OrderBy.popularity):
sort_order = enums.SortOrder.desc
elif sort_order is None:
sort_order = enums.SortOrder.asc
if order_by not in allowed_orders:
raise ValueError('Variable order_by ({}) is not one of the values: {}'.format(order_by, ', '.join(map(str, allowed_orders))))
if search_type not in enums.SearchType:
raise ValueError('Variable search_type ({}) is not one of the values: {}'.format(search_type, ', '.join(map(str, enums.SearchType))))
if filter_variable is not None and filter_variable not in enums.FilterVariable:
raise ValueError('Variable filter_variable ({}) is not one of the values: {}'.format(filter_variable, ', '.join(map(str, enums.FilterVariable))))
if realtime_start is not None and realtime_start < date(1776, 7, 4):
raise ValueError('Variable realtime_start ("{}") is before min date 1776-07-04.'.format(realtime_start))
if realtime_start is not None and realtime_end is not None and realtime_start > realtime_end:
raise ValueError('The date set by variable realtime_start ("{}") can not be after the date set by variable realtime_end ("{}").'.format(realtime_start, realtime_end))
df = pd.DataFrame(
self._client.get(
'/fred/series/search',
'seriess',
limit=1000,
search_text=search_text,
search_type=search_type,
realtime_start=realtime_start,
realtime_end=realtime_end,
order_by=order_by,
sort_order=sort_order,
filter_variable=filter_variable,
filter_value=filter_value,
tag_names=tag_names,
exclude_tag_names=exclude_tag_names
)
)
date_columns = [
'realtime_start', 'realtime_end',
'observation_start', 'observation_end',
]
if not df.empty:
df[date_columns] = df[date_columns].apply(pd.to_datetime, format='%Y-%m-%d')
df.last_updated = pd.to_datetime(df.last_updated + '00', utc=True, format='%Y-%m-%d %H:%M:%S%z')
df = df.astype(dtype={
'id': 'string',
'title': 'string',
'notes': 'string',
'frequency': 'category',
'frequency_short': 'category',
'units_short': 'category',
'units': 'category',
'seasonal_adjustment': 'category',
'seasonal_adjustment_short': 'category'
}).set_index('id')
return df
def series_search_tags(
self,
series_search_text: str,
realtime_start: date = None,
realtime_end: date = None,
tag_names: List[str] = None,
tag_group_id: enums.TagGroupID = None,
tag_search_text: str = None,
order_by: enums.OrderBy = enums.OrderBy.series_count,
sort_order: enums.SortOrder = enums.SortOrder.asc
) -> pd.DataFrame:
"""
## Parameters
`series_search_text`
The words to match against economic data series.
`realtime_start`
The start of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
`realtime_end`
The end of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
`tag_names`
A semicolon delimited list of tag names to only include in the response. See the related request fred/series/search/related_tags.
`tag_group_id`
A tag group id to filter tags by type.
`tag_search_text`
The words to find matching tags with.
`order_by`
Order results by values of the specified attribute.
`sort_order`
Sort results in ascending or descending order for attribute values specified by order_by.
## Description
https://fred.stlouisfed.org/docs/api/fred/series_search_tags.html
Get the FRED tags for a series search. Optionally, filter results by tag name, tag group, or tag search. See the related request fred/series/search/related_tags.
## API Request (HTTPS GET)
https://api.stlouisfed.org/fred/series/search/tags?series_search_text=monetary+service+index&api_key=abcdefghijklmnopqrstuvwxyz123456&file_type=json
## API Response
```json
{
"realtime_start": "2013-08-14",
"realtime_end": "2013-08-14",
"order_by": "series_count",
"sort_order": "desc",
"count": 18,
"offset": 0,
"limit": 1000,
"tags": [
{
"name": "academic data",
"group_id": "gen",
"notes": "Time series data created mainly by academia to address growing demand in understanding specific concerns in the economy that are not well modeled by ordinary statistical agencies.",
"created": "2012-08-29 10:22:19-05",
"popularity": 62,
"series_count": 25
},
...
]
}
```
## Returns
`pandas.DataFrame`
## Example
```python
>>> fred = FRED(api_key='abcdefghijklmnopqrstuvwxyz123456')
>>> fred.series_search_tags(series_search_text='monetary service index').head()
group_id notes created popularity series_count
name
accounting gen 2012-02-27 16:18:19+00:00 43 2
advertisement gen 2012-08-06 19:50:07+00:00 17 2
assets gen 2012-02-27 16:18:19+00:00 64 2
boe src Bank of England 2013-02-25 22:21:19+00:00 42 2
communication gen 2012-02-27 16:18:19+00:00 22 2
```
"""
allowed_orders = [
enums.OrderBy.series_count,
enums.OrderBy.popularity,
enums.OrderBy.created,
enums.OrderBy.name,
enums.OrderBy.group_id,
]
if order_by not in allowed_orders:
raise ValueError('Variable order_by ({}) is not one of the values: {}'.format(order_by, ', '.join(map(str, allowed_orders))))
if realtime_start is not None and realtime_start < date(1776, 7, 4):
raise ValueError('Variable realtime_start ("{}") is before min date 1776-07-04.'.format(realtime_start))
if realtime_start is not None and realtime_end is not None and realtime_start > realtime_end:
raise ValueError('The date set by variable realtime_start ("{}") can not be after the date set by variable realtime_end ("{}").'.format(realtime_start, realtime_end))
df = pd.DataFrame(
self._client.get(
'/fred/series/search/tags',
'tags',
limit=1000,
series_search_text=series_search_text,
realtime_start=realtime_start,
realtime_end=realtime_end,
tag_names=tag_names,
tag_group_id=tag_group_id,
tag_search_text=tag_search_text,
order_by=order_by,
sort_order=sort_order
)
)
if not df.empty:
df.created = pd.to_datetime(df.created + '00', utc=True, format='%Y-%m-%d %H:%M:%S%z')
df = df.astype(dtype={
'name': 'string',
'notes': 'string',
'group_id': 'category'
}).set_index('name')
return df
def series_search_related_tags(
self,
series_search_text: str,
realtime_start: date = None,
realtime_end: date = None,
tag_names: List[str] = None,
exclude_tag_names: List[str] = None,
tag_group_id: enums.TagGroupID = None,
tag_search_text: str = None,
order_by: enums.OrderBy = enums.OrderBy.series_count,
sort_order: enums.SortOrder = enums.SortOrder.asc
) -> pd.DataFrame:
"""
## Parameters
`series_search_text`
The words to match against economic data series.
`realtime_start`
The start of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
`realtime_end`
The end of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
`tag_names`
A semicolon delimited list of tag names to only include in the response. See the related request fred/series/search/related_tags.
`exclude_tag_names`
List of tag names that series match none of.
`tag_group_id`
A tag group id to filter tags by type.
`tag_search_text`
The words to find matching tags with.
`order_by`
Order results by values of the specified attribute.
`sort_order`
Sort results in ascending or descending order for attribute values specified by order_by.
## Description
https://fred.stlouisfed.org/docs/api/fred/series_search_related_tags.html
Get the related FRED tags for one or more FRED tags matching a series search. Optionally, filter results by tag group or tag search.
FRED tags are attributes assigned to series.
For this request, related FRED tags are the tags assigned to series that match all tags in the tag_names parameter, no tags in the exclude_tag_names parameter,
and the search words set by the series_search_text parameter.
See the related request fred/series/search/tags.
## API Request (HTTPS GET)
https://api.stlouisfed.org/fred/series/search/related_tags?series_search_text=mortgage+rate&tag_names=30-year;frb&api_key=abcdefghijklmnopqrstuvwxyz123456&file_type=json
## API Response
```json
{
"realtime_start": "2013-08-14",
"realtime_end": "2013-08-14",
"order_by": "series_count",
"sort_order": "desc",
"count": 10,
"offset": 0,
"limit": 1000,
"tags": [
{
"name": "conventional",
"group_id": "gen",
"notes": "",
"created": "2012-02-27 10:18:19-06",
"popularity": 63,
"series_count": 3
},
...
]
}
```
## Returns
`pandas.DataFrame`
## Example
```python
>>> fred = FRED(api_key='abcdefghijklmnopqrstuvwxyz123456')
>>> fred.series_search_related_tags(series_search_text='mortgage rate', tag_names=['30-year', 'frb'], realtime_start=date(2022, 1, 5), realtime_end=date(2022, 1, 5)).head()
group_id notes created popularity series_count
name
conventional gen 2012-02-27 16:18:19+00:00 21 2
discontinued gen 2012-02-27 16:18:19+00:00 67 2
h15 rls H.15 Selected Interest Rates 2012-08-16 20:21:17+00:00 57 2
interest gen 2012-02-27 16:18:19+00:00 74 2
interest rate gen 2012-05-29 15:14:19+00:00 74 2
"""
allowed_orders = [
enums.OrderBy.series_count,
enums.OrderBy.popularity,
enums.OrderBy.created,
enums.OrderBy.name,
enums.OrderBy.group_id,
]
if order_by not in allowed_orders:
raise ValueError('Variable order_by ({}) is not one of the values: {}'.format(order_by, ', '.join(map(str, allowed_orders))))
if realtime_start is not None and realtime_start < date(1776, 7, 4):
raise ValueError('Variable realtime_start ("{}") is before min date 1776-07-04.'.format(realtime_start))
if realtime_start is not None and realtime_end is not None and realtime_start > realtime_end:
raise ValueError('The date set by variable realtime_start ("{}") can not be after the date set by variable realtime_end ("{}").'.format(realtime_start, realtime_end))
df = pd.DataFrame(
self._client.get(
'/fred/series/search/related_tags',
'tags',
limit=1000,
series_search_text=series_search_text,
realtime_start=realtime_start,
realtime_end=realtime_end,
tag_names=tag_names,
exclude_tag_names=exclude_tag_names,
tag_group_id=tag_group_id,
tag_search_text=tag_search_text,
order_by=order_by,
sort_order=sort_order
)
)
if not df.empty:
df.created = pd.to_datetime(df.created + '00', utc=True, format='%Y-%m-%d %H:%M:%S%z')
df = df.astype(dtype={
'name': 'string',
'notes': 'string',
'group_id': 'category'
}).set_index('name')
return df
def series_tags(
self,
series_id: str,
realtime_start: date = None,
realtime_end: date = None,
order_by: enums.OrderBy = enums.OrderBy.series_count,
sort_order: enums.SortOrder = enums.SortOrder.asc
) -> pd.DataFrame:
"""
## Parameters
`series_id`
The id for a series.
`realtime_start`
The start of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
`realtime_end`
The end of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
`order_by`
Order results by values of the specified attribute.
`sort_order`
Sort results in ascending or descending order for attribute values specified by order_by.
## Description
https://fred.stlouisfed.org/docs/api/fred/series_tags.html
Get the FRED tags for a series.
## API Request (HTTPS GET)
https://api.stlouisfed.org/fred/series/tags?series_id=STLFSI&api_key=abcdefghijklmnopqrstuvwxyz123456&file_type=json
## API Response
```json
{
"realtime_start": "2013-08-14",
"realtime_end": "2013-08-14",
"order_by": "series_count",
"sort_order": "desc",
"count": 8,
"offset": 0,
"limit": 1000,
"tags": [
{
"name": "nation",
"group_id": "geot",
"notes": "Country Level",
"created": "2012-02-27 10:18:19-06",
"popularity": 100,
"series_count": 105200
},
...
]
}
```
## Returns
`pandas.DataFrame`
## Example
```python
>>> fred = FRED(api_key='abcdefghijklmnopqrstuvwxyz123456')
>>> fred.series_tags(series_id='STLFSI').head()
group_id notes created popularity series_count
name
stlfsi rls St. Louis Financial Stress Index 2012-08-16 20:21:17+00:00 19 4
fsi gen Financial Stress Index 2014-08-08 19:01:37+00:00 26 26
weekly freq 2012-02-27 16:18:19+00:00 68 3548
financial gen 2012-02-27 16:18:19+00:00 55 21652
discontinued gen 2012-02-27 16:18:19+00:00 67 40386
"""
allowed_orders = [
enums.OrderBy.series_count,
enums.OrderBy.popularity,
enums.OrderBy.created,
enums.OrderBy.name,
enums.OrderBy.group_id,
]
if order_by not in allowed_orders:
raise ValueError('Variable order_by ({}) is not one of the values: {}'.format(order_by, ', '.join(map(str, allowed_orders))))
if realtime_start is not None and realtime_start < date(1776, 7, 4):
raise ValueError('Variable realtime_start ("{}") is before min date 1776-07-04.'.format(realtime_start))
if realtime_start is not None and realtime_end is not None and realtime_start > realtime_end:
raise ValueError('The date set by variable realtime_start ("{}") can not be after the date set by variable realtime_end ("{}").'.format(realtime_start, realtime_end))
df = pd.DataFrame(
self._client.get(
'/fred/series/tags',
'tags',
series_id=series_id,
realtime_start=realtime_start,
realtime_end=realtime_end,
order_by=order_by,
sort_order=sort_order
)
)
if not df.empty:
df.created = pd.to_datetime(df.created + '00', utc=True, format='%Y-%m-%d %H:%M:%S%z')
df = df.astype(dtype={
'name': 'string',
'notes': 'string',
'group_id': 'category'
}).set_index('name')
return df
def series_updates(
self,
realtime_start: date = None,
realtime_end: date = None,
filter_value: enums.FilterValue = enums.FilterValue.all,
start_time: datetime = None,
end_time: datetime = None
) -> pd.DataFrame:
"""
## Parameters
`realtime_start`
The start of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
`realtime_end`
The end of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
`filter_value`
Limit results by geographic type of economic data series; namely 'macro', 'regional', and 'all'.
`start_time`
The start of the time range by which to limit results; can filter down to the minute.
`end_time`
The end of the time range by which to limit results; can filter down to the minute.
## Description
https://fred.stlouisfed.org/docs/api/fred/series_updates.html
Get economic data series sorted by when observations were updated on the FRED server (attribute last_updated).
Results are limited to series updated within the last two weeks.
## API Request (HTTPS GET)
https://api.stlouisfed.org/fred/series/updates?api_key=abcdefghijklmnopqrstuvwxyz123456&file_type=json
## API Response
```json
{
"realtime_start": "2013-08-14",
"realtime_end": "2013-08-14",
"filter_variable": "geography",
"filter_value": "all",
"order_by": "last_updated",
"sort_order": "desc",
"count": 143535,
"offset": 0,
"limit": 100,
"seriess": [
{
"id": "PPIITM",
"realtime_start": "2013-08-14",
"realtime_end": "2013-08-14",
"title": "Producer Price Index: Intermediate Materials: Supplies & Components",
"observation_start": "1947-04-01",
"observation_end": "2013-07-01",
"frequency": "Monthly",
"frequency_short": "M",
"units": "Index 1982=100",
"units_short": "Index 1982=100",
"seasonal_adjustment": "Seasonally Adjusted",
"seasonal_adjustment_short": "SA",
"last_updated": "2013-08-14 08:36:05-05",
"popularity": 52
},
...
]
}
```
## Returns
`pandas.DataFrame`
## Example
```python
>>> fred = FRED(api_key='abcdefghijklmnopqrstuvwxyz123456')
>>> fred.series_updates(start_time=datetime(2022, 1, 15), end_time=datetime(2022, 1, 16)).head()
realtime_start realtime_end title observation_start observation_end frequency frequency_short units units_short seasonal_adjustment seasonal_adjustment_short last_updated popularity notes
id
SP500 2022-02-05 2022-02-05 S&P 500 2012-02-06 2022-02-04 Daily, Close D Index Index Not Seasonally Adjusted NSA 2022-02-05 01:11:04+00:00 85 The observations for the S&P 500 represent the...
CBBCHUSD 2022-02-05 2022-02-05 Coinbase Bitcoin Cash 2017-12-20 2022-02-04 Daily, 7-Day D U.S. Dollars U.S. $ Not Seasonally Adjusted NSA 2022-02-05 01:04:07+00:00 22 All data is as of 5 PM PST.
CBBTCUSD 2022-02-05 2022-02-05 Coinbase Bitcoin 2014-12-01 2022-02-04 Daily, 7-Day D U.S. Dollars U.S. $ Not Seasonally Adjusted NSA 2022-02-05 01:04:06+00:00 65 All data is as of 5 PM PST.
CBETHUSD 2022-02-05 2022-02-05 Coinbase Ethereum 2016-05-18 2022-02-04 Daily, 7-Day D U.S. Dollars U.S. $ Not Seasonally Adjusted NSA 2022-02-05 01:04:05+00:00 44 All data is as of 5 PM PST.
CBLTCUSD 2022-02-05 2022-02-05 Coinbase Litecoin 2016-08-17 2022-02-04 Daily, 7-Day D U.S. Dollars U.S. $ Not Seasonally Adjusted NSA 2022-02-05 01:04:03+00:00 20 All data is as of 5 PM PST.
```
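## Implementation note
FRED returns `last_updated` with a two-digit UTC offset (e.g. `"2013-08-14 08:36:05-05"`), which `%z` cannot parse directly; the client therefore pads the string with `"00"` before parsing. The same trick can be sketched with the standard library alone:

```python
from datetime import datetime, timezone, timedelta

# FRED timestamps carry a two-digit UTC offset (e.g. "-05"); %z expects
# four digits, so the string is padded with "00" before parsing.
raw = "2013-08-14 08:36:05-05"
parsed = datetime.strptime(raw + "00", "%Y-%m-%d %H:%M:%S%z")
assert parsed.utcoffset() == timedelta(hours=-5)
print(parsed.astimezone(timezone.utc).isoformat())  # 2013-08-14T13:36:05+00:00
```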
"""
if filter_value not in enums.FilterValue:
raise ValueError('Variable filter_value ({}) is not one of the values: {}'.format(filter_value, ', '.join(map(str, enums.FilterValue))))
if start_time is not None and end_time is None:
raise ValueError('end_time is required if start_time is set')
if end_time is not None and start_time is None:
raise ValueError('start_time is required if end_time is set')
if start_time is not None and end_time is not None and start_time >= end_time:
raise ValueError('end_time must be greater than start_time')
if realtime_start is not None and realtime_start < date(1776, 7, 4):
raise ValueError('Variable realtime_start ("{}") is before min date 1776-07-04.'.format(realtime_start))
if realtime_start is not None and realtime_end is not None and realtime_start > realtime_end:
raise ValueError('The date set by variable realtime_start ("{}") can not be after the date set by variable realtime_end ("{}").'.format(realtime_start, realtime_end))
df = pd.DataFrame(
self._client.get(
'/fred/series/updates',
'seriess',
limit=1000,
realtime_start=realtime_start,
realtime_end=realtime_end,
filter_value=filter_value,
start_time=start_time,
end_time=end_time
)
)
date_columns = [
'realtime_start', 'realtime_end',
'observation_start', 'observation_end'
]
if not df.empty:
df[date_columns] = df[date_columns].apply(pd.to_datetime, format='%Y-%m-%d')
df.last_updated = pd.to_datetime(df.last_updated + '00', utc=True, format='%Y-%m-%d %H:%M:%S%z')
df = df.astype(dtype={
'id': 'string',
'notes': 'string',
'title': 'string',
'seasonal_adjustment': 'category',
'seasonal_adjustment_short': 'category',
'units': 'category',
'units_short': 'category',
'frequency': 'category',
'frequency_short': 'category',
}).set_index('id')
return df
def series_vintagedates(
self,
series_id: str,
realtime_start: date = date(1776, 7, 4),
realtime_end: date = date(9999, 12, 31),
sort_order: enums.SortOrder = enums.SortOrder.asc
) -> pd.Series:
"""
## Parameters
`series_id`
The id for a series.
`realtime_start`
The start of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
`realtime_end`
The end of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
`sort_order`
Sort results in ascending or descending vintage date order.
## Description
https://fred.stlouisfed.org/docs/api/fred/series_vintagedates.html
Get the dates in history when a series' data values were revised or new data values were released.
Vintage dates are the release dates for a series excluding release dates when the data for the series did not change.
## API Request (HTTPS GET)
https://api.stlouisfed.org/fred/series/vintagedates?series_id=GNPCA&api_key=abcdefghijklmnopqrstuvwxyz123456&file_type=json
## API Response
```json
{
"realtime_start": "1776-07-04",
"realtime_end": "9999-12-31",
"order_by": "vintage_date",
"sort_order": "asc",
"count": 162,
"offset": 0,
"limit": 10000,
"vintage_dates": [
"1958-12-21",
"1959-02-19",
...
]
}
```
## Returns
`pandas.Series`
## Example
```python
>>> fred = FRED(api_key='abcdefghijklmnopqrstuvwxyz123456')
>>> fred.series_vintagedates(series_id='GNPCA').head()
0 1958-12-21
1 1959-02-19
2 1959-07-19
3 1960-02-16
4 1960-07-22
```
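## Implementation note
The endpoint returns vintage dates as plain ISO `YYYY-MM-DD` strings, which the method parses with an explicit `'%Y-%m-%d'` format; mirrored here with the standard library:

```python
from datetime import datetime

# Vintage dates arrive as ISO strings and are parsed with an explicit format.
vintage_dates = ["1958-12-21", "1959-02-19"]
parsed = [datetime.strptime(v, "%Y-%m-%d").date() for v in vintage_dates]
assert parsed[0].isoformat() == "1958-12-21"
```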
"""
if realtime_start is not None and realtime_start < date(1776, 7, 4):
raise ValueError('Variable realtime_start ("{}") is before min date 1776-07-04.'.format(realtime_start))
if realtime_start > realtime_end:
raise ValueError('The date set by variable realtime_start ("{}") can not be after the date set by variable realtime_end ("{}").'.format(realtime_start, realtime_end))
return pd.to_datetime(
pd.Series(
self._client.get(
'/fred/series/vintagedates',
'vintage_dates',
limit=10000,
series_id=series_id,
realtime_start=realtime_start,
realtime_end=realtime_end,
sort_order=sort_order
)
), format='%Y-%m-%d'
)
"""
Sources
https://fred.stlouisfed.org/sources
"""
def sources(
self,
realtime_start: date = None,
realtime_end: date = None,
order_by: enums.OrderBy = enums.OrderBy.source_id,
sort_order: enums.SortOrder = enums.SortOrder.asc
) -> pd.DataFrame:
"""
## Parameters
`realtime_start`
The start of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
`realtime_end`
The end of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
`order_by`
Order results by values of the specified attribute.
`sort_order`
Sort results in ascending or descending order for attribute values specified by order_by.
## Description
https://fred.stlouisfed.org/docs/api/fred/sources.html
Get all sources of economic data.
## API Request (HTTPS GET)
https://api.stlouisfed.org/fred/sources?api_key=abcdefghijklmnopqrstuvwxyz123456&file_type=json
## API Response
```json
{
"realtime_start": "2013-08-14",
"realtime_end": "2013-08-14",
"order_by": "source_id",
"sort_order": "asc",
"count": 58,
"offset": 0,
"limit": 1000,
"sources": [
{
"id": 1,
"realtime_start": "2013-08-14",
"realtime_end": "2013-08-14",
"name": "Board of Governors of the Federal Reserve System",
"link": "http://www.federalreserve.gov/"
},
...
]
}
```
## Returns
`pandas.DataFrame`
## Example
```python
>>> fred = FRED(api_key='abcdefghijklmnopqrstuvwxyz123456')
>>> fred.sources()
realtime_start realtime_end name link notes
id
1 2022-02-05 2022-02-05 Board of Governors of the Federal Reserve Syst... http://www.federalreserve.gov/ <NA>
3 2022-02-05 2022-02-05 Federal Reserve Bank of Philadelphia https://www.philadelphiafed.org/ <NA>
4 2022-02-05 2022-02-05 Federal Reserve Bank of St. Louis http://www.stlouisfed.org/ <NA>
6 2022-02-05 2022-02-05 Federal Financial Institutions Examination Cou... http://www.ffiec.gov/ <NA>
11 2022-02-05 2022-02-05 Dow Jones & Company http://www.dowjones.com <NA>
```
"""
allowed_orders = [
enums.OrderBy.source_id,
enums.OrderBy.name,
enums.OrderBy.realtime_start,
enums.OrderBy.realtime_end
]
if order_by not in allowed_orders:
raise ValueError('Variable order_by ({}) is not one of the values: {}'.format(order_by, ', '.join(map(str, allowed_orders))))
if realtime_start is not None and realtime_start < date(1776, 7, 4):
raise ValueError('Variable realtime_start ("{}") is before min date 1776-07-04.'.format(realtime_start))
if realtime_start is not None and realtime_end is not None and realtime_start > realtime_end:
raise ValueError('The date set by variable realtime_start ("{}") can not be after the date set by variable realtime_end ("{}").'.format(realtime_start, realtime_end))
df = pd.DataFrame(
self._client.get(
'/fred/sources',
'sources',
limit=1000,
realtime_start=realtime_start,
realtime_end=realtime_end,
order_by=order_by,
sort_order=sort_order
)
)
date_columns = ['realtime_start', 'realtime_end']
if not df.empty:
df[date_columns] = df[date_columns].apply(pd.to_datetime, format='%Y-%m-%d')
df = df.astype(dtype={
'name': 'string',
'notes': 'string',
'link': 'string'
}).set_index('id')
return df
def source(self, source_id: int, realtime_start: date = None, realtime_end: date = None) -> models.Source:
"""
## Parameters
`source_id`
The id for a source.
`realtime_start`
The start of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
`realtime_end`
The end of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
## Description
https://fred.stlouisfed.org/docs/api/fred/source.html
Get a source of economic data.
## API Request (HTTPS GET)
https://api.stlouisfed.org/fred/source?source_id=1&api_key=abcdefghijklmnopqrstuvwxyz123456&file_type=json
## API Response
```json
{
"realtime_start": "2013-08-14",
"realtime_end": "2013-08-14",
"sources": [
{
"id": 1,
"realtime_start": "2013-08-14",
"realtime_end": "2013-08-14",
"name": "Board of Governors of the Federal Reserve System",
"link": "http://www.federalreserve.gov/"
}
]
}
```
## Returns
`pystlouisfed.models.Source`
## Example
```python
>>> fred = FRED(api_key='abcdefghijklmnopqrstuvwxyz123456')
>>> fred.source(source_id=1)
Source(id=1, realtime_start='2022-01-14', realtime_end='2022-01-14', name='Board of Governors of the Federal Reserve System (US)', link='http://www.federalreserve.gov/')
```
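## Implementation note
The method builds the result with `models.Source(**data[0])`, unpacking the first element of the `sources` array shown in the API response above. A minimal sketch of that pattern (a hypothetical stand-in dataclass, not the library's actual `models.Source` definition):

```python
from dataclasses import dataclass

# Hypothetical minimal stand-in for pystlouisfed.models.Source; the real
# class may define more fields. Demonstrates the **data[0] unpacking.
@dataclass
class Source:
    id: int
    realtime_start: str
    realtime_end: str
    name: str
    link: str

data = [{
    "id": 1,
    "realtime_start": "2013-08-14",
    "realtime_end": "2013-08-14",
    "name": "Board of Governors of the Federal Reserve System",
    "link": "http://www.federalreserve.gov/",
}]
source = Source(**data[0])
assert source.id == 1
```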
"""
if realtime_start is not None and realtime_start < date(1776, 7, 4):
raise ValueError('Variable realtime_start ("{}") is before min date 1776-07-04.'.format(realtime_start))
if realtime_start is not None and realtime_end is not None and realtime_start > realtime_end:
raise ValueError('The date set by variable realtime_start ("{}") can not be after the date set by variable realtime_end ("{}").'.format(realtime_start, realtime_end))
data = self._client.get(
'/fred/source',
'sources',
source_id=source_id,
realtime_start=realtime_start,
realtime_end=realtime_end
)
return models.Source(**data[0])
def source_releases(
self,
source_id: int,
realtime_start: date = None,
realtime_end: date = None,
order_by: enums.OrderBy = enums.OrderBy.release_id,
sort_order: enums.SortOrder = enums.SortOrder.asc
) -> pd.DataFrame:
"""
## Parameters
`source_id`
The id for a source.
`realtime_start`
The start of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
`realtime_end`
The end of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
`order_by`
Order results by values of the specified attribute.
`sort_order`
Sort results in ascending or descending order for attribute values specified by order_by.
## Description
https://fred.stlouisfed.org/docs/api/fred/source_releases.html
Get the releases for a source.
## API Request (HTTPS GET)
https://api.stlouisfed.org/fred/source/releases?source_id=1&api_key=abcdefghijklmnopqrstuvwxyz123456&file_type=json
## API Response
```json
{
"realtime_start": "2013-08-14",
"realtime_end": "2013-08-14",
"order_by": "release_id",
"sort_order": "asc",
"count": 26,
"offset": 0,
"limit": 1000,
"releases": [
{
"id": 13,
"realtime_start": "2013-08-14",
"realtime_end": "2013-08-14",
"name": "G.17 Industrial Production and Capacity Utilization",
"press_release": true,
"link": "http://www.federalreserve.gov/releases/g17/"
},
...
]
}
```
## Returns
`pandas.DataFrame`
## Example
```python
>>> fred = FRED(api_key='abcdefghijklmnopqrstuvwxyz123456')
>>> fred.source_releases(source_id=1).head()
realtime_start realtime_end name press_release link notes
id
13 2022-02-05 2022-02-05 G.17 Industrial Production and Capacity Utiliz... True http://www.federalreserve.gov/releases/g17/ <NA>
14 2022-02-05 2022-02-05 G.19 Consumer Credit True http://www.federalreserve.gov/releases/g19/ <NA>
15 2022-02-05 2022-02-05 G.5 Foreign Exchange Rates True http://www.federalreserve.gov/releases/g5/ <NA>
17 2022-02-05 2022-02-05 H.10 Foreign Exchange Rates True http://www.federalreserve.gov/releases/h10/ <NA>
18 2022-02-05 2022-02-05 H.15 Selected Interest Rates True http://www.federalreserve.gov/releases/h15/ <NA>
```
"""
allowed_orders = [
enums.OrderBy.release_id,
enums.OrderBy.name,
enums.OrderBy.press_release,
enums.OrderBy.realtime_start,
enums.OrderBy.realtime_end
]
if order_by not in allowed_orders:
raise ValueError('Variable order_by ({}) is not one of the values: {}'.format(order_by, ', '.join(map(str, allowed_orders))))
if realtime_start is not None and realtime_start < date(1776, 7, 4):
raise ValueError('Variable realtime_start ("{}") is before min date 1776-07-04.'.format(realtime_start))
if realtime_start is not None and realtime_end is not None and realtime_start > realtime_end:
raise ValueError('The date set by variable realtime_start ("{}") can not be after the date set by variable realtime_end ("{}").'.format(realtime_start, realtime_end))
df = pd.DataFrame(
self._client.get(
'/fred/source/releases',
'releases',
limit=1000,
source_id=source_id,
realtime_start=realtime_start,
realtime_end=realtime_end,
order_by=order_by,
sort_order=sort_order
)
)
date_columns = ['realtime_start', 'realtime_end']
if not df.empty:
df[date_columns] = df[date_columns].apply(pd.to_datetime, format='%Y-%m-%d')
df = df.astype(dtype={
'name': 'string',
'link': 'string',
'notes': 'string',
'press_release': 'bool'
}).set_index('id')
return df
"""
Tags
https://fred.stlouisfed.org/tags
"""
def tags(
self,
realtime_start: date = None,
realtime_end: date = None,
tag_names: List[str] = None,
tag_group_id: enums.TagGroupID = None,
search_text: str = None,
order_by: enums.OrderBy = enums.OrderBy.series_count,
sort_order: enums.SortOrder = enums.SortOrder.asc
) -> pd.DataFrame:
"""
## Parameters
`realtime_start`
The start of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
`realtime_end`
The end of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
`tag_names`
List of tag names that series match all of.
`tag_group_id`
A tag group id to filter tags by type.
`search_text`
The words to find matching tags with.
`order_by`
Order results by values of the specified attribute.
`sort_order`
Sort results in ascending or descending order for attribute values specified by order_by.
## Description
https://fred.stlouisfed.org/docs/api/fred/tags.html
Get FRED tags. Optionally, filter results by tag name, tag group, or search. FRED tags are attributes assigned to series. See the related request fred/related_tags.
## API Request (HTTPS GET)
https://api.stlouisfed.org/fred/tags?api_key=abcdefghijklmnopqrstuvwxyz123456&file_type=json
## API Response
```json
{
"realtime_start": "2013-08-14",
"realtime_end": "2013-08-14",
"order_by": "series_count",
"sort_order": "desc",
"count": 4794,
"offset": 0,
"limit": 1000,
"tags": [
{
"name": "nation",
"group_id": "geot",
"notes": "Country Level",
"created": "2012-02-27 10:18:19-06",
"popularity": 100,
"series_count": 105200
},
...
]
}
```
## Returns
`pandas.DataFrame`
## Example
```python
>>> fred = FRED(api_key='abcdefghijklmnopqrstuvwxyz123456')
>>> fred.tags().head()
group_id notes created popularity series_count
name
14 years + gen 2012-08-06 19:40:56+00:00 -6 2
2-month + gen 2012-08-06 19:34:05+00:00 -62 2
2-week gen 2012-05-25 16:29:34+00:00 -6 2
30 to 34 years gen 2013-10-10 21:13:04+00:00 -13 2
3-family + gen 2012-08-06 19:48:11+00:00 -49 2
```
"""
allowed_orders = [
enums.OrderBy.series_count,
enums.OrderBy.popularity,
enums.OrderBy.created,
enums.OrderBy.name,
enums.OrderBy.group_id
]
if order_by not in allowed_orders:
raise ValueError('Variable order_by ({}) is not one of the values: {}'.format(order_by, ', '.join(map(str, allowed_orders))))
if tag_group_id is not None and tag_group_id not in enums.TagGroupID:
raise ValueError('Variable tag_group_id ({}) is not one of the values: {}'.format(tag_group_id, ', '.join(map(str, enums.TagGroupID))))
if realtime_start is not None and realtime_start < date(1776, 7, 4):
raise ValueError('Variable realtime_start ("{}") is before min date 1776-07-04.'.format(realtime_start))
if realtime_start is not None and realtime_end is not None and realtime_start > realtime_end:
raise ValueError('The date set by variable realtime_start ("{}") can not be after the date set by variable realtime_end ("{}").'.format(realtime_start, realtime_end))
df = pd.DataFrame(
self._client.get(
'/fred/tags',
'tags',
limit=1000,
realtime_start=realtime_start,
realtime_end=realtime_end,
tag_names=tag_names,
tag_group_id=tag_group_id,
search_text=search_text,
order_by=order_by,
sort_order=sort_order
)
)
if not df.empty:
df.created = pd.to_datetime(df.created + '00', utc=True, format='%Y-%m-%d %H:%M:%S%z')
df = df.astype(dtype={
'name': 'string',
'notes': 'string',
'group_id': 'category'
}).set_index('name')
return df
def related_tags(
self,
realtime_start: date = None,
realtime_end: date = None,
tag_names: List[str] = None,
exclude_tag_names: List[str] = None,
tag_group_id: enums.TagGroupID = None,
search_text: str = None,
order_by: enums.OrderBy = enums.OrderBy.series_count,
sort_order: enums.SortOrder = enums.SortOrder.asc
) -> pd.DataFrame:
"""
## Parameters
`realtime_start`
The start of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
`realtime_end`
The end of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
`tag_names`
List of tag names that series match all of.
`exclude_tag_names`
List of tag names that series match none of.
`tag_group_id`
A tag group id to filter tags by type.
`search_text`
The words to find matching tags with.
`order_by`
Order results by values of the specified attribute.
`sort_order`
Sort results in ascending or descending order for attribute values specified by order_by.
## Description
https://fred.stlouisfed.org/docs/api/fred/related_tags.html
Get the related FRED tags for one or more FRED tags.
Optionally, filter results by tag group or search.
FRED tags are attributes assigned to series.
Related FRED tags are the tags assigned to series that match all tags in the tag_names parameter and no tags in the exclude_tag_names parameter.
See the related request fred/tags.
## API Request (HTTPS GET)
https://api.stlouisfed.org/fred/related_tags?tag_names=monetary+aggregates;weekly&api_key=abcdefghijklmnopqrstuvwxyz123456&file_type=json
## API Response
```json
{
"realtime_start": "2013-08-14",
"realtime_end": "2013-08-14",
"order_by": "series_count",
"sort_order": "desc",
"count": 13,
"offset": 0,
"limit": 1000,
"tags": [
{
"name": "nation",
"group_id": "geot",
"notes": "Country Level",
"created": "2012-02-27 10:18:19-06",
"popularity": 100,
"series_count": 12
},
...
]
}
```
## Returns
`pandas.DataFrame`
## Example
```python
>>> fred = FRED(api_key='abcdefghijklmnopqrstuvwxyz123456')
>>> fred.related_tags(tag_names=['monetary aggregates', 'weekly']).head()
group_id notes created popularity series_count
name
copyrighted: citation required cc <NA> 2018-12-18 05:33:13+00:00 88 2
currency gen 2012-02-27 16:18:19+00:00 62 2
frb stl src St. Louis Fed 2012-02-27 16:18:19+00:00 68 2
m1 gen M1 Money Stock 2012-02-27 16:18:19+00:00 47 2
m3 gen M3 Money Stock 2012-02-27 16:18:19+00:00 39 2
```
"""
allowed_orders = [
enums.OrderBy.series_count,
enums.OrderBy.popularity,
enums.OrderBy.created,
enums.OrderBy.name,
enums.OrderBy.group_id
]
allowed_tag_group_ids = [
enums.TagGroupID.frequency,
enums.TagGroupID.general_or_concept,
enums.TagGroupID.geography,
enums.TagGroupID.geography_type,
enums.TagGroupID.release,
enums.TagGroupID.seasonal_adjustment,
enums.TagGroupID.source
]
if order_by not in allowed_orders:
raise ValueError('Variable order_by ({}) is not one of the values: {}'.format(order_by, ', '.join(map(str, allowed_orders))))
if tag_group_id is not None and tag_group_id not in allowed_tag_group_ids:
raise ValueError('Variable tag_group_id ({}) is not one of the values: {}'.format(tag_group_id, ', '.join(map(str, allowed_tag_group_ids))))
if realtime_start is not None and realtime_start < date(1776, 7, 4):
raise ValueError('Variable realtime_start ("{}") is before min date 1776-07-04.'.format(realtime_start))
if realtime_start is not None and realtime_end is not None and realtime_start > realtime_end:
raise ValueError('The date set by variable realtime_start ("{}") can not be after the date set by variable realtime_end ("{}").'.format(realtime_start, realtime_end))
df = pd.DataFrame(
self._client.get(
'/fred/related_tags',
'tags',
limit=1000,
realtime_start=realtime_start,
realtime_end=realtime_end,
tag_names=tag_names,
exclude_tag_names=exclude_tag_names,
tag_group_id=tag_group_id,
search_text=search_text,
order_by=order_by,
sort_order=sort_order
)
)
if not df.empty:
df.created = pd.to_datetime(df.created + '00', utc=True, format='%Y-%m-%d %H:%M:%S%z')
df = df.astype(dtype={
'name': 'string',
'notes': 'string',
'group_id': 'category'
}).set_index('name')
return df
def tags_series(
self,
tag_names: List[str] = None,
exclude_tag_names: List[str] = None,
realtime_start: date = None,
realtime_end: date = None,
order_by: enums.OrderBy = enums.OrderBy.series_id,
sort_order: enums.SortOrder = enums.SortOrder.asc
) -> pd.DataFrame:
"""
## Parameters
`tag_names`
List of tag names that series match all of.
`exclude_tag_names`
List of tag names that series match none of.
`realtime_start`
The start of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
`realtime_end`
The end of the real-time period. For more information, see [Real-Time Periods](https://fred.stlouisfed.org/docs/api/fred/realtime_period.html).
`order_by`
Order results by values of the specified attribute.
`sort_order`
Sort results in ascending or descending order for attribute values specified by order_by.
## Description
https://fred.stlouisfed.org/docs/api/fred/tags_series.html
Get the series matching all tags in the tag_names parameter and no tags in the exclude_tag_names parameter.
## API Request (HTTPS GET)
https://api.stlouisfed.org/fred/tags/series?tag_names=slovenia;food;oecd&api_key=abcdefghijklmnopqrstuvwxyz123456&file_type=json
## API Response
```json
{
"realtime_start": "2017-08-01",
"realtime_end": "2017-08-01",
"order_by": "series_id",
"sort_order": "asc",
"count": 18,
"offset": 0,
"limit": 1000,
"seriess": [
{
"id": "CPGDFD02SIA657N",
"realtime_start": "2017-08-01",
"realtime_end": "2017-08-01",
"title": "Consumer Price Index: Total Food Excluding Restaurants for Slovenia\u00a9",
"observation_start": "1996-01-01",
"observation_end": "2016-01-01",
"frequency": "Annual",
"frequency_short": "A",
"units": "Growth Rate Previous Period",
"units_short": "Growth Rate Previous Period",
"seasonal_adjustment": "Not Seasonally Adjusted",
"seasonal_adjustment_short": "NSA",
"last_updated": "2017-04-20 00:48:35-05",
"popularity": 0,
"group_popularity": 0,
"notes": "OECD descriptor ID: CPGDFD02\\nOECD unit ID: GP\\nOECD country ID: SVN\\n\\nAll OECD data should be cited as follows: OECD, \\"Main Economic Indicators - complete database\\", Main Economic Indicators (database),http://dx.doi.org/10.1787/data-00052-en (Accessed on date)\\nCopyright, 2016, OECD. Reprinted with permission."
},
...
]
}
```
## Returns
`pandas.DataFrame`
## Example
```python
>>> fred = FRED(api_key='abcdefghijklmnopqrstuvwxyz123456')
>>> fred.tags_series(tag_names=['food', 'oecd']).head()
realtime_start realtime_end title observation_start observation_end frequency frequency_short units units_short seasonal_adjustment seasonal_adjustment_short last_updated popularity group_popularity notes
id
AUSCPICORAINMEI 2022-02-05 2022-02-05 Consumer Price Index: All Items Excluding Food... 1972-01-01 2020-01-01 Annual A Index 2015=100 Index 2015=100 Not Seasonally Adjusted NSA 2021-02-17 18:27:39+00:00 1 12 Copyright, 2016, OECD. Reprinted with permissi...
AUSCPICORQINMEI 2022-02-05 2022-02-05 Consumer Price Index: All Items Excluding Food... 1971-04-01 2021-07-01 Quarterly Q Index 2015=100 Index 2015=100 Not Seasonally Adjusted NSA 2021-12-14 21:57:04+00:00 12 12 Copyright, 2016, OECD. Reprinted with permissi...
AUSCPIFODAINMEI 2022-02-05 2022-02-05 Consumer Price Index: Food for Australia 1977-01-01 2017-01-01 Annual A Index 2010=100 Index 2010=100 Not Seasonally Adjusted NSA 2018-03-09 21:12:09+00:00 1 2 Copyright, 2016, OECD. Reprinted with permissi...
AUSCPIFODQINMEI 2022-02-05 2022-02-05 Consumer Price Index: Food for Australia 1976-07-01 2018-01-01 Quarterly Q Index 2010=100 Index 2010=100 Not Seasonally Adjusted NSA 2018-04-24 19:51:04+00:00 2 2 Copyright, 2016, OECD. Reprinted with permissi...
AUTCPICORAINMEI 2022-02-05 2022-02-05 Consumer Price Index: All Items Excluding Food... 1966-01-01 2020-01-01 Annual A Index 2015=100 Index 2015=100 Not Seasonally Adjusted NSA 2021-03-16 22:37:57+00:00 0 1 Copyright, 2016, OECD. Reprinted with permissi...
```
"""
allowed_orders = [
enums.OrderBy.series_id,
enums.OrderBy.title,
enums.OrderBy.units,
enums.OrderBy.frequency,
enums.OrderBy.seasonal_adjustment,
enums.OrderBy.realtime_start,
enums.OrderBy.realtime_end,
enums.OrderBy.last_updated,
enums.OrderBy.observation_start,
enums.OrderBy.observation_end,
enums.OrderBy.popularity,
enums.OrderBy.group_popularity,
]
if order_by not in allowed_orders:
raise ValueError('Variable order_by ({}) is not one of the values: {}'.format(order_by, ', '.join(map(str, allowed_orders))))
if realtime_start is not None and realtime_start < date(1776, 7, 4):
raise ValueError('Variable realtime_start ("{}") is before min date 1776-07-04.'.format(realtime_start))
if realtime_start is not None and realtime_end is not None and realtime_start > realtime_end:
raise ValueError('The date set by variable realtime_start ("{}") can not be after the date set by variable realtime_end ("{}").'.format(realtime_start, realtime_end))
df = pd.DataFrame(
self._client.get(
'/fred/tags/series',
'seriess',
limit=1000,
tag_names=tag_names,
exclude_tag_names=exclude_tag_names,
realtime_start=realtime_start,
realtime_end=realtime_end,
order_by=order_by,
sort_order=sort_order
)
)
date_columns = [
'realtime_start', 'realtime_end',
'observation_start', 'observation_end',
]
if not df.empty:
df[date_columns] = df[date_columns].apply(pd.to_datetime, format='%Y-%m-%d')
df.last_updated = pd.to_datetime(df.last_updated + '00', utc=True, format='%Y-%m-%d %H:%M:%S%z')
df = df.astype(dtype={
'id': 'string',
'notes': 'string',
'title': 'string',
'frequency': 'category',
'frequency_short': 'category',
'units_short': 'category',
'units': 'category',
'seasonal_adjustment': 'category',
'seasonal_adjustment_short': 'category'
}).set_index('id')
return df
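The matching rule described at the top of this method's docstring (a series is returned when it carries every tag in `tag_names` and no tag in `exclude_tag_names`) reduces to plain set logic. A minimal sketch of that rule; the tag assignments below are hypothetical, not real FRED data:

```python
def matches(series_tags, tag_names, exclude_tag_names=()):
    # a series matches when it has all required tags and none of the excluded ones
    tags = set(series_tags)
    return set(tag_names) <= tags and not (set(exclude_tag_names) & tags)

# hypothetical tag set, for illustration only
series_tags = {"CPGDFD02SIA657N": ["slovenia", "food", "oecd", "annual"]}

print(matches(series_tags["CPGDFD02SIA657N"], ["slovenia", "food", "oecd"]))    # True
print(matches(series_tags["CPGDFD02SIA657N"], ["slovenia", "food"], ["oecd"]))  # False
```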
class ALFRED(FRED):
    """
    ALFRED stands for Archival Federal Reserve Economic Data.
    ALFRED archives FRED data by adding the real-time period when values were originally released and later revised. For instance, on February 2, 1990, the US Bureau of Labor Statistics reported the US unemployment rate for the month of January 1990 as 5.3 percent.
    Over six years later, on March 8, 1996, the US unemployment rate for the same month, January 1990, was revised to 5.4 percent.

    https://alfred.stlouisfed.org/
    https://fred.stlouisfed.org/docs/api/fred/alfred.html
    """


class GeoFRED:
    """
    The GeoFRED API is a web service that allows developers to write programs and build applications to harvest data and shape files found on the GeoFRED website hosted by the Economic Research Division of the Federal Reserve Bank of St. Louis.

    https://geofred.stlouisfed.org/
    https://geofred.stlouisfed.org/docs/api/geofred/
    """

    EMPTY_VALUE = '.'
    """
    Placeholder string the API uses for a missing observation.
    """
    def __init__(self, api_key: str, ratelimiter_enabled: bool = False, ratelimiter_max_calls: int = 2, ratelimiter_period: int = 1, request_params: dict = None):
        """
        Parameters
        ----------
        api_key: str
            32-character alphanumeric lowercase string
        ratelimiter_enabled: bool
        ratelimiter_max_calls: int
        ratelimiter_period: int
        request_params: dict
            HTTP GET method parameters, see https://docs.python-requests.org/en/latest/api/#requests.request
        """
        if api_key is None or len(api_key) != 32:
            raise ValueError('Variable api_key must be a 32-character alphanumeric string.')

        self._client = Client(
            key=api_key.lower(),
            ratelimiter_enabled=ratelimiter_enabled,
            ratelimiter_max_calls=ratelimiter_max_calls,
            ratelimiter_period=ratelimiter_period,
            request_params=request_params
        )

    @property
    def rate_limit(self) -> int:
        return self._client.rate_limit

    @property
    def rate_limit_remaining(self) -> int:
        return self._client.rate_limit_remaining
    def shapes(self, shape: enums.ShapeType) -> List[models.Shape]:
        """
https://geofred.stlouisfed.org/docs/api/geofred/shapes.html
## Description
This request returns shape files from GeoFRED in Well-known text (WKT) format.
## API Request (HTTPS GET)
https://api.stlouisfed.org/geofred/shapes/file?shape=bea&api_key=abcdefghijklmnopqrstuvwxyz123456
## API Response
```json
{
"bea": [
{
"name": "Far West",
"code": "8",
"centroid": "POINT(-142.362948378432 57.1478734829085)",
"geometry": "MULTIPOLYGON(((-155.778234 20.245743,-155.772734 ...)))",
"report name": "North Adams, MA-VT"
}
...
]
}
```
## Returns
`List[pystlouisfed.models.Shape]`
## Example
```python
import matplotlib.pyplot as plt
from descartes import PolygonPatch
from pystlouisfed.client import GeoFRED, ShapeType
plt.figure()
ax = plt.axes()
geo_fred = GeoFRED(api_key='abcdefghijklmnopqrstuvwxyz123456')
for country_shape in geo_fred.shapes(shape=ShapeType.country):
ax.add_patch(PolygonPatch(country_shape.geometry, ec='#999999', fc='#6699cc', alpha=0.5, zorder=2))
ax.axis('scaled')
plt.show()
```
.. image:: geofred_shape_map.png
"""
wkt_list = self._client.get(
'/geofred/shapes/file',
shape.value,
shape=shape
)
# replace whitespaces in dict keys
for wkt in wkt_list:
for key in list(wkt.keys()):
wkt[key.replace(' ', '_')] = wkt.pop(key)
return list(map(lambda wkt: models.Shape(**wkt), wkt_list))
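The key-renaming loop above exists because the shapes endpoint returns keys such as `"report name"` that are not valid Python identifiers, and the dicts are then splatted into `models.Shape(**wkt)` as keyword arguments. The same transform as a standalone sketch:

```python
def normalize_keys(record):
    # replace spaces so every key can be passed as a keyword argument
    return {key.replace(' ', '_'): value for key, value in record.items()}

raw = {"name": "Far West", "report name": "North Adams, MA-VT"}
print(normalize_keys(raw))  # {'name': 'Far West', 'report_name': 'North Adams, MA-VT'}
```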
    def series_group(self, series_id: str) -> models.SeriesGroup:
        """
https://geofred.stlouisfed.org/docs/api/geofred/series_group.html
## Description
This request returns the meta information needed to make requests for GeoFRED data.
Minimum and maximum date are also supplied for the data range available.
## API Request (HTTPS GET)
https://api.stlouisfed.org/geofred/series/group?series_id=SMU56000000500000001a&api_key=abcdefghijklmnopqrstuvwxyz123456
## API Response
```json
{
"series_group": [
{
"title": "All Employees: Total Private",
"geom_type": "state",
"group_id": "192",
"season": "NSA",
"units": "Thousands of Persons",
"frequency": "m",
"min_start_date": "1990-01-01",
"max_start_date": "2015-06-01"
}
]
}
```
## Returns
`models.SeriesGroup`
## Example
```python
>>> geo_fred = GeoFRED(api_key='abcdefghijklmnopqrstuvwxyz123456')
>>> print(geo_fred.series_group(series_id='SMU56000000500000001a'))
SeriesGroup(title='All Employees: Total Private', region_type='state', series_group='1223', season='NSA', units='Thousands of Persons', frequency='a', min_date=datetime.date(1990, 1, 1), max_date=datetime.date(2020, 1, 1))
```
"""
data = self._client.get(
'/geofred/series/group',
'series_group',
series_id=series_id
)
return models.SeriesGroup(**data[0])
    def series_data(self, series_id: str, date: date = None, start_date: date = None) -> pd.DataFrame:
        """
## Parameters
`series_id`
The FRED series_id you want to request GeoFRED data for. Not all series that are in FRED have geographical data.
`date`
The date you want to request series group data from.
`start_date`
The start date you want to request series group data from. This allows you to pull a range of data.
## Description
https://geofred.stlouisfed.org/docs/api/geofred/series_data.html
This request returns a cross section of regional data for a specified release date.
If no date is specified, the most recent data available are returned.
## API Request (HTTPS GET)
https://api.stlouisfed.org/geofred/series/data?series_id=WIPCPI&api_key=abcdefghijklmnopqrstuvwxyz123456&date=2012-01-01
## API Response
```json
{
"meta": {
"title": "Per Capita Personal Income by State (Dollars)",
"region": "state",
"seasonality": "Not Seasonally Adjusted",
"units": "Dollars",
"frequency": "Annual",
"date": "2012-01-01",
"data": {
"2020": [
{
"region": "Alabama",
"code": "01",
"value": "46479.0",
"series_id": "ALPCPI"
},
...
]
}
}
```
## Returns
`pandas.DataFrame`
## Example (`pandas.DataFrame`)
```python
>>> geo_fred = GeoFRED(api_key='abcdefghijklmnopqrstuvwxyz123456')
>>> geo_fred.series_data(series_id='WIPCPI')
region code value series_id year
0 Alabama 01 46479.0 ALPCPI 2020
1 Alaska 02 63502.0 AKPCPI 2020
2 Arizona 04 49648.0 AZPCPI 2020
3 Arkansas 05 47235.0 ARPCPI 2020
4 California 06 70192.0 CAPCPI 2020
```
"""
years = self._client.get(
'/geofred/series/data',
'meta.data',
series_id=series_id,
date=date,
start_date=start_date
)
df = pd.DataFrame(
self._add_years(years[0])
)
if not df.empty:
df.value = df.value.replace(self.EMPTY_VALUE, np.nan)
df = df.astype(dtype={
'value': 'float64',
'year': 'int64'
})
return df
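The `replace(self.EMPTY_VALUE, np.nan)` step exists because FRED-style responses encode a missing observation as the string `'.'`; swapping it for `NaN` lets the `value` column be cast to `float64`. The same conversion as a pandas-free sketch:

```python
import math

EMPTY_VALUE = '.'

def parse_value(raw):
    # '.' marks a missing observation in FRED-style responses
    return float('nan') if raw == EMPTY_VALUE else float(raw)

values = [parse_value(v) for v in ['46479.0', '.', '63502.0']]
print(values[0], math.isnan(values[1]), values[2])  # 46479.0 True 63502.0
```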
    def regional_data(
            self,
            series_group: str,
            region_type: enums.RegionType,
            date: date,
            season: enums.Seasonality,
            units: str = 'Dollars',  # Documentation is missing
            start_date: date = None,
            frequency: enums.Frequency = None,
            transformation: enums.Unit = enums.Unit.lin,
            aggregation_method: enums.AggregationMethod = enums.AggregationMethod.average
    ) -> pd.DataFrame:
        """
## Parameters
`series_group`
The ID for a group of series found in GeoFRED.
`region_type`
The region type you want to pull data for.
`date`
The date you want to pull series group data from.
`start_date`
The start date you want to request series group data from.
This allows you to pull a range of data.
`units`
The units of the series you want to pull.
`season`
The seasonality of the series group.
`frequency`
An optional parameter that indicates a lower frequency to aggregate values to.
The GeoFRED frequency aggregation feature converts higher frequency data series into lower frequency data series (e.g. converts a monthly data series into an annual data series).
In GeoFRED, the highest frequency data is daily, and the lowest frequency data is annual.
There are three aggregation methods available: average, sum, and end of period.
See the aggregation_method parameter.
`transformation`
A key that indicates a data value transformation.
## Description
https://geofred.stlouisfed.org/docs/api/geofred/regional_data.html
This request returns a cross section of regional data.
## API Request (HTTPS GET)
https://api.stlouisfed.org/geofred/regional/data?api_key=abcdefghijklmnopqrstuvwxyz123456&series_group=882&date=2013-01-01&region_type=state&units=Dollars&frequency=a&season=NSA
## API Response
```json
{
"meta": {
"title": "Per Capita Personal Income by State (Dollars)",
"region": "state",
"seasonality": "Not Seasonally Adjusted",
"units": "Dollars",
"frequency": "Annual",
"data": {
"2013": [
{
"region": "Alabama",
"code": "01",
"value": "36258.0",
"series_id": "ALPCPI"
},
...
]
}
}
}
```
## Returns
`pandas.DataFrame`
## Example (`pandas.DataFrame`)
```python
>>> geo_fred = GeoFRED(api_key='abcdefghijklmnopqrstuvwxyz123456')
>>> geo_fred.regional_data(series_group='882', date=date(2013, 1, 1), region_type=RegionType.state, frequency=Frequency.anual, season=Seasonality.not_seasonally_adjusted)
region code value series_id year
0 Alabama 01 36258.0 ALPCPI 2013
1 Alaska 02 52843.0 AKPCPI 2013
2 Arizona 04 36739.0 AZPCPI 2013
3 Arkansas 05 36605.0 ARPCPI 2013
4 California 06 48549.0 CAPCPI 2013
```
"""
if frequency is not None and frequency not in enums.Frequency:
raise ValueError('Variable frequency is not one of the values: {}'.format(', '.join(map(str, enums.Frequency))))
if aggregation_method not in enums.AggregationMethod:
raise ValueError('Variable aggregation_method is not one of the values: {}'.format(', '.join(map(str, enums.AggregationMethod))))
if transformation not in enums.Unit:
raise ValueError('Variable transformation is not one of the values: {}'.format(', '.join(map(str, enums.Unit))))
years = self._client.get(
'/geofred/regional/data',
'meta.data',
series_group=series_group,
region_type=region_type,
date=date,
units=units,
season=season,
start_date=start_date,
frequency=frequency,
transformation=transformation,
aggregation_method=aggregation_method
)
df = pd.DataFrame(
self._add_years(years[0])
)
if not df.empty:
df.value = df.value.replace(self.EMPTY_VALUE, np.nan)
df = df.astype(dtype={
'value': 'float64',
'year': 'int64'
})
return df
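The three membership checks at the top of `regional_data` all follow one pattern: reject any value that is not a member of the expected `Enum`. A self-contained sketch of that pattern; the enum here is an illustrative stand-in, not the library's actual `enums.AggregationMethod`:

```python
from enum import Enum

class AggregationMethod(Enum):  # hypothetical stand-in for enums.AggregationMethod
    average = 'avg'
    sum = 'sum'
    end_of_period = 'eop'

def validate_aggregation_method(value):
    # isinstance avoids the TypeError that `value in SomeEnum` raises for
    # non-Enum values on Python versions before 3.12
    if not isinstance(value, AggregationMethod):
        raise ValueError('Variable aggregation_method is not one of the values: {}'.format(
            ', '.join(str(m) for m in AggregationMethod)))
    return value

validate_aggregation_method(AggregationMethod.average)  # passes
try:
    validate_aggregation_method('average')              # a bare string is rejected
except ValueError as e:
    print(e)
```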
    def _add_years(self, data: dict) -> Generator[dict, None, None]:
        """
        Transform a dict indexed by year from:
```json
{
"2020": [
{
"region": "Alabama",
"code": "01",
"value": "46479.0",
"series_id": "ALPCPI"
},
...
]
}
```
to
```json
[
{
"region": "Alabama",
"code": "01",
"value": "46479.0",
"series_id": "ALPCPI",
"year": "2020"
},
...
]
```
"""
for year, rows in data.items():
for row in rows:
row['year'] = year
yield row
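The flattening transform documented in `_add_years` is easy to exercise standalone; this sketch mirrors the generator above outside the class:

```python
def add_years(data):
    # flatten {year: [row, ...]} into a stream of rows tagged with their year
    for year, rows in data.items():
        for row in rows:
            row['year'] = year
            yield row

data = {"2020": [{"region": "Alabama", "value": "46479.0"}]}
print(list(add_years(data)))
# [{'region': 'Alabama', 'value': '46479.0', 'year': '2020'}]
```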
class FRASER:
    """
    FRASER is a digital library of U.S. economic, financial, and banking history—particularly the history of the Federal Reserve System.
    Providing economic information and data to the public is an important mission for the St. Louis Fed started by former St. Louis Fed Research Director Homer Jones in 1958.
    FRASER began as a data preservation and accessibility project of the Federal Reserve Bank of St. Louis in 2004 and now provides access to data and policy documents from the Federal Reserve System and many other institutions.

    https://fraser.stlouisfed.org/
    https://research.stlouisfed.org/docs/api/fraser/
    """

    def __init__(self):
        self._sickle = sickle.Sickle('https://fraser.stlouisfed.org/oai')
    def list_records(self, ignore_deleted: bool = False, set: str = None) -> sickle.iterator.BaseOAIIterator:
        """
## Parameters
`set`
This parameter specifies the setSpec value and limits the records that are retrieved to only those in the specified set.
Ignore this parameter to return all records.
## Description
https://research.stlouisfed.org/docs/api/fraser/listRecords.html
This request returns title records from the FRASER repository.
A resumptionToken can be used to retrieve all records using multiple requests.
Additional information about an individual title, including the title's child records, can be retrieved using the GetRecord request.
## API Request (HTTPS GET)
https://fraser.stlouisfed.org/oai/?verb=ListRecords&metadataPrefix=mods&resumptionToken=1469299598:0
## Returns
`sickle.iterator.BaseOAIIterator`
## Example
```python
from pystlouisfed import FRASER
for record in FRASER().list_records():
print(record.get_metadata())
```
"""
return self._sickle.ListRecords(metadataPrefix='mods', ignore_deleted=ignore_deleted, set=set)
    def list_sets(self) -> sickle.iterator.BaseOAIIterator:
        """
## Description
https://research.stlouisfed.org/docs/api/fraser/listSets.html
This request returns the set structure for records in the FRASER repository.
A resumptionToken can be used to retrieve the complete set structure using multiple requests.
## API Request (HTTPS GET)
https://fraser.stlouisfed.org/oai/?verb=ListSets&resumptionToken=1478707638:0
## Returns
`sickle.iterator.BaseOAIIterator`
## Example
```python
from pystlouisfed import FRASER
for set in FRASER().list_sets():
print(set)
```
"""
return self._sickle.ListSets()
    def list_identifiers(self, ignore_deleted: bool = False, set: str = None) -> sickle.iterator.BaseOAIIterator:
        """
## Parameters
`set`
This parameter specifies the setSpec value and limits the records that are retrieved to only those in the specified set. Ignore this parameter to return all records.
## Description
https://research.stlouisfed.org/docs/api/fraser/listIdentifiers.html
This request returns headers for records in the FRASER repository.
A resumptionToken can be used to retrieve all records using multiple requests.
## API Request (HTTPS GET)
https://fraser.stlouisfed.org/oai/?verb=ListIdentifiers&resumptionToken=1469300451:0
## Returns
`sickle.iterator.BaseOAIIterator`
## Example
```python
from pystlouisfed import FRASER
for header in FRASER().list_identifiers():
print(header.identifier)
```
"""
return self._sickle.ListIdentifiers(metadataPrefix='mods', ignore_deleted=ignore_deleted, set=set)
    def get_record(self, identifier: str) -> sickle.models.Record:
        """
## Description
https://research.stlouisfed.org/docs/api/fraser/getRecord.html
This request returns a single record from the FRASER repository.
## API Request (HTTPS GET)
https://fraser.stlouisfed.org/oai/?verb=GetRecord&identifier=oai:fraser.stlouisfed.org:title:176
## Returns
`sickle.models.Record`
## Example
```python
from pystlouisfed import FRASER
FRASER().get_record(identifier='oai:fraser.stlouisfed.org:title:176')
```
"""
return self._sickle.GetRecord(identifier=identifier, metadataPrefix='mods')
7d77b3e9cbc03b9d3adde0405e5c9b2441e10121 | 8,565 | py | Python | randomras/smoothrast.py | quentinll/pertrenderer | d292ed4f09d49a957fba9d2f8bdd9d5c66930261 | ["BSD-2-Clause"] | 1 | 2022-02-03T08:31:40.000Z | 2022-02-03T08:31:40.000Z | randomras/smoothrast.py | quentinll/pertrenderer | d292ed4f09d49a957fba9d2f8bdd9d5c66930261 | ["BSD-2-Clause"] | null | null | null | randomras/smoothrast.py | quentinll/pertrenderer | d292ed4f09d49a957fba9d2f8bdd9d5c66930261 | ["BSD-2-Clause"] | null | null | null

"""
Inspired by PyTorch3D.
"""
import torch
from torch.nn import Module
from torch.autograd import Function
import numpy as np
from torch.distributions.transforms import SigmoidTransform
class randomHeaviside(Function):

    @staticmethod
    def forward(ctx, distances, nb_samples=1, noise_intensity=1e-1, noise_type="gaussian"):
        device = distances.device
        dist_size = distances.size()
        noise_dict = {"gaussian": torch.tensor(0), "cauchy": torch.tensor(1), "logistic": torch.tensor(2)}
        noise_type = noise_dict[noise_type]
        if noise_type == noise_dict["gaussian"]:
            noise = torch.normal(mean=torch.zeros((nb_samples, dist_size[0], dist_size[1], dist_size[2], dist_size[3]), device=device), std=1.)
        elif noise_type == noise_dict["cauchy"]:
            m = torch.distributions.cauchy.Cauchy(torch.tensor([0.]).to(device=device), torch.tensor([1.]).to(device=device))
            noise = torch.clamp(m.sample((nb_samples, dist_size[0], dist_size[1], dist_size[2], dist_size[3])).squeeze(-1), min=-1e7, max=1e7)  # we need to clamp the noise to avoid inf values
        elif noise_type == noise_dict["logistic"]:
            base_distribution = torch.distributions.uniform.Uniform(0, 1)
            transforms = [SigmoidTransform().inv]
            logistic = torch.distributions.transformed_distribution.TransformedDistribution(base_distribution, transforms)
            noise = logistic.sample((nb_samples, dist_size[0], dist_size[1], dist_size[2], dist_size[3]))
        else:
            print("noise type not implemented")
        maps = distances + noise_intensity * noise
        maps = torch.heaviside(maps, values=torch.ones(maps.size(), device=device))
        vr_var = torch.heaviside(distances, values=torch.ones(distances.size(), device=device))  # used during backward to reduce variance of gradient estimator
        ctx.save_for_backward(maps, noise, noise_intensity, vr_var, noise_type)
        map = maps.mean(dim=0)
        return map

    @staticmethod
    def backward(ctx, grad_l):
        grad_dist = None
        grad_sigma = None
        maps, noise, noise_intensity, vr_var, noise_type = ctx.saved_tensors
        noise_dict = {"gaussian": torch.tensor(0), "cauchy": torch.tensor(1), "logistic": torch.tensor(2)}
        if noise_type == noise_dict["gaussian"]:
            grad_maps = (maps - vr_var.unsqueeze(0).repeat(maps.size()[0], 1, 1, 1, 1)) * noise / noise_intensity
            grad_sigma = (maps - vr_var.unsqueeze(0).repeat(maps.size()[0], 1, 1, 1, 1)) * (torch.square(noise) - 1.) / noise_intensity
        elif noise_type == noise_dict["cauchy"]:
            grad_maps = (maps - vr_var.unsqueeze(0).repeat(maps.size()[0], 1, 1, 1, 1)) * ((2 * noise) / (1 + torch.square(noise))) / noise_intensity
            grad_sigma = maps * (noise * ((2 * noise) / (1 + torch.square(noise))) - 1.) / noise_intensity
        else:
            print("noise_type not implemented")
        grad_maps = grad_maps.mean(dim=0)
        grad_sigma = grad_sigma.mean(dim=0)
        if ctx.needs_input_grad[0]:
            grad_dist = grad_maps * grad_l
        if ctx.needs_input_grad[2]:
            grad_sigma = torch.sum(grad_maps * grad_l)
        return grad_dist, None, grad_sigma, None
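What `forward` computes is a Monte Carlo estimate of the expected Heaviside value under injected noise, E[H(d + σε)], which smooths the hard step; `backward` then applies a score-function (REINFORCE-style) gradient estimator with H(d) as a variance-reduction baseline. A scalar, torch-free sketch of the forward estimate only (illustrative, not the library's API):

```python
import random

def smoothed_heaviside(d, sigma=0.1, nb_samples=10000, rng=random.Random(0)):
    # Monte Carlo estimate of E[H(d + sigma * eps)] with eps ~ N(0, 1)
    hits = sum(1 for _ in range(nb_samples) if d + sigma * rng.gauss(0.0, 1.0) > 0)
    return hits / nb_samples

print(round(smoothed_heaviside(0.0), 1))  # 0.5 right on the boundary
print(smoothed_heaviside(1.0) > 0.99)     # True: far inside, the step saturates at 1
```

Larger `sigma` widens the smoothed transition band around the edge, which is exactly the role `noise_intensity` plays above.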
class randomHeaviside_wovr(Function):
    """Variant of randomHeaviside without the variance-reduction baseline in backward."""

    @staticmethod
    def forward(ctx, distances, nb_samples=1, noise_intensity=1e-1, noise_type="gaussian"):
        device = distances.device
        dist_size = distances.size()
        noise_dict = {"gaussian": torch.tensor(0), "cauchy": torch.tensor(1), "logistic": torch.tensor(2)}
        noise_type = noise_dict[noise_type]
        if noise_type == noise_dict["gaussian"]:
            noise = torch.normal(mean=torch.zeros((nb_samples, dist_size[0], dist_size[1], dist_size[2], dist_size[3]), device=device), std=1.)
        elif noise_type == noise_dict["cauchy"]:
            m = torch.distributions.cauchy.Cauchy(torch.tensor([0.]).to(device=device), torch.tensor([1.]).to(device=device))
            noise = torch.clamp(m.sample((nb_samples, dist_size[0], dist_size[1], dist_size[2], dist_size[3])).squeeze(-1), min=-1e7, max=1e7)  # we need to clamp the noise to avoid inf values
        elif noise_type == noise_dict["logistic"]:
            base_distribution = torch.distributions.uniform.Uniform(0, 1)
            transforms = [SigmoidTransform().inv]
            logistic = torch.distributions.transformed_distribution.TransformedDistribution(base_distribution, transforms)
            noise = logistic.sample((nb_samples, dist_size[0], dist_size[1], dist_size[2], dist_size[3]))
        else:
            print("noise type not implemented")
        maps = distances + noise_intensity * noise
        maps = torch.heaviside(maps, values=torch.ones(maps.size(), device=device))
        vr_var = torch.heaviside(distances, values=torch.ones(distances.size(), device=device))  # kept for interface parity; not used in backward (no variance reduction)
        ctx.save_for_backward(maps, noise, noise_intensity, vr_var, noise_type)
        map = maps.mean(dim=0)
        return map

    @staticmethod
    def backward(ctx, grad_l):
        grad_dist = None
        grad_sigma = None
        maps, noise, noise_intensity, vr_var, noise_type = ctx.saved_tensors
        noise_dict = {"gaussian": torch.tensor(0), "cauchy": torch.tensor(1), "logistic": torch.tensor(2)}
        if noise_type == noise_dict["gaussian"]:
            grad_maps = maps * noise / noise_intensity
            grad_sigma = maps * (torch.square(noise) - 1.) / noise_intensity
        elif noise_type == noise_dict["cauchy"]:
            grad_maps = maps * ((2 * noise) / (1 + torch.square(noise))) / noise_intensity
            grad_sigma = maps * (noise * ((2 * noise) / (1 + torch.square(noise))) - 1.) / noise_intensity
        else:
            print("noise_type not implemented")
        grad_maps = grad_maps.mean(dim=0)
        grad_sigma = grad_sigma.mean(dim=0)
        if ctx.needs_input_grad[0]:
            grad_dist = grad_maps * grad_l
        if ctx.needs_input_grad[2]:
            grad_sigma = torch.sum(grad_maps * grad_l)
        return grad_dist, None, grad_sigma, None
class SmoothRastBase(Module):

    def __init__(self, sigma=2e-4):
        super(SmoothRastBase, self).__init__()
        self.sigma = torch.tensor(sigma, requires_grad=True)
        self.nb_samples = 1

    def update_smoothing(self, sigma):
        self.sigma = torch.tensor(sigma, requires_grad=True)

    def update_nb_samples(self, nb_samples):
        self.nb_samples = nb_samples
class SoftRast(SmoothRastBase):

    def __init__(self, sigma=2e-4):
        super(SoftRast, self).__init__(sigma)

    def rasterize(self, dists):
        prob_map = torch.sigmoid(-dists / self.sigma)
        return prob_map
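`SoftRast` is the deterministic counterpart of the randomized classes: it replaces the step with a logistic sigmoid of the signed distance, `sigmoid(-d / sigma)`, which is the closed-form expectation of the perturbed step under logistic noise. A scalar sketch with a numerically stable sigmoid (the torch version above does not need this guard, since `torch.sigmoid` is already stable):

```python
import math

def soft_rasterize(d, sigma=2e-4):
    # sigmoid(-d / sigma): ~1 inside the primitive (d < 0), ~0 outside (d > 0)
    x = -d / sigma
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    z = math.exp(x)  # avoids math.exp overflow for large |x|
    return z / (1.0 + z)

print(soft_rasterize(0.0))                                        # 0.5 on the edge
print(soft_rasterize(-1e-3) > 0.98, soft_rasterize(1e-3) < 0.02)  # True True
```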
class GaussianRast(SmoothRastBase):

    def __init__(self, nb_samples=16, sigma=2e-4):
        super(GaussianRast, self).__init__(sigma)
        self.nb_samples = nb_samples

    def rasterize(self, dists):
        randomheavi = randomHeaviside().apply
        prob_map = randomheavi(-dists, self.nb_samples, self.sigma)
        return prob_map
class GaussianRast_wovr(SmoothRastBase):

    def __init__(self, nb_samples=16, sigma=2e-4):
        super(GaussianRast_wovr, self).__init__(sigma)
        self.nb_samples = nb_samples

    def rasterize(self, dists):
        randomheavi = randomHeaviside_wovr().apply
        prob_map = randomheavi(-dists, self.nb_samples, self.sigma)
        return prob_map
class ArctanRast(SmoothRastBase):

    def __init__(self, nb_samples=16, sigma=2e-4):
        super(ArctanRast, self).__init__(sigma)
        self.nb_samples = nb_samples

    def rasterize(self, dists):
        randomheavi = randomHeaviside().apply
        prob_map = randomheavi(-dists, self.nb_samples, self.sigma, "cauchy")
        # prob_map = torch.arctan(-dists / self.sigma) / np.pi + .5
        return prob_map
class AffineRast(SmoothRastBase):

    def __init__(self, nb_samples=16, sigma=2e-4):
        super().__init__(sigma)
        self.nb_samples = nb_samples

    def rasterize(self, dists):
        prob_map = torch.where(-dists / self.sigma > .5, torch.ones_like(dists), -dists / self.sigma + .5)
        prob_map = torch.where(prob_map < 0., torch.zeros_like(prob_map), prob_map)
        return prob_map
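`AffineRast`'s two `torch.where` calls implement a clamped linear ramp: the probability falls linearly from 1 to 0 across a band of width sigma around the edge. The scalar equivalent is a plain min/max clamp (a sketch, not the library's API):

```python
def affine_rasterize(d, sigma=2e-4):
    # clamp(-d / sigma + 0.5) to [0, 1]: linear ramp of width sigma around the edge
    return min(1.0, max(0.0, -d / sigma + 0.5))

print(affine_rasterize(0.0))                          # 0.5
print(affine_rasterize(-1.0), affine_rasterize(1.0))  # 1.0 0.0
```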
class HardRast():

    def __init__(self):
        return

    def rasterize(self, dists):
        device = dists.device
        prob_map = torch.heaviside(-dists, values=torch.ones(dists.size(), device=device))
        return prob_map
7d82d31e85e31cfc0b27f515b4ff0696332b9ee3 | 1,462 | py | Python | tournaments/forms.py | ChristianJStarr/sbs-website | db891f0a67f46cc9cdadc95714304b2ea91a162a | ["MIT"] | 1 | 2022-01-09T18:54:32.000Z | 2022-01-09T18:54:32.000Z | tournaments/forms.py | ChristianJStarr/sbs-website | db891f0a67f46cc9cdadc95714304b2ea91a162a | ["MIT"] | null | null | null | tournaments/forms.py | ChristianJStarr/sbs-website | db891f0a67f46cc9cdadc95714304b2ea91a162a | ["MIT"] | null | null | null
from django import forms
from tournaments.models import Tournament
class CreateTournament(forms.ModelForm):
tournament_date = forms.DateField(required=False)
tournament_time = forms.TimeField(required=False)
picture = forms.ImageField(required=False)
center = forms.UUIDField(required=False)
format = forms.UUIDField(required=False)
entry_fee = forms.FloatField(required=False)
total_games = forms.IntegerField(required=False)
placements = forms.JSONField(required=False)
class Meta:
model = Tournament
fields = ['name', 'description', 'datetime', 'picture', 'center', 'format', 'entry_fee', 'total_games', 'placements']
def clean(self):
cleaned_data = super().clean()
return cleaned_data
class ModifyTournament(forms.ModelForm):
tournament_date = forms.DateField(required=False)
tournament_time = forms.TimeField(required=False)
picture = forms.ImageField(required=False)
center = forms.UUIDField(required=False)
format = forms.UUIDField(required=False)
entry_fee = forms.FloatField(required=False)
total_games = forms.IntegerField(required=False)
placements = forms.JSONField(required=False)
class Meta:
model = Tournament
fields = ['name', 'description', 'datetime', 'picture', 'center', 'format', 'entry_fee', 'total_games', 'placements']
def clean(self):
cleaned_data = super().clean()
return cleaned_data
| 36.55 | 125 | 0.708618 | 157 | 1,462 | 6.496815 | 0.273885 | 0.203922 | 0.086275 | 0.105882 | 0.901961 | 0.901961 | 0.901961 | 0.901961 | 0.901961 | 0.901961 | 0 | 0 | 0.175787 | 1,462 | 39 | 126 | 37.487179 | 0.846473 | 0 | 0 | 0.875 | 0 | 0 | 0.098563 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.0625 | 0 | 0.8125 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 9 |
7dd7368850b102ed22768943fc4dde0ef2a0944b | 250 | py | Python | OpenGLCffi/GLES1/EXT/APPLE/copy_texture_levels.py | cydenix/OpenGLCffi | c78f51ae5e6b655eb2ea98f072771cf69e2197f3 | ["MIT"] | null | null | null | OpenGLCffi/GLES1/EXT/APPLE/copy_texture_levels.py | cydenix/OpenGLCffi | c78f51ae5e6b655eb2ea98f072771cf69e2197f3 | ["MIT"] | null | null | null | OpenGLCffi/GLES1/EXT/APPLE/copy_texture_levels.py | cydenix/OpenGLCffi | c78f51ae5e6b655eb2ea98f072771cf69e2197f3 | ["MIT"] | null | null | null

from OpenGLCffi.GLES1 import params
@params(api='gles1', prms=['destinationTexture', 'sourceTexture', 'sourceBaseLevel', 'sourceLevelCount'])
def glCopyTextureLevelsAPPLE(destinationTexture, sourceTexture, sourceBaseLevel, sourceLevelCount):
    pass
815b84b484ccdbe3909443dfd4084a9427889617 | 4,771 | py | Python | python/ais_sdk/moderation_video.py | huaweicloudsdk/ais-sdk | 623dd366dc3b26ef2eb949a8284f5b0a55e5b0e6 | ["Apache-2.0"] | 48 | 2018-03-13T06:21:46.000Z | 2021-06-23T02:32:14.000Z | python/ais_sdk/moderation_video.py | huaweicloudsdk/ais-sdk | 623dd366dc3b26ef2eb949a8284f5b0a55e5b0e6 | ["Apache-2.0"] | 29 | 2019-03-27T08:52:38.000Z | 2021-12-14T21:15:11.000Z | python/ais_sdk/moderation_video.py | huaweicloudsdk/ais-sdk | 623dd366dc3b26ef2eb949a8284f5b0a55e5b0e6 | ["Apache-2.0"] | 38 | 2018-03-22T01:06:51.000Z | 2020-12-30T12:42:42.000Z

# -*- coding:utf-8 -*-
import sys
import time
import json
import ais_sdk.ais as ais
import ais_sdk.utils as utils
import ais_sdk.signer as signer
#
# access the video moderation service, post data by token
#
_RETRY_TIMES = 3


def moderation_video(token, url, frame_interval=5, categories=['politics', 'terrorism']):
    endpoint = utils.get_endpoint(ais.AisService.MODERATION_SERVICE)
    status, r = _moderation_video(endpoint, token, url, frame_interval, categories)
    if status != 200:
        return r

    submit_result = json.loads(r)
    job_id = submit_result['result'].get('job_id', '')
    # print("Process job id is:", job_id)
    time.sleep(1.0)

    retry_times = 0
    try:
        while True:
            status, r = _get_result(endpoint, token, job_id)
            if status != 200:
                if retry_times < _RETRY_TIMES:
                    retry_times += 1
                    time.sleep(2.0)
                    continue
                else:
                    return r

            rec_result = json.loads(r)
            process_status = rec_result["result"].get('status')
            if process_status == 'failed':
                return r
            elif process_status == 'finish':
                return r
            #
            # process_status == 0 || process_status == 1
            #
            else:
                time.sleep(2.0)
                continue
    except Exception:
        return ''
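Both public functions in this module share the same submit-then-poll skeleton: submit the job, then loop on the job id, tolerating a limited number of transient failures and sleeping between polls. A generic, self-contained sketch of that skeleton (the fake `fetch` callable stands in for `_get_result`; names are illustrative):

```python
import time

def poll_until_done(fetch, retry_times=3, interval=0.0):
    # fetch() -> (status_code, payload); a 'finish' or 'failed' payload ends the loop
    failures = 0
    while True:
        status, payload = fetch()
        if status != 200:
            failures += 1
            if failures > retry_times:
                return payload  # give up after too many transient errors
            time.sleep(interval)
            continue
        if payload in ('finish', 'failed'):
            return payload
        time.sleep(interval)  # job still running: poll again

# fake backend: one transient error, two 'running' polls, then success
responses = iter([(500, 'oops'), (200, 'running'), (200, 'running'), (200, 'finish')])
print(poll_until_done(lambda: next(responses)))  # finish
```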
#
# moderation_video, post the data
#
def _moderation_video(endpoint, token, url, frame_interval=5, categories=['politics', 'terrorism']):
    _url = 'https://%s/v1.0/moderation/video' % endpoint
    _data = {
        "url": url,
        "frame_interval": frame_interval,
        "categories": categories
    }
    status, resp = utils.request_token(_url, _data, token)
    if sys.version_info.major < 3:
        return status, resp
    else:
        return status, resp.decode('utf-8')


#
# access the video moderation service, get the result
#
def _get_result(endpoint, token, job_id):
    _url_tmpl = 'https://%s/v1.0/moderation/video?job_id=%s'
    _url = _url_tmpl % (endpoint, job_id)
    return utils.request_job_result_token(_url, token)
#
# access asr, long_sentence,post data by ak,sk
#
def moderation_video_aksk(_ak, _sk, url, frame_interval=5, categories=['politics', 'terrorism']):
sig = signer.Signer()
sig.AppKey = _ak
sig.AppSecret = _sk
endpoint = utils.get_endpoint(ais.AisService.MODERATION_SERVICE)
status, r = _moderation_video_aksk(endpoint, sig, url, frame_interval, categories)
if status != 200:
return r
submit_result = json.loads(r)
job_id = submit_result['result'].get('job_id', '')
    # print("Process job id is:", job_id)
time.sleep(1.0)
retry_times = 0
try:
while True:
status, r = _get_result_aksk(endpoint, sig, job_id)
if status != 200:
if retry_times < _RETRY_TIMES:
retry_times += 1
time.sleep(2.0)
continue
else:
return r
rec_result = json.loads(r)
process_status = rec_result["result"].get('status')
if process_status == 'failed':
return r
elif process_status == 'finish':
return r
#
            # job still running: wait and poll again
#
else:
time.sleep(2.0)
continue
except Exception:
return ''
#
# moderation_video, post the data
#
def _moderation_video_aksk(endpoint, sig, url, frame_interval=5, categories=None):
    if categories is None:
        # avoid sharing a mutable default list between calls
        categories = ['politics', 'terrorism']
    _url = 'https://%s/v1.0/moderation/video' % endpoint
_data = {
"url": url,
"frame_interval": frame_interval,
"categories": categories
}
kreq = signer.HttpRequest()
kreq.scheme = "https"
kreq.host = endpoint
kreq.uri = "/v1.0/moderation/video"
kreq.method = "POST"
kreq.headers = {"Content-Type": "application/json"}
kreq.body = json.dumps(_data)
status, resp = utils.request_aksk(sig, kreq, _url)
if sys.version_info.major < 3:
return status, resp
else:
return status, resp.decode('utf-8')
#
# moderation video, get the result
#
def _get_result_aksk(endpoint, sig, job_id):
_url_tmpl = 'https://%s/v1.0/moderation/video?job_id=%s'
_url = _url_tmpl % (endpoint, job_id)
kreq = signer.HttpRequest()
kreq.scheme = "https"
kreq.host = endpoint
kreq.uri = "/v1.0/moderation/video"
kreq.method = "GET"
kreq.headers = {"Content-Type": "application/json"}
kreq.query = {'job_id': job_id}
return utils.request_job_result_aksk(sig, kreq, _url)
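Both public entry points above share the same submit-then-poll shape: post the job, then loop on `_get_result*`, retrying transient non-200 responses up to `_RETRY_TIMES` and returning as soon as the job reports `finish` or `failed`. A minimal, self-contained sketch of that pattern (the `fetch` callable and the `running` status value here are illustrative stand-ins, not part of the SDK):

```python
import json

_RETRY_TIMES = 3

def poll_until_done(fetch, max_polls=50):
    """Poll fetch() until the job reports 'finish' or 'failed'.

    fetch returns (status_code, body_json_str); transient non-200
    responses are retried up to _RETRY_TIMES before giving up.
    """
    retry_times = 0
    for _ in range(max_polls):
        status, body = fetch()
        if status != 200:
            if retry_times < _RETRY_TIMES:
                retry_times += 1
                continue          # transient error: retry
            return body           # persistent error: surface raw response
        result = json.loads(body)["result"]
        if result.get("status") in ("finish", "failed"):
            return body           # terminal state reached
        # still running: poll again (a real client would sleep here)
    return ""

# Fake service: two transient 500s, one 'running' poll, then 'finish'.
responses = [
    (500, ""),
    (500, ""),
    (200, json.dumps({"result": {"status": "running"}})),
    (200, json.dumps({"result": {"status": "finish"}})),
]
it = iter(responses)
result = poll_until_done(lambda: next(it))
print(result)  # → {"result": {"status": "finish"}}
```

The SDK functions additionally sleep between polls and return `''` on unexpected exceptions; this sketch keeps only the control flow.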
| 26.214286 | 103 | 0.586669 | 576 | 4,771 | 4.644097 | 0.166667 | 0.033645 | 0.04785 | 0.040374 | 0.869533 | 0.869533 | 0.859813 | 0.748411 | 0.717009 | 0.717009 | 0 | 0.016124 | 0.298051 | 4,771 | 181 | 104 | 26.359116 | 0.782622 | 0.088032 | 0 | 0.741379 | 0 | 0 | 0.109877 | 0.010178 | 0 | 0 | 0 | 0 | 0 | 1 | 0.051724 | false | 0 | 0.051724 | 0 | 0.241379 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
81a668475a1b64bd338326494760cfb598e42c0c | 7,908 | py | Python | FHNC_3D.py | mpanho/FHNC_3D | 9a1f87bfa1dbe59f59cbeb93c472e4587c8b93e7 | [
"MIT"
] | 2 | 2018-09-03T07:44:23.000Z | 2018-09-03T07:44:28.000Z | FHNC_3D.py | mpanho/FHNC_3D | 9a1f87bfa1dbe59f59cbeb93c472e4587c8b93e7 | [
"MIT"
] | null | null | null | FHNC_3D.py | mpanho/FHNC_3D | 9a1f87bfa1dbe59f59cbeb93c472e4587c8b93e7 | [
"MIT"
] | null | null | null |
import numpy as np
#from scipy.fftpack import fft, ifft,diff
from numpy.fft import fft, ifft #,diff
def set_default():
global n,xmax,x,dx, k, dk, pi, nu, r0kF, imax
n=3000
xmax=50.
dx=xmax/n
x=np.arange(0,xmax,xmax/n)+dx/2
pi=np.pi
dk=pi/xmax#*(n)/(n+1)
k=np.arange(0,dk*n,dk) + dk/2
nu=2 # degeneracy factor, for spin polarized calculation set to 1
r0kF=(9*pi/2/nu)**(1/3.)
imax=400
def update():
global n,xmax,x,dx, k, dk, nu, r0kF
print('new values for n=', n,' xmax=',xmax, ' nu=',nu)
dx=xmax/n
x=np.arange(0,xmax,xmax/n)+dx/2
dk=pi/xmax#*(n)/(n+1)
k=np.arange(0,dk*n,dk) + dk/2
r0kF=(9*pi/2/nu)**(1/3.)
def FFT(inp):
ln=np.arange(n*2)
inpp=x*inp
inlong=np.exp(-1j*pi*ln/n/2)*np.concatenate( (inpp,np.flip(inpp,0)) ,0)
res=np.imag(np.exp(-1j*pi*(1./n/4 + ln/n/2)) * fft(inlong))[0:n]/k
return -res*dx*3/2#,inlong
def IFFT(inp):
ln=np.arange(n*2)
inpp=k*inp
inlong=np.exp(1j*pi*ln/n/2)*np.concatenate( (inpp,np.flip(inpp,0)) ,0)
res=np.imag(np.exp(1j*pi*(1./n/4 + ln/n/2)) * ifft(inlong))[0:n]/x
return n*2*res/xmax/3#*n/(n+1)#,inlong
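`FFT`/`IFFT` above appear to implement the 3-D radial (spherical) Fourier-transform pair via a 1-D FFT of r·f(r), using the identity F(k) = (4π/k)∫₀^∞ r f(r) sin(kr) dr (up to the module's own density normalization). As a sanity check of that identity, independent of the module's globals, the same transform can be evaluated by direct quadrature for a Gaussian, whose 3-D transform is known in closed form:

```python
import numpy as np

# Radial 3-D Fourier transform by direct quadrature:
#   F(k) = (4*pi/k) * integral_0^inf r f(r) sin(k r) dr
r = np.linspace(1e-6, 20.0, 4000)
f = np.exp(-r**2)                       # test function: Gaussian
k = np.linspace(0.1, 5.0, 50)

F_num = np.array([
    4 * np.pi / kk * np.trapz(r * f * np.sin(kk * r), r) for kk in k
])
F_exact = np.pi**1.5 * np.exp(-k**2 / 4)  # known 3-D FT of exp(-r^2)

print(np.max(np.abs(F_num - F_exact)))    # close to zero
```

The FFT-based version in the module evaluates the same sine integral on the whole grid at once instead of one k at a time.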
def Diff(inp):
    "Second-order central difference in the interior, one-sided at the endpoints"
    res = np.zeros(n)
    res[1:-1] = (inp[2:] - inp[:-2]) / (2 * dx)
    res[0] = (inp[1] - inp[0]) / dx
    res[-1] = (inp[-1] - inp[-2]) / dx
    return res
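`Diff` approximates d/dx with a second-order central difference in the interior, (f[i+1] − f[i−1])/(2Δx). A quick standalone check of that stencil's O(h²) accuracy against a known derivative:

```python
import numpy as np

# Interior central difference on sin(x) should reproduce cos(x) to O(h^2).
h = 1e-3
xs = np.arange(0.0, 1.0, h)
vals = np.sin(xs)
d = np.zeros_like(vals)
d[1:-1] = (vals[2:] - vals[:-2]) / (2 * h)
err = np.max(np.abs(d[1:-1] - np.cos(xs[1:-1])))
print(err < 1e-6)  # → True (error is O(h^2), roughly 2e-7 here)
```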
def lF(x):
res=3*(np.sin(x)/x**2-np.cos(x)/x)/x
return res
def SF(x):
"Free static structure function"
res= np.heaviside(x-2,0.5)
res=res +(3*x/4-x**3/16)*np.heaviside(2-x,0.5)
return res
def multip(gin,gold):
res=np.exp(1*(gin-gold))
i=0
    while (i < n and gin[i] < 0.00):
res[i]=np.exp(2*(gin[i]-gold[i]))
i=i+1
return res
def solve_ladder(rs):
"Do the FHNC calculation"
global g0r,rslist,imax
print('rs=',rs)
print('nu=',nu)
print('If nu=1, spin polarized calculation; if nu=2 paramagnetic calculation')
print('n=',n)
print('xmax=',xmax)
vcoulrs=1*2/x #*np.exp(-x/0.3/xmax)
gF=1-lF(r0kF*x)**2/nu
SF0=1+FFT(gF-1) #SF(k2/r0kF)
rs_start=.5
Vphk=FFT(vcoulrs/rs_start/10)
S0k=SF0 #1/np.sqrt(1 + 2*rs_start**2/k**2 * Vphk)
g0old=gF
lap_gFrs= 2/(x**2*np.sqrt(gF)) * Diff(x**2*Diff(np.sqrt(gF)))
drs=0.5
tmp=int(rs*2)+2
rslist=np.linspace(rs_start,rs,tmp)
j=0
accuracy=4e-4
hist=[]
gall=[]
print('Calculation has started ...')
for rsi in rslist:
j=j+1
i=0
residual=0.1
if (rsi==rs):
accuracy=4e-5
while (residual>accuracy and i<imax):
i=i+1
if (i==imax):
print("not converged at rs=", rsi)
                print("Note: accuracy might be ok for your needs, otherwise increase imax")
S0kold=S0k
wI=-k**2/2/rsi**2 * (1/SF0-1/S0k)**2 * (2*S0k/SF0 +1)
wIr=IFFT(wI)
wIB=-k**2/2/rsi**2 * (1-1/S0k)**2 * (2*S0k +1)
wIBr=IFFT(wIB)
g00=1+IFFT(S0k-1)
g0r=g0old*multip(g00,g0old) #np.exp(1*(g00-g0old))
g0old=g0r
Vph= (g0r-0)*vcoulrs/rsi + g0r*(lap_gFrs/rsi**2 +wIr) + (-1)*wIBr + 2/rsi**2*np.abs(Diff(np.sqrt(g0r)))**2
Vphk=FFT(Vph) + 0*6/k**2/rsi
if any(np.less_equal(np.real(1 + 2*rsi**2/k**2 * Vphk), np.zeros(n))):
print("instability at rs=",rsi)
#break
S0k=1/np.sqrt(np.abs(1 + 2*rsi**2/k**2 * Vphk))
g00=1+IFFT(S0k-1)
residual=np.sum(np.abs(g00-g0r))*dx
hist.append(residual)
gall.append(g00[0])
print('...Finished')
print('residual=',residual, 'iterations=',i)
return g0r
def solve_Kallio(rs):
"Do the calculation"
global g0r,rslist,imax
print('rs=',rs)
print('nu=',nu)
print('If nu=1, spin polarized calculation; if nu=2 paramagnetic calculation')
print('n=',n)
print('xmax=',xmax)
vcoulrs=1*2/x #*np.exp(-x/0.3/xmax)
gF=1-lF(r0kF*x)**2/nu
SF0=1+FFT(gF-1) #SF(k2/r0kF)
rs_start=.5
Vphk=FFT(vcoulrs/rs_start/10)
S0k=SF0 #1/np.sqrt(1 + 2*rs_start**2/k**2 * Vphk)
g0old=gF
Gddold=IFFT((S0k/SF0-1)/SF0)
wIFrs=-k**2/2 * (1-1/SF0)**2 * (2*SF0 +1)
wIFrrs=IFFT(wIFrs)
lap_gFrs= 2/(x**2*np.sqrt(gF)) * Diff(x**2*Diff(np.sqrt(gF)))
drs=0.5
tmp=int(rs*2)+2
rslist=np.linspace(rs_start,rs,tmp)
j=0
accuracy=4e-4
hist=[]
gall=[]
print('Calculation has started ...')
for rsi in rslist:
j=j+1
i=0
residual=0.1
if (rsi==rs):
accuracy=4e-5
while (residual>accuracy and i<imax):
i=i+1
if (i==imax):
print("not converged at rs=", rsi)
                print("Note: accuracy might be ok for your needs, otherwise increase imax")
S0kold=S0k
wI=-k**2/2/rsi**2 * (1/SF0-1/S0k)**2 * (2*S0k/SF0 +1)
wIr=IFFT(wI)
wIB=-k**2/2/rsi**2 * (1-1/S0k)**2 * (2*S0k +1)
wIBr=IFFT(wIB)
g00=1+IFFT(S0k-1)
g0r=g0old*multip(g00,g0old) #np.exp(1*(g00-g0old))
g0old=g0r
#Vph= (g0r-0)*vcoulrs/rsi + g0r*(lap_gFrs/rsi**2 +wIr) + (-1)*wIBr + 2/rsi**2*np.abs(Diff(np.sqrt(g0r)))**2
            Vph= (g0r-0)*vcoulrs/rsi + g0r*(lap_gFrs -wIFrrs)/rsi**2 + (g0r-1)*wIBr + 2/rsi**2*np.abs(Diff(np.sqrt(g0r)))**2
Vphk=FFT(Vph) + 0*6/k**2/rsi
if any(np.less_equal(np.real(1 + 2*rsi**2/k**2 * Vphk), np.zeros(n))):
print("instability at rs=",rsi)
#break
S0k=1/np.sqrt(np.abs(1 + 2*rsi**2/k**2 * Vphk))
g00=1+IFFT(S0k-1)
residual=np.sum(np.abs(g00-g0r))*dx
hist.append(residual)
gall.append(g00[0])
print('...Finished')
print('residual=',residual, 'iterations=',i)
return g0r
def solve_sFHNC(rs):
"Do the FHNC calculation"
global g0r,rslist,imax
print('rs=',rs)
print('nu=',nu)
print('If nu=1, spin polarized calculation; if nu=2 paramagnetic calculation')
print('n=',n)
print('xmax=',xmax)
vcoulrs=1*2/x #*np.exp(-x/0.3/xmax)
gF=1-lF(r0kF*x)**2/nu
SF0=1+FFT(gF-1) #SF(k2/r0kF)
rs_start=.5
Vphk=FFT(vcoulrs/rs_start/10)
S0k=SF0 #1/np.sqrt(1 + 2*rs_start**2/k**2 * Vphk)
g0old=gF
Gddold=IFFT((S0k/SF0-1)/SF0)
drs=0.5
tmp=int(rs*2)+2
rslist=np.linspace(rs_start,rs,tmp)
j=0
accuracy=4e-4
hist=[]
gall=[]
print('Calculation has started ...')
for rsi in rslist:
j=j+1
i=0
residual=0.1
if (rsi==rs):
accuracy=4e-5
while (residual>accuracy and i<imax):
i=i+1
if (i==imax):
print("not converged at rs=", rsi)
                print("Note: accuracy might be ok for your needs, otherwise increase imax")
S0kold=S0k
wI=-k**2/2/rsi**2 * (1/SF0-1/S0k)**2 * (2*S0k/SF0 +1)
wIr=IFFT(wI)
g00=1+IFFT(S0k-1)
#g0r=g0old*multip(g00,g0old) #np.exp(1*(g00-g0old))
#g0old=g0r
Gddr=Gddold + .05*(IFFT( (S0k/SF0-1)/SF0 ) - Gddold)
Gddold=Gddr
Vph= (1+Gddr)*vcoulrs/rsi + 2/rsi**2*Diff(np.sqrt(np.abs(1+Gddr)))**2 + Gddr*wIr
Vphk=FFT(Vph) + 0*6/k**2/rsi
if any(np.less_equal(np.real(1/SF0 + 2*rsi**2/k**2 * Vphk), np.zeros(n))):
print("instability at rs=",rsi)
#break
S0k=1/np.sqrt(np.abs( 1/SF0**2 + 2*rsi**2/k**2 * Vphk))
g00=1+IFFT(S0k-1)
residual=np.sum(np.abs(S0k-S0kold))*dx
hist.append(residual)
gall.append(g00[0])
print('...Finished')
print('residual=',residual, 'iterations=',i)
return g00
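All three solvers share the same outer structure: ramp r_s up from 0.5 over `rslist`, and at each step run a (possibly damped, cf. the `0.05*(new - old)` mixing in `solve_sFHNC`) fixed-point iteration until an L1 residual drops below `accuracy` or `imax` is hit. Stripped of the physics, that control flow looks like this, with a simple scalar map as a stand-in:

```python
import math

def fixed_point(phi, x0, accuracy=1e-8, imax=400, mixing=1.0):
    """Iterate x <- x + mixing*(phi(x) - x) until |phi(x) - x| < accuracy."""
    x = x0
    for i in range(1, imax + 1):
        x_new = phi(x)
        residual = abs(x_new - x)
        x = x + mixing * (x_new - x)   # damped update (mixing < 1 stabilizes)
        if residual < accuracy:
            return x, i
    print("not converged after", imax, "iterations")
    return x, imax

# Stand-in map: the classic x = cos(x) fixed point (Dottie number).
x, iters = fixed_point(math.cos, 1.0)
print(round(x, 6))  # → 0.739085
```

In the solvers above, the iterate is the structure factor S(k), `phi` is the chain "build V_ph from g(r), then S = (1 + 2 r_s² V_ph/k²)^(−1/2)", and the residual is `sum(|g00 - g0r|)*dx`.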
| 30.770428 | 123 | 0.520486 | 1,362 | 7,908 | 3.003671 | 0.110866 | 0.0088 | 0.017111 | 0.0154 | 0.809582 | 0.803227 | 0.803227 | 0.77976 | 0.773161 | 0.773161 | 0 | 0.080742 | 0.284269 | 7,908 | 256 | 124 | 30.890625 | 0.642049 | 0.089403 | 0 | 0.760181 | 0 | 0 | 0.117728 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.049774 | false | 0 | 0.00905 | 0 | 0.099548 | 0.153846 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c4ac5a59e5268c581c5591b970ee8df930dc6119 | 2,415 | py | Python | tests/test_kitsune.py | aluttik/kitsune | 43824cccf46f433a71b30a7febc0e3500b831067 | [
"MIT"
] | 2 | 2021-11-16T08:50:28.000Z | 2022-01-31T10:28:38.000Z | tests/test_kitsune.py | aluttik/kitsune | 43824cccf46f433a71b30a7febc0e3500b831067 | [
"MIT"
] | null | null | null | tests/test_kitsune.py | aluttik/kitsune | 43824cccf46f433a71b30a7febc0e3500b831067 | [
"MIT"
] | 1 | 2021-08-21T03:11:02.000Z | 2021-08-21T03:11:02.000Z | # -*- coding: utf-8 -*-
import io
import re
import time
from kitsune import KitsunePrinter
# longer delay = more reliable tests
DELAY_SECONDS = 0.5
def test_kitsune_printer_plain(tmpdir):
a = tmpdir.join("a.log")
b = tmpdir.join("bb.log")
# 'touch' the files
a.ensure(file=True)
b.ensure(file=True)
buf = io.StringIO()
printer = KitsunePrinter([a.strpath, b.strpath], color=False, stream=buf)
printer.start()
time.sleep(DELAY_SECONDS)
a.write("foo\n", mode="a+")
time.sleep(DELAY_SECONDS)
b.write("bar\n", mode="a+")
time.sleep(DELAY_SECONDS)
a.write("baz\n", mode="a+")
time.sleep(DELAY_SECONDS)
output = buf.getvalue().splitlines()
assert len(output) == 3
assert output[0] == "a.log | foo"
assert output[1] == "bb.log | bar"
assert output[2] == "a.log | baz"
printer.stop()
a.write("printer already stopped\n", mode="a+")
time.sleep(DELAY_SECONDS)
output = buf.getvalue().splitlines()
assert len(output) == 3
assert output[0] == "a.log | foo"
assert output[1] == "bb.log | bar"
assert output[2] == "a.log | baz"
def test_kitsune_printer_rainbow(tmpdir):
a = tmpdir.join("a.log")
b = tmpdir.join("bb.log")
# 'touch' the files
a.ensure(file=True)
b.ensure(file=True)
buf = io.StringIO()
printer = KitsunePrinter([a.strpath, b.strpath], color=True, stream=buf)
printer.start()
time.sleep(DELAY_SECONDS)
a.write("foo\n", mode="a+")
time.sleep(DELAY_SECONDS)
b.write("bar\n", mode="a+")
time.sleep(DELAY_SECONDS)
a.write("baz\n", mode="a+")
time.sleep(DELAY_SECONDS)
colored_line_regex = re.compile(r"(\x1b\[3[0-7](?:;1)?m)(\w+\..+ +\|\x1b\[0m .*)\n")
value = buf.getvalue()
matches = colored_line_regex.findall(value)
assert len(matches) == 3
output = tuple(zip(*matches))[1]
assert output[0] == "a.log |\033[0m foo"
assert output[1] == "bb.log |\033[0m bar"
assert output[2] == "a.log |\033[0m baz"
printer.stop()
a.write("printer already stopped\n", mode="a+")
time.sleep(DELAY_SECONDS)
value = buf.getvalue()
matches = colored_line_regex.findall(value)
assert len(matches) == 3
output = tuple(zip(*matches))[1]
assert output[0] == "a.log |\033[0m foo"
assert output[1] == "bb.log |\033[0m bar"
assert output[2] == "a.log |\033[0m baz"
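The rainbow test relies on `colored_line_regex` to split each printed line into an ANSI color prefix and the payload. A small standalone check of that same pattern against hand-built colored lines (the sample strings here are illustrative):

```python
import re

colored_line_regex = re.compile(
    r"(\x1b\[3[0-7](?:;1)?m)(\w+\..+ +\|\x1b\[0m .*)\n"
)

sample = "\x1b[31ma.log  |\x1b[0m foo\n\x1b[34;1mbb.log |\x1b[0m bar\n"
matches = colored_line_regex.findall(sample)
# Each match is a (color_prefix, payload) tuple; both plain (31) and
# bold (34;1) SGR prefixes are accepted.
print(len(matches))  # → 2
```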
| 24.642857 | 88 | 0.61118 | 354 | 2,415 | 4.10452 | 0.214689 | 0.099105 | 0.096352 | 0.144529 | 0.853407 | 0.853407 | 0.853407 | 0.853407 | 0.853407 | 0.853407 | 0 | 0.027168 | 0.207453 | 2,415 | 97 | 89 | 24.896907 | 0.731975 | 0.038095 | 0 | 0.848485 | 0 | 0.015152 | 0.151855 | 0.012942 | 0 | 0 | 0 | 0 | 0.242424 | 1 | 0.030303 | false | 0 | 0.060606 | 0 | 0.090909 | 0.151515 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c4b5b40f616ceac57406cde4cacfa193e4a3f541 | 3,599 | py | Python | src/rooms/geography/astar_test.py | gomyar/rooms | ba20cb77380f439d60d452d2bc69bd94c9c21c24 | [
"MIT"
] | null | null | null | src/rooms/geography/astar_test.py | gomyar/rooms | ba20cb77380f439d60d452d2bc69bd94c9c21c24 | [
"MIT"
] | null | null | null | src/rooms/geography/astar_test.py | gomyar/rooms | ba20cb77380f439d60d452d2bc69bd94c9c21c24 | [
"MIT"
] | null | null | null |
import unittest
from astar import AStar
from astar import PointMap
from astar import Point
class AStarTest(unittest.TestCase):
def setUp(self):
self.point_map = PointMap()
self.point_map.init_square_points(0, 0, 7, 5)
def testCreatePointMap(self):
self.assertEquals(Point(2, 2), self.point_map[2, 2])
self.assertEquals(7, self.point_map.width)
self.assertEquals(5, self.point_map.height)
self.assertEquals(set([(0, 1), (1, 1), (1, 0)]),
set(self.point_map[0, 0].connected_points()))
self.assertEquals(set([(2, 0), (4, 0), (2, 1), (3, 1), (4, 1)]),
set(self.point_map[3, 0].connected_points()))
self.point_map.make_impassable((3, 1))
self.assertEquals(set([(2, 0), (4, 0), (2, 1), (4, 1)]),
set(self.point_map[3, 0].connected_points()))
self.point_map.make_impassable((3, 2), (3, 3))
self.assertTrue((3, 2) not in set(self.point_map[2, 2].connected_points()))
self.assertTrue((3, 3) not in set(self.point_map[2, 2].connected_points()))
def testCreatespacedPointMap(self):
self.point_map = PointMap()
self.point_map.init_square_points(0, 0, 70, 50, 10)
self.assertEquals(Point(20, 20), self.point_map[20, 20])
self.assertEquals(70, self.point_map.width)
self.assertEquals(50, self.point_map.height)
self.assertEquals(set([(0, 10), (10, 10), (10, 0)]),
set(self.point_map[0, 0].connected_points()))
self.assertEquals(set([(20, 0), (40, 0), (20, 10), (30, 10), (40, 10)]),
set(self.point_map[30, 0].connected_points()))
self.point_map.make_impassable((30, 10))
self.assertEquals(set([(20, 0), (40, 0), (20, 10), (40, 10)]),
set(self.point_map[30, 0].connected_points()))
self.point_map.make_impassable((30, 20), (30, 30))
self.assertTrue((30, 20) not in set(self.point_map[20, 20].connected_points()))
self.assertTrue((30, 30) not in set(self.point_map[20, 20].connected_points()))
def testExample(self):
self.point_map.make_impassable((3, 1), (3, 3))
path = AStar(self.point_map).find_path(self.point_map[(1, 2)], self.point_map[(5, 2)])
self.assertEquals([(1, 2), (2, 1), (3, 0), (4, 1), (5, 2)], path)
def testExample2(self):
self.point_map = PointMap()
self.point_map.init_square_points(0, 0, 7, 5)
self.point_map.make_impassable((3, 0), (3, 3))
path = AStar(self.point_map).find_path(self.point_map[(1, 2)], self.point_map[(5, 2)])
self.assertEquals([(1, 2), (2, 3), (3, 4), (4, 3), (5, 2)], path)
    def testExample3(self):
self.point_map = PointMap()
self.point_map.init_square_points(0, 0, 7, 5)
self.point_map.make_impassable((3, 0), (3, 4))
path = AStar(self.point_map).find_path(self.point_map[(1, 2)], self.point_map[(5, 2)])
self.assertEquals([], path)
def testCanUseAStarTwice(self):
self.point_map.make_impassable((3, 1), (3, 3))
astar = AStar(self.point_map)
path = astar.find_path(self.point_map[(1, 2)], self.point_map[(5, 2)])
self.assertEquals([(1, 2), (2, 1), (3, 0), (4, 1), (5, 2)], path)
path = astar.find_path(self.point_map[(1, 2)], self.point_map[(5, 2)])
self.assertEquals([(1, 2), (2, 1), (3, 0), (4, 1), (5, 2)], path)
def testPointSpacing(self):
self.point_map = PointMap()
self.point_map.init_square_points(0, 0, 7, 5, 10)
self.assertEquals(Point(0, 0), self.point_map[1, 1])
| 39.119565 | 94 | 0.595443 | 549 | 3,599 | 3.754098 | 0.08561 | 0.213974 | 0.285298 | 0.07278 | 0.768559 | 0.754003 | 0.72198 | 0.721494 | 0.684619 | 0.652111 | 0 | 0.087912 | 0.216171 | 3,599 | 91 | 95 | 39.549451 | 0.64268 | 0 | 0 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.338462 | 1 | 0.123077 | false | 0.123077 | 0.061538 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
c4c4e380553bcc283df9b86ed2f18c4ef56ab1f1 | 100 | py | Python | src/pipeline_oriented_analytics/dataframe/__init__.py | bbiletskyy/pipeline-oriented-analytics | 35ece0907a0792e9b7d13a7759c9a32045819842 | [
"MIT"
] | 8 | 2020-02-19T12:35:32.000Z | 2022-03-24T13:16:04.000Z | src/pipeline_oriented_analytics/dataframe/__init__.py | bbiletskyy/pipeline-oriented-analytics | 35ece0907a0792e9b7d13a7759c9a32045819842 | [
"MIT"
] | null | null | null | src/pipeline_oriented_analytics/dataframe/__init__.py | bbiletskyy/pipeline-oriented-analytics | 35ece0907a0792e9b7d13a7759c9a32045819842 | [
"MIT"
] | 1 | 2020-02-27T09:22:55.000Z | 2020-02-27T09:22:55.000Z | from .csv_data_frame import *
from .parquet_data_frame import *
from .temp_view_data_frame import *
| 20 | 34 | 0.82 | 16 | 100 | 4.6875 | 0.5 | 0.36 | 0.6 | 0.506667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12 | 100 | 4 | 35 | 25 | 0.852273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
f22837649b3002e0678479a20620f0cbf287d56a | 14,511 | py | Python | polaris/polaris/tests/withdraw_test.py | vcarl/django-polaris | bacb92b6fa30e867ce6cd780cb9e6970de4d8abc | [
"Apache-2.0"
] | null | null | null | polaris/polaris/tests/withdraw_test.py | vcarl/django-polaris | bacb92b6fa30e867ce6cd780cb9e6970de4d8abc | [
"Apache-2.0"
] | null | null | null | polaris/polaris/tests/withdraw_test.py | vcarl/django-polaris | bacb92b6fa30e867ce6cd780cb9e6970de4d8abc | [
"Apache-2.0"
] | null | null | null | """This module tests the `/withdraw` endpoint."""
import json
from unittest.mock import patch
import pytest
from stellar_sdk.keypair import Keypair
from stellar_sdk.transaction_envelope import TransactionEnvelope
from polaris import settings
from polaris.helpers import format_memo_horizon
from polaris.management.commands.watch_transactions import update_transaction
from polaris.models import Transaction
from polaris.tests.helpers import mock_check_auth_success
WITHDRAW_PATH = "/transactions/withdraw/interactive"
@pytest.mark.django_db
@patch("polaris.helpers.check_auth", side_effect=mock_check_auth_success)
def test_withdraw_success(mock_check, client, acc1_usd_withdrawal_transaction_factory):
"""`GET /withdraw` succeeds with no optional arguments."""
del mock_check
acc1_usd_withdrawal_transaction_factory()
response = client.post(WITHDRAW_PATH, {"asset_code": "USD"}, follow=True)
content = json.loads(response.content)
assert content["type"] == "interactive_customer_info_needed"
@pytest.mark.django_db
@patch("polaris.helpers.check_auth", side_effect=mock_check_auth_success)
def test_withdraw_invalid_asset(
mock_check, client, acc1_usd_withdrawal_transaction_factory
):
"""`GET /withdraw` fails with an invalid asset argument."""
del mock_check
acc1_usd_withdrawal_transaction_factory()
response = client.post(WITHDRAW_PATH, {"asset_code": "ETH"}, follow=True)
content = json.loads(response.content)
assert response.status_code == 400
assert content == {"error": "invalid operation for asset ETH"}
@pytest.mark.django_db
@patch("polaris.helpers.check_auth", side_effect=mock_check_auth_success)
def test_withdraw_no_asset(mock_check, client):
"""`GET /withdraw fails with no asset argument."""
del mock_check
response = client.post(WITHDRAW_PATH, follow=True)
content = json.loads(response.content)
assert response.status_code == 400
assert content == {"error": "'asset_code' is required"}
@pytest.mark.django_db
@patch("polaris.helpers.check_auth", side_effect=mock_check_auth_success)
@patch("polaris.helpers.authenticate_session_helper")
def test_withdraw_interactive_no_txid(
mock_check, mock_auth, client, acc1_usd_withdrawal_transaction_factory
):
"""
`GET /transactions/withdraw/webapp` fails with no transaction_id.
"""
del mock_check, mock_auth
acc1_usd_withdrawal_transaction_factory()
    response = client.get("/transactions/withdraw/webapp?", follow=True)
assert response.status_code == 400
@pytest.mark.django_db
@patch("polaris.helpers.check_auth", side_effect=mock_check_auth_success)
@patch("polaris.helpers.authenticate_session_helper")
def test_withdraw_interactive_no_asset(
mock_check, mock_auth, client, acc1_usd_withdrawal_transaction_factory
):
"""
`GET /transactions/withdraw/webapp` fails with no asset_code.
"""
del mock_check, mock_auth
acc1_usd_withdrawal_transaction_factory()
response = client.get(
f"/transactions/withdraw/webapp?transaction_id=2", follow=True
)
assert response.status_code == 400
@pytest.mark.django_db
@patch("polaris.helpers.check_auth", side_effect=mock_check_auth_success)
@patch("polaris.helpers.authenticate_session_helper")
def test_withdraw_interactive_invalid_asset(
mock_check, mock_auth, client, acc1_usd_withdrawal_transaction_factory
):
"""
`GET /transactions/withdraw/webapp` fails with invalid asset_code.
"""
del mock_check, mock_auth
acc1_usd_withdrawal_transaction_factory()
response = client.get(
f"/transactions/withdraw/webapp?transaction_id=2&asset_code=ETH", follow=True
)
assert response.status_code == 400
# TODO: Decompose the below tests, since they call the same logic. The issue: Pytest complains
# about decomposition when passing fixtures to a helper function.
@pytest.mark.django_db
@patch(
"polaris.management.commands.watch_transactions.stream_transactions",
return_value=[{}],
)
@patch("polaris.helpers.check_auth", side_effect=mock_check_auth_success)
def test_withdraw_interactive_failure_no_memotype(
mock_check, mock_transactions, client, acc1_usd_withdrawal_transaction_factory
):
"""
`GET /transactions/withdraw/webapp` fails with no `memo_type` in Horizon response.
"""
del mock_check, mock_transactions
acc1_usd_withdrawal_transaction_factory()
response = client.post(WITHDRAW_PATH, {"asset_code": "USD"}, follow=True)
content = json.loads(response.content)
assert content["type"] == "interactive_customer_info_needed"
transaction_id = content["id"]
url = content["url"]
response = client.get(url)
assert response.status_code == 200
assert client.session["authenticated"] is True
url, args_str = url.split("?")
response = client.post(
url + "/submit?" + args_str,
{"amount": 20, "bank_account": "123456", "bank": "Bank", "account": "Account"},
)
assert response.status_code == 302
assert (
Transaction.objects.get(id=transaction_id).status
== Transaction.STATUS.pending_user_transfer_start
)
@pytest.mark.django_db
@patch(
"polaris.management.commands.watch_transactions.stream_transactions",
return_value=[{"memo_type": "not_hash"}],
)
@patch("polaris.helpers.check_auth", side_effect=mock_check_auth_success)
def test_withdraw_interactive_failure_incorrect_memotype(
mock_check, mock_transactions, client, acc1_usd_withdrawal_transaction_factory
):
"""
`GET /transactions/withdraw/webapp` fails with incorrect `memo_type` in Horizon response.
"""
del mock_check, mock_transactions
acc1_usd_withdrawal_transaction_factory()
response = client.post(WITHDRAW_PATH, {"asset_code": "USD"}, follow=True)
content = json.loads(response.content)
assert content["type"] == "interactive_customer_info_needed"
transaction_id = content["id"]
url = content["url"]
response = client.get(url)
assert response.status_code == 200
assert client.session["authenticated"] is True
url, args_str = url.split("?")
response = client.post(
url + "/submit?" + args_str,
{"amount": 20, "bank_account": "123456", "bank": "Bank", "account": "Account"},
)
assert response.status_code == 302
assert (
Transaction.objects.get(id=transaction_id).status
== Transaction.STATUS.pending_user_transfer_start
)
@pytest.mark.django_db
@patch(
"polaris.management.commands.watch_transactions.stream_transactions",
return_value=[{"memo_type": "hash"}],
)
@patch("polaris.helpers.check_auth", side_effect=mock_check_auth_success)
def test_withdraw_interactive_failure_no_memo(
mock_check, mock_transactions, client, acc1_usd_withdrawal_transaction_factory
):
"""
`GET /transactions/withdraw/webapp` fails with no `memo` in Horizon response.
"""
del mock_check, mock_transactions
acc1_usd_withdrawal_transaction_factory()
response = client.post(WITHDRAW_PATH, {"asset_code": "USD"}, follow=True)
content = json.loads(response.content)
assert content["type"] == "interactive_customer_info_needed"
transaction_id = content["id"]
url = content["url"]
response = client.get(url)
assert response.status_code == 200
assert client.session["authenticated"] is True
url, args_str = url.split("?")
response = client.post(
url + "/submit?" + args_str,
{"amount": 20, "bank_account": "123456", "bank": "Bank", "account": "Account"},
)
assert response.status_code == 302
assert (
Transaction.objects.get(id=transaction_id).status
== Transaction.STATUS.pending_user_transfer_start
)
@pytest.mark.django_db
@patch(
"polaris.management.commands.watch_transactions.stream_transactions",
return_value=[{"memo_type": "hash", "memo": "wrong_memo"}],
)
@patch("polaris.helpers.check_auth", side_effect=mock_check_auth_success)
def test_withdraw_interactive_failure_incorrect_memo(
mock_check, mock_transactions, client, acc1_usd_withdrawal_transaction_factory
):
"""
`GET /transactions/withdraw/webapp` fails with incorrect `memo` in Horizon response.
"""
del mock_check, mock_transactions
acc1_usd_withdrawal_transaction_factory()
response = client.post(WITHDRAW_PATH, {"asset_code": "USD"}, follow=True)
content = json.loads(response.content)
assert content["type"] == "interactive_customer_info_needed"
transaction_id = content["id"]
url = content["url"]
response = client.get(url)
assert response.status_code == 200
assert client.session["authenticated"] is True
url, args_str = url.split("?")
response = client.post(
url + "/submit?" + args_str,
{"amount": 20, "bank_account": "123456", "bank": "Bank", "account": "Account"},
)
assert response.status_code == 302
assert (
Transaction.objects.get(id=transaction_id).status
== Transaction.STATUS.pending_user_transfer_start
)
@pytest.mark.django_db
@patch("polaris.helpers.check_auth", side_effect=mock_check_auth_success)
def test_withdraw_interactive_success_transaction_unsuccessful(
mock_check, client, acc1_usd_withdrawal_transaction_factory
):
"""
`GET /transactions/withdraw/webapp` changes transaction to `pending_stellar`
with unsuccessful transaction.
"""
del mock_check
acc1_usd_withdrawal_transaction_factory()
response = client.post(WITHDRAW_PATH, {"asset_code": "USD"}, follow=True)
content = json.loads(response.content)
assert content["type"] == "interactive_customer_info_needed"
transaction_id = content["id"]
url = content["url"]
response = client.get(url)
assert response.status_code == 200
assert client.session["authenticated"] is True
url, args_str = url.split("?")
response = client.post(
url + "/submit?" + args_str,
{"amount": 50, "bank_account": "123456", "bank": "Bank", "account": "Account"},
)
assert response.status_code == 302
transaction = Transaction.objects.get(id=transaction_id)
assert transaction.status == Transaction.STATUS.pending_user_transfer_start
withdraw_memo = transaction.withdraw_memo
mock_response = {
"memo_type": "hash",
"memo": format_memo_horizon(withdraw_memo),
"successful": False,
"id": "c5e8ada72c0e3c248ac7e1ec0ec97e204c06c295113eedbe632020cd6dc29ff8",
"envelope_xdr": "AAAAAEU1B1qeJrucdqkbk1mJsnuFaNORfrOAzJyaAy1yzW8TAAAAZAAE2s4AAAABAAAAAAAAAAAAAAABAAAAAAAAAAEAAAAAoUKq+1Z2GGB98qurLSmocHafvG6S+YzKNE6oiHIXo6kAAAABVVNEAAAAAACnUE2lfwuFZ+G+dkc+qiL0MwxB0CoR0au324j+JC9exQAAAAAdzWUAAAAAAAAAAAA=",
}
update_transaction(mock_response, transaction)
assert Transaction.objects.get(id=transaction_id).status == Transaction.STATUS.error
@pytest.mark.django_db
@patch("polaris.helpers.check_auth", side_effect=mock_check_auth_success)
def test_withdraw_interactive_success_transaction_successful(
mock_check, client, acc1_usd_withdrawal_transaction_factory
):
"""
`GET /transactions/withdraw/webapp` changes transaction to `completed`
with successful transaction.
"""
del mock_check
acc1_usd_withdrawal_transaction_factory()
response = client.post(WITHDRAW_PATH, {"asset_code": "USD"}, follow=True)
content = json.loads(response.content)
assert content["type"] == "interactive_customer_info_needed"
transaction_id = content["id"]
url = content["url"]
response = client.get(url)
assert response.status_code == 200
assert client.session["authenticated"] is True
url, args_str = url.split("?")
response = client.post(
url + "/submit?" + args_str,
{"amount": 50, "bank_account": "123456", "bank": "Bank", "account": "Account"},
)
assert response.status_code == 302
transaction = Transaction.objects.get(id=transaction_id)
assert transaction.status == Transaction.STATUS.pending_user_transfer_start
withdraw_memo = transaction.withdraw_memo
mock_response = {
"memo_type": "hash",
"memo": format_memo_horizon(withdraw_memo),
"successful": True,
"id": "c5e8ada72c0e3c248ac7e1ec0ec97e204c06c295113eedbe632020cd6dc29ff8",
"envelope_xdr": "AAAAAEU1B1qeJrucdqkbk1mJsnuFaNORfrOAzJyaAy1yzW8TAAAAZAAE2s4AAAABAAAAAAAAAAAAAAABAAAAAAAAAAEAAAAAoUKq+1Z2GGB98qurLSmocHafvG6S+YzKNE6oiHIXo6kAAAABVVNEAAAAAACnUE2lfwuFZ+G+dkc+qiL0MwxB0CoR0au324j+JC9exQAAAAAdzWUAAAAAAAAAAAA=",
}
update_transaction(mock_response, transaction)
assert transaction.status == Transaction.STATUS.completed
assert transaction.completed_at
@pytest.mark.django_db
def test_withdraw_authenticated_success(
client, acc1_usd_withdrawal_transaction_factory
):
"""`GET /withdraw` succeeds with the SEP 10 authentication flow."""
client_address = "GDKFNRUATPH4BSZGVFDRBIGZ5QAFILVFRIRYNSQ4UO7V2ZQAPRNL73RI"
client_seed = "SDKWSBERDHP3SXW5A3LXSI7FWMMO5H7HG33KNYBKWH2HYOXJG2DXQHQY"
acc1_usd_withdrawal_transaction_factory()
# SEP 10.
response = client.get(f"/auth?account={client_address}", follow=True)
content = json.loads(response.content)
envelope_xdr = content["transaction"]
envelope_object = TransactionEnvelope.from_xdr(
envelope_xdr, network_passphrase=settings.STELLAR_NETWORK_PASSPHRASE
)
client_signing_key = Keypair.from_secret(client_seed)
envelope_object.sign(client_signing_key)
client_signed_envelope_xdr = envelope_object.to_xdr()
response = client.post(
"/auth",
data={"transaction": client_signed_envelope_xdr},
content_type="application/json",
)
content = json.loads(response.content)
encoded_jwt = content["token"]
assert encoded_jwt
header = {"HTTP_AUTHORIZATION": f"Bearer {encoded_jwt}"}
response = client.post(WITHDRAW_PATH, {"asset_code": "USD"}, follow=True, **header)
content = json.loads(response.content)
assert content["type"] == "interactive_customer_info_needed"
@pytest.mark.django_db
def test_withdraw_no_jwt(client, acc1_usd_withdrawal_transaction_factory):
"""`GET /withdraw` fails if a required JWT isn't provided."""
acc1_usd_withdrawal_transaction_factory()
response = client.post(WITHDRAW_PATH, {"asset_code": "USD"}, follow=True)
content = json.loads(response.content)
assert response.status_code == 400
assert content == {"error": "JWT must be passed as 'Authorization' header"}
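Nearly every test above swaps out real SEP-10 authentication by patching `polaris.helpers.check_auth` with `side_effect=mock_check_auth_success`: `side_effect` makes the mock delegate each call to the supplied callable instead of returning a canned value. A minimal illustration with `unittest.mock` alone (the helper body here is a hypothetical stand-in for the Polaris test helper):

```python
from unittest.mock import Mock

def mock_check_auth_success(account):
    # stand-in for the test helper: pretend auth always passes
    return {"account": account, "authenticated": True}

m = Mock(side_effect=mock_check_auth_success)
result = m("GABC")
print(result["authenticated"], m.call_count)  # → True 1
```

Inside a `@patch(...)` decorator the same mock is injected as the first test argument, which is why each test body starts with `del mock_check`.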
| 38.086614 | 247 | 0.73999 | 1,673 | 14,511 | 6.132098 | 0.109384 | 0.032459 | 0.043084 | 0.070962 | 0.849011 | 0.828151 | 0.828151 | 0.815869 | 0.815869 | 0.809826 | 0 | 0.021492 | 0.1503 | 14,511 | 380 | 248 | 38.186842 | 0.810543 | 0.083936 | 0 | 0.727586 | 0 | 0 | 0.212015 | 0.140428 | 0 | 0 | 0 | 0.002632 | 0.155172 | 1 | 0.048276 | false | 0.006897 | 0.034483 | 0 | 0.082759 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
1ef5e10e6608f3274eaeb2f917b246ee0756ad40 | 36,996 | py | Python | tensorflow/python/ops/bincount_ops_test.py | chenzhengda/tensorflow | 8debb698097670458b5f21d728bc6f734a7b5a53 | [
"Apache-2.0"
] | 74 | 2020-07-06T17:11:39.000Z | 2022-01-28T06:31:28.000Z | tensorflow/python/ops/bincount_ops_test.py | chenzhengda/tensorflow | 8debb698097670458b5f21d728bc6f734a7b5a53 | [
"Apache-2.0"
] | 9 | 2020-10-13T23:25:29.000Z | 2022-02-10T06:54:48.000Z | tensorflow/python/ops/bincount_ops_test.py | chenzhengda/tensorflow | 8debb698097670458b5f21d728bc6f734a7b5a53 | [
"Apache-2.0"
] | 12 | 2020-07-08T07:27:17.000Z | 2021-12-27T08:54:27.000Z | # Copyright 2020 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for bincount ops."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from absl.testing import parameterized
import numpy as np
from tensorflow.python.eager import context
from tensorflow.python.framework import errors
from tensorflow.python.framework import ops
from tensorflow.python.framework import sparse_tensor
from tensorflow.python.framework import test_util
from tensorflow.python.ops import bincount_ops
from tensorflow.python.ops import gen_count_ops
from tensorflow.python.ops import sparse_ops
from tensorflow.python.ops.ragged import ragged_factory_ops
from tensorflow.python.ops.ragged import ragged_tensor
from tensorflow.python.platform import test


class TestSparseCount(test.TestCase, parameterized.TestCase):
@parameterized.named_parameters(
{
"testcase_name": "_no_maxlength",
"x": np.array([[3, 2, 1], [5, 4, 4]], dtype=np.int32),
"expected_indices": [[0, 1], [0, 2], [0, 3], [1, 4], [1, 5]],
"expected_values": [1, 1, 1, 2, 1],
"expected_shape": [2, 6]
}, {
"testcase_name": "_maxlength",
"x": np.array([[3, 2, 1, 7], [7, 0, 4, 4]], dtype=np.int32),
"maxlength": 7,
"expected_indices": [[0, 1], [0, 2], [0, 3], [1, 0], [1, 4]],
"expected_values": [1, 1, 1, 1, 2],
"expected_shape": [2, 7]
}, {
"testcase_name": "_minlength",
"x": np.array([[3, 2, 1, 7], [7, 0, 4, 4]], dtype=np.int32),
"minlength": 9,
"expected_indices": [[0, 1], [0, 2], [0, 3], [0, 7], [1, 0], [1, 4],
[1, 7]],
"expected_values": [1, 1, 1, 1, 1, 2, 1],
"expected_shape": [2, 9]
}, {
"testcase_name": "_minlength_larger_values",
"x": np.array([[3, 2, 1, 7], [7, 0, 4, 4]], dtype=np.int32),
"minlength": 3,
"expected_indices": [[0, 1], [0, 2], [0, 3], [0, 7], [1, 0], [1, 4],
[1, 7]],
"expected_values": [1, 1, 1, 1, 1, 2, 1],
"expected_shape": [2, 8]
}, {
"testcase_name": "_no_maxlength_binary",
"x": np.array([[3, 2, 1], [5, 4, 4]], dtype=np.int32),
"expected_indices": [[0, 1], [0, 2], [0, 3], [1, 4], [1, 5]],
"expected_values": [1, 1, 1, 1, 1],
"expected_shape": [2, 6],
"binary_output": True,
}, {
"testcase_name": "_maxlength_binary",
"x": np.array([[3, 2, 1, 7], [7, 0, 4, 4]], dtype=np.int32),
"maxlength": 7,
"expected_indices": [[0, 1], [0, 2], [0, 3], [1, 0], [1, 4]],
"expected_values": [1, 1, 1, 1, 1],
"expected_shape": [2, 7],
"binary_output": True,
}, {
"testcase_name": "_minlength_binary",
"x": np.array([[3, 2, 1, 7], [7, 0, 4, 4]], dtype=np.int32),
"minlength": 9,
"expected_indices": [[0, 1], [0, 2], [0, 3], [0, 7], [1, 0], [1, 4],
[1, 7]],
"expected_values": [1, 1, 1, 1, 1, 1, 1],
"expected_shape": [2, 9],
"binary_output": True,
}, {
"testcase_name": "_minlength_larger_values_binary",
"x": np.array([[3, 2, 1, 7], [7, 0, 4, 4]], dtype=np.int32),
"minlength": 3,
"expected_indices": [[0, 1], [0, 2], [0, 3], [0, 7], [1, 0], [1, 4],
[1, 7]],
"expected_values": [1, 1, 1, 1, 1, 1, 1],
"expected_shape": [2, 8],
"binary_output": True,
}, {
"testcase_name": "_no_maxlength_weights",
"x": np.array([[3, 2, 1], [5, 4, 4]], dtype=np.int32),
"expected_indices": [[0, 1], [0, 2], [0, 3], [1, 4], [1, 5]],
"expected_values": [2, 1, 0.5, 9, 3],
"expected_shape": [2, 6],
"weights": [[0.5, 1, 2], [3, 4, 5]]
}, {
"testcase_name": "_maxlength_weights",
"x": np.array([[3, 2, 1, 7], [7, 0, 4, 4]], dtype=np.int32),
"maxlength": 7,
"expected_indices": [[0, 1], [0, 2], [0, 3], [1, 0], [1, 4]],
"expected_values": [2, 1, 0.5, 3, 9],
"expected_shape": [2, 7],
"weights": [[0.5, 1, 2, 11], [7, 3, 4, 5]]
}, {
"testcase_name": "_minlength_weights",
"x": np.array([[3, 2, 1, 7], [7, 0, 4, 4]], dtype=np.int32),
"minlength": 9,
"expected_indices": [[0, 1], [0, 2], [0, 3], [0, 7], [1, 0], [1, 4],
[1, 7]],
"expected_values": [2, 1, 0.5, 3, 5, 13, 4],
"expected_shape": [2, 9],
"weights": [[0.5, 1, 2, 3], [4, 5, 6, 7]]
}, {
"testcase_name": "_minlength_larger_values_weights",
"x": np.array([[3, 2, 1, 7], [7, 0, 4, 4]], dtype=np.int32),
"minlength": 3,
"expected_indices": [[0, 1], [0, 2], [0, 3], [0, 7], [1, 0], [1, 4],
[1, 7]],
"expected_values": [2, 1, 0.5, 3, 5, 13, 4],
"expected_shape": [2, 8],
"weights": [[0.5, 1, 2, 3], [4, 5, 6, 7]]
}, {
"testcase_name": "_1d",
"x": np.array([3, 2, 1, 1], dtype=np.int32),
"expected_indices": [[1], [2], [3]],
"expected_values": [2, 1, 1],
"expected_shape": [4]
}, {
"testcase_name": "_all_axes",
"x": np.array([[3, 2, 1], [5, 4, 4]], dtype=np.int32),
"expected_indices": [[1], [2], [3], [4], [5]],
"expected_values": [1, 1, 1, 2, 1],
"expected_shape": [6],
"axis": None
})
def test_dense_input(self,
x,
expected_indices,
expected_values,
expected_shape,
minlength=None,
maxlength=None,
binary_output=False,
weights=None,
axis=-1):
y = bincount_ops.sparse_bincount(
x,
weights=weights,
minlength=minlength,
maxlength=maxlength,
binary_output=binary_output,
axis=axis)
self.assertAllEqual(expected_indices, y.indices)
self.assertAllEqual(expected_values, y.values)
self.assertAllEqual(expected_shape, y.dense_shape)
@parameterized.named_parameters(
{
"testcase_name":
"_no_maxlength",
"x":
np.array([[3, 0, 1, 0], [0, 0, 0, 0], [5, 0, 4, 4]],
dtype=np.int32),
"expected_indices": [[0, 1], [0, 3], [2, 4], [2, 5]],
"expected_values": [1, 1, 2, 1],
"expected_shape": [3, 6],
},
{
"testcase_name":
"_maxlength",
"x":
np.array([[3, 0, 1, 0], [7, 0, 0, 0], [5, 0, 4, 4]],
dtype=np.int32),
"expected_indices": [[0, 1], [0, 3], [2, 4], [2, 5]],
"expected_values": [1, 1, 2, 1],
"expected_shape": [3, 7],
"maxlength":
7,
},
{
"testcase_name":
"_minlength",
"x":
np.array([[3, 0, 1, 0], [7, 0, 0, 0], [5, 0, 4, 4]],
dtype=np.int32),
"expected_indices": [[0, 1], [0, 3], [1, 7], [2, 4], [2, 5]],
"expected_values": [1, 1, 1, 2, 1],
"expected_shape": [3, 9],
"minlength":
9,
},
{
"testcase_name":
"_minlength_larger_values",
"x":
np.array([[3, 0, 1, 0], [7, 0, 0, 0], [5, 0, 4, 4]],
dtype=np.int32),
"expected_indices": [[0, 1], [0, 3], [1, 7], [2, 4], [2, 5]],
"expected_values": [1, 1, 1, 2, 1],
"expected_shape": [3, 8],
"minlength":
3,
},
{
"testcase_name":
"_no_maxlength_binary",
"x":
np.array([[3, 0, 1, 0], [0, 0, 0, 0], [5, 0, 4, 4]],
dtype=np.int32),
"expected_indices": [[0, 1], [0, 3], [2, 4], [2, 5]],
"expected_values": [1, 1, 1, 1],
"expected_shape": [3, 6],
"binary_output":
True,
},
{
"testcase_name":
"_maxlength_binary",
"x":
np.array([[3, 0, 1, 0], [0, 0, 7, 0], [5, 0, 4, 4]],
dtype=np.int32),
"expected_indices": [[0, 1], [0, 3], [2, 4], [2, 5]],
"expected_values": [1, 1, 1, 1],
"expected_shape": [3, 7],
"maxlength":
7,
"binary_output":
True,
},
{
"testcase_name":
"_minlength_binary",
"x":
np.array([[3, 0, 1, 0], [7, 0, 0, 0], [5, 0, 4, 4]],
dtype=np.int32),
"expected_indices": [[0, 1], [0, 3], [1, 7], [2, 4], [2, 5]],
"expected_values": [1, 1, 1, 1, 1],
"expected_shape": [3, 9],
"minlength":
9,
"binary_output":
True,
},
{
"testcase_name":
"_minlength_larger_values_binary",
"x":
np.array([[3, 0, 1, 0], [7, 0, 0, 0], [5, 0, 4, 4]],
dtype=np.int32),
"expected_indices": [[0, 1], [0, 3], [1, 7], [2, 4], [2, 5]],
"expected_values": [1, 1, 1, 1, 1],
"expected_shape": [3, 8],
"minlength":
3,
"binary_output":
True,
},
{
"testcase_name":
"_no_maxlength_weights",
"x":
np.array([[3, 0, 1, 0], [0, 0, 0, 0], [5, 0, 4, 4]],
dtype=np.int32),
"expected_indices": [[0, 1], [0, 3], [2, 4], [2, 5]],
"expected_values": [2, 6, 7, 10],
"expected_shape": [3, 6],
"weights":
np.array([[6, 0, 2, 0], [0, 0, 0, 0], [10, 0, 3.5, 3.5]]),
},
{
"testcase_name":
"_maxlength_weights",
"x":
np.array([[3, 0, 1, 0], [0, 0, 7, 0], [5, 0, 4, 4]],
dtype=np.int32),
"expected_indices": [[0, 1], [0, 3], [2, 4], [2, 5]],
"expected_values": [2, 6, 7, 10],
"expected_shape": [3, 7],
"maxlength":
7,
"weights":
np.array([[6, 0, 2, 0], [0, 0, 14, 0], [10, 0, 3.5, 3.5]]),
},
{
"testcase_name":
"_minlength_weights",
"x":
np.array([[3, 0, 1, 0], [7, 0, 0, 0], [5, 0, 4, 4]],
dtype=np.int32),
"expected_indices": [[0, 1], [0, 3], [1, 7], [2, 4], [2, 5]],
"expected_values": [2, 6, 14, 6.5, 10],
"expected_shape": [3, 9],
"minlength":
9,
"weights":
np.array([[6, 0, 2, 0], [14, 0, 0, 0], [10, 0, 3, 3.5]]),
},
{
"testcase_name":
"_minlength_larger_values_weights",
"x":
np.array([[3, 0, 1, 0], [7, 0, 0, 0], [5, 0, 4, 4]],
dtype=np.int32),
"expected_indices": [[0, 1], [0, 3], [1, 7], [2, 4], [2, 5]],
"expected_values": [2, 6, 14, 6.5, 10],
"expected_shape": [3, 8],
"minlength":
3,
"weights":
np.array([[6, 0, 2, 0], [14, 0, 0, 0], [10, 0, 3, 3.5]]),
},
{
"testcase_name": "_1d",
"x": np.array([3, 0, 1, 1], dtype=np.int32),
"expected_indices": [[1], [3]],
"expected_values": [2, 1],
"expected_shape": [4],
},
{
"testcase_name":
"_all_axes",
"x":
np.array([[3, 0, 1, 0], [0, 0, 0, 0], [5, 0, 4, 4]],
dtype=np.int32),
"expected_indices": [[1], [3], [4], [5]],
"expected_values": [1, 1, 2, 1],
"expected_shape": [6],
"axis":
None,
},
)
def test_sparse_input(self,
x,
expected_indices,
expected_values,
expected_shape,
maxlength=None,
minlength=None,
binary_output=False,
weights=None,
axis=-1):
x_sparse = sparse_ops.from_dense(x)
w_sparse = sparse_ops.from_dense(weights) if weights is not None else None
y = bincount_ops.sparse_bincount(
x_sparse,
weights=w_sparse,
minlength=minlength,
maxlength=maxlength,
binary_output=binary_output,
axis=axis)
self.assertAllEqual(expected_indices, y.indices)
self.assertAllEqual(expected_values, y.values)
self.assertAllEqual(expected_shape, y.dense_shape)
@parameterized.named_parameters(
{
"testcase_name": "_no_maxlength",
"x": [[], [], [3, 0, 1], [], [5, 0, 4, 4]],
"expected_indices": [[2, 0], [2, 1], [2, 3], [4, 0], [4, 4], [4, 5]],
"expected_values": [1, 1, 1, 1, 2, 1],
"expected_shape": [5, 6],
},
{
"testcase_name": "_maxlength",
"x": [[], [], [3, 0, 1], [7], [5, 0, 4, 4]],
"maxlength": 7,
"expected_indices": [[2, 0], [2, 1], [2, 3], [4, 0], [4, 4], [4, 5]],
"expected_values": [1, 1, 1, 1, 2, 1],
"expected_shape": [5, 7],
},
{
"testcase_name": "_minlength",
"x": [[], [], [3, 0, 1], [7], [5, 0, 4, 4]],
"minlength": 9,
"expected_indices": [[2, 0], [2, 1], [2, 3], [3, 7], [4, 0], [4, 4],
[4, 5]],
"expected_values": [1, 1, 1, 1, 1, 2, 1],
"expected_shape": [5, 9],
},
{
"testcase_name": "_minlength_larger_values",
"x": [[], [], [3, 0, 1], [7], [5, 0, 4, 4]],
"minlength": 3,
"expected_indices": [[2, 0], [2, 1], [2, 3], [3, 7], [4, 0], [4, 4],
[4, 5]],
"expected_values": [1, 1, 1, 1, 1, 2, 1],
"expected_shape": [5, 8],
},
{
"testcase_name": "_no_maxlength_binary",
"x": [[], [], [3, 0, 1], [], [5, 0, 4, 4]],
"expected_indices": [[2, 0], [2, 1], [2, 3], [4, 0], [4, 4], [4, 5]],
"expected_values": [1, 1, 1, 1, 1, 1],
"expected_shape": [5, 6],
"binary_output": True,
},
{
"testcase_name": "_maxlength_binary",
"x": [[], [], [3, 0, 1], [7], [5, 0, 4, 4]],
"maxlength": 7,
"expected_indices": [[2, 0], [2, 1], [2, 3], [4, 0], [4, 4], [4, 5]],
"expected_values": [1, 1, 1, 1, 1, 1],
"expected_shape": [5, 7],
"binary_output": True,
},
{
"testcase_name": "_minlength_binary",
"x": [[], [], [3, 0, 1], [7], [5, 0, 4, 4]],
"minlength": 9,
"expected_indices": [[2, 0], [2, 1], [2, 3], [3, 7], [4, 0], [4, 4],
[4, 5]],
"expected_values": [1, 1, 1, 1, 1, 1, 1],
"expected_shape": [5, 9],
"binary_output": True,
},
{
"testcase_name": "_minlength_larger_values_binary",
"x": [[], [], [3, 0, 1], [7], [5, 0, 4, 4]],
"minlength": 3,
"binary_output": True,
"expected_indices": [[2, 0], [2, 1], [2, 3], [3, 7], [4, 0], [4, 4],
[4, 5]],
"expected_values": [1, 1, 1, 1, 1, 1, 1],
"expected_shape": [5, 8],
},
{
"testcase_name": "_no_maxlength_weights",
"x": [[], [], [3, 0, 1], [], [5, 0, 4, 4]],
"expected_indices": [[2, 0], [2, 1], [2, 3], [4, 0], [4, 4], [4, 5]],
"expected_values": [0.5, 2, 6, 0.25, 8, 10],
"expected_shape": [5, 6],
"weights": [[], [], [6, 0.5, 2], [], [10, 0.25, 5, 3]],
},
{
"testcase_name": "_maxlength_weights",
"x": [[], [], [3, 0, 1], [7], [5, 0, 4, 4]],
"maxlength": 7,
"expected_indices": [[2, 0], [2, 1], [2, 3], [4, 0], [4, 4], [4, 5]],
"expected_values": [0.5, 2, 6, 0.25, 8, 10],
"expected_shape": [5, 7],
"weights": [[], [], [6, 0.5, 2], [14], [10, 0.25, 5, 3]],
},
{
"testcase_name": "_minlength_weights",
"x": [[], [], [3, 0, 1], [7], [5, 0, 4, 4]],
"minlength": 9,
"expected_indices": [[2, 0], [2, 1], [2, 3], [3, 7], [4, 0], [4, 4],
[4, 5]],
"expected_values": [0.5, 2, 6, 14, 0.25, 8, 10],
"expected_shape": [5, 9],
"weights": [[], [], [6, 0.5, 2], [14], [10, 0.25, 5, 3]],
},
{
"testcase_name": "_minlength_larger_values_weights",
"x": [[], [], [3, 0, 1], [7], [5, 0, 4, 4]],
"minlength": 3,
"expected_indices": [[2, 0], [2, 1], [2, 3], [3, 7], [4, 0], [4, 4],
[4, 5]],
"expected_values": [0.5, 2, 6, 14, 0.25, 8, 10],
"expected_shape": [5, 8],
"weights": [[], [], [6, 0.5, 2], [14], [10, 0.25, 5, 3]],
},
{
"testcase_name": "_1d",
"x": [3, 0, 1, 1],
"expected_indices": [[0], [1], [3]],
"expected_values": [1, 2, 1],
"expected_shape": [4],
},
{
"testcase_name": "_all_axes",
"x": [[], [], [3, 0, 1], [], [5, 0, 4, 4]],
"expected_indices": [[0], [1], [3], [4], [5]],
"expected_values": [2, 1, 1, 2, 1],
"expected_shape": [6],
"axis": None,
},
)
def test_ragged_input(self,
x,
expected_indices,
expected_values,
expected_shape,
maxlength=None,
minlength=None,
binary_output=False,
weights=None,
axis=-1):
x_ragged = ragged_factory_ops.constant(x)
w = ragged_factory_ops.constant(weights) if weights is not None else None
y = bincount_ops.sparse_bincount(
x_ragged,
weights=w,
minlength=minlength,
maxlength=maxlength,
binary_output=binary_output,
axis=axis)
self.assertAllEqual(expected_indices, y.indices)
self.assertAllEqual(expected_values, y.values)
self.assertAllEqual(expected_shape, y.dense_shape)


class TestDenseBincount(test.TestCase, parameterized.TestCase):
@parameterized.parameters([{
"dtype": np.int32,
}, {
"dtype": np.int64,
}])
def test_sparse_input_all_count(self, dtype):
np.random.seed(42)
num_rows = 128
size = 1000
n_elems = 4096
inp_indices = np.random.randint(0, num_rows, (n_elems, 1))
inp_indices = np.concatenate([inp_indices, np.zeros((n_elems, 1))], axis=1)
inp_vals = np.random.randint(0, size, (n_elems,), dtype=dtype)
sparse_inp = sparse_tensor.SparseTensor(inp_indices, inp_vals,
[num_rows, 1])
np_out = np.bincount(inp_vals, minlength=size)
self.assertAllEqual(
np_out, self.evaluate(bincount_ops.bincount(sparse_inp, axis=0)))
@parameterized.parameters([{
"dtype": np.int32,
}, {
"dtype": np.int64,
}])
def test_sparse_input_all_count_with_weights(self, dtype):
np.random.seed(42)
num_rows = 128
size = 1000
n_elems = 4096
inp_indices = np.random.randint(0, num_rows, (n_elems, 1))
inp_indices = np.concatenate([inp_indices, np.zeros((n_elems, 1))], axis=1)
inp_vals = np.random.randint(0, size, (n_elems,), dtype=dtype)
sparse_inp = sparse_tensor.SparseTensor(inp_indices, inp_vals,
[num_rows, 1])
weight_vals = np.random.random((n_elems,))
sparse_weights = sparse_tensor.SparseTensor(inp_indices, weight_vals,
[num_rows, 1])
np_out = np.bincount(inp_vals, minlength=size, weights=weight_vals)
self.assertAllEqual(
np_out,
self.evaluate(bincount_ops.bincount(
sparse_inp, sparse_weights, axis=0)))
@parameterized.parameters([{
"dtype": np.int32,
}, {
"dtype": np.int64,
}])
def test_sparse_input_all_binary(self, dtype):
np.random.seed(42)
num_rows = 128
size = 10
n_elems = 4096
inp_indices = np.random.randint(0, num_rows, (n_elems, 1))
inp_indices = np.concatenate([inp_indices, np.zeros((n_elems, 1))], axis=1)
inp_vals = np.random.randint(0, size, (n_elems,), dtype=dtype)
sparse_inp = sparse_tensor.SparseTensor(inp_indices, inp_vals,
[num_rows, 1])
np_out = np.ones((size,))
self.assertAllEqual(
np_out,
self.evaluate(bincount_ops.bincount(sparse_inp, binary_output=True)))
@parameterized.parameters([{
"dtype": np.int32,
}, {
"dtype": np.int64,
}])
def test_sparse_input_col_reduce_count(self, dtype):
num_rows = 128
num_cols = 27
size = 100
np.random.seed(42)
inp = np.random.randint(0, size, (num_rows, num_cols), dtype=dtype)
np_out = np.reshape(
np.concatenate(
[np.bincount(inp[j, :], minlength=size) for j in range(num_rows)],
axis=0), (num_rows, size))
# from_dense will filter out 0s.
inp = inp + 1
# from_dense will cause OOM in GPU.
with ops.device("/CPU:0"):
inp_sparse = sparse_ops.from_dense(inp)
inp_sparse = sparse_tensor.SparseTensor(inp_sparse.indices,
inp_sparse.values - 1,
inp_sparse.dense_shape)
self.assertAllEqual(
np_out, self.evaluate(bincount_ops.bincount(arr=inp_sparse, axis=-1)))
@parameterized.parameters([{
"dtype": np.int32,
}, {
"dtype": np.int64,
}])
def test_sparse_input_col_reduce_binary(self, dtype):
num_rows = 128
num_cols = 27
size = 100
np.random.seed(42)
inp = np.random.randint(0, size, (num_rows, num_cols), dtype=dtype)
np_out = np.reshape(
np.concatenate([
np.where(np.bincount(inp[j, :], minlength=size) > 0, 1, 0)
for j in range(num_rows)
],
axis=0), (num_rows, size))
# from_dense will filter out 0s.
inp = inp + 1
# from_dense will cause OOM in GPU.
with ops.device("/CPU:0"):
inp_sparse = sparse_ops.from_dense(inp)
inp_sparse = sparse_tensor.SparseTensor(inp_sparse.indices,
inp_sparse.values - 1,
inp_sparse.dense_shape)
self.assertAllEqual(
np_out,
self.evaluate(
bincount_ops.bincount(arr=inp_sparse, axis=-1, binary_output=True)))
@parameterized.parameters([{
"dtype": np.int32,
}, {
"dtype": np.int64,
}])
def test_ragged_input_count(self, dtype):
x = ragged_factory_ops.constant([[], [], [3, 0, 1], [], [5, 0, 4, 4]],
dtype)
# pyformat: disable
expected_output = [
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0],
[1, 1, 0, 1, 0, 0],
[0, 0, 0, 0, 0, 0],
[1, 0, 0, 0, 2, 1]]
# pyformat: enable
self.assertAllEqual(expected_output,
self.evaluate(bincount_ops.bincount(arr=x, axis=-1)))
@parameterized.parameters([{
"dtype": np.int32,
}, {
"dtype": np.int64,
}])
def test_ragged_input_binary(self, dtype):
x = ragged_factory_ops.constant([[], [], [3, 0, 1], [], [5, 0, 4, 4]], dtype)
# pyformat: disable
expected_output = [
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0],
[1, 1, 0, 1, 0, 0],
[0, 0, 0, 0, 0, 0],
[1, 0, 0, 0, 1, 1]]
# pyformat: enable
self.assertAllEqual(
expected_output,
self.evaluate(
bincount_ops.bincount(arr=x, axis=-1, binary_output=True)))
@parameterized.parameters([{
"dtype": np.int32,
}, {
"dtype": np.int64,
}])
def test_ragged_input_count_with_weights(self, dtype):
x = ragged_factory_ops.constant([[], [], [3, 0, 1], [], [5, 0, 4, 4]])
weights = ragged_factory_ops.constant([[], [], [.1, .2, .3], [],
[.2, .5, .6, .3]])
# pyformat: disable
expected_output = [
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0],
[.2, .3, 0, .1, 0, 0],
[0, 0, 0, 0, 0, 0],
[.5, 0, 0, 0, .9, .2]]
# pyformat: enable
self.assertAllClose(
expected_output,
self.evaluate(bincount_ops.bincount(arr=x, weights=weights, axis=-1)))
@parameterized.parameters([{
"dtype": np.int32,
}, {
"dtype": np.int64,
}])
def test_ragged_input_count_np(self, dtype):
np.random.seed(42)
num_rows = 128
num_cols = 27
size = 1000
inp = np.random.randint(0, size, (num_rows, num_cols), dtype=dtype)
np_out = np.reshape(
np.concatenate(
[np.bincount(inp[j, :], minlength=size) for j in range(num_rows)],
axis=0), (num_rows, size))
x = ragged_tensor.RaggedTensor.from_tensor(inp)
self.assertAllEqual(
np_out,
self.evaluate(bincount_ops.bincount(arr=x, minlength=size, axis=-1)))
@parameterized.parameters([{
"dtype": np.int32,
}, {
"dtype": np.int64,
}])
def test_ragged_input_count_np_with_weights(self, dtype):
np.random.seed(42)
num_rows = 128
num_cols = 27
size = 1000
inp = np.random.randint(0, size, (num_rows, num_cols), dtype=dtype)
np_weight = np.random.random((num_rows, num_cols))
np_out = np.reshape(
np.concatenate([
np.bincount(inp[j, :], weights=np_weight[j, :], minlength=size)
for j in range(num_rows)
],
axis=0), (num_rows, size))
x = ragged_tensor.RaggedTensor.from_tensor(inp)
weights = ragged_tensor.RaggedTensor.from_tensor(np_weight)
self.assertAllEqual(
np_out,
self.evaluate(
bincount_ops.bincount(
arr=x, weights=weights, minlength=size, axis=-1)))


class TestSparseCountFailureModes(test.TestCase):
def test_dense_input_sparse_weights_fails(self):
x = np.array([[3, 2, 1], [5, 4, 4]], dtype=np.int32)
weights = sparse_ops.from_dense(
np.array([[3, 0, 1, 0], [0, 0, 0, 0], [5, 0, 4, 4]], dtype=np.int32))
with self.assertRaisesRegex(ValueError, "must be a tf.Tensor"):
self.evaluate(bincount_ops.sparse_bincount(x, weights=weights, axis=-1))
def test_dense_input_ragged_weights_fails(self):
x = np.array([[3, 2, 1], [5, 4, 4]], dtype=np.int32)
weights = ragged_factory_ops.constant([[6, 0.5, 2], [14], [10, 0.25, 5, 3]])
with self.assertRaisesRegex(ValueError, "must be a tf.Tensor"):
self.evaluate(bincount_ops.sparse_bincount(x, weights=weights, axis=-1))
def test_dense_input_wrong_shape_fails(self):
x = np.array([[3, 2, 1], [5, 4, 4]], dtype=np.int32)
weights = np.array([[3, 2], [5, 4], [4, 3]])
# Note: Eager mode and graph mode throw different errors here. Graph mode
# will fail with a ValueError from the shape checking logic, while Eager
# will fail with an InvalidArgumentError from the kernel itself.
if context.executing_eagerly():
with self.assertRaisesRegex(errors.InvalidArgumentError,
"must have the same shape"):
self.evaluate(bincount_ops.sparse_bincount(x, weights=weights, axis=-1))
else:
with self.assertRaisesRegex(ValueError, "both shapes must be equal"):
self.evaluate(bincount_ops.sparse_bincount(x, weights=weights, axis=-1))
def test_sparse_input_dense_weights_fails(self):
x = sparse_ops.from_dense(
np.array([[3, 0, 1, 0], [0, 0, 0, 0], [5, 0, 4, 4]], dtype=np.int32))
weights = np.array([[3, 2, 1], [5, 4, 4]], dtype=np.int32)
with self.assertRaisesRegex(ValueError, "must be a SparseTensor"):
self.evaluate(bincount_ops.sparse_bincount(x, weights=weights, axis=-1))
def test_sparse_input_ragged_weights_fails(self):
x = sparse_ops.from_dense(
np.array([[3, 0, 1, 0], [0, 0, 0, 0], [5, 0, 4, 4]], dtype=np.int32))
weights = ragged_factory_ops.constant([[6, 0.5, 2], [14], [10, 0.25, 5, 3]])
with self.assertRaisesRegex(ValueError, "must be a SparseTensor"):
self.evaluate(bincount_ops.sparse_bincount(x, weights=weights, axis=-1))
def test_sparse_input_wrong_indices_fails(self):
x = sparse_ops.from_dense(
np.array([[3, 0, 1, 0], [0, 0, 0, 0], [5, 0, 4, 4]], dtype=np.int32))
weights = sparse_ops.from_dense(
np.array([[3, 1, 0, 0], [0, 0, 0, 0], [5, 0, 4, 4]], dtype=np.int32))
with self.assertRaisesRegex(errors.InvalidArgumentError,
"must have the same indices"):
self.evaluate(bincount_ops.sparse_bincount(x, weights=weights, axis=-1))
def test_sparse_input_too_many_indices_fails(self):
x = sparse_ops.from_dense(
np.array([[3, 0, 1, 0], [0, 0, 0, 0], [5, 0, 4, 4]], dtype=np.int32))
weights = sparse_ops.from_dense(
np.array([[3, 1, 1, 0], [0, 0, 0, 0], [5, 0, 4, 4]], dtype=np.int32))
with self.assertRaisesRegex(errors.InvalidArgumentError,
"Incompatible shapes"):
self.evaluate(bincount_ops.sparse_bincount(x, weights=weights, axis=-1))
def test_sparse_input_wrong_shape_fails(self):
x = sparse_ops.from_dense(
np.array([[3, 0, 1, 0], [0, 0, 0, 0], [5, 0, 4, 4]], dtype=np.int32))
weights = sparse_ops.from_dense(
np.array([[3, 0, 1, 0], [0, 0, 0, 0], [5, 0, 4, 4], [0, 0, 0, 0]],
dtype=np.int32))
with self.assertRaisesRegex(errors.InvalidArgumentError,
"must have the same dense shape"):
self.evaluate(bincount_ops.sparse_bincount(x, weights=weights, axis=-1))
def test_ragged_input_dense_weights_fails(self):
x = ragged_factory_ops.constant([[6, 1, 2], [14], [10, 1, 5, 3]])
weights = np.array([[3, 2, 1], [5, 4, 4]], dtype=np.int32)
with self.assertRaisesRegex(ValueError, "must be a RaggedTensor"):
self.evaluate(bincount_ops.sparse_bincount(x, weights=weights, axis=-1))
def test_ragged_input_sparse_weights_fails(self):
x = ragged_factory_ops.constant([[6, 1, 2], [14], [10, 1, 5, 3]])
weights = sparse_ops.from_dense(
np.array([[3, 0, 1, 0], [0, 0, 0, 0], [5, 0, 4, 4]], dtype=np.int32))
with self.assertRaisesRegex(ValueError, "must be a RaggedTensor"):
self.evaluate(bincount_ops.sparse_bincount(x, weights=weights, axis=-1))
def test_ragged_input_different_shape_fails(self):
x = ragged_factory_ops.constant([[6, 1, 2], [14], [10, 1, 5, 3]])
weights = ragged_factory_ops.constant([[6, 0.5, 2], [], [10, 0.25, 5, 3]])
with self.assertRaisesRegex(errors.InvalidArgumentError,
"must have the same row splits"):
self.evaluate(bincount_ops.sparse_bincount(x, weights=weights, axis=-1))


class RawOpsHeapOobTest(test.TestCase, parameterized.TestCase):
@test_util.run_v1_only("Test security error")
def testSparseCountSparseOutputBadIndicesShapeTooSmall(self):
indices = [1]
values = [[1]]
weights = []
dense_shape = [10]
with self.assertRaisesRegex(ValueError,
"Shape must be rank 2 but is rank 1 for"):
self.evaluate(
gen_count_ops.SparseCountSparseOutput(
indices=indices,
values=values,
dense_shape=dense_shape,
weights=weights,
binary_output=True))


@test_util.run_all_in_graph_and_eager_modes
@test_util.disable_tfrt
class RawOpsTest(test.TestCase, parameterized.TestCase):
def testSparseCountSparseOutputBadIndicesShape(self):
indices = [[[0], [0]], [[0], [1]], [[1], [0]], [[1], [2]]]
values = [1, 1, 1, 10]
weights = [1, 2, 4, 6]
dense_shape = [2, 3]
with self.assertRaisesRegex(errors.InvalidArgumentError,
"Input indices must be a 2-dimensional tensor"):
self.evaluate(
gen_count_ops.SparseCountSparseOutput(
indices=indices,
values=values,
dense_shape=dense_shape,
weights=weights,
binary_output=False))
def testSparseCountSparseOutputBadWeightsShape(self):
indices = [[0, 0], [0, 1], [1, 0], [1, 2]]
values = [1, 1, 1, 10]
weights = [1, 2, 4]
dense_shape = [2, 3]
with self.assertRaisesRegex(errors.InvalidArgumentError,
"Weights and values must have the same shape"):
self.evaluate(
gen_count_ops.SparseCountSparseOutput(
indices=indices,
values=values,
dense_shape=dense_shape,
weights=weights,
binary_output=False))
def testSparseCountSparseOutputBadNumberOfValues(self):
indices = [[0, 0], [0, 1], [1, 0]]
values = [1, 1, 1, 10]
weights = [1, 2, 4, 6]
dense_shape = [2, 3]
with self.assertRaisesRegex(
errors.InvalidArgumentError,
"Number of values must match first dimension of indices"):
self.evaluate(
gen_count_ops.SparseCountSparseOutput(
indices=indices,
values=values,
dense_shape=dense_shape,
weights=weights,
binary_output=False))
def testRaggedCountSparseOutput(self):
splits = [0, 4, 7]
values = [1, 1, 2, 1, 2, 10, 5]
weights = [1, 2, 3, 4, 5, 6, 7]
output_indices, output_values, output_shape = self.evaluate(
gen_count_ops.RaggedCountSparseOutput(
splits=splits, values=values, weights=weights, binary_output=False))
self.assertAllEqual([[0, 1], [0, 2], [1, 2], [1, 5], [1, 10]],
output_indices)
self.assertAllEqual([7, 3, 5, 7, 6], output_values)
self.assertAllEqual([2, 11], output_shape)
def testRaggedCountSparseOutputBadWeightsShape(self):
splits = [0, 4, 7]
values = [1, 1, 2, 1, 2, 10, 5]
weights = [1, 2, 3, 4, 5, 6]
with self.assertRaisesRegex(errors.InvalidArgumentError,
"Weights and values must have the same shape"):
self.evaluate(
gen_count_ops.RaggedCountSparseOutput(
splits=splits,
values=values,
weights=weights,
binary_output=False))
def testRaggedCountSparseOutputEmptySplits(self):
splits = []
values = [1, 1, 2, 1, 2, 10, 5]
weights = [1, 2, 3, 4, 5, 6, 7]
with self.assertRaisesRegex(
errors.InvalidArgumentError,
"Must provide at least 2 elements for the splits argument"):
self.evaluate(
gen_count_ops.RaggedCountSparseOutput(
splits=splits,
values=values,
weights=weights,
binary_output=False))
def testRaggedCountSparseOutputBadSplitsStart(self):
splits = [1, 7]
values = [1, 1, 2, 1, 2, 10, 5]
weights = [1, 2, 3, 4, 5, 6, 7]
with self.assertRaisesRegex(errors.InvalidArgumentError,
"Splits must start with 0"):
self.evaluate(
gen_count_ops.RaggedCountSparseOutput(
splits=splits,
values=values,
weights=weights,
binary_output=False))
def testRaggedCountSparseOutputBadSplitsEnd(self):
splits = [0, 5]
values = [1, 1, 2, 1, 2, 10, 5]
weights = [1, 2, 3, 4, 5, 6, 7]
with self.assertRaisesRegex(errors.InvalidArgumentError,
"Splits must end with the number of values"):
self.evaluate(
gen_count_ops.RaggedCountSparseOutput(
splits=splits,
values=values,
weights=weights,
binary_output=False))


if __name__ == "__main__":
test.main()
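Every parameterized case in this file compares a TensorFlow bincount against NumPy's reference behavior. A standalone refresher on those semantics (note that `maxlength` and `binary_output` are TensorFlow-side extensions with no direct NumPy analogue):

```python
# np.bincount counts occurrences of each non-negative integer value;
# `minlength` pads the result, and `weights` sums weights instead of counting.
import numpy as np

x = np.array([3, 2, 1, 1])
print(np.bincount(x))               # [0 2 1 1]
print(np.bincount(x, minlength=6))  # [0 2 1 1 0 0]
# With weights: value 3 gets 2.0, value 2 gets 1.0, value 1 gets 0.5.
print(np.bincount(np.array([3, 2, 1]), weights=[2.0, 1.0, 0.5]).tolist())
```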
| 37.905738 | 80 | 0.503271 | 4,690 | 36,996 | 3.812154 | 0.055011 | 0.017227 | 0.019129 | 0.016556 | 0.850719 | 0.821019 | 0.80849 | 0.79736 | 0.779798 | 0.744337 | 0 | 0.08347 | 0.322224 | 36,996 | 975 | 81 | 37.944615 | 0.629551 | 0.030544 | 0 | 0.729911 | 0 | 0 | 0.124547 | 0.009042 | 0 | 0 | 0 | 0 | 0.046875 | 1 | 0.03683 | false | 0 | 0.017857 | 0 | 0.060268 | 0.001116 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
4811b1b0debb649107ed68e6c78d0e23bf7ba6d6 | 47 | py | Python | pyuplift/metrics/__init__.py | duketemon/pyuplift | 33daa0768ff333387cb8223ebfaedaffa57de335 | [
"MIT"
] | 26 | 2019-02-24T07:41:59.000Z | 2022-01-03T05:07:26.000Z | pyuplift/metrics/__init__.py | duketemon/pyuplift | 33daa0768ff333387cb8223ebfaedaffa57de335 | [
"MIT"
] | 8 | 2019-03-17T07:57:16.000Z | 2019-08-02T19:55:49.000Z | pyuplift/metrics/__init__.py | duketemon/pyuplift | 33daa0768ff333387cb8223ebfaedaffa57de335 | [
"MIT"
] | 4 | 2019-07-17T12:36:37.000Z | 2020-07-16T11:36:35.000Z | from .average_effect import get_average_effect
| 23.5 | 46 | 0.893617 | 7 | 47 | 5.571429 | 0.714286 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085106 | 47 | 1 | 47 | 47 | 0.906977 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
483a79faa44d795c6cc17f8b1ccac0d963838e94 | 67 | py | Python | tests/modules/imported/submodules/submodulea.py | Fryguy/py2rb | 0d2fbc5a86b82707a1d83241a21af6b2cc22c0b8 | [
"MIT"
] | 124 | 2017-08-19T05:37:16.000Z | 2022-03-08T18:24:18.000Z | tests/modules/imported/submodules/submodulea.py | JeMaMokuma/py2rb | 0d2fbc5a86b82707a1d83241a21af6b2cc22c0b8 | [
"MIT"
] | 15 | 2017-12-16T05:59:31.000Z | 2022-02-08T02:51:17.000Z | tests/modules/imported/submodules/submodulea.py | JeMaMokuma/py2rb | 0d2fbc5a86b82707a1d83241a21af6b2cc22c0b8 | [
"MIT"
] | 18 | 2017-09-25T11:57:04.000Z | 2022-02-19T17:33:48.000Z |
def foo():
print("imported.modules.submodules.modulea.foo()")
| 16.75 | 54 | 0.686567 | 8 | 67 | 5.75 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.119403 | 67 | 3 | 55 | 22.333333 | 0.779661 | 0 | 0 | 0 | 0 | 0 | 0.621212 | 0.621212 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0.5 | 0 | 1 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 8 |
485c0744aaa16942d0b1804735dd283c047124a8 | 1,337 | py | Python | mods/mcpython/Item/wood_planks.py | uuk0/mcpython-a-minecraft-clone-in-python | c16cd66f319efdeec4130e1a43f5a857caf1ea13 | [
"MIT"
] | 2 | 2020-04-23T16:25:51.000Z | 2020-08-27T17:56:16.000Z | mods/mcpython/Item/wood_planks.py | uuk0/mcpython-a-minecraft-clone-in-python | c16cd66f319efdeec4130e1a43f5a857caf1ea13 | [
"MIT"
] | null | null | null | mods/mcpython/Item/wood_planks.py | uuk0/mcpython-a-minecraft-clone-in-python | c16cd66f319efdeec4130e1a43f5a857caf1ea13 | [
"MIT"
] | null | null | null | from .Item import *
from oredictnames import *

# The four plank items differ only in their name suffix and texture path,
# so generate the classes in a loop instead of repeating the definition.
_PLANK_TEXTURES = [
    "./assets/textures/items/plank.png",
    "./assets/textures/items/plank_1.png",
    "./assets/textures/items/plank_2.png",
    "./assets/textures/items/plank_3.png",
]

def _make_wood_plank(index, texture):
    class wood_plank(Item):
        def getName(self):
            return "minecraft:wood_plank_%i" % index
        def getTexturFile(self):  # spelling matches the Item override hook
            return texture
        def getOreDictNames(self):
            return [OreDict.WOOD_PLANK]
        def getFuelAmount(self):
            return 20
    wood_plank.__name__ = "wood_plank_%i" % index
    return wood_plank

for _index, _texture in enumerate(_PLANK_TEXTURES):
    _cls = _make_wood_plank(_index, _texture)
    # Preserve the original module-level names (wood_plank_0 .. wood_plank_3)
    # in case other modules import them directly.
    globals()[_cls.__name__] = _cls
    handler.register(_cls)
| 18.830986 | 52 | 0.674645 | 165 | 1,337 | 5.278788 | 0.163636 | 0.165327 | 0.064294 | 0.082664 | 0.877153 | 0.877153 | 0.877153 | 0.877153 | 0.45465 | 0.45465 | 0 | 0.022265 | 0.227375 | 1,337 | 70 | 53 | 19.1 | 0.82091 | 0 | 0 | 0.571429 | 0 | 0 | 0.169035 | 0.169035 | 0 | 0 | 0 | 0 | 0 | 1 | 0.380952 | false | 0 | 0.047619 | 0.380952 | 0.904762 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
486188252850a861887f58e70f04d5d345211b8d | 5,182 | py | Python | tests/test_property_decorators.py | ckauth/eurostat-api-client | 70c859881d50b3eca275434e2590ff7d76b290e9 | [
"Apache-2.0"
] | 4 | 2019-01-04T12:57:07.000Z | 2021-03-14T04:03:42.000Z | tests/test_property_decorators.py | ckauth/eurostat-api-client | 70c859881d50b3eca275434e2590ff7d76b290e9 | [
"Apache-2.0"
] | 6 | 2019-06-16T21:20:09.000Z | 2021-09-15T21:03:57.000Z | tests/test_property_decorators.py | ckauth/eurostat-api-client | 70c859881d50b3eca275434e2590ff7d76b290e9 | [
"Apache-2.0"
] | 9 | 2019-07-29T16:13:25.000Z | 2022-03-10T17:42:30.000Z | import unittest
import datetime
from eurostatapiclient.utils.property_decorators import property_is_boolean, \
property_is_int, property_is_string, property_is_datetime
class TestBooleanDecorator(unittest.TestCase):
"""Unit test for the Boolean property decorator."""
def test_value_error(self):
class SomeObject(object):
def __init__(self, value):
self.value = value
@property
def value(self):
return self._value
@value.setter
@property_is_boolean
def value(self, value):
self._value = value
self.assertRaises(ValueError, SomeObject, 3)
self.assertRaises(ValueError, SomeObject, 'string')
def test_value_assignation(self):
class SomeObject(object):
def __init__(self, value):
self.value = value
@property
def value(self):
return self._value
@value.setter
@property_is_boolean
def value(self, value):
self._value = value
some_object = SomeObject(True)
self.assertEqual(bool(some_object.value), True)
class TestDatetimeDecorator(unittest.TestCase):
"""Unit test for the Datetime property decorator."""
def test_value_error(self):
class SomeObject(object):
def __init__(self, value):
self.value = value
@property
def value(self):
return self._value
@value.setter
@property_is_datetime
def value(self, value):
self._value = value
self.assertRaises(ValueError, SomeObject, 3)
self.assertRaises(ValueError, SomeObject, 'string')
def test_value_assignation(self):
class SomeObject(object):
def __init__(self, value):
self.value = value
@property
def value(self):
return self._value
@value.setter
@property_is_datetime
def value(self, value):
self._value = value
ts = datetime.datetime(2018, 1, 1)
some_object = SomeObject(ts)
self.assertEqual(some_object.value, ts)
class TestStringDecorator(unittest.TestCase):
"""Unit test for the String property decorator."""
def test_value_error(self):
class SomeObject(object):
def __init__(self, value):
self.value = value
@property
def value(self):
return self._value
@value.setter
@property_is_string
def value(self, value):
self._value = value
self.assertRaises(ValueError, SomeObject, 3)
self.assertRaises(ValueError, SomeObject, True)
def test_value_assignation(self):
class SomeObject(object):
def __init__(self, value):
self.value = value
@property
def value(self):
return self._value
@value.setter
@property_is_string
def value(self, value):
self._value = value
some_object = SomeObject('String')
self.assertEqual(str(some_object.value), 'String')
class TestIntegerDecorator(unittest.TestCase):
"""Unit test for the Integer property decorator."""
def test_value_error(self):
class SomeObject(object):
def __init__(self, value):
self.value = value
@property
def value(self):
return self._value
@value.setter
@property_is_int()
def value(self, value):
self._value = value
self.assertRaises(ValueError, SomeObject, True)
self.assertRaises(ValueError, SomeObject, 'string')
def test_value_assignation(self):
class SomeObject(object):
def __init__(self, value):
self.value = value
@property
def value(self):
return self._value
@value.setter
@property_is_int()
def value(self, value):
self._value = value
some_object = SomeObject(0)
self.assertEqual(int(some_object.value), 0)
some_object = SomeObject(-100)
self.assertEqual(int(some_object.value), -100)
some_object = SomeObject(100)
self.assertEqual(int(some_object.value), 100)
def test_range_limitation(self):
class SomeObject(object):
def __init__(self, value):
self.value = value
@property
def value(self):
return self._value
@value.setter
@property_is_int((0, 1000))
def value(self, value):
self._value = value
some_object = SomeObject(500)
self.assertEqual(int(some_object.value), 500)
self.assertRaises(ValueError, SomeObject, 1001)
self.assertRaises(ValueError, SomeObject, -1)
| 28.629834 | 78 | 0.563103 | 506 | 5,182 | 5.543478 | 0.102767 | 0.144385 | 0.134759 | 0.115508 | 0.804991 | 0.802139 | 0.735829 | 0.735829 | 0.735829 | 0.735829 | 0 | 0.011649 | 0.353917 | 5,182 | 180 | 79 | 28.788889 | 0.826165 | 0.035315 | 0 | 0.795455 | 0 | 0 | 0.006027 | 0 | 0 | 0 | 0 | 0 | 0.128788 | 1 | 0.272727 | false | 0 | 0.022727 | 0.068182 | 0.462121 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
6f926fc04efeca297508bac66c2641d84d5b002d | 4,825 | py | Python | tests/integration/api/v2010/account/call/test_siprec.py | BrimmingDev/twilio-python | 3226b5fed92b3c2ce64f03e6b19fc4792ef7647f | [
"MIT"
] | 1,362 | 2015-01-04T10:25:18.000Z | 2022-03-24T10:07:08.000Z | tests/integration/api/v2010/account/call/test_siprec.py | BrimmingDev/twilio-python | 3226b5fed92b3c2ce64f03e6b19fc4792ef7647f | [
"MIT"
] | 299 | 2015-01-30T09:52:39.000Z | 2022-03-31T23:03:02.000Z | tests/integration/api/v2010/account/call/test_siprec.py | BrimmingDev/twilio-python | 3226b5fed92b3c2ce64f03e6b19fc4792ef7647f | [
"MIT"
] | 622 | 2015-01-03T04:43:09.000Z | 2022-03-29T14:11:00.000Z | # coding=utf-8
r"""
This code was generated by
\ / _ _ _| _ _
| (_)\/(_)(_|\/| |(/_ v1.0.0
/ /
"""
from tests import IntegrationTestCase
from tests.holodeck import Request
from twilio.base.exceptions import TwilioException
from twilio.http.response import Response
class SiprecTestCase(IntegrationTestCase):
def test_create_request(self):
self.holodeck.mock(Response(500, ''))
with self.assertRaises(TwilioException):
self.client.api.v2010.accounts("ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX") \
.calls("CAXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX") \
.siprec.create()
self.holodeck.assert_has_request(Request(
'post',
'https://api.twilio.com/2010-04-01/Accounts/ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/Calls/CAXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/Siprec.json',
))
def test_create_no_args_response(self):
self.holodeck.mock(Response(
201,
'''
{
"account_sid": "ACaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
"call_sid": "CAaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
"sid": "SRaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
"name": null,
"status": "in-progress",
"date_updated": "Thu, 30 Jul 2015 20:00:00 +0000"
}
'''
))
actual = self.client.api.v2010.accounts("ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX") \
.calls("CAXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX") \
.siprec.create()
self.assertIsNotNone(actual)
def test_create_with_args_response(self):
self.holodeck.mock(Response(
201,
'''
{
"account_sid": "ACaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
"call_sid": "CAaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
"sid": "SRaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
"name": "myName",
"status": "in-progress",
"date_updated": "Thu, 30 Jul 2015 20:00:00 +0000"
}
'''
))
actual = self.client.api.v2010.accounts("ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX") \
.calls("CAXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX") \
.siprec.create()
self.assertIsNotNone(actual)
def test_update_request(self):
self.holodeck.mock(Response(500, ''))
with self.assertRaises(TwilioException):
self.client.api.v2010.accounts("ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX") \
.calls("CAXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX") \
.siprec("SRXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX").update(status="stopped")
values = {'Status': "stopped", }
self.holodeck.assert_has_request(Request(
'post',
'https://api.twilio.com/2010-04-01/Accounts/ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/Calls/CAXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/Siprec/SRXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX.json',
data=values,
))
def test_update_by_sid_response(self):
self.holodeck.mock(Response(
200,
'''
{
"account_sid": "ACaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
"call_sid": "CAaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
"sid": "SRaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
"name": null,
"status": "stopped",
"date_updated": "Thu, 30 Jul 2015 20:00:00 +0000"
}
'''
))
actual = self.client.api.v2010.accounts("ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX") \
.calls("CAXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX") \
.siprec("SRXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX").update(status="stopped")
self.assertIsNotNone(actual)
def test_update_by_name_response(self):
self.holodeck.mock(Response(
200,
'''
{
"account_sid": "ACaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
"call_sid": "CAaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
"sid": "SRaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
"name": "mySiprec",
"status": "stopped",
"date_updated": "Thu, 30 Jul 2015 20:00:00 +0000"
}
'''
))
actual = self.client.api.v2010.accounts("ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX") \
.calls("CAXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX") \
.siprec("SRXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX").update(status="stopped")
self.assertIsNotNone(actual)
| 37.403101 | 180 | 0.557513 | 333 | 4,825 | 7.942943 | 0.249249 | 0.036295 | 0.142155 | 0.244991 | 0.874102 | 0.874102 | 0.854064 | 0.854064 | 0.854064 | 0.854064 | 0 | 0.03935 | 0.336373 | 4,825 | 128 | 181 | 37.695313 | 0.786696 | 0.022591 | 0 | 0.733333 | 1 | 0.033333 | 0.265851 | 0.160075 | 0 | 0 | 0 | 0 | 0.133333 | 1 | 0.1 | false | 0 | 0.066667 | 0 | 0.183333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
6f9678c9692dbe2b0abf7b7299518c5d9b5e93c2 | 125 | py | Python | insights_messaging/publishers/__init__.py | JoseLSegura/insights-core-messaging | d4585774abfeb107c58e1b8270155b5272fa4b31 | [
"Apache-2.0"
] | null | null | null | insights_messaging/publishers/__init__.py | JoseLSegura/insights-core-messaging | d4585774abfeb107c58e1b8270155b5272fa4b31 | [
"Apache-2.0"
] | null | null | null | insights_messaging/publishers/__init__.py | JoseLSegura/insights-core-messaging | d4585774abfeb107c58e1b8270155b5272fa4b31 | [
"Apache-2.0"
] | null | null | null |
class Publisher:
    """Base class for message publishers; subclasses override these hooks."""
    def publish(self, input_msg, response):
        """Publish the response produced for the given input message."""
        pass
    def error(self, input_msg, ex):
        """Handle an exception raised while processing the input message."""
        pass
| 15.625 | 43 | 0.608 | 16 | 125 | 4.625 | 0.6875 | 0.243243 | 0.324324 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.304 | 125 | 7 | 44 | 17.857143 | 0.850575 | 0 | 0 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0.4 | 0 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 7 |
6fa36f58a37baf136aaf645a2dca55b1a737815b | 5,939 | py | Python | tests/threadfix/create_test.py | matt-fevold/webbreaker | b500fc620ebba03a27321c8f832ab77bb760b9c5 | [
"MIT"
] | 7 | 2018-12-20T19:18:43.000Z | 2019-12-10T15:03:41.000Z | tests/threadfix/create_test.py | matt-fevold/webbreaker | b500fc620ebba03a27321c8f832ab77bb760b9c5 | [
"MIT"
] | 5 | 2019-04-02T17:07:44.000Z | 2020-02-17T07:08:11.000Z | tests/threadfix/create_test.py | matt-fevold/webbreaker | b500fc620ebba03a27321c8f832ab77bb760b9c5 | [
"MIT"
] | 7 | 2019-01-10T10:40:55.000Z | 2022-03-13T14:08:37.000Z | import mock
import pytest
from webbreaker.threadfix.create import ThreadFixCreate
from threadfixproapi.threadfixpro import ThreadFixProResponse
@mock.patch('webbreaker.threadfix.create.ThreadFixHelper')
@mock.patch('webbreaker.threadfix.create.threadfixloghelper')
def test_threadfix_create_app_successful_no_team_id(log_mock, helper_mock):
team_id = '10'
team = 'some team'
test_team = {
'id': team_id,
'name': team
}
app_name = 'new app'
url = 'someurl.com'
new_app = {
'id': '100',
'application': app_name,
'team_id': team_id,
'url': url
}
helper_mock.return_value.get_team_list.return_value = [test_team]
helper_mock.return_value.api.create_application.return_value = ThreadFixProResponse(message='test', success=True, data=new_app)
tfc = ThreadFixCreate(None, team, app_name, url)
assert helper_mock.call_count == 1
assert tfc.helper.get_team_list.call_count == 1
log_mock.log_info_application_created_with_id.assert_called_once_with(new_app['id'])
@mock.patch('webbreaker.threadfix.create.ThreadFixHelper')
@mock.patch('webbreaker.threadfix.create.threadfixloghelper')
def test_threadfix_create_app_failed_no_team_id_team_not_exist(log_mock, helper_mock):
team_id = '10'
team = 'some team'
test_team = {
'id': team_id,
'name': team
}
app_name = 'new app'
url = 'someurl.com'
helper_mock.return_value.get_team_list.return_value = [test_team]
    fake_team = 'team that does not exist'
    tfc = ThreadFixCreate(None, fake_team, app_name, url)
    assert helper_mock.call_count == 1
    assert tfc.helper.get_team_list.call_count == 1
    log_mock.log_error_no_team_with_application.assert_called_once_with(fake_team)
assert log_mock.log_info_application_created_with_id.call_count == 0
@mock.patch('webbreaker.threadfix.create.ThreadFixHelper')
@mock.patch('webbreaker.threadfix.create.threadfixloghelper')
def test_threadfix_create_app_failed_no_team_id(log_mock, helper_mock):
team_id = '10'
team = 'some team'
test_team = {
'id': team_id,
'name': team
}
app_name = 'new app'
url = 'someurl.com'
new_app = {}
helper_mock.return_value.get_team_list.return_value = [test_team]
helper_mock.return_value.api.create_application.return_value = ThreadFixProResponse(message='error', success=False, data=new_app)
with pytest.raises(SystemExit):
ThreadFixCreate(None, team, app_name, url)
assert helper_mock.call_count == 1
assert log_mock.log_info_application_created_with_id.call_count == 0
@mock.patch('webbreaker.threadfix.create.ThreadFixHelper')
@mock.patch('webbreaker.threadfix.create.threadfixloghelper')
def test_threadfix_create_app_successful_no_team(log_mock, helper_mock):
team_id = '10'
app_name = 'new app'
url = 'someurl.com'
new_app = {
'id': '100',
'application': app_name,
'team_id': team_id,
'url': url
}
helper_mock.return_value.api.create_application.return_value = ThreadFixProResponse(message='test', success=True, data=new_app)
tfc = ThreadFixCreate(team_id, None, app_name, url)
assert helper_mock.call_count == 1
assert tfc.helper.get_team_list.call_count == 0
log_mock.log_info_application_created_with_id.assert_called_once_with(new_app['id'])
@mock.patch('webbreaker.threadfix.create.ThreadFixHelper')
@mock.patch('webbreaker.threadfix.create.threadfixloghelper')
def test_threadfix_create_app_failed_no_team(log_mock, helper_mock):
team_id = '10'
app_name = 'new app'
url = 'someurl.com'
new_app = {}
helper_mock.return_value.api.create_application.return_value = ThreadFixProResponse(message='error', success=False, data=new_app)
with pytest.raises(SystemExit):
ThreadFixCreate(team_id, None, app_name, url)
assert helper_mock.call_count == 1
assert log_mock.log_info_application_created_with_id.call_count == 0
@mock.patch('webbreaker.threadfix.create.ThreadFixHelper')
@mock.patch('webbreaker.threadfix.create.threadfixloghelper')
def test_threadfix_create_app_successful_all_params(log_mock, helper_mock):
team_id = '10'
team = 'some team'
app_name = 'new app'
url = 'someurl.com'
new_app = {
'id': '100',
'application': app_name,
'team_id': team_id,
'url': url
}
helper_mock.return_value.api.create_application.return_value = ThreadFixProResponse(message='test', success=True, data=new_app)
tfc = ThreadFixCreate(team_id, team, app_name, url)
assert helper_mock.call_count == 1
assert tfc.helper.get_team_list.call_count == 0
log_mock.log_info_application_created_with_id.assert_called_once_with(new_app['id'])
@mock.patch('webbreaker.threadfix.create.ThreadFixHelper')
@mock.patch('webbreaker.threadfix.create.threadfixloghelper')
def test_threadfix_create_app_failed_all_params(log_mock, helper_mock):
team_id = '10'
team = 'some team'
app_name = 'new app'
url = 'someurl.com'
new_app = {}
helper_mock.return_value.api.create_application.return_value = ThreadFixProResponse(message='error', success=False, data=new_app)
with pytest.raises(SystemExit):
ThreadFixCreate(team_id, team, app_name, url)
assert helper_mock.call_count == 1
assert log_mock.log_info_application_created_with_id.call_count == 0
@mock.patch('webbreaker.threadfix.create.ThreadFixHelper')
@mock.patch('webbreaker.threadfix.create.threadfixloghelper')
def test_threadfix_create_app_failed_no_team_nor_team_id(log_mock, helper_mock):
app_name = 'new app'
url = 'someurl.com'
tfc = ThreadFixCreate(None, None, app_name, url)
assert helper_mock.call_count == 1
assert tfc.helper.get_team_list.call_count == 0
assert log_mock.log_error_specify_team.call_count == 1
assert log_mock.log_info_application_created_with_id.call_count == 0
| 38.564935 | 133 | 0.739182 | 805 | 5,939 | 5.106832 | 0.08323 | 0.039406 | 0.103381 | 0.108976 | 0.924593 | 0.924593 | 0.918998 | 0.912673 | 0.912673 | 0.912673 | 0 | 0.008362 | 0.154235 | 5,939 | 153 | 134 | 38.816993 | 0.810074 | 0 | 0 | 0.781955 | 0 | 0 | 0.179828 | 0.119886 | 0 | 0 | 0 | 0 | 0.172932 | 1 | 0.06015 | false | 0 | 0.030075 | 0 | 0.090226 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
6fdc300f860add94e9930423990d4a06a1621042 | 60,847 | py | Python | sdk/digitaltwins/azure-digitaltwins-core/azure/digitaltwins/core/_generated/aio/operations_async/_digital_twins_operations_async.py | kahinton/azure-sdk-for-python | 7f67fd752ec77ef0de9507e9293a377c17c3162f | [
"MIT"
] | 1 | 2021-09-07T18:35:07.000Z | 2021-09-07T18:35:07.000Z | sdk/digitaltwins/azure-digitaltwins-core/azure/digitaltwins/core/_generated/aio/operations_async/_digital_twins_operations_async.py | kahinton/azure-sdk-for-python | 7f67fd752ec77ef0de9507e9293a377c17c3162f | [
"MIT"
] | null | null | null | sdk/digitaltwins/azure-digitaltwins-core/azure/digitaltwins/core/_generated/aio/operations_async/_digital_twins_operations_async.py | kahinton/azure-sdk-for-python | 7f67fd752ec77ef0de9507e9293a377c17c3162f | [
"MIT"
] | null | null | null | # coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------
from typing import Any, AsyncIterable, Callable, Dict, Generic, List, Optional, TypeVar
import warnings
from azure.core.async_paging import AsyncItemPaged, AsyncList
from azure.core.exceptions import HttpResponseError, ResourceExistsError, ResourceNotFoundError, map_error
from azure.core.pipeline import PipelineResponse
from azure.core.pipeline.transport import AsyncHttpResponse, HttpRequest
from ... import models
T = TypeVar('T')
ClsType = Optional[Callable[[PipelineResponse[HttpRequest, AsyncHttpResponse], T, Dict[str, Any]], Any]]
class DigitalTwinsOperations:
"""DigitalTwinsOperations async operations.
You should not instantiate this class directly. Instead, you should create a Client instance that
instantiates it for you and attaches it as an attribute.
:ivar models: Alias to model classes used in this operation group.
:type models: ~azure.digitaltwins.core.models
:param client: Client for service requests.
:param config: Configuration of service client.
:param serializer: An object model serializer.
:param deserializer: An object model deserializer.
"""
models = models
def __init__(self, client, config, serializer, deserializer) -> None:
self._client = client
self._serialize = serializer
self._deserialize = deserializer
self._config = config
async def get_by_id(
self,
id: str,
digital_twins_get_by_id_options: Optional["models.DigitalTwinsGetByIdOptions"] = None,
**kwargs
) -> object:
"""Retrieves a digital twin.
Status codes:
* 200 OK
* 400 Bad Request
* InvalidArgument - The digital twin id is invalid.
* 404 Not Found
* DigitalTwinNotFound - The digital twin was not found.
:param id: The id of the digital twin. The id is unique within the service and case sensitive.
:type id: str
:param digital_twins_get_by_id_options: Parameter group.
:type digital_twins_get_by_id_options: ~azure.digitaltwins.core.models.DigitalTwinsGetByIdOptions
:keyword callable cls: A custom type or function that will be passed the direct response
:return: object, or the result of cls(response)
:rtype: object
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[object]
error_map = {404: ResourceNotFoundError, 409: ResourceExistsError}
error_map.update(kwargs.pop('error_map', {}))
_traceparent = None
_tracestate = None
if digital_twins_get_by_id_options is not None:
_traceparent = digital_twins_get_by_id_options.traceparent
_tracestate = digital_twins_get_by_id_options.tracestate
api_version = "2020-10-31"
# Construct URL
url = self.get_by_id.metadata['url'] # type: ignore
path_format_arguments = {
'id': self._serialize.url("id", id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
# Construct headers
header_parameters = {} # type: Dict[str, Any]
if _traceparent is not None:
header_parameters['traceparent'] = self._serialize.header("traceparent", _traceparent, 'str')
if _tracestate is not None:
header_parameters['tracestate'] = self._serialize.header("tracestate", _tracestate, 'str')
header_parameters['Accept'] = 'application/json'
request = self._client.get(url, query_parameters, header_parameters)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.ErrorResponse, response)
raise HttpResponseError(response=response, model=error)
response_headers = {}
response_headers['ETag']=self._deserialize('str', response.headers.get('ETag'))
deserialized = self._deserialize('object', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, response_headers)
return deserialized
get_by_id.metadata = {'url': '/digitaltwins/{id}'} # type: ignore
async def add(
self,
id: str,
twin: object,
if_none_match: Optional[str] = "*",
digital_twins_add_options: Optional["models.DigitalTwinsAddOptions"] = None,
**kwargs
) -> Optional[object]:
"""Adds or replaces a digital twin.
Status codes:
* 200 OK
* 400 Bad Request
* InvalidArgument - The digital twin id or payload is invalid.
* ModelDecommissioned - The model for the digital twin is decommissioned.
* TwinLimitReached - The maximum number of digital twins allowed has been reached.
* ValidationFailed - The digital twin payload is not valid.
* 412 Precondition Failed
* PreconditionFailed - The precondition check (If-Match or If-None-Match) failed.
:param id: The id of the digital twin. The id is unique within the service and case sensitive.
:type id: str
:param twin: The digital twin instance being added. If provided, the $dtId property is ignored.
:type twin: object
:param if_none_match: Only perform the operation if the entity does not already exist.
:type if_none_match: str
:param digital_twins_add_options: Parameter group.
:type digital_twins_add_options: ~azure.digitaltwins.core.models.DigitalTwinsAddOptions
:keyword callable cls: A custom type or function that will be passed the direct response
:return: object, or the result of cls(response)
:rtype: object or None
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[Optional[object]]
error_map = {404: ResourceNotFoundError, 409: ResourceExistsError}
error_map.update(kwargs.pop('error_map', {}))
_traceparent = None
_tracestate = None
if digital_twins_add_options is not None:
_traceparent = digital_twins_add_options.traceparent
_tracestate = digital_twins_add_options.tracestate
api_version = "2020-10-31"
content_type = kwargs.pop("content_type", "application/json")
# Construct URL
url = self.add.metadata['url'] # type: ignore
path_format_arguments = {
'id': self._serialize.url("id", id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
# Construct headers
header_parameters = {} # type: Dict[str, Any]
if _traceparent is not None:
header_parameters['traceparent'] = self._serialize.header("traceparent", _traceparent, 'str')
if _tracestate is not None:
header_parameters['tracestate'] = self._serialize.header("tracestate", _tracestate, 'str')
if if_none_match is not None:
header_parameters['If-None-Match'] = self._serialize.header("if_none_match", if_none_match, 'str')
header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
header_parameters['Accept'] = 'application/json'
body_content_kwargs = {} # type: Dict[str, Any]
body_content = self._serialize.body(twin, 'object')
body_content_kwargs['content'] = body_content
request = self._client.put(url, query_parameters, header_parameters, **body_content_kwargs)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200, 202]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.ErrorResponse, response)
raise HttpResponseError(response=response, model=error)
response_headers = {}
deserialized = None
if response.status_code == 200:
response_headers['ETag']=self._deserialize('str', response.headers.get('ETag'))
deserialized = self._deserialize('object', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, response_headers)
return deserialized
add.metadata = {'url': '/digitaltwins/{id}'} # type: ignore
async def delete(
self,
id: str,
digital_twins_delete_options: Optional["models.DigitalTwinsDeleteOptions"] = None,
**kwargs
) -> None:
"""Deletes a digital twin. All relationships referencing the digital twin must already be deleted.
Status codes:
* 204 No Content
* 400 Bad Request
* InvalidArgument - The digital twin id is invalid.
* RelationshipsNotDeleted - The digital twin contains relationships.
* 404 Not Found
* DigitalTwinNotFound - The digital twin was not found.
* 412 Precondition Failed
* PreconditionFailed - The precondition check (If-Match or If-None-Match) failed.
:param id: The id of the digital twin. The id is unique within the service and case sensitive.
:type id: str
:param digital_twins_delete_options: Parameter group.
:type digital_twins_delete_options: ~azure.digitaltwins.core.models.DigitalTwinsDeleteOptions
:keyword callable cls: A custom type or function that will be passed the direct response
:return: None, or the result of cls(response)
:rtype: None
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[None]
error_map = {404: ResourceNotFoundError, 409: ResourceExistsError}
error_map.update(kwargs.pop('error_map', {}))
_traceparent = None
_tracestate = None
_if_match = None
if digital_twins_delete_options is not None:
_traceparent = digital_twins_delete_options.traceparent
_tracestate = digital_twins_delete_options.tracestate
_if_match = digital_twins_delete_options.if_match
api_version = "2020-10-31"
# Construct URL
url = self.delete.metadata['url'] # type: ignore
path_format_arguments = {
'id': self._serialize.url("id", id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
# Construct headers
header_parameters = {} # type: Dict[str, Any]
if _traceparent is not None:
header_parameters['traceparent'] = self._serialize.header("traceparent", _traceparent, 'str')
if _tracestate is not None:
header_parameters['tracestate'] = self._serialize.header("tracestate", _tracestate, 'str')
if _if_match is not None:
header_parameters['If-Match'] = self._serialize.header("if_match", _if_match, 'str')
request = self._client.delete(url, query_parameters, header_parameters)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [204]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.ErrorResponse, response)
raise HttpResponseError(response=response, model=error)
if cls:
return cls(pipeline_response, None, {})
delete.metadata = {'url': '/digitaltwins/{id}'} # type: ignore
async def update(
self,
id: str,
patch_document: List[object],
digital_twins_update_options: Optional["models.DigitalTwinsUpdateOptions"] = None,
**kwargs
) -> None:
"""Updates a digital twin.
Status codes:
* 204 No Content
* 400 Bad Request
* InvalidArgument - The digital twin id or payload is invalid.
* JsonPatchInvalid - The JSON Patch provided is invalid.
* ValidationFailed - Applying the patch results in an invalid digital twin.
* 404 Not Found
* DigitalTwinNotFound - The digital twin was not found.
* 412 Precondition Failed
* PreconditionFailed - The precondition check (If-Match or If-None-Match) failed.
:param id: The id of the digital twin. The id is unique within the service and case sensitive.
:type id: str
:param patch_document: An update specification described by JSON Patch. Updates to property
values and $model elements may happen in the same request. Operations are limited to add,
replace and remove.
:type patch_document: list[object]
:param digital_twins_update_options: Parameter group.
:type digital_twins_update_options: ~azure.digitaltwins.core.models.DigitalTwinsUpdateOptions
:keyword callable cls: A custom type or function that will be passed the direct response
:return: None, or the result of cls(response)
:rtype: None
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[None]
error_map = {404: ResourceNotFoundError, 409: ResourceExistsError}
error_map.update(kwargs.pop('error_map', {}))
_traceparent = None
_tracestate = None
_if_match = None
if digital_twins_update_options is not None:
_traceparent = digital_twins_update_options.traceparent
_tracestate = digital_twins_update_options.tracestate
_if_match = digital_twins_update_options.if_match
api_version = "2020-10-31"
content_type = kwargs.pop("content_type", "application/json-patch+json")
# Construct URL
url = self.update.metadata['url'] # type: ignore
path_format_arguments = {
'id': self._serialize.url("id", id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
# Construct headers
header_parameters = {} # type: Dict[str, Any]
if _traceparent is not None:
header_parameters['traceparent'] = self._serialize.header("traceparent", _traceparent, 'str')
if _tracestate is not None:
header_parameters['tracestate'] = self._serialize.header("tracestate", _tracestate, 'str')
if _if_match is not None:
header_parameters['If-Match'] = self._serialize.header("if_match", _if_match, 'str')
header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
body_content_kwargs = {} # type: Dict[str, Any]
body_content = self._serialize.body(patch_document, '[object]')
body_content_kwargs['content'] = body_content
request = self._client.patch(url, query_parameters, header_parameters, **body_content_kwargs)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [204]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.ErrorResponse, response)
raise HttpResponseError(response=response, model=error)
response_headers = {}
if response.status_code == 204:
response_headers['ETag'] = self._deserialize('str', response.headers.get('ETag'))
if cls:
return cls(pipeline_response, None, response_headers)
update.metadata = {'url': '/digitaltwins/{id}'} # type: ignore
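# The `patch_document` accepted by `update` is a JSON Patch array and, as the
# docstring notes, operations are limited to add, replace and remove. The patch
# is applied server-side; the `apply_patch` helper below is a minimal,
# dependency-free sketch of those semantics for illustration only and is not
# part of the generated client (it handles top-level paths, not nested ones).

```python
def apply_patch(document: dict, patch_document: list) -> dict:
    """Apply a flat JSON Patch (add/replace/remove only) to a copy of document."""
    doc = dict(document)
    for op in patch_document:
        # "/temperature" -> "temperature"; nested pointer paths are out of scope.
        key = op["path"].lstrip("/")
        if op["op"] in ("add", "replace"):
            doc[key] = op["value"]
        elif op["op"] == "remove":
            doc.pop(key, None)
        else:
            raise ValueError("operations are limited to add, replace and remove")
    return doc

twin = {"$dtId": "thermostat-1", "temperature": 68}
patch = [
    {"op": "replace", "path": "/temperature", "value": 72},
    {"op": "add", "path": "/humidity", "value": 40},
]
updated = apply_patch(twin, patch)
```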
async def get_relationship_by_id(
self,
id: str,
relationship_id: str,
digital_twins_get_relationship_by_id_options: Optional["models.DigitalTwinsGetRelationshipByIdOptions"] = None,
**kwargs
) -> object:
"""Retrieves a relationship between two digital twins.
Status codes:
* 200 OK
* 400 Bad Request
* InvalidArgument - The digital twin id or relationship id is invalid.
* 404 Not Found
* DigitalTwinNotFound - The digital twin was not found.
* RelationshipNotFound - The relationship was not found.
:param id: The id of the digital twin. The id is unique within the service and case sensitive.
:type id: str
:param relationship_id: The id of the relationship. The id is unique within the digital twin
and case sensitive.
:type relationship_id: str
:param digital_twins_get_relationship_by_id_options: Parameter group.
:type digital_twins_get_relationship_by_id_options: ~azure.digitaltwins.core.models.DigitalTwinsGetRelationshipByIdOptions
:keyword callable cls: A custom type or function that will be passed the direct response
:return: object, or the result of cls(response)
:rtype: object
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[object]
error_map = {404: ResourceNotFoundError, 409: ResourceExistsError}
error_map.update(kwargs.pop('error_map', {}))
_traceparent = None
_tracestate = None
if digital_twins_get_relationship_by_id_options is not None:
_traceparent = digital_twins_get_relationship_by_id_options.traceparent
_tracestate = digital_twins_get_relationship_by_id_options.tracestate
api_version = "2020-10-31"
# Construct URL
url = self.get_relationship_by_id.metadata['url'] # type: ignore
path_format_arguments = {
'id': self._serialize.url("id", id, 'str'),
'relationshipId': self._serialize.url("relationship_id", relationship_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
# Construct headers
header_parameters = {} # type: Dict[str, Any]
if _traceparent is not None:
header_parameters['traceparent'] = self._serialize.header("traceparent", _traceparent, 'str')
if _tracestate is not None:
header_parameters['tracestate'] = self._serialize.header("tracestate", _tracestate, 'str')
header_parameters['Accept'] = 'application/json'
request = self._client.get(url, query_parameters, header_parameters)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.ErrorResponse, response)
raise HttpResponseError(response=response, model=error)
response_headers = {}
response_headers['ETag'] = self._deserialize('str', response.headers.get('ETag'))
deserialized = self._deserialize('object', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, response_headers)
return deserialized
get_relationship_by_id.metadata = {'url': '/digitaltwins/{id}/relationships/{relationshipId}'} # type: ignore
async def add_relationship(
self,
id: str,
relationship_id: str,
relationship: object,
if_none_match: Optional[str] = "*",
digital_twins_add_relationship_options: Optional["models.DigitalTwinsAddRelationshipOptions"] = None,
**kwargs
) -> object:
"""Adds a relationship between two digital twins.
Status codes:
* 200 OK
* 400 Bad Request
* InvalidArgument - The digital twin id, relationship id, or payload is invalid.
* InvalidRelationship - The relationship is invalid.
* OperationNotAllowed - The relationship cannot connect to the same digital twin.
* ValidationFailed - The relationship content is invalid.
* 404 Not Found
* DigitalTwinNotFound - The digital twin was not found.
* TargetTwinNotFound - The digital twin target of the relationship was not found.
* 412 Precondition Failed
* PreconditionFailed - The precondition check (If-Match or If-None-Match) failed.
:param id: The id of the digital twin. The id is unique within the service and case sensitive.
:type id: str
:param relationship_id: The id of the relationship. The id is unique within the digital twin
and case sensitive.
:type relationship_id: str
:param relationship: The data for the relationship.
:type relationship: object
:param if_none_match: Only perform the operation if the entity does not already exist.
:type if_none_match: str
:param digital_twins_add_relationship_options: Parameter group.
:type digital_twins_add_relationship_options: ~azure.digitaltwins.core.models.DigitalTwinsAddRelationshipOptions
:keyword callable cls: A custom type or function that will be passed the direct response
:return: object, or the result of cls(response)
:rtype: object
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[object]
error_map = {404: ResourceNotFoundError, 409: ResourceExistsError}
error_map.update(kwargs.pop('error_map', {}))
_traceparent = None
_tracestate = None
if digital_twins_add_relationship_options is not None:
_traceparent = digital_twins_add_relationship_options.traceparent
_tracestate = digital_twins_add_relationship_options.tracestate
api_version = "2020-10-31"
content_type = kwargs.pop("content_type", "application/json")
# Construct URL
url = self.add_relationship.metadata['url'] # type: ignore
path_format_arguments = {
'id': self._serialize.url("id", id, 'str'),
'relationshipId': self._serialize.url("relationship_id", relationship_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
# Construct headers
header_parameters = {} # type: Dict[str, Any]
if _traceparent is not None:
header_parameters['traceparent'] = self._serialize.header("traceparent", _traceparent, 'str')
if _tracestate is not None:
header_parameters['tracestate'] = self._serialize.header("tracestate", _tracestate, 'str')
if if_none_match is not None:
header_parameters['If-None-Match'] = self._serialize.header("if_none_match", if_none_match, 'str')
header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
header_parameters['Accept'] = 'application/json'
body_content_kwargs = {} # type: Dict[str, Any]
body_content = self._serialize.body(relationship, 'object')
body_content_kwargs['content'] = body_content
request = self._client.put(url, query_parameters, header_parameters, **body_content_kwargs)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.ErrorResponse, response)
raise HttpResponseError(response=response, model=error)
response_headers = {}
response_headers['ETag'] = self._deserialize('str', response.headers.get('ETag'))
deserialized = self._deserialize('object', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, response_headers)
return deserialized
add_relationship.metadata = {'url': '/digitaltwins/{id}/relationships/{relationshipId}'} # type: ignore
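# `add_relationship` defaults `if_none_match` to "*", i.e. "only create the
# relationship if it does not already exist"; when it does exist the service
# answers 412 Precondition Failed. A service-free sketch of the If-None-Match
# decision (the `should_apply_put` helper is an illustration, not client code):

```python
def should_apply_put(existing_etag, if_none_match):
    """Return True if a PUT may proceed under the given If-None-Match header."""
    if if_none_match is None:
        return True                       # unconditional PUT
    if if_none_match == "*":
        return existing_etag is None      # create only when the entity is absent
    # A specific ETag: proceed only when it does NOT match the stored one.
    return existing_etag != if_none_match

# New relationship: nothing stored yet, so "*" lets the create through.
new_ok = should_apply_put(None, "*")          # True
# Relationship already exists: "*" makes the service answer 412.
existing_ok = should_apply_put('W/"abc"', "*")  # False
```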
async def delete_relationship(
self,
id: str,
relationship_id: str,
digital_twins_delete_relationship_options: Optional["models.DigitalTwinsDeleteRelationshipOptions"] = None,
**kwargs
) -> None:
"""Deletes a relationship between two digital twins.
Status codes:
* 204 No Content
* 400 Bad Request
* InvalidArgument - The digital twin id or relationship id is invalid.
* 404 Not Found
* DigitalTwinNotFound - The digital twin was not found.
* RelationshipNotFound - The relationship was not found.
* 412 Precondition Failed
* PreconditionFailed - The precondition check (If-Match or If-None-Match) failed.
:param id: The id of the digital twin. The id is unique within the service and case sensitive.
:type id: str
:param relationship_id: The id of the relationship. The id is unique within the digital twin
and case sensitive.
:type relationship_id: str
:param digital_twins_delete_relationship_options: Parameter group.
:type digital_twins_delete_relationship_options: ~azure.digitaltwins.core.models.DigitalTwinsDeleteRelationshipOptions
:keyword callable cls: A custom type or function that will be passed the direct response
:return: None, or the result of cls(response)
:rtype: None
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[None]
error_map = {404: ResourceNotFoundError, 409: ResourceExistsError}
error_map.update(kwargs.pop('error_map', {}))
_traceparent = None
_tracestate = None
_if_match = None
if digital_twins_delete_relationship_options is not None:
_traceparent = digital_twins_delete_relationship_options.traceparent
_tracestate = digital_twins_delete_relationship_options.tracestate
_if_match = digital_twins_delete_relationship_options.if_match
api_version = "2020-10-31"
# Construct URL
url = self.delete_relationship.metadata['url'] # type: ignore
path_format_arguments = {
'id': self._serialize.url("id", id, 'str'),
'relationshipId': self._serialize.url("relationship_id", relationship_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
# Construct headers
header_parameters = {} # type: Dict[str, Any]
if _traceparent is not None:
header_parameters['traceparent'] = self._serialize.header("traceparent", _traceparent, 'str')
if _tracestate is not None:
header_parameters['tracestate'] = self._serialize.header("tracestate", _tracestate, 'str')
if _if_match is not None:
header_parameters['If-Match'] = self._serialize.header("if_match", _if_match, 'str')
request = self._client.delete(url, query_parameters, header_parameters)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [204]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.ErrorResponse, response)
raise HttpResponseError(response=response, model=error)
if cls:
return cls(pipeline_response, None, {})
delete_relationship.metadata = {'url': '/digitaltwins/{id}/relationships/{relationshipId}'} # type: ignore
async def update_relationship(
self,
id: str,
relationship_id: str,
patch_document: List[object],
digital_twins_update_relationship_options: Optional["models.DigitalTwinsUpdateRelationshipOptions"] = None,
**kwargs
) -> None:
"""Updates the properties on a relationship between two digital twins.
Status codes:
* 204 No Content
* 400 Bad Request
* InvalidArgument - The digital twin id or relationship id is invalid.
* InvalidRelationship - The relationship is invalid.
* JsonPatchInvalid - The JSON Patch provided is invalid.
* ValidationFailed - The relationship content is invalid.
* 404 Not Found
* DigitalTwinNotFound - The digital twin was not found.
* RelationshipNotFound - The relationship was not found.
* 409 Conflict
* RelationshipAlreadyExists - The relationship already exists.
* 412 Precondition Failed
* PreconditionFailed - The precondition check (If-Match or If-None-Match) failed.
:param id: The id of the digital twin. The id is unique within the service and case sensitive.
:type id: str
:param relationship_id: The id of the relationship. The id is unique within the digital twin
and case sensitive.
:type relationship_id: str
:param patch_document: JSON Patch description of the update to the relationship properties.
:type patch_document: list[object]
:param digital_twins_update_relationship_options: Parameter group.
:type digital_twins_update_relationship_options: ~azure.digitaltwins.core.models.DigitalTwinsUpdateRelationshipOptions
:keyword callable cls: A custom type or function that will be passed the direct response
:return: None, or the result of cls(response)
:rtype: None
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[None]
error_map = {404: ResourceNotFoundError, 409: ResourceExistsError}
error_map.update(kwargs.pop('error_map', {}))
_traceparent = None
_tracestate = None
_if_match = None
if digital_twins_update_relationship_options is not None:
_traceparent = digital_twins_update_relationship_options.traceparent
_tracestate = digital_twins_update_relationship_options.tracestate
_if_match = digital_twins_update_relationship_options.if_match
api_version = "2020-10-31"
content_type = kwargs.pop("content_type", "application/json-patch+json")
# Construct URL
url = self.update_relationship.metadata['url'] # type: ignore
path_format_arguments = {
'id': self._serialize.url("id", id, 'str'),
'relationshipId': self._serialize.url("relationship_id", relationship_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
# Construct headers
header_parameters = {} # type: Dict[str, Any]
if _traceparent is not None:
header_parameters['traceparent'] = self._serialize.header("traceparent", _traceparent, 'str')
if _tracestate is not None:
header_parameters['tracestate'] = self._serialize.header("tracestate", _tracestate, 'str')
if _if_match is not None:
header_parameters['If-Match'] = self._serialize.header("if_match", _if_match, 'str')
header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
body_content_kwargs = {} # type: Dict[str, Any]
body_content = self._serialize.body(patch_document, '[object]')
body_content_kwargs['content'] = body_content
request = self._client.patch(url, query_parameters, header_parameters, **body_content_kwargs)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [204]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.ErrorResponse, response)
raise HttpResponseError(response=response, model=error)
response_headers = {}
response_headers['ETag'] = self._deserialize('str', response.headers.get('ETag'))
if cls:
return cls(pipeline_response, None, response_headers)
update_relationship.metadata = {'url': '/digitaltwins/{id}/relationships/{relationshipId}'} # type: ignore
def list_relationships(
self,
id: str,
relationship_name: Optional[str] = None,
digital_twins_list_relationships_options: Optional["models.DigitalTwinsListRelationshipsOptions"] = None,
**kwargs
) -> AsyncIterable["models.RelationshipCollection"]:
"""Retrieves the relationships from a digital twin.
Status codes:
* 200 OK
* 400 Bad Request
* InvalidArgument - The digital twin id is invalid.
* 404 Not Found
* DigitalTwinNotFound - The digital twin was not found.
:param id: The id of the digital twin. The id is unique within the service and case sensitive.
:type id: str
:param relationship_name: The name of the relationship.
:type relationship_name: str
:param digital_twins_list_relationships_options: Parameter group.
:type digital_twins_list_relationships_options: ~azure.digitaltwins.core.models.DigitalTwinsListRelationshipsOptions
:keyword callable cls: A custom type or function that will be passed the direct response
:return: An iterator like instance of either RelationshipCollection or the result of cls(response)
:rtype: ~azure.core.async_paging.AsyncItemPaged[~azure.digitaltwins.core.models.RelationshipCollection]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.RelationshipCollection"]
error_map = {404: ResourceNotFoundError, 409: ResourceExistsError}
error_map.update(kwargs.pop('error_map', {}))
_traceparent = None
_tracestate = None
if digital_twins_list_relationships_options is not None:
_traceparent = digital_twins_list_relationships_options.traceparent
_tracestate = digital_twins_list_relationships_options.tracestate
api_version = "2020-10-31"
def prepare_request(next_link=None):
# Construct headers
header_parameters = {} # type: Dict[str, Any]
if _traceparent is not None:
header_parameters['traceparent'] = self._serialize.header("traceparent", _traceparent, 'str')
if _tracestate is not None:
header_parameters['tracestate'] = self._serialize.header("tracestate", _tracestate, 'str')
header_parameters['Accept'] = 'application/json'
if not next_link:
# Construct URL
url = self.list_relationships.metadata['url'] # type: ignore
path_format_arguments = {
'id': self._serialize.url("id", id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
if relationship_name is not None:
query_parameters['relationshipName'] = self._serialize.query("relationship_name", relationship_name, 'str')
query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
request = self._client.get(url, query_parameters, header_parameters)
else:
url = next_link
query_parameters = {} # type: Dict[str, Any]
request = self._client.get(url, query_parameters, header_parameters)
return request
async def extract_data(pipeline_response):
deserialized = self._deserialize('RelationshipCollection', pipeline_response)
list_of_elem = deserialized.value
if cls:
list_of_elem = cls(list_of_elem)
return deserialized.next_link or None, AsyncList(list_of_elem)
async def get_next(next_link=None):
request = prepare_request(next_link)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
error = self._deserialize(models.ErrorResponse, response)
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, model=error)
return pipeline_response
return AsyncItemPaged(
get_next, extract_data
)
list_relationships.metadata = {'url': '/digitaltwins/{id}/relationships'} # type: ignore
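# The prepare_request/extract_data/get_next trio above implements
# continuation-token paging: the first request targets the collection URL and
# every later request follows the `nextLink` the service returned. A
# synchronous, service-free sketch of the same loop, where `fake_fetch` stands
# in for the HTTP pipeline and is not a real API:

```python
def fake_fetch(url):
    """Stand-in for the HTTP pipeline: returns a canned page per URL."""
    pages = {
        "/digitaltwins/t1/relationships": {"value": [1, 2], "nextLink": "/page2"},
        "/page2": {"value": [3], "nextLink": None},
    }
    return pages[url]

def iterate(url):
    """Yield items across pages, following nextLink until it is exhausted."""
    while url:
        page = fake_fetch(url)
        yield from page["value"]
        url = page["nextLink"]

items = list(iterate("/digitaltwins/t1/relationships"))
```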
def list_incoming_relationships(
self,
id: str,
digital_twins_list_incoming_relationships_options: Optional["models.DigitalTwinsListIncomingRelationshipsOptions"] = None,
**kwargs
) -> AsyncIterable["models.IncomingRelationshipCollection"]:
"""Retrieves all incoming relationships for a digital twin.
Status codes:
* 200 OK
* 400 Bad Request
* InvalidArgument - The digital twin id is invalid.
* 404 Not Found
* DigitalTwinNotFound - The digital twin was not found.
:param id: The id of the digital twin. The id is unique within the service and case sensitive.
:type id: str
:param digital_twins_list_incoming_relationships_options: Parameter group.
:type digital_twins_list_incoming_relationships_options: ~azure.digitaltwins.core.models.DigitalTwinsListIncomingRelationshipsOptions
:keyword callable cls: A custom type or function that will be passed the direct response
:return: An iterator like instance of either IncomingRelationshipCollection or the result of cls(response)
:rtype: ~azure.core.async_paging.AsyncItemPaged[~azure.digitaltwins.core.models.IncomingRelationshipCollection]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.IncomingRelationshipCollection"]
error_map = {404: ResourceNotFoundError, 409: ResourceExistsError}
error_map.update(kwargs.pop('error_map', {}))
_traceparent = None
_tracestate = None
if digital_twins_list_incoming_relationships_options is not None:
_traceparent = digital_twins_list_incoming_relationships_options.traceparent
_tracestate = digital_twins_list_incoming_relationships_options.tracestate
api_version = "2020-10-31"
def prepare_request(next_link=None):
# Construct headers
header_parameters = {} # type: Dict[str, Any]
if _traceparent is not None:
header_parameters['traceparent'] = self._serialize.header("traceparent", _traceparent, 'str')
if _tracestate is not None:
header_parameters['tracestate'] = self._serialize.header("tracestate", _tracestate, 'str')
header_parameters['Accept'] = 'application/json'
if not next_link:
# Construct URL
url = self.list_incoming_relationships.metadata['url'] # type: ignore
path_format_arguments = {
'id': self._serialize.url("id", id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
request = self._client.get(url, query_parameters, header_parameters)
else:
url = next_link
query_parameters = {} # type: Dict[str, Any]
request = self._client.get(url, query_parameters, header_parameters)
return request
async def extract_data(pipeline_response):
deserialized = self._deserialize('IncomingRelationshipCollection', pipeline_response)
list_of_elem = deserialized.value
if cls:
list_of_elem = cls(list_of_elem)
return deserialized.next_link or None, AsyncList(list_of_elem)
async def get_next(next_link=None):
request = prepare_request(next_link)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
error = self._deserialize(models.ErrorResponse, response)
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, model=error)
return pipeline_response
return AsyncItemPaged(
get_next, extract_data
)
list_incoming_relationships.metadata = {'url': '/digitaltwins/{id}/incomingrelationships'} # type: ignore
async def send_telemetry(
self,
id: str,
message_id: str,
telemetry: object,
telemetry_source_time: Optional[str] = None,
digital_twins_send_telemetry_options: Optional["models.DigitalTwinsSendTelemetryOptions"] = None,
**kwargs
) -> None:
"""Sends telemetry on behalf of a digital twin.
Status codes:
* 204 No Content
* 400 Bad Request
* InvalidArgument - The digital twin id or message id is invalid.
* ValidationFailed - The telemetry content is invalid.
* 404 Not Found
* DigitalTwinNotFound - The digital twin was not found.
:param id: The id of the digital twin. The id is unique within the service and case sensitive.
:type id: str
:param message_id: A unique message identifier (in the scope of the digital twin id) that is
commonly used for de-duplicating messages.
:type message_id: str
:param telemetry: The telemetry measurements to send from the digital twin.
:type telemetry: object
:param telemetry_source_time: An RFC 3339 timestamp that identifies the time the telemetry was
measured.
:type telemetry_source_time: str
:param digital_twins_send_telemetry_options: Parameter group.
:type digital_twins_send_telemetry_options: ~azure.digitaltwins.core.models.DigitalTwinsSendTelemetryOptions
:keyword callable cls: A custom type or function that will be passed the direct response
:return: None, or the result of cls(response)
:rtype: None
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[None]
error_map = {404: ResourceNotFoundError, 409: ResourceExistsError}
error_map.update(kwargs.pop('error_map', {}))
_traceparent = None
_tracestate = None
if digital_twins_send_telemetry_options is not None:
_traceparent = digital_twins_send_telemetry_options.traceparent
_tracestate = digital_twins_send_telemetry_options.tracestate
api_version = "2020-10-31"
content_type = kwargs.pop("content_type", "application/json")
# Construct URL
url = self.send_telemetry.metadata['url'] # type: ignore
path_format_arguments = {
'id': self._serialize.url("id", id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
# Construct headers
header_parameters = {} # type: Dict[str, Any]
if _traceparent is not None:
header_parameters['traceparent'] = self._serialize.header("traceparent", _traceparent, 'str')
if _tracestate is not None:
header_parameters['tracestate'] = self._serialize.header("tracestate", _tracestate, 'str')
header_parameters['Message-Id'] = self._serialize.header("message_id", message_id, 'str')
if telemetry_source_time is not None:
header_parameters['Telemetry-Source-Time'] = self._serialize.header("telemetry_source_time", telemetry_source_time, 'str')
header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
body_content_kwargs = {} # type: Dict[str, Any]
body_content = self._serialize.body(telemetry, 'object')
body_content_kwargs['content'] = body_content
request = self._client.post(url, query_parameters, header_parameters, **body_content_kwargs)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [204]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.ErrorResponse, response)
raise HttpResponseError(response=response, model=error)
if cls:
return cls(pipeline_response, None, {})
send_telemetry.metadata = {'url': '/digitaltwins/{id}/telemetry'} # type: ignore
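# `telemetry_source_time` must be an RFC 3339 timestamp. A sketch of producing
# one with the standard library; the datetime is timezone-aware so the UTC
# offset RFC 3339 requires is included:

```python
from datetime import datetime, timezone

# A fixed, timezone-aware instant serialized in RFC 3339 / ISO 8601 form.
source_time = datetime(2020, 10, 31, 12, 0, 0, tzinfo=timezone.utc).isoformat()
```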
async def send_component_telemetry(
self,
id: str,
component_path: str,
message_id: str,
telemetry: object,
telemetry_source_time: Optional[str] = None,
digital_twins_send_component_telemetry_options: Optional["models.DigitalTwinsSendComponentTelemetryOptions"] = None,
**kwargs
) -> None:
"""Sends telemetry on behalf of a component in a digital twin.
Status codes:
* 204 No Content
* 400 Bad Request
* InvalidArgument - The digital twin id, message id, or component path is invalid.
* ValidationFailed - The telemetry content is invalid.
* 404 Not Found
* DigitalTwinNotFound - The digital twin was not found.
* ComponentNotFound - The component path was not found.
:param id: The id of the digital twin. The id is unique within the service and case sensitive.
:type id: str
:param component_path: The name of the DTDL component.
:type component_path: str
:param message_id: A unique message identifier (in the scope of the digital twin id) that is
commonly used for de-duplicating messages.
:type message_id: str
:param telemetry: The telemetry measurements to send from the digital twin's component.
:type telemetry: object
:param telemetry_source_time: An RFC 3339 timestamp that identifies the time the telemetry was
measured.
:type telemetry_source_time: str
:param digital_twins_send_component_telemetry_options: Parameter group.
:type digital_twins_send_component_telemetry_options: ~azure.digitaltwins.core.models.DigitalTwinsSendComponentTelemetryOptions
:keyword callable cls: A custom type or function that will be passed the direct response
:return: None, or the result of cls(response)
:rtype: None
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[None]
error_map = {404: ResourceNotFoundError, 409: ResourceExistsError}
error_map.update(kwargs.pop('error_map', {}))
_traceparent = None
_tracestate = None
if digital_twins_send_component_telemetry_options is not None:
_traceparent = digital_twins_send_component_telemetry_options.traceparent
_tracestate = digital_twins_send_component_telemetry_options.tracestate
api_version = "2020-10-31"
content_type = kwargs.pop("content_type", "application/json")
# Construct URL
url = self.send_component_telemetry.metadata['url'] # type: ignore
path_format_arguments = {
'id': self._serialize.url("id", id, 'str'),
'componentPath': self._serialize.url("component_path", component_path, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
# Construct headers
header_parameters = {} # type: Dict[str, Any]
if _traceparent is not None:
header_parameters['traceparent'] = self._serialize.header("traceparent", _traceparent, 'str')
if _tracestate is not None:
header_parameters['tracestate'] = self._serialize.header("tracestate", _tracestate, 'str')
header_parameters['Message-Id'] = self._serialize.header("message_id", message_id, 'str')
if telemetry_source_time is not None:
header_parameters['Telemetry-Source-Time'] = self._serialize.header("telemetry_source_time", telemetry_source_time, 'str')
header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
body_content_kwargs = {} # type: Dict[str, Any]
body_content = self._serialize.body(telemetry, 'object')
body_content_kwargs['content'] = body_content
request = self._client.post(url, query_parameters, header_parameters, **body_content_kwargs)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [204]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.ErrorResponse, response)
raise HttpResponseError(response=response, model=error)
if cls:
return cls(pipeline_response, None, {})
send_component_telemetry.metadata = {'url': '/digitaltwins/{id}/components/{componentPath}/telemetry'} # type: ignore
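# The `traceparent` header threaded through every operation in this client
# follows the W3C Trace Context format: version-traceid-spanid-flags, four
# dash-separated fixed-width lowercase-hex fields. A sketch that builds and
# validates one (the `make_traceparent` helper is illustrative, not client code):

```python
import re

def make_traceparent(trace_id: str, span_id: str, sampled: bool = True) -> str:
    """Assemble a version-00 W3C traceparent header value."""
    return f"00-{trace_id}-{span_id}-{'01' if sampled else '00'}"

tp = make_traceparent("4bf92f3577b34da6a3ce929d0e0e4736", "00f067aa0ba902b7")
# 2-hex version, 32-hex trace-id, 16-hex span-id, 2-hex flags.
assert re.fullmatch(r"00-[0-9a-f]{32}-[0-9a-f]{16}-[0-9a-f]{2}", tp)
```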
async def get_component(
self,
id: str,
component_path: str,
digital_twins_get_component_options: Optional["models.DigitalTwinsGetComponentOptions"] = None,
**kwargs
) -> object:
"""Retrieves a component from a digital twin.
Status codes:
* 200 OK
* 400 Bad Request
* InvalidArgument - The digital twin id or component path is invalid.
* 404 Not Found
* DigitalTwinNotFound - The digital twin was not found.
* ComponentNotFound - The component path was not found.
:param id: The id of the digital twin. The id is unique within the service and case sensitive.
:type id: str
:param component_path: The name of the DTDL component.
:type component_path: str
:param digital_twins_get_component_options: Parameter group.
:type digital_twins_get_component_options: ~azure.digitaltwins.core.models.DigitalTwinsGetComponentOptions
:keyword callable cls: A custom type or function that will be passed the direct response
:return: object, or the result of cls(response)
:rtype: object
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[object]
error_map = {404: ResourceNotFoundError, 409: ResourceExistsError}
error_map.update(kwargs.pop('error_map', {}))
_traceparent = None
_tracestate = None
if digital_twins_get_component_options is not None:
_traceparent = digital_twins_get_component_options.traceparent
_tracestate = digital_twins_get_component_options.tracestate
api_version = "2020-10-31"
# Construct URL
url = self.get_component.metadata['url'] # type: ignore
path_format_arguments = {
'id': self._serialize.url("id", id, 'str'),
'componentPath': self._serialize.url("component_path", component_path, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
# Construct headers
header_parameters = {} # type: Dict[str, Any]
if _traceparent is not None:
header_parameters['traceparent'] = self._serialize.header("traceparent", _traceparent, 'str')
if _tracestate is not None:
header_parameters['tracestate'] = self._serialize.header("tracestate", _tracestate, 'str')
header_parameters['Accept'] = 'application/json'
request = self._client.get(url, query_parameters, header_parameters)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.ErrorResponse, response)
raise HttpResponseError(response=response, model=error)
response_headers = {}
        response_headers['ETag'] = self._deserialize('str', response.headers.get('ETag'))
deserialized = self._deserialize('object', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, response_headers)
return deserialized
get_component.metadata = {'url': '/digitaltwins/{id}/components/{componentPath}'} # type: ignore
async def update_component(
self,
id: str,
component_path: str,
patch_document: List[object],
digital_twins_update_component_options: Optional["models.DigitalTwinsUpdateComponentOptions"] = None,
**kwargs
) -> None:
"""Updates a component on a digital twin.
Status codes:
* 204 No Content
* 400 Bad Request
* InvalidArgument - The digital twin id, component path, or payload is invalid.
* JsonPatchInvalid - The JSON Patch provided is invalid.
* ValidationFailed - Applying the patch results in an invalid digital twin.
* 404 Not Found
* DigitalTwinNotFound - The digital twin was not found.
* 412 Precondition Failed
* PreconditionFailed - The precondition check (If-Match or If-None-Match) failed.
:param id: The id of the digital twin. The id is unique within the service and case sensitive.
:type id: str
:param component_path: The name of the DTDL component.
:type component_path: str
:param patch_document: An update specification described by JSON Patch. Updates to property
values and $model elements may happen in the same request. Operations are limited to add,
replace and remove.
:type patch_document: list[object]
:param digital_twins_update_component_options: Parameter group.
:type digital_twins_update_component_options: ~azure.digitaltwins.core.models.DigitalTwinsUpdateComponentOptions
:keyword callable cls: A custom type or function that will be passed the direct response
:return: None, or the result of cls(response)
:rtype: None
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[None]
error_map = {404: ResourceNotFoundError, 409: ResourceExistsError}
error_map.update(kwargs.pop('error_map', {}))
_traceparent = None
_tracestate = None
_if_match = None
if digital_twins_update_component_options is not None:
_traceparent = digital_twins_update_component_options.traceparent
_tracestate = digital_twins_update_component_options.tracestate
_if_match = digital_twins_update_component_options.if_match
api_version = "2020-10-31"
content_type = kwargs.pop("content_type", "application/json-patch+json")
# Construct URL
url = self.update_component.metadata['url'] # type: ignore
path_format_arguments = {
'id': self._serialize.url("id", id, 'str'),
'componentPath': self._serialize.url("component_path", component_path, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
# Construct headers
header_parameters = {} # type: Dict[str, Any]
if _traceparent is not None:
header_parameters['traceparent'] = self._serialize.header("traceparent", _traceparent, 'str')
if _tracestate is not None:
header_parameters['tracestate'] = self._serialize.header("tracestate", _tracestate, 'str')
if _if_match is not None:
header_parameters['If-Match'] = self._serialize.header("if_match", _if_match, 'str')
header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
body_content_kwargs = {} # type: Dict[str, Any]
body_content = self._serialize.body(patch_document, '[object]')
body_content_kwargs['content'] = body_content
request = self._client.patch(url, query_parameters, header_parameters, **body_content_kwargs)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [202, 204]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.ErrorResponse, response)
raise HttpResponseError(response=response, model=error)
response_headers = {}
if response.status_code == 204:
            response_headers['ETag'] = self._deserialize('str', response.headers.get('ETag'))
if cls:
return cls(pipeline_response, None, response_headers)
update_component.metadata = {'url': '/digitaltwins/{id}/components/{componentPath}'} # type: ignore
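The `patch_document` accepted by `update_component` above is a JSON Patch list whose operations are limited to `add`, `replace`, and `remove`. A hypothetical sketch of such a payload — the paths and values are made up, not taken from a real twin:

```python
# Hypothetical JSON Patch payload for update_component; paths/values are illustrative only.
patch_document = [
    {"op": "replace", "path": "/Temperature", "value": 22.5},
    {"op": "add", "path": "/Humidity", "value": 40},
    {"op": "remove", "path": "/Pressure"},
]

# Per the docstring above, only these three operations are accepted on a component.
allowed_ops = {"add", "replace", "remove"}
print(all(p["op"] in allowed_ops for p in patch_document))
```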
# File: web2py/web2py/parameters_8000.py (repo: eddgt/web2py, license: BSD-3-Clause)
password="pbkdf2(1000,20,sha512)$9f524405befcdd9a$541820135b4a8ed2d356729c8cef66880d4b21d2"
# File: personalWeb/viktors_website/routes.py (repo: victorChekhovoy/myWebsite, license: MIT)
from viktors_website import viktors_app
from flask import render_template
@viktors_app.route('/')
@viktors_app.route('/index')
def index():
return render_template("index.html")
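The stacked `@viktors_app.route` decorators above register one view function under two URL rules. A self-contained sketch of that registration pattern, using a plain dict in place of Flask (the names here are illustrative, not Flask's internals):

```python
# Minimal stand-in for Flask's route registration: each route() call returns a
# decorator that records the function under the given URL rule.
routes = {}

def route(rule):
    def decorator(func):
        routes[rule] = func
        return func
    return decorator

@route('/')
@route('/index')
def index():
    return "index.html"

# Both rules now map to the same view function.
print(sorted(routes))
```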
# File: inferelator/preprocessing/__init__.py (repo: Xparx/inferelator, license: BSD-2-Clause)
from inferelator.preprocessing.priors import ManagePriors
from inferelator.preprocessing.simulate_data import make_data_noisy
# File: testobject/suites.py (repo: enriquegh/testobject-python-api, license: MIT)
class Suites(object):
def __init__(self, testobject):
self.testobject = testobject
def update_suite(self, batch_id, data=None):
method = 'PUT'
endpoint = '/v2/appium/suites/{batch_id}'.format(batch_id=batch_id)
content = self.testobject.request(method, endpoint, auth_type='suite', data=data)
return content
def get_devices_ids(self, batch_id):
method = 'GET'
endpoint = '/v2/appium/suites/{batch_id}/deviceIds'.format(batch_id=batch_id)
content = self.testobject.request(method, endpoint, auth_type='suite')
return content
def start_suite(self, batch_id, data=None):
method = 'POST'
endpoint = '/v2/appium/suites/{batch_id}/reports/start'.format(batch_id=batch_id)
content = self.testobject.request(method, endpoint, auth_type='suite', data=data)
return content
def stop_suite(self, batch_id, batch_report_id, data=None):
method = 'PUT'
endpoint = '/v2/appium/suites/{batch_id}/reports/{batch_report_id}/finish'.format(batch_id=batch_id, batch_report_id=batch_report_id)
content = self.testobject.request(method, endpoint, auth_type='suite', data=data)
return content
def stop_suite_test(self, batch_id, batch_report_id, test_report_id, passed):
method = 'PUT'
endpoint = '/v2/appium/suites/{batch_id}/reports/{batch_report_id}/results/{test_report_id}/finish'.format(batch_id=batch_id, batch_report_id=batch_report_id, test_report_id=test_report_id)
data = {}
data['passed'] = passed
content = self.testobject.request(method, endpoint, auth_type='suite', data=data)
return content
def skip_suite_test(self, batch_id, batch_report_id, test_report_id):
method = 'PUT'
endpoint = '/v2/appium/suites/{batch_id}/reports/{batch_report_id}/results/{test_report_id}/skip'.format(batch_id=batch_id, batch_report_id=batch_report_id, test_report_id=test_report_id)
content = self.testobject.request(method, endpoint, auth_type='suite')
return content
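Every method above builds its endpoint by expanding a URL template with `str.format` and then delegates to `self.testobject.request` with `auth_type='suite'`. A quick sketch of the expansion used by `stop_suite_test` — the ids are made up:

```python
# Illustrative only: expanding the stop_suite_test endpoint template with fake ids.
template = '/v2/appium/suites/{batch_id}/reports/{batch_report_id}/results/{test_report_id}/finish'
endpoint = template.format(batch_id='42', batch_report_id='7', test_report_id='3')
print(endpoint)  # /v2/appium/suites/42/reports/7/results/3/finish
```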
# File: todooo/validators.py (repo: dansackett/todooo, license: MIT)
import errors
def validate_num_arguments_eq(num_args):
"""Validate that the number of supplied args is equal to some number"""
def decorator(func):
def wrapped_func(*args, **kwargs):
if len(args[1]) != num_args:
raise errors.InvalidArgumentError
else:
func(*args, **kwargs)
return wrapped_func
return decorator
def validate_num_arguments_lt(num_args):
"""Validate that the number of supplied args is less than to some number"""
def decorator(func):
def wrapped_func(*args, **kwargs):
if len(args[1]) > num_args:
raise errors.InvalidArgumentError
else:
func(*args, **kwargs)
return wrapped_func
return decorator
def validate_num_arguments_gt(num_args):
"""Validate that the number of supplied args is greater than to some number"""
def decorator(func):
def wrapped_func(*args, **kwargs):
if len(args[1]) < num_args:
raise errors.InvalidArgumentError
else:
func(*args, **kwargs)
return wrapped_func
return decorator
def parse_index(lst, id):
"""Validate an index to the list is within range and a digit and return it"""
if not id.isdigit():
raise errors.ExpectedItemError
idx = int(id) - 1
if idx > len(lst) - 1 or idx < 0:
raise errors.InvalidItemError
return idx
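`parse_index` converts a 1-based, user-supplied id into a 0-based list index. A self-contained sketch of the same bounds logic, with the project's `errors` classes replaced by `ValueError` purely for illustration:

```python
def parse_index(lst, id):
    # Stand-in for errors.ExpectedItemError / errors.InvalidItemError.
    if not id.isdigit():
        raise ValueError("expected a numeric id")
    idx = int(id) - 1
    if idx > len(lst) - 1 or idx < 0:
        raise ValueError("id out of range")
    return idx

# User-facing id "2" maps to list index 1.
print(parse_index(['a', 'b', 'c'], '2'))
```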
# File: config.py (repo: vazrupe/fake-og-server, license: MIT)
host = '0.0.0.0'
port = 8088
# File: nfv/nfv-vim/nfv_vim/database/model/_instance.py (repo: SidneyAn/nfv, license: Apache-2.0)
# Copyright (c) 2015-2016 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
from sqlalchemy import Boolean
from sqlalchemy import Column
from sqlalchemy import String
from sqlalchemy import Text
from nfv_vim.database.model._base import AsDictMixin
from nfv_vim.database.model._base import Base
class Instance_v4(AsDictMixin, Base):
"""
Instance Database Table
"""
__tablename__ = 'instances_v4'
uuid = Column(String(64), nullable=False, primary_key=True)
name = Column(String(64), nullable=False)
admin_state = Column(String(64), nullable=False)
oper_state = Column(String(64), nullable=False)
avail_status = Column(String(64), nullable=False)
action = Column(String(64), nullable=False)
host_name = Column(String(64), nullable=True)
instance_type_uuid = Column(String(64), nullable=False)
image_uuid = Column(String(64), nullable=True)
live_migration_support = Column(Boolean, nullable=False)
elapsed_time_in_state = Column(String(64), nullable=False)
elapsed_time_on_host = Column(String(64), nullable=False)
action_data = Column(String(2048), nullable=True)
last_action_data = Column(String(2048), nullable=True)
guest_services = Column(String(2048), nullable=True)
nfvi_instance_data = Column(String(2048), nullable=False)
recoverable = Column(Boolean, nullable=False)
unlock_to_recover = Column(Boolean, nullable=False)
def __repr__(self):
return "<Instance(%r, %r)>" % (self.uuid, self.name)
class Instance_v5(AsDictMixin, Base):
"""
Instance Database Table
"""
__tablename__ = 'instances_v5'
uuid = Column(String(64), nullable=False, primary_key=True)
name = Column(String(64), nullable=False)
admin_state = Column(String(64), nullable=False)
oper_state = Column(String(64), nullable=False)
avail_status = Column(String(64), nullable=False)
action = Column(String(64), nullable=False)
host_name = Column(String(64), nullable=True)
image_uuid = Column(String(64), nullable=True)
live_migration_support = Column(Boolean, nullable=False)
elapsed_time_in_state = Column(String(64), nullable=False)
elapsed_time_on_host = Column(String(64), nullable=False)
action_data = Column(String(2048), nullable=True)
last_action_data = Column(String(2048), nullable=True)
guest_services = Column(String(2048), nullable=True)
nfvi_instance_data = Column(Text(), nullable=False)
recoverable = Column(Boolean, nullable=False)
unlock_to_recover = Column(Boolean, nullable=False)
def __repr__(self):
return "<Instance(%r, %r)>" % (self.uuid, self.name)
# File: src/dafsa/common.py (repo: tresoldi/dafsa, license: MIT)
def dummy():
    return 42
# File: ideas/deploy.py (repo: Createdd/ml_api_covid, license: CC-BY-4.0)
# import subprocess
# def install(name):
#     # each pip argument must be its own list element for subprocess.call
#     subprocess.call(['pip', 'uninstall', '-r', 'requirements.txt', '-y'])
# os.system() takes a single command string, not a list:
# os.system("pip uninstall -r requirements.txt -y")
# File: sdk/python/pulumi_yandex/mdb_redis_cluster.py (repo: pulumi/pulumi-yandex, licenses: ECL-2.0, Apache-2.0)
# coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from . import _utilities
from . import outputs
from ._inputs import *
__all__ = ['MdbRedisClusterArgs', 'MdbRedisCluster']
@pulumi.input_type
class MdbRedisClusterArgs:
def __init__(__self__, *,
config: pulumi.Input['MdbRedisClusterConfigArgs'],
environment: pulumi.Input[str],
hosts: pulumi.Input[Sequence[pulumi.Input['MdbRedisClusterHostArgs']]],
network_id: pulumi.Input[str],
resources: pulumi.Input['MdbRedisClusterResourcesArgs'],
deletion_protection: Optional[pulumi.Input[bool]] = None,
description: Optional[pulumi.Input[str]] = None,
folder_id: Optional[pulumi.Input[str]] = None,
labels: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
maintenance_window: Optional[pulumi.Input['MdbRedisClusterMaintenanceWindowArgs']] = None,
name: Optional[pulumi.Input[str]] = None,
security_group_ids: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
sharded: Optional[pulumi.Input[bool]] = None,
tls_enabled: Optional[pulumi.Input[bool]] = None):
"""
The set of arguments for constructing a MdbRedisCluster resource.
:param pulumi.Input['MdbRedisClusterConfigArgs'] config: Configuration of the Redis cluster. The structure is documented below.
:param pulumi.Input[str] environment: Deployment environment of the Redis cluster. Can be either `PRESTABLE` or `PRODUCTION`.
:param pulumi.Input[Sequence[pulumi.Input['MdbRedisClusterHostArgs']]] hosts: A host of the Redis cluster. The structure is documented below.
:param pulumi.Input[str] network_id: ID of the network, to which the Redis cluster belongs.
:param pulumi.Input['MdbRedisClusterResourcesArgs'] resources: Resources allocated to hosts of the Redis cluster. The structure is documented below.
:param pulumi.Input[bool] deletion_protection: Inhibits deletion of the cluster. Can be either `true` or `false`.
:param pulumi.Input[str] description: Description of the Redis cluster.
:param pulumi.Input[str] folder_id: The ID of the folder that the resource belongs to. If it
is not provided, the default provider folder is used.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] labels: A set of key/value label pairs to assign to the Redis cluster.
:param pulumi.Input[str] name: Name of the Redis cluster. Provided by the client when the cluster is created.
:param pulumi.Input[Sequence[pulumi.Input[str]]] security_group_ids: A set of ids of security groups assigned to hosts of the cluster.
:param pulumi.Input[bool] sharded: Redis Cluster mode enabled/disabled.
:param pulumi.Input[bool] tls_enabled: tls support mode enabled/disabled.
"""
pulumi.set(__self__, "config", config)
pulumi.set(__self__, "environment", environment)
pulumi.set(__self__, "hosts", hosts)
pulumi.set(__self__, "network_id", network_id)
pulumi.set(__self__, "resources", resources)
if deletion_protection is not None:
pulumi.set(__self__, "deletion_protection", deletion_protection)
if description is not None:
pulumi.set(__self__, "description", description)
if folder_id is not None:
pulumi.set(__self__, "folder_id", folder_id)
if labels is not None:
pulumi.set(__self__, "labels", labels)
if maintenance_window is not None:
pulumi.set(__self__, "maintenance_window", maintenance_window)
if name is not None:
pulumi.set(__self__, "name", name)
if security_group_ids is not None:
pulumi.set(__self__, "security_group_ids", security_group_ids)
if sharded is not None:
pulumi.set(__self__, "sharded", sharded)
if tls_enabled is not None:
pulumi.set(__self__, "tls_enabled", tls_enabled)
@property
@pulumi.getter
def config(self) -> pulumi.Input['MdbRedisClusterConfigArgs']:
"""
Configuration of the Redis cluster. The structure is documented below.
"""
return pulumi.get(self, "config")
@config.setter
def config(self, value: pulumi.Input['MdbRedisClusterConfigArgs']):
pulumi.set(self, "config", value)
@property
@pulumi.getter
def environment(self) -> pulumi.Input[str]:
"""
Deployment environment of the Redis cluster. Can be either `PRESTABLE` or `PRODUCTION`.
"""
return pulumi.get(self, "environment")
@environment.setter
def environment(self, value: pulumi.Input[str]):
pulumi.set(self, "environment", value)
@property
@pulumi.getter
def hosts(self) -> pulumi.Input[Sequence[pulumi.Input['MdbRedisClusterHostArgs']]]:
"""
A host of the Redis cluster. The structure is documented below.
"""
return pulumi.get(self, "hosts")
@hosts.setter
def hosts(self, value: pulumi.Input[Sequence[pulumi.Input['MdbRedisClusterHostArgs']]]):
pulumi.set(self, "hosts", value)
@property
@pulumi.getter(name="networkId")
def network_id(self) -> pulumi.Input[str]:
"""
ID of the network, to which the Redis cluster belongs.
"""
return pulumi.get(self, "network_id")
@network_id.setter
def network_id(self, value: pulumi.Input[str]):
pulumi.set(self, "network_id", value)
@property
@pulumi.getter
def resources(self) -> pulumi.Input['MdbRedisClusterResourcesArgs']:
"""
Resources allocated to hosts of the Redis cluster. The structure is documented below.
"""
return pulumi.get(self, "resources")
@resources.setter
def resources(self, value: pulumi.Input['MdbRedisClusterResourcesArgs']):
pulumi.set(self, "resources", value)
@property
@pulumi.getter(name="deletionProtection")
def deletion_protection(self) -> Optional[pulumi.Input[bool]]:
"""
Inhibits deletion of the cluster. Can be either `true` or `false`.
"""
return pulumi.get(self, "deletion_protection")
@deletion_protection.setter
def deletion_protection(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "deletion_protection", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Description of the Redis cluster.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="folderId")
def folder_id(self) -> Optional[pulumi.Input[str]]:
"""
The ID of the folder that the resource belongs to. If it
is not provided, the default provider folder is used.
"""
return pulumi.get(self, "folder_id")
@folder_id.setter
def folder_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "folder_id", value)
@property
@pulumi.getter
def labels(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
A set of key/value label pairs to assign to the Redis cluster.
"""
return pulumi.get(self, "labels")
@labels.setter
def labels(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "labels", value)
@property
@pulumi.getter(name="maintenanceWindow")
def maintenance_window(self) -> Optional[pulumi.Input['MdbRedisClusterMaintenanceWindowArgs']]:
return pulumi.get(self, "maintenance_window")
@maintenance_window.setter
def maintenance_window(self, value: Optional[pulumi.Input['MdbRedisClusterMaintenanceWindowArgs']]):
pulumi.set(self, "maintenance_window", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Name of the Redis cluster. Provided by the client when the cluster is created.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter(name="securityGroupIds")
def security_group_ids(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
A set of ids of security groups assigned to hosts of the cluster.
"""
return pulumi.get(self, "security_group_ids")
@security_group_ids.setter
def security_group_ids(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "security_group_ids", value)
@property
@pulumi.getter
def sharded(self) -> Optional[pulumi.Input[bool]]:
"""
Redis Cluster mode enabled/disabled.
"""
return pulumi.get(self, "sharded")
@sharded.setter
def sharded(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "sharded", value)
@property
@pulumi.getter(name="tlsEnabled")
def tls_enabled(self) -> Optional[pulumi.Input[bool]]:
"""
tls support mode enabled/disabled.
"""
return pulumi.get(self, "tls_enabled")
@tls_enabled.setter
def tls_enabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "tls_enabled", value)
@pulumi.input_type
class _MdbRedisClusterState:
def __init__(__self__, *,
config: Optional[pulumi.Input['MdbRedisClusterConfigArgs']] = None,
created_at: Optional[pulumi.Input[str]] = None,
deletion_protection: Optional[pulumi.Input[bool]] = None,
description: Optional[pulumi.Input[str]] = None,
environment: Optional[pulumi.Input[str]] = None,
folder_id: Optional[pulumi.Input[str]] = None,
health: Optional[pulumi.Input[str]] = None,
hosts: Optional[pulumi.Input[Sequence[pulumi.Input['MdbRedisClusterHostArgs']]]] = None,
labels: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
maintenance_window: Optional[pulumi.Input['MdbRedisClusterMaintenanceWindowArgs']] = None,
name: Optional[pulumi.Input[str]] = None,
network_id: Optional[pulumi.Input[str]] = None,
resources: Optional[pulumi.Input['MdbRedisClusterResourcesArgs']] = None,
security_group_ids: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
sharded: Optional[pulumi.Input[bool]] = None,
status: Optional[pulumi.Input[str]] = None,
tls_enabled: Optional[pulumi.Input[bool]] = None):
"""
Input properties used for looking up and filtering MdbRedisCluster resources.
:param pulumi.Input['MdbRedisClusterConfigArgs'] config: Configuration of the Redis cluster. The structure is documented below.
:param pulumi.Input[str] created_at: Creation timestamp of the cluster.
:param pulumi.Input[bool] deletion_protection: Inhibits deletion of the cluster. Can be either `true` or `false`.
:param pulumi.Input[str] description: Description of the Redis cluster.
:param pulumi.Input[str] environment: Deployment environment of the Redis cluster. Can be either `PRESTABLE` or `PRODUCTION`.
:param pulumi.Input[str] folder_id: The ID of the folder that the resource belongs to. If it
is not provided, the default provider folder is used.
:param pulumi.Input[str] health: Aggregated health of the cluster. Can be either `ALIVE`, `DEGRADED`, `DEAD` or `HEALTH_UNKNOWN`.
For more information see `health` field of JSON representation in [the official documentation](https://cloud.yandex.com/docs/managed-redis/api-ref/Cluster/).
:param pulumi.Input[Sequence[pulumi.Input['MdbRedisClusterHostArgs']]] hosts: A host of the Redis cluster. The structure is documented below.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] labels: A set of key/value label pairs to assign to the Redis cluster.
:param pulumi.Input[str] name: Name of the Redis cluster. Provided by the client when the cluster is created.
:param pulumi.Input[str] network_id: ID of the network to which the Redis cluster belongs.
:param pulumi.Input['MdbRedisClusterResourcesArgs'] resources: Resources allocated to hosts of the Redis cluster. The structure is documented below.
:param pulumi.Input[Sequence[pulumi.Input[str]]] security_group_ids: A set of ids of security groups assigned to hosts of the cluster.
:param pulumi.Input[bool] sharded: Redis Cluster mode enabled/disabled.
:param pulumi.Input[str] status: Status of the cluster. Can be either `CREATING`, `STARTING`, `RUNNING`, `UPDATING`, `STOPPING`, `STOPPED`, `ERROR` or `STATUS_UNKNOWN`.
For more information see `status` field of JSON representation in [the official documentation](https://cloud.yandex.com/docs/managed-redis/api-ref/Cluster/).
:param pulumi.Input[bool] tls_enabled: TLS support mode enabled/disabled.
"""
if config is not None:
pulumi.set(__self__, "config", config)
if created_at is not None:
pulumi.set(__self__, "created_at", created_at)
if deletion_protection is not None:
pulumi.set(__self__, "deletion_protection", deletion_protection)
if description is not None:
pulumi.set(__self__, "description", description)
if environment is not None:
pulumi.set(__self__, "environment", environment)
if folder_id is not None:
pulumi.set(__self__, "folder_id", folder_id)
if health is not None:
pulumi.set(__self__, "health", health)
if hosts is not None:
pulumi.set(__self__, "hosts", hosts)
if labels is not None:
pulumi.set(__self__, "labels", labels)
if maintenance_window is not None:
pulumi.set(__self__, "maintenance_window", maintenance_window)
if name is not None:
pulumi.set(__self__, "name", name)
if network_id is not None:
pulumi.set(__self__, "network_id", network_id)
if resources is not None:
pulumi.set(__self__, "resources", resources)
if security_group_ids is not None:
pulumi.set(__self__, "security_group_ids", security_group_ids)
if sharded is not None:
pulumi.set(__self__, "sharded", sharded)
if status is not None:
pulumi.set(__self__, "status", status)
if tls_enabled is not None:
pulumi.set(__self__, "tls_enabled", tls_enabled)
@property
@pulumi.getter
def config(self) -> Optional[pulumi.Input['MdbRedisClusterConfigArgs']]:
"""
Configuration of the Redis cluster. The structure is documented below.
"""
return pulumi.get(self, "config")
@config.setter
def config(self, value: Optional[pulumi.Input['MdbRedisClusterConfigArgs']]):
pulumi.set(self, "config", value)
@property
@pulumi.getter(name="createdAt")
def created_at(self) -> Optional[pulumi.Input[str]]:
"""
Creation timestamp of the cluster.
"""
return pulumi.get(self, "created_at")
@created_at.setter
def created_at(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "created_at", value)
@property
@pulumi.getter(name="deletionProtection")
def deletion_protection(self) -> Optional[pulumi.Input[bool]]:
"""
Inhibits deletion of the cluster. Can be either `true` or `false`.
"""
return pulumi.get(self, "deletion_protection")
@deletion_protection.setter
def deletion_protection(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "deletion_protection", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Description of the Redis cluster.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def environment(self) -> Optional[pulumi.Input[str]]:
"""
Deployment environment of the Redis cluster. Can be either `PRESTABLE` or `PRODUCTION`.
"""
return pulumi.get(self, "environment")
@environment.setter
def environment(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "environment", value)
@property
@pulumi.getter(name="folderId")
def folder_id(self) -> Optional[pulumi.Input[str]]:
"""
The ID of the folder that the resource belongs to. If it
is not provided, the default provider folder is used.
"""
return pulumi.get(self, "folder_id")
@folder_id.setter
def folder_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "folder_id", value)
@property
@pulumi.getter
def health(self) -> Optional[pulumi.Input[str]]:
"""
Aggregated health of the cluster. Can be either `ALIVE`, `DEGRADED`, `DEAD` or `HEALTH_UNKNOWN`.
For more information see `health` field of JSON representation in [the official documentation](https://cloud.yandex.com/docs/managed-redis/api-ref/Cluster/).
"""
return pulumi.get(self, "health")
@health.setter
def health(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "health", value)
@property
@pulumi.getter
def hosts(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['MdbRedisClusterHostArgs']]]]:
"""
A host of the Redis cluster. The structure is documented below.
"""
return pulumi.get(self, "hosts")
@hosts.setter
def hosts(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['MdbRedisClusterHostArgs']]]]):
pulumi.set(self, "hosts", value)
@property
@pulumi.getter
def labels(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
A set of key/value label pairs to assign to the Redis cluster.
"""
return pulumi.get(self, "labels")
@labels.setter
def labels(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "labels", value)
@property
@pulumi.getter(name="maintenanceWindow")
def maintenance_window(self) -> Optional[pulumi.Input['MdbRedisClusterMaintenanceWindowArgs']]:
return pulumi.get(self, "maintenance_window")
@maintenance_window.setter
def maintenance_window(self, value: Optional[pulumi.Input['MdbRedisClusterMaintenanceWindowArgs']]):
pulumi.set(self, "maintenance_window", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Name of the Redis cluster. Provided by the client when the cluster is created.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter(name="networkId")
def network_id(self) -> Optional[pulumi.Input[str]]:
"""
ID of the network to which the Redis cluster belongs.
"""
return pulumi.get(self, "network_id")
@network_id.setter
def network_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "network_id", value)
@property
@pulumi.getter
def resources(self) -> Optional[pulumi.Input['MdbRedisClusterResourcesArgs']]:
"""
Resources allocated to hosts of the Redis cluster. The structure is documented below.
"""
return pulumi.get(self, "resources")
@resources.setter
def resources(self, value: Optional[pulumi.Input['MdbRedisClusterResourcesArgs']]):
pulumi.set(self, "resources", value)
@property
@pulumi.getter(name="securityGroupIds")
def security_group_ids(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
A set of ids of security groups assigned to hosts of the cluster.
"""
return pulumi.get(self, "security_group_ids")
@security_group_ids.setter
def security_group_ids(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "security_group_ids", value)
@property
@pulumi.getter
def sharded(self) -> Optional[pulumi.Input[bool]]:
"""
Redis Cluster mode enabled/disabled.
"""
return pulumi.get(self, "sharded")
@sharded.setter
def sharded(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "sharded", value)
@property
@pulumi.getter
def status(self) -> Optional[pulumi.Input[str]]:
"""
Status of the cluster. Can be either `CREATING`, `STARTING`, `RUNNING`, `UPDATING`, `STOPPING`, `STOPPED`, `ERROR` or `STATUS_UNKNOWN`.
For more information see `status` field of JSON representation in [the official documentation](https://cloud.yandex.com/docs/managed-redis/api-ref/Cluster/).
"""
return pulumi.get(self, "status")
@status.setter
def status(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "status", value)
@property
@pulumi.getter(name="tlsEnabled")
def tls_enabled(self) -> Optional[pulumi.Input[bool]]:
"""
TLS support mode enabled/disabled.
"""
return pulumi.get(self, "tls_enabled")
@tls_enabled.setter
def tls_enabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "tls_enabled", value)
class MdbRedisCluster(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
config: Optional[pulumi.Input[pulumi.InputType['MdbRedisClusterConfigArgs']]] = None,
deletion_protection: Optional[pulumi.Input[bool]] = None,
description: Optional[pulumi.Input[str]] = None,
environment: Optional[pulumi.Input[str]] = None,
folder_id: Optional[pulumi.Input[str]] = None,
hosts: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['MdbRedisClusterHostArgs']]]]] = None,
labels: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
maintenance_window: Optional[pulumi.Input[pulumi.InputType['MdbRedisClusterMaintenanceWindowArgs']]] = None,
name: Optional[pulumi.Input[str]] = None,
network_id: Optional[pulumi.Input[str]] = None,
resources: Optional[pulumi.Input[pulumi.InputType['MdbRedisClusterResourcesArgs']]] = None,
security_group_ids: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
sharded: Optional[pulumi.Input[bool]] = None,
tls_enabled: Optional[pulumi.Input[bool]] = None,
__props__=None):
"""
Manages a Redis cluster within the Yandex.Cloud. For more information, see
[the official documentation](https://cloud.yandex.com/docs/managed-redis/concepts).
## Example Usage
Example of creating a Standalone Redis.
```python
import pulumi
import pulumi_yandex as yandex
foo_vpc_network = yandex.VpcNetwork("fooVpcNetwork")
foo_vpc_subnet = yandex.VpcSubnet("fooVpcSubnet",
network_id=foo_vpc_network.id,
v4_cidr_blocks=["10.5.0.0/24"],
zone="ru-central1-a")
foo_mdb_redis_cluster = yandex.MdbRedisCluster("fooMdbRedisCluster",
config=yandex.MdbRedisClusterConfigArgs(
password="your_password",
version="6.0",
),
environment="PRESTABLE",
hosts=[yandex.MdbRedisClusterHostArgs(
subnet_id=foo_vpc_subnet.id,
zone="ru-central1-a",
)],
maintenance_window=yandex.MdbRedisClusterMaintenanceWindowArgs(
type="ANYTIME",
),
network_id=foo_vpc_network.id,
resources=yandex.MdbRedisClusterResourcesArgs(
disk_size=16,
resource_preset_id="hm1.nano",
))
```
Example of creating a sharded Redis Cluster.
```python
import pulumi
import pulumi_yandex as yandex
foo_vpc_network = yandex.VpcNetwork("fooVpcNetwork")
foo_vpc_subnet = yandex.VpcSubnet("fooVpcSubnet",
network_id=foo_vpc_network.id,
v4_cidr_blocks=["10.1.0.0/24"],
zone="ru-central1-a")
bar = yandex.VpcSubnet("bar",
network_id=foo_vpc_network.id,
v4_cidr_blocks=["10.2.0.0/24"],
zone="ru-central1-b")
baz = yandex.VpcSubnet("baz",
network_id=foo_vpc_network.id,
v4_cidr_blocks=["10.3.0.0/24"],
zone="ru-central1-c")
foo_mdb_redis_cluster = yandex.MdbRedisCluster("fooMdbRedisCluster",
config=yandex.MdbRedisClusterConfigArgs(
password="your_password",
version="6.0",
),
environment="PRESTABLE",
hosts=[
yandex.MdbRedisClusterHostArgs(
shard_name="first",
subnet_id=foo_vpc_subnet.id,
zone="ru-central1-a",
),
yandex.MdbRedisClusterHostArgs(
shard_name="second",
subnet_id=bar.id,
zone="ru-central1-b",
),
yandex.MdbRedisClusterHostArgs(
shard_name="third",
subnet_id=baz.id,
zone="ru-central1-c",
),
],
network_id=foo_vpc_network.id,
resources=yandex.MdbRedisClusterResourcesArgs(
disk_size=16,
resource_preset_id="hm1.nano",
),
sharded=True)
```
## Import
A cluster can be imported using the `id` of the resource, e.g.
```sh
$ pulumi import yandex:index/mdbRedisCluster:MdbRedisCluster foo cluster_id
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[pulumi.InputType['MdbRedisClusterConfigArgs']] config: Configuration of the Redis cluster. The structure is documented below.
:param pulumi.Input[bool] deletion_protection: Inhibits deletion of the cluster. Can be either `true` or `false`.
:param pulumi.Input[str] description: Description of the Redis cluster.
:param pulumi.Input[str] environment: Deployment environment of the Redis cluster. Can be either `PRESTABLE` or `PRODUCTION`.
:param pulumi.Input[str] folder_id: The ID of the folder that the resource belongs to. If it
is not provided, the default provider folder is used.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['MdbRedisClusterHostArgs']]]] hosts: A host of the Redis cluster. The structure is documented below.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] labels: A set of key/value label pairs to assign to the Redis cluster.
:param pulumi.Input[str] name: Name of the Redis cluster. Provided by the client when the cluster is created.
:param pulumi.Input[str] network_id: ID of the network to which the Redis cluster belongs.
:param pulumi.Input[pulumi.InputType['MdbRedisClusterResourcesArgs']] resources: Resources allocated to hosts of the Redis cluster. The structure is documented below.
:param pulumi.Input[Sequence[pulumi.Input[str]]] security_group_ids: A set of ids of security groups assigned to hosts of the cluster.
:param pulumi.Input[bool] sharded: Redis Cluster mode enabled/disabled.
:param pulumi.Input[bool] tls_enabled: TLS support mode enabled/disabled.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: MdbRedisClusterArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
Manages a Redis cluster within the Yandex.Cloud. For more information, see
[the official documentation](https://cloud.yandex.com/docs/managed-redis/concepts).
## Example Usage
Example of creating a Standalone Redis.
```python
import pulumi
import pulumi_yandex as yandex
foo_vpc_network = yandex.VpcNetwork("fooVpcNetwork")
foo_vpc_subnet = yandex.VpcSubnet("fooVpcSubnet",
network_id=foo_vpc_network.id,
v4_cidr_blocks=["10.5.0.0/24"],
zone="ru-central1-a")
foo_mdb_redis_cluster = yandex.MdbRedisCluster("fooMdbRedisCluster",
config=yandex.MdbRedisClusterConfigArgs(
password="your_password",
version="6.0",
),
environment="PRESTABLE",
hosts=[yandex.MdbRedisClusterHostArgs(
subnet_id=foo_vpc_subnet.id,
zone="ru-central1-a",
)],
maintenance_window=yandex.MdbRedisClusterMaintenanceWindowArgs(
type="ANYTIME",
),
network_id=foo_vpc_network.id,
resources=yandex.MdbRedisClusterResourcesArgs(
disk_size=16,
resource_preset_id="hm1.nano",
))
```
Example of creating a sharded Redis Cluster.
```python
import pulumi
import pulumi_yandex as yandex
foo_vpc_network = yandex.VpcNetwork("fooVpcNetwork")
foo_vpc_subnet = yandex.VpcSubnet("fooVpcSubnet",
network_id=foo_vpc_network.id,
v4_cidr_blocks=["10.1.0.0/24"],
zone="ru-central1-a")
bar = yandex.VpcSubnet("bar",
network_id=foo_vpc_network.id,
v4_cidr_blocks=["10.2.0.0/24"],
zone="ru-central1-b")
baz = yandex.VpcSubnet("baz",
network_id=foo_vpc_network.id,
v4_cidr_blocks=["10.3.0.0/24"],
zone="ru-central1-c")
foo_mdb_redis_cluster = yandex.MdbRedisCluster("fooMdbRedisCluster",
config=yandex.MdbRedisClusterConfigArgs(
password="your_password",
version="6.0",
),
environment="PRESTABLE",
hosts=[
yandex.MdbRedisClusterHostArgs(
shard_name="first",
subnet_id=foo_vpc_subnet.id,
zone="ru-central1-a",
),
yandex.MdbRedisClusterHostArgs(
shard_name="second",
subnet_id=bar.id,
zone="ru-central1-b",
),
yandex.MdbRedisClusterHostArgs(
shard_name="third",
subnet_id=baz.id,
zone="ru-central1-c",
),
],
network_id=foo_vpc_network.id,
resources=yandex.MdbRedisClusterResourcesArgs(
disk_size=16,
resource_preset_id="hm1.nano",
),
sharded=True)
```
## Import
A cluster can be imported using the `id` of the resource, e.g.
```sh
$ pulumi import yandex:index/mdbRedisCluster:MdbRedisCluster foo cluster_id
```
:param str resource_name: The name of the resource.
:param MdbRedisClusterArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(MdbRedisClusterArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
config: Optional[pulumi.Input[pulumi.InputType['MdbRedisClusterConfigArgs']]] = None,
deletion_protection: Optional[pulumi.Input[bool]] = None,
description: Optional[pulumi.Input[str]] = None,
environment: Optional[pulumi.Input[str]] = None,
folder_id: Optional[pulumi.Input[str]] = None,
hosts: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['MdbRedisClusterHostArgs']]]]] = None,
labels: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
maintenance_window: Optional[pulumi.Input[pulumi.InputType['MdbRedisClusterMaintenanceWindowArgs']]] = None,
name: Optional[pulumi.Input[str]] = None,
network_id: Optional[pulumi.Input[str]] = None,
resources: Optional[pulumi.Input[pulumi.InputType['MdbRedisClusterResourcesArgs']]] = None,
security_group_ids: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
sharded: Optional[pulumi.Input[bool]] = None,
tls_enabled: Optional[pulumi.Input[bool]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = MdbRedisClusterArgs.__new__(MdbRedisClusterArgs)
if config is None and not opts.urn:
raise TypeError("Missing required property 'config'")
__props__.__dict__["config"] = config
__props__.__dict__["deletion_protection"] = deletion_protection
__props__.__dict__["description"] = description
if environment is None and not opts.urn:
raise TypeError("Missing required property 'environment'")
__props__.__dict__["environment"] = environment
__props__.__dict__["folder_id"] = folder_id
if hosts is None and not opts.urn:
raise TypeError("Missing required property 'hosts'")
__props__.__dict__["hosts"] = hosts
__props__.__dict__["labels"] = labels
__props__.__dict__["maintenance_window"] = maintenance_window
__props__.__dict__["name"] = name
if network_id is None and not opts.urn:
raise TypeError("Missing required property 'network_id'")
__props__.__dict__["network_id"] = network_id
if resources is None and not opts.urn:
raise TypeError("Missing required property 'resources'")
__props__.__dict__["resources"] = resources
__props__.__dict__["security_group_ids"] = security_group_ids
__props__.__dict__["sharded"] = sharded
__props__.__dict__["tls_enabled"] = tls_enabled
__props__.__dict__["created_at"] = None
__props__.__dict__["health"] = None
__props__.__dict__["status"] = None
super(MdbRedisCluster, __self__).__init__(
'yandex:index/mdbRedisCluster:MdbRedisCluster',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
config: Optional[pulumi.Input[pulumi.InputType['MdbRedisClusterConfigArgs']]] = None,
created_at: Optional[pulumi.Input[str]] = None,
deletion_protection: Optional[pulumi.Input[bool]] = None,
description: Optional[pulumi.Input[str]] = None,
environment: Optional[pulumi.Input[str]] = None,
folder_id: Optional[pulumi.Input[str]] = None,
health: Optional[pulumi.Input[str]] = None,
hosts: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['MdbRedisClusterHostArgs']]]]] = None,
labels: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
maintenance_window: Optional[pulumi.Input[pulumi.InputType['MdbRedisClusterMaintenanceWindowArgs']]] = None,
name: Optional[pulumi.Input[str]] = None,
network_id: Optional[pulumi.Input[str]] = None,
resources: Optional[pulumi.Input[pulumi.InputType['MdbRedisClusterResourcesArgs']]] = None,
security_group_ids: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
sharded: Optional[pulumi.Input[bool]] = None,
status: Optional[pulumi.Input[str]] = None,
tls_enabled: Optional[pulumi.Input[bool]] = None) -> 'MdbRedisCluster':
"""
Get an existing MdbRedisCluster resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[pulumi.InputType['MdbRedisClusterConfigArgs']] config: Configuration of the Redis cluster. The structure is documented below.
:param pulumi.Input[str] created_at: Creation timestamp of the cluster.
:param pulumi.Input[bool] deletion_protection: Inhibits deletion of the cluster. Can be either `true` or `false`.
:param pulumi.Input[str] description: Description of the Redis cluster.
:param pulumi.Input[str] environment: Deployment environment of the Redis cluster. Can be either `PRESTABLE` or `PRODUCTION`.
:param pulumi.Input[str] folder_id: The ID of the folder that the resource belongs to. If it
is not provided, the default provider folder is used.
:param pulumi.Input[str] health: Aggregated health of the cluster. Can be either `ALIVE`, `DEGRADED`, `DEAD` or `HEALTH_UNKNOWN`.
For more information see `health` field of JSON representation in [the official documentation](https://cloud.yandex.com/docs/managed-redis/api-ref/Cluster/).
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['MdbRedisClusterHostArgs']]]] hosts: A host of the Redis cluster. The structure is documented below.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] labels: A set of key/value label pairs to assign to the Redis cluster.
:param pulumi.Input[str] name: Name of the Redis cluster. Provided by the client when the cluster is created.
:param pulumi.Input[str] network_id: ID of the network to which the Redis cluster belongs.
:param pulumi.Input[pulumi.InputType['MdbRedisClusterResourcesArgs']] resources: Resources allocated to hosts of the Redis cluster. The structure is documented below.
:param pulumi.Input[Sequence[pulumi.Input[str]]] security_group_ids: A set of ids of security groups assigned to hosts of the cluster.
:param pulumi.Input[bool] sharded: Redis Cluster mode enabled/disabled.
:param pulumi.Input[str] status: Status of the cluster. Can be either `CREATING`, `STARTING`, `RUNNING`, `UPDATING`, `STOPPING`, `STOPPED`, `ERROR` or `STATUS_UNKNOWN`.
For more information see `status` field of JSON representation in [the official documentation](https://cloud.yandex.com/docs/managed-redis/api-ref/Cluster/).
:param pulumi.Input[bool] tls_enabled: TLS support mode enabled/disabled.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _MdbRedisClusterState.__new__(_MdbRedisClusterState)
__props__.__dict__["config"] = config
__props__.__dict__["created_at"] = created_at
__props__.__dict__["deletion_protection"] = deletion_protection
__props__.__dict__["description"] = description
__props__.__dict__["environment"] = environment
__props__.__dict__["folder_id"] = folder_id
__props__.__dict__["health"] = health
__props__.__dict__["hosts"] = hosts
__props__.__dict__["labels"] = labels
__props__.__dict__["maintenance_window"] = maintenance_window
__props__.__dict__["name"] = name
__props__.__dict__["network_id"] = network_id
__props__.__dict__["resources"] = resources
__props__.__dict__["security_group_ids"] = security_group_ids
__props__.__dict__["sharded"] = sharded
__props__.__dict__["status"] = status
__props__.__dict__["tls_enabled"] = tls_enabled
return MdbRedisCluster(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter
def config(self) -> pulumi.Output['outputs.MdbRedisClusterConfig']:
"""
Configuration of the Redis cluster. The structure is documented below.
"""
return pulumi.get(self, "config")
@property
@pulumi.getter(name="createdAt")
def created_at(self) -> pulumi.Output[str]:
"""
Creation timestamp of the cluster.
"""
return pulumi.get(self, "created_at")
@property
@pulumi.getter(name="deletionProtection")
def deletion_protection(self) -> pulumi.Output[bool]:
"""
Inhibits deletion of the cluster. Can be either `true` or `false`.
"""
return pulumi.get(self, "deletion_protection")
@property
@pulumi.getter
def description(self) -> pulumi.Output[Optional[str]]:
"""
Description of the Redis cluster.
"""
return pulumi.get(self, "description")
@property
@pulumi.getter
def environment(self) -> pulumi.Output[str]:
"""
Deployment environment of the Redis cluster. Can be either `PRESTABLE` or `PRODUCTION`.
"""
return pulumi.get(self, "environment")
@property
@pulumi.getter(name="folderId")
def folder_id(self) -> pulumi.Output[str]:
"""
The ID of the folder that the resource belongs to. If it
is not provided, the default provider folder is used.
"""
return pulumi.get(self, "folder_id")
@property
@pulumi.getter
def health(self) -> pulumi.Output[str]:
"""
Aggregated health of the cluster. Can be either `ALIVE`, `DEGRADED`, `DEAD` or `HEALTH_UNKNOWN`.
For more information see `health` field of JSON representation in [the official documentation](https://cloud.yandex.com/docs/managed-redis/api-ref/Cluster/).
"""
return pulumi.get(self, "health")
@property
@pulumi.getter
def hosts(self) -> pulumi.Output[Sequence['outputs.MdbRedisClusterHost']]:
"""
A host of the Redis cluster. The structure is documented below.
"""
return pulumi.get(self, "hosts")
@property
@pulumi.getter
def labels(self) -> pulumi.Output[Optional[Mapping[str, str]]]:
"""
A set of key/value label pairs to assign to the Redis cluster.
"""
return pulumi.get(self, "labels")
@property
@pulumi.getter(name="maintenanceWindow")
def maintenance_window(self) -> pulumi.Output['outputs.MdbRedisClusterMaintenanceWindow']:
return pulumi.get(self, "maintenance_window")
@property
@pulumi.getter
def name(self) -> pulumi.Output[str]:
"""
Name of the Redis cluster. Provided by the client when the cluster is created.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter(name="networkId")
def network_id(self) -> pulumi.Output[str]:
"""
ID of the network to which the Redis cluster belongs.
"""
return pulumi.get(self, "network_id")
@property
@pulumi.getter
def resources(self) -> pulumi.Output['outputs.MdbRedisClusterResources']:
"""
Resources allocated to hosts of the Redis cluster. The structure is documented below.
"""
return pulumi.get(self, "resources")
@property
@pulumi.getter(name="securityGroupIds")
def security_group_ids(self) -> pulumi.Output[Optional[Sequence[str]]]:
"""
A set of ids of security groups assigned to hosts of the cluster.
"""
return pulumi.get(self, "security_group_ids")
@property
@pulumi.getter
def sharded(self) -> pulumi.Output[Optional[bool]]:
"""
Redis Cluster mode enabled/disabled.
"""
return pulumi.get(self, "sharded")
@property
@pulumi.getter
def status(self) -> pulumi.Output[str]:
"""
Status of the cluster. Can be either `CREATING`, `STARTING`, `RUNNING`, `UPDATING`, `STOPPING`, `STOPPED`, `ERROR` or `STATUS_UNKNOWN`.
For more information see `status` field of JSON representation in [the official documentation](https://cloud.yandex.com/docs/managed-redis/api-ref/Cluster/).
"""
return pulumi.get(self, "status")
@property
@pulumi.getter(name="tlsEnabled")
def tls_enabled(self) -> pulumi.Output[bool]:
"""
TLS support mode enabled/disabled.
"""
return pulumi.get(self, "tls_enabled")
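# A minimal sketch (with a hypothetical cluster ID) of reading an existing
# cluster's state via the `MdbRedisCluster.get` static method defined above,
# e.g. after `pulumi import yandex:index/mdbRedisCluster:MdbRedisCluster foo cluster_id`.
# Kept as a comment so this module stays side-effect free on import:
#
#     import pulumi
#     import pulumi_yandex as yandex
#
#     existing = yandex.MdbRedisCluster.get(
#         "existing",
#         id="c9q0example",  # hypothetical cluster ID
#     )
#     pulumi.export("redisHealth", existing.health)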
# -*- coding: utf-8 -*-
# This file is auto-generated, don't edit it. Thanks.
from Tea.core import TeaCore
from alibabacloud_tea_openapi.client import Client as OpenApiClient
from alibabacloud_tea_openapi import models as open_api_models
from alibabacloud_tea_util.client import Client as UtilClient
from alibabacloud_dingtalk.occupationauth_1_0 import models as dingtalkoccupationauth__1__0_models
from alibabacloud_tea_util import models as util_models
from alibabacloud_openapi_util.client import Client as OpenApiUtilClient
class Client(OpenApiClient):
"""
*\
"""
def __init__(
self,
config: open_api_models.Config,
):
super().__init__(config)
self._endpoint_rule = ''
if UtilClient.empty(self._endpoint):
self._endpoint = 'api.dingtalk.com'
def check_user_task_status(
self,
request: dingtalkoccupationauth__1__0_models.CheckUserTaskStatusRequest,
) -> dingtalkoccupationauth__1__0_models.CheckUserTaskStatusResponse:
runtime = util_models.RuntimeOptions()
headers = dingtalkoccupationauth__1__0_models.CheckUserTaskStatusHeaders()
return self.check_user_task_status_with_options(request, headers, runtime)
async def check_user_task_status_async(
self,
request: dingtalkoccupationauth__1__0_models.CheckUserTaskStatusRequest,
) -> dingtalkoccupationauth__1__0_models.CheckUserTaskStatusResponse:
runtime = util_models.RuntimeOptions()
headers = dingtalkoccupationauth__1__0_models.CheckUserTaskStatusHeaders()
return await self.check_user_task_status_with_options_async(request, headers, runtime)
def check_user_task_status_with_options(
self,
request: dingtalkoccupationauth__1__0_models.CheckUserTaskStatusRequest,
headers: dingtalkoccupationauth__1__0_models.CheckUserTaskStatusHeaders,
runtime: util_models.RuntimeOptions,
) -> dingtalkoccupationauth__1__0_models.CheckUserTaskStatusResponse:
UtilClient.validate_model(request)
body = {}
if not UtilClient.is_unset(request.province_code):
body['provinceCode'] = request.province_code
real_headers = {}
if not UtilClient.is_unset(headers.common_headers):
real_headers = headers.common_headers
if not UtilClient.is_unset(headers.x_acs_dingtalk_access_token):
real_headers['x-acs-dingtalk-access-token'] = headers.x_acs_dingtalk_access_token
req = open_api_models.OpenApiRequest(
headers=real_headers,
body=OpenApiUtilClient.parse_to_map(body)
)
return TeaCore.from_map(
dingtalkoccupationauth__1__0_models.CheckUserTaskStatusResponse(),
self.do_roarequest('CheckUserTaskStatus', 'occupationauth_1.0', 'HTTP', 'POST', 'AK', f'/v1.0/occupationauth/auths/userTasks', 'json', req, runtime)
)
async def check_user_task_status_with_options_async(
self,
request: dingtalkoccupationauth__1__0_models.CheckUserTaskStatusRequest,
headers: dingtalkoccupationauth__1__0_models.CheckUserTaskStatusHeaders,
runtime: util_models.RuntimeOptions,
) -> dingtalkoccupationauth__1__0_models.CheckUserTaskStatusResponse:
UtilClient.validate_model(request)
body = {}
if not UtilClient.is_unset(request.province_code):
body['provinceCode'] = request.province_code
real_headers = {}
if not UtilClient.is_unset(headers.common_headers):
real_headers = headers.common_headers
if not UtilClient.is_unset(headers.x_acs_dingtalk_access_token):
real_headers['x-acs-dingtalk-access-token'] = headers.x_acs_dingtalk_access_token
req = open_api_models.OpenApiRequest(
headers=real_headers,
body=OpenApiUtilClient.parse_to_map(body)
)
return TeaCore.from_map(
dingtalkoccupationauth__1__0_models.CheckUserTaskStatusResponse(),
await self.do_roarequest_async('CheckUserTaskStatus', 'occupationauth_1.0', 'HTTP', 'POST', 'AK', f'/v1.0/occupationauth/auths/userTasks', 'json', req, runtime)
)
def check_user_tasks_status(
self,
request: dingtalkoccupationauth__1__0_models.CheckUserTasksStatusRequest,
) -> dingtalkoccupationauth__1__0_models.CheckUserTasksStatusResponse:
runtime = util_models.RuntimeOptions()
headers = dingtalkoccupationauth__1__0_models.CheckUserTasksStatusHeaders()
return self.check_user_tasks_status_with_options(request, headers, runtime)
async def check_user_tasks_status_async(
self,
request: dingtalkoccupationauth__1__0_models.CheckUserTasksStatusRequest,
) -> dingtalkoccupationauth__1__0_models.CheckUserTasksStatusResponse:
runtime = util_models.RuntimeOptions()
headers = dingtalkoccupationauth__1__0_models.CheckUserTasksStatusHeaders()
return await self.check_user_tasks_status_with_options_async(request, headers, runtime)
def check_user_tasks_status_with_options(
self,
request: dingtalkoccupationauth__1__0_models.CheckUserTasksStatusRequest,
headers: dingtalkoccupationauth__1__0_models.CheckUserTasksStatusHeaders,
runtime: util_models.RuntimeOptions,
) -> dingtalkoccupationauth__1__0_models.CheckUserTasksStatusResponse:
UtilClient.validate_model(request)
query = {}
if not UtilClient.is_unset(request.province_code):
query['provinceCode'] = request.province_code
real_headers = {}
if not UtilClient.is_unset(headers.common_headers):
real_headers = headers.common_headers
if not UtilClient.is_unset(headers.x_acs_dingtalk_access_token):
real_headers['x-acs-dingtalk-access-token'] = headers.x_acs_dingtalk_access_token
req = open_api_models.OpenApiRequest(
headers=real_headers,
query=OpenApiUtilClient.query(query)
)
return TeaCore.from_map(
dingtalkoccupationauth__1__0_models.CheckUserTasksStatusResponse(),
self.do_roarequest('CheckUserTasksStatus', 'occupationauth_1.0', 'HTTP', 'POST', 'AK', f'/v1.0/occupationauth/userTasks/check', 'json', req, runtime)
)
async def check_user_tasks_status_with_options_async(
self,
request: dingtalkoccupationauth__1__0_models.CheckUserTasksStatusRequest,
headers: dingtalkoccupationauth__1__0_models.CheckUserTasksStatusHeaders,
runtime: util_models.RuntimeOptions,
) -> dingtalkoccupationauth__1__0_models.CheckUserTasksStatusResponse:
UtilClient.validate_model(request)
query = {}
if not UtilClient.is_unset(request.province_code):
query['provinceCode'] = request.province_code
real_headers = {}
if not UtilClient.is_unset(headers.common_headers):
real_headers = headers.common_headers
if not UtilClient.is_unset(headers.x_acs_dingtalk_access_token):
real_headers['x-acs-dingtalk-access-token'] = headers.x_acs_dingtalk_access_token
req = open_api_models.OpenApiRequest(
headers=real_headers,
query=OpenApiUtilClient.query(query)
)
return TeaCore.from_map(
dingtalkoccupationauth__1__0_models.CheckUserTasksStatusResponse(),
await self.do_roarequest_async('CheckUserTasksStatus', 'occupationauth_1.0', 'HTTP', 'POST', 'AK', f'/v1.0/occupationauth/userTasks/check', 'json', req, runtime)
)
| 49.581699 | 173 | 0.730556 | 780 | 7,586 | 6.660256 | 0.124359 | 0.01309 | 0.133975 | 0.167469 | 0.902021 | 0.867372 | 0.8641 | 0.836574 | 0.832339 | 0.825794 | 0 | 0.01259 | 0.193778 | 7,586 | 152 | 174 | 49.907895 | 0.836821 | 0.010546 | 0 | 0.713235 | 1 | 0 | 0.069674 | 0.033636 | 0 | 0 | 0 | 0 | 0 | 1 | 0.036765 | false | 0 | 0.051471 | 0 | 0.154412 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
fb91eb7bdc42a57bba706a510ea83b0583e62880 | 25,115 | py | Python | lib/find.py | keiv-fly/wot_ai | 8f073968ae7c4eb88351ebf99fb2428a9862ab75 | [
"MIT"
] | null | null | null | lib/find.py | keiv-fly/wot_ai | 8f073968ae7c4eb88351ebf99fb2428a9862ab75 | [
"MIT"
] | null | null | null | lib/find.py | keiv-fly/wot_ai | 8f073968ae7c4eb88351ebf99fb2428a9862ab75 | [
"MIT"
] | null | null | null | import numpy as np
import cv2
from itertools import repeat
import numba
from multiprocessing.dummy import Pool as ThreadPool
#np.core.arrayprint._line_width = 180
def remove_close(loc):
    # Drop detections that fall within 6 px (by summed x+y offset) of an
    # already accepted detection, keeping the first point of each cluster.
    loc = np.array(loc)
    loc_filtered = []
    n = loc.shape[0]
    for i in range(n):
        loc0 = loc[0]
        loc_filtered.append(loc0)
        loc = loc[1:]
        loc = loc[np.abs(np.sum(loc - loc0[np.newaxis, :], axis=1)) > 6]
        if len(loc) == 0:
            break
    return loc_filtered
remove_close_jit = numba.jit(remove_close)
def red_diff(img):
r = img[:, :, 2].astype(np.uint16)
g = img[:, :, 1].astype(np.int32)
b = img[:, :, 0].astype(np.int32)
return np.minimum(np.maximum((2 * r - g - b), 0), 255).astype(np.uint8)
filename='scenes/scene01051.png'
filename='scenes/102.png'
filename='scenes/scene03151.png'
filename='scenes/scene08251.png'
def get_num_of_enemy(img0):
threshold = 0.20
#
# img = red_diff(img0)
# _ = cv2.rectangle(img, (0, 720), (368, 1090), (0, 0, 0), -1)
# _ = cv2.rectangle(img, (1000, 0), (1228, 36), (0, 0, 0), -1)
#
# # light red
# template = cv2.imread('signs/sign_light_red_augm.png', 0)
#
# w, h = template.shape[::-1]
# res_match = cv2.matchTemplate(img, template, cv2.TM_SQDIFF_NORMED)
# # min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
# loc = np.where(res_match <= threshold)
#
# if loc[0].shape[0] > 0:
# loc = list(zip(*loc[::-1]))
# loc = np.array(loc)
# loc0 = loc[0]
# mask = np.ones(loc.shape[0],dtype=np.bool)
# for i in range(loc.shape[0]):
# mask = mask & (np.sum(loc - loc[i], axis=1) > 20)
#
# mask[0] = True
# loc = loc[mask]
# else:
# loc = np.array([])
#
# light_red = loc.shape[0]
#
# # med red
# template = cv2.imread('signs/sign_med_red_augm.png', 0)
#
# w, h = template.shape[::-1]
# res_match = cv2.matchTemplate(img, template, cv2.TM_SQDIFF_NORMED)
# # min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
# loc = np.where(res_match <= threshold)
#
# if loc[0].shape[0] > 0:
# loc = list(zip(*loc[::-1]))
# loc = np.array(loc)
# loc0 = loc[0]
# mask = np.ones(loc.shape[0],dtype=np.bool)
# for i in range(loc.shape[0]):
# mask = mask & (np.sum(loc - loc[i], axis=1) > 20)
#
# mask[0] = True
# loc = loc[mask]
# else:
# loc = np.array([])
#
# medium_red = loc.shape[0]
#class1
threshold = 0.10
template1 = cv2.imread('signs/sign_I_2.png', 0)
template2 = cv2.imread('signs/sign_I_3.png', 0)
w, h = template1.shape[::-1]
img_grey = cv2.cvtColor(img0, cv2.COLOR_BGR2GRAY)
_ = cv2.rectangle(img_grey, (0, 720), (368, 1090), (0, 0, 0), -1)
_ = cv2.rectangle(img_grey, (690, 0), (1228, 36), (0, 0, 0), -1)
_ = cv2.rectangle(img_grey, (0, 0), (68, 274), (0, 0, 0), -1)
_ = cv2.rectangle(img_grey, (1854, 0), (1920, 274), (0, 0, 0), -1)
_ = cv2.rectangle(img_grey, (1800, 1000), (1920, 1090), (0, 0, 0), -1)
_ = cv2.rectangle(img_grey, (750, 1048), (1385, 1090), (0, 0, 0), -1)
# manager = multiprocessing.Manager()
# return_dict = manager.dict()
# p1 = Process(target=one_match, args=(template1,img_grey,return_dict,1))
# p2 = Process(target=one_match, args=(template2, img_grey, return_dict, 2))
# p1.start()
# p2.start()
# p1.join()
# p2.join()
# vals = return_dict.values()
# res_match1 = vals[0]
# res_match2 = vals[1]
res_match1 = cv2.matchTemplate(img_grey, template1, cv2.TM_SQDIFF_NORMED)
res_match2 = cv2.matchTemplate(img_grey, template2, cv2.TM_SQDIFF_NORMED)
loc1 = np.where(res_match1 <= threshold)
loc2 = np.where(res_match2 <= threshold)
loc1 = list(zip(*loc1[::-1]))
loc2 = list(zip(*loc2[::-1]))
loc=loc1+loc2
loc = list(set(loc))
if len(loc) > 0:
loc = remove_close_jit(loc)
else:
loc = np.array([])
#loc[0]+[w/2,h/2]
loc_npa=np.array(loc)
# img0[:,:,2][loc_npa[0,0]:(loc_npa[0,0]+w),(loc_npa[0,1]+73):(loc_npa[0,1]+95)]
# yrange1=np.arange(73,73+4)
# yrange2=np.arange(73+10,95)
# xrange=np.arange(0,w)
loc_red = []
loc_white = []
loc_black = []
loc_red_bw = []
loc_red_other = []
if len(loc) > 0:
colors_list = []
i=0
for i in range(len(loc)):
t1 = img0[:, :, 2][(loc_npa[i, 1] + 73):(loc_npa[i, 1] + 91), loc_npa[i, 0]:(loc_npa[i, 0] + w)]
if len(t1)==0:
colors_list.append((1000, 1000, 1000))
continue
r = np.median(t1)
g=np.median(
img0[:,:,1][(loc_npa[i,1]+73):(loc_npa[i,1]+91),loc_npa[i,0]:(loc_npa[i,0]+w)]
)
b=np.median(
img0[:,:,0][(loc_npa[i,1]+73):(loc_npa[i,1]+91),loc_npa[i,0]:(loc_npa[i,0]+w)]
)
# b=np.median(
# np.concatenate([
# img0[:,:,0][loc_npa[i,0]:(loc_npa[i,0]+w),(loc_npa[i,1]+73):(loc_npa[i,1]+73+4)],
# img0[:, :, 0][loc_npa[i, 0]:(loc_npa[i, 0] + w), (loc_npa[i,1]+73+10):(loc_npa[i, 1]+95)]
# ], axis=1)
# )
colors_list.append((r,g,b))
good_colors = np.array(((200, 0, 10),(229,229,226)))
red = (200, 0, 10)
t1=np.array(colors_list) - red
mask1 = np.sum(np.power(t1, 2), axis=1) < 3000
loc_red = loc_npa[mask1]
white = (229,229,226)
t1=np.array(colors_list) - white
mask2 = np.sum(np.power(t1, 2), axis=1) < 3000
loc_white = loc_npa[mask2]
black = (4,2,3)
t1=np.array(colors_list) - black
mask3 = np.sum(np.power(t1, 2), axis=1) < 3000
loc_black = loc_npa[mask3]
other_npa = loc_npa[~mask1&~mask2&~mask3].copy()
b_w = np.concatenate((loc_white,loc_black))
colors_bw_list = []
loc_npa = b_w
i=2
for i in range(len(b_w)):
t1 = img0[:, :, 2][(loc_npa[i, 1] + 46):(loc_npa[i, 1] + 46 + 14),
(loc_npa[i, 0] - 41):(loc_npa[i, 0] - 41 + w)]
if t1.size==0:
colors_bw_list.append((1000, 1000, 1000))
continue
r = np.median(t1)
g=np.median(
img0[:, :, 1][(loc_npa[i, 1] + 46):(loc_npa[i, 1] + 46 + 14),
(loc_npa[i, 0] - 41):(loc_npa[i, 0] - 41 + w)]
)
b=np.median(
img0[:, :, 0][(loc_npa[i, 1] + 46):(loc_npa[i, 1] + 46 + 14),
(loc_npa[i, 0] - 41):(loc_npa[i, 0] - 41 + w)]
)
colors_bw_list.append((r,g,b))
if len(colors_bw_list)>0:
red = (200, 0, 10)
t1 = np.array(colors_bw_list) - red
mask = ~np.isnan(t1).any(axis=1)
loc_npa = loc_npa[mask]
t1=t1[mask]
loc_red_bw = loc_npa[np.sum(np.power(t1,2),axis=1)<3000]
colors_other_list = []
loc_npa = other_npa
i=0
for i in range(len(other_npa)):
t1 = img0[:, :, 2][(loc_npa[i, 1] + 46):(loc_npa[i, 1] + 46 + 14),
(loc_npa[i, 0] - 41):(loc_npa[i, 0] - 41 + w)]
if t1.size==0:
colors_other_list.append((1000, 1000, 1000))
continue
r = np.median(t1)
g=np.median(
img0[:, :, 1][(loc_npa[i, 1] + 46):(loc_npa[i, 1] + 46 + 14),
(loc_npa[i, 0] - 41):(loc_npa[i, 0] - 41 + w)]
)
b=np.median(
img0[:, :, 0][(loc_npa[i, 1] + 46):(loc_npa[i, 1] + 46 + 14),
(loc_npa[i, 0] - 41):(loc_npa[i, 0] - 41 + w)]
)
colors_other_list.append((r,g,b))
if len(colors_other_list)>0:
red = (200, 0, 10)
t1 = np.array(colors_other_list) - red
mask = ~np.isnan(t1).any(axis=1)
loc_npa = loc_npa[mask]
t1=t1[mask]
loc_red_other = loc_npa[np.sum(np.power(t1,2),axis=1)<3000]
return len(loc_red), len(loc_white), len(loc_black), len(loc_red_bw), len(loc_red_other)
def get_num_of_enemy_parallel(img0,pool):
threshold = 0.10
template1 = cv2.imread('signs/sign_I_2.png', 0)
template2 = cv2.imread('signs/sign_I_3.png', 0)
w, h = template1.shape[::-1]
img_grey = cv2.cvtColor(img0, cv2.COLOR_BGR2GRAY)
_ = cv2.rectangle(img_grey, (0, 720), (368, 1090), (0, 0, 0), -1)
_ = cv2.rectangle(img_grey, (690, 0), (1228, 36), (0, 0, 0), -1)
_ = cv2.rectangle(img_grey, (0, 0), (68, 274), (0, 0, 0), -1)
_ = cv2.rectangle(img_grey, (1854, 0), (1920, 274), (0, 0, 0), -1)
_ = cv2.rectangle(img_grey, (1800, 1000), (1920, 1090), (0, 0, 0), -1)
_ = cv2.rectangle(img_grey, (750, 1048), (1385, 1090), (0, 0, 0), -1)
templates = (template1,template2)
res_matchs = pool.starmap(cv2.matchTemplate, zip(repeat(img_grey),templates,repeat(cv2.TM_SQDIFF_NORMED)))
#res_match1 = cv2.matchTemplate(img_grey, template1, cv2.TM_SQDIFF_NORMED)
#res_match2 = cv2.matchTemplate(img_grey, template2, cv2.TM_SQDIFF_NORMED)
locs = (np.where(res_match <= threshold) for res_match in res_matchs)
loc1, loc2 = (list(zip(*x[::-1])) for x in locs)
loc=loc1+loc2
loc = list(set(loc))
if len(loc) > 0:
loc = remove_close_jit(loc)
else:
loc = np.array([])
#loc[0]+[w/2,h/2]
loc_npa=np.array(loc)
loc_red = []
loc_white = []
loc_black = []
loc_red_bw = []
loc_red_other = []
if len(loc) > 0:
colors_list = []
i=0
for i in range(len(loc)):
t1 = img0[:, :, 2][(loc_npa[i, 1] + 73):(loc_npa[i, 1] + 91), loc_npa[i, 0]:(loc_npa[i, 0] + w)]
if len(t1)==0:
colors_list.append((1000, 1000, 1000))
continue
r = np.median(t1)
g=np.median(
img0[:,:,1][(loc_npa[i,1]+73):(loc_npa[i,1]+91),loc_npa[i,0]:(loc_npa[i,0]+w)]
)
b=np.median(
img0[:,:,0][(loc_npa[i,1]+73):(loc_npa[i,1]+91),loc_npa[i,0]:(loc_npa[i,0]+w)]
)
colors_list.append((r,g,b))
good_colors = np.array(((200, 0, 10),(229,229,226)))
red = (200, 0, 10)
t1=np.array(colors_list) - red
mask1 = np.sum(np.power(t1, 2), axis=1) < 3000
loc_red = loc_npa[mask1]
white = (229,229,226)
t1=np.array(colors_list) - white
mask2 = np.sum(np.power(t1, 2), axis=1) < 3000
loc_white = loc_npa[mask2]
black = (4,2,3)
t1=np.array(colors_list) - black
mask3 = np.sum(np.power(t1, 2), axis=1) < 3000
loc_black = loc_npa[mask3]
other_npa = loc_npa[~mask1&~mask2&~mask3].copy()
b_w = np.concatenate((loc_white,loc_black))
colors_bw_list = []
loc_npa = b_w
i=2
for i in range(len(b_w)):
t1 = img0[:, :, 2][(loc_npa[i, 1] + 46):(loc_npa[i, 1] + 46 + 14),
(loc_npa[i, 0] - 41):(loc_npa[i, 0] - 41 + w)]
if t1.size==0:
colors_bw_list.append((1000, 1000, 1000))
continue
r = np.median(t1)
g=np.median(
img0[:, :, 1][(loc_npa[i, 1] + 46):(loc_npa[i, 1] + 46 + 14),
(loc_npa[i, 0] - 41):(loc_npa[i, 0] - 41 + w)]
)
b=np.median(
img0[:, :, 0][(loc_npa[i, 1] + 46):(loc_npa[i, 1] + 46 + 14),
(loc_npa[i, 0] - 41):(loc_npa[i, 0] - 41 + w)]
)
colors_bw_list.append((r,g,b))
if len(colors_bw_list)>0:
red = (200, 0, 10)
t1 = np.array(colors_bw_list) - red
mask = ~np.isnan(t1).any(axis=1)
loc_npa = loc_npa[mask]
t1=t1[mask]
loc_red_bw = loc_npa[np.sum(np.power(t1,2),axis=1)<3000]
colors_other_list = []
loc_npa = other_npa
i=0
for i in range(len(other_npa)):
t1 = img0[:, :, 2][(loc_npa[i, 1] + 46):(loc_npa[i, 1] + 46 + 14),
(loc_npa[i, 0] - 41):(loc_npa[i, 0] - 41 + w)]
if t1.size==0:
colors_other_list.append((1000, 1000, 1000))
continue
r = np.median(t1)
g=np.median(
img0[:, :, 1][(loc_npa[i, 1] + 46):(loc_npa[i, 1] + 46 + 14),
(loc_npa[i, 0] - 41):(loc_npa[i, 0] - 41 + w)]
)
b=np.median(
img0[:, :, 0][(loc_npa[i, 1] + 46):(loc_npa[i, 1] + 46 + 14),
(loc_npa[i, 0] - 41):(loc_npa[i, 0] - 41 + w)]
)
colors_other_list.append((r,g,b))
if len(colors_other_list)>0:
red = (200, 0, 10)
t1 = np.array(colors_other_list) - red
mask = ~np.isnan(t1).any(axis=1)
loc_npa = loc_npa[mask]
t1=t1[mask]
loc_red_other = loc_npa[np.sum(np.power(t1,2),axis=1)<3000]
return len(loc_red), len(loc_white), len(loc_black), len(loc_red_bw), len(loc_red_other)
def get_num_of_enemy_internal_parallel(img0):
threshold = 0.10
template1 = cv2.imread('signs/sign_I_2.png', 0)
template2 = cv2.imread('signs/sign_I_3.png', 0)
w, h = template1.shape[::-1]
img_grey = cv2.cvtColor(img0, cv2.COLOR_BGR2GRAY)
_ = cv2.rectangle(img_grey, (0, 720), (368, 1090), (0, 0, 0), -1)
_ = cv2.rectangle(img_grey, (690, 0), (1228, 36), (0, 0, 0), -1)
_ = cv2.rectangle(img_grey, (0, 0), (68, 274), (0, 0, 0), -1)
_ = cv2.rectangle(img_grey, (1854, 0), (1920, 274), (0, 0, 0), -1)
_ = cv2.rectangle(img_grey, (1800, 1000), (1920, 1090), (0, 0, 0), -1)
_ = cv2.rectangle(img_grey, (750, 1048), (1385, 1090), (0, 0, 0), -1)
templates = (template1,template2)
with ThreadPool(2) as pool:
res_matchs = pool.starmap(cv2.matchTemplate, zip(repeat(img_grey),templates,repeat(cv2.TM_SQDIFF_NORMED)))
locs = (np.where(res_match <= threshold) for res_match in res_matchs)
loc1, loc2 = (list(zip(*x[::-1])) for x in locs)
loc=loc1+loc2
loc = list(set(loc))
if len(loc) > 0:
loc = remove_close_jit(loc)
else:
loc = np.array([])
loc_npa=np.array(loc)
loc_red = []
loc_white = []
loc_black = []
loc_red_bw = []
loc_red_other = []
if len(loc) > 0:
colors_list = []
i=0
for i in range(len(loc)):
t1 = img0[:, :, 2][(loc_npa[i, 1] + 73):(loc_npa[i, 1] + 91), loc_npa[i, 0]:(loc_npa[i, 0] + w)]
if len(t1)==0:
colors_list.append((1000, 1000, 1000))
continue
r = np.median(t1)
g=np.median(
img0[:,:,1][(loc_npa[i,1]+73):(loc_npa[i,1]+91),loc_npa[i,0]:(loc_npa[i,0]+w)]
)
b=np.median(
img0[:,:,0][(loc_npa[i,1]+73):(loc_npa[i,1]+91),loc_npa[i,0]:(loc_npa[i,0]+w)]
)
colors_list.append((r,g,b))
good_colors = np.array(((200, 0, 10),(229,229,226)))
red = (200, 0, 10)
t1=np.array(colors_list) - red
mask1 = np.sum(np.power(t1, 2), axis=1) < 3000
loc_red = loc_npa[mask1]
white = (229,229,226)
t1=np.array(colors_list) - white
mask2 = np.sum(np.power(t1, 2), axis=1) < 3000
loc_white = loc_npa[mask2]
black = (4,2,3)
t1=np.array(colors_list) - black
mask3 = np.sum(np.power(t1, 2), axis=1) < 3000
loc_black = loc_npa[mask3]
other_npa = loc_npa[~mask1&~mask2&~mask3].copy()
b_w = np.concatenate((loc_white,loc_black))
colors_bw_list = []
loc_npa = b_w
i=2
for i in range(len(b_w)):
t1 = img0[:, :, 2][(loc_npa[i, 1] + 46):(loc_npa[i, 1] + 46 + 14),
(loc_npa[i, 0] - 41):(loc_npa[i, 0] - 41 + w)]
if t1.size==0:
colors_bw_list.append((1000, 1000, 1000))
continue
r = np.median(t1)
g=np.median(
img0[:, :, 1][(loc_npa[i, 1] + 46):(loc_npa[i, 1] + 46 + 14),
(loc_npa[i, 0] - 41):(loc_npa[i, 0] - 41 + w)]
)
b=np.median(
img0[:, :, 0][(loc_npa[i, 1] + 46):(loc_npa[i, 1] + 46 + 14),
(loc_npa[i, 0] - 41):(loc_npa[i, 0] - 41 + w)]
)
colors_bw_list.append((r,g,b))
if len(colors_bw_list)>0:
red = (200, 0, 10)
t1 = np.array(colors_bw_list) - red
mask = ~np.isnan(t1).any(axis=1)
loc_npa = loc_npa[mask]
t1=t1[mask]
loc_red_bw = loc_npa[np.sum(np.power(t1,2),axis=1)<3000]
colors_other_list = []
loc_npa = other_npa
i=0
for i in range(len(other_npa)):
t1 = img0[:, :, 2][(loc_npa[i, 1] + 46):(loc_npa[i, 1] + 46 + 14),
(loc_npa[i, 0] - 41):(loc_npa[i, 0] - 41 + w)]
if t1.size==0:
colors_other_list.append((1000, 1000, 1000))
continue
r = np.median(t1)
g=np.median(
img0[:, :, 1][(loc_npa[i, 1] + 46):(loc_npa[i, 1] + 46 + 14),
(loc_npa[i, 0] - 41):(loc_npa[i, 0] - 41 + w)]
)
b=np.median(
img0[:, :, 0][(loc_npa[i, 1] + 46):(loc_npa[i, 1] + 46 + 14),
(loc_npa[i, 0] - 41):(loc_npa[i, 0] - 41 + w)]
)
colors_other_list.append((r,g,b))
if len(colors_other_list)>0:
red = (200, 0, 10)
t1 = np.array(colors_other_list) - red
mask = ~np.isnan(t1).any(axis=1)
loc_npa = loc_npa[mask]
t1=t1[mask]
loc_red_other = loc_npa[np.sum(np.power(t1,2),axis=1)<3000]
return len(loc_red), len(loc_white), len(loc_black), len(loc_red_bw), len(loc_red_other)
def get_num_of_enemy_service(img0):
threshold = 0.10
template1 = cv2.imread('signs/sign_I_2.png', 0)
template2 = cv2.imread('signs/sign_I_3.png', 0)
w, h = template1.shape[::-1]
img_grey = cv2.cvtColor(img0, cv2.COLOR_BGR2GRAY)
_ = cv2.rectangle(img_grey, (0, 720), (368, 1090), (0, 0, 0), -1)
_ = cv2.rectangle(img_grey, (690, 0), (1228, 36), (0, 0, 0), -1)
_ = cv2.rectangle(img_grey, (0, 0), (68, 274), (0, 0, 0), -1)
_ = cv2.rectangle(img_grey, (1854, 0), (1920, 274), (0, 0, 0), -1)
_ = cv2.rectangle(img_grey, (1800, 1000), (1920, 1090), (0, 0, 0), -1)
_ = cv2.rectangle(img_grey, (750, 1048), (1385, 1090), (0, 0, 0), -1)
templates = (template1, template2)
with ThreadPool(2) as pool:
res_matchs = pool.starmap(cv2.matchTemplate, zip(repeat(img_grey), templates, repeat(cv2.TM_SQDIFF_NORMED)))
locs = (np.where(res_match <= threshold) for res_match in res_matchs)
loc1, loc2 = (list(zip(*x[::-1])) for x in locs)
loc = loc1 + loc2
loc = list(set(loc))
if len(loc) > 0:
loc = remove_close_jit(loc)
else:
loc = np.array([])
loc_npa = np.array(loc)
loc_red = []
loc_white = []
loc_black = []
loc_red_bw = []
loc_red_other = []
if len(loc) > 0:
colors_list = []
i = 0
for i in range(len(loc)):
t1 = img0[:, :, 2][(loc_npa[i, 1] + 73):(loc_npa[i, 1] + 91), loc_npa[i, 0]:(loc_npa[i, 0] + w)]
if len(t1) == 0:
colors_list.append((1000, 1000, 1000))
continue
r = np.median(t1)
g = np.median(
img0[:, :, 1][(loc_npa[i, 1] + 73):(loc_npa[i, 1] + 91), loc_npa[i, 0]:(loc_npa[i, 0] + w)]
)
b = np.median(
img0[:, :, 0][(loc_npa[i, 1] + 73):(loc_npa[i, 1] + 91), loc_npa[i, 0]:(loc_npa[i, 0] + w)]
)
colors_list.append((r, g, b))
good_colors = np.array(((200, 0, 10), (229, 229, 226)))
red = (200, 0, 10)
t1 = np.array(colors_list) - red
mask1 = np.sum(np.power(t1, 2), axis=1) < 3000
loc_red = loc_npa[mask1]
white = (229, 229, 226)
t1 = np.array(colors_list) - white
mask2 = np.sum(np.power(t1, 2), axis=1) < 3000
loc_white = loc_npa[mask2]
black = (4, 2, 3)
t1 = np.array(colors_list) - black
mask3 = np.sum(np.power(t1, 2), axis=1) < 3000
loc_black = loc_npa[mask3]
other_npa = loc_npa[~mask1 & ~mask2 & ~mask3].copy()
b_w = np.concatenate((loc_white, loc_black))
colors_bw_list = []
loc_npa = b_w
i = 2
for i in range(len(b_w)):
t1 = img0[:, :, 2][(loc_npa[i, 1] + 46):(loc_npa[i, 1] + 46 + 14),
(loc_npa[i, 0] - 41):(loc_npa[i, 0] - 41 + w)]
if t1.size == 0:
colors_bw_list.append((1000, 1000, 1000))
continue
r = np.median(t1)
g = np.median(
img0[:, :, 1][(loc_npa[i, 1] + 46):(loc_npa[i, 1] + 46 + 14),
(loc_npa[i, 0] - 41):(loc_npa[i, 0] - 41 + w)]
)
b = np.median(
img0[:, :, 0][(loc_npa[i, 1] + 46):(loc_npa[i, 1] + 46 + 14),
(loc_npa[i, 0] - 41):(loc_npa[i, 0] - 41 + w)]
)
colors_bw_list.append((r, g, b))
if len(colors_bw_list) > 0:
red = (200, 0, 10)
t1 = np.array(colors_bw_list) - red
mask = ~np.isnan(t1).any(axis=1)
loc_npa = loc_npa[mask]
t1 = t1[mask]
loc_red_bw = loc_npa[np.sum(np.power(t1, 2), axis=1) < 3000]
colors_other_list = []
loc_npa = other_npa
i = 0
for i in range(len(other_npa)):
t1 = img0[:, :, 2][(loc_npa[i, 1] + 46):(loc_npa[i, 1] + 46 + 14),
(loc_npa[i, 0] - 41):(loc_npa[i, 0] - 41 + w)]
if t1.size == 0:
colors_other_list.append((1000, 1000, 1000))
continue
r = np.median(t1)
g = np.median(
img0[:, :, 1][(loc_npa[i, 1] + 46):(loc_npa[i, 1] + 46 + 14),
(loc_npa[i, 0] - 41):(loc_npa[i, 0] - 41 + w)]
)
b = np.median(
img0[:, :, 0][(loc_npa[i, 1] + 46):(loc_npa[i, 1] + 46 + 14),
(loc_npa[i, 0] - 41):(loc_npa[i, 0] - 41 + w)]
)
colors_other_list.append((r, g, b))
if len(colors_other_list) > 0:
red = (200, 0, 10)
t1 = np.array(colors_other_list) - red
mask = ~np.isnan(t1).any(axis=1)
loc_npa = loc_npa[mask]
t1 = t1[mask]
loc_red_other = loc_npa[np.sum(np.power(t1, 2), axis=1) < 3000]
return len(loc_red), len(loc_white), len(loc_black), len(loc_red_bw), len(loc_red_other)
# for pt in loc:
# _ = cv2.rectangle(img_grey, tuple(pt), (pt[0] + w, pt[1] + h), 0, 2)
# plt.imshow(img_grey,cmap = 'gray')
"""
pic_false
8101 A lamp is over the tank
8401 Timeline. The tank is too big; only a sequence can show that there is a tank
8551 Timeline. The tank is too big; the sign is off the screen
"""
"""
1071 - 1030
431 - 385
y=(loc_npa[i, 1] + 46,loc_npa[i, 1] + 46 + 14)
x=(loc_npa[i, 0] - 41,loc_npa[i, 0] - 41 + w)
p1=(x[0],y[0])
p2=(x[1],y[1])
_ = cv2.rectangle(img_grey, p1, p2, 0, 2)
plt.imshow(img_grey,cmap = 'gray')
""" | 35.17507 | 116 | 0.498547 | 3,865 | 25,115 | 3.069082 | 0.056662 | 0.113303 | 0.09678 | 0.055303 | 0.912747 | 0.905497 | 0.900607 | 0.89968 | 0.895296 | 0.895296 | 0 | 0.116754 | 0.317937 | 25,115 | 714 | 117 | 35.17507 | 0.575715 | 0.142903 | 0 | 0.870833 | 0 | 0 | 0.010531 | 0.003002 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0125 | false | 0 | 0.010417 | 0 | 0.035417 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
fbc95257c670709407b32264d0f4f00fd2662b94 | 93 | py | Python | aox/summary/__init__.py | costas-basdekis/aox | 63a90fb722f29d9b2d26041f9035f99b6b21615e | [
"MIT"
] | 2 | 2021-11-10T22:38:49.000Z | 2021-12-03T08:09:01.000Z | aox/summary/__init__.py | costas-basdekis/aox | 63a90fb722f29d9b2d26041f9035f99b6b21615e | [
"MIT"
] | null | null | null | aox/summary/__init__.py | costas-basdekis/aox | 63a90fb722f29d9b2d26041f9035f99b6b21615e | [
"MIT"
] | null | null | null | from .base_summary import * # noqa: F401, F403
from .summaries import * # noqa: F401, F403
| 31 | 47 | 0.698925 | 13 | 93 | 4.923077 | 0.615385 | 0.3125 | 0.4375 | 0.5625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 0.193548 | 93 | 2 | 48 | 46.5 | 0.693333 | 0.354839 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
fbdec2467d4cf9c936cededebb4f88725a1c67c9 | 1,645 | py | Python | 1st Year - CS/Python - Language/Lab 16 - 4th June/patterns/half-dia-3-done.py | rahularepaka/CSLab | 6d223b8814ad04a5821cbe63bf059f5726ff22ce | [
"MIT"
] | null | null | null | 1st Year - CS/Python - Language/Lab 16 - 4th June/patterns/half-dia-3-done.py | rahularepaka/CSLab | 6d223b8814ad04a5821cbe63bf059f5726ff22ce | [
"MIT"
] | null | null | null | 1st Year - CS/Python - Language/Lab 16 - 4th June/patterns/half-dia-3-done.py | rahularepaka/CSLab | 6d223b8814ad04a5821cbe63bf059f5726ff22ce | [
"MIT"
] | null | null | null | N = int(input())
one = int(1)
zero = int(0)
def diamond(N):
even = 0*one**zero
k1 = 1*one*one**zero
for i in range(N):
for j in range(k1):
if(j*one == k1-1):
if(i % 2 != 0*one**zero):
if(j % 2 == 0*one**zero):
print(even, end="")
else:
print(zero, end="")
else:
print(zero, end="")
else:
if(i % 2 != 0*one**zero):
print(even, end=" ")
else:
print(zero, end=" ")
k1 = (k1 + 2)*one**zero
if(i % 2 == 0*one**zero):
even = (even + 2)*one**zero
else:
even = (even + 0)*one**zero
k1 = k1*one*one**zero
print()
even = (even + 0)*one**zero
for i in range(N):
for j in range(k1-4):
if(j*one == k1-5):
if(i % 2 == 0*one**zero):
if(j % 2 == 0*one**zero):
print(even-2, end="")
else:
print(even-2, end="")
else:
print(zero, end="")
else:
if(i % 2 == 0*one**zero):
print(even-2, end=" ")
else:
print(zero, end=" ")
k1 = (k1 - 2)*one**zero
k1 = k1*one
if(i % 2 != 0*one**zero):
even = (even - 2)*one**zero
else:
even = (even + 0)*one**zero
print()
diamond(N*one*one**zero)
| 24.552239 | 45 | 0.328875 | 193 | 1,645 | 2.803109 | 0.119171 | 0.245841 | 0.177449 | 0.133087 | 0.828096 | 0.759704 | 0.759704 | 0.759704 | 0.750462 | 0.750462 | 0 | 0.057357 | 0.512462 | 1,645 | 66 | 46 | 24.924242 | 0.617207 | 0 | 0 | 0.673077 | 0 | 0 | 0.002432 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.019231 | false | 0 | 0 | 0 | 0.019231 | 0.230769 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
83835c7c093301f011e017774f8422fa7f557ca1 | 3,082 | py | Python | amlc/test/test_covariance_matrix.py | jegpeek/amlc | 41ebdabc524b4117ecd36300964f693a8c012288 | [
"MIT"
] | 1 | 2021-03-29T04:55:35.000Z | 2021-03-29T04:55:35.000Z | amlc/test/test_covariance_matrix.py | jegpeek/amlc | 41ebdabc524b4117ecd36300964f693a8c012288 | [
"MIT"
] | null | null | null | amlc/test/test_covariance_matrix.py | jegpeek/amlc | 41ebdabc524b4117ecd36300964f693a8c012288 | [
"MIT"
] | 1 | 2019-07-11T19:12:38.000Z | 2019-07-11T19:12:38.000Z | import pytest
import numpy as np
from ..covariance_matrix import DiagonalCovarianceMatrix, GeneralCovarianceMatrix
@pytest.fixture
def diagonal_covariance_matrix_example():
size = 5
    variances = np.arange(1, size + 1)
    return DiagonalCovarianceMatrix(variances)


@pytest.fixture
def general_covariance_matrix_example():
    size = 5
    variances = np.arange(1, size + 1)
    covariance_matrix = np.diag(variances)
    return GeneralCovarianceMatrix(covariance_matrix)


@pytest.mark.parametrize('arg', ['diagonal_covariance_matrix_example',
                                 'general_covariance_matrix_example'])
def test_apply_inverse_to_vector(arg, request):
    covariance_matrix_example = request.getfixturevalue(arg)
    size = covariance_matrix_example.shape[0]
    vector = np.arange(1, size + 1)
    correct_answer = 1.
    assert np.allclose(covariance_matrix_example.apply_inverse(vector),
                       correct_answer)


@pytest.mark.parametrize('arg', ['diagonal_covariance_matrix_example',
                                 'general_covariance_matrix_example'])
def test_apply_inverse_to_matrix(arg, request):
    covariance_matrix_example = request.getfixturevalue(arg)
    size = covariance_matrix_example.shape[0]
    matrix = np.ones([size, size + 1]) * np.arange(1, size + 1)[:, None]
    correct_answer = 1.
    assert np.allclose(covariance_matrix_example.apply_inverse(matrix),
                       correct_answer)


@pytest.mark.parametrize('arg', ['diagonal_covariance_matrix_example',
                                 'general_covariance_matrix_example'])
def test_get_inverse(arg, request):
    covariance_matrix_example = request.getfixturevalue(arg)
    size = covariance_matrix_example.shape[0]
    precisions = 1. / np.arange(1, size + 1)
    correct_answer = np.diag(precisions)
    assert np.allclose(covariance_matrix_example.get_inverse(),
                       correct_answer)
    assert np.allclose(covariance_matrix_example._inverse,
                       correct_answer)


@pytest.mark.parametrize('arg', ['diagonal_covariance_matrix_example',
                                 'general_covariance_matrix_example'])
def test_add_inverse(arg, request):
    covariance_matrix_example = request.getfixturevalue(arg)
    size = covariance_matrix_example.shape[0]
    precisions = 1. / np.arange(1, size + 1)
    correct_answer = np.diag(precisions)
    # NOTE: despite its name, this test only exercises get_inverse(); it is
    # currently a duplicate of test_get_inverse and never calls add_inverse.
    assert np.allclose(covariance_matrix_example.get_inverse(),
                       correct_answer)


@pytest.mark.parametrize('arg', ['diagonal_covariance_matrix_example',
                                 'general_covariance_matrix_example'])
def test_get_logdet(arg, request):
    covariance_matrix_example = request.getfixturevalue(arg)
    size = covariance_matrix_example.shape[0]
    variances = np.arange(1, size + 1)
    correct_answer = np.sum(np.log(variances))
    assert np.allclose(covariance_matrix_example.get_logdet(),
                       correct_answer)
    assert np.allclose(covariance_matrix_example._logdet,
                       correct_answer)
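The fixtures above construct `DiagonalCovarianceMatrix` and `GeneralCovarianceMatrix` objects that are defined elsewhere in the package under test. A minimal sketch consistent with these tests is shown below; the class names, the cached `_inverse`/`_logdet` attributes, and the `shape`, `apply_inverse`, `get_inverse`, and `get_logdet` members come from the tests themselves, while everything else is an assumption.

```python
import numpy as np


class DiagonalCovarianceMatrix:
    """Sketch of a diagonal covariance matrix (hypothetical implementation)."""

    def __init__(self, variances):
        self._variances = np.asarray(variances, dtype=float)
        self.shape = (self._variances.size, self._variances.size)
        self._inverse = None   # cached by get_inverse()
        self._logdet = None    # cached by get_logdet()

    def apply_inverse(self, operand):
        # Divide each row by its variance; works for 1-D vectors and for
        # 2-D arrays whose rows align with the variances.
        operand = np.asarray(operand, dtype=float)
        if operand.ndim == 1:
            return operand / self._variances
        return operand / self._variances[:, None]

    def get_inverse(self):
        if self._inverse is None:
            self._inverse = np.diag(1.0 / self._variances)
        return self._inverse

    def get_logdet(self):
        if self._logdet is None:
            self._logdet = float(np.sum(np.log(self._variances)))
        return self._logdet


class GeneralCovarianceMatrix:
    """Sketch of a dense covariance matrix (hypothetical implementation)."""

    def __init__(self, covariance_matrix):
        self._cov = np.asarray(covariance_matrix, dtype=float)
        self.shape = self._cov.shape
        self._inverse = None
        self._logdet = None

    def apply_inverse(self, operand):
        return np.linalg.solve(self._cov, np.asarray(operand, dtype=float))

    def get_inverse(self):
        if self._inverse is None:
            self._inverse = np.linalg.inv(self._cov)
        return self._inverse

    def get_logdet(self):
        if self._logdet is None:
            # slogdet is numerically safer than log(det(...))
            self._logdet = float(np.linalg.slogdet(self._cov)[1])
        return self._logdet
```

With covariances of the form `diag(1, ..., n)` used by the fixtures, both classes map the vector `(1, ..., n)` to a vector of ones, which is exactly what the tests assert.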
# file: origins/migrations/0001_initial.py | repo: paleocore/paleocore110 | license: MIT
# -*- coding: utf-8 -*-
# Generated by Django 1.11.1 on 2017-12-17 03:38
from __future__ import unicode_literals
import django.contrib.gis.db.models.fields
from django.db import migrations, models
import django.db.models.deletion
import django_countries.fields
import uuid
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='Context',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('verbatim_collection_no', models.IntegerField(blank=True, null=True)),
('verbatim_record_type', models.CharField(blank=True, max_length=20, null=True)),
('verbatim_formation', models.CharField(blank=True, max_length=50, null=True)),
('verbatim_member', models.CharField(blank=True, max_length=50, null=True)),
('verbatim_lng', models.DecimalField(blank=True, decimal_places=10, max_digits=20, null=True)),
('verbatim_lat', models.DecimalField(blank=True, decimal_places=10, max_digits=20, null=True)),
('verbatim_collection_name', models.CharField(blank=True, max_length=200, null=True)),
('verbatim_collection_subset', models.CharField(blank=True, max_length=20, null=True)),
('verbatim_collection_aka', models.CharField(blank=True, max_length=200, null=True)),
('verbatim_n_occs', models.IntegerField(blank=True, null=True)),
('verbatim_early_interval', models.CharField(blank=True, max_length=50, null=True)),
('verbatim_late_interval', models.CharField(blank=True, max_length=50, null=True)),
('verbatim_max_ma', models.DecimalField(blank=True, decimal_places=10, max_digits=20, null=True)),
('verbatim_min_ma', models.DecimalField(blank=True, decimal_places=10, max_digits=20, null=True)),
('verbatim_reference_no', models.IntegerField(blank=True, null=True)),
('source', models.CharField(blank=True, max_length=20, null=True)),
('name', models.CharField(blank=True, max_length=200, null=True)),
('geological_formation', models.CharField(blank=True, max_length=50, null=True, verbose_name='Formation')),
('geological_member', models.CharField(blank=True, max_length=50, null=True, verbose_name='Member')),
('geological_bed', models.CharField(blank=True, max_length=50, null=True)),
('older_interval', models.CharField(blank=True, max_length=50, null=True)),
('younger_interval', models.CharField(blank=True, max_length=50, null=True)),
('max_age', models.DecimalField(blank=True, decimal_places=10, max_digits=20, null=True)),
('min_age', models.DecimalField(blank=True, decimal_places=10, max_digits=20, null=True)),
('best_age', models.DecimalField(blank=True, decimal_places=10, max_digits=20, null=True)),
('mio_plio', models.BooleanField(default=False)),
('geom', django.contrib.gis.db.models.fields.PointField(blank=True, null=True, srid=4326)),
],
),
migrations.CreateModel(
name='Fossil',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('verbatim_PlaceName', models.CharField(blank=True, max_length=100, null=True)),
('verbatim_HomininElement', models.CharField(blank=True, max_length=40, null=True)),
('verbatim_HomininElementNotes', models.TextField(blank=True, null=True)),
('verbatim_SkeletalElement', models.CharField(blank=True, max_length=40, null=True)),
('verbatim_SkeletalElementSubUnit', models.CharField(blank=True, max_length=40, null=True)),
('verbatim_SkeletalElementSubUnitDescriptor', models.CharField(blank=True, max_length=100, null=True)),
('verbatim_SkeletalElementSide', models.CharField(blank=True, max_length=40, null=True)),
('verbatim_SkeletalElementPosition', models.CharField(blank=True, max_length=40, null=True)),
('verbatim_SkeletalElementComplete', models.CharField(blank=True, max_length=40, null=True)),
('verbatim_SkeletalElementClass', models.CharField(blank=True, max_length=40, null=True)),
('verbatim_Locality', models.CharField(blank=True, max_length=40, null=True)),
('verbatim_Country', models.CharField(blank=True, max_length=20, null=True)),
('place_name', models.CharField(blank=True, max_length=100, null=True)),
('guid', models.URLField(blank=True, null=True)),
('uuid', models.UUIDField(default=uuid.uuid4)),
('catalog_number', models.CharField(blank=True, max_length=40, null=True)),
('organism_id', models.CharField(blank=True, max_length=40, null=True)),
('nickname', models.CharField(blank=True, max_length=40, null=True)),
('holotype', models.BooleanField(default=False)),
('lifestage', models.CharField(blank=True, max_length=20, null=True)),
('sex', models.CharField(blank=True, max_length=10, null=True)),
('project_name', models.CharField(blank=True, max_length=100, null=True)),
('project_abbreviation', models.CharField(blank=True, max_length=10, null=True)),
('collection_code', models.CharField(blank=True, max_length=10, null=True)),
('origins', models.BooleanField(default=False)),
('created_by', models.CharField(blank=True, max_length=100, null=True)),
('created', models.DateTimeField(auto_now_add=True, verbose_name='Created')),
('modified', models.DateTimeField(auto_now=True, help_text='The date and time this resource was last altered.', verbose_name='Modified')),
('locality', models.CharField(blank=True, max_length=40, null=True)),
('country', django_countries.fields.CountryField(blank=True, max_length=2, null=True, verbose_name='Country')),
('continent', models.CharField(blank=True, max_length=20, null=True)),
('verbatim_provenience', models.TextField(blank=True, null=True)),
('image', models.ImageField(blank=True, max_length=255, null=True, upload_to='uploads/images/origins')),
('context', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='origins.Context')),
],
),
migrations.CreateModel(
name='FossilElement',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('verbatim_PlaceName', models.CharField(blank=True, max_length=100, null=True)),
('verbatim_HomininElement', models.CharField(blank=True, max_length=40, null=True)),
('verbatim_HomininElementNotes', models.TextField(blank=True, null=True)),
('verbatim_SkeletalElement', models.CharField(blank=True, max_length=40, null=True)),
('verbatim_SkeletalElementSubUnit', models.CharField(blank=True, max_length=40, null=True)),
('verbatim_SkeletalElementSubUnitDescriptor', models.CharField(blank=True, max_length=100, null=True)),
('verbatim_SkeletalElementSide', models.CharField(blank=True, max_length=40, null=True)),
('verbatim_SkeletalElementPosition', models.CharField(blank=True, max_length=40, null=True)),
('verbatim_SkeletalElementComplete', models.CharField(blank=True, max_length=40, null=True)),
('verbatim_SkeletalElementClass', models.CharField(blank=True, max_length=40, null=True)),
('verbatim_Locality', models.CharField(blank=True, max_length=40, null=True)),
('verbatim_Country', models.CharField(blank=True, max_length=20, null=True)),
('hominin_element', models.CharField(blank=True, max_length=40, null=True)),
('hominin_element_notes', models.TextField(blank=True, null=True)),
('skeletal_element', models.CharField(blank=True, max_length=40, null=True)),
('skeletal_element_subunit', models.CharField(blank=True, max_length=40, null=True)),
('skeletal_element_subunit_descriptor', models.CharField(blank=True, max_length=100, null=True)),
('skeletal_element_side', models.CharField(blank=True, max_length=40, null=True)),
('skeletal_element_position', models.CharField(blank=True, max_length=40, null=True)),
('skeletal_element_complete', models.CharField(blank=True, max_length=40, null=True)),
('skeletal_element_class', models.CharField(blank=True, max_length=40, null=True)),
('continent', models.CharField(blank=True, max_length=20, null=True)),
('fossil', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='fossil_element', to='origins.Fossil')),
],
),
migrations.CreateModel(
name='IdentificationQualifier',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(blank=True, max_length=15, unique=True)),
('qualified', models.BooleanField()),
],
options={
'verbose_name': 'Identification Qualifier',
},
),
migrations.CreateModel(
name='Photo',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('image', models.ImageField(blank=True, null=True, upload_to='uploads/images/origins', verbose_name='Image')),
('fossil', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, to='origins.Fossil')),
],
options={
'verbose_name': 'Image',
'verbose_name_plural': 'Images',
'managed': True,
},
),
migrations.CreateModel(
name='Reference',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('reference_no', models.IntegerField(blank=True, null=True)),
('record_type', models.CharField(blank=True, max_length=5, null=True)),
('ref_type', models.CharField(blank=True, max_length=201, null=True)),
('author1init', models.CharField(blank=True, max_length=202, null=True)),
('author1last', models.CharField(blank=True, max_length=203, null=True)),
('author2init', models.CharField(blank=True, max_length=204, null=True)),
('author2last', models.CharField(blank=True, max_length=205, null=True)),
('otherauthors', models.TextField(blank=True, null=True)),
('pubyr', models.CharField(blank=True, max_length=207, null=True)),
('reftitle', models.TextField(blank=True, null=True)),
('pubtitle', models.TextField(blank=True, null=True)),
('editors', models.TextField(blank=True, null=True)),
('pubvol', models.CharField(blank=True, max_length=210, null=True)),
('pubno', models.CharField(blank=True, max_length=211, null=True)),
('firstpage', models.CharField(blank=True, max_length=212, null=True)),
('lastpage', models.CharField(blank=True, max_length=213, null=True)),
('publication_type', models.CharField(blank=True, max_length=200, null=True)),
('language', models.CharField(blank=True, max_length=214, null=True)),
('doi', models.CharField(blank=True, max_length=215, null=True)),
('source', models.CharField(blank=True, max_length=216, null=True)),
('reference_pdf', models.FileField(blank=True, max_length=255, null=True, upload_to='uploads/files/origins')),
('fossil', models.ManyToManyField(to='origins.Fossil')),
],
),
migrations.CreateModel(
name='Site',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('verbatim_collection_no', models.IntegerField(blank=True, null=True)),
('verbatim_record_type', models.CharField(blank=True, max_length=20, null=True)),
('verbatim_formation', models.CharField(blank=True, max_length=50, null=True)),
('verbatim_lng', models.DecimalField(blank=True, decimal_places=10, max_digits=20, null=True)),
('verbatim_lat', models.DecimalField(blank=True, decimal_places=10, max_digits=20, null=True)),
('verbatim_collection_name', models.CharField(blank=True, max_length=200, null=True)),
('verbatim_collection_subset', models.CharField(blank=True, max_length=20, null=True)),
('verbatim_collection_aka', models.CharField(blank=True, max_length=200, null=True)),
('verbatim_n_occs', models.IntegerField(blank=True, null=True)),
('verbatim_early_interval', models.CharField(blank=True, max_length=50, null=True)),
('verbatim_late_interval', models.CharField(blank=True, max_length=50, null=True)),
('verbatim_max_ma', models.DecimalField(blank=True, decimal_places=10, max_digits=20, null=True)),
('verbatim_min_ma', models.DecimalField(blank=True, decimal_places=10, max_digits=20, null=True)),
('verbatim_reference_no', models.IntegerField(blank=True, null=True)),
('name', models.CharField(blank=True, max_length=40, null=True)),
('source', models.CharField(blank=True, max_length=20, null=True)),
('mio_plio', models.BooleanField(default=False)),
('geom', django.contrib.gis.db.models.fields.PointField(blank=True, null=True, srid=4326)),
],
),
migrations.CreateModel(
name='Taxon',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=255)),
('parent', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='origins.Taxon')),
],
options={
'verbose_name': 'Taxon',
'verbose_name_plural': 'Taxa',
'ordering': ['rank__ordinal', 'name'],
},
),
migrations.CreateModel(
name='TaxonRank',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=50, unique=True)),
('plural', models.CharField(max_length=50, unique=True)),
('ordinal', models.IntegerField(unique=True)),
],
options={
'verbose_name': 'LGRP Taxon Rank',
},
),
migrations.AddField(
model_name='taxon',
name='rank',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='origins.TaxonRank'),
),
migrations.AddField(
model_name='context',
name='reference',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='origins.Reference'),
),
migrations.AddField(
model_name='context',
name='site',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='origins.Site'),
),
]
# file: app/tests/unit/test_doc_preprocess.py | repo: willsower/latex2speech | license: MIT
import unittest, os
from doc_preprocess import doc_preprocess


class TestDocPreprocess(unittest.TestCase):
    def _docsEqual(self, doc1, doc2):
        return str(doc1) == str(doc2)

    # Remove \left
    def test_remove_left(self):
        file = open("doc_preprocess.tex", "w")
        file.write("\\begin{document}" +
                   r"\left" +
                   r"\end{document}")
        file.close()
        doc_preprocess("doc_preprocess.tex")
        f = open("doc_preprocess.tex", "r")
        output = f.read()
        f.close()
        self.assertTrue(self._docsEqual(output, r"\begin{document}\end{document}"))
        os.remove("doc_preprocess.tex")

        file = open("doc_preprocess.tex", "w")
        file.write("\\begin{document}" +
                   r"\left just testing this \left example" +
                   r"\end{document}")
        file.close()
        doc_preprocess("doc_preprocess.tex")
        f = open("doc_preprocess.tex", "r")
        output = f.read()
        f.close()
        self.assertTrue(self._docsEqual(
            output, r"\begin{document}just testing this example\end{document}"))
        os.remove("doc_preprocess.tex")

    # Remove \right
    def test_remove_right(self):
        file = open("doc_preprocess.tex", "w")
        file.write("\\begin{document}" +
                   r"\right" +
                   r"\end{document}")
        file.close()
        doc_preprocess("doc_preprocess.tex")
        f = open("doc_preprocess.tex", "r")
        output = f.read()
        f.close()
        self.assertTrue(self._docsEqual(output, r"\begin{document}\end{document}"))
        os.remove("doc_preprocess.tex")

        file = open("doc_preprocess.tex", "w")
        file.write("\\begin{document}" +
                   r"\right just testing this \right example" +
                   r"\end{document}")
        file.close()
        doc_preprocess("doc_preprocess.tex")
        f = open("doc_preprocess.tex", "r")
        output = f.read()
        f.close()
        self.assertTrue(self._docsEqual(
            output, r"\begin{document}just testing this example\end{document}"))
        os.remove("doc_preprocess.tex")

    # Remove extra \\
    # NOTE: not prefixed with test_, so unittest does not collect it as-is.
    def remove_double_backslash(self):
        file = open("doc_preprocess.tex", "w")
        file.write("\\begin{document}" +
                   r"\\ test \\" +
                   r"\end{document}")
        file.close()
        doc_preprocess("doc_preprocess.tex")
        f = open("doc_preprocess.tex", "r")  # original used invalid mode " test "
        output = f.read()
        f.close()
        self.assertTrue(self._docsEqual(output, r"\begin{document}\end{document}"))
        os.remove("doc_preprocess.tex")

    # Testing whether \def is replaced with \newcommand
    # NOTE: not prefixed with test_, so unittest does not collect it as-is.
    def replace_def_with_newcommand(self):
        file = open("doc_preprocess.tex", "w")
        file.write("\\begin{document}" +
                   r"\def \NAS {National Academy of Science}" +
                   r"\end{document}")
        file.close()
        doc_preprocess("doc_preprocess.tex")
        f = open("doc_preprocess.tex", "r")  # original used invalid mode " test "
        output = f.read()
        f.close()
        self.assertTrue(self._docsEqual(output, r"\begin{document}\end{document}"))
        os.remove("doc_preprocess.tex")

        file = open("doc_preprocess.tex", "w")
        file.write("\\begin{document}" +
                   r"\def\foo #1(Hello, #1)" +
                   r"\end{document}")
        file.close()
        doc_preprocess("doc_preprocess.tex")
        f = open("doc_preprocess.tex", "r")  # original used invalid mode " test "
        output = f.read()
        f.close()
        self.assertTrue(self._docsEqual(output, r"\begin{document}\end{document}"))
        os.remove("doc_preprocess.tex")
# file: arduino_iot_rest/api/devices_v2_certs_api.py | repo: akash73/iot-client-py | license: Apache-2.0
# coding: utf-8
"""
Arduino IoT Cloud API
Provides a set of endpoints to manage Arduino IoT Cloud **Devices**, **Things**, **Properties** and **Timeseries**. This API can be called just with any HTTP Client, or using one of these clients: * [Javascript NPM package](https://www.npmjs.com/package/@arduino/arduino-iot-client) * [Python PYPI Package](https://pypi.org/project/arduino-iot-client/) * [Golang Module](https://github.com/arduino/iot-client-go) # noqa: E501
The version of the OpenAPI document: 2.0
Generated by: https://openapi-generator.tech
"""
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from arduino_iot_rest.api_client import ApiClient
from arduino_iot_rest.exceptions import ( # noqa: F401
ApiTypeError,
ApiValueError
)
class DevicesV2CertsApi(object):
"""NOTE: This class is auto generated by OpenAPI Generator
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
def devices_v2_certs_create(self, id, create_devices_v2_certs_payload, **kwargs): # noqa: E501
"""create devices_v2_certs # noqa: E501
Creates a new cert associated to a device. The csr is signed and saved in database. The CommonName will be replaced with the device id. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.devices_v2_certs_create(id, create_devices_v2_certs_payload, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: The id of the device (required)
:param CreateDevicesV2CertsPayload create_devices_v2_certs_payload: (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: ArduinoDevicev2Cert
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.devices_v2_certs_create_with_http_info(id, create_devices_v2_certs_payload, **kwargs) # noqa: E501
def devices_v2_certs_create_with_http_info(self, id, create_devices_v2_certs_payload, **kwargs): # noqa: E501
"""create devices_v2_certs # noqa: E501
Creates a new cert associated to a device. The csr is signed and saved in database. The CommonName will be replaced with the device id. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.devices_v2_certs_create_with_http_info(id, create_devices_v2_certs_payload, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: The id of the device (required)
:param CreateDevicesV2CertsPayload create_devices_v2_certs_payload: (required)
:param _return_http_data_only: response data without head status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(ArduinoDevicev2Cert, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
'id',
'create_devices_v2_certs_payload'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method devices_v2_certs_create" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `devices_v2_certs_create`") # noqa: E501
# verify the required parameter 'create_devices_v2_certs_payload' is set
if self.api_client.client_side_validation and ('create_devices_v2_certs_payload' not in local_var_params or # noqa: E501
local_var_params['create_devices_v2_certs_payload'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `create_devices_v2_certs_payload` when calling `devices_v2_certs_create`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'create_devices_v2_certs_payload' in local_var_params:
body_params = local_var_params['create_devices_v2_certs_payload']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['oauth2'] # noqa: E501
return self.api_client.call_api(
'/v2/devices/{id}/certs', 'PUT',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ArduinoDevicev2Cert', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def devices_v2_certs_delete(self, cid, id, **kwargs): # noqa: E501
"""delete devices_v2_certs # noqa: E501
Removes a cert associated to a device # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.devices_v2_certs_delete(cid, id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str cid: The id of the cert (required)
:param str id: The id of the device (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.devices_v2_certs_delete_with_http_info(cid, id, **kwargs) # noqa: E501
def devices_v2_certs_delete_with_http_info(self, cid, id, **kwargs): # noqa: E501
"""delete devices_v2_certs # noqa: E501
Removes a cert associated to a device # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.devices_v2_certs_delete_with_http_info(cid, id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str cid: The id of the cert (required)
:param str id: The id of the device (required)
:param _return_http_data_only: response data without head status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
'cid',
'id'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method devices_v2_certs_delete" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'cid' is set
if self.api_client.client_side_validation and ('cid' not in local_var_params or # noqa: E501
local_var_params['cid'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `cid` when calling `devices_v2_certs_delete`") # noqa: E501
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `devices_v2_certs_delete`") # noqa: E501
collection_formats = {}
path_params = {}
if 'cid' in local_var_params:
path_params['cid'] = local_var_params['cid'] # noqa: E501
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['oauth2'] # noqa: E501
return self.api_client.call_api(
'/v2/devices/{id}/certs/{cid}', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def devices_v2_certs_list(self, id, **kwargs): # noqa: E501
"""list devices_v2_certs # noqa: E501
Returns the list of certs associated to the device # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.devices_v2_certs_list(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: The id of the device (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: list[ArduinoDevicev2Cert]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.devices_v2_certs_list_with_http_info(id, **kwargs) # noqa: E501
def devices_v2_certs_list_with_http_info(self, id, **kwargs): # noqa: E501
"""list devices_v2_certs # noqa: E501
Returns the list of certs associated to the device # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.devices_v2_certs_list_with_http_info(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: The id of the device (required)
:param _return_http_data_only: response data without head status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(list[ArduinoDevicev2Cert], status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
'id'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method devices_v2_certs_list" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `devices_v2_certs_list`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['oauth2'] # noqa: E501
return self.api_client.call_api(
'/v2/devices/{id}/certs', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[ArduinoDevicev2Cert]', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def devices_v2_certs_show(self, cid, id, **kwargs): # noqa: E501
"""show devices_v2_certs # noqa: E501
Returns the cert requested by the user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.devices_v2_certs_show(cid, id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str cid: The id of the cert (required)
:param str id: The id of the device (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: ArduinoDevicev2Cert
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.devices_v2_certs_show_with_http_info(cid, id, **kwargs) # noqa: E501
def devices_v2_certs_show_with_http_info(self, cid, id, **kwargs): # noqa: E501
"""show devices_v2_certs # noqa: E501
Returns the cert requested by the user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.devices_v2_certs_show_with_http_info(cid, id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str cid: The id of the cert (required)
:param str id: The id of the device (required)
:param _return_http_data_only: return response data only, without status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(ArduinoDevicev2Cert, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
'cid',
'id'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method devices_v2_certs_show" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'cid' is set
if self.api_client.client_side_validation and ('cid' not in local_var_params or # noqa: E501
local_var_params['cid'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `cid` when calling `devices_v2_certs_show`") # noqa: E501
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `devices_v2_certs_show`") # noqa: E501
collection_formats = {}
path_params = {}
if 'cid' in local_var_params:
path_params['cid'] = local_var_params['cid'] # noqa: E501
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['oauth2'] # noqa: E501
return self.api_client.call_api(
'/v2/devices/{id}/certs/{cid}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ArduinoDevicev2Cert', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def devices_v2_certs_update(self, cid, id, devicev2_cert, **kwargs): # noqa: E501
"""update devices_v2_certs # noqa: E501
Updates a cert associated with a device. The CSR is signed and saved in the database. The CommonName will be replaced with the device id. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.devices_v2_certs_update(cid, id, devicev2_cert, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str cid: The id of the cert (required)
:param str id: The id of the device (required)
:param Devicev2Cert devicev2_cert: (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: ArduinoDevicev2Cert
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.devices_v2_certs_update_with_http_info(cid, id, devicev2_cert, **kwargs) # noqa: E501
def devices_v2_certs_update_with_http_info(self, cid, id, devicev2_cert, **kwargs): # noqa: E501
"""update devices_v2_certs # noqa: E501
Updates a cert associated with a device. The CSR is signed and saved in the database. The CommonName will be replaced with the device id. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.devices_v2_certs_update_with_http_info(cid, id, devicev2_cert, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str cid: The id of the cert (required)
:param str id: The id of the device (required)
:param Devicev2Cert devicev2_cert: (required)
:param _return_http_data_only: return response data only, without status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(ArduinoDevicev2Cert, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
'cid',
'id',
'devicev2_cert'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method devices_v2_certs_update" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'cid' is set
if self.api_client.client_side_validation and ('cid' not in local_var_params or # noqa: E501
local_var_params['cid'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `cid` when calling `devices_v2_certs_update`") # noqa: E501
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `devices_v2_certs_update`") # noqa: E501
# verify the required parameter 'devicev2_cert' is set
if self.api_client.client_side_validation and ('devicev2_cert' not in local_var_params or # noqa: E501
local_var_params['devicev2_cert'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `devicev2_cert` when calling `devices_v2_certs_update`") # noqa: E501
collection_formats = {}
path_params = {}
if 'cid' in local_var_params:
path_params['cid'] = local_var_params['cid'] # noqa: E501
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'devicev2_cert' in local_var_params:
body_params = local_var_params['devicev2_cert']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['oauth2'] # noqa: E501
return self.api_client.call_api(
'/v2/devices/{id}/certs/{cid}', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ArduinoDevicev2Cert', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
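Each `*_with_http_info` method above validates its keyword arguments against a whitelist assembled from the required and optional parameter names, raising on anything unexpected. A standalone sketch of that idiom (the helper name and call are illustrative, not part of the generated client):

```python
def validate_kwargs(method_name, required, optional, kwargs):
    """Reject unexpected keyword arguments, mirroring the generated client's check."""
    allowed = set(required) | set(optional)
    for key in kwargs:
        if key not in allowed:
            raise TypeError(
                "Got an unexpected keyword argument '%s' to method %s"
                % (key, method_name)
            )
    # Required params default to None so downstream validation can flag them.
    return {**{k: None for k in required}, **kwargs}

params = validate_kwargs(
    "devices_v2_certs_list",
    required=["id"],
    optional=["async_req", "_return_http_data_only",
              "_preload_content", "_request_timeout"],
    kwargs={"id": "device-1", "async_req": True},
)
```

The generated code achieves the same effect with `locals()` plus a loop over `kwargs`; the helper just isolates that check.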
# --- tssearch/utils/__init__.py (repo: mluacnunes/tssearch, BSD-3-Clause) ---
from tssearch.utils.preprocessing import *
from tssearch.utils.visualisation import *
from tssearch.utils.distances_settings import *
# --- wxwork_hr_syncing/wizard/__init__.py (repo: rainbow-studio-solution/wxwork, MulanPSL-1.0) ---
# -*- coding: utf-8 -*-
from . import wizard_wxwork_contacts_sync
from . import wizard_wxwork_sync_tag
from . import wizard_wxwork_sync_user
# --- kapi/api_resources/__init__.py (repo: mattsta/kapi-python, MIT) ---
from kapi.api_resources.resume import Resume
from kapi.api_resources.availability import Availability
# --- eeauditor/auditors/aws/AWS_DataSync_Auditor.py (repo: kbhagi/ElectricEye, Apache-2.0) ---
#This file is part of ElectricEye.
#SPDX-License-Identifier: Apache-2.0
#Licensed to the Apache Software Foundation (ASF) under one
#or more contributor license agreements. See the NOTICE file
#distributed with this work for additional information
#regarding copyright ownership. The ASF licenses this file
#to you under the Apache License, Version 2.0 (the
#"License"); you may not use this file except in compliance
#with the License. You may obtain a copy of the License at
#http://www.apache.org/licenses/LICENSE-2.0
#Unless required by applicable law or agreed to in writing,
#software distributed under the License is distributed on an
#"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
#KIND, either express or implied. See the License for the
#specific language governing permissions and limitations
#under the License.
import boto3
import datetime
from check_register import CheckRegister
from dateutil.parser import parse
registry = CheckRegister()
datasync = boto3.client("datasync")
@registry.register_check("datasync")
def datasync_public_agent_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str) -> dict:
"""[DataSync.1] AWS DataSync Agents should not be accessible over the Internet"""
paginator = datasync.get_paginator("list_agents")
# ISO Time
iso8601Time = (datetime.datetime.utcnow().replace(tzinfo=datetime.timezone.utc).isoformat())
try:
iterator = paginator.paginate()
for page in iterator:
for a in page["Agents"]:
agentArn = str(a["AgentArn"])
agentName = str(a["Name"])
response = datasync.describe_agent(AgentArn=agentArn)
if str(response["EndpointType"]) == "PUBLIC":
try:
# create Sec Hub finding
finding = {
"SchemaVersion": "2018-10-08",
"Id": agentArn + "/public-agent-check",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": agentArn,
"AwsAccountId": awsAccountId,
"Types": [
"Software and Configuration Checks/AWS Security Best Practices",
"Effects/Data Exposure"
],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "HIGH"},
"Confidence": 99,
"Title": "[DataSync.1] AWS DataSync Agents should not be accessible over the Internet",
"Description": "DataSync Agent "
+ agentName
+ " is not configured to use a PrivateLink Endpoint. If you use a VPC endpoint, all communication from DataSync to AWS services occurs through the VPC endpoint in your VPC in AWS. This approach provides a private connection between your self-managed data center, your VPC, and AWS services. It increases the security of your data as it is copied over the network. Refer to the remediation instructions if this configuration is not intended.",
"Remediation": {
"Recommendation": {
"Text": "You CANNOT change and Endpoint Type after creation and will need to create a new Agent. To learn more about making an Agent private refer to the Choose a service endpoint section of the AWS DataSync User Guide",
"Url": "https://docs.aws.amazon.com/datasync/latest/userguide/choose-service-endpoint.html#choose-service-endpoint-vpc"
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsDataSyncAgent",
"Id": agentArn,
"Partition": awsPartition,
"Region": awsRegion,
"Details": {
"Other": {
"Name": agentName
}
},
}
],
"Compliance": {
"Status": "FAILED",
"RelatedRequirements": [
"NIST CSF PR.AC-3",
"NIST SP 800-53 AC-1",
"NIST SP 800-53 AC-17",
"NIST SP 800-53 AC-19",
"NIST SP 800-53 AC-20",
"NIST SP 800-53 SC-15",
"AICPA TSC CC6.6",
"ISO 27001:2013 A.6.2.1",
"ISO 27001:2013 A.6.2.2",
"ISO 27001:2013 A.11.2.6",
"ISO 27001:2013 A.13.1.1",
"ISO 27001:2013 A.13.2.1",
]
},
"Workflow": {"Status": "NEW"},
"RecordState": "ACTIVE"
}
yield finding
except Exception as e:
print(e)
continue
else:
try:
# create Sec Hub finding
finding = {
"SchemaVersion": "2018-10-08",
"Id": agentArn + "/public-agent-check",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": agentArn,
"AwsAccountId": awsAccountId,
"Types": [
"Software and Configuration Checks/AWS Security Best Practices",
"Effects/Data Exposure"
],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "INFORMATIONAL"},
"Confidence": 99,
"Title": "[DataSync.1] AWS DataSync Agents should not be accessible over the Internet",
"Description": "DataSync Agent "
+ agentName
+ " is configured to use a PrivateLink Endpoint.",
"Remediation": {
"Recommendation": {
"Text": "You CANNOT change and Endpoint Type after creation and will need to create a new Agent. To learn more about making an Agent private refer to the Choose a service endpoint section of the AWS DataSync User Guide",
"Url": "https://docs.aws.amazon.com/datasync/latest/userguide/choose-service-endpoint.html#choose-service-endpoint-vpc"
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsDataSyncAgent",
"Id": agentArn,
"Partition": awsPartition,
"Region": awsRegion,
"Details": {
"Other": {
"Name": agentName
}
},
}
],
"Compliance": {
"Status": "PASSED",
"RelatedRequirements": [
"NIST CSF PR.AC-3",
"NIST SP 800-53 AC-1",
"NIST SP 800-53 AC-17",
"NIST SP 800-53 AC-19",
"NIST SP 800-53 AC-20",
"NIST SP 800-53 SC-15",
"AICPA TSC CC6.6",
"ISO 27001:2013 A.6.2.1",
"ISO 27001:2013 A.6.2.2",
"ISO 27001:2013 A.11.2.6",
"ISO 27001:2013 A.13.1.1",
"ISO 27001:2013 A.13.2.1",
]
},
"Workflow": {"Status": "RESOLVED"},
"RecordState": "ARCHIVED"
}
yield finding
except Exception as e:
print(e)
continue
else:
continue
except Exception as e:
print(e)
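The PASSED and FAILED branches in the check above differ only in severity label, compliance status, workflow status, and record state; everything else in the Security Hub finding is identical. A hypothetical helper (not part of ElectricEye) that captures the flip in one place:

```python
import datetime

def build_finding_skeleton(resource_arn, check_id, passed, title):
    """Return the ASFF finding fields that differ between pass and fail."""
    now = datetime.datetime.now(datetime.timezone.utc).isoformat()
    return {
        "SchemaVersion": "2018-10-08",
        "Id": f"{resource_arn}/{check_id}",
        "Title": title,
        "FirstObservedAt": now,
        "CreatedAt": now,
        "UpdatedAt": now,
        "Severity": {"Label": "INFORMATIONAL" if passed else "HIGH"},
        "Compliance": {"Status": "PASSED" if passed else "FAILED"},
        "Workflow": {"Status": "RESOLVED" if passed else "NEW"},
        "RecordState": "ARCHIVED" if passed else "ACTIVE",
    }

fail = build_finding_skeleton("arn:aws:datasync:us-east-1:111111111111:agent/x",
                              "public-agent-check", False, "t")
ok = build_finding_skeleton("arn:aws:datasync:us-east-1:111111111111:agent/x",
                            "public-agent-check", True, "t")
```

The caller would still merge in the check-specific `Description`, `Resources`, and `RelatedRequirements` blocks; the account/partition ARN above is a placeholder.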
@registry.register_check("datasync")
def datasync_task_logging_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str) -> dict:
"""[DataSync.2] AWS DataSync data transfer Tasks should have logging enabled"""
paginator = datasync.get_paginator("list_tasks")
# ISO Time
iso8601Time = (datetime.datetime.utcnow().replace(tzinfo=datetime.timezone.utc).isoformat())
try:
iterator = paginator.paginate()
for page in iterator:
for t in page["Tasks"]:
taskArn = str(t["TaskArn"])
taskName = str(t["Name"])
response = datasync.describe_task(TaskArn=taskArn)
                # logging is configured when DescribeTask returns a CloudWatchLogGroupArn;
                # checking EndpointType here was a copy-paste error from the agent check
                if not response.get("CloudWatchLogGroupArn"):
try:
# create Sec Hub finding
finding = {
"SchemaVersion": "2018-10-08",
"Id": taskArn + "/task-logging-check",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": taskArn,
"AwsAccountId": awsAccountId,
"Types": [
"Software and Configuration Checks/AWS Security Best Practices",
"Effects/Data Exposure"
],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "LOW"},
"Confidence": 99,
"Title": "[DataSync.2] AWS DataSync data transfer Tasks should have logging enabled",
"Description": "DataSync Task "
+ taskName
+ " does not have logging enabled. Refer to the remediation instructions if this configuration is not intended.",
"Remediation": {
"Recommendation": {
"Text": "To learn more about monitoring DataSync Tasks refer to the Monitoring your task section of the AWS DataSync User Guide",
"Url": "https://docs.aws.amazon.com/datasync/latest/userguide/monitor-datasync.html"
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsDataSyncTask",
"Id": taskArn,
"Partition": awsPartition,
"Region": awsRegion,
"Details": {
"Other": {
"Name": taskName
}
},
}
],
"Compliance": {
"Status": "FAILED",
"RelatedRequirements": [
"NIST CSF DE.AE-3",
"NIST SP 800-53 AU-6",
"NIST SP 800-53 CA-7",
"NIST SP 800-53 IR-4",
"NIST SP 800-53 IR-5",
"NIST SP 800-53 IR-8",
"NIST SP 800-53 SI-4",
"AICPA TSC CC7.2",
"ISO 27001:2013 A.12.4.1",
"ISO 27001:2013 A.16.1.7",
]
},
"Workflow": {"Status": "NEW"},
"RecordState": "ACTIVE"
}
yield finding
except Exception as e:
print(e)
continue
else:
try:
# create Sec Hub finding
finding = {
"SchemaVersion": "2018-10-08",
"Id": taskArn + "/task-logging-check",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": taskArn,
"AwsAccountId": awsAccountId,
"Types": [
"Software and Configuration Checks/AWS Security Best Practices",
"Effects/Data Exposure"
],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "INFORMATIONAL"},
"Confidence": 99,
"Title": "[DataSync.2] AWS DataSync data transfer Tasks should have logging enabled",
"Description": "DataSync Task "
+ taskName
+ " has logging enabled.",
"Remediation": {
"Recommendation": {
"Text": "To learn more about monitoring DataSync Tasks refer to the Monitoring your task section of the AWS DataSync User Guide",
"Url": "https://docs.aws.amazon.com/datasync/latest/userguide/monitor-datasync.html"
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsDataSyncTask",
"Id": taskArn,
"Partition": awsPartition,
"Region": awsRegion,
"Details": {
"Other": {
"Name": taskName
}
},
}
],
"Compliance": {
"Status": "PASSED",
"RelatedRequirements": [
"NIST CSF DE.AE-3",
"NIST SP 800-53 AU-6",
"NIST SP 800-53 CA-7",
"NIST SP 800-53 IR-4",
"NIST SP 800-53 IR-5",
"NIST SP 800-53 IR-8",
"NIST SP 800-53 SI-4",
"AICPA TSC CC7.2",
"ISO 27001:2013 A.12.4.1",
"ISO 27001:2013 A.16.1.7",
]
},
"Workflow": {"Status": "RESOLVED"},
"RecordState": "ARCHIVED"
}
yield finding
except Exception as e:
print(e)
continue
else:
continue
except Exception as e:
        print(e)
# --- openopenmmreporters/demo/__init__.py (repo: dprada/OpenOpenMMReporters, MIT) ---
from .two_LJ_particles import two_LJ_particles
# --- test/cp_request/design/test_block.py (repo: aquariumbio/experiment-request, MIT) ---
import pytest
import json
from cp_request import (
Unit,
Value, NamedEntity, Attribute, Treatment
)
from cp_request.design import (
GenerateBlock, ProductBlock, ReplicateBlock, SumBlock,
BlockReference, SubjectReference, TreatmentReference,
BlockDefinitionEncoder, BlockDefinitionDecoder,
DesignBlock, DesignBlockEncoder, DesignBlockDecoder
)
@pytest.fixture
def iptg():
micromolar_unit = Unit(
reference='http://purl.obolibrary.org/obo/UO_0000064')
return Treatment.create_from(
entity=NamedEntity(
name='IPTG',
reference='https://hub.sd2e.org/user/sd2e/design/IPTG/1',
attributes=[
Attribute.create_from(
name='concentration', unit=micromolar_unit)
])
)
@pytest.fixture()
def kan():
microgram_per_milliliter_unit = Unit(
reference='http://purl.obolibrary.org/obo/UO_0000274')
return Treatment.create_from(
entity=NamedEntity(
name='Kan',
reference='https://hub.sd2e.org/user/sd2e/design/Kan/1',
attributes=[
Attribute.create_from(
name='concentration', unit=microgram_per_milliliter_unit)
])
)
@pytest.fixture
def temperature():
temperature_unit = Unit(
reference='http://purl.obolibrary.org/obo/UO_0000027')
return Treatment.create_from(
attribute=Attribute.create_from(
name='temperature',
value=Value(
value=37.0,
unit=temperature_unit
)))
@pytest.fixture
def timepoint():
hour_unit = Unit(reference='http://purl.obolibrary.org/obo/UO_0000032')
return Treatment.create_from(
attribute=Attribute.create_from(
name='timepoint',
unit=hour_unit)
)
@pytest.fixture
def nand_circuit():
return NamedEntity(
name="MG1655_NAND_Circuit",
reference="https://hub.sd2e.org/user/sd2e/design/MG1655_NAND_Circuit/1"
)
@pytest.fixture
def empty_landing_pads():
return NamedEntity(
name="MG1655_empty_landing_pads",
reference="https://hub.sd2e.org/user/sd2e/design/MG1655_empty_landing_pads/1"
)
@pytest.fixture
def strain_block(nand_circuit, empty_landing_pads, kan):
return DesignBlock(
label='strains',
definition=SumBlock(block_list=[
ProductBlock(block_list=[
SubjectReference(entity=nand_circuit),
TreatmentReference(treatment=kan)
]),
SubjectReference(entity=empty_landing_pads)
])
)
@pytest.fixture
def condition_block(iptg):
micromolar_unit = Unit(
reference='http://purl.obolibrary.org/obo/UO_0000064')
l_arabinose = Treatment.create_from(
entity=NamedEntity(
name='L-arabinose',
reference='https://hub.sd2e.org/user/sd2e/design/Larabinose/1',
attributes=[
Attribute.create_from(
name='concentration', unit=micromolar_unit)
])
)
return DesignBlock(
label='conditions',
definition=ProductBlock(block_list=[
GenerateBlock(
treatment=iptg,
attribute_name='concentration',
values=[
Value(
value=0,
unit=micromolar_unit),
Value(
value=0.25,
unit=micromolar_unit),
Value(
value=2.5,
unit=micromolar_unit),
Value(
value=25,
unit=micromolar_unit),
Value(
value=250,
unit=micromolar_unit)
]),
GenerateBlock(
treatment=l_arabinose,
attribute_name='concentration',
values=[
Value(
value=0,
unit=micromolar_unit),
Value(
value=5,
unit=micromolar_unit),
Value(
value=50,
unit=micromolar_unit),
Value(
value=500,
unit=micromolar_unit),
Value(
value=5000,
unit=micromolar_unit),
Value(
value=25000,
unit=micromolar_unit)
]),
])
)
@pytest.fixture
def dummy_definition_decoder(iptg, kan, temperature, timepoint, condition_block, strain_block):
class DummyDefinitionDecoder(json.JSONDecoder):
def __init__(self):
self.__symbol_table = dict()
self.__add_symbol(iptg)
self.__add_symbol(kan)
self.__add_symbol(temperature)
self.__add_symbol(timepoint)
self.__add_symbol(condition_block)
self.__add_symbol(strain_block)
super().__init__(object_hook=self.convert)
def convert(self, d):
return BlockDefinitionDecoder(self.__symbol_table).object_hook(d)
def __add_symbol(self, obj):
if isinstance(obj, DesignBlock):
self.__symbol_table[obj.label] = obj
else:
self.__symbol_table[obj.name] = obj
return DummyDefinitionDecoder
@pytest.fixture
def dummy_design_decoder(iptg, kan, temperature, timepoint):
class DummyDesignDecoder(json.JSONDecoder):
def __init__(self):
self.__symbol_table = dict()
self.__add_symbol(iptg)
self.__add_symbol(kan)
self.__add_symbol(temperature)
self.__add_symbol(timepoint)
super().__init__(object_hook=self.convert)
def convert(self, d):
return DesignBlockDecoder(self.__symbol_table).object_hook(d)
def __add_symbol(self, obj):
self.__symbol_table[obj.name] = obj
return DummyDesignDecoder
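Both dummy decoders above resolve previously defined objects by name through a symbol table handed to the real decoder's `object_hook`. The pattern in isolation, with a made-up `$ref` convention standing in for cp_request's reference encoding:

```python
import json

def make_resolver(symbols):
    """Build an object_hook that swaps {"$ref": name} dicts for registered objects."""
    def hook(d):
        if set(d) == {"$ref"}:
            return symbols[d["$ref"]]
        return d  # ordinary dicts pass through untouched
    return hook

symbols = {"iptg": {"name": "IPTG", "kind": "treatment"}}
doc = json.loads('{"treatment": {"$ref": "iptg"}}',
                 object_hook=make_resolver(symbols))
```

`json.loads` calls the hook bottom-up on every decoded object, so the inner reference is resolved before the enclosing dict is seen.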
class TestDefinitionBlock:
    def test_generate_block(self, iptg):
        b1 = GenerateBlock(
            treatment=iptg,
            attribute_name='concentration',
            values=[
                Value(value=0, unit=Unit(
                    reference='http://purl.obolibrary.org/obo/UO_0000064')),
                Value(value=0.25, unit=Unit(
                    reference='http://purl.obolibrary.org/obo/UO_0000064')),
                Value(value=2.5, unit=Unit(
                    reference='http://purl.obolibrary.org/obo/UO_0000064')),
                Value(value=25, unit=Unit(
                    reference='http://purl.obolibrary.org/obo/UO_0000064')),
                Value(value=250, unit=Unit(
                    reference='http://purl.obolibrary.org/obo/UO_0000064'))
            ])
        b2 = GenerateBlock(
            treatment=iptg,
            attribute_name='concentration',
            values=[
                Value(value=0, unit=Unit(
                    reference='http://purl.obolibrary.org/obo/UO_0000064')),
                Value(value=0.25, unit=Unit(
                    reference='http://purl.obolibrary.org/obo/UO_0000064')),
                Value(value=2.5, unit=Unit(
                    reference='http://purl.obolibrary.org/obo/UO_0000064')),
                Value(value=25, unit=Unit(
                    reference='http://purl.obolibrary.org/obo/UO_0000064')),
                Value(value=250, unit=Unit(
                    reference='http://purl.obolibrary.org/obo/UO_0000064'))
            ])
        assert b1 == b1
        assert b1 == b2
        assert b1 != {}
        assert repr(
b1) == "GenerateBlock(treatment=EntityTreatment(entity=NamedEntity(name='IPTG', reference='https://hub.sd2e.org/user/sd2e/design/IPTG/1', attributes=[UnboundAttribute(name='concentration', unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000064'))])), attribute_name='concentration', values=[Value(value=0, unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000064')), Value(value=0.25, unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000064')), Value(value=2.5, unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000064')), Value(value=25, unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000064')), Value(value=250, unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000064'))])"
    def test_generate_block_serialization(self, iptg, dummy_definition_decoder):
        b1 = GenerateBlock(
            treatment=iptg,
            attribute_name='concentration',
            values=[
                Value(value=0, unit=Unit(
                    reference='http://purl.obolibrary.org/obo/UO_0000064')),
                Value(value=0.25, unit=Unit(
                    reference='http://purl.obolibrary.org/obo/UO_0000064')),
                Value(value=2.5, unit=Unit(
                    reference='http://purl.obolibrary.org/obo/UO_0000064')),
                Value(value=25, unit=Unit(
                    reference='http://purl.obolibrary.org/obo/UO_0000064')),
                Value(value=250, unit=Unit(
                    reference='http://purl.obolibrary.org/obo/UO_0000064'))
            ])
        b_json = json.dumps(b1, cls=BlockDefinitionEncoder)
        b2 = json.loads(b_json, cls=dummy_definition_decoder)
        assert b1 == b2
    def test_product_block(self, temperature, timepoint):
        b1 = ProductBlock(block_list=[
            TreatmentReference(treatment=temperature),
            TreatmentReference(treatment=timepoint)
        ])
        b2 = ProductBlock(block_list=[
            TreatmentReference(treatment=temperature),
            TreatmentReference(treatment=timepoint)
        ])
        assert b1 == b1
        assert b1 == b2
        assert b1 != {}
        assert repr(
b1) == "ProductBlock(block_list=[TreatmentReference(treatment=AttributeTreatment(attribute=BoundAttribute(name='temperature', value=Value(value=37.0, unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000027'))))), TreatmentReference(treatment=AttributeTreatment(attribute=UnboundAttribute(name='timepoint', unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000032'))))])"
    def test_product_block_serialization(self, temperature, timepoint, dummy_definition_decoder):
        b1 = ProductBlock(block_list=[
            TreatmentReference(treatment=temperature),
            TreatmentReference(treatment=timepoint)
        ])
        b_json = json.dumps(b1, cls=BlockDefinitionEncoder)
        assert b_json == '{"block_type": "product_block", "block_list": [{"block_type": "treatment_reference", "reference": "temperature"}, {"block_type": "treatment_reference", "reference": "timepoint"}]}'
        b2 = json.loads(b_json, cls=dummy_definition_decoder)
        assert b1 == b2
    def test_replicate_block(self, condition_block):
        b1 = ReplicateBlock(count=4,
                            block=BlockReference(block=condition_block))
        b2 = ReplicateBlock(count=4,
                            block=BlockReference(block=condition_block))
        assert b1 == b1
        assert b1 == b2
        assert b1 != {}
        assert repr(
b1) == "ReplicateBlock(count=4, block=BlockReference(block=DesignBlock(label='conditions', definition=ProductBlock(block_list=[GenerateBlock(treatment=EntityTreatment(entity=NamedEntity(name='IPTG', reference='https://hub.sd2e.org/user/sd2e/design/IPTG/1', attributes=[UnboundAttribute(name='concentration', unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000064'))])), attribute_name='concentration', values=[Value(value=0, unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000064')), Value(value=0.25, unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000064')), Value(value=2.5, unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000064')), Value(value=25, unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000064')), Value(value=250, unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000064'))]), GenerateBlock(treatment=EntityTreatment(entity=NamedEntity(name='L-arabinose', reference='https://hub.sd2e.org/user/sd2e/design/Larabinose/1', attributes=[UnboundAttribute(name='concentration', unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000064'))])), attribute_name='concentration', values=[Value(value=0, unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000064')), Value(value=5, unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000064')), Value(value=50, unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000064')), Value(value=500, unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000064')), Value(value=5000, unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000064')), Value(value=25000, unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000064'))])]))))"
    def test_replicate_block_serialization(self, condition_block, dummy_definition_decoder):
        b1 = ReplicateBlock(count=4,
                            block=BlockReference(block=condition_block))
        b_json = json.dumps(b1, cls=BlockDefinitionEncoder)
        assert b_json == '{"block_type": "replicate_block", "count": 4, "block": {"block_type": "block_reference", "reference": "conditions"}}'
        b2 = json.loads(b_json, cls=dummy_definition_decoder)
        assert b1 == b2
    def test_sum_block(self, condition_block, strain_block):
        b1 = SumBlock(block_list=[
            BlockReference(block=strain_block),
            BlockReference(block=condition_block)
        ])
        b2 = SumBlock(block_list=[
            BlockReference(block=strain_block),
            BlockReference(block=condition_block)
        ])
        assert b1 == b1
        assert b1 == b2
        assert b1 != {}
        assert repr(
b1) == "SumBlock(block_list=[BlockReference(block=DesignBlock(label='strains', definition=SumBlock(block_list=[ProductBlock(block_list=[SubjectReference(entity=NamedEntity(name='MG1655_NAND_Circuit', reference='https://hub.sd2e.org/user/sd2e/design/MG1655_NAND_Circuit/1')), TreatmentReference(treatment=EntityTreatment(entity=NamedEntity(name='Kan', reference='https://hub.sd2e.org/user/sd2e/design/Kan/1', attributes=[UnboundAttribute(name='concentration', unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000274'))])))]), SubjectReference(entity=NamedEntity(name='MG1655_empty_landing_pads', reference='https://hub.sd2e.org/user/sd2e/design/MG1655_empty_landing_pads/1'))]))), BlockReference(block=DesignBlock(label='conditions', definition=ProductBlock(block_list=[GenerateBlock(treatment=EntityTreatment(entity=NamedEntity(name='IPTG', reference='https://hub.sd2e.org/user/sd2e/design/IPTG/1', attributes=[UnboundAttribute(name='concentration', unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000064'))])), attribute_name='concentration', values=[Value(value=0, unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000064')), Value(value=0.25, unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000064')), Value(value=2.5, unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000064')), Value(value=25, unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000064')), Value(value=250, unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000064'))]), GenerateBlock(treatment=EntityTreatment(entity=NamedEntity(name='L-arabinose', reference='https://hub.sd2e.org/user/sd2e/design/Larabinose/1', attributes=[UnboundAttribute(name='concentration', unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000064'))])), attribute_name='concentration', values=[Value(value=0, unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000064')), Value(value=5, unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000064')), Value(value=50, unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000064')), Value(value=500, unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000064')), Value(value=5000, unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000064')), Value(value=25000, unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000064'))])])))])"
    def test_sum_block_serialization(self, condition_block, strain_block, dummy_definition_decoder):
        b1 = SumBlock(block_list=[
            BlockReference(block=strain_block),
            BlockReference(block=condition_block)
        ])
        b_json = json.dumps(b1, cls=BlockDefinitionEncoder)
        assert b_json == '{"block_type": "sum_block", "block_list": [{"block_type": "block_reference", "reference": "strains"}, {"block_type": "block_reference", "reference": "conditions"}]}'
        b2 = json.loads(b_json, cls=dummy_definition_decoder)
        assert b1 == b2
    def test_tuple_block(self, condition_block, strain_block):
        b1 = ProductBlock(block_list=[
            BlockReference(block=strain_block),
            BlockReference(block=condition_block)
        ])
        b2 = ProductBlock(block_list=[
            BlockReference(block=strain_block),
            BlockReference(block=condition_block)
        ])
        assert b1 == b1
        assert b1 == b2
        assert b1 != {}
        assert repr(
b1) == "ProductBlock(block_list=[BlockReference(block=DesignBlock(label='strains', definition=SumBlock(block_list=[ProductBlock(block_list=[SubjectReference(entity=NamedEntity(name='MG1655_NAND_Circuit', reference='https://hub.sd2e.org/user/sd2e/design/MG1655_NAND_Circuit/1')), TreatmentReference(treatment=EntityTreatment(entity=NamedEntity(name='Kan', reference='https://hub.sd2e.org/user/sd2e/design/Kan/1', attributes=[UnboundAttribute(name='concentration', unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000274'))])))]), SubjectReference(entity=NamedEntity(name='MG1655_empty_landing_pads', reference='https://hub.sd2e.org/user/sd2e/design/MG1655_empty_landing_pads/1'))]))), BlockReference(block=DesignBlock(label='conditions', definition=ProductBlock(block_list=[GenerateBlock(treatment=EntityTreatment(entity=NamedEntity(name='IPTG', reference='https://hub.sd2e.org/user/sd2e/design/IPTG/1', attributes=[UnboundAttribute(name='concentration', unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000064'))])), attribute_name='concentration', values=[Value(value=0, unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000064')), Value(value=0.25, unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000064')), Value(value=2.5, unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000064')), Value(value=25, unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000064')), Value(value=250, unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000064'))]), GenerateBlock(treatment=EntityTreatment(entity=NamedEntity(name='L-arabinose', reference='https://hub.sd2e.org/user/sd2e/design/Larabinose/1', attributes=[UnboundAttribute(name='concentration', unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000064'))])), attribute_name='concentration', values=[Value(value=0, unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000064')), Value(value=5, unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000064')), Value(value=50, unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000064')), Value(value=500, unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000064')), Value(value=5000, unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000064')), Value(value=25000, unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000064'))])])))])"
    def test_tuple_block_serialization(self, condition_block, strain_block, dummy_definition_decoder):
        b1 = ProductBlock(block_list=[
            BlockReference(block=strain_block),
            BlockReference(block=condition_block)
        ])
        b_json = json.dumps(b1, cls=BlockDefinitionEncoder)
        b2 = json.loads(b_json, cls=dummy_definition_decoder)
        assert b1 == b2
class TestReference:
    def test_block_reference(self, strain_block):
        r1 = BlockReference(block=strain_block)
        r2 = BlockReference(block=strain_block)
        assert r1 == r1
        assert r1 == r2
        assert r1 != {}
        assert repr(r1) == "BlockReference(block=DesignBlock(label='strains', definition=SumBlock(block_list=[ProductBlock(block_list=[SubjectReference(entity=NamedEntity(name='MG1655_NAND_Circuit', reference='https://hub.sd2e.org/user/sd2e/design/MG1655_NAND_Circuit/1')), TreatmentReference(treatment=EntityTreatment(entity=NamedEntity(name='Kan', reference='https://hub.sd2e.org/user/sd2e/design/Kan/1', attributes=[UnboundAttribute(name='concentration', unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000274'))])))]), SubjectReference(entity=NamedEntity(name='MG1655_empty_landing_pads', reference='https://hub.sd2e.org/user/sd2e/design/MG1655_empty_landing_pads/1'))])))"
    def test_block_reference_serialization(self, strain_block, dummy_definition_decoder):
        r1 = BlockReference(block=strain_block)
        r_json = json.dumps(r1, cls=BlockDefinitionEncoder)
        assert r_json == '{"block_type": "block_reference", "reference": "strains"}'
        r2 = json.loads(r_json, cls=dummy_definition_decoder)
        assert r1 == r2
    def test_treatment_reference(self, temperature):
        r1 = TreatmentReference(treatment=temperature)
        r2 = TreatmentReference(treatment=temperature)
        assert r1 == r1
        assert r1 == r2
        assert r1 != {}
        assert repr(r1) == "TreatmentReference(treatment=AttributeTreatment(attribute=BoundAttribute(name='temperature', value=Value(value=37.0, unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000027')))))"
    def test_treatment_reference_serialization(self, temperature, dummy_definition_decoder):
        r1 = TreatmentReference(treatment=temperature)
        r_json = json.dumps(r1, cls=BlockDefinitionEncoder)
        assert r_json == '{"block_type": "treatment_reference", "reference": "temperature"}'
        r2 = json.loads(r_json, cls=dummy_definition_decoder)
        assert r1 == r2
    def test_treatment_attribute_reference(self, iptg):
        r1 = TreatmentReference(treatment=iptg)
        r2 = TreatmentReference(treatment=iptg)
        assert r1 == r1
        assert r1 == r2
        assert r1 != {}
        assert repr(
r1) == "TreatmentReference(treatment=EntityTreatment(entity=NamedEntity(name='IPTG', reference='https://hub.sd2e.org/user/sd2e/design/IPTG/1', attributes=[UnboundAttribute(name='concentration', unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000064'))])))"
    def test_treatment_attribute_reference_serialization(self, iptg, dummy_definition_decoder):
        r1 = TreatmentReference(treatment=iptg)
        r_json = json.dumps(r1, cls=BlockDefinitionEncoder)
        r2 = json.loads(r_json, cls=dummy_definition_decoder)
        assert r1 == r2
    def test_treatment_attribute_value_reference(self, iptg):
        r1 = TreatmentReference.create_from(
            treatment=iptg,
            value=Value(
                value=0,
                unit=Unit(
                    reference='http://purl.obolibrary.org/obo/UO_0000064')))
        r2 = TreatmentReference.create_from(
            treatment=iptg,
            value=Value(
                value=0,
                unit=Unit(
                    reference='http://purl.obolibrary.org/obo/UO_0000064')))
        assert r1 == r1
        assert r1 == r2
        assert r1 != {}
        assert repr(
r1) == "TreatmentValueReference(treatment=EntityTreatment(entity=NamedEntity(name='IPTG', reference='https://hub.sd2e.org/user/sd2e/design/IPTG/1', attributes=[UnboundAttribute(name='concentration', unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000064'))])), value=Value(value=0, unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000064')))"
    def test_treatment_attribute_value_reference_serialization(self, iptg, dummy_definition_decoder):
        r1 = TreatmentReference.create_from(
            treatment=iptg,
            value=Value(
                value=0,
                unit=Unit(
                    reference='http://purl.obolibrary.org/obo/UO_0000064')))
        r_json = json.dumps(r1, cls=BlockDefinitionEncoder)
        r2 = json.loads(r_json, cls=dummy_definition_decoder)
        assert r1 == r2
    def test_subject_reference(self, kan):
        r1 = TreatmentReference(treatment=kan)
        r2 = TreatmentReference(treatment=kan)
        assert r1 == r1
        assert r1 == r2
        assert r1 != {}
        assert repr(
r1) == "TreatmentReference(treatment=EntityTreatment(entity=NamedEntity(name='Kan', reference='https://hub.sd2e.org/user/sd2e/design/Kan/1', attributes=[UnboundAttribute(name='concentration', unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000274'))])))"
    def test_subject_reference_serialization(self, kan, dummy_definition_decoder):
        r1 = TreatmentReference(treatment=kan)
        r_json = json.dumps(r1, cls=BlockDefinitionEncoder)
        r2 = json.loads(r_json, cls=dummy_definition_decoder)
        assert r1 == r2
class TestDesignBlock:
    def test_design_block(self, kan):
        b1 = DesignBlock(
            label="test", definition=TreatmentReference(treatment=kan))
        b2 = DesignBlock(
            label="test", definition=TreatmentReference(treatment=kan))
        assert b1 == b1
        assert b1 == b2
        assert b1 != {}
        assert repr(
b1) == "DesignBlock(label='test', definition=TreatmentReference(treatment=EntityTreatment(entity=NamedEntity(name='Kan', reference='https://hub.sd2e.org/user/sd2e/design/Kan/1', attributes=[UnboundAttribute(name='concentration', unit=Unit(reference='http://purl.obolibrary.org/obo/UO_0000274'))]))))"
    def test_design_block_serialization(self, kan, dummy_design_decoder):
        b1 = DesignBlock(
            label="test", definition=TreatmentReference(treatment=kan))
        b_json = json.dumps(b1, cls=DesignBlockEncoder)
        assert b_json == '{"object_type": "design_block", "label": "test", "definition": {"block_type": "treatment_reference", "reference": "Kan"}}'
        b2 = json.loads(b_json, cls=dummy_design_decoder)
        assert b1 == b2
| 54.771784 | 2,316 | 0.655227 | 2,919 | 26,400 | 5.760877 | 0.046591 | 0.037583 | 0.079864 | 0.098656 | 0.873989 | 0.846991 | 0.8147 | 0.794422 | 0.781518 | 0.753449 | 0 | 0.048151 | 0.209394 | 26,400 | 481 | 2,317 | 54.885655 | 0.757522 | 0 | 0 | 0.673861 | 1 | 0.035971 | 0.432538 | 0.139015 | 0 | 0 | 0 | 0 | 0.146283 | 1 | 0.091127 | false | 0 | 0.009592 | 0.01199 | 0.141487 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
541fad7d93783d0a5679aa70b3e90fedba61e294 | 11,600 | py | Python | accelbyte_py_sdk/api/social/wrappers/_slot.py | AccelByte/accelbyte-python-sdk | dcd311fad111c59da828278975340fb92e0f26f7 | [
"MIT"
] | null | null | null | accelbyte_py_sdk/api/social/wrappers/_slot.py | AccelByte/accelbyte-python-sdk | dcd311fad111c59da828278975340fb92e0f26f7 | [
"MIT"
] | 1 | 2021-10-13T03:46:58.000Z | 2021-10-13T03:46:58.000Z | accelbyte_py_sdk/api/social/wrappers/_slot.py | AccelByte/accelbyte-python-sdk | dcd311fad111c59da828278975340fb92e0f26f7 | [
"MIT"
] | null | null | null | # Copyright (c) 2021 AccelByte Inc. All Rights Reserved.
# This is licensed software from AccelByte Inc, for limitations
# and restrictions contact your company contract manager.
#
# Code generated. DO NOT EDIT!
# template file: justice_py_sdk_codegen/__main__.py
# pylint: disable=duplicate-code
# pylint: disable=line-too-long
# pylint: disable=missing-function-docstring
# pylint: disable=missing-function-docstring
# pylint: disable=too-many-arguments
# pylint: disable=too-many-branches
# pylint: disable=too-many-instance-attributes
# pylint: disable=too-many-lines
# pylint: disable=too-many-locals
# pylint: disable=too-many-public-methods
# pylint: disable=too-many-return-statements
# pylint: disable=too-many-statements
# pylint: disable=unused-import
from typing import Any, Dict, List, Optional, Tuple, Union
from ....core import HeaderStr
from ....core import get_namespace as get_services_namespace
from ....core import run_request
from ....core import run_request_async
from ....core import same_doc_as
from ..models import ErrorEntity
from ..models import SlotInfo
from ..models import SlotMetadataUpdate
from ..operations.slot import GetSlotData
from ..operations.slot import GetUserNamespaceSlots
from ..operations.slot import PublicCreateUserNamespaceSlot
from ..operations.slot import PublicDeleteUserNamespaceSlot
from ..operations.slot import PublicGetSlotData
from ..operations.slot import PublicGetUserNamespaceSlots
from ..operations.slot import PublicUpdateUserNamespaceSlot
from ..operations.slot import PublicUpdateUserNamespaceSlotMetadata
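Every wrapper below follows the same shape: resolve a default namespace when the caller omits one, build a request object, and return a `(result, error)` pair instead of raising. A self-contained sketch of that convention — the `get_services_namespace` and `run_request` stand-ins here are hypothetical substitutes for the SDK helpers, and the dict request is a placeholder for `GetSlotData.create`:

```python
def get_services_namespace():
    # Stand-in for the SDK helper: returns a (namespace, error) pair.
    return "accelbyte", None

def run_request(request, additional_headers=None):
    # Stand-in runner: echo the request back with no error.
    return request, None

def get_slot_data_sketch(slot_id, user_id, namespace=None, x_additional_headers=None):
    # Resolve the default namespace only when the caller omitted one.
    if namespace is None:
        namespace, error = get_services_namespace()
        if error:
            return None, error
    request = {"slot_id": slot_id, "user_id": user_id, "namespace": namespace}
    return run_request(request, additional_headers=x_additional_headers)

result, error = get_slot_data_sketch("slot-1", "user-1")
```

The `(result, error)` convention lets callers branch on `error` without try/except, which is why every early exit below returns `None, error`.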
@same_doc_as(GetSlotData)
def get_slot_data(slot_id: str, user_id: str, namespace: Optional[str] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
    if namespace is None:
        namespace, error = get_services_namespace()
        if error:
            return None, error
    request = GetSlotData.create(
        slot_id=slot_id,
        user_id=user_id,
        namespace=namespace,
    )
    return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(GetSlotData)
async def get_slot_data_async(slot_id: str, user_id: str, namespace: Optional[str] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
    if namespace is None:
        namespace, error = get_services_namespace()
        if error:
            return None, error
    request = GetSlotData.create(
        slot_id=slot_id,
        user_id=user_id,
        namespace=namespace,
    )
    return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(GetUserNamespaceSlots)
def get_user_namespace_slots(user_id: str, namespace: Optional[str] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
    if namespace is None:
        namespace, error = get_services_namespace()
        if error:
            return None, error
    request = GetUserNamespaceSlots.create(
        user_id=user_id,
        namespace=namespace,
    )
    return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(GetUserNamespaceSlots)
async def get_user_namespace_slots_async(user_id: str, namespace: Optional[str] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
    if namespace is None:
        namespace, error = get_services_namespace()
        if error:
            return None, error
    request = GetUserNamespaceSlots.create(
        user_id=user_id,
        namespace=namespace,
    )
    return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(PublicCreateUserNamespaceSlot)
def public_create_user_namespace_slot(user_id: str, checksum: Optional[str] = None, custom_attribute: Optional[str] = None, file: Optional[Any] = None, label: Optional[str] = None, tags: Optional[List[str]] = None, namespace: Optional[str] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
    if namespace is None:
        namespace, error = get_services_namespace()
        if error:
            return None, error
    request = PublicCreateUserNamespaceSlot.create(
        user_id=user_id,
        checksum=checksum,
        custom_attribute=custom_attribute,
        file=file,
        label=label,
        tags=tags,
        namespace=namespace,
    )
    return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(PublicCreateUserNamespaceSlot)
async def public_create_user_namespace_slot_async(user_id: str, checksum: Optional[str] = None, custom_attribute: Optional[str] = None, file: Optional[Any] = None, label: Optional[str] = None, tags: Optional[List[str]] = None, namespace: Optional[str] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
    if namespace is None:
        namespace, error = get_services_namespace()
        if error:
            return None, error
    request = PublicCreateUserNamespaceSlot.create(
        user_id=user_id,
        checksum=checksum,
        custom_attribute=custom_attribute,
        file=file,
        label=label,
        tags=tags,
        namespace=namespace,
    )
    return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(PublicDeleteUserNamespaceSlot)
def public_delete_user_namespace_slot(slot_id: str, user_id: str, namespace: Optional[str] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
    if namespace is None:
        namespace, error = get_services_namespace()
        if error:
            return None, error
    request = PublicDeleteUserNamespaceSlot.create(
        slot_id=slot_id,
        user_id=user_id,
        namespace=namespace,
    )
    return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(PublicDeleteUserNamespaceSlot)
async def public_delete_user_namespace_slot_async(slot_id: str, user_id: str, namespace: Optional[str] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
    if namespace is None:
        namespace, error = get_services_namespace()
        if error:
            return None, error
    request = PublicDeleteUserNamespaceSlot.create(
        slot_id=slot_id,
        user_id=user_id,
        namespace=namespace,
    )
    return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(PublicGetSlotData)
def public_get_slot_data(slot_id: str, user_id: str, namespace: Optional[str] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
    if namespace is None:
        namespace, error = get_services_namespace()
        if error:
            return None, error
    request = PublicGetSlotData.create(
        slot_id=slot_id,
        user_id=user_id,
        namespace=namespace,
    )
    return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(PublicGetSlotData)
async def public_get_slot_data_async(slot_id: str, user_id: str, namespace: Optional[str] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
    if namespace is None:
        namespace, error = get_services_namespace()
        if error:
            return None, error
    request = PublicGetSlotData.create(
        slot_id=slot_id,
        user_id=user_id,
        namespace=namespace,
    )
    return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(PublicGetUserNamespaceSlots)
def public_get_user_namespace_slots(user_id: str, namespace: Optional[str] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
    if namespace is None:
        namespace, error = get_services_namespace()
        if error:
            return None, error
    request = PublicGetUserNamespaceSlots.create(
        user_id=user_id,
        namespace=namespace,
    )
    return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(PublicGetUserNamespaceSlots)
async def public_get_user_namespace_slots_async(user_id: str, namespace: Optional[str] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
    if namespace is None:
        namespace, error = get_services_namespace()
        if error:
            return None, error
    request = PublicGetUserNamespaceSlots.create(
        user_id=user_id,
        namespace=namespace,
    )
    return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(PublicUpdateUserNamespaceSlot)
def public_update_user_namespace_slot(slot_id: str, user_id: str, checksum: Optional[str] = None, custom_attribute: Optional[str] = None, file: Optional[Any] = None, label: Optional[str] = None, tags: Optional[List[str]] = None, namespace: Optional[str] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
    if namespace is None:
        namespace, error = get_services_namespace()
        if error:
            return None, error
    request = PublicUpdateUserNamespaceSlot.create(
        slot_id=slot_id,
        user_id=user_id,
        checksum=checksum,
        custom_attribute=custom_attribute,
        file=file,
        label=label,
        tags=tags,
        namespace=namespace,
    )
    return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(PublicUpdateUserNamespaceSlot)
async def public_update_user_namespace_slot_async(slot_id: str, user_id: str, checksum: Optional[str] = None, custom_attribute: Optional[str] = None, file: Optional[Any] = None, label: Optional[str] = None, tags: Optional[List[str]] = None, namespace: Optional[str] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
    if namespace is None:
        namespace, error = get_services_namespace()
        if error:
            return None, error
    request = PublicUpdateUserNamespaceSlot.create(
        slot_id=slot_id,
        user_id=user_id,
        checksum=checksum,
        custom_attribute=custom_attribute,
        file=file,
        label=label,
        tags=tags,
        namespace=namespace,
    )
    return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(PublicUpdateUserNamespaceSlotMetadata)
def public_update_user_namespace_slot_metadata(slot_id: str, user_id: str, body: Optional[SlotMetadataUpdate] = None, namespace: Optional[str] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
    if namespace is None:
        namespace, error = get_services_namespace()
        if error:
            return None, error
    request = PublicUpdateUserNamespaceSlotMetadata.create(
        slot_id=slot_id,
        user_id=user_id,
        body=body,
        namespace=namespace,
    )
    return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(PublicUpdateUserNamespaceSlotMetadata)
async def public_update_user_namespace_slot_metadata_async(slot_id: str, user_id: str, body: Optional[SlotMetadataUpdate] = None, namespace: Optional[str] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
    if namespace is None:
        namespace, error = get_services_namespace()
        if error:
            return None, error
    request = PublicUpdateUserNamespaceSlotMetadata.create(
        slot_id=slot_id,
        user_id=user_id,
        body=body,
        namespace=namespace,
    )
    return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
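Each operation above is exposed twice, once synchronously and once as a coroutine; both build the identical request and differ only in how it is dispatched. A minimal stdlib-only sketch of that sync/async pairing — all names here are hypothetical stand-ins, not the SDK's real runners:

```python
import asyncio

def build_request(slot_id, user_id, namespace):
    # Shared request construction used by both variants.
    return {"slot_id": slot_id, "user_id": user_id, "namespace": namespace}

def run_request(request):
    # Synchronous dispatch stand-in.
    return request, None

async def run_request_async(request):
    # Asynchronous dispatch stand-in.
    return request, None

def get_slot(slot_id, user_id, namespace):
    return run_request(build_request(slot_id, user_id, namespace))

async def get_slot_async(slot_id, user_id, namespace):
    return await run_request_async(build_request(slot_id, user_id, namespace))

sync_result = get_slot("s1", "u1", "ns")
async_result = asyncio.run(get_slot_async("s1", "u1", "ns"))
```

Keeping the request construction in one shared helper is what lets the code generator emit both variants without the two drifting apart.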
| 40.84507 | 339 | 0.722155 | 1,387 | 11,600 | 5.795242 | 0.081471 | 0.03583 | 0.07166 | 0.047773 | 0.846977 | 0.838019 | 0.822219 | 0.808534 | 0.79684 | 0.79261 | 0 | 0.000424 | 0.186034 | 11,600 | 283 | 340 | 40.989399 | 0.850879 | 0.066034 | 0 | 0.782222 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.035556 | false | 0 | 0.075556 | 0 | 0.253333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
5858f0bb62e4193c1e6d0d97662fa2a79f48a265 | 15,035 | py | Python | Ejer_3/P3_mapper.py | hunzaGit/EjerciciosPythonSpark | 9ef32a402a85284c050dfae77019ba56b42eacd1 | [
"MIT"
] | null | null | null | Ejer_3/P3_mapper.py | hunzaGit/EjerciciosPythonSpark | 9ef32a402a85284c050dfae77019ba56b42eacd1 | [
"MIT"
] | null | null | null | Ejer_3/P3_mapper.py | hunzaGit/EjerciciosPythonSpark | 9ef32a402a85284c050dfae77019ba56b42eacd1 | [
"MIT"
] | null | null | null | #!/usr/bin/python
import sys
import re
# ********************************************************************************
# ****************** Variables y Tipos y clases de meteoritos ******************
# ********************************************************************************
paramTipoMeteo = None
paramClaseMeteo = None
nameType = ['Valid', 'Relict']
# size: 2 elements
recclass = ['Acapulcoite', 'Acapulcoite/Lodranite', 'Acapulcoite/lodranite', 'Achondrite-prim', 'Achondrite-ung', 'Angrite', 'Aubrite', 'Aubrite-an', 'Brachinite', 'C', 'C1-ung', 'C1/2-ung', 'C2', 'C2-ung', 'C3-ung', 'C3.0-ung', 'C3/4-ung', 'C4', 'C4-ung', 'C4/5', 'C5/6-ung', 'C6', 'CB', 'CBa', 'CBb', 'CH/CBb', 'CH3', 'CH3 ', 'CI1', 'CK', 'CK3', 'CK3-an', 'CK3.8', 'CK3/4', 'CK4', 'CK4-an', 'CK4/5', 'CK5', 'CK5/6', 'CK6', 'CM', 'CM-an', 'CM1', 'CM1/2', 'CM2', 'CM2-an', 'CO3', 'CO3 ', 'CO3.0', 'CO3.1', 'CO3.2', 'CO3.3', 'CO3.4', 'CO3.5', 'CO3.6', 'CO3.7', 'CO3.8', 'CR', 'CR-an', 'CR1', 'CR2', 'CR2-an', 'CR7', 'CV2', 'CV3', 'CV3-an', 'Chondrite-fusion crust', 'Chondrite-ung', 'Diogenite', 'Diogenite-an', 'Diogenite-olivine', 'Diogenite-pm', 'E', 'E-an', 'E3', 'E3-an', 'E4', 'E5', 'E5-an', 'E6', 'EH', 'EH-imp melt', 'EH3', 'EH3/4-an', 'EH4', 'EH4/5', 'EH5', 'EH6', 'EH6-an', 'EH7', 'EH7-an', 'EL-melt rock', 'EL3', 'EL3/4', 'EL4', 'EL4/5', 'EL5', 'EL6', 'EL6 ', 'EL6/7', 'EL7', 'Enst achon', 'Enst achon-ung', 'Eucrite', 'Eucrite-Mg rich', 'Eucrite-an', 'Eucrite-br', 'Eucrite-cm', 'Eucrite-mmict', 'Eucrite-pmict', 'Eucrite-unbr', 'Fusion crust', 'H', 'H(5?)', 'H(?)4', 'H(L)3', 'H(L)3-an', 'H-an', 'H-imp melt', 'H-melt breccia', 'H-melt rock', 'H-metal', 'H/L3', 'H/L3-4', 'H/L3.5', 'H/L3.6', 'H/L3.7', 'H/L3.9', 'H/L4', 'H/L4-5', 'H/L4/5', 'H/L5', 'H/L6', 'H/L6-melt rock', 'H/L~4', 'H3', 'H3 ', 'H3-4', 'H3-5', 'H3-6', 'H3-an', 'H3.0', 'H3.0-3.4', 'H3.05', 'H3.1', 'H3.10', 'H3.15', 'H3.2', 'H3.2-3.7', 'H3.2-6', 'H3.2-an', 'H3.3', 'H3.4', 'H3.4-5', 'H3.4/3.5', 'H3.5', 'H3.5-4', 'H3.6', 'H3.6-6', 'H3.7', 'H3.7-5', 'H3.7-6', 'H3.7/3.8', 'H3.8', 'H3.8-4', 'H3.8-5', 'H3.8-6', 'H3.8-an', 'H3.8/3.9', 'H3.8/4', 'H3.9', 'H3.9-5', 'H3.9-6', 'H3.9/4', 'H3/4', 'H4', 'H4 ', 'H4(?)', 'H4-5', 'H4-6', 'H4-an', 'H4-melt breccia', 'H4/5', 'H4/6', 'H5', 'H5 ', 'H5-6', 'H5-7', 'H5-an', 'H5-melt breccia', 'H5/6', 'H6', 'H6 ', 'H6-melt breccia', 'H6/7', 'H7', 'H?', 'Howardite', 'Howardite-an', 
'H~4', 'H~4/5', 'H~5', 'H~6', 'Impact melt breccia', 'Iron', 'Iron, IAB complex', 'Iron, IAB-MG', 'Iron, IAB-an', 'Iron, IAB-sHH', 'Iron, IAB-sHL', 'Iron, IAB-sHL-an', 'Iron, IAB-sLH', 'Iron, IAB-sLL', 'Iron, IAB-sLM', 'Iron, IAB-ung', 'Iron, IAB?', 'Iron, IC', 'Iron, IC-an', 'Iron, IIAB', 'Iron, IIAB-an', 'Iron, IIC', 'Iron, IID', 'Iron, IID-an', 'Iron, IIE', 'Iron, IIE-an', 'Iron, IIF', 'Iron, IIG', 'Iron, IIIAB', 'Iron, IIIAB-an', 'Iron, IIIAB?', 'Iron, IIIE', 'Iron, IIIE-an', 'Iron, IIIF', 'Iron, IVA', 'Iron, IVA-an', 'Iron, IVB', 'Iron, ungrouped', 'Iron?', 'K', 'K3', 'L', 'L(?)3', 'L(H)3', 'L(LL)3', 'L(LL)3.05', 'L(LL)3.5-3.7', 'L(LL)5', 'L(LL)6', 'L(LL)~4', 'L-imp melt', 'L-melt breccia', 'L-melt rock', 'L-metal', 'L/LL', 'L/LL(?)3', 'L/LL-melt rock', 'L/LL3', 'L/LL3-5', 'L/LL3-6', 'L/LL3.10', 'L/LL3.2', 'L/LL3.4', 'L/LL3.5', 'L/LL3.6/3.7', 'L/LL4', 'L/LL4-6', 'L/LL4/5', 'L/LL5', 'L/LL5-6', 'L/LL5/6', 'L/LL6', 'L/LL6-an', 'L/LL~4', 'L/LL~5', 'L/LL~6', 'L3', 'L3-4', 'L3-5', 'L3-6', 'L3-7', 'L3-melt breccia', 'L3.0', 'L3.0-3.7', 'L3.0-3.9', 'L3.00', 'L3.05', 'L3.1', 'L3.10', 'L3.2', 'L3.2-3.5', 'L3.2-3.6', 'L3.3', 'L3.3-3.5', 'L3.3-3.6', 'L3.3-3.7', 'L3.4', 'L3.4-3.7', 'L3.5', 'L3.5-3.7', 'L3.5-3.8', 'L3.5-3.9', 'L3.5-5', 'L3.6', 'L3.6-4', 'L3.7', 'L3.7-3.9', 'L3.7-4', 'L3.7-6', 'L3.7/3.8', 'L3.8', 'L3.8-5', 'L3.8-6', 'L3.8-an', 'L3.9', 'L3.9-5', 'L3.9-6', 'L3.9/4', 'L3/4', 'L4', 'L4 ', 'L4-5', 'L4-6', 'L4-an', 'L4-melt breccia', 'L4-melt rock', 'L4/5', 'L5', 'L5 ', 'L5-6', 'L5-7', 'L5-melt breccia', 'L5/6', 'L6', 'L6 ', 'L6-melt breccia', 'L6-melt rock', 'L6/7', 'L7', 'LL', 'LL(L)3', 'LL(L)3.1', 'LL-imp melt', 'LL-melt breccia', 'LL-melt rock', 'LL3', 'LL3-4', 'LL3-5', 'LL3-6', 'LL3.0', 'LL3.00', 'LL3.05', 'LL3.1', 'LL3.1-3.5', 'LL3.10', 'LL3.15', 'LL3.2', 'LL3.3', 'LL3.4', 'LL3.5', 'LL3.6', 'LL3.7', 'LL3.7-6', 'LL3.8', 'LL3.8-4', 'LL3.8-6', 'LL3.9', 'LL3.9/4', 'LL3/4', 'LL4', 'LL4-5', 'LL4-6', 'LL4/5', 'LL4/6', 'LL5', 'LL5-6', 'LL5-7', 'LL5/6', 'LL6', 'LL6 '
, 'LL6(?)', 'LL6-an', 'LL6-melt breccia', 'LL6/7', 'LL7', 'LL7(?)', 'LL<3.5', 'LL~3', 'LL~4', 'LL~4/5', 'LL~5', 'LL~6', 'Lodranite', 'Lodranite-an', 'Lunar', 'Lunar (anorth)', 'Lunar (bas. breccia)', 'Lunar (bas/anor)', 'Lunar (bas/gab brec)', 'Lunar (basalt)', 'Lunar (feldsp. breccia)', 'Lunar (gabbro)', 'Lunar (norite)', 'L~3', 'L~4', 'L~4-6', 'L~5', 'L~6', 'Martian', 'Martian (OPX)', 'Martian (basaltic breccia)', 'Martian (chassignite)', 'Martian (nakhlite)', 'Martian (shergottite)', 'Mesosiderite', 'Mesosiderite-A', 'Mesosiderite-A1', 'Mesosiderite-A2', 'Mesosiderite-A3', 'Mesosiderite-A3/4', 'Mesosiderite-A4', 'Mesosiderite-B', 'Mesosiderite-B1', 'Mesosiderite-B2', 'Mesosiderite-B4', 'Mesosiderite-C', 'Mesosiderite-C2', 'Mesosiderite-an', 'Mesosiderite?', 'OC', 'OC3', 'Pallasite', 'Pallasite, PES', 'Pallasite, PMG', 'Pallasite, PMG-an', 'Pallasite, ungrouped', 'Pallasite?', 'R', 'R3', 'R3-4', 'R3-5', 'R3-6', 'R3.4', 'R3.5-4', 'R3.5-6', 'R3.6', 'R3.7', 'R3.8', 'R3.8-5', 'R3.8-6', 'R3.9', 'R3/4', 'R4', 'R4/5', 'R5', 'R6', 'Relict H', 'Relict OC', 'Relict iron', 'Stone-uncl', 'Stone-ung', 'Unknown', 'Ureilite', 'Ureilite-an', 'Ureilite-pmict', 'Winonaite', 'recclass']
# size: 466 elements
# recclassValid = ['Acapulcoite', 'Acapulcoite/Lodranite', 'Acapulcoite/lodranite', 'Achondrite-prim', 'Achondrite-ung', 'Angrite', 'Aubrite', 'Aubrite-an', 'Brachinite', 'C', 'C1-ung', 'C1/2-ung', 'C2', 'C2-ung', 'C3-ung', 'C3.0-ung', 'C3/4-ung', 'C4', 'C4-ung', 'C4/5', 'C5/6-ung', 'C6', 'CB', 'CBa', 'CBb', 'CH/CBb', 'CH3', 'CH3 ', 'CI1', 'CK', 'CK3', 'CK3-an', 'CK3.8', 'CK3/4', 'CK4', 'CK4-an', 'CK4/5', 'CK5', 'CK5/6', 'CK6', 'CM', 'CM-an', 'CM1', 'CM1/2', 'CM2', 'CM2-an', 'CO3', 'CO3 ', 'CO3.0', 'CO3.1', 'CO3.2', 'CO3.3', 'CO3.4', 'CO3.5', 'CO3.6', 'CO3.7', 'CO3.8', 'CR', 'CR-an', 'CR1', 'CR2', 'CR2-an', 'CR7', 'CV2', 'CV3', 'CV3-an', 'Chondrite-ung', 'Diogenite', 'Diogenite-an', 'Diogenite-olivine', 'Diogenite-pm', 'E', 'E-an', 'E3', 'E3-an', 'E4', 'E5', 'E5-an', 'E6', 'EH', 'EH-imp melt', 'EH3', 'EH3/4-an', 'EH4', 'EH4/5', 'EH5', 'EH6', 'EH6-an', 'EH7', 'EH7-an', 'EL-melt rock', 'EL3', 'EL3/4', 'EL4', 'EL4/5', 'EL5', 'EL6', 'EL6 ', 'EL6/7', 'EL7', 'Enst achon', 'Enst achon-ung', 'Eucrite', 'Eucrite-Mg rich', 'Eucrite-an', 'Eucrite-br', 'Eucrite-cm', 'Eucrite-mmict', 'Eucrite-pmict', 'Eucrite-unbr', 'H', 'H(5?)', 'H(?)4', 'H(L)3', 'H(L)3-an', 'H-an', 'H-imp melt', 'H-melt breccia', 'H-melt rock', 'H-metal', 'H/L3', 'H/L3-4', 'H/L3.5', 'H/L3.6', 'H/L3.7', 'H/L3.9', 'H/L4', 'H/L4-5', 'H/L4/5', 'H/L5', 'H/L6', 'H/L6-melt rock', 'H/L~4', 'H3', 'H3 ', 'H3-4', 'H3-5', 'H3-6', 'H3-an', 'H3.0', 'H3.0-3.4', 'H3.05', 'H3.1', 'H3.10', 'H3.15', 'H3.2', 'H3.2-3.7', 'H3.2-6', 'H3.2-an', 'H3.3', 'H3.4', 'H3.4-5', 'H3.4/3.5', 'H3.5', 'H3.5-4', 'H3.6', 'H3.6-6', 'H3.7', 'H3.7-5', 'H3.7-6', 'H3.7/3.8', 'H3.8', 'H3.8-4', 'H3.8-5', 'H3.8-6', 'H3.8-an', 'H3.8/3.9', 'H3.8/4', 'H3.9', 'H3.9-5', 'H3.9-6', 'H3.9/4', 'H3/4', 'H4', 'H4 ', 'H4(?)', 'H4-5', 'H4-6', 'H4-an', 'H4-melt breccia', 'H4/5', 'H4/6', 'H5', 'H5 ', 'H5-6', 'H5-7', 'H5-an', 'H5-melt breccia', 'H5/6', 'H6', 'H6 ', 'H6-melt breccia', 'H6/7', 'H7', 'H?', 'Howardite', 'Howardite-an', 'H~4', 'H~4/5', 'H~5', 'H~6', 'Impact 
melt breccia', 'Iron', 'Iron, IAB complex', 'Iron, IAB-MG', 'Iron, IAB-an', 'Iron, IAB-sHH', 'Iron, IAB-sHL', 'Iron, IAB-sHL-an', 'Iron, IAB-sLH', 'Iron, IAB-sLL', 'Iron, IAB-sLM', 'Iron, IAB-ung', 'Iron, IAB?', 'Iron, IC', 'Iron, IC-an', 'Iron, IIAB', 'Iron, IIAB-an', 'Iron, IIC', 'Iron, IID', 'Iron, IID-an', 'Iron, IIE', 'Iron, IIE-an', 'Iron, IIF', 'Iron, IIG', 'Iron, IIIAB', 'Iron, IIIAB-an', 'Iron, IIIAB?', 'Iron, IIIE', 'Iron, IIIE-an', 'Iron, IIIF', 'Iron, IVA', 'Iron, IVA-an', 'Iron, IVB', 'Iron, ungrouped', 'Iron?', 'K', 'K3', 'L', 'L(?)3', 'L(H)3', 'L(LL)3', 'L(LL)3.05', 'L(LL)3.5-3.7', 'L(LL)5', 'L(LL)6', 'L(LL)~4', 'L-imp melt', 'L-melt breccia', 'L-melt rock', 'L-metal', 'L/LL', 'L/LL(?)3', 'L/LL-melt rock', 'L/LL3', 'L/LL3-5', 'L/LL3-6', 'L/LL3.10', 'L/LL3.2', 'L/LL3.4', 'L/LL3.5', 'L/LL3.6/3.7', 'L/LL4', 'L/LL4-6', 'L/LL4/5', 'L/LL5', 'L/LL5-6', 'L/LL5/6', 'L/LL6', 'L/LL6-an', 'L/LL~4', 'L/LL~5', 'L/LL~6', 'L3', 'L3-4', 'L3-5', 'L3-6', 'L3-7', 'L3-melt breccia', 'L3.0', 'L3.0-3.7', 'L3.0-3.9', 'L3.00', 'L3.05', 'L3.1', 'L3.10', 'L3.2', 'L3.2-3.5', 'L3.2-3.6', 'L3.3', 'L3.3-3.5', 'L3.3-3.6', 'L3.3-3.7', 'L3.4', 'L3.4-3.7', 'L3.5', 'L3.5-3.7', 'L3.5-3.8', 'L3.5-3.9', 'L3.5-5', 'L3.6', 'L3.6-4', 'L3.7', 'L3.7-3.9', 'L3.7-4', 'L3.7-6', 'L3.7/3.8', 'L3.8', 'L3.8-5', 'L3.8-6', 'L3.8-an', 'L3.9', 'L3.9-5', 'L3.9-6', 'L3.9/4', 'L3/4', 'L4', 'L4 ', 'L4-5', 'L4-6', 'L4-an', 'L4-melt breccia', 'L4-melt rock', 'L4/5', 'L5', 'L5 ', 'L5-6', 'L5-7', 'L5-melt breccia', 'L5/6', 'L6', 'L6 ', 'L6-melt breccia', 'L6-melt rock', 'L6/7', 'L7', 'LL', 'LL(L)3', 'LL(L)3.1', 'LL-imp melt', 'LL-melt breccia', 'LL-melt rock', 'LL3', 'LL3-4', 'LL3-5', 'LL3-6', 'LL3.0', 'LL3.00', 'LL3.05', 'LL3.1', 'LL3.1-3.5', 'LL3.10', 'LL3.15', 'LL3.2', 'LL3.3', 'LL3.4', 'LL3.5', 'LL3.6', 'LL3.7', 'LL3.7-6', 'LL3.8', 'LL3.8-4', 'LL3.8-6', 'LL3.9', 'LL3.9/4', 'LL3/4', 'LL4', 'LL4-5', 'LL4-6', 'LL4/5', 'LL4/6', 'LL5', 'LL5-6', 'LL5-7', 'LL5/6', 'LL6', 'LL6 ', 'LL6(?)', 'LL6-an', 'LL6-melt 
breccia', 'LL6/7', 'LL7', 'LL7(?)', 'LL<3.5', 'LL~3', 'LL~4', 'LL~4/5', 'LL~5', 'LL~6', 'Lodranite', 'Lodranite-an', 'Lunar', 'Lunar (anorth)', 'Lunar (bas. breccia)', 'Lunar (bas/anor)', 'Lunar (bas/gab brec)', 'Lunar (basalt)', 'Lunar (feldsp. breccia)', 'Lunar (gabbro)', 'Lunar (norite)', 'L~3', 'L~4', 'L~4-6', 'L~5', 'L~6', 'Martian', 'Martian (OPX)', 'Martian (basaltic breccia)', 'Martian (chassignite)', 'Martian (nakhlite)', 'Martian (shergottite)', 'Mesosiderite', 'Mesosiderite-A', 'Mesosiderite-A1', 'Mesosiderite-A2', 'Mesosiderite-A3', 'Mesosiderite-A3/4', 'Mesosiderite-A4', 'Mesosiderite-B', 'Mesosiderite-B1', 'Mesosiderite-B2', 'Mesosiderite-B4', 'Mesosiderite-C', 'Mesosiderite-C2', 'Mesosiderite-an', 'Mesosiderite?', 'OC', 'OC3', 'Pallasite', 'Pallasite, PES', 'Pallasite, PMG', 'Pallasite, PMG-an', 'Pallasite, ungrouped', 'Pallasite?', 'R', 'R3', 'R3-4', 'R3-5', 'R3-6', 'R3.4', 'R3.5-4', 'R3.5-6', 'R3.6', 'R3.7', 'R3.8', 'R3.8-5', 'R3.8-6', 'R3.9', 'R3/4', 'R4', 'R4/5', 'R5', 'R6', 'Stone-uncl', 'Stone-ung', 'Unknown', 'Ureilite', 'Ureilite-an', 'Ureilite-pmict', 'Winonaite']
# size: 460 elements
# recclassRelict = ['Chondrite-fusion crust', 'Fusion crust', 'Relict H', 'Relict OC', 'Relict iron']
# size: 5 elements
# ********************************************************************************
# ********************************************************************************
# **************************** Parameter parsing ****************************
# ********************************************************************************
def salir():
    string = """
    *****************************************************************
    - Provide a meteorite type or class as the second parameter to see
      the mean mass per year:
        -t meteoriteType  ['Valid', 'Relict']              (or --tipo)
        -c meteoriteClass [one of the meteorite classes]   (or --clase)
    *****************************************************************
    """
    sys.exit(string)
# Default both filters to None so the `!= None` checks below work
# when only one of them is supplied on the command line
paramTipoMeteo = None
paramClaseMeteo = None
if len(sys.argv) >= 3:
    if sys.argv[1] == '-t' or sys.argv[1] == '--tipo':
        if sys.argv[2] in nameType:
            paramTipoMeteo = sys.argv[2]
        else:
            salir()
    elif sys.argv[1] == '-c' or sys.argv[1] == '--clase':
        if sys.argv[2] in recclass:
            paramClaseMeteo = sys.argv[2]
        else:
            salir()
    else:
        salir()
else:
    salir()
# ********************************************************************************
# ********************************************************************************
# ************************ Row formatting function ************************
# ********************************************************************************
# Three distinct row layouts (after re.split('"', line)):
# ['Bulls Run,5163,Valid,Iron?,2250,Fell,01/01/1964 12:00:00 AM,,,'] - 1 element
# |--------------------------------------------------------------|
# ['Aachen,1,Valid,L5,21,Fell,01/01/1880 12:00:00 AM,50.775000,6.083330,', '(50.775000, 6.083330)', '\n'] - 3 elements
# |--------------------------------------------------------------------| |---------------------| |--|
# ['Akyumak,433,Valid,', 'Iron, IVA', ',50000,Fell,01/01/1981 12:00:00 AM,39.916670,42.816670,', '(39.916670, 42.816670)', '\n'] - 5 elements
# |------------------| |----------| |------------------------------------------------------| |-----------------------| |--|
# Yamato 791510,26859,Valid,E5-an,9.77,Found,01/01/1979 12:00:00 AM,-71.500000,35.666670, "(-71.500000, 35.666670)"
def formatearFila(line):
    tipoOClase = ''
    masa = ''
    anyo = ''
    if len(line) <= 3:  # unquoted recclass (1- or 3-element layout)
        line = line[0].split(',')
        if len(line) > 6:
            if paramTipoMeteo != None:  # filtering by type
                tipoOClase = line[2]
            else:  # filtering by class
                tipoOClase = line[3]
            masa = line[4]
            fecha = line[6]
        else:
            return None
    else:  # len(line) > 3: quoted recclass, e.g. "Iron, IVA" (5-element layout)
        if paramTipoMeteo != None:  # filtering by type
            tipoOClase = line[0].split(',')[2]
        else:  # filtering by class
            tipoOClase = line[1]
        # Python 2: drop non-ASCII characters before splitting the tail
        masa = line[2].encode("ascii", "ignore").split(',')[1]
        fecha = line[2].encode("ascii", "ignore").split(',')[3]
    if fecha != '':
        fecha = re.split(' ', fecha)[0]  # '01/01/1880 12:00:00 AM' -> '01/01/1880'
        anyo = re.split('/', fecha)[2]   # '01/01/1880' -> '1880'
    try:
        float(masa)
        if tipoOClase != '' and masa != '' and anyo != '' and anyo.isdigit():
            return [tipoOClase, masa, anyo]
        else:
            return None
    except ValueError:
        return None
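The three row layouts above can be exercised directly. Below is a Python 3 sketch of the same parsing logic (the `.encode("ascii", "ignore")` call from the Python 2 original is dropped, and `parse_row` is an illustrative name, not part of the original script):

```python
import re

def parse_row(line, by_type=True):
    """Python 3 sketch of formatearFila: [type_or_class, mass, year] or None."""
    parts = re.split('"', line)
    if len(parts) <= 3:  # unquoted recclass: everything in parts[0]
        fields = parts[0].split(',')
        if len(fields) <= 6:
            return None
        key = fields[2] if by_type else fields[3]
        mass, date = fields[4], fields[6]
    else:  # quoted recclass, e.g. ..., "Iron, IVA", ...
        key = parts[0].split(',')[2] if by_type else parts[1]
        tail = parts[2].split(',')  # starts with '' because of the leading comma
        mass, date = tail[1], tail[3]
    year = date.split(' ')[0].split('/')[-1] if date else ''
    try:
        float(mass)
    except ValueError:
        return None
    return [key, mass, year] if key and mass and year.isdigit() else None

row1 = 'Aachen,1,Valid,L5,21,Fell,01/01/1880 12:00:00 AM,50.775000,6.083330,"(50.775000, 6.083330)"\n'
row2 = 'Akyumak,433,Valid,"Iron, IVA",50000,Fell,01/01/1981 12:00:00 AM,39.916670,42.816670,"(39.916670, 42.816670)"\n'
print(parse_row(row1))                 # ['Valid', '21', '1880']
print(parse_row(row2, by_type=False))  # ['Iron, IVA', '50000', '1981']
```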
# ********************************************************************************
# ********************************************************************************
# ************************* Mapper function code ************************
# ********************************************************************************
# 0 1 2 3 4 5 6
# name,id,nametype,recclass,mass (g),fall, year, reclat, reclong, GeoLocation
# Aachen,1, Valid, L5, 21, Fell,01/01/1880 12:00:00 AM,50.775000,6.083330,"(50.775000, 6.083330)"
# Skip the CSV header line
sys.stdin.next()  # Python 2 file-iterator idiom
num = 0
numTotal = 0
for line in sys.stdin:
    words = re.split('"', line)
    tupla = formatearFila(words)
    if tupla != None:
        numTotal += 1
        if paramTipoMeteo != None and tupla[0] == paramTipoMeteo:  # filtering by type
            print(tupla[2] + "\t" + tupla[1])
            num += 1
        elif paramClaseMeteo != None and tupla[0] == paramClaseMeteo:  # filtering by class
            print(tupla[2] + "\t" + tupla[1])
            num += 1
# Emits:  year <TAB> massValue
# e.g.    1880       21
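The mapper emits `year<TAB>mass` lines; to produce the "mean mass per year" the usage message promises, a reducer would group and average them. The reducer itself is not shown in this chunk, so the following is only a minimal standalone sketch of that role:

```python
from collections import defaultdict

def mean_mass_per_year(lines):
    """Average the mass values grouped by year, as a streaming reducer would."""
    totals = defaultdict(lambda: [0.0, 0])  # year -> [mass sum, count]
    for line in lines:
        year, mass = line.rstrip('\n').split('\t')
        totals[year][0] += float(mass)
        totals[year][1] += 1
    return {year: s / n for year, (s, n) in totals.items()}

print(mean_mass_per_year(["1880\t21", "1880\t19", "1964\t2250"]))
# {'1880': 20.0, '1964': 2250.0}
```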
| 105.880282 | 5,183 | 0.482873 | 2,490 | 15,035 | 2.915663 | 0.125301 | 0.036364 | 0.004408 | 0.006612 | 0.795317 | 0.782782 | 0.768457 | 0.766253 | 0.760193 | 0.750275 | 0 | 0.114561 | 0.123911 | 15,035 | 141 | 5,184 | 106.631206 | 0.436608 | 0.515464 | 0 | 0.311688 | 0 | 0 | 0.513125 | 0.023763 | 0 | 0 | 0 | 0 | 0 | 1 | 0.025974 | false | 0 | 0.025974 | 0 | 0.103896 | 0.025974 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
5883264afa8ab71fec1df4720d09f54ef87405b3 | 188 | py | Python | sciann/engine~/__init__.py | kkoocheki/sciann | 964e63a03f560a15713ff862b7c7a665086c89ca | [
"MIT"
] | null | null | null | sciann/engine~/__init__.py | kkoocheki/sciann | 964e63a03f560a15713ff862b7c7a665086c89ca | [
"MIT"
] | null | null | null | sciann/engine~/__init__.py | kkoocheki/sciann | 964e63a03f560a15713ff862b7c7a665086c89ca | [
"MIT"
] | null | null | null |
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
# from . import constraint
# from . import functional
# from . import models
| 20.888889 | 38 | 0.81383 | 23 | 188 | 6.043478 | 0.434783 | 0.215827 | 0.345324 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.154255 | 188 | 8 | 39 | 23.5 | 0.874214 | 0.37234 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0.333333 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
588b18747059ec774fc56896e75804e614711fe3 | 49 | py | Python | app/dota2_heatmap/__init__.py | codeway3/OSP | 16f9bac61f537154746d7a928ae1dfb99bffc47f | [
"MIT"
] | null | null | null | app/dota2_heatmap/__init__.py | codeway3/OSP | 16f9bac61f537154746d7a928ae1dfb99bffc47f | [
"MIT"
] | 5 | 2018-03-28T05:11:48.000Z | 2018-07-01T12:13:05.000Z | app/dota2_heatmap/__init__.py | codeway3/OSP | 16f9bac61f537154746d7a928ae1dfb99bffc47f | [
"MIT"
] | null | null | null | from .render_heatmap import page, render_heatmap
| 24.5 | 48 | 0.857143 | 7 | 49 | 5.714286 | 0.714286 | 0.65 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102041 | 49 | 1 | 49 | 49 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
54733978f53be925376bcae3f8c6da0b2a037090 | 27,758 | py | Python | libbrook/apis/projects_api.py | doalitic/libbrook | 9f34bfbfa1f7513ca6681a0a5566f2434edb77eb | [
"Apache-2.0"
] | null | null | null | libbrook/apis/projects_api.py | doalitic/libbrook | 9f34bfbfa1f7513ca6681a0a5566f2434edb77eb | [
"Apache-2.0"
] | null | null | null | libbrook/apis/projects_api.py | doalitic/libbrook | 9f34bfbfa1f7513ca6681a0a5566f2434edb77eb | [
"Apache-2.0"
] | null | null | null | # coding: utf-8
"""
ProjectsApi.py
Copyright 2016 SmartBear Software
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from __future__ import absolute_import
import sys
import os
# python 2 and python 3 compatibility library
from six import iteritems
from ..configuration import Configuration
from ..api_client import ApiClient
class ProjectsApi(object):
"""
NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
Ref: https://github.com/swagger-api/swagger-codegen
"""
def __init__(self, api_client=None):
config = Configuration()
if api_client:
self.api_client = api_client
else:
if not config.api_client:
config.api_client = ApiClient()
self.api_client = config.api_client
def add_user_to_project(self, project_id, user, **kwargs):
"""
Add user to project
Adds a user to a project.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.add_user_to_project(project_id, user, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str project_id: Project identifier (required)
:param AddUserToProjectRequest user: User data (required)
:return: User
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['project_id', 'user']
all_params.append('callback')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method add_user_to_project" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'project_id' is set
if ('project_id' not in params) or (params['project_id'] is None):
raise ValueError("Missing the required parameter `project_id` when calling `add_user_to_project`")
# verify the required parameter 'user' is set
if ('user' not in params) or (params['user'] is None):
raise ValueError("Missing the required parameter `user` when calling `add_user_to_project`")
resource_path = '/v1/projects/{projectId}/users'.replace('{format}', 'json')
path_params = {}
if 'project_id' in params:
path_params['projectId'] = params['project_id']
query_params = {}
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'user' in params:
body_params = params['user']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = []
response = self.api_client.call_api(resource_path, 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='User',
auth_settings=auth_settings,
callback=params.get('callback'))
return response
def create_project(self, project, **kwargs):
"""
Create project
Create a new project associated with the organization and the user given in the do_org and sub claim of the JWT.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.create_project(project, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param StoreProjectRequest project: New project data (required)
:return: Project
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['project']
all_params.append('callback')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method create_project" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'project' is set
if ('project' not in params) or (params['project'] is None):
raise ValueError("Missing the required parameter `project` when calling `create_project`")
resource_path = '/v1/projects'.replace('{format}', 'json')
path_params = {}
query_params = {}
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'project' in params:
body_params = params['project']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = []
response = self.api_client.call_api(resource_path, 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Project',
auth_settings=auth_settings,
callback=params.get('callback'))
return response
def destroy_project(self, project_id, **kwargs):
"""
Delete project
Deletes a user project.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.destroy_project(project_id, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str project_id: Project identifier (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['project_id']
all_params.append('callback')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method destroy_project" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'project_id' is set
if ('project_id' not in params) or (params['project_id'] is None):
raise ValueError("Missing the required parameter `project_id` when calling `destroy_project`")
resource_path = '/v1/projects/{projectId}'.replace('{format}', 'json')
path_params = {}
if 'project_id' in params:
path_params['projectId'] = params['project_id']
query_params = {}
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = []
response = self.api_client.call_api(resource_path, 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None,
auth_settings=auth_settings,
callback=params.get('callback'))
return response
def index_project_users(self, **kwargs):
"""
List project users
Retrieve the list of users in a project.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.index_project_users(callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:return: list[User]
If the method is called asynchronously,
returns the request thread.
"""
all_params = []
all_params.append('callback')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method index_project_users" % key
)
params[key] = val
del params['kwargs']
resource_path = '/v1/projects/{projectId}/users'.replace('{format}', 'json')
path_params = {}
query_params = {}
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = []
response = self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[User]',
auth_settings=auth_settings,
callback=params.get('callback'))
return response
def index_projects(self, **kwargs):
"""
List projects
Retrieve the list of available projects to the current user.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.index_projects(callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:return: list[Project]
If the method is called asynchronously,
returns the request thread.
"""
all_params = []
all_params.append('callback')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method index_projects" % key
)
params[key] = val
del params['kwargs']
resource_path = '/v1/projects'.replace('{format}', 'json')
path_params = {}
query_params = {}
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = []
response = self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[Project]',
auth_settings=auth_settings,
callback=params.get('callback'))
return response
def remove_user_from_project(self, project_id, user_id, **kwargs):
"""
Remove user from project
Disassociates a user from a project.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.remove_user_from_project(project_id, user_id, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str project_id: Project identifier (required)
:param str user_id: User identifier (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['project_id', 'user_id']
all_params.append('callback')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method remove_user_from_project" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'project_id' is set
if ('project_id' not in params) or (params['project_id'] is None):
raise ValueError("Missing the required parameter `project_id` when calling `remove_user_from_project`")
# verify the required parameter 'user_id' is set
if ('user_id' not in params) or (params['user_id'] is None):
raise ValueError("Missing the required parameter `user_id` when calling `remove_user_from_project`")
resource_path = '/v1/projects/{projectId}/users/{userId}'.replace('{format}', 'json')
path_params = {}
if 'project_id' in params:
path_params['projectId'] = params['project_id']
if 'user_id' in params:
path_params['userId'] = params['user_id']
query_params = {}
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = []
response = self.api_client.call_api(resource_path, 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None,
auth_settings=auth_settings,
callback=params.get('callback'))
return response
def show_project(self, project_id, **kwargs):
"""
Show project details
Retrieve the details of a project.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.show_project(project_id, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str project_id: Project identifier (required)
:return: Project
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['project_id']
all_params.append('callback')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method show_project" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'project_id' is set
if ('project_id' not in params) or (params['project_id'] is None):
raise ValueError("Missing the required parameter `project_id` when calling `show_project`")
resource_path = '/v1/projects/{projectId}'.replace('{format}', 'json')
path_params = {}
if 'project_id' in params:
path_params['projectId'] = params['project_id']
query_params = {}
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = []
response = self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Project',
auth_settings=auth_settings,
callback=params.get('callback'))
return response
def stats_projects(self, **kwargs):
"""
Get project statistics
Retrieve the statistics of available projects to the current user.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.stats_projects(callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:return: StatsProject
If the method is called asynchronously,
returns the request thread.
"""
all_params = []
all_params.append('callback')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method stats_projects" % key
)
params[key] = val
del params['kwargs']
resource_path = '/v1/stats/projects'.replace('{format}', 'json')
path_params = {}
query_params = {}
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = []
response = self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='StatsProject',
auth_settings=auth_settings,
callback=params.get('callback'))
return response
def update_project(self, project_id, project, **kwargs):
"""
Update project
Updates a project's properties.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.update_project(project_id, project, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str project_id: Project identifier (required)
:param UpdateProjectRequest project: New project data (required)
:return: Project
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['project_id', 'project']
all_params.append('callback')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method update_project" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'project_id' is set
if ('project_id' not in params) or (params['project_id'] is None):
raise ValueError("Missing the required parameter `project_id` when calling `update_project`")
# verify the required parameter 'project' is set
if ('project' not in params) or (params['project'] is None):
raise ValueError("Missing the required parameter `project` when calling `update_project`")
resource_path = '/v1/projects/{projectId}'.replace('{format}', 'json')
path_params = {}
if 'project_id' in params:
path_params['projectId'] = params['project_id']
query_params = {}
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'project' in params:
body_params = params['project']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = []
response = self.api_client.call_api(resource_path, 'PATCH',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Project',
auth_settings=auth_settings,
callback=params.get('callback'))
return response
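The parameter handling in the generated method above can be exercised on its own. The sketch below mirrors the same pattern (collect parameters, reject unexpected keyword arguments, check required ones) as a plain hypothetical function, without the real `api_client` or the HTTP call:

```python
def update_project(project_id, project, **kwargs):
    """Validate arguments the way the generated client method does (sketch)."""
    all_params = ['project_id', 'project', 'callback']
    params = {'project_id': project_id, 'project': project}

    # Reject any keyword argument the method does not declare.
    for key, val in kwargs.items():
        if key not in all_params:
            raise TypeError(
                "Got an unexpected keyword argument '%s'"
                " to method update_project" % key
            )
        params[key] = val

    # Required parameters must be present and non-None.
    for required in ('project_id', 'project'):
        if params.get(required) is None:
            raise ValueError(
                "Missing the required parameter `%s` when calling"
                " `update_project`" % required
            )
    return params
```

A valid call returns the collected parameters; an unknown keyword raises `TypeError`, and a missing required parameter raises `ValueError`, matching the behavior of the generated code.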
| 37.510811 | 120 | 0.537142 | 2,696 | 27,758 | 5.345697 | 0.081231 | 0.044963 | 0.027061 | 0.028726 | 0.870455 | 0.856023 | 0.846031 | 0.839231 | 0.839231 | 0.831876 | 0 | 0.001165 | 0.381404 | 27,758 | 739 | 121 | 37.56157 | 0.838157 | 0.272138 | 0 | 0.822917 | 0 | 0 | 0.154326 | 0.015274 | 0 | 0 | 0 | 0 | 0 | 1 | 0.026042 | false | 0 | 0.015625 | 0 | 0.067708 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
54c5288af1b4b2e110fba85c19cd30239810179e | 191 | py | Python | APIfeed/APIfeed/perfiles_api/admin.py | BrianMarquez3/Python-Django | 61f84a01b7f57254f9dcbbad86cc4c88c2acf4d7 | [
"MIT"
] | 2 | 2020-09-28T21:23:59.000Z | 2021-11-10T15:01:15.000Z | APIfeed/APIfeed/perfiles_api/admin.py | BrianMarquez3/Python-Django | 61f84a01b7f57254f9dcbbad86cc4c88c2acf4d7 | [
"MIT"
] | 21 | 2021-02-04T01:37:44.000Z | 2022-03-12T01:00:55.000Z | APIfeed/APIfeed/perfiles_api/admin.py | BrianMarquez3/Python-Django | 61f84a01b7f57254f9dcbbad86cc4c88c2acf4d7 | [
"MIT"
] | null | null | null | from django.contrib import admin
from django.db.models.base import Model
from perfiles_api import models
admin.site.register(models.UserProfile)
admin.site.register(models.ProfileFeedItem)
| 23.875 | 43 | 0.842932 | 27 | 191 | 5.925926 | 0.555556 | 0.125 | 0.2125 | 0.2875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.08377 | 191 | 7 | 44 | 27.285714 | 0.914286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.6 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
49e6186304e1fa7af5339f065af6ebe33766ca91 | 7,836 | py | Python | mayan/apps/authentication/tests/test_views.py | eshbeata/open-paperless | 6b9ed1f21908116ad2795b3785b2dbd66713d66e | [
"Apache-2.0"
] | 2,743 | 2017-12-18T07:12:30.000Z | 2022-03-27T17:21:25.000Z | mayan/apps/authentication/tests/test_views.py | eshbeata/open-paperless | 6b9ed1f21908116ad2795b3785b2dbd66713d66e | [
"Apache-2.0"
] | 15 | 2017-12-18T14:58:07.000Z | 2021-03-01T20:05:05.000Z | mayan/apps/authentication/tests/test_views.py | eshbeata/open-paperless | 6b9ed1f21908116ad2795b3785b2dbd66713d66e | [
"Apache-2.0"
] | 257 | 2017-12-18T03:12:58.000Z | 2022-03-25T08:59:10.000Z | from __future__ import absolute_import, unicode_literals
from django.conf import settings
from django.core import mail
from django.test import override_settings
from django.urls import reverse
from common.tests import BaseTestCase
from smart_settings.classes import Namespace
from user_management.tests.literals import (
    TEST_ADMIN_EMAIL, TEST_ADMIN_PASSWORD, TEST_USER_PASSWORD_EDITED,
    TEST_ADMIN_USERNAME
)
from ..settings import setting_maximum_session_length
from .literals import TEST_EMAIL_AUTHENTICATION_BACKEND
class UserLoginTestCase(BaseTestCase):
"""
Test that users can login via the supported authentication methods
"""
def setUp(self):
super(UserLoginTestCase, self).setUp()
Namespace.invalidate_cache_all()
@override_settings(AUTHENTICATION_LOGIN_METHOD='username')
def test_normal_behavior(self):
response = self.client.get(reverse('documents:document_list'))
self.assertRedirects(
response,
'http://testserver/authentication/login/?next=/documents/list/'
)
@override_settings(AUTHENTICATION_LOGIN_METHOD='username')
def test_username_login(self):
logged_in = self.client.login(
username=TEST_ADMIN_USERNAME, password=TEST_ADMIN_PASSWORD
)
self.assertTrue(logged_in)
response = self.client.get(reverse('documents:document_list'))
# We didn't get redirected to the login URL
self.assertEqual(response.status_code, 200)
@override_settings(AUTHENTICATION_LOGIN_METHOD='email')
def test_email_login(self):
with self.settings(AUTHENTICATION_BACKENDS=(TEST_EMAIL_AUTHENTICATION_BACKEND,)):
logged_in = self.client.login(
username=TEST_ADMIN_USERNAME, password=TEST_ADMIN_PASSWORD
)
self.assertFalse(logged_in)
logged_in = self.client.login(
email=TEST_ADMIN_EMAIL, password=TEST_ADMIN_PASSWORD
)
self.assertTrue(logged_in)
response = self.client.get(reverse('documents:document_list'))
# We didn't get redirected to the login URL
self.assertEqual(response.status_code, 200)
@override_settings(AUTHENTICATION_LOGIN_METHOD='username')
def test_username_login_via_views(self):
response = self.client.get(reverse('documents:document_list'))
self.assertRedirects(
response,
'http://testserver/authentication/login/?next=/documents/list/'
)
response = self.client.post(
reverse(settings.LOGIN_URL), {
'username': TEST_ADMIN_USERNAME,
'password': TEST_ADMIN_PASSWORD
}
)
response = self.client.get(reverse('documents:document_list'))
# We didn't get redirected to the login URL
self.assertEqual(response.status_code, 200)
@override_settings(AUTHENTICATION_LOGIN_METHOD='email')
def test_email_login_via_views(self):
with self.settings(AUTHENTICATION_BACKENDS=(TEST_EMAIL_AUTHENTICATION_BACKEND,)):
response = self.client.get(reverse('documents:document_list'))
self.assertRedirects(
response,
'http://testserver/authentication/login/?next=/documents/list/'
)
response = self.client.post(
reverse(settings.LOGIN_URL), {
'email': TEST_ADMIN_EMAIL, 'password': TEST_ADMIN_PASSWORD
}, follow=True
)
self.assertEqual(response.status_code, 200)
response = self.client.get(reverse('documents:document_list'))
# We didn't get redirected to the login URL
self.assertEqual(response.status_code, 200)
@override_settings(AUTHENTICATION_LOGIN_METHOD='username')
def test_username_remember_me(self):
response = self.client.post(
reverse(settings.LOGIN_URL), {
'username': TEST_ADMIN_USERNAME,
'password': TEST_ADMIN_PASSWORD,
'remember_me': True
}, follow=True
)
response = self.client.get(reverse('documents:document_list'))
self.assertEqual(response.status_code, 200)
self.assertEqual(
self.client.session.get_expiry_age(),
setting_maximum_session_length.value
)
self.assertFalse(self.client.session.get_expire_at_browser_close())
@override_settings(AUTHENTICATION_LOGIN_METHOD='username')
def test_username_dont_remember_me(self):
response = self.client.post(
reverse(settings.LOGIN_URL), {
'username': TEST_ADMIN_USERNAME,
'password': TEST_ADMIN_PASSWORD,
'remember_me': False
}, follow=True
)
response = self.client.get(reverse('documents:document_list'))
self.assertEqual(response.status_code, 200)
self.assertTrue(self.client.session.get_expire_at_browser_close())
@override_settings(AUTHENTICATION_LOGIN_METHOD='email')
def test_email_remember_me(self):
with self.settings(AUTHENTICATION_BACKENDS=(TEST_EMAIL_AUTHENTICATION_BACKEND,)):
response = self.client.post(
reverse(settings.LOGIN_URL), {
'email': TEST_ADMIN_EMAIL,
'password': TEST_ADMIN_PASSWORD,
'remember_me': True
}, follow=True
)
response = self.client.get(reverse('documents:document_list'))
self.assertEqual(response.status_code, 200)
self.assertEqual(
self.client.session.get_expiry_age(),
setting_maximum_session_length.value
)
self.assertFalse(self.client.session.get_expire_at_browser_close())
@override_settings(AUTHENTICATION_LOGIN_METHOD='email')
def test_email_dont_remember_me(self):
with self.settings(AUTHENTICATION_BACKENDS=(TEST_EMAIL_AUTHENTICATION_BACKEND,)):
response = self.client.post(
reverse(settings.LOGIN_URL), {
'email': TEST_ADMIN_EMAIL,
'password': TEST_ADMIN_PASSWORD,
'remember_me': False
}, follow=True
)
response = self.client.get(reverse('documents:document_list'))
self.assertEqual(response.status_code, 200)
self.assertTrue(self.client.session.get_expire_at_browser_close())
@override_settings(AUTHENTICATION_LOGIN_METHOD='username')
def test_password_reset(self):
response = self.client.post(
reverse('authentication:password_reset_view'), {
'email': TEST_ADMIN_EMAIL,
}, follow=True
)
self.assertContains(
response, text='Password reset email sent!', status_code=200
)
self.assertEqual(len(mail.outbox), 1)
uid_token = mail.outbox[0].body.replace('\n', '').split('/')
response = self.client.post(
reverse('authentication:password_reset_confirm_view', args=uid_token[-3:-1]), {
'new_password1': TEST_USER_PASSWORD_EDITED,
'new_password2': TEST_USER_PASSWORD_EDITED,
}, follow=True
)
self.assertContains(
response, text='Password reset complete!', status_code=200
)
response = self.client.post(
reverse(settings.LOGIN_URL), {
'username': TEST_ADMIN_USERNAME,
'password': TEST_USER_PASSWORD_EDITED,
'remember_me': True
}, follow=True
)
response = self.client.get(reverse('documents:document_list'))
self.assertEqual(response.status_code, 200)
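The remember-me assertions above rely on Django's session-expiry contract: `set_expiry(0)` marks the session to expire when the browser closes, a positive value keeps it alive for that many seconds, and `None`/`0` make `get_expiry_age()` fall back to the default cookie age. The stand-in below (a hypothetical `FakeSession`, not Django's real backend) sketches that contract:

```python
class FakeSession:
    """Hypothetical stand-in mimicking Django's session expiry API."""

    def __init__(self, default_age=1209600):  # Django's default: two weeks
        self.default_age = default_age
        self._expiry = None

    def set_expiry(self, value):
        # None -> use the global default; 0 -> expire at browser close;
        # positive int -> expire after that many seconds.
        self._expiry = value

    def get_expiry_age(self):
        # Django falls back to the default age when expiry is None or 0.
        if not self._expiry:
            return self.default_age
        return self._expiry

    def get_expire_at_browser_close(self):
        return self._expiry == 0


# remember_me=True: the login view sets a long explicit expiry.
session = FakeSession()
session.set_expiry(40 * 86400)
assert not session.get_expire_at_browser_close()

# remember_me=False: expire the session when the browser closes.
session.set_expiry(0)
assert session.get_expire_at_browser_close()
```

This is why `test_username_remember_me` asserts both `get_expiry_age()` equals the configured maximum and `get_expire_at_browser_close()` is false, while the `dont_remember_me` variants only need the latter to be true.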
| 37.673077 | 91 | 0.642037 | 811 | 7,836 | 5.922318 | 0.144266 | 0.062461 | 0.078701 | 0.052467 | 0.827191 | 0.806579 | 0.795544 | 0.795544 | 0.734541 | 0.732875 | 0 | 0.007293 | 0.265059 | 7,836 | 207 | 92 | 37.855072 | 0.826706 | 0.02999 | 0 | 0.634146 | 0 | 0 | 0.111448 | 0.046426 | 0 | 0 | 0 | 0 | 0.152439 | 1 | 0.067073 | false | 0.109756 | 0.060976 | 0 | 0.134146 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
b71faf0c729d133cc71fd49a370c71cf4ad6b6e7 | 96,653 | py | Python | PythonLibraries/rlnav/rlnav/scene_graphs.py | batu/NavAssist | 3e518125bbb5ba18f4478f6dc8297d196634af68 | [
"MIT"
] | null | null | null | PythonLibraries/rlnav/rlnav/scene_graphs.py | batu/NavAssist | 3e518125bbb5ba18f4478f6dc8297d196634af68 | [
"MIT"
] | null | null | null | PythonLibraries/rlnav/rlnav/scene_graphs.py | batu/NavAssist | 3e518125bbb5ba18f4478f6dc8297d196634af68 | [
"MIT"
] | null | null | null | URBAN_SCENE_GRAPH_JSONSTR = """{"SourceNodes":[0,0,2,2,2,2,2,2,8,9,9,9,9,9,9,9,9,8,18,18,18,18,18,18,18,18,18,18,2,29,29,29,29,2,34,35,35,35,35,35,35,35,35,34,44,44,44,44,44,44,44,44,44,44,34,55,55,55,55,55,55,55,55,55,55,34,66,66,66,66,66,66,66,66,66,66,66,66,2,79,79,81,81,81,81,81,81,81,81,81,81,81,92,92,92,81,96,96,96,96,96,96,81,103,103,103,81,79,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,108,79,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,220,2,332,333,333,333,333,333,333,333,333,332,342,342,342,342,342,342,342,342,342,342,342,342,342,2,2,2,2,2,360,360,362,2,2],"DestinationNodes":[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167,168,169,170,171,172,173,174,17
5,176,177,178,179,180,181,182,183,184,185,186,187,188,189,190,191,192,193,194,195,196,197,198,199,200,201,202,203,204,205,206,207,208,209,210,211,212,213,214,215,216,217,218,219,220,221,222,223,224,225,226,227,228,229,230,231,232,233,234,235,236,237,238,239,240,241,242,243,244,245,246,247,248,249,250,251,252,253,254,255,256,257,258,259,260,261,262,263,264,265,266,267,268,269,270,271,272,273,274,275,276,277,278,279,280,281,282,283,284,285,286,287,288,289,290,291,292,293,294,295,296,297,298,299,300,301,302,303,304,305,306,307,308,309,310,311,312,313,314,315,316,317,318,319,320,321,322,323,324,325,326,327,328,329,330,331,332,333,334,335,336,337,338,339,340,341,342,343,344,345,346,347,348,349,350,351,352,353,354,355,356,357,358,359,360,361,362,363,364,365],"NumNodes":366,"Features":[[0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,1.0,1.0,1.0,1.0,0.0,0.0,0.0,0.0,0.0,0.0,1.0,1.0,1.0,1.0],[1.0,1.0,0.0,0.0,0.0,0.0,0.0,0.0,1.0,1.0,1.0,1.0,0.0,0.0,0.0,0.0,0.0,0.0,1.0,1.0,1.0,1.0],[2.0,2.0,0.0,0.0,0.0,0.0,0.0,0.0,1.0,0.7,0.7,0.7,0.0,0.0,0.0,0.0,0.0,0.0,1.0,0.7,0.7,0.7],[3.0,3.0,0.0,-35.0,0.0,0.0,0.0,0.0,1.0,91.0,4.1258,91.0,0.0,-50.0,0.0,0.0,0.0,0.0,1.0,130.0,5.894,130.0],[4.0,4.0,10.6145983,-0.7,37.8,0.0,0.0,0.0,1.0,37.45221,1.21105969,5.7057,15.1637115,-1.0,54.0,0.0,0.0,0.0,1.0,53.50316,1.73008537,8.151],[5.0,5.0,2.38959742,-0.7,0.0,0.0,0.0,0.0,1.0,21.0,1.21105969,5.7057,3.41371059,-1.0,0.0,0.0,0.0,0.0,1.0,30.0,1.73008537,8.151],[6.0,6.0,27.9045963,-8.749999,-16.304821,0.0,0.0,0.0,1.0,35.0,17.5,58.9094,39.86371,-12.499999,-23.2926,0.0,0.0,0.0,1.0,50.0,25.0,84.15629],[7.0,7.0,-26.7829037,-8.75,0.0,0.0,0.0,0.0,1.0,37.2757034,17.5,91.0,-38.26129,-12.5,0.0,0.0,0.0,0.0,1.0,53.2510033,25.0,130.0],[8.0,8.0,0.9545973,0.0,-2.1,0.0,0.0,0.0,1.0,0.7,0.7,0.7,1.3637104,0.0,-3.0,0.0,0.0,0.0,1.0,1.0,1.0,1.0],[9.0,3.0,0.9545973,0.0,-2.1,0.0,1.0,0.0,0.0,0.7,0.7,0.7,0.0,0.0,0.0,0.0,1.0,0.0,0.0,1.0,1.0,1.0],[10.0,9.0,28.9545956,4.375,-44.1,0.0,1.0,0.0,0.0,0.7,8.75,2.8,-40.0,6.25,60.0,0.0,0.0,0.0,1.0,1.0,12.5,
4.0],[11.0,9.0,28.9545956,4.375,-30.1,0.0,1.0,0.0,0.0,0.7,8.75,2.8,-40.0,6.25,40.0,0.0,0.0,0.0,1.0,1.0,12.5,4.0],[12.0,9.0,28.9545956,6.125,-37.1,0.0,0.707107842,0.707105756,0.0,0.7,11.2,3.85,-40.0,8.75,50.0,-0.707105756,0.0,0.0,0.707107842,1.0,16.0,5.5],[13.0,9.0,37.0045967,4.375,-45.1499977,0.0,-0.7071037,0.0,0.7071099,0.7000001,8.75,16.800005,-51.5,6.25,61.5,0.0,-0.7071099,0.0,-0.7071037,1.0,12.5,24.0],[14.0,9.0,45.0545959,4.375,-37.1,0.0,-1.0,0.0,5.96046448E-06,0.7,8.75,16.8,-63.0,6.25,50.0,0.0,-5.96046448E-06,0.0,-1.0,1.0,12.5,24.0],[15.0,9.0,37.0045967,4.375,-29.05,0.0,-0.7071037,0.0,0.7071099,0.7000001,8.75,16.800005,-51.5,6.25,38.5,0.0,-0.7071099,0.0,-0.7071037,1.0,12.5,24.0],[16.0,9.0,41.9045944,3.5,-37.1,0.707105756,-0.7071079,4.214679E-06,4.21469031E-06,0.7000001,7.00000143,16.8000031,-58.5,5.0,50.0,-4.214679E-06,-4.21469031E-06,0.707105756,-0.7071079,1.0,10.0,24.0],[17.0,9.0,32.1045952,8.4,-37.1,0.707105756,-0.7071079,4.214679E-06,4.21469031E-06,0.7000001,7.00000143,16.8000031,-44.5,12.0,50.0,-4.214679E-06,-4.21469031E-06,0.707105756,-0.7071079,1.0,10.0,24.0],[18.0,10.0,73.0552444,8.75,-72.09946,0.0,5.96046448E-06,0.0,-1.0,0.7,0.7,0.7,103.000931,12.5,-99.99922,0.0,5.96046448E-06,0.0,-1.0,1.0,1.0,1.0],[19.0,9.0,28.9546547,13.125,-30.0999031,0.0,5.96046448E-06,0.0,-1.0,0.7,8.75,2.8,-63.00013,6.25,60.00011,0.0,0.0,0.0,1.0,1.0,12.5,4.0],[20.0,9.0,28.9548244,13.125,-44.0999031,0.0,5.96046448E-06,0.0,-1.0,0.7,8.75,2.8,-63.00013,6.25,40.00011,0.0,0.0,0.0,1.0,1.0,12.5,4.0],[21.0,9.0,28.95474,16.1,-37.0999031,0.707105756,4.214691E-06,4.214679E-06,-0.707107842,0.7,11.2,2.80016327,-63.00013,10.5,50.00011,-0.707105756,0.0,0.0,0.707107842,1.0,16.0,4.000233],[22.0,9.0,45.0547829,13.125,-37.09979,0.0,0.0,0.0,1.0,0.7,8.75,16.8,-40.0000725,6.25,50.0,0.0,-5.96046448E-06,0.0,-1.0,1.0,12.5,24.0],[23.0,9.0,37.0049248,13.125,-45.1498833,0.0,0.7071057,0.0,0.707107961,0.7000001,8.75,16.800005,-51.5,6.25,38.5,0.0,-0.7071099,0.0,-0.7071037,1.0,12.5,24.0],[24.0,9.0,41.9045944,12.2
5,-37.0999451,8.429358E-06,-1.364242E-12,-0.707105756,0.7071079,0.7000001,7.00000143,16.8,-44.5003357,5.0,49.99983,-4.214679E-06,-4.21469031E-06,0.707105756,-0.7071079,1.0,10.0,24.0],[25.0,9.0,33.4298325,17.15,-37.0998268,8.429358E-06,-1.364242E-12,-0.707105756,0.7071079,0.7000001,8.950447,16.8,-56.60714,12.0,50.000145,-4.214679E-06,-4.21469031E-06,0.707105756,-0.7071079,1.0,12.7863493,24.0],[26.0,9.0,30.004734,13.125,-29.0499134,0.0,0.7071099,0.0,-0.70710367,0.7,8.75,2.8,-61.5,6.25,61.5000763,0.0,-0.7071057,0.0,0.7071079,1.0,12.5,4.0],[27.0,9.0,44.004734,13.125,-29.0497456,0.0,0.7071099,0.0,-0.70710367,0.7,8.75,2.8,-41.5,6.25,61.50008,0.0,-0.7071057,0.0,0.7071079,1.0,12.5,4.0],[28.0,9.0,37.004734,16.8874989,-29.0498314,0.499996871,0.500003159,0.500001431,-0.499998569,0.7,11.2,1.22507143,-51.5,11.625,61.5000763,-0.499999851,-0.5000002,-0.49999845,0.50000155,1.0,16.0,1.750102],[29.0,11.0,0.0,0.0,0.0,0.0,0.0,0.0,1.0,0.7,0.7,0.7,0.0,0.0,0.0,0.0,0.0,0.0,1.0,1.0,1.0,1.0],[30.0,12.0,-52.5,14.0,0.0,0.0,-0.707106,0.0,0.7071076,119.0,84.0,14.0,-75.0,20.0,0.0,0.0,-0.707106,0.0,0.7071076,170.0,120.0,20.0],[31.0,13.0,52.5,14.0,0.0,0.0,-0.707106,0.0,0.7071076,119.0,84.0,14.0,75.0,20.0,0.0,0.0,-0.707106,0.0,0.7071076,170.0,120.0,20.0],[32.0,14.0,0.0,14.0,52.5,0.0,-3.427267E-07,0.0,1.0,119.0,84.0,14.0,0.0,20.0,75.0,0.0,-3.427267E-07,0.0,1.0,170.0,120.0,20.0],[33.0,15.0,0.0,14.0,-52.5,0.0,-3.427267E-07,0.0,1.0,119.0,84.0,14.0,0.0,20.0,-75.0,0.0,-3.427267E-07,0.0,1.0,170.0,120.0,20.0],[34.0,16.0,-38.2499962,0.0,41.2500267,0.0,0.0,0.0,1.0,0.7,0.7,0.7,-54.6428528,0.0,58.9286079,0.0,0.0,0.0,1.0,1.0,1.0,1.0],[35.0,3.0,5.849971,0.0,-2.49984622,0.0,0.0,0.0,1.0,0.7,0.7,0.7,62.9999542,0.0,-62.4998169,0.0,0.0,0.0,1.0,1.0,1.0,1.0],[36.0,9.0,-22.1500282,4.375,39.5001526,0.0,0.0,0.0,1.0,0.7,8.75,2.8,-40.0,6.25,60.0,0.0,0.0,0.0,1.0,1.0,12.5,4.0],[37.0,9.0,-22.1500282,4.375,25.5001526,0.0,0.0,0.0,1.0,0.7,8.75,2.8,-40.0,6.25,40.0,0.0,0.0,0.0,1.0,1.0,12.5,4.0],[38.0,9.0,-22.1500282,6.125,32.5001526
,-0.707105756,0.0,0.0,0.707107842,0.7,11.2,3.85,-40.0,8.75,50.0,-0.707105756,0.0,0.0,0.707107842,1.0,16.0,5.5],[39.0,9.0,-30.20003,4.375,40.55015,0.0,-0.7071099,0.0,-0.7071037,0.7000001,8.75,16.800005,-51.5,6.25,61.5,0.0,-0.7071099,0.0,-0.7071037,1.0,12.5,24.0],[40.0,9.0,-38.2500267,4.375,32.5001526,0.0,-5.96046448E-06,0.0,-1.0,0.7,8.75,16.8,-63.0,6.25,50.0,0.0,-5.96046448E-06,0.0,-1.0,1.0,12.5,24.0],[41.0,9.0,-30.20003,4.375,24.4501534,0.0,-0.7071099,0.0,-0.7071037,0.7000001,8.75,16.800005,-51.5,6.25,38.5,0.0,-0.7071099,0.0,-0.7071037,1.0,12.5,24.0],[42.0,9.0,-35.10003,3.5,32.5001526,-4.214679E-06,-4.21469031E-06,0.707105756,-0.7071079,0.7000001,7.00000143,16.8,-58.5,5.0,50.0,-4.214679E-06,-4.21469031E-06,0.707105756,-0.7071079,1.0,10.0,24.0],[43.0,9.0,-25.3000278,8.4,32.5001526,-4.214679E-06,-4.21469031E-06,0.707105756,-0.7071079,0.7000001,7.00000143,16.8,-44.5,12.0,50.0,-4.214679E-06,-4.21469031E-06,0.707105756,-0.7071079,1.0,10.0,24.0],[44.0,10.0,-65.2,8.75,-3.54997444,0.0,0.707105756,0.0,0.707107842,0.7,0.7,0.7,-38.5,12.5,-64.0,0.0,0.707105756,0.0,0.707107842,1.0,1.0,1.0],[45.0,9.0,-23.200079,13.125,24.4501476,0.0,0.707105756,0.0,0.707107842,0.7,8.75,2.8,-40.0,6.25,60.0,0.0,0.0,0.0,1.0,1.0,12.5,4.0],[46.0,9.0,-37.20008,13.125,24.4501076,0.0,0.707105756,0.0,0.707107842,0.7,8.75,2.8,-40.0,6.25,40.0,0.0,0.0,0.0,1.0,1.0,12.5,4.0],[47.0,9.0,-30.200079,16.1,24.4501266,-0.5,0.5,0.49999854,0.5000015,0.7,11.2,2.80016327,-40.0,10.5,50.0,-0.707105756,0.0,0.0,0.707107842,1.0,16.0,4.000233],[48.0,9.0,-30.2001247,13.125,40.5501251,0.0,-0.70711,0.0,-0.7071036,0.7,8.75,16.8,-63.0,6.25,50.0,0.0,-5.96046448E-06,0.0,-1.0,1.0,12.5,24.0],[49.0,9.0,-38.2501,13.125,32.500103,0.0,-1.00000012,0.0,2.89082527E-06,0.7000004,8.75,16.80001,-51.5,6.25,38.5,0.0,-0.7071099,0.0,-0.7071037,1.0,12.5,24.0],[50.0,9.0,-30.2001171,12.25,37.4001274,0.499995559,-0.50000304,0.500003,-0.499998569,0.7000001,7.00000143,16.8000011,-58.5,5.0,50.0,-4.214679E-06,-4.21469031E-06,0.707105756,-0.7071079,1.0,10.0,
24.0],[51.0,9.0,-30.2000866,17.15,27.6001263,0.499995559,-0.50000304,0.500003,-0.499998569,0.7000001,7.00000143,16.8000011,-44.5,12.0,50.0,-4.214679E-06,-4.21469031E-06,0.707105756,-0.7071079,1.0,10.0,24.0],[52.0,9.0,-22.1500683,13.125,39.50015,0.0,8.940697E-08,0.0,1.0,0.7,8.75,2.8,-61.5,6.25,61.5000763,0.0,-0.7071057,0.0,0.7071079,1.0,12.5,4.0],[53.0,9.0,-22.1500263,13.125,25.5001488,0.0,8.940697E-08,0.0,1.0,0.7,8.75,2.8,-41.5,6.25,61.50008,0.0,-0.7071057,0.0,0.7071079,1.0,12.5,4.0],[54.0,9.0,-22.1500473,16.1,32.50015,-0.7071056,-5.96046448E-08,-5.96046448E-08,0.707108,0.7,11.2,2.80016327,-51.5,10.5,61.5000763,-0.499999851,-0.5000002,-0.49999845,0.50000155,1.0,16.0,4.000233],[55.0,10.0,-66.25028,17.5,67.49997,0.0,1.0,0.0,2.95042946E-06,0.7,0.7,0.7,-40.000412,25.0,37.4999237,0.0,1.0,0.0,2.95042946E-06,1.0,1.0,1.0],[56.0,9.0,-22.1499462,21.875,25.5001526,0.0,1.0,0.0,2.95042946E-06,0.7,8.75,2.8,-63.00013,6.25,60.00011,0.0,0.0,0.0,1.0,1.0,12.5,4.0],[57.0,9.0,-22.1500282,21.875,39.5001526,0.0,1.0,0.0,2.95042946E-06,0.7,8.75,2.8,-63.00013,6.25,40.00011,0.0,0.0,0.0,1.0,1.0,12.5,4.0],[58.0,9.0,-22.1499882,24.85,32.5001526,-2.08626557E-06,0.707107842,0.707105756,2.08627171E-06,0.7,11.2,2.80016327,-63.00013,10.5,50.00011,-0.707105756,0.0,0.0,0.707107842,1.0,16.0,4.000233],[59.0,9.0,-38.2500267,21.875,32.5001373,0.0,-1.0,0.0,3.010035E-06,0.7,8.75,16.8,-40.0000725,6.25,50.0,0.0,-5.96046448E-06,0.0,-1.0,1.0,12.5,24.0],[60.0,9.0,-30.2001247,21.875,40.5501862,0.0,-0.7071058,0.0,0.707107842,0.7000001,8.75,16.800005,-51.5,6.25,38.5,0.0,-0.7071099,0.0,-0.7071037,1.0,12.5,24.0],[61.0,9.0,-25.30008,21.0,32.5002136,0.707105756,-0.7071079,6.30094473E-06,2.12841837E-06,0.7000001,7.00000143,16.8000031,-58.5,5.0,50.0,-4.214679E-06,-4.21469031E-06,0.707105756,-0.7071079,1.0,10.0,24.0],[62.0,9.0,-35.10008,25.9,32.5001564,0.707105756,-0.7071079,6.30094473E-06,2.12841837E-06,0.7000001,7.00000143,16.8000031,-44.5,12.0,50.0,-4.214679E-06,-4.21469031E-06,0.707105756,-0.7071079,1.0,10.0,24.0],[63.
0,9.0,-23.2000313,21.875,24.4501724,0.0,0.7071058,0.0,0.7071078,0.7,8.75,2.8,-61.5,6.25,61.5000763,0.0,-0.7071057,0.0,0.7071079,1.0,12.5,4.0],[64.0,9.0,-37.20003,21.875,24.4500866,0.0,0.7071058,0.0,0.7071078,0.7,8.75,2.8,-41.5,6.25,61.50008,0.0,-0.7071057,0.0,0.7071079,1.0,12.5,4.0],[65.0,9.0,-30.2000313,24.85,24.4501324,-0.4999999,0.50000006,0.4999984,0.500001669,0.7,11.2,2.80016327,-51.5,10.5,61.5000763,-0.499999851,-0.5000002,-0.49999845,0.50000155,1.0,16.0,4.000233],[66.0,10.0,-65.2,26.25,-3.54997444,0.0,0.707105756,0.0,0.707107842,0.7,0.7,0.7,-38.5,37.5,-64.0,0.0,0.707105756,0.0,0.707107842,1.0,1.0,1.0],[67.0,9.0,-23.200079,30.625,24.4501476,0.0,0.707105756,0.0,0.707107842,0.7,8.75,2.8,-40.0,6.25,60.0,0.0,0.0,0.0,1.0,1.0,12.5,4.0],[68.0,9.0,-37.20008,30.625,24.4501076,0.0,0.707105756,0.0,0.707107842,0.7,8.75,2.8,-40.0,6.25,40.0,0.0,0.0,0.0,1.0,1.0,12.5,4.0],[69.0,9.0,-30.200079,34.475,24.4501266,-0.5,0.5,0.49999854,0.5000015,0.7,11.2,1.05006123,-40.0,11.75,50.0,-0.707105756,0.0,0.0,0.707107842,1.0,16.0,1.5000875],[70.0,9.0,-30.2001247,30.625,40.5501251,0.0,-0.70711,0.0,-0.7071036,0.7,8.75,16.8,-63.0,6.25,50.0,0.0,-5.96046448E-06,0.0,-1.0,1.0,12.5,24.0],[71.0,9.0,-30.2001171,29.75,37.4001274,0.499995559,-0.50000304,0.500003,-0.499998569,0.7000001,7.00000143,16.8000011,-58.5,5.0,50.0,-4.214679E-06,-4.21469031E-06,0.707105756,-0.7071079,1.0,10.0,24.0],[72.0,9.0,-26.0750923,34.6499977,32.5001564,0.499995559,-0.50000304,0.500003,-0.499998569,0.7000001,16.1000023,8.54952049,-51.5000229,12.0,55.892868,-4.214679E-06,-4.21469031E-06,0.707105756,-0.7071079,1.0,23.0,12.2136],[73.0,9.0,-22.1500683,30.625,39.50015,0.0,8.940697E-08,0.0,1.0,0.7,8.75,2.8,-61.5,6.25,61.5000763,0.0,-0.7071057,0.0,0.7071079,1.0,12.5,4.0],[74.0,9.0,-22.1500263,30.625,25.5001488,0.0,8.940697E-08,0.0,1.0,0.7,8.75,2.8,-41.5,6.25,61.50008,0.0,-0.7071057,0.0,0.7071079,1.0,12.5,4.0],[75.0,9.0,-22.1500473,34.6499977,32.50015,-0.7071056,-5.96046448E-08,-5.96046448E-08,0.707108,0.7,11.2,0.700075865,-51.5,1
2.0,61.5000763,-0.499999851,-0.5000002,-0.49999845,0.50000155,1.0,16.0,1.00010836],[76.0,9.0,-38.2499962,30.625,39.5001068,0.0,8.940697E-08,0.0,1.0,0.7,8.75,2.8,-61.5000038,6.25,38.50018,0.0,-0.7071057,0.0,0.7071079,1.0,12.5,4.0],[77.0,9.0,-38.2500267,30.625,25.500103,0.0,8.940697E-08,0.0,1.0,0.7,8.75,2.8,-41.4999962,6.25,38.5000763,0.0,-0.7071057,0.0,0.7071079,1.0,12.5,4.0],[78.0,9.0,-38.2499962,34.6499977,32.5001,-0.7071056,-5.96046448E-08,-5.96046448E-08,0.707108,0.7,11.2,0.700075865,-51.4999962,12.0,38.50015,-0.499999851,-0.5000002,-0.49999845,0.50000155,1.0,16.0,1.00010836],[79.0,17.0,5.329597,0.0,-62.3,0.0,0.0,0.0,1.0,0.7,0.7,0.7,7.6137104,0.0,-89.0,0.0,0.0,0.0,1.0,1.0,1.0,1.0],[80.0,18.0,-42.6204033,0.0,18.9,0.0,-1.0,0.0,2.95042946E-06,0.7,0.7,0.7,-68.5,0.0,116.0,0.0,-1.0,0.0,2.95042946E-06,1.0,1.0,1.0],[81.0,3.0,2.17959714,0.0,-72.1,0.0,0.0,0.0,1.0,0.7,0.7,0.7,-4.5,0.0,-14.0,0.0,0.0,0.0,1.0,1.0,1.0,1.0],[82.0,9.0,-17.7704029,8.749999,-44.1,0.0,0.0,0.0,1.0,0.7,17.4999828,2.8,-28.5,12.499999,40.0,0.0,0.0,0.0,1.0,1.0,24.9999752,4.0],[83.0,9.0,-17.7704029,6.65,-36.925,-0.707105756,0.0,0.0,0.707107842,0.7,12.2505589,3.85,-28.5,9.5,50.25,-0.707105756,0.0,0.0,0.707107842,1.0,17.5008,5.5],[84.0,9.0,-37.0204048,4.375,-29.05,0.0,-0.7071099,0.0,-0.7071037,0.7000001,8.75,16.800005,-56.0,6.25,61.5,0.0,-0.7071099,0.0,-0.7071037,1.0,12.5,24.0],[85.0,9.0,-45.0704041,4.375,-17.5,0.0,-5.96046448E-06,0.0,-1.0,0.7,8.75,56.0,-67.5,6.25,78.0,0.0,-5.96046448E-06,0.0,-1.0,1.0,12.5,80.0],[86.0,9.0,-31.4204044,8.749999,-45.1499977,0.0,-0.7071099,0.0,-0.7071037,0.7000001,17.4999828,28.0000057,-48.0,12.499999,38.5,0.0,-0.7071099,0.0,-0.7071037,1.0,24.9999752,40.0],[87.0,9.0,-40.1704025,3.5,-37.1,-4.214679E-06,-4.21469031E-06,0.707105756,-0.7071079,0.7000001,9.800002,16.8,-60.5,5.0,50.0,-4.214679E-06,-4.21469031E-06,0.707105756,-0.7071079,1.0,14.0,24.0],[88.0,9.0,-31.7704029,6.73749971,10.15,-0.49999997,-0.50000006,-0.4999985,0.50000155,0.7000001,27.3,4.025175,-48.5,9.625,117.5,-0.49999
997,-0.50000006,-0.4999985,0.50000155,1.0,39.0,5.75025034],[89.0,9.0,-17.7704029,4.375,-10.32501,0.0,0.0,0.0,1.0,0.7,8.75,41.65025,-28.5,6.25,88.2499847,0.0,0.0,0.0,1.0,1.0,12.5,59.5003548],[90.0,9.0,-37.0204048,4.375,10.15,0.0,-0.7071099,0.0,-0.7071037,0.7000001,8.75,16.800005,-56.0,6.25,117.5,0.0,-0.7071099,0.0,-0.7071037,1.0,12.5,24.0],[91.0,9.0,-31.5954037,8.4,-9.450006,-4.214679E-06,-4.21469031E-06,0.707105756,-0.7071079,0.7000001,27.6500053,39.8999977,-48.25,11.999999,89.49999,-4.214679E-06,-4.21469031E-06,0.707105756,-0.7071079,1.0,39.5,57.0],[92.0,19.0,2.17959714,0.0,-71.75,0.0,0.0,0.0,1.0,0.7,0.7,0.7,0.0,0.0,0.5,0.0,0.0,0.0,1.0,1.0,1.0,1.0],[93.0,9.0,-17.7704182,13.125,4.4625,0.0,8.940697E-08,0.0,1.0,0.7,8.75,12.075,-28.5000229,18.75,108.875,0.0,8.940697E-08,0.0,1.0,1.0,12.5,17.25],[94.0,9.0,-17.7704029,16.1,2.1,-0.7071056,-5.96046448E-08,-5.96046448E-08,0.707108,0.7,11.2,2.80016327,-28.5,23.0,105.5,-0.7071056,-5.96046448E-08,-5.96046448E-08,0.707108,1.0,16.0,4.000233],[95.0,9.0,-17.7704029,9.1875,-10.325,-0.7071056,-5.96046448E-08,-5.96046448E-08,0.707108,0.7,36.04944,0.8751838,-28.4999981,13.125,87.75,-0.7071056,-5.96046448E-08,-5.96046448E-08,0.707108,1.0,51.4992,1.25026262],[96.0,19.0,2.17959714,0.0,-71.75,0.0,0.0,0.0,1.0,0.7,0.7,0.7,0.0,0.0,0.5,0.0,0.0,0.0,1.0,1.0,1.0,1.0],[97.0,9.0,-17.7704182,13.125,9.099999,0.0,8.940697E-08,0.0,1.0,0.7,8.75,2.8,-28.5000229,18.75,115.5,0.0,8.940697E-08,0.0,1.0,1.0,12.5,4.0],[98.0,9.0,-17.7703781,13.125,-29.75,0.0,8.940697E-08,0.0,1.0,0.7,8.75,2.8,-28.4999657,18.75,60.0,0.0,8.940697E-08,0.0,1.0,1.0,12.5,4.0],[99.0,9.0,-17.7704029,16.1,-10.325,-0.7071056,-5.96046448E-08,-5.96046448E-08,0.707108,0.7,36.04944,2.80016327,-28.4999981,23.0,87.75,-0.7071056,-5.96046448E-08,-5.96046448E-08,0.707108,1.0,51.4992,4.000233],[100.0,9.0,-17.7703781,13.125,-17.15,0.0,8.940697E-08,0.0,1.0,0.7,8.75,6.30015755,-28.4999657,18.75,78.0,0.0,8.940697E-08,0.0,1.0,1.0,12.5,9.000225],[101.0,9.0,-45.0704041,13.125,-17.5,0.0,8.940697E-08,0.0,1.0
,0.7,8.75,55.6488762,-67.5,18.75,77.5,0.0,8.940697E-08,0.0,1.0,1.0,12.5,79.4984],[102.0,9.0,-30.3677444,9.275,10.15,-0.4999998,-0.500000238,-0.49999842,0.50000155,0.7,24.4942856,1.049949,-46.4962,13.25,117.0,-0.4999998,-0.500000238,-0.49999842,0.50000155,1.0,34.9918365,1.49992728],[103.0,19.0,36.8295937,0.0,30.1,0.0,-0.707105756,0.0,0.707107842,0.7,0.7,0.7,49.5,0.0,146.0,0.0,-0.707105756,0.0,0.707107842,1.0,1.0,1.0],[104.0,9.0,-44.020462,13.125,10.150219,0.0,-0.7071057,0.0,0.7071079,0.7,8.75,2.8,-28.5000229,18.75,115.5,0.0,8.940697E-08,0.0,1.0,1.0,12.5,4.0],[105.0,9.0,-30.2829628,16.1,10.150197,-0.499999881,-0.500000238,-0.49999845,0.50000155,0.700000048,24.8503532,2.80016351,-28.5,23.0,95.875,-0.7071056,-5.96046448E-08,-5.96046448E-08,0.707108,1.0,35.5005,4.000233],[106.0,9.0,-28.6204624,13.125,10.185194,0.0,-0.7071057,0.0,0.7071079,0.7,8.75,22.0493,-28.45,18.75,93.5,0.0,8.940697E-08,0.0,1.0,1.0,12.5,31.499],[107.0,9.0,-14.6204023,9.4,-9.575023,-4.214679E-06,-4.21469031E-06,0.707105756,-0.7071079,0.7000001,7.00000143,39.65367,-24.0,13.4285707,89.3213959,-4.214679E-06,-4.21469031E-06,0.707105756,-0.7071079,1.0,10.0,56.6481],[108.0,20.0,5.329597,0.0,-62.3,0.0,0.0,0.0,1.0,0.7,0.7,0.7,0.0,0.0,0.0,0.0,0.0,0.0,1.0,1.0,1.0,1.0],[109.0,9.0,-42.4796143,0.350000322,-12.2789774,-0.642699063,2.96055134E-07,0.7661188,-1.04772994E-07,0.700000167,0.700000346,0.700000167,-68.2988739,0.5000005,71.4586,-0.642699063,2.96055134E-07,0.7661188,-1.04772994E-07,1.0,1.0,1.0],[110.0,9.0,-44.1787872,1.02814519,-6.062579,-0.6604609,0.6403705,-0.0568745,0.3879206,0.7,0.700000048,0.700000167,-70.726265,1.46877885,80.33917,-0.6604609,0.6403705,-0.0568745,0.3879206,1.0,1.0,1.0],[111.0,9.0,-43.9167,0.349999875,-3.40537572,0.35011965,0.6143423,-0.614342153,0.350119352,0.700000167,0.700000048,0.7,-70.35185,0.499999821,84.13518,0.35011965,0.6143423,-0.614342153,0.350119352,1.0,1.0,1.0],[112.0,9.0,-44.1803551,0.350000083,-8.472802,-0.156180635,-0.6896431,-0.689643145,0.15618068,0.7000001,0.7000001,0.7
000001,-70.7285,0.5000001,76.8959961,-0.156180635,-0.6896431,-0.689643145,0.15618068,1.0,1.0,1.0],[113.0,9.0,-43.4541435,0.429832876,-5.629256,0.648170352,-0.353345871,0.458066583,0.495173663,0.700000048,0.700000167,0.7000001,-69.6910553,0.614047,80.9582062,0.648170352,-0.353345871,0.458066583,0.495173663,1.0,1.0,1.0],[114.0,9.0,-42.77045,0.4150438,-6.98568726,0.0248294324,0.966048956,-0.10186547,0.236128062,0.700000346,0.700000048,0.700000346,-68.7143555,0.5929197,79.02045,0.0248294324,0.966048956,-0.10186547,0.236128062,1.0,1.0,1.0],[115.0,9.0,-44.36704,0.3500023,-7.35342836,-0.7070874,0.003300767,0.00328877149,0.7071108,0.7,0.7,0.7,-70.99519,0.5000033,78.4951,-0.7070874,0.003300767,0.00328877149,0.7071108,1.0,1.0,1.0],[116.0,9.0,-41.9270668,0.3574566,-7.134967,0.122616917,0.125516236,-0.6882043,-0.7039785,0.7000001,0.700000048,0.700000048,-67.50952,0.5106523,78.80719,0.122616917,0.125516236,-0.6882043,-0.7039785,1.0,1.0,1.0],[117.0,9.0,-43.881115,0.3499344,-6.45493,-0.1374364,0.6935304,0.6939078,0.1364509,0.700000048,0.700000048,0.7,-70.30102,0.499906272,79.77867,-0.1374364,0.6935304,0.6939078,0.1364509,1.0,1.0,1.0],[118.0,9.0,-36.96356,0.35,-6.13710165,0.0458421,-4.28737366E-08,0.998948753,-1.50513046E-07,0.700000346,0.700000346,2.1,-60.4187965,0.5,80.23271,0.0458421,-4.28737366E-08,0.998948753,-1.50513046E-07,1.0,1.0,3.0],[119.0,9.0,-40.7084465,0.350001037,-4.05958557,-0.000445389334,0.347711384,0.000319879939,-0.9376015,0.700000048,0.7,2.10000014,-65.76863,0.5000015,83.20059,-0.000445389334,0.347711384,0.000319879939,-0.9376015,1.0,1.0,3.0],[120.0,9.0,-43.61579,1.23395753,-6.813763,0.159814462,0.478258133,-0.215270087,0.8362938,0.700000048,0.6999999,2.10000038,-69.92198,1.76279652,79.26605,0.159814462,0.478258133,-0.215270087,0.8362938,1.0,1.0,3.0],[121.0,9.0,-43.1038,0.7464877,-8.207322,-0.0073467507,0.279739916,0.708265364,0.648114,0.700000048,0.7,2.1,-69.19057,1.066411,77.27525,-0.0073467507,0.279739916,0.708265364,0.648114,1.0,1.0,3.0],[122.0,9.0,-40.20888
,0.350000173,-8.098396,-0.36615178,3.41196454E-07,0.9305552,-4.59905721E-07,0.7000003,0.700000346,2.10000014,-65.05497,0.500000238,77.43086,-0.36615178,3.41196454E-07,0.9305552,-4.59905721E-07,1.0,1.0,3.0],[123.0,9.0,-38.131916,0.350000441,-6.268266,-4.30662759E-08,-0.01587225,1.09975986E-07,-0.9998741,0.700000048,0.7,2.1,-62.08788,0.500000656,80.0453339,-4.30662759E-08,-0.01587225,1.09975986E-07,-0.9998741,1.0,1.0,3.0],[124.0,9.0,-33.0352058,0.350000024,-6.62743044,-2.207958E-08,0.0157713331,-2.90181235E-09,0.999875665,0.700000048,0.7,2.1,-54.80686,0.50000006,79.53224,-2.207958E-08,0.0157713331,-2.90181235E-09,0.999875665,1.0,1.0,3.0],[125.0,9.0,-31.12954,0.3500002,-6.153604,-1.9939927E-08,-0.0223979075,3.36780261E-07,0.9997492,0.700000048,0.7,2.10000014,-52.0844841,0.5000003,80.20914,-1.9939927E-08,-0.0223979075,3.36780261E-07,0.9997492,1.0,1.0,3.0],[126.0,9.0,-32.017,0.3500001,-6.54766369,0.07650759,3.514336E-07,0.997069061,7.39839265E-07,0.700000346,0.700000346,2.1,-53.35228,0.5000002,79.6461945,0.07650759,3.514336E-07,0.997069061,7.39839265E-07,1.0,1.0,3.0],[127.0,9.0,-34.205513,0.350000024,-7.39778757,-0.2501868,0.250186235,0.661365747,-0.6613689,0.7000001,0.700000048,2.1,-56.4787331,0.50000006,78.43173,-0.2501868,0.250186235,0.661365747,-0.6613689,1.0,1.0,3.0],[128.0,9.0,-39.8703842,0.450290382,-6.06184244,0.0830977857,0.0153676709,0.980469,0.177592844,0.700000167,0.7000002,2.09999967,-64.5714,0.643272,80.3402252,0.0830977857,0.0153676709,0.980469,0.177592844,1.0,1.0,3.0],[129.0,9.0,-42.6923,0.49535504,-10.2777567,-0.6829904,-0.591788769,0.323580265,0.280367821,0.700000167,2.10000062,2.10000086,-68.60271,0.707650065,74.31749,-0.6829904,-0.591788769,0.323580265,0.280367821,1.0,3.0,3.0],[130.0,9.0,-28.4589577,1.19564021,-3.8241837,0.6577636,-0.181477085,-0.493492931,0.5393309,0.7,2.09999967,2.1,-48.2693672,1.7080574,83.53688,0.6577636,-0.181477085,-0.493492931,0.5393309,1.0,3.0,3.0],[131.0,9.0,-35.4733276,1.09330356,-8.387012,-0.29213956,0.07124259,0.9265639,-0
.22596094,0.700000167,2.10000038,2.1,-58.2898979,1.56186223,77.0185547,-0.29213956,0.07124259,0.9265639,-0.22596094,1.0,3.0,3.0],[132.0,9.0,-27.5609341,1.18526852,-4.463472,0.5017441,-0.0527408868,-0.5236166,0.6865107,0.7,2.10000014,2.1,-46.9864731,1.69324076,82.62361,0.5017441,-0.0527408868,-0.5236166,0.6865107,1.0,3.0,3.0],[133.0,9.0,-26.5005264,0.350169182,-4.835774,0.26759997,-0.267674059,-0.654529,0.654471338,0.7,2.10000014,2.1,-45.4716034,0.5002417,82.09175,0.26759997,-0.267674059,-0.654529,0.654471338,1.0,3.0,3.0],[134.0,9.0,-24.43534,0.8272036,-1.77926636,0.5038543,0.8637384,-0.006992569,-0.00616816757,0.7000002,2.10000038,2.10000062,-42.52134,1.18171942,86.45819,0.5038543,0.8637384,-0.006992569,-0.00616816757,1.0,3.0,3.0],[135.0,9.0,-22.901619,0.349729747,-0.3232971,0.2569892,0.257356584,-0.6586972,-0.658667,0.7000002,2.10000038,2.1,-40.330307,0.4996139,88.53815,0.2569892,0.257356584,-0.6586972,-0.658667,1.0,3.0,3.0],[136.0,9.0,-24.6071587,1.05004644,-5.75976849,-5.21677475E-06,-0.0129489154,0.000160199081,0.999916136,0.7,2.1,2.1,-42.7667923,1.5000664,80.77176,-5.21677475E-06,-0.0129489154,0.000160199081,0.999916136,1.0,3.0,3.0],[137.0,9.0,-22.894783,1.05008852,-5.50895357,5.0540988E-05,0.173861876,0.000250560377,0.98477,0.7,2.1,2.1,-40.32054,1.50012648,81.1300659,5.0540988E-05,0.173861876,0.000250560377,0.98477,1.0,3.0,3.0],[138.0,9.0,-19.29491,2.44970322,-5.510962,-0.03968986,-0.040384572,0.7059748,0.7059698,0.6999999,2.10000014,2.10000014,-35.1778679,3.499576,81.1272,-0.03968986,-0.040384572,0.7059748,0.7059698,1.0,3.0,3.0],[139.0,9.0,-18.53671,1.05046582,-5.55458355,0.000150915512,-0.02771354,0.0006686302,0.9996157,0.700000048,2.10000014,2.1,-34.0947266,1.50066555,81.06488,0.000150915512,-0.02771354,0.0006686302,0.9996157,1.0,3.0,3.0],[140.0,9.0,-35.8468323,1.05000043,-3.68874669,-0.056281,0.7048633,-0.7048636,-0.0562810265,2.800001,2.10000014,2.10000062,-58.82347,1.50000072,83.73036,-0.056281,0.7048633,-0.7048636,-0.0562810265,4.0,3.0,3.0],[141.0,9.0,-
36.75868,1.05,-13.090745,0.0455191173,-0.7056399,-0.705640554,-0.0455181,2.80000186,2.10000062,2.10000062,-60.1261063,1.5,70.2989349,0.0455191173,-0.7056399,-0.705640554,-0.0455181,4.0,3.0,3.0],[142.0,9.0,-20.568491,1.05000746,-6.52805328,-5.61947345E-05,0.155651659,3.80377533E-05,0.987812042,2.79999971,2.1,2.1,-36.99727,1.50001073,79.67421,-5.61947345E-05,0.155651659,3.80377533E-05,0.987812042,4.0,3.0,3.0],[143.0,9.0,-20.8980446,1.73186743,-1.90715182,0.582752645,0.6431368,0.2543728,0.426695347,2.80000043,2.1,2.10000014,-37.46806,2.4740963,86.2755,0.582752645,0.6431368,0.2543728,0.426695347,4.0,3.0,3.0],[144.0,9.0,-22.2030334,3.150025,-6.4850297,-8.068216E-05,0.00541184843,-5.02317853E-05,0.9999854,2.8,2.1,2.1,-39.33233,4.500036,79.73567,-8.068216E-05,0.00541184843,-5.02317853E-05,0.9999854,4.0,3.0,3.0],[145.0,9.0,-20.218235,3.93528032,-3.80003357,-0.0385901034,-0.6764708,0.7346641,-0.03415831,2.80000067,2.10000014,2.10000038,-36.4969025,5.621829,83.57138,-0.0385901034,-0.6764708,0.7346641,-0.03415831,4.0,3.0,3.0],[146.0,9.0,-39.848793,0.350000322,2.57730865,-0.0464662239,1.718747E-07,0.998919845,-2.628407E-07,0.6999999,0.7,0.6999999,-64.54056,0.5000005,92.68187,-0.0464662239,1.718747E-07,0.998919845,-2.628407E-07,1.0,1.0,1.0],[147.0,9.0,-34.3033257,1.02814519,5.8604064,-0.55965817,0.7444308,0.355284065,-0.07986786,0.700000167,0.700000167,0.7000002,-56.61846,1.46877885,97.37201,-0.55965817,0.7444308,0.355284065,-0.07986786,1.0,1.0,1.0],[148.0,9.0,-31.6714649,0.349999815,6.31059551,-0.09414186,0.7008121,-0.7008117,-0.0941422358,0.7000003,0.700000048,0.7,-52.8586578,0.499999762,98.01514,-0.09414186,0.7008121,-0.7008117,-0.0941422358,1.0,1.0,1.0],[149.0,9.0,-36.6281128,0.35,5.22432232,-0.542391956,-0.453663856,-0.453663915,0.542392,0.6999999,0.6999999,0.7,-59.9395828,0.5,96.46332,-0.542391956,-0.453663856,-0.453663915,0.542392,1.0,1.0,1.0],[150.0,9.0,-33.6937523,0.429832935,5.276222,0.7931698,0.01931201,-0.0288211331,0.608011663,0.700000048,0.700000167,0.7000001,-55.7
476425,0.61404705,96.53746,0.7931698,0.01931201,-0.0288211331,0.608011663,1.0,1.0,1.0],[151.0,9.0,-34.82101,0.415043861,4.258052,-0.0420286842,0.911348,-0.09605565,-0.398060083,0.700000048,0.7,0.700000167,-57.35801,0.5929198,95.08293,-0.0420286842,0.911348,-0.09605565,-0.398060083,1.0,1.0,1.0],[152.0,9.0,-35.598,0.350002319,5.7004776,-0.5602493,0.4314213,0.431397527,0.56026113,0.700000048,0.700000048,0.7,-58.468,0.500003338,97.14354,-0.5602493,0.4314213,0.431397527,0.56026113,1.0,1.0,1.0],[153.0,9.0,-34.74185,0.3574566,3.4052155,-0.319832146,-0.3270923,-0.6215848,-0.6358856,0.7000001,0.700000167,0.700000048,-57.2449265,0.5106523,93.86459,-0.319832146,-0.3270923,-0.6215848,-0.6358856,1.0,1.0,1.0],[154.0,9.0,-34.60297,0.3499344,5.469551,0.311507046,0.6342088,0.6351064,-0.3120618,0.700000167,0.700000167,0.7000002,-57.046524,0.4999063,96.8136444,0.311507046,0.6342088,0.6351064,-0.3120618,1.0,1.0,1.0],[155.0,9.0,-32.4665146,0.35,-1.1174835,0.6422205,-1.25363471E-07,0.766519964,-9.368235E-08,0.7,0.7,2.1,-53.9944458,0.5,87.403595,0.6422205,-1.25363471E-07,0.766519964,-9.368235E-08,1.0,1.0,3.0],[156.0,9.0,-31.4536648,0.350001,3.043573,-0.000160176409,-0.292083323,0.000524441537,-0.956392765,0.700000048,0.7,2.1,-52.5475159,0.500001431,93.34796,-0.000160176409,-0.292083323,0.000524441537,-0.956392765,1.0,1.0,3.0],[157.0,9.0,-34.8788223,1.23395753,5.118739,-0.003463909,0.8874232,-0.2680855,0.3749643,0.700000167,0.7,2.10000014,-57.4405975,1.76279652,96.3124847,-0.003463909,0.8874232,-0.2680855,0.3749643,1.0,1.0,3.0],[158.0,9.0,-36.0872879,0.7464877,4.256354,0.423654675,0.615457535,0.5676353,0.3457151,0.7000001,0.700000167,2.10000014,-59.1669769,1.066411,95.0805054,0.423654675,0.615457535,0.5676353,0.3457151,1.0,1.0,3.0],[159.0,9.0,-35.2164536,0.350000173,1.49338531,0.273147225,-7.58596652E-09,0.961972237,-5.726008E-07,0.700000048,0.7,2.1,-57.9229279,0.500000238,91.13341,0.273147225,-7.58596652E-09,0.961972237,-5.726008E-07,1.0,1.0,3.0],[160.0,9.0,-32.90206,0.3500005,-0.02546386
61,3.24458078E-08,-0.6189511,1.13563615E-07,-0.785429537,0.700000048,0.7,2.1,-54.6166573,0.5000007,88.96362,3.24458078E-08,-0.6189511,1.13563615E-07,-0.785429537,1.0,1.0,3.0],[161.0,9.0,-31.9001751,0.35,-5.035608,-1.93163459E-08,0.618871748,1.10818066E-08,0.7854921,0.700000167,0.7,2.10000014,-53.1853867,0.5,81.8062744,-1.93163459E-08,0.618871748,1.10818066E-08,0.7854921,1.0,1.0,3.0],[162.0,9.0,-30.939106,0.350000173,-6.748042,1.88370379E-07,0.588445,2.798841E-07,0.8085373,0.700000048,0.7,2.10000014,-51.8124352,0.500000238,79.35994,1.88370379E-07,0.588445,2.798841E-07,0.8085373,1.0,1.0,3.0],[163.0,9.0,-31.5539017,0.350000173,-5.99644136,0.665464759,7.280881E-07,0.7464293,3.751753E-07,0.700000048,0.700000346,2.1,-52.690712,0.500000238,80.4336548,0.665464759,7.280881E-07,0.7464293,3.751753E-07,1.0,1.0,3.0],[164.0,9.0,-32.9526749,0.35,-4.11078024,0.202119365,-0.2021217,0.677603245,-0.6776049,0.7,0.6999999,2.10000014,-54.6889572,0.5,83.12746,0.202119365,-0.2021217,0.677603245,-0.6776049,1.0,1.0,3.0],[165.0,9.0,-33.16288,0.450290352,1.70567322,0.66063875,0.119913235,0.729233265,0.1318948,0.700000167,0.700000346,2.10000038,-54.98925,0.6432719,91.436676,0.66063875,0.119913235,0.729233265,0.1318948,1.0,1.0,3.0],[166.0,9.0,-37.9751167,0.495355129,3.31180882,-0.34686178,-0.300546646,0.671466231,0.581800461,0.700000048,2.10000038,2.1,-61.8638763,0.7076502,93.7311554,-0.34686178,-0.300546646,0.671466231,0.581800461,1.0,3.0,3.0],[167.0,9.0,-27.9861889,1.19564021,-8.707265,0.22376667,0.182751462,-0.7912753,0.538900554,0.7000001,2.10000038,2.10000014,-47.59398,1.7080574,76.56105,0.22376667,0.182751462,-0.7912753,0.538900554,1.0,3.0,3.0],[168.0,9.0,-34.24205,1.09330356,-3.14981842,0.329578131,-0.0803753361,0.913917,-0.222876042,0.6999999,2.09999943,2.1,-56.5309258,1.56186223,84.50026,0.329578131,-0.0803753361,0.913917,-0.222876042,1.0,3.0,3.0],[169.0,9.0,-28.3651447,1.18526852,-9.742407,0.08143948,0.374367446,-0.7206172,0.5778647,0.7000002,2.10000038,2.10000014,-48.1353455,1.6932407
6,75.0822754,0.08143948,0.374367446,-0.7206172,0.5778647,1.0,3.0,3.0],[170.0,9.0,-28.4436741,0.3501692,-10.863533,-0.184127167,0.184033319,-0.6827263,0.6827252,0.7,2.1,2.1,-48.2475281,0.500241756,73.48067,-0.184127167,0.184033319,-0.6827263,0.6827252,1.0,3.0,3.0],[171.0,9.0,-24.9497356,0.8272035,-12.0465879,0.3964017,0.683065,-0.3111008,-0.5286815,0.7,2.1,2.1,-43.25619,1.1817193,71.79059,0.3964017,0.683065,-0.3111008,-0.5286815,1.0,3.0,3.0],[172.0,9.0,-23.1398945,0.3497298,-13.1405191,-0.195092142,-0.194781721,-0.6796061,-0.6798049,0.700000048,2.10000014,2.1,-40.6707,0.499614,70.22783,-0.195092142,-0.194781721,-0.6796061,-0.6798049,1.0,3.0,3.0],[173.0,9.0,-28.8338833,1.05004632,-12.9338923,9.299778E-05,0.596059561,0.000130546818,0.802940249,0.700000048,2.1,2.1,-48.8049736,1.50006628,70.52301,9.299778E-05,0.596059561,0.000130546818,0.802940249,1.0,3.0,3.0],[174.0,9.0,-28.1390038,1.05008841,-14.5189114,0.000192129271,0.7354181,0.000168585844,0.6776136,0.700000048,2.1,2.1,-47.81229,1.50012636,68.2587,0.000192129271,0.7354181,0.000168585844,0.6776136,1.0,3.0,3.0],[175.0,9.0,-27.1886425,2.44970322,-17.991066,0.3965483,0.3959927,0.5854274,0.585844755,0.700000048,2.10000038,2.10000038,-46.45463,3.499576,63.2984772,0.3965483,0.3959927,0.5854274,0.585844755,1.0,3.0,3.0],[176.0,9.0,-27.0301456,1.050466,-18.73379,0.000525464,0.584137,0.00044014887,0.811654747,0.699999869,2.1,2.1,-46.2282066,1.50066566,62.237442,0.000525464,0.584137,0.00044014887,0.811654747,1.0,3.0,3.0],[177.0,9.0,-29.8099575,1.05000043,-1.546759,-0.47218588,0.5263463,-0.5263466,-0.4721857,2.80000019,2.10000038,2.1,-50.1993637,1.50000072,86.7903442,-0.47218588,0.5263463,-0.5263466,-0.4721857,4.0,3.0,3.0],[178.0,9.0,-39.11823,1.05,-3.1545608,-0.391710043,-0.5886957,-0.588696361,0.391710758,2.80000067,2.10000062,2.10000062,-63.4968948,1.5,84.4934845,-0.391710043,-0.5886957,-0.588696361,0.391710758,4.0,3.0,3.0],[179.0,9.0,-28.5064259,1.05000746,-17.031908,-2.16171939E-05,0.7227829,6.4322805E-05,0.691075146,2.7999
9971,2.1,2.1,-48.3371735,1.50001073,64.6687,-2.16171939E-05,0.7227829,6.4322805E-05,0.691075146,4.0,3.0,3.0],[180.0,9.0,-24.13731,1.73186743,-15.4917078,0.617632151,0.7701439,-0.151119456,-0.0507135428,2.80000067,2.10000038,2.10000086,-42.09558,2.4740963,66.86899,0.617632151,0.7701439,-0.151119456,-0.0507135428,4.0,3.0,3.0],[181.0,9.0,-28.8973236,3.150025,-15.44422,-9.46156651E-05,0.6107011,8.984142E-06,0.791861236,2.8,2.1,2.10000014,-48.8956,4.500036,66.93683,-9.46156651E-05,0.6107011,8.984142E-06,0.791861236,4.0,3.0,3.0],[182.0,9.0,-25.7829189,3.93528032,-16.6480274,0.414819419,-0.5586138,0.6075721,0.3830557,2.80000019,2.1,2.10000038,-44.44645,5.621829,65.2171,0.414819419,-0.5586138,0.6075721,0.3830557,4.0,3.0,3.0],[183.0,9.0,-30.0344,0.350000322,-0.8470047,0.4477877,2.1447855E-08,0.8941399,-3.13314956E-07,0.6999999,0.7,0.6999999,-50.52,0.5000005,87.78999,0.4477877,2.1447855E-08,0.8941399,-3.13314956E-07,1.0,1.0,1.0],[184.0,9.0,-24.3391953,1.02814519,-3.86286,-0.314546257,0.610373557,0.5835277,-0.433589935,0.7,0.7,0.700000048,-42.3839874,1.46877885,83.48163,-0.314546257,0.610373557,0.5835277,-0.433589935,1.0,1.0,1.0],[185.0,9.0,-22.5812645,0.349999815,-5.87260437,-0.424718827,0.565344036,-0.565343738,-0.424719363,0.7,0.7000001,0.7,-39.8726578,0.499999762,80.6105652,-0.424718827,0.565344036,-0.565343738,-0.424719363,1.0,1.0,1.0],[186.0,9.0,-26.0953732,0.35,-2.2120986,-0.694939137,-0.13061215,-0.13061218,0.694939256,0.7,0.7000001,0.7000001,-44.8928146,0.5,85.83986,-0.694939137,-0.13061215,-0.13061218,0.694939256,1.0,1.0,1.0],[187.0,9.0,-24.51922,0.429832935,-4.68775463,0.677845836,0.314074516,-0.412884474,0.520969033,0.700000048,0.700000167,0.700000167,-42.64117,0.61404705,82.30321,0.677845836,0.314074516,-0.412884474,0.520969033,1.0,1.0,1.0],[188.0,9.0,-25.9761238,0.415043861,-4.25782776,-0.0836213753,0.600437641,-0.06325005,-0.7927683,0.700000048,0.700000048,0.700000048,-44.72246,0.5929198,82.91739,-0.0836213753,0.600437641,-0.06325005,-0.7927683,1.0,1.0,1.0],[189
.0,9.0,-25.1514912,0.350002319,-2.84211564,-0.277853876,0.65024215,0.6502156,0.277852654,0.7000003,0.700000167,0.7000001,-43.54441,0.500003338,84.9398346,-0.277853876,0.65024215,0.6502156,0.277852654,1.0,1.0,1.0],[190.0,9.0,-26.6621952,0.3574566,-4.77056551,-0.582873464,-0.5961979,-0.3859,-0.394826382,0.700000048,0.700000048,0.700000167,-45.70256,0.5106523,82.184906,-0.582873464,-0.5961979,-0.3859,-0.394826382,1.0,1.0,1.0],[191.0,9.0,-24.8289852,0.3499344,-3.81133413,0.5822211,0.400711179,0.401765257,-0.582266152,0.700000048,0.7000001,0.7,-43.0836868,0.4999063,83.55524,0.5822211,0.400711179,0.401765257,-0.582266152,1.0,1.0,1.0],[192.0,9.0,-29.3318386,0.35,-9.072314,0.934966266,-1.551597E-07,0.354736626,-2.04412967E-08,0.7,0.7,2.09999967,-49.5163345,0.5,76.03955,0.934966266,-1.551597E-07,0.354736626,-2.04412967E-08,1.0,1.0,3.0],[193.0,9.0,-25.2540512,0.350001,-7.763916,0.000116641429,-0.722338,0.0005358082,-0.691539943,0.7000001,0.7,2.1,-43.6909256,0.500001431,77.90869,0.000116641429,-0.722338,0.0005358082,-0.691539943,1.0,1.0,3.0],[194.0,9.0,-25.27221,1.23395753,-3.75919938,-0.13407588,0.9574606,-0.2321758,-0.10671138,0.700000048,0.6999999,2.10000014,-43.7168655,1.76279652,83.629715,-0.13407588,0.9574606,-0.2321758,-0.10671138,1.0,1.0,3.0],[195.0,9.0,-26.6386337,0.7464877,-3.17870021,0.6470717,0.7059082,0.288082153,0.000723225065,0.700000048,0.7,2.1,-45.6689,1.066411,84.459,0.6470717,0.7059082,0.288082153,0.000723225065,1.0,1.0,3.0],[196.0,9.0,-28.5405788,0.350000173,-5.36385,0.7085463,-2.86534458E-07,0.705664337,-4.95809843E-07,0.6999999,0.7,2.1,-48.3859673,0.500000238,81.33736,0.7085463,-2.86534458E-07,0.705664337,-4.95809843E-07,1.0,1.0,3.0],[197.0,9.0,-28.6278057,0.3500005,-8.13075,8.382036E-08,-0.9239108,8.320793E-08,-0.382608,0.7000001,0.7,2.10000038,-48.51058,0.5000007,77.38464,8.382036E-08,-0.9239108,8.320793E-08,-0.382608,1.0,1.0,3.0],[198.0,9.0,-32.378006,0.35,-11.60081,-1.143359E-08,0.9238721,1.91102281E-08,0.382701367,0.700000048,0.7,2.10000038,-53.86800
77,0.5,72.4274139,-1.143359E-08,0.9238721,1.91102281E-08,0.382701367,1.0,1.0,3.0],[199.0,9.0,-33.33683,0.350000173,-13.3144941,3.01149782E-07,0.908594549,1.520768E-07,0.4176793,0.7000001,0.7,2.10000038,-55.23776,0.500000238,69.9792938,3.01149782E-07,0.908594549,1.520768E-07,0.4176793,1.0,1.0,3.0],[200.0,9.0,-33.0167274,0.350000173,-12.3977633,0.94542253,8.18564729E-07,0.325847059,-2.8636018E-08,0.7,0.7,2.10000038,-54.7804642,0.500000238,71.28891,0.94542253,8.18564729E-07,0.325847059,-2.8636018E-08,1.0,1.0,3.0],[201.0,9.0,-32.1386528,0.35,-10.220314,0.5075693,-0.507572234,0.4923127,-0.492313027,0.7000001,0.700000167,2.10000038,-53.52607,0.5,74.39955,0.5075693,-0.507572234,0.4923127,-0.492313027,1.0,1.0,3.0],[202.0,9.0,-27.28746,0.450290352,-7.00453949,0.932806,0.169085354,0.3132052,0.0564409681,0.700000048,0.7000001,2.1,-46.5957947,0.6432719,78.993515,0.932806,0.169085354,0.3132052,0.0564409681,1.0,1.0,3.0],[203.0,9.0,-28.4297886,0.495355129,-2.06163335,0.0256563313,0.0222268477,0.7553289,0.6544663,0.700000167,2.10000062,2.10000014,-48.22769,0.7076502,86.05481,0.0256563313,0.0222268477,0.7553289,0.6544663,1.0,3.0,3.0],[204.0,9.0,-33.4663124,1.19564021,-16.8558941,-0.191609174,0.422868729,-0.799671,0.380781174,0.7,2.1,2.10000014,-55.4227333,1.7080574,64.92015,-0.191609174,0.422868729,-0.799671,0.380781174,1.0,3.0,3.0],[205.0,9.0,-31.99216,1.09330356,-8.61891,0.7342826,-0.179069981,0.6361566,-0.1551382,0.7000001,2.10000134,2.10000062,-53.3167953,1.56186223,76.68727,0.7342826,-0.179069981,0.6361566,-0.1551382,1.0,3.0,3.0],[206.0,9.0,-34.5470428,1.18526852,-17.0730743,-0.2812296,0.609075665,-0.668454468,0.321100354,0.700000167,2.10000014,2.10000038,-56.9666328,1.69324076,64.60989,-0.2812296,0.609075665,-0.668454468,0.321100354,1.0,3.0,3.0],[207.0,9.0,-35.54426,0.3501692,-17.5913773,-0.494378,0.494295537,-0.5055773,0.5056223,0.7,2.10000014,2.10000014,-58.3912239,0.500241756,63.86946,-0.494378,0.494295537,-0.5055773,0.5056223,1.0,3.0,3.0],[208.0,9.0,-34.72929,0.8272035,-21
.189024,0.19372575,0.3374369,-0.465175539,-0.79512167,0.7,2.10000014,2.1,-57.2269859,1.1817193,58.7299652,0.19372575,0.3374369,-0.465175539,-0.79512167,1.0,3.0,3.0],[209.0,9.0,-34.7175,0.3497298,-23.3037529,-0.5024186,-0.5022444,-0.497495264,-0.4978199,0.700000048,2.10000038,2.10000038,-57.2101364,0.499614,55.7089233,-0.5024186,-0.5022444,-0.497495264,-0.4978199,1.0,3.0,3.0],[210.0,9.0,-37.5138,1.05004632,-18.3394,0.0001449463,0.9125011,6.84226761E-05,0.409074247,0.7,2.1,2.1,-61.2048569,1.50006628,62.8008575,0.0001449463,0.9125011,6.84226761E-05,0.409074247,1.0,3.0,3.0],[211.0,9.0,-38.5029259,1.05008841,-19.7595329,0.000250020385,0.9728069,5.31461119E-05,0.2316177,0.700000048,2.1,2.10000014,-62.6178932,1.50012636,60.7720947,0.000250020385,0.9728069,5.31461119E-05,0.2316177,1.0,3.0,3.0],[212.0,9.0,-40.9682579,2.44970322,-22.3827381,0.6321225,0.631842136,0.3168544,0.31749022,0.7000002,2.10000038,2.10000038,-66.13979,3.499576,57.02466,0.6321225,0.631842136,0.3168544,0.31749022,1.0,3.0,3.0],[213.0,9.0,-41.51899,1.050466,-22.9056664,0.0006735645,0.9063604,0.00012709749,0.422504872,0.6999999,2.10000014,2.10000014,-66.92655,1.50066566,56.27762,0.0006735645,0.9063604,0.00012709749,0.422504872,1.0,3.0,3.0],[214.0,9.0,-28.3111153,1.05000043,-11.5622406,-0.6692244,0.228338748,-0.228338748,-0.66922456,2.80000019,2.1,2.1,-48.05816,1.50000072,72.48251,-0.6692244,0.228338748,-0.228338748,-0.66922456,4.0,3.0,3.0],[215.0,9.0,-34.5418167,1.05,-4.462425,-0.629499853,-0.32207045,-0.322071224,0.6295004,2.8,2.10000038,2.1,-56.95916,1.5,82.62511,-0.629499853,-0.32207045,-0.322071224,0.6295004,4.0,3.0,3.0],[216.0,9.0,-40.8381157,1.05000746,-20.7580719,1.25861707E-05,0.968365,6.668073E-05,0.249538,2.8,2.1,2.1,-65.95387,1.50001073,59.34561,1.25861707E-05,0.968365,6.668073E-05,0.249538,4.0,3.0,3.0],[217.0,9.0,-37.24356,1.73186743,-23.6804771,0.464927346,0.6470567,-0.433762044,-0.420726478,2.80000043,2.10000014,2.1,-60.8187943,2.4740963,55.1707458,0.464927346,0.6470567,-0.433762044,-0.42072647
8,4.0,3.0,3.0],[218.0,9.0,-39.6880264,3.150025,-19.5958118,-7.81476556E-05,0.91985786,5.409046E-05,0.392251968,2.80000067,2.1,2.10000086,-64.31089,4.500036,61.00598,-7.81476556E-05,0.91985786,5.409046E-05,0.392251968,4.0,3.0,3.0],[219.0,9.0,-39.0888939,3.93528032,-22.8805771,0.6588871,-0.30005905,0.3272409,0.607244432,2.8,2.1,2.10000014,-63.45499,5.621829,56.31346,0.6588871,-0.30005905,0.3272409,0.607244432,4.0,3.0,3.0],[220.0,20.0,5.329597,8.75,-62.3,0.0,0.0,0.0,1.0,0.7,0.7,0.7,0.0,12.5,0.0,0.0,0.0,0.0,1.0,1.0,1.0,1.0],[221.0,9.0,-42.4796143,9.099999,-12.2789774,-0.642699063,2.96055134E-07,0.7661188,-1.04772994E-07,0.700000167,0.700000346,0.700000167,-68.2988739,0.5000005,71.4586,-0.642699063,2.96055134E-07,0.7661188,-1.04772994E-07,1.0,1.0,1.0],[222.0,9.0,-44.1787872,9.778145,-6.062579,-0.6604609,0.6403705,-0.0568745,0.3879206,0.7,0.700000048,0.700000167,-70.726265,1.46877885,80.33917,-0.6604609,0.6403705,-0.0568745,0.3879206,1.0,1.0,1.0],[223.0,9.0,-43.9167,9.099999,-3.40537572,0.35011965,0.6143423,-0.614342153,0.350119352,0.700000167,0.700000048,0.7,-70.35185,0.499999821,84.13518,0.35011965,0.6143423,-0.614342153,0.350119352,1.0,1.0,1.0],[224.0,9.0,-44.1803551,9.099999,-8.472802,-0.156180635,-0.6896431,-0.689643145,0.15618068,0.7000001,0.7000001,0.7000001,-70.7285,0.5000001,76.8959961,-0.156180635,-0.6896431,-0.689643145,0.15618068,1.0,1.0,1.0],[225.0,9.0,-43.4541435,9.179832,-5.629256,0.648170352,-0.353345871,0.458066583,0.495173663,0.700000048,0.700000167,0.7000001,-69.6910553,0.614047,80.9582062,0.648170352,-0.353345871,0.458066583,0.495173663,1.0,1.0,1.0],[226.0,9.0,-42.77045,9.165044,-6.98568726,0.0248294324,0.966048956,-0.10186547,0.236128062,0.700000346,0.700000048,0.700000346,-68.7143555,0.5929197,79.02045,0.0248294324,0.966048956,-0.10186547,0.236128062,1.0,1.0,1.0],[227.0,9.0,-44.36704,9.100002,-7.35342836,-0.7070874,0.003300767,0.00328877149,0.7071108,0.7,0.7,0.7,-70.99519,0.5000033,78.4951,-0.7070874,0.003300767,0.00328877149,0.7071108,1.0,1.0,1.0],[
228.0,9.0,-41.9270668,9.107456,-7.134967,0.122616917,0.125516236,-0.6882043,-0.7039785,0.7000001,0.700000048,0.700000048,-67.50952,0.5106523,78.80719,0.122616917,0.125516236,-0.6882043,-0.7039785,1.0,1.0,1.0],[229.0,9.0,-43.881115,9.099935,-6.45493,-0.1374364,0.6935304,0.6939078,0.1364509,0.700000048,0.700000048,0.7,-70.30102,0.499906272,79.77867,-0.1374364,0.6935304,0.6939078,0.1364509,1.0,1.0,1.0],[230.0,9.0,-36.96356,9.099999,-6.13710165,0.0458421,-4.28737366E-08,0.998948753,-1.50513046E-07,0.700000346,0.700000346,2.1,-60.4187965,0.5,80.23271,0.0458421,-4.28737366E-08,0.998948753,-1.50513046E-07,1.0,1.0,3.0],[231.0,9.0,-40.7084465,9.100001,-4.05958557,-0.000445389334,0.347711384,0.000319879939,-0.9376015,0.700000048,0.7,2.10000014,-65.76863,0.5000015,83.20059,-0.000445389334,0.347711384,0.000319879939,-0.9376015,1.0,1.0,3.0],[232.0,9.0,-43.61579,9.983957,-6.813763,0.159814462,0.478258133,-0.215270087,0.8362938,0.700000048,0.6999999,2.10000038,-69.92198,1.76279652,79.26605,0.159814462,0.478258133,-0.215270087,0.8362938,1.0,1.0,3.0],[233.0,9.0,-43.1038,9.496488,-8.207322,-0.0073467507,0.279739916,0.708265364,0.648114,0.700000048,0.7,2.1,-69.19057,1.066411,77.27525,-0.0073467507,0.279739916,0.708265364,0.648114,1.0,1.0,3.0],[234.0,9.0,-40.20888,9.099999,-8.098396,-0.36615178,3.41196454E-07,0.9305552,-4.59905721E-07,0.7000003,0.700000346,2.10000014,-65.05497,0.500000238,77.43086,-0.36615178,3.41196454E-07,0.9305552,-4.59905721E-07,1.0,1.0,3.0],[235.0,9.0,-38.131916,9.1,-6.268266,-4.30662759E-08,-0.01587225,1.09975986E-07,-0.9998741,0.700000048,0.7,2.1,-62.08788,0.500000656,80.0453339,-4.30662759E-08,-0.01587225,1.09975986E-07,-0.9998741,1.0,1.0,3.0],[236.0,9.0,-33.0352058,9.099999,-6.62743044,-2.207958E-08,0.0157713331,-2.90181235E-09,0.999875665,0.700000048,0.7,2.1,-54.80686,0.50000006,79.53224,-2.207958E-08,0.0157713331,-2.90181235E-09,0.999875665,1.0,1.0,3.0],[237.0,9.0,-31.12954,9.099999,-6.153604,-1.9939927E-08,-0.0223979075,3.36780261E-07,0.9997492,0.700000048,
0.7,2.10000014,-52.0844841,0.5000003,80.20914,-1.9939927E-08,-0.0223979075,3.36780261E-07,0.9997492,1.0,1.0,3.0],[238.0,9.0,-32.017,9.099999,-6.54766369,0.07650759,3.514336E-07,0.997069061,7.39839265E-07,0.700000346,0.700000346,2.1,-53.35228,0.5000002,79.6461945,0.07650759,3.514336E-07,0.997069061,7.39839265E-07,1.0,1.0,3.0],[239.0,9.0,-34.205513,9.099999,-7.39778757,-0.2501868,0.250186235,0.661365747,-0.6613689,0.7000001,0.700000048,2.1,-56.4787331,0.50000006,78.43173,-0.2501868,0.250186235,0.661365747,-0.6613689,1.0,1.0,3.0],[240.0,9.0,-39.8703842,9.200291,-6.06184244,0.0830977857,0.0153676709,0.980469,0.177592844,0.700000167,0.7000002,2.09999967,-64.5714,0.643272,80.3402252,0.0830977857,0.0153676709,0.980469,0.177592844,1.0,1.0,3.0],[241.0,9.0,-42.6923,9.245355,-10.2777567,-0.6829904,-0.591788769,0.323580265,0.280367821,0.700000167,2.10000062,2.10000086,-68.60271,0.707650065,74.31749,-0.6829904,-0.591788769,0.323580265,0.280367821,1.0,3.0,3.0],[242.0,9.0,-28.4589577,9.94564,-3.8241837,0.6577636,-0.181477085,-0.493492931,0.5393309,0.7,2.09999967,2.1,-48.2693672,1.7080574,83.53688,0.6577636,-0.181477085,-0.493492931,0.5393309,1.0,3.0,3.0],[243.0,9.0,-35.4733276,9.843304,-8.387012,-0.29213956,0.07124259,0.9265639,-0.22596094,0.700000167,2.10000038,2.1,-58.2898979,1.56186223,77.0185547,-0.29213956,0.07124259,0.9265639,-0.22596094,1.0,3.0,3.0],[244.0,9.0,-27.5609341,9.935268,-4.463472,0.5017441,-0.0527408868,-0.5236166,0.6865107,0.7,2.10000014,2.1,-46.9864731,1.69324076,82.62361,0.5017441,-0.0527408868,-0.5236166,0.6865107,1.0,3.0,3.0],[245.0,9.0,-26.5005264,9.100169,-4.835774,0.26759997,-0.267674059,-0.654529,0.654471338,0.7,2.10000014,2.1,-45.4716034,0.5002417,82.09175,0.26759997,-0.267674059,-0.654529,0.654471338,1.0,3.0,3.0],[246.0,9.0,-24.43534,9.577204,-1.77926636,0.5038543,0.8637384,-0.006992569,-0.00616816757,0.7000002,2.10000038,2.10000062,-42.52134,1.18171942,86.45819,0.5038543,0.8637384,-0.006992569,-0.00616816757,1.0,3.0,3.0],[247.0,9.0,-22.901619,9.09973,
-0.3232971,0.2569892,0.257356584,-0.6586972,-0.658667,0.7000002,2.10000038,2.1,-40.330307,0.4996139,88.53815,0.2569892,0.257356584,-0.6586972,-0.658667,1.0,3.0,3.0],[248.0,9.0,-24.6071587,9.800047,-5.75976849,-5.21677475E-06,-0.0129489154,0.000160199081,0.999916136,0.7,2.1,2.1,-42.7667923,1.5000664,80.77176,-5.21677475E-06,-0.0129489154,0.000160199081,0.999916136,1.0,3.0,3.0],[249.0,9.0,-22.894783,9.800089,-5.50895357,5.0540988E-05,0.173861876,0.000250560377,0.98477,0.7,2.1,2.1,-40.32054,1.50012648,81.1300659,5.0540988E-05,0.173861876,0.000250560377,0.98477,1.0,3.0,3.0],[250.0,9.0,-19.29491,11.1997032,-5.510962,-0.03968986,-0.040384572,0.7059748,0.7059698,0.6999999,2.10000014,2.10000014,-35.1778679,3.499576,81.1272,-0.03968986,-0.040384572,0.7059748,0.7059698,1.0,3.0,3.0],[251.0,9.0,-18.53671,9.800466,-5.55458355,0.000150915512,-0.02771354,0.0006686302,0.9996157,0.700000048,2.10000014,2.1,-34.0947266,1.50066555,81.06488,0.000150915512,-0.02771354,0.0006686302,0.9996157,1.0,3.0,3.0],[252.0,9.0,-35.8468323,9.8,-3.68874669,-0.056281,0.7048633,-0.7048636,-0.0562810265,2.800001,2.10000014,2.10000062,-58.82347,1.50000072,83.73036,-0.056281,0.7048633,-0.7048636,-0.0562810265,4.0,3.0,3.0],[253.0,9.0,-36.75868,9.8,-13.090745,0.0455191173,-0.7056399,-0.705640554,-0.0455181,2.80000186,2.10000062,2.10000062,-60.1261063,1.5,70.2989349,0.0455191173,-0.7056399,-0.705640554,-0.0455181,4.0,3.0,3.0],[254.0,9.0,-20.568491,9.800007,-6.52805328,-5.61947345E-05,0.155651659,3.80377533E-05,0.987812042,2.79999971,2.1,2.1,-36.99727,1.50001073,79.67421,-5.61947345E-05,0.155651659,3.80377533E-05,0.987812042,4.0,3.0,3.0],[255.0,9.0,-20.8980446,10.4818668,-1.90715182,0.582752645,0.6431368,0.2543728,0.426695347,2.80000043,2.1,2.10000014,-37.46806,2.4740963,86.2755,0.582752645,0.6431368,0.2543728,0.426695347,4.0,3.0,3.0],[256.0,9.0,-22.2030334,11.9000254,-6.4850297,-8.068216E-05,0.00541184843,-5.02317853E-05,0.9999854,2.8,2.1,2.1,-39.33233,4.500036,79.73567,-8.068216E-05,0.00541184843,-5.02317853E
-05,0.9999854,4.0,3.0,3.0],[257.0,9.0,-20.218235,12.6852808,-3.80003357,-0.0385901034,-0.6764708,0.7346641,-0.03415831,2.80000067,2.10000014,2.10000038,-36.4969025,5.621829,83.57138,-0.0385901034,-0.6764708,0.7346641,-0.03415831,4.0,3.0,3.0],[258.0,9.0,-39.848793,9.099999,2.57730865,-0.0464662239,1.718747E-07,0.998919845,-2.628407E-07,0.6999999,0.7,0.6999999,-64.54056,0.5000005,92.68187,-0.0464662239,1.718747E-07,0.998919845,-2.628407E-07,1.0,1.0,1.0],[259.0,9.0,-34.3033257,9.778145,5.8604064,-0.55965817,0.7444308,0.355284065,-0.07986786,0.700000167,0.700000167,0.7000002,-56.61846,1.46877885,97.37201,-0.55965817,0.7444308,0.355284065,-0.07986786,1.0,1.0,1.0],[260.0,9.0,-31.6714649,9.099999,6.31059551,-0.09414186,0.7008121,-0.7008117,-0.0941422358,0.7000003,0.700000048,0.7,-52.8586578,0.499999762,98.01514,-0.09414186,0.7008121,-0.7008117,-0.0941422358,1.0,1.0,1.0],[261.0,9.0,-36.6281128,9.099999,5.22432232,-0.542391956,-0.453663856,-0.453663915,0.542392,0.6999999,0.6999999,0.7,-59.9395828,0.5,96.46332,-0.542391956,-0.453663856,-0.453663915,0.542392,1.0,1.0,1.0],[262.0,9.0,-33.6937523,9.179832,5.276222,0.7931698,0.01931201,-0.0288211331,0.608011663,0.700000048,0.700000167,0.7000001,-55.7476425,0.61404705,96.53746,0.7931698,0.01931201,-0.0288211331,0.608011663,1.0,1.0,1.0],[263.0,9.0,-34.82101,9.165044,4.258052,-0.0420286842,0.911348,-0.09605565,-0.398060083,0.700000048,0.7,0.700000167,-57.35801,0.5929198,95.08293,-0.0420286842,0.911348,-0.09605565,-0.398060083,1.0,1.0,1.0],[264.0,9.0,-35.598,9.100002,5.7004776,-0.5602493,0.4314213,0.431397527,0.56026113,0.700000048,0.700000048,0.7,-58.468,0.500003338,97.14354,-0.5602493,0.4314213,0.431397527,0.56026113,1.0,1.0,1.0],[265.0,9.0,-34.74185,9.107456,3.4052155,-0.319832146,-0.3270923,-0.6215848,-0.6358856,0.7000001,0.700000167,0.700000048,-57.2449265,0.5106523,93.86459,-0.319832146,-0.3270923,-0.6215848,-0.6358856,1.0,1.0,1.0],[266.0,9.0,-34.60297,9.099935,5.469551,0.311507046,0.6342088,0.6351064,-0.3120618,0.700000167,0.70
0000167,0.7000002,-57.046524,0.4999063,96.8136444,0.311507046,0.6342088,0.6351064,-0.3120618,1.0,1.0,1.0],[267.0,9.0,-32.4665146,9.099999,-1.1174835,0.6422205,-1.25363471E-07,0.766519964,-9.368235E-08,0.7,0.7,2.1,-53.9944458,0.5,87.403595,0.6422205,-1.25363471E-07,0.766519964,-9.368235E-08,1.0,1.0,3.0],[268.0,9.0,-31.4536648,9.100001,3.043573,-0.000160176409,-0.292083323,0.000524441537,-0.956392765,0.700000048,0.7,2.1,-52.5475159,0.500001431,93.34796,-0.000160176409,-0.292083323,0.000524441537,-0.956392765,1.0,1.0,3.0],[269.0,9.0,-34.8788223,9.983957,5.118739,-0.003463909,0.8874232,-0.2680855,0.3749643,0.700000167,0.7,2.10000014,-57.4405975,1.76279652,96.3124847,-0.003463909,0.8874232,-0.2680855,0.3749643,1.0,1.0,3.0],[270.0,9.0,-36.0872879,9.496488,4.256354,0.423654675,0.615457535,0.5676353,0.3457151,0.7000001,0.700000167,2.10000014,-59.1669769,1.066411,95.0805054,0.423654675,0.615457535,0.5676353,0.3457151,1.0,1.0,3.0],[271.0,9.0,-35.2164536,9.099999,1.49338531,0.273147225,-7.58596652E-09,0.961972237,-5.726008E-07,0.700000048,0.7,2.1,-57.9229279,0.500000238,91.13341,0.273147225,-7.58596652E-09,0.961972237,-5.726008E-07,1.0,1.0,3.0],[272.0,9.0,-32.90206,9.1,-0.0254638661,3.24458078E-08,-0.6189511,1.13563615E-07,-0.785429537,0.700000048,0.7,2.1,-54.6166573,0.5000007,88.96362,3.24458078E-08,-0.6189511,1.13563615E-07,-0.785429537,1.0,1.0,3.0],[273.0,9.0,-31.9001751,9.099999,-5.035608,-1.93163459E-08,0.618871748,1.10818066E-08,0.7854921,0.700000167,0.7,2.10000014,-53.1853867,0.5,81.8062744,-1.93163459E-08,0.618871748,1.10818066E-08,0.7854921,1.0,1.0,3.0],[274.0,9.0,-30.939106,9.099999,-6.748042,1.88370379E-07,0.588445,2.798841E-07,0.8085373,0.700000048,0.7,2.10000014,-51.8124352,0.500000238,79.35994,1.88370379E-07,0.588445,2.798841E-07,0.8085373,1.0,1.0,3.0],[275.0,9.0,-31.5539017,9.099999,-5.99644136,0.665464759,7.280881E-07,0.7464293,3.751753E-07,0.700000048,0.700000346,2.1,-52.690712,0.500000238,80.4336548,0.665464759,7.280881E-07,0.7464293,3.751753E-07,1.0,1.0,3.0]
,[276.0,9.0,-32.9526749,9.099999,-4.11078024,0.202119365,-0.2021217,0.677603245,-0.6776049,0.7,0.6999999,2.10000014,-54.6889572,0.5,83.12746,0.202119365,-0.2021217,0.677603245,-0.6776049,1.0,1.0,3.0],[277.0,9.0,-33.16288,9.200291,1.70567322,0.66063875,0.119913235,0.729233265,0.1318948,0.700000167,0.700000346,2.10000038,-54.98925,0.6432719,91.436676,0.66063875,0.119913235,0.729233265,0.1318948,1.0,1.0,3.0],[278.0,9.0,-37.9751167,9.245355,3.31180882,-0.34686178,-0.300546646,0.671466231,0.581800461,0.700000048,2.10000038,2.1,-61.8638763,0.7076502,93.7311554,-0.34686178,-0.300546646,0.671466231,0.581800461,1.0,3.0,3.0],[279.0,9.0,-27.9861889,9.94564,-8.707265,0.22376667,0.182751462,-0.7912753,0.538900554,0.7000001,2.10000038,2.10000014,-47.59398,1.7080574,76.56105,0.22376667,0.182751462,-0.7912753,0.538900554,1.0,3.0,3.0],[280.0,9.0,-34.24205,9.843304,-3.14981842,0.329578131,-0.0803753361,0.913917,-0.222876042,0.6999999,2.09999943,2.1,-56.5309258,1.56186223,84.50026,0.329578131,-0.0803753361,0.913917,-0.222876042,1.0,3.0,3.0],[281.0,9.0,-28.3651447,9.935268,-9.742407,0.08143948,0.374367446,-0.7206172,0.5778647,0.7000002,2.10000038,2.10000014,-48.1353455,1.69324076,75.0822754,0.08143948,0.374367446,-0.7206172,0.5778647,1.0,3.0,3.0],[282.0,9.0,-28.4436741,9.100169,-10.863533,-0.184127167,0.184033319,-0.6827263,0.6827252,0.7,2.1,2.1,-48.2475281,0.500241756,73.48067,-0.184127167,0.184033319,-0.6827263,0.6827252,1.0,3.0,3.0],[283.0,9.0,-24.9497356,9.577203,-12.0465879,0.3964017,0.683065,-0.3111008,-0.5286815,0.7,2.1,2.1,-43.25619,1.1817193,71.79059,0.3964017,0.683065,-0.3111008,-0.5286815,1.0,3.0,3.0],[284.0,9.0,-23.1398945,9.09973,-13.1405191,-0.195092142,-0.194781721,-0.6796061,-0.6798049,0.700000048,2.10000014,2.1,-40.6707,0.499614,70.22783,-0.195092142,-0.194781721,-0.6796061,-0.6798049,1.0,3.0,3.0],[285.0,9.0,-28.8338833,9.800047,-12.9338923,9.299778E-05,0.596059561,0.000130546818,0.802940249,0.700000048,2.1,2.1,-48.8049736,1.50006628,70.52301,9.299778E-05,0.596059561,0
.000130546818,0.802940249,1.0,3.0,3.0],[286.0,9.0,-28.1390038,9.800088,-14.5189114,0.000192129271,0.7354181,0.000168585844,0.6776136,0.700000048,2.1,2.1,-47.81229,1.50012636,68.2587,0.000192129271,0.7354181,0.000168585844,0.6776136,1.0,3.0,3.0],[287.0,9.0,-27.1886425,11.1997032,-17.991066,0.3965483,0.3959927,0.5854274,0.585844755,0.700000048,2.10000038,2.10000038,-46.45463,3.499576,63.2984772,0.3965483,0.3959927,0.5854274,0.585844755,1.0,3.0,3.0],[288.0,9.0,-27.0301456,9.800466,-18.73379,0.000525464,0.584137,0.00044014887,0.811654747,0.699999869,2.1,2.1,-46.2282066,1.50066566,62.237442,0.000525464,0.584137,0.00044014887,0.811654747,1.0,3.0,3.0],[289.0,9.0,-29.8099575,9.8,-1.546759,-0.47218588,0.5263463,-0.5263466,-0.4721857,2.80000019,2.10000038,2.1,-50.1993637,1.50000072,86.7903442,-0.47218588,0.5263463,-0.5263466,-0.4721857,4.0,3.0,3.0],[290.0,9.0,-39.11823,9.8,-3.1545608,-0.391710043,-0.5886957,-0.588696361,0.391710758,2.80000067,2.10000062,2.10000062,-63.4968948,1.5,84.4934845,-0.391710043,-0.5886957,-0.588696361,0.391710758,4.0,3.0,3.0],[291.0,9.0,-28.5064259,9.800007,-17.031908,-2.16171939E-05,0.7227829,6.4322805E-05,0.691075146,2.79999971,2.1,2.1,-48.3371735,1.50001073,64.6687,-2.16171939E-05,0.7227829,6.4322805E-05,0.691075146,4.0,3.0,3.0],[292.0,9.0,-24.13731,10.4818668,-15.4917078,0.617632151,0.7701439,-0.151119456,-0.0507135428,2.80000067,2.10000038,2.10000086,-42.09558,2.4740963,66.86899,0.617632151,0.7701439,-0.151119456,-0.0507135428,4.0,3.0,3.0],[293.0,9.0,-28.8973236,11.9000254,-15.44422,-9.46156651E-05,0.6107011,8.984142E-06,0.791861236,2.8,2.1,2.10000014,-48.8956,4.500036,66.93683,-9.46156651E-05,0.6107011,8.984142E-06,0.791861236,4.0,3.0,3.0],[294.0,9.0,-25.7829189,12.6852808,-16.6480274,0.414819419,-0.5586138,0.6075721,0.3830557,2.80000019,2.1,2.10000038,-44.44645,5.621829,65.2171,0.414819419,-0.5586138,0.6075721,0.3830557,4.0,3.0,3.0],[295.0,9.0,-30.0344,9.099999,-0.8470047,0.4477877,2.1447855E-08,0.8941399,-3.13314956E-07,0.6999999,0.7,0.699999
9,-50.52,0.5000005,87.78999,0.4477877,2.1447855E-08,0.8941399,-3.13314956E-07,1.0,1.0,1.0],[296.0,9.0,-24.3391953,9.778145,-3.86286,-0.314546257,0.610373557,0.5835277,-0.433589935,0.7,0.7,0.700000048,-42.3839874,1.46877885,83.48163,-0.314546257,0.610373557,0.5835277,-0.433589935,1.0,1.0,1.0],[297.0,9.0,-22.5812645,9.099999,-5.87260437,-0.424718827,0.565344036,-0.565343738,-0.424719363,0.7,0.7000001,0.7,-39.8726578,0.499999762,80.6105652,-0.424718827,0.565344036,-0.565343738,-0.424719363,1.0,1.0,1.0],[298.0,9.0,-26.0953732,9.099999,-2.2120986,-0.694939137,-0.13061215,-0.13061218,0.694939256,0.7,0.7000001,0.7000001,-44.8928146,0.5,85.83986,-0.694939137,-0.13061215,-0.13061218,0.694939256,1.0,1.0,1.0],[299.0,9.0,-24.51922,9.179832,-4.68775463,0.677845836,0.314074516,-0.412884474,0.520969033,0.700000048,0.700000167,0.700000167,-42.64117,0.61404705,82.30321,0.677845836,0.314074516,-0.412884474,0.520969033,1.0,1.0,1.0],[300.0,9.0,-25.9761238,9.165044,-4.25782776,-0.0836213753,0.600437641,-0.06325005,-0.7927683,0.700000048,0.700000048,0.700000048,-44.72246,0.5929198,82.91739,-0.0836213753,0.600437641,-0.06325005,-0.7927683,1.0,1.0,1.0],[301.0,9.0,-25.1514912,9.100002,-2.84211564,-0.277853876,0.65024215,0.6502156,0.277852654,0.7000003,0.700000167,0.7000001,-43.54441,0.500003338,84.9398346,-0.277853876,0.65024215,0.6502156,0.277852654,1.0,1.0,1.0],[302.0,9.0,-26.6621952,9.107456,-4.77056551,-0.582873464,-0.5961979,-0.3859,-0.394826382,0.700000048,0.700000048,0.700000167,-45.70256,0.5106523,82.184906,-0.582873464,-0.5961979,-0.3859,-0.394826382,1.0,1.0,1.0],[303.0,9.0,-24.8289852,9.099935,-3.81133413,0.5822211,0.400711179,0.401765257,-0.582266152,0.700000048,0.7000001,0.7,-43.0836868,0.4999063,83.55524,0.5822211,0.400711179,0.401765257,-0.582266152,1.0,1.0,1.0],[304.0,9.0,-29.3318386,9.099999,-9.072314,0.934966266,-1.551597E-07,0.354736626,-2.04412967E-08,0.7,0.7,2.09999967,-49.5163345,0.5,76.03955,0.934966266,-1.551597E-07,0.354736626,-2.04412967E-08,1.0,1.0,3.0],[305.0,9.0,
-25.2540512,9.100001,-7.763916,0.000116641429,-0.722338,0.0005358082,-0.691539943,0.7000001,0.7,2.1,-43.6909256,0.500001431,77.90869,0.000116641429,-0.722338,0.0005358082,-0.691539943,1.0,1.0,3.0],[306.0,9.0,-25.27221,9.983957,-3.75919938,-0.13407588,0.9574606,-0.2321758,-0.10671138,0.700000048,0.6999999,2.10000014,-43.7168655,1.76279652,83.629715,-0.13407588,0.9574606,-0.2321758,-0.10671138,1.0,1.0,3.0],[307.0,9.0,-26.6386337,9.496488,-3.17870021,0.6470717,0.7059082,0.288082153,0.000723225065,0.700000048,0.7,2.1,-45.6689,1.066411,84.459,0.6470717,0.7059082,0.288082153,0.000723225065,1.0,1.0,3.0],[308.0,9.0,-28.5405788,9.099999,-5.36385,0.7085463,-2.86534458E-07,0.705664337,-4.95809843E-07,0.6999999,0.7,2.1,-48.3859673,0.500000238,81.33736,0.7085463,-2.86534458E-07,0.705664337,-4.95809843E-07,1.0,1.0,3.0],[309.0,9.0,-28.6278057,9.1,-8.13075,8.382036E-08,-0.9239108,8.320793E-08,-0.382608,0.7000001,0.7,2.10000038,-48.51058,0.5000007,77.38464,8.382036E-08,-0.9239108,8.320793E-08,-0.382608,1.0,1.0,3.0],[310.0,9.0,-32.378006,9.099999,-11.60081,-1.143359E-08,0.9238721,1.91102281E-08,0.382701367,0.700000048,0.7,2.10000038,-53.8680077,0.5,72.4274139,-1.143359E-08,0.9238721,1.91102281E-08,0.382701367,1.0,1.0,3.0],[311.0,9.0,-33.33683,9.099999,-13.3144941,3.01149782E-07,0.908594549,1.520768E-07,0.4176793,0.7000001,0.7,2.10000038,-55.23776,0.500000238,69.9792938,3.01149782E-07,0.908594549,1.520768E-07,0.4176793,1.0,1.0,3.0],[312.0,9.0,-33.0167274,9.099999,-12.3977633,0.94542253,8.18564729E-07,0.325847059,-2.8636018E-08,0.7,0.7,2.10000038,-54.7804642,0.500000238,71.28891,0.94542253,8.18564729E-07,0.325847059,-2.8636018E-08,1.0,1.0,3.0],[313.0,9.0,-32.1386528,9.099999,-10.220314,0.5075693,-0.507572234,0.4923127,-0.492313027,0.7000001,0.700000167,2.10000038,-53.52607,0.5,74.39955,0.5075693,-0.507572234,0.4923127,-0.492313027,1.0,1.0,3.0],[314.0,9.0,-27.28746,9.200291,-7.00453949,0.932806,0.169085354,0.3132052,0.0564409681,0.700000048,0.7000001,2.1,-46.5957947,0.6432719,78.993515,
0.932806,0.169085354,0.3132052,0.0564409681,1.0,1.0,3.0],[315.0,9.0,-28.4297886,9.245355,-2.06163335,0.0256563313,0.0222268477,0.7553289,0.6544663,0.700000167,2.10000062,2.10000014,-48.22769,0.7076502,86.05481,0.0256563313,0.0222268477,0.7553289,0.6544663,1.0,3.0,3.0],[316.0,9.0,-33.4663124,9.94564,-16.8558941,-0.191609174,0.422868729,-0.799671,0.380781174,0.7,2.1,2.10000014,-55.4227333,1.7080574,64.92015,-0.191609174,0.422868729,-0.799671,0.380781174,1.0,3.0,3.0],[317.0,9.0,-31.99216,9.843304,-8.61891,0.7342826,-0.179069981,0.6361566,-0.1551382,0.7000001,2.10000134,2.10000062,-53.3167953,1.56186223,76.68727,0.7342826,-0.179069981,0.6361566,-0.1551382,1.0,3.0,3.0],[318.0,9.0,-34.5470428,9.935268,-17.0730743,-0.2812296,0.609075665,-0.668454468,0.321100354,0.700000167,2.10000014,2.10000038,-56.9666328,1.69324076,64.60989,-0.2812296,0.609075665,-0.668454468,0.321100354,1.0,3.0,3.0],[319.0,9.0,-35.54426,9.100169,-17.5913773,-0.494378,0.494295537,-0.5055773,0.5056223,0.7,2.10000014,2.10000014,-58.3912239,0.500241756,63.86946,-0.494378,0.494295537,-0.5055773,0.5056223,1.0,3.0,3.0],[320.0,9.0,-34.72929,9.577203,-21.189024,0.19372575,0.3374369,-0.465175539,-0.79512167,0.7,2.10000014,2.1,-57.2269859,1.1817193,58.7299652,0.19372575,0.3374369,-0.465175539,-0.79512167,1.0,3.0,3.0],[321.0,9.0,-34.7175,9.09973,-23.3037529,-0.5024186,-0.5022444,-0.497495264,-0.4978199,0.700000048,2.10000038,2.10000038,-57.2101364,0.499614,55.7089233,-0.5024186,-0.5022444,-0.497495264,-0.4978199,1.0,3.0,3.0],[322.0,9.0,-37.5138,9.800047,-18.3394,0.0001449463,0.9125011,6.84226761E-05,0.409074247,0.7,2.1,2.1,-61.2048569,1.50006628,62.8008575,0.0001449463,0.9125011,6.84226761E-05,0.409074247,1.0,3.0,3.0],[323.0,9.0,-38.5029259,9.800088,-19.7595329,0.000250020385,0.9728069,5.31461119E-05,0.2316177,0.700000048,2.1,2.10000014,-62.6178932,1.50012636,60.7720947,0.000250020385,0.9728069,5.31461119E-05,0.2316177,1.0,3.0,3.0],[324.0,9.0,-40.9682579,11.1997032,-22.3827381,0.6321225,0.631842136,0.3168544,0.3174
9022,0.7000002,2.10000038,2.10000038,-66.13979,3.499576,57.02466,0.6321225,0.631842136,0.3168544,0.31749022,1.0,3.0,3.0],[325.0,9.0,-41.51899,9.800466,-22.9056664,0.0006735645,0.9063604,0.00012709749,0.422504872,0.6999999,2.10000014,2.10000014,-66.92655,1.50066566,56.27762,0.0006735645,0.9063604,0.00012709749,0.422504872,1.0,3.0,3.0],[326.0,9.0,-28.3111153,9.8,-11.5622406,-0.6692244,0.228338748,-0.228338748,-0.66922456,2.80000019,2.1,2.1,-48.05816,1.50000072,72.48251,-0.6692244,0.228338748,-0.228338748,-0.66922456,4.0,3.0,3.0],[327.0,9.0,-34.5418167,9.8,-4.462425,-0.629499853,-0.32207045,-0.322071224,0.6295004,2.8,2.10000038,2.1,-56.95916,1.5,82.62511,-0.629499853,-0.32207045,-0.322071224,0.6295004,4.0,3.0,3.0],[328.0,9.0,-40.8381157,9.800007,-20.7580719,1.25861707E-05,0.968365,6.668073E-05,0.249538,2.8,2.1,2.1,-65.95387,1.50001073,59.34561,1.25861707E-05,0.968365,6.668073E-05,0.249538,4.0,3.0,3.0],[329.0,9.0,-37.24356,10.4818668,-23.6804771,0.464927346,0.6470567,-0.433762044,-0.420726478,2.80000043,2.10000014,2.1,-60.8187943,2.4740963,55.1707458,0.464927346,0.6470567,-0.433762044,-0.420726478,4.0,3.0,3.0],[330.0,9.0,-39.6880264,11.9000254,-19.5958118,-7.81476556E-05,0.91985786,5.409046E-05,0.392251968,2.80000067,2.1,2.10000086,-64.31089,4.500036,61.00598,-7.81476556E-05,0.91985786,5.409046E-05,0.392251968,4.0,3.0,3.0],[331.0,9.0,-39.0888939,12.6852808,-22.8805771,0.6588871,-0.30005905,0.3272409,0.607244432,2.8,2.1,2.10000014,-63.45499,5.621829,56.31346,0.6588871,-0.30005905,0.3272409,0.607244432,4.0,3.0,3.0],[332.0,8.0,-4.29540253,0.0,32.55,0.0,0.0,0.0,1.0,0.7,0.7,0.7,-6.1362896,0.0,46.5,0.0,0.0,0.0,1.0,1.0,1.0,1.0],[333.0,3.0,-4.29540253,0.0,32.55,0.0,1.0,0.0,0.0,0.7,0.7,0.7,0.0,0.0,0.0,0.0,1.0,0.0,0.0,1.0,1.0,1.0],[334.0,9.0,23.7045956,4.375,-9.45,0.0,1.0,0.0,0.0,0.7,8.75,2.8,-40.0,6.25,60.0,0.0,0.0,0.0,1.0,1.0,12.5,4.0],[335.0,9.0,23.7045956,4.375,4.54999971,0.0,1.0,0.0,0.0,0.7,8.75,2.8,-40.0,6.25,40.0,0.0,0.0,0.0,1.0,1.0,12.5,4.0],[336.0,9.0,23.7045956,6.125,-2
.45,0.0,0.707107842,0.707105756,0.0,0.7,11.2,3.85,-40.0,8.75,50.0,-0.707105756,0.0,0.0,0.707107842,1.0,16.0,5.5],[337.0,9.0,31.7545948,4.375,-10.5,0.0,-0.7071037,0.0,0.7071099,0.7000001,8.75,16.800005,-51.5,6.25,61.5,0.0,-0.7071099,0.0,-0.7071037,1.0,12.5,24.0],[338.0,9.0,39.8045959,4.375,-2.45,0.0,-1.0,0.0,5.96046448E-06,0.7,8.75,16.8,-63.0,6.25,50.0,0.0,-5.96046448E-06,0.0,-1.0,1.0,12.5,24.0],[339.0,9.0,31.7545948,4.375,5.6,0.0,-0.7071037,0.0,0.7071099,0.7000001,8.75,16.800005,-51.5,6.25,38.5,0.0,-0.7071099,0.0,-0.7071037,1.0,12.5,24.0],[340.0,9.0,36.6545944,3.5,-2.45,0.707105756,-0.7071079,4.214679E-06,4.21469031E-06,0.7000001,7.00000143,16.8000031,-58.5,5.0,50.0,-4.214679E-06,-4.21469031E-06,0.707105756,-0.7071079,1.0,10.0,24.0],[341.0,9.0,31.7545948,8.4,2.45,0.500002861,-0.499998629,-0.4999955,0.5000031,0.7000001,7.00000143,16.800005,-51.5,12.0,43.0,0.4999955,-0.5000031,0.500002861,-0.499998629,1.0,10.0,24.0],[342.0,10.0,67.8052444,8.75,-37.4494553,0.0,5.96046448E-06,0.0,-1.0,0.7,0.7,0.7,103.000931,12.5,-99.99922,0.0,5.96046448E-06,0.0,-1.0,1.0,1.0,1.0],[343.0,9.0,23.7046547,13.125,4.550096,0.0,5.96046448E-06,0.0,-1.0,0.7,8.75,2.8,-63.00013,6.25,60.00011,0.0,0.0,0.0,1.0,1.0,12.5,4.0],[344.0,9.0,23.7048244,13.125,-9.44990349,0.0,5.96046448E-06,0.0,-1.0,0.7,8.75,2.8,-63.00013,6.25,40.00011,0.0,0.0,0.0,1.0,1.0,12.5,4.0],[345.0,9.0,23.70474,16.975,-2.44990373,0.707105756,4.214691E-06,4.214679E-06,-0.707107842,0.7,11.2,1.05006123,-63.00013,11.75,50.00011,-0.707105756,0.0,0.0,0.707107842,1.0,16.0,1.5000875],[346.0,9.0,39.8047829,13.125,-2.449789,0.0,0.0,0.0,1.0,0.7,8.75,16.8,-40.0000725,6.25,50.0,0.0,-5.96046448E-06,0.0,-1.0,1.0,12.5,24.0],[347.0,9.0,31.7549267,13.125,-10.4998846,0.0,0.7071057,0.0,0.707107961,0.7000001,8.75,16.800005,-51.5,6.25,38.5,0.0,-0.7071099,0.0,-0.7071037,1.0,12.5,24.0],[348.0,9.0,26.8545952,11.9,-2.449936,8.429358E-06,-1.364242E-12,-0.707105756,0.7071079,0.7000001,7.00000143,16.8,-58.5003357,4.5,50.00001,-4.214679E-06,-4.21469031E-06,0.707105
756,-0.7071079,1.0,10.0,24.0],[349.0,9.0,34.4022331,17.15,-2.44985318,8.429358E-06,-1.364242E-12,-0.707105756,0.7071079,0.7000001,11.50597,16.8,-47.718,12.0,50.0,-4.214679E-06,-4.21469031E-06,0.707105756,-0.7071079,1.0,16.4370975,24.0],[350.0,9.0,24.754734,13.125,5.60008526,0.0,0.7071099,0.0,-0.70710367,0.7,8.75,2.8,-61.5,6.25,61.5000763,0.0,-0.7071057,0.0,0.7071079,1.0,12.5,4.0],[351.0,9.0,38.754734,13.125,5.60025358,0.0,0.7071099,0.0,-0.70710367,0.7,8.75,2.8,-41.5,6.25,61.50008,0.0,-0.7071057,0.0,0.7071079,1.0,12.5,4.0],[352.0,9.0,31.754734,16.1,5.600168,0.499996871,0.500003159,0.500001431,-0.499998569,0.7,11.2,2.80016327,-51.5,10.5,61.5000763,-0.499999851,-0.5000002,-0.49999845,0.50000155,1.0,16.0,4.000233],[353.0,21.0,36.6545944,22.1374989,2.45,0.0,0.0,0.0,1.0,7.0,9.275,7.0,-44.50025,19.125,56.999752,0.0,-5.96046448E-06,0.0,-1.0,10.0,13.25,10.0],[354.0,21.0,38.5795937,20.2125,-4.54999971,0.0,0.0,0.0,1.0,3.14999986,5.774802,7.0,-41.75037,16.375,46.9997177,0.0,-5.96046448E-06,0.0,-1.0,4.5,8.249718,10.0],[355.0,21.0,36.6545944,18.62966,-5.95,0.0,0.0,0.0,1.0,6.99993,2.609547,9.8,-44.5003929,14.1138,44.999752,0.0,-5.96046448E-06,0.0,-1.0,9.9999,3.727924,14.0],[356.0,5.0,30.5531883,16.8874989,-19.77409,0.0,-0.707105756,0.0,0.707107842,18.065155,1.21105969,3.79988,43.64741,24.125,-28.2487,0.0,-0.707105756,0.0,0.707107842,25.8073635,1.73008537,5.4284],[357.0,5.0,11.8045969,10.20429,-37.47416,0.972841,0.221833527,0.0644307062,0.0147955557,51.8,1.21105981,5.30544,16.86371,14.5775585,-53.53451,0.972841,0.221833527,0.0644307062,0.0147955557,74.0,1.73008537,7.5792],[358.0,6.0,37.3545952,-8.750001,37.1,0.0,0.0,0.0,1.0,16.27444,17.5,16.6219845,53.36371,-12.5000019,53.0,0.0,0.0,0.0,1.0,23.2492,25.0,23.7456913],[359.0,4.0,38.0545959,-0.7,24.3512478,0.0,-0.707106769,0.0,0.7071068,24.7972183,1.21105969,5.7057,54.36371,-1.0,34.7875,0.0,-0.707106769,0.0,0.7071068,35.4246,1.73008537,8.151],[360.0,22.0,16.7045975,0.7,-21.6999989,0.0,0.0,0.0,1.0,3.5,1.4,5.25,23.86371,1.0,-31.0,0.0,0.0,
0.0,1.0,5.0,2.0,7.5],[361.0,9.0,16.7045975,0.7,-21.6999989,0.0,0.0,0.0,1.0,3.5,1.4,5.25,0.0,0.0,0.0,0.0,0.0,0.0,1.0,1.0,1.0,1.0],[362.0,9.0,16.7045975,1.75,-21.6999989,0.0,0.0,0.0,1.0,2.8,1.4,2.45,0.0,0.75,0.0,0.0,0.0,0.0,1.0,0.8,1.0,0.4666667],[363.0,23.0,16.7045975,1.75,-21.6999989,0.0,0.0,0.0,1.0,2.8,1.4,2.45,0.0,0.0,0.0,0.0,0.0,0.0,1.0,1.0,1.0,1.0],[364.0,18.0,34.6499977,0.0,38.1499977,0.0,-0.9914446,0.0,0.130528882,0.7000004,0.7,0.7000004,49.5,0.0,54.5,0.0,-0.9914446,0.0,0.130528882,1.0,1.0,1.0],[365.0,7.0,-26.7829037,-4.83,0.0,0.0,0.0,0.0,1.0,323.489716,2.132025,91.0,-38.26129,-6.9,0.0,0.0,0.0,0.0,1.0,462.128174,3.04575014,130.0]]}"""
AGENT_GOAL_SCENE_GRAPH_JSONSTR = """{"SourceNodes":[0],"DestinationNodes":[1],"NumNodes":2,"Features":[[0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,1.0,1.0,1.0,1.0,0.0,0.0,0.0,0.0,0.0,0.0,1.0,1.0,1.0,1.0],[1.0,1.0,0.0,0.0,0.0,0.0,0.0,0.0,1.0,1.0,1.0,1.0,0.0,0.0,0.0,0.0,0.0,0.0,1.0,1.0,1.0,1.0]]}"""
deprecated_URBAN_SCENE_GRAPH_JSONSTR = """{"SourceNodes":[0,0,0,0,0,0,6,7,7,7,7,7,7,7,7,6,16,16,16,16,16,16,16,16,16,16,0,27,27,27,27,0,32,33,33,33,33,33,33,33,33,32,42,42,42,42,42,42,42,42,42,42,32,53,53,53,53,53,53,53,53,53,53,32,64,64,64,64,64,64,64,64,64,64,64,64,0,77,77,79,79,79,79,79,79,79,79,79,79,79,90,90,90,79,94,94,94,94,94,94,79,101,101,101,79,77,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,106,77,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,218,0,330,331,331,331,331,331,331,331,331,330,340,340,340,340,340,340,340,340,340,340,340,340,340,0,0,0,0,0,358,358,360,0,0],"DestinationNodes":[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167,168,169,170,171,172,173,174,175,176,177,178,179,
180,181,182,183,184,185,186,187,188,189,190,191,192,193,194,195,196,197,198,199,200,201,202,203,204,205,206,207,208,209,210,211,212,213,214,215,216,217,218,219,220,221,222,223,224,225,226,227,228,229,230,231,232,233,234,235,236,237,238,239,240,241,242,243,244,245,246,247,248,249,250,251,252,253,254,255,256,257,258,259,260,261,262,263,264,265,266,267,268,269,270,271,272,273,274,275,276,277,278,279,280,281,282,283,284,285,286,287,288,289,290,291,292,293,294,295,296,297,298,299,300,301,302,303,304,305,306,307,308,309,310,311,312,313,314,315,316,317,318,319,320,321,322,323,324,325,326,327,328,329,330,331,332,333,334,335,336,337,338,339,340,341,342,343,344,345,346,347,348,349,350,351,352,353,354,355,356,357,358,359,360,361,362,363],"NumNodes":364,"Features":[[0.0,0.0,0.0,0.0,0.0,0.0,0.0],[1.0,0.0,-35.0,0.0,0.0,-50.0,0.0],[2.0,10.6145983,-0.7,37.8,15.1637115,-1.0,54.0],[3.0,2.38959742,-0.7,0.0,3.41371059,-1.0,0.0],[4.0,27.9045963,-8.749999,-16.304821,39.86371,-12.499999,-23.2926],[5.0,-26.7829037,-8.75,0.0,-38.26129,-12.5,0.0],[6.0,0.9545973,0.0,-2.1,1.3637104,0.0,-3.0],[7.0,0.9545973,0.0,-2.1,0.0,0.0,0.0],[8.0,28.9545956,4.375,-44.1,-40.0,6.25,60.0],[9.0,28.9545956,4.375,-30.1,-40.0,6.25,40.0],[10.0,28.9545956,6.125,-37.1,-40.0,8.75,50.0],[11.0,37.0045967,4.375,-45.1499977,-51.5,6.25,61.5],[12.0,45.0545959,4.375,-37.1,-63.0,6.25,50.0],[13.0,37.0045967,4.375,-29.05,-51.5,6.25,38.5],[14.0,41.9045944,3.5,-37.1,-58.5,5.0,50.0],[15.0,32.1045952,8.4,-37.1,-44.5,12.0,50.0],[16.0,73.0552444,8.75,-72.09946,103.000931,12.5,-99.99922],[17.0,28.9546547,13.125,-30.0999031,-63.00013,6.25,60.00011],[18.0,28.9548244,13.125,-44.0999031,-63.00013,6.25,40.00011],[19.0,28.95474,16.1,-37.0999031,-63.00013,10.5,50.00011],[20.0,45.0547829,13.125,-37.09979,-40.0000725,6.25,50.0],[21.0,37.0049248,13.125,-45.1498833,-51.5,6.25,38.5],[22.0,41.9045944,12.25,-37.0999451,-44.5003357,5.0,49.99983],[23.0,33.4298325,17.15,-37.0998268,-56.60714,12.0,50.000145],[24.0,30.004734,13.125,-29.0499134,-61.5,6.2
5,61.5000763],[25.0,44.004734,13.125,-29.0497456,-41.5,6.25,61.50008],[26.0,37.004734,16.8874989,-29.0498314,-51.5,11.625,61.5000763],[27.0,0.0,0.0,0.0,0.0,0.0,0.0],[28.0,-52.5,14.0,0.0,-75.0,20.0,0.0],[29.0,52.5,14.0,0.0,75.0,20.0,0.0],[30.0,0.0,14.0,52.5,0.0,20.0,75.0],[31.0,0.0,14.0,-52.5,0.0,20.0,-75.0],[32.0,-38.2499962,0.0,41.2500267,-54.6428528,0.0,58.9286079],[33.0,5.849971,0.0,-2.49984622,62.9999542,0.0,-62.4998169],[34.0,-22.1500282,4.375,39.5001526,-40.0,6.25,60.0],[35.0,-22.1500282,4.375,25.5001526,-40.0,6.25,40.0],[36.0,-22.1500282,6.125,32.5001526,-40.0,8.75,50.0],[37.0,-30.20003,4.375,40.55015,-51.5,6.25,61.5],[38.0,-38.2500267,4.375,32.5001526,-63.0,6.25,50.0],[39.0,-30.20003,4.375,24.4501534,-51.5,6.25,38.5],[40.0,-35.10003,3.5,32.5001526,-58.5,5.0,50.0],[41.0,-25.3000278,8.4,32.5001526,-44.5,12.0,50.0],[42.0,-65.2,8.75,-3.54997444,-38.5,12.5,-64.0],[43.0,-23.200079,13.125,24.4501476,-40.0,6.25,60.0],[44.0,-37.20008,13.125,24.4501076,-40.0,6.25,40.0],[45.0,-30.200079,16.1,24.4501266,-40.0,10.5,50.0],[46.0,-30.2001247,13.125,40.5501251,-63.0,6.25,50.0],[47.0,-38.2501,13.125,32.500103,-51.5,6.25,38.5],[48.0,-30.2001171,12.25,37.4001274,-58.5,5.0,50.0],[49.0,-30.2000866,17.15,27.6001263,-44.5,12.0,50.0],[50.0,-22.1500683,13.125,39.50015,-61.5,6.25,61.5000763],[51.0,-22.1500263,13.125,25.5001488,-41.5,6.25,61.50008],[52.0,-22.1500473,16.1,32.50015,-51.5,10.5,61.5000763],[53.0,-66.25028,17.5,67.49997,-40.000412,25.0,37.4999237],[54.0,-22.1499462,21.875,25.5001526,-63.00013,6.25,60.00011],[55.0,-22.1500282,21.875,39.5001526,-63.00013,6.25,40.00011],[56.0,-22.1499882,24.85,32.5001526,-63.00013,10.5,50.00011],[57.0,-38.2500267,21.875,32.5001373,-40.0000725,6.25,50.0],[58.0,-30.2001247,21.875,40.5501862,-51.5,6.25,38.5],[59.0,-25.30008,21.0,32.5002136,-58.5,5.0,50.0],[60.0,-35.10008,25.9,32.5001564,-44.5,12.0,50.0],[61.0,-23.2000313,21.875,24.4501724,-61.5,6.25,61.5000763],[62.0,-37.20003,21.875,24.4500866,-41.5,6.25,61.50008],[63.0,-30.2000313,24.85,24.4501
324,-51.5,10.5,61.5000763],[64.0,-65.2,26.25,-3.54997444,-38.5,37.5,-64.0],[65.0,-23.200079,30.625,24.4501476,-40.0,6.25,60.0],[66.0,-37.20008,30.625,24.4501076,-40.0,6.25,40.0],[67.0,-30.200079,34.475,24.4501266,-40.0,11.75,50.0],[68.0,-30.2001247,30.625,40.5501251,-63.0,6.25,50.0],[69.0,-30.2001171,29.75,37.4001274,-58.5,5.0,50.0],[70.0,-26.0750923,34.6499977,32.5001564,-51.5000229,12.0,55.892868],[71.0,-22.1500683,30.625,39.50015,-61.5,6.25,61.5000763],[72.0,-22.1500263,30.625,25.5001488,-41.5,6.25,61.50008],[73.0,-22.1500473,34.6499977,32.50015,-51.5,12.0,61.5000763],[74.0,-38.2499962,30.625,39.5001068,-61.5000038,6.25,38.50018],[75.0,-38.2500267,30.625,25.500103,-41.4999962,6.25,38.5000763],[76.0,-38.2499962,34.6499977,32.5001,-51.4999962,12.0,38.50015],[77.0,5.329597,0.0,-62.3,7.6137104,0.0,-89.0],[78.0,-42.6204033,0.0,18.9,-68.5,0.0,116.0],[79.0,2.17959714,0.0,-72.1,-4.5,0.0,-14.0],[80.0,-17.7704029,8.749999,-44.1,-28.5,12.499999,40.0],[81.0,-17.7704029,6.65,-36.925,-28.5,9.5,50.25],[82.0,-37.0204048,4.375,-29.05,-56.0,6.25,61.5],[83.0,-45.0704041,4.375,-17.5,-67.5,6.25,78.0],[84.0,-31.4204044,8.749999,-45.1499977,-48.0,12.499999,38.5],[85.0,-40.1704025,3.5,-37.1,-60.5,5.0,50.0],[86.0,-31.7704029,6.73749971,10.15,-48.5,9.625,117.5],[87.0,-17.7704029,4.375,-10.32501,-28.5,6.25,88.2499847],[88.0,-37.0204048,4.375,10.15,-56.0,6.25,117.5],[89.0,-31.5954037,8.4,-9.450006,-48.25,11.999999,89.49999],[90.0,2.17959714,0.0,-71.75,0.0,0.0,0.5],[91.0,-17.7704182,13.125,4.4625,-28.5000229,18.75,108.875],[92.0,-17.7704029,16.1,2.1,-28.5,23.0,105.5],[93.0,-17.7704029,9.1875,-10.325,-28.4999981,13.125,87.75],[94.0,2.17959714,0.0,-71.75,0.0,0.0,0.5],[95.0,-17.7704182,13.125,9.099999,-28.5000229,18.75,115.5],[96.0,-17.7703781,13.125,-29.75,-28.4999657,18.75,60.0],[97.0,-17.7704029,16.1,-10.325,-28.4999981,23.0,87.75],[98.0,-17.7703781,13.125,-17.15,-28.4999657,18.75,78.0],[99.0,-45.0704041,13.125,-17.5,-67.5,18.75,77.5],[100.0,-30.3677444,9.275,10.15,-46.4962,13.25,117.0],[101
.0,36.8295937,0.0,30.1,49.5,0.0,146.0],[102.0,-44.020462,13.125,10.150219,-28.5000229,18.75,115.5],[103.0,-30.2829628,16.1,10.150197,-28.5,23.0,95.875],[104.0,-28.6204624,13.125,10.185194,-28.45,18.75,93.5],[105.0,-14.6204023,9.4,-9.575023,-24.0,13.4285707,89.3213959],[106.0,5.329597,0.0,-62.3,0.0,0.0,0.0],[107.0,-42.4796143,0.350000322,-12.2789774,-68.2988739,0.5000005,71.4586],[108.0,-44.1787872,1.02814519,-6.062579,-70.726265,1.46877885,80.33917],[109.0,-43.9167,0.349999875,-3.40537572,-70.35185,0.499999821,84.13518],[110.0,-44.1803551,0.350000083,-8.472802,-70.7285,0.5000001,76.8959961],[111.0,-43.4541435,0.429832876,-5.629256,-69.6910553,0.614047,80.9582062],[112.0,-42.77045,0.4150438,-6.98568726,-68.7143555,0.5929197,79.02045],[113.0,-44.36704,0.3500023,-7.35342836,-70.99519,0.5000033,78.4951],[114.0,-41.9270668,0.3574566,-7.134967,-67.50952,0.5106523,78.80719],[115.0,-43.881115,0.3499344,-6.45493,-70.30102,0.499906272,79.77867],[116.0,-36.96356,0.35,-6.13710165,-60.4187965,0.5,80.23271],[117.0,-40.7084465,0.350001037,-4.05958557,-65.76863,0.5000015,83.20059],[118.0,-43.61579,1.23395753,-6.813763,-69.92198,1.76279652,79.26605],[119.0,-43.1038,0.7464877,-8.207322,-69.19057,1.066411,77.27525],[120.0,-40.20888,0.350000173,-8.098396,-65.05497,0.500000238,77.43086],[121.0,-38.131916,0.350000441,-6.268266,-62.08788,0.500000656,80.0453339],[122.0,-33.0352058,0.350000024,-6.62743044,-54.80686,0.50000006,79.53224],[123.0,-31.12954,0.3500002,-6.153604,-52.0844841,0.5000003,80.20914],[124.0,-32.017,0.3500001,-6.54766369,-53.35228,0.5000002,79.6461945],[125.0,-34.205513,0.350000024,-7.39778757,-56.4787331,0.50000006,78.43173],[126.0,-39.8703842,0.450290382,-6.06184244,-64.5714,0.643272,80.3402252],[127.0,-42.6923,0.49535504,-10.2777567,-68.60271,0.707650065,74.31749],[128.0,-28.4589577,1.19564021,-3.8241837,-48.2693672,1.7080574,83.53688],[129.0,-35.4733276,1.09330356,-8.387012,-58.2898979,1.56186223,77.0185547],[130.0,-27.5609341,1.18526852,-4.463472,-46.9864731,1.693240
76,82.62361],[131.0,-26.5005264,0.350169182,-4.835774,-45.4716034,0.5002417,82.09175],[132.0,-24.43534,0.8272036,-1.77926636,-42.52134,1.18171942,86.45819],[133.0,-22.901619,0.349729747,-0.3232971,-40.330307,0.4996139,88.53815],[134.0,-24.6071587,1.05004644,-5.75976849,-42.7667923,1.5000664,80.77176],[135.0,-22.894783,1.05008852,-5.50895357,-40.32054,1.50012648,81.1300659],[136.0,-19.29491,2.44970322,-5.510962,-35.1778679,3.499576,81.1272],[137.0,-18.53671,1.05046582,-5.55458355,-34.0947266,1.50066555,81.06488],[138.0,-35.8468323,1.05000043,-3.68874669,-58.82347,1.50000072,83.73036],[139.0,-36.75868,1.05,-13.090745,-60.1261063,1.5,70.2989349],[140.0,-20.568491,1.05000746,-6.52805328,-36.99727,1.50001073,79.67421],[141.0,-20.8980446,1.73186743,-1.90715182,-37.46806,2.4740963,86.2755],[142.0,-22.2030334,3.150025,-6.4850297,-39.33233,4.500036,79.73567],[143.0,-20.218235,3.93528032,-3.80003357,-36.4969025,5.621829,83.57138],[144.0,-39.848793,0.350000322,2.57730865,-64.54056,0.5000005,92.68187],[145.0,-34.3033257,1.02814519,5.8604064,-56.61846,1.46877885,97.37201],[146.0,-31.6714649,0.349999815,6.31059551,-52.8586578,0.499999762,98.01514],[147.0,-36.6281128,0.35,5.22432232,-59.9395828,0.5,96.46332],[148.0,-33.6937523,0.429832935,5.276222,-55.7476425,0.61404705,96.53746],[149.0,-34.82101,0.415043861,4.258052,-57.35801,0.5929198,95.08293],[150.0,-35.598,0.350002319,5.7004776,-58.468,0.500003338,97.14354],[151.0,-34.74185,0.3574566,3.4052155,-57.2449265,0.5106523,93.86459],[152.0,-34.60297,0.3499344,5.469551,-57.046524,0.4999063,96.8136444],[153.0,-32.4665146,0.35,-1.1174835,-53.9944458,0.5,87.403595],[154.0,-31.4536648,0.350001,3.043573,-52.5475159,0.500001431,93.34796],[155.0,-34.8788223,1.23395753,5.118739,-57.4405975,1.76279652,96.3124847],[156.0,-36.0872879,0.7464877,4.256354,-59.1669769,1.066411,95.0805054],[157.0,-35.2164536,0.350000173,1.49338531,-57.9229279,0.500000238,91.13341],[158.0,-32.90206,0.3500005,-0.0254638661,-54.6166573,0.5000007,88.96362],[159.0,-31.900
1751,0.35,-5.035608,-53.1853867,0.5,81.8062744],[160.0,-30.939106,0.350000173,-6.748042,-51.8124352,0.500000238,79.35994],[161.0,-31.5539017,0.350000173,-5.99644136,-52.690712,0.500000238,80.4336548],[162.0,-32.9526749,0.35,-4.11078024,-54.6889572,0.5,83.12746],[163.0,-33.16288,0.450290352,1.70567322,-54.98925,0.6432719,91.436676],[164.0,-37.9751167,0.495355129,3.31180882,-61.8638763,0.7076502,93.7311554],[165.0,-27.9861889,1.19564021,-8.707265,-47.59398,1.7080574,76.56105],[166.0,-34.24205,1.09330356,-3.14981842,-56.5309258,1.56186223,84.50026],[167.0,-28.3651447,1.18526852,-9.742407,-48.1353455,1.69324076,75.0822754],[168.0,-28.4436741,0.3501692,-10.863533,-48.2475281,0.500241756,73.48067],[169.0,-24.9497356,0.8272035,-12.0465879,-43.25619,1.1817193,71.79059],[170.0,-23.1398945,0.3497298,-13.1405191,-40.6707,0.499614,70.22783],[171.0,-28.8338833,1.05004632,-12.9338923,-48.8049736,1.50006628,70.52301],[172.0,-28.1390038,1.05008841,-14.5189114,-47.81229,1.50012636,68.2587],[173.0,-27.1886425,2.44970322,-17.991066,-46.45463,3.499576,63.2984772],[174.0,-27.0301456,1.050466,-18.73379,-46.2282066,1.50066566,62.237442],[175.0,-29.8099575,1.05000043,-1.546759,-50.1993637,1.50000072,86.7903442],[176.0,-39.11823,1.05,-3.1545608,-63.4968948,1.5,84.4934845],[177.0,-28.5064259,1.05000746,-17.031908,-48.3371735,1.50001073,64.6687],[178.0,-24.13731,1.73186743,-15.4917078,-42.09558,2.4740963,66.86899],[179.0,-28.8973236,3.150025,-15.44422,-48.8956,4.500036,66.93683],[180.0,-25.7829189,3.93528032,-16.6480274,-44.44645,5.621829,65.2171],[181.0,-30.0344,0.350000322,-0.8470047,-50.52,0.5000005,87.78999],[182.0,-24.3391953,1.02814519,-3.86286,-42.3839874,1.46877885,83.48163],[183.0,-22.5812645,0.349999815,-5.87260437,-39.8726578,0.499999762,80.6105652],[184.0,-26.0953732,0.35,-2.2120986,-44.8928146,0.5,85.83986],[185.0,-24.51922,0.429832935,-4.68775463,-42.64117,0.61404705,82.30321],[186.0,-25.9761238,0.415043861,-4.25782776,-44.72246,0.5929198,82.91739],[187.0,-25.1514912,0.350002319
,-2.84211564,-43.54441,0.500003338,84.9398346],[188.0,-26.6621952,0.3574566,-4.77056551,-45.70256,0.5106523,82.184906],[189.0,-24.8289852,0.3499344,-3.81133413,-43.0836868,0.4999063,83.55524],[190.0,-29.3318386,0.35,-9.072314,-49.5163345,0.5,76.03955],[191.0,-25.2540512,0.350001,-7.763916,-43.6909256,0.500001431,77.90869],[192.0,-25.27221,1.23395753,-3.75919938,-43.7168655,1.76279652,83.629715],[193.0,-26.6386337,0.7464877,-3.17870021,-45.6689,1.066411,84.459],[194.0,-28.5405788,0.350000173,-5.36385,-48.3859673,0.500000238,81.33736],[195.0,-28.6278057,0.3500005,-8.13075,-48.51058,0.5000007,77.38464],[196.0,-32.378006,0.35,-11.60081,-53.8680077,0.5,72.4274139],[197.0,-33.33683,0.350000173,-13.3144941,-55.23776,0.500000238,69.9792938],[198.0,-33.0167274,0.350000173,-12.3977633,-54.7804642,0.500000238,71.28891],[199.0,-32.1386528,0.35,-10.220314,-53.52607,0.5,74.39955],[200.0,-27.28746,0.450290352,-7.00453949,-46.5957947,0.6432719,78.993515],[201.0,-28.4297886,0.495355129,-2.06163335,-48.22769,0.7076502,86.05481],[202.0,-33.4663124,1.19564021,-16.8558941,-55.4227333,1.7080574,64.92015],[203.0,-31.99216,1.09330356,-8.61891,-53.3167953,1.56186223,76.68727],[204.0,-34.5470428,1.18526852,-17.0730743,-56.9666328,1.69324076,64.60989],[205.0,-35.54426,0.3501692,-17.5913773,-58.3912239,0.500241756,63.86946],[206.0,-34.72929,0.8272035,-21.189024,-57.2269859,1.1817193,58.7299652],[207.0,-34.7175,0.3497298,-23.3037529,-57.2101364,0.499614,55.7089233],[208.0,-37.5138,1.05004632,-18.3394,-61.2048569,1.50006628,62.8008575],[209.0,-38.5029259,1.05008841,-19.7595329,-62.6178932,1.50012636,60.7720947],[210.0,-40.9682579,2.44970322,-22.3827381,-66.13979,3.499576,57.02466],[211.0,-41.51899,1.050466,-22.9056664,-66.92655,1.50066566,56.27762],[212.0,-28.3111153,1.05000043,-11.5622406,-48.05816,1.50000072,72.48251],[213.0,-34.5418167,1.05,-4.462425,-56.95916,1.5,82.62511],[214.0,-40.8381157,1.05000746,-20.7580719,-65.95387,1.50001073,59.34561],[215.0,-37.24356,1.73186743,-23.6804771,-60.818
7943,2.4740963,55.1707458],[216.0,-39.6880264,3.150025,-19.5958118,-64.31089,4.500036,61.00598],[217.0,-39.0888939,3.93528032,-22.8805771,-63.45499,5.621829,56.31346],[218.0,5.329597,8.75,-62.3,0.0,12.5,0.0],[219.0,-42.4796143,9.099999,-12.2789774,-68.2988739,0.5000005,71.4586],[220.0,-44.1787872,9.778145,-6.062579,-70.726265,1.46877885,80.33917],[221.0,-43.9167,9.099999,-3.40537572,-70.35185,0.499999821,84.13518],[222.0,-44.1803551,9.099999,-8.472802,-70.7285,0.5000001,76.8959961],[223.0,-43.4541435,9.179832,-5.629256,-69.6910553,0.614047,80.9582062],[224.0,-42.77045,9.165044,-6.98568726,-68.7143555,0.5929197,79.02045],[225.0,-44.36704,9.100002,-7.35342836,-70.99519,0.5000033,78.4951],[226.0,-41.9270668,9.107456,-7.134967,-67.50952,0.5106523,78.80719],[227.0,-43.881115,9.099935,-6.45493,-70.30102,0.499906272,79.77867],[228.0,-36.96356,9.099999,-6.13710165,-60.4187965,0.5,80.23271],[229.0,-40.7084465,9.100001,-4.05958557,-65.76863,0.5000015,83.20059],[230.0,-43.61579,9.983957,-6.813763,-69.92198,1.76279652,79.26605],[231.0,-43.1038,9.496488,-8.207322,-69.19057,1.066411,77.27525],[232.0,-40.20888,9.099999,-8.098396,-65.05497,0.500000238,77.43086],[233.0,-38.131916,9.1,-6.268266,-62.08788,0.500000656,80.0453339],[234.0,-33.0352058,9.099999,-6.62743044,-54.80686,0.50000006,79.53224],[235.0,-31.12954,9.099999,-6.153604,-52.0844841,0.5000003,80.20914],[236.0,-32.017,9.099999,-6.54766369,-53.35228,0.5000002,79.6461945],[237.0,-34.205513,9.099999,-7.39778757,-56.4787331,0.50000006,78.43173],[238.0,-39.8703842,9.200291,-6.06184244,-64.5714,0.643272,80.3402252],[239.0,-42.6923,9.245355,-10.2777567,-68.60271,0.707650065,74.31749],[240.0,-28.4589577,9.94564,-3.8241837,-48.2693672,1.7080574,83.53688],[241.0,-35.4733276,9.843304,-8.387012,-58.2898979,1.56186223,77.0185547],[242.0,-27.5609341,9.935268,-4.463472,-46.9864731,1.69324076,82.62361],[243.0,-26.5005264,9.100169,-4.835774,-45.4716034,0.5002417,82.09175],[244.0,-24.43534,9.577204,-1.77926636,-42.52134,1.18171942,86.45819]
,[245.0,-22.901619,9.09973,-0.3232971,-40.330307,0.4996139,88.53815],[246.0,-24.6071587,9.800047,-5.75976849,-42.7667923,1.5000664,80.77176],[247.0,-22.894783,9.800089,-5.50895357,-40.32054,1.50012648,81.1300659],[248.0,-19.29491,11.1997032,-5.510962,-35.1778679,3.499576,81.1272],[249.0,-18.53671,9.800466,-5.55458355,-34.0947266,1.50066555,81.06488],[250.0,-35.8468323,9.8,-3.68874669,-58.82347,1.50000072,83.73036],[251.0,-36.75868,9.8,-13.090745,-60.1261063,1.5,70.2989349],[252.0,-20.568491,9.800007,-6.52805328,-36.99727,1.50001073,79.67421],[253.0,-20.8980446,10.4818668,-1.90715182,-37.46806,2.4740963,86.2755],[254.0,-22.2030334,11.9000254,-6.4850297,-39.33233,4.500036,79.73567],[255.0,-20.218235,12.6852808,-3.80003357,-36.4969025,5.621829,83.57138],[256.0,-39.848793,9.099999,2.57730865,-64.54056,0.5000005,92.68187],[257.0,-34.3033257,9.778145,5.8604064,-56.61846,1.46877885,97.37201],[258.0,-31.6714649,9.099999,6.31059551,-52.8586578,0.499999762,98.01514],[259.0,-36.6281128,9.099999,5.22432232,-59.9395828,0.5,96.46332],[260.0,-33.6937523,9.179832,5.276222,-55.7476425,0.61404705,96.53746],[261.0,-34.82101,9.165044,4.258052,-57.35801,0.5929198,95.08293],[262.0,-35.598,9.100002,5.7004776,-58.468,0.500003338,97.14354],[263.0,-34.74185,9.107456,3.4052155,-57.2449265,0.5106523,93.86459],[264.0,-34.60297,9.099935,5.469551,-57.046524,0.4999063,96.8136444],[265.0,-32.4665146,9.099999,-1.1174835,-53.9944458,0.5,87.403595],[266.0,-31.4536648,9.100001,3.043573,-52.5475159,0.500001431,93.34796],[267.0,-34.8788223,9.983957,5.118739,-57.4405975,1.76279652,96.3124847],[268.0,-36.0872879,9.496488,4.256354,-59.1669769,1.066411,95.0805054],[269.0,-35.2164536,9.099999,1.49338531,-57.9229279,0.500000238,91.13341],[270.0,-32.90206,9.1,-0.0254638661, 
54.6166573,0.5000007,88.96362],[271.0,-31.9001751,9.099999,-5.035608,-53.1853867,0.5,81.8062744],[272.0,-30.939106,9.099999,-6.748042,-51.8124352,0.500000238,79.35994],[273.0,-31.5539017,9.099999,-5.99644136,-52.690712,0.500000238,80.4336548],[274.0,-32.9526749,9.099999,-4.11078024,-54.6889572,0.5,83.12746],[275.0,-33.16288,9.200291,1.70567322,-54.98925,0.6432719,91.436676],[276.0,-37.9751167,9.245355,3.31180882,-61.8638763,0.7076502,93.7311554],[277.0,-27.9861889,9.94564,-8.707265,-47.59398,1.7080574,76.56105],[278.0,-34.24205,9.843304,-3.14981842,-56.5309258,1.56186223,84.50026],[279.0,-28.3651447,9.935268,-9.742407,-48.1353455,1.69324076,75.0822754],[280.0,-28.4436741,9.100169,-10.863533,-48.2475281,0.500241756,73.48067],[281.0,-24.9497356,9.577203,-12.0465879,-43.25619,1.1817193,71.79059],[282.0,-23.1398945,9.09973,-13.1405191,-40.6707,0.499614,70.22783],[283.0,-28.8338833,9.800047,-12.9338923,-48.8049736,1.50006628,70.52301],[284.0,-28.1390038,9.800088,-14.5189114,-47.81229,1.50012636,68.2587],[285.0,-27.1886425,11.1997032,-17.991066,-46.45463,3.499576,63.2984772],[286.0,-27.0301456,9.800466,-18.73379,-46.2282066,1.50066566,62.237442],[287.0,-29.8099575,9.8,-1.546759,-50.1993637,1.50000072,86.7903442],[288.0,-39.11823,9.8,-3.1545608,-63.4968948,1.5,84.4934845],[289.0,-28.5064259,9.800007,-17.031908,-48.3371735,1.50001073,64.6687],[290.0,-24.13731,10.4818668,-15.4917078,-42.09558,2.4740963,66.86899],[291.0,-28.8973236,11.9000254,-15.44422,-48.8956,4.500036,66.93683],[292.0,-25.7829189,12.6852808,-16.6480274,-44.44645,5.621829,65.2171],[293.0,-30.0344,9.099999,-0.8470047,-50.52,0.5000005,87.78999],[294.0,-24.3391953,9.778145,-3.86286,-42.3839874,1.46877885,83.48163],[295.0,-22.5812645,9.099999,-5.87260437,-39.8726578,0.499999762,80.6105652],[296.0,-26.0953732,9.099999,-2.2120986,-44.8928146,0.5,85.83986],[297.0,-24.51922,9.179832,-4.68775463,-42.64117,0.61404705,82.30321],[298.0,-25.9761238,9.165044,-4.25782776,-44.72246,0.5929198,82.91739],[299.0,-25.1514912,9.1
00002,-2.84211564,-43.54441,0.500003338,84.9398346],[300.0,-26.6621952,9.107456,-4.77056551,-45.70256,0.5106523,82.184906],[301.0,-24.8289852,9.099935,-3.81133413,-43.0836868,0.4999063,83.55524],[302.0,-29.3318386,9.099999,-9.072314,-49.5163345,0.5,76.03955],[303.0,-25.2540512,9.100001,-7.763916,-43.6909256,0.500001431,77.90869],[304.0,-25.27221,9.983957,-3.75919938,-43.7168655,1.76279652,83.629715],[305.0,-26.6386337,9.496488,-3.17870021,-45.6689,1.066411,84.459],[306.0,-28.5405788,9.099999,-5.36385,-48.3859673,0.500000238,81.33736],[307.0,-28.6278057,9.1,-8.13075,-48.51058,0.5000007,77.38464],[308.0,-32.378006,9.099999,-11.60081,-53.8680077,0.5,72.4274139],[309.0,-33.33683,9.099999,-13.3144941,-55.23776,0.500000238,69.9792938],[310.0,-33.0167274,9.099999,-12.3977633,-54.7804642,0.500000238,71.28891],[311.0,-32.1386528,9.099999,-10.220314,-53.52607,0.5,74.39955],[312.0,-27.28746,9.200291,-7.00453949,-46.5957947,0.6432719,78.993515],[313.0,-28.4297886,9.245355,-2.06163335,-48.22769,0.7076502,86.05481],[314.0,-33.4663124,9.94564,-16.8558941,-55.4227333,1.7080574,64.92015],[315.0,-31.99216,9.843304,-8.61891,-53.3167953,1.56186223,76.68727],[316.0,-34.5470428,9.935268,-17.0730743,-56.9666328,1.69324076,64.60989],[317.0,-35.54426,9.100169,-17.5913773,-58.3912239,0.500241756,63.86946],[318.0,-34.72929,9.577203,-21.189024,-57.2269859,1.1817193,58.7299652],[319.0,-34.7175,9.09973,-23.3037529,-57.2101364,0.499614,55.7089233],[320.0,-37.5138,9.800047,-18.3394,-61.2048569,1.50006628,62.8008575],[321.0,-38.5029259,9.800088,-19.7595329,-62.6178932,1.50012636,60.7720947],[322.0,-40.9682579,11.1997032,-22.3827381,-66.13979,3.499576,57.02466],[323.0,-41.51899,9.800466,-22.9056664,-66.92655,1.50066566,56.27762],[324.0,-28.3111153,9.8,-11.5622406,-48.05816,1.50000072,72.48251],[325.0,-34.5418167,9.8,-4.462425,-56.95916,1.5,82.62511],[326.0,-40.8381157,9.800007,-20.7580719,-65.95387,1.50001073,59.34561],[327.0,-37.24356,10.4818668,-23.6804771,-60.8187943,2.4740963,55.1707458],[328.0,
-39.6880264,11.9000254,-19.5958118,-64.31089,4.500036,61.00598],[329.0,-39.0888939,12.6852808,-22.8805771,-63.45499,5.621829,56.31346],[330.0,-4.29540253,0.0,32.55,-6.1362896,0.0,46.5],[331.0,-4.29540253,0.0,32.55,0.0,0.0,0.0],[332.0,23.7045956,4.375,-9.45,-40.0,6.25,60.0],[333.0,23.7045956,4.375,4.54999971,-40.0,6.25,40.0],[334.0,23.7045956,6.125,-2.45,-40.0,8.75,50.0],[335.0,31.7545948,4.375,-10.5,-51.5,6.25,61.5],[336.0,39.8045959,4.375,-2.45,-63.0,6.25,50.0],[337.0,31.7545948,4.375,5.6,-51.5,6.25,38.5],[338.0,36.6545944,3.5,-2.45,-58.5,5.0,50.0],[339.0,31.7545948,8.4,2.45,-51.5,12.0,43.0],[340.0,67.8052444,8.75,-37.4494553,103.000931,12.5,-99.99922],[341.0,23.7046547,13.125,4.550096,-63.00013,6.25,60.00011],[342.0,23.7048244,13.125,-9.44990349,-63.00013,6.25,40.00011],[343.0,23.70474,16.975,-2.44990373,-63.00013,11.75,50.00011],[344.0,39.8047829,13.125,-2.449789,-40.0000725,6.25,50.0],[345.0,31.7549267,13.125,-10.4998846,-51.5,6.25,38.5],[346.0,26.8545952,11.9,-2.449936,-58.5003357,4.5,50.00001],[347.0,34.4022331,17.15,-2.44985318,-47.718,12.0,50.0],[348.0,24.754734,13.125,5.60008526,-61.5,6.25,61.5000763],[349.0,38.754734,13.125,5.60025358,-41.5,6.25,61.50008],[350.0,31.754734,16.1,5.600168,-51.5,10.5,61.5000763],[351.0,36.6545944,22.1374989,2.45,-44.50025,19.125,56.999752],[352.0,38.5795937,20.2125,-4.54999971,-41.75037,16.375,46.9997177],[353.0,36.6545944,18.62966,-5.95,-44.5003929,14.1138,44.999752],[354.0,30.5531883,16.8874989,-19.77409,43.64741,24.125,-28.2487],[355.0,11.8045969,10.20429,-37.47416,16.86371,14.5775585,-53.53451],[356.0,37.3545952,-8.750001,37.1,53.36371,-12.5000019,53.0],[357.0,38.0545959,-0.7,24.3512478,54.36371,-1.0,34.7875],[358.0,16.7045975,0.7,-21.6999989,23.86371,1.0,-31.0],[359.0,16.7045975,0.7,-21.6999989,0.0,0.0,0.0],[360.0,16.7045975,1.75,-21.6999989,0.0,0.75,0.0],[361.0,16.7045975,1.75,-21.6999989,0.0,0.0,0.0],[362.0,34.6499977,0.0,38.1499977,49.5,0.0,54.5],[363.0,-26.7829037,-4.83,0.0,-38.26129,-6.9,0.0]]}""" | 19,330.6 | 
70,623 | 0.715208 | 23,155 | 96,653 | 2.984928 | 0.076528 | 0.03756 | 0.034854 | 0.029458 | 0.950677 | 0.940593 | 0.907691 | 0.727748 | 0.617802 | 0.613606 | 0 | 0.709081 | 0.000114 | 96,653 | 5 | 70,624 | 19,330.6 | 0.006095 | 0 | 0 | 0 | 0 | 1 | 0.998727 | 0.998717 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 13 |
3fa7bb13997cef315c484f13a5e48b3d7bad3bde | 210 | py | Python | app/droid_brick_pi/__init__.py | voyages-sncf-technologies/droideagile | 7c57438aa2c47015eeef19633dd2dfeecb15b9ea | [
"MIT"
] | null | null | null | app/droid_brick_pi/__init__.py | voyages-sncf-technologies/droideagile | 7c57438aa2c47015eeef19633dd2dfeecb15b9ea | [
"MIT"
] | null | null | null | app/droid_brick_pi/__init__.py | voyages-sncf-technologies/droideagile | 7c57438aa2c47015eeef19633dd2dfeecb15b9ea | [
"MIT"
] | null | null | null | import importlib
from app.droid_configuration import droidConfig, ensure_configuration_loaded
def should_use_mock():
    ensure_configuration_loaded()
    return droidConfig.getboolean("BrickPi", "UseMock")
| 23.333333 | 76 | 0.814286 | 23 | 210 | 7.130435 | 0.73913 | 0.231707 | 0.304878 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114286 | 210 | 8 | 77 | 26.25 | 0.88172 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0.4 | 0 | 0.8 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
3fddf69838c3c95759415cebe59214b181149185 | 3,284 | py | Python | util/set_droplet_velo.py | qikaifzj/ludwig | e16d2d3472772fb3a36c1ee1bde028029c9ecd2d | [
"BSD-3-Clause"
] | null | null | null | util/set_droplet_velo.py | qikaifzj/ludwig | e16d2d3472772fb3a36c1ee1bde028029c9ecd2d | [
"BSD-3-Clause"
] | null | null | null | util/set_droplet_velo.py | qikaifzj/ludwig | e16d2d3472772fb3a36c1ee1bde028029c9ecd2d | [
"BSD-3-Clause"
] | null | null | null | import math
# distribution filename
input_filename = 'dist-00010000.001-001.bak'
output_filename = 'dist-00010000.001-001'
# droplet density and velocity
rho = 1.0
u = [0.01,0,0]
# droplet positions
r1 = [16,16,16]
# droplet diameter
d = 8.0
# box size
Lx = 32
Ly = 32
Lz = 32
r = [r1]
NVEL = 19
# definition of lattice vectors and weights
cv = [[0, 0, 0],
[ 1, 1, 0], [ 1, 0, 1], [ 1, 0, 0],
[ 1, 0, -1], [ 1, -1, 0], [ 0, 1, 1],
[ 0, 1, 0], [ 0, 1, -1], [ 0, 0, 1],
[ 0, 0, -1], [ 0, -1, 1], [ 0, -1, 0],
[ 0, -1, -1], [-1, 1, 0], [-1, 0, 1],
[-1, 0, 0],[ -1, 0, -1], [-1, -1, 0]]
q_ = [
[[-1.0/3.0, 0.0, 0.0],[ 0.0,-1.0/3.0, 0.0],[ 0.0, 0.0,-1.0/3.0]],
[[ 2.0/3.0, 1.0, 0.0],[ 1.0, 2.0/3.0, 0.0],[ 0.0, 0.0,-1.0/3.0]],
[[ 2.0/3.0, 0.0, 1.0],[ 0.0,-1.0/3.0, 0.0],[ 1.0, 0.0, 2.0/3.0]],
[[ 2.0/3.0, 0.0, 0.0],[ 0.0,-1.0/3.0, 0.0],[ 0.0, 0.0,-1.0/3.0]],
[[ 2.0/3.0, 0.0,-1.0],[ 0.0,-1.0/3.0, 0.0],[-1.0, 0.0, 2.0/3.0]],
[[ 2.0/3.0,-1.0, 0.0],[-1.0, 2.0/3.0, 0.0],[ 0.0, 0.0,-1.0/3.0]],
[[-1.0/3.0, 0.0, 0.0],[ 0.0, 2.0/3.0, 1.0],[ 0.0, 1.0, 2.0/3.0]],
[[-1.0/3.0, 0.0, 0.0],[ 0.0, 2.0/3.0, 0.0],[ 0.0, 0.0,-1.0/3.0]],
[[-1.0/3.0, 0.0, 0.0],[ 0.0, 2.0/3.0,-1.0],[ 0.0,-1.0, 2.0/3.0]],
[[-1.0/3.0, 0.0, 0.0],[ 0.0,-1.0/3.0, 0.0],[ 0.0, 0.0, 2.0/3.0]],
[[-1.0/3.0, 0.0, 0.0],[ 0.0,-1.0/3.0, 0.0],[ 0.0, 0.0, 2.0/3.0]],
[[-1.0/3.0, 0.0, 0.0],[ 0.0, 2.0/3.0,-1.0],[ 0.0,-1.0, 2.0/3.0]],
[[-1.0/3.0, 0.0, 0.0],[ 0.0, 2.0/3.0, 0.0],[ 0.0, 0.0,-1.0/3.0]],
[[-1.0/3.0, 0.0, 0.0],[ 0.0, 2.0/3.0, 1.0],[ 0.0, 1.0, 2.0/3.0]],
[[ 2.0/3.0,-1.0, 0.0],[-1.0, 2.0/3.0, 0.0],[ 0.0, 0.0,-1.0/3.0]],
[[ 2.0/3.0, 0.0,-1.0],[ 0.0,-1.0/3.0, 0.0],[-1.0, 0.0, 2.0/3.0]],
[[ 2.0/3.0, 0.0, 0.0],[ 0.0,-1.0/3.0, 0.0],[ 0.0, 0.0,-1.0/3.0]],
[[ 2.0/3.0, 0.0, 1.0],[ 0.0,-1.0/3.0, 0.0],[ 1.0, 0.0, 2.0/3.0]],
[[ 2.0/3.0, 1.0, 0.0],[ 1.0, 2.0/3.0, 0.0],[ 0.0, 0.0,-1.0/3.0]]];
w0 = 12.0/36.0
w1 = 2.0/36.0
w2 = 1.0/36.0
wv = [w0,
w2, w2, w1, w2, w2, w2,
w1, w2, w1, w1, w2, w1,
w2, w2, w2, w1, w2, w2]
rcs2 = 3.0
def feq(rho, u):
    fp = []
    for p in range(0,NVEL):
        udotc = 0.0
        sdotq = 0.0
        for ia in range(0,3):
            udotc = udotc + u[ia]*cv[p][ia]
            for ib in range(0,3):
                sdotq += q_[p][ia][ib]*u[ia]*u[ib]
        fp.append(rho*wv[p]*(1.0 + rcs2*udotc + 0.5*rcs2*rcs2*sdotq))
    return fp
# read and modify distribution file
inp = open(input_filename,'r')
out = open(output_filename,'w')
while True:
    line = inp.readline()
    if not line: break
    arg = line.split()
    cds = [int(arg[0]), int(arg[1]), int(arg[2])]
    dcdsr = math.sqrt(pow(cds[0]-r1[0],2)+pow(cds[1]-r1[1],2)+pow(cds[2]-r1[2],2))
    f = [float(arg[3]),
         float(arg[4]), float(arg[5]), float(arg[6]),
         float(arg[7]), float(arg[8]), float(arg[9]),
         float(arg[10]), float(arg[11]), float(arg[12]),
         float(arg[13]), float(arg[14]), float(arg[15]),
         float(arg[16]), float(arg[17]), float(arg[18]),
         float(arg[19]), float(arg[20]), float(arg[21])]
    for ia in range(0,3):
        out.write('%d ' % cds[ia])
    if dcdsr < d:
        # inside the droplet: write the equilibrium distribution at velocity u
        fdrop = feq(rho,u)
        for p in range(0,NVEL):
            out.write('%12.6le ' % fdrop[p])
    else:
        for p in range(0,NVEL):
            out.write('%12.6le ' % f[p])
    out.write('\n')
inp.close()
out.close()
| 26.918033 | 83 | 0.452497 | 835 | 3,284 | 1.772455 | 0.124551 | 0.259459 | 0.273649 | 0.237838 | 0.471622 | 0.433784 | 0.404054 | 0.37973 | 0.37973 | 0.367568 | 0 | 0.259542 | 0.202192 | 3,284 | 121 | 84 | 27.140496 | 0.305344 | 0.051766 | 0 | 0.25 | 0 | 0 | 0.022237 | 0.014824 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.011905 | null | null | 0 | 0 | 0 | 1 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
3fecde6446ef4a88853310e4556426a4def1d757 | 115 | py | Python | app/single_resource/__init__.py | JyYoungK/maps4resources | c5c758259a4b18c7f8b81cef7f06681026a24b51 | [
"MIT"
] | 18 | 2016-10-17T22:08:40.000Z | 2020-07-15T14:19:12.000Z | app/single_resource/__init__.py | JyYoungK/maps4resources | c5c758259a4b18c7f8b81cef7f06681026a24b51 | [
"MIT"
] | 44 | 2016-10-16T00:19:07.000Z | 2017-06-10T16:01:53.000Z | app/single_resource/__init__.py | JyYoungK/maps4resources | c5c758259a4b18c7f8b81cef7f06681026a24b51 | [
"MIT"
] | 3 | 2017-04-30T14:10:27.000Z | 2020-05-23T19:34:45.000Z | from flask import Blueprint
single_resource = Blueprint('single_resource', __name__)
from . import views # noqa
| 19.166667 | 56 | 0.782609 | 14 | 115 | 6 | 0.642857 | 0.357143 | 0.547619 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.147826 | 115 | 5 | 57 | 23 | 0.857143 | 0.034783 | 0 | 0 | 0 | 0 | 0.137615 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0.666667 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 7 |
b2067ae18483442d2df306da388ea2d11bfd26bd | 18,895 | py | Python | octopus_deploy_swagger_client/octopus_deploy_client/dashboard_configurations_api.py | cvent/octopus-deploy-api-client | 0e03e842e1beb29b132776aee077df570b88366a | [
"Apache-2.0"
] | null | null | null | octopus_deploy_swagger_client/octopus_deploy_client/dashboard_configurations_api.py | cvent/octopus-deploy-api-client | 0e03e842e1beb29b132776aee077df570b88366a | [
"Apache-2.0"
] | null | null | null | octopus_deploy_swagger_client/octopus_deploy_client/dashboard_configurations_api.py | cvent/octopus-deploy-api-client | 0e03e842e1beb29b132776aee077df570b88366a | [
"Apache-2.0"
] | null | null | null | # coding: utf-8
"""
    Octopus Server API
    No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)  # noqa: E501
    OpenAPI spec version: 2019.6.7+Branch.tags-2019.6.7.Sha.aa18dc6809953218c66f57eff7d26481d9b23d6a
    Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from octopus_deploy_swagger_client.api_client import ApiClient
class DashboardConfigurationsApi(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
Ref: https://github.com/swagger-api/swagger-codegen
"""
    def __init__(self, api_client=None):
        if api_client is None:
            api_client = ApiClient()
        self.api_client = api_client
    def custom_action_response_descriptor_octopus_server_web_api_actions_dashboard_configuration_get_action(self, **kwargs):  # noqa: E501
        """custom_action_response_descriptor_octopus_server_web_api_actions_dashboard_configuration_get_action  # noqa: E501
        NOTE: This definition is not complete. We will be adding more detail in future releases of Octopus.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.custom_action_response_descriptor_octopus_server_web_api_actions_dashboard_configuration_get_action(async_req=True)
        >>> result = thread.get()
        :param async_req bool
        :return: DashboardConfigurationResource
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.custom_action_response_descriptor_octopus_server_web_api_actions_dashboard_configuration_get_action_with_http_info(**kwargs)  # noqa: E501
        else:
            (data) = self.custom_action_response_descriptor_octopus_server_web_api_actions_dashboard_configuration_get_action_with_http_info(**kwargs)  # noqa: E501
            return data
    def custom_action_response_descriptor_octopus_server_web_api_actions_dashboard_configuration_get_action_with_http_info(self, **kwargs):  # noqa: E501
        """custom_action_response_descriptor_octopus_server_web_api_actions_dashboard_configuration_get_action  # noqa: E501
        NOTE: This definition is not complete. We will be adding more detail in future releases of Octopus.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.custom_action_response_descriptor_octopus_server_web_api_actions_dashboard_configuration_get_action_with_http_info(async_req=True)
        >>> result = thread.get()
        :param async_req bool
        :return: DashboardConfigurationResource
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = []  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')
        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method custom_action_response_descriptor_octopus_server_web_api_actions_dashboard_configuration_get_action" % key
                )
            params[key] = val
        del params['kwargs']
        collection_formats = {}
        path_params = {}
        query_params = []
        header_params = {}
        form_params = []
        local_var_files = {}
        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501
        # Authentication setting
        auth_settings = ['APIKeyHeader', 'APIKeyQuery', 'NugetApiKeyHeader']  # noqa: E501
        return self.api_client.call_api(
            '/api/dashboardconfiguration', 'GET',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='DashboardConfigurationResource',  # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)
    def custom_action_response_descriptor_octopus_server_web_api_actions_dashboard_configuration_get_action_spaces(self, base_space_id, **kwargs):  # noqa: E501
        """custom_action_response_descriptor_octopus_server_web_api_actions_dashboard_configuration_get_action_spaces  # noqa: E501
        NOTE: This definition is not complete. We will be adding more detail in future releases of Octopus.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.custom_action_response_descriptor_octopus_server_web_api_actions_dashboard_configuration_get_action_spaces(base_space_id, async_req=True)
        >>> result = thread.get()
        :param async_req bool
        :param str base_space_id: ID of the space (required)
        :return: DashboardConfigurationResource
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.custom_action_response_descriptor_octopus_server_web_api_actions_dashboard_configuration_get_action_spaces_with_http_info(base_space_id, **kwargs)  # noqa: E501
        else:
            (data) = self.custom_action_response_descriptor_octopus_server_web_api_actions_dashboard_configuration_get_action_spaces_with_http_info(base_space_id, **kwargs)  # noqa: E501
            return data
    def custom_action_response_descriptor_octopus_server_web_api_actions_dashboard_configuration_get_action_spaces_with_http_info(self, base_space_id, **kwargs):  # noqa: E501
        """custom_action_response_descriptor_octopus_server_web_api_actions_dashboard_configuration_get_action_spaces  # noqa: E501
        NOTE: This definition is not complete. We will be adding more detail in future releases of Octopus.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.custom_action_response_descriptor_octopus_server_web_api_actions_dashboard_configuration_get_action_spaces_with_http_info(base_space_id, async_req=True)
        >>> result = thread.get()
        :param async_req bool
        :param str base_space_id: ID of the space (required)
        :return: DashboardConfigurationResource
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['base_space_id']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')
        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method custom_action_response_descriptor_octopus_server_web_api_actions_dashboard_configuration_get_action_spaces" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'base_space_id' is set
        if ('base_space_id' not in params or
                params['base_space_id'] is None):
            raise ValueError("Missing the required parameter `base_space_id` when calling `custom_action_response_descriptor_octopus_server_web_api_actions_dashboard_configuration_get_action_spaces`")  # noqa: E501
        collection_formats = {}
        path_params = {}
        if 'base_space_id' in params:
            path_params['baseSpaceId'] = params['base_space_id']  # noqa: E501
        query_params = []
        header_params = {}
        form_params = []
        local_var_files = {}
        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501
        # Authentication setting
        auth_settings = ['APIKeyHeader', 'APIKeyQuery', 'NugetApiKeyHeader']  # noqa: E501
        return self.api_client.call_api(
            '/api/{baseSpaceId}/dashboardconfiguration', 'GET',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='DashboardConfigurationResource',  # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)
    def custom_action_response_descriptor_octopus_server_web_api_actions_dashboard_configuration_update_action(self, **kwargs):  # noqa: E501
        """custom_action_response_descriptor_octopus_server_web_api_actions_dashboard_configuration_update_action  # noqa: E501
        NOTE: This definition is not complete. We will be adding more detail in future releases of Octopus.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.custom_action_response_descriptor_octopus_server_web_api_actions_dashboard_configuration_update_action(async_req=True)
        >>> result = thread.get()
        :param async_req bool
        :return: DashboardConfigurationResource
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.custom_action_response_descriptor_octopus_server_web_api_actions_dashboard_configuration_update_action_with_http_info(**kwargs)  # noqa: E501
        else:
            (data) = self.custom_action_response_descriptor_octopus_server_web_api_actions_dashboard_configuration_update_action_with_http_info(**kwargs)  # noqa: E501
            return data
    def custom_action_response_descriptor_octopus_server_web_api_actions_dashboard_configuration_update_action_with_http_info(self, **kwargs):  # noqa: E501
        """custom_action_response_descriptor_octopus_server_web_api_actions_dashboard_configuration_update_action  # noqa: E501
        NOTE: This definition is not complete. We will be adding more detail in future releases of Octopus.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.custom_action_response_descriptor_octopus_server_web_api_actions_dashboard_configuration_update_action_with_http_info(async_req=True)
        >>> result = thread.get()
        :param async_req bool
        :return: DashboardConfigurationResource
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = []  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')
        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method custom_action_response_descriptor_octopus_server_web_api_actions_dashboard_configuration_update_action" % key
                )
            params[key] = val
        del params['kwargs']
        collection_formats = {}
        path_params = {}
        query_params = []
        header_params = {}
        form_params = []
        local_var_files = {}
        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501
        # Authentication setting
        auth_settings = ['APIKeyHeader', 'APIKeyQuery', 'NugetApiKeyHeader']  # noqa: E501
        return self.api_client.call_api(
            '/api/dashboardconfiguration', 'PUT',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='DashboardConfigurationResource',  # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)
    def custom_action_response_descriptor_octopus_server_web_api_actions_dashboard_configuration_update_action_spaces(self, base_space_id, **kwargs):  # noqa: E501
        """custom_action_response_descriptor_octopus_server_web_api_actions_dashboard_configuration_update_action_spaces  # noqa: E501
        NOTE: This definition is not complete. We will be adding more detail in future releases of Octopus.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.custom_action_response_descriptor_octopus_server_web_api_actions_dashboard_configuration_update_action_spaces(base_space_id, async_req=True)
        >>> result = thread.get()
        :param async_req bool
        :param str base_space_id: ID of the space (required)
        :return: DashboardConfigurationResource
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.custom_action_response_descriptor_octopus_server_web_api_actions_dashboard_configuration_update_action_spaces_with_http_info(base_space_id, **kwargs)  # noqa: E501
        else:
            (data) = self.custom_action_response_descriptor_octopus_server_web_api_actions_dashboard_configuration_update_action_spaces_with_http_info(base_space_id, **kwargs)  # noqa: E501
            return data
    def custom_action_response_descriptor_octopus_server_web_api_actions_dashboard_configuration_update_action_spaces_with_http_info(self, base_space_id, **kwargs):  # noqa: E501
        """custom_action_response_descriptor_octopus_server_web_api_actions_dashboard_configuration_update_action_spaces  # noqa: E501
        NOTE: This definition is not complete. We will be adding more detail in future releases of Octopus.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.custom_action_response_descriptor_octopus_server_web_api_actions_dashboard_configuration_update_action_spaces_with_http_info(base_space_id, async_req=True)
        >>> result = thread.get()
        :param async_req bool
        :param str base_space_id: ID of the space (required)
        :return: DashboardConfigurationResource
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['base_space_id']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')
        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method custom_action_response_descriptor_octopus_server_web_api_actions_dashboard_configuration_update_action_spaces" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'base_space_id' is set
        if ('base_space_id' not in params or
                params['base_space_id'] is None):
            raise ValueError("Missing the required parameter `base_space_id` when calling `custom_action_response_descriptor_octopus_server_web_api_actions_dashboard_configuration_update_action_spaces`")  # noqa: E501
        collection_formats = {}
        path_params = {}
        if 'base_space_id' in params:
            path_params['baseSpaceId'] = params['base_space_id']  # noqa: E501
        query_params = []
        header_params = {}
        form_params = []
        local_var_files = {}
        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501
        # Authentication setting
        auth_settings = ['APIKeyHeader', 'APIKeyQuery', 'NugetApiKeyHeader']  # noqa: E501
        return self.api_client.call_api(
            '/api/{baseSpaceId}/dashboardconfiguration', 'PUT',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='DashboardConfigurationResource',  # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)
# --- tests/test_rules_ttt.py (Dratui/AI-Arena, MIT) ---
from src.games.game_ttt.rules_ttt import *
from pytest import *
from src.board import *
def test_make_a_move():
board = generate_board_from_list([[None for i in range(3)] for j in range(3)])
assert generate_board_from_list([[None,1,None],[None,None,None],[None,None,None]]).get_all_tiles() == make_a_move(board,1,1).get_all_tiles()
board = generate_board_from_list([[None for i in range(3)] for j in range(3)])
assert generate_board_from_list([[None,None,None],[None,None,None],[None,None,0]]).get_all_tiles() == make_a_move(board,8,0).get_all_tiles()
board = generate_board_from_list([[None for i in range(5)] for j in range(5)])
assert generate_board_from_list([[None,None,None,None,None],[None,None,None,0,None],[None,None,None,None,None],[None,None,None,None,None],[None,None,None,None,None]]).get_all_tiles() == make_a_move(board,8,0).get_all_tiles()
board = generate_board_from_list([[None for i in range(5)] for j in range(5)])
assert generate_board_from_list([[None,None,None,None,None],[None,None,None,None,None],[None,None,None,None,None],[None,None,None,None,None],[None,None,None,0,None]]).get_all_tiles() == make_a_move(board,23,0).get_all_tiles()
def test_move_effective():
board = generate_board_from_list([[None,1,None],[1,0,0],[None,1,None]])
assert move_effective(board) == [0,2,6,8]
board = generate_board_from_list([[1,1,0],[None,0,None],[1,None,None]])
assert move_effective(board) == [3,5,7,8]
board = generate_board_from_list([[0, 0, 0], [None, 0, 0], [0, 0, 0]])
assert move_effective(board) == [3]
def test_is_over():
board = generate_board_from_list([[None,1,None],[1,0,0],[None,1,None]])
assert is_over(board) == False
board = generate_board_from_list([[1 for i in range(3)] for j in range(3)])
assert is_over(board) == True
board = generate_board_from_list([[i+j*2 for i in range(3)] for j in range(3)])
assert is_over(board) == True
def test_calc_score():
list_board = [generate_board_from_list([[1,1,None],[1,0,0],[1,1,None]]),generate_board_from_list([[1,1,None],[1,0,0],[1,1,None]])]
assert calc_score(list_board,1) == 1
assert calc_score(list_board,0) == 0
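
The board behaviors these tests exercise can be sketched as follows. This is a hypothetical minimal reimplementation inferred solely from the assertions above (a `Board` wrapper with row-major flattened indices); the real `src.board` and `src.games.game_ttt.rules_ttt` modules may differ.

```python
class Board:
    """Hypothetical stand-in for src.board.Board, inferred from the tests."""
    def __init__(self, rows):
        self.rows = rows

    def get_all_tiles(self):
        # Flatten the grid row by row (row-major order).
        return [tile for row in self.rows for tile in row]


def generate_board_from_list(rows):
    return Board(rows)


def make_a_move(board, position, player):
    # Place `player` at the flattened `position`; boards are assumed square.
    size = len(board.rows)
    board.rows[position // size][position % size] = player
    return board


def move_effective(board):
    # Flattened indices of all empty tiles, i.e. the moves still available.
    return [i for i, tile in enumerate(board.get_all_tiles()) if tile is None]
```

Under these assumptions, `make_a_move(board, 8, 0)` on a 3x3 board writes to row 2, column 2, matching the second assertion in `test_make_a_move`.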
# --- tests/snapshots/snap_test_holidata/test_holidata_produces_holidays_for_locale_and_year[de_DE-2013] 1.py (gour/holidata, MIT) ---
[
{
'date': '2013-01-01',
'description': 'Neujahr',
'locale': 'de-DE',
'notes': '',
'region': '',
'type': 'NF'
},
{
'date': '2013-01-06',
'description': 'Heilige drei Könige',
'locale': 'de-DE',
'notes': '',
'region': 'BW',
'type': 'RF'
},
{
'date': '2013-01-06',
'description': 'Heilige drei Könige',
'locale': 'de-DE',
'notes': '',
'region': 'BY',
'type': 'RF'
},
{
'date': '2013-01-06',
'description': 'Heilige drei Könige',
'locale': 'de-DE',
'notes': '',
'region': 'ST',
'type': 'RF'
},
{
'date': '2013-03-29',
'description': 'Karfreitag',
'locale': 'de-DE',
'notes': '',
'region': '',
'type': 'NRV'
},
{
'date': '2013-03-31',
'description': 'Ostern',
'locale': 'de-DE',
'notes': '',
'region': '',
'type': 'NRV'
},
{
'date': '2013-04-01',
'description': 'Ostermontag',
'locale': 'de-DE',
'notes': '',
'region': '',
'type': 'NRV'
},
{
'date': '2013-05-01',
'description': 'Erster Maifeiertag',
'locale': 'de-DE',
'notes': '',
'region': '',
'type': 'NF'
},
{
'date': '2013-05-09',
'description': 'Christi Himmelfahrt',
'locale': 'de-DE',
'notes': '',
'region': '',
'type': 'NRV'
},
{
'date': '2013-05-19',
'description': 'Pfingstsonntag',
'locale': 'de-DE',
'notes': '',
'region': '',
'type': 'NRV'
},
{
'date': '2013-05-20',
'description': 'Pfingstmontag',
'locale': 'de-DE',
'notes': '',
'region': '',
'type': 'NRV'
},
{
'date': '2013-05-30',
'description': 'Fronleichnam',
'locale': 'de-DE',
'notes': '',
'region': 'BW',
'type': 'RV'
},
{
'date': '2013-05-30',
'description': 'Fronleichnam',
'locale': 'de-DE',
'notes': '',
'region': 'BY',
'type': 'RV'
},
{
'date': '2013-05-30',
'description': 'Fronleichnam',
'locale': 'de-DE',
'notes': '',
'region': 'HE',
'type': 'RV'
},
{
'date': '2013-05-30',
'description': 'Fronleichnam',
'locale': 'de-DE',
'notes': '',
'region': 'NW',
'type': 'RV'
},
{
'date': '2013-05-30',
'description': 'Fronleichnam',
'locale': 'de-DE',
'notes': '',
'region': 'RP',
'type': 'RV'
},
{
'date': '2013-05-30',
'description': 'Fronleichnam',
'locale': 'de-DE',
'notes': '',
'region': 'SL',
'type': 'RV'
},
{
'date': '2013-08-15',
'description': 'Mariä Himmelfahrt',
'locale': 'de-DE',
'notes': '',
'region': 'SL',
'type': 'RF'
},
{
'date': '2013-10-03',
'description': 'Tag der Deutschen Einheit',
'locale': 'de-DE',
'notes': '',
'region': '',
'type': 'NRF'
},
{
'date': '2013-10-31',
'description': 'Reformationstag',
'locale': 'de-DE',
'notes': '',
'region': 'BB',
'type': 'RF'
},
{
'date': '2013-10-31',
'description': 'Reformationstag',
'locale': 'de-DE',
'notes': '',
'region': 'MV',
'type': 'RF'
},
{
'date': '2013-10-31',
'description': 'Reformationstag',
'locale': 'de-DE',
'notes': '',
'region': 'SN',
'type': 'RF'
},
{
'date': '2013-10-31',
'description': 'Reformationstag',
'locale': 'de-DE',
'notes': '',
'region': 'ST',
'type': 'RF'
},
{
'date': '2013-10-31',
'description': 'Reformationstag',
'locale': 'de-DE',
'notes': '',
'region': 'TH',
'type': 'RF'
},
{
'date': '2013-11-01',
'description': 'Allerheiligen',
'locale': 'de-DE',
'notes': '',
'region': 'BW',
'type': 'RF'
},
{
'date': '2013-11-01',
'description': 'Allerheiligen',
'locale': 'de-DE',
'notes': '',
'region': 'BY',
'type': 'RF'
},
{
'date': '2013-11-01',
'description': 'Allerheiligen',
'locale': 'de-DE',
'notes': '',
'region': 'NW',
'type': 'RF'
},
{
'date': '2013-11-01',
'description': 'Allerheiligen',
'locale': 'de-DE',
'notes': '',
'region': 'RP',
'type': 'RF'
},
{
'date': '2013-11-01',
'description': 'Allerheiligen',
'locale': 'de-DE',
'notes': '',
'region': 'SL',
'type': 'RF'
},
{
'date': '2013-11-20',
'description': 'Buß- und Bettag',
'locale': 'de-DE',
'notes': '',
'region': 'SN',
'type': 'RV'
},
{
'date': '2013-12-24',
'description': 'Heilig Abend',
'locale': 'de-DE',
'notes': '',
'region': '',
'type': 'NRF'
},
{
'date': '2013-12-25',
'description': 'Weihnachtstag',
'locale': 'de-DE',
'notes': '',
'region': '',
'type': 'NRF'
},
{
'date': '2013-12-26',
'description': 'Zweiter Weihnachtstag',
'locale': 'de-DE',
'notes': '',
'region': '',
'type': 'NRF'
},
{
'date': '2013-12-31',
'description': 'Silvester',
'locale': 'de-DE',
'notes': '',
'region': '',
'type': 'NF'
}
]
# --- suiter/result/template.py (studenma/Suiter, MIT) ---
from unittest import TestCase
from json import dumps
import requests
def setup():
    #####################################
    # TODO: HERE IS YOUR CODE
    # Insert your code to define the prerequisites of the SUT
    pass
def verify(test_case, request_id, response, context):
"""
Method to describe the expected values for all test cases
Take into account that these if-else statements will be duplicated for all test cases
You can also rewrite whole method from scretch and use [TODO:] argument while calling
suiter to avoid code duplicate
context[0] = URL (string)
context[1] = METHOD (string)
context[2] = HEADERS (list)
context[3] = BODY (file path)
"""
if test_case == "test_case_001":
if request_id == "call_1":
assert response.status_code == 200
elif request_id == "call_2":
assert response.status_code == 200
elif request_id == "call_3":
assert response.status_code == 200
elif request_id == "call_4":
assert response.status_code == 200
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_002":
if request_id == "call_1":
assert response.status_code == 200
elif request_id == "call_2":
assert response.status_code == 200
elif request_id == "call_3":
assert response.status_code == 200
elif request_id == "call_4":
assert response.status_code == 200
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_003":
if request_id == "call_1":
assert response.status_code == 200
elif request_id == "call_2":
assert response.status_code == 200
elif request_id == "call_3":
assert response.status_code == 200
elif request_id == "call_4":
assert response.status_code == 200
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_004":
if request_id == "call_1":
assert response.status_code == 200
elif request_id == "call_2":
assert response.status_code == 200
elif request_id == "call_3":
assert response.status_code == 200
elif request_id == "call_4":
assert response.status_code == 200
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_005":
if request_id == "call_1":
assert response.status_code == 200
elif request_id == "call_2":
assert response.status_code == 200
elif request_id == "call_3":
assert response.status_code == 200
elif request_id == "call_4":
assert response.status_code == 200
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_006":
if request_id == "call_1":
assert response.status_code == 200
elif request_id == "call_2":
assert response.status_code == 200
elif request_id == "call_3":
assert response.status_code == 200
elif request_id == "call_4":
assert response.status_code == 200
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_007":
if request_id == "call_1":
assert response.status_code == 200
elif request_id == "call_2":
assert response.status_code == 200
elif request_id == "call_3":
assert response.status_code == 200
elif request_id == "call_4":
assert response.status_code == 200
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_008":
if request_id == "call_1":
assert response.status_code == 200
elif request_id == "call_2":
assert response.status_code == 200
elif request_id == "call_3":
assert response.status_code == 200
elif request_id == "call_4":
assert response.status_code == 200
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_009":
if request_id == "call_1":
assert response.status_code == 200
elif request_id == "call_2":
assert response.status_code == 200
elif request_id == "call_3":
assert response.status_code == 200
elif request_id == "call_4":
assert response.status_code == 200
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_010":
if request_id == "call_1":
assert response.status_code == 200
elif request_id == "call_2":
assert response.status_code == 200
elif request_id == "call_3":
assert response.status_code == 200
elif request_id == "call_4":
assert response.status_code == 200
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_011":
if request_id == "call_1":
assert response.status_code == 200
elif request_id == "call_2":
assert response.status_code == 200
elif request_id == "call_3":
assert response.status_code == 200
elif request_id == "call_4":
assert response.status_code == 200
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_012":
if request_id == "call_1":
assert response.status_code == 200
elif request_id == "call_2":
assert response.status_code == 200
elif request_id == "call_3":
assert response.status_code == 200
elif request_id == "call_4":
assert response.status_code == 200
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_013":
if request_id == "call_1":
assert response.status_code == 200
elif request_id == "call_2":
assert response.status_code == 200
elif request_id == "call_3":
assert response.status_code == 200
elif request_id == "call_4":
assert response.status_code == 200
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_014":
if request_id == "call_1":
assert response.status_code == 200
elif request_id == "call_2":
assert response.status_code == 200
elif request_id == "call_3":
assert response.status_code == 200
elif request_id == "call_4":
assert response.status_code == 200
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_015":
if request_id == "call_1":
assert response.status_code == 200
elif request_id == "call_2":
assert response.status_code == 200
elif request_id == "call_3":
assert response.status_code == 200
elif request_id == "call_4":
assert response.status_code == 200
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_016":
if request_id == "call_1":
assert response.status_code == 200
elif request_id == "call_2":
assert response.status_code == 200
elif request_id == "call_3":
assert response.status_code == 200
elif request_id == "call_4":
assert response.status_code == 200
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_017":
if request_id == "call_1":
assert response.status_code == 200
elif request_id == "call_2":
assert response.status_code == 200
elif request_id == "call_3":
assert response.status_code == 200
elif request_id == "call_4":
assert response.status_code == 200
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_018":
if request_id == "call_1":
assert response.status_code == 200
elif request_id == "call_2":
assert response.status_code == 200
elif request_id == "call_3":
assert response.status_code == 200
elif request_id == "call_4":
assert response.status_code == 200
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_019":
if request_id == "call_1":
assert response.status_code == 200
elif request_id == "call_2":
assert response.status_code == 200
elif request_id == "call_3":
assert response.status_code == 200
elif request_id == "call_4":
assert response.status_code == 200
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_020":
if request_id == "call_1":
assert response.status_code == 200
elif request_id == "call_2":
assert response.status_code == 200
elif request_id == "call_3":
assert response.status_code == 200
elif request_id == "call_4":
assert response.status_code == 200
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_021":
if request_id == "call_1":
assert response.status_code == 200
elif request_id == "call_2":
assert response.status_code == 200
elif request_id == "call_3":
assert response.status_code == 200
elif request_id == "call_4":
assert response.status_code == 200
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_022":
if request_id == "call_1":
assert response.status_code == 200
elif request_id == "call_2":
assert response.status_code == 200
elif request_id == "call_3":
assert response.status_code == 200
elif request_id == "call_4":
assert response.status_code == 200
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_023":
if request_id == "call_1":
assert response.status_code == 200
elif request_id == "call_2":
assert response.status_code == 200
elif request_id == "call_3":
assert response.status_code == 200
elif request_id == "call_4":
assert response.status_code == 200
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_024":
if request_id == "call_1":
assert response.status_code == 200
elif request_id == "call_2":
assert response.status_code == 200
elif request_id == "call_3":
assert response.status_code == 200
elif request_id == "call_4":
assert response.status_code == 200
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_025":
if request_id == "call_1":
assert response.status_code == 200
elif request_id == "call_2":
assert response.status_code == 200
elif request_id == "call_3":
assert response.status_code == 200
elif request_id == "call_4":
assert response.status_code == 200
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_026":
if request_id == "call_1":
assert response.status_code == 200
elif request_id == "call_2":
assert response.status_code == 200
elif request_id == "call_3":
assert response.status_code == 200
elif request_id == "call_4":
assert response.status_code == 200
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_027":
if request_id == "call_1":
assert response.status_code == 200
elif request_id == "call_2":
assert response.status_code == 200
elif request_id == "call_3":
assert response.status_code == 200
elif request_id == "call_4":
assert response.status_code == 200
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_028":
if request_id == "call_1":
assert response.status_code == 200
elif request_id == "call_2":
assert response.status_code == 200
elif request_id == "call_3":
assert response.status_code == 200
elif request_id == "call_4":
assert response.status_code == 200
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_029":
if request_id == "call_1":
assert response.status_code == 200
elif request_id == "call_2":
assert response.status_code == 200
elif request_id == "call_3":
assert response.status_code == 200
elif request_id == "call_4":
assert response.status_code == 200
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_030":
if request_id == "call_1":
assert response.status_code == 200
elif request_id == "call_2":
assert response.status_code == 200
elif request_id == "call_3":
assert response.status_code == 200
elif request_id == "call_4":
assert response.status_code == 200
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_031":
if request_id == "call_1":
assert response.status_code == 200
elif request_id == "call_2":
assert response.status_code == 200
elif request_id == "call_3":
assert response.status_code == 200
elif request_id == "call_4":
assert response.status_code == 200
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_032":
if request_id == "call_1":
assert response.status_code == 200
elif request_id == "call_2":
assert response.status_code == 200
elif request_id == "call_3":
assert response.status_code == 200
elif request_id == "call_4":
assert response.status_code == 200
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_033":
if request_id == "call_1":
assert response.status_code == 200
elif request_id == "call_2":
assert response.status_code == 200
elif request_id == "call_3":
assert response.status_code == 200
elif request_id == "call_4":
assert response.status_code == 200
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_034":
if request_id == "call_1":
assert response.status_code == 200
elif request_id == "call_2":
assert response.status_code == 200
elif request_id == "call_3":
assert response.status_code == 200
elif request_id == "call_4":
assert response.status_code == 200
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_035":
if request_id == "call_1":
assert response.status_code == 200
elif request_id == "call_2":
assert response.status_code == 200
elif request_id == "call_3":
assert response.status_code == 200
elif request_id == "call_4":
assert response.status_code == 200
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_036":
if request_id == "call_1":
assert response.status_code == 200
elif request_id == "call_2":
assert response.status_code == 200
elif request_id == "call_3":
assert response.status_code == 200
elif request_id == "call_4":
assert response.status_code == 200
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_037":
if request_id == "call_1":
assert response.status_code == 200
elif request_id == "call_2":
assert response.status_code == 200
elif request_id == "call_3":
assert response.status_code == 200
elif request_id == "call_4":
assert response.status_code == 200
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_038":
if request_id == "call_1":
assert response.status_code == 200
elif request_id == "call_2":
assert response.status_code == 200
elif request_id == "call_3":
assert response.status_code == 200
elif request_id == "call_4":
assert response.status_code == 200
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_039":
if request_id == "call_1":
assert response.status_code == 200
elif request_id == "call_2":
assert response.status_code == 200
elif request_id == "call_3":
assert response.status_code == 200
elif request_id == "call_4":
assert response.status_code == 200
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_040":
if request_id == "call_1":
assert response.status_code == 200
elif request_id == "call_2":
assert response.status_code == 200
elif request_id == "call_3":
assert response.status_code == 200
elif request_id == "call_4":
assert response.status_code == 200
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_041":
if request_id == "call_1":
assert response.status_code == 200
elif request_id == "call_2":
assert response.status_code == 200
elif request_id == "call_3":
assert response.status_code == 200
elif request_id == "call_4":
assert response.status_code == 200
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_042":
if request_id == "call_1":
assert response.status_code == 200
elif request_id == "call_2":
assert response.status_code == 200
elif request_id == "call_3":
assert response.status_code == 200
elif request_id == "call_4":
assert response.status_code == 200
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_043":
if request_id == "call_1":
assert response.status_code == 200
elif request_id == "call_2":
assert response.status_code == 200
elif request_id == "call_3":
assert response.status_code == 200
elif request_id == "call_4":
assert response.status_code == 200
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_044":
if request_id == "call_1":
assert response.status_code == 200
elif request_id == "call_2":
assert response.status_code == 200
elif request_id == "call_3":
assert response.status_code == 200
elif request_id == "call_4":
assert response.status_code == 200
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_045":
if request_id == "call_1":
assert response.status_code == 200
elif request_id == "call_2":
assert response.status_code == 200
elif request_id == "call_3":
assert response.status_code == 200
elif request_id == "call_4":
assert response.status_code == 200
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_046":
if request_id == "call_1":
assert response.status_code == 200
elif request_id == "call_2":
assert response.status_code == 200
elif request_id == "call_3":
assert response.status_code == 200
elif request_id == "call_4":
assert response.status_code == 200
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_047":
if request_id == "call_1":
assert response.status_code == 200
elif request_id == "call_2":
assert response.status_code == 200
elif request_id == "call_3":
assert response.status_code == 200
elif request_id == "call_4":
assert response.status_code == 200
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_048":
if request_id == "call_1":
assert response.status_code == 200
elif request_id == "call_2":
assert response.status_code == 200
elif request_id == "call_3":
assert response.status_code == 200
elif request_id == "call_4":
assert response.status_code == 200
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_049":
if request_id == "call_1":
assert response.status_code == 200
elif request_id == "call_2":
assert response.status_code == 200
elif request_id == "call_3":
assert response.status_code == 200
elif request_id == "call_4":
assert response.status_code == 200
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_050":
if request_id == "call_1":
assert response.status_code == 200
elif request_id == "call_2":
assert response.status_code == 200
elif request_id == "call_3":
assert response.status_code == 200
elif request_id == "call_4":
assert response.status_code == 200
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
    elif test_case in {"test_case_{:03d}".format(i) for i in range(51, 101)}:
        # test_case_051 .. test_case_100 are identical: every call is
        # expected to return HTTP 200.
        if request_id in ("call_1", "call_2", "call_3", "call_4"):
            assert response.status_code == 200
        else:
            raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
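# The branch chain above encodes a single fact: every (test_case, request_id)
# pair in test_case_051 .. test_case_100 expects HTTP 200.  A minimal
# data-driven sketch of that rule (expected_status is an illustrative helper,
# not part of the generated suite):
def expected_status(test_case, request_id):
    known_cases = {"test_case_{:03d}".format(i) for i in range(51, 101)}
    known_calls = {"call_1", "call_2", "call_3", "call_4"}
    if test_case in known_cases and request_id in known_calls:
        return 200
    raise Exception("Should have never gotten here: [{},{}]".format(test_case, request_id))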
def teardown():
    #####################################
    # TODO: HERE IS YOUR CODE
    # Restore the SUT to its original state.
    # If teardown depends on the given test case, add a 'test_case'
    # parameter to this function (def teardown(test_case):) and handle
    # every test case here.
    pass
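# In list_of_all_cases below, test_case_001 .. test_case_018 reuse three body
# files in a fixed cycle (body1, body2, body3, body1, ...), so the file name
# can be computed instead of enumerated.  body_file_for is an illustrative
# helper, not part of the generated suite:
def body_file_for(test_case):
    case_number = int(test_case.split("_")[-1])
    return "../input/body{}.json".format((case_number - 1) % 3 + 1)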
def list_of_all_cases(test_case, request_id):
    """
    Request parameters (url, method, header, body) for every
    (test_case, request_id) pair in this test suite.
    """
    # test_case_001 .. test_case_018 issue the same four calculator requests;
    # only the body file differs, cycling body1 -> body2 -> body3.
    # ("substract" is kept as written: it is the SUT's query-string value.)
    if test_case in {"test_case_{:03d}".format(i) for i in range(1, 19)}:
        operations = {"call_1": "add", "call_2": "substract",
                      "call_3": "multiply", "call_4": "divide"}
        if request_id not in operations:
            raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
        url = "http://127.0.0.1:5000/api/v1/calculator?operation={}".format(operations[request_id])
        method = "GET"
        header = {'Content-Type': 'application/json'}
        case_number = int(test_case.split("_")[-1])
        body = "../input/body{}.json".format((case_number - 1) % 3 + 1)
elif test_case == "test_case_019":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_020":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_021":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_022":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_023":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_024":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_025":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_026":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_027":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_028":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_029":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_030":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_031":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_032":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_033":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_034":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_035":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_036":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_037":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_038":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_039":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_040":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_041":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_042":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_043":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_044":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_045":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_046":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_047":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_048":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_049":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_050":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_051":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_052":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_053":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_054":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_055":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_056":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_057":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_058":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_059":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_060":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_061":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_062":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_063":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_064":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_065":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_066":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_067":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_068":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_069":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_070":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_071":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_072":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_073":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_074":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_075":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_076":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_077":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_078":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_079":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_080":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_081":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_082":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_083":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_084":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_085":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_086":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_087":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_088":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_089":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_090":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
else:
raise Exception("Should have never gotten here: [{},{}]".format(test_case,request_id))
elif test_case == "test_case_091":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
else:
raise Exception("Should never have gotten here: [{},{}]".format(test_case, request_id))
elif test_case == "test_case_092":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
else:
raise Exception("Should never have gotten here: [{},{}]".format(test_case, request_id))
elif test_case == "test_case_093":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
else:
raise Exception("Should never have gotten here: [{},{}]".format(test_case, request_id))
elif test_case == "test_case_094":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
else:
raise Exception("Should never have gotten here: [{},{}]".format(test_case, request_id))
elif test_case == "test_case_095":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
else:
raise Exception("Should never have gotten here: [{},{}]".format(test_case, request_id))
elif test_case == "test_case_096":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
else:
raise Exception("Should never have gotten here: [{},{}]".format(test_case, request_id))
elif test_case == "test_case_097":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body2.json"
else:
raise Exception("Should never have gotten here: [{},{}]".format(test_case, request_id))
elif test_case == "test_case_098":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body3.json"
else:
raise Exception("Should never have gotten here: [{},{}]".format(test_case, request_id))
elif test_case == "test_case_099":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
else:
raise Exception("Should never have gotten here: [{},{}]".format(test_case, request_id))
elif test_case == "test_case_100":
if request_id == "call_1":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=add"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_2":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=substract"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_3":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=multiply"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
elif request_id == "call_4":
url = "http://127.0.0.1:5000/api/v1/calculator?operation=divide"
method = "GET"
header = {'Content-Type': 'application/json'}
body = "../input/body1.json"
else:
raise Exception("Should never have gotten here: [{},{}]".format(test_case, request_id))
else:
raise Exception("Should never have gotten here: [{},{}]".format(test_case, request_id))
return (url, method, header, body)
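The per-case `if/elif` ladder above repeats the same four `(url, method, header, body)` tuples for every test case; the mapping can be expressed once as data. The sketch below is a hypothetical table-driven alternative (`build_call`, `OPERATIONS`, and `BASE_URL` are names introduced here, not part of this module) that deliberately keeps the `operation=substract` spelling the server apparently expects; the body-file path per test case would still come from a small lookup table.

```python
# Hypothetical table-driven replacement for the if/elif dispatch above.
# Each request_id maps to one calculator operation; the body file varies
# only per test case, so it is passed in by the caller.
BASE_URL = "http://127.0.0.1:5000/api/v1/calculator?operation={}"
OPERATIONS = {
    "call_1": "add",
    "call_2": "substract",  # spelling kept as the server expects it
    "call_3": "multiply",
    "call_4": "divide",
}

def build_call(body_file, request_id):
    """Return (url, method, header, body) for one request in a sequence."""
    if request_id not in OPERATIONS:
        raise ValueError("Unknown request_id: {}".format(request_id))
    url = BASE_URL.format(OPERATIONS[request_id])
    return (url, "GET", {'Content-Type': 'application/json'}, body_file)
```

With this shape, each `test_case_NNN` branch reduces to choosing a body file, instead of restating the URL, method, and header four times.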
class TestClass(TestCase):
def test_sequence_001(self):
### SUT Setup ###
setup()
### 1. Request ###
call = list_of_all_cases("test_case_001", "call_1")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_001", "call_1", response, call)
### 2. Request ###
call = list_of_all_cases("test_case_001", "call_2")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_001", "call_2", response, call)
### 3. Request ###
call = list_of_all_cases("test_case_001", "call_3")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_001", "call_3", response, call)
### 4. Request ###
call = list_of_all_cases("test_case_001", "call_4")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_001", "call_4", response, call)
### SUT Teardown ###
teardown()
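Every test method repeats the same four-step pattern: look up the call spec, open the payload, send the request, verify the response. A loop-based helper could express that pattern once. The sketch below is hypothetical (`run_sequence` is a name introduced here); the lookup, send, and check callables are injected so the helper stays testable, and in this module they would be wired to `list_of_all_cases`, `requests.request`, and `verify`.

```python
def run_sequence(test_case, lookup, send, check, call_count=4):
    """Drive one test sequence: look up, send, and verify each call.

    lookup(test_case, request_id) -> (url, method, header, body_path)
    send(method, url, headers=..., data=...) -> response
    check(test_case, request_id, response, call) -> raises on mismatch
    """
    for i in range(1, call_count + 1):
        request_id = "call_{}".format(i)
        url, method, header, body_path = lookup(test_case, request_id)
        with open(body_path, 'rb') as payload:
            response = send(method, url, headers=header, data=payload)
        check(test_case, request_id, response,
              (url, method, header, body_path))
```

Each `test_sequence_NNN` body would then shrink to `setup()`, one `run_sequence(...)` call, and `teardown()`, while keeping one generated method per sequence for test discovery.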
def test_sequence_002(self):
### SUT Setup ###
setup()
### 1. Request ###
call = list_of_all_cases("test_case_002", "call_1")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_002", "call_1", response, call)
### 2. Request ###
call = list_of_all_cases("test_case_002", "call_2")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_002", "call_2", response, call)
### 3. Request ###
call = list_of_all_cases("test_case_002", "call_3")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_002", "call_3", response, call)
### 4. Request ###
call = list_of_all_cases("test_case_002", "call_4")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_002", "call_4", response, call)
### SUT Teardown ###
teardown()
def test_sequence_003(self):
### SUT Setup ###
setup()
### 1. Request ###
call = list_of_all_cases("test_case_003", "call_1")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_003", "call_1", response, call)
### 2. Request ###
call = list_of_all_cases("test_case_003", "call_2")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_003", "call_2", response, call)
### 3. Request ###
call = list_of_all_cases("test_case_003", "call_3")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_003", "call_3", response, call)
### 4. Request ###
call = list_of_all_cases("test_case_003", "call_4")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_003", "call_4", response, call)
### SUT Teardown ###
teardown()
def test_sequence_004(self):
### SUT Setup ###
setup()
### 1. Request ###
call = list_of_all_cases("test_case_004", "call_1")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_004", "call_1", response, call)
### 2. Request ###
call = list_of_all_cases("test_case_004", "call_2")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_004", "call_2", response, call)
### 3. Request ###
call = list_of_all_cases("test_case_004", "call_3")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_004", "call_3", response, call)
### 4. Request ###
call = list_of_all_cases("test_case_004", "call_4")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_004", "call_4", response, call)
### SUT Teardown ###
teardown()
def test_sequence_005(self):
### SUT Setup ###
setup()
### 1. Request ###
call = list_of_all_cases("test_case_005", "call_1")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_005", "call_1", response, call)
### 2. Request ###
call = list_of_all_cases("test_case_005", "call_2")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_005", "call_2", response, call)
### 3. Request ###
call = list_of_all_cases("test_case_005", "call_3")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_005", "call_3", response, call)
### 4. Request ###
call = list_of_all_cases("test_case_005", "call_4")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_005", "call_4", response, call)
### SUT Teardown ###
teardown()
def test_sequence_006(self):
### SUT Setup ###
setup()
### 1. Request ###
call = list_of_all_cases("test_case_006", "call_1")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_006", "call_1", response, call)
### 2. Request ###
call = list_of_all_cases("test_case_006", "call_2")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_006", "call_2", response, call)
### 3. Request ###
call = list_of_all_cases("test_case_006", "call_3")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_006", "call_3", response, call)
### 4. Request ###
call = list_of_all_cases("test_case_006", "call_4")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_006", "call_4", response, call)
### SUT Teardown ###
teardown()
def test_sequence_007(self):
### SUT Setup ###
setup()
### 1. Request ###
call = list_of_all_cases("test_case_007", "call_1")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_007", "call_1", response, call)
### 2. Request ###
call = list_of_all_cases("test_case_007", "call_2")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_007", "call_2", response, call)
### 3. Request ###
call = list_of_all_cases("test_case_007", "call_3")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_007", "call_3", response, call)
### 4. Request ###
call = list_of_all_cases("test_case_007", "call_4")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_007", "call_4", response, call)
### SUT Teardown ###
teardown()
def test_sequence_008(self):
### SUT Setup ###
setup()
### 1. Request ###
call = list_of_all_cases("test_case_008", "call_1")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_008", "call_1", response, call)
### 2. Request ###
call = list_of_all_cases("test_case_008", "call_2")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_008", "call_2", response, call)
### 3. Request ###
call = list_of_all_cases("test_case_008", "call_3")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_008", "call_3", response, call)
### 4. Request ###
call = list_of_all_cases("test_case_008", "call_4")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_008", "call_4", response, call)
### SUT Teardown ###
teardown()
def test_sequence_009(self):
### SUT Setup ###
setup()
### 1. Request ###
call = list_of_all_cases("test_case_009", "call_1")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_009", "call_1", response, call)
### 2. Request ###
call = list_of_all_cases("test_case_009", "call_2")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_009", "call_2", response, call)
### 3. Request ###
call = list_of_all_cases("test_case_009", "call_3")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_009", "call_3", response, call)
### 4. Request ###
call = list_of_all_cases("test_case_009", "call_4")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_009", "call_4", response, call)
### SUT Teardown ###
teardown()
def test_sequence_010(self):
### SUT Setup ###
setup()
### 1. Request ###
call = list_of_all_cases("test_case_010", "call_1")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_010", "call_1", response, call)
### 2. Request ###
call = list_of_all_cases("test_case_010", "call_2")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_010", "call_2", response, call)
### 3. Request ###
call = list_of_all_cases("test_case_010", "call_3")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_010", "call_3", response, call)
### 4. Request ###
call = list_of_all_cases("test_case_010", "call_4")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_010", "call_4", response, call)
### SUT Teardown ###
teardown()
def test_sequence_011(self):
### SUT Setup ###
setup()
### 1. Request ###
call = list_of_all_cases("test_case_011", "call_1")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_011", "call_1", response, call)
### 2. Request ###
call = list_of_all_cases("test_case_011", "call_2")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_011", "call_2", response, call)
### 3. Request ###
call = list_of_all_cases("test_case_011", "call_3")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_011", "call_3", response, call)
### 4. Request ###
call = list_of_all_cases("test_case_011", "call_4")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_011", "call_4", response, call)
### SUT Teardown ###
teardown()
def test_sequence_012(self):
### SUT Setup ###
setup()
### 1. Request ###
call = list_of_all_cases("test_case_012", "call_1")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_012", "call_1", response, call)
### 2. Request ###
call = list_of_all_cases("test_case_012", "call_2")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_012", "call_2", response, call)
### 3. Request ###
call = list_of_all_cases("test_case_012", "call_3")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_012", "call_3", response, call)
### 4. Request ###
call = list_of_all_cases("test_case_012", "call_4")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_012", "call_4", response, call)
### SUT Teardown ###
teardown()
def test_sequence_013(self):
### SUT Setup ###
setup()
### 1. Request ###
call = list_of_all_cases("test_case_013", "call_1")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_013", "call_1", response, call)
### 2. Request ###
call = list_of_all_cases("test_case_013", "call_2")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_013", "call_2", response, call)
### 3. Request ###
call = list_of_all_cases("test_case_013", "call_3")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_013", "call_3", response, call)
### 4. Request ###
call = list_of_all_cases("test_case_013", "call_4")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_013", "call_4", response, call)
### SUT Teardown ###
teardown()
def test_sequence_014(self):
### SUT Setup ###
setup()
### 1. Request ###
call = list_of_all_cases("test_case_014", "call_1")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_014", "call_1", response, call)
### 2. Request ###
call = list_of_all_cases("test_case_014", "call_2")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_014", "call_2", response, call)
### 3. Request ###
call = list_of_all_cases("test_case_014", "call_3")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_014", "call_3", response, call)
### 4. Request ###
call = list_of_all_cases("test_case_014", "call_4")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_014", "call_4", response, call)
### SUT Teardown ###
teardown()
def test_sequence_015(self):
### SUT Setup ###
setup()
### 1. Request ###
call = list_of_all_cases("test_case_015", "call_1")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_015", "call_1", response, call)
### 2. Request ###
call = list_of_all_cases("test_case_015", "call_2")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_015", "call_2", response, call)
### 3. Request ###
call = list_of_all_cases("test_case_015", "call_3")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_015", "call_3", response, call)
### 4. Request ###
call = list_of_all_cases("test_case_015", "call_4")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_015", "call_4", response, call)
### SUT Teardown ###
teardown()
def test_sequence_016(self):
### SUT Setup ###
setup()
### 1. Request ###
call = list_of_all_cases("test_case_016", "call_1")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_016", "call_1", response, call)
### 2. Request ###
call = list_of_all_cases("test_case_016", "call_2")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_016", "call_2", response, call)
### 3. Request ###
call = list_of_all_cases("test_case_016", "call_3")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_016", "call_3", response, call)
### 4. Request ###
call = list_of_all_cases("test_case_016", "call_4")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_016", "call_4", response, call)
### SUT Teardown ###
teardown()
def test_sequence_017(self):
### SUT Setup ###
setup()
### 1. Request ###
call = list_of_all_cases("test_case_017", "call_1")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_017", "call_1", response, call)
### 2. Request ###
call = list_of_all_cases("test_case_017", "call_2")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_017", "call_2", response, call)
### 3. Request ###
call = list_of_all_cases("test_case_017", "call_3")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_017", "call_3", response, call)
### 4. Request ###
call = list_of_all_cases("test_case_017", "call_4")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_017", "call_4", response, call)
### SUT Teardown ###
teardown()
def test_sequence_018(self):
### SUT Setup ###
setup()
### 1. Request ###
call = list_of_all_cases("test_case_018", "call_1")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_018", "call_1", response, call)
### 2. Request ###
call = list_of_all_cases("test_case_018", "call_2")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_018", "call_2", response, call)
### 3. Request ###
call = list_of_all_cases("test_case_018", "call_3")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_018", "call_3", response, call)
### 4. Request ###
call = list_of_all_cases("test_case_018", "call_4")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_018", "call_4", response, call)
### SUT Teardown ###
teardown()
def test_sequence_019(self):
### SUT Setup ###
setup()
### 1. Request ###
call = list_of_all_cases("test_case_019", "call_1")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_019", "call_1", response, call)
### 2. Request ###
call = list_of_all_cases("test_case_019", "call_2")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_019", "call_2", response, call)
### 3. Request ###
call = list_of_all_cases("test_case_019", "call_3")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_019", "call_3", response, call)
### 4. Request ###
call = list_of_all_cases("test_case_019", "call_4")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_019", "call_4", response, call)
### SUT Teardown ###
teardown()
def test_sequence_020(self):
### SUT Setup ###
setup()
### 1. Request ###
call = list_of_all_cases("test_case_020", "call_1")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_020", "call_1", response, call)
### 2. Request ###
call = list_of_all_cases("test_case_020", "call_2")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_020", "call_2", response, call)
### 3. Request ###
call = list_of_all_cases("test_case_020", "call_3")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_020", "call_3", response, call)
### 4. Request ###
call = list_of_all_cases("test_case_020", "call_4")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_020", "call_4", response, call)
### SUT Teardown ###
teardown()
def test_sequence_021(self):
### SUT Setup ###
setup()
### 1. Request ###
call = list_of_all_cases("test_case_021", "call_1")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_021", "call_1", response, call)
### 2. Request ###
call = list_of_all_cases("test_case_021", "call_2")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_021", "call_2", response, call)
### 3. Request ###
call = list_of_all_cases("test_case_021", "call_3")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_021", "call_3", response, call)
### 4. Request ###
call = list_of_all_cases("test_case_021", "call_4")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_021", "call_4", response, call)
### SUT Teardown ###
teardown()
def test_sequence_022(self):
### SUT Setup ###
setup()
### 1. Request ###
call = list_of_all_cases("test_case_022", "call_1")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_022", "call_1", response, call)
### 2. Request ###
call = list_of_all_cases("test_case_022", "call_2")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_022", "call_2", response, call)
### 3. Request ###
call = list_of_all_cases("test_case_022", "call_3")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_022", "call_3", response, call)
### 4. Request ###
call = list_of_all_cases("test_case_022", "call_4")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_022", "call_4", response, call)
### SUT Teardown ###
teardown()
    def test_sequence_023(self):
        ### SUT Setup ###
        setup()
        ### 1. Request ###
        call = list_of_all_cases("test_case_023", "call_1")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_023", "call_1", response, call)
        ### 2. Request ###
        call = list_of_all_cases("test_case_023", "call_2")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_023", "call_2", response, call)
        ### 3. Request ###
        call = list_of_all_cases("test_case_023", "call_3")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_023", "call_3", response, call)
        ### 4. Request ###
        call = list_of_all_cases("test_case_023", "call_4")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_023", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
    def test_sequence_024(self):
        ### SUT Setup ###
        setup()
        ### 1. Request ###
        call = list_of_all_cases("test_case_024", "call_1")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_024", "call_1", response, call)
        ### 2. Request ###
        call = list_of_all_cases("test_case_024", "call_2")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_024", "call_2", response, call)
        ### 3. Request ###
        call = list_of_all_cases("test_case_024", "call_3")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_024", "call_3", response, call)
        ### 4. Request ###
        call = list_of_all_cases("test_case_024", "call_4")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_024", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
    def test_sequence_025(self):
        ### SUT Setup ###
        setup()
        ### 1. Request ###
        call = list_of_all_cases("test_case_025", "call_1")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_025", "call_1", response, call)
        ### 2. Request ###
        call = list_of_all_cases("test_case_025", "call_2")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_025", "call_2", response, call)
        ### 3. Request ###
        call = list_of_all_cases("test_case_025", "call_3")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_025", "call_3", response, call)
        ### 4. Request ###
        call = list_of_all_cases("test_case_025", "call_4")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_025", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
    def test_sequence_026(self):
        ### SUT Setup ###
        setup()
        ### 1. Request ###
        call = list_of_all_cases("test_case_026", "call_1")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_026", "call_1", response, call)
        ### 2. Request ###
        call = list_of_all_cases("test_case_026", "call_2")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_026", "call_2", response, call)
        ### 3. Request ###
        call = list_of_all_cases("test_case_026", "call_3")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_026", "call_3", response, call)
        ### 4. Request ###
        call = list_of_all_cases("test_case_026", "call_4")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_026", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
    def test_sequence_027(self):
        ### SUT Setup ###
        setup()
        ### 1. Request ###
        call = list_of_all_cases("test_case_027", "call_1")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_027", "call_1", response, call)
        ### 2. Request ###
        call = list_of_all_cases("test_case_027", "call_2")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_027", "call_2", response, call)
        ### 3. Request ###
        call = list_of_all_cases("test_case_027", "call_3")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_027", "call_3", response, call)
        ### 4. Request ###
        call = list_of_all_cases("test_case_027", "call_4")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_027", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
    def test_sequence_028(self):
        ### SUT Setup ###
        setup()
        ### 1. Request ###
        call = list_of_all_cases("test_case_028", "call_1")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_028", "call_1", response, call)
        ### 2. Request ###
        call = list_of_all_cases("test_case_028", "call_2")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_028", "call_2", response, call)
        ### 3. Request ###
        call = list_of_all_cases("test_case_028", "call_3")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_028", "call_3", response, call)
        ### 4. Request ###
        call = list_of_all_cases("test_case_028", "call_4")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_028", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
    def test_sequence_029(self):
        ### SUT Setup ###
        setup()
        ### 1. Request ###
        call = list_of_all_cases("test_case_029", "call_1")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_029", "call_1", response, call)
        ### 2. Request ###
        call = list_of_all_cases("test_case_029", "call_2")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_029", "call_2", response, call)
        ### 3. Request ###
        call = list_of_all_cases("test_case_029", "call_3")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_029", "call_3", response, call)
        ### 4. Request ###
        call = list_of_all_cases("test_case_029", "call_4")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_029", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
    def test_sequence_030(self):
        ### SUT Setup ###
        setup()
        ### 1. Request ###
        call = list_of_all_cases("test_case_030", "call_1")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_030", "call_1", response, call)
        ### 2. Request ###
        call = list_of_all_cases("test_case_030", "call_2")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_030", "call_2", response, call)
        ### 3. Request ###
        call = list_of_all_cases("test_case_030", "call_3")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_030", "call_3", response, call)
        ### 4. Request ###
        call = list_of_all_cases("test_case_030", "call_4")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_030", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
    def test_sequence_031(self):
        ### SUT Setup ###
        setup()
        ### 1. Request ###
        call = list_of_all_cases("test_case_031", "call_1")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_031", "call_1", response, call)
        ### 2. Request ###
        call = list_of_all_cases("test_case_031", "call_2")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_031", "call_2", response, call)
        ### 3. Request ###
        call = list_of_all_cases("test_case_031", "call_3")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_031", "call_3", response, call)
        ### 4. Request ###
        call = list_of_all_cases("test_case_031", "call_4")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_031", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
    def test_sequence_032(self):
        ### SUT Setup ###
        setup()
        ### 1. Request ###
        call = list_of_all_cases("test_case_032", "call_1")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_032", "call_1", response, call)
        ### 2. Request ###
        call = list_of_all_cases("test_case_032", "call_2")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_032", "call_2", response, call)
        ### 3. Request ###
        call = list_of_all_cases("test_case_032", "call_3")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_032", "call_3", response, call)
        ### 4. Request ###
        call = list_of_all_cases("test_case_032", "call_4")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_032", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
    def test_sequence_033(self):
        ### SUT Setup ###
        setup()
        ### 1. Request ###
        call = list_of_all_cases("test_case_033", "call_1")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_033", "call_1", response, call)
        ### 2. Request ###
        call = list_of_all_cases("test_case_033", "call_2")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_033", "call_2", response, call)
        ### 3. Request ###
        call = list_of_all_cases("test_case_033", "call_3")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_033", "call_3", response, call)
        ### 4. Request ###
        call = list_of_all_cases("test_case_033", "call_4")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_033", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
    def test_sequence_034(self):
        ### SUT Setup ###
        setup()
        ### 1. Request ###
        call = list_of_all_cases("test_case_034", "call_1")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_034", "call_1", response, call)
        ### 2. Request ###
        call = list_of_all_cases("test_case_034", "call_2")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_034", "call_2", response, call)
        ### 3. Request ###
        call = list_of_all_cases("test_case_034", "call_3")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_034", "call_3", response, call)
        ### 4. Request ###
        call = list_of_all_cases("test_case_034", "call_4")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_034", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
    def test_sequence_035(self):
        ### SUT Setup ###
        setup()
        ### 1. Request ###
        call = list_of_all_cases("test_case_035", "call_1")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_035", "call_1", response, call)
        ### 2. Request ###
        call = list_of_all_cases("test_case_035", "call_2")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_035", "call_2", response, call)
        ### 3. Request ###
        call = list_of_all_cases("test_case_035", "call_3")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_035", "call_3", response, call)
        ### 4. Request ###
        call = list_of_all_cases("test_case_035", "call_4")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_035", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
    def test_sequence_036(self):
        ### SUT Setup ###
        setup()
        ### 1. Request ###
        call = list_of_all_cases("test_case_036", "call_1")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_036", "call_1", response, call)
        ### 2. Request ###
        call = list_of_all_cases("test_case_036", "call_2")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_036", "call_2", response, call)
        ### 3. Request ###
        call = list_of_all_cases("test_case_036", "call_3")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_036", "call_3", response, call)
        ### 4. Request ###
        call = list_of_all_cases("test_case_036", "call_4")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_036", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
    def test_sequence_037(self):
        ### SUT Setup ###
        setup()
        ### 1. Request ###
        call = list_of_all_cases("test_case_037", "call_1")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_037", "call_1", response, call)
        ### 2. Request ###
        call = list_of_all_cases("test_case_037", "call_2")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_037", "call_2", response, call)
        ### 3. Request ###
        call = list_of_all_cases("test_case_037", "call_3")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_037", "call_3", response, call)
        ### 4. Request ###
        call = list_of_all_cases("test_case_037", "call_4")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_037", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
    def test_sequence_038(self):
        ### SUT Setup ###
        setup()
        ### 1. Request ###
        call = list_of_all_cases("test_case_038", "call_1")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_038", "call_1", response, call)
        ### 2. Request ###
        call = list_of_all_cases("test_case_038", "call_2")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_038", "call_2", response, call)
        ### 3. Request ###
        call = list_of_all_cases("test_case_038", "call_3")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_038", "call_3", response, call)
        ### 4. Request ###
        call = list_of_all_cases("test_case_038", "call_4")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_038", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
    def test_sequence_039(self):
        ### SUT Setup ###
        setup()
        ### 1. Request ###
        call = list_of_all_cases("test_case_039", "call_1")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_039", "call_1", response, call)
        ### 2. Request ###
        call = list_of_all_cases("test_case_039", "call_2")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_039", "call_2", response, call)
        ### 3. Request ###
        call = list_of_all_cases("test_case_039", "call_3")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_039", "call_3", response, call)
        ### 4. Request ###
        call = list_of_all_cases("test_case_039", "call_4")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_039", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
    def test_sequence_040(self):
        ### SUT Setup ###
        setup()
        ### 1. Request ###
        call = list_of_all_cases("test_case_040", "call_1")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_040", "call_1", response, call)
        ### 2. Request ###
        call = list_of_all_cases("test_case_040", "call_2")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_040", "call_2", response, call)
        ### 3. Request ###
        call = list_of_all_cases("test_case_040", "call_3")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_040", "call_3", response, call)
        ### 4. Request ###
        call = list_of_all_cases("test_case_040", "call_4")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_040", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
    def test_sequence_041(self):
        ### SUT Setup ###
        setup()
        ### 1. Request ###
        call = list_of_all_cases("test_case_041", "call_1")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_041", "call_1", response, call)
        ### 2. Request ###
        call = list_of_all_cases("test_case_041", "call_2")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_041", "call_2", response, call)
        ### 3. Request ###
        call = list_of_all_cases("test_case_041", "call_3")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_041", "call_3", response, call)
        ### 4. Request ###
        call = list_of_all_cases("test_case_041", "call_4")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_041", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
    def test_sequence_042(self):
        ### SUT Setup ###
        setup()
        ### 1. Request ###
        call = list_of_all_cases("test_case_042", "call_1")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_042", "call_1", response, call)
        ### 2. Request ###
        call = list_of_all_cases("test_case_042", "call_2")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_042", "call_2", response, call)
        ### 3. Request ###
        call = list_of_all_cases("test_case_042", "call_3")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_042", "call_3", response, call)
        ### 4. Request ###
        call = list_of_all_cases("test_case_042", "call_4")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_042", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
    def test_sequence_043(self):
        ### SUT Setup ###
        setup()
        ### 1. Request ###
        call = list_of_all_cases("test_case_043", "call_1")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_043", "call_1", response, call)
        ### 2. Request ###
        call = list_of_all_cases("test_case_043", "call_2")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_043", "call_2", response, call)
        ### 3. Request ###
        call = list_of_all_cases("test_case_043", "call_3")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_043", "call_3", response, call)
        ### 4. Request ###
        call = list_of_all_cases("test_case_043", "call_4")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_043", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
    def test_sequence_044(self):
        ### SUT Setup ###
        setup()
        ### 1. Request ###
        call = list_of_all_cases("test_case_044", "call_1")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_044", "call_1", response, call)
        ### 2. Request ###
        call = list_of_all_cases("test_case_044", "call_2")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_044", "call_2", response, call)
        ### 3. Request ###
        call = list_of_all_cases("test_case_044", "call_3")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_044", "call_3", response, call)
        ### 4. Request ###
        call = list_of_all_cases("test_case_044", "call_4")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_044", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
    def test_sequence_045(self):
        ### SUT Setup ###
        setup()
        ### 1. Request ###
        call = list_of_all_cases("test_case_045", "call_1")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_045", "call_1", response, call)
        ### 2. Request ###
        call = list_of_all_cases("test_case_045", "call_2")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_045", "call_2", response, call)
        ### 3. Request ###
        call = list_of_all_cases("test_case_045", "call_3")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_045", "call_3", response, call)
        ### 4. Request ###
        call = list_of_all_cases("test_case_045", "call_4")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_045", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
    def test_sequence_046(self):
        ### SUT Setup ###
        setup()
        ### 1. Request ###
        call = list_of_all_cases("test_case_046", "call_1")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_046", "call_1", response, call)
        ### 2. Request ###
        call = list_of_all_cases("test_case_046", "call_2")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_046", "call_2", response, call)
        ### 3. Request ###
        call = list_of_all_cases("test_case_046", "call_3")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_046", "call_3", response, call)
        ### 4. Request ###
        call = list_of_all_cases("test_case_046", "call_4")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_046", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
    def test_sequence_047(self):
        ### SUT Setup ###
        setup()
        ### 1. Request ###
        call = list_of_all_cases("test_case_047", "call_1")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_047", "call_1", response, call)
        ### 2. Request ###
        call = list_of_all_cases("test_case_047", "call_2")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_047", "call_2", response, call)
        ### 3. Request ###
        call = list_of_all_cases("test_case_047", "call_3")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_047", "call_3", response, call)
        ### 4. Request ###
        call = list_of_all_cases("test_case_047", "call_4")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_047", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
    def test_sequence_048(self):
        ### SUT Setup ###
        setup()
        ### 1. Request ###
        call = list_of_all_cases("test_case_048", "call_1")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_048", "call_1", response, call)
        ### 2. Request ###
        call = list_of_all_cases("test_case_048", "call_2")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_048", "call_2", response, call)
        ### 3. Request ###
        call = list_of_all_cases("test_case_048", "call_3")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_048", "call_3", response, call)
        ### 4. Request ###
        call = list_of_all_cases("test_case_048", "call_4")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_048", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
    def test_sequence_049(self):
        ### SUT Setup ###
        setup()
        ### 1. Request ###
        call = list_of_all_cases("test_case_049", "call_1")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_049", "call_1", response, call)
        ### 2. Request ###
        call = list_of_all_cases("test_case_049", "call_2")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_049", "call_2", response, call)
        ### 3. Request ###
        call = list_of_all_cases("test_case_049", "call_3")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_049", "call_3", response, call)
        ### 4. Request ###
        call = list_of_all_cases("test_case_049", "call_4")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_049", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
    def test_sequence_050(self):
        ### SUT Setup ###
        setup()
        ### 1. Request ###
        call = list_of_all_cases("test_case_050", "call_1")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_050", "call_1", response, call)
        ### 2. Request ###
        call = list_of_all_cases("test_case_050", "call_2")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_050", "call_2", response, call)
        ### 3. Request ###
        call = list_of_all_cases("test_case_050", "call_3")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_050", "call_3", response, call)
        ### 4. Request ###
        call = list_of_all_cases("test_case_050", "call_4")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_050", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
    def test_sequence_051(self):
        ### SUT Setup ###
        setup()
        ### 1. Request ###
        call = list_of_all_cases("test_case_051", "call_1")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_051", "call_1", response, call)
        ### 2. Request ###
        call = list_of_all_cases("test_case_051", "call_2")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_051", "call_2", response, call)
        ### 3. Request ###
        call = list_of_all_cases("test_case_051", "call_3")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_051", "call_3", response, call)
        ### 4. Request ###
        call = list_of_all_cases("test_case_051", "call_4")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_051", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
    def test_sequence_052(self):
        ### SUT Setup ###
        setup()
        ### 1. Request ###
        call = list_of_all_cases("test_case_052", "call_1")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_052", "call_1", response, call)
        ### 2. Request ###
        call = list_of_all_cases("test_case_052", "call_2")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_052", "call_2", response, call)
        ### 3. Request ###
        call = list_of_all_cases("test_case_052", "call_3")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_052", "call_3", response, call)
        ### 4. Request ###
        call = list_of_all_cases("test_case_052", "call_4")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_052", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
    def test_sequence_053(self):
        ### SUT Setup ###
        setup()
        ### 1. Request ###
        call = list_of_all_cases("test_case_053", "call_1")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_053", "call_1", response, call)
        ### 2. Request ###
        call = list_of_all_cases("test_case_053", "call_2")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_053", "call_2", response, call)
        ### 3. Request ###
        call = list_of_all_cases("test_case_053", "call_3")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_053", "call_3", response, call)
        ### 4. Request ###
        call = list_of_all_cases("test_case_053", "call_4")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_053", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
    def test_sequence_054(self):
        ### SUT Setup ###
        setup()
        ### 1. Request ###
        call = list_of_all_cases("test_case_054", "call_1")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_054", "call_1", response, call)
        ### 2. Request ###
        call = list_of_all_cases("test_case_054", "call_2")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_054", "call_2", response, call)
        ### 3. Request ###
        call = list_of_all_cases("test_case_054", "call_3")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_054", "call_3", response, call)
        ### 4. Request ###
        call = list_of_all_cases("test_case_054", "call_4")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_054", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
    def test_sequence_055(self):
        ### SUT Setup ###
        setup()
        ### 1. Request ###
        call = list_of_all_cases("test_case_055", "call_1")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_055", "call_1", response, call)
        ### 2. Request ###
        call = list_of_all_cases("test_case_055", "call_2")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_055", "call_2", response, call)
        ### 3. Request ###
        call = list_of_all_cases("test_case_055", "call_3")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_055", "call_3", response, call)
        ### 4. Request ###
        call = list_of_all_cases("test_case_055", "call_4")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_055", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
    def test_sequence_056(self):
        ### SUT Setup ###
        setup()
        ### 1. Request ###
        call = list_of_all_cases("test_case_056", "call_1")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_056", "call_1", response, call)
        ### 2. Request ###
        call = list_of_all_cases("test_case_056", "call_2")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_056", "call_2", response, call)
        ### 3. Request ###
        call = list_of_all_cases("test_case_056", "call_3")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_056", "call_3", response, call)
        ### 4. Request ###
        call = list_of_all_cases("test_case_056", "call_4")
        with open(call[3], 'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_056", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
def test_sequence_057(self):
### SUT Setup ###
setup()
### 1. Request ###
call = list_of_all_cases("test_case_057", "call_1")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_057", "call_1", response, call)
### 2. Request ###
call = list_of_all_cases("test_case_057", "call_2")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_057", "call_2", response, call)
### 3. Request ###
call = list_of_all_cases("test_case_057", "call_3")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_057", "call_3", response, call)
### 4. Request ###
call = list_of_all_cases("test_case_057", "call_4")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_057", "call_4", response, call)
### SUT Teardown ###
teardown()
def test_sequence_058(self):
### SUT Setup ###
setup()
### 1. Request ###
call = list_of_all_cases("test_case_058", "call_1")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_058", "call_1", response, call)
### 2. Request ###
call = list_of_all_cases("test_case_058", "call_2")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_058", "call_2", response, call)
### 3. Request ###
call = list_of_all_cases("test_case_058", "call_3")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_058", "call_3", response, call)
### 4. Request ###
call = list_of_all_cases("test_case_058", "call_4")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_058", "call_4", response, call)
### SUT Teardown ###
teardown()
def test_sequence_059(self):
### SUT Setup ###
setup()
### 1. Request ###
call = list_of_all_cases("test_case_059", "call_1")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_059", "call_1", response, call)
### 2. Request ###
call = list_of_all_cases("test_case_059", "call_2")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_059", "call_2", response, call)
### 3. Request ###
call = list_of_all_cases("test_case_059", "call_3")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_059", "call_3", response, call)
### 4. Request ###
call = list_of_all_cases("test_case_059", "call_4")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_059", "call_4", response, call)
### SUT Teardown ###
teardown()
def test_sequence_060(self):
### SUT Setup ###
setup()
### 1. Request ###
call = list_of_all_cases("test_case_060", "call_1")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_060", "call_1", response, call)
### 2. Request ###
call = list_of_all_cases("test_case_060", "call_2")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_060", "call_2", response, call)
### 3. Request ###
call = list_of_all_cases("test_case_060", "call_3")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_060", "call_3", response, call)
### 4. Request ###
call = list_of_all_cases("test_case_060", "call_4")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_060", "call_4", response, call)
### SUT Teardown ###
teardown()
def test_sequence_061(self):
### SUT Setup ###
setup()
### 1. Request ###
call = list_of_all_cases("test_case_061", "call_1")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_061", "call_1", response, call)
### 2. Request ###
call = list_of_all_cases("test_case_061", "call_2")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_061", "call_2", response, call)
### 3. Request ###
call = list_of_all_cases("test_case_061", "call_3")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_061", "call_3", response, call)
### 4. Request ###
call = list_of_all_cases("test_case_061", "call_4")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_061", "call_4", response, call)
### SUT Teardown ###
teardown()
def test_sequence_062(self):
### SUT Setup ###
setup()
### 1. Request ###
call = list_of_all_cases("test_case_062", "call_1")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_062", "call_1", response, call)
### 2. Request ###
call = list_of_all_cases("test_case_062", "call_2")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_062", "call_2", response, call)
### 3. Request ###
call = list_of_all_cases("test_case_062", "call_3")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_062", "call_3", response, call)
### 4. Request ###
call = list_of_all_cases("test_case_062", "call_4")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_062", "call_4", response, call)
### SUT Teardown ###
teardown()
def test_sequence_063(self):
### SUT Setup ###
setup()
### 1. Request ###
call = list_of_all_cases("test_case_063", "call_1")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_063", "call_1", response, call)
### 2. Request ###
call = list_of_all_cases("test_case_063", "call_2")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_063", "call_2", response, call)
### 3. Request ###
call = list_of_all_cases("test_case_063", "call_3")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_063", "call_3", response, call)
### 4. Request ###
call = list_of_all_cases("test_case_063", "call_4")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_063", "call_4", response, call)
### SUT Teardown ###
teardown()
def test_sequence_064(self):
### SUT Setup ###
setup()
### 1. Request ###
call = list_of_all_cases("test_case_064", "call_1")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_064", "call_1", response, call)
### 2. Request ###
call = list_of_all_cases("test_case_064", "call_2")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_064", "call_2", response, call)
### 3. Request ###
call = list_of_all_cases("test_case_064", "call_3")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_064", "call_3", response, call)
### 4. Request ###
call = list_of_all_cases("test_case_064", "call_4")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_064", "call_4", response, call)
### SUT Teardown ###
teardown()
def test_sequence_065(self):
### SUT Setup ###
setup()
### 1. Request ###
call = list_of_all_cases("test_case_065", "call_1")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_065", "call_1", response, call)
### 2. Request ###
call = list_of_all_cases("test_case_065", "call_2")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_065", "call_2", response, call)
### 3. Request ###
call = list_of_all_cases("test_case_065", "call_3")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_065", "call_3", response, call)
### 4. Request ###
call = list_of_all_cases("test_case_065", "call_4")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_065", "call_4", response, call)
### SUT Teardown ###
teardown()
def test_sequence_066(self):
### SUT Setup ###
setup()
### 1. Request ###
call = list_of_all_cases("test_case_066", "call_1")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_066", "call_1", response, call)
### 2. Request ###
call = list_of_all_cases("test_case_066", "call_2")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_066", "call_2", response, call)
### 3. Request ###
call = list_of_all_cases("test_case_066", "call_3")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_066", "call_3", response, call)
### 4. Request ###
call = list_of_all_cases("test_case_066", "call_4")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_066", "call_4", response, call)
### SUT Teardown ###
teardown()
def test_sequence_067(self):
### SUT Setup ###
setup()
### 1. Request ###
call = list_of_all_cases("test_case_067", "call_1")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_067", "call_1", response, call)
### 2. Request ###
call = list_of_all_cases("test_case_067", "call_2")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_067", "call_2", response, call)
### 3. Request ###
call = list_of_all_cases("test_case_067", "call_3")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_067", "call_3", response, call)
### 4. Request ###
call = list_of_all_cases("test_case_067", "call_4")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_067", "call_4", response, call)
### SUT Teardown ###
teardown()
def test_sequence_068(self):
### SUT Setup ###
setup()
### 1. Request ###
call = list_of_all_cases("test_case_068", "call_1")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_068", "call_1", response, call)
### 2. Request ###
call = list_of_all_cases("test_case_068", "call_2")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_068", "call_2", response, call)
### 3. Request ###
call = list_of_all_cases("test_case_068", "call_3")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_068", "call_3", response, call)
### 4. Request ###
call = list_of_all_cases("test_case_068", "call_4")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_068", "call_4", response, call)
### SUT Teardown ###
teardown()
def test_sequence_069(self):
### SUT Setup ###
setup()
### 1. Request ###
call = list_of_all_cases("test_case_069", "call_1")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_069", "call_1", response, call)
### 2. Request ###
call = list_of_all_cases("test_case_069", "call_2")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_069", "call_2", response, call)
### 3. Request ###
call = list_of_all_cases("test_case_069", "call_3")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_069", "call_3", response, call)
### 4. Request ###
call = list_of_all_cases("test_case_069", "call_4")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_069", "call_4", response, call)
### SUT Teardown ###
teardown()
def test_sequence_070(self):
### SUT Setup ###
setup()
### 1. Request ###
call = list_of_all_cases("test_case_070", "call_1")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_070", "call_1", response, call)
### 2. Request ###
call = list_of_all_cases("test_case_070", "call_2")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_070", "call_2", response, call)
### 3. Request ###
call = list_of_all_cases("test_case_070", "call_3")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_070", "call_3", response, call)
### 4. Request ###
call = list_of_all_cases("test_case_070", "call_4")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_070", "call_4", response, call)
### SUT Teardown ###
teardown()
def test_sequence_071(self):
### SUT Setup ###
setup()
### 1. Request ###
call = list_of_all_cases("test_case_071", "call_1")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_071", "call_1", response, call)
### 2. Request ###
call = list_of_all_cases("test_case_071", "call_2")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_071", "call_2", response, call)
### 3. Request ###
call = list_of_all_cases("test_case_071", "call_3")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_071", "call_3", response, call)
### 4. Request ###
call = list_of_all_cases("test_case_071", "call_4")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_071", "call_4", response, call)
### SUT Teardown ###
teardown()
def test_sequence_072(self):
### SUT Setup ###
setup()
### 1. Request ###
call = list_of_all_cases("test_case_072", "call_1")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_072", "call_1", response, call)
### 2. Request ###
call = list_of_all_cases("test_case_072", "call_2")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_072", "call_2", response, call)
### 3. Request ###
call = list_of_all_cases("test_case_072", "call_3")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_072", "call_3", response, call)
### 4. Request ###
call = list_of_all_cases("test_case_072", "call_4")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_072", "call_4", response, call)
### SUT Teardown ###
teardown()
def test_sequence_073(self):
### SUT Setup ###
setup()
### 1. Request ###
call = list_of_all_cases("test_case_073", "call_1")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_073", "call_1", response, call)
### 2. Request ###
call = list_of_all_cases("test_case_073", "call_2")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_073", "call_2", response, call)
### 3. Request ###
call = list_of_all_cases("test_case_073", "call_3")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_073", "call_3", response, call)
### 4. Request ###
call = list_of_all_cases("test_case_073", "call_4")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_073", "call_4", response, call)
### SUT Teardown ###
teardown()
def test_sequence_074(self):
### SUT Setup ###
setup()
### 1. Request ###
call = list_of_all_cases("test_case_074", "call_1")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_074", "call_1", response, call)
### 2. Request ###
call = list_of_all_cases("test_case_074", "call_2")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_074", "call_2", response, call)
### 3. Request ###
call = list_of_all_cases("test_case_074", "call_3")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_074", "call_3", response, call)
### 4. Request ###
call = list_of_all_cases("test_case_074", "call_4")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_074", "call_4", response, call)
### SUT Teardown ###
teardown()
def test_sequence_075(self):
### SUT Setup ###
setup()
### 1. Request ###
call = list_of_all_cases("test_case_075", "call_1")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_075", "call_1", response, call)
### 2. Request ###
call = list_of_all_cases("test_case_075", "call_2")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_075", "call_2", response, call)
### 3. Request ###
call = list_of_all_cases("test_case_075", "call_3")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_075", "call_3", response, call)
### 4. Request ###
call = list_of_all_cases("test_case_075", "call_4")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_075", "call_4", response, call)
### SUT Teardown ###
teardown()
def test_sequence_076(self):
### SUT Setup ###
setup()
### 1. Request ###
call = list_of_all_cases("test_case_076", "call_1")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_076", "call_1", response, call)
### 2. Request ###
call = list_of_all_cases("test_case_076", "call_2")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_076", "call_2", response, call)
### 3. Request ###
call = list_of_all_cases("test_case_076", "call_3")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_076", "call_3", response, call)
### 4. Request ###
call = list_of_all_cases("test_case_076", "call_4")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_076", "call_4", response, call)
### SUT Teardown ###
teardown()
def test_sequence_077(self):
### SUT Setup ###
setup()
### 1. Request ###
call = list_of_all_cases("test_case_077", "call_1")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_077", "call_1", response, call)
### 2. Request ###
call = list_of_all_cases("test_case_077", "call_2")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_077", "call_2", response, call)
### 3. Request ###
call = list_of_all_cases("test_case_077", "call_3")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_077", "call_3", response, call)
### 4. Request ###
call = list_of_all_cases("test_case_077", "call_4")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_077", "call_4", response, call)
### SUT Teardown ###
teardown()
def test_sequence_078(self):
### SUT Setup ###
setup()
### 1. Request ###
call = list_of_all_cases("test_case_078", "call_1")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_078", "call_1", response, call)
### 2. Request ###
call = list_of_all_cases("test_case_078", "call_2")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_078", "call_2", response, call)
### 3. Request ###
call = list_of_all_cases("test_case_078", "call_3")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_078", "call_3", response, call)
### 4. Request ###
call = list_of_all_cases("test_case_078", "call_4")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_078", "call_4", response, call)
### SUT Teardown ###
teardown()
def test_sequence_079(self):
### SUT Setup ###
setup()
### 1. Request ###
call = list_of_all_cases("test_case_079", "call_1")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_079", "call_1", response, call)
### 2. Request ###
call = list_of_all_cases("test_case_079", "call_2")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_079", "call_2", response, call)
### 3. Request ###
call = list_of_all_cases("test_case_079", "call_3")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_079", "call_3", response, call)
### 4. Request ###
call = list_of_all_cases("test_case_079", "call_4")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_079", "call_4", response, call)
### SUT Teardown ###
teardown()
def test_sequence_080(self):
### SUT Setup ###
setup()
### 1. Request ###
call = list_of_all_cases("test_case_080", "call_1")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_080", "call_1", response, call)
### 2. Request ###
call = list_of_all_cases("test_case_080", "call_2")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_080", "call_2", response, call)
### 3. Request ###
call = list_of_all_cases("test_case_080", "call_3")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_080", "call_3", response, call)
### 4. Request ###
call = list_of_all_cases("test_case_080", "call_4")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_080", "call_4", response, call)
### SUT Teardown ###
teardown()
def test_sequence_081(self):
### SUT Setup ###
setup()
### 1. Request ###
call = list_of_all_cases("test_case_081", "call_1")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_081", "call_1", response, call)
### 2. Request ###
call = list_of_all_cases("test_case_081", "call_2")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_081", "call_2", response, call)
### 3. Request ###
call = list_of_all_cases("test_case_081", "call_3")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_081", "call_3", response, call)
### 4. Request ###
call = list_of_all_cases("test_case_081", "call_4")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_081", "call_4", response, call)
### SUT Teardown ###
teardown()
def test_sequence_082(self):
### SUT Setup ###
setup()
### 1. Request ###
call = list_of_all_cases("test_case_082", "call_1")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_082", "call_1", response, call)
### 2. Request ###
call = list_of_all_cases("test_case_082", "call_2")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_082", "call_2", response, call)
### 3. Request ###
call = list_of_all_cases("test_case_082", "call_3")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_082", "call_3", response, call)
### 4. Request ###
call = list_of_all_cases("test_case_082", "call_4")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_082", "call_4", response, call)
### SUT Teardown ###
teardown()
def test_sequence_083(self):
### SUT Setup ###
setup()
### 1. Request ###
call = list_of_all_cases("test_case_083", "call_1")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_083", "call_1", response, call)
### 2. Request ###
call = list_of_all_cases("test_case_083", "call_2")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_083", "call_2", response, call)
### 3. Request ###
call = list_of_all_cases("test_case_083", "call_3")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_083", "call_3", response, call)
### 4. Request ###
call = list_of_all_cases("test_case_083", "call_4")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_083", "call_4", response, call)
### SUT Teardown ###
teardown()
def test_sequence_084(self):
### SUT Setup ###
setup()
### 1. Request ###
call = list_of_all_cases("test_case_084", "call_1")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_084", "call_1", response, call)
### 2. Request ###
call = list_of_all_cases("test_case_084", "call_2")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_084", "call_2", response, call)
### 3. Request ###
call = list_of_all_cases("test_case_084", "call_3")
with open(call[3],'rb') as payload:
response = requests.request(call[1], call[0], headers=call[2], data=payload)
verify("test_case_084", "call_3", response, call)
### 4. Request ###
call = list_of_all_cases("test_case_084", "call_4")
with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_084", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
    def test_sequence_085(self):
        ### SUT Setup ###
        setup()
        ### 1. Request ###
        call = list_of_all_cases("test_case_085", "call_1")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_085", "call_1", response, call)
        ### 2. Request ###
        call = list_of_all_cases("test_case_085", "call_2")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_085", "call_2", response, call)
        ### 3. Request ###
        call = list_of_all_cases("test_case_085", "call_3")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_085", "call_3", response, call)
        ### 4. Request ###
        call = list_of_all_cases("test_case_085", "call_4")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_085", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
    def test_sequence_086(self):
        ### SUT Setup ###
        setup()
        ### 1. Request ###
        call = list_of_all_cases("test_case_086", "call_1")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_086", "call_1", response, call)
        ### 2. Request ###
        call = list_of_all_cases("test_case_086", "call_2")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_086", "call_2", response, call)
        ### 3. Request ###
        call = list_of_all_cases("test_case_086", "call_3")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_086", "call_3", response, call)
        ### 4. Request ###
        call = list_of_all_cases("test_case_086", "call_4")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_086", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
    def test_sequence_087(self):
        ### SUT Setup ###
        setup()
        ### 1. Request ###
        call = list_of_all_cases("test_case_087", "call_1")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_087", "call_1", response, call)
        ### 2. Request ###
        call = list_of_all_cases("test_case_087", "call_2")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_087", "call_2", response, call)
        ### 3. Request ###
        call = list_of_all_cases("test_case_087", "call_3")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_087", "call_3", response, call)
        ### 4. Request ###
        call = list_of_all_cases("test_case_087", "call_4")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_087", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
    def test_sequence_088(self):
        ### SUT Setup ###
        setup()
        ### 1. Request ###
        call = list_of_all_cases("test_case_088", "call_1")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_088", "call_1", response, call)
        ### 2. Request ###
        call = list_of_all_cases("test_case_088", "call_2")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_088", "call_2", response, call)
        ### 3. Request ###
        call = list_of_all_cases("test_case_088", "call_3")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_088", "call_3", response, call)
        ### 4. Request ###
        call = list_of_all_cases("test_case_088", "call_4")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_088", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
    def test_sequence_089(self):
        ### SUT Setup ###
        setup()
        ### 1. Request ###
        call = list_of_all_cases("test_case_089", "call_1")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_089", "call_1", response, call)
        ### 2. Request ###
        call = list_of_all_cases("test_case_089", "call_2")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_089", "call_2", response, call)
        ### 3. Request ###
        call = list_of_all_cases("test_case_089", "call_3")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_089", "call_3", response, call)
        ### 4. Request ###
        call = list_of_all_cases("test_case_089", "call_4")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_089", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
    def test_sequence_090(self):
        ### SUT Setup ###
        setup()
        ### 1. Request ###
        call = list_of_all_cases("test_case_090", "call_1")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_090", "call_1", response, call)
        ### 2. Request ###
        call = list_of_all_cases("test_case_090", "call_2")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_090", "call_2", response, call)
        ### 3. Request ###
        call = list_of_all_cases("test_case_090", "call_3")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_090", "call_3", response, call)
        ### 4. Request ###
        call = list_of_all_cases("test_case_090", "call_4")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_090", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
    def test_sequence_091(self):
        ### SUT Setup ###
        setup()
        ### 1. Request ###
        call = list_of_all_cases("test_case_091", "call_1")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_091", "call_1", response, call)
        ### 2. Request ###
        call = list_of_all_cases("test_case_091", "call_2")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_091", "call_2", response, call)
        ### 3. Request ###
        call = list_of_all_cases("test_case_091", "call_3")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_091", "call_3", response, call)
        ### 4. Request ###
        call = list_of_all_cases("test_case_091", "call_4")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_091", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
    def test_sequence_092(self):
        ### SUT Setup ###
        setup()
        ### 1. Request ###
        call = list_of_all_cases("test_case_092", "call_1")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_092", "call_1", response, call)
        ### 2. Request ###
        call = list_of_all_cases("test_case_092", "call_2")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_092", "call_2", response, call)
        ### 3. Request ###
        call = list_of_all_cases("test_case_092", "call_3")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_092", "call_3", response, call)
        ### 4. Request ###
        call = list_of_all_cases("test_case_092", "call_4")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_092", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
    def test_sequence_093(self):
        ### SUT Setup ###
        setup()
        ### 1. Request ###
        call = list_of_all_cases("test_case_093", "call_1")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_093", "call_1", response, call)
        ### 2. Request ###
        call = list_of_all_cases("test_case_093", "call_2")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_093", "call_2", response, call)
        ### 3. Request ###
        call = list_of_all_cases("test_case_093", "call_3")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_093", "call_3", response, call)
        ### 4. Request ###
        call = list_of_all_cases("test_case_093", "call_4")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_093", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
    def test_sequence_094(self):
        ### SUT Setup ###
        setup()
        ### 1. Request ###
        call = list_of_all_cases("test_case_094", "call_1")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_094", "call_1", response, call)
        ### 2. Request ###
        call = list_of_all_cases("test_case_094", "call_2")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_094", "call_2", response, call)
        ### 3. Request ###
        call = list_of_all_cases("test_case_094", "call_3")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_094", "call_3", response, call)
        ### 4. Request ###
        call = list_of_all_cases("test_case_094", "call_4")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_094", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
    def test_sequence_095(self):
        ### SUT Setup ###
        setup()
        ### 1. Request ###
        call = list_of_all_cases("test_case_095", "call_1")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_095", "call_1", response, call)
        ### 2. Request ###
        call = list_of_all_cases("test_case_095", "call_2")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_095", "call_2", response, call)
        ### 3. Request ###
        call = list_of_all_cases("test_case_095", "call_3")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_095", "call_3", response, call)
        ### 4. Request ###
        call = list_of_all_cases("test_case_095", "call_4")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_095", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
    def test_sequence_096(self):
        ### SUT Setup ###
        setup()
        ### 1. Request ###
        call = list_of_all_cases("test_case_096", "call_1")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_096", "call_1", response, call)
        ### 2. Request ###
        call = list_of_all_cases("test_case_096", "call_2")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_096", "call_2", response, call)
        ### 3. Request ###
        call = list_of_all_cases("test_case_096", "call_3")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_096", "call_3", response, call)
        ### 4. Request ###
        call = list_of_all_cases("test_case_096", "call_4")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_096", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
    def test_sequence_097(self):
        ### SUT Setup ###
        setup()
        ### 1. Request ###
        call = list_of_all_cases("test_case_097", "call_1")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_097", "call_1", response, call)
        ### 2. Request ###
        call = list_of_all_cases("test_case_097", "call_2")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_097", "call_2", response, call)
        ### 3. Request ###
        call = list_of_all_cases("test_case_097", "call_3")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_097", "call_3", response, call)
        ### 4. Request ###
        call = list_of_all_cases("test_case_097", "call_4")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_097", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
    def test_sequence_098(self):
        ### SUT Setup ###
        setup()
        ### 1. Request ###
        call = list_of_all_cases("test_case_098", "call_1")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_098", "call_1", response, call)
        ### 2. Request ###
        call = list_of_all_cases("test_case_098", "call_2")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_098", "call_2", response, call)
        ### 3. Request ###
        call = list_of_all_cases("test_case_098", "call_3")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_098", "call_3", response, call)
        ### 4. Request ###
        call = list_of_all_cases("test_case_098", "call_4")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_098", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
    def test_sequence_099(self):
        ### SUT Setup ###
        setup()
        ### 1. Request ###
        call = list_of_all_cases("test_case_099", "call_1")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_099", "call_1", response, call)
        ### 2. Request ###
        call = list_of_all_cases("test_case_099", "call_2")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_099", "call_2", response, call)
        ### 3. Request ###
        call = list_of_all_cases("test_case_099", "call_3")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_099", "call_3", response, call)
        ### 4. Request ###
        call = list_of_all_cases("test_case_099", "call_4")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_099", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
    def test_sequence_100(self):
        ### SUT Setup ###
        setup()
        ### 1. Request ###
        call = list_of_all_cases("test_case_100", "call_1")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_100", "call_1", response, call)
        ### 2. Request ###
        call = list_of_all_cases("test_case_100", "call_2")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_100", "call_2", response, call)
        ### 3. Request ###
        call = list_of_all_cases("test_case_100", "call_3")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_100", "call_3", response, call)
        ### 4. Request ###
        call = list_of_all_cases("test_case_100", "call_4")
        with open(call[3],'rb') as payload:
            response = requests.request(call[1], call[0], headers=call[2], data=payload)
        verify("test_case_100", "call_4", response, call)
        ### SUT Teardown ###
        teardown()
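The sequences above are machine-generated and identical apart from their case ids. As a hedged sketch (not part of the original harness), the same four-call pattern can be expressed with a small id builder and `unittest` subtests; `run_call` below is a hypothetical stand-in for the `list_of_all_cases` / `requests.request` / `verify` round-trip:

```python
import unittest

def case_ids(sequence, calls=4):
    """Build the ("test_case_NNN", "call_N") id pairs used by the harness."""
    case = "test_case_%03d" % sequence
    return [(case, "call_%d" % i) for i in range(1, calls + 1)]

class GeneratedSequence(unittest.TestCase):
    def run_call(self, case, call):
        # Hypothetical stand-in for: look up the call spec, send the payload
        # with requests.request(...), and verify(...) the response.
        return (case, call)

    def test_sequence_085(self):
        for case, call in case_ids(85):
            with self.subTest(case=case, call=call):
                self.assertEqual(self.run_call(case, call), (case, call))
```

`subTest` keeps each request's failure reported individually, which preserves the per-call granularity of the generated code without repeating the boilerplate sixteen times.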

# === src/zen/tests/bellman_ford.py (repo: wangyiranamy/Testing, license: BSD-3-Clause) ===
import unittest
from zen import *
import networkx
import random

class AllPairsBellmanFordPathLength_TestCase(unittest.TestCase):

    def test_apdp_undirected_ignore_weights(self):
        G = Graph()
        G.add_edge(1,2,weight=4)
        G.add_edge(2,3,weight=1)
        G.add_edge(1,4,weight=2)
        G.add_edge(4,5,weight=1)
        G.add_edge(5,3,weight=1)
        D = all_pairs_bellman_ford_path_length_(G,ignore_weights=True)
        self.assertEqual(D[0,2],2)

    def test_apdp_undirected_w_weights(self):
        G = Graph()
        G.add_edge(1,2,weight=4)
        G.add_edge(2,3,weight=1)
        G.add_edge(1,4,weight=2)
        G.add_edge(4,5,weight=1)
        G.add_edge(5,3,weight=1)
        D = all_pairs_bellman_ford_path_length_(G)
        self.assertEqual(D[0,2],4)
        self.assertEqual(D[0,1],4)
        self.assertEqual(D[1,2],1)
        self.assertEqual(D[1,3],3)

    def test_apdp_undirected(self):
        G = Graph()
        G.add_edge(1,2)
        G.add_edge(2,3)
        G.add_edge(2,4)
        D = all_pairs_bellman_ford_path_length_(G)
        self.assertEqual(D[0,0],0)
        self.assertEqual(D[0,1],1)
        self.assertEqual(D[0,2],2)
        self.assertEqual(D[0,3],2)
        self.assertEqual(D[1,2],1)
        self.assertEqual(D[1,3],1)
        self.assertEqual(D[2,3],2)

    def test_apdp_directed(self):
        G = DiGraph()
        G.add_edge(1,2)
        G.add_edge(2,3)
        G.add_edge(2,4)
        D = all_pairs_bellman_ford_path_length_(G)
        self.assertEqual(D[0,0],0)
        self.assertEqual(D[0,1],1)
        self.assertEqual(D[0,2],2)
        self.assertEqual(D[0,3],2)
        self.assertEqual(D[1,2],1)
        self.assertEqual(D[1,3],1)
        self.assertEqual(D[2,3],float('infinity'))

    def test_disconnected(self):
        G = Graph()
        G.add_edge(1,2)
        G.add_edge(2,3)
        G.add_node(4)
        D = all_pairs_bellman_ford_path_length_(G)
        self.assertEqual(D[0,3],float('infinity'))
        self.assertEqual(D[1,3],float('infinity'))
        self.assertEqual(D[2,3],float('infinity'))

class AllPairsBellmanFordPath_TestCase(unittest.TestCase):

    def test_apdp_undirected(self):
        G = Graph()
        G.add_edge(1,2)
        G.add_edge(2,3)
        G.add_edge(2,4)
        D,P = all_pairs_bellman_ford_path_(G)
        self.assertEqual(D[0,0],0)
        self.assertEqual(D[0,1],1)
        self.assertEqual(D[0,2],2)
        self.assertEqual(D[0,3],2)
        self.assertEqual(D[1,2],1)
        self.assertEqual(D[1,3],1)
        self.assertEqual(D[2,3],2)
        self.assertEqual(P[0,0],-1)
        self.assertEqual(P[0,1],0)
        self.assertEqual(P[0,2],1)
        self.assertEqual(P[0,3],1)
        self.assertEqual(P[1,2],1)
        self.assertEqual(P[1,3],1)
        self.assertEqual(P[2,3],1)

    def test_apdp_directed(self):
        G = DiGraph()
        G.add_edge(1,2)
        G.add_edge(2,3)
        G.add_edge(2,4)
        D,P = all_pairs_bellman_ford_path_(G)
        self.assertEqual(D[0,0],0)
        self.assertEqual(D[0,1],1)
        self.assertEqual(D[0,2],2)
        self.assertEqual(D[0,3],2)
        self.assertEqual(D[1,2],1)
        self.assertEqual(D[1,3],1)
        self.assertEqual(D[2,3],float('infinity'))
        self.assertEqual(P[0,0],-1)
        self.assertEqual(P[0,1],0)
        self.assertEqual(P[0,2],1)
        self.assertEqual(P[0,3],1)
        self.assertEqual(P[1,2],1)
        self.assertEqual(P[1,3],1)
        self.assertEqual(P[2,3],-1)

    def test_disconnected(self):
        G = Graph()
        G.add_edge(1,2)
        G.add_edge(2,3)
        G.add_node(4)
        D,P = all_pairs_bellman_ford_path_(G)
        self.assertEqual(D[0,3],float('infinity'))
        self.assertEqual(D[1,3],float('infinity'))
        self.assertEqual(D[2,3],float('infinity'))
        self.assertEqual(P[0,3],-1)
        self.assertEqual(P[1,3],-1)
        self.assertEqual(P[2,3],-1)

class AllPairsBellmanFordPathLengthTestCase(unittest.TestCase):

    def test_apdp_undirected(self):
        G = Graph()
        G.add_edge(1,2)
        G.add_edge(2,3)
        G.add_edge(2,4)
        D = all_pairs_bellman_ford_path_length(G)
        self.assertEqual(D[1][1],0)
        self.assertEqual(D[1][2],1)
        self.assertEqual(D[1][3],2)
        self.assertEqual(D[1][4],2)
        self.assertEqual(D[2][3],1)
        self.assertEqual(D[2][4],1)
        self.assertEqual(D[3][4],2)

    def test_apdp_directed(self):
        G = DiGraph()
        G.add_edge(1,2)
        G.add_edge(2,3)
        G.add_edge(2,4)
        D = all_pairs_bellman_ford_path_length(G)
        self.assertEqual(D[1][1],0)
        self.assertEqual(D[1][2],1)
        self.assertEqual(D[1][3],2)
        self.assertEqual(D[1][4],2)
        self.assertEqual(D[2][3],1)
        self.assertEqual(D[2][4],1)
        self.assertEqual(D[3][4],float('infinity'))

    def test_disconnected(self):
        G = Graph()
        G.add_edge(1,2)
        G.add_edge(2,3)
        G.add_node(4)
        D = all_pairs_bellman_ford_path_length(G)
        self.assertEqual(D[1][4],float('infinity'))
        self.assertEqual(D[2][4],float('infinity'))
        self.assertEqual(D[3][4],float('infinity'))

class AllPairsBellmanFordPathTestCase(unittest.TestCase):

    def test_apdp_undirected(self):
        G = Graph()
        G.add_edge(1,2)
        G.add_edge(2,3)
        G.add_edge(2,4)
        D = all_pairs_bellman_ford_path(G)
        self.assertEqual(D[1][1][0],0)
        self.assertEqual(D[1][2][0],1)
        self.assertEqual(D[1][3][0],2)
        self.assertEqual(D[1][4][0],2)
        self.assertEqual(D[2][3][0],1)
        self.assertEqual(D[2][4][0],1)
        self.assertEqual(D[3][4][0],2)
        self.assertEqual(D[1][1][1],None)
        self.assertEqual(D[1][2][1],1)
        self.assertEqual(D[1][3][1],2)
        self.assertEqual(D[1][4][1],2)
        self.assertEqual(D[2][3][1],2)
        self.assertEqual(D[2][4][1],2)
        self.assertEqual(D[3][4][1],2)

    def test_apdp_directed(self):
        G = DiGraph()
        G.add_edge(1,2)
        G.add_edge(2,3)
        G.add_edge(2,4)
        D = all_pairs_bellman_ford_path(G)
        self.assertEqual(D[1][1][0],0)
        self.assertEqual(D[1][2][0],1)
        self.assertEqual(D[1][3][0],2)
        self.assertEqual(D[1][4][0],2)
        self.assertEqual(D[2][3][0],1)
        self.assertEqual(D[2][4][0],1)
        self.assertEqual(D[3][4][0],float('infinity'))
        self.assertEqual(D[1][1][1],None)
        self.assertEqual(D[1][2][1],1)
        self.assertEqual(D[1][3][1],2)
        self.assertEqual(D[1][4][1],2)
        self.assertEqual(D[2][3][1],2)
        self.assertEqual(D[2][4][1],2)
        self.assertEqual(D[3][4][1],None)

    def test_disconnected(self):
        G = Graph()
        G.add_edge(1,2)
        G.add_edge(2,3)
        G.add_node(4)
        D = all_pairs_bellman_ford_path(G)
        self.assertEqual(D[1][4][0],float('infinity'))
        self.assertEqual(D[2][4][0],float('infinity'))
        self.assertEqual(D[3][4][0],float('infinity'))
        self.assertEqual(D[1][4][1],None)
        self.assertEqual(D[2][4][1],None)
        self.assertEqual(D[3][4][1],None)

class BellmanFordPathLengthTestCase(unittest.TestCase):

    def test_sssp_undirected(self):
        # following example from CLRS book page 596
        G = Graph()
        G.add_edge('o', 'a', None, 2)
        G.add_edge('a', 'f', None, 12)
        G.add_edge('f', 't', None, 3)
        G.add_edge('t', 'e', None, 7)
        G.add_edge('e', 'c', None, 4)
        G.add_edge('c', 'o', None, 4)
        G.add_edge('o', 'b', None, 5)
        G.add_edge('a', 'b', None, 2)
        G.add_edge('a', 'd', None, 7)
        G.add_edge('b', 'd', None, 4)
        G.add_edge('b', 'e', None, 3)
        G.add_edge('t', 'd', None, 5)
        G.add_edge('e', 'd', None, 1)
        G.add_edge('b', 'c', None, 1)
        G.add_edge('x', 'y', None, 1)
        D = bellman_ford_path_length(G, 'o')
        self.assertEqual(D['o'], 0)
        self.assertEqual(D['a'], 2)
        self.assertEqual(D['b'], 4)
        #self.assertEqual(D['d'], (8, 'e'))
        self.assertEqual(D['c'], 4)
        self.assertEqual(D['e'], 7)
        self.assertEqual(D['t'], 13)
        self.assertEqual(D['f'], 14)
        self.assertEqual(D['x'], float('infinity'))
        self.assertEqual(D['y'], float('infinity'))

    def test_sssp_directed(self):
        # following example from CLRS book page 596
        G = DiGraph()
        G.add_edge('s', 't', None, 10)
        G.add_edge('s', 'y', None, 5)
        G.add_edge('t', 'x', None, 1)
        G.add_edge('t', 'y', None, 2)
        G.add_edge('y', 't', None, 3)
        G.add_edge('y', 'x', None, 9)
        G.add_edge('y', 'z', None, 2)
        G.add_edge('z', 's', None, 7)
        G.add_edge('z', 'x', None, 6)
        G.add_edge('x', 'z', None, 4)
        G.add_edge('a', 'b', None, 4)
        D = bellman_ford_path_length(G, 's')
        self.assertEqual(D['s'], 0)
        self.assertEqual(D['t'], 8)
        self.assertEqual(D['y'], 5)
        self.assertEqual(D['x'], 9)
        self.assertEqual(D['z'], 7)
        self.assertEqual(D['a'], float('infinity'))
        self.assertEqual(D['b'], float('infinity'))

class BellmanFordPathTestCase(unittest.TestCase):

    def test_sssp_undirected(self):
        # following example from CLRS book page 596
        G = Graph()
        G.add_edge('o', 'a', None, 2)
        G.add_edge('a', 'f', None, 12)
        G.add_edge('f', 't', None, 3)
        G.add_edge('t', 'e', None, 7)
        G.add_edge('e', 'c', None, 4)
        G.add_edge('c', 'o', None, 4)
        G.add_edge('o', 'b', None, 5)
        G.add_edge('a', 'b', None, 2)
        G.add_edge('a', 'd', None, 7)
        G.add_edge('b', 'd', None, 4)
        G.add_edge('b', 'e', None, 3)
        G.add_edge('t', 'd', None, 5)
        G.add_edge('e', 'd', None, 1)
        G.add_edge('b', 'c', None, 1)
        G.add_edge('x', 'y', None, 1)
        D = bellman_ford_path(G, 'o')
        self.assertEqual(D['o'], (0, None))
        self.assertEqual(D['a'], (2, 'o'))
        self.assertEqual(D['b'], (4, 'a'))
        #self.assertEqual(D['d'], (8, 'e'))
        self.assertEqual(D['c'], (4, 'o'))
        self.assertEqual(D['e'], (7, 'b'))
        self.assertEqual(D['t'], (13, 'd'))
        self.assertEqual(D['f'], (14, 'a'))
        self.assertEqual(D['x'], (float('infinity'),None))
        self.assertEqual(D['y'], (float('infinity'),None))

    def test_sssp_directed(self):
        # following example from CLRS book page 596
        G = DiGraph()
        G.add_edge('s', 't', None, 10)
        G.add_edge('s', 'y', None, 5)
        G.add_edge('t', 'x', None, 1)
        G.add_edge('t', 'y', None, 2)
        G.add_edge('y', 't', None, 3)
        G.add_edge('y', 'x', None, 9)
        G.add_edge('y', 'z', None, 2)
        G.add_edge('z', 's', None, 7)
        G.add_edge('z', 'x', None, 6)
        G.add_edge('x', 'z', None, 4)
        G.add_edge('a', 'b', None, 4)
        D = bellman_ford_path(G, 's')
        self.assertEqual(D['s'], (0, None))
        self.assertEqual(D['t'], (8, 'y'))
        self.assertEqual(D['y'], (5, 's'))
        self.assertEqual(D['x'], (9, 't'))
        self.assertEqual(D['z'], (7, 'y'))
        self.assertEqual(D['a'], (float('infinity'),None))
        self.assertEqual(D['b'], (float('infinity'),None))

if __name__ == '__main__':
    unittest.main()
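The suites above pin the distances and predecessors from the CLRS examples; for reference, the algorithm they exercise fits in a few lines. This is a minimal edge-list sketch under stated assumptions — it is not zen's implementation (zen's `bellman_ford_path` works on its own graph objects), and the `(distance, predecessor)` return shape simply mirrors the tuples checked in `BellmanFordPathTestCase`:

```python
import math

def bellman_ford(edges, nodes, source):
    """Single-source shortest paths over (u, v, weight) triples.

    Returns {node: (distance, predecessor)}; unreachable nodes keep
    (inf, None), matching the float('infinity') cases in the tests.
    """
    dist = {n: (math.inf, None) for n in nodes}
    dist[source] = (0, None)
    # Relax every edge |V| - 1 times; stop early once no distance improves.
    for _ in range(len(nodes) - 1):
        changed = False
        for u, v, w in edges:
            du = dist[u][0]
            if du + w < dist[v][0]:
                dist[v] = (du + w, u)
                changed = True
        if not changed:
            break
    # One extra pass detects negative-weight cycles (not exercised above).
    for u, v, w in edges:
        if dist[u][0] + w < dist[v][0]:
            raise ValueError("negative-weight cycle reachable from source")
    return dist
```

For an undirected graph like the ones in the first suites, each edge would simply be listed in both directions.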

# === angr/procedures/definitions/win32_mscms.py (repo: angr, license: BSD-2-Clause) ===
# pylint:disable=line-too-long
import logging
from ...sim_type import SimTypeFunction, SimTypeShort, SimTypeInt, SimTypeLong, SimTypeLongLong, SimTypeDouble, SimTypeFloat, SimTypePointer, SimTypeChar, SimStruct, SimTypeFixedSizeArray, SimTypeBottom, SimUnion, SimTypeBool
from ...calling_conventions import SimCCStdcall, SimCCMicrosoftAMD64
from .. import SIM_PROCEDURES as P
from . import SimLibrary
_l = logging.getLogger(name=__name__)
lib = SimLibrary()
lib.set_default_cc('X86', SimCCStdcall)
lib.set_default_cc('AMD64', SimCCMicrosoftAMD64)
lib.set_library_names("mscms.dll")
prototypes = \
{
#
'SpoolerCopyFileEvent': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=True, label="Int32"), arg_names=["pszPrinterName", "pszKey", "dwCopyFileEvent"]),
#
'GenerateCopyFilePaths': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeChar(label="Char"), label="LPArray", offset=0), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0), SimTypePointer(SimTypeChar(label="Char"), label="LPArray", offset=0), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=False, label="UInt32"), arg_names=["pszPrinterName", "pszDirectory", "pSplClientInfo", "dwLevel", "pszSourceDir", "pcchSourceDirSize", "pszTargetDir", "pcchTargetDirSize", "dwFlags"]),
#
'OpenColorProfileA': SimTypeFunction([SimTypePointer(SimStruct({"dwType": SimTypeInt(signed=False, label="UInt32"), "pProfileData": SimTypePointer(SimTypeBottom(label="Void"), offset=0), "cbDataSize": SimTypeInt(signed=False, label="UInt32")}, name="PROFILE", pack=False, align=None), offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="UInt32")], SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), arg_names=["pProfile", "dwDesiredAccess", "dwShareMode", "dwCreationMode"]),
#
'OpenColorProfileW': SimTypeFunction([SimTypePointer(SimStruct({"dwType": SimTypeInt(signed=False, label="UInt32"), "pProfileData": SimTypePointer(SimTypeBottom(label="Void"), offset=0), "cbDataSize": SimTypeInt(signed=False, label="UInt32")}, name="PROFILE", pack=False, align=None), offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="UInt32")], SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), arg_names=["pProfile", "dwDesiredAccess", "dwShareMode", "dwCreationMode"]),
#
'CloseColorProfile': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["hProfile"]),
#
'GetColorProfileFromHandle': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["hProfile", "pProfile", "pcbProfile"]),
#
'IsColorProfileValid': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeInt(signed=True, label="Int32"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["hProfile", "pbValid"]),
#
'CreateProfileFromLogColorSpaceA': SimTypeFunction([SimTypePointer(SimStruct({"lcsSignature": SimTypeInt(signed=False, label="UInt32"), "lcsVersion": SimTypeInt(signed=False, label="UInt32"), "lcsSize": SimTypeInt(signed=False, label="UInt32"), "lcsCSType": SimTypeInt(signed=True, label="Int32"), "lcsIntent": SimTypeInt(signed=True, label="Int32"), "lcsEndpoints": SimStruct({"ciexyzRed": SimStruct({"ciexyzX": SimTypeInt(signed=True, label="Int32"), "ciexyzY": SimTypeInt(signed=True, label="Int32"), "ciexyzZ": SimTypeInt(signed=True, label="Int32")}, name="CIEXYZ", pack=False, align=None), "ciexyzGreen": SimStruct({"ciexyzX": SimTypeInt(signed=True, label="Int32"), "ciexyzY": SimTypeInt(signed=True, label="Int32"), "ciexyzZ": SimTypeInt(signed=True, label="Int32")}, name="CIEXYZ", pack=False, align=None), "ciexyzBlue": SimStruct({"ciexyzX": SimTypeInt(signed=True, label="Int32"), "ciexyzY": SimTypeInt(signed=True, label="Int32"), "ciexyzZ": SimTypeInt(signed=True, label="Int32")}, name="CIEXYZ", pack=False, align=None)}, name="CIEXYZTRIPLE", pack=False, align=None), "lcsGammaRed": SimTypeInt(signed=False, label="UInt32"), "lcsGammaGreen": SimTypeInt(signed=False, label="UInt32"), "lcsGammaBlue": SimTypeInt(signed=False, label="UInt32"), "lcsFilename": SimTypeFixedSizeArray(SimTypeChar(label="Byte"), 260)}, name="LOGCOLORSPACEA", pack=False, align=None), offset=0), SimTypePointer(SimTypePointer(SimTypeChar(label="Byte"), offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pLogColorSpace", "pProfile"]),
#
'CreateProfileFromLogColorSpaceW': SimTypeFunction([SimTypePointer(SimStruct({"lcsSignature": SimTypeInt(signed=False, label="UInt32"), "lcsVersion": SimTypeInt(signed=False, label="UInt32"), "lcsSize": SimTypeInt(signed=False, label="UInt32"), "lcsCSType": SimTypeInt(signed=True, label="Int32"), "lcsIntent": SimTypeInt(signed=True, label="Int32"), "lcsEndpoints": SimStruct({"ciexyzRed": SimStruct({"ciexyzX": SimTypeInt(signed=True, label="Int32"), "ciexyzY": SimTypeInt(signed=True, label="Int32"), "ciexyzZ": SimTypeInt(signed=True, label="Int32")}, name="CIEXYZ", pack=False, align=None), "ciexyzGreen": SimStruct({"ciexyzX": SimTypeInt(signed=True, label="Int32"), "ciexyzY": SimTypeInt(signed=True, label="Int32"), "ciexyzZ": SimTypeInt(signed=True, label="Int32")}, name="CIEXYZ", pack=False, align=None), "ciexyzBlue": SimStruct({"ciexyzX": SimTypeInt(signed=True, label="Int32"), "ciexyzY": SimTypeInt(signed=True, label="Int32"), "ciexyzZ": SimTypeInt(signed=True, label="Int32")}, name="CIEXYZ", pack=False, align=None)}, name="CIEXYZTRIPLE", pack=False, align=None), "lcsGammaRed": SimTypeInt(signed=False, label="UInt32"), "lcsGammaGreen": SimTypeInt(signed=False, label="UInt32"), "lcsGammaBlue": SimTypeInt(signed=False, label="UInt32"), "lcsFilename": SimTypeFixedSizeArray(SimTypeChar(label="Char"), 260)}, name="LOGCOLORSPACEW", pack=False, align=None), offset=0), SimTypePointer(SimTypePointer(SimTypeChar(label="Byte"), offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pLogColorSpace", "pProfile"]),
#
'GetCountColorProfileElements': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["hProfile", "pnElementCount"]),
#
'GetColorProfileHeader': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimStruct({"phSize": SimTypeInt(signed=False, label="UInt32"), "phCMMType": SimTypeInt(signed=False, label="UInt32"), "phVersion": SimTypeInt(signed=False, label="UInt32"), "phClass": SimTypeInt(signed=False, label="UInt32"), "phDataColorSpace": SimTypeInt(signed=False, label="UInt32"), "phConnectionSpace": SimTypeInt(signed=False, label="UInt32"), "phDateTime": SimTypeFixedSizeArray(SimTypeInt(signed=False, label="UInt32"), 3), "phSignature": SimTypeInt(signed=False, label="UInt32"), "phPlatform": SimTypeInt(signed=False, label="UInt32"), "phProfileFlags": SimTypeInt(signed=False, label="UInt32"), "phManufacturer": SimTypeInt(signed=False, label="UInt32"), "phModel": SimTypeInt(signed=False, label="UInt32"), "phAttributes": SimTypeFixedSizeArray(SimTypeInt(signed=False, label="UInt32"), 2), "phRenderingIntent": SimTypeInt(signed=False, label="UInt32"), "phIlluminant": SimStruct({"ciexyzX": SimTypeInt(signed=True, label="Int32"), "ciexyzY": SimTypeInt(signed=True, label="Int32"), "ciexyzZ": SimTypeInt(signed=True, label="Int32")}, name="CIEXYZ", pack=False, align=None), "phCreator": SimTypeInt(signed=False, label="UInt32"), "phReserved": SimTypeFixedSizeArray(SimTypeChar(label="Byte"), 44)}, name="PROFILEHEADER", pack=False, align=None), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["hProfile", "pHeader"]),
#
'GetColorProfileElementTag': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["hProfile", "dwIndex", "pTag"]),
#
'IsColorProfileTagPresent': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeInt(signed=True, label="Int32"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["hProfile", "tag", "pbPresent"]),
#
'GetColorProfileElement': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0), SimTypePointer(SimTypeBottom(label="Void"), offset=0), SimTypePointer(SimTypeInt(signed=True, label="Int32"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["hProfile", "tag", "dwOffset", "pcbElement", "pElement", "pbReference"]),
#
'SetColorProfileHeader': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimStruct({"phSize": SimTypeInt(signed=False, label="UInt32"), "phCMMType": SimTypeInt(signed=False, label="UInt32"), "phVersion": SimTypeInt(signed=False, label="UInt32"), "phClass": SimTypeInt(signed=False, label="UInt32"), "phDataColorSpace": SimTypeInt(signed=False, label="UInt32"), "phConnectionSpace": SimTypeInt(signed=False, label="UInt32"), "phDateTime": SimTypeFixedSizeArray(SimTypeInt(signed=False, label="UInt32"), 3), "phSignature": SimTypeInt(signed=False, label="UInt32"), "phPlatform": SimTypeInt(signed=False, label="UInt32"), "phProfileFlags": SimTypeInt(signed=False, label="UInt32"), "phManufacturer": SimTypeInt(signed=False, label="UInt32"), "phModel": SimTypeInt(signed=False, label="UInt32"), "phAttributes": SimTypeFixedSizeArray(SimTypeInt(signed=False, label="UInt32"), 2), "phRenderingIntent": SimTypeInt(signed=False, label="UInt32"), "phIlluminant": SimStruct({"ciexyzX": SimTypeInt(signed=True, label="Int32"), "ciexyzY": SimTypeInt(signed=True, label="Int32"), "ciexyzZ": SimTypeInt(signed=True, label="Int32")}, name="CIEXYZ", pack=False, align=None), "phCreator": SimTypeInt(signed=False, label="UInt32"), "phReserved": SimTypeFixedSizeArray(SimTypeChar(label="Byte"), 44)}, name="PROFILEHEADER", pack=False, align=None), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["hProfile", "pHeader"]),
#
'SetColorProfileElementSize': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=True, label="Int32"), arg_names=["hProfile", "tagType", "pcbElement"]),
#
'SetColorProfileElement': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0), SimTypePointer(SimTypeBottom(label="Void"), label="LPArray", offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["hProfile", "tag", "dwOffset", "pcbElement", "pElement"]),
#
'SetColorProfileElementReference': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=True, label="Int32"), arg_names=["hProfile", "newTag", "refTag"]),
#
'GetPS2ColorSpaceArray': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0), SimTypePointer(SimTypeInt(signed=True, label="Int32"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["hProfile", "dwIntent", "dwCSAType", "pPS2ColorSpaceArray", "pcbPS2ColorSpaceArray", "pbBinary"]),
#
'GetPS2ColorRenderingIntent': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["hProfile", "dwIntent", "pBuffer", "pcbPS2ColorRenderingIntent"]),
#
'GetPS2ColorRenderingDictionary': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0), SimTypePointer(SimTypeInt(signed=True, label="Int32"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["hProfile", "dwIntent", "pPS2ColorRenderingDictionary", "pcbPS2ColorRenderingDictionary", "pbBinary"]),
#
'GetNamedProfileInfo': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimStruct({"dwFlags": SimTypeInt(signed=False, label="UInt32"), "dwCount": SimTypeInt(signed=False, label="UInt32"), "dwCountDevCoordinates": SimTypeInt(signed=False, label="UInt32"), "szPrefix": SimTypeFixedSizeArray(SimTypeChar(label="SByte"), 32), "szSuffix": SimTypeFixedSizeArray(SimTypeChar(label="SByte"), 32)}, name="NAMED_PROFILE_INFO", pack=False, align=None), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["hProfile", "pNamedProfileInfo"]),
#
'ConvertColorNameToIndex': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypePointer(SimTypeChar(label="SByte"), offset=0), label="LPArray", offset=0), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), label="LPArray", offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=True, label="Int32"), arg_names=["hProfile", "paColorName", "paIndex", "dwCount"]),
#
'ConvertIndexToColorName': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), label="LPArray", offset=0), SimTypePointer(SimTypePointer(SimTypeChar(label="SByte"), offset=0), label="LPArray", offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=True, label="Int32"), arg_names=["hProfile", "paIndex", "paColorName", "dwCount"]),
#
'CreateDeviceLinkProfile': SimTypeFunction([SimTypePointer(SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), label="LPArray", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), label="LPArray", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypePointer(SimTypeChar(label="Byte"), offset=0), offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=True, label="Int32"), arg_names=["pahProfiles", "nProfiles", "padwIntent", "nIntents", "dwFlags", "pProfileData", "indexPreferredCMM"]),
#
'CreateColorTransformA': SimTypeFunction([SimTypePointer(SimStruct({"lcsSignature": SimTypeInt(signed=False, label="UInt32"), "lcsVersion": SimTypeInt(signed=False, label="UInt32"), "lcsSize": SimTypeInt(signed=False, label="UInt32"), "lcsCSType": SimTypeInt(signed=True, label="Int32"), "lcsIntent": SimTypeInt(signed=True, label="Int32"), "lcsEndpoints": SimStruct({"ciexyzRed": SimStruct({"ciexyzX": SimTypeInt(signed=True, label="Int32"), "ciexyzY": SimTypeInt(signed=True, label="Int32"), "ciexyzZ": SimTypeInt(signed=True, label="Int32")}, name="CIEXYZ", pack=False, align=None), "ciexyzGreen": SimStruct({"ciexyzX": SimTypeInt(signed=True, label="Int32"), "ciexyzY": SimTypeInt(signed=True, label="Int32"), "ciexyzZ": SimTypeInt(signed=True, label="Int32")}, name="CIEXYZ", pack=False, align=None), "ciexyzBlue": SimStruct({"ciexyzX": SimTypeInt(signed=True, label="Int32"), "ciexyzY": SimTypeInt(signed=True, label="Int32"), "ciexyzZ": SimTypeInt(signed=True, label="Int32")}, name="CIEXYZ", pack=False, align=None)}, name="CIEXYZTRIPLE", pack=False, align=None), "lcsGammaRed": SimTypeInt(signed=False, label="UInt32"), "lcsGammaGreen": SimTypeInt(signed=False, label="UInt32"), "lcsGammaBlue": SimTypeInt(signed=False, label="UInt32"), "lcsFilename": SimTypeFixedSizeArray(SimTypeChar(label="Byte"), 260)}, name="LOGCOLORSPACEA", pack=False, align=None), offset=0), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), arg_names=["pLogColorSpace", "hDestProfile", "hTargetProfile", "dwFlags"]),
#
'CreateColorTransformW': SimTypeFunction([SimTypePointer(SimStruct({"lcsSignature": SimTypeInt(signed=False, label="UInt32"), "lcsVersion": SimTypeInt(signed=False, label="UInt32"), "lcsSize": SimTypeInt(signed=False, label="UInt32"), "lcsCSType": SimTypeInt(signed=True, label="Int32"), "lcsIntent": SimTypeInt(signed=True, label="Int32"), "lcsEndpoints": SimStruct({"ciexyzRed": SimStruct({"ciexyzX": SimTypeInt(signed=True, label="Int32"), "ciexyzY": SimTypeInt(signed=True, label="Int32"), "ciexyzZ": SimTypeInt(signed=True, label="Int32")}, name="CIEXYZ", pack=False, align=None), "ciexyzGreen": SimStruct({"ciexyzX": SimTypeInt(signed=True, label="Int32"), "ciexyzY": SimTypeInt(signed=True, label="Int32"), "ciexyzZ": SimTypeInt(signed=True, label="Int32")}, name="CIEXYZ", pack=False, align=None), "ciexyzBlue": SimStruct({"ciexyzX": SimTypeInt(signed=True, label="Int32"), "ciexyzY": SimTypeInt(signed=True, label="Int32"), "ciexyzZ": SimTypeInt(signed=True, label="Int32")}, name="CIEXYZ", pack=False, align=None)}, name="CIEXYZTRIPLE", pack=False, align=None), "lcsGammaRed": SimTypeInt(signed=False, label="UInt32"), "lcsGammaGreen": SimTypeInt(signed=False, label="UInt32"), "lcsGammaBlue": SimTypeInt(signed=False, label="UInt32"), "lcsFilename": SimTypeFixedSizeArray(SimTypeChar(label="Char"), 260)}, name="LOGCOLORSPACEW", pack=False, align=None), offset=0), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), arg_names=["pLogColorSpace", "hDestProfile", "hTargetProfile", "dwFlags"]),
#
'CreateMultiProfileTransform': SimTypeFunction([SimTypePointer(SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), label="LPArray", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), label="LPArray", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="UInt32")], SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), arg_names=["pahProfiles", "nProfiles", "padwIntent", "nIntents", "dwFlags", "indexPreferredCMM"]),
#
'DeleteColorTransform': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["hxform"]),
#
'TranslateBitmapBits': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeBottom(label="Void"), offset=0), SimTypeInt(signed=False, label="BMFORMAT"), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeBottom(label="Void"), offset=0), SimTypeInt(signed=False, label="BMFORMAT"), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeFunction([SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["param0", "param1", "param2"]), offset=0), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["hColorTransform", "pSrcBits", "bmInput", "dwWidth", "dwHeight", "dwInputStride", "pDestBits", "bmOutput", "dwOutputStride", "pfnCallBack", "ulCallbackData"]),
#
'CheckBitmapBits': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeBottom(label="Void"), offset=0), SimTypeInt(signed=False, label="BMFORMAT"), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypePointer(SimTypeFunction([SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["param0", "param1", "param2"]), offset=0), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["hColorTransform", "pSrcBits", "bmInput", "dwWidth", "dwHeight", "dwStride", "paResult", "pfnCallback", "lpCallbackData"]),
#
'TranslateColors': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimUnion({"gray": SimStruct({"gray": SimTypeShort(signed=False, label="UInt16")}, name="GRAYCOLOR", pack=False, align=None), "rgb": SimStruct({"red": SimTypeShort(signed=False, label="UInt16"), "green": SimTypeShort(signed=False, label="UInt16"), "blue": SimTypeShort(signed=False, label="UInt16")}, name="RGBCOLOR", pack=False, align=None), "cmyk": SimStruct({"cyan": SimTypeShort(signed=False, label="UInt16"), "magenta": SimTypeShort(signed=False, label="UInt16"), "yellow": SimTypeShort(signed=False, label="UInt16"), "black": SimTypeShort(signed=False, label="UInt16")}, name="CMYKCOLOR", pack=False, align=None), "XYZ": SimStruct({"X": SimTypeShort(signed=False, label="UInt16"), "Y": SimTypeShort(signed=False, label="UInt16"), "Z": SimTypeShort(signed=False, label="UInt16")}, name="XYZCOLOR", pack=False, align=None), "Yxy": SimStruct({"Y": SimTypeShort(signed=False, label="UInt16"), "x": SimTypeShort(signed=False, label="UInt16"), "y": SimTypeShort(signed=False, label="UInt16")}, name="YxyCOLOR", pack=False, align=None), "Lab": SimStruct({"L": SimTypeShort(signed=False, label="UInt16"), "a": SimTypeShort(signed=False, label="UInt16"), "b": SimTypeShort(signed=False, label="UInt16")}, name="LabCOLOR", pack=False, align=None), "gen3ch": SimStruct({"ch1": SimTypeShort(signed=False, label="UInt16"), "ch2": SimTypeShort(signed=False, label="UInt16"), "ch3": SimTypeShort(signed=False, label="UInt16")}, name="GENERIC3CHANNEL", pack=False, align=None), "named": SimStruct({"dwIndex": SimTypeInt(signed=False, label="UInt32")}, name="NAMEDCOLOR", pack=False, align=None), "hifi": SimStruct({"channel": SimTypeFixedSizeArray(SimTypeChar(label="Byte"), 8)}, name="HiFiCOLOR", pack=False, align=None), "Anonymous": SimStruct({"reserved1": SimTypeInt(signed=False, label="UInt32"), "reserved2": SimTypePointer(SimTypeBottom(label="Void"), offset=0)}, 
name="_Anonymous_e__Struct", pack=False, align=None)}, name="<anon>", label="None"), label="LPArray", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="COLORTYPE"), SimTypePointer(SimUnion({"gray": SimStruct({"gray": SimTypeShort(signed=False, label="UInt16")}, name="GRAYCOLOR", pack=False, align=None), "rgb": SimStruct({"red": SimTypeShort(signed=False, label="UInt16"), "green": SimTypeShort(signed=False, label="UInt16"), "blue": SimTypeShort(signed=False, label="UInt16")}, name="RGBCOLOR", pack=False, align=None), "cmyk": SimStruct({"cyan": SimTypeShort(signed=False, label="UInt16"), "magenta": SimTypeShort(signed=False, label="UInt16"), "yellow": SimTypeShort(signed=False, label="UInt16"), "black": SimTypeShort(signed=False, label="UInt16")}, name="CMYKCOLOR", pack=False, align=None), "XYZ": SimStruct({"X": SimTypeShort(signed=False, label="UInt16"), "Y": SimTypeShort(signed=False, label="UInt16"), "Z": SimTypeShort(signed=False, label="UInt16")}, name="XYZCOLOR", pack=False, align=None), "Yxy": SimStruct({"Y": SimTypeShort(signed=False, label="UInt16"), "x": SimTypeShort(signed=False, label="UInt16"), "y": SimTypeShort(signed=False, label="UInt16")}, name="YxyCOLOR", pack=False, align=None), "Lab": SimStruct({"L": SimTypeShort(signed=False, label="UInt16"), "a": SimTypeShort(signed=False, label="UInt16"), "b": SimTypeShort(signed=False, label="UInt16")}, name="LabCOLOR", pack=False, align=None), "gen3ch": SimStruct({"ch1": SimTypeShort(signed=False, label="UInt16"), "ch2": SimTypeShort(signed=False, label="UInt16"), "ch3": SimTypeShort(signed=False, label="UInt16")}, name="GENERIC3CHANNEL", pack=False, align=None), "named": SimStruct({"dwIndex": SimTypeInt(signed=False, label="UInt32")}, name="NAMEDCOLOR", pack=False, align=None), "hifi": SimStruct({"channel": SimTypeFixedSizeArray(SimTypeChar(label="Byte"), 8)}, name="HiFiCOLOR", pack=False, align=None), "Anonymous": SimStruct({"reserved1": SimTypeInt(signed=False, 
label="UInt32"), "reserved2": SimTypePointer(SimTypeBottom(label="Void"), offset=0)}, name="_Anonymous_e__Struct", pack=False, align=None)}, name="<anon>", label="None"), label="LPArray", offset=0), SimTypeInt(signed=False, label="COLORTYPE")], SimTypeInt(signed=True, label="Int32"), arg_names=["hColorTransform", "paInputColors", "nColors", "ctInput", "paOutputColors", "ctOutput"]),
#
'CheckColors': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimUnion({"gray": SimStruct({"gray": SimTypeShort(signed=False, label="UInt16")}, name="GRAYCOLOR", pack=False, align=None), "rgb": SimStruct({"red": SimTypeShort(signed=False, label="UInt16"), "green": SimTypeShort(signed=False, label="UInt16"), "blue": SimTypeShort(signed=False, label="UInt16")}, name="RGBCOLOR", pack=False, align=None), "cmyk": SimStruct({"cyan": SimTypeShort(signed=False, label="UInt16"), "magenta": SimTypeShort(signed=False, label="UInt16"), "yellow": SimTypeShort(signed=False, label="UInt16"), "black": SimTypeShort(signed=False, label="UInt16")}, name="CMYKCOLOR", pack=False, align=None), "XYZ": SimStruct({"X": SimTypeShort(signed=False, label="UInt16"), "Y": SimTypeShort(signed=False, label="UInt16"), "Z": SimTypeShort(signed=False, label="UInt16")}, name="XYZCOLOR", pack=False, align=None), "Yxy": SimStruct({"Y": SimTypeShort(signed=False, label="UInt16"), "x": SimTypeShort(signed=False, label="UInt16"), "y": SimTypeShort(signed=False, label="UInt16")}, name="YxyCOLOR", pack=False, align=None), "Lab": SimStruct({"L": SimTypeShort(signed=False, label="UInt16"), "a": SimTypeShort(signed=False, label="UInt16"), "b": SimTypeShort(signed=False, label="UInt16")}, name="LabCOLOR", pack=False, align=None), "gen3ch": SimStruct({"ch1": SimTypeShort(signed=False, label="UInt16"), "ch2": SimTypeShort(signed=False, label="UInt16"), "ch3": SimTypeShort(signed=False, label="UInt16")}, name="GENERIC3CHANNEL", pack=False, align=None), "named": SimStruct({"dwIndex": SimTypeInt(signed=False, label="UInt32")}, name="NAMEDCOLOR", pack=False, align=None), "hifi": SimStruct({"channel": SimTypeFixedSizeArray(SimTypeChar(label="Byte"), 8)}, name="HiFiCOLOR", pack=False, align=None), "Anonymous": SimStruct({"reserved1": SimTypeInt(signed=False, label="UInt32"), "reserved2": SimTypePointer(SimTypeBottom(label="Void"), offset=0)}, 
name="_Anonymous_e__Struct", pack=False, align=None)}, name="<anon>", label="None"), label="LPArray", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="COLORTYPE"), SimTypePointer(SimTypeChar(label="Byte"), label="LPArray", offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["hColorTransform", "paInputColors", "nColors", "ctInput", "paResult"]),
#
'GetCMMInfo': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=False, label="UInt32"), arg_names=["hColorTransform", "dwInfo"]),
#
'RegisterCMMA': SimTypeFunction([SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeChar(label="Byte"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pMachineName", "cmmID", "pCMMdll"]),
#
'RegisterCMMW': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeChar(label="Char"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pMachineName", "cmmID", "pCMMdll"]),
#
'UnregisterCMMA': SimTypeFunction([SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=True, label="Int32"), arg_names=["pMachineName", "cmmID"]),
#
'UnregisterCMMW': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=True, label="Int32"), arg_names=["pMachineName", "cmmID"]),
#
'SelectCMM': SimTypeFunction([SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=True, label="Int32"), arg_names=["dwCMMType"]),
#
'GetColorDirectoryA': SimTypeFunction([SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pMachineName", "pBuffer", "pdwSize"]),
#
'GetColorDirectoryW': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pMachineName", "pBuffer", "pdwSize"]),
#
'InstallColorProfileA': SimTypeFunction([SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypePointer(SimTypeChar(label="Byte"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pMachineName", "pProfileName"]),
#
'InstallColorProfileW': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pMachineName", "pProfileName"]),
#
'UninstallColorProfileA': SimTypeFunction([SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypeInt(signed=True, label="Int32")], SimTypeInt(signed=True, label="Int32"), arg_names=["pMachineName", "pProfileName", "bDelete"]),
#
'UninstallColorProfileW': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeInt(signed=True, label="Int32")], SimTypeInt(signed=True, label="Int32"), arg_names=["pMachineName", "pProfileName", "bDelete"]),
#
'EnumColorProfilesA': SimTypeFunction([SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypePointer(SimStruct({"dwSize": SimTypeInt(signed=False, label="UInt32"), "dwVersion": SimTypeInt(signed=False, label="UInt32"), "dwFields": SimTypeInt(signed=False, label="UInt32"), "pDeviceName": SimTypePointer(SimTypeChar(label="Byte"), offset=0), "dwMediaType": SimTypeInt(signed=False, label="UInt32"), "dwDitheringMode": SimTypeInt(signed=False, label="UInt32"), "dwResolution": SimTypeFixedSizeArray(SimTypeInt(signed=False, label="UInt32"), 2), "dwCMMType": SimTypeInt(signed=False, label="UInt32"), "dwClass": SimTypeInt(signed=False, label="UInt32"), "dwDataColorSpace": SimTypeInt(signed=False, label="UInt32"), "dwConnectionSpace": SimTypeInt(signed=False, label="UInt32"), "dwSignature": SimTypeInt(signed=False, label="UInt32"), "dwPlatform": SimTypeInt(signed=False, label="UInt32"), "dwProfileFlags": SimTypeInt(signed=False, label="UInt32"), "dwManufacturer": SimTypeInt(signed=False, label="UInt32"), "dwModel": SimTypeInt(signed=False, label="UInt32"), "dwAttributes": SimTypeFixedSizeArray(SimTypeInt(signed=False, label="UInt32"), 2), "dwRenderingIntent": SimTypeInt(signed=False, label="UInt32"), "dwCreator": SimTypeInt(signed=False, label="UInt32"), "dwDeviceClass": SimTypeInt(signed=False, label="UInt32")}, name="ENUMTYPEA", pack=False, align=None), offset=0), SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pMachineName", "pEnumRecord", "pEnumerationBuffer", "pdwSizeOfEnumerationBuffer", "pnProfiles"]),
#
'EnumColorProfilesW': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimStruct({"dwSize": SimTypeInt(signed=False, label="UInt32"), "dwVersion": SimTypeInt(signed=False, label="UInt32"), "dwFields": SimTypeInt(signed=False, label="UInt32"), "pDeviceName": SimTypePointer(SimTypeChar(label="Char"), offset=0), "dwMediaType": SimTypeInt(signed=False, label="UInt32"), "dwDitheringMode": SimTypeInt(signed=False, label="UInt32"), "dwResolution": SimTypeFixedSizeArray(SimTypeInt(signed=False, label="UInt32"), 2), "dwCMMType": SimTypeInt(signed=False, label="UInt32"), "dwClass": SimTypeInt(signed=False, label="UInt32"), "dwDataColorSpace": SimTypeInt(signed=False, label="UInt32"), "dwConnectionSpace": SimTypeInt(signed=False, label="UInt32"), "dwSignature": SimTypeInt(signed=False, label="UInt32"), "dwPlatform": SimTypeInt(signed=False, label="UInt32"), "dwProfileFlags": SimTypeInt(signed=False, label="UInt32"), "dwManufacturer": SimTypeInt(signed=False, label="UInt32"), "dwModel": SimTypeInt(signed=False, label="UInt32"), "dwAttributes": SimTypeFixedSizeArray(SimTypeInt(signed=False, label="UInt32"), 2), "dwRenderingIntent": SimTypeInt(signed=False, label="UInt32"), "dwCreator": SimTypeInt(signed=False, label="UInt32"), "dwDeviceClass": SimTypeInt(signed=False, label="UInt32")}, name="ENUMTYPEW", pack=False, align=None), offset=0), SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pMachineName", "pEnumRecord", "pEnumerationBuffer", "pdwSizeOfEnumerationBuffer", "pnProfiles"]),
#
'SetStandardColorSpaceProfileA': SimTypeFunction([SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeChar(label="Byte"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pMachineName", "dwProfileID", "pProfilename"]),
#
'SetStandardColorSpaceProfileW': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeChar(label="Char"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pMachineName", "dwProfileID", "pProfileName"]),
#
'GetStandardColorSpaceProfileA': SimTypeFunction([SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pMachineName", "dwSCS", "pBuffer", "pcbSize"]),
#
'GetStandardColorSpaceProfileW': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pMachineName", "dwSCS", "pBuffer", "pcbSize"]),
#
'AssociateColorProfileWithDeviceA': SimTypeFunction([SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypePointer(SimTypeChar(label="Byte"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pMachineName", "pProfileName", "pDeviceName"]),
#
'AssociateColorProfileWithDeviceW': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pMachineName", "pProfileName", "pDeviceName"]),
#
'DisassociateColorProfileFromDeviceA': SimTypeFunction([SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypePointer(SimTypeChar(label="Byte"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pMachineName", "pProfileName", "pDeviceName"]),
#
'DisassociateColorProfileFromDeviceW': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pMachineName", "pProfileName", "pDeviceName"]),
#
'WcsAssociateColorProfileWithDevice': SimTypeFunction([SimTypeInt(signed=False, label="WCS_PROFILE_MANAGEMENT_SCOPE"), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["scope", "pProfileName", "pDeviceName"]),
#
'WcsDisassociateColorProfileFromDevice': SimTypeFunction([SimTypeInt(signed=False, label="WCS_PROFILE_MANAGEMENT_SCOPE"), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["scope", "pProfileName", "pDeviceName"]),
#
'WcsEnumColorProfilesSize': SimTypeFunction([SimTypeInt(signed=False, label="WCS_PROFILE_MANAGEMENT_SCOPE"), SimTypePointer(SimStruct({"dwSize": SimTypeInt(signed=False, label="UInt32"), "dwVersion": SimTypeInt(signed=False, label="UInt32"), "dwFields": SimTypeInt(signed=False, label="UInt32"), "pDeviceName": SimTypePointer(SimTypeChar(label="Char"), offset=0), "dwMediaType": SimTypeInt(signed=False, label="UInt32"), "dwDitheringMode": SimTypeInt(signed=False, label="UInt32"), "dwResolution": SimTypeFixedSizeArray(SimTypeInt(signed=False, label="UInt32"), 2), "dwCMMType": SimTypeInt(signed=False, label="UInt32"), "dwClass": SimTypeInt(signed=False, label="UInt32"), "dwDataColorSpace": SimTypeInt(signed=False, label="UInt32"), "dwConnectionSpace": SimTypeInt(signed=False, label="UInt32"), "dwSignature": SimTypeInt(signed=False, label="UInt32"), "dwPlatform": SimTypeInt(signed=False, label="UInt32"), "dwProfileFlags": SimTypeInt(signed=False, label="UInt32"), "dwManufacturer": SimTypeInt(signed=False, label="UInt32"), "dwModel": SimTypeInt(signed=False, label="UInt32"), "dwAttributes": SimTypeFixedSizeArray(SimTypeInt(signed=False, label="UInt32"), 2), "dwRenderingIntent": SimTypeInt(signed=False, label="UInt32"), "dwCreator": SimTypeInt(signed=False, label="UInt32"), "dwDeviceClass": SimTypeInt(signed=False, label="UInt32")}, name="ENUMTYPEW", pack=False, align=None), offset=0), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["scope", "pEnumRecord", "pdwSize"]),
#
'WcsEnumColorProfiles': SimTypeFunction([SimTypeInt(signed=False, label="WCS_PROFILE_MANAGEMENT_SCOPE"), SimTypePointer(SimStruct({"dwSize": SimTypeInt(signed=False, label="UInt32"), "dwVersion": SimTypeInt(signed=False, label="UInt32"), "dwFields": SimTypeInt(signed=False, label="UInt32"), "pDeviceName": SimTypePointer(SimTypeChar(label="Char"), offset=0), "dwMediaType": SimTypeInt(signed=False, label="UInt32"), "dwDitheringMode": SimTypeInt(signed=False, label="UInt32"), "dwResolution": SimTypeFixedSizeArray(SimTypeInt(signed=False, label="UInt32"), 2), "dwCMMType": SimTypeInt(signed=False, label="UInt32"), "dwClass": SimTypeInt(signed=False, label="UInt32"), "dwDataColorSpace": SimTypeInt(signed=False, label="UInt32"), "dwConnectionSpace": SimTypeInt(signed=False, label="UInt32"), "dwSignature": SimTypeInt(signed=False, label="UInt32"), "dwPlatform": SimTypeInt(signed=False, label="UInt32"), "dwProfileFlags": SimTypeInt(signed=False, label="UInt32"), "dwManufacturer": SimTypeInt(signed=False, label="UInt32"), "dwModel": SimTypeInt(signed=False, label="UInt32"), "dwAttributes": SimTypeFixedSizeArray(SimTypeInt(signed=False, label="UInt32"), 2), "dwRenderingIntent": SimTypeInt(signed=False, label="UInt32"), "dwCreator": SimTypeInt(signed=False, label="UInt32"), "dwDeviceClass": SimTypeInt(signed=False, label="UInt32")}, name="ENUMTYPEW", pack=False, align=None), offset=0), SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["scope", "pEnumRecord", "pBuffer", "dwSize", "pnProfiles"]),
#
'WcsGetDefaultColorProfileSize': SimTypeFunction([SimTypeInt(signed=False, label="WCS_PROFILE_MANAGEMENT_SCOPE"), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeInt(signed=False, label="COLORPROFILETYPE"), SimTypeInt(signed=False, label="COLORPROFILESUBTYPE"), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["scope", "pDeviceName", "cptColorProfileType", "cpstColorProfileSubType", "dwProfileID", "pcbProfileName"]),
#
'WcsGetDefaultColorProfile': SimTypeFunction([SimTypeInt(signed=False, label="WCS_PROFILE_MANAGEMENT_SCOPE"), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeInt(signed=False, label="COLORPROFILETYPE"), SimTypeInt(signed=False, label="COLORPROFILESUBTYPE"), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeChar(label="Char"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["scope", "pDeviceName", "cptColorProfileType", "cpstColorProfileSubType", "dwProfileID", "cbProfileName", "pProfileName"]),
#
'WcsSetDefaultColorProfile': SimTypeFunction([SimTypeInt(signed=False, label="WCS_PROFILE_MANAGEMENT_SCOPE"), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeInt(signed=False, label="COLORPROFILETYPE"), SimTypeInt(signed=False, label="COLORPROFILESUBTYPE"), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeChar(label="Char"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["scope", "pDeviceName", "cptColorProfileType", "cpstColorProfileSubType", "dwProfileID", "pProfileName"]),
#
'WcsSetDefaultRenderingIntent': SimTypeFunction([SimTypeInt(signed=False, label="WCS_PROFILE_MANAGEMENT_SCOPE"), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=True, label="Int32"), arg_names=["scope", "dwRenderingIntent"]),
#
'WcsGetDefaultRenderingIntent': SimTypeFunction([SimTypeInt(signed=False, label="WCS_PROFILE_MANAGEMENT_SCOPE"), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["scope", "pdwRenderingIntent"]),
#
'WcsGetUsePerUserProfiles': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeInt(signed=True, label="Int32"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pDeviceName", "dwDeviceClass", "pUsePerUserProfiles"]),
#
'WcsSetUsePerUserProfiles': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=True, label="Int32")], SimTypeInt(signed=True, label="Int32"), arg_names=["pDeviceName", "dwDeviceClass", "usePerUserProfiles"]),
#
'WcsTranslateColors': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="COLORDATATYPE"), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeBottom(label="Void"), offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="COLORDATATYPE"), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeBottom(label="Void"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["hColorTransform", "nColors", "nInputChannels", "cdtInput", "cbInput", "pInputData", "nOutputChannels", "cdtOutput", "cbOutput", "pOutputData"]),
#
'WcsCheckColors': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="COLORDATATYPE"), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeBottom(label="Void"), offset=0), SimTypePointer(SimTypeChar(label="Byte"), label="LPArray", offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["hColorTransform", "nColors", "nInputChannels", "cdtInput", "cbInput", "pInputData", "paResult"]),
#
'WcsOpenColorProfileA': SimTypeFunction([SimTypePointer(SimStruct({"dwType": SimTypeInt(signed=False, label="UInt32"), "pProfileData": SimTypePointer(SimTypeBottom(label="Void"), offset=0), "cbDataSize": SimTypeInt(signed=False, label="UInt32")}, name="PROFILE", pack=False, align=None), offset=0), SimTypePointer(SimStruct({"dwType": SimTypeInt(signed=False, label="UInt32"), "pProfileData": SimTypePointer(SimTypeBottom(label="Void"), offset=0), "cbDataSize": SimTypeInt(signed=False, label="UInt32")}, name="PROFILE", pack=False, align=None), offset=0), SimTypePointer(SimStruct({"dwType": SimTypeInt(signed=False, label="UInt32"), "pProfileData": SimTypePointer(SimTypeBottom(label="Void"), offset=0), "cbDataSize": SimTypeInt(signed=False, label="UInt32")}, name="PROFILE", pack=False, align=None), offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="UInt32")], SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), arg_names=["pCDMPProfile", "pCAMPProfile", "pGMMPProfile", "dwDesireAccess", "dwShareMode", "dwCreationMode", "dwFlags"]),
#
'WcsOpenColorProfileW': SimTypeFunction([SimTypePointer(SimStruct({"dwType": SimTypeInt(signed=False, label="UInt32"), "pProfileData": SimTypePointer(SimTypeBottom(label="Void"), offset=0), "cbDataSize": SimTypeInt(signed=False, label="UInt32")}, name="PROFILE", pack=False, align=None), offset=0), SimTypePointer(SimStruct({"dwType": SimTypeInt(signed=False, label="UInt32"), "pProfileData": SimTypePointer(SimTypeBottom(label="Void"), offset=0), "cbDataSize": SimTypeInt(signed=False, label="UInt32")}, name="PROFILE", pack=False, align=None), offset=0), SimTypePointer(SimStruct({"dwType": SimTypeInt(signed=False, label="UInt32"), "pProfileData": SimTypePointer(SimTypeBottom(label="Void"), offset=0), "cbDataSize": SimTypeInt(signed=False, label="UInt32")}, name="PROFILE", pack=False, align=None), offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="UInt32")], SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), arg_names=["pCDMPProfile", "pCAMPProfile", "pGMMPProfile", "dwDesireAccess", "dwShareMode", "dwCreationMode", "dwFlags"]),
#
'WcsCreateIccProfile': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), arg_names=["hWcsProfile", "dwOptions"]),
#
'WcsGetCalibrationManagementState': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int32"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pbIsEnabled"]),
#
'WcsSetCalibrationManagementState': SimTypeFunction([SimTypeInt(signed=True, label="Int32")], SimTypeInt(signed=True, label="Int32"), arg_names=["bIsEnabled"]),
#
'ColorAdapterGetSystemModifyWhitePointCaps': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int32"), offset=0), SimTypePointer(SimTypeInt(signed=True, label="Int32"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["whitePointAdjCapable", "isColorOverrideActive"]),
#
'ColorAdapterUpdateDisplayGamma': SimTypeFunction([SimStruct({"targetAdapterID": SimStruct({"LowPart": SimTypeInt(signed=False, label="UInt32"), "HighPart": SimTypeInt(signed=True, label="Int32")}, name="LUID", pack=False, align=None), "sourceInfoID": SimTypeInt(signed=False, label="UInt32")}, name="DisplayID", pack=False, align=None), SimTypePointer(SimStruct({"red": SimTypeFixedSizeArray(SimTypeShort(signed=False, label="UInt16"), 256), "green": SimTypeFixedSizeArray(SimTypeShort(signed=False, label="UInt16"), 256), "blue": SimTypeFixedSizeArray(SimTypeShort(signed=False, label="UInt16"), 256)}, name="DisplayTransformLut", pack=False, align=None), offset=0), SimTypeInt(signed=True, label="Int32")], SimTypeInt(signed=True, label="Int32"), arg_names=["displayID", "displayTransform", "internal"]),
#
'ColorAdapterUpdateDeviceProfile': SimTypeFunction([SimStruct({"targetAdapterID": SimStruct({"LowPart": SimTypeInt(signed=False, label="UInt32"), "HighPart": SimTypeInt(signed=True, label="Int32")}, name="LUID", pack=False, align=None), "sourceInfoID": SimTypeInt(signed=False, label="UInt32")}, name="DisplayID", pack=False, align=None), SimTypePointer(SimTypeChar(label="Char"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["displayID", "profName"]),
#
'ColorAdapterGetDisplayCurrentStateID': SimTypeFunction([SimStruct({"targetAdapterID": SimStruct({"LowPart": SimTypeInt(signed=False, label="UInt32"), "HighPart": SimTypeInt(signed=True, label="Int32")}, name="LUID", pack=False, align=None), "sourceInfoID": SimTypeInt(signed=False, label="UInt32")}, name="DisplayID", pack=False, align=None), SimTypePointer(SimStruct({"profileID": SimTypeInt(signed=False, label="UInt32"), "transformID": SimTypeInt(signed=False, label="UInt32"), "whitepointID": SimTypeInt(signed=False, label="UInt32")}, name="DisplayStateID", pack=False, align=None), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["displayID", "displayStateID"]),
#
'ColorAdapterGetDisplayTransformData': SimTypeFunction([SimStruct({"targetAdapterID": SimStruct({"LowPart": SimTypeInt(signed=False, label="UInt32"), "HighPart": SimTypeInt(signed=True, label="Int32")}, name="LUID", pack=False, align=None), "sourceInfoID": SimTypeInt(signed=False, label="UInt32")}, name="DisplayID", pack=False, align=None), SimTypePointer(SimStruct({"red": SimTypeFixedSizeArray(SimTypeShort(signed=False, label="UInt16"), 256), "green": SimTypeFixedSizeArray(SimTypeShort(signed=False, label="UInt16"), 256), "blue": SimTypeFixedSizeArray(SimTypeShort(signed=False, label="UInt16"), 256)}, name="DisplayTransformLut", pack=False, align=None), offset=0), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["displayID", "displayTransformLut", "transformID"]),
#
'ColorAdapterGetDisplayTargetWhitePoint': SimTypeFunction([SimStruct({"targetAdapterID": SimStruct({"LowPart": SimTypeInt(signed=False, label="UInt32"), "HighPart": SimTypeInt(signed=True, label="Int32")}, name="LUID", pack=False, align=None), "sourceInfoID": SimTypeInt(signed=False, label="UInt32")}, name="DisplayID", pack=False, align=None), SimTypePointer(SimStruct({"type": SimTypeInt(signed=True, label="Int32"), "Anonymous": SimUnion({"xyY": SimStruct({"x": SimTypeFloat(size=32), "y": SimTypeFloat(size=32), "Y": SimTypeFloat(size=32)}, name="XYYPoint", pack=False, align=None), "CCT": SimTypeFloat(size=32)}, name="<anon>", label="None")}, name="WhitePoint", pack=False, align=None), offset=0), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["displayID", "wtpt", "transitionTime", "whitepointID"]),
#
'ColorAdapterGetDisplayProfile': SimTypeFunction([SimStruct({"targetAdapterID": SimStruct({"LowPart": SimTypeInt(signed=False, label="UInt32"), "HighPart": SimTypeInt(signed=True, label="Int32")}, name="LUID", pack=False, align=None), "sourceInfoID": SimTypeInt(signed=False, label="UInt32")}, name="DisplayID", pack=False, align=None), SimTypePointer(SimTypeChar(label="Char"), label="LPArray", offset=0), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0), SimTypePointer(SimTypeInt(signed=True, label="Int32"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["displayID", "displayProfile", "profileID", "bUseAccurate"]),
#
'ColorAdapterGetCurrentProfileCalibration': SimTypeFunction([SimStruct({"targetAdapterID": SimStruct({"LowPart": SimTypeInt(signed=False, label="UInt32"), "HighPart": SimTypeInt(signed=True, label="Int32")}, name="LUID", pack=False, align=None), "sourceInfoID": SimTypeInt(signed=False, label="UInt32")}, name="DisplayID", pack=False, align=None), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0), SimTypePointer(SimTypeChar(label="Byte"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["displayID", "maxCalibrationBlobSize", "blobSize", "calibrationBlob"]),
#
'ColorAdapterRegisterOEMColorService': SimTypeFunction([SimTypePointer(SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["registration"]),
#
'ColorAdapterUnregisterOEMColorService': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["registration"]),
#
'ColorProfileAddDisplayAssociation': SimTypeFunction([SimTypeInt(signed=False, label="WCS_PROFILE_MANAGEMENT_SCOPE"), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimStruct({"LowPart": SimTypeInt(signed=False, label="UInt32"), "HighPart": SimTypeInt(signed=True, label="Int32")}, name="LUID", pack=False, align=None), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=True, label="Int32"), SimTypeInt(signed=True, label="Int32")], SimTypeInt(signed=True, label="Int32"), arg_names=["scope", "profileName", "targetAdapterID", "sourceID", "setAsDefault", "associateAsAdvancedColor"]),
#
'ColorProfileRemoveDisplayAssociation': SimTypeFunction([SimTypeInt(signed=False, label="WCS_PROFILE_MANAGEMENT_SCOPE"), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimStruct({"LowPart": SimTypeInt(signed=False, label="UInt32"), "HighPart": SimTypeInt(signed=True, label="Int32")}, name="LUID", pack=False, align=None), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=True, label="Int32")], SimTypeInt(signed=True, label="Int32"), arg_names=["scope", "profileName", "targetAdapterID", "sourceID", "dissociateAdvancedColor"]),
#
'ColorProfileSetDisplayDefaultAssociation': SimTypeFunction([SimTypeInt(signed=False, label="WCS_PROFILE_MANAGEMENT_SCOPE"), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeInt(signed=False, label="COLORPROFILETYPE"), SimTypeInt(signed=False, label="COLORPROFILESUBTYPE"), SimStruct({"LowPart": SimTypeInt(signed=False, label="UInt32"), "HighPart": SimTypeInt(signed=True, label="Int32")}, name="LUID", pack=False, align=None), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=True, label="Int32"), arg_names=["scope", "profileName", "profileType", "profileSubType", "targetAdapterID", "sourceID"]),
#
'ColorProfileGetDisplayList': SimTypeFunction([SimTypeInt(signed=False, label="WCS_PROFILE_MANAGEMENT_SCOPE"), SimStruct({"LowPart": SimTypeInt(signed=False, label="UInt32"), "HighPart": SimTypeInt(signed=True, label="Int32")}, name="LUID", pack=False, align=None), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypePointer(SimTypePointer(SimTypeChar(label="Char"), offset=0), offset=0), offset=0), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["scope", "targetAdapterID", "sourceID", "profileList", "profileCount"]),
#
'ColorProfileGetDisplayDefault': SimTypeFunction([SimTypeInt(signed=False, label="WCS_PROFILE_MANAGEMENT_SCOPE"), SimStruct({"LowPart": SimTypeInt(signed=False, label="UInt32"), "HighPart": SimTypeInt(signed=True, label="Int32")}, name="LUID", pack=False, align=None), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="COLORPROFILETYPE"), SimTypeInt(signed=False, label="COLORPROFILESUBTYPE"), SimTypePointer(SimTypePointer(SimTypeChar(label="Char"), offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["scope", "targetAdapterID", "sourceID", "profileType", "profileSubType", "profileName"]),
#
'ColorProfileGetDisplayUserScope': SimTypeFunction([SimStruct({"LowPart": SimTypeInt(signed=False, label="UInt32"), "HighPart": SimTypeInt(signed=True, label="Int32")}, name="LUID", pack=False, align=None), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeInt(signed=False, label="WCS_PROFILE_MANAGEMENT_SCOPE"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["targetAdapterID", "sourceID", "scope"]),
}
lib.set_prototypes(prototypes)
# File: app/utils/gym-battlesnake/gym_battlesnake/envs/__init__.py
# Repo: miroesli/psscscs (MIT)
from gym_battlesnake.envs.amz_env import BattlesnakeGym
from gym_battlesnake.envs.bs_env import BsEnv
from gym_battlesnake.envs.bs_other_env import BsOtherEnv
# -*- coding: utf-8 -*-
# File: dingtalk/python/alibabacloud_dingtalk/robot_1_0/client.py
# Repo: yndu13/dingtalk-sdk (Apache-2.0)
# This file is auto-generated, don't edit it. Thanks.
from Tea.core import TeaCore
from alibabacloud_tea_openapi.client import Client as OpenApiClient
from alibabacloud_tea_openapi import models as open_api_models
from alibabacloud_tea_util.client import Client as UtilClient
from alibabacloud_dingtalk.robot_1_0 import models as dingtalkrobot__1__0_models
from alibabacloud_tea_util import models as util_models
from alibabacloud_openapi_util.client import Client as OpenApiUtilClient
class Client(OpenApiClient):
"""
*\
"""
def __init__(
self,
config: open_api_models.Config,
):
super().__init__(config)
self._endpoint_rule = ''
if UtilClient.empty(self._endpoint):
self._endpoint = 'api.dingtalk.com'
def batch_send_oto(
self,
request: dingtalkrobot__1__0_models.BatchSendOTORequest,
) -> dingtalkrobot__1__0_models.BatchSendOTOResponse:
runtime = util_models.RuntimeOptions()
headers = dingtalkrobot__1__0_models.BatchSendOTOHeaders()
return self.batch_send_otowith_options(request, headers, runtime)
async def batch_send_oto_async(
self,
request: dingtalkrobot__1__0_models.BatchSendOTORequest,
) -> dingtalkrobot__1__0_models.BatchSendOTOResponse:
runtime = util_models.RuntimeOptions()
headers = dingtalkrobot__1__0_models.BatchSendOTOHeaders()
return await self.batch_send_otowith_options_async(request, headers, runtime)
def batch_send_otowith_options(
self,
request: dingtalkrobot__1__0_models.BatchSendOTORequest,
headers: dingtalkrobot__1__0_models.BatchSendOTOHeaders,
runtime: util_models.RuntimeOptions,
) -> dingtalkrobot__1__0_models.BatchSendOTOResponse:
UtilClient.validate_model(request)
body = {}
if not UtilClient.is_unset(request.robot_code):
body['robotCode'] = request.robot_code
if not UtilClient.is_unset(request.user_ids):
body['userIds'] = request.user_ids
if not UtilClient.is_unset(request.msg_key):
body['msgKey'] = request.msg_key
if not UtilClient.is_unset(request.msg_param):
body['msgParam'] = request.msg_param
real_headers = {}
if not UtilClient.is_unset(headers.common_headers):
real_headers = headers.common_headers
if not UtilClient.is_unset(headers.x_acs_dingtalk_access_token):
real_headers['x-acs-dingtalk-access-token'] = headers.x_acs_dingtalk_access_token
req = open_api_models.OpenApiRequest(
headers=real_headers,
body=OpenApiUtilClient.parse_to_map(body)
)
return TeaCore.from_map(
dingtalkrobot__1__0_models.BatchSendOTOResponse(),
self.do_roarequest('BatchSendOTO', 'robot_1.0', 'HTTP', 'POST', 'AK', f'/v1.0/robot/oToMessages/batchSend', 'json', req, runtime)
)
async def batch_send_otowith_options_async(
self,
request: dingtalkrobot__1__0_models.BatchSendOTORequest,
headers: dingtalkrobot__1__0_models.BatchSendOTOHeaders,
runtime: util_models.RuntimeOptions,
) -> dingtalkrobot__1__0_models.BatchSendOTOResponse:
UtilClient.validate_model(request)
body = {}
if not UtilClient.is_unset(request.robot_code):
body['robotCode'] = request.robot_code
if not UtilClient.is_unset(request.user_ids):
body['userIds'] = request.user_ids
if not UtilClient.is_unset(request.msg_key):
body['msgKey'] = request.msg_key
if not UtilClient.is_unset(request.msg_param):
body['msgParam'] = request.msg_param
real_headers = {}
if not UtilClient.is_unset(headers.common_headers):
real_headers = headers.common_headers
if not UtilClient.is_unset(headers.x_acs_dingtalk_access_token):
real_headers['x-acs-dingtalk-access-token'] = headers.x_acs_dingtalk_access_token
req = open_api_models.OpenApiRequest(
headers=real_headers,
body=OpenApiUtilClient.parse_to_map(body)
)
return TeaCore.from_map(
dingtalkrobot__1__0_models.BatchSendOTOResponse(),
await self.do_roarequest_async('BatchSendOTO', 'robot_1.0', 'HTTP', 'POST', 'AK', f'/v1.0/robot/oToMessages/batchSend', 'json', req, runtime)
)
def batch_otoquery(
self,
request: dingtalkrobot__1__0_models.BatchOTOQueryRequest,
) -> dingtalkrobot__1__0_models.BatchOTOQueryResponse:
runtime = util_models.RuntimeOptions()
headers = dingtalkrobot__1__0_models.BatchOTOQueryHeaders()
return self.batch_otoquery_with_options(request, headers, runtime)
async def batch_otoquery_async(
self,
request: dingtalkrobot__1__0_models.BatchOTOQueryRequest,
) -> dingtalkrobot__1__0_models.BatchOTOQueryResponse:
runtime = util_models.RuntimeOptions()
headers = dingtalkrobot__1__0_models.BatchOTOQueryHeaders()
return await self.batch_otoquery_with_options_async(request, headers, runtime)
def batch_otoquery_with_options(
self,
request: dingtalkrobot__1__0_models.BatchOTOQueryRequest,
headers: dingtalkrobot__1__0_models.BatchOTOQueryHeaders,
runtime: util_models.RuntimeOptions,
) -> dingtalkrobot__1__0_models.BatchOTOQueryResponse:
UtilClient.validate_model(request)
query = {}
if not UtilClient.is_unset(request.robot_code):
query['robotCode'] = request.robot_code
if not UtilClient.is_unset(request.process_query_key):
query['processQueryKey'] = request.process_query_key
real_headers = {}
if not UtilClient.is_unset(headers.common_headers):
real_headers = headers.common_headers
if not UtilClient.is_unset(headers.x_acs_dingtalk_access_token):
real_headers['x-acs-dingtalk-access-token'] = headers.x_acs_dingtalk_access_token
req = open_api_models.OpenApiRequest(
headers=real_headers,
query=OpenApiUtilClient.query(query)
)
return TeaCore.from_map(
dingtalkrobot__1__0_models.BatchOTOQueryResponse(),
self.do_roarequest('BatchOTOQuery', 'robot_1.0', 'HTTP', 'GET', 'AK', f'/v1.0/robot/oToMessages/readStatus', 'json', req, runtime)
)
async def batch_otoquery_with_options_async(
self,
request: dingtalkrobot__1__0_models.BatchOTOQueryRequest,
headers: dingtalkrobot__1__0_models.BatchOTOQueryHeaders,
runtime: util_models.RuntimeOptions,
) -> dingtalkrobot__1__0_models.BatchOTOQueryResponse:
UtilClient.validate_model(request)
query = {}
if not UtilClient.is_unset(request.robot_code):
query['robotCode'] = request.robot_code
if not UtilClient.is_unset(request.process_query_key):
query['processQueryKey'] = request.process_query_key
real_headers = {}
if not UtilClient.is_unset(headers.common_headers):
real_headers = headers.common_headers
if not UtilClient.is_unset(headers.x_acs_dingtalk_access_token):
real_headers['x-acs-dingtalk-access-token'] = headers.x_acs_dingtalk_access_token
req = open_api_models.OpenApiRequest(
headers=real_headers,
query=OpenApiUtilClient.query(query)
)
return TeaCore.from_map(
dingtalkrobot__1__0_models.BatchOTOQueryResponse(),
await self.do_roarequest_async('BatchOTOQuery', 'robot_1.0', 'HTTP', 'GET', 'AK', f'/v1.0/robot/oToMessages/readStatus', 'json', req, runtime)
)
# File: layout/power-stage/gate_drive_layout_gen.py
# Repo: westonb/open-pmic (Apache-2.0)
from math import floor
PITCH = 0.79
def via_1_2(x, y, fout):
cmd_str = "box %gum %gum %gum %gum\n" % (x, y, x, y)
fout.write(cmd_str)
#expand to via width
cmd_str = "box grow center %gum\n" % (0.26/2)
fout.write(cmd_str)
cmd_str = "paint m2contact\n"
fout.write(cmd_str)
    #metal1 vertical extension (enclosure beyond the via)
cmd_str = "box grow n %gum\n" % (0.03)
fout.write(cmd_str)
cmd_str = "box grow s %gum\n" % (0.03)
fout.write(cmd_str)
cmd_str = "paint metal1\n"
fout.write(cmd_str)
    #shrink back to the via body before painting the metal2 cover
cmd_str = "box shrink n %gum\n" % (0.03)
fout.write(cmd_str)
cmd_str = "box shrink s %gum\n" % (0.03)
fout.write(cmd_str)
    #metal2 horizontal extension
cmd_str = "box grow e %gum\n" % (0.03)
fout.write(cmd_str)
cmd_str = "box grow w %gum\n" % (0.03)
fout.write(cmd_str)
cmd_str = "paint metal2\n"
fout.write(cmd_str)
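Because the via helpers only write Magic commands to whatever stream they are handed, they can be sanity-checked with an in-memory buffer. The sketch below is a standalone copy (named `demo_via_1_2` so it doesn't shadow the real helper; the dimensions mirror the ones used above):

```python
import io

def demo_via_1_2(x, y, fout):
    # drop an m2contact (via1) at (x, y) with minimal metal1/metal2 enclosure
    fout.write("box %gum %gum %gum %gum\n" % (x, y, x, y))
    fout.write("box grow center %gum\n" % (0.26 / 2))
    fout.write("paint m2contact\n")
    for side in ("n", "s"):
        fout.write("box grow %s %gum\n" % (side, 0.03))
    fout.write("paint metal1\n")
    for side in ("n", "s"):
        fout.write("box shrink %s %gum\n" % (side, 0.03))
    for side in ("e", "w"):
        fout.write("box grow %s %gum\n" % (side, 0.03))
    fout.write("paint metal2\n")

buf = io.StringIO()
demo_via_1_2(0.0, -4.0, buf)
cmds = buf.getvalue().splitlines()
print(cmds[0])  # -> box 0um -4um 0um -4um
```

Inspecting the captured command list this way makes it easy to confirm the enclosure sequence before loading the generated tcl into Magic.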
def via_locali(x, y, fout):
#base via
cmd_str = "box %gum %gum %gum %gum\n" % (x, y, x, y)
fout.write(cmd_str)
cmd_str = "box grow center %gum\n" % (0.17/2)
fout.write(cmd_str)
cmd_str = "paint viali\n"
fout.write(cmd_str)
#metal cover
cmd_str = "box grow center %gum\n" % (0.03)
fout.write(cmd_str)
cmd_str = "box grow n %gum\n" % (0.03)
fout.write(cmd_str)
cmd_str = "box grow s %gum\n" % (0.03)
fout.write(cmd_str)
cmd_str = "paint metal1\n"
fout.write(cmd_str)
fin = open("helper_functions.tcl", "r")
fout = open("gate_drive_build.tcl", "w")
for line in fin:
fout.write(line)
#measured distances of generated FETS
PMOS_GATE_VIA_OFFSET = 15.64/2
NMOS_GATE_VIA_OFFSET = 7.55/2
MOS_PITCH = 0.79
PMOS_DRAIN_OFFSET = 15/2
NMOS_DRAIN_OFFSET = 7/2
NUM_FINGERS = 88
MOS_CENTER_X = 0
PMOS_CENTER_Y = 15.64/2 + 2
NMOS_CENTER_Y = -7.55/2 - 2
SOURCE_EXTENTION = 1.5
DRAIN_EXTENTION = 0.25
DRAIN_EXTENTION_M2 = 1
GATE_EXTENTION = 3
DIFF_METAL_WIDTH = 0.23
M1_GATE_WIDTH = 0.23
cmd_str = "load gate_drive\n"
fout.write(cmd_str)
#place fets
cmd_str = "place_nmos %g %g %g %g %g %g \n" % (MOS_CENTER_X, NMOS_CENTER_Y, 7, 0.5, NUM_FINGERS, 1)
fout.write(cmd_str)
cmd_str = "place_pmos %g %g %g %g %g %g \n" % (MOS_CENTER_X, PMOS_CENTER_Y, 15, 0.5, NUM_FINGERS, 2)
fout.write(cmd_str)
#place smaller fets
cmd_str = "place_nmos %g %g %g %g %g %g \n" % (MOS_CENTER_X - PITCH*(NUM_FINGERS/2) - 3, NMOS_CENTER_Y + (7-2)/2, 2, 0.5, 4, 3)
fout.write(cmd_str)
cmd_str = "place_pmos %g %g %g %g %g %g \n" % (MOS_CENTER_X - PITCH*(NUM_FINGERS/2) - 3, PMOS_CENTER_Y - (15-4)/2, 4, 0.5, 4, 4)
fout.write(cmd_str)
#extend finger connections
#even fingers are sources: extend the PMOS contact up and the NMOS contact down
#odd fingers are drains: extend both toward the center row and via up to metal2
#this works for an even number of fingers
#source/drain contacts
for index in range(NUM_FINGERS+1):
    drain_center_x = (index-(NUM_FINGERS/2)) * MOS_PITCH
    if (index % 2 == 0):
        #source contact, extend PMOS on top, NMOS on bottom
        x1 = drain_center_x - DIFF_METAL_WIDTH/2
        x2 = drain_center_x + DIFF_METAL_WIDTH/2
        y1 = PMOS_CENTER_Y + PMOS_DRAIN_OFFSET
        y2 = PMOS_CENTER_Y + PMOS_DRAIN_OFFSET + SOURCE_EXTENTION
        cmd_str = "box %gum %gum %gum %gum\n" % (x1, y1, x2, y2)
        fout.write(cmd_str)
        cmd_str = "paint metal1\n"
        fout.write(cmd_str)
        y1 = NMOS_CENTER_Y - NMOS_DRAIN_OFFSET - SOURCE_EXTENTION
        y2 = NMOS_CENTER_Y - NMOS_DRAIN_OFFSET
        cmd_str = "box %gum %gum %gum %gum\n" % (x1, y1, x2, y2)
        fout.write(cmd_str)
        cmd_str = "paint metal1\n"
        fout.write(cmd_str)
    else:
        #drain contact, extend PMOS on bottom, NMOS on top
        x1 = drain_center_x - DIFF_METAL_WIDTH/2
        x2 = drain_center_x + DIFF_METAL_WIDTH/2
        y1 = PMOS_CENTER_Y - PMOS_DRAIN_OFFSET - DRAIN_EXTENTION
        y2 = PMOS_CENTER_Y - PMOS_DRAIN_OFFSET
        cmd_str = "box %gum %gum %gum %gum\n" % (x1, y1, x2, y2)
        fout.write(cmd_str)
        cmd_str = "paint metal1\n"
        fout.write(cmd_str)
        via_1_2(drain_center_x, y1, fout)
        y2 = PMOS_CENTER_Y - PMOS_DRAIN_OFFSET - DRAIN_EXTENTION
        y1 = PMOS_CENTER_Y - PMOS_DRAIN_OFFSET - DRAIN_EXTENTION - DRAIN_EXTENTION_M2
        cmd_str = "box %gum %gum %gum %gum\n" % (x1, y1, x2, y2)
        fout.write(cmd_str)
        cmd_str = "paint metal2\n"
        fout.write(cmd_str)
        y1 = NMOS_CENTER_Y + NMOS_DRAIN_OFFSET
        y2 = NMOS_CENTER_Y + NMOS_DRAIN_OFFSET + DRAIN_EXTENTION
        cmd_str = "box %gum %gum %gum %gum\n" % (x1, y1, x2, y2)
        fout.write(cmd_str)
        cmd_str = "paint metal1\n"
        fout.write(cmd_str)
        via_1_2(drain_center_x, y2, fout)
        y1 = NMOS_CENTER_Y + NMOS_DRAIN_OFFSET + DRAIN_EXTENTION
        y2 = NMOS_CENTER_Y + NMOS_DRAIN_OFFSET + DRAIN_EXTENTION + DRAIN_EXTENTION_M2
        cmd_str = "box %gum %gum %gum %gum\n" % (x1, y1, x2, y2)
        fout.write(cmd_str)
        cmd_str = "paint metal2\n"
        fout.write(cmd_str)
#gate contacts
for index in range(NUM_FINGERS):
    gate_center_x = index * MOS_PITCH - (NUM_FINGERS/2-1+0.5)*MOS_PITCH
    #draw vias on PMOS top and bottom, connect across gate with M1, extend gate contact on M2 towards center
    via_locali(gate_center_x, PMOS_CENTER_Y-PMOS_GATE_VIA_OFFSET, fout)
    via_locali(gate_center_x, PMOS_CENTER_Y+PMOS_GATE_VIA_OFFSET, fout)
    via_locali(gate_center_x, NMOS_CENTER_Y-NMOS_GATE_VIA_OFFSET, fout)
    via_locali(gate_center_x, NMOS_CENTER_Y+NMOS_GATE_VIA_OFFSET, fout)
    #gate runners connecting top and bottom contact, extending down.
    x1 = gate_center_x - M1_GATE_WIDTH/2
    x2 = gate_center_x + M1_GATE_WIDTH/2
    y1 = PMOS_CENTER_Y-PMOS_GATE_VIA_OFFSET - GATE_EXTENTION
    y2 = PMOS_CENTER_Y+PMOS_GATE_VIA_OFFSET
    cmd_str = "box %gum %gum %gum %gum\n" % (x1, y1, x2, y2)
    fout.write(cmd_str)
    cmd_str = "paint metal1\n"
    fout.write(cmd_str)
    y1 = NMOS_CENTER_Y-NMOS_GATE_VIA_OFFSET
    y2 = NMOS_CENTER_Y+NMOS_GATE_VIA_OFFSET + GATE_EXTENTION
    cmd_str = "box %gum %gum %gum %gum\n" % (x1, y1, x2, y2)
    fout.write(cmd_str)
    cmd_str = "paint metal1\n"
    fout.write(cmd_str)
#nwell
x1 = -42.5
x2 = 38
y1 = 0
y2 = 20
cmd_str = "box %gum %gum %gum %gum\n" % (x1, y1, x2, y2)
fout.write(cmd_str)
cmd_str = "paint nwell\n"
fout.write(cmd_str)
#pwell
x1 = -42.5
x2 = 38
y1 = -12
y2 = 0
cmd_str = "box %gum %gum %gum %gum\n" % (x1, y1, x2, y2)
fout.write(cmd_str)
cmd_str = "paint pwell\n"
fout.write(cmd_str)
#place well contacts
#left side pwell
x1 = -42
x2 = -41.5
y1 = -11.5
y2 = -0.5
cmd_str = "box %gum %gum %gum %gum\n" % (x1, y1, x2, y2)
fout.write(cmd_str)
cmd_str = "paint mvpsubdiff\n"
fout.write(cmd_str)
cmd_str = "paint metal1\n"
fout.write(cmd_str)
cmd_str = "paint locali\n"
fout.write(cmd_str)
cmd_str = "box shrink center %gum\n" % (0.12)
fout.write(cmd_str)
cmd_str = "paint mvpsubdiffcont\n"
fout.write(cmd_str)
cmd_str = "paint viali\n"
fout.write(cmd_str)
#right side pwell
x1 = 37
x2 = 37.5
y1 = -11.5
y2 = -0.5
cmd_str = "box %gum %gum %gum %gum\n" % (x1, y1, x2, y2)
fout.write(cmd_str)
cmd_str = "paint mvpsubdiff\n"
fout.write(cmd_str)
cmd_str = "paint metal1\n"
fout.write(cmd_str)
cmd_str = "paint locali\n"
fout.write(cmd_str)
cmd_str = "box shrink center %gum\n" % (0.12)
fout.write(cmd_str)
cmd_str = "paint mvpsubdiffcont\n"
fout.write(cmd_str)
cmd_str = "paint viali\n"
fout.write(cmd_str)
#top side pwell
x1 = -42
x2 = 37.5
y1 = -1
y2 = -0.5
cmd_str = "box %gum %gum %gum %gum\n" % (x1, y1, x2, y2)
fout.write(cmd_str)
cmd_str = "paint mvpsubdiff\n"
fout.write(cmd_str)
cmd_str = "paint locali\n"
fout.write(cmd_str)
cmd_str = "box shrink center %gum\n" % (0.12)
fout.write(cmd_str)
cmd_str = "paint mvpsubdiffcont\n"
fout.write(cmd_str)
#bottom side pwell
x1 = -42
x2 = 37.5
y1 = -11.5
y2 = -11
cmd_str = "box %gum %gum %gum %gum\n" % (x1, y1, x2, y2)
fout.write(cmd_str)
cmd_str = "paint mvpsubdiff\n"
fout.write(cmd_str)
cmd_str = "paint metal1\n"
fout.write(cmd_str)
cmd_str = "paint locali\n"
fout.write(cmd_str)
cmd_str = "box shrink center %gum\n" % (0.12)
fout.write(cmd_str)
cmd_str = "paint mvpsubdiffcont\n"
fout.write(cmd_str)
cmd_str = "paint viali\n"
fout.write(cmd_str)
#top side nwell
x1 = -42
x2 = 37.5
y1 = 19
y2 = 19.5
cmd_str = "box %gum %gum %gum %gum\n" % (x1, y1, x2, y2)
fout.write(cmd_str)
cmd_str = "paint mvnsubdiff\n"
fout.write(cmd_str)
cmd_str = "paint metal1\n"
fout.write(cmd_str)
cmd_str = "paint locali\n"
fout.write(cmd_str)
cmd_str = "box shrink center %gum\n" % (0.12)
fout.write(cmd_str)
cmd_str = "paint mvnsubdiffcont\n"
fout.write(cmd_str)
cmd_str = "paint viali\n"
fout.write(cmd_str)
#bottom side nwell
x1 = -42
x2 = 37.5
y1 = 0.5
y2 = 1
cmd_str = "box %gum %gum %gum %gum\n" % (x1, y1, x2, y2)
fout.write(cmd_str)
cmd_str = "paint mvnsubdiff\n"
fout.write(cmd_str)
cmd_str = "paint locali\n"
fout.write(cmd_str)
cmd_str = "box shrink center %gum\n" % (0.12)
fout.write(cmd_str)
cmd_str = "paint mvnsubdiffcont\n"
fout.write(cmd_str)
#left nwell
x1 = -42
x2 = -41.5
y1 = 0.5
y2 = 19.5
cmd_str = "box %gum %gum %gum %gum\n" % (x1, y1, x2, y2)
fout.write(cmd_str)
cmd_str = "paint mvnsubdiff\n"
fout.write(cmd_str)
cmd_str = "paint metal1\n"
fout.write(cmd_str)
cmd_str = "paint locali\n"
fout.write(cmd_str)
cmd_str = "box shrink center %gum\n" % (0.12)
fout.write(cmd_str)
cmd_str = "paint mvnsubdiffcont\n"
fout.write(cmd_str)
cmd_str = "paint viali\n"
fout.write(cmd_str)
#right nwell
x1 = 37
x2 = 37.5
y1 = 0.5
y2 = 19.5
cmd_str = "box %gum %gum %gum %gum\n" % (x1, y1, x2, y2)
fout.write(cmd_str)
cmd_str = "paint mvnsubdiff\n"
fout.write(cmd_str)
cmd_str = "paint metal1\n"
fout.write(cmd_str)
cmd_str = "paint locali\n"
fout.write(cmd_str)
cmd_str = "box shrink center %gum\n" % (0.12)
fout.write(cmd_str)
cmd_str = "paint mvnsubdiffcont\n"
fout.write(cmd_str)
cmd_str = "paint viali\n"
fout.write(cmd_str)
#power rails
x1 = -42.5
x2 = 38
y1 = 18
y2 = 20
cmd_str = "box %gum %gum %gum %gum\n" % (x1, y1, x2, y2)
fout.write(cmd_str)
cmd_str = "paint m1\n"
fout.write(cmd_str)
#power rails
x1 = -42.5
x2 = 38
y1 = -12
y2 = -10
cmd_str = "box %gum %gum %gum %gum\n" % (x1, y1, x2, y2)
fout.write(cmd_str)
cmd_str = "paint m1\n"
fout.write(cmd_str)
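The script above emits the same four-line box/paint pattern dozens of times; it could be factored into a helper in the same style. A minimal sketch (the `paint_box` name is illustrative, not part of the original script):

```python
def paint_box(fout, x1, y1, x2, y2, layer):
    # Emit a Magic "box" command followed by a "paint" command,
    # mirroring the repeated cmd_str/fout.write pairs above.
    fout.write("box %gum %gum %gum %gum\n" % (x1, y1, x2, y2))
    fout.write("paint %s\n" % layer)
```

With such a helper, each rectangle above collapses to a single call, e.g. `paint_box(fout, x1, y1, x2, y2, "metal1")`.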
# File: test/exploits/hashes/collisions/test_fletcher.py (repo: drjerry/acsploit, license: BSD-3-Clause)
import pytest
from exploits.hashes.collisions import fletcher
from test.exploits.dummy_output import DummyOutput
def fletcher_hash(bytes, hash_table_size, modulus):
    v1 = 0
    v2 = 0
    for byte in bytes:
        v1 = (v1 + ord(byte)) % modulus
        v2 = (v2 + v1) % modulus
    return (v2 * (modulus+1) + v1) % hash_table_size


def test_run():
    output = DummyOutput()
    n_collisions = 1
    length = 10
    width = 16
    hash_table_size = 100
    target = '42'
    fletcher.options['n_collisions'] = n_collisions
    fletcher.options['length'] = length
    fletcher.options['hash_table_size'] = hash_table_size
    fletcher.options['width'] = width
    fletcher.options['target_type'] = 'image'
    fletcher.options['target'] = target
    fletcher.run(output)
    assert output.count() == n_collisions
    for i in output:
        assert fletcher_hash(i, hash_table_size, (2**(width/2))-1) == int(target)
        assert len(i) == length


def test_more_collisions():
    output = DummyOutput()
    n_collisions = 4
    length = 5
    width = 16
    hash_table_size = 5
    target = '2'
    fletcher.options['n_collisions'] = n_collisions
    fletcher.options['length'] = length
    fletcher.options['hash_table_size'] = hash_table_size
    fletcher.options['width'] = width
    fletcher.options['target_type'] = 'image'
    fletcher.options['target'] = target
    fletcher.run(output)
    assert output.count() == n_collisions
    for i in output:
        assert fletcher_hash(i, hash_table_size, (2**(width/2))-1) == int(target)
        assert len(i) == length


def test_larger_width():
    output = DummyOutput()
    n_collisions = 1
    length = 5
    width = 32
    hash_table_size = 100
    target = '42'
    fletcher.options['n_collisions'] = n_collisions
    fletcher.options['length'] = length
    fletcher.options['hash_table_size'] = hash_table_size
    fletcher.options['width'] = width
    fletcher.options['target_type'] = 'image'
    fletcher.options['target'] = target
    fletcher.run(output)
    assert output.count() == n_collisions
    for i in output:
        assert fletcher_hash(i, hash_table_size, (2**(width/2))-1) == int(target)
        assert len(i) == length


def test_length_too_short():
    output = DummyOutput()
    n_collisions = 300
    length = 2
    width = 16
    hash_table_size = 100
    target = '42'
    fletcher.options['n_collisions'] = n_collisions
    fletcher.options['length'] = length
    fletcher.options['hash_table_size'] = hash_table_size
    fletcher.options['width'] = width
    fletcher.options['target_type'] = 'image'
    fletcher.options['target'] = target
    with pytest.raises(ValueError):
        fletcher.run(output)
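As a sanity check on the arithmetic these tests rely on, here is a standalone version of the checksum with a hand-worked example (`fletcher_slot` is an illustrative name; the actual collision generator lives in the `fletcher` exploit module):

```python
def fletcher_slot(data, hash_table_size, modulus):
    # Same arithmetic as fletcher_hash above: two running sums taken
    # mod `modulus`, combined, then reduced into a hash-table slot.
    v1 = v2 = 0
    for ch in data:
        v1 = (v1 + ord(ch)) % modulus
        v2 = (v2 + v1) % modulus
    return (v2 * (modulus + 1) + v1) % hash_table_size

# For width=16 the tests use modulus 2**8 - 1 = 255. For "ab":
# v1 = (97 + 98) % 255 = 195, v2 = (97 + 195) % 255 = 37,
# so the slot is (37 * 256 + 195) % 100 = 67.
print(fletcher_slot("ab", 100, 255))  # -> 67
```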
# File: pysptools/classification/inval.py (repo: ctherien/pysptools, license: Apache-2.0)
#
#------------------------------------------------------------------------------
# Copyright (c) 2013-2014, Christian Therien
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#------------------------------------------------------------------------------
#
# inval.py - This file is part of the PySptools package.
#
"""
"""
import pysptools.util as util
# SAM, SID, NormXCorr
def ClassifyInputValidation(class_name):
    @util.simple_decorator
    def wrap(method):
        def checker(self, M, E, threshold=0.1, mask=None):
            check = util.InputValidation(class_name)
            check.dispatch(check.cube_type, method.__name__, M, 'M')
            check.dispatch(check.lib_type, method.__name__, E, 'E')
            check.dispatch(check.threshold_type, method.__name__, E, threshold)
            check.dispatch(check.spectrum_length, method.__name__, M, 'M', E, 'E')
            check.dispatch(check.mask_type, method.__name__, mask)
            return method(self, M, E, threshold=threshold, mask=mask)
        return checker
    return wrap


# AbundanceClassification
def ClassifyInputValidation2(class_name):
    @util.simple_decorator
    def wrap(method):
        def checker(self, M, threshold=0.1):
            check = util.InputValidation(class_name)
            check.dispatch(check.cube_type, method.__name__, M, 'M')
            check.dispatch(check.threshold_type3, method.__name__, M, threshold)
            return method(self, M, threshold=threshold)
        return checker
    return wrap


# SVC
def ClassifyInputValidation3(class_name):
    @util.simple_decorator
    def wrap(method):
        def checker(self, M):
            check = util.InputValidation(class_name)
            check.dispatch(check.cube_type, method.__name__, M, 'M')
            return method(self, M)
        return checker
    return wrap
# KMeans
def PredictInputValidation(class_name):
    @util.simple_decorator
    def wrap(method):
        def checker(self, M, n_clusters=5, n_jobs=1, init='k-means++'):
            check = util.InputValidation(class_name)
            check.dispatch(check.cube_type, method.__name__, M, 'M')
            check.dispatch(check.int_type, method.__name__, n_clusters, 'n_clusters')
            check.dispatch(check.int_type, method.__name__, n_jobs, 'n_jobs')
            # init is string or array
            #check.dispatch(check.string_type, method.__name__, init, 'init')
            return method(self, M, n_clusters=n_clusters, n_jobs=n_jobs, init=init)
        return checker
    return wrap


# SAM, SID, NormXCorr
def GetMapInputValidation(class_name, call_before):
    @util.simple_decorator
    def wrap(method):
        def checker(self):
            check = util.InputValidation(class_name)
            check.dispatch(check.cmap_exist, method.__name__, self.cmap, call_before)
            return method(self)
        return checker
    return wrap


# SAM, SID, NormXCorr
def GetSingleMapInputValidation(class_name):
    @util.simple_decorator
    def wrap(method):
        def checker(self, lib_idx, constrained=True):
            check = util.InputValidation(class_name)
            check.dispatch(check.index_range, method.__name__, lib_idx, self.n_classes)
            check.dispatch(check.bool_type, method.__name__, constrained, 'constrained')
            return method(self, lib_idx, constrained=constrained)
        return checker
    return wrap
# SAM, SID, NormXCorr
def PlotSingleMapInputValidation(class_name, call_before):
    @util.simple_decorator
    def wrap(method):
        def checker(self, path, lib_idx, constrained=True, stretch=False, colorMap='spectral', suffix=None):
            check = util.InputValidation(class_name)
            check.dispatch(check.cmap_exist, method.__name__, self.cmap, call_before)
            check.dispatch(check.index_range, method.__name__, lib_idx, self.n_classes)
            check.dispatch(check.bool_type, method.__name__, constrained, 'constrained')
            check.dispatch(check.bool_type, method.__name__, stretch, 'stretch')
            check.dispatch(check.suffix_type, method.__name__, suffix)
            method(self, path, lib_idx, constrained=constrained, stretch=stretch, colorMap=colorMap, suffix=suffix)
        return checker
    return wrap


# SAM, SID, NormXCorr
def DisplaySingleMapInputValidation(class_name, call_before):
    @util.simple_decorator
    def wrap(method):
        def checker(self, lib_idx, constrained=True, stretch=False, colorMap='spectral', suffix=None):
            check = util.InputValidation(class_name)
            check.dispatch(check.cmap_exist, method.__name__, self.cmap, call_before)
            check.dispatch(check.index_range, method.__name__, lib_idx, self.n_classes)
            check.dispatch(check.bool_type, method.__name__, constrained, 'constrained')
            check.dispatch(check.bool_type, method.__name__, stretch, 'stretch')
            check.dispatch(check.suffix_type, method.__name__, suffix)
            method(self, lib_idx, constrained=constrained, stretch=stretch, colorMap=colorMap, suffix=suffix)
        return checker
    return wrap


# SAM, SID, NormXCorr, AbundanceValidation
def PlotInputValidation(class_name):
    @util.simple_decorator
    def wrap(method):
        def checker(self, path, labels=None, mask=None, interpolation='none', colorMap='Accent', suffix=None):
            check = util.InputValidation(class_name)
            check.dispatch(check.labels_type, method.__name__, labels)
            check.dispatch(check.mask_type, method.__name__, mask)
            check.dispatch(check.suffix_type, method.__name__, suffix)
            method(self, path, labels=labels, mask=mask, interpolation=interpolation, colorMap=colorMap, suffix=suffix)
        return checker
    return wrap


# SAM, SID, NormXCorr, AbundanceValidation
def DisplayInputValidation(class_name):
    @util.simple_decorator
    def wrap(method):
        def checker(self, labels=None, mask=None, interpolation='none', colorMap='Accent', suffix=None):
            check = util.InputValidation(class_name)
            check.dispatch(check.labels_type, method.__name__, labels)
            check.dispatch(check.mask_type, method.__name__, mask)
            check.dispatch(check.suffix_type, method.__name__, suffix)
            method(self, labels=labels, mask=mask, interpolation=interpolation, colorMap=colorMap, suffix=suffix)
        return checker
    return wrap
# SVC
def PlotInputValidation2(class_name):
    @util.simple_decorator
    def wrap(method):
        def checker(self, path, labels=None, interpolation='none', colorMap='Accent', suffix=None):
            check = util.InputValidation(class_name)
            check.dispatch(check.labels_type, method.__name__, labels)
            check.dispatch(check.suffix_type, method.__name__, suffix)
            method(self, path, labels=labels, interpolation=interpolation, colorMap=colorMap, suffix=suffix)
        return checker
    return wrap


# SVC
def DisplayInputValidation2(class_name):
    @util.simple_decorator
    def wrap(method):
        def checker(self, labels=None, interpolation='none', colorMap='Accent', suffix=None):
            check = util.InputValidation(class_name)
            check.dispatch(check.labels_type, method.__name__, labels)
            check.dispatch(check.suffix_type, method.__name__, suffix)
            method(self, labels=labels, interpolation=interpolation, colorMap=colorMap, suffix=suffix)
        return checker
    return wrap


# Kmeans
def PlotInputValidation3(class_name):
    @util.simple_decorator
    def wrap(method):
        def checker(self, path, interpolation='none', colorMap='Accent', suffix=None):
            check = util.InputValidation(class_name)
            check.dispatch(check.suffix_type, method.__name__, suffix)
            method(self, path, interpolation=interpolation, colorMap=colorMap, suffix=suffix)
        return checker
    return wrap


# Kmeans
def DisplayInputValidation3(class_name):
    @util.simple_decorator
    def wrap(method):
        def checker(self, interpolation='none', colorMap='Accent', suffix=None):
            check = util.InputValidation(class_name)
            check.dispatch(check.suffix_type, method.__name__, suffix)
            method(self, interpolation=interpolation, colorMap=colorMap, suffix=suffix)
        return checker
    return wrap


# SAM, SID, NormXCorr
def PlotHistoInputValidation(class_name):
    @util.simple_decorator
    def wrap(method):
        def checker(self, path, suffix=None):
            check = util.InputValidation(class_name)
            check.dispatch(check.suffix_type, method.__name__, suffix)
            method(self, path, suffix=suffix)
        return checker
    return wrap
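Each wrapper above follows the same three-level shape: a factory capturing `class_name`, a `wrap` function receiving the method, and a `checker` that validates arguments before delegating. A stripped-down sketch of that pattern (the `validate_positive` decorator and `Demo` class are invented for illustration; `util.simple_decorator` itself is not reproduced):

```python
def validate_positive(class_name):
    # Decorator factory in the style of the wrappers above: `wrap` receives
    # the method, `checker` validates the argument, then delegates to it.
    def wrap(method):
        def checker(self, value):
            if value <= 0:
                raise ValueError("%s.%s: value must be positive" % (class_name, method.__name__))
            return method(self, value)
        return checker
    return wrap


class Demo(object):
    @validate_positive('Demo')
    def double(self, value):
        return 2 * value
```

`Demo().double(3)` returns 6, while `Demo().double(-1)` raises a `ValueError` naming the class and method, which is essentially what `InputValidation.dispatch` does for cube, threshold, and mask arguments.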
# File: test_ci/nose/note.py (repo: luml6/goexam, license: BSD-2-Clause)
from basehttp import NetUtil
import category
domain_baseurl = "http://10.0.52.83:50047/API/V1.0/Note"
AccountID = "76618461"
token = "00308835d1bbe351b58418ca1e3bfbc4"


def setup_module(module):
    print('setup_module every func exec ')


def setup_deco():
    print('setup_deco use with_setup ')


def teardown_deco():
    print('teardown_deco all use with_setup')
class TestUM():
    def setup(self):
        print('setup each fuc this class ')

    @classmethod
    def setup_class(cls):
        print('setup_class use for this class, just one time')

    def test_1noCategory(self):
        m = NetUtil()
        headers = {"Content-type": "application/json", "Accept": "text/plain",
                   "AccountID": AccountID, "token": token}
        resp = m.http_get(domain_baseurl + "/NoCategory")
        if resp == None:
            print m.errCode, m.errmsg
        else:
            print resp["Code"]
        assert resp["Code"] == 0

    def test_2noteAll(self):
        m = NetUtil()
        headers = {"Content-type": "application/json", "Accept": "text/plain",
                   "AccountID": AccountID, "token": token}
        resp = m.http_get(domain_baseurl + "/All")
        if resp == None:
            print m.errCode, m.errmsg
        else:
            print resp["Code"]
        assert resp["Code"] == 0

    def test_3noteGetID(self):
        m = NetUtil()
        headers = {"Content-type": "application/json", "Accept": "text/plain",
                   "AccountID": AccountID, "token": token}
        resp = m.http_get(domain_baseurl + "/GetNoteID")
        if resp == None:
            print m.errCode, m.errmsg
        else:
            print resp["Code"]
        assert resp["Code"] == 0
    def test_4noteCreate(self):
        c = category.TestUM()
        catePath = c.create()
        m = NetUtil()
        values = {
            "Author": "string",
            "Title": "string",
            "Summary": "string",
            "Content": "string",
            "CheckSum": "string",
            "IsPinned": 0
        }
        values["CategoryPath"] = catePath
        headers = {"Content-type": "application/json", "Accept": "text/plain",
                   "AccountID": AccountID, "token": token}
        resp = m.http_post(domain_baseurl + "/Create", values, headers)
        if resp == None:
            print m.errCode, m.errmsg
        else:
            print resp
            print "aaaaa:", resp["Data"]
        if resp["Code"] == 0:
            global notePath
            notePath = resp["Data"]["Path"]
        assert resp["Code"] == 0

    def notecreate(self):
        c = category.TestUM()
        catePath = c.create()
        print catePath
        m = NetUtil()
        values = {
            "Author": "string",
            "Title": "string",
            "Summary": "string",
            "Content": "string",
            "CheckSum": "string",
            "IsPinned": 0
        }
        values["CategoryPath"] = catePath
        headers = {"Content-type": "application/json", "Accept": "text/plain",
                   "AccountID": AccountID, "token": token}
        resp = m.http_post(domain_baseurl + "/Create", values, headers)
        if resp == None:
            print m.errCode, m.errmsg
        if resp["Code"] == 0:
            notePath = resp["Data"]["Path"]
        return notePath
    def test_5noteGet(self):
        m = NetUtil()
        values = {}
        values["Path"] = self.notecreate()
        headers = {"Content-type": "application/json", "Accept": "text/plain",
                   "AccountID": AccountID, "token": token}
        resp = m.http_post(domain_baseurl + "/Get", values, headers)
        if resp == None:
            print m.errCode, m.errmsg
        else:
            print resp["Code"]
            print "aaaaa:", resp["Code"]
        assert resp["Code"] == 0

    def test_6NoteUpdate(self):
        m = NetUtil()
        values = {
            "Author": "test",
            "Title": "string",
            "Summary": "test",
            "CheckSum": "string",
            "Content": "string",
            "IsPinned": 0,
            "UpdateTime": 0
        }
        values["Path"] = self.notecreate()
        headers = {"Content-type": "application/json", "Accept": "text/plain",
                   "AccountID": AccountID, "token": token}
        resp = m.http_post(domain_baseurl + "/Update", values, headers)
        if resp == None:
            print m.errCode, m.errmsg
        else:
            print resp
            print "aaaaa:", resp["Code"]
        assert resp["Code"] == 0
    def test_7noteDelete(self):
        m = NetUtil()
        values = {}
        notePath = self.notecreate()
        values["Path"] = notePath
        headers = {"Content-type": "application/json", "Accept": "text/plain",
                   "AccountID": AccountID, "token": token}
        resp = m.http_post(domain_baseurl + "/Delete", values, headers)
        if resp == None:
            print m.errCode, m.errmsg
        else:
            print resp["Code"]
            print "aaaaa:", resp["Code"]
        assert notePath == resp["Data"]["Path"]

    def test_8noteGetDeleted(self):
        m = NetUtil()
        values = {}
        notePath = self.notecreate()
        values["Path"] = notePath
        headers = {"Content-type": "application/json", "Accept": "text/plain",
                   "AccountID": AccountID, "token": token}
        resp = m.http_post(domain_baseurl + "/Delete", values, headers)
        if resp == None:
            print m.errCode, m.errmsg
        else:
            print resp["Code"]
            print "aaaaa:", resp["Code"]
        assert notePath == resp["Data"]["Path"]
    def test_9noteRecover(self):
        m = NetUtil()
        values = {}
        notePath = self.notecreate()
        values["Path"] = notePath
        headers = {"Content-type": "application/json", "Accept": "text/plain",
                   "AccountID": AccountID, "token": token}
        resp = m.http_post(domain_baseurl + "/Delete", values, headers)
        if resp["Code"] == 0:
            m = NetUtil()
            values = {}
            values["Path"] = notePath
            headers = {"Content-type": "application/json", "Accept": "text/plain",
                       "AccountID": AccountID, "token": token}
            resp = m.http_post(domain_baseurl + "/Recover", values, headers)
            if resp == None:
                print m.errCode, m.errmsg
            else:
                print resp["Code"]
                print "aaaaa:", resp["Code"]
        assert notePath == resp["Data"]["Path"]

    def test_10noteMove(self):
        c = category.TestUM()
        catePath = c.create()
        print catePath
        m = NetUtil()
        values = {}
        notePath = self.notecreate()
        values["Path"] = notePath
        values["CategoryPath"] = catePath
        headers = {"Content-type": "application/json", "Accept": "text/plain",
                   "AccountID": AccountID, "token": token}
        resp = m.http_post(domain_baseurl + "/Move", values, headers)
        if resp == None:
            print m.errCode, m.errmsg
        else:
            print resp["Code"]
            print "aaaaa:", resp["Code"]
        assert notePath == resp["Data"]["Path"]
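Every test above rebuilds the same headers dict inline; the repetition could be factored into a helper in the same spirit. A sketch (`build_headers` is a hypothetical helper, not part of `NetUtil`):

```python
def build_headers(account_id, token):
    # Common request headers repeated in each test case above.
    return {
        "Content-type": "application/json",
        "Accept": "text/plain",
        "AccountID": account_id,
        "token": token,
    }
```

Each test could then call `build_headers(AccountID, token)` instead of restating the four-key literal.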
# File: GM2AUTOSAR_MM/merge_preprocess_rules/Himesis/HMoveOneOutputDirectApplyDiffRulesLHS.py (repo: levilucio/SyVOLT, license: MIT)
from core.himesis import Himesis, HimesisPreConditionPatternLHS
import cPickle as pickle
from uuid import UUID
class HMoveOneOutputDirectApplyDiffRulesLHS(HimesisPreConditionPatternLHS):
    def __init__(self):
        """
        Creates the himesis graph representing the AToM3 model HMoveOneOutputDirectApplyDiffRulesLHS.
        """
        # Flag this instance as compiled now
        self.is_compiled = True
        super(HMoveOneOutputDirectApplyDiffRulesLHS, self).__init__(name='HMoveOneOutputDirectApplyDiffRulesLHS', num_nodes=4, edges=[])
        # Add the edges
        self.add_edges([(2, 0), (0, 3)])
        # Set the graph attributes
        self["mm__"] = pickle.loads("""(lp1
S'MT_pre__GM2AUTOSAR_MM'
p2
aS'MoTifRule'
p3
a.""")
        self["MT_constraint__"] = pickle.loads("""V#if len([i for i in graph.neighbors(PreNode('5').index) if graph.vs[i]['mm__'] == 'apply_contains']) == 0:\u000a# return True\u000a\u000a#return False\u000areturn True\u000a
p1
.""")
        self["name"] = """"""
        self["GUID__"] = UUID('0c788c4f-e8e3-4e1d-8545-2b7bb35fb067')
        # Set the node attributes
        self.vs[0]["MT_subtypeMatching__"] = False
        self.vs[0]["MT_pre__associationType"] = """
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
"""
        self.vs[0]["MT_label__"] = """9"""
        self.vs[0]["MT_subtypes__"] = pickle.loads("""(lp1
.""")
        self.vs[0]["MT_dirty__"] = False
        self.vs[0]["mm__"] = """MT_pre__directLink_T"""
        self.vs[0]["GUID__"] = UUID('05cab2d2-c548-435e-a99c-20d1196b94e1')
        self.vs[1]["MT_pivotOut__"] = """element1"""
        self.vs[1]["MT_subtypeMatching__"] = True
        self.vs[1]["MT_pre__classtype"] = """
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
"""
        self.vs[1]["MT_pivotIn__"] = """element1"""
        self.vs[1]["MT_label__"] = """3"""
        self.vs[1]["MT_subtypes__"] = pickle.loads("""(lp1
S'MT_pre__EcuInstance'
p2
aS'MT_pre__System'
p3
aS'MT_pre__SystemMapping'
p4
aS'MT_pre__ComponentPrototype'
p5
aS'MT_pre__SwCompToEcuMapping_component'
p6
aS'MT_pre__CompositionType'
p7
aS'MT_pre__PPortPrototype'
p8
aS'MT_pre__SwcToEcuMapping'
p9
aS'MT_pre__SoftwareComposition'
p10
aS'MT_pre__RPortPrototype'
p11
aS'MT_pre__PortPrototype'
p12
aS'MT_pre__ComponentType'
p13
a.""")
        self.vs[1]["MT_dirty__"] = False
        self.vs[1]["mm__"] = """MT_pre__MetaModelElement_T"""
        self.vs[1]["MT_pre__cardinality"] = """
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
"""
        self.vs[1]["MT_pre__name"] = """
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
"""
        self.vs[1]["GUID__"] = UUID('83a88bf9-159b-49e5-bc4f-e564e350fe0f')
        self.vs[2]["MT_pivotOut__"] = """element2"""
        self.vs[2]["MT_subtypeMatching__"] = True
        self.vs[2]["MT_pre__classtype"] = """
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
"""
        self.vs[2]["MT_pivotIn__"] = """element2"""
        self.vs[2]["MT_label__"] = """4"""
        self.vs[2]["MT_subtypes__"] = pickle.loads("""(lp1
S'MT_pre__EcuInstance'
p2
aS'MT_pre__System'
p3
aS'MT_pre__SystemMapping'
p4
aS'MT_pre__ComponentPrototype'
p5
aS'MT_pre__SwCompToEcuMapping_component'
p6
aS'MT_pre__CompositionType'
p7
aS'MT_pre__PPortPrototype'
p8
aS'MT_pre__SwcToEcuMapping'
p9
aS'MT_pre__SoftwareComposition'
p10
aS'MT_pre__RPortPrototype'
p11
aS'MT_pre__PortPrototype'
p12
aS'MT_pre__ComponentType'
p13
a.""")
        self.vs[2]["MT_dirty__"] = False
        self.vs[2]["mm__"] = """MT_pre__MetaModelElement_T"""
        self.vs[2]["MT_pre__cardinality"] = """
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
"""
self.vs[2]["MT_pre__name"] = """
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
"""
self.vs[2]["GUID__"] = UUID('5969d185-c396-4074-ae02-19b4eba4fedf')
self.vs[3]["MT_subtypeMatching__"] = True
self.vs[3]["MT_pre__classtype"] = """
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
"""
self.vs[3]["MT_label__"] = """5"""
self.vs[3]["MT_subtypes__"] = pickle.loads("""(lp1
S'MT_pre__EcuInstance'
p2
aS'MT_pre__System'
p3
aS'MT_pre__SystemMapping'
p4
aS'MT_pre__ComponentPrototype'
p5
aS'MT_pre__SwCompToEcuMapping_component'
p6
aS'MT_pre__CompositionType'
p7
aS'MT_pre__PPortPrototype'
p8
aS'MT_pre__SwcToEcuMapping'
p9
aS'MT_pre__SoftwareComposition'
p10
aS'MT_pre__RPortPrototype'
p11
aS'MT_pre__PortPrototype'
p12
aS'MT_pre__ComponentType'
p13
a.""")
self.vs[3]["MT_dirty__"] = False
self.vs[3]["mm__"] = """MT_pre__MetaModelElement_T"""
self.vs[3]["MT_pre__cardinality"] = """
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
"""
self.vs[3]["MT_pre__name"] = """
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
"""
self.vs[3]["GUID__"] = UUID('c05c0f99-90db-4930-84d1-d8765afba7c9')
def eval_classtype3(self, attr_value, this):
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
def eval_cardinality3(self, attr_value, this):
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
def eval_name3(self, attr_value, this):
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
def eval_classtype4(self, attr_value, this):
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
def eval_cardinality4(self, attr_value, this):
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
def eval_name4(self, attr_value, this):
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
def eval_classtype5(self, attr_value, this):
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
def eval_cardinality5(self, attr_value, this):
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
def eval_name5(self, attr_value, this):
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
def eval_associationType9(self, attr_value, this):
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
def constraint(self, PreNode, graph):
"""
Executable constraint code.
        @param PreNode: function that takes an integer label and returns
        the node corresponding to that label.
"""
#if len([i for i in graph.neighbors(PreNode('5').index) if graph.vs[i]['mm__'] == 'apply_contains']) == 0:
# return True
#return False
return True
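    # Illustrative sketch only (not generated output; attribute names taken
    # from the commented-out pattern in constraint() above): a constraint that
    # matches only when the node labeled '5' has no 'apply_contains' neighbor
    # could be written as:
    #
    #     def constraint(self, PreNode, graph):
    #         neighbors = graph.neighbors(PreNode('5').index)
    #         return not any(graph.vs[i]['mm__'] == 'apply_contains'
    #                        for i in neighbors)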
| 42.502439 | 227 | 0.55245 | 2,073 | 17,426 | 4.498794 | 0.091655 | 0.027343 | 0.051469 | 0.038602 | 0.86157 | 0.838623 | 0.83294 | 0.82329 | 0.82329 | 0.82329 | 0 | 0.016944 | 0.193963 | 17,426 | 409 | 228 | 42.606357 | 0.647017 | 0.333697 | 0 | 0.74902 | 0 | 0.003922 | 0.687691 | 0.25439 | 0 | 0 | 0 | 0 | 0 | 1 | 0.047059 | false | 0 | 0.011765 | 0.039216 | 0.145098 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
12efc5457813216056e38342cbf777f172799b9b | 340 | py | Python | temboo/core/Library/Withings/Measure/__init__.py | jordanemedlock/psychtruths | 52e09033ade9608bd5143129f8a1bfac22d634dd | [
"Apache-2.0"
] | 7 | 2016-03-07T02:07:21.000Z | 2022-01-21T02:22:41.000Z | temboo/core/Library/Withings/Measure/__init__.py | jordanemedlock/psychtruths | 52e09033ade9608bd5143129f8a1bfac22d634dd | [
"Apache-2.0"
] | null | null | null | temboo/core/Library/Withings/Measure/__init__.py | jordanemedlock/psychtruths | 52e09033ade9608bd5143129f8a1bfac22d634dd | [
"Apache-2.0"
] | 8 | 2016-06-14T06:01:11.000Z | 2020-04-22T09:21:44.000Z | from temboo.Library.Withings.Measure.GetActivityMetrics import GetActivityMetrics, GetActivityMetricsInputSet, GetActivityMetricsResultSet, GetActivityMetricsChoreographyExecution
from temboo.Library.Withings.Measure.GetBodyMetrics import GetBodyMetrics, GetBodyMetricsInputSet, GetBodyMetricsResultSet, GetBodyMetricsChoreographyExecution
| 113.333333 | 179 | 0.917647 | 22 | 340 | 14.181818 | 0.636364 | 0.064103 | 0.108974 | 0.160256 | 0.205128 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.041176 | 340 | 2 | 180 | 170 | 0.957055 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
423798996ebab94032dcccec80201fe7d02627d4 | 37,560 | py | Python | tests/__init__.py | mercma/papa | c5807bb7d1f8f76f33d638b49576d81753fa1b8c | [
"MIT"
] | 15 | 2015-05-13T14:20:22.000Z | 2022-01-25T08:32:54.000Z | tests/__init__.py | codedstructure/papa | e3c56cc9fcf2e51ecfe77f9c2fb77493c506809a | [
"MIT"
] | 4 | 2015-11-29T14:54:06.000Z | 2018-11-08T15:58:42.000Z | tests/__init__.py | codedstructure/papa | e3c56cc9fcf2e51ecfe77f9c2fb77493c506809a | [
"MIT"
] | 4 | 2016-01-20T19:38:06.000Z | 2018-11-21T14:01:59.000Z | from random import randint
import os
import sys
import os.path
import socket
from time import sleep
try:
# noinspection PyPackageRequirements
import unittest2 as unittest
except ImportError:
import unittest
import select
import papa
from papa.server.papa_socket import unix_socket
from papa.utils import cast_bytes
from tempfile import gettempdir
import logging
logging.basicConfig()
import inspect
isdebugging = False
for frame in inspect.stack():
if frame[1].endswith("pydevd.py"):
isdebugging = True
break
here = os.path.dirname(os.path.realpath(__file__))
class SocketTest(unittest.TestCase):
def setUp(self):
papa.set_debug_mode(quit_when_connection_closed=True)
def check_subset(self, expected, result):
for key, value in expected.items():
self.assertIn(key, result)
self.assertEqual(value, result[key])
def test_inet(self):
with papa.Papa() as p:
expected = {'type': 'stream', 'family': 'inet', 'backlog': 5, 'host': '127.0.0.1'}
reply = p.make_socket('inet_sock')
self.check_subset(expected, reply)
self.assertIn('port', reply)
p.remove_sockets('inet_sock')
self.assertDictEqual({}, p.list_sockets())
reply = p.make_socket('inet_sock', reuseport=True)
self.check_subset(expected, reply)
self.assertIn('port', reply)
def test_inet_interface(self):
with papa.Papa() as p:
expected = {'type': 'stream', 'interface': 'eth0', 'family': 'inet', 'backlog': 5, 'host': '0.0.0.0'}
self.assertDictEqual({}, p.list_sockets())
reply = p.make_socket('interface_socket', interface='eth0')
self.assertIn('port', reply)
self.assertIn('fileno', reply)
expected['port'] = reply['port']
expected['fileno'] = reply['fileno']
self.assertDictEqual(expected, reply)
self.assertDictEqual({'interface_socket': expected}, p.list_sockets())
p.remove_sockets('interface_socket')
self.assertDictEqual({}, p.list_sockets())
reply = p.make_socket('interface_socket', interface='eth0', port=expected['port'])
self.assertDictEqual(expected, reply)
self.assertDictEqual({'interface_socket': expected}, p.list_sockets())
def test_inet6(self):
with papa.Papa() as p:
expected = {'type': 'stream', 'family': 'inet6', 'backlog': 5, 'host': '::1'}
reply = p.make_socket('inet6_sock', family=socket.AF_INET6)
self.check_subset(expected, reply)
self.assertIn('port', reply)
p.remove_sockets('inet6_sock')
self.assertDictEqual({}, p.list_sockets())
reply = p.make_socket('inet6_sock', family=socket.AF_INET6, reuseport=True)
self.check_subset(expected, reply)
self.assertIn('port', reply)
def test_inet6_interface(self):
with papa.Papa() as p:
expected = {'type': 'stream', 'interface': 'eth0', 'family': 'inet6', 'backlog': 5, 'host': '::'}
self.assertDictEqual({}, p.list_sockets())
reply = p.make_socket('interface_socket', family=socket.AF_INET6, interface='eth0')
self.assertIn('port', reply)
self.assertIn('fileno', reply)
expected['port'] = reply['port']
expected['fileno'] = reply['fileno']
self.assertDictEqual(expected, reply)
self.assertDictEqual({'interface_socket': expected}, p.list_sockets())
@unittest.skipIf(unix_socket is None, 'Unix socket not supported on this platform')
def test_file_socket(self):
with papa.Papa() as p:
path = os.path.join(gettempdir(), 'tst.sock')
expected = {'path': path, 'backlog': 5, 'type': 'stream', 'family': 'unix'}
reply = p.make_socket('fsock', path=path)
self.assertIn('fileno', reply)
expected['fileno'] = reply['fileno']
self.assertDictEqual(expected, reply)
self.assertDictEqual({'fsock': expected}, p.list_sockets())
self.assertRaises(papa.Error, p.make_socket, 'fsock', path='path')
def test_already_exists(self):
with papa.Papa() as p:
reply = p.make_socket('exists_sock')
self.assertDictEqual(reply, p.make_socket('exists_sock'))
self.assertRaises(papa.Error, p.make_socket, 'exists_sock', family=socket.AF_INET6)
def test_wildcard(self):
with papa.Papa() as p:
expected = {'type': 'stream', 'family': 'inet', 'backlog': 5, 'host': '127.0.0.1'}
reply = p.make_socket('inet.0')
self.check_subset(expected, reply)
self.assertIn('port', reply)
reply = p.make_socket('inet.1')
self.check_subset(expected, reply)
self.assertIn('port', reply)
reply = p.make_socket('other')
self.check_subset(expected, reply)
self.assertIn('port', reply)
reply = p.list_sockets('inet.*')
self.assertEqual(2, len(list(reply.keys())))
self.assertEqual(['inet.0', 'inet.1'], sorted(reply.keys()))
reply = p.list_sockets('other')
self.assertEqual(1, len(list(reply.keys())))
self.assertEqual(['other'], list(reply.keys()))
reply = p.list_sockets('not_there')
self.assertEqual({}, reply)
reply = p.list_sockets('other', 'inet.1')
self.assertEqual(2, len(list(reply.keys())))
self.assertEqual(['inet.1', 'other'], sorted(reply.keys()))
reply = p.list_sockets('other', 'inet*')
self.assertEqual(3, len(list(reply.keys())))
self.assertEqual(['inet.0', 'inet.1', 'other'], sorted(reply.keys()))
reply = p.list_sockets('*')
self.assertEqual(3, len(list(reply.keys())))
self.assertEqual(['inet.0', 'inet.1', 'other'], sorted(reply.keys()))
reply = p.list_sockets()
self.assertEqual(3, len(list(reply.keys())))
self.assertEqual(['inet.0', 'inet.1', 'other'], sorted(reply.keys()))
class ValueTest(unittest.TestCase):
def setUp(self):
papa.set_debug_mode(quit_when_connection_closed=True)
def test_value(self):
with papa.Papa() as p:
self.assertEqual(None, p.get('aack'))
self.assertDictEqual({}, p.list_values())
p.set('aack', 'bar')
self.assertEqual('bar', p.get('aack'))
self.assertDictEqual({'aack': 'bar'}, p.list_values())
p.set('aack2', 'barry')
self.assertEqual('barry', p.get('aack2'))
self.assertDictEqual({'aack': 'bar', 'aack2': 'barry'}, p.list_values())
p.set('aack3', 'larry')
self.assertEqual('larry', p.get('aack3'))
self.assertDictEqual({'aack': 'bar', 'aack2': 'barry', 'aack3': 'larry'}, p.list_values())
p.set('bar', 'aack')
self.assertEqual('aack', p.get('bar'))
self.assertDictEqual({'aack': 'bar', 'aack2': 'barry', 'aack3': 'larry', 'bar': 'aack'}, p.list_values())
self.assertDictEqual({'aack': 'bar', 'aack2': 'barry', 'aack3': 'larry', 'bar': 'aack'}, p.list_values('*'))
self.assertDictEqual({'aack': 'bar', 'aack2': 'barry', 'aack3': 'larry'}, p.list_values('aack*'))
self.assertDictEqual({'bar': 'aack'}, p.list_values('b*'))
self.assertDictEqual({'aack2': 'barry', 'bar': 'aack'}, p.list_values('aack2', 'b*'))
p.set('aack')
self.assertEqual(None, p.get('aack'))
self.assertDictEqual({'aack2': 'barry', 'aack3': 'larry'}, p.list_values('a*'))
p.remove_values('aack*')
self.assertDictEqual({'bar': 'aack'}, p.list_values())
def test_wildcard_clear(self):
with papa.Papa() as p:
self.assertRaises(papa.Error, p.remove_values)
self.assertRaises(papa.Error, p.remove_values, '*')
class ProcessTest(unittest.TestCase):
def setUp(self):
papa.set_debug_mode(quit_when_connection_closed=True)
@staticmethod
def _merge_lines(output):
if len(output) > 1:
by_name = {}
merged_lines = False
for line in output:
by_name.setdefault(line.name, []).append(line)
for named_lines in by_name.values():
line_number = 1
while line_number < len(named_lines):
line = named_lines[line_number]
prev = named_lines[line_number - 1]
if line.timestamp - prev.timestamp > .05:
line_number += 1
else:
named_lines[line_number - 1] = papa.ProcessOutput(prev.name, prev.timestamp, prev.data + line.data)
del named_lines[line_number]
merged_lines = True
if merged_lines:
output = sorted((item for items in by_name.values() for item in items), key=lambda x: x.timestamp)
return output
def gather_output(self, watcher):
out = []
err = []
close = []
while watcher:
reply = watcher.read()
if reply:
out.extend(reply[0])
err.extend(reply[1])
close.extend(reply[2])
return self._merge_lines(out), self._merge_lines(err), close
if isdebugging:
_non_debug_gather_output = gather_output
def _filter_list(self, output):
remove = []
for line_number, line in enumerate(output):
if line.data.startswith(b'pydev debugger: process ') and b'is connecting' in line.data:
if line.data.endswith(b'is connecting\n\n'):
remove.append(line_number)
else:
end = line.data.find(b'is connecting\n\n')
output[line_number] = papa.ProcessOutput(line.name, line.timestamp, line.data[end + 14:])
for line_number in reversed(remove):
del output[line_number]
def gather_output(self, watcher):
out, err, close = self._non_debug_gather_output(watcher)
self._filter_list(out)
self._filter_list(err)
return out, err, close
def test_process_with_out_and_err(self):
with papa.Papa() as p:
self.assertDictEqual({}, p.list_processes())
reply1 = p.make_process('write3', sys.executable, args='executables/write_three_lines.py', working_dir=here, uid=os.environ['LOGNAME'], env=os.environ)
self.assertIn('pid', reply1)
self.assertIsInstance(reply1['pid'], int)
reply = p.list_processes()
self.assertEqual(1, len(list(reply.keys())))
self.assertEqual('write3', list(reply.keys())[0])
self.assertIn('pid', list(reply.values())[0])
reply2 = p.make_process('write3', sys.executable, args='executables/write_three_lines.py', working_dir=here, uid=os.environ['LOGNAME'], env=os.environ)
self.assertEqual(reply1['pid'], reply2['pid'])
self.assertRaises(papa.Error, p.watch_processes, 'not_there')
with p.watch_processes('write*') as w:
select.select([w], [], [])
self.assertTrue(w.ready)
out, err, close = self.gather_output(w)
exit_code = w.exit_code['write3']
self.assertEqual(3, len(out))
self.assertEqual(1, len(err))
self.assertEqual(1, len(close))
self.assertEqual('write3', out[0].name)
self.assertEqual('write3', out[1].name)
self.assertEqual('write3', out[2].name)
self.assertEqual('write3', err[0].name)
self.assertEqual('write3', close[0].name)
self.assertLess(out[0].timestamp, out[1].timestamp)
self.assertLess(out[1].timestamp, out[2].timestamp)
self.assertLessEqual(out[2].timestamp, err[0].timestamp)
self.assertLessEqual(out[2].timestamp, close[0].timestamp)
self.assertLessEqual(err[0].timestamp, close[0].timestamp)
self.assertEqual(b'Version: ' + cast_bytes(sys.version.partition(' ')[0]) + b'\n', out[0].data)
self.assertEqual(b'Executable: ' + cast_bytes(sys.executable) + b'\n', out[1].data)
self.assertEqual(b'Args: \n', out[2].data)
self.assertEqual(b'done', err[0].data)
self.assertEqual(0, close[0].data)
self.assertEqual(0, exit_code)
self.assertDictEqual({}, p.list_processes())
def test_process_with_none_executable(self):
with papa.Papa() as p:
self.assertDictEqual({}, p.list_processes())
reply1 = p.make_process('write3', None, args=[sys.executable, 'executables/write_three_lines.py'], working_dir=here, uid=os.environ['LOGNAME'], env=os.environ)
self.assertIn('pid', reply1)
self.assertIsInstance(reply1['pid'], int)
reply = p.list_processes()
self.assertEqual(1, len(list(reply.keys())))
self.assertEqual('write3', list(reply.keys())[0])
self.assertIn('pid', list(reply.values())[0])
reply2 = p.make_process('write3', sys.executable, args='executables/write_three_lines.py', working_dir=here, uid=os.environ['LOGNAME'], env=os.environ)
self.assertEqual(reply1['pid'], reply2['pid'])
self.assertRaises(papa.Error, p.watch_processes, 'not_there')
with p.watch_processes('write*') as w:
select.select([w], [], [])
self.assertTrue(w.ready)
out, err, close = self.gather_output(w)
exit_code = w.exit_code['write3']
self.assertEqual(3, len(out))
self.assertEqual(1, len(err))
self.assertEqual(1, len(close))
self.assertEqual('write3', out[0].name)
self.assertEqual('write3', out[1].name)
self.assertEqual('write3', out[2].name)
self.assertEqual('write3', err[0].name)
self.assertEqual('write3', close[0].name)
self.assertLess(out[0].timestamp, out[1].timestamp)
self.assertLess(out[1].timestamp, out[2].timestamp)
self.assertLessEqual(out[2].timestamp, err[0].timestamp)
self.assertLessEqual(out[2].timestamp, close[0].timestamp)
self.assertLessEqual(err[0].timestamp, close[0].timestamp)
self.assertEqual(b'Version: ' + cast_bytes(sys.version.partition(' ')[0]) + b'\n', out[0].data)
self.assertEqual(b'Executable: ' + cast_bytes(sys.executable) + b'\n', out[1].data)
self.assertEqual(b'Args: \n', out[2].data)
self.assertEqual(b'done', err[0].data)
self.assertEqual(0, close[0].data)
self.assertEqual(0, exit_code)
self.assertDictEqual({}, p.list_processes())
def test_process_with_watch_immediately(self):
with papa.Papa() as p:
self.assertDictEqual({}, p.list_processes())
with p.make_process('write3', sys.executable, args='executables/write_three_lines.py', working_dir=here, uid=os.environ['LOGNAME'], env=os.environ, watch_immediately=True) as w:
out, err, close = self.gather_output(w)
exit_code = w.exit_code['write3']
self.assertEqual(3, len(out))
self.assertEqual(1, len(err))
self.assertEqual(1, len(close))
self.assertEqual('write3', out[0].name)
self.assertEqual('write3', out[1].name)
self.assertEqual('write3', out[2].name)
self.assertEqual('write3', err[0].name)
self.assertEqual('write3', close[0].name)
self.assertLess(out[0].timestamp, out[1].timestamp)
self.assertLess(out[1].timestamp, out[2].timestamp)
self.assertLessEqual(out[2].timestamp, err[0].timestamp)
self.assertLessEqual(out[2].timestamp, close[0].timestamp)
self.assertLessEqual(err[0].timestamp, close[0].timestamp)
self.assertEqual(b'Version: ' + cast_bytes(sys.version.partition(' ')[0]) + b'\n', out[0].data)
self.assertEqual(b'Executable: ' + cast_bytes(sys.executable) + b'\n', out[1].data)
self.assertEqual(b'Args: \n', out[2].data)
self.assertEqual(b'done', err[0].data)
self.assertEqual(0, close[0].data)
self.assertEqual(0, exit_code)
self.assertDictEqual({}, p.list_processes())
def test_process_with_err_redirected_to_out(self):
with papa.Papa() as p:
self.assertDictEqual({}, p.list_processes())
reply1 = p.make_process('write3', sys.executable, args='executables/write_three_lines.py', working_dir=here, uid=os.environ['LOGNAME'], env=os.environ, stderr=papa.STDOUT)
self.assertIn('pid', reply1)
self.assertIsInstance(reply1['pid'], int)
reply = p.list_processes()
self.assertEqual(1, len(list(reply.keys())))
self.assertEqual('write3', list(reply.keys())[0])
self.assertIn('pid', list(reply.values())[0])
with p.watch_processes('write*') as w:
out, err, close = self.gather_output(w)
exit_code = w.exit_code['write3']
self.assertEqual(3, len(out))
self.assertEqual(0, len(err))
self.assertEqual(1, len(close))
self.assertEqual('write3', out[0].name)
self.assertEqual('write3', out[1].name)
self.assertEqual('write3', out[2].name)
self.assertEqual('write3', close[0].name)
self.assertLess(out[0].timestamp, out[1].timestamp)
self.assertLess(out[1].timestamp, out[2].timestamp)
self.assertLessEqual(out[2].timestamp, close[0].timestamp)
self.assertEqual(b'Version: ' + cast_bytes(sys.version.partition(' ')[0]) + b'\n', out[0].data)
self.assertEqual(b'Executable: ' + cast_bytes(sys.executable) + b'\n', out[1].data)
self.assertEqual(b'Args: \ndone', out[2].data)
self.assertEqual(0, close[0].data)
self.assertEqual(0, exit_code)
self.assertDictEqual({}, p.list_processes())
def test_process_with_no_out(self):
with papa.Papa() as p:
self.assertDictEqual({}, p.list_processes())
reply1 = p.make_process('write3', sys.executable, args='executables/write_three_lines.py', working_dir=here, uid=os.environ['LOGNAME'], env=os.environ, stdout=papa.DEVNULL)
self.assertIn('pid', reply1)
self.assertIsInstance(reply1['pid'], int)
reply = p.list_processes()
self.assertEqual(1, len(list(reply.keys())))
self.assertEqual('write3', list(reply.keys())[0])
self.assertIn('pid', list(reply.values())[0])
with p.watch_processes('write*') as w:
out, err, close = self.gather_output(w)
exit_code = w.exit_code['write3']
self.assertEqual(0, len(out))
self.assertEqual(1, len(err))
self.assertEqual(1, len(close))
self.assertEqual('write3', err[0].name)
self.assertEqual('write3', close[0].name)
self.assertLessEqual(err[0].timestamp, close[0].timestamp)
self.assertEqual(b'done', err[0].data)
self.assertEqual(0, close[0].data)
self.assertEqual(0, exit_code)
self.assertDictEqual({}, p.list_processes())
def test_process_with_no_buffer(self):
with papa.Papa() as p:
self.assertDictEqual({}, p.list_processes())
reply1 = p.make_process('write3', sys.executable, args='executables/write_three_lines.py', working_dir=here, uid=os.environ['LOGNAME'], env=os.environ, bufsize=0)
self.assertIn('pid', reply1)
self.assertIsInstance(reply1['pid'], int)
reply = p.list_processes()
self.assertEqual(1, len(list(reply.keys())))
self.assertEqual('write3', list(reply.keys())[0])
self.assertIn('pid', list(reply.values())[0])
with p.watch_processes('write*') as w:
out, err, close = self.gather_output(w)
exit_code = w.exit_code['write3']
self.assertEqual(0, len(out))
self.assertEqual(0, len(err))
self.assertEqual(1, len(close))
self.assertEqual('write3', close[0].name)
self.assertEqual(0, close[0].data)
self.assertEqual(0, exit_code)
self.assertDictEqual({}, p.list_processes())
def test_two_list_processes_full_output(self):
with papa.Papa() as p:
self.assertDictEqual({}, p.list_processes())
reply1 = p.make_process('write3.0', sys.executable, args='executables/write_three_lines.py', working_dir=here, uid=os.environ['LOGNAME'], env=os.environ)
self.assertIn('pid', reply1)
self.assertIsInstance(reply1['pid'], int)
reply2 = p.make_process('write3.1', sys.executable, args='executables/write_three_lines.py', working_dir=here, uid=os.environ['LOGNAME'], env=os.environ)
self.assertIn('pid', reply2)
self.assertIsInstance(reply2['pid'], int)
reply = p.list_processes()
self.assertEqual(2, len(list(reply.keys())))
self.assertEqual(['write3.0', 'write3.1'], sorted(reply.keys()))
self.assertIn('pid', list(reply.values())[0])
self.assertIn('pid', list(reply.values())[1])
self.assertNotEqual(list(reply.values())[0]['pid'], list(reply.values())[1]['pid'])
with p.watch_processes('write3.*') as w:
select.select([w], [], [])
self.assertTrue(w.ready)
out, err, close = self.gather_output(w)
exit_code0 = w.exit_code['write3.0']
exit_code1 = w.exit_code['write3.1']
self.assertEqual(6, len(out))
self.assertEqual(2, len(err))
self.assertEqual(2, len(close))
self.assertEqual(3, len([item for item in out if item.name == 'write3.0']))
self.assertEqual(3, len([item for item in out if item.name == 'write3.1']))
self.assertEqual(1, len([item for item in err if item.name == 'write3.0']))
self.assertEqual(1, len([item for item in err if item.name == 'write3.1']))
self.assertEqual(1, len([item for item in close if item.name == 'write3.0']))
self.assertEqual(1, len([item for item in close if item.name == 'write3.1']))
self.assertEqual(b'done', err[0].data)
self.assertEqual(b'done', err[1].data)
self.assertEqual(0, close[0].data)
self.assertEqual(0, exit_code0)
self.assertEqual(0, close[1].data)
self.assertEqual(0, exit_code1)
self.assertDictEqual({}, p.list_processes())
def test_two_list_processes_wait_for_one_to_close(self):
with papa.Papa() as p:
f = p.fileno()
self.assertDictEqual({}, p.list_processes())
reply1 = p.make_process('write3.0', sys.executable, args='executables/write_three_lines.py', working_dir=here, uid=os.environ['LOGNAME'], env=os.environ)
self.assertIn('pid', reply1)
self.assertIsInstance(reply1['pid'], int)
sleep(.2)
reply2 = p.make_process('write3.1', sys.executable, args='executables/write_three_lines.py', working_dir=here, uid=os.environ['LOGNAME'], env=os.environ)
self.assertIn('pid', reply2)
self.assertIsInstance(reply2['pid'], int)
reply = p.list_processes()
self.assertEqual(2, len(list(reply.keys())))
self.assertEqual(['write3.0', 'write3.1'], sorted(reply.keys()))
self.assertIn('pid', list(reply.values())[0])
self.assertIn('pid', list(reply.values())[1])
self.assertNotEqual(list(reply.values())[0]['pid'], list(reply.values())[1]['pid'])
with p.watch_processes('write3.*') as w:
while True:
out, err, close = w.read()
if close:
break
reply = p.list_processes()
self.assertEqual(1, len(list(reply.keys())))
self.assertEqual('write3.1', list(reply.keys())[0])
self.assertEqual(f, p.fileno())
def test_multiple_watchers(self):
with papa.Papa() as p:
f = p.fileno()
self.assertDictEqual({}, p.list_processes())
reply1 = p.make_process('write3.0', sys.executable, args='executables/write_three_lines.py', working_dir=here, uid=os.environ['LOGNAME'], env=os.environ)
self.assertIn('pid', reply1)
self.assertIsInstance(reply1['pid'], int)
reply2 = p.make_process('write3.1', sys.executable, args='executables/write_three_lines.py', working_dir=here, uid=os.environ['LOGNAME'], env=os.environ)
self.assertIn('pid', reply2)
self.assertIsInstance(reply2['pid'], int)
reply = p.list_processes()
self.assertEqual(2, len(list(reply.keys())))
self.assertEqual(['write3.0', 'write3.1'], sorted(reply.keys()))
self.assertIn('pid', list(reply.values())[0])
self.assertIn('pid', list(reply.values())[1])
self.assertNotEqual(list(reply.values())[0]['pid'], list(reply.values())[1]['pid'])
w1 = p.watch_processes('write3.0')
self.assertEqual(f, w1.fileno())
w2 = p.watch_processes('write3.1')
self.assertNotEqual(f, w2.fileno())
p.set('p1', 't1')
self.assertNotEqual(f, p.fileno())
self.assertNotEqual(p.fileno(), w1.fileno())
self.assertNotEqual(p.fileno(), w2.fileno())
out1, err1, close1 = self.gather_output(w1)
out2, err2, close2 = self.gather_output(w2)
w1.close()
w2.close()
self.assertEqual(3, len(out1))
self.assertEqual(1, len(err1))
self.assertEqual(1, len(close1))
self.assertEqual('write3.0', out1[0].name)
self.assertEqual('write3.0', out1[1].name)
self.assertEqual('write3.0', out1[2].name)
self.assertEqual('write3.0', err1[0].name)
self.assertEqual('write3.0', close1[0].name)
self.assertLess(out1[0].timestamp, out1[1].timestamp)
self.assertLess(out1[1].timestamp, out1[2].timestamp)
self.assertLessEqual(out1[2].timestamp, err1[0].timestamp)
self.assertLessEqual(out1[2].timestamp, close1[0].timestamp)
self.assertLessEqual(err1[0].timestamp, close1[0].timestamp)
self.assertEqual(b'Version: ' + cast_bytes(sys.version.partition(' ')[0]) + b'\n', out1[0].data)
self.assertEqual(b'Executable: ' + cast_bytes(sys.executable) + b'\n', out1[1].data)
self.assertEqual(b'Args: \n', out1[2].data)
self.assertEqual(b'done', err1[0].data)
self.assertEqual(0, close1[0].data)
self.assertEqual(3, len(out2))
self.assertEqual(1, len(err2))
self.assertEqual(1, len(close2))
self.assertEqual('write3.1', out2[0].name)
self.assertEqual('write3.1', out2[1].name)
self.assertEqual('write3.1', out2[2].name)
self.assertEqual('write3.1', err2[0].name)
self.assertEqual('write3.1', close2[0].name)
self.assertLess(out2[0].timestamp, out2[1].timestamp)
self.assertLess(out2[1].timestamp, out2[2].timestamp)
self.assertLessEqual(out2[2].timestamp, err2[0].timestamp)
self.assertLessEqual(out2[2].timestamp, close2[0].timestamp)
self.assertLessEqual(err2[0].timestamp, close2[0].timestamp)
self.assertEqual(b'Version: ' + cast_bytes(sys.version.partition(' ')[0]) + b'\n', out2[0].data)
self.assertEqual(b'Executable: ' + cast_bytes(sys.executable) + b'\n', out2[1].data)
self.assertEqual(b'Args: \n', out2[2].data)
self.assertEqual(b'done', err2[0].data)
self.assertEqual(0, close2[0].data)
self.assertDictEqual({}, p.list_processes())
self.assertEqual('t1', p.get('p1'))
def test_process_with_small_buffer(self):
with papa.Papa() as p:
self.assertDictEqual({}, p.list_processes())
reply1 = p.make_process('write3', sys.executable, args='executables/write_three_lines.py', working_dir=here, uid=os.environ['LOGNAME'], env=os.environ, bufsize=14)
self.assertIn('pid', reply1)
self.assertIsInstance(reply1['pid'], int)
reply = p.list_processes()
self.assertEqual(1, len(list(reply.keys())))
self.assertEqual('write3', list(reply.keys())[0])
self.assertIn('pid', list(reply.values())[0])
sleep(.7)
with p.watch_processes('write*') as w:
select.select([w], [], [])
self.assertTrue(w.ready)
out, err, close = self.gather_output(w)
exit_code = w.exit_code['write3']
self.assertEqual(1, len(out))
self.assertEqual(1, len(err))
self.assertEqual(1, len(close))
self.assertEqual('write3', out[0].name)
self.assertEqual('write3', err[0].name)
self.assertEqual('write3', close[0].name)
self.assertLessEqual(out[0].timestamp, err[0].timestamp)
self.assertLessEqual(out[0].timestamp, close[0].timestamp)
self.assertLessEqual(err[0].timestamp, close[0].timestamp)
self.assertEqual(b'Args: \n', out[0].data)
self.assertEqual(b'done', err[0].data)
self.assertEqual(0, close[0].data)
self.assertEqual(0, exit_code)
self.assertDictEqual({}, p.list_processes())
def test_one_process_two_parallel_watchers(self):
with papa.Papa() as p:
self.assertDictEqual({}, p.list_processes())
reply = p.make_process('write3', sys.executable, args='executables/write_three_lines.py', working_dir=here, uid=os.environ['LOGNAME'], env=os.environ)
self.assertIn('pid', reply)
self.assertIn('running', reply)
self.assertIsInstance(reply['pid'], int)
self.assertIsInstance(reply['running'], bool)
w1 = p.watch_processes('write*')
w2 = p.watch_processes('write*')
out1, err1, close1 = w1.read()
if isdebugging and not out1 and err1 and err1[0].data.startswith('pydev debugger'):
out1, err1, close1 = w1.read()
out2, err2, close2 = w2.read()
self.assertEqual(out1[0], out2[0])
w1.close()
w2.close()
def test_one_process_two_serial_watchers(self):
with papa.Papa() as p:
self.assertDictEqual({}, p.list_processes())
reply = p.make_process('write3', sys.executable, args='executables/write_three_lines.py', working_dir=here, uid=os.environ['LOGNAME'], env=os.environ)
self.assertIn('pid', reply)
self.assertIn('running', reply)
self.assertIsInstance(reply['pid'], int)
self.assertIsInstance(reply['running'], bool)
with p.watch_processes('write*') as w:
out1, err1, close1 = w.read()
if isdebugging and not out1 and err1 and err1[0].data.startswith('pydev debugger'):
out1, err1, close1 = w.read()
with p.watch_processes('write*') as w:
out2, err2, close2 = self.gather_output(w)
self.assertLess(out1[0].timestamp, out2[0].timestamp)
def test_echo_server_with_normal_socket(self):
with papa.Papa() as p:
reply = p.make_socket('echo_socket')
self.assertIn('port', reply)
self.assertIn('fileno', reply)
port = reply['port']
reply = p.make_process('echo1', sys.executable, args=('executables/echo_server.py', '$(socket.echo_socket.fileno)'), working_dir=here)
self.assertIn('pid', reply)
s = socket.socket()
s.connect(('127.0.0.1', port))
s.send(b'test\n')
msg = b''
while len(msg) < 5:
msg += s.recv(5)
self.assertEqual(b'test\n', msg)
s.send(b'and do some more\n')
msg = b''
while len(msg) < 17:
msg += s.recv(17)
self.assertEqual(b'and do some more\n', msg)
s.close()
with p.watch_processes('echo*') as w:
out, err, close = self.gather_output(w)
self.assertEqual(b'test\nand do some more\n', out[0].data)
def test_echo_server_with_echo_client(self):
with papa.Papa() as p:
reply = p.make_socket('echo_socket')
self.assertIn('port', reply)
self.assertIn('fileno', reply)
reply = p.make_process('echo.server', sys.executable, args=('executables/echo_server.py', '$(socket.echo_socket.fileno)'), working_dir=here)
self.assertIn('pid', reply)
reply = p.make_process('echo.client', sys.executable, args=('executables/echo_client.py', '$(socket.echo_socket.port)'), working_dir=here)
self.assertIn('pid', reply)
with p.watch_processes('echo.*') as w:
out, err, close = self.gather_output(w)
self.assertEqual(2, len(out))
self.assertEqual(2, len(close))
self.assertEqual(b'howdy\n', out[0].data)
self.assertEqual(b'howdy\n', out[1].data)
self.assertIn(out[0].name, ('echo.client', 'echo.server'))
self.assertIn(out[1].name, ('echo.client', 'echo.server'))
self.assertNotEqual(out[0].name, out[1].name)
def test_echo_server_with_reuseport(self):
with papa.Papa() as p:
reply = p.make_socket('echo_socket', reuseport=True)
self.assertIn('port', reply)
port = reply['port']
reply = p.make_process('echo1', sys.executable, args=('executables/echo_server.py', '$(socket.echo_socket.fileno)'), working_dir=here)
self.assertIn('pid', reply)
s = socket.socket()
s.connect(('127.0.0.1', port))
s.send(b'test\n')
msg = b''
while len(msg) < 5:
msg += s.recv(5)
self.assertEqual(b'test\n', msg)
s.send(b'and do some more\n')
msg = b''
while len(msg) < 17:
msg += s.recv(17)
self.assertEqual(b'and do some more\n', msg)
s.close()
with p.watch_processes('echo*') as w:
out, err, close = self.gather_output(w)
self.assertEqual(b'test\nand do some more\n', out[0].data)
def test_process_with_close_output_late(self):
with papa.Papa() as p:
self.assertDictEqual({}, p.list_processes())
socket_reply = p.make_socket('echo_socket')
p.make_process('echo', sys.executable, args=('executables/echo_server.py', '$(socket.echo_socket.fileno)'), working_dir=here)
s = socket.socket()
s.connect(('127.0.0.1', socket_reply['port']))
s.send(b'test\n')
msg = b''
while len(msg) < 5:
msg += s.recv(5)
self.assertEqual(b'test\n', msg)
s.close()
sleep(.4)
p.remove_processes('echo')
self.assertDictEqual({}, p.list_processes())
def test_process_with_close_output_early(self):
with papa.Papa() as p:
self.assertDictEqual({}, p.list_processes())
socket_reply = p.make_socket('echo_socket')
p.make_process('echo', sys.executable, args=('executables/echo_server.py', '$(socket.echo_socket.fileno)'), working_dir=here)
p.remove_processes('echo')
s = socket.socket()
s.connect(('127.0.0.1', socket_reply['port']))
s.send(b'test\n')
msg = b''
while len(msg) < 5:
msg += s.recv(5)
self.assertEqual(b'test\n', msg)
s.close()
sleep(.4)
self.assertDictEqual({}, p.list_processes())
def test_bad_socket_reference(self):
with papa.Papa() as p:
self.assertRaises(papa.Error, p.make_process, 'bad', sys.executable, args=('executables/echo_server.py', '$(socket.echo_socket.fileno)'), working_dir=here)
def test_bad_process(self):
with papa.Papa() as p:
self.assertRaises(papa.Error, p.make_process, 'bad', sys.executable + '-blah')
def test_bad_working_dir(self):
with papa.Papa() as p:
self.assertRaises(papa.Error, p.make_process, 'bad', sys.executable, working_dir=here + '-blah')
if __name__ == '__main__':
papa.set_default_port(randint(20000, 21000))
unittest.main()
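The echo-server tests above reassemble replies with a `while len(msg) < n:` loop because a single TCP `recv()` may return fewer bytes than requested. The pattern can be exercised standalone with a stdlib socket pair (the `recv_exactly` helper name is ours, not part of the papa API):

```python
import socket


def recv_exactly(sock, n):
    """Read until n bytes have arrived; one recv() may return fewer."""
    msg = b''
    while len(msg) < n:
        chunk = sock.recv(n - len(msg))
        if not chunk:  # peer closed the connection early
            break
        msg += chunk
    return msg


a, b = socket.socketpair()
a.sendall(b'test\n')
received = recv_exactly(b, 5)
a.close()
b.close()
```

Requesting `n - len(msg)` on each iteration avoids reading past the expected message, which matters when the peer keeps the connection open.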
# python/paddle/v2/fluid/tests/test_sigmoid_cross_entropy_with_logits_op.py
import numpy as np
from op_test import OpTest
from scipy.special import logit
from scipy.special import expit
import unittest


class TestSigmoidCrossEntropyWithLogitsOp1(OpTest):
    """Test sigmoid_cross_entropy_with_logits op with binary label
    """

    def setUp(self):
        self.op_type = "sigmoid_cross_entropy_with_logits"
        batch_size = 64
        num_classes = 20
        self.inputs = {
            'X': logit(
                np.random.uniform(0, 1, (batch_size, num_classes))
                .astype("float32")),
            'Label': np.random.randint(0, 2, (batch_size, num_classes))
            .astype("float32")
        }

        # Forward pass is implemented as elementwise sigmoid followed by
        # elementwise logistic loss:
        # Label * -log(sigmoid(X)) + (1 - Label) * -log(1 - sigmoid(X))
        sigmoid_X = expit(self.inputs['X'])
        term1 = self.inputs['Label'] * np.log(sigmoid_X)
        term2 = (1 - self.inputs['Label']) * np.log(1 - sigmoid_X)
        self.outputs = {'Out': -term1 - term2}

    def test_check_output(self):
        self.check_output()

    def test_check_grad(self):
        self.check_grad(['X'], 'Out')


class TestSigmoidCrossEntropyWithLogitsOp2(OpTest):
    """Test sigmoid_cross_entropy_with_logits op with probabilistic label
    """

    def setUp(self):
        self.op_type = "sigmoid_cross_entropy_with_logits"
        batch_size = 64
        num_classes = 20
        self.inputs = {
            'X': logit(
                np.random.uniform(0, 1, (batch_size, num_classes))
                .astype("float32")),
            'Label': np.random.uniform(0, 1, (batch_size, num_classes))
            .astype("float32")
        }

        # Forward pass is implemented as elementwise sigmoid followed by
        # elementwise logistic loss:
        # Label * -log(sigmoid(X)) + (1 - Label) * -log(1 - sigmoid(X))
        sigmoid_X = expit(self.inputs['X'])
        term1 = self.inputs['Label'] * np.log(sigmoid_X)
        term2 = (1 - self.inputs['Label']) * np.log(1 - sigmoid_X)
        self.outputs = {'Out': -term1 - term2}

    def test_check_output(self):
        self.check_output()

    def test_check_grad(self):
        self.check_grad(['X'], 'Out')


if __name__ == '__main__':
    unittest.main()
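Both test cases assert the same closed-form loss; it can be reproduced outside the OpTest harness with SciPy alone. The helper below is a sketch of the reference computation from `setUp`, not the Paddle operator itself:

```python
import numpy as np
from scipy.special import expit, logit


def reference_loss(x, label):
    """Label * -log(sigmoid(x)) + (1 - label) * -log(1 - sigmoid(x))."""
    sigmoid_x = expit(x)  # numerically stable sigmoid
    return -label * np.log(sigmoid_x) - (1 - label) * np.log(1 - sigmoid_x)


# logit() inverts expit(), so feeding logit(p) makes the loss -log(p) when
# label == 1 and -log(1 - p) when label == 0.
x = logit(np.array([0.5, 0.75]))
loss = reference_loss(x, np.array([1.0, 0.0]))
```

Here `loss` works out to `[-log(0.5), -log(0.25)]`, i.e. roughly `[0.693, 1.386]`.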
# napalm_yang/models/openconfig/network_instances/network_instance/protocols/protocol/bgp/neighbors/neighbor/transport/config/__init__.py
# -*- coding: utf-8 -*-
from operator import attrgetter
from pyangbind.lib.yangtypes import RestrictedPrecisionDecimalType
from pyangbind.lib.yangtypes import RestrictedClassType
from pyangbind.lib.yangtypes import TypedListType
from pyangbind.lib.yangtypes import YANGBool
from pyangbind.lib.yangtypes import YANGListType
from pyangbind.lib.yangtypes import YANGDynClass
from pyangbind.lib.yangtypes import ReferenceType
from pyangbind.lib.base import PybindBase
from collections import OrderedDict
from decimal import Decimal
from bitarray import bitarray
import six
# PY3 support of some PY2 keywords (needs improvement)
if six.PY3:
import builtins as __builtin__
long = int
elif six.PY2:
import __builtin__
class config(PybindBase):
"""
This class was auto-generated by the PythonClass plugin for PYANG
from YANG module openconfig-network-instance - based on the path /network-instances/network-instance/protocols/protocol/bgp/neighbors/neighbor/transport/config. Each member element of
the container is represented as a class variable - with a specific
YANG type.
YANG Description: Configuration parameters relating to the transport
session(s) used for the BGP neighbor
"""
__slots__ = (
"_path_helper",
"_extmethods",
"__tcp_mss",
"__mtu_discovery",
"__passive_mode",
"__local_address",
)
_yang_name = "config"
_pybind_generated_by = "container"
def __init__(self, *args, **kwargs):
self._path_helper = False
self._extmethods = False
self.__tcp_mss = YANGDynClass(
base=RestrictedClassType(
base_type=int, restriction_dict={"range": ["0..65535"]}, int_size=16
),
is_leaf=True,
yang_name="tcp-mss",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="uint16",
is_config=True,
)
self.__mtu_discovery = YANGDynClass(
base=YANGBool,
default=YANGBool("false"),
is_leaf=True,
yang_name="mtu-discovery",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="boolean",
is_config=True,
)
self.__passive_mode = YANGDynClass(
base=YANGBool,
default=YANGBool("false"),
is_leaf=True,
yang_name="passive-mode",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="boolean",
is_config=True,
)
self.__local_address = YANGDynClass(
base=[
RestrictedClassType(
base_type=six.text_type,
restriction_dict={
"pattern": "^(([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])$"
},
),
RestrictedClassType(
base_type=six.text_type,
restriction_dict={
"pattern": "^(([0-9a-fA-F]{1,4}:){7}[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,7}:|([0-9a-fA-F]{1,4}:){1,6}:[0-9a-fA-F]{1,4}([0-9a-fA-F]{1,4}:){1,5}(:[0-9a-fA-F]{1,4}){1,2}|([0-9a-fA-F]{1,4}:){1,4}(:[0-9a-fA-F]{1,4}){1,3}|([0-9a-fA-F]{1,4}:){1,3}(:[0-9a-fA-F]{1,4}){1,4}|([0-9a-fA-F]{1,4}:){1,2}(:[0-9a-fA-F]{1,4}){1,5}|[0-9a-fA-F]{1,4}:((:[0-9a-fA-F]{1,4}){1,6})|:((:[0-9a-fA-F]{1,4}){1,7}|:))$"
},
),
six.text_type,
],
is_leaf=True,
yang_name="local-address",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="union",
is_config=True,
)
load = kwargs.pop("load", None)
if args:
if len(args) > 1:
raise TypeError("cannot create a YANG container with >1 argument")
all_attr = True
for e in self._pyangbind_elements:
if not hasattr(args[0], e):
all_attr = False
break
if not all_attr:
raise ValueError("Supplied object did not have the correct attributes")
for e in self._pyangbind_elements:
nobj = getattr(args[0], e)
if nobj._changed() is False:
continue
setmethod = getattr(self, "_set_%s" % e)
if load is None:
setmethod(getattr(args[0], e))
else:
setmethod(getattr(args[0], e), load=load)
def _path(self):
if hasattr(self, "_parent"):
return self._parent._path() + [self._yang_name]
else:
return [
"network-instances",
"network-instance",
"protocols",
"protocol",
"bgp",
"neighbors",
"neighbor",
"transport",
"config",
]
def _get_tcp_mss(self):
"""
Getter method for tcp_mss, mapped from YANG variable /network_instances/network_instance/protocols/protocol/bgp/neighbors/neighbor/transport/config/tcp_mss (uint16)
YANG Description: Sets the max segment size for BGP TCP sessions.
"""
return self.__tcp_mss
def _set_tcp_mss(self, v, load=False):
"""
Setter method for tcp_mss, mapped from YANG variable /network_instances/network_instance/protocols/protocol/bgp/neighbors/neighbor/transport/config/tcp_mss (uint16)
If this variable is read-only (config: false) in the
source YANG file, then _set_tcp_mss is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_tcp_mss() directly.
YANG Description: Sets the max segment size for BGP TCP sessions.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(
v,
base=RestrictedClassType(
base_type=int, restriction_dict={"range": ["0..65535"]}, int_size=16
),
is_leaf=True,
yang_name="tcp-mss",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="uint16",
is_config=True,
)
except (TypeError, ValueError):
raise ValueError(
{
"error-string": """tcp_mss must be of a type compatible with uint16""",
"defined-type": "uint16",
"generated-type": """YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..65535']},int_size=16), is_leaf=True, yang_name="tcp-mss", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint16', is_config=True)""",
}
)
self.__tcp_mss = t
if hasattr(self, "_set"):
self._set()
def _unset_tcp_mss(self):
self.__tcp_mss = YANGDynClass(
base=RestrictedClassType(
base_type=int, restriction_dict={"range": ["0..65535"]}, int_size=16
),
is_leaf=True,
yang_name="tcp-mss",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="uint16",
is_config=True,
)
def _get_mtu_discovery(self):
"""
Getter method for mtu_discovery, mapped from YANG variable /network_instances/network_instance/protocols/protocol/bgp/neighbors/neighbor/transport/config/mtu_discovery (boolean)
YANG Description: Turns path mtu discovery for BGP TCP sessions on (true)
or off (false)
"""
return self.__mtu_discovery
def _set_mtu_discovery(self, v, load=False):
"""
Setter method for mtu_discovery, mapped from YANG variable /network_instances/network_instance/protocols/protocol/bgp/neighbors/neighbor/transport/config/mtu_discovery (boolean)
If this variable is read-only (config: false) in the
source YANG file, then _set_mtu_discovery is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_mtu_discovery() directly.
YANG Description: Turns path mtu discovery for BGP TCP sessions on (true)
or off (false)
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(
v,
base=YANGBool,
default=YANGBool("false"),
is_leaf=True,
yang_name="mtu-discovery",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="boolean",
is_config=True,
)
except (TypeError, ValueError):
raise ValueError(
{
"error-string": """mtu_discovery must be of a type compatible with boolean""",
"defined-type": "boolean",
"generated-type": """YANGDynClass(base=YANGBool, default=YANGBool("false"), is_leaf=True, yang_name="mtu-discovery", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='boolean', is_config=True)""",
}
)
self.__mtu_discovery = t
if hasattr(self, "_set"):
self._set()
def _unset_mtu_discovery(self):
self.__mtu_discovery = YANGDynClass(
base=YANGBool,
default=YANGBool("false"),
is_leaf=True,
yang_name="mtu-discovery",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="boolean",
is_config=True,
)
def _get_passive_mode(self):
"""
Getter method for passive_mode, mapped from YANG variable /network_instances/network_instance/protocols/protocol/bgp/neighbors/neighbor/transport/config/passive_mode (boolean)
YANG Description: Wait for peers to issue requests to open a BGP session,
rather than initiating sessions from the local router.
"""
return self.__passive_mode
def _set_passive_mode(self, v, load=False):
"""
Setter method for passive_mode, mapped from YANG variable /network_instances/network_instance/protocols/protocol/bgp/neighbors/neighbor/transport/config/passive_mode (boolean)
If this variable is read-only (config: false) in the
source YANG file, then _set_passive_mode is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_passive_mode() directly.
YANG Description: Wait for peers to issue requests to open a BGP session,
rather than initiating sessions from the local router.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(
v,
base=YANGBool,
default=YANGBool("false"),
is_leaf=True,
yang_name="passive-mode",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="boolean",
is_config=True,
)
except (TypeError, ValueError):
raise ValueError(
{
"error-string": """passive_mode must be of a type compatible with boolean""",
"defined-type": "boolean",
"generated-type": """YANGDynClass(base=YANGBool, default=YANGBool("false"), is_leaf=True, yang_name="passive-mode", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='boolean', is_config=True)""",
}
)
self.__passive_mode = t
if hasattr(self, "_set"):
self._set()
def _unset_passive_mode(self):
self.__passive_mode = YANGDynClass(
base=YANGBool,
default=YANGBool("false"),
is_leaf=True,
yang_name="passive-mode",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="boolean",
is_config=True,
)
def _get_local_address(self):
"""
Getter method for local_address, mapped from YANG variable /network_instances/network_instance/protocols/protocol/bgp/neighbors/neighbor/transport/config/local_address (union)
YANG Description: Set the local IP (either IPv4 or IPv6) address to use
for the session when sending BGP update messages. This
may be expressed as either an IP address or reference
to the name of an interface.
"""
return self.__local_address
def _set_local_address(self, v, load=False):
"""
Setter method for local_address, mapped from YANG variable /network_instances/network_instance/protocols/protocol/bgp/neighbors/neighbor/transport/config/local_address (union)
If this variable is read-only (config: false) in the
source YANG file, then _set_local_address is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_local_address() directly.
YANG Description: Set the local IP (either IPv4 or IPv6) address to use
for the session when sending BGP update messages. This
may be expressed as either an IP address or reference
to the name of an interface.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(
v,
base=[
RestrictedClassType(
base_type=six.text_type,
restriction_dict={
"pattern": "^(([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])$"
},
),
RestrictedClassType(
base_type=six.text_type,
restriction_dict={
"pattern": "^(([0-9a-fA-F]{1,4}:){7}[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,7}:|([0-9a-fA-F]{1,4}:){1,6}:[0-9a-fA-F]{1,4}([0-9a-fA-F]{1,4}:){1,5}(:[0-9a-fA-F]{1,4}){1,2}|([0-9a-fA-F]{1,4}:){1,4}(:[0-9a-fA-F]{1,4}){1,3}|([0-9a-fA-F]{1,4}:){1,3}(:[0-9a-fA-F]{1,4}){1,4}|([0-9a-fA-F]{1,4}:){1,2}(:[0-9a-fA-F]{1,4}){1,5}|[0-9a-fA-F]{1,4}:((:[0-9a-fA-F]{1,4}){1,6})|:((:[0-9a-fA-F]{1,4}){1,7}|:))$"
},
),
six.text_type,
],
is_leaf=True,
yang_name="local-address",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="union",
is_config=True,
)
except (TypeError, ValueError):
raise ValueError(
{
"error-string": """local_address must be of a type compatible with union""",
"defined-type": "openconfig-network-instance:union",
"generated-type": """YANGDynClass(base=[RestrictedClassType(base_type=six.text_type, restriction_dict={'pattern': '^(([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])$'}),RestrictedClassType(base_type=six.text_type, restriction_dict={'pattern': '^(([0-9a-fA-F]{1,4}:){7}[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,7}:|([0-9a-fA-F]{1,4}:){1,6}:[0-9a-fA-F]{1,4}([0-9a-fA-F]{1,4}:){1,5}(:[0-9a-fA-F]{1,4}){1,2}|([0-9a-fA-F]{1,4}:){1,4}(:[0-9a-fA-F]{1,4}){1,3}|([0-9a-fA-F]{1,4}:){1,3}(:[0-9a-fA-F]{1,4}){1,4}|([0-9a-fA-F]{1,4}:){1,2}(:[0-9a-fA-F]{1,4}){1,5}|[0-9a-fA-F]{1,4}:((:[0-9a-fA-F]{1,4}){1,6})|:((:[0-9a-fA-F]{1,4}){1,7}|:))$'}),six.text_type,], is_leaf=True, yang_name="local-address", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='union', is_config=True)""",
}
)
self.__local_address = t
if hasattr(self, "_set"):
self._set()
def _unset_local_address(self):
self.__local_address = YANGDynClass(
base=[
RestrictedClassType(
base_type=six.text_type,
restriction_dict={
"pattern": "^(([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])$"
},
),
RestrictedClassType(
base_type=six.text_type,
restriction_dict={
"pattern": "^(([0-9a-fA-F]{1,4}:){7}[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,7}:|([0-9a-fA-F]{1,4}:){1,6}:[0-9a-fA-F]{1,4}([0-9a-fA-F]{1,4}:){1,5}(:[0-9a-fA-F]{1,4}){1,2}|([0-9a-fA-F]{1,4}:){1,4}(:[0-9a-fA-F]{1,4}){1,3}|([0-9a-fA-F]{1,4}:){1,3}(:[0-9a-fA-F]{1,4}){1,4}|([0-9a-fA-F]{1,4}:){1,2}(:[0-9a-fA-F]{1,4}){1,5}|[0-9a-fA-F]{1,4}:((:[0-9a-fA-F]{1,4}){1,6})|:((:[0-9a-fA-F]{1,4}){1,7}|:))$"
},
),
six.text_type,
],
is_leaf=True,
yang_name="local-address",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="union",
is_config=True,
)
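The `local-address` union above accepts an IPv4 dotted-quad, an IPv6 address, or a free-form interface name. The dotted-quad branch can be checked directly with `re` (same pattern as the restriction dict, written as a raw string):

```python
import re

# IPv4 restriction pattern from the local-address union above
IPV4 = (r"^(([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\.){3}"
        r"([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])$")

valid = bool(re.match(IPV4, "192.0.2.1"))    # every octet is in 0-255
invalid = bool(re.match(IPV4, "256.0.2.1"))  # 256 matches no octet alternative
```

Because the union falls through to plain `six.text_type`, a string that fails both address patterns is still accepted as an interface name rather than rejected.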
tcp_mss = __builtin__.property(_get_tcp_mss, _set_tcp_mss)
mtu_discovery = __builtin__.property(_get_mtu_discovery, _set_mtu_discovery)
passive_mode = __builtin__.property(_get_passive_mode, _set_passive_mode)
local_address = __builtin__.property(_get_local_address, _set_local_address)
_pyangbind_elements = OrderedDict(
[
("tcp_mss", tcp_mss),
("mtu_discovery", mtu_discovery),
("passive_mode", passive_mode),
("local_address", local_address),
]
)
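Each leaf ends up exposed as an ordinary Python property over its `_get_*`/`_set_*` pair, with the setter enforcing the YANG type. Stripped of the pyangbind machinery, the shape is roughly as follows (a stdlib-only sketch, not the generated code):

```python
class SketchConfig:
    """Minimal stand-in for the generated config container."""

    def __init__(self):
        self._tcp_mss = 0

    def _get_tcp_mss(self):
        return self._tcp_mss

    def _set_tcp_mss(self, v):
        # mirrors the uint16 range restriction ('range': ['0..65535'])
        v = int(v)
        if not 0 <= v <= 65535:
            raise ValueError("tcp_mss must be of a type compatible with uint16")
        self._tcp_mss = v

    tcp_mss = property(_get_tcp_mss, _set_tcp_mss)


cfg = SketchConfig()
cfg.tcp_mss = 1448  # routed through _set_tcp_mss by the property
```

In the generated class the same wiring is done with `__builtin__.property(...)` so that assignment to `cfg.tcp_mss` triggers the type-checked setter.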
class config(PybindBase):
"""
This class was auto-generated by the PythonClass plugin for PYANG
from YANG module openconfig-network-instance-l2 - based on the path /network-instances/network-instance/protocols/protocol/bgp/neighbors/neighbor/transport/config. Each member element of
the container is represented as a class variable - with a specific
YANG type.
YANG Description: Configuration parameters relating to the transport
session(s) used for the BGP neighbor
"""
__slots__ = (
"_path_helper",
"_extmethods",
"__tcp_mss",
"__mtu_discovery",
"__passive_mode",
"__local_address",
)
_yang_name = "config"
_pybind_generated_by = "container"
def __init__(self, *args, **kwargs):
self._path_helper = False
self._extmethods = False
self.__tcp_mss = YANGDynClass(
base=RestrictedClassType(
base_type=int, restriction_dict={"range": ["0..65535"]}, int_size=16
),
is_leaf=True,
yang_name="tcp-mss",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="uint16",
is_config=True,
)
self.__mtu_discovery = YANGDynClass(
base=YANGBool,
default=YANGBool("false"),
is_leaf=True,
yang_name="mtu-discovery",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="boolean",
is_config=True,
)
self.__passive_mode = YANGDynClass(
base=YANGBool,
default=YANGBool("false"),
is_leaf=True,
yang_name="passive-mode",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="boolean",
is_config=True,
)
self.__local_address = YANGDynClass(
base=[
RestrictedClassType(
base_type=six.text_type,
restriction_dict={
"pattern": "^(([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])$"
},
),
RestrictedClassType(
base_type=six.text_type,
restriction_dict={
"pattern": "^(([0-9a-fA-F]{1,4}:){7}[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,7}:|([0-9a-fA-F]{1,4}:){1,6}:[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,5}(:[0-9a-fA-F]{1,4}){1,2}|([0-9a-fA-F]{1,4}:){1,4}(:[0-9a-fA-F]{1,4}){1,3}|([0-9a-fA-F]{1,4}:){1,3}(:[0-9a-fA-F]{1,4}){1,4}|([0-9a-fA-F]{1,4}:){1,2}(:[0-9a-fA-F]{1,4}){1,5}|[0-9a-fA-F]{1,4}:((:[0-9a-fA-F]{1,4}){1,6})|:((:[0-9a-fA-F]{1,4}){1,7}|:))$"
},
),
six.text_type,
],
is_leaf=True,
yang_name="local-address",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="union",
is_config=True,
)
load = kwargs.pop("load", None)
if args:
if len(args) > 1:
raise TypeError("cannot create a YANG container with >1 argument")
all_attr = True
for e in self._pyangbind_elements:
if not hasattr(args[0], e):
all_attr = False
break
if not all_attr:
raise ValueError("Supplied object did not have the correct attributes")
for e in self._pyangbind_elements:
nobj = getattr(args[0], e)
if nobj._changed() is False:
continue
setmethod = getattr(self, "_set_%s" % e)
if load is None:
setmethod(getattr(args[0], e))
else:
setmethod(getattr(args[0], e), load=load)
def _path(self):
if hasattr(self, "_parent"):
return self._parent._path() + [self._yang_name]
else:
return [
"network-instances",
"network-instance",
"protocols",
"protocol",
"bgp",
"neighbors",
"neighbor",
"transport",
"config",
]
def _get_tcp_mss(self):
"""
Getter method for tcp_mss, mapped from YANG variable /network_instances/network_instance/protocols/protocol/bgp/neighbors/neighbor/transport/config/tcp_mss (uint16)
YANG Description: Sets the max segment size for BGP TCP sessions.
"""
return self.__tcp_mss
def _set_tcp_mss(self, v, load=False):
"""
Setter method for tcp_mss, mapped from YANG variable /network_instances/network_instance/protocols/protocol/bgp/neighbors/neighbor/transport/config/tcp_mss (uint16)
If this variable is read-only (config: false) in the
source YANG file, then _set_tcp_mss is considered a private
method. Backends looking to populate this variable should
do so by calling thisObj._set_tcp_mss() directly.
YANG Description: Sets the max segment size for BGP TCP sessions.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(
v,
base=RestrictedClassType(
base_type=int, restriction_dict={"range": ["0..65535"]}, int_size=16
),
is_leaf=True,
yang_name="tcp-mss",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="uint16",
is_config=True,
)
except (TypeError, ValueError):
raise ValueError(
{
"error-string": """tcp_mss must be of a type compatible with uint16""",
"defined-type": "uint16",
"generated-type": """YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..65535']},int_size=16), is_leaf=True, yang_name="tcp-mss", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint16', is_config=True)""",
}
)
self.__tcp_mss = t
if hasattr(self, "_set"):
self._set()
def _unset_tcp_mss(self):
self.__tcp_mss = YANGDynClass(
base=RestrictedClassType(
base_type=int, restriction_dict={"range": ["0..65535"]}, int_size=16
),
is_leaf=True,
yang_name="tcp-mss",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="uint16",
is_config=True,
)
def _get_mtu_discovery(self):
"""
Getter method for mtu_discovery, mapped from YANG variable /network_instances/network_instance/protocols/protocol/bgp/neighbors/neighbor/transport/config/mtu_discovery (boolean)
YANG Description: Turns path mtu discovery for BGP TCP sessions on (true)
or off (false)
"""
return self.__mtu_discovery
def _set_mtu_discovery(self, v, load=False):
"""
Setter method for mtu_discovery, mapped from YANG variable /network_instances/network_instance/protocols/protocol/bgp/neighbors/neighbor/transport/config/mtu_discovery (boolean)
If this variable is read-only (config: false) in the
source YANG file, then _set_mtu_discovery is considered a private
method. Backends looking to populate this variable should
do so by calling thisObj._set_mtu_discovery() directly.
YANG Description: Turns path mtu discovery for BGP TCP sessions on (true)
or off (false)
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(
v,
base=YANGBool,
default=YANGBool("false"),
is_leaf=True,
yang_name="mtu-discovery",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="boolean",
is_config=True,
)
except (TypeError, ValueError):
raise ValueError(
{
"error-string": """mtu_discovery must be of a type compatible with boolean""",
"defined-type": "boolean",
"generated-type": """YANGDynClass(base=YANGBool, default=YANGBool("false"), is_leaf=True, yang_name="mtu-discovery", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='boolean', is_config=True)""",
}
)
self.__mtu_discovery = t
if hasattr(self, "_set"):
self._set()
def _unset_mtu_discovery(self):
self.__mtu_discovery = YANGDynClass(
base=YANGBool,
default=YANGBool("false"),
is_leaf=True,
yang_name="mtu-discovery",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="boolean",
is_config=True,
)
def _get_passive_mode(self):
"""
Getter method for passive_mode, mapped from YANG variable /network_instances/network_instance/protocols/protocol/bgp/neighbors/neighbor/transport/config/passive_mode (boolean)
YANG Description: Wait for peers to issue requests to open a BGP session,
rather than initiating sessions from the local router.
"""
return self.__passive_mode
def _set_passive_mode(self, v, load=False):
"""
Setter method for passive_mode, mapped from YANG variable /network_instances/network_instance/protocols/protocol/bgp/neighbors/neighbor/transport/config/passive_mode (boolean)
If this variable is read-only (config: false) in the
source YANG file, then _set_passive_mode is considered a private
method. Backends looking to populate this variable should
do so by calling thisObj._set_passive_mode() directly.
YANG Description: Wait for peers to issue requests to open a BGP session,
rather than initiating sessions from the local router.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(
v,
base=YANGBool,
default=YANGBool("false"),
is_leaf=True,
yang_name="passive-mode",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="boolean",
is_config=True,
)
except (TypeError, ValueError):
raise ValueError(
{
"error-string": """passive_mode must be of a type compatible with boolean""",
"defined-type": "boolean",
"generated-type": """YANGDynClass(base=YANGBool, default=YANGBool("false"), is_leaf=True, yang_name="passive-mode", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='boolean', is_config=True)""",
}
)
self.__passive_mode = t
if hasattr(self, "_set"):
self._set()
def _unset_passive_mode(self):
self.__passive_mode = YANGDynClass(
base=YANGBool,
default=YANGBool("false"),
is_leaf=True,
yang_name="passive-mode",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="boolean",
is_config=True,
)
def _get_local_address(self):
"""
Getter method for local_address, mapped from YANG variable /network_instances/network_instance/protocols/protocol/bgp/neighbors/neighbor/transport/config/local_address (union)
YANG Description: Set the local IP (either IPv4 or IPv6) address to use
for the session when sending BGP update messages. This
may be expressed as either an IP address or reference
to the name of an interface.
"""
return self.__local_address
def _set_local_address(self, v, load=False):
"""
Setter method for local_address, mapped from YANG variable /network_instances/network_instance/protocols/protocol/bgp/neighbors/neighbor/transport/config/local_address (union)
If this variable is read-only (config: false) in the
source YANG file, then _set_local_address is considered a private
method. Backends looking to populate this variable should
do so by calling thisObj._set_local_address() directly.
YANG Description: Set the local IP (either IPv4 or IPv6) address to use
for the session when sending BGP update messages. This
may be expressed as either an IP address or reference
to the name of an interface.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(
v,
base=[
RestrictedClassType(
base_type=six.text_type,
restriction_dict={
"pattern": "^(([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])$"
},
),
RestrictedClassType(
base_type=six.text_type,
restriction_dict={
"pattern": "^(([0-9a-fA-F]{1,4}:){7}[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,7}:|([0-9a-fA-F]{1,4}:){1,6}:[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,5}(:[0-9a-fA-F]{1,4}){1,2}|([0-9a-fA-F]{1,4}:){1,4}(:[0-9a-fA-F]{1,4}){1,3}|([0-9a-fA-F]{1,4}:){1,3}(:[0-9a-fA-F]{1,4}){1,4}|([0-9a-fA-F]{1,4}:){1,2}(:[0-9a-fA-F]{1,4}){1,5}|[0-9a-fA-F]{1,4}:((:[0-9a-fA-F]{1,4}){1,6})|:((:[0-9a-fA-F]{1,4}){1,7}|:))$"
},
),
six.text_type,
],
is_leaf=True,
yang_name="local-address",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="union",
is_config=True,
)
except (TypeError, ValueError):
raise ValueError(
{
"error-string": """local_address must be of a type compatible with union""",
"defined-type": "openconfig-network-instance:union",
"generated-type": """YANGDynClass(base=[RestrictedClassType(base_type=six.text_type, restriction_dict={'pattern': '^(([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])$'}),RestrictedClassType(base_type=six.text_type, restriction_dict={'pattern': '^(([0-9a-fA-F]{1,4}:){7}[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,7}:|([0-9a-fA-F]{1,4}:){1,6}:[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,5}(:[0-9a-fA-F]{1,4}){1,2}|([0-9a-fA-F]{1,4}:){1,4}(:[0-9a-fA-F]{1,4}){1,3}|([0-9a-fA-F]{1,4}:){1,3}(:[0-9a-fA-F]{1,4}){1,4}|([0-9a-fA-F]{1,4}:){1,2}(:[0-9a-fA-F]{1,4}){1,5}|[0-9a-fA-F]{1,4}:((:[0-9a-fA-F]{1,4}){1,6})|:((:[0-9a-fA-F]{1,4}){1,7}|:))$'}),six.text_type,], is_leaf=True, yang_name="local-address", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='union', is_config=True)"""
}
)
self.__local_address = t
if hasattr(self, "_set"):
self._set()
def _unset_local_address(self):
self.__local_address = YANGDynClass(
base=[
RestrictedClassType(
base_type=six.text_type,
restriction_dict={
"pattern": "^(([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])$"
},
),
RestrictedClassType(
base_type=six.text_type,
restriction_dict={
"pattern": "^(([0-9a-fA-F]{1,4}:){7}[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,7}:|([0-9a-fA-F]{1,4}:){1,6}:[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,5}(:[0-9a-fA-F]{1,4}){1,2}|([0-9a-fA-F]{1,4}:){1,4}(:[0-9a-fA-F]{1,4}){1,3}|([0-9a-fA-F]{1,4}:){1,3}(:[0-9a-fA-F]{1,4}){1,4}|([0-9a-fA-F]{1,4}:){1,2}(:[0-9a-fA-F]{1,4}){1,5}|[0-9a-fA-F]{1,4}:((:[0-9a-fA-F]{1,4}){1,6})|:((:[0-9a-fA-F]{1,4}){1,7}|:))$"
},
),
six.text_type,
],
is_leaf=True,
yang_name="local-address",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="union",
is_config=True,
)
tcp_mss = __builtin__.property(_get_tcp_mss, _set_tcp_mss)
mtu_discovery = __builtin__.property(_get_mtu_discovery, _set_mtu_discovery)
passive_mode = __builtin__.property(_get_passive_mode, _set_passive_mode)
local_address = __builtin__.property(_get_local_address, _set_local_address)
_pyangbind_elements = OrderedDict(
[
("tcp_mss", tcp_mss),
("mtu_discovery", mtu_discovery),
("passive_mode", passive_mode),
("local_address", local_address),
]
)
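The `local-address` leaf above is a YANG union tried in member order: the IPv4 dotted-quad pattern, then the IPv6 pattern, then unrestricted text (a reference to an interface name). A standalone sketch of that resolution order follows; it uses the stdlib `ipaddress` module in place of the generated regexes, so edge-case acceptance may differ slightly, and the helper name is ours, not part of the bindings.

```python
import ipaddress


def classify_local_address(value):
    # Mirror the union member order: IPv4 first, then IPv6, then free
    # text (interpreted as an interface name by consumers of the model).
    try:
        version = ipaddress.ip_address(value).version
    except ValueError:
        return "interface-name"
    return "ipv4" if version == 4 else "ipv6"
```

For example, `classify_local_address("2001:db8::1")` resolves to the IPv6 member, while a non-address string such as `"Loopback0"` falls through to the plain-text member.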
| 44.478827 | 1,001 | 0.566825 | 5,130 | 40,965 | 4.345029 | 0.046004 | 0.012921 | 0.028712 | 0.034455 | 0.983176 | 0.973441 | 0.973441 | 0.973441 | 0.973441 | 0.973441 | 0 | 0.039972 | 0.29037 | 40,965 | 920 | 1,002 | 44.527174 | 0.726797 | 0.191993 | 0 | 0.876404 | 0 | 0.02809 | 0.328896 | 0.19487 | 0 | 0 | 0 | 0 | 0 | 1 | 0.039326 | false | 0.042135 | 0.021067 | 0 | 0.102528 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
c48ab178bf53558084fb500b2811c6f0b77a7943 | 17,565 | py | Python | tensorflow/compiler/tests/fake_quant_ops_test.py | elielhojman/tensorflow | 163aae337c875efce2518c3cd0fecb61968fe408 | ["Apache-2.0"] | 8 | 2017-03-20T12:04:21.000Z | 2021-06-24T20:34:30.000Z | tensorflow/compiler/tests/fake_quant_ops_test.py | shrikunjsarda/tensorflow | 7e8927e7af0c51ac20a63bd4eab6ff83df1a39ae | ["Apache-2.0"] | 4 | 2019-08-14T22:32:51.000Z | 2020-03-09T14:59:18.000Z | tensorflow/compiler/tests/fake_quant_ops_test.py | shrikunjsarda/tensorflow | 7e8927e7af0c51ac20a63bd4eab6ff83df1a39ae | ["Apache-2.0"] | 4 | 2019-11-11T13:46:27.000Z | 2020-03-14T05:36:53.000Z |
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
from tensorflow.compiler.tests import xla_test
from tensorflow.python.framework import dtypes
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import gen_array_ops
from tensorflow.python.platform import googletest
class FakeQuantWithMinMaxArgsTest(xla_test.XLATestCase):
"""Test cases for FakeQuantWithMinMaxArgs operation."""
# 8 bits, wide range.
def testOp_with8BitsNoScalingNoNudging(self):
self._TestOp(0.0, 255.0, 8, False, 0.0, 255.0, 1.0)
def testOp_with8BitsScalingAndNudgingDown(self):
self._TestOp(0.5, 128.0, 8, False, 0.0, 127.5, 0.5)
def testOp_with8BitsScalingAndNudgingUp(self):
self._TestOp(-128.0, -0.5, 8, False, -127.5, 0.0, 0.5)
def testOp_with8BitsScalingAndNudgingBetween(self):
self._TestOp(-0.1, 127.4, 8, False, 0.0, 127.5, 0.5)
# 8 bits, narrow range.
def testOp_with8BitsNarrowRangeNoScalingNoNudging(self):
self._TestOp(0.0, 254.0, 8, True, 0.0, 254.0, 1.0)
def testOp_with8BitsNarrowRangeScalingAndNudgingDown(self):
self._TestOp(0.1, 127.1, 8, True, 0.0, 127.0, 0.5)
def testOp_with8BitsNarrowRangeScalingAndNudgingUp(self):
self._TestOp(-127.1, -0.1, 8, True, -127.0, 0.0, 0.5)
def testOp_with8BitsNarrowRangeScalingAndNudgingBetween(self):
self._TestOp(-0.1, 126.9, 8, True, 0.0, 127.0, 0.5)
# 7 bits, wide range.
def testOp_with7BitsNoScalingNoNudging(self):
self._TestOp(0.0, 127.0, 7, False, 0.0, 127.0, 1.0)
def testOp_with7BitsScalingAndNudgingDown(self):
self._TestOp(0.5, 64.0, 7, False, 0.0, 63.5, 0.5)
def testOp_with7BitsScalingAndNudgingUp(self):
self._TestOp(-64.0, -0.5, 7, False, -63.5, 0.0, 0.5)
def testOp_with7BitsScalingAndNudgingBetween(self):
self._TestOp(-0.1, 63.4, 7, False, 0.0, 63.5, 0.5)
# 7 bits, narrow range.
def testOp_with7BitsNarrowRangeNoScalingNoNudging(self):
self._TestOp(0.0, 126.0, 7, True, 0.0, 126.0, 1.0)
def testOp_with7BitsNarrowRangeScalingAndNudgingDown(self):
self._TestOp(0.1, 63.1, 7, True, 0.0, 63.0, 0.5)
def testOp_with7BitsNarrowRangeScalingAndNudgingUp(self):
self._TestOp(-63.1, -0.1, 7, True, -63.0, 0.0, 0.5)
def testOp_with7BitsNarrowRangeScalingAndNudgingBetween(self):
self._TestOp(-0.1, 62.9, 7, True, 0.0, 63.0, 0.5)
def _TestOp(self, input_min, input_max, num_bits, narrow_range,
expected_nudged_input_min, expected_nudged_input_max,
expected_step):
inputs = np.array(
[
expected_nudged_input_min - expected_step,
expected_nudged_input_min - 0.01, expected_nudged_input_min,
expected_nudged_input_min + 0.01,
expected_nudged_input_min + expected_step - 0.01,
expected_nudged_input_min + expected_step,
expected_nudged_input_min + expected_step + 0.01,
expected_nudged_input_max - 0.01, expected_nudged_input_max,
expected_nudged_input_max + 0.01,
expected_nudged_input_max + expected_step
],
dtype=np.float32)
expected = np.array(
[
expected_nudged_input_min, expected_nudged_input_min,
expected_nudged_input_min, expected_nudged_input_min,
expected_nudged_input_min + expected_step,
expected_nudged_input_min + expected_step,
expected_nudged_input_min + expected_step,
expected_nudged_input_max, expected_nudged_input_max,
expected_nudged_input_max, expected_nudged_input_max
],
dtype=np.float32)
with self.test_session() as session:
with self.test_scope():
input_placeholder = array_ops.placeholder(
dtypes.float32, inputs.shape, name="inputs")
outputs = array_ops.fake_quant_with_min_max_args(
input_placeholder,
min=input_min,
max=input_max,
num_bits=num_bits,
narrow_range=narrow_range)
result = session.run(outputs, {input_placeholder: inputs})
self.assertAllCloseAccordingToType(
result, expected, rtol=1e-3, atol=1e-5, bfloat16_rtol=0.03)
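The `expected_nudged_input_min`/`max`/`step` triples hard-coded in the test cases above come from TensorFlow's fake-quant range nudging: the float range is shifted so that zero lands exactly on the quantized grid. The sketch below is not the TF kernel itself, but it reproduces the expected values used by these tests:

```python
def nudged_range(input_min, input_max, num_bits, narrow_range):
    # narrow_range drops the lowest quantized bucket (quant_min = 1).
    quant_min = 1 if narrow_range else 0
    quant_max = (1 << num_bits) - 1
    scale = (input_max - input_min) / (quant_max - quant_min)
    zero_point_from_min = quant_min - input_min / scale
    # Clamp, then round, so zero becomes exactly representable.
    if zero_point_from_min < quant_min:
        nudged_zero_point = quant_min
    elif zero_point_from_min > quant_max:
        nudged_zero_point = quant_max
    else:
        nudged_zero_point = int(round(zero_point_from_min))
    nudged_min = (quant_min - nudged_zero_point) * scale
    nudged_max = (quant_max - nudged_zero_point) * scale
    return nudged_min, nudged_max, scale
```

For example, `nudged_range(0.5, 128.0, 8, False)` yields `(0.0, 127.5, 0.5)`, matching `testOp_with8BitsScalingAndNudgingDown`.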
class FakeQuantWithMinMaxArgsGradientTest(xla_test.XLATestCase):
"""Test cases for FakeQuantWithMinMaxArgsGradient operation."""
# 8 bits, wide range.
def testOp_with8BitsNoScalingNoNudging(self):
self._TestOp(0.0, 255.0, 8, False, 0.0, 255.0, 1.0)
def testOp_with8BitsScalingAndNudgingDown(self):
self._TestOp(0.5, 128.0, 8, False, 0.0, 127.5, 0.5)
def testOp_with8BitsScalingAndNudgingUp(self):
self._TestOp(-128.0, -0.5, 8, False, -127.5, 0.0, 0.5)
def testOp_with8BitsScalingAndNudgingBetween(self):
self._TestOp(-0.1, 127.4, 8, False, 0.0, 127.5, 0.5)
# 8 bits, narrow range.
def testOp_with8BitsNarrowRangeNoScalingNoNudging(self):
self._TestOp(0.0, 254.0, 8, True, 0.0, 254.0, 1.0)
def testOp_with8BitsNarrowRangeScalingAndNudgingDown(self):
self._TestOp(0.1, 127.1, 8, True, 0.0, 127.0, 0.5)
def testOp_with8BitsNarrowRangeScalingAndNudgingUp(self):
self._TestOp(-127.1, -0.1, 8, True, -127.0, 0.0, 0.5)
def testOp_with8BitsNarrowRangeScalingAndNudgingBetween(self):
self._TestOp(-0.1, 126.9, 8, True, 0.0, 127.0, 0.5)
# 7 bits, wide range.
def testOp_with7BitsNoScalingNoNudging(self):
self._TestOp(0.0, 127.0, 7, False, 0.0, 127.0, 1.0)
def testOp_with7BitsScalingAndNudgingDown(self):
self._TestOp(0.5, 64.0, 7, False, 0.0, 63.5, 0.5)
def testOp_with7BitsScalingAndNudgingUp(self):
self._TestOp(-64.0, -0.5, 7, False, -63.5, 0.0, 0.5)
def testOp_with7BitsScalingAndNudgingBetween(self):
self._TestOp(-0.1, 63.4, 7, False, 0.0, 63.5, 0.5)
# 7 bits, narrow range.
def testOp_with7BitsNarrowRangeNoScalingNoNudging(self):
self._TestOp(0.0, 126.0, 7, True, 0.0, 126.0, 1.0)
def testOp_with7BitsNarrowRangeScalingAndNudgingDown(self):
self._TestOp(0.1, 63.1, 7, True, 0.0, 63.0, 0.5)
def testOp_with7BitsNarrowRangeScalingAndNudgingUp(self):
self._TestOp(-63.1, -0.1, 7, True, -63.0, 0.0, 0.5)
def testOp_with7BitsNarrowRangeScalingAndNudgingBetween(self):
self._TestOp(-0.1, 62.9, 7, True, 0.0, 63.0, 0.5)
def _TestOp(self, input_min, input_max, num_bits, narrow_range,
expected_nudged_input_min, expected_nudged_input_max,
expected_step):
inputs = np.array(
[
expected_nudged_input_min - expected_step,
expected_nudged_input_min - 0.01, expected_nudged_input_min,
expected_nudged_input_min + 0.01,
expected_nudged_input_min + expected_step - 0.01,
expected_nudged_input_min + expected_step,
expected_nudged_input_min + expected_step + 0.01,
expected_nudged_input_max - 0.01, expected_nudged_input_max,
expected_nudged_input_max + 0.01,
expected_nudged_input_max + expected_step
],
dtype=np.float32)
gradients = np.arange(1, len(inputs) + 1, dtype=np.float32)
expected_backprops = np.array(
[0.0, 0.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 0.0, 0.0],
dtype=np.float32)
with self.test_session() as session:
with self.test_scope():
gradient_placeholder = array_ops.placeholder(
dtypes.float32, gradients.shape, name="gradients")
input_placeholder = array_ops.placeholder(
dtypes.float32, inputs.shape, name="inputs")
outputs = gen_array_ops.fake_quant_with_min_max_args_gradient(
gradient_placeholder,
input_placeholder,
min=input_min,
max=input_max,
num_bits=num_bits,
narrow_range=narrow_range)
backprops = session.run(outputs, {
gradient_placeholder: gradients,
input_placeholder: inputs
})
self.assertAllCloseAccordingToType(
backprops,
expected_backprops,
rtol=1e-3,
atol=1e-5,
bfloat16_rtol=0.03)
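`expected_backprops` above encodes the straight-through estimator used by the fake-quant gradient: the incoming gradient passes through wherever the input lies inside the nudged range (both endpoints inclusive) and is zeroed where the input was clipped. A numpy sketch of the input-gradient rule (the helper name is ours, not the TF API):

```python
import numpy as np


def fake_quant_grad_wrt_input(gradients, inputs, nudged_min, nudged_max):
    # Straight-through estimator: identity inside [nudged_min, nudged_max],
    # zero outside, which produces the [0, 0, 3, ..., 9, 0, 0] pattern.
    inside = (inputs >= nudged_min) & (inputs <= nudged_max)
    return np.where(inside, gradients, np.zeros_like(gradients))
```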
class FakeQuantWithMinMaxVarsTest(xla_test.XLATestCase):
"""Test cases for FakeQuantWithMinMaxVars operation."""
# 8 bits, wide range.
def testOp_with8BitsNoScalingNoNudging(self):
self._TestOp(0.0, 255.0, 8, False, 0.0, 255.0, 1.0)
def testOp_with8BitsScalingAndNudgingDown(self):
self._TestOp(0.5, 128.0, 8, False, 0.0, 127.5, 0.5)
def testOp_with8BitsScalingAndNudgingUp(self):
self._TestOp(-128.0, -0.5, 8, False, -127.5, 0.0, 0.5)
def testOp_with8BitsScalingAndNudgingBetween(self):
self._TestOp(-0.1, 127.4, 8, False, 0.0, 127.5, 0.5)
# 8 bits, narrow range.
def testOp_with8BitsNarrowRangeNoScalingNoNudging(self):
self._TestOp(0.0, 254.0, 8, True, 0.0, 254.0, 1.0)
def testOp_with8BitsNarrowRangeScalingAndNudgingDown(self):
self._TestOp(0.1, 127.1, 8, True, 0.0, 127.0, 0.5)
def testOp_with8BitsNarrowRangeScalingAndNudgingUp(self):
self._TestOp(-127.1, -0.1, 8, True, -127.0, 0.0, 0.5)
def testOp_with8BitsNarrowRangeScalingAndNudgingBetween(self):
self._TestOp(-0.1, 126.9, 8, True, 0.0, 127.0, 0.5)
# 7 bits, wide range.
def testOp_with7BitsNoScalingNoNudging(self):
self._TestOp(0.0, 127.0, 7, False, 0.0, 127.0, 1.0)
def testOp_with7BitsScalingAndNudgingDown(self):
self._TestOp(0.5, 64.0, 7, False, 0.0, 63.5, 0.5)
def testOp_with7BitsScalingAndNudgingUp(self):
self._TestOp(-64.0, -0.5, 7, False, -63.5, 0.0, 0.5)
def testOp_with7BitsScalingAndNudgingBetween(self):
self._TestOp(-0.1, 63.4, 7, False, 0.0, 63.5, 0.5)
# 7 bits, narrow range.
def testOp_with7BitsNarrowRangeNoScalingNoNudging(self):
self._TestOp(0.0, 126.0, 7, True, 0.0, 126.0, 1.0)
def testOp_with7BitsNarrowRangeScalingAndNudgingDown(self):
self._TestOp(0.1, 63.1, 7, True, 0.0, 63.0, 0.5)
def testOp_with7BitsNarrowRangeScalingAndNudgingUp(self):
self._TestOp(-63.1, -0.1, 7, True, -63.0, 0.0, 0.5)
def testOp_with7BitsNarrowRangeScalingAndNudgingBetween(self):
self._TestOp(-0.1, 62.9, 7, True, 0.0, 63.0, 0.5)
def _TestOp(self, input_min, input_max, num_bits, narrow_range,
expected_nudged_input_min, expected_nudged_input_max,
expected_step):
inputs = np.array(
[
expected_nudged_input_min - expected_step,
expected_nudged_input_min - 0.01, expected_nudged_input_min,
expected_nudged_input_min + 0.01,
expected_nudged_input_min + expected_step - 0.01,
expected_nudged_input_min + expected_step,
expected_nudged_input_min + expected_step + 0.01,
expected_nudged_input_max - 0.01, expected_nudged_input_max,
expected_nudged_input_max + 0.01,
expected_nudged_input_max + expected_step
],
dtype=np.float32)
expected = np.array(
[
expected_nudged_input_min, expected_nudged_input_min,
expected_nudged_input_min, expected_nudged_input_min,
expected_nudged_input_min + expected_step,
expected_nudged_input_min + expected_step,
expected_nudged_input_min + expected_step,
expected_nudged_input_max, expected_nudged_input_max,
expected_nudged_input_max, expected_nudged_input_max
],
dtype=np.float32)
with self.test_session() as session:
with self.test_scope():
input_placeholder = array_ops.placeholder(
dtypes.float32, inputs.shape, name="inputs")
min_placeholder = array_ops.placeholder(dtypes.float32, (), name="min")
max_placeholder = array_ops.placeholder(dtypes.float32, (), name="max")
outputs = array_ops.fake_quant_with_min_max_vars(
input_placeholder,
min_placeholder,
max_placeholder,
num_bits=num_bits,
narrow_range=narrow_range)
result = session.run(
outputs, {
input_placeholder: inputs,
min_placeholder: input_min,
max_placeholder: input_max
})
self.assertAllCloseAccordingToType(
result, expected, rtol=1e-3, atol=1e-5, bfloat16_rtol=0.03)
class FakeQuantWithMinMaxVarsGradientTest(xla_test.XLATestCase):
"""Test cases for FakeQuantWithMinMaxVarsGradient operation."""
# 8 bits, wide range.
def testOp_with8BitsNoScalingNoNudging(self):
self._TestOp(0.0, 255.0, 8, False, 0.0, 255.0, 1.0)
def testOp_with8BitsScalingAndNudgingDown(self):
self._TestOp(0.5, 128.0, 8, False, 0.0, 127.5, 0.5)
def testOp_with8BitsScalingAndNudgingUp(self):
self._TestOp(-128.0, -0.5, 8, False, -127.5, 0.0, 0.5)
def testOp_with8BitsScalingAndNudgingBetween(self):
self._TestOp(-0.1, 127.4, 8, False, 0.0, 127.5, 0.5)
# 8 bits, narrow range.
def testOp_with8BitsNarrowRangeNoScalingNoNudging(self):
self._TestOp(0.0, 254.0, 8, True, 0.0, 254.0, 1.0)
def testOp_with8BitsNarrowRangeScalingAndNudgingDown(self):
self._TestOp(0.1, 127.1, 8, True, 0.0, 127.0, 0.5)
def testOp_with8BitsNarrowRangeScalingAndNudgingUp(self):
self._TestOp(-127.1, -0.1, 8, True, -127.0, 0.0, 0.5)
def testOp_with8BitsNarrowRangeScalingAndNudgingBetween(self):
self._TestOp(-0.1, 126.9, 8, True, 0.0, 127.0, 0.5)
# 7 bits, wide range.
def testOp_with7BitsNoScalingNoNudging(self):
self._TestOp(0.0, 127.0, 7, False, 0.0, 127.0, 1.0)
def testOp_with7BitsScalingAndNudgingDown(self):
self._TestOp(0.5, 64.0, 7, False, 0.0, 63.5, 0.5)
def testOp_with7BitsScalingAndNudgingUp(self):
self._TestOp(-64.0, -0.5, 7, False, -63.5, 0.0, 0.5)
def testOp_with7BitsScalingAndNudgingBetween(self):
self._TestOp(-0.1, 63.4, 7, False, 0.0, 63.5, 0.5)
# 7 bits, narrow range.
def testOp_with7BitsNarrowRangeNoScalingNoNudging(self):
self._TestOp(0.0, 126.0, 7, True, 0.0, 126.0, 1.0)
def testOp_with7BitsNarrowRangeScalingAndNudgingDown(self):
self._TestOp(0.1, 63.1, 7, True, 0.0, 63.0, 0.5)
def testOp_with7BitsNarrowRangeScalingAndNudgingUp(self):
self._TestOp(-63.1, -0.1, 7, True, -63.0, 0.0, 0.5)
def testOp_with7BitsNarrowRangeScalingAndNudgingBetween(self):
self._TestOp(-0.1, 62.9, 7, True, 0.0, 63.0, 0.5)
def _TestOp(self, input_min, input_max, num_bits, narrow_range,
expected_nudged_input_min, expected_nudged_input_max,
expected_step):
inputs = np.array(
[
expected_nudged_input_min - expected_step,
expected_nudged_input_min - 0.01, expected_nudged_input_min,
expected_nudged_input_min + 0.01,
expected_nudged_input_min + expected_step - 0.01,
expected_nudged_input_min + expected_step,
expected_nudged_input_min + expected_step + 0.01,
expected_nudged_input_max - 0.01, expected_nudged_input_max,
expected_nudged_input_max + 0.01,
expected_nudged_input_max + expected_step
],
dtype=np.float32)
gradients = np.arange(1, len(inputs) + 1, dtype=np.float32)
expected_backprops_wrt_input = np.array(
[0.0, 0.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 0.0, 0.0],
dtype=np.float32)
expected_backprops_wrt_min = 1.0 + 2.0
expected_backprops_wrt_max = 10.0 + 11.0
with self.test_session() as session:
with self.test_scope():
gradient_placeholder = array_ops.placeholder(
dtypes.float32, gradients.shape, name="gradients")
input_placeholder = array_ops.placeholder(
dtypes.float32, inputs.shape, name="inputs")
min_placeholder = array_ops.placeholder(dtypes.float32, (), name="min")
max_placeholder = array_ops.placeholder(dtypes.float32, (), name="max")
outputs = array_ops.fake_quant_with_min_max_vars_gradient(
gradient_placeholder,
input_placeholder,
min_placeholder,
max_placeholder,
num_bits=num_bits,
narrow_range=narrow_range)
backprops_wrt_input, backprops_wrt_min, backprops_wrt_max = session.run(
outputs, {
gradient_placeholder: gradients,
input_placeholder: inputs,
min_placeholder: input_min,
max_placeholder: input_max
})
self.assertAllCloseAccordingToType(
backprops_wrt_input,
expected_backprops_wrt_input,
rtol=1e-3,
atol=1e-5,
bfloat16_rtol=0.03)
self.assertAllCloseAccordingToType(
backprops_wrt_min,
expected_backprops_wrt_min,
rtol=1e-3,
atol=1e-5,
bfloat16_rtol=0.03)
self.assertAllCloseAccordingToType(
backprops_wrt_max,
expected_backprops_wrt_max,
rtol=1e-3,
atol=1e-5,
bfloat16_rtol=0.03)
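The scalar expectations above (`1.0 + 2.0` for min, `10.0 + 11.0` for max) reflect how the vars-gradient op accumulates the incoming gradient over clipped elements: inputs below the nudged min feed the min gradient and inputs above the nudged max feed the max gradient. A numpy sketch of all three outputs (the helper name is ours, not the TF API):

```python
import numpy as np


def fake_quant_grads(gradients, inputs, nudged_min, nudged_max):
    below = inputs < nudged_min
    above = inputs > nudged_max
    # Input gradient: straight-through inside the range, zero where clipped.
    d_input = np.where(below | above, np.zeros_like(gradients), gradients)
    # Range gradients: sum the incoming gradient over the clipped elements.
    d_min = float(gradients[below].sum())
    d_max = float(gradients[above].sum())
    return d_input, d_min, d_max
```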
if __name__ == "__main__":
googletest.main()
| 38.774834 | 80 | 0.674751 | 2,390 | 17,565 | 4.721757 | 0.074895 | 0.025166 | 0.12459 | 0.063802 | 0.899424 | 0.88817 | 0.870713 | 0.870713 | 0.865219 | 0.8537 | 0 | 0.088273 | 0.210589 | 17,565 | 452 | 81 | 38.860619 | 0.725588 | 0.069058 | 0 | 0.874627 | 0 | 0 | 0.003803 | 0 | 0 | 0 | 0 | 0 | 0.01791 | 1 | 0.202985 | false | 0 | 0.026866 | 0 | 0.241791 | 0.002985 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
6709eb27afdc6932b57bb85729f854226adc9936 | 26,009 | py | Python | data_steward/test/unit_test/data_steward/validation/participants/identity_match_test.py | berneskaracay/curation | 713e8ac606822a6f639ed3a74c9c0c07ba19f7c0 | ["MIT"] | null | null | null | data_steward/test/unit_test/data_steward/validation/participants/identity_match_test.py | berneskaracay/curation | 713e8ac606822a6f639ed3a74c9c0c07ba19f7c0 | ["MIT"] | null | null | null | data_steward/test/unit_test/data_steward/validation/participants/identity_match_test.py | berneskaracay/curation | 713e8ac606822a6f639ed3a74c9c0c07ba19f7c0 | ["MIT"] | null | null | null |
import os
import unittest
# Third party imports
import googleapiclient
from mock import call, patch
# Project imports
import constants.bq_utils as bq_consts
import constants.validation.participants.identity_match as consts
import validation.participants.identity_match as id_match
class IdentityMatchTest(unittest.TestCase):
@classmethod
def setUpClass(cls):
print('**************************************************************')
print(cls.__name__)
print('**************************************************************')
def setUp(self):
self.date_string = 20190503
self.project = 'foo'
self.dest_dataset = 'baz{}'.format(self.date_string)
self.pii_dataset = 'foo{}'.format(self.date_string)
self.rdr_dataset = 'bar{}'.format(self.date_string)
self.site_list = ['bogus-site', 'awesome-site']
self.bucket_ids = ['aou-bogus', 'aou-awesome']
self.internal_bucket_id = 'fantastic-internal'
self.pid = 8888
self.participant_info = {
'person_id': self.pid,
'first': 'Fancy-Nancy',
'middle': 'K',
'last': 'Drew',
'email': 'fancy-nancy_Drew_88@GMAIL.com',
'phone': '(555) 867-5309',
'street-one': '88 Lingerlost Rd',
'street-two': 'Apt. 4E',
'city': 'Frog Pond',
'state': 'AL',
'zip': '05645-1112',
'sex': 'Female',
'ehr_birthdate': '1990-01-01 00:00:00+00',
'rdr_birthdate': '1990-01-01'
}
mock_dest_dataset_patcher = patch(
'validation.participants.identity_match.bq_utils.create_dataset'
)
self.mock_dest_dataset = mock_dest_dataset_patcher.start()
self.mock_dest_dataset.return_value = {
bq_consts.DATASET_REF: {bq_consts.DATASET_ID: self.dest_dataset}
}
self.addCleanup(mock_dest_dataset_patcher.stop)
mock_match_tables_patcher = patch(
'validation.participants.identity_match.readers.create_match_values_table'
)
self.mock_match_tables = mock_match_tables_patcher.start()
self.addCleanup(mock_match_tables_patcher.stop)
mock_site_names_patcher = patch(
'validation.participants.identity_match.readers.get_hpo_site_names'
)
self.mock_site_names = mock_site_names_patcher.start()
self.mock_site_names.return_value = self.site_list
self.addCleanup(mock_site_names_patcher.stop)
mock_pii_match_tables_patcher = patch(
'validation.participants.identity_match.bq_utils.create_table'
)
self.mock_pii_match_tables = mock_pii_match_tables_patcher.start()
self.addCleanup(mock_pii_match_tables_patcher.stop)
mock_ehr_person_values_patcher = patch(
'validation.participants.identity_match.readers.get_ehr_person_values'
)
self.mock_ehr_person = mock_ehr_person_values_patcher.start()
self.mock_ehr_person.side_effect = [
{self.pid: 'Female'},
{self.pid: 'female'},
{self.pid: self.participant_info.get('ehr_birthdate')},
{self.pid: self.participant_info.get('ehr_birthdate')},
]
self.addCleanup(mock_ehr_person_values_patcher.stop)
mock_rdr_match_values_patcher = patch(
'validation.participants.identity_match.readers.get_rdr_match_values'
)
self.mock_rdr_values = mock_rdr_match_values_patcher.start()
self.mock_rdr_values.side_effect = [
{self.pid: self.participant_info.get('first')},
{self.pid: self.participant_info.get('first')},
{self.pid: self.participant_info.get('last')},
{self.pid: self.participant_info.get('last')},
{self.pid: self.participant_info.get('middle')},
{self.pid: self.participant_info.get('middle')},
{self.pid: self.participant_info.get('zip')},
{self.pid: self.participant_info.get('zip')},
{self.pid: self.participant_info.get('city')},
{self.pid: self.participant_info.get('city')},
{self.pid: self.participant_info.get('state')},
{self.pid: self.participant_info.get('state')},
{self.pid: self.participant_info.get('street-one')},
{self.pid: self.participant_info.get('street-two')},
{self.pid: self.participant_info.get('street-one')},
{self.pid: self.participant_info.get('street-two')},
{self.pid: self.participant_info.get('email')},
{self.pid: self.participant_info.get('email')},
{self.pid: self.participant_info.get('phone')},
{self.pid: self.participant_info.get('phone')},
{self.pid: 'Female'},
{self.pid: 'male'},
{self.pid: self.participant_info.get('rdr_birthdate')},
{self.pid: self.participant_info.get('rdr_birthdate')},
]
self.addCleanup(mock_rdr_match_values_patcher.stop)
mock_pii_values_patcher = patch(
'validation.participants.identity_match.readers.get_pii_values'
)
self.mock_pii_values = mock_pii_values_patcher.start()
self.mock_pii_values.side_effect = [
[(self.pid, self.participant_info.get('first'))],
[(self.pid, self.participant_info.get('first'))],
[(self.pid, self.participant_info.get('last'))],
[(self.pid, self.participant_info.get('last'))],
[(self.pid, self.participant_info.get('middle'))],
[(self.pid, self.participant_info.get('middle'))],
[(self.pid, self.participant_info.get('email'))],
[(self.pid, self.participant_info.get('email'))],
[(self.pid, self.participant_info.get('phone'))],
[(self.pid, self.participant_info.get('phone'))],
]
self.addCleanup(mock_pii_values_patcher.stop)
mock_append_to_result_table = patch(
'validation.participants.identity_match.writers.append_to_result_table'
)
self.mock_table_append = mock_append_to_result_table.start()
self.addCleanup(mock_append_to_result_table.stop)
mock_location_pii_patcher = patch(
'validation.participants.identity_match.readers.get_location_pii'
)
self.mock_location_pii = mock_location_pii_patcher.start()
self.mock_location_pii.side_effect = [
[(self.pid, self.participant_info.get('zip'))],
[(self.pid, self.participant_info.get('zip'))],
[(self.pid, self.participant_info.get('city'))],
[(self.pid, self.participant_info.get('city'))],
[(self.pid, self.participant_info.get('state'))],
[(self.pid, self.participant_info.get('state'))],
[(self.pid, self.participant_info.get('street-one'))],
[(self.pid, self.participant_info.get('street-two'))],
[(self.pid, self.participant_info.get('street-one'))],
[(self.pid, self.participant_info.get('street-two'))],
]
self.addCleanup(mock_location_pii_patcher.stop)
mock_merge_fields_patcher = patch(
'validation.participants.identity_match.writers.merge_fields_into_single_record'
)
self.mock_merge_fields = mock_merge_fields_patcher.start()
self.addCleanup(mock_merge_fields_patcher.stop)
mock_remove_sparse_records_patcher = patch(
'validation.participants.identity_match.writers.remove_sparse_records'
)
self.mock_remove_sparse_records = mock_remove_sparse_records_patcher.start()
self.addCleanup(mock_remove_sparse_records_patcher.stop)
mock_change_nulls_patcher = patch(
'validation.participants.identity_match.writers.change_nulls_to_missing_value'
)
self.mock_change_nulls = mock_change_nulls_patcher.start()
self.addCleanup(mock_change_nulls_patcher.stop)
mock_hpo_bucket_patcher = patch(
'validation.participants.identity_match.gcs_utils.get_hpo_bucket'
)
self.mock_hpo_bucket = mock_hpo_bucket_patcher.start()
self.mock_hpo_bucket.side_effect = self.bucket_ids
self.addCleanup(mock_hpo_bucket_patcher.stop)
mock_validation_report_patcher = patch(
'validation.participants.identity_match.writers.create_site_validation_report'
)
self.mock_validation_report = mock_validation_report_patcher.start()
self.mock_validation_report.return_value = ({}, 0)
self.addCleanup(mock_validation_report_patcher.stop)
mock_drc_bucket_patcher = patch(
'validation.participants.identity_match.gcs_utils.get_drc_bucket'
)
self.mock_drc_bucket = mock_drc_bucket_patcher.start()
self.mock_drc_bucket.return_value = self.internal_bucket_id
self.addCleanup(mock_drc_bucket_patcher.stop)
def test_match_participants_same_participant(self):
# pre conditions
# test
id_match.match_participants(
self.project,
self.rdr_dataset,
self.pii_dataset,
self.dest_dataset
)
# post conditions
self.assertEqual(self.mock_dest_dataset.call_count, 1)
self.assertEqual(
self.mock_dest_dataset.assert_called_with(
dataset_id=self.dest_dataset,
description=consts.DESTINATION_DATASET_DESCRIPTION.format(
version='', rdr_dataset=self.rdr_dataset, ehr_dataset=self.pii_dataset
),
overwrite_existing=True
),
None
)
self.assertEqual(self.mock_match_tables.call_count, 1)
self.assertEqual(
self.mock_match_tables.assert_called_with(
self.project, self.rdr_dataset, self.dest_dataset
),
None
)
self.assertEqual(self.mock_site_names.call_count, 1)
self.assertEqual(
self.mock_site_names.assert_called_once_with(),
None
)
num_sites = len(self.site_list)
self.assertEqual(self.mock_pii_match_tables.call_count, num_sites)
self.assertEqual(self.mock_ehr_person.call_count, num_sites * 2)
self.assertEqual(self.mock_rdr_values.call_count, num_sites * 12)
self.assertEqual(self.mock_pii_values.call_count, num_sites * 5)
self.assertEqual(self.mock_table_append.call_count, num_sites * 12)
self.assertEqual(self.mock_location_pii.call_count, num_sites * 5)
self.assertEqual(self.mock_merge_fields.call_count, num_sites)
self.assertEqual(self.mock_remove_sparse_records.call_count, num_sites)
self.assertEqual(self.mock_change_nulls.call_count, num_sites)
self.assertEqual(self.mock_hpo_bucket.call_count, 0)
self.assertEqual(self.mock_drc_bucket.call_count, 0)
self.assertEqual(self.mock_validation_report.call_count, 0)
def test_match_participants_same_participant_simulate_ehr_read_errors(self):
# pre conditions
self.mock_ehr_person.side_effect = googleapiclient.errors.HttpError(500, 'bar', 'baz')
# test
id_match.match_participants(
self.project,
self.rdr_dataset,
self.pii_dataset,
self.dest_dataset
)
# post conditions
self.assertEqual(self.mock_dest_dataset.call_count, 1)
self.assertEqual(
self.mock_dest_dataset.assert_called_with(
dataset_id=self.dest_dataset,
description=consts.DESTINATION_DATASET_DESCRIPTION.format(
version='', rdr_dataset=self.rdr_dataset, ehr_dataset=self.pii_dataset
),
overwrite_existing=True
),
None
)
self.assertEqual(self.mock_match_tables.call_count, 1)
self.assertEqual(
self.mock_match_tables.assert_called_with(
self.project, self.rdr_dataset, self.dest_dataset
),
None
)
self.assertEqual(self.mock_site_names.call_count, 1)
self.assertEqual(
self.mock_site_names.assert_called_once_with(),
None
)
num_sites = len(self.site_list)
self.assertEqual(self.mock_pii_match_tables.call_count, num_sites)
self.assertEqual(self.mock_ehr_person.call_count, num_sites * 2)
self.assertEqual(self.mock_rdr_values.call_count, num_sites * 12)
self.assertEqual(self.mock_pii_values.call_count, num_sites * 5)
self.assertEqual(self.mock_table_append.call_count, num_sites * 10)
self.assertEqual(self.mock_location_pii.call_count, num_sites * 5)
self.assertEqual(self.mock_merge_fields.call_count, num_sites)
self.assertEqual(self.mock_remove_sparse_records.call_count, num_sites)
self.assertEqual(self.mock_change_nulls.call_count, num_sites)
self.assertEqual(self.mock_hpo_bucket.call_count, 0)
self.assertEqual(self.mock_drc_bucket.call_count, 0)
self.assertEqual(self.mock_validation_report.call_count, 0)
def test_match_participants_same_participant_simulate_merge_errors(self):
# pre conditions
self.mock_merge_fields.side_effect = googleapiclient.errors.HttpError(500, 'bar', 'baz')
self.mock_remove_sparse_records.side_effect = googleapiclient.errors.HttpError(500, 'r', '')
self.mock_change_nulls.side_effect = googleapiclient.errors.HttpError(500, 'bar', 'baz')
# test
id_match.match_participants(
self.project,
self.rdr_dataset,
self.pii_dataset,
self.dest_dataset
)
# post conditions
self.assertEqual(self.mock_dest_dataset.call_count, 1)
self.assertEqual(
self.mock_dest_dataset.assert_called_with(
dataset_id=self.dest_dataset,
description=consts.DESTINATION_DATASET_DESCRIPTION.format(
version='', rdr_dataset=self.rdr_dataset, ehr_dataset=self.pii_dataset
),
overwrite_existing=True
),
None
)
self.assertEqual(self.mock_match_tables.call_count, 1)
self.assertEqual(
self.mock_match_tables.assert_called_with(
self.project, self.rdr_dataset, self.dest_dataset
),
None
)
self.assertEqual(self.mock_site_names.call_count, 1)
self.assertEqual(
self.mock_site_names.assert_called_once_with(),
None
)
num_sites = len(self.site_list)
self.assertEqual(self.mock_pii_match_tables.call_count, num_sites)
self.assertEqual(self.mock_ehr_person.call_count, num_sites * 2)
self.assertEqual(self.mock_rdr_values.call_count, num_sites * 12)
self.assertEqual(self.mock_pii_values.call_count, num_sites * 5)
self.assertEqual(self.mock_table_append.call_count, num_sites * 12)
self.assertEqual(self.mock_location_pii.call_count, num_sites * 5)
self.assertEqual(self.mock_merge_fields.call_count, num_sites)
self.assertEqual(self.mock_remove_sparse_records.call_count, num_sites)
self.assertEqual(self.mock_change_nulls.call_count, num_sites)
self.assertEqual(self.mock_hpo_bucket.call_count, 0)
self.assertEqual(self.mock_drc_bucket.call_count, 0)
self.assertEqual(self.mock_validation_report.call_count, 0)
def test_match_participants_same_participant_simulate_write_errors(self):
# pre conditions
self.mock_table_append.side_effect = googleapiclient.errors.HttpError(500, 'bar', 'baz')
# test
id_match.match_participants(
self.project,
self.rdr_dataset,
self.pii_dataset,
self.dest_dataset
)
# post conditions
self.assertEqual(self.mock_dest_dataset.call_count, 1)
self.assertEqual(
self.mock_dest_dataset.assert_called_with(
dataset_id=self.dest_dataset,
description=consts.DESTINATION_DATASET_DESCRIPTION.format(
version='', rdr_dataset=self.rdr_dataset, ehr_dataset=self.pii_dataset
),
overwrite_existing=True
),
None
)
self.assertEqual(self.mock_match_tables.call_count, 1)
self.assertEqual(
self.mock_match_tables.assert_called_with(
self.project, self.rdr_dataset, self.dest_dataset
),
None
)
self.assertEqual(self.mock_site_names.call_count, 1)
self.assertEqual(
self.mock_site_names.assert_called_once_with(),
None
)
num_sites = len(self.site_list)
self.assertEqual(self.mock_pii_match_tables.call_count, num_sites)
self.assertEqual(self.mock_ehr_person.call_count, num_sites * 2)
self.assertEqual(self.mock_rdr_values.call_count, num_sites * 12)
self.assertEqual(self.mock_pii_values.call_count, num_sites * 5)
self.assertEqual(self.mock_table_append.call_count, num_sites * 12)
self.assertEqual(self.mock_location_pii.call_count, num_sites * 5)
self.assertEqual(self.mock_merge_fields.call_count, num_sites)
self.assertEqual(self.mock_remove_sparse_records.call_count, num_sites)
self.assertEqual(self.mock_change_nulls.call_count, num_sites)
self.assertEqual(self.mock_hpo_bucket.call_count, 0)
self.assertEqual(self.mock_drc_bucket.call_count, 0)
self.assertEqual(self.mock_validation_report.call_count, 0)
def test_match_participants_same_participant_simulate_location_pii_read_errors(self):
# pre conditions
self.mock_location_pii.side_effect = googleapiclient.errors.HttpError(500, 'bar', 'baz')
# test
id_match.match_participants(
self.project,
self.rdr_dataset,
self.pii_dataset,
self.dest_dataset
)
# post conditions
self.assertEqual(self.mock_dest_dataset.call_count, 1)
self.assertEqual(
self.mock_dest_dataset.assert_called_with(
dataset_id=self.dest_dataset,
description=consts.DESTINATION_DATASET_DESCRIPTION.format(
version='', rdr_dataset=self.rdr_dataset, ehr_dataset=self.pii_dataset
),
overwrite_existing=True
),
None
)
self.assertEqual(self.mock_match_tables.call_count, 1)
self.assertEqual(
self.mock_match_tables.assert_called_with(
self.project, self.rdr_dataset, self.dest_dataset
),
None
)
self.assertEqual(self.mock_site_names.call_count, 1)
self.assertEqual(
self.mock_site_names.assert_called_once_with(),
None
)
num_sites = len(self.site_list)
self.assertEqual(self.mock_pii_match_tables.call_count, num_sites)
self.assertEqual(self.mock_ehr_person.call_count, num_sites * 2)
self.assertEqual(self.mock_rdr_values.call_count, num_sites * 12)
self.assertEqual(self.mock_pii_values.call_count, num_sites * 5)
self.assertEqual(self.mock_table_append.call_count, num_sites * 7)
self.assertEqual(self.mock_location_pii.call_count, num_sites * 4)
self.assertEqual(self.mock_merge_fields.call_count, num_sites)
self.assertEqual(self.mock_remove_sparse_records.call_count, num_sites)
self.assertEqual(self.mock_change_nulls.call_count, num_sites)
self.assertEqual(self.mock_hpo_bucket.call_count, 0)
self.assertEqual(self.mock_drc_bucket.call_count, 0)
self.assertEqual(self.mock_validation_report.call_count, 0)
def test_match_participants_same_participant_simulate_pii_read_errors(self):
# pre conditions
self.mock_pii_values.side_effect = googleapiclient.errors.HttpError(500, 'bar', 'baz')
# test
id_match.match_participants(
self.project,
self.rdr_dataset,
self.pii_dataset,
self.dest_dataset
)
# post conditions
self.assertEqual(self.mock_dest_dataset.call_count, 1)
self.assertEqual(
self.mock_dest_dataset.assert_called_with(
dataset_id=self.dest_dataset,
description=consts.DESTINATION_DATASET_DESCRIPTION.format(
version='', rdr_dataset=self.rdr_dataset, ehr_dataset=self.pii_dataset
),
overwrite_existing=True
),
None
)
self.assertEqual(self.mock_match_tables.call_count, 1)
self.assertEqual(
self.mock_match_tables.assert_called_with(
self.project, self.rdr_dataset, self.dest_dataset
),
None
)
self.assertEqual(self.mock_site_names.call_count, 1)
self.assertEqual(
self.mock_site_names.assert_called_once_with(),
None
)
num_sites = len(self.site_list)
self.assertEqual(self.mock_pii_match_tables.call_count, num_sites)
self.assertEqual(self.mock_ehr_person.call_count, num_sites * 2)
self.assertEqual(self.mock_rdr_values.call_count, num_sites * 12)
self.assertEqual(self.mock_pii_values.call_count, num_sites * 5)
self.assertEqual(self.mock_table_append.call_count, num_sites * 7)
self.assertEqual(self.mock_location_pii.call_count, num_sites * 5)
self.assertEqual(self.mock_merge_fields.call_count, num_sites)
self.assertEqual(self.mock_remove_sparse_records.call_count, num_sites)
self.assertEqual(self.mock_change_nulls.call_count, num_sites)
self.assertEqual(self.mock_hpo_bucket.call_count, 0)
self.assertEqual(self.mock_drc_bucket.call_count, 0)
self.assertEqual(self.mock_validation_report.call_count, 0)
def test_write_results_to_site_buckets(self):
# pre conditions
# test
id_match.write_results_to_site_buckets(self.project, self.dest_dataset)
# post conditions
num_sites = len(self.site_list)
self.assertEqual(self.mock_hpo_bucket.call_count, num_sites)
site_filename = os.path.join(
consts.REPORT_DIRECTORY.format(date=self.date_string), consts.REPORT_TITLE
)
drc_filename = os.path.join(self.dest_dataset, consts.REPORT_TITLE)
expected_report_calls = [
call(self.project, self.dest_dataset, [self.site_list[0]], self.bucket_ids[0], site_filename),
call(self.project, self.dest_dataset, [self.site_list[1]], self.bucket_ids[1], site_filename),
]
self.assertEqual(self.mock_validation_report.mock_calls, expected_report_calls)
def test_write_results_to_site_buckets_simulate_errors(self):
# pre conditions
self.mock_validation_report.return_value = ({}, 2)
# test
id_match.write_results_to_site_buckets(self.project, self.dest_dataset)
# post conditions
num_sites = len(self.site_list)
self.assertEqual(self.mock_hpo_bucket.call_count, num_sites)
site_filename = os.path.join(
consts.REPORT_DIRECTORY.format(date=self.date_string), consts.REPORT_TITLE
)
drc_filename = os.path.join(self.dest_dataset, consts.REPORT_TITLE)
expected_report_calls = [
call(self.project, self.dest_dataset, [self.site_list[0]], self.bucket_ids[0], site_filename),
call(self.project, self.dest_dataset, [self.site_list[1]], self.bucket_ids[1], site_filename),
]
self.assertEqual(self.mock_validation_report.mock_calls, expected_report_calls)
def test_write_results_to_site_buckets_None_dataset(self):
# pre conditions
# test
self.assertRaises(
RuntimeError,
id_match.write_results_to_site_buckets,
self.project,
None)
def test_write_results_to_drc_bucket(self):
# pre conditions
# test
id_match.write_results_to_drc_bucket(self.project, self.dest_dataset)
# post conditions
self.assertEqual(self.mock_drc_bucket.call_count, 1)
drc_filename = os.path.join(
self.dest_dataset,
consts.REPORT_DIRECTORY.format(date=self.date_string),
consts.REPORT_TITLE
)
expected_report_calls = [
call(self.project, self.dest_dataset, self.site_list, self.internal_bucket_id, drc_filename)
]
self.assertEqual(self.mock_validation_report.mock_calls, expected_report_calls)
def test_write_results_to_drc_bucket_simulate_error(self):
# pre conditions
self.mock_validation_report.return_value = ({}, 2)
# test
id_match.write_results_to_drc_bucket(self.project, self.dest_dataset)
# post conditions
self.assertEqual(self.mock_drc_bucket.call_count, 1)
drc_filename = os.path.join(
self.dest_dataset,
consts.REPORT_DIRECTORY.format(date=self.date_string),
consts.REPORT_TITLE
)
expected_report_calls = [
call(self.project, self.dest_dataset, self.site_list, self.internal_bucket_id, drc_filename)
]
self.assertEqual(self.mock_validation_report.mock_calls, expected_report_calls)
def test_write_results_to_drc_bucket_None_dataset(self):
# pre conditions
# test
self.assertRaises(
RuntimeError,
id_match.write_results_to_drc_bucket,
self.project,
None)
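# The setUp above repeats a patch/start/addCleanup idiom. A minimal
# self-contained sketch of that pattern (the patched target and names
# here are illustrative, not from the repo):

```python
import unittest
from unittest.mock import patch

class PatcherPatternTest(unittest.TestCase):
    def setUp(self):
        # Start a patcher manually and register its stop() so the
        # patch is undone even if a later setUp step raises.
        patcher = patch('os.getcwd')
        self.mock_getcwd = patcher.start()
        self.mock_getcwd.return_value = '/fake/dir'
        self.addCleanup(patcher.stop)

    def test_patched(self):
        import os
        self.assertEqual(os.getcwd(), '/fake/dir')

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(PatcherPatternTest))
```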
# File: evaluations/aachen/matchers.py
# Repo: The-Learning-And-Vision-Atelier-LAVA/PoSFeat (license: Apache-2.0)
import torch
# Mutual nearest neighbors matcher for L2 normalized descriptors.
def mutual_nn_matcher(descriptors1, descriptors2):
device = descriptors1.device
sim = descriptors1 @ descriptors2.t()
nn12 = torch.max(sim, dim=1)[1]
nn21 = torch.max(sim, dim=0)[1]
ids1 = torch.arange(0, sim.shape[0], device=device)
mask = ids1 == nn21[nn12]
matches = torch.stack([ids1[mask], nn12[mask]]).t()
return matches.data.cpu().numpy()
# Symmetric Lowe's ratio test matcher for L2 normalized descriptors.
def ratio_matcher(descriptors1, descriptors2, ratio=0.95):
device = descriptors1.device
sim = descriptors1 @ descriptors2.t()
# Retrieve top 2 nearest neighbors 1->2.
nns_sim, nns = torch.topk(sim, 2, dim=1)
nns_dist = torch.sqrt(2 - 2 * nns_sim)
# Compute Lowe's ratio.
ratios12 = nns_dist[:, 0] / (nns_dist[:, 1] + 1e-8)
# Save first NN.
nn12 = nns[:, 0]
# Retrieve top 2 nearest neighbors 2->1.
nns_sim, nns = torch.topk(sim.t(), 2, dim=1)
nns_dist = torch.sqrt(2 - 2 * nns_sim)
# Compute Lowe's ratio.
ratios21 = nns_dist[:, 0] / (nns_dist[:, 1] + 1e-8)
# Save first NN.
nn21 = nns[:, 0]
# Symmetric ratio test.
ids1 = torch.arange(0, sim.shape[0], device=device)
mask = torch.min(ratios12 <= ratio, ratios21[nn12] <= ratio)
# Final matches.
matches = torch.stack([ids1[mask], nn12[mask]], dim=-1)
return matches.data.cpu().numpy()
# Mutual NN + symmetric Lowe's ratio test matcher for L2 normalized descriptors.
def mutual_nn_ratio_matcher(descriptors1, descriptors2, ratio=0.95):
device = descriptors1.device
sim = descriptors1 @ descriptors2.t()
# Retrieve top 2 nearest neighbors 1->2.
nns_sim, nns = torch.topk(sim, 2, dim=1)
nns_dist = torch.sqrt(2 - 2 * nns_sim)
# Compute Lowe's ratio.
ratios12 = nns_dist[:, 0] / (nns_dist[:, 1] + 1e-8)
# Save first NN.
nn12 = nns[:, 0]
# Retrieve top 2 nearest neighbors 2->1.
nns_sim, nns = torch.topk(sim.t(), 2, dim=1)
nns_dist = torch.sqrt(2 - 2 * nns_sim)
# Compute Lowe's ratio.
ratios21 = nns_dist[:, 0] / (nns_dist[:, 1] + 1e-8)
# Save first NN.
nn21 = nns[:, 0]
# Mutual NN + symmetric ratio test.
ids1 = torch.arange(0, sim.shape[0], device=device)
mask = torch.min(ids1 == nn21[nn12], torch.min(ratios12 <= ratio, ratios21[nn12] <= ratio))
# Final matches.
matches = torch.stack([ids1[mask], nn12[mask]], dim=-1)
return matches.data.cpu().numpy()
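# The mutual-nearest-neighbor logic above can be sketched in NumPy on a
# tiny example (the function and inputs here are illustrative, not part
# of the repo):

```python
import numpy as np

def mutual_nn_matcher_np(d1, d2):
    # Similarity for L2-normalized descriptors is the dot product.
    sim = d1 @ d2.T
    nn12 = sim.argmax(axis=1)   # best match in d2 for each row of d1
    nn21 = sim.argmax(axis=0)   # best match in d1 for each row of d2
    ids1 = np.arange(sim.shape[0])
    mask = ids1 == nn21[nn12]   # keep pairs that agree in both directions
    return np.stack([ids1[mask], nn12[mask]], axis=1)

# Rows 0 and 1 are mutual matches; row 2 of d1 has no partner in d2.
d1 = np.eye(3)
d2 = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 0.0]])
matches = mutual_nn_matcher_np(d1, d2)
print(matches)  # [[0 0]
                #  [1 1]]
```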
# File: model/resize.py
# Repo: jingkunchen/MS-CMR_miccai_2019 (license: MIT)
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import print_function
import os
import numpy as np
import SimpleITK as sitk
import scipy.misc
from skimage.transform import resize
# from scipy.misc import imresize
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from scipy import ndimage
import cv2
import time
from decimal import Decimal
import skimage.io as io
data_dir = '/Users/chenjingkun/Documents/data/C0LET2_nii45_for_challenge19/c0t2lge/'
thresh = 1
new_shape = (480,480)
rows = 256
cols = 256
xmin = 1
xmax = 1
ymin = 1
ymax = 1
xlenmin = 1
ylenmin = 1
img_count = 0
def show_img(data):
# for i in range(data.shape[0]):
# io.imshow(data[i, :, :], cmap='gray')
io.imshow(data[:,:], cmap = 'gray')
io.show()
def show_img_all(data):
for i in range(data.shape[0]):
io.imshow(data[i, :, :], cmap='gray')
# io.imshow(data[:,:], cmap = 'gray')
io.show()
# label transform, 500-->1, 200-->2, 600-->3
###### LGE
LGE_data_1ch = []
LGE_gt_1ch = []
img_dir = '/Users/chenjingkun/Documents/data/C0LET2_nii45_for_challenge19/lge_images/'
if not os.path.exists(img_dir):
os.makedirs(img_dir)
gt_dir_1 = '/Users/chenjingkun/Documents/data/C0LET2_nii45_for_challenge19/lgegt/'
lge_list = []
for pp in range(1, 6):
data_name = data_dir + 'patient' + str(pp) + '_LGE.nii.gz'
gt_name = gt_dir_1 + 'patient' + str(pp) + '_LGE_manual.nii.gz'
img = sitk.ReadImage(os.path.join(gt_name))
data_array = sitk.GetArrayFromImage(sitk.ReadImage(
os.path.join(data_name)))
gt_array = sitk.GetArrayFromImage(sitk.ReadImage(os.path.join(gt_name)))
img_count +=gt_array.shape[0]
print(np.shape(data_array))
x = []
y = []
new_gt_list = []
for image in gt_array:
image = np.asarray(image)
image1 = image.copy()
image2 = image.copy()
image[image == 500] = 1
image[image == 200] = 0
image[image == 600] = 0
image1[image1 == 500] = 0
image1[image1 == 200] = 1
image1[image1 == 600] = 0
image2[image2 == 500] = 0
image2[image2 == 200] = 0
image2[image2 == 600] = 1
image = resize(image,new_shape, preserve_range =True)
image1 = resize(image1,new_shape, preserve_range =True)
image2 = resize(image2,new_shape, preserve_range =True)
image = np.around(image)
image1 = np.around(image1)
image2 = np.around(image2)
image = image.astype(np.int32)
image1 = image1.astype(np.int32)
image2 = image2.astype(np.int32)
image[image == 1] = 1
image1[image1 == 1] = 2
image2[image2 == 1] = 3
image = image +image1 +image2
[x_test, y_test] = image.shape
for i in range(x_test):
for j in range(y_test):
if(image[i, j] >3) :
print("--------error----------:", pp)
image[image == 1] = 500
image[image == 2] = 200
image[image == 3] = 600
for i in range(image.shape[0]):
for j in range(image.shape[1]):
if image[i][j] != 0:
if j < 40 or i < 40:
image[0:200, 0:50] = 0
else:
x.append(i)
y.append(j)
new_gt_list.append(image)
gt_array=np.array(new_gt_list)
print("new_array:",gt_array.shape)
new_data_list = []
print("idx:", pp)
for image in data_array:
image = np.asarray(image)
image = resize(image, new_shape, preserve_range =True)
image = np.around(image)
image = image.astype(np.int32)
new_data_list.append(image)
data_array=np.array(new_data_list)
print(min(x),max(x),max(x)-min(x),round(min(x)/np.shape(gt_array)[1],2), round(max(x)/np.shape(gt_array)[1],2))
print(min(y),max(y),max(y)-min(y),round(min(y)/np.shape(gt_array)[1],2), round(max(y)/np.shape(gt_array)[1],2))
mask = np.zeros(np.shape(data_array), dtype='float32')
mask[data_array >= thresh] = 1
mask[data_array < thresh] = 0
for iii in range(np.shape(data_array)[0]):
mask[iii, :, :] = scipy.ndimage.morphology.binary_fill_holes(
mask[iii, :, :]) #fill the holes inside br
data_array = data_array - np.mean(data_array[mask == 1])
data_array /= np.std(data_array[mask == 1])
rows_o = np.shape(data_array)[1]
cols_o = np.shape(data_array)[2]
data_array_ = data_array[:,
int((rows_o - rows) /
2):int((rows_o - rows) / 2) + rows,
int((cols_o - cols) /
2):int((cols_o - cols) / 2) + cols]
gt_array_ = gt_array[:,
int((rows_o - rows) /
2):int((rows_o - rows) / 2) + rows,
int((cols_o - cols) / 2):int((cols_o - cols) / 2) +
cols]
LGE_data_1ch.extend(np.float32(data_array_))
LGE_gt_1ch.extend(np.float32(gt_array_))
LGE_data_1ch = np.asarray(LGE_data_1ch)
LGE_gt_1ch = np.asarray(LGE_gt_1ch)
LGE_gt_1ch[LGE_gt_1ch == 500] = 1
LGE_gt_1ch[LGE_gt_1ch == 200] = 2
LGE_gt_1ch[LGE_gt_1ch == 600] = 3
# np.save('LGE_data_1ch_old.npy', LGE_data_1ch)
# np.save('LGE_gt_1ch_old.npy', LGE_gt_1ch)
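# The per-class copies above implement a label remap (500 -> 1,
# 200 -> 2, 600 -> 3). A minimal one-pass NumPy sketch of the same
# mapping (the function name is illustrative):

```python
import numpy as np

def remap_labels(gt, mapping):
    # Write each destination label where the source label matches;
    # unmapped voxels stay 0.
    out = np.zeros_like(gt)
    for src, dst in mapping.items():
        out[gt == src] = dst
    return out

gt = np.array([[0, 500],
               [200, 600]])
remapped = remap_labels(gt, {500: 1, 200: 2, 600: 3})
print(remapped)  # [[0 1]
                 #  [2 3]]
```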
##### T2
T2_data_1ch = []
T2_gt_1ch = []
T2_shape = []
img_dir = '/Users/chenjingkun/Documents/data/C0LET2_nii45_for_challenge19/t2_images/'
if not os.path.exists(img_dir):
os.makedirs(img_dir)
gt_dir_1 = '/Users/chenjingkun/Documents/data/C0LET2_nii45_for_challenge19/t2gt/'
for pp in range(1, 36):
data_name = data_dir + 'patient' + str(pp) + '_T2.nii.gz'
gt_name = gt_dir_1 + 'patient' + str(pp) + '_T2_manual.nii.gz'
data_array = sitk.GetArrayFromImage(sitk.ReadImage(
os.path.join(data_name)))
gt_array = sitk.GetArrayFromImage(sitk.ReadImage(os.path.join(gt_name)))
data_array = np.nan_to_num(data_array, copy=True)
gt_array = np.nan_to_num(gt_array, copy=True)
img_count +=gt_array.shape[0]
x = []
y = []
count = 0
print("idx:", pp)
new_gt_list = []
for image in gt_array:
image = np.asarray(image)
image1 = image.copy()
image2 = image.copy()
image[image == 500] = 1
image[image == 200] = 0
image[image == 600] = 0
image1[image1 == 500] = 0
image1[image1 == 200] = 1
image1[image1 == 600] = 0
image2[image2 == 500] = 0
image2[image2 == 200] = 0
image2[image2 == 600] = 1
image = resize(image,new_shape, preserve_range =True)
image1 = resize(image1,new_shape, preserve_range =True)
image2 = resize(image2,new_shape, preserve_range =True)
image = np.around(image)
image1 = np.around(image1)
image2 = np.around(image2)
image = image.astype(np.int32)
image1 = image1.astype(np.int32)
image2 = image2.astype(np.int32)
image[image == 1] = 1
image1[image1 == 1] = 2
image2[image2 == 1] = 3
image = image +image1 +image2
[x_test, y_test] = image.shape
for i in range(x_test):
for j in range(y_test):
if(image[i, j] >3) :
print("--------error----------:", pp)
image[image == 1] = 500
image[image == 2] = 200
image[image == 3] = 600
for i in range(image.shape[0]):
for j in range(image.shape[1]):
if image[i][j] != 0:
if j < 40 or i < 40:
image[0:200, 0:50] = 0
else:
x.append(i)
y.append(j)
new_gt_list.append(image)
print("new_gt_list:",len(new_gt_list))
gt_array=np.array(new_gt_list)
print("new_array:",gt_array.shape)
new_data_list = []
for image in data_array:
image = np.asarray(image)
image = resize(image, new_shape, preserve_range =True)
image = np.around(image)
image = image.astype(np.int32)
new_data_list.append(image)
data_array=np.array(new_data_list)
print(min(x), max(x),
max(x) - min(x), round(min(x) / np.shape(gt_array)[1], 2),
round(max(x) / np.shape(gt_array)[1], 2))
print(min(y), max(y),
max(y) - min(y), round(min(y) / np.shape(gt_array)[1], 2),
round(max(y) / np.shape(gt_array)[1], 2))
if(round(min(x)/np.shape(gt_array)[1],2) < 0.2 or round(min(y)/np.shape(gt_array)[1],2)<0.2):
print("errorerrorerrorerrorerrorerror")
show_img(gt_array)
    # Threshold a rough body mask, fill its holes, then normalise
    # intensities using statistics computed inside the mask only.
    mask = np.zeros(np.shape(data_array), dtype='float32')
    mask[data_array >= thresh] = 1
    mask[data_array < thresh] = 0
    for iii in range(np.shape(data_array)[0]):
        mask[iii, :, :] = scipy.ndimage.binary_fill_holes(
            mask[iii, :, :])  # fill the holes inside the body mask
    data_array = data_array - np.mean(data_array[mask == 1])
    data_array /= np.std(data_array[mask == 1])
    # Centre-crop both volumes to rows x cols.
    rows_o = np.shape(data_array)[1]
    cols_o = np.shape(data_array)[2]
    data_array_ = data_array[:,
                             int((rows_o - rows) / 2):int((rows_o - rows) / 2) + rows,
                             int((cols_o - cols) / 2):int((cols_o - cols) / 2) + cols]
    gt_array_ = gt_array[:,
                         int((rows_o - rows) / 2):int((rows_o - rows) / 2) + rows,
                         int((cols_o - cols) / 2):int((cols_o - cols) / 2) + cols]
    T2_data_1ch.extend(np.float32(data_array_))
    T2_gt_1ch.extend(np.float32(gt_array_))
T2_data_1ch = np.asarray(T2_data_1ch)
T2_gt_1ch = np.asarray(T2_gt_1ch)
T2_gt_1ch[T2_gt_1ch == 500] = 1
T2_gt_1ch[T2_gt_1ch == 200] = 2
T2_gt_1ch[T2_gt_1ch == 600] = 3
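The three per-value reassignments above (500 → 1, 200 → 2, 600 → 3) recur for every modality; they can be expressed once as a small vectorized helper. A minimal sketch, assuming NumPy only (the helper name `remap_labels` is ours, not part of the original script):

```python
import numpy as np

def remap_labels(mask, mapping):
    """Vectorized label remap: raw label values -> class indices.

    Values not listed in `mapping` are left at 0. Hypothetical helper,
    shown for illustration only.
    """
    out = np.zeros_like(mask)
    for raw, cls in mapping.items():
        out[mask == raw] = cls
    return out

demo = np.array([[0, 500], [200, 600]])
remapped = remap_labels(demo, {500: 1, 200: 2, 600: 3})
print(remapped)
```

Because the mapping writes into a fresh zero array, it also avoids the ordering hazard of chained in-place assignments (e.g. remapping 500 → 200 before handling the original 200s).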
####### C0
C0_data_1ch = []
C0_gt_1ch = []
img_dir = '/Users/chenjingkun/Documents/data/C0LET2_nii45_for_challenge19/c0_images/'
if not os.path.exists(img_dir):
    os.makedirs(img_dir)
gt_dir_1 = '/Users/chenjingkun/Documents/data/C0LET2_nii45_for_challenge19/c0gt/'
for pp in range(1, 36):
    data_name = data_dir + 'patient' + str(pp) + '_C0.nii.gz'
    gt_name = gt_dir_1 + 'patient' + str(pp) + '_C0_manual.nii.gz'
    data_array = sitk.GetArrayFromImage(sitk.ReadImage(data_name))
    gt_array = sitk.GetArrayFromImage(sitk.ReadImage(gt_name))
    print(np.shape(data_array))
    img_count += gt_array.shape[0]
    x = []
    y = []
    # Collect labelled pixel coordinates and flag labels near the border.
    for image in gt_array:
        for i in range(image.shape[0]):
            for j in range(image.shape[1]):
                if image[i][j] != 0:
                    if i < 30 or j < 30:
                        print("label_error:", pp, image.shape)
                    else:
                        x.append(i)
                        y.append(j)
    new_gt_list = []
    for image in gt_array:
        image = np.asarray(image)
        image1 = image.copy()
        image2 = image.copy()
        # Split the mask into one binary channel per raw label value
        # (500, 200, 600).
        image[image == 500] = 1
        image[image == 200] = 0
        image[image == 600] = 0
        image1[image1 == 500] = 0
        image1[image1 == 200] = 1
        image1[image1 == 600] = 0
        image2[image2 == 500] = 0
        image2[image2 == 200] = 0
        image2[image2 == 600] = 1
        # Resize each channel separately so interpolation cannot blend labels.
        image = resize(image, new_shape, preserve_range=True)
        image1 = resize(image1, new_shape, preserve_range=True)
        image2 = resize(image2, new_shape, preserve_range=True)
        image = np.around(image)
        image1 = np.around(image1)
        image2 = np.around(image2)
        image = image.astype(np.int32)
        image1 = image1.astype(np.int32)
        image2 = image2.astype(np.int32)
        # Recombine the channels as class indices 1/2/3.
        image[image == 1] = 1
        image1[image1 == 1] = 2
        image2[image2 == 1] = 3
        image = image + image1 + image2
        [x_test, y_test] = image.shape
        for i in range(x_test):
            for j in range(y_test):
                if image[i, j] > 3:
                    # Channels overlapped after resizing; sums above 3 are invalid.
                    print("--------error----------:", pp)
        # Map the class indices back to the raw label values.
        image[image == 1] = 500
        image[image == 2] = 200
        image[image == 3] = 600
        for i in range(image.shape[0]):
            for j in range(image.shape[1]):
                if image[i][j] != 0:
                    if j < 40 or i < 40:
                        # Labels too close to the border: blank the corner region.
                        image[0:200, 0:50] = 0
                    else:
                        x.append(i)
                        y.append(j)
        new_gt_list.append(image)
    print("new_gt_list:", len(new_gt_list))
    gt_array = np.array(new_gt_list)
    print("new_array:", gt_array.shape)
    new_data_list = []
    for image in data_array:
        image = np.asarray(image)
        image = resize(image, new_shape, preserve_range=True)
        image = np.around(image)
        image = image.astype(np.int32)
        new_data_list.append(image)
    data_array = np.array(new_data_list)
    print("idx:", pp)
    print(min(x), max(x),
          max(x) - min(x), round(min(x) / np.shape(gt_array)[1], 2),
          round(max(x) / np.shape(gt_array)[1], 2))
    print(min(y), max(y),
          max(y) - min(y), round(min(y) / np.shape(gt_array)[1], 2),
          round(max(y) / np.shape(gt_array)[1], 2))
    # Threshold a rough body mask, fill its holes, then normalise
    # intensities using statistics computed inside the mask only.
    mask = np.zeros(np.shape(data_array), dtype='float32')
    mask[data_array >= thresh] = 1
    mask[data_array < thresh] = 0
    for iii in range(np.shape(data_array)[0]):
        mask[iii, :, :] = scipy.ndimage.binary_fill_holes(
            mask[iii, :, :])  # fill the holes inside the body mask
    data_array = data_array - np.mean(data_array[mask == 1])
    data_array /= np.std(data_array[mask == 1])
    # Centre-crop both volumes to rows x cols.
    rows_o = np.shape(data_array)[1]
    cols_o = np.shape(data_array)[2]
    data_array_ = data_array[:,
                             int((rows_o - rows) / 2):int((rows_o - rows) / 2) + rows,
                             int((cols_o - cols) / 2):int((cols_o - cols) / 2) + cols]
    gt_array_ = gt_array[:,
                         int((rows_o - rows) / 2):int((rows_o - rows) / 2) + rows,
                         int((cols_o - cols) / 2):int((cols_o - cols) / 2) + cols]
    C0_data_1ch.extend(np.float32(data_array_))
    C0_gt_1ch.extend(np.float32(gt_array_))
C0_data_1ch = np.asarray(C0_data_1ch)
C0_gt_1ch = np.asarray(C0_gt_1ch)
C0_gt_1ch[C0_gt_1ch == 500] = 1
C0_gt_1ch[C0_gt_1ch == 200] = 2
C0_gt_1ch[C0_gt_1ch == 600] = 3
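The channel-split / resize / recombine logic is duplicated verbatim for each modality. It could be factored into one reusable function; a sketch assuming `skimage.transform.resize` (the function name `resize_label_mask` and its defaults are ours):

```python
import numpy as np
from skimage.transform import resize

def resize_label_mask(mask, new_shape, raw_values=(500, 200, 600)):
    """Resize a multi-valued label mask one binary channel at a time,
    as the T2/C0 loops above do by hand, so that interpolation never
    blends neighbouring label values. Illustrative sketch only.
    """
    out = np.zeros(new_shape, dtype=np.int32)
    for raw in raw_values:
        # Resize this class as a 0/1 float channel, then threshold back.
        channel = resize((mask == raw).astype(np.float64),
                         new_shape, preserve_range=True)
        out[np.rint(channel).astype(bool)] = raw
    return out

m = np.zeros((8, 8), dtype=np.int32)
m[2:4, 2:4] = 500
m[5:7, 5:7] = 200
resized = resize_label_mask(m, (8, 8))
```

One behavioural difference from the loops above: overlapping channels after resizing simply overwrite each other here instead of summing past 3, so the `> 3` sanity check would not be needed.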
print("LGE_data_1ch:", LGE_data_1ch.shape)
print("C0_data_1ch:", C0_data_1ch.shape)
print("T2_data_1ch:", T2_data_1ch.shape)
print("LGE_gt_1ch:", LGE_gt_1ch.shape)
print("C0_gt_1ch:", C0_gt_1ch.shape)
print("T2_gt_1ch:", T2_gt_1ch.shape)
# Note: the LGE volumes are concatenated twice, so LGE is oversampled
# relative to C0 and T2 in the combined array.
new_data_array = np.concatenate((LGE_data_1ch, C0_data_1ch), axis=0)
new_data_array = np.concatenate((new_data_array, LGE_data_1ch), axis=0)
new_data_array = np.concatenate((new_data_array, T2_data_1ch), axis=0)
new_gt_array = np.concatenate((LGE_gt_1ch, C0_gt_1ch), axis=0)
new_gt_array = np.concatenate((new_gt_array, LGE_gt_1ch), axis=0)
new_gt_array = np.concatenate((new_gt_array, T2_gt_1ch), axis=0)
print("new_data_array:", new_data_array.shape)
print("new_gt_array:", new_gt_array.shape)
train_imgs_new = new_data_array.copy()
train_masks_new = new_gt_array.copy()
count_i = 0
count = 0
count_list = []
for image in new_gt_array:
    max_1 = np.max(image)
    if max_1 <= 0:
        # Drop slices whose ground truth contains no labels; `count`
        # offsets the index for slices already deleted. (The original
        # test was `max_1 < 0`, which never fires on non-negative labels.)
        delete_number = count_i - count
        train_imgs_new = np.delete(train_imgs_new, delete_number, axis=0)
        train_masks_new = np.delete(train_masks_new, delete_number, axis=0)
        count += 1
        print("empty:", count, count_i)
    count_i += 1
new_data_array = train_imgs_new
new_gt_array = train_masks_new
print("new_data_array:", new_data_array.shape)
print("new_gt_array:", new_gt_array.shape)
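The delete-in-a-loop bookkeeping above can be replaced by a single boolean index over the slice axis, which avoids repeated array reallocation. A self-contained sketch on synthetic arrays (the variable names `gt`/`imgs` stand in for `new_gt_array`/`new_data_array`):

```python
import numpy as np

# Synthetic stand-ins: slice 1 has labels, slices 0 and 2 are empty.
gt = np.stack([np.zeros((4, 4)), np.full((4, 4), 2.0), np.zeros((4, 4))])
imgs = np.arange(3 * 4 * 4, dtype=np.float32).reshape(3, 4, 4)

# Keep only slices whose mask contains at least one non-zero label.
keep = gt.reshape(gt.shape[0], -1).max(axis=1) > 0
imgs, gt = imgs[keep], gt[keep]
print(imgs.shape, gt.shape)  # (1, 4, 4) (1, 4, 4)
```

Both arrays are filtered with the same `keep` mask, so images and masks stay aligned by construction.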
# np.save('/Users/chenjingkun/Documents/result/MS-CMR_miccai_2019_result/del/all_data_resize_256_256.npy', new_data_array[:, :, :, np.newaxis])
# np.save('/Users/chenjingkun/Documents/result/MS-CMR_miccai_2019_result/del/all_gt_resize_256_256.npy', new_gt_array[:, :, :, np.newaxis])
# output_path = "/Users/chenjingkun/Documents/result/MS-CMR_miccai_2019_result/del/all_data_resize_256_256.nii.gz"
# sitk.WriteImage(sitk.GetImageFromArray(new_data_array),output_path)
# output_path = "/Users/chenjingkun/Documents/result/MS-CMR_miccai_2019_result/del/all_gt_resize_256_256.nii.gz"
# sitk.WriteImage(sitk.GetImageFromArray(new_gt_array),output_path)
print("img_count:",img_count)
print("new_gt_array:", new_gt_array.shape)
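The centre-crop slicing used to build `data_array_` and `gt_array_` in each modality loop can also be packaged as a helper; a sketch assuming (N, H, W) arrays (the function name `center_crop` is ours):

```python
import numpy as np

def center_crop(volume, rows, cols):
    """Crop each (H, W) slice of an (N, H, W) volume to rows x cols
    about the centre; same arithmetic as the inline slicing above."""
    rows_o, cols_o = volume.shape[1], volume.shape[2]
    r0 = (rows_o - rows) // 2
    c0 = (cols_o - cols) // 2
    return volume[:, r0:r0 + rows, c0:c0 + cols]

vol = np.arange(2 * 6 * 6).reshape(2, 6, 6)
cropped = center_crop(vol, 4, 4)
print(cropped.shape)  # (2, 4, 4)
```

For non-negative size differences, `int((rows_o - rows) / 2)` and `(rows_o - rows) // 2` agree, so this matches the script's behaviour.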