hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0ad5cebaebfc45c709ee5e2afc5821fe88e796c9 | 1,780 | py | Python | smart-bookmarks-django/smart_bookmarks/core/interfaces.py | tpiwonski/smart-bookmarks | 747968ad96242937c9369776bbf52bf44b4b87b4 | [
"MIT"
] | null | null | null | smart-bookmarks-django/smart_bookmarks/core/interfaces.py | tpiwonski/smart-bookmarks | 747968ad96242937c9369776bbf52bf44b4b87b4 | [
"MIT"
] | 1 | 2021-03-10T12:22:14.000Z | 2021-03-10T12:22:14.000Z | smart-bookmarks-django/smart_bookmarks/core/interfaces.py | tpiwonski/smart-bookmarks | 747968ad96242937c9369776bbf52bf44b4b87b4 | [
"MIT"
] | null | null | null | import abc
from dataclasses import dataclass
from typing import List, Optional, Sequence
from smart_bookmarks.core.models import Bookmark, Page
class CreateBookmarkInterface(abc.ABC):
@abc.abstractmethod
def create_bookmark(self, url: str) -> Bookmark:
"""TODO"""
@dataclass
class PageData:
title: str
description: str
text: str
source: str
class CreatePageInterface(abc.ABC):
@abc.abstractmethod
def create_page(self, bookmark: Bookmark, page_data: PageData) -> Page:
"""TODO"""
class ScrapePageInterface(abc.ABC):
@abc.abstractmethod
def scrape_page_async(self, bookmark: Bookmark):
"""TODO"""
class IndexBookmarkInterface(abc.ABC):
@abc.abstractmethod
def index_bookmark_async(self, bookmark: Bookmark):
"""TODO"""
@abc.abstractmethod
def index_bookmark(self, bookmark: Bookmark):
"""TODO"""
@abc.abstractmethod
def index_bookmark_by_id(self, bookmark_id):
"""TODO"""
@dataclass
class BookmarkSearchHighlights:
    url: Optional[List[str]] = None
    title: Optional[List[str]] = None
    description: Optional[List[str]] = None
    text: Optional[List[str]] = None
@dataclass
class BookmarkSearchResult:
    score: Optional[float] = None
    highlights: Optional[BookmarkSearchHighlights] = None
    bookmark: Optional[Bookmark] = None
@dataclass
class BookmarkSearchResults:
    total_results: Optional[int] = None
    max_score: Optional[float] = None
    results: Optional[List[BookmarkSearchResult]] = None
class SearchBookmarkInterface(abc.ABC):
@abc.abstractmethod
def search(
self, query: str, operator: str, page_number: int, per_page: int
) -> BookmarkSearchResults:
"""TODO"""
class ImportBookmarkInterface(abc.ABC):
@abc.abstractmethod
def import_file(self, file_path: str):
"""TODO"""
| 21.707317 | 75 | 0.685955 | 191 | 1,780 | 6.293194 | 0.303665 | 0.0599 | 0.133111 | 0.114809 | 0.27371 | 0.148087 | 0.094842 | 0.094842 | 0.094842 | 0 | 0 | 0 | 0.208427 | 1,780 | 81 | 76 | 21.975309 | 0.853087 | 0.02191 | 0 | 0.24 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012346 | 0 | 1 | 0.16 | false | 0 | 0.12 | 0 | 0.76 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
0ad60b7e0838ae42d4399110f13c786448c3005d | 281 | py | Python | modules/runtimes/PrefabLua.py | threefoldtech/jumpscale_prefab9 | 75cb6267618d9087d4a9a7eaad121a14e497f07d | [
"Apache-2.0"
] | null | null | null | modules/runtimes/PrefabLua.py | threefoldtech/jumpscale_prefab9 | 75cb6267618d9087d4a9a7eaad121a14e497f07d | [
"Apache-2.0"
] | 31 | 2018-07-31T15:40:07.000Z | 2019-02-20T11:07:15.000Z | modules/runtimes/PrefabLua.py | threefoldtech/jumpscale_prefab | 75cb6267618d9087d4a9a7eaad121a14e497f07d | [
"Apache-2.0"
] | null | null | null | from jumpscale import j
app = j.tools.prefab._getBaseAppClass()
class PrefabLua(app):
NAME = "lua"
def package(self, name, server=''):
if server:
server = '--server=' + server
self.prefab.core.run("luarocks install %s %s" % (server, name))
| 20.071429 | 71 | 0.601423 | 34 | 281 | 4.941176 | 0.647059 | 0.214286 | 0.214286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.252669 | 281 | 13 | 72 | 21.615385 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0.120996 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.125 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
0ae0fa03c55372ee815adc45b9952f7e256685c9 | 26,586 | py | Python | pythonocc/lib/OCC/ChFi2d.py | mbattistello/lambda_converters | 0dfef8927d3ddbc74e5c8ce1c38701ea182a27b3 | [
"MIT"
] | 2 | 2017-10-30T15:51:48.000Z | 2020-11-04T02:23:36.000Z | pythonocc/lib/OCC/ChFi2d.py | mbattistello/lambda_converters | 0dfef8927d3ddbc74e5c8ce1c38701ea182a27b3 | [
"MIT"
] | null | null | null | pythonocc/lib/OCC/ChFi2d.py | mbattistello/lambda_converters | 0dfef8927d3ddbc74e5c8ce1c38701ea182a27b3 | [
"MIT"
] | 4 | 2017-11-29T05:24:34.000Z | 2020-03-05T23:23:23.000Z | # This file was automatically generated by SWIG (http://www.swig.org).
# Version 2.0.10
#
# Do not make changes to this file unless you know what you are doing--modify
# the SWIG interface file instead.
from sys import version_info
if version_info >= (3,0,0):
new_instancemethod = lambda func, inst, cls: _ChFi2d.SWIG_PyInstanceMethod_New(func)
else:
from new import instancemethod as new_instancemethod
if version_info >= (2,6,0):
def swig_import_helper():
from os.path import dirname
import imp
fp = None
try:
fp, pathname, description = imp.find_module('_ChFi2d', [dirname(__file__)])
except ImportError:
import _ChFi2d
return _ChFi2d
if fp is not None:
try:
_mod = imp.load_module('_ChFi2d', fp, pathname, description)
finally:
fp.close()
return _mod
_ChFi2d = swig_import_helper()
del swig_import_helper
else:
import _ChFi2d
del version_info
try:
_swig_property = property
except NameError:
pass # Python < 2.2 doesn't have 'property'.
def _swig_setattr_nondynamic(self,class_type,name,value,static=1):
if (name == "thisown"): return self.this.own(value)
if (name == "this"):
if type(value).__name__ == 'SwigPyObject':
self.__dict__[name] = value
return
method = class_type.__swig_setmethods__.get(name,None)
if method: return method(self,value)
if (not static):
self.__dict__[name] = value
else:
raise AttributeError("You cannot add attributes to %s" % self)
def _swig_setattr(self,class_type,name,value):
return _swig_setattr_nondynamic(self,class_type,name,value,0)
def _swig_getattr(self,class_type,name):
if (name == "thisown"): return self.this.own()
method = class_type.__swig_getmethods__.get(name,None)
if method: return method(self)
raise AttributeError(name)
def _swig_repr(self):
try: strthis = "proxy of " + self.this.__repr__()
except: strthis = ""
return "<%s.%s; %s >" % (self.__class__.__module__, self.__class__.__name__, strthis,)
try:
_object = object
_newclass = 1
except AttributeError:
class _object : pass
_newclass = 0
def _swig_setattr_nondynamic_method(set):
def set_attr(self,name,value):
if (name == "thisown"): return self.this.own(value)
if hasattr(self,name) or (name == "this"):
set(self,name,value)
else:
raise AttributeError("You cannot add attributes to %s" % self)
return set_attr
class SwigPyIterator(object):
thisown = _swig_property(lambda x: x.this.own(), lambda x, v: x.this.own(v), doc='The membership flag')
def __init__(self, *args, **kwargs): raise AttributeError("No constructor defined - class is abstract")
__repr__ = _swig_repr
__swig_destroy__ = _ChFi2d.delete_SwigPyIterator
def __iter__(self): return self
SwigPyIterator.value = new_instancemethod(_ChFi2d.SwigPyIterator_value,None,SwigPyIterator)
SwigPyIterator.incr = new_instancemethod(_ChFi2d.SwigPyIterator_incr,None,SwigPyIterator)
SwigPyIterator.decr = new_instancemethod(_ChFi2d.SwigPyIterator_decr,None,SwigPyIterator)
SwigPyIterator.distance = new_instancemethod(_ChFi2d.SwigPyIterator_distance,None,SwigPyIterator)
SwigPyIterator.equal = new_instancemethod(_ChFi2d.SwigPyIterator_equal,None,SwigPyIterator)
SwigPyIterator.copy = new_instancemethod(_ChFi2d.SwigPyIterator_copy,None,SwigPyIterator)
SwigPyIterator.next = new_instancemethod(_ChFi2d.SwigPyIterator_next,None,SwigPyIterator)
SwigPyIterator.__next__ = new_instancemethod(_ChFi2d.SwigPyIterator___next__,None,SwigPyIterator)
SwigPyIterator.previous = new_instancemethod(_ChFi2d.SwigPyIterator_previous,None,SwigPyIterator)
SwigPyIterator.advance = new_instancemethod(_ChFi2d.SwigPyIterator_advance,None,SwigPyIterator)
SwigPyIterator.__eq__ = new_instancemethod(_ChFi2d.SwigPyIterator___eq__,None,SwigPyIterator)
SwigPyIterator.__ne__ = new_instancemethod(_ChFi2d.SwigPyIterator___ne__,None,SwigPyIterator)
SwigPyIterator.__iadd__ = new_instancemethod(_ChFi2d.SwigPyIterator___iadd__,None,SwigPyIterator)
SwigPyIterator.__isub__ = new_instancemethod(_ChFi2d.SwigPyIterator___isub__,None,SwigPyIterator)
SwigPyIterator.__add__ = new_instancemethod(_ChFi2d.SwigPyIterator___add__,None,SwigPyIterator)
SwigPyIterator.__sub__ = new_instancemethod(_ChFi2d.SwigPyIterator___sub__,None,SwigPyIterator)
SwigPyIterator_swigregister = _ChFi2d.SwigPyIterator_swigregister
SwigPyIterator_swigregister(SwigPyIterator)
import OCC.TopoDS
import OCC.MMgt
import OCC.Standard
import OCC.TCollection
import OCC.TopLoc
import OCC.gp
import OCC.TopAbs
import OCC.TopTools
import OCC.TColStd
import OCC.Message
def register_handle(handle, base_object):
"""
Inserts the handle into the base object to
prevent memory corruption in certain cases
"""
try:
if base_object.IsKind("Standard_Transient"):
base_object.thisHandle = handle
base_object.thisown = False
except:
pass
ChFi2d_NotPlanar = _ChFi2d.ChFi2d_NotPlanar
ChFi2d_NoFace = _ChFi2d.ChFi2d_NoFace
ChFi2d_InitialisationError = _ChFi2d.ChFi2d_InitialisationError
ChFi2d_ParametersError = _ChFi2d.ChFi2d_ParametersError
ChFi2d_Ready = _ChFi2d.ChFi2d_Ready
ChFi2d_IsDone = _ChFi2d.ChFi2d_IsDone
ChFi2d_ComputationError = _ChFi2d.ChFi2d_ComputationError
ChFi2d_ConnexionError = _ChFi2d.ChFi2d_ConnexionError
ChFi2d_TangencyError = _ChFi2d.ChFi2d_TangencyError
ChFi2d_FirstEdgeDegenerated = _ChFi2d.ChFi2d_FirstEdgeDegenerated
ChFi2d_LastEdgeDegenerated = _ChFi2d.ChFi2d_LastEdgeDegenerated
ChFi2d_BothEdgesDegenerated = _ChFi2d.ChFi2d_BothEdgesDegenerated
ChFi2d_NotAuthorized = _ChFi2d.ChFi2d_NotAuthorized
class chfi2d(object):
thisown = _swig_property(lambda x: x.this.own(), lambda x, v: x.this.own(v), doc='The membership flag')
def __init__(self, *args, **kwargs): raise AttributeError("No constructor defined")
__repr__ = _swig_repr
__swig_destroy__ = _ChFi2d.delete_chfi2d
chfi2d_swigregister = _ChFi2d.chfi2d_swigregister
chfi2d_swigregister(chfi2d)
class ChFi2d_AnaFilletAlgo(object):
thisown = _swig_property(lambda x: x.this.own(), lambda x, v: x.this.own(v), doc='The membership flag')
__repr__ = _swig_repr
def __init__(self, *args):
"""
* An empty constructor. Use the method Init() to initialize the class.
:rtype: None
* A constructor. It expects a wire consisting of two edges of type (any combination of): - segment - arc of circle.
:param theWire:
:type theWire: TopoDS_Wire &
:param thePlane:
:type thePlane: gp_Pln
:rtype: None
* A constructor. It expects two edges having a common point of type: - segment - arc of circle.
:param theEdge1:
:type theEdge1: TopoDS_Edge &
:param theEdge2:
:type theEdge2: TopoDS_Edge &
:param thePlane:
:type thePlane: gp_Pln
:rtype: None
"""
_ChFi2d.ChFi2d_AnaFilletAlgo_swiginit(self,_ChFi2d.new_ChFi2d_AnaFilletAlgo(*args))
def Init(self, *args):
"""
* Initializes the class by a wire consisting of two edges.
:param theWire:
:type theWire: TopoDS_Wire &
:param thePlane:
:type thePlane: gp_Pln
:rtype: None
* Initializes the class by two edges.
:param theEdge1:
:type theEdge1: TopoDS_Edge &
:param theEdge2:
:type theEdge2: TopoDS_Edge &
:param thePlane:
:type thePlane: gp_Pln
:rtype: None
"""
return _ChFi2d.ChFi2d_AnaFilletAlgo_Init(self, *args)
def Perform(self, *args):
"""
* Calculates a fillet.
:param radius:
:type radius: float
:rtype: bool
"""
return _ChFi2d.ChFi2d_AnaFilletAlgo_Perform(self, *args)
def Result(self, *args):
"""
        * Retrieves a result (fillet and shrunk neighbours).
:param e1:
:type e1: TopoDS_Edge &
:param e2:
:type e2: TopoDS_Edge &
:rtype: TopoDS_Edge
"""
return _ChFi2d.ChFi2d_AnaFilletAlgo_Result(self, *args)
__swig_destroy__ = _ChFi2d.delete_ChFi2d_AnaFilletAlgo
ChFi2d_AnaFilletAlgo.Init = new_instancemethod(_ChFi2d.ChFi2d_AnaFilletAlgo_Init,None,ChFi2d_AnaFilletAlgo)
ChFi2d_AnaFilletAlgo.Perform = new_instancemethod(_ChFi2d.ChFi2d_AnaFilletAlgo_Perform,None,ChFi2d_AnaFilletAlgo)
ChFi2d_AnaFilletAlgo.Result = new_instancemethod(_ChFi2d.ChFi2d_AnaFilletAlgo_Result,None,ChFi2d_AnaFilletAlgo)
ChFi2d_AnaFilletAlgo_swigregister = _ChFi2d.ChFi2d_AnaFilletAlgo_swigregister
ChFi2d_AnaFilletAlgo_swigregister(ChFi2d_AnaFilletAlgo)
class ChFi2d_Builder(object):
thisown = _swig_property(lambda x: x.this.own(), lambda x, v: x.this.own(v), doc='The membership flag')
__repr__ = _swig_repr
def __init__(self, *args):
"""
:rtype: None
* The face <F> can be build on a closed or an open wire.
:param F:
:type F: TopoDS_Face &
:rtype: None
"""
_ChFi2d.ChFi2d_Builder_swiginit(self,_ChFi2d.new_ChFi2d_Builder(*args))
def Init(self, *args):
"""
:param F:
:type F: TopoDS_Face &
:rtype: None
:param RefFace:
:type RefFace: TopoDS_Face &
:param ModFace:
:type ModFace: TopoDS_Face &
:rtype: None
"""
return _ChFi2d.ChFi2d_Builder_Init(self, *args)
def AddFillet(self, *args):
"""
* Add a fillet of radius <Radius> on the wire between the two edges connected to the vertex <V>. <AddFillet> returns the fillet edge. The returned edge has sense only if the status <status> is <IsDone>
:param V:
:type V: TopoDS_Vertex &
:param Radius:
:type Radius: float
:rtype: TopoDS_Edge
"""
return _ChFi2d.ChFi2d_Builder_AddFillet(self, *args)
def ModifyFillet(self, *args):
"""
        * modify the fillet radius and return the new fillet edge. This edge has sense only if the status <status> is <IsDone>.
:param Fillet:
:type Fillet: TopoDS_Edge &
:param Radius:
:type Radius: float
:rtype: TopoDS_Edge
"""
return _ChFi2d.ChFi2d_Builder_ModifyFillet(self, *args)
def RemoveFillet(self, *args):
"""
* removes the fillet <Fillet> and returns the vertex connecting the two adjacent edges to this fillet.
:param Fillet:
:type Fillet: TopoDS_Edge &
:rtype: TopoDS_Vertex
"""
return _ChFi2d.ChFi2d_Builder_RemoveFillet(self, *args)
def AddChamfer(self, *args):
"""
* Add a chamfer on the wire between the two edges connected <E1> and <E2>. <AddChamfer> returns the chamfer edge. This edge has sense only if the status <status> is <IsDone>.
:param E1:
:type E1: TopoDS_Edge &
:param E2:
:type E2: TopoDS_Edge &
:param D1:
:type D1: float
:param D2:
:type D2: float
:rtype: TopoDS_Edge
* Add a chamfer on the wire between the two edges connected to the vertex <V>. The chamfer will make an angle <Ang> with the edge <E>, and one of its extremities will be on <E> at distance <D>. The returned edge has sense only if the status <status> is <IsDone>. Warning: The value of <Ang> must be expressed in Radian.
:param E:
:type E: TopoDS_Edge &
:param V:
:type V: TopoDS_Vertex &
:param D:
:type D: float
:param Ang:
:type Ang: float
:rtype: TopoDS_Edge
"""
return _ChFi2d.ChFi2d_Builder_AddChamfer(self, *args)
def ModifyChamfer(self, *args):
"""
        * modify the chamfer <Chamfer> and returns the new chamfer edge. This edge has sense only if the status <status> is <IsDone>.
:param Chamfer:
:type Chamfer: TopoDS_Edge &
:param E1:
:type E1: TopoDS_Edge &
:param E2:
:type E2: TopoDS_Edge &
:param D1:
:type D1: float
:param D2:
:type D2: float
:rtype: TopoDS_Edge
        * modify the chamfer <Chamfer> and returns the new chamfer edge. This edge has sense only if the status <status> is <IsDone>. Warning: The value of <Ang> must be expressed in Radian.
:param Chamfer:
:type Chamfer: TopoDS_Edge &
:param E:
:type E: TopoDS_Edge &
:param D:
:type D: float
:param Ang:
:type Ang: float
:rtype: TopoDS_Edge
"""
return _ChFi2d.ChFi2d_Builder_ModifyChamfer(self, *args)
def RemoveChamfer(self, *args):
"""
* removes the chamfer <Chamfer> and returns the vertex connecting the two adjacent edges to this chamfer.
:param Chamfer:
:type Chamfer: TopoDS_Edge &
:rtype: TopoDS_Vertex
"""
return _ChFi2d.ChFi2d_Builder_RemoveChamfer(self, *args)
def Result(self, *args):
"""
* returns the modified face
:rtype: TopoDS_Face
"""
return _ChFi2d.ChFi2d_Builder_Result(self, *args)
def IsModified(self, *args):
"""
:param E:
:type E: TopoDS_Edge &
:rtype: bool
"""
return _ChFi2d.ChFi2d_Builder_IsModified(self, *args)
def FilletEdges(self, *args):
"""
* returns the list of new edges
:rtype: TopTools_SequenceOfShape
"""
return _ChFi2d.ChFi2d_Builder_FilletEdges(self, *args)
def NbFillet(self, *args):
"""
:rtype: int
"""
return _ChFi2d.ChFi2d_Builder_NbFillet(self, *args)
def ChamferEdges(self, *args):
"""
* returns the list of new edges
:rtype: TopTools_SequenceOfShape
"""
return _ChFi2d.ChFi2d_Builder_ChamferEdges(self, *args)
def NbChamfer(self, *args):
"""
:rtype: int
"""
return _ChFi2d.ChFi2d_Builder_NbChamfer(self, *args)
def HasDescendant(self, *args):
"""
:param E:
:type E: TopoDS_Edge &
:rtype: bool
"""
return _ChFi2d.ChFi2d_Builder_HasDescendant(self, *args)
def DescendantEdge(self, *args):
"""
* returns the modified edge if <E> has descendant or <E> in the other case.
:param E:
:type E: TopoDS_Edge &
:rtype: TopoDS_Edge
"""
return _ChFi2d.ChFi2d_Builder_DescendantEdge(self, *args)
def BasisEdge(self, *args):
"""
* Returns the parent edge of <E> Warning: If <E>is a basis edge, the returned edge would be equal to <E>
:param E:
:type E: TopoDS_Edge &
:rtype: TopoDS_Edge
"""
return _ChFi2d.ChFi2d_Builder_BasisEdge(self, *args)
def Status(self, *args):
"""
:rtype: ChFi2d_ConstructionError
"""
return _ChFi2d.ChFi2d_Builder_Status(self, *args)
__swig_destroy__ = _ChFi2d.delete_ChFi2d_Builder
ChFi2d_Builder.Init = new_instancemethod(_ChFi2d.ChFi2d_Builder_Init,None,ChFi2d_Builder)
ChFi2d_Builder.AddFillet = new_instancemethod(_ChFi2d.ChFi2d_Builder_AddFillet,None,ChFi2d_Builder)
ChFi2d_Builder.ModifyFillet = new_instancemethod(_ChFi2d.ChFi2d_Builder_ModifyFillet,None,ChFi2d_Builder)
ChFi2d_Builder.RemoveFillet = new_instancemethod(_ChFi2d.ChFi2d_Builder_RemoveFillet,None,ChFi2d_Builder)
ChFi2d_Builder.AddChamfer = new_instancemethod(_ChFi2d.ChFi2d_Builder_AddChamfer,None,ChFi2d_Builder)
ChFi2d_Builder.ModifyChamfer = new_instancemethod(_ChFi2d.ChFi2d_Builder_ModifyChamfer,None,ChFi2d_Builder)
ChFi2d_Builder.RemoveChamfer = new_instancemethod(_ChFi2d.ChFi2d_Builder_RemoveChamfer,None,ChFi2d_Builder)
ChFi2d_Builder.Result = new_instancemethod(_ChFi2d.ChFi2d_Builder_Result,None,ChFi2d_Builder)
ChFi2d_Builder.IsModified = new_instancemethod(_ChFi2d.ChFi2d_Builder_IsModified,None,ChFi2d_Builder)
ChFi2d_Builder.FilletEdges = new_instancemethod(_ChFi2d.ChFi2d_Builder_FilletEdges,None,ChFi2d_Builder)
ChFi2d_Builder.NbFillet = new_instancemethod(_ChFi2d.ChFi2d_Builder_NbFillet,None,ChFi2d_Builder)
ChFi2d_Builder.ChamferEdges = new_instancemethod(_ChFi2d.ChFi2d_Builder_ChamferEdges,None,ChFi2d_Builder)
ChFi2d_Builder.NbChamfer = new_instancemethod(_ChFi2d.ChFi2d_Builder_NbChamfer,None,ChFi2d_Builder)
ChFi2d_Builder.HasDescendant = new_instancemethod(_ChFi2d.ChFi2d_Builder_HasDescendant,None,ChFi2d_Builder)
ChFi2d_Builder.DescendantEdge = new_instancemethod(_ChFi2d.ChFi2d_Builder_DescendantEdge,None,ChFi2d_Builder)
ChFi2d_Builder.BasisEdge = new_instancemethod(_ChFi2d.ChFi2d_Builder_BasisEdge,None,ChFi2d_Builder)
ChFi2d_Builder.Status = new_instancemethod(_ChFi2d.ChFi2d_Builder_Status,None,ChFi2d_Builder)
ChFi2d_Builder_swigregister = _ChFi2d.ChFi2d_Builder_swigregister
ChFi2d_Builder_swigregister(ChFi2d_Builder)
class ChFi2d_ChamferAPI(object):
thisown = _swig_property(lambda x: x.this.own(), lambda x, v: x.this.own(v), doc='The membership flag')
__repr__ = _swig_repr
def __init__(self, *args):
"""
* An empty constructor.
:rtype: None
* A constructor accepting a wire consisting of two linear edges.
:param theWire:
:type theWire: TopoDS_Wire &
:rtype: None
* A constructor accepting two linear edges.
:param theEdge1:
:type theEdge1: TopoDS_Edge &
:param theEdge2:
:type theEdge2: TopoDS_Edge &
:rtype: None
"""
_ChFi2d.ChFi2d_ChamferAPI_swiginit(self,_ChFi2d.new_ChFi2d_ChamferAPI(*args))
def Init(self, *args):
"""
        * Initializes the class by a wire consisting of two linear edges.
:param theWire:
:type theWire: TopoDS_Wire &
:rtype: None
* Initializes the class by two linear edges.
:param theEdge1:
:type theEdge1: TopoDS_Edge &
:param theEdge2:
:type theEdge2: TopoDS_Edge &
:rtype: None
"""
return _ChFi2d.ChFi2d_ChamferAPI_Init(self, *args)
def Perform(self, *args):
"""
* Constructs a chamfer edge. Returns true if the edge is constructed.
:rtype: bool
"""
return _ChFi2d.ChFi2d_ChamferAPI_Perform(self, *args)
def Result(self, *args):
"""
:param theEdge1:
:type theEdge1: TopoDS_Edge &
:param theEdge2:
:type theEdge2: TopoDS_Edge &
:param theLength1:
:type theLength1: float
:param theLength2:
:type theLength2: float
:rtype: TopoDS_Edge
"""
return _ChFi2d.ChFi2d_ChamferAPI_Result(self, *args)
__swig_destroy__ = _ChFi2d.delete_ChFi2d_ChamferAPI
ChFi2d_ChamferAPI.Init = new_instancemethod(_ChFi2d.ChFi2d_ChamferAPI_Init,None,ChFi2d_ChamferAPI)
ChFi2d_ChamferAPI.Perform = new_instancemethod(_ChFi2d.ChFi2d_ChamferAPI_Perform,None,ChFi2d_ChamferAPI)
ChFi2d_ChamferAPI.Result = new_instancemethod(_ChFi2d.ChFi2d_ChamferAPI_Result,None,ChFi2d_ChamferAPI)
ChFi2d_ChamferAPI_swigregister = _ChFi2d.ChFi2d_ChamferAPI_swigregister
ChFi2d_ChamferAPI_swigregister(ChFi2d_ChamferAPI)
class ChFi2d_FilletAPI(object):
thisown = _swig_property(lambda x: x.this.own(), lambda x, v: x.this.own(v), doc='The membership flag')
__repr__ = _swig_repr
def __init__(self, *args):
"""
* An empty constructor of the fillet algorithm. Call a method Init() to initialize the algorithm before calling of a Perform() method.
:rtype: None
* A constructor of a fillet algorithm: accepts a wire consisting of two edges in a plane.
:param theWire:
:type theWire: TopoDS_Wire &
:param thePlane:
:type thePlane: gp_Pln
:rtype: None
* A constructor of a fillet algorithm: accepts two edges in a plane.
:param theEdge1:
:type theEdge1: TopoDS_Edge &
:param theEdge2:
:type theEdge2: TopoDS_Edge &
:param thePlane:
:type thePlane: gp_Pln
:rtype: None
"""
_ChFi2d.ChFi2d_FilletAPI_swiginit(self,_ChFi2d.new_ChFi2d_FilletAPI(*args))
def Init(self, *args):
"""
* Initializes a fillet algorithm: accepts a wire consisting of two edges in a plane.
:param theWire:
:type theWire: TopoDS_Wire &
:param thePlane:
:type thePlane: gp_Pln
:rtype: None
* Initializes a fillet algorithm: accepts two edges in a plane.
:param theEdge1:
:type theEdge1: TopoDS_Edge &
:param theEdge2:
:type theEdge2: TopoDS_Edge &
:param thePlane:
:type thePlane: gp_Pln
:rtype: None
"""
return _ChFi2d.ChFi2d_FilletAPI_Init(self, *args)
def Perform(self, *args):
"""
* Constructs a fillet edge. Returns true if at least one result was found.
:param theRadius:
:type theRadius: float
:rtype: bool
"""
return _ChFi2d.ChFi2d_FilletAPI_Perform(self, *args)
def NbResults(self, *args):
"""
        * Returns number of possible solutions. <thePoint> chooses a particular fillet in case several fillets may be constructed (for example, a circle intersecting a segment in 2 points). Put the intersecting (or common) point of the edges.
:param thePoint:
:type thePoint: gp_Pnt
:rtype: int
"""
return _ChFi2d.ChFi2d_FilletAPI_NbResults(self, *args)
def Result(self, *args):
"""
        * Returns result (fillet edge, modified edge1, modified edge2), nearest to the given point <thePoint> if iSolution == -1. <thePoint> chooses a particular fillet in case several fillets may be constructed (for example, a circle intersecting a segment in 2 points). Put the intersecting (or common) point of the edges.
:param thePoint:
:type thePoint: gp_Pnt
:param theEdge1:
:type theEdge1: TopoDS_Edge &
:param theEdge2:
:type theEdge2: TopoDS_Edge &
:param iSolution: default value is -1
:type iSolution: int
:rtype: TopoDS_Edge
"""
return _ChFi2d.ChFi2d_FilletAPI_Result(self, *args)
__swig_destroy__ = _ChFi2d.delete_ChFi2d_FilletAPI
ChFi2d_FilletAPI.Init = new_instancemethod(_ChFi2d.ChFi2d_FilletAPI_Init,None,ChFi2d_FilletAPI)
ChFi2d_FilletAPI.Perform = new_instancemethod(_ChFi2d.ChFi2d_FilletAPI_Perform,None,ChFi2d_FilletAPI)
ChFi2d_FilletAPI.NbResults = new_instancemethod(_ChFi2d.ChFi2d_FilletAPI_NbResults,None,ChFi2d_FilletAPI)
ChFi2d_FilletAPI.Result = new_instancemethod(_ChFi2d.ChFi2d_FilletAPI_Result,None,ChFi2d_FilletAPI)
ChFi2d_FilletAPI_swigregister = _ChFi2d.ChFi2d_FilletAPI_swigregister
ChFi2d_FilletAPI_swigregister(ChFi2d_FilletAPI)
class ChFi2d_FilletAlgo(object):
thisown = _swig_property(lambda x: x.this.own(), lambda x, v: x.this.own(v), doc='The membership flag')
__repr__ = _swig_repr
def __init__(self, *args):
"""
* An empty constructor of the fillet algorithm. Call a method Init() to initialize the algorithm before calling of a Perform() method.
:rtype: None
* A constructor of a fillet algorithm: accepts a wire consisting of two edges in a plane.
:param theWire:
:type theWire: TopoDS_Wire &
:param thePlane:
:type thePlane: gp_Pln
:rtype: None
* A constructor of a fillet algorithm: accepts two edges in a plane.
:param theEdge1:
:type theEdge1: TopoDS_Edge &
:param theEdge2:
:type theEdge2: TopoDS_Edge &
:param thePlane:
:type thePlane: gp_Pln
:rtype: None
"""
_ChFi2d.ChFi2d_FilletAlgo_swiginit(self,_ChFi2d.new_ChFi2d_FilletAlgo(*args))
def Init(self, *args):
"""
* Initializes a fillet algorithm: accepts a wire consisting of two edges in a plane.
:param theWire:
:type theWire: TopoDS_Wire &
:param thePlane:
:type thePlane: gp_Pln
:rtype: None
* Initializes a fillet algorithm: accepts two edges in a plane.
:param theEdge1:
:type theEdge1: TopoDS_Edge &
:param theEdge2:
:type theEdge2: TopoDS_Edge &
:param thePlane:
:type thePlane: gp_Pln
:rtype: None
"""
return _ChFi2d.ChFi2d_FilletAlgo_Init(self, *args)
def Perform(self, *args):
"""
        * Constructs a fillet edge. Returns true if at least one result was found.
:param theRadius:
:type theRadius: float
:rtype: bool
"""
return _ChFi2d.ChFi2d_FilletAlgo_Perform(self, *args)
def NbResults(self, *args):
"""
        * Returns number of possible solutions. <thePoint> chooses a particular fillet in case several fillets may be constructed (for example, a circle intersecting a segment in 2 points). Put the intersecting (or common) point of the edges.
:param thePoint:
:type thePoint: gp_Pnt
:rtype: int
"""
return _ChFi2d.ChFi2d_FilletAlgo_NbResults(self, *args)
def Result(self, *args):
"""
        * Returns result (fillet edge, modified edge1, modified edge2), nearest to the given point <thePoint> if iSolution == -1. <thePoint> chooses a particular fillet in case several fillets may be constructed (for example, a circle intersecting a segment in 2 points). Put the intersecting (or common) point of the edges.
:param thePoint:
:type thePoint: gp_Pnt
:param theEdge1:
:type theEdge1: TopoDS_Edge &
:param theEdge2:
:type theEdge2: TopoDS_Edge &
:param iSolution: default value is -1
:type iSolution: int
:rtype: TopoDS_Edge
"""
return _ChFi2d.ChFi2d_FilletAlgo_Result(self, *args)
__swig_destroy__ = _ChFi2d.delete_ChFi2d_FilletAlgo
ChFi2d_FilletAlgo.Init = new_instancemethod(_ChFi2d.ChFi2d_FilletAlgo_Init,None,ChFi2d_FilletAlgo)
ChFi2d_FilletAlgo.Perform = new_instancemethod(_ChFi2d.ChFi2d_FilletAlgo_Perform,None,ChFi2d_FilletAlgo)
ChFi2d_FilletAlgo.NbResults = new_instancemethod(_ChFi2d.ChFi2d_FilletAlgo_NbResults,None,ChFi2d_FilletAlgo)
ChFi2d_FilletAlgo.Result = new_instancemethod(_ChFi2d.ChFi2d_FilletAlgo_Result,None,ChFi2d_FilletAlgo)
ChFi2d_FilletAlgo_swigregister = _ChFi2d.ChFi2d_FilletAlgo_swigregister
ChFi2d_FilletAlgo_swigregister(ChFi2d_FilletAlgo)
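Taken together, the wrapped methods describe a four-step workflow: initialize with two edges (or a wire) and a plane, perform with a radius, count the solutions near the common point, and extract one result. A pseudocode sketch of that call sequence (untested; it assumes a pythonOCC environment where `gp_Pln`, `gp_Pnt`, and the edge objects already exist, and follows only the docstrings above):

```python
# Pseudocode sketch -- requires pythonOCC/OCCT; edge/plane construction omitted.
algo = ChFi2d_FilletAlgo()
algo.Init(edge1, edge2, plane)          # two TopoDS_Edge lying in a gp_Pln
if algo.Perform(radius):                # True if at least one fillet exists
    n = algo.NbResults(common_point)    # common_point: gp_Pnt where the edges meet
    fillet = algo.Result(common_point, edge1, edge2)  # nearest solution by default
```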
| 35.120211 | 327 | 0.683555 | 3,108 | 26,586 | 5.576577 | 0.105856 | 0.060235 | 0.06237 | 0.051869 | 0.663109 | 0.486384 | 0.474613 | 0.447265 | 0.418244 | 0.399896 | 0 | 0.020427 | 0.232152 | 26,586 | 756 | 328 | 35.166667 | 0.828598 | 0.349733 | 0 | 0.206897 | 1 | 0 | 0.024265 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.180077 | false | 0.011494 | 0.076628 | 0.007663 | 0.509579 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
0aeb24969c355b039311c262ed067542c11ea60f | 3,725 | py | Python | metrices1.py | DevikalyanDas/Multiclass-Segmentation | 58a4ad3fb4bea753e53e109c62123ff26797a877 | [
"Apache-2.0"
] | 1 | 2021-04-21T12:37:49.000Z | 2021-04-21T12:37:49.000Z | metrices1.py | DevikalyanDas/Semantic-Segmentation-Pytorch | 58a4ad3fb4bea753e53e109c62123ff26797a877 | [
"Apache-2.0"
] | null | null | null | metrices1.py | DevikalyanDas/Semantic-Segmentation-Pytorch | 58a4ad3fb4bea753e53e109c62123ff26797a877 | [
"Apache-2.0"
] | null | null | null |
import torch
def _threshold(x, threshold=None):
if threshold is not None:
return (x > threshold).type(x.dtype)
else:
return x
def make_one_hot(labels, classes):
one_hot = torch.FloatTensor(labels.size()[0], classes, labels.size()[2], labels.size()[3]).zero_().to(labels.device)
target = one_hot.scatter_(1, labels.data, 1)
return target
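`make_one_hot` scatters integer class labels into a one-hot channel dimension. A minimal pure-Python sketch of the same expansion (illustrative only; the real code does this on batched GPU tensors with `scatter_`):

```python
def one_hot(labels, classes):
    """Expand a list of integer labels into one-hot vectors."""
    return [[1 if c == label else 0 for c in range(classes)] for label in labels]

print(one_hot([0, 2, 1], 3))  # [[1, 0, 0], [0, 0, 1], [0, 1, 0]]
```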
class IoU(object):
def __init__(self, eps=1e-7, threshold=0.5, activation=None,classes=19):
self.eps = eps
self.threshold = threshold
self.activation = torch.nn.Softmax(dim=1)
self.classes = classes
def __call__(self, y_pr, y_gt):
y_pr = self.activation(y_pr)
y_pr = _threshold(y_pr, threshold=self.threshold)
y_gt = make_one_hot(y_gt.long().unsqueeze(dim=1),self.classes)
intersection = torch.sum(y_gt * y_pr)
union = torch.sum(y_gt) + torch.sum(y_pr) - intersection + self.eps
score = (intersection + self.eps) / union
return score.item()
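The IoU score above reduces to intersection / (|gt| + |pred| - intersection) on binary masks. A pure-Python sketch of that arithmetic with flat 0/1 lists (a hypothetical helper, not part of this module):

```python
def iou(gt, pred, eps=1e-7):
    """Intersection over union for two flat binary masks."""
    intersection = sum(g * p for g, p in zip(gt, pred))
    union = sum(gt) + sum(pred) - intersection + eps
    return (intersection + eps) / union

print(round(iou([1, 1, 0, 0], [1, 0, 1, 0]), 3))  # 1 overlap out of 3 -> 0.333
```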
class Fscore(object):
def __init__(self, beta=1, eps=1e-7, threshold=0.5, activation=None,classes=19):
self.eps = eps
self.beta = beta
self.threshold = threshold
self.activation = torch.nn.Softmax(dim=1)
self.classes = classes
def __call__(self, y_pr, y_gt):
y_pr = self.activation(y_pr)
y_pr = _threshold(y_pr, threshold=self.threshold)
y_gt = make_one_hot(y_gt.long().unsqueeze(dim=1),self.classes)
tp = torch.sum(y_gt * y_pr)
fp = torch.sum(y_pr) - tp
fn = torch.sum(y_gt) - tp
score = ((1 +self.beta ** 2) * tp + self.eps) \
/ ((1 + self.beta ** 2) * tp + self.beta ** 2 * fn + fp + self.eps)
return score.item()
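The Fscore class generalizes F1 with a beta weight on recall: ((1 + beta^2) * tp) / ((1 + beta^2) * tp + beta^2 * fn + fp). A pure-Python sketch of the same formula from raw counts (hypothetical helper):

```python
def f_score(tp, fp, fn, beta=1, eps=1e-7):
    """F-beta score from true-positive, false-positive and false-negative counts."""
    b2 = beta ** 2
    return ((1 + b2) * tp + eps) / ((1 + b2) * tp + b2 * fn + fp + eps)

print(round(f_score(tp=8, fp=2, fn=2), 3))  # precision = recall = 0.8 -> F1 = 0.8
```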
class Accuracy(object):
def __init__(self, threshold=0.5, activation=None,classes=19):
self.threshold = threshold
self.activation = torch.nn.Softmax(dim=1)
self.classes = classes
def __call__(self, y_pr, y_gt):
y_pr = self.activation(y_pr)
y_pr = _threshold(y_pr, threshold=self.threshold)
y_gt = make_one_hot(y_gt.long().unsqueeze(dim=1),self.classes)
tp = torch.sum(y_gt == y_pr, dtype=y_pr.dtype)
score = tp / y_gt.view(-1).shape[0]
return score.item()
class Sensitivity(object):
# Sensitivity
def __init__(self, eps=1e-7,activation=None, threshold=0.5,classes=19):
self.eps = eps
self.threshold = threshold
self.activation = torch.nn.Softmax(dim=1)
self.classes = classes
def __call__(self, y_pr, y_gt):
y_pr = self.activation(y_pr)
y_pr = _threshold(y_pr, threshold=self.threshold)
y_gt = make_one_hot(y_gt.long().unsqueeze(dim=1),self.classes)
tp = torch.sum(y_gt * y_pr)
fn = torch.sum(y_gt) - tp
score = (tp + self.eps) / (tp + fn + self.eps)
return score.item()
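Sensitivity (recall) is tp / (tp + fn), and the companion specificity metric is tn / (tn + fp). A pure-Python sketch of both from confusion-matrix counts (hypothetical helpers):

```python
def sensitivity(tp, fn, eps=1e-7):
    """Recall: fraction of actual positives predicted positive."""
    return (tp + eps) / (tp + fn + eps)

def specificity(tn, fp, eps=1e-7):
    """Fraction of actual negatives predicted negative."""
    return (tn + eps) / (tn + fp + eps)

print(round(sensitivity(tp=9, fn=1), 2), round(specificity(tn=8, fp=2), 2))  # 0.9 0.8
```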
class Specificity(object):
def __init__(self, eps=1e-7,activation=None, threshold=0.5,classes=19):
self.eps = eps
self.threshold = threshold
self.activation = torch.nn.Softmax(dim=1)
self.classes = classes
def __call__(self, y_pr, y_gt):
y_pr = self.activation(y_pr)
y_pr = _threshold(y_pr, threshold=self.threshold)
y_gt = make_one_hot(y_gt.long().unsqueeze(dim=1),self.classes)
tn = torch.sum(y_gt == y_pr, dtype=y_pr.dtype)-torch.sum(y_gt * y_pr)
tp = torch.sum(y_gt * y_pr)
fp = torch.sum(y_pr) - tp
score = (tn + self.eps) / (tn + fp + self.eps)
return score.item()
| 35.141509 | 121 | 0.593826 | 541 | 3,725 | 3.857671 | 0.121996 | 0.053186 | 0.056061 | 0.034499 | 0.741255 | 0.741255 | 0.674173 | 0.643987 | 0.627695 | 0.627695 | 0 | 0.018868 | 0.274362 | 3,725 | 106 | 122 | 35.141509 | 0.753237 | 0.002953 | 0 | 0.646341 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.146341 | false | 0 | 0.012195 | 0 | 0.317073 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2
0af3b693d57ac5b8b8c24f56b2796218115915c1 | 5,137 | py | Python | setup/setup_database.py | Gui-Luz/Empire | 6f5eeff5f46dd085e1317cb09b39853a2fce5d13 | [
"BSD-3-Clause"
] | 5,720 | 2017-02-02T13:59:40.000Z | 2022-03-31T09:50:10.000Z | setup/setup_database.py | VookiBoo/Empire | 5aae31e7de591282773d2c8498af04ee4e8778f5 | [
"BSD-3-Clause"
] | 866 | 2017-02-02T10:56:31.000Z | 2020-01-17T07:47:05.000Z | setup/setup_database.py | VookiBoo/Empire | 5aae31e7de591282773d2c8498af04ee4e8778f5 | [
"BSD-3-Clause"
] | 2,181 | 2017-02-04T10:28:41.000Z | 2022-03-31T04:36:56.000Z |
#!/usr/bin/env python
import sqlite3, os, string, hashlib, random
###################################################
#
# Default values for the config
#
###################################################
# Staging Key is set up via environmental variable
# or via command line. By setting RANDOM a randomly
# selected password will automatically be selected
# or it can be set to any bash acceptable character
# set for a password.
STAGING_KEY = os.getenv('STAGING_KEY', "BLANK")
punctuation = '!#%&()*+,-./:;<=>?@[]^_{|}~'
# otherwise prompt the user for a set value to hash for the negotiation password
if STAGING_KEY == "BLANK":
choice = raw_input("\n [>] Enter server negotiation password, enter for random generation: ")
if choice == "":
# if no password is entered, generation something random
STAGING_KEY = ''.join(random.sample(string.ascii_letters + string.digits + punctuation, 32))
else:
STAGING_KEY = hashlib.md5(choice).hexdigest()
elif STAGING_KEY == "RANDOM":
STAGING_KEY = ''.join(random.sample(string.ascii_letters + string.digits + punctuation, 32))
# Calculate the install path. We know the project directory will always be the parent of the current directory. Any modifications of the folder structure will
# need to be applied here.
INSTALL_PATH = os.path.dirname(os.path.dirname(os.path.realpath(__file__))) + "/"
# an IP white list to ONLY accept clients from
# format is "192.168.1.1,192.168.1.10-192.168.1.100,10.0.0.0/8"
IP_WHITELIST = ""
# an IP black list to reject accept clients from
# format is "192.168.1.1,192.168.1.10-192.168.1.100,10.0.0.0/8"
IP_BLACKLIST = ""
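The whitelist/blacklist comments describe a format mixing single IPs, dash ranges, and CIDR blocks. A hedged sketch of a parser for that format using the standard-library `ipaddress` module (illustrative only; Empire's actual matching logic lives elsewhere in the project):

```python
import ipaddress

def parse_ip_list(spec):
    """Parse '1.2.3.4,10.0.0.1-10.0.0.9,10.0.0.0/8' into checkable entries."""
    entries = []
    for part in filter(None, spec.split(',')):
        if '-' in part:                       # dash range -> (low, high) pair
            lo, hi = part.split('-')
            entries.append((ipaddress.ip_address(lo), ipaddress.ip_address(hi)))
        elif '/' in part:                     # CIDR block
            entries.append(ipaddress.ip_network(part))
        else:                                 # single host
            entries.append(ipaddress.ip_network(part + '/32'))
    return entries

def ip_allowed(ip, entries):
    addr = ipaddress.ip_address(ip)
    for entry in entries:
        if isinstance(entry, tuple):
            if entry[0] <= addr <= entry[1]:
                return True
        elif addr in entry:
            return True
    return False

rules = parse_ip_list("192.168.1.1,192.168.1.10-192.168.1.100,10.0.0.0/8")
print(ip_allowed("10.5.5.5", rules), ip_allowed("172.16.0.1", rules))  # True False
```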
# default credentials used to log into the RESTful API
API_USERNAME = "empireadmin"
API_PASSWORD = ''.join(random.sample(string.ascii_letters + string.digits + punctuation, 32))
# the 'permanent' API token (doesn't change)
API_PERMANENT_TOKEN = ''.join(random.choice(string.ascii_lowercase + string.digits) for x in range(40))
# default obfuscation setting
OBFUSCATE = 0
# default obfuscation command
OBFUSCATE_COMMAND = r'Token\All\1'
###################################################
#
# Database setup.
#
###################################################
conn = sqlite3.connect('%s/data/empire.db'%INSTALL_PATH)
c = conn.cursor()
# try to prevent some of the weird sqlite I/O errors
c.execute('PRAGMA journal_mode = OFF')
c.execute('DROP TABLE IF EXISTS config')
c.execute('''CREATE TABLE config (
"staging_key" text,
"install_path" text,
"ip_whitelist" text,
"ip_blacklist" text,
"autorun_command" text,
"autorun_data" text,
"rootuser" boolean,
"api_username" text,
"api_password" text,
"api_current_token" text,
"api_permanent_token" text,
"obfuscate" integer,
"obfuscate_command" text
)''')
# kick off the config component of the database
c.execute("INSERT INTO config VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?)", (STAGING_KEY, INSTALL_PATH, IP_WHITELIST, IP_BLACKLIST, '', '', False, API_USERNAME, API_PASSWORD, '', API_PERMANENT_TOKEN, OBFUSCATE, OBFUSCATE_COMMAND))
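Once the `config` row is inserted, other Empire components read it back by column. A minimal sketch of that round trip against an in-memory database (illustrative; the real code opens `data/empire.db` with the full column list shown above, and the values here are placeholders):

```python
import sqlite3

# Two-column stand-in for the real config table, kept in memory for the demo.
demo_conn = sqlite3.connect(':memory:')
demo_cur = demo_conn.cursor()
demo_cur.execute('CREATE TABLE config ("staging_key" text, "api_username" text)')
demo_cur.execute('INSERT INTO config VALUES (?,?)', ('s3cret', 'empireadmin'))
staging_key, api_username = demo_cur.execute('SELECT * FROM config').fetchone()
print(staging_key, api_username)  # s3cret empireadmin
demo_conn.close()
```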
c.execute('''CREATE TABLE "agents" (
"id" integer PRIMARY KEY,
"session_id" text,
"listener" text,
"name" text,
"language" text,
"language_version" text,
"delay" integer,
"jitter" real,
"external_ip" text,
"internal_ip" text,
"username" text,
"high_integrity" integer,
"process_name" text,
"process_id" text,
"hostname" text,
"os_details" text,
"session_key" text,
"nonce" text,
"checkin_time" text,
"lastseen_time" text,
"parent" text,
"children" text,
"servers" text,
"profile" text,
"functions" text,
"kill_date" text,
"working_hours" text,
"lost_limit" integer,
"taskings" text,
"results" text
)''')
# the 'options' field contains a pickled version of all
# currently set listener options
c.execute('''CREATE TABLE "listeners" (
"id" integer PRIMARY KEY,
"name" text,
"module" text,
"listener_type" text,
"listener_category" text,
"enabled" boolean,
"options" blob
)''')
# type = hash, plaintext, token
# for krbtgt, the domain SID is stored in misc
# for tokens, the data is base64'ed and stored in pass
c.execute('''CREATE TABLE "credentials" (
"id" integer PRIMARY KEY,
"credtype" text,
"domain" text,
"username" text,
"password" text,
"host" text,
"os" text,
"sid" text,
"notes" text
)''')
c.execute( '''CREATE TABLE "taskings" (
"id" integer,
"data" text,
"agent" text,
PRIMARY KEY(id, agent)
)''')
c.execute( '''CREATE TABLE "results" (
"id" integer,
"data" text,
"agent" text,
PRIMARY KEY(id, agent)
)''')
# event_types -> checkin, task, result, rename
c.execute('''CREATE TABLE "reporting" (
"id" integer PRIMARY KEY,
"name" text,
"event_type" text,
"message" text,
"time_stamp" text,
"taskID" integer,
FOREIGN KEY(taskID) REFERENCES results(id)
)''')
# commit the changes and close everything off
conn.commit()
conn.close()
print "\n [*] Database setup completed!\n"
| 28.698324 | 221 | 0.642593 | 659 | 5,137 | 4.905918 | 0.37481 | 0.030931 | 0.030312 | 0.041138 | 0.154964 | 0.145067 | 0.128364 | 0.128364 | 0.128364 | 0.128364 | 0 | 0.019429 | 0.188437 | 5,137 | 178 | 222 | 28.859551 | 0.756057 | 0.268055 | 0 | 0.226087 | 0 | 0 | 0.628158 | 0.021289 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.043478 | 0.008696 | null | null | 0.008696 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
0af6e2eadb4d9b50c467a824fa706d045a541dca | 445 | py | Python | wy_ctf_website/users/migrations/0007_user_rank.py | pattyjogal/wy_ctf_website | c248dcf9c5926b6534e8d8c0eae02ed220b07cde | [
"MIT"
] | null | null | null | wy_ctf_website/users/migrations/0007_user_rank.py | pattyjogal/wy_ctf_website | c248dcf9c5926b6534e8d8c0eae02ed220b07cde | [
"MIT"
] | null | null | null | wy_ctf_website/users/migrations/0007_user_rank.py | pattyjogal/wy_ctf_website | c248dcf9c5926b6534e8d8c0eae02ed220b07cde | [
"MIT"
] | null | null | null |
# -*- coding: utf-8 -*-
# Generated by Django 1.9.7 on 2016-11-21 06:24
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('users', '0006_user_terminal_password'),
]
operations = [
migrations.AddField(
model_name='user',
name='rank',
field=models.IntegerField(default=1),
),
]
| 21.190476 | 49 | 0.611236 | 49 | 445 | 5.367347 | 0.795918 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.064615 | 0.269663 | 445 | 20 | 50 | 22.25 | 0.744615 | 0.150562 | 0 | 0 | 1 | 0 | 0.106667 | 0.072 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.076923 | 0.153846 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
0af72cf3b828c6eec9493810194b505ad11cc502 | 163 | py | Python | Unit_02/Exercise_02_06.py | icerovski/SoftUni_Python_Fundamentals | 4816304fcd459269b6a044c88b0cb7113657a986 | [
"MIT"
] | null | null | null | Unit_02/Exercise_02_06.py | icerovski/SoftUni_Python_Fundamentals | 4816304fcd459269b6a044c88b0cb7113657a986 | [
"MIT"
] | null | null | null | Unit_02/Exercise_02_06.py | icerovski/SoftUni_Python_Fundamentals | 4816304fcd459269b6a044c88b0cb7113657a986 | [
"MIT"
] | null | null | null |
# 6. Math Power
def math_power(n, p):
power_output = n ** p
print(power_output)
number = float(input())
power = float(input())
math_power(number, power)
| 16.3 | 25 | 0.662577 | 25 | 163 | 4.16 | 0.44 | 0.259615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007576 | 0.190184 | 163 | 10 | 26 | 16.3 | 0.780303 | 0.079755 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0 | 0 | 0.166667 | 0.166667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2
e404949fb05b8a2e8735f93a8e3bf6f8c0a5b0c6 | 1,644 | py | Python | Diena_1_4_thonny/forloops.py | ValdisG31/Python_RTU_08_20 | 0c49376bdfb4c3f521c1d7a443a690b5c0f96efa | [
"MIT"
] | null | null | null | Diena_1_4_thonny/forloops.py | ValdisG31/Python_RTU_08_20 | 0c49376bdfb4c3f521c1d7a443a690b5c0f96efa | [
"MIT"
] | null | null | null | Diena_1_4_thonny/forloops.py | ValdisG31/Python_RTU_08_20 | 0c49376bdfb4c3f521c1d7a443a690b5c0f96efa | [
"MIT"
] | null | null | null |
# # for loops are for definite iteration
#
# for n in range(10):
# print("Number is", n)
# # print("out of loop", n)
# for i in range(1, 11): # be careful of off-by-one errors
# print(f"I like this {i} better")
# my_name = "Valdis"
# for c in my_name: # c could be also char, or my_c, c is just shorter
# print("Letter ", c)
# print("This happens after the loop is done")
# for n in range(20,25):
# print(n)
#
# for my_num in range(100,110,2): # i can add step to range
# print(my_num)
# #
# # my_name = "Valdis"
# # for c in my_name:
# # print("Letter ", c)
# #
# # my_list = [1,2,100,105,"Valdis","potatoes", 9000, 107.35]
# # total = 0
# # big_items = 0
# # for item in my_list:
# # print("Working with item: ", item)
# # if type(item) == int or type(item) == float:
# # total += item
# # if item > 100:
# # big_items += 1
# #
# # my_num_list = [1,6,17,7,-6,49,642,6,2,-5555]
# #
# # my_max = None
# # for num in my_num_list:
# # if my_max == None: # this will happen with first item
# # my_max = num
# # if num > my_max:
# # my_max = num
# #
# # print(max(*my_num_list))
#
# # So what do we do when we need and index
# my_name = "Valdis Saulespurēns"
# print(f"{my_name} is {len(my_name)} characters long")
# print(my_name[0])
# print(my_name[5])
# # # anti-pattern do not write this way in Python
# for n in range(0, len(my_name)):
# print(n, my_name[n])
# #
# # more Pythonic is using enumerate:
# # use this if you need index
# for i, c in enumerate(my_name, start=1001):
# print(i, c)
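Uncommented, the enumerate idiom above runs like this (a live sketch; the `start` argument offsets the reported index without changing which characters are visited):

```python
my_name = "Valdis"
pairs = list(enumerate(my_name, start=1001))
print(pairs[0], pairs[-1])  # (1001, 'V') (1006, 's')
```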
#
#
| 28.344828 | 70 | 0.563869 | 273 | 1,644 | 3.285714 | 0.395604 | 0.080268 | 0.020067 | 0.036789 | 0.053512 | 0.053512 | 0.053512 | 0.053512 | 0 | 0 | 0 | 0.052808 | 0.274331 | 1,644 | 58 | 71 | 28.344828 | 0.699078 | 0.900852 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2
e40a7eab088beaf6a2ad1c3c0bd9fb206f824f43 | 5,329 | py | Python | sphinx/locale/__init__.py | sphinxjp/doc10 | 0f47a8091cfa976bac5033aa07ec9a39ab71e7f0 | [
"BSD-2-Clause"
] | 2 | 2019-03-03T00:04:36.000Z | 2020-10-06T16:22:38.000Z | sphinx/locale/__init__.py | sphinxjp/doc10 | 0f47a8091cfa976bac5033aa07ec9a39ab71e7f0 | [
"BSD-2-Clause"
] | null | null | null | sphinx/locale/__init__.py | sphinxjp/doc10 | 0f47a8091cfa976bac5033aa07ec9a39ab71e7f0 | [
"BSD-2-Clause"
] | 1 | 2019-03-03T00:04:38.000Z | 2019-03-03T00:04:38.000Z |
# -*- coding: utf-8 -*-
"""
sphinx.locale
~~~~~~~~~~~~~
Locale utilities.
:copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
import gettext
import UserString
class _TranslationProxy(UserString.UserString, object):
"""Class for proxy strings from gettext translations. This is a helper
for the lazy_* functions from this module.
The proxy implementation attempts to be as complete as possible, so that
the lazy objects should mostly work as expected, for example for sorting.
This inherits from UserString because some docutils versions use UserString
for their Text nodes, which then checks its argument for being either a
basestring or UserString, otherwise calls str() -- not unicode() -- on it.
This also inherits from object to make the __new__ method work.
"""
__slots__ = ('_func', '_args')
def __new__(cls, func, *args):
if not args:
# not called with "function" and "arguments", but a plain string
return unicode(func)
return object.__new__(cls)
def __getnewargs__(self):
return (self._func,) + self._args
def __init__(self, func, *args):
self._func = func
self._args = args
data = property(lambda x: x._func(*x._args))
# replace function from UserString; it instantiates a self.__class__
# for the encoding result
def encode(self, encoding=None, errors=None):
if encoding:
if errors:
return self.data.encode(encoding, errors)
else:
return self.data.encode(encoding)
else:
return self.data.encode()
def __contains__(self, key):
return key in self.data
def __nonzero__(self):
return bool(self.data)
def __dir__(self):
return dir(unicode)
def __iter__(self):
return iter(self.data)
def __len__(self):
return len(self.data)
def __str__(self):
return str(self.data)
def __unicode__(self):
return unicode(self.data)
def __add__(self, other):
return self.data + other
def __radd__(self, other):
return other + self.data
def __mod__(self, other):
return self.data % other
def __rmod__(self, other):
return other % self.data
def __mul__(self, other):
return self.data * other
def __rmul__(self, other):
return other * self.data
def __lt__(self, other):
return self.data < other
def __le__(self, other):
return self.data <= other
def __eq__(self, other):
return self.data == other
def __ne__(self, other):
return self.data != other
def __gt__(self, other):
return self.data > other
def __ge__(self, other):
return self.data >= other
def __getattr__(self, name):
if name == '__members__':
return self.__dir__()
return getattr(self.data, name)
def __getstate__(self):
return self._func, self._args
def __setstate__(self, tup):
self._func, self._args = tup
def __getitem__(self, key):
return self.data[key]
def __copy__(self):
return self
def __repr__(self):
try:
return 'i' + repr(unicode(self.data))
except:
return '<%s broken>' % self.__class__.__name__
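The proxy defers `func(*args)` until the string's value is actually needed, which is what lets translatable strings be declared at import time but resolved after the catalog loads. A stripped-down Python 3 sketch of the same idea (not the Sphinx class; `collections.UserString` stands in for the Python 2 `UserString` module, and the catalog is a plain dict):

```python
from collections import UserString

class LazyString(UserString):
    """Re-evaluate func(*args) every time the string's value is needed."""
    def __init__(self, func, *args):
        # Deliberately skip UserString.__init__: data is computed on access.
        self._func, self._args = func, args

    @property
    def data(self):
        return self._func(*self._args)

catalog = {}
greeting = LazyString(lambda s: catalog.get(s, s), 'hello')
catalog['hello'] = 'bonjour'   # "translation" loaded after the proxy was created
print(str(greeting))           # bonjour
```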
def mygettext(string):
"""Used instead of _ when creating TranslationProxies, because _ is
not bound yet at that time."""
return _(string)
def lazy_gettext(string):
"""A lazy version of `gettext`."""
#if isinstance(string, _TranslationProxy):
# return string
return _TranslationProxy(mygettext, string)
l_ = lazy_gettext
admonitionlabels = {
'attention': l_('Attention'),
'caution': l_('Caution'),
'danger': l_('Danger'),
'error': l_('Error'),
'hint': l_('Hint'),
'important': l_('Important'),
'note': l_('Note'),
'seealso': l_('See Also'),
'tip': l_('Tip'),
'warning': l_('Warning'),
}
versionlabels = {
'versionadded': l_('New in version %s'),
'versionchanged': l_('Changed in version %s'),
'deprecated': l_('Deprecated since version %s'),
}
pairindextypes = {
'module': l_('module'),
'keyword': l_('keyword'),
'operator': l_('operator'),
'object': l_('object'),
'exception': l_('exception'),
'statement': l_('statement'),
'builtin': l_('built-in function'),
}
translator = None
def _(message):
return translator.ugettext(message)
def init(locale_dirs, language):
global translator
# the None entry is the system's default locale path
has_translation = True
for dir_ in locale_dirs:
try:
trans = gettext.translation('sphinx', localedir=dir_,
languages=[language])
if translator is None:
translator = trans
else:
translator._catalog.update(trans._catalog)
except Exception:
# Language couldn't be found in the specified path
pass
if translator is None:
translator = gettext.NullTranslations()
has_translation = False
return translator, has_translation
| 26.251232 | 79 | 0.610058 | 618 | 5,329 | 4.943366 | 0.33657 | 0.062848 | 0.059574 | 0.055974 | 0.186579 | 0.140753 | 0.140753 | 0 | 0 | 0 | 0 | 0.002349 | 0.280916 | 5,329 | 202 | 80 | 26.381188 | 0.794885 | 0.217114 | 0 | 0.054688 | 0 | 0 | 0.092289 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.257813 | false | 0.007813 | 0.023438 | 0.1875 | 0.585938 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
7c24715734c0010d9582d80125f2d9915e146637 | 172,063 | py | Python | sdk/servicefabric/azure-mgmt-servicefabric/azure/mgmt/servicefabric/models/_models.py | rsdoherty/azure-sdk-for-python | 6bba5326677468e6660845a703686327178bb7b1 | [
"MIT"
] | 2,728 | 2015-01-09T10:19:32.000Z | 2022-03-31T14:50:33.000Z | sdk/servicefabric/azure-mgmt-servicefabric/azure/mgmt/servicefabric/models/_models.py | rsdoherty/azure-sdk-for-python | 6bba5326677468e6660845a703686327178bb7b1 | [
"MIT"
] | 17,773 | 2015-01-05T15:57:17.000Z | 2022-03-31T23:50:25.000Z | sdk/servicefabric/azure-mgmt-servicefabric/azure/mgmt/servicefabric/models/_models.py | rsdoherty/azure-sdk-for-python | 6bba5326677468e6660845a703686327178bb7b1 | [
"MIT"
] | 1,916 | 2015-01-19T05:05:41.000Z | 2022-03-31T19:36:44.000Z |
# coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------
from azure.core.exceptions import HttpResponseError
import msrest.serialization
class ApplicationDeltaHealthPolicy(msrest.serialization.Model):
"""Defines a delta health policy used to evaluate the health of an application or one of its child entities when upgrading the cluster.
:param default_service_type_delta_health_policy: The delta health policy used by default to
evaluate the health of a service type when upgrading the cluster.
:type default_service_type_delta_health_policy:
~azure.mgmt.servicefabric.models.ServiceTypeDeltaHealthPolicy
:param service_type_delta_health_policies: The map with service type delta health policy per
service type name. The map is empty by default.
:type service_type_delta_health_policies: dict[str,
~azure.mgmt.servicefabric.models.ServiceTypeDeltaHealthPolicy]
"""
_attribute_map = {
'default_service_type_delta_health_policy': {'key': 'defaultServiceTypeDeltaHealthPolicy', 'type': 'ServiceTypeDeltaHealthPolicy'},
'service_type_delta_health_policies': {'key': 'serviceTypeDeltaHealthPolicies', 'type': '{ServiceTypeDeltaHealthPolicy}'},
}
def __init__(
self,
**kwargs
):
super(ApplicationDeltaHealthPolicy, self).__init__(**kwargs)
self.default_service_type_delta_health_policy = kwargs.get('default_service_type_delta_health_policy', None)
self.service_type_delta_health_policies = kwargs.get('service_type_delta_health_policies', None)
class ApplicationHealthPolicy(msrest.serialization.Model):
"""Defines a health policy used to evaluate the health of an application or one of its children entities.
:param default_service_type_health_policy: The health policy used by default to evaluate the
health of a service type.
:type default_service_type_health_policy:
~azure.mgmt.servicefabric.models.ServiceTypeHealthPolicy
:param service_type_health_policies: The map with service type health policy per service type
name. The map is empty by default.
:type service_type_health_policies: dict[str,
~azure.mgmt.servicefabric.models.ServiceTypeHealthPolicy]
"""
_attribute_map = {
'default_service_type_health_policy': {'key': 'defaultServiceTypeHealthPolicy', 'type': 'ServiceTypeHealthPolicy'},
'service_type_health_policies': {'key': 'serviceTypeHealthPolicies', 'type': '{ServiceTypeHealthPolicy}'},
}
def __init__(
self,
**kwargs
):
super(ApplicationHealthPolicy, self).__init__(**kwargs)
self.default_service_type_health_policy = kwargs.get('default_service_type_health_policy', None)
self.service_type_health_policies = kwargs.get('service_type_health_policies', None)
class ApplicationMetricDescription(msrest.serialization.Model):
"""Describes capacity information for a custom resource balancing metric. This can be used to limit the total consumption of this metric by the services of this application.
:param name: The name of the metric.
:type name: str
:param maximum_capacity: The maximum node capacity for Service Fabric application.
This is the maximum Load for an instance of this application on a single node. Even if the
capacity of node is greater than this value, Service Fabric will limit the total load of
services within the application on each node to this value.
If set to zero, capacity for this metric is unlimited on each node.
When creating a new application with application capacity defined, the product of MaximumNodes
and this value must always be smaller than or equal to TotalApplicationCapacity.
When updating existing application with application capacity, the product of MaximumNodes and
this value must always be smaller than or equal to TotalApplicationCapacity.
:type maximum_capacity: long
:param reservation_capacity: The node reservation capacity for Service Fabric application.
This is the amount of load which is reserved on nodes which have instances of this
application.
If MinimumNodes is specified, then the product of these values will be the capacity reserved
in the cluster for the application.
If set to zero, no capacity is reserved for this metric.
When setting application capacity or when updating application capacity; this value must be
smaller than or equal to MaximumCapacity for each metric.
:type reservation_capacity: long
:param total_application_capacity: The total metric capacity for Service Fabric application.
This is the total metric capacity for this application in the cluster. Service Fabric will try
to limit the sum of loads of services within the application to this value.
When creating a new application with application capacity defined, the product of MaximumNodes
and MaximumCapacity must always be smaller than or equal to this value.
:type total_application_capacity: long
"""
_attribute_map = {
'name': {'key': 'name', 'type': 'str'},
'maximum_capacity': {'key': 'maximumCapacity', 'type': 'long'},
'reservation_capacity': {'key': 'reservationCapacity', 'type': 'long'},
'total_application_capacity': {'key': 'totalApplicationCapacity', 'type': 'long'},
}
def __init__(
self,
**kwargs
):
super(ApplicationMetricDescription, self).__init__(**kwargs)
self.name = kwargs.get('name', None)
self.maximum_capacity = kwargs.get('maximum_capacity', None)
self.reservation_capacity = kwargs.get('reservation_capacity', None)
self.total_application_capacity = kwargs.get('total_application_capacity', None)
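The docstring above encodes two arithmetic constraints between the capacity fields: ReservationCapacity must not exceed MaximumCapacity, and MaximumNodes * MaximumCapacity must not exceed TotalApplicationCapacity. A hypothetical validation sketch of those rules in plain Python (not part of the SDK):

```python
def validate_capacity(maximum_nodes, maximum_capacity, reservation_capacity,
                      total_application_capacity):
    """Check the constraints stated in ApplicationMetricDescription's docstring."""
    assert reservation_capacity <= maximum_capacity, \
        "ReservationCapacity must not exceed MaximumCapacity"
    assert maximum_nodes * maximum_capacity <= total_application_capacity, \
        "MaximumNodes * MaximumCapacity must not exceed TotalApplicationCapacity"
    return True

print(validate_capacity(maximum_nodes=5, maximum_capacity=10,
                        reservation_capacity=4, total_application_capacity=50))  # True
```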
class ProxyResource(msrest.serialization.Model):
"""The resource model definition for proxy-only resource.
Variables are only populated by the server, and will be ignored when sending a request.
:ivar id: Azure resource identifier.
:vartype id: str
:ivar name: Azure resource name.
:vartype name: str
:ivar type: Azure resource type.
:vartype type: str
:param location: It will be deprecated in New API, resource location depends on the parent
resource.
:type location: str
:param tags: A set of tags. Azure resource tags.
:type tags: dict[str, str]
:ivar etag: Azure resource etag.
:vartype etag: str
:ivar system_data: Metadata pertaining to creation and last modification of the resource.
:vartype system_data: ~azure.mgmt.servicefabric.models.SystemData
"""
_validation = {
'id': {'readonly': True},
'name': {'readonly': True},
'type': {'readonly': True},
'etag': {'readonly': True},
'system_data': {'readonly': True},
}
_attribute_map = {
'id': {'key': 'id', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'type': {'key': 'type', 'type': 'str'},
'location': {'key': 'location', 'type': 'str'},
'tags': {'key': 'tags', 'type': '{str}'},
'etag': {'key': 'etag', 'type': 'str'},
'system_data': {'key': 'systemData', 'type': 'SystemData'},
}
def __init__(
self,
**kwargs
):
super(ProxyResource, self).__init__(**kwargs)
self.id = None
self.name = None
self.type = None
self.location = kwargs.get('location', None)
self.tags = kwargs.get('tags', None)
self.etag = None
self.system_data = None
class ApplicationResource(ProxyResource):
"""The application resource.
Variables are only populated by the server, and will be ignored when sending a request.
:ivar id: Azure resource identifier.
:vartype id: str
:ivar name: Azure resource name.
:vartype name: str
:ivar type: Azure resource type.
:vartype type: str
:param location: It will be deprecated in New API, resource location depends on the parent
resource.
:type location: str
:param tags: A set of tags. Azure resource tags.
:type tags: dict[str, str]
:ivar etag: Azure resource etag.
:vartype etag: str
:ivar system_data: Metadata pertaining to creation and last modification of the resource.
:vartype system_data: ~azure.mgmt.servicefabric.models.SystemData
:param identity: Describes the managed identities for an Azure resource.
:type identity: ~azure.mgmt.servicefabric.models.ManagedIdentity
:param type_version: The version of the application type as defined in the application
manifest.
:type type_version: str
:param parameters: List of application parameters with overridden values from their default
values specified in the application manifest.
:type parameters: dict[str, str]
:param upgrade_policy: Describes the policy for a monitored application upgrade.
:type upgrade_policy: ~azure.mgmt.servicefabric.models.ApplicationUpgradePolicy
:param minimum_nodes: The minimum number of nodes where Service Fabric will reserve capacity
for this application. Note that this does not mean that the services of this application will
be placed on all of those nodes. If this property is set to zero, no capacity will be reserved.
The value of this property cannot be more than the value of the MaximumNodes property.
:type minimum_nodes: long
:param maximum_nodes: The maximum number of nodes where Service Fabric will reserve capacity
for this application. Note that this does not mean that the services of this application will
be placed on all of those nodes. By default, the value of this property is zero and it means
that the services can be placed on any node.
:type maximum_nodes: long
:param remove_application_capacity: Remove the current application capacity settings.
:type remove_application_capacity: bool
:param metrics: List of application capacity metric description.
:type metrics: list[~azure.mgmt.servicefabric.models.ApplicationMetricDescription]
:param managed_identities: List of user assigned identities for the application, each mapped to
a friendly name.
:type managed_identities:
list[~azure.mgmt.servicefabric.models.ApplicationUserAssignedIdentity]
:ivar provisioning_state: The current deployment or provisioning state, which only appears in
the response.
:vartype provisioning_state: str
:param type_name: The application type name as defined in the application manifest.
:type type_name: str
"""
_validation = {
'id': {'readonly': True},
'name': {'readonly': True},
'type': {'readonly': True},
'etag': {'readonly': True},
'system_data': {'readonly': True},
'minimum_nodes': {'minimum': 0},
'maximum_nodes': {'minimum': 0},
'provisioning_state': {'readonly': True},
}
_attribute_map = {
'id': {'key': 'id', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'type': {'key': 'type', 'type': 'str'},
'location': {'key': 'location', 'type': 'str'},
'tags': {'key': 'tags', 'type': '{str}'},
'etag': {'key': 'etag', 'type': 'str'},
'system_data': {'key': 'systemData', 'type': 'SystemData'},
'identity': {'key': 'identity', 'type': 'ManagedIdentity'},
'type_version': {'key': 'properties.typeVersion', 'type': 'str'},
'parameters': {'key': 'properties.parameters', 'type': '{str}'},
'upgrade_policy': {'key': 'properties.upgradePolicy', 'type': 'ApplicationUpgradePolicy'},
'minimum_nodes': {'key': 'properties.minimumNodes', 'type': 'long'},
'maximum_nodes': {'key': 'properties.maximumNodes', 'type': 'long'},
'remove_application_capacity': {'key': 'properties.removeApplicationCapacity', 'type': 'bool'},
'metrics': {'key': 'properties.metrics', 'type': '[ApplicationMetricDescription]'},
'managed_identities': {'key': 'properties.managedIdentities', 'type': '[ApplicationUserAssignedIdentity]'},
'provisioning_state': {'key': 'properties.provisioningState', 'type': 'str'},
'type_name': {'key': 'properties.typeName', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
super(ApplicationResource, self).__init__(**kwargs)
self.identity = kwargs.get('identity', None)
self.type_version = kwargs.get('type_version', None)
self.parameters = kwargs.get('parameters', None)
self.upgrade_policy = kwargs.get('upgrade_policy', None)
self.minimum_nodes = kwargs.get('minimum_nodes', None)
self.maximum_nodes = kwargs.get('maximum_nodes', 0)
self.remove_application_capacity = kwargs.get('remove_application_capacity', None)
self.metrics = kwargs.get('metrics', None)
self.managed_identities = kwargs.get('managed_identities', None)
self.provisioning_state = None
self.type_name = kwargs.get('type_name', None)
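# The ``_attribute_map`` above maps flattened model attributes to dotted JSON keys such as
# ``'properties.typeVersion'``. A minimal sketch of how such a dotted key resolves against a
# REST payload (a simplified stand-in for the msrest deserializer, not its actual
# implementation; ``resolve_flattened_key`` and the sample ``body`` are illustrative names):

```python
def resolve_flattened_key(payload, dotted_key):
    """Walk a dotted key like 'properties.typeVersion' into a nested dict.

    Returns None when any segment is missing, mirroring how optional
    model attributes deserialize to None.
    """
    value = payload
    for segment in dotted_key.split('.'):
        if not isinstance(value, dict) or segment not in value:
            return None
        value = value[segment]
    return value


# Illustrative response body fragment for an application resource.
body = {"properties": {"typeVersion": "1.0.0", "typeName": "MyAppType"}}
assert resolve_flattened_key(body, "properties.typeVersion") == "1.0.0"
assert resolve_flattened_key(body, "properties.provisioningState") is None
```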
class ApplicationResourceList(msrest.serialization.Model):
"""The list of application resources.
Variables are only populated by the server, and will be ignored when sending a request.
:param value: The list of application resources.
:type value: list[~azure.mgmt.servicefabric.models.ApplicationResource]
:ivar next_link: URL to get the next set of application list results if there are any.
:vartype next_link: str
"""
_validation = {
'next_link': {'readonly': True},
}
_attribute_map = {
'value': {'key': 'value', 'type': '[ApplicationResource]'},
'next_link': {'key': 'nextLink', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
super(ApplicationResourceList, self).__init__(**kwargs)
self.value = kwargs.get('value', None)
self.next_link = None
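# ``ApplicationResourceList`` follows the standard Azure list-response shape: a ``value`` array
# plus a server-populated ``next_link``. A hedged sketch of the client-side paging loop that
# shape implies (``iterate_pages`` and ``fetch`` are illustrative stand-ins for the generated
# operations client, which normally does this for you):

```python
def iterate_pages(first_page, fetch):
    """Yield items across pages, following 'nextLink' until it is absent.

    Each page is a dict shaped like {"value": [...], "nextLink": "..."};
    fetch(url) returns the next page in the same shape.
    """
    page = first_page
    while True:
        for item in page.get("value") or []:
            yield item
        next_link = page.get("nextLink")
        if not next_link:
            return
        page = fetch(next_link)


# Two fake pages to show the traversal order.
pages = {"page2": {"value": [3, 4]}}
first = {"value": [1, 2], "nextLink": "page2"}
assert list(iterate_pages(first, pages.__getitem__)) == [1, 2, 3, 4]
```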
class ApplicationResourceUpdateProperties(msrest.serialization.Model):
"""The application resource properties for patch operations.
:param type_version: The version of the application type as defined in the application
manifest.
:type type_version: str
:param parameters: List of application parameters with overridden values from their default
values specified in the application manifest.
:type parameters: dict[str, str]
:param upgrade_policy: Describes the policy for a monitored application upgrade.
:type upgrade_policy: ~azure.mgmt.servicefabric.models.ApplicationUpgradePolicy
:param minimum_nodes: The minimum number of nodes where Service Fabric will reserve capacity
for this application. Note that this does not mean that the services of this application will
be placed on all of those nodes. If this property is set to zero, no capacity will be reserved.
The value of this property cannot be more than the value of the MaximumNodes property.
:type minimum_nodes: long
:param maximum_nodes: The maximum number of nodes where Service Fabric will reserve capacity
for this application. Note that this does not mean that the services of this application will
be placed on all of those nodes. By default, the value of this property is zero and it means
that the services can be placed on any node.
:type maximum_nodes: long
:param remove_application_capacity: Remove the current application capacity settings.
:type remove_application_capacity: bool
:param metrics: List of application capacity metric descriptions.
:type metrics: list[~azure.mgmt.servicefabric.models.ApplicationMetricDescription]
:param managed_identities: List of user assigned identities for the application, each mapped to
a friendly name.
:type managed_identities:
list[~azure.mgmt.servicefabric.models.ApplicationUserAssignedIdentity]
"""
_validation = {
'minimum_nodes': {'minimum': 0},
'maximum_nodes': {'minimum': 0},
}
_attribute_map = {
'type_version': {'key': 'typeVersion', 'type': 'str'},
'parameters': {'key': 'parameters', 'type': '{str}'},
'upgrade_policy': {'key': 'upgradePolicy', 'type': 'ApplicationUpgradePolicy'},
'minimum_nodes': {'key': 'minimumNodes', 'type': 'long'},
'maximum_nodes': {'key': 'maximumNodes', 'type': 'long'},
'remove_application_capacity': {'key': 'removeApplicationCapacity', 'type': 'bool'},
'metrics': {'key': 'metrics', 'type': '[ApplicationMetricDescription]'},
'managed_identities': {'key': 'managedIdentities', 'type': '[ApplicationUserAssignedIdentity]'},
}
def __init__(
self,
**kwargs
):
super(ApplicationResourceUpdateProperties, self).__init__(**kwargs)
self.type_version = kwargs.get('type_version', None)
self.parameters = kwargs.get('parameters', None)
self.upgrade_policy = kwargs.get('upgrade_policy', None)
self.minimum_nodes = kwargs.get('minimum_nodes', None)
self.maximum_nodes = kwargs.get('maximum_nodes', 0)
self.remove_application_capacity = kwargs.get('remove_application_capacity', None)
self.metrics = kwargs.get('metrics', None)
self.managed_identities = kwargs.get('managed_identities', None)
class ApplicationResourceProperties(ApplicationResourceUpdateProperties):
"""The application resource properties.
Variables are only populated by the server, and will be ignored when sending a request.
:param type_version: The version of the application type as defined in the application
manifest.
:type type_version: str
:param parameters: List of application parameters with overridden values from their default
values specified in the application manifest.
:type parameters: dict[str, str]
:param upgrade_policy: Describes the policy for a monitored application upgrade.
:type upgrade_policy: ~azure.mgmt.servicefabric.models.ApplicationUpgradePolicy
:param minimum_nodes: The minimum number of nodes where Service Fabric will reserve capacity
for this application. Note that this does not mean that the services of this application will
be placed on all of those nodes. If this property is set to zero, no capacity will be reserved.
The value of this property cannot be more than the value of the MaximumNodes property.
:type minimum_nodes: long
:param maximum_nodes: The maximum number of nodes where Service Fabric will reserve capacity
for this application. Note that this does not mean that the services of this application will
be placed on all of those nodes. By default, the value of this property is zero and it means
that the services can be placed on any node.
:type maximum_nodes: long
:param remove_application_capacity: Remove the current application capacity settings.
:type remove_application_capacity: bool
:param metrics: List of application capacity metric descriptions.
:type metrics: list[~azure.mgmt.servicefabric.models.ApplicationMetricDescription]
:param managed_identities: List of user assigned identities for the application, each mapped to
a friendly name.
:type managed_identities:
list[~azure.mgmt.servicefabric.models.ApplicationUserAssignedIdentity]
:ivar provisioning_state: The current deployment or provisioning state, which only appears in
the response.
:vartype provisioning_state: str
:param type_name: The application type name as defined in the application manifest.
:type type_name: str
"""
_validation = {
'minimum_nodes': {'minimum': 0},
'maximum_nodes': {'minimum': 0},
'provisioning_state': {'readonly': True},
}
_attribute_map = {
'type_version': {'key': 'typeVersion', 'type': 'str'},
'parameters': {'key': 'parameters', 'type': '{str}'},
'upgrade_policy': {'key': 'upgradePolicy', 'type': 'ApplicationUpgradePolicy'},
'minimum_nodes': {'key': 'minimumNodes', 'type': 'long'},
'maximum_nodes': {'key': 'maximumNodes', 'type': 'long'},
'remove_application_capacity': {'key': 'removeApplicationCapacity', 'type': 'bool'},
'metrics': {'key': 'metrics', 'type': '[ApplicationMetricDescription]'},
'managed_identities': {'key': 'managedIdentities', 'type': '[ApplicationUserAssignedIdentity]'},
'provisioning_state': {'key': 'provisioningState', 'type': 'str'},
'type_name': {'key': 'typeName', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
super(ApplicationResourceProperties, self).__init__(**kwargs)
self.provisioning_state = None
self.type_name = kwargs.get('type_name', None)
class ApplicationResourceUpdate(ProxyResource):
"""The application resource for patch operations.
Variables are only populated by the server, and will be ignored when sending a request.
:ivar id: Azure resource identifier.
:vartype id: str
:ivar name: Azure resource name.
:vartype name: str
:ivar type: Azure resource type.
:vartype type: str
:param location: It will be deprecated in the new API; resource location depends on the parent
resource.
:type location: str
:param tags: A set of tags. Azure resource tags.
:type tags: dict[str, str]
:ivar etag: Azure resource etag.
:vartype etag: str
:ivar system_data: Metadata pertaining to creation and last modification of the resource.
:vartype system_data: ~azure.mgmt.servicefabric.models.SystemData
:param type_version: The version of the application type as defined in the application
manifest.
:type type_version: str
:param parameters: List of application parameters with overridden values from their default
values specified in the application manifest.
:type parameters: dict[str, str]
:param upgrade_policy: Describes the policy for a monitored application upgrade.
:type upgrade_policy: ~azure.mgmt.servicefabric.models.ApplicationUpgradePolicy
:param minimum_nodes: The minimum number of nodes where Service Fabric will reserve capacity
for this application. Note that this does not mean that the services of this application will
be placed on all of those nodes. If this property is set to zero, no capacity will be reserved.
The value of this property cannot be more than the value of the MaximumNodes property.
:type minimum_nodes: long
:param maximum_nodes: The maximum number of nodes where Service Fabric will reserve capacity
for this application. Note that this does not mean that the services of this application will
be placed on all of those nodes. By default, the value of this property is zero and it means
that the services can be placed on any node.
:type maximum_nodes: long
:param remove_application_capacity: Remove the current application capacity settings.
:type remove_application_capacity: bool
:param metrics: List of application capacity metric descriptions.
:type metrics: list[~azure.mgmt.servicefabric.models.ApplicationMetricDescription]
:param managed_identities: List of user assigned identities for the application, each mapped to
a friendly name.
:type managed_identities:
list[~azure.mgmt.servicefabric.models.ApplicationUserAssignedIdentity]
"""
_validation = {
'id': {'readonly': True},
'name': {'readonly': True},
'type': {'readonly': True},
'etag': {'readonly': True},
'system_data': {'readonly': True},
'minimum_nodes': {'minimum': 0},
'maximum_nodes': {'minimum': 0},
}
_attribute_map = {
'id': {'key': 'id', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'type': {'key': 'type', 'type': 'str'},
'location': {'key': 'location', 'type': 'str'},
'tags': {'key': 'tags', 'type': '{str}'},
'etag': {'key': 'etag', 'type': 'str'},
'system_data': {'key': 'systemData', 'type': 'SystemData'},
'type_version': {'key': 'properties.typeVersion', 'type': 'str'},
'parameters': {'key': 'properties.parameters', 'type': '{str}'},
'upgrade_policy': {'key': 'properties.upgradePolicy', 'type': 'ApplicationUpgradePolicy'},
'minimum_nodes': {'key': 'properties.minimumNodes', 'type': 'long'},
'maximum_nodes': {'key': 'properties.maximumNodes', 'type': 'long'},
'remove_application_capacity': {'key': 'properties.removeApplicationCapacity', 'type': 'bool'},
'metrics': {'key': 'properties.metrics', 'type': '[ApplicationMetricDescription]'},
'managed_identities': {'key': 'properties.managedIdentities', 'type': '[ApplicationUserAssignedIdentity]'},
}
def __init__(
self,
**kwargs
):
super(ApplicationResourceUpdate, self).__init__(**kwargs)
self.type_version = kwargs.get('type_version', None)
self.parameters = kwargs.get('parameters', None)
self.upgrade_policy = kwargs.get('upgrade_policy', None)
self.minimum_nodes = kwargs.get('minimum_nodes', None)
self.maximum_nodes = kwargs.get('maximum_nodes', 0)
self.remove_application_capacity = kwargs.get('remove_application_capacity', None)
self.metrics = kwargs.get('metrics', None)
self.managed_identities = kwargs.get('managed_identities', None)
class ApplicationTypeResource(ProxyResource):
"""The application type name resource.
Variables are only populated by the server, and will be ignored when sending a request.
:ivar id: Azure resource identifier.
:vartype id: str
:ivar name: Azure resource name.
:vartype name: str
:ivar type: Azure resource type.
:vartype type: str
:param location: It will be deprecated in the new API; resource location depends on the parent
resource.
:type location: str
:param tags: A set of tags. Azure resource tags.
:type tags: dict[str, str]
:ivar etag: Azure resource etag.
:vartype etag: str
:ivar system_data: Metadata pertaining to creation and last modification of the resource.
:vartype system_data: ~azure.mgmt.servicefabric.models.SystemData
:ivar provisioning_state: The current deployment or provisioning state, which only appears in
the response.
:vartype provisioning_state: str
"""
_validation = {
'id': {'readonly': True},
'name': {'readonly': True},
'type': {'readonly': True},
'etag': {'readonly': True},
'system_data': {'readonly': True},
'provisioning_state': {'readonly': True},
}
_attribute_map = {
'id': {'key': 'id', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'type': {'key': 'type', 'type': 'str'},
'location': {'key': 'location', 'type': 'str'},
'tags': {'key': 'tags', 'type': '{str}'},
'etag': {'key': 'etag', 'type': 'str'},
'system_data': {'key': 'systemData', 'type': 'SystemData'},
'provisioning_state': {'key': 'properties.provisioningState', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
super(ApplicationTypeResource, self).__init__(**kwargs)
self.provisioning_state = None
class ApplicationTypeResourceList(msrest.serialization.Model):
"""The list of application type names.
Variables are only populated by the server, and will be ignored when sending a request.
:param value: The list of application type name resources.
:type value: list[~azure.mgmt.servicefabric.models.ApplicationTypeResource]
:ivar next_link: URL to get the next set of application type list results if there are any.
:vartype next_link: str
"""
_validation = {
'next_link': {'readonly': True},
}
_attribute_map = {
'value': {'key': 'value', 'type': '[ApplicationTypeResource]'},
'next_link': {'key': 'nextLink', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
super(ApplicationTypeResourceList, self).__init__(**kwargs)
self.value = kwargs.get('value', None)
self.next_link = None
class ApplicationTypeVersionResource(ProxyResource):
"""An application type version resource for the specified application type name resource.
Variables are only populated by the server, and will be ignored when sending a request.
:ivar id: Azure resource identifier.
:vartype id: str
:ivar name: Azure resource name.
:vartype name: str
:ivar type: Azure resource type.
:vartype type: str
:param location: It will be deprecated in the new API; resource location depends on the parent
resource.
:type location: str
:param tags: A set of tags. Azure resource tags.
:type tags: dict[str, str]
:ivar etag: Azure resource etag.
:vartype etag: str
:ivar system_data: Metadata pertaining to creation and last modification of the resource.
:vartype system_data: ~azure.mgmt.servicefabric.models.SystemData
:ivar provisioning_state: The current deployment or provisioning state, which only appears in
the response.
:vartype provisioning_state: str
:param app_package_url: The URL to the application package.
:type app_package_url: str
:ivar default_parameter_list: List of application type parameters that can be overridden when
creating or updating the application.
:vartype default_parameter_list: dict[str, str]
"""
_validation = {
'id': {'readonly': True},
'name': {'readonly': True},
'type': {'readonly': True},
'etag': {'readonly': True},
'system_data': {'readonly': True},
'provisioning_state': {'readonly': True},
'default_parameter_list': {'readonly': True},
}
_attribute_map = {
'id': {'key': 'id', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'type': {'key': 'type', 'type': 'str'},
'location': {'key': 'location', 'type': 'str'},
'tags': {'key': 'tags', 'type': '{str}'},
'etag': {'key': 'etag', 'type': 'str'},
'system_data': {'key': 'systemData', 'type': 'SystemData'},
'provisioning_state': {'key': 'properties.provisioningState', 'type': 'str'},
'app_package_url': {'key': 'properties.appPackageUrl', 'type': 'str'},
'default_parameter_list': {'key': 'properties.defaultParameterList', 'type': '{str}'},
}
def __init__(
self,
**kwargs
):
super(ApplicationTypeVersionResource, self).__init__(**kwargs)
self.provisioning_state = None
self.app_package_url = kwargs.get('app_package_url', None)
self.default_parameter_list = None
class ApplicationTypeVersionResourceList(msrest.serialization.Model):
"""The list of application type version resources for the specified application type name resource.
Variables are only populated by the server, and will be ignored when sending a request.
:param value: The list of application type version resources.
:type value: list[~azure.mgmt.servicefabric.models.ApplicationTypeVersionResource]
:ivar next_link: URL to get the next set of application type version list results if there are
any.
:vartype next_link: str
"""
_validation = {
'next_link': {'readonly': True},
}
_attribute_map = {
'value': {'key': 'value', 'type': '[ApplicationTypeVersionResource]'},
'next_link': {'key': 'nextLink', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
super(ApplicationTypeVersionResourceList, self).__init__(**kwargs)
self.value = kwargs.get('value', None)
self.next_link = None
class ApplicationTypeVersionsCleanupPolicy(msrest.serialization.Model):
"""ApplicationTypeVersionsCleanupPolicy.
All required parameters must be populated in order to send to Azure.
:param max_unused_versions_to_keep: Required. Number of unused versions per application type to
keep.
:type max_unused_versions_to_keep: long
"""
_validation = {
'max_unused_versions_to_keep': {'required': True, 'minimum': 0},
}
_attribute_map = {
'max_unused_versions_to_keep': {'key': 'maxUnusedVersionsToKeep', 'type': 'long'},
}
def __init__(
self,
**kwargs
):
super(ApplicationTypeVersionsCleanupPolicy, self).__init__(**kwargs)
self.max_unused_versions_to_keep = kwargs['max_unused_versions_to_keep']
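# The ``_validation`` table above drives client-side checks before serialization: the required
# kwarg access ``kwargs['max_unused_versions_to_keep']`` raises ``KeyError`` when the value is
# missing, and the ``minimum`` rule rejects negatives. A simplified sketch of enforcing such a
# table (``validate`` is an illustrative helper, not msrest's actual validator):

```python
def validate(values, validation):
    """Check attribute values against an msrest-style validation table."""
    errors = []
    for attr, rules in validation.items():
        value = values.get(attr)
        if rules.get('required') and value is None:
            errors.append(attr + " is required")
            continue
        if value is None:
            continue  # optional attribute left unset
        if 'minimum' in rules and value < rules['minimum']:
            errors.append(attr + " must be >= " + str(rules['minimum']))
        if 'maximum' in rules and value > rules['maximum']:
            errors.append(attr + " must be <= " + str(rules['maximum']))
    return errors


rules = {'max_unused_versions_to_keep': {'required': True, 'minimum': 0}}
assert validate({'max_unused_versions_to_keep': 3}, rules) == []
assert validate({}, rules) == ['max_unused_versions_to_keep is required']
assert validate({'max_unused_versions_to_keep': -1}, rules) == \
    ['max_unused_versions_to_keep must be >= 0']
```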
class ApplicationUpgradePolicy(msrest.serialization.Model):
"""Describes the policy for a monitored application upgrade.
:param upgrade_replica_set_check_timeout: The maximum amount of time to block processing of an
upgrade domain and prevent loss of availability when there are unexpected issues. When this
timeout expires, processing of the upgrade domain will proceed regardless of availability loss
issues. The timeout is reset at the start of each upgrade domain. Valid values are between 0
and 4294967295 inclusive (unsigned 32-bit integer).
:type upgrade_replica_set_check_timeout: str
:param force_restart: If true, then processes are forcefully restarted during upgrade even when
the code version has not changed (the upgrade only changes configuration or data).
:type force_restart: bool
:param rolling_upgrade_monitoring_policy: The policy used for monitoring the application
upgrade.
:type rolling_upgrade_monitoring_policy:
~azure.mgmt.servicefabric.models.ArmRollingUpgradeMonitoringPolicy
:param application_health_policy: Defines a health policy used to evaluate the health of an
application or one of its children entities.
:type application_health_policy: ~azure.mgmt.servicefabric.models.ArmApplicationHealthPolicy
:param upgrade_mode: The mode used to monitor health during a rolling upgrade. The values are
UnmonitoredAuto, UnmonitoredManual, and Monitored. Possible values include: "Invalid",
"UnmonitoredAuto", "UnmonitoredManual", "Monitored". Default value: "Monitored".
:type upgrade_mode: str or ~azure.mgmt.servicefabric.models.RollingUpgradeMode
:param recreate_application: Determines whether the application should be recreated on update.
If set to true, the rest of the upgrade policy parameters are not allowed, and the update will
result in availability loss.
:type recreate_application: bool
"""
_attribute_map = {
'upgrade_replica_set_check_timeout': {'key': 'upgradeReplicaSetCheckTimeout', 'type': 'str'},
'force_restart': {'key': 'forceRestart', 'type': 'bool'},
'rolling_upgrade_monitoring_policy': {'key': 'rollingUpgradeMonitoringPolicy', 'type': 'ArmRollingUpgradeMonitoringPolicy'},
'application_health_policy': {'key': 'applicationHealthPolicy', 'type': 'ArmApplicationHealthPolicy'},
'upgrade_mode': {'key': 'upgradeMode', 'type': 'str'},
'recreate_application': {'key': 'recreateApplication', 'type': 'bool'},
}
def __init__(
self,
**kwargs
):
super(ApplicationUpgradePolicy, self).__init__(**kwargs)
self.upgrade_replica_set_check_timeout = kwargs.get('upgrade_replica_set_check_timeout', None)
self.force_restart = kwargs.get('force_restart', False)
self.rolling_upgrade_monitoring_policy = kwargs.get('rolling_upgrade_monitoring_policy', None)
self.application_health_policy = kwargs.get('application_health_policy', None)
self.upgrade_mode = kwargs.get('upgrade_mode', "Monitored")
self.recreate_application = kwargs.get('recreate_application', None)
class ApplicationUserAssignedIdentity(msrest.serialization.Model):
"""ApplicationUserAssignedIdentity.
All required parameters must be populated in order to send to Azure.
:param name: Required. The friendly name of the user assigned identity.
:type name: str
:param principal_id: Required. The principal id of the user assigned identity.
:type principal_id: str
"""
_validation = {
'name': {'required': True},
'principal_id': {'required': True},
}
_attribute_map = {
'name': {'key': 'name', 'type': 'str'},
'principal_id': {'key': 'principalId', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
super(ApplicationUserAssignedIdentity, self).__init__(**kwargs)
self.name = kwargs['name']
self.principal_id = kwargs['principal_id']
class ArmApplicationHealthPolicy(msrest.serialization.Model):
"""Defines a health policy used to evaluate the health of an application or one of its children entities.
:param consider_warning_as_error: Indicates whether warnings are treated with the same severity
as errors.
:type consider_warning_as_error: bool
:param max_percent_unhealthy_deployed_applications: The maximum allowed percentage of unhealthy
deployed applications. Allowed values are Byte values from zero to 100.
The percentage represents the maximum tolerated percentage of deployed applications that can
be unhealthy before the application is considered in error.
This is calculated by dividing the number of unhealthy deployed applications by the number of
nodes where the application is currently deployed in the cluster.
The computation rounds up to tolerate one failure on small numbers of nodes. Default
percentage is zero.
:type max_percent_unhealthy_deployed_applications: int
:param default_service_type_health_policy: The health policy used by default to evaluate the
health of a service type.
:type default_service_type_health_policy:
~azure.mgmt.servicefabric.models.ArmServiceTypeHealthPolicy
:param service_type_health_policy_map: The map with service type health policy per service type
name. The map is empty by default.
:type service_type_health_policy_map: dict[str,
~azure.mgmt.servicefabric.models.ArmServiceTypeHealthPolicy]
"""
_attribute_map = {
'consider_warning_as_error': {'key': 'considerWarningAsError', 'type': 'bool'},
'max_percent_unhealthy_deployed_applications': {'key': 'maxPercentUnhealthyDeployedApplications', 'type': 'int'},
'default_service_type_health_policy': {'key': 'defaultServiceTypeHealthPolicy', 'type': 'ArmServiceTypeHealthPolicy'},
'service_type_health_policy_map': {'key': 'serviceTypeHealthPolicyMap', 'type': '{ArmServiceTypeHealthPolicy}'},
}
def __init__(
self,
**kwargs
):
super(ArmApplicationHealthPolicy, self).__init__(**kwargs)
self.consider_warning_as_error = kwargs.get('consider_warning_as_error', False)
self.max_percent_unhealthy_deployed_applications = kwargs.get('max_percent_unhealthy_deployed_applications', 0)
self.default_service_type_health_policy = kwargs.get('default_service_type_health_policy', None)
self.service_type_health_policy_map = kwargs.get('service_type_health_policy_map', None)
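# The docstring above notes that the unhealthy-percentage computation "rounds up to tolerate
# one failure on small numbers of nodes". A sketch of that evaluation under the assumption
# that the tolerated count is the percentage of the total rounded up (consistent with the
# documented behavior; the exact service-side arithmetic is not reproduced here):

```python
import math


def exceeds_unhealthy_threshold(unhealthy, total, max_percent):
    """True if the unhealthy count breaches max_percent of the total.

    The tolerated count is rounded up, so with 3 nodes and a 10%
    threshold one unhealthy deployed application is still tolerated.
    """
    if total == 0:
        return False
    tolerated = math.ceil(max_percent * total / 100)
    return unhealthy > tolerated


assert not exceeds_unhealthy_threshold(1, 3, 10)  # ceil(0.3) == 1 tolerated
assert exceeds_unhealthy_threshold(2, 3, 10)
```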
class ArmRollingUpgradeMonitoringPolicy(msrest.serialization.Model):
"""The policy used for monitoring the application upgrade.
:param failure_action: The compensating action to perform when a Monitored upgrade encounters
monitoring policy or health policy violations. Possible values include: "Rollback", "Manual".
:type failure_action: str or ~azure.mgmt.servicefabric.models.ArmUpgradeFailureAction
:param health_check_wait_duration: The amount of time to wait after completing an upgrade
domain before applying health policies. It is first interpreted as a string representing an ISO
8601 duration. If that fails, then it is interpreted as a number representing the total number
of milliseconds.
:type health_check_wait_duration: str
:param health_check_stable_duration: The amount of time that the application or cluster must
remain healthy before the upgrade proceeds to the next upgrade domain. It is first interpreted
as a string representing an ISO 8601 duration. If that fails, then it is interpreted as a
number representing the total number of milliseconds.
:type health_check_stable_duration: str
:param health_check_retry_timeout: The amount of time to retry health evaluation when the
application or cluster is unhealthy before FailureAction is executed. It is first interpreted
as a string representing an ISO 8601 duration. If that fails, then it is interpreted as a
number representing the total number of milliseconds.
:type health_check_retry_timeout: str
:param upgrade_timeout: The amount of time the overall upgrade has to complete before
FailureAction is executed. It is first interpreted as a string representing an ISO 8601
duration. If that fails, then it is interpreted as a number representing the total number of
milliseconds.
:type upgrade_timeout: str
:param upgrade_domain_timeout: The amount of time each upgrade domain has to complete before
FailureAction is executed. It is first interpreted as a string representing an ISO 8601
duration. If that fails, then it is interpreted as a number representing the total number of
milliseconds.
:type upgrade_domain_timeout: str
"""
_attribute_map = {
'failure_action': {'key': 'failureAction', 'type': 'str'},
'health_check_wait_duration': {'key': 'healthCheckWaitDuration', 'type': 'str'},
'health_check_stable_duration': {'key': 'healthCheckStableDuration', 'type': 'str'},
'health_check_retry_timeout': {'key': 'healthCheckRetryTimeout', 'type': 'str'},
'upgrade_timeout': {'key': 'upgradeTimeout', 'type': 'str'},
'upgrade_domain_timeout': {'key': 'upgradeDomainTimeout', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
super(ArmRollingUpgradeMonitoringPolicy, self).__init__(**kwargs)
self.failure_action = kwargs.get('failure_action', None)
self.health_check_wait_duration = kwargs.get('health_check_wait_duration', "0")
self.health_check_stable_duration = kwargs.get('health_check_stable_duration', "PT0H2M0S")
self.health_check_retry_timeout = kwargs.get('health_check_retry_timeout', "PT0H10M0S")
self.upgrade_timeout = kwargs.get('upgrade_timeout', "P10675199DT02H48M05.4775807S")
self.upgrade_domain_timeout = kwargs.get('upgrade_domain_timeout', "P10675199DT02H48M05.4775807S")
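# The timeout fields above are documented as "first interpreted as a string representing an
# ISO 8601 duration; if that fails, then as a number of milliseconds". A hedged sketch of that
# two-step interpretation, using a deliberately minimal ISO 8601 pattern (days, hours, minutes
# and seconds only; the service accepts the full grammar, and ``interpret_duration`` is an
# illustrative helper, not part of this SDK):

```python
import re
from datetime import timedelta

# Minimal ISO 8601 duration pattern: days, hours, minutes, seconds only.
_ISO8601 = re.compile(
    r'^P(?:(?P<d>\d+)D)?'
    r'(?:T(?:(?P<h>\d+)H)?(?:(?P<m>\d+)M)?(?:(?P<s>\d+(?:\.\d+)?)S)?)?$'
)


def interpret_duration(value):
    """Return a timedelta: try ISO 8601 first, else total milliseconds."""
    match = _ISO8601.match(value)
    if match and any(match.groups()):
        parts = {k: float(v) for k, v in match.groupdict().items() if v}
        return timedelta(days=parts.get('d', 0), hours=parts.get('h', 0),
                         minutes=parts.get('m', 0), seconds=parts.get('s', 0))
    return timedelta(milliseconds=int(value))


assert interpret_duration("PT0H10M0S") == timedelta(minutes=10)  # the retry-timeout default
assert interpret_duration("600000") == timedelta(minutes=10)     # same span in milliseconds
```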
class ArmServiceTypeHealthPolicy(msrest.serialization.Model):
"""Represents the health policy used to evaluate the health of services belonging to a service type.
:param max_percent_unhealthy_services: The maximum percentage of services allowed to be
unhealthy before your application is considered in error.
:type max_percent_unhealthy_services: int
:param max_percent_unhealthy_partitions_per_service: The maximum percentage of partitions per
service allowed to be unhealthy before your application is considered in error.
:type max_percent_unhealthy_partitions_per_service: int
:param max_percent_unhealthy_replicas_per_partition: The maximum percentage of replicas per
partition allowed to be unhealthy before your application is considered in error.
:type max_percent_unhealthy_replicas_per_partition: int
"""
_validation = {
'max_percent_unhealthy_services': {'maximum': 100, 'minimum': 0},
'max_percent_unhealthy_partitions_per_service': {'maximum': 100, 'minimum': 0},
'max_percent_unhealthy_replicas_per_partition': {'maximum': 100, 'minimum': 0},
}
_attribute_map = {
'max_percent_unhealthy_services': {'key': 'maxPercentUnhealthyServices', 'type': 'int'},
'max_percent_unhealthy_partitions_per_service': {'key': 'maxPercentUnhealthyPartitionsPerService', 'type': 'int'},
'max_percent_unhealthy_replicas_per_partition': {'key': 'maxPercentUnhealthyReplicasPerPartition', 'type': 'int'},
}
def __init__(
self,
**kwargs
):
super(ArmServiceTypeHealthPolicy, self).__init__(**kwargs)
self.max_percent_unhealthy_services = kwargs.get('max_percent_unhealthy_services', 0)
self.max_percent_unhealthy_partitions_per_service = kwargs.get('max_percent_unhealthy_partitions_per_service', 0)
self.max_percent_unhealthy_replicas_per_partition = kwargs.get('max_percent_unhealthy_replicas_per_partition', 0)
class AvailableOperationDisplay(msrest.serialization.Model):
"""Operation supported by the Service Fabric resource provider.
:param provider: The name of the provider.
:type provider: str
:param resource: The resource on which the operation is performed.
:type resource: str
:param operation: The operation that can be performed.
:type operation: str
:param description: Operation description.
:type description: str
"""
_attribute_map = {
'provider': {'key': 'provider', 'type': 'str'},
'resource': {'key': 'resource', 'type': 'str'},
'operation': {'key': 'operation', 'type': 'str'},
'description': {'key': 'description', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
super(AvailableOperationDisplay, self).__init__(**kwargs)
self.provider = kwargs.get('provider', None)
self.resource = kwargs.get('resource', None)
self.operation = kwargs.get('operation', None)
self.description = kwargs.get('description', None)
class AzureActiveDirectory(msrest.serialization.Model):
"""The settings to enable AAD authentication on the cluster.
:param tenant_id: Azure Active Directory tenant id.
:type tenant_id: str
:param cluster_application: Azure Active Directory cluster application id.
:type cluster_application: str
:param client_application: Azure Active Directory client application id.
:type client_application: str
"""
_attribute_map = {
'tenant_id': {'key': 'tenantId', 'type': 'str'},
'cluster_application': {'key': 'clusterApplication', 'type': 'str'},
'client_application': {'key': 'clientApplication', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
super(AzureActiveDirectory, self).__init__(**kwargs)
self.tenant_id = kwargs.get('tenant_id', None)
self.cluster_application = kwargs.get('cluster_application', None)
self.client_application = kwargs.get('client_application', None)
class CertificateDescription(msrest.serialization.Model):
"""Describes the certificate details.
All required parameters must be populated in order to send to Azure.
:param thumbprint: Required. Thumbprint of the primary certificate.
:type thumbprint: str
:param thumbprint_secondary: Thumbprint of the secondary certificate.
:type thumbprint_secondary: str
:param x509_store_name: The local certificate store location. Possible values include:
"AddressBook", "AuthRoot", "CertificateAuthority", "Disallowed", "My", "Root", "TrustedPeople",
"TrustedPublisher".
:type x509_store_name: str or ~azure.mgmt.servicefabric.models.StoreName
"""
_validation = {
'thumbprint': {'required': True},
}
_attribute_map = {
'thumbprint': {'key': 'thumbprint', 'type': 'str'},
'thumbprint_secondary': {'key': 'thumbprintSecondary', 'type': 'str'},
'x509_store_name': {'key': 'x509StoreName', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
super(CertificateDescription, self).__init__(**kwargs)
self.thumbprint = kwargs['thumbprint']
self.thumbprint_secondary = kwargs.get('thumbprint_secondary', None)
self.x509_store_name = kwargs.get('x509_store_name', None)
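# A hedged sketch (not the SDK class) of the kwargs convention used by the
# models above: required parameters are read with kwargs['name'], which raises
# KeyError when absent, while optional ones use kwargs.get('name', default).

```python
class RequiredKwargsSketch:
    """Illustrative stand-in mirroring CertificateDescription's __init__."""

    def __init__(self, **kwargs):
        # Required: raises KeyError if the caller omits it.
        self.thumbprint = kwargs['thumbprint']
        # Optional: fall back to None when not supplied.
        self.thumbprint_secondary = kwargs.get('thumbprint_secondary', None)
        self.x509_store_name = kwargs.get('x509_store_name', None)


cert = RequiredKwargsSketch(thumbprint='AA11BB22CC')
```

# This is why `_validation` marks 'thumbprint' as required: omitting it fails
# at construction time rather than at request time.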
class ClientCertificateCommonName(msrest.serialization.Model):
"""Describes the client certificate details using common name.
All required parameters must be populated in order to send to Azure.
:param is_admin: Required. Indicates if the client certificate has admin access to the cluster.
Non-admin clients can perform only read-only operations on the cluster.
:type is_admin: bool
:param certificate_common_name: Required. The common name of the client certificate.
:type certificate_common_name: str
:param certificate_issuer_thumbprint: Required. The issuer thumbprint of the client
certificate.
:type certificate_issuer_thumbprint: str
"""
_validation = {
'is_admin': {'required': True},
'certificate_common_name': {'required': True},
'certificate_issuer_thumbprint': {'required': True},
}
_attribute_map = {
'is_admin': {'key': 'isAdmin', 'type': 'bool'},
'certificate_common_name': {'key': 'certificateCommonName', 'type': 'str'},
'certificate_issuer_thumbprint': {'key': 'certificateIssuerThumbprint', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
super(ClientCertificateCommonName, self).__init__(**kwargs)
self.is_admin = kwargs['is_admin']
self.certificate_common_name = kwargs['certificate_common_name']
self.certificate_issuer_thumbprint = kwargs['certificate_issuer_thumbprint']
class ClientCertificateThumbprint(msrest.serialization.Model):
"""Describes the client certificate details using thumbprint.
All required parameters must be populated in order to send to Azure.
:param is_admin: Required. Indicates if the client certificate has admin access to the cluster.
Non-admin clients can perform only read-only operations on the cluster.
:type is_admin: bool
:param certificate_thumbprint: Required. The thumbprint of the client certificate.
:type certificate_thumbprint: str
"""
_validation = {
'is_admin': {'required': True},
'certificate_thumbprint': {'required': True},
}
_attribute_map = {
'is_admin': {'key': 'isAdmin', 'type': 'bool'},
'certificate_thumbprint': {'key': 'certificateThumbprint', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
super(ClientCertificateThumbprint, self).__init__(**kwargs)
self.is_admin = kwargs['is_admin']
self.certificate_thumbprint = kwargs['certificate_thumbprint']
class Resource(msrest.serialization.Model):
"""The resource model definition.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
:ivar id: Azure resource identifier.
:vartype id: str
:ivar name: Azure resource name.
:vartype name: str
:ivar type: Azure resource type.
:vartype type: str
:param location: Required. Azure resource location.
:type location: str
:param tags: A set of tags. Azure resource tags.
:type tags: dict[str, str]
:ivar etag: Azure resource etag.
:vartype etag: str
:ivar system_data: Metadata pertaining to creation and last modification of the resource.
:vartype system_data: ~azure.mgmt.servicefabric.models.SystemData
"""
_validation = {
'id': {'readonly': True},
'name': {'readonly': True},
'type': {'readonly': True},
'location': {'required': True},
'etag': {'readonly': True},
'system_data': {'readonly': True},
}
_attribute_map = {
'id': {'key': 'id', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'type': {'key': 'type', 'type': 'str'},
'location': {'key': 'location', 'type': 'str'},
'tags': {'key': 'tags', 'type': '{str}'},
'etag': {'key': 'etag', 'type': 'str'},
'system_data': {'key': 'systemData', 'type': 'SystemData'},
}
def __init__(
self,
**kwargs
):
super(Resource, self).__init__(**kwargs)
self.id = None
self.name = None
self.type = None
self.location = kwargs['location']
self.tags = kwargs.get('tags', None)
self.etag = None
self.system_data = None
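# A hedged sketch of how the `readonly` flags in `_validation` are typically
# honored during serialization: read-only attributes such as `id` are
# initialized to None and omitted from request bodies, since the server
# populates them. This helper is illustrative, not the actual msrest
# serializer.

```python
def serialize_for_request(obj, attribute_map, validation):
    """Build a request body, omitting read-only (server-populated) fields."""
    body = {}
    for attr, meta in attribute_map.items():
        if validation.get(attr, {}).get('readonly'):
            continue  # populated by the server; never sent in a request
        value = getattr(obj, attr, None)
        if value is not None:
            body[meta['key']] = value
    return body


class _ResourceLike:
    pass


res = _ResourceLike()
res.id = 'server-generated'   # read-only: must not appear in the body
res.location = 'westus'
res.tags = {'env': 'dev'}

body = serialize_for_request(
    res,
    {'id': {'key': 'id'}, 'location': {'key': 'location'}, 'tags': {'key': 'tags'}},
    {'id': {'readonly': True}, 'location': {'required': True}},
)
```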
class Cluster(Resource):
"""The cluster resource.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
:ivar id: Azure resource identifier.
:vartype id: str
:ivar name: Azure resource name.
:vartype name: str
:ivar type: Azure resource type.
:vartype type: str
:param location: Required. Azure resource location.
:type location: str
:param tags: A set of tags. Azure resource tags.
:type tags: dict[str, str]
:ivar etag: Azure resource etag.
:vartype etag: str
:ivar system_data: Metadata pertaining to creation and last modification of the resource.
:vartype system_data: ~azure.mgmt.servicefabric.models.SystemData
:param add_on_features: The list of add-on features to enable in the cluster.
:type add_on_features: list[str or ~azure.mgmt.servicefabric.models.AddOnFeatures]
:ivar available_cluster_versions: The Service Fabric runtime versions available for this
cluster.
:vartype available_cluster_versions:
list[~azure.mgmt.servicefabric.models.ClusterVersionDetails]
:param azure_active_directory: The AAD authentication settings of the cluster.
:type azure_active_directory: ~azure.mgmt.servicefabric.models.AzureActiveDirectory
:param certificate: The certificate to use for securing the cluster. The certificate provided
will be used for node to node security within the cluster, SSL certificate for cluster
management endpoint and default admin client.
:type certificate: ~azure.mgmt.servicefabric.models.CertificateDescription
:param certificate_common_names: Describes a list of server certificates referenced by common
name that are used to secure the cluster.
:type certificate_common_names: ~azure.mgmt.servicefabric.models.ServerCertificateCommonNames
:param client_certificate_common_names: The list of client certificates referenced by common
name that are allowed to manage the cluster.
:type client_certificate_common_names:
list[~azure.mgmt.servicefabric.models.ClientCertificateCommonName]
:param client_certificate_thumbprints: The list of client certificates referenced by thumbprint
that are allowed to manage the cluster.
:type client_certificate_thumbprints:
list[~azure.mgmt.servicefabric.models.ClientCertificateThumbprint]
:param cluster_code_version: The Service Fabric runtime version of the cluster. This property
can only be set by the user when **upgradeMode** is set to 'Manual'. To get the list of available
Service Fabric versions for new clusters use `ClusterVersion API <./ClusterVersion.md>`_. To
get the list of available versions for existing clusters use **availableClusterVersions**.
:type cluster_code_version: str
:ivar cluster_endpoint: The Azure Resource Provider endpoint. A system service in the cluster
connects to this endpoint.
:vartype cluster_endpoint: str
:ivar cluster_id: A service generated unique identifier for the cluster resource.
:vartype cluster_id: str
:ivar cluster_state: The current state of the cluster.
* WaitingForNodes - Indicates that the cluster resource is created and the resource provider
is waiting for Service Fabric VM extension to boot up and report to it.
* Deploying - Indicates that the Service Fabric runtime is being installed on the VMs. Cluster
resource will be in this state until the cluster boots up and system services are up.
* BaselineUpgrade - Indicates that the cluster is upgrading to establish the cluster
version. This upgrade is automatically initiated when the cluster boots up for the first time.
* UpdatingUserConfiguration - Indicates that the cluster is being upgraded with the user
provided configuration.
* UpdatingUserCertificate - Indicates that the cluster is being upgraded with the user
provided certificate.
* UpdatingInfrastructure - Indicates that the cluster is being upgraded with the latest
Service Fabric runtime version. This happens only when the **upgradeMode** is set to
'Automatic'.
* EnforcingClusterVersion - Indicates that the cluster is on a different version than expected and
the cluster is being upgraded to the expected version.
* UpgradeServiceUnreachable - Indicates that the system service in the cluster is no longer
polling the Resource Provider. Clusters in this state cannot be managed by the Resource
Provider.
* AutoScale - Indicates that the ReliabilityLevel of the cluster is being adjusted.
* Ready - Indicates that the cluster is in a stable state. Possible values include:
"WaitingForNodes", "Deploying", "BaselineUpgrade", "UpdatingUserConfiguration",
"UpdatingUserCertificate", "UpdatingInfrastructure", "EnforcingClusterVersion",
"UpgradeServiceUnreachable", "AutoScale", "Ready".
:vartype cluster_state: str or ~azure.mgmt.servicefabric.models.ClusterState
:param diagnostics_storage_account_config: The storage account information for storing Service
Fabric diagnostic logs.
:type diagnostics_storage_account_config:
~azure.mgmt.servicefabric.models.DiagnosticsStorageAccountConfig
:param event_store_service_enabled: Indicates if the event store service is enabled.
:type event_store_service_enabled: bool
:param fabric_settings: The list of custom fabric settings to configure the cluster.
:type fabric_settings: list[~azure.mgmt.servicefabric.models.SettingsSectionDescription]
:param management_endpoint: The http management endpoint of the cluster.
:type management_endpoint: str
:param node_types: The list of node types in the cluster.
:type node_types: list[~azure.mgmt.servicefabric.models.NodeTypeDescription]
:ivar provisioning_state: The provisioning state of the cluster resource. Possible values
include: "Updating", "Succeeded", "Failed", "Canceled".
:vartype provisioning_state: str or ~azure.mgmt.servicefabric.models.ProvisioningState
:param reliability_level: The reliability level sets the replica set size of system services.
Learn about `ReliabilityLevel
<https://docs.microsoft.com/azure/service-fabric/service-fabric-cluster-capacity>`_.
* None - Run the System services with a target replica set count of 1. This should only be
used for test clusters.
* Bronze - Run the System services with a target replica set count of 3. This should only be
used for test clusters.
* Silver - Run the System services with a target replica set count of 5.
* Gold - Run the System services with a target replica set count of 7.
* Platinum - Run the System services with a target replica set count of 9. Possible values
include: "None", "Bronze", "Silver", "Gold", "Platinum".
:type reliability_level: str or ~azure.mgmt.servicefabric.models.ReliabilityLevel
:param reverse_proxy_certificate: The server certificate used by reverse proxy.
:type reverse_proxy_certificate: ~azure.mgmt.servicefabric.models.CertificateDescription
:param reverse_proxy_certificate_common_names: Describes a list of server certificates
referenced by common name that are used to secure the cluster.
:type reverse_proxy_certificate_common_names:
~azure.mgmt.servicefabric.models.ServerCertificateCommonNames
:param upgrade_description: The policy to use when upgrading the cluster.
:type upgrade_description: ~azure.mgmt.servicefabric.models.ClusterUpgradePolicy
:param upgrade_mode: The upgrade mode of the cluster when a new Service Fabric runtime
version is available. Possible values include: "Automatic", "Manual". Default value: "Automatic".
:type upgrade_mode: str or ~azure.mgmt.servicefabric.models.UpgradeMode
:param application_type_versions_cleanup_policy: The policy used to clean up unused versions.
:type application_type_versions_cleanup_policy:
~azure.mgmt.servicefabric.models.ApplicationTypeVersionsCleanupPolicy
:param vm_image: The VM image the VMSS has been configured with. Generic names such as Windows or
Linux can be used.
:type vm_image: str
:param sf_zonal_upgrade_mode: This property controls the logical grouping of VMs in upgrade
domains (UDs). This property can't be modified if a node type with multiple Availability Zones
is already present in the cluster. Possible values include: "Parallel", "Hierarchical".
:type sf_zonal_upgrade_mode: str or ~azure.mgmt.servicefabric.models.SfZonalUpgradeMode
:param vmss_zonal_upgrade_mode: This property defines the upgrade mode for the virtual machine
scale set; it is mandatory if a node type with multiple Availability Zones is added. Possible
values include: "Parallel", "Hierarchical".
:type vmss_zonal_upgrade_mode: str or ~azure.mgmt.servicefabric.models.VmssZonalUpgradeMode
:param infrastructure_service_manager: Indicates if infrastructure service manager is enabled.
:type infrastructure_service_manager: bool
:param upgrade_wave: Indicates when new cluster runtime version upgrades will be applied after
they are released. The default is Wave0. This only applies when **upgradeMode** is set to
'Automatic'. Possible values include: "Wave0", "Wave1", "Wave2".
:type upgrade_wave: str or ~azure.mgmt.servicefabric.models.ClusterUpgradeCadence
:param upgrade_pause_start_timestamp_utc: Indicates the start date and time to pause automatic
runtime version upgrades on the cluster for a specific period of time (UTC).
:type upgrade_pause_start_timestamp_utc: ~datetime.datetime
:param upgrade_pause_end_timestamp_utc: Indicates the end date and time to pause automatic
runtime version upgrades on the cluster for a specific period of time (UTC).
:type upgrade_pause_end_timestamp_utc: ~datetime.datetime
:param wave_upgrade_paused: Boolean to pause automatic runtime version upgrades to the cluster.
:type wave_upgrade_paused: bool
:param notifications: Indicates a list of notification channels for cluster events.
:type notifications: list[~azure.mgmt.servicefabric.models.Notification]
"""
_validation = {
'id': {'readonly': True},
'name': {'readonly': True},
'type': {'readonly': True},
'location': {'required': True},
'etag': {'readonly': True},
'system_data': {'readonly': True},
'available_cluster_versions': {'readonly': True},
'cluster_endpoint': {'readonly': True},
'cluster_id': {'readonly': True},
'cluster_state': {'readonly': True},
'provisioning_state': {'readonly': True},
}
_attribute_map = {
'id': {'key': 'id', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'type': {'key': 'type', 'type': 'str'},
'location': {'key': 'location', 'type': 'str'},
'tags': {'key': 'tags', 'type': '{str}'},
'etag': {'key': 'etag', 'type': 'str'},
'system_data': {'key': 'systemData', 'type': 'SystemData'},
'add_on_features': {'key': 'properties.addOnFeatures', 'type': '[str]'},
'available_cluster_versions': {'key': 'properties.availableClusterVersions', 'type': '[ClusterVersionDetails]'},
'azure_active_directory': {'key': 'properties.azureActiveDirectory', 'type': 'AzureActiveDirectory'},
'certificate': {'key': 'properties.certificate', 'type': 'CertificateDescription'},
'certificate_common_names': {'key': 'properties.certificateCommonNames', 'type': 'ServerCertificateCommonNames'},
'client_certificate_common_names': {'key': 'properties.clientCertificateCommonNames', 'type': '[ClientCertificateCommonName]'},
'client_certificate_thumbprints': {'key': 'properties.clientCertificateThumbprints', 'type': '[ClientCertificateThumbprint]'},
'cluster_code_version': {'key': 'properties.clusterCodeVersion', 'type': 'str'},
'cluster_endpoint': {'key': 'properties.clusterEndpoint', 'type': 'str'},
'cluster_id': {'key': 'properties.clusterId', 'type': 'str'},
'cluster_state': {'key': 'properties.clusterState', 'type': 'str'},
'diagnostics_storage_account_config': {'key': 'properties.diagnosticsStorageAccountConfig', 'type': 'DiagnosticsStorageAccountConfig'},
'event_store_service_enabled': {'key': 'properties.eventStoreServiceEnabled', 'type': 'bool'},
'fabric_settings': {'key': 'properties.fabricSettings', 'type': '[SettingsSectionDescription]'},
'management_endpoint': {'key': 'properties.managementEndpoint', 'type': 'str'},
'node_types': {'key': 'properties.nodeTypes', 'type': '[NodeTypeDescription]'},
'provisioning_state': {'key': 'properties.provisioningState', 'type': 'str'},
'reliability_level': {'key': 'properties.reliabilityLevel', 'type': 'str'},
'reverse_proxy_certificate': {'key': 'properties.reverseProxyCertificate', 'type': 'CertificateDescription'},
'reverse_proxy_certificate_common_names': {'key': 'properties.reverseProxyCertificateCommonNames', 'type': 'ServerCertificateCommonNames'},
'upgrade_description': {'key': 'properties.upgradeDescription', 'type': 'ClusterUpgradePolicy'},
'upgrade_mode': {'key': 'properties.upgradeMode', 'type': 'str'},
'application_type_versions_cleanup_policy': {'key': 'properties.applicationTypeVersionsCleanupPolicy', 'type': 'ApplicationTypeVersionsCleanupPolicy'},
'vm_image': {'key': 'properties.vmImage', 'type': 'str'},
'sf_zonal_upgrade_mode': {'key': 'properties.sfZonalUpgradeMode', 'type': 'str'},
'vmss_zonal_upgrade_mode': {'key': 'properties.vmssZonalUpgradeMode', 'type': 'str'},
'infrastructure_service_manager': {'key': 'properties.infrastructureServiceManager', 'type': 'bool'},
'upgrade_wave': {'key': 'properties.upgradeWave', 'type': 'str'},
'upgrade_pause_start_timestamp_utc': {'key': 'properties.upgradePauseStartTimestampUtc', 'type': 'iso-8601'},
'upgrade_pause_end_timestamp_utc': {'key': 'properties.upgradePauseEndTimestampUtc', 'type': 'iso-8601'},
'wave_upgrade_paused': {'key': 'properties.waveUpgradePaused', 'type': 'bool'},
'notifications': {'key': 'properties.notifications', 'type': '[Notification]'},
}
def __init__(
self,
**kwargs
):
super(Cluster, self).__init__(**kwargs)
self.add_on_features = kwargs.get('add_on_features', None)
self.available_cluster_versions = None
self.azure_active_directory = kwargs.get('azure_active_directory', None)
self.certificate = kwargs.get('certificate', None)
self.certificate_common_names = kwargs.get('certificate_common_names', None)
self.client_certificate_common_names = kwargs.get('client_certificate_common_names', None)
self.client_certificate_thumbprints = kwargs.get('client_certificate_thumbprints', None)
self.cluster_code_version = kwargs.get('cluster_code_version', None)
self.cluster_endpoint = None
self.cluster_id = None
self.cluster_state = None
self.diagnostics_storage_account_config = kwargs.get('diagnostics_storage_account_config', None)
self.event_store_service_enabled = kwargs.get('event_store_service_enabled', None)
self.fabric_settings = kwargs.get('fabric_settings', None)
self.management_endpoint = kwargs.get('management_endpoint', None)
self.node_types = kwargs.get('node_types', None)
self.provisioning_state = None
self.reliability_level = kwargs.get('reliability_level', None)
self.reverse_proxy_certificate = kwargs.get('reverse_proxy_certificate', None)
self.reverse_proxy_certificate_common_names = kwargs.get('reverse_proxy_certificate_common_names', None)
self.upgrade_description = kwargs.get('upgrade_description', None)
self.upgrade_mode = kwargs.get('upgrade_mode', "Automatic")
self.application_type_versions_cleanup_policy = kwargs.get('application_type_versions_cleanup_policy', None)
self.vm_image = kwargs.get('vm_image', None)
self.sf_zonal_upgrade_mode = kwargs.get('sf_zonal_upgrade_mode', None)
self.vmss_zonal_upgrade_mode = kwargs.get('vmss_zonal_upgrade_mode', None)
self.infrastructure_service_manager = kwargs.get('infrastructure_service_manager', None)
self.upgrade_wave = kwargs.get('upgrade_wave', None)
self.upgrade_pause_start_timestamp_utc = kwargs.get('upgrade_pause_start_timestamp_utc', None)
self.upgrade_pause_end_timestamp_utc = kwargs.get('upgrade_pause_end_timestamp_utc', None)
self.wave_upgrade_paused = kwargs.get('wave_upgrade_paused', None)
self.notifications = kwargs.get('notifications', None)
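# The attribute map above uses dotted keys like 'properties.clusterId' to
# flatten nested REST payload fields onto flat model attributes. A hedged
# sketch of how such a key could be resolved against a raw payload (an
# illustrative helper, not SDK API):

```python
def resolve_flattened_key(payload, dotted_key):
    """Resolve a dotted attribute-map key against a nested JSON payload."""
    node = payload
    for part in dotted_key.split('.'):
        if not isinstance(node, dict) or part not in node:
            return None
        node = node[part]
    return node


# Hypothetical fragment of a GET cluster response body.
payload = {
    'location': 'westus',
    'properties': {'clusterId': 'abc-123', 'upgradeMode': 'Automatic'},
}
```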
class ClusterCodeVersionsListResult(msrest.serialization.Model):
"""The list results of the Service Fabric runtime versions.
:param value:
:type value: list[~azure.mgmt.servicefabric.models.ClusterCodeVersionsResult]
:param next_link: The URL to use for getting the next set of results.
:type next_link: str
"""
_attribute_map = {
'value': {'key': 'value', 'type': '[ClusterCodeVersionsResult]'},
'next_link': {'key': 'nextLink', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
super(ClusterCodeVersionsListResult, self).__init__(**kwargs)
self.value = kwargs.get('value', None)
self.next_link = kwargs.get('next_link', None)
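# List results like the one above carry a `next_link` for paging. A hedged
# sketch of consuming such a result by following nextLink until it is empty;
# `fetch_page`, the URLs, and the page dict shape are assumptions for
# illustration, not SDK API.

```python
def iterate_pages(fetch_page, first_url):
    """Yield items across pages by following nextLink."""
    url = first_url
    while url:
        page = fetch_page(url)  # returns {'value': [...], 'nextLink': ...}
        for item in page.get('value', []):
            yield item
        url = page.get('nextLink')


# Fake two-page response for demonstration.
_pages = {
    '/versions?page=1': {'value': ['7.2.445', '8.0.516'],
                         'nextLink': '/versions?page=2'},
    '/versions?page=2': {'value': ['9.0.1017'], 'nextLink': None},
}
versions = list(iterate_pages(_pages.__getitem__, '/versions?page=1'))
```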
class ClusterCodeVersionsResult(msrest.serialization.Model):
"""The result of the Service Fabric runtime versions.
:param id: The identification of the result.
:type id: str
:param name: The name of the result.
:type name: str
:param type: The result resource type.
:type type: str
:param code_version: The Service Fabric runtime version of the cluster.
:type code_version: str
:param support_expiry_utc: The date of expiry of support of the version.
:type support_expiry_utc: str
:param environment: Indicates if this version is for Windows or Linux operating system.
Possible values include: "Windows", "Linux".
:type environment: str or ~azure.mgmt.servicefabric.models.ClusterEnvironment
"""
_attribute_map = {
'id': {'key': 'id', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'type': {'key': 'type', 'type': 'str'},
'code_version': {'key': 'properties.codeVersion', 'type': 'str'},
'support_expiry_utc': {'key': 'properties.supportExpiryUtc', 'type': 'str'},
'environment': {'key': 'properties.environment', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
super(ClusterCodeVersionsResult, self).__init__(**kwargs)
self.id = kwargs.get('id', None)
self.name = kwargs.get('name', None)
self.type = kwargs.get('type', None)
self.code_version = kwargs.get('code_version', None)
self.support_expiry_utc = kwargs.get('support_expiry_utc', None)
self.environment = kwargs.get('environment', None)
class ClusterHealthPolicy(msrest.serialization.Model):
"""Defines a health policy used to evaluate the health of the cluster or of a cluster node.
:param max_percent_unhealthy_nodes: The maximum allowed percentage of unhealthy nodes before
reporting an error. For example, to allow 10% of nodes to be unhealthy, this value would be 10.
The percentage represents the maximum tolerated percentage of nodes that can be unhealthy
before the cluster is considered in error.
If the percentage is respected but there is at least one unhealthy node, the health is
evaluated as Warning.
The percentage is calculated by dividing the number of unhealthy nodes by the total number
of nodes in the cluster.
The computation rounds up to tolerate one failure on small numbers of nodes. The default
percentage is zero.
In large clusters, some nodes will always be down or out for repairs, so this percentage
should be configured to tolerate that.
:type max_percent_unhealthy_nodes: int
:param max_percent_unhealthy_applications: The maximum allowed percentage of unhealthy
applications before reporting an error. For example, to allow 10% of applications to be
unhealthy, this value would be 10.
The percentage represents the maximum tolerated percentage of applications that can be
unhealthy before the cluster is considered in error.
If the percentage is respected but there is at least one unhealthy application, the health is
evaluated as Warning.
This is calculated by dividing the number of unhealthy applications by the total number of
application instances in the cluster, excluding applications of application types that are
included in the ApplicationTypeHealthPolicyMap.
The computation rounds up to tolerate one failure on small numbers of applications. The
default percentage is zero.
:type max_percent_unhealthy_applications: int
:param application_health_policies: Defines the application health policy map used to evaluate
the health of an application or one of its children entities.
:type application_health_policies: dict[str,
~azure.mgmt.servicefabric.models.ApplicationHealthPolicy]
"""
_validation = {
'max_percent_unhealthy_nodes': {'maximum': 100, 'minimum': 0},
'max_percent_unhealthy_applications': {'maximum': 100, 'minimum': 0},
}
_attribute_map = {
'max_percent_unhealthy_nodes': {'key': 'maxPercentUnhealthyNodes', 'type': 'int'},
'max_percent_unhealthy_applications': {'key': 'maxPercentUnhealthyApplications', 'type': 'int'},
'application_health_policies': {'key': 'applicationHealthPolicies', 'type': '{ApplicationHealthPolicy}'},
}
def __init__(
self,
**kwargs
):
super(ClusterHealthPolicy, self).__init__(**kwargs)
self.max_percent_unhealthy_nodes = kwargs.get('max_percent_unhealthy_nodes', 0)
self.max_percent_unhealthy_applications = kwargs.get('max_percent_unhealthy_applications', 0)
self.application_health_policies = kwargs.get('application_health_policies', None)
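# The docstring above says the unhealthy-node computation "rounds up to
# tolerate one failure on small numbers of nodes". A hedged reading of that
# rule: the allowed unhealthy count is the percentage of the total, rounded
# up, so a small cluster can tolerate one failure even when the raw
# percentage alone would allow none. This is an illustrative sketch, not the
# exact Service Fabric health-evaluation algorithm.

```python
import math


def max_tolerated_unhealthy(total, max_percent_unhealthy):
    """Allowed unhealthy entities before the cluster is considered in error."""
    return math.ceil(total * max_percent_unhealthy / 100)
```

# E.g. with 3 nodes and maxPercentUnhealthyNodes = 10, one unhealthy node is
# tolerated even though 10% of 3 is less than one whole node.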
class ClusterListResult(msrest.serialization.Model):
"""Cluster list results.
:param value:
:type value: list[~azure.mgmt.servicefabric.models.Cluster]
:param next_link: The URL to use for getting the next set of results.
:type next_link: str
"""
_attribute_map = {
'value': {'key': 'value', 'type': '[Cluster]'},
'next_link': {'key': 'nextLink', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
super(ClusterListResult, self).__init__(**kwargs)
self.value = kwargs.get('value', None)
self.next_link = kwargs.get('next_link', None)
class ClusterUpdateParameters(msrest.serialization.Model):
"""Cluster update request.
:param tags: A set of tags. Cluster update parameters.
:type tags: dict[str, str]
:param add_on_features: The list of add-on features to enable in the cluster.
:type add_on_features: list[str or ~azure.mgmt.servicefabric.models.AddOnFeatures]
:param certificate: The certificate to use for securing the cluster. The certificate provided
will be used for node to node security within the cluster, SSL certificate for cluster
management endpoint and default admin client.
:type certificate: ~azure.mgmt.servicefabric.models.CertificateDescription
:param certificate_common_names: Describes a list of server certificates referenced by common
name that are used to secure the cluster.
:type certificate_common_names: ~azure.mgmt.servicefabric.models.ServerCertificateCommonNames
:param client_certificate_common_names: The list of client certificates referenced by common
name that are allowed to manage the cluster. This will overwrite the existing list.
:type client_certificate_common_names:
list[~azure.mgmt.servicefabric.models.ClientCertificateCommonName]
:param client_certificate_thumbprints: The list of client certificates referenced by thumbprint
that are allowed to manage the cluster. This will overwrite the existing list.
:type client_certificate_thumbprints:
list[~azure.mgmt.servicefabric.models.ClientCertificateThumbprint]
:param cluster_code_version: The Service Fabric runtime version of the cluster. This property
can only be set by the user when **upgradeMode** is set to 'Manual'. To get the list of available
Service Fabric versions for new clusters use `ClusterVersion API <./ClusterVersion.md>`_. To
get the list of available versions for existing clusters use **availableClusterVersions**.
:type cluster_code_version: str
:param event_store_service_enabled: Indicates if the event store service is enabled.
:type event_store_service_enabled: bool
:param fabric_settings: The list of custom fabric settings to configure the cluster. This will
overwrite the existing list.
:type fabric_settings: list[~azure.mgmt.servicefabric.models.SettingsSectionDescription]
:param node_types: The list of node types in the cluster. This will overwrite the existing
list.
:type node_types: list[~azure.mgmt.servicefabric.models.NodeTypeDescription]
:param reliability_level: The reliability level sets the replica set size of system services.
Learn about `ReliabilityLevel
<https://docs.microsoft.com/azure/service-fabric/service-fabric-cluster-capacity>`_.
* None - Run the System services with a target replica set count of 1. This should only be
used for test clusters.
* Bronze - Run the System services with a target replica set count of 3. This should only be
used for test clusters.
* Silver - Run the System services with a target replica set count of 5.
* Gold - Run the System services with a target replica set count of 7.
* Platinum - Run the System services with a target replica set count of 9. Possible values
include: "None", "Bronze", "Silver", "Gold", "Platinum".
:type reliability_level: str or ~azure.mgmt.servicefabric.models.ReliabilityLevel
:param reverse_proxy_certificate: The server certificate used by reverse proxy.
:type reverse_proxy_certificate: ~azure.mgmt.servicefabric.models.CertificateDescription
:param upgrade_description: The policy to use when upgrading the cluster.
:type upgrade_description: ~azure.mgmt.servicefabric.models.ClusterUpgradePolicy
:param application_type_versions_cleanup_policy: The policy used to clean up unused versions.
:type application_type_versions_cleanup_policy:
~azure.mgmt.servicefabric.models.ApplicationTypeVersionsCleanupPolicy
:param upgrade_mode: The upgrade mode of the cluster when a new Service Fabric runtime
version is available. Possible values include: "Automatic", "Manual". Default value: "Automatic".
:type upgrade_mode: str or ~azure.mgmt.servicefabric.models.UpgradeMode
:param sf_zonal_upgrade_mode: This property controls the logical grouping of VMs in upgrade
domains (UDs). This property can't be modified if a node type with multiple Availability Zones
is already present in the cluster. Possible values include: "Parallel", "Hierarchical".
:type sf_zonal_upgrade_mode: str or ~azure.mgmt.servicefabric.models.SfZonalUpgradeMode
:param vmss_zonal_upgrade_mode: This property defines the upgrade mode for the virtual machine
scale set; it is mandatory if a node type with multiple Availability Zones is added. Possible
values include: "Parallel", "Hierarchical".
:type vmss_zonal_upgrade_mode: str or ~azure.mgmt.servicefabric.models.VmssZonalUpgradeMode
:param infrastructure_service_manager: Indicates if infrastructure service manager is enabled.
:type infrastructure_service_manager: bool
:param upgrade_wave: Indicates when new cluster runtime version upgrades will be applied after
they are released. The default is Wave0. This only applies when **upgradeMode** is set to
'Automatic'. Possible values include: "Wave0", "Wave1", "Wave2".
:type upgrade_wave: str or ~azure.mgmt.servicefabric.models.ClusterUpgradeCadence
:param upgrade_pause_start_timestamp_utc: The start timestamp to pause runtime version upgrades
on the cluster (UTC).
:type upgrade_pause_start_timestamp_utc: ~datetime.datetime
:param upgrade_pause_end_timestamp_utc: The end timestamp of the pause of runtime version
upgrades on the cluster (UTC).
:type upgrade_pause_end_timestamp_utc: ~datetime.datetime
:param wave_upgrade_paused: Boolean to pause automatic runtime version upgrades to the cluster.
:type wave_upgrade_paused: bool
:param notifications: Indicates a list of notification channels for cluster events.
:type notifications: list[~azure.mgmt.servicefabric.models.Notification]
"""
_attribute_map = {
'tags': {'key': 'tags', 'type': '{str}'},
'add_on_features': {'key': 'properties.addOnFeatures', 'type': '[str]'},
'certificate': {'key': 'properties.certificate', 'type': 'CertificateDescription'},
'certificate_common_names': {'key': 'properties.certificateCommonNames', 'type': 'ServerCertificateCommonNames'},
'client_certificate_common_names': {'key': 'properties.clientCertificateCommonNames', 'type': '[ClientCertificateCommonName]'},
'client_certificate_thumbprints': {'key': 'properties.clientCertificateThumbprints', 'type': '[ClientCertificateThumbprint]'},
'cluster_code_version': {'key': 'properties.clusterCodeVersion', 'type': 'str'},
'event_store_service_enabled': {'key': 'properties.eventStoreServiceEnabled', 'type': 'bool'},
'fabric_settings': {'key': 'properties.fabricSettings', 'type': '[SettingsSectionDescription]'},
'node_types': {'key': 'properties.nodeTypes', 'type': '[NodeTypeDescription]'},
'reliability_level': {'key': 'properties.reliabilityLevel', 'type': 'str'},
'reverse_proxy_certificate': {'key': 'properties.reverseProxyCertificate', 'type': 'CertificateDescription'},
'upgrade_description': {'key': 'properties.upgradeDescription', 'type': 'ClusterUpgradePolicy'},
'application_type_versions_cleanup_policy': {'key': 'properties.applicationTypeVersionsCleanupPolicy', 'type': 'ApplicationTypeVersionsCleanupPolicy'},
'upgrade_mode': {'key': 'properties.upgradeMode', 'type': 'str'},
'sf_zonal_upgrade_mode': {'key': 'properties.sfZonalUpgradeMode', 'type': 'str'},
'vmss_zonal_upgrade_mode': {'key': 'properties.vmssZonalUpgradeMode', 'type': 'str'},
'infrastructure_service_manager': {'key': 'properties.infrastructureServiceManager', 'type': 'bool'},
'upgrade_wave': {'key': 'properties.upgradeWave', 'type': 'str'},
'upgrade_pause_start_timestamp_utc': {'key': 'properties.upgradePauseStartTimestampUtc', 'type': 'iso-8601'},
'upgrade_pause_end_timestamp_utc': {'key': 'properties.upgradePauseEndTimestampUtc', 'type': 'iso-8601'},
'wave_upgrade_paused': {'key': 'properties.waveUpgradePaused', 'type': 'bool'},
'notifications': {'key': 'properties.notifications', 'type': '[Notification]'},
}
def __init__(
self,
**kwargs
):
super(ClusterUpdateParameters, self).__init__(**kwargs)
self.tags = kwargs.get('tags', None)
self.add_on_features = kwargs.get('add_on_features', None)
self.certificate = kwargs.get('certificate', None)
self.certificate_common_names = kwargs.get('certificate_common_names', None)
self.client_certificate_common_names = kwargs.get('client_certificate_common_names', None)
self.client_certificate_thumbprints = kwargs.get('client_certificate_thumbprints', None)
self.cluster_code_version = kwargs.get('cluster_code_version', None)
self.event_store_service_enabled = kwargs.get('event_store_service_enabled', None)
self.fabric_settings = kwargs.get('fabric_settings', None)
self.node_types = kwargs.get('node_types', None)
self.reliability_level = kwargs.get('reliability_level', None)
self.reverse_proxy_certificate = kwargs.get('reverse_proxy_certificate', None)
self.upgrade_description = kwargs.get('upgrade_description', None)
self.application_type_versions_cleanup_policy = kwargs.get('application_type_versions_cleanup_policy', None)
self.upgrade_mode = kwargs.get('upgrade_mode', "Automatic")
self.sf_zonal_upgrade_mode = kwargs.get('sf_zonal_upgrade_mode', None)
self.vmss_zonal_upgrade_mode = kwargs.get('vmss_zonal_upgrade_mode', None)
self.infrastructure_service_manager = kwargs.get('infrastructure_service_manager', None)
self.upgrade_wave = kwargs.get('upgrade_wave', None)
self.upgrade_pause_start_timestamp_utc = kwargs.get('upgrade_pause_start_timestamp_utc', None)
self.upgrade_pause_end_timestamp_utc = kwargs.get('upgrade_pause_end_timestamp_utc', None)
self.wave_upgrade_paused = kwargs.get('wave_upgrade_paused', None)
self.notifications = kwargs.get('notifications', None)
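# Illustrative usage sketch (not part of the generated model code): pausing
# automatic runtime upgrades for a one-week maintenance window. The model
# constructors accept the snake_case parameter names via **kwargs; the
# timestamps below are hypothetical.
#
#   import datetime
#
#   update = ClusterUpdateParameters(
#       upgrade_wave="Wave2",
#       wave_upgrade_paused=True,
#       upgrade_pause_start_timestamp_utc=datetime.datetime(2021, 6, 1),
#       upgrade_pause_end_timestamp_utc=datetime.datetime(2021, 6, 8),
#   )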
class ClusterUpgradeDeltaHealthPolicy(msrest.serialization.Model):
"""Describes the delta health policies for the cluster upgrade.
All required parameters must be populated in order to send to Azure.
:param max_percent_delta_unhealthy_nodes: Required. The maximum allowed percentage of node
health degradation during cluster upgrades.
The delta is measured between the state of the nodes at the beginning of upgrade and the state
of the nodes at the time of the health evaluation.
The check is performed after every upgrade domain upgrade completion to make sure the global
state of the cluster is within tolerated limits.
:type max_percent_delta_unhealthy_nodes: int
:param max_percent_upgrade_domain_delta_unhealthy_nodes: Required. The maximum allowed
percentage of upgrade domain node health degradation during cluster upgrades.
The delta is measured between the state of the upgrade domain nodes at the beginning of
upgrade and the state of the upgrade domain nodes at the time of the health evaluation.
The check is performed after every upgrade domain upgrade completion for all completed upgrade
domains to make sure the state of the upgrade domains is within tolerated limits.
:type max_percent_upgrade_domain_delta_unhealthy_nodes: int
:param max_percent_delta_unhealthy_applications: Required. The maximum allowed percentage of
application health degradation during cluster upgrades.
The delta is measured between the state of the applications at the beginning of upgrade and
the state of the applications at the time of the health evaluation.
The check is performed after every upgrade domain upgrade completion to make sure the global
state of the cluster is within tolerated limits. System services are not included in this.
:type max_percent_delta_unhealthy_applications: int
:param application_delta_health_policies: Defines the application delta health policy map used
to evaluate the health of an application or one of its child entities when upgrading the
cluster.
:type application_delta_health_policies: dict[str,
~azure.mgmt.servicefabric.models.ApplicationDeltaHealthPolicy]
"""
_validation = {
'max_percent_delta_unhealthy_nodes': {'required': True, 'maximum': 100, 'minimum': 0},
'max_percent_upgrade_domain_delta_unhealthy_nodes': {'required': True, 'maximum': 100, 'minimum': 0},
'max_percent_delta_unhealthy_applications': {'required': True, 'maximum': 100, 'minimum': 0},
}
_attribute_map = {
'max_percent_delta_unhealthy_nodes': {'key': 'maxPercentDeltaUnhealthyNodes', 'type': 'int'},
'max_percent_upgrade_domain_delta_unhealthy_nodes': {'key': 'maxPercentUpgradeDomainDeltaUnhealthyNodes', 'type': 'int'},
'max_percent_delta_unhealthy_applications': {'key': 'maxPercentDeltaUnhealthyApplications', 'type': 'int'},
'application_delta_health_policies': {'key': 'applicationDeltaHealthPolicies', 'type': '{ApplicationDeltaHealthPolicy}'},
}
def __init__(
self,
**kwargs
):
super(ClusterUpgradeDeltaHealthPolicy, self).__init__(**kwargs)
self.max_percent_delta_unhealthy_nodes = kwargs['max_percent_delta_unhealthy_nodes']
self.max_percent_upgrade_domain_delta_unhealthy_nodes = kwargs['max_percent_upgrade_domain_delta_unhealthy_nodes']
self.max_percent_delta_unhealthy_applications = kwargs['max_percent_delta_unhealthy_applications']
self.application_delta_health_policies = kwargs.get('application_delta_health_policies', None)
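# Illustrative usage sketch: a delta health policy tolerating up to 10% node
# and application health degradation during an upgrade. All three percentages
# are required, and _validation constrains each to the 0-100 range.
#
#   delta_policy = ClusterUpgradeDeltaHealthPolicy(
#       max_percent_delta_unhealthy_nodes=10,
#       max_percent_upgrade_domain_delta_unhealthy_nodes=10,
#       max_percent_delta_unhealthy_applications=10,
#   )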
class ClusterUpgradePolicy(msrest.serialization.Model):
"""Describes the policy used when upgrading the cluster.
All required parameters must be populated in order to send to Azure.
:param force_restart: If true, then processes are forcefully restarted during upgrade even when
the code version has not changed (the upgrade only changes configuration or data).
:type force_restart: bool
:param upgrade_replica_set_check_timeout: Required. The maximum amount of time to block
processing of an upgrade domain and prevent loss of availability when there are unexpected
issues. When this timeout expires, processing of the upgrade domain will proceed regardless of
availability loss issues. The timeout is reset at the start of each upgrade domain. The timeout
can be in either hh:mm:ss or in d.hh:mm:ss.ms format.
:type upgrade_replica_set_check_timeout: str
:param health_check_wait_duration: Required. The length of time to wait after completing an
upgrade domain before performing health checks. The duration can be in either hh:mm:ss or in
d.hh:mm:ss.ms format.
:type health_check_wait_duration: str
:param health_check_stable_duration: Required. The amount of time that the application or
cluster must remain healthy before the upgrade proceeds to the next upgrade domain. The
duration can be in either hh:mm:ss or in d.hh:mm:ss.ms format.
:type health_check_stable_duration: str
:param health_check_retry_timeout: Required. The amount of time to retry health evaluation when
the application or cluster is unhealthy before the upgrade rolls back. The timeout can be in
either hh:mm:ss or in d.hh:mm:ss.ms format.
:type health_check_retry_timeout: str
:param upgrade_timeout: Required. The amount of time the overall upgrade has to complete before
the upgrade rolls back. The timeout can be in either hh:mm:ss or in d.hh:mm:ss.ms format.
:type upgrade_timeout: str
:param upgrade_domain_timeout: Required. The amount of time each upgrade domain has to complete
before the upgrade rolls back. The timeout can be in either hh:mm:ss or in d.hh:mm:ss.ms
format.
:type upgrade_domain_timeout: str
:param health_policy: Required. The cluster health policy used when upgrading the cluster.
:type health_policy: ~azure.mgmt.servicefabric.models.ClusterHealthPolicy
:param delta_health_policy: The cluster delta health policy used when upgrading the cluster.
:type delta_health_policy: ~azure.mgmt.servicefabric.models.ClusterUpgradeDeltaHealthPolicy
"""
_validation = {
'upgrade_replica_set_check_timeout': {'required': True},
'health_check_wait_duration': {'required': True},
'health_check_stable_duration': {'required': True},
'health_check_retry_timeout': {'required': True},
'upgrade_timeout': {'required': True},
'upgrade_domain_timeout': {'required': True},
'health_policy': {'required': True},
}
_attribute_map = {
'force_restart': {'key': 'forceRestart', 'type': 'bool'},
'upgrade_replica_set_check_timeout': {'key': 'upgradeReplicaSetCheckTimeout', 'type': 'str'},
'health_check_wait_duration': {'key': 'healthCheckWaitDuration', 'type': 'str'},
'health_check_stable_duration': {'key': 'healthCheckStableDuration', 'type': 'str'},
'health_check_retry_timeout': {'key': 'healthCheckRetryTimeout', 'type': 'str'},
'upgrade_timeout': {'key': 'upgradeTimeout', 'type': 'str'},
'upgrade_domain_timeout': {'key': 'upgradeDomainTimeout', 'type': 'str'},
'health_policy': {'key': 'healthPolicy', 'type': 'ClusterHealthPolicy'},
'delta_health_policy': {'key': 'deltaHealthPolicy', 'type': 'ClusterUpgradeDeltaHealthPolicy'},
}
def __init__(
self,
**kwargs
):
super(ClusterUpgradePolicy, self).__init__(**kwargs)
self.force_restart = kwargs.get('force_restart', None)
self.upgrade_replica_set_check_timeout = kwargs['upgrade_replica_set_check_timeout']
self.health_check_wait_duration = kwargs['health_check_wait_duration']
self.health_check_stable_duration = kwargs['health_check_stable_duration']
self.health_check_retry_timeout = kwargs['health_check_retry_timeout']
self.upgrade_timeout = kwargs['upgrade_timeout']
self.upgrade_domain_timeout = kwargs['upgrade_domain_timeout']
self.health_policy = kwargs['health_policy']
self.delta_health_policy = kwargs.get('delta_health_policy', None)
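# Illustrative usage sketch: an upgrade policy whose durations use the
# hh:mm:ss / d.hh:mm:ss.ms formats described in the docstring above.
# ClusterHealthPolicy is defined elsewhere in this module; all values here
# are examples only.
#
#   policy = ClusterUpgradePolicy(
#       upgrade_replica_set_check_timeout="10675199.02:48:05.4775807",
#       health_check_wait_duration="00:05:00",
#       health_check_stable_duration="00:05:00",
#       health_check_retry_timeout="00:45:00",
#       upgrade_timeout="12:00:00",
#       upgrade_domain_timeout="02:00:00",
#       health_policy=ClusterHealthPolicy(),
#   )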
class ClusterVersionDetails(msrest.serialization.Model):
"""The detail of the Service Fabric runtime version result.
:param code_version: The Service Fabric runtime version of the cluster.
:type code_version: str
:param support_expiry_utc: The date of expiry of support of the version.
:type support_expiry_utc: str
:param environment: Indicates if this version is for Windows or Linux operating system.
Possible values include: "Windows", "Linux".
:type environment: str or ~azure.mgmt.servicefabric.models.ClusterEnvironment
"""
_attribute_map = {
'code_version': {'key': 'codeVersion', 'type': 'str'},
'support_expiry_utc': {'key': 'supportExpiryUtc', 'type': 'str'},
'environment': {'key': 'environment', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
super(ClusterVersionDetails, self).__init__(**kwargs)
self.code_version = kwargs.get('code_version', None)
self.support_expiry_utc = kwargs.get('support_expiry_utc', None)
self.environment = kwargs.get('environment', None)
class DiagnosticsStorageAccountConfig(msrest.serialization.Model):
"""The storage account information for storing Service Fabric diagnostic logs.
All required parameters must be populated in order to send to Azure.
:param storage_account_name: Required. The Azure storage account name.
:type storage_account_name: str
:param protected_account_key_name: Required. The protected diagnostics storage key name.
:type protected_account_key_name: str
:param protected_account_key_name2: The secondary protected diagnostics storage key name. If
one of the storage account keys is rotated, the cluster will fall back to using the other.
:type protected_account_key_name2: str
:param blob_endpoint: Required. The blob endpoint of the Azure storage account.
:type blob_endpoint: str
:param queue_endpoint: Required. The queue endpoint of the Azure storage account.
:type queue_endpoint: str
:param table_endpoint: Required. The table endpoint of the Azure storage account.
:type table_endpoint: str
"""
_validation = {
'storage_account_name': {'required': True},
'protected_account_key_name': {'required': True},
'blob_endpoint': {'required': True},
'queue_endpoint': {'required': True},
'table_endpoint': {'required': True},
}
_attribute_map = {
'storage_account_name': {'key': 'storageAccountName', 'type': 'str'},
'protected_account_key_name': {'key': 'protectedAccountKeyName', 'type': 'str'},
'protected_account_key_name2': {'key': 'protectedAccountKeyName2', 'type': 'str'},
'blob_endpoint': {'key': 'blobEndpoint', 'type': 'str'},
'queue_endpoint': {'key': 'queueEndpoint', 'type': 'str'},
'table_endpoint': {'key': 'tableEndpoint', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
super(DiagnosticsStorageAccountConfig, self).__init__(**kwargs)
self.storage_account_name = kwargs['storage_account_name']
self.protected_account_key_name = kwargs['protected_account_key_name']
self.protected_account_key_name2 = kwargs.get('protected_account_key_name2', None)
self.blob_endpoint = kwargs['blob_endpoint']
self.queue_endpoint = kwargs['queue_endpoint']
self.table_endpoint = kwargs['table_endpoint']
class EndpointRangeDescription(msrest.serialization.Model):
"""Port range details.
All required parameters must be populated in order to send to Azure.
:param start_port: Required. Starting port of a range of ports.
:type start_port: int
:param end_port: Required. End port of a range of ports.
:type end_port: int
"""
_validation = {
'start_port': {'required': True},
'end_port': {'required': True},
}
_attribute_map = {
'start_port': {'key': 'startPort', 'type': 'int'},
'end_port': {'key': 'endPort', 'type': 'int'},
}
def __init__(
self,
**kwargs
):
super(EndpointRangeDescription, self).__init__(**kwargs)
self.start_port = kwargs['start_port']
self.end_port = kwargs['end_port']
class ErrorModel(msrest.serialization.Model):
"""The structure of the error.
:param error: The error details.
:type error: ~azure.mgmt.servicefabric.models.ErrorModelError
"""
_attribute_map = {
'error': {'key': 'error', 'type': 'ErrorModelError'},
}
def __init__(
self,
**kwargs
):
super(ErrorModel, self).__init__(**kwargs)
self.error = kwargs.get('error', None)
class ErrorModelError(msrest.serialization.Model):
"""The error details.
:param code: The error code.
:type code: str
:param message: The error message.
:type message: str
"""
_attribute_map = {
'code': {'key': 'code', 'type': 'str'},
'message': {'key': 'message', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
super(ErrorModelError, self).__init__(**kwargs)
self.code = kwargs.get('code', None)
self.message = kwargs.get('message', None)
class ManagedIdentity(msrest.serialization.Model):
"""Describes the managed identities for an Azure resource.
Variables are only populated by the server, and will be ignored when sending a request.
:ivar principal_id: The principal id of the managed identity. This property will only be
provided for a system assigned identity.
:vartype principal_id: str
:ivar tenant_id: The tenant id of the managed identity. This property will only be provided for
a system assigned identity.
:vartype tenant_id: str
:param type: The type of managed identity for the resource. Possible values include:
"SystemAssigned", "UserAssigned", "SystemAssigned, UserAssigned", "None".
:type type: str or ~azure.mgmt.servicefabric.models.ManagedIdentityType
:param user_assigned_identities: The list of user identities associated with the resource. The
user identity dictionary key references will be ARM resource ids in the form:
'/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/{identityName}'.
:type user_assigned_identities: dict[str,
~azure.mgmt.servicefabric.models.UserAssignedIdentity]
"""
_validation = {
'principal_id': {'readonly': True},
'tenant_id': {'readonly': True},
}
_attribute_map = {
'principal_id': {'key': 'principalId', 'type': 'str'},
'tenant_id': {'key': 'tenantId', 'type': 'str'},
'type': {'key': 'type', 'type': 'str'},
'user_assigned_identities': {'key': 'userAssignedIdentities', 'type': '{UserAssignedIdentity}'},
}
def __init__(
self,
**kwargs
):
super(ManagedIdentity, self).__init__(**kwargs)
self.principal_id = None
self.tenant_id = None
self.type = kwargs.get('type', None)
self.user_assigned_identities = kwargs.get('user_assigned_identities', None)
class PartitionSchemeDescription(msrest.serialization.Model):
"""Describes how the service is partitioned.
You probably want to use the sub-classes and not this class directly. Known
sub-classes are: NamedPartitionSchemeDescription, SingletonPartitionSchemeDescription, UniformInt64RangePartitionSchemeDescription.
All required parameters must be populated in order to send to Azure.
:param partition_scheme: Required. Specifies how the service is partitioned. Constant filled by
server. Possible values include: "Invalid", "Singleton", "UniformInt64Range", "Named".
:type partition_scheme: str or ~azure.mgmt.servicefabric.models.PartitionScheme
"""
_validation = {
'partition_scheme': {'required': True},
}
_attribute_map = {
'partition_scheme': {'key': 'partitionScheme', 'type': 'str'},
}
_subtype_map = {
'partition_scheme': {'Named': 'NamedPartitionSchemeDescription', 'Singleton': 'SingletonPartitionSchemeDescription', 'UniformInt64Range': 'UniformInt64RangePartitionSchemeDescription'}
}
def __init__(
self,
**kwargs
):
super(PartitionSchemeDescription, self).__init__(**kwargs)
self.partition_scheme = None # type: Optional[str]
class NamedPartitionSchemeDescription(PartitionSchemeDescription):
"""Describes the named partition scheme of the service.
All required parameters must be populated in order to send to Azure.
:param partition_scheme: Required. Specifies how the service is partitioned. Constant filled by
server. Possible values include: "Invalid", "Singleton", "UniformInt64Range", "Named".
:type partition_scheme: str or ~azure.mgmt.servicefabric.models.PartitionScheme
:param count: Required. The number of partitions.
:type count: int
:param names: Required. Array of size specified by the ‘count’ parameter, for the names of the
partitions.
:type names: list[str]
"""
_validation = {
'partition_scheme': {'required': True},
'count': {'required': True},
'names': {'required': True},
}
_attribute_map = {
'partition_scheme': {'key': 'partitionScheme', 'type': 'str'},
'count': {'key': 'count', 'type': 'int'},
'names': {'key': 'names', 'type': '[str]'},
}
def __init__(
self,
**kwargs
):
super(NamedPartitionSchemeDescription, self).__init__(**kwargs)
self.partition_scheme = 'Named' # type: str
self.count = kwargs['count']
self.names = kwargs['names']
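# Illustrative sketch: the _subtype_map on PartitionSchemeDescription drives
# polymorphic (de)serialization; msrest reads the 'partitionScheme'
# discriminator and instantiates the matching subclass. Constructing a named
# scheme directly, with hypothetical partition names:
#
#   scheme = NamedPartitionSchemeDescription(
#       count=3,
#       names=["EMEA", "APAC", "AMER"],
#   )
#   # scheme.partition_scheme is fixed to 'Named' by the subclass __init__.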
class NodeTypeDescription(msrest.serialization.Model):
"""Describes a node type in the cluster, each node type represents sub set of nodes in the cluster.
All required parameters must be populated in order to send to Azure.
:param name: Required. The name of the node type.
:type name: str
:param placement_properties: The placement tags applied to nodes in the node type, which can be
used to indicate where certain services (workload) should run.
:type placement_properties: dict[str, str]
:param capacities: The capacity tags applied to the nodes in the node type, the cluster
resource manager uses these tags to understand how much resource a node has.
:type capacities: dict[str, str]
:param client_connection_endpoint_port: Required. The TCP cluster management endpoint port.
:type client_connection_endpoint_port: int
:param http_gateway_endpoint_port: Required. The HTTP cluster management endpoint port.
:type http_gateway_endpoint_port: int
:param durability_level: The durability level of the node type. Learn about `DurabilityLevel
<https://docs.microsoft.com/azure/service-fabric/service-fabric-cluster-capacity>`_.
* Bronze - No privileges. This is the default.
* Silver - The infrastructure jobs can be paused for a duration of 10 minutes per UD.
* Gold - The infrastructure jobs can be paused for a duration of 2 hours per UD. Gold
durability can be enabled only on full-node VM SKUs like D15_V2 and G5. Possible values
include: "Bronze", "Silver", "Gold".
:type durability_level: str or ~azure.mgmt.servicefabric.models.DurabilityLevel
:param application_ports: The range of ports from which the cluster assigns ports to Service
Fabric applications.
:type application_ports: ~azure.mgmt.servicefabric.models.EndpointRangeDescription
:param ephemeral_ports: The range of ephemeral ports that nodes in this node type should be
configured with.
:type ephemeral_ports: ~azure.mgmt.servicefabric.models.EndpointRangeDescription
:param is_primary: Required. Indicates whether this is the primary node type, on which system
services will run. Only one node type should be marked as primary. The primary node type cannot
be deleted or changed for existing clusters.
:type is_primary: bool
:param vm_instance_count: Required. VMInstanceCount should be 1 to n, where n indicates the
number of VM instances corresponding to this nodeType. VMInstanceCount = 0 is allowed only when
the nodeType is a secondary nodeType and Durability >= Bronze with
InfrastructureServiceManager = true. If VMInstanceCount = 0, the VMs for this nodeType are not
used for the initial cluster size computation.
:type vm_instance_count: int
:param reverse_proxy_endpoint_port: The endpoint used by reverse proxy.
:type reverse_proxy_endpoint_port: int
:param is_stateless: Indicates if the node type can only host Stateless workloads.
:type is_stateless: bool
:param multiple_availability_zones: Indicates if the node type is enabled to support multiple
zones.
:type multiple_availability_zones: bool
"""
_validation = {
'name': {'required': True},
'client_connection_endpoint_port': {'required': True},
'http_gateway_endpoint_port': {'required': True},
'is_primary': {'required': True},
'vm_instance_count': {'required': True, 'maximum': 2147483647, 'minimum': 0},
}
_attribute_map = {
'name': {'key': 'name', 'type': 'str'},
'placement_properties': {'key': 'placementProperties', 'type': '{str}'},
'capacities': {'key': 'capacities', 'type': '{str}'},
'client_connection_endpoint_port': {'key': 'clientConnectionEndpointPort', 'type': 'int'},
'http_gateway_endpoint_port': {'key': 'httpGatewayEndpointPort', 'type': 'int'},
'durability_level': {'key': 'durabilityLevel', 'type': 'str'},
'application_ports': {'key': 'applicationPorts', 'type': 'EndpointRangeDescription'},
'ephemeral_ports': {'key': 'ephemeralPorts', 'type': 'EndpointRangeDescription'},
'is_primary': {'key': 'isPrimary', 'type': 'bool'},
'vm_instance_count': {'key': 'vmInstanceCount', 'type': 'int'},
'reverse_proxy_endpoint_port': {'key': 'reverseProxyEndpointPort', 'type': 'int'},
'is_stateless': {'key': 'isStateless', 'type': 'bool'},
'multiple_availability_zones': {'key': 'multipleAvailabilityZones', 'type': 'bool'},
}
def __init__(
self,
**kwargs
):
super(NodeTypeDescription, self).__init__(**kwargs)
self.name = kwargs['name']
self.placement_properties = kwargs.get('placement_properties', None)
self.capacities = kwargs.get('capacities', None)
self.client_connection_endpoint_port = kwargs['client_connection_endpoint_port']
self.http_gateway_endpoint_port = kwargs['http_gateway_endpoint_port']
self.durability_level = kwargs.get('durability_level', None)
self.application_ports = kwargs.get('application_ports', None)
self.ephemeral_ports = kwargs.get('ephemeral_ports', None)
self.is_primary = kwargs['is_primary']
self.vm_instance_count = kwargs['vm_instance_count']
self.reverse_proxy_endpoint_port = kwargs.get('reverse_proxy_endpoint_port', None)
self.is_stateless = kwargs.get('is_stateless', None)
self.multiple_availability_zones = kwargs.get('multiple_availability_zones', None)
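# Illustrative usage sketch: a primary node type using the standard Service
# Fabric management ports (19000/19080) and hypothetical application and
# ephemeral port ranges. EndpointRangeDescription is defined earlier in this
# module.
#
#   primary = NodeTypeDescription(
#       name="nt1",
#       client_connection_endpoint_port=19000,
#       http_gateway_endpoint_port=19080,
#       is_primary=True,
#       vm_instance_count=5,
#       durability_level="Silver",
#       application_ports=EndpointRangeDescription(start_port=20000, end_port=30000),
#       ephemeral_ports=EndpointRangeDescription(start_port=49152, end_port=65534),
#   )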
class Notification(msrest.serialization.Model):
"""Describes the notification channel for cluster events.
All required parameters must be populated in order to send to Azure.
:param is_enabled: Required. Indicates if the notification is enabled.
:type is_enabled: bool
:param notification_category: Required. The category of notification. Possible values include:
"WaveProgress".
:type notification_category: str or ~azure.mgmt.servicefabric.models.NotificationCategory
:param notification_level: Required. The level of notification. Possible values include:
"Critical", "All".
:type notification_level: str or ~azure.mgmt.servicefabric.models.NotificationLevel
:param notification_targets: Required. List of targets that subscribe to the notification.
:type notification_targets: list[~azure.mgmt.servicefabric.models.NotificationTarget]
"""
_validation = {
'is_enabled': {'required': True},
'notification_category': {'required': True},
'notification_level': {'required': True},
'notification_targets': {'required': True},
}
_attribute_map = {
'is_enabled': {'key': 'isEnabled', 'type': 'bool'},
'notification_category': {'key': 'notificationCategory', 'type': 'str'},
'notification_level': {'key': 'notificationLevel', 'type': 'str'},
'notification_targets': {'key': 'notificationTargets', 'type': '[NotificationTarget]'},
}
def __init__(
self,
**kwargs
):
super(Notification, self).__init__(**kwargs)
self.is_enabled = kwargs['is_enabled']
self.notification_category = kwargs['notification_category']
self.notification_level = kwargs['notification_level']
self.notification_targets = kwargs['notification_targets']
class NotificationTarget(msrest.serialization.Model):
"""Describes the notification target properties.
All required parameters must be populated in order to send to Azure.
:param notification_channel: Required. The notification channel indicates the type of receivers
subscribed to the notification, either user or subscription. Possible values include:
"EmailUser", "EmailSubscription".
:type notification_channel: str or ~azure.mgmt.servicefabric.models.NotificationChannel
:param receivers: Required. List of receivers that subscribe to the notification.
:type receivers: list[str]
"""
_validation = {
'notification_channel': {'required': True},
'receivers': {'required': True},
}
_attribute_map = {
'notification_channel': {'key': 'notificationChannel', 'type': 'str'},
'receivers': {'key': 'receivers', 'type': '[str]'},
}
def __init__(
self,
**kwargs
):
super(NotificationTarget, self).__init__(**kwargs)
self.notification_channel = kwargs['notification_channel']
self.receivers = kwargs['receivers']
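# Illustrative usage sketch: an email notification for wave progress events,
# delivered to a hypothetical address.
#
#   notification = Notification(
#       is_enabled=True,
#       notification_category="WaveProgress",
#       notification_level="Critical",
#       notification_targets=[
#           NotificationTarget(
#               notification_channel="EmailUser",
#               receivers=["ops@contoso.com"],
#           ),
#       ],
#   )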
class OperationListResult(msrest.serialization.Model):
"""Describes the result of the request to list Service Fabric resource provider operations.
Variables are only populated by the server, and will be ignored when sending a request.
:param value: List of operations supported by the Service Fabric resource provider.
:type value: list[~azure.mgmt.servicefabric.models.OperationResult]
:ivar next_link: URL to get the next set of operation list results if there are any.
:vartype next_link: str
"""
_validation = {
'next_link': {'readonly': True},
}
_attribute_map = {
'value': {'key': 'value', 'type': '[OperationResult]'},
'next_link': {'key': 'nextLink', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
super(OperationListResult, self).__init__(**kwargs)
self.value = kwargs.get('value', None)
self.next_link = None
class OperationResult(msrest.serialization.Model):
"""Available operation list result.
:param name: The name of the operation.
:type name: str
:param is_data_action: Indicates whether the operation is a data action.
:type is_data_action: bool
:param display: The object that represents the operation.
:type display: ~azure.mgmt.servicefabric.models.AvailableOperationDisplay
:param origin: Origin result.
:type origin: str
:param next_link: The URL to use for getting the next set of results.
:type next_link: str
"""
_attribute_map = {
'name': {'key': 'name', 'type': 'str'},
'is_data_action': {'key': 'isDataAction', 'type': 'bool'},
'display': {'key': 'display', 'type': 'AvailableOperationDisplay'},
'origin': {'key': 'origin', 'type': 'str'},
'next_link': {'key': 'nextLink', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
super(OperationResult, self).__init__(**kwargs)
self.name = kwargs.get('name', None)
self.is_data_action = kwargs.get('is_data_action', None)
self.display = kwargs.get('display', None)
self.origin = kwargs.get('origin', None)
self.next_link = kwargs.get('next_link', None)
class ServerCertificateCommonName(msrest.serialization.Model):
"""Describes the server certificate details using common name.
All required parameters must be populated in order to send to Azure.
:param certificate_common_name: Required. The common name of the server certificate.
:type certificate_common_name: str
:param certificate_issuer_thumbprint: Required. The issuer thumbprint of the server
certificate.
:type certificate_issuer_thumbprint: str
"""
_validation = {
'certificate_common_name': {'required': True},
'certificate_issuer_thumbprint': {'required': True},
}
_attribute_map = {
'certificate_common_name': {'key': 'certificateCommonName', 'type': 'str'},
'certificate_issuer_thumbprint': {'key': 'certificateIssuerThumbprint', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
super(ServerCertificateCommonName, self).__init__(**kwargs)
self.certificate_common_name = kwargs['certificate_common_name']
self.certificate_issuer_thumbprint = kwargs['certificate_issuer_thumbprint']
class ServerCertificateCommonNames(msrest.serialization.Model):
"""Describes a list of server certificates referenced by common name that are used to secure the cluster.
:param common_names: The list of server certificates referenced by common name that are used to
secure the cluster.
:type common_names: list[~azure.mgmt.servicefabric.models.ServerCertificateCommonName]
:param x509_store_name: The local certificate store location. Possible values include:
"AddressBook", "AuthRoot", "CertificateAuthority", "Disallowed", "My", "Root", "TrustedPeople",
"TrustedPublisher".
:type x509_store_name: str or ~azure.mgmt.servicefabric.models.StoreName
"""
_attribute_map = {
'common_names': {'key': 'commonNames', 'type': '[ServerCertificateCommonName]'},
'x509_store_name': {'key': 'x509StoreName', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
super(ServerCertificateCommonNames, self).__init__(**kwargs)
self.common_names = kwargs.get('common_names', None)
self.x509_store_name = kwargs.get('x509_store_name', None)
class ServiceCorrelationDescription(msrest.serialization.Model):
"""Creates a particular correlation between services.
All required parameters must be populated in order to send to Azure.
:param scheme: Required. The ServiceCorrelationScheme which describes the relationship between
this service and the service specified via ServiceName. Possible values include: "Invalid",
"Affinity", "AlignedAffinity", "NonAlignedAffinity".
:type scheme: str or ~azure.mgmt.servicefabric.models.ServiceCorrelationScheme
:param service_name: Required. The name of the service that the correlation relationship is
established with.
:type service_name: str
"""
_validation = {
'scheme': {'required': True},
'service_name': {'required': True},
}
_attribute_map = {
'scheme': {'key': 'scheme', 'type': 'str'},
'service_name': {'key': 'serviceName', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
super(ServiceCorrelationDescription, self).__init__(**kwargs)
self.scheme = kwargs['scheme']
self.service_name = kwargs['service_name']
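These generated models distinguish required parameters, looked up with `kwargs['...']` so a missing value raises `KeyError`, from optional ones fetched with `kwargs.get(..., None)`. A minimal self-contained sketch of that pattern, using a local stand-in rather than the real `msrest.serialization.Model`:

```python
# Sketch of the required-vs-optional kwargs pattern used by these
# generated models. "Model" is a stand-in for msrest.serialization.Model.
class Model:
    def __init__(self, **kwargs):
        pass

class CorrelationSketch(Model):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.scheme = kwargs['scheme']              # required: KeyError if missing
        self.service_name = kwargs['service_name']  # required
        self.weight = kwargs.get('weight', None)    # optional: defaults to None

ok = CorrelationSketch(scheme='Affinity', service_name='fabric:/app/svc')
assert ok.weight is None

try:
    CorrelationSketch(scheme='Affinity')  # service_name omitted
except KeyError as exc:
    print(f'missing required parameter: {exc}')
```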
class ServiceLoadMetricDescription(msrest.serialization.Model):
"""Specifies a metric to load balance a service during runtime.
All required parameters must be populated in order to send to Azure.
:param name: Required. The name of the metric. If the service chooses to report load during
runtime, the load metric name should match the name that is specified in Name exactly. Note
that metric names are case-sensitive.
:type name: str
:param weight: The service load metric relative weight, compared to other metrics configured
for this service, as a number. Possible values include: "Zero", "Low", "Medium", "High".
:type weight: str or ~azure.mgmt.servicefabric.models.ServiceLoadMetricWeight
:param primary_default_load: Used only for Stateful services. The default amount of load, as a
number, that this service creates for this metric when it is a Primary replica.
:type primary_default_load: int
:param secondary_default_load: Used only for Stateful services. The default amount of load, as
a number, that this service creates for this metric when it is a Secondary replica.
:type secondary_default_load: int
:param default_load: Used only for Stateless services. The default amount of load, as a number,
that this service creates for this metric.
:type default_load: int
"""
_validation = {
'name': {'required': True},
}
_attribute_map = {
'name': {'key': 'name', 'type': 'str'},
'weight': {'key': 'weight', 'type': 'str'},
'primary_default_load': {'key': 'primaryDefaultLoad', 'type': 'int'},
'secondary_default_load': {'key': 'secondaryDefaultLoad', 'type': 'int'},
'default_load': {'key': 'defaultLoad', 'type': 'int'},
}
def __init__(
self,
**kwargs
):
super(ServiceLoadMetricDescription, self).__init__(**kwargs)
self.name = kwargs['name']
self.weight = kwargs.get('weight', None)
self.primary_default_load = kwargs.get('primary_default_load', None)
self.secondary_default_load = kwargs.get('secondary_default_load', None)
self.default_load = kwargs.get('default_load', None)
class ServicePlacementPolicyDescription(msrest.serialization.Model):
"""Describes the policy to be used for placement of a Service Fabric service.
You probably want to use the sub-classes and not this class directly. No
sub-classes are currently known.
All required parameters must be populated in order to send to Azure.
:param type: Required. The type of placement policy for a service fabric service. Following are
the possible values. Constant filled by server. Possible values include: "Invalid",
"InvalidDomain", "RequiredDomain", "PreferredPrimaryDomain", "RequiredDomainDistribution",
"NonPartiallyPlaceService".
:type type: str or ~azure.mgmt.servicefabric.models.ServicePlacementPolicyType
"""
_validation = {
'type': {'required': True},
}
_attribute_map = {
'type': {'key': 'type', 'type': 'str'},
}
_subtype_map = {
'type': {}
}
def __init__(
self,
**kwargs
):
super(ServicePlacementPolicyDescription, self).__init__(**kwargs)
self.type = None # type: Optional[str]
class ServiceResource(ProxyResource):
"""The service resource.
Variables are only populated by the server, and will be ignored when sending a request.
:ivar id: Azure resource identifier.
:vartype id: str
:ivar name: Azure resource name.
:vartype name: str
:ivar type: Azure resource type.
:vartype type: str
:param location: It will be deprecated in the new API; resource location depends on the parent
resource.
:type location: str
:param tags: A set of tags. Azure resource tags.
:type tags: dict[str, str]
:ivar etag: Azure resource etag.
:vartype etag: str
:ivar system_data: Metadata pertaining to creation and last modification of the resource.
:vartype system_data: ~azure.mgmt.servicefabric.models.SystemData
:param placement_constraints: The placement constraints as a string. Placement constraints are
boolean expressions on node properties and allow for restricting a service to particular nodes
based on the service requirements. For example, to place a service on nodes where NodeColor is
blue, specify the following: "NodeColor == blue".
:type placement_constraints: str
:param correlation_scheme: A list that describes the correlation of the service with other
services.
:type correlation_scheme: list[~azure.mgmt.servicefabric.models.ServiceCorrelationDescription]
:param service_load_metrics: The service load metrics are given as an array of
ServiceLoadMetricDescription objects.
:type service_load_metrics: list[~azure.mgmt.servicefabric.models.ServiceLoadMetricDescription]
:param service_placement_policies: A list that describes the placement policies of the
service.
:type service_placement_policies:
list[~azure.mgmt.servicefabric.models.ServicePlacementPolicyDescription]
:param default_move_cost: Specifies the move cost for the service. Possible values include:
"Zero", "Low", "Medium", "High".
:type default_move_cost: str or ~azure.mgmt.servicefabric.models.MoveCost
:ivar provisioning_state: The current deployment or provisioning state, which only appears in
the response.
:vartype provisioning_state: str
:param service_kind: The kind of service (Stateless or Stateful). Constant filled by server.
Possible values include: "Invalid", "Stateless", "Stateful".
:type service_kind: str or ~azure.mgmt.servicefabric.models.ServiceKind
:param service_type_name: The name of the service type.
:type service_type_name: str
:param partition_description: Describes how the service is partitioned.
:type partition_description: ~azure.mgmt.servicefabric.models.PartitionSchemeDescription
:param service_package_activation_mode: The activation mode of the service package. Possible
values include: "SharedProcess", "ExclusiveProcess".
:type service_package_activation_mode: str or
~azure.mgmt.servicefabric.models.ArmServicePackageActivationMode
:param service_dns_name: The DNS name used for the service. If this is specified, then the service
can be accessed via its DNS name instead of service name.
:type service_dns_name: str
"""
_validation = {
'id': {'readonly': True},
'name': {'readonly': True},
'type': {'readonly': True},
'etag': {'readonly': True},
'system_data': {'readonly': True},
'provisioning_state': {'readonly': True},
}
_attribute_map = {
'id': {'key': 'id', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'type': {'key': 'type', 'type': 'str'},
'location': {'key': 'location', 'type': 'str'},
'tags': {'key': 'tags', 'type': '{str}'},
'etag': {'key': 'etag', 'type': 'str'},
'system_data': {'key': 'systemData', 'type': 'SystemData'},
'placement_constraints': {'key': 'properties.placementConstraints', 'type': 'str'},
'correlation_scheme': {'key': 'properties.correlationScheme', 'type': '[ServiceCorrelationDescription]'},
'service_load_metrics': {'key': 'properties.serviceLoadMetrics', 'type': '[ServiceLoadMetricDescription]'},
'service_placement_policies': {'key': 'properties.servicePlacementPolicies', 'type': '[ServicePlacementPolicyDescription]'},
'default_move_cost': {'key': 'properties.defaultMoveCost', 'type': 'str'},
'provisioning_state': {'key': 'properties.provisioningState', 'type': 'str'},
'service_kind': {'key': 'properties.serviceKind', 'type': 'str'},
'service_type_name': {'key': 'properties.serviceTypeName', 'type': 'str'},
'partition_description': {'key': 'properties.partitionDescription', 'type': 'PartitionSchemeDescription'},
'service_package_activation_mode': {'key': 'properties.servicePackageActivationMode', 'type': 'str'},
'service_dns_name': {'key': 'properties.serviceDnsName', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
super(ServiceResource, self).__init__(**kwargs)
self.placement_constraints = kwargs.get('placement_constraints', None)
self.correlation_scheme = kwargs.get('correlation_scheme', None)
self.service_load_metrics = kwargs.get('service_load_metrics', None)
self.service_placement_policies = kwargs.get('service_placement_policies', None)
self.default_move_cost = kwargs.get('default_move_cost', None)
self.provisioning_state = None
self.service_kind = None # type: Optional[str]
self.service_type_name = kwargs.get('service_type_name', None)
self.partition_description = kwargs.get('partition_description', None)
self.service_package_activation_mode = kwargs.get('service_package_activation_mode', None)
self.service_dns_name = kwargs.get('service_dns_name', None)
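Note the dotted `_attribute_map` keys such as `'properties.placementConstraints'`: msrest uses them to flatten nested JSON objects into top-level Python attributes. A rough self-contained sketch of how such a key is expanded during serialization (a simplification of what `msrest.serialization` actually does):

```python
# Rough sketch of msrest-style key flattening: a dotted key in the
# attribute map places the value inside a nested JSON object.
def serialize_flat(attrs, attribute_map):
    body = {}
    for py_name, spec in attribute_map.items():
        value = attrs.get(py_name)
        if value is None:
            continue
        target = body
        *parents, leaf = spec['key'].split('.')
        for part in parents:
            target = target.setdefault(part, {})
        target[leaf] = value
    return body

payload = serialize_flat(
    {'location': 'westus', 'placement_constraints': 'NodeColor == blue'},
    {
        'location': {'key': 'location', 'type': 'str'},
        'placement_constraints': {'key': 'properties.placementConstraints', 'type': 'str'},
    },
)
# payload == {'location': 'westus',
#             'properties': {'placementConstraints': 'NodeColor == blue'}}
```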
class ServiceResourceList(msrest.serialization.Model):
"""The list of service resources.
Variables are only populated by the server, and will be ignored when sending a request.
:param value: The list of service resources.
:type value: list[~azure.mgmt.servicefabric.models.ServiceResource]
:ivar next_link: URL to get the next set of service list results if there are any.
:vartype next_link: str
"""
_validation = {
'next_link': {'readonly': True},
}
_attribute_map = {
'value': {'key': 'value', 'type': '[ServiceResource]'},
'next_link': {'key': 'nextLink', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
super(ServiceResourceList, self).__init__(**kwargs)
self.value = kwargs.get('value', None)
self.next_link = None
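`next_link` is read-only and signals another page of results; clients typically loop until it is empty. A hedged sketch of that paging loop, with a stub `fetch_page` standing in for the HTTP request the SDK would actually issue:

```python
# Sketch of following nextLink-style pagination. fetch_page is a stub
# standing in for the real HTTP call.
PAGES = {
    None: {'value': ['svc1', 'svc2'], 'nextLink': 'page2'},
    'page2': {'value': ['svc3'], 'nextLink': None},
}

def fetch_page(link):
    return PAGES[link]

def list_all(fetch):
    items, link = [], None
    while True:
        page = fetch(link)
        items.extend(page['value'])
        link = page.get('nextLink')
        if not link:
            return items

assert list_all(fetch_page) == ['svc1', 'svc2', 'svc3']
```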
class ServiceResourcePropertiesBase(msrest.serialization.Model):
"""The common service resource properties.
:param placement_constraints: The placement constraints as a string. Placement constraints are
boolean expressions on node properties and allow for restricting a service to particular nodes
based on the service requirements. For example, to place a service on nodes where NodeColor is
blue, specify the following: "NodeColor == blue".
:type placement_constraints: str
:param correlation_scheme: A list that describes the correlation of the service with other
services.
:type correlation_scheme: list[~azure.mgmt.servicefabric.models.ServiceCorrelationDescription]
:param service_load_metrics: The service load metrics are given as an array of
ServiceLoadMetricDescription objects.
:type service_load_metrics: list[~azure.mgmt.servicefabric.models.ServiceLoadMetricDescription]
:param service_placement_policies: A list that describes the placement policies of the
service.
:type service_placement_policies:
list[~azure.mgmt.servicefabric.models.ServicePlacementPolicyDescription]
:param default_move_cost: Specifies the move cost for the service. Possible values include:
"Zero", "Low", "Medium", "High".
:type default_move_cost: str or ~azure.mgmt.servicefabric.models.MoveCost
"""
_attribute_map = {
'placement_constraints': {'key': 'placementConstraints', 'type': 'str'},
'correlation_scheme': {'key': 'correlationScheme', 'type': '[ServiceCorrelationDescription]'},
'service_load_metrics': {'key': 'serviceLoadMetrics', 'type': '[ServiceLoadMetricDescription]'},
'service_placement_policies': {'key': 'servicePlacementPolicies', 'type': '[ServicePlacementPolicyDescription]'},
'default_move_cost': {'key': 'defaultMoveCost', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
super(ServiceResourcePropertiesBase, self).__init__(**kwargs)
self.placement_constraints = kwargs.get('placement_constraints', None)
self.correlation_scheme = kwargs.get('correlation_scheme', None)
self.service_load_metrics = kwargs.get('service_load_metrics', None)
self.service_placement_policies = kwargs.get('service_placement_policies', None)
self.default_move_cost = kwargs.get('default_move_cost', None)
class ServiceResourceProperties(ServiceResourcePropertiesBase):
"""The service resource properties.
You probably want to use the sub-classes and not this class directly. Known
sub-classes are: StatefulServiceProperties, StatelessServiceProperties.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
:param placement_constraints: The placement constraints as a string. Placement constraints are
boolean expressions on node properties and allow for restricting a service to particular nodes
based on the service requirements. For example, to place a service on nodes where NodeColor is
blue, specify the following: "NodeColor == blue".
:type placement_constraints: str
:param correlation_scheme: A list that describes the correlation of the service with other
services.
:type correlation_scheme: list[~azure.mgmt.servicefabric.models.ServiceCorrelationDescription]
:param service_load_metrics: The service load metrics are given as an array of
ServiceLoadMetricDescription objects.
:type service_load_metrics: list[~azure.mgmt.servicefabric.models.ServiceLoadMetricDescription]
:param service_placement_policies: A list that describes the placement policies of the
service.
:type service_placement_policies:
list[~azure.mgmt.servicefabric.models.ServicePlacementPolicyDescription]
:param default_move_cost: Specifies the move cost for the service. Possible values include:
"Zero", "Low", "Medium", "High".
:type default_move_cost: str or ~azure.mgmt.servicefabric.models.MoveCost
:ivar provisioning_state: The current deployment or provisioning state, which only appears in
the response.
:vartype provisioning_state: str
:param service_kind: Required. The kind of service (Stateless or Stateful). Constant filled by
server. Possible values include: "Invalid", "Stateless", "Stateful".
:type service_kind: str or ~azure.mgmt.servicefabric.models.ServiceKind
:param service_type_name: The name of the service type.
:type service_type_name: str
:param partition_description: Describes how the service is partitioned.
:type partition_description: ~azure.mgmt.servicefabric.models.PartitionSchemeDescription
:param service_package_activation_mode: The activation mode of the service package. Possible
values include: "SharedProcess", "ExclusiveProcess".
:type service_package_activation_mode: str or
~azure.mgmt.servicefabric.models.ArmServicePackageActivationMode
:param service_dns_name: The DNS name used for the service. If this is specified, then the service
can be accessed via its DNS name instead of service name.
:type service_dns_name: str
"""
_validation = {
'provisioning_state': {'readonly': True},
'service_kind': {'required': True},
}
_attribute_map = {
'placement_constraints': {'key': 'placementConstraints', 'type': 'str'},
'correlation_scheme': {'key': 'correlationScheme', 'type': '[ServiceCorrelationDescription]'},
'service_load_metrics': {'key': 'serviceLoadMetrics', 'type': '[ServiceLoadMetricDescription]'},
'service_placement_policies': {'key': 'servicePlacementPolicies', 'type': '[ServicePlacementPolicyDescription]'},
'default_move_cost': {'key': 'defaultMoveCost', 'type': 'str'},
'provisioning_state': {'key': 'provisioningState', 'type': 'str'},
'service_kind': {'key': 'serviceKind', 'type': 'str'},
'service_type_name': {'key': 'serviceTypeName', 'type': 'str'},
'partition_description': {'key': 'partitionDescription', 'type': 'PartitionSchemeDescription'},
'service_package_activation_mode': {'key': 'servicePackageActivationMode', 'type': 'str'},
'service_dns_name': {'key': 'serviceDnsName', 'type': 'str'},
}
_subtype_map = {
'service_kind': {'Stateful': 'StatefulServiceProperties', 'Stateless': 'StatelessServiceProperties'}
}
def __init__(
self,
**kwargs
):
super(ServiceResourceProperties, self).__init__(**kwargs)
self.provisioning_state = None
self.service_kind = 'ServiceResourceProperties' # type: str
self.service_type_name = kwargs.get('service_type_name', None)
self.partition_description = kwargs.get('partition_description', None)
self.service_package_activation_mode = kwargs.get('service_package_activation_mode', None)
self.service_dns_name = kwargs.get('service_dns_name', None)
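`_subtype_map` makes `serviceKind` a discriminator: when msrest deserializes a `ServiceResourceProperties` payload, the value of that field selects the concrete subclass. A simplified self-contained sketch of discriminator dispatch, using local stand-in classes:

```python
# Simplified sketch of discriminator-based dispatch, as _subtype_map
# drives it in msrest. The classes here are local stand-ins.
class PropsSketch:
    subtype_map = {}  # discriminator value -> subclass

class StatefulSketch(PropsSketch):
    kind = 'Stateful'

class StatelessSketch(PropsSketch):
    kind = 'Stateless'

PropsSketch.subtype_map = {'Stateful': StatefulSketch, 'Stateless': StatelessSketch}

def deserialize(payload):
    # Fall back to the base class when the discriminator is unknown.
    cls = PropsSketch.subtype_map.get(payload['serviceKind'], PropsSketch)
    return cls()

assert isinstance(deserialize({'serviceKind': 'Stateful'}), StatefulSketch)
assert isinstance(deserialize({'serviceKind': 'Stateless'}), StatelessSketch)
```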
class ServiceResourceUpdate(ProxyResource):
"""The service resource for patch operations.
Variables are only populated by the server, and will be ignored when sending a request.
:ivar id: Azure resource identifier.
:vartype id: str
:ivar name: Azure resource name.
:vartype name: str
:ivar type: Azure resource type.
:vartype type: str
:param location: It will be deprecated in the new API; resource location depends on the parent
resource.
:type location: str
:param tags: A set of tags. Azure resource tags.
:type tags: dict[str, str]
:ivar etag: Azure resource etag.
:vartype etag: str
:ivar system_data: Metadata pertaining to creation and last modification of the resource.
:vartype system_data: ~azure.mgmt.servicefabric.models.SystemData
:param placement_constraints: The placement constraints as a string. Placement constraints are
boolean expressions on node properties and allow for restricting a service to particular nodes
based on the service requirements. For example, to place a service on nodes where NodeColor is
blue, specify the following: "NodeColor == blue".
:type placement_constraints: str
:param correlation_scheme: A list that describes the correlation of the service with other
services.
:type correlation_scheme: list[~azure.mgmt.servicefabric.models.ServiceCorrelationDescription]
:param service_load_metrics: The service load metrics are given as an array of
ServiceLoadMetricDescription objects.
:type service_load_metrics: list[~azure.mgmt.servicefabric.models.ServiceLoadMetricDescription]
:param service_placement_policies: A list that describes the placement policies of the
service.
:type service_placement_policies:
list[~azure.mgmt.servicefabric.models.ServicePlacementPolicyDescription]
:param default_move_cost: Specifies the move cost for the service. Possible values include:
"Zero", "Low", "Medium", "High".
:type default_move_cost: str or ~azure.mgmt.servicefabric.models.MoveCost
:param service_kind: The kind of service (Stateless or Stateful). Constant filled by server.
Possible values include: "Invalid", "Stateless", "Stateful".
:type service_kind: str or ~azure.mgmt.servicefabric.models.ServiceKind
"""
_validation = {
'id': {'readonly': True},
'name': {'readonly': True},
'type': {'readonly': True},
'etag': {'readonly': True},
'system_data': {'readonly': True},
}
_attribute_map = {
'id': {'key': 'id', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'type': {'key': 'type', 'type': 'str'},
'location': {'key': 'location', 'type': 'str'},
'tags': {'key': 'tags', 'type': '{str}'},
'etag': {'key': 'etag', 'type': 'str'},
'system_data': {'key': 'systemData', 'type': 'SystemData'},
'placement_constraints': {'key': 'properties.placementConstraints', 'type': 'str'},
'correlation_scheme': {'key': 'properties.correlationScheme', 'type': '[ServiceCorrelationDescription]'},
'service_load_metrics': {'key': 'properties.serviceLoadMetrics', 'type': '[ServiceLoadMetricDescription]'},
'service_placement_policies': {'key': 'properties.servicePlacementPolicies', 'type': '[ServicePlacementPolicyDescription]'},
'default_move_cost': {'key': 'properties.defaultMoveCost', 'type': 'str'},
'service_kind': {'key': 'properties.serviceKind', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
super(ServiceResourceUpdate, self).__init__(**kwargs)
self.placement_constraints = kwargs.get('placement_constraints', None)
self.correlation_scheme = kwargs.get('correlation_scheme', None)
self.service_load_metrics = kwargs.get('service_load_metrics', None)
self.service_placement_policies = kwargs.get('service_placement_policies', None)
self.default_move_cost = kwargs.get('default_move_cost', None)
self.service_kind = None # type: Optional[str]
class ServiceResourceUpdateProperties(ServiceResourcePropertiesBase):
"""The service resource properties for patch operations.
You probably want to use the sub-classes and not this class directly. Known
sub-classes are: StatefulServiceUpdateProperties, StatelessServiceUpdateProperties.
All required parameters must be populated in order to send to Azure.
:param placement_constraints: The placement constraints as a string. Placement constraints are
boolean expressions on node properties and allow for restricting a service to particular nodes
based on the service requirements. For example, to place a service on nodes where NodeColor is
blue, specify the following: "NodeColor == blue".
:type placement_constraints: str
:param correlation_scheme: A list that describes the correlation of the service with other
services.
:type correlation_scheme: list[~azure.mgmt.servicefabric.models.ServiceCorrelationDescription]
:param service_load_metrics: The service load metrics are given as an array of
ServiceLoadMetricDescription objects.
:type service_load_metrics: list[~azure.mgmt.servicefabric.models.ServiceLoadMetricDescription]
:param service_placement_policies: A list that describes the placement policies of the
service.
:type service_placement_policies:
list[~azure.mgmt.servicefabric.models.ServicePlacementPolicyDescription]
:param default_move_cost: Specifies the move cost for the service. Possible values include:
"Zero", "Low", "Medium", "High".
:type default_move_cost: str or ~azure.mgmt.servicefabric.models.MoveCost
:param service_kind: Required. The kind of service (Stateless or Stateful). Constant filled by
server. Possible values include: "Invalid", "Stateless", "Stateful".
:type service_kind: str or ~azure.mgmt.servicefabric.models.ServiceKind
"""
_validation = {
'service_kind': {'required': True},
}
_attribute_map = {
'placement_constraints': {'key': 'placementConstraints', 'type': 'str'},
'correlation_scheme': {'key': 'correlationScheme', 'type': '[ServiceCorrelationDescription]'},
'service_load_metrics': {'key': 'serviceLoadMetrics', 'type': '[ServiceLoadMetricDescription]'},
'service_placement_policies': {'key': 'servicePlacementPolicies', 'type': '[ServicePlacementPolicyDescription]'},
'default_move_cost': {'key': 'defaultMoveCost', 'type': 'str'},
'service_kind': {'key': 'serviceKind', 'type': 'str'},
}
_subtype_map = {
'service_kind': {'Stateful': 'StatefulServiceUpdateProperties', 'Stateless': 'StatelessServiceUpdateProperties'}
}
def __init__(
self,
**kwargs
):
super(ServiceResourceUpdateProperties, self).__init__(**kwargs)
self.service_kind = 'ServiceResourceUpdateProperties' # type: str
class ServiceTypeDeltaHealthPolicy(msrest.serialization.Model):
"""Represents the delta health policy used to evaluate the health of services belonging to a service type when upgrading the cluster.
:param max_percent_delta_unhealthy_services: The maximum allowed percentage of services health
degradation during cluster upgrades.
The delta is measured between the state of the services at the beginning of upgrade and the
state of the services at the time of the health evaluation.
The check is performed after every upgrade domain upgrade completion to make sure the global
state of the cluster is within tolerated limits.
:type max_percent_delta_unhealthy_services: int
"""
_validation = {
'max_percent_delta_unhealthy_services': {'maximum': 100, 'minimum': 0},
}
_attribute_map = {
'max_percent_delta_unhealthy_services': {'key': 'maxPercentDeltaUnhealthyServices', 'type': 'int'},
}
def __init__(
self,
**kwargs
):
super(ServiceTypeDeltaHealthPolicy, self).__init__(**kwargs)
self.max_percent_delta_unhealthy_services = kwargs.get('max_percent_delta_unhealthy_services', 0)
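The `_validation` entry `{'maximum': 100, 'minimum': 0}` is enforced by msrest when the model is serialized. A small self-contained approximation of that range check (the real enforcement lives in `msrest.serialization`, not in this helper):

```python
# Approximation of the range validation msrest applies from _validation.
def validate_percentage(value, name='max_percent_delta_unhealthy_services'):
    if not 0 <= value <= 100:
        raise ValueError(f'{name} must be between 0 and 100, got {value}')
    return value

assert validate_percentage(25) == 25
try:
    validate_percentage(150)
except ValueError as exc:
    print(exc)
```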
class ServiceTypeHealthPolicy(msrest.serialization.Model):
"""Represents the health policy used to evaluate the health of services belonging to a service type.
:param max_percent_unhealthy_services: The maximum percentage of services allowed to be
unhealthy before your application is considered in error.
:type max_percent_unhealthy_services: int
"""
_validation = {
'max_percent_unhealthy_services': {'maximum': 100, 'minimum': 0},
}
_attribute_map = {
'max_percent_unhealthy_services': {'key': 'maxPercentUnhealthyServices', 'type': 'int'},
}
def __init__(
self,
**kwargs
):
super(ServiceTypeHealthPolicy, self).__init__(**kwargs)
self.max_percent_unhealthy_services = kwargs.get('max_percent_unhealthy_services', 0)
class SettingsParameterDescription(msrest.serialization.Model):
"""Describes a parameter in fabric settings of the cluster.
All required parameters must be populated in order to send to Azure.
:param name: Required. The parameter name of fabric setting.
:type name: str
:param value: Required. The parameter value of fabric setting.
:type value: str
"""
_validation = {
'name': {'required': True},
'value': {'required': True},
}
_attribute_map = {
'name': {'key': 'name', 'type': 'str'},
'value': {'key': 'value', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
super(SettingsParameterDescription, self).__init__(**kwargs)
self.name = kwargs['name']
self.value = kwargs['value']
class SettingsSectionDescription(msrest.serialization.Model):
"""Describes a section in the fabric settings of the cluster.
All required parameters must be populated in order to send to Azure.
:param name: Required. The section name of the fabric settings.
:type name: str
:param parameters: Required. The collection of parameters in the section.
:type parameters: list[~azure.mgmt.servicefabric.models.SettingsParameterDescription]
"""
_validation = {
'name': {'required': True},
'parameters': {'required': True},
}
_attribute_map = {
'name': {'key': 'name', 'type': 'str'},
'parameters': {'key': 'parameters', 'type': '[SettingsParameterDescription]'},
}
def __init__(
self,
**kwargs
):
super(SettingsSectionDescription, self).__init__(**kwargs)
self.name = kwargs['name']
self.parameters = kwargs['parameters']
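After serialization, a settings section becomes the nested `fabricSettings` JSON the API expects. A sketch of that payload shape, built with plain dicts under the assumption that the keys follow the `_attribute_map` definitions above (the setting name used here is illustrative):

```python
# Sketch of the serialized fabricSettings payload shape implied by the
# attribute maps above (section: name/parameters, parameter: name/value).
def settings_section(name, **params):
    return {
        'name': name,
        'parameters': [{'name': k, 'value': v} for k, v in params.items()],
    }

section = settings_section('ClusterManager', EnableDefaultServicesUpgrade='true')
assert section == {
    'name': 'ClusterManager',
    'parameters': [{'name': 'EnableDefaultServicesUpgrade', 'value': 'true'}],
}
```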
class SingletonPartitionSchemeDescription(PartitionSchemeDescription):
"""SingletonPartitionSchemeDescription.
All required parameters must be populated in order to send to Azure.
:param partition_scheme: Required. Specifies how the service is partitioned. Constant filled by
server. Possible values include: "Invalid", "Singleton", "UniformInt64Range", "Named".
:type partition_scheme: str or ~azure.mgmt.servicefabric.models.PartitionScheme
"""
_validation = {
'partition_scheme': {'required': True},
}
_attribute_map = {
'partition_scheme': {'key': 'partitionScheme', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
super(SingletonPartitionSchemeDescription, self).__init__(**kwargs)
self.partition_scheme = 'Singleton' # type: str
class StatefulServiceProperties(ServiceResourceProperties):
"""The properties of a stateful service resource.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
:param placement_constraints: The placement constraints as a string. Placement constraints are
boolean expressions on node properties and allow for restricting a service to particular nodes
based on the service requirements. For example, to place a service on nodes where NodeColor is
blue, specify the following: "NodeColor == blue".
:type placement_constraints: str
:param correlation_scheme: A list that describes the correlation of the service with other
services.
:type correlation_scheme: list[~azure.mgmt.servicefabric.models.ServiceCorrelationDescription]
:param service_load_metrics: The service load metrics are given as an array of
ServiceLoadMetricDescription objects.
:type service_load_metrics: list[~azure.mgmt.servicefabric.models.ServiceLoadMetricDescription]
:param service_placement_policies: A list that describes the placement policies of the
service.
:type service_placement_policies:
list[~azure.mgmt.servicefabric.models.ServicePlacementPolicyDescription]
:param default_move_cost: Specifies the move cost for the service. Possible values include:
"Zero", "Low", "Medium", "High".
:type default_move_cost: str or ~azure.mgmt.servicefabric.models.MoveCost
:ivar provisioning_state: The current deployment or provisioning state, which only appears in
the response.
:vartype provisioning_state: str
:param service_kind: Required. The kind of service (Stateless or Stateful). Constant filled by
server. Possible values include: "Invalid", "Stateless", "Stateful".
:type service_kind: str or ~azure.mgmt.servicefabric.models.ServiceKind
:param service_type_name: The name of the service type.
:type service_type_name: str
:param partition_description: Describes how the service is partitioned.
:type partition_description: ~azure.mgmt.servicefabric.models.PartitionSchemeDescription
:param service_package_activation_mode: The activation mode of the service package. Possible
values include: "SharedProcess", "ExclusiveProcess".
:type service_package_activation_mode: str or
~azure.mgmt.servicefabric.models.ArmServicePackageActivationMode
:param service_dns_name: The DNS name used for the service. If this is specified, then the service
can be accessed via its DNS name instead of service name.
:type service_dns_name: str
:param has_persisted_state: A flag indicating whether this is a persistent service that stores
state on the local disk. If so, the value of this property is true; otherwise it is false.
:type has_persisted_state: bool
:param target_replica_set_size: The target replica set size as a number.
:type target_replica_set_size: int
:param min_replica_set_size: The minimum replica set size as a number.
:type min_replica_set_size: int
:param replica_restart_wait_duration: The duration between when a replica goes down and when a
new replica is created, represented in ISO 8601 format (hh:mm:ss.s).
:type replica_restart_wait_duration: ~datetime.datetime
:param quorum_loss_wait_duration: The maximum duration for which a partition is allowed to be
in a state of quorum loss, represented in ISO 8601 format (hh:mm:ss.s).
:type quorum_loss_wait_duration: ~datetime.datetime
:param stand_by_replica_keep_duration: Defines how long StandBy replicas should be
maintained before being removed, represented in ISO 8601 format (hh:mm:ss.s).
:type stand_by_replica_keep_duration: ~datetime.datetime
"""
_validation = {
'provisioning_state': {'readonly': True},
'service_kind': {'required': True},
'target_replica_set_size': {'minimum': 1},
'min_replica_set_size': {'minimum': 1},
}
_attribute_map = {
'placement_constraints': {'key': 'placementConstraints', 'type': 'str'},
'correlation_scheme': {'key': 'correlationScheme', 'type': '[ServiceCorrelationDescription]'},
'service_load_metrics': {'key': 'serviceLoadMetrics', 'type': '[ServiceLoadMetricDescription]'},
'service_placement_policies': {'key': 'servicePlacementPolicies', 'type': '[ServicePlacementPolicyDescription]'},
'default_move_cost': {'key': 'defaultMoveCost', 'type': 'str'},
'provisioning_state': {'key': 'provisioningState', 'type': 'str'},
'service_kind': {'key': 'serviceKind', 'type': 'str'},
'service_type_name': {'key': 'serviceTypeName', 'type': 'str'},
'partition_description': {'key': 'partitionDescription', 'type': 'PartitionSchemeDescription'},
'service_package_activation_mode': {'key': 'servicePackageActivationMode', 'type': 'str'},
'service_dns_name': {'key': 'serviceDnsName', 'type': 'str'},
'has_persisted_state': {'key': 'hasPersistedState', 'type': 'bool'},
'target_replica_set_size': {'key': 'targetReplicaSetSize', 'type': 'int'},
'min_replica_set_size': {'key': 'minReplicaSetSize', 'type': 'int'},
'replica_restart_wait_duration': {'key': 'replicaRestartWaitDuration', 'type': 'iso-8601'},
'quorum_loss_wait_duration': {'key': 'quorumLossWaitDuration', 'type': 'iso-8601'},
'stand_by_replica_keep_duration': {'key': 'standByReplicaKeepDuration', 'type': 'iso-8601'},
}
def __init__(
self,
**kwargs
):
super(StatefulServiceProperties, self).__init__(**kwargs)
self.service_kind = 'Stateful' # type: str
self.has_persisted_state = kwargs.get('has_persisted_state', None)
self.target_replica_set_size = kwargs.get('target_replica_set_size', None)
self.min_replica_set_size = kwargs.get('min_replica_set_size', None)
self.replica_restart_wait_duration = kwargs.get('replica_restart_wait_duration', None)
self.quorum_loss_wait_duration = kwargs.get('quorum_loss_wait_duration', None)
self.stand_by_replica_keep_duration = kwargs.get('stand_by_replica_keep_duration', None)
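The duration fields are declared with the `'iso-8601'` serialization type, and their docstrings describe an hh:mm:ss.s form. A hedged sketch of rendering a `datetime.timedelta` into that shape for values such as `replica_restart_wait_duration` (an illustration of the format, not the SDK's own serializer):

```python
from datetime import timedelta

# Sketch: rendering a timedelta in the hh:mm:ss form the docstrings
# describe for fields like replica_restart_wait_duration.
def as_hms(delta):
    total = int(delta.total_seconds())
    hours, rem = divmod(total, 3600)
    minutes, seconds = divmod(rem, 60)
    return f'{hours:02d}:{minutes:02d}:{seconds:02d}'

assert as_hms(timedelta(minutes=5)) == '00:05:00'
assert as_hms(timedelta(hours=1, seconds=30)) == '01:00:30'
```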
class StatefulServiceUpdateProperties(ServiceResourceUpdateProperties):
"""The properties of a stateful service resource for patch operations.
All required parameters must be populated in order to send to Azure.
:param placement_constraints: The placement constraints as a string. Placement constraints are
boolean expressions on node properties and allow for restricting a service to particular nodes
based on the service requirements. For example, to place a service on nodes where NodeType is
blue, specify the following: "NodeColor == blue".
:type placement_constraints: str
:param correlation_scheme: A list that describes the correlation of the service with other
services.
:type correlation_scheme: list[~azure.mgmt.servicefabric.models.ServiceCorrelationDescription]
:param service_load_metrics: The service load metrics are given as an array of
ServiceLoadMetricDescription objects.
:type service_load_metrics: list[~azure.mgmt.servicefabric.models.ServiceLoadMetricDescription]
:param service_placement_policies: A list that describes the placement policies of the
service.
:type service_placement_policies:
list[~azure.mgmt.servicefabric.models.ServicePlacementPolicyDescription]
:param default_move_cost: Specifies the move cost for the service. Possible values include:
"Zero", "Low", "Medium", "High".
:type default_move_cost: str or ~azure.mgmt.servicefabric.models.MoveCost
:param service_kind: Required. The kind of service (Stateless or Stateful). Constant filled by
server. Possible values include: "Invalid", "Stateless", "Stateful".
:type service_kind: str or ~azure.mgmt.servicefabric.models.ServiceKind
:param target_replica_set_size: The target replica set size as a number.
:type target_replica_set_size: int
:param min_replica_set_size: The minimum replica set size as a number.
:type min_replica_set_size: int
:param replica_restart_wait_duration: The duration between when a replica goes down and when a
new replica is created, represented in ISO 8601 format (hh:mm:ss.s).
:type replica_restart_wait_duration: ~datetime.datetime
:param quorum_loss_wait_duration: The maximum duration for which a partition is allowed to be
in a state of quorum loss, represented in ISO 8601 format (hh:mm:ss.s).
:type quorum_loss_wait_duration: ~datetime.datetime
:param stand_by_replica_keep_duration: The definition on how long StandBy replicas should be
maintained before being removed, represented in ISO 8601 format (hh:mm:ss.s).
:type stand_by_replica_keep_duration: ~datetime.datetime
"""
_validation = {
'service_kind': {'required': True},
'target_replica_set_size': {'minimum': 1},
'min_replica_set_size': {'minimum': 1},
}
_attribute_map = {
'placement_constraints': {'key': 'placementConstraints', 'type': 'str'},
'correlation_scheme': {'key': 'correlationScheme', 'type': '[ServiceCorrelationDescription]'},
'service_load_metrics': {'key': 'serviceLoadMetrics', 'type': '[ServiceLoadMetricDescription]'},
'service_placement_policies': {'key': 'servicePlacementPolicies', 'type': '[ServicePlacementPolicyDescription]'},
'default_move_cost': {'key': 'defaultMoveCost', 'type': 'str'},
'service_kind': {'key': 'serviceKind', 'type': 'str'},
'target_replica_set_size': {'key': 'targetReplicaSetSize', 'type': 'int'},
'min_replica_set_size': {'key': 'minReplicaSetSize', 'type': 'int'},
'replica_restart_wait_duration': {'key': 'replicaRestartWaitDuration', 'type': 'iso-8601'},
'quorum_loss_wait_duration': {'key': 'quorumLossWaitDuration', 'type': 'iso-8601'},
'stand_by_replica_keep_duration': {'key': 'standByReplicaKeepDuration', 'type': 'iso-8601'},
}
def __init__(
self,
**kwargs
):
super(StatefulServiceUpdateProperties, self).__init__(**kwargs)
self.service_kind = 'Stateful' # type: str
self.target_replica_set_size = kwargs.get('target_replica_set_size', None)
self.min_replica_set_size = kwargs.get('min_replica_set_size', None)
self.replica_restart_wait_duration = kwargs.get('replica_restart_wait_duration', None)
self.quorum_loss_wait_duration = kwargs.get('quorum_loss_wait_duration', None)
self.stand_by_replica_keep_duration = kwargs.get('stand_by_replica_keep_duration', None)
class StatelessServiceProperties(ServiceResourceProperties):
"""The properties of a stateless service resource.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
:param placement_constraints: The placement constraints as a string. Placement constraints are
boolean expressions on node properties and allow for restricting a service to particular nodes
based on the service requirements. For example, to place a service on nodes where NodeType is
blue, specify the following: "NodeColor == blue".
:type placement_constraints: str
:param correlation_scheme: A list that describes the correlation of the service with other
services.
:type correlation_scheme: list[~azure.mgmt.servicefabric.models.ServiceCorrelationDescription]
:param service_load_metrics: The service load metrics are given as an array of
ServiceLoadMetricDescription objects.
:type service_load_metrics: list[~azure.mgmt.servicefabric.models.ServiceLoadMetricDescription]
:param service_placement_policies: A list that describes the placement policies of the
service.
:type service_placement_policies:
list[~azure.mgmt.servicefabric.models.ServicePlacementPolicyDescription]
:param default_move_cost: Specifies the move cost for the service. Possible values include:
"Zero", "Low", "Medium", "High".
:type default_move_cost: str or ~azure.mgmt.servicefabric.models.MoveCost
:ivar provisioning_state: The current deployment or provisioning state, which only appears in
the response.
:vartype provisioning_state: str
:param service_kind: Required. The kind of service (Stateless or Stateful). Constant filled by
server. Possible values include: "Invalid", "Stateless", "Stateful".
:type service_kind: str or ~azure.mgmt.servicefabric.models.ServiceKind
:param service_type_name: The name of the service type.
:type service_type_name: str
:param partition_description: Describes how the service is partitioned.
:type partition_description: ~azure.mgmt.servicefabric.models.PartitionSchemeDescription
:param service_package_activation_mode: The activation Mode of the service package. Possible
values include: "SharedProcess", "ExclusiveProcess".
:type service_package_activation_mode: str or
~azure.mgmt.servicefabric.models.ArmServicePackageActivationMode
:param service_dns_name: DNS name used for the service. If this is specified, then the service
can be accessed via its DNS name instead of service name.
:type service_dns_name: str
:param instance_count: The instance count.
:type instance_count: int
:param instance_close_delay_duration: Delay duration for the RequestDrain feature to ensure that
the endpoint advertised by the stateless instance is removed before the delay starts prior to
closing the instance. This delay enables existing requests to drain gracefully before the
instance actually goes down
(https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-application-upgrade-advanced#avoid-connection-drops-during-stateless-service-planned-downtime-preview).
It is first interpreted as a string representing an ISO 8601 duration. If that fails, then it
is interpreted as a number representing the total number of milliseconds.
:type instance_close_delay_duration: str
"""
_validation = {
'provisioning_state': {'readonly': True},
'service_kind': {'required': True},
'instance_count': {'minimum': -1},
}
_attribute_map = {
'placement_constraints': {'key': 'placementConstraints', 'type': 'str'},
'correlation_scheme': {'key': 'correlationScheme', 'type': '[ServiceCorrelationDescription]'},
'service_load_metrics': {'key': 'serviceLoadMetrics', 'type': '[ServiceLoadMetricDescription]'},
'service_placement_policies': {'key': 'servicePlacementPolicies', 'type': '[ServicePlacementPolicyDescription]'},
'default_move_cost': {'key': 'defaultMoveCost', 'type': 'str'},
'provisioning_state': {'key': 'provisioningState', 'type': 'str'},
'service_kind': {'key': 'serviceKind', 'type': 'str'},
'service_type_name': {'key': 'serviceTypeName', 'type': 'str'},
'partition_description': {'key': 'partitionDescription', 'type': 'PartitionSchemeDescription'},
'service_package_activation_mode': {'key': 'servicePackageActivationMode', 'type': 'str'},
'service_dns_name': {'key': 'serviceDnsName', 'type': 'str'},
'instance_count': {'key': 'instanceCount', 'type': 'int'},
'instance_close_delay_duration': {'key': 'instanceCloseDelayDuration', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
super(StatelessServiceProperties, self).__init__(**kwargs)
self.service_kind = 'Stateless' # type: str
self.instance_count = kwargs.get('instance_count', None)
self.instance_close_delay_duration = kwargs.get('instance_close_delay_duration', None)
class StatelessServiceUpdateProperties(ServiceResourceUpdateProperties):
"""The properties of a stateless service resource for patch operations.
All required parameters must be populated in order to send to Azure.
:param placement_constraints: The placement constraints as a string. Placement constraints are
boolean expressions on node properties and allow for restricting a service to particular nodes
based on the service requirements. For example, to place a service on nodes where NodeType is
blue, specify the following: "NodeColor == blue".
:type placement_constraints: str
:param correlation_scheme: A list that describes the correlation of the service with other
services.
:type correlation_scheme: list[~azure.mgmt.servicefabric.models.ServiceCorrelationDescription]
:param service_load_metrics: The service load metrics are given as an array of
ServiceLoadMetricDescription objects.
:type service_load_metrics: list[~azure.mgmt.servicefabric.models.ServiceLoadMetricDescription]
:param service_placement_policies: A list that describes the placement policies of the
service.
:type service_placement_policies:
list[~azure.mgmt.servicefabric.models.ServicePlacementPolicyDescription]
:param default_move_cost: Specifies the move cost for the service. Possible values include:
"Zero", "Low", "Medium", "High".
:type default_move_cost: str or ~azure.mgmt.servicefabric.models.MoveCost
:param service_kind: Required. The kind of service (Stateless or Stateful). Constant filled by
server. Possible values include: "Invalid", "Stateless", "Stateful".
:type service_kind: str or ~azure.mgmt.servicefabric.models.ServiceKind
:param instance_count: The instance count.
:type instance_count: int
:param instance_close_delay_duration: Delay duration for the RequestDrain feature to ensure that
the endpoint advertised by the stateless instance is removed before the delay starts prior to
closing the instance. This delay enables existing requests to drain gracefully before the
instance actually goes down
(https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-application-upgrade-advanced#avoid-connection-drops-during-stateless-service-planned-downtime-preview).
It is first interpreted as a string representing an ISO 8601 duration. If that fails, then it
is interpreted as a number representing the total number of milliseconds.
:type instance_close_delay_duration: str
"""
_validation = {
'service_kind': {'required': True},
'instance_count': {'minimum': -1},
}
_attribute_map = {
'placement_constraints': {'key': 'placementConstraints', 'type': 'str'},
'correlation_scheme': {'key': 'correlationScheme', 'type': '[ServiceCorrelationDescription]'},
'service_load_metrics': {'key': 'serviceLoadMetrics', 'type': '[ServiceLoadMetricDescription]'},
'service_placement_policies': {'key': 'servicePlacementPolicies', 'type': '[ServicePlacementPolicyDescription]'},
'default_move_cost': {'key': 'defaultMoveCost', 'type': 'str'},
'service_kind': {'key': 'serviceKind', 'type': 'str'},
'instance_count': {'key': 'instanceCount', 'type': 'int'},
'instance_close_delay_duration': {'key': 'instanceCloseDelayDuration', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
super(StatelessServiceUpdateProperties, self).__init__(**kwargs)
self.service_kind = 'Stateless' # type: str
self.instance_count = kwargs.get('instance_count', None)
self.instance_close_delay_duration = kwargs.get('instance_close_delay_duration', None)
class SystemData(msrest.serialization.Model):
"""Metadata pertaining to creation and last modification of the resource.
:param created_by: The identity that created the resource.
:type created_by: str
:param created_by_type: The type of identity that created the resource.
:type created_by_type: str
:param created_at: The timestamp of resource creation (UTC).
:type created_at: ~datetime.datetime
:param last_modified_by: The identity that last modified the resource.
:type last_modified_by: str
:param last_modified_by_type: The type of identity that last modified the resource.
:type last_modified_by_type: str
:param last_modified_at: The timestamp of resource last modification (UTC).
:type last_modified_at: ~datetime.datetime
"""
_attribute_map = {
'created_by': {'key': 'createdBy', 'type': 'str'},
'created_by_type': {'key': 'createdByType', 'type': 'str'},
'created_at': {'key': 'createdAt', 'type': 'iso-8601'},
'last_modified_by': {'key': 'lastModifiedBy', 'type': 'str'},
'last_modified_by_type': {'key': 'lastModifiedByType', 'type': 'str'},
'last_modified_at': {'key': 'lastModifiedAt', 'type': 'iso-8601'},
}
def __init__(
self,
**kwargs
):
super(SystemData, self).__init__(**kwargs)
self.created_by = kwargs.get('created_by', None)
self.created_by_type = kwargs.get('created_by_type', None)
self.created_at = kwargs.get('created_at', None)
self.last_modified_by = kwargs.get('last_modified_by', None)
self.last_modified_by_type = kwargs.get('last_modified_by_type', None)
self.last_modified_at = kwargs.get('last_modified_at', None)
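# The model classes above all follow the same msrest convention: an
# `_attribute_map` dict maps each snake_case Python attribute to its
# camelCase wire key. The following is an illustrative sketch (not msrest
# itself, and `SystemDataSketch`/`serialize` are hypothetical names) of how
# such a map can drive serialization:

```python
# Minimal sketch of attribute_map-driven serialization. Not part of
# msrest; shown only to illustrate the pattern used by the classes above.

def serialize(obj, attribute_map):
    """Build a wire-format dict from an object, skipping None attributes."""
    body = {}
    for attr, meta in attribute_map.items():
        value = getattr(obj, attr, None)
        if value is not None:
            body[meta['key']] = value
    return body

class SystemDataSketch:
    # Same shape as the _attribute_map dicts in the generated models.
    _attribute_map = {
        'created_by': {'key': 'createdBy', 'type': 'str'},
        'created_at': {'key': 'createdAt', 'type': 'iso-8601'},
    }
    def __init__(self, **kwargs):
        self.created_by = kwargs.get('created_by', None)
        self.created_at = kwargs.get('created_at', None)

data = SystemDataSketch(created_by='alice')
print(serialize(data, SystemDataSketch._attribute_map))  # {'createdBy': 'alice'}
```

The real msrest serializer also applies the `type` entry (e.g. `iso-8601`
formatting for datetimes); this sketch only shows the key mapping.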
class UniformInt64RangePartitionSchemeDescription(PartitionSchemeDescription):
"""Describes a partitioning scheme where an integer range is allocated evenly across a number of partitions.
All required parameters must be populated in order to send to Azure.
:param partition_scheme: Required. Specifies how the service is partitioned.Constant filled by
server. Possible values include: "Invalid", "Singleton", "UniformInt64Range", "Named".
:type partition_scheme: str or ~azure.mgmt.servicefabric.models.PartitionScheme
:param count: Required. The number of partitions.
:type count: int
:param low_key: Required. String indicating the lower bound of the partition key range that
should be split between the partition ‘count’.
:type low_key: str
:param high_key: Required. String indicating the upper bound of the partition key range that
should be split between the partition ‘count’.
:type high_key: str
"""
_validation = {
'partition_scheme': {'required': True},
'count': {'required': True},
'low_key': {'required': True},
'high_key': {'required': True},
}
_attribute_map = {
'partition_scheme': {'key': 'partitionScheme', 'type': 'str'},
'count': {'key': 'count', 'type': 'int'},
'low_key': {'key': 'lowKey', 'type': 'str'},
'high_key': {'key': 'highKey', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
super(UniformInt64RangePartitionSchemeDescription, self).__init__(**kwargs)
self.partition_scheme = 'UniformInt64Range' # type: str
self.count = kwargs['count']
self.low_key = kwargs['low_key']
self.high_key = kwargs['high_key']
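# The docstring above says the [low_key, high_key] integer range is split
# evenly across `count` partitions. Service Fabric performs that split
# server-side; the sketch below (`split_range` is a hypothetical helper,
# not SDK code) shows one plausible way such an even split can be computed:

```python
# Illustrative only: split an inclusive integer key range evenly across
# `count` partitions, as the UniformInt64Range scheme describes.

def split_range(low_key, high_key, count):
    total = high_key - low_key + 1          # inclusive range size
    base, extra = divmod(total, count)      # even share + remainder
    ranges, start = [], low_key
    for i in range(count):
        # Spread the remainder one key at a time over the first partitions.
        size = base + (1 if i < extra else 0)
        ranges.append((start, start + size - 1))
        start += size
    return ranges

print(split_range(0, 99, 4))  # [(0, 24), (25, 49), (50, 74), (75, 99)]
```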
class UpgradableVersionPathResult(msrest.serialization.Model):
"""The list of intermediate cluster code versions for an upgrade or downgrade. Or minimum and maximum upgradable version if no target was given.
:param supported_path:
:type supported_path: list[str]
"""
_attribute_map = {
'supported_path': {'key': 'supportedPath', 'type': '[str]'},
}
def __init__(
self,
**kwargs
):
super(UpgradableVersionPathResult, self).__init__(**kwargs)
self.supported_path = kwargs.get('supported_path', None)
class UpgradableVersionsDescription(msrest.serialization.Model):
"""UpgradableVersionsDescription.
All required parameters must be populated in order to send to Azure.
:param target_version: Required. The target code version.
:type target_version: str
"""
_validation = {
'target_version': {'required': True},
}
_attribute_map = {
'target_version': {'key': 'targetVersion', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
super(UpgradableVersionsDescription, self).__init__(**kwargs)
self.target_version = kwargs['target_version']
class UserAssignedIdentity(msrest.serialization.Model):
"""UserAssignedIdentity.
Variables are only populated by the server, and will be ignored when sending a request.
:ivar principal_id: The principal id of user assigned identity.
:vartype principal_id: str
:ivar client_id: The client id of user assigned identity.
:vartype client_id: str
"""
_validation = {
'principal_id': {'readonly': True},
'client_id': {'readonly': True},
}
_attribute_map = {
'principal_id': {'key': 'principalId', 'type': 'str'},
'client_id': {'key': 'clientId', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
super(UserAssignedIdentity, self).__init__(**kwargs)
self.principal_id = None
self.client_id = None
velocidade = float(input("Enter the car's current speed: "))
if velocidade > 80:
    multa = (velocidade - 80) * 7  # R$ 7.00 for every km/h over the limit
    print('You exceeded the speed limit of 80 km/h')
    print('Your fine is R$ {:.2f}'.format(multa))
else:
    print('Drive safely!')
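# The same fine rule can be wrapped in a function so it is testable without
# input(). A sketch under the exercise's assumptions (R$ 7.00 per km/h over
# an 80 km/h limit); `fine` is a name introduced here, not from the exercise:

```python
# Sketch: the exercise's fine rule as a reusable, testable function.

def fine(speed, limit=80, rate=7.0):
    """Return the fine in R$, or 0.0 when the speed is within the limit."""
    return max(0.0, (speed - limit) * rate)

print(fine(100))  # 140.0
print(fine(75))   # 0.0
```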
] | 10 | 2019-04-30T05:51:36.000Z | 2022-02-16T03:33:41.000Z | #
# PySNMP MIB module MITEL-MIB (http://snmplabs.com/pysmi)
# ASN.1 source file:///Users/davwang4/Dev/mibs.snmplabs.com/asn1/MITEL-MIB
# Produced by pysmi-0.3.4 at Mon Apr 29 20:02:47 2019
# On host DAVWANG4-M-1475 platform Darwin version 18.5.0 by user davwang4
# Using Python version 3.7.3 (default, Mar 27 2019, 09:23:15)
#
OctetString, ObjectIdentifier, Integer = mibBuilder.importSymbols("ASN1", "OctetString", "ObjectIdentifier", "Integer")
NamedValues, = mibBuilder.importSymbols("ASN1-ENUMERATION", "NamedValues")
ConstraintsUnion, SingleValueConstraint, ConstraintsIntersection, ValueSizeConstraint, ValueRangeConstraint = mibBuilder.importSymbols("ASN1-REFINEMENT", "ConstraintsUnion", "SingleValueConstraint", "ConstraintsIntersection", "ValueSizeConstraint", "ValueRangeConstraint")
ifIndex, = mibBuilder.importSymbols("IF-MIB", "ifIndex")
ModuleCompliance, NotificationGroup = mibBuilder.importSymbols("SNMPv2-CONF", "ModuleCompliance", "NotificationGroup")
ModuleIdentity, Gauge32, Counter64, Unsigned32, Bits, NotificationType, MibIdentifier, internet, ObjectIdentity, IpAddress, enterprises, NotificationType, MibScalar, MibTable, MibTableRow, MibTableColumn, Counter32, Integer32, iso, TimeTicks = mibBuilder.importSymbols("SNMPv2-SMI", "ModuleIdentity", "Gauge32", "Counter64", "Unsigned32", "Bits", "NotificationType", "MibIdentifier", "internet", "ObjectIdentity", "IpAddress", "enterprises", "NotificationType", "MibScalar", "MibTable", "MibTableRow", "MibTableColumn", "Counter32", "Integer32", "iso", "TimeTicks")
TextualConvention, DisplayString = mibBuilder.importSymbols("SNMPv2-TC", "TextualConvention", "DisplayString")
mitel = MibIdentifier((1, 3, 6, 1, 4, 1, 1027))
snmpV2 = MibIdentifier((1, 3, 6, 1, 6))
snmpDomains = MibIdentifier((1, 3, 6, 1, 6, 1))
snmpUDPDomain = MibIdentifier((1, 3, 6, 1, 6, 1, 1))
class InterfaceIndex(Integer32):
pass
class TruthValue(Integer32):
subtypeSpec = Integer32.subtypeSpec + ConstraintsUnion(SingleValueConstraint(1, 2))
namedValues = NamedValues(("true", 1), ("false", 2))
class RowStatus(Integer32):
subtypeSpec = Integer32.subtypeSpec + ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4, 5, 6))
namedValues = NamedValues(("active", 1), ("notInService", 2), ("notReady", 3), ("createAndGo", 4), ("createAndWait", 5), ("destroy", 6))
class TimeStamp(TimeTicks):
pass
class TimeInterval(Integer32):
subtypeSpec = Integer32.subtypeSpec + ValueRangeConstraint(0, 2147483647)
mitelIdentification = MibIdentifier((1, 3, 6, 1, 4, 1, 1027, 1))
mitelExperimental = MibIdentifier((1, 3, 6, 1, 4, 1, 1027, 2))
mitelExtensions = MibIdentifier((1, 3, 6, 1, 4, 1, 1027, 3))
mitelProprietary = MibIdentifier((1, 3, 6, 1, 4, 1, 1027, 4))
mitelConformance = MibIdentifier((1, 3, 6, 1, 4, 1, 1027, 5))
mitelIdMgmtPlatforms = MibIdentifier((1, 3, 6, 1, 4, 1, 1027, 1, 1))
mitelIdCallServers = MibIdentifier((1, 3, 6, 1, 4, 1, 1027, 1, 2))
mitelIdTerminals = MibIdentifier((1, 3, 6, 1, 4, 1, 1027, 1, 3))
mitelIdInterfaces = MibIdentifier((1, 3, 6, 1, 4, 1, 1027, 1, 4))
mitelIdCtiPlatforms = MibIdentifier((1, 3, 6, 1, 4, 1, 1027, 1, 5))
mitelIdMgmtOpsMgr = MibIdentifier((1, 3, 6, 1, 4, 1, 1027, 1, 1, 1))
mitelIdCsMc2 = MibIdentifier((1, 3, 6, 1, 4, 1, 1027, 1, 2, 1))
mitelIdCs2000Light = MibIdentifier((1, 3, 6, 1, 4, 1, 1027, 1, 2, 2))
mitelExtInterfaces = MibIdentifier((1, 3, 6, 1, 4, 1, 1027, 3, 2))
mitelPropApplications = MibIdentifier((1, 3, 6, 1, 4, 1, 1027, 4, 1))
mitelPropTransmission = MibIdentifier((1, 3, 6, 1, 4, 1, 1027, 4, 2))
mitelPropProtocols = MibIdentifier((1, 3, 6, 1, 4, 1, 1027, 4, 3))
mitelPropUtilities = MibIdentifier((1, 3, 6, 1, 4, 1, 1027, 4, 4))
mitelPropHardware = MibIdentifier((1, 3, 6, 1, 4, 1, 1027, 4, 5))
mitelPropNotifications = MibIdentifier((1, 3, 6, 1, 4, 1, 1027, 4, 6))
mitelPropReset = MibIdentifier((1, 3, 6, 1, 4, 1, 1027, 4, 7))
mitelAppCallServer = MibIdentifier((1, 3, 6, 1, 4, 1, 1027, 4, 1, 1))
mitelNotifTraps = MibIdentifier((1, 3, 6, 1, 4, 1, 1027, 4, 6, 1))
mitelConfCompliances = MibIdentifier((1, 3, 6, 1, 4, 1, 1027, 5, 1))
mitelConfGroups = MibIdentifier((1, 3, 6, 1, 4, 1, 1027, 5, 2))
mitelGrpCommon = MibIdentifier((1, 3, 6, 1, 4, 1, 1027, 5, 2, 1))
mitelGrpOpsMgr = MibIdentifier((1, 3, 6, 1, 4, 1, 1027, 5, 2, 2))
mitelGrpCs2000 = MibIdentifier((1, 3, 6, 1, 4, 1, 1027, 5, 2, 3))
mitelConfAgents = MibIdentifier((1, 3, 6, 1, 4, 1, 1027, 5, 3))
class TDomain(ObjectIdentifier):
pass
class TAddress(OctetString):
subtypeSpec = OctetString.subtypeSpec + ValueSizeConstraint(1, 255)
class MitelIfType(Integer32):
subtypeSpec = Integer32.subtypeSpec + ConstraintsUnion(SingleValueConstraint(1))
namedValues = NamedValues(("dnic", 1))
class MitelNotifyTransportType(Integer32):
subtypeSpec = Integer32.subtypeSpec + ConstraintsUnion(SingleValueConstraint(1, 2, 3))
namedValues = NamedValues(("mitelNotifTransV1Trap", 1), ("mitelNotifTransV2Trap", 2), ("mitelNotifTransInform", 3))
mitelIfNumber = MibScalar((1, 3, 6, 1, 4, 1, 1027, 3, 2, 1), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: mitelIfNumber.setStatus('mandatory')
mitelIfTable = MibTable((1, 3, 6, 1, 4, 1, 1027, 3, 2, 2), )
if mibBuilder.loadTexts: mitelIfTable.setStatus('mandatory')
mitelIfTableEntry = MibTableRow((1, 3, 6, 1, 4, 1, 1027, 3, 2, 2, 1), ).setIndexNames((0, "IF-MIB", "ifIndex"))
if mibBuilder.loadTexts: mitelIfTableEntry.setStatus('mandatory')
mitelIfTblType = MibTableColumn((1, 3, 6, 1, 4, 1, 1027, 3, 2, 2, 1, 1), MitelIfType()).setMaxAccess("readonly")
if mibBuilder.loadTexts: mitelIfTblType.setStatus('mandatory')
mitelResetReason = MibScalar((1, 3, 6, 1, 4, 1, 1027, 4, 7, 1), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3))).clone(namedValues=NamedValues(("shutdown", 1), ("coldStart", 2), ("warmStart", 3)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: mitelResetReason.setStatus('mandatory')
mitelResetPlatform = MibScalar((1, 3, 6, 1, 4, 1, 1027, 4, 7, 2), TruthValue()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: mitelResetPlatform.setStatus('mandatory')
mitelResetAgent = MibScalar((1, 3, 6, 1, 4, 1, 1027, 4, 7, 3), TruthValue()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: mitelResetAgent.setStatus('mandatory')
mitelNotifCount = MibScalar((1, 3, 6, 1, 4, 1, 1027, 4, 6, 2), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: mitelNotifCount.setStatus('mandatory')
mitelNotifEnableTable = MibTable((1, 3, 6, 1, 4, 1, 1027, 4, 6, 3), )
if mibBuilder.loadTexts: mitelNotifEnableTable.setStatus('mandatory')
mitelNotifEnableTableEntry = MibTableRow((1, 3, 6, 1, 4, 1, 1027, 4, 6, 3, 1), ).setIndexNames((0, "MITEL-MIB", "mitelNotifEnblTblIndex"))
if mibBuilder.loadTexts: mitelNotifEnableTableEntry.setStatus('mandatory')
mitelNotifEnblTblIndex = MibTableColumn((1, 3, 6, 1, 4, 1, 1027, 4, 6, 3, 1, 1), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: mitelNotifEnblTblIndex.setStatus('mandatory')
mitelNotifEnblTblOid = MibTableColumn((1, 3, 6, 1, 4, 1, 1027, 4, 6, 3, 1, 2), ObjectIdentifier()).setMaxAccess("readonly")
if mibBuilder.loadTexts: mitelNotifEnblTblOid.setStatus('mandatory')
mitelNotifEnblTblEnable = MibTableColumn((1, 3, 6, 1, 4, 1, 1027, 4, 6, 3, 1, 3), TruthValue()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: mitelNotifEnblTblEnable.setStatus('mandatory')
mitelNotifEnblTblAck = MibTableColumn((1, 3, 6, 1, 4, 1, 1027, 4, 6, 3, 1, 4), TruthValue()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: mitelNotifEnblTblAck.setStatus('mandatory')
mitelNotifEnblTblOccurrences = MibTableColumn((1, 3, 6, 1, 4, 1, 1027, 4, 6, 3, 1, 5), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: mitelNotifEnblTblOccurrences.setStatus('mandatory')
mitelNotifEnblTblDescr = MibTableColumn((1, 3, 6, 1, 4, 1, 1027, 4, 6, 3, 1, 6), DisplayString()).setMaxAccess("readonly")
if mibBuilder.loadTexts: mitelNotifEnblTblDescr.setStatus('mandatory')
mitelNotifManagerTable = MibTable((1, 3, 6, 1, 4, 1, 1027, 4, 6, 4), )
if mibBuilder.loadTexts: mitelNotifManagerTable.setStatus('mandatory')
mitelNotifManagerTableEntry = MibTableRow((1, 3, 6, 1, 4, 1, 1027, 4, 6, 4, 1), ).setIndexNames((0, "MITEL-MIB", "mitelNotifMgrTblIndex"))
if mibBuilder.loadTexts: mitelNotifManagerTableEntry.setStatus('mandatory')
mitelNotifMgrTblIndex = MibTableColumn((1, 3, 6, 1, 4, 1, 1027, 4, 6, 4, 1, 1), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: mitelNotifMgrTblIndex.setStatus('mandatory')
mitelNotifMgrTblStatus = MibTableColumn((1, 3, 6, 1, 4, 1, 1027, 4, 6, 4, 1, 2), RowStatus().clone('active')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: mitelNotifMgrTblStatus.setStatus('mandatory')
mitelNotifMgrTblTransport = MibTableColumn((1, 3, 6, 1, 4, 1, 1027, 4, 6, 4, 1, 3), MitelNotifyTransportType().clone('mitelNotifTransV1Trap')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: mitelNotifMgrTblTransport.setStatus('mandatory')
mitelNotifMgrTblDomain = MibTableColumn((1, 3, 6, 1, 4, 1, 1027, 4, 6, 4, 1, 4), TDomain().clone((1, 3, 6, 1, 6, 1, 1))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: mitelNotifMgrTblDomain.setStatus('mandatory')
mitelNotifMgrTblAddress = MibTableColumn((1, 3, 6, 1, 4, 1, 1027, 4, 6, 4, 1, 5), TAddress().clone(hexValue="c00002000489")).setMaxAccess("readwrite")
if mibBuilder.loadTexts: mitelNotifMgrTblAddress.setStatus('mandatory')
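# For snmpUDPDomain, a TAddress is conventionally 6 octets: 4 for the IPv4
# address followed by 2 for the UDP port in network byte order. A sketch
# (not part of the generated module; `decode_udp_taddress` is a name
# introduced here) decoding the default hex value above under that convention:

```python
# Sketch: decode a 6-octet snmpUDPDomain TAddress (4 bytes IPv4 address +
# 2 bytes big-endian UDP port). Not part of the generated MIB module.

def decode_udp_taddress(octets):
    if len(octets) != 6:
        raise ValueError('snmpUDPDomain TAddress must be 6 octets')
    ip = '.'.join(str(b) for b in octets[:4])
    port = (octets[4] << 8) | octets[5]
    return ip, port

print(decode_udp_taddress(bytes.fromhex('c00002000489')))  # ('192.0.2.0', 1161)
```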
mitelNotifMgrTblTimeout = MibTableColumn((1, 3, 6, 1, 4, 1, 1027, 4, 6, 4, 1, 6), TimeInterval().clone(1000)).setMaxAccess("readwrite")
if mibBuilder.loadTexts: mitelNotifMgrTblTimeout.setStatus('mandatory')
mitelNotifMgrTblRetries = MibTableColumn((1, 3, 6, 1, 4, 1, 1027, 4, 6, 4, 1, 7), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 10)).clone(3)).setMaxAccess("readwrite")
if mibBuilder.loadTexts: mitelNotifMgrTblRetries.setStatus('mandatory')
mitelNotifMgrTblLife = MibTableColumn((1, 3, 6, 1, 4, 1, 1027, 4, 6, 4, 1, 8), TimeInterval().clone(8640000)).setMaxAccess("readwrite")
if mibBuilder.loadTexts: mitelNotifMgrTblLife.setStatus('mandatory')
mitelNotifMgrTblCommunity = MibTableColumn((1, 3, 6, 1, 4, 1, 1027, 4, 6, 4, 1, 9), OctetString().clone('public')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: mitelNotifMgrTblCommunity.setStatus('mandatory')
mitelNotifMgrTblName = MibTableColumn((1, 3, 6, 1, 4, 1, 1027, 4, 6, 4, 1, 10), DisplayString().clone('None specified.')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: mitelNotifMgrTblName.setStatus('mandatory')
mitelNotifHistoryMax = MibScalar((1, 3, 6, 1, 4, 1, 1027, 4, 6, 5), Integer32().clone(100)).setMaxAccess("readwrite")
if mibBuilder.loadTexts: mitelNotifHistoryMax.setStatus('mandatory')
mitelNotifHistorySize = MibScalar((1, 3, 6, 1, 4, 1, 1027, 4, 6, 6), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: mitelNotifHistorySize.setStatus('mandatory')
mitelNotifHistoryTable = MibTable((1, 3, 6, 1, 4, 1, 1027, 4, 6, 7), )
if mibBuilder.loadTexts: mitelNotifHistoryTable.setStatus('mandatory')
mitelNotifHistoryTableEntry = MibTableRow((1, 3, 6, 1, 4, 1, 1027, 4, 6, 7, 1), ).setIndexNames((0, "MITEL-MIB", "mitelNotifHistTblIndex"))
if mibBuilder.loadTexts: mitelNotifHistoryTableEntry.setStatus('mandatory')
mitelNotifHistTblIndex = MibTableColumn((1, 3, 6, 1, 4, 1, 1027, 4, 6, 7, 1, 1), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: mitelNotifHistTblIndex.setStatus('mandatory')
mitelNotifHistTblId = MibTableColumn((1, 3, 6, 1, 4, 1, 1027, 4, 6, 7, 1, 2), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: mitelNotifHistTblId.setStatus('mandatory')
mitelNotifHistTblTime = MibTableColumn((1, 3, 6, 1, 4, 1, 1027, 4, 6, 7, 1, 3), TimeStamp()).setMaxAccess("readonly")
if mibBuilder.loadTexts: mitelNotifHistTblTime.setStatus('mandatory')
mitelNotifHistTblIfIndex = MibTableColumn((1, 3, 6, 1, 4, 1, 1027, 4, 6, 7, 1, 4), InterfaceIndex()).setMaxAccess("readonly")
if mibBuilder.loadTexts: mitelNotifHistTblIfIndex.setStatus('mandatory')
mitelNotifHistTblConfirmed = MibTableColumn((1, 3, 6, 1, 4, 1, 1027, 4, 6, 7, 1, 5), TruthValue()).setMaxAccess("readonly")
if mibBuilder.loadTexts: mitelNotifHistTblConfirmed.setStatus('mandatory')
mitelNotifUnackTable = MibTable((1, 3, 6, 1, 4, 1, 1027, 4, 6, 8), )
if mibBuilder.loadTexts: mitelNotifUnackTable.setStatus('mandatory')
mitelNotifUnackTableEntry = MibTableRow((1, 3, 6, 1, 4, 1, 1027, 4, 6, 8, 1), ).setIndexNames((0, "MITEL-MIB", "mitelNotifUnackTblIndex"))
if mibBuilder.loadTexts: mitelNotifUnackTableEntry.setStatus('mandatory')
mitelNotifUnackTblIndex = MibTableColumn((1, 3, 6, 1, 4, 1, 1027, 4, 6, 8, 1, 1), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: mitelNotifUnackTblIndex.setStatus('mandatory')
mitelNotifUnackTblStatus = MibTableColumn((1, 3, 6, 1, 4, 1, 1027, 4, 6, 8, 1, 2), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 6))).clone(namedValues=NamedValues(("active", 1), ("notInService", 2), ("destory", 6)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: mitelNotifUnackTblStatus.setStatus('mandatory')
mitelNotifUnackTblHistory = MibTableColumn((1, 3, 6, 1, 4, 1, 1027, 4, 6, 8, 1, 3), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: mitelNotifUnackTblHistory.setStatus('mandatory')
mitelNotifUnackTblManager = MibTableColumn((1, 3, 6, 1, 4, 1, 1027, 4, 6, 8, 1, 4), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: mitelNotifUnackTblManager.setStatus('mandatory')
mitelNotifUnackTblRetriesLeft = MibTableColumn((1, 3, 6, 1, 4, 1, 1027, 4, 6, 8, 1, 5), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: mitelNotifUnackTblRetriesLeft.setStatus('mandatory')
mitelNotifAgentAddress = MibScalar((1, 3, 6, 1, 4, 1, 1027, 4, 6, 9), TAddress()).setMaxAccess("readonly")
if mibBuilder.loadTexts: mitelNotifAgentAddress.setStatus('mandatory')
mitelTrapsCommLost = NotificationType((1, 3, 6, 1, 4, 1, 1027, 4, 6, 1) + (0,1)).setObjects(("MITEL-MIB", "mitelNotifUnackTblStatus"), ("MITEL-MIB", "mitelNotifMgrTblDomain"), ("MITEL-MIB", "mitelNotifMgrTblAddress"))
mitelTrapsReset = NotificationType((1, 3, 6, 1, 4, 1, 1027, 4, 6, 1) + (0,2)).setObjects(("MITEL-MIB", "mitelNotifUnackTblStatus"), ("MITEL-MIB", "mitelResetReason"))
mitelGrpCmnNotifBasic = MibIdentifier((1, 3, 6, 1, 4, 1, 1027, 5, 2, 1, 2))
mitelGrpCmnNotifManagers = MibIdentifier((1, 3, 6, 1, 4, 1, 1027, 5, 2, 1, 3))
mitelGrpCmnNotifHistory = MibIdentifier((1, 3, 6, 1, 4, 1, 1027, 5, 2, 1, 4))
mitelGrpCmnNotifAck = MibIdentifier((1, 3, 6, 1, 4, 1, 1027, 5, 2, 1, 5))
mitelGrpCmnInterfaces = MibIdentifier((1, 3, 6, 1, 4, 1, 1027, 5, 2, 1, 6))
mitelGrpCmnReset = MibIdentifier((1, 3, 6, 1, 4, 1, 1027, 5, 2, 1, 7))
mitelComplMitel = MibIdentifier((1, 3, 6, 1, 4, 1, 1027, 5, 1, 1))
mibBuilder.exportSymbols("MITEL-MIB", snmpUDPDomain=snmpUDPDomain, mitelNotifMgrTblLife=mitelNotifMgrTblLife, mitelNotifMgrTblStatus=mitelNotifMgrTblStatus, mitelIdInterfaces=mitelIdInterfaces, TimeStamp=TimeStamp, mitelGrpCommon=mitelGrpCommon, mitelNotifHistoryTableEntry=mitelNotifHistoryTableEntry, mitelExtInterfaces=mitelExtInterfaces, mitelGrpCmnNotifBasic=mitelGrpCmnNotifBasic, TruthValue=TruthValue, mitelPropProtocols=mitelPropProtocols, mitelNotifUnackTableEntry=mitelNotifUnackTableEntry, mitelResetReason=mitelResetReason, mitelNotifEnableTable=mitelNotifEnableTable, mitelNotifMgrTblRetries=mitelNotifMgrTblRetries, RowStatus=RowStatus, MitelNotifyTransportType=MitelNotifyTransportType, mitelNotifHistoryMax=mitelNotifHistoryMax, mitelTrapsCommLost=mitelTrapsCommLost, mitelGrpCmnInterfaces=mitelGrpCmnInterfaces, mitelNotifMgrTblDomain=mitelNotifMgrTblDomain, mitelNotifEnblTblEnable=mitelNotifEnblTblEnable, mitelResetAgent=mitelResetAgent, mitelNotifEnblTblOid=mitelNotifEnblTblOid, snmpV2=snmpV2, mitelNotifHistoryTable=mitelNotifHistoryTable, mitelConformance=mitelConformance, mitelIdMgmtOpsMgr=mitelIdMgmtOpsMgr, mitelResetPlatform=mitelResetPlatform, mitelPropTransmission=mitelPropTransmission, snmpDomains=snmpDomains, mitelPropReset=mitelPropReset, InterfaceIndex=InterfaceIndex, mitelNotifUnackTblStatus=mitelNotifUnackTblStatus, mitelPropHardware=mitelPropHardware, MitelIfType=MitelIfType, mitelGrpCmnReset=mitelGrpCmnReset, mitelGrpOpsMgr=mitelGrpOpsMgr, mitelNotifMgrTblAddress=mitelNotifMgrTblAddress, mitelNotifHistTblConfirmed=mitelNotifHistTblConfirmed, mitelIfTable=mitelIfTable, mitelNotifUnackTblRetriesLeft=mitelNotifUnackTblRetriesLeft, mitelNotifHistTblTime=mitelNotifHistTblTime, mitelNotifHistTblIndex=mitelNotifHistTblIndex, mitelPropApplications=mitelPropApplications, mitelPropNotifications=mitelPropNotifications, mitelNotifMgrTblIndex=mitelNotifMgrTblIndex, mitelConfAgents=mitelConfAgents, mitelNotifTraps=mitelNotifTraps, 
mitelConfGroups=mitelConfGroups, mitelIfTableEntry=mitelIfTableEntry, mitelGrpCmnNotifHistory=mitelGrpCmnNotifHistory, mitelNotifMgrTblTimeout=mitelNotifMgrTblTimeout, mitelPropUtilities=mitelPropUtilities, mitelNotifHistTblIfIndex=mitelNotifHistTblIfIndex, mitelNotifEnblTblDescr=mitelNotifEnblTblDescr, mitelIdCsMc2=mitelIdCsMc2, mitelExtensions=mitelExtensions, mitelNotifAgentAddress=mitelNotifAgentAddress, mitelNotifMgrTblTransport=mitelNotifMgrTblTransport, mitelNotifManagerTableEntry=mitelNotifManagerTableEntry, TAddress=TAddress, mitelNotifHistorySize=mitelNotifHistorySize, mitelComplMitel=mitelComplMitel, mitelNotifCount=mitelNotifCount, TDomain=TDomain, mitelIdCtiPlatforms=mitelIdCtiPlatforms, mitelIdTerminals=mitelIdTerminals, mitelGrpCmnNotifAck=mitelGrpCmnNotifAck, mitelNotifHistTblId=mitelNotifHistTblId, mitelIdCs2000Light=mitelIdCs2000Light, mitelNotifMgrTblName=mitelNotifMgrTblName, mitelNotifMgrTblCommunity=mitelNotifMgrTblCommunity, mitelConfCompliances=mitelConfCompliances, mitelIdCallServers=mitelIdCallServers, mitelNotifEnableTableEntry=mitelNotifEnableTableEntry, mitelIfNumber=mitelIfNumber, TimeInterval=TimeInterval, mitelProprietary=mitelProprietary, mitelNotifEnblTblOccurrences=mitelNotifEnblTblOccurrences, mitelNotifUnackTblManager=mitelNotifUnackTblManager, mitelNotifUnackTable=mitelNotifUnackTable, mitelIdMgmtPlatforms=mitelIdMgmtPlatforms, mitelIfTblType=mitelIfTblType, mitelNotifUnackTblIndex=mitelNotifUnackTblIndex, mitelTrapsReset=mitelTrapsReset, mitelNotifUnackTblHistory=mitelNotifUnackTblHistory, mitelNotifEnblTblIndex=mitelNotifEnblTblIndex, mitelAppCallServer=mitelAppCallServer, mitel=mitel, mitelNotifEnblTblAck=mitelNotifEnblTblAck, mitelIdentification=mitelIdentification, mitelExperimental=mitelExperimental, mitelNotifManagerTable=mitelNotifManagerTable, mitelGrpCmnNotifManagers=mitelGrpCmnNotifManagers, mitelGrpCs2000=mitelGrpCs2000)
# Auto generated configuration file
# using:
# Revision: 1.372.2.1
# Source: /local/reps/CMSSW.admin/CMSSW/Configuration/PyReleaseValidation/python/ConfigBuilder.py,v
# with command line options: SELECT -s NONE --filein=/afs/cern.ch/cms/CAF/CMSCOMM/COMM_DQM/DQMTest/MinimumBias__RAW__v1__165633__1CC420EE-B686-E011-A788-0030487CD6E8.root --fileout=Run1Run2.root --no_exec --conditions auto:com10 --data --eventcontent=RAW
# edmFileUtil -e /store/data/Run2012B/DoubleElectron/RAW-RECO/ZElectron-22Jan2013-v1/20000/A8FCA637-8E69-E211-9B26-002590596468.root | egrep -e "^[[:space:]]+([1-9]+[[:space:]]+){3}" | grep 196452 | head -n 100 | gawk '{print $1 " " $2 " " $3}' | sort --key=1 -n | gawk '{print "\x27" $1 "\x27, \x27" $2 "\x27, \x27" $3 "\x27"}'
import FWCore.ParameterSet.Config as cms
process = cms.Process('NONE')
# import of standard configurations
process.load('FWCore.MessageService.MessageLogger_cfi')
process.load('Configuration.EventContent.EventContent_cff')
process.load('Configuration.StandardSequences.FrontierConditions_GlobalTag_cff')
process.maxEvents = cms.untracked.PSet(
input = cms.untracked.int32(5)
)
# Input source
#myFirstRunLSEvt = ['172791', '1015', '1414286969']
#myLastRunLSEvt = ['172791', '1226', '1693205411']
#myFirstRunLSEvt = ['172791', '67', '35621558']
#myLastRunLSEvt = ['172791', '81', '58985871']
#myFirstRunLSEvt = ['172819', '58', '30852858']
#myLastRunLSEvt = ['172819', '60', '34303685']
#myFirstRunLSEvt = ['173241', '44', '62788265']
#myLastRunLSEvt = ['173241', '48', '67794693']
#myFirstRunLSEvt = ['196452', '829', '1114173926', '/store/data/Run2012B/DoubleElectron/RAW-RECO/ZElectron-22Jan2013-v1/20000/A8FCA637-8E69-E211-9B26-002590596468.root']
#myLastRunLSEvt = ['196452', '829', '1114287699', '/store/data/Run2012B/DoubleElectron/RAW-RECO/ZElectron-22Jan2013-v1/20000/A8FCA637-8E69-E211-9B26-002590596468.root']
#myFirstRunLSEvt = ['196452', '829', '1113629261', '/store/data/Run2012B/DoubleElectron/RAW-RECO/ZElectron-22Jan2013-v1/20000/A8FCA637-8E69-E211-9B26-002590596468.root']
#myLastRunLSEvt = ['196452', '829', '1113689764', '/store/data/Run2012B/DoubleElectron/RAW-RECO/ZElectron-22Jan2013-v1/20000/A8FCA637-8E69-E211-9B26-002590596468.root']
#myFirstRunLSEvt = ['196452', '174', '196826855', '/store/data/Run2012B/DoubleElectron/RAW-RECO/ZElectron-22Jan2013-v1/20000/A8FCA637-8E69-E211-9B26-002590596468.root']
#myLastRunLSEvt = ['196452', '174', '197239827', '/store/data/Run2012B/DoubleElectron/RAW-RECO/ZElectron-22Jan2013-v1/20000/A8FCA637-8E69-E211-9B26-002590596468.root']
#myFirstRunLSEvt = ['196453', '814', '692514946', '/store/data/Run2012B/DoubleElectron/RAW-RECO/ZElectron-22Jan2013-v1/20000/002711B7-D169-E211-BF5D-0026189438EA.root']
#myLastRunLSEvt = ['196453', '814', '692526334', '/store/data/Run2012B/DoubleElectron/RAW-RECO/ZElectron-22Jan2013-v1/20000/002711B7-D169-E211-BF5D-0026189438EA.root']
myFirstRunLSEvt = ['196437', '165', '155564199', '/store/data/Run2012B/DoubleElectron/RAW-RECO/ZElectron-22Jan2013-v1/20000/6483FF2F-9B69-E211-A075-0025905964A6.root']
myLastRunLSEvt = ['196437', '165', '156191367', '/store/data/Run2012B/DoubleElectron/RAW-RECO/ZElectron-22Jan2013-v1/20000/6483FF2F-9B69-E211-A075-0025905964A6.root']
process.source = cms.Source("PoolSource",
firstRun = cms.untracked.uint32(int(myFirstRunLSEvt[0])),
eventsToProcess = cms.untracked.VEventRange(
':'.join(myFirstRunLSEvt[0:3])+'-'+':'.join(myLastRunLSEvt[0:3])
),
secondaryFileNames = cms.untracked.vstring(),
fileNames = cms.untracked.vstring('%s' % myFirstRunLSEvt[3])
)
process.options = cms.untracked.PSet(
)
# Production Info
process.configurationMetadata = cms.untracked.PSet(
annotation = cms.untracked.string('SELECT nevts:1'),
name = cms.untracked.string('PyReleaseValidation')
)
# Output definition
process.RAWoutput = cms.OutputModule("PoolOutputModule",
splitLevel = cms.untracked.int32(0),
eventAutoFlushCompressedSize = cms.untracked.int32(5242880),
outputCommands = process.RAWEventContent.outputCommands,
fileName = cms.untracked.string('skim%s-%s-%s_%s-%s-%s.root' % tuple(myFirstRunLSEvt[0:3]+myLastRunLSEvt[0:3])),
dataset = cms.untracked.PSet(
filterName = cms.untracked.string(''),
dataTier = cms.untracked.string('')
)
)
# Additional output definition
# Other statements
process.GlobalTag.globaltag = 'GR_R_52_V4::All'
# Path and EndPath definitions
process.RAWoutput_step = cms.EndPath(process.RAWoutput)
# Schedule definition
process.schedule = cms.Schedule(process.RAWoutput_step)
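The event range handed to `eventsToProcess` above is just the string `'run:ls:event-run:ls:event'` built from the first and last `[run, ls, event, file]` lists. A standalone sketch of that join (the helper name `make_event_range` is ours, not CMSSW's):

```python
# Sketch of how the PoolSource event range above is assembled:
# 'run:ls:event' for the first and last event, joined with '-'.
def make_event_range(first, last):
    """first/last are [run, ls, event, ...] lists of strings."""
    return ':'.join(first[0:3]) + '-' + ':'.join(last[0:3])

first = ['196437', '165', '155564199']
last = ['196437', '165', '156191367']
print(make_event_range(first, last))
# 196437:165:155564199-196437:165:156191367
```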
#!/usr/bin/python3
def print_matrix_integer(matrix=[[]]):
"prints a matrix of integers"
row_len = len(matrix)
col_len = len(matrix[0])
for i in range(row_len):
        for j in range(col_len):
            end_char = ' ' if j != col_len - 1 else ''
            print("{:d}".format(matrix[i][j]), end=end_char)
print("")
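A quick standalone check of the function above (the function is repeated here so the snippet runs on its own): each row prints its integers separated by single spaces, with no trailing space, followed by a newline.

```python
def print_matrix_integer(matrix=[[]]):
    "prints a matrix of integers"
    row_len = len(matrix)
    col_len = len(matrix[0])
    for i in range(row_len):
        for j in range(col_len):
            # space separator between columns, nothing after the last one
            end_char = ' ' if j != col_len - 1 else ''
            print("{:d}".format(matrix[i][j]), end=end_char)
        print("")

print_matrix_integer([[1, 2, 3], [4, 5, 6]])
# 1 2 3
# 4 5 6
```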
from math import hypot
class Vector():
def __init__(self, x=0, y=0):
self.x = x
self.y = y
def __repr__(self):
return 'Vector(%r, %r)' % (self.x, self.y)
def __abs__(self):
return hypot(self.x, self.y)
def __bool__(self):
return bool(abs(self))
def __add__(self, other):
return Vector(self.x+other.x, self.y+other.y)
def __mul__(self, scalar):
        return Vector(self.x*scalar, self.y*scalar)
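The dunder methods above give `Vector` natural `+`, `*`, and `abs()` behaviour. A short demonstration (the class is repeated in trimmed form so the example runs standalone):

```python
from math import hypot

class Vector:
    # trimmed copy of the class above, just for this demonstration
    def __init__(self, x=0, y=0):
        self.x = x
        self.y = y
    def __repr__(self):
        return 'Vector(%r, %r)' % (self.x, self.y)
    def __abs__(self):
        return hypot(self.x, self.y)
    def __add__(self, other):
        return Vector(self.x + other.x, self.y + other.y)
    def __mul__(self, scalar):
        return Vector(self.x * scalar, self.y * scalar)

print(Vector(2, 4) + Vector(2, 1))  # Vector(4, 5)
print(abs(Vector(3, 4)))            # 5.0
print(Vector(2, 4) * 3)             # Vector(6, 12)
```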
#!/usr/bin/python
# -*- coding: utf-8 -*-
__author__ = "Fireclaw the Fox"
__license__ = """
Simplified BSD (BSD 2-Clause) License.
See License.txt or http://opensource.org/licenses/BSD-2-Clause for more info
"""
| 21.4 | 76 | 0.691589 | 31 | 214 | 4.516129 | 0.806452 | 0.057143 | 0.142857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016216 | 0.135514 | 214 | 9 | 77 | 23.777778 | 0.740541 | 0.17757 | 0 | 0 | 0 | 0.2 | 0.773256 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
7c5116a1bf23f1c8f72ddac452fa021194a37a23 | 818 | py | Python | src/fate_of_dice/discord_bot/mapper/exception_mapper.py | bonczeq/FateOfDice | ce1704ac490f55bc600c0963958d4175104e85e5 | [
"MIT"
] | null | null | null | src/fate_of_dice/discord_bot/mapper/exception_mapper.py | bonczeq/FateOfDice | ce1704ac490f55bc600c0963958d4175104e85e5 | [
"MIT"
] | null | null | null | src/fate_of_dice/discord_bot/mapper/exception_mapper.py | bonczeq/FateOfDice | ce1704ac490f55bc600c0963958d4175104e85e5 | [
"MIT"
] | null | null | null | # pylint: disable=function-redefined
from discord import File
from multipledispatch import dispatch
from fate_of_dice.common import DiceException
from fate_of_dice.resources.resource_handler import ResourceImageHandler
from .dice_embed import DiceEmbed
@dispatch(DiceException)
def from_exception(error: DiceException) -> {DiceEmbed, File}:
embed = DiceEmbed(description=str(error), colour=0xae6229)
embed.add_thumbnail(ResourceImageHandler.INNOVATION_IMAGE)
return {'embed': embed, 'file': embed.thumbnail_file()}
@dispatch(BaseException)
def from_exception(error: BaseException) -> {DiceEmbed, File}:
embed = DiceEmbed(description=str(error), title="Error", colour=0x000000)
embed.add_thumbnail(ResourceImageHandler.PROCESS_IMAGE)
return {'embed': embed, 'file': embed.thumbnail_file()}
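The `@dispatch` decorator above overloads `from_exception` by the runtime type of the error, picking the `DiceException` variant over the `BaseException` fallback. The standard library offers a similar effect with `functools.singledispatch`; the sketch below is a standalone illustration of that pattern, not this module's actual code (`DiceException` here is a stand-in class):

```python
from functools import singledispatch

class DiceException(Exception):
    """Stand-in for fate_of_dice's DiceException, just for this sketch."""

@singledispatch
def describe(error: BaseException) -> str:
    # fallback handler for any exception type
    return "Error: %s" % error

@describe.register
def _(error: DiceException) -> str:
    # the more specific registration wins for DiceException instances
    return "Dice problem: %s" % error

print(describe(DiceException("bad roll")))  # Dice problem: bad roll
print(describe(ValueError("oops")))         # Error: oops
```

Unlike `multipledispatch`, `singledispatch` considers only the first argument's type, which is all this mapper needs.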
from symtable import Symbol
from src.symbol import SymbolOptions
class Message:
def welcome_message(self):
return "Welcome to Tic Tac Toe"
def menu(self):
return """
Choose one of the options below:
1. Play game with symbols 'X' and 'O'
2. Choose your own symbols
"""
def display_symbols(self):
symbols = SymbolOptions().symbols
return (
f"Type a number to choose the associated symbol from this list: \n{symbols}"
)
def invalid_menu_input(self):
return "That input is invalid. Please enter 1 or 2."
def invalid_symbol_option(self):
return "That input is invalid. Please enter a number 1-10."
def choose_symbol_player_one(self):
return "Player One - please choose your mark:"
def choose_symbol_player_two(self):
return "Player Two - please choose your mark:"
def rules(self):
rules = """
Play this game by taking turns marking the board.
You will start by choosing between using the default X and O symbols, or your own symbol instead.
When prompted, type a number between 1 and 9 and press enter.
If that spot is taken, the computer will prompt you for a different spot.
The first player who gets three of their marks in a row wins!
If the board is full and neither player has three in a row, it is a draw and the game is over.
At the end of every game, you will have the option to play again or to exit.
"""
return rules
def game_over_message(self):
return "Game over - it's a draw!"
def prompt_for_move(self, current_player):
return f"Player {current_player.mark} - enter a number to place your mark"
def declare_winner(self, winner):
return f"Congrats Player {winner} - you are the winner!"
def invalid_board_input(self):
return "That input is incorrect. Please input a number 1-9 for a spot that is not occupied."
def play_again_prompt(self):
return "Would you like to play again? (Y/N)"
def goodbye_message(self):
return "Thanks for playing - goodbye!"
def invalid_repeat_game_input(self):
return "That input is incorrect. Please input Y to play again or N to exit the game."
def choose_language(self):
return """
Choose your language:
1. English
2. Spanish
"""
def choose_players(self):
return """
Please make a selection from the options:
1. Human vs Human (2 players)
2. Human vs Computer (1 player)
"""
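The parameterized prompts above are plain f-strings over a player's `mark` attribute. A minimal standalone check of that formatting, with `SimpleNamespace` standing in for the real player class (which lives outside this file); the two functions are re-stated copies of the method bodies:

```python
from types import SimpleNamespace

def prompt_for_move(current_player):
    return f"Player {current_player.mark} - enter a number to place your mark"

def declare_winner(winner):
    return f"Congrats Player {winner} - you are the winner!"

player = SimpleNamespace(mark="X")
print(prompt_for_move(player))  # Player X - enter a number to place your mark
print(declare_winner("O"))      # Congrats Player O - you are the winner!
```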
| 30.8875 | 100 | 0.684338 | 385 | 2,471 | 4.316883 | 0.32987 | 0.078219 | 0.033694 | 0.045728 | 0.132972 | 0.105295 | 0.102286 | 0.102286 | 0.055355 | 0 | 0 | 0.009106 | 0.244435 | 2,471 | 79 | 101 | 31.278481 | 0.881093 | 0 | 0 | 0.114754 | 0 | 0.016393 | 0.561311 | 0.008499 | 0 | 0 | 0 | 0 | 0 | 1 | 0.278689 | false | 0 | 0.032787 | 0.245902 | 0.606557 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
7c5a77e2a1ce23417f0017cf4ff099328f5d1b10 | 18,416 | py | Python | py/orbit/sns_linac/LinacLatticeFactory.py | LeoRya/py-orbit | 340b14b6fd041ed8ec2cc25b0821b85742aabe0c | [
"MIT"
] | 17 | 2018-02-09T23:39:06.000Z | 2022-03-04T16:27:04.000Z | py/orbit/sns_linac/LinacLatticeFactory.py | LeoRya/py-orbit | 340b14b6fd041ed8ec2cc25b0821b85742aabe0c | [
"MIT"
] | 22 | 2017-05-31T19:40:14.000Z | 2021-09-24T22:07:47.000Z | py/orbit/sns_linac/LinacLatticeFactory.py | LeoRya/py-orbit | 340b14b6fd041ed8ec2cc25b0821b85742aabe0c | [
"MIT"
] | 37 | 2016-12-08T19:39:35.000Z | 2022-02-11T19:59:34.000Z | """
The Linac Lattice Factory generates the Linac Accelerator Lattice from the information
inside of the LinacStructureTree instance which in turn was generated by the LinacParser.
The Linac Lattice Factory uses a predefined set of Linac Acc Elements. If you do not have
the LinacStructureTree instance you can create the Linac Accelerator Lattice directly in
the script.
"""
import os
import math
# import the linac structure tree with all sequences and nodes, but without drifts
from LinacParser import LinacStructureTree
from LinacAccNodes import BaseLinacNode, LinacNode, LinacMagnetNode, MarkerLinacNode, Drift, Quad, BaseRF_Gap, Bend
from LinacAccNodes import DCorrectorH, DCorrectorV
from LinacAccNodes import RF_Cavity, Sequence
from LinacAccLattice import LinacAccLattice
# import general accelerator elements
from orbit.lattice import AccNode
# import pyORBIT Python utilities classes for objects with names, types, and dictionary parameters
from orbit.utils import orbitFinalize
class LinacLatticeFactory():
"""
The Linac Lattice Factory generates the Linac Accelerator Lattice
from the information inside of the LinacStructureTree instance.
"""
def __init__(self, ltree):
if(isinstance(ltree, LinacStructureTree) != True):
msg = "The LinacLatticeFactory constructor: you have to specify the LinacStructureTree instance as input!"
msg = msg + os.linesep
msg = msg + "Stop."
msg = msg + os.linesep
orbitFinalize(msg)
self.ltree = ltree
#We need to compare positions, lengths etc. This is our delta
self.zeroDistance = 0.00001
		#The maximal length of the drift. It will be divided if it is more than that.
self.maxDriftLength = 1.
def setMaxDriftLength(self, maxDriftLength = 1.0):
"""
Sets the maximal drift length that is used for
the purpose of the space charge calculations and diagnostics.
"""
self.maxDriftLength = maxDriftLength
def getMaxDriftLength(self):
"""
Returns the maximal drift length that is used for the purpose
of the space charge calculations and diagnostics.
"""
return self.maxDriftLength
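Later in `getLinacAccLattice`, gaps between elements are split into drifts no longer than `maxDriftLength` with `nDrifts = int(gap/max) + 1` and `driftLength = gap/nDrifts`. A standalone sketch of just that arithmetic (the helper name `split_drift` is ours):

```python
def split_drift(gap_length, max_drift_length=1.0):
    """Return (n_drifts, drift_length) so that n_drifts * drift_length
    equals gap_length and drift_length <= max_drift_length, mirroring
    the factory's splitting formula."""
    n_drifts = int(gap_length / max_drift_length) + 1
    return n_drifts, gap_length / n_drifts

print(split_drift(2.5))  # (3, 0.8333333333333334)
print(split_drift(0.4))  # (1, 0.4)
```

Because `nDrifts` is the floor of the ratio plus one, the resulting drift length can never exceed `maxDriftLength`.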
def getLinacAccLattice(self,names):
"""
Returns the linac accelerator lattice for specified sequence names.
"""
if(len(names) < 1):
msg = "The LinacLatticeFactory method getLinacAccLattice(names): you have to specify the names array!"
msg = msg + os.linesep
msg = msg + "Stop."
msg = msg + os.linesep
orbitFinalize(msg)
#let's check that the names in good order ==start==
seqencesLocal = self.ltree.getSeqs()
seqencesLocalNames = []
for seq in seqencesLocal:
seqencesLocalNames.append(seq.getName())
ind_old = -1
count = 0
for name in names:
ind = seqencesLocalNames.index(name)
if(ind < 0 or (count > 0 and ind != (ind_old + 1))):
msg = "The LinacLatticeFactory method getLinacAccLattice(names): sequence names array is wrong!"
msg = msg + os.linesep
msg = msg + "existing names=" + str(seqencesLocalNames)
msg = msg + os.linesep
msg = msg + "sequence names="+str(names)
orbitFinalize(msg)
ind_old = ind
count += 1
# let's check that the names in good order ==stop==
ind_start = seqencesLocalNames.index(names[0])
sequences = self.ltree.getSeqs()[ind_start:ind_start+len(names)]
#----make linac lattice
linacAccLattice = LinacAccLattice(self.ltree.getName())
		#There are the following possible types of elements in the linac tree:
#QUAD - quadrupole
#RFGAP - RF Gap
#DCH - horizontal dipole corrector
#DCV - vertical dipole corrector
#Marker - anything else with the length equals to 0
		#Before putting everything into the linacAccLattice we will create sequences
# with all nodes.
#----------------------------------------------------------------------
# The DRIFTS will be generated additionally and put into right places
#----------------------------------------------------------------------
def positionComp(node1,node2):
if(node1.getParam("pos") > node2.getParam("pos")):
return 1
else:
if(node1.getParam("pos") == node2.getParam("pos")):
return 0
return -1
accSeqs = []
accRF_Cavs = []
seqPosition = 0.
for seq in sequences:
#print "debug =========================================== seq=",seq.getName()
accSeq = Sequence(seq.getName())
accSeq.setLinacAccLattice(linacAccLattice)
accSeq.setLength(float(seq.getLength()))
accSeq.setPosition(seqPosition)
seqPosition = seqPosition + accSeq.getLength()
accSeqs.append(accSeq)
#these nodes are not AccNodes. They are from linac parser
nodes = seq.getNodes()
#put nodes in order according to the position in the sequence
for node in nodes:
node.setParam("pos",float(node.getParam("pos")))
nodes.sort(positionComp)
			#rf_cav_names is an auxiliary array with RF Cav. names
rf_cav_names = []
#array of nodes that are AccNodes with zero length
#They can be positioned inside the thick nodes, and this will be done at the end
#of this constructor
thinNodes = []
for node in nodes:
#print "debug node=",node.getName()," pos=",node.getParam("pos")
#------------QUAD-----------------
if(node.getType() == "QUAD"):
accNode = Quad(node.getName())
accNode.updateParamsDict(node.getParamsDict())
accNode.setParam("dB/dr",node.getParam("field"))
accNode.setParam("field",node.getParam("field"))
accNode.setLength(node.getParam("effLength"))
if(0.5*accNode.getLength() > self.maxDriftLength):
accNode.setnParts(2*int(0.5*accNode.getLength()/self.maxDriftLength + 1.5 - 1.0e-12))
accSeq.addNode(accNode)
#------------BEND-----------------
elif(node.getType() == "BEND"):
accNode = Bend(node.getName())
accNode.updateParamsDict(node.getParamsDict())
accNode.setParam("poles",[int(x) for x in eval(node.getParam("poles"))])
accNode.setParam("kls", [x for x in eval(node.getParam("kls"))])
accNode.setParam("skews",[int(x) for x in eval(node.getParam("skews"))])
accNode.setParam("ea1",node.getParam("ea1"))
accNode.setParam("ea2",node.getParam("ea2"))
accNode.setParam("theta",node.getParam("theta"))
accNode.setLength(node.getParam("effLength"))
if(0.5*accNode.getLength() > self.maxDriftLength):
accNode.setnParts(2*int(0.5*accNode.getLength()/self.maxDriftLength + 1.5 - 1.0e-12))
accSeq.addNode(accNode)
#------------RF_Gap-----------------
elif(node.getType() == "RFGAP"):
accNode = BaseRF_Gap(node.getName())
accNode.updateParamsDict(node.getParamsDict())
accNode.setParam("gapOffset",node.getParam("gapOffset"))
accNode.setLength(node.getParam("gapLength"))
accNode.setParam("amp",node.getParam("amp"))
#the parameter from XAL in MeV, we use GeV
#accNode.setParam("E0TL",1.0e-3*node.getParam("E0TL"))
accNode.setParam("E0TL",0.001*node.getParam("E0TL"))
accNode.setParam("length",node.getParam("gapLength"))
accNode.setParam("gapLength",node.getParam("gapLength"))
accNode.setParam("modePhase",node.getParam("modePhase"))
rf_cav_name = node.getParam("parentCavity")
if(rf_cav_name not in rf_cav_names):
accNode.setParam("firstPhase", (math.pi/180.)*accNode.getParam("firstPhase"))
rf_cav_names.append(rf_cav_name)
accRF_Cav = RF_Cavity(rf_cav_name)
accRF_Cavs.append(accRF_Cav)
accRF_Cav._setDesignPhase(accNode.getParam("firstPhase"))
accRF_Cav.setPhase(accNode.getParam("firstPhase"))
accRF_Cav._setDesignAmp(1.)
accRF_Cav.setAmp(1.)
accRF_Cav.setFrequency(seq.getParam("rfFrequency"))
accRF_Cav = accRF_Cavs[len(accRF_Cavs) - 1]
accRF_Cav.addRF_GapNode(accNode)
accSeq.addNode(accNode)
else:
if(node.getParam("length") != 0.):
msg = "The LinacLatticeFactory method getLinacAccLattice(names): there is a strange element!"
msg = msg + os.linesep
msg = msg + "name=" + node.getName()
msg = msg + os.linesep
msg = msg + "type="+node.getType()
msg = msg + os.linesep
msg = msg + "length(should be 0.)="+str(node.getParam("length"))
orbitFinalize(msg)
thinNodes.append(node)
#insert the drifts ======================start ===========================
			#-----now check the integrity: quads and rf_gaps should not overlap
#-----and create drifts
copyAccNodes = accSeq.getNodes()[:]
firstNode = copyAccNodes[0]
lastNode = copyAccNodes[len(copyAccNodes)-1]
driftNodes_before = []
driftNodes_after = []
#insert the drift before the first element if its half length less than its position
if(math.fabs(firstNode.getLength()/2.0 - firstNode.getParam("pos")) > self.zeroDistance):
if(firstNode.getLength()/2.0 > firstNode.getParam("pos")):
msg = "The LinacLatticeFactory method getLinacAccLattice(names): the first node is too long!"
msg = msg + os.linesep
msg = msg + "name=" + firstNode.getName()
msg = msg + os.linesep
msg = msg + "type=" + firstNode.getType()
msg = msg + os.linesep
msg = msg + "length=" + str(firstNode.getLength())
msg = msg + os.linesep
msg = msg + "pos=" + str(firstNode.getParam("pos"))
orbitFinalize(msg)
else:
driftNodes = []
driftLength = firstNode.getParam("pos") - firstNode.getLength()/2.0
nDrifts = int(driftLength/self.maxDriftLength) + 1
driftLength = driftLength/nDrifts
for idrift in range(nDrifts):
drift = Drift(accSeq.getName()+":"+firstNode.getName()+":"+str(idrift+1)+":drift")
drift.setLength(driftLength)
drift.setParam("pos",0.+drift.getLength()*(idrift+0.5))
driftNodes.append(drift)
driftNodes_before = driftNodes
#insert the drift after the last element if its half length less + position is less then the sequence length
if(math.fabs(lastNode.getLength()/2.0 + lastNode.getParam("pos") - accSeq.getLength()) > self.zeroDistance):
if(lastNode.getLength()/2.0 + lastNode.getParam("pos") > accSeq.getLength()):
msg = "The LinacLatticeFactory method getLinacAccLattice(names): the last node is too long!"
msg = msg + os.linesep
msg = msg + "name=" + lastNode.getName()
msg = msg + os.linesep
msg = msg + "type=" + lastNode.getType()
msg = msg + os.linesep
msg = msg + "length=" + str(lastNode.getLength())
msg = msg + os.linesep
msg = msg + "pos=" + str(lastNode.getParam("pos"))
msg = msg + os.linesep
msg = msg + "sequence name=" + accSeq.getName()
msg = msg + os.linesep
msg = msg + "sequence length=" + str(accSeq.getLength())
orbitFinalize(msg)
else:
driftNodes = []
driftLength = accSeq.getLength() - (lastNode.getParam("pos") + lastNode.getLength()/2.0)
nDrifts = int(driftLength/self.maxDriftLength) + 1
driftLength = driftLength/nDrifts
for idrift in range(nDrifts):
drift = Drift(accSeq.getName()+":"+lastNode.getName()+":"+str(idrift+1)+":drift")
drift.setLength(driftLength)
drift.setParam("pos",lastNode.getParam("pos")+lastNode.getLength()/2.0 + drift.getLength()*(idrift+0.5))
driftNodes.append(drift)
driftNodes_after = driftNodes
        #now move on and generate drifts between nodes (i, i+1) from copyAccNodes
        newAccNodes = driftNodes_before
        for node_ind in range(len(copyAccNodes)-1):
            accNode0 = copyAccNodes[node_ind]
            newAccNodes.append(accNode0)
            accNode1 = copyAccNodes[node_ind+1]
            ind_of_node = accSeq.getNodes().index(accNode1)
            dist = accNode1.getParam("pos") - accNode1.getLength()/2 - (accNode0.getParam("pos") + accNode0.getLength()/2)
            if(dist < 0.):
                msg = "The LinacLatticeFactory method getLinacAccLattice(names): two nodes are overlapping!"
                msg = msg + os.linesep
                msg = msg + "sequence name=" + accSeq.getName()
                msg = msg + os.linesep
                msg = msg + "node 0 name=" + accNode0.getName() + " pos=" + str(accNode0.getParam("pos")) + " L=" + str(accNode0.getLength())
                msg = msg + os.linesep
                msg = msg + "node 1 name=" + accNode1.getName() + " pos=" + str(accNode1.getParam("pos")) + " L=" + str(accNode1.getLength())
                msg = msg + os.linesep
                orbitFinalize(msg)
            elif(dist > self.zeroDistance):
                driftNodes = []
                nDrifts = int(dist/self.maxDriftLength) + 1
                driftLength = dist/nDrifts
                for idrift in range(nDrifts):
                    drift = Drift(accSeq.getName()+":"+accNode0.getName()+":"+str(idrift+1)+":drift")
                    drift.setLength(driftLength)
                    drift.setParam("pos",accNode0.getParam("pos")+accNode0.getLength()*0.5+drift.getLength()*(idrift+0.5))
                    driftNodes.append(drift)
                newAccNodes += driftNodes
            else:
                pass
        newAccNodes.append(lastNode)
        newAccNodes += driftNodes_after
        accSeq.setNodes(newAccNodes)
        #insert the drifts ======================stop ===========================
        #========================================================================
        #Now we will go over all zero-length nodes and attach them to the quads
        #or drifts. We cannot put anything inside an RF cavity.
        # zero-length elements insertion ========== start ======================
        # if a zero-length element is inside a quad it will be placed inside
        # that quad
        accQuads = []
        for accNode in accSeq.getNodes():
            if(isinstance(accNode,Quad)): accQuads.append(accNode)
        unusedThinNodes = []
        for node in thinNodes:
            position = node.getParam("pos")
            quad_found = False
            for quad in accQuads:
                pos = quad.getParam("pos")
                L = quad.getLength()
                nParts = quad.getnParts()
                if(abs(position - pos) < self.zeroDistance or (position > pos - L/2.0 and position < pos + L/2.0)):
                    accNode = None
                    if(node.getType() == "DCV" or node.getType() == "DCH"):
                        if(node.getType() == "DCV"): accNode = DCorrectorV(node.getName())
                        if(node.getType() == "DCH"): accNode = DCorrectorH(node.getName())
                        accNode.setParam("effLength",float(node.getParam("effLength")))
                    else:
                        accNode = MarkerLinacNode(node.getName())
                        accNode.updateParamsDict(node.getParamsDict())
                    accNode.setParam("pos",quad.getParam("pos"))
                    quad.addChildNode(accNode, place = AccNode.BODY, part_index = (nParts/2) - 1, place_in_part = AccNode.AFTER)
                    quad_found = True
                    break
            if(not quad_found): unusedThinNodes.append(node)
        #remove all assigned zero-length nodes from the list of thin nodes
        thinNodes = unusedThinNodes
        # sort the thin nodes by position; use a key function instead of the old
        # cmp-style comparator, which returned True/False (never -1) and
        # therefore did not sort correctly
        thinNodes.sort(key = lambda node: node.getParam("pos"))
        #----------------
        # chop a drift if the thin element is inside it, or insert the element
        # into the sequence at the end or the beginning of the drift
        usedThinNodes = []
        for node in thinNodes:
            #print "debug chop drift thin node=",node.getName()
            position = node.getParam("pos")
            driftNode = self.__getDriftThinNode(position,accSeq)
            if(driftNode != None):
                usedThinNodes.append(node)
                pos = driftNode.getParam("pos")
                L = driftNode.getLength()
                ind_insertion = accSeq.getNodes().index(driftNode)
                accNode = None
                if(node.getType() == "DCV" or node.getType() == "DCH"):
                    if(node.getType() == "DCV"): accNode = DCorrectorV(node.getName())
                    if(node.getType() == "DCH"): accNode = DCorrectorH(node.getName())
                    accNode.setParam("effLength",float(node.getParam("effLength")))
                else:
                    accNode = MarkerLinacNode(node.getName())
                    accNode.updateParamsDict(node.getParamsDict())
                accNode.setParam("pos",position)
                if(abs(position - (pos - L/2.0)) < self.zeroDistance):
                    #insert before the drift
                    accSeq.addNode(accNode, index = ind_insertion)
                elif(abs(position - (pos + L/2.0)) < self.zeroDistance):
                    #insert after the drift
                    accSeq.addNode(accNode, index = ind_insertion+1)
                else:
                    #replace this drift with two new ones
                    drift_node_name = driftNode.getName()
                    ind_name_pos = drift_node_name.find(":drift")
                    drift_node_name = drift_node_name[0:ind_name_pos]
                    drift0 = Drift(drift_node_name+":1:drift")
                    drift0.setLength(position - (pos - L/2.0))
                    drift0.setParam("pos",(pos - L/2.0) + drift0.getLength()/2.0)
                    drift1 = Drift(drift_node_name+":2:drift")
                    drift1.setLength((pos + L/2.0) - position)
                    drift1.setParam("pos",(pos + L/2.0) - drift1.getLength()/2.0)
                    accSeq.getNodes().remove(driftNode)
                    accSeq.addNode(drift0, index = ind_insertion)
                    accSeq.addNode(accNode, index = ind_insertion + 1)
                    accSeq.addNode(drift1, index = ind_insertion + 2)
        #remove all assigned zero-length nodes from the list of thin nodes
        for node in usedThinNodes:
            thinNodes.remove(node)
        if(len(thinNodes) != 0):
            print "==========WARNING!!!!==============="
            print "The sequence =",accSeq.getName()," has nodes that are not assigned to the lattice:"
            for node in thinNodes:
                print "unused node =",node.getName()," pos=",node.getParam("pos")
        # add all AccNodes to the linac lattice
        L_total = 0.
        for accNode in accSeq.getNodes():
            pos = accNode.getParam("pos")
            L = accNode.getLength()
            L_total += L
            linacAccLattice.addNode(accNode)
        linacAccLattice.addSequence(accSeq)
        # zero-length elements insertion ========== stop ======================
        for accRF_Cav in accRF_Cavs:
            linacAccLattice.addRF_Cavity(accRF_Cav)
        linacAccLattice.initialize()
        return linacAccLattice
    def __getDriftThinNode(self,position,accSeq):
        """
        Returns None or the drift AccNode in accSeq that covers the given
        position.
        """
        resNode = None
        ind_start = 0
        ind_stop = len(accSeq.getNodes()) - 1
        # binary search over the entrance positions of the nodes
        while(ind_stop - ind_start > 1):
            ind = (ind_stop + ind_start)//2
            accNode = accSeq.getNodes()[ind]
            pos = accNode.getParam("pos") - accNode.getLength()/2.0
            if(position > pos):
                ind_start = ind
            else:
                ind_stop = ind
        if(isinstance(accSeq.getNodes()[ind_start],Drift)):
            resNode = accSeq.getNodes()[ind_start]
        # check if the last node is the one
        if(resNode == None):
            accNode = accSeq.getNodes()[len(accSeq.getNodes())-1]
            if(isinstance(accNode,Drift)):
                pos = accNode.getParam("pos") - accNode.getLength()/2.0
                if(pos <= position):
                    resNode = accNode
        return resNode
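The same gap-splitting rule appears three times in the factory code above (before the first node, between nodes, and after the last node): the gap is divided into `int(gap/maxDriftLength) + 1` equal drifts, and each drift center sits at `length * (i + 0.5)` from the start of the gap. A standalone sketch of that rule in plain Python (`split_gap` is illustrative, not part of PyORBIT):

```python
def split_gap(gap_length, max_drift_length):
    """Split a gap into the fewest equal drifts no longer than max_drift_length.

    Returns a list of (length, center_offset) pairs, mirroring how the
    factory numbers its drifts and places each center at
    length * (i + 0.5) from the start of the gap.
    """
    n_drifts = int(gap_length / max_drift_length) + 1
    length = gap_length / n_drifts
    return [(length, length * (i + 0.5)) for i in range(n_drifts)]


parts = split_gap(2.5, 1.0)          # a 2.5 m gap with a 1.0 m maximum drift
print(len(parts))                    # 3
print(sum(p[0] for p in parts))      # the pieces add back up to the full gap
```

Note the `+ 1` means a gap that is an exact multiple of `maxDriftLength` still gets one extra (shorter) drift, which matches the factory's behavior.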
# ==== File: silicoin/types/name_puzzle_condition.py | Repo: zcomputerwiz/silicoin-light-wallet | License: Apache-2.0 ====
from dataclasses import dataclass
from typing import Dict, List, Tuple
from silicoin.types.blockchain_format.sized_bytes import bytes32
from silicoin.types.condition_with_args import ConditionWithArgs
from silicoin.types.condition_opcodes import ConditionOpcode
from silicoin.util.streamable import Streamable, streamable
@dataclass(frozen=True)
@streamable
class NPC(Streamable):
    coin_name: bytes32
    puzzle_hash: bytes32
    conditions: List[Tuple[ConditionOpcode, List[ConditionWithArgs]]]

    @property
    def condition_dict(self):
        d: Dict[ConditionOpcode, List[ConditionWithArgs]] = {}
        for opcode, l in self.conditions:
            assert opcode not in d
            d[opcode] = l
        return d
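The `condition_dict` property groups `(opcode, args)` pairs into a dictionary and asserts that each opcode appears at most once. The same logic with plain strings standing in for the silicoin types (a sketch, not the library's API):

```python
def condition_dict(conditions):
    """Group (opcode, args) pairs into a dict, requiring unique opcodes."""
    d = {}
    for opcode, args in conditions:
        assert opcode not in d  # each opcode may appear at most once
        d[opcode] = args
    return d


pairs = [("AGG_SIG_ME", ["sig1"]), ("CREATE_COIN", ["coin1", "coin2"])]
print(condition_dict(pairs)["CREATE_COIN"])  # ['coin1', 'coin2']
```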
# ==== File: spirit/user/admin/forms.py | Repo: StepanBakshayev/Spirit | License: MIT ====
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django import forms
from django.contrib.auth import get_user_model
from ..models import UserProfile
User = get_user_model()
class UserForm(forms.ModelForm):

    class Meta:
        model = User
        fields = ("username", "email", "is_active")


class UserProfileForm(forms.ModelForm):

    class Meta:
        model = UserProfile
        fields = ("location", "timezone", "is_verified", "is_administrator", "is_moderator")
# ==== File: mypy/strconv.py | Repo: silky/mypy | License: PSF-2.0 ====
"""Conversion of parse tree nodes to strings."""
import re
import os

import typing

from mypy.util import dump_tagged, short_type
import mypy.nodes
from mypy.visitor import NodeVisitor


class StrConv(NodeVisitor[str]):
    """Visitor for converting a Node to a human-readable string.

    For example, an MypyFile node from program '1' is converted into
    something like this:

      MypyFile:1(
        fnam
        ExpressionStmt:1(
          IntExpr(1)))
    """

    def dump(self, nodes, obj):
        """Convert a list of items to a multiline pretty-printed string.

        The tag is produced from the type name of obj and its line
        number. See mypy.util.dump_tagged for a description of the nodes
        argument.
        """
        return dump_tagged(nodes, short_type(obj) + ':' + str(obj.line))

    def func_helper(self, o):
        """Return a list in a format suitable for dump() that represents the
        arguments and the body of a function. The caller can then decorate the
        array with information specific to methods, global functions or
        anonymous functions.
        """
        args = []
        init = []
        extra = []
        for i, kind in enumerate(o.arg_kinds):
            if kind == mypy.nodes.ARG_POS:
                args.append(o.args[i])
            elif kind in (mypy.nodes.ARG_OPT, mypy.nodes.ARG_NAMED):
                args.append(o.args[i])
                init.append(o.init[i])
            elif kind == mypy.nodes.ARG_STAR:
                extra.append(('VarArg', [o.args[i]]))
            elif kind == mypy.nodes.ARG_STAR2:
                extra.append(('DictVarArg', [o.args[i]]))
        a = []
        if args:
            a.append(('Args', args))
        if o.type:
            a.append(o.type)
        if init:
            a.append(('Init', init))
        if o.is_generator:
            a.append('Generator')
        a.extend(extra)
        a.append(o.body)
        return a

    # Top-level structures

    def visit_mypy_file(self, o):
        # Skip implicit definitions.
        defs = o.defs
        while (defs and isinstance(defs[0], mypy.nodes.VarDef) and
               not defs[0].repr):
            defs = defs[1:]
        a = [defs]
        if o.is_bom:
            a.insert(0, 'BOM')
        # Omit path to special file with name "main". This is used to simplify
        # test case descriptions; the file "main" is used by default in many
        # test cases.
        if o.path is not None and o.path != 'main':
            # Insert path. Normalize directory separators to / to unify test
            # case output on all platforms.
            a.insert(0, o.path.replace(os.sep, '/'))
        return self.dump(a, o)

    def visit_import(self, o):
        a = []
        for id, as_id in o.ids:
            a.append('{} : {}'.format(id, as_id))
        return 'Import:{}({})'.format(o.line, ', '.join(a))

    def visit_import_from(self, o):
        a = []
        for name, as_name in o.names:
            a.append('{} : {}'.format(name, as_name))
        return 'ImportFrom:{}({}, [{}])'.format(o.line, o.id, ', '.join(a))

    def visit_import_all(self, o):
        return 'ImportAll:{}({})'.format(o.line, o.id)

    # Definitions

    def visit_func_def(self, o):
        a = self.func_helper(o)
        a.insert(0, o.name())
        if mypy.nodes.ARG_NAMED in o.arg_kinds:
            a.insert(1, 'MaxPos({})'.format(o.max_pos))
        if o.is_abstract:
            a.insert(-1, 'Abstract')
        if o.is_static:
            a.insert(-1, 'Static')
        if o.is_property:
            a.insert(-1, 'Property')
        return self.dump(a, o)

    def visit_overloaded_func_def(self, o):
        a = o.items[:]
        if o.type:
            a.insert(0, o.type)
        return self.dump(a, o)

    def visit_type_def(self, o):
        a = [o.name, o.defs.body]
        # Display base types unless they are implicitly just builtins.object
        # (in this case there is no representation).
        if len(o.base_types) > 1 or (len(o.base_types) == 1
                                     and o.base_types[0].repr):
            a.insert(1, ('BaseType', o.base_types))
        if o.type_vars:
            a.insert(1, ('TypeVars', o.type_vars))
        if o.metaclass:
            a.insert(1, 'Metaclass({})'.format(o.metaclass))
        return self.dump(a, o)

    def visit_var_def(self, o):
        a = []
        for n in o.items:
            a.append('Var({})'.format(n.name()))
            a.append('Type({})'.format(n.type))
        if o.init:
            a.append(o.init)
        return self.dump(a, o)

    def visit_var(self, o):
        l = ''
        # Add :nil line number tag if no line number is specified to remain
        # compatible with old test case descriptions that assume this.
        if o.line < 0:
            l = ':nil'
        return 'Var' + l + '(' + o.name() + ')'

    def visit_global_decl(self, o):
        return self.dump([o.names], o)

    def visit_decorator(self, o):
        return self.dump([o.var, o.decorators, o.func], o)

    def visit_annotation(self, o):
        return 'Type:{}({})'.format(o.line, o.type)

    # Statements

    def visit_block(self, o):
        return self.dump(o.body, o)

    def visit_expression_stmt(self, o):
        return self.dump([o.expr], o)

    def visit_assignment_stmt(self, o):
        if len(o.lvalues) > 1:
            a = [('Lvalues', o.lvalues)]
        else:
            a = [o.lvalues[0]]
        a.append(o.rvalue)
        if o.type:
            a.append(o.type)
        return self.dump(a, o)

    def visit_operator_assignment_stmt(self, o):
        return self.dump([o.op, o.lvalue, o.rvalue], o)

    def visit_while_stmt(self, o):
        a = [o.expr, o.body]
        if o.else_body:
            a.append(('Else', o.else_body.body))
        return self.dump(a, o)

    def visit_for_stmt(self, o):
        a = [o.index]
        if o.types != [None] * len(o.types):
            a += o.types
        a.extend([o.expr, o.body])
        if o.else_body:
            a.append(('Else', o.else_body.body))
        return self.dump(a, o)

    def visit_return_stmt(self, o):
        return self.dump([o.expr], o)

    def visit_if_stmt(self, o):
        a = []
        for i in range(len(o.expr)):
            a.append(('If', [o.expr[i]]))
            a.append(('Then', o.body[i].body))
        if not o.else_body:
            return self.dump(a, o)
        else:
            return self.dump([a, ('Else', o.else_body.body)], o)

    def visit_break_stmt(self, o):
        return self.dump([], o)

    def visit_continue_stmt(self, o):
        return self.dump([], o)

    def visit_pass_stmt(self, o):
        return self.dump([], o)

    def visit_raise_stmt(self, o):
        return self.dump([o.expr, o.from_expr], o)

    def visit_assert_stmt(self, o):
        return self.dump([o.expr], o)

    def visit_yield_stmt(self, o):
        return self.dump([o.expr], o)

    def visit_del_stmt(self, o):
        return self.dump([o.expr], o)

    def visit_try_stmt(self, o):
        a = [o.body]
        for i in range(len(o.vars)):
            a.append(o.types[i])
            if o.vars[i]:
                a.append(o.vars[i])
            a.append(o.handlers[i])
        if o.else_body:
            a.append(('Else', o.else_body.body))
        if o.finally_body:
            a.append(('Finally', o.finally_body.body))
        return self.dump(a, o)

    def visit_with_stmt(self, o):
        a = []
        for i in range(len(o.expr)):
            a.append(('Expr', [o.expr[i]]))
            if o.name[i]:
                a.append(('Name', [o.name[i]]))
        return self.dump(a + [o.body], o)

    def visit_print_stmt(self, o):
        a = o.args[:]
        if o.newline:
            a.append('Newline')
        return self.dump(a, o)

    # Expressions

    # Simple expressions

    def visit_int_expr(self, o):
        return 'IntExpr({})'.format(o.value)

    def visit_str_expr(self, o):
        return 'StrExpr({})'.format(self.str_repr(o.value))

    def visit_bytes_expr(self, o):
        return 'BytesExpr({})'.format(self.str_repr(o.value))

    def visit_unicode_expr(self, o):
        return 'UnicodeExpr({})'.format(self.str_repr(o.value))

    def str_repr(self, s):
        s = re.sub(r'\\u[0-9a-fA-F]{4}', lambda m: '\\' + m.group(0), s)
        return re.sub('[^\\x20-\\x7e]',
                      lambda m: r'\u%.4x' % ord(m.group(0)), s)

    def visit_float_expr(self, o):
        return 'FloatExpr({})'.format(o.value)

    def visit_paren_expr(self, o):
        return self.dump([o.expr], o)

    def visit_name_expr(self, o):
        return (short_type(o) + '(' + self.pretty_name(o.name, o.kind,
                                                       o.fullname, o.is_def)
                + ')')

    def pretty_name(self, name, kind, fullname, is_def):
        n = name
        if is_def:
            n += '*'
        if kind == mypy.nodes.GDEF or (fullname != name and
                                       fullname is not None):
            # Append fully qualified name for global references.
            n += ' [{}]'.format(fullname)
        elif kind == mypy.nodes.LDEF:
            # Add tag to signify a local reference.
            n += ' [l]'
        elif kind == mypy.nodes.MDEF:
            # Add tag to signify a member reference.
            n += ' [m]'
        return n

    def visit_member_expr(self, o):
        return self.dump([o.expr, self.pretty_name(o.name, o.kind, o.fullname,
                                                   o.is_def)], o)

    def visit_call_expr(self, o):
        if o.analyzed:
            return o.analyzed.accept(self)
        args = []
        extra = []
        for i, kind in enumerate(o.arg_kinds):
            if kind in [mypy.nodes.ARG_POS, mypy.nodes.ARG_STAR]:
                args.append(o.args[i])
                if kind == mypy.nodes.ARG_STAR:
                    extra.append('VarArg')
            elif kind == mypy.nodes.ARG_NAMED:
                extra.append(('KwArgs', [o.arg_names[i], o.args[i]]))
            elif kind == mypy.nodes.ARG_STAR2:
                extra.append(('DictVarArg', [o.args[i]]))
            else:
                raise RuntimeError('unknown kind %d' % kind)
        return self.dump([o.callee, ('Args', args)] + extra, o)

    def visit_op_expr(self, o):
        return self.dump([o.op, o.left, o.right], o)

    def visit_cast_expr(self, o):
        return self.dump([o.expr, o.type], o)

    def visit_unary_expr(self, o):
        return self.dump([o.op, o.expr], o)

    def visit_list_expr(self, o):
        return self.dump(o.items, o)

    def visit_dict_expr(self, o):
        return self.dump([[k, v] for k, v in o.items], o)

    def visit_set_expr(self, o):
        return self.dump(o.items, o)

    def visit_tuple_expr(self, o):
        return self.dump(o.items, o)

    def visit_index_expr(self, o):
        if o.analyzed:
            return o.analyzed.accept(self)
        return self.dump([o.base, o.index], o)

    def visit_super_expr(self, o):
        return self.dump([o.name], o)

    def visit_undefined_expr(self, o):
        return 'UndefinedExpr:{}({})'.format(o.line, o.type)

    def visit_type_application(self, o):
        return self.dump([o.expr, ('Types', o.types)], o)

    def visit_type_var_expr(self, o):
        if o.values:
            return self.dump([('Values', o.values)], o)
        else:
            return 'TypeVarExpr:{}()'.format(o.line)

    def visit_func_expr(self, o):
        a = self.func_helper(o)
        return self.dump(a, o)

    def visit_generator_expr(self, o):
        # FIX types
        return self.dump([o.left_expr, o.index, o.right_expr, o.condition], o)

    def visit_list_comprehension(self, o):
        return self.dump([o.generator], o)

    def visit_conditional_expr(self, o):
        return self.dump([('Condition', [o.cond]), o.if_expr, o.else_expr], o)

    def visit_slice_expr(self, o):
        a = [o.begin_index, o.end_index, o.stride]
        if not a[0]:
            a[0] = '<empty>'
        if not a[1]:
            a[1] = '<empty>'
        return self.dump(a, o)

    def visit_coerce_expr(self, o):
        return self.dump([o.expr, ('Types', [o.target_type, o.source_type])],
                         o)

    def visit_type_expr(self, o):
        return self.dump([str(o.type)], o)

    def visit_filter_node(self, o):
        # These are for convenience. These node types are not defined in the
        # parser module.
        pass
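`StrConv` delegates the actual layout to `mypy.util.dump_tagged`, which indents children under a `Type:line(` header. A minimal stand-in (`dump_tree` is a hypothetical, much-simplified version of mypy's real helper) shows the shape of the output that the docstring example describes:

```python
def dump_tree(nodes, tag, indent="  ", level=0):
    """Pretty-print a list of nodes under a tag, one child per line.

    Handles plain strings and (tag, children) tuples; mypy's real
    dump_tagged supports more node kinds and a compact closing paren.
    """
    lines = [indent * level + tag + "("]
    for node in nodes:
        if isinstance(node, tuple):
            lines.extend(dump_tree(node[1], node[0], indent, level + 1).splitlines())
        else:
            lines.append(indent * (level + 1) + str(node))
    lines.append(indent * level + ")")
    return "\n".join(lines)


result = dump_tree(["fnam", ("ExpressionStmt:1", ["IntExpr(1)"])], "MypyFile:1")
print(result)
```

This produces a nested block starting with `MypyFile:1(` and indenting `IntExpr(1)` two levels deep, mirroring the example in the `StrConv` docstring.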
# ==== File: irekua_rest_api/views/terms/terms.py | Repo: IslasGECI/irekua-rest-api | License: BSD-4-Clause ====
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from rest_framework.viewsets import GenericViewSet
from rest_framework import mixins
from irekua_database import models
from irekua_rest_api import serializers
from irekua_rest_api import utils
from irekua_rest_api.permissions import IsAdmin
from irekua_rest_api.permissions import IsDeveloper
from irekua_rest_api.permissions import ReadOnly
class TermViewSet(mixins.UpdateModelMixin,
                  mixins.RetrieveModelMixin,
                  mixins.DestroyModelMixin,
                  utils.CustomViewSetMixin,
                  GenericViewSet):
    queryset = models.Term.objects.all()  # pylint: disable=E1101

    serializer_mapping = utils.SerializerMapping.from_module(
        serializers.terms.terms)

    permission_mapping = utils.PermissionMapping(
        default=IsDeveloper | IsAdmin | ReadOnly)
# ==== File: src/sample_mnist.py | Repo: entaku/kusarigama | License: MIT ====
__author__ = 'team-entaku'
import sys
import cv2
import numpy as np
import math
import matplotlib.pyplot as plt
import scipy.cluster.vq
import scipy.spatial.distance as distance
from chainer import computational_graph as c
from matplotlib.pyplot import show
from scipy.cluster.hierarchy import linkage, dendrogram, fcluster
import random
import argparse
import pickle
import six
import chainer
from chainer import computational_graph as c
from chainer import cuda
import chainer.functions as F
from chainer import optimizers
if __name__ == "__main__":
    file_name = sys.argv[1]
    orig = cv2.imread(file_name, 1)
    gray = cv2.cvtColor(orig, cv2.COLOR_BGR2GRAY)
    gray_float = gray.astype('float32')
    gray28 = 1 - (cv2.resize(gray_float, (28, 28)) / 255.)
    gray28.resize((1, 784))
    h0 = chainer.Variable(gray28)
    model = pickle.load(open('trained-mnist-model', 'rb'))
    h1 = model.l1(h0)
    h2 = model.l2(h1)
    y = model.l3(h2)
    print y.data
    print F.softmax(y).data
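The preprocessing in the script resizes the grayscale image to 28x28, scales pixel values to [0, 1], and inverts them (MNIST digits are white on black, while scans are usually black on white). That scale-and-invert step can be shown without OpenCV or chainer (`preprocess` is an illustrative stand-in; the real script also resizes with cv2):

```python
def preprocess(gray, size=28):
    """Scale a size x size grayscale image to [0, 1], invert, and flatten."""
    assert len(gray) == size and all(len(row) == size for row in gray)
    return [1.0 - px / 255.0 for row in gray for px in row]


img = [[255] * 28 for _ in range(28)]   # an all-white 28x28 image
vec = preprocess(img)
print(len(vec))   # 784
print(vec[0])     # 0.0 (white maps to 0 after inversion)
```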
# ==== File: statsmodels/stats/api.py | Repo: nikhase/statsmodels | License: BSD-3-Clause ====
# pylint: disable=W0611
from . import diagnostic
from .diagnostic import (
acorr_ljungbox, acorr_breusch_godfrey,
CompareCox, compare_cox, CompareJ, compare_j,
HetGoldfeldQuandt, het_goldfeldquandt,
het_breuschpagan, het_white, het_arch,
linear_harvey_collier, linear_rainbow, linear_lm,
breaks_cusumolsresid, breaks_hansen, recursive_olsresiduals,
unitroot_adf,
normal_ad, lilliefors,
# deprecated because of misspelling:
lillifors, het_breushpagan, acorr_breush_godfrey
)
from . import multicomp
from .multitest import (multipletests, fdrcorrection, fdrcorrection_twostage,
local_fdr, NullDistribution, RegressionFDR)
from .multicomp import tukeyhsd
from . import gof
from .gof import (powerdiscrepancy, gof_chisquare_discrete,
chisquare_effectsize)
from . import stattools
from .stattools import durbin_watson, omni_normtest, jarque_bera
from . import sandwich_covariance
from .sandwich_covariance import (
cov_cluster, cov_cluster_2groups, cov_nw_panel,
cov_hac, cov_white_simple,
cov_hc0, cov_hc1, cov_hc2, cov_hc3,
se_cov
)
from .weightstats import (DescrStatsW, CompareMeans, ttest_ind, ttost_ind,
ttost_paired, ztest, ztost, zconfint)
from .proportion import (binom_test_reject_interval, binom_test,
binom_tost, binom_tost_reject_interval,
power_binom_tost, power_ztost_prop,
proportion_confint, proportion_effectsize,
proportions_chisquare, proportions_chisquare_allpairs,
proportions_chisquare_pairscontrol, proportions_ztest,
proportions_ztost, multinomial_proportions_confint)
from .power import (TTestPower, TTestIndPower, GofChisquarePower,
NormalIndPower, FTestAnovaPower, FTestPower,
tt_solve_power, tt_ind_solve_power, zt_ind_solve_power)
from .descriptivestats import Describe
from .anova import anova_lm
from . import moment_helpers
from .correlation_tools import (corr_clipped, corr_nearest,
corr_nearest_factor, corr_thresholded, cov_nearest,
cov_nearest_factor_homog, FactoredPSDMatrix)
from statsmodels.sandbox.stats.runs import (Runs, runstest_1samp, runstest_2samp)
from statsmodels.stats.contingency_tables import (mcnemar, cochrans_q,
SquareTable,
Table2x2,
Table,
StratifiedTable)
from .mediation import Mediation
# ==== File: flight/config.py | Repo: MissouriMRR/SUAS-2022 | License: MIT ====
"""File to hold important constant values and configure drone upon startup"""
from mavsdk import System
MAX_ALT: int = 750 # Feet
TAKEOFF_ALT: int = 100 # Feet
WAIT: float = 2.0 # Seconds
async def config_params(drone: System) -> None:
    """
    Sets certain parameters within the drone for flight

    Parameters
    ----------
    drone: System
        MAVSDK object for manual drone control & manipulation
    """
    await drone.param.set_param_float("MIS_TAKEOFF_ALT", TAKEOFF_ALT)
    # Set data link loss failsafe mode HOLD
    await drone.param.set_param_int("NAV_DLL_ACT", 1)
    # Set offboard loss failsafe mode HOLD
    await drone.param.set_param_int("COM_OBL_ACT", 1)
    # Set offboard loss failsafe mode when RC is available HOLD
    await drone.param.set_param_int("COM_OBL_RC_ACT", 5)
    # Set RC loss failsafe mode HOLD
    await drone.param.set_param_int("NAV_RCL_ACT", 1)
    await drone.param.set_param_float("LNDMC_XY_VEL_MAX", 0.5)
    await drone.param.set_param_float("LNDMC_ALT_MAX", MAX_ALT)
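Because mavsdk is not importable here, a small stub can stand in for `System` to show how `config_params`-style code drives the param plugin: each `await drone.param.set_param_*` call records one autopilot parameter. `StubDrone` and `StubParam` are hypothetical test doubles, not MAVSDK classes:

```python
import asyncio


class StubParam:
    """Records parameters instead of sending them to an autopilot."""
    def __init__(self):
        self.values = {}

    async def set_param_int(self, name, value):
        self.values[name] = value

    async def set_param_float(self, name, value):
        self.values[name] = value


class StubDrone:
    def __init__(self):
        self.param = StubParam()


async def config_params(drone):
    # same call pattern as the real function above, with literal values
    await drone.param.set_param_float("MIS_TAKEOFF_ALT", 100)
    await drone.param.set_param_int("NAV_RCL_ACT", 1)  # RC loss -> HOLD


drone = StubDrone()
asyncio.run(config_params(drone))
print(drone.param.values["NAV_RCL_ACT"])  # 1
```

This kind of stub is also a convenient way to unit-test flight configuration code without hardware or a simulator.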
# ==== File: dict/sorting.py | Repo: janbodnar/Python-Course | License: BSD-2-Clause ====
#!/usr/bin/python
# sorting.py
items = { "coins": 7, "pens": 3, "cups": 2,
          "bags": 1, "bottles": 4, "books": 5 }

for key in sorted(items.keys()):
    print ("%s: %s" % (key, items[key]))

print ("####### #######")

for key in sorted(items.keys(), reverse=True):
    print ("%s: %s" % (key, items[key]))
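The script sorts by key, ascending and descending. Sorting by value uses the same `sorted()` builtin with a `key` function (an extra example in the same tutorial style, not part of the original script):

```python
items = {"coins": 7, "pens": 3, "cups": 2,
         "bags": 1, "bottles": 4, "books": 5}

# sort the (key, value) pairs by value instead of by key
for key, count in sorted(items.items(), key=lambda kv: kv[1]):
    print("%s: %s" % (key, count))
```

The first line printed is `bags: 1` and the last is `coins: 7`.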
# ==== File: wrangalytics/config/api.py | Repo: acastrounis/data-wrangler | License: MIT ====
def debug(x):
    print(x)


### Notebook Magics
# %matplotlib inline
def jupyterConfig(pd, max_columns=500, max_rows=500, float_format='{:,.6f}', max_info_rows=1000, max_categories=500):
    # use the passed-in values; the original hard-coded the defaults and
    # silently ignored the arguments
    pd.options.display.max_columns = max_columns
    pd.options.display.max_rows = max_rows
    pd.options.display.float_format = float_format.format
    pd.options.display.max_info_rows = max_info_rows
    pd.options.display.max_categories = max_categories
    # from IPython.core.interactiveshell import InteractiveShell
    # InteractiveShell.ast_node_interactivity = "all"


def createDataFilePath(data_dir='', data_file_name=''):
    return data_dir + data_file_name
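`createDataFilePath` concatenates strings, which drops the separator whenever `data_dir` lacks a trailing slash (`'data' + 'file.csv'` gives `'datafile.csv'`); `os.path.join` is the safer idiom. An alternative sketch, not the project's API:

```python
import os.path


def create_data_file_path(data_dir='', data_file_name=''):
    """Join a directory and a file name with the platform's separator."""
    return os.path.join(data_dir, data_file_name)


print(create_data_file_path('data', 'file.csv'))  # e.g. data/file.csv on POSIX
```

`os.path.join` also handles an empty `data_dir` cleanly, returning just the file name.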
7c9382218072cea7ae62cceb5f5bdb2460abf759 | 477 | py | Python | app/screens/about.py | mobile-insight/mobileInsight-mobile | ec5d64bd400d9409cd7fe5a351cbcbbd3ed2e97a | [
"Apache-2.0"
] | 63 | 2017-06-30T15:04:15.000Z | 2021-11-15T09:58:45.000Z | app/screens/about.py | mobile-insight/mobileInsight-mobile | ec5d64bd400d9409cd7fe5a351cbcbbd3ed2e97a | [
"Apache-2.0"
] | 28 | 2017-07-24T15:51:50.000Z | 2022-03-13T21:13:09.000Z | app/screens/about.py | mobile-insight/mobileInsight-mobile | ec5d64bd400d9409cd7fe5a351cbcbbd3ed2e97a | [
"Apache-2.0"
] | 45 | 2017-07-02T13:16:37.000Z | 2022-03-22T07:26:13.000Z | import main_utils
from kivy.lang import Builder
from . import MobileInsightScreenBase
Builder.load_file('screens/about.kv')
class AboutScreen(MobileInsightScreenBase):
about_text = 'MobileInsight ' + main_utils.get_cur_version() + '\n'
with open('screens/about.txt', 'r') as content_file:
content = content_file.read()
about_text = about_text + content + '\n'
about_text = about_text + 'copyright (c) 2015 - 2020 by MobileInsight Team'
| 29.8125 | 79 | 0.716981 | 60 | 477 | 5.5 | 0.583333 | 0.136364 | 0.084848 | 0.109091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020408 | 0.178197 | 477 | 15 | 80 | 31.8 | 0.821429 | 0 | 0 | 0 | 0 | 0 | 0.207547 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.090909 | 0.272727 | 0 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 2 |
7c93e9abea4c593e3eebab1fe5fab7c1887e41b5 | 939 | py | Python | lib/aquilon/worker/commands/cat_eon_id.py | ned21/aquilon | 6562ea0f224cda33b72a6f7664f48d65f96bd41a | [
"Apache-2.0"
] | 7 | 2015-07-31T05:57:30.000Z | 2021-09-07T15:18:56.000Z | lib/aquilon/worker/commands/cat_eon_id.py | ned21/aquilon | 6562ea0f224cda33b72a6f7664f48d65f96bd41a | [
"Apache-2.0"
] | 115 | 2015-03-03T13:11:46.000Z | 2021-09-20T12:42:24.000Z | lib/aquilon/worker/commands/cat_eon_id.py | ned21/aquilon | 6562ea0f224cda33b72a6f7664f48d65f96bd41a | [
"Apache-2.0"
] | 13 | 2015-03-03T11:17:59.000Z | 2021-09-09T09:16:41.000Z | # -*- cpy-indent-level: 4; indent-tabs-mode: nil -*-
# ex: set expandtab softtabstop=4 shiftwidth=4:
#
# Copyright (C) 2011,2013,2015,2018 Contributor
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Contains the logic for `aq cat --eon_id`."""
from aquilon.worker.broker import BrokerCommand # noqa
from aquilon.worker.commands.cat_grn import CommandCatGrn
class CommandCatEonId(CommandCatGrn):
required_parameters = ["eon_id"]
| 36.115385 | 74 | 0.752929 | 138 | 939 | 5.094203 | 0.702899 | 0.085349 | 0.036984 | 0.045519 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028967 | 0.15442 | 939 | 25 | 75 | 37.56 | 0.856423 | 0.759318 | 0 | 0 | 0 | 0 | 0.029412 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
7c97fb14320e96504234d197d99aeb8279d8b99f | 281 | py | Python | code/ch14/chord.py | raionik/Py4Bio | f95ba16ef295f4889149123c5f76419d38077bc5 | [
"MIT"
] | 66 | 2017-01-11T14:37:31.000Z | 2022-03-20T23:23:45.000Z | code/ch14/chord.py | raionik/Py4Bio | f95ba16ef295f4889149123c5f76419d38077bc5 | [
"MIT"
] | 8 | 2019-12-14T23:44:27.000Z | 2021-01-05T02:04:10.000Z | code/ch14/chord.py | raionik/Py4Bio | f95ba16ef295f4889149123c5f76419d38077bc5 | [
"MIT"
] | 32 | 2017-08-21T11:57:55.000Z | 2021-07-22T00:42:21.000Z | from bokeh.charts import output_file, Chord
from bokeh.io import show
import pandas as pd
data = pd.read_csv('../../samples/test3.csv')
chord_from_df = Chord(data, source='name_x', target='name_y',
value='value')
output_file('chord.html')
show(chord_from_df)
| 31.222222 | 61 | 0.697509 | 44 | 281 | 4.25 | 0.568182 | 0.144385 | 0.160428 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004255 | 0.163701 | 281 | 8 | 62 | 35.125 | 0.791489 | 0 | 0 | 0 | 0 | 0 | 0.177936 | 0.081851 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.375 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
7c9b78cd1d16ec845ba9da5b1757b47bae0f978e | 7,424 | py | Python | src/robotide/lib/robot/utils/charwidth.py | veryl-technologies/t24-tests-ide | 16cd803895916a785c0e1fec3f71f9388c21edc9 | [
"ECL-2.0",
"Apache-2.0"
] | 8 | 2015-09-10T07:45:58.000Z | 2020-04-13T06:25:06.000Z | src/robotide/lib/robot/utils/charwidth.py | veryl-technologies/t24-tests-ide | 16cd803895916a785c0e1fec3f71f9388c21edc9 | [
"ECL-2.0",
"Apache-2.0"
] | 1 | 2020-08-03T10:01:06.000Z | 2020-08-03T10:01:06.000Z | src/robotide/lib/robot/utils/charwidth.py | veryl-technologies/t24-tests-ide | 16cd803895916a785c0e1fec3f71f9388c21edc9 | [
"ECL-2.0",
"Apache-2.0"
] | 10 | 2015-10-06T13:29:50.000Z | 2021-05-31T01:04:01.000Z | # Copyright 2008-2012 Nokia Siemens Networks Oyj
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""A module to handle different character widths on the console.
Some East Asian characters have width of two on console, and combining
characters themselves take no extra space.
See issue 604 [1] for more details about East Asian characters. The issue also
contains `generate_wild_chars.py` script that was originally used to create
`_EAST_ASIAN_WILD_CHARS` mapping. An updated version of the script is attached
to issue 1096. Big thanks for xieyanbo for the script and the original patch.
Note that Python's `unicodedata` module is not used here because importing
it takes several seconds on Jython.
[1] http://code.google.com/p/robotframework/issues/detail?id=604
[2] http://code.google.com/p/robotframework/issues/detail?id=1096
"""
def get_char_width(char):
char = ord(char)
if _char_in_map(char, _COMBINING_CHARS):
return 0
if _char_in_map(char, _EAST_ASIAN_WILD_CHARS):
return 2
return 1
def _char_in_map(char, map):
for begin, end in map:
if char < begin:
break
if begin <= char <= end:
return True
return False
_COMBINING_CHARS = [(768, 879)]
_EAST_ASIAN_WILD_CHARS = [
(888, 889), (895, 899), (907, 907), (909, 909), (930, 930),
(1316, 1328), (1367, 1368), (1376, 1376), (1416, 1416),
(1419, 1424), (1480, 1487), (1515, 1519), (1525, 1535),
(1540, 1541), (1564, 1565), (1568, 1568), (1631, 1631),
(1806, 1806), (1867, 1868), (1970, 1983), (2043, 2304),
(2362, 2363), (2382, 2383), (2389, 2391), (2419, 2426),
(2432, 2432), (2436, 2436), (2445, 2446), (2449, 2450),
(2473, 2473), (2481, 2481), (2483, 2485), (2490, 2491),
(2501, 2502), (2505, 2506), (2511, 2518), (2520, 2523),
(2526, 2526), (2532, 2533), (2555, 2560), (2564, 2564),
(2571, 2574), (2577, 2578), (2601, 2601), (2609, 2609),
(2612, 2612), (2615, 2615), (2618, 2619), (2621, 2621),
(2627, 2630), (2633, 2634), (2638, 2640), (2642, 2648),
(2653, 2653), (2655, 2661), (2678, 2688), (2692, 2692),
(2702, 2702), (2706, 2706), (2729, 2729), (2737, 2737),
(2740, 2740), (2746, 2747), (2758, 2758), (2762, 2762),
(2766, 2767), (2769, 2783), (2788, 2789), (2800, 2800),
(2802, 2816), (2820, 2820), (2829, 2830), (2833, 2834),
(2857, 2857), (2865, 2865), (2868, 2868), (2874, 2875),
(2885, 2886), (2889, 2890), (2894, 2901), (2904, 2907),
(2910, 2910), (2916, 2917), (2930, 2945), (2948, 2948),
(2955, 2957), (2961, 2961), (2966, 2968), (2971, 2971),
(2973, 2973), (2976, 2978), (2981, 2983), (2987, 2989),
(3002, 3005), (3011, 3013), (3017, 3017), (3022, 3023),
(3025, 3030), (3032, 3045), (3067, 3072), (3076, 3076),
(3085, 3085), (3089, 3089), (3113, 3113), (3124, 3124),
(3130, 3132), (3141, 3141), (3145, 3145), (3150, 3156),
(3159, 3159), (3162, 3167), (3172, 3173), (3184, 3191),
(3200, 3201), (3204, 3204), (3213, 3213), (3217, 3217),
(3241, 3241), (3252, 3252), (3258, 3259), (3269, 3269),
(3273, 3273), (3278, 3284), (3287, 3293), (3295, 3295),
(3300, 3301), (3312, 3312), (3315, 3329), (3332, 3332),
(3341, 3341), (3345, 3345), (3369, 3369), (3386, 3388),
(3397, 3397), (3401, 3401), (3406, 3414), (3416, 3423),
(3428, 3429), (3446, 3448), (3456, 3457), (3460, 3460),
(3479, 3481), (3506, 3506), (3516, 3516), (3518, 3519),
(3527, 3529), (3531, 3534), (3541, 3541), (3543, 3543),
(3552, 3569), (3573, 3584), (3643, 3646), (3676, 3712),
(3715, 3715), (3717, 3718), (3721, 3721), (3723, 3724),
(3726, 3731), (3736, 3736), (3744, 3744), (3748, 3748),
(3750, 3750), (3752, 3753), (3756, 3756), (3770, 3770),
(3774, 3775), (3781, 3781), (3783, 3783), (3790, 3791),
(3802, 3803), (3806, 3839), (3912, 3912), (3949, 3952),
(3980, 3983), (3992, 3992), (4029, 4029), (4045, 4045),
(4053, 4095), (4250, 4253), (4294, 4303), (4349, 4447),
(4515, 4519), (4602, 4607), (4681, 4681), (4686, 4687),
(4695, 4695), (4697, 4697), (4702, 4703), (4745, 4745),
(4750, 4751), (4785, 4785), (4790, 4791), (4799, 4799),
(4801, 4801), (4806, 4807), (4823, 4823), (4881, 4881),
(4886, 4887), (4955, 4958), (4989, 4991), (5018, 5023),
(5109, 5120), (5751, 5759), (5789, 5791), (5873, 5887),
(5901, 5901), (5909, 5919), (5943, 5951), (5972, 5983),
(5997, 5997), (6001, 6001), (6004, 6015), (6110, 6111),
(6122, 6127), (6138, 6143), (6159, 6159), (6170, 6175),
(6264, 6271), (6315, 6399), (6429, 6431), (6444, 6447),
(6460, 6463), (6465, 6467), (6510, 6511), (6517, 6527),
(6570, 6575), (6602, 6607), (6618, 6621), (6684, 6685),
(6688, 6911), (6988, 6991), (7037, 7039), (7083, 7085),
(7098, 7167), (7224, 7226), (7242, 7244), (7296, 7423),
(7655, 7677), (7958, 7959), (7966, 7967), (8006, 8007),
(8014, 8015), (8024, 8024), (8026, 8026), (8028, 8028),
(8030, 8030), (8062, 8063), (8117, 8117), (8133, 8133),
(8148, 8149), (8156, 8156), (8176, 8177), (8181, 8181),
(8191, 8191), (8293, 8297), (8306, 8307), (8335, 8335),
(8341, 8351), (8374, 8399), (8433, 8447), (8528, 8530),
(8585, 8591), (9001, 9002), (9192, 9215), (9255, 9279),
(9291, 9311), (9886, 9887), (9917, 9919), (9924, 9984),
(9989, 9989), (9994, 9995), (10024, 10024), (10060, 10060),
(10062, 10062), (10067, 10069), (10071, 10071), (10079, 10080),
(10133, 10135), (10160, 10160), (10175, 10175), (10187, 10187),
(10189, 10191), (11085, 11087), (11093, 11263), (11311, 11311),
(11359, 11359), (11376, 11376), (11390, 11391), (11499, 11512),
(11558, 11567), (11622, 11630), (11632, 11647), (11671, 11679),
(11687, 11687), (11695, 11695), (11703, 11703), (11711, 11711),
(11719, 11719), (11727, 11727), (11735, 11735), (11743, 11743),
(11825, 12350), (12352, 19903), (19968, 42239), (42540, 42559),
(42592, 42593), (42612, 42619), (42648, 42751), (42893, 43002),
(43052, 43071), (43128, 43135), (43205, 43213), (43226, 43263),
(43348, 43358), (43360, 43519), (43575, 43583), (43598, 43599),
(43610, 43611), (43616, 55295), (63744, 64255), (64263, 64274),
(64280, 64284), (64311, 64311), (64317, 64317), (64319, 64319),
(64322, 64322), (64325, 64325), (64434, 64466), (64832, 64847),
(64912, 64913), (64968, 65007), (65022, 65023), (65040, 65055),
(65063, 65135), (65141, 65141), (65277, 65278), (65280, 65376),
(65471, 65473), (65480, 65481), (65488, 65489), (65496, 65497),
(65501, 65511), (65519, 65528), (65534, 65535),
]
| 53.410072 | 78 | 0.564925 | 963 | 7,424 | 4.325026 | 0.732087 | 0.014406 | 0.009364 | 0.012965 | 0.029292 | 0.022089 | 0.022089 | 0.022089 | 0.022089 | 0 | 0 | 0.513874 | 0.233028 | 7,424 | 138 | 79 | 53.797101 | 0.217597 | 0.176859 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.019417 | false | 0 | 0 | 0 | 0.067961 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
7c9ca05e9c6b7018da88c76c81ab435644d0399f | 887 | py | Python | _utils.py | Maxwell-lt/bitrate-viewer | 9398c75cdfcf60645adf7bb8745aec175abfa16e | [
"MIT"
] | 32 | 2020-11-13T01:06:08.000Z | 2022-03-06T08:52:17.000Z | _utils.py | Maxwell-lt/bitrate-viewer | 9398c75cdfcf60645adf7bb8745aec175abfa16e | [
"MIT"
] | 3 | 2021-10-29T19:23:54.000Z | 2022-01-01T20:18:49.000Z | _utils.py | Maxwell-lt/bitrate-viewer | 9398c75cdfcf60645adf7bb8745aec175abfa16e | [
"MIT"
] | 6 | 2020-12-23T22:53:12.000Z | 2022-01-01T19:01:26.000Z | import math
from ffmpeg import probe
def get_bitrate(video_path):
bitrate = probe(video_path)['format']['bit_rate']
return f'{math.trunc(int(bitrate) / 1000)} kbit/s'
def get_framerate_fraction(video_path):
r_frame_rate = [stream for stream in probe(video_path)['streams']
if stream['codec_type'] == 'video'][0][
'r_frame_rate']
return r_frame_rate
def get_framerate_float(video_path):
numerator, denominator = get_framerate_fraction(video_path).split('/')
return round((int(numerator) / int(denominator)), 3)
def get_duration(video_path):
return probe(video_path)['format']['duration']
def get_mbit_str(megabits):
return f'{megabits} Mbps'
def get_pretty_codec_name(codec):
    codec_names = {
        'h264': 'H.264 (AVC)',
        'hevc': 'H.265 (HEVC)'
    }
    return codec_names.get(codec, codec)
| 23.972973 | 74 | 0.647125 | 118 | 887 | 4.618644 | 0.440678 | 0.13211 | 0.077064 | 0.073395 | 0.106422 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021583 | 0.21646 | 887 | 36 | 75 | 24.638889 | 0.76259 | 0 | 0 | 0 | 0 | 0 | 0.167982 | 0.027058 | 0 | 0 | 0 | 0 | 0 | 1 | 0.26087 | false | 0 | 0.086957 | 0.086957 | 0.608696 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
7ca19f6f0d31de9770e404bea8c70cfff15061d7 | 299 | py | Python | notes_app/urls.py | infomail4575/TODO | 60a9da59eb6f2c04dbf1650ffdb42d9181e7740a | [
"MIT"
] | 1 | 2022-02-06T08:31:45.000Z | 2022-02-06T08:31:45.000Z | notes_app/urls.py | infomail4575/TODO | 60a9da59eb6f2c04dbf1650ffdb42d9181e7740a | [
"MIT"
] | null | null | null | notes_app/urls.py | infomail4575/TODO | 60a9da59eb6f2c04dbf1650ffdb42d9181e7740a | [
"MIT"
] | null | null | null | from django.urls import path
from notes_app import views
urlpatterns = [
path('',views.home,name='home'),
path('home/',views.home,name='home'),
path('add/',views.add,name='add'),
path('edit/<int:id>',views.edit,name='edit'),
path('delete/<int:id>',views.delete, name='delete')
]
| 29.9 | 55 | 0.645485 | 44 | 299 | 4.363636 | 0.363636 | 0.09375 | 0.135417 | 0.177083 | 0.21875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12709 | 299 | 9 | 56 | 33.222222 | 0.735632 | 0 | 0 | 0 | 0 | 0 | 0.19398 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.222222 | 0 | 0.222222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
7ca3a4a508ff4d760323e9077b3d78d67d2fdb67 | 12,753 | py | Python | panzerserver/panzer_pb2.py | misakahi/panzerserver | 3246138703b55d96f8749f9cccf02b6cd53bb1c6 | [
"MIT"
] | null | null | null | panzerserver/panzer_pb2.py | misakahi/panzerserver | 3246138703b55d96f8749f9cccf02b6cd53bb1c6 | [
"MIT"
] | 1 | 2018-06-19T14:50:10.000Z | 2018-08-15T12:55:45.000Z | panzerserver/panzer_pb2.py | misakahi/panzerserver | 3246138703b55d96f8749f9cccf02b6cd53bb1c6 | [
"MIT"
] | null | null | null | # Generated by the protocol buffer compiler. DO NOT EDIT!
# source: panzer.proto
import sys
_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
from google.protobuf import descriptor_pb2
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor.FileDescriptor(
name='panzer.proto',
package='panzer',
syntax='proto3',
serialized_pb=_b('\n\x0cpanzer.proto\x12\x06panzer\"7\n\x0c\x44riveRequest\x12\x12\n\nleft_level\x18\x01 \x01(\x01\x12\x13\n\x0bright_level\x18\x02 \x01(\x01\" \n\rDriveResponse\x12\x0f\n\x07success\x18\x01 \x01(\x08\"5\n\x11MoveTurretRequest\x12\x10\n\x08rotation\x18\x01 \x01(\x01\x12\x0e\n\x06updown\x18\x02 \x01(\x01\"%\n\x12MoveTurretResponse\x12\x0f\n\x07success\x18\x01 \x01(\x08\"r\n\x0e\x43ontrolRequest\x12*\n\x0c\x64riveRequest\x18\x01 \x01(\x0b\x32\x14.panzer.DriveRequest\x12\x34\n\x11moveTurretRequest\x18\x02 \x01(\x0b\x32\x19.panzer.MoveTurretRequest\"\"\n\x0f\x43ontrolResponse\x12\x0f\n\x07success\x18\x01 \x01(\x08\"\x14\n\x04Ping\x12\x0c\n\x04ping\x18\x01 \x01(\t\"\x14\n\x04Pong\x12\x0c\n\x04pong\x18\x01 \x01(\t2\xef\x01\n\x06Panzer\x12\x36\n\x05\x44rive\x12\x14.panzer.DriveRequest\x1a\x15.panzer.DriveResponse\"\x00\x12\x45\n\nMoveTurret\x12\x19.panzer.MoveTurretRequest\x1a\x1a.panzer.MoveTurretResponse\"\x00\x12<\n\x07\x43ontrol\x12\x16.panzer.ControlRequest\x1a\x17.panzer.ControlResponse\"\x00\x12(\n\x08SendPing\x12\x0c.panzer.Ping\x1a\x0c.panzer.Pong\"\x00\x62\x06proto3')
)
_DRIVEREQUEST = _descriptor.Descriptor(
name='DriveRequest',
full_name='panzer.DriveRequest',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='left_level', full_name='panzer.DriveRequest.left_level', index=0,
number=1, type=1, cpp_type=5, label=1,
has_default_value=False, default_value=float(0),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='right_level', full_name='panzer.DriveRequest.right_level', index=1,
number=2, type=1, cpp_type=5, label=1,
has_default_value=False, default_value=float(0),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=24,
serialized_end=79,
)
_DRIVERESPONSE = _descriptor.Descriptor(
name='DriveResponse',
full_name='panzer.DriveResponse',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='success', full_name='panzer.DriveResponse.success', index=0,
number=1, type=8, cpp_type=7, label=1,
has_default_value=False, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=81,
serialized_end=113,
)
_MOVETURRETREQUEST = _descriptor.Descriptor(
name='MoveTurretRequest',
full_name='panzer.MoveTurretRequest',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='rotation', full_name='panzer.MoveTurretRequest.rotation', index=0,
number=1, type=1, cpp_type=5, label=1,
has_default_value=False, default_value=float(0),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='updown', full_name='panzer.MoveTurretRequest.updown', index=1,
number=2, type=1, cpp_type=5, label=1,
has_default_value=False, default_value=float(0),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=115,
serialized_end=168,
)
_MOVETURRETRESPONSE = _descriptor.Descriptor(
name='MoveTurretResponse',
full_name='panzer.MoveTurretResponse',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='success', full_name='panzer.MoveTurretResponse.success', index=0,
number=1, type=8, cpp_type=7, label=1,
has_default_value=False, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=170,
serialized_end=207,
)
_CONTROLREQUEST = _descriptor.Descriptor(
name='ControlRequest',
full_name='panzer.ControlRequest',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='driveRequest', full_name='panzer.ControlRequest.driveRequest', index=0,
number=1, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='moveTurretRequest', full_name='panzer.ControlRequest.moveTurretRequest', index=1,
number=2, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=209,
serialized_end=323,
)
_CONTROLRESPONSE = _descriptor.Descriptor(
name='ControlResponse',
full_name='panzer.ControlResponse',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='success', full_name='panzer.ControlResponse.success', index=0,
number=1, type=8, cpp_type=7, label=1,
has_default_value=False, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=325,
serialized_end=359,
)
_PING = _descriptor.Descriptor(
name='Ping',
full_name='panzer.Ping',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='ping', full_name='panzer.Ping.ping', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=361,
serialized_end=381,
)
_PONG = _descriptor.Descriptor(
name='Pong',
full_name='panzer.Pong',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='pong', full_name='panzer.Pong.pong', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=383,
serialized_end=403,
)
_CONTROLREQUEST.fields_by_name['driveRequest'].message_type = _DRIVEREQUEST
_CONTROLREQUEST.fields_by_name['moveTurretRequest'].message_type = _MOVETURRETREQUEST
DESCRIPTOR.message_types_by_name['DriveRequest'] = _DRIVEREQUEST
DESCRIPTOR.message_types_by_name['DriveResponse'] = _DRIVERESPONSE
DESCRIPTOR.message_types_by_name['MoveTurretRequest'] = _MOVETURRETREQUEST
DESCRIPTOR.message_types_by_name['MoveTurretResponse'] = _MOVETURRETRESPONSE
DESCRIPTOR.message_types_by_name['ControlRequest'] = _CONTROLREQUEST
DESCRIPTOR.message_types_by_name['ControlResponse'] = _CONTROLRESPONSE
DESCRIPTOR.message_types_by_name['Ping'] = _PING
DESCRIPTOR.message_types_by_name['Pong'] = _PONG
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
DriveRequest = _reflection.GeneratedProtocolMessageType('DriveRequest', (_message.Message,), dict(
DESCRIPTOR = _DRIVEREQUEST,
__module__ = 'panzer_pb2'
# @@protoc_insertion_point(class_scope:panzer.DriveRequest)
))
_sym_db.RegisterMessage(DriveRequest)
DriveResponse = _reflection.GeneratedProtocolMessageType('DriveResponse', (_message.Message,), dict(
DESCRIPTOR = _DRIVERESPONSE,
__module__ = 'panzer_pb2'
# @@protoc_insertion_point(class_scope:panzer.DriveResponse)
))
_sym_db.RegisterMessage(DriveResponse)
MoveTurretRequest = _reflection.GeneratedProtocolMessageType('MoveTurretRequest', (_message.Message,), dict(
DESCRIPTOR = _MOVETURRETREQUEST,
__module__ = 'panzer_pb2'
# @@protoc_insertion_point(class_scope:panzer.MoveTurretRequest)
))
_sym_db.RegisterMessage(MoveTurretRequest)
MoveTurretResponse = _reflection.GeneratedProtocolMessageType('MoveTurretResponse', (_message.Message,), dict(
DESCRIPTOR = _MOVETURRETRESPONSE,
__module__ = 'panzer_pb2'
# @@protoc_insertion_point(class_scope:panzer.MoveTurretResponse)
))
_sym_db.RegisterMessage(MoveTurretResponse)
ControlRequest = _reflection.GeneratedProtocolMessageType('ControlRequest', (_message.Message,), dict(
DESCRIPTOR = _CONTROLREQUEST,
__module__ = 'panzer_pb2'
# @@protoc_insertion_point(class_scope:panzer.ControlRequest)
))
_sym_db.RegisterMessage(ControlRequest)
ControlResponse = _reflection.GeneratedProtocolMessageType('ControlResponse', (_message.Message,), dict(
DESCRIPTOR = _CONTROLRESPONSE,
__module__ = 'panzer_pb2'
# @@protoc_insertion_point(class_scope:panzer.ControlResponse)
))
_sym_db.RegisterMessage(ControlResponse)
Ping = _reflection.GeneratedProtocolMessageType('Ping', (_message.Message,), dict(
DESCRIPTOR = _PING,
__module__ = 'panzer_pb2'
# @@protoc_insertion_point(class_scope:panzer.Ping)
))
_sym_db.RegisterMessage(Ping)
Pong = _reflection.GeneratedProtocolMessageType('Pong', (_message.Message,), dict(
DESCRIPTOR = _PONG,
__module__ = 'panzer_pb2'
# @@protoc_insertion_point(class_scope:panzer.Pong)
))
_sym_db.RegisterMessage(Pong)
_PANZER = _descriptor.ServiceDescriptor(
name='Panzer',
full_name='panzer.Panzer',
file=DESCRIPTOR,
index=0,
options=None,
serialized_start=406,
serialized_end=645,
methods=[
_descriptor.MethodDescriptor(
name='Drive',
full_name='panzer.Panzer.Drive',
index=0,
containing_service=None,
input_type=_DRIVEREQUEST,
output_type=_DRIVERESPONSE,
options=None,
),
_descriptor.MethodDescriptor(
name='MoveTurret',
full_name='panzer.Panzer.MoveTurret',
index=1,
containing_service=None,
input_type=_MOVETURRETREQUEST,
output_type=_MOVETURRETRESPONSE,
options=None,
),
_descriptor.MethodDescriptor(
name='Control',
full_name='panzer.Panzer.Control',
index=2,
containing_service=None,
input_type=_CONTROLREQUEST,
output_type=_CONTROLRESPONSE,
options=None,
),
_descriptor.MethodDescriptor(
name='SendPing',
full_name='panzer.Panzer.SendPing',
index=3,
containing_service=None,
input_type=_PING,
output_type=_PONG,
options=None,
),
])
_sym_db.RegisterServiceDescriptor(_PANZER)
DESCRIPTOR.services_by_name['Panzer'] = _PANZER
# @@protoc_insertion_point(module_scope)
| 30.582734 | 1,105 | 0.740845 | 1,482 | 12,753 | 6.093792 | 0.12753 | 0.036319 | 0.037205 | 0.019488 | 0.550548 | 0.468054 | 0.447459 | 0.447459 | 0.437825 | 0.392648 | 0 | 0.034389 | 0.131263 | 12,753 | 416 | 1,106 | 30.65625 | 0.780756 | 0.048224 | 0 | 0.625683 | 1 | 0.002732 | 0.187959 | 0.124371 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.016393 | 0 | 0.016393 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
7ca4892f85d341a0cccf15e48518db9c7629e3bf | 352 | py | Python | agent/indy_catalyst_agent/verifier/base.py | nairobi222/indy-catalyst | dcbd80524ace7747ecfecd716ff932e9b571d69a | [
"Apache-2.0"
] | 7 | 2020-07-07T15:44:41.000Z | 2022-03-26T21:20:41.000Z | agent/indy_catalyst_agent/verifier/base.py | nairobi222/indy-catalyst | dcbd80524ace7747ecfecd716ff932e9b571d69a | [
"Apache-2.0"
] | 6 | 2021-03-10T20:05:19.000Z | 2022-02-27T05:41:09.000Z | agent/indy_catalyst_agent/verifier/base.py | nairobi222/indy-catalyst | dcbd80524ace7747ecfecd716ff932e9b571d69a | [
"Apache-2.0"
] | 4 | 2019-07-09T20:41:03.000Z | 2021-06-06T10:45:23.000Z | """Base Verifier class."""
from abc import ABC
class BaseVerifier(ABC):
"""Base class for verifier."""
def __repr__(self) -> str:
"""
Return a human readable representation of this class.
Returns:
A human readable string for this class
"""
return "<{}>".format(self.__class__.__name__)
| 19.555556 | 61 | 0.590909 | 39 | 352 | 5.025641 | 0.589744 | 0.061224 | 0.142857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.292614 | 352 | 17 | 62 | 20.705882 | 0.787149 | 0.431818 | 0 | 0 | 0 | 0 | 0.026846 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
7cb6a09a5cb3a8eab0631e7da796d865d688d2cd | 4,110 | py | Python | src/oci/log_analytics/models/step_info.py | LaudateCorpus1/oci-python-sdk | b0d3ce629d5113df4d8b83b7a6502b2c5bfa3015 | [
"Apache-2.0",
"BSD-3-Clause"
] | null | null | null | src/oci/log_analytics/models/step_info.py | LaudateCorpus1/oci-python-sdk | b0d3ce629d5113df4d8b83b7a6502b2c5bfa3015 | [
"Apache-2.0",
"BSD-3-Clause"
] | null | null | null | src/oci/log_analytics/models/step_info.py | LaudateCorpus1/oci-python-sdk | b0d3ce629d5113df4d8b83b7a6502b2c5bfa3015 | [
"Apache-2.0",
"BSD-3-Clause"
] | null | null | null | # coding: utf-8
# Copyright (c) 2016, 2022, Oracle and/or its affiliates. All rights reserved.
# This software is dual-licensed to you under the Universal Permissive License (UPL) 1.0 as shown at https://oss.oracle.com/licenses/upl or Apache License 2.0 as shown at http://www.apache.org/licenses/LICENSE-2.0. You may choose either license.
from oci.util import formatted_flat_dict, NONE_SENTINEL, value_allowed_none_or_none_sentinel # noqa: F401
from oci.decorators import init_model_state_from_kwargs
@init_model_state_from_kwargs
class StepInfo(object):
"""
StepInfo
"""
def __init__(self, **kwargs):
"""
Initializes a new StepInfo object with values from keyword arguments.
The following keyword arguments are supported (corresponding to the getters/setters of this class):
:param input_sequence_current_match:
The value to assign to the input_sequence_current_match property of this StepInfo.
:type input_sequence_current_match: str
:param regex_engine_class_name:
The value to assign to the regex_engine_class_name property of this StepInfo.
:type regex_engine_class_name: str
:param step_count:
The value to assign to the step_count property of this StepInfo.
:type step_count: int
"""
self.swagger_types = {
'input_sequence_current_match': 'str',
'regex_engine_class_name': 'str',
'step_count': 'int'
}
self.attribute_map = {
'input_sequence_current_match': 'inputSequenceCurrentMatch',
'regex_engine_class_name': 'regexEngineClassName',
'step_count': 'stepCount'
}
self._input_sequence_current_match = None
self._regex_engine_class_name = None
self._step_count = None
@property
def input_sequence_current_match(self):
"""
Gets the input_sequence_current_match of this StepInfo.
        The current input sequence match.
:return: The input_sequence_current_match of this StepInfo.
:rtype: str
"""
return self._input_sequence_current_match
@input_sequence_current_match.setter
def input_sequence_current_match(self, input_sequence_current_match):
"""
Sets the input_sequence_current_match of this StepInfo.
        The current input sequence match.
:param input_sequence_current_match: The input_sequence_current_match of this StepInfo.
:type: str
"""
self._input_sequence_current_match = input_sequence_current_match
@property
def regex_engine_class_name(self):
"""
Gets the regex_engine_class_name of this StepInfo.
The regular expression engine class name.
:return: The regex_engine_class_name of this StepInfo.
:rtype: str
"""
return self._regex_engine_class_name
@regex_engine_class_name.setter
def regex_engine_class_name(self, regex_engine_class_name):
"""
Sets the regex_engine_class_name of this StepInfo.
The regular expression engine class name.
:param regex_engine_class_name: The regex_engine_class_name of this StepInfo.
:type: str
"""
self._regex_engine_class_name = regex_engine_class_name
@property
def step_count(self):
"""
Gets the step_count of this StepInfo.
The step count.
:return: The step_count of this StepInfo.
:rtype: int
"""
return self._step_count
@step_count.setter
def step_count(self, step_count):
"""
Sets the step_count of this StepInfo.
The step count.
:param step_count: The step_count of this StepInfo.
:type: int
"""
self._step_count = step_count
def __repr__(self):
return formatted_flat_dict(self)
def __eq__(self, other):
if other is None:
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
return not self == other
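The `attribute_map` above pairs each Python attribute with its camelCase wire name. A minimal, hypothetical sketch of how such a map could drive serialization (not the OCI SDK's actual serializer; the helper name `to_wire` is made up):

```python
# Hypothetical sketch: using an attribute_map like StepInfo's to rename
# private snake_case attributes to their camelCase API keys.
attribute_map = {
    'input_sequence_current_match': 'inputSequenceCurrentMatch',
    'regex_engine_class_name': 'regexEngineClassName',
    'step_count': 'stepCount',
}

def to_wire(instance_dict):
    # instance_dict holds private attributes such as '_step_count';
    # strip the leading underscore, then look up the wire name
    return {attribute_map[key.lstrip('_')]: value
            for key, value in instance_dict.items()
            if key.lstrip('_') in attribute_map}

print(to_wire({'_step_count': 3, '_regex_engine_class_name': 'Re2'}))
# {'stepCount': 3, 'regexEngineClassName': 'Re2'}
```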
| 30.902256 | 245 | 0.668856 | 518 | 4,110 | 4.96139 | 0.22973 | 0.101167 | 0.116732 | 0.175097 | 0.595331 | 0.427626 | 0.288716 | 0.277821 | 0.216342 | 0.110506 | 0 | 0.005966 | 0.265937 | 4,110 | 132 | 246 | 31.136364 | 0.845873 | 0.457908 | 0 | 0.068182 | 0 | 0 | 0.101481 | 0.069665 | 0 | 0 | 0 | 0 | 0 | 1 | 0.227273 | false | 0 | 0.045455 | 0.045455 | 0.454545 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
7cb94832243cf801ee066204cd1cb5b87d7502d4 | 2,776 | py | Python | fltk/datasets/dataset.py | JMGaljaard/fltk-testbed | af6af25c82f91d0e3a78dfff6071b3b423cdf3d1 | [
"BSD-2-Clause"
] | null | null | null | fltk/datasets/dataset.py | JMGaljaard/fltk-testbed | af6af25c82f91d0e3a78dfff6071b3b423cdf3d1 | [
"BSD-2-Clause"
] | 1 | 2021-10-03T08:11:25.000Z | 2021-10-04T14:47:47.000Z | fltk/datasets/dataset.py | JMGaljaard/fltk-testbed | af6af25c82f91d0e3a78dfff6071b3b423cdf3d1 | [
"BSD-2-Clause"
] | 1 | 2021-10-05T18:52:05.000Z | 2021-10-05T18:52:05.000Z | from abc import abstractmethod
import torch
from torch.utils.data import DataLoader
from torch.utils.data import TensorDataset
from fltk.util.config import DistLearningConfig
class Dataset:
def __init__(self, config, learning_params: DistLearningConfig, rank: int, world_size: int):
self.config = config
self.learning_params = learning_params
self.rank = rank
self.world_size = world_size
self.train_loader = self.load_train_dataset()
self.test_loader = self.load_test_dataset()
    def get_train_dataset(self):
        """
        Returns the train dataset loader.
        :return: torch.utils.data.DataLoader
        """
        return self.train_loader
    def get_test_dataset(self):
        """
        Returns the test dataset loader.
        :return: torch.utils.data.DataLoader
        """
        return self.test_loader
@abstractmethod
def load_train_dataset(self):
"""
Loads & returns the training dataset.
:return: tuple
"""
raise NotImplementedError("load_train_dataset() isn't implemented")
@abstractmethod
def load_test_dataset(self):
"""
Loads & returns the test dataset.
:return: tuple
"""
raise NotImplementedError("load_test_dataset() isn't implemented")
    def get_train_loader(self, **kwargs):
        """
        Return the data loader for the train dataset.
        :return: torch.utils.data.DataLoader
        """
        return self.train_loader
    def get_test_loader(self, **kwargs):
        """
        Return the data loader for the test dataset.
        :return: torch.utils.data.DataLoader
        """
        return self.test_loader
@staticmethod
def get_data_loader_from_data(batch_size, X, Y, **kwargs):
"""
Get a data loader created from a given set of data.
:param batch_size: batch size of data loader.
:type batch_size: int
:param X: data features,
:type X: numpy.Array()
:param Y: data labels.
:type Y: numpy.Array()
:return: torch.utils.data.DataLoader
"""
X_torch = torch.from_numpy(X).float() # pylint: disable=no-member
        if kwargs.get("classification_problem") is False:
            Y_torch = torch.from_numpy(Y).float()  # pylint: disable=no-member
        else:
            Y_torch = torch.from_numpy(Y).long()  # pylint: disable=no-member
dataset = TensorDataset(X_torch, Y_torch)
kwargs.pop("classification_problem", None)
return DataLoader(dataset, batch_size=batch_size, **kwargs)
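A framework-free sketch of the batching that `DataLoader` performs for `get_data_loader_from_data`; the function name and data here are illustrative, and no torch dependency is needed:

```python
# Yield (features, labels) chunks of at most batch_size items,
# like a minimal, shuffle-free DataLoader.
def batches(features, labels, batch_size):
    for start in range(0, len(features), batch_size):
        yield (features[start:start + batch_size],
               labels[start:start + batch_size])

X = list(range(10))
Y = [x % 2 for x in X]
for x_batch, y_batch in batches(X, Y, batch_size=4):
    print(x_batch, y_batch)
```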
| 27.76 | 96 | 0.626801 | 329 | 2,776 | 5.106383 | 0.218845 | 0.064286 | 0.041667 | 0.042857 | 0.452976 | 0.333929 | 0.226786 | 0.195833 | 0.195833 | 0.147024 | 0 | 0 | 0.2817 | 2,776 | 99 | 97 | 28.040404 | 0.842528 | 0.29647 | 0 | 0.162162 | 0 | 0 | 0.085299 | 0.039927 | 0 | 0 | 0 | 0 | 0 | 1 | 0.216216 | false | 0 | 0.135135 | 0 | 0.513514 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
7cbce227195172dbf3f7750df0df7b2cd4e383a2 | 5,086 | py | Python | tests/pfc/test_pfc_pause_lossless.py | macikgozwa/sonic-mgmt | 86338be8b2e55fd03d4913037d0e641e443762b0 | [
"Apache-2.0"
] | null | null | null | tests/pfc/test_pfc_pause_lossless.py | macikgozwa/sonic-mgmt | 86338be8b2e55fd03d4913037d0e641e443762b0 | [
"Apache-2.0"
] | 1 | 2020-11-26T14:24:02.000Z | 2020-11-26T14:24:02.000Z | tests/pfc/test_pfc_pause_lossless.py | macikgozwa/sonic-mgmt | 86338be8b2e55fd03d4913037d0e641e443762b0 | [
"Apache-2.0"
] | null | null | null | import pytest
from tests.common.helpers.assertions import pytest_require
from tests.common.fixtures.conn_graph_facts import conn_graph_facts,\
fanout_graph_facts
from tests.common.ixia.ixia_fixtures import ixia_api_serv_ip, ixia_api_serv_port,\
ixia_api_serv_user, ixia_api_serv_passwd, ixia_api, ixia_testbed
from tests.common.ixia.qos_fixtures import prio_dscp_map, all_prio_list, lossless_prio_list,\
lossy_prio_list
from files.helper import run_pfc_test
@pytest.mark.topology("tgen")
def test_pfc_pause_single_lossless_prio(ixia_api,
ixia_testbed,
conn_graph_facts,
fanout_graph_facts,
duthosts,
rand_one_dut_hostname,
enum_dut_portname_oper_up,
enum_dut_lossless_prio,
all_prio_list,
prio_dscp_map):
"""
Test if PFC can pause a single lossless priority
Args:
ixia_api (pytest fixture): IXIA session
ixia_testbed (pytest fixture): L2/L3 config of a T0 testbed
conn_graph_facts (pytest fixture): connection graph
fanout_graph_facts (pytest fixture): fanout graph
duthosts (pytest fixture): list of DUTs
rand_one_dut_hostname (str): hostname of DUT
enum_dut_portname_oper_up (str): name of port to test, e.g., 's6100-1|Ethernet0'
enum_dut_lossless_prio (str): name of lossless priority to test, e.g., 's6100-1|3'
all_prio_list (pytest fixture): list of all the priorities
prio_dscp_map (pytest fixture): priority vs. DSCP map (key = priority).
Returns:
None
"""
dut_hostname, dut_port = enum_dut_portname_oper_up.split('|')
dut_hostname2, lossless_prio = enum_dut_lossless_prio.split('|')
pytest_require(rand_one_dut_hostname == dut_hostname == dut_hostname2,
"Priority and port are not mapped to the expected DUT")
duthost = duthosts[rand_one_dut_hostname]
lossless_prio = int(lossless_prio)
pause_prio_list = [lossless_prio]
test_prio_list = [lossless_prio]
bg_prio_list = [p for p in all_prio_list]
bg_prio_list.remove(lossless_prio)
run_pfc_test(api=ixia_api,
testbed_config=ixia_testbed,
conn_data=conn_graph_facts,
fanout_data=fanout_graph_facts,
duthost=duthost,
dut_port=dut_port,
global_pause=False,
pause_prio_list=pause_prio_list,
test_prio_list=test_prio_list,
bg_prio_list=bg_prio_list,
prio_dscp_map=prio_dscp_map,
test_traffic_pause=True)
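The background list above is every priority except the lossless one under test. The same split in isolation (the priority values 0-7 are assumptions, not read from a DUT):

```python
# Standalone illustration of how bg_prio_list is derived in the test above.
all_prio_list = [0, 1, 2, 3, 4, 5, 6, 7]  # assumed priority values
lossless_prio = 3
bg_prio_list = [p for p in all_prio_list]
bg_prio_list.remove(lossless_prio)
print(bg_prio_list)  # [0, 1, 2, 4, 5, 6, 7]
```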
def test_pfc_pause_multi_lossless_prio(ixia_api,
ixia_testbed,
conn_graph_facts,
fanout_graph_facts,
duthosts,
rand_one_dut_hostname,
enum_dut_portname_oper_up,
lossless_prio_list,
lossy_prio_list,
prio_dscp_map):
"""
Test if PFC can pause multiple lossless priorities
Args:
ixia_api (pytest fixture): IXIA session
ixia_testbed (pytest fixture): L2/L3 config of a T0 testbed
conn_graph_facts (pytest fixture): connection graph
fanout_graph_facts (pytest fixture): fanout graph
duthosts (pytest fixture): list of DUTs
rand_one_dut_hostname (str): hostname of DUT
enum_dut_portname_oper_up (str): name of port to test, e.g., 's6100-1|Ethernet0'
lossless_prio_list (pytest fixture): list of all the lossless priorities
lossy_prio_list (pytest fixture): list of all the lossy priorities
prio_dscp_map (pytest fixture): priority vs. DSCP map (key = priority).
Returns:
None
"""
dut_hostname, dut_port = enum_dut_portname_oper_up.split('|')
pytest_require(rand_one_dut_hostname == dut_hostname,
"Port is not mapped to the expected DUT")
duthost = duthosts[rand_one_dut_hostname]
pause_prio_list = lossless_prio_list
test_prio_list = lossless_prio_list
bg_prio_list = lossy_prio_list
run_pfc_test(api=ixia_api,
testbed_config=ixia_testbed,
conn_data=conn_graph_facts,
fanout_data=fanout_graph_facts,
duthost=duthost,
dut_port=dut_port,
global_pause=False,
pause_prio_list=pause_prio_list,
test_prio_list=test_prio_list,
bg_prio_list=bg_prio_list,
prio_dscp_map=prio_dscp_map,
test_traffic_pause=True)
| 42.033058 | 93 | 0.599489 | 609 | 5,086 | 4.602627 | 0.164204 | 0.091331 | 0.035319 | 0.051374 | 0.763111 | 0.707813 | 0.674278 | 0.674278 | 0.638958 | 0.607207 | 0 | 0.007805 | 0.345065 | 5,086 | 120 | 94 | 42.383333 | 0.833684 | 0.273889 | 0 | 0.591549 | 0 | 0 | 0.027263 | 0 | 0 | 0 | 0 | 0 | 0.014085 | 1 | 0.028169 | false | 0.014085 | 0.084507 | 0 | 0.112676 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
7cc0566aa9a467c22c30455b2d966338a71401ad | 1,009 | py | Python | tests/test_timespec.py | farsightsec/axamd_client | 560dfe9a8e23163597bf4efddfa5843669ec4ba5 | [
"ECL-2.0",
"Apache-2.0"
] | 1 | 2016-11-01T21:52:14.000Z | 2016-11-01T21:52:14.000Z | tests/test_timespec.py | farsightsec/axamd_client | 560dfe9a8e23163597bf4efddfa5843669ec4ba5 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | tests/test_timespec.py | farsightsec/axamd_client | 560dfe9a8e23163597bf4efddfa5843669ec4ba5 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | # Copyright (c) 2018 by Farsight Security, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import unittest
from axamd.client.__main__ import timespec_to_seconds
class TimespecTestCase(unittest.TestCase):
def test_vanilla_1w1d1h1m1s(self):
assert timespec_to_seconds('1w1d1h1m1s') == 694861
def test_out_of_order_returns_none(self):
assert timespec_to_seconds('1h1w') is None
def test_vanilla_colon_delimited(self):
assert timespec_to_seconds('01:01:01') == 3661
| 31.53125 | 74 | 0.756194 | 147 | 1,009 | 5.040816 | 0.62585 | 0.080972 | 0.091768 | 0.080972 | 0.109312 | 0 | 0 | 0 | 0 | 0 | 0 | 0.043011 | 0.170466 | 1,009 | 31 | 75 | 32.548387 | 0.842294 | 0.560951 | 0 | 0 | 0 | 0 | 0.051282 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | false | 0 | 0.222222 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
7ceade2dd51be562a0aadb30428612f6621b4124 | 1,136 | py | Python | py/py_0361_subsequence_of_thue-morse_sequence.py | lcsm29/project-euler | fab794ece5aa7a11fc7c2177f26250f40a5b1447 | [
"MIT"
] | null | null | null | py/py_0361_subsequence_of_thue-morse_sequence.py | lcsm29/project-euler | fab794ece5aa7a11fc7c2177f26250f40a5b1447 | [
"MIT"
] | null | null | null | py/py_0361_subsequence_of_thue-morse_sequence.py | lcsm29/project-euler | fab794ece5aa7a11fc7c2177f26250f40a5b1447 | [
"MIT"
] | null | null | null | # Solution of;
# Project Euler Problem 361: Subsequence of Thue-Morse sequence
# https://projecteuler.net/problem=361
#
# The Thue-Morse sequence {Tn} is a binary sequence satisfying: T0 = 0,
# T2n = Tn, T2n+1 = 1 - Tn. The first several terms of {Tn} are given as
# follows: 01101001100101101001011001101001... We define {An} as the sorted
# sequence of integers such that the binary expression of each element appears
# as a subsequence in {Tn}. For example, the decimal number 18 is expressed as
# 10010 in binary. 10010 appears in {Tn} (T8 to T12), so 18 is an element of
# {An}. The decimal number 14 is expressed as 1110 in binary. 1110 never
# appears in {Tn}, so 14 is not an element of {An}. The first several terms of
# An are given as follows:n0123456789101112…An012345691011121318…We can also
# verify that A100 = 3251 and A1000 = 80852364498. Find the last 9 digits of
# $\sum \limits_{k = 1}^{18} {A_{10^k}}$.
#
# by lcsm29 http://github.com/lcsm29/project-euler
import timed
def dummy(n):
pass
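The solver body is still a stub. As a hedged illustration of the membership test described in the comment (T_k is the bit-parity of k, and a number belongs to {An} when its binary expansion occurs in the sequence), checked only over a finite prefix; this is not a solution to the full summation:

```python
def thue_morse_prefix(length):
    # T_k is the parity of the number of 1 bits in k
    return ''.join(str(bin(k).count('1') % 2) for k in range(length))

def appears_in_thue_morse(value, prefix_length=1 << 12):
    # checks occurrence only within a finite prefix; absence is still
    # conclusive for 14 because Thue-Morse famously contains no '111'
    return bin(value)[2:] in thue_morse_prefix(prefix_length)

print(appears_in_thue_morse(18), appears_in_thue_morse(14))  # True False
```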
if __name__ == '__main__':
n = 1000
i = 10000
prob_id = 361
timed.caller(dummy, n, i, prob_id)
| 37.866667 | 79 | 0.707746 | 183 | 1,136 | 4.360656 | 0.535519 | 0.015038 | 0.042607 | 0.047619 | 0.0401 | 0 | 0 | 0 | 0 | 0 | 0 | 0.165208 | 0.195423 | 1,136 | 29 | 80 | 39.172414 | 0.701313 | 0.836268 | 0 | 0 | 0 | 0 | 0.047619 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0.125 | 0.125 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
7cf2fe104419cc11fe22abe8dc977fa80ba5a1d2 | 1,160 | py | Python | v2.5.7/toontown/toonbase/HolidayData.py | TTOFFLINE-LEAK/ttoffline | bb0e91704a755d34983e94288d50288e46b68380 | [
"MIT"
] | 4 | 2019-07-01T15:46:43.000Z | 2021-07-23T16:26:48.000Z | v2.5.7/toontown/toonbase/HolidayData.py | TTOFFLINE-LEAK/ttoffline | bb0e91704a755d34983e94288d50288e46b68380 | [
"MIT"
] | 1 | 2019-06-29T03:40:05.000Z | 2021-06-13T01:15:16.000Z | v2.5.7/toontown/toonbase/HolidayData.py | TTOFFLINE-LEAK/ttoffline | bb0e91704a755d34983e94288d50288e46b68380 | [
"MIT"
] | 4 | 2019-07-28T21:18:46.000Z | 2021-02-25T06:37:25.000Z | from toontown.toonbase.ToontownGlobals import *
ONCELY_HOLIDAYS = []
WEEKLY_HOLIDAYS = [
[
TROLLEY_HOLIDAY, 3],
[
CIRCUIT_RACING, 0]]
YEARLY_HOLIDAYS = [
[
TOP_TOONS_MARATHON, [1, 1, 0, 0], [1, 1, 23, 59]],
[
VALENTINES_DAY, [2, 9, 0, 0], [2, 16, 23, 59]],
[
IDES_OF_MARCH, [3, 17, 0, 0], [3, 17, 23, 59]],
[
APRIL_FOOLS_COSTUMES, [3, 31, 0, 0], [4, 5, 23, 59]],
[
ORANGES, [4, 1, 0, 0], [4, 1, 23, 59]],
[
SILLYMETER_HOLIDAY, [3, 31, 0, 0], [4, 5, 23, 59]],
[
SILLY_CHATTER_ONE, [3, 31, 0, 0], [4, 5, 23, 59]],
[
JULY4_FIREWORKS, [7, 4, 0, 0], [7, 4, 23, 59]],
[
HALLOWEEN_PROPS, [10, 24, 0, 0], [10, 31, 23, 59]],
[
TRICK_OR_TREAT, [10, 24, 0, 0], [10, 31, 23, 59]],
[
HALLOWEEN_COSTUMES, [10, 24, 0, 0], [10, 31, 23, 59]],
[
HALLOWEEN, [10, 31, 0, 0], [10, 31, 23, 59]],
[
HALLOWEEN_CHAIRMAN_REVEAL, [10, 31, 0, 0], [10, 31, 0, 59]],
[
BLACK_CAT_DAY, [10, 31, 0, 0], [10, 31, 23, 59]],
[
WINTER_DECORATIONS, [12, 1, 0, 0], [12, 31, 23, 59]],
[
WINTER_CAROLING, [12, 1, 0, 0], [12, 31, 23, 59]],
[
CHRISTMAS, [12, 25, 0, 0], [12, 25, 23, 59]],
[
NEWYEARS_FIREWORKS, [12, 26, 0, 0], [1, 1, 23, 59]]] | 26.363636 | 62 | 0.52931 | 201 | 1,160 | 2.920398 | 0.313433 | 0.061329 | 0.07155 | 0.061329 | 0.310051 | 0.310051 | 0.265758 | 0.250426 | 0.078365 | 0 | 0 | 0.248337 | 0.222414 | 1,160 | 44 | 63 | 26.363636 | 0.402439 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.022727 | 0 | 0.022727 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
7cf6b672498e72c73fe4fe762a1a4d35c3583c8d | 842 | py | Python | pypika/selectable.py | rob-blackbourn/pypika | 8241f0bed8435cb5911e06ef476466f477ab9f59 | [
"Apache-2.0"
] | null | null | null | pypika/selectable.py | rob-blackbourn/pypika | 8241f0bed8435cb5911e06ef476466f477ab9f59 | [
"Apache-2.0"
] | null | null | null | pypika/selectable.py | rob-blackbourn/pypika | 8241f0bed8435cb5911e06ef476466f477ab9f59 | [
"Apache-2.0"
] | null | null | null | from __future__ import annotations
from typing import Optional
from .node import Node
from .terms import Field, Star
from .utils import copy_if_immutable, ignore_copy
class Selectable(Node):
def __init__(self, alias: Optional[str]) -> None:
self.alias = alias
def as_(self, alias: str) -> Selectable:
with copy_if_immutable(self) as this:
this.alias = alias
return this
def field(self, name: str) -> Field:
return Field(name, table=self)
@property
def star(self) -> Star:
return Star(self)
@ignore_copy
def __getattr__(self, name: str) -> Field:
return self.field(name)
@ignore_copy
def __getitem__(self, name: str) -> Field:
return self.field(name)
def get_table_name(self) -> Optional[str]:
return self.alias
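A minimal, hypothetical analogue of `Selectable`'s attribute-to-Field pattern, showing why `__getattr__`/`__getitem__` delegate to `field()`; pypika's real `Field`/`Star` types are richer than the plain strings used here:

```python
class MiniSelectable:
    def __init__(self, name, alias=None):
        self.name = name
        self.alias = alias

    def field(self, field_name):
        # stand-in for pypika's Field(name, table=self)
        return "{}.{}".format(self.alias or self.name, field_name)

    def __getattr__(self, field_name):
        # only called for attributes not found normally, so t.id
        # resolves to a field while t.name stays a plain attribute
        return self.field(field_name)

    def __getitem__(self, field_name):
        return self.field(field_name)

t = MiniSelectable("users", alias="u")
print(t.id)        # u.id
print(t["email"])  # u.email
```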
| 22.756757 | 53 | 0.644893 | 109 | 842 | 4.743119 | 0.302752 | 0.069633 | 0.06383 | 0.092843 | 0.17795 | 0.135397 | 0.135397 | 0.135397 | 0 | 0 | 0 | 0 | 0.260095 | 842 | 36 | 54 | 23.388889 | 0.829856 | 0 | 0 | 0.16 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.28 | false | 0 | 0.2 | 0.2 | 0.76 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
7cf8648629ea8530623d1f55e84653adfb2d1cf7 | 722 | py | Python | scripts/issues/issue14_streamlit.py | jlstevens/awesome-panel | c67b0f4529a3ce6a8517648f49fef8358e2e2c8b | [
"Apache-2.0"
] | null | null | null | scripts/issues/issue14_streamlit.py | jlstevens/awesome-panel | c67b0f4529a3ce6a8517648f49fef8358e2e2c8b | [
"Apache-2.0"
] | null | null | null | scripts/issues/issue14_streamlit.py | jlstevens/awesome-panel | c67b0f4529a3ce6a8517648f49fef8358e2e2c8b | [
"Apache-2.0"
] | 1 | 2021-10-05T05:40:19.000Z | 2021-10-05T05:40:19.000Z | import streamlit as st
text = """\
## Custom CSS does not play nicely with Bokeh HTML, CSS and JavaScript
I've experienced numerous problems when using CSS.
I have a feeling that the Bokeh Javascript on elements does not take everything like images and inline css into account. But it's difficult for me to catch and understand.
For example I struggled with the below scrollbar until I found out it was because i had a `margin-bottom: 1rem;` in the css for the info box. When I removed that the problem was solved.
<img src="https://github.com/MarcSkovMadsen/awesome-panel/blob/master/gallery/bootstrap_dashboard/assets/images/info_alert_scrollbar_problem.png?raw=true" width="200" height="400" />
"""
st.write(text)
| 45.125 | 185 | 0.778393 | 121 | 722 | 4.61157 | 0.735537 | 0.02509 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011345 | 0.145429 | 722 | 15 | 186 | 48.133333 | 0.893031 | 0 | 0 | 0 | 0 | 0.333333 | 0.925208 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
7cf9f68411f88c3635ff8fa8678ea49c4cb628dd | 2,465 | py | Python | parser/team28/models/column.py | webdev188/tytus | 847071edb17b218f51bb969d335a8ec093d13f94 | [
"MIT"
] | 35 | 2020-12-07T03:11:43.000Z | 2021-04-15T17:38:16.000Z | parser/team28/models/column.py | webdev188/tytus | 847071edb17b218f51bb969d335a8ec093d13f94 | [
"MIT"
] | 47 | 2020-12-09T01:29:09.000Z | 2021-01-13T05:37:50.000Z | parser/team28/models/column.py | webdev188/tytus | 847071edb17b218f51bb969d335a8ec093d13f94 | [
"MIT"
] | 556 | 2020-12-07T03:13:31.000Z | 2021-06-17T17:41:10.000Z | class Column(object):
def __init__(self, name, dataType):
self._number = 0
self._name = name
self._dataType = dataType
self._length = None
self._default = None
self._notNull = False
self._unique = False
self._constraint = None
self._check = []
self._primaryKey = False
self._foreignKey = None
self._autoincrement = False
# TODO FOREIGN KEY implementation
# {'refTable':None,'refColumn':None} {'Referenced table': None}
def __str__(self):
return self._name
@property
def name(self):
return self._name
@name.setter
def name(self, name):
self._name = name
@property
def dataType(self):
return self._dataType
@dataType.setter
def dataType(self, dataType):
self._dataType = dataType
@property
def length(self):
return self._length
@length.setter
def length(self, length):
self._length = length
@property
def notNull(self):
return self._notNull
@notNull.setter
def notNull(self, notNull):
self._notNull = notNull
@property
def primaryKey(self):
return self._primaryKey
@primaryKey.setter
def primaryKey(self, primaryKey):
self._primaryKey = primaryKey
@property
def number(self):
return self._number
@number.setter
def number(self, number):
self._number = number
@property
def foreignKey(self):
return self._foreignKey
@foreignKey.setter
def foreignKey(self, foreignKey):
self._foreignKey = foreignKey
@property
def unique(self):
return self._unique
@unique.setter
def unique(self, unique):
self._unique = unique
@property
def constraint(self):
return self._constraint
@constraint.setter
def constraint(self, constraint):
self._constraint = constraint
@property
def default(self):
return self._default
@default.setter
def default(self, default):
self._default = default
@property
def autoincrement(self):
return self._autoincrement
@autoincrement.setter
def autoincrement(self, autoincrement):
self._autoincrement = autoincrement
@property
def check(self):
return self._check
@check.setter
def check(self, check):
self._check = check
| 21.25 | 71 | 0.615822 | 252 | 2,465 | 5.845238 | 0.130952 | 0.088255 | 0.123557 | 0.02444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.000579 | 0.298986 | 2,465 | 115 | 72 | 21.434783 | 0.851852 | 0.037728 | 0 | 0.204545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008696 | 0 | 1 | 0.295455 | false | 0 | 0 | 0.147727 | 0.454545 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
6b0a3fbe6835f42e5d4e69d6c636f3ed4410b535 | 508 | py | Python | main.py | zhenweil/Bag-of-words-for-Scene-Classification | 7db72cb89c2f07edcb41ce2caa69b8fac0e54ab1 | [
"MIT"
] | 1 | 2020-12-24T05:53:46.000Z | 2020-12-24T05:53:46.000Z | main.py | zhenweil/Bag-of-words-for-Scene-Classification | 7db72cb89c2f07edcb41ce2caa69b8fac0e54ab1 | [
"MIT"
] | null | null | null | main.py | zhenweil/Bag-of-words-for-Scene-Classification | 7db72cb89c2f07edcb41ce2caa69b8fac0e54ab1 | [
"MIT"
] | null | null | null | from os.path import join
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt
import util
import visual_words
import visual_recog
from opts import get_opts
def main():
opts = get_opts()
n_cpu = util.get_num_CPU()
visual_words.compute_dictionary(opts, n_worker=n_cpu)
visual_recog.build_recognition_system(opts, n_worker=n_cpu)
conf, accuracy = visual_recog.evaluate_recognition_system(opts, n_worker=n_cpu)
if __name__ == '__main__':
main()
| 24.190476 | 84 | 0.742126 | 77 | 508 | 4.519481 | 0.454545 | 0.057471 | 0.094828 | 0.103448 | 0.227011 | 0.183908 | 0.183908 | 0 | 0 | 0 | 0 | 0 | 0.187008 | 508 | 20 | 85 | 25.4 | 0.842615 | 0 | 0 | 0 | 0 | 0 | 0.016393 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.5 | 0 | 0.5625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
6b16a2ee3b2f4bfd764ab5b80b55a7e93a1d13f6 | 1,520 | py | Python | catalog/models.py | maurosantosdev/lojavirtual | 636c155449169e762db9c4897596136d1f479b92 | [
"MIT"
] | null | null | null | catalog/models.py | maurosantosdev/lojavirtual | 636c155449169e762db9c4897596136d1f479b92 | [
"MIT"
] | null | null | null | catalog/models.py | maurosantosdev/lojavirtual | 636c155449169e762db9c4897596136d1f479b92 | [
"MIT"
] | null | null | null | # coding=utf-8
from django.db import models
from django.urls import reverse
class Category(models.Model):
name = models.CharField('Nome', max_length=100)
slug = models.SlugField('Identificador', max_length=100)
created = models.DateTimeField('Criado em', auto_now_add=True)
modified = models.DateTimeField('Modificado em', auto_now=True)
class Meta:
verbose_name = 'Categoria'
verbose_name_plural = 'Categorias'
ordering = ['name']
def __str__(self):
return self.name
def get_absolute_url(self):
return reverse('catalog:category', kwargs={'slug': self.slug})
class Product(models.Model):
name = models.CharField('Nome', max_length=100)
slug = models.SlugField('Identificador', max_length=100)
category = models.ForeignKey('catalog.Category', verbose_name='Categoria', on_delete=models.CASCADE)
description = models.TextField('Descrição', blank=True)
price = models.DecimalField('Preço', decimal_places=2, max_digits=8)
image = models.ImageField(
'Imagem', upload_to='products', blank=True, null=True
)
created = models.DateTimeField('Criado em', auto_now_add=True)
modified = models.DateTimeField('Modificado em', auto_now=True)
class Meta:
verbose_name = 'Produto'
verbose_name_plural = 'Produtos'
ordering = ['name']
def __str__(self):
return self.name
def get_absolute_url(self):
return reverse('catalog:product', kwargs={'slug': self.slug}) | 32.340426 | 104 | 0.686842 | 182 | 1,520 | 5.554945 | 0.395604 | 0.054402 | 0.047478 | 0.041543 | 0.563798 | 0.563798 | 0.563798 | 0.563798 | 0.563798 | 0.563798 | 0 | 0.012185 | 0.190132 | 1,520 | 47 | 105 | 32.340426 | 0.809098 | 0.007895 | 0 | 0.529412 | 0 | 0 | 0.140677 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117647 | false | 0 | 0.058824 | 0.117647 | 0.764706 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 2 |
6b1a3a965664049ab3d653d6d66991c5424fc16c | 273 | py | Python | demo_video.py | dovietchinh/via-laneline-segmentation | d1a1cc87c44e1275784ce0c0acf5be4ba619eeea | [
"Unlicense"
] | 5 | 2021-03-26T09:26:14.000Z | 2021-06-29T23:29:10.000Z | demo_video.py | dovietchinh/via-laneline-segmentation | d1a1cc87c44e1275784ce0c0acf5be4ba619eeea | [
"Unlicense"
] | null | null | null | demo_video.py | dovietchinh/via-laneline-segmentation | d1a1cc87c44e1275784ce0c0acf5be4ba619eeea | [
"Unlicense"
] | null | null | null | """ author:
name : Do Viet Chinh
personal email: dovietchinh1998@mgail.com
personal facebook: https://www.facebook.com/profile.php?id=100005935236259
VNOpenAI team: vnopenai@gmail.com
via team :
date:
26.3.2021
""" | 27.3 | 79 | 0.604396 | 31 | 273 | 5.322581 | 0.806452 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.134715 | 0.29304 | 273 | 10 | 80 | 27.3 | 0.720207 | 0.970696 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
6b2594de711a8f10da1512dbeef1090ec017a0aa | 4,740 | py | Python | frappe/email/smtp.py | Steggur/frappe | be95a19704dd3ac667f7ad64e1694dc5d59856fe | [
"MIT"
] | null | null | null | frappe/email/smtp.py | Steggur/frappe | be95a19704dd3ac667f7ad64e1694dc5d59856fe | [
"MIT"
] | null | null | null | frappe/email/smtp.py | Steggur/frappe | be95a19704dd3ac667f7ad64e1694dc5d59856fe | [
"MIT"
] | 1 | 2018-03-22T00:24:53.000Z | 2018-03-22T00:24:53.000Z | # Copyright (c) 2013, Web Notes Technologies Pvt. Ltd. and Contributors
# MIT License. See license.txt
from __future__ import unicode_literals
import frappe
import smtplib
import _socket
from frappe.utils import cint
from frappe import _
def send(email, as_bulk=False, append_to=None):
"""send the message or add it to Outbox Email"""
	if frappe.flags.mute_emails or frappe.conf.get("mute_emails"):
frappe.msgprint(_("Emails are muted"))
return
if frappe.flags.in_test:
frappe.flags.sent_mail = email.as_string()
return
try:
smtpserver = SMTPServer(append_to=append_to)
if hasattr(smtpserver, "always_use_login_id_as_sender") and \
cint(smtpserver.always_use_login_id_as_sender) and smtpserver.login:
if not email.reply_to:
email.reply_to = email.sender
email.sender = smtpserver.login
smtpserver.sess.sendmail(email.sender, email.recipients + (email.cc or []),
email.as_string())
except smtplib.SMTPSenderRefused:
frappe.msgprint(_("Invalid login or password"))
raise
except smtplib.SMTPRecipientsRefused:
frappe.msgprint(_("Invalid recipient address"))
raise
def get_outgoing_email_account(raise_exception_not_set=True, append_to=None):
"""Returns outgoing email account based on `append_to` or the default
outgoing account. If default outgoing account is not found, it will
try getting settings from `site_config.json`."""
def _get_email_account(filters):
name = frappe.db.get_value("Email Account", filters)
return frappe.get_doc("Email Account", name) if name else None
if not getattr(frappe.local, "outgoing_email_account", None):
email_account = None
if append_to:
email_account = _get_email_account({"enable_outgoing": 1, "append_to": append_to})
if not email_account:
email_account = _get_email_account({"enable_outgoing": 1, "default_outgoing": 1})
if not email_account and frappe.conf.get("mail_server"):
# from site_config.json
email_account = frappe.new_doc("Email Account")
email_account.update({
"smtp_server": frappe.conf.get("mail_server"),
"smtp_port": frappe.conf.get("mail_port"),
"use_tls": cint(frappe.conf.get("use_ssl") or 0),
"email_id": frappe.conf.get("mail_login"),
"password": frappe.conf.get("mail_password"),
"sender": frappe.conf.get("auto_email_id")
})
email_account.from_site_config = True
if not email_account and not raise_exception_not_set:
return None
if not email_account:
frappe.throw(_("Please setup default Email Account from Setup > Email > Email Account"))
frappe.local.outgoing_email_account = email_account
return frappe.local.outgoing_email_account
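When no default outgoing Email Account exists, the fallback above reads these keys from `site_config.json`. A hypothetical fragment with placeholder values (only the key names come from the code):

```json
{
  "mail_server": "smtp.example.com",
  "mail_port": 587,
  "use_ssl": 1,
  "mail_login": "notifications@example.com",
  "mail_password": "change-me",
  "auto_email_id": "notifications@example.com"
}
```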
class SMTPServer:
def __init__(self, login=None, password=None, server=None, port=None, use_ssl=None, append_to=None):
# get defaults from mail settings
self._sess = None
self.email_account = None
if server:
self.server = server
self.port = port
self.use_ssl = cint(use_ssl)
self.login = login
self.password = password
else:
self.setup_email_account()
def setup_email_account(self):
self.email_account = get_outgoing_email_account(raise_exception_not_set=False)
if self.email_account:
self.server = self.email_account.smtp_server
self.login = self.email_account.email_id
self.password = self.email_account.password
self.port = self.email_account.smtp_port
self.use_ssl = self.email_account.use_tls
@property
def sess(self):
"""get session"""
if self._sess:
return self._sess
# check if email server specified
if not self.server:
err_msg = _('Email Account not setup. Please create a new Email Account from Setup > Email > Email Account')
frappe.msgprint(err_msg)
			raise frappe.OutgoingEmailError(err_msg)
try:
if self.use_ssl and not self.port:
self.port = 587
self._sess = smtplib.SMTP((self.server or "").encode('utf-8'),
cint(self.port) or None)
if not self._sess:
err_msg = _('Could not connect to outgoing email server')
frappe.msgprint(err_msg)
raise frappe.OutgoingEmailError, err_msg
if self.use_ssl:
self._sess.ehlo()
self._sess.starttls()
self._sess.ehlo()
if self.login:
ret = self._sess.login((self.login or "").encode('utf-8'),
(self.password or "").encode('utf-8'))
# check if logged correctly
if ret[0]!=235:
frappe.msgprint(ret[1])
raise frappe.OutgoingEmailError, ret[1]
return self._sess
except _socket.error:
# Invalid mail server -- due to refusing connection
frappe.throw(_('Invalid Outgoing Mail Server or Port'))
except smtplib.SMTPAuthenticationError:
frappe.throw(_("Invalid login or password"))
except smtplib.SMTPException:
frappe.msgprint(_('Unable to send emails at this time'))
raise
| 30.779221 | 111 | 0.735021 | 681 | 4,740 | 4.895742 | 0.220264 | 0.136773 | 0.031194 | 0.025495 | 0.193461 | 0.130774 | 0.130774 | 0.130774 | 0.032993 | 0 | 0 | 0.005016 | 0.158861 | 4,740 | 153 | 112 | 30.980392 | 0.831201 | 0.054852 | 0 | 0.154545 | 0 | 0 | 0.156272 | 0.012094 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.063636 | 0.054545 | null | null | 0.063636 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
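The `sess` property above falls back to port 587 whenever TLS is requested but no port is configured. That decision can be isolated as a tiny pure function (a sketch with a hypothetical `resolve_smtp_port` helper, not part of Frappe's API):

```python
def resolve_smtp_port(use_tls, port=None):
    """Mirror the fallback in SMTPServer.sess: STARTTLS defaults to 587."""
    if use_tls and not port:
        return 587
    # smtplib.SMTP treats a falsy port as "use the default", hence None here.
    return int(port) if port else None
```

Keeping the rule in one place makes the connect path easier to test than probing a live SMTP session.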
6b29bdf5804b793d94bf282f532f1bff3e2c9152 | 206 | py | Python | assignment8/reducer.py | IITDU-BSSE06/ads-demystifying-the-logs-AliZafar120 | 5d5b35d290a3d5072ee6d116ba082eb6c21a2e9e | [
"MIT"
] | null | null | null | assignment8/reducer.py | IITDU-BSSE06/ads-demystifying-the-logs-AliZafar120 | 5d5b35d290a3d5072ee6d116ba082eb6c21a2e9e | [
"MIT"
] | 1 | 2017-11-01T12:25:31.000Z | 2017-11-18T16:25:45.000Z | assignment8/reducer.py | IITDU-BSSE06/ads-demystifying-the-logs-AliZafar120 | 5d5b35d290a3d5072ee6d116ba082eb6c21a2e9e | [
"MIT"
] | null | null | null | #!/usr/bin/python
import sys

files = dict()
for line in sys.stdin:
    files[line] = int(files.get(line, 0)) + 1

maxhit = str(max(files, key=files.get))
print("{0} {1}".format(maxhit.split("\n")[0], files.get(maxhit)))
| 22.888889 | 64 | 0.684466 | 38 | 206 | 3.710526 | 0.605263 | 0.170213 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.026316 | 0.07767 | 206 | 8 | 65 | 25.75 | 0.715789 | 0.07767 | 0 | 0 | 0 | 0 | 0.047619 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.166667 | null | null | 0.166667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
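The reducer's aggregation can be exercised without stdin by factoring it into a function; this sketch (a hypothetical `max_hit` helper, not part of the assignment) uses `collections.Counter` for the same count-and-pick-maximum logic:

```python
from collections import Counter

def max_hit(lines):
    """Count identical lines and return the most frequent one with its count."""
    counts = Counter(line.rstrip("\n") for line in lines)
    line, hits = counts.most_common(1)[0]
    return line, hits
```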
6b2c56c6235b809f9d9e040eb7f31bf8abd268d7 | 329 | py | Python | view/interface.py | gcleiton/pgm-processing-image-py | 63bd17b7eeb586942fb7b41649d27be71e197966 | [
"MIT"
] | null | null | null | view/interface.py | gcleiton/pgm-processing-image-py | 63bd17b7eeb586942fb7b41649d27be71e197966 | [
"MIT"
] | null | null | null | view/interface.py | gcleiton/pgm-processing-image-py | 63bd17b7eeb586942fb7b41649d27be71e197966 | [
"MIT"
] | null | null | null | import npyscreen
from view.home import InitialScreen
from view.process import ProcessScreen


class CreateInterface(npyscreen.NPSAppManaged):
    def onStart(self):
        self.addForm("MAIN", InitialScreen, name="PGM Image Processing Program")
        self.addForm("PROCESSING", ProcessScreen, name="Processing Image...")
 | 41.125 | 81 | 0.74772 | 35 | 329 | 7.028571 | 0.6 | 0.065041 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.155015 | 329 | 8 | 82 | 41.125 | 0.884892 | 0 | 0 | 0 | 0 | 0 | 0.188854 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.428571 | 0 | 0.714286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
6b3e2834ece7f73024f06f3f7d1fa5e83f965414 | 159 | py | Python | pfdo_mgz2img/__main__.py | FNNDSC/pl-pfdo_mgz2img | 0eb289aecd0769e47e1603dc9c7386a90c0649bd | [
"MIT"
] | null | null | null | pfdo_mgz2img/__main__.py | FNNDSC/pl-pfdo_mgz2img | 0eb289aecd0769e47e1603dc9c7386a90c0649bd | [
"MIT"
] | null | null | null | pfdo_mgz2img/__main__.py | FNNDSC/pl-pfdo_mgz2img | 0eb289aecd0769e47e1603dc9c7386a90c0649bd | [
"MIT"
] | 3 | 2020-10-22T01:30:04.000Z | 2020-12-17T17:20:25.000Z | from pfdo_mgz2img.pfdo_mgz2img import Pfdo_mgz2img


def main():
    chris_app = Pfdo_mgz2img()
    chris_app.launch()


if __name__ == "__main__":
    main()
| 14.454545 | 50 | 0.698113 | 21 | 159 | 4.619048 | 0.52381 | 0.453608 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.03125 | 0.194969 | 159 | 10 | 51 | 15.9 | 0.726563 | 0 | 0 | 0 | 0 | 0 | 0.050314 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.166667 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
6b45dbe9f7042ed3eb8fd15c80c06dd2eb5ab9f3 | 2,845 | py | Python | dispel4py/examples/graph_testing/delayed_pipeline.py | AndreiFrunze/wrangler | 076a07de00fc966dcf18ca6b6a6e804be5245ed9 | [
"Apache-2.0"
] | 2 | 2017-09-07T04:33:18.000Z | 2019-01-07T13:32:15.000Z | dispel4py/examples/graph_testing/delayed_pipeline.py | AndreiFrunze/wrangler | 076a07de00fc966dcf18ca6b6a6e804be5245ed9 | [
"Apache-2.0"
] | 2 | 2016-10-06T13:07:05.000Z | 2017-12-20T09:47:08.000Z | dispel4py/examples/graph_testing/delayed_pipeline.py | AndreiFrunze/wrangler | 076a07de00fc966dcf18ca6b6a6e804be5245ed9 | [
"Apache-2.0"
] | 5 | 2016-09-01T08:38:20.000Z | 2018-08-28T12:08:39.000Z | # Copyright (c) The University of Edinburgh 2014
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
'''
This is a dispel4py graph which produces a pipeline workflow with one producer node (prod) and 2 consumer nodes.
The second consumer node delays the output by a fixed time and records the average processing time.
Execution:
* MPI: Please, locate yourself into the dispel4py directory.
Execute the MPI mapping as follows::
mpiexec -n <number mpi_processes> python -m dispel4py.worker_mpi [-a name_dispel4py_graph] [-f file containing the input dataset in JSON format]
[-i number of iterations/runs] [-s]
The argument '-s' forces to run the graph in a simple processing, which means that the first node of the graph will be executed in a process, and the rest of nodes will be executed in a second process.
When <-i number of iterations/runs> is not indicated, the graph is executed once by default.
For example::
mpiexec -n 6 python -m dispel4py.worker_mpi dispel4py.examples.graph_testing.delayed_pipeline
.. note::
Each node in the graph is executed as a separate MPI process.
This graph has 3 nodes. For this reason we need at least 3 MPI processes to execute it.
Output::
Processes: {'TestDelayOneInOneOut2': [2, 3], 'TestProducer0': [4], 'TestOneInOneOut1': [0, 1]}
TestProducer0 (rank 4): Processed 10 iterations.
TestOneInOneOut1 (rank 1): Processed 5 iterations.
TestOneInOneOut1 (rank 0): Processed 5 iterations.
TestDelayOneInOneOut2 (rank 3): Average processing time: 1.00058307648
TestDelayOneInOneOut2 (rank 3): Processed 5 iterations.
TestDelayOneInOneOut2 (rank 2): Average processing time: 1.00079641342
TestDelayOneInOneOut2 (rank 2): Processed 5 iterations.
'''
from dispel4py.examples.graph_testing import testing_PEs as t
from dispel4py.workflow_graph import WorkflowGraph
from dispel4py.new.monitoring import ProcessTimingPE
prod = t.TestProducer()
cons1 = t.TestOneInOneOut()
# adding a processing timer
cons2 = ProcessTimingPE(t.TestDelayOneInOneOut())
# important: this is the graph variable
graph = WorkflowGraph()
graph.connect(prod, 'output', cons1, 'input')
graph.connect(cons1, 'output', cons2, 'input')
| 41.838235 | 214 | 0.723726 | 387 | 2,845 | 5.29199 | 0.447028 | 0.029297 | 0.039063 | 0.015625 | 0.084961 | 0 | 0 | 0 | 0 | 0 | 0 | 0.034361 | 0.202109 | 2,845 | 67 | 215 | 42.462687 | 0.867841 | 0.829525 | 0 | 0 | 0 | 0 | 0.054321 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
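The producer→consumer wiring that `graph.connect` expresses can be sketched in plain Python (a toy analogue of the dataflow idea, not the dispel4py API):

```python
def run_pipeline(producer, *consumers):
    # Each stage's output feeds the next stage's input, the way
    # graph.connect(prod, 'output', cons1, 'input') chains PEs in sequence.
    data = producer()
    for consumer in consumers:
        data = consumer(data)
    return data
```

In the real graph, each PE additionally runs as its own MPI process and data moves over messages rather than return values.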
860fb914a3d3a66024a520f717f81b7fd6c506ed | 2,283 | py | Python | app/forms.py | Hamudi0/BLOGGER | e013f92badca5ebaa36abf0478e314ccfad6a4fc | [
"MIT"
] | null | null | null | app/forms.py | Hamudi0/BLOGGER | e013f92badca5ebaa36abf0478e314ccfad6a4fc | [
"MIT"
] | 3 | 2021-06-08T19:36:08.000Z | 2022-01-13T01:03:39.000Z | app/forms.py | Hamudi0/BLOGGER | e013f92badca5ebaa36abf0478e314ccfad6a4fc | [
"MIT"
] | null | null | null | from flask_wtf import FlaskForm
from flask_login import current_user
from flask_wtf.file import FileField, FileAllowed
from wtforms import StringField, PasswordField, BooleanField, SubmitField, TextAreaField
from wtforms.validators import ValidationError, DataRequired, Email, EqualTo, Length
from app.models import User


class LoginForm(FlaskForm):
    username = StringField('Username', validators=[DataRequired()])
    password = PasswordField('Password', validators=[DataRequired()])
    remember = BooleanField('Remember Me')
    submit = SubmitField('Sign In')


class RegistrationForm(FlaskForm):
    username = StringField('Username', validators=[DataRequired()])
    email = StringField('Email', validators=[DataRequired(), Email()])
    password = PasswordField('Password', validators=[DataRequired()])
    password2 = PasswordField(
        'Repeat Password', validators=[DataRequired(), EqualTo('password')])
    submit = SubmitField('Register')

    def validate_username(self, username):
        user = User.query.filter_by(username=username.data).first()
        if user is not None:
            raise ValidationError('Please use a different username.')

    def validate_email(self, email):
        user = User.query.filter_by(email=email.data).first()
        if user is not None:
            raise ValidationError('Please use a different email address.')


class EditProfileForm(FlaskForm):
    username = StringField('Username', validators=[DataRequired()])
    about_me = TextAreaField('About me', validators=[Length(min=0, max=140)])
    picture = FileField('Profile Picture', validators=[FileAllowed(['jpg', 'png'])])
    submit = SubmitField('Update')

    def validate_username(self, username):
        if username.data != current_user.username:
            user = User.query.filter_by(username=username.data).first()
            if user:
                raise ValidationError('That username is taken. Please choose a different one.')

    def validate_email(self, email):
        if email.data != current_user.email:
            user = User.query.filter_by(email=email.data).first()
            if user:
                raise ValidationError('That email is taken. Please choose a different one.')


class PostForm(FlaskForm):
    title = StringField('Title', validators=[DataRequired()])
    content = TextAreaField('Content', validators=[DataRequired()])
    submit = SubmitField('Post')
 | 43.075472 | 89 | 0.74113 | 256 | 2,283 | 6.550781 | 0.292969 | 0.118068 | 0.031008 | 0.045319 | 0.480024 | 0.360167 | 0.25641 | 0.189624 | 0.189624 | 0.189624 | 0 | 0.002536 | 0.136224 | 2,283 | 53 | 90 | 43.075472 | 0.84787 | 0 | 0 | 0.377778 | 0 | 0 | 0.142025 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.088889 | false | 0.111111 | 0.133333 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 2 |
8617f344951a72474031fde55dcff214052b252d | 1,200 | py | Python | tests/contrib/test_securecookie.py | lucashangxu/Ancient-Werkzeug | 5f6312f6cde5466aed545bfbef8c53c15d935a0f | [
"BSD-3-Clause"
] | null | null | null | tests/contrib/test_securecookie.py | lucashangxu/Ancient-Werkzeug | 5f6312f6cde5466aed545bfbef8c53c15d935a0f | [
"BSD-3-Clause"
] | null | null | null | tests/contrib/test_securecookie.py | lucashangxu/Ancient-Werkzeug | 5f6312f6cde5466aed545bfbef8c53c15d935a0f | [
"BSD-3-Clause"
] | null | null | null | from werkzeug import Request, Response, parse_cookie
from werkzeug.contrib.securecookie import SecureCookie


def test_basic_support():
    """Basic SecureCookie support"""
    c = SecureCookie(secret_key='foo')
    assert c.new
    assert not c.modified
    assert not c.should_save

    c['x'] = 42
    assert c.modified
    assert c.should_save

    s = c.serialize()
    c2 = SecureCookie.unserialize(s, 'foo')
    assert c is not c2
    assert not c2.new
    assert not c2.modified
    assert not c2.should_save
    assert c2 == c

    c3 = SecureCookie.unserialize(s, 'wrong foo')
    assert not c3.modified
    assert not c3.new
    assert c3 == {}


def test_wrapper_support():
    """SecureCookie wrapper integration"""
    req = Request.from_values()
    resp = Response()

    c = SecureCookie.load_cookie(req, secret_key='foo')
    assert c.new
    c['foo'] = 42
    assert c.secret_key == 'foo'
    c.save_cookie(resp)

    req = Request.from_values(headers={
        'Cookie': 'session="%s"' % parse_cookie(resp.headers['set-cookie'])['session']
    })
    c2 = SecureCookie.load_cookie(req, secret_key='foo')
    assert not c2.new
    assert c2 == c
| 26.086957 | 87 | 0.660833 | 164 | 1,200 | 4.719512 | 0.262195 | 0.093023 | 0.062016 | 0.069767 | 0.18863 | 0.144703 | 0.111111 | 0.111111 | 0 | 0 | 0 | 0.01824 | 0.223333 | 1,200 | 45 | 88 | 26.666667 | 0.812232 | 0 | 0 | 0.166667 | 0 | 0 | 0.055752 | 0 | 0 | 0 | 0 | 0 | 0.472222 | 0 | null | null | 0 | 0.055556 | null | null | 0.027778 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
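`SecureCookie.serialize`/`unserialize` rely on an HMAC signature keyed by `secret_key`: a wrong key or tampered payload yields an empty cookie. A minimal stdlib sketch of that idea (not Werkzeug's actual wire format):

```python
import base64
import hashlib
import hmac
import json

def sign(data, secret_key):
    """Serialize data and append an HMAC over the payload."""
    payload = base64.urlsafe_b64encode(json.dumps(data, sort_keys=True).encode())
    sig = hmac.new(secret_key, payload, hashlib.sha1).hexdigest()
    return payload.decode() + "." + sig

def unsign(blob, secret_key):
    """Verify the HMAC before trusting the payload."""
    payload, sig = blob.rsplit(".", 1)
    expected = hmac.new(secret_key, payload.encode(), hashlib.sha1).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return {}  # bad key or tampered data -> empty dict, like SecureCookie
    return json.loads(base64.urlsafe_b64decode(payload))
```

`hmac.compare_digest` is used so signature comparison runs in constant time.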
861fc453e9213a792fa108ee19e1c9fe255b6ab5 | 767 | py | Python | Python/5 kyu/Common Denominators/test_convertFracts.py | newtonsspawn/codewars_challenges | 62b20d4e729c8ba79eac7cae6a179af57abd45d4 | [
"MIT"
] | 3 | 2020-05-29T23:29:35.000Z | 2021-08-12T03:16:44.000Z | Python/5 kyu/Common Denominators/test_convertFracts.py | newtonsspawn/codewars_challenges | 62b20d4e729c8ba79eac7cae6a179af57abd45d4 | [
"MIT"
] | null | null | null | Python/5 kyu/Common Denominators/test_convertFracts.py | newtonsspawn/codewars_challenges | 62b20d4e729c8ba79eac7cae6a179af57abd45d4 | [
"MIT"
] | 3 | 2020-05-22T12:14:55.000Z | 2021-04-15T12:52:42.000Z | from unittest import TestCase
from convertfracts import convertFracts


class TestConvertFracts(TestCase):
    def test_convertFracts_01(self):
        self.assertEqual(convertFracts([[1, 2], [1, 3], [1, 4]]),
                         [[6, 12], [4, 12], [3, 12]])

    def test_convertFracts_02(self):
        self.assertEqual(convertFracts([]), [])

    def test_convertFracts_03(self):
        self.assertEqual(convertFracts([[27115, 5262],
                                        [87546, 11111111],
                                        [43216, 255689]]),
                         [[77033412951888085, 14949283383840498],
                          [117787497858828, 14949283383840498],
                          [2526695441399712, 14949283383840498]])
| 34.863636 | 65 | 0.530639 | 59 | 767 | 6.79661 | 0.508475 | 0.052369 | 0.149626 | 0.239402 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.310345 | 0.357236 | 767 | 21 | 66 | 36.52381 | 0.503043 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 1 | 0.2 | false | 0 | 0.133333 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
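For reference, the behaviour the first test expects can be implemented by rewriting the fractions over their lowest common denominator. This is one possible sketch; the kata's actual `convertFracts` may differ in detail (e.g. whether it reduces each fraction first):

```python
from functools import reduce
from math import gcd

def convert_fracts(fractions):
    """Rewrite [numerator, denominator] pairs over a common denominator."""
    if not fractions:
        return []
    # LCM of all denominators, folded pairwise via lcm(a, b) = a*b // gcd(a, b).
    lcm = reduce(lambda a, b: a * b // gcd(a, b), (d for _, d in fractions))
    return [[n * (lcm // d), lcm] for n, d in fractions]
```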
863cd6e9e74a184af736af2f6356721fcb42c289 | 721 | py | Python | timebank_site/timebank_app/permissions.py | SombiriX/TimeBank | 94f379e341ac6f34ef127dc65ce07f494e2f4f8a | [
"MIT"
] | 1 | 2019-10-01T17:57:03.000Z | 2019-10-01T17:57:03.000Z | timebank_site/timebank_app/permissions.py | SombiriX/TimeBank | 94f379e341ac6f34ef127dc65ce07f494e2f4f8a | [
"MIT"
] | 35 | 2018-09-21T18:12:24.000Z | 2022-02-27T07:55:08.000Z | timebank_site/timebank_app/permissions.py | SombiriX/TimeBank | 94f379e341ac6f34ef127dc65ce07f494e2f4f8a | [
"MIT"
] | null | null | null | from rest_framework.permissions import SAFE_METHODS, BasePermission


class SafeMethodsOnly(BasePermission):
    def has_permission(self, request, view):
        return request.method in SAFE_METHODS

    def has_object_permission(self, request, view, obj=None):
        return request.method in SAFE_METHODS


class AdminOrAuthorCanEdit(BasePermission):
    def has_object_permission(self, request, view, obj=None):
        """Only the author can modify existing instances."""
        is_safe = request.method in SAFE_METHODS
        try:
            is_author = request.user == obj.author
        except AttributeError:
            is_author = False
        return is_safe or is_author or request.user.is_superuser
| 28.84 | 67 | 0.710125 | 87 | 721 | 5.701149 | 0.436782 | 0.08871 | 0.127016 | 0.15121 | 0.358871 | 0.306452 | 0.177419 | 0.177419 | 0.177419 | 0 | 0 | 0 | 0.223301 | 721 | 24 | 68 | 30.041667 | 0.885714 | 0.0638 | 0 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.214286 | false | 0 | 0.071429 | 0.142857 | 0.642857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
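`SafeMethodsOnly` boils down to a membership test against DRF's `SAFE_METHODS` tuple, which can be verified with a stub request. This sketch re-declares the constant so it runs without DRF installed:

```python
SAFE_METHODS = ('GET', 'HEAD', 'OPTIONS')  # same values as rest_framework's constant

class StubRequest:
    """Bare stand-in for a DRF request: only .method is needed."""
    def __init__(self, method):
        self.method = method

def is_read_only(request):
    """The membership test SafeMethodsOnly performs."""
    return request.method in SAFE_METHODS
```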
864a26525040d8070ddd4d430e5d6a1076c804ca | 680 | py | Python | leasing/renderers.py | hkotkanen/mvj | a22d40869ef1b13924da428f3026d248acef81a7 | [
"MIT"
] | null | null | null | leasing/renderers.py | hkotkanen/mvj | a22d40869ef1b13924da428f3026d248acef81a7 | [
"MIT"
] | null | null | null | leasing/renderers.py | hkotkanen/mvj | a22d40869ef1b13924da428f3026d248acef81a7 | [
"MIT"
] | null | null | null | from rest_framework.renderers import BrowsableAPIRenderer


# From: https://bradmontgomery.net/blog/disabling-forms-django-rest-frameworks-browsable-api/
class BrowsableAPIRendererWithoutForms(BrowsableAPIRenderer):
    """Renders the browsable api, but excludes the html form."""

    def get_context(self, *args, **kwargs):
        ctx = super().get_context(*args, **kwargs)
        return ctx

    def get_rendered_html_form(self, data, view, method, request):
        """Returns empty html for the html form, except True for DELETE
        and OPTIONS methods to show buttons for them"""
        if method in ('DELETE', 'OPTIONS'):
            return True
        return ""
| 34 | 93 | 0.692647 | 81 | 680 | 5.740741 | 0.654321 | 0.051613 | 0.047312 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.205882 | 680 | 19 | 94 | 35.789474 | 0.861111 | 0.372059 | 0 | 0 | 0 | 0 | 0.031863 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0.111111 | 0 | 0.777778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
864e36381307d60d47d2565702cdadf5209c455c | 4,231 | py | Python | skimage/data/__init__.py | genp/scikit-image | 0295d5423585efc39bea7b25b5b00a6f6ee4533a | [
"BSD-3-Clause"
] | null | null | null | skimage/data/__init__.py | genp/scikit-image | 0295d5423585efc39bea7b25b5b00a6f6ee4533a | [
"BSD-3-Clause"
] | null | null | null | skimage/data/__init__.py | genp/scikit-image | 0295d5423585efc39bea7b25b5b00a6f6ee4533a | [
"BSD-3-Clause"
] | 1 | 2020-02-25T10:44:47.000Z | 2020-02-25T10:44:47.000Z | """Standard test images.

For more images, see

 - http://sipi.usc.edu/database/database.php

"""

import os as _os

from ..io import imread
from skimage import data_dir


__all__ = ['load',
           'camera',
           'lena',
           'text',
           'checkerboard',
           'coins',
           'moon',
           'page',
           'horse',
           'clock',
           'immunohistochemistry',
           'chelsea']


def load(f):
    """Load an image file located in the data directory.

    Parameters
    ----------
    f : string
        File name.

    Returns
    -------
    img : ndarray
        Image loaded from skimage.data_dir.
    """
    return imread(_os.path.join(data_dir, f))


def camera():
    """Gray-level "camera" image.

    Often used for segmentation and denoising examples.
    """
    return load("camera.png")


def lena():
    """Colour "Lena" image.

    The standard, yet sometimes controversial Lena test image was
    scanned from the November 1972 edition of Playboy magazine. From
    an image processing perspective, this image is useful because it
    contains smooth, textured, shaded as well as detail areas.
    """
    return load("lena.png")


def text():
    """Gray-level "text" image used for corner detection.

    Notes
    -----
    This image was downloaded from `Wikipedia
    <http://en.wikipedia.org/wiki/File:Corner.png>`__.

    No known copyright restrictions, released into the public domain.
    """
    return load("text.png")


def checkerboard():
    """Checkerboard image.

    Checkerboards are often used in image calibration, since the
    corner-points are easy to locate. Because of the many parallel
    edges, they also visualise distortions particularly well.
    """
    return load("chessboard_GRAY.png")


def coins():
    """Greek coins from Pompeii.

    This image shows several coins outlined against a gray background.
    It is especially useful in, e.g. segmentation tests, where
    individual objects need to be identified against a background.
    The background shares enough grey levels with the coins that a
    simple segmentation is not sufficient.

    Notes
    -----
    This image was downloaded from the
    `Brooklyn Museum Collection
    <http://www.brooklynmuseum.org/opencollection/archives/image/617/image>`__.

    No known copyright restrictions.
    """
    return load("coins.png")


def moon():
    """Surface of the moon.

    This low-contrast image of the surface of the moon is useful for
    illustrating histogram equalization and contrast stretching.
    """
    return load("moon.png")


def page():
    """Scanned page.

    This image of printed text is useful for demonstrations requiring uneven
    background illumination.
    """
    return load("page.png")


def horse():
    """Black and white silhouette of a horse.

    This image was downloaded from
    `openclipart <http://openclipart.org/detail/158377/horse-by-marauder>`

    Released into public domain and drawn and uploaded by Andreas Preuss
    (marauder).
    """
    return load("horse.png")


def clock():
    """Motion blurred clock.

    This photograph of a wall clock was taken while moving the camera in an
    approximately horizontal direction. It may be used to illustrate
    inverse filters and deconvolution.

    Released into the public domain by the photographer (Stefan van der Walt).
    """
    return load("clock_motion.png")


def immunohistochemistry():
    """Immunohistochemical (IHC) staining with hematoxylin counterstaining.

    This picture shows colonic glands where the IHC expression of FHL2 protein
    is revealed with DAB. Hematoxylin counterstaining is applied to enhance the
    negative parts of the tissue.

    This image was acquired at the Center for Microscopy And Molecular Imaging
    (CMMI).

    No known copyright restrictions.
    """
    return load("ihc.png")


def chelsea():
    """Chelsea the cat.

    An example with texture, prominent edges in horizontal and diagonal
    directions, as well as features of differing scales.

    Notes
    -----
    No copyright restrictions. CC0 by the photographer (Stefan van der Walt).
    """
    return load("chelsea.png")
| 22.625668 | 79 | 0.665564 | 524 | 4,231 | 5.34542 | 0.446565 | 0.039272 | 0.017137 | 0.023563 | 0.108533 | 0.079971 | 0.030703 | 0.030703 | 0.030703 | 0 | 0 | 0.004704 | 0.246277 | 4,231 | 186 | 80 | 22.747312 | 0.873628 | 0.684472 | 0 | 0 | 0 | 0 | 0.197746 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.307692 | false | 0 | 0.076923 | 0 | 0.692308 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
864fa86ab5f2cae736cf8742507940ca1f53be56 | 791 | py | Python | ansible/install/roles/rally/files/browbeat-rally/browbeat_rally/db/models.py | cloud-bulldozer/browbeat | 47eacfe85e6c236a2a9a21984c1692754203eda3 | [
"Apache-2.0"
] | 19 | 2019-07-12T08:46:58.000Z | 2022-03-11T19:25:28.000Z | ansible/install/roles/rally/files/browbeat-rally/browbeat_rally/db/models.py | cloud-bulldozer/browbeat | 47eacfe85e6c236a2a9a21984c1692754203eda3 | [
"Apache-2.0"
] | 1 | 2022-03-30T07:05:53.000Z | 2022-03-30T07:05:53.000Z | ansible/install/roles/rally/files/browbeat-rally/browbeat_rally/db/models.py | cloud-bulldozer/browbeat | 47eacfe85e6c236a2a9a21984c1692754203eda3 | [
"Apache-2.0"
] | 31 | 2019-06-10T20:08:44.000Z | 2022-02-23T15:43:32.000Z | # Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sqlalchemy as sa
from rally.common.db import models
BASE = models.BASE
class RallyLock(BASE, models.RallyBase):
    __tablename__ = "rallylocks"

    lock_uuid = sa.Column(sa.String(36), primary_key=True, nullable=False)
| 32.958333 | 76 | 0.745891 | 118 | 791 | 4.949153 | 0.70339 | 0.10274 | 0.044521 | 0.054795 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009259 | 0.180784 | 791 | 23 | 77 | 34.391304 | 0.891975 | 0.673831 | 0 | 0 | 0 | 0 | 0.040984 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.833333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
86520ad31274062e1d8abe30cb8363d6d73adaea | 378 | py | Python | contacts/validators.py | sjy5386/subshorts | d8170ee4a66989c3e852f86aa83bab6341e3aa10 | [
"MIT"
] | 3 | 2022-03-08T19:02:41.000Z | 2022-03-16T23:04:37.000Z | contacts/validators.py | sjy5386/subshorts | d8170ee4a66989c3e852f86aa83bab6341e3aa10 | [
"MIT"
] | 5 | 2022-03-17T02:16:52.000Z | 2022-03-18T02:55:25.000Z | contacts/validators.py | sjy5386/subshorts | d8170ee4a66989c3e852f86aa83bab6341e3aa10 | [
"MIT"
] | null | null | null | import re
from django.core.exceptions import ValidationError
def validate_country(value):
    if len(value) != 2 or not re.match(r'[A-Z]{2}', value):
        raise ValidationError('Please enter your country code. e.g. US')


def validate_phone(value):
    if not re.match(r'\+\d{1,3}\.\d+', value):
        raise ValidationError('Please enter your phone number. e.g. +1.1234567890')
| 27 | 77 | 0.674603 | 57 | 378 | 4.438596 | 0.578947 | 0.086957 | 0.079051 | 0.245059 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.048077 | 0.174603 | 378 | 13 | 78 | 29.076923 | 0.762821 | 0 | 0 | 0 | 0 | 0 | 0.277778 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
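The two patterns can be checked in isolation. This sketch uses `re.fullmatch` so no separate length test is needed (hypothetical helpers, not the Django validators themselves):

```python
import re

def is_valid_country(value):
    # Anchored equivalent of len(value) == 2 and re.match('[A-Z]{2}', value).
    return bool(re.fullmatch(r'[A-Z]{2}', value))

def is_valid_phone(value):
    # Country calling code (1-3 digits) then "." then the subscriber number.
    return bool(re.fullmatch(r'\+\d{1,3}\.\d+', value))
```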
86600d392defe0b3255e61eb50ee1a7c5f3a381a | 893 | py | Python | tests/contrib/autoguide/test_mean_field_entropy.py | cweniger/pyro | ba104f07ca17865d2600e8765d920d549fcb3fbc | [
"MIT"
] | 10 | 2020-03-18T14:41:25.000Z | 2021-07-04T08:49:57.000Z | tests/contrib/autoguide/test_mean_field_entropy.py | cweniger/pyro | ba104f07ca17865d2600e8765d920d549fcb3fbc | [
"MIT"
] | 19 | 2018-10-30T13:45:31.000Z | 2019-09-27T14:16:57.000Z | tests/contrib/autoguide/test_mean_field_entropy.py | cweniger/pyro | ba104f07ca17865d2600e8765d920d549fcb3fbc | [
"MIT"
] | 5 | 2020-06-21T23:40:35.000Z | 2021-11-09T16:18:42.000Z | import torch
import scipy.special as sc
import pytest
import pyro
import pyro.distributions as dist
from pyro.contrib.autoguide import mean_field_entropy
from tests.common import assert_equal
def mean_field_guide(batch_tensor, design):
    # A batched variable
    w_p = pyro.param("w_p", 0.2 * torch.ones(batch_tensor.shape))
    u_p = pyro.param("u_p", 0.5 * torch.ones(batch_tensor.shape))
    pyro.sample("w", dist.Bernoulli(w_p))
    pyro.sample("u", dist.Bernoulli(u_p))


def h(p):
    return -(sc.xlogy(p, p) + sc.xlog1py(1 - p, -p))


@pytest.mark.parametrize("guide,args,expected_entropy", [
    (mean_field_guide, (torch.Tensor([0.]), None), torch.Tensor([h(0.2) + h(0.5)])),
    (mean_field_guide, (torch.eye(2), None), (h(0.2) + h(0.5)) * torch.ones(2, 2))
])
def test_guide_entropy(guide, args, expected_entropy):
    assert_equal(mean_field_entropy(guide, args), expected_entropy)
| 30.793103 | 84 | 0.707727 | 149 | 893 | 4.067114 | 0.342282 | 0.074257 | 0.069307 | 0.118812 | 0.20462 | 0.019802 | 0 | 0 | 0 | 0 | 0 | 0.023196 | 0.131019 | 893 | 28 | 85 | 31.892857 | 0.757732 | 0.020157 | 0 | 0 | 0 | 0 | 0.040092 | 0.030928 | 0 | 0 | 0 | 0 | 0.1 | 1 | 0.15 | false | 0 | 0.35 | 0.05 | 0.55 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
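The helper `h(p)` is the Bernoulli entropy; the expected value `h(0.2) + h(0.5)` is the sum over the two independent Bernoulli sites in the guide. A pure-stdlib version of the same formula:

```python
from math import log

def bernoulli_entropy(p):
    """h(p) = -(p*log(p) + (1-p)*log(1-p)) in nats; 0 at p in {0, 1}."""
    if p in (0.0, 1.0):
        return 0.0  # xlogy-style convention: 0*log(0) == 0
    return -(p * log(p) + (1 - p) * log(1 - p))
```

Entropy is maximal at p = 0.5, where it equals log(2) nats.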
86670724a0af0be723927543829c5c15c9211ec9 | 809 | py | Python | github/memoized.py | alextbok/py-github | 56e11e9694c0e99a9814382249061c83afe09000 | [
"MIT"
] | 2 | 2017-06-14T15:39:13.000Z | 2022-02-06T18:43:58.000Z | github/memoized.py | alextbok/py-github | 56e11e9694c0e99a9814382249061c83afe09000 | [
"MIT"
] | null | null | null | github/memoized.py | alextbok/py-github | 56e11e9694c0e99a9814382249061c83afe09000 | [
"MIT"
] | null | null | null | #!/usr/bin/python
import collections
import functools

'''
Decorator that caches a function's return value each time it is called so if it is
called later with the same arguments, the cached value is returned, instead of
reevaluating the function
'''


class Memoized(object):
    def __init__(self, func):
        self.func = func
        self.cache = {}

    def __call__(self, *args):
        if not isinstance(args, collections.Hashable):
            # uncacheable arguments: call through without caching
            return self.func(*args)
        if args in self.cache:
            return self.cache[args]
        else:
            value = self.func(*args)
            self.cache[args] = value
            return value

    def __repr__(self):
        return self.func.__doc__

    def __get__(self, obj, objtype):
        return functools.partial(self.__call__, obj)
 | 31.115385 | 82 | 0.640297 | 105 | 809 | 4.704762 | 0.514286 | 0.080972 | 0.040486 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.270705 | 809 | 26 | 83 | 31.115385 | 0.837288 | 0.019778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0.055556 | 0.111111 | 0.611111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
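In modern Python the same caching behaviour is available from the standard library via `functools.lru_cache`, which also tracks hits and misses for you:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Memoized Fibonacci: each n is computed once, then served from cache."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```

Without memoization this recursion is exponential; with it, `fib(n)` is linear in `n`.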
8677ea9b68816e00837b2c99a64640c65e781cc0 | 1,205 | py | Python | Homework/HW08.py | siljaxane/Beetroot | 30385c2a78ae2599fc986f0268c83908d482643c | [
"MIT"
] | null | null | null | Homework/HW08.py | siljaxane/Beetroot | 30385c2a78ae2599fc986f0268c83908d482643c | [
"MIT"
] | null | null | null | Homework/HW08.py | siljaxane/Beetroot | 30385c2a78ae2599fc986f0268c83908d482643c | [
"MIT"
] | null | null | null | # TASK 1
# Write a function called oops that explicitly raises an IndexError exception when called.
# Then write another function that calls oops inside a try/except statement to catch the error.
# What happens if you change oops to raise KeyError instead of IndexError?
def oops():
    raise IndexError


def catch_oops():
    # Calls oops inside try/except so the IndexError is caught here.
    try:
        oops()
    except IndexError:
        print("This is an IndexError")


########## With KeyError ##########
# If oops raises KeyError instead, an `except IndexError` clause no longer
# catches it, so the handler below catches KeyError explicitly.
def oops_key():
    raise KeyError


def catch_oops_key():
    try:
        oops_key()
    except KeyError:
        print("This is a KeyError")
# TASK 2
# Write a function that takes in two numbers from the user via input(),
# call the numbers a and b, and then returns the value of squared a divided by b,
# construct a try-except block which raises an exception if the two values given by the input function were not numbers,
# and if value b was zero (cannot divide by zero).
def two_numbers():
    try:
        a = float(input())
        b = float(input())
    except ValueError:
        raise ValueError("Both values must be numbers")
    if b == 0:
        raise ZeroDivisionError("Cannot divide by zero: b must be non-zero")
    return a ** 2 / b

# Hi. You just need to name function arguments as a and b and do math operation on them.
 | 20.083333 | 121 | 0.675519 | 190 | 1,205 | 4.289474 | 0.463158 | 0.034356 | 0.058896 | 0.053988 | 0.112883 | 0.066258 | 0.066258 | 0 | 0 | 0 | 0 | 0.004353 | 0.237344 | 1,205 | 60 | 122 | 20.083333 | 0.881393 | 0.580913 | 0 | 0.666667 | 0 | 0 | 0.130526 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.233333 | false | 0.066667 | 0 | 0.033333 | 0.266667 | 0.133333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
8680bae6bbc27c723d919daac6d889323eac84e0 | 212 | py | Python | utils.py | ramonmedeiros/Pypermail | 342c0bbe324ed1f8bd927cb7656855360c330873 | [
"MIT"
] | 1 | 2018-08-21T14:41:55.000Z | 2018-08-21T14:41:55.000Z | utils.py | ramonmedeiros/Pypermail | 342c0bbe324ed1f8bd927cb7656855360c330873 | [
"MIT"
] | null | null | null | utils.py | ramonmedeiros/Pypermail | 342c0bbe324ed1f8bd927cb7656855360c330873 | [
"MIT"
] | null | null | null | #!/usr/bin/python
import urllib2
def getContent(url):
    """
    Retrieves url content
    """
    try:
        content = urllib2.urlopen(url)
    except IOError:
        return ""
    return content.read()
| 13.25 | 38 | 0.575472 | 22 | 212 | 5.545455 | 0.727273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013605 | 0.306604 | 212 | 15 | 39 | 14.133333 | 0.816327 | 0.179245 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.142857 | 0 | 0.571429 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
868899b5d3ed976e958ad41b19a94891758cb263 | 1,709 | py | Python | aether/python/avro/tests/__init__.py | eHealthAfrica/aether-python-library | d595b4e4b7a4d824c08da0e5162a620efac2b6ee | [
"Apache-2.0"
] | null | null | null | aether/python/avro/tests/__init__.py | eHealthAfrica/aether-python-library | d595b4e4b7a4d824c08da0e5162a620efac2b6ee | [
"Apache-2.0"
] | 1 | 2020-08-10T11:27:31.000Z | 2020-08-10T11:30:41.000Z | aether/python/avro/tests/__init__.py | eHealthAfrica/aether-python-library | d595b4e4b7a4d824c08da0e5162a620efac2b6ee | [
"Apache-2.0"
] | null | null | null | # Copyright (C) 2019 by eHealth Africa : http://www.eHealthAfrica.org
#
# See the NOTICE file distributed with this work for additional information
# regarding copyright ownership.
#
# Licensed under the Apache License, Version 2.0 (the 'License');
# you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# 'AS IS' BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
import pytest
from aether.python.tests import (
    EXAMPLE_ALL_TYPES,
    EXAMPLE_ANNOTATED_SCHEMA,
    EXAMPLE_AUTOGEN_SCHEMA,
    EXAMPLE_SIMPLE_SCHEMA
)
from aether.python.avro.schema import Node
from aether.python.avro.generation import SampleGenerator


@pytest.mark.unit
@pytest.fixture(scope='module')
def AutoSchema():
    return Node(EXAMPLE_AUTOGEN_SCHEMA)


@pytest.mark.unit
@pytest.fixture(scope='module')
def SimpleSchema():
    return Node(EXAMPLE_SIMPLE_SCHEMA)


@pytest.mark.unit
@pytest.fixture(scope='module')
def ComplexSchema():
    return Node(EXAMPLE_ANNOTATED_SCHEMA)


@pytest.mark.unit
@pytest.fixture(scope='function')
def SimpleGenerator():
    return SampleGenerator(EXAMPLE_SIMPLE_SCHEMA)


@pytest.mark.unit
@pytest.fixture(scope='function')
def ComplexGenerator():
    return SampleGenerator(EXAMPLE_ANNOTATED_SCHEMA)


@pytest.mark.unit
@pytest.fixture(scope='function')
def AllTypesGenerator():
    return SampleGenerator(EXAMPLE_ALL_TYPES)
| 25.893939 | 75 | 0.775307 | 227 | 1,709 | 5.748899 | 0.449339 | 0.045977 | 0.064368 | 0.091954 | 0.260536 | 0.260536 | 0.260536 | 0.260536 | 0.229119 | 0.099617 | 0 | 0.005413 | 0.135167 | 1,709 | 65 | 76 | 26.292308 | 0.877537 | 0.406085 | 0 | 0.363636 | 0 | 0 | 0.042126 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | true | 0 | 0.121212 | 0.181818 | 0.484848 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
8688cbeaeec4107e0c591dc4ef46598f7f2749a8 | 4,387 | py | Python | grr/tools/console.py | mikecb/grr | 52fdd977729af2a09a147301c55b8b7f1eccfa67 | [
"Apache-2.0"
] | 2 | 2019-06-02T13:11:16.000Z | 2019-06-25T13:30:46.000Z | grr/tools/console.py | mikecb/grr | 52fdd977729af2a09a147301c55b8b7f1eccfa67 | [
"Apache-2.0"
] | null | null | null | grr/tools/console.py | mikecb/grr | 52fdd977729af2a09a147301c55b8b7f1eccfa67 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
"""This is the GRR Console.
We can schedule a new flow for a specific client.
"""
# pylint: disable=unused-import
# Import things that are useful from the console.
import collections
import csv
import datetime
import getpass
import os
import re
import sys
import time
# pylint: disable=unused-import,g-bad-import-order
from grr.lib import server_plugins
# pylint: enable=g-bad-import-order
import logging
from grr.config import contexts
from grr.endtoend_tests import base
from grr.lib import access_control
from grr.lib import aff4
from grr.lib import artifact
from grr.lib import artifact_utils
from grr.lib import config_lib
from grr.lib import console_utils
from grr.lib import data_store
from grr.lib import export_utils
from grr.lib import flags
from grr.lib import flow
from grr.lib import flow_runner
from grr.lib import flow_utils
from grr.lib import hunts
from grr.lib import ipshell
from grr.lib import maintenance_utils
from grr.lib import server_startup
from grr.lib import type_info
from grr.lib import utils
from grr.lib import worker
from grr.lib.aff4_objects import aff4_grr
from grr.lib.aff4_objects import reports
from grr.lib.aff4_objects import security
# All the functions in this lib we want in local namespace.
# pylint: disable=wildcard-import
from grr.lib.console_utils import *
# pylint: enable=wildcard-import
from grr.lib.flows import console
from grr.lib.flows.console import debugging
from grr.lib.flows.general import memory
# pylint: enable=unused-import
from grr.tools import end_to_end_tests
flags.DEFINE_string("client", None,
                    "Initialise the console with this client id "
                    "(e.g. C.1234345).")

flags.DEFINE_string("reason", None,
                    "Create a default token with this access reason ")

flags.DEFINE_string("code_to_execute", None,
                    "If present, no console is started but the code given in "
                    "the flag is run instead (comparable to the -c option of "
                    "IPython).")

flags.DEFINE_string("command_file", None,
                    "If present, no console is started but the code given in "
                    "command file is supplied as input instead.")

flags.DEFINE_bool("exit_on_complete", True,
                  "If set to False and command_file or code_to_execute is "
                  "set we keep the console alive after the code completes.")


def Lister(arg):
  for x in arg:
    print x


def GetChildrenList(urn, token=None):
  return list(aff4.FACTORY.Open(urn, token=token).ListChildren())


def main(unused_argv):
  """Main."""
  banner = ("\nWelcome to the GRR console\n")
  config_lib.CONFIG.AddContext(contexts.COMMAND_LINE_CONTEXT)
  config_lib.CONFIG.AddContext(
      contexts.CONSOLE_CONTEXT,
      "Context applied when running the console binary.")
  server_startup.Init()

  # To make the console easier to use, we make a default token which will be
  # used in StartFlow operations.
  data_store.default_token = access_control.ACLToken(
      username=getpass.getuser(), reason=flags.FLAGS.reason)

  locals_vars = {
      "__name__": "GRR Console",
      "l": Lister,
      "lc": GetChildrenList,
      "o": aff4.FACTORY.Open,

      # Bring some symbols from other modules into the console's
      # namespace.
      "StartFlowAndWait": flow_utils.StartFlowAndWait,
      "StartFlowAndWorker": debugging.StartFlowAndWorker,
      "RunEndToEndTests": end_to_end_tests.RunEndToEndTests,
  }
  locals_vars.update(globals())  # add global variables to console

  if flags.FLAGS.client is not None:
    locals_vars["client"], locals_vars["token"] = console_utils.OpenClient(
        client_id=flags.FLAGS.client)

  if flags.FLAGS.code_to_execute:
    logging.info("Running code from flag: %s", flags.FLAGS.code_to_execute)
    exec (flags.FLAGS.code_to_execute)  # pylint: disable=exec-used
  elif flags.FLAGS.command_file:
    logging.info("Running code from file: %s", flags.FLAGS.command_file)
    execfile(flags.FLAGS.command_file)

  if (flags.FLAGS.exit_on_complete and
      (flags.FLAGS.code_to_execute or flags.FLAGS.command_file)):
    return
  else:  # We want the normal shell.
    locals_vars.update(globals())  # add global variables to console
    ipshell.IPShell(argv=[], user_ns=locals_vars, banner=banner)


if __name__ == "__main__":
  flags.StartMain(main)
| 30.255172 | 78 | 0.724185 | 634 | 4,387 | 4.880126 | 0.310726 | 0.067873 | 0.087266 | 0.103426 | 0.257595 | 0.089528 | 0.063348 | 0.063348 | 0.063348 | 0.031028 | 0 | 0.003947 | 0.191475 | 4,387 | 144 | 79 | 30.465278 | 0.868339 | 0.140871 | 0 | 0.042105 | 0 | 0 | 0.194862 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.021053 | 0.410526 | null | null | 0.010526 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
8690dab25dcfe244b19e8b1f2065a916c9c873cf | 1,488 | py | Python | framewirc/filters.py | meshy/framewirc | 3a75e95c3cdad8705d09e1e80d05feb766e97893 | [
"BSD-2-Clause"
] | 41 | 2015-10-13T22:46:07.000Z | 2018-05-06T16:46:18.000Z | framewirc/filters.py | meshy/framewirc | 3a75e95c3cdad8705d09e1e80d05feb766e97893 | [
"BSD-2-Clause"
] | 22 | 2015-10-13T22:52:00.000Z | 2018-03-05T20:30:50.000Z | framewirc/filters.py | meshy/framewirc | 3a75e95c3cdad8705d09e1e80d05feb766e97893 | [
"BSD-2-Clause"
] | 2 | 2015-10-16T22:51:10.000Z | 2015-11-23T23:19:49.000Z | def deny(blacklist):
"""
Decorates a handler to filter out a blacklist of commands.
The decorated handler will not be called if message.command is in the
blacklist:
@deny(['A', 'B'])
def handle_everything_except_a_and_b(client, message):
pass
Single-item blacklists may be passed as a string:
@deny('THIS')
def handle_everything_except_this(client, message):
pass
"""
blacklist = [blacklist] if isinstance(blacklist, str) else blacklist
def inner_decorator(handler):
def wrapped(client, message):
if message.command not in blacklist:
handler(client=client, message=message)
return wrapped
return inner_decorator
def allow(whitelist):
"""
Decorates a handler to filter all except a whitelist of commands
The decorated handler will only be called if message.command is in the
whitelist:
@allow(['A', 'B'])
def handle_only_a_and_b(client, message):
pass
Single-item whitelists may be passed as a string:
@allow('THIS')
def handle_only_this(client, message):
pass
"""
whitelist = [whitelist] if isinstance(whitelist, str) else whitelist
def inner_decorator(handler):
def wrapped(client, message):
if message.command in whitelist:
handler(client=client, message=message)
return wrapped
return inner_decorator
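A quick self-contained check of the whitelist behaviour (`allow` reproduced verbatim; `Message` is a stand-in for the client's message type):

```python
from collections import namedtuple

def allow(whitelist):
    whitelist = [whitelist] if isinstance(whitelist, str) else whitelist
    def inner_decorator(handler):
        def wrapped(client, message):
            if message.command in whitelist:
                handler(client=client, message=message)
        return wrapped
    return inner_decorator

Message = namedtuple("Message", ["command"])
handled = []

@allow("PRIVMSG")
def on_privmsg(client, message):
    handled.append(message.command)

# Only the whitelisted command reaches the handler.
on_privmsg(client=None, message=Message("PRIVMSG"))
on_privmsg(client=None, message=Message("PING"))
print(handled)  # ['PRIVMSG']
```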
| 28.075472 | 74 | 0.639113 | 180 | 1,488 | 5.183333 | 0.272222 | 0.111468 | 0.068596 | 0.040729 | 0.578778 | 0.525188 | 0.411576 | 0.411576 | 0.276527 | 0.276527 | 0 | 0 | 0.284946 | 1,488 | 52 | 75 | 28.615385 | 0.87688 | 0.491935 | 0 | 0.625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.375 | false | 0 | 0 | 0 | 0.625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
869218d7545d89609eba773b1264fb9fe6ad4a5d | 769 | py | Python | filing_fetcher/items.py | benbarber/companies_house_filing_fetcher | efab107541c557e7d1af2009e0617792bf5fd567 | [
"MIT"
] | 2 | 2020-08-10T14:43:25.000Z | 2021-05-14T11:54:19.000Z | filing_fetcher/items.py | benbarber/companies_house_filing_fetcher | efab107541c557e7d1af2009e0617792bf5fd567 | [
"MIT"
] | null | null | null | filing_fetcher/items.py | benbarber/companies_house_filing_fetcher | efab107541c557e7d1af2009e0617792bf5fd567 | [
"MIT"
] | 4 | 2019-11-18T15:37:23.000Z | 2020-05-01T22:36:35.000Z | # -*- coding: utf-8 -*-
# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html
import scrapy
class FilingItem(scrapy.Item):
    # From the BasicCompanyInformation file
    company_number = scrapy.Field()
    company_status = scrapy.Field()
    incorporation_date = scrapy.Field()
    accounts_next_due_date = scrapy.Field()
    accounts_last_made_up_date = scrapy.Field()
    accounts_account_category = scrapy.Field()

    # From the filing-history
    pages = scrapy.Field()
    date = scrapy.Field()
    type = scrapy.Field()
    barcode = scrapy.Field()
    file_urls = scrapy.Field()
    files = scrapy.Field()
    made_up_date = scrapy.Field()
    filing_history_status = scrapy.Field()
| 26.517241 | 52 | 0.698309 | 96 | 769 | 5.416667 | 0.520833 | 0.296154 | 0.144231 | 0.132692 | 0.080769 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001603 | 0.188557 | 769 | 28 | 53 | 27.464286 | 0.831731 | 0.262679 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.0625 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
86941abaf0c0fbd6627a7b5d5cde43c582d440dc | 1,306 | py | Python | __manifest__.py | davidkimolo/library_management_system_with_odoo | 0f3aca132f4e46d3aef9eab99d11194aa2eaecf1 | [
"Apache-2.0"
] | null | null | null | __manifest__.py | davidkimolo/library_management_system_with_odoo | 0f3aca132f4e46d3aef9eab99d11194aa2eaecf1 | [
"Apache-2.0"
] | null | null | null | __manifest__.py | davidkimolo/library_management_system_with_odoo | 0f3aca132f4e46d3aef9eab99d11194aa2eaecf1 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# See LICENSE file for full copyright and licensing details.
{
    'name': 'Library Management system odoo',
    'version': "10.0.1.0.6",
    'author': "David Kimolo",
    'category': 'School Management',
    'website': '',
    'license': "AGPL-3",
    'summary': """A Module For Library Management For School
Library Management
Library
School Library
Books
""",
    'complexity': 'easy',
    'depends': ['report_intrastat', 'school', 'stock',
                'account_accountant', 'product'],
    'data': ['security/library_security.xml',
             'security/ir.model.access.csv',
             'views/report_view.xml',
             'views/qrcode_label.xml',
             'views/library_data.xml',
             'views/library_view.xml',
             'views/library_sequence.xml',
             'views/library_category_data.xml',
             'data/library_card_schedular.xml',
             'wizard/update_prices_view.xml',
             'wizard/update_book_view.xml',
             'wizard/book_issue_no_view.xml',
             'wizard/card_no_view.xml'],
    'demo': ['demo/library_demo.xml'],
    'image': ['static/description/SchoolLibrary.png'],
    'installable': True,
    'application': True
}
| 34.368421 | 60 | 0.557427 | 132 | 1,306 | 5.348485 | 0.545455 | 0.05949 | 0.084986 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008715 | 0.29709 | 1,306 | 37 | 61 | 35.297297 | 0.760349 | 0.061256 | 0 | 0 | 0 | 0 | 0.651676 | 0.324612 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
86958af3dec9da1ec0742ebda86080c1c8920272 | 665 | py | Python | kozit/Update.py | zarlo/GitScripts | 3cc65aafaa1ce96fc07aca28a64f11b614500f31 | [
"MIT"
] | null | null | null | kozit/Update.py | zarlo/GitScripts | 3cc65aafaa1ce96fc07aca28a64f11b614500f31 | [
"MIT"
] | null | null | null | kozit/Update.py | zarlo/GitScripts | 3cc65aafaa1ce96fc07aca28a64f11b614500f31 | [
"MIT"
] | null | null | null | #!/usr/bin/python
import subprocess
import os.path
import platform
def git(*args):
    return subprocess.check_call(['git'] + list(args))


def gitupdate(repo, mPath):
    if not os.path.exists(mPath):
        git("clone", repo, mPath)
    else:
        # Run the pull inside the existing checkout; passing the path as an
        # extra argument to `git pull` would be treated as a refspec.
        git("-C", mPath, "pull")


def Restore(*args):
    return subprocess.check_call(['dotnet', 'restore'] + list(args))
print(platform.system())
gitupdate("https://github.com/kozit/kozit.git", "kozit/")
gitupdate("https://github.com/kozit/kozitScript.git", "kozitScript/")
gitupdate("https://github.com/kozit/libraries.git", "libraries/")
gitupdate("https://github.com/kozit/installer.git", "installer/")
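The clone-or-pull decision in gitupdate can be exercised without any network access by injecting the command runner (a sketch; `git_update` and `run` are illustrative names):

```python
import os.path

def git_update(repo, path, run):
    # Clone on first fetch, pull inside the checkout afterwards.
    if os.path.exists(path):
        run(["git", "-C", path, "pull"])
    else:
        run(["git", "clone", repo, path])

calls = []
git_update("https://example.com/x.git", "definitely-missing-dir", calls.append)
git_update("https://example.com/x.git", ".", calls.append)
print(calls)
```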
| 23.75 | 69 | 0.684211 | 85 | 665 | 5.329412 | 0.411765 | 0.12362 | 0.1766 | 0.203091 | 0.375276 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.126316 | 665 | 27 | 70 | 24.62963 | 0.77969 | 0.02406 | 0 | 0 | 0 | 0 | 0.328704 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.176471 | false | 0 | 0.176471 | 0.117647 | 0.470588 | 0.058824 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
869961aa3781955bb934debf0b5dfd7fd0b453f5 | 2,760 | py | Python | FP.py | demarchenac/PAKEv2- | 2b468aad30ac8b0557dc75c56403ac05d0a13424 | [
"MIT"
] | null | null | null | FP.py | demarchenac/PAKEv2- | 2b468aad30ac8b0557dc75c56403ac05d0a13424 | [
"MIT"
] | null | null | null | FP.py | demarchenac/PAKEv2- | 2b468aad30ac8b0557dc75c56403ac05d0a13424 | [
"MIT"
] | null | null | null | import random
class FP:
    def __init__(self, rep, p):
        self.p = p
        self.rep = rep % p

    def __add__(self, b):
        rep = (self.rep + b.rep) % self.p
        return FP(rep, self.p)

    def __sub__(self, b):
        rep = (self.rep - b.rep) % self.p
        return FP(rep, self.p)

    def get_random_element(self):
        rep = int(random.randint(0, self.p - 1))
        return FP(rep, self.p)

    def get_zero(self):
        return FP(0, self.p)

    def get_one(self):
        return FP(1, self.p)

    def get_minus_one(self):
        return FP(self.p - 1, self.p)

    def get_random_nonzero_element(self):
        element = random.randint(1, self.p - 1)
        return FP(element, self.p)

    def __pow__(self, n):
        return FP(pow(self.rep, n, self.p), self.p)

    def get_square(self):
        g = self.get_random_nonzero_element()
        one = self.get_one()
        output = g ** ((self.p - 1) // 2)
        while not output.equals(one):
            g = self.get_random_nonzero_element()
            output = g ** ((self.p - 1) // 2)
        return g

    def get_nonsquare(self):
        g = self.get_random_nonzero_element()
        one = self.get_one()
        output = g ** ((self.p - 1) // 2)
        while output.equals(one):
            g = self.get_random_nonzero_element()
            output = g ** ((self.p - 1) // 2)
        return g

    def __truediv__(self, y):
        return self * y.m_inverse()

    def get_order(self):
        return self.p

    def get_primitive_root(self):
        if self.p == 17:
            g = self.get_random_nonzero_element()
            g8 = g ** 8
            while g8.rep == 1:
                g = self.get_random_nonzero_element()
                g8 = g ** 8
            gi = g ** 15
            return g, gi

    def a_inverse(self):
        zero = self.get_zero()
        return zero - self

    def m_inverse(self):
        def egcd(a, b):
            if a == 0:
                return (b, 0, 1)
            g, y, x = egcd(b % a, a)
            return (g, x - (b // a) * y, y)

        def modinv(a, m):
            g, x, y = egcd(a, m)
            if g != 1:
                raise Exception("No modular inverse")
            return x % m

        return FP(modinv(self.rep, self.p), self.p)

    def __mul__(self, b):
        rep = (self.rep * b.rep) % self.p
        return FP(rep, self.p)

    def equals(self, a):
        if isinstance(a, self.__class__):
            return self.rep == a.rep and self.p == a.p
        return False

    def __eq__(self, other):
        return self.equals(other)

    def __str__(self):
        return str(self.rep)

    def is_one(self):
        return self.rep == 1

    def is_nonzero(self):
        return self.rep != 0
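The extended-Euclid inverse used by m_inverse can be sanity-checked against Python 3.8+'s built-in modular inverse, pow(a, -1, p). A standalone sketch of the same helpers:

```python
def egcd(a, b):
    # Returns (g, x, y) with a*x + b*y == g == gcd(a, b).
    if a == 0:
        return (b, 0, 1)
    g, y, x = egcd(b % a, a)
    return (g, x - (b // a) * y, y)

def modinv(a, m):
    g, x, _ = egcd(a, m)
    if g != 1:
        raise ValueError("No modular inverse")
    return x % m

p = 17
for a in range(1, p):
    assert (a * modinv(a, p)) % p == 1
    assert modinv(a, p) == pow(a, -1, p)  # Python 3.8+ built-in inverse
```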
| 24 | 54 | 0.50471 | 396 | 2,760 | 3.325758 | 0.156566 | 0.098709 | 0.066819 | 0.058466 | 0.426727 | 0.364465 | 0.364465 | 0.345482 | 0.345482 | 0.296887 | 0 | 0.017714 | 0.365942 | 2,760 | 114 | 55 | 24.210526 | 0.734857 | 0 | 0 | 0.238095 | 0 | 0 | 0.006522 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0 | 0.011905 | 0.119048 | 0.607143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
869cf20e5f2ad85c36e0cca21930d0a1f79d951e | 402 | py | Python | cyder/cydhcp/vrf/tests.py | jwasinger/cyder | de3e61ca47b71ac28d0a36568f250f4bc2617f22 | [
"BSD-3-Clause"
] | null | null | null | cyder/cydhcp/vrf/tests.py | jwasinger/cyder | de3e61ca47b71ac28d0a36568f250f4bc2617f22 | [
"BSD-3-Clause"
] | null | null | null | cyder/cydhcp/vrf/tests.py | jwasinger/cyder | de3e61ca47b71ac28d0a36568f250f4bc2617f22 | [
"BSD-3-Clause"
] | null | null | null | from cyder.base.tests import ModelTestMixin, TestCase
from cyder.cydhcp.vrf.models import Vrf
class VrfTests(TestCase, ModelTestMixin):
    @property
    def objs(self):
        """Create objects for test_create_delete."""
        return (
            Vrf.objects.create(name='a'),
            Vrf.objects.create(name='bbbbbbbbbbbbb'),
            Vrf.objects.create(name='Hello, world.'),
        )
| 28.714286 | 53 | 0.639303 | 45 | 402 | 5.666667 | 0.6 | 0.117647 | 0.188235 | 0.235294 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.238806 | 402 | 13 | 54 | 30.923077 | 0.833333 | 0.094527 | 0 | 0 | 0 | 0 | 0.075419 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.2 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
869ec4477ed0c6a645fd89a6caa85c13e59473a8 | 574 | py | Python | src/dqn/dqn.py | bestvibes/multi-agent-cooperation | d1233865123fde388ba84a917221265c1cfea550 | [
"MIT"
] | 3 | 2019-11-02T05:14:49.000Z | 2019-11-12T12:48:53.000Z | src/dqn/dqn.py | bestvibes/multi-agent-cooperation | d1233865123fde388ba84a917221265c1cfea550 | [
"MIT"
] | null | null | null | src/dqn/dqn.py | bestvibes/multi-agent-cooperation | d1233865123fde388ba84a917221265c1cfea550 | [
"MIT"
] | 1 | 2019-10-02T17:22:39.000Z | 2019-10-02T17:22:39.000Z | from collections import namedtuple
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
class DQN(nn.Module):
    def __init__(self):
        super(DQN, self).__init__()
        self.linear1 = nn.Linear(4, 64)
        self.linear2 = nn.Linear(64, 64)
        self.linear3 = nn.Linear(64, 64)
        self.linear4 = nn.Linear(64, 4)

    def forward(self, x):
        x = F.relu(self.linear1(x))
        x = F.relu(self.linear2(x))
        x = F.relu(self.linear3(x))
        x = self.linear4(x)
        return x
| 26.090909 | 40 | 0.614983 | 89 | 574 | 3.876404 | 0.337079 | 0.127536 | 0.086957 | 0.06087 | 0.188406 | 0 | 0 | 0 | 0 | 0 | 0 | 0.052009 | 0.263066 | 574 | 21 | 41 | 27.333333 | 0.763593 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.105263 | false | 0 | 0.315789 | 0 | 0.526316 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
86a56f0edecf80d356ff08ee98bf71f2570298d6 | 475 | py | Python | chrome/test/pyautolib/pyauto_errors.py | Gitman1989/chromium | 2b1cceae1075ef012fb225deec8b4c8bbe4bc897 | [
"BSD-3-Clause"
] | 2 | 2017-09-02T19:08:28.000Z | 2021-11-15T15:15:14.000Z | chrome/test/pyautolib/pyauto_errors.py | Gitman1989/chromium | 2b1cceae1075ef012fb225deec8b4c8bbe4bc897 | [
"BSD-3-Clause"
] | null | null | null | chrome/test/pyautolib/pyauto_errors.py | Gitman1989/chromium | 2b1cceae1075ef012fb225deec8b4c8bbe4bc897 | [
"BSD-3-Clause"
] | 1 | 2020-04-13T05:45:10.000Z | 2020-04-13T05:45:10.000Z | #!/usr/bin/python
# Copyright (c) 2010 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""PyAuto Errors."""
class JSONInterfaceError(RuntimeError):
"""Represent an error in the JSON ipc interface."""
pass
class NTPThumbnailNotShownError(RuntimeError):
"""Represent an error from attempting to manipulate a NTP thumbnail that
is not visible to a real user."""
pass
| 27.941176 | 74 | 0.745263 | 68 | 475 | 5.205882 | 0.779412 | 0.028249 | 0.129944 | 0.158192 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010127 | 0.168421 | 475 | 16 | 75 | 29.6875 | 0.886076 | 0.711579 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
86d7d9118c2549b70d76206f61993c2c3d5ee250 | 163 | py | Python | class/lect/Lect-08/read-phone5.py | MikenzieAlasca/F21-1010 | a7c15b8d9bf84f316aa6921f6d8a588c513a22b8 | [
"MIT"
] | 5 | 2021-09-09T21:08:14.000Z | 2021-12-14T02:30:52.000Z | class/lect/Lect-08/read-phone5.py | MikenzieAlasca/F21-1010 | a7c15b8d9bf84f316aa6921f6d8a588c513a22b8 | [
"MIT"
] | null | null | null | class/lect/Lect-08/read-phone5.py | MikenzieAlasca/F21-1010 | a7c15b8d9bf84f316aa6921f6d8a588c513a22b8 | [
"MIT"
] | 8 | 2021-09-09T17:46:07.000Z | 2022-02-08T22:41:35.000Z |
from readNameListCSV import readNameListCSV
x = readNameListCSV("5-lines.phone.csv" )
for key in x:
print ( "name={} phone={}".format(key,x[key]))
| 18.111111 | 58 | 0.650307 | 21 | 163 | 5.047619 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007634 | 0.196319 | 163 | 8 | 59 | 20.375 | 0.801527 | 0 | 0 | 0 | 0 | 0 | 0.254658 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0.25 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
86dcd8884ecf99a16857b6f5eb7e9dcece6c8729 | 658 | py | Python | backend/app/db/init_db.py | BartlomiejRasztabiga/Rentally | ba70199d329895a5295ceddd0ecc4c61928890dd | [
"MIT"
] | 2 | 2021-01-11T23:24:29.000Z | 2021-01-12T09:55:58.000Z | backend/app/db/init_db.py | BartlomiejRasztabiga/Rentally | ba70199d329895a5295ceddd0ecc4c61928890dd | [
"MIT"
] | null | null | null | backend/app/db/init_db.py | BartlomiejRasztabiga/Rentally | ba70199d329895a5295ceddd0ecc4c61928890dd | [
"MIT"
] | null | null | null | from sqlalchemy.orm import Session
from app import schemas, services
from app.core.config import settings
from app.db import base # noqa: F401
# make sure all SQL Alchemy models are imported (app.db.base) before initializing DB
# otherwise, SQL Alchemy might fail to initialize relationships properly
def init_db(db: Session) -> None:
user = services.user.get_by_email(db, email=settings.FIRST_SUPERUSER)
if not user:
user_in = schemas.UserCreateDto(
email=settings.FIRST_SUPERUSER,
password=settings.FIRST_SUPERUSER_PASSWORD,
is_admin=True,
)
services.user.create(db, obj_in=user_in)
| 32.9 | 84 | 0.720365 | 90 | 658 | 5.144444 | 0.566667 | 0.045356 | 0.142549 | 0.116631 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005758 | 0.208207 | 658 | 19 | 85 | 34.631579 | 0.882917 | 0.24924 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0.076923 | 0.307692 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 2 |
86df2a9250c300c37cd41f2757a8c92ef85c2966 | 8,355 | py | Python | utils/data_utils.py | luweishuang/TransFG | 4edfa775775ff96c703e4d907584cbbb23c6d595 | [
"MIT"
] | 1 | 2021-09-15T10:58:07.000Z | 2021-09-15T10:58:07.000Z | utils/data_utils.py | luweishuang/TransFG | 4edfa775775ff96c703e4d907584cbbb23c6d595 | [
"MIT"
] | null | null | null | utils/data_utils.py | luweishuang/TransFG | 4edfa775775ff96c703e4d907584cbbb23c6d595 | [
"MIT"
] | null | null | null | import logging
from PIL import Image
import os
import torch
from torchvision import transforms
from torch.utils.data import DataLoader, RandomSampler, DistributedSampler, SequentialSampler
from .dataset import CUB, CarsDataset, NABirds, dogs, INat2017, emptyJudge
from .autoaugment import AutoAugImageNetPolicy
logger = logging.getLogger(__name__)
def get_loader(args):
if args.local_rank not in [-1, 0]:
torch.distributed.barrier()
if args.dataset == 'CUB_200_2011':
train_transform=transforms.Compose([transforms.Resize((600, 600), Image.BILINEAR),
transforms.RandomCrop((448, 448)),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
test_transform=transforms.Compose([transforms.Resize((600, 600), Image.BILINEAR),
transforms.CenterCrop((448, 448)),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
trainset = CUB(root=args.data_root, is_train=True, transform=train_transform)
testset = CUB(root=args.data_root, is_train=False, transform=test_transform)
elif args.dataset == 'car':
trainset = CarsDataset(os.path.join(args.data_root,'devkit/cars_train_annos.mat'),
os.path.join(args.data_root,'cars_train'),
os.path.join(args.data_root,'devkit/cars_meta.mat'),
# cleaned=os.path.join(data_dir,'cleaned.dat'),
transform=transforms.Compose([
transforms.Resize((600, 600), Image.BILINEAR),
transforms.RandomCrop((448, 448)),
transforms.RandomHorizontalFlip(),
AutoAugImageNetPolicy(),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
)
testset = CarsDataset(os.path.join(args.data_root,'cars_test_annos_withlabels.mat'),
os.path.join(args.data_root,'cars_test'),
os.path.join(args.data_root,'devkit/cars_meta.mat'),
# cleaned=os.path.join(data_dir,'cleaned_test.dat'),
transform=transforms.Compose([
transforms.Resize((600, 600), Image.BILINEAR),
transforms.CenterCrop((448, 448)),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
)
elif args.dataset == 'dog':
train_transform=transforms.Compose([transforms.Resize((600, 600), Image.BILINEAR),
transforms.RandomCrop((448, 448)),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
test_transform=transforms.Compose([transforms.Resize((600, 600), Image.BILINEAR),
transforms.CenterCrop((448, 448)),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
trainset = dogs(root=args.data_root,
train=True,
cropped=False,
transform=train_transform,
download=False
)
testset = dogs(root=args.data_root,
train=False,
cropped=False,
transform=test_transform,
download=False
)
elif args.dataset == 'nabirds':
train_transform=transforms.Compose([transforms.Resize((600, 600), Image.BILINEAR),
transforms.RandomCrop((448, 448)),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
test_transform=transforms.Compose([transforms.Resize((600, 600), Image.BILINEAR),
transforms.CenterCrop((448, 448)),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
trainset = NABirds(root=args.data_root, train=True, transform=train_transform)
        testset = NABirds(root=args.data_root, train=False, transform=test_transform)
    elif args.dataset == 'INat2017':
        train_transform = transforms.Compose([transforms.Resize((400, 400), Image.BILINEAR),
                                              transforms.RandomCrop((304, 304)),
                                              transforms.RandomHorizontalFlip(),
                                              AutoAugImageNetPolicy(),
                                              transforms.ToTensor(),
                                              transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
        test_transform = transforms.Compose([transforms.Resize((400, 400), Image.BILINEAR),
                                             transforms.CenterCrop((304, 304)),
                                             transforms.ToTensor(),
                                             transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
        trainset = INat2017(args.data_root, 'train', train_transform)
        testset = INat2017(args.data_root, 'val', test_transform)
    elif args.dataset == 'emptyJudge5' or args.dataset == 'emptyJudge4':
        train_transform = transforms.Compose([transforms.Resize((600, 600), Image.BILINEAR),
                                              transforms.RandomCrop((448, 448)),
                                              transforms.RandomHorizontalFlip(),
                                              transforms.ToTensor(),
                                              transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
        # test_transform = transforms.Compose([transforms.Resize((600, 600), Image.BILINEAR),
        #                                      transforms.CenterCrop((448, 448)),
        #                                      transforms.ToTensor(),
        #                                      transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
        test_transform = transforms.Compose([transforms.Resize((448, 448), Image.BILINEAR),
                                             transforms.ToTensor(),
                                             transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
        trainset = emptyJudge(root=args.data_root, is_train=True, transform=train_transform)
        testset = emptyJudge(root=args.data_root, is_train=False, transform=test_transform)

    if args.local_rank == 0:
        torch.distributed.barrier()

    train_sampler = RandomSampler(trainset) if args.local_rank == -1 else DistributedSampler(trainset)
    test_sampler = SequentialSampler(testset) if args.local_rank == -1 else DistributedSampler(testset)
    train_loader = DataLoader(trainset,
                              sampler=train_sampler,
                              batch_size=args.train_batch_size,
                              num_workers=4,
                              drop_last=True,
                              pin_memory=True)
    test_loader = DataLoader(testset,
                             sampler=test_sampler,
                             batch_size=args.eval_batch_size,
                             num_workers=4,
                             pin_memory=True) if testset is not None else None
    return train_loader, test_loader
# --- data/getLocs.py (repo: qmkarriem/endings, license: MIT) ---
latLongList = open('/Users/quran/Dropbox/MusicArtProjects/endsSentenceFunction/data/latLongList.txt', "r")
longsList = open('/Users/quran/Dropbox/MusicArtProjects/endsSentenceFunction/data/longsList.txt', "w")

# Get longitudes from the list; each input line looks like "lat, long".
for line in latLongList:
    temp = line.split(', ')
    longsList.write(temp[1][:-2])  # drop the two-character line terminator
    longsList.write("\n")
longsList.close()  # flush the output before reopening it for reading

longsListRead = open('/Users/quran/Dropbox/MusicArtProjects/endsSentenceFunction/data/longsList.txt', "r")
for line in longsListRead:
    print(line)

latLongList.close()
longsListRead.close()
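The `[:-2]` slice above only works if the input file uses two-character (Windows-style `"\r\n"`) line endings. Here is a small self-contained sketch of the same longitude extraction on in-memory sample data (the coordinates are made up for illustration):

```python
# Hypothetical in-memory sample mirroring the "lat, long" format that
# getLocs.py reads; the [:-2] slice assumes "\r\n" line endings.
sample_lines = ["35.7796, -78.6382\r\n", "40.7128, -74.0060\r\n"]

longs = []
for line in sample_lines:
    temp = line.split(', ')
    longs.append(temp[1][:-2])  # drop the two-character "\r\n" terminator

print(longs)  # → ['-78.6382', '-74.0060']
```

With Unix `"\n"` endings, the same slice would also delete the last digit of each longitude, so `line.rstrip()` is the safer choice when the file's origin is unknown.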
# --- pack/demos/Couple_different_packages.py (repo: kingeta/VOPP, license: MIT) ---
from request import request_function
payload = """{
    "itemSets": [
        {
            "refId": 0,
            "color": "tomato",
            "weight": 0,
            "dimensions": {
                "x": 5,
                "y": 6,
                "z": 4
            },
            "quantity": 1
        },
        {
            "refId": 1,
            "color": "cornflowerblue",
            "weight": 0,
            "dimensions": {
                "x": 2.5,
                "y": 3,
                "z": 2
            },
            "quantity": 4
        }
    ],
    "boxTypes": [
        {
            "weightMax": 150,
            "name": "5x6x8",
            "dimensions": {
                "x": 5,
                "y": 6,
                "z": 8
            }
        }
    ]
}"""
r = request_function(payload)
print(r.text)
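Since `payload` is a plain JSON document, it can be sanity-checked with the standard library before it is sent. This sketch is not part of the original demo; it re-declares an abbreviated copy of the payload and validates it:

```python
import json

# Abbreviated copy of the demo's request body, kept structurally identical.
payload = """{
  "itemSets": [
    {"refId": 0, "color": "tomato", "weight": 0,
     "dimensions": {"x": 5, "y": 6, "z": 4}, "quantity": 1},
    {"refId": 1, "color": "cornflowerblue", "weight": 0,
     "dimensions": {"x": 2.5, "y": 3, "z": 2}, "quantity": 4}
  ],
  "boxTypes": [
    {"weightMax": 150, "name": "5x6x8",
     "dimensions": {"x": 5, "y": 6, "z": 8}}
  ]
}"""

# json.loads raises ValueError on malformed input, so a successful parse
# is already a basic validity check before calling request_function.
data = json.loads(payload)
total_items = sum(s["quantity"] for s in data["itemSets"])
print(total_items)  # → 5
```

Validating locally like this surfaces typos in the hand-written JSON string before they turn into an opaque HTTP error from the packing service.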
# --- lib/matplotlib/pylab.py (repo: jbbrokaw/matplotlib, licenses: MIT, BSD-3-Clause) ---
"""
This is a procedural interface to the matplotlib object-oriented
plotting library.
The following plotting commands are provided; the majority have
MATLAB |reg| [*]_ analogs and similar arguments.
.. |reg| unicode:: 0xAE
_Plotting commands
acorr - plot the autocorrelation function
annotate - annotate something in the figure
arrow - add an arrow to the axes
axes - Create a new axes
axhline - draw a horizontal line across axes
axvline - draw a vertical line across axes
axhspan - draw a horizontal bar across axes
axvspan - draw a vertical bar across axes
axis - Set or return the current axis limits
autoscale - turn axis autoscaling on or off, and apply it
bar - make a bar chart
barh - a horizontal bar chart
broken_barh - a set of horizontal bars with gaps
box - set the axes frame on/off state
boxplot - make a box and whisker plot
violinplot - make a violin plot
cla - clear current axes
clabel - label a contour plot
clf - clear a figure window
clim - adjust the color limits of the current image
close - close a figure window
colorbar - add a colorbar to the current figure
cohere - make a plot of coherence
contour - make a contour plot
contourf - make a filled contour plot
csd - make a plot of cross spectral density
delaxes - delete an axes from the current figure
draw - Force a redraw of the current figure
errorbar - make an errorbar graph
figlegend - make legend on the figure rather than the axes
figimage - make a figure image
figtext - add text in figure coords
figure - create or change active figure
fill - make filled polygons
findobj - recursively find all objects matching some criteria
gca - return the current axes
gcf - return the current figure
gci - get the current image, or None
getp - get a graphics property
grid - set whether gridding is on
hist - make a histogram
hold - set the axes hold state
ioff - turn interaction mode off
ion - turn interaction mode on
isinteractive - return True if interaction mode is on
imread - load image file into array
imsave - save array as an image file
imshow - plot image data
ishold - return the hold state of the current axes
legend - make an axes legend
locator_params - adjust parameters used in locating axis ticks
loglog - a log log plot
matshow - display a matrix in a new figure preserving aspect
margins - set margins used in autoscaling
pause - pause for a specified interval
pcolor - make a pseudocolor plot
pcolormesh - make a pseudocolor plot using a quadrilateral mesh
pie - make a pie chart
plot - make a line plot
plot_date - plot dates
plotfile - plot column data from an ASCII tab/space/comma delimited file
pie - pie charts
polar - make a polar plot on a PolarAxes
psd - make a plot of power spectral density
quiver - make a direction field (arrows) plot
rc - control the default params
rgrids - customize the radial grids and labels for polar
savefig - save the current figure
scatter - make a scatter plot
setp - set a graphics property
semilogx - log x axis
semilogy - log y axis
show - show the figures
specgram - a spectrogram plot
spy - plot sparsity pattern using markers or image
stem - make a stem plot
subplot - make one subplot (numrows, numcols, axesnum)
subplots - make a figure with a set of (numrows, numcols) subplots
subplots_adjust - change the params controlling the subplot positions of current figure
subplot_tool - launch the subplot configuration tool
suptitle - add a figure title
table - add a table to the plot
text - add some text at location x,y to the current axes
thetagrids - customize the radial theta grids and labels for polar
tick_params - control the appearance of ticks and tick labels
ticklabel_format - control the format of tick labels
title - add a title to the current axes
tricontour - make a contour plot on a triangular grid
tricontourf - make a filled contour plot on a triangular grid
tripcolor - make a pseudocolor plot on a triangular grid
triplot - plot a triangular grid
xcorr - plot the autocorrelation function of x and y
xlim - set/get the xlimits
ylim - set/get the ylimits
xticks - set/get the xticks
yticks - set/get the yticks
xlabel - add an xlabel to the current axes
ylabel - add a ylabel to the current axes
autumn - set the default colormap to autumn
bone - set the default colormap to bone
cool - set the default colormap to cool
copper - set the default colormap to copper
flag - set the default colormap to flag
gray - set the default colormap to gray
hot - set the default colormap to hot
hsv - set the default colormap to hsv
jet - set the default colormap to jet
pink - set the default colormap to pink
prism - set the default colormap to prism
spring - set the default colormap to spring
summer - set the default colormap to summer
winter - set the default colormap to winter
spectral - set the default colormap to spectral
_Event handling
connect - register an event handler
disconnect - remove a connected event handler
_Matrix commands
cumprod - the cumulative product along a dimension
cumsum - the cumulative sum along a dimension
detrend - remove the mean or best fit line from an array
diag - the k-th diagonal of matrix
diff - the n-th difference of an array
eig - the eigenvalues and eigen vectors of v
eye - a matrix where the k-th diagonal is ones, else zero
find - return the indices where a condition is nonzero
fliplr - flip the rows of a matrix up/down
flipud - flip the columns of a matrix left/right
linspace - a linear spaced vector of N values from min to max inclusive
logspace - a log spaced vector of N values from min to max inclusive
meshgrid - repeat x and y to make regular matrices
ones - an array of ones
rand - an array from the uniform distribution [0,1]
randn - an array from the normal distribution
rot90 - rotate matrix k*90 degrees counterclockwise
squeeze - squeeze an array removing any dimensions of length 1
tri - a triangular matrix
tril - a lower triangular matrix
triu - an upper triangular matrix
vander - the Vandermonde matrix of vector x
svd - singular value decomposition
zeros - a matrix of zeros
_Probability
normpdf - The Gaussian probability density function
rand - random numbers from the uniform distribution
randn - random numbers from the normal distribution
_Statistics
amax - the maximum along dimension m
amin - the minimum along dimension m
corrcoef - correlation coefficient
cov - covariance matrix
mean - the mean along dimension m
median - the median along dimension m
norm - the norm of vector x
prod - the product along dimension m
ptp - the max-min along dimension m
std - the standard deviation along dimension m
asum - the sum along dimension m
ksdensity - the kernel density estimate
_Time series analysis
bartlett - M-point Bartlett window
blackman - M-point Blackman window
cohere - the coherence using average periodogram
csd - the cross spectral density using average periodogram
fft - the fast Fourier transform of vector x
hamming - M-point Hamming window
hanning - M-point Hanning window
hist - compute the histogram of x
kaiser - M length Kaiser window
psd - the power spectral density using average periodogram
sinc - the sinc function of array x
_Dates
date2num - convert python datetimes to numeric representation
drange - create an array of numbers for date plots
num2date - convert numeric type (float days since 0001) to datetime
_Other
angle - the angle of a complex array
griddata - interpolate irregularly distributed data to a regular grid
load - Deprecated--please use loadtxt.
loadtxt - load ASCII data into array.
polyfit - fit x, y to an n-th order polynomial
polyval - evaluate an n-th order polynomial
roots - the roots of the polynomial coefficients in p
save - Deprecated--please use savetxt.
savetxt - save an array to an ASCII file.
trapz - trapezoidal integration
__end
.. [*] MATLAB is a registered trademark of The MathWorks, Inc.
"""
from __future__ import (absolute_import, division, print_function,
                        unicode_literals)
import six
import sys, warnings
from matplotlib.cbook import flatten, is_string_like, exception_to_str, \
    silent_list, iterable, dedent
import matplotlib as mpl
# make mpl.finance module available for backwards compatibility, in case folks
# using pylab interface depended on not having to import it
import matplotlib.finance
from matplotlib.dates import date2num, num2date, \
    datestr2num, strpdate2num, drange, \
    epoch2num, num2epoch, mx2num, \
    DateFormatter, IndexDateFormatter, DateLocator, \
    RRuleLocator, YearLocator, MonthLocator, WeekdayLocator, \
    DayLocator, HourLocator, MinuteLocator, SecondLocator, \
    rrule, MO, TU, WE, TH, FR, SA, SU, YEARLY, MONTHLY, \
    WEEKLY, DAILY, HOURLY, MINUTELY, SECONDLY, relativedelta
import matplotlib.dates # Do we need this at all?
# bring all the symbols in so folks can import them from
# pylab in one fell swoop
## We are still importing too many things from mlab; more cleanup is needed.
from matplotlib.mlab import griddata, stineman_interp, slopes, \
    inside_poly, poly_below, poly_between, \
    is_closed_polygon, path_length, distances_along_curve, vector_lengths

from matplotlib.mlab import window_hanning, window_none, detrend, demean, \
    detrend_mean, detrend_none, detrend_linear, entropy, normpdf, \
    find, longest_contiguous_ones, longest_ones, prepca, \
    prctile, prctile_rank, \
    center_matrix, rk4, bivariate_normal, get_xyz_where, \
    get_sparse_matrix, dist, \
    dist_point_to_segment, segments_intersect, fftsurr, movavg, \
    exp_safe, \
    amap, rms_flat, l1norm, l2norm, norm_flat, frange, identity, \
    base_repr, binary_repr, log2, ispower2, \
    rec_append_fields, rec_drop_fields, rec_join, csv2rec, rec2csv, isvector
import matplotlib.mlab as mlab
import matplotlib.cbook as cbook
from numpy import *
from numpy.fft import *
from numpy.random import *
from numpy.linalg import *
from matplotlib.pyplot import *
# provide the recommended module abbrevs in the pylab namespace
import matplotlib.pyplot as plt
import numpy as np
import numpy.ma as ma
# don't let numpy's datetime hide stdlib
import datetime
# This is needed, or bytes will be numpy.random.bytes from
# "from numpy.random import *" above
bytes = __builtins__['bytes']
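The last line above guards against a wildcard import shadowing a builtin: `from numpy.random import *` exports its own `bytes` function, which would otherwise hide the built-in type. A minimal stdlib-only sketch of the same restore pattern, using the `builtins` module instead of the `__builtins__` dict for portability (this is an illustration, not matplotlib code):

```python
import builtins

# Simulate a wildcard import clobbering a builtin name in this namespace.
bytes = "shadowed"           # the name no longer refers to the built-in type
assert bytes != builtins.bytes

# Restore it, as pylab does, but via the builtins module, which works
# whether __builtins__ is a dict (in a module) or a module (in __main__).
bytes = builtins.bytes
assert bytes is builtins.bytes
print(bytes(b"ok"))  # → b'ok'
```

Reaching through `builtins` is the documented way to recover a shadowed builtin; indexing `__builtins__` as a dict, as the original does, is an implementation detail that only holds inside imported modules.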
# --- Tests/test_SearchIO_hhsuite2_text.py (repo: fredricj/biopython, license: BSD-3-Clause) ---
# Copyright 2019 by Jens Thomas. All rights reserved.
# This code is part of the Biopython distribution and governed by its
# license. Please see the LICENSE file that should have been included
# as part of this package.
"""Tests for SearchIO HhsuiteIO parsers."""
import os
import unittest
from Bio.SearchIO import parse
# test case files are in the HHsuite directory
TEST_DIR = "HHsuite"
FMT = "hhsuite2-text"
def get_file(filename):
    """Return the path of a test file."""
    return os.path.join(TEST_DIR, filename)
class HhsuiteCases(unittest.TestCase):
    """Test hhsuite2 output."""
    def test_2uvo(self):
        """Parsing 2uvo."""
        txt_file = get_file("2uvo_hhblits.hhr")
        qresults = parse(txt_file, FMT)
        # test first and only qresult
        qresult = next(qresults)
        num_hits = 16
        self.assertEqual("HHSUITE", qresult.program)
        self.assertEqual("2UVO:A|PDBID|CHAIN|SEQUENCE", qresult.id)
        self.assertEqual(171, qresult.seq_len)
        self.assertEqual(num_hits, len(qresult))
        hit = qresult[0]
        self.assertEqual("2uvo_A", hit.id)
        self.assertEqual(
            "Agglutinin isolectin 1; carbohydrate-binding protein, hevein domain, chitin-binding,"
            " GERM agglutinin, chitin-binding protein; HET: NDG NAG GOL; 1.40A {Triticum aestivum}"
            " PDB: 1wgc_A* 2cwg_A* 2x3t_A* 4aml_A* 7wga_A 9wga_A 2wgc_A 1wgt_A 1k7t_A* 1k7v_A* 1k7u_A"
            " 2x52_A* 1t0w_A*", hit.description)
        self.assertTrue(hit.is_included)
        self.assertEqual(3.7e-34, hit.evalue)
        self.assertEqual(210.31, hit.score)
        self.assertEqual(2, len(hit))
        hsp = hit.hsps[0]
        self.assertTrue(hsp.is_included)
        self.assertEqual(0, hsp.output_index)
        self.assertEqual(99.95, hsp.prob)
        self.assertEqual(210.31, hsp.score)
        self.assertEqual(3.7e-34, hsp.evalue)
        self.assertEqual(0, hsp.hit_start)
        self.assertEqual(171, hsp.hit_end)
        self.assertEqual(0, hsp.query_start)
        self.assertEqual(171, hsp.query_end)
        self.assertEqual(
            "ERCGEQGSNMECPNNLCCSQYGYCGMGGDYCGKGCQNGACWTSKRCGSQAGGATCTNNQCCSQYGYCGFGAEYC"
            "GAGCQGGPCRADIKCGSQAGGKLCPNNLCCSQWGFCGLGSEFCGGGCQSGACSTDKPCGKDAGGRVCTNNYCCS"
            "KWGSCGIGPGYCGAGCQSGGCDG",
            str(hsp.hit.seq))
        self.assertEqual(
            "ERCGEQGSNMECPNNLCCSQYGYCGMGGDYCGKGCQNGACWTSKRCGSQAGGATCTNNQCCSQYGYCGFGAEYC"
            "GAGCQGGPCRADIKCGSQAGGKLCPNNLCCSQWGFCGLGSEFCGGGCQSGACSTDKPCGKDAGGRVCTNNYCCS"
            "KWGSCGIGPGYCGAGCQSGGCDG",
            str(hsp.query.seq))
        # Check last hit
        hit = qresult[num_hits - 1]
        self.assertEqual("4z8i_A", hit.id)
        self.assertEqual(
            "BBTPGRP3, peptidoglycan recognition protein 3; chitin-binding domain, "
            "AM hydrolase; 2.70A {Branchiostoma belcheri tsingtauense}", hit.description)
        self.assertTrue(hit.is_included)
        self.assertEqual(0.11, hit.evalue)
        self.assertEqual(36.29, hit.score)
        self.assertEqual(2, len(hit))
        # Check we can get the original last HSP from the file.
        num_hsps = 32
        self.assertEqual(num_hsps, len(qresult.hsps))
        hsp = qresult.hsps[-1]
        self.assertTrue(hsp.is_included)
        self.assertEqual(num_hsps - 1, hsp.output_index)
        self.assertEqual(2.6, hsp.evalue)
        self.assertEqual(25.90, hsp.score)
        self.assertEqual(40.43, hsp.prob)
        self.assertEqual(10, hsp.hit_start)
        self.assertEqual(116, hsp.hit_end)
        self.assertEqual(53, hsp.query_start)
        self.assertEqual(163, hsp.query_end)
        self.assertEqual(
            "XCXXXXCCXXXXXCXXXXXXCXXXCXXXXCXXXXXCXXX--XXXCXXXXCCXXXXXCXXXXXXCXXXCXXXXCXXXXXCX"
            "XX--XXXCXXXXCCXXXXXCXXXXXXCXXX",
            str(hsp.hit.seq))
        self.assertEqual(
            "TCTNNQCCSQYGYCGFGAEYCGAGCQGGPCRADIKCGSQAGGKLCPNNLCCSQWGFCGLGSEFCGGGCQSGACSTDKPCG"
            "KDAGGRVCTNNYCCSKWGSCGIGPGYCGAG",
            str(hsp.query.seq))
    def test_2uvo_onlyheader(self):
        """Parsing 2uvo with only header present."""
        txt_file = get_file("2uvo_hhblits_onlyheader.hhr")
        qresults = parse(txt_file, FMT)
        with self.assertRaises(RuntimeError):
            next(qresults)

    def test_2uvo_emptytable(self):
        """Parsing 2uvo with empty results table."""
        txt_file = get_file("2uvo_hhblits_emptytable.hhr")
        qresults = parse(txt_file, FMT)
        with self.assertRaises(RuntimeError):
            next(qresults)
    def test_allx(self):
        """Parsing allx.hhr file."""
        txt_file = get_file("allx.hhr")
        qresults = parse(txt_file, FMT)
        # test first and only qresult
        qresult = next(qresults)
        num_hits = 10
        self.assertEqual("HHSUITE", qresult.program)
        self.assertEqual("Only X amino acids", qresult.id)
        self.assertEqual(39, qresult.seq_len)
        self.assertEqual(num_hits, len(qresult))
        hit = qresult[0]
        self.assertEqual("1klr_A", hit.id)
        self.assertEqual(
            "Zinc finger Y-chromosomal protein; transcription; NMR {Synthetic} SCOP: g.37.1.1 PDB: "
            "5znf_A 1kls_A 1xrz_A* 7znf_A", hit.description)
        self.assertTrue(hit.is_included)
        self.assertEqual(3.4E+04, hit.evalue)
        self.assertEqual(-0.01, hit.score)
        self.assertEqual(1, len(hit))
        hsp = hit.hsps[0]
        self.assertTrue(hsp.is_included)
        self.assertEqual(0, hsp.output_index)
        self.assertEqual(3.4E+04, hsp.evalue)
        self.assertEqual(-0.01, hsp.score)
        self.assertEqual(0.04, hsp.prob)
        self.assertEqual(23, hsp.hit_start)
        self.assertEqual(24, hsp.hit_end)
        self.assertEqual(38, hsp.query_start)
        self.assertEqual(39, hsp.query_end)
        self.assertEqual("T", str(hsp.hit.seq))
        self.assertEqual("X", str(hsp.query.seq))
        # Check last hit
        hit = qresult[num_hits - 1]
        self.assertEqual("1zfd_A", hit.id)
        self.assertEqual(
            "SWI5; DNA binding motif, zinc finger DNA binding domain; NMR {Saccharomyces cerevisiae}"
            " SCOP: g.37.1.1", hit.description)
        self.assertTrue(hit.is_included)
        self.assertEqual(3.6e+04, hit.evalue)
        self.assertEqual(0.03, hit.score)
        self.assertEqual(1, len(hit))
        # Check we can get the original last HSP from the file.
        num_hsps = num_hits
        self.assertEqual(num_hsps, len(qresult.hsps))
        hsp = qresult.hsps[-1]
        self.assertTrue(hsp.is_included)
        self.assertEqual(num_hsps - 1, hsp.output_index)
        self.assertEqual(3.6e+04, hsp.evalue)
        self.assertEqual(0.03, hsp.score)
        self.assertEqual(0.03, hsp.prob)
        self.assertEqual(0, hsp.hit_start)
        self.assertEqual(1, hsp.hit_end)
        self.assertEqual(3, hsp.query_start)
        self.assertEqual(4, hsp.query_end)
        self.assertEqual("D", str(hsp.hit.seq))
        self.assertEqual("X", str(hsp.query.seq))
    def test_4y9h_nossm(self):
        """Parsing 4y9h_hhsearch_server_NOssm.hhr file."""
        txt_file = get_file("4y9h_hhsearch_server_NOssm.hhr")
        qresults = parse(txt_file, FMT)
        # test first and only qresult
        qresult = next(qresults)
        num_hits = 29
        self.assertEqual("HHSUITE", qresult.program)
        self.assertEqual("4Y9H:A|PDBID|CHAIN|SEQUENCE", qresult.id)
        self.assertEqual(226, qresult.seq_len)
        self.assertEqual(num_hits, len(qresult))
        hit = qresult[0]
        self.assertEqual("5ZIM_A", hit.id)
        self.assertEqual(
            "Bacteriorhodopsin; proton pump, membrane protein, PROTON; HET: L2P, RET; 1.25A {Halobacterium"
            " salinarum}; Related PDB entries: 1R84_A 1KG8_A 1KME_B 1KGB_A 1KG9_A 1KME_A 4X31_A 5ZIL_A 1E0P_A "
            "4X32_A 5ZIN_A 1S53_B 1S51_B 1S53_A 1S54_A 1F50_A 1S54_B 1S51_A 1F4Z_A 5J7A_A 1S52_B 1S52_A 4Y9H_A "
            "3T45_A 3T45_C 3T45_B 1C3W_A 1L0M_A", hit.description)
        self.assertTrue(hit.is_included)
        self.assertEqual(2.1e-48, hit.evalue)
        self.assertEqual(320.44, hit.score)
        self.assertEqual(1, len(hit))
        hsp = hit.hsps[0]
        self.assertTrue(hsp.is_included)
        self.assertEqual(0, hsp.output_index)
        self.assertEqual(2.1e-48, hsp.evalue)
        self.assertEqual(320.44, hsp.score)
        self.assertEqual(100.00, hsp.prob)
        self.assertEqual(1, hsp.hit_start)
        self.assertEqual(227, hsp.hit_end)
        self.assertEqual(0, hsp.query_start)
        self.assertEqual(226, hsp.query_end)
        self.assertEqual(
            "GRPEWIWLALGTALMGLGTLYFLVKGMGVSDPDAKKFYAITTLVPAIAFTMYLSMLLGYGLTMVPFGGEQNPIYWARYAD"
            "WLFTTPLLLLDLALLVDADQGTILALVGADGIMIGTGLVGALTKVYSYRFVWWAISTAAMLYILYVLFFGFTSKAESMRP"
            "EVASTFKVLRNVTVVLWSAYPVVWLIGSEGAGIVPLNIETLLFMVLDVSAKVGFGLILLRSRAIFG",
            str(hsp.hit.seq))
        self.assertEqual(
            "GRPEWIWLALGTALMGLGTLYFLVKGMGVSDPDAKKFYAITTLVPAIAFTMYLSMLLGYGLTMVPFGGEQNPIYWARYAD"
            "WLFTTPLLLLDLALLVDADQGTILALVGADGIMIGTGLVGALTKVYSYRFVWWAISTAAMLYILYVLFFGFTSKAESMRP"
            "EVASTFKVLRNVTVVLWSAYPVVWLIGSEGAGIVPLNIETLLFMVLDVSAKVGFGLILLRSRAIFG",
            str(hsp.query.seq))
        # Check last hit
        hit = qresult[num_hits - 1]
        self.assertEqual("5ABB_Z", hit.id)
        self.assertEqual(
            "PROTEIN TRANSLOCASE SUBUNIT SECY, PROTEIN; TRANSLATION, RIBOSOME, MEMBRANE PROTEIN, "
            "TRANSLOCON; 8.0A {ESCHERICHIA COLI}", hit.description)
        self.assertTrue(hit.is_included)
        self.assertEqual(3.3e-05, hit.evalue)
        self.assertEqual(51.24, hit.score)
        self.assertEqual(1, len(hit))
        # Check we can get the original last HSP from the file.
        num_hsps = num_hits
        self.assertEqual(num_hsps, len(qresult.hsps))
        hsp = qresult.hsps[-1]
        self.assertTrue(hsp.is_included)
        self.assertEqual(num_hsps - 1, hsp.output_index)
        self.assertEqual(3.3e-05, hsp.evalue)
        self.assertEqual(51.24, hsp.score)
        self.assertEqual(96.55, hsp.prob)
        self.assertEqual(14, hsp.hit_start)
        self.assertEqual(65, hsp.hit_end)
        self.assertEqual(7, hsp.query_start)
        self.assertEqual(59, hsp.query_end)
        self.assertEqual("FWLVTAALLASTVFFFVERDRVS-AKWKTSLTVSGLVTGIAFWHYMYMRGVW", str(hsp.hit.seq))
        self.assertEqual("LALGTALMGLGTLYFLVKGMGVSDPDAKKFYAITTLVPAIAFTMYLSMLLGY", str(hsp.query.seq))
    def test_q9bsu1(self):
        """Parsing hhsearch_q9bsu1_uniclust_w_ss_pfamA_30.hhr file."""
        txt_file = get_file("hhsearch_q9bsu1_uniclust_w_ss_pfamA_30.hhr")
        qresults = parse(txt_file, FMT)
        # test first and only qresult
        qresult = next(qresults)
        num_hits = 12
        self.assertEqual("HHSUITE", qresult.program)
        self.assertEqual(
            "sp|Q9BSU1|CP070_HUMAN UPF0183 protein C16orf70 OS=Homo sapiens OX=9606 GN=C16orf70"
            " PE=1 SV=1", qresult.id)
        self.assertEqual(422, qresult.seq_len)
        self.assertEqual(num_hits, len(qresult))
        hit = qresult[0]
        self.assertEqual("PF03676.13", hit.id)
        self.assertEqual("UPF0183 ; Uncharacterised protein family (UPF0183)", hit.description)
        self.assertTrue(hit.is_included)
        self.assertEqual(2e-106, hit.evalue)
        self.assertEqual(822.75, hit.score)
        self.assertEqual(1, len(hit))
        hsp = hit.hsps[0]
        self.assertTrue(hsp.is_included)
        self.assertEqual(0, hsp.output_index)
        self.assertEqual(2e-106, hsp.evalue)
        self.assertEqual(822.75, hsp.score)
        self.assertEqual(100.00, hsp.prob)
        self.assertEqual(0, hsp.hit_start)
        self.assertEqual(395, hsp.hit_end)
        self.assertEqual(10, hsp.query_start)
        self.assertEqual(407, hsp.query_end)
        self.assertEqual(
            "SLGNEQWEFTLGMPLAQAVAILQKHCRIIKNVQVLYSEQSPLSHDLILNLTQDGIKLMFDAFNQRLKVIEVCDLTKVKLK"
            "YCGVHFNSQAIAPTIEQIDQSFGATHPGVYNSAEQLFHLNFRGLSFSFQLDSWTEAPKYEPNFAHGLASLQIPHGATVKR"
            "MYIYSGNSLQDTKAPMMPLSCFLGNVYAESVDVLRDGTGPAGLRLRLLAAGCGPGLLADAKMRVFERSVYFGDSCQDVLS"
            "MLGSPHKVFYKSEDKMKIHSPSPHKQVPSKCNDYFFNYFTLGVDILFDANTHKVKKFVLHTNYPGHYNFNIYHRCEFKIP"
            "LAIKKENADGQTE--TCTTYSKWDNIQELLGHPVEKPVVLHRSSSPNNTNPFGSTFCFGLQRMIFEVMQNNHIASVTLY",
            str(hsp.query.seq))
        self.assertEqual(
            "EQWE----FALGMPLAQAISILQKHCRIIKNVQVLYSEQMPLSHDLILNLTQDGIKLLFDACNQRLKVIEVYDLTKVKLK"
            "YCGVHFNSQAIAPTIEQIDQSFGATHPGVYNAAEQLFHLNFRGLSFSFQLDSWSEAPKYEPNFAHGLASLQIPHGATVKR"
            "MYIYSGNNLQETKAPAMPLACFLGNVYAECVEVLRDGAGPLGLKLRLLTAGCGPGVLADTKVRAVERSIYFGDSCQDVLS"
            "ALGSPHKVFYKSEDKMKIHSPSPHKQVPSKCNDYFFNYYILGVDILFDSTTHLVKKFVLHTNFPGHYNFNIYHRCDFKIP"
            "LIIKKDGADAHSEDCILTTYSKWDQIQELLGHPMEKPVVLHRSSSANNTNPFGSTFCFGLQRMIFEVMQNNHIASVTLY",
            str(hsp.hit.seq))
        # Check last hit
        hit = qresult[num_hits - 1]
        self.assertEqual("PF10049.8", hit.id)
        self.assertEqual("DUF2283 ; Protein of unknown function (DUF2283)", hit.description)
        self.assertTrue(hit.is_included)
        self.assertEqual(78, hit.evalue)
        self.assertEqual(19.81, hit.score)
        self.assertEqual(1, len(hit))
        # Check we can get the original last HSP from the file.
        num_hsps = 16
        self.assertEqual(num_hsps, len(qresult.hsps))
        hsp = qresult.hsps[-1]
        self.assertTrue(hsp.is_included)
        self.assertEqual(num_hsps - 1, hsp.output_index)
        self.assertEqual(78, hsp.evalue)
        self.assertEqual(19.81, hsp.score)
        self.assertEqual(20.88, hsp.prob)
        self.assertEqual(25, hsp.hit_start)
        self.assertEqual(48, hsp.hit_end)
        self.assertEqual(61, hsp.query_start)
        self.assertEqual(85, hsp.query_end)
        self.assertEqual("APNVIFDYDA-EGRIVGIELLDAR", str(hsp.hit.seq))
        self.assertEqual("QDGIKLMFDAFNQRLKVIEVCDLT", str(hsp.query.seq))
    def test_4p79(self):
        """Parsing 4p79_hhsearch_server_NOssm.hhr file."""
        txt_file = get_file("4p79_hhsearch_server_NOssm.hhr")
        qresults = parse(txt_file, FMT)
        # test first and only qresult
        qresult = next(qresults)
        num_hits = 8
        self.assertEqual("HHSUITE", qresult.program)
        self.assertEqual("4P79:A|PDBID|CHAIN|SEQUENCE", qresult.id)
        self.assertEqual(198, qresult.seq_len)
        self.assertEqual(num_hits, len(qresult))
        hit = qresult[0]
        self.assertEqual("4P79_A", hit.id)
        self.assertEqual(
            "cell adhesion protein; cell adhesion, tight junction, membrane; HET: OLC"
            ", MSE; 2.4A {Mus musculus}", hit.description)
        self.assertTrue(hit.is_included)
        self.assertEqual(6.8e-32, hit.evalue)
        self.assertEqual(194.63, hit.score)
        self.assertEqual(1, len(hit))
        hsp = hit.hsps[0]
        self.assertTrue(hsp.is_included)
        self.assertEqual(0, hsp.output_index)
        self.assertEqual(6.8e-32, hsp.evalue)
        self.assertEqual(194.63, hsp.score)
        self.assertEqual(99.94, hsp.prob)
        self.assertEqual(0, hsp.hit_start)
        self.assertEqual(198, hsp.hit_end)
        self.assertEqual(0, hsp.query_start)
        self.assertEqual(198, hsp.query_end)
        self.assertEqual(
            "GSEFMSVAVETFGFFMSALGLLMLGLTLSNSYWRVSTVHGNVITTNTIFENLWYSCATDSLGVSNCWDFPSMLALSGYVQ"
            "GCRALMITAILLGFLGLFLGMVGLRATNVGNMDLSKKAKLLAIAGTLHILAGACGMVAISWYAVNITTDFFNPLYAGTKY"
            "ELGPALYLGWSASLLSILGGICVFSTAAASSKEEPATR", str(hsp.query.seq))
        self.assertEqual(
            "GSEFMSVAVETFGFFMSALGLLMLGLTLSNSYWRVSTVHGNVITTNTIFENLWYSCATDSLGVSNCWDFPSMLALSGYVQ"
            "GCRALMITAILLGFLGLFLGMVGLRATNVGNMDLSKKAKLLAIAGTLHILAGACGMVAISWYAVNITTDFFNPLYAGTKY"
            "ELGPALYLGWSASLLSILGGICVFSTAAASSKEEPATR", str(hsp.hit.seq))
        # Check last hit
        hit = qresult[num_hits - 1]
        self.assertEqual("5YQ7_F", hit.id)
        self.assertEqual(
            "Beta subunit of light-harvesting 1; Photosynthetic core complex, PHOTOSYNTHESIS; "
            "HET: MQE, BCL, HEM, KGD, BPH;{Roseiflexus castenholzii}; Related PDB entries: 5YQ7_V"
            " 5YQ7_3 5YQ7_T 5YQ7_J 5YQ7_9 5YQ7_N 5YQ7_A 5YQ7_P 5YQ7_H 5YQ7_D 5YQ7_5 5YQ7_7 5YQ7_1 "
            "5YQ7_R", hit.description)
        self.assertTrue(hit.is_included)
        self.assertEqual(6.7, hit.evalue)
        self.assertEqual(20.51, hit.score)
        self.assertEqual(1, len(hit))
        # Check we can get the original last HSP from the file.
        num_hsps = num_hits
        self.assertEqual(num_hsps, len(qresult.hsps))
        hsp = qresult.hsps[-1]
        self.assertTrue(hsp.is_included)
        self.assertEqual(num_hsps - 1, hsp.output_index)
        self.assertEqual(6.7, hsp.evalue)
        self.assertEqual(20.51, hsp.score)
        self.assertEqual(52.07, hsp.prob)
        self.assertEqual(8, hsp.hit_start)
        self.assertEqual(42, hsp.hit_end)
        self.assertEqual(5, hsp.query_start)
        self.assertEqual(37, hsp.query_end)
        self.assertEqual("RTSVVVSTLLGLVMALLIHFVVLSSGAFNWLRAP", str(hsp.hit.seq))
        self.assertEqual("SVAVETFGFFMSALGLLMLGLTLSNS--YWRVST", str(hsp.query.seq))
    def test_9590198(self):
        """Parsing hhpred_9590198.hhr file."""
        txt_file = get_file("hhpred_9590198.hhr")
        qresults = parse(txt_file, FMT)
        # test first and only qresult
        qresult = next(qresults)
        num_hits = 22
        self.assertEqual("HHSUITE", qresult.program)
        self.assertEqual(
            "sp|Q9BSU1|CP070_HUMAN UPF0183 protein C16orf70 OS=Homo sapiens OX=9606 GN=C16orf70"
            " PE=1 SV=1", qresult.id)
        self.assertEqual(422, qresult.seq_len)
        self.assertEqual(num_hits, len(qresult))
        hit = qresult[0]
        self.assertEqual("PF03676.14", hit.id)
        self.assertEqual("UPF0183 ; Uncharacterised protein family (UPF0183)", hit.description)
        self.assertTrue(hit.is_included)
        self.assertEqual(9.9e-102, hit.evalue)
        self.assertEqual(792.76, hit.score)
        self.assertEqual(1, len(hit))
        hsp = hit.hsps[0]
        self.assertTrue(hsp.is_included)
        self.assertEqual(0, hsp.output_index)
        self.assertEqual(9.9e-102, hsp.evalue)
        self.assertEqual(792.76, hsp.score)
        self.assertEqual(100.00, hsp.prob)
        self.assertEqual(0, hsp.hit_start)
        self.assertEqual(394, hsp.hit_end)
        self.assertEqual(21, hsp.query_start)
        self.assertEqual(407, hsp.query_end)
        self.assertEqual(
            "GMHFSQSVAIIQSQVGTIRGVQVLYSDQNPLSVDLVINMPQDGMRLIFDPVAQRLKIIEIYNMKLVKLRYSGMCFNSPEI"
            "TPSIEQVEHCFGATHPGLYDSQRHLFALNFRGLSFYFPVDS-----KFEPGYAHGLGSLQFPNGGSPVVSRTTIYYGSQH"
            "QLSSNTSSRVSGVPLPDLPLSCYRQQLHLRRCDVLRNTTSTMGLRLHMFTEGT--SRALEPSQVALVRVVRFGDSCQGVA"
            "RALGAPARLYYKADDKMRIHRPTARRR-PPPASDYLFNYFTLGLDVLFDARTNQVKKFVLHTNYPGHYNFNMYHRCEFEL"
            "TVQPD-KSEAHSLVESGGGVAVTAYSKWEVVSRAL-RVCERPVVLNRASSTNTTNPFGSTFCYGYQDIIFEVMSNNYIAS"
            "ITLY", str(hsp.hit.seq))
        self.assertEqual(
            "GMPLAQAVAILQKHCRIIKNVQVLYSEQSPLSHDLILNLTQDGIKLMFDAFNQRLKVIEVCDLTKVKLKYCGVHFNSQAI"
            "APTIEQIDQSFGATHPGVYNSAEQLFHLNFRGLSFSFQLDSWTEAPKYEPNFAHGLASLQIPHGA--TVKRMYIYSGNSL"
            "Q---------DTKA-PMMPLSCFLGNVYAESVDVLRDGTGPAGLRLRLLAAGCGPGLLADAKMRVFERSVYFGDSCQDVL"
            "SMLGSPHKVFYKSEDKMKIHSPSPHKQVPSKCNDYFFNYFTLGVDILFDANTHKVKKFVLHTNYPGHYNFNIYHRCEFKI"
            "PLAIKKENADG------QTETCTTYSKWDNIQELLGHPVEKPVVLHRSSSPNNTNPFGSTFCFGLQRMIFEVMQNNHIAS"
            "VTLY", str(hsp.query.seq))
        # Check last hit
        hit = qresult[num_hits - 1]
        self.assertEqual("4IL7_A", hit.id)
        self.assertEqual(
            "Putative uncharacterized protein; partial jelly roll fold, hypothetical; 1.4A "
            "{Sulfolobus turreted icosahedral virus}", hit.description)
        self.assertTrue(hit.is_included)
        self.assertEqual(6.8e+02, hit.evalue)
        self.assertEqual(22.72, hit.score)
        self.assertEqual(1, len(hit))
        # Check we can get the original last HSP from the file.
        num_hsps = 34
        self.assertEqual(num_hsps, len(qresult.hsps))
        hsp = qresult.hsps[-1]
        self.assertTrue(hsp.is_included)
        self.assertEqual(num_hsps - 1, hsp.output_index)
        self.assertEqual(3.9e+02, hsp.evalue)
        self.assertEqual(22.84, hsp.score)
        self.assertEqual(21.56, hsp.prob)
        self.assertEqual(7, hsp.hit_start)
        self.assertEqual(96, hsp.hit_end)
        self.assertEqual(18, hsp.query_start)
        self.assertEqual(114, hsp.query_end)
        self.assertEqual(
            "FTLGMPLAQAVAILQKHCRIIKNVQVLYSEQSPLSHDLILNLTQDGIKLMFDAFNQRLKVIEVCDLTKVKLKYCGVH-FN"
            "SQAIAPTIEQIDQSFGA", str(hsp.query.seq))
        self.assertEqual(
            "IQFGMDRTLVWQLAGADQSCSDQVERIICYNNPDH-------YGPQGHFFFNA-ADKLIHKRQMELFPAPKPTMRLATYN"
            "KTQTGMTEAQFWAAVPS", str(hsp.hit.seq))
if __name__ == "__main__":
runner = unittest.TextTestRunner(verbosity=2)
unittest.main(testRunner=runner)
| 46.101695 | 125 | 0.656204 | 2,342 | 21,760 | 5.979078 | 0.198121 | 0.224952 | 0.023995 | 0.042848 | 0.644005 | 0.537099 | 0.497679 | 0.386989 | 0.369992 | 0.356281 | 0 | 0.044903 | 0.245726 | 21,760 | 471 | 126 | 46.199575 | 0.808262 | 0.056388 | 0 | 0.407507 | 0 | 0.008043 | 0.280763 | 0.179272 | 0 | 0 | 0 | 0 | 0.632708 | 1 | 0.024129 | false | 0 | 0.008043 | 0 | 0.037534 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
811caa4ddae0fb9555b35a25b391f8d84b850935 | 3,964 | py | Python | starthinker/task/dcm_api/schema/conversionStatus.py | quan/starthinker | 4e392415d77affd4a3d91165d1141ab38efd3b8b | [
"Apache-2.0"
] | null | null | null | starthinker/task/dcm_api/schema/conversionStatus.py | quan/starthinker | 4e392415d77affd4a3d91165d1141ab38efd3b8b | [
"Apache-2.0"
] | null | null | null | starthinker/task/dcm_api/schema/conversionStatus.py | quan/starthinker | 4e392415d77affd4a3d91165d1141ab38efd3b8b | [
"Apache-2.0"
] | null | null | null | ###########################################################################
#
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
###########################################################################
conversionStatus_Schema = [[{
'name': 'childDirectedTreatment',
'type': 'BOOLEAN',
'mode': 'NULLABLE'
}, {
'name':
'customVariables',
'type':
'RECORD',
'mode':
'REPEATED',
'fields': [{
'description': '',
'name': 'kind',
'type': 'STRING',
'mode': 'NULLABLE'
}, {
'description':
'U1, U10, U100, U11, U12, U13, U14, U15, U16, U17, U18, U19, U2, '
'U20, U21, U22, U23, U24, U25, U26, U27, U28, U29, U3, U30, U31, '
'U32, U33, U34, U35, U36, U37, U38, U39, U4, U40, U41, U42, U43, '
'U44, U45, U46, U47, U48, U49, U5, U50, U51, U52, U53, U54, U55, '
'U56, U57, U58, U59, U6, U60, U61, U62, U63, U64, U65, U66, U67, '
'U68, U69, U7, U70, U71, U72, U73, U74, U75, U76, U77, U78, U79, '
'U8, U80, U81, U82, U83, U84, U85, U86, U87, U88, U89, U9, U90, '
'U91, U92, U93, U94, U95, U96, U97, U98, U99',
'name':
'type',
'type':
'STRING',
'mode':
'NULLABLE'
}, {
'description': '',
'name': 'value',
'type': 'STRING',
'mode': 'NULLABLE'
}]
}, {
'description': '',
'name': 'encryptedUserId',
'type': 'STRING',
'mode': 'NULLABLE'
}, {
'name': 'encryptedUserIdCandidates',
'type': 'STRING',
'mode': 'REPEATED'
}, {
'description': '',
'name': 'floodlightActivityId',
'type': 'INT64',
'mode': 'NULLABLE'
}, {
'description': '',
'name': 'floodlightConfigurationId',
'type': 'INT64',
'mode': 'NULLABLE'
}, {
'description': '',
'name': 'gclid',
'type': 'STRING',
'mode': 'NULLABLE'
}, {
'description': '',
'name': 'kind',
'type': 'STRING',
'mode': 'NULLABLE'
}, {
'name': 'limitAdTracking',
'type': 'BOOLEAN',
'mode': 'NULLABLE'
}, {
'description': '',
'name': 'mobileDeviceId',
'type': 'STRING',
'mode': 'NULLABLE'
}, {
'name': 'nonPersonalizedAd',
'type': 'BOOLEAN',
'mode': 'NULLABLE'
}, {
'description': '',
'name': 'ordinal',
'type': 'STRING',
'mode': 'NULLABLE'
}, {
'description': '',
'name': 'quantity',
'type': 'INT64',
'mode': 'NULLABLE'
}, {
'description': '',
'name': 'timestampMicros',
'type': 'INT64',
'mode': 'NULLABLE'
}, {
'name': 'treatmentForUnderage',
'type': 'BOOLEAN',
'mode': 'NULLABLE'
}, {
'description': '',
'name': 'value',
'type': 'FLOAT64',
'mode': 'NULLABLE'
}], {
'name':
'errors',
'type':
'RECORD',
'mode':
'REPEATED',
'fields': [{
'description':
'INTERNAL, INVALID_ARGUMENT, NOT_FOUND, PERMISSION_DENIED',
'name':
'code',
'type':
'STRING',
'mode':
'NULLABLE'
}, {
'description': '',
'name': 'kind',
'type': 'STRING',
'mode': 'NULLABLE'
}, {
'description': '',
'name': 'message',
'type': 'STRING',
'mode': 'NULLABLE'
}]
}, {
'description': '',
'name': 'kind',
'type': 'STRING',
'mode': 'NULLABLE'
}]
| 25.410256 | 78 | 0.478809 | 362 | 3,964 | 5.232044 | 0.555249 | 0.133052 | 0.170011 | 0.185322 | 0.383316 | 0.353749 | 0.133052 | 0.133052 | 0.099789 | 0.099789 | 0 | 0.075027 | 0.293895 | 3,964 | 155 | 79 | 25.574194 | 0.601643 | 0.140515 | 0 | 0.773723 | 0 | 0.051095 | 0.506173 | 0.022222 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
81245449ce95fdf57bb1c4c72d405f29413a1815 | 136 | py | Python | insta loader.py | mhamaneamogh50/All_Python_Pro | e943d1924f160a98e5df612b920a67ef80be3b61 | [
"0BSD"
] | null | null | null | insta loader.py | mhamaneamogh50/All_Python_Pro | e943d1924f160a98e5df612b920a67ef80be3b61 | [
"0BSD"
] | null | null | null | insta loader.py | mhamaneamogh50/All_Python_Pro | e943d1924f160a98e5df612b920a67ef80be3b61 | [
"0BSD"
] | null | null | null | import instaloader
mod = instaloader.Instaloader()  # instantiate the loader; bare Instaloader bound the class, not an instance
a=input("Enter the profile name --> ")
mod.download_profile(a,profile_pic_only=True)
| 27.2 | 46 | 0.786765 | 19 | 136 | 5.473684 | 0.684211 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102941 | 136 | 4 | 47 | 34 | 0.852459 | 0 | 0 | 0 | 0 | 0 | 0.204545 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
81287d1cbc1ce6953c401f5c870a6781a855265e | 1,654 | py | Python | examples/kubernetes/files/conda_store_config.py | peytondmurray/conda-store | 40fe4a0cecbefaff7cac819244f9862bd188b045 | [
"BSD-3-Clause"
] | null | null | null | examples/kubernetes/files/conda_store_config.py | peytondmurray/conda-store | 40fe4a0cecbefaff7cac819244f9862bd188b045 | [
"BSD-3-Clause"
] | null | null | null | examples/kubernetes/files/conda_store_config.py | peytondmurray/conda-store | 40fe4a0cecbefaff7cac819244f9862bd188b045 | [
"BSD-3-Clause"
] | null | null | null | import logging
from conda_store_server.storage import S3Storage
from conda_store_server.server.auth import DummyAuthentication
# ==================================
# conda-store settings
# ==================================
c.CondaStore.storage_class = S3Storage
c.CondaStore.store_directory = "/opt/conda-store/conda-store/"
c.CondaStore.database_url = "postgresql+psycopg2://admin:password@postgres/conda-store"
c.CondaStore.default_uid = 1000
c.CondaStore.default_gid = 100
c.CondaStore.default_permissions = "775"
c.S3Storage.internal_endpoint = "minio:9000"
c.S3Storage.external_endpoint = "localhost:30900"
c.S3Storage.access_key = "admin"
c.S3Storage.secret_key = "password"
c.S3Storage.region = "us-east-1" # minio region default
c.S3Storage.bucket_name = "conda-store"
c.S3Storage.internal_secure = False
c.S3Storage.external_secure = False
# ==================================
# server settings
# ==================================
c.CondaStoreServer.log_level = logging.INFO
c.CondaStoreServer.enable_ui = True
c.CondaStoreServer.enable_api = True
c.CondaStoreServer.enable_registry = True
c.CondaStoreServer.enable_metrics = True
c.CondaStoreServer.address = "0.0.0.0"
c.CondaStoreServer.port = 5000
# This MUST start with `/`
c.CondaStoreServer.url_prefix = "/"
# ==================================
# auth settings
# ==================================
c.CondaStoreServer.authentication_class = DummyAuthentication
# ==================================
# worker settings
# ==================================
c.CondaStoreWorker.log_level = logging.INFO
c.CondaStoreWorker.watch_paths = ["/opt/environments"]
| 33.755102 | 87 | 0.649335 | 175 | 1,654 | 5.988571 | 0.417143 | 0.145992 | 0.087786 | 0.07729 | 0.038168 | 0 | 0 | 0 | 0 | 0 | 0 | 0.026087 | 0.096131 | 1,654 | 48 | 88 | 34.458333 | 0.674916 | 0.25393 | 0 | 0 | 0 | 0 | 0.141099 | 0.07055 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.071429 | 0.107143 | 0 | 0.107143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
812da82da010a04e832a8cfb9414792af6e7a630 | 481 | py | Python | setup.py | braveheuel/prniotlet | 86eb56a1ab0841a84637e4b8f61ee4820c052855 | [
"MIT"
] | 1 | 2020-09-08T22:09:14.000Z | 2020-09-08T22:09:14.000Z | setup.py | braveheuel/prniotlet | 86eb56a1ab0841a84637e4b8f61ee4820c052855 | [
"MIT"
] | null | null | null | setup.py | braveheuel/prniotlet | 86eb56a1ab0841a84637e4b8f61ee4820c052855 | [
"MIT"
] | null | null | null | #! /usr/bin/env python3
import setuptools  # needed for the setuptools.find_packages() call below
from setuptools import setup
setup(
name="prniotlet",
version="0.2.1",
author="Christoph Heuel",
author_email="christoph@heuel-web.de",
license="MIT",
install_requires=['python-escpos', 'aiomas[mp]', 'click'],
packages=setuptools.find_packages(),
scripts=[
'scripts/prniotlet-server', 'scripts/prniotlet-xkcd',
'scripts/prniotlet-wlan', 'scripts/prniotlet-advent',
'scripts/prniotlet-text',
],
)
| 26.722222 | 62 | 0.650728 | 53 | 481 | 5.849057 | 0.698113 | 0.258065 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010152 | 0.180873 | 481 | 17 | 63 | 28.294118 | 0.77665 | 0.045738 | 0 | 0 | 0 | 0 | 0.427948 | 0.296943 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.066667 | 0 | 0.066667 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
8138a427810e5e0b1d5421d00c14a306d2cf9c1e | 1,174 | py | Python | Yikai_helper_funcs/_nbdev.py | DavidykZhao/Yikai_helper_funcs | fe2bed7ae3d0b5b722766358ed1405831837cfa3 | [
"Apache-2.0"
] | null | null | null | Yikai_helper_funcs/_nbdev.py | DavidykZhao/Yikai_helper_funcs | fe2bed7ae3d0b5b722766358ed1405831837cfa3 | [
"Apache-2.0"
] | 2 | 2021-09-28T05:38:10.000Z | 2022-02-26T10:06:59.000Z | Yikai_helper_funcs/_nbdev.py | DavidykZhao/Yikai_helper_funcs | fe2bed7ae3d0b5b722766358ed1405831837cfa3 | [
"Apache-2.0"
] | null | null | null | # AUTOGENERATED BY NBDEV! DO NOT EDIT!
__all__ = ["index", "modules", "custom_doc_links", "git_url"]
index = {"say_hello": "00_core.ipynb",
"say_bye": "00_core.ipynb",
"optimize_bayes_param": "01_bayes_opt.ipynb",
"ReadTabBatchIdentity": "02_tab_ae.ipynb",
"TabularPandasIdentity": "02_tab_ae.ipynb",
"TabDataLoaderIdentity": "02_tab_ae.ipynb",
"RecreatedLoss": "02_tab_ae.ipynb",
"BatchSwapNoise": "02_tab_ae.ipynb",
"TabularAE": "02_tab_ae.ipynb",
"HyperparamsGenerator": "03_param_finetune.ipynb",
"XgboostParamGenerator": "03_param_finetune.ipynb",
"LgbmParamGenerator": "03_param_finetune.ipynb",
"CatParamGenerator": "03_param_finetune.ipynb",
"RFParamGenerator": "03_param_finetune.ipynb",
"ModelIterator": "04_model_zoo.ipynb"}
modules = ["core.py",
"bayes_opt.py",
"tab_ae.py",
"params.py",
"model_iterator.py"]
doc_url = "https://DavidykZhao.github.io/Yikai_helper_funcs/"
git_url = "https://github.com/DavidykZhao/Yikai_helper_funcs/tree/master/"
def custom_doc_links(name): return None
| 36.6875 | 74 | 0.651618 | 133 | 1,174 | 5.37594 | 0.466165 | 0.048951 | 0.058741 | 0.100699 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.032051 | 0.202726 | 1,174 | 31 | 75 | 37.870968 | 0.731838 | 0.030664 | 0 | 0 | 1 | 0 | 0.621479 | 0.15669 | 0 | 0 | 0 | 0 | 0 | 1 | 0.041667 | false | 0 | 0 | 0.041667 | 0.041667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
813d84a4b51823c471c1f733063cefbb8d02891a | 294 | py | Python | tensor_rl/__init__.py | umd-huang-lab/reinforcement-learning-via-spectral-methods | c7bd04d7eea6869807ed70af76960dcc542b0a82 | [
"MIT"
] | null | null | null | tensor_rl/__init__.py | umd-huang-lab/reinforcement-learning-via-spectral-methods | c7bd04d7eea6869807ed70af76960dcc542b0a82 | [
"MIT"
] | null | null | null | tensor_rl/__init__.py | umd-huang-lab/reinforcement-learning-via-spectral-methods | c7bd04d7eea6869807ed70af76960dcc542b0a82 | [
"MIT"
] | null | null | null |
try:
xrange
except NameError:
xrange = range
# Fix input to cooperate with python 2 and 3.
try:
input = raw_input
except NameError:
pass
# Imports.
import tensor_rl.agents, tensor_rl.experiments, tensor_rl.mdp, tensor_rl.tasks, tensor_rl.utils
import tensor_rl.run_experiments
| 18.375 | 95 | 0.758503 | 44 | 294 | 4.886364 | 0.590909 | 0.223256 | 0.130233 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00823 | 0.173469 | 294 | 15 | 96 | 19.6 | 0.876543 | 0.176871 | 0 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.1 | 0.2 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
d48f1da85175184e5f2c61925936133a57c8f80a | 467 | py | Python | fjord/redirector/tests/__init__.py | lgp171188/fjord | 1cb4aa3c3e0932c586cdd2c4ee3b6b9974a6976a | [
"BSD-3-Clause"
] | 16 | 2015-02-06T14:35:57.000Z | 2021-07-10T11:14:00.000Z | fjord/redirector/tests/__init__.py | lgp171188/fjord | 1cb4aa3c3e0932c586cdd2c4ee3b6b9974a6976a | [
"BSD-3-Clause"
] | 310 | 2015-01-07T14:39:35.000Z | 2016-05-02T17:41:30.000Z | fjord/redirector/tests/__init__.py | lgp171188/fjord | 1cb4aa3c3e0932c586cdd2c4ee3b6b9974a6976a | [
"BSD-3-Clause"
] | 22 | 2015-01-15T13:46:03.000Z | 2020-07-24T10:08:51.000Z | from fjord.base.plugin_utils import load_providers
from fjord.redirector import _REDIRECTORS
class RedirectorTestMixin(object):
"""Mixin that loads Redirectors specified with ``redirectors`` attribute"""
redirectors = []
def setUp(self):
_REDIRECTORS[:] = load_providers(self.redirectors)
super(RedirectorTestMixin, self).setUp()
def tearDown(self):
_REDIRECTORS[:] = []
super(RedirectorTestMixin, self).tearDown()
| 29.1875 | 79 | 0.706638 | 45 | 467 | 7.2 | 0.533333 | 0.138889 | 0.123457 | 0.240741 | 0.265432 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.186296 | 467 | 15 | 80 | 31.133333 | 0.852632 | 0.147752 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
d491bce2ea5abf9a60f9a2661a9e69099888b73a | 561 | py | Python | mystery/tests/utils.py | anselmbradford/collab-mystery-meet | cda20bd1888edf8666f290c87817e63d8921f3bd | [
"CC0-1.0"
] | 2 | 2015-07-11T17:52:13.000Z | 2016-08-15T04:04:03.000Z | mystery/tests/utils.py | anselmbradford/collab-mystery-meet | cda20bd1888edf8666f290c87817e63d8921f3bd | [
"CC0-1.0"
] | null | null | null | mystery/tests/utils.py | anselmbradford/collab-mystery-meet | cda20bd1888edf8666f290c87817e63d8921f3bd | [
"CC0-1.0"
] | 3 | 2017-07-14T03:20:05.000Z | 2021-02-20T10:40:57.000Z | from django.contrib.auth import get_user_model
from django.test.client import RequestFactory
from core.models import Person
import random
import string
def random_user():
user = get_user_model().objects.create_user(
        ''.join(random.choice(string.ascii_lowercase) for _ in range(12)))  # string.lowercase is Python 2-only
person = Person.objects.create(user=user)
return user
def mock_req(path='/', user = None):
"""
Mock request -- with user!
"""
if not user:
user = random_user()
req = RequestFactory().get(path)
req.user = user
return req
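The `random_user` helper above builds a 12-character lowercase username for the test user. A minimal, Django-free sketch of just that piece (note that Python 3 spells the constant `string.ascii_lowercase`; `string.lowercase` existed only in Python 2):

```python
import random
import string


def random_username(length=12):
    """Build a random lowercase username, as random_user does."""
    return "".join(random.choice(string.ascii_lowercase) for _ in range(length))


name = random_username()
```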
| 23.375 | 72 | 0.673797 | 75 | 561 | 4.92 | 0.466667 | 0.086721 | 0.065041 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004545 | 0.215686 | 561 | 23 | 73 | 24.391304 | 0.834091 | 0.046346 | 0 | 0 | 0 | 0 | 0.001931 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.3125 | 0 | 0.5625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
d491d23747b7a1b75b028540bf735ad5eca55d90 | 2,766 | py | Python | python/pikov/properties.py | google/pikov | e13f114cab829c18c6a5447d5bec17990267d092 | [
"Apache-2.0"
] | 17 | 2018-04-29T21:08:27.000Z | 2021-12-23T16:11:22.000Z | python/pikov/properties.py | google/pikov | e13f114cab829c18c6a5447d5bec17990267d092 | [
"Apache-2.0"
] | 31 | 2018-06-06T13:50:21.000Z | 2019-05-02T16:06:59.000Z | python/pikov/properties.py | google/pikov | e13f114cab829c18c6a5447d5bec17990267d092 | [
"Apache-2.0"
] | 9 | 2018-04-28T21:14:05.000Z | 2021-10-16T09:13:01.000Z | # Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Property classes for building wrapper classes for Pikov nodes.
We want to wrap our semantic graph with Python classes. This allows us to
interact with Python objects to modify the guid_map.
These classes encode the core types used in the semantic graph. When classes
use these properties, the guid_map is updated with the correct serialization
of the property.
"""
from .core import Int64Node, StringNode
class AbstractSemanticGraphProperty(object):
def __init__(self, label):
self._label = label
def from_node(self, obj, value):
raise NotImplementedError()
def to_node(self, value):
raise NotImplementedError()
def __get__(self, obj, type=None):
return self.from_node(obj, obj[self._label])
def __set__(self, obj, value):
obj[self._label] = self.to_node(value)
class UnspecifiedProperty(AbstractSemanticGraphProperty):
def from_node(self, obj, value):
        return obj._graph.get_value(obj, self._label)
def to_node(self, value):
        # Value should already be a Node.
return value
class GuidProperty(AbstractSemanticGraphProperty):
def __init__(self, label, cls):
super().__init__(label)
self._cls = cls
def from_node(self, obj, value):
if value is None:
return None
return self._cls(obj._graph, guid=value.guid)
def to_node(self, value):
        # Value should already be a GuidNode.
return value
def make_guid_property(wrapped):
def __init__(self, label):
GuidProperty.__init__(self, label, wrapped)
return type(
wrapped.__name__ + "Property",
(GuidProperty,),
{
"__init__": __init__,
}
)
class ScalarProperty(AbstractSemanticGraphProperty):
def from_node(self, obj, value):
if value is None:
return None
return value.value
class Int64Property(ScalarProperty):
def to_node(self, value):
if value is None:
return None
return Int64Node(value)
class StringProperty(ScalarProperty):
def to_node(self, value):
if value is None:
return None
return StringNode(value)
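These property classes rely on Python's descriptor protocol: `__get__`/`__set__` intercept attribute access on the wrapper object and route it through the property's label. A minimal, self-contained sketch of the same pattern — the `Int64PropertySketch`, `Sprite`, and `_data` names are illustrative only, not part of the pikov API:

```python
class Int64PropertySketch:
    """Illustrative descriptor mirroring the property pattern above."""

    def __init__(self, label):
        self._label = label

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        # Route attribute reads through the label, like from_node does.
        return obj._data.get(self._label)

    def __set__(self, obj, value):
        # Route attribute writes through the label, like to_node does.
        if value is not None and not isinstance(value, int):
            raise TypeError(f"{self._label} must be an int or None")
        obj._data[self._label] = value


class Sprite:
    width = Int64PropertySketch("width")

    def __init__(self):
        self._data = {}


sprite = Sprite()
sprite.width = 16
```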
| 27.386139 | 76 | 0.685105 | 356 | 2,766 | 5.143258 | 0.359551 | 0.039323 | 0.032769 | 0.0355 | 0.23266 | 0.222829 | 0.210268 | 0.166029 | 0.166029 | 0.166029 | 0 | 0.006648 | 0.238612 | 2,766 | 100 | 77 | 27.66 | 0.862773 | 0.354302 | 0 | 0.442308 | 0 | 0 | 0.009065 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.288462 | false | 0 | 0.019231 | 0.057692 | 0.653846 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
d492c782df81e0b1685a07e42535a9c0a95a53b1 | 7,461 | py | Python | pyDefi/defi.py | richardstrnad/pyDefi | 172f9e599508b9723ab33088a3df935b7fdf0c6a | [
"MIT"
] | 1 | 2022-03-01T14:09:29.000Z | 2022-03-01T14:09:29.000Z | pyDefi/defi.py | richardstrnad/pyDefi | 172f9e599508b9723ab33088a3df935b7fdf0c6a | [
"MIT"
] | null | null | null | pyDefi/defi.py | richardstrnad/pyDefi | 172f9e599508b9723ab33088a3df935b7fdf0c6a | [
"MIT"
] | 1 | 2022-03-24T12:26:36.000Z | 2022-03-24T12:26:36.000Z | from jsonrpcclient import request
import requests
import logging
logger = logging.getLogger('main.defi')
logger.addHandler(logging.NullHandler())
class Defi(object):
def __init__(self, host, walletpassphrase):
self.host = host
self.walletpassphrase = walletpassphrase
self.account = self._Account(self)
self.blockchain = self._Blockchain(self)
self.loan = self._Loan(self)
self.pool = self._Pool(self)
self.oracle = self._Oracle(self)
self.token = self._Token(self)
self.wallet = self._Wallet(self)
def _request(self, method):
resp = requests.post(
self.host,
json=method
)
if resp.status_code == 200:
return resp.json().get('result')
logger.error(
f'Issue with the request, status code: {resp.status_code}'
)
logger.error('Dump of the json object')
logger.error(resp.json())
def unlock_wallet(self, duration=600):
return self._request(
request(
"walletpassphrase",
params=(
self.walletpassphrase,
duration
)
)
)
class _Wallet(object):
def __init__(self, defi):
self.defi = defi
def sendToAddress(self, address, amount, comment='', comment_to='',
subtractfeefromamount=False):
return self.defi._request(
request(
"sendtoaddress",
params=(
address,
amount,
comment,
comment_to,
subtractfeefromamount
)
)
)
def getBalance(self):
return self.defi._request(
request("getbalance")
)
def getBalances(self):
return self.defi._request(
request("getbalances")
)
def getWalletInfo(self):
return self.defi._request(
request("getwalletinfo")
)
def getAddressInfo(self, address):
return self.defi._request(
request(
"getaddressinfo",
params=(
address,
)
)
)
def getTransaction(self, txid):
return self.defi._request(
request(
"gettransaction",
params=(
txid,
)
)
)
def listUnspent(self):
return self.defi._request(
request("listunspent")
)
def getNewAddress(self, label=''):
return self.defi._request(
request(
"getnewaddress",
params=(
label,
)
)
)
class _Loan(object):
def __init__(self, defi):
self.defi = defi
def listVaults(self, ownerAddress):
return self.defi._request(
request(
"listvaults",
params=(
{'ownerAddress': ownerAddress},
)
)
)
def getVault(self, vaultId):
return self.defi._request(
request(
"getvault",
params=(
vaultId,
)
)
)
def listAuctions(self):
return self.defi._request(
request("listauctions")
)
def placeAuctionBid(self, vaultId, index, from_addr, amount):
return self.defi._request(
request(
"placeauctionbid",
params=(
vaultId,
index,
from_addr,
amount
)
)
)
class _Pool(object):
def __init__(self, defi):
self.defi = defi
def listPoolPairs(self):
return self.defi._request(
request("listpoolpairs")
)
def poolSwap(self, from_addr, tokenFrom, amountFrom, to_addr, tokenTo,
maxPrice=0):
return self.defi._request(
request(
"poolswap",
params={
'metadata': {
'from': from_addr,
'tokenFrom': tokenFrom,
'amountFrom': amountFrom,
'to': to_addr,
'tokenTo': tokenTo,
'maxPrice': maxPrice
}
})
)
def testPoolSwap(self, from_addr, tokenFrom, amountFrom, to_addr,
tokenTo, maxPrice=0):
return self.defi._request(
request(
"testpoolswap",
params={
'metadata': {
'from': from_addr,
'tokenFrom': tokenFrom,
'amountFrom': amountFrom,
'to': to_addr,
'tokenTo': tokenTo,
'maxPrice': maxPrice
}
})
)
class _Oracle(object):
def __init__(self, defi):
self.defi = defi
def listOracles(self):
return self.defi._request(
request("listoracles")
)
def listPrices(self):
return self.defi._request(
request("listprices")
)
class _Token(object):
def __init__(self, defi):
self.defi = defi
def listTokens(self):
return self.defi._request(
request("listtokens")
)
def mintTokens(self, amountToken, UXTO):
return self.defi._request(
request(
"minttokens",
params=(
amountToken,
UXTO
)
)
)
class _Account(object):
def __init__(self, defi):
self.defi = defi
def listAccountHistory(self):
return self.defi._request(
request("listaccounthistory")
)
def getTokenBalances(self):
return self.defi._request(
request("gettokenbalances")
)
def utxosToAccount(self, address, amount):
return self.defi._request(
request(
"utxostoaccount",
params=(
{address: f"{amount}@DFI"},
)
)
)
class _Blockchain(object):
def __init__(self, defi):
self.defi = defi
def getBlockchainInfo(self):
return self.defi._request(
request("getblockchaininfo")
)
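Every wrapper method above funnels through `Defi._request`, which posts a JSON-RPC payload and unwraps the `result` field of a 200 reply. A network-free sketch of that unwrapping logic, using a hypothetical stand-in for the `requests` response object (names here are illustrative):

```python
class FakeResponse:
    """Hypothetical stand-in for a requests Response (illustration only)."""

    status_code = 200

    def json(self):
        return {"result": {"blocks": 123}, "error": None, "id": 1}


def unwrap(resp):
    # Mirrors Defi._request: only a 200 reply yields the "result" payload;
    # anything else falls through (the class logs an error and returns None).
    if resp.status_code == 200:
        return resp.json().get("result")
    return None


payload = unwrap(FakeResponse())
```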
| 28.048872 | 78 | 0.416432 | 505 | 7,461 | 5.982178 | 0.182178 | 0.097981 | 0.106587 | 0.159881 | 0.467395 | 0.387289 | 0.204568 | 0.204568 | 0.204568 | 0.121152 | 0 | 0.002154 | 0.502212 | 7,461 | 265 | 79 | 28.154717 | 0.811255 | 0 | 0 | 0.353712 | 0 | 0 | 0.068624 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.144105 | false | 0.017467 | 0.0131 | 0.104803 | 0.30131 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
d4a62f50c00248be5e70bd42cb0f0ac4496cf3ac | 1,268 | py | Python | src/activation.py | sokmontrey/neural-tinet | 6855035a8136498b864054899e88712c7962c8c1 | [
"MIT"
] | 1 | 2022-03-02T19:05:25.000Z | 2022-03-02T19:05:25.000Z | src/activation.py | sokmontrey/neural-tinet | 6855035a8136498b864054899e88712c7962c8c1 | [
"MIT"
] | null | null | null | src/activation.py | sokmontrey/neural-tinet | 6855035a8136498b864054899e88712c7962c8c1 | [
"MIT"
] | null | null | null | import numpy as np
import math
class Activation:
    # still need to improve ReLU and the numpy dtype, as they always overflow when using ReLU (hence the clipping below)
def __init__(self): return None
def ReLU(self, x, Derivative=False):
x = np.clip(x, -600, 600)
if not Derivative:
return np.maximum(0,x)
else: return np.greater(x, 0) + 0
def LeakyReLU(self, x, Derivative=False):
x = np.clip(x, -600, 600)
if not Derivative:
return np.maximum(2.0/10.0*x, x)
        # 1 above zero, 0.2 elsewhere; the original expression double-counted 0 < x < 0.001
        else: return np.where(x > 0, 1.0, 2.0 / 10.0)
def Sigmoid(self, x, Derivative=False):
x = np.clip(x, -600, 600)
if not Derivative:
return 1/(1+np.exp(-x))
        else: return x * (1 - x)  # note: x here is the sigmoid output, not the raw input
def Tanh(self, x, Derivative=False):
x = np.clip(x, -600, 600)
if not Derivative:
return np.tanh(x)
else: return 1 - x**2
def Softmax(self, x, Derivative=False):
x = np.clip(x, -600, 600)
if not Derivative:
exp_sum = np.sum( np.exp(x) )
return np.exp(x)/exp_sum
        else: return x * (1 - x)  # x is the softmax output; this is only the Jacobian's diagonal
def Linear(self, x, Derivative=False):
x = np.clip(x, -600, 600)
if not Derivative: return x
else: return 1
| 34.27027 | 87 | 0.554416 | 199 | 1,268 | 3.502513 | 0.236181 | 0.043042 | 0.129125 | 0.172166 | 0.605452 | 0.605452 | 0.559541 | 0.559541 | 0.493544 | 0.493544 | 0 | 0.072581 | 0.315457 | 1,268 | 36 | 88 | 35.222222 | 0.730415 | 0.064669 | 0 | 0.382353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.205882 | false | 0 | 0.058824 | 0.029412 | 0.441176 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
d4afdfbaa85ef9310751bf5604559fa24cc21822 | 819 | py | Python | Part_2_intermediate/mod_2/lesson_3/examples/ex_4_constructor.py | Mikma03/InfoShareacademy_Python_Courses | 3df1008c8c92831bebf1625f960f25b39d6987e6 | [
"MIT"
] | null | null | null | Part_2_intermediate/mod_2/lesson_3/examples/ex_4_constructor.py | Mikma03/InfoShareacademy_Python_Courses | 3df1008c8c92831bebf1625f960f25b39d6987e6 | [
"MIT"
] | null | null | null | Part_2_intermediate/mod_2/lesson_3/examples/ex_4_constructor.py | Mikma03/InfoShareacademy_Python_Courses | 3df1008c8c92831bebf1625f960f25b39d6987e6 | [
"MIT"
] | null | null | null |
class Student:
# Konstruktor może przyjąć argumenty
def __init__(self, first_name, last_name):
self.first_name = first_name
self.last_name = last_name
self.promoted = False
student = Student()
# Obiekty możemy przekazywać jako argumenty do funkcji
def print_student(student):
print(f"Student: {student.first_name} {student.last_name}, promoted: {student.promoted}")
# W funkcji możemy zmodyfikować stan obiektu (side effect)
def promote_student(student):
student.promoted = True
def run_example():
student = Student(first_name="Ola", last_name="Nowak")
print_student(student)
other_student = Student("Jerzy", "Jurkowski")
print_student(other_student)
promote_student(student)
print_student(student)
if __name__ == '__main__':
run_example() | 24.088235 | 93 | 0.71917 | 100 | 819 | 5.57 | 0.39 | 0.251347 | 0.102334 | 0.057451 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.18315 | 819 | 34 | 94 | 24.088235 | 0.832586 | 0.175824 | 0 | 0.105263 | 0 | 0 | 0.162444 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.210526 | false | 0 | 0 | 0 | 0.263158 | 0.263158 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
d4ba2cacca7171c8add0ad912b812dc890f59eb8 | 586 | py | Python | doxygen/examples/python.py | SwerveRoboticSystems/swerve_resources | f35fa21d8134fbd31255cfa3791a6322188c0457 | [
"MIT"
] | null | null | null | doxygen/examples/python.py | SwerveRoboticSystems/swerve_resources | f35fa21d8134fbd31255cfa3791a6322188c0457 | [
"MIT"
] | null | null | null | doxygen/examples/python.py | SwerveRoboticSystems/swerve_resources | f35fa21d8134fbd31255cfa3791a6322188c0457 | [
"MIT"
] | null | null | null | #! /usr/bin/python
## @file <file_name>
# @brief <file_description>
# @author <first_name> <last_name> - <email>
# @date Created: <date>
## <var_description>
var = 0
class <class_name>:
"""!
@brief <brief_description>
@author <first_name> <last_name> - <email>
@date Created: <date>
"""
def function(input, second_input):
"""!
@brief <brief_description>
@author <first_name> <last_name> - <email>
@date Created: <date>
@param input - <var_description>
@param second_input - <var_description>
@param <var_name> - <var_description>
@return <return_description>
"""
| 20.206897 | 45 | 0.675768 | 71 | 586 | 5.309859 | 0.309859 | 0.148541 | 0.175066 | 0.206897 | 0.482759 | 0.482759 | 0.482759 | 0.482759 | 0.482759 | 0.482759 | 0 | 0.002012 | 0.151877 | 586 | 28 | 46 | 20.928571 | 0.756539 | 0.25256 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
d4c874e28d40be672e9d74c9e7779aa2746f3152 | 133 | py | Python | screenshot/__init__.py | imThinking/wininfo | 5230cbe3f8a68a9124daa5905a4c819fd004149f | [
"Apache-2.0"
] | null | null | null | screenshot/__init__.py | imThinking/wininfo | 5230cbe3f8a68a9124daa5905a4c819fd004149f | [
"Apache-2.0"
] | null | null | null | screenshot/__init__.py | imThinking/wininfo | 5230cbe3f8a68a9124daa5905a4c819fd004149f | [
"Apache-2.0"
] | null | null | null | __title__ = 'wininfo'
__version__ = '0.1'
__author__ = 'ZHOU'
__license__ = 'Apache v2.0 License'
from .screenshot import ScreenShot | 22.166667 | 35 | 0.75188 | 16 | 133 | 5.25 | 0.8125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.034783 | 0.135338 | 133 | 6 | 36 | 22.166667 | 0.695652 | 0 | 0 | 0 | 0 | 0 | 0.246269 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
d4c99edca4e06a1b4c521f8db100e690fb2e0295 | 615 | py | Python | api/anubis/views/public/repos.py | synoet/Anubis | 051888a88e37c67e5e772245604c79ceb4db8764 | [
"MIT"
] | 2 | 2022-02-24T17:39:27.000Z | 2022-02-25T02:14:06.000Z | api/anubis/views/public/repos.py | synoet/Anubis | 051888a88e37c67e5e772245604c79ceb4db8764 | [
"MIT"
] | null | null | null | api/anubis/views/public/repos.py | synoet/Anubis | 051888a88e37c67e5e772245604c79ceb4db8764 | [
"MIT"
] | null | null | null | from flask import Blueprint
from anubis.utils.auth import current_user, require_user
from anubis.utils.http.decorators import json_response
from anubis.utils.http.https import success_response
from anubis.utils.lms.repos import get_repos
repos_ = Blueprint("public-repos", __name__, url_prefix="/public/repos")
@repos_.route("/")
@repos_.route("/list")
@require_user()
@json_response
def public_repos():
"""
Get all unique repos for a user
:return:
"""
# Get all repos for the user
repos = get_repos(current_user.id)
# Pass them back
return success_response({"repos": repos})
| 22.777778 | 72 | 0.731707 | 86 | 615 | 5.011628 | 0.418605 | 0.092807 | 0.139211 | 0.088167 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15935 | 615 | 26 | 73 | 23.653846 | 0.833656 | 0.136585 | 0 | 0 | 0 | 0 | 0.070588 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.384615 | 0 | 0.538462 | 0.153846 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
d4d2646b43665695a46ce83dfa1601e9aa7aa96a | 236 | py | Python | src/main/helloworld/git_info.py | salimfadhley/python_hello_world_server | eaffd7ffa0d5b5c11e548d0c8efbfab856df0bee | [
"MIT"
] | null | null | null | src/main/helloworld/git_info.py | salimfadhley/python_hello_world_server | eaffd7ffa0d5b5c11e548d0c8efbfab856df0bee | [
"MIT"
] | null | null | null | src/main/helloworld/git_info.py | salimfadhley/python_hello_world_server | eaffd7ffa0d5b5c11e548d0c8efbfab856df0bee | [
"MIT"
] | null | null | null | """
A very important module, designed to do something hella-useful.
"""
import git
def get_git_info(repo_root="/project") -> git.refs.log.RefLogEntry:
"""Such a cool function!
"""
return git.Repo(repo_root).head.log()[-1]
| 21.454545 | 67 | 0.677966 | 35 | 236 | 4.457143 | 0.771429 | 0.102564 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005051 | 0.161017 | 236 | 10 | 68 | 23.6 | 0.782828 | 0.381356 | 0 | 0 | 0 | 0 | 0.06015 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
d4d317010492dc86a5da83834fb290dc9e7d805e | 1,962 | py | Python | functionaltests/api/v2/clients/zone_client.py | infobloxopen/designate | 531a28b8453cfe5641284a16e0342db8d709ab36 | [
"Apache-2.0"
] | null | null | null | functionaltests/api/v2/clients/zone_client.py | infobloxopen/designate | 531a28b8453cfe5641284a16e0342db8d709ab36 | [
"Apache-2.0"
] | null | null | null | functionaltests/api/v2/clients/zone_client.py | infobloxopen/designate | 531a28b8453cfe5641284a16e0342db8d709ab36 | [
"Apache-2.0"
] | null | null | null | """
Copyright 2015 Rackspace
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from functionaltests.api.v2.models.zone_model import ZoneModel
from functionaltests.api.v2.models.zone_model import ZoneListModel
class ZoneClient(object):
def __init__(self, client):
self.client = client
@classmethod
def zones_uri(cls):
return "/v2/zones"
@classmethod
def zone_uri(cls, id):
return "{0}/{1}".format(cls.zones_uri(), id)
@classmethod
def deserialize(cls, resp, body, model_type):
return resp, model_type.from_json(body)
def list_zones(self, **kwargs):
resp, body = self.client.get(self.zones_uri(), **kwargs)
return self.deserialize(resp, body, ZoneListModel)
def get_zone(self, id, **kwargs):
resp, body = self.client.get(self.zone_uri(id))
return self.deserialize(resp, body, ZoneModel)
def post_zone(self, zone_model, **kwargs):
resp, body = self.client.post(self.zones_uri(),
body=zone_model.to_json(), **kwargs)
return self.deserialize(resp, body, ZoneModel)
def patch_zone(self, id, zone_model, **kwargs):
resp, body = self.client.patch(self.zone_uri(id),
body=zone_model.to_json(), **kwargs)
return self.deserialize(resp, body, ZoneModel)
def delete_zone(self, id, **kwargs):
resp, body = self.client.delete(self.zone_uri(id), **kwargs)
return self.deserialize(resp, body, ZoneModel)
| 33.254237 | 72 | 0.69368 | 273 | 1,962 | 4.882784 | 0.358974 | 0.066017 | 0.052513 | 0.067517 | 0.385596 | 0.385596 | 0.35934 | 0.217554 | 0.099025 | 0.099025 | 0 | 0.008264 | 0.198267 | 1,962 | 58 | 73 | 33.827586 | 0.839161 | 0.279817 | 0 | 0.290323 | 0 | 0 | 0.011388 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.290323 | false | 0 | 0.064516 | 0.096774 | 0.645161 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
d4d48bfaae76f28897549b255d01e8ba270298c6 | 2,258 | py | Python | run.py | mcarans/hdx-scraper-unesco | 01185326de5f8f041618ff5ae538880ca2ee6ba1 | [
"MIT"
] | null | null | null | run.py | mcarans/hdx-scraper-unesco | 01185326de5f8f041618ff5ae538880ca2ee6ba1 | [
"MIT"
] | null | null | null | run.py | mcarans/hdx-scraper-unesco | 01185326de5f8f041618ff5ae538880ca2ee6ba1 | [
"MIT"
] | null | null | null | #!/usr/bin/python
# -*- coding: utf-8 -*-
"""
Top level script. Calls other functions that generate datasets that this script then creates in HDX.
"""
import logging
from os.path import join, expanduser
from timeit import default_timer
from hdx.hdx_configuration import Configuration
from hdx.utilities.downloader import Download
from hdx.utilities.path import temp_dir
from unesco import generate_dataset_and_showcase, get_countriesdata, get_endpoints_metadata
from hdx.facades.simple import facade
logger = logging.getLogger(__name__)
lookup = 'hdx-scraper-unesco'
def main():
"""Generate dataset and create it in HDX"""
base_url = Configuration.read()['base_url']
with temp_dir('UNESCO') as folder:
with Download(extra_params_yaml=join(expanduser('~'), '.extraparams.yml'), extra_params_lookup=lookup) as downloader:
endpoints = Configuration.read()['endpoints']
endpoints_metadata = get_endpoints_metadata(base_url, downloader, endpoints)
countriesdata = get_countriesdata(base_url, downloader)
logger.info('Number of datasets to upload: %d' % len(countriesdata))
for countrydata in countriesdata:
for dataset, showcase in generate_dataset_and_showcase(downloader, countrydata, endpoints_metadata, folder=folder, merge_resources=True, single_dataset=False): # TODO: fix folder
if dataset:
dataset.update_from_yaml()
start = default_timer()
dataset.create_in_hdx(remove_additional_resources=True, hxl_update=False)
print("total time = %d" % (default_timer() - start))
resources = dataset.get_resources()
resource_ids = [x['id'] for x in sorted(resources, key=lambda x: x['name'], reverse=False)]
dataset.reorder_resources(resource_ids, hxl_update=False)
showcase.create_in_hdx()
showcase.add_dataset(dataset)
if __name__ == '__main__':
facade(main, user_agent_config_yaml=join(expanduser('~'), '.useragents.yml'), user_agent_lookup=lookup, project_config_yaml=join('config', 'project_configuration.yml'))
| 42.603774 | 194 | 0.675819 | 260 | 2,258 | 5.619231 | 0.411538 | 0.013689 | 0.036961 | 0.035592 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.000574 | 0.228964 | 2,258 | 52 | 195 | 43.423077 | 0.838599 | 0.085917 | 0 | 0 | 1 | 0 | 0.081094 | 0.012213 | 0 | 0 | 0 | 0.019231 | 0 | 1 | 0.03125 | false | 0 | 0.25 | 0 | 0.28125 | 0.03125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |